OK, I could work around the situation by creating the VLAN interface in the other network, 192.168.1.0/24.
I removed the 10.0.5.0/24 entry from ceph.conf.
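For reference, a sketch of what the relevant ceph.conf entry looks like after the change, assuming the entry in question is public_network (section placement and the other options depend on your setup):

[global]
     public_network = 192.168.1.0/24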
Thanks!
Hello,
I'm in the same situation; the CLI command gives:
root@proxmox1:~# pveceph osd create /dev/nvme0n1
Error: any valid prefix is expected rather than "192.168.1.10/24, 10.0.5.0/24".
command '/sbin/ip address show to '192.168.1.10/24, 10.0.5.0/24' up' failed: exit code 1
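If I read the error right, pveceph passes the whole value to a single '/sbin/ip address show to ...' call, which only accepts one prefix. So my guess (an assumption based only on the error text) is that ceph.conf contains something along these lines:

public_network = 192.168.1.10/24, 10.0.5.0/24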
Any other hint?
Hello,
On the Proxmox server, I can see the user role PVESDNAdmin with the privileges SDN.Allocate and SDN.Audit.
In the docs, the role is not referenced: https://pve.proxmox.com/wiki/User_Management
What are these two SDN privileges?
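In case it helps others, a hedged example of how such a role could be assigned from the CLI (the user name and the ACL path /sdn are placeholders of mine; I have not verified which path the SDN permission checks actually use):

pveum aclmod /sdn -user john@pve -role PVESDNAdmin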
Thanks!
Hi Tim,
I'm really sorry I did not find this part of the docs; I searched for "VM State storage" and got no results, but searching for 'vmstatestorage' does find it.
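As far as I understand it now, this option sets the default storage used for the VM state file (the saved RAM) when taking a snapshot with RAM or hibernating the VM. For reference, a small sketch of setting it from the CLI (VM ID 100 and storage name 'local-zfs' are just placeholders):

qm set 100 --vmstatestorage local-zfs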
Thank you for the reply!
Best Regards,
Thomas.
Hello,
I checked the VM Options, and at the bottom I saw "VM State storage".
What is it used for? I could not find any documentation about it.
Thanks!
Regards,
Thomas.
Hi!
I would like to restart all Ceph daemons after a config change.
From the Ceph docs:
'sudo /etc/init.d/ceph -a stop', but there is no ceph init script here
'sudo service ceph -a stop', but there is no ceph service defined
So the remaining option is systemctl?
Has anyone already done this?
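For what it's worth, a sketch of what I plan to try, assuming the usual systemd units shipped with the Proxmox Ceph packages (restarting one daemon type at a time, node by node, rather than everything at once):

# per daemon type on one node
systemctl restart ceph-mon.target
systemctl restart ceph-mgr.target
systemctl restart ceph-osd.target
systemctl restart ceph-mds.target
# or everything Ceph-related on that node
systemctl restart ceph.target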
Regards,
Thomas.
Hi,
While importing a CephFS keyring for a user, I overwrote client.admin with fewer permissions (by mistake!):
[client.admin] <<<<< here should have been client.foo
key = xxxxx
caps mds = "allow rw path=/nas/nas"
caps mon = "allow r"
caps osd = "allow...
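In case someone else does the same, a sketch of the recovery approach I found: authenticate with the monitor's own keyring (since client.admin no longer has enough caps) and reset the admin caps. The mon ID 'proxmox1' in the path is a placeholder, and the exact default admin caps should be double-checked against a healthy cluster:

ceph -n mon. --keyring /var/lib/ceph/mon/ceph-proxmox1/keyring \
  auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'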
I found that rolling back a container snapshot fails and cannot be completed.
Reproduction scenario:
- deploy a Debian template on Proxmox with Ceph storage
- snapshot the CT
- touch toto in the CT
- shut down the CT
- revert (rollback): rbd snapshot vm-103-disk-0 to 'test' error: Rolling back to snapshot: 0%...
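For completeness, a sketch of the CLI equivalents (the snapshot name 'test' comes from the error above; the pool name 'rbd' is an assumption, use whatever your storage points to):

pct rollback 103 test
# or directly on the Ceph side
rbd snap rollback rbd/vm-103-disk-0@test

As far as I know, RBD snapshot rollback is also quite slow, so the 0% may simply sit there for a while before moving.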
I could make the node join the cluster by having the same sources.list on all nodes, as some Debian repositories I was using on the nodes did not contain the same package versions.
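For the record, a hedged example of the kind of entries I aligned across the nodes (the Debian codename here is only an example, adjust it to your release):

deb http://ftp.debian.org/debian bullseye main contrib
deb http://security.debian.org/debian-security bullseye-security main contrib
deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription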
Thanks!
With the latest template from the Ceph repository it does not work, but after locating the template with updatedb and locate and then using that one, it works perfectly.
There are two separate clusters here: the Ceph cluster and the Proxmox cluster.
I could resolve my issue by performing the following procedure:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html, paragraph 6.5.1 Separate A Node Without Reinstalling.
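From memory, the commands in that section look roughly like this; please follow the linked guide rather than this sketch, since it removes the corosync configuration from the node:

systemctl stop pve-cluster corosync
pmxcfs -l
rm /etc/pve/corosync.conf
rm -r /etc/corosync/*
killall pmxcfs
systemctl start pve-cluster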
Then I had to remove some other files in the node folder on the...
Hi Bisser, I ended up in the same situation as you, but I wonder if recreating the cluster means losing the Ceph cluster as well?
Or if I set up the OSDs the same way, will they keep the data? Or can I import the existing Ceph cluster?
OK, I used the ceph command to rename my pool: ceph osd pool rename redpool hddpool
However, storage.cfg was not updated (obviously, as that is a Proxmox thing).
Is there a Proxmox command to rename a pool?
My source info is https://pve.proxmox.com/pve-docs/chapter-pveceph.html
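In the meantime I just edited /etc/pve/storage.cfg by hand; a sketch of what the RBD entry looks like after pointing it at the renamed pool (the storage ID 'mystorage' and the content types are placeholders from my setup and may differ on yours):

rbd: mystorage
     content images,rootdir
     krbd 0
     pool hddpool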
Also, if...