I'm running a 5-node cluster, version 6.0, and had to remove 2 nodes.
Everything went fine until I wanted to delete the removed nodes from the GUI.
By accident I deleted the directory of a still-existing node, which was at that time running an LXC container and a VM.
Both container and VM...
Fresh PVE 6 installation on ZFS. Everything was fine until I created:
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/encrypted
After the next reboot, the system hangs at:
error: unknown filesystem.
Entering rescue mode...
After removing the encrypted...
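For anyone hitting the same wall: my understanding (an assumption, not from the error message itself) is that creating the first encrypted dataset flips the pool-wide `feature@encryption` flag to "active", and the legacy GRUB used on such installs cannot read a pool in that state. A quick way to check, sketched here as a diagnostic only:

```shell
# Hedged diagnostic sketch: if feature@encryption shows "active" on the
# pool GRUB has to read, that would explain "unknown filesystem".
zpool get feature@encryption rpool
```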
Today I did some HA testing on a 4-node cluster, version 5.4.
I configured an HA group including all nodes and added two VMs to the group.
The two VMs were running on node 1 and node 2.
I also set shutdown_policy=failover to initiate the failover by simply rebooting a node.
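For reference, the shutdown policy is a cluster-wide option in `/etc/pve/datacenter.cfg`; a minimal sketch of the relevant line (nothing else in the file is implied):

```
# /etc/pve/datacenter.cfg - hedged sketch, only the ha line is relevant here
ha: shutdown_policy=failover
```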
I configured the proxy settings in the GUI, but /etc/apt/apt.conf.d/76pveproxy was not created.
Even after a reboot it's still missing.
I know from other installations that it's there.
How can I force PVE to create it?
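As a stopgap, the file can be created by hand; it is plain APT configuration. A sketch, where the proxy host and port are placeholders, not values from this setup:

```
# /etc/apt/apt.conf.d/76pveproxy - hedged sketch; proxy host/port are examples
Acquire::http::Proxy "http://proxy.example.com:3128";
```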
Possibly I have to deploy a few new Proxmox clusters. Too many to install them all manually ;-)
There are many tools out there that do their job very well: dd, Clonezilla...
Just wondering which parts I have to change on each clone afterwards to make them unique.
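A rough sketch of the usual per-clone changes on a Debian-based system like PVE (hostnames are examples, and this is surely not exhaustive; the network config in `/etc/network/interfaces` also needs a unique IP):

```shell
# Hedged per-clone sketch; "pve-template"/"pve-clone01" are example names.
hostnamectl set-hostname pve-clone01          # unique hostname
sed -i 's/pve-template/pve-clone01/g' /etc/hosts

# regenerate the systemd machine-id so clones don't share one
rm -f /etc/machine-id /var/lib/dbus/machine-id
systemd-machine-id-setup

# regenerate SSH host keys so each clone has its own identity
rm -f /etc/ssh/ssh_host_*
dpkg-reconfigure openssh-server
```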
I have to size a PVE/Ceph environment for two data centers.
We need a new home for roughly 300 small VMs (4 cores, 4 GB memory, 100-200 GB storage).
I estimate half a year until all 300 VMs are migrated, and I calculated 100% growth over the next three years.
Storage bandwidth should not be...
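My back-of-envelope math so far, assuming a 150 GB average per VM (midpoint of the stated range) and 3-way Ceph replication (both assumptions, the replica count isn't decided yet):

```shell
# Hedged sizing sketch; average disk size and replica count are assumptions.
vms_future=$(( 300 * 2 ))                   # 100% growth over three years
logical_tb=$(( vms_future * 150 / 1000 ))   # 150 GB average per VM
raw_tb=$(( logical_tb * 3 ))                # 3-way Ceph replication
echo "${vms_future} VMs, ${logical_tb} TB logical, ${raw_tb} TB raw"
```

That lands at 600 VMs and roughly 90 TB of logical data, so around 270 TB raw per replicated pool under these assumptions.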
I just want to add the 15th SCSI drive to a VM. I'm aware of the limitation of 14 drives per SCSI bus; therefore I have chosen virtio-scsi-single in the controller option to get a dedicated controller per drive. But the GUI still doesn't allow me to enter values higher than 13 for the...
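In case the cap is only in the GUI, my next idea is to try the CLI; a sketch, where VMID 100 and the storage name/size are examples:

```shell
# Hedged sketch: attach a new disk as scsi14 via qm, bypassing the GUI form.
# VMID 100, storage "local-lvm" and the 32 GB size are placeholders.
qm set 100 --scsihw virtio-scsi-single --scsi14 local-lvm:32
```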
I migrated a Ceph node to new hardware. After moving the OSDs, they won't come up again. All PGs are unknown and all OSDs are down. The GUI shows them as filestore, but they are definitely bluestore.
The old server was originally installed with PVE 4.4/hammer and migrated from time to...
I just tried to get Ceph's Zabbix module running.
I followed the ceph docs at http://docs.ceph.com/docs/master/mgr/zabbix/
Communication seems to be fine, but no data is sent.
Got this in the mgr log:
2017-11-01 19:07:46.614999 7f7415d00700 20 mgr[zabbix] Waking up for new iteration...
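For completeness, the setup I did follows the linked docs roughly like this (host and identifier are examples from my notes, not guaranteed to match anyone else's setup); `ceph zabbix send` is handy to force one push instead of waiting for the interval:

```shell
# Hedged sketch following the Ceph mgr Zabbix module docs; names are examples.
ceph mgr module enable zabbix
ceph zabbix config-set zabbix_host zabbix.example.com
ceph zabbix config-set identifier ceph-cluster
ceph zabbix send    # trigger one send cycle by hand for testing
```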
This time I really need your help :-(
first my setup:
3 nodes (pve0,pve1,pve2)
2 pools (both of size 2/1)
All nodes are running the latest version, 5.0/12.2.0.
I just wanted to migrate all OSDs from filestore to bluestore, so I removed all OSDs of node pve2 and recreated them...
For a few days now I've been planning a new single PVE server for home use.
Here is my strategy:
I don't care much about availability or performance, so a single host is OK for me.
I care about flexibility; that's why I want Ceph for storage and...
I have some issues with my first Ceph deployment.
I have just 3 servers available, which have to manage everything (VMs, monitors and OSDs).
All 3 servers have the latest community packages installed.
Here is what I tested so far:
root@pve1:~# ceph -s