Search results

  1. Deleted ceph on node, stupid!

    Is that the only file that was deleted on the other three nodes (node1, node2 and node3)?
  2. Deleted ceph on node, stupid!

    Dear Proxmox forum readers, I did something stupid. I have a network running 4 nodes with Ceph. On node4 I wanted to remove Ceph with 'pveceph purge', but I did not know it would remove ceph.conf on all nodes. Yes, I know this is very STUPID, but maybe someone can help this fool. If I go to...
  3. Proxmox VE 6.0 released!

    We need to update xx nodes, so that will take a long time (currently running Debian, Proxmox 5.4, Corosync 2 with Ceph). I was thinking of updating Corosync to v3 on all nodes first (at the same time); will this keep everything running? Then start the Proxmox update per node, incl. Ceph...
  4. Container backup problems

    pct unlock doesn't work. I can reset the node/server and then it works again, but this is not a good solution. I found a workaround for the container that will not start again, and that is to do a forced rbd unmap on the container (sketched after this list): rbd unmap -o force /dev/rbd/ceph_ssd/vm-XXX-disk-X
  5. Container backup problems

    Hereby:
    arch: amd64
    cores: 12
    hostname: server01
    lock: backup
    memory: 10240
    nameserver: 213.132.xx.xx 213.132.xx.xx
    net0: name=eth0,bridge=vmbr1,gw=213.132.xx.x,hwaddr=42:2F:D6:5A:97:D4,ip=213.132.xx.xx/24,type=veth
    ostype: centos
    parent: vzdump
    rootfs: ceph_ssd:vm-116-disk-1,size=140G...
  6. Container backup problems

    I am a big fan of Proxmox, have several clusters running, and have gained a lot of experience with it in the meantime. There is only one thing that is not working properly, and that is the backups/vzdumps to local storage of my containers on one of my clusters. If I make a backup of the VMs, no problem at all! If I...
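
The forced-unmap workaround mentioned in result 4 roughly amounts to the sequence below. This is only a minimal sketch: the pool name (ceph_ssd) and the vm-XXX-disk-X placeholders are taken verbatim from the snippet, and the follow-up pct unlock/start steps are assumptions about how the container is brought back afterwards, not something stated in the result itself.

    # Force-unmap the stuck RBD device backing the container's disk
    # (placeholders kept exactly as they appear in the snippet)
    rbd unmap -o force /dev/rbd/ceph_ssd/vm-XXX-disk-X
    # Assumed follow-up: clear the leftover backup lock and start the container again
    pct unlock XXX
    pct start XXX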