Search results

  1. Backup work only on local node

    Hi, I have a cluster with 4 nodes. When I set a backup on node1, it backs up only node1. When I go directly to the node2 GUI -> Datacenter -> Backups, I don't see the backup that was set on node1. Any suggestion / solution for that? Regards,
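
    A quick check (a sketch, assuming the job was created under Datacenter -> Backup): backup job definitions live on the cluster-wide pmxcfs, so the same file should be visible from every node:

      # cat /etc/pve/vzdump.cron   # shared cluster config; compare on node1 and node2
      # pvecm status               # confirm the node is joined and the cluster is quorate

    If the file differs between nodes, the problem is cluster sync rather than the backup job itself.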
  2. [SOLVED] Remove node from cluster

    Hi, I removed a node from the cluster before the node was shut down. Now, when I check from another node, I see: # pvecm status Votequorum information ---------------------- Expected votes: 5 Highest expected: 5 Total votes: 4 Quorum: 3 Flags: Quorate but I have only 4...
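
    A minimal sketch of the usual cleanup, assuming the removed node was named node5 (a hypothetical name) and will never rejoin under the same identity:

      # pvecm delnode node5   # drop the stale node from the cluster configuration
      # pvecm expected 4      # adjust the expected vote count to the remaining nodes
      # pvecm status          # Expected votes should now match Total votes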
  3. Backup hangup with Ceph/rbd

    Doesn't it take the containers down during the backup? Don't you need a solution without downtime on each backup?
  4. CEPH, Hanging Backups=>IO Waits=>Reboots (Including solutions)

    Wow, thanks for that! Any plan to include a neat fix in a Proxmox release?
  5. Linked Clone Network on LXC

    Hi, I want to create multiple LXC containers using linked clones. The problem is that if I set some network / IP settings inside one container, they apply to all the other linked clone containers. Any solution for that? Regards,
  6. LXC Backup Issues

    Hi, I have 2 issues with LXC container backups. 1. Sometimes a backup for some container shows as processing for days without completing, and I need to stop the task manually. 2. Some containers get this error: command 'mount -o ro,noload /dev/rbd5 /mnt/vzsnap0//' failed: exit code...
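
    For issue 2, a hedged first diagnostic (not a fix): the kernel usually logs why an rbd-backed snapshot refuses to mount, so right after the failure check:

      # dmesg | tail -n 20   # look for ext4/rbd errors explaining the failed mount
      # rbd showmapped       # confirm /dev/rbd5 is still mapped to the expected image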
  7. LXC + Ceph Quota

    Yes, CentOS 7.x. I have some suspicion it can be related to containers that were converted from Proxmox 3.x and OpenVZ to LXC. Please let me know if you have any suggestions.
  8. Proxmox Cluster Broken almost every day

    I don't see changes on that issue — any news, please? https://bugzilla.proxmox.com/show_bug.cgi?id=1532
  9. LXC + Ceph Quota

    It's set in fstab and it works for me on local-lvm; it's just not working for Ceph storage. Any ideas / suggestions?
  10. LXC + Ceph Quota

    Hi, sorry, what do you mean? What should I do to enable the quota? Regards,
  11. LXC + Ceph Quota

    Hi, I have LXC containers working on Ceph shared storage with the quota=1 option for the disk. Now I try to run: # quotacheck -vguma quotacheck: Scanning /dev/rbd1 [/] quotacheck: error (1) while opening /dev/rbd1 How can I add quota support? LXC container OS: CentOS 7.x /etc/fstab: none...
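
    One thing worth checking (a sketch, not a confirmed fix): quotacheck only works on a filesystem mounted with quota options, so from inside the container verify the actual mount flags of /:

      # grep ' / ' /proc/mounts   # look for usrquota/grpquota in the mount options

    If the options are missing, the quota=1 setting never reached the mount for this storage type, which would fit quota working on local-lvm but not on rbd.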
  12. Proxmox roadmap Suggestions

    a. Incremental backups b. Reducing disk volumes c. GUI support for adding an external disk (NFS / network share) d. Storage QoS. And maybe the community can suggest additional features. The problem is that if you try to migrate from local disk to NAS or Ceph storage, it just doesn't work and not...
  13. Proxmox roadmap Suggestions

    Hello to the Proxmox team, first, thanks from the community for your great job! Some ideas for the next releases, features / feedback: 1. More options for LXC a. An option to select the target storage for migration (both CLI and GUI) b. An option to do live migration and not only restart mode (CRIU?) c. Better CPU...
  14. Detect Container with High CPU Load

    So, what is the solution to know which LXC container causes the high load? Thanks!
  15. Detect Container with High CPU Load

    Hello to the community, when I have a very high load on one LXC container, all the containers show the same CPU load. So, how can I see from the node a list of containers with the real CPU status of each container, so I can turn off the container causing the high CPU or just handle it properly? Thanks!
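
    A rough way to measure this from the node (a sketch, assuming cgroup v1 as used in that era of PVE; paths differ under cgroup v2): each container's cumulative CPU time in nanoseconds is exposed per cgroup, so sampling it twice shows which one actually burns CPU:

      for ct in /sys/fs/cgroup/cpuacct/lxc/*/cpuacct.usage; do
        echo "$ct $(cat "$ct")"   # cumulative CPU nanoseconds per container
      done
      # wait a few seconds, run the loop again, and compare the per-container deltas

    The container whose counter grows fastest between the two samples is the one causing the load.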
  16. Backup fails with Logical Volume already exists in volume group

    Found the solution: 1. Show the volume group by using the command: lvdisplay 2. Remove the LV with: lvremove /dev/pve/snap_vm-326-disk-1_vzdump Now the new snapshots work :)
  17. Copy Ceph Disk on Ceph Storage

    Hi, I have Ceph storage. Now I want to copy my VM disk from disk-1 to disk-3 using: rbd -p Ceph1 -m 10.10.10.1 -n client.admin --keyring /etc/pve/priv/ceph/Ceph1_vm.keyring --auth_supported cephx cp vm-110-disk-1 vm-110-disk-3 It shows me the error: rbd: error opening default pool 'rbd' Ensure that...
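
    The error hints that the destination image spec fell back to the default pool 'rbd'. A sketch of the same copy with both images fully qualified as pool/image, keeping the original connection options:

      rbd -m 10.10.10.1 -n client.admin \
          --keyring /etc/pve/priv/ceph/Ceph1_vm.keyring \
          cp Ceph1/vm-110-disk-1 Ceph1/vm-110-disk-3

    With the pool/image syntax, neither side depends on the -p default anymore.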
  18. Backup fails with Logical Volume already exists in volume group

    How can I remove snap_vm-326-disk-1_vzdump? lvremove snap_vm-326-disk-1_vzdump is not working :( root@server215:/# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert data pve twi-aotz-- 716.38g...
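
    The likely reason the command fails (a guess based on the quoted lvs output, which shows the LV in the pve volume group): lvremove wants a VG-qualified path, not a bare LV name:

      # lvremove /dev/pve/snap_vm-326-disk-1_vzdump   # same command as in result 16 above
      # lvs | grep vzdump                             # confirm no leftover snapshot LVs remain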
  19. Optimizing proxmox

    If possible, Ceph over 10Gb / 100Gb networks is better. From my personal experience, ZFS was less successful than Ceph or even HW RAID.
  20. Backup fails with Logical Volume already exists in volume group

    Same here, any solution for that without backing up the data -> removing the container -> re-creating everything -> restoring the data?
