Search results

  1. Proxmox roadmap Suggestions

    Hello to the Proxmox team, first, thanks from the community for your great job! Some ideas/feedback for the next releases: 1. more options for LXC a. option to select the target storage for migration (both CLI and GUI) b. option to do live migration and not only restart mode (CRIU?) c. better CPU...
  2. Detect Container with High CPU Load

    So, what is the solution to find out which LXC container causes the high load? Thanks!
  3. Detect Container with High CPU Load

    Hello to the community, when there is very high load on one LXC container, all the containers show the same CPU load. How can I see from the node the real CPU usage of each container, so I can turn off the container causing the high CPU or just handle it properly? Thanks!
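
    One hedged way to narrow this down from the node itself, sketched with standard systemd/Proxmox tools that are not mentioned in the thread; the cgroup v1 accounting path is an assumption and may differ on newer installations.

        # On the Proxmox node: per-cgroup CPU usage; LXC containers show up
        # under the lxc/ slice, named by their VMID.
        systemd-cgtop --order=cpu

        # Or dump cumulative CPU time per container from cgroup accounting
        # (cgroup v1 path shown; adjust for cgroup v2 if needed).
        for id in $(pct list | awk 'NR>1 {print $1}'); do
            echo -n "CT $id: "
            cat "/sys/fs/cgroup/cpuacct/lxc/$id/cpuacct.usage" 2>/dev/null || echo "no cgroup found"
        done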
  4. Backup fails with Logical Volume already exists in volume group

    Found the solution: 1. list the logical volumes in the volume group with: lvdisplay 2. remove the leftover LV with: lvremove /dev/pve/snap_vm-326-disk-1_vzdump Now the new snapshots work :)
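
    A minimal sketch of the same cleanup in a more general form, assuming the default pve volume group and vzdump's snap_..._vzdump naming from the post; check what lvs reports before removing anything.

        # Find leftover vzdump snapshot LVs in the "pve" VG and remove them
        # so the next backup can create its snapshot again.
        lvs --noheadings -o lv_name pve | awk '{print $1}' | grep '_vzdump$' |
        while read -r lv; do
            echo "Removing stale snapshot LV: $lv"
            lvremove -y "/dev/pve/$lv"
        done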
  5. Copy Ceph Disk on Ceph Storage

    Hi, I have Ceph storage, and now I want to copy my VM disk from disk-1 to disk-3 using: rbd -p Ceph1 -m 10.10.10.1 -n client.admin --keyring /etc/pve/priv/ceph/Ceph1_vm.keyring --auth_supported cephx cp vm-110-disk-1 vm-110-disk-3 It shows me the error: rbd: error opening default pool 'rbd' Ensure that...
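
    A possible fix, sketched on the assumption that the error comes from the destination image falling back to the default rbd pool: name the pool explicitly on both the source and the destination image (monitor address, client name, keyring path and image names are taken from the post above).

        # Copy an RBD image within the Ceph1 pool, with an explicit
        # pool/image spec on both sides of the cp.
        rbd -m 10.10.10.1 -n client.admin \
            --keyring /etc/pve/priv/ceph/Ceph1_vm.keyring \
            cp Ceph1/vm-110-disk-1 Ceph1/vm-110-disk-3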
  6. Backup fails with Logical Volume already exists in volume group

    How can I remove snap_vm-326-disk-1_vzdump? lvremove snap_vm-326-disk-1_vzdump is not working :( root@server215:/# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert data pve twi-aotz-- 716.38g...
  7. Optimizing proxmox

    If possible, Ceph over 10Gb/100Gb networks is better. From my personal experience, ZFS was less successful than Ceph or even HW RAID.
  8. Backup fails with Logical Volume already exists in volume group

    Same here, is there any solution for that without backing up the data -> removing the container -> re-creating everything -> restoring the data?
  9. Proxmox Cluster Broken almost every day

    Hi Tom, thanks for that note! Before reporting a bug, I'm checking with the community whether it's something that happens only to me. Just to inform everyone, I updated to the latest version and so far it's stable, so it's OK. I sent another issue to the Bugzilla; it's without any status changes from...
  10. Proxmox Cluster Broken almost every day

    Wow, thanks for that update! Any news so far? Updates?
  11. Proxmox Cluster Broken almost every day

    1. Thanks for that, can you please provide any additional info? Why does it affect this node only? Why does restarting pve-cluster.service solve it if it's a network issue? Why is the Ceph network on the same switches not affected by that issue? 2. What should we change? Thanks again!
  12. Proxmox Cluster Broken almost every day

    Sure, here: Virtual Environment 5.1-46 Node 'server202' 2018-04-08 Apr 08 07:55:54 server202 kernel: read cpu 6 ibrs val 0 Apr 08 07:55:54 server202 kernel: read cpu 7 ibrs val 0 Apr 08 07:55:54 server202 kernel: read cpu 8 ibrs val 0 Apr 08 07:55:54 server202 kernel: read cpu 9 ibrs val 0 Apr...
  13. Proxmox Cluster Broken almost every day

    Hi, I have a cluster with a 2x10Gbps network via bond, using LACP, but almost every day, and sometimes even a few times a day, I connect to the GUI and see, in the server view, all servers and nodes with a question mark (attaching a screenshot). After I logged in to server202 and ran: systemctl restart...
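
    A hedged checklist of node-side commands that usually help narrow down this kind of quorum/cluster-filesystem symptom (standard Proxmox and corosync tools, not taken from the thread):

        # State of the cluster filesystem and corosync on the affected node
        systemctl status pve-cluster corosync

        # Quorum and membership as corosync currently sees them
        pvecm status

        # Look for totem retransmits, link flaps or token timeouts around
        # the time the GUI started showing question marks
        journalctl -u corosync -u pve-cluster --since "1 hour ago"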
  14. LXC and lsblk command

    Hi, I am running lsblk from one of the containers and I see it shows the list of disks for all the containers under the physical node. How can I prevent the container owner from running that command or viewing this info? Regards,
  15. Internal Network for software defined storage

    Hi, we are using Proxmox VE with Ceph SDS on an internal network for HA, and now we are debating between the following options: 1. Supermicro Microblade with 2x10Gbps switches and 28 nodes in 6U (we will wait for nodes with NVMe; in my opinion they should arrive sometime this year) 2. Supermicro Superblade...
  16. fstrim for Windows Guests and Ceph Storage

    Sure, for the 2016 guest I added a dummy drive, installed the driver and removed the dummy drive. After that I moved the disks to virtio-scsi and tried to boot.
  17. fstrim for Windows Guests and Ceph Storage

    Thanks Klaus! I did, and for Win2016 I got a blue screen after moving to virtio-scsi. Also, Win2012 is not detecting the VirtIO SCSI driver from the latest and stable ISOs. Suggestions?
  18. fstrim for Windows Guests and Ceph Storage

    Hi, thanks for that response! As you can see in the screenshot, discard is disabled; Ceph + VirtIO + OS type: Win10/2016. Any suggestion? Should the OS be powered off for that?
  19. fstrim for Windows Guests and Ceph Storage

    Hello, how can I run an fstrim-style solution for Windows KVM guests with Ceph storage? Ideally it should be possible to run it on a regular schedule. Thanks!
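
    A minimal sketch of the host-side part, assuming the disk sits on an RBD storage named Ceph1, is attached via VirtIO SCSI, and the VM ID is 110 (all three are illustrative): with discard=on on the disk, the TRIM/UNMAP requests the Windows guest issues (for example from its scheduled Optimize Drives run) are passed down to Ceph.

        # Attach the disk through VirtIO SCSI with discard enabled so
        # TRIM/UNMAP from the guest reaches the RBD image; apply while the
        # VM is powered off, then boot and run a retrim inside Windows.
        qm set 110 --scsihw virtio-scsi-pci --scsi0 Ceph1:vm-110-disk-1,discard=on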
  20. Rescan LVM-Thin Volume

    Thanks, I will. Wondering about the LXC side: why doesn't a Proxmox daemon run that every day? https://forum.proxmox.com/threads/lxc-lvm-thin-discard-fstrim.34393/#post-168569
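
    A hedged sketch of scheduling that yourself with the pct fstrim subcommand; the daily cron script is an assumption, not something Proxmox ships by default.

        #!/bin/sh
        # Illustrative /etc/cron.daily/ct-fstrim: trim every running
        # container so freed blocks go back to the thin pool / Ceph.
        for id in $(pct list | awk '$2 == "running" {print $1}'); do
            pct fstrim "$id"
        done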