Search results

  1. qm set --sshkey

    So can I provide additional keys as new lines in the GUI here? And if --sshkey accepts a file, does it also accept several lines with several keys here? (see the SSH key sketch after this list)
  2. 3 node Ceph-cluster upgrade from 7.0-13 to current - Ceph?

    Hi, I am running a 3-node Proxmox/Ceph cluster which is scheduled for an upgrade to the current PVE version, which as of today is 7.3. The plan was to perform a "rolling" upgrade, starting with upgrading one node at first, and then after a week, also upgrade the remaining two nodes. When...
  3. [SOLVED] (safe/best) way to temporarily operate a cluster with only one node?

    Did not know about these qdevices before. This sounds like the way to go. Thanks a lot!
  4. [SOLVED] (safe/best) way to temporarily operate a cluster with only one node?

    Hi @Dunuin , thanks for your help! Sounds very interesting… So, like putting two low-power Intel NUCs or other (more) energy-efficient devices running PVE in the rack and adding them to the cluster, for quorum only? Cool! :) Having five nodes again would allow me to temporarily shut down the...
  5. [SOLVED] (safe/best) way to temporarily operate a cluster with only one node?

    Hi! I currently operate a production PVE7 cluster of three nodes for an internal development team. Due to energy-saving demands my boss wants to run all VMs/CTs on one cluster node and temporarily shut down the two then-empty nodes. As each of the nodes has plenty of RAM/resources and we...
  6. Win10 VM gets stuck rebooting/shutting down when installing Windows updates

    My Windows 10 VMs suffer from the exact same issue - quite shocked to see such a long thread over there...
  7. Slow VM on external CEPH Cluster

    Our IO was degraded due to two bad disks affecting the cluster. Swapped disks. Fine.
  8. PBS datastore 100% full - manual GC/prune does not work

    Okay, yeah, it definitely makes a lot of sense to have vanishing off by default. So I just thought about mirroring the two PBS instances, which one would achieve with the vanish option. But normally I can achieve different retention schedules if needed by leaving vanishing off and implementing a different...
  9. PBS datastore 100% full - manual GC/prune does not work

    Hi Matthias and thanks for your quick help! As the first PBS still looks fine, I would delete the .chunks folder's content, then set the vanish option on the sync job (which I had totally missed) and finally do a resync or wait for the next scheduled run (see the PBS sketch after this list). So if the vanish option is set I would...
  10. PBS datastore 100% full - manual GC/prune does not work

    Hi, I have two PBS servers in my setup. The primary one is OK and working as expected. The second PBS has a sync job to basically clone the content from the first PBS. That's all it does. Well, apparently I didn't understand that the datastore on the second PBS needed its own GC/prune job config -...
  11. Slow VM on external CEPH Cluster

    Hi and sorry for adding in here with a question and no answer. Did you finally find a solution to this issue? Or did you stay on iSCSI? I am currently experiencing the exact same behavior on a somewhat similar setup. Windows disk usage is nearly always at around 100% on an all-flash cluster...
  12. ceph 3 node cluster Windows guests at 100% disk i/o

    Thanks a lot @aaron, this helps! I will 1st) set the VM to "VirtIO SCSI single" and then activate "IO thread" on the corresponding SCSI device for this VM. Besides, I'll try to set the cache from "Default" to "Write back" on this disk, then 2nd) look for abnormalities in ceph tell osd.* bench (see the qm set sketch after this list). Is...
  13. ceph 3 node cluster Windows guests at 100% disk i/o

    Looking at the "wa" column in top I'd say there is no I/O wait on the host. Is top okay for checking for IO delay on the PVE host? Or what would you suggest? iostat (see the iostat sketch after this list)? There is currently no backup running. The guest mostly sits at several hundred or even thousands of milliseconds of latency for a few...
  14. ceph 3 node cluster Windows guests at 100% disk i/o

    Sure @aaron, here it is:

    root@xxx-xxx-xxx:~# qm config 111
    agent: 1
    balloon: 8192
    bios: ovmf
    boot: order=scsi0
    cores: 8
    description: ## ...
    efidisk0: ceph-ssd:vm-111-disk-0,efitype=4m,pre-enrolled-keys=1,size=528K
    hostpci0: 0000:e1:00,pcie=1,x-vga=1
    machine: pc-q35-6.0
    memory: 65536
    name...
  15. ceph 3 node cluster Windows guests at 100% disk i/o

    Hi, I am currently operating a 3-node PVE 7.0 & Ceph cluster (based on 10*NVMe / 10*SATA SSDs in each node). The Ceph network is a dedicated 100 Gb Ethernet link. In Windows guests the disk performance is maxing out at 100% disk usage and the latency is at multiple hundred up to several thousand...
  16. VM doesn't start Proxmox 6 - timeout waiting on systemd

    Hi, I am totally new to this "timeout waiting on systemd" thing. Today I made a snapshot and then a manual backup of a Windows VM (ID 229, with PCIe passthrough) to a PBS server and the backup failed. After that the machine won't start again with the error...
  17. Remote Spice access *without* using web manager

    Try adding SkipCertificateCheck = $true to the params.
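
For the SSH key question in result 1, a minimal sketch, assuming the cloud-init option is spelled --sshkeys on current qm versions and that it reads a local file containing one public key per line; the VMID and the file path are placeholders:

    # hypothetical file holding several OpenSSH public keys, one per line
    cat /root/extra-keys.pub
    # ssh-ed25519 AAAA... admin@example
    # ssh-rsa AAAA... backup@example

    # hand the whole file to the guest's cloud-init configuration
    qm set 100 --sshkeys /root/extra-keys.pub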
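
For the PBS sync/GC discussion in results 8-10, a rough sketch for the second PBS, assuming the datastore is called store2; the sync job ID s-store2-pull is a placeholder, and --remove-vanished is the CLI name assumed for the "remove vanished" option:

    # find the sync job's real ID on the second PBS
    proxmox-backup-manager sync-job list

    # let the sync job also remove snapshots that vanished on the source
    proxmox-backup-manager sync-job update s-store2-pull --remove-vanished true

    # kick off garbage collection by hand on the second datastore
    proxmox-backup-manager garbage-collection start store2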
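
For the steps in result 12, a minimal qm set sketch against the VM from result 14 (VMID 111 and the ceph-ssd storage come from that config; the volume name vm-111-disk-1 is a placeholder for the actual scsi0 volume):

    # one virtio-scsi controller per disk, required for per-disk IO threads
    qm set 111 --scsihw virtio-scsi-single

    # re-specify the existing disk with iothread and writeback cache enabled
    qm set 111 --scsi0 ceph-ssd:vm-111-disk-1,iothread=1,cache=writeback

    # benchmark every OSD to spot slow outliers
    ceph tell osd.* bench

The scsihw and disk changes only apply after the guest has been shut down and started again; a reboot from inside the guest is not enough.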
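
For the I/O wait question in result 13, a small iostat sketch for watching per-device latency on the PVE host, using the sysstat package (the one-second interval is arbitrary):

    # install sysstat if iostat is missing, then watch extended per-device stats every second
    apt install sysstat
    iostat -x 1    # high r_await/w_await or a saturated %util would point at the host's disks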
