Search results

  1. B

    Replacing PBS and moving old backups into it

    Install and configure the new server, then add the old server as a remote and pull the data over to the new server with a sync job (a rough command sketch follows after this result list).
  2. B

    New all flash Proxmox Ceph Installation

    You can use the (very high?) performance drives as an additional pool. You may mix them in with the other drives, but it is recommended to have drives of similar performance in each pool. In order to have more than one OSD per disk, you partition the disk and use each partition as its own OSD... (a partitioning sketch follows after this result list)
  3. B

    New all flash Proxmox Ceph Installation

    I don't think you will gain anything from having NVMe cache disks when you already have NVMe storage disks. Ceph has both tiered cache storage and the concept of using separate disks for the different parts of an OSD (DB+WAL+storage). In both of those scenarios I think you will...
  4. B

    Kernel BUG CPU Soft lockup. (VM/HOST freezes)

    I have the same issue with live migrations. Since PVE 7.2, live migrations hang my Linux VMs when migrating from an Epyc gen3 "Milan" node, regardless of the target node. I don't see it when migrating from an Epyc gen1. It never happened before 7.2 and kernel 5.15. If I go back to the 5.13...
  5. B

    Possible bug after upgrading to 7.2: VM freeze if backing up large disks

    I might have gotten something related to this as well, but on small disks. CPU stalls, qemu-agent involved. See https://forum.proxmox.com/threads/rcu-info-rcu_sched-self-detected-stall-on-cpu.109112/post-469315
  6. B

    rcu: INFO: rcu_sched self-detected stall on CPU

    I have also gotten this on some machines after the upgrade to PVE 7.2 and kernel 5.15.
    [155118.277548] INFO: rcu_sched self-detected stall on CPU
    [155118.277575] 0-...: (2 GPs behind) idle=6ef/1/0 softirq=1813405/1813406 fqs=0
    [155118.277590] (t=334307 jiffies g=1395437 c=1395436...
  7. B

    Proxmox VE 7.2 released!

    On a Debian 9 VM, 1 CPU core spiked sometime during the upgrade and didn't go down for almost 12 hours, until a reset. The average CPU graph for the last month looks like a hockey stick. Storage is Ceph RBD. Not much to go on, but here's the VM config, slightly redacted:
    agent: 1
    balloon: 1024
    bootdisk...
  8. B

    Proxmox VE 7.2 released!

    I have had a couple of VMs misbehaving after upgrading to PVE 7.2. The ones doing so were all live migrated from a 7.1 node (node A) to an upgraded and rebooted 7.2 node (node B), and later back to node A after it too had been upgraded and rebooted to 7.2. The symptoms were high CPU usage on at least...
  9. B

    Ceph pool size (is 2/1 really a bad idea?)

    I think your issue is that you have 4 MON nodes, which means that after 2 nodes are down your cluster is no longer quorate: 50% of the cluster is down, and the remaining 50% cannot be certain that it is the "surviving" part of the cluster rather than just one side of a split-brain scenario (the quorum arithmetic is spelled out after this result list). I...
  10. B

    High Volume VM Backup

    Short answer: Yes. You can have a primary PBS instance with high-performance drives but relatively few backups and a short retention per VM, and one or more secondary PBS instances with larger storage, more backups, and a longer retention. I would actually recommend having 2...
  11. B

    PBS 2.0.14-1 cannot sync from PBS 1.1.13-3

    Great! I already downgraded and fixed the syncing, but it's good to know that I can reverse it fairly soon.
  12. B

    PBS 2.0.14-1 cannot sync from PBS 1.1.13-3

    I get the following error for each VM: sync group vm/101 failed - missing field `protected`. This started happening after updating a secondary PBS server to the latest non-enterprise version, 2.0.14-1. The primary, which is the source of the sync, is on 1.1.13-3 enterprise. The primary will be...
  13. B

    Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    It seems you also need to restart certain PVE services in order to get the UI to show Ceph-related data after disallowing insecure reclaims (unless you reboot). I restarted pvedaemon and pvestatd and that seemed to be enough, but perhaps there are more? (The restart commands are sketched after this result list.)
  14. B

    Poor disk performance

    At this point I would think the problem is the storage, but I don't use ZFS so I cannot help you there.
  15. B

    Poor disk performance

    The SCSI controller should be set to "VirtIO SCSI" and the NIC to "VirtIO" (the corresponding qm commands are sketched after this result list). Changing the NIC is pretty easy, but changing the controller can be tricky since it will usually stop Windows from booting because the disk path changes. Before you start doing that, perhaps you could build a new...
  16. B

    Poor disk performance

    What is the hardware configuration of the VMs? And did you install the VirtIO drivers? Without them, Windows VMs perform very poorly in my experience, which would explain the poor I/O performance even on SSDs.
  17. B

    Backup job failed due to insufficient disk space

    I have also gotten this a few times, and the reason has been that some tasks create multi-gigabyte files in the "/var/log/proxmox-backup/tasks/" directory, not that the datastore has been exhausted (a quick way to check this is sketched after this result list). In my scenario it is because the datastore is remote and there were network issues causing...
  18. B

    Anyone installed tailscale.com (WireGuard "vpn" based) in the host hypervisor?

    Thanks for the write-up! I read about their technology a while back and it sounded very interesting; good to know it can be deployed in this fashion.
  19. B

    Linux Kernel 5.4 for Proxmox VE

    Just tested 5.4.65 and everything seems to work nicely with the X710-T2L.
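
Command sketches for some of the results above

For the PBS migration approach in result 1, a minimal sketch of the two steps on the new server. The remote name, hostname, datastore "store1", sync user, password, and fingerprint are all placeholders and need to be replaced:

    # register the old server as a remote on the new PBS host
    proxmox-backup-manager remote create old-pbs \
        --host old-pbs.example.com \
        --auth-id sync@pbs \
        --password '<password>' \
        --fingerprint '<sha256 fingerprint of the old server>'

    # pull everything from the old datastore into the local one
    proxmox-backup-manager sync-job create pull-old-pbs \
        --store store1 \
        --remote old-pbs \
        --remote-store store1 \
        --schedule hourly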
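
Result 2 mentions running more than one OSD per disk by partitioning. A rough sketch of that idea, assuming a hypothetical /dev/nvme0n1 split into two halves and using ceph-volume directly on the partitions:

    # split the drive into two equal GPT partitions (device is a placeholder)
    parted -s /dev/nvme0n1 mklabel gpt
    parted -s /dev/nvme0n1 mkpart osd0 0% 50%
    parted -s /dev/nvme0n1 mkpart osd1 50% 100%

    # create one bluestore OSD per partition
    ceph-volume lvm create --data /dev/nvme0n1p1
    ceph-volume lvm create --data /dev/nvme0n1p2

If I remember correctly, newer ceph-volume versions can also do this in one step with "ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1".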
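
The quorum point in result 9 is just majority arithmetic: a monitor quorum needs floor(N/2) + 1 members.

    4 MONs: quorum needs floor(4/2) + 1 = 3  ->  losing 2 MONs breaks quorum
    3 MONs: quorum needs floor(3/2) + 1 = 2  ->  can lose 1 MON
    5 MONs: quorum needs floor(5/2) + 1 = 3  ->  can lose 2 MONs

So 4 monitors tolerate no more failures than 3, which is why odd counts are usually recommended.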
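
The restarts mentioned in result 13 are the standard PVE services; on each node:

    systemctl restart pvedaemon pvestatd
    # if the web UI still shows stale Ceph data, restarting the proxy may help too
    systemctl restart pveproxy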
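
For the controller and NIC change in result 15, the CLI equivalent looks roughly like this. VM ID 101 and bridge vmbr0 are placeholders; the VirtIO drivers must already be installed inside Windows, and re-setting net0 without passing the old macaddr= will typically generate a new MAC address:

    # switch the virtual NIC to VirtIO
    qm set 101 --net0 virtio,bridge=vmbr0

    # switch the SCSI controller type to VirtIO SCSI
    qm set 101 --scsihw virtio-scsi-pci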
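
To check whether the situation in result 17 applies, look at the size of the PBS task log directory:

    # total size of the task logs
    du -sh /var/log/proxmox-backup/tasks/

    # list any individual task log larger than 1 GiB
    find /var/log/proxmox-backup/tasks/ -type f -size +1G -exec ls -lh {} \;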
