Search results

  1. Cpu cores and threads.

    I would double check that your VM boot disks are indeed on that SSD. Ceph is software defined storage across a cluster of servers. I think it requires 3 servers minimum for testing / proof of concept, 4 minimum for low impact stuff like homelabs, 5+ for production clusters hosting commercial...
  2. Cpu cores and threads.

    Slow VM boot speed is primarily driven by the disk performance backing the virtual boot disk. If your VM boot disks are all backed by those spinning drives, boot speeds of the VM's will be slow. I always back all VM boot disks with a ceph pool that has an "ssd only device class" rule. VM's boot...
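The "ssd only device class" rule mentioned above can be sketched with standard Ceph commands. A minimal, hedged example — the rule name `ssd-only` and pool name `vm-ssd` are illustrative, not taken from the post:

```shell
# Create a replicated CRUSH rule restricted to OSDs with the "ssd" device class
# (rule name "ssd-only", root "default", failure domain "host" are assumptions)
ceph osd crush rule create-replicated ssd-only default host ssd

# Create a pool that uses the SSD-only rule (pool name and PG count illustrative)
ceph osd pool create vm-ssd 128 128 replicated ssd-only
```

In Proxmox VE, that pool can then be added as RBD storage and selected as the backing store for VM boot disks.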
  3. Possible to change Proxmox default port now?

    I would suggest leaving the proxmox configuration alone, and rather than trying to "fix" something that isn't a problem, simply create a proxy that does what you want it to do. I use haproxy (on pfsense) to direct incoming host.domain requests to their appropriate servers and ports. Don't try...
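The haproxy approach described above can be sketched as a minimal frontend/backend pair. This is a hedged illustration only — hostnames, addresses, and backend names are invented for the example, and the post's actual pfsense configuration is done through the GUI:

```
# Illustrative haproxy config fragment: route by Host header to the Proxmox web UI
frontend https_in
    bind *:443
    acl is_pve hdr(host) -i pve.example.com
    use_backend pve_backend if is_pve

backend pve_backend
    # Proxmox VE listens on 8006 with a self-signed cert by default
    server pve1 192.168.1.10:8006 ssl verify none
```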
  4. Proxmox won't boot after unplugging usb-c hub.

    Does the USB-C hub have any storage in it? Like, was a thumb drive or flash drive installed in it? Wondering if maybe one of the required partitions was inadvertently installed on external media...
  5. Proxmox VE 8.0 released!

    Just stopping in to thank the Proxmox Dev team and all contributors for doing great work as always. Both homelab and production clusters at work upgraded to 8 following the provided instructions without any issues. I really appreciate the addition of an enterprise ceph repo.
  6. How to configure Proxmox and PfSense VM so that all network requests go through PfSense

    The option to define an upstream gateway for an interface only appears with the interface configured for static IP assignment. With DHCP, the upstream gateway should be "provided," however, if you are the administrator of both the pfsense you're configuring and the upstream gateway, you might...
  7. CentOS 7 based VM's won't boot anymore

    Aha! Thanks Neobin! I wonder if I'm observing 2 separate issues here or are these related? I believe both issues cropped up around the same time. When "host" is selected for these CentOS7 VM's, I get those virtio mapping errors during the boot sequence of the VM, when "EPYC-Rome" is...
  8. CentOS 7 based VM's won't boot anymore

    Bumping this. "EPYC-ROME" option does not work on EPYC-ROME host hardware. Causing problems with some VM's if set to "host." I waited for a round of updates to see if this would "Self resolve" but latest enterprise repo kernel was installed yesterday. Issue persisting.
  9. CentOS 7 based VM's won't boot anymore

    I was experimenting with various possible causes.... Switched CPU type of the VM from HOST to EPYC-ROME which is what these servers are (7402P CPU's) and got the following error: () /dev/rbd32 /dev/rbd33 swtpm_setup: Not overwriting existing state file. kvm: warning: host doesn't support...
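The CPU-type switch described above is done with `qm`. A hedged sketch, assuming an illustrative VM ID of 100 (the post does not give one):

```shell
# Change the VM's CPU model from "host" to the named EPYC-Rome model
qm set 100 --cpu EPYC-Rome

# Confirm the setting in the VM's config
qm config 100 | grep ^cpu
```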
  10. CentOS 7 based VM's won't boot anymore

    Hello Proxmox Devs & Users! Interesting problem this week. I went to reboot our Security Onion nodes and ran into some problems... at VM boot: Then after a while of waiting: Updated our cluster last week: proxmox-ve: 7.4-1 (running kernel: 5.15.104-1-pve) pve-manager: 7.4-3 (running...
  11. Poor write performance on ceph backed virtual disks.

    Good question... Curious if anyone else is still struggling with these issues. I thought we had this problem whipped but within a few months of adding the NVME drives to use for WAL/DB, write speed from windows guests to the spinning pool collapsed again. It's just terrible. Seems to be...
  12. [SOLVED] Consolidate 2 Encrypted Datastores into a new Datastore

    Thanks very much fabian! That gives me a path forward. I'm basically going to do option 2 there, and choose to archive one that I can delete most of the backups in as soon as a new backup is made in "A" In a few months, when we have a good recent history established in the new datastore, I'll...
  13. [SOLVED] Consolidate 2 Encrypted Datastores into a new Datastore

    I have created a bit of a mess for myself and looking for a way forward that makes sense. At one time, I had a single datastore on our PBS server and a weekly backup schedule. I wanted to move some VM's to a daily backup schedule, and for some reason I created a separate datastore on...
  14. Ceph Quincy "Daemons have recently crashed" after node reboots

    Interesting! Thanks for sharing that brudy! I prefer not to modify anything on production proxmox servers unless the modification is part of the administration guide / documentation. In other words, the modification I make to fix something like this, should be in the scope of visibility and...
  15. Ceph Quincy "Daemons have recently crashed" after node reboots

    Wondering if anyone else has observed this, or if I missed a memo on how to fix it (or maybe I'm doing something wrong!) Since updating my homelab and office production server clusters to Ceph Quincy earlier this year, we get "Daemons have recently crashed" errors after doing routine cluster...
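For the "Daemons have recently crashed" health warning described above, Ceph provides a built-in crash module for inspecting and acknowledging the reports. A minimal sketch (run on any node with admin access to the cluster):

```shell
# List recent crash reports behind the health warning
ceph crash ls

# Acknowledge/archive all of them, which clears the warning
ceph crash archive-all
```

Whether archiving is appropriate depends on whether the underlying crashes were benign shutdown-ordering artifacts or real daemon failures worth investigating first.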
  16. More cores & threads or more ghz? Which is better?

    My advice is to set aside the upgrade money and put it towards a new, more efficient platform.
  17. Ceph 17.2 Quincy Available as Stable Release

    We upgraded our 6-node production cluster to Quincy shortly after this thread was posted. As always, carefully follow the well written instructions provided by the Proxmox team and you will be rewarded with continuity of operations. No issues. Upgrade was quick and easy. Thank you! -Eric PS...
  18. More cores & threads or more ghz? Which is better?

    If you already have the 2650v2's just use them. They are going to perform about the same as the 10 and 12 core options for most homelab purposes, while keeping the system power draw lower. On many boards the 2650's will run ~3 GHz base clock speeds anyway. I wouldn't throw any more money at it.
  19. [SOLVED] slow migrations

    I hope yall don't mind me bumping this thread but I'm seeing several responses here that I share similar experiences with. Migration is FAST between recently rebooted nodes. Even with encryption I see ~400-600MB/s on our 10Gb network. We have 2 corosync networks, both 10Gb, and we use the...

