Search results

  1. AMD Nested Virtualization not working after 7.1 upgrade

    The 7.2 update seems to have resolved this issue for us. We now have a guest running CG/VBS on our AMD-EPYC-ROME cluster.
  2. AMD Nested Virtualization not working after 7.1 upgrade

    Going to give this thread a nudge for some increased attention. Six-node cluster of matching 2113S-WTRT boxes with 7402P CPUs. We are attempting to enable Credential Guard / virtualization-based security for our Windows Server guests, which depends on Hyper-V working in the guest. I believe the... [a hedged nested-virtualization sketch follows the results list]
  3. Poor write performance on ceph backed virtual disks.

    Hi AngryAdm, random 4K reads at queue depth 1 will always be pretty slow; they are heavily impacted by network latency combined with drive access latency/service time. [a QD1 arithmetic and fio sketch follow the results list]
  4. kernel panic: BUG: unable to handle page fault for address: 0000000000008000

    Similar issues on our cluster, happening at random. It started a few weeks ago. A node has frozen twice in the last few weeks, bringing down lots of VMs/services in a production environment. We have not made any changes to the underlying hardware or BIOS config recently. Jul 28 10:31:13 px3...
  5. Poor write performance on ceph backed virtual disks.

    We moved forward with installing some NVMe DB/WAL drives based on Ingo S's post. We are using Micron 7300 Pro M.2 2TB drives for this and have 437G per 16TB drive assigned as DB/WAL space. The result is about 3% of total space on NVMe for DB/WAL. BIG improvement! Now seeing ~150MB/s... [an OSD-creation sketch follows the results list]
  6. Awfully slow write performance

    I have the same behavior on Windows Server VMs on Proxmox. 30MB/s is about the same performance I get on spinning disks. The actual transfer has the same bad behavior you've described, where it "peaks" for a little while, then stalls at zero, then peaks, then stalls... yuk! On Linux VMs the...
  7. Poor write performance on ceph backed virtual disks.

    6 x 10Gb: 1: Coro1, 2: Coro2, 3: CephP, 4: CephC, 5: Network Trunks, 6: Unused. Write performance from Windows guests is limited to approximately the write-sync performance of the drives in the pool. Other guests do slightly better. Ceph bench shows results similar to expected bare-drive performance...
  8. Poor write performance on ceph backed virtual disks.

    Hi Ingo, our DB/WAL is directly on the spinners on both my home cluster and my work cluster. My home cluster, with far less hardware power, seems to get better write performance. Odd, eh? I'm willing to try the dedicated WAL/DB disk; the servers have multiple M.2 slots on the motherboards...
  9. Poor write performance on ceph backed virtual disks.

    The production cluster is still on Nautilus. I did a bunch of testing at home to find a "best config" to try for the production cluster. The best performance I can get on a Windows guest seems to be krbd, virtio block, iothread, writeback, and then configuring Windows to disable write-cache buffer... [a config sketch follows the results list]
  10. Poor write performance on ceph backed virtual disks.

    Writeback starts off fast, like 150MB/s as reported in the Ceph logs and VM summary graphs, but within a few minutes it drops to 25MB/s. It also has a very nasty problem: when I cancel a file copy midway, Windows "cancels" it, but there's a crap ton of data still waiting to be flushed on the...
  11. Poor write performance on ceph backed virtual disks.

    I just set the virtual disk to direct sync mode to "test a theory." Big surprise here: 8MB/s. So how do we get the VM to respect the actual cache settings?
  12. Poor write performance on ceph backed virtual disks.

    Hello mmidgett, any file copy operation or Samba-share file copy suffers a severe performance bottleneck when writing to the spinning pool. When I copy a file from the spinning pool to the SSD pool, I also get about 100MB/s in non-cache mode, just like you, and Windows "behaves"...
  13. Poor write performance on ceph backed virtual disks.

    I turned on krbd for this pool, then shut down and booted the two VMs with virtual disks in this pool. Performance appears to have improved about 60%. So my instance of PBS is writing ~80MB/s instead of ~50MB/s to this pool, and the Windows file server is now moving at a scorching 8MB/s instead... [a CLI sketch for the krbd toggle follows the results list]
  14. Poor write performance on ceph backed virtual disks.

    Hello! We're getting 5MB/s write speed on the production cluster now, and that's on a 10Gb network. The SSD pool on this cluster rebalances and recovers at 500-2000MB/s, so this is not a network or CPU issue. With max backfills and recovery_max_active cranked up a bit, the spinning disk... [a recovery-tuning sketch follows the results list]
  15. Poor write performance on ceph backed virtual disks.

    Tried a few things... Enabled the autoscaler, which shrank the number of PGs quite a bit; performance dropped a bit after rebalancing. Gave the autoscaler some information about how large the pool is likely to be down the road, and it grew the number of PGs quite a bit; performance dropped... [an autoscaler-hint sketch follows the results list]
  16. Poor write performance on ceph backed virtual disks.

    I was just reading the latest Ceph benchmark PDF from the Proxmox folks for any possible insight. In the single-thread sequential write test on a Windows VM, they're only getting 600MB/s on drives that would do nearly 3000MB/s directly attached. I'm seeing a similar relative... [a single-thread rados bench sketch follows the results list]
  17. Poor write performance on ceph backed virtual disks.

    I don't understand the question but will try to offer some clarification. The enclosure is a 24 x 3.5" bay direct-attach (non-expander) design. Groups of 4 drive bays are mapped to 6 mini-SAS connections. Each node of the cluster is directly attached to 4 of the drive bays in the enclosure...
  18. Poor write performance on ceph backed virtual disks.

    Hello! I've had this issue on my home cluster since I built it in late 2019, but I always figured it was caused by old/slow hardware and a mixed bag of consumer SSDs/HDDs... ~50MB/s is as good as I can get out of Ceph write performance from any VM, to any pool type (SSD or HDD)...
  19. Community Help Request, Purchasing Suggestions and Options (All Opinions Welcome)

    I don't know if you have any way to "import" a purchase over the border (maybe a friend who could act as a middleman?), but I found something on eBay that might be of interest...
  20. Small office server tips

    If you're familiar with reloading a firewall config from backup, you know that very complex configurations with numerous packages installed often don't recover properly, so bare hardware is not a great place to configure a complex firewall unless you can afford to lose the config. I like a true...
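
Configuration sketches for the results above

Result 2 hinges on nested virtualization: Credential Guard/VBS runs the guest under Hyper-V, so the Proxmox host has to expose AMD-V (SVM) to the VM. A minimal sketch, assuming an AMD host and a placeholder VMID of 100; verify the module option and CPU type against your Proxmox version:

    # Check whether nested SVM is currently enabled on the host (1 = enabled)
    cat /sys/module/kvm_amd/parameters/nested

    # Persist nesting for the kvm-amd module, then reload it (no VMs running)
    echo "options kvm-amd nested=1" > /etc/modprobe.d/kvm-amd.conf
    modprobe -r kvm_amd && modprobe kvm_amd

    # Pass the host CPU through so the guest sees the SVM flag (VMID 100 is a placeholder)
    qm set 100 --cpu host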
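
The arithmetic behind result 3: at queue depth 1, every 4K read waits for a full network-plus-disk round trip before the next one is issued, so at ~1 ms per operation you are capped at ~1,000 IOPS, which is only ~4 MB/s no matter how much bandwidth you have. A hedged fio run to measure it from inside a guest (file name, size, and runtime are arbitrary choices):

    # Random 4K reads at queue depth 1; direct I/O bypasses the guest page cache
    fio --name=qd1-randread --filename=/tmp/fio.test --size=1G \
        --rw=randread --bs=4k --iodepth=1 --direct=1 \
        --runtime=30 --time_based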
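
For the DB/WAL layout in result 5, this is one way to carve a BlueStore DB/WAL out of a shared NVMe device when creating an OSD on Proxmox. The device paths are assumptions, and the 437 GiB size just mirrors the post:

    # Create an OSD on the spinner with its DB/WAL on the NVMe
    # (db_size is in GiB; pveceph allocates the slice on db_dev itself)
    pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_size 437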
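
Result 9's "best config" maps roughly onto the following Proxmox settings; the storage name and VMID are placeholders, and this is a sketch of one combination rather than a recommendation. The Windows half of the post is done inside the guest, under the disk's Policies tab in Device Manager:

    # /etc/pve/storage.cfg: serve RBD through the kernel client instead of librbd
    rbd: ceph-hdd
        pool hdd
        krbd 1
        content images

    # VM disk: virtio block with writeback cache and a dedicated iothread
    qm set 100 --virtio0 ceph-hdd:vm-100-disk-0,cache=writeback,iothread=1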
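
The krbd switch from result 13 can also be flipped on an existing storage from the CLI; the storage name is an assumption, and running guests only pick the change up after the full stop/start described in the post:

    # Enable the kernel RBD client for an existing RBD storage entry
    pvesm set ceph-hdd --krbd 1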
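
The knobs mentioned in result 14 can be raised at runtime with ceph config; the values below are illustrative, not the poster's:

    # Allow more concurrent backfills and recovery ops per OSD (defaults are conservative)
    ceph config set osd osd_max_backfills 4
    ceph config set osd osd_recovery_max_active 8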
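
"Giving the autoscaler some information" in result 15 presumably means target-size hints, which let it size the PG count for the pool's eventual share of the cluster instead of its current usage. The pool name and ratio are assumptions:

    # Review what the autoscaler intends to do before letting it act
    ceph osd pool autoscale-status

    # Hint the pool's expected long-term share of raw capacity
    ceph osd pool set hdd-pool target_size_ratio 0.6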
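
To reproduce result 16's single-thread sequential-write number at the RADOS level, below the VM stack entirely (pool name assumed; -t 1 forces a single in-flight operation):

    # 60-second sequential write benchmark with one concurrent operation
    rados bench -p hdd-pool 60 write -t 1 --no-cleanup

    # Remove the benchmark objects afterwards
    rados -p hdd-pool cleanup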
