Search results

  1. CPU Type "EPYC Milan" kvm: warning: host doesn't support requested feature

    In my case it was a BIOS update which fixed the problem. It seems there was a problem in the BIOS that made the features unusable. This also led to some trouble with VMs that rebooted sometimes.
  2. CPU Type "EPYC Milan" kvm: warning: host doesn't support requested feature

    Hi @all, we have a node with an AMD EPYC 7343 running on 7.3-6 and tried to configure a VM with CPU Type "EPYC Milan". Unfortunately this does not work, as I get the following error when starting the VM: kvm: warning: host doesn't support requested feature: CPUID.07H:EBX.erms [bit 9] kvm: warning...
  3. No Network Adapter in fresh Windows Server after Upgrade to Proxmox 7

    We can confirm that QEMU version 7.2.0-5 fixed the network problem in Windows 2016 and 2022. @tom Can you tell me when we can expect QEMU 7.2.0-5 to land in the pve-enterprise repo?
  4. [SOLVED] Problem during Migration with gluster filesystem.

    Sure, just wanted to comment that it also occurs with "ZFS over iSCSI", for others who may have this problem.
  5. [SOLVED] Problem during Migration with gluster filesystem.

    Dec 12 14:14:20 node1 pmxcfs[2381]: [status] notice: received log
    Dec 12 14:14:30 node1 QEMU[38582]: kvm: ../block/io.c:2847: bdrv_co_pdiscard: Assertion `num < max_pdiscard' failed.
    This happens on a "zpool trim" command inside a VM, where the VM has a "ZFS over iSCSI" disk attached...
  6. [SOLVED] Problem during Migration with gluster filesystem.

    Hi, I would just like to add that the problem is also reproducible with "ZFS over iSCSI" LUNs.
  7. [SOLVED] Roadmap multiple Cluster Management.

    Hi @fabian, do you mean the next official Proxmox version (presumably 7.3)?
  8. [SOLVED] Roadmap multiple Cluster Management.

    Well, I don't want a finished GUI; I want to use the API, hence my question whether ZFS local-storage migrations are already possible via the API.
  9. [SOLVED] Roadmap multiple Cluster Management.

    Hi @fabian, is there any news on this? Is migration of local storage (ZFS) possible? (See the command sketch after this list.)
  10. Live migration to another cluster (even from 3 to 5)

    Well, the last message I read about that was from January 2021, so more than a year has passed. Do you maybe have some update on their progress?
  11. Live migration to another cluster (even from 3 to 5)

    Hi @tapsa, this hack seems to no longer work, as the qm command reports "no such cluster node".
  12. Securing third party application Proxmox integration with proxy api gateway

    Hi @EuroDomenii, a couple of years have passed, but I wanted to reopen the thread because it is a really interesting approach to securing the API. Any news on that?
  13. Minimum VMID increase to 1000?

    Hi, in my opinion this would be a very important configuration option! We are dealing with many clusters and manage them automatically, so having the same VMIDs in different clusters is quite a problem. (See the configuration sketch after this list.)
  14. PBS backup pool performance issues

    OK, we have added a special device SSD mirror to the PBS and it seems to have fixed our problem. Our backups no longer fail because of timeouts.
  15. PBS backup pool performance issues

    Task viewer: VM/CT 250 - Backup
    INFO: starting new backup job: vzdump 250 --node DC1C01N02 --remove 0 --mode snapshot --storage BR1PXBCK1
    INFO: Starting Backup of VM 250 (qemu)
    INFO: Backup started at 2021-08-10 10:19:28
    INFO: status = running
    INFO: VM Name: test25
    INFO...
  16. PBS backup pool performance issues

    And here is the benchmark from the PVE host to the PBS:
    root@DC1C01N02:~# proxmox-backup-client benchmark --repository test@pbs@10.10.1.101:dc1c01d
    Are you sure you want to continue connecting? (y/n): y
    Uploaded 724 chunks in 5 seconds.
    Time per request: 6940 microseconds.
    TLS speed: 604.29 MB/s
    SHA256...
  17. PBS backup pool performance issues

    Via SCP we get around 370 MB/s from the PVE host to one of the storage pools. An iperf3 test reaches around 9.7 Gbit/s, which is the full speed of the 10 Gbit/s network adapters. Running a backup task of one VM from the PVE host to the PBS results in 100 MB/s.
  18. PBS backup pool performance issues

    Test "write" (see the fio sketch after this list):
    root@br1pxbck1:~# fio --rw=write --name=/mnt/datastore/storage1/fiotest --size=4G
    /mnt/datastore/storage1/fiotest: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
    fio-3.25
    Starting 1 process
    Jobs: 1 (f=1)...
  19. PBS backup pool performance issues

    @floh8 we already use ashift=12, which I think is fine for 4 KB sectors (2^12 = 4096 bytes):
    rpool     ashift  12  local
    storage1  ashift  12  local
    storage2  ashift  12  local
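
Note on results 8-11 (cross-cluster and local-storage migration): since PVE 7.3 there is an experimental qm remote-migrate command that can move a VM, including local disks, to a node in another cluster via the API. A minimal sketch, assuming an API token on the target side; the host, token, and fingerprint values below are placeholders, not data from the threads:

    # Experimental in PVE 7.3+; the syntax may still change.
    # <secret> and <fingerprint> stand in for the real token secret
    # and the target node's certificate fingerprint.
    qm remote-migrate 100 100 \
        'host=192.0.2.10,apitoken=PVEAPIToken=root@pam!migrate=<secret>,fingerprint=<fingerprint>' \
        --target-bridge vmbr0 --target-storage local-zfs --online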
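Note on result 13 (minimum VMID): newer PVE releases have a next-id datacenter option that bounds the range from which the GUI and API suggest the next free VMID. A minimal sketch, assuming /etc/pve/datacenter.cfg; the bounds are illustrative, and this only changes the suggested ID rather than rejecting manually chosen lower VMIDs:

    # /etc/pve/datacenter.cfg
    # Suggest new VMIDs starting at 1000 (bounds are assumptions
    # for illustration, not values from the thread).
    next-id: lower=1000,upper=1000000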
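Note on result 18 (fio): without explicit flags, fio defaults to ioengine=psync, bs=4k, iodepth=1 and buffered I/O (visible in the output above), which says little about sequential backup throughput. A sketch of a more representative sequential-write run; the block size, queue depth, and engine are assumptions, only the file path is taken from the thread:

    # 1 MiB sequential writes, queued via libaio; end_fsync makes fio
    # include the final flush in the result, since ZFS may otherwise
    # absorb the writes in RAM and report cache speed.
    fio --name=seqwrite --filename=/mnt/datastore/storage1/fiotest \
        --rw=write --bs=1M --size=4G --ioengine=libaio --iodepth=16 --end_fsync=1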
