Search results

  1. Epyc Zen 2 with Proxmox 5.4-13

    Funny that really no one has tried this CPU yet or has any experience with the new Zen 2 CPUs.
  2. Epyc Zen 2 with Proxmox 5.4-13

    I would really appreciate it if anyone could give us advice.
  3. Epyc Zen 2 with Proxmox 5.4-13

    The EPYC 7002 series should behave much like the new Ryzen 3000 CPUs, since both are Zen 2. So what are people's experiences with these CPUs and Proxmox 5.4? I would really love to get my hands on an EPYC 7402, maybe as a dual-processor system, but am afraid of compatibility issues.
  4. Epyc Zen 2 with Proxmox 5.4-13

    Hi all, I was wondering if there are any compatibility issues with Proxmox 5.4-13 (kernel 4.15.18-20-pve) and the new Zen 2 architecture in the EPYC lineup, specifically the new EPYC 7402P. Is somebody already using such a setup, and if so, are there any issues? Regards, Andy
  5. error with cfs lock on zfs filesystem because of timeout

    Hi, is there any other way to create a volume when the pool is busy? It is just the "zfs list" command with the specific parameters that takes very long, which causes the timeout of the create command. As I understand it, this "zfs list" command is only there to check if a volume...
  6. error with cfs lock on zfs filesystem because of timeout

    Hi @all, I am currently facing a problem where I am not able to create an iSCSI-ZFS disk because the "cfs lock" gets a timeout. The problem is that "zfs list" takes very long on my iSCSI server, because it has a lot of writes. Is it somehow possible to raise this wait time so that we get no...
  7. Cluster Join failed - help please

    OK, I've recreated the certificates with the following command: pvecm updatecerts --force. I hope that everything else is now working with the cluster. Any idea how to check if everything is really OK?
  8. Cluster Join failed - help please

    Hi @all, I tried to add a "reinstalled" node to a cluster with "pvecm add", which got stuck at "waiting for quorum". As 15 nodes are already running in that cluster, and the node I wanted to add now was in that cluster only minutes before, I think the problem was not...
  9. Passthrough of Cores and Threads

    Unfortunately this config results in the Windows VM showing 16 vCores instead of 8.
  10. Passthrough of Cores and Threads

    Hmm... but as I understand "taskset", it would not help in my example. I would still have to assign only 8 vCores to the VM, which would result in running only 8 threads and using only 4 hardware cores?
  11. Passthrough of Cores and Threads

    Hi, sorry, I didn't express myself properly. The goal is to give one VM a maximum of 8 virtual cores, but still have 16 threads in this VM. The reason for this is SQL licensing, because it is licensed per core, so that on a physical machine with 8 hardware cores you would only have to license 8 cores and...
  12. Passthrough of Cores and Threads

    Hi all, can I pass all cores and threads to a virtual machine, so that the machine has, for example, all 8 cores and all 16 threads of the node? The purpose of this is licensing. Thanks in advance for answering. Regards, Andreas
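
    The 8-cores/16-threads question above can be sketched as a VM config fragment. This is only a sketch of a workaround commonly discussed in the community, not a tested or official config: VMID 100 is an assumption, and the "args" line passes raw options to QEMU, so whether it overrides the -smp topology that qm generates may depend on the PVE version.

    ```
    # /etc/pve/qemu-server/100.conf (sketch; VMID 100 is an assumption)
    sockets: 1
    cores: 8
    # Pass an explicit SMP topology to QEMU so the guest sees
    # 8 cores with 2 threads each (16 logical CPUs):
    args: -smp 16,sockets=1,cores=8,threads=2
    ```

    Note that this only changes the topology the guest sees; the scheduler on the host still treats the VM's vCPUs as ordinary threads.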
  13. pve-zsync and "ZFS over iscsi" storage

    Hi, I did not really get your last info... What does "not covered" mean? If I configure "nobackup" for the iSCSI drive in the vm.conf, I can run the pve-zsync backup for the local disk without a problem. Additionally, I would use pve-zsync on the iscsimachine to do the backup like you mentioned...
  14. pve-zsync and "ZFS over iscsi" storage

    Well, but in my example this would be bad because the KVM could have additional local ZFS disks. This is why I wanted to let it run on the node and on the iscsimachine.
  15. pve-zsync and "ZFS over iscsi" storage

    Hmm... wouldn't I have the same problem that no vm.conf file is available on the iscsimachine? And this way I would have to run pve-zsync twice: once on the node (where I would have to configure the iSCSI disk vm-100-disk-0 as "nobackup") so that I do not get the error message, and the...
  16. pve-zsync and "ZFS over iscsi" storage

    The NAS is a Proxmox node that has targetcli running. Some of our KVMs that run on SSD nodes have additional iSCSI disks on the "iscsimachine", which has HDDs for "slow big storage". Now when I do a pve-zsync of the KVM, I get the error mentioned above, because on the iscsimachine...
  17. pve-zsync and "ZFS over iscsi" storage

    So which would be correct? This: pve-zsync sync --source iscsimachine:100 --dest iscsimachine:/rpool/backup Or this: pve-zsync sync --source iscsimachine:/rpool/data/vm-100-disk-0 --dest iscsimachine:/rpool/backup
  18. pve-zsync and "ZFS over iscsi" storage

    Hi, what do you mean by "target"? Maybe like this: pve-zsync sync --source iscsimachine:100 --dest iscsimachine:/rpool/backup
  19. pve-zsync and "ZFS over iscsi" storage

    Hi @all, when trying to back up a VM disk with pve-zsync that is located on a "ZFS over iSCSI" storage, I am getting "ERROR: in path". Is it possible that this does not work with "ZFS over iSCSI" storages? Regards, Andreas
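
    The "nobackup" setting discussed in this thread corresponds to the backup=0 flag on the disk line in the VM config, which excludes that disk from backup jobs. A minimal sketch, assuming the storage is named "iscsistorage" and the VM is VMID 100 (both names are assumptions):

    ```
    # /etc/pve/qemu-server/100.conf (sketch)
    # backup=0 excludes this iSCSI-backed disk from backups,
    # so pve-zsync only has to handle the local ZFS disks:
    scsi1: iscsistorage:vm-100-disk-0,backup=0
    ```

    The remote disk would then be backed up separately, e.g. by running pve-zsync on the iscsimachine itself as discussed above.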

