In principle, the EPYC 7002 series should behave the same as the new Ryzen 3000 CPUs, since both are Zen 2. So what are the experiences with these CPUs and Proxmox 5.4?
I would really love to get my hands on an EPYC 7402, maybe as a dual-processor system, but I am afraid of compatibility issues.
I was wondering if there are any compatibility issues with Proxmox 5.4-13 (kernel 4.15.18-20-pve) and the new Zen 2 architecture with the EPYC lineup.
Specifically the new EPYC 7402P.
Is somebody already using such a setup, and if so, are there any other issues?
Hi, is there any other way to create a volume when the pool is busy? It is really just the "zfs list" command with the specific parameters that takes very long, which causes the timeout of the create command. As I understand it, this "zfs list" command is only there to check if a volume...
I am currently facing a problem where I cannot create an iSCSI ZFS disk because the "cfs lock" runs into a timeout. The problem is that "zfs list" takes very long on my iSCSI server, because it has a lot of writes.
Is it possible to increase this wait time somehow so that we get no...
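A sketch of a possible workaround, assuming the volume may be pre-created manually on the storage host; the pool path rpool/data, volume name vm-100-disk-1, and size are hypothetical placeholders, not taken from the actual setup:

```shell
# Hypothetical names and size; adjust to your pool layout.
# Creating the zvol directly on the iSCSI host avoids the PVE-side
# create path whose "zfs list" check runs into the cfs lock timeout.
zfs create -s -V 100G rpool/data/vm-100-disk-1

# Check only that single dataset instead of listing the whole busy pool:
zfs list rpool/data/vm-100-disk-1
```

Whether Proxmox then picks up the pre-created zvol cleanly depends on the storage configuration, so treat this as a sketch rather than a supported procedure.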
I tried to add a "reinstalled" node to a cluster with "pvecm add 10.1.1.1", which got stuck at "waiting for quorum". As 15 nodes are already running in that cluster, and the node I wanted to add had been part of that cluster only minutes before, I think the problem was not...
Hmm... but as I understand "taskset", it would not help in my example.
I would still have to assign only 8 vCPUs to the VM, which would result in running only 8 threads and using only 4 hardware cores?
Sorry, I didn't express myself properly.
The goal is to give one VM a maximum of 8 virtual cores, but still have 16 threads in this VM. The reason for this is SQL licensing: it is licensed per core, so on a physical machine with 8 hardware cores you would only have to license 8 cores and...
Can I pass all cores and threads to a virtual machine?
So that the machine has, for example, all 8 cores and all 16 threads of the node.
The purpose of this is licensing.
Thanks in advance for answering.
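As a sketch of how such a topology can be expressed (no statement about licensing compliance intended): QEMU's -smp option distinguishes sockets, cores, and threads, and Proxmox allows passing extra QEMU options through the args line of the VM config. VMID 100 and the counts below are hypothetical:

```
# /etc/pve/qemu-server/100.conf (hypothetical VMID)
# 16 vCPUs presented to the guest as 1 socket x 8 cores x 2 threads,
# i.e. 8 cores with SMT instead of 16 single-threaded cores.
args: -smp 16,sockets=1,cores=8,threads=2
```

How the guest OS and the SQL Server licensing rules count such SMT siblings is a separate question.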
Hi, I did not really get your last info...
What does "not covered" mean? If I configure "nobackup" for the iSCSI drive in the vm.conf, I can run the pve-zsync backup for the local disk without a problem. Additionally, I would use pve-zsync on the iscsimachine to do the backup like you mentioned...
Hmm... wouldn't I have the same problem, that no vm.conf file is available on the iscsimachine?
And this way I would have to run pve-zsync twice: once on the node (where I would have to configure the iSCSI disk vm-100-disk-0 as "nobackup") so that I do not get the error message, and the...
The NAS is a proxmox node that has targetcli running.
Some of our KVMs that run on SSD nodes have additional iSCSI disks on the "iscsimachine", which has HDDs for "slow big storage".
Now when I do a pve-zsync of the KVM, I am getting the error mentioned above.
Because on the iscsimachine...
So which would be correct?
pve-zsync sync --source iscsimachine:100 --dest iscsimachine:/rpool/backup
pve-zsync sync --source iscsimachine:/rpool/data/vm-100-disk-0 --dest iscsimachine:/rpool/backup
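For reference, the pve-zsync usage text describes --source as accepting either a VMID or a ZFS dataset path; the VMID form resolves the disks from the VM's config file, while the dataset form syncs a plain dataset. How that maps onto the two candidates above (hostnames and paths copied verbatim, not verified):

```shell
# VMID form: pve-zsync looks up the disks in the VM's config file,
# which exists on the PVE node but not on the storage host.
pve-zsync sync --source iscsimachine:100 --dest iscsimachine:/rpool/backup

# Dataset form: syncs a plain ZFS dataset and needs no vm.conf.
pve-zsync sync --source iscsimachine:/rpool/data/vm-100-disk-0 --dest iscsimachine:/rpool/backup
```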
Hi @ all,
when trying to back up a VM disk with pve-zsync that is located on a "ZFS over iSCSI" storage, I am getting "ERROR: in path".
Is it possible that this does not work with "ZFS over iscsi" storages?