The issue he is facing with the VM is that the vCPU type is probably set to the default KVM type (kvm64). The AES flag needs to be ON for TLS performance, but it is not exposed by the KVM CPU type.
So change your vCPU type to Host, or enable the AES flag.
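For reference, a minimal sketch of that change from the CLI, assuming the VM ID is 100 (the ID is a placeholder):

# Switch the vCPU type to host, which passes the physical CPU's flags (including AES) through:
qm set 100 --cpu host
# Or keep the kvm64 type and explicitly enable just the AES flag:
qm set 100 --cpu kvm64,flags=+aes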
It's a physical server with 2x E5-2680 v2.
While running the bench test I see only one busy thread on each CPU, so it uses only one core per socket.
That is what I see in htop while the test is running.
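If this is the PBS benchmark, a sketch of how to reproduce it, assuming a repository of root@pam@localhost:store1 (the repository is a placeholder):

# Measures TLS, SHA256, compression and AES256-GCM speed against the repository:
proxmox-backup-client benchmark --repository root@pam@localhost:store1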
We have 4x Samsung PM863a 3.84 TB 3D-NAND TLC in RAIDZ2, behind an HPE HBA, and we believe the performance is really slow. Is there anything we can do to improve the overall performance?
We enabled TRIM, with no change.
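For reference, a sketch of the TRIM setup, assuming the pool is named rpool (the pool name is a placeholder):

# Let ZFS trim freed blocks continuously:
zpool set autotrim=on rpool
# Or run a one-off manual trim:
zpool trim rpool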
The PBS host has 128 GB of RAM and uses around 20-30% of it.
Is there any benefit to adding a special device when running 100% SSDs?
I saw somewhere that we can issue a command for caching on the ZFS pool, for enterprise SSDs that handle write acknowledgement automatically, but I can't find it.
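A hedged guess at the setting meant here: OpenZFS has a zfs_nocacheflush module parameter that stops ZFS from issuing cache-flush commands, which some people enable on enterprise SSDs with power-loss protection. Whether this is the command referred to above is an assumption, and it is unsafe on drives without power-loss protection.

# Assumption: only consider this on SSDs with power-loss protection.
echo 1 > /sys/module/zfs/parameters/zfs_nocacheflush
# To make it persistent across reboots:
echo 'options zfs zfs_nocacheflush=1' >> /etc/modprobe.d/zfs.conf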
Hi Fabian,
We are having an issue where a snapshot of a 2-disk qcow2 VM (1 TB + 2 TB, 3 TB total) with 32 GB of RAM, of which the VM has 10 GB in use, shows the progress crossing the 10 GB mark,
and on a host with a shared GFS2 file system and 128 GB of RAM (75% free) it takes forever.
So far, in the time it took to write this message, I have always ended up killing it...
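For context, a RAM-including snapshot like this is normally taken with something along these lines (the VM ID and snapshot name are placeholders):

# --vmstate 1 also saves the guest's RAM, which is what the progress counter tracks:
qm snapshot 100 pre-change --vmstate 1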
Hi, Proxmox is a killer hypervisor against the competition, everybody knows that, but one of the most important features we are missing is REPLICATION of virtual machines on file systems other than ZFS. Why is this still not available?
We have a lot of VMs we can't move from Hyper-V and VMware because of...
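For reference, the built-in replication (pvesr) is ZFS-only today; a sketch of what it looks like where it does work (the job ID, target node and schedule are placeholders):

# Replicate VM 100 to node pve2 every 15 minutes -- requires ZFS-backed disks:
pvesr create-local-job 100-0 pve2 --schedule '*/15'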
Worked like a charm, thanks.
Question: is it normal that each successfully live-migrated VM leaves the old one in a locked (migrate) state?
This happened on each of my tests, and the migration task job reports OK (no errors).
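For what it's worth, a stale migrate lock can be cleared manually (the VM ID is a placeholder):

# Removes the lock left on the source VM after migration:
qm unlock 100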
Hi, sorry, I was referring to qm remote-migrate.
We can't go from a qcow2 file to an RBD block device with qm remote-migrate; it always fails as unsupported, even though it works within the same cluster when moving a disk.
So I need to send the qcow2 to another file system first, but VM disks seem to be disabled on CephFS, and it's the...
Correct
We have been using Proxmox with HPE shared SAS storage for years,
with a GFS2 file system and DLM.
But it's tricky, with a lot of learning curve; that is why we moved all new scenarios to Ceph, to ease maintenance.
Scenario:
We want to move VMs from the shared SAS storage on CLUSTER1 to RBD on CLUSTER2.
The VM images on CLUSTER1 are qcow2, and I get an error saying RBD is not supported.
My only solution is to use a temporary file system on Ceph: move the VM there in step 1, then switch it to RBD in step 2.
But...
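A sketch of that two-step workaround, assuming a CephFS-backed storage named cephfs-tmp and an RBD storage named rbd-pool on CLUSTER2 (all IDs, names, the API token and the fingerprint are placeholders):

# Step 1: remote-migrate to a file-based storage on CLUSTER2 (run on CLUSTER1):
qm remote-migrate 100 100 'host=cluster2.example.com,apitoken=PVEAPIToken=root@pam!mytoken=SECRET,fingerprint=AA:BB:...' --target-bridge vmbr0 --target-storage cephfs-tmp --online
# Step 2: move the disk from CephFS to RBD (run on CLUSTER2):
qm disk move 100 scsi0 rbd-pool --delete 1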