I have just had the same experience right now, with the same Supermicro AMD hardware type but on a different server.
It took exactly 30 minutes to come back. I failed to access IPMI or attach a screen in time.
I see you did what I told you, and since rpool/data was already made, all you had to do was add it as storage in the PM GUI.
Hopefully you also ticked the thin provisioning box. :-)
Now you are all set; tuning is a separate issue.
My 5 cents.
Create a new dataset and add it as a storage option (disk image, etc.) using the PM GUI.
Then PM will create ZVOLs for your VMs in that dataset. Usually it is already created (rpool/data, called local-zfs), but I do not know what your provider did in its PM install recipe.
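In case it is missing, a rough CLI equivalent of those GUI steps (the dataset and storage names here are just examples) would be:
# create a dataset for VM disk images
zfs create rpool/data
# register it as ZFS storage in PM, with thin provisioning enabled
pvesm add zfspool local-zfs --pool rpool/data --content images,rootdir --sparse 1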
What about space at the pool level?
zfs list -t all
Also take a look at this:
zfs list -o name,quota,refquota,reservation,refreservation,volsize,used,available,referenced
Try removing quotas and reservations...
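For example (the dataset name is just a placeholder):
zfs set quota=none rpool/data
zfs set refquota=none rpool/data
zfs set reservation=none rpool/data
zfs set refreservation=none rpool/data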
Is there any reason (other than the race cases where /dev/sd* device names change) why, since PM 6, the installer switched from /dev/sdX to /dev/disk/by-id/* names when creating rpool?
I guess some documentation, like the one here:
should be updated...
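For illustration, my understanding (an assumption on my part) is that the by-id names are just stable symlinks to whatever sdX name a disk currently has:
# show which stable ID currently points at sda
ls -l /dev/disk/by-id/ | grep -w sda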
I have a PM cluster on a private network. While I can enable access using DNAT or a VPN via a VM running on this cluster (or another one that can reach this PM private network), I still wonder: what is the easiest solution to make the https GUI available on another (WAN) interface?
Maybe I can just define...
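If DNAT is the way to go, a minimal sketch (the interface name and addresses are made up) could be:
# forward WAN port 8006 to a node's private address
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8006 -j DNAT --to-destination 10.10.10.11:8006
iptables -t nat -A POSTROUTING -d 10.10.10.11 -p tcp --dport 8006 -j MASQUERADE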
I have a few VMs on PM 5 with replication set up using ZFS.
I used to set */30 or more for the replication schedule.
Just now I set it to */1 for every VM because, counterintuitively, it might put less load on the source disks.
The reasoning is that, if we sync often, then the data to be...
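For reference, changing the schedule from the CLI (the replication job ID here is hypothetical) looks like:
pvesr update 100-0 --schedule '*/1'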
FYI, I have had the same experience with PM 5 after fully updating it around a month or two ago.
I rebooted it remotely and it did not come up. I drove to the office (1 a.m., the life of sysadmins :-( ).
Just when I unlocked the office door, around the 30-minute mark, the monitoring system sent...
Hmm... I don't think I understand correctly.
Do you think the high CPU usage with ZVOL on restore is due to the fact that the data disk of the ZFS VM is an actual zvol?
And that I should change the data disk for this NFS VM from a ZVOL to a RAW file on a ZFS dataset?
When I have some time and I put additional servers...
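If I do try that, a rough sketch of the change (the storage name and dataset are placeholders) would be:
# a plain dataset exposed as directory storage, so new VM disks become raw files
zfs create rpool/rawdisks
pvesm add dir zfs-raw --path /rpool/rawdisks --content images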
So the host is:
24 x Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz (2 Sockets)
62.90 GiB RAM
Linux 4.15.18-16-pve #1 SMP PVE 4.15.18-41 (Tue, 18 Jun 2019 07:36:54 +0200)
2 x Intel DC S3510 Series 1.6TB
NAME STATE READ WRITE CKSUM ...
I did some testing yesterday with PM 5 to see how fast I can import some big VMs from PM 4.
So I set up an NFS VM in the new cluster on the node where I will import VMs. I defined it as "exportVM" storage on both the PM 4 and PM 5 clusters concurrently.
I exported VMs (backup via the GUI) from PM 4 to the NFS VM...
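Attaching the NFS share as storage on both clusters from the CLI would look roughly like this (the server address and export path are made up):
pvesm add nfs exportVM --server 192.168.1.50 --export /srv/exportVM --content backup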
FYI, I experience the exact same symptoms (it never stops) with LXC containers that have NFS network mounts.
I decided not to investigate further, because there is a simple solution and it is just my home server.
So just do "ps faxuw" in the console and look for a process name with the VM ID in it. Usually it is just one or...
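Something like this narrows it down (100 is a hypothetical VM/CT ID):
ps faxuw | grep -w 100
Then kill the matching process by its PID.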
As reported here:
in Proxmox 5.* VMs do not get tagged traffic passed to them if the traffic comes in via an MT26448. It works fine with Intel 10G cards. In Proxmox 4.* the MT26448 works just fine.
Any ideas or hints to solve this are welcome.
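One thing I still plan to try (a guess on my part, not a confirmed fix) is toggling VLAN offload on the Mellanox port:
# enp3s0 is a hypothetical name for the MT26448 interface
ethtool -k enp3s0 | grep -i vlan
ethtool -K enp3s0 rxvlan off txvlan off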