Proxmox users, et alia:
I am contemplating deploying Ceph on my Proxmox nodes (I have 3 nodes, with 500GB to 2TB SSDs in them); unfortunately, Proxmox was installed on these large drives. Thus, I now wish to obtain some JetFlash (64GB) USB drives to use as boot drives and migrate my boot...
That was my assessment too, thanks for confirming for me.
Well, when the bug hits for me, it crashes the entire server, both on Proxmox and on other Linux machines running ZFS.
Oh no, I meant something else: it might be better (for now) if I just decompress and then recompress my files...
Moayad,
Well, yes I understand what the host type does. I was trying to understand how and with what level of pain I could add the nested virtualization flag to the particular CPU type I was looking at. Are you suggesting that there is no way to do that?
Thanks for your reply.
Stuart
Fiona,
It seems that ZFS 2.2.3 has come out. Has the version of zstd it bundles been updated? Moreover, will Proxmox soon update to ZFS 2.2.3? I presume that if I simply switch my compression from zstd to a different algorithm, this issue goes away, correct?
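If I understand correctly, changing the compression property only affects newly written blocks, so existing data would stay zstd-compressed until it is rewritten. A rough sketch of what I have in mind (the dataset name tank/archive here is hypothetical):

```
# Switch future writes to lz4; existing blocks remain zstd-compressed
zfs set compression=lz4 tank/archive

# Confirm the property took effect
zfs get compression tank/archive

# Existing files must be rewritten to pick up the new algorithm,
# e.g. by copying each file and replacing the original
cp somefile somefile.tmp && mv somefile.tmp somefile
```

A zfs send into a freshly created dataset with the new compression setting would accomplish the same rewrite in bulk.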
Thanks!
Stuart
Proxmox users, developers, et alia:
It seems that if one selects the CPU type "x86-64-v2-AES", nested virtualization is not available. Is there a way to custom-configure this CPU type so that it allows that functionality?
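For what it is worth, my understanding (unverified on my own hardware, so treat this as a sketch) is that Proxmox allows user-defined CPU models in /etc/pve/virtual-guest/cpu-models.conf, based on an existing model with extra flags. On an Intel host the nested-virtualization flag is vmx (svm on AMD), so a hypothetical entry might look like:

```
# /etc/pve/virtual-guest/cpu-models.conf
cpu-model: x86-64-v2-AES-nested
    reported-model x86-64-v2-AES
    flags +vmx
```

The VM would then select it with cpu: custom-x86-64-v2-AES-nested in its configuration; nested virtualization would also have to be enabled in the kvm_intel (or kvm_amd) kernel module on the host.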
Stuart
Fiona, et alia:
I presume at this juncture that the current version of Proxmox ships a newer version of ZFS and as such includes the fix for the aforementioned issue?
Stuart
Proxmox support, developers, Martin, et alia:
THANK YOU for another great upgrade.
(Yes, I am yelling it, because everyone at Proxmox deserves to be yelled at in this way, with vigor and appreciation.)
I just conducted an upgrade of my cluster and all is looking good. I also wanted to make sure...
Ballistic,
Interesting indeed!
When I move the pfSense virtual machine to either one of my IBM x3650 servers (M1 or M3) or one of my NUCs (NUC7i7DNHE or NUC7i7BNB), it runs fine; it just does not run on the HP T620.
The only thing I blamed it on was the HP T620 being AMD-based, whereas the other systems...
Dunuin, et alia:
I am still researching it, but it does seem that Btrfs now has code to properly deal with SMR drives. I have to admit, it is going to be a royal pain to move all my data around to reformat my 4x8TB array from ZFS to Btrfs (if that is the solution I end up using), but I suppose...
Since I am using these for backups / archives, perhaps there is a way to tell ZFS to increase the 120-second write timeout, as I am not concerned about how long a write takes. Alternatively, I could try converting the drives to Btrfs, I suppose.
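If the 120 seconds I am referring to is actually the kernel's "task blocked for more than 120 seconds" hung-task warning, then it is a kernel sysctl rather than a ZFS property, and could presumably be raised (the 600-second value below is just an illustration):

```
# Raise the hung-task warning threshold for the running system
sysctl kernel.hung_task_timeout_secs=600

# Persist the setting across reboots
echo 'kernel.hung_task_timeout_secs = 600' > /etc/sysctl.d/99-hung-task.conf
```

That would only quiet the warning, of course; the slow SMR writes themselves would remain.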
Stuart
Dunuin,
As far as I can tell:
The 8TB Seagate drives (model ST8000DM004) are SMR.
The WDC WD80EZZX-11CSGA0 is CMR.
Both of these models, I now realize, are 5400 RPM.
Stuart
Dunuin, et alia:
I misspoke. The 10,000RPM drives are a pool that is connected internally via an IBM/LSI SAS controller. There are then two USB devices: one is a Western Digital 8TB drive, and the other is a four-bay enclosure (OWC) with four 8TB Seagate drives in it. I believe that the WD and the...
Perhaps, but I doubt that is the overarching issue in the present case before us.
Follows hereupon the output of the free command:
$ free -h --giga --committed
               total        used        free      shared  buff/cache   available
Mem:            177G        109G         68G...
Gabriel, Ballistic, et alia:
Whereas a rather significant interval of time has passed since I last posted regarding the issue subject to this epistle, it is worthy of note that I am now running the Proxmox 8.0.4 and pfSense 2.7.0 software releases. Moreover, the OpenWRT...
Proxmox forum members,
I have recently started to see this in my kern.log when trying to copy a large (4TB) file between two USB devices, both formatted with ZFS, under the latest Proxmox (fully updated). The rsync is running from the command line on the Proxmox node, not in a VM. This error has...
Hello all!
Is anyone familiar with configuring GRUB / Linux to obtain a functioning serial console?
I have SOL (Serial over LAN) enabled on my server, whereby I can SSH to the BMC on my server, enter 'console 1', and have access to the console for all UEFI/BIOS...
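In case it is useful context for anyone answering: my understanding of the usual recipe (a sketch, not yet verified here; the unit number and baud rate depend on which UART the BMC exposes, assumed below to be ttyS0 at 115200) is to edit /etc/default/grub along these lines and then run update-grub:

```
# /etc/default/grub (excerpt)
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
```

With console=ttyS0 on the kernel command line, systemd should also spawn a login getty on ttyS0, so the SOL session would cover the GRUB menu, boot messages, and a login prompt.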
Fiona, et alia:
I noticed that no one has acted on or commented on the bug you raised with upstream ZFS. Interesting; I have always seen ZFS as a very active project.
Stuart
Proxmox users, developers, et alia:
Follows hereupon an excerpt from my /var/log/kern.log of what appears to be a problem in the ZFS code. In researching the issue I found some information that seems related, but I am not fully sure yet (see link #1 below). Has anyone seen such behavior before...