Search results

  1. USB Ethernet - 2.5GB

    Forum members, I have been contemplating obtaining a USB 2.5GbE Ethernet dongle recently and have read that some of the Realtek chipsets have presented a problem for Proxmox. Can someone comment on the current state of affairs in this regard? It looks like the chipset that has had issues is the... [see the chipset-identification sketch after these results]
  2. ZFS Bug / array-index-out-of-bounds

    I posted a message on GitHub asking why it has taken almost a year to resolve it, or at least to decide how to resolve it. Stuart
  3. ZFS Bug / array-index-out-of-bounds

    Fiona, Ah, I was not sure to whom it was directed. Inasmuch as it was not directed to you, do you have any suggestions as to how we can try to get the ZFS developers to put some attention on this? Stuart
  4. Considering CEPH

    alexskysilk, I want to first say thank you for engaging in this colloquy, as I do appreciate your perspective and learning more about Ceph. So, let us presume I ascertain the speed of the different disks I have (be they mechanical, SSD, or NVMe); how do I group the different-speed drives to... [see the Ceph device-class sketch after these results]
  5. Considering CEPH

    alexskysilk, Please realize I am not trying to argue your points but to obtain a more nuanced understanding of them and their relation to my environment. Surely, I want to do what is sensible, but this is not an enterprise production environment either (it's a home lab). Well, the nodes have...
  6. Considering CEPH

    Forum members, I am interested in moving to Ceph and am likewise desirous of configuring it intelligently. I have enumerated my concerns below; perhaps forum members can post about their experiences so I can design a deployment that makes sense. 1) I have a mix of SSDs, NVMe, and...
  7. ZFS Bug / array-index-out-of-bounds

    Fiona, I see that, with respect to the bug filed with OpenZFS (https://github.com/openzfs/zfs/issues/15219), there seems to be a question awaiting your reply. Can you kindly look into whether the OpenZFS folks have resolved this issue yet? I am running the latest Proxmox and would like to know if...
  8. Migrating my boot/root and storage

    Proxmox users, et alia: I am contemplating deploying Ceph on my Proxmox nodes (I have 3 nodes, each with 500GB to 2TB SSDs in them); unfortunately, Proxmox was installed on those large drives. Thus, I now wish to obtain some JetFlash (64GB) USB drives to use as boot drives and migrate my boot...
  9. ZFS Bug / array-index-out-of-bounds

    That was my assessment too; thanks for confirming it for me. Well, when the bug hits, it crashes the entire server for me, both on Proxmox and on other Linux machines running ZFS. Oh, no, I meant something else: I meant it might be better (for now) if I just decompress and compress my files again...
  10. Selecting CPU Type x86-64-v2-AES

    Moayad, Well, yes, I understand what the host type does. I was trying to understand how, and with what level of pain, I could add the nested-virtualization flag to the particular CPU type I was looking at. Are you suggesting that there is no way to do that? Thanks for your reply. Stuart
  11. ZFS Bug / array-index-out-of-bounds

    Fiona, It seems that ZFS 2.2.3 has come out. Has the bundled version of zstd been updated therein? Moreover, will Proxmox soon update to ZFS 2.2.3? I presume that if I simply switch my compression from zstd to a different algorithm, this issue is gone, correct? Thanks! Stuart [see the compression sketch after these results]
  12. Selecting CPU Type x86-64-v2-AES

    Proxmox users, developers, et alia: It seems that if one selects the CPU type "x86-64-v2-AES", it does not allow nested virtualization. Is there a way to custom-configure this CPU type such that it will allow that functionality? Stuart [see the custom CPU model sketch after these results]
  13. ZFS Bug / array-index-out-of-bounds

    Fiona, et alia: I presume that, at this juncture, the current version of Proxmox ships a newer version of ZFS and, as such, includes the fix for the aforementioned issue? Stuart [see the version-check sketch after these results]
  14. Proxmox VE 8.1 released!

    Proxmox support, developers, Martin, et alia: THANK YOU for another great upgrade. (Yes, I am yelling it, because everyone at Proxmox deserves to be yelled at in this way, with vigor and appreciation.) I just conducted an upgrade of my cluster and all is looking good. I also wanted to make sure...
  15. pfSense VM - very slow network throughput

    Ballistic, Interesting indeed! When I move the pfSense virtual machine to either one of my IBM x3650 servers (M1 or M3) or one of my NUCs (NUC7i7DNHE or NUC7i7BNB), it runs fine; just not on the HP T620. The only thing I blamed it on was the HP T620 being AMD-based, whereas the other systems...
  16. ZFS Bug / INFO: task txg_sync:1943 blocked for more than 120 seconds.

    Dunuin, et alia: I am still researching it, but it does seem that Btrfs now has code to properly deal with SMR drives. I have to admit, it is going to be a royal pain to move all my data around to reformat my 4x8TB array from ZFS to Btrfs (if that is the solution I end up using), but, I suppose...
  17. ZFS Bug / INFO: task txg_sync:1943 blocked for more than 120 seconds.

    Since I am using these for backups/archives, perhaps there is a way to tell ZFS to increase the 120-second write timeout, as I am not concerned about how long a write takes. Alternatively, I suppose I could try converting the drives to Btrfs. Stuart [see the hung-task timeout sketch after these results]
  18. ZFS Bug / INFO: task txg_sync:1943 blocked for more than 120 seconds.

    Dunuin, As far as I can tell, both of these models are 5400 RPM, I now realize. The 8TB Seagate drives (model ST8000DM004) are SMR; the WDC WD80EZZX-11CSGA0 is CMR. Stuart
  19. ZFS Bug / INFO: task txg_sync:1943 blocked for more than 120 seconds.

    Dunuin, et alia: I misspoke. The 10,000RPM drives are a pool that is connected internally via an IBM/LSI SAS controller. There are then two USB disks: one is a Western Digital 8TB drive, and the other is a 4-drive enclosure (OWC) with four 8TB Seagate drives in it. I believe that the WD and the...
  20. ZFS Bug / INFO: task txg_sync:1943 blocked for more than 120 seconds.

    Perhaps, but I doubt that is the overarching issue in the present case before us. Here follows the output of the free command:

        $ free -h --giga --committed
                       total        used        free      shared  buff/cache   available
        Mem:            177G        109G         68G...
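
Notes and sketches

For result 1: a minimal way to check which chipset a USB 2.5GbE dongle actually carries, run on the Proxmox host. The device ID and interface name below are hypothetical; the commonly discussed 2.5GbE part is Realtek's RTL8156, which the in-kernel r8152 driver services.

    # List USB devices; Realtek's vendor ID is 0bda, so look for a line such as
    # "ID 0bda:8156 Realtek Semiconductor Corp." (the IDs here are illustrative).
    lsusb

    # Confirm which kernel driver bound the resulting NIC; the enx... name is
    # hypothetical -- take the real one from the output of "ip link".
    ethtool -i enx00e04c680001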
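
For results 4-6: Ceph's usual mechanism for grouping drives of different speeds is the device class (hdd, ssd, nvme) plus a CRUSH rule per class, rather than mixing them in one pool. A sketch, assuming a stock Proxmox-managed Ceph cluster; the rule and pool names are made up.

    # Each OSD carries a device class; verify how yours were detected:
    ceph osd tree

    # Create a replicated CRUSH rule restricted to one class, then pin a pool to it:
    ceph osd crush rule create-replicated only-ssd default host ssd
    ceph osd pool set vm-pool crush_rule only-ssd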
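
For result 11 (and the recompression remark in result 9): switching a dataset's compression is a one-line property change, but it applies only to newly written blocks; data already compressed with zstd stays zstd until it is rewritten, which is why recompressing the files comes up at all. The dataset name is hypothetical.

    # Inspect the current algorithm:
    zfs get compression rpool/data

    # New writes use lz4 from here on; existing blocks are untouched:
    zfs set compression=lz4 rpool/data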
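
For results 10 and 12: Proxmox VE does allow user-defined CPU models via /etc/pve/virtual-guest/cpu-models.conf, which is one way to add the nested-virtualization flag (vmx on Intel, svm on AMD) on top of a base model. A sketch only: the custom model name is made up, whether x86-64-v2-AES is accepted as a reported-model should be verified, and nested virtualization must also be enabled in the kvm-intel/kvm-amd module on the host.

    # /etc/pve/virtual-guest/cpu-models.conf (create the file if absent)
    cpu-model: x86-64-v2-AES-nested
        flags +vmx
        reported-model x86-64-v2-AES

A VM would then reference it with the custom- prefix, e.g. qm set 100 --cpu custom-x86-64-v2-AES-nested.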
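
For result 13: confirming which OpenZFS version a Proxmox host actually runs takes one command; the output lines shown are illustrative only.

    # Prints the userland tools version and the loaded kernel-module version:
    zfs version
    #   zfs-2.2.3-pve1
    #   zfs-kmod-2.2.3-pve1

Comparing that version against the OpenZFS release notes for the issue in question answers whether the fix is included.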
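
For result 17: the 120 seconds in "task txg_sync blocked for more than 120 seconds" is the kernel's hung-task watchdog threshold, not a ZFS write deadline, so the knob lives in sysctl rather than in a pool property. A sketch; note that raising it merely quiets the warning and does nothing about the underlying SMR write stalls.

    # Current threshold in seconds (commonly 120):
    cat /proc/sys/kernel/hung_task_timeout_secs

    # Raise it, or set 0 to disable the warning entirely:
    sysctl -w kernel.hung_task_timeout_secs=600

    # Persist across reboots (the file name is arbitrary):
    echo 'kernel.hung_task_timeout_secs = 600' >> /etc/sysctl.d/99-hung-task.conf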
