Search results for query: SMR

  1. B

    Non-constant copy speed

    The exact model of the disks is Toshiba NAS N300 HDWN180. And according to a quick Google search, these should be CMR. That would be okay, right? Yes, at the moment the disks are still connected to the HPE RAID controller. I only deleted the array. I can try connecting the disks directly. No, so...
  2. leesteken

    Non-constant copy speed

    It was just a guess as there was no information to rule SMR or QLC out. Are the drives still connected to the RAID-controller? It is recommended to not use a hardware RAID-controller with ZFS: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_hardware_2 If you are having the same problem...
  3. B

    Non-constant copy speed

    Hi leesteken Thank you very much for your quick reply. I use the NAS N300 4 TB disks from Toshiba. Do you really think it is the combination HDDs + ZFS? Because yesterday I had the same behavior when I made the RAID 10 with my hpe smart array p408i-a sr gen10 controller. And wouldn't it have...
  4. leesteken

    Non-constant copy speed

    What drives (make and model) are you using? QLC flash and SMR harddisks slow down on sustained writes and need idle time to recover. There is more than one thread on this forum that warns against using them for VMs (especially with ZFS).
  5. leesteken

    [SOLVED] Seeking advice for installation - LVM-Thin or ZFS

    ...are trimming, destroyed a whole filesystem for me once (but after a secure erase it was usable again, although empty). HDDs (except for maybe the SMR kind) don't manipulate existing data on the disks and can handle power loss much better. Some SSDs even advertise with "existing data is safe"...
  6. A

    Beste Disk-Konfiguration für Proxmox bei Ugreen NAS

    OK... do you mean something like this, for example? https://www.reichelt.de/8tb-festplatte-toshiba-n300-nas-bulk-hdwg480uzsva-p342654.html?utm_source=psuma&utm_medium=idealo.de&PROVID=2378
  7. W

    Beste Disk-Konfiguration für Proxmox bei Ugreen NAS

    ...the status in the db. VMs on NVMe are great and no problem. HDDs are usually CMR; the high-capacity variants use the (justifiably) unpopular SMR technology (although it has been on the market for some time, so from today's perspective also in smaller capacities), which you should only use for...
  8. leesteken

    Any activity on HDD is become veeeery slow

    Seagate Barracuda st2000dm008 is an SMR drive and might need more idle time to reorganize the drive as it is used more? People have complained here in the past that SMR drives can become very slow. Not sure what to do about it, except letting it run idle for a long time or replace it with a CMR...
  9. news

    Very Slow speed Backing up VM on ZFS

    ...1k = 1024. These blocks must be read, written, and verified, which takes time. The Proxmox Backup Server documentation says to use only SSD drives for the Backup Server, as they have much higher 4k R/W IOPS. Your disks may be slow 5600 RPM? and SMR (Shingled Magnetic Recording). That would be really bad; then get other drives and make it...
  10. E

    Hardware Feedback - Homelab single node

    ...That all with slightly better performance (on the striped setup) and well, lower cost. BTW The WD Blues are very very quiet, but some are SMR, others are not, you need to check...
  11. E

    Hardware Feedback - Homelab single node

    ...nothing special about "NAS" drives other than they are overpriced thanks to the marketing effort and the target demographics. It is true that SMR drives for a RAID deployment are a bad idea. It is also true that any cheap hardware will do, all drives suffer failures and so the redundancy...
  12. leesteken

    Hardware Feedback - Homelab single node

    Enterprise SSDs with PLP give much better IOPS and fsync/s for VMs (and wear much slower): https://forum.proxmox.com/search/7556950/?q=PLP&o=date Some WD Red drives use SMR, which is terrible with ZFS; make sure you get CMR (or Red Plus): https://forum.proxmox.com/search/7556955/?q=SMR&o=date
  13. I

    [SOLVED] Backup suddenly 6 times slower

    Yes. Your external HDDs could be SMR. And ZFS recommends staying below 80% usage, so there is hopefully enough free space (without fragmentation) to work with (but even at 50% you will already start to see a negative performance impact). BTW: you should never use ZFS over USB! I would just use a single...
  14. P

    VM filesystem corruption during backup; ZFS snapshot?

    We use Seagate and WD "enterprise" HDDs (probably rather "prosumer", but they did a very good job for several years now) and Samsung SSDs. I know that this is not super enterprise hardware, but they suited our needs well so far. We use lots of storage (>400TB in a single server), but have...
  15. K

    VM filesystem corruption during backup; ZFS snapshot?

    ...failed to boot normally, dropping me into an initramfs shell needing to fsck the disk. After that, it resumed normally. What kind of disks are you using for the backing storage / ZFS? Consumer-level SSDs and SMR for spinners can be really flaky. Also, what are the backup-job settings for the vm?
  16. Y

    Windows 11 VM IO drops to 0Mbit

    ...limitation! Is there some technical explanation as to why QLC specifically makes it incompatible with ZFS? I have seen some talk about why (HDD) SMR does not work, but not QLC. Thanks for that! I did not know any specifics about ZFS caching. But in my case, is it the sync (?) write between...
  17. Dunuin

    Hardwareempfehlung

    ...as far as writes go (SSDs for read-intensive/mixed/write-intensive workloads). And the "WD Red" HDDs, for example, use Shingled Magnetic Recording (SMR) and are therefore completely unsuitable for ZFS or, in general, any RAID or server workload. With HDDs you then really have to go for "WD Red Plus"...
  18. Dunuin

    To ZFS or not

    ...sync writes) you will kill them pretty fast. Could last for years or be killed in a few weeks. Especially make sure not to buy QLC NAND SSDs or SMR HDDs in case you want to stick with consumer SSDs. Yes, that is one way to do it. You shouldn't fill a ZFS pool too much. Usually you try to...
  19. E

    ZONED STORAGE, f2fs, btrfs - experience?

    I could not find any posts regarding zoned device use on this forum. Be it an HM-SMR (w/bcache?) or NVMe ZNS, has anyone used these setups as it makes perfect sense for a ... well ... hypervisor forum to ask on. Thanks.
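    On Linux, explicitly zoned devices like the HM-SMR and NVMe ZNS hardware mentioned above can be spotted via sysfs. A minimal sketch, assuming a Linux system; note the caveat in the comments: drive-managed SMR disks (the common consumer kind discussed in the other threads) report "none" here, so this only detects host-aware/host-managed zoned devices.

    ```shell
    # List each block device's zoned model from Linux sysfs.
    # "host-aware" / "host-managed" = zoned device (HM/HA-SMR or NVMe ZNS).
    # Caveat: drive-managed SMR disks report "none", so this check
    # does NOT catch consumer DM-SMR drives.
    for z in /sys/block/*/queue/zoned; do
        [ -e "$z" ] || continue              # skip if no block devices match
        dev=${z#/sys/block/}                 # strip leading path component
        dev=${dev%/queue/zoned}              # strip trailing path component
        printf '%s: %s\n' "$dev" "$(cat "$z")"
    done
    ```

    For drive-managed SMR, checking the manufacturer's published CMR/SMR lists for the exact model number (as several posters above did) remains the reliable approach.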
  20. leesteken

    [PVE] High I/O delay when transferring data

    Those drives appear to be Red Plus, which at least do not use SMR. For best performance use (second-hand) enterprise SSDs with PLP. How did you use those old drives with ESXi? If you were happy with the performance, maybe do that again. A ZFS stripe of two mirrors (which is like RAID10) will...