Search results for query: SMR

  1. J

    Hard Drive Conundrum

    I'm pretty sure I saw TGMR, but now I can't find it. I did download the user's manual from Seagate and confirmed it is an SMR drive. It's frustrating that the manufacturers still bury this information. It worked fine as a secondary drive in my main PC, so I guess that's where it stays. I've...
  2. J

    Hard Drive Conundrum

    I think I'm just going to avoid SSDs in this machine altogether. I'm not a fan of SMR, but the drive in question is TGMR. I don't know if this is going to be closer to SMR or CMR. I'm not doing any RAID. There will be zero redundancy in this machine... I know, be ready for disaster. I think...
  3. C

    Hard Drive Conundrum

    I don't think SMR lowers endurance; it's mainly a performance negative. But it typically isn't suited for RAID setups for this reason. Might be OK as single-drive storage, but not if it's part of a RAID. If using an SSD for Proxmox, you probably want either an enterprise or at least something...
  4. J

    Hard Drive Conundrum

    ...I don't. Now I don't want to use it. So I decided to switch it with the 2TB HDD in my main PC. Then I realize I bought that before I knew what SMR was. So I look it up. Data sheet says it's TGMR. What the hell is that? Can't really find any information on it. I do have a 3TB 5400 RPM CMR...
  5. leesteken

    ZFS device fault

    It's most likely SMR if it's an HDD, but it might be CMR; and it might be TLC or QLC if it's an SSD. Either way, it's cheap and probably not suitable for ZFS or any other CoW filesystem. Please show the output of zpool status before and/or after a scrub. It's not just you, but posts like these make...
  6. leesteken

    ZFS device fault

    What kind of fault? What does zpool status actually report (in CODE-tags)? It really depends on read, write or cksum. If you are using QLC or SMR drives, then please search the forum about the issues they cause with ZFS. Even brand new SSDs can be terrible for use with ZFS and can also be broken (or...
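The checks leesteken asks for can be run like this (a minimal sketch; the pool name `tank` is a placeholder for your own pool, and the device path is an example):

```shell
# Hedged sketch of the diagnostics suggested above; substitute your
# own pool name and device. Run as root.
zpool status -v tank        # per-device READ/WRITE/CKSUM error counters
zpool scrub tank            # start a scrub (runs in the background)
zpool status tank           # re-check once the scrub has finished
smartctl -a /dev/sda        # SMART health data for the underlying disk
```

The READ/WRITE columns usually point at a failing device or cable, while growing CKSUM counts on an otherwise healthy disk are the pattern often reported with SMR/QLC drives under ZFS load.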
  7. Z

    Tesla P4 massive performance problems

    ...(7200rpm) HDD 4/5: WD Red 8TB / HGST 8TB enterprise (7200rpm) HDD 6: WD Blue SA510 1TB SATA SSD HDD 7: 1TB WD Blue 2.5in HDD (5400RPM, non-SMR). I am not sure about the pipeline setup with this board beyond which PCIe slots are CPU- and chipset-connected, haha. I have mitigations disabled...
  8. leesteken

    Help Needed to Design Ideal ZFS Setup on PVE

    ...copies of the data and at least one in another place. PBS has remote sync, which is perfect for that. That's also possible (but please don't use SMR or QLC drives). Alternatively, install PBS as a container (instead of a VM) and you can use the PVE storage directly (and avoid ZFS on ZFS). Or...
  9. leesteken

    Help Needed to Design Ideal ZFS Setup on PVE

    Maybe show the output of zpool status for the VMS pool (in CODE-tags)? Are you using SMR or QLC drives? Maybe we can find out the actual problem with your physical zpool and suggest improvements?
  10. leesteken

    [SOLVED] WD RED degrades when used as OSDs

    SMR is for rotating HDDs; these are TLC-flash SATA SSDs. QLC flash is terrible with sustained writes (it can go down to KB/s), but most TLC-flash consumer drives work "well enough" for homelabs (though I have no experience with Ceph). Try (second-hand) enterprise SSDs with Power Loss Protection (PLP) as...
  11. B

    [SOLVED] WD RED degrades when used as OSDs

    Those are SSDs. There is no SMR on SSDs because there is no magnetic recording, so that can be ruled out as the cause. The thing is that WD Red SSDs are still consumer SSDs even if they are labelled as NAS SSDs. They lack critical features such as PLP, which can increase performance and decrease wear...
  12. B

    [SOLVED] WD RED degrades when used as OSDs

    I found nothing saying they were SMR, but I confirmed their behavior once write IOPS get large. Will replace them.
  13. fabian

    [SOLVED] WD RED degrades when used as OSDs

    If you are really talking about a WD Red 2TB, then it is likely using SMR, which makes it pretty much unusable for hypervisor workloads.
  14. A

    Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    Hmm... I know you don't want to hear it, but this is an SSD drive, and before 18.2.6 it worked without issue. And no, no SMR method; I think that's only for HDDs.
  15. G

    Ceph 19.2.1 2 OSD(s) experiencing slow operations in BlueStore

    ...in 120ms, which is long even for spinning hard drives; with a spinning drive you would expect <20ms +/- network latency of ~1-2ms. Are you using SMR drives? These seem to be mostly around the time you are rebuilding an OSD, which can indeed put very high load on both drives and network...
  16. Z

    Should I install Proxmox on NVMe SSD or HDD?

    It bears stating that for most users in a homelab, all these logs are pretty unimportant. There unfortunately isn't any control in the UI to turn off persistent logging, but it's not incredibly difficult to do so from the command line. That might make this decision easier for a home user. Where...
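One way to do what Z describes from the command line is to switch systemd-journald to in-memory storage. This is a hedged sketch assuming a systemd-based install such as Proxmox VE; `Storage=volatile` is a standard journald option, not a Proxmox-specific setting, and the drop-in filename is arbitrary:

```shell
# Sketch: keep the systemd journal in RAM only (logs are lost on reboot).
# Assumes a systemd-journald based system such as Proxmox VE; run as root.
mkdir -p /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-volatile.conf <<'EOF'
[Journal]
Storage=volatile
SystemMaxUse=64M
EOF
systemctl restart systemd-journald
```

Note this covers only the journal; Proxmox VE also writes RRD graph data and cluster state to disk, so this reduces rather than eliminates writes to the boot drive.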
  17. leesteken

    Should I install Proxmox on NVMe SSD or HDD?

    Proxmox VE itself runs fine from an HDD (but don't use SMR), as it writes much more (logs, graphs) than it reads. Put (most of) your VMs and containers on (enterprise) SSDs, as they compete for IOPS.
  18. leesteken

    Install Proxmox as a second operating system on boot drive

    ...good endurance like an enterprise SSD with PLP for your VMs. Proxmox VE itself can run fine from an HDD (just make sure it is CMR and not the new SMR kind). Proxmox logs a lot, which is not great for some SSDs. So if you already have an old HDD lying around, that might be perfect to install...
  19. B

    Issue setting up no-subscription repositories for Proxmox Backup Server on Ubuntu 24.04

    ...to keep it up to date. I am a bit nervous to install it on my homelab NAS (no critical data, but to be honest it's running a RAID5 on some SMR drives and it's slow as anything to rebuild the array) for fear of having to remake it. If I do end up bricking things, I do plan to explore the hardware...
  20. L

    [SOLVED] Extremely slow backups (PVE, not PBS)

    ...960GB datacenter SSDs in a RAID1 mirror. The backup target is a Synology running 2x WD Red Pro 22TB NAS 7200 RPM HDDs. The HDDs are CMR, not SMR, and they are mounted to Proxmox via SMB. This is a backup from March 24th. Note the time running and size. Only 41 mins. Here's a backup from...