Seagate Exos Enterprise 6TB HDD issues with PBS

techie1443

May 14, 2022
Hi,

I am having an issue with a brand-new Seagate Exos Enterprise 6TB 7200RPM SATA hard drive on Proxmox Backup Server. I formatted the drive as EXT4 and added it as a datastore. PBS sometimes throws errors when backing up all my PVE VMs to this datastore. I have a second PBS with an old 500GB Western Digital SATA HDD (same EXT4 file system) and all my PVE VMs back up to it with no problems at all. I contacted Seagate support and they had me run all the SeaTools HDD tests; both the short and long tests completed successfully. They then asked me to put the Exos drive in a Windows 10/11 PC and run backups with Seagate DiscWizard to see if there were any errors. In Windows 11 I backed up 3TB of data to the Exos drive with DiscWizard without any problems. At that point Seagate support said it must be an issue with either PBS or the Debian OS being unable to write backups to the drive. They said they are unfamiliar with Linux and PBS and recommended I ask Proxmox for help.

Has anyone experienced issues similar to mine?

Some stuff I read:
Seagate EXOS on Unix

About Linux 4K Sector Hard Drives

Here are some screenshots:
backup errors.png

backup errors2.png
 
Sounds more like your PVE is losing the network connection to your PBS, so the backups fail because the PBS is unavailable for a short time?
 
Sounds more like your PVE is losing the network connection to your PBS, so the backups fail because the PBS is unavailable for a short time?
That did come to mind, so I tested my network switch. I added a second "PBS2" with the Western Digital drive I mentioned earlier, on the same network switch, and PVE backs up to PBS2 with no problems at all. I also swapped the network ports between PBS1 and PBS2: the same backup errors happen with PBS1 (Seagate HDD as datastore) while PBS2 (Western Digital HDD as datastore) has no problems at all.

At one time I thought the Seagate HDD was going to sleep in PBS, so I tried disabling the spindown timer with: hdparm -S 0 /dev/sdX
(note the uppercase -S sets the standby timeout; lowercase -s controls the power-up-in-standby flag instead) and that did nothing.

Another thing I wanted to mention on the same network switch I have a Synology NAS NFS Share, and PVE is able to backup to this NFS network share with no problem.

Also, I tried the same Seagate Exos HDD on another computer (with PBS installed) and the same problem happens. Furthermore, when I browse the Seagate HDD datastore contents in the PBS1 web GUI, it sometimes errors out, and at other times it works but is slow. This does not happen on PBS2 when I browse the Western Digital HDD datastore contents.
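One way to take PBS out of the equation entirely is a write/read round trip on the suspect drive's mountpoint. This is a hedged sketch; the MOUNT path is a placeholder for your actual datastore mountpoint, and stalls or mismatches here would point at the disk/controller/cabling rather than at PBS:

```shell
#!/bin/sh
# Round-trip test on the suspect drive, outside of PBS: write random
# data to a file on its mountpoint, checksum it, re-read and compare.
# MOUNT is a placeholder -- set it to your datastore mountpoint,
# e.g. /mnt/datastore/8TBHDD.
MOUNT="${MOUNT:-/tmp}"
F="$MOUNT/pbs_disk_check.bin"
dd if=/dev/urandom of="$F" bs=1M count=64 conv=fsync 2>/dev/null
SUM1=$(sha256sum "$F" | cut -d' ' -f1)
# For a true cold read, drop the page cache first (needs root):
#   echo 3 > /proc/sys/vm/drop_caches
SUM2=$(sha256sum "$F" | cut -d' ' -f1)
[ "$SUM1" = "$SUM2" ] && echo "checksums match" || echo "CHECKSUM MISMATCH"
rm -f "$F"
```

If this stalls for seconds at a time or mismatches, the problem is below the PBS layer.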
 
So I just purchased a Seagate 8TB IronWolf NAS 7200RPM hard drive with 256MB cache, and the same problem is happening: PVE has intermittent problems backing up to PBS, and the same intermittent communication errors occur when I browse the datastore directly in the PBS web GUI.

I formatted the new 8TB single HDD as EXT4.

So it seems like PBS can work with old hard drives, but not new ones.

Can anyone please advise how to properly configure a new HDD in PBS?

With my 6TB HDD I tried XFS and it failed to format; the same happened with a single-disk ZFS format. I'm stuck using EXT4 with intermittent backup failures.

If new HDDs can't be used in PBS, please advise what HDDs can be used.
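For what it's worth, a common reason mkfs.xfs or zpool create refuse a drive is leftover partition or filesystem signatures from previous use; wipefs can list and clear them. A hedged sketch, demonstrated on a scratch image file so nothing real is touched (on the actual drive you'd target /dev/sdX, which destroys everything on it):

```shell
#!/bin/sh
# List and clear stale filesystem/RAID signatures that can make
# mkfs.xfs or zpool create fail on a reused disk.
# IMG is a scratch image file for demonstration; on real hardware
# you would pass the drive itself, e.g. /dev/sdX -- destructive!
IMG=/tmp/wipe_demo.img
truncate -s 16M "$IMG"
wipefs "$IMG"        # list any signatures found (none on a fresh image)
wipefs -a "$IMG"     # clear all signatures
rm -f "$IMG"
```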

As shown below in the PBS web GUI with the new EXT4-formatted HDD:
pbs_com_error.png
As shown below, it sometimes gets stuck on this screen for a long time. BTW, this does not happen with my PBS2 and its 5-year-old Western Digital 500GB HDD.
error.png
 
Under the PBS "Recommended Server System Requirements" I see:
  • Backup storage:
    • Use only SSDs, for best results
    • If HDDs are used: Using a metadata cache is highly recommended, for example, add a ZFS special device mirror.

How do I add a metadata cache to a single new HDD?
 
You already posted the link yourself: https://pbs.proxmox.com/docs/sysadmin.html#local-zfs-special-device

To create a pool with special device and RAID-1:
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> special mirror <device3> <device4>

Adding a special device to an existing pool with RAID-1:
# zpool add <pool> special mirror <device1> <device2>
But then you need an SSD as the special device.

So for a single HDD and SSD without any redundancy it would be: zpool create -f -o ashift=12 YourNewPoolName /dev/disk/by-id/YourEmptyHDD special /dev/disk/by-id/YourEmptySSD

And HDDs alone will work too, it's just incredibly slow, especially when doing verify and GC tasks.
 
And HDDs alone will work too, it's just incredibly slow, especially when doing verify and GC tasks.
I had quite good results with just adding 2 SSDs as L2ARC. That sped up GC and other tasks by orders of magnitude. The only problem left is the not-so-perfect read/write rate to the tape device (around 150 MB/s, with peaks up to 180 MB/s and some dips).

The L2ARC, in contrast to a special device, can simply be removed without any problems because it's just a cache.
 
You already posted the link yourself: https://pbs.proxmox.com/docs/sysadmin.html#local-zfs-special-device


But then you need an SSD as the special device.

So for a single HDD and SSD without any redundancy it would be: zpool create -f -o ashift=12 YourNewPoolName /dev/disk/by-id/YourEmptyHDD special /dev/disk/by-id/YourEmptySSD

And HDDs alone will work too, it's just incredibly slow, especially when doing verify and GC tasks.
Yes, I posted the link but didn't understand how to use it; I'm a beginner with some intermediate Linux skills. :)

I'm all for using an SSD as a special device (for PBS performance) and a single HDD (I don't need redundancy right now). However, I already have PBS installed on the existing SATA SSD and my datastore on the HDD.

Is it possible to start fresh (wipe the existing SATA SSD and HDD) and use the SATA SSD both for PBS itself and as a special device for cache, with the HDD for the datastore?

This is what my existing SATA SSD and HDD look like:

root@pbs:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 7.3T 0 disk
└─sda1 8:1 0 7.3T 0 part /mnt/datastore/8TBHDD
sdb 8:16 0 119.2G 0 disk
├─sdb1 8:17 0 1007K 0 part
├─sdb2 8:18 0 512M 0 part
└─sdb3 8:19 0 118.7G 0 part
├─pbs-swap 253:0 0 8G 0 lvm [SWAP]
└─pbs-root 253:1 0 96G 0 lvm /


root@pbs:~# fdisk -l

Disk /dev/sda: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: ST8000VN004-2M21
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: F516A911-85AF-428B-B4E1-F8EE33C81390

Device Start End Sectors Size Type
/dev/sda1 2048 15628053134 15628051087 7.3T Linux filesystem


Disk /dev/sdb: 119.24 GiB, 128035676160 bytes, 250069680 sectors
Disk model: SATA SSD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 956A9635-BBBA-4F5C-9BED-BE1E69952835

Device Start End Sectors Size Type
/dev/sdb1 34 2047 2014 1007K BIOS boot
/dev/sdb2 2048 1050623 1048576 512M EFI System
/dev/sdb3 1050624 250069646 249019023 118.7G Linux LVM


Disk /dev/mapper/pbs-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pbs-root: 95.99 GiB, 103070826496 bytes, 201310208 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

root@pbs:~#
 
I was curious about the performance of my Seagate 8TB IronWolf NAS 7200RPM HDD with 256MB cache, so I benchmarked it and this is how it looks:

root@pbs:~# proxmox-backup-client benchmark --repository 8TBHDD

Uploaded 1263 chunks in 5 seconds.
Time per request: 3967 microseconds.
TLS speed: 1057.07 MB/s
SHA256 speed: 1925.69 MB/s
Compression speed: 600.26 MB/s
Decompress speed: 874.73 MB/s
AES256/GCM speed: 2249.11 MB/s
Verify speed: 601.16 MB/s
┌───────────────────────────────────┬────────────────────┐
│ Name │ Value │
╞═══════════════════════════════════╪════════════════════╡
│ TLS (maximal backup upload speed) │ 1057.07 MB/s (86%) │
├───────────────────────────────────┼────────────────────┤
│ SHA256 checksum computation speed │ 1925.69 MB/s (95%) │
├───────────────────────────────────┼────────────────────┤
│ ZStd level 1 compression speed │ 600.26 MB/s (80%) │
├───────────────────────────────────┼────────────────────┤
│ ZStd level 1 decompression speed │ 874.73 MB/s (73%) │
├───────────────────────────────────┼────────────────────┤
│ Chunk verification speed │ 601.16 MB/s (79%) │
├───────────────────────────────────┼────────────────────┤
│ AES256 GCM encryption speed │ 2249.11 MB/s (62%) │
└───────────────────────────────────┴────────────────────┘
root@pbs:~#
 
That benchmark doesn't touch the disk at all; it just gives you a theoretical upper bound based on network and CPU performance.
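To actually measure the disk, a rough sketch is to time a synced sequential write to a file on the datastore mount. The TARGET path is an assumption; point it at your datastore, and a proper tool like fio would give more detailed numbers if available:

```shell
#!/bin/sh
# Rough sequential write throughput of the datastore disk itself.
# TARGET is a placeholder -- point it at a file on the datastore
# mount, e.g. /mnt/datastore/8TBHDD/speedtest.bin.
TARGET="${TARGET:-/tmp/speedtest.bin}"
# conv=fsync makes dd flush to disk before reporting, so the rate
# reflects the drive rather than the page cache.
dd if=/dev/zero of="$TARGET" bs=4M count=256 conv=fsync
rm -f "$TARGET"
```

A healthy 7200RPM HDD should report somewhere in the low hundreds of MB/s sequential; drastically lower or wildly varying numbers would corroborate a disk-level problem.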
 
The L2ARC, in contrast to a special device, can simply be removed without any problems because it's just a cache.
But it has the downside that it can only improve read IOPS and can't accelerate write IOPS. A special device improves both reads and writes. Adding a removable SLOG wouldn't do the same either, as a SLOG only improves sync writes, while a special device improves async + sync writes.

I'm all for using SSD as special device cache (for PBS performance) and single HDD (don’t need redundancy now). However since I already have installed PBS on existing SATA SSD and have my datastore on the HDD.

Is it possible to start from fresh (wipe out existing SATA SSD & HDD) and use SATA SSD as both PBS and as a Special device for cache?
Yes, for that you would need to shrink your existing sdb3 partition by up to 87GB so you have some unallocated space on the SSD (something like 40GB should be fine). I'm not sure if the PBS installer has the same options as the PVE installer; if it does, you could also install a fresh PBS and set the "hdsize" option to 32GB for ZFS/LVM (see here: https://pve.proxmox.com/wiki/Installation) so PBS will only use 32GB of that SSD.
With that unallocated space you could then use the CLI to manually create an sdb4 partition.

Then you could create a ZFS pool with your entire HDD for data and the SSD partition (sdb4) for storing metadata as a special device with a command like this: zpool create -f -o ashift=12 YourNewPoolName /dev/disk/by-id/YourEmptyHDD special /dev/disk/by-id/YourSSD-part4
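The partitioning step could look roughly like this. It is sketched on a scratch image file so no real disk is modified; on the actual SSD you would run sfdisk against the drive itself (after backing everything up), and the sizes here are illustrative assumptions:

```shell
#!/bin/sh
# Illustration of adding a partition in unallocated space -- the same
# step you'd do on the SSD to get a partition for the ZFS special
# device. Done on a scratch image so nothing real is touched.
IMG=/tmp/ssd_demo.img
truncate -s 1G "$IMG"
# Create a GPT with one 512M partition, leaving the rest unallocated:
printf 'label: gpt\n,512M,L\n' | sfdisk -q "$IMG"
# Append a new partition filling the remaining free space:
printf ',,L\n' | sfdisk -q --append "$IMG"
sfdisk -d "$IMG"   # dump the resulting layout
rm -f "$IMG"
```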
 
But it has the downside that it can only improve read IOPS and can't accelerate write IOPS. A special device improves both reads and writes. Adding a removable SLOG wouldn't do the same either, as a SLOG only improves sync writes, while a special device improves async + sync writes.


Yes, for that you would need to shrink your existing sdb3 partition by up to 87GB so you have some unallocated space on the SSD (something like 40GB should be fine). I'm not sure if the PBS installer has the same options as the PVE installer; if it does, you could also install a fresh PBS and set the "hdsize" option to 32GB for ZFS/LVM (see here: https://pve.proxmox.com/wiki/Installation) so PBS will only use 32GB of that SSD.
With that unallocated space you could then use the CLI to manually create an sdb4 partition.

Then you could create a ZFS pool with your entire HDD for data and the SSD partition (sdb4) for storing metadata as a special device with a command like this: zpool create -f -o ashift=12 YourNewPoolName /dev/disk/by-id/YourEmptyHDD special /dev/disk/by-id/YourSSD-part4
I had no updates on this because I had some emergencies to deal with. Now that I have some time, I noticed that I can't even create a new ZFS pool with the command you mentioned. PBS gives me delays just viewing my unused Seagate 8TB HDD! I got frustrated to the point that I returned that drive and bought a new SATA 2TB SSD, and now PBS gives me delays on that as well! I see PBS released 2.2-1; even that version does nothing for the SSD problem I am having. At this point I've quit using PBS with my SSD. As a last effort, I may try to install PBS as a VM on PVE and see if that works. PVE likely handles any HDD or SSD better than what I have tried.
 
I had no updates on this because I had some emergencies to deal with. Now that I have some time, I noticed that I can't even create a new ZFS pool with the command you mentioned. PBS gives me delays just viewing my unused Seagate 8TB HDD! I got frustrated to the point that I returned that drive and bought a new SATA 2TB SSD, and now PBS gives me delays on that as well! I see PBS released 2.2-1; even that version does nothing for the SSD problem I am having. At this point I've quit using PBS with my SSD. As a last effort, I may try to install PBS as a VM on PVE and see if that works. PVE likely handles any HDD or SSD better than what I have tried.
This sounds like something else is wrong that has nothing to do with the disks. Can you post a bit of the journal while it's timing out, e.g. from the last boot with 'journalctl -b'?
Also, what kind of hardware is the PBS running on? CPU/mainboard/controller/memory/NIC/etc.
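When grabbing the journal, filtering the kernel messages for disk-related errors narrows things down quickly. A hedged one-liner; the grep pattern is just a guess at typical ATA/I/O error strings and may need widening:

```shell
# Kernel messages from the current boot, filtered for typical
# disk-error strings (pattern is an assumption; adjust as needed):
journalctl -b -k | grep -Ei 'ata[0-9]+|I/O error|blk_update_request|timeout|reset'
```

Repeated "failed command" or link-reset lines around the backup timestamps would point at the drive, cable, or controller rather than PBS.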
 
