Hello,
I rented a blade server with two 3.5" 4 TB SATA disks and added both as a ZFS mirror. No problems were seen initially, but soon one of the drives failed (physically), so the hosting company replaced it promptly and I had the pool resilvered. After 3 days the second disk died physically, and again I had it replaced. Since then the disks keep dying roughly every 2-3 days (and the hosting keeps replacing them, they are good guys), and they once even replaced the whole server in the hope that the controller might be faulty. Nothing has helped so far.
These drives are ST4000NM002A (see https://www.seagate.com/enterprise-storage/exos-drives/exos-e-drives/exos-7e8/ ) - that is, Exos 7E8 4TB 512e SATA. These are not home-grade disks, so 3 days is far too short a lifetime for them to withstand (and the I/O load is quite low, too).
I have only one VM on that PVE server; it is used for web development, so the load is small, and the disks appear to die at night, when the backup is in progress. The backup is set to run from rpool to the "local" (/var/lib/vz) directory, so it reads from and writes to the same pool physically.
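For reference, the backup target is just the default directory storage in /etc/pve/storage.cfg, roughly like this (the exact content types on my setup may differ slightly):

    dir: local
            path /var/lib/vz
            content backup,iso,vztmpl

so every vzdump run reads the VM disk from rpool and writes the archive back onto the same pair of disks.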
I doubt ZFS is capable of killing disks in any way, nor can these disks kill themselves with vibration or anything like that. The server itself is this one: https://www.supermicro.com/en/products/system/3U/5039/SYS-5039MC-H12TRF.cfm (SuperServer 5039MC-H12TRF); it is provided by the hosting company and I see nothing suspicious about this choice. The CPU is a Xeon E-2288G, a new and promising one (https://ark.intel.com/content/www/u...eon-e-2288g-processor-16m-cache-3-70-ghz.html).
Please advise how I can protect the disks going forward and make the server work without killing them!
				
ZFS did its best to predict the problems and warned me each time. First there are some checksum errors on one drive, then the errors reach the "too many" level so ZFS stops using that disk (at this point the mirror is degraded and we run on a single disk, but that remaining disk runs fine, which is strange indeed). Then, in maybe 8-10 hours, the server hangs (there is no hot-swap; it is a blade with simple Intel direct-to-motherboard SATA3 ports), and after I reboot it via IPMI the disk turns out to be dead.
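In case anyone wants to suggest deeper diagnostics: the state above can be inspected with something like the following (sdb is just an example device name, and smartmontools is assumed to be installed for the second command):

    zpool status -v rpool
    smartctl -a /dev/sdb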