EchoGecko795

joined 1 year ago
[–] [email protected] 1 points 1 year ago (3 children)

modinfo zfs | grep version

To quickly get the installed ZFS version.
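On OpenZFS 0.8 and newer, the zfs userland tool can also report this directly:

zfs version     # prints both the userland tools version and the loaded kernel module version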

[–] [email protected] 1 points 1 year ago

My average cost is around $11 per usable TB, but most of my drives are smaller used SATA/SAS drives. My main pools come out to about 450TB of usable storage.

Edit: copy/paste from the post I never got around to making.

Main pools (2023) are on 24/7/365 and draw 360-420 watts.

I use 3 UPS units to protect them.

| Name | Usable TB | Layout | vdevs | Drives per vdev | Drive size | Estimated total cost | Content | Notes |
|---|---|---|---|---|---|---|---|---|
| Main | 65.90TB | RAIDz2 | 1 | 8 | 12TB | $960 | Content | New drives |
| Main | 76.50TB | RAIDz2 | 1 | 8 | 14TB | $1200 | Content | New drives |
| Main | 87.86TB | RAIDz2 | 1 | 8 | 16TB | $1120 | Content | Reconditioned drives |
| Backup0 | 176.91TB | RAIDz2 | 2 | 12 | 10TB | $1500 | Backup | Used + new drives |
| Temp0 | 12TB | EXT4 | 1 | 1 | 12TB | $120 | Temp | New drive |

Main pool: 261.26TB. This is all one pool, but I broke it up in the list since I use three different drive sizes.

Total usable TB: 450.17TB

Total cost: $4900 / 450.17TB = $10.88 per usable TB; not bad considering how many of the drives were purchased new.

[–] [email protected] 1 points 1 year ago (2 children)

I use a few hundred drives as cold-storage backup pools. I try to get to them twice a year, but I always test them at least once a year, and I lose 2-3 drives every year. Since they are mostly smaller 750GB to 4TB drives in 12-drive RAIDz2 pools, that is not really that bad, and I have never lost data yet.
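Testing a cold ZFS pool typically means importing it, scrubbing it, checking the error counters, and exporting it again. A minimal sketch of that yearly pass, assuming hypothetical pool names and OpenZFS 2.0+ for zpool wait (on older releases, poll zpool status instead):

for pool in backup0 backup1 backup2; do
    zpool import $pool          # attach the cold pool
    zpool scrub $pool           # verify every block against its checksum
    zpool wait -t scrub $pool   # block until the scrub finishes
    zpool status -v $pool       # review the READ/WRITE/CKSUM counters
    zpool export $pool          # detach so the drives go back on the shelf
done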

[–] [email protected] 1 points 1 year ago

I am currently using 80+ 4TB drives as cold storage and backups.

[–] [email protected] 1 points 1 year ago

Here is my over-the-top method.

++++++++++++++++++++++++++++++++++++++++++++++++++++

My testing methodology

This is something I developed to stress both new and used drives so that any issues will surface.
Testing can take anywhere from 4-7 days depending on the hardware. I have a dedicated testing server set up.

I use a server with ECC RAM installed, but if your RAM has been tested with MemTest86+ then you are probably fine.

  1. SMART test, check stats

smartctl -i /dev/sdxx     # identity page: model, serial number, firmware

smartctl -A /dev/sdxx     # attributes: check reallocated/pending/uncorrectable sector counts

smartctl -t long /dev/sdxx     # start the drive's extended self-test
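The long self-test runs inside the drive firmware and can take many hours on large drives. To check progress and the final pass/fail result (standard smartctl flags):

smartctl -l selftest /dev/sdxx     # self-test log: remaining percentage while running, then the result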

  2. badblocks - This is a complete write and read test; it will destroy all data on the drive.

badblocks -b 4096 -c 65535 -wsv /dev/sdxx > $disk.log     # 4096-byte blocks, 65535 blocks per batch; -w destructive write test, -s show progress, -v verbose
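Since badblocks works one drive at a time, a batch of drives is usually tested in parallel, one instance per drive. A rough sketch, assuming hypothetical device names and per-drive log files:

for disk in sdb sdc sdd; do
    badblocks -b 4096 -c 65535 -wsv /dev/$disk > $disk.log 2> $disk.err &     # stdout: bad-block list; stderr: progress
done
wait     # return once every drive has finished all four write/read patterns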

  3. Real-world surface testing: format to ZFS. Yes, you want compression on; I have found checksum errors that having compression off would have missed. (I noticed this completely by accident. I had a drive that would produce checksum errors when it was in a pool, so I pulled it and ran my test with compression off, and it passed just fine. I put it back into the pool and the errors appeared again; the pool had compression on. So I pulled the drive and re-ran my test with compression on, and got checksum errors. I have asked about it, and no one knows why this happens, but it does. This may have been a bug in early versions of ZoL that is no longer present.)

zpool create -f -o ashift=12 -O logbias=throughput -O compress=lz4 -O dedup=off -O atime=off -O xattr=sa TESTR001 /dev/sdxx     # single-disk test pool: 4K sectors, lz4 compression on

zpool export TESTR001

sudo zpool import -d /dev/disk/by-id TESTR001     # re-import using persistent by-id paths instead of sdxx

sudo chmod -R ugo+rw /TESTR001     # let a non-root user write the test files
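Before the fill test, it is worth confirming the pool imported cleanly and compression is actually active:

zfs get compression TESTR001     # should report lz4
zpool status -v TESTR001     # all READ/WRITE/CKSUM counters should be zero at this point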

  4. Fill test using F3 + 5) ZFS scrub to check for any read, write, or checksum errors.

sudo f3write /TESTR001 && f3read /TESTR001 && zpool scrub TESTR001     # fill the pool with test files, read them back, then start a scrub
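Note that zpool scrub only starts the scrub and returns immediately, so the result has to be checked once it finishes. Assuming OpenZFS 2.0+ for zpool wait (on older releases, poll zpool status instead):

zpool wait -t scrub TESTR001     # block until the scrub completes
zpool status -v TESTR001     # any nonzero READ/WRITE/CKSUM count fails the drive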

If everything passes, the drive goes into my good pile; if something fails, I contact the seller to get a partial refund for the drive or a return label to send it back. I record the WWN and serial number of each drive, along with a copy of any test notes.
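One way to capture those identifiers, using standard smartctl and udev output (field labels can vary slightly between SATA and SAS drives):

smartctl -i /dev/sdxx | grep -E 'Serial Number|LU WWN'     # serial number and WWN from the identity page
ls -l /dev/disk/by-id/ | grep wwn     # wwn-0x... symlinks mapped back to sdxx names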

8TB wwn-0x5000cca03bac1768 - Failed, 26 read errors, non-recoverable; drive is unsafe to use.

8TB wwn-0x5000cca03bd38ca8 - Failed, checksum errors, possibly recoverable; drive use is not recommended.

++++++++++++++++++++++++++++++++++++++++++++++++++++