I haven't kept up with spinning rust drives so I had to take a look. Seagate has a couple of 30 TB models now, crazy. Lot of eggs in one basket... and through the same ol' 6 Gbit SATA interface these must be a nightmare to format or otherwise deal with. Impressive nonetheless.
I'm pretty sure that there were such drives more than twenty years ago (not popular though). I have to ask, what's the point today? The average latency goes down at most linearly with the number of actuators. One would need thousands to match SSDs. For anything but pure streaming (archiving), spinning rust seems questionable.
Edit: found it (or at least one)
"MACH.2 is the world’s first multi-actuator hard drive technology, containing two independent actuators that transfer data concurrently."
Btw, you can get refurbished ones for relatively cheap too, ~$350 [0]. I wouldn't put that in an enterprise backup server, but it's a pretty good deal for home storage if you're implementing RAID and backups.
Prices have soared recently because AI eats storage as well as GPUs, but tracking the data hoarder sites can be worthwhile. Seagate sometimes has decent prices on new drives.
Probably a good time to mention systemd automount. This will auto-mount and unmount drives as needed. You save on your energy bill, but the trade-off is that the first read takes longer because the drive needs to mount.
You need two files: the mount file and the automount file. Keep these or something similar as skeleton files somewhere and copy them over as needed.
# /etc/systemd/system/full-path-drive-name.mount
[Unit]
Description=Some description of drive to mount
Documentation=man:systemd.mount(5)
[Mount]
# Find with `lsblk -f`
What=/dev/disk/by-uuid/1abc234d-5efg-hi6j-k7lm-no8p9qrs0ruv
# Mount point; the unit file name must be this path escaped (see `systemd-escape -p --suffix=mount /full/path/drive/name`)
Where=/full/path/drive/name
# https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/sect-using_the_mount_command-mounting-options#sect-Using_the_mount_Command-Mounting-Options
Options=defaults,noatime
# Fails if mounting takes longer than this (change as appropriate)
TimeoutSec=1m
[Install]
# Only matters if you enable the .mount unit directly at boot. See `man systemd.special`
WantedBy=multi-user.target
# /etc/systemd/system/full-path-drive-name.automount
[Unit]
Description=Automount system to complement systemd mount file
Documentation=man:systemd.automount(5)
Conflicts=umount.target
Before=umount.target
[Automount]
Where=/full/path/drive/name
# If not accessed for 15 minutes the filesystem is unmounted so the drive can spin down (change as appropriate)
TimeoutIdleSec=15min
[Install]
WantedBy=local-fs.target
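Then reload systemd and enable just the automount unit (adjust the name to match your files); the .mount unit gets started on demand the first time the path is accessed:
sudo systemctl daemon-reload
sudo systemctl enable --now full-path-drive-name.automount
You generally don't need to enable the .mount unit itself for this setup.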
Late reply, but this gave me a chuckle as a (I guess old) Unix guy. Sun had automount in the late 80s, and AFAIK the autofs/auto.master stuff is largely unchanged (in usage, maybe not in implementation).
If you have one drive it feels like that, but not if you throw 6+2 drives into a RAID6/raidz2. Sure, a full format can take 3 days (at 100 MB/s sustained), but it's not like you are watching it. The real pain is finding viable backup options that don't cost an arm and a leg.
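For scale on the "3 days" (back of the envelope, assuming one of the 30 TB drives from upthread at 100 MB/s sustained; smaller drives scale down accordingly):
# hours to write a 30 TB drive end to end at 100 MB/s
echo $(( 30 * 10**12 / (100 * 10**6) / 3600 ))   # ~83 hours, i.e. ~3.5 days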
SATA 3 can move 500 MB/s, but high-capacity drives typically can't keep up. They are all below 300 MB/s sustained even when shiny new. Look, for example, at the performance numbers quoted in these data sheets [1][2][3][4], all between 248 MiB/s and 272 MiB/s.
Now that's still a lot faster than 100 MB/s. But I have a lot of recertified drives, and while some of them hit the advertised numbers, others have settled at 100 MB/s. You could argue that something is wrong with them, but they are in a RAID and I don't need them to be fast. That's what the SSD cache is for.
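For reference, the interface ceiling (rough math, counting only the 8b/10b line encoding):
# SATA 3: 6 Gbit/s line rate, 8b/10b encoding -> max payload in MB/s
echo $(( 6 * 10**9 * 8 / 10 / 8 / 10**6 ))   # 600; more like ~550 MB/s after protocol overhead
So at 248-272 MiB/s the drives are using less than half the link; the media, not SATA, is the limit.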
I had a 12-disk striped raidz2 array of WD Gold drives that could push 10 Gbit/s over the network, while scrubbing, while running 10 virtual machines, and still had plenty of IO to play with. /shrug
Unlike typical RAID-5/6 parity setups, ZFS/raidz doesn't require a format/parity initialization pass that writes every block of the disk before the pool can be used.
You just need to write the labels (at the start and end of each disk), which also serves as a check that the disk is actually as big as claimed.
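If you want to see that in action, a minimal sketch (hypothetical pool name and device paths, 6+2 disks):
# Pool is usable as soon as this returns; no initialization/resync pass.
sudo zpool create tank raidz2 \
  /dev/disk/by-id/ata-disk1 /dev/disk/by-id/ata-disk2 \
  /dev/disk/by-id/ata-disk3 /dev/disk/by-id/ata-disk4 \
  /dev/disk/by-id/ata-disk5 /dev/disk/by-id/ata-disk6 \
  /dev/disk/by-id/ata-disk7 /dev/disk/by-id/ata-disk8
zpool status tank   # reports ONLINE right away, nothing resilvering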