Haven't hard drives always been cheaper per terabyte than SSDs? The problem was always that they were slow. I think tape may still be cheaper per terabyte than hard drives, but the speed is abysmal.
It's also used for sending huge amounts of data long distances. "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway." That's usually attributed to Andrew S. Tanenbaum, but Wikipedia follows that with "other alleged speakers include..." so take that with a grain of salt. They do note that the first problem in his book on computer networks asks students to calculate the throughput of a Saint Bernard carrying floppy disks.
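For fun, here's a back-of-the-envelope version of that exercise in Python. The payload, speed, and distance are all made-up assumptions, not Tanenbaum's numbers:

    # Sneakernet bandwidth: effective throughput = data carried / travel time.
    # All figures are illustrative assumptions.
    floppies = 1000                       # floppies strapped to the dog (assumed)
    payload_bits = floppies * 1.44e6 * 8  # 1.44MB per floppy
    distance_km, speed_kmh = 10, 15       # assumed route and trotting speed
    seconds = distance_km / speed_kmh * 3600
    print(f"dog: {payload_bits / seconds / 1e6:.1f} Mb/s")      # ~4.8 Mb/s

    tapes = 500                           # LTO-9 cartridges in the wagon (assumed)
    wagon_bits = tapes * 18e12 * 8        # 18TB per cartridge
    print(f"wagon: {wagon_bits / (5 * 3600) / 1e12:.0f} Tb/s")  # ~4 Tb/s over a 5h drive

The wagon never stops being competitive; latency, of course, is measured in hours.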
It's criminal that some computers are still sold with mechanical hard drives, but I'll still be using them in my NAS for years to come. The right technology for the right job.
That's where my theory about when the hard drive market will collapse comes in. A lot of networked storage setups use 4 drives in RAID 10. With SSDs, those can become a 2-drive RAID 1 of double-capacity drives with the same usable space, and it will be faster. That means SSDs can cost up to 2x as much per drive and still eliminate hard drives as a viable option for a very common use case.
That isn't too far away. Your next NAS upgrade cycle might be with SSDs.
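A rough break-even check, with placeholder prices (not market quotes) just to show the shape of the argument:

    # 4x HDD in RAID 10 vs 2x double-capacity SSD in RAID 1, equal usable space.
    # Prices are illustrative assumptions.
    hdd_tb, hdd_eur = 8, 180        # assumed NAS-grade 8TB HDD
    ssd_tb, ssd_eur = 16, 700       # assumed 16TB SSD to match usable capacity

    hdd_usable = 2 * hdd_tb         # RAID 10 keeps half of 4 drives' raw space
    ssd_usable = 1 * ssd_tb         # RAID 1 keeps half of 2 drives' raw space
    assert hdd_usable == ssd_usable

    print(f"HDD array: {4 * hdd_eur} EUR, SSD array: {2 * ssd_eur} EUR")
    # SSDs win on price once one 2x-capacity SSD costs less than two HDDs:
    print(f"break-even SSD price: {2 * hdd_eur} EUR per drive")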
Yes. SSDs are still excellent for small form factors and speed, but for long-term reliable storage in massive volumes, old-fashioned hard drives are second only to tape storage.
Source: I am in charge of four 1.2PB storage clusters, each consisting of 144 10TB Toshiba drives. The systems write their output to 10TB tapes for data delivery.
Are you trying to compile 1GB worth of code, or load 4GB of game assets into memory at startup? Absolutely, they're slow.
Are you serving a compressed 1080p video file from your NAS to your media player over 100Mb/s ethernet? They're more than fast enough.
(Or to put it another way: trying to fit your home media collection on SSDs in your NAS probably isn't smart, since you can get almost 10x the storage for the same price and the bottleneck in that system isn't the HDD.)
You're not going to put a massive production database for a performance-critical system on an HDD, but keeping your historical RAW image files on one "just in case" after you've processed them is probably the smart thing to do.
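The arithmetic backs up the NAS case above; the bitrates below are ballpark assumptions, not measurements:

    # Rough bandwidth comparison; all figures are ballpark assumptions.
    stream_mbps = 15          # typical compressed 1080p bitrate
    link_mbps = 100           # fast ethernet
    hdd_mbps = 150 * 8        # ~150MB/s sequential HDD read = 1200 Mb/s

    print(f"streams per link: {link_mbps // stream_mbps}")  # 6
    print(f"links per HDD:    {hdd_mbps // link_mbps}")     # 12

Even a single spinning disk can feed the network link many times over.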
And that there is the real crime. It's a real shame no one's making a tape drive at a consumer price point. Tapes are a hell of a lot more convenient for backups and archival than the giant weird pile of storage formats we've seen over the years.
So many HDDs are crapping out after about 5 years. Not saying SSDs are better; I haven't used any for bulk storage. But it's starting to feel like a subscription plan: I've been rotating hard drives in my server nearly every year since 2018.
That seems high. Data center drives have an annualized failure rate around 1%, even from the worst manufacturer. Not sure how many drives you have or what your workload is like.
Wendell from Level1Techs really likes the multi-actuator spinning-rust drives. You still wouldn't use one as a boot drive, but they're fast enough to saturate a SATA connection while still being much denser than SSDs. They can hit 500MB/s sequential, so they're plenty fast for large-file access. Most consumers should be using SSDs, but if you're dealing with more than a couple of terabytes, the best solution isn't as straightforward.
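For reference, the SATA ceiling: SATA 3 signals at 6Gb/s but uses 8b/10b line encoding, so usable payload tops out around 600MB/s. A quick check:

    # SATA 3 usable throughput after 8b/10b encoding overhead.
    line_rate_bps = 6e9
    payload_MBps = line_rate_bps * 8 / 10 / 8 / 1e6   # 600 MB/s
    print(f"ceiling: {payload_MBps:.0f} MB/s")
    print(f"500 MB/s dual-actuator drive uses {500 / payload_MBps:.0%} of it")  # 83%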
There's some space occupied by the servo tracks (which align the heads to the tape) in LTO, but if we ignore that...
Current-generation LTO9 has 1035m of 12.65mm wide tape, for 18TB of storage. That's approximately 13.1m², or just under 1.4TB/m².
A 60 minute audio cassette has around 90m of 6.4mm wide tape, or 0.576m². At the same density it could potentially hold roughly 790GB.
DDS (which was data tape in a similar form factor) achieved 160GB in 2009, although there's a lot more tape in one of those cartridges (153m).
Honestly, you'd be better off using the LTO. Because they're single-reel cartridges (the second reel is inside the drive), they can pack a lot more tape into the same volume.
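A quick script to sanity-check the areal-density arithmetic above (figures as quoted in the comment):

    # LTO-9 areal density applied to a compact cassette's tape area.
    lto_area_m2 = 1035 * 0.01265            # ~13.09 m^2
    density_tb_per_m2 = 18 / lto_area_m2    # ~1.37 TB/m^2

    cassette_area_m2 = 90 * 0.0064          # 0.576 m^2
    print(f"{density_tb_per_m2 * cassette_area_m2 * 1000:.0f} GB")  # ~790 GB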
We've done this exercise recently for multi-petabyte enterprise storage systems.
Not going to name brands, but in both cases this is usable capacity (after RAID and hot spares) in a high-availability (multi-controller/cluster) system, including vendor support and power/cooling costs, but not counting a $/RU cost, since we run our own datacenter rather than paying for a colo:
HDD: ~60TiB/RU, ~150W/RU, ~USD$ 30-35/TB/year
Flash: ~250TiB/RU, ~500W/RU, ~USD$ 45-50/TB/year
Note that the total power consumption for ~3.5PB of HDD vs ~5PB of flash is within spitting distance, but the flash system occupies a third of the total rack space doing it.
As this is comparing against QLC flash, the overall system performance (measured in Gbps/TB) is also quite similar, although - despite the QLC - the flash does still have a latency advantage (more so on reads than writes).
So yeah, no. At <1.5× the per-TB cost for a usable system - the cost of one HDD vs one SSD is quite immaterial here - and at >4× the TB-per-RU density, you'd have to have a really good reason to keep buying HDDs. If lowest-possible-price is that reason, then sure.
Reliability is probably higher too: with >300 HDDs needed to build that system, you're going to expect a few failures.
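For anyone wanting to replicate it, here's the comparison arithmetic in one place, using the midpoints of the ranges above (the ~300-drive count and ~1%/yr failure rate come from elsewhere in the thread):

    # Sanity-check of the HDD vs flash comparison using midpoint figures.
    hdd = {"tib_ru": 60, "w_ru": 150, "usd_tb_yr": 32.5, "pib": 3.5}
    ssd = {"tib_ru": 250, "w_ru": 500, "usd_tb_yr": 47.5, "pib": 5.0}

    for name, s in (("HDD", hdd), ("Flash", ssd)):
        rus = s["pib"] * 1024 / s["tib_ru"]
        print(f"{name}: {rus:.0f} RU, {rus * s['w_ru'] / 1000:.1f} kW")
    # HDD: ~60 RU, ~9.0 kW; Flash: ~20 RU, ~10.2 kW -> similar power, 1/3 the space

    print(f"cost ratio:    {ssd['usd_tb_yr'] / hdd['usd_tb_yr']:.2f}x")  # ~1.46x
    print(f"density ratio: {ssd['tib_ru'] / hdd['tib_ru']:.1f}x")        # ~4.2x
    print(f"expected HDD failures/yr: {300 * 0.01:.0f}")                 # ~3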
Not in so much detail, but it's also really hard to define unless you have one specific metric you're trying to hit.
Aside from the included power/cooling costs, we're not (overly) constrained by space in our own datacentre, so there's no strict requirement to minimise physical space other than for our own gratification. With HDD capacities steadily rising, the total possible storage increases as older systems are retired.
The performance of the disk system, when adequately provisioned with RAM and SSD cache, is honestly pretty good too. Assuming the cache tiers are big enough to hold the working set across the entire storage fleet (you could never have just one multi-petabyte system), the abysmal performance of HDDs really doesn't come into it: filesystems like ZFS coalesce random writes into periodic sequential writes, and sequential performance is... adequate.
Not mentioned yet are the support costs, which typically start in the range of 10-15% of the hardware price per year and do eventually curve upward. For one brand we use, the per-terabyte cost bottoms out at 7 years of ownership, then starts to increase again as yearly support costs for older hardware rise. But you always have the option to pay the inflated price and keep it if you're not ready to replace.
And again with the QLC, you're paying for density more than you are for performance. On every fair metric you can imagine aside from the TB/RU density - latency, throughput/capacity, capacity/watt, capacity/dollar - there are a few tens of percent in it at most.
My 8TB Seagate failed a week ago and I was looking into new drives. The cheapest HDDs were around 25 EUR per TB (for the 18TB ones) and the cheapest SSDs were under 50 EUR per TB. No idea where this "7 times cheaper" comes from; maybe from 2015.
I ended up buying a 4TB Crucial MX500 for 208 EUR (barely enough for my data, but with some cleanup it will last a year for sure).
Not only is it faster, it's smaller (fits in the NUC), quieter, and consumes much less electricity. I don't think I will ever buy an HDD again. Maybe for surveillance recording?
HAMR drives, and data center use. Consumer SSDs are made very poorly, and even premium drives like a Samsung Pro won't hold up in a data center environment. Hard drives, on the other hand, are basically all data center versions now.
The new ones were on sale for $270 so around $10-15 per TB.
The best I can find is $40-50 per TB for SSDs. Certainly not 7 times more expensive, but more like 3-5x.
Yea, you can't compare consumer to business. Very different. The article is talking about datacenters, which don't typically rely on consumer-grade products.
Maybe regional differences. I spent 3 days looking last week and haven't found anything under 20 EUR per TB, more like 25 on non-sketchy sites, and that's for new drives; I'd never buy refurbished again. SSDs are similarly priced, around 50 per TB for brand-name ones.
Not that many 18TB SSDs available though. Might (and probably will) change in the future, but today, if you want massive amounts of storage, HDDs are your only reasonable solution (ignoring magnetic tape) unless you really require the read & write speeds of an SSD.
Imagine Backblaze trying to replace their 46,000 16TB HDDs with a few hundred thousand smaller SSDs in their datacenters.
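The scale makes the point on its own; the 8TB SSD size below is an arbitrary assumption:

    # How many SSDs it takes to replace Backblaze's HDD fleet (drive counts
    # per the comment above; SSD capacity is an assumption).
    hdd_count, hdd_tb, ssd_tb = 46_000, 16, 8
    total_pb = hdd_count * hdd_tb / 1000
    print(f"{total_pb:.0f} PB -> {hdd_count * hdd_tb // ssd_tb:,} SSDs")  # 736 PB -> 92,000 SSDs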
This is my thing. I have about 122TB of spinning metal (with the same again as an offsite backup) with SSDs as ZIL and L2ARC. And it's awesome. I think HDDs will genuinely be important for the foreseeable future.
The savings from not buying expensive SSDs pay for a lot of wasted power (20€ a year is not much). Where we use HDDs, we don't care about noise. Durability? We use huge RAID systems with lots of redundancy.
I personally like to swap in new drives after 5 years to avoid failures. So when you find a 16TB SSD for 350€, send me a message.
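That 20€/year figure is about right for one drive; the wattage and electricity price below are assumptions:

    # Yearly electricity cost of one mostly-idle 3.5" HDD; figures assumed.
    watts, eur_per_kwh = 7, 0.33
    kwh = watts * 24 * 365 / 1000
    print(f"{kwh:.0f} kWh -> {kwh * eur_per_kwh:.0f} EUR/yr")  # ~61 kWh -> ~20 EUR/yr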
Don't let your dreams be dreams. I didn't know jack shit about NAS and just built my own with an old PC. I tried TrueNAS but ended up paying for Unraid; it was just easier for my needs.
I work tech support for a NAS company, and the ratio of HDDs to SSDs is roughly 85:15. Sometimes people use SSDs for workloads that need low latency, but in my experience they're most commonly used as a cache for HDDs.
If you're able to get enterprise SSDs, you could get 16TB SSDs... but no clue what minimum order sizes are like for that kind of thing. But if you wanted to use 16TB SSDs instead of making a 100% down payment on a house, that's probably an option.
The last set of NAS drives I bought for my home server were ~$120 for 8TB, and while random access may not quite measure up, I'd put them up against your $50 Inland white-label drive for sustained R/W any day of the week, especially once the SSD's write cache is saturated. That's not even comparing like-for-like -- consumer hard drives using SMR are quite a bit cheaper than the NAS drives I bought, and enterprise-grade Flash storage costs 2-4 times as much as low-end consumer flash.
There's absolutely still a case to be made for mechanical drives in near-line storage, and that's not likely to change for quite a few years yet.
Toshiba's estimates feel reasonable. While the price gap is slowly narrowing even as the performance and form-factor gap widens, it'll certainly be a slow death. The current price ratio would need to invert before it makes sense to drop HDDs entirely. And even then, tape will still be around forever.
With investment in storage tech being diverted away from HDD technologies, I wonder how much further capacity can go. We're already at the point where disks have many platters, and HAMR is finally being delivered after decades of "coming soon". It feels like, much as with processor fabs, we're approaching a wall.
Besides speed, the main problem with spinning-rust hard drives ultimately comes down to reliability: you have to baby them. One bad shock and the read/write head scratches the platter, and then all your data is gone with no way to recover it.
Datacenters usually have redundancy just in case, but now that NAND flash is dirt cheap, the flaws of spinning-rust hard drives are too great to overcome.
Considering that the head hovers mere nanometers above the platter, something as simple as loud noise can cause enough vibration to affect disk performance, so the force needed to permanently damage a disk is really, really small.