71-TiB NAS with twenty-four 4TB drives hasn't had a single drive failure in ten years: owner outlines key approaches to ensure HDD longevity
Turning off your NAS saves on electricity costs and may also extend the life of your hard drives.
In 2014, a PC hardware enthusiast from the Netherlands built a 71-TiB (approximately 78 terabytes) network-attached storage (NAS) server using 24 4TB hard drives. Ten years later, the owner, who goes by Louwrentius, says the NAS is still running and hasn't experienced a single drive failure.
The NAS owner said the 4TB HGST drives have only accumulated about 6,000 hours since their deployment, which translates to roughly 600 hours, or about 25 days, per year. Louwrentius turns the NAS off when it's not in use, which he says is the likely secret to its longevity.
The NAS is only powered on remotely when the owner needs to read or write data: a script switches on the smart power bar the NAS is plugged into. Once the baseboard management controller (BMC) on the NAS's motherboard has booted, the owner uses the Intelligent Platform Management Interface (IPMI) to turn on the NAS itself (although he notes that Wake-on-LAN would work, too). When Louwrentius is finished using the NAS, another script shuts down the server and then cuts power at the wall socket.
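As a rough sketch of that sequence (not Louwrentius's actual scripts), the power cycle could be automated along these lines. The smart power bar endpoint (POWER_BAR_URL), the BMC address, the NAS hostname, and the IPMI credentials below are all hypothetical placeholders; only the ipmitool invocations are standard commands.

```python
# A rough sketch of the power-on/power-off flow described above, not Louwrentius's
# actual scripts. POWER_BAR_URL, BMC_HOST, NAS_HOST, and the IPMI credentials are
# hypothetical placeholders; only the ipmitool invocations are standard commands.
import subprocess
import time
import urllib.request

POWER_BAR_URL = "http://powerbar.local/api/outlet/3"  # hypothetical smart power bar API
BMC_HOST = "192.168.1.50"                             # hypothetical BMC/IPMI address
NAS_HOST = "nas.local"                                # hypothetical NAS hostname
IPMI_USER = "admin"
IPMI_PASS = "secret"

IPMI_BASE = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST, "-U", IPMI_USER, "-P", IPMI_PASS]


def set_outlet(state: str) -> None:
    """Switch the smart power bar outlet on or off via its (assumed) HTTP API."""
    req = urllib.request.Request(POWER_BAR_URL, data=state.encode(), method="PUT")
    urllib.request.urlopen(req, timeout=10)


def wait_for_bmc(timeout: int = 300) -> None:
    """Poll the BMC until it answers an IPMI status query; it boots once wall power returns."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = subprocess.run(IPMI_BASE + ["chassis", "power", "status"], capture_output=True)
        if result.returncode == 0:
            return
        time.sleep(10)
    raise TimeoutError("BMC did not come up in time")


def power_on() -> None:
    set_outlet("on")                                   # 1. energize the wall socket
    wait_for_bmc()                                     # 2. wait for the BMC to finish booting
    subprocess.run(IPMI_BASE + ["chassis", "power", "on"], check=True)  # 3. start the server


def power_off() -> None:
    # Shut the OS down cleanly first, then cut power at the wall.
    subprocess.run(["ssh", NAS_HOST, "sudo", "poweroff"], check=False)
    time.sleep(120)                                    # crude wait; a real script would poll
    set_outlet("off")
```

Wake-on-LAN would replace the IPMI power-on step with a single magic packet, which is the alternative the owner mentions.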
Although the enthusiast's primary reason for this seemingly complicated bootup process is to save on energy consumption, it appears to have had the side effect of prolonging the life of his hard drives. Given that most hard drives are rated to last only three to five years, the 10-year run Louwrentius has gotten out of this system means he either got lucky with his drive choices or is doing something right. And even if the 24 installed drives are simply a lucky batch, Louwrentius also says a previous system, which used 20 1TB Samsung hard drives, didn't experience any drive failures across its roughly five-year lifespan.
The only hardware replacement Louwrentius has made was the motherboard, which failed in a way that made it impossible to access the BIOS. Fortunately, he was able to find a replacement board on eBay, keeping the NAS storing data for another day.
While we cannot say with 100% certainty that turning off the HDDs is the sole reason for the longevity of these drives, it does seem to have some impact. Even so, it's a good idea for users to occasionally check the health of their storage devices, or at least copy their backups onto new drives every few years. That way, even if one of the drives fails because of age or deterioration, you'd still have a copy of your most important files, a lesson the music industry is learning now that its archival hard drives are showing a 20% failure rate even when stored properly.
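On the "check the health of their storage devices" point, one low-effort option is a periodic SMART query. The sketch below is a minimal example, assuming smartmontools (smartctl) is installed and the device paths in DRIVES are adjusted to your system; it simply flags any drive whose overall SMART health self-assessment doesn't pass.

```python
# Minimal sketch of a periodic drive-health check using smartmontools' smartctl.
# Assumes smartctl is installed and that the paths in DRIVES match your system.
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb"]  # adjust to your drives


def drive_healthy(device: str) -> bool:
    """Return True if smartctl's overall health self-assessment passes.
    ATA drives report 'PASSED'; some SAS/SCSI drives report 'OK' instead."""
    result = subprocess.run(["smartctl", "-H", device], capture_output=True, text=True)
    return "PASSED" in result.stdout or "OK" in result.stdout


if __name__ == "__main__":
    for drive in DRIVES:
        print(f"{drive}: {'healthy' if drive_healthy(drive) else 'needs attention'}")
```

Run from cron once a month or so, a check like this can flag a degrading drive before it fails outright. It's not a guarantee, though, so refreshing backups onto newer drives every few years remains the real safety net.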
Jowi Morales is a tech enthusiast with years of experience working in the industry. He has been writing for several tech publications since 2021, covering tech hardware and consumer electronics.
bit_user
The article said:
"The NAS owner said the 4TB HGST drives have only accumulated about 6,000 hours since their deployment, which translates to roughly 600 hours, or about 25 days, per year. Louwrentius turns the NAS off when it's not in use, which he says is the likely secret to its longevity."
First, HGST drives of this vintage were really good.
Second, I had a fileserver with 5x 1 TB WD Black HDDs that also lasted over 10 years with zero unrecoverable sectors on any of the drives. As in his case, I turned it off when not in use. I used it mostly for backups and it probably had a similar number of hours as his.
Finally, I've seen machines at work with HDDs in continuous service for more than 10 years. I think one of them was even running for more than 15 years! So, it's possible to have old hard drives last that long, even when in continuous use!
das_stig
The Hitachi drives in my servers were all manufactured over 10 years ago, and since I purchased them off eBay as ex-DC drives they have been online 24/7 for nearly 2 years without any issues. The call centre I used to work in doing support had old HP desktops with drives 15+ years old, still running until we refreshed to W10. As bit_user said, drives back in the day were just better quality.
ex_bubblehead
He should consider himself lucky, as most failures actually occur during power-up, when current draw is at its maximum; this is when things like spindle motors fail.
konky
Why mix units here? Using TiB and then having to explain the size difference from TB is foolish. Are you just trying to show off that you know the difference?
thestryker
So, less than a year of powered-on time, which is not exactly how the vast majority of people would use a NAS. I'd be more curious about the number of times the drives were powered up than the length of time they've been installed for.
With my previous server box (24/7 operation), I had one drive from the original installation actually fail. In the second set of drives, I had a failing drive that I replaced before it died. I don't consider either one a very big problem, as the box was running 8 drives in RAID 6, so it was protected from drive failure. This would have been over a period of about 11 years.
USAFRet
Optimizing drive life vs. usability of the NAS.
My NAS is on 24/7.
The 12 drives inside it or attached to it are between 4 and 15 years old. Various sizes and makes.
Only one has failed, an 8TB Toshiba Enterprise. 7 months old at the time of death. Its 4 year old replacement is just fine.
It is ON 24/7 because it is the movie and music repository, receives nightly backups from the other house systems, and takes in 24/7 video from the house security cams.
bit_user
USAFRet said:
"Only one has failed, an 8TB Toshiba Enterprise. 7 months old at the time of death. Its 4 year old replacement is just fine."
When I upgraded my file server to 4 TB drives, one of the drives hit an unrecoverable sector immediately after RAID initialization. I think it was during the first consistency check I ran. That was a WD Gold drive, apparently designed by the WD team prior to the merger with HGST completing.
I replaced it with another 4 TB WD Gold drive that was designed by HGST and not only was it faster and cooler, but also quieter! I assume the speed difference was due to it containing fewer, higher-density platters, since all drives had the same capacity and were 7.2 kRPM. It could complete a self-check in 9 hours, whereas the older, non-HGST drives took 11 hours. -
USAFRet
bit_user said:
"When I upgraded my file server to 4 TB drives, one of the drives hit an unrecoverable sector immediately after RAID initialization. ..."
That 8TB Tosh went from 0 to 14k+ bad sectors in less than a week.
RMA. -
bit_user
USAFRet said:
"That 8TB Tosh went from 0 to 14k+ bad sectors in less than a week. RMA."
I never got around to RMA'ing mine. I had waited too long to even do the upgrade to avail myself of Newegg's 30-day return period, so that meant returning it to WD and receiving a refurb drive in exchange. Even though I was using RAID-6, I didn't want refurb drives in my array, so I just bought a new replacement with my own money.
I still wanted to RMA it, just to punish them for shipping a bad drive. However, laziness overtook my indignation - a tale as old as time, I'm sure.
NinoPino
ex_bubblehead said:
"He should consider himself lucky, as most failures actually occur during power-up, when current draw is at its maximum; this is when things like spindle motors fail."
This case proves that, at least for these HGST 4TB drives, this is not true.