I’m going to remind you that these fuckers are LOUD, like ROARING LOUD, so they might not be suitable for your living room server.
DON’T TELL ME WHAT I CAN HANDLE!! I HOPE YOU CAN HEAR ME, MY PC’S FANS ARE A LITTLE NOISY!!
OK… what’s this HAMR technology, and how does it compare to the usual CMR/SMR performance differences?
Heat-Assisted Magnetic Recording. It uses a laser to heat the drive platter, allowing for higher areal density and increased capacity.
I am ignorant of the CMR/SMR performance differences.
I fear HAMR sounds like a variation on the idea of using a coarser method to prepare the data to be written, just like SMR. Those kinds of hard drives are good for slow, predictable sequential storage, but they suck at more random writes. They’re good for surveillance storage and things like that, but no good for daily use in a computer.
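(For what it’s worth, the SMR penalty is easy to picture with a toy model. The zone size below is an illustrative number, not any real drive’s geometry: shingled tracks overlap like roof shingles, so an in-place write forces a rewrite of everything after it in the zone, which is why random writes crawl while sequential appends stay fast.)

```python
# Toy illustration of the SMR write penalty: tracks in a shingled zone overlap
# like roof shingles, so modifying one track means rewriting every track after
# it in that zone. ZONE_TRACKS is an illustrative number, not a real spec.

ZONE_TRACKS = 100

def tracks_rewritten(track_index: int) -> int:
    """Rewriting track i forces a rewrite of tracks i..end of the zone."""
    return ZONE_TRACKS - track_index

print(tracks_rewritten(0))   # 100 -> modifying the zone start rewrites the whole zone
print(tracks_rewritten(99))  # 1   -> appending at the end is cheap; sequential is fine
```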
My poor memory is telling me the heat is used to make the bits easier to flip, so you can use a weaker magnetic field that only affects a smaller area, letting you pack bits in more closely. It shouldn’t have the same problem as SMR.
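A rough sketch of that mechanism, with made-up numbers rather than real media parameters: coercivity (the field strength needed to flip a bit) collapses as the spot approaches the medium’s Curie temperature, so a weak, tightly focused field flips only the laser-heated bit and leaves the cold neighbors alone.

```python
# Toy model of the HAMR trick: coercivity falls toward zero near the Curie
# temperature, so a brief laser pulse lets a weak write field flip only the
# heated spot. All numbers are illustrative, not actual drive parameters.

def coercivity(temp_k: float, curie_k: float = 700.0, h_c_room: float = 5.0) -> float:
    """Rough coercivity in tesla, dropping to zero at the Curie point."""
    if temp_k >= curie_k:
        return 0.0
    return h_c_room * (1 - temp_k / curie_k) ** 0.5

WRITE_FIELD = 1.0  # tesla the head can produce (illustrative)

for t in (300, 500, 650, 690):  # kelvin
    h_c = coercivity(t)
    print(f"{t} K: coercivity ~{h_c:.2f} T -> bit flips: {WRITE_FIELD >= h_c}")
# Only the hottest spot flips, so neighboring (cool) bits can sit much closer.
```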
That sounds absolutely fine to me.
Compared to an NVMe SSD, which is what I have my OS and software installed on, every spinning disk drive is glacially slow. So it really doesn’t make much of a difference if my archive drive is a little bit slower at random R/W than it otherwise would be.
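Some rough numbers behind “glacially slow” (ballpark figures, not any specific drive’s specs):

```python
# Ballpark random-read comparison: 7200 rpm HDD vs a typical NVMe SSD.
# These are generic figures, not measurements of any particular drive.

seek_ms = 8.5                          # average seek time, 7200 rpm class
rotational_ms = 0.5 * 60_000 / 7200    # average rotational latency (half a revolution)
hdd_io_ms = seek_ms + rotational_ms
hdd_iops = 1000 / hdd_io_ms

nvme_io_us = 80                        # typical NVMe random-read latency, microseconds
nvme_iops = 1_000_000 / nvme_io_us

print(f"HDD:  ~{hdd_io_ms:.1f} ms per random read, ~{hdd_iops:.0f} IOPS")
print(f"NVMe: ~{nvme_io_us} us per random read, ~{nvme_iops:,.0f} IOPS")
# ~79 IOPS vs ~12,500 IOPS: two orders of magnitude, so a slightly slower
# archive HDD barely moves the needle.
```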
In fact I wish tape drives weren’t so expensive because I’m pretty sure I’d rather have one of those.
If you need high R/W performance and huge capacity at the same time (like for editing gigantic high resolution videos) you probably want some kind of RAID array.
My point was that these are still not good for a RAID array, unless you’re just storing sequentially at a kinda slow rate. At least for SMR. I fear HAMR might be similar (it reminds me of Sony’s MiniDisc idea, but applied to a hard drive).
I would not risk 36TB of data on a single drive let alone a Seagate. Never had a good experience with them.
The only thing I want is reasonably cheap 3.5" SSDs. SATA is fine, just let me pay $500 for a 12TB SSD please.
Yeah, NVMe drives show how little space the storage takes up. Just stick a bunch of them inside the 3.5" format, along with a controller and cooling, and that would be great for a large/slow (relative to NVMe) drive capped by SATA speeds.
I don’t miss the noise hard drives make, plus it’s nice not to worry as much about what kind of magnetic activity might be going on around it, like whether my subwoofer is too close, or what if my kid somehow gets her hands on a powerful magnet and wants to see if it will stick to my PC case.
Didn’t read your full comment, sorry. How would heat control work? Integrated fan?
Passive cooling could be enough. Even a bunch of SSD chips wouldn’t take up all of the vertical space, so the top of the case could just be a heat sink. Though it might need instructions to only install it in an enclosure that has a fan blowing air past it (and not in the spots behind the mobo that don’t get much airflow).
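A back-of-envelope check on whether passive could work, using Newton’s law of cooling with assumed numbers (the power draw and convection coefficient are guesses, not a real product’s thermals):

```python
# Rough passive-cooling estimate for a hypothetical 3.5" SATA SSD,
# via Newton's law of cooling: delta_T = P / (h * A).
# power_w and h are assumptions, not measured values.

power_w = 7.0   # controller + NAND under SATA-speed load (assumed)
h = 10.0        # natural-convection coefficient, W/(m^2*K), typical for still air

# Approximate exposed shell of a 3.5" drive (top + bottom + four sides):
length, width, height = 0.147, 0.102, 0.026  # meters
area = 2 * (length * width) + 2 * (length + width) * height

delta_t = power_w / (h * area)
print(f"Surface ~{area * 1e4:.0f} cm^2 -> temperature rise ~{delta_t:.0f} K over ambient")
# ~430 cm^2 and roughly +16 K: tolerable in a case with some airflow, which
# matches the "needs a fan blowing past it" caveat above.
```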
A lot of motherboards come with metal styling that acts as a heat sink for NVMe drives without even using fins, though they still have more surface area than a 3.5" drive and only have to deal with the heat from one or two chips.
But maybe it isn’t realistic and that’s why we don’t see SSDs like that on the market (in addition to price).
Hm. Maybe a small laptop-style fan on the port side? Takes in air and spits it out right next to it. NVMe drives seem fine without cooling anyway.
Yeah, I’ve wondered if the ones that come with heat sinks really need them or if it’s just a gimmick to make people think the performance is better.
I want one of those heat cameras hardware reviewers use. I don’t need one, but I want one lol.
They seem to be very hit and miss: there are some models with very low failure rates, but there are also some with very high ones.
That said, the 36 TB drive is most definitely not meant to be used as a single drive without any redundancy. I have no idea what the big guys at Backblaze, for example, are doing, but I’d want to be able to lose two drives in an array before I lose all my shit. So RAID 6 for me. Still, I’d likely be going with smaller drives, because however much a 36 TB drive costs, I don’t wanna feel like I’m spending 2x the cost of one of those just for redundancy lmao
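The redundancy overhead math is simple enough to sketch (drive counts and sizes below are just example numbers):

```python
# Usable capacity and redundancy overhead for RAID 6 (dual parity).
# Drive size and counts are example numbers, not a recommendation.

def raid6_usable_tb(n_drives: int, drive_tb: int) -> int:
    return (n_drives - 2) * drive_tb  # two drives' worth of space goes to parity

for n in (4, 6, 8):
    usable = raid6_usable_tb(n, 36)
    print(f"{n} x 36 TB in RAID 6: {usable} TB usable, {2 / n:.0%} spent on redundancy")
# 4 drives: 72 TB at 50% overhead; 8 drives: 216 TB at 25%. Small arrays of
# huge drives are exactly where redundancy feels most expensive.
```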
Could you imagine the time it would take to resilver one drive… Crazy.
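A rough lower bound, assuming the rebuild can sustain a healthy sequential write rate the whole time (which real rebuilds usually can’t):

```python
# Lower-bound resilver estimate for a 36 TB drive: the replacement must be
# written end to end. The throughput figure is an assumption, not a spec.

capacity_tb = 36
seq_mb_s = 270  # assumed average sustained write rate

seconds = capacity_tb * 1e12 / (seq_mb_s * 1e6)
print(f"~{seconds / 3600:.0f} hours ({seconds / 86400:.1f} days) just to write the data")
# ~37 hours before parity reads, verification, or live workload slow it down.
```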
I use mirrors, so RAID 1 right now and likely RAID 10 when I get more drives. That’s the safest IMO, since you don’t need the rest of the array to resilver your new drive, only the ones in its mirror pool, which reduces the likelihood of a cascading failure.
You couldn’t afford this drive unless you’re an enterprise, so there’s nothing to worry about. They don’t sell them one at a time. You have to buy enough for a rack at once.
100%. 36 TB is peanuts for data centres.
Ignoring the Seagate part, which makes sense… is there a specific problem with 36TB?
I recall IT people losing their minds when we hit 1TB, when the average hard drive was like 80GB.
So this growth seems right.
I recall IT people losing their minds when we hit 1TB
1TB? I remember when my first computer had a state of the art 200MB hard drive.
I remember first hearing about 1TB and thinking, “Who needs that much storage?” I wasn’t an IT person then, just a regular nerd, but I am now. It took me a while to ever fill up my first 1TB HDD (Steam folder). Now I have a 2TB NVMe in my desktop and a 4TB NVMe in my server (for my Linux ISOs ;))
I remember when Zip drives sounded so big!
It’s so consistent it has a name: Moore’s law, the observation that the number of transistors in an integrated circuit (IC) doubles about every two years. https://en.m.wikipedia.org/wiki/Moore’s_law
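Whether or not Moore’s law strictly applies to magnetic storage (see the correction below), the doubling arithmetic on the thread’s own numbers is easy to check:

```python
# How many two-year doublings separate a 200 MB drive from a 36 TB one?
# Pure arithmetic on capacities mentioned in this thread.
import math

start_bytes = 200e6  # the "state of the art" 200 MB drive
end_bytes = 36e12    # today's 36 TB drive

doublings = math.log2(end_bytes / start_bytes)
print(f"~{doublings:.1f} doublings ~= {doublings * 2:.0f} years at one doubling per two years")
# ~17.5 doublings, ~35 years: roughly the gap back to early-90s drives.
```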
I heard that we were at the theoretical limit, but apparently there’s been a breakthrough: https://phys.org/news/2020-09-bits-atom.html
Quick note: HDD storage doesn’t use transistors to store the data, so it’s not really directly related to Moore’s law. SSDs do use transistors/nano structures (NAND) for storage, and their capacity is more closely tied to Moore’s law.
Wonderful. Storage is a great thing, and I’m happy to have it.
Now you can store even more data unsafely!
You are not supposed to use these in a non-redundant config.
Especially these, yeah.
Even in an array, I’d be terrified of more drives failing during a rebuild that’s gonna take a long time.
I’m still not buying a seagate.
Why?
I bought a seagate. Brand new. 250gb, back when 250gb on one hard drive cost a fuckton.
It sat in a box until I was done burning the files on my old 60gb hard drive onto DVD-Rs.
Finally, like 2 months later, I open the box. Install the drive. Put all the files from dvds onto the hard drive.
And after I finished, 2 weeks later it totally dies. Outside of the return window, but within the warranty period. Seagate refused to honor their warranty even though I still had the receipt.
That was like 2005. Western Digital has gotten my business ever since. Multiple drives bought. Not because the drives die, but because I outgrow them data-wise. My current setup is an 18TB and a 12TB. I figure by 2027 I’ll need to upgrade that 12TB to a 30TB. Which I assume will still cost $400 at that point.
Return customer? No no. We’ll hassle our customer and send bad vibes. Make him regret ever shopping our brand! Gotta protect that one-time $400 purchase! It’s totally worth losing 20 years of sales!
I’ve had a lot of seagates simply because they’re the cheapest crap on the market and my budget was low. But unfortunately, crap is what you get.
As @renegadespork@lemmy.jelliefrontier.net said, infant mortality is a concern with spinning disks. If I recall (I’ve been out of reliability for a few years), things like bearings are super sensitive to handling and storage; vibrations and the like can cause microscopic damage leading to premature failure. Once they’re good, though, they’re good until they wear out. A lot of electronics follow that infant-mortality curve. Stuff dying out of the box sucks, but it’s not unexpected from a reliability POV.
Shitty of Seagate not to honour the warranty; that’d turn me off as well. Mine is pettier: when I was building my NAS/server, I initially bought some WD Reds, returned those, and went for some Seagate IronWolf drives, because the Reds made this really irritating whine you could hear across the room. At the time we had a single-room apartment, so that was no good.
I’ve bought 2 Seagate drives and both have failed. Meanwhile, my two 15-year-old WD drives are still working.
I hope I didn’t just jinx myself. Lol
I’ve got the opposite experience, with WD.
You know who uses loads of Seagate drives? Backblaze. They also publish the stats. They wouldn’t be buying Seagate drives if they were significantly worse than the others.
The important thing is to back up your shit. All drives fail.
Same here. I have a media server and just spent an afternoon of my weekend replacing a failed Seagate drive, purchased maybe 4-5 years ago, that was only used to back up my more important files nightly. In the past 10 years, this is the third failed Seagate drive I’ve encountered (out of 5 total), while I have 9 WD drives that have had zero issues. One of them is even dedicated to torrents, with constant R/W, and is still chugging along just fine.
I get it, I’ve had the opposite experience with WD, but those were 2.5” portable drives. All my desktop stuff still works perfectly 🤞
They have had reliability issues in the past.
Nearly all brands have produced both reliable and unreliable series of hard drives.
You really have to judge them based on the series / tech.
None of the big spinning-rust brands can really be labeled unreliable across the board.
Backblaze.com gives stats on drive failures across their datacenters:
https://www.backblaze.com/blog/backblaze-drive-stats-for-q3-2024/
Seagate’s results stick out. Most of the drives with >2% failure rates are theirs. They even have one model over 11%.
Why would Backblaze use so many Seagate drives if they’re significantly worse? Seagate also has some of the highest Drive Days on that chart. It’s clear Backblaze doesn’t think they’re bad drives for their business.
I can only speculate on why. Perhaps they come as a package deal with servers, and they would prefer to avoid them otherwise.
There are plenty of drives with equivalent or more runtime than the Seagate drives. They cycle their drives every 10 years regardless of failure. The standout failure rate, the Seagate ST12000NM0007 at 11.77%, comes from drives at less than half that average age.
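For reference, the rate in those reports is an annualized figure, computed from failures and accumulated drive days (Backblaze publishes the formula; the inputs below are placeholders, not numbers from the linked report):

```python
# Backblaze's annualized failure rate (AFR):
#   AFR = failures / (drive_days / 365) * 100
# The inputs below are placeholders, not figures from the linked report.

def afr_percent(failures: int, drive_days: int) -> float:
    return failures / (drive_days / 365) * 100

# e.g. 120 failures over 10,000 cumulative drive-years:
print(f"AFR: {afr_percent(120, 365 * 10_000):.2f}%")  # -> 1.20%
```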
Seconding this. Anecdotally, from my last job in support, every drive failure we had was a Seagate. WDs and Samsungs never seemed to have an issue.
Got a source on that? According to Backblaze, Seagate seems to be doing okay (Backblaze Drive Stats for Q1 2024 https://www.backblaze.com/blog/backblaze-drive-stats-for-q1-2024/), especially given how many models are in operation.
I wouldn’t call those numbers okay. They have noticeably higher failure rates than anybody else. On that particular report, they’re the only ones with failure rates >3% (save for one Toshiba and one HGST), and they go as high as 12.98%. Most drives on this list are <1%, but most of the Seagate drives are over that. Perhaps you can say that you’re not likely to encounter issues no matter what brand you buy, but the fact is that you’re substantially more likely to have issues with Seagate.
Looks like another person commented above you with some stuff. I recall looking this up a year ago, and the SSD I was looking at was in the news for unreliability. It was just that specific model.
What brand is currently recommended? WD is taking the enshittification highway…
Latest story I know of: https://arstechnica.com/gadgets/2023/06/clearly-predatory-western-digital-sparks-panic-anger-for-age-shaming-hdds/
What about the writing and reading speeds?
If you care about that, spinning rust is not the right solution for you.
I mean, newer server-grade models with independent actuators can easily saturate a SATA 3 connection. As far as speeds go, a RAID 5 or RAID 6 setup (or equivalent) should be pretty damn fast, especially if they start rolling out those independent actuators to the consumer market.
As far as latency goes? Yeah, you should stick to solid state…but this breathes new life into the HDD market for sure.
It has some.
The speed usually increases with capacity, but this drive uses HAMR instead of CMR, so it will be interesting to see what effect that has on the speed. The fastest HDDs available now can max out SATA 3 on sequential transfers, but they use dual actuators.
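Rough numbers on why the actuator count is the bottleneck (the per-actuator rate is an assumption, roughly in line with current large CMR drives):

```python
# SATA 3 moves ~600 MB/s after encoding overhead; a single actuator on a big
# modern drive sustains maybe ~280 MB/s on outer tracks (assumed ballpark).

sata3_mb_s = 600            # ~6 Gb/s line rate minus 8b/10b encoding overhead
per_actuator_mb_s = 280     # assumed sequential rate per actuator

for actuators in (1, 2):
    rate = min(actuators * per_actuator_mb_s, sata3_mb_s)
    print(f"{actuators} actuator(s): ~{rate} MB/s ({rate / sata3_mb_s:.0%} of SATA 3)")
# One actuator leaves half the link idle; two nearly saturate it.
```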
me: torrents the entire spn series
Managing that many files becomes the challenge
Only SSD for me
Yeah, but I can’t afford 2TB of SSD, and I need to expand soon.
You can’t get SSDs that big except for some extremely expensive enterprise drives.