Comments:
I’ve been waiting on someone to make this video. I feel like no one cares about how cool these drives are. I saw them on Server Part Deals and I’ve been tempted.
Also…. First!! 😂
For the first reboot, does it take that long because of the zpool cache? I remember something about TrueNAS using a non-standard location for the cache, so I needed to manually specify it every time I ran a CLI command on a pool created via the GUI.
I wish the drives could be seen as one drive with 2x performance, to keep things simple.
I always wondered why they didn't try to do something like this with the actuators. But why not just show up as one drive with more performance? Certainly it wouldn't be bottlenecked by the SAS interface.
So I think the tech is cool, but honestly I don't think it should really be used. There is more of a use case for SAS, and I agree that SATA really shouldn't be used, period. I do think that platter drives will be around for a while, but the added complexity of these will make them not worth it for most use cases. Things like drive failures and rebuilds are also going to be impacted, and what happens when someone stupidly puts both of the actuators in the same vdev and now their entire pool is lost because of a single drive failure? If someone wants/needs the speed, I think SSDs are cheap enough that you can just get those, and even if you don't want to convert everything, you can use tiering like we always have. Again, the tech is cool, but I wouldn't recommend anyone actually use it due to the added complexity and chances of failure.
Terrible idea, having a single physical drive present as two logical drives. So: two physical drives, RAID 5, one physical drive dies, and you've just lost your RAID. It's not like they couldn't present it as one logical drive if they tried.
Pretty cool idea to get some performance out of spinning rust. I'd never dare to do a RAIDZ1 with a Seagate product, though; at least RAIDZ2 plus a hot spare. But YMMV. What will get complicated is when one or two "half drives" fail and you need to replace them, and as this is a Seagate product, those failures are inevitable. One single mistake replacing the drive and the pool turns to rust. I think the SATA version handles it better, even if it doesn't present 2x the performance. Will try this for sure when a dependable company has them.
A TrueNAS Mini R with 12x of the SATA Mach.2 drives could be a good use case. Once set up, you could effectively double the disk IOPS that unit can provide.
I'm building a TrueNAS system in a rig with SAS and I was considering these drives before I knew of the compatibility issues. I'll be sticking with standard drives until proper support exists in TrueNAS.
Are dual actuator drives supported out of the box by Microsoft Storage Spaces (not Storage Spaces Direct, just Storage Spaces)?
My problem is that I love TrueNAS and I've deployed it at several companies, but now I need an HV solution, and TrueNAS has no cluster solution, and that's bad.
Hmmm... if I wanted to see two different drives, I'd buy two drives.
Surely the drive manufacturers can make it appear as one drive, with the drive itself deciding which set of heads to use.
I think that's really cool technology, and for my homelab the slightly jank setup required would be totally worth it.
Good video, but I really wish you had shown some example of performance from a similarly sized zpool running single-actuator drives.
I just built a new server and I wanted to use these drives. I ended up going with IronWolf drives because of the compatibility issues.
I bought one of the SATA versions to test out in my desktop before I decide to get more for a NAS. To get full speed with the SATA model, you have to create two partitions of 50% each, then RAID-0 them; that way each partition has its own actuator. Performance is good.
With ZFS, you would separate the partitions into different vdevs, so if the drive fails, you lose one drive from each of two different vdevs.
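A rough sketch of why the 50/50 split lines up with the actuators. This is a toy model, not drive firmware: it assumes the lower half of the LBA range maps to one actuator and the upper half to the other (the documented split for these SATA drives), and the sector count is invented for illustration:

```python
# Illustrative model of a dual-actuator SATA drive: the lower half of the
# LBA space is served by actuator 0, the upper half by actuator 1.
# TOTAL_SECTORS is a made-up number for the example.

TOTAL_SECTORS = 1_000_000

def actuator_for_lba(lba: int) -> int:
    """Return which actuator serves a given logical block address."""
    return 0 if lba < TOTAL_SECTORS // 2 else 1

# Two partitions of exactly 50% each...
part1 = range(0, TOTAL_SECTORS // 2)
part2 = range(TOTAL_SECTORS // 2, TOTAL_SECTORS)

# ...land entirely on separate actuators, so a RAID-0 across the two
# partitions keeps both actuators busy at once.
assert {actuator_for_lba(part1.start), actuator_for_lba(part1[-1])} == {0}
assert {actuator_for_lba(part2.start), actuator_for_lba(part2[-1])} == {1}
print("each 50% partition maps to exactly one actuator")
```

If the split were anything other than 50/50, one of the two RAID-0 members would straddle the actuator boundary and the stripes would stop being independent.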
It looks like on Unraid this thing will raise drive-count licensing concerns, and it will still be slow because of how the Unraid array is designed 😢
It still shows up as two drives.
Maybe it could double the drive count in the same unit (for example, 8 drives in 4 bays) so the total read/write speed in ZFS/RAID stacks up.
Unraid: I hate this thing 😂
Unraid just left the chat
Interesting. My mind immediately went to striping the 2 parts of the single drive, and then mirroring the physical drives.
I wonder if you can put striped vdevs into a RAIDZ2, I haven't played with that.
On one hand, I don't like the additional complexity of these drives. On the other, I've gotten to the point where I don't really need more space but wouldn't mind more performance, and the size/price sweet spot keeps getting bigger. These would be a way to nearly double the performance without increasing the size too much.
Just what we need: one more point of failure in already failure-prone hardware. Innovation is cool, but this is not my idea of a cool innovation.
It's a shame you can't stack VDEVs in an alternative layout in ZFS: one physical Mach.2 drive could be presented to the pool as a mirrored pair of LUNs, and then the resulting middle-layer VDEV could be put into RAIDZ2/3 or similar.
I know that ZFS 2.3, released only hours ago, brings more flexibility with RAIDZ expansion, along with other benefits, but I don't recall any past discussion of stacking VDEVs such that you could construct a pool from a RAIDZ3 made up of mirrored pairs of drives (or, as in this theoretical example, LUNs within the same physical unit).
Given the Mach.2 technology, particularly when used over a SAS interface, it seems a huge engineering effort is being thwarted by software limitations.
Maybe these Mach.2 drives would be better employed in a LINSTOR/LINBIT DRBD cluster underpinned by ZFS… I'm trying to pull all the pieces together into a coherent summary, but that's beyond the scope of TrueNAS and this video for sure.
Will they ever get to a point where each platter has its own independent head? Potentially much faster speeds, seen as one drive or as individual drives per platter? Cool to think about.
I wish SAS were more commonly supported at the consumer level, since the drives are often cheaper than SATA. Kind of like RDIMMs.
Would be interested in seeing a staggered mirror pool setup: mirror vdevs spread across drives in a daisy-chain fashion.
I never call them spinning rust, and never will. And after my HDD survived being buried outside for 2 years and worked fine afterwards, I definitely never will.
Been using unRaid for many years and I'm still super happy with it. The main array is slow, but for me maximising usable storage space is the priority, so that is working great. For high-speed storage I use NVMe drives in a cache pool, and that covers the high-speed needs.
unRaid is a nice, simple, and functional way to run a full homelab in one physical box.
The plan is to upgrade to an all-NVMe server in the future, but for that a TB of high-quality NVMe needs to cost less than $40, and that is some years away.
I'm running two SATA Mach.2 in TrueNAS. You just need to create the two partitions manually, then use each partition as a separate device when creating the ZFS pool.
So far they work as well as the SAS Mach.2 I have in the same system.
I've got a pool with two 6-wide RAIDZ2 vdevs and approximately a year of uptime. I was glad to see this video confirm that what I see in the UI is not the result of a bad configuration.
I don't understand why the HDD controller presents itself as two drives. Why not present as one drive that happens to have (up to) twice the performance? Seems like an unnecessary user complication to me. If I ever wanted an HDD to present as two drives, I'd partition it as such. 🤷
It is a real shame there isn't better support for these devices in TN Scale.
TrueNAS's primary benefit is the UI and the commercially supported platform. If you use these drives, you don't have support and you don't have a functional GUI. So, like... why? Just use ZFS on a regular distro.
So glad you did this video. I've been eyeing the cheap 12TB recertified drives, and this is a way better option even if proper support is still in the future!
Why not just get two drives?
I would think that the best way to use these is to do RAID0 (I don't remember the ZFS name for it right now) inside each disk, and then do a RAIDZ2 with a bunch of disks. Since, most probably, when one part of the disk fails the other will fail too, I would just join the two parts of the same disk into one faster unit.
I think that before these become mainstream, Seagate is going to have to simplify how the drive presents itself to the OS. Presenting itself as one single drive would be ideal.
I'm no engineer, but I'm not sure why the drives don't just stripe (RAID0) themselves automatically, and then present as a single drive to the operating system. That would simplify deployment and troubleshooting in the case of a failure, but also give the same performance benefit.
I've been reluctant to try this, even the SAS versions. My reasoning is that when I lose a drive, I actually lose two. This means that I need at least RAIDZ2 if both logical drives are in the same VDEV, and probably RAIDZ3 to be safe. If they are in different VDEVs, then I now have two degraded VDEVs for each drive failure. I'm not convinced the performance vs risk tradeoff is a good one.
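The placement tradeoff described above can be sketched numerically. This is a toy model, not ZFS itself; the drive count, labels, and vdev layouts are invented for illustration:

```python
# Toy model: 6 physical dual-actuator drives, each exposing two LUs
# (labelled "dN-a" / "dN-b"). A physical failure takes out both LUs.
drives = [f"d{i}" for i in range(6)]
lus = [f"{d}-{half}" for d in drives for half in ("a", "b")]

def surviving(vdev, failed_drive):
    """LUs in a vdev that survive the loss of one physical drive."""
    return [lu for lu in vdev if not lu.startswith(failed_drive + "-")]

# Layout A: all 12 LUs in one RAIDZ2 vdev -> one physical failure
# removes 2 members, consuming all the parity at once.
vdev_a = lus
lost_a = len(vdev_a) - len(surviving(vdev_a, "d0"))
assert lost_a == 2  # RAIDZ2 is already at its limit after one dead drive

# Layout B: split the halves into two RAIDZ2 vdevs so siblings never
# share a vdev -> one physical failure degrades BOTH vdevs by 1.
vdev_b1 = [f"{d}-a" for d in drives]
vdev_b2 = [f"{d}-b" for d in drives]
lost_b = [len(v) - len(surviving(v, "d0")) for v in (vdev_b1, vdev_b2)]
assert lost_b == [1, 1]  # two degraded vdevs per drive failure

print("layout A: one vdev loses", lost_a, "members;",
      "layout B: each vdev loses 1 member")
```

Either way, a single physical failure costs two logical members somewhere, which is the commenter's point: same-vdev placement needs RAIDZ3-level margin, and split placement doubles the number of degraded vdevs.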
Great video Tom! We've had a few community users show up and the Level1Techs scripts have been useful in getting them online with their MACH.2 drives.
One thing that I would caution the users of these drives about, the SAS ones specifically, is that certain low-level SAS/SCSI commands cross LU boundaries - meaning they'll impact "both drives" in this sense. Power management is the obvious example as there's only a single motor, so start/stop/idle spin-down states impact both - but it starts to become a potential performance issue with "FLUSH CACHE" impacting both LUs during transaction commits. If "both disks" are in the same pool, they'll likely be receiving the flush command at effectively the same time; but the potential for some interference exists.
Where it gets ugly is commands like "FORMAT UNIT" and "SANITIZE" also cross LU boundaries. Now, those would be the lower-level ones - something like a dd zero-pass or similar won't, but because the new EXOS 2X line optionally comes with SED (Self Encrypting Drive) support, that would be a bad way to find out you accidentally put two LUs in the same vdev.
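One defensive habit that follows from the comment above: group the LUs by physical drive before laying out vdevs, so siblings never end up together. A minimal sketch — the device names and serials below are invented example data, and it assumes both LUs report a common drive serial (on a real system you'd pull identifiers with something like `sg_inq` or from sysfs, and should verify what your drives actually report):

```python
from collections import defaultdict

# Hypothetical mapping of SCSI block devices to the physical-drive
# serial they report; the values here are made up for the example.
lu_serials = {
    "sdb": "WWZ001", "sdc": "WWZ001",   # two LUs, one physical drive
    "sdd": "WWZ002", "sde": "WWZ002",
    "sdf": "WWZ003", "sdg": "WWZ003",
}

# Group LUs by the physical drive they live on.
by_drive = defaultdict(list)
for dev, serial in lu_serials.items():
    by_drive[serial].append(dev)

# Sanity check before pool creation: no vdev should contain siblings,
# since FORMAT UNIT / SANITIZE (and a physical failure) hit both.
proposed_vdev = ["sdb", "sdd", "sdf"]  # one LU per physical drive
for siblings in by_drive.values():
    overlap = set(siblings) & set(proposed_vdev)
    assert len(overlap) <= 1, f"sibling LUs {overlap} share a physical drive"
print("proposed vdev has at most one LU per physical drive")
```

The same grouping also tells you which "other drive" to expect errors on when one LU starts misbehaving.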
This is very old, but very cool technology! I’m excited to see what hard drive manufacturers can actually do when they want to push limits.
At my MSP we use a lot of Synology, being able to double the iops would definitely help for active backup tasks…. Of course these drives might not be on a compatibility list and Synology’s been kind of rude about that!
I will never purchase another Seagate drive after what they did with ST3000DM001.