Comments:
I moved everything from the cache to the array (even backed it up onto my personal PC), installed a bigger M.2 for the cache drive, did the formatting, moved everything back from the array to the cache, and now, even though all my data is there, the Dockers and VMs I had are gone? What gives.
Great guide. I wanted to change my cache from RAID-1 BTRFS to single-drive RAID-0 ZFS. That all went fine, but I'm having an issue getting the mover to invoke and transfer the files back onto the new cache. Any idea?
PLEASE MORE ON ZFS. Love it!! I have been meaning to get into it and this might be my way in!
OK, I have 1 TB ZFS cache pools. I bought 2 TB drives to replace them but I can't get it to work. Any advice, help, or another video? Every time I go to switch them out I lose all the data for my Dockers and containers.
I have three 1 TB SSDs installed on my server. Is it best to create a ZFS pool or use RAID-1?
Brilliant. As a total newbie to Unraid, these videos are invaluable to my learning experience. Very clear and detailed, many thanks.
I am going to set up an Unraid server soon. Should I set up the cache drives as ZFS from the get-go?
Maybe it was in the video, but when I swapped out to a larger cache drive this morning it wouldn't let me just swap the cache and start the array. I ended up having to delete the old pool and add a new one; then I could format.
XFS speed > ZFS speed
You didn't change the shares on the cache pool to cache-only. Does that matter? Is there any significant difference between cache-only versus cache with the array as secondary storage and the mover set to move from array to cache?
Great video! I followed all the steps and now have a mirrored ZFS cache pool! Thanks! If, at a later date, I want to expand this pool, is there any way to convert it to RAID-Z? Or do I have to move all the data to the array again and create a new pool?
Just had one of my new 22 TB Ultrastar disks formatted with ZFS, but now copying files to and from it is extremely slow (50 MB/s write, 80 MB/s read) versus XFS's 280-290 MB/s raw speed! Did a bit of searching on the Unraid forum and people are having similar issues... moving back to XFS... Any suggestions so a single ZFS disk in the Unraid array can still enjoy the raw read/write speed? Thx. 🥴
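One thing worth checking before giving up: a few dataset properties commonly account for slow single-disk ZFS writes. A rough sketch below, assuming Unraid named the disk's pool disk1 (verify the actual name with zpool list); none of these rewrite existing files, so re-test with a fresh copy afterwards:

    # See what the dataset is currently set to:
    zfs get recordsize,compression,sync,atime disk1
    # Larger records suit big sequential media files:
    zfs set recordsize=1M disk1
    # Skipping access-time updates saves a write on every read:
    zfs set atime=off disk1
    # Async-only writes are faster, but you can lose the last few seconds on power loss:
    zfs set sync=disabled disk1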
If I have one ZFS SSD and want to add two more, do I need to move everything off the original ZFS SSD and then add all three together?
Thank you, I needed to swap my cache setup from a combined HDD RAID to a mirrored RAID for safety. Also changed to ZFS while I was at it.
From reading the forums, it would appear the mover takes forever to move files from the cache pool to the array. How long would it take to move 170 GB of files? And is there an alternative, like using rsync or Krusader? I am looking to upgrade my cache pool...
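For what it's worth, with Docker and VMs stopped, a plain rsync from the cache to a specific array disk works as an alternative. A minimal sketch (paths and disk number are examples; copy to /mnt/diskX, and never mix /mnt/cache and /mnt/user paths for the same share in one command):

    # Copy, preserving permissions and timestamps, with progress output:
    rsync -avh --progress /mnt/cache/appdata/ /mnt/disk1/appdata/
    # Verify the copy, then delete the originals from the cache:
    # rm -r /mnt/cache/appdata

Over SATA, 170 GB should be more like an hour or two than a whole day, depending on how many small files are involved.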
As a first-time Unraid user, I'd love you to do a video on how to set it up for the first time with 6.12.x and ZFS, mostly interested in how to get it as fast as possible for reads/writes.
If I plan on running VMs off of my ZFS cache drive, should I use mirror or RAID-Z? Compression on or off?
How can I go from a cache drive to a zpool?
First of all, love your videos Spaceinvader :)
Here are my experiences with the new ZFS side of Unraid.
NVME CACHE
I transferred from BTRFS to ZFS and have had a very noticeable decrease in write speed, both before and after the memory cache has been filled. Not only that, but the RAM being used as a cache makes me nervous even though I have a UPS and ECC memory. I have noticed that my dual-NVMe RAID-1 ZFS cache pool gets the full 7,000 MB/s read but only 1,500 MB/s max write, which is a far cry from what it should be when using ZFS. I will be switching my appdata pools back to BTRFS, as it has nearly all the same features as ZFS but is much faster in my tests.
The only thing missing to take full advantage of BTRFS is the nice GUI plugin and scripts that have been built to deal with ZFS snapshots, which I'm sure someone could bang together pretty quickly from the existing ZFS scripts and plugins.
It's important to note here that my main NVMe cache pool was RAID-1 in both BTRFS and ZFS, using each filesystem's own native form of RAID-1, obviously.
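If anyone wants to reproduce this kind of comparison, a sketch using fio (installable on Unraid via NerdTools; the mount point is an example) that forces a flush at the end so the RAM cache doesn't flatter the write number:

    # 10 GiB sequential write in 1 MiB blocks, fsync'd at the end:
    fio --name=seqwrite --directory=/mnt/cache --rw=write --bs=1M \
        --size=10G --end_fsync=1 --group_reporting

Run the same command against each filesystem and compare the reported bandwidth.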
ARRAY
I also started converting some of my array drives to single-disk ZFS as per Spaceinvader's videos, with parity and expansion handled by the Unraid array. That's where I noticed the biggest downside for me personally: a single ZFS disk, unlike a proper zpool, is obviously missing a lot of features, but more importantly write performance is very heavily impacted, and you still only get single-disk read speed once the RAM cache is exhausted. I noticed a 65% degradation in write speed to the single ZFS drive.
I did a lot of research into BTRFS vs ZFS and have decided to migrate all my drives and cache to BTRFS and let Unraid handle parity, much the same as the way Spaceinvader is doing ZFS. This way I don't see the performance impact I was seeing with ZFS and should still be able to do all the same snapshot shifting and replication that ZFS does. Doing it this way I avoid the dreaded unstable BTRFS native RAID 5/6, and I get nearly all the same features as ZFS but without the speed issues in Unraid.
DISCLAIMER
I'm sure ZFS is very fast when it comes to an actual zpool rather than a single-disk situation, but it very much feels like ZFS is a deep-storage filesystem, not really molded for an active, so to speak, array.
Given my testing, all my cache pools and the drives within them will be BTRFS RAID 1 or 0 (RAID 1 giving you active bitrot protection), and my array will be Unraid-handled parity with individual BTRFS single disks.
Hope this helps others in some way to avoid days of data transfer only to realise the pitfalls.
It would be super nice if you did redos of some of your old videos that are outdated.
Thank you!!!!
What would be the point of a single-drive ZFS cache pool? I mean, what would the advantage be?
Why did you say "definitely not going to do RAID 0" on your cache pool? Was it only because you were planning on doing a three-drive RAID-Z pool?
Can I create the datasets for all the Dockers (appdata/plex, for example) before executing the mover action that moves the files from the array to the new ZFS cache volume?
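If it helps, datasets can be created ahead of time from the terminal; to the mover they just look like directories to fill. A minimal sketch, assuming the new pool is named cache:

    # Create the parent and a per-app dataset before running the mover:
    zfs create cache/appdata
    zfs create cache/appdata/plex
    # Confirm they exist:
    zfs list -r cache/appdata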
I'm in absolute awe of your generosity, Spaceinvader One!!!
I have a fair idea just how much time you give to help others in your detailed videos... We are all blessed... Thank you for all that you do!!!
Very thorough guide. Thank you!
@Spaceinvader One, any thoughts on the upsides and downsides of doing this cache pool ZFS process, but encrypting the cache pool? I'm worried about issues running VMs from the cache pool, and also about speed/encryption overhead. (And whether cache drive speed may slow down if there is still no TRIM support for encrypted drives?)
What are the benefits/negatives of using compression with ZFS? How does it affect read/write speed?
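Roughly: lz4 is cheap enough that on most data it's a net win - you trade a little CPU for less physical I/O, and incompressible media just passes through almost untouched. A sketch, assuming a pool named cache:

    # Enable lz4 (affects only data written after the change):
    zfs set compression=lz4 cache
    # Later, check how much you're actually saving:
    zfs get compressratio cache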
Sorry if this is a noob question. My Unraid kung-fu is not that strong yet. 🥷
Anyone else having an issue where the mover seems to be stuck on the system share? It's been 12+ hours now. Everything else moved pretty fast. Been googling around; maybe it's because in Settings > Docker I have it set to directory? 🤔🤷🏻♂️
I ran mover stop and set all the shares back to cache <- array. Now I'm having the mover go at it again. That way I can (hopefully) be back up and running and figure out a different plan of attack.
PS - can I just stop the two services and manually copy everything to the array?
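The system share usually stalls because docker.img and libvirt.img are held open. A rough sketch of the manual approach (the rc.d paths are how Unraid 6 ships its service scripts, but treat this as an untested outline):

    # Stop the two services that hold files open:
    /etc/rc.d/rc.docker stop
    /etc/rc.d/rc.libvirt stop
    # Confirm nothing still has the Docker image open:
    lsof /mnt/cache/system/docker/docker.img
    # Then either rsync the share manually or kick the mover off again:
    mover start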
Note to self: when converting to ZFS, don't create a domains folder until you have backed up everything in the existing folder... It creates the dataset for you and happily empties the folder for you. Second note to self: change backups from every two weeks to nightly... I should have watched your video sooner :)
I'd like to see the recovery process on Unraid when a ZFS drive fails. For instance, I have a RAID-Z pool with three 1 TB SSDs and one fails. How do you resilver the pool in Unraid? This is something I need to see before I'm willing to switch to ZFS on my cache pool. I do need to move to ZFS for the compression, though, so I can minimize the appdata and Plex appdata pools. Obviously not the media... just the database and stuff. Currently my Plex DB and supporting files are too large for me to use the appdata backup plugin on Plex, but I'd like to be able to add another larger SSD, around 4-8 TB, that would host the snapshots of my production pools as a backup. And for another video idea: is it possible to send ZFS snapshots to another Unraid system with a large ZFS pool (think 10-20 TB SSDs) as a secondary backup location that isn't part of the main server? I can see this also being set up to run over a site-to-site VPN so that the snapshot is geographically separated. And one more: is it possible to send the ZFS snapshot to S3-compatible storage or another cloud storage provider?
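For the resilver half of this, the raw ZFS flow from the terminal looks roughly like the sketch below (pool name and device are examples; Unraid's GUI may wrap some of this for you):

    zpool status cache              # identify the FAULTED/UNAVAIL device
    zpool replace cache <old-device> /dev/sdX
    zpool status cache              # shows "resilver in progress" until done

And snapshot shipping to a second box is standard zfs send/receive over ssh (host and dataset names assumed):

    zfs snapshot cache/appdata@nightly
    zfs send cache/appdata@nightly | ssh root@backup-server zfs receive tank/backups/appdata

S3 would need an extra tool on top, since send streams aren't object storage natively.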
Is ZFS better than BTRFS for cache? Also, if building a new Unraid server, should I just go with ZFS for all my disks, array and cache? Thank you for all your videos.
You're better off just creating a backup of your appdata using the "Appdata Backup" plugin. Using the mover is incredibly slow if you have lots of small files. My 10 GB Plex appdata folder would've taken over 6 hours to move (one way). I just erased the cache drive and restored from the backup; took about 15 minutes.
Hi, can you do a video on folder permissions in Unraid from the terminal? I'm having an issue with the files in the Media folder not being seen by Plex Server.
The mover option seems like the safest thing for this task, but you could be waiting quite a while for the mover to finish if you have a big media library managed by Plex/PhotoPrism/Paperless Dockers. This is due to the way the mover works - checking every file to see if it's being used, etc. I'm assuming one could just move the data from the cache disk(s) to the array using the unBalance plugin. Does anyone see a risk in doing it this way?
Well, very nice video. I've tried this a few times now. In my case I have a RAID-0 cache pool, but if I format it as ZFS RAID 0, I can't write anything to the cache pool anymore because it says there is not enough memory left. It's formatted and shows up in the UI as a ready cache pool, but it's not working so far. Did I do something wrong?
Hi,
I was wondering, at the end of the whole process, do you need to change the mover action back to cache -> array? My logic is that if my cache drive is full, it moves the new data to the array.
Thanks
Are there any caveats with a ZFS cache and how it handles Dockers? I'm having issues installing Dockers since changing over to ZFS; I cannot install any new Dockers at all - it throws errors.
I wonder if a simpler way, to keep the server up, is to copy the data rather than move it, or use the appdata backup utility to back it up to another drive, format, then restore.
A week or two ago I got a new HDD for my array that I precleared and formatted as ZFS. Now I'd like to use this moment to empty another drive onto the new one and reformat that drive as ZFS. Since the new drive is to replace a smaller one, this is sort of the only chance I have without making things annoying.
I've moved all of the data off the XFS drive to the new drive using unBalance (it took forever, since the copy didn't go faster than around 70 MB/s due to what I understand to be a bug). I'm planning to use the "zero-drive" script to zero my old drive, keeping parity in check. After that I can reformat the drive to ZFS. Maybe there are easier and less time-consuming methods, but this should work.
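For reference, the zero-drive approach boils down to writing zeros through the parity-protected md device so parity stays valid. A conceptual sketch only - the community "clear an array drive" script adds safety checks, so use that rather than running this blind (disk number is an example; newer Unraid releases expose the device as /dev/mdXp1):

    # DESTRUCTIVE: wipes disk3 while keeping parity in sync
    umount /mnt/disk3
    dd if=/dev/zero of=/dev/md3 bs=1M status=progress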
Having issues with destroying datasets now. This has happened with multiple containers; I have to stop the array to delete. The error is: Unable to destroy dataset cache_nvme/appdata/xxx
Output: 'cannot destroy '\''cache_nvme/appdata/xxx'\'': dataset is busy destroy cache_nvme/appdata/xxx'. Do you have any suggestions?
I did convert successfully to ZFS, thank you. But if I delete a container in Docker Manager, the dataset is still present. It's empty, and if I try to destroy the dataset it gives me an error: dataset is busy. Only stopping the Docker service, destroying the dataset, and restarting the Docker service helps. Is there any way around this issue?
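When a dataset reports busy, something is usually still mounted there or holding a file open. A short sketch for tracking it down (dataset name borrowed from the error above):

    # Is it still mounted, and does anything have files open under it?
    zfs list -o name,mountpoint cache_nvme/appdata/xxx
    lsof +D /mnt/cache_nvme/appdata/xxx
    # Once nothing shows up, the destroy should go through:
    zfs destroy cache_nvme/appdata/xxx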
Isn't the cache drive BTRFS and not ZFS?
Great tutorial - replaced my 970 with a 980 Pro while doing this - no issues. Thanks!
I followed the instructions and had to reinstall my Dockers. Not sure why, but everything except the Jellyfin data was retained.
The mover is insanely slow. Burn-your-whole-day slow. Terrible.
Perfect timing, just bought a new larger NVMe for my system! Got to remember how the heck I encrypted the last one!