KeithL
Administrator
Posts: 10,273
Post by KeithL on Jan 10, 2018 15:48:03 GMT -5
Keith, was your response really in reference to Synology using proprietary RAID or not? Not SPECIFICALLY. Open source and commercial products both have advantages and disadvantages, and neither has historically been 100% free of errors, problems, glitches, and other issues. And Synology has a pretty good reputation (although I am not aware of ANY NAS vendor who hasn't had occasional product issues). My real point was that, while a NAS may help keep your data a little safer, it doesn't replace the need for backups. (I sort of fancied I was seeing a sentiment that "if I get a good enough NAS then I don't need to worry about backups"... and that worries me.)
Post by qdtjni on Jan 10, 2018 15:58:15 GMT -5
I agree with your points, but they are unrelated to Synology RAID being proprietary or not. Since their RAID configurations are fully implemented using md (and lvm) in Linux, all open source, they are the very opposite of proprietary, and Synology units don't have any HW RAID controller.
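Because the arrays are plain Linux md and lvm, they can be inspected over SSH with the standard tools. A hypothetical sketch (the device name /dev/md2 is an assumption; it varies by model and layout):

```shell
# On a Synology box (or any Linux software-RAID host), array state is
# visible through the standard md interfaces; no vendor tool is needed.
cat /proc/mdstat                 # list md arrays and their sync status

# Detailed view of one array; /dev/md2 is often the data volume on
# Synology units, but the device name is an assumption here.
mdadm --detail /dev/md2

# The LVM layer on top is equally standard:
lvs                              # logical volumes
pvs                              # physical volumes backing them
```

These commands require root on the NAS itself; they are read-only queries and don't modify the array.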
Post by LuisV on Jan 10, 2018 15:58:53 GMT -5
No argument here, but from my research there is a lot of discussion about ECC for home usage. In corporate or mission-critical settings, yes, but for media storage I've read it's not 100% necessary.
mikes
Minor Hero
Posts: 38
Post by mikes on Jan 10, 2018 16:18:39 GMT -5
My FreeNAS NAS has been running for 3+ years with 16 GB of non-ECC RAM with no problems (yes, I recognise that a one-person sample size is useless). In my case I had the RAM from an old workstation, so I used it. I haven't looked up the price of ECC RAM in a while, but if I were building a new machine and it added over $150 to the price, I would probably go with non-ECC RAM and just set up a FreeNAS plugin to back up a ZFS snapshot of critical data more frequently (say weekly). These things can all be done automatically, and backups can be done to cloud storage; FreeNAS makes it simple. For me, I make regular backups of my pictures and home movies; all other media (movies, music...) I can reacquire if I need to (a bit of a pain, but for me not worth the backup storage I would use). I can say that my setup has been rock solid.
Since I've started using FreeNAS I've only lost a power supply, so I haven't had to recover from any other hardware failure. Before I started using it, I did do various tests. I set up FreeNAS and created the data pools, then proceeded to unplug various pieces of hardware to see how FreeNAS handled it. I unplugged a hard drive, RAM, and the graphics card. Unplugging one hard drive did nothing and was simple to recover from (as it should be). Unplugging other hardware caused the computer to crash, but never corrupted the ZFS pool. After these tests I was happy my data was reasonably safe.
Mike
Post by millst on Jan 10, 2018 16:23:17 GMT -5
In regard to FreeNAS only, I disagree that it's a home-vs-corporate-environment question. As a general statement about ECC vs non-ECC, I do agree.
I don't want to rehash endlessly here. As I said before, anyone considering FreeNAS should read at least the first post I linked. If it's above your head, then TL;DR: use ECC. Otherwise, perform the risk analysis yourself. Many have gone non-ECC and it works out fine. Some have and it didn't.
A UPS is strongly recommended, too.
-tm
Post by copperpipe on Jan 10, 2018 16:35:15 GMT -5
I think we can agree to disagree that ZFS changes the ECC/no-ECC discussion in any way; my own take is that ZFS doesn't need ECC any more than any other file system. But here is a pretty good explanation of why the article you linked to is kind of wrong: jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/
Post by creimes on Jan 10, 2018 16:36:29 GMT -5
My main files that I dearly want protected as best as I humanly can, I have and will have on external drives, and I would use the NAS as the backup of those other drives. The NAS would also be used for media, for Plex to my Nvidia Shield and a Sharp TV that has built-in Roku. I will be keeping the cherished stuff on more than one drive or setup. It was my mistake not making sure some of that stuff was on multiple drives. When I lost a micro SD card in my phone last year I had no idea about Google Photos and such, and just periodically plugged the card into my Mac Mini and transferred everything to iPhoto, which is on an external drive and backed up to two separate drives mirrored in Time Machine. After multiple drive losses I need to get off my butt and deal with this in a better manner, making sure my important files are better protected from loss. In the end it's my fault, and you learn from your mistakes... well, sometimes. Chad
Post by Boomzilla on Jan 10, 2018 17:13:11 GMT -5
...in what sense is RAID implemented in Synology NAS appliances with proprietary technology? Perhaps they're better than they used to be. I'm still wary of them.
Post by millst on Jan 10, 2018 17:17:56 GMT -5
Yes, the scrub-of-death scenario was what I was referring to when I said hyperbolic. In the end, it doesn't matter whether the FreeNAS people are right or wrong. The prevailing belief there is that you must use ECC, and if you don't, don't expect any love over there (iXsystems). They aren't the friendliest group to begin with... I would think the primary reason to use FreeNAS is that one wants ZFS (otherwise, there are probably better choices). If one wants ZFS, presumably one values the integrity of the data. Consequently, I don't know why one would risk that integrity by choosing non-ECC RAM. If the RAM corrupts the data before it is written to the filesystem, ZFS can't save you. Garbage in, garbage out. -tm
Post by copperpipe on Jan 10, 2018 17:46:56 GMT -5
To be honest, I don't use FreeNAS at all; I'm not a fan of these dedicated machines that only do one or two things and have to be configured using their special UI, etc. So I just use bog-standard Ubuntu 16.04 and put ZFS on pretty much all my disks these days, even my laptop and my removable USB backup drive, which stays in my Jeep at all times; it's encrypted using LUKS, and on top of that I put a ZFS filesystem. In the case of the USB disk, I use ZFS because it makes backups really, really nice: just create a snapshot on my main pool, then send the snapshot delta to my USB drive. I have hundreds of gigs in that pool, but it takes only seconds to take a snapshot and transfer the delta to my USB drive. It's far faster than even delta-based tools like rsync and unison. Other reasons to use ZFS are the deduplication feature (only one copy of each duplicate block) and the built-in compression. For my NAS, it's using mirrored stripes (like RAID 10).
So all these goodies are available on ZFS, and they're reason enough to use it over other systems, regardless of the ECC issue. Regarding your last point, that's true enough, but the same is true for every filesystem at that point. Which is my only disagreement with your post; I'm only disagreeing that ZFS requires ECC more than other filesystems.
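The snapshot-then-send workflow described above can be sketched like this (the pool and dataset names tank/data and backup are hypothetical placeholders, assuming ZFS on Linux):

```shell
# Minimal sketch of incremental ZFS backup to an external drive.
# Pool/dataset names (tank/data, backup) are made-up placeholders.

# One time: take a baseline snapshot and copy it to the backup pool.
zfs snapshot tank/data@base
zfs send tank/data@base | zfs receive backup/data

# Each backup run: snapshot, then send only the delta since last time.
zfs snapshot tank/data@2018-01-10
zfs send -i tank/data@base tank/data@2018-01-10 | zfs receive backup/data
```

The incremental send (`-i`) is why this is so fast: only the blocks changed since the previous snapshot cross the wire, no file-by-file scan as with rsync.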
Post by millst on Jan 10, 2018 23:04:45 GMT -5
Well, all of my posts were primarily in the context of FreeNAS. I thought I made sure to mention it regularly, as that was what started the discussion about ECC RAM.
True, the snapshots, compression, dedup, etc. are all nice features of ZFS. Although dedup is only worthwhile in limited use cases.
-tm
Post by pknaz on Jan 10, 2018 23:50:54 GMT -5
"I'm a total noob to all this stuff haha" Which is exactly why you should use a Synology! My thoughts exactly.
Post by creimes on Jan 11, 2018 0:53:28 GMT -5
Hey now, don't get me wrong, I'm a fast learner; I've just never dove into this world yet.
Post by pknaz on Jan 11, 2018 2:38:13 GMT -5
There is a lot of misinformation in this thread. Learning is great, and I'm a big proponent of "always learning". I'm sure you'll come up with a solution that works well for you.
Post by millst on Jan 11, 2018 11:04:39 GMT -5
Since you're getting started, I'd point you towards a commercial product. Not that roll-your-own devices are super complicated...but with something commercial you're going to get a polished product designed for the home user by a company that provides support. It should be a very good value and require much less time to get going.
The newer Synology devices use btrfs for the filesystem, which gives you many of the advanced features that came up with ZFS, e.g. snapshots.
-tm
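As a sketch of the btrfs snapshot feature mentioned above (the paths are hypothetical, and on a real Synology the DSM UI wraps this for you):

```shell
# Sketch: read-only btrfs snapshot of a subvolume. The paths assume
# /volume1/data is a btrfs subvolume; both paths are hypothetical.
btrfs subvolume snapshot -r /volume1/data /volume1/snapshots/data-2018-01-11

# List existing subvolumes and snapshots on the volume:
btrfs subvolume list /volume1
```

Like ZFS snapshots, these are copy-on-write: creating one is near-instant and consumes space only as the live data diverges from it.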
Post by KeithL on Jan 11, 2018 14:42:45 GMT -5
Personally, I just use duplicate drives. I can buy a PAIR of 4 TB WD MyPassport USB drives for $200 ($99 each). I store a separate but identical copy of my important data on each... and unplug them when I'm not using them. That way I have completely separate but identical physical drives, controllers, enclosures, and even cases. I've occasionally had a drive fail... but never two at the same time. I see a NAS as a convenient way to aggregate a lot of data and, in turn, be able to access it from multiple places across a network.
Post by KeithL on Jan 11, 2018 15:03:30 GMT -5
I would also like to point out, since no-one else has, that the way you CONFIGURE a NAS, or backup software, has a major influence on how secure it is.
For example, if your controller malfunctions, and writes corrupt data to your backup drive or NAS, then the damage is already done. And, depending on the situation, you may never know it until you go to read that backup someday.
That's why it always pays to check that "verify" box on your backup software (or your CD/DVD/Blu-Ray burner). When you do that, after the data is written, it is all read back and compared to the original. It takes twice as long, but you WILL be notified if your backup isn't perfect, sooner rather than later.
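The verify-after-write idea can be illustrated with generic tools; this is a minimal sketch using throwaway /tmp paths (all names here are made up for the example):

```shell
# Verify-after-write sketch: copy, then read the copy back and compare.
mkdir -p /tmp/demo-src /tmp/demo-backup
echo "important data" > /tmp/demo-src/file.txt

cp -r /tmp/demo-src/. /tmp/demo-backup/

# diff -r re-reads both trees and reports any mismatch; silence means
# the backup matched the original byte-for-byte.
diff -r /tmp/demo-src /tmp/demo-backup && echo "backup verified"
```

Backup programs that offer a "verify" checkbox do essentially this re-read-and-compare pass, which is why the job takes roughly twice as long.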
And, if you have really valuable files, it makes sense to use some sort of checksum software. You may not really be able to see if a few bits in a 10 GB video have gotten flipped, depending on where they are. But, if you calculate a checksum of that file, you can use it later to verify that the file is EXACTLY the same as it was before. When you verify a backup, the data is read and compared to the original, but you can only do that if the original is still there. A checksum stores a form of "digital signature" of each file, which you can use to confirm that it has remained unchanged, even without the original. Even though the checksum is small relative to the file, it can confirm with great certainty that the original remains unchanged. (A simple checksum won't enable you to fix a damaged file, but it will warn you so you can go find your backup copy and take steps to fix the problem.) In a typical usage scenario you might store a checksum file in each music folder on your server. Then, by simply issuing a command to "check all folder contents against their checksums", the entire drive can be confirmed to remain exactly as it was originally... or not. (And, if you think you've heard a problem, you can quickly confirm that the song you were just listening to hasn't gotten corrupted.)
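A minimal sketch of that per-folder checksum workflow, using sha256sum from GNU coreutils (the folder and file names are made up for the example):

```shell
# Sketch of folder-level checksumming with sha256sum (GNU coreutils).
mkdir -p /tmp/music-demo
echo "song bits" > /tmp/music-demo/track01.flac

cd /tmp/music-demo

# Create a checksum manifest for the folder's contents.
sha256sum track01.flac > checksums.sha256

# Later (even without the original source), verify the files still match;
# prints "track01.flac: OK" on success.
sha256sum -c checksums.sha256
```

Running the `-c` verification across every folder on a drive is the "check all folder contents against their checksums" command described above.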
Likewise, beginners should note that there are several different "types" of RAID; they go by number (for example, RAID 5 offers redundancy). Certain types store data redundantly and can reconstruct the data on a bad drive, even allowing you to replace a bad drive while everything is running (hot swap). Other types improve access speed but DON'T provide redundancy at all. (So read the directions and make sure you choose the right one when setting up that NAS drive.)
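For the curious, here is a sketch of how the common RAID levels are created with Linux mdadm; the device names are hypothetical, and the commands need root plus dedicated spare disks, so treat them as illustrative only:

```shell
# RAID 1 (mirror): redundant; survives one failed drive.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# RAID 0 (stripe): faster, but NO redundancy; one failed drive
# loses the whole array.
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdd /dev/sde

# RAID 5 (striping with distributed parity): survives one failed
# drive; usable capacity is N-1 disks.
mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sdf /dev/sdg /dev/sdh
```

The level number alone tells you nothing about safety, which is exactly the beginner trap described above: RAID 0 trades all redundancy for speed, while 1 and 5 can rebuild after a drive failure.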
Post by creimes on Jan 11, 2018 15:10:17 GMT -5
I could grab two Seagate 8TB external drives right now at $224 each and do exactly that; I was actually looking at that last night while browsing Newegg. Then I could add some sort of NAS at a later date for another case, more storage, and better media distribution. In the end it comes down to available funds; after Christmas is never an easy time haha. Chad
Post by pknaz on Jan 11, 2018 20:23:27 GMT -5
The storage industry as a whole has moved on from proprietary controllers, especially those of the true-RAID variety. Everyone is moving toward software solutions that don't rely on proprietary hardware. This is true for QNAP, Synology, Drobo, etc. Even in the enterprise space, "big iron" Storage Area Networks and Network Attached Storage (SAN/NAS) haven't used proprietary controller-based arrays in quite a number of years. EMC, NetApp, PureStorage, Tintri, etc. (the list goes on) use Commercial Off-The-Shelf (COTS) servers (like Dell R510s, R710s, etc.) as their "building blocks".
Post by pknaz on Jan 11, 2018 20:31:40 GMT -5
This was definitely true of controller-based arrays: you basically had three flavors of RAID (Redundant Array of Independent/Inexpensive Disks): 1) mirrors, 2) stripes, 3) parity. Each of these main types had multiple subtypes allowing flexibility, and you could even combine some types to produce "super types". As the industry has moved away from hardware solutions, the software solutions have become much more flexible in their approach, providing more of the benefits with fewer of the drawbacks. For instance, a number of software solutions will do some cool things like write a mirror copy initially, avoiding the need to calculate parity data up front, and then, once the writes are committed to disk and during periods of low system utilization, calculate parity data to reduce the on-disk footprint of the data. A lot of them will also support things like deduplication and compression, which dramatically reduce the amount of space consumed on disk. This has less of an impact on already-compressed data, of course.