ZFS Has Finally Arrived for Mac
How does btrfs compare to ZFS? I must admit, after reading the latest updates on ZFS-FUSE, it seems the project has been successfully revived (development had stalled for a long time). Finally, silent data corruption is unlikely to render a btrfs file system unmountable, given that there are two checksummed copies of metadata. They have finally surpassed the state that ZEVO was in on OS X. I have to say, ZFS is absolutely needed for Mac OS X. I've been beating this drum.
I suspect an unmentioned goal of this project is eventually to make the installation of OpenZFS on Linux (and other *nix operating systems) quick and simple, legally routing around Oracle's license restrictions.[1] Unsurprisingly, the list of supporting companies[2] does not include Oracle, which surely isn't happy about this project. [1] The source code upon which OpenZFS is based was provided by Oracle (Sun) under the CDDL license, which prevents OpenZFS from being distributed in binary form as part of the Linux kernel. Good questions. Booting Linux from ZFS is a bit more of an advanced project; here is the guide for Ubuntu: The ZFS packages are not fully pre-compiled; they use the dkms system, where the kernel-specific parts are recompiled whenever the kernel changes. This greatly reduces the maintenance work, and it is a system that has been in use by other out-of-tree kernel modules for a decade.
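As a minimal sketch of what that flow looks like on an apt-based system (package names vary by release; zfs-dkms and zfsutils-linux are the names recent Ubuntu uses):

    # Install the userland tools plus the dkms source package:
    sudo apt-get install zfs-dkms zfsutils-linux
    # dkms rebuilds the kernel modules automatically on every kernel
    # upgrade; you can check what has been built for which kernels:
    dkms status zfs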
It's not perfect, but it does largely reduce the distribution-specific maintenance burden. Of course, when a new kernel version comes out, ZFS may have to make adjustments to support it, and ZFS may lag a little. However, there are currently significant resources being dedicated to ZFS on Linux development and packaging, and for the last 18 months they've kept up pretty well.
It isn't perfect. I wish ZFS could be in the Linux source tree. Although I saw that ZFS on Linux was available, I wasn't ready to try it until last year. However, third-party file systems have a long tradition, such as AFS, vxfs, and numerous SAN file systems. ZFS on Linux seems to be doing very well. It's a great pity more ZFS advocates aren't taking this as an opportunity to get better acquainted with FreeBSD.
I've been running FreeBSD on my home server for about a decade now (even before ZFS) and have loved it, though I used to run Linux VMs on top of it. It's only recently that I've decided to go fully FreeBSD on it, using jails instead of hardware virtualisation. I honestly can't understand why I waited so long to do so.
It's proven to be a far more elegant solution for what I needed. While I still run Linux on my desktop and work with Solaris and Linux in my day job, FreeBSD seems a vastly overlooked alternative these days, which I think is a great pity. It's stable, proven, and dead easy to administer. But to each their own, I guess. I built a 16TB raidz home office server this weekend using Ubuntu and ZFS on Linux.
It worked great out of the box. I was even able to import a pool created on another server without any problem. Of course, your mileage may vary.
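For reference, that whole setup is only a couple of commands. A sketch, with hypothetical device and pool names:

    # Create a raidz1 pool named "tank" across five disks:
    sudo zpool create tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # Importing a pool that was built on another machine:
    sudo zpool import tank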
I was using FreeNAS previously (mainly for the ZFS support, to keep my data safe without spending a bunch on RAID controllers) and kept getting bogged down by feeling the need to grok jails. I think jails are terrific in theory, but a pain to work with if you're not intimately familiar with them. Maybe it's just the way it works on FreeNAS, but newly created jails (by default on FreeNAS) were getting new virtual IP addresses, which really threw me for a loop.
Add to that the frustration of trying to get all the permissions correct just to make a few different services work together, and it started to get really painful. The drop-dead simplicity of setting up exactly what I had previously on a fresh Ubuntu box with the native ZFS port really warmed my cockles. I have been doing lots of research on this recently, and here is the main thing that makes ZFS win every time: when you have a RAID of any kind, you need to scrub it periodically, meaning compare the data on each drive byte by byte to all the other drives (let's assume we are talking just about mirroring). So if you have two drives in an mdadm array and the scrubbing process finds that a block differs from drive A to drive B, and neither drive reports an error, then the scrubber simply takes the block from the highest-numbered drive, declares that the correct data, and copies it to the other drive. What's worse is that even if you use three or more drives, Linux software RAID does the same thing, despite having more information available. ZFS, on the other hand, does its scrubbing by checksums, so it knows which drive has the correct copy of the block.
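The difference shows up in the tools themselves. A sketch ("tank" and md0 are hypothetical names):

    # ZFS: every block is verified against its checksum and silently
    # repaired from whichever copy checks out:
    sudo zpool scrub tank
    sudo zpool status -v tank          # reports what was repaired
    # mdadm: a check pass can only count mismatches; it has no way to
    # know which copy is the good one:
    echo check | sudo tee /sys/block/md0/md/sync_action
    cat /sys/block/md0/md/mismatch_cnt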
How often does this happen? According to what I have been reading, without ECC RAM and without ZFS, your machines get roughly one corrupt bit per day. In other words, that could be a few corrupt files per week. My conclusion is that, as I am building my NAS, I want ECC RAM and ZFS for anything I cannot easily replicate. There is a great advantage to combining the filesystem with the disk mapper.
You don't have to use different commands to add and grow disks and the partitions on those disks. Your filesystem knows what it's living on and stores data accordingly. ZFS has more advanced file-system features, like sending snapshots, even of block devices; btrfs is still working on feature parity with this.
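Snapshot sending is a one-liner. A sketch, where 'backuphost' and the dataset names are hypothetical:

    zfs snapshot tank/data@tuesday
    zfs send tank/data@tuesday | ssh backuphost zfs recv backup/data
    # Incremental sends ship only the delta between two snapshots:
    zfs send -i @monday tank/data@tuesday | ssh backuphost zfs recv backup/data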
ZFS is much more stable than other FS with similar features. The big disadvantage is the memory and CPU requirement. If your server has plenty of memory and CPU, I'd use ZFS. If you're running on an ARM NAS with 128MB RAM, I'd use something less fancy.
I have trouble understanding the position of OpenZFS. My understanding is that Oracle ZFS, which cannot be integrated into mainline Linux, can still be distributed as a separate project (zfsonlinux.org). On the other hand, Linux kernel developers have started btrfs, which is inspired by, but incompatible with, ZFS. So, what is this project? I can only imagine that this is either a clean-room reimplementation of ZFS, or a fork from before the license changed (but I think it was CDDL from the start).
A more interesting question would be: who should use or develop this? IMHO this will never be on par with 'the' ZFS, so btrfs is where everyone's energy should go. (Also, by reimplementing a ZFS product you're supporting them in a way.) You should read the link; it answers many of your questions. For example, it's not a reimplementation of ZFS; it's an organization to coordinate between the many different groups that are actively using ZFS in their products. Parts of btrfs may be inspired by ZFS, but btrfs doesn't even aspire to some of ZFS's niceties like raidz3.
And if you're looking at ever using 4TB disks, a third level of parity should be a requirement. I'm a fan of btrfs, but I'm a much, much bigger fan of ZFS. ZFS will almost certainly be at the base of our next storage buildout, and btrfs probably will not. The only thing keeping btrfs relevant is that the GPL and CDDL interact poorly. But when there's a great, well-tested, and higher-tech code base, why should people abandon it? ZFS is used by many, many people in production, btrfs by very few, and even if btrfs hits everything on its development roadmap, it won't be the equal of ZFS. I read the link, and I disagree that it answers these questions.
It is presented as 'the truly open source successor to the ZFS project', which is why I understood it was a fork. I agree that btrfs is not ready for production (and I'm not ready to hand my precious bytes to it yet), and it will probably never be as feature-rich as ZFS. But the licensing issue will always exist, and Linux needs a modern file system - btrfs. That said, a lot of people and businesses use ZFS on Linux in production, so it's nice that there is a central place where they can find documentation about it. My apologies; on reading it again, my first sentence comes across far snarkier than I meant it to! I thought the announcement was clear, but reading it again, I can see some ambiguities.
I do agree that Linux probably could use something better than ext4. But Linux also needs something like ZFS. If Linux's license makes it too difficult to run ZFS, then I can run FreeBSD, Illumos, OpenIndiana, or whatever other open source OS I want in order to get ZFS. But I can't replace ZFS with btrfs, and it doesn't look like btrfs wants to be able to replace ZFS. Well, it's like a fork, but the other way around. After Oracle bought Sun, the other projects that had adopted ZFS continued fixing it and adding new features. So FreeBSD has its version, Illumos (a fork of Solaris) has its own, Delphix its own, and so on.
Those projects were using patches from each other, but managing all of that became problematic. So they basically designated one central place for ZFS development, which all of the projects will use. So now, instead of many ZFS forks, there are just two: Oracle's, which is now closed source, and OpenZFS, which will be the official open source ZFS that all open source systems use.
Unfortunately it will still be CDDL, since no one in that project has the power to change it; that would require Oracle and all contributors to agree to a license change. OpenZFS is a fork from before Oracle took ZFS closed again. It still has the CDDL license. OpenZFS is directly used by the people who maintain ZFS on various platforms (such as ZFS on Linux). The resulting ZFS implementation can be used in many places and ways. For instance, my employer believes that ZFS on Linux is safer and more reliable than btrfs.
Likewise, it is used on FreeBSD and various illumos (open source fork of Solaris) systems as the default file system. While Oracle has presumably added new features to their ZFS, numerous companies have been working on OpenZFS for some time now, adding their own new features that Oracle doesn't have, such as LZ4 compression. Basically, this is a ZFS fork, forced by the stupidity of Oracle. The whole issue is that Oracle stopped publishing the source code of new ZFS versions some years ago: there are new ZFS versions from Oracle (pool versions 29 through 34, which include some new, backward-incompatible features), but no new open source code has been released. The open source ZFS code has been almost stagnant, in maintenance mode, all this time. OpenZFS is an attempt to keep open source ZFS alive. Since Oracle doesn't work with open source anymore, the people who use ZFS (the BSDs, Nexenta, etc.) have to do their own thing and evolve the filesystem in their own way.
ZFS has the notion of version numbers - for both the zpool and filesystem. The last FOSS releases are zpool 28 and zfs 5.
There have been subsequent releases of ZFS by Oracle, but versions 28 and 5 are the latest used by all the open source implementations. What the FOSS community has done is add 'feature flags' to ZFS instead of constantly bumping the version number. So encryption is a feature on top of ZFS, but the encryption introduced in FreeNAS for ZFS isn't the same as the encryption in Oracle's zpool v30. I'm glad to see open-zfs.org for exactly the reasons you mention.
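You can see the split directly in the tools on a feature-flags build. A sketch; 'tank' is a hypothetical pool:

    # Lists the legacy on-disk versions and the feature flags this build knows:
    zpool upgrade -v
    # On a pool, enabled feature flags appear as feature@ properties:
    zpool get all tank | grep feature@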
I've been running MacZFS for years, but it's currently way behind, at zpool version 8 and zfs version 2. At one point I put a FreeBSD machine in the basement, with plans to stream snapshot diffs for backup, but the version mismatches prevented me from doing this in a smart (zfs-send-based, rather than rsync-based) way. It looks like this central coordination point ought to help the porting of features/versions from the Linux branch to the Mac branch. We've had a very positive experience with btrfs, using it on a redundant system for a few years. It's a lot more flexible than ZFS in some respects. For instance, all the snapshots are modifiable and can themselves be snapshotted. You can build up a graph of snapshots, which is not possible in ZFS, which enforces a strict hierarchy.
We ran btrfs on top of mdadm to get RAID and integrity checking. Btrfs can do things like copy particular files using copy-on-write, which is really cool. Btrfs also supports offline deduplication, which ZFS doesn't; this is very useful if you want to deduplicate when the system is not otherwise in use, and it avoids the overhead of keeping hashes in memory all the time. I think '10 years behind' is exaggerating where it's at. For instance, people using ZFS on Linux often have problems with running out of memory and so on, even on systems with very large memory.
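Both of those btrfs tricks are one-liners. A sketch, with hypothetical paths (and assuming /data is a subvolume):

    # Instant copy-on-write copy of one file; extents are shared until modified:
    cp --reflink=always big.db big.db.copy
    # Snapshots are writable by default; -r makes a read-only one:
    btrfs subvolume snapshot /data /data/snap
    btrfs subvolume snapshot -r /data /data/snap-ro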
Modifiable snapshots are called clones in ZFS. Everything you said about snapshots is possible in ZFS with clones plus snapshots, and clones are just as instant as snapshots. Running ZFS on Linux is not a good idea; that's why I'm currently using OpenSolaris, and maybe switching to FreeBSD in the future, for file and DB servers. The problem with the btrfs time estimate is that it takes a long time for a filesystem to become reliable enough that you can put important data on it.
ZFS has crossed that threshold; it will take years and years for btrfs. ZFS doesn't support an arbitrary graph of snapshots, even with cloning. There is a 'zfs promote' command, but it can only be used in certain circumstances. For example, I wanted to take the latest backup of a system and rsync older and older backups onto that backup, making a snapshot each time.
I then wanted the snapshot of the newest data to be the 'HEAD'. I also wanted to gradually delete the older data. This setup was not possible with ZFS, because the child-parent relationships were in the wrong order. With btrfs it was simple.
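For reference, this is the clone machinery under discussion. A sketch with hypothetical dataset names:

    zfs snapshot tank/data@base
    zfs clone tank/data@base tank/experiment   # writable clone, instant
    # 'promote' reverses one clone/origin link so the clone can outlive
    # its parent, but it only works along a single chain, not a graph:
    zfs promote tank/experiment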
You know why this is huge? It's the first piece of kernel code that is and can be shared by different open-source operating systems in their kernel codebases! I hope more technologies move to this model. While a democratic open-source ecosystem is the rule in userland these days, in kernel land this is not true, and that may force us to use one OS instead of another because of some feature that only that OS has, even if we'd rather install the other one. For instance, I love Linux, but I also love the BSDs and want them to grow as much as Linux did. If the good things created in one OS could also be used in another, via a proper port to that kernel, we might not be bullied into accepting one OS in favor of another and being stuck with it!
I hope this movement makes its way to other kernel-land technologies! I would tentatively disagree with 'raising the bar' and describe it using my favorite MySQL/Postgres analogy: they have fundamentally different philosophies but do about the same thing. For example, if you want to do something software-RAID-ish, ZFS has the philosophy that it should be done at the filesystem layer, not as a virtual device like every other Linux filesystem, ever. It's not a new feature to be able to do RAID, but it's new to embed RAID in the filesystem layer itself. Linux-style virtual RAID devices don't care if you build a FAT32 on top of /dev/md0. There are other examples of the same philosophy in ZFS. For example, everywhere else in Linux, if you want some manner of 'volume manager' you simply use LVM.
ZFS has its own interesting little volume manager, which relates to snapshots. It's exactly the same with encryption.
Every other implementation on Linux uses a loop device (or, these days, the device mapper) and your choice of algorithm. ZFS shoves all that inside the filesystem.
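The modular approach looks something like this. A sketch; the device and mapping names are hypothetical:

    # Encryption as its own block layer (dm-crypt/LUKS), with the
    # filesystem stacked on top and unaware of it:
    cryptsetup luksFormat /dev/sdb1
    cryptsetup open /dev/sdb1 cryptdata
    mkfs.ext4 /dev/mapper/cryptdata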
Another philosophical decision: every other Linux filesystem doesn't scrub but only fscks metadata, so logically ZFS implements the exact opposite. Although ZFS supporters are technically telling the truth when they run around saying that only ZFS can provide software RAID, or that only ZFS has a volume manager and ext2/3/4 does not, it's not relevant: I've had LVM and software RAID and all that for many years on existing Linux stuff. One of the few true features ZFS provides is allowing ridiculously big filesystems, which is cool. It is mostly a philosophical difference between modularity and monolithic design, with pretty much everything else being modular and ZFS being extremely monolithic. In that way, I don't think ZFS has prodded any innovation at all in other filesystems, except maybe btrfs, which I haven't been following because my data is too valuable to experiment upon and filesystems aren't my thing. I don't see the iso9660 FS driver adding native volume management, snapshotting, software RAID, and encryption any time soon. 'For example, if you want to do something software-RAID-ish, ZFS has the philosophy that it should be done at the filesystem layer, not as a virtual device like every other Linux filesystem, ever.' Not exactly true.
RAID and mirroring logically sit at the zpool layer, and therefore anything on top of a given zpool has the zpool's RAID/mirror characteristics. This may be a filesystem, but it could also be a zvol. A zvol is analogous to your /dev/md0 block device in that you can put a FAT32 filesystem on top of it and still benefit from the underlying redundancy, parity, and checksumming features of ZFS. Addendum: strictly speaking, redundancy (RAID or mirror) configuration is on a vdev, and a zpool is composed of one or more vdevs over which data is striped.
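A sketch of that, with hypothetical pool and volume names:

    # -V makes a block device (a "zvol") backed by the pool:
    zfs create -V 10G tank/vol0
    # Any filesystem on it inherits the pool's redundancy and checksums:
    mkfs.vfat /dev/zvol/tank/vol0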
Yes, in theory you could probably come up with a weird pathological scenario where a monolithic design is slower and a modular design is faster, but that usually doesn't happen. Usually, turning a modular design into a monolithic design for a tiny performance gain turns into an epic disaster/mistake. Well, think about this. Suppose you're running RAID-1 with two drives, and you've got some filesystem (maybe ext4, but that doesn't matter) running on top of that.
You create one huge file, and then a little while later you delete it. And right after that, one of the disks dies, and you replace it. In this case, your RAID layer doesn't know that most of the data written to the original drive is junk, and that the only really important bits are some inodes and directory entries consuming a few MB near the end of the disk. It has to re-mirror the entire drive from the original to the replacement before they are in sync again and you are fully protected. Even with modern drives, that leaves a large window of time during which you're not protected. If, on the other hand, your RAID layer has a thicker interface to the filesystem than just a dumb block store, it can mirror just that little bit of metadata, and within seconds you're in sync again and fully protected. That's just one example.
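This is exactly what a ZFS resilver exploits. A sketch; the pool and device names are hypothetical:

    # Replace a failed disk; only allocated blocks are copied ("resilvered"),
    # so a mostly-empty pool is fully protected again in minutes, not hours:
    zpool replace tank /dev/sdb /dev/sdc
    zpool status tank    # resilver progress tracks live data, not raw capacity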
There are many more; go read the stories about people complaining about RAID-5 and RAID-6 performance. The GNU link says the CDDL is not compatible with the GPL. What Linus is saying is that if a kernel loadable module relies on kernel internals, then it may count as a derived work and must be licensed under a GPL-compatible license. If it does not - for example, if it is a filesystem that was ported to Linux (like AFS, and ZFS fits here too, IMO) - then it may not be a derived work, and can be licensed under a GPL-incompatible license (like the CDDL). Obviously, including AFS or ZFS in the kernel source WOULD DEFINITELY create a derived work, and would require AFS/ZFS to be licensed under the GPL.
So Linus is only explaining how a kernel loadable module could be licensed under a GPL-incompatible license. It's really more than simply adding it to a repository.
I can store two text files that have nothing to do with each other on my HD or in a repo or wherever, and I would not be creating a derived work. But if I have file A (kernel source) and file B (zfs code), and I compile A+B into a binary (the kernel image), then I have a single work that has been derived from A and B.
When it's suggested that ZFS be added to the kernel repo, what's really being said is that a single work (kernel+zfs) should be created. In contrast, ZFS is currently a kernel loadable module: we have the kernel binary and the module binary, two separate works. (What Linus was clarifying was how integrated the module could be with the kernel and still be considered a separate work.) From my mostly second-hand knowledge, in practice even lawyers specializing in IP wouldn't have a solid answer for how to make this distinction without looking at a specific case in detail.
If it came up in a trial, the two sides would make versions of the arguments presented here: one would emphasize that the sources have now been 'added to the kernel tree', a unified project managed with close integration, etc., while the other side would argue they were merely placed alongside the kernel sources in a version control system, like collecting short stories in an anthology. A derivative work is one that extends upon an original work. That's a simple definition of a derivative work, but it doesn't include any clear examples. The FSF gives the example that linking creates a derivative work, and incorporates that line of thinking into the LGPL.
The reasoning behind it is that a linked work's existence is based upon an original work and cannot exist without it. As such, linking is an easy example of where the line into derivative work has been crossed. In the end, it will be up to the courts to decide what is or isn't a derivative work in software. The statutory definition is incomplete, and the concept of a derivative work is thus interpreted with reference to explanatory case law.
Each time a music company wins a lawsuit against remixes, derivative work extends its grasp. Each time a game like WoW wins a lawsuit against bot software, one more step is taken. In light of the precedential cases, I consider the FSF example of linking to be a quite conservative definition of derivation. It might not be true every time and for every possible use of linking, but it should be true enough in the general case. Is there a strong argument against that interpretation?
Don't they claim that the linking rule works via derivation? As far as I understand it, the FSF would tell you that anything linked is always derivative. But that doesn't mean it's true. If you could prove that a particular instance of linking to a library was not derivative, would they still claim your program had to be GPL? As I understand it, the LGPL exists to (1) provide legal certainty and (2) allow some amount of external derivation where necessary.
And it's easy to create an artificial dynamic-linking case where there is provably no derivation, using multiple libraries with the same API. 'Don't they claim that the linking rule works via derivation?' It was the closest thing I could think of where the two pieces of software are fairly separate. I mean, you could have a huge proprietary program, and a developer calls gsl_pow_int from the GNU Scientific Library, and the entire program must be licensed under the GPL. I think that's about as close as you're going to get.
If you're looking for a case where the FSF said a piece of software had to be licensed under the GPL, even though it was NOT a derivative work, I don't think you'll find it. The reason it must be a derivative is copyright law. The GPL can't unilaterally change that.
Yes, as evidenced on slide 8 of that link: 'Solaris Porting Layer - adds stable Solaris/Illumos interfaces (taskqs, lists, condition variables, rwlocks, memory allocators, etc.) - layers on top of Linux equivalents if available - Solaris-specific interfaces were implemented from scratch.' It doesn't (currently) use the Linux page cache, which causes quite a few ancillary issues. The idea is awesome, but this will simply never be 'natively' in Linux without a rewrite of much of the core. Generally, it's a complicated area.
Generally, the only safe way to sneak out from under an existing license would be a black-box rewrite, done by people who hadn't looked at the source of the original version. Otherwise the original author could claim that it's a derivative work, and thus falls under the terms of the original license. The CDDL in particular specifies that any modifications (changes, additions, or deletions to the source code or its files) are also under the CDDL; see sections 3.2 and 3.4 along with its definition of Modification. However, even a black-box rewrite could still fall foul of any patents granted to the original creators.
I dabbled in ZFS on FreeBSD and OpenSolaris three years back. It was nice and all, but it hasn't been worth the overhead of running another OS to get its features since. I'm therefore glad to see some unity in the ZFS community to create more trust around its use on Linux, and proud to see my beloved Gentoo in the list of standard-bearers! Bring on the unrivalled pragmatism.
Observation: Gentoo packages still point to, not to. Is this the same code? I suppose so. Further observation: it looks like the kernel code, as packaged by Gentoo, can only be compiled as a module.
Generally I disable LKMs on production systems. The downvote wasn't me. I agree with the sentiment regarding requirements. However, I disagree with the bit regarding data redundancy and integrity. You can do it other ways, but that doesn't make it a good idea; it's a bit like Greenspun's Tenth Rule, but for data. ZFS, or something like it (and there isn't anything else like it), is the foundation of any modern setup where data is important.
'Because it's not on Linux' is a terrible reason. If your data is important, then you'll need to look somewhere other than Linux for the servers where the data sleeps. The importance of the data requires it. If the data is relatively unimportant, then you're right.
There are few domains where that's true nowadays, though. 'I disagree with the bit regarding data redundancy and integrity.' You are welcome to disagree, but I'd like to see some reasoning. 'You can do it other ways, but that doesn't make it a good idea; it's a bit like Greenspun's Tenth Rule, but for data.' I had to go searching for that rule, which seems to be Lisp snobbery that is somewhat justified in theory but almost irrelevant in practice.
Right tool for the job, and all that. It's such a broken metaphor for storage consistency or availability that I'm not going to comment further. 'ZFS, or something like it (and there isn't anything else like it), is the foundation of any modern setup.' Do you honestly view ZFS as the be-all and end-all of data storage? Other filesystems can offer snapshots and high availability, as can other elements within a storage system. For example, on Linux, DRBD is a block device driver that provides even more powerful availability guarantees than any conventional (single-host-homed) filesystem. Likewise, LVM2 has provided block-layer snapshots for ages.
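For instance, a sketch with hypothetical volume-group and volume names:

    # A block-layer snapshot with LVM2, no filesystem cooperation required:
    lvcreate --snapshot --size 5G --name dbsnap /dev/vg0/dbvol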
Similarly, Linux is unsurprisingly the most vibrant platform for cluster filesystems. Then there are also other great general-purpose tools such as RAID, signatures/checksums, and so on. 'If your data is important, then you'll need to look elsewhere than Linux for the servers where the data sleeps.' That's just ridiculous. I guess you're going to tell me most of the world's data lives on ZFS? Google uses ZFS?
Facebook uses ZFS? Yahoo uses ZFS? Let's be realistic here: you're absolutely and demonstrably wrong, and have provided no compelling argument. 'Which seems to be Lisp snobbery that is somewhat justified in theory but almost irrelevant in practice.' I agree, but you're missing the forest for the trees here.
Please accept my arguments in good faith. Do you honestly view ZFS as the be-all and end-all of data storage?
For local storage? Yes, it's the best we have. 'Provided no compelling argument.' How many filesystems have Merkle trees? You need something like them to avoid phantom reads, phantom writes, and silent corruption.
How many filesystems have duplicate metadata blocks, duplicate what's analogous to the superblock several times, and can duplicate data a user-specified number of times? And then check their validity using the Merkle tree property above to validate reads? How many filesystems offer free and instant snapshots?
As many as you want? Those things are wonderful for databases. How many filesystems offer software RAID? Hardware RAID is a dodgy idea, because it's a complex binary blob in firmware you have no insight into when something goes wrong (speaking from bitter experience, things go wrong). Furthermore some hardware RAID suffers from a write hole. How many filesystems are transactional? And allow you to roll back if a transaction becomes unfixably corrupted?
How many can replicate? How many use SSDs efficiently?
How many have been in heavy industrial use for years? ZFS has all of that (not some of it; that's the point), and more; a few of those knobs are sketched below. There's nothing else like it.
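A sketch of some of them, with hypothetical dataset names:

    zfs set copies=2 tank/important        # extra user-specified data copies
    zfs set checksum=sha256 tank/important # stronger Merkle-tree checksums
    zfs snapshot tank/db@pre-migration     # instant, free until data diverges
    zfs rollback tank/db@pre-migration     # transactional roll-back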
Btrfs probably will be one day as well, but not yet. So, no, it's not ridiculous. I've been down this trail of tears before, and ZFS has made life so much better.
At least I don't need to dread a number in my database silently flipping a digit anymore - if that scenario doesn't give you hives, then I really don't know what to say. I have a lot of respect for those working on this project, but realistically, if you use Linux, using an out-of-tree filesystem is just asking for pain - lots of it. I would never use this on a production system. You know how painful out-of-tree video drivers are? Imagine that, only now with the potential for data loss and divergent on-disk formats. And if it's your root fs, you can forget about booting if there's a problem.
Sure, ZFS has a great reputation, but a lot of that came from how well integrated it was into Solaris and how much QA was done on it. Neither of those things was ever true (or is going to be true in the future) for the various ZFS-on-Linux projects (yes, there are multiple). The comments about btrfs are about five years out of date. SuSE has already shipped btrfs in their 'stable' 11.1 distribution, and Red Hat is going to do so in RHEL7. Give it a chance. 'The comments about btrfs are about 5 years out of date.' I've tried it, and it still has pain points I'd rather not have in my filesystem.
It's like ZFS almost a decade ago (and I'm not talking about features). Comparing ZFS on Linux with btrfs on Linux, though, right now I'd still go with btrfs. 'Neither of those things was ever true (or is going to be true in the future) for the various ZFS-on-Linux projects (yes, there are multiple).' I believe there is a shift in this regard, as demonstrated by the Gentoo project's integration of ZFS. ZFS also has 'general lackluster performance' in areas like memory use (it requires tons of it).
It's inherent in the design of a copy-on-write filesystem. According to Ted Unangst: 'ZFS wants a lot of memory. A lot lot lot of memory. So much memory, the kernel address space has trouble wrapping its arms around ZFS. I haven't studied it extensively, but the hack of pushing some of the cache off into higher memory and accessing it through a small window may even work.' Different filesystems are good for different things.
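That appetite is at least tunable on ZFS on Linux. A sketch; the 4 GiB figure is an arbitrary example:

    # Cap the ARC (ZFS's in-memory cache) via a module parameter, in bytes:
    echo "options zfs zfs_arc_max=4294967296" | sudo tee /etc/modprobe.d/zfs.conf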
If you want a filesystem that has subvolumes, copy-on-write snapshots, built-in RAID, transactions, space-efficient packing of small files, batch deduplication, checksums on data and metadata, and so forth, you have to pay a price - just the same way that running Apache with all the bells and whistles is not going to be as fast as nginx. 'ZFS also has general lackluster performance in areas like memory use (it requires tons of it). It's inherent in the design of a copy-on-write filesystem.'
Those benchmarks aren't about CPU or memory consumption; these days a good filesystem probably should trade memory and CPU for increased performance. Those benchmarks are about throughput and latency. 'Just the same way that running Apache with all the bells and whistles is not going to be as fast as nginx.' Except ZFS generally performs very well compared to other filesystems. When it first came out, it had all kinds of ugly corner cases where it performed poorly, but it seems to do great these days.