|Introduced||Stable: yet to be released; unstable: Linux kernel 2.6.29, March 2009|
|Max. file size||16 EiB (8 EiB under Linux due to a kernel limit)|
|Max. number of files||2^64|
|Max. filename length||255 bytes|
|Max. volume size||16 EiB|
|Allowed characters in filenames||All except '/' and NUL ('\0')|
|Dates recorded||modification (mtime), attribute modification (ctime), access (atime)|
|Attributes||POSIX, extended attributes|
|File system permissions||POSIX, ACL|
|Transparent compression||Yes (zlib, LZO; Snappy and LZ4 planned)|
|Data deduplication||Yes (out-of-band)|
|Supported operating systems||Linux|
Btrfs (B-tree file system, variously pronounced: "Butter F S", "Better F S", "B-tree F S", or simply "Bee Tee Arr Eff Ess") is a GPL-licensed experimental copy-on-write file system for Linux. Development began at Oracle Corporation in 2007. It is still under heavy development and marked as unstable; in particular, when the file system becomes full, no-space conditions can arise that may make it difficult even to delete files.
Btrfs is intended to address the lack of pooling, snapshots, checksums, and integral multi-device spanning in Linux file systems, these features being crucial as Linux use scales upward into the larger storage configurations common in enterprise. Chris Mason, the principal Btrfs author, has stated that its goal was "to let Linux scale for the storage that will be available. Scaling is not just about addressing the storage but also means being able to administer and to manage it with a clean interface that lets people see what's being used and makes it more reliable."
In 2008, the principal developer of the ext3 and ext4 file systems, Theodore Ts'o, stated that although ext4 has improved features, it is not a major advance, it uses old technology, and is a stop-gap; Ts'o believes that Btrfs is the better direction because "it offers improvements in scalability, reliability, and ease of management". Btrfs also has "a number of the same design ideas that reiser3/4 had".
- 1 History
- 2 Features
- 3 Design
- 4 See also
- 5 References
- 6 External links
The core data structure of Btrfs, the copy-on-write B-tree, was originally proposed by IBM researcher Ohad Rodeh at a presentation at USENIX 2007. Chris Mason, an engineer working on ReiserFS for SUSE at the time, joined Oracle later that year and began work on a new file system based on these B-trees.
Btrfs 1.0 (with finalized on-disk format) was originally slated for a late 2008 release, and was finally accepted into the mainline kernel with version 2.6.29 in 2009. Several Linux distributions began offering Btrfs as an experimental choice of root file system during installation, including Arch Linux, openSUSE 11.3, SLES 11 SP1, Ubuntu 10.10, Sabayon Linux, Red Hat Enterprise Linux 6, Fedora 15, MeeGo, Debian, and Slackware 13.37. In the summer of 2012, several Linux distributions moved Btrfs from experimental to production/supported status, including SLES 11 SP2 and Oracle Linux 5 and 6 with the Unbreakable Enterprise Kernel Release 2.
- Mostly self-healing in some configurations, due to the nature of copy-on-write
- Online defragmentation
- Online volume growth and shrinking
- Online block device addition and removal
- Online balancing (movement of objects between block devices to balance load)
- Offline filesystem check
- Online data scrubbing for finding errors and automatically fixing them for files with redundant copies
- RAID 0, RAID 1, RAID 5, RAID 6 and RAID 10
- Subvolumes (one or more separately mountable filesystem roots within each disk partition)
- Transparent compression (zlib and LZO)
- Snapshots (read-only or copy-on-write clones of subvolumes)
- File cloning (copy-on-write on individual files, or byte ranges thereof)
- Checksums on data and metadata (CRC-32C)
- In-place conversion (with rollback) from ext3/4 to Btrfs
- File system seeding (Btrfs on read-only storage used as a copy-on-write backing for a writeable Btrfs)
- Block discard support (reclaims space on some virtualized setups and improves wear leveling on SSDs with TRIM)
- Send/receive (saving diffs between snapshots to a binary stream)
- Hierarchical per-subvolume quotas
- Out-of-band data deduplication (requires userspace tools)
Planned features include:
- In-band data deduplication
- Online filesystem check
- Very fast offline filesystem check
- Object-level RAID 0, RAID 1, and RAID 10
- Incremental backup
- Ability to handle swap files and swap partitions
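Many of the online-management features above are driven from the btrfs user-space tool and mount options. A minimal sketch, assuming a mounted Btrfs volume; the device names and mount points are placeholders, exact syntax varies between btrfs-progs versions, and root privileges are required:

```shell
# Grow the storage pool online by adding a device, then rebalance
# existing data and metadata across all devices
btrfs device add /dev/sdc /mnt
btrfs balance start /mnt

# Shrink the file system online by 2 GiB
btrfs filesystem resize -2g /mnt

# Defragment a file online
btrfs filesystem defragment /mnt/somefile

# Mount with transparent compression (zlib or lzo)
mount -o compress=lzo /dev/sdb1 /mnt
```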
In 2009, Btrfs was expected to offer a feature set comparable to that of ZFS, developed by Sun Microsystems. After Oracle's acquisition of Sun in 2009, Mason and Oracle decided to continue with Btrfs development.
Btrfs provides a clone operation which atomically creates a copy-on-write snapshot of a file. Such cloned files are sometimes referred to as reflinks, in light of the associated Linux kernel system calls.
By cloning, the file system does not create a new link pointing to an existing inode; instead, it creates a new inode that initially shares the same disk blocks as the original file. As a result, cloning works only within the boundaries of the same Btrfs file system, though since Linux kernel version 3.6 it can cross the boundaries of subvolumes. The actual data blocks are not duplicated; at the same time, due to the copy-on-write nature of cloning, modifications to any of the cloned files are not visible in their parent files, and vice versa.
This should not be confused with hard links, which are directory entries that associate multiple file names with actual files on a file system. While hard links can be taken as different names for the same underlying group of disk blocks (known as a file), cloning in Btrfs provides independent files that are sharing their disk blocks as a form of data deduplication on the disk block level. Any later changes to the content of such "dependent" files invoke the copy-on-write mechanism, which creates independent copies of all altered disk blocks.
Cloning can be especially effective when storing disk images of virtual machines or their snapshots. These are large files differing only in small portions, for which cloning provides both faster (near-instantaneous) copying and minimal consumption of storage space due to data deduplication.
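On GNU coreutils 7.5 or later, cloning is exposed through cp's --reflink option; a minimal sketch with illustrative file names (--reflink=auto produces a reflink clone on Btrfs and silently falls back to a regular copy on file systems without clone support):

```shell
# Create a sparse 1 GiB disk image
truncate -s 1G vm-image.img

# Clone it; on Btrfs the copy shares the original's disk blocks
# until either file is modified (copy-on-write)
cp --reflink=auto vm-image.img vm-clone.img
```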
Subvolumes and snapshots
A subvolume in Btrfs is quite different from a traditional LVM logical volume. With LVM, a logical volume is a block device in its own right; a Btrfs subvolume is not, and it cannot be treated or used as one.
Instead, a Btrfs subvolume can be thought of as a separate POSIX file namespace. This namespace can be accessed through the top-level subvolume of the file system, or it can be mounted on its own by passing the subvol or subvolid option to mount. When accessed through the top-level subvolume, subvolumes are visible and accessible as its subdirectories.
Subvolumes can be created at any place within the file system hierarchy, and they can also be nested. Nested subvolumes appear as subdirectories within their parent subvolumes, similar to the way the top-level subvolume presents its subvolumes as subdirectories. Deleting a subvolume deletes all subvolumes below it in the nesting hierarchy; for this reason, the top-level subvolume cannot be deleted.
Any Btrfs file system always has a default subvolume, which is initially set to be the top-level subvolume, and which is mounted if no subvolume selection option is passed to mount. The default subvolume can be changed as required.
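Working with subvolumes can be sketched as follows, assuming a freshly formatted device; /dev/sdb1, the mount points and the subvolume ID are placeholders, and root privileges are required:

```shell
mkfs.btrfs /dev/sdb1
mount /dev/sdb1 /mnt                      # mounts the default (top-level) subvolume
btrfs subvolume create /mnt/home          # appears as a subdirectory of the top level
btrfs subvolume list /mnt                 # prints each subvolume's numeric ID

# Mount a single subvolume on its own, without the rest of the hierarchy
mount -o subvol=home /dev/sdb1 /srv/home

# Make the subvolume with the ID reported by "subvolume list" the new default
btrfs subvolume set-default 256 /mnt
```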
A Btrfs snapshot is actually a subvolume that shares its data (and metadata) with some other subvolume, using Btrfs' copy-on-write capabilities; modifications to a snapshot are not visible in the original subvolume. Once a writable snapshot is made, it can be treated as an alternate version of the original file system. For example, to roll back to a snapshot, the modified original subvolume must be unmounted and the snapshot mounted in its place. At that point, the original subvolume may also be deleted.
The copy-on-write nature of Btrfs means that snapshots are quickly created, and they initially consume very little disk space. Since a snapshot is a subvolume, creating nested snapshots is also possible. Taking snapshots of a subvolume is not a recursive process. If a snapshot of a subvolume is created, every subvolume or snapshot that the subvolume already contains is mapped to an empty directory of the same name inside the snapshot.
Taking snapshots of a directory is not possible, as only subvolumes can have snapshots. However, there is a workaround that involves reflinks spread across subvolumes: a new subvolume is created containing cross-subvolume reflinks to the content of the targeted directory, and a snapshot of this new subvolume can then be taken.
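Snapshot creation can be sketched as follows (paths are placeholders; /mnt/home is assumed to be an existing subvolume on a mounted Btrfs volume):

```shell
# Writable snapshot: an alternate, independently modifiable version
btrfs subvolume snapshot /mnt/home /mnt/home-snap

# Read-only snapshot, e.g. as a stable source for backups
btrfs subvolume snapshot -r /mnt/home /mnt/home-snap-ro

# A snapshot is itself a subvolume, so it is deleted like one
btrfs subvolume delete /mnt/home-snap
```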
Given any pair of subvolumes (or snapshots), Btrfs can generate a binary diff between them (by using the
btrfs send command) that can be replayed later (by using
btrfs receive), possibly on a different Btrfs file system. The send/receive feature effectively creates (and applies) a set of data modifications required for converting one subvolume into another.
The send/receive feature can be used with regularly scheduled snapshots for implementing a simple form of file system master/slave replication, or for the purpose of performing incremental backups.
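An incremental-backup scheme built on send/receive can be sketched as follows (paths are placeholders; /backup is assumed to be a separate Btrfs file system):

```shell
# Take a read-only snapshot and transfer it in full
btrfs subvolume snapshot -r /mnt/data /mnt/snap1
btrfs send /mnt/snap1 | btrfs receive /backup

# ...later, after further changes to /mnt/data...
btrfs subvolume snapshot -r /mnt/data /mnt/snap2

# -p names the parent snapshot, so only the differences are streamed
btrfs send -p /mnt/snap1 /mnt/snap2 | btrfs receive /backup
```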
A quota group (or qgroup) imposes an upper limit to the space a subvolume or snapshot may consume. A new snapshot initially consumes no quota because its data is shared with its parent, but thereafter incurs a charge for new files and copy-on-write operations on existing files. When quotas are active, a quota group is automatically created with each new subvolume or snapshot. These initial quota groups are building blocks which can be grouped (with the
btrfs qgroup command) into hierarchies to implement quota pools.
Quota groups only apply to subvolumes and snapshots, while having quotas enforced on individual subdirectories is not possible.
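Quota-group usage can be sketched as follows (the mount point, subvolume and limit are placeholders):

```shell
btrfs quota enable /mnt           # activate quota groups on the file system
btrfs qgroup limit 10G /mnt/home  # cap the subvolume's space consumption at 10 GiB
btrfs qgroup show /mnt            # report usage per quota group
```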
In-place ext2/3/4 conversion
As the result of having very little metadata anchored in fixed locations, Btrfs can warp to fit unusual spatial layouts of the backend storage devices. The
btrfs-convert tool exploits this ability to do an in-place conversion of any ext2/3/4 file system, by nesting the equivalent Btrfs metadata in its unallocated space, while preserving an unmodified copy of the original file system.
The conversion involves creating a copy of the whole ext2/3/4 metadata, while the Btrfs files simply point to the same blocks used by the ext2/3/4 files. This makes the bulk of the blocks shared between the two filesystems before the conversion becomes permanent. Thanks to the copy-on-write nature of Btrfs, the original versions of the file data blocks are preserved during all file modifications. Until the conversion becomes permanent, only the blocks that were marked as free in ext2/3/4 are used to hold new Btrfs modifications, meaning that the conversion can be undone at any time.
All converted files are available and writable in the default subvolume of the Btrfs. A sparse file holding all of the references to the original ext2/3/4 filesystem is created in a separate subvolume, which is mountable on its own as a read-only disk image, allowing both original and converted file systems to be accessed at the same time. Deleting this sparse file frees up the space and makes the conversion permanent.
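A conversion round trip can be sketched as follows (/dev/sdb1 is a placeholder; the name of the saved-image subvolume may differ between btrfs-progs versions):

```shell
umount /dev/sdb1          # the ext4 file system must not be mounted
fsck.ext4 -f /dev/sdb1    # make sure it is clean before converting
btrfs-convert /dev/sdb1   # in-place conversion; originals kept in a saved-image subvolume
mount /dev/sdb1 /mnt      # now mounts as Btrfs

# To undo (possible only while the saved image still exists):
umount /dev/sdb1
btrfs-convert -r /dev/sdb1   # roll back to the original ext4 file system
```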
When creating a new Btrfs, an existing Btrfs can be used as a read-only "seed" file system. The new file system will then act as a copy-on-write overlay on the seed. The seed can be later detached from the Btrfs, at which point the rebalancer will simply copy over any seed data still referenced by the new file system before detaching. Mason has suggested this may be useful for a Live CD installer, which might boot from a read-only Btrfs seed on optical disc, rebalance itself to the target partition on the install disk in the background while the user continues to work, then eject the disc to complete the installation without rebooting.
Though Chris Mason said in a 2009 interview that encryption was planned for Btrfs, it is unlikely to be implemented for some time, if ever, due to the complexity of the task and the existence of tested, peer-reviewed solutions. The current recommendation for encryption with Btrfs is to use a full-disk encryption mechanism such as dm-crypt/LUKS on the underlying devices and to create the Btrfs file system on top of that layer. If RAID is to be combined with encryption, encrypting a dm-raid device or a hardware-RAID device gives much faster disk performance than layering Btrfs' own file system-level RAID features over dm-crypt.
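The recommended layering can be sketched as follows (/dev/sdb1 and the mapping name are placeholders; root privileges are required and luksFormat destroys existing data):

```shell
cryptsetup luksFormat /dev/sdb1            # set up the dm-crypt/LUKS layer
cryptsetup luksOpen /dev/sdb1 securedisk   # exposes /dev/mapper/securedisk
mkfs.btrfs /dev/mapper/securedisk          # Btrfs on top of the encrypted device
mount /dev/mapper/securedisk /mnt
```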
Checking and recovery
Unix systems traditionally rely on "fsck" programs to check and repair filesystems. The
btrfsck program is now available but, as of May 2012, it is described by the authors as "relatively new code" which has "not seen widespread testing on a large range of real-life breakage", and that "may cause additional damage in the process of repair".
There is another tool, named
btrfs-restore, that can be used to recover files from an unmountable filesystem, without modifying the broken filesystem itself (i.e., non-destructively).
In normal use, Btrfs is mostly self-healing and can recover from broken root trees at mount time, since a backup of the tree roots is made every 30 seconds. Thus isolated errors cause at most the last 30 seconds of file system changes to be lost at the next mount. This period can be changed.
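The checking and recovery tools described above can be sketched as follows (/dev/sdb1 and the paths are placeholders; tool names and options vary between btrfs-progs versions):

```shell
btrfsck /dev/sdb1                   # offline consistency check of an unmounted volume
btrfs restore /dev/sdb1 /recovered  # copy files out without modifying the broken FS
mount -o recovery /dev/sdb1 /mnt    # fall back to a backup root tree at mount time
```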
Ohad Rodeh's original proposal at USENIX 2007 noted that B+ trees, which are widely used as on-disk data structures for databases, could not efficiently support copy-on-write-based snapshots because their leaf nodes were linked together: if a leaf were copied on write, its siblings and parents would have to be as well, as would their siblings and parents, and so on, until the entire tree had been copied. He suggested instead a modified B-tree (which has no leaf linkage), with a reference count associated with each tree node but stored in an ad-hoc free map structure, and certain relaxations of the tree's balancing algorithms to make them copy-on-write friendly. The result would be a data structure suitable for a high-performance object store that could perform copy-on-write snapshots while maintaining good concurrency.
At Oracle later that year, Chris Mason began work on a snapshot-capable file system that would use this data structure almost exclusively, not just for metadata and file data but also recursively to track space allocation of the trees themselves. This allowed all traversal and modifications to be funneled through a single code path, against which features such as copy-on-write, checksumming and mirroring needed to be implemented only once to benefit the entire file system.
Btrfs is structured as several layers of such trees, all using the same B-tree implementation. The trees store generic items sorted on a 136-bit key. The first 64 bits of the key are a unique object id. The middle 8 bits are an item type field; its use is hardwired into code as an item filter in tree lookups. Objects can have multiple items of multiple types. The remaining right-hand 64 bits are used in type-specific ways. Therefore items for the same object end up adjacent to each other in the tree, ordered by type. By choosing certain right-hand key values, objects can further put items of the same type in a particular order.
Interior tree nodes are simply flat lists of key-pointer pairs, where the pointer is the logical block number of a child node. Leaf nodes contain item keys packed into the front of the node and item data packed into the end, with the two growing toward each other as the leaf fills up.
Every tree appears as an object in the root tree (or tree of tree roots). Some trees, such as file system trees and log trees, have a variable number of instances, each of which is given its own object id. Trees which are singletons (the data relocation, extent and chunk trees) are assigned special, fixed object ids ≤ 256. The root tree appears in itself as a tree with object id 1.
Trees refer to each other by object id. They may also refer to individual nodes in other trees as a triplet of the tree's object id, the node's level within the tree and its leftmost key value. Such references are independent of where the tree is actually stored.
File system tree
User-visible files and directories all live in a file system tree. There is one file system tree per subvolume. Subvolumes can nest, in which case they appear as a directory item (described below) whose data is a reference to the nested subvolume's file system tree.
Within each directory, directory entries appear as directory items, whose right-hand key values are a CRC32C hash of their filename. Their data is a location key, or the key of the inode item it points to. Directory items together can thus act as an index for path-to-inode lookups, but are not used for iteration because they are sorted by their hash, effectively randomly permuting them. This means that user applications iterating over and opening files in a large directory would generate many more disk seeks between non-adjacent files, a notable performance drain in other file systems with hash-ordered directories such as ReiserFS, ext3 (with Htree indexes enabled) and ext4, all of which have TEA-hashed filenames. To avoid this, each directory entry has a directory index item, whose right-hand key value is set to a per-directory counter that increments with each new directory entry. Iteration over these index items thus returns entries in roughly the same order as they are stored on disk.
Besides inode items, files and directories also have a reference item whose right-hand key value is the object id of their parent directory. The data part of the reference item is the filename that inode is known by in that directory. This allows upward traversal through the directory hierarchy by providing a way to map inodes back to paths.
Files with hard links in other directories have multiple reference items, one for each parent directory. Files with hard links in the same directory pack all of the links' filenames into the same reference item. This was a design flaw that limited the number of same-directory hard links to however many could fit in a single tree block. (On the default block size of 4 KB, an average filename length of 8 bytes and a per-filename header of 4 bytes, this would be less than 350.) Applications which made heavy use of same-directory hard links, such as git, GNUS, GMame and BackupPC were later observed to fail after hitting this limit. The limit was eventually removed (and as of October 2012 has been merged pending release in Linux 3.7) by introducing spillover extended reference items to hold hard link filenames which could not otherwise fit.
File data are kept outside the tree in extents, which are contiguous runs of disk blocks. Extent blocks default to 4 KiB in size, do not have headers and contain only (possibly compressed) file data. In compressed extents, individual blocks are not compressed separately; rather, the compression stream spans the entire extent.
Files have extent data items to track the extents which hold their contents. The item's right-hand key value is the starting byte offset of the extent. This makes for efficient seeks in large files with many extents, because the correct extent for any given file offset can be computed with just one tree lookup.
Snapshots and cloned files share extents. When a small part of such a large extent is overwritten, the resulting copy-on-write may create three new extents: a small one containing the overwritten data, and two large ones with unmodified data on either side of the overwrite. To avoid having to re-write unmodified data, the copy-on-write may instead create bookend extents, or extents which are simply slices of existing extents. Extent data items allow for this by including an offset into the extent they are tracking: items for bookends are those with non-zero offsets.
If the file data is small enough to fit inside a tree node, it is instead pulled in-tree and stored inline in the extent data item. Each tree node is stored in its own tree block, a single uncompressed block with a header. The tree block is regarded as a free-standing, single-block extent.
Extent allocation tree
The extent allocation tree acts as an allocation map for the file system. Unlike other trees, items in this tree do not have object ids and represent regions of space: their left-hand and right-hand key values are the starting offsets and lengths of the regions they represent.
The file system zones its allocated space into block groups, which are variable-sized allocation regions that alternate successively between preferring metadata extents (tree nodes) and data extents (file contents). The default ratio of data to metadata block groups is 1:2. They are intended to work like the Orlov block allocator and block groups in ext3 in allocating related files together and resisting fragmentation by leaving allocation gaps between groups. (ext3 block groups, however, have fixed locations computed from the size of the file system, whereas those in Btrfs are dynamic and are created as needed.) Each block group is associated with a block group item. Inode items in the file system tree include a reference to their current block group.
Extent items contain a back-reference to the tree node or file occupying that extent. There may be multiple back-references if the extent is shared between snapshots. If there are too many back-references to fit in the item, they spill out into individual extent data reference items. Tree nodes, in turn, have back-references to their containing trees. This makes it possible to find which extents or tree nodes are in any region of space by doing a B-tree range lookup on a pair of offsets bracketing that region, then following the back-references. For relocating data, this allows an efficient upwards traversal from the relocated blocks to quickly find and fix all downwards references to those blocks, without having to walk the entire file system. This, in turn, allows the file system to efficiently shrink, migrate and defragment its storage online.
The extent allocation tree, as with all other trees in the file system, is copy-on-write. Writes to the file system may thus cause a cascade whereby changed tree nodes and file data result in new extents being allocated, causing the extent tree to itself change. To avoid creating a feedback loop, extent tree nodes which are still in memory but not yet committed to disk may be updated in-place to reflect new copy-on-written extents.
In theory, the extent allocation tree makes a conventional free-space bitmap unnecessary because the extent allocation tree acts as a B-tree version of a BSP tree. In practice, however, an in-memory red-black tree of page-sized bitmaps is used to speed up allocations. These bitmaps are persisted to disk (starting in Linux 2.6.37, via the
space_cache mount option) as special extents that are exempt from checksumming and copy-on-write. The extent items tracking these extents are stored in the root tree.
Checksum tree and scrubbing
CRC-32C checksums are computed for both data and metadata and stored as checksum items in a checksum tree. There is room for 256 bits of metadata checksums and up to a full leaf block (roughly 4 KB or more) for data checksums. Support for additional checksum algorithms is planned for the future.
There is one checksum item per contiguous run of allocated blocks, with per-block checksums packed end-to-end into the item data. If there are more checksums than can fit, they spill rightwards over into another checksum item in a new leaf.
If the file system detects a checksum mismatch while reading a block, it first tries to obtain (or create) a good copy of this block from another device, if internal mirroring or RAID techniques are in use. Btrfs can initiate an online check of the entire file system by triggering a file system scrub job that is performed in the background. The scrub job scans the entire file system for integrity and automatically attempts to report and repair any bad blocks it finds along the way.
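Running a scrub can be sketched as follows (the mount point is a placeholder):

```shell
btrfs scrub start /mnt    # begin a background scrub of the whole file system
btrfs scrub status /mnt   # report progress and the count of corrected errors
```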
An fsync is a request to commit modified data immediately to stable storage. fsync-heavy workloads (such as databases) could potentially generate a great deal of redundant write I/O by forcing the file system to repeatedly copy-on-write and flush frequently modified parts of trees to storage. To avoid this, a temporary per-subvolume log tree is created to journal fsync-triggered copy-on-writes. Log trees are self-contained, tracking their own extents and keeping their own checksum items. Their items are replayed and deleted at the next full tree commit or (if there was a system crash) at the next remount.
Chunk and device trees
Block devices are divided into chunks of 256 MB or more. Chunks may be mirrored or striped across multiple devices. The mirroring/striping arrangement is transparent to the rest of the file system, which simply sees the single, logical address space that chunks are mapped into.
This is all tracked by the chunk tree, where each device is represented as a device item and each mapping from a logical chunk to its underlying physical chunks is stored in a chunk map item. The device tree is the inverse of the chunk tree, and contains device extent items which map byte ranges of block devices back to individual chunks. As in the extent allocation tree, this allows Btrfs to efficiently shrink or remove devices from volumes by locating the chunks they contain (and relocating their contents).
The file system, chunks and devices are all assigned a Universally Unique Identifier (UUID). The header of every tree node contains both the UUID of its containing chunk and the UUID of the file system. The chunks containing the chunk tree, the root tree, device tree and extent tree are always mirrored, even on single-device volumes. These are all intended to improve the odds of successful data salvage in the event of media errors.
Defragmentation, shrinking and rebalancing operations require extents to be relocated. However, doing a simple copy-on-write of the relocating extent will break sharing between snapshots and consume disk space. To preserve sharing, an update-and-swap algorithm is used, with a special relocation tree serving as scratch space for affected metadata. The extent to be relocated is first copied to its destination. Then, by following backreferences upward through the affected subvolume's file system tree, metadata pointing to the old extent is progressively updated to point at the new one; any newly updated items are stored in the relocation tree. Once the update is complete, items in the relocation tree are swapped with their counterparts in the affected subvolume, and the relocation tree is discarded.
All the file system's trees, including the chunk tree itself, are stored in chunks, creating a potential chicken-and-egg problem when mounting the file system. To bootstrap into a mount, a list of physical addresses of chunks belonging to the chunk and root trees must be stored in the superblock.
Superblock mirrors are kept at fixed locations: 64 KiB into every block device, with additional copies at 64 MiB, 256 GiB and 1 PiB. When a superblock mirror is updated, its generation number is incremented. At mount time, the copy with the highest generation number is used. All superblock mirrors are updated in tandem, except in SSD mode, which alternates updates among mirrors to provide some wear levelling.
- "Btrfs Picks Up Snappy Compression Support". Phoronix.
- "LZ4 For Btrfs Arrives". Phoronix.
- McPherson, Amanda (22 June 2009). "A Conversation with Chris Mason on BTRfs: the next generation file system for Linux". Linux Foundation. Archived from the original on 24 June 2012. Retrieved 2009-09-01.
- btrfs Wiki: Deduplication
- Henson, Valerie (31 January 2008). Chunkfs: Fast file system check and repair. Melbourne, Australia. Event occurs at 18m 49s. Retrieved 2008-02-05. "It's called Butter FS or B-tree FS, but all the cool kids say Butter FS"
- "BTRFS FAQ on Stability". BTRFS FAQ. Archived from the original on 24 June 2012. Retrieved 12 June 2012.
- kernel BUG at fs/btrfs/delayed-inode.c:1466!
- Kerner, Sean Michael (30 October 2008). "A Better File System For Linux". InternetNews.com. Archived from the original on 24 June 2012. Retrieved 2008-10-30.
- Paul, Ryan (13 April 2009). Panelists ponder the kernel at Linux Collaboration Summit. Ars Technica. Archived from the original on 24 June 2012. Retrieved 2009-08-22.
- Ts'o, Theodore (1 August 2008). "Re: reiser4 for 2.6.27-rc1". linux-kernel mailing list. http://lkml.org/lkml/2008/8/1/217. Retrieved 2010-12-31.
- Mason, Chris (2007-06-12). "Btrfs: a copy on write, snapshotting FS". linux-kernel mailing list. http://lkml.org/lkml/2007/6/12/242. Retrieved 2010-05-29.
- "Development timeline". Btrfs wiki. 11 December 2008. Archived from the original on 20 December 2008. Retrieved 5 November 2011.
- Wuelfing, Britta (12 January 2009). "Kernel 2.6.29: Corbet Says Btrfs Next Generation Filesystem". Linux Magazine. Retrieved 5 November 2011.
- "Red Hat Enterprise Linux 6 documentation: Technology Previews".
- "Fedora Weekly News Issue 276". 25 May 2011.
- "MeeGo project chooses Btrfs as standard file system". The H. 12 May 2010. Retrieved 5 November 2011.
- "Debian 6.0 "Squeeze" released" (Press release). Debian. 6 February 2011. Retrieved 2011-02-08. "Support has also been added for the ext4 and Btrfs filesystems..."
- "SLES 11 SP2 Release Notes". 21 August 2012 (and earlier versions). Retrieved 2012-08-29.
- "Oracle Linux Technical Information". Retrieved 2012-11-14.
- Brown, Eric (22 July 2011). "Linux 3.0 scrubs up Btrfs, gets more Xen". Linux devices (eWeek). Archived from the original on 2013-01-27. Retrieved 8 November 2011.
- Leemhuis, Thorsten (21 June 2011). "Kernel Log: Coming in 3.0 (Part 2) - Filesystems". The H Open. Retrieved 8 November 2011.
- Mason, Chris. "Leaving Oracle". Gmane. Retrieved 26 June 2013.
- "Btrfs Wiki: Feature List". The btrfs project. 2013-11-27. Retrieved 2013-11-27.
- "Btrfs Wiki: Changelog". 2013-11-08. Retrieved 2013-11-27.
- "Using Btrfs with Multiple Devices". kernel.org. 2013-11-07. Retrieved 2013-11-20.
- Chris Mason (2013-02-02). "RAID 5/6 code merged into Btrfs". LWN.net. Retrieved 2013-12-04.
- "btrfs: Readonly snapshots". Retrieved 2011-12-12.
- "Wiki FAQ: What checksum function does Btrfs use?". Btrfs wiki. Retrieved 2009-06-15.
- Mason, Chris (2008-04-01). "Conversion from Ext3". Btrfs wiki. Retrieved 2012-05-23.
- Mason, Chris (12 January 2009). "Btrfs changelog". Retrieved 2012-02-12.
- Corbet, Jonathan (2012-07-11), Btrfs send/receive, LWN.net, retrieved 2012-11-14
- Jansen, Arne (2011), Btrfs Subvolume Quota Groups (PDF), Strato AG, retrieved 2012-11-14
- "Btrfs Wiki: Deduplication". 2013-09-13. Retrieved 2013-11-27.
- "Btrfs Wiki: Roadmap". 2013-11-27. Retrieved 2013-11-27.
- Corbet, Jonathan (2 November 2011). "A btrfs update at LinuxCon Europe". Retrieved 12 February 2012.
- "Btrfs Project ideas". 21 February 2013. Retrieved 21 February 2013.
- Aurora, Valerie (22 July 2009). "A short history of btrfs". LWN.net. Retrieved 5 November 2011.
- Hilzinger, Marcel (22 April 2009). "Future of Btrfs Secured". Linux Magazine. Retrieved 5 November 2011.
- Jonathan Corbet (2009-05-05). "The two sides of reflink()". LWN.net. Retrieved 2013-10-17.
- "UseCases - btrfs Wiki". kernel.org. Retrieved 2013-11-04.
- "btrfs: allow cross-subvolume file clone". github.com. Retrieved 2013-11-04.
- "Symlinks reference names, hardlinks reference meta-data and reflinks reference data". pixelbeat.org. 2010-10-27. Retrieved 2013-10-17.
- Meyering, Jim (20 August 2009). "GNU coreutils NEWS: Noteworthy changes in release 7.5". Retrieved 2009-08-30.
- Scrivano, Giuseppe (1 August 2009). "cp: accept the --reflink option". Retrieved 2009-11-02.
- "SysadminGuide - btrfs Wiki". kernel.org. Retrieved 2013-10-31.
- "5.6 Creating Subvolumes and Snapshots". oracle.com. 2013. Retrieved 2013-10-31.
- "5.7 Using the Send/Receive Feature". oracle.com. 2013. Retrieved 2013-10-31.
- Mason, Chris (2012-04-05), Btrfs Filesystem: Status and New Features, Linux Foundation, retrieved 2012-11-16
- Btrfs FAQ, 2012-06-01, retrieved 2012-11-16
- Rodeh, Ohad (2008). B-trees, Shadowing, and Clones. New York: Association for Computing Machinery.
- Mason, Chris. "Btrfs design". Btrfs wiki. Retrieved 8 November 2011.
- Reiser, Hans (2001-12-07). "Re: Ext2 directory index: ALS paper and benchmarks". ReiserFS developers mailing list. Retrieved 2009-08-28.
- Mason, Chris. "Acp". Oracle personal web page. Retrieved 2011-11-05.
- Fasheh, Mark (2012-10-09), btrfs: extended inode refs, retrieved 2012-11-07
- Torvalds, Linus (2012-10-10), "Pull btrfs update from Chris Mason", git.kernel.org, retrieved 2012-11-07
- Larabel, Michael (2010-12-24). "Benchmarks Of The Btrfs Space Cache Option". Phoronix. Retrieved 2012-11-16.
- "FAQ - btrfs Wiki: What checksum function does Btrfs use?". The btrfs Project. Retrieved 2013-09-19.
- Bierman, Margaret; Grimmer, Lenz (August 2012). "How I Use the Advanced Capabilities of Btrfs". Retrieved 2013-09-20.
- Coekaerts, Wim (2011-09-28). "btrfs scrub - go fix corruptions with mirror copies please!". Retrieved 2013-09-20.
- Mason, Chris; Rodeh, Ohad; Bacik, Josef (2012-07-09), BRTFS: The Linux B-tree Filesystem, IBM Research, retrieved 2012-11-12
- Mason, Chris (30 April 2008). "Multiple device support". Btrfs wiki. Archived from the original on 20 July 2011. Retrieved 5 November 2011.
- Bartell, Sean (2010-04-20). "Re: Restoring BTRFS partition". linux-btrfs mailing list. http://kerneltrap.org/mailarchive/linux-btrfs/2010/4/20/6884623.
- Btrfs wiki
- I Can't Believe This is Butter! A tour of btrfs on YouTube, a conference presentation by Oracle engineer Avi Miller.