ZFS Pool Design: RAIDZ vs Mirrors for a Home NAS
How to decide between RAIDZ1, RAIDZ2, and mirror vdevs for a home TrueNAS pool. Trade-offs in usable capacity, rebuild risk, IOPS, and what 'one big pool' really costs you.
The pool you create on day one is the hardest thing to change later. You can grow a ZFS pool, but you cannot freely change its vdev topology after the fact: a pool built from RAIDZ2 vdevs is a RAIDZ2 pool, full stop. Get this layer right and the next decade of NAS administration is straightforward. Get it wrong and you are eventually destroying and rebuilding the pool.
This guide is about that decision: RAIDZ or mirrors, and which variant.
Vocabulary refresher
A ZFS pool is built out of vdevs (virtual devices). A vdev is one or more physical disks grouped in a specific redundancy topology. A pool stripes data across its vdevs. Lose a vdev, lose the pool. Redundancy lives at the vdev level, not the pool level.
The common vdev types for a home NAS (creation commands for each are sketched after the list):
- Single disk — no redundancy. Avoid for primary storage.
- Mirror — 2 (or more) disks holding identical data. An N-way mirror survives N-1 disk failures.
- RAIDZ1 — 3 or more disks, single parity. Survives 1 disk failure per vdev.
- RAIDZ2 — 4 or more disks, double parity. Survives 2 disk failures per vdev.
- RAIDZ3 — 5 or more disks, triple parity. Survives 3 disk failures per vdev. Rare at home.
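If it helps to see these shapes as commands, here is roughly what each looks like at creation time. The pool name (tank) and device names (da0 and so on) are placeholders; on TrueNAS SCALE you would use /dev/disk/by-id paths instead:

```sh
# Hypothetical pool name (tank) and device names (da0...).
# One 2-way mirror vdev:
zpool create tank mirror da0 da1

# One 6-wide RAIDZ2 vdev:
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Two mirror vdevs; the pool stripes across both:
zpool create tank mirror da0 da1 mirror da2 da3
```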
The honest comparison
| Topology | Disks | Usable capacity | Fault tolerance | IOPS scaling | Rebuild stress |
|---|---|---|---|---|---|
| 2-way mirror | 2 | 50% of raw | 1 disk per vdev | Scales with vdev count (high) | Reads 1 surviving disk, fast |
| 3-way mirror | 3 | 33% of raw | 2 disks per vdev | Scales with vdev count (high) | Reads 1 surviving disk, fast |
| RAIDZ1 (4-wide) | 4 | ~75% | 1 disk per vdev | One vdev = one disk of IOPS | Reads all surviving disks |
| RAIDZ2 (6-wide) | 6 | ~67% | 2 disks per vdev | One vdev = one disk of IOPS | Reads all surviving disks |
| RAIDZ2 (8-wide) | 8 | ~75% | 2 disks per vdev | One vdev = one disk of IOPS | Reads all surviving disks |
Two things in this table matter most:
- RAIDZ vdevs do not scale IOPS. A six-drive RAIDZ2 vdev has roughly the random-write IOPS of a single disk. To increase random IOPS in a ZFS pool, you add more vdevs, not wider vdevs (sketched after this list). Mirrors scale IOPS naturally because each mirror vdev contributes its own IOPS to the pool.
- RAIDZ resilvers stress every remaining disk in the vdev. During a resilver, ZFS reads from all surviving disks to reconstruct the missing one. On a wide RAIDZ vdev built from older drives, the resilver itself raises the probability of a second failure mid-rebuild.
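To make the first point concrete, here is a hypothetical sketch of growing a pool's random IOPS (pool and device names are placeholders, continuing the earlier example):

```sh
# Growing random IOPS: add another top-level vdev to the pool.
zpool add tank mirror da6 da7

# The pool now stripes across one more mirror vdev, so random IOPS
# roughly scales with the vdev count. Widening a RAIDZ vdev instead
# would add capacity but leave the vdev at ~one disk of IOPS.
```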
How to think about the choice
Match the topology to the workload.
“I want a media library and bulk file storage”
This is the most common home NAS workload: large sequential reads and writes, low IOPS demands, capacity matters more than performance.
Use RAIDZ2. Go six-wide or eight-wide. You get ~67–75% usable capacity, survive two disk failures, and the sequential throughput is excellent. Avoid RAIDZ1 on modern multi-terabyte drives — the rebuild window is long enough that a second failure is not negligible.
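A minimal creation sketch, assuming six 12 TB drives and placeholder device names; ashift=12 pins the pool to 4K sectors, which is correct for nearly all modern drives:

```sh
# One 6-wide RAIDZ2 vdev; pool name and devices are placeholders.
zpool create -o ashift=12 tank raidz2 da0 da1 da2 da3 da4 da5

# Usable space before ZFS overhead: (6 - 2 parity) x 12 TB = 48 TB,
# about 67% of the 72 TB raw.
```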
“I want VMs and databases on my NAS”
VM disks (zvols, or .vmdk/.qcow2 files on a dataset) demand random IOPS. RAIDZ will be slow.
Use mirrors. Two-way mirrors give you a pool whose random IOPS scales with the number of vdevs. You lose 50% to redundancy, but you gain the responsiveness VMs and databases expect. This is the standard recommendation for a TrueNAS pool intended to host VMs or as iSCSI block storage.
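For comparison, a sketch of the mirror layout with the same six hypothetical drives (vmpool is a placeholder name):

```sh
# Three 2-way mirrors striped into one pool; names are placeholders.
zpool create -o ashift=12 vmpool mirror da0 da1 mirror da2 da3 mirror da4 da5

# Random IOPS scales with the three vdevs; usable capacity is 50% of raw.
```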
“I want both”
You have two reasonable options:
- Two pools. One mirror-based pool for VMs/databases, one RAIDZ2 pool for bulk storage. This is what most TrueNAS users with serious VM workloads do. The downside is you commit drive bays to each, and you cannot grow one at the expense of the other.
- One RAIDZ2 pool with a special vdev or SLOG. Adding a fast NVMe special vdev (mirrored, always) can hold metadata and small blocks on flash, dramatically improving random read performance for things like browsing large datasets and small-file workloads. A SLOG, by contrast, only accelerates synchronous writes (NFS, iSCSI), so confirm your workload actually issues sync writes before buying one. Neither fixes sustained random VM workloads, but for many home setups this is enough.
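If you go the special-vdev route, the mechanics look roughly like this (pool, dataset, and NVMe device names are placeholders; special_small_blocks is the ZFS property that routes small records to the special vdev):

```sh
# Add a mirrored NVMe special vdev to an existing pool.
# Losing the special vdev loses the pool, hence the mirror.
zpool add tank special mirror nvme0n1 nvme1n1

# Optionally route blocks up to 64K to flash for one dataset:
zfs set special_small_blocks=64K tank/apps
```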
“I want every drive bay to count”
If the bay count is small (a 4-bay NAS, say), the capacity-vs-redundancy trade-off is brutal. Options:
- 4 disks, RAIDZ2: 50% usable, 2-disk fault tolerance. Honest and conservative.
- 4 disks, RAIDZ1: 75% usable, 1-disk fault tolerance. Acceptable only with smaller drives (short resilver windows) and current backups elsewhere.
- 4 disks, two 2-way mirrors: 50% usable, scales IOPS. Good for VM-heavy use.
Do not run RAIDZ1 with drives larger than ~10 TB. The resilver time is long enough that the second-failure risk is real.
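As creation commands, the three layouts above look roughly like this (pool and device names are placeholders again):

```sh
# The three 4-bay layouts, side by side.
zpool create tank raidz2 da0 da1 da2 da3          # 50% usable, 2-disk tolerance
zpool create tank raidz1 da0 da1 da2 da3          # 75% usable, 1-disk tolerance
zpool create tank mirror da0 da1 mirror da2 da3   # 50% usable, IOPS scales
```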
Mistakes we see repeatedly
Mixing topologies in one pool. A pool with a RAIDZ2 vdev and a mirror vdev technically works, but ZFS will stripe across them and the pool is effectively limited by its weakest member for failure planning. Don’t do it. Pick a topology and stick to it.
Adding a single disk as a new vdev to an existing pool. ZFS will warn about the mismatched replication level, but it will let you force it, and the pool will accept the disk. The pool’s survival is now gated by that single non-redundant disk: lose it and the whole pool is gone. Never add a single non-redundant disk to an existing pool except as cache or log.
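For the record, the mistake and the partial remediation look like this (hypothetical names):

```sh
# The mistake: zpool warns about a mismatched replication level,
# and -f overrides the warning. da9 is now a single point of failure.
zpool add -f tank da9

# Partial remediation if you catch it early: attach a second disk so
# the stray vdev becomes a mirror. (You generally cannot remove a
# top-level vdev from a pool that contains RAIDZ vdevs.)
zpool attach tank da9 da10
```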
Building a too-wide RAIDZ vdev. Twelve-wide RAIDZ2 looks attractive for capacity efficiency. The resilver time on a 12-disk RAIDZ2 vdev with multi-terabyte disks is measured in days, and IOPS is still that of one disk. For most home users, 6-wide or 8-wide RAIDZ2 is the sweet spot.
Treating RAIDZ as a substitute for backup. Snapshots and replication to a separate system are what protect your data. A pool that survives a single drive failure does not survive a pool corruption, a controller failure that wipes labels, an rm -rf issued against the wrong dataset, or a fire.
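TrueNAS wraps snapshots and replication in scheduled tasks, but the underlying mechanics are roughly this (dataset, snapshot, and host names are all placeholders):

```sh
# Snapshot a dataset tree, then replicate it to a second machine.
zfs snapshot -r tank/documents@nightly
zfs send -R tank/documents@nightly | ssh backup-host zfs receive -u backuppool/documents
```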
A concrete starting recommendation
For a typical 4–8 bay home NAS focused on media, documents, and a handful of apps:
- Pool topology: One RAIDZ2 vdev, 6–8 drives wide.
- Drives: Matching capacity, ideally from at least two different manufacturing batches (to avoid simultaneous failure of disks that age in lockstep).
- Spare: Hot spare optional; a cold spare on a shelf is often more useful, since it accumulates no power-on hours and you choose when the resilver starts.
- Special vdev: Skip on day one. Add later if you find metadata operations are slow.
If your primary workload is VMs or iSCSI block storage, replace that recommendation with two or three 2-way mirror vdevs.
Next steps
- TrueNAS SCALE vs CORE in 2026: Which Should You Install? covers platform choice.
- TrueNAS Snapshot and Replication Strategy walks through the protection layer that sits on top of your pool design.
- ZFS Performance Tuning: ARC, Recordsize, and Compression covers the tunables that matter once the topology is fixed.