ZFS Performance Tuning: ARC, Recordsize, and Compression
The three ZFS tunables that actually matter for a home NAS: ARC sizing, dataset recordsize, and compression. What each does, when to change it, and what to leave alone.
ZFS exposes hundreds of tunables. The vast majority should be left alone on a home NAS — the defaults reflect years of tuning by people with far more workload diversity than any single household will encounter. There are, however, three settings that genuinely move the needle for typical home workloads: ARC sizing, dataset recordsize, and compression.
This guide covers when and how to adjust each. It also lists the tunables you should not touch unless you have a specific reason.
ARC: how ZFS uses your RAM
The ARC (Adaptive Replacement Cache) is ZFS’s in-memory cache of recently and frequently accessed data. Reads served from ARC complete in microseconds; reads served from disk take milliseconds, at best. The single biggest performance lever on a home NAS is giving ARC enough RAM.
By default on TrueNAS SCALE, ARC grows to consume a large share of free memory, capped at roughly half of system RAM out of the box (the cap is adjustable). On a 32 GB system with no VMs or large apps, expect ARC to use 12–16 GB.
When ARC sizing matters
It matters most when your working set (the files you regularly access) exceeds your ARC capacity. Symptoms:
- Browsing a large directory feels slow after the system has been idle.
- File-by-file restores from backup are noticeably slower than the disks should support.
- `arc_summary` shows a hit rate below 95% on a workload you would expect to be cache-friendly.
To inspect ARC hit rate on TrueNAS:
```shell
arc_summary | head -20
```
Look for the “Cache hits by data type” section. Sustained hit rates below 90% on a read-heavy workload suggest you would benefit from more RAM (or, much less often, an L2ARC SSD).
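The same hit rate can be computed directly from the raw kstat counters. A minimal sketch, assuming OpenZFS on Linux (as on SCALE), where the counters live in `/proc/spl/kstat/zfs/arcstats`; the function name is ours:

```shell
# Overall ARC hit ratio (percent) from the raw kstat counters.
# OpenZFS on Linux exposes them in /proc/spl/kstat/zfs/arcstats,
# one "name type value" triple per line.
arc_hit_ratio() {
  local f=${1:-/proc/spl/kstat/zfs/arcstats}
  local hits misses
  hits=$(awk '$1 == "hits" {print $3}' "$f")
  misses=$(awk '$1 == "misses" {print $3}' "$f")
  echo $(( 100 * hits / (hits + misses) ))
}
```

`arc_hit_ratio` prints a whole-number percentage; sustained values below 90 on a read-heavy box point back at RAM.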
When to tune zfs_arc_max
The default ARC max on TrueNAS SCALE is roughly half of system RAM. If you do not run VMs or memory-hungry apps, raise the limit to 75–85% of RAM. This is set in TrueNAS at System Settings → Tunables:
- Variable: `zfs_arc_max`
- Type: `sysctl`
- Value: target bytes (e.g., `26843545600` for ~25 GB on a 32 GB system)
- Comment: document why you set it
Reboot afterward so the new limit takes effect.
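The Value field expects bytes, which is easy to get wrong by a factor of 1000 vs 1024. A small helper (the function name is ours) for the conversion:

```shell
# GiB -> bytes, for the Value field of the zfs_arc_max tunable.
gib_to_bytes() {
  echo $(( $1 * 1024 * 1024 * 1024 ))
}

gib_to_bytes 25   # 26843545600, the example value above
```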
If you run VMs and apps on the same box, leave ARC alone. The default coexists with other memory demands gracefully. Pinning ARC to 80% of RAM on a system that also runs a 16 GB VM will cause memory pressure and swap.
Recordsize: dataset-level block size
ZFS writes data in variable-size blocks up to the dataset’s recordsize limit. The default is 128K, which is a reasonable middle ground but is often not optimal for the actual workload on a given dataset.
How to choose recordsize
| Dataset purpose | Recommended recordsize |
|---|---|
| General user files, documents | 128K (default, leave alone) |
| Large media files (movies, archives) | 1M |
| VM zvols (block storage for VMs) | 64K or 16K |
| Database (Postgres, MySQL data) | 16K (matches typical DB page) |
| iSCSI block targets | Match the consumer’s block size |
| `apps` datasets (mixed file sizes) | 128K |
Why this matters: every read of a single byte from a file pulls the entire enclosing record. If your dataset stores 4K database pages but is set to 128K recordsize, every database row read causes ZFS to fetch 128K from disk and decompress 128K. Conversely, if your dataset stores 8 GB movie files at 16K recordsize, the metadata overhead is enormous and sequential throughput suffers.
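The metadata-overhead argument is easy to quantify: each record carries its own checksum and block pointer, so the record count is what scales. A sketch (the helper name is ours) counting records per file at a given recordsize:

```shell
# Number of ZFS records a file of $1 bytes occupies at a
# recordsize of $2 bytes (ceiling division).
records_for() {
  echo $(( ($1 + $2 - 1) / $2 ))
}

records_for $(( 8 * 1024 ** 3 )) $(( 16 * 1024 ))     # 8 GiB movie at 16K: 524288 records
records_for $(( 8 * 1024 ** 3 )) $(( 1024 * 1024 ))   # same file at 1M: 8192 records
```

That is 64x as many blocks to checksum, point at, and cache for the same movie.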
The catch
Recordsize only applies to newly written records. Changing recordsize after data is written has no effect on the existing records. To “apply” a new recordsize to an existing dataset, you must rewrite the data — typically by replicating the dataset to a new dataset with the new recordsize, then swapping.
The simplest path: set recordsize at dataset creation time based on the intended use, and resist the urge to change it later.
Compression: almost always free, usually a win
Modern ZFS supports several compression algorithms. The relevant ones for a home NAS:
- `lz4` (default in TrueNAS) — extremely fast, low CPU, modest compression ratio. Always-on recommended.
- `zstd` — better compression ratio than lz4, higher CPU cost but still acceptable on modern CPUs. Multiple levels (`zstd-1` through `zstd-19`); `zstd-3` is the practical default if you want better ratios than lz4.
- `gzip` — legacy, slower than zstd at every compression level. No reason to choose this in 2026.
- `off` — only useful for already-compressed datasets (raw video sources, encrypted blobs) where compression is provably wasted work.
What we recommend
- Default everywhere: `lz4`. It is essentially free. ZFS detects incompressible data quickly and skips trying to compress it, so the cost on truly random data is tiny.
- `zstd-3` for documents, code repositories, log datasets. Documents and source compress well; the higher CPU cost is worth it for the better ratio.
- `zstd-9` or higher only for archival datasets that are written infrequently and read rarely. Restoring a `zstd-9` archive is fast; writing it is slow.
- `off` for datasets holding already-compressed media (movies, MP3s, JPEGs, encrypted blobs). The compressor will fail to compress and you waste a small amount of CPU.
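The recommendations above condense to a small lookup. A sketch using our own shorthand purpose labels (they are not ZFS properties):

```shell
# Compression choice by dataset purpose, per the list above.
# The purpose labels are our own shorthand, not ZFS properties.
recommended_compression() {
  case "$1" in
    media|encrypted)  echo "off" ;;      # already compressed, skip the work
    docs|code|logs)   echo "zstd-3" ;;   # compresses well, ratio worth the CPU
    archive)          echo "zstd-9" ;;   # written rarely, restored fast
    *)                echo "lz4" ;;      # essentially free default
  esac
}

recommended_compression docs   # zstd-3
```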
You can confirm your achieved compression ratio per dataset:
```shell
zfs get compressratio tank/users
```
A ratio of 1.01x on a media dataset is honest — most media is already compressed. A ratio of 1.6x+ on a documents dataset suggests `zstd-3` would do meaningfully better than the `lz4` default.
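The ratio converts to space saved as saved = 1 − 1/ratio. A small helper (the name is ours) to do the arithmetic on `zfs get` output:

```shell
# Convert a compressratio value like "1.60x" into percent
# of space saved: saved = 1 - 1/ratio.
pct_saved() {
  echo "$1" | awk '{ sub(/x$/, ""); printf "%.1f\n", (1 - 1/$1) * 100 }'
}

pct_saved 2.00x   # 50.0
```

A 1.6x ratio means about 37% of the raw bytes never hit the disk.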
What to leave alone
The following tunables get a lot of discussion on forums but rarely benefit a home NAS:
- `primarycache=metadata` — set this only on datasets where you actively want file data to bypass ARC (rare for home use). Default `all` is correct.
- `logbias=throughput` vs `logbias=latency` — only relevant for sync-write-heavy workloads (NFS-shared VM datastores), and even there the default is usually fine.
- `sync=disabled` — setting this is a footgun. You get a measurable performance boost on sync writes and a small chance of data loss on power failure. The "small chance" is sometimes large enough to matter. Default `standard` is correct.
- `atime=off` vs `atime=on` — `relatime` (the TrueNAS default) is the right answer. Turning atime fully off saves negligible IO and breaks programs that rely on access timestamps.
- `zfs_dirty_data_max` — the default scales with system RAM and is fine. Tuning this without a specific reason is unlikely to help.
What to monitor
After any tuning, look at:
```shell
# ARC stats
arc_summary

# Per-dataset compression ratio and used space
zfs list -o name,used,refer,compressratio

# IO stats over a 10-second sample
zpool iostat -v tank 10 6
```
The point is to verify the change had the effect you expected, not to chase numbers. If `arc_summary` shows your hit rate did not improve after raising `zfs_arc_max`, the bottleneck was not ARC and you have learned something useful.
When not to tune
The strongest recommendation in this guide: on a fresh install, change nothing for the first month. Let TrueNAS run with defaults. Use the system. Note the actual workload — read-heavy, write-heavy, random, sequential. Then tune the small number of things that match your observed bottleneck.
People who tune ZFS on day one based on forum advice often end up with worse performance than the default would have given them, because they have tuned for someone else’s workload.
Next steps
- ZFS Pool Design: RAIDZ vs Mirrors for a Home NAS covers the pool-level decisions that matter more than any of the tunables above.
- TrueNAS Snapshot and Replication Strategy covers data protection on top of the optimized pool.