
ZFS Performance Tuning: ARC, Recordsize, and Compression

The three ZFS tunables that actually matter for a home NAS: ARC sizing, dataset recordsize, and compression. What each does, when to change it, and what to leave alone.

By Editorial · 8 min read

ZFS exposes hundreds of tunables. The vast majority should be left alone on a home NAS — the defaults reflect years of tuning by people with far more workload diversity than any single household will encounter. There are, however, three settings that genuinely move the needle for typical home workloads: ARC sizing, dataset recordsize, and compression.

This guide covers when and how to adjust each. It also lists the tunables you should not touch unless you have a specific reason.

ARC: how ZFS uses your RAM

The ARC (Adaptive Replacement Cache) is ZFS’s in-memory cache of recently and frequently accessed data. Reads served from ARC complete in microseconds; reads served from disk take milliseconds, at best. The single biggest performance lever on a home NAS is giving ARC enough RAM.

By default on TrueNAS SCALE, ARC grows to consume up to roughly half of system RAM; the ceiling is adjustable via zfs_arc_max. On a 32 GB system with no VMs or large apps, expect ARC to use 12–16 GB.

When ARC sizing matters

It matters most when your working set (the files you regularly access) exceeds your ARC capacity. Symptoms: repeated reads of the same files stay disk-slow instead of speeding up on the second access, and arc_summary reports a persistently low hit rate.

To inspect ARC hit rate on TrueNAS:

arc_summary | head -20

Look for the “Cache hits by data type” section. Sustained hit rates below 90% on a read-heavy workload suggest you would benefit from more RAM (or, much less often, an L2ARC SSD).
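If you want the overall hit rate as a single number, you can compute it from the kernel stats directly. A sketch, assuming a Linux-based system like SCALE where OpenZFS exposes kstats under /proc/spl/kstat/zfs/arcstats:

```shell
# Compute the overall ARC hit rate from kernel stats (OpenZFS on Linux).
# Assumes /proc/spl/kstat/zfs/arcstats exists.
awk '/^hits /   {h=$3}
     /^misses / {m=$3}
     END        {printf "ARC hit rate: %.1f%%\n", 100 * h / (h + m)}' \
  /proc/spl/kstat/zfs/arcstats
```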

When to tune zfs_arc_max

The default ARC max on TrueNAS SCALE is roughly half of system RAM. If you do not run VMs or memory-hungry apps, raise the limit to 75–85% of RAM. This is set in TrueNAS at System Settings → Tunables:
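The value is given in bytes. A sketch of the arithmetic, assuming a 32 GB machine and an 80% target (the numbers are illustrative, not prescriptive):

```shell
# Illustrative: compute 80% of 32 GiB as a byte value for zfs_arc_max.
TOTAL_BYTES=$((32 * 1024 * 1024 * 1024))
ARC_MAX=$((TOTAL_BYTES * 80 / 100))
echo "zfs_arc_max = $ARC_MAX"

# To try the limit without rebooting (non-persistent; OpenZFS on Linux):
# echo "$ARC_MAX" > /sys/module/zfs/parameters/zfs_arc_max
```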

Restart afterward.

If you run VMs and apps on the same box, leave ARC alone. The default coexists with other memory demands gracefully. Pinning ARC to 80% of RAM on a system that also runs a 16 GB VM will cause memory pressure and swap.
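The arithmetic behind that warning, using the 32 GB / 16 GB numbers from the paragraph above:

```shell
# Why an 80% ARC cap overcommits a VM host (integer GiB for clarity).
TOTAL_GB=32
VM_GB=16
ARC_GB=$((TOTAL_GB * 80 / 100))   # 25 GiB
echo "ARC cap + VM = $((ARC_GB + VM_GB)) GiB on a ${TOTAL_GB} GiB box"
# 41 GiB of demand against 32 GiB of RAM: memory pressure and swap.
```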

Recordsize: dataset-level block size

ZFS writes data in variable-size blocks up to the dataset’s recordsize limit. The default is 128K, which is a reasonable middle ground but is often not optimal for the actual workload on a given dataset.

How to choose recordsize

Dataset purpose                        Recommended recordsize
General user files, documents          128K (default, leave alone)
Large media files (movies, archives)   1M
VM zvols (block storage for VMs)       64K or 16K
Database (Postgres, MySQL data)        16K (matches typical DB page)
iSCSI block targets                    Match the consumer's block size
apps datasets (mixed file sizes)       128K

(Note that zvols use the volblocksize property rather than recordsize, and it can only be set at creation time.)

Why this matters: every read of a single byte from a file pulls the entire enclosing record. If your dataset stores 4K database pages but is set to 128K recordsize, every database row read causes ZFS to fetch 128K from disk and decompress 128K. Conversely, if your dataset stores 8 GB movie files at 16K recordsize, the metadata overhead is enormous and sequential throughput suffers.
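The amplification factor in the database example is just the ratio of record size to page size; a quick sanity check:

```shell
# Read amplification for a 4K database page stored in 128K records.
RECORDSIZE=$((128 * 1024))
PAGESIZE=$((4 * 1024))
echo "each 4K read fetches $((RECORDSIZE / PAGESIZE))x the data"
```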

The catch

Recordsize only applies to newly written records. Changing it after data is written has no effect on existing records. To “apply” a new recordsize to an existing dataset, you must rewrite the data through the filesystem layer, typically by copying the files into a new dataset created with the new recordsize and then swapping names. (A zfs send/recv replication preserves the original block layout, so it will not re-chunk the data.)

The simplest path: set recordsize at dataset creation time based on the intended use, and resist the urge to change it later.
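If you do need to migrate anyway, a sketch of the copy-and-swap procedure. The pool and dataset names (tank/media) and the rsync flags are assumptions for illustration, not part of this guide's setup; the key point is that the copy must go through the filesystem layer, since zfs send/recv preserves the original block layout:

```shell
# Hypothetical names throughout; adjust for your own pool.
# 1. Create the replacement dataset with the desired recordsize.
zfs create -o recordsize=1M tank/media-new

# 2. File-level copy so the data is re-chunked at the new recordsize.
rsync -aHAX /mnt/tank/media/ /mnt/tank/media-new/

# 3. After verifying the copy, swap the names.
zfs rename tank/media tank/media-old
zfs rename tank/media-new tank/media
```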

Compression: almost always free, usually a win

Modern ZFS supports several compression algorithms. The relevant ones for a home NAS:

- lz4: the default. Extremely fast, nearly free on modern CPUs, and it aborts early on incompressible data.
- zstd (levels zstd-1 through zstd-19): better ratios than lz4 at modest extra CPU cost; zstd-3 is the usual sweet spot.
- gzip: legacy. Slower than zstd at comparable ratios; rarely the right choice anymore.
- off: only defensible for purely pre-compressed data, and lz4’s early abort makes even that case marginal.

What we recommend

Leave lz4 (the default) everywhere as the baseline. For datasets dominated by compressible data (documents, source code, logs, database files), set zstd-3. Do not set compression=off.

You can confirm your achieved compression ratio per dataset:

zfs get compressratio tank/users

A ratio of 1.01x on a media dataset is honest — most media is already compressed. A ratio of 1.6x+ on a documents dataset suggests zstd-3 would do meaningfully better than the lz4 default.
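To translate a ratio into actual disk space saved, note that savings = 1 - 1/ratio. A quick check for the 1.6x case:

```shell
# Convert a compressratio into percent disk space saved.
RATIO=1.6
awk -v r="$RATIO" 'BEGIN { printf "space saved: %.1f%%\n", 100 * (1 - 1/r) }'
```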

What to leave alone

The following tunables get a lot of discussion on forums but rarely benefit a home NAS:

- Deduplication: large, permanent RAM cost for ratios that rarely materialize on home data.
- sync=disabled: trades a real durability guarantee for a benchmark win.
- L2ARC: only helps once RAM is maxed out and the working set still does not fit, and its headers consume ARC.
- SLOG devices: only matter for synchronous writes (NFS, iSCSI, databases), not bulk SMB traffic.
- zfs_txg_timeout and vdev queue depths: workload-specific; the defaults are fine.

What to monitor

After any tuning, look at:

# ARC stats
arc_summary

# Per-dataset compression ratio and used space
zfs list -o name,used,refer,compressratio

# IO stats: six samples at 10-second intervals
zpool iostat -v tank 10 6

The point is to verify the change had the effect you expected, not to chase numbers. If arc_summary shows your hit rate did not improve after raising zfs_arc_max, the bottleneck was not ARC and you have learned something useful.

When not to tune

The strongest recommendation in this guide: on a fresh install, change nothing for the first month. Let TrueNAS run with defaults. Use the system. Note the actual workload — read-heavy, write-heavy, random, sequential. Then tune the small number of things that match your observed bottleneck.

People who tune ZFS on day one based on forum advice often end up with worse performance than the default would have given them, because they have tuned for someone else’s workload.
