Peter Cordes

Lossy compression is a tradeoff between bitrate (file size) and quality, not just about getting the smallest files. If that's all you wanted, use -preset veryslow -crf 51 (and optionally downscale to 256x144) to get a very tiny file that's mostly just blurred blobs with no detail.

Encoding is a 3-way tradeoff of CPU time against quality against bitrate, very different from something like zip where file size is all you need to look at.

-preset veryslow gives you the best tradeoff by spending more CPU time searching for ways to represent more detail per bit, i.e. the best rate/distortion tradeoff.

This is mostly orthogonal to rate control, which decides how many total bits to spend. x264's default rate control is CRF 23 (ffmpeg's -crf 23); if you want smaller files, use -preset veryslow -crf 26 or so to spend fewer bits on the same complexity, resulting in more blurring. The scale is roughly logarithmic: raising the CRF by 6 tends to halve the bitrate, so even a few steps make a big difference. For nearly transparent quality, -crf 18 or 20 is often good, but costs more bitrate.
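As a sanity check on that rule of thumb (every +6 CRF roughly halves the bitrate; the exact ratio is content-dependent), here's a tiny shell sketch that computes the predicted size multiplier for a given CRF increase. `predict_ratio` is just a hypothetical helper name, and this is an approximation, not a guarantee:

```shell
#!/bin/sh
# Predicted bitrate multiplier for a CRF increase of $1, using the
# approximate rule: each +6 CRF roughly halves the bitrate, i.e. 2^(-delta/6).
predict_ratio() {
  awk -v d="$1" 'BEGIN { printf "%.2f\n", 2 ^ (-d / 6) }'
}
predict_ratio 3   # +3 CRF -> ~0.71x the bitrate
predict_ratio 6   # +6 CRF -> ~0.50x the bitrate
```

So going from -crf 23 to -crf 26 should land you somewhere near 70% of the original file size, all else equal.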


CRF mode is not true constant quality (by SSIM, PSNR, or any other metric). With faster encoding presets, x264 uses a simpler decision-making process for how and where to spend bits, resulting in some variation in bitrate for the same CRF setting.

With different search tools to find redundancy, as @szatmary explains, a slower preset might find a much smaller way to encode something that looks only slightly worse, or a way to encode some blocks that looks much better but is only slightly larger. Depending on which way these things go on average, the same CRF at different presets will produce different quality and different bitrate.

That's why you don't get progressively smaller files at identical quality: -preset veryfast typically looks worse at the same bitrate. -preset ultrafast is usually noticeably bad even at high bitrate, but the other fast presets can look as good as slower ones if you spend much more bitrate.

Smaller file doesn't mean "better compression". Remember that quality is also variable. If you used ffmpeg -i in.mp4 -ssim 1 -tune ssim -preset veryslow out.mkv to get libx264 to calculate the SSIM visual quality metric, you'll find that veryslow has better quality per bitrate than veryfast.
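Here's a sketch of that comparison, assuming a source clip named input.mp4 and an ffmpeg built with libx264; if either is missing, it just prints the commands it would have run:

```shell
#!/bin/sh
# Encode the same clip with two presets at the same CRF and have libx264
# report SSIM. With -ssim 1, libx264 prints a "SSIM Mean Y:" line on stderr.
crf=23
for preset in veryfast veryslow; do
  out="out-$preset.mkv"
  if command -v ffmpeg >/dev/null 2>&1 && [ -f input.mp4 ]; then
    ffmpeg -hide_banner -i input.mp4 -c:v libx264 -preset "$preset" \
           -crf "$crf" -tune ssim -ssim 1 -an -y "$out"
  else
    echo "would run: ffmpeg -i input.mp4 -c:v libx264 -preset $preset -crf $crf -tune ssim -ssim 1 $out"
  fi
done
# Then compare the SSIM numbers against the sizes of out-veryfast.mkv / out-veryslow.mkv.
```

Expect veryslow to come out smaller and score at least as well on SSIM Mean Y for the same CRF.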

(Keep in mind that psychovisual optimizations that make images look better to humans (like -psy-rd=1.0:0.15) can score worse on some quality metrics, so for real use you don't want -tune ssim. Psy-rd means taking human perception into account when optimizing the rate vs. distortion tradeoff. AQ (adaptive quantization) is another psy optimization, but one that SSIM is sophisticated enough to benefit from, unlike PSNR.)


If you're encoding once to keep the result for a long time, and/or serve it up over the internet, use -preset veryslow. Or at least -preset medium. You pay the CPU cost once, and reap the savings in file size (for a given quality) repeatedly.

But if you're only going to watch an encode once, e.g. to put a video on a mobile device where you'll watch it once then delete it, then -preset faster -crf 20 makes sense if you have the storage space. Just spend extra bits.
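Putting the two use cases side by side (filenames and CRF values are placeholders; -c:a copy just passes the audio through unchanged rather than re-encoding it):

```shell
# Keep/serve for a long time: pay the CPU cost once, save bitrate on every view.
ffmpeg -i in.mp4 -c:v libx264 -preset veryslow -crf 20 -c:a copy archive.mkv

# Watch once then delete: encode fast and just spend extra bits.
ffmpeg -i in.mp4 -c:v libx264 -preset faster -crf 20 -c:a copy phone.mkv
```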
