Footnote 2: There's also -preset placebo, which takes much more CPU time for maybe 1% better quality per bitrate, i.e. pushing much further into diminishing returns: spending more CPU to compress only slightly better.
Other codecs like h.265 (with the x265 encoder) or VP9 can offer even better rate/distortion tradeoffs, but at the cost of much more CPU time to encode. For a fixed encode time, I'm not sure x265 has any advantage over x264, at least near x264's medium or slow presets. Decoder compatibility is also much less widespread for h.265 than for h.264. Still, if you don't mind spending more CPU time encoding, -c:v libx265 -preset slow is pretty reasonable and probably gives better quality than x264 -preset placebo at the same bitrate on some content. (x265 does have more of a tendency to smooth out texture detail, though.)
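For reference, an x265 encode along those lines might look like this sketch (filenames are placeholders; requires an ffmpeg built with libx265, and note that x265's CRF scale isn't directly comparable to x264's — x265 defaults to CRF 28):

```shell
# x265 at its slow preset with CRF rate control; copy audio unchanged.
# input.mkv / output.mkv are hypothetical filenames.
ffmpeg -i input.mkv -c:v libx265 -preset slow -crf 26 -c:a copy output.mkv
```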
Decode compatibility is very good for h.264 main profile, and hopefully also high profile these days. (8x8 DCT is most useful for high resolutions like 1080p and especially 4k.) x264's default is high profile. Some obsolete mobile devices might only have hardware decode for h.264 baseline profile, but baseline is significantly worse quality per bitrate: no B-frames, and no CABAC, only the less efficient CAVLC for the final step of losslessly entropy-coding syntax elements into the bitstream. Many old guides recommend baseline profile for max compatibility, but that sucks a lot. (Perhaps as much as 40% worse quality per bitrate vs. high profile, on 1080p live-action, if I'm remembering correctly from some codec-testing results.)
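If you do need to target a baseline-only decoder, or want to pin the profile explicitly, ffmpeg's libx264 wrapper accepts -profile:v. A sketch with hypothetical filenames:

```shell
# High profile (x264's 8-bit default): 8x8 DCT, B-frames, and CABAC all allowed.
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 23 \
       -profile:v high -c:a copy out_high.mp4

# Baseline profile for ancient hardware decoders: ffmpeg disables CABAC,
# B-frames, and 8x8 DCT to stay in-profile, costing compression efficiency.
ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 23 \
       -profile:v baseline -c:a copy out_baseline.mp4
```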
Related:
- https://trac.ffmpeg.org/wiki/Encode/H.264
- https://slhck.info/video/2017/02/24/crf-guide.html (some info about the bitrate -crf 23 will choose on different content.) Unless you want a specific bitrate, CRF rate-control is almost always the best choice, potentially with upper limits from VBV (e.g. so local bitrate doesn't go through the roof for a second if you encode the HBO logo = screen full of static.)
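CRF plus a VBV cap can be expressed with -maxrate and -bufsize; the numbers below are illustrative, not recommendations, and the filenames are placeholders:

```shell
# CRF 23 quality target, but with VBV limiting local bitrate spikes:
# maxrate 5 Mbit/s, smoothed over a 10 Mbit (~2-second) decoder buffer,
# so a burst of noise/static can't blow the bitrate through the roof.
ffmpeg -i input.mkv -c:v libx264 -preset slow -crf 23 \
       -maxrate 5M -bufsize 10M -c:a copy output.mkv
```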