  • 2
    FWIW, the size of the pipe buffer depends on the OS and kernel version. Very old Linux kernels used buffers of about 4 KB; later kernels default to something like 64 KB and provide an fcntl() with which an open pipe's buffer can be resized (e.g. to 1 MB) as required (a minimal sketch follows below these comments). Commented Jan 2, 2020 at 17:21
  • 2
    From unix.stackexchange.com/questions/11946/…: PIPE_BUF is the largest write that is guaranteed to be atomic, not the pipe's capacity (see the second sketch below these comments). sysctl fs.pipe-user-pages-soft returns 16384 on my machine, and sysctl fs.pipe-max-size returns 1048576. Commented Jan 2, 2020 at 17:31
  • 1
    All that adding a buffer here will really do is increase the pipeline's memory usage. Except in a few pathological cases, there is usually little to gain from adding large buffers between the commands in a pipeline. Commented Jan 3, 2020 at 9:05
  • 4
    @LieRyan I didn't know about pv, but use cases which come to mind for a large buffer are reading from a time-metered connection or from a USB stick you want to remove as soon as possible, or generally from any process which locks a critical resource. Commented Jan 3, 2020 at 22:48
  • 2
    Piping from a network into an I/O-bound process that writes in chunks is another place where it comes in handy. Several jobs ago (so the details are hazy), I had a task that effectively did periodic flushes to disk, hanging for a significant while as they ran; without pv and a large buffer, those pauses stalled the download, causing a substantial loss in throughput. Sure, if your I/O system flushes at a reasonably steady rate, it doesn't make much of a difference, but that's not always the case. Commented Jan 4, 2020 at 6:05
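
Not part of the original comments, but here is a minimal Linux-specific sketch of the fcntl() resize the first comment mentions: F_GETPIPE_SZ and F_SETPIPE_SZ (available since Linux 2.6.35, exposed behind _GNU_SOURCE) query and change an open pipe's buffer capacity.

```c
/* Minimal sketch: query and resize a pipe's kernel buffer with fcntl().
 * Linux-only; F_GETPIPE_SZ / F_SETPIPE_SZ require _GNU_SOURCE. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) {
        perror("pipe");
        return 1;
    }

    /* Default capacity: typically 65536 bytes on modern kernels. */
    int before = fcntl(fds[1], F_GETPIPE_SZ);
    printf("default pipe buffer: %d bytes\n", before);

    /* Ask for 1 MiB; the kernel rounds up to a power-of-two number of
     * pages and caps unprivileged requests at fs.pipe-max-size. */
    int after = fcntl(fds[1], F_SETPIPE_SZ, 1024 * 1024);
    if (after == -1)
        perror("F_SETPIPE_SZ");
    else
        printf("resized pipe buffer: %d bytes\n", after);

    close(fds[0]);
    close(fds[1]);
    return 0;
}
```

Note that resizing affects only the kernel's in-flight buffer for that one pipe; it is separate from whatever userspace buffer a tool such as pv maintains.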
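And a second small sketch (mine, not from the thread) illustrating the point of the second comment: PIPE_BUF is the atomic-write guarantee rather than the pipe's capacity, and it can be read both as the compile-time constant from <limits.h> and at run time with fpathconf().

```c
/* Minimal sketch: PIPE_BUF is the largest write that is guaranteed not to be
 * interleaved with writes from other processes. POSIX requires at least 512
 * bytes; Linux defines it as 4096. */
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) {
        perror("pipe");
        return 1;
    }

    /* Compile-time constant from <limits.h> ... */
    printf("PIPE_BUF (limits.h): %d\n", PIPE_BUF);

    /* ... and the run-time value for this particular pipe. */
    long atomic_limit = fpathconf(fds[1], _PC_PIPE_BUF);
    printf("fpathconf(_PC_PIPE_BUF): %ld\n", atomic_limit);

    close(fds[0]);
    close(fds[1]);
    return 0;
}
```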