  • I for one just can't get it to work. I don't see any processes spawned, and substituting echo for wget doesn't output anything. Commented May 9, 2014 at 19:10
  • Note, regarding 'it will run as many processes as you have cores': network bandwidth is likely to be more of a limiting factor. Commented Jun 21, 2014 at 17:10
  • It really depends. For a large number of small files this can be almost an order of magnitude faster, since most of the transfer time is handshake/TCP round trips. Also, when you are downloading from a number of smaller hosts, sometimes the per-connection bandwidth is limited, so this will bump things up. Commented Jun 23, 2014 at 17:22
  • This is pretty useful if you want to use a list of relative URLs (resource IDs without hostnames) with different hostnames, for example: cat urlfile | parallel --gnu "wget example1.com{}" and cat urlfile | parallel --gnu "wget example2.com{}" (see the first sketch after this thread). Commented May 14, 2015 at 2:21
  • One might add that flooding a website with a massive number of parallel requests for large files is not particularly nice. It doesn't matter for big sites, but if it's a smaller one you should take care (see the second sketch after this thread). Commented Sep 19, 2019 at 10:02
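
A minimal sketch of the relative-URL pattern from the comment above, assuming urlfile contains one path per line (e.g. /files/a.iso) and that example1.com and example2.com are placeholder hostnames:

    # {} is GNU parallel's placeholder for the current input line;
    # wget assumes http:// when the URL has no scheme.
    cat urlfile | parallel --gnu "wget example1.com{}"
    cat urlfile | parallel --gnu "wget example2.com{}"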
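
And a minimal sketch of keeping the load on a smaller site reasonable; the job cap of 2 and the 100k rate are illustrative values, not from the thread:

    # -j 2 caps parallel at two concurrent downloads instead of one per core;
    # wget's --limit-rate caps the bandwidth of each individual transfer.
    cat urlfile | parallel --gnu -j 2 "wget --limit-rate=100k {}"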