curl only fetches a single URL at a time; the bunch of lines you got is actually the directory index (which you also see in your browser if you go to that URL). To get the files with curl and a bit of Unix tools magic, you could use something like
for file in $(curl -s http://www.ime.usp.br/~coelho/mac0122-2013/ep2/esqueleto/ |
              grep href |            # keep only lines containing a link
              sed 's/.*href="//' |   # drop everything up to the link target
              sed 's/".*//' |        # drop the closing quote and the rest
              grep '^[a-zA-Z].*'); do   # skip ../ and other non-file entries
    curl -s -O "http://www.ime.usp.br/~coelho/mac0122-2013/ep2/esqueleto/$file"
done
which will get all the files into the current directory.
For more elaborate needs (including getting a whole bunch of files from a site with folders/directories), wget (as already proposed in another answer) is the better option.
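For example, a recursive wget along these lines should fetch that directory; the --cut-dirs count (one per leading path component) and the reject pattern are assumptions you may need to adjust for the actual site layout:

# -r recurse, -np never ascend to the parent directory, -nH drop the
# hostname directory, --cut-dirs strips the leading path components,
# -R skips the auto-generated index pages
wget -r -np -nH --cut-dirs=4 -R 'index.html*' \
    http://www.ime.usp.br/~coelho/mac0122-2013/ep2/esqueleto/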
From the curl man page: _If not told otherwise, curl writes the received data to stdout. It can be instructed to instead save that data into a local file, using the -o, --output or -O, --remote-name options._ So, what else did you expect?
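In other words (a minimal sketch of those two options; example.com stands in for any URL):

curl http://example.com/file.txt                # body is written to stdout
curl -o local.txt http://example.com/file.txt   # save as local.txt
curl -O http://example.com/file.txt             # save under the remote name, file.txt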