cas

Australia

http://taz.net.au/blog/

Craig Sanders

http://taz.net.au/~cas/

1d
comment Msys2 bash rejects empty subexpression in regular expression
Is this for just a one-off or occasional regex match in your script? If so, using bash regex matches is fine. If not (if you're using it in, say, a while read loop or a for loop), then you really should rewrite that section of the script to use sed, awk, or perl. If regex matches make up the majority of the script, rewrite the whole thing in awk or perl, optionally with a shell wrapper. Use shell for what it's good at, and use other tools for what they're good at.
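A minimal sketch of the difference (the log format and file path here are invented for illustration): a single match with bash's `=~` is fine, but per-line matching belongs in awk:

```shell
#!/usr/bin/env bash
# Invented sample log (assumed format: "user=NAME status=STATE").
printf '%s\n' 'user=alice status=ok' 'user=bob status=fail' > /tmp/demo.log

# One-off match: bash's =~ is fine.
read -r first < /tmp/demo.log
if [[ $first =~ user=([[:alnum:]]+) ]]; then
    echo "first user: ${BASH_REMATCH[1]}"
fi

# Matching every line: let awk do the looping instead of `while read`.
# "user=" is 5 characters, so skip RSTART+5 into the match.
awk 'match($0, /user=[[:alnum:]]+/) { print substr($0, RSTART + 5, RLENGTH - 5) }' /tmp/demo.log
```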
2d
revised Bash script for doing off-site backups using restic
a few small fixes, extra case example
2d
comment Find all files, containing multiple keywords
@Dominique the reason why the answers in the other questions use awk, perl, etc. is that you can't do a logical AND with grep (alternation with | is effectively a logical OR). And, as you've noted, A.*B|B.*A doesn't scale if you have more than a few patterns to find (although you could write a script that, given multiple patterns to search for, outputs an optimised regex starting with (?s) that matches all possible combinations; you could then use that with grep -rPz). Note that -z effectively loads each entire text file into RAM, and won't work with non-text files containing NUL.
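One way to get a logical AND without a combinatorial regex is to chain grep -l passes, each one narrowing the candidate list. A sketch (the demo directory and keywords are made up):

```shell
#!/usr/bin/env bash
# Invented demo tree: find files containing BOTH "foo" AND "bar".
mkdir -p /tmp/anddemo
printf 'foo\nbar\n' > /tmp/anddemo/both.txt
printf 'foo only\n' > /tmp/anddemo/one.txt

# Each grep -l pass narrows the candidate list (a logical AND);
# -Z/--null plus xargs -0 keeps arbitrary filenames safe.
grep -rlZ 'foo' /tmp/anddemo | xargs -0 grep -lZ 'bar' | xargs -0 -r ls
```

Unlike the single-regex approach, this scales to any number of patterns by adding more `xargs -0 grep -lZ` stages, and it never loads whole files into RAM.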
Feb 26
comment `column` can split a line into columns with `-t` but can it wrap the result?
Or just output it in a record-oriented format: each record showing the field name and field data, separated by, say, a colon and a tab (with a fixed string width for the field name), and a blank line or other separator between records. That's probably easier and more appropriate since you only have 8 columns; the AoA and AoH ideas would be more suitable if you had lots of columns.
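A rough awk sketch of that record-oriented layout, assuming a TSV with a header row (the sample data is invented):

```shell
#!/usr/bin/env bash
# Invented TSV with a header row.
printf 'id\tname\tcity\n1\tAlice\tPerth\n2\tBob\tHobart\n' > /tmp/rec.tsv

# One "name:<TAB>value" line per field, blank line between records.
awk -F'\t' '
NR == 1 { for (i = 1; i <= NF; i++) hdr[i] = $i; next }
{
    for (i = 1; i <= NF; i++) printf "%-8s:\t%s\n", hdr[i], $i
    print ""
}' /tmp/rec.tsv
```

The `%-8s` width would need adjusting to the longest real header name.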
Feb 26
comment `column` can split a line into columns with `-t` but can it wrap the result?
An Array of Hashes ("AoH") for the data would work too, and might be even better, depending on how you approach implementing it. It would make printing each column by name easier, as you could have one array mapping the field numbers to column names (serving double duty as hash keys) for processing the input, AND, for each table, an Array of Arrays ("AoA") to map each column name to the table it belongs to for printing the output (e.g. use the header arrays in a hash slice when printing each table).
Feb 26
comment `column` can split a line into columns with `-t` but can it wrap the result?
Or maybe generate two or three <80-column tables (each with "related" or "thematically similar" data, as much as possible anyway) from your TSV data, with a common key field as the first field of each table (e.g. customer_user_id). perl would be good for that: read in the entire file and, for each line, assign each field to one of N tables (an Array of Arrays, see man perllol and man perldsc, would be a good data structure for that). Then, after the main reading loop, print each table. This would not be difficult to code; the hardest part would be deciding which fields belong to which table.
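When the field groupings are fixed, the same split-into-narrower-tables idea can be sketched with cut and column instead of perl (the sample TSV and the choice of which fields go in which table are arbitrary):

```shell
#!/usr/bin/env bash
# Invented wide TSV; field groupings below are arbitrary examples.
printf 'id\tname\temail\tcity\tcountry\n1\tAlice\ta@example.com\tPerth\tAU\n' > /tmp/wide.tsv

# Table 1: id, name, email.  Table 2: id (the common key), city, country.
cut -f1-3  /tmp/wide.tsv | column -t -s"$(printf '\t')"
echo
cut -f1,4-5 /tmp/wide.tsv | column -t -s"$(printf '\t')"
```

The perl AoA version earns its keep once the assignment of fields to tables needs any per-line logic rather than fixed column ranges.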
Feb 26
comment `column` can split a line into columns with `-t` but can it wrap the result?
If that output is for your own use, I guess you'll know how to interpret it. Otherwise, it defeats the purpose of columnar output: most people would have no idea what to do with a table that merges three columns into three lines of one column, and even if they do figure it out, they'll have to read it and do modulo-3 calculations in their head to match data with headers. Maybe consider adding a newline after every third line. If this is for printing, consider using landscape mode, like a spreadsheet, rather than portrait. Or maybe just tell your users "you need a 132-character-wide terminal".
Feb 24
revised How can I find files that contain one or more uppercase letters in their filenames?
added -type f back to second example
Feb 24
comment Convert numbers to words/text (Like 111 is One hundred eleven)
I think dropping the "and" like that is an American English thing. It's fairly common for USians to say "one hundred twenty" rather than "one hundred and twenty", but not the British (or Australians or other Commonwealth countries). American English seems to drop a lot of words that British English doesn't and wouldn't, especially conjunctions and prepositions.
Feb 24
comment northbridge very hot on Asus kgpe-d16 and Gnuboot: is normal?
I still have one remaining AM3 system (990FX chipset) with an FX-8150 CPU, and consider that to be "retro". It's over 15 years old, not much older than this KGPE-D16 Opteron motherboard. In any case, this is a hardware question about whether high temperatures are normal, not a software question, and not about Unix & Linux.
Feb 24
comment northbridge very hot on Asus kgpe-d16 and Gnuboot: is normal?
I’m voting to close this question because it doesn't have anything to do with Unix or Linux. Seems more suited to retrocomputing.stackexchange.com
Feb 24
comment NFS server stuck at shutdown
BTW, systemd does not help with this kind of problem at all, and often makes it worse: it can be utterly brain-damaged when shutting down or rebooting. I've seen it stuck for hours (until I got bored of waiting and power-cycled the machine) because of zombie processes that it should have just ignored but kept trying to kill.
Feb 24
comment NFS server stuck at shutdown
The classic solution was sync; sync; sync; reboot (or whatever command caused an immediate reboot, no questions asked and no shutdown scripts run; these days, on Linux, that's reboot -f or systemctl reboot -f). The multiple syncs were possibly-but-not-necessarily-useful paranoia voodoo, as most reboot programs will sync anyway unless you tell them not to with -n or similar (because sometimes it was the sync itself that locked up).
Feb 24
revised How can I find files that contain one or more uppercase letters in their filenames?
added examples
Feb 24
comment How can I find files that contain one or more uppercase letters in their filenames?
BTW, if you intend to pass the found filenames to another program, remember to either use -exec ... {} +, or pipe the filenames with NUL separators (-print0) and tell whatever program(s) they're being piped into to expect NUL-separated input (usually with a -0, -z, -Z, --null, or similar option). Or, if you want to use them in a bash script, read them into an array with mapfile -t -d '' and process substitution.
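A minimal sketch of the mapfile approach (the demo directory and filenames are invented; `[[:upper:]]` is used here to avoid locale-dependent `[A-Z]` ranges):

```shell
#!/usr/bin/env bash
# Invented demo dir with one uppercase-containing filename.
mkdir -p /tmp/updemo
touch /tmp/updemo/README /tmp/updemo/notes.txt

# NUL-separated find output read safely into a bash array
# via process substitution; works even with newlines in filenames.
mapfile -t -d '' files < <(find /tmp/updemo -type f -name '*[[:upper:]]*' -print0)
printf 'found: %s\n' "${files[@]}"
```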
Feb 24
answered How can I find files that contain one or more uppercase letters in their filenames?
Feb 23
revised Bash script for doing off-site backups using restic
added 1 character in body
Feb 23
revised Bash script for doing off-site backups using restic
added advice on what to learn for sysadmin tasks
Feb 23
revised Bash script for doing off-site backups using restic
added 10 characters in body
Feb 23
revised Bash script for doing off-site backups using restic
added 1 character in body