Extract specific columns from delimited file using Awk

I don't know if it's possible to do ranges in awk. You could do a for loop, but you would have to add handling to filter out the columns you don't want. It's probably easier to do this:

awk -F, '{OFS=",";print $1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$20,$21,$22,$23,$24,$25,$30,$33}' infile.csv > outfile.csv

Something else to consider, and it's faster and more concise:

cut -d "," -f1-10,20-25,30-33 infile.csv > outfile.csv

As to the second part of your question, I would probably write a script in Perl that knows how to handle header rows, parsing the column names from stdin or a file and then doing the filtering. It's probably a tool I would want to have around for other things. I am not sure about doing it in a one-liner, although I am sure it can be done.
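For what it's worth, here is a rough awk sketch of that idea. It assumes a plain comma-separated file with no quoted fields, and the column names (id, name, total) are just placeholders you would pass in via -v:

awk -F, -v cols="id,name,total" '
NR == 1 {
    # map each header name to its field number
    for (i = 1; i <= NF; i++) idx[$i] = i
    # split the requested header names into an array
    n = split(cols, want, ",")
}
{
    # rebuild each line from only the requested columns
    line = ""
    for (i = 1; i <= n; i++)
        line = line (i > 1 ? "," : "") $(idx[want[i]])
    print line
}' infile.csv > outfile.csv

The header row itself is printed too, since the second block runs for every line.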


As mentioned by @Tom, the cut and awk approaches don't actually work for CSVs with quoted strings. An alternative is a Python module that provides the command-line tool csvfilter. It works like cut, but properly handles CSV column quoting:

csvfilter -f 1,3,5 in.csv > out.csv

If you have Python (and you should), you can install it simply like this:

pip install csvfilter

Please take note that the column indexing in csvfilter starts with 0 (unlike awk, which starts with $1). More info at https://github.com/codeinthehole/csvfilter/
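For illustration, this is the kind of quoted input that trips up the plain cut/awk approaches, because the embedded comma is treated as just another delimiter:

echo '1,"Smith, John",42' | cut -d, -f2

which prints "Smith rather than Smith, John.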


Other languages have shortcuts for ranges of field numbers, but not awk; you'll have to write your code out as you fear ;-)

awk -F, 'BEGIN {OFS=","} { print $1, $2, $3, $4 ..... $30, $33}' infile.csv > outfile.csv
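If typing out every field gets tedious, a loop can cover the ranges instead. A rough sketch, assuming the same 1-10, 20-25 and 30-33 ranges as the cut example above:

awk -F, 'BEGIN { OFS = "," } {
    # collect fields 1-10, 20-25 and 30-33 into one output line
    line = $1
    for (i = 2;  i <= 10; i++) line = line OFS $i
    for (i = 20; i <= 25; i++) line = line OFS $i
    for (i = 30; i <= 33; i++) line = line OFS $i
    print line
}' infile.csv > outfile.csv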

There is no direct function in awk to use field names as column specifiers.

I hope this helps.


Others have answered your earlier question. For this:

As an addendum, is there any way to extract directly with the header names rather than with column numbers?

I haven't tried it, but you could store each header's field number in a hash (keyed by the header name) and then use that hash to look up a column by name later on.

NR == 1 {
    # map each header name to its field number (awk fields are 1-based)
    for (i = 1; i <= NF; i++) hash[$i] = i
}

Then later on, use it:

NR > 1 {
    j = hash["header1"]
    print $j
}
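Put together as a single command (again, untested; it assumes comma-separated input and a column literally named header1):

awk -F, 'NR == 1 { for (i = 1; i <= NF; i++) hash[$i] = i }
         NR > 1  { j = hash["header1"]; print $j }' infile.csv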