perl.gg / one-liners

<!-- category: one-liners -->

Perl as an Awk Replacement

2026-03-28

Awk splits lines into fields. That's basically its whole thing. You say $1 for the first field, $2 for the second, and awk chops every line on whitespace for you.

Perl does the same thing. One flag and you're there.

perl -ane 'print "$F[0]\n"'
The -a flag turns on auto-split mode. Every line of input gets chopped into the @F array. That's it. Full Perl power behind awk-style field processing.

Part 1: THE -a FLAG

Add -a to any perl -n or perl -p one-liner, and Perl automatically calls split on each line of input, storing the result in @F.
echo "alice 30 admin" | perl -ane 'print "$F[0]\n"'
Output:
alice
By default it splits on whitespace, just like awk. Leading whitespace is ignored, consecutive whitespace is treated as a single delimiter.

The -a flag assumes you're reading input line by line, so you almost always pair it with -n (loop without printing) or -p (loop with printing). Since Perl 5.20, -a actually implies -n, but spelling it out is clearer.

# -n: loop, don't auto-print
perl -ane 'print "$F[2]\n" if $F[2]' file.txt

# -p: loop AND auto-print (like sed)
perl -ape '$_ = "$F[0]\n"' file.txt

Part 2: ACCESSING FIELDS

Awk uses $1, $2, $3. Perl uses $F[0], $F[1], $F[2].

The difference: awk is 1-indexed. Perl is 0-indexed.

AWK FIELD   PERL FIELD   CONTENT
---------   ----------   -------
$1          $F[0]        First field
$2          $F[1]        Second field
$NF         $F[-1]       Last field
$(NF-1)     $F[-2]       Second to last
$0          $_           Entire line
NF          scalar @F    Number of fields
Perl's negative indexing is a genuine win here. Getting the last field in awk requires $NF. In Perl it's $F[-1]. Second to last? $F[-2]. Any field from the end? Easy.
# last field of every line
echo "one two three four" | perl -ane 'print "$F[-1]\n"'
Output:
four

Part 3: CHANGING THE DELIMITER WITH -F

Whitespace is the default. But data comes in all shapes. Use -F to change the split delimiter.
# split on colons
perl -F: -ane 'print "$F[0]\n"' /etc/passwd

# split on commas
perl -F, -ane 'print "$F[2]\n"' data.csv

# split on tabs
perl -F'\t' -ane 'print "$F[0]\n"' spreadsheet.tsv
The -F flag takes a pattern. That pattern is actually a regex, so you can get fancy:
# split on one or more dashes
perl -F'-+' -ane 'print "$F[1]\n"' file.txt

# split on pipe character (escaped: | is a regex metacharacter)
perl -F'\|' -ane 'print "$F[0]\n"' pipe-delimited.txt
The awk equivalent is awk -F:. Same idea, different letter.

Part 4: PARSING /etc/passwd

The classic example. Every Unix admin has done this in awk at least once.
# awk version
awk -F: '{print $1, $3}' /etc/passwd

# perl version
perl -F: -ane 'print "$F[0] $F[2]\n"' /etc/passwd
Find users with UID above 1000:
perl -F: -ane 'print "$F[0]\n" if $F[2] > 1000' /etc/passwd
Find users whose shell is bash:
perl -F: -ane 'chomp $F[-1]; print "$F[0]\n" if $F[-1] eq "/bin/bash"' /etc/passwd
The chomp on the last field is important. Auto-split doesn't strip the trailing newline from the last field. The newline from $_ ends up in $F[-1].

Or use -l to handle it automatically:

perl -F: -lane 'print $F[0] if $F[-1] eq "/bin/bash"' /etc/passwd
The -l flag adds chomp to input and "\n" to output. It's the missing piece that makes -a truly comfortable.

Part 5: THE KILLER COMBO: -lane

You'll type this so often it becomes muscle memory:
perl -lane '...'
perl -F: -lane '...'
What each flag does:
-l   Chomp input, add newline to output
-a   Auto-split into @F
-n   Loop over input without printing
-e   Execute the following code
Together, -lane gives you: read each line, strip the newline, split into fields, run your code, and add the newline back whenever you print.
# sum the third column
perl -lane '$sum += $F[2]; END { print $sum }' data.txt

# print lines where field 5 is greater than 100
perl -lane 'print if $F[4] > 100' report.txt

# print unique values from column 2
perl -lane '$seen{$F[1]}++ or print $F[1]' log.txt

Part 6: CSV HANDLING

Real CSV has quoted fields, embedded commas, escaped quotes. Auto-split on commas will break on that.

For quick and dirty CSV where you know the fields are clean:

perl -F, -lane 'print $F[0]' simple.csv
For real CSV with quoted fields, you need more:
perl -MText::CSV -ne '
  BEGIN { $csv = Text::CSV->new }
  $csv->parse($_) and print( ($csv->fields)[2], "\n" )
' data.csv
But honestly? For quick field extraction from well-behaved data, -F, works 90% of the time. Don't over-engineer one-liners.

Part 7: LOG PARSING

This is where Perl leaves awk in the dust. Awk can split fields, but Perl can split fields AND do complex regex, AND do math, AND do hash lookups, all in one line.

Apache access log analysis. Print IPs with more than 100 requests:

perl -lane '
  $hits{$F[0]}++;
  END {
    for (sort { $hits{$b} <=> $hits{$a} } keys %hits) {
      print "$hits{$_} $_" if $hits{$_} > 100;
    }
  }
' access.log
Sum bytes transferred (field 10 in combined log format):
perl -lane '$total += $F[9]; END { printf "%.2f GB\n", $total / 1073741824 }' access.log
Find all 500 errors and show the URL:
perl -lane 'print "$F[0] $F[6]" if $F[8] == 500' access.log
Awk can do these, sure. But once you need a regex inside the condition, or a hash for deduplication, or formatted output, awk starts to creak. Perl just keeps going.

Part 8: REWRITING AWK SCRIPTS

The translation is almost mechanical:
AWK                   PERL -lane
---                   ----------
$1                    $F[0]
$NF                   $F[-1]
NF                    scalar @F
NR                    $.
$0                    $_
FS = ":"              -F:
OFS = ","             $, = "," (or use join)
/pattern/             m~pattern~
$2 ~ /regex/          $F[1] =~ m~regex~
BEGIN { ... }         BEGIN { ... }
END { ... }           END { ... }
{ print $1, $3 }      print "$F[0] $F[2]"
A real awk script:
awk -F: '$3 >= 1000 && $7 !~ /nologin/ { printf "%-15s %s\n", $1, $7 }' /etc/passwd
In Perl:
perl -F: -lane 'printf "%-15s %s\n", $F[0], $F[6] if $F[2] >= 1000 && $F[6] !~ m~nologin~' /etc/passwd
Same logic. Same output. But the Perl version can grow. Need to add a hash lookup? A subroutine call? A module import? No problem. Try that in awk.

Part 9: MULTI-CHARACTER DELIMITERS

Awk supports multi-character field separators natively. So does Perl's -F, because it's a regex:
# split on " :: "
perl -F'\s*::\s*' -lane 'print $F[1]' config.txt

# split on two or more spaces (fixed-width-ish data)
perl -F'\s{2,}' -lane 'print $F[0]' report.txt

# split on HTML table cells
perl -F'</td>\s*<td>' -lane 'print $F[2]' ugly.html
This is where -F silently becomes more powerful than awk's -F. Awk's field separator is a string or an ERE. Perl's is a full Perl regex. Lookaheads, lookbehinds, character classes, the works.
# split on comma NOT inside quotes (simplified)
perl -F'(?:,)(?=(?:[^"]*"[^"]*")*[^"]*$)' -lane 'print $F[0]' data.csv
You'd never do that in awk. You'd probably never do it in Perl either, because at that point you should use Text::CSV. But you COULD.

Part 10: PRACTICAL RECIPES

Print the second column, sorted and unique:
perl -lane 'print $F[1]' file.txt | sort -u
Or do it all in Perl:
perl -lane '$u{$F[1]}++; END { print for sort keys %u }' file.txt
Swap columns 1 and 3:
perl -lane 'print join " ", @F[2,1,0,3..$#F]' file.txt
Add a new column:
perl -lane 'print join "\t", @F, $F[1] * $F[2]' prices.tsv
Filter rows where any field matches a pattern:
perl -lane 'print if grep { m~error~i } @F' log.txt
Print only lines with exactly 5 fields:
perl -lane 'print if @F == 5' data.txt
Renumber the first column:
perl -lane '$F[0] = $.; print join "\t", @F' file.tsv
          @F
        / | \
       /  |  \
    [0] [1] [2] ... [-1]

awk splits it. Perl splits it better.

   .--.
  |o_o |
  |:_/ |
 //   \ \
(|     | )
/'\_   _/`\
\___)=(___/
Every awk one-liner you've ever written can be a Perl one-liner. Same idea, more power. The -a flag is the bridge. Cross it, and you never need to go back.

perl.gg