how to read file from line x to the end of a file in bash
Bash Problem Overview
I would like to know how I can read each line of a CSV
file, from the second line to the end of the file, in a bash script.
I know how to read a file in bash:
while read line
do
echo -e "$line\n"
done < file.csv
But, I want to read the file starting from the second line to the end of the file. How can I achieve this?
Bash Solutions
Solution 1 - Bash
tail -n +2 file.csv
From the man page:
-n, --lines=N
output the last N lines, instead of the last 10
...
If the first character of N (the number of bytes or lines) is a '+',
print beginning with the Nth item from the start of each file,
otherwise, print the last N items in the file.
In English this means that:
tail -n 100
prints the last 100 lines
tail -n +100
prints all lines starting from line 100
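To feed those lines into a while loop without losing variables to a pipe's subshell, tail can be combined with bash process substitution. A minimal sketch (the file name and contents are just examples):

```shell
# Build a small sample file (illustrative data).
printf 'header\nrow1\nrow2\n' > /tmp/sample_tail.csv

# tail -n +2 prints everything from line 2 onward. Process
# substitution keeps the while loop in the current shell,
# so variables set inside it (here, count) survive the loop.
count=0
while IFS= read -r line; do
  count=$((count + 1))
done < <(tail -n +2 /tmp/sample_tail.csv)

echo "$count"   # 2 (the two rows after the header)
```

Piping tail into the loop would also work, but the loop would then run in a subshell and count would remain 0 afterwards.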
Solution 2 - Bash
Simple solution with sed:

sed -n '2,$p' <thefile

where 2 is the number of the line you wish to read from.
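A quick check that the sed form behaves as described; sed '1d' (delete line 1, print the rest) is an equivalent spelling. The file name here is illustrative:

```shell
printf 'line1\nline2\nline3\n' > /tmp/sample_sed.txt

# Print from line 2 to the end ($ addresses the last line).
from2=$(sed -n '2,$p' /tmp/sample_sed.txt)

# Equivalent: delete line 1; sed prints all remaining lines by default.
nodel1=$(sed '1d' /tmp/sample_sed.txt)

echo "$from2"
[ "$from2" = "$nodel1" ] && echo "equivalent"
```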
Solution 3 - Bash
Or else (pure bash)...
{ for ((i=1;i--;));do read;done;while read line;do echo $line;done } < file.csv
Better written:
linesToSkip=1
{
for ((i=$linesToSkip;i--;)) ;do
read
done
while read line ;do
echo "$line"
done
} < file.csv
This works even if linesToSkip == 0 or linesToSkip is greater than the number of lines in file.csv.
Edit:

Changed ( ) for { }, as gniourf_gniourf enjoined me to consider: the first syntax generates a sub-shell, while { } doesn't.

Of course, for skipping only one line (as in the original question's title), the loop for ((i=1;i--;));do read;done could simply be replaced by a single read:

{ read;while read line;do echo "$line";done } < file.csv
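The same pure-bash idea can be wrapped in a small reusable function. skip_lines is a hypothetical helper name, not a standard command:

```shell
# skip_lines N: read and discard N lines from stdin, then print the rest.
skip_lines() {
  local n=$1 line
  while ((n-- > 0)); do
    read -r line || return 0   # stop cleanly if the file has fewer than N lines
  done
  while IFS= read -r line; do
    printf '%s\n' "$line"
  done
}

printf 'header\na\nb\n' > /tmp/sample_skip.csv
skip_lines 1 < /tmp/sample_skip.csv   # prints a, then b
```

Like the braced block above, this runs in the current shell; redirecting the file into the function call avoids any subshell.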
Solution 4 - Bash
There are many solutions to this. One of my favorites is:
(head -2 > /dev/null; whatever_you_want_to_do) < file.txt
You can also use tail to skip the lines you want:
tail -n +2 file.txt | whatever_you_want_to_do
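A note on the head variant: it relies on head leaving the shared file descriptor's offset just past the lines it consumed. GNU head does this for regular (seekable) files, but the behavior is not guaranteed for pipes. A sketch under that assumption (file name is illustrative; head -n 1 skips just the first line):

```shell
printf '1\n2\n3\n4\n' > /tmp/sample_head.txt

# Discard the first line, then hand the rest of the same open
# file descriptor to cat. Relies on head repositioning the
# offset (GNU head does, for seekable input).
( head -n 1 > /dev/null; cat ) < /tmp/sample_head.txt
```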
Solution 5 - Bash
Depending on what you want to do with your lines: if you want to store each selected line in an array, the best choice is definitely the builtin mapfile:

numberoflinestoskip=1
mapfile -s $numberoflinestoskip -t linesarray < file

will store each line of file, starting from line 2, in the array linesarray.

See help mapfile for more info.
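A short mapfile usage sketch (the file name, its contents, and the array name are just examples):

```shell
printf 'header\nfirst\nsecond\n' > /tmp/sample_map.csv

# -s 1 skips one line; -t trims the trailing newline from each entry.
mapfile -s 1 -t linesarray < /tmp/sample_map.csv

echo "${#linesarray[@]}"            # 2 entries stored
printf '%s\n' "${linesarray[@]}"    # first, then second
```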
If you don't want to store each line in an array, well, there are other very good answers.
As F. Hauri suggests in a comment, this is only applicable if you need to store the whole file in memory.
Otherwise, your best bet is:
{
read; # Just a scratch read to get rid (pun!) of the first line
while read line; do
echo "$line"
done
} < file.csv
Notice: there's no subshell involved/needed.
Solution 6 - Bash
This will work:
i=1
while read line
do
test $i -eq 1 && ((i=i+1)) && continue
echo -e "$line\n"
done < file.csv
Solution 7 - Bash
I would just use a counter variable.
#!/bin/bash
i=0
while read line
do
if [ "$i" -ne 0 ]; then
echo -e "$line"
fi
i=$((i+1))
done < "file.csv"
UPDATE: The above checks the $i variable on every line of the CSV, so for a very large file with millions of lines it wastes a significant number of CPU cycles, which is no good for Mother Nature.

The following one-liner instead uses sed to delete the very first line of the CSV file and then feeds the remaining file to the while loop:

sed 1d file.csv | while read d; do echo "$d"; done
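One caveat with this pipeline: in bash, each stage of a pipeline runs in a subshell, so variables set inside the while loop do not survive it. A quick demonstration (sample file is illustrative):

```shell
printf 'header\na\nb\n' > /tmp/sample_pipe.csv

n=0
sed 1d /tmp/sample_pipe.csv | while read -r d; do
  n=$((n + 1))     # increments only inside the pipeline's subshell
done
echo "$n"          # still 0 in the parent shell
```

If you need the loop's variables afterwards, prefer the redirection-based forms shown in the other solutions, e.g. done < <(sed 1d file.csv).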