Easiest way to extract the urls from an html page using sed or awk only

Html, Regex, Bash, Sed, Awk

Html Problem Overview


I want to extract the URL from within the anchor tags of an html file. This needs to be done in BASH using SED/AWK. No perl please.

What is the easiest way to do this?

Html Solutions


Solution 1 - Html

You could also do something like this (provided you have lynx installed)...

Lynx versions < 2.8.8

lynx -dump -listonly my.html

Lynx versions >= 2.8.8 (courtesy of @condit)

lynx -dump -hiddenlinks=listonly my.html
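
Either way, the dump is a numbered list of references, so if you only want the bare URLs you can strip lynx's numbering with awk. A small sketch, assuming the usual "   1. http://..." layout of the dump (indentation can vary between lynx versions):

lynx -dump -listonly my.html | awk '/^[[:space:]]*[0-9]+\./ { print $2 }'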

Solution 2 - Html

You asked for it:

$ wget -O - http://stackoverflow.com | \
  grep -io '<a href=['"'"'"][^"'"'"']*['"'"'"]' | \
  sed -e 's/^<a href=["'"'"']//i' -e 's/["'"'"']$//i'

This is a crude tool, so all the usual warnings about attempting to parse HTML with regular expressions apply.
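
The '"'"' sequences are just shell quoting gymnastics: each one embeds a literal single quote inside a single-quoted string, so the bracket expressions grep and sed actually see are ['"], which match either kind of quote. A sketch of the same pipeline with the quote characters held in a shell variable, which may be easier to read (same assumptions and caveats as above):

$ q="'\""    # the two quote characters, ' and "
$ wget -O - http://stackoverflow.com | \
  grep -io "<a href=[$q][^$q]*[$q]" | \
  sed -e "s/^<a href=[$q]//I" -e "s/[$q]\$//"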

Solution 3 - Html

grep "<a href=" sourcepage.html
  |sed "s/<a href/\\n<a href/g" 
  |sed 's/\"/\"><\/a>\n/2'
  |grep href
  |sort |uniq
  1. The first grep looks for lines containing URLs. You can add more filters after it if you only want local pages, i.e. no http, just relative paths.

  2. The first sed adds a newline in front of each a href URL tag with the \n.

  3. The second sed cuts each line at the 2nd " by replacing it with the closing /a tag plus a newline. Both seds give you each URL on a single line, but there is garbage, so:

  4. The 2nd grep href cleans the mess up.

  5. The sort and uniq give you one instance of each URL present in sourcepage.html. Note that the output is still wrapped in anchor tags; see the sketch below if you want the bare URLs.
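
If you would rather have the bare URLs than the rebuilt <a href="..."></a> tags, one extra sed pass strips the wrapper. A sketch, assuming the hrefs are double-quoted:

grep "<a href=" sourcepage.html |
  sed "s/<a href/\\n<a href/g" |
  sed 's/\"/\"><\/a>\n/2' |
  grep href |
  sort | uniq |
  sed 's/^<a href="//; s/"><\/a>$//'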

Solution 4 - Html

With the Xidel - HTML/XML data extraction tool, this can be done via:

$ xidel --extract "//a/@href" http://example.com/

With conversion to absolute URLs:

$ xidel --extract "//a/resolve-uri(@href, base-uri())" http://example.com/

Solution 5 - Html

I made a few changes to Greg Bacon's solution:

cat index.html | grep -o '<a .*href=.*>' | sed -e 's/<a /\n<a /g' | sed -e 's/<a .*href=['"'"'"]//' -e 's/["'"'"'].*$//' -e '/^$/ d'

This fixes two problems:

  1. We now match anchors where href is not the first attribute
  2. We cover the possibility of several anchors on the same line

Solution 6 - Html

An example, since you didn't provide any sample input:

awk 'BEGIN{
RS="</a>"       # treat each closing anchor tag as the record separator
IGNORECASE=1    # gawk-specific: makes the matching case-insensitive
}
{
  for(o=1;o<=NF;o++){
    if ( $o ~ /href/){
      gsub(/.*href=\042/,"",$o)   # strip everything up to and including href=" (\042 is the double quote)
      gsub(/\042.*/,"",$o)        # strip everything from the closing " onwards
      print $(o)
    }
  }
}' index.html

Solution 7 - Html

I am assuming you want to extract a URL from some HTML text, and not parse HTML (as one of the comments suggests). Believe it or not, someone has already done this.

OT: The sed website has a lot of good information and many interesting/crazy sed scripts. You can even play Sokoban in sed!

Solution 8 - Html

You can do it easily with the following regex, which is quite good at finding URLs:

\b(([\w-]+://?|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))

I took it from John Gruber's article on how to find URLs in text.

That lets you find all URLs in a file f.html as follows:

cat f.html | grep -o \
    -E '\b(([\w-]+://?|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))'
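
Note that the pattern relies on Perl-style constructs such as \w and the non-capturing group (?:...), which POSIX ERE does not promise to support, so depending on your grep you may need PCRE mode instead of -E. A hedged variant using GNU grep's -P:

grep -oP '\b(([\w-]+://?|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))' f.html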

Solution 9 - Html

Expanding on kerkael's answer:

grep "<a href=" sourcepage.html
  |sed "s/<a href/\\n<a href/g" 
  |sed 's/\"/\"><\/a>\n/2'
  |grep href
  |sort |uniq
# now adding some more
  |grep -v "<a href=\"#"
  |grep -v "<a href=\"../"
  |grep -v "<a href=\"http"

The first grep I added removes links to local bookmarks.

The second removes relative links to upper levels.

The third removes absolute links, i.e. those that start with http (note the -v), leaving only local/relative links.

Pick and choose which one of these you use as per your specific requirements.

Solution 10 - Html

In bash, the following should work. Note that it doesn't use sed or awk, but uses tr and grep, both very standard and not perl ;-)

$ cat source_file.html | tr '"' '\n' | tr "'" '\n' | grep -e '^https://' -e '^http://' -e'^//' | sort | uniq

for example:

$ curl "https://www.cnn.com" | tr '"' '\n' | tr "'" '\n' | grep -e '^https://' -e '^http://' -e'^//' | sort | uniq

generates

//s3.amazonaws.com/cnn-sponsored-content
//twitter.com/cnn
https://us.cnn.com
https://www.cnn.com
https://www.cnn.com/2018/10/27/us/new-york-hudson-river-bodies-identified/index.html
https://www.cnn.com/2018/11/01/tech/google-employee-walkout-andy-rubin/index.html
https://www.cnn.com/election/2016/results/exit-polls
https://www.cnn.com/profiles/frederik-pleitgen
https://www.facebook.com/cnn
etc...

Solution 11 - Html

Go over the input with a first pass that replaces the start of each URL (http) with a newline (\nhttp). That guarantees that each link starts at the beginning of a line and is the only URL on that line.

The rest should be easy, here is an example:

sed "s/http/\nhttp/g" <(curl "http://www.cnn.com") | sed -n "s/\(^http[s]*:[a-Z0-9/.=?_-]*\)\(.*\)/\1/p"

As an alias that takes a file (or anything file-like, such as a process substitution) as its argument:

alias lsurls='_(){ sed "s/http/\nhttp/g" "${1}" | sed -n "s/\(^http[s]*:[a-zA-Z0-9/.=?_-]*\)\(.*\)/\1/p"; }; _'
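
A quick usage sketch, feeding the alias the same page via process substitution:

lsurls <(curl -s "http://www.cnn.com")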

Solution 12 - Html

This is my first post, so I'll do my best to explain why I'm posting this answer...

  1. Of the first 7 most-voted answers, 4 include grep even though the post explicitly says "using sed or awk only".
  2. Even though the post requires "No perl please", my answer bends that rule too, since it uses Perl-style regex inside grep.
  3. Still, this is the simplest way I know of (and the kind of one-liner that was asked for) to do it in BASH.

So here is the simplest one-liner, using GNU grep 2.28:

grep -Po 'href="\K.*?(?=")'
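
A quick usage sketch, piping a page in from curl (example.com is just a stand-in URL):

curl -s https://example.com/ | grep -Po 'href="\K.*?(?=")'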

About the \K escape: no info was to be found in the man and info pages, so I had to look it up elsewhere.... \K discards everything matched before it (and the \K itself) from the reported match, which is what leaves just the URL in the output. Bear in mind the advice from the man page: "This is highly experimental and grep -P may warn of unimplemented features."

Of course, you can modify the command to meet your tastes or needs, but I found it pretty much spot on for what was requested in the post.

I hope you find it useful.

Thanks!

Solution 13 - Html

You can try:

curl --silent -u "<username>:<password>" http://<NAGIOS_HOST>/nagios/cgi-bin/status.cgi|grep 'extinfo.cgi?type=1&host='|grep "status"|awk -F'</A>' '{print $1}'|awk -F"'>" '{print $3"\t"$1}'|sed 's/<\/a>&nbsp;<\/td>//g'| column -c2 -t|awk '{print $1}'

Solution 14 - Html

Here is how I did it, for a better view: create a shell script and give the link as a parameter; it will create a temp2.txt file.

a=$1

lynx -listonly -dump "$a" > temp

awk 'FNR > 2 {print$2}' temp > temp2.txt

rm temp

$ sh test.sh http://link.com

Solution 15 - Html

Eschewing the awk/sed requirement:

  1. urlextract is made just for such a task (documentation).
  2. urlview is an interactive CLI solution (github repo).

Solution 16 - Html

I scrape websites using Bash exclusively to verify the http status of client links and report back to them on errors found. I've found awk and sed to be the fastest and easiest to understand. Props to the OP.

curl -Lk https://example.com/ | sed -r 's~(href="|src=")([^"]+).*~\n\1\2~g' | awk '/^(href|src)/,//'

Because sed works on a single line, this will ensure that all URLs are formatted properly on a new line, including any relative URLs. The sed expression finds all href and src attributes and puts each on a new line while simultaneously removing the rest of the line, including the closing double quote (") at the end of the link.

Notice I'm using a tilde (~) as the sed delimiter for the substitution. This is preferable to the usual forward slash (/), which appears all over HTML and can confuse the sed substitution.

The awk finds any line that begins with href or src and outputs it.

Once the content is properly formatted, awk or sed can be used to collect any subset of these links. For example, you may not want base64-encoded data: images, just all the other images. Our new code would look like:

curl -Lk https://example.com/ | sed -r 's~(href="|src=")([^"]+).*~\n\1\2~g' | awk '/^(href|src)/,//' | awk '/^src="[^d]/,//'

Once the subset is extracted, just remove the href=" or src=" prefix:

sed -r 's~(href="|src=")~~g'

This method is extremely fast and I use these in Bash functions to format the results across thousands of scraped pages for clients that want someone to review their entire site in one scrape.
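
Putting the pieces together, a minimal end-to-end sketch (example.com stands in for a real site, a plain awk pattern replaces the range form above, and the final sed strips the attribute prefix so only bare URLs remain):

curl -Lk https://example.com/ |
  sed -r 's~(href="|src=")([^"]+).*~\n\1\2~g' |
  awk '/^(href|src)/' |
  sed -r 's~^(href="|src=")~~'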

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | codaddict | View Question on Stackoverflow
Solution 1 - Html | Hardy | View Answer on Stackoverflow
Solution 2 - Html | Greg Bacon | View Answer on Stackoverflow
Solution 3 - Html | kerkael | View Answer on Stackoverflow
Solution 4 - Html | Ingo Karkat | View Answer on Stackoverflow
Solution 5 - Html | Crisboot | View Answer on Stackoverflow
Solution 6 - Html | ghostdog74 | View Answer on Stackoverflow
Solution 7 - Html | Alok Singhal | View Answer on Stackoverflow
Solution 8 - Html | nes1983 | View Answer on Stackoverflow
Solution 9 - Html | Nikhil VJ | View Answer on Stackoverflow
Solution 10 - Html | Brad Parks | View Answer on Stackoverflow
Solution 11 - Html | user4401178 | View Answer on Stackoverflow
Solution 12 - Html | X00D45 | View Answer on Stackoverflow
Solution 13 - Html | dpathak | View Answer on Stackoverflow
Solution 14 - Html | Abhishek Gurjar | View Answer on Stackoverflow
Solution 15 - Html | Marek Kowalczyk | View Answer on Stackoverflow
Solution 16 - Html | CodeMilitant | View Answer on Stackoverflow