Question

Let's say I have a file named plaintext.txt which has two lines with the following content:

you can be social online by logging in to http://www.facebook.com and http://plus.google.com
alternatively, you can also create an account in http://www.twitter.com

I know that I can display the whole contents of the file by issuing a cat command on the command line, like

cat plaintext.txt

I want to collect all of the URLs from the plain text so that I can display

http://www.facebook.com
http://plus.google.com
http://www.twitter.com

I presume the command-line statement for this would be something like

cat plaintext.txt | grep something

but I don't know exactly how.

How is it possible to use grep for collecting URL links?


Solution 2

sed 's/\s/\n/g' plaintext.txt | grep http:

where plaintext.txt is the file containing your two lines. The sed command replaces every whitespace character with a newline, putting each word on its own line, and grep then keeps only the lines that contain http:.
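
If you would rather avoid sed, tr can do a similar split; this is just a minimal sketch of the same pipeline, assuming a POSIX tr:

# turn spaces into newlines so each word sits on its own line,
# then keep only the words that contain http:
tr -s ' ' '\n' < plaintext.txt | grep 'http:'

The -s option squeezes runs of repeated newlines into one, so consecutive spaces do not produce empty lines.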

OTHER TIPS

You can do this with grep's -o option.

$ cat file
you can be social online by logging in to http://www.facebook.com and http://plus.google.com
alternatively, you can also create an account in http://www.twitter.com

$ grep -o "http[^ ]*" file
http://www.facebook.com
http://plus.google.com
http://www.twitter.com

From the man page:

-o, --only-matching
              Show only the part of a matching line that matches PATTERN.
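
Note that the pattern http[^ ]* matches any word that starts with http and stops only at a literal space, so it can also pick up words like httpd.conf and will run past tab characters. If you want to be a bit stricter about what counts as a URL, an extended regular expression is one option; this is a sketch, assuming a grep that supports -E and -o (GNU and BSD grep both do):

$ grep -Eo 'https?://[^[:space:]]+' plaintext.txt
http://www.facebook.com
http://plus.google.com
http://www.twitter.com

The :// requirement keeps non-URL words out, https? also covers secure links, and [^[:space:]]+ stops at any whitespace character rather than only at a space.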
Licensed under: CC-BY-SA with attribution