Question

There is a VPN that keeps changing its password. I have autologin set up, but obviously the VPN connection drops every time they change the password, and I have to manually copy and paste the new password into the credentials file.

http://www.vpnbook.com/freevpn

This is annoying. I realise that the vpn probably wants people not to be able to do this, but it's not against the ToS and not illegal, so work with me here!

I need a way to automatically generate a file which has nothing in it except

username
password

on separate lines, just like the one above. Downloading the entire page as a text file automatically (I can do that) will therefore not work. OpenVPN will not understand the credentials file unless it is purely and simply

username
password

and nothing more.

So, any ideas?


Solution

This kind of thing is ideally done via an API that vpnbook provides; a script could then access the information much more easily and store it in a text file.

Barring that, and it looks like vpnbook doesn't have an API, you'll have to use a technique called web scraping.

To automate this via "Web Scraping", you'll need to write a script that does the following:

  1. Log in to vpnbook.com with your credentials
  2. Navigate to the page that has the credentials
  3. Traverse the structure of the page (the DOM) to find the info you want
  4. Save that info out to a text file.

I typically do web scraping with Ruby and the mechanize library. The first example on the Mechanize examples page shows how to visit the Google homepage, perform a search for "Hello World", and then grab the links in the results one at a time, printing each one out. This is similar to what you are trying to do, except that instead of printing the results you would write them to a text file (a sketch of that follows the example below):

require 'rubygems'
require 'mechanize'

# create a scraper that identifies itself as Safari on a Mac
a = Mechanize.new { |agent|
  agent.user_agent_alias = 'Mac Safari'
}

# fetch the Google homepage, fill in the search form, and submit it
a.get('http://google.com/') do |page|
  search_result = page.form_with(:id => 'gbqf') do |search|
    search.q = 'Hello world'
  end.submit

  # print the text of each link on the results page
  search_result.links.each do |link|
    puts link.text
  end
end
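
For that last step, here is a minimal sketch of writing the two values out; the filename and the placeholder values are assumptions, not part of the original example:

# minimal sketch: write the scraped values to a credentials file,
# one per line and nothing else, which is the format OpenVPN expects
username = 'vpnbook'       # placeholder: use the value scraped from the page
password = 'new-password'  # placeholder: likewise

File.open('credentials.txt', 'w') do |f|
  f.puts username
  f.puts password
end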

To run this on your computer you would need to:

a. Install Ruby
b. Save this in a file called scrape.rb
c. Run it from the command line with "ruby scrape.rb"

OS X comes with an older Ruby that would work for this. Check out the Ruby site for instructions on how to install it or get it working on your OS.

Before using a gem like mechanize you need to install it:

gem install mechanize

(this depends on Rubygems being installed, which I think typically comes with Ruby).

If you're new to programming this might sound like a big project, but you'll gain an amazing tool for your toolbox: you'll feel like you can pretty much "do anything" you need to, rather than relying on other developers to have happened to build the software you need.

Note: for sites that rely on JavaScript, mechanize won't work. You can use Capybara + PhantomJS to drive an actual browser, one that can run JavaScript, from Ruby.
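
As a rough sketch of that route, assuming the poltergeist gem is used as the PhantomJS driver (and with a made-up CSS selector that you would need to replace after inspecting the real page):

require 'capybara'
require 'capybara/poltergeist'

# register PhantomJS (via the poltergeist gem) as a Capybara driver
Capybara.register_driver :poltergeist do |app|
  Capybara::Poltergeist::Driver.new(app)
end

session = Capybara::Session.new(:poltergeist)
session.visit('http://www.vpnbook.com/freevpn')

# '#openvpn .password' is a made-up selector; inspect the real page to find the right one
puts session.find('#openvpn .password').text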

Note 2: It's possible that you don't actually have to go through the motions of (1) going to the login page, (2) filling in your info, and (3) clicking on "Login". Depending on how their authentication works, you may be able to go directly to the page that displays the info you need and provide your credentials to that page directly, using either basic auth or other means. You'll have to look at how their auth system works and do some trial and error. The most straightforward, most-likely-to-work approach is to just do what a real user would do: log in through the login page.
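
For example, if the site happened to use HTTP basic auth (that is only an assumption; vpnbook may use something else entirely), Mechanize can supply the credentials up front:

require 'mechanize'

agent = Mechanize.new
# add_auth registers credentials for a site protected by HTTP basic auth;
# the URL and the account details here are placeholders
agent.add_auth('http://www.vpnbook.com/', 'your-username', 'your-password')

page = agent.get('http://www.vpnbook.com/freevpn')
puts page.body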

Update

After writing all this, I came across the vpnbook-utils library (during a search for "vpnbook api") which I think does what you need:

...With this little tool you can generate OpenVPN config files for the free VPN provider vpnbook.com... ...it also extracts the ever changing credentials from the vpnbook.com website...

It looks like with one command:

vpnbook config

you can automatically grab the credentials and write them into a config file.

Good luck! I still recommend you learn ruby :)

OTHER TIPS

You don't even need to parse the content. Just string-search for the second occurrence of Username:, cut everything before it, and use sed to pull out the content between the next two occurrences of <strong> and </strong>. You can use curl or wget -qO- to get the website's content.
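
Sticking with Ruby for consistency, here is a rough sketch of that same string-search idea; it assumes the page really does put the values inside <strong> tags after the second "Username:", which you should verify against the live page:

require 'open-uri'

html = URI.open('http://www.vpnbook.com/freevpn').read

# drop everything up to and including the second occurrence of "Username:"
after = html.split('Username:', 3).last

# the next two <strong>...</strong> blocks are assumed to hold the username and password
username, password = after.scan(%r{<strong>(.*?)</strong>}m).flatten.first(2)

File.write('credentials.txt', "#{username}\n#{password}\n")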

Licensed under: CC-BY-SA with attribution