The scraper you linked to seems to be empty, but I had a look at the original scraper by Rebecca Ratcliffe. If yours is the same, you only need to put your URLs into a list and loop over them with a for loop:
import urlparse  # Python 2 module; in Python 3 this functionality lives in urllib.parse

urls = ['/issues/2013-01-15;2013-01-15/all=NoticeCode%3a2441/start=1',
        '/issues/2013-01-15;2013-01-15/all=NoticeCode%3a2453/start=1',
        '/issues/2013-01-15;2013-01-15/all=NoticeCode%3a2462/start=1',
        '/issues/2012-02-10;2013-02-20/all=NoticeCode%3a2441/start=1']
base_url = 'http://www.london-gazette.co.uk'

for u in urls:
    # Join the relative path onto the base URL, then run the existing
    # scraping routine defined elsewhere in the scraper
    starting_url = urlparse.urljoin(base_url, u)
    scrape_and_look_for_next_link(starting_url)
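In case you are on Python 3, `urlparse` was merged into the standard-library `urllib.parse` module, so the same joining step looks like this (a minimal sketch with the list trimmed to two entries for brevity):

```python
from urllib.parse import urljoin  # Python 3 home of urljoin

base_url = 'http://www.london-gazette.co.uk'
urls = ['/issues/2013-01-15;2013-01-15/all=NoticeCode%3a2441/start=1',
        '/issues/2013-01-15;2013-01-15/all=NoticeCode%3a2453/start=1']

# Build the absolute URLs; a path starting with '/' replaces the
# base URL's path component
full_urls = [urljoin(base_url, u) for u in urls]
for full_url in full_urls:
    print(full_url)
```

Each printed URL starts with `http://www.london-gazette.co.uk/issues/...`, ready to be passed to your scraping function.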
Just have a look at this scraper, which I copied and adapted accordingly.