It appears that one problem is this line...
with open("local/file1.csv", "w") as f:
The output file is overwritten on each function call ("w" indicates the file is opened in write mode). When an existing file is opened in write mode, its contents are cleared. Since the file is cleared every time the function is called, it looks as if only one row is ever written.
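To see the effect in isolation, here is a minimal standalone demonstration (demo.csv and write_row are made up for illustration, not part of your code):

def write_row(row):
    # "w" truncates demo.csv before each write
    with open("demo.csv", "w") as f:
        f.write(row + "\n")

write_row("a,1")
write_row("b,2")   # after this call demo.csv contains only "b,2"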
The bigger issue is that it is not good practice for multiple threads to write to a single file.
You could try this...
valid_chars = "-_.() abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
# keep only filename-safe characters from the url
filename = ''.join(c for c in url if c in valid_chars)
with open("local/%s.csv" % filename, "w") as f:
    # rest of code...
...which will write each url to a different file (assuming the urls are unique). You could then recombine the files later; a sketch of that follows.
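For the recombining step, something along these lines could work (a rough sketch; the glob pattern, the combined.csv name, and the local/ layout are assumptions, not from your code):

import glob
import shutil

# concatenate every per-url csv in local/ into one combined file
with open("local/combined.csv", "w") as out:
    for path in glob.glob("local/*.csv"):
        if path.endswith("combined.csv"):
            continue  # don't copy the output file into itself
        with open(path) as part:
            shutil.copyfileobj(part, out)

A better approach would be to put the data in a Queue and write it all after the call to threaded. Something like this...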
import csv
import Queue

output_queue = Queue.Queue()

def get_links():
    # gather urls
    urls = ['www.google.com'] * 25
    threaded(urls, write_csv, num_threads=5)  # the threaded() helper from the question

def write_csv(url):
    # fetch/parse the url here; this example just uses dummy data
    data = {'cat': 1, 'dog': 2}
    output_queue.put(data)

if __name__ == '__main__':
    get_links()  # threaded blocks until its internal input queue is cleared
    with open('output.csv', 'wb') as f:
        csv_out = csv.writer(f)
        while not output_queue.empty():
            d = output_queue.get()
            csv_out.writerow(d.keys())
            csv_out.writerow(d.values())
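As written, this emits a header row in front of every data row. If every dict put on the queue has the same keys, a DictWriter variant writes the header once (a sketch under that assumption, keeping the same Python 2 style):

import csv

with open('output.csv', 'wb') as f:
    writer = csv.DictWriter(f, fieldnames=['cat', 'dog'])
    writer.writeheader()
    while not output_queue.empty():
        writer.writerow(output_queue.get())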