I think it's easy with itertools.product() and collections.Counter():
import csv
from itertools import product
from collections import Counter
with open("data.csv", newline="") as f:
    rdr = csv.reader(f, quotechar='"', delimiter='|')
    c = Counter((x, y) for _, a, b in rdr
                for x, y in product(a.split(), b.split()))
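To make the idea concrete, here is a self-contained sketch with a couple of made-up rows (the StringIO input stands in for data.csv; the three pipe-separated columns are an assumption carried over from above):

```python
import csv
from io import StringIO
from itertools import product
from collections import Counter

# Hypothetical sample standing in for data.csv: id|words_a|words_b
sample = StringIO("1|red blue|cat dog\n2|red|cat\n")

rdr = csv.reader(sample, quotechar='"', delimiter='|')
c = Counter((x, y) for _, a, b in rdr
            for x, y in product(a.split(), b.split()))

print(c.most_common(1))  # [(('red', 'cat'), 2)] — that pair occurs in both rows
```

Each row contributes the full cross-product of its two word lists, and Counter tallies identical pairs across rows.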
As for processing a huge file, you can try a map-reduce-style approach: read the CSV line by line and write all combinations to an intermediate file:
with open("data.csv") as r, open("data1.csv", "w") as w:
    rdr = csv.reader(r, quotechar='"', delimiter='|')
    for _, a, b in rdr:
        for x, y in product(a.split(), b.split()):
            w.write("{},{}\n".format(x, y))
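For a single hypothetical row like `1|red blue|cat dog`, the intermediate file gets one combination per line; this sketch uses StringIO stand-ins for both files to show the output:

```python
import csv
from io import StringIO
from itertools import product

# Hypothetical input row standing in for one line of data.csv
src = StringIO("1|red blue|cat dog\n")
out = StringIO()  # stands in for the intermediate file data1.csv

for _, a, b in csv.reader(src, quotechar='"', delimiter='|'):
    for x, y in product(a.split(), b.split()):
        out.write("{},{}\n".format(x, y))

print(out.getvalue())  # red,cat / red,dog / blue,cat / blue,dog
```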
The next step is to read the intermediate file and build the counter:
c = Counter()
with open("data1.csv") as r:
    for l in r:
        c[l.rstrip('\n')] += 1
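If even the intermediate file is too big to count comfortably in one pass, note that Counter objects can be added together, so you can count fixed-size chunks and merge the partial results — a poor man's reduce step. A minimal sketch (the chunk size and in-memory lines are illustrative):

```python
from collections import Counter
from itertools import islice

def count_in_chunks(lines, chunk_size=2):
    """Count lines a chunk at a time, merging partial Counters."""
    total = Counter()
    it = iter(lines)
    while True:
        chunk = list(islice(it, chunk_size))
        if not chunk:
            break
        total += Counter(chunk)  # merge this chunk's partial count
    return total

# Illustrative stand-in for the intermediate file's lines
lines = ["red,cat", "red,cat", "blue,dog"]
print(count_in_chunks(lines))
```

The same merge works across partial counts produced by separate processes, which is essentially what the frameworks below automate.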
Update: I've started looking into whether there is a map-reduce framework for Python. The first Google hit is the Disco map-reduce framework; its tutorial shows how to create and run a Disco job that counts words, which could be useful for you (I'll give it a try myself :) ). Another option is https://github.com/michaelfairley/mincemeatpy.