If I understand you correctly, you can do this using the git version of PyMC (PyMC3) and the glm submodule. For example:
import numpy as np
import pymc as pm
import matplotlib.pyplot as plt
from pymc import glm
## Make some noisy linear data
x = np.arange(0, 50)
y = 2 * x + np.random.uniform(low=0.0, high=40.0, size=50)
## plt.scatter(x, y)
data = dict(x=x, y=y)
with pm.Model() as model:
    # Specify the GLM and pass in the data. The resulting linear model, its
    # likelihood and all its parameters are automatically added to our model.
    pm.glm.glm('y ~ x', data)
    step = pm.NUTS()  # Instantiate the MCMC sampling algorithm
    trace = pm.sample(2000, step)
## fig = pm.traceplot(trace, lines={'alpha': 1, 'beta': 2, 'sigma': .5})  ## traces
fig = plt.figure()
ax = fig.add_subplot(111)
plt.scatter(x, y, label='data')
glm.plot_posterior_predictive(trace, samples=50, eval=x,
                              label='posterior predictive regression lines')
This should give you something like this:
You should find these blog posts interesting: 1 and 2, from which I took these ideas.
Edit: To get the y values for each x, try the following, which I got from digging into the glm source.
lm = lambda x, sample: sample['Intercept'] + sample['x'] * x  ## the fitted linear model
samples = 50  ## choose the same number as in the plot call
trace_det = np.empty([samples, len(x)])  ## initialise
for i, rand_loc in enumerate(np.random.randint(0, len(trace), samples)):
    rand_sample = trace[rand_loc]
    trace_det[i] = lm(x, rand_sample)
y_pred = trace_det.T  ## one row per x value (renamed so it doesn't overwrite the data y)
y_pred[0]  ## the predicted y values at x[0], one per sampled regression line
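Once you have the matrix of predictive draws, you can also summarise it instead of (or as well as) looking at individual x values, e.g. the posterior mean line and a 95% interval at each x. A minimal sketch with plain NumPy; I simulate a stand-in for `trace_det` here so it runs on its own, but you would use the array built in the loop above:

```python
import numpy as np

# Stand-in for trace_det: 50 posterior draws of the line evaluated at 50 x values
rng = np.random.default_rng(0)
x = np.arange(50)
trace_det = 2 * x + rng.normal(0, 5, size=(50, 50))  # shape (samples, len(x))

y_mean = trace_det.mean(axis=0)  # posterior mean prediction at each x
y_lo, y_hi = np.percentile(trace_det, [2.5, 97.5], axis=0)  # 95% interval per x

print(y_mean.shape)  # one summary value per x
```

You could then pass `y_mean`, `y_lo` and `y_hi` to `plt.plot`/`plt.fill_between` to draw a mean line with an uncertainty band rather than 50 individual lines.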
Apologies if it's not the most elegant - hopefully you can follow the logic.