Your question scope is rather large.
About whether Linux containers and Docker are usable for what you have in mind: yes, definitely. Containers are lightweight and cheap to deploy (far cheaper than VMs), and using containers for continuous deployment / parallel testing works very well.
Now, here is how Docker works: you create a Dockerfile that describes what services your application requires inside the container (MySQL, PHP, whatever), what specific tuning / setup the container OS needs, and where to put your application code. You then build an "image" from that Dockerfile and finally start a new container from that image; that container is entirely standalone, providing a "clean room" context for your application to execute in.
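As a rough sketch, a minimal Dockerfile for a simple PHP application might look like this (the base image, package names, and paths are just examples to adapt to your stack):

```dockerfile
# Hypothetical example: a bare-bones PHP application image
FROM ubuntu:14.04

# Install the services the application requires
RUN apt-get update && apt-get install -y php5

# Put the application code inside the image
ADD . /var/www

# Declare the port the container will serve on
EXPOSE 80

# Serve the code with PHP's built-in web server
CMD ["php", "-S", "0.0.0.0:80", "-t", "/var/www"]
```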
The container has its own IP address and exposes the services you choose on it. These services are then NATed onto the host (you can even choose which host port at start time). It is then rather simple to reverse proxy them with nginx so as to serve these containers under various domain names / URLs.
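For instance (the image name and port numbers below are placeholders), you can pick the host port for the NAT mapping when you start the container, and inspect it afterwards:

```shell
# Start a container, NATing container port 80 to host port 8081
docker run -d -p 8081:80 me/myproject:mybranch

# Ask docker which host port container port 80 was mapped to
docker port <container_id> 80
```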
You can learn about Dockerfiles (and Docker in general) here: http://docs.docker.io/en/latest/
If I were indeed to test branches that way, I would:
- write a Dockerfile describing my application stack, and commit it to every branch of my project
- use a CI tool (Jenkins, Strider, whatever) hooked into GitHub so that on every pushed commit it builds a new image (docker build -rm -t me/myproject:branchname .), stops the container previously running for that branch, and starts a new container from the freshly built image (assuming the build succeeded)
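Sketched as a shell script the CI job could run on each push (the branch name, image name, and host port are placeholders; in practice you would derive them from the CI environment):

```shell
#!/bin/sh
# Hypothetical CI step: rebuild and redeploy the container for one branch
BRANCH=mybranch
IMAGE=me/myproject:$BRANCH

# Build a fresh image from the branch's Dockerfile
docker build -rm -t "$IMAGE" . || exit 1

# Stop and remove the container previously running for this branch, if any
docker stop "$BRANCH" 2>/dev/null
docker rm "$BRANCH" 2>/dev/null

# Start a new container, NATing its port 80 to a branch-specific host port
docker run -d --name "$BRANCH" -p 8081:80 "$IMAGE"
```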
- set up my host nginx as a reverse proxy, mapping, say, http://example/branch to http://localhost:NATTED_PORT/
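As a sketch, the nginx configuration for one branch could look like this (the server name, path, and port are placeholders standing in for whatever NAT mapping you chose at container start time):

```
server {
    listen 80;
    server_name example;

    # Proxy requests for this branch to its container's NATed port;
    # the trailing slash on proxy_pass strips the /branch/ prefix
    location /branch/ {
        proxy_pass http://localhost:8081/;
        proxy_set_header Host $host;
    }
}
```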
Note that Hipache (https://github.com/dotcloud/hipache) might be an alternative to fiddling with nginx as a proxy, though I have no first-hand experience with it.
This is a rough description of the steps involved, and you probably have some learning ahead of you, but I hope it puts you on track.