Question

I know this is a bit of a long question, but any help would be appreciated.

The short version: I want a set of containers, spread across multiple hosts, that communicate with each other and are accessible over SSH. I know there are tools for this, but I haven't been able to make it work.

The long version is:

There is a piece of software that has multiple components, and these components can be installed on any number of machines. The software has a client side and a server side, and the components communicate with each other via UDP ports. The server side runs on CentOS, the client side on Microsoft Windows.

I want to create a testing environment that consists of 4 containers, with the components spread across these containers and a client-side machine. The Docker host machine is Ubuntu; the containers are CentOS. If I install all the components in one container, everything works; if they are spread across more than one container, it doesn't, even though the logs claim it is working.

I read that you need to link the containers or use an orchestrator like Maestro to do this, but I haven't managed to get it working so far.
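For reference, on a single host the linking approach looks roughly like this (the container names, image, and sshd command below are illustrative, not from my actual setup):

```shell
# Start the "server" container in the background with sshd running.
docker run -d --name appserver centos /usr/sbin/sshd -D

# Start a second container linked to it; inside this container,
# the alias "appserver" resolves to the first container's IP
# via an entry Docker writes into /etc/hosts.
docker run -i -t --link appserver:appserver centos /bin/bash
```

Note that links only work between containers on the same host, which is why multi-host setups need an orchestrator or an overlay network.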

What I want is to be able to start a set of containers that communicate with each other, on one or multiple hosts. I want to be able to access these containers with SSH, so the SSH service should start automatically.

It would also be great to use DDNS for the containers, because the names will be reused again and again while the IP addresses can change, but that is just the cherry on top.

Some specifications:

The host is a fresh install of Ubuntu 12.04.4 LTS x86_64. Docker is the latest version (lxc-docker 0.10.0), using the native driver. The containers are a plain CentOS image pulled from the Docker index, on which I installed some basic packages: openssh-server, mc, java-jre. I changed the Docker network to one that can be reached from the internal network. The iptables rules were cleared because I didn't need them; I also tried with them in place, but with no luck. The changes to the /etc/default/docker file:

    DOCKER_OPTS="--iptables=false"

or with the exposed API:

    DOCKER_OPTS="-H tcp://0.0.0.0:4243 --iptables=false"

The ports that the software uses are between 6000 and 9000, but I tried opening all the ports. An example of the run command:

    docker run -h <hostname> -i -t --privileged --expose 1-65535/udp <image> /bin/bash

I also tried it with the exposed API:

    docker -H :4243 run -h <hostname> -i -t --privileged --expose 1-65535/udp <image> /bin/bash

I'm not giving up but I would appreciate some help.

Solution

You might want to take a look at the in-development Docker Swarm project. It will allow you to treat your set of test machines as a cluster to which you can deploy containers.
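Swarm is still young, but the intended workflow is roughly the following sketch (the addresses, ports, and image name are placeholders):

```shell
# Create a cluster token on any machine with Docker installed.
docker run --rm swarm create            # prints a <cluster_token>

# On each test machine, join the cluster.
docker run -d swarm join --addr=<node_ip>:2375 token://<cluster_token>

# Start a manager, then point the regular docker client at it;
# containers are then scheduled onto the joined nodes.
docker run -d -p 4000:2375 swarm manage token://<cluster_token>
docker -H tcp://<manager_ip>:4000 run -i -t <image> /bin/bash
```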

Other tips

You could simply use fig for orchestration and link the containers together, instead of doing all that DDNS and port-forwarding work. The fig.yml syntax is pretty straightforward.
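A minimal fig.yml for two linked CentOS containers might look like this (the service names and the sshd command are illustrative):

```yaml
# Two containers; "client" can reach "server" by name via a Docker link.
server:
  image: centos
  command: /usr/sbin/sshd -D
client:
  image: centos
  command: /usr/sbin/sshd -D
  links:
    - server
```

Running `fig up` then starts both containers together, and `fig up -d` starts them in the background.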

You can use weave for the networking part. These tutorials may help:

https://github.com/weaveworks/weave

http://xmodulo.com/networking-between-docker-containers.html
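The weave workflow described in those tutorials is roughly this (the IP range and image are placeholders):

```shell
# On the first host: start the weave router.
weave launch

# On each additional host: start the router and peer with the first host.
weave launch <first_host_ip>

# Start a container attached to the weave network with an IP from a chosen
# subnet; containers on different hosts in the same subnet can then talk
# to each other directly, UDP included.
C=$(weave run 10.2.1.1/24 -t -i centos /bin/bash)
```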

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow