The udev rules are one of the ways to do it; however, there is a shortage of information there, i.e. there is no reliable and simple way to know which container a veth is associated with. I am not sure whether in your case it's sufficient to simply set a bandwidth limit on the host's end of the veth pair, which might be the case, but there is also the other end of it in the container's namespace, which you can reach using either the ip netns or nsenter commands. So if you do need to operate on both ends of the veth pair, it's best to have the container ID, so you can look up the PID and the network namespace associated with it. One way to do that is by running docker events and parsing its output; an even better way is to use Docker's domain socket API. For a use case I had earlier, it was sufficient to shell out to docker events, and here is a script I wrote. What it does is add a route inside a container and turn off checksum offload with ethtool.
#!/bin/sh -x
# Usage: script.sh <container_interface> <route_to_add>
[ $# -ne 2 ] && exit 1
container_interface="$1"
add_route="$2"

docker events | while read event
do
    # Only react to container "start" events
    echo "$event" | grep -q -v '\ start$' && continue
    # Extract the container ID from the event line
    container_id=`echo $event | sed 's/.*Z\ \(.*\):\ .*/\1/'`
    # Command templates; docker inspect fills in the container's PID and IP
    nsenter="nsenter -n -t {{ .State.Pid }} --"
    ip_route_add="ip route add ${add_route} dev ${container_interface} proto kernel scope link src {{ .NetworkSettings.IPAddress }}"
    ethtool_tx_off="ethtool -K ${container_interface} tx off >/dev/null"
    eval `docker inspect --format="${nsenter} ${ip_route_add}; ${nsenter} ${ethtool_tx_off};" ${container_id}`
done
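As an aside, if you only need to find which host-side veth belongs to a given container, one common trick is to read the peer ifindex from inside the container's namespace and match it against ip link on the host. Here is a sketch of that approach; the container name mycontainer and the interface name eth0 are assumptions for illustration, and it requires root for nsenter:

```shell
#!/bin/sh
# Sketch: map a container to its host-side veth via the peer ifindex.

# Given a peer ifindex as $1 and `ip link` output on stdin,
# print the name of the interface with that index.
host_iface_by_index() {
    awk -F': ' -v idx="$1" '$1 == idx { print $2 }'
}

find_host_veth() {
    pid=$(docker inspect --format '{{.State.Pid}}' "$1")
    # /sys/class/net/eth0/iflink inside the container's netns holds the
    # ifindex of the *other* (host-side) end of the veth pair.
    peer=$(nsenter -n -t "$pid" -- cat /sys/class/net/eth0/iflink)
    ip link | host_iface_by_index "$peer"
}

# Example (requires a running container): find_host_veth mycontainer
```

With the host-side interface name in hand, you can then apply your tc rules to it directly.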
In addition to docker events, there is another way of catching networking events: the ip monitor command. However, that way you still don't have container IDs, similar to the udev method.
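To illustrate, a minimal sketch of watching for veth events with ip monitor might look like the following; the handle_link_event function name is my own, and note that correlating an interface name back to a container is still left entirely up to you:

```shell
#!/bin/sh
# Sketch: filter `ip monitor link` output for veth-related events.
# There is no container ID in this output; only interface names.
handle_link_event() {
    case "$1" in
        *veth*) echo "veth event: $1" ;;
        *) : ;;
    esac
}

# Run the actual monitor only when asked, so the function can be
# sourced and reused without blocking:
if [ "${1:-}" = "--watch" ]; then
    ip monitor link | while read -r line; do
        handle_link_event "$line"
    done
fi
```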