AWS + DHCP Docker Containers

The default way of creating Docker containers is to use a bridge with a host-only subnet provided by the docker0 or lxcbr0 bridges. However, this makes it very difficult, if not impossible, for containers on different hosts to communicate. This tutorial will show you how to deploy containers onto the same subnet as the host, using DHCP or static IPs, so that you can deploy containers to any node and still have them communicate with each other.

We will be deploying onto AWS EC2 instances, which requires us to NAT the bridge so that our containers can gain internet access. If you are not deploying to the AWS network, you can skip all of the steps that involve iptables.
    Create a network interface by going to EC2 -> Network Interfaces -> Create Network Interface.
    Assign it to the subnet you wish to deploy on, and choose a single private IP. You will also need to choose a security group.
    When choosing a private IP, make sure to choose one that has a few spare IPs "around" it. We will add these later so our DHCP server has a single IP "block" to dish out.
    Select the new network interface and click Actions -> Manage Private IP Addresses. Then add more IPs sequentially around the IP you chose in the previous step.
    Create an elastic IP and associate it with the lowest private IP on the newly created network interface (these console steps can also be scripted with the AWS CLI, as sketched below).
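    A rough AWS CLI sketch of the network interface setup above is shown here; the subnet ID, security group ID, interface ID, allocation ID and IP addresses are placeholders you would replace with your own values.
    # create the network interface with the lowest IP of the block
    aws ec2 create-network-interface --subnet-id subnet-xxxxxxxx \
        --groups sg-xxxxxxxx --private-ip-address 172.31.5.10
    
    # add the rest of the sequential IP block to the interface
    aws ec2 assign-private-ip-addresses --network-interface-id eni-xxxxxxxx \
        --private-ip-addresses 172.31.5.11 172.31.5.12 172.31.5.13 172.31.5.14
    
    # allocate an elastic IP and associate it with the lowest private IP
    aws ec2 allocate-address --domain vpc
    aws ec2 associate-address --allocation-id eipalloc-xxxxxxxx \
        --network-interface-id eni-xxxxxxxx --private-ip-address 172.31.5.10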
    Create an EC2 instance (Ubuntu in this tutorial) and choose the subnet you chose earlier; you will then be able to select the network interface you just created. Use it in place of the default network interface rather than adding it as a second one. You should now see a message stating that you cannot be allocated a public IP; this is because a public IP (your elastic IP) has already been allocated to that network interface.
    Log into your new EC2 instance and install Docker, then install LXC.
    Update /etc/default/docker so that there is the line
    DOCKER_OPTS="-e lxc --dns 8.8.8.8"
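    After editing /etc/default/docker, restart the Docker service so the LXC execution driver takes effect. The check below is just a quick sanity-check sketch; older Docker releases report an "Execution Driver" line in docker info.
    sudo service docker restart
    # confirm Docker is now using the lxc execution driver
    sudo docker info | grep -i "execution driver"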
    Replace the contents of
    /etc/network/interfaces.d/eth0.cfg
    with:
    # The primary network interface
    auto eth0
    iface eth0 inet manual
    
    auto br0
    iface br0 inet dhcp
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0
    
    Run the following commands:
    # bring up the bridge we just created
    sudo ifup br0
    
    # set up routing (aws specific)
    sudo iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
    sudo iptables --append FORWARD --in-interface br0 -j ACCEPT
    sudo iptables -t nat -A POSTROUTING -d 172.31.0.0/16 -j ACCEPT
    sudo iptables -t nat -A POSTROUTING -d 0.0.0.0/0 -j SNAT --to-source $HOST_PRIVATE_IP
    
    Using masquerade instead of SNAT will not work in this case!
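    Note that the 172.31.0.0/16 destination in the rules above is the default VPC CIDR; adjust it if your VPC uses a different range. As a quick, optional sanity check, you can list the rules you just added:
    # confirm the forwarding and NAT rules are in place
    sudo iptables -L FORWARD -n -v
    sudo iptables -t nat -L POSTROUTING -n -v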
    If you want to deploy your containers with dynamic IPs, then install and configure dnsmasq to act as the DHCP server for the node:
    sudo apt-get install dnsmasq -y
    sudo mv /etc/dnsmasq.conf /etc/dnsmasq.conf.bak
    sudo vim /etc/dnsmasq.conf
    
    Replace the contents of
    /etc/dnsmasq.conf
    with:
    interface=br0
    dhcp-range=$STARTING_PRIVATE_IP,$ENDING_PRIVATE_IP,12h
    dhcp-option=3,$PRIVATE_IP_OF_HOST
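    As a concrete (hypothetical) example, if the host's private IP is 172.31.5.10 and you added 172.31.5.11 through 172.31.5.20 as the spare private IPs on the network interface, the file would look like:
    interface=br0
    dhcp-range=172.31.5.11,172.31.5.20,12h
    dhcp-option=3,172.31.5.10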
    If you don't want to use DHCP, then you simply need to start your containers as shown below (but you will need to keep track of the IP of every container):
    docker run \
    --net="none" \
    --lxc-conf="lxc.network.type = veth" \
    --lxc-conf="lxc.network.ipv4 = $IP_OF_CONTAINER/$CIDR" \
    --lxc-conf="lxc.network.ipv4.gateway = $HOST_PRIVATE_IP" \
    --lxc-conf="lxc.network.link = br0" \
    --lxc-conf="lxc.network.name = eth123" \
    --lxc-conf="lxc.network.flags = up" \
    -d $IMAGE_ID
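    
    For example, with hypothetical values (assuming the default VPC's /20 subnet, a host private IP of 172.31.5.10, and 172.31.5.12 given to the container):
    docker run \
    --net="none" \
    --lxc-conf="lxc.network.type = veth" \
    --lxc-conf="lxc.network.ipv4 = 172.31.5.12/20" \
    --lxc-conf="lxc.network.ipv4.gateway = 172.31.5.10" \
    --lxc-conf="lxc.network.link = br0" \
    --lxc-conf="lxc.network.name = eth123" \
    --lxc-conf="lxc.network.flags = up" \
    -d ubuntu:14.04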
    
    Restart dnsmasq for the changes to take effect
    sudo service dnsmasq restart
    If you run out of IPs because they are all currently leased to containers that didn't shut down gracefully, you can just empty the leases file, which is at:
    /var/lib/misc/dnsmasq.leases
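    One way to do this (a sketch; truncating the file and restarting dnsmasq is just one option):
    # clear all existing leases and restart dnsmasq
    sudo truncate -s 0 /var/lib/misc/dnsmasq.leases
    sudo service dnsmasq restart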
    If you are using DHCP for your Ubuntu containers, you will need to add the following line to your Dockerfile [docker bug report]:
    RUN mv /sbin/dhclient /usr/sbin/dhclient
    Now start your container as shown below:
    docker run \
    -d \
    --privileged \
    --net="none" \
    --lxc-conf="lxc.network.type = veth" \
    --lxc-conf="lxc.network.link = br0" \
    --lxc-conf="lxc.network.flags = up" \
    $IMAGE
    
    Your container will need to automatically run the command
    sudo dhclient eth0
    to grab an IP from the DHCP server. I have this in a startup script that is called from the CMD instruction in the container's Dockerfile (a sketch is given below), and use
    cron -f
    as the last line in the startup script to "tie up" the container's foreground process.
    Your container should now have one of the private IPs that you allocated to the network interface earlier. It should also have NAT'd internet access.
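    A minimal sketch of such a Dockerfile and startup script follows; the base image, package choices and file paths here are assumptions, not a prescribed layout.
    # Dockerfile
    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y isc-dhcp-client cron
    # work around the dhclient path issue mentioned above [docker bug report]
    RUN mv /sbin/dhclient /usr/sbin/dhclient
    ADD startup.sh /startup.sh
    RUN chmod +x /startup.sh
    CMD ["/startup.sh"]
    
    # startup.sh
    #!/bin/bash
    # grab an IP for eth0 from the dnsmasq DHCP server on the host
    dhclient eth0
    # keep a foreground process running so the container does not exit
    cron -f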

Network Bridge Cheatsheet

Traditional static IP Bridge

The setup below gives the computer the static IP address 192.168.1.x and bridges it with other machines using addresses in the 192.168.1.2-254 range.

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
        address 192.168.1.x
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0

Bridged DHCP Network

The configuration below will set up a bridge for eth0, but use DHCP addressing.

# The primary network interface
auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
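
Both configurations require the bridge-utils package. Once the bridge is up (sudo ifup br0, or after a reboot), it can be checked with the sketch below:

sudo apt-get install bridge-utils -y

# verify that br0 exists and has eth0 attached
brctl show

# check the address assigned to the bridge
ip addr show br0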

CentOS 6.5 - Setup SSHFS

# Enable the EPEL REPO if you haven't already
sudo rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

# Install the necessary packages
sudo yum install fuse sshfs -y
sudo modprobe fuse

# add modprobe fuse to the startup with rc.local to make sure the FUSE module is loaded upon a reboot
NEW_CONTENT="modprobe fuse"
FILEPATH="/etc/rc.local"
echo "$NEW_CONTENT" | sudo tee -a "$FILEPATH"
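
With FUSE and sshfs installed, mounting a remote directory looks like the sketch below; the hostname and paths are placeholders.

# mount a remote directory over SSH
mkdir -p ~/remote-mount
sshfs user@remote-host:/path/to/share ~/remote-mount

# unmount it when finished
fusermount -u ~/remote-mount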
