How to use RTI Connext DDS to Communicate Across Docker Containers Using the Bridge Driver
Note: This article deals with Docker containers running on Linux systems; it is not applicable when running on Windows or Mac OS X systems.
By default, when the Docker engine starts, it creates a network that uses the bridge driver; this network is named bridge.
When you use the bridge driver, Docker creates a bridge network between the host and the Docker containers, similar to the bridged networking mode in, for example, Oracle VM VirtualBox. In this case, each container gets its own network stack and IP address, so there is network isolation between the Docker containers and the host.
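You can confirm that the default bridge network exists, and inspect its subnet and gateway, with standard Docker commands (the exact output depends on your installation):
$ docker network ls
$ docker network inspect bridge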
To run a container in this mode, you have to run the following command:
$ docker run --network="<Your network>" -t <Docker image> <Program>
Possible scenarios
Communication within a host machine
In this scenario, communication within the same host works out of the box: RTI Connext DDS applications running on the same host (whether they are in Docker containers or not) should communicate with each other and with containers connected to the same bridge network.
This works without extra configuration because, even though the bridge network isolates the containers from everything outside the host, each container and the host have their own IP address on the bridge network and can reach each other. This is an advantage over using the host driver; if you want to learn more about communicating with containers using the host driver, you can read this other article.
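For example, you can check which IP address Docker assigned to a running container on the default bridge network with a standard Docker command (substitute your own container name or ID):
$ docker inspect -f '{{ .NetworkSettings.IPAddress }}' <container name>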
For example, we could run a publisher in one container and a subscriber in another container, and they will communicate over the bridge network. For the sake of simplicity, we omit the steps required to build the respective images. We would then run the containers with these commands:
$ docker run --network="bridge" -t publisher-image hello_publisher
$ docker run --network="bridge" -t subscriber-image hello_subscriber
With this scenario, we would end up with a system like this, where the dark gray boxes represent containers:
Communication across machines (Docker container to container, or Docker container to a remote host machine)
As described before, the bridge network driver is designed to allow communication between the host and the containers: all the containers on a given host can communicate with other processes (inside or outside of containers) running on that same host. Conversely, the containers have no direct network access to anything outside the host, which keeps them isolated in the Docker bridge network.
Therefore, in order for a Docker container on the bridge network to communicate with a different host, we need to add an extra hop to the route. A good solution for this is RTI Routing Service.
In the diagram above, you can see that Host 1 is running a Publisher in a container as well as a Routing Service instance, while Host 2 is running a Subscriber (not in a container).
The Publisher can communicate within Host 1, so there is communication between the Routing Service instance and the Publisher. There is also communication between the hosts, so the Routing Service instance can communicate with the Subscriber on Host 2. To set this up, we need to create our Routing Service configuration, which we will call bridge_docker:
<routing_service name="bridge_docker">
    <domain_route name="TwoWayDomainRoute">
        <participant name="1">
            <!-- Here we can leave domain 0 as we can modify it from the
                 command line using the -domainIdBase argument -->
            <domain_id>0</domain_id>
        </participant>
        <participant name="2">
            <!-- Here we can leave domain 0 as we can modify it from the
                 command line using the -domainIdBase argument -->
            <domain_id>0</domain_id>
        </participant>
        <session name="Session1">
            <auto_topic_route name="AllForward">
                <publish_with_original_info>true</publish_with_original_info>
                <input participant="1">
                    <allow_topic_name_filter>*</allow_topic_name_filter>
                    <allow_registered_type_name_filter>*</allow_registered_type_name_filter>
                    <deny_topic_name_filter>rti/*</deny_topic_name_filter>
                    <creation_mode>ON_DOMAIN_AND_ROUTE_MATCH</creation_mode>
                </input>
                <output participant="2">
                    <allow_topic_name_filter>*</allow_topic_name_filter>
                    <allow_registered_type_name_filter>*</allow_registered_type_name_filter>
                    <deny_topic_name_filter>rti/*</deny_topic_name_filter>
                    <creation_mode>ON_DOMAIN_AND_ROUTE_MATCH</creation_mode>
                </output>
            </auto_topic_route>
        </session>
        <session name="Session2">
            <auto_topic_route name="AllBackward">
                <publish_with_original_info>true</publish_with_original_info>
                <input participant="2">
                    <allow_topic_name_filter>*</allow_topic_name_filter>
                    <allow_registered_type_name_filter>*</allow_registered_type_name_filter>
                    <deny_topic_name_filter>rti/*</deny_topic_name_filter>
                    <creation_mode>ON_DOMAIN_AND_ROUTE_MATCH</creation_mode>
                    <datareader_qos>
                        <reliability>
                            <kind>RELIABLE_RELIABILITY_QOS</kind>
                        </reliability>
                    </datareader_qos>
                </input>
                <output participant="1">
                    <allow_topic_name_filter>*</allow_topic_name_filter>
                    <allow_registered_type_name_filter>*</allow_registered_type_name_filter>
                    <deny_topic_name_filter>rti/*</deny_topic_name_filter>
                    <creation_mode>ON_DOMAIN_AND_ROUTE_MATCH</creation_mode>
                    <datawriter_qos>
                        <reliability>
                            <kind>RELIABLE_RELIABILITY_QOS</kind>
                        </reliability>
                    </datawriter_qos>
                </output>
            </auto_topic_route>
        </session>
    </domain_route>
</routing_service>
Then to run Routing Service with this configuration file, use this command:
$ <NDDSHOME>/bin/rtiroutingservice -cfgFile <config_file> -cfgName bridge_docker -domainIdBase <domain_id>
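For example, assuming the configuration above is saved as bridge_docker.xml (the file name is arbitrary) and the applications run on domain 0:
$ <NDDSHOME>/bin/rtiroutingservice -cfgFile bridge_docker.xml -cfgName bridge_docker -domainIdBase 0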
Some considerations here:
- Please note that this configuration works because the Routing Service instance can send and receive discovery traffic on both the bridge network and the external network. Once discovery has taken place, the Routing Service instance forwards the samples from one domain to the other.
- Using an intermediate Routing Service also lets you keep the network isolation provided by the bridge driver. Modifying iptables is not a good alternative because it could break rules set by Docker.
- This configuration keeps the system scalable, since we can route any number of "containerized" Publishers/Subscribers from Host 1 to Host 2. We can also run any number of Publishers/Subscribers on Host 2.
- If we also run containerized applications on Host 2, we need another Routing Service instance on that host, for the reasons mentioned above. Those two Routing Service instances would then communicate with each other, as sketched below.
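As a sketch of that symmetric setup, Host 2 would run its containerized applications plus its own Routing Service instance with the same configuration (the image and program names are hypothetical, as before):
$ docker run --network="bridge" -t subscriber-image hello_subscriber
$ <NDDSHOME>/bin/rtiroutingservice -cfgFile bridge_docker.xml -cfgName bridge_docker -domainIdBase <domain_id>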