Why do we need Container Networking? #
- Container networking: Essential for enabling communication between containers, the host, and external systems.
- Features: Allows containers to:
- Share data and APIs with other services
- Be isolated or exposed securely
- Be part of multi-container apps (e.g., microservices)
- Default bridge network: When you install Docker (Docker Desktop, for example), a default bridge network (also called `bridge`) is created automatically
- Default for New Containers: Newly-started containers connect to it unless otherwise specified
- Default outgoing connections allowed: Containers can make outgoing connections to external systems
- Incoming connections from host denied by default: You cannot reach services inside a container from the host without publishing the container's ports to the host
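A quick illustration of both defaults (a sketch, assuming the alpine image):

# Outgoing connections work out of the box:
docker run --rm alpine ping -c 1 8.8.8.8
# But a service running inside a container is unreachable from
# the host until its port is published (see port mapping below)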
What are the Networking Options in Docker? #
Bridge:
- Default Network: When you start Docker, a default bridge network (also called `bridge`) is created automatically, and newly-started containers connect to it unless otherwise specified.
- Create Custom Networks: You can also create your own custom bridge networks and connect containers to them.
- Docker embedded DNS: Containers on the same bridge network can communicate using container names as host names (simplifies service discovery without manual IP management)
- Provides Network Isolation: Offers network isolation by default: containers on one bridge network cannot access containers on another bridge network unless explicitly connected
- Port mapping Needed: Port mapping (e.g., -p 8080:80) is required to expose container services to the host
$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
a1b2c3d4e5f6   bridge    bridge    local
b2c3d4e5f6g7   host      host      local
c3d4e5f6g7h8   none      null      local
docker network create --driver bridge my_bridge_network
# Assigning two containers to the bridge network
# These containers can talk with each other using their names
# From container2, you can access a web app on port 8080
# using (http://container1:8080)
docker run -dit --name container1 \
--network my_bridge_network alpine sh
docker run -dit --name container2 \
--network my_bridge_network alpine sh
# HOST_PORT: The port number on your host machine
# where you want to receive traffic
# CONTAINER_PORT: The port number within the container
# that's listening for connections
docker run -d -p <HOST_PORT>:<CONTAINER_PORT> <IMAGE_NAME>
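A concrete instance of the template above (assuming the nginx image, which listens on port 80 inside the container):

docker run -d -p 8080:80 nginx
# The web server is now reachable from the host:
curl http://localhost:8080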
Host:
- Shares Host Network Stack: In host networking mode, the container shares the host machine’s network stack directly (no virtual network bridge is created) and the container uses the host’s IP address
- No Network Isolation, Lower Overhead: The container is not isolated from the host's network; since there is no virtual bridge or NAT involved, host networking offers lower latency and higher throughput, making it suitable for performance-sensitive applications
- Only one Host Network allowed: `host` is a special, built-in network that directly maps the container's network stack to the host, and only one such network can exist.
- Use case: Best for apps needing direct access to the host's network for maximum performance
- Constraint: Containers using host networking cannot bind to the same ports as other host services or containers using the same mode, leading to potential port conflicts
docker run -dit --name container1 --network host alpine sh
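With host networking, a server inside the container binds directly to the host's ports. A sketch, assuming a Linux host and the nginx image (no -p flag is needed; port mappings are ignored in host mode):

docker run -d --name web --network host nginx
# nginx now listens on the host's own port 80:
curl http://localhost:80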
None:
- No Network Connectivity: Containers started with the `none` network driver have no network connectivity, not even to other containers or the host.
- Complete Isolation: Provides complete network isolation for containers
- Most isolated network mode: The most isolated network mode in Docker, typically used in advanced scenarios such as security testing
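A quick check (a sketch, assuming the alpine image): only the loopback interface exists, and nothing outside the container is reachable.

docker run --rm --network none alpine ip addr
# shows only the "lo" interface
docker run --rm --network none alpine ping -c 1 8.8.8.8
# fails: network is unreachable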
Overlay:
- Multiple Docker Hosts (Docker Swarm): Overlay networks enable containers running on different Docker hosts to communicate securely, as if they were on the same network
- Built-in DNS: Provides built-in DNS for service name resolution, making cross-host container communication seamless
# Note: only works if you're using Docker Swarm
docker network create --driver overlay my_overlay
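A minimal end-to-end sketch (the service name "web" is illustrative): enable Swarm mode first, then create the overlay network and attach a service to it.

docker swarm init
docker network create --driver overlay my_overlay
docker service create --name web --network my_overlay nginx
# Other services on my_overlay can now reach this service by the
# name "web", even from containers on a different Swarm node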
Macvlan:
- Unique IP Address For Each Container: Macvlan gives each Docker container its own IP address, so it behaves like a separate physical machine on your network, not just a process inside your host.
- Use case: You're containerizing a legacy application (e.g., an old monitoring system or database tool) that expects to:
- Have its own static IP address,
- Be directly accessible on the local network and
- Communicate with other physical machines or legacy systems that don’t support Docker-aware networking.
docker network create --driver macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  macvlan_net
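To give a container a fixed address on that network (a sketch; 192.168.1.50 is an assumed free IP on the LAN, and nginx stands in for the legacy app):

docker run -d --name legacy_app \
  --network macvlan_net \
  --ip 192.168.1.50 \
  nginx
# The container now appears on the physical network as 192.168.1.50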
Can you give examples of Network Management Commands in Docker? #
Create:
- Create a custom network
docker network create --driver bridge my_custom_network
docker network create --driver overlay my_overlay_net
List:
- List all Docker networks
$ docker network ls
NETWORK ID     NAME             DRIVER    SCOPE
d9b100f2d636   bridge           bridge    local
e2fb1c7bcd1d   host             host      local
7a6ef9c3c6c2   none             null      local
ae34f1b2ab01   my_overlay_net   overlay   swarm
f4a678e3cf79   my_bridge_net    bridge    local
Inspection:
- Returns detailed information about a Docker network in JSON format
$ docker network inspect <NETWORK_ID or NAME>
[
    {
        "Name": "my_bridge_net",
        "Id": "f4a678e3cf79492b7e3d12c5f",
        "Driver": "bridge",
        "Scope": "local",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Containers": {
            "bca1c0b45357bd5a...": {
                "Name": "container1",
                "IPv4Address": "172.18.0.2/16",
                "MacAddress": "02:42:ac:12:00:02"
            },
            "58f0e8eafadfbe29...": {
                "Name": "container2",
                "IPv4Address": "172.18.0.3/16",
                "MacAddress": "02:42:ac:12:00:03"
            }
        },
        "Options": {},
        "Labels": {}
    }
]
Remove:
- Removes a custom network (only if unused)
docker network rm <name>
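If the network still has containers attached, removal fails; disconnect them first, or use docker network prune to delete all unused networks (a sketch, reusing the names from the inspect output above):

docker network disconnect my_bridge_net container1
docker network disconnect my_bridge_net container2
docker network rm my_bridge_net
# Or remove every unused network in one go:
docker network prune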
How can containers be isolated using a custom Bridge Network? #
- Create a Custom Bridge Network:
docker network create isolated-network
- Run Containers in the Custom Network:
docker run -d --name container1 \
  --network isolated-network nginx
docker run -d --name container2 \
  --network isolated-network mysql
- Communication Between Containers: Containers connected to the same custom bridge network (isolated-network in this case) can communicate with each other using their container names
- But containers outside the `isolated-network` will not be able to talk to `container1` and `container2`
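A quick way to verify the isolation (a sketch): a container on the default bridge cannot even resolve container1's name.

docker run --rm alpine ping -c 1 container1
# => ping: bad address 'container1'
# The default bridge has no DNS entry for containers on
# isolated-network, and that network's subnet is unreachable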
Can a container be connected to multiple networks? If yes, how? #
- Yes, a Docker container can be connected to multiple networks.
- When you create a container, you assign it to one network using --network.
- You can then connect it to additional networks using `docker network connect`
docker run -dit --name myapp --network net1 alpine
docker network connect net2 myapp
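To confirm that myapp is now attached to both networks (a sketch using docker inspect's Go-template output):

docker inspect --format '{{json .NetworkSettings.Networks}}' myapp
# The JSON output contains entries for both net1 and net2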
What is DNS-based service discovery in Docker networks? #
- DNS-based service discovery: Ability for containers to resolve each other's names to IP addresses using an embedded DNS server provided by Docker.
- When you create a user-defined bridge or overlay network, Docker automatically sets up an internal DNS server.
- Every container attached to that network is registered by name.
- Other containers in the same network can communicate using that name, without needing IPs or environment variables.
docker network create my_network
docker run -dit --name web_container --network my_network nginx
docker run -dit --name app_container --network my_network alpine sh
# Inside app_container, ping web_container:
docker exec -it app_container sh
ping web_container
# This works because Docker’s internal DNS
# resolves web_container to its container IP.
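Under the hood, Docker's embedded DNS server listens inside each container on a user-defined network at 127.0.0.11. A quick way to see it:

docker exec app_container cat /etc/resolv.conf
# nameserver 127.0.0.11
docker exec app_container nslookup web_container
# resolves web_container to its IP on my_network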
What happens when two containers are in different networks? How can you enable communication between them? #
- When two containers are placed in different Docker networks, they are logically isolated from each other:
- They cannot ping, connect, or resolve each other's names.
- Each network has its own IP range and DNS namespace.
- This isolation improves security.
Option 1: Connect to Same Network
docker network create netA
docker network create netB
docker run -dit --name containerA --network netA alpine
docker run -dit --name containerB --network netB alpine
# Create a common network - netCommon
docker network create netCommon
docker network connect netCommon containerA
docker network connect netCommon containerB
# OR Connect containerB to netA
# docker network connect netA containerB
Option 2: Publish a Port on Host
docker run -d --name containerA \
--network netA \
-p 5000:5000 \
my-web-app
docker run -dit --name containerB --network netB alpine
docker exec -it containerB sh
# HOST_IP is a placeholder for the Docker host's IP address
curl http://HOST_IP:5000
Other Options
- Overlay Network on Multiple Hosts
- Macvlan Network
- ...