Tags: docker, networking, iptables

Docker networking, baffled and puzzled

Posted on 2020-04-03 23:21:04

I have a simple Python application which stores and searches its data in an Elasticsearch instance. The Python application runs in its own container, just as Elasticsearch does. Elasticsearch exposes its default ports 9200 and 9300; the Python application exposes port 5000. The network used for Docker is a user-defined bridge network. When I start both containers the application starts up nicely, both containers see each other by container name, and they communicate just fine.

But from the Docker host (Linux) it is not possible to connect to the exposed port 5000. A simple curl http://localhost:5000/ results in a time-out. The tips from the Docker documentation (https://docs.docker.com/network/bridge/) did not solve this.
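
For anyone reproducing this, the checks below are roughly what is being described (this is my sketch, not part of the original question; the container name pplbase_api_1 comes from the docker ps output further down):

# Confirm the published port mapping for the API container
docker port pplbase_api_1
# expected: 5000/tcp -> 0.0.0.0:5000

# Verbose curl with a timeout shows whether the TCP handshake ever completes
curl -v --max-time 5 http://localhost:5000/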

After a lot of struggling I tried something completely different: connecting to the Python application from outside the Docker host. I was baffled; from anywhere in the world I could do curl http://<fqdn>:5000/ and was served by the application. So the real problem is solved, because I'm able to serve the application to the outside world. (And yes, the application inside the container listens on 0.0.0.0, which is the solution to 90% of the network problems reported by others.)
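
To illustrate that 0.0.0.0 point: with the Flask CLI (which this setup uses, judging by the "flask run --host=0.0.0..." command in docker ps below), the difference is just the bind address. A minimal sketch, not the exact application:

# Binds only to the container's loopback: unreachable through the published port
flask run --host=127.0.0.1 --port=5000

# Binds to all interfaces: reachable through the published host port
flask run --host=0.0.0.0 --port=5000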

But that still leaves me puzzled: what causes this strange behavior? On my development machine (Windows 10, WSL, Docker Desktop, Linux containers) I am able to connect to the service on localhost:5000, 127.0.0.1:5000, etc. On my Linux (production) machine everything works except connecting from the Docker host to the containers.

I hope someone can shed a light on this; I'm trying to understand why this is happening.

Some relevant information

Docker host:

#  ifconfig -a
br-77127ce4b631: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.18.0.1  netmask 255.255.0.0  broadcast 172.18.255.255
[snip] 
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
[snip]
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 1xx.1xx.199.134  netmask 255.255.255.0  broadcast 1xx.1xx.199.255

# docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                                            NAMES
1e7f2f7a271b        pplbase_api           "flask run --host=0.…"   20 hours ago        Up 19 hours         0.0.0.0:5000->5000/tcp                           pplbase_api_1
fdfa10b1ce99        elasticsearch:7.5.1   "/usr/local/bin/dock…"   21 hours ago        Up 19 hours         0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   pplbase_elastic_1

# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
[snip]
77127ce4b631        pplbase_pplbase     bridge              local

# iptables -L -n
[snip]
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:5000
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER-USER  all  --  0.0.0.0/0            0.0.0.0/0
DOCKER-ISOLATION-STAGE-1  all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0

Chain DOCKER (2 references)
target     prot opt source               destination
ACCEPT     tcp  --  0.0.0.0/0            172.18.0.2           tcp dpt:9300
ACCEPT     tcp  --  0.0.0.0/0            172.18.0.2           tcp dpt:9200
ACCEPT     tcp  --  0.0.0.0/0            172.18.0.3           tcp dpt:5000

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination
DOCKER-ISOLATION-STAGE-2  all  --  0.0.0.0/0            0.0.0.0/0
DOCKER-ISOLATION-STAGE-2  all  --  0.0.0.0/0            0.0.0.0/0
RETURN     all  --  0.0.0.0/0            0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target     prot opt source               destination
DROP       all  --  0.0.0.0/0            0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0
RETURN     all  --  0.0.0.0/0            0.0.0.0/0

Chain DOCKER-USER (1 references)
target     prot opt source               destination
RETURN     all  --  0.0.0.0/0            0.0.0.0/0

Docker compose file:

version: '3'
services:
  api:
    build: .
    links:
      - elastic
    ports:
      - "5000:5000"
    networks:
      - pplbase
    environment:
      - ELASTIC_HOSTS=elastic localhost
      - FLASK_APP=app.py
      - FLASK_ENV=development
      - FLASK_DEBUG=0
    tty: true


  elastic:
    image: "elasticsearch:7.5.1"
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - pplbase
    environment:
      - discovery.type=single-node
    volumes:
      - ${PPLBASE_STORE}:/usr/share/elasticsearch/data

networks:
  pplbase:
    driver: bridge

After more digging the riddle is getting bigger and bigger. When using netcat I can establish a connection:

Connection to 127.0.0.1 5000 port [tcp/*] succeeded!
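
(The exact invocation isn't quoted above; the output matches what the OpenBSD netcat prints for a verbose zero-I/O check, so presumably something like the following was used:)

# Verbose, zero-I/O connection test against the published port
nc -vz 127.0.0.1 5000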

Checking with netstat when no clients are connected, I see:

tcp6       0      0 :::5000                 :::*                    LISTEN      27824/docker-proxy
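
That :::5000 listener is docker-proxy, Docker's userland proxy that accepts connections on the published host port and forwards them to the container's IP on the bridge network. Two quick checks of what it should be forwarding to (my suggestion, not from the original post; the container name pplbase_api_1 is the one from docker ps above):

# Which process owns the published port on the host?
ss -tlnp | grep :5000

# Which container IP should the proxy forward to?
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' pplbase_api_1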

While trying to connect from the Docker host, the connection is made:

tcp        0      1 172.20.0.1:56866        172.20.0.3:5000         SYN_SENT    27824/docker-proxy
tcp6       0      0 :::5000                 :::*                    LISTEN      27824/docker-proxy
tcp6       0      0 ::1:58900               ::1:5000                ESTABLISHED 31642/links
tcp6     592      0 ::1:5000                ::1:58900               ESTABLISHED 27824/docker-proxy

So I'm now suspecting some network voodoo on the Docker host.

Answered by xiffy, 2020-02-04 21:50

So as I was working on this problem, slowly moving towards a solution, I found my last suggestion was right after all. In the firewall (iptables) I logged all dropped packets, and yes, the packets between the Docker bridge (not docker0, but the br-* interface of the user-defined network) and the container's veth interface were being dropped by iptables. Adding a rule allowing traffic between those interfaces resolved the problem.
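
For reference, one way to log the dropped packets as described (a minimal sketch, assuming the drops happen at the end of the INPUT chain; the log prefix is arbitrary):

# Log whatever reaches the end of INPUT without being accepted
sudo iptables -A INPUT -j LOG --log-prefix "IPT-INPUT-DROP: " --log-level 4

# Watch kernel messages while repeating the curl from the host
sudo journalctl -kf | grep IPT-INPUT-DROP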

In my case:

sudo iptables -I INPUT 3 -s 172.20.0.3 -d 172.20.0.1 -j ACCEPT

where 172.20.0.0/16 is the bridge network generated by Docker for the pplbase network.
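
A more generic variant of the same fix (an alternative suggestion, not what was applied above) is to accept everything arriving on the user-defined bridge interface itself; its name is br- followed by the network ID, so look it up first:

# Find the network ID of the pplbase network (the interface is named br-<ID>)
docker network ls --filter name=pplbase

# Accept all host<->container traffic on that bridge (replace br-XXXXXXXXXXXX with the real name)
sudo iptables -I INPUT -i br-XXXXXXXXXXXX -j ACCEPT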