You’re probably already aware of this, but if you run Docker on Linux with ufw or firewalld, Docker will bypass all your firewall rules. It doesn’t matter what your defaults are or how strict you are about opening ports; Docker has free rein to send and receive traffic on the host as it pleases.

If you are good at manipulating iptables there is a way around this, but it also affects outgoing traffic and can interfere with Docker’s bridge network. Unless you’re a pointy head with a fetish for iptables this is a world of pain, so it isn’t really a solution.
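
For reference, the usual iptables route is Docker’s documented DOCKER-USER chain, which is evaluated before Docker’s own forwarding rules. A minimal sketch, assuming eth0 is the external interface (adjust for your setup, and note these rules don’t survive a reboot unless you persist them):

```shell
# Drop forwarded traffic to containers arriving on the external interface,
# except replies to already-established connections.
# -I inserts at the top of the chain, so DROP goes in first and ends up second.
iptables -I DOCKER-USER -i eth0 -j DROP
iptables -I DOCKER-USER -i eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
```

This is roughly what ufw-docker automates for you.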

There is a tool called ufw-docker that mitigates this by manipulating iptables for you. I was happy with it as a solution and it used to work well on my rig, but for some unknown reason it’s no longer working and Docker is back to doing its own thing.

Am I missing an obvious solution here?

It seems odd for a popular tool like Docker, one that is also used in the enterprise, not to have a pain-free way around this.

    • dan@upvote.au · 7 points · 3 months ago

      You can override this by setting an IP on the exposed port, so that a local-only service is only accessible on 127.0.0.1.

      Also, if the Docker container only has to be accessed from another Docker container, you don’t need to expose a port at all. Docker containers can reach other Docker containers in the same compose stack by hostname.
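
      As a sketch of that second point (service names here are made up), two containers in one compose file can reach each other by hostname, with no ports at all on the internal one:

      ```yaml
      services:
        proxy:
          image: caddy:latest        # example proxy image
          ports:
            - "127.0.0.1:8080:80"   # only the proxy is published, and only locally
        app:
          image: nginx:latest
          # no ports: needed; the proxy reaches this container as http://app:80
      ```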

      • Matt The Horwood@lemmy.horwood.cloud · 3 points · 3 months ago

        Sure, you can see below that port 53 is only bound to a secondary IP I have on my Docker host.

        ---
        services:
          pihole01:
            image: pihole/pihole:latest
            container_name: pihole01
            ports:
              - "8180:80/tcp"
              - "9443:443/tcp"
              - "192.168.1.156:53:53/tcp" # this will only bind to that IP
              - "192.168.1.156:53:53/udp" # this will only bind to that IP
              - "192.168.1.156:67:67/udp" # this will only bind to that IP
            environment:
              TZ: 'Europe/London'
              FTLCONF_webserver_api_password: 'mysecurepassword'
              FTLCONF_dns_listeningMode: 'all'
            dns:
              - '127.0.0.1'
              - '192.168.1.1'
            restart: unless-stopped
            labels:
              - "traefik.http.routers.pihole_primary.rule=Host(`dns01.example.com`)"
              - "traefik.http.routers.pihole_primary.service=pihole_primary"
              - "traefik.http.services.pihole_primary.loadbalancer.server.port=80"
        
      • tux7350@lemmy.world · 2 points · 3 months ago

        Something like this. This is a compose.yml that only allows connections from the local host (via 127.0.0.1:8080) to reach the container’s port 80.

        services:
          webapp:
            image: nginx:latest
            container_name: local_nginx
            ports:
              - "127.0.0.1:8080:80"
        
          • tux7350@lemmy.world · 1 point · 3 months ago

            Well, if your reverse proxy is also inside a container, you don’t need to expose the port at all. As long as the containers are on the same Docker network, they can communicate.

            If your reverse proxy is not inside a Docker container, then yes, this method would work to prevent clients from connecting to a Docker container directly.

  • Melmi@lemmy.blahaj.zone · 4 points · 3 months ago

    If there’s a port you want accessible from other containers but not beyond the host, consider using the expose directive instead of ports. Note that expose doesn’t publish anything on the host at all; it mainly documents which port the service uses. As an added bonus, you don’t need to come up with arbitrary host ports for every container that shares a port.

    IMO it’s more intuitive to connect to a service via container_name:443 instead of localhost:8443
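
    A minimal sketch of the expose variant (image and port are just examples):

    ```yaml
    services:
      api:
        image: nginx:latest
        expose:
          - "80"   # visible to containers on shared networks; never published on the host
    ```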

  • bizdelnick@lemmy.ml · 3 points · 3 months ago

    I’ve read the article you pointed to. What is written there and what you wrote here are absolutely different things. Docker does integrate with firewalld and creates its own zone. Have you tried configuring filters for that zone? ufw is just too dumb here: it is suited for workstations that do not forward packets at all, so it cannot be integrated with Docker by design.
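
    If it helps, that zone can be inspected and filtered with standard firewall-cmd commands. A hedged sketch (the source subnet is illustrative; adapt to your network):

    ```shell
    # Show the zone Docker registered and which interfaces are in it
    firewall-cmd --zone=docker --list-all

    # Example: drop traffic from a given source subnet within that zone
    firewall-cmd --permanent --zone=docker \
      --add-rich-rule='rule family="ipv4" source address="203.0.113.0/24" drop'
    firewall-cmd --reload
    ```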

  • davad@lemmy.world · 2 points · edited · 3 months ago

    In an enterprise setting, you shouldn’t trust the server firewall. You lock that down with your network equipment.

    Edit: sorry, I failed to read the whole post 🤦‍♂️. I don’t have a good answer for you. When I used docker in my homelab, I exposed services using labels and a traefik container similar to this: https://docs.docker.com/guides/traefik/#using-traefik-with-docker

    That doesn’t protect you from accidentally exposing ports, but it helps make it more obvious when it happens.

    • jobbies@lemmy.zip (OP) · 3 points · 3 months ago

      In an enterprise setting, you shouldn’t trust the server firewall. You lock that down with your network equipment.

      I thought someone might say this, but it doesn’t seem very zero-trust?

      Ideally you’d still want the host to be as secure as humanly possible?

  • gerowen@piefed.social · 1 point · 3 months ago

    I just host everything on bare metal and use systemd to lock down/containerize things as necessary, even adding my own custom drop-ins for software that ships its own systemd service file. systemd is far more powerful than people often realize.

    • prettybunnys@piefed.social · 1 point · 3 months ago

      When you say you’re using systemd to lock down/containerize things as necessary, can you explain what you mean?

      • moonpiedumplings@programming.dev · 4 points · edited · 3 months ago

        I don’t know exactly what the commenter you replied to is talking about, but systemd has its own firewalling and sandboxing capabilities. They probably mean that they don’t use Docker for deployment of services at all.

        Here is a blogpost about systemd’s firewall capabilities: https://www.ctrl.blog/entry/systemd-application-firewall.html

        Here is a blogpost about systemd’s sandboxing: https://www.redhat.com/en/blog/mastering-systemd

        Here is the archwiki’s docs about drop in units: https://wiki.archlinux.org/title/Systemd#Drop-in_files

        I can understand why someone would like this, but this seems like a lot to learn and configure, whereas podman/docker deny most capabilities and network permissions by default.

      • gerowen@piefed.social · 2 points · 3 months ago

        systemd has all sorts of options. If a service has certain sandbox settings applied, such as a private /tmp, a private /proc, restricted access to certain folders or devices, or a restricted set of available system calls, then systemd uses namespaces to give that process its own isolated view of the filesystem with all your settings applied (you can see it under /proc/PID/root), and the process runs inside that sandbox.

        I’ve found it a little easier than managing a full blown container or VM, at least for the things I host for myself.

        If a piece of software provides its own service file that isn’t as restricted as you’d like, you can use systemctl edit to add additional options of your choosing to a “drop-in” file that gets loaded and applied at runtime so you don’t have to worry about a package update overwriting any changes you make.

        And you can even get ideas for settings to apply to a service to increase security with:

        systemd-analyze security SERVICENAME
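
        For example, a drop-in created with systemctl edit for a hypothetical foo.service might add hardening options like these (all real systemd.exec directives):

        ```ini
        # /etc/systemd/system/foo.service.d/override.conf
        [Service]
        PrivateTmp=yes
        ProtectSystem=strict
        ProtectHome=yes
        NoNewPrivileges=yes
        SystemCallFilter=@system-service
        ```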

  • dan@upvote.au · 1 point · 3 months ago

    If you are good at manipulating iptables there is a way around this

    Modern systems shouldn’t be using iptables any more; nftables is its replacement.

    • BlueBockser@programming.dev · 2 points · 3 months ago

      +1 for Podman. I’ve found rootful Podman Quadlets to be a very nice alternative to Docker Compose, especially if you’re using systemd anyway for timers, services, etc.
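
      For anyone unfamiliar: a Quadlet is a .container unit file that Podman’s systemd generator turns into a regular service. A minimal sketch (names and paths are examples):

      ```ini
      # /etc/containers/systemd/web.container
      [Container]
      Image=docker.io/library/nginx:latest
      PublishPort=127.0.0.1:8080:80

      [Service]
      Restart=always

      [Install]
      WantedBy=multi-user.target
      ```

      It then behaves like any other unit: systemctl start web.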

  • Phoenixz@lemmy.ca · 1 point · 3 months ago

    I’ve had similar issues using the CSF firewall. They just pushed out updates that apparently support Docker a little better, but I still have to fight with it to get it working. I don’t know if that will fix it for you, but give it a try.

  • ryokimball@infosec.pub · 1 point · 3 months ago

    I use Podman instead, though I’m honestly not certain this “fixes” the problem you described. I assume it does, purely because of the no-root point.

    Agreeing with the other poster: network tools, rather than relying on the server itself, are the professional fix.

    • Overspark@piefed.social · 2 points · 3 months ago

      Podman explicitly supports firewalls and does not bypass them like Docker does, whether you’re running it rootful or rootless. So IMHO that is the more professional solution.

  • GreenKnight23@lemmy.world · -7 points · 3 months ago

    this is the second time I’ve seen a post like this.

    Docker has always been like this. If it’s news to you, then you must be new to Docker.

    if you’re using the built in firewall to secure your system on your wan, you’re doing it wrong. get a physical firewall. if you’re doing it to secure your lan then you just need to put in some proper routes and let your hardware firewall sort it out with some vlans.

    don’t rely on firewalld or iptables for anything.

    • lukecyca@lemmy.ca · 3 points · 3 months ago

      What if you rent a bare metal server in a data center? Or rent a VPS from a basic provider that expects you to do your own firewalling? Or run your home lab docker host on the same vlan as other less trusted hosts?

      It would be nice if there was a reliable way to run a firewall on the same host that’s running docker.

      You may say these are obscure use cases and that they are Wrong and Bad. Maybe you’re right, but personally I think it’s an unfortunate gap in expected functionality, if for no other reason than defense-in-depth.

      • GreenKnight23@lemmy.world · -3 points · 3 months ago

        What if you rent a bare metal server in a data center?

        Any MSP will work with your security requirements, for a cost. If you can’t afford it, then you shouldn’t be using an MSP.

        Or rent a VPS from a basic provider that expects you to do your own firewalling?

        Find a better MSP. If a vendor you’re paying tells you to fuck off with your requirements for a secure system, they are telling you that you don’t matter to them and their only goal is to take your money.

        Or run your home lab docker host on the same vlan as other less trusted hosts?

        Don’t? IDK what to tell you if you understand what a VLAN is and still refuse to set one up properly to segment your network securely.

        It would be nice if there was a reliable way to run a firewall on the same host that’s running docker.

        don’t confuse reliable with convenient. iptables and firewalld are not reliable, but they are certainly convenient.

        You may say these are obscure use cases and that they are Wrong and Bad. Maybe you’re right, but personally I think it’s an unfortunate gap in expected functionality, if for no other reason than defense-in-depth.

        poor network architecture is no excuse. do it the proper way or you’re going to get your shit exposed one day.