You can directly SSH into a Docker container, but it’s almost always the wrong way to do it.
Here’s a container running a simple Nginx server, listening on port 8080 inside the container (we’ll map it to port 8080 on the host in a moment):
FROM ubuntu:latest
# procps provides ps, which we'll use later; clearing the apt lists keeps the image smaller
RUN apt-get update && apt-get install -y nginx procps \
    && rm -rf /var/lib/apt/lists/*
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]
And here’s the nginx.conf to serve a basic HTML file:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;
        server_name localhost;

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }
    }
}
Let’s build this and run it:
docker build -t my-nginx .
docker run -d -p 8080:8080 --name nginx-container my-nginx
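Before poking around inside, it’s worth confirming the container is actually up and serving. Assuming curl is available on your host, a quick sanity check looks like this:

```shell
# Confirm the container is running
docker ps --filter name=nginx-container

# Hit the mapped port from the host; Nginx should answer
curl -s http://localhost:8080/
```

If docker ps shows the container as Up and curl returns your HTML, the port mapping is working.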
Now, if you want to poke around inside, SSH is probably the first thing that comes to mind. But Docker containers are designed to be ephemeral and lightweight, and they don’t run an SSH server by default. Installing and configuring sshd inside a container adds significant overhead and complexity, defeating much of the purpose of Docker.
The Docker way is docker exec. It lets you run a command inside a running container. To get an interactive shell, you’d use:
docker exec -it nginx-container bash
Here, -i keeps stdin open (so you can type) and -t allocates a pseudo-TTY (so you get a proper terminal interface). nginx-container is the name of your running container, and bash is the command you want to run.
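docker exec accepts a few other flags worth knowing. Two that come up often are --user (-u) and --workdir (-w), both part of the standard Docker CLI (-w requires a reasonably recent Docker version):

```shell
# Run a command as a specific user inside the container
docker exec -u www-data nginx-container whoami

# Start an interactive shell already positioned in a given directory
docker exec -it -w /etc/nginx nginx-container bash
```

The -u flag is especially handy for checking file permissions from the perspective of the user your service actually runs as.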
This drops you right into the container’s shell. You can ls, cat, inspect files, and see the processes running.
Now, let’s understand why docker exec is the right tool. Docker containers are designed around a single primary process. When you run docker run, you’re starting that main process. docker exec allows you to launch additional processes within the existing container’s environment, using its filesystem and network namespace. It’s like spawning a new thread in an already running application, rather than starting a whole new, independent application (which is what ssh would imply).
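You can observe this namespace sharing directly. Every Linux process exposes links to its namespaces under /proc/<pid>/ns/, and a process started with docker exec points at the same namespace inodes as the container’s main process (PID 1 inside the container). A quick sketch:

```shell
# Namespace links of the container's main process (PID 1 inside the container)
docker exec nginx-container ls -l /proc/1/ns/

# Namespace links of the exec'd process itself; the inode numbers
# match those of PID 1, showing both live in the same namespaces
docker exec nginx-container ls -l /proc/self/ns/
```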
The most surprising thing about docker exec is that it requires no special setup within the container itself. No daemons to start, no ports to open. The Docker daemon on the host asks the container runtime to start your command directly inside the container’s existing namespaces.
Consider this: you want to see the Nginx configuration files.
docker exec -it nginx-container ls /etc/nginx/
Or tail the access log:
docker exec -it nginx-container tail /var/log/nginx/access.log
This is fundamentally different from SSH. SSH requires a server process running inside the container, listening for connections, authenticating users, and then executing commands. That adds attack surface, increases resource usage, and complicates container lifecycle management. If the SSH server crashes, your access is gone. If the container restarts, you have to get sshd running again, since a container normally runs only its single main process.
With docker exec, you’re interacting with the container’s execution environment directly through the Docker API. The Docker daemon handles the connection, and the container runtime starts your requested command inside the container’s namespaces; the container’s main process isn’t involved at all.
What most people don’t realize is how powerful docker exec is for debugging and introspection. You can even use it to run non-interactive commands that produce output you can then pipe or redirect on the host. For instance, to get a list of all running processes inside the container:
docker exec nginx-container ps aux
This output is streamed directly back to your host terminal. There’s no intermediate server, no authentication handshake. It’s as close to "native" command execution as you can get without being physically inside the container.
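Because the output is just a stream arriving at your host shell, all the usual plumbing applies. For example (the backup filename here is just an illustration):

```shell
# Save a copy of the container's nginx config to a file on the host
docker exec nginx-container cat /etc/nginx/nginx.conf > nginx.conf.backup

# Pipe through host-side grep to count nginx worker processes
docker exec nginx-container ps aux | grep -c "nginx: worker"
```

Note that cat and ps run inside the container, while the redirect and grep run on the host.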
The next logical step after mastering docker exec is understanding how to build images that don’t require interactive debugging, using robust entrypoints and well-defined build processes.