netstat and ss are your go-to tools for peering into the network stack and diagnosing connection issues.

TCP Diagnostics with netstat and ss: Find Connection Issues

The most surprising thing about TCP connection states is how many of them exist beyond the simple "ESTABLISHED" or "CLOSED" you see every day, and how long connections can linger in these intermediate states, silently consuming resources and blocking new connections.

Let’s see what a busy server looks like. Imagine you’ve got a web server and it’s suddenly slow. Users are reporting timeouts. You SSH in and want to see what’s happening on the network.

First, netstat. It’s been around forever and is usually pre-installed.

sudo netstat -tulnp | grep :80

This command shows:

  • -t: TCP connections
  • -u: UDP connections (we’re focusing on TCP, but it’s good to know it’s there)
  • -l: Listening sockets (processes waiting for incoming connections)
  • -n: Numeric IP addresses and port numbers (faster, avoids DNS lookups)
  • -p: Show the PID and program name owning the socket

The output might look like this:

tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1234/apache2
tcp6       0      0 :::80                   :::*                    LISTEN      1234/apache2

This tells you apache2 (PID 1234) is listening on port 80 for both IPv4 and IPv6. Great, the server is configured to accept HTTP requests. But what about active connections?

sudo netstat -tn | grep ESTABLISHED | wc -l

This will give you a count of established TCP connections. If this number is unexpectedly high, or if it’s stuck and not decreasing, you might have a problem.
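Rather than grepping for one state at a time, you can tally every state in a single pass. A small sketch, assuming a POSIX shell and awk:

```shell
#!/bin/sh
# Tally all TCP sockets by state in one pass. With plain "ss -tan" the
# state is the first column; NR > 1 skips the header line.
ss -tan | awk 'NR > 1 {count[$1]++} END {for (s in count) print count[s], s}' | sort -rn

# The netstat equivalent: on tcp lines the state is the sixth column.
netstat -tn 2>/dev/null | awk '/^tcp/ {count[$6]++} END {for (s in count) print count[s], s}' | sort -rn
```

A sudden pile-up in any one state (hundreds of CLOSE_WAIT, thousands of TIME_WAIT) jumps out immediately in this view.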

Now, ss. It’s a newer, faster, and more powerful tool that’s part of the iproute2 suite. Instead of parsing the /proc/net files the way netstat does, it queries the kernel directly over netlink, which is noticeably faster on machines with many thousands of sockets.

Let’s check the same listening socket with ss:

sudo ss -tuln | grep :80

The options are very similar, and the output is often more compact and readable:

Netid  State      Recv-Q Send-Q Local Address:Port               Peer Address:Port
tcp    LISTEN     0      128    0.0.0.0:80                      0.0.0.0:*
tcp    LISTEN     0      128    :::80                           :::*

Without -p, ss doesn’t show the PID/program name for listening sockets. Add it to get that:

sudo ss -tulnp | grep :80
Netid  State      Recv-Q Send-Q Local Address:Port               Peer Address:Port      Process
tcp    LISTEN     0      128    0.0.0.0:80                      0.0.0.0:*              users:(("apache2",pid=1234,fd=3))
tcp    LISTEN     0      128    :::80                           :::*                   users:(("apache2",pid=1234,fd=4))

This is where ss really shines: exploring connection states beyond just "ESTABLISHED."

Let’s look at connections in the TIME_WAIT state. This state occurs on the side that closes a connection first. The socket remains reserved for a period (2*MSL, twice the Maximum Segment Lifetime, in the RFC; on Linux it is hardcoded to 60 seconds) to ensure any delayed packets from the old connection are absorbed and don’t interfere with a new connection on the same address and port pair.

sudo ss -tnH state time-wait | wc -l

(The -H flag suppresses ss’s header line, so wc -l counts only sockets.)

If you see a huge number of TIME_WAIT connections, it can exhaust ephemeral ports on the server, preventing new outgoing connections from being established. This is common on busy servers that make many short-lived outgoing connections.
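To gauge how close you are to exhaustion, compare that count against the ephemeral port range and see which peers account for the churn. A sketch (the split on ":" assumes IPv4 peer addresses):

```shell
#!/bin/sh
# The kernel's usable outgoing port range, e.g. "32768 60999" (~28k ports).
sysctl net.ipv4.ip_local_port_range

# Group TIME_WAIT sockets by remote IP. Without a state filter, ss -tan
# prints State Recv-Q Send-Q Local Peer, so the peer address is column 5.
# Splitting on ":" assumes IPv4; IPv6 peers would need different parsing.
ss -tan | awk '$1 == "TIME-WAIT" {split($5, a, ":"); count[a[1]]++}
    END {for (ip in count) print count[ip], ip}' | sort -rn | head
```

If one remote IP dominates the list, that is the short-lived-connection traffic worth fixing first (for example, by enabling keep-alive toward that backend).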

What about connections that are stuck and not responding? You can look for connections in CLOSE_WAIT or FIN_WAIT states.

sudo ss -tnH state close-wait | wc -l
sudo ss -tnH state fin-wait-1 | wc -l
sudo ss -tnH state fin-wait-2 | wc -l

A high number of CLOSE_WAIT sockets on the server side means the application has received the client’s FIN but hasn’t closed its own end of the connection. This is usually an application bug, typically a code path that never calls close() on the socket, though it can also be an application deliberately holding the connection open while it finishes work.

FIN_WAIT_1 means the local TCP stack has sent a FIN packet to the remote end and is waiting for an ACK. If it’s stuck here, the ACK isn’t arriving, or the remote end is slow to respond. FIN_WAIT_2 means the local end has received the ACK for its FIN and is now waiting for the remote end to send its own FIN.

To inspect these states and see which processes are involved, add the -p option:

sudo ss -tnp state close-wait
sudo ss -tnp state fin-wait-1
sudo ss -tnp state fin-wait-2

This will show you the source and destination IPs and ports, and critically, the associated process.
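If many processes are involved, a per-process tally is handy. A sketch that extracts the process name from ss’s users:(("name",pid=N,fd=M)) annotation, the same format shown in the listening-socket output earlier:

```shell
#!/bin/sh
# Count CLOSE_WAIT sockets per owning process. The name sits inside
# ss's users:(("name",pid=N,fd=M)) field; match()/substr() pull it out
# (the literal prefix users:((" is 9 characters long).
# Run as root (sudo) to see sockets owned by other users' processes.
ss -tnp state close-wait | awk '
    match($0, /users:\(\("[^"]+"/) {
        name = substr($0, RSTART + 9, RLENGTH - 10)
        count[name]++
    }
    END {for (p in count) print count[p], p}' | sort -rn
```

A single process at the top of this list with a steadily growing count is the classic signature of a socket leak.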

If you have a process stuck in CLOSE_WAIT or FIN_WAIT, and it’s not gracefully handling the connection closure, you might need to:

  1. Restart the application: This is the quickest fix and often resolves the issue by clearing out the stuck sockets. For apache2 on Debian/Ubuntu: sudo systemctl restart apache2.
  2. Tune TCP parameters (advanced): You can tune net.ipv4.tcp_fin_timeout (how long orphaned sockets stay in FIN_WAIT_2 before being closed; note it does not shorten TIME_WAIT) and net.ipv4.tcp_tw_reuse (allows outgoing connections to reuse local ports stuck in TIME_WAIT; use with caution).
    • To check current values: sysctl net.ipv4.tcp_fin_timeout and sysctl net.ipv4.tcp_tw_reuse.
    • To change temporarily (until reboot): sudo sysctl -w net.ipv4.tcp_fin_timeout=30 and sudo sysctl -w net.ipv4.tcp_tw_reuse=1.
    • To make permanent, edit /etc/sysctl.conf and add the lines, then run sudo sysctl -p. The tcp_tw_reuse option is particularly effective for high-traffic servers experiencing TIME_WAIT exhaustion because it allows a new outgoing TCP connection to reuse a local port that is in TIME_WAIT, provided the new connection’s TCP timestamp is newer than the last timestamp seen on the old one. That timestamp check is what makes it safe: the kernel can guarantee stray segments from the old connection won’t be mistaken for the new one. It requires TCP timestamps (net.ipv4.tcp_timestamps=1, the default) and only applies to connections the server initiates, not to incoming ones.
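On modern systems these settings are often kept in a drop-in file rather than appended to /etc/sysctl.conf. A minimal sketch (the file name 90-tcp-tuning.conf is an arbitrary choice, not a convention from this article):

```
# /etc/sysctl.d/90-tcp-tuning.conf  (hypothetical file name)
# Close orphaned FIN_WAIT_2 sockets after 30s instead of the default 60s.
net.ipv4.tcp_fin_timeout = 30
# Let outgoing connections reuse TIME_WAIT ports; requires TCP timestamps.
net.ipv4.tcp_tw_reuse = 1
```

Apply it with sudo sysctl --system, which loads every /etc/sysctl.d/*.conf file in order; keeping tuning in its own file makes it easy to revert.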

The Recv-Q and Send-Q values in ss output are also crucial, and their meaning depends on the socket state. For an ESTABLISHED connection, Recv-Q is the number of bytes received but not yet read by the application, and Send-Q is the number of bytes sent but not yet acknowledged by the remote host. For a LISTEN socket, Recv-Q is the current depth of the accept queue and Send-Q is the configured backlog limit. If Recv-Q on a listening socket is consistently near its Send-Q, the application can’t accept() connections as fast as they arrive. If Send-Q is high for an ESTABLISHED connection, the network is slow or the remote end is not acknowledging data quickly enough.
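One way to spot an overwhelmed listener is to compare the two queue columns directly. A sketch using awk (the "more than half full" threshold is an arbitrary choice for illustration):

```shell
#!/bin/sh
# Flag listening sockets whose accept queue (Recv-Q, column 2) is more
# than half of the configured backlog (Send-Q, column 3). For LISTEN
# sockets these columns mean queue depth and backlog limit, not bytes.
ss -ltn | awk 'NR > 1 && $3 > 0 && $2 > $3 / 2 {print $4, "accept queue", $2 "/" $3}'
```

Any line this prints is a listener on the verge of dropping or delaying new connections, which shows up to users as intermittent timeouts.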

ss’s filter expressions match states, addresses, and ports, not queue sizes, so pipe through awk to filter on the queue columns instead:

sudo ss -tan | awk '$1 == "ESTAB" && $2 > 10240'   # connections with more than 10KB waiting in the receive queue
sudo ss -tan | awk '$1 == "ESTAB" && $3 > 10240'   # connections with more than 10KB unacknowledged in the send queue

You’ll need to adjust 10240 (10KB) based on your typical traffic patterns.

One thing most people don’t realize is the sheer number of states a TCP connection can traverse, and how ss can help you pinpoint specific, often overlooked, states like SYN-RECV, LAST-ACK, and CLOSING. Each state has specific implications for resource usage and connection lifecycle. For instance, SYN-RECV indicates the server has received a SYN and sent a SYN-ACK but hasn’t received the final ACK from the client, often a sign of a SYN flood attack or of network issues preventing the ACK from returning.
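To check for that last case, count the half-open connections and confirm SYN cookies are available as a backstop. A quick sketch:

```shell
#!/bin/sh
# Count half-open (SYN-RECV) connections; tail -n +2 skips ss's header.
ss -tn state syn-recv | tail -n +2 | wc -l

# SYN cookies let the kernel keep accepting connections even when the
# SYN backlog is full; 1 (the default on most distributions) = enabled.
sysctl net.ipv4.tcp_syncookies
```

A persistently large SYN-RECV count alongside normal traffic levels points at a flood; with syncookies enabled, legitimate clients can still get through while the attack is absorbed.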

The next common problem you’ll encounter after fixing these connection issues is often related to application-level timeouts or resource exhaustion within the application itself, rather than the network stack.
