The most surprising thing about tcpdump timestamps is that they aren’t inherently tied to the packet’s actual arrival time at your network interface.
Let’s see it in action. Imagine you’re capturing traffic on eth0 and want raw Unix timestamps with microsecond precision.
tcpdump -i eth0 -tt -l
This command starts capturing packets on eth0. The -tt flag tells tcpdump to print each timestamp as seconds and microseconds since the Unix epoch, with no date or time-of-day formatting. The -l flag makes the output line-buffered, which is useful for real-time viewing or piping into another tool.
Here’s what a few lines might look like:
1678886400.123456
1678886400.123500
1678886401.000100
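These raw epoch values are easy to convert after the fact. Here is a small Python sketch using the first timestamp from the sample output above (rendered in UTC; tcpdump itself formats absolute timestamps in the capture host’s local timezone):

```python
from datetime import datetime, timezone

# Epoch timestamp copied from the -tt sample output above.
ts = 1678886400.123456

# Render it as a human-readable UTC time, similar to what -tttt
# would show (modulo the local timezone tcpdump uses by default).
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M:%S.%f"))  # → 2023-03-15 13:20:00.123456
```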
Now, what if you want to see how much time elapsed between packets, rather than their absolute timestamps? That’s where the -ttt flag comes in. (A single -t does the opposite of what you might expect: it suppresses timestamps entirely.) With -ttt, each line shows the delta since the previous packet.
tcpdump -i eth0 -ttt -l
And the output might look like:
0.000000
0.000044
0.876600
Notice that the first line is zero: there is no previous packet to measure against. Every subsequent line is the delta from the packet before it. These are the relative timestamps. (Depending on your tcpdump version, the deltas may be printed in a 00:00:00.000044 duration format instead.)
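The -ttt deltas are just pairwise differences of the absolute timestamps. Recomputing them from the -tt sample above makes the relationship explicit (a sketch; the timestamps are the ones from the earlier output):

```python
# Absolute epoch timestamps from the -tt example.
stamps = [1678886400.123456, 1678886400.123500, 1678886401.000100]

# Each delta is the gap between consecutive packets, which is
# what -ttt prints for every packet after the first.
for prev, cur in zip(stamps, stamps[1:]):
    print(f"{cur - prev:.6f}")
# → 0.000044
# → 0.876600
```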
The timestamps themselves are not generated by tcpdump at all. They are applied by the kernel’s packet capture machinery (via libpcap) when each packet is delivered to the capture socket, using the system clock of the machine running tcpdump. tcpdump merely formats what libpcap hands it, either as an absolute value or as a delta from an earlier packet. The available precision (microseconds by default, or nanoseconds on systems that support --time-stamp-precision=nano) is determined by the operating system’s clock and capture mechanism, not by the formatting flags.
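You can check what the capture host’s wall clock itself advertises. A quick Unix-only sketch using Python’s clock_getres (this reports the clock’s nominal resolution, which is an upper bound on timestamp precision, not a guarantee of accuracy):

```python
import time

# Query the advertised resolution of CLOCK_REALTIME, the wall clock
# that capture timestamps are typically derived from (Unix-only).
res = time.clock_getres(time.CLOCK_REALTIME)
print(f"CLOCK_REALTIME resolution: {res:.9f} s")
```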
The real power comes from picking the right view for the job. For an absolute timestamp in a human-readable format, including the date and time of day with microsecond precision, use -tttt:
tcpdump -i eth0 -tttt -l
Output:
2023-03-15 10:00:00.123456
2023-03-15 10:00:01.000100
To see elapsed time instead, switch to -ttt for the delta since the previous packet, or -ttttt (on versions that support it) for the delta since the first packet, i.e. the time since the start of the capture. The deltas are incredibly useful for diagnosing latency issues, as you can directly see the gaps between packets appearing on your interface.
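Raw -tt output is also easy to post-process for latency hunting. Here is a minimal sketch that flags large inter-packet gaps; the sample lines are hypothetical, in `tcpdump -tt` style:

```python
import re

# Hypothetical lines in `tcpdump -tt` style (addresses and flags are made up).
lines = [
    "1678886400.123456 IP 10.0.0.1.443 > 10.0.0.2.51234: Flags [.], length 0",
    "1678886400.123500 IP 10.0.0.2.51234 > 10.0.0.1.443: Flags [.], length 0",
    "1678886401.000100 IP 10.0.0.1.443 > 10.0.0.2.51234: Flags [P.], length 100",
]

GAP_THRESHOLD = 0.5  # flag inter-packet gaps longer than 500 ms

prev = None
for line in lines:
    m = re.match(r"^(\d+\.\d+)", line)
    if not m:
        continue  # skip continuation lines (hex dumps from -x/-X, etc.)
    ts = float(m.group(1))
    if prev is not None and ts - prev > GAP_THRESHOLD:
        print(f"gap of {ts - prev:.6f} s before packet at {ts:.6f}")
    prev = ts
# → gap of 0.876600 s before packet at 1678886401.000100
```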
The -tttt format is the most common for debugging network performance because it gives you a consistent, high-resolution view of when packets arrived according to the capture host. However, it’s crucial to remember that the timestamp is applied when the kernel’s capture machinery sees the packet, not when the packet actually hits the wire. Interrupt coalescing, driver buffering, and kernel scheduling all add a small, but potentially significant, delay between wire arrival and timestamping, often called capture latency, and it varies with system load. Some NICs can timestamp packets in hardware instead: tcpdump -J lists the timestamp sources available on an interface, and -j selects one.
If you need to correlate tcpdump timestamps with events on other systems or applications, you’ll need to account both for this capture latency and for clock offset between the machines (keep their clocks synchronized with NTP or PTP). A common technique is to send precisely timed probe packets (e.g., using ping with specific intervals or a custom tool) and compare the sent times against the captured times to estimate the combined overhead.
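That estimate can be sketched in a few lines. All of the numbers below are hypothetical, standing in for a real probe experiment:

```python
# Hypothetical probe experiment: the sender records when it emitted each
# probe (its own clock); tcpdump records when each was captured.
sent     = [100.000000, 101.000000, 102.000000]
captured = [100.000150, 101.000140, 102.000160]

offsets = [c - s for s, c in zip(sent, captured)]

# The minimum offset bounds the combination of clock offset between the
# two hosts and the smallest capture latency observed across the probes.
print(f"estimated offset + capture latency: {min(offsets) * 1e6:.0f} µs")
# → estimated offset + capture latency: 140 µs
```

Taking the minimum rather than the mean is the usual choice here, since queuing and scheduling can only add delay, never subtract it.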
The next thing you’ll likely want to explore is how to filter these packets based on their timestamps, or how to use these timestamps to reconstruct network flows.