Containers share the kernel with the host OS, which
has implications for the network stack. Achieving connectivity
between containers exclusively in software is unsuitable for
reliable, low-latency applications. While extensive research has
been conducted on virtual machines processing real-time traffic
with hardware support, the impact of network latencies
in containerized environments has received comparatively less
attention. This paper analyzes throughput and network latencies
in container topologies on a single host featuring single-root
input/output virtualization, Linux Containers, and commercial
off-the-shelf hardware. Using a state-of-the-art timestamping
methodology, we measure latencies with a resolution of 1.25 µs
without introducing delay by the measurement methodology
itself. We evaluate a single flow in a line topology with up
to 64 containers. The experiments demonstrate that pinning
interrupt request handlers to non-uniform memory access nodes
increases throughput and decreases latencies. Furthermore, we
identify dTLB misses, rescheduling interrupts, and soft interrupt
floods as critical challenges: they cause latency spikes and
cannot be fully isolated. This paper contributes findings that help
minimize bottlenecks and limitations for real-time container
applications.
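The IRQ-pinning technique mentioned above can be sketched as a small configuration fragment. This is an illustrative sketch, not the paper's exact setup: the NIC name (`eth0`) and NUMA node (`node0`) are assumptions, while the `/proc/irq` and `/sys/devices/system/node` paths are standard Linux interfaces.

```shell
# Hedged sketch: pin a NIC's interrupt request handlers to the CPUs of one
# NUMA node. Assumptions: the NIC is named "eth0" and its local NUMA node is
# node 0; adjust both for the actual system. Requires root.
NIC=eth0

# CPU list belonging to NUMA node 0, e.g. "0-7".
NODE_CPUS=$(cat /sys/devices/system/node/node0/cpulist)

# Route every IRQ line associated with the NIC to those CPUs.
for irq in $(grep "$NIC" /proc/interrupts | awk -F: '{print $1}' | tr -d ' '); do
    echo "$NODE_CPUS" > "/proc/irq/$irq/smp_affinity_list"
done
```

Note that the `irqbalance` daemon, if running, may redistribute these IRQs again, so it is typically stopped or configured to ignore the pinned interrupts first.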
This dataset contains the raw data for a line topology and a complex topology.