You can troubleshoot Linux network latency by examining the various factors that affect network performance. Start by checking hardware components such as network interface cards and cables, and make sure the configuration of network devices such as routers and switches is sound. Use commands like `ping` and `traceroute` to diagnose latency, monitor network traffic to identify bottlenecks, and consider tuning system settings and using performance monitoring tools to improve overall network efficiency.
On Linux servers, you can strengthen the server's defenses and reduce the impact of DDoS on normal services through methods such as kernel tuning, DPDK, and XDP. At the application layer, you can mitigate the impact of DDoS using caching at various levels, a WAF, and a CDN.
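As a concrete illustration of the kernel-tuning side, here is a minimal sketch assuming a SYN-flood style attack. The sysctl parameters are standard Linux TCP settings, but the values are illustrative, not recommendations:

```
# Fall back to SYN cookies when the SYN backlog fills up
$ sysctl -w net.ipv4.tcp_syncookies=1
# Allow more half-open connections to queue up
$ sysctl -w net.ipv4.tcp_max_syn_backlog=8192
# Retry unanswered SYN+ACKs fewer times, releasing resources sooner
$ sysctl -w net.ipv4.tcp_synack_retries=2
```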
However, it is important to note that if DDoS traffic has already reached the Linux server, even if various optimizations are made at the application layer, network service latency will generally be much higher than usual.
Therefore, in practice, Linux servers are usually deployed together with professional traffic scrubbing and network firewall equipment to mitigate this issue.
Apart from the increased latency caused by DDoS, you have probably also seen network latency caused by many other reasons, such as:
- Delay caused by slow network transmission.
- Delay caused by slow packet processing speed of the Linux kernel protocol stack.
- Delay caused by slow data processing in the application itself (see the sketch after this list).
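When the symptom is simply "this service feels slow", one quick way to see which layer is contributing is to ask the kernel for its own per-connection RTT estimate. This is a sketch rather than part of the case below; `ss -ti` is a standard iproute2 invocation, and any live TCP connection on your machine will do:

```
# -t: TCP sockets only; -i: show internal TCP state, including the kernel's
# smoothed RTT estimate (rtt:<srtt>/<rttvar>, in milliseconds) per socket.
# If srtt is small but clients still see high latency, the extra time is
# being spent above the socket, i.e. in the application.
$ ss -ti
```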
So what should we do when we encounter delays caused by these reasons? How do we pinpoint the root cause of network latency? Let’s discuss network latency in this article.
Linux Network Latency
When it comes to network latency, people usually think of it as the time required for network data transmission. The "time" here, however, refers to two-way transmission: the round-trip time (RTT) that data takes to travel from the source to the destination and back again.
Besides network latency, another commonly used metric is application latency, which refers to the time needed for an application to receive a request and return a response. Typically, application latency is also known as round-trip latency, which is the sum of network data transfer time and data processing time.
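To make the distinction concrete, here is a quick client-side way to observe application latency using curl's built-in timing variables (`time_connect`, `time_starttransfer`, and `time_total` are documented curl write-out variables; the URL is a placeholder):

```
# time_connect: TCP handshake complete (roughly one network RTT)
# time_starttransfer: first response byte received (adds server processing time)
# time_total: entire response received
$ curl -so /dev/null \
    -w 'connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
    http://192.168.0.30/
```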
People usually use the ping command to test network latency. ping is based on the ICMP protocol: it obtains the round-trip time by computing the time difference between an ICMP echo request packet and the corresponding ICMP echo response packet. Because this process requires no special authentication, it is often abused by network attacks and probing tools, such as the port scanner nmap and the packet-crafting tool hping3.
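For reference, a basic ICMP measurement looks like this (the target is a placeholder and the output is omitted since it varies by network; on Linux, ping ends with a summary line reporting `rtt min/avg/max/mdev`):

```
# Send three ICMP echo requests and report per-packet and summary RTTs
$ ping -c 3 google.com
```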
Therefore, to avoid these issues, many network services disable ICMP, preventing us from using ping to test network service availability and round-trip latency. In this case, you can use traceroute or hping3’s TCP and UDP modes to obtain network latency.
For instance:
```
# -c: 3 requests
# -S: Set TCP SYN
# -p: Set port to 80
$ hping3 -c 3 -S -p 80 google.com
HPING google.com (eth0 142.250.64.110): S set, 40 headers + 0 data bytes
len=46 ip=142.250.64.110 ttl=51 id=47908 sport=80 flags=SA seq=0 win=8192 rtt=9.3 ms
len=46 ip=142.250.64.110 ttl=51 id=6788 sport=80 flags=SA seq=1 win=8192 rtt=10.9 ms
len=46 ip=142.250.64.110 ttl=51 id=37699 sport=80 flags=SA seq=2 win=8192 rtt=11.9 ms

--- google.com hping statistic ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 9.3/10.9/11.9 ms
```
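The UDP mode works the same way. A sketch, assuming a DNS server listening on UDP port 53 (a missing reply may just mean the probe was filtered, not that the host is down):

```
# --udp: send UDP probes instead of TCP SYN packets
$ hping3 -c 3 --udp -p 53 google.com
```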
Of course, you can also use traceroute:
```
$ traceroute --tcp -p 80 -n google.com
traceroute to google.com (142.250.190.110), 30 hops max, 60 byte packets
 1  * * *
 2  240.1.236.34  0.198 ms * *
 3  * * 243.254.11.5  0.189 ms
 4  * 240.1.236.17  0.216 ms 240.1.236.24  0.175 ms
 5  241.0.12.76  0.181 ms 108.166.244.15  0.234 ms 241.0.12.76  0.219 ms
...
24  142.250.190.110  17.465 ms 108.170.244.1  18.532 ms 142.251.60.207  18.595 ms
```
traceroute sends three probe packets at each hop of the route and prints the round-trip latency when it receives a response. If there is no response within the timeout (5 seconds by default), it prints an asterisk (*) instead.
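If three probes per hop and the 5-second wait make a trace too slow, both can be tuned; the flags below come from the Linux traceroute man page:

```
# -q 1: send one probe per hop instead of three
# -w 1: wait at most 1 second for each reply
$ traceroute --tcp -p 80 -n -q 1 -w 1 google.com
```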
Case Study
We will use two hosts, host1 and host2, in this demonstration:
- host1 (192.168.0.30): Hosts two Nginx web applications (normal and delayed)
- host2 (192.168.0.2): Analysis host
host1 Preparation
On host1, let's start two containers: the official Nginx image and a latency-injected version of Nginx:
```
# Official nginx
$ docker run --network=host --name=good -itd nginx
fb4ed7cb9177d10e270f8320a7fb64717eac3451114c9fab3c50e02be2e88ba2

# Latency version of nginx
$ docker run --name nginx --network=host -itd feisky/nginx:latency
b99bd136dcfd907747d9c803fdc0255e578bad6d66f4e9c32b826d75b6812724
```
Run the following command to verify that both containers are serving traffic:
```
$ curl http://127.0.0.1
...
Thank you for using nginx.

$ curl http://127.0.0.1:8080
...
Thank you for using nginx.
```
host2 Preparation
Now let's use the aforementioned hping3 to test their latency and see the difference. On host2, run the following commands to test the latency of port 80 and port 8080 on host1, respectively:
Port 80:
```
$ hping3 -c 3 -S -p 80 192.168.0.30
HPING 192.168.0.30 (eth0 192.168.0.30): S set, 40 headers + 0 data bytes
len=44 ip=192.168.0.30 ttl=64 DF id=0 sport=80 flags=SA seq=0 win=29200 rtt=7.8 ms
len=44 ip=192.168.0.30 ttl=64 DF id=0 sport=80 flags=SA seq=1 win=29200 rtt=7.7 ms
len=44 ip=192.168.0.30 ttl=64 DF id=0 sport=80 flags=SA seq=2 win=29200 rtt=7.6 ms

--- 192.168.0.30 hping statistic ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 7.6/7.7/7.8 ms
```
Port 8080:
```
# Test port 8080 latency
$ hping3 -c 3 -S -p 8080 192.168.0.30
HPING 192.168.0.30 (eth0 192.168.0.30): S set, 40 headers + 0 data bytes
len=44 ip=192.168.0.30 ttl=64 DF id=0 sport=8080 flags=SA seq=0 win=29200 rtt=7.7 ms
len=44 ip=192.168.0.30 ttl=64 DF id=0 sport=8080 flags=SA seq=1 win=29200 rtt=7.6 ms
len=44 ip=192.168.0.30 ttl=64 DF id=0 sport=8080 flags=SA seq=2 win=29200 rtt=7.3 ms

--- 192.168.0.30 hping statistic ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 7.3/7.6/7.7 ms
```
From this output, you can see that the latency of both ports is roughly the same, at about 7 milliseconds. But this only measures a single request at a time. What happens with concurrent requests? Next, let's try wrk (https://github.com/wg/wrk).
Port 80:
```
$ wrk --latency -c 100 -t 2 --timeout 2 http://192.168.0.30/
Running 10s test @ http://192.168.0.30/
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     9.19ms   12.32ms  319.61ms   97.80%
    Req/Sec     6.20k   426.80     8.25k    85.50%
  Latency Distribution
     50%    7.78ms
     75%    8.22ms
     90%    9.14ms
     99%   50.53ms
  123558 requests in 10.01s, 100.15MB read
Requests/sec:  12340.91
Transfer/sec:     10.00MB
```
Port 8080:
```
$ wrk --latency -c 100 -t 2 --timeout 2 http://192.168.0.30:8080/
Running 10s test @ http://192.168.0.30:8080/
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    43.60ms    6.41ms   56.58ms   97.
```