If something is stuck in the network, ping and traceroute help to narrow down faults and bottlenecks. We explain how they work and help to uncover attacks.
Continue reading Netzwerk-Monitoring: Ping und Traceroute richtig interpretieren
When implementing NTP servers, an interesting part is always to check whether the server is "up and running" and reachable from the clients. While I've done many basic NTP checks on Linux, I lacked a small piece of documentation for doing this on Windows. It turns out that there's no need for third-party software, because Windows already includes a tool to test NTP connections: w32tm.
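For a quick test, something like the following should do (ntp.example.com is just a placeholder for your own server):

    :: query the given server five times; display only, do not adjust the local clock
    w32tm /stripchart /computer:ntp.example.com /dataonly /samples:5

The stripchart mode prints the measured offset per sample without touching the local clock; w32tm /query /status additionally shows the synchronization state of the client itself.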
Attackers like to use ping and traceroute to locate servers on the Internet. This tempts many security admins to block ping and traceroute traffic in their network with their firewall. But in doing so, they only hinder the work of server administrators, because there are many other ways to track down servers.
This is a really nice feature: you can run iperf3 directly on a FortiGate to speed-test your network connections. It’s basically an iperf3 client. Using some public iperf servers you can test your Internet bandwidth; using some internal servers you can test your own routed/switched networks, VPNs, etc. However, the maximum throughput for the test is CPU dependent. So please be careful when interpreting the results. Here we go:
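On the FortiOS releases I've seen, the iperf3 client is wrapped in the "diagnose traffictest" commands; names and availability may differ per release, and the interface name, port, and public server below are only placeholders:

    diagnose traffictest client-intf wan1
    diagnose traffictest port 5201
    diagnose traffictest run -c ping.online.net

The first two commands set the outgoing interface and the iperf3 port, the last one runs the actual test against the given iperf3 server. Keep the CPU caveat from above in mind when reading the reported bandwidth.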
This is a guest blogpost by Jasper Bongertz. His own blog is at blog.packet-foo.com.
Running your own NTP server(s) is usually a good idea. Even better if you know that they’re working correctly and serve their answers efficiently and without a significant delay, even under load. This is how you can use Wireshark to analyze the NTP delta time for NTP servers:
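As a rough sketch of the idea (not necessarily Jasper's exact workflow): filter on NTP and look at the time between a client request (mode 3) and the matching server response (mode 4). With tshark, assuming a capture that mostly contains such request/response pairs, this could look like:

    # print the time since the previously displayed packet along with source and NTP mode;
    # on mode-4 (server) packets this roughly equals the server's response delta
    tshark -r ntp.pcap -Y "ntp" -T fields \
      -e frame.time_delta_displayed -e ip.src -e ntp.flags.mode

In Wireshark itself the same information can be added as a column based on the "time delta from previous displayed packet" field.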
I am participating in the NTP Pool Project with at least one NTP server at a time. Of course, I am monitoring the count of NTP clients that are accessing my servers with some RRDtool graphs. ;) I was totally surprised to see quite high peaks for a couple of minutes whenever one of the servers was in the DNS, while the overall rate grew only slowly. I am still not quite sure why this is the case.
For one month I also logged all source IP addresses to gain some more details about its usage. Let’s have a look at some stats:
Continue reading Stats from Participating the NTP Pool Project
If you are operating a publicly available NTP server, for example when you're going to join the NTP Pool Project, you probably want to test whether your server is working correctly. Either with a one-off measurement from hundreds of clients, or continuously to keep track of its performance. You can use the RIPE Atlas measurement platform (Wikipedia) for both use cases. Here's how:
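Just to illustrate the idea: a one-off NTP measurement can be created via the RIPE Atlas REST API. The target, probe count, and API key below are placeholders; check the current v2 API documentation for the exact fields:

    # one-off NTP measurement from 50 worldwide probes against a placeholder target
    curl -s -X POST "https://atlas.ripe.net/api/v2/measurements/?key=YOUR_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"definitions": [{"type": "ntp", "af": 4, "target": "ntp.example.com",
           "description": "NTP reachability test"}],
           "probes": [{"requested": 50, "type": "area", "value": "WW"}],
           "is_oneoff": true}'

For continuous monitoring you would simply omit the one-off flag and set an interval instead.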
Monitoring a Meinberg LANTIME appliance is much easier than monitoring DIY NTP servers. Why? Because you can use the provided enterprise MIB and load it into your SNMP-based monitoring system. Great. The MIB serves many OIDs such as the firmware version, reference clock state, offset, client requests, and even more specific ones such as “correlation” and “field strength” in case of my phase-modulated DCF77 receiver (which is called “PZF” by Meinberg). And since the LANTIME is built upon Linux, you can use the well-known system and interfaces MIBs as well for basic coverage. Let’s dig into it:
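To get a first impression of what the appliance exposes, you can simply walk the Meinberg enterprise subtree with net-snmp. Hostname and community are placeholders, and 5597 is Meinberg's registered enterprise number as far as I know; the human-readable OID names additionally require the Meinberg MIB files to be loaded:

    # walk everything below the Meinberg enterprise OID (numeric, no MIB files needed)
    snmpwalk -v 2c -c public lantime.example.net 1.3.6.1.4.1.5597
    # the standard system/interfaces MIBs work as well, since the LANTIME runs Linux
    snmpget -v 2c -c public lantime.example.net sysDescr.0

From there you can pick the OIDs for firmware version, refclock state, offset, and client requests and feed them into your monitoring system.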
Beyond monitoring the Linux OS and the basic NTP statistics of your stratum 1 GPS NTP server, you can get some more values from the GPS receiver itself, namely the number of satellites (active & in view) as well as the GPS fix and the dilution of precision aka DOP. This brings a few more graphs and details. Nice. Let's go:
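If the receiver is attached via gpsd, one way to pull those values is to read gpsd's JSON stream. A minimal sketch, assuming gpspipe and jq are installed and that gpsd emits SKY reports:

    # grab a few JSON reports from gpsd and print satellites seen/used plus the DOP values
    gpspipe -w -n 15 | grep -m1 '"class":"SKY"' \
      | jq '{seen: (.satellites | length), used: ([.satellites[] | select(.used)] | length), hdop, vdop, pdop}'

The fix itself (2D/3D) is reported in gpsd's TPV messages as the "mode" field and can be extracted the same way.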
Now that you’re monitoring the Linux operating system as well as the NTP server basics, it’s interesting to have a look at some more details about the DCF77 receiver. Honestly, there is only one more variable that gives a few details, namely the Clock Status Word and its Event Field. At least you have one more graph in your monitoring system. ;)
Wherever you’re running an NTP server: It is really interesting to see how many clients are using it. Either at home, in your company or worldwide at the NTP Pool Project. The problem is that ntp itself does not give you this answer of how many clients it serves. There are the “monstats” and “mrulist” queries but they are not reliable at all since they are not made for this. Hence I had to take another path in order to count NTP clients for my stratum 1 NTP servers. Let’s dig in:
Now that you have your own NTP servers up and running (such as some Raspberry Pis with external DCF77 or GPS time sources), you should monitor them appropriately, that is: at least their offset, jitter, and reach. From an operational/security perspective, it is always good to have some historical graphs that show how a service behaves under normal circumstances, so that you can easily spot a problem in case one occurs. With this post I am showing how to monitor your NTP servers for offset, jitter, reach, and traffic aka "NTP packets sent/received".
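Whatever monitoring system you feed, the raw values can be pulled from ntpq. A minimal sketch that grabs offset, jitter, and reach of the currently selected peer, plus the packet counters (the iostats command requires a reasonably recent ntpq):

    # offset (ms), jitter (ms), and reach (octal) of the peer marked with '*'
    ntpq -pn | awk '/^\*/ { print "offset="$9, "jitter="$10, "reach="$7 }'
    # sent/received packet counters of the local ntpd instance
    ntpq -c iostats localhost

Parsing these values in a small script and handing them to your monitoring tool gives you exactly the historical graphs mentioned above.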
During my work with a couple of NTP servers, I had many situations in which I just wanted to know whether an NTP server was up and running or not. For this purpose, I used two small Linux tools that do almost the same thing: a single CLI command that does not actually update any clock but only displays the result. That is: ntpdate & sntp. Of course, support for IPv6 was mandatory, as was the ability to test NTP authentication.
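Typical invocations look like this (the hostname is a placeholder; -q keeps ntpdate from setting the clock):

    # query only, show offset/delay, do not touch the local clock
    ntpdate -q ntp.example.com
    # same idea via SNTP; add -4/-6 to force an address family
    sntp ntp.example.com
    sntp -6 ntp.example.com

Authentication can be tested on both tools via the -a (key number) and -k (key file) options.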
What failover times do you expect from a network security device that claims to have high availability? 1 ms? Or at least <1 second? Not so for a fully featured Infoblox HA cluster which takes about 1-2 minutes, depending on its configuration. Yep. “Works as designed”. Ouch. Some details:
Continue reading Infoblox Failover Debacle (Works as Designed)
I have already published a few examples of how you can use layer four traceroutes to pass firewall policies that block ping but allow some well-known ports such as 80 or 443. Long story short: using TCP SYN packets on an open firewall port with the TTL trick will probably succeed where a classical traceroute based on ICMP echo-requests fails.
Another nice use case for layer four traceroutes is the recognition of policy-based routes within your own network (or even beyond). That is: depending on the TCP/UDP port used for the traceroute, you can reveal which paths your packets take over the network. This is quite useful compared to classical traceroutes, which only reveal the straightforward routing tables but not the policy-based ones.
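As a quick illustration (hostname is a placeholder): run lft twice with different destination ports and compare the hops; if the paths differ, a policy-based route is in play.

    # layer four traceroute via TCP SYN, once to port 80 and once to port 443
    lft -d 80 server.example.net
    lft -d 443 server.example.net

The -d option sets the destination port, so you can probe exactly the ports your policy routes match on.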
Continue reading Discovering Policy-Based Routes with Layer 4 Traceroutes (LFT)