We have run into an annoying situation: a hardware-dependent limit on the number of user groups on a Palo Alto Networks Next-Generation Firewall. That is: we cannot sync any more Active Directory groups to our firewalls. The weird thing about it: we don’t actually need that many synced groups on our Palo, but we have to sync them anyway since our users are organized in nested groups. Palo Alto does not resolve nested groups out of the box; it needs all intermediary groups to retrieve the users, which results in a large number of otherwise unnecessary groups.
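To get an idea of how many groups are actually synced (the exact limit depends on the hardware platform), the PAN-OS CLI can list them:

```
> show user group list                  # every group currently retrieved via LDAP
> show user group-mapping state all     # status of the group-mapping profiles
```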
For whatever reason, I had a Palo Alto Networks cluster that was not able to sync. A manual sync was not working, nor did a (sequential) reboot of both devices help. Finally, PAN support told me to “Export device state” on the active unit, import it on the passive one, make some changes, and commit. Indeed, this fixed it. Some more details:
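For reference, both steps can also be triggered from the CLI (host and path below are placeholders; I believe the device-state export is available via scp as well, though the GUI way is Device > Setup > Operations):

```
> request high-availability sync-to-remote running-config    # the manual sync
> scp export device-state to admin@192.0.2.99:/tmp/          # on the active unit
```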
I have been using Philips Hue lamps for several years. Of course not only lamps, but also relays, smart plugs, all kinds of switches and push-buttons, as well as Hue Labs, routines, the IFTTT integration, and so on. As a result, I unfortunately hit 100 % of the available rules with just 30 lamps (out of the advertised 50). Ok, this was already described on Hueblog quite a while ago.
Fine, I had to bite that bullet, so I bought a second Hue Bridge and that should have been it. Or so I thought. Unfortunately, the integration of a second bridge is anything but good:
The NTP Pool is a volunteer organization that provides time synchronization service to hundreds of millions of computers worldwide. A typical client might query a particular NTP Pool server ~10-60 times/hour. Wikipedia lists some abusive clients that far exceeded the normal rate. This wastes NTP server resources, may interfere with other clients, and can trigger DDoS protections. In late 2019, a software update made some FortiGate firewalls very unfriendly to the NTP Pool.
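For comparison, a well-behaved ntpd client uses the pool directive with default polling intervals (64-1024 s, i.e., roughly 4-56 queries per hour per association):

```
# /etc/ntp.conf -- a polite NTP Pool client; ntpd polls every 64-1024 s by default
pool 2.pool.ntp.org iburst
```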
I did a short presentation at the spring 2020 roundtable of the UK IPv6 Council. The talk was about a case study I did with my NTP server listed in the NTP Pool project: for 66 days I captured all NTP requests over IPv6 and legacy IP while analyzing the returning ICMPv6/ICMPv4 error messages. (A much longer period than my initial 24-hour capture.) Following are my presentation slides along with the results.
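The capture itself doesn’t need more than tcpdump, filtering on the NTP port plus any returning ICMP/ICMPv6 errors (interface and file names are placeholders):

```
# capture NTP (UDP/123) and all ICMP/ICMPv6 messages coming back
tcpdump -i eth0 -w ntp-study.pcap 'udp port 123 or icmp or icmp6'
```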
If you’re into DNSSEC, you’ll probably have to troubleshoot it, or at least verify it. While there are some good online tools such as DNSViz, there is also a command-line tool to test DNSSEC validation on-site: delv.
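A minimal example (the queried name is a placeholder, output abbreviated): delv prints “; fully validated” when the chain of trust is intact, and +rtrace additionally shows the lookups performed during validation:

```
$ delv example.com A
; fully validated
example.com.   86400  IN  A      93.184.216.34
example.com.   86400  IN  RRSIG  A 8 2 86400 [...]

$ delv example.com A +rtrace     # also show the fetched records (DNSKEY, DS, ...)
```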
During my analysis of NTP and its traffic to my NTP servers listed in the NTP Pool Project, I discovered many ICMP error messages coming back to my servers, such as port unreachable, address unreachable, time exceeded, or administratively prohibited. Strange. In summary, more than 3 % of IPv6-enabled NTP clients failed to get answers from my servers. Let’s have a closer look:
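To pull those errors out of a capture, display filters on the ICMP error types do the trick; a sketch with tshark (the file name is a placeholder):

```
# ICMPv6: destination unreachable (type 1, code 4 = port unreachable,
# code 3 = address unreachable, code 1 = admin prohibited) and time exceeded (type 3)
tshark -r ntp-study.pcap -Y 'icmpv6.type == 1 || icmpv6.type == 3'
# legacy IP equivalents: ICMP destination unreachable (3) and time exceeded (11)
tshark -r ntp-study.pcap -Y 'icmp.type == 3 || icmp.type == 11'
```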
Of course, you should use dual-stack networks for almost everything on the Internet. Or even better: IPv6-only with DNS64/NAT64 and so on. ;) Unfortunately, not every site has native IPv6 support yet. However, we can simply use the IPv6 Tunnel Broker from Hurricane Electric to overcome this hopefully temporary issue.
Well, wait… Not when using a Palo Alto Networks firewall, which lacks 6in4 tunnel support. Sigh. Here’s my workaround:
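One way to sketch the underlying idea (not necessarily the exact setup; all addresses below are placeholders, take the real values from the tunnel details page at tunnelbroker.net): terminate the 6in4 tunnel on a Linux host behind the Palo, which then only has to permit IP protocol 41 between that host and the HE tunnel server:

```
# 6in4 tunnel to Hurricane Electric on a Linux host (placeholder addresses)
ip tunnel add he-ipv6 mode sit remote 203.0.113.1 local 192.0.2.10 ttl 255
ip link set he-ipv6 up
ip addr add 2001:db8:1f0a::2/64 dev he-ipv6   # client IPv6 address from HE
ip route add ::/0 dev he-ipv6                 # default route into the tunnel
```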
Yes, ScreenOS is end-of-everything (EoE), but for historical reasons I still have some of them in my lab. ;D They simply work and have lots of IPv6 features such as DHCPv6-PD. However, using IPv6-only NTP servers is beyond their possibilities. :(
Anyway, I tried using NTP authentication with legacy IP. Unfortunately, I had some issues with it. Not only does ScreenOS support MD5 rather than SHA-1, but the MD5 key is also limited to 16 characters in length. Strange, since ntp-keygen generates 20 ASCII characters per key by default. Let’s have a look:
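To see it yourself (a sketch; the key file name depends on hostname and timestamp), and, as a workaround for the ScreenOS limit, shorten the key to 16 characters on *both* peers:

```
$ ntp-keygen -M
# -> writes ntpkey_MD5key_<hostname>.<timestamp> with entries like:
#    1 MD5 4VL'dSt@0e+qw}CFhK%(        <- 20 printable ASCII characters
# ScreenOS accepts at most 16, so truncate the key on both sides, e.g.:
#    1 MD5 4VL'dSt@0e+qw}CF
```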
I initially wanted to show how to use NTP authentication on a Pulse Connect Secure. Unfortunately, it does not support NTP over IPv6, which is mandatory for my lab. Ok, after I calmed down a bit, I configured it with legacy IP and got NTP authentication running. ;) Here’s how:
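For completeness, the matching ntpd server side (key ID, algorithm, and secret below are placeholders; pick whatever the Pulse box offers): the key file entry and the trustedkey statement must use the same key ID that is configured on the Pulse Connect Secure:

```
# /etc/ntp.keys  --  format: <key-id> <type> <secret>
1 MD5 ntpsecret123

# /etc/ntp.conf
keys /etc/ntp.keys
trustedkey 1
```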
A security device such as a firewall should rely on NTP authentication to fend off NTP spoofing attacks. Therefore I am using NTP authentication on the FortiGate as well. As always, this so-called next-generation firewall has a very limited GUI, so you need to configure all the details through the CLI. I hate it, but that’s the way Fortinet does it. Furthermore, the “set authentication” command is hidden unless you downgrade to NTPv3 (?!?), and it only supports MD5 rather than SHA-1. Not that “next-generation”!
Finally, you have no way of knowing whether NTP authentication is working or not. I intentionally misconfigured some of my NTP keys, which didn’t change anything in the NTP synchronization process, although it should not have worked at all. Fail!
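For reference, this is roughly what the CLI dance looks like (server address and key are placeholders); note that set ntpv3 enable has to come first, otherwise the authentication commands stay hidden:

```
config system ntp
    set ntpsync enable
    set type custom
    config ntpserver
        edit 1
            set server "192.0.2.123"
            set ntpv3 enable            # downgrade to NTPv3 ...
            set authentication enable   # ... or this command is not even visible
            set key-id 1
            set key ntpsecret123        # MD5 only, no SHA-1
        next
    end
end
```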
What failover times do you expect from a network security device that claims to offer high availability? 1 ms? Or at least < 1 second? Not so for a fully featured Infoblox HA cluster, which takes about 1-2 minutes, depending on its configuration. Yep. “Works as designed”. Ouch. Some details:
I got an email in which someone asked whether I knew how to change the link-local IPv6 address on a FortiGate, as you can on almost any other network/firewall device. He could not find anything about it in the Fortinet documentation, nor on Google.
Well, I could not find anything either. What’s up? It’s not new to me that you cannot really configure IPv6 in the FortiGate GUI, but even on the CLI I couldn’t find anything about changing this link-local IPv6 address from the default EUI-64 based one to a manually assigned one. Hence I opened a ticket with Fortinet. It turned out that you cannot *change* this address at all; instead, you must *add* another LL address, which will only be used for the router advertisements (RAs) after a reboot (!) of the firewall. Stupid design!
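For illustration only, a sketch of what this might look like on the CLI; the exact syntax is an assumption on my part (I’m using the ip6-extra-addr table here), so verify it against your FortiOS version:

```
config system interface
    edit "port1"
        config ipv6
            # assumption: the extra-address table accepts a link-local prefix;
            # the new fe80:: address is only used for RAs after a reboot
            config ip6-extra-addr
                edit fe80::1/64
                next
            end
        end
    next
end
```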
If you’re following my blog you probably know that I am using IPv6 everywhere. Everything in my lab is dual-stacked if not already IPv6-only. Great so far.
A few months ago my lab moved to another ISP, which required changing all IP addresses (since I don’t have PI space yet). Oh boy! While it was almost no problem to change the legacy IPv4 addresses (only a few NATs), it was a huge pain in the … to change the complete infrastructure with its global unicast IPv6 addresses. It turned out that changing the interface IPv6 addresses was merely the first step; the many modifications to different services were the actual problem. And this was *only* my lab, not a complex company network or the like.
Below you’ll find a list of the changes I made for IPv6 and for legacy IP. Just an overview to give you an idea of the differences and stumbling blocks.
In some situations, you want to manage your firewall only from a dedicated management network and not through any of the data interfaces. For example, when you’re running an internal data center with no Internet access at all, but your firewalls must still be able to get updates from the Internet. In those situations, you need a real out-of-band (OoB) management interface from which all management traffic (DNS, NTP, syslog, updates, RADIUS, …) is sourced and to which the admins can connect via SSH/HTTPS. Another example is a strict separation of data and management traffic: some customers want any kind of management traffic to traverse different routing/firewall devices than their production traffic.
Unfortunately, the Fortinet FortiGate firewalls don’t have a reasonable management port. Their so-called “MGMT” port can only restrict incoming management access but does not source outgoing traffic by default. Furthermore, in an HA environment you need multiple ports to access the firewalls independently. What a mess. (Little exception: you can use the set ha-direct enable option in the HA setup, which sources *some* but not all protocols from the MGMT interface. But only when you’re running an HA scenario. Reference.)
A functional workaround is to add another VDOM solely for management. From this VDOM, all management traffic is sourced. To have access to all firewalls in a high availability environment, a second (!) interface within this management VDOM is necessary. Here we go:
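In FortiOS terms, the skeleton looks roughly like this (a sketch with placeholder VDOM/port names and addresses; VDOM mode must already be enabled, and everything below is entered from the global context): create the VDOM, declare it the management VDOM so that all locally sourced traffic (DNS, NTP, syslog, updates, …) leaves from there, and move the dedicated interfaces into it:

```
config system vdom
    edit "mgmt-vdom"                       # VDOM used solely for management
    next
end

config system global
    set management-vdom "mgmt-vdom"        # source DNS/NTP/syslog/updates here
end

config system interface
    edit "port9"                           # management uplink
        set vdom "mgmt-vdom"
        set ip 192.0.2.10 255.255.255.0
        set allowaccess ping https ssh
    next
    edit "port10"                          # second interface: per-unit access in HA
        set vdom "mgmt-vdom"
        set ip 192.0.2.11 255.255.255.0
        set allowaccess ping https ssh
    next
end
```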