Office net seems slow
thanks to bootleg film swapping.
Let’s stop that right quick!
The previous chapter covered the basics of the OpenBSD packet filter pf(4). But, as I mentioned, PF can manipulate packets in all kinds of ways beyond just permitting or denying them, including the following:
You can dynamically change the list of addresses to pass or block through outside software, such as dhcpd(8) or spamd(8).
You can dynamically create sub-rulesets that let you set up very specific rules for troublesome protocols without allowing more access than necessary.
PF can provide NAT, letting you offer an entire network Internet access without public IP addresses.
You can redirect incoming traffic arbitrarily, and control how much bandwidth you will let a service use.
You can use PF logging.
This chapter covers each of these topics.
A table is a collection of IPv4 and/or IPv6 addresses, much like a list. A table is faster than a list, however, and uses less memory. If you have only a few addresses, using a list is fine, but once you have more than a few, use a table.
Interestingly, you can edit tables without reloading the filter rules, and several programs use this feature to dynamically change how a server behaves. Some people load lists of malware-laden computers into a table to block those hosts, or use external programs to generate such lists. (“You’ve tried to send us four invalid emails in a row? Good-bye!”) Tables can be kept permanently in external files, or you can treat them as ephemera. It’s your choice.
You can create and manipulate tables entirely with pfctl, but that’s not as common as defining the table within pf.conf. Give the table name in angle brackets, and provide the initial members delimited by commas inside braces.
table <management> {192.0.2.5, 192.0.2.8, 192.0.2.81}
In this case, the management table contains three IP addresses.
If you want to define a table that pfctl cannot change, use the const keyword. The following example defines a table for private (RFC 1918) address space. This address space has been well defined for many years, so no one should alter it.
table <private> const {10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16}
If no rules reference a table, PF drops it. This makes sense for static rules, but if you’re using anchors (discussed later in this chapter), you might want to retain the table for when rules reappear. Use the persist keyword to make a table stick around even if it’s not used in a rule.
table <scumbags> persist
Some tables contain enough addresses that you wouldn’t want to list them in your configuration. For convenience, you can populate a table from a file, like this:
table <fullbogons> persist file "/etc/fullbogons.txt"
I have a script that updates the fullbogons.txt file every day. (Bogons are addresses that should never appear in the global Internet routing table.)
The bogons list includes private address space, addresses reserved for experimentation or documentation, addresses not assigned to any network, and addresses assigned to other exotic purposes. Several organizations produce and update full bogon lists. I use the bogons list at my border to weed out obvious garbage. The file looks like this:
# last updated 1352220481 (Tue Nov 6 16:48:01 2012 GMT)
0.0.0.0/8
10.0.0.0/8
14.1.96.0/19
…
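My update script isn’t anything fancy. A minimal sketch of such a daily job, run from cron, might look like the following; the download URL is only a placeholder for whichever bogons source you trust, and it relies on the -T replace subcommand described below.
#!/bin/sh
# sketch only: fetch today's full bogons list (placeholder URL)
ftp -o /etc/fullbogons.txt.new https://bogons.example.com/fullbogons-ipv4.txt || exit 1
mv /etc/fullbogons.txt.new /etc/fullbogons.txt
# swap the new list into the running table without reloading the ruleset
pfctl -t fullbogons -T replace -f /etc/fullbogons.txt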
You can include individual addresses, but not dotted-quad netmasks. You can use hostnames, but before pfctl feeds the rules to the kernel, it resolves each hostname into its IP address or addresses. This means that if a host changes its IP address after you load the rules, PF will not know about the new IP address.
Use the table in your firewall rules exactly as you would use an address or list.
block in on egress from <fullbogons> to any
You can put multiple tables in a list.
block in on egress from {<fullbogons>, <scumbags>} to any
Yes, a list is slower than a table. But if you maintain two different tables in different ways, you probably want those tables separated. And if a list of two items triggers firewall exhaustion, you really need more hardware.
Tables have their own subset of pfctl commands. To see which tables are in the kernel, use pfctl -s Tables. (Note that Tables begins with a capital T.)
# pfctl -s Tables
fullbogons
scumbags
Why would you need to ask the kernel what tables it has? Because dynamic rules can add and remove tables, as discussed in Anchors.
If you already know the table name, and you want to view the addresses within the table, use the -t argument to specify a table name. The -T argument has several subcommands, much like -s, but is for table operations. Here’s how to examine the contents of the scumbags table:
# pfctl -t scumbags -T show
157.166.248.10
157.166.248.11
157.166.249.10
157.166.249.11
For many table operations (add, delete, replace, and test as of right now), you can add one or two -v options before the -T to increase verbosity. If you work on multiple addresses simultaneously, adding verbosity shows details of what the command did.
You can eyeball a table with four entries pretty easily, but if a table has thousands of entries, you won’t want to page through it searching for an address. You could use grep(1), but that can fail because an address might be part of a network that looks completely different. (I’m sure I could write a grep expression that matches 10.0.0.0/8 if I enter 10.99.61.4, but I don’t want to try it.) You can test an address to see if it’s in a table.
# pfctl -t fullbogons -T test 192.0.2.88
1/1 addresses match.
This address appears in the fullbogons table.
If you test multiple addresses in one command, use -v or -vv before -T to see which addresses match and which don’t.
# pfctl -t scumbags -vvT test 192.0.2.88 198.51.100.90
1/2 addresses match.
M 192.0.2.88 192.0.0.0/22
198.51.100.90 nomatch
Using a single -v shows only matching addresses.
One important feature of tables is that you can dynamically alter them without reloading the firewall rules. If you must add an address to a table, use -T’s add command.
# pfctl -t scumbags -T add 192.0.2.88
1/1 addresses added.
Add networks by specifying a netmask, and add multiple entries with a single command.
# pfctl -t scumbags -T add 198.51.100.0/24 2001:db8::/32
2/2 addresses added.
If you add addresses to a nonexistent table, PF automatically creates the table (so now you know where that scumbags table came from).
Add all the addresses in a file to a table with the -f argument.
# pfctl -t scumbags -T add -f scumbags.txt
1/1 addresses added.
To remove addresses, use the delete command.
# pfctl -t scumbags -T delete 198.51.100.0/24
1/1 addresses deleted.
To completely remove all entries from a table, use flush.
# pfctl -t scumbags -T flush
6 addresses deleted.
If emptying the table is not enough, and you want to completely remove it from the rules, use kill.
# pfctl -t scumbags -T kill
1 table deleted.
OpenBSD includes software that can adjust tables algorithmically. In Chapter 16, I mentioned the DHCP server’s ability to assign leased, abandoned, and changed addresses to tables. You can use PF to assign different rules to each group of addresses.
Assume you have dhcpd(8) add all leased IP addresses to the leased table, abandoned addresses to the abandoned table, and changed addresses to the changed table. Hosts with properly leased addresses can access the network, but hosts with abandoned and changed addresses cannot. Here, interfaces in the lan group face the local network:
table <leased> persist
table <abandoned> persist
table <changed> persist
pass in on lan from <leased> to any
block in on lan from {<abandoned>, <changed>} to any
If someone decides to configure an address from the DHCP server as a static address for their computer, they automatically lose access to the rest of the network—problem solved. Other OpenBSD software, such as spamd(8), has similar features.
At first glance, it might seem like this feature is ready for integration with other programs. It’s fairly simple to write a script that parses a log, grabs the IP addresses, and feeds those addresses to a table. Several years ago, I wrote a script to take alerts from the Snort intrusion detection system and automatically block attackers from the network. Without careful and skilled attention though, Snort generates many false positives. My autoblocking script very effectively created a denial-of-service attack against my own development team.
Be careful with automatically feeding PF tables to block traffic. It’s very easy to harm desirable connectivity.
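If you do automate blocking, at least expire old entries so that a false positive doesn’t stay blocked forever. pfctl can delete table entries by age; here’s a cron-driven sketch, assuming an automatically fed table named scumbags and a 24-hour (86400-second) lifetime.
# delete scumbags entries added more than 24 hours ago
pfctl -t scumbags -T expire 86400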
One of the critical functions of a firewall is NAT. Use NAT to provide IPv4 network access to multiple machines but show only one public IPv4 address. Some companies provide Internet access to thousands and thousands of machines via NAT.
NAT is like making soup out of a bone—it stretches what you have so that it covers more. Some protocols won’t work well with NAT. It really confuses anyone who is trying to restrict access by IP address. And it can cause nightmares for network forensics and troubleshooters. But NAT is the chosen solution for the IPv4 address shortage.
NAT is not intended as a security mechanism. There are minor security benefits, but they are inadequate against today’s network threats. Relying on NAT for security is chasing 10 boilermakers with a cup of black coffee before staggering out of the pub to drive home. You might get away with it, but only by luck.
IPv6 was designed without NAT, but it was shoehorned in several years later by popular demand. (IPv4 was originally designed without NAT as well, so IPv6 is just following tradition.) Note that an IPv6 address—even a globally unique IPv6 address—does not mean or even imply “reachable from the world.” You can have solid network separation without NAT. Avoiding NAT means using your packet filter to protect your machines, with additional application proxies as needed.
In theory, you can use any addresses behind your NAT device. If you use some random IP addresses, though, you cannot exchange packets with whoever uses those IP addresses out in the real world. It’s highly advisable to use some of the IP addresses reserved for private use, generally referred to as “RFC 1918 addresses.” These include the following IP addresses:
10.0.0.0/8 (10.0.0.0-10.255.255.255)
172.16.0.0/12 (172.16.0.0-172.31.255.255)
192.168.0.0/16 (192.168.0.0-192.168.255.255)
You can subnet and rearrange those addresses any way you like, as long as you don’t try to route them on the public Internet.
You can use other IP addresses behind your NAT if you have a really good reason for doing so. For example, RFC 5737 defines IPv4 addresses for use in documentation. Like RFC 1918 addresses, RFC 5737 addresses should never appear on the public Internet. I write documentation, so I use those addresses on my home and test networks. It saves me from doing search and replace as I write books.[48] There’s still no chance of those addresses appearing on other networks.
Perhaps the most common form of NAT is for use in hiding a small network behind a single IP address. You’ll find this in many homes and small businesses. Very few home offices have internal routing and multiple subnets. For this example, I have two interface groups: the Internet-facing egress group and the lan group attached to my office.
pass out on egress from lan:network to any nat-to egress
The first part of this rule looks just like any other firewall rule permitting the addresses on the lan interface access to everywhere, but the last two words additionally configure NAT. The nat-to keyword tells PF to translate addresses. The egress that follows tells PF to hide the internal addresses behind the addresses of the egress interfaces. You could use an interface name or a specific IP address here, but if you do, you must change your filter rules when you change your server.
In order to have PF recognize IP address changes from DHCP, put the interface group name in parentheses.
pass out on egress from lan:network to any nat-to (egress)
Now load your firewall rules, enable IP forwarding, and suddenly, hosts on your LAN will have access to the Internet through the firewall’s public address.
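On OpenBSD, that amounts to something like the following; add net.inet.ip.forwarding=1 to /etc/sysctl.conf so forwarding survives a reboot.
# pfctl -f /etc/pf.conf
# sysctl net.inet.ip.forwarding=1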
The easiest way to understand how address translation works is to look at the state table (discussed in the previous chapter) after PF passes translated packets back and forth. On the office network from machine 192.0.2.2, I ran this command:
$ ping www.michaelwlucas.com
Several pings later, I checked the state table and found entries like this:
# pfctl -ss | grep 192.0.2.2
all udp 203.0.113.5:55797 (192.0.2.2:10853) -> 203.0.113.15:53 MULTIPLE:SINGLE
all icmp 203.0.113.5:8813 (192.0.2.2:41584) -> 198.22.63.8:8 0:0
The first state represents a UDP connection from the firewall’s public address (203.0.113.5) to the local DNS server (203.0.113.15). This state entry includes the client’s private IP address (192.0.2.2), as well as the actual ports used by the client, the firewall, and the DNS server.
The client initiated this state by sending a request from port 10853 on its IP address to port 53 on the DNS server. When the packet passed through PF, OpenBSD rewrote the packet so that it appeared to come from the address 203.0.113.5 on port 55797 and sent it on to the DNS server. The DNS server sent its response to the firewall’s public IP on port 55797. When the reply arrived, the firewall checked the state table, and found that UDP packets on port 55797 were part of the state for the client. PF rewrote the packet’s destination address and forwarded it to the client.
The second state represents an ICMP connection. The state table encodes the various ICMP codes used for a ping request as port numbers, and forwards responses back to the client based on that information. Otherwise, it’s very similar to the DNS example above it.
In other words, NAT works by lying. PF lies to the client, telling it that it has direct access to the public Internet. It lies to the external servers, giving a false source address and port for client connections. PF uses the state table to track its lies and keep everything consistent. These lies are convenient for IPv4 address conservation, but they’re exactly why address translation complicates troubleshooting and intrusion forensics.
Now that you understand the basics of NAT, let’s tell the network even more complicated and interesting lies.
You can use several public IP addresses for address translation. If you use an interface group for the external address in your NAT rule, any addresses in that interface group can become the public address of any connection. If you want to be specific, list particular addresses.
pass out on egress from lan:network to any nat-to 203.0.113.5
I use this configuration when my firewall’s external interface has multiple IP addresses and I want to conceal my desktop clients behind a single address (although I probably would define and use a macro for the external address).
But how many public addresses do you need? The answer depends on your clients.
Port numbers range from 0 to 65535. The bottom 1024 ports are generally used for services on the localhost. Not all of those ports will be used on the localhost, but a packet filter generally won’t use those ports for translated connections. I’m lazy, so I’ll round off to 64,000 free ports.
Even the most heavily loaded desktop client rarely can use as many as 100 outbound connections simultaneously. Most will use far fewer, but again, I’m lazy, and I want a worst-case scenario, so I’ll call it 100.
One IP address can support 64,000 / 100 = 640 machines being pathological simultaneously. Realistically, each client might have 10 simultaneous outbound connections, so a public address could support 6,400 simultaneous clients. How many of your users browse the Internet at the same time? The answer probably is not many. And if you have thousands of users, you would probably benefit from implementing a caching proxy, which would greatly reduce the number of connections.
If you’re concerned about overflowing the number of client machines for one address, watch your state table. Until you have multiple tens of thousands of states for one public IP address, don’t worry.
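A rough way to watch this is to count the states that mention your public address, assuming 203.0.113.5 is that address.
# pfctl -ss | grep -c '203.0.113.5:'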
Specifying individual addresses in a NAT rule is most useful for bidirectional NAT.
Some applications work better if you dedicate a public IP address as the NAT address for a specific private IP address. For example, if you have a server that offers several different services on different ports, and you want to put it behind your firewall, you might want to dedicate a single address to it. This is called bidirectional, one-to-one, or static NAT. OpenBSD docs use “bidirectional,” but the terms all mean the same thing.
Configure bidirectional NAT with the binat-to keyword.
pass on lan from 192.0.2.65 to any binat-to 203.0.113.6
PF dedicates the public IP address 203.0.113.6 for NAT services for the private IP address 192.0.2.65.
If you use bidirectional NAT, be sure to specify a specific IP address for your general NAT rule as well. Consider the following NAT rules:
pass out log on egress from lan:network to any nat-to egress
pass on lan from 192.0.2.2 to any binat-to 203.0.113.6
The IP addresses on this LAN are hidden behind the IP addresses on the egress interface. If 203.0.113.6 is an address on an egress interface, outbound packets from the LAN might use it as a source address.
When I need bidirectional NAT, I usually write my NAT rules like this:
mainnat="203.0.113.5"
servernat="203.0.113.6"
pass out log on egress from lan:network to any nat-to $mainnat
pass on lan from 192.0.2.2 to any binat-to $servernat
In this way, packets leaving my network are unambiguously translated. Only the one specific server uses the IP address 203.0.113.6; all other hosts on my local network use 203.0.113.5. If I change IP addresses, I must reconfigure pf.conf, but that’s a minor annoyance compared to troubleshooting network ambiguity.
The use of bidirectional NAT, and allowing the redirection of connections, lets you give people outside your network access to servers behind your firewall, and every one of these gaps is a potential security hole. If you allow the world access to your web servers, and an intruder compromises one of your servers, you have a compromised machine inside your firewall. The firewall doesn’t really secure the web servers; it just controls who can try to break into them and limits the available attack vectors.
When writing packet-filtering rules for bidirectional NAT, the order in which you list rules is important. Consider the following rules:
pass on lan from 192.0.2.2 to any binat-to 203.0.113.6
pass in on egress proto tcp from any to 192.0.2.2 port 80
The first rule establishes static NAT for the host 192.0.2.2 on the LAN, hiding it behind the public IP address 203.0.113.6. All is well and good. The second line permits connections to port 80 on the same host, or does it? Packets meant for this server that arrive on the firewall’s egress interface won’t be addressed to 192.0.2.2; they’ll be addressed to the public NAT address, or 203.0.113.6. They won’t match this rule, so they are discarded.
In order to permit connections from the world to the web server behind this firewall, permit packets sent to the proper port on the public address.
pass on lan from 192.0.2.2 to any binat-to 203.0.113.6
pass in on egress proto tcp from any to 203.0.113.6 port 80
This translates 192.0.2.2 to the public address 203.0.113.6, and then allows packets with a destination of port 80 on 203.0.113.6 to pass. You’ll see this in the state table, like this:
all tcp 203.0.113.6:80 <- 198.22.63.8:64791 ESTABLISHED:ESTABLISHED
The host 198.22.63.8 has connected to the server’s public IP address on port 80.
Why doesn’t this state entry have the hidden IP address in it? Because this is a bidirectional NAT. PF can send port numbers through unaltered, so it can track a little less information in the state table.
The tricky thing here is that the rule order impacts how you filter, and you must read your filtering rules carefully to see how address translation interacts with packet filtering. I always write my rules so that I do address translation before I filter. I consistently use the public IP address in the filter rules, but sometimes that’s not practical. PF lets you write arbitrarily complex rules mainly because the real world is arbitrarily complex. If you have trouble passing traffic through NAT, read your rules very carefully.
To see a bidirectional NAT, look at the loaded rules.
# pfctl -sr
…
pass out on lan inet from 192.0.2.2 to any flags S/SA nat-to 203.0.113.6 static-port
pass in on lan inet from any to 203.0.113.6 flags S/SA rdr-to 192.0.2.2
pass on egress inet proto tcp from any to 203.0.113.6 port = 80 flags S/SA
The first rule gives the private IP address access to the public Internet, translated to the specific IP address. The third rule passes traffic to the translated address.
But what about the second rule, with that rdr-to stuff? That’s a redirection, which is how PF implements static NAT.
Bidirectional NAT is actually a combination of address translation and redirection; in other words, it twists a connection intended for one IP or port to another. In bidirectional NAT, all connections to the designated public IP address are redirected to a different IP address. Sometimes you don’t want to twist all traffic for an IP address—only a few ports. Sometimes you want to redirect one port one way, but a different port elsewhere. Do this with redirection rules.
Suppose you have one public IP address: 203.0.113.5. You want port 80 on that IP address routed to your web server at 192.0.2.2, ports 25 and 110 to your mail server at 192.0.2.3, and port 443 to your e-commerce server at 192.0.2.4. PF lets you choose where to send each port via redirection by using a standard packet-filtering rule and adding the rdr-to redirection keyword.
pass in on egress proto tcp from any to egress port 80 rdr-to 192.0.2.2
pass in on egress proto tcp from any to egress port {25, 110} rdr-to 192.0.2.3
pass in on egress proto tcp from any to egress port 443 rdr-to 192.0.2.4
These rules declare that any connection coming to the egress interface group (the interface facing the public Internet, with a default route going over it) can be redirected in three different ways. The first rule directs port 80 requests to one internal server. The second rule directs requests for ports 25 and 110 to the second server. The last rule redirects requests for port 443 to the third server. One public IP address is now providing services to the world from three different servers.
All port redirection rules must include a protocol, because specifying a TCP/IP port works only if you’re forwarding a protocol that includes port numbers, such as TCP or UDP. If you want to forward both TCP and UDP ports, you must specify both protocols. For example, DNS uses port 53 on both TCP and UDP. Here’s a rule that forwards both of these protocols’ port 53 to the internal server 192.0.2.5:
pass in on egress proto {tcp, udp} from any to egress port 53 rdr-to 192.0.2.5
Pick a port, say where you want it to go, and PF will redirect it as you please.
All of the preceding discussion makes sense when you have only one public IP address. But what happens when you have multiple addresses?
Remember that using an interface group in pf.conf tells pfctl to create a matching rule for every IP address in the interface group. Suppose you have three IP addresses on your egress interface: 203.0.113.5, 203.0.113.6, and 203.0.113.7. You write this pf.conf rule:
pass in on egress proto tcp from any to egress port 80 rdr-to 192.0.2.2
Load this rule into the kernel with pfctl, and what do you get?
# pfctl -sr
…
pass in on egress inet proto tcp from any to 203.0.113.5 port = 80 flags S/SA rdr-to 192.0.2.2
pass in on egress inet proto tcp from any to 203.0.113.6 port = 80 flags S/SA rdr-to 192.0.2.2
pass in on egress inet proto tcp from any to 203.0.113.7 port = 80 flags S/SA rdr-to 192.0.2.2
Any connection to port 80 on any of these IP addresses is directed to port 80 on the same server. This might be useful in some environments, but that’s not what most of us want. If you have multiple IP addresses, and you want to redirect a port on only one IP address, you must specify the interface name and the public IP address.
pass in on em0 proto tcp from any to 203.0.113.5 port 80 rdr-to 192.0.2.2
This doesn’t expand; it doesn’t have any interface groups, lists of addresses, variables, or macros. When pfctl parses this, it loads only one PF rule into the kernel.
As you redirect ports from one machine to another, you can change the port. The following example takes requests to TCP port 2222 on the firewall and redirects them to port 22 on a machine inside the firewall.
pass in on egress proto tcp from any to egress port 2222 rdr-to 192.0.2.2 port 22
This is a reasonable way to offer SSH services to several machines inside the firewall on only one IP address, and to give each machine its own port.
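For example, here’s a sketch that exposes SSH on two internal machines through two different public ports; the internal addresses are only illustrative.
pass in on egress proto tcp from any to egress port 2222 rdr-to 192.0.2.2 port 22
pass in on egress proto tcp from any to egress port 2223 rdr-to 192.0.2.3 port 22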
If you have specific source addresses that you want to abuse, you can give them special port redirections by source IP address.
pass in on egress proto tcp from 198.51.100.0/24 to egress port 80 rdr-to 192.0.2.2
pass in on egress proto tcp from ! 198.51.100.0/24 to egress port 80 rdr-to 192.0.2.3
Every HTTP connection from the IP addresses in 198.51.100.0/24 will be redirected to one server, while every other connection will be directed elsewhere. (To redirect connections for many source addresses, use a table for the source address.)
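Such a rule pair might look like this; the table name and its members are placeholders.
table <partners> {198.51.100.0/24, 203.0.113.64/27}
pass in on egress proto tcp from <partners> to egress port 80 rdr-to 192.0.2.2
pass in on egress proto tcp from ! <partners> to egress port 80 rdr-to 192.0.2.3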
PF can also redirect entire ranges of ports using the same logical operators used for filtering ports. One obvious thing to do is to redirect a range of ports to a single machine. NFS is a prime example, as it requires TCP port 111, as well as all TCP and UDP ports from 1024 to 65535.
pass in on egress proto {tcp, udp} from any to egress port {111, 1024:65535} rdr-to 192.0.2.15
Recall from Chapter 21 that a colon between port numbers indicates a range of ports. This rule passes ports 1024 through 65535, inclusive. Admittedly, opening that range is a great big gaping hole in your packet filter, even if certain NFS implementations can be restricted to use either TCP or UDP. But NFS uses random high-numbered ports that come and go very quickly, and cannot be effectively filtered or restricted at the packet level.
You can also funnel an entire range of ports to one port on one machine.
pass in on egress proto tcp from any to egress port {1024:65535} rdr-to 192.0.2.15 port 80
I’ve used this to point random traffic at a web page that says “Go away. You cannot use this service.”
Traffic interception is similar to redirection in that PF intercepts traffic bound for one port and steers it to a port on the local machine. Traffic interception is one way to implement a transparent proxy. Use the divert-to keyword to tell PF to steer any matching packets to a local server.
pass in inet proto tcp from lan:network to any port 80 divert-to 127.0.0.1 port 3129
Any traffic from the local LAN to port 80 will be diverted to port 3129 on the firewall. Port 3129 is usually used by the Squid caching proxy (/usr/ports/www/squid). If you choose to implement a caching proxy like Squid, you’ll probably want to redirect several ports to the cache. (We’ll take a closer look at diverting connections in FTP and PF.)
In PF, an anchor is a sub-ruleset at a specific point in the filter rules that you can change without reloading the rules. It’s a spot marked “insert rules here,” letting you dynamically add and remove filter rules, tables, and other PF configurations.
The most common users of anchors are software programs. Human beings or sysadmins should probably just edit pf.conf and reload the rules.
OpenBSD includes several programs that take advantage of anchors, however, including the FTP proxy ftp-proxy(8), the authenticated firewall access system authpf(8), and the load balancer relayd(8). You could also use anchors to trigger conditional evaluation of rules.
A ruleset with an anchor might look something like the following, where the interface group egress faces the Internet, and the interface group lan faces a small office with the addresses 192.0.2.0/24.
block
pass in on egress from any to 192.0.2.45 port {25, 80}
anchor "antivirus/*"
pass in on lan from 192.0.2.0/27 to any
These rules block all traffic by default. Incoming traffic is allowed to a specific address on ports 25 and 80 because those are the mail and web servers. There’s an anchor in the middle of the rules. I don’t yet know what’s in the antivirus anchor, but any rules in it are processed next. Finally, a small subnet of the addresses is allowed out.
Now let’s add some rules to the anchor.
You can insert rules into anchors from a file, within pf.conf itself, or via pfctl.
Adding rules to an anchor from a file is a good way to initialize your anchor when first starting the packet filter. You can set base rules here that you can expand later. Give the filename in pf.conf.
anchor dhcp
load anchor dhcp from "/etc/pf/dhcp-anchor.conf"
I created an /etc/pf/ directory because I didn’t want to have a whole bunch of PF configuration files scattered throughout /etc. I’m easily confused, after all. This file contains PF rules like this:
block from 192.0.2.192/26 to any
This is one way to load basic rules into an anchor when you start PF.
If you were paying attention, you probably noticed that my first example anchor had a /* after its name. This example doesn’t. I’ll explain why in Nested Anchors: /*.
You can place anchor rules directly inside pf.conf. If you don’t intend to dynamically alter the rules, you don’t even need to name the anchor. Just use curly braces to define the beginning and end of the anchor.
anchor "smtp" on egress {
    pass proto tcp from 192.0.2.12 to any port 25
}
This is just slightly more complicated than the anchors in the default pf.conf.
Why would you want to do this? Read Conditional Filtering.
To dynamically alter anchor rules with pfctl, you need the name of the anchor and the rule you want to put in its place. For example, suppose I want to add a rule to the antivirus anchor in the first anchor example.
# echo "block in from 203.0.113.8 to any" | pfctl -a antivirus -f -
Let’s look at this command slightly backwards. The -a argument to pfctl specifies an anchor name—in this case, the antivirus anchor. The -f argument normally gives a filename that contains the new anchor rule, much like -f when loading a PF ruleset, but rather than a path to a file, I use a single dash that tells pfctl to read the new rule from standard input. I start everything by echoing the rule to be added, and then piping that into pfctl.
Taken as a whole, this adds the rule block in from 203.0.113.8 to any to the anchor antivirus.
You could also write the new rule to a file, and tell pfctl to load the rules from that file into the anchor.
# pfctl -a antivirus -f newrule.conf
If you’re writing rules to a file to load them into an anchor, however, chances are you’re better off editing pf.conf.
Use the pfctl view (-s), flush (-F), and load (-f) commands on anchors by specifying the anchor name with -a.
# pfctl -a antivirus -s rules
block drop in inet from 203.0.113.8 to any
To erase the rules from an anchor, flush the rules in the anchor.
# pfctl -a antivirus -F rules
rules cleared
Your anchor is now empty.
Rulesets within anchors are completely separate from each other, and also from the main ruleset. Flushing all the rules in a specific anchor does not affect the rules in any other anchor, or the rules in the main ruleset. For that matter, flushing the rules in the main ruleset does not impact the rules in the anchor. To destroy an anchor, you must remove everything in the anchor, including any child anchors.
“Child anchors?” I hear you cry. “What are you babbling about now, dude?”
Consider the following pf.conf snippet:
…
anchor "office/*" in from lan to any {
    pass out proto tcp from any to any port {80, 443}
}
…
The office/* anchor has a filter condition after it, and only traffic that matches the filter condition will pass through the anchor. In this case, only packets that come from the lan interface group will pass through the rules within the anchor. Your rules within the anchor might be easier to write, simply because everything in the anchor is already known to be originating from the lan interfaces.
If your packet filter is very heavily loaded, you might be able to reduce the amount of time it spends processing packets by careful conditional filtering.
Anchors can contain other anchors.
anchor "office" in from lan to any {
    …
    anchor "ftp-proxy/*"
    pass in quick inet proto tcp to port ftp divert-to 127.0.0.1 port 8021
}
…
Only traffic that passes into the office anchor can pass through the ftp-proxy anchor. The FTP proxy can have its own sub-anchors as well. In fact, you might have several layers of anchors to support a complicated protocol, such as FTP.
This is where the /* after some anchor names comes in. An anchor name without this is executed all by itself. By adding the /*, you tell PF to evaluate all sub-anchors within this anchor, in alphabetical order.
Anchors and sub-anchors deliberately resemble a filesystem. You can have a file /office or a directory /office/ containing more files. If you list the files in a directory, they appear in alphabetical order. Anchors work much the same way.
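You can even list them much as you would list directories. pfctl -s Anchors shows the anchors attached to the main ruleset, and naming an anchor with a trailing /* walks its children.
# pfctl -s Anchors
# pfctl -a "office/*" -sr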
All of this anchor stuff is very theoretical. How about a practical example? Read on to see how PF uses anchors to handle that most annoying of network protocols: FTP.
Most modern application protocols run over a single network connection. If you make a web request, your browser opens a connection to the server on port 80, requests information, and receives the answer, all on the same connection. SSH opens a single connection on port 22 and exchanges all information over that port, even if you tunnel a hundred other protocols inside it. Experience and experiments with older protocols taught the wisdom of this approach. FTP is an older protocol, and it provides a wealth of experience on how not to do things.
The original version of FTP (today called active FTP) required the client to connect to the server on port 21. The server would then open a connection back to the client, from port 20 to some random high-numbered port on the client for sending information. The connection from server to client is called the data connection, or the back channel. The FTP client and server agree on the ports to be used and how the second connection will be used. On a network protocol level, however, no connection exists between the client’s connection to port 21 and the server’s connection from port 20, so there’s no way for a firewall to use stateful inspection to sort out if such a connection is allowed. Worse, if the client is behind a NAT device, there’s no way to determine to which private IP address the firewall should route an incoming FTP data request.
Passive FTP is an updated version of the FTP protocol where the client initiates both TCP connections. All modern clients and servers support passive FTP. The differences between active and passive FTP spark endless rounds of user education and increased help-desk load, especially if you’re trying to use FTP through a web browser. (And if anyone is going to break my help desk staff, it’s going to be me!) Passive FTP simplified firewall rules, because the firewall didn’t need to allow the back channel. Unfortunately, the creators of passive FTP called the modified protocol FTP. Clients don’t care about active or passive; they just want “this FTP thing” to work, regardless of the actual protocol underlying it.
To complicate things, some FTP servers and clients implement something between active and passive FTP. The FTP protocol has been around for decades (it predates TCP/IP), and people have tweaked and “improved” it for years. Getting a random combination of FTP server and client through a random NAT device and a packet filter can cause nightmares, or at least require opening a wide range of TCP ports.
OpenBSD and PF get around this problem by including an FTP application proxy, ftp-proxy(8). When a client makes an FTP request, PF intercepts the request and reroutes it to the application proxy. The proxy tracks the FTP protocol transactions, uses anchors to insert the appropriate rules into the firewall, and removes the rules when the transfer finishes. Strictly speaking, ftp-proxy isn’t a traditional proxy. Data doesn’t actually go through ftp-proxy; the “proxy” adjusts the firewall rules so that traffic can pass. The proxy requires two parts: a running ftp-proxy instance and the redirect rules.
Like any other OpenBSD daemon, ftp-proxy is enabled in /etc/rc.conf.local. There’s no configuration file—only command-line arguments. By default, ftp-proxy automatically listens on port 8021 on the loopback interface. It’s very rare for me to add any command-line arguments for ftp-proxy for routine use.
ftpproxy_flags=""
If I’m debugging a problem, however, I might run ftp-proxy in the foreground, in debugging mode. Doing this shows me all FTP transactions as they occur.
# ftp-proxy -dD7
This displays everything that passes through the FTP proxy, including the ports used for the data channel back to the client. Press CTRL-C to stop ftp-proxy.
The most common problem I have with ftp-proxy is that nothing appears in the debugging terminal. That means that the firewall isn’t diverting any traffic to the proxy. Check your pf.conf file to verify that you have the necessary rules to support the FTP proxy.
PF must know to send FTP requests to ftp-proxy. There’s a good example configuration in the default pf.conf file:
anchor "ftp-proxy/*"
pass in quick inet proto tcp to port ftp divert-to 127.0.0.1 port 8021
pass out inet proto tcp from (self) to any port ftp
Here’s where we use anchors. The ftp-proxy/* anchor can contain sub-rulesets. The ftp-proxy daemon modifies these anchors on the fly to configure the necessary traffic or data connections. The second rule declares that PF will divert any traffic addressed to the FTP port (21 as per /etc/services) to port 8021 on the localhost. The third rule says that the firewall host can send TCP port 21 traffic to any other host. This rule contains a new term, (self), which is PF shorthand for “all IP addresses on the localhost.”
How can you be sure this works? First, find an FTP server that supports active FTP. Open your FTP client and log in to the server, going through the firewall. Once you log in, use the pasv command at the FTP prompt. This command turns passive mode on and off. If the server doesn’t recognize pasv, it supports only passive FTP. Find another FTP server for this test. Once the FTP server reports that “passive mode is off,” list the contents of a directory. Directory listings, like data files, come over the data channel.
During the data transfer of an active FTP connection, you should see rules in the ftp-proxy/* anchor.
# pfctl -a "ftp-proxy/*" -sr
anchor "6837.2" all {
pass in log (all) quick on rdomain 0 inet proto tcp from 129.128.5.191 to 139.171.202.34 port = 62323 flags S/SA keep state (max 1) rtable 0 rdr-to 192.0.2.2 port 64280
pass out log (all) quick on rdomain 0 inet proto tcp from 129.128.5.191 to 192.0.2.2 port = 64280 flags S/SA keep state (max 1) nat-to 129.128.5.191
}
The rules created by ftp-proxy are very specific. They permit only one connection, from a particular server to a particular client, with address translation rules to make each side think it’s actually talking to the proper client or server.
One common task for a network perimeter device is bandwidth management. Network managers must control how much bandwidth is used for certain tasks, and must also reserve bandwidth for vital functions. If one of your minions loads the latest blockbuster comic book movie on the web server, you must be able to make an SSH connection to the server, find out why your server is overloaded, and fix the problem. PF includes the ALTQ bandwidth management system.
The most important thing to remember about bandwidth management is that you cannot control how much traffic other people send you. You can stop traffic at the point it enters your network. You can send hints that the bandwidth is saturated. You can arbitrarily restrict bandwidth from your servers. But nothing you do can stop 10,000 people a second from clicking a link to that server. You cannot prevent a distributed denial-of-service attack from saturating your inbound bandwidth. The best you can do is control how you respond to those requests.
When I run content farms, I usually put dedicated bandwidth control machines in front of my servers. This setup controls how much traffic actually reaches my server network, reduces load on the servers in case of a sudden spike, and prevents one overly busy customer from taking down other customers on the same server.
ALTQ manages bandwidth by queues. A queue is a list of packets waiting to be processed.
By dividing your bandwidth into separate queues, and processing those queues as you configure, you can manage server bandwidth. Queues are somewhat like the checkout lines at the grocery store; some lines are for 10 packets or less and get you out quickly, and others are for people who shop once a month and fill up three carts. You can define just about any characteristics for queues, as if you could create a “meats only” or “white wine with fish” register.
Engineers have defined many different queuing algorithms, and the most proper queue method for a given situation is a topic that sparks heated discussions. TCP/IP quality-of-service queue handling is one of those topics that make angelic children cry. By default, all BSD-based systems use first-in, first-out (FIFO) queuing, where packets are processed in the order in which they are received. Newer packets wait in a queue until older packets move on.
OpenBSD also supports priority queuing (PRIQ or prio), where the kernel considers packets of certain types to have “priority” and processes them first. This means that if you assign web packets highest priority, all web packets jump to the head of the queue. Packets of lower priority might never be processed at all under this scheme. These days, just about everything supports priority queuing, especially switches. The goal of priority queuing is to reduce latency for specific traffic, such as voice or video, paying for that reduced latency by increasing the latency of less urgent traffic.
However, in most operational settings where you must regulate bandwidth, class-based queuing (CBQ) is appropriate. CBQ allows the network administrator to allocate a certain amount of bandwidth to different types of traffic through hierarchical classes. Each class has its own queue, with its own bandwidth characteristics. You can assign different sorts of traffic to different classes: SSH to one class, HTTP and HTTPS to another, and so on. One of the nice features of CBQ is that its hierarchical nature allows lower classes to borrow available bandwidth from classes above them.
As I find CBQ appropriate for most environments, I focus on it here. Once you master CBQ, if you need PRIQ, you’ll find it easy to understand.
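For comparison, a minimal PRIQ configuration looks something like this; the interface, queue names, and priorities are only illustrative.
altq on em0 priq bandwidth 10Mb queue {voip, std}
queue voip priority 7
queue std priority 1 priq(default)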
Queuing starts with defining the parent queue. All other queues are children of the parent queue. The parent queue is attached to a network interface, most commonly the Internet-facing interface. Place your queue definitions in pf.conf. I put queues at the top of the file, before any packet-filtering rules.
Here’s how you define a parent queue on an interface:
altq on interface cbq bandwidth bw qlimit qlim tbrsize size queue {queue1, queue2}
Start all ALTQ parent queue definitions with the altq keyword, and then give the interface to which this queue is attached. (Each interface can have no more than one parent queue.) Then give the queue type you’re using. For CBQ queuing, the queue type is always cbq.
Now define the total amount of bandwidth in the parent queue. This is not the same as the amount of bandwidth the interface can pass, but the amount of bandwidth you reasonably expect to pass upstream. If your OpenBSD machine has a gigabit network card, but you have only 10 megabits of bandwidth to the Internet, use 10Mb as your bandwidth (or fiddle with the bandwidth value until you hit your actually usable allocation). You can use the following case-sensitive abbreviations for bandwidth:
b: bits per second
Kb: kilobits per second
Mb: megabits per second
Gb: gigabits per second
The optional qlimit parameter gives the number of packets the queue can hold. The default value is 50, which suffices for almost all cases. I recommend not setting qlimit unless specific debugging shows that you need a larger queue size.
This example includes the token bucket regulator size configuration because tbrsize lets you dictate how quickly packets can be transmitted. ALTQ defaults to transmitting packets as fast as the wire permits. As with qlimit, I recommend not setting tbrsize unless you encounter a problem.
Next, identify this as a parent queue, and define the child queues queue1 and queue2.
Here’s how to configure a parent queue with a 50-megabit uplink, with the child queues ssh, web, and mgmt:
altq on em0 cbq bandwidth 50Mb queue {ssh, web, mgmt}
The tbrsize and qlimit keywords are not set, so they’re at their defaults.
Once you have a parent queue, you can define child queues. Define CBQ queues with the following syntax:
queue name on interface bandwidth bw [priority pri] [qlimit qlim] cbq (options) {child_queues}
Each queue needs a name, defined in the parent queue definition, of 15 characters or less. The names don’t need to be unique—you could use a queue of the same name on a different interface—but I recommend that you use unique names.
The interface is the specific interface to which this queue is applied. If you don’t define an interface, traffic that passes through any interface can be assigned to this queue.
The bandwidth term uses the same bandwidth labels that the parent queue uses, but the total bandwidth assigned to all child queues cannot exceed the total amount of bandwidth available on the parent queue. You can also use a percentage value for bandwidth, indicating the percentage of the parent queue that this queue can consume. Bandwidth and queue are the only mandatory terms in a child queue description.
The following defines the ssh child queue and gives it a bandwidth of 2 megabits:
queue ssh bandwidth 2Mb
Here’s a child queue called web, which is allowed to use three-quarters of the parent queue bandwidth:
queue web bandwidth 75%
You can assign a priority to a queue. CBQ priorities run from 0 to 7, with 7 being the highest. The default priority is 1. A CBQ queue with a higher priority does not run to the exclusion of other queues, but PF processes it more quickly than other queues.
As with a parent queue, you can assign a qlimit to a child queue, but don’t do this unless you have a specific problem that can be solved with this value.
You can assign options to a CBQ child queue. We’ll look at these options in the next section.
Finally, child queues can have their own children. Define a queue’s children in the queue definition. You’ll see an example of this in A CBQ Ruleset.
Modify how a child queue processes packets by assigning options to a queue. Options let you decide how the queue should respond to a variety of network conditions and bandwidth availability.
Every parent queue must have one and only one default child. If a packet crossing a queued interface is assigned to no other queue, it is assigned to the default queue.
Random early detection (RED) is a method for handling packet loss when a queue starts to fill up. As the queue fills up, more and more packets are dropped. RED randomly chooses packets to drop. The net effect is that short transfers, such as HTTP requests and interactive SSH sessions, respond more quickly, while large data transfers become slower.
TCP clients and servers react to dropped packets by reducing their throughput. UDP, ICMP, and other protocols don’t have any built-in reaction to packet loss. Using RED on queues expected to carry TCP is sensible, but not on queues for other protocols.
Explicit Congestion Notification (ECN) is a modification to RED that sets flags in the packet rather than dropping the packet. If a device recognizes the ECN flag, it will reduce transmission rates.
Not all platforms understand ECN, however, and many that can recognize ECN disable it by default. Microsoft’s Windows Vista and newer, Apple OS X, FreeBSD, and OpenBSD can support ECN, but disable it by default. Newer Linux versions support ECN if the other host requests it. I have successfully used ECN in corporate environments where I could make the support guys enable ECN on the desktops.
Unless you know the operating systems in use and can control their settings, stick with standard RED.
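If you do control the endpoints, ECN is just another CBQ queue option; here’s a sketch, reusing the web queue from the example later in this chapter.
queue web bandwidth 50% cbq(ecn)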
The borrow option is available only in CBQ. A queue with borrow set may borrow bandwidth from its parent queue, if the bandwidth is available. For example, you might have a queue that reserves 20 percent of your bandwidth for VoIP. If you don’t have that much VoIP traffic at any particular moment, the parent will have excess bandwidth. Other queues could borrow bandwidth from that allocation. When your VoIP traffic spikes, however, PF revokes the bandwidth loan, and the VoIP traffic gets what’s reserved for it.
Use the borrow option on the queues that you want to permit to borrow bandwidth, not on the queues whose bandwidth might be borrowed.
Before configuring queues, figure out how you want to divide your bandwidth. While you could use bits per second to manage bandwidth, for most of us, percentages are easier to deal with. Here’s how you might divide Internet bandwidth for a company with a 10-megabit link. Start by making a list of your desired bandwidth reservations, and then assign a name to each category, like this:
ssh: remote administration, 5 percent
web: web traffic, 50 percent
voip: voice traffic, 5 percent
other: everything else, 5 percent
All of these queues can borrow from the parent queue.
Start by defining the parent queue.
altq on em0 cbq bandwidth 10Mb queue {ssh, web, voip, other}
This parent queue is attached to interface em0, and has 10 megabits of bandwidth and four child queues. Leave all the other options alone.
Now define the first child queue.
queue ssh bandwidth 5% cbq (borrow)
Start with the queue name and the bandwidth percentage you’ve chosen. This percentage is calculated from the parent of this particular queue, so it’s about 5 percent of 10 megabits, or 500 kilobits per second. That should be plenty to log in remotely and fix any problems. Adding the borrow option lets you use more bandwidth for SSH if it’s available.
Building from this example, you can define the other child queues.
queue web bandwidth 50% cbq (borrow, red)
queue voip bandwidth 5% priority 7 cbq (borrow)
queue other bandwidth 5% cbq (borrow, default)
The other queue is your default. Any traffic that isn’t assigned its own queue is assigned to this queue.
Assign traffic to a queue with the queue keyword at the end of a packet-filtering rule. To allow all SSH (port 22) traffic into the network and assign it to the queue named ssh, use a rule like this:
pass in on egress proto tcp from any to lan:network port 22 queue ssh
Sometimes you must classify traffic without filtering it. The previous example let you assign inbound SSH traffic to the ssh queue, but what if you want to capture outbound SSH as well? Consider the following rule snippet:
pass in on egress proto tcp from <customers> to <sshservers> port 22
pass out on egress from lan:network to any
This allows hosts in the customers table to connect to hosts in the sshservers table on port 22. The second rule allows the local network to send any traffic, of any protocol. Some of that outbound traffic will be SSH traffic. Should you write a separate rule just for queuing traffic?
This is where the match keyword comes in. Using match, you can change how PF classifies traffic without changing how it filters traffic. Here’s how to send all TCP port 22 traffic to the ssh queue, without changing any filtering characteristics:
match proto tcp from any to any port 22 queue ssh
pass in on egress proto tcp from <customers> to <sshservers> port 22
pass out on egress from lan:network to any
The first rule matches all traffic on TCP port 22 and assigns it to the ssh queue. The rules that follow control who can send and receive SSH connections.
To view the queues currently in the packet filter, run pfctl -s queues.
# pfctl -sq
queue root_em0 on em0 bandwidth 10Mb priority 0 cbq( wrr root ) {ssh, web, voip, other}
queue ssh on em0 bandwidth 500Kb cbq( borrow )
queue web on em0 bandwidth 5Mb cbq( red borrow )
queue voip on em0 bandwidth 500Kb priority 7 cbq( borrow )
queue other on em0 bandwidth 500Kb cbq( borrow default )
Adding -v gives you a brief snapshot of the state of each queue. For a constantly updating view of all queues, including how much traffic is borrowed from each, what gets dropped, and so on, use -vvsq or systat queues instead.
This section covers a couple tidbits of PF configuration that don’t quite fit anywhere else: include files and the quick keyword.
Sometimes splitting a configuration file into multiple pieces simplifies your work. Do this with an include statement in pf.conf.
include "/etc/pf/management-addresses"
I do this when I need to manage several PF machines with unique configurations, but certain pieces are identical. The management-addresses file defines a table listing all hosts and networks that can connect via SSH, make SNMP queries, and so on. When one of those addresses changes, I copy this file to all of my PF hosts and reload the packet-filtering rules.
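The included file is nothing but more pf.conf syntax. A sketch of what such a file might contain; the table name and addresses are examples.
table <management> const {192.0.2.5, 192.0.2.8, 198.51.100.10}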
PF processes packet-filtering rules in order, and the last matching rule wins, which can complicate designing a ruleset that supports exactly the access you desire. If you find yourself stuck, use the quick keyword to abort processing the rest of the rules for matching packets. Here’s an example:
…
pass in quick proto tcp from any to $sshserver port 22
…
block in proto tcp from any to any port 22
…
The first rule permits traffic to the host(s) in the macro $sshserver on port 22. The second rule drops all TCP port 22 traffic. The quick keyword in the first rule says, “When a packet matches this rule, follow this rule and do not process any more rules.” In this case, the SSH connection will be permitted.
The quick keyword is especially useful in anchors, where rules added for a special purpose by an automated process like ftp-proxy(8) might be overridden by later rules meant for unrelated purposes.
The purist in me wants to insist that all static rulesets be written without using quick. While you can indeed write any static ruleset that way, sometimes avoiding quick creates rulesets that are difficult to interpret. A ruleset you can easily understand is more secure than something baroque but syntactically pure.
Tell PF to log packets with the log keyword in a rule.
pass out log on egress from lan:network to any
Without additional setup, however, those logs just go to the PF log device pflog0. To successfully log PF messages, you must run the packet filter logger pflogd(8). If you start PF at boot, pflogd is automatically started with it. Otherwise, you must start it on the command line.
One thing to remember is that if you’re using stateful inspection, only the first packet that triggers a rule is logged. Other packets that are part of the same state are not logged. To log all packets in a stateful connection, give the all modifier to the log keyword, but beware because this can generate very large logs.
pass out log (all) on egress from lan:network to any
Logging is especially useful when troubleshooting connection problems. If packets are being blocked when you think they should be passed, add logging to your block statements to see which rule is stopping the traffic.
I don’t recommend logging everything, especially because logs can grow quite large. Log selectively. For example, perhaps you don’t care which websites your local users visit, but do want to know about incoming traffic. And be sure to exclude your firewall logging traffic from your packet filter logs, or you’ll quickly find that PF is logging the transmission of the logs of the log transmissions, which are logs of transmitting the logs, from when you transmitted the logs … yadda yadda yadda.
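For example, you might log what the world throws at you while deliberately leaving the log keyword off the rule that carries your own logs to the loghost. A sketch, assuming a $loghost macro and syslog over UDP:
block in log on egress
pass out on egress proto udp from (self) to $loghost port 514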
PF logs in the tcpdump(8) binary format. Use tcpdump to examine the data. To just dump everything in the log, tell tcpdump to read the log file.
# tcpdump -r /var/log/pflog
This can generate a huge amount of output. See Filtering tcpdump for some hints.
The entries in /var/log/pflog are not added in real time; pflogd(8) buffers its records until writing a log message is worthwhile. To see PF logs in real time, attach tcpdump to the pflog0 interface with the -i flag.
# tcpdump -i pflog0
Depending on how much traffic you’re logging, this might also produce an overwhelming amount of information. You must filter tcpdump to make it useful. Or if you pretend you missed my earlier warning about log sizes, you can devise a one-liner that uses logger to send your PF logs as text to syslog.
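If you really want to do that, the usual sort of one-liner looks something like this, and it can flood syslog in a hurry.
# tcpdump -n -e -ttt -l -i pflog0 | logger -t pflog -p daemon.info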
Every system administrator should know how to use tcpdump. Here’s your motivation for doing so.
When troubleshooting a problem with a particular connection, you probably don’t care about every packet passing through the filter. You care about traffic to or from a particular host. Specify an IP address with the ip or ip6 expression.
# tcpdump -i pflog0 ip host 192.0.2.2
This will display only traffic to and from this particular host.
To narrow things further and see only the traffic between two hosts, combine the hosts with the and keyword.
# tcpdump -i pflog0 ip host 192.0.2.2 and ip host 203.0.113.88
Maybe you’re interested in only a specific port, on a specific address. Use the tcp or udp keyword and the port number to filter on that.
# tcpdump -i pflog0 ip host 139.171.199.254 and tcp port 80
Read the tcpdump(8) man page for an exhaustive list of innumerable other filtering options.
If using tcpdump doesn’t appeal to you, consider the pflow(4) NetFlow exporter. Network flow is a complicated topic, but the book Network Flow Analysis (No Starch Press, 2010) might help you.
Sometimes, knowing whether a packet passed or failed isn’t enough. You know that a packet was blocked, but not why. You want to watch the packet pass through the rules and see which rules affect it.
Suppose an internal host 192.0.2.226 cannot connect to the external host 203.0.113.34. The log would show that the packet is blocked, but not why. You can specifically have PF log matching rules. Add a line like this to the top of your pf.conf file:
match log (matches) from 192.0.2.226 to 203.0.113.34
This is a standard packet-filtering rule. You could use an individual IP address, a port number, or any other legal packet filter terms. Reload your packet-filtering rules.
Turn on tcpdump, and filter based on one of the IP addresses in your match statement. If you’re using NAT, filter on the IP address that doesn’t change.
# tcpdump -n -e -ttt -i pflog0 ip host 203.0.113.34
Dec 17 18:05:07.773703 rule 0/(match) match out on fxp0: 192.0.2.226.24916 > 203.0.113.34.22: S 1730871963:1730871963(0) win 16384 <mss 1460,nop,nop,sackOK,nop,wscale 3,nop,nop,timestamp 597858150[|tcp]> (DF)
Dec 17 18:05:07.773708 rule 2/(match) block out on fxp0: 192.0.2.226.24916 > 203.0.113.34.22: S 1730871963:1730871963(0) win 16384 <mss 1460,nop,nop,sackOK,nop,wscale 3,nop,nop,timestamp 597858150[|tcp]> (DF)
Dec 17 18:05:07.773712 rule 5/(match) pass out on fxp0: 192.0.2.226.24916 > 203.0.113.34.22: S 1730871963:1730871963(0) win 16384 <mss 1460,nop,nop,sackOK,nop,wscale 3,nop,nop,timestamp 597858150[|tcp]> (DF)
While I won’t go through all the annoying details of reading tcpdump output, you can see that PF logs the rule numbers that this data connection matches, and whether the rule passes or blocks the connection. If the connection involves NAT, you’ll see the actual and translated IP addresses.
At this point, you know enough about PF to protect a small network. If you need more, definitely check out The Book of PF, 2nd edition (No Starch Press, 2010).
Now let’s look at some of the more exotic edges of OpenBSD.
[48] Can Lucas configure a highly available firewall cluster in a day? Yep. Can he search and replace IP addresses in a text file without screwing everything up? Nope.