Comparing Open Source BGP Stacks

A comparison of some simple performance characteristics of three open source BGP stacks: BIRD, FRRouting, and GoBGP.

Open source BGP stacks are very important, but I don’t think they get the love they deserve. There’s a lot going on in them and it’s hard to keep up, so I wanted to compare them quantitatively. Performance is only one, often tiny, aspect of evaluating a BGP stack, and my testing is fairly simple: very little policy, with the number of routes and/or the number of neighbors as the only independent variables.

The stacks evaluated are BIRD, FRRouting, and GoBGP. They have different feature sets; for instance, FRR and BIRD are fuller routing stacks that include other protocols. BIRD and FRRouting are single-process/single-core stacks, while GoBGP can use multiple cores. One of the reasons I ran these tests is that I was hoping we’d see the benefits of multiple cores.

Test setup

I started with bgperf, written by the same people who write GoBGP. bgperf hasn’t been updated in 4+ years and pretty much doesn’t work at all anymore: it doesn’t run under Python 3 and doesn’t correctly configure current versions of these protocol stacks. I’ve forked and updated bgperf to work with current software, at least for the tests I ran. bgperf has quite a bit of functionality I didn’t try out, especially around remote test subjects.
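To give a sense of how the tests are driven, here is roughly what a run of my fork looks like. Treat the flag names as approximate (check `./bgperf.py bench --help` in whichever version you have); `-s` is the single-table BIRD flag discussed below.

```
# Build/pull the containers for the tester, targets, and monitor
sudo ./bgperf.py prepare

# Benchmark BIRD: 100 ExaBGP neighbors, 10,000 routes each,
# using one shared routing table (-s)
sudo ./bgperf.py bench -t bird -n 100 -p 10000 -s
```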

bgperf uses ExaBGP to source and send all the routes. In some of the tests, ExaBGP adds a lot of load because it’s not the most efficient code (it’s written in Python). However, since the same tester drives all three stacks, you can still see what the comparison means. The way bgperf works is that it creates all the configuration needed for the tester, the stack under test, and the observer. Each “neighbor” is a separate ExaBGP process.
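To make that concrete, here is a hand-written sketch of what one ExaBGP neighbor’s configuration looks like conceptually. This is not bgperf’s exact generated output; the addresses and AS numbers are made up:

```
neighbor 10.10.0.1 {              # the stack under test
    router-id 10.10.0.2;
    local-address 10.10.0.2;      # this ExaBGP process
    local-as 65002;
    peer-as 65001;

    static {
        # one line per advertised prefix
        route 100.0.0.0/24 next-hop 10.10.0.2;
        route 100.0.1.0/24 next-hop 10.10.0.2;
    }
}
```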

How bgperf works

One thing is specific to BIRD: for reasons I don’t understand, bgperf by default creates a separate routing table for every neighbor. I’m not sure when you would want that and when you wouldn’t. If you pass bgperf the -s flag you get one big table instead. bgperf doesn’t do this for the other stacks. This matters for performance when there are a lot of neighbors, as I’ll show in some of the results.
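In BIRD 2 configuration terms, the difference looks roughly like this. This is a hand-written sketch rather than bgperf’s exact output; the protocol names and addresses are illustrative:

```
# Default bgperf behavior: one table per neighbor, piped into master4.
ipv4 table table_1;

protocol pipe pipe_1 {
    table master4;        # BIRD's built-in IPv4 table
    peer table table_1;
    import all;
    export none;
}

protocol bgp neighbor_1 {
    local as 65001;
    neighbor 10.10.0.2 as 65002;
    ipv4 { table table_1; import all; export all; };
}

# With -s, each neighbor uses master4 directly and no pipes are needed:
# protocol bgp neighbor_1 {
#     local as 65001;
#     neighbor 10.10.0.2 as 65002;
#     ipv4 { import all; export all; };
# }
```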

What do these results mean?

There are several things going on that we have to measure: how long ExaBGP takes to get started, how long the stack under test takes to establish all the neighbor connections, and then how long it takes to pass all the routes.

Let’s look through some examples. My tests were primarily done on my 16-core AMD 3970 with 64 GB of RAM. I also ran some tests on an EC2 m6g.16xlarge.

10 neighbors, 10K routes

Let’s start someplace simple: 10 neighbors (each a separate ExaBGP process), each advertising 10K routes. Let me explain what all those numbers mean.

Total time is measured with the Unix time command over the whole run: starting everything up, connecting everything, sending routes, monitoring. Next is neighbor time, which is how long it takes for all the neighbors to connect to the stack under test. After that, bgperf measures how long it takes to send and receive all the routes, which is the elapsed time. You can also see the time since the first route, which tells us how long ExaBGP takes to get working and start sending. In this case of 10K routes, it takes ExaBGP about 1 second before it sends the first route.
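Here is my reading of how the columns in the tables below relate to each other, inferred from bgperf’s output rather than its documentation:

```
# total time        = whole run: startup + neighbor bring-up + route propagation
# neighbor (s)      = all BGP sessions reach Established
# elapsed (s)       = monitoring start until all routes are received
# since first route = first route received until all routes are received
# exabgp (s)        = elapsed - since first route (ExaBGP's startup delay)
```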

This test isn’t much of a stressor on these stacks, so we don’t see much differentiation: they all take 1-3 seconds. GoBGP takes longer to establish neighbors and uses far more CPU (1160% vs 10-20%).

| nos | version | peers | routes per peer | neighbor (s) | elapsed (s) | since first route (s) | exabgp (s) | total time | max cpu % | max mem (GB) | flags | date | cores | mem (GB) | notes | results |
|-----|---------|-------|-----------------|--------------|-------------|------------------------|------------|------------|-----------|--------------|-------|------|-------|----------|-------|---------|
| bird | v2.0.8-52-g8eea396b | 10 | 10000 | 3 | 2 | 1 | 1 | 0:11 | 10 | 0.00933 | | 2021-06-10 | 16 | 64 | table per neighbor that redistributes to a master table | |
| frr | 7.7-dev_git | 10 | 10000 | 1 | 3 | 2 | 1 | 0:10 | 19 | 0.057 | | 2021-06-10 | 16 | 64 | | |
| gobgp | 2.28.0 | 10 | 10000 | 6 | 4 | 3 | 1 | 0:17 | 1160 | 0.077 | | 2021-06-10 | 16 | 64 | | |

100 neighbors, 10K routes

This might not be typical, but I’d bet there are places where it’s critical to have 100 neighbors on a single device.

You’ll notice that there are two BIRD results here: the faster one is the single-table run (with the -s flag to bgperf), while the slower one uses a table per neighbor. Single table is about 10x faster in this test, 14 seconds vs 139, and uses 1.4 GB of memory instead of 10.6 GB. That makes sense.

Single-table BIRD and FRRouting each take about 15 seconds to send all the routes. However, FRRouting takes 61 seconds vs 4 to create all the neighbor relationships, so the total time for the whole test is much longer. It turns out the version of FRRouting I originally used was not a stable release; moving to version 7.5.1 produced times faster than BIRD’s.

GoBGP starts looking really bad here: about 40x slower than FRRouting or BIRD.

| nos | version | peers | routes per peer | neighbor (s) | elapsed (s) | since first route (s) | exabgp (s) | total time | max cpu % | max mem (GB) | flags | date | cores | mem (GB) | notes | results |
|-----|---------|-------|-----------------|--------------|-------------|------------------------|------------|------------|-----------|--------------|-------|------|-------|----------|-------|---------|
| bird | v2.0.8-52-g8eea396b | 100 | 10000 | 3 | 139 | 139 | 0 | 3:21 | 101 | 10.6 | | 2021-06-10 | 16 | 64 | table per neighbor that redistributes to a master table | |
| bird | v2.0.8-52-g8eea396b | 100 | 10000 | 4 | 14 | 14 | 0 | 1:31 | 100 | 1.39 | -s | 2021-06-11 | 16 | 64 | single table | |
| frr | 7.7-dev_git | 100 | 10000 | 61 | 15 | 15 | 0 | 2:27 | 103 | 1.3 | | 2021-06-10 | 16 | 64 | | |
| frr | 7.5.1_git (d64c849b0bce) | 100 | 10000 | 0 | 7 | 7 | 0 | 1:15 | 101 | 1.39 | | 2021-07-27 | 16 | 64 | | |
| gobgp | 2.28.0 | 100 | 10000 | 4 | 646 | 646 | 0 | 11:49 | 1629 | 1.92 | | 2021-06-10 | 16 | 64 | | |

10K routes

All the 10K-route results in one graph.

10K routes graph

5 neighbors, 100K routes

What happens if we add more routes? Let’s go up to 100K routes per neighbor, this time with 5 neighbors.

BIRD and FRR have similar performance. GoBGP is starting to get a lot slower: it takes 24 seconds to receive all the routes, while the others take 3-4 seconds. BIRD and FRRouting max out at one full core, while GoBGP uses all the cores. GoBGP also uses about 2x more memory. Not looking great for GoBGP.

| nos | version | peers | routes per peer | neighbor (s) | elapsed (s) | since first route (s) | exabgp (s) | total time | max cpu % | max mem (GB) | flags | date | cores | mem (GB) | notes | results |
|-----|---------|-------|-----------------|--------------|-------------|------------------------|------------|------------|-----------|--------------|-------|------|-------|----------|-------|---------|
| bird | v2.0.8-52-g8eea396b | 5 | 100000 | 4 | 10 | 4 | 6 | 0:39 | 99 | 0.414 | | 2021-06-10 | 16 | 64 | table per neighbor that redistributes to a master table | |
| frr | 7.7-dev_git | 5 | 100000 | 1 | 10 | 3 | 7 | 0:35 | 100 | 0.376 | | 2021-06-10 | 16 | 64 | | |
| gobgp | 2.28.0 | 5 | 100000 | 4 | 31 | 24 | 7 | 1:00 | 1555 | 0.654 | | 2021-06-10 | 16 | 64 | | |

5 neighbors, 1M routes

Again, BIRD and FRR are similar in performance, with BIRD about 25% faster. GoBGP in this case is about 3x slower and uses 2x the memory.

| nos | version | peers | routes per peer | neighbor (s) | elapsed (s) | since first route (s) | exabgp (s) | total time | max cpu % | max mem (GB) | flags | date | cores | mem (GB) | notes | results |
|-----|---------|-------|-----------------|--------------|-------------|------------------------|------------|------------|-----------|--------------|-------|------|-------|----------|-------|---------|
| bird | v2.0.8-52-g8eea396b | 5 | 1000000 | 3 | 103 | 33 | 70 | 5:07 | 100 | 4.4 | | 2021-06-10 | 16 | 64 | table per neighbor that redistributes to a master table | |
| frr | 7.7-dev_git | 5 | 1000000 | 1 | 111 | 40 | 71 | 5:10 | 101 | 3.6 | | 2021-06-10 | 16 | 64 | | |
| gobgp | 2.28.0 | 5 | 1000000 | 6 | 342 | 273 | 69 | 9:09 | 2619 | 8.4 | | 2021-06-10 | 16 | 64 | | |

1 neighbor, 10M routes

I wanted to test something crazy, but it failed. bgperf runs out of memory creating the configuration before the test even gets started: it builds all the config in memory before writing it out, so with a very large number of routes, like 10M, the 32-bit Python process runs out of memory (it gets killed around 4 GB) before any of the rest of the test runs. This could clearly be improved, but I didn’t get that far.
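The obvious fix would be to stream the generated configuration to disk instead of building it all in memory. A minimal sketch of the idea in Python, using hypothetical names that are not bgperf’s actual internals:

```python
def write_exabgp_routes(path: str, prefix_count: int) -> None:
    """Write ExaBGP static route lines directly to disk, one at a
    time, instead of accumulating one giant string in memory."""
    with open(path, "w") as f:
        f.write("    static {\n")
        for i in range(prefix_count):
            # Illustrative prefix scheme; a real generator would need
            # a wider range to cover 10M unique prefixes.
            f.write(f"        route 100.{(i >> 8) & 0xff}.{i & 0xff}.0/24"
                    f" next-hop self;\n")
        f.write("    }\n")
```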

500 neighbors

FRRouting gets really upset when there are 500 neighbors: it takes over 30 minutes to connect them all. As mentioned above, FRRouting 7.5.1 looks really good; the issue is with the original version I tested, which is not a stable release.

| nos | version | peers | routes per peer | neighbor (s) | elapsed (s) | since first route (s) | exabgp (s) | total time | max cpu % | max mem (GB) | flags | date | cores | mem (GB) | notes | results |
|-----|---------|-------|-----------------|--------------|-------------|------------------------|------------|------------|-----------|--------------|-------|------|-------|----------|-------|---------|
| bird | v2.0.8-52-g8eea396b | 500 | 100 | 3 | 1 | 1 | 0 | 1:24 | 9 | 0.031 | -s | 2021-06-11 | 16 | 64 | single table | |
| frr | 7.7-dev_git | 500 | 100 | 2173 | 2 | 2 | 0 | 40:25 | 4 | 0.439 | | 2021-06-11 | 16 | 64 | | |
| frr | 7.5.1_git (d64c849b0bce) | 500 | 100 | 1 | 8 | 8 | 0 | 1:16 | 102 | 1.33 | | 2021-07-27 | 16 | 64 | | |
| gobgp | 2.28.0 | 500 | 100 | 5 | 206 | 206 | 0 | 4:56 | 775 | 0.156 | | 2021-06-11 | 16 | 64 | | the Python process was killed, it was 4 GB |

Observations

GoBGP resource utilization

I assumed GoBGP would use more CPU resources but be quite a bit faster on fast hardware. It turns out it just uses a lot of CPU resources and is a lot slower. I was really hoping it would take advantage of the hardware available.

FRRouting and lots of neighbors

The original tests showed very high neighbor times for FRRouting 7.7. I was using the easiest way to build an FRRouting container and didn’t realize that gave me a dev version; the last stable release, 7.5.1, has very good neighbor times.

The problem was debugged with help from people on the FRRouting Slack. Thanks, folks!

BIRD table per neighbor

As mentioned, bgperf by default has BIRD use a separate table per neighbor. When there are lots of neighbors, this is much slower and uses more memory, as you’d expect.

Conclusion / Followup

FRRouting and BIRD (single table) are pretty close in performance. The 7.7-dev version I originally tested was slower, but that was a dev build and I didn’t realize it at the time. FRRouting 7.5.1 looks generally faster than BIRD, but I wouldn’t assume that holds in practice: when the differences are that small, interactions with the ExaBGP testers might affect the results. More for me to understand. And of course this is pretty simple testing, so other features of BGP are certain to have different performance characteristics that might affect you more.

I’d sure love to have a discussion about how to build better tests. If you propose a test, I’d love config snippets for the protocol stacks that show exactly what you want to compare. Or even better, PRs to bgperf.

There is more to learn even from the results I’ve already collected at the end of this post.

Anybody have any other benchmarks that are useful?

Keep me honest if I did something dumb!

Next steps

I need to test more sophisticated policy. I should also check whether the “gobgp vs quagga with prefix lists” result (slide 15) is still relevant: that Quagga/FRR doesn’t scale with the number of policy lines.
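For that test I have in mind something like the following FRRouting configuration, with the prefix-list generated at various lengths. This is a sketch, with made-up addresses and AS numbers:

```
! A long inbound prefix-list applied to one neighbor
ip prefix-list PL-TEST seq 5 permit 100.0.0.0/24
ip prefix-list PL-TEST seq 10 permit 100.0.1.0/24
! ... thousands more entries ...
router bgp 65001
 neighbor 10.10.0.2 remote-as 65002
 address-family ipv4 unicast
  neighbor 10.10.0.2 prefix-list PL-TEST in
 exit-address-family
```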

I’m curious to test rustybgp, bio-rd, and whatever else is out there.

I should get bgperf’s remote mode working and try out VMs or containers of various commercial stacks.

Update 2021-07-27

  • fixed typos in the tables

  • removed the debugging section and put it into the bgperf README

  • In cleaning up bgperf, I made it use the latest FRRouting container on Docker Hub. It turns out that’s not a stable version and has some neighbor-performance issues. I then hard-coded the latest stable version (7.5.1) and reran the tests; that data is included above.

Update 2021-07-30

  • tried to make the results clearer, in particular that the original testing of FRRouting was of a dev version, not a production version.

All Results

| nos | version | peers | routes per peer | neighbor (s) | elapsed (s) | since first route (s) | exabgp (s) | total time | max cpu % | max mem (GB) | flags | date | cores | mem (GB) | notes | results |
|-----|---------|-------|-----------------|--------------|-------------|------------------------|------------|------------|-----------|--------------|-------|------|-------|----------|-------|---------|
| bird | v2.0.8-52-g8eea396b | 10 | 10000 | 3 | 2 | 1 | 1 | 0:11 | 10 | 0.00933 | | 2021-06-10 | 16 | 64 | table per neighbor that redistributes to a master table | |
| frr | 7.7-dev_git | 10 | 10000 | 1 | 3 | 2 | 1 | 0:10 | 19 | 0.057 | | 2021-06-10 | 16 | 64 | | |
| frr | 7.5.1_git (d64c849b0bce) | 10 | 10000 | 0 | 3 | 3 | 0 | 0:11 | 40 | 0.105 | | 2021-07-27 | 16 | 64 | | |
| gobgp | 2.28.0 | 10 | 10000 | 6 | 4 | 3 | 1 | 0:17 | 1160 | 0.077 | | 2021-06-10 | 16 | 64 | | |
| bird | v2.0.8-52-g8eea396b | 30 | 10000 | 3 | 9 | 9 | 0 | 0:33 | 101 | 1.11 | | 2021-06-10 | 16 | 64 | table per neighbor that redistributes to a master table | |
| bird | v2.0.8-52-g8eea396b | 30 | 10000 | 3 | 3 | 3 | 0 | 0:26 | 99 | 1.11 | -s | 2021-06-10 | 16 | 64 | single table | |
| frr | 7.7-dev_git | 30 | 10000 | 5 | 3 | 3 | 0 | 0:28 | 65 | 0.293 | | 2021-06-10 | 16 | 64 | | |
| frr | 7.5.1_git (d64c849b0bce) | 30 | 10000 | 0 | 3 | 3 | 0 | 0:23 | 67 | 0.281 | | 2021-07-27 | 16 | 64 | | |
| gobgp | 2.28.0 | 30 | 10000 | 4 | 59 | 59 | 0 | 1:24 | 1570 | 0.507 | | 2021-06-10 | 16 | 64 | | |
| bird | v2.0.8-52-g8eea396b | 50 | 10000 | 4 | 24 | 24 | 0 | 0:59 | | | | 2021-06-10 | 16 | 64 | table per neighbor that redistributes to a master table | |
| bird | v2.0.8-52-g8eea396b | 50 | 10000 | 4 | 7 | 7 | 0 | 0:44 | 100 | 0.408 | -s | 2021-06-10 | 16 | 64 | single table | |
| frr | 7.7-dev_git | 50 | 10000 | 14 | 6 | 6 | 0 | 0:52 | 101 | 0.528 | | 2021-06-10 | 16 | 64 | | |
| frr | 7.5.1_git (d64c849b0bce) | 50 | 10000 | 1 | 4 | 4 | 0 | 0:35 | 101 | 0.535 | | 2021-07-27 | 16 | 64 | | |
| gobgp | 2.28.0 | 50 | 10000 | 4 | 157 | 157 | 0 | 3:13 | 1657 | 0.936 | | 2021-06-10 | 16 | 64 | | |
| bird | v2.0.8-52-g8eea396b | 75 | 10000 | 3 | 71 | 71 | 0 | 1:58 | 101 | 6.04 | | 2021-06-10 | 16 | 64 | table per neighbor that redistributes to a master table | |
| bird | v2.0.8-52-g8eea396b | 75 | 10000 | 4 | 10 | 10 | 0 | 1:04 | 101 | 0.823 | -s | 2021-06-10 | 16 | 64 | single table | |
| frr | 7.7-dev_git | 75 | 10000 | 34 | 10 | 10 | 0 | 1:32 | 102 | 0.946 | | 2021-06-10 | 16 | 64 | | |
| frr | 7.5.1_git (d64c849b0bce) | 75 | 10000 | 1 | 7 | 7 | 0 | 0:55 | 101 | 1.01 | | 2021-07-27 | 16 | 64 | | |
| gobgp | 2.28.0 | 75 | 10000 | 4 | 354 | 354 | 0 | 6:43 | 1707 | 1.43 | | 2021-06-10 | 16 | 64 | | |
| bird | v2.0.8-52-g8eea396b | 100 | 10000 | 3 | 139 | 139 | 0 | 3:21 | 101 | 10.6 | | 2021-06-10 | 16 | 64 | table per neighbor that redistributes to a master table | |
| bird | v2.0.8-52-g8eea396b | 100 | 10000 | 4 | 14 | 14 | 0 | 1:31 | 100 | 1.39 | -s | 2021-06-11 | 16 | 64 | single table | |
| frr | 7.7-dev_git | 100 | 10000 | 61 | 15 | 15 | 0 | 2:27 | 103 | 1.3 | | 2021-06-10 | 16 | 64 | | |
| frr | 7.5.1_git (d64c849b0bce) | 100 | 10000 | 0 | 7 | 7 | 0 | 1:15 | 101 | 1.39 | | 2021-07-27 | 16 | 64 | | |
| gobgp | 2.28.0 | 100 | 10000 | 4 | 646 | 646 | 0 | 11:49 | 1629 | 1.92 | | 2021-06-10 | 16 | 64 | | |
| bird | v2.0.8-52-g8eea396b | 5 | 100000 | 4 | 10 | 4 | 6 | 0:39 | 99 | 0.414 | | 2021-06-10 | 16 | 64 | table per neighbor that redistributes to a master table | |
| bird | v2.0.8-52-g8eea396b | 5 | 100000 | 4 | 11 | 4 | 7 | 0:39 | 45 | 0.123 | -s | 2021-06-10 | 16 | 64 | single table | |
| frr | 7.7-dev_git | 5 | 100000 | 1 | 10 | 3 | 7 | 0:35 | 100 | 0.376 | | 2021-06-10 | 16 | 64 | | |
| gobgp | 2.28.0 | 5 | 100000 | 4 | 31 | 24 | 7 | 1:00 | 1555 | 0.654 | | 2021-06-10 | 16 | 64 | | |
| bird | v2.0.8-52-g8eea396b | 10 | 100000 | 3 | 15 | 9 | 6 | 1:04 | 101 | 1.43 | | 2021-06-11 | 16 | 64 | | |
| bird | v2.0.8-52-g8eea396b | 10 | 100000 | 4 | 13 | 7 | 6 | 1:02 | 101 | 0.446 | -s | 2021-06-11 | 16 | 64 | single table | |
| bird | v2.0.8-52-g8eea396b | 1 | 1000000 | 3 | 74 | 5 | 69 | 2:00 | 60 | 0.311 | | 2021-06-11 | 16 | 64 | table per neighbor that redistributes to a master table | |
| bird | v2.0.8-52-g8eea396b | 1 | 1000000 | 3 | 72 | 6 | 66 | 1:54 | 28 | 0.106 | -s | 2021-06-11 | 16 | 64 | single table | |
| frr | 7.7-dev_git | 1 | 1000000 | 1 | 76 | 7 | 69 | 1:59 | 99 | 0.71 | | 2021-06-11 | 16 | 64 | | |
| gobgp | 2.28.0 | 1 | 1000000 | 7 | 94 | 24 | 70 | 2:23 | 969 | 1.32 | | 2021-06-11 | 16 | 64 | | |
| bird | v2.0.8-52-g8eea396b | 5 | 1000000 | 3 | 103 | 33 | 70 | 5:07 | 100 | 4.4 | | 2021-06-10 | 16 | 64 | table per neighbor that redistributes to a master table | |
| frr | 7.7-dev_git | 5 | 1000000 | 1 | 111 | 40 | 71 | 5:10 | 101 | 3.6 | | 2021-06-10 | 16 | 64 | | |
| gobgp | 2.28.0 | 5 | 1000000 | 6 | 342 | 273 | 69 | 9:09 | 2619 | 8.4 | | 2021-06-10 | 16 | 64 | | |
| bird | v2.0.8-52-g8eea396b | 10 | 1000000 | | | | | | | 12.8 | | 2021-06-11 | 16 | 64 | table per neighbor that redistributes to a master table | OOM killed at about 12.8 GB |
| bird | v2.0.8-52-g8eea396b | 10 | 1000000 | 4 | 166 | 94 | 72 | 9:18 | 101 | 4.34 | -s | 2021-06-11 | 16 | 64 | single table | |
| frr | 7.7-dev_git | 10 | 1000000 | 1 | 178 | 107 | 71 | 9:31 | 102 | 7.32 | | 2021-06-11 | 16 | 64 | | |
| gobgp | 2.28.0 | 10 | 1000000 | | | | | | | 9.5 | | 2021-06-11 | 16 | 64 | | OOM killed at about 9.5 GB, 502 elapsed seconds, 5.32M routes |
| bird | v2.0.8-52-g8eea396b | 500 | 10000 | 4 | | | | | | 31 | | 2021-06-11 | 16 | 64 | table per neighbor that redistributes to a master table | OOM killed at 135 elapsed seconds, 582370 routes, 31 GB |
| bird | v2.0.8-52-g8eea396b | 500 | 100 | 3 | 1 | 1 | 0 | 1:24 | 9 | 0.031 | -s | 2021-06-11 | 16 | 64 | single table | |
| frr | 7.7-dev_git | 500 | 100 | 2173 | 2 | 2 | 0 | 40:25 | 4 | 0.439 | | 2021-06-11 | 16 | 64 | | |
| frr | 7.5.1_git (d64c849b0bce) | 500 | 100 | 1 | 8 | 8 | 0 | 1:16 | 102 | 1.33 | | 2021-07-27 | 16 | 64 | | |
| gobgp | 2.28.0 | 500 | 100 | 5 | 206 | 206 | 0 | 4:56 | 775 | 0.156 | | 2021-06-11 | 16 | 64 | | the Python process was killed, it was 4 GB |
| bird | v2.0.8-52-g8eea396b | 500 | 1000 | 4 | | | | | | | -s | 2021-06-12 | 16 | 64 | single table | the Python process was killed, it was 4 GB |
| frr | 7.7-dev_git | 500 | 1000 | 2149 | | | | | | | | 2021-06-13 | 16 | 64 | | |
| gobgp | 2.28.0 | 500 | 1000 | 4 | | | | | | | | 2021-06-14 | 16 | 64 | | |
| bird | v2.0.8-52-g8eea396b | 250 | 1000 | 4 | 3 | 3 | 0 | 1:41 | 59 | 0.325 | -s | 2021-06-15 | 16 | 64 | single table | |
| frr | 7.7-dev_git | 250 | 1000 | 427 | 3 | 3 | 0 | 9:15 | 12 | 0.442 | | 2021-06-16 | 16 | 64 | | |
| frr | 7.5.1_git (d64c849b0bce) | 250 | 1000 | 1 | 2 | 2 | 0 | 1:37 | 47 | 0.282 | | 2021-07-27 | 16 | 64 | | |
| gobgp | 2.28.0 | 250 | 1000 | 6 | 446 | 446 | 0 | 8:23 | 1189 | 0.537 | | 2021-06-17 | 16 | 64 | | |
| bird | v2.0.8-52-g8eea396b | 1 | 10000000 | | | | | | | | -s | 2021-06-18 | 16 | 64 | single table | the Python process was killed before ExaBGP started |
| frr | 7.7-dev_git | 1 | 10000000 | | | | | | | | | 2021-06-19 | 16 | 64 | | the Python process was killed before ExaBGP started |
| gobgp | 2.28.0 | 1 | 10000000 | | | | | | | | | 2021-06-20 | 16 | 64 | | the Python process was killed before ExaBGP started |
| gobgp | 2.28.0 | 10 | 10000 | 6 | 9 | 9 | 0 | | 3650 | 0.16 | | 2021-06-11 | 64 | 256 | m6g.16xlarge | |
| bird | v2.0.8-52-g8eea396b | 10 | 10000 | 3 | 2 | 2 | 0 | 0:16 | 93 | 0.128 | | 2021-06-11 | 64 | 256 | m6g.16xlarge | |
| bird | v2.0.8-52-g8eea396b | 100 | 10000 | 4 | 33 | 33 | 0 | 1:55 | 104 | 0.129 | -s | 2021-06-11 | 64 | 256 | m6g.16xlarge, single table | |
| gobgp | 2.28.0 | 100 | 10000 | 6 | 477 | 477 | 0 | 9:20 | 3650 | 1.67 | | 2021-06-11 | 64 | 256 | m6g.16xlarge | |
| bird | v2.0.8-52-g8eea396b | 500 | 10000 | 4 | | | | | | | -s | 2021-06-11 | 64 | 256 | m6g.16xlarge, single table | the Python process was killed, it was 4 GB |
| gobgp | 2.28.0 | 500 | 10000 | 5 | | | 0 | | | | | 2021-06-11 | 64 | 256 | m6g.16xlarge | |
| bird | v2.0.8-52-g8eea396b | 500 | 1000 | 4 | 53 | 53 | 0 | 3:00 | 101 | 0.117 | -s | 2021-06-11 | 64 | 256 | m6g.16xlarge, single table | |
| gobgp | 2.28.0 | 500 | 1000 | 4 | 1518 | 1518 | 0 | 27:16 | 3188 | 1.08 | | 2021-06-11 | 64 | 256 | m6g.16xlarge | |
| bird | v2.0.8-52-g8eea396b | 10 | 1000000 | | | | | | | | | | | | | OOM killed at 31 GB |