PowerDNS at Open-Xchange Summit in Berlin, 8-9 October 2015

Hi everybody!

Since we are now happily part of Open-Xchange, together with our friends at Dovecot IMAPd, we will also be present at the Open-Xchange summit in Berlin, October 8-9 this year. This is a free event, and you are invited!

Besides marketing presentations like ‘The Power of DNS to Engage More Customers’, we’ll also be having a serious technical presentation on Friday about PowerDNS. Feel free to invite your manager to the first presentation and join us for the second one ;-)

Also important: the summit includes a ‘Bier-Xchange’, which serves more than just beer, plus a party at the end. More about the event can be read at http://summit.open-xchange.com/

All PowerDNS users are cordially invited to join the summit, which is free of charge: register at http://www.cvent.com/d/6rq9my/4W or use the ‘Register Now’ button at the bottom of the http://summit.open-xchange.com/ page.

If you register, please drop us an email (powerdns.ideas @ powerdns.com) so we can invite you to the PowerDNS-specific gathering during the drinks and party.

Also: feel free to invite your manager to show that PowerDNS is a real company and more than some free software from the internet.


        Bert & Team

What is a PowerDNS Backend? And how do I make it send an NXDOMAIN?

PowerDNS is a dynamic nameserver, with a ton of backends. If the supplied backends aren’t flexible enough, our architecture enables operators to write their own, or to use one of our forwarding backends (Pipe and Remote), which can send PowerDNS queries over a pipe, a UNIX domain socket, an HTTP connection or even a ZMQ link.

Over time, many operators have done just that, and this allows you to mix a ‘real’ nameserver, with everything you can expect from that, with a custom data source.

Very often however (weekly at this point!), we get questions from users confused about our backends:

  1. Why does my backend get ANY queries, when no ANY queries are sent to the nameserver?
  2. How do I generate an NXDOMAIN response from my backend?
  3. Why do I get SOA queries, even for domains not in my backend?
  4. Why does PowerDNS ignore the records my backend sent back to put in the packet?
  5. Why do I get more backend queries than DNS queries (sometimes)?
  6. Why do I get *way less* backend queries than DNS queries at other times?
  7. Why are backends launched for AXFRs?

All of these questions stem from a misunderstanding of what a PowerDNS backend is, possibly combined with the fact that people want things that are not easily achieved from a backend. We may not have been doing a sufficiently good job explaining PowerDNS backends.

To explain, we must first establish what a PowerDNS backend is not. It is, for example, not this:


In this model, packets from the internet get handed over to a backend, and the backend drafts an answer packet for PowerDNS to send out. We do have functionality that does that, more about which later, but backends are not it. Backends get DNS queries and return data, or perhaps none if there isn’t any.

So why did we not build it as above? It turns out DNS needs a lot of logic. So for example, if a nameserver gets a question for ‘www.powerdns.com’, it has to figure out if it is even authoritative for that question, or in other words, does it have a zone with relevant data for that query? To figure that out, the RFC algorithm tells us we must find the best zone that matches the query. Initially this could be ‘www.powerdns.com’, and if we don’t have that zone, we should look for ‘powerdns.com’, then ‘com’ and finally perhaps a root zone.
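The label-stripping search described above can be sketched in a few lines of Python. This is an illustration only, not the actual PowerDNS implementation, and the zone names are made up (checking for a root zone is omitted for brevity):

```python
# Sketch of the "best matching zone" search: strip labels from the
# left of the query name until a hosted zone matches.
def best_zone(qname, zones):
    """Return the most specific zone in `zones` that contains qname."""
    labels = qname.rstrip(".").split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in zones:
            return candidate
    return None  # not authoritative for this name

zones = {"powerdns.com", "example.org"}
print(best_zone("www.powerdns.com", zones))  # -> powerdns.com
```

For ‘www.powerdns.com’ this tries ‘www.powerdns.com’, then ‘powerdns.com’ (a hit), exactly the search order described above.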

Once we have the best zone ready, we must check if a ‘www’ record exists, if there is perhaps a delegation of ‘www’ to another server, if there is perhaps a CNAME for it, if there is a wildcard ‘*.powerdns.com’ record etc. DNSSEC brings further questions, should we do DNSSEC processing, and if so, are there keys we need to sign with?

Because of all this complication, and because back in 2000 we already predicted there would be many, many backends, we decided to keep the ‘DNS logic’ central: implement it once, and leave the backends as pure and simple providers of data. It might have been better had we called the backends ‘DNS data stores’. But alas, we missed that chance.

So now that we’ve described how it doesn’t work, let’s check out the present day reality:


Packets come in from the Internet and get processed by our PacketHandler class. This class has all the logic on best zone selection, CNAMEs, wildcards, delegations, DNSSEC metadata etc. And to figure this all out, it sends many questions to the UeberBackend. The UeberBackend is subsequently responsible for distributing the queries over the multiple possible configured backends. In addition, it hosts a cache which optionally remembers answers (or lack thereof) a backend provided previously.

The best way to understand this procedure is that the PacketHandler turns DNS packets into data backend queries. This also means that backends should do no thinking. Just answer the question, with data or none. The PacketHandler can send many kinds of questions depending on the nature of your zone. For example, it may ask about SOA records, even for zones you do not host in your backend. This is because when a question comes in for ‘www.something.com’, PowerDNS must go hunt for a backend with relevant data.

As a further example of how things work, if PowerDNS asks a backend for ‘www.powerdns.com’, it can answer with a CNAME to ‘webserver1.cloudprovider.com’. PowerDNS itself then will attempt to follow the CNAME chain and check if there is data for the ‘cloudprovider.com’ zone. The backend does not need to think about this!

Why does my backend get ANY queries when no such queries are sent to the nameserver?

As a speedup, PowerDNS may ask ANY questions, which allow it to see if there is a CNAME for the DNS query name and, if there is no CNAME, to get the right answer from the same question. This answers questions 1 and 3 from the list above.

How do I generate an NXDOMAIN answer from my backend?

Answering question 2, generating an NXDOMAIN from your backend also becomes easy: you need to convince PowerDNS that you do host the domain in question by delivering a SOA (Start of Authority) record when asked for your zone, and when you then get a question for ‘nosuchdomain.powerdns.com’, return... nothing. From this PowerDNS will conclude that 1) you host the zone and 2) there is no record called ‘nosuchdomain.powerdns.com’. And this will lead to an NXDOMAIN.

So, to clarify even further: if you run a zone called ‘maik.de’ and PowerDNS receives a DNS query for ‘nosuchdomain.maik.de’, it will ask your backend if it knows about ‘nosuchdomain.maik.de’, and you should return no data. Next it will ask about ‘maik.de’, and if it asked for ANY or the SOA record, you should return the SOA record. From this PowerDNS will conclude that it should send out an NXDOMAIN.
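As an illustration, here is a minimal Pipe backend sketch in Python along these lines. Hedging applies: it assumes the version 1 pipe ABI (tab-separated ‘HELO’, ‘Q’, ‘DATA’, ‘END’ lines), and the zone name and SOA content are made up; consult the Pipe backend documentation before building on this.

```python
#!/usr/bin/env python
# Minimal Pipe backend sketch (ABI version 1) for a hypothetical zone
# 'maik.de': answer SOA/ANY queries for the apex, return no data for
# everything else, and let PowerDNS turn that into an NXDOMAIN.
import sys

ZONE = "maik.de"
SOA = "ns1.maik.de ahu.maik.de 2015010101 28800 7200 604800 86400"

def handle(line):
    """Return the response lines for one line of pipe-backend input."""
    fields = line.rstrip("\n").split("\t")
    if fields[0] == "HELO":
        return ["OK\texample backend"]
    if fields[0] == "Q":
        qname, qclass, qtype, qid = fields[1], fields[2], fields[3], fields[4]
        out = []
        if qname == ZONE and qtype in ("SOA", "ANY"):
            # ABI v1 DATA format: DATA qname qclass qtype ttl id content
            out.append("DATA\t%s\t%s\tSOA\t3600\t%s\t%s" % (qname, qclass, qid, SOA))
        # For any other name we add no DATA lines at all: an empty
        # answer here is what makes PowerDNS emit the NXDOMAIN.
        out.append("END")
        return out
    return ["FAIL"]

if __name__ == "__main__":
    for line in sys.stdin:
        for reply in handle(line):
            print(reply)
        sys.stdout.flush()
```

Note that the backend does no thinking at all: it only states what data it has, and the PacketHandler draws the NXDOMAIN conclusion.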

Why does PowerDNS ignore the records my backend sends back to put in the packet?

Answering question 4, if a backend sends back records that PowerDNS did not ask for, these do indeed not end up in your packet. PowerDNS asked the backend if it had data for a certain name, and any replies not for that name are out of spec. Not only might PowerDNS ignore them, it could also decide your backend is not functioning correctly and drop the packet.

Why do I get way less/way more backend queries than DNS queries?

Answering questions 5 and 6: you may indeed get many more backend queries than DNS packets coming in. An individual DNS query may cause 4 or more backend queries. Each of these queries is necessary to figure out details about the domain. If your Pipe backend is flooded with questions it does not want to hear about, the ‘pipe-regex’ feature is available to quickly deny any knowledge about such domains.
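For example, a pdns.conf fragment along these lines (the zone and script path are hypothetical; check the Pipe backend documentation for the exact match format on your version) keeps questions for anything other than ‘maik.de’ away from the backend entirely:

```
launch=pipe
pipe-command=/usr/local/bin/backend.py
# Only questions matching this regex reach the backend; the question
# is matched in 'name;type' form, everything else is denied outright.
pipe-regex=^(.*\.)?maik\.de;.*$
```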

You may also get far fewer questions, since the UeberBackend has a cache that optionally stores previous answers from your backend. If this is not what you want, the caches can be disabled, which may be useful if you want to provide fully dynamic answers.
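A hypothetical pdns.conf fragment that disables the caches, so every DNS query reaches the backend, might look like this (setting names as in the 3.x Authoritative Server; verify them against the documentation for your version):

```
# Packet cache: whole answer packets
cache-ttl=0
# Query cache: positive and negative backend answers
query-cache-ttl=0
negquery-cache-ttl=0
```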

Why does PowerDNS instantiate extra copies of my backend for AXFR?

Finally, the answer to question 7, why do new backends get spawned for zone transfers? An AXFR may last minutes, and since during those minutes a backend can do nothing else, a dedicated instance is created for each zone transfer (both incoming and outgoing).


We hope that this information has been helpful in clarifying what a PowerDNS backend is and isn’t.

Finally, if it turns out a PowerDNS backend is not actually what you want, you may be interested to hear about a currently undocumented feature in the PowerDNS Authoritative Server that we use for testing. This allows you to intercept queries sent to PowerDNS Authoritative Server, much like is possible (and documented) in the PowerDNS Recursor.

From the point of interception, you can mangle questions to your heart’s delight from Lua, and send back any answer you want. This feature will become documented and supported in PowerDNS 4.0, but for now, study the ‘lua-prequery-script’ configuration parameter. And if you have any questions, you can find us on the mailing lists or IRC.

Good luck!

Authoritative Server 3.4.5, 3.3.3 and Recursor 3.7.3, 3.6.4 released

We’re pleased to announce the availability of a number of PowerDNS releases. These releases share a performance update that prevents short bursts of high resource usage with malformed qnames. The full changelogs can be viewed by using the links below:

PowerDNS users are recommended to update to the latest release, preferably in the latest release branch: 3.4 for the Authoritative Server and 3.7 for the Recursor.

Tar.gz and packages are available on:

PowerDNS needs your help: What are we missing?

Hi everybody,

As we’re working on PowerDNS 4.x, we are wondering: what are we missing?

The somewhat longer story is that as a software developer, a sort of feature-blindness creeps up on you. We try to make the software better and faster, but by focusing so much on the technology, one can lose sight of the actual use cases.

In this way it is possible that a software vendor neglects to implement something, even though many users desperately want it. If so, please speak up! The short version: please mail powerdns.ideas at powerdns.com your ideas!

As concrete examples, PowerDNS took its time to add an API, and once we had it, people immediately started using it, even before we had documented the API. Similarly, for many years, we did not deliver a proper graphing solution, and now that it is there it is highly popular.

But what more are we missing? Should we expand into IPAM and do DHCP and IP address management? Should we make an out of the box NAT64/DNS64 solution? Do we need to improve replication beyond “database native” and “AXFR-based” (so ‘super-duper-slave’)?

Should we start doing versioned databases so people can roll back changes?  IXFR?

Should we add a built-in DNS based load balancer where we poll if your IP addresses are up?

Or would it be wise to move on beyond the geographically versatile backends and simply add ‘US’, ‘Europe’, ‘Oceania’ and ‘Asia’ IP address profiles?

Should the recursor gain cache sharing abilities? Or pre-fetching? Or even TTL-faking in case auths are down?

The list above is just to prime your imagination: if you have any ideas on what you are missing, please reach out to powerdns.ideas at powerdns.com, or use our contact form.


PowerDNS 2.x End of Life Statement

21st of May 2015

PowerDNS Authoritative Server 2.9.22 was released more than 6 years ago, in January 2009. Because of its immense and durable popularity, some patch releases have been provided, the last of which was made available over three years ago, in January 2012.

The 2.9.22.x series contains a number of probable and actual violations of the DNS standards. In addition, some behaviours of 2.9.22.x are standards conforming but cause interoperability problems in 2015. Finally, earlier releases are impacted by PowerDNS Security Advisory 2012-01, which means PowerDNS can be used in a denial of service attack.

Although we have long been telling users that we can no longer support the use of 2.x, and urging upgrades to 3.x, with this statement we formally declare 2.x end of life.

This means that any 2.x issues will not be addressed. This has been the case for a long time, but with this statement we make it formal.

To upgrade to 3.x, please consult the instructions on how to upgrade the database. If you need help with upgrading, we provide migration services to our supported users. If you are currently running 2.9.22 and need help to tide you over, we can also provide that as part of a support agreement.

But we urge everyone to move on to PowerDNS Authoritative Server 3.4 or later – it is a faster, more standards conforming and more powerful nameserver!

DNS-OARC Spring Workshop 2015

This weekend, PowerDNS attended the DNS-OARC Spring Workshop 2015 in force, with 100% attendance. I shamefully have to admit this is the first time I’ve gone to an OARC workshop, but I was well rewarded. Both the speakers and audience were stellar. Full video is available in four parts: Saturday morning, Saturday afternoon, Sunday morning, Sunday afternoon.

UPDATE: Geoff Huston also did a writeup with more and different details from mine. Very much worth your time!

In this post, I briefly want to summarise the big themes of the meeting. But I want to start off with describing the audience. We had people running the biggest (cc)TLDs on the planet, we had authors from all the big name servers. There were the people that run the root and plan the DNSSEC key transitions. The largest access providers on the planet. Folks instrumenting the whole internet to get statistics on DNS(SEC) performance. For fun. The people that protect our websites from denial of service attacks. In short, everyone was there. This workshop was easily the best DNS event I have ever attended.

The biggest theme of the meeting was the flood of ongoing reflection attacks. In short, bad people send random questions to open resolvers on the internet. These in turn often forward their queries to powerful recursive name servers over at providers. And these reach out frequently and insistently to numerous “authoritative servers” to find answers to these questions.

However, the actual goal is not to get answers to questions. The actual goal is to perform a powerful denial of service attack on these “authoritative servers”, which frequently don’t even run DNS. But they do get bombarded with DNS traffic from all over the world and go offline.

These attacks have been going on for the past year or so, and are the biggest thing in DNS in a long time. All large resolver implementations have had to implement changes to protect themselves from the attacks, and to attempt to no longer take part in this malicious traffic. This has not been easy.

At the workshop, we had presentations describing how BIND and Nominum name servers implement their protection strategies, with ISC implementing various tuneable knobs that attempt to detect unresponsive servers, and Nominum doing (among other things) “threat lists” of domains currently known to be involved in attacks.

In addition, Kazunori Fujiwara of JPRS presented how NSEC records from DNSSEC could be used to silence such attacks – an NSEC denial-of-existence range can be used to block many random queries, as long as we have a denying range for them. There was some discussion of whether this would work for NSEC3 too, and OARC attendees are now pondering that question.

Tangentially related, Microsoft presented research on how well the internet performs negative caching, specifically how long. I was very happy to see Microsoft open up and become a part of the DNS community. Microsoft has long had smart people working on DNS, but up till maybe a year ago, you’d never see anyone from Microsoft present at a workshop or working group. This has now changed, and the internet will surely be a better place with Microsoft at the table. Even if Microsoft legal still insists they carry a ‘Microsoft Confidential’ warning on every slide!

Moving on: dealing with random packet floods requires the best statistics, and John Dickinson presented work on Hedgehog which can present DSC data.

Another major theme of the meeting was ‘dealing with packet floods’, random or not. Part of this is writing smart name server software, but at some level of traffic, packets need to be processed or blocked at the network layer. Various folks presented on this, and I want to specifically thank Cloudflare for sharing their vast DoS-quenching knowledge. It is not common for the DoS protection people to open up on how they do their work, because these are of course the crown jewels. However, not everyone can be a Cloudflare customer, so it is great that we can learn from them.

In short, they shared some key insights. Efficient name server software is mostly limited by the UDP stack in Linux and other operating systems, and this stack really hits a wall somewhere around 200kqps. This was repeated in several other presentations, and the reasons for this limitation appear to be pretty fundamental. With careful tuning and specific hardware configuration, higher numbers can be achieved, but it is uphill work. But you have to view this presentation; it is full of unexpected insights.

At the end of this post, I get back to both the ‘200kqps’ issue and possible new ways to deal with unrequested packet floods.

Cloudflare separately presented their gross DNSSEC hacks, which while clever don’t fill me with glee. In the words of Filippo “I’ve done stuff I ain’t proud of and the stuff I am proud of is disgusting”. Read all about their NSEC Shotgun in any case.

Verisign also opened up with various eye-popping statistics on denial of service attacks they have to weather all day, and what countermeasures they have in place. During one such attack last year, Verisign filtered a big PowerDNS user, causing mayhem for us. Piet Barber feared I would call him out on that during the presentation, but in fact we had no “hard feelings”. I mostly feel bad we were part of relaying that attack to the GTLD servers!


Various people reported interesting measurements. Geoff Huston of APNIC did another one of his incredible presentations, this time on how well ECDSA signed domain names get resolved on the internet. Geoff really is an asset to the world, he truly has his finger on the pulse of the internet. In short, 80% of DNSSEC resolvers can validate ECDSA records. The ones that can’t behave oddly, or in the words of Geoff to the implementors present “you lot write a lot of crap”. I am sure this is true. Another key insight is that around 2000 resolver ‘pairs’ represent 95% of query load on authoritative nameservers. In another presentation, it was reported there are around 150k bona fide resolver IP addresses in the world, and this matches my own observations. In short, if you are under DoS as an auth, you could do worse than block everyone except the top 99% of resolvers.

Shumon Huque of Verisign presented measurements of the privacy enhancing qname minimisation idea, and in short, because of Akamai’s current nameserver implementation, it works very badly. Akamai is aware of the problem and has made a vague promise to do something about it one day.

Ralf Weber of Nominum did measurements of how well several resolvers deal with random query attacks. Unsurprisingly, Nominum came out best, but this was a very good, even-handed presentation, which showed that most modern nameservers have been updated to deal well with such attacks. NOTE: the current presentation shows unfavourable numbers for Unbound, but during the meeting Ralf and Wouter Wijngaards found out why this was the case, and Ralf will be redoing the tests. As with Microsoft, I’m very happy to see Nominum join the (public) DNS community, and this can only be helpful in improving the state of DNS.

William Sotomayor of OARC presented remotely on how various countries and university networks use AS112 networks, but I have to admit most of this presentation went over my head, as I’m not very good with large-scale internet routing. Similarly, his work on ‘RSSAC-02’ is undoubtedly very important, but outside my expertise.

Joao Luis Silva Damas worked hard to get access to actual customer DNS traffic to do statistics on that, and when he finally got the data he tore it apart and learned a lot. Recommended reading. After this presentation, various DNS trace anonymization strategies were discussed, including the PowerDNS ‘dnswasher’ and the NZ registry’s more sophisticated ‘keyed’ blinding solution. For both these programs, however, keep in mind that data can frequently be de-anonymized with sufficient correlation!

Bruce Van Nice of Nominum showed statistics on the life of a resolver and popular domain names. I call fake on this one since not a single domain listed was of the ‘XXX’ variety ;-) I assume these were quietly filtered from the graphics so as not to upset anyone. The PowerDNS statistics I presented later had actual adult domain names in them, but we’re Dutch, so we get away with that!

Sebastien Castro of the NZ registry presented work on how to discover popular or important domain names using statistical measures, and showed how these change over the week. Similarly Francisco Cifuentes of CL NIC research presented on how to do realtime DNS analytics with Apache Storm and other technologies.

Root zone related

The root is, of course, at the root of all of DNS and thus the internet. Anything affecting the root affects us all. And there is enough stuff to think about. The root Key Signing Key is getting stale and the batteries on the Hardware Security Modules housing the KSK are similarly showing their age. I understand this is a problem. To change the KSK is a major effort however.

For one, during the change, root answer packets might get (a lot) bigger. Duane Wessels of Verisign presented numbers on what various changeover scenarios would mean for fragmentation and truncation. The good news appears to be that the sky isn’t falling.

Meanwhile, before the root and most important (cc)TLDs got signed, there was the DLV-registry, which made it possible to specify your DNSSEC keys in a parallel registry over at ISC and some other places. It is now high time to sunset this registry, and Jim Martin of ISC set out how they plan to do that. After the presentation a huge line formed at the mike to wholeheartedly support the rapid shutdown of the DLV registry.

Kazunori Fujiwara presented on the changing ratio between JP and ROOT queries on the JPRS infrastructure.

Finally at the end of the day, Edward Lewis (now of ICANN) presented about the process of changing the root KSK. This is fraught with difficulty and I for one doubt it will happen before circumstances force us to. Ed and his very capable friends, including Geoff Huston, are giving it their best however, and surprisingly (to me at least), during and after the presentation, some new ideas were raised to facilitate the transition. This involves having one root-server serve with the old keying material, perhaps giving people a chance to limp on during the transition.

Other presentations

Florian Maury of ANSSI, the French government IT security agency, presented on the iDNS attack they discovered, which felled PowerDNS, BIND and Unbound last year, but notably not DJBDNS, since back in 1999 Dan Bernstein was smarter than all of us and did not fall for that one. This provided the brief moment of drama of OARC, with one audience member claiming the iDNS stuff was not news, not important, and that by publishing it Florian had only helped potential attackers. Luckily sanity returned and this member of the audience apologized later in the day. Who says DNS is boring?

Florian also presented on the French government DNS guidelines (which are, of course, in French), but that interestingly (and correctly I think) do not propose to implement DNSSEC before a host of other best practices in DNS are implemented, including registry locks.

Patrik Wallström of .SE presented on Zonemaster, a Swedish-French collaborative zone checker, intended to supersede DNSCheck and Zonecheck.

And I did a presentation too on dnsdist, a highly DoS and DNS aware load-balancer, where I asked the audience if there is room for a ‘smart load balancer that has some features of a nameserver’. The feedback I got was overwhelmingly yes. Further discussions afterwards were instrumental in finding the limits of what dnsdist should and should not do.

Followup work

Two things stand out for me from this workshop. For one, Cloudflare and many others feel compelled to implement a ‘Super NXDOMAIN’ answer that allows an authoritative server to send a response to a bona fide resolver: “you can stop sending me queries for x.y.somedomain.com, or in fact anything within somedomain.com. It is not going to happen.”. We jokingly called this the Shut Up Packet. However, it appears this idea has merit and Olafur already wrote some text on it. We will also be working on this and studying the parallels with the (failed) ICMP Source Quench packet, or in fact even a real ‘NXDOMAIN’ response.

Secondly, too many people to mention lamented the suboptimal performance of Linux (and UNIX in general, but specifically Linux) in dealing with UDP packets. Where you can now blast gigabits of TCP/IP, the supposedly more efficient UDP struggles to reach hundreds of thousands of packets per second. Part of this problem is neglect of UDP in the kernel. Part of it is an inefficient ‘one system call per packet’ interface (not usefully addressed by recvmmsg()).

Since we all feel the pain of this and have to buy special hardware to get better (filtering) performance, I feel it is time to liaise with various kernel folks to see what could be done. Individually, everyone I spoke to agreed, but nothing has coalesced yet. I’ll continue to agitate for something to happen, please let me know if you want to join in!

Important Update for Security Advisory 2015-01

Last week, we released Security Advisory 2015-01, with text suggesting that only specific platforms were seriously affected. We must now report that this was incorrect: all platforms are impacted. The advisory has been updated to that effect.

Furthermore, by popular demand, we have released Authoritative Server 3.3.2, an update to version 3.3.1 which includes DNSSEC improvements and of course a patch for the security issue. Click these links: release notes, tarball, debs, RPMs.