Double helping of Pi Hole

In episode 100 of Late Night Linux I talked a little bit about trying out Pi Hole and AdGuard to replace my home grown ad blocker based on dnsmasq and a massive hosts file.

I came down in favour of Pi Hole for a couple of reasons, but the deciding factor was that Pi Hole felt a bit more open and that it is built on top of dnsmasq, which let me reuse my existing TFTP config for netbooting the devices that need it.

Now that I’ve been using Pi Hole for a few months I have a much better understanding of its limitations, and the big one for me is performance. Not the performance when servicing DNS requests, but performance when querying the stats data, when reloading block lists, and when enabling and disabling certain lists. I suspect a lot of the problems I was having are down to flaky SD cards.

I fully expect that for most people this will never be a problem, but for me it was an itch I wanted to scratch, so here’s what I did:

Through the actually quite generous Amazon Alexa AWS Credits promotion I have free money to spend on AWS services, so I spun up a t2.micro EC2 instance (1 vCPU, 1GB RAM – approx £10 a month) running Ubuntu.

I installed Pi Hole on that instance along with Wireguard which connects it back to my local network at home. I used this guide from Linode to get Wireguard set up.
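For reference, the tunnel itself is just a point-to-point link between the EC2 instance and the Pi at home. A minimal wg0.conf sketch for the AWS end might look something like this (addresses and keys are placeholders – the Linode guide covers generating the real ones, and the home end sets its Endpoint to the instance’s public IP):

[Interface]
# Tunnel address for the AWS end (placeholder subnet)
Address = 10.99.0.1/24
ListenPort = 51820
PrivateKey = <aws-private-key>

[Peer]
# The Pi Hole box at home (placeholder key and address)
PublicKey = <home-public-key>
AllowedIPs = 10.99.0.2/32
PersistentKeepalive = 25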

The Pi Hole running in AWS hosts the large block lists and is configured with a normal public DNS server as its upstream (I’m using Cloudflare).

Pi Hole running in AWS configured with Cloudflare as its upstream DNS
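Under the hood that amounts to a couple of lines in /etc/pihole/setupVars.conf (the values shown are Cloudflare’s resolvers; the exact file and format may differ between Pi Hole versions):

PIHOLE_DNS_1=1.1.1.1
PIHOLE_DNS_2=1.0.0.1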

I use three Ad block lists:

Pi Hole running on a t2.micro instance is really speedy. I can reload the block list in a matter of seconds (versus minutes on the Pi) and querying the stats database no longer locks up and crashes Pi Hole’s management engine FTL.

The Pi Hole running on my LAN is configured to use the above AWS based Pi Hole as its upstream DNS server and also has a couple of additional block lists for YouTube and TikTok.

This allows me to use Pi Hole on a Pi as the DHCP server on my LAN and benefit from the GUI to configure things. I can quickly and easily block YouTube when the kids have done enough and won’t listen to reason, while the heavy lifting of bulk ad blocking is done on an AWS EC2 instance. The Pi on the LAN caches a good amount of DNS, so everything whizzes along quickly.

Pi Hole on the LAN has a block list of about 3600 hosts, whereas the version running in AWS has over 1.5 million.

All things considered I’m really happy with Pi Hole, and the split-load setup I have now makes it even easier to live with. I would like to see an improved Pi Hole API for enabling and disabling specific ad lists so that I can automate things more easily (e.g. unblock YouTube for two hours on a Saturday morning). I think that will come in time. The split-load setup also allows for easy fallback should the AWS machine need maintenance – it would be nice to have a “DNS server of last resort” in Pi Hole to make that automatic. Perhaps it already does; I should investigate.
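In the meantime the existing API can at least turn blocking off and on wholesale on the LAN Pi Hole – a blunt instrument, but because the bulk ad blocking lives upstream in AWS it effectively just unblocks YouTube and TikTok. A rough sketch, assuming the API token from the web GUI and that the LAN Pi answers on pi.hole:

# disable blocking on the LAN Pi Hole for two hours (7200 seconds)
curl "http://pi.hole/admin/api.php?disable=7200&auth=YOUR_API_TOKEN"
# turn it back on early if needed
curl "http://pi.hole/admin/api.php?enable&auth=YOUR_API_TOKEN"

Stick the first line in a Saturday morning cron job and the kids get their two hours.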

Why not just run Pi Hole on a more powerful computer in the first place? That would be too easy.

If you fancy trying out Pi Hole in the cloud or just playing with Wireguard you can get $100 free credit with Linode with the Late Night Linux referral code: https://linode.com/latenightlinux

DNS over HTTPS in a snap

Background Story

With the recent news about the UK ISP association nominating Mozilla as “Internet Villain of the Year” for enabling DNS over HTTPS (they have since changed their minds and dropped the whole villain category – a good move, I think), I figured it was probably about time that I looked at enabling DoH at home.

Cloudflare have a suite of open source tools called cloudflared which has, among other things, a DNS over HTTPS proxy. By default it points at their 1.1.1.1 service, but you can change that if you want to. Note, at the time of writing there is a bug which seems to stop Google’s DNS service working. If you’re looking to stop people seeing your DNS traffic then Google probably isn’t the right DNS service to use anyway.
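Running the proxy by hand looks something like this (flag names as of the version I built – check cloudflared proxy-dns --help on yours):

cloudflared proxy-dns --address 127.0.0.1 --port 5053 --upstream https://1.1.1.1/dns-query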

I already have dnsmasq running as my DNS server and I have quite a lot of config which I wanted to keep (e.g. ad blocking), so I figured I would add cloudflared’s proxy-dns alongside dnsmasq and have dnsmasq use proxy-dns as its upstream server, which would in turn pass the DNS lookups to 1.1.1.1 over HTTPS. dnsmasq would then cache the results locally.
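The dnsmasq side of that is only a couple of lines, assuming proxy-dns is listening on 127.0.0.1 port 5053:

# ignore /etc/resolv.conf and send everything upstream to proxy-dns
no-resolv
server=127.0.0.1#5053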

So far, so good. I’d built cloudflared on my desktop to test it, now I wanted to move it on to the Raspberry Pi, run it as a service, and ideally have a package so that I didn’t have to mess around rebuilding it in loads of places if I wanted to move to a different box.

Make a snap

Making a snap of proxy-dns would give me the package I wanted, and would allow me to run proxy-dns as a daemon with two words in the YAML. Snapcraft’s build service would build me an ARM binary, as well as loads of others, for free.

I downloaded the source for cloudflared and added three files:

  1. A snapcraft.yaml which describes how to build cloudflared and sets it to be run as a daemon
  2. A configure hook which lets me set some config options
  3. A launcher script which sets the config at run time

None of these are very complicated, as you can see. Hat-tip to Popey for help with the snapcraft.yaml.
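For a flavour of it, the snapcraft.yaml boils down to something along these lines (a simplified sketch rather than the exact file in the repo):

name: cloudflaredoh
summary: DNS over HTTPS proxy using cloudflared
description: Runs cloudflared's proxy-dns as a daemon.
version: git
grade: devel
confinement: strict

apps:
  cloudflaredoh:
    command: bin/launcher
    daemon: simple
    plugs: [network, network-bind]

parts:
  cloudflared:
    plugin: go
    source: .

The “two words” doing the heavy lifting are daemon: simple, which is what tells snapd to run it as a service.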

Then I pushed these back to my project on GitHub and added that project to the Snapcraft.io build service. Now, whenever I push a new change to GitHub the snap will get rebuilt automatically and uploaded to the store! All I would need to do is a snap refresh and I’d be upgraded to the latest version. All my requirements solved in one place.

How to use the snap

If your Pi is running an OS with snapd (e.g. Ubuntu MATE or Ubuntu Core), it’s dead easy:

sudo snap install cloudflaredoh --edge

The snap is currently in the edge channel, meaning it’s not ready for the main stage just yet. Once I’ve spent a bit more time on it, I will move it to stable.

sudo snap set cloudflaredoh address=127.0.0.1
sudo snap set cloudflaredoh port=5053

These two commands configure proxy-dns to listen on 127.0.0.1 and on port 5053. If you want it to answer DNS queries from other computers on your network, set address to the IP address of the box (or 0.0.0.0 to listen on all interfaces) and set port to the default DNS port of 53.

sudo snap get cloudflaredoh

This will show you the currently set config options.

sudo snap restart cloudflaredoh

This restarts proxy-dns so it picks up the new config.

Now you can use something like nslookup to query the DNS server and make sure it’s doing what you expected.
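For example, pointing dig (or nslookup) straight at the proxy on its non-standard port:

dig @127.0.0.1 -p 5053 example.com
nslookup -port=5053 example.com 127.0.0.1

If answers come back, dnsmasq (or your other clients) can be pointed at it.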

10 Steps To DNS-over-HTTPS

  1. Get a Raspberry Pi
  2. Download Ubuntu Core and write it to an SD card
  3. Put the SD card in your Pi and boot it
  4. Set up the network on Ubuntu Core (tip: register for an Ubuntu One account first)
  5. sudo snap install cloudflaredoh --edge
  6. sudo snap set cloudflaredoh address=0.0.0.0
  7. sudo snap set cloudflaredoh port=53
  8. sudo snap restart cloudflaredoh
  9. Configure your client’s DNS server as the IP address of your Pi
  10. Have a cup of tea

Update 2019-08-01

I’ve got a new GitHub repo set up with an improved snapcraft.yaml which pulls directly from the upstream project. I’m aiming to get this hooked up to the Snapcraft build service so that we can package the latest version automatically. More on this later. In the meantime, you can clone it and build the latest version yourself:

https://github.com/8none1/cloudflarednsproxy
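Building it locally is the usual snapcraft routine – roughly the following, on a machine with snapcraft installed (the exact .snap filename depends on the name and version in the YAML):

git clone https://github.com/8none1/cloudflarednsproxy
cd cloudflarednsproxy
snapcraft
sudo snap install --dangerous ./*.snap

The --dangerous flag is needed because a locally built snap isn’t signed by the store.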

Apache – 20 second lag before serving pages

TL;DR: There is no such thing as a “none” value for Allow/Deny in Apache 2. If you’ve got “deny from none” or “allow from none” then Apache treats “none” as a hostname, and you’re doing DNS lookups on each host that connects regardless of whether you want to or not.

I was experiencing a very annoying problem trying to serve static HTML pages and CGI scripts from Apache 2 recently.  The problem manifested itself like this:

  • Running the scripts on the server hosting Apache shows they ran in well under a second
  • Connecting to the Apache server from the LAN, everything was fine and ran in under a second
  • Connecting to the Apache server from the Internet, but from a machine known to my network, ran fine
  • Connecting from an AWS Lambda script, suddenly there is a 20 second or more delay before getting data back
  • Connecting from Digital Ocean, there is a 20 second delay
  • Connecting from another computer on the internet, there is a 20 second delay

What the heck is going on here?

I spent time trying to debug my CGI scripts and adding lots more logging and finally convinced myself that it was a problem with the Apache config and not something like MTUs or routing problems.

But what was causing it?  It started to feel like a DNS related issue, since the machines where it ran fine were all known to me and so had corresponding entries in my local DNS server.  But but but… I clearly had “HostnameLookups Off” in my apache2.conf file.  When I looked at the logs again, I noticed that hostnames were indeed being looked up, even though I had told Apache not to.


Why?  Because I don’t know how to configure Apache servers properly.  At some point in time I thought this was a good idea:

Order deny, allow
Deny from none
Allow from all

But, there is no such thing as a “none” value. Apache interprets “none” as a host name and so has to look it up to see if it’s supposed to be blocking it or not, which causes DNS lookup delays and hostnames to appear in your Apache logs.
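The fix is simply to drop the imaginary “none”. With Apache 2.2-style access control that leaves something like this (on Apache 2.4 the equivalent is a single Require all granted):

Order allow,deny
Allow from all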

Enlightenment came from here: http://kb.simplywebhosting.com/idx/6/213/article/

There is also a suggestion that inline comments can do the same thing here:  https://www.drovemebatty.com/wp/entries/11