# Highly Available DNS Ad-blocking with Blocky
## Introduction
There is a significant number of software packages that offer ad-blocking at one level or another. The majority live on a single device, or even within the browser itself, which works great for keeping the quantity of ads to a minimum while you’re browsing the internet. However, as the use of ad-blockers becomes more widespread, site designers find new and interesting ways to get your eyeballs exposed to their adverts.
A very common Homelab project is installing a DNS-level ad-blocker such as Pi-hole. Instead of existing on a single device or browser, these network-wide blocking tools check any DNS requests against a list of known advertisers, and drop any requests that match, meaning the advert content never makes its way to your machine. These have the added advantage of blocking adverts on devices that you might not have the ability to install ad-blocking software on. However, as soon as you start messing with DNS… you’re almost inevitably asking for trouble!
One of the first projects I undertook when setting up my new home network was installing Pi-hole; however, I soon found myself in trouble the first time I tried to do any kind of maintenance. The moment my Pi-hole instance was offline, the entire network ground to a halt, even to the point where I couldn’t bring Pi-hole back up again!
One solution to this problem is to run multiple Pi-hole instances; there are even tools available that help you keep separate instances in sync. However, I didn’t like the idea of manually maintaining multiple separate installations, so went about finding a simpler solution.
Enter Blocky! This lightweight piece of software does the majority of what Pi-hole does, while being easily deployable in a “highly available” manner.
## Prerequisites
I run a number of different machines all converged in a Docker Swarm - this article assumes you’re running the same.
You’ll also need some sort of shared storage - in my case I’m running Gluster across my nodes as well, but alternatives exist.
## Keepalived
First up, we need to be able to access our Blocky instance at a static IP address (or preferably multiple addresses). In order to achieve this, I’ve set up Keepalived on all of the Docker Swarm Manager nodes, with a configuration that looks something like:
```
vrrp_track_process check_docker {
  process "dockerd"
  weight 1
  quorum 1
  delay 10
}

vrrp_instance VI_DOCKER {
  interface ens18
  state BACKUP
  virtual_router_id 50
  priority 199
  advert_int 1
  garp_master_delay 5

  authentication {
    auth_type PASS
    auth_pass P455W0RD
  }

  track_process {
    check_docker
  }

  virtual_ipaddress {
    192.168.7.10/20 dev ens18
    192.168.7.11/20 dev ens18
  }
}
```
This tracks whether `dockerd` is running on the node, and advertises the two listed IP addresses while it is. Make sure you change the `interface` line to match whatever the primary network interface is on your device.

I actually have this configuration duplicated and slightly modified on each node, so that `192.168.7.10` and `192.168.7.11` should always end up on different nodes, but that is beyond the scope of this post!
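Once Keepalived is up, it's easy to check which node currently holds a virtual address. A quick sketch, assuming the `ens18` interface and the two VIPs from the config above (adjust both for your own network):

```shell
# List the addresses on the tracked interface and look for either VIP.
# If neither 192.168.7.10 nor .11 is present, another node holds them.
ip -br addr show ens18 | grep -E '192\.168\.7\.1[01]' \
  && echo "this node currently holds a VIP" \
  || echo "VIP is on another node"
```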
## Blocky Configuration
Next, we need to tell Blocky how we want things configured. I’d suggest you have a thorough read of the documentation to understand what all the options mean, but here’s (most of) my setup:
```yaml
port: 53
upstream:
  default:
    - 8.8.8.8
    - 8.8.4.4
    - 1.1.1.1
    - 1.0.0.1
bootstrapDns: 8.8.8.8
blocking:
  blackLists:
    main:
      - https://v.firebog.net/hosts/static/w3kbl.txt
      - https://adaway.org/hosts.txt
      - https://v.firebog.net/hosts/Easylist.txt
      - https://v.firebog.net/hosts/Easyprivacy.txt
      - https://v.firebog.net/hosts/RPiList-Malware.txt
      - https://v.firebog.net/hosts/RPiList-Phishing.txt
  clientGroupsBlock:
    default:
      - main
caching:
  prefetching: true
  cacheTimeNegative: 1m
redis:
  address: dns_cache:6379
  password: cachepassword
  required: false
  connectionAttempts: 10
  connectionCooldown: 3s
```
This will use both the Google and Cloudflare Public DNS servers as our “upstream” providers. We’ll also pull a number of commonly available block lists from a few sources, and then configure a shared cache on Redis.
You can also use Blocky for custom static DNS entries, which is hugely useful for other projects. I have a MySQL query log defined as well, plus Prometheus metrics, but these are all left as an exercise for the reader once you’ve got the basic configuration up and running!
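As an illustration of those static entries, Blocky's `customDNS` block maps names directly to addresses. The hostname and address below are placeholders, not part of my actual setup:

```yaml
# Appended to the Blocky config above; the name and IP are examples.
customDNS:
  customTTL: 1h
  mapping:
    nas.home.lan: 192.168.7.20
```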
## Docker
As mentioned, I’m running a Docker Swarm, so my `docker-compose.yml` looks something like:
```yaml
version: "3.7"

services:
  blocky:
    image: ghcr.io/0xerr0r/blocky:v0.24
    restart: unless-stopped
    ports:
      - "53:53/tcp"
      - "53:53/udp"
    depends_on:
      - cache
    environment:
      - TZ=Europe/London
      - BLOCKY_CONFIG_FILE=/config/config.yml
    volumes:
      - blocky-config:/config/
    networks:
      - internal
    deploy:
      mode: replicated
      replicas: 3
      placement:
        preferences:
          - spread: node.labels.rack
      update_config:
        delay: 60s
        order: start-first
        monitor: 30s

  cache:
    image: redis:7-alpine
    restart: always
    networks:
      - internal
    ports:
      - '6379:6379'
    command: redis-server --save 20 1 --loglevel warning --requirepass cachepassword
    volumes:
      - cache:/data

networks:
  internal:

volumes:
  blocky-config:
    driver: glusterfs
    name: "docker/blocky/config"
  cache:
    driver: glusterfs
    name: "docker/blocky/cache"
```
You’ll need to update the `volumes` section to point at wherever you’re storing your shared configuration, and also provide somewhere for Redis to store its data.
Note the `spread: node.labels.rack` line within the configuration - this is a label I’ve applied to each of my Docker Swarm nodes to differentiate what physical machine they are running on, which allows me to make sure that the 3 Blocky instances are fairly spread out in case a single machine dies.
## Final steps
Once you’ve got this up and running, you can use `dig` to test whether or not DNS resolution is working correctly. Run the following command, and hope you get an answer back!
```
matt@kessel:~$ dig google.com @192.168.7.10

; <<>> DiG 9.16.48-Debian <<>> google.com @192.168.7.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31788
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;google.com.                    IN      A

;; ANSWER SECTION:
google.com.             145     IN      A       142.250.180.14

;; Query time: 0 msec
;; SERVER: 192.168.7.10#53(192.168.7.10)
;; WHEN: Wed Dec 18 16:37:42 UTC 2024
;; MSG SIZE rcvd: 44
```
If this step works, you should be able to test against a known advert domain and get an answer of `0.0.0.0` back, which means your advert blocking is working.
```
matt@kessel:~$ dig doubleclick.net @192.168.7.10

; <<>> DiG 9.16.48-Debian <<>> doubleclick.net @192.168.7.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 21674
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;doubleclick.net.               IN      A

;; ANSWER SECTION:
doubleclick.net.        21600   IN      A       0.0.0.0

;; Query time: 0 msec
;; SERVER: 192.168.7.10#53(192.168.7.10)
;; WHEN: Wed Dec 18 16:40:52 UTC 2024
;; MSG SIZE rcvd: 49
```
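If you'd rather script this check, the output of `dig +short` is enough to tell a blocked answer from a real one. A minimal sketch - the sample values stand in for live query output:

```shell
# Classify a `dig +short` answer: Blocky returns 0.0.0.0 for a blocked
# domain and a real address otherwise. The sample values below stand in
# for live output from e.g.: dig +short doubleclick.net @192.168.7.10
classify() {
  case "$1" in
    0.0.0.0) echo "blocked" ;;
    "")      echo "no answer" ;;
    *)       echo "resolved" ;;
  esac
}

classify "0.0.0.0"         # blocked
classify "142.250.180.14"  # resolved
```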
At this point, you’ll need to go digging in your router settings to find the DHCP Option 6 setting. Once this is changed to your virtual IP addresses configured above, all of your clients should pick up the new setting and start using Blocky as their DNS server, meaning the end of adverts network-wide!
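Every router exposes this differently - as one example, on a router running OpenWrt the setting lives in `/etc/config/dhcp`. Treat this as a sketch rather than a drop-in config:

```
# /etc/config/dhcp on an OpenWrt router (illustrative only)
config dhcp 'lan'
        option interface 'lan'
        # DHCP option 6: hand out both Keepalived VIPs as DNS servers
        list dhcp_option '6,192.168.7.10,192.168.7.11'
```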
Because we’re running multiple instances of Blocky, requests should get fairly balanced between them by the Docker Swarm ingress network. In addition, if we lose any one of the nodes running Docker, we should get quick failover to an alternative instance thanks to Keepalived. I’ll confess that this might not meet industry standards for high availability, but so far it has proven more than enough for my home network!