Building a Dynamic DNS Update Proxy
My router gets a new public IPv4 address and IPv6 prefix from my ISP once every 24 hours. So for you to be able to read this post I have to update some public DNS records every time my network's IP changes.
Previously, I used Dyn for this but I recently decided to close my account with them and move to a different DNS provider. Unfortunately, my router doesn't have built-in support for the new provider's update API (it does for Dyn). So I decided to build my own update API as a proxy between my router and my DNS provider. How hard can that be, right?
Not That Hard
When my router gets a new IP address from my ISP, it sends a GET request to `https://dns.chaos-wg.net/update`, which has a static IP outside of my local network and hosts a WSGI application. The query string for a typical request looks like this:

```
ipv4=89.14.163.156&ipv6=2a01:c22:b200:69f9:1234:56ff:fe78:9abc&prefix=2a01:c22:b045:de00::/56
```
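As a minimal sketch of how the server side might pull these values out of the query string with nothing but the standard library (the variable names here are mine, not the actual proxy's; a WSGI application would read the string from `environ['QUERY_STRING']`):

```python
from urllib.parse import parse_qs

# The WSGI application would read this from environ['QUERY_STRING'].
query = ('ipv4=89.14.163.156'
         '&ipv6=2a01:c22:b200:69f9:1234:56ff:fe78:9abc'
         '&prefix=2a01:c22:b045:de00::/56')

# parse_qs() maps each parameter name to a list of values;
# here every parameter appears exactly once.
params = {name: values[0] for name, values in
          parse_qs(query, strict_parsing=True).items()}

print(params['ipv4'])    # 89.14.163.156
print(params['prefix'])  # 2a01:c22:b045:de00::/56
```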
My router receives a /56 prefix from my ISP (`2a01:c22:b045:de00::/56` in this case) and always delegates a /64 prefix to the DMZ in which the web server lives (`2a01:c22:b045:defc::/64` in this example). The additional 8 bits are always `fc` and the web server always has the suffix `::8`.
Python's ipaddress module provides the `IPv4Address`, `IPv6Address`, and `IPv6Network` classes, which can parse the corresponding values from the request query string. Using the `IPv6Address` and `IPv6Network` classes, the proxy can construct the web server's public IPv6 address from the prefix it received from my router:
```python
>>> prefix
IPv6Network('2a01:c22:b045:de00::/56')
>>> mask = IPv6Address('0:0:0:fc::8')
>>> address = IPv6Address(int(prefix.network_address) | int(mask))
>>> address
IPv6Address('2a01:c22:b045:defc::8')
```
Now the proxy can send a `PUT` request to my DNS provider's REST API with a JSON payload describing the DNS records I want to update. This is pretty straightforward using the requests library.
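The post doesn't show this request, and every provider's API looks different, so treat the hostname, path, and payload schema below as purely illustrative. Since the finished proxy ends up using only the standard library anyway, this sketch uses `http.client` rather than requests:

```python
import http.client
import json

# Illustrative only: the real provider's hostname, path, and
# payload schema are different.
def build_payload(ipv4: str, ipv6: str) -> bytes:
    records = [
        {'type': 'A', 'name': '@', 'content': ipv4},
        {'type': 'AAAA', 'name': '@', 'content': ipv6},
    ]
    return json.dumps({'records': records}).encode('utf-8')

def push_records(api_token: str, ipv4: str, ipv6: str) -> None:
    connection = http.client.HTTPSConnection('api.example-dns.net')
    try:
        connection.request(
            'PUT', '/v1/zones/chaos-wg.net/records',
            body=build_payload(ipv4, ipv6),
            headers={
                'Authorization': f'Bearer {api_token}',
                'Content-Type': 'application/json',
            },
        )
        response = connection.getresponse()
        if response.status >= 400:
            raise RuntimeError(f'DNS update failed: {response.status}')
    finally:
        connection.close()
```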
Not That Simple Either
But what happens if somebody else sends a request with their own IP address to the update proxy or tricks me into sending it for them? They could deny service to my site or redirect traffic to their own servers.
One safeguard is of course authentication. My router sends an `Authorization` header with its request and the WSGI application validates it. This has the added benefit that I don't have to store the API token for my DNS provider's REST API in plain text on the server side. Instead, my router sends its `Authorization` header as `"Basic " + base64(user_name + ":" + api_token)` and the update proxy checks that against a PBKDF2 hash that I can safely store on the server. If the check succeeds, the proxy can then extract the API token from the `Authorization` header and use it to send requests on its own.
```python
def authorize(credentials: str) -> str:
    try:
        value = b64decode(credentials, validate=True)
    except ValueError as e:
        raise Unauthorized from e
    user, _, token = value.partition(b':')
    # validate the entire <user>:<token> combination
    key = hashlib.pbkdf2_hmac('sha256', value, SEED, ITERATIONS)
    if hmac.compare_digest(key, AUTHORIZED_KEY):
        return token.decode('utf-8')
    else:
        raise Unauthorized
```
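The stored key itself can be derived once, offline, from the chosen credentials. A sketch of that derivation, with an illustrative salt and iteration count (the real values are up to the deployment), hashing the same `<user>:<token>` string that `authorize()` checks at runtime:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000   # illustrative; pick a value per current guidance
SEED = os.urandom(16)  # the salt, stored on the server next to the hash

def derive_key(user_name: str, api_token: str) -> bytes:
    # Hash the same "<user>:<token>" string that authorize() checks.
    value = f'{user_name}:{api_token}'.encode('utf-8')
    return hashlib.pbkdf2_hmac('sha256', value, SEED, ITERATIONS)

AUTHORIZED_KEY = derive_key('router', 'example-api-token')

# A matching credential pair passes the constant-time comparison...
assert hmac.compare_digest(derive_key('router', 'example-api-token'), AUTHORIZED_KEY)
# ...while a wrong token does not.
assert not hmac.compare_digest(derive_key('router', 'wrong-token'), AUTHORIZED_KEY)
```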
But even with the `Authorization` header, I would still like the update proxy to validate that it got the correct IP addresses before performing the update with my DNS provider. I briefly thought about implementing a signature scheme: store a public key on the update proxy, let the proxy connect to the new IP of the web server, and have the server sign some random payload with its private key. Then I realized that I could simply make an HTTPS request to the new IP and check that it presents a valid certificate for `chaos-wg.net`.
However, for the certificate validation to work, I have to specify the domain name in the call to `requests.get()`, and the domain name still points to the web server's old IP address (because I haven't made the DNS update yet). If I specify the new IP in the call to `requests.get()` instead, the proxy will be able to connect to the web server, but the certificate validation will fail because the web server can't present a valid certificate for the new IP address (it only has one for the domain name). Or in other words:
```python
requests.get('https://chaos-wg.net/')
# fails because "chaos-wg.net" still resolves to the old IP and
# requests won't be able to establish a connection to the new IP

requests.get('https://[2a01:c22:b045:defc::8]/')
# fails because the server certificate was issued to "chaos-wg.net" and
# not to "2a01:c22:b045:defc::8"
```
I would also like to validate the IPv4 and IPv6 addresses individually, which I can't really control with the default `requests` setup. The "official" solution would be to implement a transport adapter that only connects to a single IP address. But these adapters handle the entire HTTP request cycle. All I want to do is create a socket connection to a specific IP address while still specifying a host name for certificate validation.
Standard Library To The Rescue
Python's http.client module provides the `HTTPConnection` and `HTTPSConnection` classes, which are used internally by urllib3 and requests. Since Python 3.4.1 these classes have a private `_create_connection()` method, which usually simply delegates to `socket.create_connection()`[1]. If I override this method and create a socket to the new IP address of the web server, I can still use the built-in certificate validation without having to update the DNS records first.
The `HTTPSConnection` class is a bit harder to use than requests, but this small inconvenience is far outweighed by the effort that would have been necessary to implement a transport adapter for requests. As a bonus, the entire update proxy now has no dependencies beyond the standard library, which results in less code overall.
```python
def validate(address: Union[IPv4Address, IPv6Address]):
    connection = HttpsConnection('chaos-wg.net', address)
    try:
        connection.request('HEAD', '/')
        # I don't care about the response code or body here.
        # A valid certificate is all that matters.
        response = connection.getresponse()
        response.read()
    except ssl.CertificateError as e:
        raise InvalidServer from e
    except OSError as e:
        raise InvalidServer from e
    finally:
        connection.close()


class HttpsConnection(http.client.HTTPSConnection):
    def __init__(self, host: str, address=None) -> None:
        super().__init__(host, check_hostname=True)
        destination_address = (str(address), http.client.HTTPS_PORT)

        def _connect(address, timeout=None, source_address=None):
            return socket.create_connection(
                destination_address, timeout, source_address
            )

        if address is not None:
            assert callable(self._create_connection)
            self._create_connection = _connect
```
Summary
Updating the DNS records now involves four HTTPS requests:

- one from my router to the update proxy,
- two from the proxy to the web server to validate that it received the correct IP addresses,
- and one from the proxy to my DNS provider that actually performs the update.
The interesting requests are the ones from the update proxy to the web server, because at that time the DNS records for `chaos-wg.net` still point to the old IP addresses of the web server. I had to patch Python's `http.client.HTTPSConnection` class to be able to connect to a specific IP address without querying the DNS while still relying on the built-in certificate validation.
The web server proves its identity by presenting a valid certificate for `chaos-wg.net`. Once the proxy is satisfied that it has the correct IP addresses, it sends an additional request to my DNS provider that actually updates the DNS records.
[1] See the change log, issue 7776, and commit 9da047.