self host the world
I’ve known since forever that google is not to be trusted, and that they whimsically create and destroy products like no one else. I’ve also been a not-so-proud owner of a google mail account for the past 15 years, which I rely on for almost all my services.
honestly, those facts didn’t bother me that much because, like everyone else, I’m always under the impression that it isn’t going to happen to me, right? that was, until june of last year, when google sunset google domains - which I had assumed to be the king of domain registrars. that, plus the rise of AI and LLMs, seriously made me question the hegemony of google search in the current state of affairs, and how well positioned I was for a possible google meltdown as a whole.
of course, I don’t think gmail is going anywhere soon, but it nudged me into exploring the world of self hosting. I mean, how hard could it be to host my own email, right? I wanted to find out using a home device and nixos, in order to get declarative and reproducible systems for free.
the raspberry pi
I managed to get my hands on a raspberry pi model 4B in december 2023, but at the time I didn’t have time to try and get something running on it. it was only around april or may of 2024 that I actually started trying to get it working. at first, I wanted to go with a completely headless nixos setup: writing a proto-configuration akin to my current ones, exporting it as an sd image, and flashing it to the pi with my ssh key baked in. thus, no manual installation process would be necessary; inserting the card into the pi and turning it on would be all it took.
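for the curious, a minimal sketch of what that image-baking flake could look like - the hostnames, key and layout here are illustrative, not my actual config:

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { nixpkgs, ... }: {
    nixosConfigurations.pi = nixpkgs.lib.nixosSystem {
      system = "aarch64-linux";
      modules = [
        # upstream module that turns the configuration into a bootable sd image
        "${nixpkgs}/nixos/modules/installer/sd-card/sd-image-aarch64.nix"
        {
          services.openssh.enable = true;
          # bake the ssh key in, so no manual setup is needed after flashing
          users.users.leonardo.openssh.authorizedKeys.keys = [
            "ssh-ed25519 AAAA... leonardo" # placeholder key
          ];
        }
      ];
    };
  };
}
```

building `.#nixosConfigurations.pi.config.system.build.sdImage` then spits out an image ready to be flashed with `dd`.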
sadly, it didn’t work: it would turn on but never appear as a network device, and given that detecting it through the network was the only way [that I knew of] to interact with it, that left me at a dead end. at the time, I believed it was because I was doing something wrong in the nixos configuration - maybe some network config mishap? maybe I forgot to turn something on? - but given that I couldn’t see its video output, I just gave up and decided to buy an HDMI adapter and do it the normal way.
of course, I only bought the hdmi adapter about 3 months later, and only then did I try to install nixos manually. I went with the normal approach: downloading the bare arm image, fetching my nixos repo locally, and rebuilding to install everything. only with visual confirmation did I understand that the problem wasn’t with my original nixos image, but rather that the pi was shutting down after the first boot phase!
it made me realize that I had never given proper thought to buying a real power supply, as I figured that connecting it to my computer’s usb port through a usb-c cable I had lying around was good enough. I was able to gracefully connect the dots and realize that it was most likely rebooting because it didn’t have enough power to finish booting, so I switched to a 5 volt, 3 amp cellphone charger I had to spare, and it finally booted correctly!
networking issues
after that, I figured that I’d like to be able to not only turn it on, but also connect to my raspberry pi from outside my house’s network. sadly, my router’s public ip changes pretty much every day, so my only real option was to use ddns + a domain name.
I bought santi.net.br cheaply and quickly transferred it to cloudflare, as I wanted to get some ddns action going on. as I’m using the ISP-provided all-in-one [shitty] router, it’s not shocking to say that trying to open the relevant ports (22, 80 and 443) in the default configuration interface wouldn’t have any external effect whatsoever.
I found out that there was a way to get to the “admin version” of the router’s setup page, and through that I was able to open port 22 to the public internet (even though changing it the normal way wouldn’t do anything), but neither 80 nor 443 were reachable still. I even questioned whether my network was behind CGNAT, as that is very common in brazil, but my ip wasn’t one of the common formats and I could access port 22 of my router’s public ip just fine. I don’t know how the ISP could be blocking it, other than the router’s admin page port forwarding setup being a no-op for some specific ports.
I fought with this problem for a week but eventually decided to give up and just set up cloudflare tunnels for ports 80 and 443, and route all the subdomains through that. cloudflare tunnels work as an outbound-only connection, using a `cloudflared` instance running on the pi to route the requests through. after using some stateful commands to generate credentials, the relevant piece of code to set this up in nixos is very simple:
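something along these lines - the tunnel id and hostnames are placeholders, and the credentials file is the one generated by the stateful `cloudflared tunnel create` step:

```nix
services.cloudflared = {
  enable = true;
  tunnels."00000000-0000-0000-0000-000000000000" = {
    # generated imperatively with `cloudflared tunnel create`
    credentialsFile = "/var/lib/cloudflared/tunnel.json";
    ingress = {
      # route the http(s) subdomains to the local nginx instance
      "santi.net.br" = "http://localhost:80";
      "git.santi.net.br" = "http://localhost:80";
    };
    # anything else gets a 404
    default = "http_status:404";
  };
};
```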
though, I couldn’t really use these tunnels to connect through ssh, and honestly I don’t know why. I believe cloudflare expects you to use their warp tool to authenticate ssh connections (besides ssh key auth?), but I thought it was too much trouble to configure yet another tool (on all my computers), so I chose to use the router’s public ip + ddns with port forwarding instead. I tested pretty much all the ddns nixos services exposed in nixpkgs, and the only one that worked reliably was `inadyn`:
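roughly like this, using the module’s freeform settings - the hostname is illustrative, and the cloudflare api token should live outside the nix store (I’m omitting the secrets wiring here):

```nix
services.inadyn = {
  enable = true;
  settings = {
    # how often to check whether the public ip changed, in seconds
    period = 300;
    provider."cloudflare.com" = {
      # for cloudflare, username is the zone and password is an api token
      username = "santi.net.br";
      hostname = "home.santi.net.br";
      password = "@CLOUDFLARE_API_TOKEN@"; # placeholder, not a real token
      proxied = false;
    };
  };
};
```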
remote rebuilds
given that my computers’ architecture (`x86_64-linux`) and the raspberry pi’s (`aarch64-linux`) are not the same, I needed a way to either trigger rebuilds remotely, or to build for aarch64 locally and `nix-copy-closure` the result to the pi. local builds for the other architecture can be enabled by setting `boot.binfmt.emulatedSystems` (qemu emulation, strictly speaking, rather than true cross compilation), but I don’t really like that solution, as it requires enabling that flag on every computer I’d like to deploy from.
instead, I went with the most barebones approach possible, plain `nixos-rebuild`, using the following command:
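assuming the pi is reachable over ssh as `pi` and the flake exposes a matching configuration, it looks roughly like:

```sh
nixos-rebuild switch \
  --fast \
  --flake .#pi \
  --build-host leonardo@pi \
  --target-host leonardo@pi \
  --use-remote-sudo
```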
this works because `--fast` avoids rebuilding `nixos-rebuild` itself, and passing `--build-host` forces it to build directly on the pi, avoiding the cross compilation issue altogether. I still intend to use a proper deployment tool (I’m most inclined towards deploy-rs), but that is for the future.
self hosting
after setting up a way to connect to the pi from the public network, I could finally get some self hosting started.
initially, all I did was a simple setup where I added my blog’s repository as a flake input, and served the result of calling `hugo build` on it through nginx. it looked something like the following:
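a sketch of that first setup, assuming the blog flake exposes the hugo-built site as its default package (the input url and email are placeholders):

```nix
# elsewhere, in flake.nix: inputs.blog.url = "github:leonardo/blog";
{ inputs, pkgs, ... }:
{
  services.nginx = {
    enable = true;
    virtualHosts."santi.net.br" = {
      forceSSL = true;
      enableACME = true; # the auto-generated ssl certificates
      # serve the hugo-built site straight from the nix store
      root = inputs.blog.packages.${pkgs.system}.default;
    };
  };

  security.acme = {
    acceptTerms = true;
    defaults.email = "leonardo@example.com"; # placeholder
  };
}
```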
it sure worked fine for the first couple of weeks, and it auto-generated ssl certificates for me, which is convenient, but it had a glaring flaw: in order to change anything, I’d need to push a new commit to the blog repo, `nix flake update blog`, and then `nixos-rebuild switch` (remotely) on the pi, every single time. the whole process was unnecessarily complicated, so I set out to build a simpler one.
I vaguely knew that git repos have a notion of hooks, which can run before or after most commands or actions you take, but I had never implemented or tinkered with them. still, it occurred to me that if I could set up a bare git “upstream” on my pi, with a hook that runs after every push it receives, I could run `hugo build` on the source files and generate a new blog in a known path, which I could then hardwire nginx to serve. this way, it would be very much like the old setup that I had with github pages, except local and not depending on microsoft’s ai products.
funnily enough, mere minutes after searching for this idea on the internet, I found a blog post by Andreas that did exactly that. while searching, I also figured that it would be pretty cool to have a cgit instance exposed that could track my changes in this “git repos” directory, so that I could really stop relying on github while keeping the code fully open source.
the main idea is to declaratively [of course] set up a git repository pre-baked with a `post-receive` hook file that calls `hugo build` with the directory we’d like nginx to serve. Andreas’ post shows exactly how to idempotently create the git repo (a no-op after the first run) using a systemd oneshot service, and the only problem remaining is, as always, managing the permissions around these directories:
- my user, `leonardo`, has its own files and is what I use to run `nixos-rebuild` from.
- the `git` user owns the git repositories directory.
- the `cgit` user is responsible for running the cgit server.
- the `nginx` user runs the nginx instance and responds to requests.
thus, I devised the following structure:
- `/server/blog` is where the hugo-generated files are going to be. the `nginx` user must be able to read it, and `git` must be able to write to it.
- `/server/git-repos` is where the git repositories will be. the `cgit` user must be able to read all of its directories, and the `git` user must be able to read and write to it.
it seems to suffice to set `git` as the owner of both of these directories, and give all users permission to read and execute files. to implement this, I used `systemd.tmpfiles.rules`. I know, there’s `tmp` in the name, but rest assured: you can use them to create permanent files with the correct permissions, as long as you don’t give them an age parameter:
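in my case, two `d` (directory) rules suffice - `git` owns both paths, everyone else can read and execute, and the trailing `-` leaves the age field unset so nothing ever gets cleaned up:

```nix
systemd.tmpfiles.rules = [
  # type  path               mode  user  group  age
  "d      /server/blog       0755  git   git    -"
  "d      /server/git-repos  0755  git   git    -"
];
```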
after figuring this stuff out, the rest is pretty much textbook nixos. we set up cgit with `scanPath` pointing at the git repos directory, and an about-filter using `pandoc` so the about pages are rendered correctly from org files:
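roughly like this - the `services.cgit` module wires up fcgiwrap and nginx by itself; the pandoc invocation is the part I added, and it relies on cgit handing the about file over stdin:

```nix
services.cgit."git.santi.net.br" = {
  enable = true;
  scanPath = "/server/git-repos";
  settings = {
    root-title = "leonardo's repos";
    # render about pages (e.g. README.org) from org to html
    about-filter = toString (pkgs.writeShellScript "cgit-about-filter" ''
      ${pkgs.pandoc}/bin/pandoc --from org --to html
    '');
  };
};
```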
while the following snippet sets up a systemd oneshot service to initialize the path to the blog’s public files (run as the `git` user):
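a sketch of the service, following the shape of Andreas’ setup - `post-receive` here is a let-bound script, shown right below; paths and names are assumptions:

```nix
systemd.services.init-blog-repo = {
  description = "initialize the bare git repo for the blog";
  wantedBy = [ "multi-user.target" ];
  serviceConfig = {
    Type = "oneshot";
    User = "git";
    RemainAfterExit = true;
  };
  script = ''
    # no-op if the repo already exists
    if [ ! -d /server/git-repos/blog.git ]; then
      ${pkgs.git}/bin/git init --bare /server/git-repos/blog.git
    fi
    # (re)link the hook, so changes to it propagate on rebuild
    ln -sf ${post-receive} /server/git-repos/blog.git/hooks/post-receive
  '';
};
```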
where the `post-receive` hook is very similar to the one Andreas used in his post:
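a minimal take on it as a `writeShellScript` - the branch name and paths are assumptions:

```nix
post-receive = pkgs.writeShellScript "post-receive" ''
  # check the pushed tree out into a throwaway work tree
  # (inside a bare repo hook, GIT_DIR is already set for us)
  TMP=$(mktemp -d)
  trap 'rm -rf "$TMP"' EXIT
  ${pkgs.git}/bin/git --work-tree="$TMP" checkout -f main

  # regenerate the site into the directory nginx serves
  ${pkgs.hugo}/bin/hugo --source "$TMP" --destination /server/blog
'';
```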
after running it the first time, I went ahead and statefully copied the git repo from github to the pi, in order to not lose the history, but other than that it should be fine.
next steps
sadly, I haven’t had the time to actually set up email hosting yet. currently, I read my email through mu4e, using mu as a local maildir indexer and searcher. what I’d need is to host a server that receives and sends email. receiving doesn’t seem too difficult, as it’s just a normal listener, but sending is apparently a huge problem, as a lot of measures need to be taken for your email to actually be delivered and not flagged as spam.
besides having to set up reverse DNS lookups, you also need to mess with SPF, DMARC and DKIM, which are scary-looking acronyms for boring business authentication stuff. moreover, your ip might be blacklisted, or have low reputation (what does that even mean?), and to top it off, it seems like my router’s port 25 is blocked forever, so I’d most likely also need to configure cloudflare tunnels for that. I’m currently avoiding all of it, but I intend to look into it in the near future.
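for reference, the records involved look roughly like this in zone-file form - all values illustrative:

```
; SPF: which hosts may send mail for the domain
santi.net.br.                  TXT  "v=spf1 mx -all"
; DMARC: what receivers should do when SPF/DKIM checks fail
_dmarc.santi.net.br.           TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@santi.net.br"
; DKIM: public key used to verify signed outgoing mail
mail._domainkey.santi.net.br.  TXT  "v=DKIM1; k=rsa; p=<public-key>"
```

plus a PTR (reverse dns) record on the server’s ip, which only the ip’s owner - usually the ISP or cloud provider - can set.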
I’ve been meaning to experiment with simple nixos mailserver for a while now, but it is an “all in one” solution, and I think it might be trying to do much more than what I’m currently trying to achieve. if anyone has tinkered with it, I’d love to hear more about it.