Spartan Ultra Quebec (Owl’s Head) 2019

This race was pretty nuts in a bunch of ways: the course itself is allegedly one of the hardest (if not the hardest) in Spartan’s quiver of locations, we had a Code Irene called (closed course because of weather), and the basic fact that this was my first Ultra. This post will be more about the race itself than my personal training and experience.

Half of you will not finish. The difference between the Beast and the Ultra is that the Beast is designed so most people finish. The Ultra is designed so 50% of you finish.

– Race Director, pre-race briefing

This was a surprise. Yes, the Ultra is meant to be Hard with a capital H, but to have the race director blatantly state the target finish rate is 50% was humbling. According to the Athlinks results, about 150 signed up (in the three heats combined), and about 50 finished.

Ask yourself, “What is my Why?” Because without your Why, you will not finish.

– a Krypteia, start corral

The race format was as follows: the Beast and Ultra took place the same day, with the Ultra starting first. The Ultra competitors would do the Beast course twice, plus an extra 4km “Ultra leg” on both laps. The Ultra leg was actually a rather nice trail run in the woods (the shade was nice); the rest of the course was out on the ski runs. There were no Ultra-specific obstacles: the Beast racers would do 30, and the Ultras would do 60 (each obstacle twice). In typical Spartan Race fashion, the obstacles were mostly bunched together in groups of 2-3, with long periods of running in between.

Why does this place only have black diamonds?

– a fellow racer

The course itself, and I cannot stress this enough, was absolutely brutal. If you ski/snowboard, the majority of the up/down hills were single or double black diamonds. In total, the Beast went up/down the hill 6 times… given the extra Ultra leg, the Ultra went up/down 14 times, enjoying a total elevation gain of about 5000m. This sounds ridiculous but it’s correct, confirmed by my own tracker and the official Spartan map elevation chart thingy. For comparison, the Killington Ultra is closer to 4000m — so yes, Owl’s Head is harder than Killington.

I thought it was supposed to rain all day

– a very sunburned athlete

Although the forecast called for rain most of the day, we ended up seeing blazing sun for the first 10ish hours, followed by a massive storm that seemed to roll in in the span of a few minutes. After some lightning strikes within a few miles, the RD called to close the course. All athletes were directed back to the festival area (at this point I was about halfway through my second loop, 10 hours in).

Now here’s where it gets interesting: First, the event staff handed out Ultra shirts and medals to everyone. There were no checks whatsoever, literally just staff members handing out still-wrapped medal packages to anyone who stuck their hand out in the registration area. I know it was a bad situation and they just wanted everyone to get out of there without too much friction, but the way it was done really rubs me the wrong way. It’s not really “about the medal” and I’m sure most people present were honest, but I think I would’ve preferred for them to just mail the things afterwards to actual finishers.

Speaking of “actual finishers”, this is interesting too: Spartan timing seems to have extrapolated our finish times based on checkpoints. Like I said above, I got sent off course about 10 hours into the race, yet my official results put me at about 14 hours. The last two timing mats weren’t hit in my results, so I’m guessing they looked at what time athletes hit the timing mats before the course closure and estimated a finish time. Additionally, the course time limit was 16 hours: it looks like only athletes with a “projected” finish time below 16 hours were assigned finishes; everyone else (70%-ish) got a DNF. Personally I’m really happy with this: their estimate is right in line with what I predicted for myself (finish-time-projection mental math is a great way to pass the time during an endurance event), and I’m sure it would’ve upset a lot of people to just assign DNFs to everyone. That being said, despite having official results, I don’t consider myself to have actually finished the Ultra — it’s a bit bittersweet.

Overall, it was a very tough event with some shenanigans thrown in for good measure. But I’ll be back next year, planning to finish “for real” this time. And this time, my Why is simple: Because it’s hard.

IPv6 Setup (with reverses!)

Let’s skip the lengthy introduction and get right into it: in this article, I’ll cover how to set up IPv6 connectivity to your dedi, and how to set up reverse DNS records. If you want reverse DNS records (which you do if you’re hosting email), you’ll need to be running your own authoritative DNS server, which I won’t be covering here. As far as I know, there’s no way to get a PTR record for your address without running your own server.

Console

  1. Log in to your console, and head over to Server > Network configuration.
  2. Order your IPv6 block; you’ll be assigned a /48 after a while (took about half an hour for me). When it’s ready, it’ll appear in the list with Done in the Delegation Status column.
  3. Click the gear beside your delegation > Edit nameserver delegation. You’ll have to specify two nameservers; if you only have a single authoritative server, you can use a backup nameserver service.
  4. Once finished, click the gear beside your /48 delegation again > Create subnet. This will create a /56, which you’ll assign to your server. Pick one you like. You can create as many subnets as you have servers.
  5. Note the address and DUID of your /56. That’s all we need from the console.
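If you want to sanity-check the subnet math, Python’s ipaddress module can enumerate the /56s inside a /48. The 2001:bbbb:cccc:: block below is just the same placeholder used throughout this post; substitute your own delegation.

```python
import ipaddress

# Placeholder /48 delegation (same as the examples in this post).
block = ipaddress.ip_network("2001:bbbb:cccc::/48")

# A /48 splits into 256 possible /56 subnets, one per server.
subnets = list(block.subnets(new_prefix=56))
print(len(subnets))  # 256
print(subnets[1])    # 2001:bbbb:cccc:100::/56
```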

Your server

On your server, we’ll set up dhclient and configure your interfaces.

  1. Edit /etc/network/interfaces. Append the following:
    iface eth0 inet6 static
    #this enables IPv6 on eth0, and sets a static address
    address 2001:bbbb:cccc:100::1
    #replace the above with an address from your /56 delegation,
    #but make sure it ends with ::1 (or whatever other number you want)
    netmask 56
    #specifies the netmask
    accept_ra 1
    #accepts router advertisements, you need this
    pre-up /sbin/dhclient -1 -v -pf /run/ -lf /var/lib/dhcp/dhclient6.eth0.leases -cf /etc/dhcp/dhclient6.conf -6 -P eth0
    #this feeds /etc/dhcp/dhclient6.conf (which we create in the
    #next step) to dhclient when the interface is brought up
  2. Create /etc/dhcp/dhclient6.conf, containing:
    interface "eth0" {
    send dhcp6.client-id 00:03:00:01:7a:c6:00:11:22:33;
    #replace the line above with the DUID for your /56
    #don't forget the semicolon!
    }
  3. Reboot, or systemctl restart networking.

Reverse DNS

Very similar to how IPv4 PTR records work.

  1. Create a zone file for your /48. The zone file name will be the first three quartets, reversed, with each character separated by a period, followed by ip6.arpa. So for me, if I was assigned 2001:bbbb:cccc::, I’d go with c.c.c.c.b.b.b.b.1.0.0.2.ip6.arpa.
  2. Create the zone file as usual, with an IN PTR record for your server’s address (the one that ends with ::1). This tool makes life easy for reversing the entire address.
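If you’d rather not reverse nibbles by hand, Python’s ipaddress module can generate the PTR owner name for you. This is just a sketch using the same placeholder address as above; substitute your own.

```python
import ipaddress

# Placeholder server address from the examples above.
addr = ipaddress.ip_address("2001:bbbb:cccc:100::1")

# Full PTR owner name for the address (all 32 nibbles, reversed):
ptr = addr.reverse_pointer
print(ptr)
# 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.0.c.c.c.c.b.b.b.b.1.0.0.2.ip6.arpa

# The /48 zone name keeps only the last 12 nibble labels plus ip6.arpa,
# so drop the first 20 of the 32 nibble labels:
zone = ".".join(ptr.split(".")[20:])
print(zone)  # c.c.c.c.b.b.b.b.1.0.0.2.ip6.arpa
```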


You’ll need to allow UDP ports 546 and 547 through your firewall for DHCPv6 to work.

That’s it! Remember to set up ip6tables or whatever firewall you prefer. Happy IPv6-ing!

IoST: The Internet of Spying Things

Thinking of buying that Skype-enabled smart TV? Bad move bud, you’re inviting all sorts of hackers, spy agencies, feds, and other undesirables directly into your living room.

Once upon a time in 2003, the FBI sought permission to wiretap an OnStar-like device in a car… except it wasn’t wiretapping communications, it was turning the device into an always-on microphone that got piped directly to FBI HQ. And they didn’t seek permission, they coerced the manufacturer into helping them, concluded their surveillance, then asked the court later. Oops.


A recent report, released by the Berkman Center for Internet & Society at Harvard University, said:

A plethora of networked sensors are now embedded in everyday objects. These are prime mechanisms for surveillance: alternative vectors for information-gathering that could more than fill many of the gaps left behind by sources that have gone dark – so much so that they raise troubling questions about how exposed to eavesdropping the general public is poised to become.

The report itself is a very interesting read, and surprisingly unbiased; I’d recommend reading the full thing if you have 20 mins to spare. 

In short: who cares if you’re using Signal if your TV can just listen in on your conversation? Why bother with PGP if your Wi-Fi enabled anytime-vlogging necklace can just read your emails off your screen? Is there a point to avoiding Windows 10 if your voice-activated Twitter-enabled fridge is reporting everything you do in your kitchen anyway? Ignoring, for a second, that The Greatest Surveillance Tool of All Time rarely leaves your pocket: a microphone, two cameras, a GPS chip, and even an always-on data connection!


Our glorious government’s been getting assblasted by the recent we-do-encryption-too, no-backdoors-we-swear corporate meme, but that’s sadly about to become irrelevant as IoT becomes more prevalent. The Xbox One was a fairly good test of how the public would react to inserting a surveillance device directly into their living room (I still get creeped out when I enter a room and see that thing looking back at me), and according to MS, the “vast majority” of people who bought an Xbone with a Kinect still use it (although MS is “decommissioning” certain Kinect features like gesture control for menus, so I’d read the preceding statement as “left it plugged in but don’t really use it”). As Wi-Fi enabled everything becomes the new cool thing to have, we’ll keep seeing more and more stories about exploits in poorly-written firmware. Then one day, some whistleblower will drop a story about some agency having recorded everything said in your living room over the last decade, and everyone will be surprised all over again.


Sounds bleak right? There are a few things you can do though:

  • Try to avoid IoT-esque devices for “nifty” features. Do you really need to control your house temperature from your phone, at the expense of your house “occupancy metadata” being available?
  • If you’re using a device in a LAN context, don’t let it talk to the outside world. If you like turning on your blender with a button in your bathroom, that’s cool. But no, the blender does not need 24/7 internet connectivity to “check for updates”. The least access necessary is good security practice anyway.
  • If your device needs to talk to someone external, firewall it down to just the people it should be talking to (you do have a hardware firewall at the edge of your network, right?). If your toilet posts to Twitter, there’s no reason for it to be talking to anyone but Twitter.
  • If you need to connect to your device from outside your LAN, do yourself a favor and set up a VPN server on your network. Exposing these IoT devices to the outside world is a terrible, terrible idea considering that they often offer no authentication past a basic username and password, and are often hilariously insecure. Personally, I make a single RPi available to the outside world, which I OpenVPN to (using PKI) (this is a one-button connection on my phone), then I access all internal services from there.
  • Unplug your Xbone Kinect. Plug it in when you’re using it. And for the love of god, rip the OnStar module out of your car.

Self-host Everything

I firmly believe that “cloud services” will be the downfall of the internet: instead of a free and open network, where anyone can provide services, we’re moving towards a few monolithic networks providing “free” services (in exchange for selling your data to advertisers, and showing you advertisements) and stomping out all smaller competition, Walmart-style.


There are several issues with depending on cloud service providers:

You are at the mercy of the service provider. What would happen if, say, Facebook chose to shut down services in your country tomorrow? How many people would you lose touch with? How many photos and messages would you lose forever? Better yet, how fucked would you be if Gmail disappeared?

Your data is most likely being vacuumed up by various nation-state attackers. As the Snowden slides revealed, virtually all major cloud service providers are providing your personal data directly to the NSA — however, it would be foolish to assume that only the NSA has your data. Because these cloud service providers are international, your data is most likely also provided to intelligence agencies in virtually all developed countries, from China to Russia to Israel. Why? Because these providers “must follow the law”, and operating in many countries means following the law in many countries.

Cloud services are a tempting target for attackers. Imagine if you could… oh I dunno, find nude pictures of many celebrities in a single datastore. If you had the skills, wouldn’t that be a juicy target? That being said, cloud services are usually fairly secure, but slip-ups still happen.

All “free” cloud services sell your data to advertising firms. There’s probably some sweatshop worker reading your emails right now to figure out whether to sell you male enhancement pills or sunglasses. I hope you’re not surprised, as you agreed to it in the EULA you accepted: how else did you think these services would get paid for? Interestingly, Google is most likely the least evil of the providers in this regard, because they do their own advertising, so at least your data stays with one company.

I bet you have a solution, LG. 

Of course. The answer is to self-host everything.

Running your own services lets you keep control of your data, and offers enhanced privacy and security. While running services requires a certain amount of technical competence, it’s far more straightforward (and cheaper) than many people assume. Find yourself a nice VPS host (DigitalOcean and Linode are good) or a host for dedicated servers (I’ve had good experiences with Hetzner and OVH), find some tutorials, pay a few bucks per month, build services, break services, fix services. Find a few technically-able friends to give you a hand, or a few privacy-aware friends to split the cost with. Some examples:

  • Email: Postfix and Dovecot, optionally Roundcube (webmail)
  • Chat: Prosody (XMPP)
  • Files: OwnCloud
  • Documentation: Mediawiki
  • Blog: WordPress
  • Search Engines: Searx
  • More


Won’t this be horrendously expensive?
For a few users, you can run all of the above on a $5/month DigitalOcean VPS.

Won’t things break?
Absolutely. But learning how to fix things when they break is what makes you a good sysadmin. Backup often, backup well.

Won’t it be inconvenient?
Absolutely. But that’s the whole appeal of cloud service providers: convenience, in exchange for your personal data. At some point, you’ll realize it’s just not worth it.

Will I be secure against hackers/nation-state attackers?
Kinda. You’ll be safe from certain types of attacks: the NSA storing and analyzing every email you send via Gmail, for instance. If you’re specifically targeted, no, you’ll get #rekt anyway via the attacker compromising/compelling your hosting provider, putting malware on your home computer, or beating you with a wrench until you give up your encryption keys. But self-hosting keeps your data out of the massive, easy-to-access pools of personal data on cloud services: it makes it more difficult for attackers to get at your data, and making attackers’ jobs more difficult is something we should all strive to do.

Humor me: try it out today. Get a domain name, fire up a $5 VPS on DigitalOcean, follow an “initial server setup” guide and a “securing your server” guide, then follow the ISPmail tutorial and set up email services (DigitalOcean and Linode have excellent knowledge bases of tutorials: see 1 and 2). Test it out, find features you want, find tutorials to implement them. Do something dumb, break something, then figure out how to fix it. Find some friends, work together, and free yourself of the cloud service botnet.


Poettering vs Linux

Before the event, before He Who Shall Not Be Named, the Linux community lived in unruly harmony with the other unices. And while SysV was the de facto standard, everyone who had two brain cells immediately swapped that shit out for something more daemontools-like. And the SysAdmins would play, play because they worried not; for their supervisor programs were doing their handiwork and the systems ran smoothly. Truly, the land was happy, far and wide.

Then the Dark One appeared, and first claimed, “I will take the Blight of Jobs, known as mDNS, and make it run on Linux.” And he cast a great pox upon the community, but most did not notice, for few had any use for it. And this irked him, so he cast another blight: “I will fix your audio.” And using the pox, he built upon it and filled many a distribution with his sludge. And the users did gnash their teeth for a few years, for the pox-sludge was incomplete and annoying. And before the users could turn on him, he snuck out like a thief in the night, claiming that it was not his handiwork.

Now the dark one was really pissed, for the pox-sludge did not do its purpose. So he found employ in that vile place, the House of Red Fedoras, a place filled with gnomes that only cared for bilking money from unsuspecting business people. And he did come upon a plan, so devious in nature and grand in scope, that the gnomes of Red Fedoras did endorse it, seeing that it would lead to many lucrative and unnecessary service contracts. And he brought forth his third and final curse, systemd, which was meant to “replace icky SysV” but in reality, it was forged from a shard of the Dark One’s corrupt soul, and its code would bind all others in darkness as they succumbed to its temptations. And one after another, projects agreed to bend their knee before it, and darkness descended upon the land.


Federation: The future of open online services, and the war against it

To clarify, by “federation”, I mean federation in contrast to client-server or P2P models. Specifically: a collection of independent servers, each serving a number of users, all speaking a standardized, federated protocol.

Let’s go over the basics real quick. At first, there was the client-server model.

There is a server that communicates directly with a bunch of clients. For instance, when you open up Facebook, you’re using the client-server model (yes, Facebook has multiple servers, but they’re all owned and controlled by Facebook, so in this context, it’s essentially a single server).

Then, around the time of Napster, we realized that it might be a good idea to take servers out of the equation. This introduced the peer-to-peer model.

This distributed model offered us decentralization: a network that can’t be destroyed by removing a single server out of the middle. It’s a collection of peers with no central authority. Protocols like BitTorrent use the P2P model. However, there is a downside to the P2P model: each peer bears a fairly high processing cost, and peers are usually expected to be constantly connected to the network.

So what’s the in-between? Federation: a collection of independent servers, each serving a number of clients.

“Independent” is the important word here: the idea is that anyone can host their own server, and can join it to the network of servers by using an agreed-upon, or “federated”, protocol. This allows us to have an open network (unlike, say, Facebook’s servers) while not burdening the clients with all the processing. An excellent example of the federation model is email: an email server can be run by your ISP, your company, an online ad-supported service, or you can run one yourself. Multiple clients connect to each server (i.e. all of your ISP’s customers), and the servers talk to each other via an established protocol (SMTP). There is no central authority in the email system: your little home server has, by design, the same “say” in the network as Gmail’s servers.

The federated model, while being old tech, is still the best compromise between client-server and P2P models. It enforces an open network, gives us the option to completely own our data, while still leaving room for our corporate peers. However, there’s been an increasing trend away from federated models by several large service providers.

XMPP is an instant messaging protocol (it’s actually a lot more than just an IM protocol, but that’s not important here) which uses the federated model. Users connect to the server for their domain, and they can chat with users on different domains via server-to-server communication. Google Talk (aka Hangouts) implemented XMPP support in 2006. The idea was that a user on Google Talk, call them user1, could chat with a remote user on another domain, user2, without user2 having to create a Gmail account. This was a Good Thing, because it gave users a choice of IM providers, while still letting users on different networks chat with each other. In 2010, Facebook Chat added XMPP support. This was also a good thing, for an additional 400 million accounts could be reached via XMPP. It looked like XMPP was going to get as popular as email.

But then… it all fell apart. Both Google and Facebook dropped XMPP support in late 2014 / early 2015. There was never much of an explanation from either corporation, just something along the lines of “We’re switching to X new API and we didn’t bother adding XMPP support” and “We promise we might eventually one day look at maybe adding something resembling XMPP support. Maybe.”


So what actually happened? They realized the business value of vendor lock-in. Effectively: “we know that user1 wants to chat with user2. Why make it easy for user2 to chat externally? We can just force them to sign up to chat with user1, giving us a new client (more product)!” You want to chat with grandma over Facebook Chat? Too bad, you’ll have to make an FB account now… even though perfectly good tech exists to let you chat with her from wherever.

But that’s just XMPP. We have slightly bigger issues to worry about.

It’s becoming increasingly more difficult to run your own email server. If you set up a new email server on a dedi/VPS somewhere, and follow all the usual recommended practices (PTR, SPF records, DKIM), your emails will be put in the Spam bin in both Gmail and Microsoft inboxes. Gmail will direct you to read their Bulk Sender Guidelines (even if you only sent a single email), and Microsoft will give you a place to “register” your server for a chance to avoid the spam bin. In order to avoid the spam bin on Gmail, however, you’ll need to build up a “reputation”, by having conversations with numerous Gmail accounts, and having them mark you as “Not Spam”. Here are some HN threads griping about this issue. To make matters worse, having some reputation with Gmail doesn’t guarantee that your email will get delivered: the Gmail servers will accept your email, but it may still end up in the user’s spam bin or disappear completely, especially if this is the first time you’ve emailed this particular user.

This is a ridiculous amount of effort for a user who just wants to run their own email server because, oh I dunno, maybe they want to actually own their emails? This is also extremely concerning, because email is the original example of an open and decentralized system. I suspect that, within the next few years, it’ll gradually become impossible for anyone to run a small email server. Eventually, you’ll use your work email to talk to your work colleagues, your Gmail to talk to your friends on Gmail, and your Outlook to talk to your friends on Outlook. Here’s vendor lock-in again: why let you run your own server when they can have you maintain three accounts (and, of course, see ads from all three providers)?

Ad-blocking: The new “stealing”

Since the dawn of filesharing, our corporate overlords have been shouting about how media piracy is “stealing”. Thankfully, the idea of making a digital copy being equivalent to theft has been beaten down by countless arguments and dissertations, to the point where courts have, in certain cases, prohibited copyright holders from using the words “piracy”, “theft”, and “stealing” in jury trials. However, there is a new group of people complaining about “stealing” — websites that earn their revenue through ads.

nobody pirates games anymore though so lol

Blocking ads is a little different than piracy. Viewing a web page actively consumes resources: bandwidth, which the site pays for. This amount is fairly negligible, however. Consider that my $30/month Hetzner server is allotted 20TB of upload, and that the homepage of this blog is roughly a megabyte: each page load costs me about $0.0000015. Not really an amount worth crying about.
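For the skeptical, here’s the back-of-the-envelope arithmetic behind that number, using the same figures as above:

```python
# Cost of one page load, given a flat-rate server with an upload allotment.
monthly_cost_usd = 30.0    # server cost per month
included_upload_tb = 20    # monthly upload allotment, in TB
page_size_mb = 1           # rough homepage weight, in MB

# dollars per MB of upload, times MB per page load
cost_per_page = monthly_cost_usd / (included_upload_tb * 1_000_000) * page_size_mb
print(f"${cost_per_page:.7f}")  # $0.0000015
```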

The anti-ad-blockers’ primary argument is usually something like, “We are content creators! Ads are the only way we make money! If you block our ads, the internet will never have any new content ever again!” Let’s pretend for a second that 75% of the “creative content” blogs and websites aren’t just regurgitated bullshit served for the sole purpose of getting ad views. Bad news: I’ve seen better content generated by the internet hate machine, and even in Reddit comments, than on any ad-supported blog. We’re talking about unpaid, pseudo-anonymous users of these communities creating better content than anyone paid to do so.

But LG, those communities wouldn’t exist without ads!
I can run a Reddit clone on a $2 VPS. Good creative content spreads between all of the communities. In other words, the content would’ve still been created even if these communities didn’t exist, or were more fragmented.

So why block ads? It’s fairly simple: they’re annoying. Ads have evolved from being simple text ads, to flashing and jumping, to literally screaming the name of a product in a background window. Fuck you and your content, I don’t want to put up with that shit. If you included a few affiliate links or something in your blog posts, I wouldn’t mind, but because of the behaviour of the ad industry, I have no choice but to go full nuclear and block everything resembling an ad.

even billy blocks ads, despite his profession

The idea of “acceptable advertisements” has been a topic of debate recently, but I’m not interested. I simply cannot trust a company that accepts money to decide which ads to show me: that’s basically bribery. Nor am I going to trust some democratic process: Reddit is a prime example of democracy failing. No, fuck advertisements altogether; you people will have to find better ways to fund your activities.

So let’s talk alternatives. “Donation” mechanisms have been around approximately forever, and are the sole source of revenue for all sorts of sites (private trackers are a notable example). A somewhat more complicated idea is Flattr: basically, users pay a monthly donation of their choosing that’s automatically distributed between the sites they clicked a “flattr” button on. But that’s the essential idea: users contributing directly to your site, rather than some evil network in the middle. I bet if I contributed a single penny to every website I’ve ever visited, I would’ve given 90% of those websites more revenue than they ever earned by showing me ads.

pls gib mony

Anyways, back to the original point: is blocking ads “stealing”? Kinda, like copyright infringement is kinda stealing, but it doesn’t matter. In short, advertising is basically getting #rekt by ad blockers, there’s going to be an all out war soon, and advertisers will lose. If you run a site supported by ads, now’s the time to get out and find a donation-based alternative. And if you can’t sustain your site on donations… I’m sorry, but your site is probably clickbait shit that nobody cares about, and the net would be better off if you just left.