walkergriggs.com source file


TODO Digital Homesteading @essays

Tech is changing. Whether you call it horizontal scaling, scaling out, distributing, or deconstructing, the ways we structure, write, test, and deploy systems are changing.

I’d argue, though, that the ways we learn are not changing but should be. Learning the latest individual abstraction isn’t enough anymore. Side projects are often too narrow, and our exposure to “enterprise systems” is limited. We, as engineers, are not adequately tailoring our “studies” to fit the needs of industry… but we could be.

I propose a form of continuous learning called “digital homesteading” which emphasizes composition and encourages self-sufficiency.

TODO State Machines All the Way Down @essays

TODO A Standard for Password Management @devlogs

TODO Five Years with Emacs @devlogs

DONE Coding Diddles @essays

“If you fail in copying from a master you succeed in birthing an original art”, Kushal Poddar

Last year, a colleague of mine picked up woodcarving. They told me about their battle with the “originality demon” and how, even when learning a new and productively right-brain skill, they felt every knife stroke needed to be an original one. Each completed whittle needed to be an attractive addition to a catalog of novel works.

Then a content creator – a carving guru, as my colleague called him – referred to some of his simpler, more instructive carvings as “diddles”. These diddles were common, practiced, and rehearsed; there’s absolutely nothing original about them. He even went as far as to dictate each cut as if they were notes on a staff. Yet they were a critical part of this creator’s trade, and so my colleague took solace in the idea that, regardless of profession or experience, we need to iterate on the trite before we can produce even a modicum of original work.

My colleague’s story resonated with me; programming works the same way.

I can’t count the number of times I’ve stumbled on a new idea, excitedly put pen to paper, and resurfaced a few hours later to learn – after some light ‘market research’ – that someone else has solved the problem. At that point I’m faced with the decision to write it off as a fun investigation or to forge ahead knowing that someone beat me to the punch. And of course someone else has! Given the glut of public repositories on GitHub alone, it’s hard to imagine some problems haven’t been solved.

I wouldn’t call this a particularly productive outlook, but for some innate reason it’s a shared human experience. We want to be adventurers and make great discoveries, and yet the most notable advances are often those in solved fields.

Take chess, for example. The further a player deviates from the “main line” or accepted variation, the higher their odds of finding a novelty – a move no one has considered before in that position. 99.9999% of those novelties aren’t fabulous moves, but there’s a one-in-an-infinitesimally-small chance they’ve discovered something game-changing. Chess is not a solved game; that’s why we continue to play. On the surface, it looks like there are a finite number of moves. On the surface, every player has perfect knowledge. And on the surface, there shouldn’t be a stone unturned. For those reasons alone, finding novelties in chess is exhilarating. Repetition, learning the lines, and studying old games are the only way you’ll find a novelty worth its salt.

Like chess games or wood carvings, frame your programming projects as diddles. Sorting algorithms, data structures, security groups, EMNIST data, hello worlds – all are diddles. There’s nothing original about heap sort, and classifying handwritten letters certainly seems like a solved problem. We should take solace in that. Before we write our magnum opus, we should understand existing systems. How can we presume to be entirely original until we know all the existing prior art?

There’s another part to diddles too. In a recent post about Basic English and controlled languages, I touched on the idea that, to learn quickly, we need to first learn slowly. By limiting the syllabus to the most common parts, we give ourselves time to build a solid, reliable, and practical foundation. My colleague may have carved 15+ canoes in one weekend, but their last iteration was infinitely better than their first. By freeing themselves from the need to produce original work, they were able to focus on the techniques of carving.

Thinking about my own experience learning Go, I’ve probably written just as many CLIs as my colleague has carved canoes. CLIs aren’t sexy, and they’re most certainly not novel. But now I can whip out a CLI faster than you can read this post. And how many times have I needed to in the wild? Tons!

So write like Didion! Paint like Jackson! Dribble like Jordan!

Practice your diddles, re-implement your darlings, and study how “innovations” make use of your favorite data structures. Before you blow anyone’s mind, first learn what makes their brain tick.

DONE Basic English @essays

It takes only 400 words of Basic to run a battleship; with 850 words you can run the planet.

Ivor Armstrong Richards

I’m terrible at learning foreign languages. In fact, I studied Latin for 8 years – a dead language for all intents and purposes – and hardly remember a thing. Recently I tried learning Italian; that fell by the wayside too.

My experience with foreign languages could probably be summed up in one word: overwhelming. Gerunds and gerundives. Participles. Present perfect imperatives. Yet, somehow, there’s a sizable population of polyglots out there who learn languages, or at least the basics, in just a few weeks. How? Enter: Basic English.

Basic English is a controlled language: a whittled-down version of a language meant to reduce complexity and improve comprehension. Charles Ogden and Ivor Richards designed Basic English as a tool for those learning English as a second language. Ogden believed that the fastest path to becoming conversational in any language was to learn only the most-used words.

Of the hundreds of thousands of words in the English language, Basic is only 850. Britches, breeches, bell-bottoms, blue jeans – who cares, so long as you can say “pants”.

Of course, this got me thinking about my experience learning to code, or working with computers more generally. Honestly, Basic English is not far off.

In high school, we wrote hundreds of lines on paper well before we typed a single character into a text editor. Before we learned loops, we learned about variables. Before variables: types. The syllabus was condensed to 850 words (or whatever the programming equivalent is), and we kept to it. Our diction was limited, and we drilled those core principles home.

Jump forward however many years, and my experience learning Rust was vastly different. I dove straight into traits and borrowing and async, and I ultimately failed to learn the language. I don’t know Rust any better than I know Italian. I didn’t limit myself to 850 words.

My initial revision of this essay proposed (or at least attempted to propose) a model for evaluating programming languages. My reasoning was that math, philosophy, and computer science are fundamentally just syntaxes to express logic, arguments, and reasoning. A well-designed language, so I reasoned, wasn’t a language with many bells and whistles. Instead, it applied routine, boring, consistent, trite syntax to great effect.

That train of thought is a logical fallacy though: a faulty parallel construction. Controlled languages don’t help evaluate languages; they just improve legibility for non-native speakers. Rust isn’t a bad language by any stretch, and English isn’t either – they’re just difficult to grok for the first-time speaker.

So what can we learn from controlled languages as programmers, architects, or designers?

1) Learn slowly to learn quickly

What did my experience with Rust teach me? You’re never too experienced, smart, or savvy to start from square one. The core contributors of Rust literally wrote a book on getting started for a reason.

This takeaway is the more obvious of the two, but we willingly walk into a trap when we jump straight to complex features, patterns, or idioms. We push well past those 850 words and sabotage our learning process.

2) Simple code is empathetic code

I love writing list comprehensions in Python! My caveman brain releases endorphins when I realize how much I can do in only one line. Paradoxically, though, list comprehension can be… incomprehensible.

We need to write code with the understanding that someone in a galaxy far far away will need to read it.

In my case, maybe that person is a colleague who isn’t familiar with Python. Maybe they’re a contractor who knows Python, but it’s been a while. Or maybe I’ve switched companies, and am not around to answer their questions. By saving myself a few keystrokes, I’ve cost someone valuable minutes; I’m not respecting their time.

Of course, list comprehension is a small example, but this principle applies just as well to complex patterns and sprawling systems. Simplicity is empathy.
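A toy example makes the point concrete. The names and data below are mine, purely for illustration; both versions compute the same result, but the unrolled loop hands the next reader its logic one step at a time.

```python
# Dense: flatten a matrix and keep the squares of the even values.
matrix = [[1, 2, 3], [4, 5, 6]]
dense = [x**2 for row in matrix for x in row if x % 2 == 0]

# Empathetic: the same logic, unrolled for whoever reads it next.
readable = []
for row in matrix:
    for x in row:
        if x % 2 == 0:
            readable.append(x**2)

assert dense == readable == [4, 16, 36]
```

Neither version is wrong; the question is how long the next person has to stare at it.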

All in all, controlled languages are an interesting theory and intuitively make so much sense. I likely won’t be fluent in Italian any time soon, but I’ll certainly remind myself to slow down and keep it stupid simple. I might even revisit Rust and do it right this time.

DONE Learning Go Generics with Advent of Code @devlogs

This post is a living draft and may be revised. If you have any comments, questions, or concerns, please reach out.

Yesterday, the Go core team released go1.18beta1, which formally introduces generics. There isn’t a whole lot of info circulating yet aside from git history and go-nuts experiments, but the overall reception feels very positive.

Personally, I’ve been hands on with generics for the better part of a week all thanks to the Advent of Code, which has been the perfect venue to take generics for a spin. If you’re not familiar with AOC…

Advent of Code is an advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like. People use them as a speed contest, interview prep, company training, university coursework, practice problems, or to challenge each other.

You don’t need a computer science background to participate - just a little programming knowledge and some problem solving skills will get you pretty far. Nor do you need a fancy computer; every problem has a solution that completes in at most 15 seconds on ten-year-old hardware. – Eric Wastl

This article will cover the basics of generics (or enough to get you started) and uses my AOC experiments as a case study.
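As a taste, here’s the flavor of generic helper these puzzles kept calling for. The names (Number, Sum, Map) and the sample data are my own illustration, not from any particular AOC solution; the syntax is go1.18’s type parameters and union constraints.

```go
package main

import "fmt"

// Number is a constraint: any type whose underlying type is one of these.
type Number interface {
	~int | ~int64 | ~float64
}

// Sum folds a slice of any Number into its total.
func Sum[T Number](xs []T) T {
	var total T
	for _, x := range xs {
		total += x
	}
	return total
}

// Map converts a []T into a []U by applying fn to each element.
func Map[T, U any](xs []T, fn func(T) U) []U {
	out := make([]U, 0, len(xs))
	for _, x := range xs {
		out = append(out, fn(x))
	}
	return out
}

func main() {
	depths := []int{199, 200, 208}
	fmt.Println(Sum(depths)) // 607

	halved := Map(depths, func(d int) float64 { return float64(d) / 2 })
	fmt.Println(Sum(halved)) // 303.5
}
```

Before generics, each of these helpers meant a copy-paste per concrete type or an interface{} round-trip; now one definition covers them all.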

DONE ZNC, the right way @devlogs

I’ve set up ZNC one too many times.

Sometimes I forget it’s riding shotgun on a spare droplet heading to the trash heap. Other times, my payment method expires and so too does the instance. Other times I’m too lazy to host it in the cloud at all, so I run it locally. In any case, today I wanted to set up ZNC the right way… for the last time.

I also want to document the process for posterity and stop scouring the web for the same articles time after time.

The TODO list for today:

Dedicated domain and droplet

I’ll gloss over the relatively simple steps like provisioning a droplet, securing the firewall, installing ZNC, and purchasing a domain.

tl;dr: I…

  1. Provisioned a droplet.
  2. Purchased a new domain. I opted for a .chat TLD because I thought it was appropriate.
  3. Directed the registrar to DigitalOcean’s nameservers. Consolidating behind a single control panel makes life much easier.
  4. Created an A record with an irc subdomain pointing at the IP of my new droplet.

For the remainder of this post, I’ll use irc.example.chat as my placeholder domain!

Configuring ZNC

How you configure ZNC is a matter of personal taste. I opt to load fairly standard modules like chanserver, fail2ban, log, and identfile, but feel free to go crazy! One thing that is important to mention, though, is the separate listeners.

I created one listener for SSL IRC traffic over 6697 and one for non-SSL HTTP traffic over 8080. The web listener has SSL disabled because 1) it would only be a self-signed cert anyway and 2) it only listens on localhost.

<Listener listener0>
    AllowIRC = true
    AllowWeb = false
    IPv4 = true
    IPv6 = false
    Port = 6697
    SSL = true
    URIPrefix = /
</Listener>

<Listener listener1>
    AllowIRC = false
    AllowWeb = true
    Host = localhost
    IPv4 = true
    IPv6 = false
    Port = 8080
    SSL = false
    URIPrefix = /
</Listener>

Configuring Nginx

I’ll first preface this section by saying: I’m not an Nginx wizard by any means. In fact, most of this configuration comes from the Nginx blog and Stack Overflow.

Before we can generate a certificate, we want to add a basic configuration. I dropped a file in /etc/nginx/config.d and created softlinks in sites-available and sites-enabled.

touch /etc/nginx/config.d/irc.example.chat
ln -s /etc/nginx/config.d/irc.example.chat /etc/nginx/sites-available
ln -s /etc/nginx/config.d/irc.example.chat /etc/nginx/sites-enabled

I then edited the parent configuration. Fortunately, it’s fairly readable; nginx will proxy all SSL traffic from irc.example.chat to our ZNC localhost listener. We can also set a few headers in the process.

server {
    listen      443 ssl http2;
    server_name irc.example.chat;
    access_log  /var/log/nginx/irc.log combined;

    location / {
        proxy_pass            http://127.0.0.1:8080;
        proxy_set_header      Host             $host;
        proxy_set_header      X-Real-IP        $remote_addr;
        proxy_set_header      X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_set_header      X-Client-Verify  SUCCESS;
        proxy_set_header      X-Client-DN      $ssl_client_s_dn;
        proxy_set_header      X-SSL-Subject    $ssl_client_s_dn;
        proxy_set_header      X-SSL-Issuer     $ssl_client_i_dn;
        proxy_read_timeout    1800;
        proxy_connect_timeout 1800;
    }
}

The ssl_certificate configs will be added by certbot in the next step. If they aren’t added for whatever reason, they should look something like…

ssl_certificate     /etc/letsencrypt/live/irc.example.chat/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/irc.example.chat/privkey.pem;

Generating certs with LetsEncrypt

Now the fun part, and the reason to set up the domain in the first place. I used the EFF’s handy certbot with Nginx drivers to provision a cert with LetsEncrypt. Technically the Nginx drivers aren’t necessary – you could provision the certs directly – but the added config editor is a nice feature.

certbot took care of just about everything!

sudo apt-get install certbot python3-certbot-nginx

certbot --nginx -d irc.example.chat

I say “just about” because these certs still expire every 90 days. I’m guaranteed to forget about the cert, so I set a cron job (sudo crontab -e) to renew the cert every week.

0 0 * * 0 certbot renew --quiet

Configuring Weechat

The last step of any ZNC install is to set up your client. I use Weechat, so the next steps may be different for you.

Weechat needs to validate ZNC’s SSL cert to connect over 6697, so grab the SSL certificate fingerprint from the droplet first.

cat ~/.znc/znc.pem \
    | openssl x509 -sha512 -fingerprint -noout \
    | tr -d ':' \
    | tr 'A-Z' 'a-z' \
    | cut -d = -f 2

On the weechat client, I added the ZNC server with a default network, set the fingerprint, connected, and saved my changes. One detail that I forget constantly: these creds aren’t your network creds, they’re your ZNC creds.

/server add ZNC irc.example.chat/6697 -ssl -username=username/network -password=password
/set irc.server.ZNC.ssl_fingerprint <fingerprint>
/connect ZNC

Most networks require you to authenticate with SASL these days, which I set through Weechat. Another option is to load the SASL module and set your credentials through the web console.

/msg *Status LoadMod sasl
/msg *SASL Set nick pass
/msg *SASL RequireAuth true

And that’s about it. We’ve set up the A record for our domain, configured separate HTTP and IRC listeners for ZNC, generated an SSL cert through LetsEncrypt, proxied web traffic to ZNC with Nginx, and connected securely with Weechat. A pretty productive afternoon!

If you’d like to chat, you can find me on libera.chat!

DONE A Year with Emacs @devlogs

It is important to preface that everything in this article is opinion, based on (roughly) a year of heavy Emacs usage. It is also important to know that this article will be updated alongside my configuration and tastes. So, without further ado…

We all know Emacs is an immensely powerful beast. We also know how easy it is to venture down a rabbit hole of elisp and never surface. I liken it to a carpenter replacing a door. After removing the old door, he notices the hinges are askew. He removes the hinges only to notice rot in the door frame. By the time he replaces the frame, he notices a slight difference in shade between the new frame and old moldings… The learning curve for Emacs is wonderfully circular. That being said, I would like to take a moment and explain my configuration in moderate detail.

Before I get too technical, I should probably explain my fascination and reservations with Emacs. Brief background: I was forced into using Emacs when the only other editor on the lab machines was Gedit (and Vi, but we’ll forget about that for now). In all honesty, it was quite a hassle. I began compiling a minimal init.el out of necessity. Linum, flyspell, you name it. It was certainly a gradual transition from cushy Atom, but, after a long while, it became an addiction. It wasn’t until I discovered a keyboard designed with Emacs in mind (the Atreus) that I saw Emacs (and its devoted community) in all of its glory.

As for my reservations…

The learning curve is far too steep. My time is best spent elsewhere.

WRONG. The weeks of struggling with Meta keys and Emacs pinkie pay off. Trust me. My productivity has increased substantially, and I feel extraordinarily comfortable in my configuration. Granted, Emacs is truly a lifestyle. Embrace it.

It’s a bloated editor packed with legacy functionality. The startup time is just too long!

MYTH. You think Emacs is too heavy for your system? Try running Eclipse and Chrome simultaneously and then get back to me. As long as your config file is optimized (cough cough, ‘use-package’), the startup time won’t be longer than a couple of seconds. Granted, on a system with limited resources, Vi may be a better option. Which brings me to my biggest qualm: Vi is an editor; Emacs is an editor AND an IDE. When remoting into a server, I’m not about to X-forward a fully functional Emacs when bandwidth and memory are scarce. For that reason, I keep a modest .vimrc on hand for some quick CLI editing.

DONE Ergodox Infinity LCD Firmware @devlogs

So you’ve got yourself an Ergodox Infinity. Congratulations! Everyone probably thinks you’re a little bit crazy for spending that much on a keyboard that strange, with LCD displays that small and a layout you’re struggling to type on. But it’s ok – anyone who shares this strange obsession probably understands.

This post is really to demonstrate how to change the default layer’s LCD logo. Asciipr0n has a very clean guide to this, but I find that parts of it – if not the majority – are out of date. Since the firmware has been updated, I thought I’d update the guide.

DONE Pipewire in Docker @devlogs

Pipewire is a graph-based multimedia processing engine that lets you handle audio + video in real time! I’ve had way too much fun playing with it recently, but spent longer than I care to admit spinning it up in an Ubuntu container.

Most of the examples I saw floating around used systemd or Fedora, but my requirements were:

  1. Ubuntu 22.04
  2. Processes run as background sub-shells without systemd
  3. Built from the latest source
  4. Drop-in replacement for PulseAudio

Side note: I spent some time tinkering with 18.04 LTS, which requires either a PPA or building Meson and ALSA utils from scratch (Pipewire requires versions not available on older Debian systems). I highly recommend the PPA if you go that route…

Front matter and dependencies

As with most containers, we first define the front matter and install all Pipewire build / runtime dependencies. There are probably a few unnecessary packages floating around here, but the goal of this spike wasn’t to optimize the container’s size.

FROM ubuntu:22.04 AS pw_build

LABEL description="Ubuntu-based stage for building pipewire" \
      maintainer="Walker Griggs <[email protected]>"

RUN apt-get update \
    && apt-get install -y \
    ca-certificates  \
    curl             \
    debhelper-compat \
    findutils        \
    git              \
    libasound2-dev   \
    libdbus-1-dev    \
    libglib2.0-dev   \
    libsbc-dev       \
    libsdl2-dev      \
    libudev-dev      \
    libva-dev        \
    libv4l-dev       \
    libx11-dev       \
    ninja-build      \
    pkg-config       \
    python3-docutils \
    python3-pip      \
    meson            \
    pulseaudio       \
    dbus-x11         \
    rtkit

Relevant environment variables

The next step is setting the relevant environment variables for building Pipewire. I like to do this after installing dependencies so I don’t have to re-install everything if one variable changes.

In this example, we’re pulling Pipewire’s latest version (as of time of writing) and defining our build directory. We’re building Pipewire in /root as root – worst practice, but it’s a spike.

ENV PW_VERSION="0.3.59"
ENV PW_ARCHIVE_URL="https://gitlab.freedesktop.org/pipewire/pipewire/-/archive"
ENV PW_TAR_FILE="pipewire-${PW_VERSION}.tar"
ENV BUILD_DIR_BASE="/root"
ENV BUILD_DIR="${BUILD_DIR_BASE}/pipewire-${PW_VERSION}/builddir"


Build the thing

Now that we’ve installed our dependencies, we’re ready to build Pipewire itself. Meson is Pipewire’s build system of choice. I don’t have much experience with Meson, but it was easy enough to work with.

RUN curl -L "${PW_ARCHIVE_URL}/${PW_VERSION}/${PW_TAR_FILE}" -o ${PW_TAR_FILE} \
    && tar -C $BUILD_DIR_BASE -xvf $PW_TAR_FILE \
    && cd $BUILD_DIR_BASE/pipewire-${PW_VERSION} \
    && meson setup $BUILD_DIR \
    && meson configure $BUILD_DIR -Dprefix=/usr \
    && meson compile -C $BUILD_DIR \
    && meson install -C $BUILD_DIR

Setup the entrypoint scripts

Next up are the dominoes of entrypoint scripts.

COPY startup/      /root/startup/
COPY entrypoint.sh /root/entrypoint.sh

CMD ["/bin/bash", "entrypoint.sh"]

I like to break down the entrypoint scripts and order them with a filename prefix. I forget exactly where I picked up this habit, but it stuck a long time ago.

In this example, I’m running xvfb as a lightweight X11 server. From everything I’ve read, Pipewire is really designed to run on a full Wayland system, but I haven’t made the jump on any of my machines and likely won’t for some time.

# startup/00_try-sh.sh
for f in startup/*; do
    source "$f" || exit 1
    sleep 2s
done

# startup/01_envs.sh
export XDG_RUNTIME_DIR=/tmp
export DISPLAY=:0.0

# startup/10_dbus.sh
mkdir -p /run/dbus
dbus-daemon --system --fork

# startup/20_xvfb.sh
Xvfb $DISPLAY -screen 0 1920x1080x24 &

# startup/30_pipewire.sh
mkdir -p /dev/snd
pipewire &
pipewire-media-session &
pipewire-pulse &

Pipewire has a few runtime requirements; dbus and rtkit are top of mind. So long as the Pipewire media session can fork the system dbus session (or launch a new one), though, you should be fine. I’ve personally disabled rtkit.

Another point of note: I’ve opted for media-session which is, unsurprisingly, a reference implementation of Pipewire’s media session. In future revisions, I plan to replace it with the more advanced Wireplumber. Media Session was quick and easy for the time being though.

Run the thing!

There’s not much to it. If we hop into the container and check on the Pulse server, we can see that our Pipewire server is running and properly emulating Pulse. Great success!

root@8e86f658e342:/# pactl info
Server String: /tmp/pulse/native
Library Protocol Version: 35
Server Protocol Version: 35
Is Local: yes
Client Index: 42
Tile Size: 65472
User Name: root
Host Name: 8e86f658e342
Server Name: PulseAudio (on PipeWire 0.3.59)
Server Version: 15.0.0
Default Sample Specification: float32le 2ch 48000Hz
Default Channel Map: front-left,front-right

I’ll likely write more about Pipewire once I get more experience working with it as a desktop service and as an API client. Wim and team have written some great client examples which I’ve modified for a few different use cases – the Simple Plugin API (SPA) is surprisingly… simple. More to follow!

DONE Zettelkasten, Rhizomes, and You @essays

Figure 1: Chris Korner, Deutsches Literaturarchiv Marbach


A few years ago, I stumbled upon a collection of odd websites that called themselves “brain dumps.” On the surface, they seemed like collections of disjointed thoughts – fragments of ideas that linked to seemingly unrelated topics. Often, they bridged disciplines altogether.

That’s when I learned about Zettelkasten.


Zettelkasten (sometimes referred to as Zettel or Zet) is a system for taking notes that is specifically structured to develop ideas, not just collect them. The method has existed for hundreds of years under various names, but at its core, it consists of “bite-sized” notes written on slips of paper that are linked by a heading or a unique ID. These slips, often index cards, are filed away in a place that can be easily referenced and traversed.

The theory behind it is sound. Verweisungsmöglichkeiten, translated as a “referral opportunity” or “possibility of linking,” refers to any moment when you might reference another note or tangential thought. For example, ‘structuralism’ might refer to ‘post-structuralism’ which itself links to ‘Michel Foucault’ and a plethora of post-structuralists. Small, pointed notes can connect to any number of these thoughts across various topics, and reviewing your notes often results in finding commonalities among seemingly disparate ideas. With enough notes in your slip box, you can even hold a conversation with it.

In fact, Niklas Luhmann, a German sociologist credited with creating the modern Zettelkasten method, referred to his slip box as a “partner of communication.” His notes comprised just over 90,000 index cards and helped him write nearly 50 books and 600 essays. Luhmann said:

It is impossible to think without writing; at least it is impossible in any sophisticated or networked fashion. Somehow we must mark differences and capture distinctions which are either implicitly or explicitly contained in concepts. Only if we have secured in this way the constancy of the schema that produces information can the consistency of the subsequent processes of processing information be guaranteed. And if one has to write anyway, it is useful to take advantage of this activity in order to create in the system of notes a competent partner of communication.

You can browse Luhmann’s archive online if you’re interested.

Figure 2: The Niklas Luhmann Archive, Historisches Museum Frankfurt


The Spatial and Temporal

In my experience, Zettelkasten felt counterintuitive at first. We, as humans, live and think spatially. Even how we perceive time is geometric. For example, we’ve created the concept of a “timeline.” When you complete a task, you’ve put it “behind you.” When you start a new phase of life, you’re eager to see “what lies ahead.” Humans are inherently spatial – we live in a three-dimensional world – so naturally, our notes are too.

For example, as we read text or listen to a lecture, we take notes sequentially – top to bottom. We indent or nest our notes to show that certain thoughts “belong” to a certain topic. Headers encapsulate subheaders, similar to how rooms encapsulate closets (which themselves have drawers and boxes, etc.).

Zettelkasten, however, avoids concepts of past, present, and belonging. Notes aren’t concerned with what came before or after them, only how individual thoughts relate to one another. They juxtapose and correlate ideas, rather than spatially positioning them. The value of a note isn’t in its individual content, but in the narrative the collection tells as you discover new paths between and bridges across topics.

Luhmann, too, valued this idea of “internal branching”. New ideas shouldn’t be appended to a list of prior notes, but instead inserted among connected thoughts. This internal network of links creates a greater combination of thoughts than if we simply connected thoughts to what came before and after.

Deleuze, Plato, and Rocking Chairs

Last year, a colleague introduced me to a group of post-structuralists, including Derrida, Deleuze, and Baudrillard. Deleuze particularly caught my attention with his interest in topology. Relevant to this essay is his disdain for representational thinking and strict hierarchy.

To properly understand Deleuze, we should probably first understand Plato. Plato believed that everything has an ideal form, and the closer something is to that ideal form, the closer it is to perfection. For example, there is an ideal chair, and so a chair with a slight wobble is closer to perfection than a chair with a broken leg.

Deleuze describes this model as “arborescent”; it is structured like a tree, where the ideal form is the root and the lesser representations extend out over the branches to the canopy.

In our “chair” example, somewhere on that tree are stools, stumps, and hammocks. They are ranked according to their proximity to the ideal chair. Plato might ask, “How perfect of a chair are you?” but Deleuze took issue with this line of reasoning. He proposed that a better question is “How are you different?” or “What characteristics make you unique?” We can then categorize the stump, stool, and hammock not by their representation of an “ideal chair,” but by the differences between them. Stools are portable, hammocks are soothing, and stumps firmly ground you in nature.

Figure 3: Terry Winters, Rhizome, 1998, Smithsonian American Art Museum


In contrast, Deleuze calls this “rhizomatic” thinking. Rhizomes are systems of roots that spread horizontally underground and branch in every direction. Ginger and asparagus are rhizomes. They have no top or bottom, no start, and no end. They are circuitous and cyclical. If you kill one section, the remaining roots will live on. If you cut it in half, they will live separate lives.

Relative to arborescent thought, nothing in a rhizome represents anything else, and certainly not an ideal form. In rhizomes, all that exists are the connections between nodes. Stools are chairs without a back. Chairs are hammocks without a rotating axis. Hammocks and rocking chairs incorporate motion.

Zettelkasten are also rhizomes. My notes for this essay point me towards Spinoza, then to Pantheism, then to Sikhism, then to Buddhism, then to the concept of time, which itself inspired my earlier point that humans perceive time spatially. They branch, reconnect, wind, and are never hierarchical. They are, if we want to think spatially, horizontal.

Repetition and Paratext

There is another connection between Deleuze and Zettelkasten worth exploring, and that is repetition. Deleuze believed that when you repeat something, you are creating a copy of that thing. When you think about a rocking chair, you are creating another representation of that chair – one that differs in many ways from all the rocking chairs you have seen before. Therefore, by rereading or repeating your notes, you are creating a unique multiplicity.

The problem with this is that your notes do not exist in a vacuum. They are, if transcribed linearly, surrounded by prior context. They are spatially dependent on adjacent ideas – how the topic is presented, the previous lecture, the syllabus as a whole, and even the notes on the chalkboard. This framing is paratextual; it informs how you approach the primary text, similar to how the cover of a book or the font on its spine might.

When you repeat or review linear, contextual notes, you are creating a snapshot of a previous argument – paratext and all. You are retracing the same ground and connecting the same dots. This repetition cannot lead to the creation of new ideas.

Deleuze dislikes representational thinking, in part, because we cannot create anything new if everything represents a common root or a perfect form. By stripping notes of their paratext and revisiting them out of their original order, you give yourself the opportunity to reframe those thoughts. You are not just rehashing the same ideas in the same light; you are creating an entirely new amalgamation from existing scraps. You will find more opportunities for external connection – verweisungsmöglichkeiten – and therefore more opportunities to evolve and transform your existing ideas.

Luhmann found it extremely important for communication partners (you and your notes, in this case) to “mutually surprise each other.” Partners can only successfully communicate, or produce new information, when they “communicate in the face of different comparative goals.”

In closing

So why am I writing this? It was, for all intents and purposes, a proof of concept; a successful conversation with my “communication partner”.

In fact, the majority of time spent writing this piece was spent on flow, grammar, and narrative. I took the bulk of the content from a series of notes written on disparate topics at various times over the last year.

The graph now has enough nodes – the rhizome enough roots – that I’m surprised by new connections. I can follow trains of thought longer than a few nodes. I can venture forward, backpedal, and reconsider thoughts I had from months prior. No note has a perfect form. No note is dependent on time or space. No note is dependent on another.

In all honesty, I’m not sure where this train of thought should end, or if it should end at all.

Maybe in the future, I’ll write something more concrete on how exactly I take notes. For the time being, I’m still working out the finer details. I’ll update this conclusion with “new nodes” as they are written.


Deleuze, Gilles. Difference and Repetition. New York: Columbia University Press, 1994.

Deleuze, Gilles, and Félix Guattari. A Thousand Plateaus: Capitalism and Schizophrenia. Minneapolis: University of Minnesota Press, 1987.

Genette, Gérard. Paratexts: Thresholds of Interpretation. Literature, Culture, Theory 20. Cambridge ; New York, NY, USA: Cambridge University Press, 1997.

Luhmann, Niklas. Communicating with Slip Boxes. Accessed January 5, 2023. https://luhmann.surge.sh/communicating-with-slip-boxes.

The Rhizome - A Thousand Plateaus, Deleuze and Guattari. Then & Now, 2018. https://www.youtube.com/watch?v=RQ2rJWwXilw&ab_channel=Then%26Now.

DONE Timestamp Troubles @talks



Video is hard, and reliable timestamps in increasingly virtual environments are even harder.

We at Mux recently broke ground on a new live video experience, one that takes a website URL as input and outputs a livestream. We call it Web Inputs. As with any abstraction, Web Inputs hides quite a bit of complexity, so it wasn’t long before we ran up against our first “unexpected behavior”: our audio and video streams were out of sync.

This talk walks you through our experience triaging our timestamp troubles. It’s a narrative account that puts equal weight on the debugging process and the final implementation, and it aims to leave the audience with a new perspective on the triage process.

I hope you’ll learn from our mistakes, pick up a bit about Libav audio device decoders, and maybe walk away with a new pattern for web-to-video streaming.


Hey everyone, my name is Walker Griggs, and I’m an engineer at Mux.

I’m actually going to do something a little out of order here and introduce the “punchline” for my talk before I even introduce the topic.

The punchline is: “reliable timestamps when livestreaming from virtual environments are really, really hard.”

I’m giving the punchline away because this talk isn’t about the conclusion; it’s about the story I’m going to tell you. It’s a story about our mistakes, a little bit about Libav audio device decoders, and a lot about some good, old-fashioned detective work.

One last piece of framing: up until I joined Mux 9 months ago, I worked with databases. That was a simpler time. WHIP still meant whipped cream and DASH was still 100 meters.

I’ve realized, though, that databases and video have a lot more in common than you might think. They’re both sufficiently complex pillars of the modern internet, they both require a degree of subject matter expertise, and, at first glance, neither are exceptionally transparent.

That’s why this talk is geared toward those of us who are looking to level up our deductive reasoning skills and maybe add a few new triage tools to our toolbox. At the end of the day, all that matters is “getting there”.

So where is this talk going?

We’ll start by introducing the problem space, of course. Every good story needs an antagonist. We’ll take a quick detour to talk about timestamps, and use that info to color how we triaged the problem. Finally, we’ll arrive back at our problem statement and how we fixed it.

So let’s jump into it. On and off for the last 9 months, I’ve been working on a system called Web Inputs. Web Inputs takes a website URL as input, and outputs a livestream. URL in, video out. On the surface that seems pretty simple, but, as most abstractions do, that simplicity hides a great deal of complexity.

Web Inputs has to wear quite a few hats.

  1. First and foremost, it runs a headless browser to handle all of the target website’s client-side interaction. For example, broadcasting WebRTC is a common use case, so the headless browser – Chromium, in our case – needs to decode all participant streams.
  2. Chromium then pushes audio and video onto separate buffers – X11 and Pulseaudio, specifically. We opted to use a virtual X11 frame buffer instead of a canvas to avoid the GPU requirement.
  3. Finally, FFmpeg can transcode the buffer content and broadcast over Mux’s standard Livestream API.

An adjustment we made early on, and the one that’s the catalyst for this entire talk, was to hide the page load from the livestream. If we start Chrome and immediately buffer audio and video, we’re going to catch the webpage loading in the resulting livestream. That’s not a great customer experience.

Instead, we can listen to Chrome’s events. One of them is called “First Meaningful Paint”, and it’s effectively Chrome saying “something interesting is on the screen now, you should probably pay attention.” A colleague of mine, Garrett Graves, actually came up with this idea. From a timing perspective, it worked really well, but this change is also when we started seeing some odd behaviors.

Behavior number 1: the first 4-7 seconds of audio and video looked like they were shot from a cannon. The audio was scattered all over the place, and frames were jumping left and right.

Behavior number 2: the audio and video would meander in and out of sync over the course of the broadcast.

That’s no good. So what did we do? We did what I’m sure many of you are guilty of: we stayed up late into the morning fiddling with FFmpeg flags. We read all the blog posts on AV sync. We tried various combinations of filters and flags.

The problem with this approach, as many of you are probably itching to call out, is it lacks evidence. We spent a day on what effectively amounted to trial and error. In fact, a colleague of mine put together a spreadsheet of the flags we had tried, links to the resulting videos, and various, subjective scores.

The most frustrating part: sometimes we’d get close, and I mean really, really close. And then one test run would fail, which would put us right back at square one.

Another point to call out here: we were testing in different environments. We were comparing behaviors from production against our development stack and the differences were staggering. We allocate Web Inputs some number of cores in production. For context, our entire development stack runs on that same number. It didn’t take long before we noticed how inconsistent dev really was, and that our qualitative assessments weren’t going to get us there.

Empirical evidence is and will always be the fastest way to understanding your problem.

Before we look at any logs or metrics, let’s run through a quick primer on timestamps so we’re all on the same page.

You’ll often hear PTS and DTS talked about – the “presentation timestamp” and the “decode timestamp”. For starters, every frame has both, and together they dictate the order of frames. The PTS is when a player should present that specific frame to the viewer. The DTS is when the player should decode the frame.

These timestamps are different because frames aren’t always stored or transmitted in the order you view them. Some frames actually refer back to one another. These are called “predictive” or “delta” frames.
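To make that concrete, here’s a toy example – sketched in Python, nothing video-specific – of how decode order and presentation order diverge once predictive frames enter the mix:

```python
# Illustrative frame list: the display order is I B B P, but the P frame
# must be decoded before the B-frames that reference it.
frames = [
    {"type": "I", "pts": 0, "dts": 0},
    {"type": "P", "pts": 3, "dts": 1},  # decoded early, presented last
    {"type": "B", "pts": 1, "dts": 2},
    {"type": "B", "pts": 2, "dts": 3},
]

decode_order = [f["type"] for f in sorted(frames, key=lambda f: f["dts"])]
display_order = [f["type"] for f in sorted(frames, key=lambda f: f["pts"])]
```

Sorting by DTS gives the order the player decodes (I, P, B, B), while sorting by PTS gives the order the viewer actually sees (I, B, B, P).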

With that out of the way, let’s talk about our triage process.

One thing we found early in our investigation: FFmpeg was complaining about timestamps assigned by the Pulseaudio device decoder. Naturally, we wanted to go right to the source, so we added some new log lines to the decoder and dumped various metrics to disk.

The first thing to call out: “non-monotonic DTS in output stream”. These warnings can be the bane of your existence if you’re not careful. They mean that your decode timestamps are not always increasing from frame to frame – some of them jump backwards in time.

Another bit to call out is the sample sizes. We’re seeing a huge push of these 64kb packets at the start of the stream, which settles down to a steady 4kb after the first few seconds.

The next bit to question: PTS and DTS on audio samples. Audio ‘frames’ don’t form groups of pictures like video frames do. Audio doesn’t have predictive frames, so why do its samples carry both timestamps, and why are the two different?

Ultimately it comes down to Libav’s data models. Frames and packets are general structs and used for both video and audio, so we can think of “PTS” and “DTS” in this context as ‘appropriately typed fields that can store timestamps’. So that explains why we’re using this terminology, but it doesn’t explain why they’re different.

For that we have to look at the Pulse decoder which does 3 things when it assigns timestamps to frames.

The first is to fetch the time according to the wall clock; that’s the DTS. It then adjusts the DTS by the sample latency. That latency is just the time difference between when the sample was buffered by Pulse and when it was requested by FFmpeg.

It then runs the DTS through a filter to de-noise it and smooth out the timestamps frame to frame. The wall clock isn’t always perfect, as we’ll see more of in a second, and it can be exceptionally sporadic in these virtual environments.

Keep in mind, this system is running in a Docker container, on a VM, which is probably itself managed by a hypervisor. We’re likely not reading from a hardware timing crystal here, so we de-noise the DTS to offset any inconsistencies.
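As a rough sketch of those three steps – in Python, with made-up names, not the actual Libav code – the decoder’s timestamp assignment looks something like this:

```python
import time

class PulseTimestamper:
    """Rough sketch of the three steps above. The names are made up;
    the real Libav Pulse device decoder differs in detail."""

    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing      # weight given to the running delta
        self.filtered_delta = None      # de-noised frame-to-frame delta
        self.last_dts = None

    def assign_dts(self, latency_us, wallclock_us=None):
        # 1. Fetch the time according to the wall clock.
        if wallclock_us is None:
            wallclock_us = int(time.time() * 1_000_000)
        # 2. Adjust by the sample latency: the time between the sample
        #    being buffered by Pulse and being requested by FFmpeg.
        dts = wallclock_us - latency_us
        # 3. De-noise: blend each new frame-to-frame delta into a running
        #    average so one sporadic wall-clock reading can't whipsaw the DTS.
        if self.last_dts is not None:
            delta = dts - self.last_dts
            if self.filtered_delta is None:
                self.filtered_delta = delta
            else:
                self.filtered_delta = (self.smoothing * self.filtered_delta
                                       + (1 - self.smoothing) * delta)
            dts = self.last_dts + self.filtered_delta
        self.last_dts = dts
        return dts
```

Feed it a sporadic wall clock and the smoothed delta resists the noise – which is also why the filter can soften, but never fully fix, the startup scramble.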

We’re heading in the right direction, but at this point I’d say we have “data” — not “evidence”. Long log files aren’t exactly human readable, and they’re certainly hard to reason about. I may not be a Python developer, but the one thing I’ll swear by is Python’s ability to visualize and reason about data sets.

The first thing we wanted to visualize was these timestamps, of course. We expected to see a linear increase in timestamps, maybe with an artifact of those non-monotonic warnings in the first few seconds.

Good news: we do! But, maybe not as clearly as we should.

Unfortunately, this graph doesn’t tell us that much; we can’t draw any conclusions from this data. What would be more helpful is to graph the rate at which these timestamps fluctuate, because what we really care about is how reliable and consistent they are. The derivative – the rate of change – of this data might show us how unstable these timestamps actually are.

Lo and behold: the derivative is pretty telling. So what are we looking at? Well, the derivative of a linearly increasing function is flat, so this tells us that after some number of seconds, our timestamps are dead close to linearly increasing. That’s what we want!

But the first few seconds — they tell another story. Every time the slope increases, timestamps are increasing in a super-linear way. When the slope decreases, our timestamps are slowing down or even “jumping back in time” in a sub-linear way. So that’s interesting, but maybe more interesting is that this is only occurring for the first few seconds.

Also worth calling out is that our denoising filter is doing its job, but it can’t spin gold from straw. The peaks are lower and the troughs are higher, but the filter is only as good as the data it’s fed.
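The analysis itself is nothing fancy; it boils down to taking the first difference of the raw timestamps. A minimal sketch, with illustrative numbers rather than our real data:

```python
def dts_deltas(dts_values):
    """First difference of a timestamp sequence. Linear timestamps give a
    flat 'derivative' (every delta identical); spikes mean super-linear
    jumps, and negative deltas mean time ran backwards."""
    return [b - a for a, b in zip(dts_values, dts_values[1:])]

# Illustrative numbers only. A healthy 48kHz stream pulling 1024-sample
# frames advances roughly 21333 microseconds per frame.
stable = [i * 21_333 for i in range(5)]
jumpy = [0, 64_000, 60_000, 85_333, 106_666]  # startup scramble, then settles
```

The `stable` sequence differences to a flat line; the `jumpy` one spikes, dips negative, and then settles – exactly the shape we saw in the first few seconds of our streams.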

There was another piece to the logs: that back pressure of buffered samples at the beginning of the stream.

If we graph the latency as well, we see some rough correlation. Again, sharp spikes of latency early in the stream, which settle down to something more consistent.

If we think back to those initial behaviors, I think this visualizes them pretty well. We see an initial scramble of timestamps which likely is causing the player to throw frames at us in a seemingly random or unpredictable order. We can also see that the timestamps aren’t perfectly linear, which would explain why AV sync meanders a little bit over the course of a stream.

Something to call out here, though: this is a correlational relationship, not necessarily a causal one. These graphs are only part of the picture, and it might be hasty to drop the gavel and blame Pulse. There are a number of paths left unexplored here. For example, these are only the audio samples; there’s a whole other side to the video samples to explore.

We needed to step back and consider our goals at this point, though. It’s important to remember that these visualizations are just interpretations – not hard evidence. We, like many of you, are under deadlines.

We had to make a difficult decision here: keep digging, or act on what we already knew. We went with the latter and stripped the problem back to first principles.

Before we talk about how we fixed it, it’s important to talk about what we already knew.

The first and very naive solution we used to validate our hypothesis was to ignore all samples until we were pulling off nice, round, 4kb packets. This solution gave us fine results in a controlled environment, but we’d never want this hack in production for obvious reasons.

The logical next step here is to flush Pulse’s buffers. If you remember where this entire saga began, we were trying to cleanly start headless Chrome without broadcasting the loading screen. Any data buffered before the start of the transcode can be tossed. We found limited success interacting with the audio server directly.

The last option was the one we ultimately went with, which is counting the number of samples and computing the DTS on the fly.

So what does that look like for us? First, we record the wall time when we initialize the device decoder — that’s our ‘starting time’. We then ignore all buffered samples with a DTS before that starting time.

From there, we count each sample we do care about and use that to determine sample perfect timestamps using our target frequency and timebase.

For example, if our target frequency is 48khz, or 48000hz, and we’ve already decoded 96000 samples, that means we’re exactly 2 seconds into the livestream.

If we translate this solution into terms Libav will understand, it’s actually fairly simple.
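Here’s the gist of that computation, sketched in Python with hypothetical names (the real change lives inside the device decoder):

```python
from fractions import Fraction

# Hypothetical constants; the real values come from the decoder's context.
SAMPLE_RATE = 48_000             # target frequency: 48kHz
TIME_BASE = Fraction(1, 48_000)  # Libav-style rational timebase

def dts_from_sample_count(samples_decoded, start_dts=0):
    """Sample-perfect timestamp: ignore the wall clock and derive the DTS
    from how many samples we've decoded since the stream started. With a
    1/48000 timebase, each sample advances the DTS by exactly one tick."""
    return start_dts + samples_decoded

def seconds_into_stream(samples_decoded):
    return float(samples_decoded * TIME_BASE)
```

At 48khz, 96,000 decoded samples put us exactly 2 seconds into the livestream – matching the example above.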

The results were so much closer. Not perfect, but closer. In fact, over the next few days, we ran an 8-hour test stream and noticed that, over the course of the day, millisecond by millisecond, the video pulled ahead of the audio.

So, what gives?

See, what we learned firsthand is that, when it comes to livestreaming timestamps, you can’t trust any one single method. Counting samples is great in theory, but it isn’t responsive by itself. There are a number of reasons why we might drop samples, and this solution has no way to recover if we do. Sharks bite undersea cables.

So instead, we can re-sync where appropriate and actually use the wall clock as a system of checks and balances. If the two methods of determining timestamps disagree by more than some threshold, re-sync. You could, for example, reset that initial timestamp and restart the frame counter.

This solution gives you the accuracy of a wall clock but the precision of sample counting.
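One way to sketch that checks-and-balances loop – the threshold and names here are made up, not what we shipped:

```python
RESYNC_THRESHOLD_S = 0.5  # made-up threshold; tune for your own pipeline

def maybe_resync(sample_count, sample_rate, stream_start_s, wallclock_s):
    """Checks and balances: compare the sample-counted clock against the
    wall clock, and re-sync (reset the base time and the counter) if the
    two disagree by more than the threshold."""
    counted_s = sample_count / sample_rate
    wall_s = wallclock_s - stream_start_s
    if abs(counted_s - wall_s) > RESYNC_THRESHOLD_S:
        # Trust the wall clock for accuracy; restart the counter for precision.
        return wallclock_s, 0
    return stream_start_s, sample_count
```

Between re-syncs, sample counting keeps timestamps precise; when the two clocks drift apart, the wall clock pulls a long-running stream back to accurate.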

So what are some takeaways here?

  1. For us, this experience was our first time getting our hands dirty with device decoders. We found that, in this instance, going through FFmpeg’s documentation flag by flag wasn’t going to cut it. There’s a big gap in online resources between high-level glossaries and low-level specifications. Getting hands-on was the only way to fill that gap.

  2. Choose redundancy where it matters. This lesson is something we’ve learned in infrastructure and databases; video is no different. It’s not always best to trust a single system when calculating timestamps.

  3. The last takeaway, and one we actually started on recently, is to invest in glass-to-glass testing. We wasted far too many hours watching test cards and Big Buck Bunny – my palms still get sweaty when I hear that pan flute.

    One thing we tried was injecting QR codes directly into test cards with audible sync pulses at regular intervals. We can then check the resulting waveform to see if those pulses landed on frames flagged with QR codes. We can then use the frame count and sample rate to calculate how we’ve deviated.
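The deviation math at the end of that test is simple. A sketch, assuming the QR code tells you which video frame you’re on and the waveform tells you which audio sample the pulse landed on:

```python
def av_offset_ms(qr_frame_index, frame_rate, pulse_sample_index, sample_rate):
    """Hypothetical glass-to-glass check: the QR code flags a video frame
    and the sync pulse lands on an audio sample; compare their times.
    Positive means video leads audio; negative means it lags."""
    video_t = qr_frame_index / frame_rate
    audio_t = pulse_sample_index / sample_rate
    return (video_t - audio_t) * 1000.0
```

If the pulse was supposed to land on frame 60 of a 30fps stream and shows up at sample 96,000 of 48khz audio, both sit at the 2-second mark and the offset is zero; a frame later, and the video has pulled ahead.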

That said, I think the big takeaway here is the one I told you was coming from the very beginning: “reliable timestamps in virtual environments are really, really hard.”

DONE The Guy Who Likes Lemons @essays

#+begin_description I’ve recently been thinking about my personal brand – whatever that means. Do I want to expose all of myself? None? Probably some facets, but which? I doubt I’ll ever find the right balance, if there is such a thing.#+end_description

“I want to be remembered as the guy who likes lemons.”

About 10 years ago, I asked a college admissions advisor what she considered the most memorable essay she’d ever read. She responded without pause: “I want to be remembered as the guy who likes lemons.”

She explained. There are always wonderful essays about ambition and adversity, but this one, semi-sensible essay took the cake. The first sentence was “I want to be remembered as the guy who likes lemons.”

She didn’t remember the specifics of the essay. It was probably some analogy about how the author strove to be bright and funky, or sweet and sour. The contents of the essay didn’t even matter; it was all about that unforgettable opening sentence that I’m still talking about 10 years later.

In a world where hiring managers review your resume in 6 seconds, users context switch after 400ms, and the optimal sales email can fit in a single tweet, the guy who likes lemons got that advisor’s attention in just 11 words.

Wear Orange Shoes

Four years later, I read Dave Kerpen’s book “The Art of People”. Out of 53 chapters, one detail has stuck with me: Kerpen always wears orange shoes.

Some people will think that’s silly; most won’t notice. There’s a sliver of people, though, that will never forget those bright orange shoes. If that translates to just one new connection, hire, or investment, the shoes have paid out in dividends.

I stood in a room filled with entrepreneurs and investors, hoping to get the attention of just one. I was contemplating whether to get a drink from the bar, when all of a sudden I heard, “I have got to talk to the man wearing those f–king shoes!”…

Were my orange shoes the reason I secured an investment? Of course not. But they were the reason I got into a conversation in the first place. In a room full of people trying to get busy people’s attention, that was all it took to stand out in the crowd.

The key, Kerpen asserts, is to garner attention and be authentic. When I gave my first conference talk, the overwhelming majority of messages in the Slack thread weren’t questions; they were comments about my mustache. Some jokes were Mario riffs, and others were a bit more creative.

Jokes aside, my mustache gave me an identity at that conference, intentional or not. As one commenter said: “your mustache brought all the boys to the yard”. Afterwards, a friend told me that I couldn’t shave it; “it’s part of your brand”.

The funny thing is I never intended to grow a mustache. I was debugging a misbehaving system under a time crunch, and didn’t shave for a bit. After the dust settled, I promised coworkers that I’d keep the mustache until our product launched. Now I have a permanent tea strainer.

Look good, feel good, sh*t in the woods!

I went to college in Maine. Freshman orientation was a one-week trek deep into the forest. So deep, in fact, that our van had skid plates for dirt roads and bull bars in case we “bumped” into a moose.

Before we left campus with our groups, one of the student guides stood on a picnic table and started chanting “Look good, feel good, sh*t in the woods!”, emphasizing every word. Then everyone joined in!

It was goofy and ridiculous, but it loosened everyone up and made us comfortable around each other. We’d be living in close quarters for the week, and quite out of our element, but something about being crass and childish was oddly freeing. It shattered social pretense and set a clear tone; we were there to be honest and ourselves.

We played orientation-type games, repaired a few hiking trails, and then went our separate ways. Over the next four years, though, you could walk into any communal space, chant “look good, feel good!”, and at least one person would holler back “sh*t in the woods!” while grinning ear to ear.

I recently met a fellow alumna at a holiday party. I told this story and repeated those magic words. “COOT!”, she said (the name of our orientation week). Those 9 words were all it took to open the flood gate of shared experiences, and we happily compared notes and swapped stories.

The Game of Life

I’ve recently been thinking about my personal brand – whatever that means. Do I want to expose all of myself? None? Probably some facets, but which? I doubt I’ll ever find the right balance, if there is such a thing.

At the end of the day, like all of you, I have varied interests. What I might want to shout from the rooftops one day, I may be uncomfortable sharing the next; and what others might find meaningful, might feel inconsequential to me.

One interview that lives rent-free in my head is Numberphile’s discussion with John Conway on his Game of Life. To Conway, the Game of Life felt like an insignificant curiosity. To the rest of the world, the game was immensely impactful.

Conway spent much of his illustrious career studying cellular automata, but his most notorious work can be written in less than 40 characters.

Kolmogorov complexity is the idea that something is only as complex as the shortest program that can reproduce it. By those standards, the Game of Life is less complex than the names of some Welsh towns.

Well, I used to say, and I’m still inclined to say occasionally, that I hate it, I hate the Game of Life. I don’t really, at least I don’t anymore.

The reason why I felt like that was that, whenever my name was mentioned with respect to some mathematics, it was always the Game of Life. And I don’t think the Game of Life is very very interesting. I don’t think it was worth all that, I’ve done lots of other mathematical things. So I found the Game of Life was overshadowing much more important things and I did not like it.

The Game of Life, and Conway’s relationship with it, highlights how little it takes to leave a lasting impression, and how often it happens when you least expect it. No matter how we try, we can’t control it.

Franz Wright wrote that “one of the few pleasures of writing is the thought of one’s book in the hands of a kind hearted intelligent person somewhere.”

I take solace in knowing that, as long as we approach life bright and funky, bold and authentic, or smiling and borderline puerile, someone will remember.

And so, of course, there’s only one way to close. My name is Walker Griggs, and I want to be remembered as the mustached man who looked good, felt good, loved lemons, and shat in the woods – metaphorically, of course.

DONE Data Preservation, Alf’s Room, and Spicy P @essays

Figure 4: Alf, “Welcome to Alf’s Room. I am Alf”

A colleague of mine recently bought a capture card to record their Nintendo Switch. I asked if they wanted to stream on Twitch or post to Youtube. “No,” they explained, “I just like saving the recordings for my personal archives…something to remember.”

They explained that they once posted often on a Youtube channel and were proud of the content. Even with some regular viewers, though, they decided to delete the channel and contents along with it. Looking back, they deeply regret that decision – understandable.

I’ve had a similar experience. I had a Youtube channel sometime around 2011, posting silly game montages. I wasn’t any good at the games but enjoyed creating, curating, and narrating. In some ways, the channel made me feel like I was contributing to the broader gaming community.

You can imagine my disappointment when someone commented: “your videos are fantastic; if only you started 3 years ago.” Looking back, that comment is hilariously short sighted. Youtube and esports were in their infancy back then and still are in many ways. But I was young and took feedback from a random internet stranger to heart – something I still struggle with today. I chose to believe that I was too late to the punch. Like my colleague, I deleted the channel.

In hindsight, I think that was a huge mistake. My content was likely the most sincere that I’ll ever produce. I had no expectations. “Youtuber” was barely a career, and I was stoked to get 50 views. I was posting for myself; something relatively few folks can say today – myself included.

That conversation with my colleague highlights a crucial distinction between personal and public content. There’s something inherently genuine and intimate about content created just for yourself or your closest circle. There’s no ego – only enthusiasm. That sincerity makes it all the more intriguing to the outsider, and it’s what I believe contributed to Youtube’s early success.

Spicy P and the Handycam Vision

Pascal Siakam, an NBA player currently with the Toronto Raptors, showed up to the NBA All-Star game this year with a tape camcorder – three, actually! What may have started as an homage to Shaq and his VX1000 turned into a spectacle all its own.

Figure 5: Sam Byford, Pascal Siakam and his NBA All-Star Weekend camcorders: an investigation

People were throwing themselves at the camera. In a sea of cellphones and broadcast equipment, everyone wanted to be taped by Spicy P and his Handycam Vision. It was a sincere gesture on Siakam’s part. His smiles were genuine, and everyone’s reactions were priceless. Afterwards, he posted clips to Instagram. The film was grainy and the colors fluctuated so often that it couldn’t be used for anything but personal records. I love that idea.

Lowering the production quality, in a way, signals to others that you intend to create memories – artifacts of a time worth remembering. Siakam didn’t stabilize his footage or edit out moments where the ref blocked the camera. He panned faster than the camera could capture, and he wasn’t concerned about lighting or framing or jump cuts. The footage is a lens through which he can relive those moments – we’re just along for the ride, warts and all.

Sam Byford has a full post exclusively dedicated to Siakam and cameras. I’ll leave the technical breakdown to him, because he’s certainly done his research.

Alf’s Room

Siakam’s story reminds me so much of Adachi Yoshinori’s website Alf’s Room, which gained popularity in small circles after Nick Robinson posted The mystery of MICHAELSOFT BINBOWS.

Alf’s Room is Yoshinori’s digital “cabinet of curiosities.” The site is broken down into a handful of categories like trains, computers, and music. One stands out: the “exhibition room,” home to “weird, unusual and mysterious things, photos, etc.” It’s a grab bag of oddities like t-shaped vending machines, potted plants at train stations, and holiday lights. Like Siakam, Yoshinori doesn’t worry about the composition of his photos. They exist solely to document his experiences.

Figure 6: Adachi Yoshinori, Ueki Station

On their own, these oddities and unusual mysteries wouldn’t garner likes or follows; they would never individually “blow up.” Yet, Alf’s Room did indeed go viral. It sits at the intersection of quaint and eccentric, and I have to believe that it gained brief fame because of its quirky personality. The site itself could be part of someone else’s “weird and unusual” digital curio – that’s what makes it so appealing.

His website isn’t some Squarespace-special. It’s been hand crafted and maintained since 1996 with no target audience in mind except himself. It’s a digital diary that just so happens to be publicly accessible.

As an aside, it’s the poster child of IndieWeb.

Betamax and Blinkenlights

Along with any conversation about digital archives comes concerns about mixing preservation and privacy.

For some time now, I’ve wanted to build a rack dedicated to the conservation and restoration of analog tape media. Aside from my love for all things blinken’, I’m interested in the preservation of at-risk media.

There’s an incredible amount of data actively decaying or otherwise falling victim to the annals of time. VCRs are breaking down, tapes are rotting, and interest in the older formats has all but disappeared outside a dedicated few. Tapes are not a durable format – at least not the tape made for home consumption.

Figure 7: /u/nicholasserra, Video archival rack build: one year update; More gear, bigger rack

Ironically, that fragility makes this data more valuable. Disney will always have a master record of “Beauty and the Beast,” but your family memories can never be recovered.

Barbecues in the park, your child’s first steps, and highschool graduations are moments that capture the fundamental human experience and serve as a historical record regardless of how mundane the memory.

A close friend asked a simple but difficult question when I mentioned digitizing non-commercial tape: “do you think that’s an invasion of privacy?” Should the lifecycle of those tapes be tied to the lifespan of the camera-person? Is there a statute of limitations on privacy, after which these artifacts transition from personal effects to public archive? Where does ‘good will’ fit?

I don’t think there’s a clear answer to this moral quandary. But I do know that the same sincerity that makes cherished memories private and intimate also makes them intriguing. In some paradoxical way, that also makes them worth sharing.

So what?

So what do blinkenlights, Alf’s Room, and Spicy P have to do with each other? For starters, data is plentiful and only getting more abundant. That presents a new challenge in deciding what data is worth keeping. I’ll argue, the most important data are the bits you never share – the bits that can never be replaced.

Forget those well lit, well staged, well edited ‘candids’; I’m talking about those half-drunk selfies with your mother at Christmas dinner.

I’m talking about Adachi Yoshinori’s snapshots tracking the history of potted trees at his local train station or Pascal Siakam’s grainy footage of his friends at the All-Star game.

I’m talking about my YouTube channel from 2011 and those VHS tapes stuffed deep in your grandparents’ closet. I may not be the right person to preserve them, but you are.

How to Overcomplicate Offline Storage @devlogs

Seven years ago, I made the decision to keep offline backups of all my personal data. What started as a 1 terabyte external hard drive loaded with a few sentimental photos, zipped folders of school projects, and maybe the odd 360p DVD rip has turned into a 40TB NAS and 26TB worth of offline drives.

Figure 8: LTO Tapes: the dream no one can reasonably afford

Recently, I tried explaining my system to a friend, but my thoughts kept running off in every possible direction. This post answers ‘how I store and track offline files’; Data Preservation, Alf’s Room, and Spicy P answers ‘why.’

What to backup

Before anyone thinks about how they handle offline storage, they should first think about what they’re storing. Personally, I have three rules.

Can this data be replaced?

If not, I won’t hesitate to keep multiple offline copies. Data in this category includes family photos and artifacts of hard work like blog posts, source code, or research notes. These are items that need to be protected at all times.

Do I access the file regularly?

If I access a file less often than I perform a round of disk maintenance, it’s probably not worth keeping on an always-online disk. That said, pulling out the drives, setting up an external drive dock, and mounting them all takes time. I avoid it when I can.

Am I running out of online storage space?

As I’ve expanded my networked storage, my definition of a “large file” has changed, and my tolerance for always-online storage has grown. Mileage may vary.

Picking the right drive

Everyone has a different opinion on which drives are best for offline storage. I think the best drives are the ones you have.

Personally, my archive drives are a graveyard of systems-past. They’re all different capacities, manufactured by different companies, and spin at different speeds. As long as they’re above their S.M.A.R.T. thresholds, they’re fine by me.
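For reference, checking a drive against those thresholds is quick with smartmontools (assuming it’s installed; /dev/sdX is a placeholder for whichever drive is in the dock):

```shell
# Overall health self-assessment: "PASSED" means no S.M.A.R.T.
# attribute has fallen below its failure threshold
smartctl -H /dev/sdX

# Full attribute table: worth eyeballing Reallocated_Sector_Ct
# and Current_Pending_Sector before trusting an old drive
smartctl -A /dev/sdX
```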

Capacity is another story; some drives are just too small to bother with. My general rule of thumb is “4 times the size of your data set, divided by your maximum acceptable number of drives.” Personally, I never want to maintain more than 10 offline drives at a time. Routine scrubbing and maintenance can be a slow process; let’s not make it slower than it has to be.

For me, currently, that’s (12TB * 4) / 10 or about 5TB per drive. At the moment, I have a grab bag of 8, 4, and 3TB drives, so that math works out pretty well.

Why 4x the size of the dataset? Well, hard drives – especially old drives – aren’t meant to be long-term, offline data solutions. As a result, I try to keep 2 copies of every file spread across multiple drives. I also like to keep 50% of each drive’s capacity free, which is probably overkill, but it spreads the files out nicely and reduces the blast radius should a drive fail completely.
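Those two assumptions – 2 copies of every file, drives no more than half full – are where the 4x comes from. The arithmetic fits in a one-liner (the numbers mirror my setup; substitute your own):

```shell
# (dataset * copies / max_fill) / max_drives
# 12TB of data, 2 copies, 50% fill, at most 10 drives:
awk 'BEGIN { print (12 * 2 / 0.5) / 10 }'   # prints 4.8 (TB per drive)
```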

Workflow, in theory

This is where things get opinionated and tailored to your needs. Personally, I wanted a system that could track which files live on which offline drive, verify their integrity with checksums, and guarantee a minimum number of copies spread across drives.

I chose btrfs and git-annex.

I initially narrowed my filesystem choice down to either BTRFS or ZFS, but I have personal experience with the former and dislike the experience of exporting and offlining ZFS pools. BTRFS is included in the Linux kernel and has all the features I look for in a modern filesystem, most relevant here: block-level deduplication, disk defragmenting, and data scrubbing.

Git Annex surprised me, honestly. I hadn’t given it much thought in the past, but it covered my requirements fully. At the very least, it aligns with my normal software development workflow. From their website:

git-annex allows managing large files with git, without storing the file contents in git. It can sync, backup, and archive your data, offline and online. Checksums and encryption keep your data safe and secure. Bring the power and distributed nature of git to bear on your large files with git-annex.

Annex supports quite a few remote repository backends: web, bittorrent, XMPP, and S3, to name a few. Unless I decide to move files into AWS S3 or Glacier in the future, I’ll only ever use the bare filesystem. I’d recommend at least reading through their docs – they’re wonderful!

Workflow, in practice

My workflow, in practice, is pretty simple.

  1. Load a drive up with files until it’s mostly full
  2. Annex those files into a git repository and sync to the origin remote
  3. Defragment the drive
  4. Scrub the filesystem to ensure that all checksums match
  5. Clone the repository on another drive and copy over any files that have fewer than 2 copies

Here are some rough steps to reproduce that workflow.
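As a sketch, those five steps look something like the following. Assume git-annex and btrfs-progs are installed; /dev/sdX, the mount points, and the origin path are all placeholders for my setup:

```shell
# 1. Format, mount, and fill the drive until it's roughly half full
mkfs.btrfs /dev/sdX
mount /dev/sdX /mnt/archive

# 2. Annex the files and sync the metadata to the origin remote
cd /mnt/archive
git init
git annex init "archive-drive-01"
git remote add origin /path/to/origin.git
git annex add .
git commit -m "Annex drive 01"
git annex sync origin

# 3. Defragment the drive
btrfs filesystem defragment -r /mnt/archive

# 4. Scrub the filesystem to verify all checksums (-B runs in the foreground)
btrfs scrub start -B /mnt/archive

# 5. Clone onto a second drive and pull anything with fewer than 2 copies
#    (the first drive must be mounted and reachable as a remote)
git clone /path/to/origin.git /mnt/archive-2
cd /mnt/archive-2
git annex init "archive-drive-02"
git remote add drive-01 /mnt/archive
git annex get --not --copies 2
```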
