Don't make Google sell Chrome

The web will be far worse off if Google is forced to sell Chrome, even if it's to atone for legitimate ad-market monopoly abuses. Which means we'll all be worse off, as the web would lose ground to actual monopoly platforms, like the iOS App Store and Google's own Play Store.

First, Chrome won the browser war fair and square by building a better surfboard for the internet. This wasn't some opportune acquisition. This was the result of grand investments, great technical prowess, and markets doing what they're supposed to do: rewarding the best. Besides, we have a million alternatives. Firefox still exists, so does Safari, and so do the billion Chromium-based browsers like Brave and Edge. And we finally even have new engines on the way with the Ladybird browser.

Look, Google's trillion-dollar business depends on a thriving web that can be searched by Google.com, that can be plastered in AdSense, and that now can feed the wisdom of AI. Thus, Google's incredible work to further the web isn't an act of charity, it's of economic self-interest, and that's why it works. Capitalism doesn't run on benevolence, but incentives.

We want an 800-pound gorilla in the web's corner! Because Apple would love nothing better (despite the admirable work by Team Safari to keep up with Chrome) than to see the web's capacity as an application platform diminished. As would every other owner of a proprietary application platform. Microsoft fought the web tooth and nail back in the 90s because they knew that a free, open application platform would undermine lock-in — and it did!

But the vitality of that free and open application platform depends on constant development. If the web stagnates, other platforms will gain. The work Team Chrome does to push the web forward in a million ways — be it import maps, nested CSS, web push, etc. — is therefore essential.

This is a classic wealth vs. riches mistake. Lawyers see Chrome as valuable in a moment's snapshot, but the value is all in the wealth that continued investment brings. A Chrome left to languish with half the investment will evaporate as quickly as a lottery winner's riches. Wealth requires maintenance to endure.

Google should not get away with rigging the online ad market, but forcing it to sell Chrome will do great damage to the web.

We'll always need junior programmers

We received over 2,200 applications for our just-closed junior programmer opening, and now we're going through all of them by hand and by human. No AI screening here. It's a lot of work, but we have a great team who take the work seriously, so in a few weeks, we'll be able to invite a group of finalists to the next phase.

This highlights the folly of thinking that what it'll take to land a job like this is some specific list of criteria, though. Yes, you have to present a baseline of relevant markers to even get into consideration, like a great cover letter that doesn't smell like AI slop, promising projects or work experience or educational background, etc. But to actually get the job, you have to be the best of the ones who've applied!

It sounds self-evident, maybe, but I see questions time and again about it, so it must not be. Almost every job opening is grading applicants on the curve of everyone who has applied. And the best candidate of the lot gets the job. You can't quantify what that looks like in advance.

I'm excited to see who makes it to the final stage. I already hear early whispers that we got some exceptional applicants in this round. It would be great to help counter the narrative that this industry no longer needs juniors. That's simply absurd.

However good AI gets, we're always going to need people who know the ins and outs of what the machine comes up with. Maybe not as many, maybe not in the same roles, but it's truly utopian thinking that mankind won't need people capable of vetting the work done by AI in five minutes.

The new Framework 13 HX370

The new AMD HX370 option in the Framework 13 is a good step forward in performance for developers. It runs our HEY test suite in 2m7s, compared to 2m43s for the 7840U (and 2m49s for an M4 Pro!). It's also about 20% faster in most single-core tasks than the 7840U.
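For the curious, here's the quick arithmetic on those suite times, using only the numbers quoted above:

```ruby
# Test-suite times from above, converted from mm:ss to seconds.
hx370 = 2 * 60 + 7    # 127s on the new HX370
u7840 = 2 * 60 + 43   # 163s on the old 7840U
m4pro = 2 * 60 + 49   # 169s on an M4 Pro

# Fraction of the 7840U's runtime the HX370 shaves off.
gain = (u7840 - hx370) / u7840.to_f

puts "#{(gain * 100).round}% shorter suite run"   # 22% shorter
puts m4pro - hx370                                # 42 — seconds by which it beats even the M4 Pro
```

So the multi-core suite gain is a bit larger than the ~20% single-core figure.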

But is that enough to warrant the jump in price? AMD's latest, best chips have suddenly gotten pretty expensive. The F13 w/ HX370 now costs $1,992 with 32GB RAM / 1TB. Almost the same as an M4 Pro MBP14 w/ 24GB / 1TB ($2,199). I'd pick the Framework any day for its better keyboard, 3:2 matte screen, repairability, and superb Linux compatibility, but it won't be because the top option is "cheaper" any more.

Of course, you could also just go with the budget 6-core Ryzen AI 5 340 in the same spec for $1,362. I'm sure that's a great machine too. But maybe the sweet spot is actually the Ryzen AI 7 350. It "only" has 8 cores (vs 12 on the 370), but four of those are performance cores, the same as the 370. And it's $300 cheaper, so ~$1,600 gets you out the door. I haven't actually tried the 350, though, so that's just speculation. I've been running the 370 for the last few months.

Whichever chip you choose, the rest of the Framework 13 package is as good as it ever was. This remains my favorite laptop of at least the last decade. I've been running one for over a year now, and combined with Omakub + Neovim, it's the first machine in forever where I've actually enjoyed programming on a 13" screen. The 3:2 aspect ratio combined with Linux's superb multiple desktops that switch with 0ms lag and no animations means I barely miss the trusted 6K Apple XDR screen when working away from the desk.

The HX370 gives me about 6 hours of battery life in mixed use. About the same as the old 7840U. Though if all I'm doing is writing, I can squeeze that to 8-10 hours. That's good enough for me, but not as good as a Qualcomm machine or an Apple M-chip machine. For some people, those extra hours really make the difference.

What does make a difference, of course, is Linux. I've written repeatedly about how much of a joy it's been to rediscover Linux on the desktop, and it's a joy that keeps on giving. For web work, it's so good. And for any work that requires even a minimum of Docker, it's so fast (as the HEY suite run time attests).

Apple still has a strong hardware game, but their software story is falling apart. I haven't heard many people sing the praises of new iOS or macOS releases in a long while. It seems like without an asshole in charge, both have moved toward more bloat, more ads, more gimmicks, more control. Linux is an incredible antidote to this nonsense these days.

It's also just fun! Seeing AMD catch up in outright performance if not efficiency has been a delight. Watching Framework perfect their 13" laptop while remaining 100% backwards compatible in terms of upgrades with the first versions is heartwarming. And getting to test the new Framework Desktop in advance of its Q3 release has only affirmed my commitment to both.

With the new HX370, it's in my opinion the best Linux laptop you can buy today, which by extension makes it the best web developer laptop too. The top spec might have gotten a bit pricey, but there are options all along the budget spectrum, and they all retain the key ingredients anyway. Hard to go wrong.

Forza Framework!

Normal boyhood is ADHD

Nearly a quarter of seventeen-year-old boys in America have an ADHD diagnosis. That's crazy. But worse than the diagnosis is that the majority of them end up on amphetamines, like Adderall or Ritalin. These drugs allow kids, especially teenage boys (diagnosed at 2-3x the rate of girls), to do what their minds would otherwise resist: study subjects they find boring for long stretches of time. Hurray?

Except, it doesn't even work. Because taking Adderall or Ritalin doesn't actually help you learn more, it merely makes trying tolerable. The kids might feel like the drugs are helping, but the test scores say they're not. It's Dunning-Kruger — the phenomenon where low-competence individuals overestimate their abilities — in a pill.

Furthermore, even this perceived improvement is short-term. The sudden "miraculous" ability to sit still and focus on boring school work wanes in less than a year on the drugs. In three years, pill poppers are doing no better than those who didn't take amphetamines at all.

These are all facts presented in a blockbuster story in the New York Times Magazine entitled Have We Been Thinking About A.D.H.D. All Wrong?, which unpacks all the latest research on ADHD. It's depressing reading.

Not least because the definition of ADHD is so subjective and situational. The NYTM piece is full of anecdotes from kids with an ADHD diagnosis whose symptoms disappeared when they stopped pursuing a school path out of step with their temperament. And just look at these ADHD markers from the DSM-5:

Inattention
Difficulty staying focused on tasks or play.
Frequently losing things needed for tasks (e.g., toys, school supplies).
Easily distracted by unrelated stimuli.
Forgetting daily activities or instructions.
Trouble organizing tasks or completing schoolwork.
Avoiding or disliking tasks requiring sustained mental effort.

Hyperactivity
Fidgeting, squirming, or inability to stay seated.
Running or climbing in inappropriate situations.
Excessive talking or inability to play quietly.
Acting as if “driven by a motor,” always on the go.

Impulsivity
Blurting out answers before questions are completed.
Trouble waiting for their turn.
Interrupting others’ conversations or games.

The majority of these so-called symptoms are what I'd classify as "normal boyhood". I certainly could have checked off a bunch of them, and you only need six over six months for an official ADHD diagnosis. No wonder a quarter of those seventeen-year-old boys in America qualify!

Borrowing from Erich Fromm’s The Sane Society, I think we're looking at a pathology of normalcy, where healthy boys are defined as those who can sit still, focus on studies, and suppress kinetic energy. Boys with low intensity and low energy. What a screwy ideal to chase for all.

This is all downstream from an obsession with getting as many kids through as much safety-obsessed schooling as possible. While the world still needs electricians, carpenters, welders, soldiers, and a million other occupations that exist outside the narrow educational ideal of today.

Now I'm sure there is a small number of really difficult cases where even the short-term break from severe symptoms that amphetamines can provide is welcome. The NYTM piece quotes the doctor who did one of the most consequential studies on ADHD as putting that number around 3% — a world apart from the quarter of seventeen-year-olds discussed above.

But as ever, there is no free lunch in medicine. Long-term use of amphetamines acts as a growth inhibitor, resulting in kids up to an inch shorter than they otherwise would have been. On top of the awful downs that often follow amphetamine highs. And the loss of interest, humor, and spirit that frequently comes with the treatment too.

This is all eerily similar to what happened in America when a bad study from the 1990s convinced a generation of doctors that opioids actually weren't addictive. By the time they realized the damage, they'd set in motion an overdose and addiction cascade that's presently killing over 100,000 Americans a year. The book Empire of Pain chronicles that tragedy well.

Or how about the surge in puberty-blocker prescriptions, which has now been arrested in the UK, following the Cass Review, as well as in Finland, Norway, Sweden, France, and elsewhere.

Doctors are supposed to first do no harm, but they're as liable to be swept up in bad paradigms, social contagions, and ideological echo chambers as the rest of us. And this insane over-diagnosis of ADHD fits that liability to a T.

Believe it's going to work even though it probably won't

To be a successful founder, you have to believe that what you're working on is going to work — despite knowing it probably won't! That sounds like an oxymoron, but it's really not. Believing that what you're building is going to work is an essential component of coming to work with the energy, fortitude, and determination it's going to require to even have a shot. Knowing it probably won't is accepting the odds of that shot.

It's simply the reality that most things in business don't work out. At least not in the long run. Most businesses fail. If not right away, then eventually. Yet the world economy is full of entrepreneurs who try anyway. Not because they don't know the odds, but because they've chosen to believe they're special.

The best way to balance these opposing points — the conviction that you'll make it work, the knowledge that it probably won't — is to do all your work in a manner that'll make you proud either way. If it doesn't work, you still made something you wouldn't be ashamed to put your name on. And if it does work, you'll beam with pride from making it on the basis of something solid.

The deep regret from trying and failing only truly hits when you look in the mirror and see Dostoevsky staring back at you with this punch to the gut: "Your worst sin is that you have destroyed and betrayed yourself for nothing." Oof.

Believe it's going to work. 
Build it in a way that makes you proud to sign it.
Base your worth as a human on something greater than a business outcome.

Why we won't hire a junior with five years of experience

We just opened a search for a new junior programmer at 37signals. It's been years since we last hired a junior, but the real reason the listing is turning heads is because we're open about the yearly salary: $145,849*. That's high enough that programmers with lots of experience are asking whether they could apply, even if they aren't technically "junior". The answer is no.

The reason we're willing to pay a junior more than most is because we're looking for a junior who's better than most. Not better in "what do they already know", but in "how far could they go". We're hiring for peak promise — and such promise only remains until it's revealed.

Maybe it sounds a little harsh, but a programmer who's been working professionally for five years has likely already revealed their potential. What you're going to get is roughly what you see. That doesn't mean that people can't get better after that, but it means that the trajectory by which they improve has already been plotted.

Whereas a programmer who's either straight out of school or fresh off their first internship or short-stint job is essentially all potential. So you draw their line on the basis of just a few early dots, but the line can be steep.

It's not that different from something like the NFL scouting combine. Teams fight to find the promise of The Next All-Star. These rookies won't have the experience that someone who's already played in the league for years would have, but they have the potential to be the best. Someone who's already played for several seasons will have shown what they have and be weighed accordingly.

This is not easy to do! Plenty of rookies, in sports and programming, may show some early potential, then fail to elevate their game to where the buyer is betting it could be. But that's the chance you take to land someone extraordinary.

So if you know a junior programmer with less than three years of industry experience who is sparkling with potential, do let them know of our listing. And if you know someone awesome who's already a senior programmer, we also have an opening for them.

*It's a funnily precise number because it's pulled directly from the Radford salary database, which we query for the top 10% of San Francisco salaries for junior programmers.

Universal Basic Dead End

While the world frets about the future of AI, the universal basic income advocates have an answer ready for the big question of "what are we all going to do when the jobs are gone": Just pay everyone enough to loaf around as they see fit! Problem solved, right?

Wrong. The purpose of work is not just about earning your keep, but also about earning a purpose and a place in the world. This concept is too easily dismissed by intellectuals who imagine a world of liberated artists and community collaborators, if only they were unshackled from the burdens of capitalism. Because that's the utopia that appeals to them.

But we already know what happens to most people who lose their job. It's typically not a song-and-dance of liberation, but a whimper of increasing despair. Even if they're able to draw benefits for a while.

Some of that is probably gendered. I think men have a harder time finding a purpose without a clear and externally validated station of usefulness. As a corollary to the quip that "women want to be heard, men want to be useful" from psychology. Long-term unemployment, even cushioned by state benefits, often leads men to isolation and a rotting well-being.

I've seen this play out time and again with men who've lost their jobs, men who've voluntarily retired from their jobs, and men who've sold their companies. As the days add up after the centering purpose in their life disappeared, so does the discontent with "the problem of being".

Sure, these are just anecdotes. Some men are thrilled to do whatever, whenever, without financial worries. And some women mourn a lost job as deeply as most men do. But I doubt it's evenly split.

Either way, I doubt we'll be delighted to discover what societal pillars wither away when nobody is needed for anything. If all labor market participation rests on intrinsic motivation. That strikes me as an obvious dead end.

We may not have a say in the matter, of course. The AI revolution, should it materialize like its proponents predict, has the potential to be every bit as unstoppable as the agricultural, industrial, and IT revolutions before it. Where the Luddites and the Amish, who reject these revolutions, end up as curiosities on the fringe of modern civilization. The rest of us are transformed, whether we like it or not.

But generally speaking, I think we have liked it! I'm sure it was hard to imagine what we'd all be doing after the hoe and the horse gave way to the tractor and combine back when 97% of the population worked the land. Same when robots and outsourcing claimed the most brutish assembly lines in the West. Yet we found our way through both to a broadly better place.

The IT revolution feels trickier. I've personally worked my life in its service, but I'm less convinced it's been as universal good as those earlier shifts. Is that just nostalgia? Because I remember a time before EVERYTHING IS COMPUTER? Possibly, but I think there's a reason the 80s in particular occupy such a beloved place in the memory of many who weren't even born then.

What's more certain to me is that we all need a why, as Viktor Frankl told us in Man's Search for Meaning. And while some of us are able to produce that artisanal, bespoke why imagined by some intellectuals and academics, I think most people need something prepackaged. And a why from work offers just that. Especially in a world bereft of a why from God.

It's a great irony that the more comfortable and frictionless our existence becomes, the harder we struggle with "the problem of being". We just aren't built for a life of easy leisure. Not in mass numbers, anyway. But while the masses can easily identify the pathology of that when it comes to the idle rich, and especially their stereotyped trust-fund offspring, they still crave it for themselves.

Orwell's thesis is that heaven is merely that fuzzily-defined place that provides relief from the present hardships we wish to escape. But Dostoevsky remarks that should man ever find this relief, he'd be able to rest there for just a moment, before he'd inevitably sabotage it — just to feel something again.

I think of that often while watching The Elon Show. Musk's craving for the constant chaos of grand gestures is Dostoevsky's prediction underwritten by the wealth of the world's richest man. Heaven is not a fortune of $200 billion to be quietly enjoyed in the shade of a sombrero. It's in the arena.

I’ve also pondered this after writing about why Apple needs a new asshole in charge, and reflecting on our book, It Doesn't Have To Be Crazy At Work. Yes, work doesn’t have to be crazy, but for many, occasional craziness is part of the adventure they crave. They’ll tolerate an asshole if he takes them along for one such adventure — accepting struggle and chaos as a small price to feel alive.

It's a bit like that bit from The Babylon Bee: Study Finds 100% Of Men Would Immediately Leave Their Desk Job If Asked To Embark Upon A Trans-Antarctic Expedition On A Big Wooden Ship. A comical incarnation of David Graeber's Bullshit Jobs thesis that derives its punchline from how often work lacks a Big Why. So when a megalomaniac like Musk — or even just a run-of-the-mill asshole with a grand vision — offers one, the call of the wild beckons. Like that big wooden ship and the open sea.

But even in the absence of such adventure, a stupid email job offers something. Maybe it isn't much, maybe it doesn't truly nourish the soul, but it's something. In the Universal Basic Income scenario of having to design your own adventure entirely from scratch, there is nothing. Just a completely blank page with no deadline to motivate writing the first line.

If we kill the old 9-5 "why", we better find a new one. That might be tougher than making silicon distill all our human wisdom into vectors and parameters, but we have to pull it off.

Great AI Steals

Picasso got it right: Great artists steal. Even if he didn’t actually say it, and we all just repeat the quote because Steve Jobs used it. Because it strikes at the heart of creativity: None of it happens in a vacuum. Everything is inspired by something. The best ideas, angles, techniques, and tones are stolen to build everything that comes after the original.

Furthermore, the way to learn originality is to set it aside while you learn to perfect a copy. You learn to draw by imitating the masters. I learned photography by attempting to recreate great compositions. I learned to program by aping the Ruby standard library.

Stealing good ideas isn’t a detour on the way to becoming a master — it’s the straight route. And it’s nothing to be ashamed of.

This, by the way, doesn’t just apply to art but to the economy as well. Japan became an economic superpower in the 80s by first poorly copying Western electronics in the decades prior. China is now following exactly the same playbook to even greater effect. You start with a cheap copy, then you learn how to make a good copy, and then you don’t need to copy at all.

AI has sped through the phase of cheap copies. It’s now firmly established in the realm of good copies. You’re a fool if you don’t believe originality is a likely next step. In all likelihood, it’s a matter of when, not if. (And we already have plenty of early indications that it’s actually already here, on the edges.)

Now, whether that’s good is a different question. Whether we want AI to become truly creative is a fair question — albeit a theoretical or, at best, moral one. Because it’s going to happen if it can happen, and it almost certainly can (or even has).

Ironically, I think the peanut gallery disparaging recent advances — like the Ghibli fever — over minor details in the copying effort will only accelerate the quest toward true creativity. AI builders, like the Japanese and Chinese economies before them, will be eager to demonstrate an ability to exceed the original.

All that is to say that AI is in the "Good Copy" phase of its creative evolution. Expect "The Great Artist" to emerge at any moment.

The Year on Linux

I've been running Linux, Neovim, and Framework for a year now, but it easily feels like a decade or more. That's the funny thing about habits: They can be so hard to break, but once you do, they're also easily forgotten.

That's how it feels having left the Apple realm after two decades inside the walled garden. It was hard for the first couple of weeks, but since then, it’s rarely crossed my mind.

Humans are rigid in the short term, but flexible in the long term. Blessed are the few who can retain the grit to push through that early mental resistance and reach new maxima.

That is something that gets harder with age. I can feel it. It takes more of me now to wipe a mental slate clean and start over. To go back to being a beginner. But the reward for learning something new is as satisfying as ever.

But it's also why I've tried to be modest with the advocacy. I don't know if most developers are better off on Linux. I mean, I believe they are, at some utopian level, especially if they work for the web, using open source tooling. But I don't know if they are as humans with limited will or capacity for change.

Of course, it's fair to say that one simply doesn't want to. Either because one remains a fan of Apple, is in dire need of the remaining edge MacBooks retain on efficiency/battery, or is simply content inside the ecosystem. There are plenty of reasons why someone might not want to change. It's not just about rigidity.

Besides, it's a dead end trying to convince anyone of an alternative with the sharp end of a religious argument. That kind of crusading just seeds resentment and stubbornness. I know that all too well.

What I've found to work much better is planting seeds and showing off your plowshare. Let whatever curiosity blooms find its own way towards your blue sky. The mimetic engine of persuasion runs much cleaner anyway.

And for me, it's primarily about my personal computing workbench regardless of what the world does or doesn't. It was the same with finding Ruby. It's great when others come along for the ride, but I'd also be happy taking the trip solo too.

So consider this a postcard from a year into the Linux, Neovim, and Framework journey. The sun is still shining, the wind is in my hair, and the smile on my lips hasn't been this big since the earliest days of OS X.

Singularity & Serenity

The singularity is the point where artificial intelligence goes parabolic, surpassing humans writ large, and leads to rapid, unpredictable change. The intellectual seed of this concept was planted back in the '50s by early computer pioneer John von Neumann. So it’s been here since the dawn of the modern computer, but I’ve only just come around to giving the idea consideration as something other than science fiction.

Now, this quickly becomes quasi-religious, with all the terms being as fluid as redemption, absolution, and eternity. What and when exactly is AGI (Artificial General Intelligence) or SAI (Super Artificial Intelligence)? You’ll find a million definitions.

But it really does feel like we’re on the cusp of something. Even the most ardent AI skeptics are probably finding it hard not to be impressed with recent advances. Everything Is Ghibli might seem like a silly gimmick, but to me, it flipped a key bit here: the style persistence, solving text in image generation, and then turning those images into incredible moving pictures.

What makes all this progress so fascinating is that it’s clear nobody knows anything about what the world will look like four years from now. It’s barely been half that time since ChatGPT and Midjourney hit us in 2022, and the leaps since then have been staggering.

I’ve been playing with computers since the Commodore 64 entertained my childhood street with Yie Ar Kung-Fu on its glorious 1 MHz processor. I was there when the web made the internet come alive in the mid-'90s. I lined up for hours for the first iPhone to participate in the grand move to mobile. But I’ve never felt less able to predict what the next token of reality will look like.

When you factor in recent advances in robotics and pair those with the AI brains we’re building, it’s easy to imagine all sorts of futuristic scenarios happening very quickly: from humanoid robots finishing household chores à la The Jetsons (have you seen how good it’s getting at folding?) to every movie we watch being created from a novel prompt on the spot, to, yes, even armies of droids and drones fighting our wars.

This is one of those paradigm shifts with the potential for Total Change. Like the agricultural revolution, the industrial revolution, the information revolution. The kind that rewrites society, where it was impossible to tell in advance where we’d land.

I understand why people find that uncertainty scary. But I choose to receive it as exhilarating instead. What good is it to fret about a future you don’t control anyway? That’s the marvel and the danger of progress: nobody is actually in charge! This is all being driven by a million independent agents chasing irresistible incentives. There’s no pause button, let alone an off-ramp. We’re going to be all-in whether we like it or not.

So we might as well come to terms with that reality. Choose to marvel at the accelerating milestones we've been hitting rather than tremble over the next.

This is something most religions and grand philosophies have long since figured out. The world didn’t just start changing; we’ve had these lurches of forward progress before. And humans have struggled to cope with the transition since the beginning of time. So, the best intellectual frameworks have worked on ways to deal.

Christianity has the Serenity Prayer, which I’ve always been fond of:

God, grant me the serenity
to accept the things I cannot change,
the courage to change the things I can,
and the wisdom to know the difference.

That’s the part most people know. But it actually continues:

Living one day at a time,
enjoying one moment at a time;
accepting hardship as a pathway to peace;
taking, as Jesus did,
this sinful world as it is,
not as I would have it;
trusting that You will make all things right
if I surrender to Your will;
so that I may be reasonably happy in this life
and supremely happy with You forever in the next.
Amen.

What a great frame for the mind!

The Stoics were big on the same concept. Here’s Epictetus:

Some things are in our control and others not. Things in our control are opinion, pursuit, desire, aversion, and, in a word, whatever are our own actions. Things not in our control are body, property, reputation, command, and, in one word, whatever are not our own actions.

Buddhism does this well too. Here’s the Buddha being his wonderfully brief self:

Suffering does not follow one who is free from clinging.

I don’t think it’s a coincidence that all these traditions converged on the idea of letting go of what you can’t control, not clinging to any specific preferred outcome. Because you’re bound to be disappointed that way. You don’t get to know the script to life in advance, but what an incredible show, if you just let it unfold.

This is the broader view of amor fati. You should learn to love not just your own fate, but the fate of the world — its turns, its twists, its progress, and even the inevitable regressions.

The singularity may be here soon, or it may not. You’d be a fool to be convinced either way. But you’ll find serenity in accepting whatever happens.

It's five grand a day to miss our S3 exit

We're spending just shy of $1.5 million/year on AWS S3 at the moment to host files for Basecamp, HEY, and everything else. The only way we were able to get the pricing that low was by signing a four-year contract. That contract expires this summer, June 30, so that's our departure date for the final leg of our cloud exit.

We've already racked the replacement from Pure Storage in our two primary data centers. A combined 18 petabytes, securely replicated a thousand miles apart. It's a gorgeous rack full of blazing-fast NVMe storage modules, with each card in the chassis now capable of storing 150TB.

Pure Storage comes with an S3-compatible API, so there's no need for Ceph, MinIO, or any of the other object-storage solutions you might require if you were trying to do this exercise on commodity hardware. This makes the swap pretty easy from the app side.

But there's still work to do. We have to transfer almost six petabytes out of S3. In an earlier age, that egress alone would have cost hundreds of thousands of dollars in fees. But now AWS offers a free 60-day egress window for anyone who wants to leave, so that drops the cost to $0. Nice!

It takes a while to transfer that much data, though. Even on the fat 40-Gbit pipe we have set aside for the purpose, it'll probably take at least three weeks, once you factor in overhead and some babysitting of the process.
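That three-week figure is easy to sanity-check with some back-of-the-envelope arithmetic. The 60% effective link utilization below is my assumption (to account for protocol overhead and babysitting pauses), not a number from the post:

```python
# Rough transfer-time estimate for ~6 PB over a 40 Gbit/s link.
# EFFECTIVE_UTILIZATION is an assumed figure, not from the post.

DATA_PETABYTES = 6
LINK_GBITS_PER_SEC = 40
EFFECTIVE_UTILIZATION = 0.6  # assumed: overhead + babysitting

data_bits = DATA_PETABYTES * 1e15 * 8                        # bytes -> bits
effective_rate = LINK_GBITS_PER_SEC * 1e9 * EFFECTIVE_UTILIZATION
seconds = data_bits / effective_rate
days = seconds / 86_400

print(f"{days:.1f} days")  # -> 23.1 days, i.e. roughly three weeks
```

At full line rate it would be closer to two weeks; any realistic overhead pushes it past three.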

That's when it's good to remind ourselves why June 30th matters. And the reminder math pencils out in nice, round numbers for easy recollection: If we don't get this done in time, we'll be paying a cool five thousand dollars a day to continue using S3 (if all the files are still there). Yikes!

That's $35,000/week! That's $150,000/month!

Pretty serious money for a company of our size. But so are the savings. Over five years, it'll now be almost five million! Maybe even more, depending on the growth in files we need to store for customers. About $1.5 million for the Pure Storage hardware, and a bit less than a million over five years for warranty and support.
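For the curious, the arithmetic behind all these figures pencils out like so. One assumption of mine: the five-year savings are measured against the current ~$1.5M/year contracted S3 spend, not the post-contract daily rate:

```python
# Restating the post's numbers.

POST_CONTRACT_DAILY = 5_000            # S3 cost/day if we miss June 30
print(POST_CONTRACT_DAILY * 7)         # -> 35000 per week
print(POST_CONTRACT_DAILY * 30)        # -> 150000 per month

CONTRACT_YEARLY = 1_500_000            # current contracted S3 spend
s3_five_years = CONTRACT_YEARLY * 5    # what five more years of S3 would cost

hardware = 1_500_000                   # Pure Storage purchase
support = 1_000_000                    # "a bit less than a million" for 5y support

savings = s3_five_years - (hardware + support)
print(savings)                         # -> 5000000, "almost five million"
```

And that's before any growth in stored files, which would only widen the gap.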

But those big numbers always seem a bit abstract to me. The idea of paying $5,000/day, if we miss our departure date, is awfully concrete in comparison.

pure-storage.jpeg

To hell with forever

Immortality always sounded like a curse to me. But especially now, having passed the halfway point of the average wealthy male life expectancy. Another scoop of life as big as the one I've already been served seems more than enough, thank you very much.

Does that strike you as morbid?

It's funny, people seem to have no problem understanding satiation when it comes to the individual parts of life. Enough delicious cake, no more rides on the rollercoaster, the end of a great party. But not life itself.

Why?

The eventual end strikes me as a beautiful relief. It frames the idea that you can see enough, do enough, be enough. And have enjoyed the bulk of it, without wanting it to go on forever.

Have you seen Highlander? It got panned on its initial release in the 80s. Even Sean Connery couldn't save it with the critics at the time. But I love it. It's one of my all-time favorite movies. It's got a silly story about a worldwide tournament of immortal Highlanders who live forever, lest they get their heads chopped off, and then the last man standing wins... more life?

Yeah, it doesn't actually make a lot of sense. But it nails the sadness of forever. The loneliness, the repetition, the inevitable cynicism with humanity. Who wants to live forever, indeed.

It's the same theme in Björk's wonderfully melancholic song I've Seen It All. It's a great big world, but eventually every unseen element will appear as but a variation on an existing theme. Even surprise itself will succumb to familiarity.

Even before the last day, you can look forward to finality, too. I love racing, but I'm also drawn to the day when the reflexes finally start to fade, and I'll hang up the helmet. One day I will write the last line of Ruby code, too. Sell the last subscription. Write the last tweet. How merciful.

It gets harder with people you love, of course. Harder to imagine the last day with them. But I didn't know my great-great-grandfather, and can easily picture him passing with the satisfaction of seeing his lineage carry on without him.

One way to think of this is to hold life with a loose grip. Like a pair of drumsticks. I don't play, but I'm told that the music flows better when you avoid strangling them in a death grip. And then you enjoy keeping the beat until the song ends.

Amor fati. Amor mori.

Age is a problem at Apple

The average age of Apple's board members is 68! Nearly half are over 70, and the youngest is 63. It’s not much better with the executive team, where the average age hovers around 60. I’m all for the wisdom of our elders, but it’s ridiculous that the world’s premier tech company is now run by a gerontocracy.

And I think it’s starting to show. The AI debacle is just the latest example. I can picture the board presentation on Genmoji: “It’s what the kids want these days!!” It’s a dumb feature because nobody on Apple’s board or in its leadership has probably ever used it outside a quick demo.

I’m not saying older people can’t be an asset. Hell, at 45, I’m no spring chicken myself in technology circles! But you need a mix. You need to blend fluid and crystallized intelligence. You need some people with a finger on the pulse, not just some bravely keeping one.

Once you see this, it’s hard not to view slogans like “AI for the rest of us” through that lens. It’s as if AI is like programming a VCR, and you need the grandkids to come over and set it up for you.

By comparison, the average age on Meta’s board is 55. They have three members in their 40s. Steve Jobs was 42 when he returned to Apple in 1997. He was 51 when he introduced the iPhone. And he was gone — from Apple and the world — at 56.

Apple literally needs some fresh blood to turn the ship around.

The 80s are still alive in Denmark

I grew up in the 80s in Copenhagen and roamed the city on my own from an early age. My parents rarely had any idea where I went after school, as long as I was home by dinner. They certainly didn’t have direct relationships with the parents of my friends. We just figured things out ourselves. It was glorious.

That’s not the type of childhood we were able to offer our kids in modern-day California. Having to drive everywhere is, of course, its own limitation, but that’s only half the problem. The other half is the expectation that parents are involved in almost every interaction. Play dates are commonly arranged via parents, even for fourth or fifth graders.

The new hysteria over smartphones doesn’t help either, as it cuts many kids off from being able to make their own arrangements entirely (since the house phone has long since died too).

That’s not how my wife grew up in the 80s in America either. The United States of that age was a lot like what I experienced in Denmark: kids roaming around on their own, parents blissfully unaware of where their offspring were much of the time, and absolutely no expectation that parents would arrange play dates or even sleepovers.

I’m sure there are still places in America where life continues like that, but I don’t personally know of any parents who are able to offer that 80s lifestyle to their kids — not in New York, not in Chicago, not in California. Maybe this life still exists in Montana? Maybe it’s a socioeconomic thing? I don’t know.

But what I do know is that Copenhagen is still living in the 80s! We’ve been here off and on over the last several years, and just today, I was struck by the fact that one of our kids had left school after it ended early, biked halfway across town with his friend, and was going to spend the day at his place. And we didn’t get an update on that until much later.

Copenhagen is a compelling city in many ways, but if I were to guess why U.S. News & World Report just crowned Denmark the best country for raising children in 2025, I’d say it’s the independence — carefree independence. Danish kids roam their cities on their own, manage their social relationships independently, and do so in relative peace and safety.

I’m a big fan of Jonathan Haidt’s work on What Happened In 2013, which he captured in The Coddling of the American Mind. That was a very balanced book, and it called out the lack of unsupervised free play and independence as key contributors to the rise in child fragility.

But it also pinned smartphones and social media with a large share of the blame, despite the fact that the effect, especially on boys, is very much a source of ongoing debate. I’m not arguing that excessive smartphone usage — and certainly social-media brain rot — is good for kids, but this explanation is proving to be a bit too easy a scapegoat for all the ills plaguing American youth.

And it certainly seems like upper-middle-class American parents have decided that blaming the smartphone for everything is easier than interrogating the lack of unsupervised free play, rough-and-tumble interactions for boys, and early childhood independence.

It also just doesn’t track in countries like Denmark, where the smartphone is just as prevalent, if not more so, than in America. My oldest had his own phone by third grade, and so did everyone else in his class — much earlier than Haidt recommends. And it was a key tool for them to coordinate the independence that The Coddling of the American Mind called for more of.

Look, I’m happy to see phones parked during school hours. Several schools here in Copenhagen do that, and there’s a proposal pending in parliament to make it law across the land. Fine!

But I think it’s delusional of American parents to think that banning the smartphone — further isolating their children from independently managing their social lives — is going to be the one quick fix that cures the anxious generation.

What we need is more 80s-style freedom and independence for kids in America.

Apple needs a new asshole in charge

When things are going well, managers can fool themselves into thinking that people trying their best is all that matters. Poor outcomes are just another opportunity for learning! But that delusion stops working when the wheels finally start coming off — like they have now for Apple and its AI unit. Then you need someone who cares about the outcome above the effort. Then you need an asshole.

In management parlance, an asshole is someone who cares less about feelings or effort and more about outcomes. Steve Jobs was one such asshole. So seems to be Musk. Gates certainly was as well. Most top technology chiefs who've had to really fight in competitive markets for the top prize fall into this category.

Apple's AI management is missing an asshole:

Walker defended his Siri group, telling them that they should be proud. Employees poured their “hearts and souls into this thing,” he said. “I saw so many people giving everything they had in order to make this happen and to make incredible progress together.”

So it's stuck nurturing feelings:

“You might have co-workers or friends or family asking you what happened, and it doesn’t feel good,” Walker said. “It’s very reasonable to feel all these things.” He said others are feeling burnout and that his team will be entitled to time away to recharge to get ready for “plenty of hard work ahead.”

These are both quotes from the Bloomberg report on the disarray inside Apple, following the admission that the star feature of the iPhone 16 — the Apple Intelligence that could reach inside your personal data — won't ship until the iPhone 17, if at all.

John Gruber from Daring Fireball dug up this anecdote from the last time Apple seriously botched a major software launch:

Steve Jobs doesn’t tolerate duds. Shortly after the launch event, he summoned the MobileMe team, gathering them in the Town Hall auditorium in Building 4 of Apple’s campus, the venue the company uses for intimate product unveilings for journalists. According to a participant in the meeting, Jobs walked in, clad in his trademark black mock turtleneck and blue jeans, clasped his hands together, and asked a simple question: “Can anyone tell me what MobileMe is supposed to do?”

Having received a satisfactory answer, he continued, “So why the fuck doesn’t it do that?”

For the next half-hour Jobs berated the group. “You’ve tarnished Apple’s reputation,” he told them. “You should hate each other for having let each other down.” The public humiliation particularly infuriated Jobs. 

Can you see the difference? This is an asshole in action.

Apple needs to find a new asshole and put them in charge of the entire product line. Cook clearly isn't up to the task, and the job is currently spread thinly across a whole roster of senior VPs. Little fiefdoms. This is poison to the integrated magic that was Apple's trademark for so long.

The most interesting people

We didn’t use to need an explanation for having kids. That was just life. That’s just what you did. But now we do, because now we don’t.

So allow me: Having kids means making the most interesting people in the world. Not because toddlers or even teenagers are intellectual oracles — although life through their eyes is often surprising and occasionally even profound — but because your children will become the most interesting people to you.

That’s the important part. To you.

There are no humans on earth I’m as interested in as my children. Their maturation and growth are the greatest show on the planet. And having a front-row seat at this performance is literally the privilege of a lifetime.

But giving a review of this incredible show just doesn’t work. I could never convince a stranger that my children are the most interesting people in the world, because they wouldn’t be, to them.

So words don’t work. It’s a leap of faith. All I can really say is this: Trust me, bro.

We wash our trash to repent for killing God

Denmark is technically and officially still a Christian nation. Lutheranism is written into the constitution. The government has a ministry for the church. Most Danes pay 1% of their earnings directly to fund the State religion. But God is as dead here as anywhere in the Western world. Less than 2% attend church service on a weekly basis. So one way to fill the void is through climate panic and piety.

I mean, these days, you can scarcely stroll past stores in the swankier parts of Copenhagen without being met by an endless parade of ads carrying incantations towards sustainability, conservation, and recycling. It's everywhere.

Hilariously, sometimes this even includes recommending that customers don’t buy the product. I went to a pita place for lunch the other day. The menu had a meat shawarma option, and alongside it was a plea not to order it too often because it’d be better for the planet if you picked the falafel instead.

But the hysteria peaks with the trash situation. It’s now common for garbage rooms across Copenhagen to feature seven or more bins for sorting disposals. Despite trash-sorting robots being able to do this job far better than humans in most cases, you see Danes dutifully sorting and subdividing their waste with a pious obligation worthy of the new climate deity.

Yet it’s not even the sorting that gets me — it’s the washing. You can’t put plastic containers with food residue into the recycling bucket, so you have to rinse them first. This leads to the grotesque daily ritual of washing trash (and wasting water galore in the process!).

Plus, most people in Copenhagen live in small apartments, and all that separated trash has to be stored separately until the daily pilgrimage to the trash room. So it piles up all over the place.

This is exactly what Nietzsche meant by “God is dead” — his warning that we’d need to fill the void with another centering orientation toward the world. And clearly, climatism is stepping up as a suitable alternative for the Danes. It’s got guilt, repentance, and plenty of rituals to spare. Oh, and its heretics too.

Look, I'd like a clean planet as much as the next sentient being. I'm not crying any tears over the fact that gas-powered cars are quickly disappearing from the inner-city of Copenhagen. I love biking! I wish we'd get a move on with nuclear for consistent, green energy. But washing or sorting my trash when a robot could do a better job just to feel like "I'm doing my part"? No.

It’s like those damn paper straws that crumble halfway through your smoothie. The point of it all seems to be self-inflicted, symbolic suffering — solely to remind you of your good standing with the sacred lord of recycling, renouncing the plastic devil.

And worse, these small, meaningless acts of pious climate service end up working like Catholic indulgences. We buy a clean conscience by washing trash, so we don't have to feel guilty about setting new records flying for fun.

I’m not religious, but I’m starting to think it’d be nicer to spend a Sunday morning in the presence of the Almighty than to keep washing trash as pagan replacement therapy.

Our switch to Kamal is complete

In a fit of frustration, I wrote the first version of Kamal in six weeks at the start of 2023. Our plan to get out of the cloud was getting bogged down in enterprisey pricing and Kubernetes complexity. And I refused to accept that running our own hardware had to be that expensive or that convoluted. So I got busy building a cheap and simple alternative. 

Now, just two years later, Kamal is deploying every single application in our entire heritage fleet, and everything in active development, finalizing a perfectly uniform mode of deployment for every web app we've built over the past two decades and still maintain.

See, we have this obsession at 37signals: That the modern build-boost-discard cycle of internet applications is a scourge. That users ought to be able to trust that when they adopt a system like Basecamp or HEY, they don't have to fear eviction from the next executive re-org. We call this obsession Until The End Of The Internet.

That obsession isn't free, but it's worth it. It means we're still operating the very first version of Basecamp for thousands of paying customers. That's the OG code base from 2003! Which hasn't seen any updates since 2010, beyond security patches, bug fixes, and performance improvements. But we're still operating it, and, along with every other app in our heritage collection, deploying it with Kamal.

That just makes me smile, knowing that we have customers who adopted Basecamp in 2004, and are still able to use the same system some twenty years later. In the meantime, we've relaunched and dramatically improved Basecamp many times since. But for customers happy with what they have, there's no forced migration to the latest version.

I very much had all of this in mind when designing Kamal. That's one of the reasons I really love Docker. It allows you to encapsulate an entire system, with all of its dependencies, and run it until the end of time. Kind of how modern gaming emulators can run the original ROM of Pac-Man or Pong to perfection and eternity.

Kamal seeks to be but a simple wrapper and workflow around this wondrous simplicity. Complexity is but a bridge — and a fragile one at that. To build something durable, you have to make it simple.

Closing the borders alone won't fix the problems

Denmark has been reaping lots of delayed accolades for its relatively strict immigration policy lately. The Swedes and the Germans in particular are now eager to take inspiration from The Danish Model, given their predicaments. The very same countries that, until recently, condemned Denmark for lacking the open-arms, open-border policies they championed as Moral Superpowers.

But even in Denmark, thirty years after the public opposition to mass immigration started getting real political representation, the consequences of culturally incompatible descendants of MENAPT immigrants continue to stress the high-trust societal model.

Here are just three major cases that have been covered in the Danish media in 2025 alone:

  1. Danish public schools are increasingly struggling with violence and threats against students and teachers, primarily from descendants of MENAPT immigrants. In schools with 30% or more immigrants, violence is twice as prevalent. This is causing a flight to private schools among parents who can afford it (including some Syrians!). Some teachers are quitting the profession as a result, saying "the Quran runs the classroom".
  2. Danish women are increasingly feeling unsafe in the nightlife. The mayor of the country's third largest city, Odense, says he knows why: "It's groups of young men with an immigrant background that's causing it. We might as well be honest about that." But unfortunately, the only suggestion he had to deal with the problem was that "when [the women] meet these groups... they should take a big detour around them".
  3. A soccer club from the infamous ghetto area of Vollsmose got national attention because every other team in their league refused to play them, due to the team's long history of violent assaults and death threats against opposing teams and referees. Bizarrely, this led to a situation where the team rose to the top of its division because they'd "win" every forfeited match.

Problems of this sort have existed in Denmark for well over thirty years. So in a way, none of this should be surprising. But it actually is. Because it shows that long-term assimilation just isn't happening at a scale to tackle these problems. In fact, data shows the opposite: Descendants of MENAPT immigrants are more likely to be violent and troublesome than their parents.

That's an explosive point because it blows up the thesis that time will solve these problems. Showing instead that it actually just makes it worse. And then what?

This is particularly pertinent in the analysis of Sweden. After the "far right" Sweden Democrats got into government, new immigrant arrivals have plummeted. But unfortunately, the net share of immigrants is still increasing, in part because of family reunifications, and thus the problems continue.

Meaning even if European countries "close the borders", they're still condemned to deal with the damning effects of maladjusted MENAPT immigrant descendants for decades to come. If the intervention stops there.

There are no easy answers here. Obviously, if you're in a hole, you should stop digging. And Sweden has done just that. But just because you aren't compounding the problem doesn't mean you've found a way out. Denmark proves to be both a positive example of minimizing the digging while also a cautionary tale that the hole is still there.

Apple does AI as Microsoft did mobile

When the iPhone first appeared in 2007, Microsoft was sitting pretty with their mobile strategy. They'd been early to the market with Windows CE, they were fast-following the iPod with their Zune. They also had the dominant operating system, the dominant office package, and control of the enterprise. The future on mobile must have looked so bright!

But of course now, we know it wasn't. Steve Ballmer infamously dismissed the iPhone with a chuckle, as he believed all of Microsoft's past glory would guarantee them mobile victory. He wasn't worried at all. He clearly should have been!

After reliving that Ballmer moment, it's uncanny to watch this CNBC interview from one year ago with Johny Srouji and John Ternus from Apple on their AI strategy. Ternus even repeats the chuckle!! Exuding the same delusional confidence that lost Ballmer's Microsoft any serious part in the mobile game. 

But somehow, Apple's problems with AI seem even more dire. Because there's apparently no one steering the ship. Apple has been promising customers a bag of vaporware since last fall, and they're nowhere close to being able to deliver on the shiny concept demos. The ones that were going to make Apple Intelligence worthy of its name, and not just terrible image generation that is years behind the state of the art.

Nobody at Apple seems able or courageous enough to face the music: Apple Intelligence sucks. Siri sucks. None of the vaporware is anywhere close to happening. Yet as late as last week, you have Cook promoting the new MacBook Air with "Apple Intelligence". Yikes.

This is partly down to the org chart. John Giannandrea is Apple's VP of ML/AI, and he reports directly to Tim Cook. He's been in the seat since 2018. But Cook evidently does not have the product savvy to be able to tell bullshit from benefit, so he keeps giving Giannandrea more rope. Now the fella has hung Apple's reputation on vaporware, promised all iPhone 16 customers something magical that just won't happen, and even spec-bumped all their devices with more RAM for nothing but diminished margins. Ouch.

This is what regression to the mean looks like. This is what fiefdom management looks like. This is what having a company run by a logistics guy looks like. Apple needs a leadership reboot, stat. That asterisk is a stain.

apple-id-asterisk.png

Beans and vibes in even measure

Bean counters have a bad rep for a reason. And it’s not because paying attention to the numbers is inherently unreasonable. It’s because weighing everything exclusively by its quantifiable properties is an impoverished way to view business (and the world!).

Nobody presents this caricature better than the MBA types who think you can manage a business entirely in the abstract realm of "products," "markets," "resources," and "deliverables." To hell with that. The death of all that makes for a breakout product or service happens when the generic lingo of management theory takes over.

This is why founder-led operations often keep an edge. Because when there’s someone at the top who actually gives a damn about cars, watches, bags, software, or whatever the hell the company makes, it shows up in a million value judgments that can’t be quantified neatly on a spreadsheet.

Now, I love a beautiful spreadsheet that shows expanding margins, healthy profits, and customer growth as much as any business owner. But much of the time, those figures are derivatives of doing all the stuff that you can’t compute and that won’t quantify.

But this isn’t just about running a better business by betting on unquantifiable elements that you can’t prove but still believe matter. It’s also about the fact that doing so is simply more fun! It’s more congruent. It’s vibe management.

And no business owner should ever apologize for having fun, following their instincts, or trusting that the numbers will eventually show that doing the right thing, the beautiful thing, the poetic thing is going to pay off somehow. In this life or the next.

Of course, you’ve got to get the basics right. Make more than you spend. Don’t get out over your skis. But once there’s a bit of margin, you owe it to yourself to lean on that cushion and lead the business primarily on the basis of good vibes and a long vision.

Air purifiers are a simple answer to allergies

I developed seasonal allergies relatively late in life. From my late twenties onward, I spent many miserable days in the throes of sneezing, headache, and runny eyes. I tried everything the doctors recommended for relief. About a million different types of medicine, several bouts of allergy vaccinations, and endless testing. But never once did an allergy doctor ask the basic question: What kind of air are you breathing?

Turns out that's everything when you're allergic to pollen, grass, and dust mites! The air. That's what's carrying all this particulate matter, so if your idea of proper ventilation is merely to open a window, you're inviting in your nasal assailants. No wonder my symptoms kept escalating.

For me, the answer was simply to stop breathing air full of everything I'm allergic to while working, sleeping, and generally just being inside. And the way to do that was to clean the air of all those allergens with air purifiers running HEPA-grade filters.

That's it. That was the answer!

After learning this, I outfitted everywhere we live with these machines of purifying wonder: One in the home office, one in the living area, one in the bedroom. All monitored for efficiency using Awair air sensors. Aiming to have the PM2.5 measure read a fat zero whenever possible.

In America, I've used the Alen BreatheSmart series. They're great. And in Europe, I've used the Philips ones. Also good.

It's been over a decade like this now. It's exceptionally rare that I have one of those bad allergy days anymore. It can still happen, of course — if I spend an entire day outside, breathing in allergens in vast quantities. But as with almost everything, the dose makes the poison. Breathing in some allergens, some of the time, is entirely different from breathing in all of them, all of the time.

I think about this often when I see a doctor for something. Here was this entire profession of allergy specialists, and I saw at least a handful of them while I was trying to find a medical solution. None of them even thought about dealing with the environment. The cause of the allergy. Their entire field of view was restricted to dealing with mitigation rather than prevention.

Not every problem, medical or otherwise, has a simple solution. But many problems do, and you have to be careful not to be so smart that you can't see it.

Human service is luxury

Maybe one day AI will answer every customer question flawlessly, but we're nowhere near that reality right now. I can't tell you how often I've been stuck in some god-forsaken AI loop or phone tree WHEN ALL I WANT IS A HUMAN. So I end up either just yelling "operator", "operator", "operator" (the modern-day mayday!) or smashing zero over and over. It's an unworthy interaction for any premium service.

Don't get me wrong. I'm pretty excited about AI. I've seen it do some incredible things. And of course it's just going to keep getting better. But in our excitement about the technical promise, I think we're forgetting that humans need more than correct answers. Customer service at its best also offers understanding and reassurance. It offers a human connection.

Especially as AI eats the low-end, commodity-style customer support. The sort that was always done poorly, by disinterested people, rapidly churning through a perceived dead-end job, inside companies that only ever saw support as a cost center. Yeah, nobody is going to cry a tear for losing that.

But you know that isn't all there is to customer service. Hopefully you've had a chance to experience what it feels like when a cheerful, engaged human is interested in helping you figure out what's wrong or how to do something right. Because they know exactly what they're talking about. Because they've helped thousands of others through exactly the same situation. That stuff is gold.

Partly because it feels bespoke. A customer service agent who's good at their job knows how to tailor the interaction not just to your problem, but to your temperament. Because they've seen all the shapes. They can spot an angry-but-actually-just-frustrated fit a thousand miles away. They can tell a timid-but-curious type too. And then deliver exactly what either needs in that moment. That's luxury.

That's our thesis for Basecamp, anyway. That by treating customer service as a career, we'll end up with the kind of agents that embody this luxury, and our customers will feel the difference.

AMD in everything

Back in the mid 90s, I had a friend who was really into raytracing, but needed to nurture his hobby on a budget. So instead of getting a top-of-the-line Intel Pentium machine, he bought two AMD K5 boxes, and got a faster rendering flow for less money. All I cared about in the 90s was gaming, though, and for that, Intel was king, so to me, AMD wasn't even a consideration.

And that's how it stayed for the better part of the next three decades. AMD would put out budget parts that might make economic sense in narrow niches, but Intel kept taking all the big trophies in gaming, in productivity, and on the server.

As late as the end of the 2010s, we were still buying Intel for our servers at 37signals. Even though AMD was getting more competitive, and the price-watt-performance equation was beginning to tilt in their favor.

By the early 2020s, though, AMD had caught up on the server, and we haven't bought Intel since. The AMD EPYC line of chips is simply superior to anything Intel offers in our price/performance window. Today, the bulk of our new fleet runs on dual EPYC 9454s for a total of 96 cores(!) per machine. They're awesome.

It's been the same story on the desktop and laptop for me. After switching to Linux last year, I've been all in on AMD. My beloved Framework 13 is rocking an AMD 7640U, and my desktop machine runs on an AMD 7950X. Oh, and my oldest son just got a new gaming PC with an AMD 9900X, and my middle son has an AMD 8945HS in his gaming laptop. It's all AMD in everything!

So why is this? Well, clearly the clever crew at AMD is putting out some great CPU designs lately with Lisa Su in charge. I'm particularly jazzed about the upcoming Framework desktop, which runs the latest Max+ 395 chip, and can apportion up to 110GB of memory as VRAM (great for local AI!). This beast punches a multi-core score that's on par with that of an M4 Pro, and it's no longer that far behind in single-core either. But all the glory doesn't just go to AMD; it's just as much a triumph of TSMC.

TSMC stands for Taiwan Semiconductor Manufacturing Company. They're the world leader in advanced chip making, and key to the story of how Apple was able to leapfrog the industry with the M-series chips back in 2020. Apple has long been the top customer for TSMC, so they've been able to reserve capacity on the latest manufacturing processes (called "nodes"), and as a result had a solid lead over everyone else for a while.

But that lead is evaporating fast. That new Max+ 395 is showing that AMD has nearly caught up in terms of raw grunt, and the efficiency is no longer a million miles away either. This is again largely because AMD has been able to benefit from the same TSMC-powered progress that's also propelling Apple.

But you know who it's not propelling? Intel. They're still trying to get their own chip-making processes to perform competitively, but so far they're just falling further and further behind. The latest Intel chips are more expensive and slower than the competition from Apple, AMD, and Qualcomm. And no easy fix appears to be around the corner.

TSMC really is lifting all the boats behind its innovation locks. Qualcomm, just like AMD, has nearly caught up to Apple with its latest chips. The 8 Elite unit in my new Samsung S25 is faster than the A18 Pro in the iPhone 16 Pro in multi-core tests, and very close in single-core. It's also just as efficient now.

This is obviously great for Android users, who for a long time had to suffer the indignity of truly atrocious CPU performance compared to the iPhone. It was so bad for a while that we had to program our web apps differently for Android, because they simply didn't have the power to run JavaScript fast enough! But that's all history now.

But as much as I now cheer for Qualcomm's chips, I'm even more chuffed about the fact that AMD is on a roll. I spend far more time in front of my desktop than I do any other computer, and after dumping Apple, it's a delight to see that the M-series advantage is shrinking to irrelevance fast. There's of course still the software reason for why someone would pick Apple, and they continue to make solid hardware, but the CPU playing field is now being leveled.

This is obviously a good thing if you're a fan of Linux, like me. Framework in particular has emerged as a credible alternative to the sleek, unibody, but ultimately disposable reigning MacBook laptops. By focusing on repairability, upgradeability, and superior keyboards, we finally have an option for developer laptops that doesn't just feel like a cheap copy of a MacBook. And thanks to AMD pushing the envelope, these machines are rapidly closing the remaining gaps in performance and efficiency.

And oh how satisfying it must be to sit as CEO of AMD now. The company was founded just one year after Intel, back in 1969, but for its entire existence, it's lived in the shadow of its older brother. Now, thanks to TSMC, great leadership from Lisa Su, and a crack team of chip designers, they're finally reaping the rewards. That is one hell of a journey to victory!

So three cheers for AMD! A tip of the hat to TSMC. And what a gift to developers and computer enthusiasts everywhere that Apple once more has some stiff competition in the chip space.

The New York Times gives liberals The Danish Permission to pivot on mass immigration

One of the key roles The New York Times plays in American society is as guardian of the liberal Overton window. Its editorial line sets the terms for what's permissible to discuss in polite circles on the center left. Whether it's covid mask efficacy, trans kids, or, now, mass immigration. When The New York Times allows the counterargument to liberal orthodoxy to be published, it signals to its readers that it's time to pivot.

On mass immigration, the center-left liberal orthodoxy has for the last decade in particular been that this is an unreserved good. It's cultural enrichment! It's much-needed workers! It's a humanitarian imperative! Any opposition was treated as de facto racism, and the idea that a country would enforce its own borders as evidence of early fascism. But that era is coming to a close, and The New York Times is using The Danish Permission to prepare its readers for the end.

As I've often argued, Denmark is an incredibly effective case study in such arguments, because it's commonly thought of as the holy land of progressivism. Free college, free health care, amazing public transit, obsessive about bikes, and a solid social safety net. It's basically everything people on the center left ever thought they wanted from government. In theory, at least.

In practice, all these government-funded benefits come with a host of trade-offs that many upper middle-class Americans (the primary demographic for The New York Times) would find difficult to swallow. But I've covered that in detail in The reality of the Danish fairytale, so I won't repeat that here.

Instead, let's focus on the fact that The New York Times is now begrudgingly admitting that the main reason Europe has turned to the right, in election after election recently, is due to the problems stemming from mass immigration across the continent and the channel.

For example, here's a bit about immigrant crime being higher:

Crime and welfare were also flashpoints: Crime rates were substantially higher among immigrants than among native Danes, and employment rates were much lower, government data showed.

It wasn't long ago that recognizing higher crime rates among MENAPT immigrants to Europe was seen as a racist dog whistle. And every excuse imaginable was leveled at the undeniable statistics showing that immigrants from countries like Tunisia, Lebanon, and Somalia are committing violent crime at rates 7-9 times higher than ethnic Danes (and that these statistics are essentially the same in Norway and Finland too).

Or how about this one: Recognizing that many immigrants from certain regions were loafing on the welfare state in ways that really irked the natives:

One source of frustration was the fact that unemployed immigrants sometimes received resettlement payments that made their welfare benefits larger than those of unemployed Danes.

Or the explicit acceptance that a strong social welfare state requires a homogeneous culture in order to sustain the trust needed for its support:

Academic research has documented that societies with more immigration tend to have lower levels of social trust and less generous government benefits. Many social scientists believe this relationship is one reason that the United States, which accepted large numbers of immigrants long before Europe did, has a weaker safety net. A 2006 headline in the British publication The Economist tartly summarized the conclusion from this research as, “Diversity or the welfare state: Choose one.”

Diversity or welfare! That again would have been an absolutely explosive claim to make not all that long ago.

Finally, there's the acceptance that cultural incompatibility, such as on the role of women in society, is indeed a problem:

Gender dynamics became a flash point: Danes see themselves as pioneers for equality, while many new arrivals came from traditional Muslim societies where women often did not work outside the home and girls could not always decide when and whom to marry.

It took a while, but The New York Times is now recognizing that immigrants from some regions really do commit vastly more violent crime, that they are net-negative contributors to state budgets (by drawing benefits at higher rates and being unemployed more often), and that, together with the cultural incompatibilities, this ends up undermining public trust in the shared social safety net.

The consequence of this admission is dawning not only on The New York Times, but also on other liberal entities around Europe:

Tellingly, the response in Sweden and Germany has also shifted... Today many Swedes look enviously at their neighbor. The foreign-born population in Sweden has soared, and the country is struggling to integrate recent arrivals into society. Sweden now has the highest rate of gun homicides in the European Union, with immigrants committing a disproportionate share of gun violence. After an outburst of gang violence in 2023, Ulf Kristersson, the center-right prime minister, gave a televised address in which he blamed “irresponsible immigration policy” and “political naïveté.” Sweden’s center-left party has likewise turned more restrictionist.

All these arguments are in service of the article's primary thesis: To win back power, the left, in Europe and America, must pivot on mass immigration, like the Danes did. Because only by doing so are they able to counter the threat of "the far right".

The piece does a reasonable job accounting for the history of this evolution in Danish politics, except for the fact that it leaves out the main protagonist. The entire account is written from the self-serving perspective of the Danish Social Democrats, and it shows. It tells a tale of how it was actually Social Democrat mayors who first spotted the problems, and well, it just took a while for the top of the party to correct. Bullshit.

The real reason the Danes took this turn is that "the far right" won in Denmark, and The Danish People's Party deserves the lion's share of the credit. They started in 1995, quickly set the agenda on mass immigration, and by 2015, they were the second largest party in the Danish parliament.

Does that story ring familiar? It should. Because it's basically what's been happening in Sweden, France, Germany, and the UK lately. The mainstream parties have ignored their electorates' grave concerns about mass immigration, and only when "the far right" surged as a result did the center-left and center-right parties grow interested in changing course.

Now on some level, this is just democracy at work. But it's also hilarious that this process, where voters choose parties that champion the causes they care about, has been labeled The Grave Threat to Democracy in recent years. Whether it's Trump, Le Pen, Weidel, or Kjærsgaard, they've all been met with contempt or worse for channeling legitimate voter concerns about immigration.

I think this is the point that's sinking in at The New York Times. Opposition to mass immigration and multi-culturalism in Europe isn't likely to go away. The mayhem that's swallowing Sweden is a reality too obvious to ignore. And as long as the center left keeps refusing to engage with the topic honestly, and instead hides behind some anti-democratic firewall, they're going to continue to lose terrain.

Again, this is how democracies are supposed to work! If your political class is out of step with the mood of the populace, they're supposed to lose. And this is what's broadly happening now. And I think that's why we're getting this New York Times pivot. Because losing sucks, and if you're on the center left, you'd like to see that end.

Stick with the customer

One of the biggest mistakes that new startup founders make is trying to get away from the customer-facing roles too early. Whether it's customer support or sales, it's an incredible advantage to have the founders doing that work directly, and for much longer than they find comfortable.

The absolute worst thing you can do is hire a salesperson or a customer service agent too early. You'll miss all the golden nuggets that customers throw at you for free when they're rejecting your pitch or complaining about the product. Seeing these reasons paraphrased or summarized destroys all the nutrients in their insights. You want that whole-grain feedback straight from the customer's mouth!

When we launched Basecamp in 2004, Jason was doing all the customer service himself. And he kept doing it like that for three years!! By the time we hired our first customer service agent, Jason was doing 150 emails/day. The business was doing millions of dollars in ARR. And Basecamp got infinitely better, both as a market proposition and as a product, because Jason could funnel all that feedback into decisions and positioning.

For a long time after that, we did "Everyone on Support". Frequently rotating programmers, designers, and founders through a day of answering emails directly to customers. The dividends of doing this were almost as high as having Jason run it all in the early years. We fixed an incredible number of minor niggles and annoying bugs because programmers found it easier to solve the problem than to apologize for why it was there.

It's not easy doing this! Customers often offer their valuable insights wrapped in rude language, unreasonable demands, and bad suggestions. That's why many founders quit the business of dealing with them at the first opportunity. That's why few companies ever do "Everyone On Support". That's why there's such eagerness to reduce support to an AI-only interaction.

But quitting dealing with customers early, not just in support but also in sales, is an incredible handicap for any startup. You don't have to do everything that every customer demands of you, but you should certainly listen to them. And you can't listen well if the sound is being muffled by early layers of indirection.

When to give up

Most of our cultural virtues, celebrated heroes, and catchy slogans align with the idea of "never give up". That's a good default! Most people are inclined to give up too easily, as soon as the going gets hard. But it's also worth remembering that sometimes you really should fold, admit defeat, and accept that your plan didn't work out.

But how to distinguish between a bad plan and insufficient effort? It's not easy. Plenty of plans look foolish at first glance, especially to people without skin in the game. That's the essence of a disruptive startup: The idea ought to look a bit daft at first glance or it probably doesn't carry the counter-intuitive kernel needed to really pop.

Yet it's also obviously true that not every daft idea holds the potential to be a disruptive startup. That's why even the best venture capital investors in the world are wrong far more than they're right. Not because they aren't smart, but because nobody is smart enough to predict (the disruption of) the future consistently. The best they can do is make long bets, and then hope enough of them pay off to fund the ones that don't.

So far, so logical, so conventional. A million words have been written by a million VCs about how their shrewd eyes let them see those hidden disruptive kernels before anyone else could. Good for them.

What I'm more interested in is how and when you pivot from a promising bet to folding your hand. When do you accept that no amount of additional effort is going to get that turkey to soar?

I'm asking because I don't have any great heuristics here, and I'd really like to know! Because the ability to fold your hand, and live to play your remaining chips another day, isn't just about startups. It's also about individual projects. It's about work methods. Hell, it's even about politics and societies at large.

I'll give you just one small example. In 2017, Rails 5.1 shipped with new tooling for doing end-to-end system tests, using a headless browser to validate the functionality, as a user would in their own browser. Since then, we've spent an enormous amount of time and effort trying to make this approach work. Far too much time, if you ask me now.

This year, we finalized our decision to fold, and gave up on using these types of system tests at the scale we had previously thought made sense. In fact, just last week, we deleted 5,000 lines of code from the Basecamp code base by dropping literally all the system tests that we had carried so diligently for all these years.

I really like this example, because it draws parallels to investing and entrepreneurship so well. The problem with our approach to system tests wasn't that it didn't work at all. If that had been the case, bailing on the approach would have been a no brainer long ago. The trouble was that it sorta-kinda did work! Some of the time. With great effort. But ultimately the juice wasn't worth the squeeze.

I've seen this trap snap on startups time and again. The idea finds some traction. Enough for the founders to muddle through for years and years. Stuck with an idea that sorta-kinda does work, but not well enough to be worth a decade of their life. That's a tragic trap.

The only antidote I've found to this on the development side is time boxing. Programmers are just as liable as anyone to believe a flawed design can work if given just a bit more time. And then a bit more. And then just double of what we've already spent. The time box provides a hard stop. In Shape Up, it's six weeks. Do or die. Ship or don't. That works.

But what's the right amount of time to give a startup or a methodology or a societal policy? There's obviously no universal answer, but I'd argue that whatever the answer, it's "less than you think, less than you want".

Having the grit to stick with the effort when the going gets hard is a key trait of successful people. But having the humility to give up on good bets turned bad might be just as important.

Europe must become dangerous again

Trump is doing Europe a favor by revealing the true cost of its impotency. Because, in many ways, he has the manners and the honesty of a child. A kid will just blurt out in the supermarket "why is that lady so fat, mommy?". That's not a polite thing to ask within earshot of said lady, but it might well be a fair question and a true observation! Trump is just as blunt when he essentially asks: "Why is Europe so weak?".

Because Europe is weak, spiritually and militarily, in the face of Russia. It's that inherent weakness that's breeding the delusion that Russia is at once both on its last legs, about to lose the war against Ukraine any second now, and an omnipotent superpower that could take over all of Europe if we don't start World War III to counter it. This is not a coherent position.

If you want peace, you must be strong.

The big cats in the international jungle don't stick to a rules-based order purely out of higher principles, but out of self-preservation. And they can smell weakness like a tiger smells blood. This goes for Europe too. All too happy to lecture weaker countries they do not fear on high-minded ideals of democracy and free speech, while standing aghast and weeping powerlessly when someone stronger returns the favor.

I'm not saying that this is right, in some abstract moral sense. I like the idea of a rules-based order. I like the idea of territorial sovereignty. I even like the idea that the normal exchanges between countries aren't as blunt and honest as those of a child in the supermarket. But what I like and "what is" need separating.

Europe simply can't have it both ways. Be weak militarily, utterly dependent on an American security guarantee, and also expect a seat at the big-cat table. These positions are incompatible. You either get your peace dividend -- and the freedom to squander it on net-zero nonsense -- or you get to have a say in how the world around you is organized.

Which brings us back to Trump doing Europe a favor. For all his bluster and bullying, America is still a benign force in its relation to Europe. We're being punked by someone from our own alliance. That's a cheap way of learning the lesson that weakness, impotence, and peace-dividend thinking is a short-term strategy. Russia could teach Europe a far more costly lesson. So too China.

All that to say is that Europe must heed the rude awakening from our cowboy friends across the Atlantic. They may be crude, they may be curt, but by golly, they do have a point.

Get jacked, Europe, and you'll no longer get punked. Stay feeble, Europe, and the indignities won't stop with being snubbed in Saudi Arabia.

Europe's impotent rage

Europe has become a third-rate power economically, politically, and militarily, and the price for this slowly building predicament is now due all at once.

First, America is seeking to negotiate peace in Ukraine directly with Russia, without even inviting Europe to the table. Decades of underfunding the European military have led us here. The never-ending ridicule of America, for spending the supposedly "absurd sum" of 3.4% of its GDP to maintain its might, is coming home to roost.

Second, mass immigration in Europe has become the central political theme driving the surge of right-wing parties in countries across the continent. Decades of blind adherence to a naive multi-cultural ideology has produced an abject failure to assimilate culturally-incompatible migrants. Rather than respond to this growing public discontent, mainstream parties all over Europe run the same playbook of calling anyone with legitimate concerns "racist", and attempting to disparage or even ban political parties advancing these topics.

Third, the decline of entrepreneurship in Europe has led to a dearth of new major companies, and an accelerated brain drain to America. The European economy lost parity with the American one after 2008, and now the net-zero nonsense has led Europe's old manufacturing powerhouse, Germany, to commit financial hara-kiri: shutting its nuclear power plants, over-investing in solar and wind, and rendering its prized car industry noncompetitive on the global market. The latter has left European bureaucrats in the unenviable position of having to denounce Trump's proposed tariffs while imposing their own on the Chinese.

A single failure in any of these three crucial realms would have been painful to deal with. But failure in all three at the same time is a disaster, and it's one of Europe's own making. Worse still is that Europeans at large still appear to be stuck in the early stages of grief. Somewhere between "anger" and "bargaining". Leaving us with "depression" before we arrive at "acceptance".

Except this isn't destiny. Europe is not doomed to impotent outrage or repressive anger. Europe has the people, the talent, and the capital to choose a different path. What it currently lacks is the will.

I'm a Dane. Therefore, I'm a European. I don't summarize the sad state of Europe out of spite or ill will or from a lack of standing. I don't want Europe to become American. But I want Europe to be strong, confident, and successful. Right now it's anything but.

The best time for Europe to make a change was twenty years ago. The next best time is right now. Forza Europe! Viva Europe!

Leave it to the Germans

Just a day after JD Vance's remarkable speech in Munich, 60 Minutes validates his worst accusations in a chilling segment on the totalitarian German crackdown on free speech. You couldn't have scripted this development for more irony or drama!

This isn't 60 Minutes finding a smoking gun in some secret government archive, detailing a plot to prosecute free speech under some fishy pretext. No, this is German prosecutors telling an American journalist in an open interview that insulting people online is a crime and retweeting a "lie" will get you in trouble with the law. No hidden cameras! All out in the open!

Nor is this just some rogue prosecutorial theory. 60 Minutes goes along for the ride with German police, as they conduct a raid at dawn with six armed officers to confiscate the laptop and phone of a German citizen suspected of posting a racist cartoon. Even typing out this description of what happens sounds like insane hyperbole, but you can just watch the clip for yourself.

And this morning raid was just one of fifty that day. Fifty raids in a day! For wrong speech, spicy memes, online insults of politicians, and other utterances by German citizens critical of their government or policies! Is this the kind of hallowed democracy that Germans are supposed to defend against the supposed threat of AfD?

As I noted yesterday, even Denmark has some draconian laws on the books limiting free speech, and they've been used in anger too. I've yet to see that kind of grotesque enforcement here -- six armed officers at dawn coming to confiscate a laptop! -- but the trend is nonetheless worrying all across Europe, not just in Germany.

I suppose this is why European leaders are in such shock over Vance's wagging finger. Because they know he's dead on, but they're not used to getting called out like this. On the world stage, while they just had to sit there. I can see how that's humiliating.

But the humiliation of the European people is infinitely greater as they're gaslit about their right to free speech. That Vance doesn't know what he's talking about. Oh, and what about the Gulf of America?? It's pathetic.

So too is the apparent deep support from many parts of Europe for this totalitarian insanity. I keep hearing from Europeans who with a straight face will claim that of course they have free speech, but that doesn't mean you can insult people, hurt their feelings, or post statistics that might cast certain groups in a bad light.

Madness.

"The party told you to reject the evidence of your eyes and ears. It was their final, most essential command."
-- Orwell, 1949

Europeans don't have or understand free speech

The new American vice president JD Vance just gave a remarkable talk at the Munich Security Conference on free speech and mass immigration. It did not go over well with many European politicians, some of whom immediately proved Vance's point and labeled the speech "not acceptable". All because Vance dared poke at two of the holiest taboos in European politics.

Let's start with his points on free speech, because they're the foundation for understanding how Europe got into such a mess on mass immigration. See, Europeans by and large simply do not understand "free speech" as a concept the way Americans do. There is no first amendment-style guarantee in Europe, yet the European mind desperately wants to believe it has the same kind of free speech as the US, despite endless evidence to the contrary.

It's quite like how every dictator around the world pretends to believe in democracy. Sure, they may repress the opposition and rig their elections, but they still crave the imprimatur of the concept. So too "free speech" and the Europeans.

Vance illustrated his point with several examples from the UK. A country that pursues thousands of yearly wrong-speech cases, threatens foreigners with repercussions should they dare say too much online, and has no qualms about handing down draconian sentences for online utterances. It's completely totalitarian and completely nuts.

Germany is not much better. It's illegal to insult elected officials, and if you say the wrong thing, or post the wrong meme, you may well find yourself the subject of a raid at dawn. Just crazy stuff.

I'd love to say that Denmark is different, but sadly it is not. You can be put in prison for up to two years for mocking or degrading someone on the basis of their race. It recently became illegal to burn the Quran (which sadly only serves to legitimize crazy Muslims killing or stabbing those who do). And you may face up to three years in prison for posting online in a way that can be construed as morally supporting terrorism.

But despite all of these examples and laws, I'm constantly arguing with Europeans who cling to the idea that they do have free speech like Americans. Many of them mistakenly think that "hate speech" is illegal in the US, for example. It is not.

America really takes the first amendment quite seriously. Even when it comes to hate speech. Famously, the Jewish lawyers of the (now unrecognizable) ACLU defended the right of literal, actual Nazis to march for their hateful ideology in the streets of Skokie, Illinois in 1977 -- and won.

Another common misconception is that "misinformation" is illegal over there too. It also is not. That's why the Twitter Files proved to be so scandalous. Because it showed the US government under Biden laundering an illegal censorship regime -- in grave violation of the first amendment -- through private parties, like the social media networks.

In America, your speech is free to be wrong, free to be hateful, free to insult religions and celebrities alike. All because the founding fathers correctly saw that asserting the power to determine otherwise leads to a totalitarian darkness.

We've seen vivid illustrations of both in recent years. At the height of the trans mania, questioning whether men who said they were women should be allowed in women's sports or bathrooms or prisons was frequently labeled "hate speech".

During the pandemic, questioning whether the virus might have escaped from a lab instead of a wet market got labeled "misinformation". So too did any questions about the vaccine's inability to stop spread or infection. Or whether surgical masks or lock downs were effective interventions.

Now we know that having a public debate about all of these topics was of course completely legitimate. Covid escaping from a lab is currently the most likely explanation, according to American intelligence services, and many European countries, including the UK, have stopped allowing puberty blockers for children.

Which brings us to that last bugaboo: Mass immigration. Vance identified it as one of the key threats to Europe at the moment, and I have to agree. So should anyone who's been paying attention to the statistics showing the abject failure of this thirty-year policy utopia of a multi-cultural Europe. The fast changing winds in European politics suggest that's exactly what's happening.

These are not separate issues. It's the lack of free speech, and a catastrophically narrow Overton window, which has led Europe into such a mess with mass immigration in the first place. In Denmark, the first popular political party that dared to question the wisdom of importing massive numbers of culturally-incompatible foreigners was frequently charged with racism back in the 90s. The same "that's racist!" playbook is now being run on political parties across Europe who dare challenge the mass immigration taboo.

But making plain observations that some groups of immigrants really do commit vastly more crime and contribute vastly less economically to society is not racist. It wasn't racist when the Danish People's Party did it in Denmark in the 1990s, and it isn't racist now when the mainstream center-left parties have followed suit.

I've drawn the contrast to Sweden many times, and I'll do it again here. Unlike Denmark, Sweden kept its Overton window shut on the consequences of mass immigration all the way up through the 90s, 00s, and 10s. As a prize, it now has bombs going off daily, the European record in gun homicides, and a government that admits that the immigrant violence is out of control.

The state of Sweden today is a direct consequence of suppressing any talk of the downsides to mass immigration for decades. And while that taboo has recently been broken, it may well be decades more before the problems are tackled at their root. It's tragic beyond belief.

The rest of Europe should look to Sweden as a cautionary tale, and the Danish alternative as a precautionary one. It's never too late to fix tomorrow. You can't fix today, but you can always fix tomorrow.

So Vance was right to wag his finger at all this nonsense. The lack of free speech and the problems with mass immigration. He was right to assert that America and Europe have a shared civilization to advance and protect. Whether the current politicians of Europe want to hear it or not, I'm convinced that average Europeans actually are listening.

Serving the country

In 1940, President Roosevelt tapped William S. Knudsen to run the government's production of military equipment. Knudsen had spent a pivotal decade at Ford during the mass-production revolution, and was president of General Motors when he was drafted as a civilian into service as a three-star general. Not bad for a Dane, born just a ten-minute bike ride from where I'm writing this in Copenhagen!

Knudsen's leadership raised the productive capacity of the US war machine a hundredfold in areas like plane production, where it went from producing 3,000 planes in 1939 to over 300,000 by 1945. He was quoted on his achievement: "We won because we smothered the enemy in an avalanche of production, the like of which he had never seen, nor dreamed possible".

Knudsen wasn't an elected politician. He wasn't even a military man. But Roosevelt saw that this remarkable Dane had the skills needed to reform a puny war effort into one capable of winning the Second World War.

Do you see where I'm going with this? Elon Musk is a modern day William S. Knudsen. Only even more accomplished in efficiency management, factory optimization, and first-order systems thinking.

No, America isn't in a hot war with the Axis powers, but for the sake of the West, it damn well better be prepared for one in the future. Or better still, be so formidable that no other country or alliance would even think to start one. And this requires a strong, confident, and sound state with its affairs in order.

If you look at the government budget alone, this is direly not so. The US was knocking on a two-trillion-dollar budget deficit in 2024! Adding to a towering debt that's now north of 36 trillion. A burden that's already consuming $881 billion in yearly interest payments. More than what's spent on the military or Medicare. Second only to Social Security on the list of line items.

Clearly, this is not sustainable.

This is the context of DOGE. The program, led by Musk, that's been deputized by Trump to turn the ship around. History doesn't repeat, but it rhymes, and Musk is dropping beats that Knudsen would have surely been tapping his foot to. And just like Knudsen in his time, it's hard to think of any other American entrepreneur more qualified to tackle exactly this two-trillion-dollar problem.

It is through The Musk Algorithm that SpaceX lowered the cost of sending a kilo of goods into low Earth orbit from the US by well over an order of magnitude. And now America's share of worldwide space transit has risen from less than 30% in 2010 to about 85%. Thanks to reusable rockets and chopstick-catching landing towers. Thanks to Musk.

Or to take a more earthly example with Twitter. Before Musk took over, Twitter had revenues of $5 billion and earned $682 million. After the takeover, X has managed to earn $1.25 billion on $2.7 billion in revenue. Mostly thanks to the fact that Musk cut 80% of the staff out of the operation, and savaged the cloud costs of running the service.

This is not what people expected at the time of the takeover! Not only did many commentators believe that Twitter was going to collapse from the drastic cuts in staff, they also thought that the financing for the deal would implode. Chiefly as a result of advertisers withdrawing from the platform under intense media pressure. But that just didn't happen.

Today, the debt used to take over Twitter and turn it into X is trading at 97 cents on the dollar. The business is twice as profitable as it was before, and arguably as influential as ever. All with just a fifth of the staff required to run it. Whatever you think of Musk and his personal tweets, it's impossible to deny what an insane achievement of efficiency this has been!

These are just two examples of Musk's incredible ability to defy the odds and deliver the most unbelievable efficiency gains in modern business history. And we haven't even talked about taking Tesla from producing 35,000 cars in 2014 to making 1.7 million in 2024. Or turning xAI into a major force in AI by assembling a 100,000-H100 cluster at "superhuman" pace.

Who wouldn't want such a capacity involved in finding the waste, sloth, and squander in the US budget? Well, his political enemies, of course!

And I get it. Musk's magic is balanced with mania and even a dash of madness. This is usually the case with truly extraordinary humans. The taller they stand, the longer the shadow. Expecting Musk to do what he does and then also be a "normal, chill dude" is delusional.

But even so, I think it's completely fair to be put off by his tendency to fire tweets from the hip, opine on world affairs during all hours of the day, and offer his support to fringe characters in politics, business, and technology. I'd be surprised if even the most ardent Musk super fans don't wince a little every now and then at some of the antics.

And yet, I don't have any trouble weighing those antics against the contributions he's made to mankind, and finding an easy and overwhelming balance in favor of his positive achievements.

Musk is exactly the kind of formidable player you want on your team when you're down two trillion to nothing, needing a Hail Mary pass for the destiny of America, and eager to see the West win the future.

He's a modern-day Knudsen on steroids (or Ketamine?). Let him cook.

time-knudsen.jpg

Servers can last a long time

We bought sixty-one servers for the launch of Basecamp 3 back in 2015. Dell R430s and R630s, packing thousands of cores and terabytes of RAM. Enough to fill all the app, job, cache, and database duties we needed. The entire outlay for this fleet was about half a million dollars, and it's only now, almost a decade later, that we're finally retiring the bulk of them for a full hardware refresh. What a bargain!

That's over 3,500 days of service from this fleet, at a fully amortized cost of just $142/day. For everything needed to run Basecamp. A software service that has grossed hundreds of millions of dollars in that decade.

We've of course had other expenses beyond hardware from operating Basecamp over the past decade. The ops team, the bandwidth, the power, and the cabinet rental across both our data centers. But nonetheless, owning our own iron has been a fantastically profitable proposition. Millions of dollars saved over renting in the cloud.

And we aren't even done deriving value from this venerable fleet! The database servers, Dell R630s with Xeon E5-2699 CPUs and 768GB of RAM, are getting handed down to some of our heritage apps. They will keep on trucking until they give up the ghost.

When we did the public accounting for our cloud exit, it was based on five years of useful life from the hardware. But as this example shows, that's pretty conservative. Most servers can easily power your applications much longer than that.

Owning your own servers has easily been one of our most effective cost advantages. Together with running a lean team. And managing our costs remains key to reaping the profitable fruit from the business. The dollar you keep at the end of the year is just as real whether you earn it or save it.

So you just might want to run those cloud-exit numbers once more with a longer server lifetime value. It might just tip the equation, and motivate you to become a server owner rather than a renter.
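If you want to run that comparison yourself, the arithmetic is trivial. Here's a small Ruby sketch using the ~$500K outlay and 3,500 days from above; the daily cloud bill is a purely hypothetical placeholder you'd swap for your own quotes, not a figure from this post:

```ruby
# Amortized daily cost of the fleet, using the figures from the post:
# ~$500K outlay, roughly 3,500 days of service.
OUTLAY = 500_000.0

def per_day(outlay, days)
  outlay / days
end

puts "5-year amortization:   $#{per_day(OUTLAY, 5 * 365).floor}/day"  # => $273/day
puts "3,500 days of service: $#{per_day(OUTLAY, 3_500).floor}/day"    # => $142/day

# Hypothetical daily cloud bill for a comparable fleet -- an assumption
# for illustration only. Swap in your own rental quote here.
cloud_per_day = 1_000.0
savings = (cloud_per_day - per_day(OUTLAY, 3_500)) * 3_500
puts format("Savings over a decade at that rate: ~$%.1fM", savings / 1_000_000.0)
```

At a five-year write-off the daily cost nearly doubles, which is why stretching the hardware lifetime can tip a rent-vs-own comparison all on its own.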

It burns

The first time we had to evacuate Malibu this season was during the Franklin fire in early December. We went to bed with our bags packed, thinking they'd probably get it under control. But by 2am, the roaring blades of fire choppers shaking the house got us up. As we sped down the canyon towards Pacific Coast Highway (PCH), the fire had reached the ridge across from ours, and flames were blazing large out the car windows. It felt like we had left the evacuation a little too late, but they eventually did get Franklin under control before it reached us.

Humans have a strange relationship with risk and disasters. We're so prone to wishful thinking and bad pattern matching. I remember people being shocked when the flames jumped the PCH during the Woolsey fire in 2018. IT HAD NEVER DONE THAT! So several friends of ours had to suddenly escape a nightmare scenario, driving through burning streets, in heavy smoke, with literally their lives on the line. Because the past had failed to predict the future.

I fell into that same trap for a moment with the dramatic proclamations of wind and fire weather in the days leading up to January 7. Warning after warning of "extremely dangerous, life-threatening wind" coming from the City of Malibu, and that overly-bureaucratic-but-still-ominous "Particularly Dangerous Situation" designation. Because, really, how much worse could it be? Turns out, a lot.

It was a little before noon on the 7th when we first saw the big plumes of smoke rise from the Palisades fire. And immediately the pattern matching went astray. Oh, it's probably just like Franklin. It's not big yet, they'll get it out. They usually do. Well, they didn't.

By the late afternoon, we had once more packed our bags, and by then it was also clear that things actually were different this time. Different worse. Different enough that even Santa Monica didn't feel like it was assured to be safe. So we headed far North, to be sure that we wouldn't have to evacuate again. Turned out to be a good move.

Because by now, into the evening, most people in the connected world had seen the catastrophic images emerging from the Palisades and Eaton fires. Well over 10,000 houses would ultimately burn. Entire neighborhoods leveled. Pictures that could be mistaken for World War II. Utter and complete destruction.

By the night of the 7th, the fire reached our canyon, and it tore through the chaparral and brush that'd been building since the last big fire that area saw in 1993. Out of some 150 houses in our immediate vicinity, nearly a hundred burned to the ground. Including the first house we moved to in Malibu back in 2009. But thankfully not ours.

That's of course a huge relief. This was and is our Malibu Dream House. The site of that gorgeous home office I'm so fond to share views from. Our home.

But a house left standing in a disaster zone is still a disaster. The flames reached all the way up to the base of our construction, incinerated much of our landscaping, and burned the surrounding power poles out of commission.

We have burnt-out buildings every which way the eye looks. The national guard is still stationed at road blocks on the access roads. Utility workers are tearing down the entire power grid to rebuild it from scratch. It's going to be a long time before this is comfortably habitable again.

So we left.

That in itself feels like defeat. There's an urge to stay put, and to help, in whatever helpless ways you can. But with three school-age children who've already missed over a month's worth of learning from power outages, fire threats, actual fires, and now mudslide dangers, it was time to go.

None of this came as a surprise, mind you. After Woolsey in 2018, Malibu life always felt like living on borrowed time to us. We knew it, even accepted it. Beautiful enough to be worth the risk, we said.

But even if it wasn't a surprise, it's still a shock. The sheer devastation, especially in the Palisades, went far beyond our normal range of comprehension. Bounded, as it always is, by past experiences.

Thus, we find ourselves back in Copenhagen. A safe haven for calamities of all sorts. We lived here for three years during the pandemic, so it just made sense to use it for refuge once more. The kids' old international school accepted them right back in, and past friendships were quickly rebooted.

I don't know how long it's going to be this time. And that's an odd feeling to have, just as America has been turning a corner, and just as the optimism is back in so many areas. Of the twenty years I've spent in America, this feels like the most exciting time to be part of the exceptionalism that the US of A offers.

And of course we still are. I'll still be in the US all the time on both business, racing, and family trips. But it won't be exclusively so for a while, and it won't be from our Malibu Dream House. And that burns.

palisades-plumes.jpg

Waiting on red

Americans often laugh when they see how often Danes will patiently, obediently wait on the little red man to turn green before crossing an empty intersection, in the rain, even at night. Nobody is coming! Why don't you just cross?! It seems silly, but the underlying philosophy is anything but. It's load bearing for a civil society like Denmark.

Because doing the right thing every time can be put on autopilot, and when most people follow even the basic norms consistently, the second-order effects are profound. Like the fact that Copenhagen is one of the absolute safest major cities in the world.

But the Danes also know that norms fray if they're not enforced, so they vigorously pursue even small infractions. The Danish police regularly celebrate ticketing bicyclists for even minor mistakes (like riding instead of walking their bike on the sidewalk). And the metro is constantly being patrolled for fare evaders and antisocial behavior.

It's broken windows theory on steroids. And it works.

When we were living in the city for three years following the pandemic, the most startling difference from major US cities was the prevalence of unattended children everywhere, at all hours. Our oldest was just nine years old when he started taking the metro alone, even at night.

How many American parents would feel comfortable letting their nine-year-old take the L in Chicago or the subway in Manhattan? I don't know any. And as a result, you just don't see any unattended children do this. But in Copenhagen it's completely commonplace.

This is the prize of having little tolerance for antisocial behavior in the public space. When you take away the freedom from crackheads and bums to smoke up on the train or sleep in the park, you grant the freedom to nine-year-olds to roam the city and for families to enjoy the park at dusk.

This is the fundamental error of suicidal empathy. That tolerance of the deranged and dangerous few can be kept a separate discussion from the freedom and safety of the many. These are oppositional forces. The more antisocial behavior you excuse, the further families will retract into their protective shell. And suddenly there are no longer children around in the public city space or any appetite for public transit.

Maybe you have to become a parent to really understand this. I admit that I didn't give this nearly the same attention before becoming a father of three. But the benefit isn't exclusively about the freedom and safety enjoyed by your own family, it's also about the ambient atmosphere of living in a city where children are everywhere. It's a special form of life-affirming luxury, and it's probably the thing I've missed most about Copenhagen since we went back to the US.

What's interesting is how much active effort it takes to maintain this state of affairs. The veneer of civil society is surprisingly thin. Norms fray quickly if left unguarded. And it's much harder to reestablish their purchase on society than to protect them from disappearing in the first place.

But I get that it's hard to connect the dots from afar. Many liberals in America hold Denmark up as some mythical place where all their policy dreams have come true, without ever wrestling much with what it takes to maintain the social trust that allows those policies to enjoy public support.

The progressive Nirvana of Denmark is built on a highly conservative set of norms and traditions. It's yin and yang. So if you're committed to those progressive outcomes in America, whether it's the paternity leave, the independent children, or the amazing public transit system, you ought to consider what conservative values it makes sense to accept as enablers rather than obstacles.

MEGA

Trump is back at the helm of the United States, and the majority of Americans are optimistic about the prospect. Especially the young. In a poll by CBS News, it's the 18-29 demographic that's most excited, with a whopping two-thirds answering in the affirmative to being optimistic about the next four years under Trump. And I'm right there with them. The current American optimism is infectious!

signal-2025-01-19-19-30-15-791.png


While Trump has undoubtedly been the catalyst, this is a bigger shift than any one person. After spending so long lost in the wilderness of excessive self-criticism and self-loathing, there's finally a broad coalition of the willing working to get the mojo back.

This is what's so exhilarating about America. The big, dramatic swings. The high stakes. The long shots. And I like this country much better when it's confident in that inherent national character.

Of course all this is political. And of course Trump is triggering for many. Just like his opponent would have been if she had won. But this moment is not just political, it's beyond that. It's economic, it's entrepreneurial, it's technological. Optimism is infectious.

As someone with a foot on both the American and European continents, I can't help being jealous with my euro leg. Europe is stuck with monumental levels of pessimism at the moment, and it's really sad to see.

But my hope is that Europe, like usual, is merely a few years behind the American revival in optimism. That it's coming to the old world eventually.

This is far more an article of faith than of analysis, mind you. I can also well imagine Europe sticking with Eurocrat thinking, spinning its wheels with grand but empty proclamations, issuing scornful but impotent admonishments of America, and doubling down on the regulatory black hole.

Neither path is given. Europe was competitive with America on many economic terms as recently as 15 years ago. But Europe also lacks the ability to change course quite like the Americans. So the crystal ball is blurry.

Personally, I choose faith. Optimism must win. Pessimism is literally for losers.

Failed integration and the fall of multiculturalism

For decades, the debate in Denmark around the problems with mass immigration was stuck in a self-loathing blame game of "failed integration". That somehow, if the Danes had just tried harder, been less prejudiced, offered more opportunities, the many foreigners with radically different cultures would have been able to integrate successfully. If not in the first generation, then the second. For much of this time, I thought that was a reasonable thesis. But reality has proved it wrong.

If literally every country in Europe has struggled in the same ways, and for decades on end, to produce the fabled "successful integration", it's not a compelling explanation that the Danes, Swedes, Norwegians, Germans, French, Brits, or Belgians simply didn't try hard enough. It's that the mission, on the grand and statistical scale, was impossible in many cases.

As Thomas Sowell tells us, this is because there are no solutions to intractable, hard problems like cultural integration between wildly different ways of living. Only trade-offs. Many of which are unfavorable to all parties.

But by the same token, even though the overall project of integrating many of the most divergent cultures from mass immigration has failed, there are many individual cases of great success. Much of the Danish press, for example, has for years propped up the hope of broad integration success by sharing hopeful, heartwarming stories of highly successful integration. And you love to see it.

Heartwarming anecdotes don't settle trade-offs, though. They don't prove a solution or offer a conclusion either.

I think the conclusion at this point is clear. First, cultural integration, let alone assimilation, is incredibly difficult. The more divergent the cultures, the more difficult the integration. And for some combinations, it's outright impossible.

Second, the compromise of multiculturalism has been an abject failure in Europe. Allowing parallel cultures to underpin parallel societies is poison for the national unity and trust.

Which brings us to another bad social thesis from the last thirty-some years: That national unity, character, and belonging not only aren't important, but are actively harmful. That national pride in history, traditions, and culture is primarily an engine of bigotry.

What a tragic thesis with catastrophic consequences.

But at this point, there's a lot of political capital invested into all these bad ideas. In sticking with the tired blame game. Thinking that what hasn't worked for fifty years will surely start working if we give it five more. 

Now, I actually have a nostalgic appreciation for the beautiful ideals behind such hope for humanity, but I also think that at this point it is as delusional as it is dangerous.

And I think it's directly responsible for the rise of so-called populist movements all over Europe. They're directly downstream from the original theses of success in cultural integration going through just-try-harder efforts as well as the multicultural compromise. A pair of ideas that had buy-in across much of the European board until reality simply became too intolerable for too many who had to live with the consequences.

Such widespread realization doesn't automatically correct the course of a societal ship that's been sailing in the wrong direction for decades, of course. The playbook that took DEI and wokeness to blitzkrieg success in the States, by labeling any dissent to those ideologies racist or bigoted, has also worked to hold the line on the question of mass immigration in Europe until very recently.

But I think the line is breaking in Europe, just as it recently did in America. The old accusations have finally lost their power from years of excessive use, and suppressing the reality that many people can see with their own eyes is getting harder.

I completely understand why that makes people anxious, though. History is full of examples of combative nationalism leading us to dark edges. And, especially in Germany, I can understand the historical hesitation when there's even a hint of something that sounds like what they heard in the 30s.

But you can hold both considerations in your head at the same time without losing your wits. Mass immigration to Europe has been a failure, and the old thesis of naive hope has to be replaced by a new strategy that deals with reality. AND not all the fixes proposed by those who diagnosed the situation early are sound or palatable.

World history is full of people who've had the correct diagnosis but a terrible prescription. And I think it's fair to say that it's not even obvious what the right prescription is at this point!

Vibrant, strong societies surely benefit from some degree of immigration. Especially from culturally-compatible regions based on national and economic benefit. But whatever the specific trade-offs taken from here, it seems clear that for much of Europe, they're going to look radically different than they've done in the past three decades or so.

Best get started then.

The social media censorship era is over (for now)

Mark Zuckerberg just announced a stunning pivot for Meta's approach to social media censorship. Here's what he's going to do:

  1. Replace third-party fact checkers with community notes à la X.
  2. Allow free discussion on immigration, gender, and other topics that were heavily censored in the past, as well as let these discussions freely propagate (and go viral).
  3. Focus moderation on illegal activities, like child exploitation, frauds, and scams, instead of political transgressions.
  4. Relocate the moderation team from California to Texas to address political bias from within the team.

This new approach is going to govern all the Meta realms, from Facebook to Threads to Instagram. Meaning it'll affect the interactions of some three billion people around the globe. In other words, this is huge.

As to be expected, many are highly skeptical of Zuckerberg's motives. And for good reason. Despite making a soaring speech about the values of free speech back in 2019, Meta, together with Twitter, became one of the primary weapons of a political censorship regime that went into overdrive during the pandemic.

Both Meta and Twitter received direct instructions from the US government, among other institutions, on what was to be considered allowable speech and what was to be banned. The specifics shifted over those awful years, but everything from questioning the origins of the Covid virus to disputing vaccine efficacy to objections to mass migration to the Hunter Biden laptop leak all qualified for heavy-handed intervention.

The primary rhetorical fig leaves for this censorship regime were "hate speech" and "misinformation". Terms that almost immediately lost all objective content, and turned into mere descriptors of "speech we don't like". Either because it was politically inconvenient or because it offended certain holy tenets of the woke religion that reigned at the time.

But that era is now over. Between Meta and X, the gravity of the global discourse has swung dramatically in favor of free expression. I suspect that YouTube and Reddit will eventually follow suit as well. But even if they don't, it won't really matter. The forbidden opinions and inconvenient information will still be able to reach a wide audience.

That's a momentous and positive shift for the world. And it's a particularly proud moment for America, since this is all downstream from the country's first amendment protection of free speech.

But it's also adding to the growing chasm between America and Europe. And the United Kingdom in particular. While America is recovering from the authoritarian grip on free speech in terms of both social media policies and broader social consequences (remember cancel culture?), the Brits are doubling down.

Any post on social media made in Britain is liable to have those cute little bobbies show up at your door with a not-so-cute warrant for your arrest. The delusional UK police commissioner is even threatening to "come after" people from around the world, if they write bad tweets.

And Europe isn't far behind. Thierry Breton, the former European Commissioner, spent much of last year threatening American tech companies, and Elon Musk in particular, with draconian sanctions, if they failed to censor at the EU's behest. He has thankfully since been dismissed, but the sentiment of censorship is alive and well in the EU.

This is why the world needs America. From the UK to the EU to Brazil, China, Russia, and Iran, political censorship is very popular. And for a couple of dark years in the US, it looked like the whole world was about to be united in an authoritarian crackdown on speech of all sorts.

But Elon countered the spell. His acquisition of Twitter and its transformation into X was the pivotal moment for both American and global free speech. And if you allow yourself to zoom out from the day-to-day antics of the meme lord at large, you should be able to see clearly how the timeline split.

I know that's hard to do for a lot of people who've traded in their Trump Derangement Syndrome diagnosis for a Musk Derangement Syndrome variety (or simply added it to their inventory of mental challenges). And I get it. It's hard to divorce principles from people! We're all liable to mix and confuse the two.

And speaking of Trump, which, to be honest, I try not to do too often, because I know how triggering he is, credit is still due. There's no way this incredible vibe shift would have happened as quickly or as forcefully without his comeback win.

Now I doubt that any of his political opponents are going to give him any credit for this, even if they do perhaps quietly celebrate the pivot on free speech. And that's OK. I don't expect miracles, and we don't need them either. You don't need to love every champion of your principles to quietly appreciate their contributions.

Which very much reminds me of the historic lawsuit that the Jewish lawyers at the ACLU (in its former glory) fought to allow literal Nazis to march in the streets of Skokie, Illinois. That case goes to the crux of free speech: in order for you to voice your dissent on Trump or Musk or whatever, you need the protection of the first amendment to cover those who want to dissent in the opposite direction too.

That's a principle that's above the shifting winds and vibes of whoever is in power. Its entire purpose is to protect speech that's unpopular with the rulers of the moment. And as we've seen, electoral fortunes can change! It's in your own self-interest to affirm a set of rules for participation in the political debate that lives beyond what's expedient for partisan success in the short term.

I for one am stoked about Meta's pivot on censorship. I've historically not exactly been Mark Zuckerberg's biggest fan, and I do think it's fair to question the authenticity of him and this move, but I'm not going to let any of that get in the way of applauding this monumental decision. The world needs America and its exceptional principles more than ever. I will cheer for Zuckerberg without reservation when he works in their service.

Now how do we get the UK and the EU to pivot as well?

Delusional dreams of excess freedom

Jim Carrey once said that he hoped everyone could "...get rich and famous and do everything they dreamed of so they can see that it is not the answer". And while I sorta agree, I think the opposite position also has its appeal: That believing in a material fix to the problem of existence dangles a carrot of hope that's depressing to go without.

What made me think of Carrey's quote was this tale of the startup founder behind Loom, who made out with a $60m windfall when his business was sold, and is still working his way through the existential crisis that created. It's harder than you think to suddenly have all the freedom you ever desired land in your lap. You may just realize that you don't actually know what to do with it all!

And this predicament isn't reserved for successful entrepreneurs either. You see miniature revelations of this in many stories of retirement. Workers who after a long life toiling away suddenly arrive at the promised land of unlimited time, the basics taken care of, and full freedom from all responsibilities and obligations. Some literally wither away from all that excess freedom.

One of the Danish newspapers I read recently published a series on exactly this phenomenon. Pensioners who realize that life without work can be a surprisingly difficult place to find meaning in. That being needed, being useful is far more attractive than leaning back in leisure. And, as a result, more and more senior Danes are returning to the workforce, at least part time, to reclaim some of that meaning.

I think you can even draw a connection to the stereotype of rich kids who grow up never being asked to contribute anything, busy bossing the help around, and as a result end up floundering in a vapid realm of materialism. Condemned rather than blessed.

Yes, this all rhymes a bit with that iconic scene from The Matrix where Cypher is negotiating a return to blissful ignorance with Agent Smith: I don't want to remember nothing! Because once you know that the material carrot is just like the spoon that bends because it doesn't actually exist, you're condemned to a life of knowing that what you imagine as nirvana probably isn't.

What beautiful irony: That the prize for catching the carrot is the realization that chasing it was more fun.

The premise trap

The hardest part for me about collaborating with junior programmers, whether it's in open source or at work, is avoiding the premise trap. That's where the fundamental assumptions baked into the first draft of the code aren't questioned until you've already spent far too long improving the implementation. It's the same with AI.

Because AI at the moment is like a superb junior programmer. One with an encyclopedic knowledge of syntax and APIs, but also one saddled with the same propensity to produce overly complicated, subtly defective solutions.

You could read this as a bullish signal for the future of AI programming. That the current trajectory is tracking with the human programmer's progression tree, and that eventually, like the best juniors, it'll graduate to senior levels of competence in the fine details of code aesthetics, novel problem reasoning, and architectural coherence. I hope that's the case.

But that doesn't change the fact that, as of right now, I've yet to see any of the AI models I've been using for the past year produce great code within domains that I'm very familiar with. Occasionally there'll be a glimmer, just like with promising junior programmers, but taken as a whole, the solutions almost always need material amounts of rework.

Which is when that premise trap snaps shut!

I've seen this repeatedly with both the Ruby and JavaScript code that comes out of the AI, so I doubt it's that particular to one language over another. But the propensity to pull in needless dependencies, the overly-verbose presentation, and the architectural dead ends are there all the time.

This is what I hear from people who are trying to use AI to write entire systems for them without actually being capable programmers themselves. That it's an incredible rush to see a prototype come to life in mere minutes, but that actually moving it forward to something that works reliably often turns into a one-step-forward-two-steps-back dance. (Not unlike the stories of people getting catfished by a barely qualified junior programmer on Upwork!)

While that's frustrating, it makes perfect sense when you consider the training data that has been teaching these models: the endless stream of basic online tutorials, simplified Stack Overflow answers, and the unfortunate reality that a fair chunk of internet programming content is made by the blind leading the blind.

Senior human programmers all got started on the same information diet, but eventually graduated to higher levels of understanding and mastery by working on proprietary code bases. Where all the trade-offs that are absent in tutorial-style code reveal themselves and demand to be weighed with finesse.

I think the next big leap for these models under the current paradigm probably isn't likely to happen until they're exposed to a vast corpus of proprietary, corporate code. And how that's going to happen isn't entirely clear at the moment.

So in the meantime, as a senior programmer, you'd do well to treat AI as you would a junior programmer. It's rarely going to save you time to ask it to produce an entire system, or even subsystem, if you care about the final quality of the architecture or implementation. Because verifying the assumptions baked into its approach will take as much time as doing the work yourself.

I remain bullish on AI writing code for us all, but also realistic about its current abilities. As well as alert to the danger of luring more senior programmers, including myself, into signing off on slop, while it saps our stamina for continued learning, as we lean too much on AI writing for us rather than teaching us how.

May this piece age badly within a few short years!

Jaguar is lost but Volvo knows the way

Jaguar's new rebrand is getting murdered online, and for good reason. The clichés are as thick as the diversity pandering is dated. CREATE EXUBERANT. LIVE VIVID. DELETE ORDINARY. You'd think these were slogans from a Will Ferrell bit about insufferable marketing types, but nope, that's the 2024 campaign for a car maker that won't be selling any cars until 2026. Utterly tone-deaf, out of tune with the vibe shift, and quite likely the final gasp of a storied but dying British brand. SAD!

Contrast this with the advertisement for Volvo's latest EX90. It's a 3:45-minute emotional cinematic ride that illustrates to perfection what it means to have a strong brand. To stand for something, and actually mean it.

It's not even that original! They did a variation of the same theme six years ago for the same car, but that detracts nothing from its brilliance. In fact, the opposite. Brand is like culture: It's all about repetition and authenticity. Being who you say you are, over and over.

Volvo is safety, safety is paramount to parents, so Volvo is for parents. It's that simple, and it's that powerful. But only because it's actually true! You couldn't run this branding campaign for, say, Toyota, and see the same success. Toyota has their roots in reliability. That's their story.

But Volvo literally does care. Their history includes giving away the patent to the seatbelt. In Sweden, they have a crash response team that goes to the scene of accidents involving Volvo cars to learn how they can become safer (and they've been doing this since the 70s!!). And in the UK, the XC90 had no official deaths recorded since the car was introduced in 2004 (at least as of 2018).

Volvo also approaches safety quite differently than most auto makers. They're not just studying and optimizing for the crash-ratings tests, which is what seems to drive most other manufacturers. Their cars do very well when the tests expand to include new scenarios, because their safety engineering is designed to be best-in-reality, not just best-in-test.

All that is to say that the branding strength of Volvo rests on congruence, consistency, and commitment to doing the same thing, a little better every year, for basically an eternity. It's incredibly inspiring.

Frankly, it makes me want to buy a Volvo! Even though by all sorts of natural inclinations (speed/design/heritage), I should be interested in a new Jaguar. But I wouldn't be caught dead in a Jaguar now. That's the power of advertising: To lift and to diminish.

Cold reading an ADHD affliction

I'm sure there are truly pathological cases of ADHD out there, and maybe taking amphetamines really is a magic pill for some folks. But there clearly is also an entire cottage industry cropping up around convincing perfectly normal people that they suffer from ADHD, and that this explains many unwanted aspects of the human condition.

Take this thread I stumbled across on X today by an "ADHD coach": The ADHD Basics. It lists five primary symptoms:

  • Forgetfulness.
  • High standards / perfectionism.
  • Attraction to novelty.
  • Lack of consistency.
  • Difficulty establishing/breaking habits.

No wonder we've seen an explosion of ADHD diagnoses. This list applies to most humans at least part of the time! I would even say that all five apply to me much of the time. So does this mean I suffer from ADHD and should start taking Adderall? Come on.

This is usually when the hand waving starts: "Sure, you may recognize all those symptoms, but for true ADHD sufferers, they're just, like, worse!". Okay, but what kind of diagnostic standard is that?!

The official presentation of ADHD symptoms as listed on Wikipedia isn't much better than the five from the ADHD coach. It includes markers such as:

  • "Frequently overlooks details or makes careless mistakes"
  • "Often cannot quietly engage in leisure activities or play"
  • "Often talks excessively"
  • "Often has difficulty maintaining focus on one task or play activity"
  • "Is frequently easily distracted by extraneous stimuli"

Again, I can recognize myself in several of those from time to time. And if you include the entire list of markers from the DSM-5, I'm sure I can rack up the five+ necessary to earn an official designation of ADHD. That's just ridiculous.

It's even worse when it comes to kids, but Abigail Shrier already covered that topic expertly in Bad Therapy, so I won't repeat that here. If only to marvel at the collective insanity where being loud or animated during play is a pathological marker for children! Now that's crazy. 

But I know this is a touchy subject for plenty of parents of kids who struggle in ways that might fall under some of these rubrics. So let's leave the kids out of this for a minute and focus on the adults instead.

A total of 45 million Adderall prescriptions were written in the US in 2023. That's up from 35 million in 2019. A great many of these were surely written for people who got convinced that being "forgetful" or "attracted to novelty" isn't just part of being human, but an affliction requiring amphetamines to mitigate.

What this reminds me of is the concept of cold readings. Where a psychic slyly prods for revealing details from their subject while vaguely throwing out potential hooks left, right, and center. The subject is induced to ignore the vagueness that doesn't apply to their situation, but focus on the inevitable hits that convince someone that what they desperately want to hear is true.

I think a lot of people desperately want to hear that there's a medical reason for why they sometimes can't focus, don't feel motivated, forget things, or find breaking bad habits hard (and not something as boring as you need better sleep, regular exercise, and an improved diet). So when ADHD coaches show up to make them feel better with a medical label, it's compelling to partake in the cold reading, and get the answer you were hoping for.

But that's nonsense. You don't need a diagnosis to be a flawed human. It goes for all of us. So if you want to supercharge your morning's productivity routine by popping a pill or two of amphetamines, own it! Don't hide behind some label (or think you're immune to the long-term effects of taking speed either).

Joining the Shopify board of directors

I've known Tobi for over twenty years now. Right from the earliest days of Ruby on Rails, when he was building Snowdevil, which eventually became Shopify, to sell snowboards online. Here's his first commit to Rails from 2004, which improved the ergonomics of controller testing. Just one out of the 131 commits he made to the framework from 2004-2010 -- a record still good enough to be in the top 100 all-time contributors to Rails!

But Tobi's contributions to Ruby on Rails extend far beyond his individual commits to the framework, creating Active Merchant and the Liquid templating system, or serving on the Rails Core Team back in the early days. With Shopify, Tobi more or less single-handedly killed the zombie argument that Rails couldn't scale by building the world's most popular hosted e-commerce platform and routing a sizable portion of all online sales through it.

In the process, Tobi built an incredible technical organization to support this effort. Shopify employs a third of the Rails Core Team, developed the YJIT compiler for Ruby, and contributed in a billion other ways. They are without a doubt the most generous benefactor in the Ruby on Rails world.

So when Tobi asked me whether I'd be interested in joining Shopify's board, I needed no pause to consider the invitation. OF COURSE I WOULD!

But to be honest, it wasn't just a reflexive answer to service the gratitude I've felt toward Shopify for many years. It was also to satisfy a selfish curiosity to wrestle with problems at a scale that none of my own work has ever touched. 

Both in terms of the frontier programming problems inherent in dealing with a majestic monolith clocking in at five million lines of code, and the challenge of guiding thousands of programmers to productively extend it, Shopify deals with a scale several orders of magnitude beyond what I do day-to-day at 37signals. That's interesting!

So too is the sheer magnitude of the impact Shopify is having on the world of commerce. While much of the web is decaying into enshittification and entropy, Shopify stores stand out by being faster to browse, quicker to checkout, and easier to trust. That's enabling a vast array of individual entrepreneurs and businesses to have a competitive shopping experience against the likes of Amazon, without needing huge teams to do it.

It's always a delight when I find a cool store, and I learn that it's running on Shopify. As I spoke with Tobi about on the announcement show, this was really hammered home after I got into mechanical keyboards. Seemingly every single vendor of thocky and clicky keyboards uses Shopify! And when I see that, not only am I sure that buying won't be a hassle, but I also know I'm not going to get scammed. That's the Shopify magic: Leveling the commercial playing field between some obscure keyboard maker and the consolidated titans of e-commerce.

And now I get to help further that mission from the inside! What a treat. Thanks Tobi!

Obsessive problem solving followed by aimless wandering

I haven't felt any urge to tinker with my Linux setup in months. This after spending much of the spring and into summer furiously and obsessively trying every PC out there to find the perfect replacement for the Mac, diving deep with Ubuntu, and codifying my findings in the Omakub project. But now it's done, and I'm left enjoying the Apple-free spoils of a new, better place without any recurring work.

It was the same experience getting out of the cloud. For months, I spent all my time building Kamal, examining server components, and plotting our path. But then we did it, and then it was done.

Ditto with Rails 8. Huge push to get the Solid Trifecta to line up with a release that included Propshaft and the authentication generator, and the rest of all the amazing steps forward I covered in the Rails World keynote. Now that's done too, and all new Rails apps enjoy the compressed complexity.

At the company level, most of our work is a marathon. That's how you stay in business for twenty years and beyond. By sticking with it. But at the executive level, almost all big leaps forward are sprints inspired by a hunch. They have to be sprints, because the level of intensity required to get that hunch over the hill is just too high to sustain for long (unless, I guess, you're Elon!).

That to me is the best argument for making sure my plate isn't full of half-eaten commitments. That my calendar isn't clogged with an endless ream of recurring meetings. Such that my mind remains an open, blank slate when one of those obsessive opportunities flutter by.

See, I've come to accept that my best work is a series of sprints punctuated by periods of wandering. It was a knee-jerk protest against the stultifying and abusive App Store bureaucracy that eventually led me to Linux. And it was discovering Linux that led to Omakub, and to making the open source operating system the default for new technical staff at 37signals.

It was a deep dive into Docker, originally without any clear mission beyond curiosity, that led to Kamal, and our path out of the cloud. Oh, and getting enamored by the speed of Gen 4 SSDs was what planted the seed for the Solid Trifecta.

I couldn't have planned any of this even if I wanted to. But I also don't want to. There are few things more satisfying to me than following a hunch and seeing where it leads, without a commitment to a specific, final destination.

It's a little like writing. Half the joy of composing these paragraphs comes from discovering the arguments as the piece develops. Every essay starts with a hunch, but the final shape is rarely clear until the mind has been left to wrestle with the words for a while.

So that's why my answer to the usual question of "what's next?" is: I don't know! Because if I did, I'd already be halfway done doing it. And then it wouldn't really be next, it'd be now.

Finding the next now is the art of wandering, and wandering well takes practice and patience. Don't rush it.

House rules in Fortnite

We play a lot of Fortnite at our house. It's a great game for teaching kids cooperative discipline, and in a remarkably wholesome setting to boot (no blood, cartoon styling). I've had no qualms involving all three of our boys from an early age in the family squad, including our two youngest from around age four.

Since we started playing, I've just had two primary house rules:

  1. Stay together.
  2. No complaining.

Sounds simple, but it's ever-so tempting to stray from the squad to chase your own goal of getting better gear, and it comes easy to blame your brother when the other team gets you. Especially when you're still a preschooler!

But that's why Fortnite is such an effective tool for teaching discipline. Because if you want to win in a team-based context like that, you have to work together as a team, and you'll quickly realize that sticking to the rules makes that way more likely.

It also teaches you how to lose gracefully. If you get all pissy and blamey when you're knocked out, the session ends, because nobody wants to listen to that (especially dad!). So if you want to play more, and get better, you'd better start tempering your frustrations. Wonderful life skill to develop early.

Equally, it's a delightful game to beat together. Unlike something like Mario Kart or Smash Brothers, which pits the family against itself (also fun, but less learning!), Fortnite puts us all on the same team, striving for the same objective, and in line to celebrate together when we're successful.

When you add it all up, it's one of my favorite activities with the kids. It's highly rewarding to see them internalize both the big life lessons mentioned above, as well as the nitty-gritty tactical insights, like always seeking higher ground, securing cover, and having proper backup before engaging.

All screen time is not created equal.

Too much therapy at work

Many years ago, Jason and I hired a COO at 37signals, but ended up letting them go after just a year (many reasons, another story). This happened not long before one of our company meet-ups, so we thought it fitting to discuss the matter in person. What a mistake. The meeting turned into a group therapy session lasting hours, with a free-flowing outpouring of every anxiety under the sun.

Some might look at group therapy, with its sharing of emotional stories and vulnerability, as a good thing. And I'm sure it can be, in the right setting. But that setting is not work. Especially not with the entire company present as participants.

But that's where a therapy-infused corporate culture and language often leads. It's a natural extension of "holding the space", of too much navel-gazing "mindfulness", of a posture of nurture and care that borders on the patronizing.

It's okay to be disappointed or frustrated or anxious at work. That's part of the experience working with and especially for other people! But where things go astray is when there's an expectation that these emotions always need to be processed during the 9-5 by your manager (rather than after hours with a licensed therapist).

It's also based on a fundamental misconception that everything can be made better for everyone by talking more about it. For some people, men especially, the better way out of a bad situation is to feel useful. There's a whole meme dedicated to "men would rather X than go to therapy", which is posited as a point of derision rather than recognizing taking some/any/all action as a legitimate coping strategy.

See, there's another one of those therapy words: coping. Belonging to the same lexicon as trauma, rumination, and self-care. I'm sure these are all useful and helpful labels in the right therapeutic context, but again, that just isn't work. 

There's a reason licensed therapists are bound by a whole host of ethical boundaries when working with clients. They can't be intertwined with the client's friends or family or colleagues. They must be competent in the specific areas where they recommend interventions. There are all sorts of healthy constraints on the relationship.

Those constraints can't apply at work when you're treating the 1-1 as an hour on the Freudian couch or turn the all-hands into a group therapy session. It's the same problem with the excessive pathologizing of children that Abigail Shrier covered so well in her book Bad Therapy (one of my favorite reads from 2024!).

It's easy to read all of the above as cold or dismissive, but please don't. It's possible to share the same genuine care for people while disagreeing on the methods and the context that's due to serve them best. And it's also possible to look at the increase in therapy thinking, language, and methods in the workplace as an abject failure of modern corporate culture. An increase that's making people more fragile, more precious, more incapable of coping with the most basic expectations of disappointments or adversity on the job.

That to me is as sound of a theory for trying something else as you could ever have. What we've been doing for the last couple of decades is busted, and it's time to accept that. Get therapy out of the office and let's hand it back to the shrinks.

The spells are spent

They just don't work any more, those baseless accusations that anyone we disagree with is a racist, misogynist, fascist. After being invoked en masse and in vain for the better part of the past decade, their power to shock and awe is finally gone. All that's left is a weak whimper. Good fucking riddance.

The problem with accusations like that is that they eventually have to be backed by proof or they'll bounce like a rubber check. And when even the most mundane political or moral positions were able to earn one of those *ist-y labels, the entire enterprise of throwing them around was bound to go bankrupt. And now it has.

Whatever controversy calling the peak of DEI might have garnered two years ago is certainly gone now. Being professionally offended by every little thing, seeing micro-aggressions around every corner, and rendering the entire world through a narrow oppressed/oppressor lens is back to being a niche disposition once more.

This particular preference falsification, where people just go along with what's in vogue to keep their head down, has left the zeitgeist. For a hot minute, it really did seem like everyone was immersed in critical-theory thinking and the rest of the woke orthodoxies. But that was shadows on the wall, distorted to thrice their size from the angle of the light. 

Now the angle has changed, so the shadows have shrunk. That big scary mob that once was has been reduced to near impotence in most arenas, including the biggest of all, the US presidential election. That's a win for everyone whether they like Donald Trump or not.

What you know that just ain't so

The fun bit about business is in all the answers you don't have. Should we be priced higher or lower or leave it alone? Should we chase these customers over here or those customers over there? Should we add more features or polish the ones we have? There's endless variation in every one of those questions, and you can't reason your way to a conclusion. Only testing against reality will really give you an answer. And even then, reality can be "wrong".

Meaning that even valid results of your experiments can steer you astray. Maybe you ran your pricing test too short to know what the second-order effect of losing sign-ups in favor of a higher revenue-per-customer will be, even if it nets out positively for the first year. Maybe presenting your new features came with a new website design, and the latter was what actually moved the needle. It's very hard to reduce any interesting question in business to a perfect experiment. Best you usually get is more or less confidence.

Which is fine! We don't need to know everything for sure before taking action. In fact, the hallmark of a good business person is their ability to make calculated bets based on half a story and a hunch.

But the key is to remember that what you won was a bet, not a trophy made of truth. And while you can be grateful for the good outcome, you need to remember that whatever conclusions you draw from it must remain suspect.

This is just as true when you lose the bet. That's when you're liable to believe that "we tried that, didn't work" represents some universal statement of fact. When in reality, the lesson is more like "we tried SOMETHING and that SOMETHING didn't work". In order to remember that "we could try SOMETHING ELSE and THAT might work".

Because if you don't, you'll eventually become a prisoner to every bet you've made. Convinced that half the opportunities in the world just don't apply to your situation and the other half is a slam dunk. Nonsense.

It's okay to collect your winnings even if you aren't exactly sure why you won. It's okay to take some losses even if you can't explain exactly where it went wrong. In fact, it's better than okay to resist coming up with a definitive story either way. Because as soon as you write that story down, it tends to become rigid rather than open to reinterpretation.

An open mind is always willing to let the story twist, willing to revisit the setup, and willing to test everything from timing to premise on another day.

Our cloud-exit savings will now top ten million over five years

We finished pulling seven cloud apps, including HEY, out of AWS and onto our own hardware last summer. But it took until the end of that year for all the long-term contract commitments to end, so 2024 has been the first clean year of savings, and we've been pleasantly surprised that they've been even better than originally estimated.

For 2024, we've brought the cloud bill down from the original $3.2 million/year run rate to $1.3 million. That's a saving of almost two million dollars per year for our setup! The reason it's more than our original estimate of $7 million over five years is that we got away with putting all the new hardware into our existing data center racks and power limits.

The expenditure on all that new Dell hardware – about $700,000 in the end – was also entirely recouped during 2023 while the long-term commitments slowly rolled off. Think about that for a second. This is gear we expect to use for the next five, maybe even seven years! All paid off from savings accrued during the second half of 2023. Pretty sweet!

But it's about to get sweeter still. The remaining $1.3 million we still spend on cloud services is all from AWS S3. While all our former cloud compute and managed database/search services were on one-year committed contracts, our file storage has been locked into a four(!!)-year contract since 2021, which doesn't expire until next summer. So that's when we plan to be out.

We store almost 10 petabytes of data in S3 now. That includes a lot of super critical customer files, like for Basecamp and HEY, stored in duplicate via separate regions. We use a mixture of storage classes to get an optimized solution that weighs reliability, access, and cost. But it's still well over a million dollars to keep all this data there (and that's after the big long-term commitment discounts!).

When we move out next summer, we'll be moving to a dual-DC Pure Storage setup, with a combined 18 petabytes of capacity. This setup will cost about the same as a year's worth of AWS S3 for the initial hardware. But thanks to the incredible density and power efficiency of the Pure flash arrays, we can also fit these within our existing data center racks. So ongoing costs are going to be some modest service contracts, and we expect to save another four million dollars over five years.

This brings our total projected savings from the combined cloud exit to well over ten million dollars over five years! While getting faster computers and much more storage.
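For those who want to check the arithmetic, here's a rough sketch of how the figures in this post combine. The Pure Storage service-contract cost is my placeholder assumption, not a reported number:

```python
# Back-of-envelope of the cloud-exit savings described above.
# All figures come from the post, except the Pure Storage service
# contracts, which are an assumed placeholder.

years = 5

# Compute/database exit (completed 2023)
old_cloud_bill = 3.2e6   # $/year run rate before the exit
new_cloud_bill = 1.3e6   # $/year after (remaining S3 spend)
compute_savings_per_year = old_cloud_bill - new_cloud_bill
dell_hardware = 0.7e6    # one-time Dell purchase, already recouped

# Storage exit (planned): ~$1.3M/year of S3 replaced by Pure Storage
s3_bill = 1.3e6
pure_hardware = s3_bill  # "about the same as a year's worth of AWS S3"
pure_service = 0.1e6     # assumed modest service contracts, $/year

compute_total = compute_savings_per_year * years - dell_hardware
storage_total = (s3_bill - pure_service) * years - pure_hardware

print(f"Compute savings over {years} years: ${compute_total / 1e6:.1f}M")
print(f"Storage savings over {years} years: ${storage_total / 1e6:.1f}M")
print(f"Combined: ${(compute_total + storage_total) / 1e6:.1f}M")
```

With those inputs, the compute side nets about $8.8M, the storage side about $4.7M, comfortably clearing the "well over ten million" mark even if the service contracts run higher than assumed.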

Now, as with all things cloud vs on-prem, it's never fully apples-to-apples. If you're entirely in the cloud, and have no existing data center racks, you'll pay to rent those as well (but you'll probably be shocked at how cheap it is compared to the cloud!). And even for our savings estimates, the target keeps moving as we require more hardware and more storage as Basecamp and HEY continue to grow over the years.

But it's still remarkable that we're able to reap savings of this magnitude from leaving the cloud. We've been out for just over a year now, and the team managing everything is still the same. There were no hidden dragons of additional workload associated with the exit that required us to balloon the team, as some spectators speculated when we announced it. All the answers in our Big Cloud Exit FAQ continue to hold.

It's still work, though! Running apps the size of Basecamp and HEY across two data centers (and soon at least one more internationally!) requires a substantial and dedicated crew. There's always work to be done maintaining all these applications, databases, virtual machines, and yes, occasionally, even requesting a power supply or drive swap on a machine throwing a warning light (but our white gloves at Deft take care of that). But most of that work was something we had to do in the cloud as well!

Since we originally announced our plans to leave the cloud, there's been a surge of interest in doing the same across the industry. The motto of the 2010s and early 2020s – all-cloud, everything, all the time – seems to finally have peaked. And thank heavens for that!

The cloud can still make a lot of sense, though. Especially in the very early days when you don't even need a whole computer or are unsure whether you'll still be in business by the end of the year. Or when you're dealing with enormous fluctuations in load, like what motivated Amazon to create AWS in the first place.

But as soon as the cloud bills start to become substantial, I think you owe it to yourself, your investors, and common business sense to at least do the math. How much are we spending? What would it cost to buy these computers instead of renting them? Could we try moving some part of the setup onto our own hardware, maybe using Kamal or a similar tool? The potential savings from these answers can be shocking.
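As a sketch of what "doing the math" can look like, here's a hypothetical rent-vs-buy break-even check. None of these numbers are ours; they're made up for illustration:

```python
# Hypothetical rent-vs-buy break-even math. All numbers are invented
# for illustration, not real 37signals figures.

monthly_cloud_bill = 60_000  # $/month for compute you could own instead
hardware_cost = 600_000      # one-time purchase of equivalent servers
colo_and_support = 10_000    # $/month for racks, power, service contracts

monthly_savings = monthly_cloud_bill - colo_and_support
breakeven_months = hardware_cost / monthly_savings
print(f"Hardware pays for itself in {breakeven_months:.0f} months")

lifespan_years = 5           # a typical useful life for the gear
total_savings = monthly_savings * 12 * lifespan_years - hardware_cost
print(f"Savings over {lifespan_years} years: ${total_savings / 1e6:.1f}M")
```

In this made-up scenario the hardware pays for itself in a year, and the rest of its useful life is pure savings. Your own inputs will differ, but the exercise rarely takes more than an afternoon.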

At 37signals, we're looking forward to literally deleting our AWS account come this summer, but remain grateful for the service and the lessons we learned while using the platform. It's obvious why Amazon continues to lead in cloud. And I'm also grateful that it's now entirely free to move your data out of S3, if you're leaving the platform for good. Makes the math even better. So long and thanks for all the fish!

Capture less than you create

I beam with pride when I see companies like Shopify, GitHub, Gusto, Zendesk, Instacart, Procore, Doximity, Coinbase, and others claim billion-dollar valuations from work done with Rails. It's beyond satisfying to see this much value created with a web framework I've spent the last two decades evolving and maintaining. A beautiful prize from a life's work realized.

But it's also possible to look at this through another lens, and see a huge missed opportunity! If hundreds of billions of dollars in valuations came to be from tools that I originated, why am I not at least a petit billionaire?! What missteps along the way must I have made to deserve life as merely a rich software entrepreneur, with so few direct, personal receipts from the work on Rails?

This line of thinking is lethal to the open source spirit.

The moment you go down the path of gratitude grievances, you'll see ungrateful ghosts everywhere. People who owe you something, if they succeed. A ratio that's never quite right between what you've helped create and what you've managed to capture. If you let it, it'll haunt you forever.

So don't! Don't let the success of others diminish your satisfaction with your own efforts. Unless you're literally Mark Zuckerberg, Elon Musk, or Jeff Bezos, there'll always be someone richer than you!

The rewards I draw from open source flow from all the happy programmers who've been able to write Ruby to build these amazingly successful web businesses with Rails. That enjoyment only grows the more successful these businesses are! The more economic activity stems from Rails, the more programmers will be able to find work where they might write Ruby.

Maybe I'd feel different if I was a starving open source artist holed up somewhere begrudging the wheels of capitalism. But fate has been more than kind enough to me in that regard. I want for very little, because I've been blessed sufficiently. That's a special kind of wealth: Enough.

And that's also the open source spirit: To let a billion lemons go unsqueezed. To capture vanishingly less than you create. To marvel at a vast commons of software, offered with no strings attached, to any who might wish to build.

Thou shalt not lust after thy open source's users and their success.

To the crazy ones

In an earlier era, we'd all have been glued to the television to cheer SpaceX successfully catching Starship's returning booster rocket on the first try. I remember my father talking about seeing Apollo 11 make it to the moon. That was a lifelong memory for him. And I remember, as a six-year-old boy, watching the fatal Challenger explosion on TV. That's been a lifelong memory for me. Reaching for space, in triumph or tragedy, ought to be special.

But today it's often just another post that quickly scrolls by on your feed. Maybe you pause for a minute, but the moment easily gets compressed to the same level of gravitas as someone getting punched in the face or an extra cute cat. The spectacle has been completely commoditized. The grand gesture is gone. It now takes effort to actually marvel when something truly spectacular occurs. Like shooting the Starship, 50 feet taller than the Statue of Liberty, towards space, and seeing it stick the landing to perfection.

You should make that effort. Embrace that marvel. Don't be cynical. Don't let transient party politics cloud the moment. Musk was once considered to be on the blue team, now he's on the red team. Little of that will matter in the long term. But propelling progress forward definitely will.

And speaking of Mr Musk, what an insane week for him, and for everyone excited about progress. Cybertaxis, robot bartenders, art-deco people carriers, and now returning starships. This is exactly the kind of willing the future into existence that leaps of progress depend on. Who cares whether all these incredible ambitions arrive right on time or not.

Because ambition this crazy is only likely to emerge from someone equally and sufficiently nuts. And I mean that in the most admirable way possible. Musk is nuts. He's one of the crazy ones. A true original. Easy to hate, impossible to ignore.

And that's what gets me. Everyone finds it easy to nod in agreement with Jobs' ode to The Crazy Ones. Everyone wants to believe that they'd support "the misfits, the rebels, the troublemakers". That they too would cheer for those who are "not fond of rules. And... have no respect for the status quo".

But they won't and they don't. Most people are either aggressively or passively conformist. They squirm when The Crazy Ones actually attempt to change the world. They don't see genius as often as they see transgressors. A failure to comply and comport. And they don't like it.

I like it. Not crazy for the sake of crazy, but crazy for the sake of progress. Demonstrable, undeniable, awe-inspiring progress. And that's what Mr Musk has brought us and continues to bring us.

To infinity and beyond, you crazy spaceman! 🫡

Open source royalty and mad kings

I'm solidly in favor of the Benevolent Dictator For Life (BDFL) model of open source stewardship. This is how projects from Linux to Python, from Laravel to Ruby, and yes, Rails, have kept their cohesion, decisiveness, and forward motion. It's a model with decades worth of achievements to its name. But it's not a mandate from heaven. It's not infallible.

Now I am loath to even open this discussion. Because I've weathered more than my fair share of bad-faith attempts on my own stewardship, and witnessed the show trials of several others. It doesn't take much for contentious issues within a project, or societal moral panics outside it, to seed dethroning mobs. Mobs that will hijack and then eulogize The Will of The Community, as though that somehow deserved the mandate from heaven.

Half the advantage of the BDFL model is exactly in allowing for unpopular decisions to be made without the lethargic mores of committees and bureaucracy! Open source is not a democracy, and we all benefit from that fact, whether we accept it or not.

So what follows is not a categorical argument. I believe in the social utility of an open source royalty. One crowned on the virtues of initiative, perseverance, contributions, and technical excellence.

Matt Mullenweg has earned his crown in the land of WordPress. He created the system, and for twenty years has been its prime champion and cheerleader. His achievements are obvious. Half the damn internet runs on WordPress! There's an industry worth billions feeding theme designers, plugin makers, hosting companies, and Matt's own Automattic enterprise. It's a first-rate open source success story.

But it's also one that has taken a dark turn since Automattic went to war with WP Engine (WPE) over a claim that the latter pay 8% of its revenues as an approximate tithe under the guise of "giving back more". The leverage of extraction started as a spurious trademark claim, but has since escalated into what WPE has alleged as extortion, and what I see as a seemingly never-ending series of dramatic overreaches and breaches of open source norms. Especially the introduction of the login loyalty oath, and now the expropriation of WPE's Advanced Custom Fields (ACF) plugin.

That's a lot, so let's start from the end. The most recent escalation, and, in my opinion, the most unhinged, is the expropriation of the ACF plugin. Automattic first answered WPE's lawsuit by blocking engineers from the latter from accessing the WordPress.org plugin registry, which is used to distribute updates and security patches. It then used the fact that WPE no longer had access to the registry to expropriate the plugin, including reviews and download stats!! The ACF entry now points to Automattic's own Secure Custom Fields.

For a dispute that started with a claim of "trademark confusion", there's an incredible irony in the fact that Automattic is now hijacking users looking for ACF onto their own plugin. And offering as the rationale for this unprecedented breach of open source norms that ACF needs maintenance, and since WPE is no longer able to provide that (given that they were blocked!), Automattic has to step in to do so. I mean, what?!

Imagine this happening on npm. Imagine Meta getting into a legal dispute with Microsoft (the owners of GitHub, who in turn own npm), and Microsoft responding by directing GitHub to ban all Meta employees from accessing their repositories. And then Microsoft just takes over the official React repository, pointing it to their own Super React fork. This is the kind of crazy we're talking about.

Weaponizing open source code registries is something we simply cannot allow to set a precedent. They must remain neutral territory. Little Switzerlands in a world of constant commercial skirmishes.

And that's really the main reason I care to comment on this whole sordid ordeal. If this fight was just one between two billion-dollar companies, as Automattic and WPE both are, I would not have cared to wade in. But the principles at stake extend far beyond the two of them.

Using an open source project like WordPress as leverage in this contract dispute, and weaponizing its plugin registry, is an endangerment of an open source peace that has reigned for decades, with peace-time dividends for all. Not since the SCO-Linux nonsense of the early 2000s have we faced such a potential explosion of fear, uncertainty, and doubt in the open source realm on basic matters everyone thought they could take for granted.

So while I always try to keep things from getting personal, I'll break practice to make this plea: Matt, don't turn into a mad king. I hold your work on WordPress and beyond in the highest esteem. And I recognize the temptation of gratitude grievances, arising from beneficiaries getting more from our work than they return in contributions. But that must remain a moral critique, not a commercial crusade. You can't just extract by force that which you believe to be owed beyond the license agreement on a whim.

Please don't make me cheer for a private-equity operator like Silver Lake, Matt. Don't make me wish for them to file an emergency injunction to stop the expropriation of ACF.

It's not too late. Yes, some bridges have been burned, but look at those as sunk cost. Even in isolation, the additional expense from here on out to continue this conquest is not going to be worth it either. There's still time to turn around. To strike a modest deal where all parties save some face. I implore you to pursue it.

Automattic is doing open source dirty

Automattic demanding 8% of WP Engine's revenues because they're not "giving back enough" to WordPress is a wanton violation of general open source ideals and the specifics of the GPL license. Automattic is completely out of line, and the potential damage to the open source world extends far beyond WordPress. Don't let the drama or its characters distract you from that threat.

A key part of why open source has been so successful over the last several decades is the clarity and certainty of its licensing regime. Which allow you to build a business on open source without fear of frivolous claims or surprise shakedowns. The terms of the deal are spelled out in the license agreement, and the common ones, like MIT, BSD, or GPL, have all stood the test of time.

The most important part of such a license is usually the fact that the software is offered without any warranty. But some also include provisions that require any modifications to be released as open source as well. None of the major licenses, however, say anything close to "it's free but only until the project owners deem you too successful and then you'll have to pay 8% of your revenues to support the project". That's a completely bonkers and arbitrary standard based in the rule of spite, not law.

I don't even have a dog in this fight, only a set of principles. If anything, I'd be naturally inclined to be on Team WordPress. Between creating one of the most widely used open-source programs and powering half the internet, there's every tribal reason to side with Automattic over WP Engine's private-equity owners at Silver Lake.

But whatever my feelings about private equity in general or Silver Lake's management of WP Engine in particular, I care far more about the integrity of open source licenses, and that integrity is under direct assault by Automattic's grotesque claim for WP Engine's revenues.

It's even more outrageous that Automattic has chosen trademarks as their method to get their "Al Capone" when up until 2018 they were part owners of WP Engine before selling their stake to Silver Lake!

And yet, I can see where this is coming from. Ruby on Rails, the open-source web framework I created, has been used to create businesses worth hundreds of billions of dollars combined. Some of those businesses express their gratitude and self-interest by supporting the framework with dedicated developers, membership of The Rails Foundation, or conference sponsorships. But many also do not! And that is absolutely their right, even if it occasionally irks a little.

That's the deal. That's open source. I give you a gift of code, you accept the terms of the license. There cannot be a second set of shadow obligations that might suddenly apply, if you strike it rich using the software. Then the license is meaningless, the clarity all muddled, and certainty lost.

Look, Automattic can change their license away from the GPL any time they wish. The new license will only apply to new code, though, and WP Engine, or anyone else, are eligible to fork the project. That's what happened with Redis after Redis Labs dropped their BSD license and went with a commercial source-available alternative. Valkey was forked from the last free Redis version, and now that's where anyone interested in an open-source Redis implementation is likely to go.

But I suspect Automattic wants to have their cake and eat it too. They want to retain WordPress' shine of open source, but also be able to extract their pound of flesh from any competitor that might appear, whenever they see fit. Screw that.

Kamal 2: Thou need not PaaS

Kamal was our ticket out of the cloud. A simple tool for deploying containerized applications onto our own hardware, without the need for the complexity of something like Kubernetes. Kamal 2 is a huge leap forward for that tool, and it has just shipped. 

Now you can deploy multiple applications to the same server, and you can have SSL certificates automatically provisioned via Let's Encrypt. A big compression in complexity, especially when just getting started.
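To make that concrete, here's a minimal sketch of what a Kamal 2 `config/deploy.yml` might look like. This is purely illustrative: the service name, image, server IP, and hostname are all placeholders, and a real config would be adapted to your own registry and servers.

```yaml
# Hypothetical sketch of a Kamal 2 deploy.yml -- all names, IPs,
# and hosts below are placeholders, not a real setup.
service: myapp
image: myuser/myapp

servers:
  web:
    - 192.168.0.1

# Kamal 2's built-in proxy can terminate SSL via Let's Encrypt,
# and routing by host is what lets multiple apps share one server.
proxy:
  ssl: true
  host: myapp.example.com

registry:
  username: myuser
  password:
    - KAMAL_REGISTRY_PASSWORD
```

With a file along these lines in place, `kamal setup` bootstraps the server and `kamal deploy` ships subsequent releases.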

Because Kamal isn't just for high-end cloud exits where applications rely on an entire fleet of machines. It's also an excellent option for running a bunch of smaller apps on a single server. Imagine just how much you can run on one of those amazing Hetzner EPYC 9454 boxes with 96 threads and 256GB of RAM that they sell for $220/month!

Our move out of the cloud would not have been nearly as smooth or as fast without Kamal. And I'm thrilled we can share such a tool with everyone else who might want to reconsider the cost of Platform-as-a-Service setups. Kamal works great whether you're starting on a cheap $5 VPS, moving onto a fleet of cloud VMs, using dedicated-but-managed servers, or running your own hardware entirely.

In fact, its chief mission is to allow you to move through all those stages of a production deployment without onerous migration costs or delays. We can't have competition in the cloud as long as folks are locked into proprietary or overly-complicated setups that make moving from one vendor to another a huge hassle and expense.

If this sounds at all appetizing, check out the new Kamal 2 demo, which shows how to deploy a simple Go application (Kamal isn't just for Rails!), how to add another Rails app on the same box, and how to move that Rails app onto a three-machine Hetzner cloud setup. All in under half an hour.

Enjoy Kamal 2!


Wonderful Rails World Vibes

I totally understand how programming conferences end up being held in a drab Sheraton hotel somewhere to save money. It's expensive to outfit a cool venue with the gear and operations needed to pull off a great experience for speakers, sponsors, and attendees. And while the cost of doing something more inspiring than a carpet-clad conference hotel is clear, the payoff is often fuzzy until you do. But Rails World 2024 in Toronto just made that value crystal clear. Holy smackeroli, what an incredible show!!

We had over a thousand people gathered from 57 countries in one of the coolest conference venues I've ever had the pleasure of talking to programmers in. Amanda Perino, the executive director of the Rails Foundation, and the mastermind behind Rails World, gambled on an inside/outside venue in Toronto in September, and it paid off big. Talking Rails between the trees, in the open air, and basking in the sun made for a unique and mesmerizing couple of days in Canada.

The fact that we managed to time the first beta release of Rails 8 with the conference certainly helped too. The bold leap from #NOBUILD to #NOPAAS, the release of an entire trifecta of Solid adapters (powered by SQLite by default!), and the release of Kamal 2.0 (with its brand-new purpose-built proxy!) made for perhaps the most exciting shipping season in well over a decade for Rails.


It's clear that after spending a few years in the wilderness with webpacker and whatnot, Rails has found a new stride and a new confidence to pursue its counter melody to the rest of the industry. While everyone seems resigned to slice the expertise of web development into ever thinner specialties, Rails is doubling down on the one-developer framework, the full stack, and the compression of complexity.

And while venture capitalists are moving in on developer tech from all sides, Rails is reaffirming its commitment to open source generics, and deployment strategies that don't require paying big premiums for wrappers around AWS. (You should never need a PaaS subscription to go live with Rails!).

But while both of those ingredients, a fantastic venue and a stacked shipping schedule, helped set us up for a good time, the fact that THE VIBES have so definitively turned around in the Rails ecosystem, as well as the rest of the programming world, certainly contributed too. We've now long since put those awkward first years of the decade behind us, and we're just back to having a great time celebrating competency and conceptual compression together.

And what a celebration! While I really enjoyed delivering the opening keynote, it was the session with Tobi and Matz that truly warmed my heart. I haven't had a chance to really talk with Matz for many years, to truly thank him for the incredible gift of Ruby that he just keeps delivering to us. So to do so in a session moderated by Tobi was beyond special. So too was offering Matz the Rails Lifetime Award, which apparently was the first award Matz has ever received himself. Just look at this:
matz-award.jpeg

Last year's recipient, Tobi, also did his part and more to make us all feel incredibly welcome on Shopify's home turf of Toronto. From the pre-game event, to the support for the conference, to the insane after party. Shopify's patronage towards Ruby and Rails is the stuff of living legend, and the results show it: from their superb work on YJIT, to their strong presence on Rails Core, to their support of the Rails Foundation and Rails World. We couldn't have asked for a better host or benefactor!


Now I know all of this sounds a bit sappy at this point. Thanks going here, thanks going there, but that really is the key sentiment that I left Toronto with. Just an immense gratitude towards all the companies, contributors, and individuals working together in the Rails ecosystem for a better tomorrow. The web does not have to be as complicated as we've let it become, and here's a corner of the programming world pushing back in unison.

I mean, I've been working on The Web Problem with Ruby through Rails for well over twenty years now, but I'm as fired up as ever to push the frontier forward. And it's largely because the effort is sustained by such a broad and charitable ecosystem.

With Rails World such a roaring success -- I mean it sold out in a mere twenty minutes and still managed to surpass the hype!! -- it might look obvious in retrospect that it was always going to be like this. But it wasn't obvious at all at the outset. It took a real leap of faith from Cookpad, Doximity, Fleetio, GitHub, Intercom, Procore, and Shopify to join with 37signals in creating The Rails Foundation, without which none of this would have happened.

The fact that we were able to put on this incredible celebration of Rails in Toronto at a very affordable entry price relied not just on the support of sponsors like Shopify, GitHub, AppSignal, Clio, Huntress, Valkey, crunchydata, Paraxial.io, Sentry, MongoDB, happyscribe, framework, and others, but also on the fact that the Rails Foundation, the founding core members listed above, as well as the contributing members (AppSignal, BigBinary, cedarcode, makandra, Planet Argon, Renuo, and TableCheck), were willing to happily underwrite a loss of over $100,000 on the conference itself.

Because the mission for The Rails Foundation is to broaden the appeal of Rails, excite existing and new programmers in our ecosystem, and help educate them all on the latest advances with the framework. That's what the budget is for, and spending a considerable chunk of it on subsidizing Rails World serves that mission in full.


In the coming days and weeks, we'll be releasing all the sessions from Rails World on the Rails YouTube channel. I know there was enough FOMO to float a blimp online from those who didn't manage to snatch a ticket in those ludicrously short twenty minutes it took before the conference sold out, but at least all the keynotes and sessions will be made available as quickly as we can edit and upload them. You won't be getting the vibes of the hallway track, but maybe you'll have a chance to remedy that next year, when Rails World returns to Amsterdam for the 2025 show!

In the meantime, just one last thank you to everyone who attended, brought their good vibes, and helped ensure that we can all feel energized and inspired about Ruby on Rails for another year and beyond! ✌️❤️


Ears rarely open until a rapport is established

It's hard to open cold with a controversial take to a bunch of strangers. And the room is always cold on X or in a one-off blog post. Just like comedy, half the battle of winning over the audience comes from a solid introduction, good timing, and a broad smile to warm the room. You can have great material, but if the vibe is off, good luck landing a laugh.

The stream I did on Monday with ThePrimeagen and TJ DeVries illustrates this to a T. Not only was I warm and in a good mood from spending time with ThePrimeagen first (who wouldn't be!), but the pair immediately elevated that spirit further by being such welcoming and gracious hosts. Thereby signaling to their audience that the person they were about to hear would be worth listening to with an open mind.

That's the kind of introduction that makes all the difference as to whether someone is willing to grant you the grace of charitable listening -- or whether they'll immediately set their defenses to Max Stranger Skepticism.

I've been making many of my main arguments for years. Some for decades, even. And I know that many in that stream had probably heard some of them before, and dismissed them out of hand, because we hadn't established a rapport that would warrant an open mind. 

That's where writing just can't compete with a podcast or a stream. Putting a face, a voice, and a vibe to the argument absolutely changes its tone, and in turn, the emotions it evokes. And that's what most people go off on. Those emotions.

I sometimes do struggle with that. Thinking that the strength of an argument should be gauged purely by its logic or at least its rhetoric. But I've really come to appreciate the value of set and setting. Of a warm introduction. Of establishing a rapport.

We might not all become friends on the internet, but we needn't be strangers either. And the best way to move on from being strangers is by having others vouch for your earnestness. Thanks for doing that, ThePrimeagen and TJ DeVries ✌️

Wonderful vi

The speed of change in technology often appears to be the industry's defining characteristic. Nothing highlights that perception more than the recent and relentless march of AI advancements. But for as much as some things in technology change, many other things stay the same. Like vi!

vi is a programming text editor that was created by Bill Joy before computers even had real graphical interfaces, back in 1976. Just five years after the first microprocessor, the Intel 4004. In computing terms, we might as well be talking about ancient Egyptian hieroglyphs here. It's that old.

But its fundamental design, splitting insert mode from command mode, remains unchanged in its modern successors, like Vim and Neovim. In fact, the entire vi ethos of maximizing programmer productivity by minimizing keystrokes has carried forward all these years with remarkably little distortion. In 1976, most computers didn't even have a mouse. In 2024, most vi-successor users don't even need one when programming.

That's kinda incredible! That I can sit here, almost half a century after Bill Joy first gave birth to vi, and enjoy the same quirky style of text editing to make modern web apps in 2024. It's not why I use Neovim, but it sure does make it feel extra special.

The other reason it feels special is that vi turns manipulating text into a key-based form of Street Fighter. Sure, you can have fun just learning the basic buttons for punching and kicking, but the game unlocks an entirely new dimension the moment you pull off your first hadoken — a fireball move done by making a half circle with the joystick followed by a punch. And now you're off to learn all the special moves followed by techniques for stringing them together into combos.

That's what pulling a "ciq" in Neovim feels like. It stands for "Change Inside Quotes". So say you're in the middle of a line of code like this: "puts 'Hello<cursor> World'". With the cursor placed after "Hello", the "ciq" move will remove all the text inside the quotes ("Hello World") and drop you into insert mode right after the opening quote, ready to write something new. That's pretty magic!

And it just goes on and on from there. You can use "dab" to "Delete Around Brackets", which is great when you want to remove all the parameters used for a method at the end of a line in Ruby. Or what about "vii" to "(Visually) Select Inside [the current level of] Indention", so you have the entire body of a method highlighted, ready for overwriting with a paste or copying or cutting. Or just "yiw", which yanks the word under your cursor to the clipboard, regardless of where in the word the cursor sits.

There's an astounding number of combos like these available in Neovim (and the other vi flavors). But now you're probably thinking: how could anyone possibly remember all that? Which brings me to the real wonder of vi: It's not just an editor, it's a language for editing. Once you learn the basic grammar of vi text manipulation, and you learn a few actions, scopes, and objects, you can string it all together in any combination possible.

Here's the structure: [Action] [Scope] [Object]. I've already given you four actions in the examples above: change, select, delete, and copy. And there are only two primary scopes: inside and around. And we've looked at four different objects: quotes, brackets, indention, and word. There's your language.

yaq = Copy (yank) Around Quotes
diw = Delete Inside Word
vab = Select (visually) Around Brackets

See how it's starting to make sense? Now let's add one more move to the combo, which is a count. So you can also do:

3cw = Change Three Words
4dd = Delete Four Lines
10j = Move Down Ten Lines
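To drive home how regular the action-scope-object grammar is, here's a toy Ruby sketch that decodes those combos, with an optional leading count. This is purely illustrative — it's not how any editor implements this, and it only covers the handful of actions, scopes, and objects mentioned above:

```ruby
# Toy decoder for vi's [count][action][scope][object] combos.
# Illustrative only -- real editors don't work this way internally.
ACTIONS = { "c" => "change", "d" => "delete", "v" => "select", "y" => "copy" }
SCOPES  = { "i" => "inside", "a" => "around" }
OBJECTS = { "q" => "quotes", "b" => "brackets", "i" => "indention", "w" => "word" }

def decode(combo)
  # Split an optional numeric count from the three-letter combo.
  count, rest = combo.match(/\A(\d*)(\w+)\z/).captures
  action, scope, object = rest.chars
  n = count.empty? ? 1 : count.to_i
  "#{n}x #{ACTIONS.fetch(action)} #{SCOPES.fetch(scope)} #{OBJECTS.fetch(object)}"
end

puts decode("ciq")  # "1x change inside quotes"
puts decode("dab")  # "1x delete around brackets"
puts decode("3yiw") # "3x copy inside word"
```

Once the decomposition clicks, every new action, scope, or object you learn multiplies the number of combos you know, rather than adding just one.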

There's more to learning the vi command mode than just this, but to me, this is where the magic is. The language of text manipulation. The action-scope-object grammar. It's when that clicks that the combo stringing begins, and your dopamine starts flowing.

It's just uniquely satisfying to string a handful of these combos together and see the text beneath you radically manipulated. In a way you just know would have been a drag to do in any other editor. That's the game-like joy of vi's power moves.

Now all of this still comes with quite some learning curve, of course. On top of the text manipulation, there's a bunch of basic navigation keystrokes to learn, but I think you ought to start with the basic grammar explained above. That's the fun bit, that's the addictive bit.

And if this appetizer has you hungry for more, I'd start by installing Neovim using the superb LazyVim distribution. Neovim from scratch is like climbing Mount Everest without oxygen. If all you're interested in at first is a peek at the view, book that LazyVim helicopter tour before bothering the sherpas. Then check out the excellent Vim and Neovim tutorials available on YouTube from the likes of ThePrimeagen and Typecraft, along with Elijah Manor's LazyVim introduction.

You can run Neovim on any operating system, but it works better if you're using a modern, fast terminal like Alacritty or Kitty. I personally use Alacritty and Neovim together with the multi-pane terminal enhancement Zellij. The entire package is configured out of the box in Omakub, if your adventurous spirit should extend to a trip into Linux. But you absolutely don't need to run Linux to enjoy Neovim. It's great on both Mac and Windows too.

So that's it. That's my testimony to what a wonderful experience it's been adopting Neovim. It certainly wasn't without some frustration (like figuring out how to do my own snippets!!), and it's not without some sadness that I've given up on my beloved TextMate editor, but I can comfortably say, after running this stack since February, that it feels like home now. A combo-smashing, hadoken-wielding home. And it's awesome.

May vi reign for another fifty years and beyond!

Back in the market (Sonos Edition)

I've been a Sonos megafan for years. Owned probably two dozen devices for different homes. Mainly amps for in-ceiling speakers, but also some Ones, 3s, 5s. All of it. Because it Just Worked when it came to multi-room music. Now it doesn't, and it hasn't for a long time, so I've been back in the market.

I'm not exactly sure what the problem is. The new app has gotten a lot of the blame for Sonos' problems, but to me, the bigger issue has been how unreliable the system at large has become. And that doesn't seem like it's just an app issue. Probably more of a firmware problem. But whatever it is, it's been driving me crazy.

When you want to listen to some music at dinner, you want play to mean PLAY RIGHT NOW. Not "won't connect". Not "try restarting the app". Just play. But Sonos hasn't been like that. It's been an exercise in frustration instead.

I sympathize with their situation, though. Software is hard! Especially when you've been around as long as Sonos has, and have a back catalog of a million different devices, as well as a road map with tons of new stuff coming out. It can't be easy to juggle all that.

The second thing that's clearly hard is wireless synchronization. Sonos has a huge patent portfolio for making sure the music is always in sync across multiple rooms and devices, even when it's running over a wireless connection. Google lost in court when they tried to steal that tech. It's a formidable fortress.

But I don't need wireless. All my gear is hardwired with ethernet. And that's why the problems I've had have been extra infuriating. When the music drops out. Or it won't start. Or you can't connect to the devices. That just shouldn't happen when it's on a hardwired connection, but it does.

And it's not like we haven't tried to figure this out. It's literally been months of debugging. The AV outfit I'm using has been working with Sonos support. They've been replacing Amps with newer hardware. Lots of experimentation. But nothing has made the system reliable or enjoyable to use. Certainly not through the Sonos app.

So last week I finally threw in the towel and got back into the market. Turns out there are quite a few competitors out there now. I decided to try two: Bluesound and WiiM. They offer a comparable streaming amp that I can hook up to my in-ceiling Klipsch speakers, and they both connect via ethernet, and they both support Spotify Connect. WiiM in particular has been wowing hifi aficionados with their incredible price/performance ratio.

Instantly, it was a revelation. Music on either amp would start immediately when I selected a song in Spotify. I'd been boiled by the Sonos problems for so long that I didn't even remember what instant does for enjoyment. It does a lot!

And now that I've been pushed back into the market, I've realized that the historic Sonos app advantage just doesn't matter nearly as much as it used to. The Spotify app is a superior way to browse music, and both WiiM and Bluesound make it easy to group speakers. In the past, those two points served as the Sonos moat. That's over, it seems.

Furthermore, the WiiM Amp is just $299! That's half the price of the Sonos Amp (and the Bluesound). And for the kind of in-ceiling, background music I'm using it for, it's more than plenty. And despite the low price, the software is actually the best of all three options. It's incredibly easy and quick to set up. You're playing music five minutes after you've unpacked the device.

I'm telling you all of this because there are a lot of lessons for business owners of all types in this Sonos predicament. Like, if you've managed to win over a customer, the last thing you want to do is get them back into the market to sample the competition. If you can keep them content with what they got, they aren't interested in whatever anyone else is offering.

That's why it's worth going the extra effort not to fuck with existing customers. That's why we run three different versions of Basecamp. Including one that hasn't seen any feature updates for 14 years! A version that still has thousands of happy customers generating millions in revenue every year. They signed up for a system they liked in the mid-to-late 2000s, and they're not in the market for a change. THAT'S JUST FINE BY US!

Again, it isn't free to do this. But Sonos eventually got it right the last time they faced this dilemma, when they wanted to push the latest Sonos app and couldn't make their plans work with the old hardware. Their first approach would have essentially bricked all those older devices. But they came around, split the apps, and I still use the Sonos S1 setup with an installation that's over a decade old.

And I'm sure we've gotten it wrong at 37signals many times over the years too. Accidentally changed something that we thought would be better, or at least more attractive to new customers, and ended up pushing happy buyers back into the market. Some of that is unavoidable, but a lot of it is not. A lot of it is a choice.

Don't make that choice. Don't push customers back into the market. They might just find that Spotify Connect is pretty awesome, and that amps half the price can do the job. Keep current customers happy, then go chasing new ones.

Passwords have problems, but passkeys have more

We had originally planned to go all-in on passkeys for ONCE/Campfire, and we built the early authentication system entirely around that. It was not a simple setup! Handling passkeys properly is surprisingly complicated on the backend, but we got it done. Unfortunately, the user experience kinda sucked, so we ended up ripping it all out again.

The problem with passkeys is that they're essentially a halfway house to a password manager, but tied to a specific platform in ways that aren't obvious to a user at all, and liable to leave them unable to access their accounts. Much the same way that two-factor authentication can, but worse, since you're not even aware of it.

Let's take a simple example. You have an iPhone and a Windows computer. Chrome on Windows stores your passkeys in Windows Hello, so if you sign up for a service on Windows, and you then want to access it on the iPhone, you're going to be stuck (unless you were so forward-thinking as to add a second passkey, somehow, from the iPhone while on the Windows computer!). The passkey lives on the wrong device if you're away from the computer and want to log in, and it's not at all obvious to most users how they might fix that.

Even in the best-case scenario, where you're using an iPhone and a Mac synced via iCloud Keychain, you're still going to be stuck if you need to access a service on a friend's computer in a pinch. Or if you're not using iCloud Keychain at all. There are plenty of pitfalls all over the flow. And the solutions, like scanning a QR code with a separate device, are cumbersome and alien to most users.

If you're going to teach someone how to deal with all of this, and all the potential pitfalls that might lock them out of your service, you almost might as well teach them how to use a cross-platform password manager like 1Password.

Yes, passwords have problems. If you're using them without a password manager, you're likely to reuse them across multiple services, and if you do, all it takes is one service with awful password practices (like storing them in plain text rather than hashing them with something like bcrypt), and a breach will mean hackers might get access to all your other services.
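To make the hashing point concrete, here's a minimal sketch in Ruby. The text names bcrypt, which in practice means the bcrypt gem (or Rails' has_secure_password); this sketch swaps in PBKDF2 from Ruby's standard library purely so it runs without extra dependencies. The principle is the same: a slow, salted, one-way derivation instead of plain text, so a breached database doesn't hand attackers reusable passwords.

```ruby
require "openssl"
require "securerandom"

# Derive a slow, salted hash of a password. Production Rails apps would use
# the bcrypt gem via has_secure_password; stdlib PBKDF2 stands in here.
def hash_password(password, salt: SecureRandom.hex(16))
  digest = OpenSSL::KDF.pbkdf2_hmac(
    password,
    salt: salt,
    iterations: 100_000,  # deliberately slow, to blunt brute-force attempts
    length: 32,
    hash: "SHA256"
  )
  "#{salt}$#{digest.unpack1("H*")}"
end

# Re-derive with the stored salt and compare; never store the plain text.
def verify_password(password, stored)
  salt, _hex = stored.split("$")
  hash_password(password, salt: salt) == stored
end

stored = hash_password("correct horse battery staple")
puts verify_password("correct horse battery staple", stored) # true
puts verify_password("hunter2", stored)                      # false
```

Even this crude version means a leaked `stored` value can't be replayed against other services, which is exactly the failure mode plain-text storage invites.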

But just because we have a real problem doesn't mean that all proposed solutions are actually going to be better. And at the moment, I don't see how passkeys are actually better, or, worse still, how they could become better. Unless you accept the idea that all your passwords should be tied to one computing ecosystem, making it hard to use alternative computers.

A decent alternative to passkeys, if you need the extra layer of security, is to lean on email for the first login from a new device. Treating email as a second factor, but only on the first login from that device. Everyone has email, everyone understands email. (Just don't force us all to go through magic links exclusively, as that's a pain too for those who've actually adopted a password manager!).
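Here's a rough sketch of that flow in Ruby. All the names (DeviceVerifier, login, confirm) are hypothetical, and the actual email delivery is stubbed out; the point is just the shape of the logic: password first, then a one-time emailed code, but only for a device cookie the service hasn't seen before.

```ruby
require "securerandom"
require "digest"

# Hypothetical sketch: email as a second factor, but only on the first
# login from an unrecognized device.
class DeviceVerifier
  def initialize
    @known_devices = {}  # email => array of device-cookie digests
    @pending_codes = {}  # email => [one-time code, device digest]
  end

  # Called after the password already checked out. Recognized devices
  # pass straight through; new ones trigger an emailed code.
  def login(email, device_cookie)
    digest = Digest::SHA256.hexdigest(device_cookie)
    return :ok if @known_devices.fetch(email, []).include?(digest)

    code = SecureRandom.random_number(1_000_000).to_s.rjust(6, "0")
    @pending_codes[email] = [code, digest]
    # deliver_code_by_email(email, code)  # mailer omitted in this sketch
    [:code_sent, code]
  end

  # Confirming the emailed code marks this device as trusted from then on.
  def confirm(email, code)
    expected, digest = @pending_codes[email]
    return :invalid unless expected && code == expected

    (@known_devices[email] ||= []) << digest
    @pending_codes.delete(email)
    :ok
  end
end
```

After one confirmation, subsequent logins from the same device skip the email step entirely, which is what keeps this from becoming the magic-link treadmill the parenthetical complains about.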

Bottom line, I'm disappointed to report that passkeys don't appear worth the complexity of implementation (which is substantial!) nor the complexity and gotchas of the user experience. So we're sticking to passwords and emails. Encouraging opt-in 2FA and password managers, but not requiring them.

Passkeys seemed promising, but not all good intentions result in good solutions.

Optimize for bio cores first, silicon cores second

A big part of the reason that companies are going ga-ga over AI right now is the promise that it might materially lower their payroll for programmers. If a company currently needs 10 programmers to do a job, each costing $200,000/year, then that's a $2m/year problem. If AI could cut even a quarter of that, they'd have saved half a million! Cut double that, and it's a million. Efficiency gains add up quick on the bottom line when it comes to programmers!

That's why I love Ruby! That's why I work on Rails! For twenty years, it's been clear to me that this is where the puck was going. Programmers continuing to become more expensive, computers continuing to become less so. Therefore, the smart bet was on making those programmers more productive EVEN AT THE EXPENSE OF THE COMPUTER!

That's what so many programmers have a difficult time internalizing. They are in effect very expensive biological computing cores, and the real scarce resource. Silicon computing cores are far more plentiful, and their cost keeps going down. So as every year passes, it becomes an even better deal trading compute time for programmer productivity. AI is one way of doing that, but it's also what tools like Ruby on Rails were about since the start.

Let's return to that $200,000/year programmer. You can rent 1 AMD EPYC core from Hetzner for $55/year (they sell them in bulk, $220/month for a box of 48, so 220 x 12 / 48 = 55). That means the price of one biological core is the same as the price of roughly 3,636 silicon cores. Meaning that if you manage to make the bio core 10% more efficient, you will have saved the equivalent cost of some 364 silicon cores. Make the bio core a quarter more efficient, and you'll have saved nearly ONE THOUSAND silicon cores!
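The back-of-the-envelope math works out like this (in Ruby, because even a calculator deserves comments):

```ruby
bio_core_cost     = 200_000          # one programmer, dollars/year
silicon_core_cost = 220 * 12 / 48.0  # Hetzner: $220/month for a 48-core box
puts silicon_core_cost               # 55.0 dollars/year per core

ratio = bio_core_cost / silicon_core_cost
puts ratio.round                     # ~3636 silicon cores per biological core
puts (ratio * 0.10).round            # a 10% bio-core gain is worth ~364 cores
puts (ratio * 0.25).round            # a 25% gain is worth ~909 cores
```

And the ratio only tilts further toward the bio cores each year, as silicon keeps getting cheaper while programmers don't.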

But many of these squishy, biological programming cores have a distinctly human sympathy for their silicon counterparts that overrides the math. They simply feel bad asking the silicon to do more work, if they could spend more of their own time to reduce the load by using tools and techniques that are less efficient for them but more efficient for the silicon. For some, it seems to be damn near a moral duty to relieve the silicon of as many burdens as they believe they're able to carry instead.

And I actually respect that from an artsy, spiritual perspective! There is something beautifully wholesome about making computers do more with fewer resources. I still look oh-so-fondly back on the demo days of the Commodore 64 and Amiga. What those wizards were able to squeeze out of a mere 4kb to make the computer dance in sound and picture was truly incredible.

It just doesn't make much economic sense, most of the time. Sure, there's still work at the vanguard of the computing threshold. Somebody's gotta squeeze the last drop of performance out of that NVIDIA 4090, such that our 3D engines can raytrace at 4K and 120FPS. But that's not the reality at most software businesses that are in the business of making business software (say that three times fast!). Computers have long since been way fast enough for that work to happen without heroic optimization efforts.

And that's the kind of work I've been doing for said twenty years! Making business software and selling it as SaaS. That's what an entire industry has been doing to tremendous profit and gainful employment across the land. It's been a bull run for the ages, and it's been mostly driven by programmers working in high-level languages figuring out business logic and finding product-market fit.

So whenever you hear a discussion about computing efficiency, you should always have the squishy, biological cores in mind. Most software around the world is priced on their inputs, not on the silicon it requires. Meaning even small incremental improvements to bio core productivity are worth large additional expenditures on silicon chips. And every year, the ratio grows greater in favor of the bio cores.

At least up until the point that we make them obsolete and welcome our AGI overlords! But nobody seems to know when or if that's going to happen, so best you deal in the economics of the present day, pick the most productive tool chain available to you, and bet that happy programmers will be the best bang for your buck.

Why don't more people use Linux?

A couple of weeks ago, I saw a tweet asking: "If Linux is so good, why aren't more people using it?" And it's a fair question! It intuitively rings true until you give it a moment's consideration. Linux is even free, so what's stopping mass adoption, if it's actually better? My response:

If exercising is so healthy, why don't more people do it?
If reading is so educational, why don't more people do it?
If junk food is so bad for you, why do so many people eat it?

The world is full of free invitations to self-improvement that are ignored by most people most of the time. Putting it crudely, it's easier to be fat and ignorant in a world of cheap, empty calories than it is to be fit and informed. It's hard to resist the temptation of minimal effort.

And Linux isn't minimal effort. It's an operating system that demands more of you than the commercial offerings from Microsoft and Apple do. Thus, it serves as a dojo for understanding computers better. With a sensei who keeps demanding you figure problems out on your own in order to learn and level up.

Now I totally understand why most computer users aren't interested in an intellectual workout when all they want to do is browse the web or use an app. They're not looking to become a black belt in computing fundamentals.

But programmers are different. Or ought to be different. They're like firefighters. Fitness isn't the purpose of firefighting, but a prerequisite. You're a better firefighter when you have the stamina and strength to carry people out of a burning building on your shoulders than if you do not. So most firefighters work to be fit in order to serve that mission.

That's why I'd love to see more developers take another look at Linux. Such that they may develop better proficiency in the basic katas of the internet. Such that they aren't scared to connect a computer to the internet without the cover of a cloud.

Besides, if you're able to figure out how to set up a modern build pipeline for JavaScript or even correctly configure IAM for AWS, you already have all the stamina you need for the Linux journey. Think about giving it another try. Not because it is easy, but because it is worth it.

Free speech isn't guaranteed to be forever

History is full of long stretches of dominance by noble ideas and despots, times of prosperity and of dark ages. Each of which must have seemed like they would never end to the people who lived through them. If you were a citizen of the Ottoman Empire in 1452, you probably didn't imagine life any other way. Ditto the height of the Roman Empire. Ditto today.

Humans of all times have acclimated to their environment, their culture, and their politics. And while we've also always had historians and prophets proclaiming to know the future, through the tea leaves of the past or burning bushes, actually calling the advent of specific change at a specific time, with any degree of accuracy, has been and remains nearly impossible. It's just that kind of planet.

But it was a piece of fiction, rather than actual history, that made me think of this yesterday, as I was watching the first episode of HBO's Game of Thrones prequel House of the Dragon. It opens with the narration that the realm had been under sixty years of peaceful rule by King Jaehaerys Targaryen prior to the succession crisis of the plot. Sixty years.

And this story spawned not just an abstract meditation on the waxing and waning of peace and history, but on what's going on right now in the world we live in today, specifically with free speech.

I'm old enough to remember when it was the left of the west that carried the banner of free speech. That defended lurid and violent rap music, first-person shooter video games, raging against the machine, exposing big pharma, protesting wars of all kinds. The counter culture, the dissidents, the live-and-let-live ethos of free expression, and the "that's like just your opinion, maaaan".

I remember learning about the ACLU's Jewish attorneys suing the government for the right of neo-Nazis in 1977 to march the streets of Skokie, Illinois. Men of principle, defending the right of their sworn enemies to exercise constitutional freedoms, such that that right might protect causes they believed in at a later day.

That long stretch of peaceful rule and commitment to protecting free speech seems to have evaporated from much of the left today. In its place, we have the fig leaves of "misinformation" and "hate speech" failing to adequately cover the naked persecution of political adversaries. A fun-house mirror version of the right's old "won't somebody please think of the children" go-to argument for curbing free speech.

In case you haven't been following along in the last few weeks, let me catch you up. First, Brazil's left-wing judiciary just banned X from their internet because this one judge couldn't strong-arm the platform into banning his political enemies. Now he's decreed that any citizen that dares access X via VPN will risk a fine amounting to almost a year's median salary.

This came barely two weeks after the European Union's Kommissar Thierry Breton threatened X with eviction from Europe, if Musk failed to censor "harmful content". Specifically referencing the grave danger of featuring an interview with one of the candidates for the US presidency, Donald Trump.

Finally, just two days ago, Robert Reich, the former US Secretary of Labor under Clinton, called for the arrest of Elon Musk on a similar pretext of "misinformation" and "hate", echoing his fellow Brazilian and European censorship mongers.

Not exactly an idle threat, either, given the arrest of Telegram's CEO by French authorities on adjacent accusations of failing to collaborate with state authorities on moderation and policing of speech on that platform.

And while all this is going on, the UK is accelerating its existing crackdown on spicy memes, indignant Facebook posts, and other expressions of frustration over the country's handling of the mass migration issue that have been boiling over following a string of stabbings.

There are two common responses to this that seek to diminish the concern. One is to downplay these threats to free speech by playing up the danger posed by said "misinformation" and "hate speech". That's the "yeah, of course we believe in free speech, but IN THIS CASE...". I'd say that's probably the most common response I've heard from Canadians, Brits, and Europeans when I've voiced my concern over this issue.

The other is the more American response: This couldn't happen here. We have the first amendment! It'll protect us. Regardless of what bluster you hear from the likes of Reich and other would-be censors. And that's also been my go-to calming pill for coping with this alarming rise in authoritarian censorship around the western world, given that I live here.

But then I think of the sixty years of peaceful rule under Jaehaerys. And how we're just one succession crisis away from losing rights that have been taken for granted for several generations of Americans. It sounds paranoid, I know. Because any dystopian vision of the future usually does from the vantage point of a present that hasn't yet tipped.

Yet just cause you're paranoid doesn't mean they're not after you. And I don't even think you have to be all that paranoid to see that they really are after free speech all over the west! From Canada to Britain, from Brazil to Brussels. The momentum is deeply authoritarian at the moment.

So I think you'd be right to worry that a grand turn of history is before us. But rather than throw up your hands, Private Hudson style, and say "Now what the fuck are we supposed to do? We're in some real pretty shit now, man!", I urge you to find your inner Corporal Hicks: "Let's move like we got a purpose".

The only way to defend free speech is to nuke the idea of benevolent censorship from orbit. Nobody has a monopoly on the truth, nobody can discern "misinformation" from truth consistently or without bias, and nobody can define "hate speech" in universally acceptable terms that don't recall blasphemy laws of centuries past. The alternative, betting on more speech to counter bad speech, isn't a guaranteed win every time, but it's by far the best option we've found so far.

For what it'll make of you

I've always had an ambivalent relationship with goals. I don't like goals that feel like checkpoints on a treadmill. They make you reach for a million dollars in revenue, celebrate for a second, and then turn the chase to five million the minute after. No thanks. But specific, material goals aren't the only kind you can set.

Here's a goal I remember setting that wasn't like that. I remember seeing Kent Beck -- the creator of eXtreme Programming and author of my favorite programming style guide of all time --  on the conference stage at JAOO 2003. I was mesmerized by Kent's command of the material and the audience. And I remember setting a goal of becoming as capable as that in the art of public speaking on technical topics.

I also remember setting the goal of participating in the 24 Hours of Le Mans after just a few years of getting behind the wheel of a racing car for the first time. I had barely become proficient enough to compete safely against other drivers, but had already progressed enough from the first time on track that I could extrapolate the trajectory. And have faith that I was going to get there.

Now there's a fine line between goals that are ambitious and goals that are delusional. If I had set a goal to be the fastest driver in the world at age 30, after only earning my driver's license at 25, I would have been delusional. Those odds wouldn't just be long, they'd be impossible. And I don't like to start a pursuit in vain.

Same too, if, after watching Kent Beck on stage, I'd set a goal to repeat the feat the next year. That would have been delusional. Not only does it take time and practice and skill to become that good of a public speaker, I also needed to become knowledgeable enough about my domain to have something interesting to talk about.

But in both cases, the racing and the speaking, I intuitively knew that it wasn't just about the destination. If it was, there'd probably be many shortcuts I could have taken. But I wanted to take the long road. For what it would make of me. Jim Rohn expresses this sentiment beautifully:

Set a goal that'll make you stretch that far.
For what it'll make of you to achieve it.

The greatest value in life is not what you get,
the greatest value in life is what you become.

It was not about winning a race, but about becoming a racer. It was not about giving a speech, but about becoming a speaker. Think about what you'd like to become more often than thinking about what you'd like to get.

We once more have no full-time managers at 37signals

After experimenting with a number of management roles over the last few years, 37signals is back to its original configuration: None. We once more have no full-time managers whose sole function is to organize or direct the work of others. Everyone doing management here does so on the side, next to their primary work as an individual contributor. Including Jason and me. And it works.

That's not to say that there's no managerial work at 37signals. We still do yearly performance reviews, have onboarding duties for new colleagues, schedule on-call coverage, and supervise junior work. But instead of gathering all those responsibilities with a few full-time managers, we've distributed the load among the senior staff.

This incentivizes a minimalist approach to management. Instead of lining up a recurring schedule of weekly one-on-ones, we drive status updates and check-ins by automated questions. Instead of assigning five-six-seven reports to a single person, we make every lead and principal programmer responsible for one mentee. 

With this distribution, there's no longer a fixed forty hours for managerial duties to expand into, and so we dodge Parkinson's Law. Because everyone we have doing a small share of the management would usually rather get back to their own work as quickly as possible. Some weeks that means the managerial overhead is literally zero, as everyone is just busy getting their work done, and nobody needs anything. 

You just can't do that in a traditional setup. It's impossible for a full-time manager to scale their interventions down to zero hours per week and still feel like they're doing their job.

Now I'm sure that this arrangement would be difficult to maintain at a grand scale. When you're dealing with hundreds or thousands of employees, you're naturally going to yearn for more structure and hierarchy. But that's not the case at around 60 people, which is where we are.

It's also not the case when you hire, mentor, and promote managers of one. People with both the competence and drive to set their own agenda and follow it autonomously. People who don't need weekly one-on-ones with a manager to stay on track. People who thrive on long stretches of uninterrupted time.

That's the kind of people we seek to hire and develop at 37signals. And I find that it's easier to steer that development when someone's primary manager is also their professional superior. You're likely to learn faster and frankly respect that person more.

This is the old Steve Jobs quip about "they knew how to manage, but they didn't know how to do anything" that followed his experience hiring professional managers rather than letting the best individual contributors do the management. I wouldn't go that far. I do think there are plenty of professional managers who've had solid experience "doing things", but I still think the general principle holds when they're not better at "doing things" than the people they're managing.

But even so, it's still a trade-off. I've found that the hardest part of the managerial burden to distribute broadly is dealing with conflict and poor performance. There are a lot of senior staff willing and able to help mentor and manage juniors and peers who are on a solid trajectory and acting as managers of one. It's a much smaller group who welcome dealing with reports who can't keep up or repeatedly get lost.

So for this to work, you have to realize the limitations of the manager-less arrangement. There's got to be someone else in the organization who can direct or even take over when an employee needs strong, repeated "redirecting feedback" (what a euphemism!). That's the deal. Those cases will have to bubble up to the top.

But is that really so bad? If cases of poor performance keep bubbling up, there's a fundamental problem somewhere in the process anyway, and that can usually only be definitively addressed by the top. Are we not hiring the right people? Are we unclear in our expectations? Are we too diffident in our feedback? These are root cause problems that require root cause reparations.

When all this works, the result is astounding. Small teams of highly competent managers of one can progress at an unbelievable pace. Left to just do their job, they get it done, and are in turn rewarded with that precious job satisfaction of really making a direct, personal difference. There's nothing better than shipping quickly alongside peers you respect highly.

After twenty-plus years of running a company like this, I can tell you that I no longer have any interest in working any other way. Spending a couple of years with a more traditional structure was a wonderful experience because it cemented this fact. We're going to make this company work without full-time managers or die trying.

Children of You

The birth rate is dropping all over the world. In some places, like South Korea (0.72), it is so low people are starting to worry about a national extinction. In other places, including all of Europe (average 1.5, Spain 1.29), it's merely bad and alarming. And nobody seems to know exactly why. 

Even in Denmark, it's now so low (1.5) that the government went from estimating a lack of 5,000 child care professionals to suddenly looking at a 2,000 person surplus by 2035. Plans for kindergartens and schools are being rewritten in anticipation of far fewer kids in the coming years. At 1.5, Denmark's population will shrink almost a fifth by 2050, and of course it'll then be a mix of much older people with far fewer working-age individuals to support them.

As a reminder, just to stay level, a country needs a birth rate of 2.1. But by continent, only Africa is consistently above that level. The UN keeps revising back the year they project that the world's population will peak at about 10 billion. In 2022, they said 2086. In 2023, that became 2084. And of course the demographic threat isn't just a shrinking globe, but that in the years leading up to that fact, the rate of old-to-young is going to be difficult for most countries to deal with.

That's all the relatively far future, though. And what we should have learned by now is just how hard that future is to predict. Denmark literally went from one year to the next predicting a massive shortage in child care professionals that needed political intervention to realizing they wouldn't need that after all.

But worse than being unable to predict the future, we don't even seem to understand the present. There's no comprehensive theory as to why this is happening all over the world and at the same time. There are a million theories, but the bulk of them seem mostly about advancing someone's political priors. And the vast majority of them fall apart when tested against the local facts of countries as different as Denmark, South Korea, Italy, and Canada.

It's the lack of religion! It's the mental overload for women! It's contraception! It's not enough maternity leave! It's too high pressure to achieve at work! It's the rise of narcissism! It's low testosterone! It's hook-up culture!

There's no shortage of explanations (or accusations).

I don't know why this is happening either, but I do know this: Before having kids, I did not understand what it would be like. That might sound obvious, but I think it's actually not. I understood what it was like to be a kid myself, yes. I understood what other people's children were like, yes. And I understood all the reasons why parents might often be stressed. But I did not understand the core emotional and even spiritual transformation that actually having my own children would induce.

And what a transformation that was! Or was for me, I guess I should caveat. People react in a myriad of ways to becoming parents, and let's just say not everyone accepts the burden with equal grace. But to me, it's been perhaps the single biggest shift in perspective and probably the most rewarding life experience possible.

Yet sentences like that, with words like "rewarding" or "transformation", do not even begin to convey the reality, because it's not possible to transmit its essence in words! I imagine it's like explaining "green" to a person who can't see color. Or the smell of the ocean to someone who's never been near a shore. Words will be very crude representations that will ultimately do little to represent reality.

But that's all we have! Words. So let me try a few different approaches. One is that the trolley problem is no longer a problem, let alone much of a moral dilemma. If that lever controls a train heading for my kids or a million strangers, it's not a hard choice at all. I don't even have to think about it for a second.

Same too if the lever is me versus them. No hesitation, no trepidation. I'll sacrifice myself with ease.

Maybe this is what some people experience being part of the military in an existential fight for a nation's survival. That they're ready to die for the greater cause. But even so, I'd be surprised if there wasn't at least a little bit of hesitation then!

And again, maybe this isn't how everyone feels about their children. In fact, clearly it's not. But I do think this is the most common feeling. An easy commitment to self sacrifice, and a willingness to do whatever it takes to keep them safe. That's a transcendental shift in experience.

Which may sound great, but shouldn't be mistaken for "happiness". Plenty of studies have shown that parents are actually the most "unhappy" in the prime child caring years, and it's not hard to imagine why. Starting with the lack of sleep, the ever-present degrees of worry, and the sudden, abrupt limits on "me time". 

It illustrates the difference between "meaning" and "happiness" perfectly. In almost all cases, children bring a deep, profound meaning to the life of those who have them. Happiness comes and goes, but the meaning of being directly connected by DNA to a human you brought into this world is everlasting.

Now there are a million good reasons for why having kids is great for the world (whatever the Malthusian #degrowth fanatics claim), but it's that selfish reason, the profound life experience and bond, that I think often slips out of the conversation, and thus consciousness of prospective parents. It's easy to describe in words how having less time, less money, less freedom sounds like a raw deal. It's very hard to describe the purpose granted by parenthood. That asymmetry clearly isn't helping the case!

So that's why I'm trying to make it. The case for having children. To not let the asymmetry of explainable cons tip the scale over unexplainable pros. Contributing with your own DNA to the continuation of the species, and watching your own flesh and blood grow up, is a sublime experience. And while not accessible to all due to medical issues or otherwise, it's accessible to most, and I think you'll deeply regret not partaking if able and capable. I know I would have.

Merchants of complexity

It's hard to sell simple, because simple looks easy, and who wants to pay for that? Of course, everyone says they want something simple, but the way they buy reveals that they usually don't.

This is the secret that the merchants of complexity have long since figured out: that clever and sophisticated beat basic and straightforward most days in the market. Since both clever and sophisticated imply something special, and only what's special commands the premium dollar.

Deep down, that's what most people want. To feel special. That's far more important than merely purchasing a solution. Basic, cheap, or even free options are for the common dolt, with simple needs and simple problems, goes this wicked intuition. Few people have the courage to admit their life and work isn't that complicated.

You see this syndrome all over the tech industry. Basic problems people could easily solve for themselves, cheaply and quickly, getting turned into scary and insurmountable challenges that only a sophisticated solution (usually on a subscription!) will cure.

Because that's the other side of this. There are rarely high margins in actually selling someone something that they then own. Much better to rent it to them. They'll own nothing and you'll convince them they should be happy.

By what sorcery? Fear, mostly. Vanity, sometimes. Sloth, occasionally. Pride, definitely. The more insecurities the merchants of complexity can trigger, the easier the sell.

But the spell only binds as long as you choose to believe in it. If you decide tomorrow that all this mass and weight and expense isn't worth it, it won't be. It's that simple and that hard.

Whether that means daring to connect a computer to the internet, dumping that micro-services monstrosity, ditching Slack, or going #nobuild, it's all within your power.

So what are you waiting for? Dare to be basic, babe.

Software estimates have never worked and never will

Since the dawn of computing, humans have sought to estimate how long it takes to build software, and for just as long, they've consistently failed. Estimating even medium-sized projects is devilishly difficult, and estimating large projects is virtually impossible. Yet the industry keeps insisting that the method that hasn't worked for sixty years will definitely work on this next project, if we all just try a little harder. It's the definition of delusional.

The fundamental problem is that as soon as a type of software development becomes so routine that it would be possible to estimate, it turns into a product or a service you can just buy rather than build. Very few people need to build vanilla content management systems or e-commerce stores today; they just use WordPress or Shopify or one of the alternatives. Thus, the bulk of software development is focused on novel work.

But the thing about novel work is that nobody knows exactly what it should look like until they start building. For just as long as the software industry has been failing to estimate the work, it's also been deluding itself into thinking that you can specify novel work upfront, and produce something people actually want.

Yet we've also tried that many times before! And nobody cared for the outcome. Because it invariably didn't end up solving the real problems. The ones you could only articulate after building half of a wrong solution, changing direction, and then coming up with something better.

It's time to accept this. Smart programmers have tried for decades, and they have repeatedly failed, just as folks fail today, when we try to cut against the grain of human ingenuity, and insist that software needs estimation.

The solution is not to try harder nor to hope that this time is somehow different. It's to change tactics. Give up on estimates, and embrace the alternative method for making software by using budgets, or appetites, as we call them in our Shape Up methodology.

It turns out that programmers are actually surprisingly good at delivering great software on time, if you leave the scope open to negotiation during development. You're not going to get exactly what you asked for, but you wouldn't want that anyway. Because what you asked for before you began building was based on the absolute worst understanding of the problem.

Great software is the product of trade-offs and concessions made while making progress. That's how you cut with the grain of human nature. It's the core realization that's been driving us for decades at 37signals, and which has resulted in some wonderful products built by small teams punching way above their weight. We've incorporated it into Shape Up, but whether you use a specific methodology or not, giving up on estimates can help you ship better and sooner.

Where at least I know I'm free

I used to find the American self-image of being this uniquely freedom-loving, freedom-having people delusional. Sure, I'd think, you're not North Korea or Venezuela, but is that really a standard worth celebrating? Shouldn't America compare itself to higher alternatives, like Europe or even the rest of the Anglosphere? Turns out I just needed to wait a little longer for the patriotic hymns to begin ringing true.

Witness what's going on in both the UK and the EU at the moment. There's a shockingly draconian crackdown on "misinformation" and "hate speech", the two stalwart euphemisms for "speech we don't like", going on in Britain in particular. Fast-track tribunals have been set up to hand out unbelievably harsh sentences for such terrible offenses as "anti-establishment rhetoric", criticisms of mass migration, and "obscene gesticulations at the police".

Yes, all this is happening in the context of violent riots. But weren't The Enlightened Ones supposed to recognize the dangers of using a crisis to push through police-state measures? Wasn't this what so many on the left were rightfully up in arms about regarding The Patriot Act in the US?

Meanwhile, the EU is threatening the owner of X, Elon Musk, with severe consequences if he does not abide by its arbitrary definition of what defending us all against "misinformation" and "hate speech" looks like. And the new Digital Markets Act, which I've in the past applauded for its attempt to counter tech monopolies, is being wielded as the cudgel for this authoritarian crackdown. 

This is all very much in line with the authoritarian preview we got during the pandemic from Canada, which also pushed beyond the democratic Rubicon, and moved to shut down the bank accounts of people who donated to protesters, among other draconian injunctions.

Which brings me back to America, and specifically the song "God Bless the USA". My wife grew up singing this song in school. And I always thought that wasn't only rather camp, but damn near indoctrination. But that sure does look different now.

I'm going to quote the main bit in full, just so we're all on the same campy page about this song:

If tomorrow all the things were gone
I worked for all my life
And I had to start again
With just my children and my wife

I thank my lucky stars
To be living here today
'Cause the flag still stands for freedom
And they can't take that away


And I'm proud to be an American
Where at least I know I'm free
And I won't forget the men who died
Who gave that right to me
And I'd gladly stand up next to you
And defend Her still today
'Cause there ain't no doubt
I love this land
God Bless the U.S.A.

I mean, I still smirk. Especially when watching the music video and its kitsch depiction of American life. And yet - AND YET - it somehow now does ring poetically true. 

America's first amendment, the constitutional right to free speech, including the right to "misinformation" and even "hate speech", has proven a surprisingly effective and resilient bulwark against the new rise of censorship and blasphemy laws gaining steam across the Atlantic.

That's what the song is on about with "where at least I know I'm free"!

Now, I'm not an American. But I sure am becoming a lot more appreciative of the core principles my adopted country was founded on. (While keeping my eyes wide open to all its trade-offs and contradictions.)

Americans should be proud of these principles. Whatever the age, there'll always be blasphemous talk, offensive jokes, mean insults, and wild conspiracy theories (that occasionally turn out to be true!). I'd much rather live in a country that embraces everyone's right to BE FULL OF SHIT than one that pretends it can declare a priori what's true and what's false or one that makes false equivalences between violence and speech.

The Framework 13 has a new high-res screen!

The first laptop I ordered back when my Linux journey began was the Framework 13. I immediately liked a lot about it. The keyboard is a big step up over the MacBook Pro, primarily because of the 50% longer key travel. And I love the matte screen and 3:2 aspect ratio. Both feel way nicer for programming. 

But running a 2256x1504 resolution on a 13.5" screen means a PPI of just 200, which is only barely acceptable for a crisp-font snob like myself, and below both the MacBook Air at 224 and the MacBook Pro at 254. This, more than even the difference in battery life, was why I kept looking around at other PC laptops.
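These pixel-density figures fall straight out of the Pythagorean diagonal: PPI is the diagonal pixel count divided by the diagonal size in inches. A quick sanity check of the numbers above (the resolutions and 13.5" panel size come from the text; the helper function itself is just an illustration):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal length in pixels divided by diagonal inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# Original Framework 13 panel: 2256x1504 at 13.5" is roughly 201 PPI
# (rounded down to "200" in the text above).
print(round(ppi(2256, 1504, 13.5)))  # → 201

# New panel: 2880x1920 at 13.5" is roughly 256 PPI.
print(round(ppi(2880, 1920, 13.5)))  # → 256
```

The same formula gives the MacBook figures cited above: a 13.6" Air at 2560x1664 lands near 224 PPI, which is why the old Framework panel trailed it and the new one pulls ahead.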

I tried options from Tuxedo, Samsung, and Dell, but while all of them had something solid to offer, the entire package never spoke to me like the Framework did. And while I kept looking for something else, I realized that the Framework was really growing on me. Especially after figuring out that I could get a very enjoyable setup running at 200% resolution but at a 0.8 font sizing with Linux.

And after a few weeks, I accepted that the slightly less crisp screen was worth the sacrifice to enjoy the rest of the amazing Framework package. And even if you could have offered me the MacBook Pro's 254 PPI display on this machine, I wouldn't have done the swap, after the 3:2 aspect ratio and matte finish won me over.

But now Framework has fixed it! They've just released a new 13.5" screen running a 2880x1920 resolution for a MacBook Pro-beating 256 PPI! And it's still matte and 3:2! AND 120HZ! Hallelujah!

What's even cooler is that this new screen can be retrofitted to existing Framework laptops. And that's exactly what I've done. It's a $269 upgrade, so it's not cheap, but it's a helluva lot cheaper than buying a whole new computer. Not to mention way less wasteful.

The installation is involved enough that you feel like you've made a real personal contribution to your hardware, but without feeling complicated or onerous. I probably had the swap done in about five minutes. No special tooling needed. Very cool.

And the screen itself is just wonderful. The text is considerably crisper than a MacBook Air, and fully on par with a MacBook Pro. The response time has also been improved over the last Framework screen, so where really keen eyes could detect some ghosting on fast moving objects with the old display, like a white mouse hand on a black background, that's totally gone now. Oh, and you no longer need the 0.8 text hack for Linux to look right at 200%.

Now, it's not totally perfect. There's a small battery hit if you run it at 120Hz on battery, because Linux still only has experimental support for variable refresh rates, and in my testing, it still needs more work (but apparently the next version of Gnome will sort this). But you can just set it to 60Hz when on the go, and you'll still get those roughly 6 hours of mixed use from the 61Wh battery.

As you can tell, I really like this new screen. It takes the last reservation I had about the Framework 13 and turns it into a key strength. In fact, I'd go so far as to say that this is the best looking laptop screen I've ever used for programming. The combination of 256 PPI, matte covering, 3:2 aspect ratio, and 120hz is amazing for working with text.

Now I'm sure a video editor would still prefer the MacBook Pro's 16:10 aspect ratio, glossy display, and color-corrected profile. But for programmers? The Framework is better.

And it's such a good deal in its AMD version. You can get a 7640U with the new screen, 32GB of RAM, and 1TB of storage for just around $1,200. Even slightly cheaper if you just get the RAM and storage from Amazon. That's a ton of amazing computer for the money (and almost half of what a MacBook Pro with 24GB of RAM and 1TB of storage costs!). It's what I use as my daily-driver laptop.

Good job, Framework!

framework-13-new-screen.jpg

Cookie banners show everything that's wrong with the EU

Companies have spent billions on cookie banner compliance only to endlessly annoy users with no material improvement to their privacy, but this unsightly blight is still with us (and the rest of the internet!). All because the EU has no mechanism for self-correcting its legislative failures, even with years of evidence in the bag. The bureaucratic maze almost guarantees that all the noble intentions eventually find a dead end in which to get stuck. What a waste. 

It's this example that all enthusiasm for European tech legislation should be set against. The fact that cookie banners still exist. And that the bureaucratic drag they have created across Europe (and the rest of the world) is ludicrously disproportionate to whatever theoretical value these awful overlays actually provide. 

It makes it hard to cheer for even much-needed antitrust action, like the Digital Markets Act. Because if the EU can screw up something as basic as cookie banners this badly, and prove so incapable and uninterested in recalibrating its approach, what hope is there for the thornier issues?

What made me think of cookie banners today was the discussion about Europe's new AI legislation. Which even the architect of the law is now having second thoughts about. Because like many modern European laws, it's a vague, posturing, and suffocating mess that's way too specific about things it has shown no real ability to understand.

It's also just an embarrassing illustration of Europe's third-tier status in technology. So little of the core tech innovation that's driving the future, in AI or otherwise, is happening in Europe. And when it does happen there, it's usually just incubation, which then heads for America or elsewhere as soon as it's ready to make a real impact on the world.

Europe is in desperate need of a radical rethink on how it legislates tech. The amount of squandered potential from smart, capable entrepreneurs on the old continent is tragic. It needn't be like this. But if you want different outcomes, you have to start doing different things.

Finding acoustical delight in THE THOCK

Before diving into the world of mechanical keyboards, I'd never heard the word "thock" before. But I soon learned that it describes one of those strangely seductive sounds you can produce from pressing the keys on a keyboard tuned for acoustical joy. And now, dammit, I've acquired a taste for this type of ear candy, and I can't stop smiling as I type with a tune.

Thock isn't the only type of sound that connoisseurs of keyboard acoustics clamor for, mind you. There's also the clack, and even the creamy. But I think it's fair to say that thock is perhaps the most sought after. It's hard to describe exactly what it sounds like with words, so if you're curious, check out this video from Hipyo Tech -- a YouTuber with more than a million followers who just reviews keyboards! This is one where he reviews the Lofree Flow, the keyboard I'm typing these very words on.

But even a video doesn't quite do it justice, because half the satisfaction of hearing that thock comes from it being a response to your fingers dancing across the keyboard. You're making this odd music simply by writing an email or fixing some code. Like a snake charmer mesmerizing a serpent.

Now mechanical keyboards have been a niche hobby for programmers for a long time, but I never really cared to give them a proper inquiry before. I was happy with the standard Apple Magic Keyboard for many, many years. Hell, I was even happy with the much-despised Magic Mouse. It just didn't seem like a problem that was worth any attention to solve.

But there goes the misconception. Mechanical keyboards don't solve any problems. In fact, if anything, they bring quite a few with them! Batteries usually don't last as long, in part because many of them have RGB lights, and they can be annoyingly loud if you're around others. Yet that misses the point. Taking greater joy from such a mundane activity as typing is one of those rare daily treats that just makes life a little better, a little more interesting.

It's also a benign rabbit hole of lingo, knowledge, aesthetics, and preference. Kinda like mechanical watches, but mercifully more affordable. While you can spend hundreds of dollars on a single luxury mechanical keyboard, there's a huge variety of incredible models available from around $100-$200. (I wish more of my rewarding hobbies were that light on the wallet!).

And as you dive deeper, you learn not just about the difference between thock and clack. But about pre-lubed stabilizers, gasket flex, 2.4GHz polling-rate advantages, key-cap designs, north- vs south-facing LEDs, tactile vs linear switches, and about a hundred other nerdy details that go into making a great mechanical keyboard. I absolutely love leveling up on a new domain like this!

What I also love is the incredible display of capitalism in this space. I can scarcely believe the variety of choice for something so niche as a mechanical keyboard! There are literally hundreds of manufacturers all competing on nailing these nerdy details, and they're doing it at ever more affordable price points. The sheer variety and depth of this market would make Adam Smith blush.

You can see this in the evolution over just the past few years. It used to be that to get a truly delightful mechanical keyboard, you had to build it yourself. Which meant buying an empty shell of a board, picking just the right switches, inserting those switches, then getting some lube to make said switches properly smooth, and on, and on. And you can still do that, of course, but the prebuilt boards that are now available for just a few hundred dollars at most have caught up with where the keyboard nerds wanted the typing world to be all along.

Many of you may already be deep into the cave of mechanical keyboards and chuckle at what took me so long to discover the entrance, but that's exactly why I'm writing this. I had a few false starts in the mechanical keyboard world with boards from Keychron that I didn't really enjoy, so I just thought this wasn't a world for me. And I'm sure plenty of others either did the same or were turned off by the prospect of spending hours putting their own bespoke board together.

So allow me to recommend the two boards that turned my opinion around: The near-silent Varmilo Milo 75 and the creamy, thocky Lofree Flow 84. Both look amazing, feel amazing, and make a mockery of the $100 Apple Magic Keyboard that I thought was all I ever needed for so long. If you make a living from typing on a computer all day, I think you owe it to yourself to see if a mechanical board might not just add a bit of extra delight to your daily grind. I'd be surprised if it didn't.

varmilo-milo-75.jpg

Living with Linux and Android after two decades of Apple

It now seems laughable that only a few months ago, I was questioning whether I'd actually be able to switch off the Apple stack and stick to my choice. That's what two decades worth of entrenched habits will do to your belief in change! But not only was it possible, it's been immensely enjoyable. What seemed so difficult at first now appears trivial, since it's been done.

That's not a critique of the Apple ecosystem per se. Apple continues to make great hardware and pretty good software too. And I think accepting that fact from the get go actually makes sticking to a switch more likely to succeed. Falling in love with something new is a better and healthier motivator than hating something old.

That's been my experience with Linux in particular. I've come to love the setup I've enshrined in the Omakub project. It does a whole host of things much nicer than the old macOS walled garden experience, on top of the fact that running an open source operating system simply feels right for someone who owes their entire career to open source. It's a positive vision: I really, really like what Linux has to offer these days.

It's a similar feeling, albeit not as strong, with the switch to Android. I don't have any illusions that Google is any gentler of a corporate giant than Apple. The two have very similar ideas about extracting monopoly rents from their respective app stores. But the slogan that Android is more open is actually true.

Now plenty of people don't really care about that openness, and that's fine. But I do. I care about being able to download the Fortnite APK straight from Epic Games, and being able to play our favorite family game with the kids, without having to ask someone for permission. Sure, it could be a bit more convenient if the game was available in the Play Store, but spending another two minutes doing the direct install is nothing against the hundreds of hours we've spent playing since.

The same is true in terms of customization. I'm running this beautiful, minimal launcher called olauncher, which turns Android into a far less addictive mobile experience by replacing icons and app drawers with a simple set of text links. It's great. And it's the kind of stuff you can only really do properly on Android, because replacing the default launcher is possible.

Getting out of the Apple rut has also led me to discover an amazing new world of both software and hardware. Solutions that I just wasn't in the market for after settling into an Apple groove over the years. Since switching to Linux, I've picked up Neovim as my new editor of choice, I've fallen in love with the Framework 13 laptop, and recently, I've even gotten into mechanical keyboards (the current choice being the NuPhy Air75).

That's what a change of scenery can do for you. Force you to open your eyes to what else is out there. And in turn introduce you to new ways of doing and being. That's a gift in and of itself.

Now I'm not telling you any of this to convince you to give up your Mac or your iPhone. People who love their Apple gear aren't going to be convinced by an alternative positive vision because they're simply not in the market for one. That's where I was for a long time. Just not interested.

I'm telling you this in case your Apple relationship isn't quite so hunky dory. In case their recent trajectory and relationship with developers might have been bugging you too. Because it's in that situation you really need to know that the alternatives are not just present, they're good. Their value isn't defined as not-Apple. There's no valor in that.

Linux is wonderful, is flawed, is messy, is beautiful, is nerdy, is different. Android is customizable, is open, is fragmented, is less polished, is experimental. All the pros and the cons are true at the same time. 

And it's a compelling adventure to discover whether the trade-offs speak to you. Don't be afraid to take the trip, but give it at least two weeks (if not two months!), and don't think of the journey as a way to find the same home in a different place. Be open to a new home, in a new way, in a new place. You might just like it.

Visions of the future

Nothing gets me quite as fired up as discovering the future early and undistributed. That feeling of realizing that something is simply better, and the only reason it hasn't taken off yet is because the world hasn't realized it. It's amazing, and it's how I'm feeling about Linux right now. That "how did I not know it was this good" sensation.

I felt the same way about the Mac back in 2001. And Ruby in 2003. And company chat with Campfire in 2005. And, fast forward, now with #nobuild, Hotwire, and even exiting the cloud. When I discover a path that seems like a clear shortcut, it doesn't really matter if it's poorly paved at first. As long as it appears to take us somewhere better, laying the bricks and clearing the brush is incidental.

It's the vision of that end state, of how what's right in front of us now, rough and unpolished as it may be, can be transformed if we put in the effort, that inspires me to keep going.

Yes, half the fun is the adventure. We should always be pushing toward new horizons, even if some of them inevitably will end up in dead ends. But the other half is watching things genuinely compound for the better.

Take web development. It's incredible how much conceptual complexity we've been able to compress in the last decade, and particularly in the last half of a decade. It wasn't just one thing, it was all the things. It was browsers getting better, mobile CPUs getting faster, #nobuild becoming possible, Hotwire showing an alternate route. Each substantial, yes, but together epoch altering. A new dawn.

Following such a sense of wanderlust requires a certain disagreeableness. Even arrogance. A steadfast belief that it's possible that you might actually have found a better way. Whether that turns out to be true or not. You have to believe that it's possible. That the marketplace of ideas isn't perfectly efficient or perfectly rational. That it hasn't priced it all in, and that you could invest in upcoming concepts for an intellectual profit.

You're never going to be right about everything, but a life spent without taking at least a few bets on being early on an idea is one not lived to the fullest. Dare a little. Roll the dice every now and then. Come along for an adventure whether it's heads or tails.

Introducing Omakub

Linux can look and feel so good, but it often doesn't out of the box. It's almost like there's a rite of passage in certain parts of the community where becoming an expert in the intricacies of every tool and its theming is required to prove you're a proper nerd. I think that's a bit silly, so I created Omakub: An opinionated web developer setup for Ubuntu.

Omakub turns a fresh Ubuntu installation into a fully-configured, beautiful, and modern web development system by running a single command. No need to write bespoke configs for every essential tool just to get started or to be up on all the latest command-line tools. Omakub is an opinionated take on what Linux can be at its best.

Omakub includes a curated set of applications and tools that one might otherwise only discover through hours of watching YouTube, reading blogs, or just stumbling around the Linux internet. All so someone coming straight from a platform like Windows or the Mac can immediately start enjoying a ready-made system, without having to do any configuration and curation legwork at all.

This isn’t a project for someone already versed in the intricacies of NixOS or relishing a fresh install of Arch. It’s using vanilla Ubuntu because that’s one of the most widely adopted Linux distributions, and one that is even a pre-install option from many computer vendors. But while Ubuntu has a great package manager in apt, many of the tools that developers want either haven’t been packaged, need more recent versions than what has been frozen in the LTS, or need post-install actions for the best operation. Omakub includes all the scripts needed.

But package management is only half the battle of getting a great development experience going on Linux. The other half lies in the dotfiles that control the configuration. Linux gets great power from how customizable it is, but that also presents a paradox of choice and a tall learning curve. Having good, curated defaults that integrate all the many tools into a coherent look and feel can help more developers acquire a taste for Linux, which may later inspire a fully bespoke setup (or not!).

Nothing in Omakub provides solutions to problems you couldn’t also solve a million other ways. The main benefit is in The Omakase Spirit. The idea that an entire setup experience can benefit from being tailored upfront by someone with strong opinions about what works and looks good together. This doesn’t make the choices necessarily better than other choices. Linux has inspired a million options for a million tastes. That’s great and worthy of celebration. But there’s a large constituency of developers who are more than willing to trade ultimate bespoke customization for a cohesive package of goods, at least until they understand what all the options are and have fully bought into making the switch to Linux.

Omakub is for all these future Linux users.


Why I retired from the tech crusades

When Ruby on Rails was launched over twenty years ago, I was a twenty-something programmer convinced that anyone who gave my stack a try would accept its universal superiority for solving The Web Problem. So I pursued the path of the crusade, attempting to convert the unenlightened masses by the edge of a pointed argument.

And for a long time, I thought that's what had worked. That this was why Ruby on Rails took off, became one of the most popular full-stack web frameworks of all time, inspired countless clones, and created hundreds of billions in enterprise value for companies built on it. But I was wrong. It wasn't the crusade that did it.

Since those early days, I've talked to thousands of programmers who adopted Ruby on Rails back then, and do you know what virtually every one of them cites? That original 15-minute blog video. Which didn't contain a single comparison to other named solutions or specifically pointed arguments against alternatives. It just showed what you could do with Ruby on Rails, and the A/B comparison automatically ran inside the mind of every programmer who was exposed to it.

That's what did it. Showing something great, and letting those who weren't happy with their current situation become inspired to check it out. Because those are the only people who are able to convert to your cause anyway. I've never seen someone who was head-over-heels in love with, say, functional programming be won over by arguments for object-oriented programming.

You simply can't dunk someone into submission, and it's usually counterproductive if you try. But you can absolutely attract people who aren't happy with their current circumstances to give an alternative a chance, if you simply show them how it works, and allow them to conclude by themselves how it would make their programming life better.

What I've also come to realize is that programmers come in many different intellectual shapes and sizes. Some of those shapes will click with functional programming, and that'll be their path to passion. Others will click with vanilla JavaScript, and be relieved to give up the build pipelines. Others still will find their spirit in Go. This is great. Seriously. The fact that working on the web allows for such diverse ecosystem choices is an incredible feature, not a bug.

I found my life's work and passion in Ruby. I have friends who've found theirs in Python or Elixir or PHP or Go or even JavaScript. That's wonderful! And that's really all I want for you. I want you to be happy. I want you to find just that right language that opens your mind to the beautiful game of coding in your most compatible mode of conception, as Ruby did for me.

This is not the same as just saying "everything has trade-offs, use what works best". That to me is a bit of a cop out. There is no universal set of trade-offs that'll make something objectively "work best". Half the programming conundrum lies in connecting to an enduring source of motivation. I wouldn't be a happy camper if I had to spend my days programming Rust (but I LOVE so many of the tools coming out of that community by people who DO enjoy just that).

It also doesn't mean we should give up on technical discussions of advantages or disadvantages, but I think those are generally more effective when performed in the style of "here's what I like, why I like it, so look at my code, my outcomes, and see if it tickles your fancy too".

Programming is a beautiful game. I would give up all the fancy cars I have in a heartbeat, if I was made to choose between them and programming. The intellectual stimulation, the occasional high from hitting The Zone, is such a concrete illustration of Coco Chanel's "the best things in life are free, the second best things are very expensive". Programming is one of those "best things" that is virtually free to everyone in the Western world (and increasingly so everywhere else too). 

So let's play that beautiful game to the best of our ability, in the position that flatters our conceptual capacities the most, and create some wonderful code.

Linux as the new developer default at 37signals

For over twenty years, the Mac was the default at 37signals. For designers, programmers, support, and everyone else. That monoculture had some clear advantages, like being able to run Kandji and macOS-specific setup scripts. But it certainly also had its disadvantages, like dealing with Apple's awful reliability years, and being cut off from experiencing the web the way half our Basecamp customer base sees it by default (since they're on Windows!). Either way, it's over. Apple is no longer the exclusive default at 37signals. Going forward, we're working to make Linux the default for developers and system operators (and welcoming Windows back in the mix for accounting/marketing).

I've personally been having a blast over the last few months digging deeper and deeper into the Linux rabbit hole, and it's been a delight discovering just how good it's become as a developer platform. Not one without its flaws, obviously, but an incredible proposition none the less.

This has left me with little interest in going back to a commercial operating system as a daily development driver. My entire career has been spent in the service and sun of open source, both as a contributor and a beneficiary, and closing the loop with a desktop operating system is very satisfying.

Default doesn't mean edict, though. First of all, we have a great mobile team that simply needs to be on Apple hardware to develop for Apple platforms. That's not changing. Neither is the fact that some people will have a strong personal preference to stick with the Mac. Totally fine too.

But defaults still matter. Along with assumptions about what's supported and how well. And changing our default to Linux sets a new tone, as well as affords us the institutional weight to support companies like Framework with our business. I love voting with my dollars for more of a future I'd like to see, and Framework represents just that.

The end result will be a company that has people running Mac, Windows, and Linux. Which is great, both from the perspective of living how your customers do, and for escaping the trapped feeling of a monoculture built around an Apple that we, and many other developers, are increasingly at odds with.

None of this has to be binary. Hate/love. Yes/no. Sure, Apple has evolved into a company that's much harder to recommend for people who care about the future of computing, but they still make great hardware, and the M-chip revolution will continue to benefit anyone who likes computers.

So I haven't crushed my MacBook in defiance. We won't be wholesaling out our existing fleet of Apple machines either. But going forward, we'll spend more of our money and attention on platforms that align better with the independence and freedom we so cherish in all other aspects of our business.

The year of Linux on the desktop. Who would have thought!

linux-desktop.png

Beautiful motivations

Programmers are often skeptical of aesthetics because they frequently associate it with veneering. A thin sheen of flashy marketing design covering up for a rotten or deficient product. Something that looks good from afar, but reveals itself to be a disappointing imitation up close. They're right to be skeptical. Cheap veneers are the worst. But discarding the value of aesthetics on account of cheap imitations is a mistake.

Not just because truly beautiful objects and concepts inevitably reveal a deeper and designed experience. That's the whole "I'm writing you a long letter because I didn't have time to write you a short one". Making something beautiful takes extra steps. Steps that are commonly also associated with extra care for the rest of the creation.

No, the primary reason I appreciate aesthetics so much is its power to motivate. And motivation is the rare fuel that powers all the big leaps I've ever taken in my career and with my projects and products. It's not time, nor even attention. It's motivation. And I've found that nothing quite motivates me like using and creating beautiful things.

I don't think that would come as any surprise to people of the past. The history of creation is in part a tale of pursuing beautiful outcomes and rewards. But in our age, we've managed to deconstruct and problematize so much of what is self-evidently beautiful that it's harder to take the chase for granted.

It's in the context of this age that I labor for programmers to rediscover beauty. Beautiful code, beautiful patterns, beautiful tools. Not to create a single monoculture of aesthetics. That's never going to happen. But to elevate the work of making things look not just good, but sublime. To revel in it, to celebrate it.

And beauty isn't binary. It's the journey of a thousand little decisions and investments in making something marginally prettier than it was before. To resist the urge to just make it work, and not stop until you make it shine. Not for anyone else, even, although others will undoubtedly appreciate your care. But for yourself, your own motivation, and your own mission.

System tests have failed

When we introduced a default setup for system tests in Rails 5.1 back in 2016, I had high hopes. In theory, system tests, which drive a headless browser through your actual interface, offer greater confidence that the entire machine is working as it ought. And because they run in a black-box fashion, they should be more resilient to implementation changes. But I'm sad to report that I have not found any of this to be true in practice. System tests remain as slow, brittle, and full of false negatives as they were a decade ago.

I'm sure there are many reasons for this state of malaise. Browsers are complicated, UI driven by JavaScript is prone to timing issues, and figuring out WHY a black-box test has failed is often surprisingly difficult. But the bottom line for me is that system tests no longer seem worth the effort the majority of the time. Or said another way, I've wasted far more time getting system tests to work reliably than I have seen dividends from bugs caught.

Which gets to the heart of why we automate testing. We do it for the quick feedback loop on changes, we do it to catch regressions, but most of all, we do it to become confident that the system works. These are all valid goals, but that doesn't mean system testing is the best way to fulfill them.

Now I'm not advocating you throw out all your system tests. Just, you know, probably most of them. System tests work well as the top-level smoke test. Their end-to-end'ness tends to catch not problems with the domain model or business logic, but the kind of configuration or interaction issue that prevents the system from loading correctly at all. Catching that early and cheaply is good.

The stickiest point, however, is not testing business logic, which model and controller tests do better and cheaper, but testing UI logic. Which means testing JavaScript. And I'll say I'm not sure we're there yet on the automated front. 
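As a sketch of why model tests are the cheaper tool for business logic: a plain, in-process test exercises the logic directly, with no browser, no JavaScript, and no timing issues. The `Topic` class and its methods here are made up for illustration, not taken from HEY or Rails.

```ruby
require "minitest/autorun"

# Hypothetical domain model standing in for real business logic.
class Topic
  def initialize
    @flags = []
  end

  def flag!(reason)
    @flags << reason
  end

  def flagged?
    @flags.any?
  end
end

# A fast, deterministic model test: a failure here points straight at the
# logic that broke, instead of at some flaky browser interaction.
class TopicTest < Minitest::Test
  def test_flagging_marks_the_topic
    topic = Topic.new
    topic.flag!("spam")
    assert topic.flagged?
  end
end
```

Tests like this run in milliseconds and don't need a browser driver at all, which is what makes them so much cheaper to keep reliable than their system-test equivalents.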

The method that gives me the most confidence that my UI logic is good to go is not system tests, but human tests. Literally clicking around in a real browser by hand. Because half the time UI testing is not just about "does it work" but also "does it feel right". No automation can tell you that.

HEY today has 300-odd system tests. We're going through a grand review to cut that number way down. The sunk cost fallacy has kept us running this brittle, cumbersome suite for too long. Time to cut our losses, reduce system tests to a much smaller part of the confidence equation, and embrace the human element of system testing. Maybe one day we can hand that task over to AI, but as of today, I think we're better off dropping the automation.

Paranoia and desperation in the AI gold rush

I've never seen so much paranoia in technology about missing out on The Next Big Thing as with AI. Companies seem less excited about the prospects than they are petrified that it's going to kill them. Maybe that fear is justified, maybe it's not, but what's incontestable is the kind of desperation it's leading to. Case in point: Slack.

So Salesforce just announced that they'll be training their Slack AI models on people's private messages, files, and other content. And they're going to do so by default, lest you send them a specially formatted email to feedback@slack.com. I mean, really, feedback? It's the kind of process that invites a quip about some knuckle-sandwich feedback to their face. But I digress. 

Presumably this is because some Salesforce executives got the great idea in a brainstorming sesh that the way to catch up to the big players in AI is by just ignoring privacy concerns altogether. If you can't beat the likes of OpenAI in scanning the sum of public human knowledge, maybe you can beat them by scanning all the confidential conversations about new product strategies, lay-off plans that haven't been announced yet, or private financial projections for 2025?

I mean imagine the delight some CEO might feel when they start typing out the announcement to lay off 30% of the workforce, and Slack autocompletes the text with the most anodyne distillation from five competitors doing the same? All you have to do is edit out, say, Asana in your layoff completion, and voila, you'll have saved at least 8 minutes typing out the corporate slop yourself.

Whether the vision of that gleams bright or dystopian probably depends on how well your inner compass is tuned to the kind of AI KPIs pushing product managers in charge of acquired chat products at large tech companies.

But the more interesting point to me is what this says about the broad paranoia and desperation in the AI gold rush. Things are moving fast enough that we'll probably see more such flagrant transgressions of trust and privacy, if there's even a sliver of a chance that it can provide an edge in the race for a better chatbot. Buckle up!

Open source is neither a community nor a democracy

Using open source software does not entitle you to a vote on the direction of the project. The gift you've received is the software itself and the freedom of use granted by the license. That's it, and this ought to be straightforward, but I repeatedly see that it is not (no matter how often it is repeated). And I think the problem stems from the word "community", which implies a democratic decision-making process that never actually existed in the open source world.

First of all, community implies that we're all participating on some degree of equal footing in the work required to further the welfare of the group. But that's not how the majority of open source projects are run. They're usually run by a small group of core contributors who take on the responsibility to advance the project, review patches, and guard the integrity of the vision. The division of labor isn't even close to being egalitarian. It's almost always distinctly elitist.

That's good! Yes, elitism is good, when it comes to open source. You absolutely want projects to be driven by the people who show up to do the work, demonstrate their superior dedication and competence, and are thus responsible for keeping the gift factory churning out new updates, features, and releases. Productive effort is the correct moral basis of power in these projects. 

But this elitism is also the root of entitlement tension. What makes you think you're better than Me/Us/The Community in setting the direction for this project?? Wouldn't it be more fair if we ran this on democratic consensus?? And it's hard to answer these questions in a polite way that doesn't aggravate the tension or offend liberal sensibilities (in the broad historic sense of that word -- not present political alignments).

So we usually skirt around the truth: that participants in an open source project do not contribute equally, in either volume or value, and this discrepancy is the basis of the hierarchical nature of most projects. It is not, and never will be, one user, one vote. That is, it will never be democratic. And this is good!

The democratic ideals are fulfilled by the fact that open source is free and full of alternatives. Don't like how they're running a given project? Use one of the usual countless alternatives. Or start your own! Here, you can even use the work of a million projects that came before you as a base for doing new work.

But the reason this doesn't resolve the tension is that it still relies on showing up and doing the work. And there just so happens to be far fewer individuals willing and capable of doing that than there are individuals who wish they had a say on the direction of their favorite software.

You can't solve that tension, only acknowledge it. I've dealt with it for literally twenty years with my work on Rails and a million other open source projects. There's an ever-latent instinct in a substantial subset of open source users that will continuously rear itself to question why it's the people who do the most work or deliver the most value or start the most projects that get to have the largest say.

And when people talk about open source burnout, it's often related to this entitlement syndrome. Although it's frequently misdiagnosed as a problem of compensation. As if begging for a few dollars would somehow make the entitlement problem bearable. I don't think it would. Programmers frequently turn to the joy of open source exactly because it exists outside the normal employment dynamics of quid-pro-quo. That's the relief.

I frequently argue that open source is best seen as a gift exchange, since that puts the emphasis on how to react as receiver of gifts. But if you're going to use another word as an alternative to community, I suggest you look at "ecosystem". Ecosystems aren't egalitarian. There are big fish and little fish. Sometimes the relationships are symbiotic, but they're also potentially parasitic.

But whatever word you choose, you'd do well to remember that open source is first and foremost a method of collaboration between programmers who show up to do the work. Not an entitlement program for petulant users to get free stuff or a seat at the table where decisions are made.

Meta is shutting down Workplace

The saying "nobody ever got fired for buying IBM" is at its essence about risk management. The traditional wisdom goes that if you buy from a big company, you're going to be safe. It may be more expensive, but big companies project an image of stability and reliability, so buying their wares is seen as the prudent choice. Except, it isn't. Certainly not any more. Meta killing Workplace is merely exhibit #49667.

Any company that hitched their wagon to Workplace just got served with an eviction notice. In about a year, the data will go read-only, and shortly after that, it's game over. Now companies from Spotify to McDonald's, along with millions of others, have to scramble to find an alternative. Simply because Meta can't be bothered to maintain a platform that's merely used by millions when their consumer business is used by billions.

This, right here, is the risk of buying anything from big tech like Meta and Google. Their main ad-based cash cows are so fantastically profitable that whether it's the millions of paying accounts on Workplace or the millions of live websites once hosted by Google Domains, it all just pales in comparison, and is thus one strategy rotation away from being labeled "non-core" and killed off.

Buying from big isn't the sure bet they want you to believe. Buy from someone who actually needs your business to make the wheels go round.

The endangered state of normality

When I was growing up in the 80s and 90s, I had friends who were socially awkward nerds, friends who were cool but didn't like school at all, friends who were good at school but couldn't muster the will to finish their math homework, and friends who were tomboys. None of these kids ever got a diagnosis. They were all well within the spectrum of what constituted "normal" back then. Today all of them would likely have acquired some label of pathology because nobody seems to qualify (or desire to be seen) as being normal these days.

This is the natural consequence of "centering the margins". Of making it socially desirable to be not normal and low status to be regular. And it's happening across everything from gender expression to neurodiversity. There's a cachet of cool to be had in identifying with some margin. Preferably one that can claim to be oppressed by society via heteronormativity or neurotypicality or other big words for "normal".

And, as Abigail Shrier documents in Bad Therapy, there's a large industry of therapists and other "mental health professionals" eager to accommodate this flight to the margins. Eager to supply the diagnosis, the pills, the reassurance of how every phase of experimentation or misbehavior can be explained by some big word of pathology.

I think we've given up on something important in this pursuit of individuality through the spectrum of marginalization. When fitting in is less about having the right brand of backpack and more about having some medical prescription.

Whatever is at play, I think we're better off re-expanding the definition of normal to fit a much broader spectrum of quirky, weird, and natural varieties of humans. Fewer labels, reserved for far more severe predicaments, and fewer interventions, reserved for cases where the risk of the underlying condition far outweighs the risk of iatrogenic harm from treatment.

DEI is done (minus the mop up)

In November of 2022, I wrote about the waning days of DEI's dominance, and enumerated four factors that I saw as primary drivers of this decline. Those waning days have now been brought to a close, and DEI, as an obsessive, ideological preoccupation of the corporate world, is done. Witness this tabulation of DEI (and ESG) mentions in earnings reports, reported by Business Insider:

dei-esg-mentions.png


It's over. And thank heavens for that! This graph perfectly captures the temporary insanity of what those nutty years were really like in corporate America. An explosion of sanctimony, triggered by the fallacy that racial disparities in the office (and elsewhere) could only be explained by systemic racism. And, worse still, that the way to counteract this mirage was by compensatory discrimination against "model minorities" (mostly Asians) and "whiteness", as prescribed by the dogma of Antiracism.

And with this explosion of sanctimony came a brief but suffocating culture of intimidation. Anyone who dared question the theology of the high priests of this new religion, like DiAngelo or Kendi, was hounded and frequently banished by ideological thugs.

It didn't matter if the hounded were low-level employees who hadn't kept up with the latest woke dictionary (is it latinx now? or latine? or?), executives who dared to claim that meritocracy might actually be a good thing, or comedians making jokes about the insufferable fake piety of it all. There were pitchforks enough to chase all transgressors, big or small.

But now it's done. The tide has turned. The People are sick of this shit. So that's it. All there's left to do is mop up, then make sure we harden our defenses for the next time agitating embers come flying.

Hating Apple goes mainstream

This isn't just about one awful ad. I mean, yes, the ad truly is awful. It symbolizes everything everyone has ever hated about digitization. It celebrates a lossy, creative compression for the most flimsy reason: An iPad shedding an irrelevant millimeter or two. Its destruction of beloved musical instruments is the perfect metaphor for how utterly tone-deaf technologists are capable of being. But the real story is just how little saved-up goodwill Apple had in the bank to compensate for the outrage.

That's because Apple has lost its presumption of good faith over the last five years with an ever-larger group of people, and now we've reached a tipping point. A year ago, I'm sure this awful ad would have gotten push back, but I'm also sure we'd have heard more "it's not that big of a deal" and "what Apple really meant to say was..." from the stalwart Apple apologists the company has been able to count on for decades. But it's awfully quiet on the fan-boy front.

This should all be eerily familiar to anyone who saw Microsoft fall from grace in the 90s. From being America's favorite software company to being the bully pursued by the DOJ for illegalities. Just like Apple now, Microsoft's reputation and good standing suddenly evaporated seemingly overnight once enough critical stories had accumulated about its behavior.

It's not easy to predict these tipping points. Tim Cook enthusiastically introduced this awful ad with a big smile, and I'm sure he's sitting with at least some sense of "wtf just happened?" and "why don't they love us any more?". Because companies like Apple almost have to ignore the haters as the cost of doing business, but then they also can't easily tell when the sentiment has changed from "the usual number" to "one too many". And then, boom, the game is forever changed.

I think this is bound to come as a bigger surprise to Apple than it would have to almost any other company. Apple had such a treasure chest of goodwill from decades as first an underdog, then an unchallenged innovator. But today they're a near three-trillion dollar company, battling sovereigns on both sides of the Atlantic, putting out mostly incremental updates to mature products. Nobody is lining up with a tent to buy a new iPhone any more. The Vision Pro had at best a mediocre launch. Oh, and now the company is even the creator of cringy ads, introduced by a cringy CEO.

Not that this is a mortal wound or even a story anyone is likely to remember in a month. But it is an early indicator that Apple's run on easy street is over. And that's going to require a new approach, which Apple probably won't embrace until they've embarrassed themselves a few more times (like they did with another cringe ad from a little while back).

Everything is great until it isn't.

The last RailsConf

Few numbers exemplified the early growth of Rails like attendance at RailsConf. I think we started with something like 400-600 attendees at the inaugural conference in Chicago in 2006, then just kept doubling year over year, as Rails went to the moon. If memory serves me right, we had something like 1,800 attendees in 2008? It was rapid, it was wild, but next year, it'll be over. RailsConf 2025 will be the last RailsConf.

This is for the best. RailsConf, as it exists today, is a legacy from when the Rails ecosystem didn't have its own guardian institution. For many years, it was left to Ruby Central to fill this role, but that was always going to be a secondary pursuit to their primary mission of furthering Ruby in general. 

But now we have The Rails Foundation, which is focused 100% on Rails, backed by the biggest names in the ecosystem. It's also the organizer of Rails World, which just sold out its own thousand-attendee conference in Toronto this coming September -- in less than twenty minutes! The baton has been passed.

This is good. With Ruby Central focusing their efforts on general-purpose Ruby endeavors, like maintaining Bundler and RubyGems, as well as putting on RubyConf, the division of responsibilities between it and The Rails Foundation is now clear. Which makes it much easier for both organizations to collaborate on furthering Ruby on Rails, each putting emphasis on their side of the conjunction.

I'm going to choose to remember RailsConf for all the wonderful memories it brought me and the ecosystem, especially in the early years. Working with the original crew of Chad Fowler, David A. Black, and Rich Kilmer was the treat of a lifetime. We bootstrapped something from nothing, turned it into an epoch-defining event, and I delivered some of my most memorable keynotes in that era.

It's a bit of a shame what happened later, during those mad years in and immediately following the pandemic, but that kind of nonsense is thankfully now largely behind us, not just in the Ruby world, but in tech in general. And Ruby Central is now almost entirely run by people who didn't have anything to do with that debacle anyway. Making it much easier just to look forward, and simply appreciate that those odd years helped motivate finally getting The Rails Foundation off the ground.

Either way, the future of Rails shines incredibly bright. The ideal of the one-person framework has never been more relevant. The world has woken up from the ZIRP years with a complexity hangover, and Rails is the perfect painkiller. From #nobuild to bare-metal deployment to the eternal appeal of a full-stack solution comprising Active Record, Action Pack, Active Support, and the million other arguments and assets that underpin the modern appeal of Rails. We never went away, but the renaissance is palpable none the less.

So I raise my glass to the final RailsConf. Let's go out with a bang in 2025, celebrate the legacy, and then keep plugging away on spreading the joy of beautiful code, incredible productivity, and programmer-centric development with Ruby on Rails. Cheers!

Magic machines

There's an interesting psychological phenomenon where programmers tend to ascribe more trust to computers run by anyone but themselves. Perhaps it's a corollary to imposter syndrome, which leads programmers to believe that if a computer is operated by AWS or SaaS or literally anyone else, it must be more secure, better managed, less buggy, and ultimately purer. I wish that was so, but there are no magic machines and no magic operators. Just the same kind of potentially faulty bits and brains.

A great example of this was the feedback to our declaration that we're bringing continuous integration back to developer machines. The most common objection was to invoke "it works on my machine", as if to imply that developer machines were somehow a different breed than the ones running in the cloud or the data center. They really aren't! The computer running tests remotely is indeed just that: Another computer. It isn't magical, and it's no less prone to relying on unaccounted-for dependencies or environmental factors.

In fact, when it comes to testing, it's a feature not a bug to have the suite run on multiple machines. It's like an extra fuzzy check that will uncover undeclared dependencies, and help you produce a more resilient system. Because even the best CI setup isn't production. And just because it works in CI doesn't mean it'll be free of issues in production.
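One classic shape of such an undeclared dependency, sketched here in plain Ruby with made-up method names: code that silently relies on the machine's local timezone. It passes on a CI box configured for UTC and fails on a developer machine set to another zone, and it's precisely running the suite on multiple machines that surfaces the hidden assumption.

```ruby
require "time"

# Buggy version: formats a Unix timestamp using whatever timezone the
# current machine happens to be configured with. The dependency on the
# local zone is real but undeclared, so results differ across machines.
def business_date(timestamp)
  Time.at(timestamp).strftime("%Y-%m-%d")
end

# Fixed version: the timezone dependency is declared explicitly, so the
# result is identical on every machine that runs the suite.
def business_date_utc(timestamp)
  Time.at(timestamp).utc.strftime("%Y-%m-%d")
end

puts business_date_utc(0) # => "1970-01-01" on every machine
```

A test asserting on `business_date(0)` would be green or red depending on where it ran; the UTC version is the one whose behavior every machine agrees on.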

Which leads us to the whole point of testing systems in the first place: It's about confidence, not certainty. The road to programmer misery is paved with delusional aspirations that you can ever be fully, truly certain that any sufficiently complicated system will ever work as intended in production. All you have is degrees of confidence to trade-off against increasingly cumbersome protocols and procedures. There's no such thing as 100% test coverage that's meaningful and achievable at the same time.

And it's a fundamental lack of confidence in their own abilities that leads programmers to think that the people operating their cloud computers are so much smarter or better than they are. They rarely are. They're just hidden, and it's that opaqueness that falsely implies a higher competence. If only you knew what kind of frazzled mechanical turk it takes to run most cloud institutions or SaaS operations, you wouldn't be so quick to doubt your own abilities.

There's no magic class of computers and no magic class of computing clerics. "It works on my computer" is just the midwit version of "it works on THAT computer". It's all just computers. You can figure them out, you can make them dance.

We're moving continuous integration back to developer machines

Between running Rubocop style rules, Brakeman security scans, and model-controller-system tests, it takes our remote BuildKite-based continuous integration setup about 5m30s to verify a code change is ready to ship for HEY. My Intel 14900K-based Linux box can do that in less than half the time (and my M3 Max isn't that much slower!). So we're going to drop the remote runners and just bring continuous integration back to developer machines at 37signals.

It's remarkable how big of a leap multi-core developer machines have taken over the last five-to-seven years or so. Running all these checks and validations in a reasonable time on a local machine would have been unthinkable not too long ago. But the 14900K has over 20 cores, the M3 Max has 16, and even a lowly M2 MacBook has 8. They're all capable of doing a tremendous amount of parallelized work that would have seemed fantastical to do locally in the mid 2010s.
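The principle behind that leap can be sketched in plain Ruby: fork one worker per core and split the test files between them, which is the same idea Rails' built-in `parallelize(workers: :number_of_processors)` test setting relies on. The file list and per-file timing here are made up stand-ins.

```ruby
require "etc"

# Pretend test files; each "run" below is a stand-in for executing one file.
test_files = (1..40).map { |i| "models/test_#{i}.rb" }
workers    = Etc.nprocessors

# Split the files into one slice per available core.
slices = test_files.each_slice((test_files.size.to_f / workers).ceil).to_a

# Fork a process per slice, so the slices run concurrently across cores.
pids = slices.map do |slice|
  fork do
    slice.each { |_file| sleep 0.01 } # stand-in for running that file's tests
  end
end

pids.each { |pid| Process.waitpid(pid) }
puts "ran #{test_files.size} files across #{slices.size} workers"
```

Wall-clock time scales with the largest slice rather than the whole suite, which is why a 20-plus-core desktop can halve a CI run that a cloud runner takes five and a half minutes to finish.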

HEY is a pretty substantial code base too. About 55,000 lines of Ruby code, which is verified by some 5,000 test cases along with another 300-some system tests. Virtually all of these tests go through the full stack and hit the database. These are not mocked to the hilt.

To me, the most satisfying part of the improved performance of modern developer CPUs is the possibility to simplify our stacks. Installing, operating, and caring for a remote CI setup is a substantial complication. Either you do it on your own hardware, and deal with that complexity directly, or you pay through the nose for a cloud-based setup. Getting to flush all of it down the simplification drain is an amazing step forward.

In fact, it's what I like most about paying attention to the progress of our platforms. Oh, browsers now have really good JavaScript and CSS engines? Awesome. Let's go #nobuild. Oh, developer CPUs now have dozens of cores? Sweet. Let's pull CI home. Oh, single-core performance is way up? Wonderful. Let's drop gotcha-hinged accelerators like Spring.

As always, the simplified future is not evenly distributed. I can't see the likes of Shopify or GitHub being able to run the full battery of tests against their millions of lines of code locally any time soon. But 99.99% of all web apps are much closer to HEY in breadth than they are to those behemoths. And small teams ought to remove all the moving parts possible. Never aspire to a more complicated stack than what your application calls for.

So we need to keep burning those bridges of complexity once we get to the other side. I can't wait to set fire to every single one of the remote continuous integration bridges we have here at 37signals. Progress is a bonfire.

I could have been happy with Windows

After more than twenty years on the mac, it was always going to be difficult for me to leave Apple. I've simply not been in the market for another computing platform in decades. Sure, I've dabbled a bit here and there, but never with true commitment. It wasn't until Cupertino broke my camel's back this year that I suddenly had the motivation needed to uproot everything. And when I did, I learned that Windows has turned into a wonderful web developer's platform thanks to the Windows Subsystem for Linux (WSL).

I'm not going to lie and say I loved everything about Windows. But after the question of font rendering was settled, and I came to terms with giving up TextMate, it felt perfectly adequate. Better than adequate, actually. It felt nice. Nice knowing that there was a real, realistic, and compelling alternative to the mac, and that most of my aversion to Windows was based on outdated facts or misconceptions.

So I made the commitment. Only to fall in love with a quirky piece of hardware from a small company called Framework shortly thereafter. That in turn led to taking another look at running Linux outright, as the AMD chip inside the Framework simply punched harder with the penguin in charge.

This coincided with a month-long trip away from home where all I brought was the Framework 13 running Ubuntu. And that taught me two things: Most of the jokes about Linux are true! There are more rabbit holes, more gotchas, and less polish. But I also learned, and this was the real surprise, that I scarcely minded at all! That in fact running Linux, and running into many of the little issues that often entails, was a surprisingly delightful and educational experience.

As an example, I've been trying for a while to get my desktop PC, which has an Nvidia 4090 GPU, to work with my Apple XDR 6K monitor, which only accepts Thunderbolt 3. This involved sourcing an exotic Huawei DisplayPort + 2 USB-A => USB C cable. Then learning everything about monitor EDIDs, xorg.conf, kernel parameters, Nvidia driver versions, and about a million other topics that are very close to the metal and very far from the Apple experience (and I still haven't cracked the nut!).
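To give a flavor of the knob-turning involved: one of the tricks in that rabbit hole is forcing the kernel to use a known-good EDID via a boot parameter, instead of trusting whatever the cable negotiates. A hedged sketch — the connector name and firmware path below are illustrative examples, not my actual config:

```shell
# Sketch: force a known-good EDID for one connector via a kernel parameter.
# The connector name (DP-1) and blob path are illustrative assumptions.
# 1. Put the EDID blob where the kernel firmware loader can find it, e.g.
#    /usr/lib/firmware/edid/xdr6k.bin
# 2. Add the parameter to the kernel command line in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash drm.edid_firmware=DP-1:edid/xdr6k.bin"
# 3. Apply and reboot: sudo update-grub && sudo reboot
```

The same `drm.edid_firmware=connector:path` form should work on any distribution that lets you edit the kernel command line; only the bootloader plumbing differs.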

But rather than being frustrated with things not just working out of the box, I embraced the adventure. There's a certain nostalgia here, I'm sure. I grew up with computers that needed far more tender loving care to work well. Where IRQ conflicts had to be resolved before the SoundBlaster card would work for Wing Commander. Computers required some assembly, and as IKEA knows, it made us love them more.

So here I am. I still have Windows available as a dual-boot option on the desktop, but the Framework 13 has been running Ubuntu exclusively the whole time, it's my daily driver at the moment, and now that I've acclimated, Linux just feels right. I love the Tactile window manager for Gnome. I've figured out how to easily fill out my PDFs using Xournal++. Typora is giving me that iA Writer-like distraction-free typing experience I've come to love. And, for now, I've come to terms with VSCode. (See my current setup script).

Would I recommend this expedition to everyone? No. I think if the idea of having to occasionally tinker with kernel parameters or display drivers gives you nightmares, you probably shouldn't run Linux on your primary computer. But I'd also say that it's hard to know whether you'll find some zen of motorcycle maintenance in knowing how to tighten the timing chain of Ubuntu before you try. Especially if you've been cocooned inside the Apple bubble forever.

For a lot of people, Windows is probably the better alternative to the mac. And that's great! We ought to have AT LEAST three good options for personal computing in the modern age, and now I've come to realize that we do.

I'm just happy this exodus happened. I learned something new about myself. I tried a million combinations. And I discovered a real affinity for Framework and Ubuntu. I'd invite you to give it a go, if you're in the mood for a trek. Do it not because it is easy, but because it is hard. See what kind of computing stuff you're made of. Oh, and have fun!

The gift of ambition

The Babylon Bee ran this amazing bit last year: "Study Finds 100% Of Men Would Immediately Leave Their Desk Job If Asked To Embark Upon A Trans-Antarctic Expedition On A Big Wooden Ship". Yes. Exactly. Modern office workers are often starved for ambition, adventure, and even discomfort. This is why there's an endless line of recruits willing to sign up to work for leaders like Musk, despite his reputation for being an erratic hard ass. The ambition is worth it. Because real ambition is rare.

It's the lack of ambition that fuels the malaise of a bullshit job. Work so aspirationally underwhelming that it's possible to coast and imagine how the world wouldn't be an iota different if the work wasn't done. A perfect recipe for existential dread and despair.

But while the stereotype of ambition is indeed someone like Elon Musk (or Steve Jobs, before that), I don't think you literally need to aim for Mars to stir the heart of sailors. Nor do I think you need to be as abrasive or demanding as the stereotype implies. That's the balance we've been trying to find at 37signals since its inception: The vision of a calm company compatible with ambition.

It's not always easy. If you talk for long enough about the fact that 40 hours a week ought to be enough, that vacations should be free of homing beacons, and how it literally doesn't have to be crazy at work, people inside and outside your company might soon think that's indeed all there is. That the ultimate goal of the company is to provide a cushy, coasting existence. But that's not why I get out of bed in the morning.

The aspiration of a calm company is to me exactly the opposite. To prove how much faster and further you can go if you embrace constraints, stay small, and trust skilled professionals to get the job done. That is, the calm company is a method for getting where we really want to go. It's not a destination in and of itself.

In fact, I'd rather work in a place where it was crazy all the time, if we're trying to get somewhere, than work somewhere perfectly calm that's just spinning its wheels. But the point is that I don't think these objectives are in opposition. Being ambitious and calm is like being smooth and fast. Big, erratic, dramatic movements might feel like they're getting you somewhere, but the stopwatch usually reveals the opposite.

But what is ambition, exactly? To me, it's a leap of faith. A belief in the possibility of success without all the evidence to justify it a priori. A trust that whatever challenges we'll face between here and there, we'll be able to figure them out. It's a confidence in the strength of human ingenuity. And a bet that it takes a goal just beyond the reach of the plausible to get the best out of us all.

That ambition can be applied to all aspects of a project. The timeline, the people, the problem, the tech. To tickle our sense of adventure, some of it has to be daring and bold. Maybe it's not enough people, not enough time, new tech, novel problems. Whatever it is, there must be an x factor, an unknown. If we can quantify it all before we even begin, the ambition disappears.

And in that lack of certainty lies the discomfort, lies the leap. Betting on your ability to figure it out means taking a risk. Maybe you won't figure it out! Maybe we really didn't have enough people! Maybe we'll fail. But it's exactly the possibility of failure that gives the effort its meaning and its value.

In the lore of Steve Jobs, you'll find plenty of anecdotes from people who really didn't care for how he treated them or their colleagues at times, but who still credit the projects they worked on for him at Apple as the most meaningful ones of their career. I'm sure the same is true with Musk. These encapsulate the paradox that, psychologically speaking, I don't think most people know what makes them fulfilled at work. (But it isn't the ping pong or the free massage.)

Without a dash of the unpredictable, we all wither away. The chase for security and surety only works as a thrill if you never truly get there. Our competency only grows when we stretch it slightly beyond its breaking point from time to time.

So keep calm, yes, but for all that is holy, carry on by being ambitious.

Villains may live long enough to become heroes

The first tech company I ever really despised was Microsoft. This was back in the 1990s, the era of "cutting off the air supply", of embrace-extend-extinguish, of open source as a "cancer", and of Bill Gates before he sought reputational refuge in philanthropy. What made the animosity so strong was the sense of being trapped. That the alternatives to the Wintel monopoly of the time were so inferior as to essentially require giving up on modern computing.

So when Apple released the first Unix-powered OS X machines at the turn of the millennium, I felt relieved. Saved, even. Finally -- FINALLY!! -- a real choice. Apple provided an escape hatch for computing without giving up on modernity, and I came to love them for it.

But that was then and this is now. Microsoft has completed an astounding redemption arc since. They've gone from being the sworn enemy of open source to one of the biggest sponsors of it. They've been exemplary stewards of GitHub. They've won the hearts and minds of developers with VSCode fair and square. They've even put Linux inside of Windows with WSL! In short, they've gone from being a villain to a hero in a wide array of domains. Open source most of all. And I love it. They deserve all the accolades.

Meanwhile, Apple... Well, I've talked enough about Apple. So let's talk about something new: Meta.

I can't say I ever despised Meta, then Facebook, quite like I did Microsoft. But I sure as shit wasn't a fan. And I remain a staunch opponent of targeted advertising, the privacy assault that inevitably comes with them, and what it's done to the web. But I've come to appreciate that there are bigger challenges facing us than invasive ads. Much bigger.

Take AI. Zuckerberg's embrace of open source AI, now making headlines with the public release of Llama 3, is an invaluable counter to the cartel-adjacent bullshit of "AI safety & ethics" that would see the likes of OpenAI and Google conspire with governments around the world to determine what math should be allowed to predict the next token. I've seen this movie before, and I'm not interested in a rerun.

In fact, Facebook itself was one of the main characters in the previous show: the still-raging battle over misinformation/disinformation/malinformation, which continues to see the awful fusion of state and platforms, through censors and algorithms, in controlling The Narrative. Whatever trust I may once have had in objective third-party "fact checkers" has long since evaporated from the catastrophic track record of these anything-but-neutral, would-be arbiters of truth.

I don't pretend that either of these problems is easy or even that they have solutions. But they certainly have different potential outcomes and trade-offs. Some worse than others. And the prospect of having AI exclusively fine-tuned by the likes of whoever did Gemini or directed by bureaucrats trying to "save democracy" by banning the opposition, yeah, no thanks. I'll take my chances with the unadulterated math or speech any day.

Which makes Zuckerberg's transformation so important. I think a lot of the naivete he had, as did many, about the role of content moderation, truth arbiters, and platform control has been replaced by high degrees of skepticism. And I certainly think that after being humiliated by Apple via ATT, he's as motivated as anyone to prevent the next frontier of computing from being dominated by anyone (if it can't be himself!).

This is good. And it's not good because I have some special insight into Zuckerberg's "heart of hearts". I'm sure his dedication to open source AI is as motivated by self-interest as anyone in that position ever was. That's not a bug! It's a feature! Adam Smith saw it clearly in Wealth of Nations from 1776:

"It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest. We address ourselves, not to their humanity but to their self-love, and never talk to them of our own necessities but of their advantages."

We don't need altruism to save us, we need incentives. We need competition. We need free markets for products, platforms, services, ideas, speech, and AI. We need to put the locus of control with consumers and individuals, not bureaucrats and monopolists.

That's always going to be the struggle. Whenever we achieve anything close to its ideal, like the marvel of the worldwide web, it constantly has to be guarded against regression. So if Microsoft proves to be aligned with some of those causes some of the time, I shall cheer them on, and I shall quell my quarreling. And if Meta does the same, they too shall receive my praise. (Or so I imagine Smith would sound!).

We badly need more powerful champions and heroes of free markets and free thought. Enough that I'm more than willing to commute the sentencing of former villains who've served their time and changed their minds. And enough that I'm comfortable stack ranking my concerns about society, and realizing that targeted ads just aren't as important as the freedoms defined above.

Let's go, Zuck. Give 'em hell.

As we forgive those who trespass against us

Google's announcement that they're done discussing politics at work widely echoed the policy changes Coinbase and we at 37signals made a few years back. So yesterday, I did two separate interviews with media outlets on the topic. And we spoke in part about those early weeks of reaction to our changes, as Twitter went crazy in response to the story. What was it like to briefly be the main, hated characters on the internet?

In the moment, it was awful, but in retrospect, it was a gift. 

A gift as a mirror, causing me to reflect on how I might have been part of a similar mob, on other topics, in different ways. A gift of a misogi challenge of character, bringing the satisfaction of overcoming a vicious social purgatory. But above all, the gift of knowing who was there for me and who wasn't.

It's a cliché, but "knowing who your friends are" really is a blessing. We walk through life much of the time without really knowing who'll be there when the going gets tough -- and our guesses are often wrong. Only the moment of truth, a real crisis, can clarify who's who and what's what. And so it did for me.

It brought unexpected friends and allies out into the light, and it revealed which friends and acquaintances would rather crawl back into the woodwork than stand by my side. I was surprised on both sides.

But while I'll never forget who made what choice, I've committed myself to forgive those who trespassed against me. In that Mike Tyson-esque way of refusing to let any of them change me. I'm certain I've been the weak link in past situations from time to time. I'm certain I've been too lazy or too timid to reach out to support someone who I knew needed it. I don't have many regrets in life, but the ones I have usually fall into this category: I should have been there for someone.

Beyond the personal, I think forgiving our trespassers is how we get out of this specific mess. The period from the late 2010s until at least 2022 really was crazy. It swept up so many otherwise kind and caring people into an ideology predicated on dividing us all into oppressors/oppressed, privileged/not, and other false dichotomies and identities. The way out of bad ideas like that is not a vendetta, but forgiveness. We all have the capacity for being swept up in a social movement or mob. But equally, we all have the path of finding our way out again.

I think there are a lot of people sitting right now with a nagging sense of regret from what they partook in during that crazy era. Who tried on the cape of being a Social Justice Warrior, but ultimately found it suffocating both intellectually and culturally. That's the reckoning we're going through right now.

The worst thing we can do to slow down the rejection of these bad ideas is by forever tarring people who were momentarily taken in by them. Yes, we should absolutely have a vigorous inquiry into the nature of these bad ideas, trace their lineage, and uncover their tragedies. But we can't persecute every individual who in a moment of fear, weakness or ignorance signed on to carry a torch because that's what everyone else was doing at the time.

That is to say, it's more important that we expedite and complete the broad societal rejection of bad ideas than it is to pursue every bad actor until the end of the earth. That's exactly why this vicious ideology proved so unstable, and unable to retain the peak of its power. It kept eating its own for ever-smaller transgressions against an ever-shifting doctrine. To beat that nonsense back, the side of sanity has to do the opposite. Be broader, more forgiving, and less high strung.

Lead us not into temptation of retaliation.

We are a place of business

After the disastrous launch of their Gemini AI, which insisted that George Washington was actually Black and couldn't decide whether Musk's tweets or Hitler was worse, Google's response was timid and weak. This was just a bug! A problem with QA! It absolutely, positively wasn't a reflection of corrupted culture at Google, which now appeared to put ideology over accuracy. Really, really!

Anyone watching that shit show would be right to wonder whether one of America's great technology companies had fallen completely into the hands of the new theocracy. I certainly did. 

But now comes evidence that Google perhaps isn't totally lost, even if an internal war over its origin principles is very much raging. One pitting the mission of organizing the world's information and making it useful against the newspeak Trust & Safety goal of controlling narratives and countering malinformation (i.e. inconvenient truths).

This played out in stereotype as 28 Googlers occupied the CEO of Google Cloud's office for 10 hours this week, defaced property, and prevented other Googlers from doing their work. Because Google provides cloud services to Israel, said the occupiers. And thus The Current Thing demanded it be stopped by whatever means possible. (Remember when The Current Thing was that GitHub shouldn't offer its technology to ICE because "kids in cages"? Same thing).

But then the most amazing thing happened. There was no drawn-out investigation. No saccharine statements about employees' rights to occupy offices, prevent work from happening, or advance their political agendas at work. Nope. They were just fired. Immediately. All 28 of them.

Bravo.

Google's bottom line? "This is a place of business". And while employees have the lawful right to protest their working conditions, they do not have the right to prevent a business from carrying out its normal course of commerce over a disagreement in politics. So that was that.

But it gets better. Google followed up the unceremonious firings by calling an end to employees bringing their politics to the office. Just like Coinbase did, just like we did. The language was spot on:

"But ultimately we are a workplace and our policies and expectations are clear: this is a business, and not a place to act in a way that disrupts coworkers or makes them feel unsafe, to attempt to use the company as a personal platform, or to fight over disruptive issues or debate politics. This is too important a moment as a company for us to be distracted."

Three years ago, taking a common-sense position like this would have been met with drama and outrage in the media and on Twitter. Today I doubt it'll bring more than a ripple outside of a few activist echo chambers on Mastodon. Amazing progress!

Note, none of this pertains to what you think about The Current Thing that provided the trigger this time. It could just as well have been BLM, Russiagate, climate change, or a million other hot-button topics that have occupied the role of The Current Thing, and been used to justify these kinds of insufferable activists yelling at their boss.

We've not just passed the peak of the nonsense that nearly swallowed corporate America whole, but we're now seeing them repudiate it head on. If Google, with its employment roster still packed with people sympathetic to the new theocracy, can put its foot down, so can the rest of the Fortune 500 and beyond. It's time they all say: "This is a place of business".

Forcing master to main was a good faith exploit

I never actually cared whether we call it master or main. So when the racialized claims started over how calling the default branch in Git repositories "master" was PrObLEmAtIC, I thought, fine: what skin is it off anyone's back, mine included, to change? If this is really important, and it can make a real difference, great. Let's do it.
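And mechanically, the change really was trivial, which is part of why I shrugged it through. A minimal sketch, using a throwaway repository (assumes Git 2.28+ for `init -b`; the commented-out remote steps depend on your hosting setup):

```shell
# Create a throwaway repo with the old default branch name...
cd "$(mktemp -d)"
git init -q -b master repo && cd repo
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "init"

# ...then rename the branch locally.
git branch -m master main
# git push -u origin main          # push the renamed branch
# git push origin --delete master  # and retire the old one on the remote
git branch --show-current          # → main
```

The only part that ever required real care was flipping the default branch setting on the hosting side, so clones and pull requests pointed at the new name.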

How naive.

This was a classic exploit of good faith, and I fell for it.

Changing master to main changed less than nothing. Because nothing was or is ever enough in this arena. As soon as this word battle was won, it was just on to the next and the next (and the next).

But the upside of being hit by an exploit like this is that you eventually end up with a patch that closes the hole. And rest assured, this hole in our collective good faith is now closed. People are not going to be this gullible twice. I am not going to be this gullible twice. 

Next time the firewall will be ready.

Imperfections create connections

The engine is in the wrong place in a Porsche 911. It's hanging out the back, swinging the car like a pendulum. And that's key to why it's the most iconic sports car ever made. This fundamental imperfection is part of how it creates the connection.

This is true of mechanical watches too. They're hilariously complicated pieces of engineering that tell time worse than a $20 Casio quartz watch. And that's why we love them. The imperfection of timekeeping, the need to manually wind the things, cements the connection.

That's how computers used to feel too. The Amiga, and the Commodore 64 before it, were quirky bread boxes. Using chips named things like Agnus, Alice, Denise, Lisa, and Paula. With clicking, whirring disk drives. The flickering screen when software was loading. As distinctly different from the competition as a Porsche flat-six is from a Ferrari V12.

But the quirky is almost all gone from modern day computers. The mac in particular has been massaged to within an inch of perfection, and has thus become harder to connect with. It's a curious contradiction. We strive to make things better and better, but if we succeed, we reminisce of the quirks that used to be.

The last MacBook I really loved was the original 11" MacBook Air. It was full of compromises. A cramped screen. Chips that weren't quite fast enough. An iconic, wedgy design. It was so good because it was also kinda bad.

I thought that era was simply gone. But over the last month or so, I've developed much of the same affection for the Framework 13. Exactly because of all its compromises and its quirky design choices.

It uses an odd 3:2 display, which is almost as tall as it is wide. In a time when most every other maker has gone 16:9 or 16:10. And it's matte, not glossy.

The keyboard has twice the travel of most modern laptops. Giving it almost a vintage feel, which, once you get used to it, is really addictive.

It has interchangeable ports?! You can configure the 4 slots with every combination of USB C, USB A, ethernet ports, HDMI ports, and additional storage you desire. Then swap them quickly and easily. An ingenious alternative to dongle life.

And to top it off, I've chosen to run Linux on mine full time. I started out dual booting with Windows, but quickly realized that Linux ran faster on this AMD 7840U chip, and I found that Linux gave me everything I needed in more of that quirky style that gives the Framework machine its appeal in the first place.

Those are all the good parts, but there are plenty of drawbacks too. Compared to a modern MacBook, the battery is inferior. I got 6 hours in mixed use yesterday. The screen is only barely adequate to run at retina-like 2x for smooth looking fonts. Linux is far less polished than macOS. But somehow it just doesn't really matter.

First of all, 6 hours is enough for regular use. If I'm doing more than that in a single stint without getting up, I'll be paying for it physically anyway. And the somewhat cramped resolution has made me fall in love with full-screen apps again, like I used to do with that 11" MacBook Air.

But this is all picking at the parts when the grand story is the sum. This quirky, flawed machine has created a connection I haven't had with a piece of physical computer hardware in a very long time. That's notable!

I know this testimony isn't likely to appeal seriously to most mac users. Just like it wouldn't really have appealed to me a year or two ago. I just wasn't in the market for a change. And that's fine. Apple makes really, really good computers these days. Damn near perfect ones.

And most people don't care that the 911 has the engine in the back. In fact, they don't care about cars at all, really. They just want to get from A to B, as quickly, cheaply, and smoothly as possible. And they tell time perfectly from their smart phone display. This is the democratization of progress. Wonderful.

But if you're the kind of person who might appreciate a slightly notchy manual gearbox, the click of a mechanical shutter on a camera, the ticking of the escapement in a watch, or, dare I say it, putting on a vinyl record, you should check out the Framework 13. The AMD version starts at just around a thousand bucks. So it's not like you have to switch your whole computing life around to give it a try.

And, if you're a programmer, I think you should actually give Linux a try as well. I've smirked about "This Is The Year of Linux on the Desktop" for over twenty years, but now that I've been actually running it for over a month, I've realized it's actually here. And probably has been for quite a while. I just run Ubuntu 23.10, and together with ulauncher + tactile, it's a delightful desktop experience (see my whole Ubuntu setup script). I even found a replacement for my beloved iA Writer in Typora!

Make no mistake, there's more fuss. More snags, more imperfections. So if you go in expecting the same level of perfection you'd get from a company worth three trillion, you might be disappointed. But if you consider this the work of a worldwide open source community, it's incredible how close it is in most areas, ahead in a few, and not that far behind in the rest.

Dare to add a little imperfection into your computing setup, and you might just find a deeper connection to the bits and electrons running it all. And if you don't, at least you got to see the sun rise in a fun location.

Enough problems to go around

The worst kind of company is usually not the one where there's too much real work to do, but the kind where there's not enough. It's in this realm the real monsters appear. Without enough real problems to go around, humans are prone to invent fictitious and dreadful ones.

This is the root of David Graeber's Bullshit Jobs analysis. That a shocking percentage of people work jobs that they themselves see little to no meaning in, because the work that's being produced makes no difference, has no essence. It's enough to make anyone mad.

Now part of the problem is clearly one of perspective. I'm always amazed by the pride and duty it appears most Japanese workers put into the most mundane jobs. I forget where I read this, but it's the difference between being a happy zookeeper who thinks of their job as "tending to the welfare of the elephants" rather than just "shoveling shit all day".

But it's not all subjective either. We are biologically tuned to conserve energy while being cognitively tuned to crave a challenge. So when the load is material, we often wish it was lighter. But if we actually succeed in lightening the load, we wonder why we're unhappy.

This is one of those contradictory aspects of the human condition, and one that's foolish to attempt to resolve. The trick I've found is to believe both things to be true at the same time. Yes, occasionally there's a need to rest and conserve energy. But equally so, there's a need to get back into the arena, and wrestle with something significant. Mojito island, all the time, is a curse, not a blessing. 

And in fact, it probably is worse, for most people, to have too many stretches of too little to do than the opposite. Tales of workers dropping dead a year into retirement are a common folklore expression of this knowledge.

All this to say: don't slice the few, meaningful problems you have at work too thin. The worst injury you can inflict on knowledge workers is leaving them with too little of consequence to contend with.

Meaningful problems are the most valuable human motivators. Made-up problems are a blight. Ensure you have not quite enough time and people available to tackle the former lest you start inventing the latter.