The Copenhagen International School is a wonderful private school located in the North Harbor of the city. It's home to over 900 students from around the world. This is where ambassadors, international executives, and other expats send their kids to get a great education in English while stationed in Denmark. As a result, it's perhaps the most diverse, inclusive school in all of Copenhagen. Lovely.
What's less lovely is the fact that CIS seems to have caught some of the same gender-ideology obsession that has ravaged many schools in America. We thought Copenhagen would offer a respite from the woke nonsense that's been plaguing California — where some schools in our social circle ended up with a quarter or more of the student body identifying as trans or gender nonconforming — but it seems ideological contagions travel as fast as airplanes these days.
It started last week, when the primary school, which includes kindergarten, declared its intention to spend every morning meeting for the entire week focused on gender dysphoria, transgenderism, they/them pronoun protocols, and coloring pride flags. That just sounded a bit odd and a bit much at first, but after reviewing the associated material, it actually looked downright devious. Just look at this example:
Draw yourself in the mirror, then adorn it with trans colors? And the guiding example is a boy who sees himself as a girl?
As you can imagine, many parents at the school were mortified by the idea of their children participating in this kind of overt indoctrination, and some of them let the school know. That's when the revisions started rolling out.
First, the program was revised to no longer apply to kindergarten and first grade, just second through fifth. Then the "draw yourself in the mirror and use trans colors to decorate it" activity was pulled from the program. Then the schedule was reduced from all week to just a single session this Monday while the rest of the material is being "reconsidered". And that's where it stands today.
But that's not all. After talking to a number of other parents, I learned that CIS has other highly objectionable programs in this sphere. Like "Gender and Sexuality Alliances" where primary school students in G3-5, meaning kids as young as eight, are invited to join in lunch and recess meetings to talk more about gender, sexuality, and how to become a good ally to the 2SLGBTQIA+ community.
According to one parent I spoke to (who's considering pulling their kids out over this), CIS hasn't wanted to disclose all specifics about the staff conducting these lunch and recess meetings with the children. Because while it's billed as "student-led" on their website, the sessions are actually facilitated by CIS staff on campus. I've asked the same question of the school administration, including what qualifications these individuals might have, and have not received an answer either. But ultimately, it shouldn't even matter, because this shouldn't even be happening!
There's simply no responsible explanation for having kids as young as eight, or even as old as 11, in lunch and recess meetings with CIS staff to discuss gender and sexuality on school campus. It's preposterous, if not outright creepy.
The school's mission is no cover either. The commitment to an inclusive school does not offer a license to indulge in this kind of overt indoctrination or inappropriate lunch meetings where minors discuss gender and sexuality with school staff. And it has to stop.
CIS, like any other school, should not be a subsidiary of any specific interest organization. We don't want our kids to get their information about climate change from either Extinction Rebellion or fossil-fuel lobbyists. We expect our school to stay politically neutral on international conflicts, like the one in Gaza. In higher grades where these topics are appropriate, they should be discussed in a context that also includes things like the Cass Review and the recent UK Supreme Court ruling.
It's the same reason Copenhagen Pride Week saw a massive loss of sponsorship after trying to cajole major companies into a position on Gaza last year. Novo, Maersk, Google, and many others rejected this organization (and they're not returning this year either) for its partisan politics. It's bizarre that those same companies now have the children of their employees programmed by this organization's agenda at school.
CIS needs to return to its high-level mission of focusing on giving kids an excellent education, teaching them objectively about the world, and upholding general standards for kindness and caring. Not coloring partisan flags during school programs, not facilitating inappropriate meeting forums about gender and sexuality between staff and children.
Update: As a result of feedback from parents, the rest of the proposed material intended for Pride Week will not be used at CIS this semester. I'll make sure to follow up on what might be considered for next semester, as well as the future of the GSA program. Thanks to all the parents who gave their feedback to the school!
The recent disconnection of the ICC's chief prosecutor, at the behest of the American administration, could not have come at a worse time for Microsoft. Just a month prior, the folks from Redmond tried to assure Europe that all was well. That any speculation Europeans could get cut off from critical digital infrastructure was just fear, uncertainty, and doubt. Then everything Europeans worried could happen happened in The Hague. Oops!
Microsoft's assurances met reality and reality won.
That reality is that all American administrations have the power to disconnect any individual, company, or foreign government from digital infrastructure provided by American Big Tech. So in that sense, it's pointless to blame Microsoft for the sanctioning power vested in the Oval Office. But we certainly can blame them for gaslighting Europe about the risk.
What's more important than apportioning blame, though, is getting out of the bind that Europe is in. The continent is hopelessly dependent on American Big Tech for even the most basic of digital infrastructure. If this American administration, or the next, decides to use its sanctioning power again, Europe is in a real pickle. And with the actions taken against the ICC in The Hague, Europe would be negligent to ignore the threat.
Denmark even more so. It's no secret that tensions between Denmark and the US are at a historic high. Trump keeps repeating a desire to take over Greenland by whatever means possible. The American intelligence services have been directed to increase their spying on Denmark and Greenland. Naturally, the Danes are spooked. They should be!
Regardless of what happens with Greenland, trade negotiations, or geopolitical disagreements, though, it would suit Europe well to become digitally sovereign. That doesn't mean cutting off all American tech, but it does mean rejecting any services that can be turned off from Washington. So in terms of Microsoft, it means no more Microsoft 365, no more Teams, no more Azure.
And that's exactly what the two biggest municipalities in Denmark have announced plans to do. Copenhagen and Aarhus just declared that they're going to get rid of Microsoft products for all their workers. The Copenhagen municipality is the largest employer in Denmark with over 40,000 employees. So this is a big deal!
The chairman of the Copenhagen committee who pushed this forward made this comment to Danish media:
If, theoretically, the relationship to the US gets worse, we could fear that Microsoft would be forced to shut everything down. That possibility exists. And if we suddenly can't access our emails or communicate via our systems, we'll be challenged.
That's an understatement. Denmark is one of the most highly digitalized countries in the world. It's also one of the most Microsoft dependent. In fact, Microsoft is far and away the single biggest dependency, so it makes perfect sense to start the quest for digital sovereignty there.
But Denmark is also full of unambitious, defeatist bureaucrats who can't imagine a world without Microsoft. Just today, the IT director for The Capital Region declared it utopian to think Denmark could ever achieve digital sovereignty or meaningfully replace Microsoft. Not even a decade would make a dent, says the director, while recognizing that if we'd done something 15 years ago, we wouldn't be in this pickle. A remarkable illustration of cognitive dissonance!
Sadly, this is not an uncommon conclusion from people who work inside the belly of bureaucracies for too long. Whatever has always been done too often seems like the only thing that ever could be done. But, as Mandela said, it always seems impossible until it's done.
So let's get it done. Digital sovereignty isn't easy, but neither was securing a sovereign energy supply. Nor will it be easy to rebuild a credible military defense. Europe needs all of it, yesterday. The bureaucrats who aren't interested in making it happen should find employment elsewhere.
Over the years, I've learned not to question inspiration. To simply let it drive when it shows up with a full tank. Quite often, I don't exactly know where we're going or even why we're going, but it's repeatedly taken me to just the right place at just the right time, so now I just hop in and say: Let's go!
Case in point: Arch + Hyprland. It's been over a year since I created Omakub to smooth out my own exit path from macOS to Linux, and in the process, helped thousands of others enjoy a beautiful, preconfigured, and relatively familiar desktop experience with Ubuntu. And I continue to think this is an excellent choice for Linux, especially for first-time Mac and Windows defectors.
But this weekend, I just happened to be home alone, and the hype around Arch + Hyprland got the better of me. While Ubuntu is all about being a friendly place for newcomers to Linux, Arch + Hyprland is the exact opposite: It's Linux on hard mode!
At least that's the reputation, and there's certainly something to that. The Arch ISO literally just dumps you into a terminal with scarcely any direction. Just getting on the Wi-Fi requires learning the arcane command-line options for the iwctl terminal configurator.
But besides iwctl, it's actually not that bad anymore. We now have a cheat code for installing Arch in the form of archinstall. It's a terminal interface for getting all the bits like picking a disk and creating the first user done in the correct order, without having to set aside an entire evening just to get the OS installed. So it didn't take long to get Arch up and running.
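For the curious, that first dance goes roughly like this — a sketch from memory, assuming your wireless interface shows up as wlan0 and your network is called HomeWifi:

```
# From the live ISO: get online with iwd's interactive client
iwctl                                   # drops you into the [iwd]# prompt
[iwd]# device list                      # find your wireless interface
[iwd]# station wlan0 scan
[iwd]# station wlan0 get-networks
[iwd]# station wlan0 connect HomeWifi   # prompts for the passphrase
[iwd]# exit

# Then let the guided installer walk you through disks, users, and the rest
archinstall
```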
That doesn't get you much further, though! By default, Arch is about as minimal as it gets. There's no default graphical interface. There are no niceties. Not even wget or curl by default! It really is just a bare-bones installation of Linux.
But on the other hand, Arch is blessed with the AUR — a Wikipedia-style, community-maintained package repository that seems to have literally every piece of software ever released on Linux, and always in the latest version.
A quarter of Omakub is trying to work around the fact that helpful tools like Alacritty or LazyGit or whatever rarely have official packages for Ubuntu, so you're left doing a lot of manual scripting to get everything a developer would want from a modern Linux setup. Not so with the AUR!
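To make that concrete, here's a minimal sketch of the canonical AUR workflow — some-package is a stand-in for whatever tool you're after, not a real package:

```
# The canonical AUR workflow: fetch the build recipe, inspect it, build, install
git clone https://aur.archlinux.org/some-package.git
cd some-package
less PKGBUILD   # always worth a skim before running someone else's build script
makepkg -si     # builds the package and installs it, pulling in dependencies

# Or let an AUR helper like yay do the legwork
yay -S some-package
```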
So that's Arch. But Hyprland is even more hilarious. It's a tiling window manager with something as rare in the Linux world as a keen focus on aesthetics! It looks like it was written by visual artists rather than neckbeards. And it's at the core of the modern r/unixporn Linux ricing subculture.
That's not the funny part, though. The funny part is how ridiculously atomized the entire thing is. Hyprland comes with absolutely nothing out of the box. No login screen, no menu bar, no notification system, no file manager, no visual settings application. Just a text-based configuration file, a wiki, and an outline on a map for how to design your own adventure.
This means that if you're interested in running Hyprland, and you intend to set it all up from scratch, you're probably signing up for a good 10+ hours — just to install and configure everything! Now, there are some precompiled setup scripts out there already, but most of them still require you to do a ton of manual legwork before you reach that beautiful summit of a complete system.
So while the downside of Arch + Hyprland is that literally every last detail requires you to make a choice and a config file to move on, it's also its upside: you can change EVERYTHING! And the core window tiling magic of Hyprland is one of the most intoxicating ways of using a computer that I've ever experienced. It looks amazing; it feels amazing (if you prefer using a keyboard instead of a mouse!).
I've poured hours and hours into this quest over the weekend, and yet I'm still not even done with my first Arch + Hyprland build! My login screen looks like shit. I haven't decided on a final menu bar configuration. But what is working — the basic tiling flow and look — is so nice that the inspiration keeps driving the project forward.
Which, of course, I've already decided to codify. I'm not going through all this work to set up a beautiful, preconfigured, fully-functional, out-of-the-box Arch + Hyprland combo and then not share it with anyone who's curious (and might not have 10–20 hours for this kind of side quest available in their schedule). So of course I registered a domain and found a way to draw some ASCII art: Omarchy is on the way!
Credit for this amazing turn of events goes to Epic Games founders Tim Sweeney and Mark Rein, who did what no small developer like us could ever dream of doing: they spent over $100 million to sue Apple in court. And while the first round yielded very little progress, Apple's (possibly criminal) contempt of court is what ultimately delivered the resolution. Thanks to their fight for Fortnite, app developers everywhere are now allowed to link out of apps to their own web-based payment system in the US store (but, sadly, nowhere else yet).
This is all we ever wanted from Apple: to have a way to distribute our iPhone apps and keep the customer relationship by billing directly. The 30% toll gets all the attention, and it is ludicrously egregious, but to us, it's just as much about retaining that direct customer relationship, so we can help folks with refunds, so they don't tie their billing for a multi-platform email system to a single manufacturer.
Apple always claims to put the needs of the users first, and that whatever hardship developers have to carry is justified by their customer-focused obsession. But in this case, it's clear that the obsession was with collecting the easiest billions Apple has ever made, by taking an obscene cut of all software and subscription sales on the platform.
This obsession with squeezing every last dollar from developers has produced countless customer-hostile experiences on the iPhone. Like how you couldn't buy a book in the Kindle app before this (now you can!). Or sign up for a Netflix subscription (now you can!). Before, users would hunt in vain for an explanation inside these apps, and thanks to Apple's gag orders, developers were not even allowed to explain the confusing situation.
It's been the same deal with HEY. While we successfully fought off Apple's attempt to extort us into using their in-app payment system (IAP), we've been stuck with an awkward user experience ever since. One that prevented new customers from signing up for a real email address in the application, and instead sent them down this bizarre burner-account setup. All so the app would "do something", in order to please an argument that App Store chief Phil Schiller made up on the fly in an interview.
That's what we can now get rid of. No more weird burner accounts. Now you can sign up directly for a real email address in HEY, and if you like what we have to offer (and I think you will!), you'll be able to pay the $99/year for a subscription via a web-based flow that it's now kosher to link to from the app itself.
What a journey, and what a needless torching of the developer relationship from Apple's side. We've always been happy to pay Apple for hosting our application on the App Store, as all developers have always needed to do via the $99/year developer fee. But being forced to hand over 30% of the business, as well as the direct customer relationship, was always an unacceptable overreach.
Now that's been arrested by Judge Yvonne Gonzalez Rogers of the United States District Court for the Northern District of California, who has delivered app developers the only real relief that we've seen in this whole sordid monopoly affair that's been boiling since 2020. It's a beautiful thing.
It also offers Apple an opportunity to bury the hatchet with developers. They can choose to accept the court's decision in full and worldwide. Allow developers everywhere the right to link to their own billing flow, so they can retain their own customer relationship, and so business models that can't carry a 30% toll can flourish.
Besides, Apple's own offering will likely still have plenty of pull. I'm sure many small developers would continue to consider IAP to avoid having to worry about international taxes or even direct customer service. Nobody is taking that away from Apple or those developers. All Judge Rogers is demanding is that Apple compete fairly with alternative arrangements.
In case Apple doesn't accept the court's decision — and there's sadly some evidence of that — I hope the European antitrust regulators watch the simple yet powerful mechanism that Judge Rogers has imposed on Apple. While I'd love sideloading as much as the next sovereign techie who wants to own the hardware I buy, I think we can get the lion's share of independence by simply being allowed to link out of the apps, just as has been so ordered by this District Court.
I do hope, though, that Apple does accept the court's decision. Both because it would be a stain on their reputation to get convicted of criminal contempt of court, and because I really want Apple to return to being a shining city on the hill. To show that you can win in the market merely by making better products. Something Apple never used to be afraid of doing. That they don't need these gangster extortion techniques to make the numbers that Cook has promised Wall Street.
Despite moving on to Linux and Android, I have a real soft spot for Apple's taste, aesthetics, and engineering prowess. They've lost their way and moral compass over the last half decade or so, but that's only one leadership pivot away from being found again. That won't win back all the trust and good faith that was squandered right away, but they'll at least be on the long road to recovery.
Who knows, maybe developers would even be inclined to assist Apple next time they need help launching a new device in need of third-party software to succeed.
Have you thought about doing the opposite of whatever you're doing or considering? It's a really helpful way to test your assumptions and your values. What does the opposite look like, how would it work?
It's so easy to get stuck in a groove of what works, what you believe to be right. But helpful assumptions have a half-life, just like facts. And it's ever so easy to miss the shift when circumstances change, if you're not regularly stress-testing your core beliefs.
That doesn't mean you're just a flag in the wind, blowing whichever way. But it does mean having enough intellectual humility and creative flexibility to consider that what you believe to be true about your business, about your team, about your technology might not be so.
We did this a while back with full-time managers. We'd been working for nearly two decades without any, but exactly because it'd been so long, we were drawn to try the opposite, just to see what we might have missed. So we did. Hired a few full-time managers to help us test that assumption for a few years.
In the end, we decided that our managers-of-one culture worked better, but it wasn't a given at the outset. To try the opposite, you really have to believe that you might have been wrong.
Because you're wrong about something. I guarantee it. We all are.
For the past week, I've been working off the Minisforum UM870. A tiny mini PC with an 8-core/16-thread AMD 8745H CPU, which retails for $343 (or €379) as a barebones unit, and stays below $550, even after adding 48GB of RAM and 1TB of storage. I'm shocked to report that I really don't need more than this!
I mean, I knew that Apple's Mac Mini, which is just as petite as the Minisforum, had plenty of power for macOS. But somehow I thought Apple had some special sauce that made this possible, and that PCs were forever condemned to be bigger, louder, and slower. How 2020 of me.
The UM870 is a little beast. It runs our full HEY test suite in just 2m28s. In comparison, it takes a 14-core M4 Pro 2m49s, and such an Apple costs $2,199, once you've given it 48GB of RAM and 1TB of storage.
Now, that M4 Mac Mini can probably do things with, say, 8K video editing that the UM870 can't touch. But on the other hand, the UM870 can play the latest video games. Everything from Fortnite to Cyberpunk 2077 to Forza Horizon. It won't trouble a modern, dedicated Nvidia card for max FPS, but it's perfectly playable at 1080p at medium settings in a ton of games.
In raw CPU power, the AMD 8745H will match a regular M4 in multi-core. They both clock in right around 13,000 points on Geekbench 6. Though the M4 is a fair bit quicker in single-core. The AMD is also far behind an M4 Pro in raw multi-core power (13K vs 22K), but at less than a quarter the cost, it's hard to complain.
But as with the example of video games, it's often deceiving just to compare the Geekbench numbers, because it all depends on what you're doing. If you're really into video games, it's no use having extra grunt if your favorite games won't run. The same is true if you're a developer working with Docker containers and a Linux toolchain. As the numbers above show, the UM870 handily defeats the M4 Pro in our all-cores-buzzing HEY test suite.
That's partly because we run databases and accessories, like MySQL, Redis, and Elasticsearch, in Docker containers. Even though we run the Ruby code natively on both platforms, the Docker dependencies put the Mac further behind than it otherwise would have been, because Linux runs Docker natively, and the Mac has to deal with the file-system tax and other drawbacks.
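To illustrate the shape of that setup — a hedged sketch, not our actual configuration, with merely plausible image tags and flags:

```
# Databases and accessories run in containers; the Ruby code runs natively
docker run -d --name mysql -p 3306:3306 \
  -e MYSQL_ALLOW_EMPTY_PASSWORD=1 mysql:8
docker run -d --name redis -p 6379:6379 redis:7
docker run -d --name elasticsearch -p 9200:9200 \
  -e discovery.type=single-node -e xpack.security.enabled=false \
  docker.elastic.co/elasticsearch/elasticsearch:8.13.4
```

On Linux, those containers talk straight to the kernel. On macOS, every one of them lives inside a virtual machine, with a slow file-system bridge between host and guest. That's the tax.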
The irony is that it was partly Apple's volume with and investment in TSMC that got us these incredible AMD chips, as they're riding the same improvements in TSMC manufacturing prowess as Apple's M chips. The Zen 4 cores in the 8745H are all forged on the same 5nm process as the M2, so it's no surprise that the AMD cores are dead on the money with those M2 cores in Geekbench single-core performance.
And Zen 4 isn't even the latest generation! The insane new (and insanely named) AMD Ryzen AI Max 395+ chip that's used in the upcoming Framework Desktop runs on Zen 5 cores. And with 16 of those, the 395+ is faster in Geekbench multi-core than an M4 Pro, and only ever so slightly behind the M4 Max. On my HEY test suite, it completes the run in an insane 1m21s — more than twice as fast as the 14-core M4 Pro!
But I digress. The 395+ chip isn't cheap, even if it's still a great deal. The Framework Desktop with 64GB/1TB, which is twice as fast as the M4 Pro with our HEY tests, is $1,744. That's still less than the $2,199 Mac Mini, which only has 48GB of RAM. But obviously way more than a $550 Minisforum! And while it's quite small, it's not tiny, like the UM870.
Regardless, this is what I love about technology. I love when our assumptions are tested: just how small and cheap can an awesome developer machine become? I love that open-source Linux is able to run laps around Apple in the workloads that many developers need (like working with Docker containers). I love that this tiny little silent $550 mini PC on my desk is capable of putting out computing power that only a decade ago would have been reserved to loud, honking metal in a data center.
Mini PCs have gotten really good. AMD is on a roll. Linux is a blast. These are my conclusions. Check out the Minisforum UM870 or the Beelink SER8. Anything with an AMD 7745H and up to an 8945HS should be a great deal. If you want to splurge (yet still get a bargain compared to the Macs), you could have a look at the new HX370 in the Beelink SER9 or Minisforum X1, but I'd save my money, buy a Lofree Flow84 keyboard to go with the new mini rig, and put the rest towards a KEF LSX II savings fund!
Update: The MediaTek Wi-Fi + Bluetooth card used in the UM870 isn't compatible with Ubuntu out of the box. I run everything hardwired for performance, so I didn't notice during my testing, but a reader caught it, and I confirmed it.
If you need wireless out-of-the-box on Ubuntu, I found that the card in the Beelink SER9 works great with both. If you already have a UM870 or still want one, and you need wireless, you could buy the AMD RZ616 Wi-Fi 6E card from Framework or Amazon for around $20 — that works and slots right in.
The appeal of "vibe coding" — where programmers lean back and prompt their way through an entire project with AI — appears partly to be based on the fact that so many development environments are deeply unpleasant to work with.
So it's no wonder that all these programmers stuck working with cumbersome languages and frameworks can't wait to give up on the coding part of software development. If I found writing code a chore, I'd be looking for retirement too.
But I don't.
I mean, I used to! When I started programming, it was purely because I wanted programs. Learning to code was a necessary but inconvenient step toward bringing systems to life. That all changed when I learned Ruby and built Rails.
Ruby's entire premise is "programmer happiness": that writing code should be a joy. And historically, the language was willing to trade run-time performance, memory usage, and other machine sympathies against the pursuit of said programmer happiness. These days, it seems like you can eat your cake and have it too, though. Ruby, after thirty years of constant improvement, is now incredibly fast and efficient, yet remains a delight to work with.
That ethos couldn't shine brighter now. Disgruntled programmers have finally realized that an escape from nasty syntax, boilerplate galore, and ecosystem hyper-churn is possible. That's the appeal of AI: having it hide away all that unpleasantness. Only it's like cleaning your room by stuffing the mess under the bed — it doesn't make it go away!
But the instinct is correct: Programming should be a vibe! It should be fun! It should resemble English closely enough that line noise doesn't obscure the underlying ideas and decisions. It should allow a richness of expression that serves the human reader instead of favoring the strictness preferred by the computer. Ruby does.
And given that, I have no interest in giving up writing code. That's not the unpleasant part that I want AI to take off my hands. Just so I can — what? — become a project manager for a murder of AI crows? I've had the option to retreat up the manager ladder for most of my career, but I've steadily refused, because I really like writing Ruby! It's the most enjoyable part of the job!
That doesn't mean AI doesn't have a role to play when writing Ruby. I'm conversing and collaborating with LLMs all day long — looking up APIs, clarifying concepts, and asking stupid questions. AI is a superb pair programmer, but I'd retire before permanently handing it the keyboard to drive the code.
Maybe one day, wanting to write code will be a quaint concept. Like tending to horses for transportation in the modern world — done as a hobby but devoid of any economic value.
I don't think anyone knows just how far we can push the intelligence and creativity of these insatiable token munchers. And I wouldn't bet against their advance, but it's clear to me that a big part of their appeal to programmers is the wisdom that Ruby was founded on: Programming should favor and flatter the human.
The web will be far worse off if Google is forced to sell Chrome, even if it's to atone for legitimate ad-market monopoly abuses. Which means we'll all be worse off, as the web would lose ground to actual monopoly platforms, like the iOS App Store and Google's own Play Store.
First, Chrome won the browser war fair and square by building a better surfboard for the internet. This wasn't some opportune acquisition. This was the result of grand investments, great technical prowess, and markets doing what they're supposed to do: rewarding the best. Besides, we have a million alternatives. Firefox still exists, so does Safari, and so do the billion Chromium-based browsers like Brave and Edge. And we finally even have new engines on the way with the Ladybird browser.
Look, Google's trillion-dollar business depends on a thriving web that can be searched by Google.com, that can be plastered in AdSense, and that now can feed the wisdom of AI. Thus, Google's incredible work to further the web isn't an act of charity, it's one of economic self-interest, and that's why it works. Capitalism doesn't run on benevolence, but on incentives.
We want an 800-pound gorilla in the web's corner! Because Apple would love nothing better (despite the admirable work by Team Safari to keep up with Chrome) than to see the web's capacity as an application platform diminished. As would every other owner of a proprietary application platform. Microsoft fought the web tooth and nail back in the 90s because they knew that a free, open application platform would undermine lock-in — and it did!
But the vitality of that free and open application platform depends on constant development. If the web stagnates, other platforms will gain. Team Chrome pushing the web forward in a million ways — be it import maps, nested CSS, web push, etc. — is therefore essential.
This is a classic wealth vs. riches mistake. Lawyers see Chrome as valuable in a moment's snapshot, but the value is all in the wealth that continued investment brings. A Chrome left to languish with half the investment will evaporate as quickly as a lottery winner's riches. Wealth requires maintenance to endure.
Google should not get away with rigging the online ad market, but forcing it to sell Chrome will do great damage to the web.
We received over 2,200 applications for our just-closed junior programmer opening, and now we're going through all of them by hand and by human. No AI screening here. It's a lot of work, but we have a great team who take the work seriously, so in a few weeks, we'll be able to invite a group of finalists to the next phase.
This highlights the folly of thinking that landing a job like this is a matter of ticking off some specific list of criteria, though. Yes, you have to present a baseline of relevant markers to even get into consideration, like a great cover letter that doesn't smell like AI slop, promising projects or work experience or educational background, etc. But to actually get the job, you have to be the best of the ones who've applied!
It sounds self-evident, maybe, but I see questions time and again about it, so it must not be. Almost every job opening is grading applicants on the curve of everyone who has applied. And the best candidate of the lot gets the job. You can't quantify what that looks like in advance.
I'm excited to see who makes it to the final stage. I already hear early whispers that we got some exceptional applicants in this round. It would be great to help counter the narrative that this industry no longer needs juniors. That's simply retarded.
However good AI gets, we're always going to need people who know the ins and outs of what the machine comes up with. Maybe not as many, maybe not in the same roles, but it's truly utopian thinking that mankind won't need people capable of vetting the work done by AI in five minutes.
The new AMD HX370 option in the Framework 13 is a good step forward in performance for developers. It runs our HEY test suite in 2m7s, compared to 2m43s for the 7840U (and 2m49s for a M4 Pro!). It's also about 20% faster in most single-core tasks than the 7840U.
But is that enough to warrant the jump in price? AMD's latest, best chips have suddenly gotten pretty expensive. The F13 w/ HX370 now costs $1,992 with 32GB RAM / 1TB. Almost the same as an M4 Pro MBP14 w/ 24GB / 1TB ($2,199). I'd pick the Framework any day for its better keyboard, 3:2 matte screen, repairability, and superb Linux compatibility, but it won't be because the top option is "cheaper" anymore.
Of course you could also just go with the budget 6-core Ryzen AI 5 340 in the same spec for $1,362. I'm sure that's a great machine too. But maybe the sweet spot is actually the Ryzen AI 7 350. It "only" has 8 cores (vs 12 on the 370), but four of those are performance cores — the same as the 370. And it's $300 cheaper. So ~$1,600 gets you out the door. I haven't actually tried the 350, though, so that's just speculation. I've been running the 370 for the last few months.
Whichever chip you choose, the rest of the Framework 13 package is as good as it ever was. This remains my favorite laptop of at least the last decade. I've been running one for over a year now, and combined with Omakub + Neovim, it's the first machine in forever where I've actually enjoyed programming on a 13" screen. The 3:2 aspect ratio combined with Linux's superb multiple desktops that switch with 0ms lag and no animations means I barely miss the trusted 6K Apple XDR screen when working away from the desk.
The HX370 gives me about 6 hours of battery life in mixed use. About the same as the old 7840U. Though if all I'm doing is writing, I can squeeze that to 8-10 hours. That's good enough for me, but not as good as a Qualcomm machine or an Apple M-chip machine. For some people, those extra hours really make the difference.
What does make a difference, of course, is Linux. I've written repeatedly about how much of a joy it's been to rediscover Linux on the desktop, and it's a joy that keeps on giving. For web work, it's so good. And for any work that requires even a minimum of Docker, it's so fast (as the HEY suite run time attests).
Apple still has a strong hardware game, but their software story is falling apart. I haven't heard many people sing the praises of new iOS or macOS releases in a long while. It seems like without an asshole in charge, both have moved towards more bloat, more ads, more gimmicks, more control. Linux is an incredible antidote to this nonsense these days.
It's also just fun! Seeing AMD catch up in outright performance if not efficiency has been a delight. Watching Framework perfect their 13" laptop while remaining 100% backwards compatible in terms of upgrades with the first versions is heartwarming. And getting to test the new Framework Desktop in advance of its Q3 release has only affirmed my commitment to both.
But with the new HX370, it's in my opinion the best Linux laptop you can buy today, which by extension makes it the best web-developer laptop too. The top spec might have gotten a bit pricey, but there are options all along the budget spectrum, and they all retain the key ingredients anyway. Hard to go wrong.
Nearly a quarter of seventeen-year-old boys in America have an ADHD diagnosis. That's crazy. But worse than the diagnosis is that the majority of them end up on amphetamines, like Adderall or Ritalin. These drugs allow teenage boys especially (they're diagnosed at 2-3x the rate of girls) to do what their minds would otherwise resist: study subjects they find boring for long stretches of time. Hurray?
Except, it doesn't even work. Because taking Adderall or Ritalin doesn't actually help you learn more, it merely makes trying tolerable. The kids might feel like the drugs are helping, but the test scores say they're not. It's Dunning-Kruger — the phenomenon where low-competence individuals overestimate their abilities — in a pill.
Furthermore, even this perceived improvement is short-term. The sudden "miraculous" ability to sit still and focus on boring school work wanes in less than a year on the drugs. In three years, pill poppers are doing no better than those who didn't take amphetamines at all.
These are all facts presented in a blockbuster story in The New York Times Magazine entitled Have We Been Thinking About A.D.H.D. All Wrong?, which unpacks all the latest research on ADHD. It's depressing reading.
Not least because the definition of ADHD is so subjective and situational. The NYTM piece is full of anecdotes from kids with an ADHD diagnosis whose symptoms disappeared when they stopped pursuing a school path out of step with their temperament. And just look at these ADHD markers from the DSM-5:
Inattention
- Difficulty staying focused on tasks or play.
- Frequently losing things needed for tasks (e.g., toys, school supplies).
- Easily distracted by unrelated stimuli.
- Forgetting daily activities or instructions.
- Trouble organizing tasks or completing schoolwork.
- Avoiding or disliking tasks requiring sustained mental effort.

Hyperactivity
- Fidgeting, squirming, or inability to stay seated.
- Running or climbing in inappropriate situations.
- Excessive talking or inability to play quietly.
- Acting as if “driven by a motor,” always on the go.

Impulsivity
- Blurting out answers before questions are completed.
- Trouble waiting for their turn.
- Interrupting others’ conversations or games.
The majority of these so-called symptoms are what I'd classify as "normal boyhood". I certainly could have checked off a bunch of them, and you only need six over six months for an official ADHD diagnosis. No wonder a quarter of those seventeen-year-old boys in America qualify!
Borrowing from Erich Fromm’s The Sane Society, I think we're looking at a pathology of normalcy, where healthy boys are defined as those who can sit still, focus on studies, and suppress kinetic energy. Boys with low intensity and low energy. What a screwy ideal to chase for all.
This is all downstream from an obsession with getting as many kids through as much safety-obsessed schooling as possible. While the world still needs electricians, carpenters, welders, soldiers, and a million other occupations that exist outside the narrow educational ideal of today.
Now I'm sure there is a small number of really difficult cases where even the short-term break from severe symptoms that amphetamines can provide is welcome. The NYTM piece quotes the doctor who did one of the most consequential studies on ADHD as thinking that's around 3% — a world apart from the quarter of seventeen-year-olds discussed above.
But as ever, there is no free lunch in medicine. Long-term use of amphetamines acts as a growth inhibitor, resulting in kids up to an inch shorter than they otherwise would have been. On top of the awful downs that often follow amphetamine highs. And the loss of interest, humor, and spirit that frequently comes with the treatment too.
This is all eerily similar to what happened in America when a bad study from the 1990s convinced a generation of doctors that opioids actually weren't addictive. By the time they realized the damage, they'd set in motion an overdose and addiction cascade that's presently killing over 100,000 Americans a year. The book Empire of Pain chronicles that tragedy well.
Or how about the surge in puberty-blocker prescriptions, which has now been arrested in the UK, following the Cass Review, as well as Finland, Norway, Sweden, France, and elsewhere.
Doctors are supposed to first do no harm, but they're as liable to be swept up in bad paradigms, social contagions, and ideological echo chambers as the rest of us. And this insane over-diagnosis of ADHD fits that liability to a T.
To be a successful founder, you have to believe that what you're working on is going to work — despite knowing it probably won't! That sounds like an oxymoron, but it's really not. Believing that what you're building is going to work is an essential component of coming to work with the energy, fortitude, and determination it's going to require to even have a shot. Knowing it probably won't is accepting the odds of that shot.
It's simply the reality that most things in business don't work out. At least not in the long run. Most businesses fail. If not right away, then eventually. Yet the world economy is full of entrepreneurs who try anyway. Not because they don't know the odds, but because they've chosen to believe they're special.
The best way to balance these opposing points — the conviction that you'll make it work, the knowledge that it probably won't — is to do all your work in a manner that'll make you proud either way. If it doesn't work, you still made something you wouldn't be ashamed to put your name on. And if it does work, you'll beam with pride from making it on the basis of something solid.
The deep regret from trying and failing only truly hits when you look in the mirror and see Dostoevsky staring back at you with this punch to the gut: "Your worst sin is that you have destroyed and betrayed yourself for nothing." Oof.
Believe it's going to work.
Build it in a way that makes you proud to sign it.
Base your worth as a human on something greater than a business outcome.
We just opened a search for a new junior programmer at 37signals. It's been years since we last hired a junior, but the real reason the listing is turning heads is because we're open about the yearly salary: $145,849*. That's high enough that programmers with lots of experience are asking whether they could apply, even if they aren't technically "junior". The answer is no.
The reason we're willing to pay a junior more than most is because we're looking for a junior who's better than most. Not better in "what do they already know", but in "how far could they go". We're hiring for peak promise — and such promise only remains until it's revealed.
Maybe it sounds a little harsh, but a programmer who's been working professionally for five years has likely already revealed their potential. What you're going to get is roughly what you see. That doesn't mean that people can't get better after that, but it means that the trajectory by which they improve has already been plotted.
Whereas a programmer who's either straight out of school or fresh off their first internship or short-stint job is essentially all potential. So you draw their line on the basis of just a few early dots, but the line can be steep.
It's not that different from something like the NFL scouting combine. Teams fight to find the promise of The Next All-Star. These rookies won't have the experience that someone who's already played in the league for years would have, but they have the potential to be the best. Someone who's already played for several seasons will have shown what they have and be weighed accordingly.
This is not easy to do! Plenty of rookies, in sports and programming, may show some early potential, then fail to elevate their game to where the buyer is betting it could be. But that's the chance you take to land someone extraordinary.
So if you know a junior programmer with less than three years of industry experience who is sparkling with potential, do let them know of our listing. And if you know someone awesome who's already a senior programmer, we also have an opening for them.
*It's a funnily precise number because it's pulled directly from the Radford salary database, which we query for the top 10% of San Francisco salaries for junior programmers.
While the world frets about the future of AI, the universal basic income advocates have an answer ready for the big question of "what are we all going to do when the jobs are gone": Just pay everyone enough to loaf around as they see fit! Problem solved, right?
Wrong. The purpose of work is not just about earning your keep, but also about earning a purpose and a place in the world. This concept is too easily dismissed by intellectuals who imagine a world of liberated artists and community collaborators, if only everyone were unshackled from the burdens of capitalism. Because that's the utopia that appeals to them.
But we already know what happens to most people who lose their job. It's typically not a song-and-dance of liberation, but a whimper of increasing despair. Even if they're able to draw benefits for a while.
Some of that is probably gendered. I think men have a harder time finding a purpose without a clear and externally validated station of usefulness. As a corollary to the quip that "women want to be heard, men want to be useful" from psychology. Long-term unemployment, even cushioned by state benefits, often leads men to isolation and a rotting well-being.
I've seen this play out time and again with men who've lost their jobs, men who've voluntarily retired from their jobs, and men who've sold their companies. As the days add up after the centering purpose in their life disappeared, so does the discontent with "the problem of being".
Sure, these are just anecdotes. Some men are thrilled to do whatever, whenever, without financial worries. And some women mourn a lost job as deeply as most men do. But I doubt it's evenly split.
Either way, I doubt we'll be delighted to discover what societal pillars wither away when nobody is needed for anything. If all labor market participation rests on intrinsic motivation. That strikes me as an obvious dead end.
We may not have a say in the matter, of course. The AI revolution, should it materialize like its proponents predict, has the potential to be every bit as unstoppable as the agricultural, industrial, and IT revolutions before it. Where the Luddites and the Amish, who reject these revolutions, end up as curiosities on the fringe of modern civilization. The rest of us are transformed, whether we like it or not.
But generally speaking, I think we have liked it! I'm sure it was hard to imagine what we'd all be doing after the hoe and the horse gave way to the tractor and combine back when 97% of the population worked the land. Same when robots and outsourcing claimed the most brutish assembly lines in the West. Yet we found our way through both to a broadly better place.
The IT revolution feels trickier. I've personally worked my life in its service, but I'm less convinced it's been as universal a good as those earlier shifts. Is that just nostalgia? Because I remember a time before EVERYTHING IS COMPUTER? Possibly, but I think there's a reason the 80s in particular occupy such a beloved place in the memory of many who weren't even born then.
What's more certain to me is that we all need a why, as Viktor Frankl told us in Man's Search for Meaning. And while some of us are able to produce that artisanal, bespoke why imagined by some intellectuals and academics, I think most people need something prepackaged. And a why from work offers just that. Especially in a world bereft of a why from God.
It's a great irony that the more comfortable and frictionless our existence becomes, the harder we struggle with "the problem of being". We just aren't built for a life of easy leisure. Not in mass numbers, anyway. But while the masses can easily identify the pathology of that when it comes to the idle rich, and especially their stereotyped trust-fund offspring, they still crave it for themselves.
Orwell's thesis is that heaven is merely that fuzzily-defined place that provides relief from the present hardships we wish to escape. But Dostoevsky remarks that should man ever find this relief, he'd be able to rest there for just a moment, before he'd inevitably sabotage it — just to feel something again.
I think of that often while watching The Elon Show. Musk's craving for the constant chaos of grand gestures is Dostoevsky's prediction underwritten by the wealth of the world's richest man. Heaven is not a fortune of $200 billion to be quietly enjoyed in the shade of a sombrero. It's in the arena.
I’ve also pondered this after writing about why Apple needs a new asshole in charge, and reflecting on our book, It Doesn't Have To Be Crazy At Work. Yes, work doesn’t have to be crazy, but for many, occasional craziness is part of the adventure they crave. They’ll tolerate an asshole if he takes them along for one such adventure — accepting struggle and chaos as a small price to feel alive.
But even in the absence of such adventure, a stupid email job offers something. Maybe it isn't much, maybe it doesn't truly nourish the soul, but it's something. In the Universal Basic Income scenario of having to design your own adventure entirely from scratch, there is nothing. Just a completely blank page with no deadline to motivate writing the first line.
If we kill the old 9-5 "why", we better find a new one. That might be tougher than making silicon distill all our human wisdom into vectors and parameters, but we have to pull it off.
Picasso got it right: Great artists steal. Even if he didn’t actually say it, and we all just repeat the quote because Steve Jobs used it. Because it strikes at the heart of creativity: None of it happens in a vacuum. Everything is inspired by something. The best ideas, angles, techniques, and tones are stolen to build everything that comes after the original.
Furthermore, the way to learn originality is to set it aside while you learn to perfect a copy. You learn to draw by imitating the masters. I learned photography by attempting to recreate great compositions. I learned to program by aping the Ruby standard library.
Stealing good ideas isn’t a detour on the way to becoming a master — it’s the straight route. And it’s nothing to be ashamed of.
This, by the way, doesn’t just apply to art but to the economy as well. Japan became an economic superpower in the 80s by first poorly copying Western electronics in the decades prior. China is now following exactly the same playbook to even greater effect. You start with a cheap copy, then you learn how to make a good copy, and then you don’t need to copy at all.
AI has sped through the phase of cheap copies. It’s now firmly established in the realm of good copies. You’re a fool if you don’t believe originality is a likely next step. In all likelihood, it’s a matter of when, not if. (And we already have plenty of early indications that it’s actually already here, on the edges.)
Now, whether that’s good is a different question. Whether we want AI to become truly creative is a fair question — albeit a theoretical or, at best, moral one. Because it’s going to happen if it can happen, and it almost certainly can (or even has).
Ironically, I think the peanut gallery disparaging recent advances — like the Ghibli fever — over minor details in the copying effort will only accelerate the quest toward true creativity. AI builders, like the Japanese and Chinese economies before them, are eager to demonstrate an ability to exceed the originals.
All that is to say that AI is in the "Good Copy" phase of its creative evolution. Expect "The Great Artist" to emerge at any moment.
I've been running Linux, Neovim, and Framework for a year now, but it easily feels like a decade or more. That's the funny thing about habits: They can be so hard to break, but once you do, they're also easily forgotten.
That's how it feels having left the Apple realm after two decades inside the walled garden. It was hard for the first couple of weeks, but since then, it’s rarely crossed my mind.
Humans are rigid in the short term, but flexible in the long term. Blessed are the few who can retain the grit to push through that early mental resistance and reach new maxima.
That is something that gets harder with age. I can feel it. It takes more of me now to wipe a mental slate clean and start over. To go back to being a beginner. But the reward for learning something new is as satisfying as ever.
But it's also why I've tried to be modest with the advocacy. I don't know if most developers are better off on Linux. I mean, I believe they are, at some utopian level, especially if they work for the web, using open source tooling. But I don't know if they are as humans with limited will or capacity for change.
Of course, it's fair to say that one doesn't want to. Either because one remains a fan of Apple, is in dire need of the remaining edge MacBooks retain on efficiency and battery life, or is simply content inside the ecosystem. There are plenty of reasons why someone might not want to change. It's not just about rigidity.
Besides, it's a dead end trying to convince anyone of an alternative with the sharp end of a religious argument. That kind of crusading just seeds resentment and stubbornness. I know that all too well.
What I've found to work much better is planting seeds and showing off your plowshare. Let whatever curiosity that blooms find its own way towards your blue sky. The mimetic engine of persuasion runs much cleaner anyway.
And for me, it's primarily about my personal computing workbench regardless of what the world does or doesn't. It was the same with finding Ruby. It's great when others come along for the ride, but I'd also be happy taking the trip solo too.
So consider this a postcard from a year into the Linux, Neovim, and Framework journey. The sun is still shining, the wind is in my hair, and the smile on my lips hasn't been this big since the earliest days of OS X.
The singularity is the point where artificial intelligence goes parabolic, surpassing humans writ large, and leads to rapid, unpredictable change. The intellectual seed of this concept was planted back in the '50s by early computer pioneer John von Neumann. So it’s been here since the dawn of the modern computer, but I’ve only just come around to giving the idea consideration as something other than science fiction.
Now, this quickly becomes quasi-religious, with all the terms being as fluid as redemption, absolution, and eternity. What and when exactly is AGI (Artificial General Intelligence) or SAI (Super Artificial Intelligence)? You’ll find a million definitions.
But it really does feel like we’re on the cusp of something. Even the most ardent AI skeptics are probably finding it hard not to be impressed with recent advances. Everything Is Ghibli might seem like a silly gimmick, but to me, it flipped a key bit here: the style persistence, solving text in image generation, and then turning those images into incredible moving pictures.
What makes all this progress so fascinating is that it’s clear nobody knows anything about what the world will look like four years from now. It’s barely been half that time since ChatGPT and Midjourney hit us in 2022, and the leaps since then have been staggering.
I’ve been playing with computers since the Commodore 64 entertained my childhood street with Yie Ar Kung-Fu on its glorious 1 MHz processor. I was there when the web made the internet come alive in the mid-'90s. I lined up for hours for the first iPhone to participate in the grand move to mobile. But I’ve never felt less able to predict what the next token of reality will look like.
When you factor in recent advances in robotics and pair those with the AI brains we’re building, it’s easy to imagine all sorts of futuristic scenarios happening very quickly: from humanoid robots finishing household chores à la The Jetsons (have you seen how good it’s getting at folding?) to every movie we watch being created from a novel prompt on the spot, to, yes, even armies of droids and drones fighting our wars.
This is one of those paradigm shifts with the potential for Total Change. Like the agricultural revolution, the industrial revolution, the information revolution. The kind that rewrites society, where it was impossible to tell in advance where we’d land.
I understand why people find that uncertainty scary. But I choose to receive it as exhilarating instead. What good is it to fret about a future you don’t control anyway? That’s the marvel and the danger of progress: nobody is actually in charge! This is all being driven by a million independent agents chasing irresistible incentives. There’s no pause button, let alone an off-ramp. We’re going to be all-in whether we like it or not.
So we might as well come to terms with that reality. Choose to marvel at the accelerating milestones we've been hitting rather than tremble over the next.
This is something most religions and grand philosophies have long since figured out. The world didn’t just start changing; we’ve had these lurches of forward progress before. And humans have struggled to cope with the transition since the beginning of time. So, the best intellectual frameworks have worked on ways to deal.
Christianity has the Serenity Prayer, which I’ve always been fond of:
God, grant me the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference.
That’s the part most people know. But it actually continues:
Living one day at a time, enjoying one moment at a time; accepting hardship as a pathway to peace; taking, as Jesus did, this sinful world as it is, not as I would have it; trusting that You will make all things right if I surrender to Your will; so that I may be reasonably happy in this life and supremely happy with You forever in the next. Amen.
What a great frame for the mind!
The Stoics were big on the same concept. Here’s Epictetus:
Some things are in our control and others not. Things in our control are opinion, pursuit, desire, aversion, and, in a word, whatever are our own actions. Things not in our control are body, property, reputation, command, and, in one word, whatever are not our own actions.
Buddhism does this well too. Here’s the Buddha being his wonderfully brief self:
Suffering does not follow one who is free from clinging.
I don’t think it’s a coincidence that all these traditions converged on the idea of letting go of what you can’t control, not clinging to any specific preferred outcome. Because you’re bound to be disappointed that way. You don’t get to know the script to life in advance, but what an incredible show, if you just let it unfold.
This is the broader view of amor fati. You should learn to love not just your own fate, but the fate of the world — its turns, its twists, its progress, and even the inevitable regressions.
The singularity may be here soon, or it may not. You’d be a fool to be convinced either way. But you’ll find serenity in accepting whatever happens.
We're spending just shy of $1.5 million/year on AWS S3 at the moment to host files for Basecamp, HEY, and everything else. The only way we were able to get the pricing that low was by signing a four-year contract. That contract expires this summer, June 30, so that's our departure date for the final leg of our cloud exit.
We've already racked the replacement from Pure Storage in our two primary data centers: a combined 18 petabytes, securely replicated a thousand miles apart. It's a gorgeous rack full of blazing-fast NVMe storage modules, with each card in the chassis now capable of storing 150TB.
Pure Storage comes with an S3-compatible API, so there's no need for Ceph, MinIO, or any of the other object storage solutions you might otherwise require if you were attempting this exercise on commodity hardware. That makes the swap pretty easy from the app side.
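To make the app-side swap concrete, here's a minimal sketch of what pointing the Ruby AWS SDK at an S3-compatible endpoint looks like. The endpoint, credentials, and bucket are hypothetical placeholders, not our actual setup:

```ruby
require "aws-sdk-s3"

# Same SDK, same calls as before — only the endpoint and credentials change.
client = Aws::S3::Client.new(
  endpoint: "https://storage.example.internal",        # hypothetical on-prem endpoint
  access_key_id: ENV.fetch("STORAGE_ACCESS_KEY_ID"),
  secret_access_key: ENV.fetch("STORAGE_SECRET_ACCESS_KEY"),
  region: "us-east-1",    # required by the SDK, typically ignored by S3-compatible stores
  force_path_style: true  # S3-compatible stores usually expect path-style addressing
)

client.put_object(bucket: "files", key: "hello.txt", body: "hello")
puts client.get_object(bucket: "files", key: "hello.txt").body.read
```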
But there's still work to do. We have to transfer almost six petabytes out of S3. In an earlier age, that egress would have cost hundreds of thousands of dollars in fees alone. But now AWS offers a free 60-day egress window for anyone who wants to leave, so that drops the cost to $0. Nice!
It takes a while to transfer that much data, though. Even on the fat 40-Gbit pipe we have set aside for the purpose, it'll probably take at least three weeks, once you factor in overhead and some babysitting of the process.
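The napkin math backs that up. Here's a rough sketch, assuming the link sustains around 70% of its rated speed once you account for that overhead (an assumed figure, not a measured one):

```ruby
# Rough transfer-time estimate: ~6 PB over a 40 Gbit/s link.
bytes       = 6.0 * 1000**5   # six petabytes
link_bps    = 40e9            # 40 Gbit/s
utilization = 0.7             # assumed real-world efficiency after protocol overhead

seconds = (bytes * 8) / (link_bps * utilization)
puts "#{(seconds / 86_400).round(1)} days"   # => ~19.8 days, i.e. about three weeks
```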
That's when it's good to remind ourselves why June 30th matters. And the reminder math pencils out in nice, round numbers for easy recollection: If we don't get this done in time, we'll be paying a cool five thousand dollars a day to continue using S3 (if all the files are still there). Yikes!
That's $35,000/week! That's $150,000/month!
Pretty serious money for a company of our size. But so are the savings. Over five years, they'll add up to almost five million dollars! Maybe even more, depending on the growth in files we need to store for customers. And that's after accounting for the roughly $1.5 million for the Pure Storage hardware, plus a bit less than a million over five years for warranty and support.
But those big numbers always seem a bit abstract to me. The idea of paying $5,000/day, if we miss our departure date, is awfully concrete in comparison.
Immortality always sounded like a curse to me. But especially now, having passed the halfway point of the life expectancy for a wealthy male. Another scoop of life as big as the one I've already been served seems more than enough, thank you very much.
Does that strike you as morbid?
It's funny, people seem to have no problem understanding satiation when it comes to the individual parts of life. Enough delicious cake, no more rides on the rollercoaster, the end of a great party. But not life itself.
Why?
The eventual end strikes me as beautiful relief. Framing the idea that you can see enough, do enough, be enough. And have enjoyed the bulk of it, without wanting it to go on forever.
Have you seen Highlander? It got panned on its initial release in the 80s. Even Sean Connery couldn't save it with the critics at the time. But I love it. It's one of my all-time favorite movies. It's got a silly story about a worldwide tournament of immortal Highlanders who live forever, unless they get their heads chopped off, and then the last man standing wins... more life?
Yeah, it doesn't actually make a lot of sense. But it nails the sadness of forever. The loneliness, the repetition, the inevitable cynicism with humanity. Who wants to live forever, indeed.
It's the same theme in Björk's wonderfully melancholic song I've Seen It All. It's a great big world, but eventually every unseen element will appear as but a variation on an existing theme. Even surprise itself will succumb to familiarity.
Even before the last day, you can look forward to finality, too. I love racing, but I'm also drawn to the day when the reflexes finally start to fade, and I'll hang up the helmet. One day I will write the last line of Ruby code, too. Sell the last subscription. Write the last tweet. How merciful.
It gets harder with people you love, of course. Harder to imagine the last day with them. But I didn't know my great-great-grandfather, and can easily picture him passing with the satisfaction of seeing his lineage carry on without him.
One way to think of this is to hold life with a loose grip. Like a pair of drumsticks. I don't play, but I'm told that the music flows better when you avoid strangling them in a death grip. And then you enjoy keeping the beat until the song ends.
The average age of Apple's board members is 68! Nearly half are over 70, and the youngest is 63. It’s not much better with the executive team, where the average age hovers around 60. I’m all for the wisdom of our elders, but it’s ridiculous that the world’s premier tech company is now run by a gerontocracy.
And I think it’s starting to show. The AI debacle is just the latest example. I can picture the board presentation on Genmoji: “It’s what the kids want these days!!”. It’s a dumb feature because nobody on Apple’s board or in its leadership has probably ever used it outside a quick demo.
I’m not saying older people can’t be an asset. Hell, at 45, I’m no spring chicken myself in technology circles! But you need a mix. You need to blend fluid and crystallized intelligence. You need some people with a finger on the pulse, not just some bravely keeping one.
Once you see this, it’s hard not to view slogans like “AI for the rest of us” through that lens. It’s as if AI is like programming a VCR, and you need the grandkids to come over and set it up for you.
By comparison, the average age on Meta’s board is 55. They have three members in their 40s. Steve Jobs was 42 when he returned to Apple in 1997. He was 51 when he introduced the iPhone. And he was gone — from Apple and the world — at 56.
Apple literally needs some fresh blood to turn the ship around.
I grew up in the 80s in Copenhagen and roamed the city on my own from an early age. My parents rarely had any idea where I went after school, as long as I was home by dinner. They certainly didn’t have direct relationships with the parents of my friends. We just figured things out ourselves. It was glorious.
That’s not the type of childhood we were able to offer our kids in modern-day California. Having to drive everywhere is, of course, its own limitation, but that’s only half the problem. The other half is the expectation that parents are involved in almost every interaction. Play dates are commonly arranged via parents, even for fourth or fifth graders.
The new hysteria over smartphones doesn’t help either, as it cuts many kids off from being able to make their own arrangements entirely (since the house phone has long since died too).
That’s not how my wife grew up in the 80s in America either. The United States of that age was a lot like what I experienced in Denmark: kids roaming around on their own, parents blissfully unaware of where their offspring were much of the time, and absolutely no expectation that parents would arrange play dates or even sleepovers.
I’m sure there are still places in America where life continues like that, but I don’t personally know of any parents who are able to offer that 80s lifestyle to their kids — not in New York, not in Chicago, not in California. Maybe this life still exists in Montana? Maybe it’s a socioeconomic thing? I don’t know.
But what I do know is that Copenhagen is still living in the 80s! We’ve been here off and on over the last several years, and just today, I was struck by the fact that one of our kids had left school after it ended early, biked halfway across town with his friend, and was going to spend the day at his place. And we didn’t get an update on that until much later.
Copenhagen is a compelling city in many ways, but if I were to credit why the US News and World Report just crowned Denmark the best country for raising children in 2025, I’d say it’s the independence — carefree independence. Danish kids roam their cities on their own, manage their social relationships independently, and do so in relative peace and safety.
I’m a big fan of Jonathan Haidt’s work on What Happened In 2013, which he captured in The Coddling of the American Mind. That was a very balanced book, and it called out the lack of unsupervised free play and independence as key contributors to the rise in child fragility.
But it also pinned smartphones and social media with a large share of the blame, despite the fact that the effect, especially on boys, is very much a source of ongoing debate. I'm not arguing that excessive smartphone usage — and certainly social-media brain rot — is good for kids, but this explanation is proving too easy a scapegoat for all the ills plaguing American youth.
And it certainly seems like upper-middle-class American parents have decided that blaming the smartphone for everything is easier than interrogating the lack of unsupervised free play, rough-and-tumble interactions for boys, and early childhood independence.
It also just doesn’t track in countries like Denmark, where the smartphone is just as prevalent, if not more so, than in America. My oldest had his own phone by third grade, and so did everyone else in his class — much earlier than Haidt recommends. And it was a key tool for them to coordinate the independence that The Coddling of the American Mind called for more of.
Look, I’m happy to see phones parked during school hours. Several schools here in Copenhagen do that, and there’s a new proposal pending in parliament to make that the law across the land. Fine!
But I think it’s delusional of American parents to think that banning the smartphone — further isolating their children from independently managing their social lives — is going to be the one quick fix that cures the anxious generation.
What we need is more 80s-style freedom and independence for kids in America.
When things are going well, managers can fool themselves into thinking that people trying their best is all that matters. Poor outcomes are just another opportunity for learning! But that delusion stops working when the wheels finally start coming off — like they have now for Apple and its AI unit. Then you need someone who cares about the outcome above the effort. Then you need an asshole.
In management parlance, an asshole is someone who cares less about feelings or effort and more about outcomes. Steve Jobs was one such asshole. So seems to be Musk. Gates certainly was as well. Most top technology chiefs who've had to really fight in competitive markets for the top prize fall into this category.
Apple's AI management is missing an asshole:
Walker defended his Siri group, telling them that they should be proud. Employees poured their “hearts and souls into this thing,” he said. “I saw so many people giving everything they had in order to make this happen and to make incredible progress together.”
So it's stuck nurturing feelings:
“You might have co-workers or friends or family asking you what happened, and it doesn’t feel good,” Walker said. “It’s very reasonable to feel all these things.” He said others are feeling burnout and that his team will be entitled to time away to recharge to get ready for “plenty of hard work ahead.”
John Gruber from Daring Fireball dug up this anecdote from the last time Apple seriously botched a major software launch:
Steve Jobs doesn’t tolerate duds. Shortly after the launch event, he summoned the MobileMe team, gathering them in the Town Hall auditorium in Building 4 of Apple’s campus, the venue the company uses for intimate product unveilings for journalists. According to a participant in the meeting, Jobs walked in, clad in his trademark black mock turtleneck and blue jeans, clasped his hands together, and asked a simple question: “Can anyone tell me what MobileMe is supposed to do?”
Having received a satisfactory answer, he continued, “So why the fuck doesn’t it do that?”
For the next half-hour Jobs berated the group. “You’ve tarnished Apple’s reputation,” he told them. “You should hate each other for having let each other down.” The public humiliation particularly infuriated Jobs.
Can you see the difference? This is an asshole in action.
Apple needs to find a new asshole and put them in charge of the entire product line. Cook clearly isn't up to the task, and the job is currently spread thinly across a whole roster of senior VPs. Little fiefdoms. This is poison to the integrated magic that was Apple's trademark for so long.
We didn’t used to need an explanation for having kids. That was just life. That’s just what you did. But now we do, because now we don’t.
So allow me: Having kids means making the most interesting people in the world. Not because toddlers or even teenagers are intellectual oracles — although life through their eyes is often surprising and occasionally even profound — but because your children will become the most interesting people to you.
That’s the important part. To you.
There are no humans on earth I’m as interested in as my children. Their maturation and growth are the greatest show on the planet. And having a front-seat ticket to this performance is literally the privilege of a lifetime.
But giving a review of this incredible show just doesn’t work. I could never convince a stranger that my children are the most interesting people in the world, because they wouldn’t be, to them.
So words don’t work. It’s a leap of faith. All I can really say is this: Trust me, bro.
Denmark is technically and officially still a Christian nation. Lutheranism is written into the constitution. The government has a ministry for the church. Most Danes pay 1% of their earnings directly to fund the State religion. But God is as dead here as anywhere in the Western world. Less than 2% attend church service on a weekly basis. So one way to fill the void is through climate panic and piety.
I mean, these days, you can scarcely stroll past stores in the swankier parts of Copenhagen without being met by an endless parade of ads carrying incantations towards sustainability, conservation, and recycling. It's everywhere.
Hilariously, sometimes this even includes recommending that customers don’t buy the product. I went to a pita place for lunch the other day. The menu had a meat shawarma option, and alongside it was a plea not to order it too often because it’d be better for the planet if you picked the falafel instead.
But the hysteria peaks with the trash situation. It’s now common for garbage rooms across Copenhagen to feature seven or more bins for sorting disposals. Despite trash-sorting robots being able to do this job far better than humans in most cases, you see Danes dutifully sorting and subdividing their waste with a pious obligation worthy of the new climate deity.
Yet it’s not even the sorting that gets me — it’s the washing. You can’t put plastic containers with food residue into the recycling bucket, so you have to rinse them first. This leads to the grotesque daily ritual of washing trash (and wasting water galore in the process!).
Plus, most people in Copenhagen live in small apartments, and all that separated trash has to be stored separately until the daily pilgrimage to the trash room. So it piles up all over the place.
This is exactly what Nietzsche meant by “God is dead” — his warning that we’d need to fill the void with another centering orientation toward the world. And clearly, climatism is stepping up as a suitable alternative for the Danes. It’s got guilt, repentance, and plenty of rituals to spare. Oh, and its heretics too.
Look, I'd like a clean planet as much as the next sentient being. I'm not crying any tears over the fact that gas-powered cars are quickly disappearing from the inner-city of Copenhagen. I love biking! I wish we'd get a move on with nuclear for consistent, green energy. But washing or sorting my trash when a robot could do a better job just to feel like "I'm doing my part"? No.
It’s like those damn paper straws that crumble halfway through your smoothie. The point of it all seems to be self-inflicted, symbolic suffering — solely to remind you of your good standing with the sacred lord of recycling, renouncing the plastic devil.
And worse, these small, meaningless acts of pious climate service end up working as Catholic indulgences. We buy a good conscience by washing trash so we don't have to feel guilty about setting new records flying for fun.
I’m not religious, but I’m starting to think it’d be nicer to spend a Sunday morning in the presence of the Almighty than to keep washing trash as pagan replacement therapy.
In a fit of frustration, I wrote the first version of Kamal in six weeks at the start of 2023. Our plan to get out of the cloud was getting bogged down in enterprisey pricing and Kubernetes complexity. And I refused to accept that running our own hardware had to be that expensive or that convoluted. So I got busy building a cheap and simple alternative.
Now, just two years later, Kamal is deploying every single application in our entire heritage fleet, and everything in active development. Finalizing a perfectly uniform mode of deployment for every web app we've built over the past two decades and still maintain.
See, we have this obsession at 37signals: That the modern build-boost-discard cycle of internet applications is a scourge. That users ought to be able to trust that when they adopt a system like Basecamp or HEY, they don't have to fear eviction from the next executive re-org. We call this obsession Until The End Of The Internet.
That obsession isn't free, but it's worth it. It means we're still operating the very first version of Basecamp for thousands of paying customers. That's the OG code base from 2003! Which hasn't seen any updates since 2010, beyond security patches, bug fixes, and performance improvements. But we're still operating it, and, along with every other app in our heritage collection, deploying it with Kamal.
That just makes me smile, knowing that we have customers who adopted Basecamp in 2004 and are still able to use the same system some twenty years later. We've relaunched and dramatically improved Basecamp many times since, but for customers happy with what they have, there's no forced migration to the latest version.
I very much had all of this in mind when designing Kamal. That's one of the reasons I really love Docker. It allows you to encapsulate an entire system, with all of its dependencies, and run it until the end of time. Kind of how modern gaming emulators can run the original ROM of Pac-Man or Pong to perfection and eternity.
Kamal seeks to be but a simple wrapper and workflow around this wondrous simplicity. Complexity is but a bridge — and a fragile one at that. To build something durable, you have to make it simple.
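For a flavor of that simplicity, here's roughly what a minimal Kamal configuration looks like. The service name, image, host, and registry user are made-up placeholders, not one of our actual apps:

```yaml
# config/deploy.yml — a minimal, illustrative Kamal setup
service: heritage-app           # hypothetical service name
image: example/heritage-app     # hypothetical Docker image

servers:
  - 192.168.0.10                # hypothetical host to deploy to

registry:
  username: example-deployer    # hypothetical registry user
  password:
    - KAMAL_REGISTRY_PASSWORD   # read from the environment, never committed
```

From there, `kamal setup` prepares the servers, and `kamal deploy` builds the Docker image, pushes it, and flips traffic to the new version.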
But even in Denmark, thirty years after public opposition to mass immigration started getting real political representation, the consequences of culturally-incompatible descendants of MENAPT immigrants continue to stress the high-trust societal model.
Here are just three major cases that have been covered in the Danish media in 2025 alone:
Danish women are increasingly feeling unsafe in the nightlife. The mayor of the country's third largest city, Odense, says he knows why: "It's groups of young men with an immigrant background that's causing it. We might as well be honest about that." But unfortunately, the only suggestion he had to deal with the problem was that "when [the women] meet these groups... they should take a big detour around them".
A soccer club from the infamous ghetto area of Vollsmose got national attention because every other team in their league refused to play them, due to the team's long history of violent assaults and death threats against opposing teams and referees. Bizarrely, this led to a situation where the team rose to the top of its division, because they'd "win" every forfeited match.
That's an explosive point because it blows up the thesis that time will solve these problems. It shows instead that time actually makes them worse. And then what?
This is particularly pertinent in the analysis of Sweden. After the "far right" Sweden Democrats got into government, new immigrant arrivals have plummeted. But unfortunately, the net share of immigrants is still increasing, in part because of family reunifications, and thus the problems continue.
Meaning even if European countries "close the borders", they're still condemned to deal with the damning effects of maladjusted MENAPT immigrant descendants for decades to come. If the intervention stops there.
There are no easy answers here. Obviously, if you're in a hole, you should stop digging. And Sweden has done just that. But just because you aren't compounding the problem doesn't mean you've found a way out. Denmark proves to be both a positive example of minimizing the digging while also a cautionary tale that the hole is still there.
When the iPhone first appeared in 2007, Microsoft was sitting pretty with their mobile strategy. They'd been early to the market with Windows CE, and they were fast-following the iPod with their Zune. They also had the dominant operating system, the dominant office package, and control of the enterprise. The future on mobile must have looked so bright!
After reliving that Ballmer moment, it's uncanny to watch this CNBC interview from one year ago with Johny Srouji and John Ternus from Apple on their AI strategy. Ternus even repeats the chuckle!! Exuding the same delusional confidence that lost Ballmer's Microsoft any serious part in the mobile game.
But somehow, Apple's problems with AI seem even more dire. Because there's apparently no one steering the ship. Apple has been promising customers a bag of vaporware since last fall, and they're nowhere close to being able to deliver on the shiny concept demos. The ones that were going to make Apple Intelligence worthy of its name, and not just terrible image generation that is years behind the state of the art.
Nobody at Apple seems able or courageous enough to face the music: Apple Intelligence sucks. Siri sucks. None of the vaporware is anywhere close to happening. Yet as late as last week, you have Cook promoting the new MacBook Air with "Apple Intelligence". Yikes.
This is partly down to the org chart. John Giannandrea is Apple's VP of ML/AI, and he reports directly to Tim Cook. He's been in the seat since 2018. But Cook evidently does not have the product savvy to be able to tell bullshit from benefit, so he keeps giving Giannandrea more rope. Now the fella has hung Apple's reputation on vaporware, promised all iPhone 16 customers something magical that just won't happen, and even spec-bumped all their devices with more RAM for nothing but diminished margins. Ouch.
This is what regression to the mean looks like. This is what fiefdom management looks like. This is what having a company run by a logistics guy looks like. Apple needs a leadership reboot, stat. That asterisk is a stain.
Bean counters have a bad rep for a reason. And it’s not because paying attention to the numbers is inherently unreasonable. It’s because weighing everything exclusively by its quantifiable properties is an impoverished way to view business (and the world!).
Nobody presents this caricature better than the MBA types who think you can manage a business entirely in the abstract realm of "products," "markets," "resources," and "deliverables." To hell with that. The death of all that makes for a breakout product or service happens when the generic lingo of management theory takes over.
This is why founder-led operations often keep an edge. Because when there’s someone at the top who actually gives a damn about cars, watches, bags, software, or whatever the hell the company makes, it shows up in a million value judgments that can’t be quantified neatly on a spreadsheet.
Now, I love a beautiful spreadsheet that shows expanding margins, healthy profits, and customer growth as much as any business owner. But much of the time, those figures are derivatives of doing all the stuff that you can’t compute and that won’t quantify.
But this isn’t just about running a better business by betting on unquantifiable elements that you can’t prove but still believe matter. It’s also about the fact that doing so is simply more fun! It’s more congruent. It’s vibe management.
And no business owner should ever apologize for having fun, following their instincts, or trusting that the numbers will eventually show that doing the right thing, the beautiful thing, the poetic thing is going to pay off somehow. In this life or the next.
Of course, you’ve got to get the basics right. Make more than you spend. Don’t get out over your skis. But once there’s a bit of margin, you owe it to yourself to lean on that cushion and lead the business primarily on the basis of good vibes and a long vision.
I developed seasonal allergies relatively late in life. From my late twenties onward, I spent many miserable days in the throes of sneezing, headache, and runny eyes. I tried everything the doctors recommended for relief. About a million different types of medicine, several bouts of allergy vaccinations, and endless testing. But never once did an allergy doctor ask the basic question: What kind of air are you breathing?
Turns out that's everything when you're allergic to pollen, grass, and dust mites! The air. That's what's carrying all this particulate matter, so if your idea of proper ventilation is merely to open a window, you're inviting in your nasal assailants. No wonder my symptoms kept escalating.
For me, the answer was simply to stop breathing air full of everything I'm allergic to while working, sleeping, and generally just being inside. And the way to do that was to clean the air of all those allergens with air purifiers running HEPA-grade filters.
That's it. That was the answer!
After learning this, I outfitted everywhere we live with these machines of purifying wonder: One in the home office, one in the living area, one in the bedroom. All monitored for efficiency using Awair air sensors. Aiming to have the PM2.5 measure read a fat zero whenever possible.
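You can even script the check. Here's a tiny sketch polling an Awair unit's local API (an optional feature you enable in the Awair app); the IP address is a placeholder, and the field names may vary by model:

```ruby
require "net/http"
require "json"

# Poll the sensor's local API for the latest air reading.
AWAIR_URL = URI("http://192.168.1.50/air-data/latest")   # placeholder LAN address

data = JSON.parse(Net::HTTP.get(AWAIR_URL))
pm25 = data["pm25"]

puts pm25.zero? ? "A fat zero. Breathe easy." : "PM2.5 at #{pm25} µg/m³ — check the filters"
```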
It's been over a decade like this now. It's exceptionally rare that I have one of those bad allergy days anymore. It can still happen, of course — if I spend an entire day outside, breathing in allergens in vast quantities. But as with almost everything, the dose makes the poison. Breathing in some allergens, some of the time, is entirely different from breathing in all of them, all of the time.
I think about this often when I see a doctor for something. Here was this entire profession of allergy specialists, and I saw at least a handful of them while I was trying to find a medical solution. None of them even thought about dealing with the environment. The cause of the allergy. Their entire field of view was restricted to dealing with mitigation rather than prevention.
Not every problem, medical or otherwise, has a simple solution. But many problems do, and you have to be careful not to be so smart that you can't see it.
Maybe one day AI will answer every customer question flawlessly, but we're nowhere near that reality right now. I can't tell you how often I've been stuck in some god-forsaken AI loop or phone tree WHEN ALL I WANT IS A HUMAN. So I end up either just yelling "operator", "operator", "operator" (the modern-day mayday!) or smashing zero over and over. It's an unworthy interaction for any premium service.
Don't get me wrong. I'm pretty excited about AI. I've seen it do some incredible things. And of course it's just going to keep getting better. But in our excitement about the technical promise, I think we're forgetting that humans need more than correct answers. Customer service at its best also offers understanding and reassurance. It offers a human connection.
Especially as AI eats the low-end, commodity-style customer support. The sort that was always done poorly, by disinterested people, rapidly churning through a perceived dead-end job, inside companies that only ever saw support as a cost center. Yeah, nobody is going to shed a tear over losing that.
But you know that isn't all there is to customer service. Hopefully you've had a chance to experience what it feels like when a cheerful, engaged human is interested in helping you figure out what's wrong or how to do something right. Because they know exactly what they're talking about. Because they've helped thousands of others through exactly the same situation. That stuff is gold.
Partly because it feels bespoke. A customer service agent who's good at their job knows how to tailor the interaction not just to your problem, but to your temperament. Because they've seen all the shapes. They can spot an angry-but-actually-just-frustrated fit from a thousand miles away. They can tell a timid-but-curious type too. And then deliver exactly what either needs in that moment. That's luxury.
That's our thesis for Basecamp, anyway. That by treating customer service as a career, we'll end up with the kind of agents that embody this luxury, and our customers will feel the difference.
Back in the mid 90s, I had a friend who was really into raytracing, but needed to nurture his hobby on a budget. So instead of getting a top-of-the-line Intel Pentium machine, he bought two AMD K5 boxes, and got a faster rendering flow for less money. All I cared about in the 90s was gaming, though, and for that, Intel was king, so to me, AMD wasn't even a consideration.
And that's how it stayed for the better part of the next three decades. AMD would put out budget parts that might make economic sense in narrow niches, but Intel kept taking all the big trophies in gaming, in productivity, and on the server.
As late as the end of the 2010s, we were still buying Intel for our servers at 37signals. Even though AMD was getting more competitive, and the price-watt-performance equation was beginning to tilt in their favor.
By the early 2020s, though, AMD had caught up on the server, and we haven't bought Intel since. The AMD EPYC line of chips is simply superior to anything Intel offers in our price/performance window. Today, the bulk of our new fleet runs on dual EPYC 9454s for a total of 96 cores(!) per machine. They're awesome.
It's been the same story on the desktop and laptop for me. After switching to Linux last year, I've been all in on AMD. My beloved Framework 13 is rocking an AMD 7640U, and my desktop machine runs on an AMD 7950X. Oh, and my oldest son just got a new gaming PC with an AMD 9900X, and my middle son has an AMD 8945HS in his gaming laptop. It's all AMD in everything!
So why is this? Well, clearly the clever crew at AMD is putting out some great CPU designs lately with Lisa Su in charge. I'm particularly jazzed about the upcoming Framework desktop, which runs the latest Max+ 395 chip, and can apportion up to 110GB of memory as VRAM (great for local AI!). This beast punches a multi-core score that's on par with that of an M4 Pro, and it's no longer that far behind in single-core either. But all the glory doesn't just go to AMD; it's just as much a triumph of TSMC.
TSMC stands for Taiwan Semiconductor Manufacturing Company. They're the world leader in advanced chip making, and key to the story of how Apple was able to leapfrog the industry with the M-series chips back in 2020. Apple has long been the top customer for TSMC, so they've been able to reserve capacity on the latest manufacturing processes (called "nodes"), and as a result had a solid lead over everyone else for a while.
But that lead is evaporating fast. That new Max+ 395 is showing that AMD has nearly caught up in terms of raw grunt, and the efficiency is no longer a million miles away either. This is again largely because AMD has been able to benefit from the same TSMC-powered progress that's also propelling Apple.
But you know who it's not propelling? Intel. They're still trying to get their own chip-making processes to perform competitively, but so far it looks like they're just falling further and further behind. The latest Intel chips are more expensive and run slower than the competition from Apple, AMD, and Qualcomm. And there appears to be no easy fix around the corner to sort it all out.
TSMC really is lifting all the boats behind its innovation locks. Qualcomm, just like AMD, has nearly caught up to Apple with its latest chips. The 8 Elite unit in my new Samsung S25 is faster than the A18 Pro in the iPhone 16 Pro in multi-core tests, and very close in single-core. It's also just as efficient now.
This is obviously great for Android users, who for a long time had to suffer the indignity of truly atrocious CPU performance compared to the iPhone. It was so bad for a while that we had to program our web apps differently for Android, because they simply didn't have the power to run JavaScript fast enough! But that's all history now.
But as much as I now cheer for Qualcomm's chips, I'm even more chuffed about the fact that AMD is on a roll. I spend far more time in front of my desktop than I do any other computer, and after dumping Apple, it's a delight to see that the M-series advantage is shrinking to irrelevance fast. There's of course still the software reason for why someone would pick Apple, and they continue to make solid hardware, but the CPU playing field is now being leveled.
This is obviously a good thing if you're a fan of Linux, like me. Framework in particular has built a credible alternative to the sleek, unibody, but ultimately disposable nature of the reigning MacBook laptops. By focusing on repairability, upgradeability, and superior keyboards, we finally have developer laptops that don't just feel like cheap copies of a MacBook. And thanks to AMD pushing the envelope, these machines are rapidly closing the remaining gaps in performance and efficiency.
And oh how satisfying it must be to sit as CEO of AMD now. The company was founded just one year after Intel, back in 1969, but for its entire existence, it's lived in the shadow of its older brother. Now, thanks to TSMC, great leadership from Lisa Su, and a crack team of chip designers, they're finally reaping the rewards. That is one hell of a journey to victory!
So three cheers for AMD! A tip of the hat to TSMC. And what a gift to developers and computer enthusiasts everywhere that Apple once more has some stiff competition in the chip space.
One of the key roles The New York Times plays in American society is as guardian of the liberal Overton window. Its editorial line sets the terms for what's permissible to discuss in polite circles on the center left. Whether it's covid mask efficacy, trans kids, or, now, mass immigration. When The New York Times allows the counterargument to liberal orthodoxy to be published, it signals to its readers that it's time to pivot.
On mass immigration, the center-left liberal orthodoxy has for the last decade in particular been that this is an unreserved good. It's cultural enrichment! It's much-needed workers! It's a humanitarian imperative! Any opposition was treated as de facto racism, and the idea that a country would enforce its own borders as evidence of early fascism. But that era is coming to a close, and The New York Times is using The Danish Permission to prepare its readers for the end.
As I've often argued, Denmark is an incredibly effective case study in such arguments, because it's commonly thought of as the holy land of progressivism. Free college, free health care, amazing public transit, obsessive about bikes, and a solid social safety net. It's basically everything people on the center left ever thought they wanted from government. In theory, at least.
In practice, all these government-funded benefits come with a host of trade-offs that many upper middle-class Americans (the primary demographic for The New York Times) would find difficult to swallow. But I've covered that in detail in The reality of the Danish fairytale, so I won't repeat that here.
Instead, let's focus on the fact that The New York Times is now begrudgingly admitting that the main reason Europe has turned to the right, in election after election recently, is due to the problems stemming from mass immigration across the continent and the channel.
For example, here's a bit about immigrant crime being higher:
Crime and welfare were also flashpoints: Crime rates were substantially higher among immigrants than among native Danes, and employment rates were much lower, government data showed.
It wasn't long ago that recognizing higher crime rates among MENAPT immigrants to Europe was seen as a racist dog whistle. And every excuse imaginable was leveled at the undeniable statistics showing that immigrants from countries like Tunisia, Lebanon, and Somalia are committing violent crime at rates 7-9 times higher than ethnic Danes (and that these statistics are essentially the same in Norway and Finland too).
Or how about this one: Recognizing that many immigrants from certain regions were loafing on the welfare state in ways that really irked the natives:
One source of frustration was the fact that unemployed immigrants sometimes received resettlement payments that made their welfare benefits larger than those of unemployed Danes.
Or the explicit acceptance that a strong social welfare state requires a homogeneous culture in order to sustain the trust needed for its support:
Academic research has documented that societies with more immigration tend to have lower levels of social trust and less generous government benefits. Many social scientists believe this relationship is one reason that the United States, which accepted large numbers of immigrants long before Europe did, has a weaker safety net. A 2006 headline in the British publication The Economist tartly summarized the conclusion from this research as, “Diversity or the welfare state: Choose one.”
Diversity or welfare! That again would have been an absolutely explosive claim to make not all that long ago.
Finally, there's the acceptance that cultural incompatibility, such as on the role of women in society, is indeed a problem:
Gender dynamics became a flash point: Danes see themselves as pioneers for equality, while many new arrivals came from traditional Muslim societies where women often did not work outside the home and girls could not always decide when and whom to marry.
It took a while, but The New York Times is now recognizing that immigrants from some regions really do commit vastly more violent crime, are net-negative contributors to state budgets (by drawing benefits at higher rates and being unemployed more often), and that, together with the cultural incompatibilities, this ends up undermining public trust in the shared social safety net.
The consequence of this admission is dawning not only on The New York Times, but also on other liberal entities around Europe:
Tellingly, the response in Sweden and Germany has also shifted... Today many Swedes look enviously at their neighbor. The foreign-born population in Sweden has soared, and the country is struggling to integrate recent arrivals into society. Sweden now has the highest rate of gun homicides in the European Union, with immigrants committing a disproportionate share of gun violence. After an outburst of gang violence in 2023, Ulf Kristersson, the center-right prime minister, gave a televised address in which he blamed “irresponsible immigration policy” and “political naïveté.” Sweden’s center-left party has likewise turned more restrictionist.
All these arguments are in service of the article's primary thesis: To win back power, the left, in Europe and America, must pivot on mass immigration, like the Danes did. Because only by doing so are they able to counter the threat of "the far right".
The piece does a reasonable job accounting for the history of this evolution in Danish politics, except for the fact that it leaves out the main protagonist. The entire account is written from the self-serving perspective of the Danish Social Democrats, and it shows. It tells a tale of how it was actually Social Democrat mayors who first spotted the problems, and well, it just took a while for the top of the party to correct. Bullshit.
The real reason the Danes took this turn is that "the far right" won in Denmark, and the Danish People's Party deserves the lion's share of the credit. They started in 1995, quickly set the agenda on mass immigration, and by 2015, they were the second largest party in the Danish parliament.
Does that story ring familiar? It should. Because it's basically what's been happening in Sweden, France, Germany, and the UK lately. The mainstream parties ignored their electorates' grave concerns about mass immigration, and only when "the far right" surged as a result did the center-left and center-right parties grow interested in changing course.
Now on some level, this is just democracy at work. But it's also hilarious that this process, where voters choose parties that champion the causes they care about, has been labeled The Grave Threat to Democracy in recent years. Whether it's Trump, Le Pen, Weidel, or Kjærsgaard, they've all been met with contempt or worse for channeling legitimate voter concerns about immigration.
I think this is the point that's sinking in at The New York Times. Opposition to mass immigration and multi-culturalism in Europe isn't likely to go away. The mayhem that's swallowing Sweden is a reality too obvious to ignore. And as long as the center left keeps refusing to engage with the topic honestly, and instead hides behind some anti-democratic firewall, they're going to continue to lose terrain.
Again, this is how democracies are supposed to work! If your political class is out of step with the mood of the populace, they're supposed to lose. And this is what's broadly happening now. And I think that's why we're getting this New York Times pivot. Because losing sucks, and if you're on the center left, you'd like to see that end.
One of the biggest mistakes that new startup founders make is trying to get away from the customer-facing roles too early. Whether it's customer support or it's sales, it's an incredible advantage to have the founders doing that work directly, and for much longer than they find comfortable.
The absolute worst thing you can do is hire a salesperson or a customer service agent too early. You'll miss all the golden nuggets that customers throw at you for free when they're rejecting your pitch or complaining about the product. Seeing these reasons paraphrased or summarized destroys all the nutrients in the insight. You want that whole-grain feedback straight from the customer's mouth!
When we launched Basecamp in 2004, Jason was doing all the customer service himself. And he kept doing it like that for three years!! By the time we hired our first customer service agent, Jason was doing 150 emails/day. The business was doing millions of dollars in ARR. And Basecamp got infinitely better, both as a market proposition and as a product, because Jason could funnel all that feedback into decisions and positioning.
For a long time after that, we did "Everyone on Support". Frequently rotating programmers, designers, and founders through a day of answering emails directly to customers. The dividends of doing this were almost as high as having Jason run it all in the early years. We fixed an incredible number of minor niggles and annoying bugs because programmers found it easier to solve the problem than to apologize for why it was there.
It's not easy doing this! Customers often offer their valuable insights wrapped in rude language, unreasonable demands, and bad suggestions. That's why many founders quit the business of dealing with them at the first opportunity. That's why few companies ever do "Everyone On Support". That's why there's such eagerness to reduce support to an AI-only interaction.
But quitting dealing with customers early, not just in support but also in sales, is an incredible handicap for any startup. You don't have to do everything that every customer demands of you, but you should certainly listen to them. And you can't listen well if the sound is being muffled by early layers of indirection.
Most of our cultural virtues, celebrated heroes, and catchy slogans align with the idea of "never give up". That's a good default! Most people are inclined to give up too easily, as soon as the going gets hard. But it's also worth remembering that sometimes you really should fold, admit defeat, and accept that your plan didn't work out.
But how to distinguish between a bad plan and insufficient effort? It's not easy. Plenty of plans look foolish at first glance, especially to people without skin in the game. That's the essence of a disruptive startup: The idea ought to look a bit daft at first glance or it probably doesn't carry the counter-intuitive kernel needed to really pop.
Yet it's also obviously true that not every daft idea holds the potential to be a disruptive startup. That's why even the best venture capital investors in the world are wrong far more than they're right. Not because they aren't smart, but because nobody is smart enough to predict (the disruption of) the future consistently. The best they can do is make long bets, and then hope enough of them pay off to fund the ones that don't.
So far, so logical, so conventional. A million words have been written by a million VCs about how their shrewd eyes let them see those hidden disruptive kernels before anyone else could. Good for them.
What I'm more interested in is how and when you pivot from a promising bet to folding your hand. When do you accept that no amount of additional effort is going to get that turkey to soar?
I'm asking because I don't have any great heuristics here, and I'd really like to know! Because the ability to fold your hand, and live to play your remaining chips another day, isn't just about startups. It's also about individual projects. It's about work methods. Hell, it's even about politics and societies at large.
I'll give you just one small example. In 2017, Rails 5.1 shipped with new tooling for doing end-to-end system tests, using a headless browser to validate the functionality, as a user would in their own browser. Since then, we've spent an enormous amount of time and effort trying to make this approach work. Far too much time, if you ask me now.
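For those who haven't worked with them, this is roughly the shape of such a test — a sketch in the Rails system-test style, driving a headless browser through the UI (the model and labels here are illustrative, not from our code base):

```ruby
require "application_system_test_case"

# A typical Rails system test: boots the app and drives a real (headless) browser.
class ProjectsTest < ApplicationSystemTestCase
  test "creating a project" do
    visit projects_url

    click_on "New Project"
    fill_in "Name", with: "Launch plan"
    click_on "Create Project"

    # Asserts against the rendered page, just like a user would see it.
    assert_text "Launch plan"
  end
end
```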
This year, we finalized our decision to fold, and gave up on using these types of system tests at the scale we had previously thought made sense. In fact, just last week, we deleted 5,000 lines of code from the Basecamp code base by dropping literally all the system tests that we had carried so diligently for all these years.
I really like this example, because it draws parallels to investing and entrepreneurship so well. The problem with our approach to system tests wasn't that it didn't work at all. If that had been the case, bailing on the approach would have been a no-brainer long ago. The trouble was that it sorta-kinda did work! Some of the time. With great effort. But the juice ultimately wasn't worth the squeeze.
I've seen this trap snap on startups time and again. The idea finds some traction. Enough for the founders to muddle through for years and years. Stuck with an idea that sorta-kinda does work, but not well enough to be worth a decade of their life. That's a tragic trap.
The only antidote I've found to this on the development side is time boxing. Programmers are just as liable as anyone to believe a flawed design can work if given just a bit more time. And then a bit more. And then just double of what we've already spent. The time box provides a hard stop. In Shape Up, it's six weeks. Do or die. Ship or don't. That works.
But what's the right amount of time to give a startup or a methodology or a societal policy? There's obviously no universal answer, but I'd argue that whatever the answer, it's "less than you think, less than you want".
Having the grit to stick with the effort when the going gets hard is a key trait of successful people. But having the humility to give up on good bets turned bad might be just as important.
Trump is doing Europe a favor by revealing the true cost of its impotency. Because, in many ways, he has the manners and the honesty of a child. A kid will just blurt out in the supermarket "why is that lady so fat, mommy?". That's not a polite thing to ask within earshot of said lady, but it might well be a fair question and a true observation! Trump is just as blunt when he essentially asks: "Why is Europe so weak?".
Because Europe is weak, spiritually and militarily, in the face of Russia. It's that inherent weakness that's breeding the delusion that Russia is at once both on its last legs, about to lose the war against Ukraine any second now, and also the all-powerful superpower that could take over all of Europe, if we don't start World War III to counter it. This is not a coherent position.
The big cats in the international jungle don't stick to a rules-based order purely out of higher principles, but out of self-preservation. And they can smell weakness like a tiger smells blood. This goes for Europe too. All too happy to lecture weaker countries they do not fear on high-minded ideals of democracy and free speech, while standing aghast and weeping powerlessly when someone stronger returns the favor.
I'm not saying that this is right, in some abstract moral sense. I like the idea of a rules-based order. I like the idea of territorial sovereignty. I even like the idea that the normal exchanges between countries isn't as blunt and honest as those of a child in the supermarket. But what I like and "what is" need separating.
Europe simply can't have it both ways. Be weak militarily, utterly dependent on an American security guarantee, and also expect a seat at the big-cat table. These positions are incompatible. You either get your peace dividend (and the freedom to squander it on net-zero nonsense) or you get to have a say in how the world around you is organized.
Which brings us back to Trump doing Europe a favor. For all his bluster and bullying, America is still a benign force in its relation to Europe. We're being punked by someone from our own alliance. That's a cheap way of learning the lesson that weakness, impotence, and peace-dividend thinking make for a short-term strategy. Russia could teach Europe a far more costly lesson. So too could China.
All that to say is that Europe must heed the rude awakening from our cowboy friends across the Atlantic. They may be crude, they may be curt, but by golly, they do have a point.
Get jacked, Europe, and you'll no longer get punked. Stay feeble, Europe, and the indignities won't stop with being snubbed in Saudi Arabia.