After years of looking exactly the same, this blog finally got a fresh coat of paint. I’ve switched things over to a darker look: black background, light text, and a more code‑friendly feel that matches how I actually spend my time these days. If you’re reading this at night or in a dim room, it should be a little easier on the eyes than the old blinding white theme I set up years ago and never touched again.

Behind the scenes, I also moved the site off WordPress.com and over to a local provider. That gives me more control over backups, themes, and customization, and lets me treat this blog a bit more like the rest of my projects instead of something I only poke at every few years.

Nothing dramatic is changing about the content, but the plumbing and paint are finally caught up with the present. If something looks weird or broken in the new setup, feel free to let me know—and thanks for still stopping by after all this time.
The Trillion-Dollar Lie: Ilya’s Testimony and the Altman Conundrum
Let’s be clear: we’ve always known that the builders of our world-class, society-altering infrastructure were flawed. The railroad barons, the telecom giants, the oil magnates—their ambitions were often matched only by their ruthlessness.
But in the age of AI, the stakes are different. We’re not just laying track or stringing cable; we’re building the potential substrate of all future human thought and society. The person steering that ship, we hope, would be held to a higher standard.
The recent testimony from Ilya Sutskever in the Elon Musk vs. OpenAI lawsuit shatters that hope, and reveals a problem that is both mundane and existentially terrifying: the man at the helm of this transformation, Sam Altman, allegedly has a huge lying problem.
And what’s most alarming isn’t just the accusation, but the collective shrug from his defenders.
The Testimony: Not a Misunderstanding, but a Pattern
The legal documents are dry, but the content is explosive. Ilya Sutskever, OpenAI’s former Chief Scientist and a board member at the time, stated under oath that the board’s decision to fire Altman in November 2023 was due to a “breakdown in the trust and communications between the board and Mr. Altman.”
He didn’t say “a disagreement over strategy.” He didn’t cite “differing visions for AGI safety.” He cited a breakdown in trust. Specifically, the board could no longer trust Altman to be consistently honest with them.
This wasn’t about one lie. It was about a pattern—a “multiplicity of examples,” as one report put it—where Altman was allegedly not candid, making it impossible for the board to govern effectively. The very body tasked with ensuring OpenAI’s mission-aligned governance felt it had to launch a corporate coup to perform its duty, all because it couldn’t believe what its CEO was saying.
The Stakes: This Isn’t a Normal Startup
We need to pause and absorb the dissonance here.
On one hand, you have Sam Altman, the global ambassador for AI, courting trillions of dollars in investment and infrastructure spending from governments and corporations. He is shaping global policy, testifying before Congress, and making promises about building a future that is safe and beneficial for all of humanity. The fabric of our future society is, in part, being woven on his loom.
On the other hand, you have his own board—made up of mission-aligned experts like Ilya and Helen Toner—concluding he is so fundamentally untrustworthy that he must be removed immediately for the good of the mission.
This isn’t a typical “move fast and break things” startup culture clash. This is the equivalent of the head of the International Atomic Energy Agency being fired by his own scientists for being loose with the facts about safety protocols. The potential consequences are not a failed app; they are, in the most extreme but not-unthinkable scenarios, catastrophic.
The Defense: “I Don’t Care, He Gets Shit Done”
Perhaps the most telling part of this whole saga is the nature of the defense for Sam Altman. As one observer aptly noted, you don’t see many people jumping to say, “He doesn’t have a huge lying problem.”
Instead, the defense maps almost perfectly to: “I don’t care, he gets shit done.”
The employee revolt that reinstated Altman, the support from major investors—it all signaled that the perceived ability to execute and create value (or, let’s be frank, monetary value) outweighed a deficit of trust at the very top. The mission of “ensuring that artificial general intelligence benefits all of humanity” was, in a moment of crisis, subordinated to the cult of execution.
This is a devil’s bargain that Silicon Valley has made before, but never with a technology of this magnitude. We’ve accepted the “brilliant jerk” genius to give us our next social network or smartphone. Are we really willing to accept it for the technology that could redefine consciousness itself?
The Precedent We’re Setting
The message this sends is chilling. It tells future leaders in the AI space that transparency and consistent honesty are secondary to velocity and fundraising. It tells boards that if they try to hold a charismatic, high-value CEO accountable for a “pattern of lying,” they may be the ones who are ousted.
We are institutionalizing a dangerous precedent at the worst possible time.
The Ilya testimony isn’t just a juicy piece of corporate drama. It’s a stark warning. It suggests that the architect of our AI future operates in a cloud of alleged deception, and that a large portion of the ecosystem building that future is perfectly willing to look the other way.
The question is no longer if Sam Altman has a lying problem. The question, posed by his own chief scientist under oath, is whether we should care. And in our collective answer, we are deciding what kind of future we are truly building.
My Split Heart: Why I’m Defensive of the Linux That Saved Me
There’s a war going on inside me, and it’s fought in terminal commands and neural networks.
On one hand, I am euphoric. The gates have been blown wide open. For decades, the biggest barrier to entry for Linux wasn’t the technology itself—it was the gatekeeping, the assumed knowledge, the sheer terror of being a “moron” in a world of geniuses. You’d fumble with a driver, break your X server, and be met not with a helpful error message, but with a cryptic string of text that felt like the system mocking you.
But now? AI has changed the game. That same cryptic error message can be pasted into a chatbot and, in plain English, you get a step-by-step guide to fix it. You can ask, “How do I set up a development environment for Python on Ubuntu?” and get a coherent, working answer. The barrier of “having to already be an expert to become an expert” is crumbling. It’s a beautiful thing. I want to throw the doors open and welcome everyone in. The garden is no longer a walled fortress; it’s a public park, and I want to be the guy handing out maps.
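To make that concrete: ask that Ubuntu question today and the answer that comes back looks roughly like this. A minimal sketch, assuming a recent Ubuntu with apt and nothing Python-related installed yet; the project path is just an example, not a convention:

```bash
# Install Python 3, the venv module, and pip from the Ubuntu repositories
sudo apt update
sudo apt install -y python3 python3-venv python3-pip

# Create an isolated virtual environment for a project (path is illustrative)
python3 -m venv ~/projects/myapp/.venv

# Activate it, then install dependencies inside it instead of system-wide
source ~/projects/myapp/.venv/bin/activate
pip install --upgrade pip
```

A few commands and a venv, explained in plain English, in seconds—no mailing-list hazing, no evening of forum archaeology.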
But the other part of my heart, the older, more grizzled part, is defensive. It’s protective. It feels a pang of something I can’t fully explain when I see this new, frictionless entry.
Because Linux, for me, wasn’t frictionless. It was friction that saved my life.
I was a kid when I first booted into a distribution I’d burned onto a CD-R. It was clunky. It was slow. Nothing worked out of the box. But for a kid who felt out of place, who was searching for a sense of agency and control in a confusing world, it was a revelation. Here was a system that didn’t treat me like a consumer. It treated me like a participant. It demanded that I learn, that I struggle, that I understand.
Fixing that broken X server wasn’t just a task; it was a trial by fire. Getting a sound card to work felt like summiting a mountain. Every problem solved was a dopamine hit earned through sheer grit and persistence. I wasn’t just using a computer; I was communicating with it. I was learning its language. In a world that often felt chaotic and hostile, the terminal was a place of logic. If you learned the rules, you could make it obey. You could build things. You could break things, and more importantly, you could fix them.
That process—the struggle—forged me. It taught me problem-solving, critical thinking, and a deep, fundamental patience. It gave me a confidence that came not from being told I was smart, but from proving it to myself by conquering a system that asked no quarter and gave none. In many ways, the command line was my first therapist. It was a space where my problems had solutions, even if I had to dig for them.
So when I see AI effortlessly dismantling those very same struggles, I feel a strange, irrational bias. It’s the bias of a veteran who remembers the trenches, looking at new recruits with high-tech gear. A part of me whispers, “They didn’t earn their stripes. They don’t know what it truly means.”
I know this is a fallacy. It’s the “I walked uphill both ways in the snow” of our community. The goal was never the suffering; the goal was the empowerment. If AI can deliver that empowerment without the unnecessary pain, that is a monumental victory.
But my love for Linux is tangled up in that pain. It’s personal. It’s the technology that literally saved me by giving me a world I could control and a community I could belong to. I am defensive of it because it’s a part of my identity. I feel a need to protect its history, its spirit, and the raw, hands-on knowledge that feels sacred to me.
So here I am, split.
One hand is extended, waving newcomers in, thrilled to see the community grow and evolve in ways I never dreamed possible. “Come on in! The water’s fine! Don’t worry, the AI lifeguard is on duty.”
The other hand is clenched, resting protectively on the old, heavy textbooks and the logs of a thousand failed compile attempts, guarding the memory of the struggle that shaped me.
Perhaps the reconciliation is in understanding that the soul of Linux was never the difficulty. It was the freedom, the curiosity, and the empowerment. The tools are just changing. The spirit of a kid in a bedroom, staring at a blinking cursor, ready to tell the machine what to do—that remains. And if AI helps more people find that feeling, then maybe my defensive, split heart can finally find peace.
The gates are down. The garden is open. And I’ll be here, telling stories about the old walls, even as I help plant new flowers for everyone to enjoy.
Great domain planning, Microsoft

Microsoft, what on fucking earth are you doing?
How could you think this is a good idea —
https://tasks.microsoft.com → Outlook
https://tasks.office.com → Planner
Picture it:
“We’re really aligning the Tasks strategy under a unified vision of cross-platform productivity.”
“Great! So… two separate domains?”
“Exactly.”
Dozens of PMs, architects, designers, and engineers probably sat in Teams calls nodding at slides with flowcharts explaining why the Outlook Tasks experience needed to live under microsoft.com while Planner Tasks deserved its own shiny office.com home. Because, you know, user clarity.
Meanwhile every DevOps person on earth is just trying to figure out why half their integrations break depending on which URL someone fat-fingered into a webhook.
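If you want to see the split for yourself, checking where each host sends you is enough. A rough sketch, assuming both endpoints answer an unauthenticated request with a redirect (behavior can vary by tenant and sign-in state):

```bash
# HEAD-request each host and print only the redirect target it hands back
curl -sI https://tasks.microsoft.com | grep -i '^location'
curl -sI https://tasks.office.com   | grep -i '^location'
```

Same word, same company, two different products on the other end.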
Somewhere there’s a PowerPoint deck titled “Unifying the Task Experience” that’s been in circulation since 2018.