There’s a line I read this week that stopped me: encryption, Doctorow says, makes it possible to scramble a photo so thoroughly that even if every hydrogen atom in the universe were made into a computer, we still wouldn’t have enough universe to brute-force the key. I’ve been thinking so much about tools lately—laser cutters, game consoles, dial-driven prototyping toys—that a reminder of what computers really are (universal, stubbornly general-purpose, occasionally dangerous) feels grounding. Maybe that’s the long view I’m supposed to be taking.
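Doctorow’s image is a paraphrase of the classic thermodynamic argument, a version of which Schneier popularized: the Landauer limit puts a floor on the energy needed to flip a single bit, so merely *counting* through a 256-bit key space costs more energy than any plausible computer could gather. A back-of-envelope sketch, with every constant a rough, commonly cited assumption (the 3.2 K operating temperature and the Sun’s annual output are borrowed from Schneier’s well-known version of the calculation):

```python
import math

# Landauer limit: minimum energy to flip one bit at temperature T.
k_B = 1.380649e-23              # Boltzmann constant, J/K
T = 3.2                         # K: a computer running barely above the cosmic background
energy_per_flip = k_B * T * math.log(2)   # joules per bit flip, ~3.1e-23 J

# Lower bound: at least one bit flip per candidate 256-bit key,
# ignoring the (much larger) real cost of testing each key.
energy_to_enumerate = 2**256 * energy_per_flip        # joules

SUN_OUTPUT_PER_YEAR = 1.2e34    # J/yr, rough total energy output of the Sun
years_of_sunlight = energy_to_enumerate / SUN_OUTPUT_PER_YEAR

print(f"~{years_of_sunlight:.0e} years of the Sun's entire output")  # ~3e+20 years
```

Even this absurdly generous lower bound lands around 3×10²⁰ years of the Sun’s total output, against a universe roughly 1.4×10¹⁰ years old—ten orders of magnitude short of the budget before you’ve tested a single key properly.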

A lot of the work I’ve done this week has been about untangling short-term obligations from long-term voyages. Sometimes I confuse the two. Maybe my students do, too.

Meta’s fraud files (and what they say about design)

Doctorow’s dive into Meta’s internal memos reads like a case study in what happens when a platform decides the externalities are someone else’s problem. Fifteen billion scam ads a day. A quota for how much fraud the fraud team is allowed to prevent. My students ask me why I push them toward building actual things—physical prototypes, pieces of software they can explain, not pitch decks. This is why. If the substrate is compromised, the only stable footing you can give yourself is craft.

Relatedly:

I re-read weirder, older ubiquitous computing papers this week—Mark Weiser et al. on “interface agents” back when that meant something closer to a helpful ghost in your machine than a graph transformer wedged into a website sidebar. It pairs strangely well with Amazon’s legal theatrics over Perplexity’s shopping agent. Agents collapse the space between intention and action; platforms depend on that space staying large. That tension feels like it’s about to define a decade.

Hardware, small and joyful

The new USB-C Arduboy showed up in my feed: credit-card-sized, 128×64 OLED, multiplayer over a cable like it’s 1992. I love that this exists. There’s something deeply reassuring about people continuing to build tiny game consoles with honest constraints while the rest of tech culture sprints toward frictionlessness. I’m sketching the next phase of the handheld dial-driven AI toy we’re prototyping—trying to understand how physical metaphors can make generative systems more legible. Sometimes the smallest boards make the biggest ideas possible.

Privacy as a design material

Another thread: Doctorow again, on why the internet—despite having world-class encryption running underneath everything—still feels like the worst privacy landscape we’ve ever built. The short version: policy, not math. I keep thinking about how this affects the way I teach entrepreneurship. Students want to build tools that help people. I want them to understand the architecture they’re building into. If you treat surveillance as a default, you end up designing for compliance instead of possibility.

One last link

This week’s reading included a LessWrong guide for whistleblowers in the U.S. government. Dense, serious, oddly hopeful. The existence of such a guide suggests that the long view is still available to us, if we’re willing to sail in that direction.

Thanks for reading. If any of this sparks something for you—about making, teaching, or the shape of the internet we’re handing off to the next generation—I’d love to hear it.

—Jay