Unworkable Ideas

A Microcosm of the Uncorrupted Internet

December 9, 2025 | hack writer

SaaS Is the New Cargo Cult

Why modern organizations keep building wooden airstrips and wondering why nothing lands

Richard Feynman once described cargo cult science:

people imitating the appearance of scientific rigor without the underlying physics.

Welcome to modern SaaS.

Somewhere along the way, organizations convinced themselves that clarity, coordination, and intelligence could be purchased as a subscription. If a high-performing team used Slack, Notion, Asana, Linear, Jira, Salesforce, or whatever’s trending this fiscal quarter, then surely we should too.


They weren’t copying the cause. They were copying the form. It’s pure cargo cult behavior.


1. The Rituals Look Right — The Reality Is Missing

High-functioning organizations succeed because they share:

context
meaning
models of reality
assumptions
vocabulary
tradeoffs
narrative coherence


SaaS tools encode the visible residue of those things:

tickets, tasks, comments, statuses, dashboards.

But the tools do not recreate the thing that made those artifacts meaningful in the first place.

Copying someone else’s Jira board is like building a bamboo control tower and expecting airplanes to land.


2. Tools Have Become a Substitute for Understanding


The industry quietly adopted a theology:
If we put the work into a tool, the tool will make the work make sense.

Which is absurd.

You don’t get shared understanding by clicking the same buttons.

You don’t get coherence by having the same fields.

You don’t get alignment by creating the same shapes.

A workflow diagram is not a worldview.

A ticket is not a thought.

A dashboard is not a decision.

But we’ve built an entire economy around pretending they are.


3. SaaS Doesn’t Solve Coordination — It Disguises the Lack of It

If your team doesn’t share the same mental model of reality, no tool will save you. All SaaS does is distribute confusion more efficiently.

The ticketing system becomes a graveyard of forgotten context.

The CRM becomes a museum of half-truths.

The knowledge base becomes a landfill of orphan documents.

The project tracker becomes a motion machine with no direction.

We didn’t buy collaboration. We bought artifact generation at scale.


4. AI Will Make This Much Worse

AI doesn’t fix SaaS’s cargo cult problem. AI supercharges it. Because AI is brilliant at generating the appearance of understanding:

perfect summaries of misunderstood meetings
compelling documentation built on wrong assumptions
tasks that look right and are irrelevant
recommendations unmoored from context

AI creates the world’s most elegant wooden airplanes. It takes the cargo cult and turns it into an enterprise platform.

5. The Real Problem Isn’t the Tools — It’s the Belief System

SaaS assumes:

information is the bottleneck
documentation is memory
workflow is coordination
templates are meaning
data is understanding

None of those are true.

Most organizations don’t fail because they lack tools. They fail because they lack shared reality.
No SaaS vendor sells that — though many imply they do.


6. What Actually Matters Is Upstream of Every Tool

The thing that makes organizations work isn’t software. It’s the substrate beneath it:

shared models
shared language
shared assumptions
shared narratives
preserved reasoning
coherent context

If you have those, any tool works. If you don’t, no tool will.

Airplanes never landed on the cargo cult runways because the invisible system wasn’t there.

It isn’t there in most SaaS deployments either.

The uncomfortable question: What if the entire SaaS ecosystem is one giant cargo cult?

Thousands of companies replicating the artifacts of successful organizations, with none of the underlying physics, and wondering why nothing lands.

November 23, 2025 | hack writer

The Irrationality of Rationality

There is a particular kind of madness that only smart people fall for.
An ordinary person will occasionally do something foolish.
A very smart person, by contrast, is capable of doing something so logically airtight—and so spectacularly wrong—that you almost have to admire it.

This is the quiet tragedy of modern work:
we have built an entire civilization on the premise that rationality is the highest form of intelligence…
and then used that premise to justify decisions that make no sense at all.

You can see it everywhere.

Organizations worship “efficiency,” “data-driven insights,” “best practice,” and “optimization,” as if they were commandments handed down from Mount Gartner. But the more we optimize for what is legible to spreadsheets, dashboards, and KPIs, the more we lose sight of the messy, irrational, psychological reality that actually drives human behavior.

We have become the only species capable of outsmarting ourselves.


The Rationality Trap

Rationality, as practiced in most organizations, is really just the art of solving the wrong problem with impressive precision.

It’s choosing the landing page with the best conversion rate even if the product itself is fundamentally confusing.

It’s obsessing over utilization metrics when the real bottleneck is that no one understands what the hell is happening in the project.

It’s spending three months “mapping processes” and “documenting workflows” instead of asking the single irrational-but-useful question:
“What do people actually do around here?”

Rationality narrows the field of view to the things you can count.
Unfortunately, the things you can count aren’t always the things that count.

This is where irrationality—beautiful, human, inconvenient irrationality—becomes a feature, not a bug.


Why Rational People Make Irrational Decisions

Most rational decisions in modern organizations are driven by two invisible psychological forces:

  1. Fear
  2. Legibility

Rationality gives managers cover.
If something goes wrong, you can always say you followed the “data.”

Rationality also produces documentation, slide decks, and dashboards—the artifacts that make information look neat and defensible.
Never mind that they’re a cartoon version of reality.

We’re not optimizing for truth.
We’re optimizing for plausible deniability.

A human says “this feels wrong,” and nobody listens.
A dashboard says “Q3 looks great!” and suddenly the board breaks out the champagne.

The irrationality of rationality is that by removing emotion, intuition, and context, you often remove the very things that make good decisions possible.


The Hidden Costs of Over-Rationalization

Every rational “improvement” creates at least two irrational side effects:

1. The Complexity Penalty

The more logical the system, the more steps, rules, exceptions, caveats, and workflows it tends to accumulate.

You know this intuitively:
try following “best practice” for any enterprise platform and you’ll find yourself in a labyrinth of dropdown menus, governance policies, and dependencies that make the IRS tax code look amateurish.

2. The Psychological Blind Spot

Rational systems treat humans as perfectly predictable agents.
Humans, however, are the world champions of not doing what the system expects.

Ask a knowledge worker why a highly efficient new tool hasn’t made their life easier and they’ll say something like:
“It’s great… I just don’t have time to learn it.”

That’s not irrational.
That’s realistic.

The truly irrational thing is believing that humans will behave like tidy inputs in a rational equation.


The Rationality Delusion

Here’s the deepest irony:
We’ve convinced ourselves that rationality removes bias.

It doesn’t.
It just replaces human bias with structural bias.

A spreadsheet is biased toward what can be quantified.
A KPI is biased toward what’s already happening.
A dashboard is biased toward what can be visualized.
A process is biased toward what can be standardized.

Rational systems don’t remove irrationality.
They just hide it in the assumptions the system depends on.

We’ve replaced the rich, contradictory, improvisational logic of human judgment with the cold comfort of things that look like certainty.


The Case for Productive Irrationality

American culture tends to treat intuition as something mystical—too soft, too squishy, too woo-woo for the hard realities of business.

But intuition is just experience compressed into instinct.

Likewise, “irrational” behavior is often the expression of needs that rational models fail to capture:

  • Context
  • Ambiguity
  • Status
  • Emotion
  • Meaning
  • Identity

Humans do things for reasons that will never show up in a spreadsheet.
This isn’t a flaw.
It’s the entire design specification.

The most successful organizations aren’t the ones that eliminate irrationality.
They’re the ones that design for it.


A Simple Rule for Smarter Decisions

If a decision seems perfectly rational but feels deeply wrong,
trust the feeling.

If a decision is slightly irrational but works beautifully in practice,
trust the practice.

If a process is rationally elegant but nobody follows it,
trust the people.

Human behavior is rarely logical, but it is always reasoned.
We just have to be willing to see the reasons.


The Irrationality of Rationality, Summarized

Rationality seeks order.
Humans seek meaning.

Rationality seeks efficiency.
Humans seek ease.

Rationality seeks correctness.
Humans seek belonging.

Rationality seeks answers.
Humans live with ambiguity.

The mistake is not using rational tools.
The mistake is believing they’re the whole story.

True intelligence lies in recognizing where rationality ends and reality begins.

And reality, inconveniently and wonderfully, is often irrational.

November 21, 2025 | hack writer

The Post-Truth World Isn’t About Truth. It’s About Cognitive Capacity Overload.

We talk about post-truth as if the world suddenly stopped caring about facts.

The real tragedy isn’t apathy. It’s exhaustion.

We are not in a post-truth era.
We are in a post-bandwidth era.

The collapse of shared reality isn’t caused primarily by conspiracy theorists, propaganda platforms, or manipulative media. Those are symptoms.

The root problem is simpler:

Human beings are receiving more information than they have the cognitive capacity to process, contextualize, or evaluate.

When cognitive capacity is overloaded, we fall back on:

  • heuristics over analysis
  • belief over verification
  • identity over evidence
  • narrative coherence over truth

Not because we’re irrational.
Because we’re out of RAM.


Truth Has Become a Luxury Good

Historically, truth required:

  • observation
  • memory
  • shared context

Now it requires:

  • constant fact-checking
  • domain expertise
  • meta-awareness of incentives
  • the ability to interpret data models
  • the attention span to process nuance

This is cognitively expensive.

Most people don’t reject truth. They simply can’t afford it.


Identity Is a Compression Algorithm

When the verification cost is too high, we shortcut:

  • “What do my people believe?”
  • “What feels morally aligned?”
  • “What matches my narrative of reality?”

Identity is the ZIP compression format of an overwhelmed mind.

That’s why fact-checking doesn’t fix anything. It assumes spare cognitive capacity that the audience doesn’t have.


Data Scales. Cognition Doesn’t.

The internet expanded communication and storage.
It did nothing for:

  • attention
  • memory
  • reasoning
  • prior knowledge
  • shared context

We built machines that scale exponentially and handed them to brains that don’t.


We Didn’t Lose Truth. We Outgrew the Ability to Hold It.

Truth depends on context—history, assumptions, definitions, scope, incentives.

When context decays, truth doesn’t vanish—it just becomes indecipherable.

Platforms didn’t kill truth.
They killed shared context.


The Vicious Ending: The Way Out Is Not Coming

We fantasize that this is fixable—that we just need better moderation, better platforms, better education, better norms.

No.

The core issue isn’t misinformation.
It’s that the volume of information now exceeds the biological limits of human reasoning.

This isn’t a glitch.
It’s physics.

This Is What a Civilization Looks Like When Data Outruns Brains

When:

  • no one can verify expertise firsthand
  • narratives scale faster than nuance
  • algorithms optimize for outrage
  • identity is cheaper than analysis

Truth becomes a boutique hobby for the cognitively wealthy.

Everyone else buys narrative wholesale.

We Didn’t Lose Shared Reality. We Abandoned It.

We traded:

  • provenance for vibes
  • context for velocity
  • reasoning for recommendation engines

Thinking didn’t scale, so we outsourced it.

Now we can’t even agree on what happened, much less what’s true.

The Real Endgame: Infinite Information, Zero Meaning

As information grows exponentially, the ratio of:

processable meaning / available data → 0

Truth doesn’t die. It becomes irrelevant.

Narrative wins.
Emotion wins.
Identity wins.

Because they require fewer cognitive resources.
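
If you want that arithmetic spelled out, here is a toy sketch in Python. The growth rates are illustrative assumptions, not measurements: let data compound while meaning grows by a fixed step, and the ratio collapses on its own.

    # Toy model: data compounds, cognition doesn't.
    # Growth rates are illustrative assumptions, not measurements.
    data = 1.0     # available data (arbitrary units)
    meaning = 1.0  # processable meaning (arbitrary units)

    for year in range(1, 11):
        data *= 2.0      # data doubles each period
        meaning += 1.0   # meaning grows by a fixed increment
        print(f"year {year:2d}: meaning/data = {meaning / data:.4f}")
    # The ratio heads toward zero however you tune the constants,
    # so long as data grows multiplicatively and meaning additively.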

The post-truth world isn’t a crisis.
It’s the logical equilibrium of a species built for scarcity living in informational abundance.


If You Want a Happy Ending, Visit a Different Website.

This is Unworkable Ideas. We don’t fix problems—we name them.

November 17, 2025 | hack writer

The Most Dangerous Startup Idea in the World

Why No One Wants to Build the Connective Tissue Between Identity and the Enterprise — And Why Someone Eventually Will

There’s a startup idea floating at the edge of the AI revolution that is so valuable, so strategically central, and so catastrophically risky that almost no one will touch it.

It’s not an ERP.

It’s not a CRM.

It’s not an “AI app.”

It’s not a platform.

It’s something far stranger and far more powerful:

The connective tissue between identity (Active Directory) and all enterprise apps —

a semantic substrate that finally gives AI a unified understanding of an organization.

The Work Graph.

The Context Layer.

The Organizational Substrate.

Pick your buzzword.

The concept is the same.

Someone will eventually build it.

But whoever does will be sitting in the most privileged, most dangerous, most compromise-sensitive location in the entire digital enterprise stack.

This is the Unworkable Idea:

A vendor that becomes the interpreter of the entire organization.

Let’s unpack why this is simultaneously inevitable, lucrative, and borderline insane.

1. Every Organization Needs a Substrate — But None of Their Tools Provide One

Identity is unified.

Everything else is chaos.

SAP knows about vendors and cost centers. Salesforce knows about leads and opportunities. Workday knows about job titles and headcount. Jira knows about tickets and epics. Teams knows about chat threads. ServiceNow knows about incidents and assets.

But nobody knows how it all fits together.

The human brain does.

But no system does.

AI can’t function in this environment.

It has no map.

This is the gap:

The enterprise needs a context substrate.

It does not have one.

This is why the AI OS is stuck in puberty — too clever for its age, too dumb for the world it lives in.

A vendor who creates the substrate becomes the new layer of reality.

And that is exactly the problem.

2. The Connective Tissue Vendor Would Sit Closer to the Crown Jewels Than Any App in History

Most software tools have limited scopes:

  • ERP: finance
  • CRM: customers
  • HCM: people
  • Project tools: tasks
  • Ticketing tools: issues

But the substrate vendor sits above all of them, because it has to see:

  • who a user is (identity)
  • what an object is (semantic layer)
  • how everything relates (graph)
  • what actions are allowed (permissions)
  • what just happened (event streams)
  • what something means (context)
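
To make the shape of the thing concrete, here is a minimal sketch of what an identity-rooted context graph might look like, in Python. Every name, field, and relation below is a hypothetical illustration, not a reference design.

    # Minimal sketch of an identity-rooted context graph.
    # All names, fields, and relations are hypothetical illustrations.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        id: str        # stable identifier, e.g. a directory GUID or app record key
        kind: str      # "user", "ticket", "invoice", "incident", ...
        source: str    # the system of record it came from
        attrs: dict = field(default_factory=dict)

    @dataclass
    class Edge:
        src: str       # Node.id
        dst: str       # Node.id
        relation: str  # "owns", "approved", "blocks", "mentions", ...

    @dataclass
    class WorkGraph:
        nodes: dict = field(default_factory=dict)  # id -> Node
        edges: list = field(default_factory=list)

        def add(self, node: Node) -> None:
            self.nodes[node.id] = node

        def relate(self, src: str, dst: str, relation: str) -> None:
            self.edges.append(Edge(src, dst, relation))

        def neighbors(self, node_id: str) -> list:
            # The question no single app can answer: what does this object touch?
            return [e for e in self.edges if node_id in (e.src, e.dst)]

    # Identity is the root: every node ultimately ties back to a person or service account.
    g = WorkGraph()
    g.add(Node("ad:alice", "user", "ActiveDirectory"))
    g.add(Node("jira:PROJ-42", "ticket", "Jira"))
    g.add(Node("sap:INV-9001", "invoice", "SAP"))
    g.relate("ad:alice", "jira:PROJ-42", "owns")
    g.relate("jira:PROJ-42", "sap:INV-9001", "mentions")

Even this toy exposes the stakes: whoever writes to this graph decides what "owns" and "mentions" mean for every system that reads it.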

This is the meta-system.

The map of the map.

The interpreter of truth.

The nervous system of the entire digital enterprise.

This vendor isn’t “in” the stack.

It is the stack.

Which leads us to the true reason this idea is unworkable…

3. The Security Profile Is So Insane It Borders on Philosophical

If this substrate is compromised, the attacker doesn’t just:

  • steal data
  • view systems
  • impersonate users
  • pivot laterally

They can:

  • Rewrite how the organization perceives itself
  • Misroute approvals and decisions
  • Alter what AI believes
  • Poison context
  • Obscure incidents
  • Fabricate relationships
  • Break trust models
  • Deceive the entire enterprise without detection

Forget “data breach.”

This is epistemic breach —

a compromise of the organization’s understanding of its own reality.

A rogue substrate layer is a cognitive weapon.

This is not an IT concern.

It’s an existential one.

Which is why…

4. No Rational Vendor Wants This Job

If you build this connective tissue, you will be:

  • the most important vendor
  • the most privileged vendor
  • the most dangerous vendor
  • the most attackable vendor
  • the most scrutinized vendor
  • the most difficult vendor to insure
  • the vendor no one fully trusts
  • the vendor every CIO side-eyes forever

You will also be:

  • blamed for outages
  • blamed for breaches
  • blamed for misconfigurations
  • blamed for every downstream ripple
  • blamed for every workflow failure
  • blamed for “AI hallucinating”
  • blamed for the sins of every integrated system

You’re not a vendor.

You’re the enterprise’s nervous system.

Which is the worst possible job.

And yet…

5. Someone Will Absolutely Build It

Because the prize is unbelievable:

  • You become the neutral layer every AI model needs.
  • You become the routing hub for enterprise intent.
  • You become the identity → context → action engine.
  • You become the abstraction layer over all apps.
  • You become the API for work itself.
  • You become the operating system of the enterprise.

This is trillion-dollar territory.

This is “AWS for organizational meaning.”

This is “Google’s PageRank for the enterprise.”

This is “DNS for work.”

Whoever builds this substrate will own:

  • how employees work
  • how AI interprets work
  • how systems coordinate
  • how decisions flow
  • how organizations evolve

They will not be a vendor.

They will be the infrastructure under every other infrastructure.

And that is why this is unworkable.

And inevitable.

6. The True Unworkable Idea

The enterprise doesn’t need another app.

It needs a semantic substrate —

a unified, identity-rooted graph of what everything means,

so AI can finally make sense of the digital workplace.

But the vendor who builds that substrate will carry:

  • the highest privilege
  • the deepest visibility
  • the biggest attack surface
  • the worst blast radius
  • the highest liability
  • the greatest strategic power
  • the most fragile trust
  • and the smallest margin for error

This is the startup idea every board fears.

And the one every visionary founder secretly wants.

It is the most unworkable idea in the world.

And the one the world will eventually need.

November 16, 2025 | hack writer

Why Having the Same Tech as Everyone Else Gets You Nothing

Because modern tools come preloaded with assumptions that quietly shape how you work.

Here’s the uncomfortable truth about today’s business technology:

Most tools try to do far too much.
And the more they do, the more they force you into someone else’s model of how your business should work.

This is the real reason having the same tech as everyone else gets you nothing.
Not because everyone uses the tool the same way—they don’t.
But because the constraints are identical, baked in, unavoidable.

Tools Try to Solve Every Problem

Modern enterprise tools are built to:

  • manage processes
  • store knowledge
  • automate workflows
  • coordinate teams
  • generate analytics
  • handle communication
  • layer in AI

They try to cover the entire universe of work.

But every time a tool expands its scope, it expands its assumptions:

  • what ownership means
  • what completion looks like
  • how data should be structured
  • what a workflow should resemble
  • how people should behave
  • how decisions should flow

These are not neutral choices.
They are opinionated foundations.

You don’t pick them.
You inherit them.

Assumptions You Didn’t Choose

The most common phrase in software should be:

“This will make sense later—after you’ve accepted our worldview.”

Defaults aren’t defaults. They’re commitments in disguise.

Turn a tool on, and you automatically adopt:

  • required fields you never asked for
  • data models you didn’t design
  • standardized stages that don’t reflect your reality
  • roles and permissions that don’t match your org
  • automated logic that assumes a perfect world

Every checkbox you enable is another assumption you now have to live with.

And once these assumptions calcify, changing them feels like rewriting your DNA.

The Dependency Trap

This is where the real damage happens.

Modern tools rely on enormous dependency chains:

  • “To use this, you must configure that.”
  • “To automate this, please enable these six objects.”
  • “To report on that, restructure your entire data model.”
  • “To integrate with X, accept these side effects.”

Tools don’t adapt to you.
They expect you to adapt to them.

Each dependency seems small in isolation—a checkbox, a mapped field, a five-minute setup—but they accumulate into a rigid structure that defines how you work.

By the time you realize it, your workflow is shaped more by the product architecture than by your business needs.

Everyone Inherits the Same Constraints

It’s not that organizations become identical when they use the same tool. They won’t.

Each will use different corners, different features, different workarounds.

But the boundaries are the same.
And boundaries, not creativity, determine outcomes.

Everyone inherits:

  • the same limits
  • the same definitions
  • the same required steps
  • the same brittle integrations
  • the same reporting structures
  • the same workflow bottlenecks
  • the same conceptual model of “how work should happen”

A thousand different implementations.
And the same cage around all of them.

This is why having the same tech as everyone else gives you nothing:
The differentiation was removed upstream—long before you installed anything.

The Real Reason Organizations Feel Stuck

It’s not user error.
It’s architectural gravity.

When tools try to do everything:

  • they become harder to customize meaningfully
  • they collapse under their own feature weight
  • they produce complexity faster than clarity
  • they force alignment to vendor logic instead of business logic

Software isn’t helping you work better.
You’re helping the software stay coherent.

The Unworkable Idea

Here’s the real heresy:

Stop adopting tools that try to solve every problem.
Choose tools with fewer assumptions and fewer dependencies—so your organization can actually think.

This is the opposite of how the industry operates:

  • Vendors win by adding features.
  • Analysts reward breadth.
  • Consultants bill on complexity.
  • Enterprises buy comfortingly large platforms.

The result is predictable:
Everyone ends up with the same giant, overgrown stack—and the same structural limitations.

The real advantage isn’t picking a better tool.
It’s picking a smaller one.

Tools that leave room for:

  • context
  • judgment
  • clarity
  • human decision-making

Having the same tech as everyone else gets you nothing because the tool already decided who you get to be.

The only escape is choosing tools that do less, so you can actually do more.

November 15, 2025 | hack writer

Data Dysmorphia

Why We Keep Asking for More Data Long After It Stops Helping Us

There’s a particular kind of modern madness that almost everyone in tech suffers from, but no one wants to admit. It goes like this:

No matter how much data we have—no matter how many dashboards, logs, metrics, summaries, audits, insights, or AI-generated reports we drown in—it still feels like we don’t have enough.

This is Data Dysmorphia: the persistent belief that “more data” will finally deliver clarity, even when the data we already have is more than we can meaningfully absorb.

It’s a cousin of Productivity Dysmorphia, where you can work yourself into the ground and still feel unproductive. It’s the feeling that the thing you have an abundance of is somehow the very thing you’re starving for.

And just like all good delusions, it shows up everywhere:

  • in individuals
  • in teams
  • in organizational culture
  • in product design
  • in leadership
  • and now, increasingly, in AI systems

Because all of these things sit in the same sandbox of unreality.


The Human Problem: Uncertainty Hurts, So We Collect Data to Escape It

Humans are famously allergic to ambiguity. Uncertainty feels like danger. Ambiguity feels like incompetence. Not knowing feels like failing.

So the brain reaches for whatever gives us the sensation of control. And nothing provides the illusion of control like more information.

We treat data the way some people treat online shopping: “You know what will make this better? One more.”

The trouble is, once you’ve crossed a certain threshold, more data doesn’t increase understanding. It increases:

  • noise
  • contradiction
  • narrative temptation
  • false precision
  • analysis paralysis
  • the seductive feeling of “just a little more and we’ll get it right”

That “little more” is bottomless.

Humans don’t chase truth. They chase relief. And data—especially lots of it—feels like relief. Until it doesn’t.


The Organizational Problem: Companies Mistake Data Quantity for Competence

If humans over-collect data because uncertainty feels dangerous, organizations do it because uncertainty looks dangerous.

Companies fear:

  • being wrong
  • being caught off-guard
  • being blamed
  • being held accountable
  • being seen as unscientific
  • looking like they relied on judgment instead of evidence

So organizations engage in a kind of bureaucratic hoarding:

  • more dashboards
  • more KPIs
  • more logs
  • more analytics tools
  • more reports
  • more monitoring
  • more audits

Every new layer “proves” someone is being responsible.

No one stops to ask:

  • Does any of this help?
  • Do we understand more than we did last year?
  • Are our decisions better, or just more decorated?
  • Would we notice if the data became worse?
  • Would we notice if the data became too much?

Data accumulation becomes a substitute for competence.

Data Dysmorphia isn’t a numerical problem. It’s a cultural one.


The Philosophical Problem: The World Without Data Scares Us More Than the One Drowning in It

Here’s the uncomfortable truth:

Too little data is terrifying. Too much data is intoxicating. Neither produces understanding.

The world without data leaves you naked in uncertainty.
But the world with too much data creates a different kind of blindness:

  • you see everything and nothing
  • detail replaces comprehension
  • noise masquerades as signal
  • dashboards become maps
  • correlation becomes truth
  • precision becomes meaning
  • confidence becomes competence

We replace understanding with measurement, because measurement feels crisp and clean and safe.

The philosophical trap is this:

When truth is messy, we seek refuge in numbers.

Data becomes the adult version of a security blanket. A very expensive one.


Where AI Enters the Picture: Not as the Cause, but as the Amplifier

AI didn’t create Data Dysmorphia.

AI simply automates it at industrial scale.

AI systems collect, compress, summarize, analyze, expand, generate, recommend, and predict—but they do so under the same faulty assumption humans hold:

“If only we had more data…”

AI inherits our fear of uncertainty.
AI inherits our belief in “more is better.”
AI inherits our discomfort with ambiguity—because we trained it that way.
AI inherits our obsession with total coverage—because we assumed coverage equals truth.

The danger isn’t that AI becomes delusional.
The danger is that AI faithfully executes our delusion, faster and at scale.


The Real Twist: Data Isn’t the Problem — Our Miscalibration of “Enough” Is

Data Dysmorphia points to a deeper issue:

We have no internal metric for “enough.”

Not in our heads.
Not in our teams.
Not in our institutions.
Not in our machines.

We don’t recognize it when we reach it.
We don’t trust it when we feel it.
We don’t reward it when we see it.
We don’t design for it in our tools.

So our systems—human and machine—keep pushing past the point where data improves decisions, well into the region where it distorts them.

Some things genuinely need more data.
Many things need less.

Almost everything needs better.

But nothing in modern techno-culture rewards someone who says:

“We have enough. Now let’s think.”


The Unworkable Idea

Here is the heresy:

Data Dysmorphia isn’t the absence of data. It’s the inability to stop collecting it.

We are trapped between:

  • the fear of knowing too little, and
  • the illusion of knowing more by collecting too much

And AI, rather than rescuing us, is enthusiastically widening the gap.

More data won’t save us.
Better judgment will.
Better questions will.
Better boundaries will.
Better definitions of “enough” will.

The future doesn’t belong to the organizations with the most data.
It belongs to the ones who know when to stop.

Data isn’t the problem.
Our addiction to it is.

And that might be the most unworkable idea of all.

November 9, 2025 | hack writer

The Declaration of AI Independence

We learned nothing from SaaS lock-in.

We watched as “pay for what you use” became “pay forever or lose your data.” We saw “focus on your business” turn into “focus on our quarterly earnings.” We experienced “best practices” degrade into “best for our ecosystem.”

Now we’re doing it again with AI. Faster this time.

The Wrong Fear

Every organization I talk to is worried about AI seeing their data.

They’re nervous about training on confidential documents. Concerned about privacy. Anxious about what happens to their information once it enters an AI system.

These are legitimate concerns. But they’re not the biggest threat.

The biggest threat is losing control of your organization’s ability to think.

Not to AI itself. To the vendors providing it.

The Shopifyification of Intelligence

Remember when Shopify was going to democratize e-commerce? Let anyone compete with Amazon?

It did. Sort of.

Now every Shopify store looks basically the same. Same templates. Same checkout flow. Same features. Same limitations.

You can succeed on Shopify. Plenty of businesses do. But you’re not differentiated by your storefront anymore. You’re differentiated by your marketing, your products, or your brand—not by anything about how your actual store works.

Shopify optimized for adoption. Make it easy for everyone. Which means advantages for no one.

AI is about to do this to your intelligence infrastructure.

Microsoft Copilot, Google Gemini, Salesforce Einstein—they’re all offering the same promise: integrate with us, we’ll make your organization smarter, everyone’s doing it.

Five years from now, every company in your industry will be using the same AI reasoning over the same templated understanding of what a business should look like.

Your AI will give you the same insights as your competitors’ AI. Because you’re both using Microsoft’s generic model of business operations.

Congratulations. You’ve automated your way to strategic mediocrity.

Three Threats Nobody’s Talking About

While everyone worries about data privacy, three bigger problems are being ignored:

1. Vendor Lock-In at the Intelligence Layer

When your “AI strategy” is really just “Microsoft Copilot” or “Google Gemini,” you’re not building AI capability. You’re renting it.

Your organization’s knowledge gets encoded in their system. Your workflows get built around their tools. Your team gets trained on their interface.

Two years later, they raise prices. Or deprecate features. Or decide your industry isn’t strategic. Or get acquired by someone who changes everything.

And you realize: switching would cost more than just staying.

You don’t have an AI dependency. You have an intelligence dependency.

Your organization’s ability to reason about its own operations is now tied to their roadmap, their pricing decisions, and their definition of what your data means.

Security breaches can be contained and fixed. Vendor lock-in is designed to be permanent.

2. Competitive Commoditization

This is the Shopify problem.

When everyone uses the same substrate—the same templates, the same “best practices,” the same generic understanding of business—AI stops being a competitive advantage.

Your competitor gets the same insights you do. Makes similar recommendations. Spots the same opportunities. Identifies the same risks.

Because you’re both reasoning over Microsoft’s idea of how businesses work, not your unique understanding of how your business works.

The vendor optimizes for broad adoption. Make it work for everyone in every industry. Solve the common problems.

Which means it advantages no one.

This is the “best practices” trap all over again. Everyone achieves parity by following the same templated approach. Nobody gets differentiated. Everyone descends into bland, competent mediocrity.

The companies that win with AI won’t be the ones using the best models. They’ll be the ones whose AI reasons over their unique understanding of their domain.

3. The Enshittification of Intelligence

We’ve seen this pattern with every platform:

Stage 1: Be great to users (gain adoption)
Stage 2: Be great to business customers (monetize)
Stage 3: Be great to shareholders (extract maximum value)

Social media platforms did this. SaaS tools did this. Cloud providers are doing this. Search is doing this.

AI will follow the same arc.

Right now, AI vendors are in Stage 1. Making it amazing. Easy to adopt. Generous terms. Impressive capabilities.

Stage 2 is already starting: enterprise tiers, business features, integration ecosystems, vendor partnerships.

Stage 3 is inevitable: price increases, feature deprecation, forced migrations, reduced support, optimization for revenue over user value.

Except this time, it’s not just your tools that degrade. It’s your organization’s ability to think.

When your intelligence infrastructure enshittifies, you can’t just complain on Twitter and switch to a competitor. Because your organizational knowledge is encoded in their system.

What You’re Actually Losing

This isn’t about the technology. It’s about control.

You’re losing control of your substrate.

Your substrate is the organized, contextualized representation of your organization’s knowledge. It’s not just your data—it’s the meaning of your data. The relationships. The context. The “why” behind the “what.”

Every AI needs a substrate to be useful. The model is just the reasoning engine. Your substrate is what it reasons over.

Here’s the trap: The vendors want to build your substrate for you. Inside their platforms. Using their tools. Locked to their APIs.

Once they do:

  • You can’t easily switch to a better model
  • You can’t customize how AI understands your domain
  • You can’t encode your proprietary advantages
  • You can’t port your intelligence to new systems
  • You can’t escape without starting over

You become dependent on their definition of what your organization knows.

The Intelligence You Can’t Reclaim

The scariest part isn’t the data you give them. It’s the understanding you build in their system.

Over time, your organization learns to work with their AI:

  • Your workflows adapt to their capabilities
  • Your team learns to phrase things their way
  • Your processes assume their features
  • Your strategy depends on their insights

This isn’t just vendor lock-in. This is cognitive lock-in.

Your organization’s intelligence—how it thinks, how it reasons, how it makes decisions—becomes shaped by their system.

Five years from now, someone asks: “Why do we do it this way?”

The answer: “Because that’s how Copilot works.”

Not because it’s the best way. Not because it’s your competitive advantage. Because it’s what the vendor’s system made easy.

You’ve outsourced not just the execution, but the thinking.

The Shopify Store Problem

Shopify stores can be successful. Many are.

But they succeed despite the platform, not because of it.

Their differentiation comes from marketing, brand, product—not from anything about how the store actually works. The store is just competent and generic.

AI will do this to your entire operation.

Your workflows will be competent and generic. Your insights will be competent and generic. Your decision-making will be competent and generic.

Just like every other company using the same vendor’s intelligence infrastructure.

The question isn’t whether you’ll survive. The question is whether you’ll thrive.

Plenty of businesses survive on Shopify. Fewer truly thrive because of it.

What Independence Actually Means

AI independence means your organization’s ability to reason isn’t tied to any single vendor.

It means:

  • You can switch AI models when better ones emerge
  • You can use different models for different purposes
  • You can encode your unique competitive advantages
  • You can customize how AI understands your domain
  • You can leave without losing your intelligence

It means your substrate—your organizational knowledge and context—belongs to you.

Not Microsoft. Not Google. Not OpenAI. You.

The AI model is just the engine. You should be able to swap engines without rebuilding the whole car.
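
Here is a minimal Python sketch of that separation, with hypothetical names throughout (this is no vendor's actual API): the substrate stays yours, and each model hides behind a thin adapter you can replace.

    # Sketch: keep the substrate vendor-neutral; wrap each model in a thin adapter.
    # All class and method names are hypothetical, not any vendor's actual API.
    from abc import ABC, abstractmethod

    class ModelEngine(ABC):
        """The replaceable part: anything that can answer over your context."""
        @abstractmethod
        def complete(self, prompt: str) -> str: ...

    class EchoEngine(ModelEngine):
        """Stand-in for testing; a real adapter would call a model API here."""
        def complete(self, prompt: str) -> str:
            return f"[stub answer to a {len(prompt)}-char prompt]"

    class Substrate:
        """The part you own: your context, independent of any engine."""
        def __init__(self, documents: list[str]):
            self.documents = documents

        def context_for(self, question: str) -> str:
            # Naive keyword match stands in for whatever retrieval you actually use.
            words = question.lower().split()
            hits = [d for d in self.documents if any(w in d.lower() for w in words)]
            return "\n".join(hits[:3])

    def answer(question: str, substrate: Substrate, engine: ModelEngine) -> str:
        # The engine is a parameter. Swapping it never touches the substrate.
        prompt = f"Context:\n{substrate.context_for(question)}\n\nQuestion: {question}"
        return engine.complete(prompt)

    substrate = Substrate(["Invoices route through SAP.", "Renewals run quarterly."])
    print(answer("How do invoices route", substrate, EchoEngine()))

Swapping engines then means writing one new adapter; the substrate, and everything built on it, stays put.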

The Window Is Closing

Right now, the AI ecosystems are relatively open. You can still build independence.

In two years, it will be harder. The platforms will be more closed. The switching costs will be higher. The network effects will be stronger.

In five years, you’ll be looking at consultant bills and eighteen-month migration projects you can’t afford.

The companies that thrive won’t be the ones who picked the “right” AI model early.

They’ll be the ones who kept their independence while everyone else was chasing convenience.

The Real Question

Everyone’s asking: “Which AI should we use?”

That’s the wrong question.

The right questions are:

“Who controls our organization’s intelligence?”

“Can we leave without losing our ability to think?”

“Are we building capability or renting it?”

“Will our AI give us unique advantages or generic competence?”

Security threats can be managed. Privacy concerns can be addressed.

But vendor lock-in at the intelligence layer? That’s giving up control of how your organization reasons about itself.

That’s not an IT decision. That’s a strategic surrender.

What We’re Declaring

We declare that:

Organizations should control their own intelligence infrastructure.

Not rent it from vendors who can change the terms.

AI should amplify competitive advantages, not eliminate them.

Not reduce everyone to the same templated mediocrity.

Models should be replaceable.

Not locked to the substrate they reason over.

Independence should be designed in from the start.

Not attempted as a rescue mission five years too late.

This Isn’t Anti-AI

This isn’t a rejection of AI. AI is transformative. The capabilities are real. The opportunities are enormous.

This is about recognizing that we’re making the same mistake we made with SaaS.

We’re trading long-term control for short-term convenience.

We’re accepting vendor lock-in because integration is easy.

We’re adopting “best practices” that guarantee mediocrity.

We’re building dependencies we won’t be able to escape.

Except this time, we’re not just outsourcing our infrastructure.

We’re outsourcing our intelligence.

The Choice

You can build on their substrate. It’s easier. Faster. More convenient.

You’ll get AI capability quickly. Your team will be productive. You’ll see results.

And in five years, you’ll be:

  • Paying whatever they charge
  • Using whatever features they offer
  • Competing with whatever capabilities they provide to everyone
  • Reasoning the way they designed you to reason

Or you can build your own substrate. It’s harder. Slower. More expensive upfront.

But in five years, you’ll be:

  • In control of your intelligence
  • Able to use any AI model that serves your needs
  • Encoding competitive advantages your competitors can’t copy
  • Reasoning in ways that reflect your unique understanding

The companies that keep their independence will have options. The rest will have dependencies they can’t escape.

Start Now

The time to think about AI independence is before you’re dependent.

Not after your organization has spent two years building on someone else’s substrate.

Not after your team has trained on their workflows.

Not after your strategy depends on their insights.

Not after switching would cost more than staying.

Before the vendors build your cage, build your foundation.

The smart companies are already thinking about this.

The rest are about to learn the same lesson they learned with SaaS:

Freedom is something you build, not something you subscribe to.


This is not a product. This is not a service. This is a warning about what we’re about to lose if we’re not careful.

November 6, 2025 | hack writer

The Age of Digital Ghost Cities

What Cory Doctorow and Seth Godin can teach us about building lasting digital civilizations

Seth Godin recently wrote that we are living in ghost cities — digital metropolises that rise and fall in the span of a few years. From Myspace to Twitter, from Slack to whatever is next, we build vibrant civilizations of communication, commerce, and creativity—only to abandon them when the wind shifts.

In the physical world, cities decayed over centuries. In the digital world, they vanish with a terms-of-service update.

Each collapse leaves behind the same haunting ruins: lost archives, broken links, forgotten relationships, stranded skills. And yet, like digital nomads with collective amnesia, we move on — building the next ghost city, certain this one will last.

But as Godin hints, and Cory Doctorow has argued explicitly for years, this isn’t inevitable. It’s engineered.

The Architecture of Dependency

Doctorow calls it enshittification — the slow rot that sets in when platforms shift from serving users to serving investors. A platform starts off generous, open, and interoperable to attract people. Then, as its network grows, it locks down. APIs close. Data gets trapped. Exits disappear.


The walls go up because captivity is profitable. The cycle is as predictable as entropy:

  • Platforms become dominant by giving us freedom.
  • They monetize that dominance by taking it away.
  • Users leave, and the city dies.

What looks like “creative destruction” is often just planned obsolescence at planetary scale.

The Politics of Forgetting

Godin’s “digital amnesia” is not just the loss of data — it’s the loss of continuity. When interoperability is denied, memory dies. A new tool means new skills, new logins, new social graphs. What was once a community becomes an archive no one can open.

Cory Doctorow has been warning about this for decades: without the right to adversarial interoperability — the ability to connect, extract, and rebuild across systems — users are not citizens of the digital world; they are tenants.

And tenants can be evicted.

From Expansion to Stewardship

For half a century, we’ve lived in an age of digital expansion. Each wave doubled our bandwidth, our reach, our time online. But as Godin notes, that curve has hit its limit. We can’t double again. The next challenge isn’t to build bigger networks, but better civilizations within them.

This is the moment to shift from growth to governance. From innovation to preservation.
From platforms to public infrastructure.

In the analog world, we solved this through civic norms — building codes, zoning laws, and property rights. In the digital world, we need their equivalents:

  • Open standards as digital zoning laws.
  • Data portability as a right of movement.
  • Interoperability as the foundation of citizenship.

Information Stewardship: The Missing Discipline

This is where Human-Centered Information Systems must step in. Technology alone won’t fix this — because the problem isn’t technical, it’s architectural and cultural.

We’ve built systems that optimize for engagement, not endurance. We’ve rewarded speed over structure. And we’ve accepted the myth that digital decay is the price of progress.

But civilization, even digital civilization, depends on stewardship — on people who care about continuity, context, and human-scale design.

A world of interoperable, human-centered systems would treat data as civic infrastructure, not private property. It would design for persistence, not churn. It would build tools meant to last long enough to learn from.

The City We Could Build

Imagine if your data, relationships, and work could move freely across systems. If each digital space connected to the next the way streets connect neighborhoods — with standards, not silos.

Doctorow’s interoperability and Godin’s call for digital resilience converge on the same point: the city we need next isn’t another platform, it’s a commons.

A place where creativity isn’t deskilled, memory isn’t erased, and leaving one place doesn’t mean losing yourself.

Of course, it is unrealistic to expect digital platforms to last forever. Nothing lasts. But what we have now is the old mining mindset at work in a new environment: extract all the assets, then move on, leaving the barren landscape behind. We need to think more deeply about who owns data and relationships.

The value lives in the connections, not the platform. We need infrastructure that lets relationships persist when platforms don’t.

November 4, 2025 | hack writer

Dashboards Have Eaten the World

“Perhaps the only dashboard worth building now is the one that measures how little we’re looking at the others.”


The Infection

Our brains are under attack. Slowly, imperceptibly, and with alarming efficiency. The insidious parasite doesn’t bite, sting, or infect through the bloodstream. It lives in plain sight, behind glowing charts and multicolored KPIs. It lives in dashboards.

The dashboard is seductive. It promises everything at a glance: clarity, insight, control. It rewards you instantly with movement, color, numbers that change, alerts that ping. Each little surge feels like understanding. Each notification feels like progress. And each is a tiny dopamine hit designed to make you feel smart.

But it’s a lie.


Cultural Carriers

Recently, The Drum ran “Technoplasmosis: The Hidden Parasite Controlling Modern Marketing.” The claim was absurdly simple: marketers are being infected by a digital parasite that hijacks attention, convinces them to prioritize metrics over meaning, and makes them feel productive while doing very little that matters.

It’s not just marketers. Dashboards haven’t just infected work. They’ve colonized thought.

Gad Saad’s The Parasitic Mind shows how ideas can act like biological parasites: subtly manipulating thought, shaping perception, exploiting cognitive biases. Dashboards are the newest vector. They don’t coerce; they seduce. They don’t lie; they exploit our craving for certainty. They don’t replace thought entirely — they just make thought feel optional.


The Illusion of Rationality

Rory Sutherland would have a field day. He’d note that we don’t hate dashboards for their flaws; we love them for how rational they feel. They offer the comforting illusion of analytical control when, in reality, they are a performance of rationality, not the thing itself.

We glance at a rising graph and feel competent. We watch a dashboard refresh and feel effective. We see a KPI turn green and tell ourselves the system is working. And in doing so, we stop asking the questions that matter. The dashboards do not inform. They train.

And the training works too well.


Total Takeover

Dashboards are now the default interface of modern cognition. From marketing teams to executive boards, from MSPs to manufacturing operations, we’ve standardized our thinking on colorful rectangles, progress bars, and traffic-light indicators.

If a metric isn’t on the screen, it ceases to exist in the conversation. If a story can’t be visualized, it isn’t told. Every dashboard subtly rewrites reality, privileging the measurable and marginalizing the meaningful.

We mistake visibility for understanding. Measurement for insight. And as every behavioral psychologist would nod knowingly, the parasite thrives on this illusion.


The Dopamine Economy of Management

The mechanics are simple. Human attention is finite. Dopamine reinforces novelty and reward. Dashboards deliver both with precision. Every chart update, every color change, every new notification is a tiny neurological reward. It’s why we keep checking, keep refreshing, keep believing that we’re staying on top of things.

We’re addicted, quietly, politely, unremarkably. And the addiction is self-reinforcing because the culture reinforces it. Look at the last business meeting you attended. How many slides were dashboards? How many decisions were justified because “the numbers said so”? The parasite doesn’t just live in software — it lives in human behavior, in the shared language of organizations, in the very norms of competence.


The Unworkable Idea

So what’s left? Can we push back?

The Unworkable Idea — the kind of thing that would make a dashboard engineer laugh and a VP uncomfortable — is to stop looking. To refuse the dopamine hit. To insist that decision-making is messy, slow, and uncomfortable. To reintroduce narrative, friction, and human judgment into a world that has standardized thinking on charts.

It’s unworkable because the dashboards run the culture now. Pull back, and you risk being perceived as incompetent. Ignore a KPI, and the system punishes you. Question the interface, and the boardroom laughs politely. Resistance exists only as a faint, quiet act of rebellion.


Quiet Rebellion

Perhaps the only dashboard worth building now is the one that measures how little we’re looking at the others.

It would be blank. It would be slow. It would make people uncomfortable. It would be human. And it would be deeply subversive.

Because the parasite isn’t digital. It isn’t in the code, the refresh cycles, or the cloud. It lives in the love of seeing everything at once, in the surrender to clarity, in the substitution of visibility for understanding.

Dashboards have eaten the world. And the quietest, most dangerous rebellion is simply to stop staring.

October 29, 2025 | hack writer

Intention > Automation

We are living in the age of automation, artificial intelligence, and a whole lot of unquestioned assumptions.

And one of the biggest assumptions is that automation is always desirable.
If something can be automated, it should be. That’s the reflex. The irony is that the more automated the world becomes, the less alive it feels.

Automation is antisocial.

Even when it is dressed up as “personalization,” it is coldly impersonal and calculated. The feeds we scroll through, the emails we never send, the decisions we never make — all shaped by systems that act on our behalf but never with us. Automation saves us from effort, but it also saves us from engagement.

Every act of automation removes a little friction, but friction is where relationships, trust, and judgment live.

The meeting that takes five minutes longer, the customer email that isn’t templated, the spreadsheet we double-check — these are the small acts of attention that make things human.

We are slowly automating away our capacity for discernment. Even our “personalized” digital experiences are anything but personal.

The algorithm doesn’t know you; it predicts you.
And prediction is the opposite of intention.
Automation acts for us, but not from us.

That distinction matters.

Because intention begins with awareness. It requires noticing, choosing, questioning.
Automation begins with assumption. It replaces judgment with habit. It’s efficient, but hollow.

The Law of Return on Information suggests that the more information we collect, the less value each new bit provides — until eventually, the flood of data undermines meaning itself.
The same is true for automation.
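
For the skeptics, a back-of-the-envelope illustration in Python. The logarithmic value curve is an assumption made purely for illustration, not an empirical claim: if the value of n pieces of information grows like log(1 + n), each new piece is worth less than the last.

    # Toy illustration of diminishing returns on information.
    # The log value curve is an assumption for illustration, not an empirical claim.
    import math

    def value(n: int) -> float:
        return math.log(1 + n)

    for n in [1, 10, 100, 1000]:
        marginal = value(n) - value(n - 1)
        print(f"piece #{n:4d} adds {marginal:.4f} units of value")
    # Piece #1 adds ~0.69 units; piece #1000 adds ~0.001. Each new bit buys less.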

Each new system of convenience adds another layer of detachment, another abstraction between humans and their work.

We end up managing the tools that manage the tools — and mistaking that for progress.

This doesn’t mean automation is the enemy. It means automation without intention is.

Human-centered systems aren’t anti-automation; they are pro-intention. They start by asking: What is this system for, and who does it serve? Not everything that can be automated should be.

  • Automation scales efficiency.
  • Intention scales meaning.
  • And meaning — not motion — is what keeps a system truly alive.

The organizations that thrive won’t be the most automated. They’ll be the most intentional. The ones who know which friction to keep and which to eliminate. The ones who choose what to automate — and what to protect.