Can we build better AI? Consider the case of the loom vs. the crane.

Roy Bahat
Sep 27, 2023

Not all artificial intelligence is created equal. Founders of AI companies will need to build their services differently depending on whether their technology expands human abilities (“cranes”) or replaces them (“looms”).

DALL·E 2 from prompt “A scene with many, many construction cranes and many much smaller weaving looms, in high-definition watercolor” passed through Prisma’s Tokyo filter

“The robots are coming!” Yet again, a wave of fear. Will this AI help us or harm us? Will the robots replace writers of Hollywood screenplays, teachers in schools, accountants or illustrators or management consultants or all of us? This on top of a range of other impacts of AI: national security threats, runaway Terminators, heightened bias, and more. Chief among these worries right now in our global conversation is the fear that AI will take away jobs.

It’s reasonable to be scared. Imagine the hard-working person who has spent a career building skills, only to find themselves and their peers out of work. Technology has often taken away jobs, and while the economy as a whole ends up better off, and more jobs eventually appear to replace those lost, that’s small consolation for the person who can no longer feed their family.

And, at the same time, nobody quite knows what it means for a job to be “automated,” as Noah Smith points out. We have intuitions about technology hurting people, and still we struggle to define exactly what kinds of technologies we might (not) want. (Would we actually want to go back in time and un-invent the steam shovel because it replaced workers wielding picks and unsteamed shovels?)

Some want changes to government policy (a guaranteed income so that basic survival is less tied to work, for example), or stronger institutions (like unions or other forms of organized labor) to protect them. As a startup investor who often supports technologists at the moment they start to build, I see one other opportunity: people who make AI can choose what kind of AI to create.

AI is not a force of nature, and different technologies have different effects on people. If we build technology that makes a person’s effort more valuable (specifically, their marginal productivity), then those people will probably get paid more.
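
For readers who want the economics spelled out, this is the textbook “marginal productivity theory of wages.” Under the (admittedly strong) assumption of a competitive labor market, a worker’s wage w equals the price p of their output times their marginal product of labor:

w = p · ∂Q/∂L

where Q is output and L is labor. A crane that raises how much one person can produce in an hour raises ∂Q/∂L, and with it the wage an employer is willing to pay for that hour; a loom that drives a person’s marginal contribution toward zero pulls their wage in the same direction.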

When AI can predict the structure of a protein molecule, nobody says “wait, that was my job,” unlike when it draws an illustration for the cover of a magazine. People worry less when AI routes an aircraft more accurately than when it drives a car.

For years, economists have tried to draw a distinction between technologies that “replace” human effort and those that “augment” it, and have asked technologists to create more of the augmenting kind, as Daron Acemoglu and Simon Johnson most recently did in their book Power and Progress.

So far, those pleas have had too little effect. Technologists remain enamored with the idea of a machine that can replace a person — look at the shapes of robots, and the capacities of generative AI that can write just like a person. Erik Brynjolfsson calls this the “Turing Trap” — where the quest to pass the Turing Test (a machine that can fool a person into thinking it’s really a person) has captivated the minds of technologists, infected the science fiction that often inspired them to pursue this work in the first place, stroked the egos of many who create AI, and left us with a crisis of imagination. Marc Andreessen goes so far as to define artificial intelligence as something “similar to how people do it.” (It’s more than that, of course.)

To be fair, it’s hard to draw a clear line between technologies that replace humans and those that augment them. If I replace a person’s menial tasks with an AI (e.g., by summarizing legal contracts), I’ve also augmented that person by giving them spare time. If I augment them by speeding up their completion of a task (e.g., as LLMs do for software engineers), I might need fewer people performing that task.

Different types of augmentation also have different effects. Some merely make our work more convenient, whereas others give us capabilities we otherwise could never have achieved (for those old enough to remember, imagine the arms on Inspector Gadget).

So, I propose we (over-)simplify the discussion and look at AI technologies in three categories:

  1. “Looms” — these generally can replace a person, as a fully automated loom can replace a weaver. Take, for example, the way AI can replace a person troubleshooting your customer support problem, approving an expense, or driving a car.
  2. “Slide rules” — these assist a person, as a slide rule makes a calculation faster (again, for those old enough to remember). Software tools that write a first draft of code can speed a developer’s work, and grammar checkers can improve a person’s writing. (Some of these slide rules, like self-checkout in a store, actually make the experience worse for the customer, a so-called “so-so technology.”)
  3. “Cranes” — these allow a person to do something they otherwise would be completely unable to do themselves. For example, translating from one language into another, indexing millions of web pages and predicting where you’d most like to click, discovering new molecules for medicine, or predicting the quality of applicants responding to a job posting.
I discussed some of these categories at a recent conference.

As builders and backers of AI, we often seem preoccupied with looms (and to some extent, slide rules). Why? Looms flatter technologists’ self-image: man creates a thing to replace himself. They also present easier-to-imagine problems to solve, and they reduce one of the biggest costs a company incurs (labor). When we aren’t dreaming of looms, we often imagine slide rules: in his “Why AI Will Save the World” essay, almost every one of the many examples Marc Andreessen gives is a slide rule, where “every person will have an AI assistant” (the exceptions being passing references to curing all disease and interstellar travel).

And looms, slide rules, and cranes may sometimes blur into one another. Some technologies can act in more than one category (e.g., generative AI like ChatGPT can both do a thing a high school student can do and synthesize vast amounts of data beyond the reading lifetime of any human), and technologies built in one category can later expand into another.

***

How do we get more cranes? Founders of technology companies can just decide to make different technology — and to think of it in this way. They can ask whether people, today, can do what they are building without technology, and focus on making AI that gives people brand new powers. They can stretch their imaginations to invent ways of solving problems better than how a person would have done it. Why limit ourselves?

Builders of AI can also recognize that the nature of looms, slide rules, and cranes calls for building AI services in wildly different ways. How people choose to use these technologies, how companies pay for them and rearrange their organizational systems around them, and how we finance them might differ dramatically depending on whether a technology includes loom, slide-rule, or crane elements.

Workers can also demand more cranes and harness them to make themselves more productive. PhD students can shape their research. Academic faculty can set different problems for their students. Government can decide what research to fund. (In fact, we shared this framework a few weeks ago with legislators in DC.)

We still want looms, of course. (The loom itself turns out to have been a great invention, as self-driving cars and almost all other forms of labor-replacing technology will be.) That said, today we have mostly looms, with a few slide rules sprinkled in, and that’s a problem.

AI founders backed by Bloomberg Beta presented with us on Capitol Hill

If we want AI to help people at work, consider making more cranes and fewer looms. And, as founders of companies, recognize that different AI services are different: you’ll need to build your company differently depending on how your technology interacts with those we ultimately care about (um, people).

Disclosure: some of the links above point to companies in which Bloomberg Beta is an investor. Thank you to the many people who offered comments, including Betsy Masiello, Ben Van Roo, Avi Zenilman, Steve Newman, Noah Smith, and others.

