Humanoid robots are coming – the technology is passing critical thresholds, and the economics of capability and use cases are beginning to align. It’s a clear example of William Gibson’s insight: “The future is already here. It’s just not evenly distributed yet.” It won’t be either The Terminator or The Jetsons. There’s an imperfect middle ground, experienced differently by everyone, everywhere. (Remember: there are still millions of people who do not have reliable electricity or sewage systems.) I’m optimistic about the potential benefits and hope we soberly address the pitfalls before we’re compelled to.
I organize my thoughts this way:
- Why humanoid robots are inevitable
- Beneficial use cases
- Social, Legal, and Religious implications
Note: I am not going to address military applications here. That domain is massive, complex, and inevitable too, and it raises deep concerns.
Why humanoid robots are inevitable
Humanoid robots fit into the spaces and tools we created for our bodies. They’re different from robots custom-designed to assemble part of a car or to manage automated lab processes. In principle, a robot could do any physical task better and more safely than we can. We don’t need to completely re-engineer our world for humanoid robots to fit in.
Economists estimate the global value of human labor at about $40T annually. Billions in venture capital are flooding into companies like Tesla and Figure, each seeking a sliver of that economic opportunity. Many highly developed countries face skilled labor shortages, and demographic decline will make this worse in the decades to come. Even unskilled labor is increasingly expensive.
The technologies are advancing rapidly. Companies like Boston Dynamics have worked out many of the difficulties of balance, coordination, and integrated sensors. The combination of materials, powerful/cheap/tiny sensors, batteries, memory, chip sophistication, computer vision, ubiquitous bandwidth, and AI makes people nod when you say, “ChatGPT with a body.” We can now train a robot to do a task by having it repeatedly watch a person do it – machine learning allows a robot to learn by imitation, rather than someone having to algorithmically program every task. Once a task is learned by one robot, it’s straightforward to transfer that learning to other robots. Engineers are successfully building robots which mimic facial expressions, and (to a lesser extent) can interpret your facial expression and voice patterns. Fine motor capability like delicate hand motions is steadily improving. Today most robots are hand-crafted, but we’re not far from robots being able to build robots. It’s not difficult to imagine new robot version launches like we have now for cars and iPhones. We’re near an inflection point where robots will become more capable through frequent software updates, meaning existing hardware has a significantly longer useful life.
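To make “learn by imitation” concrete, here is a minimal sketch of behavior cloning in Python. It assumes made-up demonstration data and a simple linear policy; real systems use deep networks and real sensor logs, and the task name, dimensions, and file below are purely illustrative, not any vendor’s actual API.

```python
# A minimal sketch of "learning by imitation" (behavior cloning).
# All data here is synthetic and the dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Pretend we recorded 500 timesteps of a person doing a task:
# each observation is a 12-dim sensor reading, each action a 6-dim joint command.
observations = rng.normal(size=(500, 12))
true_mapping = rng.normal(size=(12, 6))          # stands in for the human's skill
actions = observations @ true_mapping + 0.01 * rng.normal(size=(500, 6))

# "Training" here is just least-squares regression from observation to action.
# The idea is the same in real systems: imitate the demonstrated mapping
# instead of hand-programming the task.
policy, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# Once one robot has learned the policy, transferring it to others of the
# same model is just copying the learned parameters (hypothetical file name).
np.save("trash_takeout_policy.npy", policy)
fleet_policy = np.load("trash_takeout_policy.npy")

# Any robot in the fleet can now act on a new observation.
new_obs = rng.normal(size=(1, 12))
command = new_obs @ fleet_policy
print("joint command:", np.round(command, 3))
```

The point of the last few lines is that a learned skill is just data: copying it to the rest of a fleet is about as hard as copying a file.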
Science fiction, entertainment media, and home and workplace robotics have largely prepared people to accept humanoid robots. Most people think they’re ingenious and interesting. We name them. We generally embrace tools and assistants which make our lives easier, at least if they’re reliable and don’t annoy us too much. Owning and using these tools becomes a social status marker. Despite the dystopian ‘Terminator’ themes, our population is unlikely to resist humanoid robots, at least at first.
How fast are humanoid robots coming? Implementation won’t be uniform, but they are coming. I’m skeptical about visions of the US having 100 million humanoid robots by 2040 because of limitations of materials, especially batteries and chips. Initial costs will be high enough that only wealthy families and some companies can afford them. Maintenance and support will lag production. These robots will also put significant strain on our aging electrical grid. Could we have 10+ million humanoid robots in the US by 2040? Yes.
Beneficial use cases
There are many situations where a humanoid robot could be helpful. Here are a few:
Construction of buildings and infrastructure
Agriculture
Light manufacturing, especially riskier jobs
Warehouse and shipping logistics
Elder care
Mining, smelting
Road construction and repair
City infrastructure services (e.g., garbage collection)
Driving existing vehicles, including ships and planes
Assistants in space and oceanic exploration
Assistants in education
Any job which is dangerous or highly repetitive
Wet environment jobs will be less accessible to humanoid robots for a few years longer. Water creates difficult challenges. (Non-humanoid robots are more easily designed for immersible conditions.)
The introduction of humanoid robots may be the biggest transformation of the labor market in the 21st century. It will bring another serious round of job elimination and new job creation. Purely digital AI tools will destroy the living-wage market for some historic jobs, but the fact that humanoid robots can do physical work affects an even larger job market. ChatGPT can’t take out the kitchen trash; ChatGPT with a body can handle the trash in entire neighborhoods. There will be some new jobs in designing, building, and maintaining the robots, of course. But overall, job changes will come faster than compensatory adjustments. Humanoid robots will become another source of non-localized anxiety for our populations.
Social, Legal, and Religious implications
“It will be like slaves in the Roman empire, except better. They’ll do all the hard work, they’ll obey perfectly, and free up our time for better things. It will be better because robots don’t have souls, so we’re not abusing people.” This is how my friend describes the idea of a world of ubiquitous humanoid robots. His comments sparked many questions and thought experiments.
Isaac Asimov’s Three Laws of Robotics (first published in 1942!) hold up well:
The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Humanoid robots, like AI software in recent years, continue to pressure our thinking about what makes humans distinct or special. Woe to materialists who think there is no soul, no transcendence! I observe their stance becoming something like “These intelligent robots are our children, so we must protect and nurture them so they can better help us.” That stance is correct in one respect: someone must program and teach a ‘moral’ code of behavior, because one won’t form spontaneously.
Let’s go back to all the jobs that can be replaced by humanoid robots. If people don’t do that work, what do they do? What makes for a living wage? What new legislation will emerge to protect human workers’ rights, perhaps an evolution of the unions? How quickly can we adapt?
Agriculture has some lessons here. We’ve gone from about 40% of the population living and working on farms to less than 1% in the last century in the US. Why? Increased productivity of crops and mechanization/automation. This transition happened slowly enough that there were other jobs to occupy people. Yet some countries and regions have resisted automation in part because there is nothing else for the farmers to do, constructively, at a living wage. Rulers throughout history have rightly worried about bored, unoccupied populations. Many of the great forts, monuments, and temples in India and elsewhere were built in part to occupy men.
Many consider increased leisure time as an automatic good. “They’ll take up poetry and writing novels, fulfilling their creative potential. They’ll volunteer more and help others.” The self-disciplined people do. Everyone else? Let’s watch more cat videos! Where do people learn self-discipline? Through working. See the problem? We call it “work ethic” for a reason.
The current generations are learning that physicality is good for us. “Doing hard things” makes us better in every way. Maybe you’ve seen the movie WALL-E, where the people are effectively couch slugs, hardly an attractive image of humanity. Our great-grandfathers would have laughed in disbelief if you had told them that exercise coaches, gyms, and fitness apparel would become multibillion-dollar industries. We are physical creatures, and even “people of the mind” thrive better with physicality.
What about legal rights for humanoid robots? Don’t laugh – the precedents exist. “But they don’t have souls,” you say. “They’re our servants,” you say. No one should be surprised when a legal-rights conversation surfaces for a sophisticated humanoid robot that can engage with human speech and emotions and exhibit significant intelligence. They don’t have a God-infused soul, but we may behave as if they do. The likely mindset: “They’re not just robots, they’re like our children; we’re responsible to them in unique ways.” We will personify them. Most Roomba owners delightedly name their floor sweepers. You think they won’t give names to humanoid robot servants and companions? Oh, and expect a patchwork of rights and obligations across states and countries.
Related to legal rights will be new questions about responsibilities and liabilities. Who controls the algorithms, and the shared learning across independent robots? What responsibility does the ‘owner’ have versus the creators? Who will be sued for damages when bad things happen? We will work all this out, just as we did for cars, but it will be a two-steps-forward, one-step-back effort.
Mistreatment of others begets soul-shriveling. One of the worst attributes of historical cultures with large numbers of slaves, or even significant numbers of servants, is that the “masters” became ugly through their mistreatment of those slaves and servants. Boys who torment and torture animals often become vicious men. How will parents and employers teach and model how to interact with humanoid robots? The robot doesn’t need to hear “thank you,” but failing to be appreciative warps our character, which has other consequences.
Will owning and using humanoid robots become a “keeping up with the Joneses” social pressure? Will they be status symbols? We’re wired to imitate trend-setters. Humanoid robots might become a new measuring tool to distinguish the haves from the have-nots. I can easily imagine people taking an “anti-robot” stance as a kind of virtue signaling. Will we create social expectations about where robots are not allowed to be present? Will you bring your humanoid robot assistant to a worship service? There are people today with robot phobias. How will we handle that in the future?
I used the word ‘companions’ earlier. Servant is one mindset; companion is distinct. We expect something extra from a companion: a form of kinship and protection. At what point will humanoid robots be sophisticated enough to be considered companions for young children? What boundaries might be best here? I have more questions than answers, but we’ll need to make decisions eventually. If you think parents have a challenge resisting a child’s demands for a smartphone, can you imagine a future conversation about a personal robot companion?
These robots are going to be relatively expensive. Costs will come down with mass production, but they will still be pricey. I suspect we’ll think of them the way we think of cars today. There will be a whole industry of financing them, like cars and houses – in fact, financing could be one of the primary profit sources for humanoid robots. What might this mean for personal and corporate debt?
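To put some rough numbers on the financing idea, here is a back-of-the-envelope sketch using the standard fixed-rate loan payment formula. The price, interest rate, and term are hypothetical assumptions for illustration only, not projections.

```python
# A back-of-the-envelope sketch of robot financing using the standard
# fully amortizing loan payment formula. All numbers are hypothetical.
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Fixed monthly payment for a fully amortizing fixed-rate loan."""
    r = annual_rate / 12                      # monthly interest rate
    n = years * 12                            # number of payments
    return principal * r / (1 - (1 + r) ** -n)

price = 30_000        # hypothetical robot price, comparable to a car
rate = 0.07           # hypothetical 7% annual interest
term = 5              # hypothetical 5-year loan

payment = monthly_payment(price, rate, term)
total_paid = payment * term * 12
print(f"Monthly payment: ${payment:,.2f}")
print(f"Total interest over the loan: ${total_paid - price:,.2f}")
```

Even at these made-up numbers, the interest paid over the life of the loan is a meaningful fraction of the purchase price, which is why financing could be such an attractive profit source.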
Will autocratic governments control who gets to own humanoid robots, and why? (Of course they will, they’re autocratic governments!) How much of this government oversight will show up in democratic republics?
Will we trust robots the same way we trust people, or differently? We already live in a world where AI and digital manipulation mean you can’t trust what you see or hear as authentic, which makes trust 100x more important to people. If Asimov’s laws of robotics hold, will we adopt a ‘trust _and_ verify’ model? Will we trust them like we trust certain experts (e.g., doctors)? When trust is broken – and it will be – how will forgiveness and restoration work?
Some of you might be saying, “Glenn, you’re being silly, they’re just machines.” I beg to differ. The market will tap into our default desires for robots that interact well with us: not noisy, clumsy bots but more elegant, attractive ones. Perhaps that becomes a differentiator of humanoid robots for different jobs – much more machine-like for garbage collection and far more humanistic for family servants.
Work has dignity. We learn in Genesis that work is a “before the Fall” phenomenon, made 10,000X harder by sin and alienation. Work is good for us and a means of helping others. We can learn to work with humanoid robots just as we learn to work effectively with other people, especially if we think of humanoid robots as assistants and servants.
Let’s be frank: humanoid robots will become better than we are at (almost?) every physical task. This raises the critical question: What work will we refuse to abdicate, even if a robot can do it better? Some ideas to spark your imagination –
reading to young kids
preparing a family meal
physical activities we enjoy (e.g., gardening)
helping someone move into a new house
kids’ chores that help them learn how to be responsible
caring for a sick person
moral teaching
playing musical instruments
driving your child to school, or standing with them at the bus stop
customer service
coaching youth sports
Maybe one way to consider this question is to ask, “What are we optimizing for?” There are many valuable aspects of family and community life where we do not optimize on efficiency of labor. Life-on-life is prized.
I suspect all these questions will deeply affect religious institutions. What will religious communities advocate for and against? To what extent will individuals and governments look to religious institutions for guidance on these matters? This will not be driven purely by amoral capitalist economics or socialism/communism. Will religious groups rise to defend the uniqueness of humans, or will they facilitate a continued slide to materialistic anthropology?
One of the biggest drivers will be the difficult consequences of narrowing the number of living-wage jobs for humans to do. I can’t easily predict how that will play out, but it will generate anger, frustration, and opportunities to decide in advance what we should do.
What do you think? I’m curious to hear your ideas and questions, too.