AI is a great equalizer. So how will we differentiate people by merit?

AI knows, or will soon know, practically everything. The most knowledgeable people in the world readily admit that AI knows more than they do – about their own specialties.

How could it be otherwise? AI has access to all the information on the internet. In today’s world, that’s tantamount to saying it has access to all information, period, with the possible exception of personal information like what you ate for breakfast this morning and classified information like the underwater location of America’s nuclear submarines at any given instant.  

Apart from those narrow exceptions, AI already knows – or will soon know – more about quantum mechanics than the leading physicists. More about tax law than the best tax lawyers. More about biology than the brightest biologists.

It’s not just me saying this. Business technology thinkers like Elon Musk and Sam Altman, who know a lot more about AI and a lot more about business than I do, say the same. (Interestingly, most of those deep thinkers are conservative/libertarian in their politics.)

Here are the implications. We are fast approaching the day when people with knowledge will not be able to command a premium for their services. Employers needing knowledge will not pay someone for it; they’ll simply ask AI for it. The knowledge they get from AI will not only be less expensive, but also more accurate.

Granted, we aren’t at that point yet – today’s AI makes too many mistakes – but we soon will be.

This phenomenon is likely to accelerate. AI will not only absorb more human knowledge; it will also interpret that knowledge to produce insights that humans themselves don’t have.

In some cases, AI will produce knowledge that humans cannot even comprehend. When AI figures out how the universe began, don’t expect to understand its explanation.  

That’s the fate of knowledge. Factual knowledge will be the province of machines, not humans.

Now let’s look at another quality that employers currently pay for: Hard work.

In the past, hard workers were paid more, just as knowledgeable ones were. That’s because hard workers produced more for the company. Other things being equal, someone who put in 46-hour weeks got paid more than someone who put in 29-hour weeks.

Take an analytical problem that is difficult but susceptible to resolution. Imagine that a team of humans would require, say, 1,000 man-hours to solve it. The employee working 46-hour weeks will contribute much more to that solution than the one working 29-hour weeks.
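For illustration, the arithmetic behind that comparison can be sketched in a few lines. The 1,000-hour figure and the weekly schedules come from the example above; the assumption that every hour is equally productive is a simplification.

```python
# Toy comparison: how long a lone worker would need to supply
# a fixed 1,000 man-hours of effort. Assumes all hours are
# equally productive (a simplifying assumption).

TOTAL_HOURS = 1000

def weeks_to_solve(hours_per_week: int) -> float:
    """Weeks needed to accumulate TOTAL_HOURS at a given weekly pace."""
    return TOTAL_HOURS / hours_per_week

print(weeks_to_solve(46))  # ~21.7 weeks for the 46-hour worker
print(weeks_to_solve(29))  # ~34.5 weeks for the 29-hour worker
```

Either way, it is months of human effort, which is the point of the comparison that follows.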

In the future, however, the hard work of the AI machine will dwarf both those workers. AI could solve that 1,000-hour problem in seconds – and without the drama, sexual-harassment lawsuits, maternity leaves, labor strikes, water-cooler gossip about the boss, and expensive office space associated with a team of humans.

To within a tiny rounding error, those two workers – the 46-hour worker and the 29-hour worker – become equally valuable or, more accurately, equally valueless.

Human hard work will thus go the way of human knowledge. Just as backbreaking work in the mines and the fields became obsolete with the advent of tunnel digging machines and farm equipment, hard work of other kinds including office work will become obsolete with the advent of AI machines.

This presents a dilemma. If employees are not differentiated by their knowledge or their hard work, then how will they be differentiated in their salaries? How will the market decide to pay Jane a million a year, and pay Charlie only a few hundred thousand? (Yes, workers’ compensation will increase dramatically due to the incredible efficiencies that AI brings to bear.)

We already see this problem in schools. How do you differentiate students when they’re all using AI to take the test for them and all the answers are right?

Stated another way, if AI can answer questions and perform work assignments unimaginably fast, what can AI not do? What’s left for us humans?

Here’s what. AI cannot weigh human values.

A character in an Oscar Wilde play complained about people who “know the price of everything and the value of nothing.” That character might have been anticipating AI by a century and a half.

Oh sure, AI is fully capable of determining value in a cost-per-pound or other quantifiable way. But it is incapable of possessing or assigning “values” in a human sense. It is consequently incapable of weighing those human values in its analysis.

Here’s an example, back to Elon Musk. He has about a million children at last count (actually, the figure is ten, officially, according to AI) born to sundry mothers. And he has something like a half-trillion dollars.

AI can figure out how to distribute his billions to his children over time in a way that minimizes the tax consequences. (Don’t worry, the taxes will still be astronomical.)

But here’s a question that AI is incapable of figuring out. Is it a “good” thing for Musk’s kids to receive a multi-billion-dollar inheritance? More specifically, will such a windfall enhance the values that we humans call “happiness” and “fulfillment”? Relatedly, are such inheritances “good” for society?

My human instinct is that the answer depends on lots of circumstances, including especially the nature of each kid. For some kids, such an inheritance would be a “good” thing for them, and perhaps for humanity, too, though there might also be some bad aspects to it. For other kids, maybe not.

To answer this question, you need to understand human nature, you need to understand kids, and you need to understand that people change as they grow up, sometimes in unpredictable ways.  

AI will always have a poor grasp of such things. Such things are and will remain the province of humans. They entail something AI will never have, no matter how fast or knowledgeable it becomes. They entail wisdom.

Will society find a way to compensate people for wisdom after AI renders human knowledge and hard work obsolete? I don’t know, and neither does AI. Maybe the compensation for wisdom is just the joy and the pain of having it.

AI is real, it can think, and it will change everything

“Epic” is how a lengthy article in the Wall Street Journal last week described the current investment in AI. In today’s dollars, it dwarfs the investment in the railways in the 1800s. It dwarfs the investment in electrifying America in the early 1900s. It dwarfs the investment in the interstate highway system in the mid-1900s. It dwarfs the investments in the internet at the end of the last century.

So, went the gist of the Journal’s article, it must all be an investment bubble – right? – that will come crashing down the way Pets.com and other internet stocks did.

Or didn’t. Bear in mind that Amazon, Facebook, Google and Microsoft are internet companies, too.

Another article in the Journal last week described how Walmart plans to manage AI. Walmart’s executives say AI will change every job in the company – all 2.1 million of them. They anticipate substantial growth in revenue and store count but expect their employee count to stay flat. They intend to use AI to do more work without more people.

Along the same lines, the Chief Executive of Ford Motor Company said last summer, “Artificial intelligence is going to replace literally half of all white-collar workers in the U.S.”

The average person has limited experience with AI. They do know that when they need a gas station, they no longer have to type “gas station” into Google Maps. Instead, they can tell AI, “Find me a gas station,” and – voila! – it does. It’s like having a wife who can read maps!

(Ladies, please direct your correspondence to WGates@Microsoft.com.)

Several criticisms are often leveled at AI. One is that it’s great at gathering information off the internet, but its conclusions are only as good as the information it gathers. This criticism is valid. How could it not be? Like you and me, the machine is only as good as the information it relies upon.

On the other hand, the machine’s use of information is getting better and better as the algorithms mature. It is learning, for example, that quantity does not equal quality. Just because something is said many times on the internet does not make it right, and just because something is said rarely on the internet does not make it wrong.

It makes this discernment both by considering the credibility of the information sources and also . . . [drum roll] . . . by reasoning.

That’s right, AI can think. It can look at a piece of information and say, “Nah, that cannot be accurate. It cannot be accurate that it takes days for sunlight to reach the Earth, given that the Earth is X miles from the sun and light travels at Y mph.”

In my judgment, that constitutes thinking. The machine is not specifically asked how long it takes for sunlight to reach the earth. Rather, in the course of answering the question it is asked, it rejects information that it reasons cannot be accurate.

Here’s another example of AI thinking. Already, you can give it information about a building site for a house, such as the location, the topography and the boundaries, and tell it:

“Give me some bird’s-eye views (yes, it will understand that colloquialism) of potential house designs for a client who likes midcentury architecture and passive solar, and wants four bedrooms and a wine cellar. Oh, and bear in mind the Building Code of Pitkin County, Colorado, and the HOA rules at this address.”

In seconds, the machine will churn out diagrams of such houses. It doesn’t scour the internet for diagrams to copy; it generates its own. It becomes an architect – one with the benefit of Frank Lloyd Wright, Leonardo da Vinci, Antoni Gaudí, and all the others firmly in its “head” together with an instantaneous ability to figure out the workability of the designs it conceives.

If you want to tinker with a design, it will let you do so. You can say, “I like this one, but it’s kinda tall. Can you make it shorter and with a bigger footprint?” Or, “Let’s get into the HVAC and plumbing details on this one. Give me some schematics.”

To me, that’s high-level thinking again.

In medicine, AI already has the capability (though it hasn’t been tasked with this yet) to have on-file a patient’s lifetime medical history. A technician could say, “This patient is now experiencing sharp pain in his left-side torso and recurring headaches. What do you think?” AI might respond:

“It’s not his left kidney, because this patient had his left kidney removed in 2013. I recommend the following tests . . .  By the way, be careful with poking him – he’s on blood thinners. And he’s had claustrophobia in the MRI chamber before. Note his family history of diabetes.”

To me, that’s high-level thinking yet again.              

Ah, you say, that’s all just problem-solving. The machine still cannot dream, cannot feel. It knows the cost of everything, but the value of nothing.

Maybe, but the same can be said of many people.

As for AI’s ability as an aesthete, I asked ChatGPT the following (with deliberate misspellings):

“Make me a 3-dimentional wall hanging about 3 x 5 feet made out of scrap steel welded together to make an abstract sculture.”

Here’s what it came up with:

[Image: the AI-generated design for the scrap-steel abstract wall sculpture]

I probably wouldn’t hang this on my wall, but, then again, I probably wouldn’t hang on my wall what passes for modern abstract masterpieces in museums today, either.

Now a word about the purported downside of AI – the Luddite notion that it will put everyone out of work and so we’ll all starve to death.

Economists know this is bunk. Technology certainly produces dislocations. The invention of refrigeration put thousands of ice men out of work. The invention of the automobile put millions of buggy-makers out of work. The invention of the internet is gradually putting late-night comedians out of work.

But overall, these technological wonders improve the efficiency of society – and, therefore, the wealth of society. If an invention doubles a worker’s efficiency, that doesn’t mean half the workers get laid off and starve. In the big picture, it instead means workers can get paid the same for working half the hours, or get paid double for working the same hours, or some blend of those two outcomes.
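That tradeoff is simple arithmetic. As a sketch, assuming an invention doubles output per hour and the employer needs the same total output either way:

```python
# Toy illustration of the efficiency tradeoff described above.
# Assumes productivity doubles and the required total output is fixed
# (both are simplifying assumptions for illustration).

def hours_needed(total_output: float, output_per_hour: float) -> float:
    """Hours of labor required to produce a fixed total output."""
    return total_output / output_per_hour

before = hours_needed(100, 1.0)  # 100 hours at the old productivity
after = hours_needed(100, 2.0)   # 50 hours once productivity doubles

print(before, after)  # 100.0 50.0
```

The same output now takes half the labor, which is what makes the same-pay-for-half-the-hours and double-pay-for-the-same-hours outcomes possible.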

That’s what has happened throughout history in response to technological innovation. We work fewer and fewer hours, even as we have more and more things. (Whether that makes us happier is a different question.)

We also live longer and longer. With AI, could we live forever?

Maybe. AI might not just cure disease and treat injury, but also stop the biological mechanism of aging.

Or AI might have the ability to receive an upload of a person’s memory – his life – before his body dies. A memory in a durable machine that can interact with humans would seem no less valid than a memory in a failing brain that increasingly cannot.

Could that AI embodiment of a person, residing on the computer cloud (maybe Heaven really is in a cloud!), continue to interact with the flesh-and-blood world? I don’t see why not. And what it experiences would, of course, add to the experiences that were originally uploaded. “You” would continue to “live.”

The AI “you” would undoubtedly be the object of real love by flesh-and-blood humans (let’s call them “humies”). After all, people routinely experience real love for inanimate objects like dolls and teddy bears and sports cars. They could surely love an image that talks with them, especially if they loved that image before its humie got buried.  

In receiving that upload of a person’s memory, would the machine also receive his soul? I cannot answer that question, nor, I suspect, can AI.