AI is a great equalizer. So how will we differentiate people by merit?

AI knows, or will soon know, practically everything. The most knowledgeable people in the world readily admit that AI knows more than they do – about their own specialties.

How could it be otherwise? AI has access to all the information on the internet. In today’s world, that’s tantamount to saying it has access to all information, period, with the possible exception of personal information like what you ate for breakfast this morning and classified information like the underwater location of America’s nuclear submarines at any given instant.  

Apart from those narrow exceptions, AI already knows – or will soon know – more about quantum mechanics than the leading physicists. More about tax law than the best tax lawyers. More about biology than the brightest biologists.

It’s not just me saying this. Business and technology thinkers like Elon Musk and Sam Altman, who know a lot more about AI and a lot more about business than I do, say the same. (Interestingly, most of those deep thinkers are conservative/libertarian in their politics.)

Here are the implications. We are fast approaching the day when people with knowledge will not be able to command a premium for their services. Employers needing knowledge will not pay someone for it; they’ll simply ask AI for it. The knowledge they get from AI will not only be less expensive, but also more accurate.

Granted, we aren’t at that point yet – today’s AI makes too many mistakes – but we soon will be.

This phenomenon is likely to accelerate. AI will acquire more human knowledge and will also start to interpret that knowledge to produce knowledge that humans themselves don’t have.

In some cases, AI will produce knowledge that humans cannot even comprehend. When AI figures out how the universe began, don’t expect to understand its explanation.  

That’s the fate of knowledge. Factual knowledge will be the province of machines, not humans.

Now let’s look at another quality that employers currently pay for: hard work.

In the past, hard workers were paid more, just as knowledgeable ones were. That’s because hard workers produced more for the company. Other things being equal, someone who put in 46-hour weeks got paid more than someone who put in 29-hour weeks.

Take an analytical problem that is difficult but solvable. Imagine that a team of humans would require, say, 1,000 man-hours to solve it. The employee working 46-hour weeks will contribute much more to that solution than the one working 29-hour weeks.

In the future, however, the hard work of the AI machine will dwarf both those workers. AI could solve that 1,000-hour problem in seconds – and without the drama, sexual-harassment lawsuits, maternity leaves, labor strikes, water-cooler gossip about the boss, and expensive office space associated with the team of humans.

To within a tiny rounding error, those two workers – the 46-hour worker and the 29-hour worker – become equally valuable or, more accurately, equally valueless.

Human hard work will thus go the way of human knowledge. Just as backbreaking work in the mines and fields became obsolete with the advent of tunnel-digging machines and farm equipment, hard work of other kinds, including office work, will become obsolete with the advent of AI machines.

This presents a dilemma. If employees are not differentiated by their knowledge or their hard work, then how will they be differentiated in their salaries? How will the market decide to pay Jane a million a year, and pay Charlie only a few hundred thousand? (Yes, workers’ pay overall will increase dramatically due to the incredible efficiencies that AI brings to bear.)

We already see this problem in schools. How do you differentiate students when they’re all using AI to take the test for them and all the answers are right?

Stated another way, if AI can answer questions and perform work assignments unimaginably fast, what can AI not do? What’s left for us humans?

Here’s what. AI cannot weigh human values.

A character in an Oscar Wilde play complained about people who “know the price of everything and the value of nothing.” That character might have been anticipating AI by well over a century.

Oh sure, AI is fully capable of determining value in a cost-per-pound or other quantifiable way. But it is incapable of possessing or assigning “values” in a human sense. It is consequently incapable of weighing those human values in its analysis.

Here’s an example, going back to Elon Musk. He has about a million children at last count (actually, the figure is ten, officially, according to AI) born to sundry mothers. And he has something like a half-trillion dollars.

AI can figure out how to distribute his billions to his children over time in a way that minimizes the tax consequences. (Don’t worry, the taxes will still be astronomical.)

But here’s a question that AI is incapable of figuring out. Is it a “good” thing for Musk’s kids to receive a multi-billion-dollar inheritance? More specifically, will such a windfall enhance the values that we humans call “happiness” and “fulfillment”? Relatedly, are such inheritances “good” for society?

My human instinct is that the answer depends on lots of circumstances, including especially the nature of each kid. For some kids, such an inheritance would be a “good” thing for them, and perhaps for humanity, too, though there might also be some bad aspects to it. For other kids, maybe not.

To answer this question, you need to understand human nature, you need to understand kids, and you need to understand that people change as they grow up, sometimes in unpredictable ways.  

AI will always have a poor grasp of such things. They are and will remain the province of humans. They entail something AI will never have, no matter how fast or knowledgeable it becomes. They entail wisdom.

Will society find a way to compensate people for wisdom after AI renders human knowledge and hard work obsolete? I don’t know, and neither does AI. Maybe the compensation for wisdom is just the joy and the pain of having it.

AI is real, it can think, and it will change everything

“Epic” is how a lengthy article in the Wall Street Journal last week described the current investment in AI. In today’s dollars, it dwarfs the investment in the railways in the 1800s. It dwarfs the investment in electrifying America in the early 1900s. It dwarfs the investment in the interstate highway system in the mid-1900s. It dwarfs the investment in the internet at the end of the last century.

So, went the gist of the Journal’s article, it must all be an investment bubble – right? – that will come crashing down the way Pets.com and other internet stocks did.

Or didn’t. Bear in mind that Amazon, Facebook, Google and Microsoft are internet companies, too.

A competing article in the Journal last week described how Walmart plans to manage AI. They say AI will change every job in the company – all 2.1 million of them. They anticipate substantial growth in their revenues and store count, but see their employee count staying flat. They intend to use AI to do more work without more people.

Along the same lines, the Chief Executive of Ford Motor Company said last summer, “Artificial intelligence is going to replace literally half of all white-collar workers in the U.S.”

The average person has limited experience with AI. They do know that when they need a gas station, they no longer have to type “gas station” into Google Maps. Instead, they can tell AI, “Find me a gas station,” and – voila! – it does. It’s like having a wife who can read maps!

(Ladies, please direct your correspondence to WGates@Microsoft.com.)

Several criticisms are often leveled at AI. One is that it’s great at gathering information off the internet, but its conclusions are only as good as the information it gathers. This criticism is valid. How could it not be? Like you and me, the machine is only as good as the information it relies upon.

On the other hand, the machine’s use of information is getting better and better as the algorithms mature. It is learning, for example, that quantity does not equal quality. Just because something is said many times on the internet does not make it right, and just because something is said only rarely on the internet does not make it wrong.

It makes this discernment by considering the credibility of the information sources and . . . [drum roll] . . . by reasoning.

That’s right, AI can think. It can look at a piece of information and say, “Nah, that cannot be accurate. It cannot be accurate that it takes days for sunlight to reach the Earth, given that the Earth is X miles from the sun and light travels at Y mph.”

In my judgment, that constitutes thinking. The machine is not specifically asked how long it takes for sunlight to reach the earth. Rather, in the course of answering the question it is asked, it rejects information that it reasons cannot be accurate.
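For what it’s worth, the arithmetic behind that particular sanity check is trivial. Here’s a back-of-the-envelope version in Python – a toy sketch using the average Earth-sun distance and the standard speed-of-light figure:

# Sanity check: how long does sunlight actually take to reach Earth?
EARTH_SUN_DISTANCE_MILES = 93_000_000   # average Earth-sun distance (the "X")
SPEED_OF_LIGHT_MPH = 670_616_629        # speed of light in miles per hour (the "Y")

minutes = EARTH_SUN_DISTANCE_MILES / SPEED_OF_LIGHT_MPH * 60
print(f"Sunlight reaches Earth in about {minutes:.1f} minutes")  # ~8.3 minutes

# A claim of "days" is off by a factor of several hundred - exactly the
# sort of inconsistency the machine can catch without being asked.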

Here’s another example of AI thinking. Already, you can give it information about a building site for a house, such as the location, the topography and the boundaries, and tell it:

“Give me some bird’s-eye views (yes, it will understand that colloquialism) of potential house designs for a client who likes midcentury architecture and passive solar, and wants four bedrooms and a wine cellar. Oh, and bear in mind the Building Code of Pitkin County, Colorado and the HOA rules at this address.”

In seconds, the machine will churn out diagrams of such houses. It doesn’t scour the internet for diagrams to copy; it generates its own. It becomes an architect – one with the benefit of Frank Lloyd Wright, Leonardo da Vinci, Antoni Gaudí, and all the others firmly in its “head,” together with an instantaneous ability to figure out the workability of the designs it conceives.

If you want to tinker with a design, it will let you do so. You can say, “I like this one, but it’s kinda tall. Can you make it shorter and with a bigger footprint?” Or, “Let’s get into the HVAC and plumbing details on this one. Give me some schematics.”

To me, that’s high-level thinking again.

In medicine, AI already has the capability (though it hasn’t been tasked with this yet) to keep on file a patient’s lifetime medical history. A technician could say, “This patient is now experiencing sharp pain in his left-side torso and recurring headaches. What do you think?” AI might respond:

“It’s not his left kidney, because this patient had his left kidney removed in 2013. I recommend the following tests . . .  By the way, be careful with poking him – he’s on blood thinners. And he’s had claustrophobia in the MRI chamber before. Note his family history of diabetes.”

To me, that’s high-level thinking yet again.              

Ah, you say, that’s all just problem-solving. The machine still cannot dream, cannot feel. It knows the cost of everything, but the value of nothing.

Maybe, but the same can be said of many people.

As for AI’s ability as an aesthete, I asked ChatGPT the following (with deliberate misspellings):

“Make me a 3-dimentional wall hanging about 3 x 5 feet made out of scrap steel welded together to make an abstract sculture.”

Here’s what it came up with:

[ChatGPT’s rendering of the scrap-steel sculpture appeared here.]

I probably wouldn’t hang this on my wall, but, then again, I probably wouldn’t hang on my wall what passes for modern abstract masterpieces in museums today, either.

Now a word about the purported downside of AI – the Luddite notion that it will put everyone out of work and so we’ll all starve to death.

Economists know this is bunk. Technology certainly produces dislocations. The invention of refrigeration put thousands of ice men out of work. The invention of the automobile put millions of buggy-makers out of work. The invention of the internet is gradually putting late-night comedians out of work.

But overall, these technological wonders improve the efficiency of society – and, therefore, the wealth of society. If an invention doubles a worker’s efficiency, that doesn’t mean half the workers get laid off and starve. In the big picture, it instead means workers can get paid the same for working half the hours, or get paid double for working the same hours, or some blend of those two outcomes.
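To put toy numbers on that tradeoff – hypothetical figures, a sketch rather than an economic model:

# A worker's output per hour doubles thanks to a new invention.
hours_per_week = 40
pay_per_week = 1_000            # hypothetical weekly pay
productivity_gain = 2.0         # efficiency doubles

# Option 1: same pay and same output, for half the hours
new_hours = hours_per_week / productivity_gain    # 20.0

# Option 2: same hours and double the output, for double the pay
new_pay = pay_per_week * productivity_gain        # 2000.0

print(new_hours, new_pay)       # or any blend in between

Either way, the gains show up as worker wealth – fewer hours, more pay, or some mix of the two.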

That’s what has happened throughout history in response to technological innovation. We work fewer and fewer hours, even as we have more and more things. (Whether that makes us happier is a different question.)

We also live longer and longer. With AI, could we live forever?

Maybe. AI might not just cure disease and treat injury, but also stop the biological mechanism of aging.

Or AI might have the ability to receive an upload of a person’s memory – his life – before his body dies. A memory in a durable machine that can interact with humans would seem no less valid than a memory in a failing brain that increasingly cannot.

Could that AI embodiment of a person, residing on the computer cloud (maybe Heaven really is in a cloud!), continue to interact with the flesh-and-blood world? I don’t see why not. And what it experiences would of course add to the experiences that were originally uploaded. “You” would continue to “live.”

The AI “you” would undoubtedly be the object of real love by flesh-and-blood humans (let’s call them “humies”). After all, people routinely experience real love for inanimate objects like dolls and teddy bears and sports cars. They could surely love an image that talks with them, especially if they loved that image before its humie got buried.  

In receiving that upload of a person’s memory, would the machine also receive his soul? I cannot answer that question, nor, I suspect, can AI.

Charlie Kirk debates Jimmy Kimmel — a BS construction by AI

Note to readers: This is what ChatGPT came up with when I asked it to imagine a debate between Charlie Kirk and Jimmy Kimmel. Every word is from ChatGPT, except that one or two AI typos have been corrected.

_________________

Moderator: Welcome everyone to tonight’s debate on “Free Speech and Violence in America.” We’re honored to have Charlie Kirk and Jimmy Kimmel here to share their insights. Let’s begin with opening statements. Kirk, you have the floor.

Kirk: Thank you. Free speech is the foundation of democracy. It empowers individuals to voice their opinions, challenge authority, and push society forward. Throughout history, free speech has been instrumental in civil rights movements, political reform, and social progress. That said, with this freedom comes a responsibility. Words can incite violence, spread misinformation, and deepen divisions. Our goal should be to promote responsible speech—not censor it. Education, open dialogue, and adherence to social norms are the way forward. The core principle is that speech should thrive, but it should not be used as a weapon.

Kimmel: Thanks. I agree that free speech is vital, but let’s be clear. With the right comes the duty to prevent harm. In today’s world, inflammatory rhetoric, hate speech, and conspiracy theories are fueling real violence. We’ve seen many incidents where words have led to deadly actions — racial violence, terrorist attacks, online harassment turning into physical violence. Protecting free speech doesn’t mean tolerating speech that incites or promotes harm. Platforms — especially social media — must be held accountable. We need regulations and policies that draw the line where speech becomes dangerous. Society must defend freedom, but not allow it to devolve into chaos.

Moderator: An important discussion. Kirk, when you hear about hate speech, do you believe there should be limits?

Kirk: I believe in strong protections for free speech. Historically, censorship often suppresses dissent and can be used as a tool of oppression. The issue isn’t speech per se, but how we handle its consequences. Education is key — if people understood the impact of their words, they might moderate their tone voluntarily. Legal limits should be narrow – focused on directly inciting violence or spreading lies that cause harm. Broad restrictions risk infringing on fundamental rights and silencing constructive debate. The solution is fostering social responsibility, not censorship.

Kimmel: But history shows us that certain speech, especially hate speech, has led to violence and suffering. It dehumanizes people and can ignite mob actions or even genocide. Allowing unchecked harmful speech gives license to hate groups. That’s why platforms need clear policies – like removing hate speech and threats – and authorities should intervene when necessary. Sometimes, restrictions are essential for protecting vulnerable communities. We must recognize that the harm caused by hate speech is real, and it requires action.

Moderator: Let’s talk about where to draw the line. Kirk, do you think social media platforms should limit or regulate content more heavily?

Kirk: Platforms do have a role, but overregulation can suppress legitimate discourse. They should implement moderation tools and community standards, but these must be transparent and fair. Ultimately, the true solution lies in education — teaching people to critically evaluate content and behave responsibly online. Suppressing speech outright risks creating echo chambers and authoritarian control, which is dangerous for democracy.

Kimmel: I agree moderation is important, but the scale of social media makes it hard to control harmful content. Platforms often hesitate to act decisively due to fear of backlash or legal challenges. Meanwhile, harmful rhetoric continues to spread. Regulation can set standards that protect users without infringing on free speech. Additionally, social media companies must be held accountable for their role in amplifying dangerous content. We need collaborative efforts—government regulations, technological tools, and community initiatives—to create safer online spaces.

Moderator: Now let’s introduce a new topic — recent controversy surrounding Kimmel’s firing from a major network over comments related to political beliefs. Kimmel, would you like to comment on that?

Kimmel: Absolutely. The situation was very personal to me. I was fired after making some jokes that, depending on who you ask, were either funny or inappropriate, but what struck me was how quickly the culture shifted to silencing opinions and entertainment if they didn’t align perfectly with certain political narratives. It raises a fundamental question about free expression — should entertainers be penalized for their viewpoints or jokes? I believe that diversity of opinion, even offensive or controversial humor, is vital in a free society. Silencing voices due to pressure or fear undermines that principle.

Kirk: I agree with you. Freedom of speech includes the right to express unpopular opinions, including satire and humor. When society starts punishing individuals for expressing their views, it’s a step toward authoritarianism. The marketplace of ideas flourishes only when all voices can be heard, even if those voices are uncomfortable or controversial. I believe that silencing or punishing individuals – whether entertainers, journalists, or citizens – because of their opinions or jokes undermines the very foundation of our democracy. Free speech is messy and can be uncomfortable, but it’s essential for progress. When we start policing humor or viewpoints out of fear, we risk creating a society where only certain approved voices can be heard, and that’s dangerous. We need to defend the right to express even unpopular or provocative ideas, because only then can real debate and growth occur. Censorship, whether by corporations or government, erodes liberty and sets a dangerous precedent.

Kimmel: Exactly. When individuals are silenced or penalized just for expressing their opinions, it sends a signal that certain ideas are unacceptable — no matter how silly or offensive. That’s not the path to a healthy democracy. We must stand firm in defending free expression, even when we disagree with what’s being said. Otherwise, we pave the way for authoritarian control and a stifling culture where conformity replaces free thought.

Moderator: Thank you, Kirk and Kimmel, for your insightful perspectives. To summarize, both of you agree that free speech is a fundamental pillar of democracy, but with important responsibilities. Kirk emphasizes the importance of education, responsible discourse, and cautious moderation to prevent censorship and authoritarianism. Kimmel highlights the need for accountability, especially on social media, and stresses that harmful speech can incite violence and must be regulated to protect vulnerable communities.

Regarding the recent controversy about Kimmel’s firing, both agree that silencing or punishing individuals for their opinions erodes free speech and opens the door to authoritarian control. The balance, as you both suggest, is in protecting the right to express unpopular ideas while ensuring public safety and social harmony. Ultimately, fostering open, respectful dialogue and safeguarding free expression are essential for a healthy society.

Thank you both for a compelling discussion.

_________________

I say bullshit. This sanitized, anodyne debate imagined by ChatGPT sounds like a love-fest.

Sure, it’s likely that both men would come down on the side of free speech, in the abstract, and both would caution against internet excesses.

But the debate imagined by ChatGPT completely ignores the overt calls for violence by such groups as Antifa and BLM, and the barely-concealed calls for violence from Democrat leaders, such as Senator Schumer’s threat against Supreme Court Justices.

It also ignores Kimmel’s false suggestion that the Kirk murderer was “one of” MAGA. Any legitimate discussion of Kimmel’s firing has to deal with that lie. Finally, it ignores the refusal by about half of the Democrats in Congress to join a resolution condemning that violence.

So why did ChatGPT sidestep these issues?

I saw a piece a few months ago (if I could remember where, I would give credit to the author) suggesting that AI will not take bias out of the news. Rather, it will legitimize it. When you’re watching Rachel Maddow, you know she’s biased. When you’re asking ChatGPT, you assume that the machine is not.

In addition, AI knows only what it reads on the internet. The old saw “the internet says . . .” is literally 100% of the basis for AI’s conclusions. Computer programmers would say garbage in, garbage out.

For the record, I think Kimmel’s bosses should fire him, or not, based on their sense of human decency and what he brings to their business. (My own sense of decency comes down in favor of the firing.) I think viewers should watch Kimmel, or not, based on their personal sense of human decency and what he brings to their entertainment life. I’m wary of the government getting involved in those decisions.

Also for the record, I occasionally use AI for research, but never for writing. I happen to think that, for now, I’m a better writer than the machine.

AI “thinks” hospital gowns make sense

Marc Andreessen is a very successful Silicon Valley venture capitalist. Having picked and invested in early-stage tech companies such as Twitter and Facebook years ago, he swims with the sharks, you could say, and swims very well. His firm has over $45 billion under management. He was recently the subject of a wide-ranging interview by Bari Weiss, the intrepid founder of The Free Press.

The hottest thing in Silicon Valley these days is AI – the acronym for Artificial Intelligence. In its true definition, AI is much more than the next generation of powerful computers. If you ask Google Assistant or Apple Siri a question, it will search the internet for data pertaining to your question, aggregate that data, synthesize an answer, and write it up in understandable language.

It has what computer scientists call a heuristic ability. It learns. If it were playing chess with you, it would quickly learn your tendencies – and thoroughly kick your ass.

Andreessen remarked among other things that AI has been “a censorship machine . . . right from the very beginning.” That passing remark made headlines as an indictment of AI.

But that’s not exactly how Andreessen intended it. He actually thinks AI is a good thing and will make for a better world, and I tend to agree. But there’s a danger.

The danger is the oldest danger in computing: GIGO, or “Garbage In, Garbage Out.” AI is not in contact with the real world. What it knows about the real world is what it gathers on the internet. Its answers are only as good as the ones you yourself could get on the internet if you took the time. And it lacks the rich context of a human’s lifetime of experience for interpreting casual language, particular circumstances and unreliable sources.

AI has infinite knowledge and zero judgment.
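Here’s that flaw in miniature – a deliberately silly Python sketch (hypothetical, of course; real systems weigh sources in far more sophisticated ways):

# GIGO in miniature: an aggregator is only as good as its sources.
sources = [
    "gowns are efficient",
    "gowns are efficient",
    "gowns are efficient",
]

def consensus(claims):
    # Return the most common claim - with no way to test it against reality.
    return max(set(claims), key=claims.count)

print(consensus(sources))   # "gowns are efficient", true or not

If every source repeats the same claim, the most-repeated claim wins. Which brings me to the hospital.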

I saw this recently when I was in the hospital for some minor open-heart surgery. As hospitals do, they insisted that I wear one of those “gown” contraptions that are open in the back so that your buns hang out.

I find this piece of apparel quite annoying and embarrassing, and I know you do too. I’m guessing the hospital employees do as well, since an average hospital patient is not someone whose buns you would want to ogle.

So, I asked Google Assistant why hospital gowns tie in the back – where they cannot be tied at all. Here’s the answer I got back:

Hospital gowns tie in back for a variety of reasons:

Translation: There’s no good reason, but there’s a “variety” of them. Google Assistant listed them:

* Easy Access: The open back design allows medical technicians to quickly and easily access any part of the patient’s body for examinations, treatments, or procedures without the need to fully undress the patient.

It allows easy access alright. But there’s a big but. The big but is that it allows access to only the patient’s big butt. In my personal experience, that part of the patient is not the focus of many medical procedures. And, contrary to what Google implies, the patient in a gown is completely undressed apart from the gown.

There’s more.

* Patient Comfort: The loose fit and open back design can help prevent overheating and discomfort, especially for bedridden patients.

Patient comfort? So, will patient gowns catch on outside hospitals? Will they be the next big thing in fashion, replacing jeans and hoodies? Because they’re just . . . so, darned, comfortable?

And overheating? I find hospitals to be cold, if anything. I cannot remember a time when my buns were so hot – in any sense of the word – that I wanted to whip them out.

* Modesty: While the open back design might seem counterintuitive, it can actually provide a sense of modesty. The ties can be adjusted to ensure the gown stays securely in place, covering the patient’s back and providing a sense of privacy.

That’s not just “counterintuitive.” It’s a bald-faced, butt-on lie. The ties are not long enough to pull the gown closed and actually cover the patient’s backside, as AI admits in its very next bullet point:

* Efficiency: Tying the gown in the back is a quick and simple process, saving time for both the patient and the medical staff.

How on earth is tying a string behind your back “quick and simple”? Try it sometime, Google Assistant.

Overall, the design of hospital gowns is a balance between practicality, comfort, and modesty. The open back with ties is a design that has been found to be effective in meeting the needs of both patients and healthcare providers.

That’s all a big fat lie. But don’t blame AI. AI thinks it’s true because . . . Garbage In, Garbage Out.

This is a trivial example to prove Marc Andreessen’s point. AI is only as good – and is just as bad – as the information it receives. AI thinks hospital gowns are efficient, convenient and modest because the internet says so. If I were to say something contrary to that, AI would say I’m wrong.

This extends to everything – global warming, Jan. 6, Russian collusion, the Kennedy assassination, election rigging, and drones over Jersey.

That’s the sort of censorship we’re looking at. It’s a pervasive, insidious thing. We cannot put the AI genie back in the bottle, nor should we. But we should carefully monitor and dispute the information it relies on for its pronouncements, and take those pronouncements with a large grain of salt. That won’t be easy.