Artificial Intelligence:

The Fight for the Future of Humanity
Making the Case for a 28th Amendment

By Booker Scott & Kathleen Goble

Published 2025


As often as I remember to do so, I add the publication date to a book review.

In few cases is the date of publication as relevant as it is for this book, mainly because the subject matter is moving so swiftly that it's hard to get a "fix" on what's happening before that status is already dated. As in, days, even hours, can make a difference.

It wasn't that long ago (though, as you'll see in the review, further back than most of us realized!) that AI became a topic of discussion. I was talking with Al Fasoldt and Mark Yafchak of the old "Point 'n' Click" days on a recent podcast about the biggest computer-related "thing" of the year (in 2024), and my answer was: "Artificial intelligence was born."

Well, actually not, according to this incredibly detailed, insightful deep dive of a book into the mind of the machines. Specifically, the writers say "We begin with the hidden history of AI - from the 1956 Dartmouth Conference that launched the field to the invisible algorithms that shape everyday life today." Then they pose the question I'll wager most of their readers are asking as they read that: "How did we get here without noticing?"

The book sets out to remedy our lack of awareness that, while we were sleeping, Prometheus was busy; Dr. Frankenstein was collecting "parts," and the Blade Runners were being trained to search out Replicants who didn't come in when ordered. 

The premise of the book is that the time to act on the issue of Frankenstein's "monster," the humans Prometheus created and aided, the Replicants who demand to live and be free, is now, not when it becomes an imminent issue. The writers begin with the question: "What Happens When AI Starts to Feel?" They also point to a potential legal option: a 28th Amendment, which, at least for Americans, would define what is human, and how it differs from other "intelligent" beings. Originally conceived as the "Equal Rights Amendment," which was never ratified in its 1972 form, it is now being suggested that it be revisited as the "Human Rights Amendment."

AI, contend the authors, was inserted, even perhaps inserted itself, into our everyday life and we weren't watching. It's more or less a done deal - but we shouldn't remain asleep at the switch for the next order of business.

In video and film production, the term "CGI" appeared one day in the credits, and while its poorly realized results could at times be annoying or humorous, "computer generated imagery" soon became a regular part of even relatively simple dramas. Where once actors would walk "outside" on a set or across a green screen - all shot in a studio, and looking very obviously so - in newer movies the same scene, still captured indoors, looks absolutely real to the viewer and even, sometimes, to the actors.

But long before that, AI had begun to deliver results across a broad spectrum of everyday activities.

"AGI" or "Artificial General Intelligence" is within moments of being real. That is, machines capable of "matching human reasoning across all domains." That broad "domain" distinction is important, as machines can already "outthink" humans in basic calculations, and an average computer can play chess better than the best human player.

These "smarts" impact things as everyday as education (for both good and ill); medicine (machines perform operations, render diagnoses, and even tend patients); finances; even privacy. 

Having reached such levels of prowess and ability, the questions for the future become: 
"If an AI claims to suffer, do we owe it moral consideration?"
"Could machines gain legal rights that rival those of humans?"
"How do we know when artificial intelligence becomes conscious?"

And the questions continue into ones of work and purpose, competition and performance, democracy and power, identity and meaning.

The book handles these questions and issues on technological, moral, philosophical, even "artistic" and psychological levels. 

For example, the concept of "consciousness" is probed - awareness, perception, memory, emotion, and moral agency. What is it that allows human beings to "feel joy, suffer loss, seek truth, and ask 'why?'" And if we were to create a machine that could do these things, what is our "moral" responsibility? The writers note that in 2025, AI created a "soulful" symphony, raising the obvious question: how could a machine understand what music would touch a human ear and mind in an emotional and evocative way? Observers began to ask whether, if AI can feel or generate feelings, "denying it moral status" is ethical. And I made a marginal note, "Did God feel this way?" as humankind began to make choices that excluded the Creator.

The first chapter ends with the provocative sentence: "We must define what it means to be human before machines do it for us."

And of course, by now, the book has challenged your heart and mind with the possibility that these machines, if they can't yet, might reach a status that could be defined as sentient. 

The book follows an arc taking you through the many ways in which AI is already a significant part of our daily life, more and more so as more tools come online that can interact in a very direct way. You've probably noticed the AI "overview" presented on a Google Search that recaps the overall information gleaned from your search. I was thrilled in the old days when a carefully written query, with "and" and "not" operators and specific words chosen, could quickly return solid, relevant pages and sites that provided more information about the subject being investigated. Then it was up to me to bounce through them, finding the ones that best added information to my inquiry. Now the AI sums it all up for me in less time than it takes to open the "more" arrow at the bottom of the paragraph.

But at the same time, warn the writers, there are examples of how AI has already begun to present some negatives that we should be alert to: from bias (who trains the AI, and how?) to displacement (lost jobs), from dependence, both intellectual and emotional, to real brain changes and loss of function.

One particularly interesting sub-story in the book tells about Martine Rothblatt, a visionary who as early as 2014 suggested that there were people, even then, who were "writing code that will allow robots to feel every emotion you feel. They will love, they will hurt, they will be joyous and sad." Rothblatt had evidently been right about earlier technologies, satellite radio for example, predicting our future listening patterns, and even created a "humanoid robot" to explore the possibility of, in essence, "downloading" a human being's ideas, conversations, emotional patterns, and expressed values into code that an AI-powered robot could emulate.

The book even spends time observing how different parts of the world appear to be handling the challenge of AI looming before us: in China, the push is toward AI dominance and collective over individual rights. In the US, innovation and a competitive edge are sought. In the EU, a rights-based regulatory model is the ideal, and in developing countries, the emphasis seems to be on whether AI will further colonize them or simply leave their growth and development options limited.

This is a book that should be read today, as the issues in it will be on the table literally tomorrow. It behooves all of us to at least consider them, understand their potential significance, and consider how we would prefer to react before we have to push a button, on or off, with no time to think. 
