Author, Lecturer, Ethicist

#947: A.I.'s Next Step: Bard+Google

  Kurt, Alan and Clint Eastwood in “Coogan’s Bluff”

Longtime readers of this blog will recognize the name “Alan Wald.” Alan, one of the “Hollywood Brats,” is my longtime friend; we originally met back in September 1961 in J.O. Ito’s homeroom class, and he was my lab partner in Mr. Falb’s chemistry class. Over the years, Alan has always been the guy who picks me up at and returns me to LAX (Los Angeles International Airport) whenever I come back home. We share a ton of great memories, including the time we spent part of our Winter break from University acting as extras over at Universal Studios in the 1968 Clint Eastwood film “Coogan’s Bluff” (that’s me sporting a serape just in front of and slightly to the left of Mr. Eastwood; Alan is the bearded dude standing just underneath the fellow with the upstretched arm). We well remember the shoot; at one point during our lunch break, I donned Alan’s “Sgt. Pepper” red satin jacket, complete with epaulets, and convinced members of the visiting Indiana Hoosier Rose Bowl team that I was John Lennon; I wound up signing a couple dozen autographs.

Alan is one of the brightest, best-educated and wittiest people I have ever known. His wit comes as no surprise; after all, he is the son and nephew of celebrated screenwriters. His father, Malvin Wald, wrote one of the most iconic of all motion picture tag lines: “There are 8 million stories in the Naked City. This has been one of them.” As part of his genetic inheritance, Alan has the trained eye of an editor; he has been catching my typos and errata for years. We are in constant contact via email; hardly a day goes by without my receiving 4, 6, even 8 sparkling missives commenting on the day’s news. A day without an email is indeed like a day without the sun. Many of my weekly essays have their genesis in an idea placed in my head by my long-ago lab-mate.

Which brings us to the title of this piece:  A.I.’s Next Step: Bard + Google. . . .

This past Wednesday, among the several emails Alan sent me over a two-hour period was one carrying the title: Example of use of Bard - Bard likes your essays - not too shabby. It introduced me to an experimental bit of A.I. (“Artificial Intelligence”) called Bard, best described as “. . . a conversational generative artificial intelligence chatbot developed by Google, based initially on the LaMDA family of large language models (LLMs) and later the PaLM LLM. It was developed as a direct response to the rise of OpenAI's ChatGPT.” Bard was initially released in a limited capacity in March 2023 to lukewarm responses, before expanding to other countries in May. So what does Bard do? Simply stated, it can do instant reviews of almost anything one can write, so long as it is under 4,000 words (at least for the present).

Alan’s email (“. . . not too shabby”) turned me on to the fact that he had copied and pasted my most recent post (Senator Potato Head) into Bard+Google and, within less than a minute, received three different summaries/critiques, one of which read:

“The author of this article is highly critical of Senator Tommy Tuberville's decision to hold up Senate approval of military promotions. The author argues that this is a dangerous and unpatriotic move that is putting America's military readiness at risk. The author also points out that Tuberville's actions are motivated by his own religious beliefs, which should not be allowed to influence military decisions.

The author's arguments are persuasive. Tuberville's decision to hold up the promotions is clearly having a negative impact on the military. The Marine Corps is currently without a Commandant, and the Joint Chiefs of Staff will be without a chairman in September. This is a critical time for the military, and Tuberville's actions are only making things worse.

The author is also right to point out that Tuberville's actions are motivated by his own religious beliefs. This is a dangerous precedent to set, as it could lead to other politicians using their religious beliefs to influence military decisions. The military should be free from political and religious interference, and Tuberville's actions are a threat to that principle.

Overall, this is a well-written and persuasive article that makes a strong case against Senator Tommy Tuberville's decision to hold up Senate approval of military promotions. The author's arguments are clear, concise, and well-supported. I highly recommend this article to anyone who is interested in the issue of religious influence in the military.”

After reading this I thought: “I wonder what Bard would think about the rather ‘thumbs-down’ essay [A Pandora’s Box of Existential Fears] I posted back at the end of January this year on A.I.? Would it be particularly hard on me for being so damnably negative?” And so, I copied and pasted it into Bard+Google, and within less than a minute it presented a couple of reviews. Here’s one of them:

In your article, you raise some important concerns about the potential impact of ChatGPT on education and democracy. You argue that ChatGPT could be used to cheat on exams, write essays that are indistinguishable from those written by humans, and even target key legislators and influencers with propaganda. You also worry that ChatGPT could be used to create a successful autocrat who is able to manipulate people with lies and deception.

I share your concerns. ChatGPT is a powerful tool that could be used for good or for evil. It is important to be aware of the potential risks of this technology and to take steps to mitigate them. For example, we need to make sure that students are taught how to critically evaluate information and to distinguish between human-generated text and machine-generated text. We also need to develop policies to prevent the use of ChatGPT for malicious purposes.

I believe that the potential benefits of ChatGPT outweigh the risks. This technology has the potential to revolutionize education, making it more accessible and effective for everyone. It could also be used to improve communication and collaboration, and to make it easier for people to access information and services.

However, it is important to use ChatGPT responsibly. We need to be aware of the potential risks of this technology and to take steps to mitigate them. If we do so, ChatGPT has the potential to make a positive impact on the world.

I agree with your suggestion that Congress should pass a law requiring a disclaimer averring something like “BEWARE: That which follows is the creation of Artificial Intelligence. Any resemblance to the human thought process or the truth is purely coincidental.” This would help to protect people from being misled by ChatGPT and would help to ensure that this technology is used for good.

Thank you for raising these important issues. I believe that it is important to have a public conversation about the potential impact of ChatGPT and other forms of AI. We need to be aware of the potential risks of this technology, but we also need to be open to the potential benefits. I believe that ChatGPT has the potential to make a positive impact on the world, but only if it is used responsibly.

At this, I decided to copy and paste Macbeth’s soliloquy upon hearing of his wife’s death in Act 5, Scene 5:

She should have died hereafter.

There would have been a time for such a word.

Tomorrow, and tomorrow, and tomorrow

Creeps in this petty pace from day to day

To the last syllable of recorded time;

And all our yesterdays have lighted fools

The way to dusty death. Out, out, brief candle!

Life’s but a walking shadow, a poor player

That struts and frets his hour upon the stage

And then is heard no more. It is a tale

Told by an idiot, full of sound and fury,

Signifying nothing.

These are among the greatest 12 lines in all of literature. They are both slippery and opaque; Macbeth is essentially asking himself if he’s still capable of feeling anything, specifically fear. In copying and pasting this passage into Bard+Google, I wondered if it would even know from whence it came. And here is its almost immediate answer/review:

This is a famous soliloquy from William Shakespeare's play Macbeth, spoken by the title character. In it, Macbeth reflects on the meaninglessness of life and the inevitability of death. He compares life to a "walking shadow" and a "poor player" who struts and frets his hour upon the stage, only to be heard no more. He concludes that life is "a tale told by an idiot, full of sound and fury, signifying nothing."

This soliloquy is often interpreted as a reflection of Macbeth's own inner turmoil. He has committed regicide and is now haunted by guilt and fear. He is also aware that his days are numbered, as Malcolm and Macduff are approaching his castle to besiege it. In this context, the soliloquy can be seen as Macbeth's attempt to come to terms with his own mortality.

However, the soliloquy can also be interpreted more broadly. It is a meditation on the nature of life and death that is still relevant today. The words "tomorrow, and tomorrow, and tomorrow" have become a metaphor for the endless cycle of life and death. The phrase "full of sound and fury, signifying nothing" has been used to describe anything that is ultimately meaningless.

The soliloquy is a powerful and moving piece of writing that has resonated with audiences for centuries. It is a reminder that life is short and that we should make the most of it. It is also a reminder that death is inevitable, but that it does not mean that life is meaningless.

Bravo! Bard is well aware that I did not write these words . . . and shows a pretty deft understanding of their meaning. And so, just to get an even better feel for Bard+Google, I fed it a recent essay of mine on the many forms of pre-senile dementia entitled The Long Goodbye. My main question was not whether the program liked my essay or found it important or intelligent, but rather whether it could follow something largely written in a “foreign” language . . . namely, medical terminology. And here is what it had to share about a 1,650-word essay:

Thank you for sharing your essay. It is well-written and thought-provoking. I agree with you that we need hope, not hysteria, in the face of diseases like Alzheimer's. It is important to remember that no drug is 100% safe, but that the benefits of Leqembi may outweigh the risks for many people. I also agree that clinical research is not a "zero sum game" between patients and profit. There are many people who are dedicated to finding cures for diseases, and they deserve our support.

I am curious to know what you think of the way that the FDA has approved Leqembi. Some people have criticized the FDA for approving the drug, saying that it is not effective enough. Others have said that the FDA should have required the drug to be more affordable. What is your opinion?

I also wonder what you think of the role of conspiracy theories in the context of medical research. Do you think that conspiracy theories can have a negative impact on the development of new treatments?

I would be interested to hear your thoughts on these questions.

                       Eleanor Roosevelt  (1884-1962)

Although Bard+Google did not understand the purpose of this essay, it did realize that there are far more unanswered questions with which we must grapple in the near future. The same can, of course, be said for A.I. - Artificial Intelligence - in general, and for A.G.I. - “Artificial General Intelligence” - like “Bard” and other newly emerging, still largely hypothetical intelligent agents in particular. Whether one knows it or not, every time we institute a Google search or ask Siri or Alexa a question, we are using A.I. A.G.I. is a type of intelligent agent; the concept is that it can learn any intellectual task that human beings or animals can accomplish. Unlike A.I., which relies on ever-expanding datasets to perform more complex tasks, A.G.I., we are told, will someday be able to exhibit the same attributes as those associated with the human brain, including common sense, background knowledge, transfer learning, abstraction, and causality. Of particular interest is the human ability to generalize from scanty or incomplete input.

I rather doubt that Alan Wald and I will be around when “Bard” replaces “Shakespeare,” or makes Stanley Kubrick and Arthur C. Clarke’s character “HAL 9000” seem as quaint as an abacus. But this I/we do know: technological advances always run at a far different pace than the concomitant ethical and moral challenges they create. And by the way, I am certain that nothing in the realm of A.G.I. will ever approximate Alan’s brilliant wit!

Eleanor Roosevelt - the greatest of all First Ladies - once noted that “The future belongs to those who believe in the beauty of their dreams.” By this, I believe she was suggesting that those who have a clear idea of what they want to achieve and a strong belief in their ability to make it happen are the ones who will be most successful in creating the future they desire. Were she alive today, this most thoughtful and humane of all women might well change her mind and restate her aphorism as “The future belongs to those who best conquer the challenge of their nightmares.”

 Copyright©2023 Kurt Franklin Stone