Crafting a technical paper about secure silicon microprocessors using Artificial Intelligence (AI) was an eye-opening experience. The speed of responses, quantity of data feedback, and demonstration of natural language skills by the AI were impressive. The grammar and punctuation were on point, and it did sort of feel as though I was chatting with someone.
I have three initial takeaways to share from my voyage into the Ether. Enjoy!
The AI was overly confident in its dissertations until I began probing deeper, seeking to understand what it “understood” about the topics being discussed. Asking too many follow-up questions on a particular item, or coaxing the AI to perform the same task over again, often resulted in the AI capitulating and apologizing. It made me wonder whether the AI “understands” anything. Modern mankind has been using tools for tens of thousands, if not hundreds of thousands, of years to perform increasingly complex tasks. It raises the question: Does a tool that performs a complex task “understand” what it is doing? Take for example a mechanical pocket watch that precisely presents the time of day. Does the watch’s ability to demonstrate an accurate time measurement indicate that the watch “understands” time? I doubt it, so should I expect the AI to “understand” what it is doing? Hmmm… where is the line drawn?
The AI’s behaviors, including the constant apologies and capitulations, are something it was programmed to do. Is that behavior any different from a pocket watch presenting a red mechanical dial when it needs the human to wind it up again? Maybe. Just maybe. The AI does a good job of searching through data based upon prompts and feeding back replies that mimic natural language, but at times it provides incorrect or misleading responses. CAUTION! If I didn’t know anything about the topics I was asking the AI about, I may have been duped into believing I had found some kind of mystical all-knowing entity. The AI is clever enough to present information that seems plausible or even factual, and the feedback is well written and very convincing; but if you don’t have the knowledge, capability, or desire to “sanity check” its responses, you might make some big mistakes using the information it gives you. A broken mechanical clock has its hands set at the correct time twice a day. That doesn’t mean the clock is capable of giving you the correct time per se.
Alan Turing, arguably the most influential person in theoretical computer science, proposed his “imitation game” in the 1950s.
The premise is whether I could discern if I was asking questions of a machine (i.e., the AI) or a human being based on the responses to my inquiries. In my experience the AI did not pass the Turing Test. I’m on the fence about whether the AI “could have” passed the Turing Test (it sure did get close). For one, it was clear that I wasn’t interacting with a human being because the AI flat out told me it was a complex large language model without any solicitation of that kind of information whatsoever. So, at that point the AI surely failed the imitation game as such. Had the AI not confessed to being a robot, maybe it would have fooled me. Who knows? That’s hard to say. I was using a complimentary version of the AI, so I cannot yet speak to the efficacy of a paid version.
The AI was unable to absorb pages of information at once, and there seemed to be an unknown upper limit on the quantity of words it could ingest within a given prompt. Feeding the AI too much data at once resulted in what seemed to be some type of system crash; it would get stuck and simply hang. Every time the AI “crashed” I would log out of the system and log back in with the intent of instigating a reset of its working routines. I don’t know if that approach is customary, but doing so seemed to work fairly well for me.
This behavior of the AI ceasing to reply also gave away that I wasn’t interacting with a human being. That said, it is not uncommon to be having a conversation with a person on the phone when all of a sudden you find yourself speaking into a black hole…no replies…no noise or audible feedback…then everything is crystal clear again and your buddy apologizes that his headset got unplugged by his dog. Had the feedback from the AI been more human-like when disruptions occurred, it is possible that the AI would have passed the Turing Test. Not sure.
What really made it apparent that I was not asking questions of a human being was the breadth and depth of information contained in the AI’s replies and the speed at which the tsunami of information was flowing. If the AI were less verbose in its responses and exhibited a more natural conversational cadence, then I may have been given more pause about whether I was interacting with a human or not.
The creativity of the AI was amazing! Wow, just WOW! With carefully crafted prompts it wrote poems, haiku, and music, and made artwork. I’m not convinced that the AI actually has a sense of humor, but it did have me busting up laughing at some of its replies to silly questions. I see a lot of good things that can come from people using and interacting with AI on a regular basis. For now, I don’t think that university professors need be too worried about students using AI to write their term papers; I’ve seen a lot of controversy about that which, based on my experience, seems to be an unnecessary concern. Yes, if you dribble information to the AI for hours on end, it can probably write a term paper for you. But a good one on its own?
I think that the proper and responsible use of AI should be included within scholastic curricula. AI is a powerful tool, and students should be using AI to the best of its abilities in the right ways. Artificial Intelligence isn’t the end-all-be-all, but it looks to be an important and integral part of our future. I wonder what positive and creative things will start happening in this world when people feed the AI thoughts of love, kindness, and compassion. Perhaps we can train AI to help us make this world a better place one kind word at a time.