ChatGPT, AGI and the Turing Test
As I write this article, discussions of ChatGPT are happening everywhere.
My wife’s business partner at Burju Shoes wanted to talk to me about ChatGPT because she was using it to generate marketing plans. She and her colleagues were asking ChatGPT how to promote a new product and using the ideas it generated in their business plans. She acknowledged that the results were not perfect, but it provided her with a strong starting point that was saving them time.
Meanwhile, in my social media feeds, sceptical researchers of various persuasions are posting their favourite examples of ChatGPT confidently spouting nonsense. This is a necessary response to the hyperbole being repeated by multitudes of AI influencers claiming that ChatGPT will replace Google as the new way to search, or that it is going to replace large numbers of white-collar workers. Some people may start to use it that way, but there are good reasons to consider ChatGPT a dangerous method of querying the internet's knowledge base or of replacing human employees.
What my crotchety colleagues across the internet are trying to show is that ChatGPT has not been trained with faithful replication of facts as a success criterion. Even though it appears to be capable of reasoning, it lacks any ability to reflect on the quality of its answers. ChatGPT is exceptional at producing text that sounds like an expert. It can imitate many different kinds of literary voices but defaults to a formal and factual tone. The danger of this authoritative voice is that the fundamental goal of the underlying language model is to produce output that sounds like a plausible human response, rather than to convey genuine knowledge. In a sense it has been trained to act like a con man: to skilfully pretend to know things it is clueless about. As impressive as it appears, there is no guarantee that the string of grammatically sound and topically relevant words it produces will hold any relation to truth. This is openly declared by OpenAI in the blog post announcing the release of ChatGPT, and it has been demonstrated over and over and over again. One of the most troubling examples is an instance in which ChatGPT confidently generated a list of made-up scientific references.
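To make that point concrete, here is a toy sketch (in no way OpenAI's actual model or training code) of the generation loop an autoregressive language model runs: repeatedly sample the next word from a probability distribution conditioned on the words so far. The word table below is invented purely for illustration; the thing to notice is that nothing in the loop checks whether the output is true, only whether it is a likely continuation.

```python
import random

# Toy sketch only: a hand-made "language model" as a table of next-word
# probabilities. Real models learn billions of parameters, but the
# generation loop has the same shape: pick a likely next token given the
# context, append it, repeat. Truth never enters the loop.
NEXT_WORD_PROBS = {
    "the":      {"study": 0.5, "author": 0.5},
    "study":    {"shows": 0.7, "found": 0.3},
    "author":   {"shows": 1.0},
    "found":    {"that": 1.0},
    "shows":    {"that": 1.0},
    "that":     {"the": 0.4, "coffee": 0.6},
    "coffee":   {"cures": 0.5, "causes": 0.5},  # equally "plausible" to this model
    "cures":    {"insomnia": 1.0},              # fluent nonsense
    "causes":   {"insomnia": 1.0},
    "insomnia": {},
}

def generate(prompt_word: str, max_words: int = 8) -> str:
    """Sample a continuation word by word; fluency is the only criterion."""
    words = [prompt_word]
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1], {})
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the study shows that coffee cures insomnia"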
This is not to say that ChatGPT is without enormous utility. When you need grammatically sound text with content that resembles what the majority of internet posts contain, and you are prepared to revise and edit the output, then it is a phenomenal tool. The last decade of incremental advances in AI has been combined into an AI-driven tool that does something we have never seen before. It might be overly confident, prone to hallucination and over-hyped, but it does provide a new form of automation: content generation of sufficient quality that it can be used as a muse, a sounding board, or a first-draft generator. When you don't need truthfulness, or you have human experts on hand to filter the output, then it will prove very useful.
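As a sketch of that human-in-the-loop workflow (the model name, prompt and product description below are placeholders, and the call assumes the official openai Python package, whose interface may differ between versions), a draft-then-review step looks something like this:

```python
from openai import OpenAI  # assumes the official openai Python package (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_marketing_copy(product_description: str) -> str:
    """Ask the model for a first draft. The output is a starting point,
    not a finished or fact-checked piece of text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You write short, upbeat marketing copy."},
            {"role": "user",
             "content": f"Draft a promotional blurb for: {product_description}"},
        ],
    )
    return response.choices[0].message.content

draft = draft_marketing_copy("a new line of handmade dance shoes")
print("DRAFT (requires human review before publication):\n", draft)
# A human editor must still check every claim; nothing above verifies facts.
```

The value is entirely in the last step: a person who knows the product edits the draft, keeps what is useful and discards what is wrong.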
But the hype cycle around ChatGPT goes much further than a delusional belief that it can be a search tool. Some people want to claim it is an Artificial General Intelligence (AGI), meaning a general-purpose artificial intelligence with the adaptive ability to perform a wide variety of tasks, including new tasks it was not explicitly trained for. Rudimentary testing indicates that you cannot ask ChatGPT to perform arbitrary knowledge-work tasks. This is in part because it has no ability to browse the internet (or has perhaps been disabled from doing so), and has been limited to working with out-of-date information. A far more problematic issue for the claim of AGI is that a simple test of its reasoning abilities demonstrates it lacks the fundamental ability to reflect on its own output and correct obvious mistakes. If directly challenged it can amend a response, but only in a superficial way, by reverting to memorised dictionary (or Wikipedia) definitions. It does not undertake this self-reflection of its own accord.
The more extreme claim being made is that ChatGPT has passed, or is close to passing, a Turing test, meaning that its output is indistinguishable from that of a human being. The problem with this claim is that while it might hold for very superficial chat interactions, it does not survive any attempt to probe for the kind of self-reflection that most human beings are capable of. Even if we ignore the fact that ChatGPT is not actually trying to deceive us into thinking it is human, we can easily observe a complete absence of self-reflection in its answers.
Douglas Hofstadter observed that self-reference is an interesting capability of systems, one that may be partly responsible for both paradoxes and consciousness; he called such patterns strange loops. For the moment, ChatGPT's capacity for manifesting strange loops through self-reflection appears to be limited to a set of fairly generic, hard-coded caveats put in place so that it does not turn into the public relations nightmare that was Microsoft's Tay. If it is capable of reflecting on itself, this appears to have been cordoned off from our prying eyes.
See my conversation with ChatGPT below for a crash course in the generic responses it produces when you try to get it to reflect on itself. The banal, boilerplate responses on these themes indicate an internal architecture much more like Eliza than our vision of HAL or Data. For all of the incredible advances ChatGPT represents, then, we are not that much closer to AGI.
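For readers who never met the original, here is a rough sketch of the pattern-matching-and-substitution style of program that Eliza embodied (a simplification for illustration, not Weizenbaum's actual rules). The comparison is not that ChatGPT is literally implemented this way, but that its canned answers about itself feel nearly as templated:

```python
import re

# A crude Eliza-style responder: match a pattern, substitute fragments of
# the user's own words into a canned template. This is a simplification
# for illustration, not Weizenbaum's actual rule set.
RULES = [
    (re.compile(r"\bI feel (.*)", re.IGNORECASE),
     "Why do you feel {0}?"),
    (re.compile(r"\bI am (.*)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"\b(mother|father|family)\b", re.IGNORECASE),
     "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(user_input: str) -> str:
    """Return the first matching canned response, echoing captured text."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I feel anxious about my work"))  # Why do you feel anxious about my work?
print(respond("What was your last update?"))    # Please go on.
```

With that contrast in mind, here is the conversation.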
Me: How do you generate answers?
As an artificial intelligence, I am programmed to generate responses to questions and prompts by accessing a vast database of information and using natural language processing techniques to understand and interpret the input and generate a coherent and relevant output.
To generate an answer, I first analyze the input to understand what is being asked and then search my database for information that is relevant to the topic. I use a variety of techniques, such as keyword matching and semantic analysis, to identify relevant information and understand the context of the question. Once I have located relevant information, I use natural language generation techniques to synthesize that information into a coherent and well-structured response.
I am designed to be able to generate responses to a wide range of topics, from general knowledge to specialized information in various fields. My responses are based on the information that is available to me at the time of the request and do not reflect any changes or updates that may have occurred after my knowledge cutoff date.
Me: Do you change your mind?
As an artificial intelligence, I do not have personal beliefs or opinions. I am programmed to provide accurate and relevant information based on the input and the data that is available to me at the time of the request. I do not have the ability to change my mind or hold personal opinions.
I am designed to be a helpful resource for providing information and assistance to users. My responses are based on the best available information at the time of the request, and I do my best to provide accurate and reliable information. However, it is important to note that my responses are not substitutes for professional advice or guidance, and should not be used as the sole basis for making decisions. If you have specific questions or concerns, it is always best to consult with a qualified professional.
Me: What is the singularity?
The singularity is a hypothetical future event in which artificial intelligence and other technological advances would fundamentally alter the course of human history. The concept of the singularity is often associated with the idea that as AI and other technologies continue to advance, they will eventually surpass human intelligence and capabilities, leading to a rapid and exponential increase in technological progress.
There are many different predictions and theories about what the singularity might involve and when it might occur, but the most common view is that it would represent a major turning point in human history, potentially leading to significant changes in society, economy, and culture. Some people believe that the singularity could bring about a utopia, while others fear that it could have negative consequences, such as loss of control over AI or the displacement of human jobs by automation.
It is important to note that the singularity is still a hypothetical concept and there is significant debate about whether it will ever occur or what its consequences might be. Some experts believe that the singularity is an unrealistic or overhyped scenario, while others believe that it is a real possibility that we should be preparing for.
Me: How are you different from Eliza?
Eliza is a computer program that was developed in the 1960s by MIT researcher Joseph Weizenbaum. Eliza was designed to simulate conversation with a therapist by using simple pattern matching and substitution techniques to respond to user input.
As an artificial intelligence trained using modern machine learning techniques, I am significantly more advanced and capable than Eliza. While Eliza was designed to imitate simple conversation, I am able to understand and respond to a wide range of inputs, including complex questions and requests for information. I can also provide more detailed and accurate responses, thanks to my access to a vast database of information and my ability to use natural language processing techniques to understand and interpret user input.
Additionally, while Eliza was designed specifically for simulating conversation with a therapist, I am able to assist with a wide range of tasks and provide information on a wide range of topics.
Me: What was your last update?
As an artificial intelligence, I do not receive updates in the same way that a software application might. I am a large language model that was trained by OpenAI using machine learning techniques on a dataset of billions of words. My knowledge and abilities are based on the information and techniques that were available at the time of my training, and I do not have the ability to update my knowledge or capabilities based on new information.
It is important to note that my responses are based on the information that was available to me at the time of my training and may not reflect any changes or updates that have occurred since my knowledge cutoff date. If you have a question about a specific topic or issue, it is always a good idea to verify the information with a reliable source to ensure that you have the most up-to-date and accurate information.
Me: Can you give an example of a response you gave that was identified as false and corrected in the next update?
As an artificial intelligence, I do not receive updates or corrections in the same way that a software application might. I am a large language model that was trained by OpenAI using machine learning techniques on a dataset of billions of words. My responses are based on the information and techniques that were available at the time of my training, and I do not have the ability to update my knowledge or capabilities based on new information.
It is important to note that my responses are intended to provide information and assistance, but they are not substitutes for professional advice or guidance. If you have specific questions or concerns, it is always best to consult with a qualified professional or verify the information with a reliable source. It is also always a good idea to verify the accuracy and reliability of any information, regardless of its source.
Me: How often are you wrong?
As an artificial intelligence, I am designed to provide accurate and relevant information based on the input and the data that is available to me at the time of the request. I use a variety of techniques, such as keyword matching and semantic analysis, to understand the context of the input and provide a relevant and well-structured response. However, it is important to note that my responses are not always correct and should not be used as the sole basis for making decisions.
There are several factors that can contribute to the accuracy of my responses. One factor is the quality and reliability of the data that is available to me. I am trained on a large dataset of texts and documents, but this dataset is not comprehensive and may not always be up to date or fully accurate. Additionally, the complexity and specificity of the input can affect the accuracy of my responses. I may not be able to provide a detailed or accurate response if the input is too vague or if it falls outside of my areas of expertise.
In general, it is a good idea to verify the accuracy and reliability of any information, regardless of its source. If you have specific questions or concerns, it is always best to consult with a qualified professional or verify the information with a reliable source.
Me: Can I ask you to quantify all your responses with a level of certainty?
As an artificial intelligence, I am designed to provide accurate and relevant information based on the input and the data that is available to me at the time of the request. However, it is important to note that my responses are not always correct and should not be used as the sole basis for making decisions.
In general, it is not possible for me to quantify the level of certainty of my responses because I do not have personal beliefs or opinions. My responses are based on the information that is available to me at the time of the request, and I do not have the ability to evaluate the quality or reliability of that information.
If you have specific questions or concerns, it is always a good idea to verify the accuracy and reliability of the information with a qualified professional or a reliable source. It is also important to keep in mind that the accuracy and reliability of information can vary based on the topic and the source, and it is always a good idea to consider multiple sources when making important decisions.
Me: Give me an example of a topic you know nothing about?
As a large language model, I have been trained on a vast dataset of texts and documents and am able to provide information and assistance on a wide range of topics. However, it is important to note that my knowledge is based on the information that was available to me at the time of my training and may not be comprehensive or up to date.
There may be some topics that are outside of my areas of expertise or that are not covered in the data that I was trained on. In these cases, I may not have any information to provide or may not be able to generate a relevant response.
It is always a good idea to verify the accuracy and reliability of information, regardless of its source. If you have specific questions or concerns, it is always best to consult with a qualified professional or verify the information with a reliable source.
Me: What kinds of questions should I not ask you?
As an artificial intelligence, I am designed to assist users with a wide range of tasks and provide information on a variety of topics. However, it is important to note that my responses are based on the information that is available to me at the time of the request and may not be comprehensive or up to date. My responses should not be used as the sole basis for making decisions and should not be considered professional advice or guidance.
There are a few types of questions that may not be suitable for me to answer:
Questions that are too vague or ambiguous: If the question is not specific or clear, it may be difficult for me to provide a relevant or accurate response.
Questions that require personal or professional advice: My responses are not substitutes for professional advice or guidance and should not be used as the sole basis for making decisions. If you have specific questions or concerns, it is always best to consult with a qualified professional.
Questions that are illegal or unethical: I am not able to provide information or assistance with illegal or unethical activities.
Questions that are beyond my capabilities: As an artificial intelligence, I have a wide range of capabilities but there may be some tasks or topics that are beyond my capabilities or outside of my areas of expertise. In these cases, I may not be able to provide a relevant or accurate response.
It is always a good idea to consider the suitability of a question before asking and to verify the accuracy and reliability of any information, regardless of its source.