'ChatGPT, write an article about ChatGPT'

News
ChatGPT is a large language model developed by OpenAI that generates human-like text and responds conversationally, like a chatbot. Although many chatbots have been developed in the past, ChatGPT seems to rise above the level of anything that came before it. It is the virtual conversation partner that is faster than any real one you have ever had and is available at any time of day. It could probably write this article using the title as input, but do we want that?

Blog by Thijmen Kupers

At the end of November, the OpenAI research group released its newly trained chatbot called ChatGPT. It is built to interact in a conversational way and is able to answer follow-up questions. Although this alone is not revolutionary, the level of its responses is. It can write a simple explanation of a complex topic, compose a poem in the style of a famous writer, write song lyrics including a chord scheme, write computer code, and even create grammar rules for a completely made-up language. If you are not happy with the output, you simply ask it to change its response by adding some additional information. All of this is done within mere seconds, and it is open to everyone to use (for now).

A brief history of GPT-3

ChatGPT is a member of the large language model (LLM) family. The model is based on the GPT-3 architecture (the third iteration of the Generative Pre-trained Transformer) and contains about 175 billion parameters, which at the time of its creation was larger than anything before it. Since then we have seen even larger models, such as Megatron-Turing from Microsoft and NVIDIA, and speculation about GPT-4; however, these have not yet been made available to the public. The continually increasing size of new language models is mainly due to one observation: larger models perform better. This trend is linear and did not level off with GPT-3. Some speculate that this is a new version of Moore’s law. Others question whether ever-larger models are the right approach. Besides arguments about carbon footprint and complexity, the main argument is that when model sizes become comparable to that of a typical human brain, it should tell us we are on the wrong path, especially considering that not all of the human brain is dedicated to language.

Moore’s law for large language models
(source: https://huggingface.co/blog/large-language-models )


What can I use it for?

Currently a lot of people, including me, seem to be trying to figure out what the power of ChatGPT is by simply trying a bunch of stuff. I started off by asking it to explain AI to a 10-year-old, to see how well it grasps a difficult topic like this. I have asked it to write some code, to explain what IPA beer is, and even to write the opening to this article, although in my biased opinion mine was better. All these requests were within the grasp of my own knowledge, but ChatGPT can respond to things that go beyond my understanding. However, the question you should be asking yourself is: “Do I want to ask an AI text generator about more than I know?”

While testing the chatbot, it became clear to me that it is not flawless and sometimes even clearly hallucinates facts. In such cases, the responses are still very fluent and confident. You can try this out by simply asking ChatGPT for references or a URL on a topic: it gives you imaginary outputs, although I have noticed that some URL guesses are so close to something real that you get redirected to an actual webpage. This shows that ChatGPT is a next-word prediction model, albeit an extremely sophisticated one. As to why ChatGPT sometimes hallucinates facts, we will have to discuss reinforcement learning.

Reinforcement Learning from Human Feedback

The GPT model is not simply trained on next-word prediction and some other unsupervised tasks, but is also trained using Reinforcement Learning from Human Feedback (RLHF). This means the model was fine-tuned with the help of human AI trainers. In step one, AI trainers wrote answers to prompts, with model-written suggestions as inspiration. In step two, the AI trainers ranked several model outputs, and these rankings were used to train a reward model that algorithmically improves the training at a later stage. These steps are shown in the image below. This results in more human-like answers, but with the risk of behavior cloning (BC).
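To make the ranking step concrete, here is a minimal sketch of the pairwise ranking loss commonly used to train such a reward model (as in OpenAI’s InstructGPT work). The `reward_model` callable here is a hypothetical stand-in for any network that maps a prompt and a response to a scalar score; the loss pushes the response the trainer ranked higher to receive the higher reward.

```python
import torch.nn.functional as F

def ranking_loss(reward_model, prompt, better_response, worse_response):
    # Scalar scores for the human-preferred and the rejected response.
    r_better = reward_model(prompt, better_response)
    r_worse = reward_model(prompt, worse_response)
    # -log(sigmoid(r_better - r_worse)) is minimized when the preferred
    # response scores higher than the rejected one.
    return -F.logsigmoid(r_better - r_worse).mean()
```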

BC occurs when human expert demonstrations are used to maximize the likelihood of the model’s output, i.e. the most human-like response is given preference. Ideally, however, the answer would depend on the knowledge of the model instead of that of the human instructor. When a model is, for example, less knowledgeable than its human AI trainer about a certain topic, it cannot truthfully respond the way a knowledgeable human would, but it would receive a penalty in training for such a “bad” response. Consequently, the model will try to mimic the human instructor, as that is what is perceived as performing better. In technical terms: without the needed knowledge available, the model will try to narrow down the possible values of the unobserved information using its priors, the probability distributions it has learned during training. This essentially boils down to taking a guess at the unobserved information using everything else it was trained on. The guessed response is then given with as much confidence as it would be if the information had actually been observed. Why? Because the human instructor, who did have the needed knowledge, would have acted in the same way. Training a model to be more cautious, however, creates a higher chance of an underconfident AI that declines questions it can answer. The opposite of the hallucinating AI therefore also happens in BC: a model that is more knowledgeable than the instructor will throw away relevant information and respond as “dumb” as the human instructor.
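As a contrast, here is an equally minimal sketch of behavior cloning as a training objective. The hypothetical `model` is an autoregressive language model returning, for each position, logits over the vocabulary; the loss simply rewards reproducing the human instructor’s answer token by token, regardless of what the model itself “knows”.

```python
import torch
import torch.nn.functional as F

def behavior_cloning_loss(model, prompt_ids, answer_ids):
    # Feed the prompt plus all answer tokens except the last one ...
    logits = model(torch.cat([prompt_ids, answer_ids[:-1]]))
    # ... and score only the positions that predict the answer tokens.
    answer_logits = logits[len(prompt_ids) - 1:]
    # Maximum likelihood on the human demonstration: the most
    # human-like response wins, calibrated or not.
    return F.cross_entropy(answer_logits, answer_ids)
```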

Reinforcement Learning from Human Feedback for ChatGPT
(source: https://openai.com/blog/chatgpt/ )


Why do we seem to trust a hallucinating chatbot?

We are used to computers being more precise than we are at many of the tasks we do, so it is easy to think that a chatbot like ChatGPT knows better. This is partly influenced by its exterior, which seems to function in a similar fashion to googling a topic, something we have learned to use and trust. However, when googling we find human-generated content that has for a large part been reviewed in some way, and that is also static: the content of a website will not change every time we visit it, even when we get there through a different search query. With ChatGPT, the output changes every time you ask a question. It generates a new text that is different from the one before, because at its core it is a “next-word predictor”: based on the input it determines the next word, appends that word to the input to predict the following word, and so on. This makes it difficult to fully trust an answer when your own knowledge of a topic is incomplete. Although ChatGPT might look like search engine 2.0, it currently is not.
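That loop is simple enough to sketch. Assuming a hypothetical `model` that maps a sequence of token ids to a probability distribution over the next token, generation looks roughly like this; the sampling step (rather than always picking the single most likely word) is one reason the same question can produce a different answer every time.

```python
import torch

def generate(model, token_ids, max_new_tokens=50, eos_id=0):
    for _ in range(max_new_tokens):
        # Probability distribution over the vocabulary for the next token.
        probs = model(token_ids)[-1]
        # Sample a token instead of taking the argmax, so repeated runs
        # on the same input can produce different continuations.
        next_id = torch.multinomial(probs, num_samples=1)
        token_ids = torch.cat([token_ids, next_id])
        if next_id.item() == eos_id:  # stop at the end-of-sequence token
            break
    return token_ids
```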

Should we start using ChatGPT?

It is THE time to try to understand LLMs, as they are maturing at a rapid pace. Growing with a new technology is easier than trying to understand it later, when everything is neatly hidden away behind polished user interfaces. The “play around and find out” approach can help us calibrate our trust to the right level. It seems inevitable that with ChatGPT we have turned a corner where AI is really going to change the way we work. It is a tool that can help kick-start a framework for a project, which you can then refine based on your own expert knowledge. It can generate alternatives for a piece of text you have written, or you can improve on something ChatGPT has written. The current value of LLMs lies in being a sidekick to your projects; however, 2023 might be the year they become more than that.