Introduction
Artificial intelligence (AI) has been around for decades and has made tremendous progress in recent years, thanks to advances in machine learning algorithms and data availability. However, there are still concerns about AI’s ability to lie, misrepresent information, and make incorrect decisions. In this blog post, I will explore how AI, including ChatGPT, learns, deals with missing information, and fills in the gaps. I’ll also discuss how I can ask better questions to drive decision-making, which leads to my conclusion about what I can learn from knowing that AI “lies”.

How does artificial intelligence, like ChatGPT, learn?
ChatGPT and other AI models learn through a process called training. During training, the model is fed a large amount of data and learns to identify patterns and make predictions based on that data. The training process involves adjusting the model’s weights and biases until it can accurately predict the correct output for a given input.
Supervised learning is one of the most popular approaches to training AI models. In supervised learning, the model is trained on labeled data, where each data point is associated with a specific output. The model learns to associate inputs with their corresponding outputs by analyzing the labeled data and adjusting its weights and biases accordingly.
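To make this concrete, here is a deliberately tiny sketch of supervised training, with the data and learning rate invented for illustration: a one-weight, one-bias "model" repeatedly adjusts itself to shrink the error between its predictions and the labeled outputs. Systems like ChatGPT have billions of weights and a far more elaborate pipeline, but the underlying loop is the same idea.

```python
import numpy as np

# A tiny labeled dataset: every input x comes with the "correct" output y.
# (Here y = 2*x + 1 plus a little noise, but the model does not know that.)
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2 * x + 1 + rng.normal(0, 0.05, size=100)

# The "model" is a single weight and a single bias.
w, b = 0.0, 0.0
learning_rate = 0.1

# Training: repeatedly nudge the weight and bias to reduce the prediction error.
for epoch in range(500):
    prediction = w * x + b
    error = prediction - y
    grad_w = 2 * np.mean(error * x)   # gradient of mean squared error w.r.t. w
    grad_b = 2 * np.mean(error)       # gradient of mean squared error w.r.t. b
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f}")  # close to the true values 2 and 1
```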
Another thing to keep in mind is that to store all this data, the AI compresses it into its model, and that compression can leave the model with wrong associations between pieces of data. An example is the Xerox scanner issue, where compression led to the machine printing wrong numbers after scanning documents. For more, check out this blog: https://www.digitaltrends.com/computing/computer-scientist-discovers-alarming-issue-with-xerox-scanners/
By compressing billions of data points into a neural network, the AI generates associative bits of information and links that information together. And just like the brain, it does not necessarily connect the dots in the same manner or with the same output each time. Asking the same question three times might therefore produce three different answers. Not necessarily wrong, but with a different flavor. This is no different from a professor asked to explain a complex data algorithm to three students: the answers depend on the student asking, the direction the professor’s mind takes, and how he processes the information. The answers might differ slightly even though they lead to the same correct conclusion. Unlike the professor, however, the AI does not understand the “truth”, so it might end up providing three different answers where two share the same conclusion while the third reaches a wrong or different one.
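One reason the same question can come back with different flavors is that language models typically do not pick a single "best" answer; they sample from a probability distribution over candidates. The sketch below illustrates only that sampling idea, with made-up candidate answers and scores; it is not ChatGPT’s actual decoding code.

```python
import numpy as np

# Hypothetical scores a model might assign to candidate answers for one question.
# The values are invented purely for illustration.
candidates = ["141 metres", "417 metres", "around 400 metres", "60 metres"]
scores = np.array([2.0, 1.6, 1.4, 0.3])

def sample_answer(scores, temperature, rng):
    # Softmax with temperature: a higher temperature flattens the distribution,
    # so less likely answers get picked more often.
    probs = np.exp(scores / temperature)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

rng = np.random.default_rng()
for _ in range(3):
    print(candidates[sample_answer(scores, temperature=1.0, rng=rng)])
# Three runs can easily print three different answers, and the model has no
# notion of which one is actually true.
```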
How does AI deal with missing information and compilation?
AI models often encounter situations where they lack the necessary information to make a prediction or decision. In such cases, the model will try to make the best possible prediction based on the information it has. This process is called inference.
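As a hedged illustration (using scikit-learn, not anything ChatGPT-specific, with invented numbers), here is what "predicting with what it has" can look like: a missing input value is silently filled with an average before the model answers, so the prediction comes out looking confident even though part of the input was a guess.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Invented training data: [base altitude, tower height] -> total height.
X_train = np.array([[100.0, 50.0], [200.0, 60.0], [300.0, 70.0], [250.0, 65.0]])
y_train = X_train.sum(axis=1)

# The pipeline silently fills any missing value with the column mean
# before the regression model makes its prediction.
model = make_pipeline(SimpleImputer(strategy="mean"), LinearRegression())
model.fit(X_train, y_train)

# At prediction time one feature is unknown (np.nan). The model still answers,
# but the answer is only as good as the guess used to fill the gap.
incomplete_input = np.array([[280.0, np.nan]])
print(model.predict(incomplete_input))
```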
In some cases, AI models may also encounter conflicting or contradictory information. This can happen when the data is noisy or when different sources of information provide different answers. In such cases, the model will use a process called compilation, which involves combining various sources of information to arrive at the best possible answer. This is where the “lying” begins. What the AI does in this case is take two or more semi-correct pieces of information and combine them, through concatenation or inference, to create its answer. An example is the famous ski jump hill in Oslo, Holmenkollen. If you ask ChatGPT for its height, ChatGPT will find two pieces of information:
- That Holmenkollen is a large hill located in Oslo, with an altitude of approximately 357 meters.
- That the ski jump tower is 60 meters above the ground.
ChatGPT then concludes convincingly that the ski jump’s height is around 417 meters from its base to its top. The problem here is its interpretation of the available data. As a person, I can find information by going to the best possible source with more accurate data. I will still not find the exact figure, but by looking at topographic data, I know that the bottom of the ski jump is 292m above sea level and the top of the hill is 369m, plus the tower of 64m. So with a little bit of math, we arrive at approximately 141 meters as the height of the ski jump hill from bottom to top, far from ChatGPT’s conclusion of 417 meters. This will then be perceived as a lie; a lie in effect, but not an intentional one.
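The arithmetic on both sides is simple enough to write out; the difference lies entirely in which figures get combined. The snippet below just restates the numbers quoted above.

```python
# The figures ChatGPT combined: the altitude of the Holmenkollen hill plus the
# height of the tower, two numbers measured against different reference points.
chatgpt_answer = 357 + 60                              # 417 metres

# The figures from topographic data, all measured above sea level.
top_of_hill = 369
tower = 64
bottom_of_jump = 292
actual_height = top_of_hill + tower - bottom_of_jump   # 141 metres

print(chatgpt_answer, actual_height)
```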
How does AI fill in the gaps?
Looking at the answer ChatGPT arrived at above, we can presume that AI models fill in information gaps by making predictions based on the patterns they have learned during training. This is known as extrapolation. However, as we have seen, extrapolation can be risky if the model has not been trained on enough data or if the data it has been trained on does not represent the real-world scenarios it may encounter.
It is vital to ensure that AI models are trained on diverse and representative data to reduce the risk of incorrect extrapolation. This can help the model learn to generalize to new situations and make accurate predictions even when it encounters missing or incomplete information.
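A classic way to see the risk is to fit a simple model on data from a narrow range and then ask it about a point far outside that range. The sketch below uses invented data and a straight-line fit purely to illustrate the point.

```python
import numpy as np

# The model only ever sees inputs between 0 and 1, where the true relationship
# (y = x squared) happens to look almost like a straight line.
x_train = np.linspace(0, 1, 50)
y_train = x_train ** 2

# Fit a straight line (degree-1 polynomial) to that narrow slice of reality.
slope, intercept = np.polyfit(x_train, y_train, 1)

# Inside the training range the line is a tolerable approximation:
print(slope * 0.5 + intercept)   # about 0.34, versus the true value 0.25

# Far outside the training range it is badly wrong:
print(slope * 10 + intercept)    # about 9.8, versus the true value 100
```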

How can I ask better questions to drive decision-making?
One of the critical challenges of using AI for decision-making is ensuring that the model is making decisions that align with my goals and values. To achieve this, it is essential to ask better questions that take into account the ethical and social implications of my choices.
An approach to asking better questions is to involve stakeholders in the decision-making process. This can include people from diverse backgrounds and perspectives, as well as subject matter experts who can provide insights into the specific domain of the decision.
We can also design AI models that are transparent and explainable. This can help us understand how the model makes decisions and identify potential biases or errors in the decision-making process.
Most importantly, though, humans need to learn to ask more specific questions at the same time as the training models are being enhanced. Let’s take the example from before and apply it to a mountain near my home called Mistberget. In my initial question, I asked how tall Mistberget is. The answer was correctly given as 848m above sea level. Since I wanted to know how tall it is compared to its surroundings, I clarified by asking how tall it is from its base. This ties into its training data on topography: ChatGPT knows that the areas surrounding the mountain are at about 200m above sea level, and it correctly calculates that the mountain is about 648 meters tall from base to top. I got my correct answer by changing my question and asking about something I knew ChatGPT would have a basic understanding of. In addition, it noted that this is an estimate and that the result depends on where the base is actually located and its height above sea level.
I could provide a specific geolocation and get an even more precise answer.
This technique is known as prompt engineering: the art, or science, of designing prompts so that the AI can give better responses.
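As a sketch of the idea, the improvement can be as simple as putting the reference point you care about directly into the prompt. The send_to_model function below is a placeholder, not a real library call; wire it to whichever chat service you actually use.

```python
# Two versions of the same question about Mistberget. The second spells out the
# reference point, so the model does not have to guess what "tall" should mean.
vague_prompt = "How tall is Mistberget?"

specific_prompt = (
    "Mistberget's summit is 848 m above sea level and the surrounding terrain "
    "lies at roughly 200 m above sea level. Approximately how tall is the "
    "mountain from its base to its summit, and what assumptions are you "
    "making about where the base is?"
)

# send_to_model is a placeholder, not a real client: connect it to whichever
# chat API you actually use before running this.
def send_to_model(prompt: str) -> str:
    raise NotImplementedError("wire this up to your chat service of choice")

# print(send_to_model(specific_prompt))
```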

What can I learn from the limitations of AI and why it “lies”?
The limitations of AI, including its tendency to produce inaccurate or misleading outputs when it lacks information, can teach us valuable lessons about the importance of transparency, accountability, and human oversight in decision-making processes.
First, we can learn that AI is not a panacea and cannot replace human judgment and intuition. While AI can provide valuable insights and recommendations, it is essential to remember that it is only a tool, and decisions should ultimately be made by humans who can consider the ethical, social, and political implications of those decisions.
Second, the limitations of AI highlight the importance of transparency and accountability in developing and deploying AI models. AI models should be transparent and explainable so that humans can understand how they make decisions and identify potential biases or errors in the decision-making process.
Third, the limitations of AI underscore the need for ongoing human oversight and evaluation of AI models. Humans must continuously monitor and evaluate the outputs generated by AI to ensure that they align with our goals and values and to identify and address any inaccuracies or biases that may arise.
Conclusion
In conclusion, the limitations of AI can teach us valuable lessons about the importance of transparency, accountability, and human oversight in decision-making processes and can guide us in developing more responsible and ethical approaches to the development and deployment of AI models.
AI, including ChatGPT, can transform many areas of our lives, from healthcare to finance and education. However, to fully realize the potential of AI, we must ensure that it is making decisions that align with our goals and values. This requires careful attention to how AI learns, how it deals with missing information and compilation, how it fills in gaps, and how we can ask better questions to drive decision-making, a practice known as "prompt engineering".
As a participant, I also need to remember that the way I ask for information will be reflected in the answers provided by services like ChatGPT, Bing Chat, and Google Bard. We need to be aware that we influence the response, and that the answers we get are generated by nodes in a neural network, much like brain cells. We should treat them the same way we treat answers from other people, known or unknown: evaluate them carefully before using them, and do not trust them without verification.
References or further reading:
OpenAI. (2021). OpenAI’s GPT Models.
https://openai.com/gpt/
GP Strategies. (2023). What ChatGPT and Generative AI Could Mean for Learning.
https://www.gpstrategies.com/blog/what-chatgpt-and-generative-ai-could-mean-for-learning/
The New York Times. (2023). Why Do A.I. Chatbots Tell Lies and Act Weird?
https://www.nytimes.com/2023/02/26/technology/ai-chatbot-information-truth.html
