ChatGPT Glossary: 42 AI Terms That Everyone Should Know

ChatGPT was likely your first introduction to AI. The AI chatbot from OpenAI has an uncanny ability to answer questions and help you write poems, resumes, and fusion recipes. Its power has been compared to autocomplete on steroids.

But AI chatbots are only one part of the AI landscape. Sure, having ChatGPT help with your homework or having Midjourney create fascinating images of mechs based on their country of origin is cool, but AI's potential could completely reshape economies. That potential could be worth $4.4 trillion a year to the global economy, according to the McKinsey Global Institute, which is why you should expect to hear more and more about artificial intelligence.

As people become more accustomed to a world intertwined with AI, new terms are popping up everywhere. So whether you’re trying to sound smart over drinks or impress in a job interview, here are some important AI terms you should know.

Artificial general intelligence, or AGI

A concept that describes a more advanced version of AI than we know today: one that can perform tasks much better than humans while also teaching itself and advancing its own capabilities.

AI ethics

Principles aimed at preventing AI from harming humans, achieved through means like determining how AI systems should collect data or deal with bias.

AI safety

An interdisciplinary field that’s concerned with the long-term impacts of AI and how it could progress suddenly to a superintelligence that could be hostile to humans.

Algorithm

A series of instructions that allows a computer program to analyze data in a particular way, such as recognizing patterns, and then learn from that data to accomplish tasks on its own.

Alignment

Tweaking an AI to better produce the desired outcome. This can refer to anything from moderating content to keeping interactions with humans positive.

Anthropomorphism

The human tendency to give nonhuman objects humanlike characteristics. In AI, this can include believing a chatbot is more humanlike and aware than it actually is, such as believing it's happy, sad, or even sentient.

Artificial intelligence, or AI

The use of technology to simulate human intelligence, either in computer programs or robotics; also the field of computer science that aims to build systems capable of performing human tasks.

Bias

With regard to large language models, errors resulting from the training data. This can result in falsely attributing certain characteristics to certain races or groups based on stereotypes.

Chatbot

A program that communicates with humans through text, simulating human conversation.

ChatGPT

An AI chatbot developed by OpenAI that uses large language model technology.

Cognitive computing

Another term for artificial intelligence.

Data augmentation

Remixing existing data or adding a more diverse set of data to train an AI.
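
For the technically curious, here is a minimal sketch of the idea in Python with NumPy (purely illustrative; the tiny "image" and the specific transformations are assumptions, not any particular library's pipeline). Flips, rotations, and a little noise turn one training example into several:

```python
import numpy as np

# A toy 3x3 grayscale "image" standing in for one training example.
image = np.array([[0, 1, 2],
                  [3, 4, 5],
                  [6, 7, 8]])

# Simple augmentations: flips, rotations, and noise produce new,
# label-preserving variants of the same example, enlarging the training set.
augmented = [
    np.fliplr(image),                                # mirror left-to-right
    np.flipud(image),                                # mirror top-to-bottom
    np.rot90(image),                                 # rotate 90 degrees
    image + np.random.normal(0, 0.1, image.shape),   # add slight noise
]

for variant in augmented:
    print(variant, "\n")
```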

Deep learning

A method of AI, and a subfield of machine learning, that uses layered artificial neural networks with many parameters to recognize complex patterns in pictures, sound, and text. The process is inspired by the structure of the human brain.

Diffusion

A method of machine learning that takes an existing piece of data, like a photo, and adds random noise. Diffusion models train their networks to re-engineer or recover that photo.
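
Here is a simplified sketch of the forward "noising" half of that process in Python with NumPy (an assumption-laden illustration; real diffusion models use a more careful noise schedule and a trained network to reverse it):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend this short 1-D signal is a "photo" flattened into numbers.
clean = np.sin(np.linspace(0, 2 * np.pi, 8))

# Forward diffusion: mix the clean data with more and more random noise.
# A diffusion model is trained to run this process in reverse,
# recovering the original data one small denoising step at a time.
for step, noise_level in enumerate([0.1, 0.3, 0.6, 0.9], start=1):
    noise = rng.normal(0.0, 1.0, clean.shape)
    noisy = (1 - noise_level) * clean + noise_level * noise
    print(f"step {step}: noise level {noise_level} ->", np.round(noisy, 2))
```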

Emergent behavior

When an AI model exhibits unintended abilities.

End-to-end learning, or E2E

A deep learning process in which a model is instructed to perform a task from start to finish. It’s not trained to accomplish a task sequentially but instead learns from the inputs and solves it all at once.

Ethical considerations

An awareness of the ethical implications of AI and issues related to privacy, data usage, fairness, misuse, and other safety issues.

Foom

Also known as fast takeoff or hard takeoff. The concept that if someone builds an AGI, it might already be too late to save humanity.

Generative adversarial networks, or GANs

A generative AI model composed of two neural networks that generate new data: a generator, which creates new content, and a discriminator, which checks whether that content looks authentic.
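
As a rough sketch of that adversarial loop, here is a toy example in Python using PyTorch (the data, network sizes, and training settings are all made-up assumptions for illustration). The generator learns to turn random noise into numbers resembling a simple distribution, while the discriminator learns to catch the fakes:

```python
import torch
import torch.nn as nn

# "Real" data: samples from a Gaussian centered at 4.
generator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=0.01)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=0.01)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(32, 1) + 4.0    # genuine samples
    fake = generator(torch.randn(32, 1))

    # Train the discriminator: label real samples 1 and generated samples 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator: try to make the discriminator call its fakes real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()

print("mean of generated samples:", generator(torch.randn(1000, 1)).mean().item())
```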

Generative AI

A content-generating technology that uses AI to create text, video, computer code, or images. The AI is fed large amounts of training data and finds patterns in it to generate its own novel responses, which can sometimes be similar to the source material.

Google Bard

An AI chatbot by Google that functions similarly to ChatGPT but pulls information from the current web, whereas ChatGPT is limited to data until 2021 and isn’t connected to the internet.

Guardrails

Policies and restrictions placed on AI models to ensure data is handled responsibly and that the model doesn’t create disturbing content.

Hallucination

An incorrect response from an AI, including generative AI producing answers that are wrong but stated with confidence as if correct. The reasons for this aren't entirely known. For example, when asked, "When did Leonardo da Vinci paint the Mona Lisa?" an AI chatbot may respond with the incorrect statement "Leonardo da Vinci painted the Mona Lisa in 1815," which is roughly 300 years after it was actually painted.

Large language model, or LLM

An AI model trained on massive amounts of text data so it can understand language and generate novel content in human-like language.

Machine learning, or ML

A component of AI that allows computers to learn and make better predictions without being explicitly programmed. It can be coupled with training sets to generate new content.
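
A minimal sketch of what "learning without explicit programming" looks like in practice, using Python and scikit-learn (the tiny dataset and the choice of a decision tree are illustrative assumptions): instead of writing rules by hand, we show the model labeled examples and let it infer a rule for new data.

```python
from sklearn.tree import DecisionTreeClassifier

# Labeled examples: [height_cm, weight_kg] pairs with a made-up label.
X = [[150, 50], [160, 60], [180, 85], [190, 95]]
y = ["small", "small", "large", "large"]

# The model learns a rule from the examples rather than being given one.
model = DecisionTreeClassifier().fit(X, y)

# It can then predict a label for data it has never seen.
print(model.predict([[175, 80]]))
```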

Microsoft Bing

A search engine by Microsoft that can now use the technology powering ChatGPT to give AI-powered search results. It’s similar to Google Bard in being connected to the internet.

Multimodal AI

A type of AI that can process multiple types of inputs, including text, images, videos, and speech.

Natural language processing

A branch of AI that uses machine learning and deep learning to give computers the ability to understand human language, often using learning algorithms, statistical models, and linguistic rules.

Neural network

A computational model that resembles the human brain’s structure and is meant to recognize patterns in data. Consists of interconnected nodes, or neurons, that can recognize patterns and learn over time.
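
To make "interconnected nodes" concrete, here is a tiny forward pass in Python with NumPy (the layer sizes, weights, and input are arbitrary assumptions; a real network would also include a training step that adjusts the weights):

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

# A tiny network: 3 inputs -> 4 hidden "neurons" -> 1 output.
# Each connection between layers is a weight; learning means adjusting
# these numbers so the output gets closer to the desired answer.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
w2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

x = np.array([0.5, -1.0, 2.0])     # one input example
hidden = relu(x @ w1 + b1)         # first layer of interconnected nodes
output = hidden @ w2 + b2          # output layer combines the hidden values
print(output)
```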

Overfitting

An error in machine learning in which a model fits its training data too closely, so it may only be able to identify specific examples from that data but not new data.
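
A small illustration in Python with NumPy (the noisy line and polynomial degrees are assumptions chosen for the demo): a high-degree curve can hug the noisy training points while typically generalizing worse to fresh data than a simple straight line.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy training samples drawn from a simple underlying line y = 2x.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, 10)

# Fresh "unseen" data from the same underlying line.
x_test = np.linspace(0, 1, 100)
y_test = 2 * x_test

for degree in (1, 7):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # The high-degree fit chases the noise in the training set,
    # which is the signature of overfitting.
    print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")
```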

Paperclips

The Paperclip Maximiser thought experiment, coined by University of Oxford philosopher Nick Bostrom, is a hypothetical scenario in which an AI system is told to create as many literal paperclips as possible. In pursuing that goal, the system would hypothetically consume or convert all available materials, including dismantling machinery that could be beneficial to humans, to keep making paperclips. The unintended consequence is that the AI might destroy humanity in its quest to make paperclips.

Parameters

Numerical values that give LLMs structure and behavior, enabling them to make predictions.

Prompt chaining

An ability of AI to use information from previous interactions to color future responses.
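
A sketch of the idea in Python (everything here is hypothetical: `ask_model` is a stand-in for whatever chat API you use, not a real library call). Each new prompt is sent along with the earlier turns, so the model can draw on previous interactions:

```python
# Running conversation history that gets sent with every new prompt.
history = []

def ask_model(messages):
    # Hypothetical placeholder: imagine this calls a real chat model
    # with the full list of prior messages as context.
    return f"(model reply given {len(messages)} prior messages)"

def chat(user_message):
    history.append({"role": "user", "content": user_message})
    reply = ask_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("My name is Ada."))
print(chat("What is my name?"))   # earlier turns are included, so the model can answer
```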

Stochastic parrot

An analogy of LLMs that illustrates that the software doesn’t have a larger understanding of the meaning behind language or the world around it, regardless of how convincing the output sounds. The phrase refers to how a parrot can mimic human words without understanding the meaning behind them.

Style transfer

The ability to adapt the style of one image to the content of another, allowing an AI to interpret the visual attributes of one image and apply them to another. For example, re-creating a Rembrandt self-portrait in the style of Picasso.

Temperature

A parameter set to control how random a language model's output is. A higher temperature means the model takes more risks.
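
Here is what that control looks like under the hood, sketched in Python with NumPy (the three candidate-word scores are made up for the demo): the model's raw scores are divided by the temperature before being turned into probabilities, so a low temperature sharpens the choice and a high temperature flattens it.

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    # Divide the raw scores by the temperature before applying softmax.
    # Low temperature -> sharper, more predictable choices.
    # High temperature -> flatter, more random ("riskier") choices.
    scaled = np.array(logits) / temperature
    exp = np.exp(scaled - scaled.max())
    return exp / exp.sum()

logits = [2.0, 1.0, 0.1]   # raw scores for three candidate next words
for t in (0.2, 1.0, 2.0):
    print(f"temperature {t}:", np.round(softmax_with_temperature(logits, t), 3))
```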

Text-to-image generation

Creating images based on textual descriptions.

Training data

The datasets used to help AI models learn, which can include text, images, and code.

Transformer model

A neural network architecture and deep learning model that learns context by tracking relationships in data, like in sentences or parts of images. So, instead of analyzing a sentence one word at a time, it can look at the whole sentence and understand the context.
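
The mechanism that lets a transformer look at the whole sentence at once is attention. Below is a bare-bones version of scaled dot-product attention in Python with NumPy (the random "word" vectors are assumptions; real transformers stack many such layers with learned weights): every word scores its relationship to every other word and blends their information accordingly.

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: each position scores its relationship
    # to every other position, so the whole sequence is considered at once.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(4, 8))   # 4 "words", each an 8-number vector
output, weights = attention(embeddings, embeddings, embeddings)
print(np.round(weights, 2))            # how strongly each word attends to the others
```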

Turing test

Named after famed mathematician and computer scientist Alan Turing, it tests a machine’s ability to behave like a human. The machine passes if a human can’t distinguish the machine’s response from another human.

Weak AI, aka narrow AI

AI that’s focused on a particular task and can’t learn beyond its skill set. Most of today’s AI is weak AI.

Zero-shot learning

A test in which a model must complete a task without being given the requisite training data. An example would be recognizing a lion while only being trained on tigers.
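
One common way zero-shot systems pull this off is by matching inputs against descriptions of classes rather than examples of them. Here is a deliberately simplified sketch in Python with NumPy (the attribute vectors and class names are invented for illustration): the "lion" class has no training examples, but it can still be recognized from its description.

```python
import numpy as np

# Classes described by attribute vectors: [has_mane, has_stripes, is_big_cat].
# "lion" is never seen during training, only described.
class_descriptions = {
    "tiger": np.array([0, 1, 1]),
    "zebra": np.array([0, 1, 0]),
    "lion":  np.array([1, 0, 1]),
}

def classify(attributes):
    # Pick the class whose description is closest to the observed attributes.
    return min(class_descriptions,
               key=lambda c: np.linalg.norm(class_descriptions[c] - attributes))

observed = np.array([1, 0, 1])   # a maned, unstriped big cat
print(classify(observed))        # -> "lion", despite no lion training examples
```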

FAQs

1. What is the potential of AI in reshaping economies?
The potential of AI to reshape economies is significant, with projections estimating its worth to be $4.4 trillion annually to the global economy.

2. What is the difference between artificial general intelligence (AGI) and artificial intelligence (AI)?
Artificial general intelligence (AGI) refers to a more advanced version of AI that can outperform humans in various tasks and continually advance its own capabilities. Artificial intelligence (AI) is a broader term that encompasses the use of technology to simulate human intelligence.

3. How can AI models exhibit unintended abilities?
AI models can exhibit unintended abilities through a phenomenon known as emergent behavior. This occurs when the model demonstrates capabilities or behaviors that were not explicitly programmed or expected.

4. What is the significance of ethical considerations in AI?
Ethical considerations in AI are vital in addressing issues related to privacy, data usage, fairness, misuse, and other safety concerns. They ensure that AI is developed and deployed responsibly, with the well-being of humans in mind.

5. How can training data be augmented to improve AI performance?
Training data can be augmented by remixing existing data or adding a more diverse set of data. This helps AI models learn from a wider range of examples and improve their performance.

6. What is the Turing test?
The Turing test is a benchmark test named after Alan Turing, a renowned mathematician and computer scientist. It assesses a machine’s ability to exhibit behavior indistinguishable from that of a human. If a human evaluator cannot differentiate the machine’s responses from those of a human, the machine is considered to have passed the test.

Conclusion

As AI continues to advance and become more integrated into our lives, understanding key AI terms is becoming increasingly important. Whether it’s artificial general intelligence, AI ethics, or generative AI, being familiar with these concepts will allow us to navigate the AI landscape more confidently and engage in informed discussions about its potential and implications. Stay tuned for updates to this glossary as AI evolves and new terms emerge.
