AI Glossary: Essential Terms and Concepts for Getting Started
Overview of Key AI Terms for GME
This glossary provides an overview of key terms and concepts in artificial intelligence (AI), covering foundational topics to help make sense of the jargon as well as practical considerations for applying AI to your work or program. While not exhaustive, these definitions and explanations offer a solid grounding in AI basics, how it works, and the critical considerations involved in applying and interpreting it. (And yes, generative AI helped develop this.)
The Basics: AI 101
- Artificial Intelligence (AI): The simulation of human intelligence processes by machines, especially computer systems, to perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.
- Generative AI: A branch of artificial intelligence concerned with creating systems that can generate new content (text, images, music, or video) closely resembling content created by humans. Generative AI techniques often rely on deep learning models, such as generative adversarial networks (GANs) or autoregressive models, to produce realistic and novel outputs.
- Large Language Model: A type of artificial intelligence model that is trained on vast amounts of text data and designed to understand and generate human-like language. These models, such as GPT (Generative Pre-trained Transformer) models, employ deep learning techniques and large-scale neural networks to generate coherent and contextually relevant text based on given prompts or input.
- Fine-tuning: The process of further training a pre-trained language model on a specific dataset or task to adapt it to perform well on that particular task. Fine-tuning allows for customization of the model's parameters to improve performance on domain-specific or task-specific objectives.
- Chatbot: A computer program or AI application designed to simulate conversation with human users, typically over the internet. Chatbots can range from simple rule-based systems to more sophisticated AI-powered models capable of understanding natural language, interpreting user intent, and providing relevant responses or assistance in various domains.
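To make the distinction between rule-based and AI-powered chatbots concrete, here is a minimal sketch of a purely rule-based chatbot in Python. The keywords and canned replies are invented for illustration; an AI-powered chatbot would instead pass the user's message to a language model rather than matching keywords.

```python
# A minimal rule-based chatbot: it matches keywords against canned replies.
# The keywords and replies below are invented for illustration only.

RULES = {
    "hours": "The simulation lab is open 8am-5pm, Monday through Friday.",
    "schedule": "Rotation schedules are posted on the first of each month.",
    "hello": "Hello! Ask me about 'hours' or 'schedule'.",
}

def reply(message: str) -> str:
    """Return the first canned reply whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I don't understand that."

if __name__ == "__main__":
    print(reply("What are your hours?"))   # matches the 'hours' rule
    print(reply("Tell me a joke"))         # no rule matches -> fallback reply
```

Even this toy version shows why rule-based bots break down: anything outside the keyword list falls through to the fallback reply, which is the gap AI-powered chatbots are meant to close.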
So How Does It Work?
- Neural Network: A computational model inspired by the structure and function of the human brain. It consists of interconnected nodes (neurons) arranged in layers, with each layer transforming input data into progressively more abstract representations. A small illustrative sketch follows this list.
- Deep Learning: A subfield of machine learning that utilizes neural networks with many layers (deep neural networks) to model and extract patterns from complex data. Deep learning has shown remarkable success in tasks such as image and speech recognition.
- Natural Language Processing: A branch of AI that focuses on the interaction between computers and humans through natural language. It enables computers to understand, interpret, and generate human language in a way that is meaningful and contextually relevant.
- Computer Vision: A field of AI that enables computers to interpret and understand the visual world. It involves tasks such as image recognition, object detection, and image segmentation, often using deep learning techniques.
- AI Engine Room: A metaphorical term referring to the infrastructure, resources, and processes that power artificial intelligence systems and applications within an organization or ecosystem. The AI engine room encompasses various components, including hardware (e.g., servers, GPUs), software frameworks (e.g., TensorFlow, PyTorch), data pipelines, algorithms, and human expertise, that work together to develop, deploy, and maintain AI solutions.
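To make the idea of layers transforming input data more concrete, the sketch below pushes one small input through a tiny, untrained neural network using NumPy. The sizes and weights are arbitrary, so the output is meaningless; the point is only to show each layer turning its input into a new representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny, untrained neural network: 4 inputs -> 3 hidden units -> 1 output.
# The weights are random here; training would adjust them to fit data.
W1 = rng.normal(size=(4, 3))   # weights from input layer to hidden layer
b1 = np.zeros(3)               # hidden-layer biases
W2 = rng.normal(size=(3, 1))   # weights from hidden layer to output layer
b2 = np.zeros(1)               # output bias

def forward(x):
    """Transform the input through each layer in turn."""
    hidden = np.maximum(0, x @ W1 + b1)   # ReLU: a simple nonlinearity
    output = hidden @ W2 + b2             # final layer produces one number
    return hidden, output

x = np.array([0.5, -1.2, 3.0, 0.7])       # one example with 4 input features
hidden, output = forward(x)
print("hidden representation:", hidden)
print("output:", output)
```

Deep learning simply stacks many more of these layers, so that each one builds a more abstract representation than the one before it.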
Prompt Engineering: How To Talk to AI
- Prompt Engineering: The process of crafting precise and effective prompts or instructions to guide the behavior of AI models, particularly in language generation tasks such as text completion or dialogue generation.
- Persona: A fictional representation of a user or customer segment that helps in understanding their needs, behaviors, and preferences. Personas are often used in AI and product design to tailor solutions to specific user groups. In prompt engineering, a persona can also be a role assigned to the model itself (for example, "You are an experienced program coordinator") to shape the tone and focus of its responses.
- Few-shot Learning: A prompting technique in which the model is given a small number of examples (shots) of a task directly in the prompt, rather than through additional training. Few-shot prompting enables the model to generalize from those examples and adapt quickly to new tasks or domains with little or no extra training data (see the example prompt after this list).
- Chain of Thought: A prompting technique that asks the model to work through a problem step by step, laying out its intermediate reasoning before giving a final answer. Chain-of-thought prompts help steer the language model toward coherent, logically consistent responses by making the flow of ideas explicit, and they are especially useful for multi-step questions such as calculations, scheduling problems, or structured analysis.
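The sketch below shows, as plain Python strings, what a persona, a few-shot prompt, and a chain-of-thought prompt might look like in practice. The scenario and wording are invented for illustration; the strings would normally be sent to a language model through whatever chatbot or API you use.

```python
# Illustrative prompts only; the scenario and wording are invented.

persona = "You are an experienced residency program coordinator who writes in plain language."

# Few-shot prompt: show the model a couple of worked examples, then the new case.
few_shot_prompt = persona + """

Rewrite each feedback note so it is specific and actionable.

Note: "Resident was fine."
Rewrite: "The resident presented the patient clearly but should state the assessment and plan before being asked."

Note: "Needs to read more."
Rewrite: "The resident should review the sepsis guidelines and discuss one article at next week's teaching session."

Note: "Good job overall."
Rewrite:"""

# Chain-of-thought prompt: ask the model to reason step by step before answering.
chain_of_thought_prompt = """A program has 24 residents and needs night coverage 7 days a week,
with each resident working at most one night per week.
Think through this step by step, showing your reasoning, and then state
whether the schedule is feasible and why."""

print(few_shot_prompt)
print(chain_of_thought_prompt)
```

Note how the few-shot prompt teaches by example, while the chain-of-thought prompt asks the model to show its reasoning before committing to an answer.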
What Could Possibly Go Wrong?
- Bias and Fairness: Concerns related to the presence of biases in large language models, which can manifest in various forms, including gender, racial, cultural, or ideological biases. Ensuring fairness and mitigating biases in language models is crucial to prevent the amplification of harmful stereotypes or misinformation.
- Bias-Variance Tradeoff: A fundamental concept in machine learning that involves balancing the error due to bias (underfitting) and variance (overfitting) in a model. Finding the right balance is crucial for building models that generalize well to unseen data (a small worked example follows this list).
- Safety and Robustness: Concerns related to the reliability, safety, and robustness of large language models, particularly in real-world applications where errors or vulnerabilities could have significant consequences.
- Ethical Considerations: The ethical implications of deploying and using large language models, including issues such as privacy, consent, accountability, transparency, and societal impact. Faculty and students should be aware of ethical guidelines and frameworks for the responsible development and deployment of AI technologies.
- Hallucinations: In the context of AI, hallucinations refer to instances where a model generates outputs that are inconsistent with the input data or with reality. Hallucinations can occur due to model biases, overfitting, or limitations in training data. Detecting and mitigating hallucinations is critical for ensuring the reliability and trustworthiness of AI systems.
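The bias-variance tradeoff is easiest to see in a toy experiment. The sketch below uses made-up noisy data to fit both a very simple model (high bias, prone to underfitting) and a very flexible one (high variance, prone to overfitting), then compares their errors on data held out from training.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(1)

# Made-up data: a smooth curve plus noise, split into training and test halves.
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)
x_train, y_train = x[::2], y[::2]     # even-indexed points for training
x_test, y_test = x[1::2], y[1::2]     # odd-indexed points for testing

def errors(degree):
    """Fit a polynomial of the given degree; return (training error, test error)."""
    model = Polynomial.fit(x_train, y_train, degree)
    train_err = np.mean((model(x_train) - y_train) ** 2)
    test_err = np.mean((model(x_test) - y_test) ** 2)
    return train_err, test_err

for degree in (1, 3, 15):
    train_err, test_err = errors(degree)
    print(f"degree {degree:2d}: train error {train_err:.3f}, test error {test_err:.3f}")

# Degree 1 underfits (high bias): it misses the curve on both sets.
# Degree 15 overfits (high variance): it hugs the training noise and
# typically does worse on the held-out test data.
# A moderate degree balances the two, which is the essence of the tradeoff.
```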
Data Science, Machine Learning, or Artificial Intelligence?
- Algorithm: A step-by-step procedure or set of rules for solving a problem or accomplishing a task. In the context of AI and machine learning, algorithms are used to train models, make predictions, and perform various other tasks.
- Machine Learning (ML): A subset of AI that enables systems to automatically learn and improve from experience without being explicitly programmed. It focuses on the development of algorithms that can learn from and make predictions or decisions based on data.
- Data Generation vs. Data Analysis: A distinction between the goals of generative AI and data science. While data science focuses on analyzing and extracting insights from existing datasets to make predictions or solve problems, generative AI aims to generate new data samples that resemble those in the training dataset. Generative AI emphasizes producing novel content, whereas data science emphasizes understanding and inference.
- Unsupervised Learning vs. Supervised Learning: A comparison between the learning paradigms used in generative AI and machine learning. Generative AI often falls under the category of unsupervised learning, where the model learns to represent and generate data without explicit supervision. In contrast, machine learning encompasses both supervised learning—where models are trained on labeled data to make predictions—and unsupervised learning—where models learn patterns and structures from unlabeled data.
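As a minimal sketch of the two paradigms, assuming scikit-learn is installed, the example below trains a supervised classifier on labeled flower measurements and then clusters the same measurements without ever showing the algorithm the labels.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# The classic iris dataset: flower measurements (X) and species labels (y).
X, y = load_iris(return_X_y=True)

# Supervised learning: train on labeled data, then predict labels for new measurements.
classifier = LogisticRegression(max_iter=1000).fit(X, y)
print("predicted species for the first flower:", classifier.predict(X[:1]))

# Unsupervised learning: group the same measurements without ever seeing the labels.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster assigned to the first flower:", clusters[:1])
```

The classifier needs the labels to learn; the clustering algorithm only discovers structure, and any correspondence between its clusters and the true species has to be checked afterward.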