Content Marketing: Curated articles on content curation and the applications it supports! We use our curation tools to power curated newsletters and conversational chatbots like the one you see on this page.
Context Engineering vs. Prompt Engineering: Smarter AI with RAG & Agents
Prompt engineering is crafting the instruction text for an LLM (instructions, examples, formatting) to steer its output. Context engineering is the system-level work that programmatically assembles everything the model sees at inference—prompts, re... -
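The distinction above can be made concrete with a small sketch: a prompt is a string you write once, while context engineering is code that assembles the model's input from several sources at inference time. All function and variable names below are illustrative, not from any specific framework.

```python
# A minimal sketch of context engineering: the application builds the
# model's input programmatically from system rules, retrieved documents,
# and conversation history, instead of relying on one hand-written prompt.

def build_context(system_rules, retrieved_docs, history, user_question,
                  max_chars=4000):
    """Assemble everything the model sees, with the user question last."""
    doc_block = "\n\n".join(f"[doc] {d}" for d in retrieved_docs)
    history_block = "\n".join(f"{role}: {text}" for role, text in history)
    context = (
        f"{system_rules}\n\n"
        f"Reference material:\n{doc_block}\n\n"
        f"Conversation so far:\n{history_block}\n\n"
        f"User: {user_question}"
    )
    # Crude budget enforcement: keep only the most recent characters.
    return context[-max_chars:]

prompt = build_context(
    system_rules="Answer using only the reference material.",
    retrieved_docs=["RAG retrieves documents at query time."],
    history=[("User", "What is RAG?"),
             ("Assistant", "Retrieval-Augmented Generation.")],
    user_question="How does it reduce hallucinations?",
)
```

A real system would count tokens rather than characters and would rank retrieved documents, but the shape is the same: the "prompt" becomes the output of a pipeline.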
7 AI Terms You Need to Know: Agents, RAG, ASI & More
This video explains seven essential AI terms—Agentic AI, Large Reasoning Models, Vector Databases, RAG (Retrieval-Augmented Generation), MCP (Model Context Protocol), Mixture of Experts (MoE), and ASI (Artificial Superintelligence). It describes ho... -
Can AI Think? Debunking AI Limitations
In this video, IBM Technology tackles the big question—can AI truly think? The discussion debunks common misconceptions about artificial intelligence and explores its reasoning capabilities and limitations. 0:01 – Host (IBM Technology): Welcome to ... -
How Large Language Models Work
This video explains what large language models (LLMs) are, how they work, and their practical business applications. It covers the foundation of LLMs in pre-training with vast amounts of data, transformer architecture, and iterative training method... -
What is a Context Window? Unlocking LLM Secrets
(00:00) In the context of large language models, what is a context window? Well, it's the equivalent of the model's working memory. It determines how long a conversation the LLM can carry out without forgetting details from earlier in the exchange. And a... -
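The "working memory" analogy can be sketched in a few lines: once a conversation exceeds the window, the oldest turns no longer fit, and the model effectively forgets them. Token counts are approximated by word counts here purely for illustration.

```python
# Sketch of a context window as working memory: keep only the most recent
# conversation turns that fit within a fixed token budget.

def fit_to_window(turns, window_tokens):
    """Keep the newest turns whose total size fits the window."""
    kept, used = [], 0
    for turn in reversed(turns):          # walk from newest to oldest
        size = len(turn.split())          # crude token estimate
        if used + size > window_tokens:
            break                         # everything older is forgotten
        kept.append(turn)
        used += size
    return list(reversed(kept))

turns = ["my name is Ada", "nice to meet you", "what is my name?"]
print(fit_to_window(turns, window_tokens=8))   # oldest turn is dropped
```

With a budget of 8 "tokens" the first turn is dropped, so a model fed this window could no longer answer the question — exactly the failure mode the video describes.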
AI Inference: The Secret to AI's Superpowers
(00:01) What is inferencing? It's an AI model's time to shine, its moment of truth, a test of how well the model can apply information learned during training to make a prediction or solve a task. And with it comes a focus on cost and speed. Let's ... -
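The training-versus-inference split described above can be shown with a deliberately trivial model: "training" fits a parameter from example data once, and "inference" applies that learned parameter to new inputs, which is where per-prediction cost and speed matter.

```python
# Toy illustration of training vs inference with a one-parameter model.

def train(xs, ys):
    """Least-squares slope through the origin, learned from training data."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def infer(weight, x):
    """Apply what was learned to an unseen input."""
    return weight * x

w = train([1, 2, 3], [2, 4, 6])   # learns the doubling pattern: w == 2.0
print(infer(w, 10))               # 20.0
```

Training is expensive and happens rarely; inference is cheap per call but happens constantly, which is why production systems optimize it so aggressively.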
What is AI Search? The Evolution from Keywords to Vector Search & RAG
(00:00) AI search is transforming how we locate and consume information online, but how? Well, back in the day, search engines were pretty simple because they were based more or less just on keyword search. They matched words in a user's query to ... -
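The shift from keyword matching to vector search can be sketched side by side. Real systems use learned embeddings from a neural model; the 2-D vectors below are made up purely to show the mechanics of cosine similarity.

```python
# Keyword search vs vector (semantic) search, in miniature.
import math

def keyword_match(query, doc):
    """Count query words that appear verbatim in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

doc = "automobile maintenance guide"
print(keyword_match("car repair", doc))   # 0 — no shared words at all

# Hypothetical embeddings placing similar meanings close together:
vec = {"car repair": (0.9, 0.1), "automobile maintenance": (0.88, 0.15)}
print(cosine(vec["car repair"], vec["automobile maintenance"]))
```

Keyword search scores "car repair" against "automobile maintenance" as zero, while the vector representation scores them as nearly identical — which is exactly the gap semantic search closes.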
How to Make AI More Accurate: Top Techniques for Reliable Results
In this video, IBM Technology explains several techniques to improve AI accuracy. The speakers discuss methods such as Retrieval Augmented Generation (RAG), choosing the proper model, Chain of Thought prompting, LLM chaining, Mixture of Experts (MoE... -
AI vs Human Thinking: How Large Language Models Really Work
This video compares AI and human cognition, exploring key differences in learning, information processing, memory, reasoning, error, and embodiment. It explains that while humans learn dynamically and interact with the world through sensory experienc... -
5 Types of AI Agents: Autonomous Functions and Real-World Applications
This video explains five main types of AI agents—from simple reflex agents to adaptive learning agents—and how each operates using different decision-making processes in various environments. It also covers the evolution from rule-based systems t... -
MCP vs API: Simplifying AI Agent Integration with External Data
This video explains the Model Context Protocol (MCP) and how it standardizes the integration of large language models (LLMs) with external data and tools, comparing it to traditional APIs. It also highlights MCP's dynamic discovery, uniform interfa... -
RAG vs Fine-Tuning vs Prompt Engineering: Optimizing AI Models
This video explains three approaches to improving large language model outputs: Retrieval Augmented Generation (RAG), Fine-Tuning, and Prompt Engineering. It covers how each method works, the benefits they offer, and the trade-offs involved in applyi... -
LangChain vs LangGraph: A Tale of Two Frameworks
This video compares LangChain and LangGraph—two open source frameworks for building applications with large language models. It explains each framework's architecture, components, and state management approaches, and outlines the scenarios where ... -
What is a Vector Database? Powering Semantic Search & AI Applications
This video explains vector databases and how they enable semantic search by representing unstructured data like images, text, and audio as mathematical vector embeddings. The speaker details how data is transformed into high-dimensional vectors and e... -
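The core operation of a vector database — similarity lookup over stored embeddings — fits in a short sketch. Real systems use approximate-nearest-neighbor indexes (such as HNSW) to scale to millions of vectors; this brute-force toy, with made-up embeddings, shows only the basic idea.

```python
# A toy in-memory "vector database": store (id, vector) pairs and answer
# nearest-neighbor queries by Euclidean distance.
import math

class ToyVectorDB:
    def __init__(self):
        self.items = []                      # list of (id, vector)

    def add(self, item_id, vector):
        self.items.append((item_id, vector))

    def query(self, vector, k=1):
        """Return the k stored ids closest to the query vector."""
        ranked = sorted(self.items,
                        key=lambda item: math.dist(item[1], vector))
        return [item_id for item_id, _ in ranked[:k]]

db = ToyVectorDB()
db.add("cat photo", (0.9, 0.8))    # made-up 2-D embeddings
db.add("dog photo", (0.8, 0.9))
db.add("tax form",  (0.1, 0.05))
print(db.query((0.85, 0.82), k=2))  # the two animal photos rank first
```

Because similar content gets nearby embeddings, a query vector for "animal picture" retrieves the cat and dog photos while the tax form ranks last — the semantic-search behavior the video describes.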
RAG vs. CAG: Solving Knowledge Gaps in AI Models
This video explains two methods—Retrieval-Augmented Generation (RAG) and Cache-Augmented Generation (CAG)—to overcome the knowledge limitations of large language models. It details how each technique processes and utilizes external information, c... -
What is Prompt Tuning?
(00:00) Large language models like ChatGPT are examples of foundation models: large, reusable models that have been trained on vast amounts of knowledge on the Internet, and they're super flexible. The same large language model can analyze a legal ...
Powered by Optimal Access