AI Basic Theory Explained
This section introduces the core theoretical concepts behind Artificial Intelligence. Educators will find accessible, well-structured explanations that cover the evolution of AI, what it is, how it works—especially through machine learning—and the key risks involved in using these technologies. These materials are designed to help you confidently teach adult learners the fundamentals of AI, placing modern tools and trends in the context of their historical and technical roots.

A BRIEF HISTORY OF AI
How AI Evolved: From Logic to Learning
Artificial Intelligence has its roots in ancient philosophy, but modern AI began in the 1950s with pioneers like Alan Turing and John McCarthy. The 1956 Dartmouth Conference marked the official birth of AI as a field. In the 1970s–80s, AI faced setbacks known as the “AI winters” due to limited progress and high expectations. The resurgence began in the 2000s with advances in computing power, big data, and machine learning. Today, we live in the era of generative AI, with tools like ChatGPT and DALL·E becoming part of daily life. Understanding this history helps learners appreciate how far the field has come—and where it may go next.
Key Moments in the History of AI
- 1950 – Turing Test Proposed: Alan Turing publishes "Computing Machinery and Intelligence", introducing the idea that a machine could be considered intelligent if it could hold a conversation indistinguishable from a human's.
- 1956 – Birth of AI as a Field: The term "Artificial Intelligence" is coined at the Dartmouth Conference, marking the official start of AI research.
- 1966 – First Chatbot (ELIZA): MIT creates ELIZA, a text-based program that mimics a conversation with a therapist, one of the earliest examples of natural language processing.
- 1997 – AI Beats Chess World Champion: IBM's Deep Blue defeats Garry Kasparov, the reigning world chess champion, the first time a machine beat a human champion in a complex intellectual game.
- 2011 – AI Wins on Jeopardy!: IBM's Watson wins the TV quiz show Jeopardy! against two of its most successful human champions, showcasing AI's ability to understand and process natural language.
- 2016 – AI Beats a Go Grandmaster: Google DeepMind's AlphaGo defeats Lee Sedol, one of the world's best Go players. Go is far more complex than chess, making this a historic achievement for AI in strategic thinking.
- 2022 – DALL·E and ChatGPT Launch: AI models like OpenAI's DALL·E (image generation) and ChatGPT (advanced conversational AI) become widely available to the public, marking the beginning of accessible, creative AI tools for everyday use.
WHAT IS ARTIFICIAL INTELLIGENCE?
What Makes a Machine Intelligent?
Artificial Intelligence (AI) is a type of computer technology that enables machines to perform tasks that normally require human intelligence, such as understanding language, recognizing images, solving problems, or learning from data.
Unlike traditional software, which follows fixed rules and only does what it’s explicitly programmed to do, AI systems can adapt to new inputs, analyze information, and even improve their performance over time. This means AI can handle complex, unpredictable tasks — like answering questions in natural language or detecting patterns in large amounts of data.
As an educator, you can point to three abilities we expect from AI if it is to match human intelligence; together, they can serve as a working definition of AI:
- Discover – ability to find out new information
- Infer – ability to draw out information that has not been explicitly stated
- Reason – ability to figure things out and formulate conclusions
When people use the term AI today, they are mostly referring to technologies powered by machine learning, for example:
- A virtual assistant that understands and responds to spoken questions
- An app like Netflix that suggests content based on your past preferences
- A tool such as ChatGPT or Gemini that can summarize long articles or translate languages
- Email filters that detect and remove spam automatically

HOW AI WORKS (MACHINE LEARNING EXPLAINED)
How Machines Learn from Data
AI works by analyzing large amounts of data and identifying patterns to make predictions or decisions; this is known as machine learning (ML). Instead of being explicitly programmed, ML algorithms "learn" from examples. For instance, to teach an AI to recognize cats, you show it thousands of cat images. Over time, it learns to identify new cat images by spotting similarities. Key terms include training data, models, and algorithms. Educators can use simple analogies, like teaching a child with flashcards, to explain the concept of supervised learning and make these ideas tangible for adult learners.
How Machine Learning Works: A Simple 5-Step Process
1. Collect Data: Machine learning starts with data, and lots of it. For example, if you want a computer to recognize cats, you give it thousands of pictures labeled "cat" and "not cat."
2. Train the Model: The AI looks at the data and starts to learn patterns. It doesn't understand what a cat is the way a human does, but it notices things that cat images tend to have, such as two eyes, ears, and fur.
3. Test the Model: After training, the AI is tested with new data it hasn't seen before. This shows how well it learned. If it gets most answers right, it's doing well; if not, it may need more data or adjustments.
4. Make Predictions: The trained model can now make predictions. Show it a new picture, and it will say "this looks like a cat" (or not) based on what it learned.
5. Improve Over Time: The more data and feedback it gets, the more it improves. This is what makes machine learning different: it learns and adapts rather than just following fixed rules (see the code sketch after this list).
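To make these five steps concrete, here is a minimal sketch in Python using the scikit-learn library. Since real cat photos would require extra image preprocessing, the sketch substitutes a small synthetic dataset; the data and variable names are illustrative, but the five steps are the same ones described above.

```python
# A minimal sketch of the five-step machine learning process using
# scikit-learn. The "cat vs. not cat" photos are replaced by a
# synthetic dataset so the example stays self-contained.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Collect data: 1,000 labeled examples, each with 20 numeric features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Hold back 20% of the data so the model can later be tested on
# examples it never saw during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# 2. Train the model: the algorithm looks for patterns that link
#    the features to the labels.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# 3. Test the model: how often is it right on the unseen test set?
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Accuracy on unseen data: {accuracy:.0%}")

# 4. Make predictions: classify a brand-new example.
new_example = X_test[:1]
print("Prediction for a new example:", model.predict(new_example)[0])

# 5. Improve over time: in practice this means gathering more data or
#    adjusting the model, then repeating steps 2-4.
```

In a classroom demo, the synthetic dataset could be swapped for any labeled data (for example, a public spam-email dataset); the five steps themselves do not change.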
Two Main Types of Machine Learning
1. Supervised Learning: In supervised learning, the AI is trained on labeled data, which means the correct answers are already known. The goal is to learn from examples so the AI can make predictions about new, unseen data. Example: training an AI to recognize spam emails by showing it examples of both "spam" and "not spam."
2. Unsupervised Learning: In unsupervised learning, the AI is given unlabeled data; there are no right answers. The system tries to find patterns or groupings on its own. Example: an AI analyzing shopping habits to group similar customers, even though no categories were provided in advance (see the clustering sketch after this list).
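To contrast the two approaches in code, here is an equally minimal unsupervised sketch, again in Python with scikit-learn. It uses k-means clustering on a made-up table of customer habits; the numbers and the choice of two groups are invented purely for illustration.

```python
# A minimal sketch of unsupervised learning: grouping customers by
# shopping habits with k-means clustering. No labels are provided;
# the algorithm must find the groups on its own.
import numpy as np
from sklearn.cluster import KMeans

# Each row is one (made-up) customer: [visits per month, average spend].
customers = np.array([
    [2, 15], [3, 20], [2, 18],      # occasional, low-spend shoppers
    [10, 60], [12, 75], [11, 70],   # frequent, high-spend shoppers
])

# Ask for 2 clusters; unlike supervised learning, no "right answers"
# are ever shown to the algorithm.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
groups = kmeans.fit_predict(customers)

print("Group assigned to each customer:", groups)
# The two shopping patterns are separated without the algorithm ever
# being told which customer belongs to which group.
```

Note that the algorithm only discovers that the groups exist; deciding what they mean (for example, "occasional" versus "frequent" shoppers) is still up to a human.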

RISKS OF USING AI
What to Watch Out for When Using AI
While AI offers many benefits, it’s important to understand its risks. AI systems can unintentionally:
- Reinforce biases if trained on unbalanced data
- Produce false information (hallucinations) with high confidence
- Violate privacy when AI tools collect or process personal data
Furthermore, over-reliance on AI can reduce critical thinking. Educators should help learners approach AI with a healthy dose of curiosity and caution. Teaching how to question AI outputs, spot potential inaccuracies, and understand ethical implications is key to safe and responsible AI use.
Additionally, AI can raise broader ethical concerns, such as deepfakes or surveillance technologies.
For example, the well-known historian Yuval Noah Harari warns of the growing power of AI and points out its potential to manipulate humans or to distort civic discussion, which is the cornerstone of democratic society.
Educators should emphasize the importance of using AI responsibly, double-checking information, and understanding the tool’s limitations. Teaching adults to be curious but cautious helps them become informed and empowered users.