At what age can you use AI?
All around the world, experts are trying to answer this question.
The consensus is that children under 13 should not use it (UNESCO: https://www.unesco.org/en/articles/use-ai-education-deciding-future-we-want); some countries go as far as 16 or 18 (or allow use from 13/16 with parental permission).
This has forced companies to set specific rules that vary greatly:
Looking at Jisc's overview of the terms and conditions of generative AI tools (https://nationalcentreforai.jiscinvolve.org/wp/2024/09/26/navigating-the-terms-and-conditions-of-generative-ai/), here are some examples:
- OpenAI (ChatGPT): 18, or 13 with a parent or legal guardian’s permission
- Google Gemini: 18
- Anthropic Claude: 18
- Microsoft Copilot (enterprise): 18
Does NKS have an AI policy?
Yes, it is part of our Acceptable Use and E-Safety Policy, which you will find in the documents section of the website: https://www.nks.kent.sch.uk/page/?title=School+Policies&pid=16
AI and assessment
From https://www.jcq.org.uk/wp-content/uploads/2024/07/AI-Use-in-Assessments_Feb24_v6.pdf
“Students who misuse AI such that the work they submit for assessment is not their own will have committed malpractice, in accordance with JCQ regulations, and may attract severe sanctions”
“Students must make sure that work submitted for assessment is demonstrably their own. If any sections of their work are reproduced directly from AI generated responses, those elements must be identified by the student and they must understand that this will not allow them to demonstrate that they have independently met the marking criteria and therefore will not be rewarded”
AI Computer Lingo…
Creating human-like intelligence, or even superintelligence, is the work of some of the most brilliant minds of our time, and sometimes it can feel like they are speaking a different language. AI is complex by nature, so there is no denying that there is a steep learning curve for anyone interested in this field.
When you read about AI, you will come across acronyms and words that have a specific meaning in this field or that simply sound very technical (because they are). So if you want to go down this rabbit hole and understand more about AI, please continue…
ANI / AGI / ASI
Source: https://viso.ai/deep-learning/artificial-intelligence-types/
“We can broadly recognize three types of artificial intelligence: Narrow or Weak AI (ANI), General AI (AGI), and Artificial Superintelligence (ASI)
ANI is mainly used to perform specific jobs without learning beyond what it’s meant for.
AGI is like human intelligence and can do many things at once.
ASI is smarter than the human mind and can perform any task better.”
Google DeepMind’s co-founder Shane Legg said in an interview with a tech podcaster that he believes there is a 50% chance that AGI will be achieved by 2028.
GAI – Generative AI
Source: https://en.wikipedia.org/wiki/Generative_artificial_intelligence
“Generative artificial intelligence is artificial intelligence capable of generating text, images, videos, or other data using generative models, often in response to prompts. Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics.”
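To make "learn the patterns, then generate new data with similar characteristics" concrete, here is a deliberately tiny sketch in Python. The three training sentences and the word-pair model are invented purely for illustration; real generative models use neural networks with billions of parameters, but the underlying idea is the same.

```python
import random

# A deliberately tiny "training set" - real generative models learn
# from billions of examples, not three sentences.
training_text = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
)

# Learn the patterns: which word tends to follow which word?
words = training_text.split()
follows = {}
for current, nxt in zip(words, words[1:]):
    follows.setdefault(current, []).append(nxt)

# Generate new text with similar characteristics to the training data.
random.seed(42)
word = "the"
sentence = [word]
while word != "." and len(sentence) < 12:
    word = random.choice(follows[word])
    sentence.append(word)

print(" ".join(sentence))
```

The generated sentence is new (it was never in the training text), yet it looks like the training text, which is the essence of a generative model.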
Technological singularity / AI singularity
Source : https://en.wikipedia.org/wiki/Technological_singularity
“The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good’s intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase (“explosion”) in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence.”
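Why an "explosion"? The feedback loop means each generation arrives faster than the last. A purely illustrative toy calculation (the numbers are invented and this is not a prediction) shows how sharply that curve bends:

```python
# Purely illustrative numbers: suppose generation 1 takes 8 years to build,
# each generation is twice as "intelligent" as the last, and that extra
# intelligence halves the time needed to build the next generation.
years_to_build = 8.0
intelligence = 1.0
elapsed = 0.0

for generation in range(1, 9):
    elapsed += years_to_build
    print(f"Generation {generation}: intelligence x{intelligence:.0f}, "
          f"after {elapsed:.2f} years")
    intelligence *= 2          # each generation is smarter...
    years_to_build /= 2        # ...and builds its successor faster

# The total time converges towards 16 years, while the intelligence keeps
# doubling without limit - the "explosion" in the intelligence explosion.
```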
LLM (Large language model)
Source: https://www.cloudflare.com/en-gb/learning/ai/what-is-large-language-model/
“A large language model (LLM) is a type of artificial intelligence (AI) program that can recognize and generate text, among other tasks. LLMs are trained on huge sets of data — hence the name “large.” LLMs are built on machine learning: specifically, a type of neural network called a transformer model.
In simpler terms, an LLM is a computer program that has been fed enough examples to be able to recognize and interpret human language or other types of complex data. Many LLMs are trained on data that has been gathered from the Internet — thousands or millions of gigabytes’ worth of text. But the quality of the samples impacts how well LLMs will learn natural language, so an LLM’s programmers may use a more curated data set.
LLMs use a type of machine learning called deep learning in order to understand how characters, words, and sentences function together. Deep learning involves the probabilistic analysis of unstructured data, which eventually enables the deep learning model to recognize distinctions between pieces of content without human intervention.
LLMs are then further trained via tuning: they are fine-tuned or prompt-tuned to the particular task that the programmer wants them to do, such as interpreting questions and generating responses, or translating text from one language to another.
LLMs can be trained to do a number of tasks. One of the most well-known uses is their application as generative AI: when given a prompt or asked a question, they can produce text in reply. The publicly available LLM ChatGPT, for instance, can generate essays, poems, and other textual forms in response to user inputs.
Any large, complex data set can be used to train LLMs, including programming languages. Some LLMs can help programmers write code. They can write functions upon request — or, given some code as a starting point, they can finish writing a program. LLMs may also be used in:
- Sentiment analysis
- DNA research
- Customer service
- Chatbots
- Online search
Examples of real-world LLMs include ChatGPT (from OpenAI), Bard (Google), Llama (Meta), and Bing Chat (Microsoft). GitHub’s Copilot is another example, but for coding instead of natural human language.”
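If you are curious what "given a prompt, produce text in reply" looks like from a programmer's point of view, the sketch below shows roughly how a program talks to an LLM. It is only an illustration: it assumes the openai Python package is installed and an API key has been set up, and the model name is just an example.

```python
# Sketch only: assumes the openai package is installed (pip install openai)
# and an OPENAI_API_KEY environment variable is set. The model name below
# is an example and may differ from what is currently available.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "user", "content": "Explain photosynthesis in one sentence."},
    ],
)

# The model's reply is simply generated text, produced one token at a time.
print(response.choices[0].message.content)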
Machine Learning / Deep Learning
Source: https://levity.ai/blog/difference-machine-learning-deep-learning
“Machine Learning means computers learning from data using algorithms to perform a task without being explicitly programmed. Deep Learning uses a complex structure of algorithms modeled on the human brain. This enables the processing of unstructured data such as documents, images, and text.”
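The key phrase is "without being explicitly programmed": rather than writing the rule ourselves, we give the computer labelled examples and let it work the rule out. Here is a minimal sketch using the scikit-learn library, with a made-up dataset purely for illustration:

```python
# Sketch using scikit-learn (pip install scikit-learn). The data is invented
# purely for illustration: hours revised -> passed the test (1) or not (0).
from sklearn.linear_model import LogisticRegression

hours_revised = [[0.5], [1.0], [1.5], [2.0], [3.0], [4.0], [5.0], [6.0]]
passed        = [ 0,     0,     0,     0,     1,     1,     1,     1  ]

# Explicit programming would be: "if hours > 2.5 then predict pass".
# Machine learning instead lets the model find that boundary from the data.
model = LogisticRegression()
model.fit(hours_revised, passed)

print(model.predict([[2.2], [4.5]]))   # e.g. [0 1]: predicted fail, pass
```

Deep learning works on the same "learn from examples" principle, but with many layers of artificial neurons instead of a single simple model, which is what lets it handle unstructured data such as images and free text.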
Prompt and prompt engineering
A prompt is the instruction or question you type into an AI tool; prompt engineering is the craft of writing that instruction carefully (for example, giving the model a role, some context and the format you want) so that you get a more useful answer. These videos explain both:
Prompt: https://www.youtube.com/watch?v=B4RP-ypsTf4
Prompt Engineering: https://www.youtube.com/watch?v=lTI4FyO0ul8
Agents
Source: https://aws.amazon.com/what-is/ai-agents/
An artificial intelligence (AI) agent is a software program that can interact with its environment, collect data, and use the data to perform self-determined tasks to meet predetermined goals. Humans set goals, but an AI agent independently chooses the best actions it needs to perform to achieve those goals. For example, consider a contact center AI agent that wants to resolve customer queries. The agent will automatically ask the customer different questions, look up information in internal documents, and respond with a solution. Based on the customer responses, it determines if it can resolve the query itself or pass it on to a human.
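As a rough illustration of that loop (understand the query, act on what it knows, escalate when it cannot resolve things itself), here is a toy version of the contact-centre example in plain Python. The "knowledge base" and queries are invented; a real agent would use an LLM and access to company systems rather than a hard-coded dictionary.

```python
# Toy illustration of the contact-centre example. Everything here is
# invented for illustration; a real AI agent would use an LLM plus
# access to company systems, not a hard-coded dictionary.
knowledge_base = {
    "reset password": "Go to Settings > Account > Reset password.",
    "opening hours": "We are open 9am-5pm, Monday to Friday.",
}

def handle_query(query: str) -> str:
    """The agent decides for itself: answer from what it knows,
    or hand the customer over to a human colleague."""
    for topic, answer in knowledge_base.items():
        if topic in query.lower():
            return f"Agent: {answer}"
    return "Agent: I can't resolve this myself - passing you to a human colleague."

print(handle_query("How do I reset password for my account?"))
print(handle_query("What are your opening hours?"))
print(handle_query("My order arrived damaged."))
```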