
How AI Models Learn and Train Themselves

In our last blog on AI, we covered different types of AI models. But how do these digital brains actually learn and train themselves? In this blog, we’re going to take a closer look into the processes behind AI learning and training, as well as cover some future considerations and caveats when it comes to using AI.  

Take a moment to follow us on our social channels like X and Facebook for more insights on AI and its applications in education, simulations, and games!

Neural Networks and Pre-Trained Models

Most modern AI models are built on Neural Networks, which loosely mirror the structure of the human brain. These networks consist of simulated neurons connected in various configurations, each connection carrying an assigned “weight” that dictates its influence. These weights are where the learning actually happens: they are determined through rigorous training against labeled datasets. Models like ChatGPT and DALL-E are pre-trained, with companies investing heavily in refining these weights and datasets, making them the linchpin of AI value.
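
To make that concrete, here’s a minimal sketch of the idea in Python: a single simulated neuron whose weights get nudged against a tiny labeled dataset (an AND gate). The dataset and numbers are purely illustrative; real models adjust billions of weights, but by the same basic loop of predict, compare, and nudge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Labeled dataset: inputs X and correct answers y (a simple AND gate)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

weights = rng.normal(size=2)  # randomly initialized weights
bias = 0.0
lr = 0.5                      # learning rate: how big each nudge is

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training: compare predictions to the labels, then nudge the weights
for _ in range(2000):
    pred = sigmoid(X @ weights + bias)
    error = pred - y                        # how wrong is each prediction?
    weights -= lr * (X.T @ error) / len(y)  # gradient step on the weights
    bias -= lr * error.mean()

print(np.round(sigmoid(X @ weights + bias), 2))  # close to [0, 0, 0, 1]
```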

This also means that the economic value of AI, its trade secret or recipe if you will, lies in the weights and the training datasets. The network of connections itself is not nearly as valuable from a business standpoint, which is why Meta and other companies have started sharing untrained models publicly. That doesn’t mean there aren’t plenty of open-source weights as well, but they are usually of lower quality than what big companies with heavy investments in training time can offer. This is also why publications and authors are upset that their articles were scraped into training datasets without compensation.

When a trained model generates a response from a stimulus, it’s called inference, a process loosely akin to human thought. While inference is generally swift, Generative Models face computational hurdles due to their expansive output layers, driving the race for AI-capable chips.
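
Inference itself is just a forward pass through weights that have already been learned; nothing gets updated. Here’s a sketch reusing the single-neuron setup above, with made-up “pre-trained” values standing in for the output of an expensive training run:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Pretend these values came out of an expensive training run (made up here)
trained_weights = np.array([4.9, 4.9])
trained_bias = -7.4

def infer(inputs):
    # One cheap forward pass: multiply, sum, squash. No weights change.
    return sigmoid(inputs @ trained_weights + trained_bias)

print(infer(np.array([1.0, 1.0])))  # high output: the learned pattern fires
print(infer(np.array([1.0, 0.0])))  # low output: it does not
```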

There are two kinds of AI-helpful chips. The kind everyone has been using to accelerate training and inference is the GPU, the exact same chip found in the graphics card of your PC, PS5, or Xbox. If you think that’s strange, you are not alone. While GPUs coincidentally accelerate certain AI tasks, the future is in Neuromorphic Processors, which implement neurons directly in silicon. This means pre-trained AI models will be able to run inference entirely in hardware, but training will likely rely on classical techniques for a while longer, so don’t dump your NVIDIA stock just yet!
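
If you want to see the GPU connection firsthand, here’s a small sketch, assuming you have PyTorch installed (any tensor framework works similarly): the identical matrix math runs on either chip, the GPU just does it massively in parallel.

```python
import torch

# Use the GPU if one is available, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

weights = torch.randn(4096, 4096, device=device)  # a big weight matrix
inputs = torch.randn(1, 4096, device=device)

# The same multiply runs on either chip; a GPU just performs thousands of
# these operations in parallel, which is what makes training and inference fast.
outputs = inputs @ weights
print(outputs.shape, "computed on", device)
```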

Looking ahead, the convergence of diverse AI models heralds an era of unprecedented applications. Large Language Models (LLMs) serve as intermediaries, orchestrating interactions between different AI models and traditional APIs. These dynamic chains pave the way for AI-driven features in everyday applications, promising a future where AI permeates every facet of digital experiences.
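
What does one of these chains look like? Here’s a rough sketch in plain Python; ask_llm and get_weather are hypothetical stand-ins for a hosted model and a traditional API, not calls to a real library:

```python
import json

def ask_llm(prompt: str) -> str:
    # Stand-in for a real LLM call (in practice, an API request to a hosted
    # model). Here we pretend the model replied with a tool request.
    return json.dumps({"tool": "get_weather", "args": {"city": "Madison"}})

def get_weather(city: str) -> str:
    # Stand-in for a traditional API (a weather service, a database, etc.)
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def run_chain(user_message: str) -> str:
    plan = json.loads(ask_llm(user_message))      # 1. LLM picks a tool + args
    result = TOOLS[plan["tool"]](**plan["args"])  # 2. Traditional API does the work
    return result  # 3. (a real chain would have the LLM phrase this for the user)

print(run_chain("What's the weather like?"))
```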

Caveats and Challenges

All that said, the architectural intricacies of AI models underscore both their potential and limitations. Generative AI inference is expensive and resource-intensive, limiting scalability and necessitating careful consideration of environmental impacts. Models also grapple with input and output constraints, posing limitations on conversational depth and image resolution, for example. Moreover, the inherent non-determinism of Generative AI introduces complexities in research and logical thinking, requiring vigilant evaluation of error and bias.
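
That non-determinism comes from sampling: generative models draw outputs from a probability distribution rather than looking up one right answer, and a “temperature” setting controls how random the draw is. A toy illustration (the words and scores below are made up):

```python
import numpy as np

rng = np.random.default_rng()

# Pretend the model scored four candidate next words with these values
words = ["blue", "orange", "green", "red"]
logits = np.array([2.0, 1.5, 1.0, 0.5])

def sample(temperature: float) -> str:
    scaled = logits / temperature
    probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax -> probabilities
    return str(rng.choice(words, p=probs))         # a draw, not a lookup

print([sample(1.0) for _ in range(5)])   # varies from run to run
print([sample(0.01) for _ in range(5)])  # near-deterministic: top word wins
```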

Error refers to incorrect or inaccurate inferences, while bias is a tendency toward one type of interpretation or response over another when all facts are equal. Biases can be benign, like preferring to generate images with orange shirts instead of blue shirts, or malignant, like preferring to suggest white historical figures over Black historical figures, or mean statements instead of friendly ones. Whenever we use these models, it’s important for designers, clients, and developers to evaluate error and bias in the end product, and to proactively inform users when these cannot be effectively mitigated.
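
One simple way to start evaluating bias is to generate many outputs and count what comes back when all else is equal. A minimal sketch, with a deliberately skewed fake generator standing in for a real model call:

```python
import random
from collections import Counter

def fake_model(prompt: str) -> str:
    # Stand-in for a real generative model, deliberately skewed for the demo
    return random.choices(["orange shirt", "blue shirt"], weights=[0.8, 0.2])[0]

counts = Counter(fake_model("draw a person in a shirt") for _ in range(1000))
print(counts)  # a heavy skew on identical prompts is a bias worth flagging
```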

While navigating this rapid increase in AI technologies and capabilities, it’s imperative to acknowledge the ethical dimensions of AI deployment. From mitigating bias to evaluating error, responsible AI development demands a holistic approach, ensuring that technological advancements serve the collective good.

Have questions or ideas about AI and game-based learning? Reach out to us – we’re educational game developers with a passion for exploring the intersection of technology and education, and we want to help you bring your project to life.
