In a world increasingly shaped by AI, large language models (LLMs) have emerged as powerful tools—capable of drafting emails, summarizing documents, writing code, and even crafting bedtime stories. But while these general-purpose models excel in many domains, there’s one particularly sensitive and complex space where they consistently fall short: communicating effectively—and responsibly—with children.
That’s where LORA comes in.
In this episode of the Austrian Artificial Intelligence Podcast, host Manuel Pasieka sits down with founders Mathias Neumayer and Dima Rubanov to explore an innovative project: LORA, a fine-tuned language model built specifically to generate child-friendly content. Born out of their previous work on the successful app Oscar Stories, which lets kids star in personalized bedtime tales, LORA was created in response to very real challenges they observed when using mainstream AI tools like ChatGPT, Claude, or Gemini for younger audiences.
Despite clever prompting techniques and attempts to simplify output, these models often failed to produce language that truly resonates with children of different ages. A six-year-old and an eight-year-old don’t just differ in maturity—they process language and meaning in vastly different ways. Furthermore, Mathias and Dima discovered recurring gender biases in generated stories. Female characters were more likely to need rescuing or take on passive roles, a problem that became especially troubling at scale.
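The episode doesn't go into implementation details, but the counterfactual bias evaluation referenced below works by swapping gendered words in a prompt and comparing how the model treats the two versions. A minimal sketch of such a swap, with an illustrative (not production-grade) word list:

```python
import re

# Illustrative gendered word pairs; a real evaluation set would be far larger
# and curated. The possessive "her" vs. "his" is ambiguous and skipped here.
SWAPS = {"he": "she", "she": "he", "him": "her",
         "boy": "girl", "girl": "boy",
         "prince": "princess", "princess": "prince"}

def swap_gender(text: str) -> str:
    """Build the counterfactual version of a prompt by swapping gendered words."""
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"

    def repl(m: re.Match) -> str:
        swapped = SWAPS[m.group(0).lower()]
        # Preserve capitalization of the original word.
        return swapped.capitalize() if m.group(0)[0].isupper() else swapped

    return re.sub(pattern, repl, text, flags=re.IGNORECASE)
```

Generating a story from both the original and the swapped prompt, then comparing outcomes (who acts, who is rescued), surfaces exactly the asymmetries described above; aggregate counts over many story pairs turn this into a measurable benchmark.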
Rather than accept these flaws, the duo decided to build something better.
LORA is fine-tuned on tens of thousands of carefully labeled text examples reviewed by child education experts. It incorporates sophisticated metrics for measuring readability and sentiment, and it employs rigorous benchmarks—some adapted from academic research—to identify and minimize bias. Importantly, LORA isn’t a chatbot children talk to directly; instead, it powers structured educational storytelling experiences through a safe, thoughtfully designed app interface.
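The episode doesn't specify which readability formula LORA uses, but a minimal sketch of one standard metric, the Flesch–Kincaid grade level, shows the kind of signal such a pipeline can compute (the syllable counter here is a crude heuristic, not a linguistic analyzer):

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels as syllables.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    """Estimate the U.S. school grade level needed to read the text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula.
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Short sentences of simple words score lower (easier); a system targeting six-year-olds versus eight-year-olds can use a score like this as one gate on generated output.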
The team is even going one step further, pursuing AI certification to meet rigorous standards of trustworthiness and safety. As they explain, LORA isn’t just about avoiding harmful content—it’s about fostering learning, curiosity, and imagination in a way that is developmentally appropriate and inclusive.
This episode is a must-listen for anyone interested in the future of AI and education, especially those who believe that technology should adapt to its youngest users—not the other way around.
Tune in to discover how LORA might just be laying the groundwork for a new generation of AI—one that’s not just smart, but also safe, fair, and child-first.
References
- LORA: https://oscarstories.com/de/lora/
- GenBit Score: https://www.microsoft.com/en-us/research/wp-content/uploads/2021/10/MSJAR_Genbit_Final_Version-616fd3a073758.pdf
- Counterfactual Reasoning for Bias Evaluation: https://arxiv.org/abs/2302.08204
- Trustifai – Certification for AI models: https://www.trustifai.at