🔍 In this episode of Humans of AI, we speak with Airí and Pau from Domestic Data Streamers, creators of Synthetic Memories — a research initiative using generative AI to help people reconstruct lost personal and collective memories.
Airí and Pau share how the project began during the 2015 refugee crisis and grew into a global effort to support communities affected by migration, conflict, and displacement. They discuss their unique methodology, the importance of culturally aware datasets, and how early “imperfect” AI models can actually deepen emotional recall. They also explore how synthetic memory techniques are now being tested in therapeutic contexts, including work with dementia patients.
📌 HoAI Highlights
⏲️[00:00] Intro
⏲️[00:16] The Spark
⏲️[03:40] The Impact
⏲️[08:18] The Challenge
⏲️[15:55] The Future
⏲️[21:38] The Takeaway
The Spark
🗣️“Being a refugee is not about losing your home — it’s about losing the answer to where do I come from?”
The Impact
🗣️ “AI can help preserve what displacement, conflict, and disaster try to erase.”
The Challenge
🗣️ “Archives are biased per se — so re-biasing a model means taking responsibility for representing a culture in all its richness.”
The Future
🗣️“AI should help us understand and connect with each other, not just control or optimise.”
The Takeaway
🗣️ “The value of a well-used tool is not in what it produces for us, but in what it produces within us.”
📌 About Our Guests
Airí Dordas & Pau Aleikum | Synthetic Memories
🌐 http://www.syntheticmemories.net
Synthetic Memories is a research initiative by Domestic Data Streamers that uses generative AI to help people visually reconstruct memories lost through migration, conflict, or time. Built around guided conversations, the project turns personal recollections into emotionally resonant images while addressing cultural and historical bias through close collaboration with local communities. By focusing on identity, healing, and connection, Synthetic Memories shows how AI can preserve stories at risk of disappearing—and support both cultural memory and therapeutic work.
#AI #ArtificialIntelligence #GenerativeAI
🔍 In this episode of Humans of AI, Javier de la Rosa, Head of Language Models at the National Library of Norway, explains how Norway’s long-term digitisation effort laid the groundwork for powerful, locally grounded AI models. He shares how these tools are now used across public services, the challenges of updating legal frameworks for generative AI, and why supporting public-interest institutions is essential for Europe to build AI that reflects its own languages, values, and cultural identity.
📌 HoAI Highlights
⏲️[00:00] Intro
⏲️[00:30] The Spark
⏲️[02:04] The Impact
⏲️[05:21] The Challenge
⏲️[08:44] The Future
⏲️[12:40] The Takeaway
The Spark
🗣️“Twenty years of digitisation was a leap of faith — but it’s now the foundation that makes Norwegian AI possible.”
The Impact
🗣️“Our models are used everywhere — from hospitals and police work to newsrooms. The impact is huge, and so are the expectations.”
The Challenge
🗣️“LLMs changed the game. Agreements written before generative AI suddenly didn’t fit anymore, and we had to rethink everything with rights holders.”
The Future
🗣️“We want models that protect privacy, run locally, and reflect Norwegian values — and expand into speech and visual AI in a responsible way.”
The Takeaway
🗣️“If restrictions become too heavy, small public institutions can’t contribute — and Europe risks losing its voice in AI.”
📌 About Our Guests
Javier de la Rosa | KB Norway
The National Library of Norway runs NbAiLab, an applied AI programme that develops open language, speech, and image models based on the library’s extensive digitised collections. The lab focuses on improving access to Norwegian cultural heritage through AI-driven tools such as text and handwriting recognition, speech-to-text, and automated metadata enrichment, while openly sharing models and datasets to support the wider research and GLAM community.
#AI #ArtificialIntelligence #GenerativeAI
🔍 In this episode of Humans of AI, we speak with Daniel van Strien, researcher at Hugging Face and coordinator of Big GLAM, about how open collaboration, shared datasets, and a focus on the public good can help the cultural heritage sector build a healthier AI ecosystem.
Daniel shares his journey from studying library science to exploring how machine learning can empower libraries, archives, and museums. He discusses the Living with Machines project, his early experiments with FastAI, and how the Big GLAM initiative grew into a global effort to make cultural heritage datasets accessible for ethical and transparent AI development.
📌 HoAI Highlights
⏲️[00:00] Intro
⏲️[00:24] The Spark
⏲️[03:23] The Impact
⏲️[10:05] The Challenge
⏲️[13:45] The Future
⏲️[20:32] The Takeaway
The Spark
🗣️“Even something as simple as an image classifier can be incredibly useful for libraries — the challenge is adapting these tools to real-world GLAM contexts.”
The Impact
🗣️ “People often focus on models, but data is what really lasts. If we want AI to work for cultural heritage, we need to see datasets as shared infrastructure.”
The Challenge
🗣️ “Funding and copyright uncertainty are holding institutions back — we can’t expect the GLAM sector to lead innovation without long-term investment and legal clarity.”
The Future
🗣️“There’s an opportunity for libraries to collaborate on evaluation datasets and task-specific models that reflect their own values and missions.”
The Takeaway
🗣️ “Models might fade, but well-built datasets can last a decade — that’s where sustainable progress begins.”
📌 About Our Guests
Daniel van Strien | Hugging Face & Big GLAM
🌐 linkedin.com/in/danielvanstrien
🌐 https://huggingface.co/biglam
Big GLAM is a global open science initiative hosted on Hugging Face that gathers datasets from galleries, libraries, archives, and museums to support ethical, community-driven machine learning. By fostering collaboration and transparency, Big GLAM helps cultural institutions engage with AI in ways that respect public values and strengthen the digital commons.
#AI #ArtificialIntelligence #GenerativeAI
🔍 In this episode of Humans of AI, Professor Melissa Terras, co-founder of Transkribus, shares how three decades of work in digital cultural heritage led to one of Europe’s most successful examples of ethical, community-owned AI.
Melissa talks about turning damaged ancient manuscripts into searchable digital archives, the journey from academic research to a cooperative business model, and why AI should serve communities — not corporations. She reflects on lessons from the project’s growth, the importance of human networks, and her vision for a future where AI is sustainable, transparent, and keeps humans in the loop.
📌 HoAI Highlights
⏲️[00:00] Intro
⏲️[00:21] The Spark
⏲️[08:25] The Impact
⏲️[13:54] The Challenge
⏲️[18:19] The Future
⏲️[24:11] The Takeaway
The Spark
🗣️“We’re building tools that let people turn images of the past into knowledge for the future.”
The Impact
🗣️ “To have learned about this poet at school and now help make her diaries accessible to everyone — that was a really nice full-circle moment for me.”
The Challenge
🗣️ “All of this is about information literacy — and AI companies don’t want people to be literate about AI. They want to disrupt, take the money, and run.”
The Future
🗣️“If we want a better AI world, we need to look at the business models — and encourage more people to build AI cooperatively.”
The Takeaway
🗣️ “AI is just a tool. What we choose to do with it is on us.”
📌 About Our Guests
Melissa Terras | Transkribus
🌐 linkedin.com/in/melissa-terras-mbe-freng-8658714
Transkribus is a cooperative AI platform that uses handwriting recognition and machine learning to turn historical documents into searchable, machine-readable text. Co-owned by libraries, archives, museums, and individuals, it helps researchers and institutions digitise and analyse millions of handwritten records while ensuring data ethics and community-driven sustainability.
#AI #ArtificialIntelligence #GenerativeAI