
You are a designer and researcher. What informs your practice overall, and what would be important for us to know about you before you speak as part of the (Un)Conference program “Creating the culture we want / Building the infrastructures we need”?
I am a researcher and designer. My practice spans many different roles, but those are good umbrella terms. For almost a decade now, I’ve been working in and around AI, at the intersection of culture and new media: thinking about how we navigate technological and societal change. Things like aesthetics, subcultures, or the mythologies people build around these technologies. Most of my time, I’ve been working on AIxDESIGN. It’s a community I started in 2018, before the current wave of generative AI. Back then, it was much more experimental, working with early tools and artistic approaches. AIxDESIGN has evolved a lot, and it’s now a non-profit organization. The community is still at its core, but we also focus on research, participatory design, events, and publications. I run it now with my co-founder, Ploipailin Flynn, and a small team. Outside of that, I work as a freelancer. One of my current roles is as a researcher-in-residence at the Film Academy in Amsterdam, where I focus on AI literacy and pedagogy.

What is the main concept and question behind AIxDESIGN?
Our main question is: how do we make AI work for the rest of us, not just for tech companies? How can people actively shape what AI becomes? We’re exploring how young people can develop a meaningful relationship to these technologies: not just learning tools, but building critical and computational understanding.
What are you personally looking forward to at the event and your talk at Creative Days Vienna 2026?
I am mostly interested in what the other people will present. The Creative Days Vienna program is very well curated; there are people with great perspectives, and I trust that the selection will lead to interesting conversations. For example, I am excited for the talks by Christie Morgan, Creative Director of SOFTER, and Calum Bowden, one of the founders of TRUST. I’m especially excited to encounter ideas I haven’t thought about before. Maybe through a talk, but especially through workshops. I saw some workshop sessions that look really interesting. I’m curious to experience something new, something outside my usual thinking. I personally really value participatory, hands-on formats, where you try things, experiment, or create something. That’s where learning really happens for me. So I always appreciate when there are as many workshops as talks in the program.
What should governments or institutions create to help communities like yours grow? The knowledge is already there; communities need the right conditions and support for it to prosper.
There is a huge barrier between industry and academia. Industry is where AI actually happens; many companies do care about building responsibly. Academia does the deep thinking. Communities like ours test ideas in practice. We need more flow between them. Our community already crosses these worlds: academics, industry folks, artists. More rooms of support need to open up, so lived experience can actually inform regulation. That said, I’m really glad to be in the EU right now. There’s a commitment to safety over pure capitalism, which feels reassuring.

There are many communities that are emerging, but people still don’t quite understand what they are. AIxDESIGN, or the other speakers here—they’re informal knowledge networks. Not agencies, consultancies, hobby groups, or just friends. They’re something new: a hybrid community of practice that’s quietly shaping culture and knowledge. But they’re still underground. People don’t know how to engage: Can I join? Can I hire you? How does this work? More visibility and more funding would help. Right now, we patchwork it—arts grants, advocacy funding, client work. It’s super hard. Funders don’t understand—you’re not a museum (but you do programming), not a school (but you do literacy). They want you to fit in a box. We struggle to explain what we do and how big it actually is. It’s hard to build something new and make it sustainable in a world that doesn’t have categories for it yet.
You already mentioned really important topics: community, society, and how people relate to knowledge and AI. How do we deal with the fear people have around AI? Even if AI is not necessarily something bad, a lot of people are afraid.
I do think there are reasons to be fearful. There’s one kind of fear, like the idea that AI will go rogue, a sort of “Terminator scenario.” That’s very influenced by science fiction. I personally find those fears a bit out of context or irrational, but that doesn’t make the feelings less real.

Then there’s another set of fears that I think are more grounded. For example, fears about jobs and livelihoods. Some of that concern is definitely valid, even if it’s sometimes exaggerated in the media. Another fear I relate to is what people call “cognitive outsourcing.” The idea that we might start relying on AI for thinking, decision-making, or forming opinions, rather than engaging deeply with information ourselves. Instead of researching, comparing sources, and forming our own views, we might increasingly just ask a model. That changes our relationship to knowledge. There are also concerns about loneliness or emotional dependence, people forming strong attachments to AI systems, even replacing social relationships or therapy with them. Some fears are overblown, but others point to real shifts that we need to think carefully about.
There’s such a strong culture of speculation when it comes to possible AI futures. You mentioned you’re interested in internet folklore and these kinds of narrative communities. How do you think that relates? You said some fears come from sci-fi imaginaries, but for people who might not be familiar with this side of the internet, how does that connect to AI?
We create stories; that’s as old as humanity. I think it’s something people do to make sense of things they don’t understand. And I think people approach AI similarly. Because they don’t understand the technical mechanisms, they build narratives around it, almost like mythologies that make it more graspable. With large language models, for example, you see this in how people describe them as oracles or as something magical. What I find interesting is that these stories aren’t really about being right or wrong on a technical level. They show how people perceive the technology: what their mental models are. And those mental models matter, because they shape how we relate to AI. Some can be helpful, others not.

There’s also this ongoing debate: is AI a tool? An instrument? An entity? An intelligence? These are just words, but words shape worlds. The metaphors we choose affect how we behave toward the technology. And all of this is happening very quickly. For many people, AI just suddenly appeared, even though it has a long history. The speed of adoption, as with ChatGPT, has been unprecedented. Faster than social media platforms like Facebook. So suddenly, people are confronted with a new reality, and collectively we’re producing stories to understand where this fits in our world. That’s what makes it so interesting to observe and to talk about.
Today, we have a completely different level of exposure. There are so many opinions, so much information every day, that it can become overwhelming. Take the car: even though it is the result of a very long development, like the wheel evolving over thousands of years, people experienced it as something sudden. When the first cars appeared, people thought everyone would lose their lives in car crashes, that this would be the end. And now something similar is happening with AI.
One thing I find fascinating is retrofuturism—looking at how people in the past imagined the future. There’s this image I love: it shows a self-driving car, imagined decades ago. So people could envision the technological development. But inside the car, the family is a very traditional nuclear family—mother, father, boy, girl—and they’re playing a board game. So it’s like: we can imagine technological change, but we struggle to imagine how society and culture will change alongside it. And I think right now there’s a lot of speculation about AI futures, but I’m not sure we actually need more of it.

In AIxDESIGN, we used to do more speculative design, but I’m less interested in that now. There’s already too much speculation. Instead, I think the best way to understand the future is to pay close attention to the present.
It feels like you’re very interested in these alternative or archival visual cultures of the internet—like the example you gave with retrofuturism and the self-driving car. Could you tell us more about your visual approach in AIxDESIGN? Why is it important, and how does it connect to your work as a designer?
Starting with the aesthetic: when I began AIxDESIGN around 2018, I was very intentional about making it feel soft, playful, and accessible. At the time, most tech aesthetics were very cold and sterile: silver, blue, white. I wanted something different: pastel colors, something more welcoming, even a bit “cute.” It was a way to create a space that felt open and inclusive, especially for people who might feel alienated by typical tech culture. We also embraced a kind of DIY, slightly messy visual language: things that are unfinished, sketch-like, or imperfect.

Because that reflects how we actually work: thinking out loud, experimenting, researching in the open. It should feel more like flipping through someone’s notebook than looking at a polished corporate product. And that carries into physical spaces we curate for our programs, too.
There is a constant optimization of knowledge, of information circulating online, and the AIxDESIGN community seems really strong in that regard. So I’m curious: how do you curate content? Who decides what gets shared? How do you fact-check or filter what goes out to your community?
So we have a discussion, and then use tools to help turn that into something shareable. But the input is always real: things that were actually said, thought, or discussed. I would never ask an AI, “What should we post this month?” That decision is always ours: based on what we find relevant or meaningful. The tools help with editing, condensing, structuring, and maybe checking if we forgot something. But the substance comes from us. We could definitely automate more. But I don’t want to. I don’t think the world needs more content. I think we already have enough.
That’s interesting, because I would say that the general internet user is actually always waiting for more content. That’s what keeps the system running. There’s no end to it; it’s constant production. What you said about community feels very central. Even though you work with AI, it seems like the community aspect is what really makes the difference. How important are these in-person events for you? When people come together, exchange ideas, and engage with these technologies, what role do they play?
I think precisely because the internet is so fast, and increasingly filled with generated content, these physical or real-time spaces become more important.

There’s this idea about the internet: that more and more of what we see online isn’t actually made by humans, but by bots, agents, or automated systems. Of course, humans are still there, but we’re increasingly outnumbered. So spaces where people actually meet, talk, and exchange ideas in real time become incredibly valuable. You get real opinions, real disagreements, real encounters. And also those moments where you hear something you would never actively search for, something outside your usual bubble. That can shift your perspective or inspire you in unexpected ways. I think more and more people are craving that. Being online all the time can also be quite isolating.
The full program and registration: www.creativedaysvienna.at
How are digital technologies shaping the future of cultural experiences?
Find out on 20 and 21 May 2026!
Creative Days Vienna 2026 brings together creatives, cultural practitioners, and critical thinkers who want to actively shape the future of culture, society, and the digital world. Held on May 20–21 as part of ViennaUP, the event connects international creatives to explore how digital technologies and AI shape culture, design, and media. It features an (Un)Conference at ORF-Funkhaus, as well as workshops and tours focusing on sustainable, equitable digital futures. Participants will discuss how media, communities, and platforms can be reimagined in the context of AI, social media, and digital transformation. Insights will be provided by Nadia Piet (AIxDESIGN), Calum Bowden (TRUST), and Christie Morgan (SOFTER). In the evening, keynotes by Shumon Basar and Günseli Yalcinkaya will follow. The focus will be on the stories and images we associate with new technologies, as well as their impact on culture and society.
Nadia Piet is an independent researcher, designer, and organizer focused on critical AI, digital culture, technology, and participatory practices. She is co-founder of AIxDESIGN alongside Ploipailin Flynn and serves as its creative director. AIxDESIGN is a global community and decentralized research lab that has nourished critical, creative, and communal approaches to AI since 2018. Currently, she is researcher-in-residence at the Netherlands Film Academy, faculty at ELISAVA Barcelona, and a mentor at NEW INC, and works on freelance and self-initiated projects that explore how we shape technology and how it shapes us in return. www.nadiapiet.com | www.aixdesign.co
