Creative AI lab

Our lab for stories, images, music and videos

Swan Lab is where ideas start as small chaos and AI helps us organize them. Here, AI is an assistant; people make the final decisions.

📚 Go to Library
Explore the stories that are already online.
SWAN LAB

AI as an assistant, plus four fixed AI models that join us in every experiment.

Stories, images, music and small videos are born here before they go out into the world.

What we do inside the lab

We use AI to support different stages of the creative process. We do not want it to do everything for us; we use it to save time on technical tasks so we can put more heart into the story.

🖼️
Images
We create concept art for covers, key scenes and emotional moments using our four AI models: Arin, Paradise, Bam and Nai.
🎵
Music
We design song prototypes and instrumental tracks for series and projects. We test emotions and vibes before any final production.
🎬
Videos
We take AI generated images and turn them into short videos and teasers ready for social media.
📚
Story and series planning
We use AI to organize ideas, clean text, structure arcs and prepare stories that can later become novels or series.

How we work with AI

This is the general flow we follow in most projects. Sometimes we jump between steps, but we almost always touch all of them.

1. Small story idea
Everything starts with a tiny idea. We use AI to organize messy thoughts and ask for help with spelling, grammar and wording. Then we read the text as humans and repeat if needed.
2. Cover prototype with AI
When the idea has a basic shape, we create concept images to imagine a possible cover. These are prototypes; later we want to work with illustrators who join the team.
3. Song or instrumental prototype
Some series receive their own song. Others only get instrumentals that help us find the emotional tone. Sometimes we create several ideas and keep the one that fits best.
4. Creating scenes with AI
We create visual scenes with AI based on novels, fictional moments or situations we want to test. We always use our four fixed models: Arin as writer, Paradise and Bam as AI singers, and Nai.
5. From image to video
Some images go into tools like Filmora or PowerDirector to become clips. Then we edit on the phone with apps like PowerDirector or Alight Motion. We often use instrumentals created with Suno.ai.
6. Publishing and sharing
When something reaches a version we like, we publish it on the website, on Instagram or through music distribution with DistroKid.

Tools we use inside the lab

The list changes over time. These are some of the tools we use right now.

🖼️ Images
ChatGPT to create and refine prompts; images generated with DALL·E and Gemini. We keep exploring other visual tools.
🎵 Music
Suno.ai for song prototypes, voices and instrumental tracks that support our stories.
🎬 Videos
Filmora and PowerDirector to turn images into video and build teasers. We also try mobile apps like Alight Motion.
📚 Stories
ChatGPT to organize ideas, clean text, experiment with structures and test different versions of scenes.
💻 Web development
ChatGPT and DeepSeek to code, debug and create interactive experiences on the Swan Lab website.

Want to see how this looks in practice?

Visit the library to read some of the stories we are slowly building.

📖 Open Library