How to Use Generative AI in Fashion, 3D, and Creative Production
A step-by-step guide for artists, designers, and creative technologists
Generative AI is changing how fashion concepts are imagined, visualized, prototyped, and experienced. But the real opportunity is not in replacing creative work. It is in extending it.
Across the panel discussion, several creators described AI as a collaborator: something that accelerates ideation, expands visual possibilities, and helps bridge the gap between imagination and execution. One workflow used custom AI models and VFX compositing to transform ordinary catwalk footage into a “meta catwalk.” Others used Runway, EbSynth, CLO 3D, Lens Studio, Unreal, Unity, and conversational AI to create digital garments, AR experiences, and immersive story worlds.
This guide turns those ideas into a clear process you can follow.
Step 1: Start with a creative objective, not a tool
Before opening any AI software, define what you are trying to make.
Ask:
Are you creating a fashion concept?
Reimagining real footage?
Building an AR try-on or digital garment?
Developing an immersive scene or interactive experience?
Generating fabrics, silhouettes, or moodboards?
One of the strongest points from the discussion was that AI works best when it serves a clear artistic or narrative goal. In the catwalk example, the goal was not “use AI because it’s new.” The goal was to elevate a traditional runway show into something that would be visually impossible in a physical fashion show.
Practical output for this step:
Write a one-sentence brief such as:
“I want to transform a standard catwalk clip into a hyper-real fashion sequence with designer-specific looks and AI-generated environments.”
Or:
“I want to turn rough garment sketches into multiple digital fashion directions for AR and gaming.”
Step 2: Choose the right AI workflow for your outcome
The panel described several distinct workflow types.
A. AI for catwalk or filmed footage
Use this when you want to transform real video into stylized fashion imagery while preserving performance and motion. The catwalk workflow used custom-trained models, Stable Diffusion, WarpFusion, and VFX compositing.
B. AI for concept development
Use this when you want fast iteration on garments, silhouettes, materials, moodboards, or styling directions. Josephine Miller described using tools like DALL·E, Midjourney, and Runway for concept generation, then refining ideas further in production tools.
C. AI for digital fashion and AR
Use this when your end result is wearable digital fashion, body-tracked effects, or social-camera experiences. The described workflow involved CLO 3D and Lens Studio, especially for body-tracked Snapchat AR.
D. AI for interactive worlds and storytelling
Use this when you want to turn stories, chatbots, or text prompts into scenes and playable environments. Yuqian Sun described going from chatbot-generated story material to text-to-image or text-to-3D, then into Unreal or Unity for immersive experiences.
Practical output for this step:
Pick one pipeline:
video transformation
concept generation
AR fashion
immersive storytelling
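As a rough illustration, the four pipeline options and the tools the panel associated with each can be captured as a small lookup table. The structure and function names here are illustrative, not something the panel specified:

```python
from enum import Enum

class Pipeline(Enum):
    """The four workflow types described in Step 2."""
    VIDEO_TRANSFORMATION = "video transformation"
    CONCEPT_GENERATION = "concept generation"
    AR_FASHION = "AR fashion"
    IMMERSIVE_STORYTELLING = "immersive storytelling"

# Tools the panel mentioned for each pipeline, per the discussion above.
PIPELINE_TOOLS = {
    Pipeline.VIDEO_TRANSFORMATION: ["Stable Diffusion", "WarpFusion", "VFX compositing"],
    Pipeline.CONCEPT_GENERATION: ["DALL·E", "Midjourney", "Runway"],
    Pipeline.AR_FASHION: ["CLO 3D", "Lens Studio"],
    Pipeline.IMMERSIVE_STORYTELLING: ["conversational AI", "Unreal", "Unity"],
}

def tools_for(pipeline: Pipeline) -> list[str]:
    """Look up a starting toolset for the chosen pipeline."""
    return PIPELINE_TOOLS[pipeline]
```

Picking one entry up front keeps the rest of the steps focused on a single output type.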
Step 3: Build your visual reference base
Generative AI output improves when your reference material is clear and intentional.
Depending on your project, gather:
designer references
brand imagery
moodboards
textures
silhouette references
catwalk stills
material references
environment references
narrative keywords
In the catwalk workflow, Johannes Saam received designer names and reference images, then trained DreamBooth-style models for each designer aesthetic. He also noted that custom training is optional; sometimes prompting alone can establish a strong visual style.
Field Skjellerup described a different approach: building datasets from scraped second-hand clothing imagery and using those to generate new fashion image outputs and fabric ideas.
Practical output for this step:
Create a folder with:
visual inspiration
aesthetic keywords
any source footage
style references for materials, shape, and mood
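If you want to automate the folder setup, a few lines of Python will create the skeleton. The folder names simply mirror the list above and are an assumption, not a required layout:

```python
from pathlib import Path

# Subfolders mirroring the reference categories listed above.
REFERENCE_FOLDERS = [
    "visual_inspiration",
    "aesthetic_keywords",
    "source_footage",
    "style_references",
]

def create_reference_base(project_root: str) -> list[Path]:
    """Create an empty reference-folder skeleton for a new project."""
    root = Path(project_root)
    created = []
    for name in REFERENCE_FOLDERS:
        folder = root / name
        folder.mkdir(parents=True, exist_ok=True)  # safe to re-run
        created.append(folder)
    return created
```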
Step 4: Generate first-round concepts
Now use image-generation tools to create rough directions quickly.
Good use cases:
alternative silhouettes
fabric concepts
styling directions
exaggerated material behavior
impossible garments
scene variations
background ideas
Josephine described using DALL·E, Midjourney, and Runway to rapidly generate variations from sketches and ideas, especially for conceptualization. Her key point was speed: AI can produce realistic idea directions much faster than sketching alone.
How to do it:
Start with a simple prompt describing the garment, mood, material, and silhouette.
Generate multiple versions.
Select only the strongest 5–10 outputs.
Refine prompts rather than chasing perfect results immediately.
Use the outputs as references, not finished work.
Example prompt structure:
garment type
silhouette
material
mood
setting
level of realism
point of view
Example:
Futuristic couture jacket, translucent iridescent fabric, engineered structure, soft metallic folds, editorial fashion photography, high realism, studio lighting
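One way to keep prompts consistent across many generations is to assemble them from the seven fields above. This helper is a minimal sketch of that idea; the field order loosely follows the example prompt, and the function name is mine, not from the panel:

```python
def build_prompt(garment: str, material: str, silhouette: str,
                 mood: str, setting: str, realism: str, lighting: str) -> str:
    """Join the prompt fields into one comma-separated prompt,
    skipping any field left empty."""
    fields = [garment, material, silhouette, mood, setting, realism, lighting]
    return ", ".join(f.strip() for f in fields if f.strip())

# Reproduces the example prompt above from structured fields.
prompt = build_prompt(
    garment="futuristic couture jacket",
    material="translucent iridescent fabric",
    silhouette="engineered structure",
    mood="soft metallic folds",
    setting="editorial fashion photography",
    realism="high realism",
    lighting="studio lighting",
)
```

Because each field is explicit, you can vary one dimension at a time (say, material) while holding the rest of the prompt fixed, which makes comparing outputs much easier.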
Step 5: Decide whether to prompt or train
At this point, choose between two paths:
Option 1: Prompt-only workflow
Best for:
fast ideation
moodboards
early exploration
highly experimental work
This aligns with the point that you do not always need a custom model to produce your own style; prompting may be enough.
Option 2: Custom-trained workflow
Best for:
designer-specific aesthetics
campaign consistency
repeatable outputs
branded visual systems
The catwalk workflow trained DreamBooth models based on designer references to create style-specific outputs that could then be transferred onto live-action footage.
Decision rule:
Use prompting when speed matters.
Use training when consistency matters.
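The decision rule can be written down as a tiny function. The inputs and return strings are illustrative; the underlying logic is just the speed-versus-consistency trade-off stated above:

```python
def choose_workflow(needs_consistency: bool, has_reference_dataset: bool) -> str:
    """Apply the decision rule: train a custom model only when you need
    repeatable, style-consistent outputs and have references to train on;
    otherwise stay with prompting for speed."""
    if needs_consistency and has_reference_dataset:
        return "custom-trained (e.g. a DreamBooth-style model)"
    return "prompt-only"
```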
Step 6: Translate concepts into production assets
This is where AI stops being just inspiration and becomes part of a production workflow.
For video and catwalk work
The described process was:
Take the original catwalk footage
Crop around the model
Use WarpFusion to transfer AI-generated visuals onto the performer while keeping the underlying motion alive
Composite shadows, reflections, and environment changes in post
Generate or replace backgrounds separately, including with Midjourney
Blend scenes and transitions in traditional VFX tools
For digital garments and AR
The described process was:
Generate variations with Runway or other text/image tools
Inpaint or stylize keyframes over the outfit area
Use EbSynth to propagate those changes over the rest of the footage
Refine garment concepts in CLO 3D
Export into Lens Studio for body-tracked AR experiences
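The keyframe-plus-propagation step in both pipelines above follows the same pattern: stylize a sparse set of frames, then spread each look across the frames in between. A small planning helper (my own sketch, independent of any particular tool's API) makes the frame bookkeeping explicit:

```python
def keyframe_plan(total_frames: int, interval: int) -> list[tuple[int, range]]:
    """Pick a keyframe every `interval` frames and pair it with the span of
    frames its stylized look should be propagated across."""
    if total_frames <= 0 or interval <= 0:
        raise ValueError("total_frames and interval must be positive")
    plan = []
    for key in range(0, total_frames, interval):
        # Each keyframe covers itself plus the frames up to the next keyframe.
        span = range(key, min(key + interval, total_frames))
        plan.append((key, span))
    return plan
```

A shorter interval means more manual stylization work but less drift between keyframes; tuning that trade-off is most of the craft in this kind of workflow.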
For immersive scenes and storytelling
The described process was:
Generate narrative material through chatbots or conversational AI
Turn text into images or 3D assets
Import into Unreal or Unity
Build an interactive or immersive environment around the generated content
Step 7: Preserve human creative control
A major theme in the discussion was that good AI workflows are directed, not passive.
Josephine described a loop of:
generate
sketch over
re-input
regenerate
refine again
Her point was that strong AI art is not just “press a button.” It is an iterative back-and-forth between creator and system.
Yuqian Sun made a similar point from a different angle: the creator needs meaningful control and should stay conscious of what they want the system to produce. Otherwise, authorship becomes weak.
Best practice:
do not accept first outputs blindly
edit between generations
add sketches, masks, paintovers, and design notes
treat AI as a system for exploration, not authorship by default
Step 8: Use AI where it adds scale and speed
The panel repeatedly emphasized that AI is valuable because it speeds up iteration and expands options.
Strong uses include:
concept ideation
style variation
visual prototyping
catwalk transformation
garment customization
digital self-expression
interactive storytelling
collaborative worldbuilding
Field also highlighted how generative systems can help brands develop exclusive patterns, references, and early design directions much faster than traditional methods alone.
Where AI is especially useful:
the fuzzy early stage
the “what if?” stage
the variation-heavy stage
the pre-visualization stage
Step 9: Connect fashion with 3D, gaming, and AR
The discussion made clear that fashion is no longer confined to physical garments.
AI-driven fashion increasingly overlaps with:
game engines
AR filters
body tracking
virtual spaces
digital self-expression
non-physical materials and impossible garments
This matters because digital fashion removes many physical constraints:
no material waste
no gravity limits
no manufacturing cost at ideation stage
no dependence on physical sampling for early experiments
Practical application:
Try building a single concept in three outputs:
editorial still
short motion test
AR or game-engine prototype
That gives you a far richer creative system than a flat sketchbook alone.
Step 10: Think about storytelling, not just visuals
One of the most useful ideas in the transcript is that generative AI narrows the gap between imagination and what can be shown. But that makes storytelling more important, not less.
If it is suddenly easy to generate imagery, your differentiation shifts toward:
narrative
worldbuilding
authorship
intent
interaction design
emotional meaning
In practice, ask:
What story is this garment telling?
What world does this look belong to?
What should the viewer feel?
Is the AI output decorative, or meaningful?
This was echoed in the discussion about AI art becoming valid when it carries storytelling, originality, and cultural significance.
Step 11: Be transparent about the process
The discussion also addressed skepticism around AI. One recommended response was transparency.
Josephine argued that creators should openly show how AI is being used, including ethical considerations and the human role in shaping outcomes. Yuqian suggested separating the conversation around the tool, the developer, and the user, since these are often conflated.
What to disclose in your own work:
what part was generated
what part was directed or edited by you
whether you used custom datasets or prompting
what tools were used
what the AI was actually responsible for
This builds trust and helps audiences evaluate the work more fairly.
Step 12: Build a repeatable creative pipeline
Once you have one successful experiment, turn it into a reusable system.
A strong repeatable workflow could look like this:
Repeatable AI fashion pipeline
Define concept
Gather references
Generate visual directions
Select strongest outputs
Refine with prompt edits or paintovers
Convert into motion, garment, AR, or world assets
Composite and polish
Review narrative strength
Export presentation-ready work
Document what worked for the next iteration
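The ten stages above can be sketched as an ordered pipeline of interchangeable step functions. Everything here is a placeholder scaffold to flesh out per project, not a prescribed implementation:

```python
from typing import Callable

# Each stage takes the evolving project state dict and returns it updated.
Stage = Callable[[dict], dict]

def run_pipeline(stages: list[tuple[str, Stage]], state: dict) -> dict:
    """Run stages in order, logging each one so the final step,
    documenting what worked, comes almost for free."""
    state.setdefault("log", [])
    for name, stage in stages:
        state = stage(state)
        state["log"].append(name)
    return state

# The ten stages above as no-op placeholders.
stages = [(name, lambda s: s) for name in [
    "define concept", "gather references", "generate directions",
    "select outputs", "refine", "convert to assets", "composite",
    "review narrative", "export", "document",
]]
```

Swapping a single stage (say, a different generation tool) leaves the rest of the pipeline untouched, which is what makes the workflow repeatable across projects.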
This mirrors the broader spirit of the discussion: AI should not be treated as a one-click gimmick, but as a structured creative layer across ideation, design, visualization, and experience-building.
A simple starter workflow you can use this week
If you want a minimal version to test immediately:
Beginner workflow
Goal: Create one AI-assisted digital fashion concept
Pick one fashion idea
Make a moodboard
Generate 20 AI variations
Select the best 3
Paint over or annotate them
Regenerate with refinements
Bring the best version into CLO 3D or Photoshop
Create one still image and one short motion mockup
Write one paragraph explaining the concept and your process
Share it with clear credits and process notes
That gives you something real, presentable, and learnable.
Final takeaway
The most useful lesson from the transcript is this:
Generative AI is strongest when it extends human taste, direction, and storytelling.
It can help you:
ideate faster
visualize more boldly
prototype impossible ideas
build richer digital fashion experiences
merge fashion with VFX, gaming, AR, and narrative systems
But the quality still comes from the creator’s intent.