Beyond E-Commerce: What Else You Can Build with AI + Supabase + Vector Search
Modern AI tooling has quietly changed how software gets built. The video you just watched demonstrates something subtle but powerful: you can now go from idea to full-stack app by treating AI as a collaborator, not just a helper.
While the demo focuses on a fashion e-commerce site with AI-powered related-items search, the real takeaway isn’t e-commerce at all. It’s the methodology.
Once you understand the pattern, you can reuse it to build dozens of different products—often in days instead of months.
This article breaks down that methodology and explores what else you can build with it.
The Core Methodology (Abstracted)
At a high level, the workflow looks like this:
1. Define structured data
Decide what your “items” are: products, documents, people, media, listings, etc.
2. Generate or enrich that data with AI
Use AI to:
Write descriptions
Normalize fields
Generate embeddings for semantic similarity
3. Store everything in Supabase
Postgres tables for structured data
Storage buckets for images/files
pgvector for embeddings
4. Query semantically, not just with filters
“Show me items like this”
“Find similar content”
“Recommend related things”
5. Ship a simple frontend
AI generates most UI scaffolding
You focus on wiring data and polish
Once this loop clicks, the domain barely matters.
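Here is a minimal TypeScript sketch of that loop. The table name (items), the embedding model, and the match_items RPC are illustrative placeholders, not something taken from the video:

import { createClient } from "@supabase/supabase-js";
import OpenAI from "openai";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_ROLE_KEY!);
const openai = new OpenAI();

// Steps 1-3: enrich an item with an embedding and store it in a pgvector column
async function ingest(item: { id: string; name: string; description: string }) {
  const emb = await openai.embeddings.create({
    model: "text-embedding-3-small", // assumed model; any embedding model works
    input: `${item.name}. ${item.description}`,
  });
  await supabase.from("items").upsert({ ...item, embedding: emb.data[0].embedding });
}

// Step 4: query semantically, i.e. "show me items like this"
async function findSimilar(text: string) {
  const emb = await openai.embeddings.create({ model: "text-embedding-3-small", input: text });
  return supabase.rpc("match_items", { query_embedding: emb.data[0].embedding, match_count: 5 });
}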
Why This Pattern Is So Powerful
Traditional apps rely heavily on:
Manual tagging
Rigid filters
Complex recommendation logic
This approach replaces all of that with semantic understanding.
Instead of asking:
“Does this item match category X and tag Y?”
You ask:
“Is this item similar to the one the user cares about?”
That single shift unlocks an entire class of applications.
App Ideas You Can Build with the Same Approach
Below are examples that map almost perfectly to the same architecture shown in the video.
1. Content Discovery Platforms
Knowledge Base Search
Ingest docs, markdown files, PDFs
Chunk content and generate embeddings
Semantic search beats keyword search out of the box
Ideal for internal tools or customer support
Swap: products → documents
Keep: embeddings + vector similarity
Media Recommendation App
Movies, podcasts, YouTube videos, articles
AI-generated summaries
“More like this” recommendations
Minimal manual curation required
2. Marketplaces & Directories
Freelance or Talent Marketplace
Profiles with bios, skills, portfolios
AI-generated embeddings from resumes
Match freelancers to jobs semantically
Better than keyword skill matching
Real Estate or Listing Platform
Listings with descriptions and images
AI summarizes listings
“Homes similar to this one”
Much more natural discovery experience
3. Education & Learning Tools
Course Recommendation Engine
Courses, lessons, learning outcomes
Semantic matching to user goals
AI-generated previews or summaries
Great for edtech MVPs
Study or Flashcard App
Concepts stored as embeddings
Automatically group related ideas
Suggest related topics to review
Powerful personalization with minimal logic
4. Creative & Asset Libraries
Stock Asset Platform
Images, icons, design assets
AI-generated metadata and embeddings
Semantic search like “minimal dark UI background”
Storage + vectors are a perfect fit
Music or Sound Effect Library
Mood-based discovery instead of genres
AI-generated descriptions
“Sounds like this” recommendations
5. SaaS & Internal Tools
Customer Support Assistant
Ingest help docs and FAQs
Semantic search for agents or users
AI-suggested responses
Huge productivity gains with minimal code
Internal Knowledge Explorer
Company docs, PRDs, decisions
Semantic lookup instead of folders
Great example of RAG-style apps without complexity
Why This Is Ideal for MVPs and Indie SaaS
This methodology works especially well because:
AI handles the scaffolding
You don’t waste time writing boilerplate.
Supabase removes infrastructure friction
Auth, database, storage, and vectors are handled for you.
You focus on product, not plumbing
The fastest way to test real ideas.
Once you build one app this way, the next one is mostly a find-and-replace exercise:
Products → documents
Related items → related anything
Images → files
Descriptions → summaries
The Bigger Insight
The biggest lesson from the video isn’t how to build an e-commerce site.
It’s this:
AI + vector databases turn “search and discovery” into a universal primitive.
If your app involves:
Finding things
Recommending things
Grouping things
Exploring things
This methodology already fits.
Step-by-Step Guide
Use this as a follow-along checklist for the video. Each step includes:
What you’re building
Terminal commands
Copy/paste prompts you can use with Claude/ChatGPT
Gotchas the video ran into (so you don’t)
What you’ll build (the architecture)
Claude generates dummy product JSON
A Node/TypeScript script:
loops through products
generates images via DALL·E 2
uploads images to Supabase Storage
creates OpenAI embeddings
inserts rows into Supabase Postgres (pgvector)
A Next.js frontend:
lists products by category
shows product details
shows “related items” first by category, then via vector similarity RPC
1) Outline the build + generate dummy JSON data (0:29)
Prompt (Claude)
“Generate dummy JSON data for a fashion e-commerce app.
Return an array of 50 items with fields: id, name, description, category, price, and image_prompt (prompt to generate a product image). Make categories realistic.”
Do
Copy the JSON into a file called:
scripts/fashion-data.json
Tip: Keep image_prompt clean and specific (avoid words that could trigger image safety filters).
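For reference, one item in that file might look like this (the field names come from the prompt above; the values are made up):

{
  "id": "1",
  "name": "Linen Summer Shirt",
  "description": "A lightweight, breathable linen shirt in off-white.",
  "category": "Shirts",
  "price": 49.99,
  "image_prompt": "Studio photo of an off-white linen shirt on a plain light background"
}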
2) Create the Node script (1:17)
Goal: loop over JSON and for each item:
generate image (DALL·E 2)
upload to Supabase Storage
generate embeddings (OpenAI)
upsert row into Supabase table
Prompt (Claude)
“Write a Node.js script that reads fashion-data.json, loops items, generates an image using OpenAI DALL·E 2, uploads it to Supabase Storage bucket fashion-images, generates embeddings using OpenAI embeddings API, and inserts the item metadata + image_url + embedding into a Supabase Postgres table named fashion_items.”
3) Convert script to TypeScript + initialize project (around 1:36–3:00)
The video switches the script to TypeScript.
Terminal
mkdir -p scripts
cd scripts
npm init -y
Install dependencies (match the video’s package choices)
npm i @supabase/supabase-js openai dotenv
npm i -D typescript ts-node @types/node
Initialize TypeScript
npx tsc --init
Prompt (Claude)
“Convert this Node script to TypeScript. Use dotenv for env vars, add types for the product JSON structure, and keep it runnable via npm run start.”
4) Add environment variables (around 2:40–4:00)
Create .env inside scripts/.
.env keys used in the video
OPENAI_API_KEY=...
SUPABASE_URL=...
SUPABASE_SERVICE_ROLE_KEY=...
The video uses the Service Role key for the script since it runs locally in a trusted environment (this key bypasses RLS).
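A small sketch of how the script can read these (key names match the .env above; the client setup itself is an assumption, not copied from the video):

import "dotenv/config";
import { createClient } from "@supabase/supabase-js";
import OpenAI from "openai";

// Service Role key is only safe here because the script runs locally, never in a browser
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_ROLE_KEY!);
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });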
5) Create required folders + files (around 3:00–6:00)
Do
Ensure you have:
scripts/images/
scripts/fashion-data.json
scripts/src/index.ts (or whichever TS entry file you use)
Terminal
mkdir -p images src
6) Create the Supabase project + Storage bucket (6:37)
Do in Supabase Dashboard
Create a new project (video names it “Claude Commerce”)
Create a public storage bucket named:
fashion-images (note the hyphen)
Gotcha from the video: bucket name mismatch caused broken image URLs later.
7) Create the Postgres schema (table + pgvector) (around 6:45–7:30)
Prompt (Claude)
“Write SQL to create a fashion_items table with columns:
id (text or uuid)
name, description, category
price
image_url (text)
embedding (vector)
Also include indexes helpful for category and vector similarity, and enable pgvector.”
Run in Supabase SQL editor (example shape)
Enable pgvector
Create the table with a vector column for embeddings
The video also tweaks SQL to:
ensure the vector extension is enabled
store image_url instead of “image_path”
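A minimal sketch of that SQL, assuming a text id and 1536-dimension OpenAI embeddings (the video's exact DDL may differ):

create extension if not exists vector;

create table if not exists fashion_items (
  id text primary key,
  name text not null,
  description text,
  category text,
  price numeric,
  image_url text,
  embedding vector(1536)
);

-- helpful indexes: one for category filters, one for approximate vector search
create index on fashion_items (category);
create index on fashion_items using ivfflat (embedding vector_cosine_ops) with (lists = 100);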
8) Fix the OpenAI client usage in the script (5:03–8:56)
Claude generated a few incorrect method calls. The video fixes:
OpenAI import
Image generation method → images.generate
Embeddings method → embeddings.create
Correctly reading response.data[...]
Prompt (Claude)
“Here’s my TS file + errors. Fix the OpenAI client usage for:
images.generate (DALL·E 2, size 1024)
embeddings.create
Return a corrected TypeScript file.”
Common fixes (from the video)
Use the correct import style for the OpenAI library
DALL·E: use the "dall-e-2" model
Ensure you read the returned URL from the correct response field
9) Upload images correctly (buffer vs stream) (8:56–10:10)
The script initially errors when uploading.
The fix in the video: use a file buffer (not a read stream) when uploading to Supabase.
Prompt (Claude)
“I’m uploading generated images to Supabase Storage and getting upload errors. Update the script to upload using a file buffer, set the correct content-type, and return a public URL.”
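A sketch of the buffer-based upload, assuming the URL returned by DALL·E and the fashion-images bucket (the helper name downloadAndUpload is hypothetical):

import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_ROLE_KEY!);

async function downloadAndUpload(imageUrl: string, fileName: string) {
  // Download the generated image and hold it as a Buffer (not a read stream)
  const res = await fetch(imageUrl);
  const buffer = Buffer.from(await res.arrayBuffer());

  const { error } = await supabase.storage
    .from("fashion-images")
    .upload(fileName, buffer, { contentType: "image/png", upsert: true });
  if (error) throw error;

  // The bucket is public, so the public URL can go straight into the table
  return supabase.storage.from("fashion-images").getPublicUrl(fileName).data.publicUrl;
}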
10) Make the script idempotent: use upsert (around 8:30–9:30)
So you can re-run without crashing.
Prompt (Claude)
“Update the script so both:
storage uploads
database inserts
use an upsert-safe approach so I can rerun without duplicate errors.”
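For the database side, a sketch of an upsert keyed on id (column names follow the table from step 7; types are illustrative):

import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_ROLE_KEY!);

async function saveItem(
  item: { id: string; name: string; description: string; category: string; price: number },
  imageUrl: string,
  embedding: number[]
) {
  const { error } = await supabase.from("fashion_items").upsert(
    { ...item, image_url: imageUrl, embedding },
    { onConflict: "id" } // rerunning the script updates rows instead of failing on duplicates
  );
  if (error) throw error;
}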
11) Run the script (first with 1 item) (8:56)
Terminal
npm run start
Debug checklist (video’s flow)
If it says it can’t find an image path:
ensure scripts/images/ exists
Confirm in Supabase:
Storage bucket has 1.png (or similar)
Table has 1 row
12) Run full dataset (target 50) and handle safety filter (around 10:10)
The video hits a DALL·E safety error around item ~43 and stops at 42 items.
What to do
Remove/adjust the “flagged” item’s image_prompt
Re-run (since your script is now upsert-safe)
Prompt (Claude)
“Here’s the item prompt that caused DALL·E to reject. Rewrite it to be safe while keeping the product concept the same.”
13) Generate the frontend with Claude (10:51)
They ask for:
top page (catalog)
item details page
Claude outputs a Next.js app and then they convert to App Router.
Prompt (Claude)
“Create a beautiful fashion e-commerce site using Next.js App Router.
Pages:
Home: category filter + product grid
Product details: big image, description, price, related items section
Use Supabase JS to fetch from fashion_items and show images from image_url.”
14) Create the Next.js app locally (13:20)
Terminal
npx create-next-app@latest app
cd app
npm i @supabase/supabase-js
Create .env.local
NEXT_PUBLIC_SUPABASE_URL=...
NEXT_PUBLIC_SUPABASE_ANON_KEY=...
Frontend uses Anon key, not service role.
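A typical client setup the generated app can share (the file path lib/supabase.ts is just a convention, not from the video):

// lib/supabase.ts
import { createClient } from "@supabase/supabase-js";

export const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY! // anon key only; RLS applies in the browser
);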
15) Paste Claude’s generated files (13:20)
Follow the file list Claude gave you (video says ~5–6 files created/edited):
app/layout.tsx (metadata tweaks)
app/page.tsx (home)
components (cards, grids, etc.)
product details route (e.g., app/products/[id]/page.tsx)
16) Fix “distinct categories” error with a view (around 13:20–14:30)
Supabase JS doesn’t have a .distinct() helper like some ORMs.
Solution: create a view returning unique categories.
Prompt (Supabase AI Assistant or Claude)
“Create a SQL view that returns distinct categories from fashion_items.”
Example shape
create or replace view fashion_categories as
select distinct category
from fashion_items
order by category;
Then query that view in your app.
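Querying the view then looks like any other table (a sketch, assuming the shared supabase client above):

// inside an async server component or route handler
const { data: categories, error } = await supabase
  .from("fashion_categories")
  .select("category");
if (error) throw error;
// categories: [{ category: "Dresses" }, { category: "Shirts" }, ...]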
17) Fix broken images (bucket name / image_url) (around 14:30–16:00)
The video hit:
uploaded to fashion-images, but the code generated URLs using fashion_images
or the wrong column name (image_path vs image_url)
Fix options
Fix your code to use the correct bucket name
If DB values are already wrong, run an UPDATE to repair the URLs
Prompt (Supabase AI Assistant)
“Update fashion_items.image_url to replace underscores with hyphens so URLs point at the correct storage bucket.”
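If only the bucket segment of the URL is wrong, a one-line repair like this works (hypothetical; check your actual URLs first):

update fashion_items
set image_url = replace(image_url, 'fashion_images', 'fashion-images');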
18) Related items v1: same category (baseline) (before 16:22)
This is a simple approach:
query 4 items in same category as current product
Prompt (Claude)
“Implement getRelatedItems that fetches 4 items from the same category excluding the current item.”
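A sketch of that baseline query with Supabase JS (the function name getRelatedItems comes from the prompt above; it assumes the shared client):

async function getRelatedItems(category: string, currentId: string) {
  const { data, error } = await supabase
    .from("fashion_items")
    .select("*")
    .eq("category", category)
    .neq("id", currentId)
    .limit(4);
  if (error) throw error;
  return data;
}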
19) Related items v2: semantic similarity with pgvector + RPC (16:22)
Now switch to the “AI powered” part:
Use the embedding column
Create a SQL function (RPC) to return nearest neighbors
Prompt (Supabase AI Assistant)
“Create a Postgres SQL function get_related_items that takes:
current_item_id
query_embedding
and returns the most similar items (limit 4), excluding the current item, using pgvector similarity.”
Important detail from the video: The function must return SETOF fashion_items (not a single row), otherwise the Supabase client code will error.
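A sketch of such a function using cosine distance; the exact body in the video may differ, but the SETOF return type is the key part:

create or replace function get_related_items(
  input_id text,
  input_embedding vector(1536)
)
returns setof fashion_items
language sql stable
as $$
  select *
  from fashion_items
  where id <> input_id
  order by embedding <=> input_embedding
  limit 4;
$$;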
Call it from Next.js
Use:
supabase.rpc('get_related_items', { ... })
You’ll pass:
input_id (current product id)
input_embedding (current product embedding)
Supabase returns advanced Postgres types as strings; treat the embedding parameter accordingly.
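Putting it together on the product details page (a sketch, assuming the product row was already fetched and includes its embedding):

const { data: related, error } = await supabase.rpc("get_related_items", {
  input_id: product.id,
  // pgvector columns come back from Postgres as strings; pass the value through as-is
  input_embedding: product.embedding,
});
if (error) throw error;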
20) Final QA checklist (end)
✅ Home page loads products
✅ Product details loads correct item
✅ Images render (URLs correct)
✅ Related items show:
baseline category method, then
semantic method via RPC + embeddings
Reusable prompts (copy/paste bank)
Data
“Generate 50 realistic fashion products as JSON with safe image prompts.”
Script
“Write a TS script to generate DALL·E images, upload to Supabase Storage, create embeddings, upsert into Postgres.”
SQL
“Create fashion_items with pgvector + indexes.”
RPC
“Create get_related_items(current_id, query_embedding) returning SETOF fashion_items using vector similarity.”
Frontend
“Generate Next.js App Router ecommerce UI with Supabase queries and product detail page.”
Final Thought
E-commerce is just the easiest demo.
The real opportunity is realizing that most modern apps are just collections of items with relationships—and AI is now very good at understanding those relationships for you.
Build once. Reuse everywhere.