Upload a picture → get four styled looks (elegant, streetwear, sporty, business casual). In this walkthrough we’ll build FashionistaAI with Cloudinary GenAI, React (Vite) on the frontend, and a tiny Node.js/Express backend for secure uploads.
Repo: Cloudinary-FashionistaAI
What you’ll build
- A React app that:
  - uploads an image to your Node backend
  - asks Cloudinary GenAI to swap tops/bottoms
  - replaces the background
  - lets you recolor the top or bottom on click
- A Node.js server that securely uploads files to Cloudinary using the official SDK.
Demo (what it looks like)
The background adapts to the look; each tile is a different style:
- Elegant
- Streetwear
- Sporty
- Business casual
Prerequisites
- Node 18+ and npm
- A free Cloudinary account (GenAI features may need to be enabled depending on your plan)
- Basic React/TypeScript familiarity (optional but helpful)
1) Set up Cloudinary
- Create/Login → Settings → Product Environments.
- Confirm your Cloud name (keep it consistent across tools).
- Settings → Product Environments → API Keys → Generate New API Key. Save: Cloud name, API key, API secret (secret stays on the server).
2) Bootstrap the React app (Vite)
```bash
# Create a Vite + React + TS app
npm create vite@latest fashionistaai -- --template react-ts
cd fashionistaai

# Frontend deps
npm i axios @cloudinary/react @cloudinary/url-gen

# Dev tooling
npm i -D @vitejs/plugin-react

# Backend deps (we'll use one package.json for both)
npm i express cors cloudinary multer streamifier dotenv

# Nice-to-have dev deps
npm i -D nodemon concurrently
```
3) Configure Vite dev proxy (frontend → backend)
Create/replace `vite.config.js` (note: the react-ts template scaffolds `vite.config.ts`; either extension works, just keep one config file):
```js
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

export default defineConfig({
  plugins: [react()],
  server: {
    port: 3000,
    proxy: {
      '/api': {
        target: 'http://localhost:8000',
        changeOrigin: true,
        secure: false,
      },
    },
  },
})
```
This forwards any `/api/*` calls to the Express server on port 8000.
4) Environment variables
Create .env in the project root:
```bash
# Server (Node) reads these:
CLOUDINARY_CLOUD_NAME=YOUR_CLOUD_NAME
CLOUDINARY_API_KEY=YOUR_API_KEY
CLOUDINARY_API_SECRET=YOUR_API_SECRET

# Frontend (Vite) only reads variables prefixed with VITE_:
VITE_CLOUDINARY_CLOUD_NAME=YOUR_CLOUD_NAME
```
Never expose `CLOUDINARY_API_SECRET` on the frontend. That’s why we’re using a server.
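A small sanity check you might add at server startup (a hypothetical helper, not in the repo's server.js): fail fast when a required variable is missing, instead of hitting a confusing Cloudinary auth error later.

```javascript
// Hypothetical startup check (not part of the original server.js):
// fail fast when a required server-side variable is missing.
function requireEnv(env, keys) {
  const missing = keys.filter((k) => !env[k])
  if (missing.length > 0) {
    throw new Error(`Missing env vars: ${missing.join(', ')}`)
  }
  return keys.map((k) => env[k])
}

// In server.js you would call it once after loading dotenv, e.g.:
// requireEnv(process.env, [
//   'CLOUDINARY_CLOUD_NAME',
//   'CLOUDINARY_API_KEY',
//   'CLOUDINARY_API_SECRET',
// ])
console.log(requireEnv({ CLOUDINARY_CLOUD_NAME: 'demo' }, ['CLOUDINARY_CLOUD_NAME']))
```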
5) Node/Express backend (server.js)
Create server.js in the project root. You can find the complete server file here.
Let's explain the main parts of the server.js file.
```js
cloudinary.config({
  secure: true,
  cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
  api_key: process.env.CLOUDINARY_API_KEY,
  api_secret: process.env.CLOUDINARY_API_SECRET,
})
```
This connects your server to your Cloudinary account by pulling your credentials from `.env`.
```js
const storage = multer.memoryStorage()
const upload = multer({
  storage,
  limits: { fileSize: 10 * 1024 * 1024 }, // 10MB
  fileFilter: (_req, file, cb) => {
    const ok = /image\/(png|jpe?g|webp)/i.test(file.mimetype)
    cb(ok ? null : new Error('Only PNG/JPG/WEBP images are allowed'), ok)
  },
})
```
Remember the multer dependency we installed? Time to put it to work. Multer stores uploaded files in memory rather than on disk, which is faster and simpler for this flow. It limits uploads to 10 MB and accepts only PNG, JPG, and WEBP images.
Why in memory? Because Cloudinary works great with streams, so there’s no need to save files to disk first.
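To make the filter concrete, here is the same MIME regex exercised against a few content types (a standalone sketch, runnable outside the server):

```javascript
// The same MIME check server.js uses in its Multer fileFilter.
const IMAGE_MIME = /image\/(png|jpe?g|webp)/i

const accepted = ['image/png', 'image/jpeg', 'image/jpg', 'image/webp']
const rejected = ['application/pdf', 'image/gif', 'text/html']

console.log(accepted.every((m) => IMAGE_MIME.test(m))) // true
console.log(rejected.some((m) => IMAGE_MIME.test(m)))  // false
```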
```js
const uploadStream = cloudinary.uploader.upload_stream(
  { resource_type: 'image' },
  (error, result) => {
    if (error) {
      console.error('Cloudinary error:', error)
      return res.status(500).json({ error: error.message })
    }
    res.json(result)
  }
)

streamifier.createReadStream(req.file.buffer).pipe(uploadStream)
```
Let's talk about uploading images to Cloudinary using the Node.js SDK.
The Cloudinary Node.js SDK expects either a file path or a stream. Since Multer stored the image in memory, we convert the buffer to a readable stream and pipe it directly into Cloudinary’s `upload_stream()`. When Cloudinary finishes, it invokes the callback, and we return the result to the frontend.
package.json scripts
Open package.json and add these scripts:
```json
{
  "type": "module",
  "scripts": {
    "dev": "vite",
    "server": "nodemon server.js",
    "start:both": "concurrently -k \"npm:server\" \"npm:dev\""
  }
}
```
Now you can run both servers with:
```bash
npm run start:both
```
(Or use two terminals: `npm run server` and `npm run dev`.)
6) React UI (src/App.tsx)
Below is a drop-in, TypeScript-friendly version that keeps your original logic but tightens types, separates file vs. Cloudinary images, and reads the cloud name from env. You can find the complete code for App.tsx here.
Now, let's dive into the UI code!
Creating the clothing styles
```ts
type StyleKey = 'top' | 'bottom'

type StyleConfig = {
  top: string
  bottom: string
  background: string
  type: string
}

const STYLES: StyleConfig[] = [
  { top: 'suit jacket for upper body', bottom: 'suit pants for lower body', background: 'office', type: 'business casual' },
  { top: 'sport tshirt for upper body', bottom: 'sport shorts for lower body', background: 'gym', type: 'sporty' },
  { top: 'streetwear shirt for upper body', bottom: 'streetwear pants for lower body', background: 'street', type: 'streetwear' },
  { top: 'elegant tuxedo for upper body', bottom: 'elegant tuxedo pants for lower body', background: 'gala', type: 'elegant' },
]
```
The StyleKey type indicates whether the user is recoloring the top or bottom of an outfit, while StyleConfig represents a complete look, including the top, bottom, background, and a human-readable label. The STYLES array acts as a preset wardrobe: each entry specifies what clothing should replace the user’s upper and lower garments, the general background aesthetic for the image, and the style’s name. Each of these presets becomes one of the “cards” displayed in the grid, such as Business Casual, Sporty, Streetwear, or Elegant.
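To see how a `StyleKey` drives the lookup, here's a minimal sketch (the `promptFor` helper is illustrative, not part of the app) using a copy of the presets trimmed to two entries:

```javascript
// Trimmed mirror of the STYLES presets from App.tsx.
const STYLES = [
  { top: 'suit jacket for upper body', bottom: 'suit pants for lower body', background: 'office', type: 'business casual' },
  { top: 'sport tshirt for upper body', bottom: 'sport shorts for lower body', background: 'gym', type: 'sporty' },
]

// A StyleKey ('top' | 'bottom') indexes straight into a preset; this is the
// same pattern applyRecolor uses later to pick the prompt to recolor.
function promptFor(styleIndex, part) {
  return STYLES[styleIndex][part]
}

console.log(promptFor(0, 'top'))    // 'suit jacket for upper body'
console.log(promptFor(1, 'bottom')) // 'sport shorts for lower body'
```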
Submitting to the Backend and Getting the Base Image
```ts
async function handleSubmit() {
  setError(null)
  setLooks([])
  setLoadingStatus([])
  if (!file) return

  try {
    setLoading(true)
    const data = new FormData()
    data.append('image', file)

    const resp = await axios.post('/api/generate', data, {
      headers: { 'Content-Type': 'multipart/form-data' },
    })

    const publicId = resp.data.public_id as string
    const base = cld.image(publicId).resize(fill().width(508).height(508))
    setBaseImg(base)
    createLooks(publicId)
  } catch (err: any) {
    console.error(err)
    setError(err?.message ?? 'Upload failed')
  } finally {
    setLoading(false)
  }
}
```
When the user submits an image, the function begins by clearing any previous errors and removing any previously generated looks. It then builds a FormData object containing the uploaded image and sends it via a POST request to the `/api/generate` endpoint you created earlier. The backend uploads this file to Cloudinary and returns Cloudinary’s full response, including the crucial `public_id`. Once the upload succeeds, the frontend creates a new CloudinaryImage based on that ID, resizes it to 508×508 for consistent display, and stores it in baseImg. With the base image ready, the function then calls `createLooks(publicId)` to generate all of the AI-styled outfit variations.
Preloading Derived Images (Poll Until Ready)
```ts
function preload(img: CloudinaryImage, index: number, attempts = 0) {
  const url = img.toURL()
  const tag = new Image()

  tag.onload = () =>
    setLoadingStatus(prev => {
      const copy = [...prev]
      copy[index] = false
      return copy
    })

  tag.onerror = async () => {
    // 423 means "still deriving" on Cloudinary
    try {
      const r = await fetch(url, { method: 'HEAD' })
      if (r.status === 423 && attempts < 6) {
        setTimeout(() => preload(img, index, attempts + 1), 2000 * (attempts + 1))
        return
      }
    } catch {}
    setError('Error loading image. Please try again.')
    setLoadingStatus(prev => {
      const copy = [...prev]
      copy[index] = false
      return copy
    })
  }

  tag.src = url
}
```
The preload function works by converting a CloudinaryImage into a URL and creating a temporary Image() object to load it in the background. If the image loads successfully, it marks that particular look as finished by updating loadingStatus[index] to false. If the load fails, the function sends a HEAD request to check whether Cloudinary is still generating the derived asset, indicated by a 423 status code. When this happens—and as long as the maximum number of attempts hasn’t been reached—it retries after an increasing delay. If the error persists for reasons other than derivation, the function sets an error message and stops retrying.
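Spelling out the retry math: `2000 * (attempts + 1)` is a linear backoff, so with the cap of six attempts the app polls for up to roughly 42 seconds before giving up:

```javascript
// preload's retry schedule: the delay grows linearly with each attempt.
function backoffDelays(maxAttempts = 6, baseMs = 2000) {
  return Array.from({ length: maxAttempts }, (_, attempt) => baseMs * (attempt + 1))
}

console.log(backoffDelays()) // [ 2000, 4000, 6000, 8000, 10000, 12000 ]

const totalMs = backoffDelays().reduce((sum, d) => sum + d, 0)
console.log(totalMs) // 42000 ms of polling in the worst case
```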
Creating the Different Looks (Generative Effects)
```ts
function createLooks(publicId: string) {
  const imgs = STYLES.map(style => {
    const i = cld.image(publicId)
    i.effect(generativeReplace().from('shirt').to(style.top))
    i.effect(generativeReplace().from('pants').to(style.bottom))
    i.effect(generativeBackgroundReplace()) // optional: prompt with your background
    i.effect(generativeRestore())
    i.resize(fill().width(500).height(500))
    return i
  })
  setLooks(imgs)
  setLoadingStatus(imgs.map(() => true))
  imgs.forEach((img, idx) => preload(img, idx))
}
```
To generate each outfit variation, the app creates a new CloudinaryImage from the same publicId and applies a series of generative effects: it replaces the shirt with the style’s top, swaps the pants for the style’s bottom, updates the background, and restores the image to remove artifacts. After resizing the result, the image is added to the looks array and marked as loading. The app then calls preload() on each look to determine when Cloudinary has finished processing it. This is the step where the “magic wardrobe” is created, turning a single uploaded image into multiple styled variations.
For example, it replaces the shirt with the style's designated top using `generativeReplace().from('shirt').to(style.top)` and swaps the pants via `generativeReplace().from('pants').to(style.bottom)`, so prompts like “suit jacket for upper body” or “tuxedo pants” transform the clothing.
Recolor Modal Logic
```ts
function openRecolorModal(index: number) {
  setSelectedLookIndex(index)
  setOpenModal(true)
}

function applyRecolor() {
  const clone = [...looks]
  const img = clone[selectedLookIndex]
  if (!img) return

  setLoadingStatus(prev => {
    const copy = [...prev]
    copy[selectedLookIndex] = true
    return copy
  })
  setOpenModal(false)

  // Recolor only the chosen item for the chosen look
  img.effect(generativeRecolor(STYLES[selectedLookIndex][selectedItem], color))
  setLooks(clone)
  preload(img, selectedLookIndex)
}
```
Recoloring works by letting the user click any generated look, which opens a modal and stores the index of the selected outfit. Inside the modal, the user chooses whether to recolor the top or bottom and picks a new color. When they confirm, applyRecolor() marks that look as loading again, applies a new generativeRecolor() transformation using the appropriate prompt and selected hex value, updates the looks array, and triggers preload() to wait for Cloudinary to finish generating the updated image. In essence, the app layers an additional AI transformation on top of an already styled outfit.
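One guard you might add before calling `generativeRecolor` (hypothetical, not in the repo): validate the color-picker value so a malformed string never reaches the transformation URL:

```javascript
// Hypothetical input guard: accept 3- or 6-digit hex, with or without '#'.
const HEX = /^#?([0-9a-f]{3}|[0-9a-f]{6})$/i

function isValidHex(color) {
  return HEX.test(color)
}

console.log(isValidHex('#ff8800')) // true
console.log(isValidHex('fff'))     // true
console.log(isValidHex('reddish')) // false
```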
Add some polish to your app with our CSS or your own.
7) How it works (quick tour)
- Upload: the file is sent to `POST /api/generate`. The server uses `cloudinary.uploader.upload_stream` to store it and returns the `public_id`.
- Transform:
  - `generativeReplace().from('shirt').to(style.top)`
  - `generativeReplace().from('pants').to(style.bottom)`
  - `generativeBackgroundReplace()` (optionally prompt it to steer the scene)
  - `generativeRestore()` for quality
- Recolor: on a generated tile, open a modal and apply `generativeRecolor(<item>, <hex>)`.
- 423 handling: the first request for a derived image may return HTTP 423 while Cloudinary is still generating it. The `preload` helper retries with backoff; for heavy use, consider preparing eager transformations on upload.
8) Testing locally
```bash
# Install (already done if you followed along)
npm i

# Run both servers
npm run start:both

# Frontend: http://localhost:3000
# Backend:  http://localhost:8000
```
Production notes (optional but recommended)
- Secrets: keep `CLOUDINARY_API_SECRET` server-side only; use environment variables on your host.
- Upload presets: lock down transformations and content rules with a Cloudinary upload preset.
- Limits: add rate limiting to your API if you open it to the public.
- Validation: keep the Multer `fileFilter` and `limits` in place; consider scanning/validating uploads.
- Caching/CDN: Cloudinary URLs are CDN-backed; reusing the same `public_id` improves cache hits.
- Accessibility: provide helpful `alt` text for generated images (the example includes captions).
Wrap‑up
FashionistaAI shows how a small React app plus Cloudinary’s GenAI can turn one image into four on‑brand looks with background changes and easy recoloring. Fork it, tweak the prompts, and ship your own AI‑powered try‑on experience.
If you build something with this, drop a link—DEV readers will want to see it!
