Create an end-to-end LLM App — No Coding Required!
Welcome to your hands-on journey to building production-ready AI systems.
In this guide, you'll learn how to design, build, and deploy a complete LLM application — from backend to frontend — using tools like Cursor, v0, GitHub, and Vercel.
This workshop teaches you how to:
- Build an AI-powered app with no manual coding
- Use GitFlow for structured collaboration
- Test and deploy your app live on the web
- Understand how AI systems are structured in production
Prompt 1: Initial Backend Setup
Create me a super simple but pretty FastAPI LLM app (using GPT-4o) with a simple HTML UI, and run it with uvicorn.
It will be a Thanksgiving Diet app where I select if I am vegetarian/vegan/no restrictions
and it returns a simple Thanksgiving recipe I can use for dinner. The user will insert the
OPENAI_API_KEY in the UI.
Use client = OpenAI(api_key=api_key).
Make sure the libraries are compatible with Python 3.13.
Use openai==1.51.0 and use compatible httpx==0.27.0.
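For reference, the backend Cursor generates from this prompt usually looks something like the sketch below. Only the client = OpenAI(api_key=api_key) pattern and the model choice come from the prompt; the /api/recipe route, request fields, and the omission of the HTML page are illustrative assumptions.

```python
# Minimal sketch of the kind of FastAPI backend Cursor may generate from Prompt 1.
# The /api/recipe route and field names are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()

class RecipeRequest(BaseModel):
    diet: str      # "vegetarian", "vegan", or "no restrictions"
    api_key: str   # pasted into the UI by the user, per the prompt

@app.post("/api/recipe")
def get_recipe(req: RecipeRequest):
    client = OpenAI(api_key=req.api_key)  # the pattern required by the prompt
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"Suggest a simple Thanksgiving dinner recipe for a {req.diet} diet.",
        }],
    )
    return {"recipe": completion.choices[0].message.content}
```

You would run a file like this with uvicorn main:app --reload, assuming Cursor saved it as main.py.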
Prompt 2: Vercel Deployment Configuration
Add the files needed to deploy to Vercel. Make sure there is no proxies argument error
(an error caused by a dependency conflict between the OpenAI library and its
underlying HTTP library, httpx, on Vercel). Note: the OpenAI SDK 1.x+ doesn't
accept a proxies parameter directly.
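For context, this error usually appears when a newer httpx release (0.28+) gets installed on Vercel and the OpenAI SDK tries to pass it a proxies argument it no longer accepts. Pinning httpx==0.27.0, as in Prompt 1, is the straightforward fix; another common workaround (an assumption, not something the prompt requires) is to hand the SDK a pre-built HTTP client:

```python
# Sketch of one workaround for the proxies TypeError on Vercel: pass the SDK a
# pre-built httpx client so it never constructs one with a proxies argument.
import httpx
from openai import OpenAI

def make_client(api_key: str) -> OpenAI:
    return OpenAI(api_key=api_key, http_client=httpx.Client())
```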
Prompt 1: Initial Frontend Setup
Create me a Next.js frontend for a Thanksgiving diet LLM app with an image of a turkey, where I select if I am
vegetarian/vegan/no restrictions and it returns a simple Thanksgiving recipe. The user will
insert the OPENAI_API_KEY and NEXT_PUBLIC_API_URL directly in the UI! See the backend
attached. Make sure to use React 18.3.1 and that the frontend uses vaul 1.1.1.
Include all the necessary files so I can deploy it to Vercel. Use openai==1.51.0.
Prompt 2: Connect Frontend & Backend
Connect the backend and the frontend (the frontend is created with v0 in the frontend
folder) so I can run them with the npm and uvicorn commands. Make sure the
frontend uses React 18.3.1 and vaul 1.1.1.
Finally, make sure the PostCSS configuration is not missing the required Tailwind CSS plugin.
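When the two run side by side (Next.js on port 3000, FastAPI on port 8000), the backend also needs CORS enabled so the browser will let the frontend call it. Cursor typically adds this as part of the integration; a minimal sketch, with the ports assumed from the default npm and uvicorn setups:

```python
# Minimal CORS setup so the Next.js dev server can call the FastAPI backend.
# Ports are the defaults for npm run dev and uvicorn; adjust if yours differ.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],  # the Next.js dev server
    allow_methods=["*"],
    allow_headers=["*"],
)
```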
- Log into your GitHub account.
- Click New Repository → name your project (e.g., my-llm-app).
- Clone it locally using SSH keys (see our October 1 VibeCoding session for setup details):
git clone git@github.com:your-username/my-llm-app.git
Paste the GitFlow rules we defined in our previous session into your README or team wiki.
These ensure smooth collaboration between main, develop, and feature branches.
Example:
main → production-ready code
develop → integration branch
feature/ → new features
hotfix/ → urgent fixes
- Open your GitHub repo inside Cursor.
- Use the “Create backend” prompt in Cursor (Prompt 1 above).
- Generate endpoints for your app (e.g., /api/analyze, /api/results); see the sketch after this list.
- Commit and push the backend to GitHub.
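The endpoint names above are only examples; a plausible shape for them is sketched below. The two route paths come from this guide, while the request model and in-memory store are assumptions.

```python
# Hypothetical implementation of the two example endpoints. The route paths come
# from this guide; the request model and in-memory store are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
RESULTS: list[dict] = []  # in-memory store, fine for a workshop demo

class AnalyzeRequest(BaseModel):
    text: str

@app.post("/api/analyze")
def analyze(req: AnalyzeRequest):
    result = {"input": req.text, "length": len(req.text)}
    RESULTS.append(result)
    return result

@app.get("/api/results")
def results():
    return RESULTS
```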
Use v0 to visually design your frontend:
- Add buttons, inputs, and chat windows
- Connect them to backend endpoints later
- Export when finished
In your project folder, create and activate a Python virtual environment:
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
Export your v0 project and move it into your local repo folder:
/my-llm-app
├── main.py
├── frontend/
└── README.md
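The pip install step above assumes a requirements.txt at the repo root (next to main.py). A plausible pin set for this workshop is shown below; only the openai and httpx versions come from the prompts, the rest are assumptions.

```
fastapi
uvicorn
openai==1.51.0
httpx==0.27.0
```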
Prompt Cursor:
“Connect my frontend with my backend API endpoints so I can run the app with npm and uvicorn commands.”
Cursor will automatically create the integration code for you.
Before pushing to GitHub, make sure everything runs smoothly:
- Run your backend locally (adjust api:app to match your backend module, e.g. main:app if your file is main.py):
uvicorn api:app --reload
- Run your frontend (depending on your setup):
npm run dev
- Test your app locally by visiting the URL (e.g., http://localhost:3000).
- Check if the frontend correctly calls your backend endpoints.
✅ Pro Tip: Use the browser console or network tab to see if requests and responses are working as expected.
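If you prefer testing the backend from the terminal instead of the browser, a quick smoke test with httpx works too. The /api/recipe route and payload below mirror the backend sketch earlier in this guide, so adjust them to whatever Cursor actually generated:

```python
# Quick command-line smoke test for the backend; the endpoint and payload are
# assumptions matching the earlier sketch, not the exact generated code.
import httpx

resp = httpx.post(
    "http://localhost:8000/api/recipe",
    json={"diet": "vegetarian", "api_key": "sk-..."},
    timeout=60,
)
print(resp.status_code)
print(resp.json())
```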
Commit and push your code:
git add .
git commit -m "Add full LLM app"
git push origin develop
Then merge via Pull Request to main for production.
- Go to vercel.com.
- Import your GitHub repo.
- Configure environment variables if needed (see the sketch after these steps).
- Click Deploy — your app will go live in minutes!
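If you do set OPENAI_API_KEY as a Vercel environment variable rather than pasting it into the UI, the backend can fall back to it. This fallback is an assumption, not something the prompts above ask for; a sketch:

```python
# Optional pattern: prefer the key pasted in the UI, fall back to the Vercel
# environment variable. This behavior is an assumption, not part of the prompts.
import os
from openai import OpenAI

def make_client(user_key: str | None = None) -> OpenAI:
    key = user_key or os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("No OpenAI API key provided")
    return OpenAI(api_key=key)
```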
You’ve just created a production-ready AI system — end-to-end — without writing code manually!
💡 Next step: Add custom logic, authentication, or analytics for your MVP.
- Cursor — AI-powered coding environment
- v0.dev — visual frontend builder
- GitHub SSH Keys Setup Guide
- Vercel Deployment Docs
Created for the AI Makerspace “VibeCoding” series
by Katerina Gawthorpe ✨