This project is an educational demonstration of deploying a sentiment analysis model, using PyTorch (TorchScript) for the model, FastAPI for serving, and GoLang (BubbleTea) for the interface. It aims to teach how to preprocess data, deploy machine learning models, containerize applications, and build a full pipeline for model inference.
- Machine Learning Model: Uses an LSTM model trained in PyTorch and exported with TorchScript (a minimal export sketch follows this list).
- Data Processing: Implements preprocessing and tokenization with spaCy and custom processing functions.
- API Deployment: The model is served through a FastAPI backend.
- TUI Interface: A lightweight GoLang text-based UI (TUI) built using BubbleTea.
- Containerized Application: The entire application runs with Docker Compose.
- End-to-End Example: Covers data preprocessing, model inference, and API integration.
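To make the export step concrete, here is a minimal sketch of scripting an LSTM classifier with TorchScript. The architecture, dimensions, and file name below are illustrative assumptions, not the project's actual code (see server/models/architecture/model.py for that):

```python
# Minimal sketch of exporting an LSTM classifier with TorchScript.
# The class, dimensions, and file name are assumptions for illustration.
import torch
import torch.nn as nn

class SentimentLSTM(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 2)  # two classes: negative / positive

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        embedded = self.embedding(token_ids)
        _, (hidden, _) = self.lstm(embedded)
        return self.fc(hidden[-1])  # logits from the final hidden state

model = SentimentLSTM(vocab_size=20_000)
model.eval()

# torch.jit.script compiles the module so the FastAPI server can load it
# at startup without needing the original training code.
scripted = torch.jit.script(model)
scripted.save("script_lstm.pt")
```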
.
├── README.md
├── app # GoLang TUI Interface
│ ├── Dockerfile
│ ├── api.go
│ ├── config.go
│ ├── go.mod
│ ├── go.sum
│ ├── main.go
│ ├── tui.go
├── server # FastAPI Backend for Model Inference
│ ├── Dockerfile
│ ├── api
│ │ ├── __init__.py
│ │ ├── dependencies.py
│ │ ├── inference.py
│ │ ├── schemas.py
│ ├── core
│ │ ├── pipelines
│ │ │ ├── __init__.py
│ │ │ ├── data_processor.py
│ │ ├── services
│ │ │ ├── __init__.py
│ │ │ ├── model_service.py
│ │ ├── utils.py
│ ├── main.py
│ ├── models
│ │ ├── architecture
│ │ │ ├── model.py
│ │ │ ├── model_config.yaml
│ │ ├── weights
│ │ │ ├── script_lstm.pt
│ │ │ ├── sentiment_lstm.pt
│ │ │ ├── vocab.txt
│ ├── requirements.txt
│ ├── setup.sh
│ ├── test_api.py
├── docker-compose.yml
This method ensures all dependencies are installed automatically.
If you haven’t installed Docker yet, follow the instructions for your OS:
- Windows & macOS: Install Docker Desktop
- Linux: Follow the official guide for Docker Engine
(Ensure you also install Docker Compose if it's not included.)
Once Docker is installed, open a terminal and run:
git clone https://github.com/cmsolson75/SentimentClassification.git
cd SentimentClassification
docker compose build
docker compose run app
This will:
- Build the necessary Docker containers.
- Start the FastAPI backend.
- Launch the TUI (Text User Interface) in your terminal.
If you prefer to run the app manually, you’ll need to set up the environment yourself.
- Install Python 3.10+ from python.org
- Install Go from golang.org
- Navigate to the server directory: cd server
- Install dependencies:
pip install -r requirements.txt
python -m spacy download en_core_web_sm
- Run the FastAPI server:
uvicorn main:app --host 0.0.0.0 --port 8000 --reload
- Navigate to the app directory: cd ../app
- Install dependencies:
go mod tidy
- Run the app:
go run main.go
Once running, the FastAPI server is accessible at http://localhost:8000.
- Description: Returns the sentiment classification of a given text.
- Request Body:
{ "text": "I love this!" }
- Response:
{ "label": "Positive", "confidence": 0.98 }
- PyTorch / TorchScript – Model training & inference
- FastAPI – REST API framework
- spaCy – Tokenization & NLP processing
- GoLang (BubbleTea) – TUI implementation
- Docker / Docker Compose – Containerization
- YAML & Shell Scripts – Deployment automation
This project is designed as a learning resource for:
- Building and deploying machine learning models
- Preprocessing and cleaning NLP data
- Serving models via an API
- Building CLI-based UIs with Go
- Containerizing applications with Docker
Once you have the project running, try extending it with any of these improvements:
- Deploy to a Cloud Provider:
- Deploy the FastAPI backend to AWS, GCP, or Azure.
- Since the model runs on CPU, no GPU instance is required.
- Use Google Cloud Run, AWS Lambda, or an EC2 instance for hosting.
- The API is already containerized, so deployment should be straightforward.
- Implement a Request Queue:
- Use Redis + Celery to queue user requests, preventing API overload.
- This will help scale the system for higher traffic.
- Swap the LSTM Model with a Transformer (a minimal sketch follows this list):
- Replace the current model with a HuggingFace Transformer (e.g., distilbert-base-uncased).
- Update the preprocessing pipeline to handle tokenization for transformers.
- NOTE: This might require you to deploy with a GPU.
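As a starting point for the transformer swap, here is a minimal sketch using the HuggingFace transformers pipeline API. It is not a drop-in replacement for this project's model service; it uses a publicly available fine-tuned checkpoint rather than fine-tuning distilbert-base-uncased yourself, and the transformer's own tokenizer replaces the spaCy preprocessing:

```python
# Sketch of replacing the LSTM with a HuggingFace transformer.
# Uses an already fine-tuned sentiment checkpoint; the bare
# distilbert-base-uncased model would first need fine-tuning.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

result = classifier("I love this!")[0]
print(result)  # e.g. {'label': 'POSITIVE', 'score': 0.99}
```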