# Real-Time Collaborative Code Editor with AI-Assisted Debugging

This repository contains the implementation of a real-time collaborative code editor with AI-assisted debugging, built with FastAPI on the backend and React on the frontend. The platform lets multiple developers collaborate on code in real time (like Google Docs for code) and provides AI-powered debugging suggestions using Qwen 2.5 7B running locally via Ollama.
## Table of Contents

- Features
- Requirements
- Folder Structure
- Setup Instructions
- Running the Application
- API Documentation
- Testing
- Docker Setup (Optional)
- Contributing
- License
## Features

- **Real-Time Collaboration**
  - Multiple users can edit the same code file simultaneously.
  - Changes are synced in real time using WebSockets.
  - Live cursors and highlights for each user.
- **AI-Assisted Debugging**
  - Integrates Qwen 2.5 7B (via Ollama) to analyze code locally.
  - Provides real-time suggestions for syntax errors, potential bugs, and performance improvements.
  - Users can accept or reject AI suggestions.
- **User Management**
  - User registration and login.
  - Role-based access control (e.g., owner, collaborator).
- **Scalability and Performance**
  - Redis for caching and RabbitMQ for message queuing.
  - Optimized database queries and rate limiting for the AI service.
- **Security**
  - Authentication and authorization using JWT tokens.
  - Input sanitization to prevent injection attacks.
- **Bonus Features**
  - React frontend for visualizing the code editor and AI suggestions.
  - Optional Git-like version control for code files.
## Requirements

- Python 3.9+
- Node.js (for the frontend)
- PostgreSQL (for the database)
- Redis (for caching and real-time updates)
- RabbitMQ (optional, for message queuing)
- Docker (optional, for containerization)
- Ollama installed locally for running Qwen 2.5 7B

Key libraries:

- **Backend:** FastAPI, SQLAlchemy, Pydantic, Uvicorn, Redis, RabbitMQ
- **Frontend:** React, Socket.IO, CodeMirror, Axios
## Folder Structure

```
real-time-code-editor/
├── app/                      # Backend (FastAPI)
│   ├── main.py               # Main entry point
│   ├── models.py             # Database models
│   ├── schemas.py            # Pydantic schemas
│   ├── crud.py               # CRUD operations
│   ├── websocket.py          # WebSocket implementation
│   ├── ai_debugger.py        # AI integration (Qwen 2.5 7B via Ollama)
│   ├── auth.py               # Authentication and authorization
│   ├── config.py             # Configuration settings
│   └── utils/                # Utility functions
├── frontend/                 # Frontend (React)
│   ├── public/               # Static assets
│   ├── src/                  # React source code
│   │   ├── components/       # React components
│   │   ├── App.js            # Main application component
│   │   └── index.js          # Entry point
│   ├── package.json          # Dependencies
│   └── README.md             # Frontend documentation
├── migrations/               # Database migrations
├── requirements.txt          # Python dependencies
├── .env                      # Environment variables
└── README.md                 # Project documentation
```
## Setup Instructions

### Backend

1. **Install dependencies:**

   ```shell
   pip install -r requirements.txt
   ```

2. **Set up PostgreSQL:**
   - Create a database named `code_editor`.
   - Update the `.env` file with your PostgreSQL credentials:

     ```
     DATABASE_URL=postgresql://user:password@localhost/code_editor
     ```

3. **Set up Redis:**
   - Start Redis locally or use a hosted instance.

4. **Install Ollama and Qwen 2.5 7B:**
   - Install Ollama by following the instructions on Ollama's official website.
   - Pull the Qwen 2.5 7B model:

     ```shell
     ollama pull qwen2.5:7b
     ```

5. **Update `.env`:**
   - Add the following line:

     ```
     OLLAMA_MODEL=qwen2.5:7b
     ```

6. **Run migrations:**
   - Use Alembic or SQLAlchemy to create the tables in the database.
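Putting the steps above together, a complete `.env` might look like the following. Only `DATABASE_URL` and `OLLAMA_MODEL` come from this README; `REDIS_URL` and `SECRET_KEY` are assumed names — adjust them to whatever `app/config.py` actually reads:

```
DATABASE_URL=postgresql://user:password@localhost/code_editor
OLLAMA_MODEL=qwen2.5:7b
# Assumed additional settings (names not confirmed by the repo):
REDIS_URL=redis://localhost:6379/0
SECRET_KEY=change-me
```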
### Frontend

1. **Navigate to the frontend directory:**

   ```shell
   cd frontend
   ```

2. **Install dependencies:**

   ```shell
   npm install
   ```

3. **Start the development server:**

   ```shell
   npm start
   ```
## Running the Application

Start the FastAPI server:

```shell
uvicorn app.main:app --reload
```

The backend will run at `http://localhost:8000`.

Start the React development server (from the `frontend/` directory):

```shell
npm start
```

The frontend will run at `http://localhost:3000`.
## API Documentation

Access the Swagger UI at `http://localhost:8000/docs`. It documents all APIs for user management, code file management, and AI debugging.
## Testing

Run backend unit and integration tests with `pytest`:

```shell
pytest
```

For frontend testing, use tools like Jest or React Testing Library.
## Docker Setup (Optional)

1. **Build and start the containers:**

   ```shell
   docker-compose up --build
   ```

2. **Access the application:**
   - Backend: `http://localhost:8000`
   - Frontend: `http://localhost:3000`
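A `docker-compose.yml` covering the services listed in the requirements might look like the sketch below. Service names, image tags, and build contexts are assumptions, and the optional RabbitMQ service is omitted for brevity:

```yaml
# Illustrative compose file -- adjust to the project's actual Dockerfiles.
version: "3.8"
services:
  backend:
    build: .
    command: uvicorn app.main:app --host 0.0.0.0 --port 8000
    ports: ["8000:8000"]
    env_file: .env
    depends_on: [db, redis]
  frontend:
    build: ./frontend
    ports: ["3000:3000"]
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: code_editor
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
  redis:
    image: redis:7
```

Note that Ollama typically runs on the host (port 11434), so the backend container would need to reach it via `host.docker.internal` or an added `ollama` service.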
## Contributing

Contributions are welcome! Please follow these steps:

1. Fork the repository.
2. Create a new branch (`git checkout -b feature/your-feature`).
3. Commit your changes (`git commit -m "Add your feature"`).
4. Push to the branch (`git push origin feature/your-feature`).
5. Open a pull request.
## Notes

- The AI model (Qwen 2.5 7B) runs locally via Ollama, ensuring data privacy and eliminating dependence on external APIs.
- To call the model, the backend interacts with the Ollama server through a subprocess or HTTP requests.
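The HTTP route can be sketched as a small helper that posts a prompt to Ollama's local REST endpoint (`/api/generate` on port 11434). The prompt wording and function name here are illustrative; the real `app/ai_debugger.py` may differ:

```python
# Sketch of querying the local Ollama server over HTTP (illustrative).
import requests

def debug_code(code: str, model: str = "qwen2.5:7b") -> str:
    """Ask the local Qwen model for debugging suggestions on a code snippet."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": f"Find bugs in this code and suggest fixes:\n\n{code}",
            "stream": False,  # one JSON object instead of a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]
```

Because the call is plain HTTP, the same helper works unchanged whether Ollama runs on the developer's machine or on a shared host, and it is easy to rate-limit or queue behind RabbitMQ as the Scalability section suggests.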