A Python SDK for interacting with the Dify Service-API. This library provides a fluent, type-safe interface for building AI-powered applications using Dify's API services including chat, completion, knowledge base, and workflow features.
This project is based on https://github.com/QiMington/dify-oapi, refactored and updated to support the latest Dify API.
- Multiple API Services: Chat, Completion, Knowledge Base (39 APIs), Workflow, and Core Dify APIs
- Builder Pattern: Fluent, chainable interface for constructing requests
- Sync & Async Support: Both synchronous and asynchronous operations
- Streaming Responses: Real-time streaming for chat and completion
- Type Safety: Comprehensive type hints with Pydantic validation
- File Upload: Support for images and documents
- Modern HTTP Client: Built on httpx for reliable API communication
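The fluent builder pattern listed above can be illustrated with a minimal, self-contained sketch. `ChatBody` and `ChatBodyBuilder` here are hypothetical stand-ins for illustration only, not the SDK's actual classes:

```python
# Minimal illustration of a fluent, chainable builder.
# ChatBody / ChatBodyBuilder are hypothetical, not SDK classes.
from dataclasses import dataclass


@dataclass(frozen=True)
class ChatBody:
    query: str
    user: str
    response_mode: str


class ChatBodyBuilder:
    def __init__(self) -> None:
        self._query = ""
        self._user = ""
        self._response_mode = "blocking"

    def query(self, q: str) -> "ChatBodyBuilder":
        self._query = q
        return self  # returning self is what enables chaining

    def user(self, u: str) -> "ChatBodyBuilder":
        self._user = u
        return self

    def response_mode(self, m: str) -> "ChatBodyBuilder":
        self._response_mode = m
        return self

    def build(self) -> ChatBody:
        # build() produces an immutable request object
        return ChatBody(self._query, self._user, self._response_mode)


body = ChatBodyBuilder().query("Hello").user("user-123").build()
```

Each setter returns the builder itself, so requests read as a single chained expression ending in `build()`, mirroring the SDK snippets below.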
pip install dify-oapi2
Requirements: Python 3.10+
Dependencies:
- pydantic (>=1.10,<3.0.0) - Data validation and settings management
- httpx (>=0.24,<1.0) - Modern HTTP client
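Pydantic is what backs the SDK's type-safe models: fields are validated at construction time. A quick sketch of the idea, using a hypothetical model rather than an actual SDK class:

```python
# What Pydantic validation buys you: fields are checked when the model
# is constructed. ChatMessage is a hypothetical model, not an SDK class.
from pydantic import BaseModel, ValidationError


class ChatMessage(BaseModel):
    query: str
    user: str


# Valid input constructs normally
msg = ChatMessage(query="Hello", user="user-123")

# A missing required field raises ValidationError instead of failing later
try:
    ChatMessage(query="Hello")  # "user" is required, so this raises
    raised = False
except ValidationError:
    raised = True
```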
from dify_oapi.api.chat.v1.model.chat_request import ChatRequest
from dify_oapi.api.chat.v1.model.chat_request_body import ChatRequestBody
from dify_oapi.client import Client
from dify_oapi.core.model.request_option import RequestOption
# Initialize client
client = Client.builder().domain("https://api.dify.ai").build()
# Build request
req_body = (
    ChatRequestBody.builder()
    .inputs({})
    .query("What can Dify API do?")
    .response_mode("blocking")
    .user("user-123")
    .build()
)
req = ChatRequest.builder().request_body(req_body).build()
req_option = RequestOption.builder().api_key("your-api-key").build()
# Execute request
response = client.chat.v1.chat.chat(req, req_option, False)
print(response.answer)
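Rather than hard-coding the API key as above, production code typically reads it from the environment (the same DOMAIN / CHAT_KEY variables the test targets use). A minimal, SDK-agnostic sketch:

```python
import os


def load_settings() -> tuple[str, str]:
    """Read the Dify domain and API key from the environment.

    DOMAIN falls back to the public endpoint; CHAT_KEY must be set.
    """
    domain = os.getenv("DOMAIN", "https://api.dify.ai")
    api_key = os.getenv("CHAT_KEY", "")
    if not api_key:
        raise RuntimeError("CHAT_KEY environment variable is not set")
    return domain, api_key
```

The returned values feed straight into `Client.builder().domain(...)` and `RequestOption.builder().api_key(...)`.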
# Enable streaming for real-time responses
req_body = (
    ChatRequestBody.builder()
    .query("Tell me a story")
    .response_mode("streaming")
    .user("user-123")
    .build()
)
req = ChatRequest.builder().request_body(req_body).build()
response = client.chat.v1.chat.chat(req, req_option, True)
# Process streaming response
for chunk in response:
    print(chunk, end="", flush=True)
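Each streamed chunk is a partial piece of the answer, so a common pattern is to display chunks as they arrive while also accumulating them for later use. Simulated here with a plain generator standing in for the SDK's streaming response:

```python
from typing import Iterator


def fake_stream() -> Iterator[str]:
    # Stand-in for a streaming SDK response; yields partial answer text.
    yield from ["Once ", "upon ", "a ", "time."]


parts: list[str] = []
for chunk in fake_stream():
    print(chunk, end="", flush=True)  # live display
    parts.append(chunk)               # accumulate for later use

full_answer = "".join(parts)
```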
import asyncio

async def async_chat():
    response = await client.chat.v1.chat.achat(req, req_option, False)
    print(response.answer)

asyncio.run(async_chat())
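The main payoff of the async API is running independent requests concurrently with `asyncio.gather`. Sketched here with a stub coroutine in place of the SDK's `achat` call:

```python
import asyncio


async def fake_achat(query: str) -> str:
    # Stand-in for the SDK's async call (e.g. client.chat.v1.chat.achat);
    # swap in the real coroutine in application code.
    await asyncio.sleep(0)
    return f"answer to: {query}"


async def main() -> list[str]:
    # gather() schedules both requests at once and returns results in order
    return await asyncio.gather(
        fake_achat("What is Dify?"),
        fake_achat("What is a workflow?"),
    )


answers = asyncio.run(main())
```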
- Interactive conversations with AI assistants
- File upload support (images, documents)
- Conversation and message history management
- Streaming and blocking response modes
- Message Processing: Send messages and control responses
- Annotation Management: Create, update, and manage annotations
- Audio Processing: Text-to-audio conversion
- Feedback System: Collect and analyze user feedback
- File Upload: Support for document and media files
- Application Info: Configuration and metadata retrieval
- Dataset Management: CRUD operations for datasets
- Document Management: Upload, process, and manage documents
- Segment Management: Fine-grained content segmentation
- Metadata & Tags: Custom metadata and knowledge type tags
- Retrieval: Advanced search and retrieval functionality
- Automated workflow execution
- Parameter configuration
- Status monitoring
- Essential Dify service functionality
Explore comprehensive examples in the examples directory:
- Blocking Response - Standard chat interactions
- Streaming Response - Real-time streaming chat
- Conversation Management - Managing chat history
- Basic Completion - Text generation
- List Datasets - Dataset management
For detailed examples and usage patterns, see the examples README.
- Python 3.10+
- Poetry
# Clone repository
git clone https://github.com/nodite/dify-oapi2.git
cd dify-oapi2
# Setup development environment (installs dependencies and pre-commit hooks)
make dev-setup
This project uses modern Python tooling:
- Ruff: Fast Python linter and formatter
- MyPy: Static type checking
- Pre-commit: Git hooks for code quality
- Pylint: Additional code analysis
# Format code
make format
# Lint code
make lint
# Fix linting issues
make fix
# Run all checks (lint + type check)
make check
# Install pre-commit hooks
make install-hooks
# Run pre-commit hooks manually
make pre-commit
# Set environment variables
export DOMAIN="https://api.dify.ai"
export CHAT_KEY="your-api-key"
# Run tests
make test
# Run tests with coverage
make test-cov
# Configure PyPI tokens (one-time setup)
poetry config http-basic.testpypi __token__ <your-testpypi-token>
poetry config http-basic.pypi __token__ <your-pypi-token>
# Build package
make build
# Publish to TestPyPI (for testing)
make publish-test
# Publish to PyPI (maintainers only)
make publish
dify-oapi2/
├── dify_oapi/              # Main SDK package
│   ├── api/                # API service modules
│   │   ├── chat/           # Chat API
│   │   ├── completion/     # Completion API
│   │   ├── dify/           # Core Dify API
│   │   ├── knowledge_base/ # Knowledge Base API (39 APIs)
│   │   └── workflow/       # Workflow API
│   ├── core/               # Core functionality
│   │   ├── http/           # HTTP transport layer
│   │   ├── model/          # Base models
│   │   └── utils/          # Utilities
│   └── client.py           # Main client interface
├── docs/                   # Documentation
├── examples/               # Usage examples
├── tests/                  # Test suite
└── pyproject.toml          # Project configuration
- Project Overview - Architecture and technical details
- Completion APIs - Complete completion API documentation
- Knowledge Base APIs - Complete dataset API documentation
- Examples - Usage examples and patterns
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes with tests
- Ensure code quality (ruff format, ruff check, mypy)
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- PyPI Package: https://pypi.org/project/dify-oapi2/
- Source Code: https://github.com/nodite/dify-oapi2
- Dify Platform: https://dify.ai/
- Dify API Docs: https://docs.dify.ai/
Keywords: dify, nlp, ai, language-processing