@@ -50,37 +50,45 @@ Llama Stack supports two types of external providers:
1. **Remote Providers**: Providers that communicate with external services (e.g., cloud APIs)
2. **Inline Providers**: Providers that run locally within the Llama Stack process

+
+### Provider Specification (Common to both inline and remote providers)
+
+- `provider_type`: The type of the provider to install, remote or inline, e.g., `remote::ollama`
+- `api`: The API this provider implements, e.g., `inference`
+- `config_class`: The full path to the configuration class
+- `module`: The Python module containing the provider implementation
+- `optional_api_dependencies`: List of optional Llama Stack APIs that this provider can use
+- `api_dependencies`: List of Llama Stack APIs that this provider depends on
+- `provider_data_validator`: Optional validator for provider data (see the sketch after this list)
+- `pip_packages`: List of Python packages required by the provider
+
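+As an illustration of `provider_data_validator`, the validator is typically a Pydantic model that the stack uses to validate provider data supplied with a request; the class and field below are a hypothetical sketch, not part of any shipped provider:
+
+```python
+from pydantic import BaseModel
+
+
+class MyProviderDataValidator(BaseModel):
+    # Hypothetical field: an API key passed to the provider as provider data.
+    api_key: str
+```
+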
### Remote Provider Specification

Remote providers are used when you need to communicate with external services. Here's an example for a custom Ollama provider:

```yaml
-adapter:
-  adapter_type: custom_ollama
-  pip_packages:
-  - ollama
-  - aiohttp
-  config_class: llama_stack_ollama_provider.config.OllamaImplConfig
-  module: llama_stack_ollama_provider
+adapter_type: custom_ollama
+provider_type: "remote::ollama"
+pip_packages:
+- ollama
+- aiohttp
+config_class: llama_stack_ollama_provider.config.OllamaImplConfig
+module: llama_stack_ollama_provider
api_dependencies: []
optional_api_dependencies: []
```
-#### Adapter Configuration
+#### Remote Provider Configuration

-The `adapter` section defines how to load and configure the provider:
-
-- `adapter_type`: A unique identifier for this adapter
-- `pip_packages`: List of Python packages required by the provider
-- `config_class`: The full path to the configuration class
-- `module`: The Python module containing the provider implementation
+- `adapter_type`: A unique identifier for this adapter, e.g., `ollama`
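+
+For illustration, here is a minimal sketch of what the class referenced by `config_class` might look like, assuming a Pydantic-based configuration as the `pydantic` dependency elsewhere in this doc suggests; the `url` field and its default are assumptions, not the actual `OllamaImplConfig`:
+
+```python
+from pydantic import BaseModel, Field
+
+
+class OllamaImplConfig(BaseModel):
+    # Assumed field: base URL of the Ollama server the provider connects to.
+    url: str = Field(default="http://localhost:11434")
+```
+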
### Inline Provider Specification

Inline providers run locally within the Llama Stack process. Here's an example for a custom vector store provider:

```yaml
module: llama_stack_vector_provider
+provider_type: inline::llama_stack_vector_provider
config_class: llama_stack_vector_provider.config.VectorStoreConfig
pip_packages:
  - faiss-cpu
@@ -95,12 +103,6 @@ container_image: custom-vector-store:latest # optional
#### Inline Provider Fields

-- `module`: The Python module containing the provider implementation
-- `config_class`: The full path to the configuration class
-- `pip_packages`: List of Python packages required by the provider
-- `api_dependencies`: List of Llama Stack APIs that this provider depends on
-- `optional_api_dependencies`: List of optional Llama Stack APIs that this provider can use
-- `provider_data_validator`: Optional validator for provider data
- `container_image`: Optional container image to use instead of pip packages

## Required Fields
@@ -113,20 +115,17 @@ All providers must contain a `get_provider_spec` function in their `provider` module
from llama_stack.providers.datatypes import (
    ProviderSpec,
    Api,
-    AdapterSpec,
-    remote_provider_spec,
+    RemoteProviderSpec,
)


def get_provider_spec() -> ProviderSpec:
-    return remote_provider_spec(
+    return RemoteProviderSpec(
        api=Api.inference,
-        adapter=AdapterSpec(
-            adapter_type="ramalama",
-            pip_packages=["ramalama>=0.8.5", "pymilvus"],
-            config_class="ramalama_stack.config.RamalamaImplConfig",
-            module="ramalama_stack",
-        ),
+        adapter_type="ramalama",
+        pip_packages=["ramalama>=0.8.5", "pymilvus"],
+        config_class="ramalama_stack.config.RamalamaImplConfig",
+        module="ramalama_stack",
    )
```
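+
+For the inline case, a sketch following the same pattern is shown below, assuming `InlineProviderSpec` is the inline counterpart exported from the same module; the package, class, and field values are hypothetical placeholders:
+
+```python
+from llama_stack.providers.datatypes import Api, InlineProviderSpec, ProviderSpec
+
+
+def get_provider_spec() -> ProviderSpec:
+    # Hypothetical inline provider for a custom vector store.
+    return InlineProviderSpec(
+        api=Api.vector_io,
+        provider_type="inline::my_vector_store",
+        pip_packages=["faiss-cpu"],
+        config_class="my_vector_store.config.VectorStoreConfig",
+        module="my_vector_store",
+    )
+```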
@@ -234,11 +233,10 @@ dependencies = ["llama-stack", "pydantic", "ollama", "aiohttp"]
```yaml
# ~/.llama/providers.d/remote/inference/custom_ollama.yaml
-adapter:
-  adapter_type: custom_ollama
-  pip_packages: ["ollama", "aiohttp"]
-  config_class: llama_stack_provider_ollama.config.OllamaImplConfig
-  module: llama_stack_provider_ollama
+adapter_type: custom_ollama
+pip_packages: ["ollama", "aiohttp"]
+config_class: llama_stack_provider_ollama.config.OllamaImplConfig
+module: llama_stack_provider_ollama
api_dependencies: []
optional_api_dependencies: []
```