1 | 1 | --- |
| 2 | +- &qwen25coder |
| 3 | + name: "qwen2.5-coder-14b" |
| 4 | + url: "github:mudler/LocalAI/gallery/chatml.yaml@master" |
| 5 | + license: apache-2.0 |
| 6 | + tags: |
| 7 | + - llm |
| 8 | + - gguf |
| 9 | + - gpu |
| 10 | + - qwen |
| 11 | + - qwen2.5 |
| 12 | + - cpu |
| 13 | + urls: |
| 14 | + - https://huggingface.co/Qwen/Qwen2.5-Coder-14B |
| 15 | + - https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-GGUF |
| 16 | + description: | |
| 17 | + Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements over CodeQwen1.5: |
| 18 | + |
| 19 | + Significant improvements in code generation, code reasoning, and code fixing. Building on the strong Qwen2.5, we scaled the training tokens up to 5.5 trillion, including source code, text-code grounding data, synthetic data, and more. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with coding abilities matching those of GPT-4o. |
| 20 | + A more comprehensive foundation for real-world applications such as Code Agents, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies. |
| 21 | + Long-context support of up to 128K tokens. |
| 22 | + overrides: |
| 23 | + parameters: |
| 24 | + model: Qwen2.5-Coder-14B.Q4_K_M.gguf |
| 25 | + files: |
| 26 | + - filename: Qwen2.5-Coder-14B.Q4_K_M.gguf |
| 27 | + sha256: 94f277a9ac7caf117140b2fff4e1ccf4bc9f35395b0112f0d0d7c82c6f8d860e |
| 28 | + uri: huggingface://mradermacher/Qwen2.5-Coder-14B-GGUF/Qwen2.5-Coder-14B.Q4_K_M.gguf |
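Each `files` entry pins the exact artifact with a `sha256` alongside its `huggingface://` URI, so a downloaded GGUF can be checked against the gallery before it is loaded. A small illustrative sketch (not part of the gallery; the local path and `sha256_of` helper are hypothetical, the checksum is the one listed above):

```python
import hashlib
from pathlib import Path

# Expected checksum as listed in the gallery entry above.
EXPECTED_SHA256 = "94f277a9ac7caf117140b2fff4e1ccf4bc9f35395b0112f0d0d7c82c6f8d860e"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so large GGUF files never need to fit in RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical local path; adjust to wherever the GGUF was downloaded.
gguf = Path("models/Qwen2.5-Coder-14B.Q4_K_M.gguf")
if sha256_of(gguf) != EXPECTED_SHA256:
    raise SystemExit(f"checksum mismatch for {gguf}")
print(f"{gguf} matches the gallery checksum")
```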
| 29 | +- !!merge <<: *qwen25coder |
| 30 | + name: "qwen2.5-coder-3b-instruct" |
| 31 | + urls: |
| 32 | + - https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct |
| 33 | + - https://huggingface.co/bartowski/Qwen2.5-Coder-3B-Instruct-GGUF |
| 34 | + overrides: |
| 35 | + parameters: |
| 36 | + model: Qwen2.5-Coder-3B-Instruct-Q4_K_M.gguf |
| 37 | + files: |
| 38 | + - filename: Qwen2.5-Coder-3B-Instruct-Q4_K_M.gguf |
| 39 | + sha256: 3da3afe6cf5c674ac195803ea0dd6fee7e1c228c2105c1ce8c66890d1d4ab460 |
| 40 | + uri: huggingface://bartowski/Qwen2.5-Coder-3B-Instruct-GGUF/Qwen2.5-Coder-3B-Instruct-Q4_K_M.gguf |
| 41 | +- !!merge <<: *qwen25coder |
| 42 | + name: "qwen2.5-coder-32b-instruct" |
| 43 | + urls: |
| 44 | + - https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct |
| 45 | + - https://huggingface.co/bartowski/Qwen2.5-Coder-32B-Instruct-GGUF |
| 46 | + overrides: |
| 47 | + parameters: |
| 48 | + model: Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf |
| 49 | + files: |
| 50 | + - filename: Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf |
| 51 | + sha256: 8e2fd78ff55e7cdf577fda257bac2776feb7d73d922613caf35468073807e815 |
| 52 | + uri: huggingface://bartowski/Qwen2.5-Coder-32B-Instruct-GGUF/Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf |
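Because the instruct variants use a YAML merge key against the `&qwen25coder` anchor, only the keys they restate (`name`, `urls`, `overrides`) are replaced; everything else (the chat-template `url`, `license`, `tags`, `description`) is inherited, and the merge is shallow rather than deep. A minimal sketch of how that resolves, assuming PyYAML's standard merge-key handling and a gallery trimmed to the relevant keys:

```python
import yaml

# Trimmed-down copy of the gallery structure above, keeping only the keys
# needed to show how `!!merge <<: *qwen25coder` resolves.
gallery = """
- &qwen25coder
  name: "qwen2.5-coder-14b"
  url: "github:mudler/LocalAI/gallery/chatml.yaml@master"
  license: apache-2.0
  overrides:
    parameters:
      model: Qwen2.5-Coder-14B.Q4_K_M.gguf
- !!merge <<: *qwen25coder
  name: "qwen2.5-coder-3b-instruct"
  overrides:
    parameters:
      model: Qwen2.5-Coder-3B-Instruct-Q4_K_M.gguf
"""

entries = yaml.safe_load(gallery)
instruct = entries[1]

# Keys restated in the merged entry win over the anchor...
assert instruct["name"] == "qwen2.5-coder-3b-instruct"
assert instruct["overrides"]["parameters"]["model"] == "Qwen2.5-Coder-3B-Instruct-Q4_K_M.gguf"
# ...while keys that are not restated (url, license) are inherited from the anchor.
assert instruct["url"] == "github:mudler/LocalAI/gallery/chatml.yaml@master"
assert instruct["license"] == "apache-2.0"
```

Since the merge is shallow, an entry that only wants to change `parameters.model` still has to restate the whole `overrides` mapping, which is exactly what the merged entries above do.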
2 | 53 | - &opencoder |
3 | 54 | name: "opencoder-8b-base" |
4 | 55 | icon: https://github.com/OpenCoder-llm/opencoder-llm.github.io/blob/main/static/images/opencoder_icon.jpg?raw=true |

1118 | 1169 | - filename: calme-3.1-qwenloi-3b.Q5_K_M.gguf |
1119 | 1170 | sha256: 8962a8d1704979039063b5c69fafdb38b545c26143419ec4c574f37f2d6dd7b2 |
1120 | 1171 | uri: huggingface://MaziyarPanahi/calme-3.1-qwenloi-3b-GGUF/calme-3.1-qwenloi-3b.Q5_K_M.gguf |
1121 | | -- !!merge <<: *qwen25 |
1122 | | - name: "qwen2.5-coder-14b" |
1123 | | - urls: |
1124 | | - - https://huggingface.co/Qwen/Qwen2.5-Coder-14B |
1125 | | - - https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-GGUF |
1126 | | - description: | |
1127 | | - Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder has covered six mainstream model sizes, 0.5, 1.5, 3, 7, 14, 32 billion parameters, to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5: |
1128 | | - |
1129 | | - Significantly improvements in code generation, code reasoning and code fixing. Base on the strong Qwen2.5, we scale up the training tokens into 5.5 trillion including source code, text-code grounding, Synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source codeLLM, with its coding abilities matching those of GPT-4o. |
1130 | | - A more comprehensive foundation for real-world applications such as Code Agents. Not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies. |
1131 | | - Long-context Support up to 128K tokens. |
1132 | | - overrides: |
1133 | | - parameters: |
1134 | | - model: Qwen2.5-Coder-14B.Q4_K_M.gguf |
1135 | | - files: |
1136 | | - - filename: Qwen2.5-Coder-14B.Q4_K_M.gguf |
1137 | | - sha256: 94f277a9ac7caf117140b2fff4e1ccf4bc9f35395b0112f0d0d7c82c6f8d860e |
1138 | | - uri: huggingface://mradermacher/Qwen2.5-Coder-14B-GGUF/Qwen2.5-Coder-14B.Q4_K_M.gguf |
1139 | | -- !!merge <<: *qwen25 |
1140 | | - name: "qwen2.5-coder-3b-instruct" |
1141 | | - urls: |
1142 | | - - https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct |
1143 | | - - https://huggingface.co/bartowski/Qwen2.5-Coder-3B-Instruct-GGUF |
1144 | | - description: | |
1145 | | - Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder has covered six mainstream model sizes, 0.5, 1.5, 3, 7, 14, 32 billion parameters, to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5: |
1146 | | - |
1147 | | - Significantly improvements in code generation, code reasoning and code fixing. Base on the strong Qwen2.5, we scale up the training tokens into 5.5 trillion including source code, text-code grounding, Synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source codeLLM, with its coding abilities matching those of GPT-4o. |
1148 | | - A more comprehensive foundation for real-world applications such as Code Agents. Not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies. |
1149 | | - Long-context Support up to 128K tokens. |
1150 | | - overrides: |
1151 | | - parameters: |
1152 | | - model: Qwen2.5-Coder-3B-Instruct-Q4_K_M.gguf |
1153 | | - files: |
1154 | | - - filename: Qwen2.5-Coder-3B-Instruct-Q4_K_M.gguf |
1155 | | - sha256: 3da3afe6cf5c674ac195803ea0dd6fee7e1c228c2105c1ce8c66890d1d4ab460 |
1156 | | - uri: huggingface://bartowski/Qwen2.5-Coder-3B-Instruct-GGUF/Qwen2.5-Coder-3B-Instruct-Q4_K_M.gguf |
1157 | 1172 | - &archfunct |
1158 | 1173 | license: apache-2.0 |
1159 | 1174 | tags: |