
Conversation


@ds5t5 ds5t5 commented Sep 25, 2023

/claim #77

reference PR: ggml-org/llama.cpp#3329


ds5t5 commented Sep 25, 2023

@olegklimov please review and feel free to test. Inference is extremely fast thanks to the work from llama.cpp.

@teleprint-me

@ds5t5

It is really fast! Nice work!

olegklimov pushed a commit that referenced this pull request Feb 17, 2025
* wip(patch show apply): fetch the patch data from the lsp

* fix(patchResult type guard): add proper type checks for patchResult.

* feat(pin open): open an unsaved file in the ide.

* refactor(pin): use a stricter regexp for detecting the end of the markdown.

* feat(pins): add apply button and share chunks cache with diffs.

* feat(pins): allow the ide to reset the diff cache when applying / saving diffs

* ui(markdown): limit pin messages to only assistant messages.

* ui(pin buttons): wrap buttons above long 📍 message

* chore(pins): remove todos and hide show button in web host

* refactor(pin): lazily fetch chunks and show warning / error message if result doesn't work.

* ui(pin): move callout to below the button

* chore(dev max age for dev tools): decrease to 500

* refactor(pin warning): reuse diff warning callout

* refactor(pin callout): add click handler and timeout props.
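The `fix(patchResult type guard)` commit above names a standard TypeScript technique: a user-defined type predicate that checks a value's shape at runtime instead of relying on an unchecked cast. A minimal sketch follows; the interface name and fields are hypothetical, assumed for illustration only (the real `patchResult` type lives in the refact codebase).

```typescript
// Hypothetical shape of a patch result; field names are assumptions,
// not the actual refact types.
interface PatchResult {
  chunks: string[];
  applied: boolean;
}

// Type guard: narrows `unknown` to PatchResult with proper runtime checks,
// so downstream code gets type safety without an `as` cast.
function isPatchResult(value: unknown): value is PatchResult {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    Array.isArray(v.chunks) &&
    v.chunks.every((c) => typeof c === "string") &&
    typeof v.applied === "boolean"
  );
}

// Usage: data from the LSP arrives as `unknown`; the guard narrows it.
function handleResponse(data: unknown): string[] {
  if (isPatchResult(data)) {
    return data.chunks; // typed as string[] inside this branch
  }
  return []; // warning / error path per the pin-warning commits above
}
```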
@JegernOUTT (Member)

@mitya52 do we still need this?


mitya52 commented Jun 2, 2025

Refact model is obsolete, so we'll close this.

mitya52 closed this Jun 2, 2025
