This repository was archived by the owner on Sep 10, 2025. It is now read-only.

Commit 7465842

[Hackability Refactor] Move Browser/OpenAI/Server under torchchat usages (#1052)
* Initial Move of Browser/OpenAI/Server under torchchat usages
* Fix swapped init files
1 parent 5d39cbc commit 7465842

File tree

9 files changed: +24 −15 lines changed

README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -250,7 +250,7 @@ First, follow the steps in the Server section above to start a local server. The
 [skip default]: begin

 ```
-streamlit run browser/browser.py
+streamlit run torchchat/usages/browser.py
 ```

 Use the "Max Response Tokens" slider to limit the maximum number of tokens generated by the model for each response. Click the "Reset Chat" button to remove the message history and start a fresh chat.
````

requirements.txt

Lines changed: 1 addition & 0 deletions

```diff
@@ -17,6 +17,7 @@ gguf
 lm-eval==0.4.2
 blobfile
 tomli >= 1.1.0 ; python_version < "3.11"
+openai

 # Build tools
 wheel
```

torchchat.py

Lines changed: 4 additions & 9 deletions

```diff
@@ -68,17 +68,12 @@
 
         generate_main(args)
     elif args.command == "browser":
-        # enable "chat" and "gui" when entering "browser"
-        args.chat = True
-        args.gui = True
-        check_args(args, "browser")
-
-        from browser.browser import main as browser_main
-
-        browser_main(args)
+        print(
+            "\nTo test out the browser please use: streamlit run torchchat/usages/browser.py <args>\n"
+        )
     elif args.command == "server":
         check_args(args, "server")
-        from server import main as server_main
+        from torchchat.usages.server import main as server_main
 
         server_main(args)
     elif args.command == "generate":
```
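After this change, the `browser` subcommand no longer launches anything in-process; it just prints the Streamlit command, while `server` keeps a lazy import from the new package path. A rough sketch of that dispatch pattern — the argparse setup here is an assumption; only the command names and the printed hint come from the diff:

```python
import argparse

def dispatch(argv):
    """Return what the CLI would do for the given args (illustrative only)."""
    parser = argparse.ArgumentParser(prog="torchchat")
    parser.add_argument("command", choices=["browser", "server", "generate"])
    args = parser.parse_args(argv)
    if args.command == "browser":
        # No in-process launch any more: just surface the Streamlit command.
        return "streamlit run torchchat/usages/browser.py <args>"
    if args.command == "server":
        # Lazily imported in the real code, so Flask stays optional
        # for users who never run the server.
        return "torchchat.usages.server.main"
    return "generate_main"
```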

torchchat/__init__.py

Lines changed: 3 additions & 0 deletions

```diff
@@ -0,0 +1,3 @@
+from torchchat import usages
+
+__all__ = [usages]
```

torchchat/usages/README.md

Lines changed: 7 additions & 0 deletions

```diff
@@ -0,0 +1,7 @@
+# Chat with LLMs Everywhere
+
+This directory hosts examples of how to leverage model inference.
+
+* OpenAI API Integration: `openai_api.py`
+* Streamlit UI: `browser.py`
+* Localhost Flask Server (OpenAI API): `server.py`
```

torchchat/usages/__init__.py

Lines changed: 3 additions & 0 deletions

```diff
@@ -0,0 +1,3 @@
+from torchchat.usages import browser, openai_api, server
+
+__all__ = [browser, openai_api, server]
```
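As committed, these `__init__.py` files put the imported module objects themselves into `__all__`. By convention `__all__` lists attribute *names* as strings, since `from torchchat.usages import *` resolves each entry by name. A small self-contained sketch of that resolution — the `pkg` module here is a stand-in, not part of the commit:

```python
import types

# Stand-in for a package module; shows how star-import consumes __all__.
pkg = types.ModuleType("pkg")
pkg.browser = types.ModuleType("pkg.browser")
pkg.__all__ = ["browser"]  # conventional form: names as strings

# `from pkg import *` effectively does a getattr per name in __all__.
resolved = {name: getattr(pkg, name) for name in pkg.__all__}
```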
2 files renamed without changes.

server.py renamed to torchchat/usages/server.py

Lines changed: 5 additions & 5 deletions

```diff
@@ -15,17 +15,17 @@
 
 import torch
 
-from api.openai_api import (
+from build.builder import BuilderArgs, TokenizerArgs
+from flask import Flask, request, Response
+from generate import GeneratorArgs
+
+from torchchat.usages.openai_api import (
     CompletionRequest,
     get_model_info_list,
     OpenAiApiGenerator,
     retrieve_model_info,
 )
 
-from build.builder import BuilderArgs, TokenizerArgs
-from flask import Flask, request, Response
-from generate import GeneratorArgs
-
 OPENAI_API_VERSION = "v1"
 
```
