Merged
130 commits
a7a6e43
[Benchmark] Add async throughput benchmark
njhill Aug 28, 2024
ce7d159
wip
njhill Aug 29, 2024
569cd43
Merge remote-tracking branch 'njhill/async-llm-eng-bench' into reduce…
Aug 29, 2024
d99ce6f
stash
Aug 31, 2024
8d6b2e9
remove proxy
Sep 2, 2024
14f3637
stash
Sep 2, 2024
3b8311b
added mp_llm_engine
Sep 2, 2024
5e2eb74
fixed
Sep 2, 2024
aa62f2e
format
Sep 2, 2024
863081b
cleanup
Sep 2, 2024
965b97a
revert asyncllmengine
Sep 2, 2024
8fd72f6
fix nit
Sep 2, 2024
ddeb7c6
format
Sep 2, 2024
6539e10
Merge branch 'main' into reduce-asyncio-oh
Sep 2, 2024
4b111e4
clean
Sep 2, 2024
a5ffd2c
fix
Sep 2, 2024
1395872
stash
Sep 2, 2024
938cf85
move files
Sep 2, 2024
72d1d42
cleanup code
Sep 3, 2024
fcdcfc9
refactor, cleanup
Sep 3, 2024
659169e
updated
Sep 3, 2024
9886f3d
make health check work
Sep 3, 2024
5b2f057
format
Sep 3, 2024
ae4564c
awk -> ack
Sep 3, 2024
f9ccecc
add better shutdown
Sep 3, 2024
89b730b
cleanup comment
Sep 3, 2024
f3dc82b
more awk --> ack
Sep 3, 2024
ac97a9e
use constant
Sep 3, 2024
becd7ab
format
Sep 3, 2024
b7f49ed
remove set to None
Sep 3, 2024
58ae3b0
Merge remote-tracking branch 'origin/main' into reduce-asyncio-oh
njhill Sep 4, 2024
d0f9641
Remove redundant pass
njhill Sep 4, 2024
aa64042
Merge branch 'main' into reduce-asyncio-oh
Sep 4, 2024
5c6e5ef
review comments
alexm-redhat Sep 4, 2024
25174a5
format
alexm-redhat Sep 4, 2024
db55c1a
add async socket reads and socket writes
alexm-redhat Sep 4, 2024
f97e1f2
Some error handling
njhill Sep 4, 2024
dd96d3e
remove async benchmark
Sep 7, 2024
14d4afe
stash
Sep 7, 2024
bc386ea
Merge branch 'main' into reduce-asyncio-oh-alex
Sep 7, 2024
c0d0d60
adding error handling
Sep 7, 2024
b7c1fcc
error handling
Sep 7, 2024
a661b76
added
Sep 7, 2024
5d00f3a
formatting in place
Sep 7, 2024
5598494
added error handling
Sep 8, 2024
98aaa7d
change name
Sep 8, 2024
ba5ef38
change name
Sep 8, 2024
18b5a94
added dead_error to asyncengine
Sep 8, 2024
b048961
moved tests under openai
Sep 8, 2024
6b2e18b
updated tests
Sep 8, 2024
7a7ff5b
revert executor change
Sep 8, 2024
b7e1fe9
revert
Sep 8, 2024
48068d5
executor class
Sep 8, 2024
e3daa28
cleanup format
Sep 8, 2024
7880b75
format
Sep 8, 2024
29fe3c8
shorten
Sep 8, 2024
a720947
Revert change
Sep 8, 2024
5b8cee6
enable shutdown for tp>1
Sep 8, 2024
97a241d
format
Sep 8, 2024
6d0570e
added error handling
Sep 8, 2024
eb26791
format
Sep 8, 2024
e256050
try out hwm
Sep 9, 2024
59c5aca
Add stop_remote_worker_execution_loop for TP case
njhill Sep 9, 2024
62f654a
Revert unnecessary stop_remote_worker_execution_loop
njhill Sep 10, 2024
75c6157
fixed magicmock errored
Sep 10, 2024
6f1cced
Merge branch 'main' into reduce-asyncio-oh-alex
Sep 10, 2024
370c104
fall back to asyncllmengine if pp
Sep 10, 2024
0cf9551
formatting
Sep 10, 2024
72f72fd
stash
Sep 10, 2024
ded4540
Merge branch 'main' into reduce-asyncio-oh-alex
Sep 10, 2024
364ed7f
remove DO_LOG_STATS RPC call
Sep 10, 2024
f7fdf69
cleanup health check
Sep 10, 2024
7e61cdb
Use pickle for requests too
njhill Sep 10, 2024
3e84c8c
Remove hwm
Sep 10, 2024
2559813
Simplify configs setup
njhill Sep 10, 2024
d0a0f8b
stash
Sep 10, 2024
70e4916
Merge branch 'reduce-asyncio-oh-alex' of https://github.com/neuralmag…
Sep 10, 2024
021fed3
added tests
Sep 10, 2024
fd6ee43
added failed health check
Sep 11, 2024
ccb43a3
rename
Sep 11, 2024
1aa0823
added failed abort test
Sep 11, 2024
fe22fe2
formatting
Sep 11, 2024
3ce8702
Some more startup RPC simplification
njhill Sep 11, 2024
1f3fc24
fix yapf conflict
njhill Sep 11, 2024
ead62dd
fix entrypoints tests
alexm-redhat Sep 11, 2024
672fb81
stash
Sep 11, 2024
86312e4
fix Intel/TPU tests
alexm-redhat Sep 11, 2024
c4f6898
Merge branch 'reduce-asyncio-oh-alex' of https://github.com/neuralmag…
Sep 11, 2024
678e8e5
Merge branch 'reduce-asyncio-oh-alex' of https://github.com/neuralmag…
Sep 11, 2024
78b9e21
fix
Sep 11, 2024
66c6961
formatting
Sep 11, 2024
6e1e2bb
cleanup
Sep 11, 2024
610b349
cleanup
Sep 11, 2024
28bb8a4
format
Sep 11, 2024
b266249
fix poller
Sep 11, 2024
f8036a5
add graceful shutdown on abort after client closed
Sep 11, 2024
a649f75
cleanup formatting
Sep 11, 2024
5b3535d
added test abort
Sep 11, 2024
7097e05
fix up tests
Sep 11, 2024
ad3d0f8
added abort tests
Sep 12, 2024
6e9c6c9
added another accuracy test
Sep 12, 2024
fb8e2f9
add multistep test for accuracy of mq llm engine
Sep 12, 2024
75523b2
added test generation
Sep 12, 2024
5546d2e
fixed accuracy test launch
Sep 12, 2024
6403f49
added load test
Sep 12, 2024
bc68b51
Merge branch 'main' into reduce-asyncio-oh-alex
Sep 12, 2024
3bb5e52
remove file
Sep 12, 2024
2ac814f
format
Sep 12, 2024
179a667
added load test
Sep 12, 2024
97d6c09
format
Sep 12, 2024
78badc1
added load test
Sep 12, 2024
a499733
format
alexm-redhat Sep 12, 2024
6a5d8d8
stash
Sep 12, 2024
dfab5eb
Merge branch 'reduce-asyncio-oh-alex' of https://github.com/neuralmag…
Sep 12, 2024
96f84fe
format
Sep 12, 2024
ae14670
Merge branch 'main' into reduce-asyncio-oh-alex
robertgshaw2-redhat Sep 14, 2024
117c024
format
Sep 14, 2024
c059713
remove debug print
Sep 14, 2024
1af3297
removed stray
Sep 14, 2024
97ae38d
updated
Sep 14, 2024
d0fab11
switch model to avoid OOM in TPU test
Sep 14, 2024
bb4d839
Merge remote-tracking branch 'origin/main' into reduce-asyncio-oh-alex
njhill Sep 16, 2024
1967f6a
Adjust timeouts
njhill Sep 16, 2024
a911323
stash
Sep 17, 2024
95ff4f3
make timeout 10000 ms
Sep 17, 2024
302868e
format
Sep 17, 2024
add68ee
Update examples/openai_chat_completion_client.py
robertgshaw2-redhat Sep 17, 2024
242b952
adjust RPC timeout on TPU
Sep 17, 2024
3dafa26
add longer delay for health check
Sep 17, 2024
836a9d2
Update client.py
robertgshaw2-redhat Sep 18, 2024
4 changes: 3 additions & 1 deletion .buildkite/test-pipeline.yaml
@@ -43,13 +43,15 @@ steps:
fast_check: true
source_file_dependencies:
- vllm/
- tests/mq_llm_engine
- tests/async_engine
- tests/test_inputs
- tests/multimodal
- tests/test_utils
- tests/worker
commands:
- pytest -v -s async_engine # Async Engine
- pytest -v -s mq_llm_engine # MQLLMEngine
- pytest -v -s async_engine # AsyncLLMEngine
- NUM_SCHEDULER_STEPS=4 pytest -v -s async_engine/test_async_llm_engine.py
- pytest -v -s test_inputs.py
- pytest -v -s multimodal
4 changes: 2 additions & 2 deletions docs/source/dev/profiling/profiling_index.rst
@@ -21,8 +21,8 @@ Traces can be visualized using https://ui.perfetto.dev/.
.. tip::

To stop the profiler - it flushes out all the profile trace files to the directory. This takes time, for example for about 100 requests worth of data for a llama 70b, it takes about 10 minutes to flush out on a H100.
Set the env variable VLLM_RPC_GET_DATA_TIMEOUT_MS to a big number before you start the server. Say something like 30 minutes.
``export VLLM_RPC_GET_DATA_TIMEOUT_MS=1800000``
Set the env variable VLLM_RPC_TIMEOUT to a big number before you start the server. Say something like 30 minutes.
``export VLLM_RPC_TIMEOUT=1800000``

Example commands and usage:
===========================
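The renamed variable keeps the same millisecond units as before. A quick sanity check (plain POSIX shell arithmetic, nothing vLLM-specific) that the documented value really is 30 minutes:

```shell
# VLLM_RPC_TIMEOUT is read in milliseconds; 30 minutes = 30 * 60 * 1000 ms.
ms=$((30 * 60 * 1000))
echo "$ms"  # prints 1800000, matching the value exported in the docs
export VLLM_RPC_TIMEOUT="$ms"
echo "$VLLM_RPC_TIMEOUT"
```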
106 changes: 0 additions & 106 deletions tests/async_engine/test_openapi_server.py

This file was deleted.

120 changes: 0 additions & 120 deletions tests/entrypoints/openai/rpc/test_zmq_client.py

This file was deleted.

56 changes: 25 additions & 31 deletions tests/entrypoints/openai/test_accuracy.py
@@ -18,38 +18,32 @@
FILTER = "exact_match,strict-match"
RTOL = 0.03
EXPECTED_VALUE = 0.58
DEFAULT_ARGS = ["--max-model-len", "4096", "--disable-log-requests"]
MORE_ARGS_LIST = [["--enable-chunked-prefill"], ["--num-scheduler-steps", "8"]]


@pytest.fixture(scope="module")
def server():
args = [
"--max-model-len", "4096", "--enable-chunked-prefill",
"--disable-log-requests", "--enforce-eager"
]

with RemoteOpenAIServer(MODEL_NAME, args) as remote_server:
yield remote_server


@pytest.fixture(scope="module")
def server_data(server):
return {
"url": f"{server.url_for('v1')}/completions",
}
@pytest.mark.parametrize("more_args", MORE_ARGS_LIST)
def test_lm_eval_accuracy(more_args):
args = list(DEFAULT_ARGS)
args.extend(more_args)

print(f"Running with: {args}")

def test_lm_eval_accuracy(server_data):
model_args = (f"model={MODEL_NAME},"
f"base_url={server_data['url']},"
f"num_concurrent={NUM_CONCURRENT},tokenized_requests=False")

results = lm_eval.simple_evaluate(
model="local-completions",
model_args=model_args,
tasks=TASK,
)

measured_value = results["results"][TASK][FILTER]
assert (measured_value - RTOL < EXPECTED_VALUE
and measured_value + RTOL > EXPECTED_VALUE
), f"Expected: {EXPECTED_VALUE} | Measured: {measured_value}"
with RemoteOpenAIServer(MODEL_NAME, args) as remote_server:
url = f"{remote_server.url_for('v1')}/completions"

model_args = (
f"model={MODEL_NAME},"
f"base_url={url},"
f"num_concurrent={NUM_CONCURRENT},tokenized_requests=False")

results = lm_eval.simple_evaluate(
model="local-completions",
model_args=model_args,
tasks=TASK,
)

measured_value = results["results"][TASK][FILTER]
assert (measured_value - RTOL < EXPECTED_VALUE
and measured_value + RTOL > EXPECTED_VALUE
), f"Expected: {EXPECTED_VALUE} | Measured: {measured_value}"
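The two-sided tolerance assertion at the end of this test is equivalent to a symmetric absolute-difference check. A small standalone sketch (illustrative measured values only, not taken from a real evaluation run):

```python
RTOL = 0.03
EXPECTED_VALUE = 0.58

def within_tolerance(measured: float) -> bool:
    # Equivalent to the test's condition:
    #   measured - RTOL < EXPECTED_VALUE and measured + RTOL > EXPECTED_VALUE
    # i.e. EXPECTED_VALUE - RTOL < measured < EXPECTED_VALUE + RTOL
    return abs(measured - EXPECTED_VALUE) < RTOL

print(within_tolerance(0.60))  # True:  |0.60 - 0.58| = 0.02 < 0.03
print(within_tolerance(0.50))  # False: |0.50 - 0.58| = 0.08 >= 0.03
```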
@@ -5,7 +5,7 @@
from vllm.entrypoints.openai.protocol import ChatCompletionRequest
from vllm.transformers_utils.tokenizer import get_tokenizer

from ..utils import VLLM_PATH
from ...utils import VLLM_PATH

chatml_jinja_path = VLLM_PATH / "examples/template_chatml.jinja"
assert chatml_jinja_path.exists()
40 changes: 0 additions & 40 deletions tests/entrypoints/openai/test_mp_api_server.py

This file was deleted.

5 changes: 3 additions & 2 deletions tests/entrypoints/openai/test_serving_chat.py
@@ -4,7 +4,7 @@
from unittest.mock import MagicMock

from vllm.config import MultiModalConfig
from vllm.engine.async_llm_engine import AsyncLLMEngine
from vllm.engine.multiprocessing.client import MQLLMEngineClient
from vllm.entrypoints.openai.protocol import ChatCompletionRequest
from vllm.entrypoints.openai.serving_chat import OpenAIServingChat
from vllm.transformers_utils.tokenizer import get_tokenizer
@@ -52,8 +52,9 @@ def test_async_serving_chat_init():


def test_serving_chat_should_set_correct_max_tokens():
mock_engine = MagicMock(spec=AsyncLLMEngine)
mock_engine = MagicMock(spec=MQLLMEngineClient)
mock_engine.get_tokenizer.return_value = get_tokenizer(MODEL_NAME)
mock_engine.errored = False

serving_chat = OpenAIServingChat(mock_engine,
MockModelConfig(),
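The spec swap above also explains the newly added `mock_engine.errored = False` line. A self-contained sketch, using a toy stand-in class rather than the real MQLLMEngineClient, of how `MagicMock(spec=...)` constrains a mock and why a truthiness-checked attribute like `errored` still needs an explicit value:

```python
from unittest.mock import MagicMock

class ToyEngineClient:
    """Stand-in for an engine client class (illustrative, not the real one)."""
    errored: bool = False

    def get_tokenizer(self):
        ...

mock_engine = MagicMock(spec=ToyEngineClient)

# spec= restricts the mock to the class's real attributes and methods:
mock_engine.get_tokenizer.return_value = "fake-tokenizer"
print(mock_engine.get_tokenizer())  # fake-tokenizer

# Attributes absent from the spec raise AttributeError instead of
# silently returning a child mock:
try:
    mock_engine.not_a_real_method
except AttributeError:
    print("blocked by spec")

# Without an explicit value, `errored` is a truthy child mock, so code
# doing `if engine.errored:` would take the wrong branch:
print(bool(mock_engine.errored))  # True (it's a MagicMock)
mock_engine.errored = False
print(bool(mock_engine.errored))  # False
```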