
Conversation

@nrghosh nrghosh commented Aug 25, 2025

LLM Doctest Fix

  • Re-enabled doc tests for working-with-llms.rst that were previously broken
  • Refactored data/llm documentation to separate test execution from doc examples
  • Removed testcode/testoutput blocks from working-with-llms.rst that mixed test execution with documentation
  • Added dual-level examples: simple code-block snippets for quick understanding plus comprehensive literalinclude references for detailed implementation
  • Separated concerns: tests run via BUILD.bazel, documentation focuses on user guidance and code snippets
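The dual-level pattern described above can be sketched in reStructuredText roughly as follows. This is an illustrative sketch, not the PR's literal markup: the marker comments (`__basic_llm_example_start__`/`__basic_llm_example_end__`) are assumed names, though the script path matches the Bazel target in the log below.

```rst
A short inline snippet for quick orientation:

.. code-block:: python

    import ray

    ds = ray.data.from_items([{"prompt": "What is Ray Data?"}])

The full, tested implementation is pulled in from a standalone script:

.. literalinclude:: doc_code/working-with-llms/basic_llm_example.py
    :language: python
    :start-after: __basic_llm_example_start__
    :end-before: __basic_llm_example_end__
```

With this split, the `code-block` stays short and readable while the `literalinclude` source is the same file the test infrastructure executes.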

Notes

  1. The Vale linter runs only on the doc/source/data and doc/source/ray-overview/examples directories, which is why quick-start.rst (in doc/source/serve/llm/) can use code-block:: python but our .rst file cannot
  2. The separate pytests run and pass in the data: doc gpu tests (premerge) CI job

Log


[2025-08-28T23:09:52Z] //doc:source/data/doc_code/working-with-llms/basic_llm_example           PASSED in 36.4s
[2025-08-28T23:09:52Z]   WARNING: //doc:source/data/doc_code/working-with-llms/basic_llm_example: Test execution time (36.4s excluding execution overhead) outside of range for LONG tests. Consider setting timeout="short" or size="small".
[2025-08-28T23:09:52Z] //doc:source/data/doc_code/working-with-llms/openai_api_example          PASSED in 25.5s
[2025-08-28T23:09:52Z]   WARNING: //doc:source/data/doc_code/working-with-llms/openai_api_example: Test execution time (25.5s excluding execution overhead) outside of range for LONG tests. Consider setting timeout="short" or size="small".
[2025-08-28T23:09:52Z] //doc:source/data/doc_code/working-with-llms/vlm_example                 PASSED in 34.3s
[2025-08-28T23:09:52Z]   WARNING: //doc:source/data/doc_code/working-with-llms/vlm_example: Test execution time (34.3s excluding execution overhead) outside of range for LONG tests. Consider setting timeout="short" or size="small".
[2025-08-28T23:09:52Z] //doc/source/data/examples:batch_inference_object_detection              PASSED in 35.3s
[2025-08-28T23:09:52Z]   WARNING: //doc/source/data/examples:batch_inference_object_detection: Test execution time (35.3s excluding execution overhead) outside of range for LONG tests. Consider setting timeout="short" or size="small".
[2025-08-28T23:09:52Z] //doc/source/data/examples:huggingface_vit_batch_prediction              PASSED in 65.8s
[2025-08-28T23:09:52Z]   WARNING: //doc/source/data/examples:huggingface_vit_batch_prediction: Test execution time (65.8s excluding execution overhead) outside of range for LONG tests. Consider setting timeout="moderate" or size="medium".
[2025-08-28T23:09:52Z] //doc/source/data/examples:pytorch_resnet_batch_prediction               PASSED in 103.0s
[2025-08-28T23:09:52Z]   WARNING: //doc/source/data/examples:pytorch_resnet_batch_prediction: Test execution time (103.0s excluding execution overhead) outside of range for LONG tests. Consider setting timeout="moderate" or size="medium".
[2025-08-28T23:09:52Z]
[2025-08-28T23:09:52Z] Executed 6 out of 6 tests: 6 tests pass.

Why are these changes needed?

To enable the Working with LLMs docs / example code tests.

Related issue number

Addresses #55796

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Unit Test

pytest -vs --doctest-modules doc/source/data/working-with-llms.rst

Output:

================================================================================ test session starts =================================================================================
platform linux -- Python 3.11.11, pytest-7.4.4, pluggy-1.5.0 -- /home/ray/anaconda3/bin/python
cachedir: .pytest_cache
rootdir: /home/ray/default/work/ray
configfile: pytest.ini
plugins: asyncio-0.17.2, sphinx-0.5.1.dev0, anyio-3.7.1
asyncio: mode=Mode.LEGACY
collected 1 item                                                                                                                                                                     

doc/source/data/working-with-llms.rst::working-with-llms.rst 2025-08-25 16:31:30,414 INFO worker.py:1771 -- Connecting to existing Ray cluster at address: 10.0.145.18:6379...
2025-08-25 16:31:30,430 INFO worker.py:1942 -- Connected to Ray cluster. View the dashboard at https://session-cv6v8dczba3ifnv58j8xbv8gwa.i.anyscaleuserdata-staging.com 
2025-08-25 16:31:30,947 INFO packaging.py:380 -- Pushing file package 'gcs://_ray_pkg_eb140f40cb2d56c13f14872eb3cbf0744e701a46.zip' (303.88MiB) to Ray cluster...
2025-08-25 16:31:32,163 INFO packaging.py:393 -- Successfully pushed file package 'gcs://_ray_pkg_eb140f40cb2d56c13f14872eb3cbf0744e701a46.zip'.
No cloud storage mirror configured
2025-08-25 16:31:32,851 WARNING util.py:608 -- The argument ``compute`` is deprecated in Ray 2.9. Please specify argument ``concurrency`` instead. For more information, see https://docs.ray.io/en/master/data/transforming-data.html#stateful-transforms.
2025-08-25 16:31:32,854 INFO dataset.py:3248 -- Tip: Use `take_batch()` instead of `take() / show()` to return records in pandas or numpy batch format.
2025-08-25 16:31:32,855 INFO logging.py:295 -- Registered dataset logger for dataset dataset_201_0
2025-08-25 16:31:32,874 INFO streaming_executor.py:159 -- Starting execution of Dataset dataset_201_0. Full logs are in /tmp/ray/session_2025-08-25_14-58-12_129560_5972/logs/ray-data
2025-08-25 16:31:32,875 INFO streaming_executor.py:160 -- Execution plan of Dataset dataset_201_0: InputDataBuffer[Input] -> TaskPoolMapOperator[Map(_preprocess)] -> ActorPoolMapOperator[MapBatches(ChatTemplateUDF)] -> ActorPoolMapOperator[MapBatches(TokenizeUDF)] -> ActorPoolMapOperator[MapBatches(vLLMEngineStageUDF)] -> ActorPoolMapOperator[MapBatches(DetokenizeUDF)] -> LimitOperator[limit=1] -> TaskPoolMapOperator[Map(_postprocess)]
(MapWorker(MapBatches(ChatTemplateUDF)) pid=110145) No cloud storage mirror configured
(MapWorker(MapBatches(TokenizeUDF)) pid=110296) No cloud storage mirror configured
(MapWorker(MapBatches(vLLMEngineStageUDF)) pid=110446) Max pending requests is set to 141
(MapWorker(MapBatches(vLLMEngineStageUDF)) pid=110446) No cloud storage mirror configured
Loading safetensors checkpoint shards:   0% Completed | 0/4 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:  25% Completed | 1/4 [00:00<00:01,  1.54it/s]
Loading safetensors checkpoint shards:  50% Completed | 2/4 [00:01<00:01,  1.27it/s]
Loading safetensors checkpoint shards:  75% Completed | 3/4 [00:02<00:00,  1.26it/s]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:02<00:00,  1.51it/s]
Capturing CUDA graph shapes:   0%|          | 0/35 [00:00<?, ?it/s]
Capturing CUDA graph shapes: 100%|██████████| 35/35 [00:03<00:00, 11.48it/s]
(MapWorker(MapBatches(DetokenizeUDF)) pid=111059) No cloud storage mirror configured
2025-08-25 16:32:43,620 WARNING resource_manager.py:134 -- ⚠️  Ray's object store is configured to use only 24.3% of available memory (186.3GiB out of 768.0GiB total). For optimal Ray Data performance, we recommend setting the object store to at least 50% of available memory. You can do this by setting the 'object_store_memory' parameter when calling ray.init() or by setting the RAY_DEFAULT_OBJECT_STORE_MEMORY_PROPORTION environment variable.
2025-08-25 16:32:46,033 INFO streaming_executor.py:279 -- ✔️  Dataset dataset_201_0 execution finished in 73.16 seconds
✔️  Dataset dataset_201_0 execution finished in 73.16 seconds: 100%|██████████| 1.00/1.00 [01:13<00:00, 73.2s/ row]
- Map(_preprocess): Tasks: 0; Actors: 0; Queued blocks: 0; Resources: 0.0 CPU, 182.0B object store: 100%|████████████████████████████████████████| 1.00/1.00 [00:02<00:00, 2.42s/ row]
- MapBatches(ChatTemplateUDF): Tasks: 0; Actors: 0; Queued blocks: 0; Resources: 0.0 CPU, 505.0B object store; [0/1 objects local]: : 1.00 row [00:02, 2.42s/ row]                    
- MapBatches(TokenizeUDF): Tasks: 0; Actors: 0; Queued blocks: 0; Resources: 0.0 CPU, 981.0B object store; [0/1 objects local]: : 1.00 row [00:02, 2.42s/ row]    
(MapWorker(MapBatches(vLLMEngineStageUDF)) pid=110446) [vLLM] Elapsed time for batch 56ae7ef6064b4a90818ba30ea316b011 with size 1: 1.0539131289997385                       
(MapWorker(MapBatches(vLLMEngineStageUDF)) pid=110446) Shutting down vLLM engine                                                                                            
- MapBatches(vLLMEngineStageUDF): Tasks: 0; Actors: 1; Queued blocks: 0; Resources: 0.0 CPU, 1.0 GPU, 1.7KB object store; [0/1 objects local]: : 1.00 row [00:05, 5.31s/ row]
- MapBatches(DetokenizeUDF): Tasks: 0; Actors: 1; Queued blocks: 0; Resources: 1.0 CPU, 0.0B object store; [0/1 objects local]: : 1.00 row [00:05, 5.34s/ row]              
- limit=1: Tasks: 0; Actors: 0; Queued blocks: 0; Resources: 0.0 CPU, 0.0B object store: 100%|███████████████████████████████████████████████████| 1.00/1.00 [00:05<00:00, 5.34s/ row]
- Map(_postprocess): Tasks: 0; Actors: 0; Queued blocks: 0; Resources: 0.0 CPU, 1.7KB object store: : 1.00 row [00:05, 5.34s/ row]                                          
2025-08-25 16:32:48,963 INFO base.py:255 -- The first stage of the processor is ChatTemplateStage.                                                                           
Required input columns:                                                                                                                                    
        messages: A list of messages in OpenAI chat format. See https://platform.openai.com/docs/api-reference/chat/create for details.                       
Requirement already satisfied: datasets>=4.0.0 in /home/ray/anaconda3/lib/python3.11/site-packages (4.0.0)                    
(… remaining "Requirement already satisfied" pip output for transitive dependencies elided …)
2025-08-25 16:32:54,576 INFO worker.py:1771 -- Connecting to existing Ray cluster at address: 10.0.145.18:6379...
2025-08-25 16:32:54,576 INFO worker.py:1789 -- Calling ray.init() again after it has already been called.
No cloud storage mirror configured
PASSED

============================================================================ 1 passed in 89.81s (0:01:29) ============================================================================

- fix rst/python syntax issues causing crashes
- fix pytest and datasets compatibility issues [LLM]
- patch VLM example (dependency issue) and OpenAI API example
- conditional execution based on OPENAI_API_KEY / demo mode
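The OPENAI_API_KEY / demo-mode gating mentioned above can be sketched as follows. This is a minimal illustration, not the PR's actual code; `query_llm` is a hypothetical helper, and the real branch would call the OpenAI API.

```python
import os


def query_llm(prompt: str) -> str:
    """Run the real example only when credentials exist; otherwise demo mode."""
    if not os.environ.get("OPENAI_API_KEY"):
        # Demo mode: deterministic, offline output so docs and CI stay green
        # without requiring credentials or network access.
        return f"[demo] echo: {prompt}"
    # With a key present, the real OpenAI call would go here (omitted in
    # this sketch).
    raise NotImplementedError("real API call omitted in this sketch")
```

In CI, where no key is configured, the example degrades to the deterministic demo path instead of failing.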

Signed-off-by: Nikhil Ghosh <[email protected]>
@nrghosh nrghosh requested a review from angelinalg August 25, 2025 23:34
@nrghosh nrghosh added alpha Alpha release features llm labels Aug 25, 2025
@nrghosh nrghosh self-assigned this Aug 25, 2025
@nrghosh nrghosh marked this pull request as ready for review August 25, 2025 23:35
@nrghosh nrghosh requested review from a team as code owners August 25, 2025 23:35
@nrghosh nrghosh changed the title from "[LLM] fix doc test for Working with LLMs guide #55796" to "[LLM] fix doc test for Working with LLMs guide" Aug 25, 2025
# Now handle Ray's compatibility issue by patching the dynamic modules function
import datasets.load

# Create a compatibility wrapper for the removed init_dynamic_modules function
Contributor Author
does some hacky stuff for package version mismatch issues - otherwise tough to have all these examples in the same place.
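The compatibility shim being discussed follows a common monkey-patch pattern: if a library removed a helper between versions, reinstall a stub so older callers keep working. The sketch below is generic and illustrative, not the PR's actual wrapper; a synthetic module stands in for `datasets.load`, and the no-op behavior is an assumption.

```python
import types

# Stand-in for datasets.load: newer `datasets` releases dropped the
# init_dynamic_modules helper that older code paths still call.
legacy_mod = types.ModuleType("legacy_mod")

if not hasattr(legacy_mod, "init_dynamic_modules"):
    def _init_dynamic_modules(*args, **kwargs):
        # No-op replacement: callers only need the attribute to exist.
        return None

    legacy_mod.init_dynamic_modules = _init_dynamic_modules
```

The `hasattr` guard makes the patch safe to apply against both old and new library versions.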

@angelinalg angelinalg left a comment

stamp


vision_processed_ds = vision_processor(vision_dataset).materialize()
vision_processed_ds.show(3)
# For doctest, we'll just set up the processor without running the full dataset
Contributor Author
was running into resource exhaustion when running this example (after debugging format / syntax / package versions)

Member
Does this snippet need GPUs?

Contributor Author
Yes, it expects an L4 GPU - but this doesn't actually load/run the model (to avoid that issue). Is there a different way we want to handle this?

@nrghosh nrghosh added go add ONLY when ready to merge, run all tests and removed alpha Alpha release features labels Aug 25, 2025
@nrghosh nrghosh requested a review from a team August 26, 2025 00:33
kouroshHakha commented Aug 26, 2025

Actually @nrghosh let's do a similar thing to what @eicherseiji did in this PR. The idea being: create the full working doc-test code outside of the doc rst files. In the rst files, use literalinclude, and separately run proper tests on those scripts however you'd like, without having to rely on any particular infrastructure parsing the code outside of docs.

This approach keeps our code examples clean by hiding the infra code that is needed for running and setting up the test envs. For example, it removes the need for the subprocess.check_call([sys.executable, "-m", "pip", "install", "--upgrade", "datasets>=4.0.0"]) call in the doc test, as these can be the dependencies of the test module specified in the doc_test yaml file.
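A standalone doc_code script following this suggestion might look like the sketch below, where the snippet between marker comments is what `literalinclude` pulls into the .rst, and `run_test()` is what the Bazel pytest target invokes. The file path, marker names, model string, and config keys here are assumptions for illustration, not the PR's actual code.

```python
# Hypothetical layout: doc_code/working-with-llms/basic_llm_example.py

# __config_example_start__
processor_config = {
    "model_source": "unsloth/Llama-3.1-8B-Instruct",  # illustrative model
    "batch_size": 64,
    "concurrency": 1,
}
# __config_example_end__


def run_test() -> None:
    # Validate the documented config without launching vLLM, so the
    # doc test runs quickly and without holding a GPU.
    assert processor_config["batch_size"] > 0
    assert processor_config["concurrency"] >= 1
    assert "/" in processor_config["model_source"]


if __name__ == "__main__":
    run_test()
```

Heavy dependencies then live in the test target's dependency list rather than in pip-install calls embedded in the documentation.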

nrghosh commented Aug 26, 2025

Actually @nrghosh let's do a similar thing to what @eicherseiji did in #54763.

ack @kouroshHakha will take a second pass with those changes

@nrghosh nrghosh force-pushed the nrghosh/llms-doctest branch from ad64964 to 156d469 Compare August 26, 2025 23:05
- Move infrastructure code from RST to standalone Python files with literalinclude
- Add run_test() functions and fix transformers version mismatch for RoPE config
- Replace complex module imports with direct configuration validation in testcode
- Prevent GPU memory errors by avoiding inference execution during tests
- Maintain clean separation: RST shows code, testcode validates configs only
- Keep 1:1 functionality with the original RST code example file

Signed-off-by: Nikhil Ghosh <[email protected]>
@nrghosh nrghosh force-pushed the nrghosh/llms-doctest branch from 156d469 to 70db35f Compare August 26, 2025 23:22
Signed-off-by: Nikhil Ghosh <[email protected]>
Signed-off-by: Nikhil Ghosh <[email protected]>
Signed-off-by: Nikhil Ghosh <[email protected]>
@nrghosh nrghosh force-pushed the nrghosh/llms-doctest branch 3 times, most recently from d4d6b58 to cb7fc7b Compare August 28, 2025 01:45
- check that bazel pytest section actually does run the python test
  files
- remove testcode blocks from .rst and replace with literalinclude /
  code-block references

Signed-off-by: Nikhil Ghosh <[email protected]>
Signed-off-by: Nikhil Ghosh <[email protected]>
@nrghosh nrghosh force-pushed the nrghosh/llms-doctest branch from 5cd2198 to f71704a Compare August 28, 2025 22:07
Signed-off-by: Nikhil Ghosh <[email protected]>
Signed-off-by: Nikhil Ghosh <[email protected]>
@kunling-anyscale kunling-anyscale left a comment
lgtm

Signed-off-by: Nikhil Ghosh <[email protected]>

Signed-off-by: Nikhil Ghosh <[email protected]>
Signed-off-by: Nikhil Ghosh <[email protected]>

Signed-off-by: Nikhil Ghosh <[email protected]>

Signed-off-by: Nikhil Ghosh <[email protected]>
Signed-off-by: Nikhil Ghosh <[email protected]>
Signed-off-by: Nikhil Ghosh <[email protected]>
Signed-off-by: Nikhil Ghosh <[email protected]>
@nrghosh nrghosh left a comment
  • addressed all detailed comments
  • fixed version / code example issues
  • refactored code examples out of the main doc
  • unblocked tests from waiting for GPUs in CI

Working With LLMs Doc rendering - https://anyscale-ray--55917.com.readthedocs.build/en/55917/data/working-with-llms.html

@angelinalg angelinalg left a comment
stamp #2

@richardliaw richardliaw removed the request for review from bveeramani September 24, 2025 16:21
@kouroshHakha kouroshHakha left a comment
LGTM

@kouroshHakha kouroshHakha merged commit d6b9e4d into ray-project:master Sep 24, 2025
6 checks passed
@nrghosh nrghosh deleted the nrghosh/llms-doctest branch September 24, 2025 17:13
marcostephan pushed a commit to marcostephan/ray that referenced this pull request Sep 24, 2025
elliot-barn pushed a commit that referenced this pull request Sep 27, 2025
dstrodtman pushed a commit that referenced this pull request Oct 6, 2025
Signed-off-by: Nikhil Ghosh <[email protected]>
Signed-off-by: Douglas Strodtman <[email protected]>
justinyeh1995 pushed a commit to justinyeh1995/ray that referenced this pull request Oct 20, 2025
landscapepainter pushed a commit to landscapepainter/ray that referenced this pull request Nov 17, 2025
