
Conversation

russellb
Member

mergify bot added the frontend label Sep 26, 2025
russellb added the ready (ONLY add when PR is ready to merge/full CI is needed) label Sep 26, 2025
Contributor

gemini-code-assist bot left a comment


Code Review

This pull request addresses a security vulnerability by using secrets.compare_digest for API token validation to prevent timing attacks. The implementation is a significant improvement. However, I've identified a potential residual timing vulnerability related to token length and have suggested a further hardening measure by hashing the tokens before comparison. This will ensure the comparison is always done on fixed-length inputs, fully mitigating timing attacks on both token value and length.

Comment on lines 1264 to 1283
def __init__(self, app: ASGIApp, tokens: list[str]) -> None:
    self.app = app
    self.api_tokens = {f"Bearer {token}" for token in tokens}
    self.api_tokens = tokens

def verify_token(self, headers: Headers) -> bool:
    authorization_header_value = headers.get("Authorization")
    if not authorization_header_value:
        return False

    scheme, _, param = authorization_header_value.partition(" ")
    if scheme.lower() != "bearer":
        return False

    token_match = False
    for token in self.api_tokens:
        if secrets.compare_digest(param, token):
            token_match = True

    return token_match
Contributor


Severity: high

While using secrets.compare_digest is a great improvement to prevent timing attacks on the token's value, this implementation might still be vulnerable to a timing attack that could reveal the length of the valid API tokens.

The documentation for secrets.compare_digest notes that if the two strings being compared have different lengths, a timing attack could theoretically reveal information about their lengths. An attacker could use this to determine the length of valid tokens, which reduces the search space for a brute-force attack.

To fully mitigate this, you can hash the API tokens to a fixed length before comparison. This ensures that secrets.compare_digest always operates on inputs of the same length. This also has the benefit of allowing non-ASCII characters in tokens if needed, as they will be encoded to UTF-8 bytes before hashing.

Here is a suggested implementation that uses hashlib.sha256 to hash the tokens. Note that you will need to add import hashlib at the beginning of the file.

Suggested change
-def __init__(self, app: ASGIApp, tokens: list[str]) -> None:
-    self.app = app
-    self.api_tokens = {f"Bearer {token}" for token in tokens}
-    self.api_tokens = tokens
-
-def verify_token(self, headers: Headers) -> bool:
-    authorization_header_value = headers.get("Authorization")
-    if not authorization_header_value:
-        return False
-
-    scheme, _, param = authorization_header_value.partition(" ")
-    if scheme.lower() != "bearer":
-        return False
-
-    token_match = False
-    for token in self.api_tokens:
-        if secrets.compare_digest(param, token):
-            token_match = True
-
-    return token_match
+def __init__(self, app: ASGIApp, tokens: list[str]) -> None:
+    self.app = app
+    self.api_tokens = [hashlib.sha256(t.encode("utf-8")).digest() for t in tokens]
+
+def verify_token(self, headers: Headers) -> bool:
+    authorization_header_value = headers.get("Authorization")
+    if not authorization_header_value:
+        return False
+
+    scheme, _, param = authorization_header_value.partition(" ")
+    if scheme.lower() != "bearer":
+        return False
+
+    param_hash = hashlib.sha256(param.encode("utf-8")).digest()
+    token_match = False
+    for token_hash in self.api_tokens:
+        token_match |= secrets.compare_digest(param_hash, token_hash)
+
+    return token_match
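
For context, a minimal standalone sketch of the pattern the suggestion relies on is shown below. It is not code from this PR; the function name and token values are illustrative. Hashing both the candidate and the stored tokens to fixed-length SHA-256 digests means secrets.compare_digest always compares equal-length byte strings, so neither the value nor the length of a valid token is exposed through timing.

import hashlib
import secrets


def constant_time_token_check(candidate: str, valid_tokens: list[str]) -> bool:
    """Check a candidate token against the valid tokens without leaking
    token values or lengths through timing."""
    # Hash both sides to fixed 32-byte digests so compare_digest always
    # operates on equal-length inputs.
    candidate_hash = hashlib.sha256(candidate.encode("utf-8")).digest()
    token_match = False
    for token in valid_tokens:
        token_hash = hashlib.sha256(token.encode("utf-8")).digest()
        # |= instead of an early break: every token is checked, so the
        # loop's duration does not depend on which token matched.
        token_match |= secrets.compare_digest(candidate_hash, token_hash)
    return token_match


if __name__ == "__main__":
    tokens = ["example-token-1", "another-example-token"]  # illustrative only
    print(constant_time_token_check("example-token-1", tokens))  # True
    print(constant_time_token_check("wrong-token", tokens))      # False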

Member Author


I'm going to integrate this suggestion.

russellb added this to the v0.11.0 Cherry Picks milestone Sep 26, 2025
This is based on gemini-code-assist's recommendation:

> While using secrets.compare_digest is a great improvement to prevent timing
> attacks on the token's value, this implementation might still be vulnerable to a
> timing attack that could reveal the length of the valid API tokens.
>
> The documentation for secrets.compare_digest notes that if the two strings being
> compared have different lengths, a timing attack could theoretically reveal
> information about their lengths. An attacker could use this to determine the
> length of valid tokens, which reduces the search space for a brute-force attack.
>
> To fully mitigate this, you can hash the API tokens to a fixed length before
> comparison. This ensures that secrets.compare_digest always operates on inputs
> of the same length. This also has the benefit of allowing non-ASCII characters
> in tokens if needed, as they will be encoded to UTF-8 bytes before hashing.

Signed-off-by: Russell Bryant <[email protected]>
Isotr0py merged commit 3f5d902 into vllm-project:main Sep 27, 2025
43 checks passed