Merged

39 commits
6b1dc24
feat: add partition_equal_subset_sum
wislertt Sep 10, 2025
6f2e656
feat: add ideal examples for ideal template
wislertt Sep 10, 2025
53c5aff
feat: update template new version
wislertt Sep 10, 2025
fe56805
feat: complete first problem for new template
wislertt Sep 10, 2025
348af7e
feat: complete accounts_merge
wislertt Sep 11, 2025
8dd2709
feat: finish example plan
wislertt Sep 11, 2025
6f628c3
feat: completed example problems
wislertt Sep 11, 2025
5815e75
feat: add example.json5
wislertt Sep 11, 2025
eae0327
ci: add test-reproducibility
wislertt Sep 11, 2025
45c0471
ci: add test-reproducibility
wislertt Sep 11, 2025
e0344b4
ci: add test-reproducibility
wislertt Sep 11, 2025
fd9db3d
ci: add test-reproducibility
wislertt Sep 11, 2025
4e259b5
feat: update
wislertt Sep 11, 2025
816db0a
feat: add more problems
wislertt Sep 11, 2025
094e2bf
feat: add find_median_from_data_stream
wislertt Sep 11, 2025
740592b
feat: add maximum_profit_in_job_scheduling
wislertt Sep 11, 2025
ca892d7
feat: add partition_equal_subset_sum
wislertt Sep 11, 2025
24eeece
feat: add zero_one_matrix
wislertt Sep 12, 2025
8fc2d76
docs: add test-case-enhancement.md
wislertt Sep 12, 2025
a1c2bd9
feat: check solution with old version
wislertt Sep 12, 2025
7a87496
feat: add more test cases for find_median_from_data_stream
wislertt Sep 12, 2025
f9ad24a
feat: add more test cases for clone graph
wislertt Sep 12, 2025
43cf917
feat: delete generated problems from old template
wislertt Sep 12, 2025
b0cdabe
feat: update mismatch
wislertt Sep 12, 2025
1e4a547
ci: fix reproducibility
wislertt Sep 12, 2025
5eb8d3c
ci: update cache Graphviz
wislertt Sep 12, 2025
b439f00
ci: update cache Graphviz
wislertt Sep 12, 2025
bdb5ef4
ci: update cache Graphviz
wislertt Sep 12, 2025
71982dc
ci: update cache Graphviz
wislertt Sep 12, 2025
e402e93
ci: update cache Graphviz
wislertt Sep 12, 2025
0c1ff93
ci: update cache Graphviz
wislertt Sep 12, 2025
94401f5
ci: update cache Graphviz
wislertt Sep 12, 2025
865e00d
ci: update cache Graphviz
wislertt Sep 12, 2025
051d649
ci: update cache Graphviz
wislertt Sep 12, 2025
7971bef
ci: update cache Graphviz
wislertt Sep 12, 2025
445d8f2
ci: update cache Graphviz
wislertt Sep 12, 2025
6bbf9ac
ci: update cache Graphviz
wislertt Sep 12, 2025
0564d27
refactor: remove unecessary md files
wislertt Sep 12, 2025
e2e4704
refactor: remove unecessary md files
wislertt Sep 12, 2025
1 change: 1 addition & 0 deletions .amazonq/rules/problem-creation.md
@@ -135,6 +135,7 @@ When creating JSON properties that use PascalCase (solution_class_name, test_class_name)
- Multiple methods including `__init__`
- Complex test setup with operation sequences
- Import custom class in test_imports
- **NEVER include custom solution classes** in test_imports - only import the main solution class specified in solution_class_name

### Dict-based Tree Problems (Trie, etc.)

129 changes: 129 additions & 0 deletions .amazonq/rules/test-case-enhancement.md
@@ -0,0 +1,129 @@
# Test Case Enhancement Rules

## Assistant Workflow for Adding Comprehensive Test Cases

When a user requests enhanced test cases for a problem, the assistant will:

### 1. Problem Resolution (Priority Order)

- **FIRST**: Try to resolve from context - check active file path or user-provided problem name
- **SECOND**: If context resolution fails, THEN run `poetry run python .templates/check_test_cases.py --threshold=10 --max=1` to auto-detect 1 problem with <10 test cases
- **LAST**: If both above fail, ask user to explicitly specify problem name

### 2. Test Case Generation

- Read `leetcode/{problem_name}/README.md` for problem understanding
- Analyze existing test cases in `leetcode/{problem_name}/tests.py`
- Generate comprehensive test cases covering:
  - **Edge cases**: Empty inputs, single elements, boundary values
  - **Corner cases**: Maximum/minimum constraints, special patterns
  - **Normal cases**: Typical scenarios with varied complexity
  - **Error cases**: Invalid inputs (if applicable)

### 3. Initial Validation

- Run `make p-test PROBLEM={problem_name}` to verify current implementation
- **If errors found**:
  - DO NOT update implementation automatically
  - Only update test cases if they're incorrect
  - If implementation seems wrong, ASK USER first before modifying

### 4. JSON Template Update

- Update corresponding `.templates/leetcode/json/{problem_name}.json`
- Add new test cases to `test_cases` field in proper format
- Maintain existing test structure and naming conventions

### 5. Backup and Regeneration Process

- **Backup**: Move `leetcode/{problem_name}/` to `.cache/leetcode/{problem_name}/`
- **Regenerate**: Run `make p-gen PROBLEM={problem_name} FORCE=1`
- **Lint check**: Run `make p-lint PROBLEM={problem_name}`
- **Iterate**: If lint fails, update the JSON and regenerate until the lint check passes

### 6. Solution Preservation

- Copy `solution.py` from backup to newly generated structure
- Run `make p-test PROBLEM={problem_name}` to verify tests pass
- **If tests fail**: Go back to step 4, update the JSON, and iterate until they pass

### 7. Cleanup and Restore

- **CRITICAL**: Remove entire newly generated `leetcode/{problem_name}/` directory
- **CRITICAL**: Restore original structure from `.cache/leetcode/{problem_name}/` backup
- **CRITICAL**: Only THEN copy enhanced `test_solution.py` from generated files to restored structure
- **CRITICAL**: Preserve existing solution class parametrization - if original test had multiple solution classes, restore them
- Verify final state with `make p-test PROBLEM={problem_name}`
- Clean up the backup directory after successful verification (a shell sketch of steps 5-7 follows below)
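
For concreteness, the backup-regenerate-restore flow of steps 5-7 can be sketched as a shell sequence. This is an illustrative sketch only: `two_sum` stands in for `{problem_name}`, and the temporary-copy path is an assumption rather than part of the documented workflow.

```bash
# Illustrative sketch of steps 5-7; two_sum stands in for {problem_name} and
# /tmp is an assumed scratch location, not part of the documented workflow.
problem_name=two_sum

# Step 5: backup, regenerate, and lint
mkdir -p .cache/leetcode
mv "leetcode/${problem_name}" ".cache/leetcode/${problem_name}"
make p-gen PROBLEM="${problem_name}" FORCE=1
make p-lint PROBLEM="${problem_name}"

# Step 6: preserve the original solution and re-run tests
cp ".cache/leetcode/${problem_name}/solution.py" "leetcode/${problem_name}/solution.py"
make p-test PROBLEM="${problem_name}"

# Step 7: restore the original structure, then copy in the enhanced tests
cp "leetcode/${problem_name}/test_solution.py" "/tmp/${problem_name}_test_solution.py"
rm -rf "leetcode/${problem_name}"
mv ".cache/leetcode/${problem_name}" "leetcode/${problem_name}"
cp "/tmp/${problem_name}_test_solution.py" "leetcode/${problem_name}/test_solution.py"
make p-test PROBLEM="${problem_name}"
```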

## Test Case Quality Standards

### Coverage Requirements

- **Minimum 10 test cases** per problem
- **Edge cases**: 20-30% of total test cases
- **Normal cases**: 50-60% of total test cases
- **Corner cases**: 20-30% of total test cases

### Test Case Categories

#### Edge Cases

- Empty inputs: `[]`, `""`, `None`
- Single element: `[1]`, `"a"`
- Boundary values: `[0]`, `[1]`, `[-1]`
- Maximum/minimum constraints from problem description

#### Corner Cases

- Duplicate elements: `[1,1,1]`
- Sorted/reverse sorted arrays: `[1,2,3]`, `[3,2,1]`
- All same elements: `[5,5,5,5]`
- Alternating patterns: `[1,0,1,0]`

#### Normal Cases

- Mixed positive/negative numbers
- Various array sizes within constraints
- Different data patterns and structures
- Representative problem scenarios (a combined sketch of these categories follows below)
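
As an illustration of how these categories might sit together in a parametrize list, here is a minimal, self-contained sketch for a hypothetical array-sum problem (not a problem from this repository; the names are invented for the example):

```python
import pytest

# Hypothetical example mixing edge, corner, and normal cases for an array-sum task.
TEST_CASES = [
    ([], 0),                  # edge: empty input
    ([1], 1),                 # edge: single element
    ([5, 5, 5, 5], 20),       # corner: all same elements
    ([3, -1, 4, -1, 5], 10),  # normal: mixed positive/negative numbers
]


@pytest.mark.parametrize("nums, expected", TEST_CASES)
def test_array_sum(nums: list[int], expected: int):
    assert sum(nums) == expected
```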

### JSON Format Requirements

- Use single quotes for Python strings in test cases
- Follow existing parametrize format
- Maintain type hints in parametrize_typed
- Ensure the test_cases string is valid Python list syntax (see the sketch below)
- **NEVER include custom solution classes** in test_imports - only import the main solution class specified in solution_class_name
- **PRESERVE existing solution class parametrization** - if original test had multiple solution classes, restore them after JSON regeneration
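
A minimal sketch of a `_test_methods` entry that satisfies these requirements, mirroring the `two_sum` fields in `.templates/leetcode/cookiecutter.json` later in this PR (the exact field set for a given problem may differ):

```json
{
  "name": "test_two_sum",
  "signature": "(self, nums: list[int], target: int, expected: list[int])",
  "parametrize": "nums, target, expected",
  "parametrize_typed": "nums: list[int], target: int, expected: list[int]",
  "test_cases": "[([2, 7, 11, 15], 9, [0, 1]), ([3, 2, 4], 6, [1, 2]), ([3, 3], 6, [0, 1])]",
  "body": "    result = run_two_sum(Solution, nums, target)\n    assert_two_sum(result, expected)"
}
```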

## Commands Reference

```bash
# Find problems needing more test cases
poetry run python .templates/check_test_cases.py --threshold=10 --max=1

# Test specific problem
make p-test PROBLEM={problem_name}

# Generate from JSON template
make p-gen PROBLEM={problem_name} FORCE=1

# Lint specific problem
make p-lint PROBLEM={problem_name}
```

## Error Handling

- **Implementation errors**: Ask user before modifying solution code
- **Test failures**: Update JSON template and regenerate
- **Lint failures**: Fix JSON format and iterate
- **Backup failures**: Ensure `.cache/leetcode/` directory exists

## Success Criteria

- All tests pass with enhanced test cases
- Minimum 10 comprehensive test cases per problem
- Original solution code preserved and working
- JSON template updated for future regeneration
- Clean final state with no temporary files
48 changes: 48 additions & 0 deletions .github/workflows/ci-test-reproducibility.yml
@@ -0,0 +1,48 @@
name: ci

on:
  push:
    branches: [main]
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  test-reproducibility:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0

      - name: Set up Python
        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
        with:
          python-version: "3.13"

      - name: Install Poetry
        uses: snok/install-poetry@76e04a911780d5b312d89783f7b1cd627778900a # v1.4.1
        with:
          virtualenvs-create: true
          virtualenvs-in-project: true
          installer-parallel: true

      - name: Load cached venv
        id: cached-poetry-dependencies
        uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          path: .venv
          key: venv-${{ runner.os }}-${{ steps.setup-python.outputs.python-version }}-${{ hashFiles('**/poetry.lock') }}

      - name: Install dependencies
        run: poetry install --no-interaction --no-ansi

      - name: Delete existing problems
        run: rm -rf leetcode/*/

      - name: Regenerate all problems from templates
        run: make gen-all-problems FORCE=1
        env:
          # Skip interactive confirmation
          CI: true

      - name: Run linting to verify reproducibility
        run: make lint
26 changes: 20 additions & 6 deletions .github/workflows/ci-test.yml
@@ -33,19 +33,33 @@ jobs:
          key: venv-${{ runner.os }}-${{ steps.setup-python.outputs.python-version }}-${{ hashFiles('**/poetry.lock') }}

      - name: Install dependencies
-       if: steps.cached-poetry-dependencies.outputs.cache-hit != 'true'
        run: poetry install --no-interaction --no-ansi

-     - name: Cache Graphviz
+     - name: Cache Graphviz installation
        id: cache-graphviz
        uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
-         path: /usr/bin/dot
-         key: graphviz-${{ runner.os }}
+         path: ~/graphviz-cache
+         key: graphviz-installed-${{ runner.os }}

      - name: Install Graphviz
-       if: steps.cache-graphviz.outputs.cache-hit != 'true'
-       run: sudo apt-get update && sudo apt-get install -y graphviz
+       run: |
+         if [ "${{ steps.cache-graphviz.outputs.cache-hit }}" = "true" ]; then
+           sudo cp ~/graphviz-cache/bin/* /usr/bin/ 2>/dev/null || true
+           sudo cp ~/graphviz-cache/lib/* /usr/lib/x86_64-linux-gnu/ 2>/dev/null || true
+           sudo cp -r ~/graphviz-cache/share/graphviz /usr/share/ 2>/dev/null || true
+           sudo cp -r ~/graphviz-cache/lib/graphviz /usr/lib/x86_64-linux-gnu/ 2>/dev/null || true
+           sudo ldconfig
+           sudo dot -c
+         else
+           sudo apt-get update
+           sudo apt-get install -y graphviz
+           mkdir -p ~/graphviz-cache/{bin,lib,share}
+           cp /usr/bin/{dot,neato,twopi,circo,fdp,sfdp,patchwork,osage} ~/graphviz-cache/bin/ 2>/dev/null || true
+           cp /usr/lib/x86_64-linux-gnu/lib{gvc,cgraph,cdt,pathplan,gvpr,lab-gamut,ann,gts}* ~/graphviz-cache/lib/ 2>/dev/null || true
+           cp -r /usr/lib/x86_64-linux-gnu/graphviz ~/graphviz-cache/lib/ 2>/dev/null || true
+           cp -r /usr/share/graphviz ~/graphviz-cache/share/ 2>/dev/null || true
+         fi

      - name: Run tests
        run: make test
91 changes: 91 additions & 0 deletions .templates/check_test_cases.py
@@ -0,0 +1,91 @@
#!/usr/bin/env python3

import json
from pathlib import Path
from typing import Optional
import typer


def count_test_cases(json_data):
    """Count total test cases across all test methods."""
    total = 0

    # Handle both direct test_methods and nested _test_methods.list
    test_methods = json_data.get("test_methods", [])
    if not test_methods and "_test_methods" in json_data:
        test_methods = json_data["_test_methods"].get("list", [])

    for method in test_methods:
        test_cases = method.get("test_cases", "")
        if test_cases.strip():
            # Parse the test_cases string to count actual test cases
            try:
                # Remove outer brackets and split by top-level commas
                cases_str = test_cases.strip()
                if cases_str.startswith("[") and cases_str.endswith("]"):
                    cases_str = cases_str[1:-1]  # Remove outer brackets

                # Count test cases by counting commas at parenthesis depth 0
                depth = 0
                case_count = 1 if cases_str.strip() else 0

                for char in cases_str:
                    if char in "([{":
                        depth += 1
                    elif char in ")]}":
                        depth -= 1
                    elif char == "," and depth == 0:
                        case_count += 1

                total += case_count
            except Exception:
                # Fallback to old method if parsing fails
                total += test_cases.count("(") - test_cases.count("([") + test_cases.count("[(")
    return total


def main(
    threshold: int = typer.Option(
        10, "--threshold", "-t", help="Show files with test cases <= threshold"
    ),
    max_results: str = typer.Option(
        1, "--max", "-m", help="Maximum number of results to show ('none' for no limit)"
    ),
):
    """Check test case counts in LeetCode JSON templates."""
    json_dir = Path(".templates/leetcode/json")
    all_files = []

    for json_file in json_dir.glob("*.json"):
        try:
            with open(json_file) as f:
                data = json.load(f)

            test_count = count_test_cases(data)
            all_files.append((json_file.name, test_count))
        except Exception as e:
            typer.echo(f"Error reading {json_file.name}: {e}", err=True)

    # Sort by test count
    all_files.sort(key=lambda x: x[1])

    # Filter by threshold
    filtered_files = [f for f in all_files if f[1] <= threshold]

    # Apply max results limit
    if max_results.lower() not in ["none", "null", "-1"]:
        try:
            max_count = int(max_results)
            if max_count > 0:
                filtered_files = filtered_files[:max_count]
        except ValueError:
            typer.echo(f"Invalid max_results value: {max_results}", err=True)
            raise typer.Exit(1)

    typer.echo(f"Files with ≤{threshold} test cases ({len(filtered_files)} total):")
    for filename, count in filtered_files:
        typer.echo(f"{filename}: {count} test cases")


if __name__ == "__main__":
    typer.run(main)
36 changes: 24 additions & 12 deletions .templates/leetcode/cookiecutter.json
@@ -18,20 +18,32 @@
"readme_constraints": "- 2 <= nums.length <= 10^4\n- -10^9 <= nums[i] <= 10^9\n- -10^9 <= target <= 10^9\n- Only one valid answer exists.",
"readme_additional": "",

"helpers_imports": "import pytest\nfrom leetcode_py.test_utils import logged_test\nfrom .solution import Solution",
"helpers_content": "",
"helpers_run_name": "two_sum",
"helpers_run_signature": "(solution_class: type, nums: list[int], target: int)",
"helpers_run_body": " implementation = solution_class()\n return implementation.two_sum(nums, target)",
"helpers_assert_name": "two_sum",
"helpers_assert_signature": "(result: list[int], expected: list[int]) -> bool",
"helpers_assert_body": " assert result == expected\n return True",

"solution_imports": "",
"solution_contents": "",
"solution_class_content": "",

"test_imports": "import pytest\nfrom leetcode_py.test_utils import logged_test\nfrom .helpers import assert_two_sum, run_two_sum\nfrom .solution import Solution",
"test_content": "",
"test_class_name": "TwoSum",
"test_class_content": " def setup_method(self):\n self.solution = Solution()",
"_solution_methods": {
"list": [
{
"name": "two_sum",
"parameters": "nums: list[int], target: int",
"return_type": "list[int]",
"dummy_return": "[]"
"signature": "(self, nums: list[int], target: int) -> list[int]",
"body": " # TODO: Implement two_sum\n return []"
}
]
},

"test_imports": "import pytest\nfrom loguru import logger\nfrom leetcode_py.test_utils import logged_test\nfrom .solution import Solution",
"test_class_name": "TwoSum",
"_test_helper_methods": {
"list": [
{
@@ -45,16 +57,16 @@
"list": [
{
"name": "test_two_sum",
"signature": "(self, nums: list[int], target: int, expected: list[int])",
"parametrize": "nums, target, expected",
"parametrize_typed": "nums: list[int], target: int, expected: list[int]",
"test_cases": "[([2, 7, 11, 15], 9, [0, 1]), ([3, 2, 4], 6, [1, 2])]",
"body": "result = self.solution.two_sum(nums, target)\nassert result == expected"
"body": " result = run_two_sum(Solution, nums, target)\n assert_two_sum(result, expected)"
}
]
},

"playground_imports": "from solution import Solution",
"playground_test_case": "# Example test case\nnums = [2, 7, 11, 15]\ntarget = 9\nexpected = [0, 1]",
"playground_execution": "result = Solution().two_sum(nums, target)\nresult",
"playground_assertion": "assert result == expected"
"playground_imports": "from helpers import run_two_sum, assert_two_sum\nfrom solution import Solution",
"playground_setup": "# Example test case\nnums = [2, 7, 11, 15]\ntarget = 9\nexpected = [0, 1]",
"playground_run": "result = run_two_sum(Solution, nums, target)\nresult",
"playground_assert": "assert_two_sum(result, expected)"
}
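
For orientation, the `helpers_*` fields above would plausibly render to a `helpers.py` along these lines for the `two_sum` example. This is a rough sketch inferred from the field values, not the actual template output, and the real generated file may differ in imports and spacing:

```python
# Hypothetical rendering of the helpers_* fields for two_sum; inferred from the
# JSON values above, not taken from the generated repository files.
from .solution import Solution


def run_two_sum(solution_class: type, nums: list[int], target: int):
    implementation = solution_class()
    return implementation.two_sum(nums, target)


def assert_two_sum(result: list[int], expected: list[int]) -> bool:
    assert result == expected
    return True
```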