diff --git a/docs/troubleshooting/error_codes/001_UNSUPPORTED_METHOD.md b/docs/troubleshooting/error_codes/001_UNSUPPORTED_METHOD.md
new file mode 100644
index 00000000000..be9418b0df6
--- /dev/null
+++ b/docs/troubleshooting/error_codes/001_UNSUPPORTED_METHOD.md
@@ -0,0 +1,364 @@
+---
+slug: /troubleshooting/error-codes/001_UNSUPPORTED_METHOD
+sidebar_label: '001 UNSUPPORTED_METHOD'
+doc_type: 'reference'
+keywords: ['error codes', 'UNSUPPORTED_METHOD', '001', 'not supported', 'method']
+title: '001 UNSUPPORTED_METHOD'
+description: 'ClickHouse error code - 001 UNSUPPORTED_METHOD'
+---
+
+# Error 1: UNSUPPORTED_METHOD
+
+:::tip
+This error occurs when you attempt to use a method, operation, or feature that is not supported by the specific storage engine, data type, or context you're working with.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Storage engine limitations**
+ - Attempting write operations on read-only storage (e.g., View, Dictionary)
+ - Using unsupported operations with specific table engines
+ - Trying to modify materialized views directly
+ - Operations not supported by remote/distributed storage
+
+2. **Data type method limitations**
+ - Methods not implemented for specific column types (JSON, Object, Dynamic)
+ - Operations not supported for complex types (Nullable, Array, Tuple)
+ - Serialization/deserialization methods unavailable for certain types
+ - Hash functions not supporting specific data types
+
+3. **Query analyzer limitations**
+ - Correlated subqueries without proper settings
+ - WITH RECURSIVE without new analyzer
+ - Advanced SQL features requiring specific settings
+ - Subquery correlation issues
+
+4. **Feature not available in current version**
+ - Using experimental features without enabling them
+ - Features that exist in newer versions but not in your current version
+ - MySQL dialect compatibility issues
+ - Missing table function support
+
+5. **Integration/connector limitations**
+ - dbt-clickhouse connector limitations with certain operations
+ - External system integration constraints
+ - Protocol-specific limitations (MySQL wire protocol vs native)
+ - Third-party tool incompatibilities
+
+## What to do when you encounter this error {#what-to-do}
+
+**1. Read the error message carefully**
+
+The error message usually tells you exactly what method or operation is not supported:
+
+```text
+Method write is not supported by storage View
+Method serializeValueIntoMemory is not supported for Object
+WITH RECURSIVE is not supported with the old analyzer
+```
+
+**2. Check which storage engine or data type you're using**
+
+```sql
+-- Check table engine
+SELECT engine
+FROM system.tables
+WHERE database = 'your_database'
+ AND name = 'your_table';
+
+-- Check column types
+SELECT
+ name,
+ type
+FROM system.columns
+WHERE table = 'your_table'
+ AND database = 'your_database';
+```
+
+**3. Review your ClickHouse version**
+
+```sql
+SELECT version();
+```
+
+**4. Check if experimental features need to be enabled**
+
+```sql
+-- Enable analyzer for advanced features
+SET allow_experimental_analyzer = 1;
+
+-- Enable correlated subqueries
+SET allow_experimental_correlated_subqueries = 1;
+
+-- Check current settings
+SELECT name, value
+FROM system.settings
+WHERE name LIKE '%experimental%';
+```
+
+## Quick fixes {#quick-fixes}
+
+**1. For write operations on views**
+
+```sql
+-- Instead of writing to a view (fails):
+INSERT INTO my_view VALUES (...);
+
+-- Write to the underlying table:
+INSERT INTO underlying_table VALUES (...);
+
+-- Or drop and recreate as a materialized view with a proper target table
+CREATE MATERIALIZED VIEW my_view TO target_table AS
+SELECT * FROM source_table;
+```
+
+**2. For JSON/Object type operations**
+
+```sql
+-- Instead of using unsupported operations on JSON (fails):
+SELECT my_json_column FROM table GROUP BY my_json_column;
+
+-- Cast to String or extract specific fields:
+SELECT toString(my_json_column) FROM table GROUP BY 1;
+
+-- Or extract and group by specific paths:
+SELECT my_json_column.field1 FROM table GROUP BY 1;
+```
+
+**3. For `WITH RECURSIVE` queries**
+
+```sql
+-- Enable the new analyzer
+SET allow_experimental_analyzer = 1;
+
+-- Then run your recursive query
+WITH RECURSIVE cte AS (
+ SELECT ...
+ UNION ALL
+ SELECT ...
+)
+SELECT * FROM cte;
+```
+
+**4. For correlated subqueries**
+
+```sql
+-- Enable correlated subqueries
+SET allow_experimental_correlated_subqueries = 1;
+
+-- Or rewrite as JOIN
+-- Instead of this (may fail):
+SELECT
+ name,
+ (SELECT value FROM other_table WHERE id = main.id) AS value
+FROM main_table AS main;
+
+-- Use this:
+SELECT
+ m.name,
+ o.value
+FROM main_table AS m
+LEFT JOIN other_table AS o ON m.id = o.id;
+```
+
+**5. For data type compatibility**
+
+```sql
+-- Check if your data type supports the operation
+-- Replace with compatible type if needed:
+
+-- Instead of Nullable(JSON) in GROUP BY:
+SELECT CAST(json_col AS String) AS json_str
+FROM table
+GROUP BY json_str;
+
+-- Or use non-nullable version:
+ALTER TABLE your_table
+ MODIFY COLUMN json_col JSON; -- Remove Nullable wrapper
+```
+
+## Common specific scenarios {#common-scenarios}
+
+**Scenario 1: Method write is not supported by storage View**
+
+```text
+Code: 1. DB::Exception: Method write is not supported by storage View
+```
+
+**Cause:** Attempting to insert data directly into a View. Views are read-only representations of data.
+
+**Solution:**
+
+```sql
+-- If you need a writable view, use a materialized view with a target table
+CREATE MATERIALIZED VIEW mv_name TO target_table AS
+SELECT * FROM source_table;
+
+-- Then inserts to source_table will populate target_table via the MV
+
+-- Or, if you accidentally created a regular view instead of a table:
+DROP VIEW my_view;
+CREATE TABLE my_table (
+ id UInt64,
+ name String
+) ENGINE = MergeTree ORDER BY id;
+```
+
+**Scenario 2: Method serializeValueIntoMemory not supported for Object/JSON**
+
+```text
+Method serializeValueIntoMemory is not supported for Object(max_dynamic_paths=1024, max_dynamic_types=32)
+```
+
+**Cause:** Trying to use GROUP BY or aggregate functions with Nullable(JSON) type.
+
+**Solution:**
+```sql
+-- Instead of this (fails):
+SELECT json_col
+FROM table
+GROUP BY json_col;
+
+-- Use non-nullable JSON:
+SELECT CAST(json_col AS JSON) AS col
+FROM table
+GROUP BY col;
+
+-- Or convert to String first:
+SELECT toString(json_col) AS str_col
+FROM table
+GROUP BY str_col;
+```
+
+**Scenario 3: WITH RECURSIVE not supported with old analyzer**
+
+```text
+WITH RECURSIVE is not supported with the old analyzer. Please use `enable_analyzer=1`
+```
+
+**Cause:** Attempting to use recursive CTEs without the new analyzer.
+
+**Solution:**
+
+```sql
+-- Enable the new analyzer (in recent versions this setting is also available under the name enable_analyzer)
+SET allow_experimental_analyzer = 1;
+
+-- Then your recursive query will work
+WITH RECURSIVE hierarchy AS (
+ SELECT id, parent_id, name, 1 AS level
+ FROM categories
+ WHERE parent_id IS NULL
+
+ UNION ALL
+
+ SELECT c.id, c.parent_id, c.name, h.level + 1
+ FROM categories c
+ INNER JOIN hierarchy h ON c.parent_id = h.id
+)
+SELECT * FROM hierarchy;
+```
+
+**Scenario 4: Correlated subqueries not supported**
+
+```text
+Resolved identifier in parent scope with correlated columns (Enable 'allow_experimental_correlated_subqueries' setting)
+```
+
+**Cause:** Using correlated subqueries without enabling experimental feature.
+
+**Solution:**
+```sql
+-- Option 1: Enable the setting
+SET allow_experimental_correlated_subqueries = 1;
+SET allow_experimental_analyzer = 1;
+
+-- Option 2: Rewrite as JOIN (recommended)
+-- Instead of:
+SELECT
+ p.name,
+ (SELECT l.name FROM platform_lists l WHERE l.id = p.list_id LIMIT 1) AS list_name
+FROM platform_datas_view p;
+
+-- Use:
+SELECT
+ p.name,
+ l.name AS list_name
+FROM platform_datas_view p
+LEFT JOIN platform_lists l ON l.id = p.list_id;
+```
+
+**Scenario 5: Hash functions not supported for JSON type**
+
+```text
+Method getDataAt is not supported for Object
+```
+
+**Cause:** Trying to use hash functions (cityHash64, etc.) on JSON type.
+
+**Solution:**
+
+```sql
+-- Instead of this (fails):
+SELECT cityHash64(json_column) FROM table;
+
+-- Convert to String first:
+SELECT cityHash64(toString(json_column)) FROM table;
+
+-- Or serialize properly:
+SELECT cityHash64(JSONExtractRaw(json_column)) FROM table;
+```
+
+## Prevention best practices {#prevention}
+
+1. **Understand storage engine capabilities**
+ - Views are read-only - use materialized views for writable scenarios
+ - Check engine documentation before using specific operations
+ - Different engines support different operations
+
+2. **Enable experimental features when needed**
+
+ ```sql
+ -- Add to your configuration or session:
+ SET allow_experimental_analyzer = 1;
+ SET allow_experimental_correlated_subqueries = 1;
+ ```
+
+3. **Use compatible data types**
+ - Avoid Nullable wrappers on complex types when possible
+ - Check if types support required operations (GROUP BY, hash, serialization)
+ - Convert to compatible types when necessary
+
+4. **Prefer standard SQL patterns**
+ - Use JOINs instead of correlated subqueries when possible
+ - Avoid deeply nested or complex subqueries
+ - Test compatibility with simpler queries first
+
+5. **Keep ClickHouse updated**
+ - Newer versions support more operations
+ - Check release notes for new features
+ - Many "unsupported" operations become supported in later versions
+
+6. **Review integration tool compatibility**
+ - dbt-clickhouse, MySQL protocol, etc. have specific limitations
+ - Check tool documentation for supported operations
+ - Report issues to tool maintainers
+
+## Related settings {#related-settings}
+
+```sql
+-- Enable new query analyzer (supports more features)
+SET allow_experimental_analyzer = 1;
+
+-- Enable correlated subqueries
+SET allow_experimental_correlated_subqueries = 1;
+
+-- Enable recursive CTEs
+-- (Requires allow_experimental_analyzer = 1)
+
+-- Check all experimental settings
+SELECT name, value, description
+FROM system.settings
+WHERE name LIKE '%experimental%'
+ AND name NOT LIKE '%internal%';
+```
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/003_UNEXPECTED_END_OF_FILE.md b/docs/troubleshooting/error_codes/003_UNEXPECTED_END_OF_FILE.md
new file mode 100644
index 00000000000..7fe9e74ebdc
--- /dev/null
+++ b/docs/troubleshooting/error_codes/003_UNEXPECTED_END_OF_FILE.md
@@ -0,0 +1,380 @@
+---
+slug: /troubleshooting/error-codes/003_UNEXPECTED_END_OF_FILE
+sidebar_label: '003 UNEXPECTED_END_OF_FILE'
+doc_type: 'reference'
+keywords: ['error codes', 'UNEXPECTED_END_OF_FILE', '003', 'EOF', 'truncated', 'corrupted']
+title: '003 UNEXPECTED_END_OF_FILE'
+description: 'ClickHouse error code - 003 UNEXPECTED_END_OF_FILE'
+---
+
+# Error 3: UNEXPECTED_END_OF_FILE
+
+:::tip
+This error occurs when ClickHouse attempts to read data from a file but encounters an unexpected end-of-file (EOF) condition before reading all expected data. This typically indicates file truncation, corruption, or incomplete file writes.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Interrupted or incomplete file writes**
+ - Server crash or restart during data write operations
+ - Network interruption during remote file transfer
+ - Disk I/O errors while writing data
+ - Insufficient disk space during write operations
+ - Missing fsync causing incomplete writes after restart
+
+2. **File corruption or truncation**
+ - Files truncated due to external processes
+ - Metadata files corrupted or incomplete
+ - S3/Object storage upload failures
+ - Empty or zero-byte files in detached parts
+ - Incomplete downloads from remote storage
+
+3. **Part loading failures after restart**
+ - Parts partially downloaded before server restart
+ - Broken parts in detached directory with empty files
+ - Metadata inconsistency (e.g., `data.packed` missing)
+ - Marks files or column files truncated
+
+4. **Remote storage issues**
+ - S3/GCS connection interruptions during reads
+ - Incomplete multipart uploads
+ - Object storage eventual consistency issues
+ - Authentication failures mid-read
+ - Network timeouts truncating responses
+
+5. **Filesystem cache corruption**
+ - Cache files truncated or incomplete
+ - Cache metadata out of sync with actual file size
+ - Concurrent access corruption in cache
+ - Disk issues affecting cache directory
+
+## What to do when you encounter this error {#what-to-do}
+
+**1. Identify which file is affected**
+
+The error message usually indicates the specific file:
+
+```text
+Code: 3. DB::Exception: Unexpected end of file while reading:
+Marks file '/path/to/part/column.mrk2' doesn't exist or is truncated
+```
+
+**2. Check logs for context**
+
+```sql
+-- Find recent errors
+SELECT
+ event_time,
+ query,
+ exception
+FROM system.query_log
+WHERE exception_code = 3
+ AND event_time > now() - INTERVAL 1 HOUR
+ORDER BY event_time DESC
+LIMIT 10;
+```
+
+**3. Inspect broken parts**
+
+```sql
+-- Check for broken detached parts
+SELECT
+ database,
+ table,
+ name,
+ reason,
+ disk
+FROM system.detached_parts
+WHERE name LIKE 'broken%'
+ORDER BY modification_time DESC;
+```
+
+**4. Check for empty files in detached parts**
+
+```bash
+# On the server, look for zero-byte files
+find /var/lib/clickhouse/disks/*/detached/ -type f -size 0
+
+# List broken parts
+find /var/lib/clickhouse/disks/*/detached/ -name "broken-*"
+```
+
+## Quick fixes {#quick-fixes}
+
+**1. For broken detached parts with empty files**
+
+```bash
+# Drop the broken detached parts (they contain no data anyway)
+# First verify their files are truly empty (zero bytes)
+find /var/lib/clickhouse/disks/s3disk/store/*/detached/broken-* -type f -size 0
+
+# Then remove them
+find /var/lib/clickhouse/disks/s3disk/store/*/detached/broken-* -type d -exec rm -rf {} +
+```
+
+**2. For replicated tables - refetch from other replicas**
+
+```sql
+-- If using replication, data can be recovered automatically
+-- Check replica status
+SELECT
+ database,
+ table,
+ is_leader,
+ total_replicas,
+ active_replicas
+FROM system.replicas
+WHERE table = 'your_table';
+
+-- Force check and refetch of missing parts
+SYSTEM RESTART REPLICA your_table;
+```
+
+**3. For corrupted parts on non-replicated tables**
+
+```sql
+-- If part is detached and data exists elsewhere
+ALTER TABLE your_table DROP DETACHED PART 'broken_part_name';
+
+-- If you have backups
+RESTORE TABLE your_table FROM Disk('backups', 'path/to/backup');
+```
+
+**4. For system tables with broken parts**
+
+```sql
+-- System tables can usually be truncated safely
+TRUNCATE TABLE system.query_log;
+TRUNCATE TABLE system.text_log;
+TRUNCATE TABLE system.metric_log;
+
+-- Or restart the server to rebuild
+```
+
+**5. For filesystem cache issues**
+
+```sql
+-- Disable cache temporarily
+SET enable_filesystem_cache = 0;
+
+-- Or clear corrupted cache
+SYSTEM DROP FILESYSTEM CACHE '/path/to/corrupted/file';
+
+-- Clear all cache (use with caution)
+SYSTEM DROP FILESYSTEM CACHE;
+```
+
+## Common specific scenarios {#common-scenarios}
+
+**Scenario 1: Broken parts after server restart**
+
+```text
+Code: 32. DB::Exception: Attempt to read after eof.
+while loading part all_41390134_41390134_0 on path store/.../all_41390134_41390134_0
+```
+
+**Cause:** Server restarted during part write, resulting in empty or truncated metadata files (especially with `data.packed` format).
+
+**Solution:**
+
+```bash
+# Check for zero-byte files
+ls -la /var/lib/clickhouse/disks/s3disk/store/.../detached/broken-on-start_*
+
+# If files are empty (0 bytes), safely remove the broken parts
+find /var/lib/clickhouse/disks/s3disk/store/*/detached/broken-on-start_* -type d -exec rm -rf {} +
+
+# For replicated tables, parts will be refetched automatically
+# For non-replicated tables, data is lost unless you have backups
+```
+
+**Scenario 2: Empty marks file**
+
+```text
+Empty marks file: 0, must be: 75264
+Code: 246. DB::Exception: CORRUPTED_DATA
+```
+
+**Cause:** Marks file is truncated or empty, often due to incomplete S3 writes or cache issues.
+
+**Solution:**
+```sql
+-- Check if other replicas have the data
+SELECT * FROM system.replicas WHERE table = 'your_table';
+
+-- For SharedMergeTree, part will be automatically refetched
+-- For non-replicated MergeTree, try to restore from backup
+
+-- If this is a system table, just truncate it
+TRUNCATE TABLE system.text_log;
+```
+
+**Scenario 3: Cannot read all data - bytes expected vs received**
+
+```text
+Cannot read all data. Bytes read: 32. Bytes expected: 40.
+while loading part 202311_0_158_42_159
+```
+
+**Cause:** File is truncated or corrupted, often occurs with projections after ALTER MODIFY COLUMN operations or incomplete merges.
+
+**Solution:**
+
+```sql
+-- Check for broken projections
+SHOW CREATE TABLE your_table;
+
+-- If part is detached with broken projection:
+-- 1. Extract data from packed format (if using Packed storage)
+-- 2. Remove projection from extracted part
+-- 3. Delete checksums.txt
+-- 4. Attach the part back
+
+-- For replicated tables, easier to just drop and refetch
+ALTER TABLE your_table DROP DETACHED PART 'broken_part_name';
+SYSTEM RESTART REPLICA your_table;
+```
+
+**Scenario 4: Filesystem cache "Having zero bytes" error**
+
+```text
+Having zero bytes, but range is not finished: file offset: 0, cache file size: 11038
+read type: CACHED, cache file path: /mnt/clickhouse-cache/.../0
+```
+
+**Cause:** Filesystem cache file is corrupted or truncated, often occurs with DiskEncrypted or remote reads.
+
+**Solution:**
+
+```sql
+-- Drop specific file from cache
+SYSTEM DROP FILESYSTEM CACHE '/path/to/file';
+
+-- Or disable cache for the query
+SET enable_filesystem_cache = 0;
+
+-- For persistent issues, clear all cache
+SYSTEM DROP FILESYSTEM CACHE;
+
+-- Retry the query
+```
+
+**Scenario 5: S3/Remote storage read truncation**
+
+```text
+Code: 3. DB::Exception: Unexpected end of file while reading from S3
+Connection reset by peer
+```
+
+**Cause:** Network connection dropped during S3 read, authentication expired, or S3 throttling.
+
+**Solution:**
+
+```sql
+-- Increase retry attempts and timeouts
+SET s3_max_single_read_retries = 10;
+SET s3_retry_attempts = 5;
+SET s3_request_timeout_ms = 30000;
+
+-- Check for authentication issues
+-- Verify S3 credentials are valid and not expired
+
+-- For ClickPipes/s3 table functions, retry the operation
+-- The error is usually transient
+```
+
+## Prevention best practices {#prevention}
+
+1. **For replicated tables**
+ - Always use replication for critical data
+ - Configure at least 2-3 replicas
+ - Broken parts will be automatically refetched
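+
+   A minimal sketch of a replicated table definition (the ZooKeeper path and macros are illustrative, adjust them to your cluster):
+
+   ```sql
+   CREATE TABLE your_table
+   (
+       id UInt64,
+       ts DateTime
+   )
+   ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
+   ORDER BY id;
+   ```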
+
+2. **Ensure sufficient disk space**
+
+ ```sql
+ -- Monitor disk usage
+ SELECT
+ name,
+ path,
+ formatReadableSize(free_space) AS free,
+ formatReadableSize(total_space) AS total,
+ round(free_space / total_space * 100, 2) AS free_percent
+ FROM system.disks;
+
+ -- Alert when free space < 20%
+ ```
+
+3. **Monitor broken detached parts**
+
+ ```sql
+ -- Set up monitoring
+ SELECT count()
+ FROM system.detached_parts
+ WHERE name LIKE 'broken%';
+
+ -- Alert when count exceeds threshold
+ -- Check max_broken_detached_parts setting
+ ```
+
+4. **Use proper shutdown procedures**
+
+ ```bash
+ # Graceful shutdown allows ClickHouse to finish writes
+ systemctl stop clickhouse-server
+
+ # Avoid kill -9 or forceful terminations
+ ```
+
+5. **Configure appropriate retry settings for remote storage**
+
+   ```xml
+   <!-- Illustrative example: the same settings as `SET s3_retry_attempts` / `SET s3_request_timeout_ms`,
+        applied server-wide via the default settings profile -->
+   <clickhouse>
+       <profiles>
+           <default>
+               <s3_retry_attempts>5</s3_retry_attempts>
+               <s3_request_timeout_ms>30000</s3_request_timeout_ms>
+           </default>
+       </profiles>
+   </clickhouse>
+   ```
+
+6. **Regular cleanup of broken parts**
+
+ ```bash
+ # Periodically clean up known-broken detached parts
+ # Especially those with zero-byte files
+ find /var/lib/clickhouse/disks/*/detached/broken-* -type f -size 0 -delete
+ ```
+
+## Related settings {#related-settings}
+
+```sql
+-- Control handling of broken parts
+SET max_broken_detached_parts = 100;
+
+-- S3 retry configuration
+SET s3_max_single_read_retries = 10;
+SET s3_retry_attempts = 5;
+SET s3_request_timeout_ms = 30000;
+
+-- Filesystem cache settings
+SET enable_filesystem_cache = 1;
+SET enable_filesystem_cache_on_write_operations = 1;
+
+-- Check current broken parts limit
+SELECT name, value
+FROM system.settings
+WHERE name LIKE '%broken%';
+```
+
+## When data is unrecoverable {#when-unrecoverable}
+
+If you encounter this error and:
+- The table is **not replicated**
+- You have **no backups**
+- The detached parts are **truly corrupted** (not just empty files from restart)
+
+Then the data in those parts is **lost**. Prevention through replication and backups is critical.
+
+For system tables (`query_log`, `text_log`, `metric_log`, etc.), data loss is usually acceptable - just truncate and continue.
diff --git a/docs/troubleshooting/error_codes/006_CANNOT_PARSE_TEXT.md b/docs/troubleshooting/error_codes/006_CANNOT_PARSE_TEXT.md
new file mode 100644
index 00000000000..1bb2e263765
--- /dev/null
+++ b/docs/troubleshooting/error_codes/006_CANNOT_PARSE_TEXT.md
@@ -0,0 +1,375 @@
+---
+slug: /troubleshooting/error-codes/006_CANNOT_PARSE_TEXT
+sidebar_label: '006 CANNOT_PARSE_TEXT'
+doc_type: 'reference'
+keywords: ['error codes', 'CANNOT_PARSE_TEXT', '006', 'parse', 'CSV', 'TSV', 'JSON', 'format']
+title: '006 CANNOT_PARSE_TEXT'
+description: 'ClickHouse error code - 006 CANNOT_PARSE_TEXT'
+---
+
+# Error 6: CANNOT_PARSE_TEXT
+
+:::tip
+This error occurs when ClickHouse cannot parse text data according to the expected format.
+This typically happens during data imports from CSV, TSV, JSON, or other text-based formats when the data doesn't match the expected schema or contains malformed values.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Incorrect format specification**
+ - Using CSV format for tab-delimited files (should use TSV)
+ - Format mismatch between actual data and declared format
+ - Wrong delimiter character specified
+ - Missing or incorrect escape characters
+
+2. **Malformed CSV/TSV data**
+ - Missing delimiters (commas or tabs)
+ - Unescaped special characters in string fields
+ - Quotes not properly closed or escaped
+ - Embedded newlines without proper escaping
+ - Extra or missing columns compared to table schema
+
+3. **Data type mismatches**
+ - String data in numeric columns
+ - Invalid date/datetime formats
+ - Values exceeding type boundaries (e.g., too large for Int32)
+ - Empty strings where numbers are expected
+ - Special characters in numeric fields
+
+4. **Character encoding issues**
+ - UTF-8 encoding errors
+ - Byte order marks (BOM) at file beginning
+ - Invalid characters in string fields
+ - Mixed character encodings in the same file
+
+5. **Inconsistent data structure**
+ - Variable number of columns per row
+ - Headers don't match data rows
+ - Schema inference fails with complex nested data
+ - Mixed data formats within same column
+
+## What to do when you encounter this error {#what-to-do}
+
+**1. Check the error message for specific details**
+
+The error message typically indicates:
+- Which row failed
+- Which column had the problem
+- What was expected vs. what was found
+- The actual parsed text that failed
+
+```text
+Cannot parse input: expected ',' before: 'some_text': (at row 429980)
+Row 429979: Column 8, name: blockspending, type: Int32, ERROR: text "7027181" is not like Int32
+```
+
+**2. Verify the actual file format**
+
+```bash
+# Check first few lines of your file
+head -n 5 your_file.csv
+
+# Check for tabs vs commas
+head -n 1 your_file.csv | od -c
+
+# Check character encoding
+file -i your_file.csv
+```
+
+**3. Test with a small sample**
+
+```sql
+-- Try parsing just the first few rows
+SELECT *
+FROM file('sample.csv', 'CSV')
+LIMIT 10;
+
+-- Let ClickHouse infer the schema
+DESCRIBE file('sample.csv', 'CSV');
+```
+
+**4. Check logs for more details**
+
+```sql
+SELECT
+ event_time,
+ query,
+ exception
+FROM system.query_log
+WHERE exception_code = 6
+ AND event_time > now() - INTERVAL 1 HOUR
+ORDER BY event_time DESC
+LIMIT 5;
+```
+
+## Quick fixes {#quick-fixes}
+
+**1. Use correct format for your data**
+
+```sql
+-- For tab-delimited files
+INSERT INTO table FROM INFILE 'file.tsv' FORMAT TSV;
+-- Or
+INSERT INTO table FROM INFILE 'file.tsv' FORMAT TSVWithNames;
+
+-- For comma-delimited files
+INSERT INTO table FROM INFILE 'file.csv' FORMAT CSV;
+-- Or
+INSERT INTO table FROM INFILE 'file.csv' FORMAT CSVWithNames;
+```
+
+**2. Skip malformed rows**
+
+```sql
+-- Skip specific number of bad rows
+INSERT INTO table
+SELECT * FROM file('data.csv', 'CSV')
+SETTINGS input_format_allow_errors_num = 100;
+
+-- Skip percentage of bad rows
+INSERT INTO table
+SELECT * FROM file('data.csv', 'CSV')
+SETTINGS input_format_allow_errors_ratio = 0.1; -- Allow 10% errors
+```
+
+**3. Handle NULL values correctly**
+
+```sql
+-- Treat empty fields as default values
+SET input_format_null_as_default = 1;
+
+-- For CSV specifically
+SET input_format_csv_empty_as_default = 1;
+
+-- Skip fields in the input that are not present in the table
+SET input_format_skip_unknown_fields = 1;
+```
+
+**4. Use custom delimiters for tab-delimited CSV**
+
+```sql
+-- For tab-delimited data with CSV quoting
+SET format_custom_escaping_rule = 'CSV';
+SET format_custom_field_delimiter = '\x09'; -- Tab character
+
+INSERT INTO table FROM INFILE 'data.tsv' FORMAT CustomSeparated;
+```
+
+**5. Specify schema explicitly**
+
+```sql
+-- Instead of relying on schema inference
+SELECT * FROM file(
+ 'data.csv',
+ 'CSV',
+ 'id UInt64, name String, date Date, value Float64'
+);
+```
+
+## Common specific scenarios {#common-scenarios}
+
+**Scenario 1: CSV expected comma but found tab**
+
+```text
+Cannot parse input: expected ',' before: '7027181'
+```
+
+**Cause:** File is actually tab-delimited (TSV) but being read as CSV.
+
+**Solution:**
+```sql
+-- Use TSV format instead
+INSERT INTO table FROM INFILE 'file.tsv' FORMAT TSVWithNames;
+
+-- Or if you must use CSV-style quoting with tabs
+SET format_custom_escaping_rule = 'CSV';
+SET format_custom_field_delimiter = '\x09';
+INSERT INTO table FROM INFILE 'file.tsv' FORMAT CustomSeparated;
+```
+
+**Scenario 2: Malformed string with embedded delimiters**
+
+```text
+Cannot parse input: expected '\t' before: 'I49d(I\""\t\t\t13\t1350000'
+```
+
+**Cause:** String field contains delimiter characters (tabs, commas) and special characters that aren't properly escaped or quoted.
+
+**Solution:**
+```sql
+-- Use CSV-style escaping for tab-delimited data
+SET format_custom_escaping_rule = 'CSV';
+SET format_custom_field_delimiter = '\x09';
+
+-- Allow errors in problematic rows
+SET input_format_allow_errors_num = 100;
+
+INSERT INTO table FROM INFILE 'file.tsv' FORMAT CustomSeparated;
+```
+
+**Scenario 3: Syntax error at unexpected position**
+
+```text
+Syntax error: failed at position 1 ('85c59771') (line 1, col 1): 85c59771-ae5d-4a53-9eed...
+```
+
+**Cause:** Wrong format specified - file is TSV but being read as CSV.
+
+**Solution:**
+```sql
+-- Check actual delimiter in file
+-- If you see wide spacing, it's likely tabs not commas
+
+-- Use TSV instead of CSV
+SELECT * FROM file('data.tsv', 'TSVWithNames');
+```
+
+**Scenario 4: Cannot parse decimal type from Parquet**
+
+```text
+Cannot parse type Decimal(76, 38), expected non-empty binary data with size equal to or less than 32, got 36
+```
+
+**Cause:** The Parquet file stores the decimal as a fixed-length binary wider than 32 bytes, which does not fit ClickHouse's Decimal256 storage (32 bytes), even though a precision of 76 is nominally supported.
+
+**Solution:**
+```sql
+-- Read as String first, then convert
+SELECT
+ CAST(decimal_col AS Decimal(38, 10)) AS decimal_col
+FROM file('data.parquet', 'Parquet', 'decimal_col String, ...');
+
+-- Or use Double for very large values
+SELECT
+ toFloat64(decimal_col) AS decimal_col
+FROM file('data.parquet', 'Parquet', 'decimal_col String, ...');
+```
+
+**Scenario 5: Schema inference fails on complex data**
+
+```text
+The table structure cannot be extracted from a JSONEachRow format file
+```
+
+**Cause:** File is empty, inaccessible, or schema inference can't determine structure from sample.
+
+**Solution:**
+```sql
+-- Increase bytes read for schema inference
+SET input_format_max_bytes_to_read_for_schema_inference = 999999999;
+
+-- Or specify schema manually
+SELECT * FROM s3(
+ 'https://bucket/file.json',
+ 'JSONEachRow',
+ 'id UInt64, name String, data String'
+);
+```
+
+## Prevention best practices {#prevention}
+
+1. **Validate data format before importing**
+ ```bash
+ # Check actual delimiter
+ head -n 1 file.csv | od -c
+
+ # Verify consistent column count
+ awk -F',' 'NR==1{cols=NF} NF!=cols{print "Line " NR " has " NF " columns"}' file.csv
+
+ # Check for encoding issues
+ file -i file.csv
+ ```
+
+2. **Use appropriate format for your data**
+ - CSV: Comma-delimited with optional CSV-style quoting
+ - TSV/TabSeparated: Tab-delimited, no quoting
+ - TSVWithNames: Tab-delimited with header row
+ - CustomSeparated: Custom delimiter with CSV-style quoting
+
+3. **Test with small samples first**
+ ```sql
+ -- Test schema inference
+ DESCRIBE file('sample.csv', 'CSV');
+
+ -- Test parsing first 100 rows
+ SELECT * FROM file('sample.csv', 'CSV') LIMIT 100;
+ ```
+
+4. **Specify schemas explicitly for production**
+ ```sql
+ -- Don't rely on inference for critical imports
+ SELECT * FROM file(
+ 'data.csv',
+ 'CSV',
+ 'id UInt64, timestamp DateTime, value Float64, status String'
+ );
+ ```
+
+5. **Use settings to handle imperfect data**
+ ```sql
+ -- Common settings for dealing with real-world data
+ SET input_format_allow_errors_ratio = 0.01; -- Allow 1% errors
+ SET input_format_null_as_default = 1; -- Empty = default
+ SET input_format_skip_unknown_fields = 1; -- Ignore extra fields
+ SET input_format_csv_empty_as_default = 1; -- Empty CSV fields = default
+ ```
+
+6. **Monitor parsing errors**
+ ```sql
+ -- Set up monitoring query
+ SELECT
+ count() AS error_count,
+ any(exception) AS sample_error
+ FROM system.query_log
+ WHERE exception_code = 6
+ AND event_time > now() - INTERVAL 1 DAY;
+ ```
+
+## Related settings {#related-settings}
+
+```sql
+-- Error handling
+SET input_format_allow_errors_num = 100; -- Skip N bad rows
+SET input_format_allow_errors_ratio = 0.1; -- Skip up to 10% bad rows
+
+-- NULL and default handling
+SET input_format_null_as_default = 1; -- NULL becomes default value
+SET input_format_csv_empty_as_default = 1; -- Empty CSV field = default
+SET input_format_skip_unknown_fields = 1; -- Ignore extra columns
+
+-- Schema inference
+SET input_format_max_bytes_to_read_for_schema_inference = 1000000;
+SET schema_inference_make_columns_nullable = 0; -- Don't infer Nullable types
+
+-- CSV-specific
+SET format_csv_delimiter = ','; -- CSV delimiter
+SET format_csv_allow_single_quotes = 1; -- Allow single quotes
+SET format_csv_allow_double_quotes = 1; -- Allow double quotes
+
+-- Custom format
+SET format_custom_escaping_rule = 'CSV'; -- Use CSV escaping
+SET format_custom_field_delimiter = '\x09'; -- Tab delimiter
+
+-- Date/time parsing
+SET date_time_input_format = 'best_effort'; -- Flexible date parsing
+```
+
+## Debugging tips {#debugging-tips}
+
+```sql
+-- 1. Check what ClickHouse sees in the problematic row
+SELECT * FROM file('data.csv', 'CSV')
+WHERE rowNumberInAllBlocks() = 429980; -- The row number from error
+
+-- 2. Examine the raw bytes
+SELECT hex(column_name) FROM file('data.csv', 'CSV', 'column_name String')
+LIMIT 10;
+
+-- 3. Test different formats
+SELECT * FROM file('data.txt', 'TSV') LIMIT 5;
+SELECT * FROM file('data.txt', 'CSV') LIMIT 5;
+SELECT * FROM file('data.txt', 'CSVWithNames') LIMIT 5;
+
+-- 4. Use LineAsString to see raw data
+SELECT * FROM file('data.csv', 'LineAsString') LIMIT 10;
+```
diff --git a/docs/troubleshooting/error_codes/010_NOT_FOUND_COLUMN_IN_BLOCK.md b/docs/troubleshooting/error_codes/010_NOT_FOUND_COLUMN_IN_BLOCK.md
new file mode 100644
index 00000000000..d765bf069b6
--- /dev/null
+++ b/docs/troubleshooting/error_codes/010_NOT_FOUND_COLUMN_IN_BLOCK.md
@@ -0,0 +1,106 @@
+---
+slug: /troubleshooting/error-codes/010_NOT_FOUND_COLUMN_IN_BLOCK
+sidebar_label: '010 NOT_FOUND_COLUMN_IN_BLOCK'
+doc_type: 'reference'
+keywords: ['error codes', 'NOT_FOUND_COLUMN_IN_BLOCK', '010']
+title: '010 NOT_FOUND_COLUMN_IN_BLOCK'
+description: 'ClickHouse error code - 010 NOT_FOUND_COLUMN_IN_BLOCK'
+---
+
+# Error 10: NOT_FOUND_COLUMN_IN_BLOCK
+
+:::tip
+This error occurs when ClickHouse attempts to access a column that doesn't exist in a data block during query execution, merge operations, or mutations.
+It typically indicates schema inconsistency between table metadata and actual data parts.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Schema Evolution Issues with Mutations**
+ - Mutations fail when trying to process parts that were created before certain columns were added to the table
+ - Parts with different data versions have different column sets
+ - Missing internal columns like `_block_number` during DELETE mutations
+
+2. **`ALTER TABLE` Operations Gone Wrong**
+ - Column additions/modifications not properly applied to all parts
+ - Incomplete mutations that leave some parts with old schema
+
+3. **Missing Internal Columns**
+ - `_block_number` column missing during `DELETE` mutations (very common case)
+ - Virtual columns expected but not present in older data parts
+
+4. **Projection-Related Issues**
+ - Materialized projections referencing columns that don't exist in older parts
+ - Projection calculations failing when columns are missing from source data
+
+## Common solutions {#common-solutions}
+
+**1. Kill and Retry the Mutation**
+
+```sql
+-- Check stuck mutations
+SELECT * FROM system.mutations WHERE NOT is_done;
+
+-- Kill problematic mutation
+KILL MUTATION WHERE mutation_id = 'your_mutation_id';
+
+-- Retry the operation
+```
+
+**2. Force Part Merges**
+
+```sql
+-- For specific table
+OPTIMIZE TABLE your_table FINAL;
+```
+
+This can help consolidate parts with different schemas.
+
+**3. Check Part Versions**
+
+```sql
+SELECT
+ data_version,
+ count(),
+ groupArray(name)
+FROM system.parts
+WHERE database = 'your_db' AND table = 'your_table'
+GROUP BY data_version;
+```
+
+Look for parts with very old data versions that might be missing columns.
+
+**4. Verify Column Presence Across Parts**
+- Old parts created before column additions may be missing columns
+- Use `clickhouse-disk` utility to inspect actual column metadata in parts
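+
+As a sketch, you can also compare per-part column metadata from SQL via `system.parts_columns` (replace `your_db`, `your_table`, and `your_column` with your own names):
+
+```sql
+-- Active parts that do not contain the column at all
+SELECT name
+FROM system.parts
+WHERE database = 'your_db' AND table = 'your_table' AND active
+  AND name NOT IN (
+      SELECT name
+      FROM system.parts_columns
+      WHERE database = 'your_db' AND table = 'your_table' AND column = 'your_column'
+  );
+```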
+
+**5. For Missing `_block_number` Errors**
+This is a known issue with `DELETE` mutations on tables with older parts. Solutions:
+- Kill the mutation and retry
+- Consider using lightweight deletes if available in your version
+- Upgrade to newer ClickHouse versions where this is fixed
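+
+A sketch of the lightweight-delete alternative (available in recent ClickHouse versions; older releases required enabling `allow_experimental_lightweight_delete`, and the filter below is illustrative):
+
+```sql
+-- Lightweight delete: marks rows as deleted instead of rewriting whole parts
+DELETE FROM your_table WHERE created_at < '2023-01-01';
+```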
+
+**6. For Projection Errors**
+
+If the error occurs during projection materialization:
+
+```sql
+-- Drop and recreate the projection
+ALTER TABLE your_table DROP PROJECTION projection_name;
+ALTER TABLE your_table ADD PROJECTION projection_name (...);
+ALTER TABLE your_table MATERIALIZE PROJECTION projection_name;
+```
+
+## Prevention tips {#prevention-tips}
+
+1. **Plan Schema Changes Carefully**: Understand that all existing parts need to be processed when adding columns used in mutations
+2. **Monitor Mutation Queue**: Regularly check `system.mutations` for stuck operations
+3. **Use Proper `ALTER` Syntax**: Ensure `ALTER TABLE` operations complete successfully
+4. **Keep ClickHouse Updated**: Many of these issues are fixed in newer versions
+5. **Regular `OPTIMIZE` Operations**: Help consolidate parts and maintain schema consistency
+
+If you're experiencing this error, it is recommended to:
+1. Check `system.mutations` to identify the stuck mutation
+2. Examine part versions to find schema inconsistencies
+3. Kill and retry the mutation as a first step
+4. If it persists, consider escalating to ClickHouse support with specific details about your table schema and the failing operation
diff --git a/docs/troubleshooting/error_codes/013_DUPLICATE_COLUMN.md b/docs/troubleshooting/error_codes/013_DUPLICATE_COLUMN.md
new file mode 100644
index 00000000000..daea1f45d79
--- /dev/null
+++ b/docs/troubleshooting/error_codes/013_DUPLICATE_COLUMN.md
@@ -0,0 +1,228 @@
+---
+slug: /troubleshooting/error-codes/013_DUPLICATE_COLUMN
+sidebar_label: '013 DUPLICATE_COLUMN'
+doc_type: 'reference'
+keywords: ['error codes', 'DUPLICATE_COLUMN', '013', '015', 'duplicate', 'column', 'alias']
+title: '013 DUPLICATE_COLUMN'
+description: 'ClickHouse error code - 013 DUPLICATE_COLUMN'
+---
+
+# Error 13: DUPLICATE_COLUMN
+
+:::tip
+This error occurs when you attempt to create or add a column with a name that already exists in the table, or when you use duplicate aliases in queries.
+ClickHouse column names are case-sensitive, so `name` and `Name` are different columns.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **ALTER TABLE operations on replicated tables**
+ - Replicated DDL executed twice due to race condition
+ - Distributed DDL task executed on multiple replicas simultaneously
+ - ZooKeeper lock timing issues causing duplicate execution
+ - Custom ZooKeeper paths with inconsistent shard/replica configuration
+
+2. **Creating tables with duplicate column names**
+ - Accidentally specifying the same column name twice in CREATE TABLE
+ - Case-sensitive column names (`name` vs `Name`) treated as different by ClickHouse
+ - Schema inference creating conflicts with explicit column definitions
+
+3. **Duplicate aliases in queries**
+ - Same alias used for multiple expressions in SELECT
+ - Column alias conflicts with table alias
+ - WITH clause alias conflicts with SELECT alias
+ - Materialized column aliases reused in multiple column definitions
+
+4. **INSERT with conflicting column mappings**
+ - Same column selected multiple times with different aliases
+ - Schema inference from source conflicts with target table
+ - [`use_structure_from_insertion_table_in_table_functions`](/operations/settings/settings#use_structure_from_insertion_table_in_table_functions) causes conflicts
+
+5. **Query analyzer issues with aliases**
+ - New analyzer stricter about duplicate aliases than old analyzer
+ - Column and table sharing same alias name
+ - Nested subqueries with conflicting aliases
+
+## What to do when you encounter this error {#what-to-do}
+
+**1. Check the error message for the specific column**
+
+The error message indicates which column name is duplicated:
+
+```text
+Cannot add column `remark`: column with this name already exists
+Cannot add column bid: column with this name already exists
+Different expressions with the same alias custom_properties_map
+```
+
+**2. Check existing columns in the table**
+
+```sql
+-- View all columns and their types
+SELECT
+ name,
+ type,
+ position
+FROM system.columns
+WHERE table = 'your_table'
+ AND database = 'your_database'
+ORDER BY position;
+
+-- Check for case-sensitive duplicates
+SELECT
+ name,
+ count() AS cnt
+FROM system.columns
+WHERE table = 'your_table'
+ AND database = 'your_database'
+GROUP BY name
+HAVING cnt > 1;
+```
+
+**3. For replicated tables, check if the operation actually succeeded**
+
+```sql
+-- Even if error appears, check if column was added
+SHOW CREATE TABLE your_table;
+
+-- Check DDL queue status
+SELECT *
+FROM system.distributed_ddl_queue
+WHERE entry LIKE '%ADD COLUMN%'
+ORDER BY entry_create_time DESC
+LIMIT 10;
+```
+
+**4. Review recent DDL operations**
+
+```sql
+-- Check for duplicate DDL executions
+SELECT
+ event_time,
+ query,
+ exception
+FROM system.query_log
+WHERE exception_code IN (13, 15)
+ AND event_time > now() - INTERVAL 1 HOUR
+ORDER BY event_time DESC;
+```
+
+## Quick fixes {#quick-fixes}
+
+**1. Use `IF NOT EXISTS` for idempotent operations**
+
+```sql
+-- Instead of this (may fail if column exists):
+ALTER TABLE your_table ADD COLUMN new_col String;
+
+-- Use this (safe to run multiple times):
+ALTER TABLE your_table ADD COLUMN IF NOT EXISTS new_col String;
+
+-- Similarly for DROP
+ALTER TABLE your_table DROP COLUMN IF EXISTS old_col;
+```
+
+**2. Rename duplicate column in `CREATE TABLE`**
+
+```sql
+-- Repeating the exact same name fails with DUPLICATE_COLUMN;
+-- names that differ only by case are accepted but easy to confuse:
+CREATE TABLE users (
+    uid Int16,
+    name String,
+    age Int16,
+    Name String  -- Accepted (column names are case-sensitive), but confusing alongside `name`
+) ENGINE = Memory;
+
+-- Prefer unique, descriptive names:
+CREATE TABLE users (
+ uid Int16,
+ name String,
+ age Int16,
+ full_name String
+) ENGINE = Memory;
+```
+
+**3. Fix duplicate aliases in queries**
+
+```sql
+-- Instead of this (fails):
+SELECT
+ map('name', errors.name) AS labels,
+ value,
+ 'ch_errors_total' AS name -- Conflicts with errors.name in map()
+FROM system.errors;
+
+-- Use this (works):
+SELECT
+ map('name', errors.name) AS labels,
+ value,
+ 'ch_errors_total' AS metric_name -- Different alias
+FROM system.errors;
+```
+
+**4. Fix duplicate aliases in materialized columns**
+
+```sql
+-- Instead of this (fails on restart):
+CREATE TABLE events (
+ properties String,
+ custom_map Map(String, String) MATERIALIZED
+ mapFromArrays(...JSONExtractKeysAndValuesRaw(properties) AS custom_properties_map...),
+ custom_map_sorted Map(String, String) MATERIALIZED
+ mapSort(...JSONExtractKeysAndValuesRaw(properties) AS custom_properties_map...)
+ -- Same alias 'custom_properties_map' used twice!
+) ENGINE = MergeTree ORDER BY tuple();
+
+-- Use unique aliases:
+CREATE TABLE events (
+ properties String,
+ custom_map Map(String, String) MATERIALIZED
+ mapFromArrays(...JSONExtractKeysAndValuesRaw(properties) AS custom_properties_map...),
+ custom_map_sorted Map(String, String) MATERIALIZED
+ mapSort(...JSONExtractKeysAndValuesRaw(properties) AS custom_properties_map_smaller...)
+) ENGINE = MergeTree ORDER BY tuple();
+```
+
+**5. For `INSERT` with conflicting aliases**
+
+```sql
+-- Disable automatic structure inference
+SET use_structure_from_insertion_table_in_table_functions = 0;
+
+-- Then run your INSERT
+INSERT INTO target_table
+SELECT
+ datetime,
+ base,
+ quote,
+ bid AS bid_v1,
+ bid AS bid_v2,
+ bid AS bid_v3
+FROM s3('file.csv', 'CSVWithNames');
+```
+
+## Common specific scenarios {#common-scenarios}
+
+**Scenario 1: Replicated DDL executed twice (race condition)**
+
+```text
+Cannot add column `remark`: column with this name already exists. (DUPLICATE_COLUMN)
+Task query-0000000004 was not executed by anyone, maximum number of retries exceeded
+```
+
+**Cause:** Two replicas both attempted to execute the same DDL task due to ZooKeeper lock timing issues. The column was actually added successfully on the first execution, but the second execution failed with DUPLICATE_COLUMN.
+
+**Solution:**
+
+```sql
+-- The operation actually succeeded despite the error
+-- Verify the column exists:
+SHOW CREATE TABLE your_table;
+
+-- Use idempotent syntax going forward:
+ALTER TABLE your_table ADD COLUMN IF NOT EXISTS new_col String;
+```
+
+:::note
+This was a bug in versions before 22.4, fixed in [PR #31295](https://github.com/ClickHouse/ClickHouse/pull/31295).
+:::
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/027_CANNOT_READ_ALL_DATA.md b/docs/troubleshooting/error_codes/027_CANNOT_READ_ALL_DATA.md
new file mode 100644
index 00000000000..f3938c94e36
--- /dev/null
+++ b/docs/troubleshooting/error_codes/027_CANNOT_READ_ALL_DATA.md
@@ -0,0 +1,365 @@
+---
+slug: /troubleshooting/error-codes/027_CANNOT_READ_ALL_DATA
+sidebar_label: '027 CANNOT_READ_ALL_DATA'
+doc_type: 'reference'
+keywords: ['error codes', 'CANNOT_READ_ALL_DATA', '027', 'corrupted', 'truncated', 'S3', 'remote storage']
+title: '027 CANNOT_READ_ALL_DATA'
+description: 'ClickHouse error code - 027 CANNOT_READ_ALL_DATA'
+---
+
+# Error 27: CANNOT_READ_ALL_DATA
+
+:::tip
+This error occurs when ClickHouse expects to read a certain number of bytes from a file but receives fewer bytes than expected.
+This typically indicates file corruption, truncation, or issues with remote storage reads.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **File corruption or truncation**
+ - Data part files corrupted on disk
+ - Incomplete writes due to server crashes
+ - Mark files (.mrk, .mrk2) truncated or corrupted
+ - Column data files incomplete or damaged
+ - Checksums mismatch after decompression
+
+2. **Remote storage (S3/Object storage) issues**
+ - Network interruptions during S3 reads
+ - S3 authentication expiration mid-read
+ - Eventual consistency issues with object storage
+ - Missing or deleted objects in S3
+ - S3 throttling causing incomplete reads
+
+3. **LowCardinality column serialization issues**
+ - Bug with LowCardinality columns when using `remote_filesystem_read_method=threadpool`
+ - Invalid version for SerializationLowCardinality key column
+ - Specific to S3 disks with certain data patterns
+ - Fixed in recent versions but may still occur
+
+4. **JSON/Dynamic column type issues**
+ - Corrupted variant discriminator files (`.variant_discr.cmrk2`)
+ - Issues with JSON field serialization
+ - Problems reading Dynamic type metadata
+ - SerializationObject state prefix errors
+
+5. **Packed/compressed file format issues**
+ - `data.packed` metadata corruption
+ - Incomplete compression or decompression
+ - Marks file doesn't match actual data
+ - Issues with wide vs compact format parts
+
+## What to do when you encounter this error {#what-to-do}
+
+**1. Identify the corrupted part**
+
+The error message specifies which part and column failed:
+
+```text
+Cannot read all data. Bytes read: 114. Bytes expected: 266.:
+(while reading column operationName): (while reading from part
+/var/lib/clickhouse/.../1670889600_0_33677_2140/ from mark 26)
+```
+
+**2. Check if the table is replicated**
+
+```sql
+-- Check table engine
+SELECT engine
+FROM system.tables
+WHERE database = 'your_database' AND name = 'your_table';
+
+-- If replicated, check replicas status
+SELECT *
+FROM system.replicas
+WHERE database = 'your_database' AND table = 'your_table';
+```
+
+**3. Check for detached broken parts**
+
+```sql
+-- Check broken detached parts
+SELECT
+ database,
+ table,
+ name,
+ reason
+FROM system.detached_parts
+WHERE name LIKE 'broken%'
+ORDER BY modification_time DESC;
+```
+
+**4. Review logs for error context**
+
+```sql
+SELECT
+ event_time,
+ query_id,
+ exception
+FROM system.query_log
+WHERE exception_code = 27
+ AND event_time > now() - INTERVAL 1 HOUR
+ORDER BY event_time DESC
+LIMIT 10;
+```
+
+## Quick fixes {#quick-fixes}
+
+**1. For replicated tables - refetch from other replicas**
+
+```sql
+-- ClickHouse will automatically refetch corrupted parts
+-- You can force a sync:
+SYSTEM RESTART REPLICA your_table;
+
+-- Or detach and reattach the broken part to trigger refetch
+-- (part will be refetched from other replicas automatically)
+```
+
+**2. For LowCardinality on S3 - use alternative read method**
+
+```sql
+-- Workaround for LowCardinality + S3 bug
+SET remote_filesystem_read_method = 'read'; -- Instead of 'threadpool'
+
+-- Then retry the query
+-- Note: This has performance implications
+```
+
+**3. For corrupted parts - detach and rebuild**
+
+```sql
+-- For non-replicated tables, if you have backups
+ALTER TABLE your_table DETACH PARTITION 'partition_id';
+
+-- Restore from backup or reinsert data
+
+-- For replicated tables, just detach and ClickHouse will refetch
+ALTER TABLE your_table DETACH PARTITION 'partition_id';
+ALTER TABLE your_table ATTACH PARTITION 'partition_id';
+```
+
+**4. For broken detached parts on restart**
+
+```bash
+# If parts are truly broken, their files are typically zero bytes
+# Check for empty files
+find /var/lib/clickhouse/disks/*/detached/broken-on-start_* -type f -size 0
+
+# Remove broken detached parts (they contain no valid data)
+find /var/lib/clickhouse/disks/*/detached/broken-on-start_* -type d -exec rm -rf {} +
+```
+
+**5. Retry S3-related errors**
+
+```sql
+-- Increase retry settings for S3
+SET s3_max_single_read_retries = 10;
+SET s3_retry_attempts = 5;
+SET s3_request_timeout_ms = 30000;
+
+-- Then retry the query
+-- Often S3 errors are transient
+```
+
+## Common specific scenarios {#common-scenarios}
+
+**Scenario 1: LowCardinality column on S3 with threadpool read method**
+
+```text
+Cannot read all data. Bytes read: 114. Bytes expected: 266.:
+(while reading column operationName): (while reading from part
+/var/lib/clickhouse/disks/s3_disk/store/.../1670889600_0_33677_2140/)
+```
+
+**Cause:** Bug with LowCardinality columns when using `remote_filesystem_read_method=threadpool` on S3 storage. Specific data patterns trigger incomplete reads.
+
+**Solution:**
+```sql
+-- Immediate workaround
+SET remote_filesystem_read_method = 'read';
+
+-- Then run your query
+SELECT * FROM your_table;
+
+-- Note: This setting has performance impact
+-- Upgrade to latest ClickHouse version for permanent fix
+```
+
+**Scenario 2: JSON field variant discriminator corruption**
+
+```text
+Cannot read all data. Bytes read: 7. Bytes expected: 25.:
+While reading or decompressing dimensions.Phone_number.variant_discr.cmrk2
+```
+
+**Cause:** Corruption in JSON/Dynamic column variant discriminator mark files.
+
+**Solution:**
+```sql
+-- Check if table is replicated
+-- If yes, ClickHouse will automatically handle it
+
+-- For persistent issues, try to rebuild affected partitions
+ALTER TABLE events DETACH PARTITION '202501';
+ALTER TABLE events ATTACH PARTITION '202501';
+```
+
+**Scenario 3: Packed data format corruption on restart**
+
+```text
+Code: 32. DB::Exception: Attempt to read after eof. (ATTEMPT_TO_READ_AFTER_EOF)
+while loading part all_10009167_10009239_16 from disk s3disk
+```
+
+**Cause:** Server restarted while writing packed format data, leaving `data.packed` metadata incomplete or corrupted.
+
+**Solution:**
+```bash
+# Check for broken-on-start parts
+clickhouse-client --query "
+SELECT count()
+FROM system.detached_parts
+WHERE name LIKE 'broken-on-start%'"
+
+# If files are zero bytes, safely remove them
+find /var/lib/clickhouse/disks/*/detached/broken-on-start_* -type f -size 0 -delete
+
+# For replicated tables, parts will be refetched automatically
+# For non-replicated tables without backups, data is lost
+```
+
+**Scenario 4: S3 network interruption during read**
+
+```text
+Cannot read all data. Bytes read: 28248. Bytes expected: 38739.:
+(while reading from part .../202206_10626_10770_3/ from mark 0)
+Connection reset by peer
+```
+
+**Cause:** Network connection to S3 dropped during read, or S3 throttling occurred.
+
+**Solution:**
+```sql
+-- Configure more aggressive S3 retries
+SET s3_max_single_read_retries = 10;
+SET s3_retry_attempts = 5;
+SET s3_request_timeout_ms = 30000;
+
+-- Retry the query
+-- Error is usually transient
+```
+
+**Scenario 5: Invalid SerializationLowCardinality version**
+
+```text
+Invalid version for SerializationLowCardinality key column:
+(while reading column valuation_result_type)
+```
+
+**Cause:** Rare race condition or corruption in LowCardinality column serialization, potentially related to concurrent reads during async inserts.
+
+**Solution:**
+```sql
+-- Check if this is a replicated table
+SELECT engine FROM system.tables WHERE name = 'your_table';
+
+-- For SharedMergeTree, part will be marked as broken and refetched
+-- Query may succeed on retry:
+SELECT * FROM your_table; -- Retry the same query
+
+-- If persistent, check for recent merges/mutations
+SELECT * FROM system.mutations WHERE table = 'your_table' AND is_done = 0;
+```
+
+## Prevention best practices {#prevention}
+
+1. **Always use replication for critical data**
+
+ ```sql
+ -- Use ReplicatedMergeTree instead of MergeTree
+ ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
+
+ -- Configure at least 2-3 replicas
+ -- Corrupted parts will be automatically refetched
+ ```
+
+2. **Monitor S3/remote storage health**
+
+ ```sql
+ -- Check S3 error rates
+ SELECT
+ count() AS errors,
+ any(exception) AS sample
+ FROM system.query_log
+ WHERE exception LIKE '%S3_ERROR%'
+ AND event_time > now() - INTERVAL 1 DAY;
+ ```
+
+3. **Use appropriate settings for S3 reads**
+
+ ```sql
+ -- For production workloads with LowCardinality on S3
+ SET remote_filesystem_read_method = 'read';
+
+ -- Or configure in server config:
+   -- <remote_filesystem_read_method>read</remote_filesystem_read_method> (in the default settings profile)
+ ```
+
+4. **Avoid packed format for volatile environments**
+
+ ```sql
+ -- If experiencing frequent restarts
+ -- Consider using wide format instead of compact/packed
+ ALTER TABLE your_table
+ MODIFY SETTING min_bytes_for_wide_part = 0;
+ ```
+
+5. **Monitor broken detached parts**
+
+ ```sql
+ -- Set up monitoring
+ SELECT count()
+ FROM system.detached_parts
+ WHERE name LIKE 'broken%';
+
+ -- Alert when count exceeds threshold
+ -- Investigate logs when parts are being detached
+ ```
+
+6. **Regular backups**
+
+ ```sql
+ -- Use BACKUP/RESTORE or freeze partitions
+ BACKUP TABLE your_table TO Disk('backups', 'backup_name');
+
+ -- Or freeze specific partitions
+ ALTER TABLE your_table FREEZE PARTITION '2024-01';
+ ```
+
+## Related error codes {#related-errors}
+
+- **Error 3 `UNEXPECTED_END_OF_FILE`**: Similar to CANNOT_READ_ALL_DATA, but typically indicates the file was truncated
+- **Error 32 `ATTEMPT_TO_READ_AFTER_EOF`**: Trying to read past end of file
+- **Error 117 `INCORRECT_DATA`**: Data doesn't match expected format
+- **Error 499 `S3_ERROR`**: Specific S3/object storage errors
+- **Error 740 `POTENTIALLY_BROKEN_DATA_PART`**: Wrapper error indicating suspected corruption
+
+## Related settings {#related-settings}
+
+```sql
+-- S3 retry configuration
+SET s3_max_single_read_retries = 10;
+SET s3_retry_attempts = 5;
+SET s3_request_timeout_ms = 30000;
+
+-- Remote filesystem read method
+SET remote_filesystem_read_method = 'read'; -- Instead of 'threadpool'
+
+-- Broken parts handling
+SET max_broken_detached_parts = 100; -- Alert threshold
+
+-- Check current settings
+SELECT name, value
+FROM system.settings
+WHERE name LIKE '%s3%' OR name LIKE '%broken%';
+```
diff --git a/docs/troubleshooting/error_codes/036_BAD_ARGUMENTS.md b/docs/troubleshooting/error_codes/036_BAD_ARGUMENTS.md
new file mode 100644
index 00000000000..bd3a6d3c026
--- /dev/null
+++ b/docs/troubleshooting/error_codes/036_BAD_ARGUMENTS.md
@@ -0,0 +1,137 @@
+---
+slug: /troubleshooting/error-codes/036_BAD_ARGUMENTS
+sidebar_label: '036 BAD_ARGUMENTS'
+doc_type: 'reference'
+keywords: ['error codes', 'BAD_ARGUMENTS', '036']
+title: '036 BAD_ARGUMENTS'
+description: 'ClickHouse error code - 036 BAD_ARGUMENTS'
+---
+
+# Error 36: BAD_ARGUMENTS
+
+:::tip
+This error occurs when a function or table function is called with an incorrect number of arguments or with arguments of incompatible types.
+It typically indicates that parameters provided to a function don't match what the function expects.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Wrong Number of Arguments**
+ - Providing too many or too few arguments to a function
+ - Misunderstanding function signature requirements (e.g., some functions expect arrays instead of multiple scalar arguments)
+ - Table functions like `s3Cluster`, `file`, `url`, `mysql` receiving incorrect argument counts
+
+2. **Incorrect Argument Types**
+ - Passing a scalar value when an array is expected
+ - Type mismatch between provided and expected parameters
+ - Numeric types where strings are expected, or vice versa
+
+3. **Function Signature Confusion**
+ - Functions with overloaded signatures (accepting different argument counts)
+ - Misreading documentation about optional vs required parameters
+ - Using old syntax for functions that have been updated
+
+4. **Table Function Argument Issues**
+ - S3, file, URL, and remote table functions have specific argument order requirements
+ - Missing required arguments like format or structure specifications
+ - Extra arguments beyond what the function supports
+
+## Common solutions {#common-solutions}
+
+**1. Check Function Documentation**
+
+Always verify the correct function signature in the ClickHouse [reference documentation](/sql-reference).
+Pay attention to:
+- Number of required vs optional arguments
+- Argument types (scalars vs arrays)
+- Argument order
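+
+You can also check from SQL whether a function exists and what it aliases (a quick sketch using `system.functions`):
+
+```sql
+SELECT name, is_aggregate, alias_to
+FROM system.functions
+WHERE name ILIKE '%multiSearch%';
+```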
+
+**2. Use Arrays for Multi-Value Functions**
+
+Many functions expect arrays rather than multiple scalar arguments:
+
+```sql
+-- WRONG: Multiple scalar arguments
+SELECT multiSearchAny(text, 'ClickHouse', 'Clickhouse', 'clickHouse', 'clickhouse')
+
+-- CORRECT: Array argument
+SELECT multiSearchAny(text, ['ClickHouse', 'Clickhouse', 'clickHouse', 'clickhouse'])
+```
+
+**3. Verify Table Function Arguments**
+
+For table functions, ensure you're providing arguments in the correct order:
+
+```sql
+-- S3Cluster with all parameters
+SELECT * FROM s3Cluster(
+ 'cluster_name', -- cluster
+ 'path', -- URL/path
+ 'access_key', -- credentials
+ 'secret_key',
+ 'format', -- data format
+ 'structure', -- column definition
+ 'compression' -- optional
+)
+```
+
+**4. Check for Missing Required Arguments**
+
+Some functions have mandatory parameters that cannot be omitted:
+
+```sql
+-- WRONG: Missing required interval
+SELECT tumble(now())
+
+-- CORRECT: With required interval argument
+SELECT tumble(now(), INTERVAL 1 HOUR)
+```
+
+**5. Use `DESCRIBE` or `EXPLAIN` to Validate**
+
+Test your query structure before execution:
+
+```sql
+EXPLAIN SYNTAX
+SELECT yourFunction(arg1, arg2);
+```
+
+**6. Review Error Message for Hints**
+
+The error message often indicates what was expected:
+```text
+Number of arguments for function X doesn't match:
+passed 5, should be 2
+```
+
+This tells you the function needs exactly 2 arguments, not 5.
+
+## Common function-specific issues {#common-issues}
+
+**Window Functions**
+- `tumble()`, `hop()`, `tumbleStart()` require both timestamp and interval arguments
+- Missing interval is a common mistake
+
+**Search Functions**
+- `multiSearchAny()`, `multiSearchAllPositions()` expect an array as the second argument
+- Some older examples show scalar arguments instead of the required array
+
+**Table Functions**
+- `s3Cluster()` - Requires the cluster name followed by the usual `s3()` arguments (accepted argument count varies by version)
+- `generateRandom()` - Check the structure specification (see the example below)
+- Remote table functions - Verify connection parameters
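+
+For example, a minimal `generateRandom` call needs only the structure string; the seed and length limits are optional (a quick sketch, not tied to any real schema):
+
+```sql
+SELECT * FROM generateRandom('id UInt32, name String') LIMIT 3;        -- structure only
+SELECT * FROM generateRandom('id UInt32, name String', 1, 10) LIMIT 3; -- with seed and max string length
+```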
+
+## Prevention tips {#prevention-tips}
+
+1. **Consult Documentation First**: Always check the official ClickHouse docs for function signatures
+2. **Use IDE/Editor with ClickHouse Support**: Many editors can validate function calls
+3. **Test in Development**: Validate queries in a non-production environment first
+4. **Keep ClickHouse Updated**: Function signatures may change between versions
+5. **Use `EXPLAIN` Queries**: Helps catch argument errors before execution
+
+If you're experiencing this error:
+1. Check the exact error message for what was passed vs what was expected
+2. Review the function documentation for correct signature
+3. Verify you're using arrays where required (not multiple scalar arguments)
+4. Ensure all required arguments are provided in the correct order
+5. Check if your ClickHouse version supports the function signature you're using
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/038_CANNOT_PARSE_DATE.md b/docs/troubleshooting/error_codes/038_CANNOT_PARSE_DATE.md
new file mode 100644
index 00000000000..7fe73c5c5df
--- /dev/null
+++ b/docs/troubleshooting/error_codes/038_CANNOT_PARSE_DATE.md
@@ -0,0 +1,259 @@
+---
+slug: /troubleshooting/error-codes/038_CANNOT_PARSE_DATE
+sidebar_label: '038 CANNOT_PARSE_DATE'
+doc_type: 'reference'
+keywords: ['error codes', 'CANNOT_PARSE_DATE', '038', 'date', 'parsing', 'format', 'toDate', 'DateTime']
+title: '038 CANNOT_PARSE_DATE'
+description: 'ClickHouse error code - 038 CANNOT_PARSE_DATE'
+---
+
+# Error 38: CANNOT_PARSE_DATE
+
+:::tip
+This error occurs when ClickHouse cannot parse a string value as a Date.
+This typically happens when the date string format doesn't match the expected format, contains invalid values, or is too short/malformed.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Invalid date format or structure**
+ - Date string too short (e.g., missing day or month)
+ - Wrong date format (DD/MM/YYYY vs YYYY-MM-DD)
+ - Missing separators or wrong separators
+ - Empty strings in date columns
+ - Non-standard date representations
+
+2. **Invalid date component values**
+ - Month value out of range (e.g., month 16)
+ - Day value out of range (e.g., day 40)
+ - Year outside supported range (Date: 1970-2149, Date32: 1900-2299)
+ - February 30th or other impossible dates
+ - Note: ClickHouse may return `1970-01-01` instead of error for some invalid values
+
+3. **Using wrong parsing function**
+ - Using `toDate()` when you need `parseDateTime()` with format string
+ - Using MySQL format specifiers without proper function
+ - Not using timezone parameter when needed
+ - Wrong syntax variant (MySQL vs Joda)
+
+4. **Format string issues in parseDateTime()**
+ - Format string doesn't match actual data format
+ - Using `%e` for single-digit days (requires padding)
+ - Using `%D` (American date format MM/DD/YY) with wrong year interpretation
+ - Using `%F` (ISO 8601 date) when seconds are missing
+ - Bugs in specific format specifiers (fixed in recent versions)
+
+5. **Data import mismatches**
+ - CSV/JSON files with inconsistent date formats
+ - ClickPipes parsing dates in unrecognized format
+ - Source data has mixed date formats
+ - Timezone information missing from timestamps
+
+## What to do when you encounter this error {#what-to-do}
+
+**1. Check the error message for details**
+
+The error message usually indicates what went wrong:
+
+```text
+Cannot parse date: value is too short: Cannot parse Date from String
+Cannot parse string '2021-hi-10' as Date: syntax error at position 9
+Value is too short
+```
+
+**2. Examine the problematic data**
+
+```sql
+-- Find rows that can't be parsed
+SELECT date_string
+FROM your_table
+WHERE toDateOrNull(date_string) IS NULL
+LIMIT 100;
+
+-- Check string lengths
+SELECT
+ date_string,
+ length(date_string) AS len
+FROM your_table
+WHERE length(date_string) < 10 -- YYYY-MM-DD is 10 chars
+LIMIT 100;
+```
+
+**3. Test parsing with sample data**
+
+```sql
+-- Test with actual value from error message
+SELECT toDate('your-date-value');
+
+-- Or use safe version
+SELECT toDateOrNull('your-date-value');
+```
+
+**4. Check your ClickHouse version**
+
+```sql
+SELECT version();
+
+-- Some parseDateTime bugs existed in 24.5 and were fixed
+-- Check if upgrading helps
+```
+
+## Quick fixes {#quick-fixes}
+
+**1. Use safe parsing functions**
+
+```sql
+-- Instead of toDate() which throws errors:
+SELECT toDate(date_string) FROM table;
+
+-- Use toDateOrNull() which returns NULL for invalid dates:
+SELECT toDateOrNull(date_string) FROM table;
+
+-- Or toDateOrZero() which returns 1970-01-01:
+SELECT toDateOrZero(date_string) FROM table;
+```
+
+**2. Use `parseDateTimeBestEffort` for flexible parsing**
+
+```sql
+-- Automatically handles many date formats
+SELECT parseDateTimeBestEffort('2/20/2004');   -- handles slash-separated dates automatically
+SELECT parseDateTimeBestEffortUS('2/3/2004'); -- MM/DD/YYYY format
+
+-- With timezone
+SELECT parseDateTimeBestEffort('2024-06-20 1200', 'Europe/Paris');
+
+-- Convert to Date
+SELECT toDate(parseDateTimeBestEffort('2/20/2004'));
+```
+
+**3. Use `parseDateTime` with explicit format**
+
+```sql
+-- Specify exact format (MySQL syntax)
+SELECT parseDateTime('2024-06-20 1200', '%Y-%m-%d %H%M');
+
+-- Common format patterns:
+-- '%Y-%m-%d' for YYYY-MM-DD
+-- '%Y-%m-%d %H:%M:%S' for YYYY-MM-DD HH:MM:SS
+-- '%d/%m/%Y' for DD/MM/YYYY
+-- '%m/%d/%Y' for MM/DD/YYYY (American)
+```
+
+**4. Use Joda syntax for complex formats**
+
+```sql
+-- For single-digit months/days
+SELECT parseDateTimeInJodaSyntax('9/3/2024', 'M/d/yyyy');
+
+-- Instead of problematic MySQL %e:
+-- This fails:
+-- SELECT parseDateTime('9/3/2024', '%c/%e/%Y');
+
+-- This works:
+SELECT parseDateTimeInJodaSyntax('9/3/2024', 'M/d/yyyy');
+```
+
+**5. Handle empty or invalid values in data**
+
+```sql
+-- Use CASE to handle empty strings
+SELECT
+ CASE
+ WHEN date_string = '' THEN NULL
+ ELSE toDateOrNull(date_string)
+ END AS parsed_date
+FROM your_table;
+
+-- Or use coalesce with an explicit Date default
+SELECT coalesce(toDateOrNull(date_string), toDate('1970-01-01')) AS parsed_date
+FROM your_table;
+```
+
+## Common specific scenarios {#common-scenarios}
+
+**Scenario 1: Value too short error**
+
+```text
+Cannot parse date: value is too short: Cannot parse Date from String
+```
+
+**Cause:** Date string is missing components (e.g., "2024-06" instead of "2024-06-20").
+
+**Solution:**
+
+```sql
+-- Instead of direct cast:
+SELECT cast(release_date as Date) FROM movies;
+
+-- Use safe conversion:
+SELECT toDateOrNull(release_date) FROM movies;
+
+-- Filter out short values first:
+SELECT toDate(release_date)
+FROM movies
+WHERE length(release_date) >= 10;
+
+-- Or pad/default short values:
+SELECT
+ if(length(release_date) >= 10,
+ toDate(release_date),
+ NULL
+ ) AS date
+FROM movies;
+```
+
+**Scenario 2: parseDateTime broken with %F format (version 24.5 bug)**
+
+```text
+Code: 0. DB::Exception: while executing 'FUNCTION parseDateTime(formatDateTime(...), '%F %T')'. (OK)
+```
+
+**Cause:** Bug in ClickHouse 24.5 where `parseDateTime` with `%F` (ISO 8601 date), `%D` (American date), and Joda `%E` formats failed with confusing error code 0.
+
+**Solution:**
+
+```sql
+-- Upgrade to 24.5.2 or later where this is fixed
+
+-- Temporary workaround - use different format specifier:
+-- Instead of %F:
+SELECT parseDateTime('2024-06-20 1200', '%Y-%m-%d %H%M');
+
+-- Or use parseDateTimeBestEffort:
+SELECT parseDateTimeBestEffort('2024-06-20 1200', 'Europe/Paris');
+```
+
+**Scenario 3: Single-digit months/days with %e format**
+
+```text
+Code: 41. DB::Exception: Unable to parse fragment LITERAL from 2024 because literal / is expected but 2 provided
+```
+
+**Cause:** Using `%e` (day with leading space) or `%c` (month) with single-digit values doesn't work correctly in MySQL syntax.
+
+**Solution:**
+
+```sql
+-- Instead of MySQL syntax (fails):
+SELECT parseDateTime('9/3/2024', '%c/%e/%Y');
+
+-- Use Joda syntax (works):
+SELECT parseDateTimeInJodaSyntax('9/3/2024', 'M/d/yyyy');
+
+-- Or use parseDateTimeBestEffort:
+SELECT parseDateTimeBestEffortUS('9/3/2024'); -- American format
+```
+
+**Scenario 4: ClickPipes date format not recognized**
+
+```text
+could not parse 2024-09-03T16:03Z as a DateTime
+```
+
+**Cause:** Date format missing seconds component (should be `2024-09-03T16:03:00Z`).
+
+**Solution:**
+- Fix source data to include seconds in ISO 8601 format
+- Use a materialized column to parse with custom logic (see the sketch below)
+- Pre-process data before sending to ClickPipes
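+
+For the materialized-column approach, a minimal sketch (the table and column names here are hypothetical) that keeps the raw string and derives the DateTime with flexible parsing:
+
+```sql
+-- ingest the timestamp as a String and derive a parsed DateTime alongside it
+ALTER TABLE pipe_events
+    ADD COLUMN event_time DateTime
+    MATERIALIZED parseDateTimeBestEffortOrZero(event_time_raw);
+```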
diff --git a/docs/troubleshooting/error_codes/041_CANNOT_PARSE_DATETIME.md b/docs/troubleshooting/error_codes/041_CANNOT_PARSE_DATETIME.md
new file mode 100644
index 00000000000..eafc85d572d
--- /dev/null
+++ b/docs/troubleshooting/error_codes/041_CANNOT_PARSE_DATETIME.md
@@ -0,0 +1,407 @@
+---
+slug: /troubleshooting/error-codes/041_CANNOT_PARSE_DATETIME
+sidebar_label: '041 CANNOT_PARSE_DATETIME'
+doc_type: 'reference'
+keywords: ['error codes', 'CANNOT_PARSE_DATETIME', '041', 'parseDateTime', 'timezone', 'format']
+title: '041 CANNOT_PARSE_DATETIME'
+description: 'ClickHouse error code - 041 CANNOT_PARSE_DATETIME'
+---
+
+# Error 41: CANNOT_PARSE_DATETIME
+
+:::tip
+This error occurs when ClickHouse cannot parse a string value as a DateTime. This typically happens with `parseDateTime()` or `parseDateTimeBestEffort()` functions when the date/time format doesn't match expectations, contains invalid values, or has incompatible timezone information.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Format string mismatch in parseDateTime()**
+ - Format specifier doesn't match actual data format
+ - Using wrong syntax variant (MySQL vs Joda)
+ - Single-digit months/days with `%e` or `%c` format (MySQL syntax limitation)
+ - Data missing required components (e.g., seconds in time string)
+ - Literal characters in format don't match data
+
+2. **ClickHouse 24.4-24.5 parseDateTime bugs (now fixed)**
+ - Critical bug with `%F` (ISO 8601 date format)
+ - Bug with `%D` (American MM/DD/YY format)
+ - Bug with Joda `%E` format
+ - Returned confusing error code 0 instead of proper error message
+ - Fixed in 24.5.2+ and backported to relevant branches
+
+3. **parseDateTimeBestEffort evaluation order issues**
+ - Function evaluated before safety checks (notEmpty, IS NOT NULL)
+ - WHERE clause conditions don't guarantee evaluation order
+ - Empty strings or NULL values processed by parseDateTimeBestEffort
+ - More common with new analyzer in 24.4+ versions
+
+4. **Distributed table and analyzer interactions**
+ - Query rewriting converts DateTime64 to string '0' incorrectly
+ - Timezone precision mismatches in distributed queries
+ - `report_time IN (toDateTime64(...))` fails with analyzer enabled
+ - Epoch time (1970-01-01 00:00:00) conversion issues
+
+5. **Timezone and format incompatibilities**
+ - Missing seconds in ISO 8601 format (e.g., `2024-09-03T16:03Z` instead of `2024-09-03T16:03:00Z`)
+ - Fractional seconds with timezone markers
+ - Invalid or non-existent timezone transitions
+ - 2-digit year formats requiring interpretation (1970-2070 range)
+
+## What to do when you encounter this error {#what-to-do}
+
+**1. Check your ClickHouse version**
+
+```sql
+SELECT version();
+
+-- If using 24.4.x or 24.5.0-24.5.1, upgrade to 24.5.2+ or later
+-- Critical parseDateTime bugs were fixed in these versions
+```
+
+**2. Examine the error message details**
+
+The error typically indicates what failed to parse:
+
+```text
+Cannot read DateTime: neither Date nor Time was parsed successfully
+Unable to parse fragment LITERAL from 2024 because literal / is expected
+Cannot parse DateTime: while converting '0' to DateTime64(9, 'UTC')
+```
+
+**3. Test with sample data**
+
+```sql
+-- Test the problematic value
+SELECT parseDateTime('your-datetime-value', '%Y-%m-%d %H:%M:%S');
+
+-- Or use safe version
+SELECT parseDateTimeOrNull('your-datetime-value', '%Y-%m-%d %H:%M:%S');
+
+-- Test parseDateTimeBestEffort
+SELECT parseDateTimeBestEffort('your-datetime-value');
+```
+
+**4. Check if analyzer is causing issues**
+
+```sql
+-- Try disabling the analyzer
+SET allow_experimental_analyzer = 0;
+
+-- Then run your query
+```
+
+## Quick fixes {#quick-fixes}
+
+**1. Use safe parsing functions**
+
+```sql
+-- Instead of parseDateTimeBestEffort (throws on error):
+SELECT parseDateTimeBestEffort(date_string) FROM table;
+
+-- Use parseDateTimeBestEffortOrNull (returns NULL):
+SELECT parseDateTimeBestEffortOrNull(date_string) FROM table;
+
+-- Or parseDateTimeBestEffortOrZero (returns 1970-01-01):
+SELECT parseDateTimeBestEffortOrZero(date_string) FROM table;
+```
+
+**2. Handle empty/NULL values before parsing**
+
+```sql
+-- ClickHouse doesn't guarantee WHERE clause evaluation order
+-- Use CASE to ensure safety:
+SELECT *
+FROM table
+WHERE CASE
+ WHEN date_string IS NOT NULL AND date_string != ''
+ THEN parseDateTimeBestEffortOrNull(date_string) > '2024-01-01'
+ ELSE false
+END;
+
+-- Or use parseDateTimeBestEffortOrZero which handles empty strings:
+SELECT *
+FROM table
+WHERE notEmpty(date_string)
+ AND parseDateTimeBestEffortOrZero(date_string) > '2024-01-01';
+```
+
+**3. Use correct format specifiers**
+
+```sql
+-- Common MySQL format patterns:
+SELECT parseDateTime('2024-06-20 12:00:00', '%Y-%m-%d %H:%M:%S');
+SELECT parseDateTime('06/20/2024', '%m/%d/%Y'); -- American date
+SELECT parseDateTime('2024-06-20', '%Y-%m-%d'); -- Date only
+
+-- For single-digit months/days, use Joda syntax instead:
+-- This fails in MySQL syntax:
+-- SELECT parseDateTime('9/3/2024', '%c/%e/%Y');
+
+-- Use Joda syntax instead:
+SELECT parseDateTimeInJodaSyntax('9/3/2024', 'M/d/yyyy');
+```
+
+**4. For distributed table issues - disable analyzer**
+
+```sql
+-- Workaround for distributed + DateTime64 + IN clause issues
+SET allow_experimental_analyzer = 0;
+
+-- Then run your query
+SELECT *
+FROM distributed_table
+WHERE report_time IN (toDateTime64('1970-01-01 00:00:00', 9, 'UTC'));
+```
+
+**5. Fix incomplete ISO 8601 formats**
+
+```sql
+-- If your data is missing seconds:
+-- Input: '2024-09-03T16:03Z'
+-- Expected: '2024-09-03T16:03:00Z'
+
+-- Option 1: Fix source data to include seconds
+
+-- Option 2: Pre-process with string manipulation
+SELECT parseDateTime(concat(substr(date_str, 1, 16), ':00Z'), '%Y-%m-%dT%H:%M:%SZ')
+FROM table;
+
+-- Option 3: Use parseDateTimeBestEffort (more flexible)
+SELECT parseDateTimeBestEffort(date_str)
+FROM table;
+```
+
+## Common specific scenarios {#common-scenarios}
+
+**Scenario 1: parseDateTime broken in 24.5 with %F format**
+
+```text
+Code: 0. DB: while executing 'FUNCTION parseDateTime(formatDateTime(...), '%F %T')'. (OK)
+```
+
+**Cause:** Critical bug in ClickHouse 24.5.0-24.5.1 where `parseDateTime` with `%F`, `%D`, and Joda `%E` formats failed with error code 0.
+
+**Solution:**
+```sql
+-- Upgrade to 24.5.2 or later (bug is fixed)
+
+-- Temporary workaround - use explicit format:
+-- Instead of %F (ISO 8601 short date):
+SELECT parseDateTime('2024-06-20 1200', '%Y-%m-%d %H%M');
+
+-- Or use parseDateTimeBestEffort:
+SELECT parseDateTimeBestEffort('2024-06-20 1200', 'Europe/Paris');
+```
+
+**Scenario 2: parseDateTimeBestEffort with WHERE clause (analyzer issue)**
+
+```text
+Cannot read DateTime: neither Date nor Time was parsed successfully:
+while executing 'FUNCTION parseDateTimeBestEffort(...)'
+```
+
+**Cause:** With the new analyzer (24.4+), `parseDateTimeBestEffort` may be evaluated on empty/NULL values before the other WHERE conditions are checked; ClickHouse doesn't guarantee condition evaluation order.
+
+**Solution:**
+```sql
+-- Option 1: Disable analyzer (temporary workaround)
+SET allow_experimental_analyzer = 0;
+
+-- Option 2: Use safe parsing function
+SELECT *
+FROM map_test
+WHERE notEmpty(properties['somedate'])
+ AND parseDateTimeBestEffortOrZero(properties['somedate']) > '2022-06-15';
+
+-- Option 3: Use CASE for guaranteed order
+SELECT *
+FROM map_test
+WHERE CASE
+ WHEN notEmpty(properties['somedate'])
+ THEN parseDateTimeBestEffortOrNull(properties['somedate']) > '2022-06-15'
+ ELSE false
+END;
+```
+
+**Reference:** [GitHub Issue #75296](https://github.com/ClickHouse/ClickHouse/issues/75296)
+
+**Scenario 3: Single-digit month/day parsing with MySQL syntax**
+
+```text
+Code: 41. DB::Exception: Unable to parse fragment LITERAL from 2024 because literal / is expected but 2 provided
+```
+
+**Cause:** MySQL syntax `%e` (space-padded day) and `%c` (month 01-12) don't work correctly with single-digit values.
+
+**Solution:**
+
+```sql
+-- Instead of MySQL syntax (fails):
+SELECT parseDateTime('9/3/2024', '%c/%e/%Y');
+
+-- Use Joda syntax (works):
+SELECT parseDateTimeInJodaSyntax('9/3/2024', 'M/d/yyyy');
+
+-- Or use parseDateTimeBestEffort:
+SELECT parseDateTimeBestEffortUS('9/3/2024'); -- American format MM/DD/YYYY
+```
+
+**Scenario 4: Distributed table with DateTime64 IN clause**
+
+```text
+Code: 41. DB::Exception: Cannot parse DateTime: while converting '0' to DateTime64(9, 'UTC')
+```
+
+**Cause:** The query analyzer rewrites the query for distributed tables and incorrectly converts the DateTime64 value to the string '0' instead of a properly formatted literal.
+
+**Solution:**
+```sql
+-- Option 1: Disable analyzer
+SET allow_experimental_analyzer = 0;
+
+SELECT *
+FROM distributed_table
+WHERE report_time IN (toDateTime64('1970-01-01 00:00:00', 9, 'UTC'));
+
+-- Option 2: Use equality instead of IN for single value
+SELECT *
+FROM distributed_table
+WHERE report_time = toDateTime64('1970-01-01 00:00:00', 9, 'UTC');
+
+-- Option 3: Use >= and <= instead
+SELECT *
+FROM distributed_table
+WHERE report_time >= toDateTime64('1970-01-01 00:00:00', 9, 'UTC')
+ AND report_time < toDateTime64('1970-01-02 00:00:00', 9, 'UTC');
+```
+
+**Scenario 5: ISO 8601 format missing seconds**
+
+```text
+could not parse 2024-09-03T16:03Z as a DateTime
+```
+
+**Cause:** ClickHouse's DateTime parsing expects a seconds component: `2024-09-03T16:03Z` should be `2024-09-03T16:03:00Z`.
+
+**Solution:**
+```sql
+-- Fix source data to include :00 for seconds
+
+-- Or use parseDateTimeBestEffort (more flexible)
+SELECT parseDateTimeBestEffort('2024-09-03T16:03Z');
+
+-- Or pre-process to add seconds:
+SELECT
+ if(
+ date_str LIKE '%Z' AND length(date_str) = 17,
+ concat(substr(date_str, 1, 16), ':00Z'),
+ date_str
+ ) AS fixed_date
+FROM table;
+```
+
+## Prevention best practices {#prevention}
+
+1. **Always use safe parsing functions in WHERE clauses**
+
+ ```sql
+ -- Prefer OrNull/OrZero variants
+ WHERE parseDateTimeBestEffortOrNull(date_str) > '2024-01-01'
+
+ -- Not: WHERE parseDateTimeBestEffort(date_str) > '2024-01-01'
+ ```
+
+2. **Use Joda syntax for flexible day/month parsing**
+
+ ```sql
+ -- For variable-length date components
+ SELECT parseDateTimeInJodaSyntax('9/3/2024', 'M/d/yyyy');
+
+ -- Instead of MySQL %e/%c which require padding
+ ```
+
+3. **Validate date formats before complex parsing**
+
+ ```sql
+ -- Check format first
+ SELECT
+ date_str,
+ length(date_str) AS len,
+ parseDateTimeOrNull(date_str, '%Y-%m-%d %H:%M:%S') AS parsed
+ FROM table
+ WHERE len >= 19; -- YYYY-MM-DD HH:MM:SS is 19 chars
+ ```
+
+4. **Keep ClickHouse updated**
+
+ ```sql
+ -- Check version
+ SELECT version();
+
+ -- Upgrade from 24.4.x or 24.5.0-24.5.1 to avoid critical bugs
+ -- These versions had significant parseDateTime issues
+ ```
+
+5. **Test with new analyzer disabled if issues arise**
+
+ ```sql
+ -- The new analyzer changes query rewriting behavior
+ SET allow_experimental_analyzer = 0;
+
+ -- Test if this resolves the issue
+ -- Report bugs if only works with analyzer disabled
+ ```
+
+6. **Standardize datetime formats in source data**
+ - Use consistent ISO 8601: `YYYY-MM-DD HH:MM:SS`
+ - Always include seconds component
+ - Use UTC or explicitly specify timezone
+ - Avoid ambiguous formats (2-digit years, regional variants)
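+
+   As an illustration (the table and column names below are hypothetical), normalizing once at ingestion time avoids re-parsing strings in every query:
+
+   ```sql
+   -- keep the raw string and derive a canonical DateTime via a materialized column
+   CREATE TABLE events_staging
+   (
+       raw_ts String,
+       ts DateTime MATERIALIZED parseDateTimeBestEffortOrZero(raw_ts)
+   )
+   ENGINE = MergeTree
+   ORDER BY tuple();
+   ```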
+
+## Related settings {#related-settings}
+
+```sql
+-- Control analyzer behavior
+SET allow_experimental_analyzer = 0; -- Disable new analyzer
+
+-- Timezone handling
+SET session_timezone = 'UTC'; -- Set default timezone
+
+-- Date/time parsing flexibility
+SET date_time_input_format = 'best_effort'; -- More flexible parsing
+```
+
+## Format specifier reference {#format-specifiers}
+
+**MySQL syntax (`parseDateTime`)**:
+- `%Y` - 4-digit year (2024)
+- `%m` - Month (01-12) zero-padded
+- `%d` - Day (01-31) zero-padded
+- `%H` - Hour 24h format (00-23)
+- `%M` - Minute (00-59) - note: capital M!
+- `%S` - Second (00-59)
+- `%F` - ISO 8601 date (`%Y-%m-%d`)
+- `%T` - ISO 8601 time (`%H:%M:%S`)
+- `%D` - American date (`%m/%d/%y`)
+
+**Joda syntax (`parseDateTimeInJodaSyntax`)**:
+- `yyyy` - 4-digit year
+- `M` - Month (1-12) no padding
+- `d` - Day (1-31) no padding
+- `HH` - Hour 24h format (00-23)
+- `mm` - Minute (00-59)
+- `ss` - Second (00-59)
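+
+To sanity-check a format string before applying it to real data, a one-off test against a literal is usually enough (the sample value below is arbitrary):
+
+```sql
+SELECT parseDateTimeInJodaSyntax('7/4/2024 09:05:00', 'M/d/yyyy HH:mm:ss');
+-- 2024-07-04 09:05:00
+```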
+
+**See also:** [ClickHouse parseDateTime documentation](https://clickhouse.com/docs/sql-reference/functions/type-conversion-functions#parsedatetime)
+
+## When to use which function {#which-function}
+
+| Function | Use Case | Error Handling |
+|-----------------------------------|------------------------------------|--------------------|
+| `parseDateTime()` | Exact known format | Throws exception |
+| `parseDateTimeOrNull()` | Exact format, allow failures | Returns NULL |
+| `parseDateTimeBestEffort()` | Unknown/variable formats | Throws exception |
+| `parseDateTimeBestEffortOrNull()` | Unknown formats, allow failures | Returns NULL |
+| `parseDateTimeBestEffortOrZero()` | Unknown formats, use default | Returns 1970-01-01 |
+| `parseDateTimeBestEffortUS()` | American date formats (MM/DD/YYYY) | Throws exception |
+| `parseDateTimeInJodaSyntax()` | Joda format, exact match | Throws exception |
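+
+A quick way to compare the error-handling behavior side by side (the input strings are arbitrary examples):
+
+```sql
+SELECT
+    parseDateTimeBestEffortOrNull('not a date') AS returns_null,   -- NULL
+    parseDateTimeBestEffortOrZero('not a date') AS returns_zero,   -- 1970-01-01 00:00:00
+    parseDateTimeOrNull('2024-06-20', '%Y-%m-%d') AS strict_parse; -- 2024-06-20 00:00:00
+```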
diff --git a/docs/troubleshooting/error_codes/042_NUMBER_OF_ARGUMENTS_DOESNT_MATCH.md b/docs/troubleshooting/error_codes/042_NUMBER_OF_ARGUMENTS_DOESNT_MATCH.md
new file mode 100644
index 00000000000..51e15b533eb
--- /dev/null
+++ b/docs/troubleshooting/error_codes/042_NUMBER_OF_ARGUMENTS_DOESNT_MATCH.md
@@ -0,0 +1,395 @@
+---
+slug: /troubleshooting/error-codes/042_NUMBER_OF_ARGUMENTS_DOESNT_MATCH
+sidebar_label: '042 NUMBER_OF_ARGUMENTS_DOESNT_MATCH'
+doc_type: 'reference'
+keywords: ['error codes', 'NUMBER_OF_ARGUMENTS_DOESNT_MATCH', '042', 'function', 'arguments', 'parameters']
+title: '042 NUMBER_OF_ARGUMENTS_DOESNT_MATCH'
+description: 'ClickHouse error code - 042 NUMBER_OF_ARGUMENTS_DOESNT_MATCH'
+---
+
+# Error 42: NUMBER_OF_ARGUMENTS_DOESNT_MATCH
+
+:::tip
+This error occurs when you call a ClickHouse function with the wrong number of arguments.
+The function expects a specific number of parameters, but you provided either too many or too few.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Incorrect function signature**
+ - Passing individual values instead of arrays (e.g., `multiSearchAny`)
+ - Missing required arguments
+ - Providing too many arguments
+ - Not understanding function overload variants
+
+2. **Misunderstanding documentation**
+ - Function documentation unclear or outdated
+ - Examples showing incorrect usage
+ - Confusion between similar function names
+ - Missing information about required vs optional parameters
+
+3. **Array functions expecting single array parameter**
+ - Passing multiple string literals instead of array
+ - Using varargs syntax when function expects array
+ - For example: `multiSearchAny(haystack, 'a', 'b', 'c')` should be `multiSearchAny(haystack, ['a', 'b', 'c'])`
+
+4. **Type conversion functions with wrong parameter count**
+ - `toFixedString` requires 2 arguments (string, length)
+ - `toDecimal` requires precision and scale
+ - Timezone functions requiring timezone parameter
+ - Format functions requiring format string
+
+5. **User-defined functions (UDFs) with wrong signature**
+ - Custom functions called with incorrect argument count
+ - Lambda functions with mismatched parameter count
+ - Higher-order functions with wrong lambda signature
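+
+For higher-order functions, the lambda arity must match the number of array arguments. A short illustration:
+
+```sql
+-- Two-argument lambda with two arrays (works)
+SELECT arrayMap((x, y) -> x + y, [1, 2, 3], [10, 20, 30]);
+
+-- A two-argument lambda with a single array fails with an argument-count error
+-- SELECT arrayMap((x, y) -> x + y, [1, 2, 3]);
+```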
+
+## What to do when you encounter this error {#what-to-do}
+
+**1. Check the error message for function details**
+
+The error message tells you exactly what went wrong:
+
+```text
+Number of arguments for function multiSearchAny doesn't match: passed 5, should be 2
+Number of arguments for function toFixedString doesn't match: passed 1, should be 2
+Incorrect number of arguments for function generateSnowflakeID provided 2, expected 0 to 1
+```
+
+**2. Look up the function documentation**
+
+```sql
+-- Check function exists and its signature
+SELECT * FROM system.functions WHERE name = 'multiSearchAny';
+
+-- Or search for similar functions
+SELECT name FROM system.functions WHERE name LIKE '%search%';
+```
+
+**3. Review official documentation**
+
+Visit [ClickHouse functions documentation](https://clickhouse.com/docs/sql-reference/functions/) to verify:
+- Required vs optional parameters
+- Expected data types
+- Usage examples
+- Alternative function variants
+
+**4. Test with simple example**
+
+```sql
+-- Test function with minimal valid arguments
+SELECT multiSearchAny('test string', ['test', 'example']);
+
+-- Check if function works as expected
+SELECT toFixedString('hello', 10);
+```
+
+## Quick fixes {#quick-fixes}
+
+**1. For array functions - wrap arguments in array**
+
+```sql
+-- Instead of this (fails):
+SELECT multiSearchAny(text, 'ClickHouse', 'Clickhouse', 'clickHouse');
+
+-- Use this (works):
+SELECT multiSearchAny(text, ['ClickHouse', 'Clickhouse', 'clickHouse']);
+```
+
+**2. For type conversion functions - provide all required parameters**
+
+```sql
+-- Instead of this (fails):
+SELECT toFixedString(15);
+
+-- Use this (works):
+SELECT toFixedString('15', 10); -- string value, length
+
+-- For decimals:
+SELECT toDecimal64(123.45, 2); -- value, scale
+```
+
+**3. For functions with optional parameters - check ranges**
+
+```sql
+-- Function may accept variable argument counts
+SELECT generateSnowflakeID(); -- 0 arguments (OK)
+SELECT generateSnowflakeID(expr); -- 1 argument (OK)
+-- SELECT generateSnowflakeID(expr1, expr2); -- 2 arguments (ERROR)
+```
+
+**4. Use correct function variant**
+
+```sql
+-- Different functions for different purposes:
+
+-- For single needle in haystack:
+SELECT position('haystack', 'needle');
+
+-- For multiple needles (requires array):
+SELECT multiSearchAny('haystack', ['needle1', 'needle2']);
+
+-- For first position of any needle:
+SELECT multiSearchFirstPosition('haystack', ['needle1', 'needle2']);
+```
+
+**5. Check for renamed or deprecated functions**
+
+```sql
+-- Some functions have been renamed or changed signatures
+-- Check release notes if migrating between versions
+
+-- Use SHOW FUNCTIONS or system.functions to find correct name
+SELECT name, origin FROM system.functions WHERE name LIKE '%YourFunction%';
+```
+
+## Common specific scenarios {#common-scenarios}
+
+**Scenario 1: multiSearchAny with multiple string literals**
+
+```text
+Number of arguments for function multiSearchAny doesn't match: passed 5, should be 2
+```
+
+**Cause:** `multiSearchAny` expects 2 arguments: haystack and an array of needles. User passed multiple individual string arguments.
+
+**Solution:**
+```sql
+-- Instead of this (fails):
+SELECT multiSearchAny(text, 'ClickHouse', 'Clickhouse', 'clickHouse', 'clickhouse');
+
+-- Use this (works - wrap in array):
+SELECT multiSearchAny(text, ['ClickHouse', 'Clickhouse', 'clickHouse', 'clickhouse']);
+
+-- Example with column:
+SELECT
+ body,
+ multiSearchAny(body, ['error', 'warning', 'critical']) AS has_alert_keyword
+FROM logs;
+```
+
+**Reference:** [Slack Internal Discussion](https://clickhouse-inc.slack.com/archives/C03RDM5UNGP/p1674846121070709)
+
+**Scenario 2: toFixedString with missing length parameter**
+
+```text
+Number of arguments for function toFixedString doesn't match: passed 1, should be 2
+```
+
+**Cause:** `toFixedString` requires both the string value and the fixed length.
+
+**Solution:**
+```sql
+-- Instead of this (fails):
+SELECT toFixedString(15);
+
+-- Use this (works):
+SELECT toFixedString('15', 2); -- String value, fixed length
+
+-- With column:
+SELECT toFixedString(user_id, 36) AS fixed_user_id
+FROM users;
+
+-- Pad to a fixed width (shorter values are filled with zero bytes):
+SELECT toFixedString(toString(id), 10) AS padded_id
+FROM table;
+```
+
+**Reference:** [GitHub Issue #61024](https://github.com/ClickHouse/ClickHouse/issues/61024)
+
+**Scenario 3: generateSnowflakeID with too many arguments**
+
+```text
+Incorrect number of arguments for function generateSnowflakeID provided 2 (UInt8, DateTime64(3)), expected 0 to 1
+```
+
+**Cause:** `generateSnowflakeID` accepts 0 or 1 argument, but 2 were provided.
+
+**Solution:**
+```sql
+-- Valid usages:
+SELECT generateSnowflakeID(); -- No arguments
+SELECT generateSnowflakeID(1); -- With expression
+
+-- Instead of this (fails):
+SELECT generateSnowflakeID(1, now64(3));
+
+-- Use this (works):
+SELECT generateSnowflakeID();
+-- Or
+SELECT generateSnowflakeID(toUInt8(1));
+```
+
+**Reference:** [Slack Internal Discussion](https://clickhouse-inc.slack.com/archives/C02F2LML5UG/p1719898780769029)
+
+**Scenario 4: parseDateTime with wrong argument count**
+
+```text
+Number of arguments for function parseDateTime doesn't match
+```
+
+**Cause:** Missing format string parameter or providing too many arguments.
+
+**Solution:**
+```sql
+-- parseDateTime requires 2-3 arguments: string, format, [timezone]
+
+-- Instead of this (fails):
+SELECT parseDateTime('2024-01-15');
+
+-- Use this (works):
+SELECT parseDateTime('2024-01-15', '%Y-%m-%d');
+
+-- With timezone:
+SELECT parseDateTime('2024-01-15 10:30:00', '%Y-%m-%d %H:%M:%S', 'America/New_York');
+
+-- Or use best effort parsing (1-2 arguments):
+SELECT parseDateTimeBestEffort('2024-01-15');
+SELECT parseDateTimeBestEffort('2024-01-15', 'Europe/London');
+```
+
+**Scenario 5: Array distance functions with wrong types**
+
+```text
+Arguments of function arrayL2Distance have different array sizes: 0 and 1536
+```
+
+**Cause:** While this appears as error 190 (SIZES_OF_ARRAYS_DONT_MATCH), it's often caused by empty arrays (for example, missing embeddings) that should be filtered out first.
+
+**Solution:**
+```sql
+-- Filter out empty arrays before calculation
+SELECT
+ id,
+ L2Distance(embedding1, embedding2) AS distance
+FROM table
+WHERE notEmpty(embedding1)
+ AND notEmpty(embedding2);
+
+-- Or substitute a default vector for empty arrays
+-- (Array columns cannot be Nullable, so check for emptiness rather than NULL)
+SELECT
+    id,
+    L2Distance(
+        if(empty(embedding1), arrayWithConstant(1536, 0.0), embedding1),
+        if(empty(embedding2), arrayWithConstant(1536, 0.0), embedding2)
+    ) AS distance
+FROM table;
+```
+
+## Prevention best practices {#prevention}
+
+1. **Always check function documentation first**
+ - Visit [https://clickhouse.com/docs/sql-reference/functions/](https://clickhouse.com/docs/sql-reference/functions/)
+ - Look at examples in documentation
+ - Note required vs optional parameters
+ - Check for function overloads
+
+2. **Use system.functions table**
+
+ ```sql
+ -- Find function and its description
+ SELECT
+ name,
+ origin,
+ description
+ FROM system.functions
+ WHERE name = 'yourFunction';
+
+ -- Search for similar functions
+ SELECT name
+ FROM system.functions
+ WHERE name ILIKE '%search%'
+ ORDER BY name;
+ ```
+
+3. **Test functions with simple examples**
+
+ ```sql
+ -- Test with literal values first
+ SELECT multiSearchAny('test', ['t', 'e']);
+
+ -- Then apply to your data
+ SELECT multiSearchAny(column, ['value1', 'value2'])
+ FROM your_table;
+ ```
+
+4. **Pay attention to function naming patterns**
+ - Functions ending in `Any`: usually take arrays
+ - Functions with `First`, `Last`, `All`: variants with different return types
+ - Functions with `OrNull`, `OrZero`: safe variants that handle errors
+
+5. **Watch for version differences**
+
+ ```sql
+ -- Check ClickHouse version
+ SELECT version();
+
+ -- Some functions change signatures between versions
+ -- Check release notes when upgrading
+ ```
+
+6. **Use IDE or CLI autocomplete**
+ - ClickHouse CLI shows function signatures
+ - IDEs with ClickHouse support show parameter hints
+ - Helps avoid argument count mistakes
+
+## Common function signatures {#common-signatures}
+
+**String search functions:**
+
+```sql
+-- Single needle
+position(haystack, needle)
+positionCaseInsensitive(haystack, needle)
+
+-- Multiple needles (array required!)
+multiSearchAny(haystack, [needle1, needle2, ...])
+multiSearchFirstPosition(haystack, [needle1, needle2, ...])
+multiSearchAllPositions(haystack, [needle1, needle2, ...])
+```
+
+**Type conversion functions:**
+
+```sql
+-- Fixed length strings
+toFixedString(string, length)
+
+-- Decimals
+toDecimal32(value, scale)
+toDecimal64(value, scale)
+toDecimal128(value, scale)
+
+-- Dates
+toDate(value)
+toDateTime(value)
+toDateTime(value, timezone)
+toDateTime64(value, precision)
+toDateTime64(value, precision, timezone)
+```
+
+**Date/time parsing:**
+
+```sql
+-- Flexible parsing
+parseDateTimeBestEffort(string)
+parseDateTimeBestEffort(string, timezone)
+
+-- Strict parsing
+parseDateTime(string, format)
+parseDateTime(string, format, timezone)
+parseDateTimeInJodaSyntax(string, format)
+```
+
+**Aggregate functions:**
+
+```sql
+-- Basic
+sum(column)
+avg(column)
+count()
+
+-- With conditions
+sumIf(column, condition)
+avgIf(column, condition)
+countIf(condition)
+```
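+
+For the conditional aggregate variants, the condition is always the last argument. A short illustration (the `requests` table and its columns are hypothetical):
+
+```sql
+SELECT
+    count() AS total,
+    countIf(status = 'error') AS errors,
+    sumIf(duration_ms, status = 'error') AS error_duration_ms
+FROM requests;
+```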
diff --git a/docs/troubleshooting/error_codes/043_ILLEGAL_TYPE_OF_ARGUMENT.md b/docs/troubleshooting/error_codes/043_ILLEGAL_TYPE_OF_ARGUMENT.md
new file mode 100644
index 00000000000..4058ab35af7
--- /dev/null
+++ b/docs/troubleshooting/error_codes/043_ILLEGAL_TYPE_OF_ARGUMENT.md
@@ -0,0 +1,185 @@
+---
+slug: /troubleshooting/error-codes/043_ILLEGAL_TYPE_OF_ARGUMENT
+sidebar_label: '043 ILLEGAL_TYPE_OF_ARGUMENT'
+doc_type: 'reference'
+keywords: ['error codes', 'ILLEGAL_TYPE_OF_ARGUMENT', '043']
+title: '043 ILLEGAL_TYPE_OF_ARGUMENT'
+description: 'ClickHouse error code - 043 ILLEGAL_TYPE_OF_ARGUMENT'
+---
+
+# Error 43: ILLEGAL_TYPE_OF_ARGUMENT
+
+:::tip
+This error occurs when a function receives an argument with an incompatible or inappropriate data type that cannot be used with that specific function.
+It indicates a type mismatch between what a function expects and what was actually provided.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Type Incompatibility in Arithmetic Operations**
+ - Mixing signed and unsigned integers in operations
+ - Attempting arithmetic between incompatible numeric types (e.g., `UInt64` with `Nullable(Nothing)`)
+ - Using `NULL` or empty values in arithmetic expressions
+
+2. **Incorrect Mutation Queries**
+ - `ALTER TABLE` mutations with type mismatches in `UPDATE` or `DELETE` expressions
+ - Applying functions to columns with incompatible types during mutations
+ - Most common in background mutation tasks that fail repeatedly
+
+3. **Function Type Requirements Not Met**
+ - String functions receiving numeric types
+ - Date/time functions receiving non-temporal types
+ - Aggregation functions with incompatible input types
+ - Type conversion functions with unsupported source types
+
+4. **Nullable Type Issues**
+ - Operations between `Nullable` and non-`Nullable` types without proper handling
+ - Functions that don't support `Nullable` arguments
+ - Mixing `Nullable(Nothing)` with concrete types
+
+5. **Union Query Type Mismatches**
+ - Different data types in corresponding columns across `UNION` branches
+ - Incompatible signed/unsigned integer combinations
+ - Mixed nullability across union branches
+
+## Common solutions {#common-solutions}
+
+**1. Check and Cast Types Explicitly**
+
+Use explicit type casting to ensure compatibility:
+
+```sql
+-- WRONG: Type mismatch in arithmetic
+SELECT column_uint64 - column_nullable
+
+-- CORRECT: Cast to compatible types
+SELECT toInt64(column_uint64) - assumeNotNull(column_nullable)
+```
+
+**2. Handle Nullable Types Properly**
+
+Ensure nullable types are handled before operations:
+
+```sql
+-- Use assumeNotNull() or ifNull()
+SELECT ifNull(nullable_column, 0) + other_column
+
+-- Or cast explicitly
+SELECT CAST(nullable_column AS UInt64) + other_column
+```
+
+**3. Fix Mutations with Type Mismatches**
+
+Check stuck mutations and kill those with type errors:
+
+```sql
+-- Check for stuck mutations
+SELECT *
+FROM system.mutations
+WHERE NOT is_done AND latest_fail_reason LIKE '%ILLEGAL_TYPE_OF_ARGUMENT%';
+
+-- Kill the problematic mutation
+KILL MUTATION WHERE mutation_id = 'problematic_mutation_id';
+
+-- Rewrite the mutation with proper types
+ALTER TABLE your_table
+UPDATE column = CAST(expression AS CorrectType)
+WHERE condition;
+```
+
+**4. Ensure Union Compatibility**
+
+Cast columns to common types in UNION queries:
+
+```sql
+-- WRONG: Mixed types in UNION
+SELECT uint_col FROM table1
+UNION ALL
+SELECT int_col FROM table2
+
+-- CORRECT: Cast to common type
+SELECT CAST(uint_col AS Int64) AS col FROM table1
+UNION ALL
+SELECT int_col AS col FROM table2
+```
+
+**5. Use Type-Aware Comparison Functions**
+
+For signed/unsigned comparisons, use appropriate functions:
+
+```sql
+-- For mixed signed/unsigned comparisons
+SELECT *
+FROM table
+WHERE toInt64(unsigned_col) > signed_col
+```
+
+**6. Check Function Requirements**
+
+Verify the function accepts your argument types:
+
+```sql
+-- Use DESCRIBE to check types
+DESCRIBE TABLE your_table;
+
+-- Check function documentation for accepted types
+SELECT toTypeName(your_column);
+```
+
+## Common scenarios {#common-scenarios}
+
+**Scenario 1: Signed/Unsigned Integer Conflicts**
+
+```text
+Error: There is no supertype for types Int64, UInt64 because some of them
+are signed integers and some are unsigned integers
+```
+
+**Solution:** Cast to a common wider signed type like `Int128` or ensure all are unsigned.
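+
+For example (column names are illustrative):
+
+```sql
+-- Int128 is wide enough to hold both Int64 and UInt64 ranges
+SELECT CAST(uint64_col AS Int128) - CAST(int64_col AS Int128) AS diff
+FROM your_table;
+```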
+
+**Scenario 2: Nullable Arithmetic**
+
+```text
+Error: Arguments of 'minus' have incorrect data types:
+'UInt64' and 'Nullable(Nothing)'
+```
+
+**Solution:** Use `assumeNotNull()`, `ifNull()`, or explicit casting.
+
+**Scenario 3: Failed Background Mutations**
+
+```text
+Latest_fail_reason: ILLEGAL_TYPE_OF_ARGUMENT in mutation
+```
+
+**Solution:** Kill the mutation and rewrite with proper type handling.
+
+## Prevention tips {#prevention-tips}
+
+1. **Use Consistent Types**: Design schemas with consistent types across related columns
+2. **Explicit Casting**: Always cast types explicitly in complex expressions rather than relying on implicit conversion
+3. **Test Mutations**: Test `ALTER TABLE` mutations on a subset of data before applying to production tables
+4. **Handle Nullability**: Use `Nullable` types judiciously and handle them explicitly in queries
+5. **Check Schema Compatibility**: When using `UNION`, ensure column types match exactly or cast appropriately
+6. **Monitor Mutations**: Regularly check `system.mutations` for stuck operations
+
+## Special considerations {#special-considerations}
+
+**Mutations Context:**
+
+This error is frequently seen in background mutations because:
+- Mutations continue retrying with the same incorrect query
+- The only fix is to kill the mutation (errors persist until manually resolved)
+- It's classified as a "client error" since the mutation query itself is incorrect
+
+**When merging/processing data:**
+
+- Type mismatches can occur when old and new data parts have different schemas
+- Consider using `ALTER TABLE MODIFY COLUMN` to standardize types before complex operations
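+
+For instance (table and column names are hypothetical), widening a column type before re-running a failed mutation:
+
+```sql
+ALTER TABLE your_table MODIFY COLUMN amount Int64;
+```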
+
+If you're experiencing this error:
+1. Identify the exact column and operation causing the issue from the error message
+2. Check data types with `DESCRIBE TABLE` or `system.columns`
+3. For mutations: Check `system.mutations` and kill if necessary
+4. Add explicit type casts or use type-conversion functions
+5. Test the corrected query on sample data before full execution
diff --git a/docs/troubleshooting/error_codes/044_ILLEGAL_COLUMN.md b/docs/troubleshooting/error_codes/044_ILLEGAL_COLUMN.md
new file mode 100644
index 00000000000..0e5adf35065
--- /dev/null
+++ b/docs/troubleshooting/error_codes/044_ILLEGAL_COLUMN.md
@@ -0,0 +1,201 @@
+---
+slug: /troubleshooting/error-codes/044_ILLEGAL_COLUMN
+sidebar_label: '044 ILLEGAL_COLUMN'
+doc_type: 'reference'
+keywords: ['error codes', 'ILLEGAL_COLUMN', '044']
+title: '044 ILLEGAL_COLUMN'
+description: 'ClickHouse error code - 044 ILLEGAL_COLUMN'
+---
+
+# Error 44: ILLEGAL_COLUMN
+
+:::tip
+This error occurs when you reference a column in an illegal context, such as using non-aggregated columns outside of `GROUP BY` clauses, referencing columns that aren't available in the current query scope, or attempting to use columns in ways that violate ClickHouse's query execution semantics.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Non-aggregated columns used without `GROUP BY`**
+ - Selecting regular columns alongside aggregate functions without including them in GROUP BY
+ - Mixing aggregated and non-aggregated columns incorrectly
+ - Using columns in `HAVING` clause that aren't in `GROUP BY` or aggregate functions
+
+2. **Column scope issues in subqueries**
+ - Referencing outer query columns in correlated subqueries where not allowed
+ - Using columns from inner queries in outer query contexts
+ - Incorrect column visibility across query nesting levels
+
+3. **Invalid column references in JOINs**
+ - Referencing columns from tables not included in the current JOIN scope
+ - Using columns before the table is introduced in the FROM/JOIN chain
+ - Ambiguous column references when tables have overlapping column names
+
+4. **Array JOIN context violations**
+ - Using non-array columns as if they were arrays
+ - Referencing array-joined columns outside their valid scope
+ - Mixing array-joined and regular columns incorrectly
+
+5. **Window function scope issues**
+ - Using window function results in `WHERE` or `HAVING` clauses (not allowed)
+ - Referencing window function aliases in contexts where they haven't been evaluated
+ - Mixing window functions with aggregates incorrectly
+
+## Common solutions {#common-solutions}
+
+**1. Add missing columns to `GROUP BY` clause**
+
+```sql
+-- Error: user_id is neither aggregated nor listed in GROUP BY
+SELECT
+ user_id,
+ count() as total_events
+FROM events
+GROUP BY date;
+
+-- Solution: Include user_id in GROUP BY
+SELECT
+ user_id,
+ count() as total_events
+FROM events
+GROUP BY date, user_id;
+```
+
+**2. Use ANY or arbitrary aggregate functions for non-key columns**
+
+```sql
+-- Error: Can't select name without aggregation
+SELECT
+ user_id,
+ name,
+ count() as events
+FROM users
+GROUP BY user_id;
+
+-- Solution: Use any() or other aggregate function
+SELECT
+ user_id,
+ any(name) as name,
+ count() as events
+FROM users
+GROUP BY user_id;
+```
+
+**3. Fix subquery column references**
+
+```sql
+-- Error: the outer column users.user_id is not visible inside the subquery
+SELECT
+    user_id,
+    (SELECT max(event_time) FROM events WHERE user_id = users.user_id) as last_event
+FROM users;
+
+-- Solution: rewrite the correlated subquery as a JOIN with aggregation
+SELECT
+    u.user_id,
+    max(e.event_time) AS last_event
+FROM users u
+LEFT JOIN events e ON e.user_id = u.user_id
+GROUP BY u.user_id;
+```
+
+**4. Qualify column names with table aliases in JOINs**
+
+```sql
+-- Error: Ambiguous column reference
+SELECT
+ id,
+ name
+FROM users
+JOIN orders ON users.id = orders.user_id;
+
+-- Solution: Use table aliases or qualify columns
+SELECT
+ u.id as user_id,
+ u.name,
+ o.id as order_id
+FROM users u
+JOIN orders o ON u.id = o.user_id;
+```
+
+**5. Move window function logic to subquery or CTE**
+
+```sql
+-- Error: Can't use window function in WHERE
+SELECT
+ user_id,
+ row_number() OVER (PARTITION BY user_id ORDER BY event_time) as rn
+FROM events
+WHERE rn = 1;
+
+-- Solution: Use subquery or CTE
+WITH ranked AS (
+ SELECT
+ user_id,
+ event_time,
+ row_number() OVER (PARTITION BY user_id ORDER BY event_time) as rn
+ FROM events
+)
+SELECT user_id, event_time
+FROM ranked
+WHERE rn = 1;
+```
+
+**6. Fix ARRAY JOIN scope issues**
+
+```sql
+-- Error: Incorrect array column reference
+SELECT
+ user_id,
+ tag
+FROM users
+ARRAY JOIN tags as tag
+WHERE user_id IN (SELECT user_id FROM users WHERE tag = 'premium');
+
+-- Solution: Restructure to ensure proper scope
+WITH tagged_users AS (
+ SELECT DISTINCT user_id
+ FROM users
+ ARRAY JOIN tags as tag
+ WHERE tag = 'premium'
+)
+SELECT
+ u.user_id,
+ t.tag
+FROM users u
+ARRAY JOIN u.tags as t
+WHERE u.user_id IN (SELECT user_id FROM tagged_users);
+```
+
+**7. Use proper aggregation in `HAVING` clauses**
+
+```sql
+-- Error: name not in GROUP BY or aggregate
+SELECT
+ category,
+ count() as cnt
+FROM products
+GROUP BY category
+HAVING name LIKE '%special%';
+
+-- Solution: Use aggregate function or move to WHERE
+SELECT
+ category,
+ count() as cnt
+FROM products
+WHERE name LIKE '%special%'
+GROUP BY category;
+```
+
+## Prevention tips {#prevention-tips}
+
+1. **Always aggregate non-GROUP BY columns**: When using `GROUP BY`, ensure every column in `SELECT` is either in the `GROUP BY` clause or wrapped in an aggregate function (count, sum, any, etc.)
+2. **Use explicit table aliases in JOINs**: Always qualify column names with table aliases when working with multiple tables to avoid ambiguity and improve query clarity
+3. **Understand query evaluation order**: Remember that SQL evaluates in order: `FROM` → `WHERE` → `GROUP BY` → `HAVING` → `SELECT` → `ORDER BY` → `LIMIT`. Use this to understand where columns are available
+4. **Test complex queries incrementally**: Build complex queries step by step, testing each level of nesting or each JOIN separately to identify where column scope issues arise
+5. **Use CTEs for complex window functions**: When using window functions, consider using CTEs (WITH clauses) to separate the window function evaluation from filtering operations
+6. **Test with the analyzer enabled**: The newer query analyzer (`allow_experimental_analyzer = 1`) resolves column scopes more strictly and typically reports clearer errors for out-of-scope column references during development
+7. **Validate column existence**: Before running complex queries in production, verify that all referenced columns exist in their respective tables and are accessible in the query context
+
+## Related error codes {#related-error-codes}
+
+- [UNKNOWN_IDENTIFIER (47)](/troubleshooting/error-codes/047_UNKNOWN_IDENTIFIER) - Column or identifier not found
+- [NOT_AN_AGGREGATE (215)](/troubleshooting/error-codes/215_NOT_AN_AGGREGATE) - Non-aggregate function used where aggregate expected
+- [ILLEGAL_AGGREGATION (184)](/troubleshooting/error-codes/184_ILLEGAL_AGGREGATION) - Invalid aggregation usage
+- [AMBIGUOUS_COLUMN_NAME (352)](/troubleshooting/error-codes/352_AMBIGUOUS_COLUMN_NAME) - Column name exists in multiple tables
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/046_UNKNOWN_FUNCTION.md b/docs/troubleshooting/error_codes/046_UNKNOWN_FUNCTION.md
new file mode 100644
index 00000000000..48a66ae5f2b
--- /dev/null
+++ b/docs/troubleshooting/error_codes/046_UNKNOWN_FUNCTION.md
@@ -0,0 +1,37 @@
+---
+slug: /troubleshooting/error-codes/046_UNKNOWN_FUNCTION
+sidebar_label: '046 UNKNOWN_FUNCTION'
+doc_type: 'reference'
+keywords: ['error codes', 'UNKNOWN_FUNCTION', '046']
+title: '046 UNKNOWN_FUNCTION'
+description: 'ClickHouse error code - 046 UNKNOWN_FUNCTION'
+---
+
+# Error 46: UNKNOWN_FUNCTION
+
+:::tip
+This error occurs when ClickHouse encounters a function name that it does not recognize or that is not available in the current context.
+It typically indicates a typo in the function name, a missing user-defined function (UDF), or use of a function that doesn't exist in your ClickHouse version.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Typo in function name**
+ - Misspelled function name
+   - Incorrect capitalization (most function names are case-sensitive; only some common SQL functions such as `count` or `sum` have case-insensitive aliases)
+ - Extra or missing characters in function name
+
+2. **User-defined function not properly configured**
+ - Python or executable UDF not uploaded or registered correctly
+ - UDF configuration XML not loaded properly
+ - UDF script execution permissions issues
+ - UDF dependencies or libraries not available
+
+3. **Function not available in current ClickHouse version**
+ - Functions introduced in later versions than the one you are using
+ - [Experimental functions](/beta-and-experimental-features) not enabled
+
+4. **Function exists but identifier is confused**
+ - Column name confused with function name
+ - Similar-named function exists (error may suggest alternatives)
+ - Identifier scope issues
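+
+A quick way to narrow down the cause is to check whether the name is registered at all (adjust the pattern to the function you are calling):
+
+```sql
+SELECT name, origin
+FROM system.functions
+WHERE name ILIKE '%yourFunction%';
+```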
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/047_UNKNOWN_IDENTIFIER.md b/docs/troubleshooting/error_codes/047_UNKNOWN_IDENTIFIER.md
new file mode 100644
index 00000000000..0b8cca49bad
--- /dev/null
+++ b/docs/troubleshooting/error_codes/047_UNKNOWN_IDENTIFIER.md
@@ -0,0 +1,207 @@
+---
+slug: /troubleshooting/error-codes/047_UNKNOWN_IDENTIFIER
+sidebar_label: '047 UNKNOWN_IDENTIFIER'
+doc_type: 'reference'
+keywords: ['error codes', 'UNKNOWN_IDENTIFIER', '047']
+title: '047 UNKNOWN_IDENTIFIER'
+description: 'ClickHouse error code - 047 UNKNOWN_IDENTIFIER'
+---
+
+# Error 47: UNKNOWN_IDENTIFIER
+
+:::tip
+This error occurs when a query references a column name, alias, or identifier that does not exist in the specified scope or context.
+It typically indicates a column name that doesn't exist in the table, a missing alias, or incorrect identifier resolution in complex queries.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Column does not exist in table**
+ - Referencing a column name that is not present in the table schema
+ - Typo in column name
+ - Column was dropped or never created
+
+2. **Incorrect identifier scope in joins**
+ - Referencing columns without proper table aliases
+ - Ambiguous column references in multi-table queries
+ - Column from wrong side of join
+ - Using columns that exist in one table but not in the joined result
+
+3. **Missing columns in subqueries or CTEs**
+ - Column not selected in inner query but referenced in outer query
+ - Column not available in the scope where it's being referenced
+ - Incorrect nesting of subqueries
+
+4. **Alias issues**
+ - Using an alias before it's defined
+ - Referencing column by original name after aliasing
+ - Alias not properly propagated through query stages
+
+5. **Materialized view or integration issues**
+ - Column missing from source table in materialized view
+ - Schema mismatch between source and target
+ - Replication or CDC tools referencing non-existent columns
+
+6. **Aggregation context problems**
+ - Using non-aggregated columns not in `GROUP BY` clause
+ - Referencing columns that are only available after aggregation
+ - Incorrect use of columns in `HAVING` vs `WHERE` clauses
+
+## Common solutions {#common-solutions}
+
+**1. Verify column exists in table**
+
+```sql
+-- Check table structure
+DESCRIBE TABLE your_table;
+
+-- Or check system tables
+SELECT name, type
+FROM system.columns
+WHERE database = 'your_database'
+AND table = 'your_table';
+```
+
+**2. Use proper table aliases in joins**
+
+```sql
+-- WRONG: Ambiguous reference
+SELECT column1, unique_column
+FROM table1
+INNER JOIN table2 ON table1.id = table2.id;
+
+-- CORRECT: Explicit table references
+SELECT t1.column1, t2.unique_column
+FROM table1 AS t1
+INNER JOIN table2 AS t2 ON t1.id = t2.id;
+```
+
+**3. Check column availability in scope**
+
+```sql
+-- WRONG: Column not in subquery SELECT
+SELECT outer_column
+FROM (
+ SELECT inner_column
+ FROM table1
+)
+WHERE outer_column > 10;
+
+-- CORRECT: Include needed columns in subquery
+SELECT outer_column
+FROM (
+ SELECT inner_column, outer_column
+ FROM table1
+)
+WHERE outer_column > 10;
+```
+
+**4. Fix aggregation issues**
+
+```sql
+-- WRONG: Non-aggregated column not in GROUP BY
+SELECT user_id, count(*), email
+FROM users
+GROUP BY user_id;
+
+-- CORRECT: Include all non-aggregated columns in GROUP BY
+SELECT user_id, count(*), email
+FROM users
+GROUP BY user_id, email;
+
+-- OR: Use any() aggregate function
+SELECT user_id, count(*), any(email) AS email
+FROM users
+GROUP BY user_id;
+```
+
+**5. Use `EXPLAIN` to debug**
+
+```sql
+EXPLAIN SYNTAX
+SELECT column_name FROM your_table;
+```
+
+This shows how ClickHouse interprets your query and may reveal the actual column names being used.
+
+**6. Handle case sensitivity**
+
+```sql
+-- Column names are case-sensitive in ClickHouse
+SELECT UserID FROM users; -- Fails if the column is defined as userId
+
+-- Use exact case from schema
+SELECT userId FROM users; -- Matches the column definition
+```
+
+## Common scenarios {#common-scenarios}
+
+**Scenario 1: Missing column in materialized view**
+
+```text
+Error: Missing columns: 'email_id' while processing query
+```
+
+**Solution:** Ensure the column exists in the source table or add it to the materialized view definition.
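+
+A minimal way to compare the two sides (object names here are placeholders):
+
+```sql
+-- Inspect the view definition and the source schema
+SHOW CREATE TABLE your_materialized_view;
+DESCRIBE TABLE source_table;
+
+-- If the column is genuinely missing from the source, add it there first
+ALTER TABLE source_table ADD COLUMN email_id String DEFAULT '';
+```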
+
+**Scenario 2: Column ambiguity in joins**
+
+```text
+Error: Unknown column: customtag1, there are only columns sum(viewercount), sumMap(eventcount_map)
+```
+
+**Solution:** The column exists in the joined table but isn't in the aggregation scope. Use proper aliases and ensure the column is accessible in the aggregation context.
+
+**Scenario 3: Alias before definition**
+
+```sql
+-- WRONG
+SELECT count(*) FROM table WHERE count > 10;
+
+-- CORRECT
+SELECT count(*) AS count FROM table HAVING count > 10;
+```
+
+## Prevention tips {#prevention-tips}
+
+1. **Use explicit table aliases**: Always use `table.column` or `alias.column` syntax in joins
+2. **Verify schema before querying**: Use `DESCRIBE TABLE` to confirm column names and types
+3. **Check column case**: Column names are case-sensitive
+4. **Review aggregation logic**: Ensure all non-aggregated columns are in `GROUP BY`
+5. **Use IDE or query validator**: Many tools can catch column reference errors before execution
+6. **Test subqueries independently**: Verify inner queries work before nesting them
+7. **Monitor schema changes**: Track `ALTER TABLE` operations that might remove columns
+
+## Debugging steps {#debugging-steps}
+
+If you're experiencing this error:
+
+1. **Check the error message carefully** - it often suggests similar column names with "maybe you meant: ['column_name']"
+2. **Verify table schema**:
+
+ ```sql
+ DESCRIBE TABLE your_table;
+ ```
+3. **Check if column is in the right scope** for joins and subqueries
+4. **Use `EXPLAIN SYNTAX`** to see how ClickHouse interprets your query
+5. **Test with simpler query** - remove joins and subqueries to isolate the issue
+6. **Check for typos** in column names (including case sensitivity)
+7. **Review recent schema changes** - was the column recently dropped or renamed?
+8. **For integrations/materialized views** - verify source and target schemas match
+
+## Special considerations {#special-considerations}
+
+**For CDC and replication tools:**
+- This error often occurs when schema changes aren't synchronized
+- The source table may have different columns than expected
+- Check both source and target schemas
+
+**For complex queries with aggregations:**
+- Remember that aggregation changes the available columns
+- Use proper aggregate functions or add columns to `GROUP BY`
+- `HAVING` clause has different column availability than `WHERE`
+
+**For materialized views:**
+- The source table must have all columns referenced in the view query
+- Schema changes to source tables can break materialized views
+- Use `SELECT *` cautiously, as schema changes in the source table can silently break the view
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/048_NOT_IMPLEMENTED.md b/docs/troubleshooting/error_codes/048_NOT_IMPLEMENTED.md
new file mode 100644
index 00000000000..94a26816bed
--- /dev/null
+++ b/docs/troubleshooting/error_codes/048_NOT_IMPLEMENTED.md
@@ -0,0 +1,196 @@
+---
+slug: /troubleshooting/error-codes/048_NOT_IMPLEMENTED
+sidebar_label: '048 NOT_IMPLEMENTED'
+doc_type: 'reference'
+keywords: ['error codes', 'NOT_IMPLEMENTED', '048']
+title: '048 NOT_IMPLEMENTED'
+description: 'ClickHouse error code - 048 NOT_IMPLEMENTED'
+---
+
+# Error 48: NOT_IMPLEMENTED
+
+:::tip
+This error occurs when you attempt to use a feature, function, or operation that is not implemented in your ClickHouse version, not supported for your specific table engine or configuration, or requires enabling experimental settings to use.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Table engine limitations**
+ - `ALTER` operations not supported on specific table engines (View, Memory, File, URL tables)
+ - Mutations (`UPDATE`/`DELETE`) not available for certain engines
+ - `OPTIMIZE` or other maintenance operations unavailable for read-only or external engines
+ - Missing replication features on non-Replicated table engines
+
+2. **Experimental or preview features not enabled**
+ - An experimental or beta feature has not been enabled through a [setting](/beta-and-experimental-features)
+ - The feature is not yet available in ClickHouse Cloud
+ - The feature is in private preview and access needs to be provided by Cloud support
+
+3. **Version-specific features**
+ - Using features from newer ClickHouse versions on older installations
+ - Functions or syntax not backported to your version
+ - Cloud vs self-managed feature differences
+ - Deprecated features removed in newer versions
+
+4. **Data type or operation incompatibilities**
+ - Operations not supported for specific data types (Array, Map, Tuple operations)
+ - Type conversions that don't have implementations
+ - Mathematical operations on incompatible types
+ - Special column operations (ephemeral, alias columns)
+
+5. **Distributed and replicated table limitations**
+ - Certain `ALTER` operations not supported on Distributed tables
+ - Global joins or subqueries not fully implemented
+ - Cross-cluster operations with limited support
+ - Replication features unavailable in standalone mode
+
+6. **Storage and integration limitations**
+ - S3/HDFS operations not fully implemented
+ - Table function limitations (URL, S3, MySQL, PostgreSQL engines)
+ - Backup/restore operations unavailable for certain engines
+ - External dictionary refresh operations not supported
+
+## Common solutions {#common-solutions}
+
+**1. Enable required experimental settings**
+
+```sql
+-- Error: Window functions not available (older versions)
+SELECT
+ user_id,
+ row_number() OVER (PARTITION BY user_id ORDER BY timestamp) as rn
+FROM events;
+
+-- Solution: Enable experimental window functions (< v21.12)
+SET allow_experimental_window_functions = 1;
+
+SELECT
+ user_id,
+ row_number() OVER (PARTITION BY user_id ORDER BY timestamp) as rn
+FROM events;
+```
+
+:::tip
+See ["Beta and experimental features"](/beta-and-experimental-features) page for a list of experimental and beta flags.
+:::
+
+**2. Use supported table engine for operations**
+
+```sql
+-- Error: Cannot ALTER a View table
+ALTER TABLE my_view ADD COLUMN new_column String;
+
+-- Solution: ALTER the underlying table instead
+ALTER TABLE underlying_table ADD COLUMN new_column String;
+
+-- Then recreate the view if needed
+CREATE OR REPLACE VIEW my_view AS
+SELECT *  -- the new column is now included via *
+FROM underlying_table;
+```
+
+**3. Switch from non-replicated to Replicated table engine**
+
+```sql
+-- Error: Replication features not available on MergeTree
+-- Operations like SYNC REPLICA, FETCH PARTITION fail
+
+-- Solution: Migrate to ReplicatedMergeTree
+CREATE TABLE new_table
+(
+ date Date,
+ user_id UInt64,
+ value Float64
+)
+ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/new_table', '{replica}')
+PARTITION BY toYYYYMM(date)
+ORDER BY (date, user_id);
+
+-- Migrate data
+INSERT INTO new_table SELECT * FROM old_table;
+
+-- Rename tables
+RENAME TABLE old_table TO old_table_backup, new_table TO old_table;
+```
+
+**4. Upgrade ClickHouse version for newer features**
+
+```sql
+-- Error: Feature not implemented in version 21.3
+SELECT quantileTDigestWeighted(0.95)(response_time, weight) FROM requests;
+
+-- Solution: Upgrade to version 21.8+ or use alternative
+-- Alternative for older versions:
+SELECT quantileWeighted(0.95)(response_time, weight) FROM requests;
+
+-- Or upgrade ClickHouse:
+-- Check current version
+SELECT version();
+
+-- Plan upgrade to newer version with required features
+```
+
+**5. Work around Distributed table limitations**
+
+```sql
+-- Error: ALTER on Distributed table not fully supported
+ALTER TABLE distributed_table DROP COLUMN old_column;
+
+-- Solution: ALTER each underlying shard table
+-- On each shard:
+ALTER TABLE local_table DROP COLUMN old_column;
+
+-- Recreate Distributed table if needed
+DROP TABLE distributed_table;
+CREATE TABLE distributed_table AS local_table
+ENGINE = Distributed(cluster_name, database_name, local_table, rand());
+```
+
+**6. Use supported operations for external table engines**
+
+```sql
+-- Error: OPTIMIZE not supported for URL table engine
+OPTIMIZE TABLE url_table FINAL;
+
+-- Solution: For URL/File/external engines, recreate or use appropriate engine
+-- If you need optimization, import to MergeTree first:
+CREATE TABLE local_copy ENGINE = MergeTree() ORDER BY id AS
+SELECT * FROM url_table;
+
+OPTIMIZE TABLE local_copy FINAL;
+```
+
+**7. Enable ClickHouse Cloud preview features**
+
+```sql
+-- Error: Feature not available in ClickHouse Cloud
+-- Check if feature requires enablement
+
+-- Solution: Contact support or check Cloud console for preview features
+-- Some features need to be enabled via Cloud console settings
+-- Example: Advanced compute-compute separation, certain integrations
+
+-- Alternatively, use feature flags in query settings:
+SET allow_experimental_analyzer = 1;
+SET enable_optimize_predicate_expression = 1;
+
+SELECT * FROM table WHERE complex_condition;
+```
+
+## Prevention tips {#prevention-tips}
+
+1. **Check compatibility before using new features**: Review ClickHouse release notes and documentation to verify feature availability in your version and deployment type (Cloud vs self-managed)
+2. **Choose appropriate table engines**: Select table engines that support the operations you need (use ReplicatedMergeTree for replication, MergeTree family for mutations, etc.)
+3. **Test experimental features in development first**: Always test experimental features in non-production environments before enabling in production, and monitor ClickHouse changelogs for when features become stable
+4. **Keep ClickHouse versions updated**: Regularly upgrade to newer ClickHouse versions to access new features and improvements, following a testing → staging → production upgrade path
+5. **Use Cloud-compatible patterns**: When using ClickHouse Cloud, design queries and schemas using features documented as Cloud-compatible to avoid surprises
+6. **Review engine-specific limitations**: Before choosing a table engine, review its documentation for supported and unsupported operations (especially for Kafka, MaterializedView, Distributed engines)
+7. **Monitor deprecation warnings**: Pay attention to deprecation notices in release notes to avoid using features that may be removed in future versions
+8. **Use alternative implementations**: When a specific operation isn't implemented, look for alternative approaches using supported features (e.g., using INSERT INTO SELECT instead of UPDATE; see the sketch below)
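+
+For tip 8, a rough sketch of the rewrite-and-swap pattern that can stand in for an in-place `UPDATE` (table and column names are illustrative):
+
+```sql
+-- Rebuild the data with the desired change instead of mutating in place
+CREATE TABLE events_fixed AS events;
+
+INSERT INTO events_fixed (id, status, created_at)
+SELECT
+    id,
+    if(status = 'pending', 'processing', status) AS status,
+    created_at
+FROM events;
+
+-- Swap the tables once the rewrite is verified
+RENAME TABLE events TO events_old, events_fixed TO events;
+```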
+
+## Related error codes {#related-error-codes}
+
+- [UNSUPPORTED_METHOD (1)](/troubleshooting/error-codes/001_UNSUPPORTED_METHOD) - Method not supported in current context
+- [ILLEGAL_TYPE_OF_ARGUMENT (43)](/troubleshooting/error-codes/043_ILLEGAL_TYPE_OF_ARGUMENT) - Operation not supported for data type
+- [BAD_ARGUMENTS (36)](/troubleshooting/error-codes/036_BAD_ARGUMENTS) - Invalid arguments for function or operation
+- [TABLE_IS_READ_ONLY (242)](/troubleshooting/error-codes/242_TABLE_IS_READ_ONLY) - Cannot modify read-only tables
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/049_LOGICAL_ERROR.md b/docs/troubleshooting/error_codes/049_LOGICAL_ERROR.md
new file mode 100644
index 00000000000..468daf0c99c
--- /dev/null
+++ b/docs/troubleshooting/error_codes/049_LOGICAL_ERROR.md
@@ -0,0 +1,170 @@
+---
+slug: /troubleshooting/error-codes/049_LOGICAL_ERROR
+sidebar_label: '049 LOGICAL_ERROR'
+doc_type: 'reference'
+keywords: ['error codes', 'LOGICAL_ERROR', '049']
+title: '049 LOGICAL_ERROR'
+description: 'ClickHouse error code - 049 LOGICAL_ERROR'
+---
+
+# Error 49: LOGICAL_ERROR
+
+:::tip
+This error indicates an internal bug or assertion failure in ClickHouse that should not occur under normal circumstances.
+It represents a violation of internal invariants or unexpected conditions that point to a bug in ClickHouse itself rather than a user error.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Internal assertion failures**
+ - Failed internal consistency checks
+ - Invariant violations in ClickHouse code
+ - Unexpected state transitions that should never happen
+ - Buffer or pointer validation failures
+
+2. **File system cache issues**
+ - Inconsistent cache state in S3 or remote filesystem operations
+ - Buffer offset mismatches (e.g., "Expected X >= Y")
+ - File segment inconsistencies
+
+3. **Merge tree operations**
+ - Part management issues (e.g., "Entry actual part isn't empty yet")
+ - Temporary part conflicts
+ - Part state inconsistencies during merges or mutations
+
+4. **Query optimizer or planner bugs**
+ - Incorrect operand types in expressions
+ - Invalid query plan generation
+ - Column type mismatches in internal processing
+
+5. **Concurrency and synchronization issues**
+ - Race conditions in multi-threaded operations
+ - Lock ordering violations
+ - State corruption from concurrent access
+
+6. **LLVM compilation errors**
+ - Incorrect operand types in compiled expressions
+ - JIT compilation failures
+
+## What to do when you encounter this error {#what-to-do}
+
+**1. This is a bug - report it to ClickHouse**
+
+`LOGICAL_ERROR` always indicates a bug in ClickHouse, not a user error. The error message typically ends with "Report this error to [https://github.com/ClickHouse/ClickHouse/issues](https://github.com/ClickHouse/ClickHouse/issues)".
+
+**2. Gather diagnostic information**
+
+Before reporting, collect:
+
+```sql
+-- Get the full error message and stack trace from logs
+SELECT
+ event_time,
+ query_id,
+ exception,
+ exception_code,
+ stack_trace
+FROM system.query_log
+WHERE exception_code = 49
+ORDER BY event_time DESC
+LIMIT 10;
+```
+
+**3. Note your ClickHouse version**
+
+```sql
+SELECT version();
+```
+
+**4. Try to create a minimal reproducible example**
+
+If possible, identify:
+- The specific query that triggers the error
+- Table schema and sample data
+- Any recent operations (merges, mutations, `ALTER` statements)
+
+**5. Check if the issue is already fixed**
+
+Search existing issues on [GitHub](https://github.com/ClickHouse/ClickHouse/issues), and consider upgrading to a newer version if one is available.
+
+## Temporary workarounds {#temporary-workarounds}
+
+While waiting for a fix, you may try:
+
+**1. Restart the server or retry the operation**
+
+```bash
+# Sometimes temporary state corruption can be cleared
+sudo systemctl restart clickhouse-server
+```
+
+**2. Optimize or rebuild affected parts**
+
+```sql
+-- For specific table issues
+OPTIMIZE TABLE your_table FINAL;
+
+-- Or detach and reattach the table
+DETACH TABLE your_table;
+ATTACH TABLE your_table;
+```
+
+**3. Disable experimental features**
+
+```sql
+-- If using experimental features, try disabling them
+SET allow_experimental_analyzer = 0;
+SET compile_expressions = 0;
+```
+
+**4. Adjust settings that may trigger the bug**
+
+```sql
+-- For filesystem cache issues
+SET enable_filesystem_cache = 0;
+
+-- For query optimization issues
+SET query_plan_enable_optimizations = 0;
+```
+
+**5. Use alternative query formulation**
+
+If a specific query pattern triggers the error, try rewriting the query differently.
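+
+As a hedged sketch (the `events` and `vip_users` tables are hypothetical), a `JOIN` that keeps hitting the assertion can sometimes be expressed with `IN`, which exercises a different code path:
+
+```sql
+-- Original formulation
+SELECT e.user_id, e.event
+FROM events AS e
+INNER JOIN vip_users AS v ON v.user_id = e.user_id;
+
+-- Equivalent rewrite
+SELECT user_id, event
+FROM events
+WHERE user_id IN (SELECT user_id FROM vip_users);
+```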
+
+## Common specific scenarios {#common-scenarios}
+
+**Scenario 1: Buffer offset mismatch**
+
+```text
+Logical error: 'Expected 46044 >= 88088'
+```
+
+This typically occurs with S3 or remote filesystem cache. Try:
+- Clearing the filesystem cache
+- Disabling cache temporarily
+- Upgrading to a newer version
+
+**Scenario 2: Part management errors**
+
+```text
+Logical error: 'Entry actual part isn't empty yet'
+```
+
+Related to merge tree part operations. Try:
+- `OPTIMIZE TABLE FINAL`
+- Checking for stuck merges in `system.merges`
+- Checking mutations in `system.mutations` (sample queries below)
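+
+Hypothetical starting points for those checks:
+
+```sql
+-- In-flight merges
+SELECT database, table, elapsed, progress
+FROM system.merges;
+
+-- Mutations that have not finished, with their last failure reason
+SELECT database, table, mutation_id, latest_fail_reason
+FROM system.mutations
+WHERE NOT is_done;
+```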
+
+**Scenario 3: LLVM compilation errors**
+
+```text
+Logical error: Incorrect operand type
+```
+
+Related to expression compilation. Try:
+
+```sql
+SET compile_expressions = 0;
+SET compile_aggregate_expressions = 0;
+```
diff --git a/docs/troubleshooting/error_codes/050_UNKNOWN_TYPE.md b/docs/troubleshooting/error_codes/050_UNKNOWN_TYPE.md
new file mode 100644
index 00000000000..da530046f3c
--- /dev/null
+++ b/docs/troubleshooting/error_codes/050_UNKNOWN_TYPE.md
@@ -0,0 +1,274 @@
+---
+slug: /troubleshooting/error-codes/050_UNKNOWN_TYPE
+sidebar_label: '050 UNKNOWN_TYPE'
+doc_type: 'reference'
+keywords: ['error codes', 'UNKNOWN_TYPE', '050']
+title: '050 UNKNOWN_TYPE'
+description: 'ClickHouse error code - 050 UNKNOWN_TYPE'
+---
+
+# Error 50: UNKNOWN_TYPE
+
+:::tip
+This error occurs when ClickHouse encounters an unrecognized data type name, typically due to typos in type names, missing required type parameters, using types not available in your ClickHouse version, or incorrect syntax in complex type definitions.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Misspelled or incorrect type names**
+ - Typos in common type names (Int32 vs Int23, String vs Str)
+ - Case sensitivity issues (string vs String)
+ - Wrong type family (using PostgreSQL or MySQL type names)
+ - Deprecated type names in newer versions
+
+2. **Missing required type parameters**
+ - FixedString without length specification
+ - Decimal without precision and scale
+ - DateTime64 without precision parameter
+ - Enum without value definitions
+ - LowCardinality wrapping undefined types
+
+3. **Complex nested type syntax errors**
+ - Invalid Array, Tuple, or Map type definitions
+ - Incorrect nesting of parameterized types
+ - Missing parentheses or brackets in complex types
+ - Wrong separator usage (comma vs space)
+
+4. **Version-specific type availability**
+   - Using types introduced in newer ClickHouse versions, e.g. the `Time` or `Time64` data types
+   - Types removed or renamed in version upgrades, e.g. the deprecated `Object` data type
+ - Experimental types not available in your build
+
+5. **Type inference failures**
+ - Ambiguous `NULL` types in `INSERT` statements
+ - Empty arrays without explicit type specification
+ - Complex expressions where type cannot be determined
+ - Type conflicts in `UNION` queries
+
+## Common solutions {#common-solutions}
+
+**1. Fix typos in type names**
+
+```sql
+-- Error: Unknown type 'Int23'
+CREATE TABLE users (
+ id Int23,
+ name String
+) ENGINE = MergeTree() ORDER BY id;
+
+-- Solution: Use correct type name
+CREATE TABLE users (
+ id Int32,
+ name String
+) ENGINE = MergeTree() ORDER BY id;
+```
+
+**2. Add required type parameters**
+
+```sql
+-- Error: FixedString requires length parameter
+CREATE TABLE products (
+ sku FixedString,
+ name String
+) ENGINE = MergeTree() ORDER BY sku;
+
+-- Solution: Specify length parameter
+CREATE TABLE products (
+ sku FixedString(20),
+ name String
+) ENGINE = MergeTree() ORDER BY sku;
+```
+
+**3. Specify Decimal precision and scale**
+
+```sql
+-- Error: Decimal requires precision and scale
+CREATE TABLE prices (
+ product_id UInt64,
+ price Decimal
+) ENGINE = MergeTree() ORDER BY product_id;
+
+-- Solution: Add precision and scale parameters
+CREATE TABLE prices (
+ product_id UInt64,
+ price Decimal(18, 2) -- 18 total digits, 2 after decimal
+) ENGINE = MergeTree() ORDER BY product_id;
+
+-- Alternative: Use Decimal32, Decimal64, or Decimal128
+CREATE TABLE prices (
+ product_id UInt64,
+ price Decimal64(2) -- 2 decimal places, range up to 18 digits
+) ENGINE = MergeTree() ORDER BY product_id;
+```
+
+**4. Fix complex nested type syntax**
+
+```sql
+-- Error: Invalid Array type syntax
+CREATE TABLE events (
+ user_id UInt64,
+ tags Array String
+) ENGINE = MergeTree() ORDER BY user_id;
+
+-- Solution: Use parentheses for Array element type
+CREATE TABLE events (
+ user_id UInt64,
+ tags Array(String)
+) ENGINE = MergeTree() ORDER BY user_id;
+```
+
+**5. Correct Tuple and Map type definitions**
+
+```sql
+-- Error: Invalid Tuple syntax
+CREATE TABLE coordinates (
+ location_id UInt64,
+ point Tuple Float64, Float64
+) ENGINE = MergeTree() ORDER BY location_id;
+
+-- Solution: Wrap Tuple types in parentheses
+CREATE TABLE coordinates (
+ location_id UInt64,
+ point Tuple(Float64, Float64)
+) ENGINE = MergeTree() ORDER BY location_id;
+
+-- Error: Invalid Map syntax
+CREATE TABLE attributes (
+ item_id UInt64,
+ properties Map String String
+) ENGINE = MergeTree() ORDER BY item_id;
+
+-- Solution: Separate key and value types with comma
+CREATE TABLE attributes (
+ item_id UInt64,
+ properties Map(String, String)
+) ENGINE = MergeTree() ORDER BY item_id;
+```
+
+**6. Specify DateTime64 precision**
+
+```sql
+-- Error: DateTime64 requires precision parameter
+CREATE TABLE logs (
+ timestamp DateTime64,
+ message String
+) ENGINE = MergeTree() ORDER BY timestamp;
+
+-- Solution: Add precision (3 for milliseconds, 6 for microseconds)
+CREATE TABLE logs (
+ timestamp DateTime64(3), -- millisecond precision
+ message String
+) ENGINE = MergeTree() ORDER BY timestamp;
+
+-- With timezone
+CREATE TABLE logs (
+ timestamp DateTime64(3, 'UTC'),
+ message String
+) ENGINE = MergeTree() ORDER BY timestamp;
+```
+
+**7. Enable experimental types**
+
+```sql
+-- Error: Object type not recognized
+CREATE TABLE json_data (
+ id UInt64,
+ data Object('json')
+) ENGINE = MergeTree() ORDER BY id;
+
+-- Solution: Enable experimental Object type
+SET allow_experimental_object_type = 1;
+
+CREATE TABLE json_data (
+ id UInt64,
+ data Object('json')
+) ENGINE = MergeTree() ORDER BY id;
+```
+
+**8. Fix Enum definitions**
+
+```sql
+-- Error: Enum requires value definitions
+CREATE TABLE orders (
+ order_id UInt64,
+ status Enum
+) ENGINE = MergeTree() ORDER BY order_id;
+
+-- Solution: Define Enum values
+CREATE TABLE orders (
+ order_id UInt64,
+ status Enum8('pending' = 1, 'processing' = 2, 'completed' = 3, 'cancelled' = 4)
+) ENGINE = MergeTree() ORDER BY order_id;
+
+-- Or use Enum16 for more values
+CREATE TABLE orders (
+ order_id UInt64,
+ status Enum16('pending' = 1, 'processing' = 2, 'completed' = 3, 'cancelled' = 4)
+) ENGINE = MergeTree() ORDER BY order_id;
+```
+
+**9. Specify types for Nullable and LowCardinality**
+
+```sql
+-- Error: Nullable/LowCardinality requires base type
+CREATE TABLE data (
+ id UInt64,
+ category LowCardinality,
+ optional Nullable
+) ENGINE = MergeTree() ORDER BY id;
+
+-- Solution: Wrap valid base types
+CREATE TABLE data (
+ id UInt64,
+ category LowCardinality(String),
+ optional Nullable(String)
+) ENGINE = MergeTree() ORDER BY id;
+```
+
+**10. Use correct nested type syntax**
+
+```sql
+-- Error: Invalid nested Array syntax
+CREATE TABLE matrix (
+ id UInt64,
+ data Array[Array[Int32]]
+) ENGINE = MergeTree() ORDER BY id;
+
+-- Solution: Use parentheses consistently
+CREATE TABLE matrix (
+ id UInt64,
+ data Array(Array(Int32))
+) ENGINE = MergeTree() ORDER BY id;
+```
+
+**11. Explicit type casting in queries**
+
+```sql
+-- Error: Type cannot be inferred from empty array
+INSERT INTO events (user_id, tags) VALUES (1, []);
+
+-- Solution: Cast to specific type
+INSERT INTO events (user_id, tags) VALUES (1, CAST([] AS Array(String)));
+
+-- Or specify in SELECT
+INSERT INTO events (user_id, tags)
+SELECT 1, [] :: Array(String);
+```
+
+## Prevention tips {#prevention-tips}
+
+1. **Reference official documentation for type names**: Always check the ClickHouse documentation for exact type names and syntax, as type names are case-sensitive and must match exactly (e.g., `String` not `string`)
+2. **Use type parameters consistently**: For parameterized types (FixedString, Decimal, DateTime64, Enum), always include required parameters and verify syntax in documentation before creating tables
+3. **Test complex types incrementally**: When building complex nested types (Array of Tuples, Maps with complex values), test simpler versions first and add complexity gradually
+4. **Validate type compatibility with ClickHouse version**: Before using newer data types, verify they're available in your ClickHouse version by checking release notes or testing in development first
+5. **Use explicit type casting**: When dealing with `NULL`s, empty arrays, or ambiguous expressions, use explicit `CAST()` or `::` syntax to specify exact types
+6. **Enable required experimental settings in session**: If using experimental types (Object, JSON, Variant), enable necessary settings at the session level and document these requirements for production
+7. **Maintain type consistency across schema**: When creating related tables or views, ensure type definitions match exactly to avoid type inference issues in JOINs and UNION operations
+8. **Use schema inference carefully**: When using table functions (s3, url, file), explicitly specify types instead of relying on inference to avoid UNKNOWN_TYPE errors from ambiguous data (see the sketch below)
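+
+For tip 8, a minimal sketch (the bucket URL and column list are placeholders):
+
+```sql
+-- Spell out the structure rather than letting schema inference guess the types
+SELECT *
+FROM s3(
+    'https://my-bucket.s3.amazonaws.com/data/events.csv',
+    'CSVWithNames',
+    'user_id UInt64, event String, ts DateTime'
+)
+LIMIT 10;
+```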
+
+## Related error codes {#related-error-codes}
+
+- [ILLEGAL_TYPE_OF_ARGUMENT (43)](/troubleshooting/error-codes/043_ILLEGAL_TYPE_OF_ARGUMENT) - Wrong type used for function argument
+- [CANNOT_CONVERT_TYPE (70)](/troubleshooting/error-codes/070_CANNOT_CONVERT_TYPE) - Type conversion not possible
+- [TYPE_MISMATCH (53)](/troubleshooting/error-codes/053_TYPE_MISMATCH) - Types don't match in operation
+- [UNKNOWN_IDENTIFIER (47)](/troubleshooting/error-codes/047_UNKNOWN_IDENTIFIER) - Column or identifier not found
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/053_TYPE_MISMATCH.md b/docs/troubleshooting/error_codes/053_TYPE_MISMATCH.md
new file mode 100644
index 00000000000..8b9c24be8b7
--- /dev/null
+++ b/docs/troubleshooting/error_codes/053_TYPE_MISMATCH.md
@@ -0,0 +1,241 @@
+---
+slug: /troubleshooting/error-codes/053_TYPE_MISMATCH
+sidebar_label: '053 TYPE_MISMATCH'
+doc_type: 'reference'
+keywords: ['error codes', 'TYPE_MISMATCH', '053']
+title: '053 TYPE_MISMATCH'
+description: 'ClickHouse error code - 053 TYPE_MISMATCH'
+---
+
+# Error 53: TYPE_MISMATCH
+
+:::tip
+This error occurs when there is an incompatibility between expected and actual data types during data processing, serialization, or type casting operations.
+It typically indicates that ClickHouse encountered data of one type where it expected a different type, often during internal operations like column casting or data serialization.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Internal column type casting failures**
+ - Bad cast from one column type to another (e.g., `ColumnDecimal` to `ColumnVector`)
+ - Sparse column to dense column type mismatches
+ - Nullable column to non-nullable column casts
+ - Decimal precision mismatches (e.g., `Decimal64` vs `Decimal128`)
+
+2. **Data serialization issues**
+ - Type mismatches during binary bulk serialization
+ - Writing data parts with incompatible types
+ - Merge operations with incompatible column types
+
+3. **Integration and replication problems**
+ - Type mismatches in PostgreSQL/MySQL materialized views
+ - CDC (Change Data Capture) operations with schema differences
+ - External table type mapping errors
+
+4. **Mutation and merge operations**
+ - Mutations encountering data with unexpected types
+ - Background merge tasks failing due to type incompatibilities
+ - Part writing with mismatched column types
+
+5. **Sparse column serialization**
+ - Attempting to serialize sparse columns as dense columns
+ - Type casting errors with sparse column representations
+
+## What to do when you encounter this error {#what-to-do}
+
+**1. This is often an internal bug**
+
+`TYPE_MISMATCH` errors raised from internal operations (bad casts during merges, mutations, or serialization) typically indicate an internal ClickHouse issue rather than a user error.
+These should be reported if they persist.
+
+**2. Check for schema mismatches**
+
+```sql
+-- Verify table schema
+DESCRIBE TABLE your_table;
+
+-- Check column types in system tables
+SELECT
+ name,
+ type,
+ default_kind,
+ default_expression
+FROM system.columns
+WHERE database = 'your_database'
+ AND table = 'your_table';
+```
+
+**3. Check for stuck mutations**
+
+```sql
+-- Look for failing mutations
+SELECT
+ database,
+ table,
+ mutation_id,
+ command,
+ create_time,
+ latest_fail_reason
+FROM system.mutations
+WHERE NOT is_done;
+```
+
+**4. Review recent schema changes**
+
+Type mismatches often occur after the following operations (the query after this list can help spot recent ones):
+- `ALTER TABLE MODIFY COLUMN` operations
+- Schema changes in source systems (for integrations)
+- Version upgrades
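+
+One hedged way to surface recent `ALTER` statements against the affected table (database and table names are placeholders; `system.query_log.tables` stores `database.table` strings):
+
+```sql
+SELECT
+    event_time,
+    query
+FROM system.query_log
+WHERE query_kind = 'Alter'
+  AND has(tables, 'your_database.your_table')
+ORDER BY event_time DESC
+LIMIT 10;
+```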
+
+## Common solutions {#common-solutions}
+
+**1. Kill and retry stuck mutations**
+
+```sql
+-- Kill problematic mutation
+KILL MUTATION WHERE mutation_id = 'stuck_mutation_id';
+
+-- Re-run the operation if needed
+```
+
+**2. Optimize table to consolidate parts**
+
+```sql
+-- Force merge to consolidate data types
+OPTIMIZE TABLE your_table FINAL;
+```
+
+**3. Check and fix integration type mappings**
+
+For PostgreSQL/MySQL integrations:
+
+```sql
+-- Verify external table schema matches ClickHouse expectations
+SHOW CREATE TABLE your_postgresql_table;
+```
+
+**4. Disable sparse columns if problematic**
+
+```sql
+-- Sparse serialization is controlled by a MergeTree table setting;
+-- setting the ratio to 1.0 effectively disables it for the table
+ALTER TABLE your_table
+    MODIFY SETTING ratio_of_defaults_for_sparse_serialization = 1.0;
+```
+
+**5. Detach and reattach table**
+
+For persistent issues:
+
+```sql
+DETACH TABLE your_table;
+ATTACH TABLE your_table;
+```
+
+**6. Rebuild affected parts**
+
+If specific parts are corrupted:
+
+```sql
+-- Check parts
+SELECT name, database, table, marks_bytes, rows
+FROM system.parts
+WHERE table = 'your_table' AND active;
+
+-- Detach problematic part
+ALTER TABLE your_table DETACH PART 'part_name';
+```
+
+## Common scenarios {#common-scenarios}
+
+**Scenario 1: Bad cast during merge**
+
+```text
+Bad cast from type DB::ColumnDecimal<...> to
+DB::ColumnDecimal<...>
+```
+
+**Cause:** Decimal precision mismatch between parts being merged.
+
+**Solution:**
+- Check if recent schema changes modified decimal types
+- Optimize table to merge parts with consistent types
+- May need to drop and recreate table with correct schema
+
+**Scenario 2: Sparse column serialization**
+
+```text
+Bad cast from type DB::ColumnSparse to DB::ColumnVector
+```
+
+**Cause:** Sparse column optimization conflicting with serialization.
+
+**Solution:**
+
+```sql
+ALTER TABLE your_table MODIFY SETTING ratio_of_defaults_for_sparse_serialization = 1.0;
+```
+
+Or upgrade to a newer version that includes the relevant fix.
+
+**Scenario 3: PostgreSQL replication type mismatch**
+
+```text
+Bad cast from type DB::ColumnDecimal<...> to
+DB::ColumnDecimal<...>
+```
+
+**Cause:** PostgreSQL type mapped incorrectly to ClickHouse type.
+
+**Solution:**
+- Review PostgreSQL source column types
+- Verify MaterializedPostgreSQL table definitions
+- May need to recreate the materialized table
+
+**Scenario 4: Integration type conflicts**
+
+```text
+Unexpected type string for mysql type 15, got bool
+```
+
+**Cause:** MySQL/PostgreSQL type mapping mismatch.
+
+**Solution:**
+- Verify source schema hasn't changed
+- Check destination table was created with correct types
+- May need to recreate destination table
+
+## Prevention tips {#prevention-tips}
+
+1. **Consistent decimal types:** Use consistent decimal precision across your schema
+2. **Test schema changes:** Test `ALTER` operations on non-production data first
+3. **Monitor merges:** Watch `system.merges` for errors
+4. **Version consistency:** Keep ClickHouse versions consistent across replicas
+5. **Integration testing:** Test integration schemas before production
+6. **Avoid sparse columns:** If encountering issues, disable sparse serialization
+
+## Debugging steps {#debugging-steps}
+
+1. **Identify the failing operation:**
+
+ ```sql
+ SELECT
+ event_time,
+ query_id,
+ exception,
+ query
+ FROM system.query_log
+ WHERE exception_code = 53
+ ORDER BY event_time DESC
+ LIMIT 10;
+ ```
+
+2. **Check merge/mutation logs:**
+
+   ```sql
+   -- In-flight merges for the affected table
+   SELECT database, table, elapsed, progress
+   FROM system.merges;
+
+   -- Mutations that keep failing, with the last failure reason
+   SELECT database, table, mutation_id, latest_fail_reason
+   FROM system.mutations
+   WHERE NOT is_done;
+   ```
diff --git a/docs/troubleshooting/error_codes/060_UNKNOWN_TABLE.md b/docs/troubleshooting/error_codes/060_UNKNOWN_TABLE.md
new file mode 100644
index 00000000000..f4fef66e3aa
--- /dev/null
+++ b/docs/troubleshooting/error_codes/060_UNKNOWN_TABLE.md
@@ -0,0 +1,282 @@
+---
+slug: /troubleshooting/error-codes/060_UNKNOWN_TABLE
+sidebar_label: '060 UNKNOWN_TABLE'
+doc_type: 'reference'
+keywords: ['error codes', 'UNKNOWN_TABLE', '060']
+title: '060 UNKNOWN_TABLE'
+description: 'ClickHouse error code - 060 UNKNOWN_TABLE'
+---
+
+# Error 60: UNKNOWN_TABLE
+
+:::tip
+This error occurs when a query references a table that does not exist in the specified database.
+It indicates that ClickHouse cannot find the table you're trying to access, either because it was never created, has been dropped, or you're referencing the wrong database.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Table name typo**
+ - Misspelled table name
+ - Incorrect capitalization (table names are case-sensitive)
+ - Extra or missing characters in table name
+
+2. **Wrong database context**
+ - Querying a table in the wrong database
+ - Database parameter not set correctly
+ - Using table name without database prefix when not in the correct database context
+
+3. **Table was dropped or renamed**
+ - Table was deleted by another process
+ - Table was renamed and old name is still being used
+ - Temporary tables that have expired
+
+4. **Incorrect database/table specification in connection**
+ - HTTP interface with wrong `database` parameter
+ - Wrong `X-ClickHouse-Database` header
+ - JDBC/ODBC connection string with incorrect database
+
+5. **Client or ORM confusion**
+ - Client libraries using table name as database name
+ - ORM frameworks misinterpreting table references
+ - Query builders constructing incorrect table paths
+
+6. **Distributed table or cluster issues**
+ - Local table missing on some cluster nodes
+ - Distributed table pointing to non-existent local tables
+ - Replication lag causing temporary table unavailability
+
+## Common solutions {#common-solutions}
+
+**1. Verify the table exists**
+
+```sql
+-- List all tables in current database
+SHOW TABLES;
+
+-- List tables in specific database
+SHOW TABLES FROM your_database;
+
+-- Search for table across all databases
+SELECT database, name
+FROM system.tables
+WHERE name LIKE '%your_table%';
+```
+
+**2. Use fully qualified table names**
+
+```sql
+-- WRONG: Ambiguous or missing database context
+SELECT * FROM my_table;
+
+-- CORRECT: Fully qualified table name
+SELECT * FROM my_database.my_table;
+```
+
+**3. Check current database context**
+
+```sql
+-- See current database
+SELECT currentDatabase();
+
+-- Switch to correct database
+USE your_database;
+
+-- Or set database in connection string/parameters
+```
+
+**4. Verify HTTP interface database parameter**
+
+```bash
+# WRONG: Using table name as database parameter
+curl 'http://localhost:8123/?database=my_table' -d 'SELECT * FROM my_table'
+
+# CORRECT: Using correct database name
+curl 'http://localhost:8123/?database=my_database' -d 'SELECT * FROM my_table'
+```
+
+**5. Check for distributed table issues**
+
+```sql
+-- Verify distributed table configuration
+SELECT * FROM system.tables
+WHERE name = 'your_distributed_table'
+AND engine = 'Distributed';
+
+-- Check if local tables exist on all nodes
+SELECT
+ hostName(),
+ database,
+ name
+FROM clusterAllReplicas('your_cluster', system.tables)
+WHERE name = 'your_local_table';
+```
+
+**6. Look for similar table names**
+
+Newer ClickHouse versions suggest similar table names:
+
+```sql
+-- If table doesn't exist, ClickHouse may suggest:
+-- "Table doesn't exist. Maybe you meant: 'similar_table_name'"
+```
+
+## Common scenarios {#common-scenarios}
+
+**Scenario 1: Client using table name as database name**
+
+```text
+Error: Database my_table doesn't exist
+```
+
+**Cause:** HTTP client or ORM incorrectly passing table name as database parameter.
+
+**Solution:** Check connection parameters and ensure `database` parameter contains the database name, not table name.
+
+**Scenario 2: Missing distributed local tables**
+
+```text
+Error: Table default.my_table_local doesn't exist
+```
+
+**Cause:** Distributed table configured but local tables don't exist on some cluster nodes.
+
+**Solution:**
+
+```sql
+-- Create local table on all nodes
+CREATE TABLE my_table_local ON CLUSTER your_cluster
+(...) ENGINE = MergeTree() ...;
+```
+
+**Scenario 3: Temporary table expired**
+
+```text
+Error: Table default.my_temp_table_1681159380741 doesn't exist
+```
+
+**Cause:** Temporary table was created by a process that has ended, or it expired.
+
+**Solution:** Recreate the temporary table or check the process that creates it.
+
+**Scenario 4: Wrong database context**
+
+```text
+Error: Table default.my_table doesn't exist
+```
+
+But table exists in `production` database.
+
+**Solution:**
+
+```sql
+-- Specify database explicitly
+SELECT * FROM production.my_table;
+
+-- Or switch database
+USE production;
+SELECT * FROM my_table;
+```
+
+**Scenario 5: Integration table disappeared**
+
+```text
+Error: Table 'source_db.source_table' doesn't exist
+```
+
+**Cause:** Source table in external system (PostgreSQL/MySQL) was dropped.
+
+**Solution:** Verify source table exists in source system and recreate ClickHouse integration if needed.
+
+## Prevention tips {#prevention-tips}
+
+1. **Always use fully qualified names:** Use `database.table` syntax in production code
+2. **Verify table existence before queries:** Use `IF EXISTS` checks in scripts (see the sketch after this list)
+3. **Set database context explicitly:** Don't rely on default database
+4. **Use table existence checks:** Especially in automated processes
+5. **Monitor table changes:** Track table creation/deletion events
+6. **Document table dependencies:** Especially for distributed setups
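+
+For tip 2, a minimal sketch (database, table, and columns are hypothetical):
+
+```sql
+-- Check existence without failing the script
+EXISTS TABLE my_database.my_table;
+
+-- Guard DDL so reruns are safe
+CREATE TABLE IF NOT EXISTS my_database.my_table
+(
+    id UInt64,
+    name String
+)
+ENGINE = MergeTree()
+ORDER BY id;
+
+DROP TABLE IF EXISTS my_database.my_staging_table;
+```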
+
+## Debugging steps {#debugging-steps}
+
+1. **List all available tables:**
+ ```sql
+ SHOW TABLES;
+ ```
+
+2. **Search for table across databases:**
+
+ ```sql
+ SELECT database, name, engine
+ FROM system.tables
+ WHERE name = 'your_table';
+ ```
+
+3. **Check current database:**
+
+ ```sql
+ SELECT currentDatabase();
+ ```
+
+4. **Verify connection parameters:**
+ - Check HTTP `database` parameter
+ - Verify `X-ClickHouse-Database` header
+ - Review JDBC/ODBC connection string
+
+5. **For distributed tables, check cluster:**
+
+ ```sql
+ -- See cluster configuration
+ SELECT * FROM system.clusters WHERE cluster = 'your_cluster';
+
+ -- Check table exists on all nodes
+ SELECT hostName(), count()
+ FROM clusterAllReplicas('your_cluster', system.tables)
+ WHERE database = 'your_db' AND name = 'your_table'
+ GROUP BY hostName();
+ ```
+
+6. **Check recent table operations:**
+
+ ```sql
+ SELECT
+ event_time,
+ query,
+ query_kind,
+ databases,
+ tables
+ FROM system.query_log
+ WHERE (has(tables, 'your_table') OR query LIKE '%your_table%')
+ AND query_kind IN ('Create', 'Drop', 'Rename')
+ ORDER BY event_time DESC
+ LIMIT 10;
+ ```
+
+## Special considerations {#special-considerations}
+
+**For HTTP interface users:**
+- The `database` parameter specifies which database to use
+- This is NOT the table name
+- Common issue with ORMs and query builders
+
+**For distributed tables:**
+- The distributed table must exist
+- Local tables must exist on all cluster nodes
+- Use `ON CLUSTER` clause when creating tables
+
+**For temporary tables:**
+- Temporary tables are session-specific
+- They disappear when the session ends
+- Tables whose names include timestamps are often temporary tables
+
+**For integrations (MySQL/PostgreSQL):**
+- Verify source table exists in source system
+- Check connection to source system
+- Review materialized view or integration configuration
+
+If you're experiencing this error:
+1. Double-check the table name for typos
+2. Verify you're querying the correct database
+3. Use fully qualified table names (`database.table`)
+4. Check connection parameters (especially for HTTP interface)
+5. Verify the table actually exists using `SHOW TABLES`
+6. For distributed setups, check all cluster nodes
diff --git a/docs/troubleshooting/error_codes/062_SYNTAX_ERROR.md b/docs/troubleshooting/error_codes/062_SYNTAX_ERROR.md
new file mode 100644
index 00000000000..343f41a5346
--- /dev/null
+++ b/docs/troubleshooting/error_codes/062_SYNTAX_ERROR.md
@@ -0,0 +1,344 @@
+---
+slug: /troubleshooting/error-codes/062_SYNTAX_ERROR
+sidebar_label: '062 SYNTAX_ERROR'
+doc_type: 'reference'
+keywords: ['error codes', 'SYNTAX_ERROR', '062']
+title: '062 SYNTAX_ERROR'
+description: 'ClickHouse error code - 062 SYNTAX_ERROR'
+---
+
+# Error 62: SYNTAX_ERROR
+
+:::tip
+This error occurs when ClickHouse's SQL parser encounters invalid SQL syntax that it cannot interpret.
+It indicates that your query contains syntax errors such as missing keywords, incorrect punctuation, typos in commands, or malformed SQL statements.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Missing or incorrect punctuation**
+ - Missing commas between columns or values
+ - Missing or mismatched parentheses
+ - Missing or extra quotes (single or double)
+ - Missing semicolons in multi-statement queries
+
+2. **Typos in SQL keywords or function names**
+ - Misspelled SQL keywords (`SELCT` instead of `SELECT`)
+ - Wrong function names or syntax
+ - Case sensitivity issues in identifiers
+
+3. **Incorrect query structure**
+ - Missing required clauses (e.g., `FROM` clause)
+ - Clauses in the wrong order
+ - Invalid combinations of keywords
+
+4. **Quote and identifier issues**
+ - Using wrong quote types (double quotes for strings instead of single)
+ - Unescaped quotes within strings
+ - Missing backticks for identifiers with special characters
+
+5. **Data format confusion**
+ - Trying to execute data as SQL
+ - CSV/TSV data interpreted as SQL commands
+ - Binary or non-text data in SQL context
+
+6. **Incomplete or truncated queries**
+ - Query cut off mid-statement
+ - Missing closing parentheses or brackets
+ - Incomplete expressions
+
+## Common solutions {#common-solutions}
+
+**1. Check for missing or extra punctuation**
+
+```sql
+-- WRONG: Missing comma
+SELECT
+ column1
+ column2
+FROM table;
+
+-- CORRECT: Include comma
+SELECT
+ column1,
+ column2
+FROM table;
+```
+
+**2. Verify quote types**
+
+```sql
+-- WRONG: Double quotes for string literals
+SELECT * FROM table WHERE name = "John";
+
+-- CORRECT: Single quotes for string literals
+SELECT * FROM table WHERE name = 'John';
+
+-- Note: Backticks for identifiers with special characters
+SELECT `column-name` FROM table;
+```
+
+**3. Check parentheses balance**
+
+```sql
+-- WRONG: Unbalanced parentheses
+SELECT * FROM table WHERE (column1 = 1 AND column2 = 2;
+
+-- CORRECT: Balanced parentheses
+SELECT * FROM table WHERE (column1 = 1 AND column2 = 2);
+```
+
+**4. Verify keyword spelling and order**
+
+```sql
+-- WRONG: Incorrect keyword order
+SELECT * WHERE column1 = 1 FROM table;
+
+-- CORRECT: Proper keyword order
+SELECT * FROM table WHERE column1 = 1;
+```
+
+**5. Use proper identifiers for reserved words**
+
+```sql
+-- WRONG: Using reserved word without escaping
+SELECT from FROM table;
+
+-- CORRECT: Escape reserved words with backticks
+SELECT `from` FROM table;
+```
+
+**6. Check for data vs SQL confusion**
+
+```sql
+-- ERROR: Trying to execute data as SQL
+85c59771-ae5d-4a53-9eed-9418296281f8 Intelligent Search
+
+-- This is data, not SQL - use INSERT INTO or file import instead
+INSERT INTO table VALUES ('85c59771-ae5d-4a53-9eed-9418296281f8', 'Intelligent Search');
+```
+
+## Common scenarios {#common-scenarios}
+
+**Scenario 1: Missing comma in column list**
+
+```text
+Error: Syntax error: failed at position X
+```
+
+**Cause:** Forgot comma between column names.
+
+**Solution:**
+
+```sql
+-- WRONG
+SELECT
+ id
+ name
+ email
+FROM users;
+
+-- CORRECT
+SELECT
+ id,
+ name,
+ email
+FROM users;
+```
+
+**Scenario 2: Data interpreted as SQL**
+
+```text
+Error: Syntax error: failed at position 1 ('85c59771')
+```
+
+**Cause:** Trying to insert data directly without `INSERT` statement.
+
+**Solution:**
+
+```sql
+-- Use proper INSERT syntax
+INSERT INTO table FORMAT TSV
+85c59771-ae5d-4a53-9eed-9418296281f8 Intelligent Search 2021-06-18
+```
+
+**Scenario 3: Unescaped quotes in strings**
+
+```text
+Error: Syntax error (missing closing quote)
+```
+
+**Cause:** String contains quotes that aren't escaped.
+
+**Solution:**
+
+```sql
+-- WRONG
+SELECT 'It's a test';
+
+-- CORRECT: Escape with backslash or double the quote
+SELECT 'It\'s a test';
+-- OR
+SELECT 'It''s a test';
+```
+
+**Scenario 4: Missing parentheses in function calls**
+
+```text
+Error: Syntax error
+```
+
+**Cause:** Function call without parentheses.
+
+**Solution:**
+
+```sql
+-- WRONG
+SELECT now, count
+FROM table;
+
+-- CORRECT
+SELECT now(), count()
+FROM table;
+```
+
+**Scenario 5: Invalid alias syntax**
+
+```text
+Error: Syntax error
+```
+
+**Cause:** Using `AS` incorrectly or missing quotes for aliases with spaces.
+
+**Solution:**
+
+```sql
+-- WRONG
+SELECT column1 myColumn Name
+FROM table;
+
+-- CORRECT
+SELECT column1 AS `myColumn Name`
+FROM table;
+
+-- OR better
+SELECT column1 AS my_column_name
+FROM table;
+```
+
+## Prevention tips {#prevention-tips}
+
+1. **Use a SQL formatter:** Format queries before execution to catch syntax issues
+2. **Test incrementally:** Build complex queries step by step
+3. **Use IDE with syntax highlighting:** Many editors catch syntax errors before execution
+4. **Check for balanced punctuation:** Verify all parentheses, brackets, and quotes are matched
+5. **Review error position:** Error message usually indicates where parsing failed
+6. **Validate with `EXPLAIN`:** Use `EXPLAIN SYNTAX` to check query parsing without execution
+7. **Copy-paste with caution:** Hidden characters from copy-paste can cause syntax errors
+
+## Debugging steps {#debugging-steps}
+
+1. **Read the error message carefully:**
+
+ ```text
+ Syntax error: failed at position 45 ('WHERE') (line 3, col 5)
+ ```
+
+ The error tells you exactly where it failed.
+
+2. **Use EXPLAIN SYNTAX to test:**
+
+ ```sql
+ EXPLAIN SYNTAX
+ SELECT * FROM table WHERE column = 'value';
+ ```
+
+3. **Simplify the query:**
+
+ Start with the simplest valid query and add complexity:
+
+ ```sql
+ -- Start here
+ SELECT * FROM table;
+
+ -- Add WHERE
+ SELECT * FROM table WHERE id = 1;
+
+ -- Add more conditions
+ SELECT * FROM table WHERE id = 1 AND name = 'test';
+ ```
+
+4. **Check for invisible characters:**
+ - Copy to plain text editor
+ - Look for non-standard spaces or characters
+ - Retype the query if needed
+
+5. **Verify quote matching:**
+
+ Count opening and closing quotes:
+
+ ```sql
+ -- Use editor's bracket matching feature
+ -- Or manually count: ', ', ", (, ), [, ]
+ ```
+
+6. **Check the query log:**
+
+ ```sql
+ SELECT
+ query,
+ exception
+ FROM system.query_log
+ WHERE exception_code = 62
+ ORDER BY event_time DESC
+ LIMIT 5;
+ ```
+
+## Special considerations {#special-considerations}
+
+**For file imports:**
+- Ensure you're using correct format specification (`FORMAT CSV`, `FORMAT TSV`, etc.)
+- Don't try to execute data as SQL queries
+- Use appropriate import methods for bulk data
+
+**For programmatic query generation:**
+- Use parameterized queries or prepared statements (see the sketch after this list)
+- Properly escape identifiers and values
+- Validate generated SQL before execution
+- Consider using query builders that handle syntax
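+
+For example, a hedged sketch using ClickHouse query parameters (the parameter and table names are illustrative; in `clickhouse-client` a parameter can be set with `SET param_...`, over HTTP it is passed as a `param_...` URL parameter), which avoids manual escaping entirely:
+
+```sql
+-- Define the parameter for the session, then reference it as {name:Type}
+SET param_search_name = 'O''Brien';
+
+SELECT *
+FROM users
+WHERE name = {search_name:String};
+```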
+
+**For complex queries:**
+- Break into CTEs (Common Table Expressions) for readability
+- Use proper indentation
+- Comment complex sections
+- Test subqueries independently
+
+**For special characters:**
+- Use backticks for identifiers: `` `my-column` ``
+- Use single quotes for strings: `'my string'`
+- Escape quotes within strings: `'it\'s'` or `'it''s'`
+
+## Common SQL syntax rules in ClickHouse {#clickhouse-syntax-rules}
+
+1. **String literals:** Use single quotes `'string'`
+2. **Identifiers:** Use backticks for special characters `` `identifier` ``
+3. **Comments:**
+ - Single line: `-- comment`
+ - Multi-line: `/* comment */`
+4. **Statement terminator:** Semicolon `;` (optional for single statements)
+5. **Case sensitivity:**
+ - Keywords are case-insensitive
+ - Table/column names are case-sensitive by default
+6. **Number formats:**
+ - Integers: `123`
+ - Floats: `123.45`
+ - Scientific: `1.23e10`
+
+If you're experiencing this error:
+1. Read the error message to find the exact position of the syntax error
+2. Check for missing commas, parentheses, or quotes around that position
+3. Verify SQL keywords are spelled correctly
+4. Ensure you're using proper quote types (single quotes for strings)
+5. Make sure you're executing SQL, not raw data
+6. Use `EXPLAIN SYNTAX` to validate query structure
+7. Simplify the query to isolate the syntax issue
diff --git a/docs/troubleshooting/error_codes/070_CANNOT_CONVERT_TYPE.md b/docs/troubleshooting/error_codes/070_CANNOT_CONVERT_TYPE.md
new file mode 100644
index 00000000000..0ee0c8635de
--- /dev/null
+++ b/docs/troubleshooting/error_codes/070_CANNOT_CONVERT_TYPE.md
@@ -0,0 +1,239 @@
+---
+slug: /troubleshooting/error-codes/070_CANNOT_CONVERT_TYPE
+sidebar_label: '070 CANNOT_CONVERT_TYPE'
+doc_type: 'reference'
+keywords: ['error codes', 'CANNOT_CONVERT_TYPE', '070']
+title: '070 CANNOT_CONVERT_TYPE'
+description: 'ClickHouse error code - 070 CANNOT_CONVERT_TYPE'
+---
+
+# Error 70: CANNOT_CONVERT_TYPE
+
+:::tip
+This error occurs when ClickHouse cannot convert data from one type to another due to incompatibility or invalid values.
+It indicates that a type conversion operation failed because the source data cannot be safely or logically converted to the target type.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Enum value mismatches during schema evolution**
+ - Enum values changed between table schema and stored data
+ - Enum element has different numeric value in current schema vs. data parts
+ - Adding or removing enum values without proper migration
+ - Reordering enum values causing value conflicts
+
+2. **Invalid string to numeric conversions**
+ - Trying to parse empty strings as numbers
+ - String contains non-numeric characters
+ - String value out of range for target numeric type
+
+3. **Field value out of range**
+ - Numeric value exceeds the maximum/minimum for target type
+ - Large integers don't fit into smaller integer types
+ - Settings values outside valid range
+
+4. **Type incompatibility in comparisons or casts**
+ - Comparing incompatible types without explicit conversion
+ - Implicit type conversions that ClickHouse doesn't support
+ - Wrong data type in partition column operations
+
+5. **Data corruption or schema conflicts**
+ - Stored data doesn't match current table schema
+ - Metadata inconsistency between data parts
+ - Broken data parts after incomplete schema changes
+
+## What to do when you encounter this error {#what-to-do}
+
+**1. Check the error message for specific details**
+
+The error message usually includes:
+- What value failed to convert
+- Source and target types
+- The context (column name, operation)
+
+```sql
+-- Query logs for recent conversion errors
+SELECT
+ event_time,
+ query,
+ exception
+FROM system.query_log
+WHERE exception_code = 70
+ AND event_time > now() - INTERVAL 1 HOUR
+ORDER BY event_time DESC
+LIMIT 10;
+```
+
+**2. For Enum conversion errors - check schema history**
+
+```sql
+-- Check table structure
+SHOW CREATE TABLE your_table;
+
+-- Compare enum definitions between current schema and data
+-- Look for changed enum values or reordered items
+```
+
+**3. For string to number conversions - validate your data**
+
+```sql
+-- Find problematic values
+SELECT column_name
+FROM your_table
+WHERE NOT match(column_name, '^-?[0-9]+$') -- For integers
+LIMIT 100;
+
+-- Use safe conversion functions
+SELECT toInt32OrZero(column_name) -- Returns 0 for invalid values
+FROM your_table;
+```
+
+**4. Check for data type mismatches**
+
+```sql
+-- Verify column types
+SELECT
+ name,
+ type
+FROM system.columns
+WHERE table = 'your_table'
+ AND database = 'your_database';
+```
+
+## Quick fixes {#quick-fixes}
+
+**1. For Enum schema changes - use safe conversion**
+
+```sql
+-- Option 1: Add new enum values at the end (safe)
+ALTER TABLE your_table
+ MODIFY COLUMN status Enum8('old1' = 1, 'old2' = 2, 'new3' = 3);
+
+-- Option 2: Recreate table with new enum (for major changes)
+CREATE TABLE your_table_new AS your_table
+ENGINE = MergeTree()
+ORDER BY ...;
+
+INSERT INTO your_table_new SELECT * FROM your_table;
+
+RENAME TABLE your_table TO your_table_old, your_table_new TO your_table;
+```
+
+**2. For string to numeric conversions - use safe functions**
+
+```sql
+-- Use OrZero variants that return 0 for invalid values
+SELECT toInt32OrZero(string_column) FROM table;
+
+-- Use OrNull variants that return NULL for invalid values
+SELECT toInt32OrNull(string_column) FROM table;
+
+-- Use best-effort parsing functions that return NULL on failure
+SELECT parseDateTimeBestEffortOrNull(date_string) FROM table;
+```
+
+**3. For range issues - use appropriate types**
+
+```sql
+-- Use larger types when needed
+ALTER TABLE your_table
+ MODIFY COLUMN big_number Int64; -- Instead of Int32
+
+-- Or use Decimal for large numbers
+ALTER TABLE your_table
+ MODIFY COLUMN amount Decimal(18, 2);
+```
+
+**4. For corrupted data parts - rebuild affected parts**
+
+```sql
+-- Optimize specific partition
+OPTIMIZE TABLE your_table PARTITION 'partition_id' FINAL;
+
+-- If parts are broken, detach and reattach
+ALTER TABLE your_table DETACH PARTITION 'partition_id';
+ALTER TABLE your_table ATTACH PARTITION 'partition_id';
+```
+
+**5. Handle type conversions explicitly in queries**
+
+```sql
+-- Explicit CAST instead of implicit conversion
+SELECT CAST(column AS Int32) FROM table;
+
+-- Use appropriate comparison operators
+SELECT * FROM table WHERE toString(id) = 'value'; -- Instead of id = 'value'
+```
+
+## Common specific scenarios {#common-scenarios}
+
+**Scenario 1: Enum conversion during merge**
+
+```text
+Enum conversion changes value for element 'SystemLibrary' from 18 to 17
+```
+
+**Cause:** Data was written with one enum definition, but the schema changed and now the same element has a different numeric value.
+
+**Solution:**
+- Never reorder or change numeric values of existing enum elements
+- Always add new enum values at the end
+- If you must change enum values, recreate the table with data migration
+
+**Scenario 2: Empty string to integer conversion**
+
+```text
+Attempt to read after eof: while converting '' to UInt8
+```
+
+**Cause:** Trying to convert an empty string to a numeric type.
+
+**Solution:**
+```sql
+-- Use safe conversion
+SELECT toUInt8OrZero(column_name) FROM table;
+
+-- Or handle empty strings
+SELECT if(column_name = '', 0, toUInt8(column_name)) FROM table;
+```
+
+**Scenario 3: Field value out of range**
+
+```text
+Field value 18446744073709551516 is out of range of long type
+```
+
+**Cause:** Setting or value exceeds the maximum value for the target type.
+
+**Solution:**
+```sql
+-- Use a value within the valid range for this session-level setting
+SET zstd_window_log_max = 31;
+
+-- Or use larger data type
+ALTER TABLE your_table
+ MODIFY COLUMN id UInt64; -- Instead of Int64
+```
+
+**Scenario 4: ClickPipe/Replication type mismatch**
+
+```text
+Cannot convert string to type UInt8
+```
+
+**Cause:** Column order mismatch between source and destination, or wrong type mapping.
+
+**Solution:**
+- Ensure column mapping uses names, not positions
+- Verify data types match between source and target
+- Check replication configuration for correct type mapping
+
+## Prevention best practices {#prevention}
+
+1. **Always add enum values at the end** - never reorder or change existing values
+2. **Use safe conversion functions** (`toInt32OrNull`, `toInt32OrZero`) when data quality is uncertain
+3. **Validate data before insertion** - use input format settings to handle bad data (see the sketch after this list)
+4. **Choose appropriate data types** - use types large enough for your data range
+5. **Test schema changes carefully** - especially with Enum types
+6. **Monitor for conversion errors** - set up alerts on error code 70
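+
+For tip 3, a hedged sketch of input-format settings that tolerate imperfect source data (the thresholds are illustrative):
+
+```sql
+-- Allow a small number of malformed rows instead of failing the whole insert
+SET input_format_allow_errors_num = 10;       -- absolute count of bad rows allowed
+SET input_format_allow_errors_ratio = 0.01;   -- or as a fraction of all rows
+
+-- Substitute column defaults for NULL input values where possible
+SET input_format_null_as_default = 1;
+```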
diff --git a/docs/troubleshooting/error_codes/081_UNKNOWN_DATABASE.md b/docs/troubleshooting/error_codes/081_UNKNOWN_DATABASE.md
new file mode 100644
index 00000000000..f511c252c7d
--- /dev/null
+++ b/docs/troubleshooting/error_codes/081_UNKNOWN_DATABASE.md
@@ -0,0 +1,324 @@
+---
+slug: /troubleshooting/error-codes/081_UNKNOWN_DATABASE
+sidebar_label: '081 UNKNOWN_DATABASE'
+doc_type: 'reference'
+keywords: ['error codes', 'UNKNOWN_DATABASE', '081']
+title: '081 UNKNOWN_DATABASE'
+description: 'ClickHouse error code - 081 UNKNOWN_DATABASE'
+---
+
+# Error 81: UNKNOWN_DATABASE
+
+:::tip
+This error occurs when you attempt to access a database that doesn't exist, hasn't been created yet, or that you don't have permission to access.
+This can happen due to typos, missing database creation steps, permission restrictions, or issues in distributed cluster configurations.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Database doesn't exist**
+ - Typo in database name
+ - Database not created yet (missing `CREATE DATABASE` step)
+ - Database was dropped or deleted
+ - Wrong database name in connection string or queries
+
+2. **Permission and access issues**
+ - User lacks permissions to access the database
+ - Database exists but user's `GRANTS` don't include it
+ - Row-level security or access policies restricting visibility
+ - Cloud organization or service-level access restrictions
+
+3. **Case sensitivity and naming issues**
+ - Database name case mismatch (especially in distributed setups)
+ - Special characters or reserved words in database names
+ - Unquoted database names with spaces or special chars
+ - Unicode or non-ASCII characters in names
+
+4. **Distributed and cluster issues**
+ - Database doesn't exist on all cluster nodes
+ - Shard-specific database missing on some replicas
+ - Cross-cluster query referencing database on other cluster
+ - Materialized views or dictionaries referencing missing databases
+
+5. **Connection and context issues**
+ - Connected to wrong ClickHouse server or instance
+ - Default database not set in connection
+ - Database specified in connection string doesn't exist
+ - Using wrong credentials or connection profile
+
+6. **Schema migration and timing issues**
+ - Scripts running before database creation completes
+ - Race conditions in parallel migrations
+ - Database dropped and recreated causing timing gaps
+ - Incomplete rollback leaving references to deleted databases
+
+## Common solutions {#common-solutions}
+
+**1. Verify database exists and create if missing**
+
+```sql
+-- Error: Database 'analytics' doesn't exist
+SELECT * FROM analytics.events;
+
+-- Solution: Check if database exists
+SHOW DATABASES LIKE 'analytics';
+
+-- Create the database if missing
+CREATE DATABASE IF NOT EXISTS analytics;
+
+-- Then query the table
+SELECT * FROM analytics.events;
+```
+
+**2. List available databases**
+
+```sql
+-- Check all databases you have access to
+SHOW DATABASES;
+
+-- Or query system table
+SELECT name FROM system.databases ORDER BY name;
+
+-- Check specific database with pattern
+SHOW DATABASES LIKE '%prod%';
+```
+
+**3. Fix database name typos**
+
+```sql
+-- Error: Database 'analtyics' doesn't exist (typo)
+USE analtyics;
+
+-- Solution: Use correct spelling
+USE analytics;
+
+-- For queries, use correct database name
+SELECT * FROM analytics.events WHERE date = today();
+```
+
+**4. Use qualified table names**
+
+```sql
+-- Error: Can occur if current database not set
+SELECT * FROM events;
+
+-- Solution: Always qualify table names with database
+SELECT * FROM analytics.events;
+
+-- Or set default database
+USE analytics;
+SELECT * FROM events;
+```
+
+**5. Check and grant permissions**
+
+```sql
+-- Error: User doesn't have access to database
+SELECT * FROM restricted_db.sensitive_data;
+
+-- Solution: Check current user's grants
+SHOW GRANTS;
+
+-- As admin, grant access to the database
+GRANT SELECT ON restricted_db.* TO username;
+
+-- Grant all privileges on database
+GRANT ALL ON restricted_db.* TO username;
+
+-- Create database and grant in one workflow
+CREATE DATABASE IF NOT EXISTS analytics;
+GRANT SELECT, INSERT ON analytics.* TO app_user;
+```
+
+**6. Handle case-sensitive database names**
+
+```sql
+-- Error: Database 'Analytics' vs 'analytics' mismatch
+SELECT * FROM Analytics.events;
+
+-- Solution: Use exact case as stored
+SELECT name FROM system.databases WHERE name ILIKE 'analytics';
+
+-- Always use consistent casing
+SELECT * FROM analytics.events;
+
+-- Or quote if using mixed case
+CREATE DATABASE "MyDatabase";
+SELECT * FROM "MyDatabase".events;
+```
+
+**7. Create database on all cluster nodes**
+
+```sql
+-- Error: Database exists on some nodes but not all
+SELECT * FROM cluster('my_cluster', analytics.events);
+
+-- Solution: Create database on all nodes using ON CLUSTER
+CREATE DATABASE IF NOT EXISTS analytics ON CLUSTER my_cluster;
+
+-- Verify database exists on all nodes
+SELECT
+ hostName(),
+ name as database
+FROM clusterAllReplicas('my_cluster', system.databases)
+WHERE name = 'analytics';
+
+-- Create tables on cluster
+CREATE TABLE analytics.events ON CLUSTER my_cluster
+(
+ timestamp DateTime,
+ user_id UInt64,
+ event String
+)
+ENGINE = ReplicatedMergeTree()
+ORDER BY (timestamp, user_id);
+```
+
+**8. Fix materialized view references**
+
+```sql
+-- Error: Materialized view references non-existent database
+CREATE MATERIALIZED VIEW analytics.daily_summary
+ENGINE = SummingMergeTree()
+ORDER BY date
+AS SELECT
+ date,
+ count() as events
+FROM old_database.events -- This database was dropped
+GROUP BY date;
+
+-- Solution: Create missing database or update reference
+-- Option 1: Create the missing database
+CREATE DATABASE IF NOT EXISTS old_database;
+
+-- Option 2: Update materialized view to reference correct database
+DROP VIEW IF EXISTS analytics.daily_summary;
+CREATE MATERIALIZED VIEW analytics.daily_summary
+ENGINE = SummingMergeTree()
+ORDER BY date
+AS SELECT
+ date,
+ count() as events
+FROM analytics.events -- Correct database
+GROUP BY date;
+```
+
+**9. Handle database in connection strings**
+
+```sql
+-- Error: Connection string specifies non-existent database
+-- Connection: clickhouse://localhost:9000/nonexistent_db
+
+-- Solution: Create database first or use existing one
+-- Option 1: Create the database
+CREATE DATABASE IF NOT EXISTS nonexistent_db;
+
+-- Option 2: Connect without specifying database
+-- Connection: clickhouse://localhost:9000/
+-- Then specify database in queries
+
+-- Option 3: Use default database
+-- Connection: clickhouse://localhost:9000/default
+```
+
+**10. Verify database in migrations**
+
+```sql
+-- Error: Migration script assumes database exists
+-- migration.sql
+INSERT INTO analytics.events VALUES (...);
+
+-- Solution: Always include database creation
+-- migration.sql
+CREATE DATABASE IF NOT EXISTS analytics;
+
+-- In cluster environments, run the creation ON CLUSTER so the DDL reaches every node
+-- CREATE DATABASE IF NOT EXISTS analytics ON CLUSTER my_cluster;
+
+INSERT INTO analytics.events VALUES (...);
+```
+
+**11. Handle special characters in database names**
+
+```sql
+-- Error: Database with special characters not properly quoted
+SELECT * FROM my-database.events;
+
+-- Solution: Quote database names with special characters
+SELECT * FROM `my-database`.events;
+
+-- Better: Use underscores instead of hyphens
+CREATE DATABASE my_database;
+SELECT * FROM my_database.events;
+
+-- Avoid spaces and special characters
+CREATE DATABASE analytics_prod; -- Good
+-- CREATE DATABASE "analytics prod"; -- Works but not recommended
+```
+
+**12. Check database engine and access**
+
+```sql
+-- Some database engines may have special access requirements
+-- Check database engine
+SELECT
+ name,
+ engine,
+ data_path
+FROM system.databases
+WHERE name = 'analytics';
+
+-- For MySQL/PostgreSQL database engines, verify connection
+-- Error may occur if external database connection fails
+CREATE DATABASE mysql_db
+ENGINE = MySQL('remote_host:3306', 'database', 'user', 'password');
+
+-- Test access
+SELECT * FROM mysql_db.table LIMIT 1;
+
+-- If connection fails, check credentials and connectivity
+```
+
+**13. Handle dropped database scenarios**
+
+```sql
+-- Error: Database was dropped but objects still reference it
+-- Check for dependent objects
+SELECT
+ database,
+ name,
+ engine,
+ create_table_query
+FROM system.tables
+WHERE create_table_query LIKE '%old_database%';
+
+-- Solution: Recreate database or update references
+-- Option 1: Recreate the database
+CREATE DATABASE old_database;
+
+-- Option 2: Find and update all references
+-- Drop dependent materialized views
+DROP VIEW dependent_view;
+
+-- Recreate with correct references
+CREATE MATERIALIZED VIEW dependent_view AS
+SELECT * FROM correct_database.events;
+```
+
+## Prevention tips {#prevention-tips}
+
+1. **Always use `IF NOT EXISTS` in database creation**: Include `CREATE DATABASE IF NOT EXISTS` in all migration scripts and initialization code to prevent errors when database already exists
+2. **Use qualified table names**: Always prefix table names with database names (`database.table`) to avoid ambiguity and make queries more portable across different contexts
+3. **Verify database existence before operations**: In scripts and applications, check database existence using `SHOW DATABASES` or query `system.databases` before performing operations (see the sketch after this list)
+4. **Use consistent naming conventions**: Adopt lowercase naming without special characters for databases to avoid case sensitivity and quoting issues across different environments
+5. **Create databases `ON CLUSTER`**: In clustered environments, always use `ON CLUSTER` clause when creating databases to ensure consistency across all nodes
+6. **Document database dependencies**: Maintain clear documentation of which databases are required by your tables, views, and applications, especially for materialized views and dictionaries
+7. **Implement proper error handling**: In application code, catch `UNKNOWN_DATABASE` errors and provide clear messages to users, potentially with automatic database creation logic
+8. **Test migrations in staging**: Always test database creation and migration scripts in staging environments that mirror production to catch missing database issues early
+9. **Use configuration management**: Store database creation scripts in version control and use infrastructure-as-code tools to ensure databases exist before deploying dependent resources
+10. **Monitor database permissions**: Regularly audit user permissions to databases using `SHOW GRANTS` to ensure users have appropriate access and identify permission-related issues early
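+
+As a minimal sketch of tip 3, a script can branch on a simple existence check before running dependent statements (the `analytics` name is only an example):
+
+```sql
+-- Returns 1 when the database exists, 0 otherwise;
+-- migration tooling can create the database or abort based on this value
+SELECT count() > 0 AS db_exists
+FROM system.databases
+WHERE name = 'analytics';
+```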
+
+## Related error codes {#related-error-codes}
+
+- [UNKNOWN_TABLE (60)](/troubleshooting/error-codes/060_UNKNOWN_TABLE) - Table doesn't exist in database
+- [UNKNOWN_IDENTIFIER (47)](/troubleshooting/error-codes/047_UNKNOWN_IDENTIFIER) - Column or identifier not found
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/1001_STD_EXCEPTION.md b/docs/troubleshooting/error_codes/1001_STD_EXCEPTION.md
new file mode 100644
index 00000000000..98b8a355333
--- /dev/null
+++ b/docs/troubleshooting/error_codes/1001_STD_EXCEPTION.md
@@ -0,0 +1,360 @@
+---
+slug: /troubleshooting/error-codes/1001_STD_EXCEPTION
+sidebar_label: '1001 STD_EXCEPTION'
+doc_type: 'reference'
+keywords: ['error codes', 'STD_EXCEPTION', '1001']
+title: '1001 STD_EXCEPTION'
+description: 'ClickHouse error code - 1001 STD_EXCEPTION'
+---
+
+# Error 1001: STD_EXCEPTION
+
+:::tip
+The error message format is always: `std::exception. Code: 1001, type: [ExceptionType], e.what() = [actual error message]`
+The `type` field tells you which external library or system component failed.
+Focus your troubleshooting there, not on ClickHouse itself.
+:::
+
+## What this error means {#what-this-error-means}
+
+`STD_EXCEPTION` indicates that ClickHouse caught a C++ standard exception from an underlying library or system component. This is **not a ClickHouse bug** in most cases—it's ClickHouse reporting an error from:
+
+- **External storage SDKs** (Azure Blob Storage, AWS S3, Google Cloud Storage)
+- **Third-party libraries** (PostgreSQL client libraries, HDFS integration)
+- **System-level failures** (network timeouts, file system errors)
+- **C++ standard library errors** (`std::out_of_range`, `std::future_error`, etc.)
+
+## Potential causes {#potential-causes}
+
+### 1. Azure Blob Storage exceptions (most common in ClickHouse Cloud) {#azure-blob-storage-exceptions}
+
+**`Azure::Storage::StorageException`**
+- **400 errors**: The requested URI does not represent any resource on the server
+- **403 errors**: Server failed to authenticate the request or insufficient permissions
+- **404 errors**: The specified container/blob does not exist
+
+**When you'll see it:**
+- During merge operations with object storage backend
+- When cleaning up temporary parts after failed inserts
+- During destructor calls (`~MergeTreeDataPartWide`, `~MergeTreeDataPartCompact`)
+
+**Real example from production:**
+
+```text
+std::exception. Code: 1001, type: Azure::Storage::StorageException,
+e.what() = 400 The requested URI does not represent any resource on the server.
+RequestId:8e4bfa97-201e-0093-7ed7-bb478b000000
+```
+
+### 2. AWS S3 exceptions {#aws-s3-exceptions}
+
+**Typical manifestations:**
+- Throttling errors
+- Missing object keys
+- Permission/credential failures
+- Network connectivity issues to S3
+
+### 3. PostgreSQL integration errors {#postgres-integration-errors}
+
+**`pqxx::sql_error`**
+
+**Real example:**
+
+```text
+std::exception. Code: 1001, type: pqxx::sql_error,
+e.what() = ERROR: cannot execute COPY during recovery
+```
+
+**Common scenarios:**
+- PostgreSQL database/materialized view as external dictionary source
+- PostgreSQL in recovery mode (read-only)
+- Connection failures to external PostgreSQL instances
+
+### 4. Iceberg table format errors {#iceberg-table-format-errors}
+
+**`std::out_of_range`** - Key not found in schema mapping
+
+**Real examples:**
+
+```text
+std::exception. Code: 1001, type: std::out_of_range,
+e.what() = unordered_map::at: key not found (version 25.6.2.6054)
+```
+
+**When you'll see it:**
+- Querying Iceberg tables after ClickHouse version upgrades
+- Schema evolution in Iceberg metadata (manifest files with older snapshots)
+- Missing schema mappings between snapshots and manifest entries
+
+**Affected versions:** 25.6.2.5983 - 25.6.2.6106, 25.8.1.3889 - 25.8.1.8277
+**Fixed in:** 25.6.2.6107+, 25.8.1.8278+
+
+### 5. HDFS integration errors {#hdfs-integration-errors}
+
+**`std::out_of_range`** - Invalid URI parsing
+
+**Real example:**
+
+```text
+std::exception. Code: 1001, type: std::out_of_range, e.what() = basic_string
+(in query: SELECT * FROM hdfsCluster('test_cluster_two_shards_localhost', '', 'TSV'))
+```
+
+**Cause:** Empty or malformed HDFS URI passed to `hdfsCluster()` function
+
+### 6. System-level C++ exceptions {#system-level-cpp-exceptions}
+
+**`std::future_error`** - Thread/async operation failures
+**`std::out_of_range`** - Container access violations
+
+## When you'll see it {#when-you-will-see-it}
+
+### Scenario 1: ClickHouse Cloud - Azure object storage cleanup {#cloud-azure-os-cleanup}
+
+**Context:** During background merge operations, temp parts cleanup, or destructor execution
+
+**Stack trace pattern:**
+
+```text
+~MergeTreeDataPartWide()
+→ IMergeTreeDataPart::removeIfNeeded()
+→ undoTransaction()
+→ AzureObjectStorage::exists()
+→ Azure::Storage::StorageException
+```
+
+**Why it happens:**
+ClickHouse tries to clean up temporary files in Azure Blob Storage, but the blob/container was already deleted or doesn't exist. This often occurs during:
+- Failed merge rollback operations
+- Concurrent deletion by multiple replicas
+- Race conditions with container lifecycle
+
+### Scenario 2: Iceberg table queries after version upgrade {#iceberg-table-queries}
+
+**Error message:**
+
+```text
+std::exception. Code: 1001, type: std::out_of_range,
+e.what() = unordered_map::at: key not found
+```
+
+**Triggering query:**
+
+```sql
+SELECT * FROM icebergS3(
+ 's3://bucket/path/',
+ extra_credentials(role_arn='arn:aws:iam::...')
+)
+LIMIT 100;
+```
+
+**Why it happens:**
+
+Version 25.6.2.5983 introduced a bug where ClickHouse couldn't find schema mappings for older Iceberg manifest entries with sequence numbers outside the current snapshot range.
+
+### Scenario 3: PostgreSQL dictionary/materialized view {#postgres-dictionary-mv}
+
+**Error message:**
+
+```text
+std::exception. Code: 1001, type: pqxx::sql_error,
+e.what() = ERROR: cannot execute COPY during recovery
+```
+
+**Triggering operation:** Dictionary refresh or materialized view read from PostgreSQL source
+
+**Why it happens:** External PostgreSQL instance is in recovery mode (read-only state)
+
+### Scenario 4: HDFS table function with invalid URI {#hdfs-table-function-with-invalid-URI}
+
+**Error message:**
+
+```text
+std::exception. Code: 1001, type: std::out_of_range, e.what() = basic_string
+```
+
+**Triggering query:**
+
+```sql
+SELECT * FROM hdfsCluster('cluster', '', 'TSV'); -- Empty URI
+```
+
+## Quick fixes {#quick-fixes}
+
+### Fix 1: Azure Storage exceptions (ClickHouse Cloud) {#azure-storage-exceptions}
+
+**For 400/404 errors during merges:**
+
+These are typically **benign** - ClickHouse is trying to clean up files that were already removed. The errors occur in destructors and are usually logged but don't affect functionality.
+
+**If causing crashes (versions before 24.7):**
+
+```sql
+-- Check for ongoing merges
+SELECT * FROM system.merges;
+
+-- Wait for merges to complete or stop problematic merges
+SYSTEM STOP MERGES table_name;
+```
+
+**Long-term fix:** Upgrade to ClickHouse 24.7+ where destructors have proper try/catch handling.
+
+### Fix 2: Iceberg table errors {#iceberg-table-errors}
+
+**Immediate fix:** Upgrade to patched version
+
+```sql
+-- Required versions:
+-- - 25.6.2.6107 or higher
+-- - 25.8.1.8278 or higher
+-- - 25.9.1.2261 or higher
+
+-- Check current version
+SELECT version();
+
+-- Request an upgrade through ClickHouse Cloud support if needed
+```
+
+### Fix 3: PostgreSQL integration errors {#postgres-integration-errors-fix}
+
+**For "cannot execute COPY during recovery":**
+
+```sql
+-- Option 1: Wait for PostgreSQL to exit recovery mode
+
+-- Option 2: Switch to read-only queries
+-- Use SELECT instead of materializing from PostgreSQL during recovery
+
+-- Option 3: Point to PostgreSQL primary/writable replica
+-- Update dictionary/materialized view source configuration
+```
+
+**Check PostgreSQL recovery status:**
+
+```sql
+-- On PostgreSQL side
+SELECT pg_is_in_recovery();
+```
+
+### Fix 4: HDFS URI errors {#hdfs-uri-errors}
+
+**Fix empty/invalid URIs:**
+
+```sql
+-- Instead of:
+SELECT * FROM hdfsCluster('cluster', '', 'TSV');
+
+-- Use valid HDFS path:
+SELECT * FROM hdfsCluster('cluster', 'hdfs://namenode:8020/path/to/data/*.csv', 'CSV');
+```
+
+**Validate URI before passing to function:**
+
+```sql
+-- Ensure URI is not empty
+SELECT * FROM hdfsCluster('cluster',
+ if(length(uri_variable) > 0, uri_variable, 'hdfs://default/path'),
+ 'TSV'
+);
+```
+
+## Understanding the root cause {#understanding-the-root-cause}
+
+`STD_EXCEPTION` is a **symptom**, not a disease. Always look at:
+
+1. **The `type:` field** - What external library threw the exception?
+2. **The `e.what()` message** - What was the actual error?
+3. **The stack trace** - Where in the code path did it originate?
+
+Common patterns:
+
+| `type:` | Origin | Typical cause |
+|------------------------------------|------------------------|----------------------------------------------|
+| `Azure::Storage::StorageException` | Azure Blob Storage SDK | Missing blobs, auth failures, network issues |
+| `pqxx::sql_error` | PostgreSQL C++ library | External PostgreSQL errors |
+| `std::out_of_range` (Iceberg) | C++ standard library | Missing schema/snapshot mappings |
+| `std::out_of_range` (HDFS) | C++ standard library | Invalid URI parsing |
+| `std::future_error` | C++ async operations | Thread pool/async failures |
+
+## Troubleshooting steps {#troubleshooting-steps}
+
+### Step 1: Identify the exception type {#identify-exception-type}
+
+```sql
+-- Find recent STD_EXCEPTION errors
+SELECT
+ event_time,
+ query_id,
+ exception,
+ extract(exception, 'type: ([^,]+)') AS exception_type,
+ extract(exception, 'e\\.what\\(\\) = ([^(]+)') AS error_message
+FROM system.query_log
+WHERE exception_code = 1001
+ AND event_date >= today() - 1
+ORDER BY event_time DESC
+LIMIT 10;
+```
+
+### Step 2: Check for version-specific issues {#check-for-version-specific-issues}
+
+```sql
+SELECT version();
+
+-- If using Iceberg and version is 25.6.2.5983 - 25.6.2.6106
+-- OR 25.8.1.3889 - 25.8.1.8277
+-- You need to upgrade to 25.6.2.6107+ or 25.8.1.8278+
+```
+
+### Step 3: Check object storage health (Cloud) {#check-object-storage-health}
+
+```sql
+-- Check for Azure/S3 errors in logs
+SELECT
+ event_time,
+ message
+FROM system.text_log
+WHERE message LIKE '%Azure::Storage%'
+ OR message LIKE '%S3%Exception%'
+ORDER BY event_time DESC
+LIMIT 20;
+```
+
+### Step 4: Check external integrations {#check-external-integrations}
+
+```sql
+-- For PostgreSQL dictionaries
+SELECT
+ name,
+ status,
+ last_exception
+FROM system.dictionaries
+WHERE source LIKE '%postgresql%';
+
+-- For HDFS paths
+SHOW CREATE TABLE your_hdfs_table;
+-- Verify URI is valid and not empty
+```
+
+## Related errors {#related-errors}
+
+- **Error 210: `NETWORK_ERROR`** - Network-level failures (might escalate to STD_EXCEPTION)
+- **Error 999: `KEEPER_EXCEPTION`** - Keeper/ZooKeeper failures (separate from STD_EXCEPTION)
+- **Error 226: `NO_FILE_IN_DATA_PART`** - Missing data files (not the same as STD_EXCEPTION)
+
+## Production notes {#prod-notes}
+
+### Azure exceptions are often benign {#azure-exceptions-benign}
+
+In ClickHouse Cloud with Azure backend, you may see many `Azure::Storage::StorageException` errors in logs during normal operation. These occur when:
+- Multiple replicas try to clean up the same temporary part
+- Background merges fail and rollback
+- Destructors attempt to delete already-deleted blobs
+
+**These don't affect data integrity** - ClickHouse handles them gracefully in versions 24.7+.
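+
+To confirm these are coming from background activity rather than from user queries, a rough check against `system.text_log` (assuming text logging is enabled) is to split occurrences by whether a query ID is attached:
+
+```sql
+-- An empty query_id means the exception was raised by a background thread
+-- (merges, part cleanup), which is the benign pattern described above
+SELECT
+    query_id = '' AS is_background,
+    count() AS occurrences
+FROM system.text_log
+WHERE message LIKE '%Azure::Storage::StorageException%'
+  AND event_time > now() - INTERVAL 1 DAY
+GROUP BY is_background;
+```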
+
+### Iceberg schema mapping issues {#iceberg-schema-mapping-issues}
+
+If you use Iceberg tables:
+- **Always keep ClickHouse updated** to the latest patch version
+- Iceberg schema evolution can trigger errors in older ClickHouse versions
+- The fix in 25.6.2.6107+ makes error handling more robust but may log warnings
diff --git a/docs/troubleshooting/error_codes/107_FILE_DOESNT_EXIST.md b/docs/troubleshooting/error_codes/107_FILE_DOESNT_EXIST.md
new file mode 100644
index 00000000000..ce0c1beb352
--- /dev/null
+++ b/docs/troubleshooting/error_codes/107_FILE_DOESNT_EXIST.md
@@ -0,0 +1,298 @@
+---
+slug: /troubleshooting/error-codes/107_FILE_DOESNT_EXIST
+sidebar_label: '107 FILE_DOESNT_EXIST'
+doc_type: 'reference'
+keywords: ['error codes', 'FILE_DOESNT_EXIST', '107']
+title: '107 FILE_DOESNT_EXIST'
+description: 'ClickHouse error code - 107 FILE_DOESNT_EXIST'
+---
+
+# Error 107: FILE_DOESNT_EXIST
+
+:::tip
+This error occurs when ClickHouse attempts to access a file that does not exist in the filesystem or object storage.
+It typically indicates missing data part files, corrupted table parts, or issues with remote storage access.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Missing data part files**
+ - Data part file deleted or moved during query execution
+ - Part files missing: `data.bin`, `columns.txt`, `checksums.txt`, `.mrk2` files
+ - Part removal race condition (file deleted after being listed but before being read)
+
+2. **Corrupted or incomplete table parts**
+ - Broken data parts missing essential files
+ - Incomplete part downloads in replicated setups
+ - Checksums file referencing non-existent files
+
+3. **Merge or mutation issues**
+ - Parts removed during ongoing merges while queries are reading them
+ - Mutations creating parts with missing files
+ - Column alterations leaving excess file references in checksums
+
+4. **Object storage (S3/Azure) issues**
+ - S3 key not found errors
+ - Azure blob does not exist
+ - Network issues preventing file access
+ - Object storage eventual consistency problems
+
+5. **Filesystem cache problems**
+ - Cached metadata pointing to deleted files
+ - Cache invalidation race conditions
+ - Temporary files cleaned up prematurely
+
+6. **Replication synchronization issues**
+ - Part not yet downloaded to replica
+ - Part removed on one replica while being fetched on another
+ - Metadata inconsistency between replicas
+
+## Common solutions {#common-solutions}
+
+**1. Check table integrity**
+
+```sql
+-- Check for broken parts
+CHECK TABLE your_table;
+
+-- View part status
+SELECT
+ database,
+ table,
+ name,
+ active,
+ modification_time,
+ disk_name
+FROM system.parts
+WHERE table = 'your_table'
+ORDER BY modification_time DESC;
+```
+
+**2. Look for stuck merges or mutations**
+
+```sql
+-- Check ongoing merges
+SELECT *
+FROM system.merges
+WHERE table = 'your_table';
+
+-- Check mutations
+SELECT *
+FROM system.mutations
+WHERE database = 'your_database'
+ AND table = 'your_table'
+ AND NOT is_done;
+```
+
+**3. Optimize or rebuild the affected table**
+
+```sql
+-- Force merge to consolidate parts
+OPTIMIZE TABLE your_table FINAL;
+
+-- If table is severely corrupted, may need to rebuild
+```
+
+**4. Check replication queue (for replicated tables)**
+
+```sql
+-- Check replication status
+SELECT *
+FROM system.replication_queue
+WHERE table = 'your_table';
+
+-- Check replica status
+SELECT *
+FROM system.replicas
+WHERE table = 'your_table';
+```
+
+**5. Detach and reattach broken parts**
+
+```sql
+-- List parts
+SELECT name FROM system.parts WHERE table = 'your_table';
+
+-- Detach broken part
+ALTER TABLE your_table DETACH PART 'part_name';
+
+-- Part will be re-fetched from another replica (for replicated tables)
+```
+
+**6. For S3/object storage issues** (see the SQL sketch after this list)
+
+- Check S3 bucket permissions and access
+- Verify network connectivity
+- Check for S3 lifecycle policies deleting objects
+- Review object storage logs
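+
+A quick first check from SQL (a sketch; database, table, and disk names are placeholders) is to confirm which object-storage disk actually backs the table before digging into bucket-side logs:
+
+```sql
+-- Which storage policy does the table use?
+SELECT storage_policy
+FROM system.tables
+WHERE database = 'your_database' AND name = 'your_table';
+
+-- Which disks exist, and which are object storage (for example type 's3' or 'azure_blob_storage')?
+SELECT name, type, path
+FROM system.disks;
+```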
+
+## Common scenarios {#common-scenarios}
+
+**Scenario 1: File missing during query**
+
+```text
+Error: File data/uuid/all_XXX_XXX_X/date.bin doesn't exist
+```
+
+**Cause:** Part was removed (merged or deleted) while the query was accessing it.
+
+**Solution:**
+- Retry the query (part removal race condition)
+- Check if excessive merges are happening
+- Verify table isn't being dropped/recreated
+
+**Scenario 2: Missing marks file**
+
+```text
+Error: Marks file '.../column.mrk2' doesn't exist
+```
+
+**Cause:** Part is broken or incompletely downloaded.
+
+**Solution:**
+
+```sql
+-- Check and repair
+CHECK TABLE your_table;
+
+-- For replicated tables, detach broken part
+ALTER TABLE your_table DETACH PART 'broken_part_name';
+```
+
+**Scenario 3: S3 object not found**
+
+```text
+Error: The specified key does not exist (S3_ERROR)
+```
+
+**Cause:** S3 object deleted, never uploaded, or access denied.
+
+**Solution:**
+- Check S3 bucket for the object (the sketch below shows how to find its object key)
+- Verify S3 credentials and permissions
+- Check S3 lifecycle policies
+- For replicated tables, fetch from another replica
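+
+To find the exact object key behind a suspect part, a sketch using `system.remote_data_paths` (present on object-storage-backed setups; the part name is a placeholder) maps local part files to their remote keys, which you can then look up in the bucket:
+
+```sql
+SELECT
+    disk_name,
+    local_path,
+    remote_path
+FROM system.remote_data_paths
+WHERE local_path LIKE '%part_name%'
+LIMIT 10;
+```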
+
+**Scenario 4: Checksums.txt references excess files**
+
+```text
+Error: File 'column.sparse.idx.cmrk2' doesn't exist
+```
+
+**Cause:** Column alteration left stale file references in checksums.txt.
+
+**Solution:**
+- This is often a bug in ClickHouse during mutations
+- Detach and reattach the part
+- Or manually remove problematic parts
+
+**Scenario 5: Azure blob missing**
+
+```text
+Error: The specified blob does not exist
+```
+
+**Cause:** Azure storage object missing or access issues.
+
+**Solution:**
+- Verify Azure storage account access
+- Check blob exists in container
+- Review Azure storage logs
+
+## Prevention tips {#prevention-tips}
+
+1. **Use replicated tables:** Provides redundancy when parts go missing
+2. **Monitor merges:** Watch for excessive or slow merge operations
+3. **Regular integrity checks:** Run `CHECK TABLE` periodically
+4. **Stable object storage:** Ensure S3/Azure configurations are stable
+5. **Avoid manual file deletions:** Never manually delete part files
+6. **Monitor disk space:** Full disks can cause incomplete writes
+7. **Keep ClickHouse updated:** Bugs causing missing files are often fixed in newer versions
+
+## Debugging steps {#debugging-steps}
+
+1. **Identify the missing file:**
+
+ ```text
+ Error message shows: File data/uuid/part_name/file.bin doesn't exist
+ ```
+
+2. **Check if part exists:**
+
+ ```sql
+ SELECT *
+ FROM system.parts
+ WHERE name = 'part_name';
+ ```
+
+3. **Check part log for part history:**
+
+ ```sql
+ SELECT
+ event_time,
+ event_type,
+ part_name,
+ error
+ FROM system.part_log
+ WHERE part_name = 'part_name'
+ ORDER BY event_time DESC;
+ ```
+
+4. **For replicated tables, check all replicas:**
+
+ ```sql
+ SELECT
+ hostName(),
+ database,
+ table,
+ active_replicas,
+ total_replicas
+ FROM clusterAllReplicas('your_cluster', system.replicas)
+ WHERE table = 'your_table';
+ ```
+
+5. **Check for recent merges:**
+
+ ```sql
+ SELECT *
+ FROM system.part_log
+ WHERE table = 'your_table'
+ AND event_type IN ('MergeParts', 'RemovePart')
+ AND event_time > now() - INTERVAL 1 HOUR
+ ORDER BY event_time DESC;
+ ```
+
+6. **For object storage, check logs:**
+ - S3: Check CloudTrail logs
+ - Azure: Check Storage Analytics logs
+ - Look for DELETE operations on the missing object
+
+## Special considerations {#special-considerations}
+
+**For SharedMergeTree / ClickHouse Cloud:**
+- Parts are stored in shared object storage
+- Missing files often indicate object storage issues
+- Check both local cache and remote storage
+
+**For replicated tables:**
+- One replica's missing part can be fetched from others
+- Detaching broken parts often triggers automatic recovery
+- Check replication lag before detaching parts
+
+**For mutations:**
+- Mutations create new parts; missing files may indicate mutation failure
+- Check `system.mutations` for failed mutations
+- Old parts are kept until mutation completes
+
+**During part removal:**
+- Parts are removed after being merged into larger parts
+- Race condition can occur if query starts before merge but reads after
+- Usually resolved by query retry
+
+If you're experiencing this error:
+1. Retry the query (it could be a transient race condition)
+2. Run `CHECK TABLE` to identify broken parts
+3. Check `system.part_log` for recent part operations
+4. For replicated tables, detach broken parts to trigger refetch
+5. For object storage errors, verify storage access and permissions
+6. If persistent, may indicate data corruption requiring restore from backup
diff --git a/docs/troubleshooting/error_codes/121_UNSUPPORTED_JOIN_KEYS.md b/docs/troubleshooting/error_codes/121_UNSUPPORTED_JOIN_KEYS.md
new file mode 100644
index 00000000000..64bed95db45
--- /dev/null
+++ b/docs/troubleshooting/error_codes/121_UNSUPPORTED_JOIN_KEYS.md
@@ -0,0 +1,421 @@
+---
+slug: /troubleshooting/error-codes/121_UNSUPPORTED_JOIN_KEYS
+sidebar_label: '121 UNSUPPORTED_JOIN_KEYS'
+doc_type: 'reference'
+keywords: ['error codes', 'UNSUPPORTED_JOIN_KEYS', '121', 'Join engine', 'StorageJoin', 'composite keys']
+title: '121 UNSUPPORTED_JOIN_KEYS'
+description: 'ClickHouse error code - 121 UNSUPPORTED_JOIN_KEYS'
+---
+
+# Error 121: UNSUPPORTED_JOIN_KEYS
+
+:::tip
+This error occurs when using the Join table engine (StorageJoin) with unsupported join key types or configurations.
+The Join engine has specific limitations on which key types it supports, especially with complex or composite keys.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Composite keys (multiple join columns)**
+ - Using multiple columns as join keys in StorageJoin
+ - Complex key types like `keys128`, `keys256` not supported in certain contexts
+   - Regressed in ClickHouse 23.9-23.12; worked in 23.8 and earlier
+ - Fixed with new analyzer enabled by default in 24.3+
+
+2. **Specific data type combinations**
+ - UUID columns in Join tables (keys128 type)
+ - String + Date32 composite keys
+ - Large composite keys (3+ columns creating keys256 type)
+ - Mixed type keys that create unsupported hash types
+
+3. **Version-specific issues**
+ - Regression introduced in ClickHouse 23.9
+ - Affects versions 23.10-23.12 with old query interpreter
+ - Works with new analyzer (`allow_experimental_analyzer = 1`)
+ - Fully fixed in ClickHouse 24.3+ where analyzer is default
+
+4. **SELECT from StorageJoin with complex keys**
+ - `SELECT * FROM join_table` fails with composite keys
+ - StorageJoin doesn't store keys themselves, only hashed values
+ - Can't retrieve original key values when they're hashed together
+ - `joinGet()` function works even when SELECT doesn't
+
+5. **Parallel replicas with StorageJoin**
+ - Using parallel replicas with Join engine tables
+ - Combination of certain key types and parallel execution
+ - Affects ClickHouse 25.1+
+
+## What to do when you encounter this error {#what-to-do}
+
+**1. Check the error message for key type**
+
+The error indicates which key type is unsupported:
+
+```text
+Unsupported JOIN keys in StorageJoin. Type: 8
+Unsupported JOIN keys of type keys128 in StorageJoin
+Unsupported JOIN keys of type keys256 in StorageJoin
+Unsupported JOIN keys of type hashed in StorageJoin
+```
+
+**2. Check your ClickHouse version**
+
+```sql
+SELECT version();
+
+-- If on 23.9-23.12, consider upgrading to 24.3+
+-- Or enable new analyzer if available
+```
+
+**3. Check your Join table definition**
+
+```sql
+SHOW CREATE TABLE your_join_table;
+
+-- Look at ENGINE = Join(...) clause
+-- Count how many key columns are specified
+```
+
+**4. Test with the new analyzer (if on 23.9-24.2)**
+
+```sql
+SET allow_experimental_analyzer = 1;
+
+-- Then retry your query
+```
+
+## Quick fixes {#quick-fixes}
+
+**1. Upgrade to ClickHouse 24.3 or later**
+
+The new analyzer is enabled by default in 24.3+ and resolves most StorageJoin issues with composite keys.
+
+**2. Enable new analyzer (versions 23.9-24.2)**
+
+```sql
+-- Enable for session
+SET allow_experimental_analyzer = 1;
+
+-- Then run your queries
+SELECT *
+FROM main_table
+LEFT JOIN join_table USING (key1, key2);
+```
+
+**3. Reduce to single join key**
+
+```sql
+-- Instead of multiple columns:
+CREATE TABLE join_table (
+ key1 Int32,
+ key2 Int32,
+ value String
+) ENGINE = Join(ALL, LEFT, key1, key2); -- Multiple keys may fail
+
+-- Use single composite key:
+CREATE TABLE join_table (
+ composite_key String, -- Combine keys: toString(key1) || '_' || toString(key2)
+ value String
+) ENGINE = Join(ALL, LEFT, composite_key); -- Single key works
+
+-- Then join with:
+SELECT *
+FROM main_table
+LEFT JOIN join_table ON concat(toString(main_table.key1), '_', toString(main_table.key2)) = join_table.composite_key;
+```
+
+**4. Use Dictionary instead of Join engine**
+
+```sql
+-- Instead of Join table:
+-- CREATE TABLE lookup_join (...) ENGINE = Join(...);
+
+-- Use Dictionary:
+CREATE DICTIONARY lookup_dict
+(
+ key1 Int32,
+ key2 Date32,
+ value String
+)
+PRIMARY KEY key1, key2
+SOURCE(CLICKHOUSE(
+ HOST 'localhost'
+ PORT 9000
+ TABLE 'source_table'
+ DB 'default'
+))
+LIFETIME(MIN 300 MAX 360)
+LAYOUT(COMPLEX_KEY_HASHED());
+
+-- Then use dictGet instead of JOIN:
+SELECT
+ *,
+ dictGet('lookup_dict', 'value', (key1, key2)) AS value
+FROM main_table;
+```
+
+**5. Use regular MergeTree table**
+
+```sql
+-- Instead of Join engine, use regular table:
+CREATE TABLE join_data (
+ key1 Int32,
+ key2 Int32,
+ value String
+) ENGINE = MergeTree
+ORDER BY (key1, key2);
+
+-- Then use normal JOIN (not StorageJoin):
+SELECT *
+FROM main_table
+LEFT JOIN join_data ON
+ main_table.key1 = join_data.key1 AND
+ main_table.key2 = join_data.key2;
+```
+
+## Common specific scenarios {#common-scenarios}
+
+**Scenario 1: Multiple join columns broke in 23.10-23.12**
+
+```text
+Code: 121. DB::Exception: Unsupported JOIN keys in StorageJoin. Type: 8
+```
+
+**Cause:** Regression in ClickHouse 23.9-23.12 where multiple join columns stopped working with StorageJoin when using the old query interpreter.
+
+**Solution:**
+```sql
+-- Option 1: Upgrade to 24.3+ (recommended)
+-- New analyzer is default and fixes this
+
+-- Option 2: Enable new analyzer (23.9-24.2)
+SET allow_experimental_analyzer = 1;
+
+SELECT
+ segmented_ctr_cache.product_id,
+ segmented_ctr_cache.segment_id,
+ count_in_cart
+FROM segmented_ctr_cache
+LEFT JOIN cart_join ON
+ cart_join.product_id = segmented_ctr_cache.product_id
+ AND cart_join.segment_id = segmented_ctr_cache.segment_id;
+
+-- Option 3: Downgrade join to single key temporarily
+-- Use only one join column until you can upgrade
+```
+
+**Scenario 2: UUID keys (keys128) with StorageJoin**
+
+```text
+Code: 121. DB::Exception: Unsupported JOIN keys of type keys128 in StorageJoin
+```
+
+**Cause:** UUID data type creates a keys128 hash type which isn't supported in certain StorageJoin contexts, particularly with parallel replicas or specific ClickHouse versions (25.1+).
+
+**Solution:**
+```sql
+-- Convert UUID to String for join key:
+CREATE TABLE joint
+(
+ id String, -- Instead of UUID
+ value LowCardinality(String)
+) ENGINE = Join(ANY, LEFT, id);
+
+-- Insert with conversion:
+INSERT INTO joint
+SELECT toString(id) AS id, value
+FROM source_table;
+
+-- Join with conversion:
+SELECT *
+FROM main_table
+LEFT JOIN joint ON toString(main_table.id) = joint.id;
+```
+
+**Scenario 3: Three or more join keys (keys256)**
+
+```text
+Code: 121. DB::Exception: Unsupported JOIN keys of type keys256 in StorageJoin
+```
+
+**Cause:** Three or more join columns create a keys256 hash type which isn't supported by StorageJoin in some configurations.
+
+**Solution:**
+```sql
+-- Instead of:
+CREATE TABLE tj (
+ key1 UInt64,
+ key2 UInt64,
+ key3 UInt64,
+ attr UInt64
+) ENGINE = Join(ALL, INNER, key3, key2, key1);
+
+-- Option 1: Combine into single key
+CREATE TABLE tj (
+ combined_key String, -- Format: "key1:key2:key3"
+ attr UInt64
+) ENGINE = Join(ALL, INNER, combined_key);
+
+INSERT INTO tj
+SELECT
+ concat(toString(key1), ':', toString(key2), ':', toString(key3)) AS combined_key,
+ attr
+FROM source;
+
+-- Option 2: Use Dictionary for complex keys
+CREATE DICTIONARY tj_dict
+(
+ key1 UInt64,
+ key2 UInt64,
+ key3 UInt64,
+ attr UInt64
+)
+PRIMARY KEY key1, key2, key3
+SOURCE(CLICKHOUSE(...))
+LAYOUT(COMPLEX_KEY_HASHED());
+```
+
+**Scenario 4: String + Date32 composite keys (version 25.6)**
+
+```text
+Code: 121. DB::Exception: Unsupported JOIN keys of type hashed in StorageJoin
+```
+
+**Cause:** Mixed types like String + Date32 as composite keys can create unsupported hash types, especially in mutations (ALTER TABLE UPDATE) or INSERT operations. Worked in earlier versions but broke in 25.6.
+
+**Solution:**
+```sql
+-- Option 1: Convert all keys to same type (String)
+CREATE TABLE join_table (
+ loan_identifier String,
+ mrp String, -- Convert Date32 to String: toString(date_column)
+ value Int32
+) ENGINE = Join(ANY, LEFT, loan_identifier, mrp);
+
+INSERT INTO join_table
+SELECT
+ loan_identifier,
+ toString(monthly_reporting_period) AS mrp,
+ value
+FROM source;
+
+-- Option 2: Use Dictionary (recommended for complex scenarios)
+CREATE DICTIONARY join_dict
+(
+ loan_identifier String,
+ mrp Date32,
+ value Int32
+)
+PRIMARY KEY loan_identifier, mrp
+SOURCE(CLICKHOUSE(TABLE 'source_table'))
+LAYOUT(COMPLEX_KEY_HASHED());
+
+-- Use dictGet instead of JOIN:
+ALTER TABLE target_table
+UPDATE column = dictGet('join_dict', 'value', (loan_identifier, monthly_reporting_period))
+WHERE true;
+```
+
+**Scenario 5: SELECT from StorageJoin with composite keys**
+
+```text
+Code: 121. DB::Exception: Unsupported JOIN keys in StorageJoin. Type: 11
+```
+
+**Cause:** `SELECT * FROM join_table` doesn't work with composite keys because StorageJoin doesn't store the original keys - only the hashed values. However, `joinGet()` still works.
+
+**Solution:**
+```sql
+-- SELECT directly fails with composite keys:
+-- SELECT * FROM join_table; -- ERROR
+
+-- But joinGet works:
+SELECT joinGet('join_table', 'value', toUInt64(1), '32'); -- OK
+
+-- Workaround: Use source table for SELECT:
+-- Keep a copy in regular MergeTree:
+CREATE TABLE join_data (
+ key1 UInt64,
+ key2 String,
+ value String
+) ENGINE = MergeTree
+ORDER BY (key1, key2);
+
+CREATE MATERIALIZED VIEW join_table
+ENGINE = Join(ANY, LEFT, key1, key2)
+AS SELECT * FROM join_data;
+
+-- Now you can SELECT from join_data:
+SELECT * FROM join_data WHERE key1 = 1;
+```
+
+## Prevention best practices {#prevention}
+
+1. **Use ClickHouse 24.3 or later**
+ - New analyzer is enabled by default
+ - Most StorageJoin composite key issues are resolved
+ - Better query rewriting and optimization
+
+2. **Prefer Dictionaries for lookup tables**
+ ```sql
+ -- Instead of Join engine:
+ ENGINE = Join(ANY, LEFT, key1, key2)
+
+ -- Use Dictionary:
+ LAYOUT(COMPLEX_KEY_HASHED())
+ ```
+ Dictionaries support complex keys better and have more features
+
+3. **Limit join keys to single column when possible**
+ - Create composite key string instead of multiple columns
+ - Simpler, more compatible, works across all versions
+ - Example: `concat(toString(key1), ':', toString(key2))`
+
+4. **Use consistent key types**
+ - Don't mix String and Date/DateTime
+ - Convert all keys to same type (usually String)
+ - Avoid UUID directly - convert to String
+
+5. **Test after upgrades**
+ ```sql
+ -- After upgrading ClickHouse, test Join tables:
+ SELECT * FROM your_join_table LIMIT 10;
+
+ -- Test actual joins:
+ SELECT * FROM main LEFT JOIN your_join_table USING (keys) LIMIT 10;
+ ```
+
+6. **Monitor for regressions**
+ - ClickHouse 23.9-23.12 had regressions
+ - Check release notes for Join engine changes
+ - Test in staging before production upgrades
+
+## When to use Join engine vs alternatives {#when-to-use}
+
+**Use Join engine when** (see the minimal sketch after these lists):
+- Single join key (simple types: Int, String)
+- Small dimension table (fits in RAM)
+- Very frequent joins on same table
+- Using ClickHouse 24.3+
+
+**Use Dictionary when:**
+- Complex composite keys (2+ columns)
+- Need key-value lookup functionality
+- Want automatic cache updates
+- More control over memory and refresh
+
+**Use regular MergeTree when:**
+- Large tables that don't fit in RAM
+- Infrequent joins
+- Need flexibility in query patterns
+- Complex join conditions
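+
+As a minimal sketch of the first case (table, column, and value names are hypothetical), a single-key Join table supports `joinGet()` lookups as well as joins whose kind and strictness match the engine definition:
+
+```sql
+CREATE TABLE currency_rates
+(
+    currency String,
+    rate Float64
+) ENGINE = Join(ANY, LEFT, currency);
+
+INSERT INTO currency_rates VALUES ('EUR', 1.08), ('GBP', 1.27);
+
+-- Point lookup without writing a JOIN:
+SELECT joinGet('currency_rates', 'rate', 'EUR');
+
+-- A join must match the engine's kind and strictness (here ANY LEFT):
+SELECT t.amount * r.rate AS amount_usd
+FROM transactions AS t
+ANY LEFT JOIN currency_rates AS r USING (currency);
+```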
+
+## Related settings {#related-settings}
+
+```sql
+-- Enable new analyzer (23.9-24.2)
+SET allow_experimental_analyzer = 1;
+
+-- Check current analyzer status (24.3+)
+SELECT value FROM system.settings WHERE name = 'allow_experimental_analyzer';
+```
diff --git a/docs/troubleshooting/error_codes/125_INCORRECT_RESULT_OF_SCALAR_SUBQUERY.md b/docs/troubleshooting/error_codes/125_INCORRECT_RESULT_OF_SCALAR_SUBQUERY.md
new file mode 100644
index 00000000000..e6592d35b09
--- /dev/null
+++ b/docs/troubleshooting/error_codes/125_INCORRECT_RESULT_OF_SCALAR_SUBQUERY.md
@@ -0,0 +1,458 @@
+---
+slug: /troubleshooting/error-codes/125_INCORRECT_RESULT_OF_SCALAR_SUBQUERY
+sidebar_label: '125 INCORRECT_RESULT_OF_SCALAR_SUBQUERY'
+doc_type: 'reference'
+keywords: ['error codes', 'INCORRECT_RESULT_OF_SCALAR_SUBQUERY', '125', 'scalar', 'subquery', 'CTE', 'WITH']
+title: '125 INCORRECT_RESULT_OF_SCALAR_SUBQUERY'
+description: 'ClickHouse error code - 125 INCORRECT_RESULT_OF_SCALAR_SUBQUERY'
+---
+
+# Error 125: INCORRECT_RESULT_OF_SCALAR_SUBQUERY
+
+:::tip
+This error occurs when a scalar subquery returns more than one row.
+A scalar subquery is expected to return exactly zero or one row with a single value.
+This can also indicate misuse of WITH clauses or CTE syntax, particularly when using `WITH (SELECT ...) AS alias` syntax incorrectly.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Subquery returns multiple rows**
+ - Scalar subquery without proper LIMIT or aggregation
+ - Missing `GROUP BY` or `DISTINCT`
+ - Distributed table queries executed on multiple shards
+ - Correlated subquery returning multiple matches
+ - Subquery not properly filtered
+
+2. **Incorrect WITH clause syntax (scalar vs CTE)**
+ - Using `WITH (SELECT ...) AS alias` when `WITH alias AS (SELECT ...)` intended
+ - ClickHouse has two different WITH syntaxes with different meanings
+ - `WITH (subquery) AS alias` creates a scalar value
+ - `WITH alias AS (subquery)` creates a CTE (table expression)
+ - Confusion between the two syntaxes causes error
+
+3. **Alias conflicts with column names**
+ - Scalar subquery alias matches source table column name
+ - `prefer_alias_to_column_name` setting causes wrong column resolution
+ - ClickHouse uses column from table instead of scalar value
+ - Only affects certain positions (typically first matching column)
+ - Fixed in new analyzer
+
+4. **Invalid CTE references in outer scope (24.5-24.10 bug)**
+ - Referencing CTE table with wildcard (`t.*`) from another CTE
+ - Trying to access CTE columns outside their scope
+ - `Table expression ... data must be initialized` error (LOGICAL_ERROR code 49)
+ - Common with nested CTEs and complex queries
+ - Fixed in ClickHouse 25.4 (PR #66143)
+
+5. **Distributed tables with scalar subqueries**
+ - Each shard returns rows, combined result has multiple rows
+ - `distributed_product_mode = 'local'` can trigger this
+ - Subquery executed per shard instead of globally
+ - Need GLOBAL or different query structure
+
+## What to do when you encounter this error {#what-to-do}
+
+**1. Check if your subquery actually returns multiple rows**
+
+```sql
+-- Test the subquery alone
+SELECT * FROM (
+ SELECT column FROM table WHERE condition
+);
+
+-- Count how many rows it returns
+SELECT count(*) FROM (
+ SELECT column FROM table WHERE condition
+);
+```
+
+**2. Determine which WITH syntax you need**
+
+```sql
+-- Scalar subquery syntax (single value):
+WITH (SELECT max(price) FROM products) AS max_price
+SELECT * FROM orders WHERE price > max_price;
+
+-- CTE syntax (table expression):
+WITH top_products AS (SELECT * FROM products ORDER BY sales DESC LIMIT 10)
+SELECT * FROM top_products;
+```
+
+**3. Check your ClickHouse version**
+
+```sql
+SELECT version();
+
+-- If on 24.5-24.10 with CTE wildcard issues, upgrade to 25.4+
+-- If on pre-24.3 with scalar alias issues, enable new analyzer
+```
+
+**4. Review query logs**
+
+```sql
+SELECT
+ event_time,
+ query,
+ exception
+FROM system.query_log
+WHERE exception_code = 125
+ AND event_time > now() - INTERVAL 1 HOUR
+ORDER BY event_time DESC
+LIMIT 10;
+```
+
+## Quick fixes {#quick-fixes}
+
+**1. Add LIMIT to scalar subquery**
+
+```sql
+-- Instead of this (may return multiple rows):
+WITH (SELECT user_id FROM users WHERE active = 1) AS uid
+SELECT * FROM orders WHERE user_id = uid;
+
+-- Use this (guaranteed single row):
+WITH (SELECT user_id FROM users WHERE active = 1 LIMIT 1) AS uid
+SELECT * FROM orders WHERE user_id = uid;
+
+-- Or use aggregation:
+WITH (SELECT max(user_id) FROM users WHERE active = 1) AS uid
+SELECT * FROM orders WHERE user_id = uid;
+```
+
+**2. Use correct WITH syntax for your use case**
+
+```sql
+-- For scalar value (parentheses around subquery):
+WITH (SELECT 1) AS value
+SELECT value;
+
+-- For CTE table (no parentheses, use FROM):
+WITH cte AS (SELECT 1 AS n)
+SELECT * FROM cte;
+
+-- NOT: WITH cte AS (SELECT 1) SELECT cte; -- This fails!
+```
+
+**3. Avoid alias conflicts with column names**
+
+```sql
+-- Instead of this (alias matches column name):
+SELECT
+ (SELECT max(i) FROM t1) AS i, -- Alias 'i' conflicts with table column
+ (SELECT max(j) FROM t1) AS j
+FROM t1;
+
+-- Use different alias names:
+SELECT
+ (SELECT max(i) FROM t1) AS max_i,
+ (SELECT max(j) FROM t1) AS max_j
+FROM t1;
+
+-- Or disable prefer_alias_to_column_name:
+SET prefer_alias_to_column_name = 0;
+```
+
+**4. Fix CTE wildcard references (24.5-24.10)**
+
+```sql
+-- Instead of this (fails in 24.5-24.10):
+WITH
+ t1 AS (SELECT * FROM table1),
+ t2 AS (SELECT t1.*, other_col FROM table2) -- t1.* fails
+SELECT * FROM t2;
+
+-- Use this (works):
+WITH
+ t1 AS (SELECT * FROM table1),
+ t2 AS (
+ SELECT t1.col1, t1.col2, t1.col3, other_col -- List columns explicitly
+ FROM t1
+ CROSS JOIN table2
+ )
+SELECT * FROM t2;
+
+-- Or upgrade to ClickHouse 25.4+
+```
+
+**5. Use GLOBAL for distributed scalar subqueries**
+
+```sql
+-- Instead of this (may fail on distributed tables):
+WITH (SELECT x FROM distributed_table) AS filter_user
+SELECT * FROM another_table WHERE id IN filter_user
+SETTINGS distributed_product_mode = 'local';
+
+-- Use proper CTE syntax:
+WITH filter_user AS (SELECT x FROM distributed_table)
+SELECT * FROM another_table
+WHERE id IN (SELECT x FROM filter_user);
+
+-- Or use GLOBAL IN so the subquery runs once and is broadcast to all shards:
+SELECT * FROM another_table
+WHERE id GLOBAL IN (SELECT x FROM distributed_table);
+
+-- Or use a plain IN subquery directly:
+SELECT * FROM another_table
+WHERE id IN (SELECT x FROM distributed_table);
+```
+
+## Common specific scenarios {#common-scenarios}
+
+**Scenario 1: Distributed table returning multiple rows**
+
+```text
+Scalar subquery returned more than one row: While processing (SELECT t3.x FROM ap_dist.tab3 AS t3) AS filter
+```
+
+**Cause:** Using `WITH (subquery) AS alias` syntax (scalar) instead of `WITH alias AS (subquery)` syntax (CTE). On distributed tables, scalar subquery executes on each shard and combines results.
+
+**Solution:**
+
+```sql
+-- Instead of scalar syntax (fails):
+WITH (SELECT t3.x FROM ap_dist.tab3 AS t3) AS filter_user
+SELECT * FROM ap_dist.tab WHERE x IN filter_user;
+
+-- Use CTE syntax (works):
+WITH filter_user AS (SELECT t3.x FROM ap_dist.tab3 AS t3)
+SELECT * FROM ap_dist.tab
+WHERE x IN (SELECT x FROM filter_user);
+```
+
+**Scenario 2: Scalar subquery alias conflicts with column name (old analyzer)**
+
+```text
+Returns wrong values when scalar subquery alias matches table column name
+```
+
+**Cause:** When scalar subquery has alias that matches a column name in the FROM table, the old analyzer's `prefer_alias_to_column_name` setting causes it to use the table column instead of the scalar value.
+
+**Solution:**
+```sql
+-- Problem (old analyzer):
+SELECT
+ (SELECT max(i) FROM t1) AS i, -- Alias 'i' matches column, returns row values 0,1,2...
+ (SELECT max(i) FROM t1) AS j -- Different alias, works correctly (9)
+FROM t1;
+
+-- Solution 1: Use different alias name
+SELECT
+ (SELECT max(i) FROM t1) AS max_i, -- No conflict
+ (SELECT max(i) FROM t1) AS max_j
+FROM t1;
+
+-- Solution 2: Disable setting
+SET prefer_alias_to_column_name = 0;
+
+-- Solution 3: Upgrade to 24.3+ (new analyzer default)
+SET allow_experimental_analyzer = 1; -- Or upgrade to 24.3+
+```
+
+**Scenario 3: CTE wildcard reference error (24.5-24.10)**
+
+```text
+Code: 49. DB::Exception: Table expression t1 AS (...) data must be initialized
+```
+
+**Cause:** Bug in ClickHouse 24.5-24.10 where referencing a CTE with wildcards (`t1.*`) from another CTE or outer query fails with LOGICAL_ERROR (code 49) instead of properly resolving columns.
+
+**Solution:**
+```sql
+-- Fails in 24.5-24.10:
+WITH
+ t1 AS (SELECT id, name FROM table1),
+ t2 AS (SELECT t1.* FROM table2 WHERE table2.id = t1.id) -- Error!
+SELECT * FROM t2;
+
+-- Workaround - list columns explicitly:
+WITH
+ t1 AS (SELECT id, name FROM table1),
+ t2 AS (SELECT t1.id, t1.name FROM t1 CROSS JOIN table2)
+SELECT * FROM t2;
+
+-- Or upgrade to 25.4+ (fixed by PR #66143)
+```
+
+**Scenario 4: Correlated subquery not using WHERE**
+
+```text
+Scalar subquery returned more than one row
+```
+
+**Cause:** Using a subquery intended to filter rows, but missing WHERE clause or correlation, so it returns all rows.
+
+**Solution:**
+```sql
+-- Instead of (returns all rows):
+WITH (SELECT customer_id FROM customers) AS cust_id
+SELECT * FROM orders WHERE customer_id = cust_id;
+
+-- Option 1: Add proper filtering
+WITH (SELECT customer_id FROM customers WHERE premium = 1 LIMIT 1) AS cust_id
+SELECT * FROM orders WHERE customer_id = cust_id;
+
+-- Option 2: Use IN instead
+SELECT * FROM orders
+WHERE customer_id IN (SELECT customer_id FROM customers WHERE premium = 1);
+
+-- Option 3: Use JOIN
+SELECT orders.*
+FROM orders
+INNER JOIN customers ON orders.customer_id = customers.customer_id
+WHERE customers.premium = 1;
+```
+
+**Scenario 5: Missing aggregation in scalar context**
+
+```text
+Scalar subquery returned more than one row
+```
+
+**Cause:** Expecting single value but query returns multiple rows without aggregation.
+
+**Solution:**
+```sql
+-- Instead of:
+SELECT
+ name,
+ (SELECT price FROM products WHERE category = items.category) AS price
+FROM items;
+
+-- Use aggregation:
+SELECT
+ name,
+ (SELECT max(price) FROM products WHERE category = items.category) AS price
+FROM items;
+
+-- Or use ANY:
+SELECT
+ name,
+ (SELECT any(price) FROM products WHERE category = items.category) AS price
+FROM items;
+
+-- Or use LIMIT 1:
+SELECT
+ name,
+ (SELECT price FROM products WHERE category = items.category LIMIT 1) AS price
+FROM items;
+```
+
+## Prevention best practices {#prevention}
+
+1. **Understand WITH clause syntax differences**
+
+ ```sql
+ -- Scalar syntax (single value, uses parentheses):
+ WITH (SELECT 1) AS value
+ SELECT value;
+
+ -- CTE syntax (table, no parentheses around subquery):
+ WITH cte AS (SELECT 1 AS n)
+ SELECT * FROM cte;
+ ```
+
+2. **Always use LIMIT 1 or aggregation in scalar subqueries**
+
+ ```sql
+ -- Ensure single row result
+ WITH (SELECT max(id) FROM table) AS max_id
+ SELECT ...;
+
+ -- Or explicit LIMIT
+ WITH (SELECT id FROM table ORDER BY created_at DESC LIMIT 1) AS latest_id
+ SELECT ...;
+ ```
+
+3. **Avoid alias conflicts**
+ - Don't name scalar subquery aliases same as table columns
+ - Use descriptive prefixes: `max_`, `total_`, `latest_`
+ - Use different names: `value` instead of column name
+
+4. **Use the new analyzer (24.3+)**
+
+ ```sql
+ -- On 24.3+, new analyzer is default (better handling)
+ -- On earlier versions, enable it:
+ SET allow_experimental_analyzer = 1;
+ ```
+
+5. **Prefer IN/EXISTS over scalar subqueries for filtering**
+
+ ```sql
+ -- Instead of scalar subquery:
+ WHERE id = (SELECT id FROM table2 LIMIT 1)
+
+ -- Use IN (handles multiple values):
+ WHERE id IN (SELECT id FROM table2)
+
+ -- Or EXISTS (more efficient):
+ WHERE EXISTS (SELECT 1 FROM table2 WHERE table2.id = table1.id)
+ ```
+
+6. **Test subqueries independently**
+
+ ```sql
+ -- Always test subquery returns expected rows
+ SELECT count(*) FROM (
+ SELECT column FROM table WHERE condition
+ );
+
+ -- Ensure it returns 0 or 1 for scalar context
+ ```
+
+## Related error codes {#related-errors}
+
+- **Error 49 `LOGICAL_ERROR`**: "Table expression ... data must be initialized" - related CTE bug in 24.5-24.10
+- **Error 47 `UNKNOWN_IDENTIFIER`**: Missing column errors related to CTE resolution
+- **Error 184 `SET_SIZE_LIMIT_EXCEEDED`**: When IN subquery returns too many values
+
+## WITH clause syntax reference {#with-syntax}
+
+**Scalar subquery syntax (ClickHouse-specific):**
+
+```sql
+-- Creates a scalar value (single constant)
+WITH (SELECT 1) AS value
+SELECT value; -- Returns: 1
+
+-- Must return single row, single column
+WITH (SELECT max(price) FROM products) AS max_price
+SELECT * FROM products WHERE price = max_price;
+```
+
+**CTE syntax (SQL standard):**
+```sql
+-- Creates a table expression
+WITH cte AS (SELECT 1 AS n)
+SELECT * FROM cte; -- Must use FROM
+
+-- Can return multiple rows
+WITH top_products AS (
+ SELECT * FROM products ORDER BY sales DESC LIMIT 10
+)
+SELECT * FROM top_products;
+```
+
+**Key differences:**
+
+| Feature | Scalar: `WITH (SELECT ...) AS alias` | CTE: `WITH alias AS (SELECT ...)` |
+|--------------|---------------------------------------|------------------------------------|
+| Returns | Single value | Table/result set |
+| Usage | `SELECT alias` | `SELECT * FROM alias` |
+| Rows allowed | 0 or 1 | Any number |
+| Scope | Can be used as value | Must be used as table |
+
+## Related settings {#related-settings}
+
+```sql
+-- Alias resolution behavior
+SET prefer_alias_to_column_name = 1; -- Default, can cause conflicts
+
+-- Enable new analyzer (fixes many subquery issues)
+SET allow_experimental_analyzer = 1; -- Default in 24.3+
+
+-- Correlated subqueries (experimental)
+SET allow_experimental_correlated_subqueries = 1;
+
+-- Distributed query behavior
+SET distributed_product_mode = 'local'; -- Can affect scalar subqueries
+SET distributed_product_mode = 'global';
+SET distributed_product_mode = 'allow';
+```
diff --git a/docs/troubleshooting/error_codes/130_CANNOT_READ_ARRAY_FROM_TEXT.md b/docs/troubleshooting/error_codes/130_CANNOT_READ_ARRAY_FROM_TEXT.md
new file mode 100644
index 00000000000..aa513978cd0
--- /dev/null
+++ b/docs/troubleshooting/error_codes/130_CANNOT_READ_ARRAY_FROM_TEXT.md
@@ -0,0 +1,403 @@
+---
+slug: /troubleshooting/error-codes/130_CANNOT_READ_ARRAY_FROM_TEXT
+sidebar_label: '130 CANNOT_READ_ARRAY_FROM_TEXT'
+doc_type: 'reference'
+keywords: ['error codes', 'CANNOT_READ_ARRAY_FROM_TEXT', '130', 'array', 'parsing', 'format', 'brackets']
+title: '130 CANNOT_READ_ARRAY_FROM_TEXT'
+description: 'ClickHouse error code - 130 CANNOT_READ_ARRAY_FROM_TEXT'
+---
+
+# Error 130: CANNOT_READ_ARRAY_FROM_TEXT
+
+:::tip
+This error occurs when ClickHouse cannot parse array data from text formats because the array doesn't start with the expected `[` character or contains invalid syntax.
+This typically happens during data import, when using arrays with scalar CTEs/subqueries, or when migrating data from other databases like PostgreSQL.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Incorrect array syntax in text formats**
+ - Array uses curly braces `{1,2,3}` instead of square brackets `[1,2,3]`
+ - Common when importing data from PostgreSQL
+ - Array quoted incorrectly in CSV/TSV formats
+ - Missing opening `[` bracket
+ - Malformed array syntax
+
+2. **Using scalar CTE/subquery returning array with IN clause**
+ - Using `WITH (SELECT groupArray(...)) AS arr` syntax (scalar)
+ - ClickHouse tries to parse scalar result as text array
+ - Should use CTE syntax `WITH arr AS (SELECT ...)` instead
+ - Affects queries with `WHERE col IN (scalar_array)`
+
+3. **Nested array format mismatch**
+ - Inner arrays use different bracket styles
+ - Mixed quoting in nested arrays
+ - Spaces inside array not allowed in some formats
+
+4. **Format-specific array syntax issues**
+ - Values format expects unquoted array literals
+ - CSV expects arrays in quoted strings
+ - TSV expects specific array escaping
+ - Custom delimiters not matching format expectations
+
+5. **Invalid characters in array**
+ - Unescaped quotes inside array elements
+ - Special characters not properly escaped
+ - Null representation issues
+
+## What to do when you encounter this error {#what-to-do}
+
+**1. Check your array syntax**
+
+```sql
+-- Verify array format
+SELECT * FROM format(TSV, '[1,2,3]'); -- Correct
+SELECT * FROM format(TSV, '{1,2,3}'); -- Wrong - throws error
+```
+
+**2. Examine your data file**
+
+```bash
+# Check actual array syntax in file
+head -n 10 your_data_file.tsv
+
+# Look for arrays with curly braces {} instead of []
+grep -o '{[0-9,]*}' your_data_file.tsv | head
+```
+
+**3. Test with simplified array data**
+
+```sql
+-- Test minimal case
+SELECT * FROM format(CSV, '"[1,2,3]"');
+
+-- Check if escaping is the issue
+DESC format(CSV, '\"[1,2,3]\",\"[[1, 2], [], [3, 4]]\"');
+```
+
+**4. Review recent queries for scalar CTE usage**
+
+```sql
+-- Check query_log for CANNOT_READ_ARRAY_FROM_TEXT errors
+SELECT
+ event_time,
+ query,
+ exception
+FROM system.query_log
+WHERE exception_code = 130
+ AND event_time > now() - INTERVAL 1 HOUR
+ORDER BY event_time DESC
+LIMIT 10;
+```
+
+## Quick fixes {#quick-fixes}
+
+**1. PostgreSQL array import - convert curly braces to brackets**
+
+```bash
+# Replace curly braces with square brackets before import
+sed 's/{/[/g; s/}/]/g' postgres_dump.tsv > clickhouse_import.tsv
+
+# Or use sed during pipe
+cat postgres_dump.tsv | sed 's/{/[/g; s/}/]/g' | clickhouse-client --query="INSERT INTO table FORMAT TSV"
+```
+
+**2. Fix scalar CTE syntax for arrays in IN clause**
+
+```sql
+-- Instead of scalar syntax (fails):
+WITH (SELECT groupArray(number) FROM numbers(10)) AS ids
+SELECT * FROM numbers(100) WHERE number IN (ids);
+-- Error: CANNOT_READ_ARRAY_FROM_TEXT
+
+-- Use CTE syntax (works), aliasing the array so it can be referenced:
+WITH ids AS (SELECT groupArray(number) AS arr FROM numbers(10))
+SELECT * FROM numbers(100) WHERE number IN (SELECT arrayJoin(arr) FROM ids);
+
+-- Or keep the CTE as a plain column set (no groupArray needed):
+WITH ids AS (SELECT number FROM numbers(10))
+SELECT * FROM numbers(100) WHERE number IN ids;
+
+-- Or extract values with arrayJoin:
+WITH (SELECT groupArray(number) FROM numbers(10)) AS ids
+SELECT * FROM numbers(100) WHERE number IN (SELECT arrayJoin(ids));
+```
+
+**3. Ensure proper quoting in CSV format**
+
+```sql
+-- Arrays in CSV must be quoted
+-- Correct:
+SELECT * FROM format(CSV, '"[1,2,3]","[[1,2],[3,4]]"');
+
+-- Wrong (not quoted):
+SELECT * FROM format(CSV, '[1,2,3],[[1,2],[3,4]]');
+```
+
+**4. Use appropriate format settings for array parsing**
+
+```sql
+-- For nested CSV arrays:
+SET input_format_csv_arrays_as_nested_csv = 1;
+SELECT * FROM format(CSV, '"""[""""Hello"""", """"world"""", """"42"""""""" TV""""]"""');
+
+-- Adjust max array size if needed:
+SET format_binary_max_array_size = 0; -- Unlimited
+```
+
+**5. Convert data inline during INSERT**
+
+```sql
+-- If the source has curly braces, read the column as String and transform it on the fly.
+-- Run via clickhouse-client and pipe the TSV data in (adjust Array(Int64) to your column type):
+INSERT INTO target_table
+SELECT CAST(replaceAll(replaceAll(array_column, '{', '['), '}', ']'), 'Array(Int64)') AS arr
+FROM input('array_column String')
+FORMAT TSV
+```
+
+## Common specific scenarios {#common-scenarios}
+
+**Scenario 1: PostgreSQL array migration**
+
+```text
+Code: 130. DB::Exception: Array does not start with '[' character. (CANNOT_READ_ARRAY_FROM_TEXT)
+```
+
+**Cause:** PostgreSQL exports arrays with curly braces `{1,2,3}` but ClickHouse expects square brackets `[1,2,3]`.
+
+**Solution:**
+```bash
+# Option 1: Transform during export with psql
+psql -c "COPY (SELECT translate(flags::text, '{}', '[]') AS flags FROM table) TO STDOUT" |
+ clickhouse-client --query="INSERT INTO table FORMAT TSV"
+
+# Option 2: Transform the TSV file
+sed -i 's/{/[/g; s/}/]/g' postgres_export.tsv
+clickhouse-client --query="INSERT INTO table FORMAT TSV" < postgres_export.tsv
+
+# Option 3: Read as String and transform in ClickHouse (SQL below, run in clickhouse-client)
+CREATE TABLE staging (flags String) ENGINE = Memory;
+INSERT INTO staging FROM INFILE 'postgres_export.tsv' FORMAT TSV;
+
+INSERT INTO target_table
+SELECT replaceAll(replaceAll(flags, '{', '['), '}', ']') AS flags
+FROM staging;
+```
+
+**Scenario 2: Scalar CTE with array in IN clause**
+
+```text
+Code: 130. DB::Exception: Array does not start with '[' character:
+while executing 'FUNCTION in(toString(number), _subquery) UInt8'. (CANNOT_READ_ARRAY_FROM_TEXT)
+```
+
+**Cause:** Using scalar CTE syntax `WITH (SELECT groupArray(...)) AS arr` creates a scalar value, not a usable array in IN clause.
+
+**Solution:**
+
+```sql
+-- Problem (scalar CTE):
+WITH (SELECT groupArray(number) FROM numbers(10)) AS ids
+SELECT * FROM numbers(100) WHERE number IN (ids);
+-- Error: CANNOT_READ_ARRAY_FROM_TEXT
+
+-- Solution 1: Use arrayJoin to expand array:
+WITH (SELECT groupArray(number) FROM numbers(10)) AS ids
+SELECT * FROM numbers(100) WHERE number IN (SELECT arrayJoin(ids));
+
+-- Solution 2: Use proper CTE syntax (not scalar):
+WITH ids AS (SELECT number FROM numbers(10))
+SELECT * FROM numbers(100) WHERE number IN ids;
+
+-- Solution 3: Use array literal directly:
+WITH [0,1,2,3,4,5,6,7,8,9] AS ids
+SELECT * FROM numbers(100) WHERE number IN ids;
+```
+
+**Scenario 3: Array format in TSV import**
+
+```text
+Code: 130. DB::Exception: Array does not start with '[' character: (at row 2)
+```
+
+**Cause:** TSV file contains improperly formatted array data (wrong brackets, missing quotes, etc).
+
+**Solution:**
+```sql
+-- Verify TSV array format
+-- Arrays in TSV should look like:
+-- [1,2,3] [['a','b'],['c','d']]
+
+-- For quoted arrays:
+-- ['Hello', 'world'] [['Abc', 'Def'], []]
+
+-- If data has wrong format, read as String first:
+CREATE TABLE temp (arr_str String) ENGINE = Memory;
+INSERT INTO temp FROM INFILE 'data.tsv' FORMAT TSV;
+
+-- Then parse and fix:
+INSERT INTO target_table
+SELECT
+ JSONExtract(
+ replaceAll(replaceAll(arr_str, '{', '['), '}', ']'),
+ 'Array(Int64)'
+ ) AS arr
+FROM temp;
+```
+
+**Scenario 4: Nested CSV arrays**
+
+```text
+Array does not start with '[' character in CSV nested array
+```
+
+**Cause:** CSV nested arrays require special escaping and quoting.
+
+**Solution:**
+
+```sql
+-- Enable nested CSV arrays setting:
+SET input_format_csv_arrays_as_nested_csv = 1;
+
+-- Arrays in CSV can then be quoted with nested escaping:
+SELECT * FROM format(CSV, '"""[""""Hello"""", """"world""""]"""');
+
+-- Or use standard array format in quoted field:
+SELECT * FROM format(CSV, '"[''Hello'', ''world'']"');
+```
+
+**Scenario 5: Incompatible array delimiters in custom formats**
+
+```text
+CANNOT_READ_ARRAY_FROM_TEXT in CustomSeparated format
+```
+
+**Cause:** Custom format using delimiters that conflict with array syntax.
+
+**Solution:**
+```sql
+-- Ensure custom delimiters don't use array characters
+SET format_custom_field_delimiter = '|'; -- Not ',' or ']' or '['
+SET format_custom_escaping_rule = 'Escaped';
+
+-- Or read arrays as strings first:
+CREATE TABLE temp (arr String) ENGINE = Memory;
+-- Insert with custom format
+-- Then parse:
+SELECT JSONExtract(arr, 'Array(String)') FROM temp;
+```
+
+## Prevention best practices {#prevention}
+
+1. **Understand array format requirements by input format**
+
+ ```sql
+ -- CSV: Arrays must be in quoted strings
+ '"[1,2,3]","[4,5,6]"'
+
+ -- TSV: Arrays without quotes
+ '[1,2,3]\t[4,5,6]'
+
+ -- Values: Array literals
+ '([1,2,3], [4,5,6])'
+
+ -- JSON: Native JSON arrays
+ '{"arr": [1,2,3]}'
+ ```
+
+2. **Use appropriate scalar vs CTE syntax**
+
+ ```sql
+ -- For scalar values (single result):
+ WITH (SELECT max(x) FROM table) AS max_val
+ SELECT ...;
+
+ -- For arrays/sets (multiple values):
+ WITH ids AS (SELECT id FROM table)
+ SELECT ... WHERE id IN ids;
+
+ -- NOT: WITH (SELECT groupArray(id) FROM table) AS ids
+ ```
+
+3. **Validate array syntax before import**
+
+ ```bash
+ # Check array format in file
+ head -n 5 data.tsv | grep -o '\[.*\]'
+
+ # Replace PostgreSQL arrays before import
+ sed 's/{/[/g; s/}/]/g' input.tsv > output.tsv
+ ```
+
+4. **Test format with small sample first**
+
+ ```sql
+ -- Test parsing with single row
+ SELECT * FROM format(TSV, '[1,2,3]');
+
+ -- Verify schema inference
+ DESC format(TSV, '[1,2,3]\t["a","b","c"]');
+ ```
+
+5. **Handle format-specific array settings**
+
+ ```sql
+ -- Configure for your format:
+ SET input_format_csv_arrays_as_nested_csv = 1; -- For nested CSV
+ SET input_format_tsv_use_best_effort_in_schema_inference = 1;
+ SET format_binary_max_array_size = 1000000; -- Prevent huge arrays
+ ```
+
+6. **Use schema hints for complex arrays**
+
+ ```sql
+ -- Specify array types explicitly
+ SELECT * FROM file('data.tsv')
+ SETTINGS schema_inference_hints = 'arr1 Array(Int64), arr2 Array(String)';
+ ```
+
+## Related error codes {#related-errors}
+
+- **Error 6 `CANNOT_PARSE_TEXT`**: General parsing error for malformed text data
+- **Error 53 `TYPE_MISMATCH`**: CAST AS Array type mismatch
+- **Error 33 `CANNOT_READ_ALL_DATA`**: Cannot read all array values from binary format
+
+## Array format reference by input format {#array-format-reference}
+
+| Format | Array Syntax | Example | Requires Quoting |
+|---------------------|-------------------------------|------------------------|-------------------|
+| **CSV** | Square brackets in quotes | `"[1,2,3]"` | Yes |
+| **TSV** | Square brackets, no quotes | `[1,2,3]` | No |
+| **Values** | Square brackets, SQL-style | `([1,2,3], ['a','b'])` | No |
+| **JSON** | Native JSON arrays | `{"arr": [1,2,3]}` | N/A (JSON format) |
+| **JSONEachRow** | Native JSON arrays | `{"arr": [1,2,3]}` | N/A (JSON format) |
+| **TabSeparated** | Square brackets with escaping | `[1,2,3]` | No |
+| **CustomSeparated** | Depends on escaping rule | Varies | Varies |
+
+**PostgreSQL compatibility:**
+- PostgreSQL exports: `{1,2,3}`
+- ClickHouse expects: `[1,2,3]`
+- **Must transform before import**
+
+## Related settings {#related-settings}
+
+```sql
+-- CSV array settings
+SET input_format_csv_arrays_as_nested_csv = 1; -- Nested CSV in arrays
+SET input_format_csv_use_best_effort_in_schema_inference = 1;
+
+-- TSV array settings
+SET input_format_tsv_use_best_effort_in_schema_inference = 1;
+
+-- Array size limits
+SET format_binary_max_array_size = 1000000; -- Max array elements (0 = unlimited)
+
+-- Schema inference
+SET schema_inference_hints = 'column_name Array(Type)';
+SET input_format_max_rows_to_read_for_schema_inference = 25000;
+
+-- Error tolerance during import
+SET input_format_allow_errors_num = 10; -- Allow N errors
+SET input_format_allow_errors_ratio = 0.01; -- Allow 1% errors
+```
diff --git a/docs/troubleshooting/error_codes/135_ZERO_ARRAY_OR_TUPLE_INDEX.md b/docs/troubleshooting/error_codes/135_ZERO_ARRAY_OR_TUPLE_INDEX.md
new file mode 100644
index 00000000000..c86cea51dfc
--- /dev/null
+++ b/docs/troubleshooting/error_codes/135_ZERO_ARRAY_OR_TUPLE_INDEX.md
@@ -0,0 +1,66 @@
+---
+slug: /troubleshooting/error-codes/135_ZERO_ARRAY_OR_TUPLE_INDEX
+sidebar_label: '135 ZERO_ARRAY_OR_TUPLE_INDEX'
+doc_type: 'reference'
+keywords: ['error codes', 'ZERO_ARRAY_OR_TUPLE_INDEX', '135', 'tuple', 'index', 'zero', 'tupleElement']
+title: '135 ZERO_ARRAY_OR_TUPLE_INDEX'
+description: 'ClickHouse error code - 135 ZERO_ARRAY_OR_TUPLE_INDEX'
+---
+
+# Error 135: ZERO_ARRAY_OR_TUPLE_INDEX
+
+:::tip
+This error occurs when you attempt to access a **tuple** element using index 0.
+ClickHouse tuples use 1-based indexing, meaning the first element is at index 1, not 0.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Using 0-based indexing from other languages**
+ - Developers coming from Python, JavaScript, C++, Java, etc. where arrays start at 0
+ - Forgetting ClickHouse uses 1-based indexing for tuples and arrays (with arrays `0` will work but return the default value of the array type, not the first element)
+ - Copy-pasting code from other systems without adjusting indices
+ - Mental model mismatch between ClickHouse and application code
+
+2. **Incorrect tuple element access**
+ - Using `.0` to access first element instead of `.1`
+ - Using `tupleElement(tuple, 0)` instead of `tupleElement(tuple, 1)`
+ - Bracket notation with 0 index: `tuple[0]` instead of `tuple[1]`
+ - Off-by-one errors in loop indices or calculations
+
+3. **Dynamic index calculations**
+ - Loop counters starting at 0 instead of 1
+ - Range functions generating 0-based sequences
+ - Mathematical calculations resulting in 0 index
+ - Converting from 0-based system without adjustment
+
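+For the dynamic-index case, the usual fix is to shift the computed index by one or to use helpers that are already 1-based. A minimal sketch (the table and column names are illustrative):
+
+```sql
+-- arrayEnumerate() produces 1-based positions, so it pairs safely with element access
+SELECT arrayMap((x, i) -> concat(toString(i), ':', x), arr, arrayEnumerate(arr)) AS labelled
+FROM (SELECT ['a', 'b', 'c'] AS arr);
+
+-- A 0-based counter coming from application code must be shifted before indexing
+SELECT arr[zero_based_idx + 1] AS first_element
+FROM (SELECT ['a', 'b', 'c'] AS arr, 0 AS zero_based_idx);
+```
+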
+## Common solutions {#common-solutions}
+
+**1. Use 1-based indexing for tuple access**
+
+```sql
+-- Error: Attempting to access tuple element at index 0
+SELECT tupleElement((1, 'hello', 3.14), 0);
+
+-- Solution: Use 1-based indexing
+SELECT tupleElement((1, 'hello', 3.14), 1); -- Returns: 1
+SELECT tupleElement((1, 'hello', 3.14), 2); -- Returns: 'hello'
+SELECT tupleElement((1, 'hello', 3.14), 3); -- Returns: 3.14
+```
+
+**2. Use dot notation with correct indices**
+
+```sql
+-- Error: Tuple element .0 doesn't exist
+SELECT (1, 'hello', 3.14).0;
+
+-- Solution: Start from .1
+SELECT (1, 'hello', 3.14).1; -- Returns: 1
+SELECT (1, 'hello', 3.14).2; -- Returns: 'hello'
+SELECT (1, 'hello', 3.14).3; -- Returns: 3.14
+```
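+
+Note that arrays behave differently from tuples here: `arr[0]` does not throw but returns the default value of the element type, and negative indexes count from the end of the array.
+
+```sql
+SELECT
+    [10, 20, 30][1] AS first,  -- 10
+    [10, 20, 30][0] AS zero,   -- 0 (default value of the element type, not an error)
+    [10, 20, 30][-1] AS last;  -- 30
+```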
+
+## Related error codes {#related-error-codes}
+
+- [ILLEGAL_TYPE_OF_ARGUMENT (43)](/troubleshooting/error-codes/043_ILLEGAL_TYPE_OF_ARGUMENT) - Wrong type used for index
+- [SIZES_OF_ARRAYS_DONT_MATCH (190)](/troubleshooting/error-codes/190_SIZES_OF_ARRAYS_DONT_MATCH) - Array size mismatches
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/159_TIMEOUT_EXCEEDED.md b/docs/troubleshooting/error_codes/159_TIMEOUT_EXCEEDED.md
new file mode 100644
index 00000000000..3faf408624e
--- /dev/null
+++ b/docs/troubleshooting/error_codes/159_TIMEOUT_EXCEEDED.md
@@ -0,0 +1,362 @@
+---
+slug: /troubleshooting/error-codes/159_TIMEOUT_EXCEEDED
+sidebar_label: '159 TIMEOUT_EXCEEDED'
+doc_type: 'reference'
+keywords: ['error codes', 'TIMEOUT_EXCEEDED', '159']
+title: '159 TIMEOUT_EXCEEDED'
+description: 'ClickHouse error code - 159 TIMEOUT_EXCEEDED'
+---
+
+# Error 159: TIMEOUT_EXCEEDED
+
+:::tip
+This error occurs when a query exceeds the configured timeout limits for execution, connection, or network operations.
+It indicates that the operation took longer than the maximum allowed time and was automatically cancelled by ClickHouse.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Query execution timeout exceeded**
+ - Query takes longer than [`max_execution_time`](/operations/settings/settings#max_execution_time) setting
+ - Long-running aggregations or joins
+ - Full table scans on large tables
+ - Inefficient query patterns
+
+2. **Network socket timeout**
+ - Client connection timeout during long queries
+ - Timeout while writing results to client socket
+ - Client disconnected before query completed
+ - Load balancer or proxy timeout between client and server
+
+3. **Distributed query timeout**
+ - Timeout communicating with remote servers in cluster
+ - Network latency between cluster nodes
+ - Slow responses from remote shards
+
+4. **Resource contention causing slowness**
+ - High CPU usage delaying query completion
+ - Memory pressure causing disk spilling
+ - I/O bottlenecks with slow storage
+ - Too many concurrent queries
+
+5. **HTTP connection timeout**
+ - HTTP client timeout shorter than query execution time
+ - Keep-alive timeout mismatched between client and server
+ - Idle connection timeout on load balancers
+
+## Common solutions {#common-solutions}
+
+**1. Increase timeout settings**
+
+```sql
+-- Increase query execution timeout (in seconds)
+SET max_execution_time = 3600; -- 1 hour
+
+-- Or set at user level
+ALTER USER your_user SETTINGS max_execution_time = 7200;
+
+-- For specific query
+SELECT * FROM large_table
+SETTINGS max_execution_time = 600;
+```
+
+**2. Optimize the query**
+
+```sql
+-- Add WHERE clause to filter data
+SELECT * FROM table
+WHERE date >= today() - INTERVAL 7 DAY;
+
+-- Use appropriate indexes
+-- Ensure ORDER BY uses primary key columns
+-- Avoid SELECT * on wide tables
+```
+
+**3. Configure client-side timeout**
+
+For HTTP clients:
+
+```bash
+# Increase socket timeout in connection string
+# JDBC example
+socket_timeout=7200000 # 2 hours in milliseconds
+
+# Python clickhouse-connect
+client = clickhouse_connect.get_client(
+ host='your-host',
+ query_settings={'max_execution_time': 3600},
+ connect_timeout=30,
+ send_receive_timeout=3600
+)
+```
+
+**4. Handle timeout before checking execution speed**
+
+```sql
+-- Allow query to start before timeout kicks in
+SET timeout_before_checking_execution_speed = 10;
+
+-- Combined with max_execution_time
+SET timeout_before_checking_execution_speed = 0;
+SET max_execution_time = 300;
+```
+
+**5. Enable query cancellation on client disconnect**
+
+```sql
+-- Cancel query if HTTP client disconnects (requires readonly mode)
+SET cancel_http_readonly_queries_on_client_close = 1;
+```
+
+**6. Use async inserts with appropriate timeout**
+
+```sql
+-- For insert operations
+SET async_insert = 1;
+SET wait_for_async_insert = 1;
+SET wait_for_async_insert_timeout = 300;
+```
+
+## Common scenarios {#common-scenarios}
+
+**Scenario 1: Query timeout with `max_execution_time`**
+
+```text
+Error: Timeout exceeded: elapsed 98448.998521 ms, maximum: 5000 ms
+```
+
+**Cause:** Query ran longer than `max_execution_time` setting.
+
+**Solution:**
+
+```sql
+-- Increase timeout for this query
+SELECT * FROM large_table
+SETTINGS max_execution_time = 120;
+
+-- Or optimize the query to run faster
+```
+
+**Scenario 2: Network socket timeout**
+
+```text
+Error: Timeout exceeded while writing to socket
+```
+
+**Cause:** Client connection timed out while server was sending results.
+
+**Solution:**
+- Increase client socket timeout
+- Use compression to reduce data transfer time
+- Add `LIMIT` clause to reduce result size
+- Ensure stable network connection
+
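+A minimal sketch of the compression and LIMIT points (the table is just an example; `enable_http_compression` only takes effect when the HTTP client also sends an `Accept-Encoding` header):
+
+```sql
+SELECT event_time, message
+FROM system.text_log
+ORDER BY event_time DESC
+LIMIT 1000
+SETTINGS enable_http_compression = 1;
+```
+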
+**Scenario 3: JDBC/HTTP client timeout**
+
+```text
+Error: Read timed out
+```
+
+**Cause:** Client-side timeout shorter than query execution time.
+
+**Solution:**
+
+```java
+// Increase JDBC timeout
+Properties properties = new Properties();
+properties.setProperty("socket_timeout", "7200000"); // 2 hours
+
+// Or in connection URL
+jdbc:clickhouse://host:8443/database?socket_timeout=7200000
+```
+
+**Scenario 4: Distributed query timeout**
+
+```text
+Error: Timeout exceeded while communicating with remote server
+```
+
+**Cause:** Remote shard not responding within timeout.
+
+**Solution:**
+
+```sql
+-- Increase distributed query timeout
+SET distributed_connections_timeout = 60;
+
+-- Check cluster health
+SELECT * FROM system.clusters WHERE cluster = 'your_cluster';
+```
+
+**Scenario 5: Load balancer timeout**
+
+```text
+Client receives timeout but query completes successfully on server
+```
+
+**Cause:** Load balancer or proxy has shorter timeout than query duration.
+
+**Solution:**
+- Configure load balancer timeout settings
+- Use direct connection for long-running queries
+- Enable TCP keep-alive to maintain connection
+
+## Prevention tips {#prevention-tips}
+
+1. **Set appropriate timeouts:** Match client and server timeout settings
+2. **Monitor query performance:** Identify and optimize slow queries
+3. **Use LIMIT clauses:** Reduce result set size for exploratory queries
+4. **Optimize table design:** Use proper primary keys and partitioning
+5. **Configure keep-alive:** Prevent idle connection timeouts
+6. **Test long queries:** Verify timeout settings before production use
+7. **Use query result cache:** Cache expensive query results
+
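+As a sketch of the last tip, the query result cache (available since ClickHouse 23.1; the table name and TTL below are illustrative) lets repeated expensive queries return from cache instead of re-running:
+
+```sql
+SELECT toStartOfHour(event_time) AS hour, count() AS events
+FROM events  -- hypothetical table
+GROUP BY hour
+SETTINGS use_query_cache = 1, query_cache_ttl = 300;
+```
+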
+## Debugging steps {#debugging-steps}
+
+1. **Check current timeout settings:**
+
+ ```sql
+ SELECT
+ name,
+ value
+ FROM system.settings
+ WHERE name LIKE '%timeout%' OR name LIKE '%execution_time%';
+ ```
+
+2. **Find queries that timed out:**
+
+ ```sql
+ SELECT
+ query_id,
+ user,
+ query_duration_ms,
+ exception,
+ query
+ FROM system.query_log
+ WHERE exception_code = 159
+ AND event_date >= today() - 1
+ ORDER BY event_time DESC
+ LIMIT 10;
+ ```
+
+3. **Check if query completed despite timeout:**
+
+ ```sql
+ -- Query might have finished after client timeout
+ SELECT *
+ FROM system.query_log
+ WHERE query_id = 'your_query_id'
+ ORDER BY event_time;
+ ```
+
+4. **Analyze query performance:**
+
+ ```sql
+ SELECT
+ query_id,
+ query_duration_ms / 1000 AS duration_sec,
+ formatReadableSize(memory_usage) AS memory,
+ formatReadableQuantity(read_rows) AS rows_read,
+ formatReadableSize(read_bytes) AS bytes_read
+ FROM system.query_log
+ WHERE query_id = 'slow_query_id';
+ ```
+
+5. **Check for resource bottlenecks:**
+
+ ```sql
+ -- CPU usage
+ SELECT
+ query_id,
+ ProfileEvents['UserTimeMicroseconds'] / 1000000 AS cpu_sec
+ FROM system.query_log
+ WHERE query_id = 'your_query_id';
+
+ -- I/O wait
+ SELECT
+ query_id,
+ ProfileEvents['OSReadChars'] AS read_chars,
+ ProfileEvents['OSWriteChars'] AS write_chars
+ FROM system.query_log
+ WHERE query_id = 'your_query_id';
+ ```
+
+## Special considerations {#special-considerations}
+
+**For HTTP/JDBC clients:**
+- Client timeout and server `max_execution_time` are independent
+- Query may continue running on server after client timeout
+- Use `cancel_http_readonly_queries_on_client_close = 1` to auto-cancel
+
+**For distributed queries:**
+- Each shard has its own timeout
+- Network latency adds to total execution time
+- Use `distributed_connections_timeout` for shard communication
+
+**For long-running analytical queries:**
+- Consider using materialized views for pre-aggregation
+- Break complex queries into smaller steps
+- Use query result cache for repeated queries
+- Schedule heavy queries during off-peak hours
+
+**For aggregations with external sorting:**
+- Large aggregations may spill to disk
+- Merging temporary files can take significant time
+- Monitor memory usage and `max_bytes_before_external_group_by`
+
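+For the external-sorting case, a common approach (sizes below are illustrative; the documentation suggests setting the spill threshold to roughly half of `max_memory_usage`) is to let GROUP BY spill to disk well before the memory limit is reached:
+
+```sql
+SET max_memory_usage = 20000000000;                   -- ~20 GB
+SET max_bytes_before_external_group_by = 10000000000; -- spill to disk at ~10 GB
+```
+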
+## Timeout-related settings {#timeout-settings}
+
+```sql
+-- Main execution timeout (seconds)
+max_execution_time = 0 -- 0 = unlimited
+
+-- Timeout before speed checking starts (seconds)
+timeout_before_checking_execution_speed = 10
+
+-- Connection and distributed query timeouts (settings ending in _ms are milliseconds, the rest seconds)
+connect_timeout_with_failover_ms = 50
+connect_timeout_with_failover_secure_ms = 100
+hedged_connection_timeout_ms = 50
+receive_timeout = 300
+send_timeout = 300
+
+-- HTTP-specific
+http_connection_timeout = 1
+http_send_timeout = 1800
+http_receive_timeout = 1800
+
+-- Cancel on disconnect
+cancel_http_readonly_queries_on_client_close = 0
+```
+
+## Synchronizing client and server timeouts {#synchronizing-timeouts}
+
+To ensure queries stop when client times out:
+
+```sql
+-- Set server timeout slightly less than client timeout
+-- Client timeout: 120 seconds
+-- Server setting:
+SET timeout_before_checking_execution_speed = 0;
+SET max_execution_time = 110; -- 10 seconds less than client
+
+-- Enable cancellation on client disconnect
+SET cancel_http_readonly_queries_on_client_close = 1;
+```
+
+:::note
+`cancel_http_readonly_queries_on_client_close` only works when `readonly > 0`, which is automatic for HTTP GET requests.
+:::
+
+If you're experiencing this error:
+1. Check if timeout is due to query complexity or timeout configuration
+2. Review `max_execution_time` setting and increase if needed
+3. For HTTP/JDBC clients, ensure client timeout >= server timeout
+4. Use `EXPLAIN` to analyze query plan and optimize
+5. Monitor query performance in `system.query_log`
+6. Consider breaking long queries into smaller batches
+7. For production workloads, set appropriate timeout values based on query patterns
+
+**Related documentation:**
+- [ClickHouse settings reference](/operations/settings/settings)
+- [Query execution limits](/operations/settings/query-complexity)
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/179_MULTIPLE_EXPRESSIONS_FOR_ALIAS.md b/docs/troubleshooting/error_codes/179_MULTIPLE_EXPRESSIONS_FOR_ALIAS.md
new file mode 100644
index 00000000000..dbc83498002
--- /dev/null
+++ b/docs/troubleshooting/error_codes/179_MULTIPLE_EXPRESSIONS_FOR_ALIAS.md
@@ -0,0 +1,396 @@
+---
+slug: /troubleshooting/error-codes/179_MULTIPLE_EXPRESSIONS_FOR_ALIAS
+sidebar_label: '179 MULTIPLE_EXPRESSIONS_FOR_ALIAS'
+doc_type: 'reference'
+keywords: ['error codes', 'MULTIPLE_EXPRESSIONS_FOR_ALIAS', '179', 'alias', 'duplicate']
+title: '179 MULTIPLE_EXPRESSIONS_FOR_ALIAS'
+description: 'ClickHouse error code - 179 MULTIPLE_EXPRESSIONS_FOR_ALIAS'
+---
+
+# Error 179: MULTIPLE_EXPRESSIONS_FOR_ALIAS
+
+:::tip
+This error occurs when you assign the same alias to multiple different expressions in a query.
+ClickHouse cannot determine which expression the alias should refer to, causing ambiguity.
+This is a semantic error that prevents the query from executing.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Same alias used for different expressions in SELECT**
+ - Multiple columns with identical alias names
+ - One expression references the alias of another expression with the same name
+ - Nested expressions creating circular alias references
+ - Different calculations assigned to same result name
+
+2. **Query optimizer creating duplicate aliases (23.1-23.2 bug)**
+ - Optimization of `OR` chains into `IN` expressions
+ - Works fine in ClickHouse 22.12 but breaks in 23.1-23.2
+ - Particularly affects LowCardinality columns on distributed tables
+ - Query rewriting adds aliases during optimization
+
+3. **Alias column conflicts with SELECT alias in distributed queries**
+ - Table has ALIAS column with name `X`
+ - SELECT expression also uses `AS X`
+ - Works fine on local tables
+ - Fails with `remote()` or Distributed tables
+ - Especially with parallel replicas enabled
+
+4. **WITH clause expression reused with same alias**
+ - WITH clause defines an alias
+ - SELECT clause redefines the same alias differently
+ - Subqueries reference the ambiguous alias
+ - Query rewriting expands aliases incorrectly
+
+5. **Self-referential alias definitions**
+ - Expression references its own alias name
+ - `platform AS platform` where `platform` is both column and alias
+ - Recursive alias definitions in complex queries
+ - Especially problematic with `if()` or `CASE` expressions
+
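+In its simplest form, the error appears when the same alias is bound to two different expressions (the exact message, and in newer analyzer versions the error code, can differ):
+
+```sql
+SELECT
+    number + 1 AS n,
+    number + 2 AS n
+FROM numbers(3);
+-- Code: 179. DB::Exception: Different expressions with the same alias n
+```
+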
+## What to do when you encounter this error {#what-to-do}
+
+**1. Check the error message for conflicting expressions**
+
+```text
+Different expressions with the same alias alias1:
+((position(path, '/a') > 0) AND (NOT (position(path, 'a') > 0))) OR ((path IN ('/b', '/b/')) AS alias1) AS alias1
+and
+path IN ('/b', '/b/') AS alias1
+```
+
+**2. Review your SELECT clause for duplicate aliases**
+
+```sql
+-- Check query_log for the failing query
+SELECT
+ event_time,
+ query,
+ exception
+FROM system.query_log
+WHERE exception_code = 179
+ AND event_time > now() - INTERVAL 1 HOUR
+ORDER BY event_time DESC;
+```
+
+**3. Check your ClickHouse version**
+
+```sql
+SELECT version();
+
+-- Versions 23.1-23.2 had a query optimizer bug
+-- Consider upgrading to 23.3+ or downgrading to 22.12
+```
+
+## Quick fixes {#quick-fixes}
+
+**1. Use unique alias names**
+
+```sql
+-- Instead of this (fails):
+SELECT
+ number AS num,
+ num * 1 AS num -- Duplicate alias!
+FROM numbers(10);
+
+-- Use this (works):
+SELECT
+ number AS num,
+ num * 1 AS num_times_one
+FROM numbers(10);
+```
+
+**2. Avoid self-referential aliases**
+
+```sql
+-- Instead of this (may fail on distributed tables):
+SELECT
+ if(platform = 'ios', 'apple', platform) AS platform
+FROM table;
+
+-- Use different alias:
+SELECT
+ if(platform = 'ios', 'apple', platform) AS platform_normalized
+FROM table;
+
+-- Or don't use alias for column:
+SELECT
+ if(t.platform = 'ios', 'apple', t.platform) AS platform
+FROM table AS t;
+```
+
+**3. For optimizer bug (23.1-23.2) - workaround or upgrade**
+
+```sql
+-- Workaround 1: Remove LowCardinality from distributed table
+ALTER TABLE distributed_table
+ MODIFY COLUMN path String; -- Instead of LowCardinality(String)
+
+-- Workaround 2: Upgrade to ClickHouse 23.3+
+-- Or downgrade to 22.12
+
+-- Workaround 3: Disable the problematic optimization
+SET optimize_min_equality_disjunction_chain_length = 0; -- Note: Ignored in 23.1+
+```
+
+**4. For distributed/remote table alias conflicts**
+
+```sql
+-- Option 1: Use different alias names
+SELECT max(x.ta) AS ta_max -- Not 'ta'
+FROM remote('127.0.0.1', default, t) x;
+
+-- Option 2: Disable analyzer (temporary fix)
+SELECT max(x.ta) AS ta
+FROM remote('127.0.0.1', default, t) x
+SETTINGS enable_analyzer = 0;
+
+-- Option 3: Disable alias optimization
+SELECT max(x.ta) AS ta
+FROM remote('127.0.0.1', default, t) x
+SETTINGS optimize_respect_aliases = 0;
+```
+
+**5. Rewrite complex expressions**
+
+```sql
+-- Instead of nested aliasing:
+WITH
+ (path = '/b') OR (path = '/b/') AS alias1
+SELECT max(alias1) FROM table;
+
+-- Use a subquery with a uniquely named column instead:
+SELECT max(flag)
+FROM
+(
+    SELECT (path = '/b') OR (path = '/b/') AS flag
+    FROM table
+);
+```
+
+## Common specific scenarios {#common-scenarios}
+
+**Scenario 1: Query optimizer bug in 23.1-23.2 with LowCardinality**
+
+```text
+Code: 179. DB::Exception: Different expressions with the same alias alias1:
+((position(path, '/a') > 0) AND (NOT (position(path, 'a') > 0))) OR ((path IN ('/b', '/b/')) AS alias1) AS alias1
+and
+path IN ('/b', '/b/') AS alias1
+```
+
+**Cause:** Bug in ClickHouse 23.1-23.2 where optimizer converted `OR` conditions into `IN` expressions and incorrectly added aliases.
+Affects LowCardinality columns on distributed tables.
+
+**Solution:**
+
+```sql
+-- Upgrade to 23.3+ where this is fixed
+
+-- Or remove LowCardinality from distributed table:
+ALTER TABLE distributed_table
+ MODIFY COLUMN path String;
+
+-- Original failing query:
+WITH ((position(path, '/a') > 0) AND (NOT (position(path, 'a') > 0)))
+ OR (path = '/b') OR (path = '/b/') AS alias1
+SELECT max(alias1)
+FROM distributed_table
+WHERE id = 299386662;
+```
+
+**Scenario 2: Self-referential alias in SELECT**
+
+```text
+Code: 179. DB::Exception: Different expressions with the same alias num:
+num * 1 AS num
+and
+number AS num
+```
+
+**Cause:** Using the same column name as an alias, then referencing that alias.
+
+**Solution:**
+
+```sql
+-- Instead of:
+SELECT
+ number AS num,
+ num * 1 AS num -- Error!
+FROM numbers(10);
+
+-- Use different names:
+SELECT
+ number AS num,
+ num * 1 AS num_times_one
+FROM numbers(10);
+```
+
+**Scenario 3: ALIAS column conflicts with SELECT alias on distributed tables**
+
+```text
+Code: 179. DB::Exception: Multiple expressions toStartOfHour(__table1.t) AS ta
+and max(toStartOfHour(__table1.t) AS ta) AS ta for alias ta
+```
+
+**Cause:** Table has `ta DateTime ALIAS toStartOfHour(t)`, and SELECT uses `max(x.ta) AS ta`. Works locally but fails with `remote()` or parallel replicas.
+
+**Solution:**
+
+```sql
+-- Table definition:
+CREATE TABLE t (
+ uid Int16,
+ t DateTime,
+ ta DateTime ALIAS toStartOfHour(t) -- ALIAS column named 'ta'
+) ENGINE = MergeTree ORDER BY uid;
+
+-- Instead of (fails on distributed):
+SELECT max(x.ta) AS ta -- Conflicts with ALIAS column
+FROM remote('127.0.0.1', default, t) x;
+
+-- Use different alias:
+SELECT max(x.ta) AS ta_max
+FROM remote('127.0.0.1', default, t) x;
+
+-- Or disable analyzer:
+SELECT max(x.ta) AS ta
+FROM remote('127.0.0.1', default, t) x
+SETTINGS enable_analyzer = 0;
+```
+
+**Scenario 4: Parallel replicas with self-referential alias**
+
+```text
+Code: 179. DB::Exception: Different expressions with the same alias platform:
+if((_CAST(os, 'String') AS platform) = 'ios', 'apple', platform) AS platform
+and
+_CAST(os, 'String') AS platform
+```
+
+**Cause:** Using `if(platform = 'ios', 'apple', platform) AS platform` where `platform` is both the source column and the alias. Works without parallel replicas, fails with them.
+
+**Solution:**
+
+```sql
+-- Instead of:
+SELECT
+ if(platform = 'ios', 'apple', platform) AS platform
+FROM app_ids_per_day
+GROUP BY platform
+SETTINGS allow_experimental_parallel_reading_from_replicas = 2;
+
+-- Use different alias:
+SELECT
+ if(platform = 'ios', 'apple', platform) AS platform_normalized
+FROM app_ids_per_day
+GROUP BY platform_normalized
+SETTINGS allow_experimental_parallel_reading_from_replicas = 2;
+```
+
+## Prevention best practices {#prevention}
+
+1. **Always use unique alias names**
+
+ ```sql
+ -- Don't reuse the same alias
+ SELECT
+ col1 AS result,
+ col2 AS result -- BAD!
+ FROM table;
+
+ -- Use descriptive unique names
+ SELECT
+ col1 AS result_col1,
+ col2 AS result_col2
+ FROM table;
+ ```
+
+2. **Avoid self-referential aliases**
+
+ ```sql
+ -- Don't use column name as its own alias
+ SELECT
+ platform AS platform -- Problematic
+ FROM table;
+
+ -- Use different alias name or no alias
+ SELECT
+ platform AS platform_value
+ FROM table;
+ ```
+
+3. **Be careful with ALIAS columns in distributed queries**
+
+ ```sql
+ -- If table has: ta DateTime ALIAS toStartOfHour(t)
+
+ -- Don't use 'ta' as SELECT alias on distributed tables
+ SELECT max(ta) AS ta_result -- Not AS ta
+ FROM distributed_table;
+ ```
+
+4. **Test on distributed tables if using remote()/Distributed**
+
+ ```sql
+ -- Test locally first
+ SELECT ... FROM local_table;
+
+ -- Then test on distributed
+ SELECT ... FROM remote('host', db, local_table);
+
+ -- Check for alias conflicts
+ ```
+
+5. **Keep ClickHouse updated to avoid optimizer bugs**
+
+ ```sql
+ -- Check version
+ SELECT version();
+
+ -- Versions 23.1-23.2 had alias optimization bugs
+ -- Use 23.3+ or 22.12
+ ```
+
+6. **Use WITH clauses carefully**
+
+ ```sql
+ -- Ensure WITH aliases don't conflict with SELECT aliases
+ WITH
+ calculated AS (SELECT value FROM table)
+ SELECT
+ other_value AS result, -- Not AS calculated
+ calculated
+ FROM source;
+ ```
+
+## Related settings {#related-settings}
+
+```sql
+-- Disable analyzer (temporary workaround)
+SET enable_analyzer = 0; -- Old query interpreter
+
+-- Disable alias optimization
+SET optimize_respect_aliases = 0; -- May help with distributed queries
+
+-- Parallel replicas (can trigger the error)
+SET allow_experimental_parallel_reading_from_replicas = 0; -- Disable to test
+SET max_parallel_replicas = 1; -- Or reduce
+
+-- Check current settings
+SELECT name, value
+FROM system.settings
+WHERE name IN ('enable_analyzer', 'optimize_respect_aliases',
+ 'allow_experimental_parallel_reading_from_replicas');
+```
+
+## Version-specific issues {#version-issues}
+
+| ClickHouse Version | Issue | Status |
+|----------------------------|----------------------------------------------------------------|---------------------------------|
+| **23.1 - 23.2** | Optimizer bug creating duplicate aliases with LowCardinality | Fixed in 23.3+ |
+| **24.3+** | New analyzer gives less clear error message (code 47) | Known, different error code |
+| **All versions** | ALIAS column conflicts with SELECT alias on distributed tables | Workaround: use different alias |
+| **With parallel replicas** | Self-referential aliases fail | Workaround: unique alias names |
+
+## Related error codes {#related-errors}
+
+- **Error 47 `UNKNOWN_IDENTIFIER`**: New analyzer may show this instead of 179 for duplicate aliases
+- **Error 15 `DUPLICATE_COLUMN`**: Similar, but for duplicate column names in table definitions rather than query aliases
diff --git a/docs/troubleshooting/error_codes/181_ILLEGAL_FINAL.md b/docs/troubleshooting/error_codes/181_ILLEGAL_FINAL.md
new file mode 100644
index 00000000000..cbf83c2c4d5
--- /dev/null
+++ b/docs/troubleshooting/error_codes/181_ILLEGAL_FINAL.md
@@ -0,0 +1,241 @@
+---
+slug: /troubleshooting/error-codes/181_ILLEGAL_FINAL
+sidebar_label: '181 ILLEGAL_FINAL'
+doc_type: 'reference'
+keywords: ['error codes', 'ILLEGAL_FINAL', '181', 'FINAL', 'subquery', 'modifier']
+title: '181 ILLEGAL_FINAL'
+description: 'ClickHouse error code - 181 ILLEGAL_FINAL'
+---
+
+# Error 181: ILLEGAL_FINAL
+
+:::tip
+This error occurs when you use the FINAL modifier in contexts where it is not allowed.
+FINAL can only be used directly on tables from the MergeTree family that support deduplication (`ReplacingMergeTree`, `CollapsingMergeTree`, etc.), not on subqueries, derived tables, or other table engines.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Using FINAL on subqueries**
+ - `SELECT * FROM (SELECT * FROM table) FINAL` - not allowed
+ - FINAL must be applied to the base table, not the subquery result
+ - Applies to both inline subqueries and CTEs
+
+2. **Using FINAL on derived tables**
+ - Result of JOIN, UNION, or other operations
+ - Attempting to deduplicate already processed data
+ - FINAL only works on physical table storage
+
+3. **Using FINAL on unsupported table engines**
+ - View tables (materialized or regular)
+ - Distributed tables in certain contexts
+ - Tables without deduplication logic (regular MergeTree)
+ - Dictionary tables
+
+4. **FINAL in wrong position in query**
+ - Placing FINAL after WHERE or other clauses
+ - Must come immediately after table name
+ - Incorrect syntax ordering
+
+5. **Using FINAL on JOINed tables indirectly**
+ - Attempting to apply FINAL to result of JOIN
+ - FINAL must be on individual source tables before JOIN
+
+## What to do when you encounter this error {#what-to-do}
+
+**1. Check where FINAL is placed in your query**
+
+```sql
+-- Find the failing query
+SELECT query
+FROM system.query_log
+WHERE exception_code = 181
+ AND event_time > now() - INTERVAL 1 HOUR
+ORDER BY event_time DESC;
+```
+
+**2. Verify table engine supports FINAL**
+
+```sql
+-- Check if table supports FINAL
+SELECT
+ name,
+ engine
+FROM system.tables
+WHERE database = 'your_database'
+ AND name = 'your_table';
+
+-- FINAL works with:
+-- - ReplacingMergeTree
+-- - SummingMergeTree
+-- - AggregatingMergeTree
+-- - CollapsingMergeTree
+-- - VersionedCollapsingMergeTree
+-- - CoalescingMergeTree
+-- - GraphiteMergeTree
+```
+
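+If the engine does support FINAL, recent ClickHouse versions also provide the query-level `final` setting, which applies the modifier to every table in the query that supports it; the table name below is illustrative:
+
+```sql
+-- Equivalent to writing FINAL after each supporting table
+SELECT *
+FROM replacing_table
+SETTINGS final = 1;
+```
+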
+## Quick fixes {#quick-fixes}
+
+**1. Move FINAL to base table, not subquery**
+
+```sql
+-- Instead of this (fails):
+SELECT *
+FROM (SELECT * FROM table WHERE condition) FINAL;
+
+-- Use this (works):
+SELECT *
+FROM table FINAL
+WHERE condition;
+```
+
+**2. Apply FINAL before wrapping in subquery**
+
+```sql
+-- Instead of this (fails):
+SELECT *
+FROM (
+ SELECT * FROM table1
+ UNION ALL
+ SELECT * FROM table2
+) FINAL;
+
+-- Use this (works):
+SELECT * FROM table1 FINAL
+UNION ALL
+SELECT * FROM table2 FINAL;
+```
+
+**3. Use FINAL on each table in JOIN**
+
+```sql
+-- Instead of this (fails):
+SELECT *
+FROM (
+ SELECT * FROM table1
+ JOIN table2 USING (id)
+) FINAL;
+
+-- Use this (works):
+SELECT *
+FROM table1 FINAL
+JOIN table2 FINAL USING (id);
+```
+
+**4. Apply FINAL directly after table name**
+
+```sql
+-- Correct syntax:
+SELECT * FROM table FINAL WHERE condition;
+SELECT * FROM table AS t FINAL WHERE t.id = 1;
+
+-- Not:
+SELECT * FROM table WHERE condition FINAL; -- Wrong position
+```
+
+**5. Remove FINAL from unsupported engines**
+
+```sql
+-- Check table engine
+SHOW CREATE TABLE your_table;
+
+-- If the engine is plain MergeTree (not Replacing/Collapsing/...):
+-- FINAL is not supported there and is not needed, so remove it:
+SELECT * FROM regular_mergetree_table; -- No FINAL
+```
+
+## Common specific scenarios {#common-scenarios}
+
+**Scenario 1: FINAL on subquery result**
+
+```text
+Code: 181. DB::Exception: ILLEGAL_FINAL
+```
+
+**Cause:** Attempting to use FINAL on a subquery or derived table.
+
+**Solution:**
+```sql
+-- Instead of:
+SELECT *
+FROM (
+ SELECT * FROM orders WHERE date > '2024-01-01'
+) FINAL;
+
+-- Move FINAL to base table:
+SELECT *
+FROM orders FINAL
+WHERE date > '2024-01-01';
+```
+
+**Scenario 2: FINAL in CTE used as derived table**
+
+```text
+Code: 181. DB::Exception: ILLEGAL_FINAL
+```
+
+**Cause:** Using FINAL on CTE reference instead of base table.
+
+**Solution:**
+```sql
+-- Instead of:
+WITH filtered AS (
+ SELECT * FROM table WHERE condition
+)
+SELECT * FROM filtered FINAL;
+
+-- Use FINAL in the CTE:
+WITH filtered AS (
+ SELECT * FROM table FINAL WHERE condition
+)
+SELECT * FROM filtered;
+```
+
+**Scenario 3: FINAL on Distributed table incorrectly**
+
+```text
+Code: 181. DB::Exception: ILLEGAL_FINAL
+```
+
+**Cause:** Using FINAL on Distributed table in unsupported context.
+
+**Solution:**
+```sql
+-- FINAL on Distributed tables works in most contexts:
+SELECT * FROM distributed_table FINAL;
+
+-- But not in subqueries:
+-- SELECT * FROM (SELECT * FROM distributed_table) FINAL; -- Wrong
+
+-- Move FINAL to table reference:
+SELECT * FROM (SELECT * FROM distributed_table FINAL);
+```
+
+**Scenario 4: FINAL on UNION result**
+
+```text
+Code: 181. DB::Exception: ILLEGAL_FINAL
+```
+
+**Cause:** Trying to deduplicate UNION result with FINAL.
+
+**Solution:**
+```sql
+-- Instead of:
+SELECT * FROM (
+ SELECT * FROM table1
+ UNION ALL
+ SELECT * FROM table2
+) FINAL;
+
+-- Apply FINAL to individual tables:
+SELECT * FROM table1 FINAL
+UNION ALL
+SELECT * FROM table2 FINAL;
+
+-- Or use DISTINCT if deduplication is needed:
+SELECT DISTINCT * FROM (
+ SELECT * FROM table1 FINAL
+ UNION ALL
+ SELECT * FROM table2 FINAL
+);
+```
diff --git a/docs/troubleshooting/error_codes/184_ILLEGAL_AGGREGATION.md b/docs/troubleshooting/error_codes/184_ILLEGAL_AGGREGATION.md
new file mode 100644
index 00000000000..2b2b63e03c1
--- /dev/null
+++ b/docs/troubleshooting/error_codes/184_ILLEGAL_AGGREGATION.md
@@ -0,0 +1,296 @@
+---
+slug: /troubleshooting/error-codes/184_ILLEGAL_AGGREGATION
+sidebar_label: '184 ILLEGAL_AGGREGATION'
+doc_type: 'reference'
+keywords: ['error codes', 'ILLEGAL_AGGREGATION', '184', 'aggregate', 'GROUP BY', 'nested']
+title: '184 ILLEGAL_AGGREGATION'
+description: 'ClickHouse error code - 184 ILLEGAL_AGGREGATION'
+---
+
+# Error 184: ILLEGAL_AGGREGATION
+
+:::tip
+This error occurs when aggregate functions are used incorrectly, such as nesting aggregate functions inside other aggregate functions, using aggregates in WHERE clauses, or mixing aggregated and non-aggregated columns without proper GROUP BY.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Nested aggregate functions**
+ - Putting one aggregate function inside another
+ - `SELECT sum(count(*))` without subquery
+ - `SELECT max(avg(x))` directly in same query level
+ - Aggregate functions must be at same nesting level or use subqueries
+
+2. **Using aggregate functions in WHERE clause**
+ - WHERE clause is evaluated before aggregation
+ - `WHERE count(*) > 10` is invalid
+ - Must use HAVING for post-aggregation filtering
+ - Or use subquery/CTE structure
+
+3. **Mixing aggregated and non-aggregated columns without GROUP BY**
+ - `SELECT name, count(*) FROM table` without GROUP BY name
+ - All non-aggregated columns must be in GROUP BY
+ - Or all columns must be aggregate functions
+ - ClickHouse requires explicit GROUP BY (unlike some databases)
+
+4. **Aggregate functions in invalid contexts**
+ - Using aggregates in JOIN ON conditions
+ - Aggregates in PREWHERE clause
+ - Aggregates in array indices or other expression contexts
+ - Some contexts fundamentally don't support aggregation
+
+5. **Complex alias references causing nested aggregation**
+ - Query optimizer may expand aliases in ways that nest aggregates
+ - Reusing aggregate result aliases in expressions
+ - Circular or recursive alias dependencies
+
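+In its simplest form, the error appears when one aggregate function is nested directly inside another (the exact message varies slightly by version):
+
+```sql
+SELECT max(sum(number)) FROM numbers(10);
+-- Code: 184. DB::Exception: Aggregate function sum(number) is found inside another aggregate function
+```
+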
+## What to do when you encounter this error {#what-to-do}
+
+**1. Check the error message for the specific aggregate function**
+
+```text
+Aggregate function count(*) is found inside another aggregate function
+Aggregate function sum(value) cannot be used in WHERE clause
+```
+
+**2. Review your query structure**
+
+```sql
+-- Look for:
+-- - Nested aggregate functions
+-- - Aggregates in WHERE clause
+-- - Missing GROUP BY for non-aggregated columns
+```
+
+**3. Review query logs**
+
+```sql
+SELECT
+ event_time,
+ query,
+ exception
+FROM system.query_log
+WHERE exception_code = 184
+ AND event_time > now() - INTERVAL 1 HOUR
+ORDER BY event_time DESC;
+```
+
+## Quick fixes {#quick-fixes}
+
+**1. Use subqueries for nested aggregations**
+
+```sql
+-- Instead of this (fails):
+SELECT sum(count(*)) FROM table;
+
+-- Use subquery:
+SELECT sum(cnt) FROM (
+ SELECT count(*) AS cnt
+ FROM table
+ GROUP BY category
+);
+```
+
+**2. Use HAVING instead of WHERE for aggregate conditions**
+
+```sql
+-- Instead of this (fails):
+SELECT category, count(*) AS cnt
+FROM table
+WHERE count(*) > 10 -- Error: aggregation in WHERE
+GROUP BY category;
+
+-- Use HAVING:
+SELECT category, count(*) AS cnt
+FROM table
+GROUP BY category
+HAVING count(*) > 10;
+```
+
+**3. Add GROUP BY for non-aggregated columns**
+
+```sql
+-- Instead of this (fails):
+SELECT category, count(*) FROM table;
+
+-- Add GROUP BY:
+SELECT category, count(*)
+FROM table
+GROUP BY category;
+
+-- Or aggregate all columns:
+SELECT any(category), count(*)
+FROM table;
+```
+
+**4. Move aggregates to subquery**
+
+```sql
+-- Instead of using aggregate in JOIN:
+SELECT *
+FROM table1
+JOIN table2 ON table1.id = count(table2.id); -- Error
+
+-- Use subquery:
+SELECT *
+FROM table1
+JOIN (
+ SELECT category, count(*) AS cnt
+ FROM table2
+ GROUP BY category
+) AS agg ON table1.category = agg.category;
+```
+
+**5. Use window functions for running aggregates**
+
+```sql
+-- Instead of nested aggregates:
+SELECT category, max(count(*)) OVER () FROM table GROUP BY category;
+
+-- Window functions can access aggregate results:
+SELECT
+ category,
+ count(*) AS cnt,
+ max(cnt) OVER () AS max_cnt
+FROM table
+GROUP BY category;
+```
+
+## Common specific scenarios {#common-scenarios}
+
+**Scenario 1: Nested aggregate functions**
+
+```text
+Code: 184. DB::Exception: Aggregate function count(*) is found inside another aggregate function
+```
+
+**Cause:** Attempting to nest aggregate functions without proper query structure.
+
+**Solution:**
+
+```sql
+-- Instead of:
+SELECT sum(count(*)) FROM table;
+
+-- Use subquery:
+SELECT sum(cnt) FROM (
+ SELECT count(*) AS cnt
+ FROM table
+ GROUP BY category
+);
+
+-- For finding maximum count per category:
+SELECT max(cnt) FROM (
+ SELECT category, count(*) AS cnt
+ FROM table
+ GROUP BY category
+);
+```
+
+**Scenario 2: Aggregate in WHERE clause**
+
+```text
+Code: 184. DB::Exception: Aggregate function in WHERE clause
+```
+
+**Cause:** Using aggregate function in WHERE clause, which is evaluated before GROUP BY.
+
+**Solution:**
+
+```sql
+-- Instead of:
+SELECT category, count(*) AS cnt
+FROM table
+WHERE count(*) > 10 -- Error
+GROUP BY category;
+
+-- Use HAVING:
+SELECT category, count(*) AS cnt
+FROM table
+GROUP BY category
+HAVING count(*) > 10;
+
+-- Or use subquery with WHERE:
+SELECT * FROM (
+ SELECT category, count(*) AS cnt
+ FROM table
+ GROUP BY category
+)
+WHERE cnt > 10;
+```
+
+**Scenario 3: Missing GROUP BY**
+
+```text
+Code: 215. DB::Exception: Column 'name' is not under aggregate function and not in GROUP BY. (NOT_AN_AGGREGATE)
+```
+
+**Cause:** Selecting a non-aggregated column without GROUP BY when aggregate functions are present. ClickHouse usually reports this as the closely related error 215 `NOT_AN_AGGREGATE`, but the fix is the same.
+
+**Solution:**
+
+```sql
+-- Instead of:
+SELECT name, count(*) FROM users;
+
+-- Add GROUP BY:
+SELECT name, count(*) FROM users GROUP BY name;
+
+-- Or aggregate all columns:
+SELECT any(name), count(*) FROM users;
+
+-- Or use appropriate aggregate:
+SELECT uniq(name), count(*) FROM users;
+```
+
+**Scenario 4: Aggregate in JOIN condition**
+
+```text
+Code: 184. DB::Exception: Aggregate function not allowed in JOIN ON clause
+```
+
+**Cause:** Trying to use aggregate function directly in JOIN condition.
+
+**Solution:**
+
+```sql
+-- Instead of:
+SELECT *
+FROM orders o
+JOIN products p ON o.product_id = max(p.id);
+
+-- Pre-aggregate in subquery:
+SELECT *
+FROM orders o
+JOIN (
+ SELECT category, max(id) AS max_id
+ FROM products
+ GROUP BY category
+) p ON o.product_id = p.max_id;
+```
+
+**Scenario 5: Complex calculations with aggregate results**
+
+```text
+Code: 184. DB::Exception: Aggregate function found inside another aggregate function
+```
+
+**Cause:** Using aggregate result in expressions that get expanded incorrectly.
+
+**Solution:**
+
+```sql
+-- Instead of trying to use aggregate results in complex expressions:
+SELECT
+ argMax(col1, timestamp) AS col1,
+ argMax(col2, timestamp) AS col2,
+ col1 / col2 AS ratio -- May cause issues
+FROM table
+GROUP BY category;
+
+-- Use subquery to separate aggregation from calculation:
+SELECT
+    col1,
+    col2,
+    col1 / col2 AS ratio
+FROM
+(
+    SELECT
+        argMax(col1, timestamp) AS col1,
+        argMax(col2, timestamp) AS col2
+    FROM table
+    GROUP BY category
+);
+```
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/190_SIZES_OF_ARRAYS_DONT_MATCH.md b/docs/troubleshooting/error_codes/190_SIZES_OF_ARRAYS_DONT_MATCH.md
new file mode 100644
index 00000000000..2f105a278ec
--- /dev/null
+++ b/docs/troubleshooting/error_codes/190_SIZES_OF_ARRAYS_DONT_MATCH.md
@@ -0,0 +1,229 @@
+---
+slug: /troubleshooting/error-codes/190_SIZES_OF_ARRAYS_DONT_MATCH
+sidebar_label: '190 SIZES_OF_ARRAYS_DONT_MATCH'
+doc_type: 'reference'
+keywords: ['error codes', 'SIZES_OF_ARRAYS_DONT_MATCH', '190']
+title: '190 SIZES_OF_ARRAYS_DONT_MATCH'
+description: 'ClickHouse error code - 190 SIZES_OF_ARRAYS_DONT_MATCH'
+---
+
+# Error 190: SIZES_OF_ARRAYS_DONT_MATCH
+
+:::tip
+This error occurs when array functions that require equal-length arrays receive arrays of different sizes.
+This commonly happens with functions like `arrayMap`, `arrayZip`, higher-order array functions, and array distance functions (like `arrayL2Distance`) that operate on corresponding elements from multiple arrays.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Array functions with mismatched input arrays**
+ - Using `arrayMap` with lambda functions that require multiple arrays of different lengths
+ - Passing arrays of different sizes to `arrayZip` when it expects equal-length inputs
+ - Using higher-order functions like `arrayFilter`, `arrayExists`, or `arraySplit` with multiple arrays of different sizes
+ - Array distance functions receiving embeddings or vectors of different dimensions
+
+2. **Misleading error messages in recent versions (24.2+)**
+ - In ClickHouse 24.2+, the error message may report incorrect array sizes (e.g., "Argument 2 has size 1, but expected 1")
+ - The reported sizes in the error message may not accurately reflect the actual array dimensions
+ - This makes debugging more difficult on large queries where the actual mismatch is unclear
+
+3. **Version-specific issues with array functions**
+ - After migrating from 24.1.4.20 to 24.2.1.2248, functions like `arrayL2Distance` may fail with this error
+ - Can occur when processing embeddings or vector data with inconsistent dimensions
+ - Bitmap transformation functions may trigger internal array mismatches
+
+4. **Context-dependent evaluation with untuple and arrayZip**
+ - Using `arrayZip(untuple(...))` with certain table engines (ReplicatedMergeTree) may fail
+ - Adding WHERE clauses can trigger unexpected behavior with empty untuple results
+ - Works differently on Memory engine vs. ReplicatedMergeTree
+
+5. **Data quality issues**
+ - Inconsistent data ingestion creating arrays of varying lengths
+ - Nested structures where inner arrays have different sizes across rows
+ - NULL or empty arrays mixed with populated arrays in multi-array operations
+
+## Common solutions {#common-solutions}
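+A minimal reproduction:
+
+```sql
+SELECT arrayMap((x, y) -> x + y, [1, 2, 3], [10, 20]);
+-- Fails with SIZES_OF_ARRAYS_DONT_MATCH because the arrays have lengths 3 and 2
+```
+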
+
+**1. Verify array lengths before calling array functions**
+
+```sql
+-- Option A: Filter to only equal-length arrays
+SELECT
+ arrayMap((x, y) -> x + y, arr1, arr2) AS result
+FROM table
+WHERE length(arr1) = length(arr2);
+
+-- Option B: Pad shorter arrays to match length (keeps all rows)
+SELECT
+ arrayMap((x, y) -> x + y,
+ arrayResize(arr1, greatest(length(arr1), length(arr2)), 0),
+ arrayResize(arr2, greatest(length(arr1), length(arr2)), 0)
+ ) AS result
+FROM table;
+
+-- Option C: Use CASE to handle mismatched rows
+SELECT
+ CASE
+ WHEN length(arr1) = length(arr2)
+ THEN arrayMap((x, y) -> x + y, arr1, arr2)
+ ELSE [] -- or NULL, or some default value
+ END AS result
+FROM table;
+```
+
+**2. Use `arrayZipUnaligned` for arrays of different lengths**
+
+```sql
+-- Instead of arrayZip which requires equal sizes
+SELECT arrayZip(['a'], [1, 2, 3]);
+-- Error: SIZES_OF_ARRAYS_DONT_MATCH
+
+-- Use arrayZipUnaligned which pads with NULLs
+SELECT arrayZipUnaligned(['a'], [1, 2, 3]);
+-- Result: [('a', 1), (NULL, 2), (NULL, 3)]
+
+-- Alternative: manually pad with arrayResize before using arrayZip
+SELECT arrayZip(
+ arrayResize(['a'], 3, ''),
+ [1, 2, 3]
+);
+-- Result: [('a', 1), ('', 2), ('', 3)]
+```
+
+**3. Validate embedding dimensions before distance calculations**
+
+```sql
+-- For vector similarity operations, ensure all embeddings have same dimension
+SELECT
+ id,
+ arrayL2Distance(embedding1, embedding2) AS distance
+FROM table
+WHERE length(embedding1) = length(embedding2)
+ AND length(embedding1) = 384; -- Expected embedding size
+
+-- Or add validation in your data pipeline
+INSERT INTO embeddings_table
+SELECT
+ id,
+ embedding
+FROM source_table
+WHERE length(embedding) = 384; -- Reject invalid embeddings at ingestion
+```
+
+**4. Handle version-specific issues (24.2+ misleading errors)**
+
+```sql
+-- When error messages are misleading, debug with explicit length checks
+SELECT
+ length(arr1) AS arr1_len,
+ length(arr2) AS arr2_len,
+ length(arr3) AS arr3_len
+FROM table
+WHERE NOT (length(arr1) = length(arr2) AND length(arr2) = length(arr3))
+LIMIT 10;
+
+-- This helps identify which arrays actually have mismatched lengths
+-- despite what the error message claims
+```
+
+**5. Fix untuple issues with ReplicatedMergeTree (use PREWHERE or experimental analyzer)**
+
+```sql
+-- If encountering issues with arrayZip(untuple(...)) on ReplicatedMergeTree
+
+-- Option A: Use PREWHERE instead of WHERE
+SELECT
+ app,
+ arrayZip(untuple(sumMap(k.keys, replicate(1, k.keys))))
+FROM test
+PREWHERE c > 0
+GROUP BY app;
+
+-- Option B: Enable experimental analyzer
+SET allow_experimental_analyzer = 1;
+SELECT
+ app,
+ arrayZip(untuple(sumMap(k.keys, replicate(1, k.keys))))
+FROM test
+WHERE c > 0
+GROUP BY app;
+
+-- Option C: Use untuple more explicitly
+SELECT
+    database,
+    arrayZip(untuple(sumMap(([partition_id], [rows])))) AS rows_per_partition
+FROM system.parts
+GROUP BY database;
+```
+
+**6. Handle bitmap transform operations carefully**
+
+```sql
+-- For bitmap functions that can trigger this error due to internal array mismatches,
+-- ensure consistent data types and proper null handling
+SELECT
+ bitmapToArray(bitmapAnd(bitmap1, bitmap2)) AS result
+FROM table
+WHERE bitmap1 IS NOT NULL
+ AND bitmap2 IS NOT NULL
+ AND bitmapCardinality(bitmap1) > 0
+ AND bitmapCardinality(bitmap2) > 0;
+```
+
+**7. Debug complex queries with multiple arrays**
+
+```sql
+-- Break down complex arrayMap operations to identify the mismatch
+WITH
+ arrays_checked AS (
+ SELECT
+ arr1,
+ arr2,
+ arr3,
+ length(arr1) as len1,
+ length(arr2) as len2,
+ length(arr3) as len3
+ FROM source_table
+ )
+SELECT
+ arr1, arr2, arr3,
+ len1, len2, len3,
+ (len1 = len2 AND len2 = len3) AS all_equal
+FROM arrays_checked
+WHERE NOT all_equal;
+```
+
+## Prevention tips {#prevention-tips}
+
+1. **Always validate array dimensions**: Before passing arrays to functions that require equal sizes, check their lengths using `length()` function or add assertions in your queries. Consider adding CHECK constraints on array columns if appropriate.
+2. **Be cautious after version upgrades**: When upgrading ClickHouse (especially to 24.2+), test queries involving array functions as error messages may be misleading and behavior might have changed. Keep a test suite of array operations.
+3. **Use appropriate array functions**: Choose `arrayZipUnaligned` when you need to handle arrays of different lengths, and `arrayZip` only when you're certain arrays are equal-sized.
+4. **Validate embedding data pipelines**: If using vector embeddings, implement validation checks in your data ingestion pipeline to ensure all vectors have consistent dimensions before insertion. Reject or pad vectors at the source.
+5. **Consider table engine differences**: Be aware that some array operations may behave differently on Memory engine vs. ReplicatedMergeTree, especially with complex expressions like `untuple`. Test on the target engine type.
+6. **Add data quality checks**: Implement regular data quality monitoring to detect when arrays of varying lengths are being inserted:
+
+```sql
+-- Monitor array length consistency
+SELECT
+ count() as total_rows,
+ countIf(length(arr1) = length(arr2)) as matching_lengths,
+ (matching_lengths / total_rows) * 100 as match_percentage
+FROM table
+WHERE toDate(inserted_at) = today();
+```
+
+7. **Document expected array sizes**: In table schemas and application code, clearly document the expected sizes of arrays, especially for ML embeddings or fixed-size data structures.
+
+8. **Use materialized columns for validation**: Create materialized columns that compute and store array lengths for quick validation:
+
+```sql
+CREATE TABLE embeddings_table (
+ id UInt64,
+ embedding Array(Float32),
+ embedding_size UInt32 MATERIALIZED length(embedding)
+) ENGINE = MergeTree()
+ORDER BY id;
+
+-- Then you can quickly filter or validate
+SELECT count() FROM embeddings_table WHERE embedding_size != 384;
+```
diff --git a/docs/troubleshooting/error_codes/198_DNS_ERROR.md b/docs/troubleshooting/error_codes/198_DNS_ERROR.md
new file mode 100644
index 00000000000..59d35876086
--- /dev/null
+++ b/docs/troubleshooting/error_codes/198_DNS_ERROR.md
@@ -0,0 +1,372 @@
+---
+slug: /troubleshooting/error-codes/198_DNS_ERROR
+sidebar_label: '198 DNS_ERROR'
+doc_type: 'reference'
+keywords: ['error codes', 'DNS_ERROR', '198']
+title: '198 DNS_ERROR'
+description: 'ClickHouse error code - 198 DNS_ERROR'
+---
+
+# Error 198: DNS_ERROR
+
+:::tip
+This error occurs when ClickHouse cannot resolve a hostname to an IP address through DNS lookup.
+It indicates that DNS resolution failed for a hostname used in cluster configuration, distributed queries, or external connections.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Hostname does not exist**
+ - Hostname is misspelled in configuration
+ - Pod or service not yet created in Kubernetes
+ - Server has been decommissioned or renamed
+ - DNS record not created or has been deleted
+
+2. **DNS server issues**
+ - DNS server is unreachable or down
+ - Network connectivity problems to DNS server
+ - DNS server timeout or slow response
+ - Incorrect DNS server configuration
+
+3. **Kubernetes service discovery problems**
+ - Pods not ready when DNS lookup occurs
+ - Service endpoints are not yet available
+ - Headless service DNS not propagated
+ - CoreDNS or kube-dns issues in cluster
+
+4. **Cluster configuration errors**
+ - Wrong hostname in cluster configuration
+ - Hostname referencing nodes that don't exist
+ - Typo in `remote_servers` configuration
+ - Stale configuration with old hostnames
+
+5. **DNS cache issues**
+ - Cached DNS entries for deleted hosts
+ - DNS TTL expiration causing lookups for removed hosts
+ - ClickHouse DNS cache not updated after infrastructure changes
+
+6. **Network or firewall issues**
+ - Firewall blocking DNS queries (port 53)
+ - Network segmentation preventing DNS access
+ - DNS resolution timeout too short
+
+## Common solutions {#common-solutions}
+
+**1. Verify hostname resolution manually**
+
+```bash
+# Test DNS resolution from ClickHouse server
+nslookup hostname.domain.com
+
+# Or using dig
+dig hostname.domain.com
+
+# Check from ClickHouse pod (Kubernetes)
+kubectl exec -it clickhouse-pod -- nslookup service-name.namespace.svc.cluster.local
+```
+
+**2. Check cluster configuration**
+
+```xml
+<remote_servers>
+    <my_cluster> <!-- cluster name is illustrative -->
+        <shard>
+            <replica>
+                <host>correct-hostname.domain.com</host>
+                <port>9000</port>
+            </replica>
+        </shard>
+    </my_cluster>
+</remote_servers>
+```
+
+**3. Check ClickHouse DNS resolver logs**
+
+```sql
+-- View DNS resolution errors in logs
+SELECT
+ event_time,
+ logger_name,
+ message
+FROM system.text_log
+WHERE logger_name = 'DNSResolver'
+ AND level IN ('Error', 'Warning')
+ AND event_date >= today() - 1
+ORDER BY event_time DESC
+LIMIT 100;
+```
+
+**4. Clear ClickHouse DNS cache**
+
+ClickHouse caches DNS lookups. If hostnames have changed:
+
+```sql
+-- Force reload of cluster configuration
+SYSTEM RELOAD CONFIG;
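+
+-- Drop the internal DNS cache directly
+SYSTEM DROP DNS CACHE;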
+
+-- Or restart ClickHouse server
+```
+
+**5. Fix Kubernetes service issues**
+
+```bash
+# Check if pods are ready
+kubectl get pods -n your-namespace
+
+# Check service endpoints
+kubectl get endpoints service-name -n your-namespace
+
+# Check CoreDNS logs
+kubectl logs -n kube-system -l k8s-app=kube-dns
+
+# Restart CoreDNS if needed
+kubectl rollout restart deployment/coredns -n kube-system
+```
+
+**6. Verify DNS server configuration**
+
+```bash
+# Check /etc/resolv.conf
+cat /etc/resolv.conf
+
+# Test DNS server accessibility
+ping dns-server-ip
+```
+
+**7. Update cluster configuration**
+
+Remove non-existent hosts from configuration:
+
+```xml
+<remote_servers>
+    <my_cluster> <!-- cluster name and hosts are illustrative -->
+        <shard>
+            <replica>
+                <host>existing-host.domain.com</host>
+                <port>9000</port>
+            </replica>
+            <!-- Entries for decommissioned hosts removed -->
+        </shard>
+    </my_cluster>
+</remote_servers>
+```
+
+## Common scenarios {#common-scenarios}
+
+**Scenario 1: Kubernetes pod not ready**
+
+```text
+Error: Cannot resolve host (pod-name.headless-service.namespace.svc.cluster.local),
+error 0: Host not found
+```
+
+**Cause:** Pod not yet started or service endpoints not available.
+
+**Solution:**
+- Wait for pods to become ready
+- Check pod status: `kubectl get pods`
+- Verify headless service has endpoints: `kubectl get endpoints`
+
+**Scenario 2: Stale cluster configuration**
+
+```text
+DNSResolver: Cannot resolve host (old-server-name), error 0: Host not found
+DNSResolver: Cached hosts dropped: old-server-name
+DNSCacheUpdater: IPs of some hosts have been changed. Will reload cluster config
+```
+
+**Cause:** Configuration references servers that have been removed.
+
+**Solution:**
+- Update cluster configuration to remove old hosts
+- Reload configuration: `SYSTEM RELOAD CONFIG`
+- Or restart ClickHouse server
+
+**Scenario 3: DNS server unreachable**
+
+```text
+Error: Cannot resolve host, error: Temporary failure in name resolution
+```
+
+**Cause:** DNS server is down or unreachable.
+
+**Solution:**
+- Check DNS server status
+- Verify network connectivity
+- Test DNS resolution manually: `nslookup hostname`
+- Check `/etc/resolv.conf` for correct DNS servers
+
+**Scenario 4: Embedded Keeper quorum issues**
+
+```text
+DNSResolver: Cannot resolve host (node-3.cluster.local), error 0: Host not found
+```
+
+**Cause:** Keeper nodes not yet available or wrong hostname.
+
+**Solution:**
+- Ensure all Keeper nodes are started
+- Verify Keeper configuration has correct hostnames
+- Check Keeper logs for connectivity issues
+
+## Prevention tips {#prevention-tips}
+
+1. **Use valid hostnames:** Verify hostnames exist before adding to configuration
+2. **Test DNS resolution:** Use `nslookup` or `dig` to test hostnames before configuring
+3. **Monitor DNS health:** Set up monitoring for DNS server availability (see the query after this list)
+4. **Use DNS caching wisely:** Consider DNS TTL settings for dynamic environments
+5. **Keep configuration current:** Remove decommissioned servers from cluster config
+6. **Kubernetes readiness:** Ensure pods are ready before ClickHouse tries to connect
+7. **Use StatefulSets:** In Kubernetes, use StatefulSets for predictable DNS names
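+
+For tip 3, one option is to track DNS resolution failures reported by ClickHouse itself (this sketch assumes `system.text_log` is enabled):
+
+```sql
+-- Count DNS resolution failures per hour over the last day
+SELECT
+    toStartOfHour(event_time) AS hour,
+    count() AS dns_errors
+FROM system.text_log
+WHERE logger_name = 'DNSResolver'
+  AND level IN ('Error', 'Warning')
+  AND event_date >= today() - 1
+GROUP BY hour
+ORDER BY hour DESC;
+```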
+
+## Debugging steps {#debugging-steps}
+
+1. **Identify failing hostname:**
+
+ ```sql
+ SELECT message
+ FROM system.text_log
+ WHERE message LIKE '%Cannot resolve host%'
+ AND event_date >= today()
+ ORDER BY event_time DESC
+ LIMIT 10;
+ ```
+
+2. **Test DNS resolution:**
+
+ ```bash
+ # From ClickHouse server
+ nslookup failing-hostname
+
+ # Check if DNS server responds
+ dig @dns-server-ip failing-hostname
+ ```
+
+3. **Check cluster configuration:**
+
+ ```sql
+ -- View cluster configuration
+ SELECT *
+ FROM system.clusters
+ WHERE cluster = 'your_cluster';
+ ```
+
+4. **Monitor DNS cache updates:**
+
+ ```sql
+ SELECT
+ event_time,
+ message
+ FROM system.text_log
+ WHERE logger_name = 'DNSCacheUpdater'
+ AND event_date >= today()
+ ORDER BY event_time DESC
+ LIMIT 20;
+ ```
+
+5. **Check network connectivity:**
+
+ ```bash
+ # Ping DNS server
+ ping dns-server-ip
+
+ # Check DNS port accessibility
+ nc -zv dns-server-ip 53
+
+ # Test from specific pod (Kubernetes)
+ kubectl exec -it pod-name -- ping dns-server-ip
+ ```
+
+6. **Review Kubernetes events (if applicable):**
+
+ ```bash
+ kubectl get events -n your-namespace --sort-by='.lastTimestamp'
+ ```
+
+## Special considerations {#special-considerations}
+
+**For Kubernetes deployments:**
+- Headless services create DNS entries for each pod
+- StatefulSet pods have predictable DNS names: `pod-name-0.service-name.namespace.svc.cluster.local`
+- DNS may not be immediately available when pods are starting
+- CoreDNS issues can affect entire cluster
+
+**For distributed clusters:**
+- All nodes must be able to resolve each other's hostnames
+- DNS failures on one node can affect distributed queries
+- Consider using IP addresses for critical internal connections (though less flexible)
+
+**For ClickHouse Keeper:**
+- All Keeper nodes must be resolvable by name
+- Keeper quorum formation requires DNS resolution
+- Wrong hostname in Keeper config prevents cluster formation
+
+**DNS cache behavior:**
+- ClickHouse caches DNS lookups to reduce DNS queries
+- Cache is updated periodically (default: every 15 seconds)
+- Failed lookups are also cached temporarily
+- `SYSTEM RELOAD CONFIG` forces DNS cache refresh
+
+## Configuration settings {#configuration-settings}
+
+DNS-related settings in ClickHouse configuration:
+
+```xml
+<clickhouse>
+    <!-- How often the internal DNS cache is refreshed (seconds) -->
+    <dns_cache_update_period>15</dns_cache_update_period>
+
+    <!-- Set to 1 to disable the internal DNS cache entirely -->
+    <disable_internal_dns_cache>0</disable_internal_dns_cache>
+</clickhouse>
+```
+
+## When DNS errors persist {#when-errors-persist}
+
+If DNS errors continue after basic troubleshooting:
+
+1. **Use IP addresses temporarily:**
+ ```xml
+   <remote_servers>
+       <my_cluster> <!-- cluster name is illustrative -->
+           <shard>
+               <replica>
+                   <host>192.168.1.10</host>
+                   <port>9000</port>
+               </replica>
+           </shard>
+       </my_cluster>
+   </remote_servers>
+ ```
+
+2. **Add entries to /etc/hosts:**
+ ```bash
+ # Add static DNS entries
+ echo "192.168.1.10 server-name.domain.com" >> /etc/hosts
+ ```
+
+3. **Configure alternative DNS servers:**
+ ```bash
+ # Edit /etc/resolv.conf
+ nameserver 8.8.8.8
+ nameserver 8.8.4.4
+ ```
+
+4. **Increase DNS timeout:**
+ - Check system DNS resolver timeout settings
+ - Consider increasing if network latency is high
+
+If you're experiencing this error:
+1. Identify which hostname is failing from error logs
+2. Test DNS resolution manually with `nslookup` or `dig`
+3. Verify the hostname exists and is spelled correctly
+4. Check DNS server availability and accessibility
+5. For Kubernetes: ensure pods are ready and service endpoints exist
+6. Update cluster configuration to remove non-existent hosts
+7. Reload ClickHouse configuration or restart server
+8. Monitor DNS cache updates in ClickHouse logs
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/202_TOO_MANY_SIMULTANEOUS_QUERIES.md b/docs/troubleshooting/error_codes/202_TOO_MANY_SIMULTANEOUS_QUERIES.md
new file mode 100644
index 00000000000..7915c67c87c
--- /dev/null
+++ b/docs/troubleshooting/error_codes/202_TOO_MANY_SIMULTANEOUS_QUERIES.md
@@ -0,0 +1,392 @@
+---
+slug: /troubleshooting/error-codes/202_TOO_MANY_SIMULTANEOUS_QUERIES
+sidebar_label: '202 TOO_MANY_SIMULTANEOUS_QUERIES'
+doc_type: 'reference'
+keywords: ['error codes', 'TOO_MANY_SIMULTANEOUS_QUERIES', '202']
+title: '202 TOO_MANY_SIMULTANEOUS_QUERIES'
+description: 'ClickHouse error code - 202 TOO_MANY_SIMULTANEOUS_QUERIES'
+---
+
+# Error 202: TOO_MANY_SIMULTANEOUS_QUERIES
+
+:::tip
+This error occurs when the number of concurrently executing queries exceeds the configured limit for the server or user.
+It indicates that ClickHouse is protecting itself from overload by rejecting new queries until existing queries complete.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Exceeded global query limit**
+ - More queries than [`max_concurrent_queries`](/operations/server-configuration-parameters/settings#max_concurrent_queries) setting allows
+ - Default limit is typically 1000 concurrent queries
+ - Query execution slower than query arrival rate
+
+2. **Exceeded per-user query limit**
+ - User exceeds [`max_concurrent_queries_for_user`](/operations/settings/settings#max_concurrent_queries_for_user) limit
+ - Multiple applications using the same user account
+ - Query backlog from slow-running queries
+
+3. **Query execution bottleneck**
+ - Queries running slower than normal (cold cache, resource contention)
+ - Increased query complexity or data volume
+ - Insufficient server resources causing query queueing
+
+4. **Traffic spike or load test**
+ - Sudden increase in query rate
+ - Load testing without appropriate limits
+ - Retry storms from client applications
+
+5. **Async insert backpressure**
+ - Large number of async insert operations queueing
+ - Inserts counted toward query limit
+ - Async insert processing slower than arrival rate
+
+6. **Poor connection management**
+ - Client opening too many persistent connections
+ - Connection pooling misconfigured
+ - Each connection running queries simultaneously
+
+## Common solutions {#common-solutions}
+
+**1. Implement client-side retry with backoff**
+
+This is the recommended approach rather than just increasing limits:
+
+```python
+# Python example with exponential backoff
+import time
+import random
+
+def execute_with_retry(query, max_retries=5):
+ for attempt in range(max_retries):
+ try:
+ return client.execute(query)
+ except Exception as e:
+ if 'TOO_MANY_SIMULTANEOUS_QUERIES' in str(e) or '202' in str(e):
+ if attempt < max_retries - 1:
+ # Exponential backoff with jitter
+ wait_time = (2 ** attempt) + random.uniform(0, 1)
+ time.sleep(wait_time)
+ continue
+ raise
+```
+
+**2. Check current query limits**
+
+```sql
+-- View current settings
+SELECT
+ name,
+ value,
+ description
+FROM system.settings
+WHERE name IN ('max_concurrent_queries', 'max_concurrent_queries_for_user')
+FORMAT Vertical;
+```
+
+**3. Monitor concurrent query count**
+
+```sql
+-- Check current running queries
+SELECT
+ user,
+ count() AS concurrent_queries
+FROM system.processes
+GROUP BY user
+ORDER BY concurrent_queries DESC;
+
+-- Total concurrent queries
+SELECT count() FROM system.processes;
+```
+
+**4. Increase query limits (if appropriate)**
+
+```sql
+-- Increase global limit (requires server restart in self-managed)
+-- In config.xml:
+<max_concurrent_queries>2000</max_concurrent_queries>
+
+-- Increase per-user limit
+ALTER USER your_user SETTINGS max_concurrent_queries_for_user = 200;
+
+-- Or set at session level (won't help for the limit itself, but for testing)
+SET max_concurrent_queries_for_user = 200;
+```
+
+:::note
+In ClickHouse Cloud, changing `max_concurrent_queries` requires support assistance.
+:::
+
+**5. Optimize slow queries**
+
+```sql
+-- Find slow running queries
+SELECT
+ query_id,
+ user,
+ elapsed,
+ query
+FROM system.processes
+WHERE elapsed > 60
+ORDER BY elapsed DESC;
+
+-- Kill long-running queries if necessary
+KILL QUERY WHERE query_id = 'slow_query_id';
+```
+
+**6. Implement connection pooling**
+
+```python
+# Use connection pooling to reuse connections
+from clickhouse_connect import get_client
+from clickhouse_connect.driver import httputil
+
+# Share one pool manager with a bounded size across clients
+pool_mgr = httputil.get_pool_manager(maxsize=20)
+
+client = get_client(
+    host='your-host',
+    pool_mgr=pool_mgr
+)
+```
+
+**7. Use query priorities**
+
+```sql
+-- Lower priority for less critical queries
+SELECT * FROM large_table
+SETTINGS priority = 10; -- Higher number = lower priority
+
+-- Higher priority for critical queries
+SELECT * FROM important_table
+SETTINGS priority = 1;
+```
+
+## Common scenarios {#common-scenarios}
+
+**Scenario 1: Traffic spike**
+
+```text
+Error: Code: 202, message: Too many simultaneous queries. Maximum: 1000
+```
+
+**Cause:** Sudden increase in query rate from 200 to 1000+ queries/second.
+
+**Solution:**
+- Implement exponential backoff retries in client
+- Scale horizontally (add more replicas)
+- Optimize queries to complete faster
+- If sustained load, increase `max_concurrent_queries`
+
+**Scenario 2: Slow queries creating backlog**
+
+```text
+Error: Too many simultaneous queries
+```
+
+**Cause:** Queries taking 3-4 seconds instead of typical 7ms due to cold cache after restart.
+
+**Solution:**
+- Warm up cache after restarts with key queries
+- Optimize slow queries
+- Implement query timeout limits
+- Use query result cache for repeated queries
+
+**Scenario 3: Per-user limit exceeded**
+
+```text
+Error: Too many simultaneous queries for user 'app_user'
+```
+
+**Cause:** Single user running too many concurrent queries.
+
+**Solution:**
+
+```sql
+-- Increase user-specific limit
+ALTER USER app_user SETTINGS max_concurrent_queries_for_user = 500;
+
+-- Or create separate users for different applications
+CREATE USER app1_user IDENTIFIED BY 'password'
+SETTINGS max_concurrent_queries_for_user = 200;
+```
+
+**Scenario 4: Async inserts causing limit**
+
+```text
+Error: Too many simultaneous queries (mostly async inserts)
+```
+
+**Cause:** High volume async inserts filling query slots.
+
+**Solution:**
+
+```sql
+-- Adjust async insert settings
+SET async_insert = 1;
+SET async_insert_max_data_size = 10485760; -- 10MB
+SET async_insert_busy_timeout_ms = 1000; -- Flush more frequently
+
+-- Or batch inserts on client side
+```
+
+**Scenario 5: Connection pool misconfiguration**
+
+```text
+Error: Too many simultaneous queries
+```
+
+**Cause:** Each client connection running queries, with 1000 open connections.
+
+**Solution:**
+- Reduce connection pool size
+- Reuse connections for multiple queries
+- Close idle connections
+
+## Prevention tips {#prevention-tips}
+
+1. **Implement retry logic:** Always retry with exponential backoff for error 202
+2. **Monitor query concurrency:** Set up alerts for approaching limits
+3. **Optimize query performance:** Faster queries = lower concurrency
+4. **Use appropriate connection pools:** Don't create excessive connections
+5. **Set query timeouts:** Prevent queries from running indefinitely (see the example below)
+6. **Use query priorities:** Differentiate critical from non-critical queries
+7. **Scale horizontally:** Add replicas to distribute load
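+
+For tip 5, a minimal sketch of enforcing an execution-time limit (the user name is illustrative):
+
+```sql
+-- Cancel queries in the current session after 5 minutes
+SET max_execution_time = 300;
+
+-- Or enforce the limit for a specific user
+ALTER USER app_user SETTINGS max_execution_time = 300;
+```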
+
+## Debugging steps {#debugging-steps}
+
+1. **Check current concurrent queries:**
+
+ ```sql
+ SELECT
+ count() AS total_queries,
+ countIf(query_kind = 'Select') AS selects,
+ countIf(query_kind = 'Insert') AS inserts
+ FROM system.processes;
+ ```
+
+2. **Identify query patterns:**
+
+ ```sql
+ SELECT
+ user,
+ query_kind,
+ count() AS query_count,
+ avg(elapsed) AS avg_duration
+ FROM system.processes
+ GROUP BY user, query_kind
+ ORDER BY query_count DESC;
+ ```
+
+3. **Check recent error occurrences:**
+
+ ```sql
+ SELECT
+ toStartOfMinute(event_time) AS minute,
+ count() AS error_count
+ FROM system.query_log
+ WHERE exception_code = 202
+ AND event_date >= today() - 1
+ GROUP BY minute
+ ORDER BY minute DESC
+ LIMIT 50;
+ ```
+
+4. **Analyze query rate trends:**
+
+ ```sql
+ SELECT
+ toStartOfHour(event_time) AS hour,
+ user,
+ count() AS query_count,
+ countIf(exception_code = 202) AS rejected_queries
+ FROM system.query_log
+ WHERE event_date >= today() - 1
+ AND type != 'QueryStart'
+ GROUP BY hour, user
+ ORDER BY hour DESC, query_count DESC;
+ ```
+
+5. **Find slow queries causing backlog:**
+
+ ```sql
+ SELECT
+ query_id,
+ user,
+ elapsed,
+ formatReadableSize(memory_usage) AS memory,
+ query
+ FROM system.processes
+ WHERE elapsed > 30
+ ORDER BY elapsed DESC;
+ ```
+
+6. **Check connection distribution (for clusters):**
+
+ ```sql
+ SELECT
+        hostName() AS host,
+ user,
+ count() AS connection_count
+ FROM clusterAllReplicas('default', system.processes)
+ GROUP BY host, user
+ ORDER BY host, connection_count DESC;
+ ```
+
+## Query limit settings {#query-limit-settings}
+
+```sql
+-- Global limit for all users (server-level setting)
+max_concurrent_queries = 1000
+
+-- Per-user limit
+max_concurrent_queries_for_user = 100
+
+-- For specific query types
+max_concurrent_insert_queries = 100
+max_concurrent_select_queries = 100
+
+-- Related settings
+queue_max_wait_ms = 5000 -- Max time to wait in queue
+```
+
+## Best practices for high-concurrency workloads {#best-practices}
+
+1. **Scale horizontally:**
+ - Add more replicas to distribute load
+ - Use load balancing across replicas
+ - Better than just increasing limits on single instance
+
+2. **Optimize queries:**
+ - Use appropriate indexes and primary keys
+ - Avoid full table scans
+ - Use materialized views for aggregations
+ - Add `LIMIT` clauses where appropriate
+
+3. **Batch operations:**
+ - Combine multiple small queries into fewer large ones
+ - Use `IN` clauses instead of multiple queries
+   - Batch inserts instead of row-by-row (see the sketch after this list)
+
+4. **Use result caching:**
+ ```sql
+ -- Enable query cache for repeated queries
+ SET use_query_cache = 1;
+ SET query_cache_ttl = 300; -- 5 minutes
+ ```
+
+5. **Implement rate limiting:**
+ - Limit query rate on client side
+ - Use queuing systems (e.g., RabbitMQ, Kafka) for request management
+ - Implement circuit breakers
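+
+A minimal sketch of the batching idea from point 3 (the `events` table is illustrative):
+
+```sql
+-- Instead of many single-row inserts, each taking a query slot:
+INSERT INTO events VALUES (1, 'a');
+INSERT INTO events VALUES (2, 'b');
+
+-- Send one batched insert:
+INSERT INTO events VALUES (1, 'a'), (2, 'b'), (3, 'c');
+```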
+
+If you're experiencing this error:
+1. Check if this is a traffic spike or sustained high load
+2. Monitor concurrent query count in `system.processes`
+3. Implement exponential backoff retries in your client
+4. Identify and optimize slow queries causing backlog
+5. Consider horizontal scaling before increasing limits
+6. If sustained high concurrency needed, request limit increase (Cloud) or update config (self-managed)
+7. Review connection pooling configuration
+
+**Related documentation:**
+- [Query complexity settings](/operations/settings/query-complexity)
+- [Server settings](/operations/server-configuration-parameters/settings)
+- [Session settings](/operations/settings/settings)
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/209_SOCKET_TIMEOUT.md b/docs/troubleshooting/error_codes/209_SOCKET_TIMEOUT.md
new file mode 100644
index 00000000000..ca92d6feb2a
--- /dev/null
+++ b/docs/troubleshooting/error_codes/209_SOCKET_TIMEOUT.md
@@ -0,0 +1,419 @@
+---
+slug: /troubleshooting/error-codes/209_SOCKET_TIMEOUT
+sidebar_label: '209 SOCKET_TIMEOUT'
+doc_type: 'reference'
+keywords: ['error codes', 'SOCKET_TIMEOUT', '209']
+title: '209 SOCKET_TIMEOUT'
+description: 'ClickHouse error code - 209 SOCKET_TIMEOUT'
+---
+
+# Error 209: SOCKET_TIMEOUT
+
+:::tip
+This error occurs when a network socket operation (reading or writing) exceeds the configured timeout period.
+It indicates that data could not be sent to or received from a network connection within the allowed time, typically due to network issues, slow client response, or overloaded connections.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Network connectivity issues**
+ - Network latency or packet loss
+ - Unstable network connection
+ - Firewall or network security appliance delays
+ - Network congestion between client and server
+
+2. **Client not reading data fast enough**
+ - Client application blocked or frozen
+ - Client receive buffer full
+ - Client is processing slower than server sending
+ - Client-side timeout shorter than data transfer time
+
+3. **Large result sets**
+ - Query returning huge amount of data
+ - Client unable to consume data at required rate
+ - Network bandwidth insufficient for data volume
+ - No `LIMIT` clause on queries returning millions of rows
+
+4. **Slow or overloaded client**
+ - Client CPU or memory exhausted
+ - Client garbage collection pauses
+ - Client application not responsive
+ - Too many concurrent connections on client
+
+5. **TCP window exhaustion**
+ - Client TCP receive window fills up (window size = 1)
+ - Client not acknowledging packets fast enough
+ - TCP backpressure from slow consumer
+
+6. **Load balancer or proxy timeout**
+ - Intermediate proxy timing out connection
+ - Load balancer idle timeout
+ - Service mesh timeout configuration
+
+## Common solutions {#common-solutions}
+
+**1. Increase timeout settings**
+
+```sql
+-- Server-side settings (in config.xml or user settings)
+<send_timeout>300</send_timeout>
+<receive_timeout>300</receive_timeout>
+
+-- Or set per query
+SET send_timeout = 600;
+SET receive_timeout = 600;
+```
+
+**2. Reduce result set size**
+
+```sql
+-- Add LIMIT to queries
+SELECT * FROM large_table
+LIMIT 10000;
+
+-- Use pagination
+SELECT * FROM large_table
+ORDER BY id
+LIMIT 10000 OFFSET 0;
+
+-- Filter data more aggressively
+SELECT * FROM large_table
+WHERE date >= today() - INTERVAL 1 DAY;
+```
+
+**3. Use compression**
+
+```sql
+-- Enable compression to reduce data transfer
+SET enable_http_compression = 1;
+SET http_zlib_compression_level = 3;
+```
+
+For client connections:
+
+```python
+# Python clickhouse-connect
+import clickhouse_connect
+
+client = clickhouse_connect.get_client(
+ host='your-host',
+ compress=True
+)
+```
+
+**4. Optimize client data consumption**
+
+```python
+# Stream results instead of loading all into memory
+# Python example
+with client.query_rows_stream(query) as stream:
+    for row in stream:
+        process_row(row)  # Process each row as it arrives
+
+# Don't do this for large results:
+# result = client.query(query) # Loads all into memory
+```
+
+**5. Check network connectivity**
+
+```bash
+# Test network latency
+ping your-clickhouse-server
+
+# Check for packet loss
+mtr your-clickhouse-server
+
+# Test bandwidth
+iperf3 -c your-clickhouse-server
+
+# Check TCP settings
+netstat -an | grep ESTABLISHED
+```
+
+**6. Configure TCP keep-alive**
+
+```xml
+<!-- In config.xml -->
+<tcp_keep_alive_timeout>300</tcp_keep_alive_timeout>
+<keep_alive_timeout>10</keep_alive_timeout>
+```
+
+Client-side (Linux):
+
+```bash
+# Configure TCP keep-alive
+sysctl -w net.ipv4.tcp_keepalive_time=300
+sysctl -w net.ipv4.tcp_keepalive_intvl=60
+sysctl -w net.ipv4.tcp_keepalive_probes=9
+```
+
+**7. Increase client buffer sizes**
+
+```java
+// JDBC example
+Properties props = new Properties();
+props.setProperty("socket_timeout", "300000"); // 5 minutes
+props.setProperty("socket_rcvbuf", "524288"); // 512KB receive buffer
+props.setProperty("socket_sndbuf", "524288"); // 512KB send buffer
+```
+
+## Common scenarios {#common-scenarios}
+
+**Scenario 1: Timeout writing to socket**
+
+```text
+Error: Code: 209. DB: Timeout exceeded while writing to socket
+```
+
+**Cause:** Client not consuming data fast enough, TCP window exhausted.
+
+**Solution:**
+- Add `LIMIT` to reduce result size
+- Enable compression
+- Increase `send_timeout` setting
+- Optimize client to consume data faster
+- Check client isn't blocked or frozen
+
+**Scenario 2: Distributed query socket timeout**
+
+```text
+Error: Timeout exceeded while writing to socket (distributed query)
+```
+
+**Cause:** Remote shard not responding or network issue between nodes.
+
+**Solution:**
+
+```sql
+-- Increase distributed query timeouts
+SET send_timeout = 600;
+SET receive_timeout = 600;
+SET connect_timeout_with_failover_ms = 5000;
+```
+
+**Scenario 3: Client receive window = 1**
+
+```text
+TCP window size drops to 1 byte, then timeout
+```
+
+**Cause:** Client application stopped reading from socket.
+
+**Solution:**
+- Check client application health
+- Ensure client is actively consuming results
+- Verify client has sufficient resources (CPU, memory)
+- Add rate limiting on server side
+
+**Scenario 4: Network problems**
+
+```text
+Error: Timeout exceeded (with network packet loss visible in tcpdump)
+```
+
+**Cause:** Network connectivity issues, packet loss, or routing problems.
+
+**Solution:**
+- Diagnose network with `ping`, `traceroute`, `mtr`
+- Check firewall rules and network ACLs
+- Verify network bandwidth is sufficient
+- Check for network security appliances causing delays
+
+**Scenario 5: External network problems**
+
+```text
+Error: Code 209 timeout writing to socket
+```
+
+**Cause:** Issues with internet connectivity or cloud provider network.
+
+**Solution:**
+- Check cloud provider status page
+- Verify VPC/network configuration
+- Test connectivity from multiple locations
+- Contact network or cloud support
+
+## Prevention tips {#prevention-tips}
+
+1. **Set appropriate timeouts:** Match client and server timeout settings
+2. **Use LIMIT clauses:** Prevent queries from returning too much data
+3. **Enable compression:** Reduce network bandwidth requirements
+4. **Monitor network health:** Track latency and packet loss
+5. **Optimize queries:** Return only needed data
+6. **Stream results:** Process data as it arrives, don't buffer all
+7. **Configure TCP properly:** Set appropriate keep-alive and buffer sizes
+
+## Debugging steps {#debugging-steps}
+
+1. **Check recent socket timeout errors:**
+
+ ```sql
+ SELECT
+ event_time,
+ query_id,
+ user,
+ exception,
+ query
+ FROM system.query_log
+ WHERE exception_code = 209
+ AND event_date >= today() - 1
+ ORDER BY event_time DESC
+ LIMIT 10;
+ ```
+
+2. **Check current timeout settings:**
+
+ ```sql
+ SELECT
+ name,
+ value
+ FROM system.settings
+ WHERE name LIKE '%timeout%' OR name LIKE '%send%' OR name LIKE '%receive%';
+ ```
+
+3. **Monitor active connections:**
+
+ ```sql
+ SELECT
+ user,
+ address,
+ elapsed,
+ formatReadableSize(memory_usage) AS memory,
+ query
+ FROM system.processes
+ WHERE elapsed > 60
+ ORDER BY elapsed DESC;
+ ```
+
+4. **Check network statistics:**
+
+ ```bash
+ # On server
+ netstat -s | grep -i timeout
+ netstat -s | grep -i retrans
+
+ # Check TCP connections
+ ss -tn | grep ESTAB
+ ```
+
+5. **Capture network traffic (if needed):**
+
+ ```bash
+ # Capture packets for analysis
+ tcpdump -i any -w socket_timeout.pcap host client-ip
+
+ # Analyze with wireshark or tcpdump
+ tcpdump -r socket_timeout.pcap -nn
+ ```
+
+6. **Check query result size:**
+
+ ```sql
+ SELECT
+ query_id,
+ formatReadableSize(result_bytes) AS result_size,
+ result_rows,
+ query_duration_ms,
+ query
+ FROM system.query_log
+ WHERE query_id = 'your_query_id';
+ ```
+
+## Special considerations {#special-considerations}
+
+**For HTTP interface:**
+- HTTP connections can be affected by load balancer timeouts
+- Check `http_send_timeout` and `http_receive_timeout` settings
+- Load balancers may have their own timeout configurations
+
+**For distributed queries:**
+- Timeout can occur when sending results between nodes
+- Each hop adds latency
+- Use `send_timeout` and `receive_timeout` for inter-node communication
+
+**For large result sets:**
+- Consider using `LIMIT` and pagination
+- Use `SELECT` only needed columns, not `SELECT *`
+- Apply filters to reduce data volume
+- Consider materialized views for aggregations
+
+**TCP window size = 1:**
+- This is a strong indicator that the client stopped reading
+- The server has data to send, but the client buffer is full
+- Usually client-side issue, not ClickHouse issue
+
+## Timeout-related settings {#timeout-settings}
+
+```xml
+<clickhouse>
+    <!-- Native protocol send/receive timeouts (seconds) -->
+    <send_timeout>300</send_timeout>
+    <receive_timeout>300</receive_timeout>
+
+    <!-- HTTP interface timeouts (seconds) -->
+    <http_send_timeout>1800</http_send_timeout>
+    <http_receive_timeout>1800</http_receive_timeout>
+
+    <!-- TCP keep-alive timeout (seconds) -->
+    <tcp_keep_alive_timeout>300</tcp_keep_alive_timeout>
+
+    <!-- Connection timeouts -->
+    <connect_timeout>10</connect_timeout>
+    <connect_timeout_with_failover_ms>50</connect_timeout_with_failover_ms>
+</clickhouse>
+```
+
+Query-level settings:
+
+```sql
+SET send_timeout = 600; -- Timeout for sending data (seconds)
+SET receive_timeout = 600; -- Timeout for receiving data (seconds)
+SET tcp_keep_alive_timeout = 300; -- TCP keep-alive timeout (seconds)
+```
+
+## Client-side configuration {#client-configuration}
+
+**Python (clickhouse-connect):**
+
+```python
+import clickhouse_connect
+
+client = clickhouse_connect.get_client(
+ host='your-host',
+ send_receive_timeout=300, # Seconds
+ compress=True
+)
+```
+
+**JDBC:**
+
+```java
+Properties props = new Properties();
+props.setProperty("socket_timeout", "300000"); // Milliseconds
+props.setProperty("connect_timeout", "10000");
+```
+
+**HTTP:**
+
+```bash
+# Set timeout in curl
+curl --max-time 300 'http://clickhouse:8123/?query=SELECT...'
+```
+
+## Distinguishing from `TIMEOUT_EXCEEDED (159)` {#vs-timeout-exceeded}
+
+- **`SOCKET_TIMEOUT (209)`:** Network-level timeout during data transfer
+- **`TIMEOUT_EXCEEDED (159)`:** Query execution time limit exceeded
+
+`SOCKET_TIMEOUT` is about network I/O, while `TIMEOUT_EXCEEDED` is about query execution time.
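+
+To see which of the two you are actually hitting, group recent failures by exception code:
+
+```sql
+-- Count recent SOCKET_TIMEOUT (209) vs TIMEOUT_EXCEEDED (159) failures
+SELECT
+    exception_code,
+    count() AS failures
+FROM system.query_log
+WHERE exception_code IN (159, 209)
+  AND event_date >= today() - 1
+GROUP BY exception_code;
+```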
+
+If you're experiencing this error:
+1. Check if client is actively consuming results
+2. Verify network connectivity and latency
+3. Add `LIMIT` to queries returning large results
+4. Enable compression to reduce bandwidth usage
+5. Increase `send_timeout` and `receive_timeout` if appropriate
+6. Monitor client application health and resource usage
+7. Check for TCP window size dropping to 1 (indicates client not reading)
+8. Verify no intermediate proxies or load balancers timing out
+9. Test with simpler/smaller queries to isolate the issue
+
+**Related documentation:**
+- [ClickHouse settings](/operations/settings/settings)
+- [Server configuration parameters](/operations/server-configuration-parameters/settings)
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/210_NETWORK_ERROR.md b/docs/troubleshooting/error_codes/210_NETWORK_ERROR.md
new file mode 100644
index 00000000000..df27d8d7c6d
--- /dev/null
+++ b/docs/troubleshooting/error_codes/210_NETWORK_ERROR.md
@@ -0,0 +1,509 @@
+---
+slug: /troubleshooting/error-codes/210_NETWORK_ERROR
+sidebar_label: '210 NETWORK_ERROR'
+doc_type: 'reference'
+keywords: ['error codes', 'NETWORK_ERROR', '210']
+title: '210 NETWORK_ERROR'
+description: 'ClickHouse error code - 210 NETWORK_ERROR'
+---
+
+# Error 210: NETWORK_ERROR
+
+:::tip
+This error occurs when network communication fails due to connection issues, broken connections, or other I/O problems.
+It indicates that data could not be sent or received over the network, typically because the connection was closed, refused, or reset.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Broken pipe (client disconnected)**
+ - Client closed connection while the server was sending data
+ - Client crashed or was terminated during query execution
+ - Client timeout shorter than query duration
+ - Client application restarted or connection pool recycled connection
+
+2. **Connection refused**
+ - Target server not listening on specified port
+ - Server pod not ready or being restarted
+ - Firewall blocking connection
+ - Wrong hostname or port in configuration
+
+3. **Socket not connected**
+ - Client disconnected prematurely
+ - Connection closed before response could be sent
+ - Network interruption during data transfer
+ - Client-side connection timeout
+
+4. **Connection reset by peer**
+ - Remote side forcibly closed connection (TCP RST)
+ - Network equipment reset connection
+ - Remote server crashed or restarted
+ - Firewall or security device dropped connection
+
+5. **Distributed query failures**
+ - Cannot connect to remote shard in cluster
+ - Network partition between cluster nodes
+ - Remote node down or unreachable
+ - All connection attempts to replicas failed
+
+6. **Network infrastructure issues**
+ - Load balancer health check failures
+ - Pod restarts or rolling updates
+ - Network policy blocking traffic
+ - DNS resolution followed by connection failure
+
+## Common solutions {#common-solutions}
+
+**1. Check if client disconnected early**
+
+For "broken pipe" errors:
+
+```sql
+-- Check query duration and when error occurred
+SELECT
+ query_id,
+ query_start_time,
+ event_time,
+ query_duration_ms / 1000 AS duration_seconds,
+ exception,
+ query
+FROM system.query_log
+WHERE exception_code = 210
+ AND exception LIKE '%Broken pipe%'
+ORDER BY event_time DESC
+LIMIT 10;
+```
+
+**Cause:** Query took longer than client timeout.
+
+**Solution:**
+- Increase client-side timeout
+- Optimize query to run faster
+- Add `LIMIT` to reduce result size
+
+**2. Verify server availability**
+
+For "connection refused" errors:
+
+```bash
+# Test if server is listening
+telnet server-hostname 9000
+
+# Or using nc
+nc -zv server-hostname 9000
+
+# Check pod status (Kubernetes)
+kubectl get pods -n your-namespace
+
+# Check service endpoints
+kubectl get endpoints service-name -n your-namespace
+```
+
+**3. Check cluster connectivity**
+
+```sql
+-- Test connection to all cluster nodes
+SELECT
+ hostName() AS host,
+ count() AS test
+FROM clusterAllReplicas('your_cluster', system.one);
+
+-- Check cluster configuration
+SELECT *
+FROM system.clusters
+WHERE cluster = 'your_cluster';
+```
+
+**4. Increase client timeout**
+
+```python
+# Python clickhouse-connect
+import clickhouse_connect
+
+client = clickhouse_connect.get_client(
+    host='your-host',
+    send_receive_timeout=3600,  # 1 hour
+    connect_timeout=30
+)
+```
+
+```java
+// JDBC
+Properties props = new Properties();
+props.setProperty("socket_timeout", "3600000"); // 1 hour in ms
+```
+
+**5. Check for pod restarts**
+
+```bash
+# Check pod restart history (Kubernetes)
+kubectl get pods -n your-namespace
+
+# Check events for issues
+kubectl get events -n your-namespace --sort-by='.lastTimestamp'
+
+# Check pod logs
+kubectl logs -n your-namespace pod-name --previous
+```
+
+**6. Verify network policies and firewall**
+
+```bash
+# Test connectivity between nodes
+ping remote-server
+
+# Check port accessibility
+telnet remote-server 9000
+
+# Verify firewall rules (self-managed)
+iptables -L -n | grep 9000
+```
+
+**7. Handle gracefully in application**
+
+```python
+# Implement retry logic for network errors
+def execute_with_retry(query, max_retries=3):
+ for attempt in range(max_retries):
+ try:
+ return client.execute(query)
+ except Exception as e:
+ if 'NETWORK_ERROR' in str(e) or '210' in str(e):
+ if attempt < max_retries - 1:
+ time.sleep(2 ** attempt) # Exponential backoff
+ continue
+ raise
+```
+
+## Common scenarios {#common-scenarios}
+
+**Scenario 1: Broken pipe during long query**
+
+```text
+Error: I/O error: Broken pipe, while writing to socket
+```
+
+**Cause:** Client disconnected after 3+ hours; query completed on server but client was gone.
+
+**Solution:**
+- Increase client timeout to match expected query duration
+- Set realistic timeout expectations
+- For very long queries (>1 hour), consider using `INSERT INTO ... SELECT` to materialize results
+- ClickHouse Cloud gracefully terminates connections with 1-hour timeout during drains
+
+**Scenario 2: Connection refused in distributed query**
+
+```text
+Error: Connection refused (server-name:9000)
+Code: 279. ALL_CONNECTION_TRIES_FAILED
+```
+
+**Cause:** Cannot connect to remote shard; pod may be restarting.
+
+**Solution:**
+
+```sql
+-- Check if nodes are accessible
+SELECT *
+FROM clusterAllReplicas('default', system.one);
+
+-- Verify all replicas are up
+SELECT
+ shard_num,
+ replica_num,
+ host_name,
+ port
+FROM system.clusters
+WHERE cluster = 'default';
+```
+
+**Scenario 3: Socket not connected after query completes**
+
+```text
+Error: Poco::Exception. Code: 1000, e.code() = 107
+Net Exception: Socket is not connected
+```
+
+**Cause:** Client closed connection before server could send response.
+
+**Solution:**
+- This often appears in logs after successful query completion
+- Usually harmless - query already processed successfully
+- Client may have closed connection early due to timeout or crash
+- Check client logs for why disconnect occurred
+
+**Scenario 4: Connection reset by peer**
+
+```text
+Error: Connection reset by peer (code: 104)
+```
+
+**Cause:** Remote side forcibly terminated connection.
+
+**Solution:**
+- Check if remote server crashed or restarted
+- Verify network stability
+- Check firewall or security appliance logs
+- Test with simpler queries
+
+**Scenario 5: All connection tries failed**
+
+```text
+Error: Code: 279. All connection tries failed
+Multiple Code: 210. Connection refused attempts
+```
+
+**Cause:** Cannot establish connection to any replica.
+
+**Solution:**
+- Check if all cluster nodes are down
+- Verify network connectivity
+- Check ClickHouse server status
+- Review cluster configuration
+
+## Prevention tips {#prevention-tips}
+
+1. **Set appropriate client timeouts:** Match client timeout to expected query duration
+2. **Handle connection errors:** Implement retry logic with exponential backoff
+3. **Monitor network health:** Track connection failures and latency
+4. **Use connection pooling:** Maintain healthy connection pools
+5. **Plan for restarts:** Design applications to handle temporary connection failures
+6. **Keep connections alive:** Configure TCP keep-alive appropriately
+7. **Optimize queries:** Reduce query execution time to avoid timeout issues
+
+## Debugging steps {#debugging-steps}
+
+1. **Identify error type:**
+
+ ```sql
+ SELECT
+ event_time,
+ query_id,
+ exception,
+ query_duration_ms
+ FROM system.query_log
+ WHERE exception_code = 210
+ AND event_date >= today() - 1
+ ORDER BY event_time DESC
+ LIMIT 20;
+ ```
+
+2. **Check for specific error patterns:**
+
+ ```sql
+ SELECT
+ countIf(exception LIKE '%Broken pipe%') AS broken_pipe,
+ countIf(exception LIKE '%Connection refused%') AS conn_refused,
+ countIf(exception LIKE '%Socket is not connected%') AS socket_not_conn,
+ countIf(exception LIKE '%Connection reset%') AS conn_reset
+ FROM system.query_log
+ WHERE exception_code = 210
+ AND event_date >= today() - 1;
+ ```
+
+3. **Check for pod restarts (Kubernetes):**
+
+ ```bash
+ # Check restart count
+ kubectl get pods -n your-namespace
+
+ # Check recent events
+ kubectl get events -n your-namespace \
+ --sort-by='.lastTimestamp' | grep -i restart
+ ```
+
+4. **Monitor distributed query failures:**
+
+ ```sql
+ SELECT
+ event_time,
+ query_id,
+ exception
+ FROM system.query_log
+ WHERE exception LIKE '%ALL_CONNECTION_TRIES_FAILED%'
+ AND event_date >= today() - 1
+ ORDER BY event_time DESC;
+ ```
+
+5. **Check network connectivity:**
+
+ ```bash
+ # Test connection to ClickHouse
+ telnet your-server 9000
+
+ # Check for packet loss
+ ping -c 100 your-server
+
+ # Trace network route
+ traceroute your-server
+ ```
+
+6. **Review query duration vs client timeout:**
+
+ ```sql
+ SELECT
+ query_id,
+ query_duration_ms / 1000 AS duration_sec,
+ exception
+ FROM system.query_log
+ WHERE query_id = 'your_query_id';
+ ```
+
+## Special considerations {#special-considerations}
+
+**For "broken pipe" errors:**
+- Usually indicates client disconnected
+- Query may have completed successfully before disconnect
+- Common with long-running queries and short client timeouts
+- Often not a server-side issue
+
+**For "connection refused" errors:**
+- Server not ready to accept connections
+- Common during pod restarts or scaling
+- Temporary and usually resolved by retry
+- Check if server is actually running
+
+**For "socket not connected" errors:**
+- Appears in `ServerErrorHandler` logs
+- Often logged after query already completed
+- Client disconnected before server could send final response
+- Usually benign if query completed successfully
+
+**For distributed queries:**
+- Each shard connection can fail independently
+- `ALL_CONNECTION_TRIES_FAILED` means no replicas are accessible
+- Check network between cluster nodes
+- Verify all nodes are running
+
+## Common error subcategories {#error-subcategories}
+
+**Broken pipe (errno 32):**
+- Client closed write end of connection
+- Server trying to send data to closed socket
+- Usually client-side timeout or crash
+
+**Connection refused (errno 111):**
+- No process listening on target port
+- Server not started or port closed
+- Firewall blocking connection
+- Wrong hostname or port
+
+**Socket not connected (errno 107):**
+- Operation on socket that isn't connected
+- Client disconnected before operation
+- Premature connection close
+
+**Connection reset by peer (errno 104):**
+- Remote side sent TCP RST
+- Forceful connection termination
+- Often due to firewall or remote crash
+
+## Network error settings {#network-settings}
+
+```xml
+<clickhouse>
+    <!-- Connection timeouts -->
+    <connect_timeout>10</connect_timeout>
+    <connect_timeout_with_failover_ms>50</connect_timeout_with_failover_ms>
+
+    <!-- Send/receive timeouts (seconds) -->
+    <send_timeout>300</send_timeout>
+    <receive_timeout>300</receive_timeout>
+
+    <!-- TCP keep-alive timeout (seconds) -->
+    <tcp_keep_alive_timeout>300</tcp_keep_alive_timeout>
+
+    <!-- Maximum number of accepted connections -->
+    <max_connections>1024</max_connections>
+</clickhouse>
+```
+
+## Handling in distributed queries {#distributed-queries}
+
+For distributed queries with failover:
+
+```sql
+-- Use max_replica_delay_for_distributed_queries for fallback
+SET max_replica_delay_for_distributed_queries = 300;
+
+-- Configure connection attempts
+SET connect_timeout_with_failover_ms = 1000;
+SET connections_with_failover_max_tries = 3;
+
+-- Skip unavailable shards
+SET skip_unavailable_shards = 1;
+```
+
+## Client-side best practices {#client-best-practices}
+
+1. **Set realistic timeouts:**
+ ```python
+ # Match timeout to expected query duration
+ client = get_client(
+ send_receive_timeout=query_expected_duration + 60
+ )
+ ```
+
+2. **Implement retry logic:**
+ ```python
+ # Retry on network errors
+ @retry(stop=stop_after_attempt(3),
+ wait=wait_exponential(multiplier=1, min=2, max=10),
+ retry=retry_if_exception_type(NetworkError))
+ def execute_query(query):
+ return client.execute(query)
+ ```
+
+3. **Handle long-running queries:**
+ ```sql
+ -- For queries > 1 hour, materialize results
+ CREATE TABLE result_table ENGINE = MergeTree() ORDER BY id AS
+ SELECT * FROM long_running_query;
+
+ -- Then query the result table
+ SELECT * FROM result_table;
+ ```
+
+4. **Monitor connection health:**
+ - Log connection errors on client side
+ - Track retry counts
+ - Alert on sustained network errors
+
+## Distinguishing from other errors {#distinguishing-errors}
+
+- **`NETWORK_ERROR (210)`:** Network/socket I/O failure
+- **`SOCKET_TIMEOUT (209)`:** Timeout during socket operation
+- **`TIMEOUT_EXCEEDED (159)`:** Query execution time limit
+- **`ALL_CONNECTION_TRIES_FAILED (279)`:** All connection attempts failed
+
+`NETWORK_ERROR` is specifically about connection failures and broken sockets.
+
+## Query patterns that commonly trigger this {#common-patterns}
+
+1. **Long-running `SELECT` queries:**
+ - Query duration exceeds client timeout
+ - Results in broken pipe when server tries to send results
+
+2. **Large data transfers:**
+ - Client buffer overflows
+ - Client application can't keep up with data rate
+
+3. **`INSERT INTO ... SELECT FROM s3()`:**
+ - Long-running imports from S3
+ - Client timeout during multi-hour operations
+
+4. **Distributed queries:**
+ - Connection to remote shards fails
+ - Network issues between cluster nodes
+
+If you're experiencing this error:
+1. Check the specific error message (broken pipe, connection refused, etc.)
+2. For "broken pipe": verify client timeout settings and query duration
+3. For "connection refused": check if the server is running and accessible
+4. For "socket not connected": usually harmless if query completed
+5. Test network connectivity between client and server
+6. Check for pod restarts or infrastructure changes (Cloud/Kubernetes)
+7. Implement retry logic for transient network failures
+8. For very long queries (>1 hour), consider alternative patterns
+9. Monitor frequency - occasional errors are normal, sustained errors need investigation
+
+**Related documentation:**
+- [ClickHouse server settings](/operations/server-configuration-parameters/settings)
+- [Distributed query settings](/operations/settings/settings#distributed-queries)
diff --git a/docs/troubleshooting/error_codes/215_NOT_AN_AGGREGATE.md b/docs/troubleshooting/error_codes/215_NOT_AN_AGGREGATE.md
new file mode 100644
index 00000000000..2a6956007fe
--- /dev/null
+++ b/docs/troubleshooting/error_codes/215_NOT_AN_AGGREGATE.md
@@ -0,0 +1,359 @@
+---
+slug: /troubleshooting/error-codes/215_NOT_AN_AGGREGATE
+sidebar_label: '215 NOT_AN_AGGREGATE'
+doc_type: 'reference'
+keywords: ['error codes', 'NOT_AN_AGGREGATE', '215', 'GROUP BY', 'aggregate function']
+title: '215 NOT_AN_AGGREGATE'
+description: 'ClickHouse error code - 215 NOT_AN_AGGREGATE'
+---
+
+# Error 215: NOT_AN_AGGREGATE
+
+:::tip
+This error occurs when a column in a `SELECT` statement with `GROUP BY` is not wrapped in an aggregate function and is not listed in the GROUP BY clause. Every column in the SELECT list must either be aggregated (e.g., using SUM, COUNT, MAX) or be part of the GROUP BY clause.
+:::
+
+## Quick reference {#quick-reference}
+
+**Most common fixes:**
+
+```sql
+-- Error: 'name' not in GROUP BY
+SELECT user_id, name, COUNT(*) FROM users GROUP BY user_id;
+
+-- Fix 1: Add to GROUP BY
+SELECT user_id, name, COUNT(*) FROM users GROUP BY user_id, name;
+
+-- Fix 2: Use aggregate function
+SELECT user_id, any(name), COUNT(*) FROM users GROUP BY user_id;
+```
+
+**If you're getting errors after upgrading to 22.8+ or 23.5+:**
+
+```sql
+-- Fails in 22.8+ due to alias reuse
+SELECT max(b) AS b, b AS b1 FROM t GROUP BY a;
+
+-- Quick fix: Use different alias names
+SELECT max(b) AS max_b, max_b AS b1 FROM t GROUP BY a;
+
+-- Or use subquery
+SELECT *, col1 / col2 AS result
+FROM (SELECT argMax(col1, ts) AS col1, argMax(col2, ts) AS col2 FROM t GROUP BY key);
+
+-- Or enable experimental analyzer (23.x+)
+SELECT max(b) AS b, b AS b1 FROM t GROUP BY a SETTINGS allow_experimental_analyzer = 1;
+```
+
+**For materialized views in 24.11+:**
+
+```sql
+-- Fails in 24.11+
+GROUP BY 1, 2, 3 -- Positional arguments
+
+-- Fix: Use explicit column names
+GROUP BY driver_id, creation_date_hour, operation_area
+```
+
+## Most common causes {#most-common-causes}
+
+1. **Column missing from GROUP BY clause**
+ - Selecting columns that are neither aggregated nor in GROUP BY
+ - Referencing columns from subqueries that aren't properly grouped
+ - Using alias substitution incorrectly with GROUP BY
+
+2. **Alias reuse conflicts (22.8+ regression)**
+ - Since ClickHouse 22.3.16 and 22.8+, alias reuse behavior changed
+ - When an aggregate result uses the same alias as a source column, referencing it twice causes issues
+ - Example: `SELECT max(b) AS b, b AS b1 FROM t GROUP BY a` fails because `b` is replaced with `max(b)`
+ - This is a known backward compatibility issue introduced in PR #42827
+
+3. **GROUPING SETS and complex grouping**
+ - Using columns in ORDER BY that aren't in all grouping sets
+ - GROUPING SETS queries with columns that don't appear in every set
+ - Window functions combined with GROUP BY incorrectly
+
+4. **Positional GROUP BY with materialized views (24.11+)**
+ - Using positional arguments (`GROUP BY 1, 2, 3`) in materialized views
+ - Works with `enable_positional_arguments=1` in regular queries but fails in mat views
+ - Affects version 24.11+ specifically
+
+5. **Conditional expressions referencing ungrouped columns**
+ - Using CASE/IF expressions that reference columns not in GROUP BY
+ - Example: `CASE WHEN rank > 100 THEN column ELSE NULL END` where `rank` isn't grouped
+ - Common with window functions in subqueries
+
+6. **Tuple grouping issues**
+ - Grouping by tuple `(col1, col2)` instead of individual columns
+ - ClickHouse may not properly recognize injective function optimizations
+ - Example: `GROUP BY (iteration, centroid)` vs `GROUP BY iteration, centroid`
+
+## Common solutions {#common-solutions}
+
+**1. Add missing columns to GROUP BY**
+
+```sql
+-- Error: column 'name' not in GROUP BY
+SELECT
+ user_id,
+ name,
+ COUNT(*) AS total
+FROM users
+GROUP BY user_id;
+
+-- Fix: add name to GROUP BY
+SELECT
+ user_id,
+ name,
+ COUNT(*) AS total
+FROM users
+GROUP BY user_id, name;
+
+-- Or use aggregate function if you want one value per user_id
+SELECT
+ user_id,
+ any(name) AS name, -- or min(name), max(name), etc.
+ COUNT(*) AS total
+FROM users
+GROUP BY user_id;
+```
+
+**2. Handle alias reuse conflicts (22.8+ backward compatibility issue)**
+
+```sql
+-- Fails in 22.8+ due to alias substitution
+SELECT
+ argMax(col1, timestamp) AS col1,
+ argMax(col2, timestamp) AS col2,
+ col1 / col2 AS final_col -- Error: col1 becomes argMax(argMax(...))
+FROM table
+GROUP BY col3;
+
+-- Solution 1: Use different aliases
+SELECT
+ argMax(col1, timestamp) AS max_col1,
+ argMax(col2, timestamp) AS max_col2,
+ max_col1 / max_col2 AS final_col
+FROM table
+GROUP BY col3;
+
+-- Solution 2: Use subquery
+SELECT
+ *,
+ col1 / col2 AS final_col
+FROM (
+ SELECT
+ argMax(col1, timestamp) AS col1,
+ argMax(col2, timestamp) AS col2
+ FROM table
+ GROUP BY col3
+);
+
+-- Solution 3: Use type cast to force different identifier
+SELECT
+ max(b) AS b,
+ b::Int8 AS b1 -- Cast creates different node
+FROM t
+GROUP BY a;
+
+-- Solution 4: Enable experimental analyzer (works in 23.x+)
+SELECT
+ argMax(col1, timestamp) AS col1,
+ argMax(col2, timestamp) AS col2,
+ col1 / col2 AS final_col
+FROM table
+GROUP BY col3
+SETTINGS allow_experimental_analyzer = 1;
+```
+
+**3. Use prefer_column_name_to_alias setting**
+
+```sql
+-- May help with some alias conflicts
+SELECT
+ max(b) AS b,
+ b AS b1
+FROM t
+GROUP BY a
+SETTINGS prefer_column_name_to_alias = 1;
+
+-- Works for MySQL compatibility issues
+SELECT
+ CASE WHEN `$RANK_1` > 2500 THEN 1 ELSE 0 END AS `isotherrow_1`,
+ COUNT(*) AS `$otherbucket_group_count`
+FROM (
+ SELECT
+ COUNT(*) AS `count`,
+ DENSE_RANK() OVER (ORDER BY `radio` DESC) AS `$RANK_1`
+ FROM `cell_towers`
+ GROUP BY `radio`
+)
+SETTINGS prefer_column_name_to_alias = 1;
+```
+
+**4. Fix tuple grouping syntax**
+
+```sql
+-- Error: grouping by tuple
+SELECT
+ iteration,
+ centroid,
+ avgForEachState(v) AS vector
+FROM temp
+GROUP BY (iteration, centroid); -- Tuple notation
+
+-- Fix: group by individual columns
+SELECT
+ iteration,
+ centroid,
+ avgForEachState(v) AS vector
+FROM temp
+GROUP BY iteration, centroid; -- Correct syntax
+```
+
+**5. Replace positional GROUP BY in materialized views (24.11+)**
+
+```sql
+-- Fails in materialized views on 24.11+
+CREATE MATERIALIZED VIEW mv_driver_location
+ENGINE = AggregatingMergeTree()
+ORDER BY (driver_id, creation_date_hour, operation_area)
+AS
+SELECT
+ driver_id,
+ toStartOfHour(creation_date) AS creation_date_hour,
+ operation_area,
+ uniqState(toStartOfMinute(creation_date)) AS online_minutes_agg
+FROM fct_driver__location
+WHERE driver_status = 1
+GROUP BY 1, 2, 3; -- Positional GROUP BY
+
+-- Fix: use explicit column names
+CREATE MATERIALIZED VIEW mv_driver_location
+ENGINE = AggregatingMergeTree()
+ORDER BY (driver_id, creation_date_hour, operation_area)
+AS
+SELECT
+ driver_id,
+ toStartOfHour(creation_date) AS creation_date_hour,
+ operation_area,
+ uniqState(toStartOfMinute(creation_date)) AS online_minutes_agg
+FROM fct_driver__location
+WHERE driver_status = 1
+GROUP BY driver_id, creation_date_hour, operation_area;
+```
+
+**6. Handle window functions in subqueries correctly**
+
+```sql
+-- Error: referencing window function result in GROUP BY
+SELECT
+ CASE WHEN `$RANK_1` > 2500 THEN 1 ELSE 0 END AS `isotherrow_1`,
+ COUNT(*) AS `count`
+FROM (
+ SELECT
+ COUNT(*) AS `count`,
+ DENSE_RANK() OVER (ORDER BY `radio` DESC) AS `$RANK_1`
+ FROM `cell_towers`
+ GROUP BY `radio`
+); -- Missing GROUP BY
+
+-- Fix: add GROUP BY for the window function result
+SELECT
+ CASE WHEN `$RANK_1` > 2500 THEN 1 ELSE 0 END AS `isotherrow_1`,
+ COUNT(*) AS `count`
+FROM (
+ SELECT
+ COUNT(*) AS `count`,
+ DENSE_RANK() OVER (ORDER BY `radio` DESC) AS `$RANK_1`
+ FROM `cell_towers`
+ GROUP BY `radio`
+)
+GROUP BY CASE WHEN `$RANK_1` > 2500 THEN 1 ELSE 0 END;
+```
+
+**7. Handle GROUPING SETS correctly**
+
+```sql
+-- Error: referencing columns not in all grouping sets
+SELECT
+ CounterID % 2 AS k,
+ CounterID % 3 AS d,
+ quantileBFloat16(0.5)(ResolutionWidth)
+FROM datasets.hits
+GROUP BY GROUPING SETS ((k), (d))
+ORDER BY count() DESC, CounterID % 3 ASC; -- CounterID not available
+
+-- Fix: only reference columns that are in GROUP BY or use aggregates
+SELECT
+ CounterID % 2 AS k,
+ CounterID % 3 AS d,
+ quantileBFloat16(0.5)(ResolutionWidth)
+FROM datasets.hits
+GROUP BY GROUPING SETS ((k), (d))
+ORDER BY count() DESC, d ASC; -- Use alias instead
+```
+
+## Prevention tips {#prevention-tips}
+
+1. **Follow SQL GROUP BY rules**: Every non-aggregated column in SELECT must appear in GROUP BY. This is standard SQL behavior, not ClickHouse-specific.
+
+2. **Avoid alias reuse in aggregation queries (22.8+)**: When using aggregate functions, don't reuse the same alias as the source column if you need to reference it multiple times. This prevents alias substitution issues.
+
+3. **Test after version upgrades**: Version 22.3.16, 22.8+, and 24.11+ introduced behavior changes. Test all GROUP BY queries when upgrading, especially:
+ - Queries with alias reuse
+ - Materialized views with positional GROUP BY
+ - Complex subqueries with window functions
+
+4. **Use experimental analyzer for complex queries**: Enable `allow_experimental_analyzer=1` for queries with complex alias usage, nested aggregations, or window functions.
+
+5. **Avoid positional GROUP BY in materialized views**: Always use explicit column names in GROUP BY clauses for materialized views, not positional arguments like `GROUP BY 1, 2, 3`.
+
+6. **Be explicit with GROUPING SETS**: Only reference columns in SELECT/ORDER BY that appear in all grouping sets, or wrap them in aggregate functions.
+
+7. **Document alias patterns**: If you have queries that worked in older versions but fail after upgrade, document them as known issues and prioritize refactoring.
+
+8. **Use query validation in CI/CD**: Add automated tests for GROUP BY queries in your deployment pipeline to catch compatibility issues before production (one lightweight option is shown below).
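+
+For tip 8, a minimal sketch: `NOT_AN_AGGREGATE` is raised during query analysis, so running `EXPLAIN` against critical queries surfaces it without reading any data (the `users` table is illustrative):
+
+```sql
+-- Fails at analysis time with NOT_AN_AGGREGATE, without scanning data
+EXPLAIN PLAN
+SELECT user_id, name, count() FROM users GROUP BY user_id;
+```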
+
+## Version-specific notes {#version-specific-notes}
+
+### 22.3.16, 22.8+ - Alias substitution regression {#alias-substitution-regression}
+
+Starting in these versions, PR #42827 changed how aliases are handled in GROUP BY queries. This causes previously working queries to fail:
+
+```sql
+-- Worked in 22.3.15, fails in 22.3.16+
+SELECT max(b) AS b, b AS b1 FROM t GROUP BY a;
+```
+
+**Workaround**: Use subqueries, different aliases, or enable `allow_experimental_analyzer=1`.
+
+### 23.5 - Conditional expression issues {#conditional-expression-issues}
+
+Version 23.5 introduced stricter validation for conditional expressions in GROUP BY:
+
+```sql
+-- Works in 23.4, fails in 23.5
+SELECT
+ false ? post_nat_source_ipv4 : '' as post_nat_source_ipv4
+FROM fullflow
+GROUP BY post_nat_source_ipv4;
+```
+
+**Workaround**: Ensure all columns referenced in conditional expressions are in GROUP BY, even if the condition is always false.
+
+### 24.11+ - Positional GROUP BY in materialized views {#positional-group-by-mv}
+
+Version 24.11 broke positional GROUP BY syntax in materialized views:
+
+```sql
+-- Fails in 24.11+
+GROUP BY 1, 2, 3 -- in materialized view definition
+```
+
+**Fix**: Use explicit column names in GROUP BY.
+
+## Related error codes {#related-error-codes}
+
+- [Error 184: `ILLEGAL_AGGREGATION`](/troubleshooting/error-codes/184_ILLEGAL_AGGREGATION) - Aggregate function used incorrectly (e.g., nested aggregates)
+- [Error 47: `UNKNOWN_IDENTIFIER`](/troubleshooting/error-codes/047_UNKNOWN_IDENTIFIER) - Column not found in the context
diff --git a/docs/troubleshooting/error_codes/216_QUERY_WITH_SAME_ID_IS_ALREADY_RUNNING.md b/docs/troubleshooting/error_codes/216_QUERY_WITH_SAME_ID_IS_ALREADY_RUNNING.md
new file mode 100644
index 00000000000..adb539faefb
--- /dev/null
+++ b/docs/troubleshooting/error_codes/216_QUERY_WITH_SAME_ID_IS_ALREADY_RUNNING.md
@@ -0,0 +1,520 @@
+---
+slug: /troubleshooting/error-codes/216_QUERY_WITH_SAME_ID_IS_ALREADY_RUNNING
+sidebar_label: '216 QUERY_WITH_SAME_ID_IS_ALREADY_RUNNING'
+doc_type: 'reference'
+keywords: ['error codes', 'QUERY_WITH_SAME_ID_IS_ALREADY_RUNNING', '216', 'query_id', 'duplicate']
+title: '216 QUERY_WITH_SAME_ID_IS_ALREADY_RUNNING'
+description: 'ClickHouse error code - 216 QUERY_WITH_SAME_ID_IS_ALREADY_RUNNING'
+---
+
+# Error 216: QUERY_WITH_SAME_ID_IS_ALREADY_RUNNING
+
+:::tip
+This error occurs when you attempt to execute a query with a `query_id` that is already in use by a currently running query.
+ClickHouse enforces unique query IDs to prevent duplicate execution and enable proper query tracking, cancellation, and monitoring.
+:::
+
+## Quick reference {#quick-reference}
+
+**What you'll see:**
+
+```text
+Code: 216. DB::Exception: Query with id = ca038ba5-bcdc-4b93-a857-79b066382917 is already running.
+(QUERY_WITH_SAME_ID_IS_ALREADY_RUNNING)
+```
+
+**Most common causes:**
+1. Reusing the same static `query_id` for multiple concurrent queries
+2. Retry logic that doesn't regenerate the `query_id`
+3. Insufficient randomness in multi-threaded ID generation
+4. **Known bug in ClickHouse 25.5.1** (queries execute twice internally)
+5. Previous query still running when retry is attempted
+
+**Quick fixes:**
+
+```sql
+-- ❌ Don't reuse the same query_id
+SELECT * FROM table SETTINGS query_id = 'my-static-id';
+-- Running again immediately causes error 216
+
+-- ✅ Fix 1: Generate unique query IDs with UUID
+SELECT * FROM table SETTINGS query_id = concat('query-', toString(generateUUIDv4()));
+
+-- ✅ Fix 2: Add high-precision timestamp
+SELECT * FROM table SETTINGS query_id = concat('query-', toString(now64(9)));
+
+-- ✅ Fix 3: Let ClickHouse auto-generate (recommended)
+SELECT * FROM table; -- No query_id setting
+```
+
+**For application code:**
+
+```python
+# Python - use UUID
+import uuid
+query_id = str(uuid.uuid4())
+client.execute(sql, query_id=query_id)
+
+# Java - use UUID
+String queryId = UUID.randomUUID().toString();
+response = client.query(sql, queryId).execute();
+```
+
+## Most common causes {#most-common-causes}
+
+1. **Reusing static query IDs in application code**
+ - Hardcoded query IDs like `'my-query'` or `'daily-report'`
+ - Using the same ID for multiple concurrent requests
+ - Application frameworks generating non-unique IDs
+ - Pattern: `query_id = 'app-name-' + request_type` without uniqueness
+
+2. **Client retry logic without ID regeneration**
+ - Automatic retry on network timeout reusing the same `query_id`
+ - Previous query still running when retry is attempted
+ - Connection pools executing queries with duplicate IDs
+ - Load balancers distributing the same request to multiple servers
+
+3. **Insufficient randomness in multi-threaded applications**
+ - Using `UUID + ":" + random(0, 100)` doesn't provide enough uniqueness
+ - Timestamp-based IDs without sufficient precision (seconds instead of nanoseconds)
+ - Multiple threads generating IDs simultaneously without proper coordination
+ - Example that fails: `query_id = f"{uuid.uuid4()}:{random.randint(0, 100)}"`
+
+4. **Version-specific regression (25.5.1)**
+ - **ClickHouse 25.5.1 has a critical bug** where queries execute twice internally
+ - Single client request results in two `executeQuery` log entries milliseconds apart
+ - First execution succeeds, second fails with error 216
+ - Affects almost all queries with custom `query_id` in 25.5.1
+ - **Workaround**: Downgrade to 25.4.5 or wait for fix
+
+5. **Long-running queries not cleaned up**
+ - Previous query with same ID still in `system.processes`
+ - Query appears completed on client side but server still processing
+ - Network interruptions leaving queries in limbo state
+ - Queries waiting on locks or merges
+
+6. **Distributed query complexity**
+ - Query coordinator using same ID for multiple nodes
+ - Retry on different replica with same query_id
+ - Cross-cluster queries not properly cleaned up
+
+7. **Misunderstanding query_id purpose**
+ - Attempting to use `query_id` as an idempotency key
+ - Expecting ClickHouse to deduplicate based on `query_id`
+ - Using `query_id` to prevent duplicate inserts (doesn't work)
+
+## Common solutions {#common-solutions}
+
+### **1. Generate truly unique query IDs** {#generate-unique-query-ids}
+
+```python
+# ✅ Best practice: Use UUID4
+import uuid
+from clickhouse_driver import Client
+
+client = Client('localhost')
+query_id = str(uuid.uuid4()) # e.g., 'ca038ba5-bcdc-4b93-a857-79b066382917'
+result = client.execute('SELECT * FROM table', query_id=query_id)
+
+# For debugging: Add timestamp and thread context
+import time
+import threading
+query_id = f"{uuid.uuid4()}-{int(time.time() * 1000)}-{threading.get_ident()}"
+
+# High-precision timestamp (if UUID is not available)
+import random
+query_id = f"query-{time.time_ns()}-{random.randint(10000, 99999)}"
+```
+
+```java
+// Java: Use UUID.randomUUID()
+import java.util.UUID;
+import com.clickhouse.client.*;
+
+String queryId = UUID.randomUUID().toString();
+ClickHouseResponse response = client
+ .query(sql, queryId)
+ .format(ClickHouseFormat.JSONEachRow)
+ .execute()
+ .get();
+```
+
+```sql
+-- SQL-level: Generate unique IDs
+SELECT * FROM table
+SETTINGS query_id = concat(
+ 'query-',
+ toString(generateUUIDv4()),
+ '-',
+ toString(now64(9))
+);
+
+-- Or let ClickHouse handle it (recommended)
+SELECT * FROM table;
+-- ClickHouse auto-generates: query_id like 'a1b2c3d4-...'
+```
+
+### **2. Implement proper retry logic** {#implement-retry-logic}
+
+```python
+# WRONG: Reusing same query_id on retry
+def execute_with_retry_wrong(client, sql, max_retries=3):
+ query_id = str(uuid.uuid4()) # Generated ONCE
+ for attempt in range(max_retries):
+ try:
+ return client.execute(sql, query_id=query_id)
+ except Exception as e:
+ if "QUERY_WITH_SAME_ID_IS_ALREADY_RUNNING" in str(e):
+ time.sleep(2 ** attempt)
+ continue # Retries with SAME query_id
+ raise
+
+# CORRECT: Generate new query_id for each attempt
+def execute_with_retry_correct(client, sql, max_retries=3):
+ for attempt in range(max_retries):
+ query_id = str(uuid.uuid4()) # New ID each time
+ try:
+ return client.execute(sql, query_id=query_id)
+ except Exception as e:
+ if attempt == max_retries - 1:
+ raise
+ time.sleep(2 ** attempt)
+
+# BETTER: Check if previous query finished before retry
+def execute_with_smart_retry(client, sql, max_retries=3):
+ previous_query_id = None
+
+ for attempt in range(max_retries):
+ # If we're retrying, check if previous query finished
+ if previous_query_id and not is_query_finished(client, previous_query_id):
+ # Wait for previous query to finish or kill it
+ kill_query(client, previous_query_id)
+ time.sleep(2)
+
+ query_id = str(uuid.uuid4())
+ previous_query_id = query_id
+
+ try:
+ return client.execute(sql, query_id=query_id)
+ except Exception as e:
+ if attempt == max_retries - 1:
+ raise
+ time.sleep(2 ** attempt)
+
+def is_query_finished(client, query_id):
+ result = client.execute(f"""
+ SELECT count() > 0 as finished
+ FROM system.query_log
+ WHERE query_id = '{query_id}'
+ AND type IN ('QueryFinish', 'ExceptionWhileProcessing')
+ AND event_time > now() - INTERVAL 60 SECOND
+ """)
+ return result[0][0]
+
+def kill_query(client, query_id):
+ try:
+ client.execute(f"KILL QUERY WHERE query_id = '{query_id}'")
+    except Exception:
+ pass
+```
+
+### **3. Check if query is still running before retry** {#check-query-running-before-retry}
+
+```sql
+-- Check if a specific query_id is still running
+SELECT
+ query_id,
+ user,
+ elapsed,
+ formatReadableTimeDelta(elapsed) AS duration,
+ query
+FROM system.processes
+WHERE query_id = 'ca038ba5-bcdc-4b93-a857-79b066382917';
+
+-- On clusters, check all nodes
+SELECT
+ hostName() AS host,
+ query_id,
+ elapsed,
+ formatReadableTimeDelta(elapsed) AS duration
+FROM clusterAllReplicas('default', system.processes)
+WHERE query_id = 'ca038ba5-bcdc-4b93-a857-79b066382917';
+```
+
+### **4. Kill stuck queries before retry** {#kill-stuck-queries}
+
+```sql
+-- Kill a specific query by ID
+KILL QUERY WHERE query_id = 'ca038ba5-bcdc-4b93-a857-79b066382917';
+
+-- For clusters, must use ON CLUSTER (common mistake)
+KILL QUERY ON CLUSTER 'default'
+WHERE query_id = 'ca038ba5-bcdc-4b93-a857-79b066382917';
+
+-- Verify the query was killed
+SELECT
+ query_id,
+ type,
+ exception
+FROM system.query_log
+WHERE query_id = 'ca038ba5-bcdc-4b93-a857-79b066382917'
+ AND type IN ('QueryFinish', 'ExceptionWhileProcessing')
+ORDER BY event_time DESC
+LIMIT 1;
+```
+
+### **5. Don't use query_id for idempotency** {#dont-use-query-id-for-idempotency}
+
+```python
+# WRONG: Using query_id to prevent duplicate inserts
+def idempotent_insert_wrong(client, data, request_id):
+ # This WON'T prevent duplicate inserts
+ client.execute(
+ f"INSERT INTO table VALUES {data}",
+ query_id=request_id # Doesn't work for idempotency
+ )
+
+# CORRECT: Implement proper idempotency at data layer
+def idempotent_insert_correct(client, data, request_id):
+ # Option 1: Use ReplacingMergeTree
+ client.execute(f"""
+ INSERT INTO table_replacing_merge_tree
+ (request_id, data, created_at)
+ VALUES ('{request_id}', '{data}', now())
+ """)
+
+ # Option 2: Check before insert
+ client.execute(f"""
+ INSERT INTO table (request_id, data)
+ SELECT '{request_id}', '{data}'
+ WHERE NOT EXISTS (
+ SELECT 1 FROM table WHERE request_id = '{request_id}'
+ )
+ """)
+
+ # Option 3: Use Distributed table deduplication
+ # Set replicated_deduplication_window in config
+```
+
+### **6. Workaround for 25.5.1 regression** {#workaround-25-5-1-regression}
+
+```bash
+# If experiencing widespread issues on 25.5.1, downgrade immediately
+
+# Docker:
+docker pull clickhouse/clickhouse-server:25.4.5
+docker run -d clickhouse/clickhouse-server:25.4.5
+
+# ClickHouse Cloud:
+# Contact support to rollback to 25.4.5
+
+# Self-hosted (Debian/Ubuntu):
+sudo apt-get install clickhouse-server=25.4.5 clickhouse-client=25.4.5
+
+# Temporary workaround: Don't use custom query_id
+# Let ClickHouse auto-generate IDs until upgraded/downgraded
+```
+
+## Prevention tips {#prevention-tips}
+
+1. **Always use UUIDs for query_id**: Never use predictable or static query IDs. Use UUID4 (random) or UUID1 (timestamp-based with MAC address).
+
+2. **Generate new query_id for every execution**: Even when retrying the exact same query, generate a fresh `query_id`.
+
+3. **Understand query_id purpose**: It's for monitoring, tracking, and cancellation—NOT for idempotency or deduplication.
+
+4. **Avoid 25.5.1**: If you're on ClickHouse 25.5.1 and experiencing this error frequently, downgrade to 25.4.5 or wait for 25.5.2+.
+
+5. **Test concurrent execution**: Ensure your ID generation strategy produces unique IDs under high concurrency (1000+ queries/second).
+
+6. **Use KILL QUERY ON CLUSTER**: In distributed setups, always use `ON CLUSTER` variant to kill queries on all nodes.
+
+7. **Monitor query cleanup**: Set up alerts for queries stuck in `system.processes` for > 5 minutes.
+
+8. **Implement proper ID structure**:
+ ```text
+ {app_name}-{environment}-{uuid}-{timestamp_ns}
+ example: myapp-prod-a1b2c3d4-1234567890123456789
+ ```
+
+## Debugging steps {#debugging-steps}
+
+### **1. Check if query is actually running** {#check-query-actually-running}
+
+```sql
+-- Is this query_id currently running?
+SELECT
+ query_id,
+ user,
+ elapsed,
+ formatReadableTimeDelta(elapsed) AS duration,
+ memory_usage,
+ query
+FROM system.processes
+WHERE query_id = 'your-query-id';
+
+-- If no results, it's not running (might be an application bug)
+```
+
+### **2. Check query execution history** {#check-query-execution-history}
+
+```sql
+-- See all executions of this query_id in last hour
+SELECT
+ event_time,
+ type,
+ query_duration_ms,
+ formatReadableSize(memory_usage) AS memory,
+ exception_code,
+ exception
+FROM system.query_log
+WHERE query_id = 'your-query-id'
+ AND event_time > now() - INTERVAL 1 HOUR
+ORDER BY event_time DESC;
+
+-- Count execution patterns
+SELECT
+ query_id,
+ count() AS total_executions,
+ countIf(type = 'QueryStart') AS starts,
+ countIf(type = 'QueryFinish') AS finishes,
+ countIf(type = 'ExceptionWhileProcessing') AS exceptions
+FROM system.query_log
+WHERE query_id = 'your-query-id'
+ AND event_time > now() - INTERVAL 1 DAY
+GROUP BY query_id;
+```
+
+### **3. Investigate 25.5.1 regression pattern** {#investigate-regression-pattern}
+
+```sql
+-- Look for the telltale double-execution pattern
+SELECT
+ query_id,
+    groupArray(event_time_microseconds) AS times,
+ groupArray(type) AS types,
+ groupArray(exception_code) AS error_codes,
+ arrayMax(times) - arrayMin(times) AS time_diff_sec
+FROM system.query_log
+WHERE event_time > now() - INTERVAL 10 MINUTE
+ AND exception_code = 216
+GROUP BY query_id
+HAVING time_diff_sec < 1 -- Executions within 1 second
+ORDER BY time_diff_sec ASC;
+
+-- If you see many results with time_diff < 0.1 sec, it's likely the 25.5.1 bug
+```
+
+### **4. Find duplicate query_id patterns** {#find-duplicate-query-id-patterns}
+
+```sql
+-- Identify queries with non-unique IDs
+SELECT
+ query_id,
+ count() AS collision_count,
+ groupArray(event_time) AS execution_times,
+ groupUniqArray(user) AS users
+FROM system.query_log
+WHERE event_time > now() - INTERVAL 1 HOUR
+ AND type = 'QueryStart'
+GROUP BY query_id
+HAVING count() > 1
+ORDER BY count() DESC
+LIMIT 20;
+
+-- Analyze ID generation patterns
+SELECT
+ substring(query_id, 1, 20) AS id_prefix,
+ count() AS occurrences
+FROM system.query_log
+WHERE event_time > now() - INTERVAL 1 HOUR
+GROUP BY id_prefix
+HAVING count() > 10
+ORDER BY count() DESC;
+```
+
+### **5. Check for stuck queries** {#check-for-stuck-queries}
+
+```sql
+-- Find long-running queries that might be stuck
+SELECT
+ query_id,
+ user,
+ elapsed,
+ formatReadableTimeDelta(elapsed) AS duration,
+ formatReadableSize(memory_usage) AS memory,
+ query
+FROM system.processes
+WHERE elapsed > 300 -- Running for > 5 minutes
+ORDER BY elapsed DESC;
+```
+
+## When query_id is useful {#when-query-id-is-useful}
+
+Despite the limitations, `query_id` is valuable for:
+
+### **1. Query tracking and correlation** {#query-tracking-correlation}
+
+```python
+# Correlate ClickHouse queries with application logs
+import logging
+import uuid
+
+logger = logging.getLogger(__name__)
+
+query_id = str(uuid.uuid4())
+logger.info(f"Executing query {query_id} for user {user_id}")
+result = client.execute(query, query_id=query_id)
+logger.info(f"Query {query_id} completed in {duration}s")
+
+# Now you can search logs: "query_id: a1b2c3d4-..."
+```
+
+### **2. Selective query cancellation** {#selective-query-cancellation}
+
+```sql
+-- Start a long-running batch job
+SELECT * FROM huge_table
+WHERE date >= today() - INTERVAL 30 DAY
+SETTINGS query_id = 'batch-monthly-report-2024-01';
+
+-- From another connection, cancel if needed
+KILL QUERY WHERE query_id = 'batch-monthly-report-2024-01';
+```
+
+### **3. Performance analysis over time** {#performance-analysis-over-time}
+
+```sql
+-- Track how query performance changes over time
+SELECT
+ toDate(event_time) AS date,
+ count() AS executions,
+ avg(query_duration_ms) AS avg_duration_ms,
+ max(query_duration_ms) AS max_duration_ms,
+ avg(memory_usage) AS avg_memory_bytes
+FROM system.query_log
+WHERE query_id LIKE 'daily-report-%'
+ AND type = 'QueryFinish'
+ AND event_time > now() - INTERVAL 30 DAY
+GROUP BY date
+ORDER BY date DESC;
+```
+
+### **4. Distributed tracing integration** {#distributed-tracing-integration}
+
+```python
+# OpenTelemetry example
+from opentelemetry import trace
+
+tracer = trace.get_tracer(__name__)
+
+with tracer.start_as_current_span("clickhouse_query") as span:
+ query_id = str(uuid.uuid4())
+ span.set_attribute("query_id", query_id)
+ span.set_attribute("database", "analytics")
+
+ result = client.execute(query, query_id=query_id)
+
+ span.set_attribute("rows_returned", len(result))
+```
+
+## Related error codes {#related-error-codes}
+
+- [Error 202: `TOO_MANY_SIMULTANEOUS_QUERIES`](/troubleshooting/error-codes/202_TOO_MANY_SIMULTANEOUS_QUERIES) - Concurrent query limit exceeded (often seen together)
+- [Error 394: `QUERY_WAS_CANCELLED`](/troubleshooting/error-codes/394_QUERY_WAS_CANCELLED) - Query cancelled via KILL QUERY
diff --git a/docs/troubleshooting/error_codes/241_MEMORY_LIMIT_EXCEEDED.md b/docs/troubleshooting/error_codes/241_MEMORY_LIMIT_EXCEEDED.md
new file mode 100644
index 00000000000..759642f4536
--- /dev/null
+++ b/docs/troubleshooting/error_codes/241_MEMORY_LIMIT_EXCEEDED.md
@@ -0,0 +1,274 @@
+---
+slug: /troubleshooting/error-codes/241_MEMORY_LIMIT_EXCEEDED
+sidebar_label: '241 MEMORY_LIMIT_EXCEEDED'
+doc_type: 'reference'
+keywords: ['error codes', 'MEMORY_LIMIT_EXCEEDED', '241']
+title: '241 MEMORY_LIMIT_EXCEEDED'
+description: 'ClickHouse error code - 241 MEMORY_LIMIT_EXCEEDED'
+---
+
+# Error 241: MEMORY_LIMIT_EXCEEDED
+
+:::tip
+This error occurs when a query or operation attempts to use more memory than the configured limits allow.
+It indicates that ClickHouse's memory protection mechanisms have stopped the operation to prevent out-of-memory (OOM) conditions and system instability.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Query exceeds per-query memory limit**
+ - Query using more than [`max_memory_usage`](/operations/settings/settings#max_memory_usage) setting
+ - Large joins without proper filtering
+ - Aggregations with too many distinct keys
+ - Sorting very large result sets
+
+2. **Total server memory exhausted**
+ - Sum of all query memory exceeds [`max_server_memory_usage`](/operations/server-configuration-parameters/settings#max_server_memory_usage)
+ - Too many concurrent memory-intensive queries
+ - Background operations (merges, mutations) consuming memory
+ - Memory fragmentation and retention
+
+3. **Insufficient resources for workload**
+ - Server RAM too small for data volume
+ - Memory limits set too low for query patterns
+ - Large tables with insufficient memory for operations
+
+4. **Memory-intensive operations**
+ - `GROUP BY` with high cardinality
+ - `JOIN` operations on large tables
+ - `DISTINCT` on millions/billions of rows
+ - Window functions over large datasets
+ - External sorting spilling to disk
+
+5. **Background operations consuming memory**
+ - Large merge operations
+ - Mutations on large partitions
+ - Multiple concurrent merges
+ - Cleanup threads allocating memory
+
+6. **Memory leaks or accumulation**
+ - Old ClickHouse versions with memory leaks
+ - Retained memory not being released
+   - Memory fragmentation (high `retained` in jemalloc stats; see the query after this list)
+
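+To check for the memory retention and fragmentation mentioned above, compare the allocator statistics in `system.asynchronous_metrics` with ClickHouse's own memory tracking. A minimal sketch (exact jemalloc metric names can vary slightly between versions):
+
+```sql
+-- Allocator view: a large gap between resident/retained and allocated memory
+-- points to fragmentation or memory that is retained but unused
+SELECT metric, formatReadableSize(value) AS size
+FROM system.asynchronous_metrics
+WHERE metric LIKE 'jemalloc%'
+ORDER BY metric;
+
+-- Compare with the memory ClickHouse itself is tracking
+SELECT metric, formatReadableSize(value) AS size
+FROM system.metrics
+WHERE metric = 'MemoryTracking';
+```
+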
+## Common solutions {#common-solutions}
+
+**1. Check current memory limits**
+
+```sql
+-- View memory limit settings
+SELECT
+ name,
+ value,
+ description
+FROM system.settings
+WHERE name LIKE '%memory%'
+ORDER BY name;
+
+-- Key settings to check:
+-- max_memory_usage (per query limit)
+-- max_memory_usage_for_user (per user limit)
+-- max_server_memory_usage (total server limit)
+```
+
+**2. Increase memory limits (if appropriate)**
+
+```sql
+-- Increase per-query limit
+SET max_memory_usage = 20000000000; -- 20 GB
+
+-- Increase for specific query
+SELECT * FROM large_table
+SETTINGS max_memory_usage = 50000000000; -- 50 GB
+
+-- For user
+ALTER USER your_user SETTINGS max_memory_usage = 30000000000;
+```
+
+**3. Optimize the query**
+
+```sql
+-- Add WHERE clause to filter data early
+SELECT * FROM table
+WHERE date >= today() - INTERVAL 7 DAY;
+
+-- Use LIMIT to reduce result size
+SELECT * FROM table
+ORDER BY id
+LIMIT 100000;
+
+-- Pre-aggregate before joining
+SELECT a.id, b.cnt
+FROM small_table a
+LEFT JOIN (
+ SELECT user_id, COUNT(*) as cnt
+ FROM large_table
+ GROUP BY user_id
+) b ON a.id = b.user_id;
+```
+
+**4. Enable external aggregation/sorting**
+
+```sql
+-- Allow spilling to disk when memory limit approached
+SET max_bytes_before_external_group_by = 20000000000; -- 20 GB
+SET max_bytes_before_external_sort = 20000000000; -- 20 GB
+
+-- This prevents memory errors by using disk when needed
+```
+
+**5. Reduce query concurrency**
+
+```sql
+-- Limit concurrent queries per user
+SET max_concurrent_queries_for_user = 5;
+
+-- Monitor current memory usage
+SELECT
+ user,
+ query_id,
+ formatReadableSize(memory_usage) AS memory,
+ query
+FROM system.processes
+ORDER BY memory_usage DESC;
+```
+
+**6. Upgrade server memory (ClickHouse Cloud)**
+
+For ClickHouse Cloud, if consistent memory issues:
+- Upgrade to a larger instance tier
+- Contact support to increase memory limits
+- Consider horizontal scaling (add more replicas)
+
+**7. Optimize table design**
+
+```sql
+-- Use appropriate codecs to reduce memory
+CREATE TABLE optimized_table (
+ id UInt64,
+ name String CODEC(ZSTD),
+ value Int64 CODEC(Delta, ZSTD)
+) ENGINE = MergeTree()
+ORDER BY id;
+
+-- Use smaller data types where possible
+-- UInt32 instead of UInt64, Date instead of DateTime when possible
+```
+
+## Common scenarios {#common-scenarios}
+
+**Scenario 1: Query aggregating high-cardinality column**
+
+```text
+Error: Memory limit (for query) exceeded:
+would use 10.50 GiB, maximum: 10.00 GiB
+```
+
+**Cause:** `GROUP BY` on column with millions of distinct values.
+
+**Solution:**
+
+```sql
+-- Option 1: Increase limit
+SET max_memory_usage = 20000000000;
+
+-- Option 2: Enable external aggregation
+SET max_bytes_before_external_group_by = 10000000000;
+
+-- Option 3: Reduce cardinality
+SELECT
+ toStartOfHour(timestamp) AS hour, -- Instead of exact timestamp
+ COUNT(*)
+FROM table
+GROUP BY hour;
+```
+
+**Scenario 2: Total server memory exceeded**
+
+```text
+Error: Memory limit (total) exceeded:
+would use 66.23 GiB, maximum: 56.48 GiB
+```
+
+**Cause:** Too many concurrent queries or background operations.
+
+**Solution:**
+```sql
+-- Check what's using memory
+SELECT
+ query_id,
+ user,
+ formatReadableSize(memory_usage) AS memory,
+ query
+FROM system.processes
+ORDER BY memory_usage DESC;
+
+-- Kill memory-intensive queries if needed
+KILL QUERY WHERE query_id = 'high_memory_query_id';
+
+-- Reduce concurrent queries: max_concurrent_queries is a server-level
+-- parameter, set in config.xml rather than with SET:
+-- <max_concurrent_queries>50</max_concurrent_queries>
+```
+
+**Scenario 3: Large JOIN operation**
+
+```text
+Error: Memory limit exceeded while executing JOIN
+```
+
+**Cause:** Joining large tables without proper filtering.
+
+**Solution:**
+
+```sql
+-- Add filters before JOIN
+SELECT *
+FROM table1 a
+JOIN table2 b ON a.id = b.id
+WHERE a.date >= today() - INTERVAL 1 DAY
+ AND b.active = 1;
+
+-- Or use appropriate JOIN algorithm
+SELECT *
+FROM large_table a
+JOIN small_table b ON a.id = b.id
+SETTINGS join_algorithm = 'hash'; -- or 'parallel_hash'
+```
+
+**Scenario 4: Background merge consuming memory**
+
+```text
+Error: Memory limit (total) exceeded during merge operation
+```
+
+**Cause:** Large parts being merged consume significant memory.
+
+**Solution:**
+
+```sql
+-- Check merge activity
+SELECT *
+FROM system.merges;
+
+-- Adjust merge settings (MergeTree-level setting, per table)
+ALTER TABLE your_table
+    MODIFY SETTING max_bytes_to_merge_at_max_space_in_pool = 50000000000; -- 50 GB
+
+-- Or temporarily pause merges
+SYSTEM STOP MERGES your_table;
+-- Run query
+-- SYSTEM START MERGES your_table;
+```
+
+**Scenario 5: Pod OOMKilled (ClickHouse Cloud)**
+
+```text
+Pod terminated with OOMKilled status
+```
+
+**Cause:** Memory limit set too low for workload.
+
+**Solution:**
+- Upgrade to higher memory tier
+- Request memory limit increase from support
+- Optimize queries to use less memory (see the query below for finding the heaviest queries)
+- Distribute load across more replicas
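+
+To decide which queries to optimize first, rank recent queries by peak memory in the query log. A minimal sketch (assumes `system.query_log` is enabled, which it is by default):
+
+```sql
+-- Top memory-consuming queries over the last day
+SELECT
+    formatReadableSize(memory_usage) AS peak_memory,
+    query_duration_ms,
+    user,
+    substring(query, 1, 120) AS query_preview
+FROM system.query_log
+WHERE type = 'QueryFinish'
+  AND event_time > now() - INTERVAL 1 DAY
+ORDER BY memory_usage DESC
+LIMIT 10;
+```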
diff --git a/docs/troubleshooting/error_codes/242_TABLE_IS_READ_ONLY.md b/docs/troubleshooting/error_codes/242_TABLE_IS_READ_ONLY.md
new file mode 100644
index 00000000000..6e7af7d83e5
--- /dev/null
+++ b/docs/troubleshooting/error_codes/242_TABLE_IS_READ_ONLY.md
@@ -0,0 +1,358 @@
+---
+slug: /troubleshooting/error-codes/242_TABLE_IS_READ_ONLY
+sidebar_label: '242 TABLE_IS_READ_ONLY'
+doc_type: 'reference'
+keywords: ['error codes', 'TABLE_IS_READ_ONLY', '242', 'readonly', 'replica']
+title: '242 TABLE_IS_READ_ONLY'
+description: 'ClickHouse error code - 242 TABLE_IS_READ_ONLY'
+---
+
+# Error 242: TABLE_IS_READ_ONLY
+
+:::tip
+This error occurs when a table replica enters read-only mode, preventing write operations (INSERT, UPDATE, DELETE, ALTER). This is a protective measure ClickHouse takes when it cannot maintain consistency with other replicas, typically due to Keeper/ZooKeeper connection issues, metadata mismatches, or data corruption.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Keeper/ZooKeeper connection issues**
+ - Connection loss (ZCONNECTIONLOSS) during critical operations
+ - Session expired (ZSESSIONEXPIRED) causing replica to lose coordination
+ - Operation timeout exceeding configured limits
+ - Network partition between ClickHouse server and Keeper nodes
+ - Keeper cluster losing quorum
+
+2. **Metadata mismatch with ZooKeeper**
+ - Local table metadata differs from metadata stored in ZooKeeper
+ - TTL configuration discrepancies (common after failed ALTER queries)
+ - Incomplete ALTER TABLE operations that updated ZooKeeper but not local metadata
+ - Example: `Existing table metadata in ZooKeeper differs in TTL`
+
+3. **Part validation failures**
+ - Data corruption detected during part loading (ATTEMPT_TO_READ_AFTER_EOF)
+ - Checksum mismatches in data files
+ - Missing or corrupted mark files (CANNOT_READ_ALL_DATA)
+ - Broken parts that cannot be loaded on startup
+ - Example: `Cannot read all marks from file`
+
+4. **Initialization failure scenarios**
+ - Too many suspicious parts detected on startup
+ - Local parts don't match ZooKeeper's expected set
+ - Example: `The local set of parts doesn't look like the set of parts in ZooKeeper: 6.23 million rows are suspicious`
+ - Replica cannot sync with other replicas during startup
+
+5. **Resource exhaustion**
+ - Disk space running low, triggering readonly protection
+ - Too many parts accumulating (often > 300 parts)
+ - Memory pressure preventing proper operations
+ - Heavy INSERT workload overwhelming merge operations
+
+6. **Failed ALTER operations**
+ - ALTER TABLE partially applied (updated ZooKeeper but not all replicas)
+ - DDL queue entry failed on some replicas
+ - Concurrent ALTER PARTITION cancelling INSERT operations
+ - Example: `Insert query was cancelled by concurrent ALTER PARTITION`
+
+## Common solutions {#common-solutions}
+
+**1. Check replica status**
+
+```sql
+-- Check which tables are in readonly mode
+SELECT
+ database,
+ table,
+ engine,
+ is_leader,
+ is_readonly,
+ total_replicas,
+ active_replicas
+FROM system.replicas
+WHERE is_readonly = 1;
+
+-- Check detailed replica status
+SELECT
+ database,
+ table,
+ last_queue_update_exception,
+ zookeeper_exception
+FROM system.replicas
+WHERE is_readonly = 1
+FORMAT Vertical;
+```
+
+**2. Restart the replica (safest first step)**
+
+```sql
+-- Restart specific replica
+SYSTEM RESTART REPLICA database.table_name;
+
+-- Verify recovery
+SELECT
+ database,
+ table,
+ is_readonly,
+ last_queue_update_exception
+FROM system.replicas
+WHERE table = 'table_name';
+```
+
+**3. Check Keeper/ZooKeeper connectivity**
+
+```sql
+-- Verify Keeper is accessible
+SELECT *
+FROM system.zookeeper
+WHERE path = '/clickhouse'
+LIMIT 5;
+
+-- Check for recent Keeper exceptions
+SELECT
+ event_time,
+ message
+FROM system.text_log
+WHERE level = 'Error'
+ AND message LIKE '%KEEPER_EXCEPTION%'
+ AND event_time > now() - INTERVAL 1 HOUR
+ORDER BY event_time DESC
+LIMIT 10;
+```
+
+**4. Force restore data (use with caution)**
+
+This forces the replica to reinitialize from ZooKeeper, discarding suspicious local parts.
+
+```bash
+# Connect to Keeper pod (ClickHouse Cloud or self-hosted)
+kubectl exec -it c-{service}-keeper-0 -n ns-{service} -- bash
+
+# Use keeper-client
+clickhouse keeper-client -h 0.0.0.0 -p 2181
+
+# In keeper-client, create the flag node
+create /clickhouse/tables/{table_uuid}/default/replicas/{replica_name}/flags/force_restore_data ""
+
+# Exit keeper-client and restart the ClickHouse server
+# Kubernetes:
+kubectl delete pod c-{service}-server-0 -n ns-{service}
+
+# Systemctl:
+sudo systemctl restart clickhouse-server
+```
+
+**5. Fix metadata mismatch**
+
+When you see: `Existing table metadata in ZooKeeper differs in TTL`
+
+```bash
+# Connect to Keeper
+clickhouse keeper-client -h 0.0.0.0 -p 2181
+
+# View current metadata
+get /clickhouse/databases/{db_uuid}/metadata/{table_name}
+
+# Set corrected metadata (example for TTL fix)
+set "/clickhouse/databases/{db_uuid}/metadata/{table_name}" "ATTACH TABLE _ UUID '{table_uuid}'
+(
+ `column1` String,
+ `column2` UInt64,
+ `timestamp` DateTime
+)
+ENGINE = SharedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
+ORDER BY column1
+TTL toStartOfHour(timestamp) + toIntervalHour(24)
+SETTINGS index_granularity = 8192
+"
+```
+
+Then restart the server for changes to take effect.
+
+**6. Detach and reattach table**
+
+```sql
+-- Detach table (stops all operations)
+DETACH TABLE database.table_name;
+
+-- Reattach table (forces reinitialization)
+ATTACH TABLE database.table_name;
+
+-- Verify status
+SELECT is_readonly FROM system.replicas WHERE table = 'table_name';
+```
+
+**7. Clean up broken parts**
+
+```sql
+-- Check for parts in problematic states
+SELECT
+ database,
+ table,
+ name,
+ active,
+ marks,
+ rows
+FROM system.parts
+WHERE database = 'your_database'
+ AND table = 'your_table'
+ AND active = 0;
+
+-- Drop specific broken part (use with caution)
+ALTER TABLE database.table_name DROP PART 'part_name';
+```
+
+## Common scenarios {#common-scenarios}
+
+**Scenario 1: Keeper connection timeout**
+
+```text
+Code: 242. DB::Exception: Table is in readonly mode (replica path: /clickhouse/tables/{uuid}/default/replicas/c-server-0).
+(TABLE_IS_READ_ONLY)
+```
+
+**Cause:** Keeper/ZooKeeper session expired during operation.
+
+**Solution:**
+
+```sql
+-- Check Keeper connectivity
+SELECT * FROM system.zookeeper WHERE path = '/clickhouse' LIMIT 1;
+
+-- Restart replica
+SYSTEM RESTART REPLICA database.table_name;
+
+-- If persists, check Keeper cluster health
+-- May need to increase session timeout in config
+```
+
+**Scenario 2: Metadata mismatch after ALTER**
+
+```text
+Error: Existing table metadata in ZooKeeper differs in TTL
+Table is in readonly mode
+```
+
+**Cause:** Failed ALTER TABLE operation left inconsistent metadata between local replica and ZooKeeper.
+
+**Solution:**
+
+```sql
+-- Option 1: Try restarting replica first
+SYSTEM RESTART REPLICA database.table_name;
+
+-- Option 2: If restart doesn't work, manually fix metadata in Keeper
+-- Use keeper-client to correct the metadata (see solution #5 above)
+
+-- Option 3: Force restore data
+-- Create force_restore_data flag in Keeper (see solution #4 above)
+```
+
+**Scenario 3: Suspicious parts on startup**
+
+```text
+Error: The local set of parts doesn't look like the set of parts in ZooKeeper:
+6.23 million rows are suspicious
+Table is in readonly mode
+```
+
+**Cause:** Too many parts locally don't match what ZooKeeper expects, often after unclean shutdown or data corruption.
+
+**Solution:**
+
+```sql
+-- Check part status
+SELECT
+ name,
+ active,
+ rows,
+ modification_time
+FROM system.parts
+WHERE table = 'your_table'
+ORDER BY modification_time DESC
+LIMIT 20;
+
+-- Force restore from ZooKeeper
+-- Create force_restore_data flag in Keeper (see solution #4 above)
+-- This will discard suspicious local parts and resync
+```
+
+**Scenario 4: Part validation failure**
+
+```text
+Error: Cannot read all marks from file
+ATTEMPT_TO_READ_AFTER_EOF
+Table is in readonly mode
+```
+
+**Cause:** Corrupted data files or mark files preventing part from loading.
+
+**Solution:**
+
+```sql
+-- Identify broken parts
+SELECT
+ name,
+ path,
+ modification_time
+FROM system.parts
+WHERE table = 'your_table'
+ AND active = 0;
+
+-- Option 1: Drop broken part and fetch from another replica
+ALTER TABLE database.table_name DROP PART 'broken_part_name';
+SYSTEM SYNC REPLICA database.table_name;
+
+-- Option 2: Force restore entire replica
+-- Create force_restore_data flag in Keeper (see solution #4 above)
+```
+
+**Scenario 5: Concurrent ALTER cancelling INSERT**
+
+```text
+Error: Insert query was cancelled by concurrent ALTER PARTITION
+Table is in readonly mode
+```
+
+**Cause:** ALTER PARTITION operation interfered with ongoing INSERT, putting replica in protective read-only mode.
+
+**Solution:**
+
+```sql
+-- Check for ongoing mutations or merges
+SELECT * FROM system.mutations WHERE is_done = 0;
+SELECT * FROM system.merges;
+
+-- Restart replica to recover
+SYSTEM RESTART REPLICA database.table_name;
+
+-- To prevent: Coordinate ALTER operations to avoid conflicts
+-- Use SYNC modifier to wait for completion
+ALTER TABLE database.table_name DROP PARTITION 'partition_id' SYNC;
+```
+
+**Scenario 6: Disk space exhaustion**
+
+```text
+Error: Not enough space on disk
+Table is in readonly mode
+```
+
+**Cause:** Insufficient disk space triggered readonly protection.
+
+**Solution:**
+
+```sql
+-- Check disk usage
+SELECT
+ name,
+ path,
+ formatReadableSize(free_space) AS free,
+ formatReadableSize(total_space) AS total,
+ round(free_space / total_space * 100, 2) AS free_percent
+FROM system.disks;
+
+-- Free up space by dropping old partitions
+ALTER TABLE database.table_name DROP PARTITION 'old_partition_id';
+
+-- Or increase disk capacity (ClickHouse Cloud)
+-- Contact support to expand storage
+```
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/252_TOO_MANY_PARTS.md b/docs/troubleshooting/error_codes/252_TOO_MANY_PARTS.md
new file mode 100644
index 00000000000..9456c13d426
--- /dev/null
+++ b/docs/troubleshooting/error_codes/252_TOO_MANY_PARTS.md
@@ -0,0 +1,530 @@
+---
+slug: /troubleshooting/error-codes/252_TOO_MANY_PARTS
+sidebar_label: '252 TOO_MANY_PARTS'
+doc_type: 'reference'
+keywords: ['error codes', 'TOO_MANY_PARTS', '252', 'merges', 'parts', 'partition']
+title: '252 TOO_MANY_PARTS'
+description: 'ClickHouse error code - 252 TOO_MANY_PARTS'
+---
+
+# Error 252: TOO_MANY_PARTS
+
+:::tip
+This error occurs when a table accumulates too many data parts, indicating that inserts are creating new parts faster than the background merge process can combine them. This is almost always caused by inserting data too frequently (many small inserts instead of fewer large batch inserts) or having an inappropriate partition key.
+:::
+
+## Quick reference {#quick-reference}
+
+**What you'll see:**
+
+```text
+Code: 252. DB::Exception: Too many parts (300). Merges are processing significantly slower than inserts.
+(TOO_MANY_PARTS)
+```
+
+Or:
+
+```text
+Code: 252. DB::Exception: Too many parts (10004) in all partitions in total in table 'default.table_name'.
+This indicates wrong choice of partition key. The threshold can be modified with 'max_parts_in_total' setting.
+(TOO_MANY_PARTS)
+```
+
+**Most common causes:**
+1. **Too many small inserts** - Inserting data row-by-row or with very high frequency
+2. **Wrong partition key choice** - Daily or hourly partitions creating thousands of partitions
+3. **Merge process can't keep up** - Heavy queries blocking merge threads or insufficient resources
+4. **Small insert batches** - Each insert creating a new part that needs merging
+
+**Quick diagnostic:**
+
+```sql
+-- Check parts per partition
+SELECT
+ partition,
+ count() AS parts,
+ sum(rows) AS rows,
+ formatReadableSize(sum(bytes_on_disk)) AS size
+FROM system.parts
+WHERE active AND table = 'your_table'
+GROUP BY partition
+ORDER BY parts DESC
+LIMIT 10;
+
+-- Check merge activity
+SELECT
+ table,
+ elapsed,
+ progress,
+ num_parts,
+ result_part_name
+FROM system.merges;
+```
+
+**Quick fixes:**
+
+```sql
+-- 1. Manually trigger merges
+OPTIMIZE TABLE your_table FINAL;
+
+-- 2. Temporarily increase limit (emergency only)
+ALTER TABLE your_table
+MODIFY SETTING parts_to_throw_insert = 600;
+
+-- 3. Check and kill heavy queries blocking merges
+SELECT query_id, query, elapsed
+FROM system.processes
+WHERE elapsed > 300;
+
+KILL QUERY WHERE query_id = 'problem-query-id';
+```
+
+**Long-term solution: Fix your insert pattern!**
+- Batch inserts: 10K-500K rows per INSERT
+- Frequency: 1 insert every 1-2 seconds (maximum)
+- Use Buffer tables if you need more frequent small inserts
+- Use [asynchronous inserts](/optimize/asynchronous-inserts) (see the example after this list)
+
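+If the application cannot batch on its side, asynchronous inserts let the server do the batching before writing a part. A minimal sketch (table and column names are placeholders):
+
+```sql
+-- Enable server-side batching for the session
+SET async_insert = 1;
+SET wait_for_async_insert = 1;  -- return only after the buffered data is written
+
+-- Small inserts are now buffered by the server and flushed as larger parts
+INSERT INTO your_table (timestamp, user_id, value) VALUES (now(), 42, 1.0);
+```
+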
+## Most common causes {#most-common-causes}
+
+### 1. **Too many small inserts (most common root cause)** {#too-many-small-inserts}
+
+Each `INSERT` statement creates a new data part on disk. ClickHouse merges these parts in the background, but if you insert too frequently, parts accumulate faster than they can be merged.
+
+**Examples of problematic patterns:**
+- Row-by-row inserts (one INSERT per row)
+- Inserts every second or multiple times per second
+- Very small batches (< 1,000 rows per INSERT)
+- Hundreds of concurrent INSERT queries
+
+**Why this happens:**
+
+A hypothetical example:
+
+```text
+Time Inserts/sec Parts Created Parts Merged Net Parts
+0:00 100 100 10 +90
+0:01 100 100 10 +180
+0:02 100 100 10 +270
+0:03 100 100 10 +360 -> Error!
+```
+
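+One way to confirm this pattern on a live system is to compare how many parts inserts create with how many merge operations run over the same window. A sketch using `system.part_log` (requires part logging to be enabled in the server configuration; the table name is a placeholder):
+
+```sql
+-- New parts created by inserts vs. merge operations, per minute
+SELECT
+    toStartOfMinute(event_time) AS minute,
+    countIf(event_type = 'NewPart') AS parts_created,
+    countIf(event_type = 'MergeParts') AS merge_operations
+FROM system.part_log
+WHERE table = 'your_table'
+  AND event_time > now() - INTERVAL 1 HOUR
+GROUP BY minute
+ORDER BY minute DESC;
+```
+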
+### 2. **Inappropriate partition key** {#inappropriate-partition-key}
+
+Using overly granular partition keys (daily, hourly, or by high-cardinality columns) creates too many partitions. Each partition has its own set of parts, multiplying the problem.
+
+**Problematic partition keys:**
+```sql
+-- Daily partitions (creates 365+ partitions per year)
+PARTITION BY toYYYYMMDD(date)
+
+-- Hourly partitions (creates 8,760+ partitions per year)
+PARTITION BY toStartOfHour(timestamp)
+
+-- High-cardinality column
+PARTITION BY user_id
+
+-- Better: monthly partitions (recommended)
+PARTITION BY toYYYYMM(date)
+
+-- Or no partitioning at all
+-- (omit the PARTITION BY clause)
+```
+
+### 3. **Merge process blocked or slowed** {#merge-process-blocked}
+
+Merges can be prevented or slowed by:
+- Heavy SELECT queries consuming all resources
+- Insufficient CPU or disk I/O
+- Mutations (ALTER operations) in progress
+- Parts with different projections that can't be merged
+- Maximum part size reached (parts won't merge further)
+
+### 4. **Wrong table engine or settings** {#wrong-table-engine}
+
+- Using special engines (AggregatingMergeTree, SummingMergeTree) with complex aggregations
+- Very large ORDER BY keys causing slow merges
+- `max_bytes_to_merge_at_max_space_in_pool` set too low
+- Insufficient background merge threads
+
+### 5. **Version-specific issues** {#version-specific-issues}
+
+- **Projection mismatch**: Parts with different projection sets cannot be merged (see error: "Parts have different projection sets")
+- **Small parts not merging**: Parts below minimum merge size threshold won't merge even when idle
+
+---
+
+## Common solutions {#common-solutions}
+
+### **1. Fix your insert pattern (PRIMARY SOLUTION)** {#fix-insert-pattern}
+
+This is the #1 fix for 99% of TOO_MANY_PARTS errors.
+
+**Recommended insert pattern:**
+- **Batch size**: 10,000 to 500,000 rows per INSERT
+- **Frequency**: 1 INSERT every 1-2 seconds
+- **Format**: Use bulk INSERT, not row-by-row
+
+```python
+# WRONG: Row-by-row inserts
+for row in data:
+ client.execute(f"INSERT INTO table VALUES ({row})")
+
+# CORRECT: Batch inserts
+batch_size = 50000
+for i in range(0, len(data), batch_size):
+ batch = data[i:i+batch_size]
+ client.execute("INSERT INTO table VALUES", batch)
+ time.sleep(1) # 1 second delay between batches
+```
+
+```bash
+# WRONG: Inserting files too quickly
+for file in *.csv; do
+ clickhouse-client --query="INSERT INTO table FORMAT CSV" < $file
+done
+
+# CORRECT: Add delays between inserts
+for file in *.csv; do
+ clickhouse-client --query="INSERT INTO table FORMAT CSV" < $file
+ sleep 1
+done
+```
+
+### **2. Use Buffer tables for high-frequency small inserts** {#use-buffer-tables}
+
+If you cannot change your application to batch inserts, use a Buffer table to accumulate data in memory before flushing to disk.
+
+```sql
+-- Create the main table
+CREATE TABLE main_table (
+ timestamp DateTime,
+ user_id UInt64,
+ value Float64
+) ENGINE = MergeTree()
+ORDER BY (user_id, timestamp);
+
+-- Create buffer table in front
+CREATE TABLE buffer_table AS main_table
+ENGINE = Buffer(
+ currentDatabase(), main_table,
+ 16, -- num_layers
+ 10, -- min_time (seconds)
+ 100, -- max_time (seconds)
+ 10000, -- min_rows
+ 1000000, -- max_rows
+ 10000000, -- min_bytes
+ 100000000 -- max_bytes
+);
+
+-- Application inserts into buffer_table
+INSERT INTO buffer_table VALUES (...);
+
+-- Queries can read from buffer_table (includes both buffered and persisted data)
+SELECT * FROM buffer_table;
+```
+
+**Buffer flushes when ANY condition is met:**
+- Time: Every 10-100 seconds
+- Rows: When 10,000-1,000,000 rows accumulated
+- Bytes: When 10MB-100MB accumulated
+
+### **3. Fix partition key (if applicable)** {#fix-partition-key}
+
+```sql
+-- Check current partitions
+SELECT
+ partition,
+ count() AS parts,
+ formatReadableSize(sum(bytes_on_disk)) AS size
+FROM system.parts
+WHERE active AND table = 'your_table'
+GROUP BY partition
+ORDER BY partition DESC
+LIMIT 20;
+
+-- If you see hundreds of partitions, you need to fix the partition key
+
+-- Create new table with better partitioning
+CREATE TABLE your_table_new AS your_table
+ENGINE = MergeTree()
+PARTITION BY toYYYYMM(date) -- Monthly instead of daily
+ORDER BY (user_id, date);
+
+-- Copy data
+INSERT INTO your_table_new SELECT * FROM your_table;
+
+-- Swap tables
+RENAME TABLE
+ your_table TO your_table_old,
+ your_table_new TO your_table;
+
+-- Drop old table after verification
+DROP TABLE your_table_old;
+```
+
+### **4. Manually trigger merges (emergency fix)** {#manually-trigger-merges}
+
+```sql
+-- Force merge all parts in a table
+OPTIMIZE TABLE your_table FINAL;
+
+-- For large tables, optimize specific partitions
+OPTIMIZE TABLE your_table PARTITION '202410' FINAL;
+
+-- On clusters
+OPTIMIZE TABLE your_table ON CLUSTER 'cluster_name' FINAL;
+```
+
+:::warning
+`OPTIMIZE TABLE FINAL` can be resource-intensive and block inserts.
+Use during low-traffic periods.
+:::
+
+### **5. Temporarily increase limits (emergency only - not a real fix)** {#temporarily-increase-limits}
+
+```sql
+-- Increase per-partition limit
+ALTER TABLE your_table
+MODIFY SETTING parts_to_throw_insert = 600; -- Default: 300
+
+-- Increase total parts limit
+ALTER TABLE your_table
+MODIFY SETTING max_parts_in_total = 20000; -- Default: 10000
+
+-- Increase delay threshold
+ALTER TABLE your_table
+MODIFY SETTING parts_to_delay_insert = 300; -- Default: 150
+```
+
+:::warning
+This is **not** a solution, it only buys time.
+You must fix the root cause (insert pattern or partition key).
+:::
+
+### **6. Check for blocking merges** {#check-blocking-merges}
+
+```sql
+-- Check if merges are running
+SELECT
+ database,
+ table,
+ elapsed,
+ progress,
+ num_parts,
+ total_size_bytes_compressed,
+ result_part_name,
+ merge_type
+FROM system.merges;
+
+-- Check for stuck mutations
+SELECT
+ database,
+ table,
+ mutation_id,
+ command,
+ create_time,
+ is_done,
+ latest_failed_part,
+ latest_fail_reason
+FROM system.mutations
+WHERE is_done = 0;
+
+-- Check merge thread activity
+SELECT *
+FROM system.metrics
+WHERE metric LIKE '%Merge%' OR metric LIKE '%BackgroundPool%';
+```
+
+### **7. Increase merge capacity** {#increase-merge-capacity}
+
+```xml
+<clickhouse>
+    <merge_tree>
+        <!-- Allow larger merged parts (here: 150 GiB) -->
+        <max_bytes_to_merge_at_max_space_in_pool>161061273600</max_bytes_to_merge_at_max_space_in_pool>
+    </merge_tree>
+
+    <!-- More background threads available for merges -->
+    <background_pool_size>16</background_pool_size>
+</clickhouse>
+```
+
+For ClickHouse Cloud users, contact support to adjust these settings.
+
+## Prevention tips {#prevention-tips}
+
+1. **Understand the parts model**: Every INSERT creates a new part. ClickHouse merges parts in the background. If inserts > merges, parts accumulate.
+
+2. **Follow the golden rule**: **One INSERT every 1-2 seconds, containing 10K-500K rows**.
+
+3. **Use appropriate partition keys**:
+ - Most tables: Monthly partitions or no partition
+ - Very large tables (> 1TB): Monthly is fine
+ - Don't partition by high-cardinality columns
+ - Guideline: < 1,000 total partitions
+
+4. **Use Buffer tables** if your application requires high-frequency small inserts.
+
+5. **Monitor parts regularly**:
+
+ ```sql
+ -- Daily monitoring query
+ SELECT
+ database,
+ table,
+ count() AS parts,
+ max(modification_time) AS latest_insert
+ FROM system.parts
+ WHERE active
+ GROUP BY database, table
+ HAVING parts > 100
+ ORDER BY parts DESC;
+ ```
+
+6. **Avoid inserting to too many partitions at once**: A single INSERT that touches > 100 partitions will be rejected (`max_partitions_per_insert_block`; see the example after this list).
+
+7. **Test your workload**: Before going to production, test your insert pattern to ensure merges keep up.
+
+8. **Scale appropriately**: If you legitimately need more than 500K rows/second, you need a distributed cluster, not setting adjustments.
+
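+For tip 6, the per-insert partition limit is an ordinary query-level setting, and the number of partitions a batch would touch can be estimated up front. A sketch, assuming a hypothetical `staging_table` and a `toYYYYMM(date)` partition key:
+
+```sql
+-- Current limit for a single INSERT block
+SELECT name, value
+FROM system.settings
+WHERE name = 'max_partitions_per_insert_block';
+
+-- Estimate how many partitions a pending batch would touch
+SELECT uniqExact(toYYYYMM(date)) AS partitions_touched
+FROM staging_table;
+```
+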
+## Understanding ClickHouse parts {#understanding-parts}
+
+**What is a "part"?**
+
+A part is a directory on disk containing:
+- Compressed data files and mark files for each column
+- Index files
+- Metadata files
+
+**Example:**
+
+```text
+/var/lib/clickhouse/data/default/my_table/
+├── 202410_1_1_0/ <- Part 1
+├── 202410_2_2_0/ <- Part 2
+├── 202410_3_3_0/ <- Part 3
+└── 202410_1_3_1/ <- Merged part (contains parts 1, 2, 3)
+```
+
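+The directory name encodes the partition ID, the range of insert blocks the part covers, and its merge level (how many merge generations produced it). The same fields are exposed in `system.parts`:
+
+```sql
+-- Decode part names: partition, block range, and merge level
+SELECT
+    name,
+    partition_id,
+    min_block_number,
+    max_block_number,
+    level,   -- 0 = freshly inserted part, higher = produced by merges
+    rows
+FROM system.parts
+WHERE active AND table = 'my_table'
+ORDER BY partition_id, min_block_number;
+```
+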
+**The merge lifecycle:**
+1. Each INSERT creates a new part
+2. Background threads select parts to merge based on size and age
+3. Merged part replaces original parts
+4. Old parts are deleted after a delay
+
+**Why too many parts is bad:**
+- Slow SELECT queries (must read from many files)
+- Slow server startup (must enumerate all parts)
+- Filesystem limits (too many inodes)
+- Memory pressure (tracking metadata for each part)
+
+**Settings that control parts** (check the values in effect with the query below):
+- `parts_to_delay_insert`: 150 (default) - Start slowing down inserts
+- `parts_to_throw_insert`: 300 (default per-partition) - Throw error
+- `max_parts_in_total`: 10,000 (default) - Total across all partitions
+
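+These thresholds are MergeTree-level settings. The server-wide values in effect can be read from `system.merge_tree_settings` (per-table overrides appear in `SHOW CREATE TABLE`):
+
+```sql
+-- Thresholds that control when inserts are delayed or rejected
+SELECT name, value, changed
+FROM system.merge_tree_settings
+WHERE name IN ('parts_to_delay_insert', 'parts_to_throw_insert', 'max_parts_in_total');
+```
+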
+## Debugging steps {#debugging-steps}
+
+### **1. Identify which table and partition** {#identify-table-partition}
+
+```sql
+-- Find tables with most parts
+SELECT
+ database,
+ table,
+ count() AS total_parts,
+ countIf(active) AS active_parts
+FROM system.parts
+GROUP BY database, table
+ORDER BY active_parts DESC
+LIMIT 10;
+
+-- Find partitions with most parts
+SELECT
+ database,
+ table,
+ partition,
+ count() AS parts,
+ sum(rows) AS rows,
+ formatReadableSize(sum(bytes_on_disk)) AS size
+FROM system.parts
+WHERE active
+GROUP BY database, table, partition
+HAVING parts > 50
+ORDER BY parts DESC
+LIMIT 20;
+```
+
+### **2. Check recent insert patterns** {#check-insert-patterns}
+
+```sql
+-- Analyze recent inserts
+SELECT
+ toStartOfMinute(event_time) AS minute,
+ count() AS num_inserts,
+    sum(written_rows) AS total_rows,
+    avg(written_rows) AS avg_rows_per_insert
+FROM system.query_log
+WHERE type = 'QueryFinish'
+ AND query_kind = 'Insert'
+ AND event_time > now() - INTERVAL 1 HOUR
+GROUP BY minute
+ORDER BY minute DESC
+LIMIT 20;
+```
+
+### **3. Check merge activity** {#check-merge-activity}
+
+```sql
+-- Current merges
+SELECT * FROM system.merges;
+
+-- Recent merge history
+SELECT
+ event_time,
+ duration_ms,
+ table,
+ partition_id,
+    rows,
+    size_in_bytes,
+ peak_memory_usage
+FROM system.part_log
+WHERE event_type = 'MergeParts'
+ AND event_time > now() - INTERVAL 1 HOUR
+ORDER BY event_time DESC
+LIMIT 20;
+
+-- Check for merge failures
+SELECT
+ event_time,
+ table,
+ error,
+ exception
+FROM system.part_log
+WHERE event_type = 'MergeParts'
+ AND error > 0
+ORDER BY event_time DESC
+LIMIT 10;
+```
+
+### **4. Identify blocking issues** {#identify-blocking-issues}
+
+```sql
+-- Check for parts that can't merge due to projection differences
+-- Look in system.text_log for messages like:
+-- "Can't merge parts ... Parts have different projection sets"
+
+SELECT
+ event_time,
+ message
+FROM system.text_log
+WHERE message LIKE '%Can''t merge parts%'
+ AND event_time > now() - INTERVAL 1 HOUR
+ORDER BY event_time DESC
+LIMIT 20;
+```
+
+## Related error codes {#related-error-codes}
+
+- [Error 241: `MEMORY_LIMIT_EXCEEDED`](/troubleshooting/error-codes/241_MEMORY_LIMIT_EXCEEDED) - Often related, heavy merges consuming memory
+- [Error 242: `TABLE_IS_READ_ONLY`](/troubleshooting/error-codes/242_TABLE_IS_READ_ONLY) - Can prevent merges from running
diff --git a/docs/troubleshooting/error_codes/258_UNION_ALL_RESULT_STRUCTURES_MISMATCH.md b/docs/troubleshooting/error_codes/258_UNION_ALL_RESULT_STRUCTURES_MISMATCH.md
new file mode 100644
index 00000000000..d4775caa587
--- /dev/null
+++ b/docs/troubleshooting/error_codes/258_UNION_ALL_RESULT_STRUCTURES_MISMATCH.md
@@ -0,0 +1,343 @@
+---
+slug: /troubleshooting/error-codes/258_UNION_ALL_RESULT_STRUCTURES_MISMATCH
+sidebar_label: '258 UNION_ALL_RESULT_STRUCTURES_MISMATCH'
+doc_type: 'reference'
+keywords: ['error codes', 'UNION_ALL_RESULT_STRUCTURES_MISMATCH', '258', 'UNION ALL', 'column mismatch']
+title: '258 UNION_ALL_RESULT_STRUCTURES_MISMATCH'
+description: 'ClickHouse error code - 258 UNION_ALL_RESULT_STRUCTURES_MISMATCH'
+---
+
+# Error 258: UNION_ALL_RESULT_STRUCTURES_MISMATCH
+
+:::tip
+This error occurs when the result sets of queries combined with `UNION ALL` have incompatible structures—different number of columns, different column types, or mismatched column names. All SELECT queries in a UNION ALL must return the same number of columns with compatible types.
+:::
+
+## Quick reference {#quick-reference}
+
+**What you'll see:**
+
+```text
+Code: 258. DB::Exception: UNION ALL result structures mismatch.
+(UNION_ALL_RESULT_STRUCTURES_MISMATCH)
+```
+
+Or in recent versions, this may manifest as:
+
+```text
+Code: 352. DB::Exception: Block structure mismatch in (columns with identical name must have identical structure) stream: different types:
+NULL Nullable(String) Nullable(size = 0, String(size = 0), UInt8(size = 0))
+NULL Nullable(Nothing) Const(size = 0, Nullable(size = 1, Nothing(size = 1), UInt8(size = 1))).
+(AMBIGUOUS_COLUMN_NAME)
+```
+
+**Most common causes:**
+1. **Different number of columns** in UNION ALL queries
+2. **Incompatible column types** (e.g., String vs Int64)
+3. **NULL type inference issues** (NULL in one query, typed value in another)
+4. **Column order mismatch** between SELECT statements
+
+**Quick diagnostic:**
+
+```sql
+-- Test each SELECT separately first
+SELECT col1, col2 FROM table1; -- Check column count and types
+SELECT col1, col2 FROM table2; -- Check column count and types
+
+-- Then combine with UNION ALL
+SELECT col1, col2 FROM table1
+UNION ALL
+SELECT col1, col2 FROM table2;
+```
+
+**Quick fixes:**
+
+```sql
+-- Error: Different number of columns
+SELECT name, age FROM users
+UNION ALL
+SELECT name FROM customers;
+
+-- Fix: Match column counts
+SELECT name, age FROM users
+UNION ALL
+SELECT name, NULL AS age FROM customers;
+
+-- Error: Different types
+SELECT name, age FROM users -- age is Int64
+UNION ALL
+SELECT name, signup_date FROM customers; -- signup_date is DateTime
+
+-- Fix: Cast to compatible types
+SELECT name, age FROM users
+UNION ALL
+SELECT name, toInt64(0) AS age FROM customers;
+
+-- Error: NULL type ambiguity
+SELECT NULL, NULL
+UNION ALL
+SELECT 'xxx', NULL;
+
+-- Fix: Explicitly type NULLs
+SELECT NULL::Nullable(String), NULL
+UNION ALL
+SELECT 'xxx', NULL;
+```
+
+## Most common causes {#most-common-causes}
+
+### 1. **Different number of columns** {#different-number-of-columns}
+
+The most straightforward cause—each SELECT in the UNION ALL must return the same number of columns.
+
+```sql
+-- Error
+SELECT id, name, email FROM users
+UNION ALL
+SELECT id, name FROM customers; -- Missing 'email' column
+
+-- Fix
+SELECT id, name, email FROM users
+UNION ALL
+SELECT id, name, NULL AS email FROM customers;
+```
+
+### 2. **Incompatible column types** {#incompatible-column-types}
+
+Even if column names match, types must be compatible or convertible.
+
+```sql
+-- Error: String vs Int64
+SELECT 'text' AS col1
+UNION ALL
+SELECT 123 AS col1;
+
+-- Fix: Cast to common type
+SELECT 'text' AS col1
+UNION ALL
+SELECT toString(123) AS col1;
+
+-- Or
+SELECT CAST('text' AS String) AS col1
+UNION ALL
+SELECT CAST(123 AS String) AS col1;
+```
+
+### 3. **NULL type inference issues (version-specific)** {#null-type-inference-issues}
+
+Before version 21.9, NULL handling was more lenient. Starting from 21.9+, ClickHouse is stricter about NULL type inference.
+
+```sql
+-- Error in 21.9+ (worked in older versions)
+SELECT NULL, NULL
+UNION ALL
+SELECT 'xxx', NULL;
+
+-- Error message:
+-- different types:
+-- NULL Nullable(String) ...
+-- NULL Nullable(Nothing) ...
+
+-- Fix: Explicitly type the NULL
+SELECT NULL::Nullable(String), NULL
+UNION ALL
+SELECT 'xxx', NULL;
+
+-- Or use CAST
+SELECT CAST(NULL AS Nullable(String)), NULL
+UNION ALL
+SELECT 'xxx', NULL;
+```
+
+### 4. **Column order mismatch** {#column-order-mismatch}
+
+Column positions matter, not names. UNION ALL matches columns by position.
+
+```sql
+-- This combines mismatched columns
+SELECT name, age FROM users -- Position 1: name, Position 2: age
+UNION ALL
+SELECT age, name FROM employees; -- Position 1: age, Position 2: name
+
+-- Result: age values in name column, name values in age column!
+
+-- Fix: Match column order
+SELECT name, age FROM users
+UNION ALL
+SELECT name, age FROM employees; -- Correct order
+```
+
+### 5. **Projection optimization conflicts (24.10+ version-specific)** {#projection-optimization-conflicts}
+
+In versions 24.10+, there's a known issue where projection optimization can cause block structure mismatches in UNION operations, particularly with:
+- Tables that have PROJECTION defined
+- ARRAY JOIN operations
+- Complex WHERE clauses with projections
+
+```sql
+-- May fail in 24.10-24.12 with projections
+SELECT model_name
+FROM frame_events
+ARRAY JOIN detections.model_name AS model_name
+WHERE event_time >= '2024-02-01'
+ AND model_name != ''
+GROUP BY model_name;
+
+-- Workaround: Disable projection optimization
+SELECT model_name
+FROM frame_events
+ARRAY JOIN detections.model_name AS model_name
+WHERE event_time >= '2024-02-01'
+ AND model_name != ''
+GROUP BY model_name
+SETTINGS optimize_use_projections = 0;
+```
+
+## Common solutions {#common-solutions}
+
+### **1. Match column counts** {#match-column-counts}
+
+```sql
+-- Ensure all queries return same number of columns
+SELECT col1, col2, col3 FROM table1
+UNION ALL
+SELECT col1, col2, NULL AS col3 FROM table2; -- Add NULL for missing columns
+```
+
+### **2. Cast to compatible types** {#cast-to-compatible-types}
+
+```sql
+-- Different numeric types
+SELECT id::UInt64, value::Float64 FROM table1
+UNION ALL
+SELECT id::UInt64, value::Float64 FROM table2;
+
+-- String and numeric
+SELECT name FROM users
+UNION ALL
+SELECT toString(user_id) FROM logs;
+
+-- DateTime and Date
+SELECT created_at::DateTime FROM orders
+UNION ALL
+SELECT toDateTime(order_date) FROM archived_orders;
+```
+
+### **3. Fix NULL type ambiguity** {#fix-null-type-ambiguity}
+
+```sql
+-- Method 1: Explicit type casting
+SELECT
+ NULL::Nullable(String) AS name,
+ NULL::Nullable(Int64) AS age
+UNION ALL
+SELECT 'John', 30;
+
+-- Method 2: Use actual values in first query
+SELECT 'placeholder', 0 AS age WHERE 1=0 -- Returns no rows but establishes types
+UNION ALL
+SELECT name, age FROM users;
+
+-- Method 3: Reorder queries (put typed query first)
+SELECT name, age FROM users
+UNION ALL
+SELECT NULL::Nullable(String), NULL::Nullable(Int64);
+```
+
+### **4. Use UNION DISTINCT mode for automatic type coercion** {#use-union-distinct}
+
+```sql
+-- UNION (without ALL) applies type coercion more aggressively
+SELECT 'text' AS col
+UNION
+SELECT 123;
+
+-- But note: UNION removes duplicates (slower)
+-- For performance, prefer UNION ALL with explicit casts
+```
+
+### **5. Verify column order** {#verify-column-order}
+
+```sql
+-- Wrong: column positions don't match
+SELECT first_name, last_name, age FROM users
+UNION ALL
+SELECT age, first_name, last_name FROM archived_users;
+
+-- Correct: match positions
+SELECT first_name, last_name, age FROM users
+UNION ALL
+SELECT first_name, last_name, age FROM archived_users;
+
+-- Or explicitly reorder
+SELECT first_name, last_name, age FROM users
+UNION ALL
+SELECT
+ first_name,
+ last_name,
+ age
+FROM archived_users;
+```
+
+### **6. Disable projection optimization (24.10+ workaround)** {#disable-projection-optimization}
+
+If you're encountering "Block structure mismatch in UnionStep stream" errors related to projections:
+
+```sql
+-- Disable projection optimization for the query
+SELECT * FROM table_with_projection
+WHERE condition
+SETTINGS optimize_use_projections = 0;
+
+-- Or disable globally (not recommended)
+SET optimize_use_projections = 0;
+
+-- Check if table has projections
+SELECT
+ database,
+ table,
+ name AS projection_name,
+ type,
+ sorting_key
+FROM system.projections
+WHERE table = 'your_table';
+```
+
+### **7. Debug with DESCRIBE** {#debug-with-describe}
+
+```sql
+-- Check structure of each query
+DESCRIBE (SELECT col1, col2 FROM table1);
+DESCRIBE (SELECT col1, col2 FROM table2);
+
+-- Compare outputs to find mismatches
+```
+
+## Prevention tips {#prevention-tips}
+
+1. **Always match column counts**: Every SELECT in UNION ALL must return the same number of columns.
+
+2. **Be explicit with types**: Use explicit casts rather than relying on implicit type conversion, especially with NULL values.
+
+3. **Use consistent column order**: Column positions matter more than names in UNION ALL.
+
+4. **Test each query separately**: Before combining with UNION ALL, verify each SELECT works independently and returns expected types.
+
+5. **Avoid NULL-only queries**: Don't use `SELECT NULL, NULL` without explicit type casting.
+
+6. **Document your schema**: When combining data from multiple tables, document expected column types in comments.
+
+7. **Use table aliases for clarity**:
+ ```sql
+ SELECT u.name, u.age FROM users u
+ UNION ALL
+ SELECT c.name, c.age FROM customers c;
+ ```
+
+8. **Consider using UNION instead of UNION ALL** if you need automatic type coercion (but be aware of performance implications).
+
+## Related error codes {#related-error-codes}
+
+- [Error 49: `LOGICAL_ERROR`](/troubleshooting/error-codes/049_LOGICAL_ERROR) - Related to internal block structure mismatches
+- [Error 352: `AMBIGUOUS_COLUMN_NAME`](/troubleshooting/error-codes/352_AMBIGUOUS_COLUMN_NAME) - Can occur with UNION and column name conflicts
+- [Error 386: `NO_COMMON_TYPE`](/troubleshooting/error-codes/386_NO_COMMON_TYPE) - When types cannot be unified
diff --git a/docs/troubleshooting/error_codes/279_ALL_CONNECTION_TRIES_FAILED.md b/docs/troubleshooting/error_codes/279_ALL_CONNECTION_TRIES_FAILED.md
new file mode 100644
index 00000000000..18b7e9dee41
--- /dev/null
+++ b/docs/troubleshooting/error_codes/279_ALL_CONNECTION_TRIES_FAILED.md
@@ -0,0 +1,599 @@
+---
+slug: /troubleshooting/error-codes/279_ALL_CONNECTION_TRIES_FAILED
+sidebar_label: '279 ALL_CONNECTION_TRIES_FAILED'
+doc_type: 'reference'
+keywords: ['error codes', 'ALL_CONNECTION_TRIES_FAILED', '279']
+title: '279 ALL_CONNECTION_TRIES_FAILED'
+description: 'ClickHouse error code - 279 ALL_CONNECTION_TRIES_FAILED'
+---
+
+# Error 279: ALL_CONNECTION_TRIES_FAILED
+
+:::tip
+This error occurs when ClickHouse cannot establish a connection to any of the available replicas or shards after exhausting all connection attempts.
+It indicates a complete failure to connect to remote nodes needed for distributed query execution, parallel replicas, or cluster operations.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **All replicas unavailable or unreachable**
+ - All remote servers down or restarting
+ - Network partition isolating all replicas
+ - All connection attempts timing out
+ - DNS resolution failing for all hosts
+
+2. **Parallel replicas with stale connections**
+ - First query after idle period using stale connection pool
+ - Connection pool contains dead connections to replicas
+ - Network configuration causing connections to timeout after inactivity (typically 1+ hour)
+ - Known issue in versions before 24.5.1.22937 and 24.7.1.5426
+
+3. **Pod restarts during rolling updates**
+ - Load balancer routing new connections to terminating pods
+ - Replicas marked as `ready: true, terminating: true` still receiving traffic
+ - Delay between pod termination and load balancer deregistration (can be 15-20 seconds)
+ - Multiple replicas restarting simultaneously
+
+4. **Distributed query to offline cluster nodes**
+ - Remote shard servers not running
+ - Network connectivity issues to cluster nodes
+ - Firewall blocking inter-node communication
+ - Wrong hostnames in cluster configuration
+
+5. **Connection refused errors**
+ - ClickHouse server not listening on port
+ - Server crashed or killed
+ - Port not open in firewall
+ - Service not started yet after deployment
+
+6. **`clusterAllReplicas()` queries during disruption**
+ - Queries using [`clusterAllReplicas()`](/sql-reference/table-functions/cluster) function
+ - Some replicas unavailable during query execution
+ - Not using [`skip_unavailable_shards`](/operations/settings/settings#skip_unavailable_shards) setting
+
+## Common solutions {#common-solutions}
+
+**1. For parallel replicas stale connection issue**
+
+Workaround (until fixed in newer versions):
+
+```sql
+-- Periodically execute query to refresh connection pool
+SELECT 1 FROM your_table
+SETTINGS
+ max_parallel_replicas = 60, -- >= cluster size
+ allow_experimental_parallel_reading_from_replicas = 1,
+ cluster_for_parallel_replicas = 'default';
+
+-- Or execute as retry after ALL_CONNECTION_TRIES_FAILED error
+```
+
+**Permanent fix:** Upgrade to ClickHouse 24.5.1.22937, 24.7.1.5426, or later.
+
+**2. Skip unavailable shards/replicas**
+
+```sql
+-- Allow query to proceed even if some replicas unavailable
+SET skip_unavailable_shards = 1;
+
+-- For clusterAllReplicas queries
+SELECT * FROM clusterAllReplicas('default', system.tables)
+SETTINGS skip_unavailable_shards = 1;
+```
+
+**3. Verify cluster connectivity**
+
+```sql
+-- Test connection to all cluster nodes
+SELECT
+ hostName() AS host,
+ count() AS test
+FROM clusterAllReplicas('your_cluster', system.one);
+
+-- Check cluster configuration
+SELECT *
+FROM system.clusters
+WHERE cluster = 'your_cluster';
+```
+
+**4. Check replica status**
+
+```sql
+-- For replicated tables, check replica health
+SELECT
+ database,
+ table,
+ is_leader,
+ is_readonly,
+ total_replicas,
+ active_replicas
+FROM system.replicas;
+
+-- Check for replication lag
+SELECT
+ database,
+ table,
+ absolute_delay,
+ queue_size
+FROM system.replicas
+WHERE absolute_delay > 60 OR queue_size > 100;
+```
+
+**5. Verify servers are running**
+
+```bash
+# Check if ClickHouse is listening on port
+telnet server-hostname 9000
+
+# Or using nc
+nc -zv server-hostname 9000
+
+# Kubernetes - check pod status
+kubectl get pods -n your-namespace
+kubectl get endpoints -n your-namespace
+```
+
+**6. Configure connection retry settings**
+
+```sql
+-- Increase connection attempt count
+SET connections_with_failover_max_tries = 5;
+
+-- Increase timeout for failover connections
+SET connect_timeout_with_failover_ms = 3000;
+
+-- For distributed queries
+SET distributed_connections_pool_size = 1024;
+```
+
+**7. Implement client-side retry logic**
+
+```python
+# Python example (assumes an existing ClickHouse client object, e.g. from clickhouse_connect)
+import time
+
+def execute_with_retry(query, max_retries=3):
+ for attempt in range(max_retries):
+ try:
+ # For parallel replicas workaround
+ if attempt > 0:
+ # Refresh connection pool
+ client.query(
+ "SELECT 1",
+ settings={
+ 'max_parallel_replicas': 60,
+ 'allow_experimental_parallel_reading_from_replicas': 1
+ }
+ )
+ return client.query(query)
+ except Exception as e:
+ if 'ALL_CONNECTION_TRIES_FAILED' in str(e) or '279' in str(e):
+ if attempt < max_retries - 1:
+ time.sleep(2 ** attempt) # Exponential backoff
+ continue
+ raise
+```
+
+## Common scenarios {#common-scenarios}
+
+**Scenario 1: Parallel replicas stale connections**
+
+```text
+Error: Code: 279. DB::Exception: Can't connect to any replica chosen
+for query execution: While executing Remote. (ALL_CONNECTION_TRIES_FAILED)
+```
+
+**Cause:** First query after idle period; connection pool has stale connections (bug in versions < 24.5.1.22937).
+
+**Solution:**
+- Upgrade to 24.5.1.22937 / 24.7.1.5426 or later (permanent fix)
+- Execute dummy query with `max_parallel_replicas >= cluster_size` to refresh pool
+- Implement retry logic that refreshes connection pool
+
+**Scenario 2: All replicas down**
+
+```text
+Error: Code: 279. All connection tries failed. Log:
+Code: 210. Connection refused (server:9000)
+Code: 210. Connection refused (server:9000)
+Code: 210. Connection refused (server:9000)
+```
+
+**Cause:** All replicas in cluster are down or not accepting connections.
+
+**Solution:**
+- Check if ClickHouse servers are running
+- Verify services are accessible on port 9000
+- Check for pod/server restarts
+- Review cluster configuration
+
+**Scenario 3: Rolling restart with load balancer delay**
+
+```text
+Error: Connection failures during rolling restart
+Multiple failed attempts to same terminating replica
+```
+
+**Cause:** Load balancer still routing to pods marked `ready: true, terminating: true` (15-20 second delay before marked `ready: false`).
+
+**Solution:**
+- Implement retry logic with exponential backoff
+- Use connection pooling that handles connection failures
+- Wait for fix to prestop hooks (ongoing work)
+- Design applications to tolerate temporary connection failures
+
+**Scenario 4: clusterAllReplicas() with unavailable replicas**
+
+```text
+Error: ALL_CONNECTION_TRIES_FAILED in clusterAllReplicas query
+```
+
+**Cause:** Using `clusterAllReplicas()` when one or more replicas unavailable.
+
+**Solution:**
+
+```sql
+-- Enable skip_unavailable_shards
+SELECT * FROM clusterAllReplicas('default', system.tables)
+SETTINGS skip_unavailable_shards = 1;
+
+-- Or use cluster() with proper shard selection
+SELECT * FROM cluster('default', system.tables)
+WHERE shard_num = 1;
+```
+
+**Scenario 5: Distributed table with dead shards**
+
+```text
+Error: All connection tries failed during distributed query
+```
+
+**Cause:** Distributed table references shard that is down.
+
+**Solution:**
+
+```sql
+-- Skip unavailable shards
+SELECT * FROM distributed_table
+SETTINGS skip_unavailable_shards = 1;
+
+-- Check which shards are unreachable
+SELECT * FROM system.clusters WHERE cluster = 'your_cluster';
+
+-- Fix cluster configuration to remove dead nodes
+```
+
+## Prevention tips {#prevention-tips}
+
+1. **Keep ClickHouse updated:** Upgrade to 24.5+ for parallel replicas fix
+2. **Use skip_unavailable_shards:** Allow queries to proceed with partial data (a profile sketch follows this list)
+3. **Monitor cluster health:** Track replica availability and connectivity
+4. **Implement retry logic:** Handle transient connection failures gracefully
+5. **Test failover:** Regularly verify cluster failover mechanisms work
+6. **Configure appropriate timeouts:** Match connection timeouts to network conditions
+7. **Plan for rolling updates:** Design applications to handle temporary unavailability
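+
+If you want tips 2 and 6 applied by default rather than per query, the same settings can be placed in a user profile. This is only a sketch; the file name and values are assumptions to adapt to your deployment:
+
+```xml
+<!-- users.xml or a file under users.d/ (values are examples) -->
+<clickhouse>
+    <profiles>
+        <default>
+            <skip_unavailable_shards>1</skip_unavailable_shards>
+            <connections_with_failover_max_tries>5</connections_with_failover_max_tries>
+            <connect_timeout_with_failover_ms>3000</connect_timeout_with_failover_ms>
+        </default>
+    </profiles>
+</clickhouse>
+```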
+
+## Debugging steps {#debugging-steps}
+
+1. **Identify which replicas failed:**
+
+ ```sql
+ SELECT
+ event_time,
+ query_id,
+ exception
+ FROM system.query_log
+ WHERE exception_code = 279
+ AND event_date >= today() - 1
+ ORDER BY event_time DESC
+ LIMIT 10;
+ ```
+
+2. **Check cluster connectivity:**
+
+ ```sql
+ -- Test each shard/replica
+ SELECT
+ cluster,
+ shard_num,
+ replica_num,
+ host_name,
+ port,
+ is_local
+ FROM system.clusters
+ WHERE cluster = 'default';
+
+ -- Try to query each node
+ SELECT * FROM clusterAllReplicas('default', system.one);
+ ```
+
+3. **Check for parallel replicas settings:**
+
+ ```sql
+ SELECT
+ query_id,
+ Settings['allow_experimental_parallel_reading_from_replicas'] AS parallel_replicas,
+ Settings['max_parallel_replicas'] AS max_replicas,
+ exception
+ FROM system.query_log
+ WHERE exception_code = 279
+ ORDER BY event_time DESC
+ LIMIT 5;
+ ```
+
+4. **Test individual replica connections:**
+
+ ```bash
+ # Test each replica manually
+ telnet replica1-hostname 9000
+ telnet replica2-hostname 9000
+
+ # Or with clickhouse-client
+ clickhouse-client --host replica1-hostname --query "SELECT 1"
+ ```
+
+5. **Check for pod restarts (Kubernetes):**
+
+ ```bash
+ # Check pod status and restarts
+ kubectl get pods -n your-namespace
+
+ # Check events during error timeframe
+ kubectl get events -n your-namespace \
+ --sort-by='.lastTimestamp' | grep Killing
+ ```
+
+6. **Review `system.errors` for connection details:**
+
+ ```sql
+ SELECT
+ event_time,
+ name,
+ value,
+ last_error_message
+ FROM system.errors
+ WHERE name = 'ALL_CONNECTION_TRIES_FAILED'
+ ORDER BY last_error_time DESC;
+ ```
+
+## Special considerations {#special-considerations}
+
+**For parallel replicas (experimental feature):**
+- Known bug in versions before 24.5.1.22937 / 24.7.1.5426
+- Stale connections in pool after inactivity
+- First query after idle period likely to fail
+- Subsequent queries succeed after pool refresh
+- Settings [`skip_unavailable_shards`](/operations/settings/settings#skip_unavailable_shards) and [`use_hedged_requests`](/operations/settings/settings#use_hedged_requests) are no longer needed when parallel replicas are enabled
+
+**For distributed queries:**
+- Error means ALL configured replicas failed
+- Each replica has multiple connection attempts
+- Full error message shows individual NETWORK_ERROR (210) attempts (see the query sketch below)
+- Check both network and server availability
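+
+A sketch for inspecting those per-replica attempts (the query ID is a placeholder):
+
+```sql
+-- Pull the full exception text of the failed query; the per-replica
+-- NETWORK_ERROR (210) attempts are listed inside it
+SELECT exception
+FROM system.query_log
+WHERE query_id = 'your_query_id'
+  AND type = 'ExceptionWhileProcessing';
+```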
+
+**For `clusterAllReplicas()`:**
+- Queries all replicas in cluster
+- Failure expected if any replica unavailable
+- Use `skip_unavailable_shards = 1` to proceed with available replicas
+- Common during rolling updates or maintenance
+
+**For ClickHouse Cloud rolling updates:**
+- Pods marked as terminating can still show `ready: true` for 15-20 seconds
+- Load balancer may route new connections to terminating pods during this window
+- Graceful shutdown waits up to 1 hour for running queries
+- Design clients to retry connection failures
+
+**Load balancer behavior:**
+- Connection established to load balancer, not directly to replica
+- Each query may route to different replica
+- Terminating pods remain in load balancer briefly after shutdown starts
+- Client retry may succeed if routed to healthy replica
+
+## Parallel replicas specific fix {#parallel-replicas-fix}
+
+**Problem:** Stale connections in cluster connection pools cause first query after inactivity to fail.
+
+**Affected versions:** Before 24.5.1.22937 and 24.7.1.5426
+
+**Fix:** [PR 67389](https://github.com/ClickHouse/ClickHouse/pull/67389)
+
+**Workaround until upgraded:**
+
+```sql
+-- Execute this periodically or as retry after error
+SELECT 1
+SETTINGS
+ max_parallel_replicas = 100, -- >= number of replicas
+ allow_experimental_parallel_reading_from_replicas = 1,
+ cluster_for_parallel_replicas = 'default';
+```
+
+## Connection retry settings {#connection-retry-settings}
+
+```sql
+-- Maximum connection attempts per replica
+SET connections_with_failover_max_tries = 3;
+
+-- Timeout for each connection attempt (milliseconds)
+SET connect_timeout_with_failover_ms = 1000;
+SET connect_timeout_with_failover_secure_ms = 1000;
+
+-- Connection timeout (seconds)
+SET connect_timeout = 10;
+
+-- For hedged requests (parallel connection attempts)
+SET use_hedged_requests = 1; -- Not needed for parallel replicas
+SET hedged_connection_timeout_ms = 100;
+```
+
+## Cluster configuration best practices {#cluster-best-practices}
+
+1. **Remove dead nodes from configuration:**
+
+   ```xml
+   <clickhouse>
+       <remote_servers>
+           <your_cluster>
+               <shard>
+                   <replica>
+                       <host>active-server.domain.com</host>
+                       <port>9000</port>
+                   </replica>
+               </shard>
+           </your_cluster>
+       </remote_servers>
+   </clickhouse>
+   ```
+
+2. **Use internal_replication:**
+
+   ```xml
+   <shard>
+       <internal_replication>true</internal_replication>
+       ...
+   </shard>
+   ```
+
+3. **Configure failover properly:**
+ - Ensure cluster has multiple replicas per shard
+   - Use appropriate `load_balancing` strategy (see the sketch below)
+ - Test failover by stopping one replica
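+
+   As a sketch of the `load_balancing` point, the strategy can be set per session or per query; the value below is only an example and should be chosen for your topology:
+
+   ```sql
+   -- Prefer replicas whose hostnames are closest to this server's hostname
+   SET load_balancing = 'nearest_hostname';
+   -- Other built-in strategies include 'random', 'in_order', 'first_or_random', 'round_robin'
+   ```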
+
+## Client implementation recommendations {#client-recommendations}
+
+**For JDBC clients:**
+
+```java
+// Use connection pooling
+ClickHouseDataSource dataSource = new ClickHouseDataSource(url, properties);
+
+// Implement retry logic with exponential backoff
+public void executeWithRetry(String query, int maxRetries)
+        throws SQLException, InterruptedException {
+    for (int attempt = 0; attempt < maxRetries; attempt++) {
+        // Get a new connection on each retry
+        try (Connection conn = dataSource.getConnection();
+             Statement stmt = conn.createStatement()) {
+            stmt.execute(query);
+            return; // Success
+        } catch (SQLException e) {
+            if (e.getMessage().contains("ALL_CONNECTION_TRIES_FAILED")
+                    && attempt < maxRetries - 1) {
+                Thread.sleep(1000L * (long) Math.pow(2, attempt)); // Back off before retrying
+                continue;
+            }
+            throw e;
+        }
+    }
+}
+```
+
+**For distributed queries:**
+- Expect temporary failures during rolling updates
+- Implement exponential backoff retry
+- Use `skip_unavailable_shards` for non-critical queries
+- Monitor cluster health before sending queries
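+
+For the last point, a lightweight pre-flight check (a sketch; the cluster name is a placeholder) can count how many replicas currently answer:
+
+```sql
+-- Each reachable replica contributes one row from system.one
+SELECT count() AS reachable_replicas
+FROM clusterAllReplicas('default', system.one)
+SETTINGS skip_unavailable_shards = 1;
+```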
+
+## Distinguishing scenarios {#distinguishing-scenarios}
+
+**Parallel replicas issue:**
+- First query after idle period
+- Subsequent queries succeed
+- Versions before 24.5.1 / 24.7.1
+- Error mentions "replica chosen for query execution"
+
+**Actual connectivity issue:**
+- Consistent failures, not just first query
+- Network or server problems
+- Individual 210 errors show "Connection refused" or "Timeout"
+
+**Rolling restart:**
+- Errors during known maintenance window
+- Transient, resolves after restarts complete
+- Correlation with pod restart events
+
+**Cluster misconfiguration:**
+- Persistent errors
+- Same replicas always failing
+- Wrong hostnames or dead nodes in config
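+
+A hedged sketch that buckets recent code-279 errors by the message fragments above can help separate these cases:
+
+```sql
+SELECT
+    multiIf(
+        exception LIKE '%replica chosen for query execution%', 'parallel replicas',
+        exception LIKE '%Connection refused%', 'server down or wrong host',
+        exception LIKE '%Timeout%', 'network timeout',
+        'other') AS likely_cause,
+    count() AS errors
+FROM system.query_log
+WHERE exception_code = 279
+  AND event_date >= today() - 1
+GROUP BY likely_cause
+ORDER BY errors DESC;
+```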
+
+## When using `clusterAllReplicas()` {#clusterallreplicas-usage}
+
+```sql
+-- Will fail if ANY replica unavailable (without skip setting)
+SELECT * FROM clusterAllReplicas('default', system.tables);
+
+-- Recommended: Skip unavailable replicas
+SELECT * FROM clusterAllReplicas('default', system.tables)
+SETTINGS skip_unavailable_shards = 1;
+
+-- Check which queries are derived from clusterAllReplicas
+SELECT
+ query_id,
+ initial_query_id,
+ is_initial_query,
+ exception
+FROM system.query_log
+WHERE exception_code = 210
+ AND is_initial_query = 0 -- Derived queries
+ORDER BY event_time DESC;
+```
+
+## Monitoring and alerting {#monitoring}
+
+```sql
+-- Track ALL_CONNECTION_TRIES_FAILED errors
+SELECT
+ toStartOfHour(event_time) AS hour,
+ count() AS error_count,
+ uniqExact(initial_query_id) AS unique_queries
+FROM system.query_log
+WHERE exception_code = 279
+ AND event_date >= today() - 7
+GROUP BY hour
+ORDER BY hour DESC;
+
+-- Check error_log for pattern
+SELECT
+ last_error_time,
+ last_error_message,
+ value AS error_count
+FROM system.errors
+WHERE name = 'ALL_CONNECTION_TRIES_FAILED'
+ORDER BY last_error_time DESC;
+```
+
+## Known issues and fixes {#known-issues}
+
+**Issue 1: Parallel replicas stale connections**
+- **Affected:** Versions before 24.5.1.22937 / 24.7.1.5426
+- **Fix:** [PR 67389](https://github.com/ClickHouse/ClickHouse/pull/67389)
+- **Workaround:** Execute dummy query to refresh pool or retry
+
+**Issue 2: Load balancer routing to terminating pods**
+- **Affected:** ClickHouse Cloud during rolling updates
+- **Symptom:** 15-20 second window where terminating pods receive new connections
+- **Status:** Ongoing work on pre-stop hooks
+- **Workaround:** Implement client retry logic
+
+**Issue 3: Round-robin replica selection**
+- **Affected:** Parallel replicas queries
+- **Symptom:** Forcibly uses ROUND_ROBIN even if replicas unavailable
+- **Impact:** If 1/60 replicas dead, 1/60 requests fail consistently
+
+If you're experiencing this error:
+1. Check ClickHouse version - upgrade if using parallel replicas on version \< 24.5.1 / 24.7.1
+2. Verify all cluster nodes are running and accessible
+3. Test connectivity to each replica manually
+4. For parallel replicas: try executing dummy query to refresh connection pool
+5. Use `skip_unavailable_shards = 1` for queries that can tolerate partial data
+6. Check for correlation with pod restarts or maintenance windows
+7. Implement exponential backoff retry logic in client
+8. Review cluster configuration for dead or incorrect nodes
+9. Check individual connection errors in full exception message (usually 210 errors)
+10. For persistent issues, check network connectivity between nodes
+
+**Related documentation:**
+- [Parallel replicas](/operations/settings/settings#allow_experimental_parallel_reading_from_replicas)
+- [Distributed queries](/engines/table-engines/special/distributed)
+- [Cluster functions](/sql-reference/table-functions/cluster)
diff --git a/docs/troubleshooting/error_codes/349_CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN.md b/docs/troubleshooting/error_codes/349_CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN.md
new file mode 100644
index 00000000000..60c2a2e2519
--- /dev/null
+++ b/docs/troubleshooting/error_codes/349_CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN.md
@@ -0,0 +1,344 @@
+---
+slug: /troubleshooting/error-codes/349_CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN
+sidebar_label: '349 CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN'
+doc_type: 'reference'
+keywords: ['error codes', 'CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN', '349', 'null', 'nullable', 'insert']
+title: '349 CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN'
+description: 'ClickHouse error code - 349 CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN'
+---
+
+# Error 349: CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN
+
+:::tip
+This error occurs when you attempt to insert a `NULL` value into a column that is not defined as `Nullable`.
+ClickHouse requires explicit `Nullable()` type declaration to accept null values.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Importing data with NULL values from external sources**
+ - Parquet, CSV, or JSON files containing null values
+ - Schema inference creates Nullable types that don't match table schema
+ - S3, file(), or table function imports without explicit schema
+ - NULL values in arrays or nested structures
+
+2. **Materialized view type mismatches**
+ - SELECT clause returns nullable columns but target table expects non-nullable
+ - Using `NULL` literal without column alias in materialized views
+ - Conditional expressions (if/case) that can return NULL
+ - Missing `coalesce()` or default value handling
+
+3. **Schema inference conflicts**
+ - `schema_inference_make_columns_nullable=1` creates Nullable types
+ - Target table columns are non-nullable
+ - Using wildcards `{}` in file paths changes inference behavior
+ - Applying functions in SELECT prevents using INSERT table structure
+
+4. **Tuple and complex type conversions**
+ - Nested fields in Tuples have nullable elements
+ - Target table has non-nullable nested elements
+ - Error messages may not clearly indicate which tuple field failed
+
+5. **Direct INSERT with NULL literals**
+ - Using `NULL as column_name` syntax incorrectly
+ - Inserting explicit NULL values into non-nullable columns
+ - Missing type casts for NULL values
+
+## What to do when you encounter this error {#what-to-do}
+
+**1. Identify the problematic column**
+
+The error message indicates which column cannot accept NULL:
+
+```text
+Cannot convert NULL value to non-Nullable type: while converting source column
+price to destination column price
+```
+
+**2. Check your table schema**
+
+```sql
+-- View column nullability
+SELECT
+ name,
+ type,
+ is_in_primary_key
+FROM system.columns
+WHERE table = 'your_table'
+ AND database = 'your_database'
+ORDER BY position;
+```
+
+**3. Review source data for NULL values**
+
+```sql
+-- For Parquet/CSV files via s3()
+DESCRIBE s3('your-file-path', 'format');
+
+-- Check for NULL values in a given column (repeat per column of interest)
+SELECT countIf(column_name IS NULL) AS null_count
+FROM s3('your-file-path', 'format');
+```
+
+## Quick fixes {#quick-fixes}
+
+**1. Make the column Nullable**
+
+```sql
+-- Modify existing column to accept NULL
+ALTER TABLE your_table
+ MODIFY COLUMN column_name Nullable(String);
+
+-- Create new table with Nullable columns
+CREATE TABLE your_table
+(
+ id UInt64,
+ name Nullable(String), -- Allows NULL
+ status String -- Does not allow NULL
+)
+ENGINE = MergeTree
+ORDER BY id;
+```
+
+**2. For file imports - use safe conversion settings**
+
+```sql
+-- Let NULLs become default values (0, '', etc.)
+SET input_format_null_as_default = 1;
+
+-- Disable automatic nullable inference
+SET schema_inference_make_columns_nullable = 0;
+
+-- Then import
+INSERT INTO your_table
+SELECT * FROM s3('file.parquet', 'Parquet');
+```
+
+**3. For file imports with wildcards or functions - specify schema**
+
+```sql
+-- Explicitly define column structure
+INSERT INTO your_table
+SELECT *
+FROM s3(
+ 'https://bucket.s3.amazonaws.com/{file*.parquet}',
+ 'access_key',
+ 'secret_key',
+ 'Parquet',
+ 'id UInt64, name String, price Float64' -- Explicit non-nullable schema
+);
+
+-- Or use setting to inherit from target table
+SET use_structure_from_insertion_table_in_table_functions = 1;
+```
+
+**4. For materialized views - use `coalesce()`**
+
+```sql
+-- Instead of this (fails):
+CREATE MATERIALIZED VIEW mv TO target_table AS
+SELECT
+ if(op = 'd', before_id, after_id) AS business_id
+FROM source_table;
+
+-- Use this (works):
+CREATE MATERIALIZED VIEW mv TO target_table AS
+SELECT
+ coalesce(if(op = 'd', before_id, after_id), 0) AS business_id
+FROM source_table;
+```
+
+**5. Handle NULL explicitly in queries**
+
+```sql
+-- Replace NULL with default values
+INSERT INTO target_table
+SELECT
+ coalesce(nullable_column, 0) AS column_name,
+ ifNull(another_column, 'default') AS another_name
+FROM source;
+
+-- Or use assumeNotNull (careful - throws error if NULL exists)
+SELECT assumeNotNull(nullable_column) FROM source;
+```
+
+## Common specific scenarios {#common-scenarios}
+
+**Scenario 1: Parquet file import with NULL values**
+
+```text
+Cannot convert NULL value to non-Nullable type: While executing ParquetBlockInputFormat
+```
+
+**Cause:** Parquet file contains NULL values, but table columns are not Nullable.
+
+**Solution:**
+```sql
+-- Option 1: Make columns Nullable
+ALTER TABLE your_table MODIFY COLUMN name Nullable(String);
+
+-- Option 2: Use settings to convert NULLs to defaults
+SET input_format_null_as_default = 1;
+INSERT INTO your_table SELECT * FROM s3('file.parquet');
+
+-- Option 3: Handle NULLs explicitly
+INSERT INTO your_table
+SELECT coalesce(name, '') AS name FROM s3('file.parquet');
+```
+
+**Scenario 2: Materialized view with NULL results**
+
+```text
+Cannot convert NULL value to non-Nullable type: while pushing to view mv
+```
+
+**Cause:** Materialized view SELECT returns NULL values, but target table doesn't accept them. Direct INSERT auto-converts NULLs to defaults, but materialized view SELECT does not.
+
+**Solution:**
+```sql
+-- Use coalesce() to provide defaults
+CREATE MATERIALIZED VIEW mv TO target_table AS
+SELECT
+ coalesce(nullable_col, 0) AS col,
+ ifNull(another_col, '') AS another
+FROM source_table;
+```
+
+**Scenario 3: S3 import with wildcards or functions fails**
+
+```text
+Cannot convert NULL value to non-Nullable type: while converting source column
+TMSR_FEATURES to destination column features
+```
+
+**Cause:** When using wildcards `{}` in file paths or functions in SELECT, ClickHouse doesn't use the target table structure for schema inference and infers Nullable types.
+
+**Solution:**
+```sql
+-- Option 1: Use setting to inherit structure from target table
+SET use_structure_from_insertion_table_in_table_functions = 1;
+
+INSERT INTO target_table
+SELECT * FROM s3('https://bucket/{file*.parquet}', 'key', 'secret');
+
+-- Option 2: Explicitly specify schema in s3() function
+INSERT INTO target_table
+SELECT *
+FROM s3(
+ 'https://bucket/{file*.parquet}',
+ 'key',
+ 'secret',
+ 'Parquet',
+ 'id UInt64, features Array(Float64), name String'
+);
+
+-- Option 3: Disable nullable inference
+SET schema_inference_make_columns_nullable = 0;
+```
+
+**Scenario 4: Tuple fields with NULL values**
+
+```text
+Cannot convert NULL value to non-Nullable type: while converting source column
+price to destination column price: while executing FUNCTION _CAST
+```
+
+**Cause:** Tuple contains nullable fields but target expects non-nullable.
+
+**Solution:**
+```sql
+-- Define tuple with proper Nullable structure
+CREATE TABLE your_table
+(
+ price Tuple(
+ effective_price Nullable(Decimal(38, 9)), -- Make nullable if needed
+ tier_start_amount Decimal(38, 9),
+ unit Nullable(String)
+ )
+)
+ENGINE = MergeTree
+ORDER BY tuple();
+
+-- Or handle NULLs in nested structures
+SELECT tuple(
+ coalesce(field1, 0),
+ coalesce(field2, 0)
+) AS price;
+```
+
+**Scenario 5: Using bare NULL in materialized views**
+
+```text
+Data type Nullable(Nothing) cannot be used in tables
+```
+
+**Cause:** Using `NULL` without type specification or column alias.
+
+**Solution:**
+```sql
+-- Instead of this (fails):
+CREATE MATERIALIZED VIEW mv AS
+SELECT
+ customer_id,
+ NULL, -- Wrong!
+ maxState(price) AS max_price
+FROM source;
+
+-- Use this (works):
+CREATE MATERIALIZED VIEW mv AS
+SELECT
+ customer_id,
+ NULL AS pincode, -- Column name matches target table
+ maxState(price) AS max_price
+FROM source;
+
+-- Or cast NULL to specific type:
+SELECT
+    CAST(NULL, 'Nullable(String)') AS column_name
+FROM source;
+```
+
+## Prevention best practices {#prevention}
+
+1. **Design tables with Nullable columns when appropriate**
+ - Use `Nullable(Type)` for columns that may contain NULL values
+ - Consider business logic - can this field legitimately be unknown?
+
+2. **For file imports, use explicit schema definitions**
+ - Specify column types in s3/file table functions
+ - Use `use_structure_from_insertion_table_in_table_functions=1`
+ - Control schema inference with `schema_inference_make_columns_nullable=0`
+
+3. **In materialized views, handle NULL explicitly**
+ - Always use `coalesce()`, `ifNull()`, or similar functions
+ - Don't rely on automatic NULL-to-default conversion in SELECT
+
+4. **Test data imports with sample files first**
+ - Check for NULL values: `SELECT * FROM s3(...) WHERE column IS NULL`
+ - Use `DESCRIBE s3(...)` to see inferred schema
+ - Validate type compatibility before full import
+
+5. **Use appropriate settings for your use case**
+ ```sql
+ -- Convert NULLs to default values during import
+ SET input_format_null_as_default = 1;
+
+ -- Keep NULLs as empty strings (for formats like CSV)
+ SET input_format_csv_empty_as_default = 1;
+ ```
+
+6. **For complex nested types**
+ - Define nullability at the correct nesting level
+ - `LowCardinality(Nullable(String))` not `Nullable(LowCardinality(String))`
+ - Test with small data samples first
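+
+   A minimal sketch of this point, with hypothetical table and column names, showing nullability declared at the correct nesting level:
+
+   ```sql
+   CREATE TABLE nullable_nesting_example
+   (
+       id    UInt64,
+       tag   LowCardinality(Nullable(String)),              -- correct nesting
+       attrs Tuple(label Nullable(String), score Float64)   -- nullable element inside the tuple
+   )
+   ENGINE = MergeTree
+   ORDER BY id;
+   ```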
+
+## Related settings {#related-settings}
+
+```sql
+-- Control NULL handling during import
+SET input_format_null_as_default = 1; -- Convert NULLs to defaults
+SET input_format_csv_empty_as_default = 1; -- Treat empty CSV fields as defaults
+SET schema_inference_make_columns_nullable = 0; -- Don't infer Nullable types
+SET use_structure_from_insertion_table_in_table_functions = 1; -- Use target table schema
+```
diff --git a/docs/troubleshooting/error_codes/352_AMBIGUOUS_COLUMN_NAME.md b/docs/troubleshooting/error_codes/352_AMBIGUOUS_COLUMN_NAME.md
new file mode 100644
index 00000000000..2b0cc139ff3
--- /dev/null
+++ b/docs/troubleshooting/error_codes/352_AMBIGUOUS_COLUMN_NAME.md
@@ -0,0 +1,426 @@
+---
+slug: /troubleshooting/error-codes/352_AMBIGUOUS_COLUMN_NAME
+sidebar_label: '352 AMBIGUOUS_COLUMN_NAME'
+doc_type: 'reference'
+keywords: ['error codes', 'AMBIGUOUS_COLUMN_NAME', '352']
+title: '352 AMBIGUOUS_COLUMN_NAME'
+description: 'ClickHouse error code - 352 AMBIGUOUS_COLUMN_NAME'
+---
+
+# Error 352: AMBIGUOUS_COLUMN_NAME
+
+:::tip
+This error occurs when a column reference in a query is ambiguous because it could refer to columns from multiple tables.
+It indicates that ClickHouse cannot determine which table's column you're referencing, typically in JOIN queries or when using duplicate column names.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Column exists in multiple joined tables**
+ - The same column name appears in two or more tables being joined
+ - No table qualifier used to specify which table's column is needed
+ - Ambiguous column reference in `SELECT`, `WHERE`, `GROUP BY`, or `ORDER BY` clauses
+
+2. **Self-join without proper aliases**
+ - Joining a table to itself
+ - Column referenced without table alias qualifier
+ - Both instances of the table have the same column
+
+3. **Analyzer behavior with joined tables**
+ - New analyzer (enabled with `allow_experimental_analyzer = 1`) enforces stricter rules
+ - Old analyzer preferred left table in ambiguous cases (non-standard SQL)
+ - Table aliases don't block original table name identifiers
+
+4. **Multiple JOIN operations**
+ - Column present in multiple tables across chain of JOINs
+ - Missing table qualifiers on shared column names
+ - Complex queries with many joined tables
+
+5. **Column name conflicts in subqueries**
+ - Subquery aliases creating duplicate column names
+ - Derived tables with overlapping column names
+ - `UNION` queries with inconsistent column naming
+
+## Common solutions {#common-solutions}
+
+**1. Use explicit table qualifiers**
+
+```sql
+-- WRONG: Ambiguous column reference
+SELECT id, name
+FROM table1
+JOIN table2 ON table1.user_id = table2.user_id;
+
+-- CORRECT: Qualify columns with table names
+SELECT
+ table1.id,
+ table1.name,
+ table2.name AS name2
+FROM table1
+JOIN table2 ON table1.user_id = table2.user_id;
+```
+
+**2. Use table aliases for clarity**
+
+```sql
+-- WRONG: Ambiguous in self-join
+SELECT t4.c0
+FROM t4
+INNER JOIN t4 AS right_0 ON t4.c0 = right_0.c0;
+
+-- CORRECT: Use aliases consistently
+SELECT
+ left_t.c0 AS left_c0,
+ right_t.c0 AS right_c0
+FROM t4 AS left_t
+INNER JOIN t4 AS right_t ON left_t.c0 = right_t.c0;
+```
+
+**3. Alias duplicate column names**
+
+```sql
+-- WRONG: Both tables have 'name' column
+SELECT *
+FROM users u
+JOIN profiles p ON u.id = p.user_id;
+
+-- CORRECT: Alias one or both
+SELECT
+ u.id,
+ u.name AS user_name,
+ p.name AS profile_name
+FROM users u
+JOIN profiles p ON u.id = p.user_id;
+```
+
+**4. Use `USING` clause for common columns**
+
+```sql
+-- Instead of ON clause
+SELECT *
+FROM table1
+JOIN table2 USING (user_id, date);
+
+-- This automatically qualifies columns and avoids ambiguity
+```
+
+**5. Enable the new analyzer (if needed)**
+
+```sql
+-- New analyzer provides better error messages and handling
+SET allow_experimental_analyzer = 1;
+
+-- Query will now give clearer error about ambiguity
+SELECT * FROM t1 JOIN t2 ON t1.id = t2.id WHERE name = 'test';
+```
+
+**6. Specify columns explicitly instead of `SELECT` \***
+
+```sql
+-- WRONG: SELECT * can create ambiguity
+SELECT *
+FROM orders o
+JOIN customers c ON o.customer_id = c.id;
+
+-- CORRECT: List needed columns explicitly
+SELECT
+ o.order_id,
+ o.order_date,
+ c.id AS customer_id,
+ c.customer_name
+FROM orders o
+JOIN customers c ON o.customer_id = c.id;
+```
+
+## Common scenarios {#common-scenarios}
+
+**Scenario 1: Simple JOIN with shared column name**
+
+```text
+Error: JOIN database.t4 ALL INNER JOIN database.t4 AS right_0 ON t4.c0 = right_0.c0
+ambiguous identifier 't4.c0'
+```
+
+**Cause:** Column `c0` exists in both sides of self-join.
+
+**Solution:**
+
+```sql
+-- Use proper table aliases
+SELECT
+ left_t.c0 AS left_value,
+ right_t.c0 AS right_value
+FROM t4 AS left_t
+INNER JOIN t4 AS right_t ON left_t.id = right_t.id;
+```
+
+**Scenario 2: Multi-table JOIN ambiguity**
+
+```text
+Error: Ambiguous column 'id' in SELECT
+```
+
+**Cause:** Column `id` appears in multiple joined tables.
+
+**Solution:**
+
+```sql
+-- Qualify all columns
+SELECT
+ t1.id AS t1_id,
+ t2.id AS t2_id,
+ t3.id AS t3_id
+FROM t1
+JOIN t2 ON t1.key = t2.key
+JOIN t3 ON t2.key = t3.key;
+```
+
+**Scenario 3: WHERE clause ambiguity**
+
+```text
+Error: Ambiguous column reference in WHERE clause
+```
+
+**Cause:** Column used in WHERE exists in multiple tables.
+
+**Solution:**
+
+```sql
+-- WRONG
+SELECT * FROM orders o
+JOIN customers c ON o.customer_id = c.id
+WHERE status = 'active';
+
+-- CORRECT: Specify which table's status
+SELECT * FROM orders o
+JOIN customers c ON o.customer_id = c.id
+WHERE o.status = 'active';
+```
+
+**Scenario 4: Analyzer differences**
+
+```text
+Query works with old analyzer but fails with new analyzer
+```
+
+**Cause:** New analyzer enforces SQL standard more strictly.
+
+**Solution:**
+
+```sql
+-- Option 1: Fix query to be explicit
+SELECT t1.col FROM t1 JOIN t2 ON t1.id = t2.id;
+
+-- Option 2: Temporarily disable new analyzer
+SET allow_experimental_analyzer = 0;
+```
+
+**Scenario 5: Column name from wrong table in multi-JOIN**
+
+```text
+Query accidentally uses t1.d but d doesn't exist in t1
+(exists in t2 but query succeeds incorrectly)
+```
+
+**Cause:** Bug in old query analyzer with multi-table JOINs.
+
+**Solution:**
+- Enable new analyzer: `SET allow_experimental_analyzer = 1`
+- Or explicitly qualify all column references
+- Update to version with new analyzer as default
+
+## Debugging steps {#debugging-steps}
+
+1. **Identify the ambiguous column:**
+
+ ```text
+ Error message shows: "ambiguous identifier 't4.c0'"
+ ```
+ The column name causing ambiguity is listed in the error.
+
+2. **Check which tables have this column:**
+
+ ```sql
+ -- Find column across tables
+ SELECT
+ database,
+ table,
+ name AS column_name,
+ type
+ FROM system.columns
+ WHERE name = 'c0' -- Replace with your column
+ AND database = 'your_database';
+ ```
+
+3. **Review query structure:**
+ - Identify all tables in the query
+ - Check which tables have the ambiguous column
+ - Note where column is referenced without qualifier
+
+4. **Use `EXPLAIN` to understand the query:**
+ ```sql
+ EXPLAIN SYNTAX
+ SELECT * FROM t1 JOIN t2 ON t1.id = t2.id;
+ ```
+
+5. **Test with the new analyzer:**
+ ```sql
+ -- Enable new analyzer for better error messages
+ SET allow_experimental_analyzer = 1;
+ SELECT your_query;
+ ```
+
+## Special considerations {#special-considerations}
+
+**Old vs. new analyzer behavior:**
+- **Old analyzer:** Preferred left table in ambiguous cases (non-SQL standard)
+- **New analyzer:** Strictly enforces disambiguation (SQL standard compliant)
+- **Migration:** Queries working on old analyzer may need fixes for new analyzer
+
+**Self-joins:**
+- Always use aliases for both table references
+- Qualify every column reference
+- Use descriptive aliases (not just `t1`, `t2`)
+
+**Multiple JOINs:**
+- Risk of ambiguity increases with each JOIN
+- Some column names may work in 2-table JOIN but fail in 3+ table JOIN
+- Old analyzer had bugs allowing incorrect column references from wrong tables
+
+**Table aliases don't block original names:**
+
+```sql
+-- Even with alias, original table name still works
+SELECT t1.col -- Works
+FROM table1 AS t1;
+
+-- This can cause ambiguity in joins
+SELECT table1.col -- Also works, but can be ambiguous
+FROM table1 AS t1
+JOIN table2 AS t2 ON...;
+```
+
+## Common patterns to avoid {#avoid-patterns}
+
+```sql
+-- AVOID: Unqualified columns in JOIN
+SELECT id, name, email
+FROM users
+JOIN profiles ON users.id = profiles.user_id;
+
+-- AVOID: Ambiguous WHERE conditions
+SELECT *
+FROM orders
+JOIN customers ON orders.customer_id = customers.id
+WHERE status = 'active'; -- Which table's status?
+
+-- AVOID: GROUP BY without qualifiers
+SELECT region, COUNT(*)
+FROM sales
+JOIN stores ON sales.store_id = stores.id
+GROUP BY region; -- Which table's region?
+
+-- AVOID: Self-join without clear aliases
+SELECT * FROM t JOIN t ON t.id = t.parent_id;
+```
+
+## Best practices {#best-practices}
+
+```sql
+-- GOOD: Fully qualified columns
+SELECT
+    o.id AS order_id,
+    o.status AS order_status,
+    c.id AS customer_id,
+    c.status AS customer_status
+FROM orders AS o
+JOIN customers AS c ON o.customer_id = c.id
+WHERE o.status = 'active'
+    AND c.status = 'verified'
+ORDER BY o.created_at DESC;
+
+-- GOOD: Qualified GROUP BY columns
+SELECT
+    o.region AS order_region,
+    c.region AS customer_region,
+    count() AS order_count
+FROM orders AS o
+JOIN customers AS c ON o.customer_id = c.id
+GROUP BY o.region, c.region;
+
+-- GOOD: Use USING for common columns
+SELECT *
+FROM orders
+JOIN order_items USING (order_id);
+
+-- GOOD: Clear aliases in self-joins
+SELECT
+ parent.id AS parent_id,
+ child.id AS child_id,
+ parent.name AS parent_name,
+ child.name AS child_name
+FROM categories AS parent
+JOIN categories AS child ON child.parent_id = parent.id;
+```
+
+## Analyzer-specific information {#analyzer-info}
+
+The new query analyzer (experimental in older versions, default in newer versions) handles ambiguity differently:
+
+```sql
+-- Check if analyzer is enabled
+SELECT getSetting('allow_experimental_analyzer');
+
+-- Enable for stricter checking
+SET allow_experimental_analyzer = 1;
+
+-- Disable to use old behavior (temporary workaround)
+SET allow_experimental_analyzer = 0;
+```
+
+:::note
+The new analyzer is the default in recent ClickHouse versions.
+It's recommended to fix queries to work with it rather than relying on old behavior.
+:::
+
+## Related error codes {#related-errors}
+
+- **`UNKNOWN_IDENTIFIER (47)`:** Column doesn't exist at all
+- **`AMBIGUOUS_COLUMN_NAME (352)`:** Column exists in multiple tables
+- **`AMBIGUOUS_IDENTIFIER (207)`:** General ambiguous identifier (older error code)
+- **`BAD_ARGUMENTS (36)`:** Wrong arguments, sometimes related to column issues
+
+## Migration to new analyzer {#analyzer-migration}
+
+If enabling the new analyzer causes `AMBIGUOUS_COLUMN_NAME` errors in previously working queries:
+
+1. **Add table qualifiers:**
+
+ ```sql
+ -- Change unqualified columns
+ SELECT id, name -- May fail with new analyzer
+ -- To qualified columns
+ SELECT t1.id, t1.name
+ ```
+
+2. **Use explicit aliases:**
+
+ ```sql
+ -- Add aliases for duplicate names
+ SELECT
+ t1.status AS t1_status,
+ t2.status AS t2_status
+ ```
+
+3. **Test incrementally:**
+
+ ```sql
+ -- Test one query at a time with analyzer
+ SET allow_experimental_analyzer = 1;
+ SELECT your_query;
+ ```
+
+If you're experiencing this error:
+1. Identify which column name is ambiguous from the error message
+2. Determine which tables in the query have this column
+3. Add table qualifiers (e.g., `table.column` or `alias.column`) to all references
+4. Use table aliases for clarity in complex JOINs
+5. Test with new analyzer to catch these issues proactively
+6. Consider using `USING` clause for join columns with same name
+7. List columns explicitly instead of `SELECT *` to avoid hidden ambiguity
+
+**Related documentation:**
+- [JOIN clause](/sql-reference/statements/select/join)
+- [Query analyzer](/operations/settings/settings#allow_experimental_analyzer)
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/386_NO_COMMON_TYPE.md b/docs/troubleshooting/error_codes/386_NO_COMMON_TYPE.md
new file mode 100644
index 00000000000..2e0c7506b8e
--- /dev/null
+++ b/docs/troubleshooting/error_codes/386_NO_COMMON_TYPE.md
@@ -0,0 +1,355 @@
+---
+slug: /troubleshooting/error-codes/386_NO_COMMON_TYPE
+sidebar_label: '386 NO_COMMON_TYPE'
+doc_type: 'reference'
+keywords: ['error codes', 'NO_COMMON_TYPE', '386', 'type mismatch', 'supertype']
+title: '386 NO_COMMON_TYPE'
+description: 'ClickHouse error code - 386 NO_COMMON_TYPE'
+---
+
+# Error 386: NO_COMMON_TYPE
+
+:::tip
+This error occurs when ClickHouse cannot find a common (super) type to unify different data types in operations that require type compatibility—such as UNION, CASE statements, IF expressions, or array operations. This typically happens when trying to combine incompatible types like String and Integer, or signed and unsigned integers of different ranges.
+:::
+
+## Quick reference {#quick-reference}
+
+**What you'll see:**
+
+```text
+Code: 386. DB::Exception: There is no supertype for types String, UInt8 because some of them are String/FixedString/Enum and some of them are not.
+(NO_COMMON_TYPE)
+```
+
+Or:
+
+```text
+Code: 386. DB::Exception: There is no supertype for types Int64, UInt64 because some of them are signed integers and some are unsigned integers, but there is no signed integer type that can exactly represent all required unsigned integer values.
+(NO_COMMON_TYPE)
+```
+
+**Most common causes:**
+1. **UNION ALL with incompatible types** (String and numeric, signed vs unsigned integers)
+2. **IF/CASE expressions** with different return types
+3. **AggregateFunction mixed with regular types** in CASE statements
+4. **Array operations** requiring consistent element types
+5. **Dynamic/JSON column** type mismatches (25.x+ versions)
+
+**Quick diagnostic:**
+
+Check the types involved:
+
+```sql
+-- Identify column types
+SELECT toTypeName(column1), toTypeName(column2) FROM table;
+
+-- Test type compatibility
+SELECT toTypeName(if(1, 'text', 123)); -- Will fail
+```
+
+**Quick fixes:**
+
+```sql
+-- Error: String and Int incompatible
+SELECT 1 UNION ALL SELECT 'hello';
+
+-- Fix: Cast to common type
+SELECT toString(1) UNION ALL SELECT 'hello';
+
+-- Error: Signed vs Unsigned
+SELECT -1::Int64 UNION ALL SELECT 18446744073709551615::UInt64;
+
+-- Fix: Cast to wider signed type
+SELECT -1::Int128 UNION ALL SELECT 18446744073709551615::Int128;
+
+-- Error: AggregateFunction in CASE
+-- (total_claims is an AggregateFunction(uniq, UInt64) column)
+SELECT CASE WHEN condition THEN total_claims ELSE 0 END FROM test_table;
+
+-- Fix: Apply the merge function first
+SELECT CASE WHEN condition THEN uniqMerge(total_claims) ELSE 0 END FROM test_table;
+```
+
+## Most common causes {#most-common-causes}
+
+### 1. **UNION with incompatible types** {#union-incompatible-types}
+
+The most common cause—trying to UNION queries that return fundamentally different types.
+
+```sql
+-- String vs Integer
+SELECT 1 AS x
+UNION ALL
+SELECT 'Hello';
+-- Error: No supertype for UInt8, String
+
+-- Signed vs Unsigned (range overflow)
+SELECT -100::Int64 AS value
+UNION ALL
+SELECT 18446744073709551615::UInt64;
+-- Error: No signed integer type can represent all required unsigned values
+```
+
+### 2. **IF/CASE expressions with mixed types** {#if-case-mixed-types}
+
+```sql
+-- Returns different types based on condition
+SELECT
+ CASE
+ WHEN status = 'active' THEN 1
+ ELSE 'inactive'
+ END AS result;
+-- Error: No supertype for UInt8, String
+
+-- DateTime vs String
+SELECT if(date_field >= '2024-01-01', date_field, '1970-01-01');
+-- Error: No supertype for DateTime, String
+```
+
+### 3. **AggregateFunction mixed with regular types** {#aggregatefunction-mixed-types}
+
+This is a subtle error when working with AggregatingMergeTree tables:
+
+```sql
+-- Table with AggregateFunction column
+CREATE TABLE test_table (
+ code String,
+ total_claims AggregateFunction(uniq, UInt64),
+ roll_up_date Date
+) ENGINE = AggregatingMergeTree()
+ORDER BY code;
+
+-- Using AggregateFunction directly in CASE
+SELECT
+ CASE
+ WHEN roll_up_date BETWEEN '2022-01-01' AND '2022-12-31'
+ THEN total_claims -- AggregateFunction type
+ ELSE 0 -- UInt8 type
+ END
+FROM test_table;
+-- Error: No supertype for UInt8, AggregateFunction
+```
+
+### 4. **Array operations with mixed types** {#array-mixed-types}
+
+```sql
+-- Array with mixed element types
+SELECT [1, 2, 'three'];
+-- Error: No supertype for UInt8, String
+
+-- Array functions expecting consistent types
+SELECT arrayConcat([1, 2], ['a', 'b']);
+-- Error: No supertype
+```
+
+### 5. **Dynamic/JSON column type mismatches (25.x+)** {#dynamic-json-mismatches}
+
+Starting in ClickHouse 25.x with stable JSON/Dynamic types:
+
+```sql
+-- JSON column with mixed types
+CREATE TABLE events (
+ ts DateTime,
+ attributes JSON
+) ENGINE = MergeTree()
+ORDER BY ts;
+
+INSERT INTO events VALUES ('2025-01-01 12:00:00', '{"label":"5"}');
+INSERT INTO events VALUES ('2025-01-02 12:00:00', '{"label":5}');
+
+-- Comparing Dynamic column with literal of different type
+SELECT * FROM events WHERE attributes.label = 5;
+-- Error: No supertype for String, UInt8
+-- (Because one row has String "5", another has Int64 5)
+```
+
+### 6. **Signed vs Unsigned integer range issues** {#signed-unsigned-issues}
+
+```sql
+-- Cannot find common type for these ranges
+SELECT
+ CASE
+ WHEN condition THEN -9223372036854775808::Int64 -- Min Int64
+ ELSE 18446744073709551615::UInt64 -- Max UInt64
+ END;
+-- Error: No signed integer type can represent all required unsigned values
+```
+
+## Common solutions {#common-solutions}
+
+### **1. Explicit type casting to common type** {#explicit-type-casting}
+
+```sql
+-- Cast everything to String
+SELECT toString(1) AS x
+UNION ALL
+SELECT 'Hello';
+
+-- Cast to wider numeric type
+SELECT -100::Int128 AS value
+UNION ALL
+SELECT 18446744073709551615::Int128;
+
+-- Cast DateTime to String
+SELECT
+ if(date_field >= '2024-01-01',
+ toString(date_field),
+ '1970-01-01'
+ ) AS result;
+```
+
+### **2. Use appropriate merge functions for AggregateFunctions** {#use-merge-functions}
+
+```sql
+-- Apply uniqMerge first
+SELECT
+ CASE
+ WHEN roll_up_date BETWEEN '2022-01-01' AND '2022-12-31'
+ THEN uniqMerge(total_claims)
+ ELSE 0
+ END AS claims_count
+FROM test_table;
+
+-- Or restructure the query
+SELECT uniqMerge(total_claims) AS claims_count
+FROM test_table
+WHERE roll_up_date BETWEEN '2022-01-01' AND '2022-12-31';
+```
+
+### **3. Handle Dynamic/JSON columns explicitly (25.x+)** {#handle-dynamic-json-explicitly}
+
+```sql
+-- Cast Dynamic column to specific type
+SELECT * FROM events
+WHERE attributes.label::String = '5';
+
+-- Or use type-specific subcolumn
+SELECT * FROM events
+WHERE attributes.label.:String = '5';
+
+-- Use toString for comparison
+SELECT * FROM events
+WHERE toString(attributes.label) = '5';
+```
+
+### **4. Restructure CASE/IF to return consistent types** {#restructure-case-if}
+
+```sql
+-- Original: mixed types
+SELECT
+ CASE
+ WHEN status = 'active' THEN 1
+ ELSE 'inactive'
+ END;
+
+-- Option 1: All strings
+SELECT
+ CASE
+ WHEN status = 'active' THEN '1'
+ ELSE 'inactive'
+ END;
+
+-- Option 2: Use separate columns
+SELECT
+ if(status = 'active', 1, 0) AS is_active,
+ if(status = 'active', '', 'inactive') AS status_text;
+```
+
+### **5. Use widest compatible numeric type** {#use-widest-numeric-type}
+
+```sql
+-- When dealing with signed and unsigned integers
+SELECT
+ CASE
+ WHEN condition THEN toInt128(-100)
+ ELSE toInt128(18446744073709551615)
+ END AS value;
+
+-- Or use Float64 if precision loss is acceptable
+SELECT
+ CASE
+ WHEN condition THEN toFloat64(-100)
+ ELSE toFloat64(18446744073709551615)
+ END AS value;
+```
+
+### **6. Enable Variant type for UNION (future versions)** {#enable-variant-type}
+
+Starting from a future ClickHouse version (PR in progress):
+
+```sql
+-- Will automatically create Variant type
+SELECT 1 AS x
+UNION ALL
+SELECT 'Hello';
+-- Result: Variant(UInt8, String)
+
+-- Current workaround: Use explicit Variant
+SELECT CAST(1 AS Variant(UInt8, String))
+UNION ALL
+SELECT CAST('Hello' AS Variant(UInt8, String));
+```
+
+### **7. Fix array homogeneity** {#fix-array-homogeneity}
+
+```sql
+-- Mixed types
+SELECT [1, 2, 'three'];
+
+-- All the same type
+SELECT [toString(1), toString(2), 'three'];
+
+-- Or
+SELECT [1, 2, 3];
+```
+
+## Prevention tips {#prevention-tips}
+
+1. **Plan your type schema carefully**: When designing tables, ensure columns that will be combined in UNION or comparisons have compatible types.
+
+2. **Be explicit with casts**: Don't rely on implicit type conversion—use explicit CAST or type conversion functions.
+
+3. **Understand signed vs unsigned limits**: Be aware that combining signed and unsigned integers can fail if the unsigned value exceeds what the signed type can represent.
+
+4. **Use Nullable consistently**: If one branch returns Nullable, ensure all branches do:
+
+ ```sql
+ SELECT
+ CASE
+ WHEN condition THEN NULL
+ ELSE 0 -- Should be: toNullable(0) or CAST(0 AS Nullable(UInt8))
+ END;
+ ```
+
+5. **For Dynamic/JSON columns (25.x+)**: Always cast to specific type before comparison:
+
+ ```sql
+ WHERE attributes.field::String = 'value'
+ -- OR
+ WHERE attributes.field.:String = 'value'
+ ```
+
+6. **Test UNION queries incrementally**: Test each SELECT in a UNION separately to identify type mismatches.
+
+7. **Use `toTypeName()` for debugging**:
+
+ ```sql
+ SELECT toTypeName(column1), toTypeName(column2);
+ ```
+
+## Related error codes {#related-error-codes}
+
+- [Error 258: `UNION_ALL_RESULT_STRUCTURES_MISMATCH`](/troubleshooting/error-codes/258_UNION_ALL_RESULT_STRUCTURES_MISMATCH) - Column count or structure mismatch in UNION
+- [Error 53: `TYPE_MISMATCH`](/troubleshooting/error-codes/053_TYPE_MISMATCH) - General type mismatch error
+- [Error 70: `CANNOT_CONVERT_TYPE`](/troubleshooting/error-codes/070_CANNOT_CONVERT_TYPE) - Type conversion failure
+
+## Additional resources {#additional-resources}
+
+**ClickHouse documentation:**
+- [Data Types](/sql-reference/data-types) - Understanding ClickHouse type system
+- [Type Conversion Functions](/sql-reference/functions/type-conversion-functions) - CAST and conversion functions
+- [UNION Clause](/sql-reference/statements/select/union) - UNION behavior and type unification
+- [Dynamic Type](/sql-reference/data-types/dynamic) - Working with Dynamic columns (25.x+)
+- [JSON Type](/sql-reference/data-types/object-data-type) - Working with JSON columns (25.x+)
+- [Variant Type](/sql-reference/data-types/variant) - Variant type for mixed types
diff --git a/docs/troubleshooting/error_codes/394_QUERY_WAS_CANCELLED.md b/docs/troubleshooting/error_codes/394_QUERY_WAS_CANCELLED.md
new file mode 100644
index 00000000000..91e2a3d49fc
--- /dev/null
+++ b/docs/troubleshooting/error_codes/394_QUERY_WAS_CANCELLED.md
@@ -0,0 +1,657 @@
+---
+slug: /troubleshooting/error-codes/394_QUERY_WAS_CANCELLED
+sidebar_label: '394 QUERY_WAS_CANCELLED'
+doc_type: 'reference'
+keywords: ['error codes', 'QUERY_WAS_CANCELLED', '394']
+title: '394 QUERY_WAS_CANCELLED'
+description: 'ClickHouse error code - 394 QUERY_WAS_CANCELLED'
+---
+
+# Error 394: QUERY_WAS_CANCELLED
+
+:::tip
+This error occurs when a query execution is explicitly cancelled or terminated before completion.
+It indicates that the query was stopped either by user request, system shutdown, resource limits, or automatic cancellation policies.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **User-initiated cancellation**
+ - User executed `KILL QUERY` command
+ - Client sent cancel request (Ctrl+C in clickhouse-client)
+ - Application cancelled query via API
+ - Query stopped through management interface
+
+2. **Client disconnection**
+ - Client connection closed before query completed
+ - HTTP client disconnected (with `cancel_http_readonly_queries_on_client_close = 1`)
+ - Network connection lost between client and server
+ - Client timeout causing connection termination
+
+3. **System shutdown or restart**
+ - ClickHouse server shutting down gracefully
+ - Pod termination during Kubernetes rolling update
+ - Service restart draining active queries
+ - Graceful shutdown timeout reached (ClickHouse Cloud: 1 hour)
+
+4. **Query timeout enforcement**
+ - Query exceeding [`max_execution_time`](/operations/settings/settings#max_execution_time) limit
+ - Timeout from `KILL QUERY` command execution
+ - Automatic cancellation due to resource policies
+
+5. **Resource protection mechanisms**
+ - Query was cancelled due to memory pressure
+ - Too many concurrent queries, oldest cancelled
+ - System overload protection
+ - Emergency query termination
+
+6. **Distributed query cancellation**
+ - Parent query cancelled, child queries on remote servers also cancelled
+ - One shard failing causes entire distributed query cancellation
+ - Replica unavailability during parallel replica execution
+
+## Common solutions {#common-solutions}
+
+**1. Check if cancellation was intentional**
+
+```sql
+-- Find cancelled queries
+SELECT
+ event_time,
+ query_id,
+ user,
+ query_duration_ms / 1000 AS duration_sec,
+ exception,
+ query
+FROM system.query_log
+WHERE exception_code = 394
+ AND event_date >= today() - 1
+ORDER BY event_time DESC
+LIMIT 10;
+```
+
+**2. Identify who/what cancelled the query**
+
+```sql
+-- Look for KILL QUERY commands
+SELECT
+ event_time,
+ user,
+ query,
+ query_id
+FROM system.query_log
+WHERE query LIKE '%KILL QUERY%'
+ AND event_date >= today() - 1
+ORDER BY event_time DESC;
+
+-- Check for system shutdowns
+SELECT
+ event_time,
+ message
+FROM system.text_log
+WHERE (message LIKE '%shutdown%' OR message LIKE '%terminating%')
+    AND event_date >= today() - 1
+ORDER BY event_time DESC;
+```
+
+**3. Increase timeout limits if queries are legitimately long**
+
+```sql
+-- Increase execution timeout
+SET max_execution_time = 3600; -- 1 hour
+
+-- Or for specific query
+SELECT * FROM large_table
+SETTINGS max_execution_time = 7200; -- 2 hours
+```
+
+**4. Handle client disconnections**
+
+```sql
+-- Configure whether to cancel on client disconnect
+SET cancel_http_readonly_queries_on_client_close = 1; -- Cancel on disconnect
+
+-- Or keep queries running after disconnect
+SET cancel_http_readonly_queries_on_client_close = 0; -- Continue running
+```
+
+:::note
+`cancel_http_readonly_queries_on_client_close` only works when `readonly > 0` (automatic for HTTP GET requests).
+:::
+
+**5. Handle shutdowns gracefully**
+
+For applications that need to survive pod restarts:
+
+```python
+# Implement retry logic for cancelled queries
+# (assumes an existing ClickHouse client object, e.g. from clickhouse_connect)
+import time
+
+def execute_with_retry(query, max_retries=3):
+ for attempt in range(max_retries):
+ try:
+ return client.query(query)
+ except Exception as e:
+ if 'QUERY_WAS_CANCELLED' in str(e) or '394' in str(e):
+ if attempt < max_retries - 1:
+ # Query may have been cancelled due to shutdown
+ time.sleep(5)
+ continue
+ raise
+```
+
+**6. Check for system resource issues**
+
+```sql
+-- Check if queries being killed due to resource limits
+SELECT
+ event_time,
+ query_id,
+ formatReadableSize(memory_usage) AS memory,
+ query_duration_ms,
+ exception
+FROM system.query_log
+WHERE exception_code = 394
+ AND event_date >= today() - 1
+ORDER BY memory_usage DESC
+LIMIT 10;
+```
+
+## Common scenarios {#common-scenarios}
+
+**Scenario 1: User killed query**
+
+```text
+Error: Code: 394. DB::Exception: Query was cancelled
+```
+
+**Cause:** User executed `KILL QUERY WHERE query_id = 'xxx'`.
+
+**Solution:**
+- This is expected behavior
+- Query was intentionally stopped
+- Check `system.query_log` to see who killed it
+- No action needed unless kill was unintentional
+
+**Scenario 2: Client disconnection auto-cancel**
+
+```text
+Error: Query was cancelled (after client disconnect)
+```
+
+**Cause:** Client disconnected and `cancel_http_readonly_queries_on_client_close = 1`.
+
+**Solution:**
+
+```sql
+-- If you want queries to continue after disconnect
+SET cancel_http_readonly_queries_on_client_close = 0;
+
+-- Or ensure client doesn't disconnect prematurely
+-- Increase client timeout to match query duration
+```
+
+**Scenario 3: Graceful shutdown during rolling update**
+
+```text
+Error: Query was cancelled during pod termination
+```
+
+**Cause:** ClickHouse Cloud pod shutting down during rolling update.
+
+**Solution:**
+- Implement retry logic in application
+- Design queries to complete within grace period (< 1 hour for Cloud)
+- For very long queries, use `INSERT INTO ... SELECT` to materialize results (see the sketch below)
+- Monitor for scheduled maintenance windows
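+
+For example, an expensive aggregation can be materialized once with `INSERT INTO ... SELECT`, so a retry after a cancellation does not repeat the full computation (table names below are illustrative):
+
+```sql
+-- Materialize the heavy part; a retry only needs to re-run this statement
+INSERT INTO daily_rollup
+SELECT
+    toDate(event_time) AS day,
+    count() AS events
+FROM raw_events
+GROUP BY day;
+```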
+
+**Scenario 4: Query takes too long to cancel**
+
+```text
+KILL QUERY executed but query continues running for minutes
+```
+
+**Cause:** Known issue with some query types, especially those with subqueries or complex JOINs.
+
+**Solution:**
+- Query will eventually cancel (may take time to reach cancellation points)
+- Consider using `KILL QUERY SYNC` for synchronous termination
+- For stuck queries, may need to restart ClickHouse (rare)
+- Upgrade to newer versions with improved cancellation
+
+**Scenario 5: Cannot cancel query**
+
+```text
+Cancellation signal sent but query doesn't stop
+```
+
+**Cause:** Query stuck in operation that doesn't check cancellation flag.
+
+**Solution:**
+
+```sql
+-- Try synchronous kill
+KILL QUERY WHERE query_id = 'stuck_query_id' SYNC;
+
+-- If still stuck, may need server restart
+-- Or wait for query timeout
+```
+
+## Prevention tips {#prevention-tips}
+
+1. **Set appropriate timeouts:** Configure [`max_execution_time`](/operations/settings/settings#max_execution_time) for workload patterns
+2. **Monitor long queries:** Track and optimize slow queries before they need cancellation
+3. **Handle shutdowns gracefully:** Design applications to retry cancelled queries
+4. **Use query result cache:** Cache expensive query results to avoid re-execution (example below)
+5. **Implement checkpointing:** For very long operations, break into smaller steps
+6. **Monitor cancellation patterns:** Track why queries are being cancelled
+7. **Configure client timeouts:** Match client and server timeout settings
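+
+As a sketch of tip 4, enabling the query result cache lets a repeated or retried query be served from memory instead of being re-executed (requires a version with the query result cache; table name is illustrative):
+
+```sql
+SELECT
+    toDate(event_time) AS day,
+    count() AS events
+FROM raw_events
+GROUP BY day
+SETTINGS use_query_cache = 1;
+```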
+
+## Debugging steps {#debugging-steps}
+
+1. **Find recently cancelled queries:**
+
+ ```sql
+ SELECT
+ event_time,
+ query_id,
+ user,
+ query_duration_ms / 1000 AS duration_sec,
+ formatReadableSize(memory_usage) AS memory,
+ query
+ FROM system.query_log
+ WHERE exception_code = 394
+ AND event_date >= today() - 1
+ ORDER BY event_time DESC
+ LIMIT 20;
+ ```
+
+2. **Check for kill commands:**
+
+ ```sql
+ -- Find who killed queries
+ SELECT
+ event_time,
+ user AS killer,
+ query,
+ query_id
+ FROM system.query_log
+ WHERE query LIKE '%KILL QUERY%'
+ AND event_time >= now() - INTERVAL 1 HOUR
+ ORDER BY event_time DESC;
+ ```
+
+3. **Check for pod restarts (ClickHouse Cloud):**
+
+ ```bash
+ # Kubernetes
+ kubectl get events -n your-namespace \
+ --sort-by='.lastTimestamp' | grep -E 'Killing|Terminating'
+
+ # Check pod restart count
+ kubectl get pods -n your-namespace
+ ```
+
+4. **Check error_log for cancellation patterns:**
+
+ ```sql
+ SELECT
+ last_error_time,
+ last_error_message,
+ value AS error_count
+ FROM system.errors
+ WHERE name = 'QUERY_WAS_CANCELLED'
+ ORDER BY last_error_time DESC;
+ ```
+
+5. **Analyze cancellation timing:**
+
+ ```sql
+ -- See when during execution queries are cancelled
+ SELECT
+ toStartOfHour(event_time) AS hour,
+ count() AS cancelled_count,
+ avg(query_duration_ms / 1000) AS avg_duration_sec,
+ max(query_duration_ms / 1000) AS max_duration_sec
+ FROM system.query_log
+ WHERE exception_code = 394
+ AND event_date >= today() - 7
+ GROUP BY hour
+ ORDER BY hour DESC;
+ ```
+
+6. **Check if queries complete before showing cancelled:**
+
+ ```sql
+ -- Some queries may complete but still show as cancelled
+ SELECT
+ query_id,
+ type,
+ event_time,
+ query_duration_ms,
+ exception_code
+ FROM system.query_log
+ WHERE query_id = 'your_query_id'
+ ORDER BY event_time;
+ ```
+
+## Special considerations {#special-considerations}
+
+**For HTTP interface:**
+- Setting `cancel_http_readonly_queries_on_client_close = 1` auto-cancels on disconnect
+- Only works with `readonly > 0` (automatic for GET requests)
+- Useful to prevent runaway queries from disconnected clients
+- Can cause issues if the client has a short timeout but the query is valid
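+
+For illustration, the setting can be passed per request as a URL parameter; `curl -G` sends a GET request, which runs as readonly (host and port are placeholders):
+
+```bash
+# The server cancels this query if the client disconnects mid-stream
+curl -G "http://localhost:8123/" \
+  --data-urlencode "cancel_http_readonly_queries_on_client_close=1" \
+  --data-urlencode "query=SELECT count() FROM system.numbers"
+```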
+
+**For distributed queries:**
+- Cancelling parent query cancels all child queries on remote servers
+- Child queries show QUERY_WAS_CANCELLED when parent cancelled
+- Check `initial_query_id` to find the parent query
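+
+To see every part of one distributed execution recorded on the node you are connected to, filter `system.query_log` by the parent's ID (replace the placeholder):
+
+```sql
+SELECT
+    query_id,
+    initial_query_id,
+    is_initial_query,
+    type,
+    exception_code
+FROM system.query_log
+WHERE initial_query_id = 'parent_query_id'
+  AND event_date >= today() - 1
+ORDER BY event_time;
+```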
+
+**For long-running queries:**
+- Cancellation may take time to propagate through query pipeline
+- Some operations (like large JOINs or subqueries) have limited cancellation points
+- Query must reach a cancellation checkpoint to actually stop
+- In rare cases, queries may appear "stuck" but are making progress to cancellation
+
+**For graceful shutdowns (ClickHouse Cloud):**
+- During rolling updates, pods wait up to 1 hour for queries to complete
+- Queries running longer than grace period are cancelled
+- New connections rejected during shutdown
+- Design applications to handle these graceful cancellations
+
+**Cancellation vs interruption:**
+- `exception_code = 394`: Query was cancelled (shows as error)
+- `exception_code = 0` with early termination: Query was interrupted but did not error
+- Check `type` field in `query_log` to distinguish
+
+## Cancellation commands {#cancellation-commands}
+
+**Kill specific query:**
+
+```sql
+-- Asynchronous kill (default)
+KILL QUERY WHERE query_id = 'your_query_id';
+
+-- Synchronous kill (wait for cancellation to complete)
+KILL QUERY WHERE query_id = 'your_query_id' SYNC;
+
+-- Kill by user
+KILL QUERY WHERE user = 'problem_user';
+
+-- Kill long-running queries
+KILL QUERY WHERE elapsed > 3600;
+```
+
+**Check kill status:**
+
+```sql
+-- See if kill command succeeded
+SELECT
+ query_id,
+ user,
+ elapsed,
+ query
+FROM system.processes
+WHERE query_id = 'query_you_tried_to_kill';
+
+-- If still running, may need SYNC or more time
+```
+
+## Settings affecting cancellation {#cancellation-settings}
+
+```sql
+-- Query execution timeout
+max_execution_time = 0 -- 0 = unlimited (seconds)
+
+-- Cancel on client disconnect
+cancel_http_readonly_queries_on_client_close = 0 -- 0 = don't cancel, 1 = cancel
+
+-- Polling interval for checking cancellation
+interactive_delay = 100000 -- Microseconds
+
+-- For distributed queries
+distributed_connections_pool_size = 1024
+connections_with_failover_max_tries = 3
+```
+
+## Distinguishing cancellation types {#cancellation-types}
+
+```sql
+-- User-initiated kill
+SELECT * FROM system.query_log
+WHERE exception_code = 394
+ AND exception LIKE '%KILL QUERY%';
+
+-- Timeout-based cancellation
+SELECT * FROM system.query_log
+WHERE exception_code = 394
+ AND exception LIKE '%timeout%';
+
+-- Client disconnect cancellation
+SELECT * FROM system.query_log
+WHERE exception_code = 394
+ AND exception LIKE '%client%disconnect%';
+
+-- Shutdown-related cancellation
+-- (replace 'shutdown_start' and 'shutdown_end' with actual timestamps)
+SELECT * FROM system.query_log
+WHERE exception_code = 394
+ AND event_time BETWEEN 'shutdown_start' AND 'shutdown_end';
+```
+
+## Known issues with cancellation {#known-issues}
+
+**Issue 1: Slow query cancellation**
+- **Symptom:** Queries take a long time to cancel (minutes after `KILL QUERY`)
+- **Affected:** Complex queries with subqueries or large JOINs
+- **Cause:** Limited cancellation checkpoints in query execution
+- **Workaround:** Use `KILL QUERY SYNC` and wait, or restart server in extreme cases
+
+**Issue 2: Cannot cancel during subquery building**
+- **Symptom:** Query stuck building subquery, doesn't respond to cancel
+- **Affected:** Queries with `IN` subqueries or complex CTEs
+- **Cause:** Query planner doesn't check cancellation during subquery materialization
+- **Status:** Known issue, improved in newer versions with new analyzer
+
+**Issue 3: Double cancellation error**
+- **Symptom:** "Cannot cancel. Either no query sent or already cancelled" `LOGICAL_ERROR`
+- **Affected:** Distributed queries with failover
+- **Cause:** Race condition in cancellation logic
+- **Impact:** Usually harmless, query still gets cancelled
+
+## Best practices for handling cancellations {#best-practices}
+
+**1. Implement retry logic:**
+
+```python
+# Assumes `client`, `logger`, and the helpers `is_retryable` / `retry_query` exist in your application
+def execute_query_with_handling(query):
+    try:
+        return client.query(query)
+    except Exception as e:
+        if 'QUERY_WAS_CANCELLED' in str(e):
+            # Log the cancellation
+            logger.info(f"Query cancelled: {query}")
+ # Decide whether to retry based on context
+ if is_retryable(e):
+ return retry_query(query)
+ raise
+```
+
+**2. Monitor cancellation patterns:**
+
+```sql
+-- Track cancellation frequency
+SELECT
+ toStartOfDay(event_time) AS day,
+ count() AS cancelled_queries,
+ uniq(user) AS affected_users
+FROM system.query_log
+WHERE exception_code = 394
+ AND event_date >= today() - 30
+GROUP BY day
+ORDER BY day DESC;
+```
+
+**3. Design for graceful handling:**
+- Break very long operations into smaller chunks
+- Use `INSERT INTO ... SELECT` to materialize intermediate results
+- Implement savepoints for multi-stage operations
+- Design applications to resume from last checkpoint
+
+**4. Configure appropriate timeouts:**
+
+```sql
+-- Set realistic execution limits
+SET max_execution_time = 1800; -- 30 minutes
+
+-- For known long queries, set explicitly
+SELECT * FROM expensive_aggregation
+SETTINGS max_execution_time = 7200; -- 2 hours
+```
+
+## Monitoring cancelled queries {#monitoring}
+
+```sql
+-- Cancellation rate over time
+SELECT
+ toStartOfHour(event_time) AS hour,
+ count() AS total_queries,
+ countIf(exception_code = 394) AS cancelled,
+ round(cancelled / total_queries * 100, 2) AS cancellation_rate_pct
+FROM system.query_log
+WHERE event_date >= today() - 7
+ AND type = 'ExceptionWhileProcessing'
+GROUP BY hour
+HAVING cancelled > 0
+ORDER BY hour DESC;
+
+-- Most frequently cancelled query patterns
+SELECT
+ substr(normalizeQuery(query), 1, 100) AS query_pattern,
+ count() AS cancel_count,
+ avg(query_duration_ms / 1000) AS avg_duration_before_cancel
+FROM system.query_log
+WHERE exception_code = 394
+ AND event_date >= today() - 7
+GROUP BY query_pattern
+ORDER BY cancel_count DESC
+LIMIT 10;
+
+-- Users with most cancelled queries
+SELECT
+ user,
+ count() AS cancelled_count,
+ uniq(query_id) AS unique_queries
+FROM system.query_log
+WHERE exception_code = 394
+ AND event_date >= today() - 7
+GROUP BY user
+ORDER BY cancelled_count DESC;
+```
+
+## When a query shows as cancelled but completed {#completed-but-cancelled}
+
+Some queries may show `QUERY_WAS_CANCELLED` but actually completed:
+
+```sql
+-- Check both QueryFinish and ExceptionWhileProcessing
+SELECT
+ query_id,
+ type,
+ event_time,
+ query_duration_ms,
+ read_rows,
+ exception_code
+FROM system.query_log
+WHERE query_id = 'your_query_id'
+ORDER BY event_time;
+
+-- If you see QueryFinish before ExceptionWhileProcessing,
+-- the query actually completed successfully
+```
+
+This can happen when:
+- Client disconnects after query completes but before receiving results
+- Graceful shutdown starts after query finishes
+- Race condition between completion and cancellation
+
+## Difference from query interruption {#vs-interruption}
+
+```sql
+-- Cancelled queries (error)
+SELECT * FROM system.query_log
+WHERE exception_code = 394;
+
+-- Interrupted queries (no error, but stopped early)
+-- (replace expected_duration with a threshold in milliseconds)
+SELECT * FROM system.query_log
+WHERE exception_code = 0
+ AND type = 'QueryFinish'
+ AND query_duration_ms < expected_duration;
+
+-- Check result_rows to see if query produced results
+SELECT query_id, result_rows, read_rows
+FROM system.query_log
+WHERE query_id = 'your_query_id';
+```
+
+## Preventing unwanted cancellations {#preventing-cancellations}
+
+1. **Set appropriate limits:**
+
+   ```sql
+   -- Global limits
+   ALTER USER your_user SETTINGS max_execution_time = 3600;
+   ```
+
+   Or in the user profile (`users.xml`):
+
+   ```xml
+   <profiles>
+       <default>
+           <max_execution_time>7200</max_execution_time>
+       </default>
+   </profiles>
+   ```
+
+2. **Ensure stable client connections:**
+ - Use persistent connections
+ - Configure TCP keep-alive
+ - Set appropriate client timeouts
+ - Handle network interruptions
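+
+   A minimal sketch of matching the client timeout to the expected query duration, assuming the `clickhouse-connect` Python driver (parameter names differ for other drivers):
+
+   ```python
+   import clickhouse_connect
+
+   # Keep the HTTP session open long enough for the longest expected query,
+   # so the driver does not give up and trigger a server-side cancellation
+   client = clickhouse_connect.get_client(
+       host='your-host',
+       username='default',
+       password='your-password',
+       send_receive_timeout=3600,  # seconds; match or exceed max_execution_time
+   )
+   ```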
+
+3. **Optimize query performance:**
+   - Faster queries are less likely to be cancelled
+ - Reduce execution time below timeout limits
+ - Use proper indexes and partitioning
+
+4. **Monitor system health:**
+ - Track pod restarts and maintenance windows
+ - Alert on unexpected query cancellations
+ - Review cancellation patterns weekly
+
+## For ClickHouse Cloud users {#clickhouse-cloud}
+
+**Graceful shutdown behavior:**
+- Rolling updates happen automatically
+- 1-hour grace period for running queries
+- Queries >1 hour cancelled during restart
+- New connections rejected during shutdown
+- Design for \<1 hour query duration or handle retries
+
+**Recommendations:**
+- Keep queries under 1 hour when possible
+- Use materialized views for long aggregations
+- Implement retry logic for `QUERY_WAS_CANCELLED`
+- Monitor maintenance windows
+- Break long operations into smaller chunks
+
+If you're experiencing this error:
+1. Check if cancellation was intentional (`KILL QUERY` or user action)
+2. Review query duration vs configured timeouts
+3. Check for pod restarts or system shutdowns at error time
+4. Verify client didn't disconnect prematurely
+5. For unintentional cancellations, investigate what triggered them
+6. Implement retry logic if cancellations are transient
+7. Optimize queries if being cancelled due to timeout
+8. For queries that must run longer, increase timeout limits
+9. Monitor cancellation patterns to identify systemic issues
+
+**Related documentation:**
+- [KILL QUERY statement](/sql-reference/statements/kill#kill-query)
+- [Query complexity settings](/operations/settings/query-complexity)
+- [Server settings](/operations/server-configuration-parameters/settings)
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/395_FUNCTION_THROW_IF_VALUE_IS_NON_ZERO.md b/docs/troubleshooting/error_codes/395_FUNCTION_THROW_IF_VALUE_IS_NON_ZERO.md
new file mode 100644
index 00000000000..c98245f1cb5
--- /dev/null
+++ b/docs/troubleshooting/error_codes/395_FUNCTION_THROW_IF_VALUE_IS_NON_ZERO.md
@@ -0,0 +1,94 @@
+---
+slug: /troubleshooting/error-codes/395_FUNCTION_THROW_IF_VALUE_IS_NON_ZERO
+sidebar_label: '395 FUNCTION_THROW_IF_VALUE_IS_NON_ZERO'
+doc_type: 'reference'
+keywords: ['error codes', 'FUNCTION_THROW_IF_VALUE_IS_NON_ZERO', '395']
+title: '395 FUNCTION_THROW_IF_VALUE_IS_NON_ZERO'
+description: 'ClickHouse error code - 395 FUNCTION_THROW_IF_VALUE_IS_NON_ZERO'
+---
+
+## Error Code 395: FUNCTION_THROW_IF_VALUE_IS_NON_ZERO {#error-code-395}
+
+:::tip
+This error occurs when the `throwIf` function evaluates to a non-zero (true) value.
+The `throwIf` function is designed to intentionally throw an exception when its condition is met, and error code 395 is the standard error code for this behavior.
+:::
+
+### When you'll see it {#when-youll-see-it}
+
+You'll encounter this error in the following situations:
+
+1. **Explicit use of `throwIf` function:**
+ - When you deliberately use `throwIf()` in your query to validate data or enforce business rules
+ - Example: `SELECT throwIf(number = 2) FROM numbers(5)`
+
+2. **HTTP streaming queries:**
+ - When an exception occurs mid-stream while data is being sent over HTTP
+ - The error appears in the response body even after HTTP 200 status has been sent
+
+3. **Testing and validation:**
+ - When using `throwIf` to test error handling in applications
+ - During data quality checks that use assertions
+
+4. **Custom error codes (optional):**
+ - With the setting `allow_custom_error_code_in_throwif = 1`, you can specify custom error codes
+ - Example: `throwIf(1, 'test', toInt32(49))` - but this is generally not recommended
+
+### Potential causes {#potential-causes}
+
+1. **Intentional validation failure** - The most common cause, where `throwIf` is working as designed to catch invalid data
+
+2. **Business rule violation** - Data doesn't meet expected criteria (e.g., checking for null values, out-of-range numbers, duplicate records)
+
+3. **Test queries** - Using `throwIf` for debugging or testing error handling
+
+4. **HTTP response timing** - In HTTP queries, error code 395 can appear mid-response when processing rows incrementally
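+
+For instance, a data-quality assertion that deliberately fails a load when a staging table is empty (table name is illustrative):
+
+```sql
+-- Raises error 395 if no rows were loaded for today
+SELECT throwIf(count() = 0, 'No rows loaded for today')
+FROM staging_events
+WHERE event_date = today();
+```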
+
+### Quick fixes {#quick-fixes}
+
+**1. For legitimate validation failures:**
+
+```sql
+-- Review the condition causing the exception
+SELECT throwIf(number = 3, 'Value 3 is not allowed') FROM numbers(10);
+```
+
+Fix: Adjust your data or query logic to avoid the triggering condition.
+
+**2. For HTTP streaming issues:**
+
+```sql
+-- Enable response buffering to get complete results before sending HTTP headers
+SELECT * FROM table WHERE condition
+SETTINGS wait_end_of_query=1, http_response_buffer_size=10485760;
+```
+
+**3. For unexpected errors in production:**
+
+```sql
+-- Replace throwIf with conditional logic
+-- Instead of:
+SELECT throwIf(value > 100, 'Value too large')
+
+-- Use:
+SELECT if(value > 100, NULL, value) FROM table;
+```
+
+**4. For testing/debugging:**
+
+```sql
+-- Use identity() function to bypass optimization and see raw performance
+SELECT identity(column) FROM table WHERE NOT throwIf(column IS NULL);
+```
+
+### Important notes {#important-notes}
+
+- The `throwIf` function is **intentional** - it's meant to throw exceptions when the condition is true
+- Error code 395 itself is not a bug; it indicates the function is working as designed
+- When using custom error codes (with `allow_custom_error_code_in_throwif = 1`), thrown exceptions may have unexpected error codes, making debugging harder
+
+### Related documentation {#related-documentation}
+
+- [`throwIf` function documentation](/sql-reference/functions/other-functions#throwIf)
+- [HTTP Interface and error handling](/interfaces/http)
+- [Session settings: allow_custom_error_code_in_throwif](/operations/settings/settings#allow_custom_error_code_in_throwif)
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/396_TOO_MANY_ROWS_OR_BYTES.md b/docs/troubleshooting/error_codes/396_TOO_MANY_ROWS_OR_BYTES.md
new file mode 100644
index 00000000000..c9a0d2043f0
--- /dev/null
+++ b/docs/troubleshooting/error_codes/396_TOO_MANY_ROWS_OR_BYTES.md
@@ -0,0 +1,171 @@
+---
+slug: /troubleshooting/error-codes/396_TOO_MANY_ROWS_OR_BYTES
+sidebar_label: '396 TOO_MANY_ROWS_OR_BYTES'
+doc_type: 'reference'
+keywords: ['error codes', 'TOO_MANY_ROWS_OR_BYTES', '396']
+title: '396 TOO_MANY_ROWS_OR_BYTES'
+description: 'ClickHouse error code - 396 TOO_MANY_ROWS_OR_BYTES'
+---
+
+# Error Code 396: TOO_MANY_ROWS_OR_BYTES
+
+:::tip
+This error occurs when query results exceed limits set by `max_result_rows` or `max_result_bytes` settings.
+It's a safety mechanism to prevent queries from consuming excessive memory or network bandwidth when returning large result sets.
+:::
+
+**Error Message Format:**
+
+```text
+Code: 396. DB::Exception: Limit for result exceeded, max bytes: X MiB, current bytes: Y MiB. (TOO_MANY_ROWS_OR_BYTES)
+```
+
+or
+
+```text
+Code: 396. DB::Exception: Limit for result exceeded, max rows: X thousand, current rows: Y thousand. (TOO_MANY_ROWS_OR_BYTES)
+```
+
+### When you'll see it {#when-youll-see-it}
+
+1. **Large query results:**
+ - When a `SELECT` query returns more rows than `max_result_rows` (default: unlimited in self-hosted, varies in ClickHouse Cloud)
+ - When result size exceeds `max_result_bytes` limit
+
+2. **LowCardinality columns:**
+ - With `LowCardinality` columns, even small row counts can trigger this error
+ - LowCardinality dictionaries add significant overhead to result size
+ - A query returning 209 rows can exceed 10MB due to dictionary metadata
+
+3. **HTTP interface queries:**
+ - Particularly common when using SQL Console or HTTP clients
+ - ClickHouse Cloud SQL Console sets `result_overflow_mode=break` by default
+
+4. **Settings profiles:**
+ - When organization/user settings profiles enforce restrictive result limits
+ - Default limits may be set at the profile level for resource control
+
+### Potential causes {#potential-causes}
+
+1. **Queries returning too many rows** - The query legitimately returns more data than allowed by `max_result_rows`
+
+2. **LowCardinality overhead** - Using `LowCardinality` columns with small fixed-size types causes dictionary metadata to inflate result size unexpectedly
+
+3. **Restrictive profile settings** - Settings profiles (in ClickHouse Cloud or user profiles) enforce low limits like:
+
+ ```sql
+ max_result_rows = 1000
+ max_result_bytes = 10000000 -- 10MB
+ result_overflow_mode = 'throw'
+ ```
+
+4. **Query cache incompatibility** - Since ClickHouse 24.9+, using `use_query_cache = true` with `result_overflow_mode != 'throw'` triggers error 731, but older configurations may still hit error 396
+
+5. **Missing ORDER BY optimization** - Queries without `ORDER BY` may hit the limit, while adding `ORDER BY` allows the query to succeed (query execution differences)
+
+### Quick fixes {#quick-fixes}
+
+**1. Increase result limits:**
+
+```sql
+-- For your current session
+SET max_result_rows = 0; -- Unlimited rows
+SET max_result_bytes = 0; -- Unlimited bytes
+
+-- For specific query
+SELECT * FROM large_table
+SETTINGS max_result_rows = 100000, max_result_bytes = 100000000;
+```
+
+**2. Use `result_overflow_mode = 'break'` to get partial results:**
+
+```sql
+-- Returns partial results when limit is reached
+SELECT * FROM table
+SETTINGS result_overflow_mode = 'break',
+ max_result_rows = 10000;
+```
+
+:::warning
+In ClickHouse 24.9+, `result_overflow_mode = 'break'` is **incompatible** with query cache
+:::
+
+```sql
+-- This will fail with error 731 in 24.9+
+SELECT * FROM table
+SETTINGS use_query_cache = true, result_overflow_mode = 'break'; -- Error!
+
+-- Solution: Use 'throw' mode with query cache
+SELECT * FROM table
+SETTINGS use_query_cache = true, result_overflow_mode = 'throw';
+```
+
+**3. Optimize LowCardinality usage:**
+
+```sql
+-- Check if LowCardinality is causing bloat
+SELECT name, type FROM system.columns
+WHERE table = 'your_table' AND type LIKE '%LowCardinality%';
+
+-- Consider removing LowCardinality for small fixed-size types
+ALTER TABLE your_table MODIFY COLUMN col String; -- Remove LowCardinality
+```
+
+**4. Use pagination with LIMIT/OFFSET:**
+
+```sql
+-- Fetch results in chunks
+SELECT * FROM large_table ORDER BY id LIMIT 10000 OFFSET 0;
+SELECT * FROM large_table ORDER BY id LIMIT 10000 OFFSET 10000;
+```
+
+**5. Modify settings profile (ClickHouse Cloud):**
+
+```sql
+-- Check current profile settings
+SELECT name, value FROM system.settings
+WHERE name IN ('max_result_rows', 'max_result_bytes', 'result_overflow_mode');
+
+-- Modify profile (requires admin)
+ALTER SETTINGS PROFILE your_profile SETTINGS
+ max_result_rows = 0,
+ max_result_bytes = 0,
+ result_overflow_mode = 'throw';
+```
+
+**6. For HTTP/JDBC clients - pass settings in connection:**
+
+```bash
+# HTTP with URL parameters
+curl "https://your-host:8443/?max_result_rows=0&max_result_bytes=0" \
+ -d "SELECT * FROM table"
+
+# JDBC connection string
+jdbc:clickhouse://host:port/database?max_result_rows=0&max_result_bytes=0
+```
+
+### Important notes {#important-notes}
+
+- **Cloud SQL Console behavior:** ClickHouse Cloud SQL Console automatically sets `result_overflow_mode=break` and `max_result_rows=500000` in HTTP query parameters
+
+- **LowCardinality overhead:** When using `LowCardinality`, dictionary metadata is sent with each data block, which can cause unexpected size bloat:
+ - 209 rows × 1 column can exceed 10MB limit
+ - 110 rows can require 979MB due to dictionary overhead
+ - Solution: Remove `LowCardinality` or increase `max_result_bytes`
+
+- **Setting precedence:** Settings passed in query parameters override profile settings, but profile settings apply if not explicitly overridden
+
+- **`result_overflow_mode` behavior:**
+ - `'throw'` (default): Throws exception when limit exceeded
+ - `'break'`: Returns partial results (incompatible with query cache in 24.9+)
+ - Using `'break'` provides no indication that results were truncated
+
+- **Version compatibility:** The query cache + overflow mode restriction was introduced in ClickHouse 24.9.
+
+### Related documentation {#related-documentation}
+
+- [`max_result_rows` setting](/operations/settings/settings#max_result_rows)
+- [`max_result_bytes` setting](/operations/settings/settings#max_result_bytes)
+- [`result_overflow_mode` setting](/operations/settings/settings#result_overflow_mode)
+- [Query complexity settings](/operations/settings/query-complexity)
+- [LowCardinality data type](/sql-reference/data-types/lowcardinality)
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/403_INVALID_JOIN_ON_EXPRESSION.md b/docs/troubleshooting/error_codes/403_INVALID_JOIN_ON_EXPRESSION.md
new file mode 100644
index 00000000000..f3801758ed4
--- /dev/null
+++ b/docs/troubleshooting/error_codes/403_INVALID_JOIN_ON_EXPRESSION.md
@@ -0,0 +1,167 @@
+---
+slug: /troubleshooting/error-codes/403_INVALID_JOIN_ON_EXPRESSION
+sidebar_label: '403 INVALID_JOIN_ON_EXPRESSION'
+doc_type: 'reference'
+keywords: ['error codes', 'INVALID_JOIN_ON_EXPRESSION', '403']
+title: '403 INVALID_JOIN_ON_EXPRESSION'
+description: 'ClickHouse error code - 403 INVALID_JOIN_ON_EXPRESSION'
+---
+
+# Error Code 403: INVALID_JOIN_ON_EXPRESSION
+
+:::tip
+This error occurs when ClickHouse cannot parse or process the JOIN ON conditions in your query.
+The error indicates that the JOIN expression violates ClickHouse's rules for join conditions, particularly when dealing with complex expressions, OR clauses, NULL conditions, or non-equi joins.
+:::
+
+**Error Message Format:**
+
+```text
+Code: 403. DB::Exception: Cannot get JOIN keys from JOIN ON section: ''. (INVALID_JOIN_ON_EXPRESSION)
+```
+
+or
+
+```text
+Code: 403. DB::Exception: Invalid expression for JOIN ON. Expected equals expression, got . (INVALID_JOIN_ON_EXPRESSION)
+```
+
+### When you'll see it {#when-youll-see-it}
+
+1. **OR conditions not in disjunctive normal form (DNF):**
+ - `t1.a = t2.a AND (t1.b = t2.b OR t1.c = t2.c)` - OR not at top level
+ - `(t1.a = t2.a AND t1.b = t2.b) OR (t1.a = t2.a AND t1.c = t2.c)` ✅ - Proper DNF
+
+2. **JOIN conditions with only NULL checks:**
+ - `(t1.id IS NULL) AND (t2.id IS NULL)` - No equality condition
+ - Missing join keys between tables
+
+3. **Non-equi joins without experimental setting:**
+ - `t1.a > t2.b` without `allow_experimental_join_condition = 1`
+
+4. **Incompatible settings combination:**
+ - Using `allow_experimental_join_condition = 1` with `join_use_nulls = 1` (fixed in recent versions)
+
+5. **Complex OR conditions with filters:**
+ - `t1.id = t2.id OR t1.val = 'constant'` - Second part has no join key
+
+### Potential causes {#potential-causes}
+
+1. **OR conditions nested within AND** - ClickHouse requires OR at the top level (disjunctive normal form)
+
+2. **Missing join keys in OR branches** - Each OR branch must contain at least one equality condition between tables:
+
+ ```sql
+ -- Wrong: second branch has no join key
+ ON t1.id = t2.id OR (t1.val IS NULL AND t2.val IS NULL)
+
+ -- Correct: both branches have join keys (implicit equality via NULL matching)
+ ON t1.id = t2.id OR (isNull(t1.val) = isNull(t2.val) AND t1.val IS NULL)
+ ```
+
+3. **Non-equi join conditions without proper setup** - Inequality conditions (`<`, `>`, `!=`) require:
+
+ - `allow_experimental_join_condition = 1` setting
+ - `hash` or `grace_hash` join algorithm
+ - Cannot be used with `join_use_nulls = 1`
+
+4. **Power BI/Tableau generated queries** - BI tools often generate JOIN conditions with NULL handling that ClickHouse doesn't support in the old query analyzer
+
+5. **Multiple JOIN with column ambiguity** - In multi-table JOINs, columns may be referenced with wrong table qualifiers
+
+### Quick fixes {#quick-fixes}
+
+**1. Rewrite OR conditions to disjunctive normal form (DNF):**
+
+```sql
+-- Wrong: AND at top level
+SELECT * FROM t1 JOIN t2
+ON t1.key = t2.key AND (t1.a = t2.a OR t1.b = t2.b);
+
+-- Correct: OR at top level, repeat common conditions
+SELECT * FROM t1 JOIN t2
+ON (t1.key = t2.key AND t1.a = t2.a)
+ OR (t1.key = t2.key AND t1.b = t2.b);
+```
+
+**2. For NULL-safe joins, use `isNotDistinctFrom` or `COALESCE`:**
+
+```sql
+-- Instead of: t1.id = t2.id OR (t1.id IS NULL AND t2.id IS NULL)
+
+-- Option 1: isNotDistinctFrom (matches NULLs)
+SELECT * FROM t1 LEFT JOIN t2
+ON isNotDistinctFrom(t1.id, t2.id);
+
+-- Option 2: COALESCE with equality check (most efficient)
+SELECT * FROM t1 LEFT JOIN t2
+ON COALESCE(t1.id, 0) = COALESCE(t2.id, 0)
+ AND isNull(t1.id) = isNull(t2.id);
+
+-- Option 3: Using isNull equality
+SELECT * FROM t1 LEFT JOIN t2
+ON t1.id = t2.id
+ OR (isNull(t1.id) = isNull(t2.id) AND t1.id IS NULL);
+```
+
+**3. Enable experimental analyzer for better OR/NULL support:**
+
+```sql
+SET allow_experimental_analyzer = 1;
+
+-- Now this works:
+SELECT * FROM t1 LEFT JOIN t2
+ON t1.id = t2.id OR (t1.id IS NULL AND t2.id IS NULL);
+```
+
+**4. For non-equi joins (inequality conditions):**
+
+```sql
+-- Enable experimental support
+SET allow_experimental_join_condition = 1;
+
+-- Now you can use inequality joins
+SELECT * FROM t1 INNER JOIN t2
+ON t1.key = t2.key AND t1.a > t2.b;
+```
+
+**Important:** Do NOT use `join_use_nulls = 1` with non-equi joins - these settings are incompatible.
+
+**5. Simplify complex filter conditions:**
+
+```sql
+-- Wrong: constant filter in OR without join key
+SELECT * FROM t1 JOIN t2
+ON t1.id = t2.id OR t1.val = 'constant';
+
+-- Correct: move filter to WHERE clause
+SELECT * FROM t1 JOIN t2
+ON t1.id = t2.id
+WHERE t1.val = 'constant' OR t1.id IS NOT NULL;
+```
+
+### Important notes {#important-notes}
+
+- **Disjunctive Normal Form (DNF) requirement:** OR operators must be at the top level of the JOIN condition. Each OR branch should contain complete join conditions.
+
+- **Join key requirement:** Each branch of an OR condition must include at least one equality condition between the joined tables.
+
+- **Experimental analyzer:** The new query analyzer (`allow_experimental_analyzer = 1`) has better support for complex JOIN conditions, including NULL handling. It may become default in future versions.
+
+- **Performance considerations:**
+ - Each OR branch creates a separate hash table, increasing memory usage linearly
+ - Using `COALESCE` for NULL matching is ~5x faster than OR with NULL checks
+ - Power BI bidirectional filters generate complex OR conditions that may not work
+
+- **BI tool compatibility:** Tools like Power BI, Tableau, and Looker may generate incompatible JOIN syntax. Solutions:
+ - Use import mode instead of DirectQuery
+ - Enable `allow_experimental_analyzer = 1` at cluster level
+ - Use ODBC direct queries with custom SQL
+ - Create views with compatible JOIN syntax
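+
+For the last option, a view can pre-apply a NULL-safe join so the BI tool only ever sends ClickHouse-compatible SQL (table and column names are illustrative):
+
+```sql
+CREATE VIEW orders_enriched AS
+SELECT
+    o.order_id,
+    o.customer_id,
+    c.customer_name
+FROM orders AS o
+LEFT JOIN customers AS c
+    ON isNotDistinctFrom(o.customer_id, c.customer_id);
+```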
+
+### Related documentation {#related-documentation}
+
+- [JOIN clause documentation](/sql-reference/statements/select/join)
+- [JOIN with inequality conditions](/sql-reference/statements/select/join#join-with-inequality-conditions-for-columns-from-different-tables)
+- [NULL values in JOIN keys](/sql-reference/statements/select/join#null-values-in-join-keys)
+- [`join_algorithm` setting](/operations/settings/settings#join_algorithm)
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/439_CANNOT_SCHEDULE_TASK.md b/docs/troubleshooting/error_codes/439_CANNOT_SCHEDULE_TASK.md
new file mode 100644
index 00000000000..2f91b804df2
--- /dev/null
+++ b/docs/troubleshooting/error_codes/439_CANNOT_SCHEDULE_TASK.md
@@ -0,0 +1,746 @@
+---
+slug: /troubleshooting/error-codes/439_CANNOT_SCHEDULE_TASK
+sidebar_label: '439 CANNOT_SCHEDULE_TASK'
+doc_type: 'reference'
+keywords: ['error codes', 'CANNOT_SCHEDULE_TASK', '439']
+title: '439 CANNOT_SCHEDULE_TASK'
+description: 'ClickHouse error code - 439 CANNOT_SCHEDULE_TASK'
+---
+
+# Error 439: CANNOT_SCHEDULE_TASK
+
+:::tip
+This error occurs when ClickHouse cannot allocate a new thread from the thread pool to execute a task.
+It indicates that either the thread pool is exhausted (all threads busy), system thread limit is reached, or the OS cannot create new threads.
+:::
+
+## Most common causes {#most-common-causes}
+
+1. **Thread pool exhausted**
+ - All threads in pool busy with active tasks
+ - Too many concurrent queries requesting threads
+ - Thread pool size limit reached (threads = max pool size)
+ - Jobs queued waiting for available threads
+
+2. **System thread limit reached**
+ - OS kernel thread limit exceeded
+ - `ulimit -u` (max user processes) reached
+ - System-wide thread limit hit
+ - Container or cgroup thread limit reached
+
+3. **High query concurrency with max_threads settings**
+ - Many queries each requesting [`max_threads`](/operations/settings/settings#max_threads) threads
+ - [`max_insert_threads`](/operations/settings/settings#max_insert_threads) setting too high with many concurrent inserts
+ - Thread demand exceeds available thread pool capacity
+ - Spike in concurrent query workload
+
+4. **Resource exhaustion**
+ - System cannot allocate memory for new threads
+ - Out of memory for thread stack allocation
+ - System resource limits preventing thread creation
+ - Container memory limits affecting thread creation
+
+5. **Misconfigured thread pool settings**
+ - [`max_thread_pool_size`](/operations/server-configuration-parameters/settings#max_thread_pool_size) set too low for workload
+ - Thread pool not properly sized for concurrent queries
+ - Imbalance between query concurrency and thread availability
+
+## Common solutions {#common-solutions}
+
+**1. Check current thread usage**
+
+```sql
+-- View current thread pool status
+SELECT
+ metric,
+ value
+FROM system.metrics
+WHERE metric LIKE '%Thread%'
+ORDER BY metric;
+
+-- Key metrics to check:
+-- QueryPipelineExecutorThreads - active query execution threads
+-- QueryPipelineExecutorThreadsActive - threads currently executing
+-- GlobalThread - total threads in global pool
+```
+
+**2. Check thread pool configuration**
+
+```sql
+-- View thread pool settings
+SELECT
+ name,
+ value,
+ description
+FROM system.server_settings
+WHERE name LIKE '%thread%'
+ORDER BY name;
+
+-- Key settings:
+-- max_thread_pool_size - maximum threads in global pool
+-- max_thread_pool_free_size - idle threads kept in pool
+-- thread_pool_queue_size - max tasks waiting in queue
+```
+
+**3. Reduce per-query thread usage**
+
+```sql
+-- Limit threads for specific query
+SELECT * FROM large_table
+SETTINGS max_threads = 4;
+
+-- Reduce insert threads for an INSERT ... SELECT
+INSERT INTO table
+SETTINGS max_insert_threads = 4
+SELECT * FROM source_table;
+
+-- Set user-level defaults
+ALTER USER your_user SETTINGS max_threads = 8;
+```
+
+**4. Check system thread limits**
+
+```bash
+# Check current thread limits
+ulimit -u
+
+# Check system-wide limits
+cat /proc/sys/kernel/threads-max
+cat /proc/sys/kernel/pid_max
+
+# Check current thread count
+ps -eLf | wc -l
+
+# For containers, check cgroup limits
+cat /sys/fs/cgroup/pids/pids.max
+```
+
+**5. Enable concurrency control (if available)**
+
+```sql
+-- Check concurrency control settings
+SELECT
+ name,
+ value
+FROM system.server_settings
+WHERE name LIKE '%concurrent_threads_soft_limit%';
+
+-- concurrent_threads_soft_limit_ratio_to_cores - limits threads per core
+-- concurrent_threads_soft_limit_num - absolute thread limit
+```
+
+:::note
+Concurrency control was broken in versions prior to 24.10; the fix landed in October 2024 and is available in 24.10+.
+:::
+
+**6. Monitor and limit concurrent queries**
+
+```sql
+-- Check concurrent query count
+SELECT count() AS concurrent_queries
+FROM system.processes;
+
+-- Limit concurrent queries per user
+ALTER USER your_user SETTINGS max_concurrent_queries_for_user = 10;
+
+-- Check thread usage per query
+SELECT
+ query_id,
+ user,
+ ProfileEvents['QueryPipelineExecutorThreads'] AS threads_used,
+ query
+FROM system.processes
+ORDER BY threads_used DESC;
+```
+
+## Common scenarios {#common-scenarios}
+
+**Scenario 1: No free threads in pool**
+
+```text
+Error: Cannot schedule a task: no free thread (timeout=0)
+(threads=15000, jobs=15000)
+```
+
+**Cause:** Thread pool completely saturated; all 15000 threads busy.
+
+**Solution:**
+- Reduce concurrent query load
+- Lower [`max_threads`](/operations/settings/settings#max_threads) and [`max_insert_threads`](/operations/settings/settings#max_insert_threads) settings
+- Increase [`max_thread_pool_size`](/operations/server-configuration-parameters/settings#max_thread_pool_size) if the system can handle it
+- Wait for queries to complete and retry
+
+**Scenario 2: Failed to start thread**
+
+```text
+Error: Cannot schedule a task: failed to start the thread
+(threads=14755, jobs=14754)
+```
+
+**Cause:** System unable to create new thread (OS or resource limit).
+
+**Solution:**
+
+```bash
+# Increase system limits
+ulimit -u 65535
+
+# Or in /etc/security/limits.conf
+* soft nproc 65535
+* hard nproc 65535
+
+# Increase kernel limits
+sysctl -w kernel.threads-max=100000
+sysctl -w kernel.pid_max=100000
+```
+
+**Scenario 3: Cannot allocate thread**
+
+```text
+Error: Cannot schedule a task: cannot allocate thread
+```
+
+**Cause:** Memory or system resources insufficient for thread creation.
+
+**Solution:**
+- Check available memory: `free -h`
+- Check if system is swapping: `vmstat 1`
+- Reduce concurrent query load
+- Increase system memory or reduce thread usage
+
+**Scenario 4: Insert spike with `max_insert_threads`**
+
+```text
+Error: CANNOT_SCHEDULE_TASK during high insert load
+```
+
+**Cause:** Many concurrent inserts each using high [`max_insert_threads`](/operations/settings/settings#max_insert_threads).
+
+**Solution:**
+
+```sql
+-- Reduce insert threads for the session
+SET max_insert_threads = 4;
+
+-- For a specific INSERT ... SELECT
+INSERT INTO table
+SETTINGS max_insert_threads = 2
+SELECT * FROM source_table;
+
+-- Use async inserts to batch operations
+SET async_insert = 1;
+```
+
+**Scenario 5: Query spike exhausting thread pool**
+
+```text
+Error appears during traffic spike
+Multiple queries failing simultaneously
+```
+
+**Cause:** Sudden increase in concurrent queries.
+
+**Solution:**
+- Implement query queuing or rate limiting on client side
+- Reduce `max_threads` per query
+- Increase `max_thread_pool_size` (if system allows)
+- Scale horizontally (add more replicas)
+
+## Prevention tips {#prevention-tips}
+
+1. **Set reasonable thread limits:** Don't use excessively high [`max_threads`](/operations/settings/settings#max_threads) values
+2. **Monitor thread usage:** Track thread pool metrics regularly
+3. **Configure system limits:** Ensure OS limits are appropriate for workload
+4. **Use async inserts:** Reduce thread usage for insert workloads
+5. **Implement rate limiting:** Control concurrent query load
+6. **Scale horizontally:** Add replicas to distribute thread demand
+7. **Optimize queries:** Efficient queries need fewer threads and complete faster
+
+## Debugging steps {#debugging-steps}
+
+1. **Check recent `CANNOT_SCHEDULE_TASK` errors:**
+
+ ```sql
+ SELECT
+ event_time,
+ query_id,
+ user,
+ exception,
+ query
+ FROM system.query_log
+ WHERE exception_code = 439
+ AND event_date >= today() - 1
+ ORDER BY event_time DESC
+ LIMIT 20;
+ ```
+
+2. **Monitor thread pool metrics:**
+
+ ```sql
+ SELECT
+ event_time,
+ CurrentMetric_GlobalThread AS global_threads,
+ CurrentMetric_QueryPipelineExecutorThreads AS executor_threads,
+ CurrentMetric_QueryPipelineExecutorThreadsActive AS active_threads,
+ CurrentMetric_Query AS concurrent_queries
+ FROM system.metric_log
+ WHERE event_time >= now() - INTERVAL 1 HOUR
+ ORDER BY event_time DESC
+ LIMIT 100;
+ ```
+
+3. **Check concurrent query patterns:**
+
+ ```sql
+ SELECT
+ toStartOfMinute(event_time) AS minute,
+ count() AS query_count,
+ countIf(exception_code = 439) AS thread_errors,
+ avg(ProfileEvents['QueryPipelineExecutorThreads']) AS avg_threads
+ FROM system.query_log
+ WHERE event_time >= now() - INTERVAL 1 HOUR
+ GROUP BY minute
+ ORDER BY minute DESC;
+ ```
+
+4. **Identify high thread-consuming queries:**
+
+ ```sql
+ SELECT
+ query_id,
+ user,
+ ProfileEvents['QueryPipelineExecutorThreads'] AS threads,
+ ProfileEvents['QueryPipelineExecutorThreadsActive'] AS active_threads,
+ normalizeQuery(query) AS query_pattern
+ FROM system.query_log
+ WHERE event_time >= now() - INTERVAL 1 HOUR
+ AND type = 'QueryFinish'
+ ORDER BY threads DESC
+ LIMIT 20;
+ ```
+
+5. **Check system thread limits:**
+
+ ```bash
+ # Check user process limit
+ ulimit -u
+
+ # Check current thread count
+ ps -eLf | wc -l
+
+ # Check system limits
+ cat /proc/sys/kernel/threads-max
+ cat /proc/sys/kernel/pid_max
+
+ # For containers
+ cat /sys/fs/cgroup/pids/pids.current
+ cat /sys/fs/cgroup/pids/pids.max
+ ```
+
+6. **Review thread pool configuration:**
+
+ ```sql
+ SELECT
+ name,
+ value,
+ default
+ FROM system.server_settings
+ WHERE name IN (
+ 'max_thread_pool_size',
+ 'max_thread_pool_free_size',
+ 'thread_pool_queue_size',
+ 'concurrent_threads_soft_limit_num',
+ 'concurrent_threads_soft_limit_ratio_to_cores'
+ );
+ ```
+
+## Special considerations {#special-considerations}
+
+**For ClickHouse Cloud:**
+- Thread pool sized based on instance tier
+- Cannot directly configure [`max_thread_pool_size`](/operations/server-configuration-parameters/settings#max_thread_pool_size)
+- Errors may indicate need to scale up instance
+- Temporary spikes should be tolerated with retry logic
+
+**Thread pool types:**
+- **Global thread pool:** General query execution threads
+- **Background pool:** Merges and mutations
+- **IO pool:** Disk and network I/O operations
+- **Schedule pool:** Background scheduled tasks
+
+**Concurrency control:**
+- Feature to limit threads based on CPU cores
+- Was broken in versions before ~October 2024
+- Fixed properly in 24.10+
+- Settings: [`concurrent_threads_soft_limit_ratio_to_cores`](/operations/server-configuration-parameters/settings#concurrent_threads_soft_limit_ratio_to_cores)
+
+**Thread vs query limits:**
+- [`max_concurrent_queries`](/operations/server-configuration-parameters/settings#max_concurrent_queries) limits number of queries
+- [`max_threads`](/operations/settings/settings#max_threads) limits threads per query
+- Total threads = queries × threads_per_query
+- Thread pool must accommodate total demand
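+
+A quick back-of-the-envelope check of that relationship against the current workload (the constant 8 stands in for your typical per-query `max_threads`):
+
+```sql
+-- Worst-case thread demand if every running query used max_threads = 8
+SELECT
+    count() AS concurrent_queries,
+    count() * 8 AS worst_case_thread_demand
+FROM system.processes;
+```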
+
+## Thread-related settings {#thread-settings}
+
+**Server-level (config.xml):**
+
+```xml
+<clickhouse>
+    <!-- Maximum number of threads in the global thread pool -->
+    <max_thread_pool_size>10000</max_thread_pool_size>
+
+    <!-- Number of idle threads kept in the pool -->
+    <max_thread_pool_free_size>1000</max_thread_pool_free_size>
+
+    <!-- Maximum number of tasks waiting in the queue -->
+    <thread_pool_queue_size>10000</thread_pool_queue_size>
+
+    <!-- Limit query threads relative to CPU cores (24.10+) -->
+    <concurrent_threads_soft_limit_ratio_to_cores>2</concurrent_threads_soft_limit_ratio_to_cores>
+</clickhouse>
+```
+
+**Query-level:**
+
+```sql
+-- Threads for reading/processing
+SET max_threads = 8;
+
+-- Threads for parallel inserts
+SET max_insert_threads = 4;
+
+-- Threads for distributed queries
+SET max_distributed_connections = 1024;
+
+-- Background operations (server-level settings, configured in config.xml rather than per session)
+-- background_pool_size = 16
+-- background_merges_mutations_concurrency_ratio = 2
+```
+
+## System limit configuration {#system-limits}
+
+**Linux ulimits:**
+
+```bash
+# Temporary increase
+ulimit -u 65535
+
+# Permanent configuration in /etc/security/limits.conf
+clickhouse soft nproc 65535
+clickhouse hard nproc 65535
+
+# Or for all users
+* soft nproc 65535
+* hard nproc 65535
+```
+
+**Kernel parameters:**
+
+```bash
+# Increase thread limits
+sysctl -w kernel.threads-max=200000
+sysctl -w kernel.pid_max=200000
+
+# Make permanent in /etc/sysctl.conf
+kernel.threads-max = 200000
+kernel.pid_max = 200000
+```
+
+**Container limits (Kubernetes):**
+
+```yaml
+# Pod spec - adjust if needed
+spec:
+ containers:
+ - name: clickhouse
+ resources:
+ limits:
+ # Memory affects thread creation
+ memory: 32Gi
+```
+
+## Error message variations {#error-variations}
+
+**"no free thread":**
+- Thread pool at capacity
+- All threads busy with tasks
+- More common, usually temporary
+
+**"failed to start the thread":**
+- System failed to create new thread
+- OS or resource limit reached
+- More serious, indicates system issue
+
+**"cannot allocate thread":**
+- Memory allocation failed for thread
+- System resource exhaustion
+- May indicate memory pressure
+
+## Monitoring thread health {#monitoring}
+
+```sql
+-- Real-time thread usage
+SELECT
+ metric,
+ value,
+ description
+FROM system.metrics
+WHERE metric IN (
+ 'GlobalThread',
+ 'GlobalThreadActive',
+ 'LocalThread',
+ 'LocalThreadActive',
+ 'QueryPipelineExecutorThreads',
+ 'QueryPipelineExecutorThreadsActive'
+);
+
+-- Thread usage over time
+SELECT
+ toStartOfMinute(event_time) AS minute,
+ max(CurrentMetric_GlobalThread) AS max_threads,
+ max(CurrentMetric_GlobalThreadActive) AS max_active,
+ max(CurrentMetric_Query) AS max_queries
+FROM system.metric_log
+WHERE event_time >= now() - INTERVAL 1 HOUR
+GROUP BY minute
+ORDER BY minute DESC;
+
+-- Queries that failed due to thread exhaustion
+SELECT
+ toStartOfMinute(event_time) AS minute,
+ count() AS error_count,
+ count(DISTINCT user) AS affected_users
+FROM system.query_log
+WHERE exception_code = 439
+ AND event_date >= today() - 7
+GROUP BY minute
+HAVING error_count > 0
+ORDER BY minute DESC;
+```
+
+## Recovery and mitigation {#recovery}
+
+**Immediate actions:**
+1. **Wait and retry** - Thread pool may free up quickly
+2. **Kill long-running queries** - Free up threads
+ ```sql
+ -- Find long-running queries
+ SELECT query_id, user, elapsed, query
+ FROM system.processes
+ WHERE elapsed > 300
+ ORDER BY elapsed DESC;
+
+ -- Kill if appropriate
+ KILL QUERY WHERE query_id = 'long_running_query';
+ ```
+
+3. **Reduce query load** - Temporarily throttle queries on client side
+4. **Restart ClickHouse** - Clears thread pool (last resort)
+
+**Long-term fixes:**
+
+1. **Optimize query thread usage:**
+ ```sql
+ -- Set sensible defaults
+ ALTER USER default SETTINGS max_threads = 8;
+ ALTER USER default SETTINGS max_insert_threads = 4;
+ ```
+
+2. **Increase thread pool size** (if system can handle it):
+
+   ```xml
+   <max_thread_pool_size>20000</max_thread_pool_size>
+   ```
+
+3. **Configure concurrency control:**
+
+   ```xml
+   <!-- Requires 24.10+ -->
+   <concurrent_threads_soft_limit_ratio_to_cores>2</concurrent_threads_soft_limit_ratio_to_cores>
+   ```
+
+4. **Increase system limits:**
+
+ ```bash
+ # Increase user process limit
+ ulimit -u 100000
+
+ # Increase kernel limits
+ sysctl -w kernel.threads-max=200000
+ ```
+
+## Prevention tips {#prevention-tips-summary}
+
+1. **Set appropriate max_threads:** Don't use default if you have high concurrency
+2. **Monitor thread metrics:** Track thread pool usage trends
+3. **Configure system limits properly:** Ensure OS limits match workload
+4. **Use async inserts:** Reduce thread consumption for insert operations
+5. **Implement rate limiting:** Control concurrent query load
+6. **Test under load:** Verify thread pool sizing for peak loads
+7. **Keep ClickHouse updated:** Concurrency control improvements in newer versions
+
+## Known issues and fixes {#known-issues}
+
+**Issue: Concurrency control broken before October 2024**
+- **Affected:** Versions before ~24.10
+- **Symptom:** [`concurrent_threads_soft_limit_ratio_to_cores`](/operations/server-configuration-parameters/settings#concurrent_threads_soft_limit_ratio_to_cores) not working
+- **Fix:** Merged in October 2024, available in 24.10+
+- **Impact:** Thread pool could be exhausted more easily
+
+**Issue: High insert threads with concurrent inserts**
+- **Symptom:** Many inserts with [`max_insert_threads`](/operations/settings/settings#max_insert_threads) exhausting pool
+- **Cause:** Each insert requesting many threads simultaneously
+- **Solution:** Reduce [`max_insert_threads`](/operations/settings/settings#max_insert_threads) or use async inserts
+
+**Issue: Query pipeline executor threads**
+- **Symptom:** `QueryPipelineExecutorThreadsActive` reaching pool limit
+- **Context:** Modern query execution uses pipeline executor threads
+- **Solution:** Proper concurrency control (fixed in 24.10+)
+
+## Diagnosing thread pool exhaustion {#diagnosing}
+
+```sql
+-- Snapshot of thread usage at error time
+WITH error_times AS (
+ SELECT DISTINCT toStartOfMinute(event_time) AS error_minute
+ FROM system.query_log
+ WHERE exception_code = 439
+ AND event_time >= now() - INTERVAL 6 HOUR
+)
+SELECT
+ m.event_time,
+ m.CurrentMetric_GlobalThread AS total_threads,
+ m.CurrentMetric_GlobalThreadActive AS active_threads,
+ m.CurrentMetric_Query AS concurrent_queries,
+ m.CurrentMetric_QueryPipelineExecutorThreads AS executor_threads
+FROM system.metric_log m
+INNER JOIN error_times e ON toStartOfMinute(m.event_time) = e.error_minute
+ORDER BY m.event_time;
+
+-- What was running when the error occurred
+-- (replace 'time_of_error' with an actual timestamp; use system.query_log if the queries have already finished)
+SELECT
+ user,
+ count() AS query_count,
+ sum(ProfileEvents['QueryPipelineExecutorThreads']) AS total_threads_requested
+FROM system.processes
+WHERE query_start_time >= 'time_of_error' - INTERVAL 1 MINUTE
+ AND query_start_time <= 'time_of_error' + INTERVAL 1 MINUTE
+GROUP BY user
+ORDER BY total_threads_requested DESC;
+```
+
+## Recommended thread settings {#recommended-settings}
+
+**For high-concurrency workloads:**
+
+```sql
+-- Per-query thread limits
+SET max_threads = 4; -- Instead of default (CPU cores)
+SET max_insert_threads = 4;
+
+-- Enable concurrency control (server config, 24.10+):
+-- <concurrent_threads_soft_limit_ratio_to_cores>2</concurrent_threads_soft_limit_ratio_to_cores>
+```
+
+**For analytical workloads:**
+
+```sql
+-- Can use more threads per query
+SET max_threads = 16;
+
+-- But limit concurrent queries
+SET max_concurrent_queries_for_user = 5;
+```
+
+**For mixed workloads:**
+
+```sql
+-- Balance between parallelism and concurrency
+SET max_threads = 8;
+SET max_insert_threads = 4;
+SET max_concurrent_queries_for_user = 20;
+```
+
+## When to increase `max_thread_pool_size` {#when-to-increase}
+
+Consider increasing if:
+- Consistently hitting thread pool limit
+- High concurrency is expected workload pattern
+- System has sufficient resources (CPU, memory)
+- Errors correlate with legitimate traffic spikes
+
+**Don't increase if:**
+- System already at resource limits
+- Better to reduce per-query thread usage
+- Horizontal scaling is an option
+- Queries can be optimized to use fewer threads
+
+## Thread pool sizing guidelines {#sizing-guidelines}
+
+```text
+Recommended max_thread_pool_size calculation:
+= (concurrent_queries × max_threads_per_query) × 1.5 safety margin
+
+Example:
+- Expected concurrent queries: 50
+- Average max_threads: 8
+- Calculation: 50 × 8 × 1.5 = 600 threads
+
+But also consider:
+- System CPU cores (more threads than cores causes context switching)
+- Available memory (each thread has stack, typically 8-10 MB)
+- Background operations (merges, mutations need threads too)
+```
+
+## Temporary workarounds {#temporary-workarounds}
+
+While waiting for long-term fixes:
+
+```sql
+-- Reduce thread usage across all queries
+ALTER SETTINGS PROFILE default SETTINGS max_threads = 4;
+
+-- Prioritize critical queries
+SELECT * FROM important_table
+SETTINGS priority = 1; -- Higher priority
+
+-- For non-critical queries
+SELECT * FROM less_important_table
+SETTINGS priority = 10, -- Lower priority
+ max_threads = 2; -- Fewer threads
+```
+
+## For ClickHouse Cloud users {#clickhouse-cloud}
+
+**Limitations:**
+- Cannot directly configure [`max_thread_pool_size`](/operations/server-configuration-parameters/settings#max_thread_pool_size)
+- Thread pool sized by instance tier
+- Need to upgrade tier if consistently hitting limits
+
+**Recommendations:**
+- Set appropriate [`max_threads`](/operations/settings/settings#max_threads) and [`max_insert_threads`](/operations/settings/settings#max_insert_threads)
+- Monitor thread usage metrics
+- Scale up tier if thread exhaustion is frequent
+- Implement retry logic for transient errors
+- Consider horizontal scaling (more replicas)
+
+**Escalation:**
+- If errors persist after optimization
+- If thread pool appears undersized for tier
+- Contact support with thread usage metrics
+
+If you're experiencing this error:
+1. Check if this is a transient spike (retry may succeed)
+2. Review current thread pool usage in `system.metrics`
+3. Check for traffic spike or abnormal query patterns
+4. Verify system thread limits are adequate
+5. Reduce `max_threads` and `max_insert_threads` if set too high
+6. Monitor for queries using excessive threads
+7. For persistent issues, increase `max_thread_pool_size` (self-managed) or scale up (Cloud)
+8. Ensure concurrency control is working (upgrade to 24.10+ if needed)
+9. Implement client-side retry with exponential backoff
+
+**Related documentation:**
+- [Server settings](/operations/server-configuration-parameters/settings)
+- [Query settings](/operations/settings/settings)
+- [Thread pool configuration](/operations/server-configuration-parameters/settings#max_thread_pool_size)
diff --git a/docs/troubleshooting/error_codes/455_SUSPICIOUS_TYPE_FOR_LOW_CARDINALITY.md b/docs/troubleshooting/error_codes/455_SUSPICIOUS_TYPE_FOR_LOW_CARDINALITY.md
new file mode 100644
index 00000000000..3092b31294a
--- /dev/null
+++ b/docs/troubleshooting/error_codes/455_SUSPICIOUS_TYPE_FOR_LOW_CARDINALITY.md
@@ -0,0 +1,135 @@
+---
+slug: /troubleshooting/error-codes/455_SUSPICIOUS_TYPE_FOR_LOW_CARDINALITY
+sidebar_label: '455 SUSPICIOUS_TYPE_FOR_LOW_CARDINALITY'
+doc_type: 'reference'
+keywords: ['error codes', 'SUSPICIOUS_TYPE_FOR_LOW_CARDINALITY', '455']
+title: '455 SUSPICIOUS_TYPE_FOR_LOW_CARDINALITY'
+description: 'ClickHouse error code - 455 SUSPICIOUS_TYPE_FOR_LOW_CARDINALITY'
+---
+
+# Error 455: SUSPICIOUS_TYPE_FOR_LOW_CARDINALITY
+
+:::tip
+This error occurs when you're trying to create or use a `LowCardinality` column with a data type that typically performs worse with `LowCardinality` wrapper than without it.
+This is a protective error that prevents performance degradation due to inappropriate use of `LowCardinality` optimization.
+:::
+
+## What this error means {#what-this-error-means}
+
+ClickHouse's `LowCardinality` optimization is designed for columns with relatively few distinct values (typically under 10,000).
+When you wrap certain data types like `Date`, `DateTime`, `UUID`, `Int128`, `UInt128`, `Int256`, or `UInt256` with `LowCardinality`, it often creates additional overhead without providing compression benefits, leading to worse performance.
+
+## Potential causes {#potential-causes}
+
+1. **Using LowCardinality with unsuitable data types** - Wrapping types like `Date`, `DateTime`, or large integers with `LowCardinality` when these types already have efficient storage
+2. **Hive partition columns auto-detection** - When reading Hive-partitioned data (e.g., paths like `hp=2025-09-24/file.parquet`), ClickHouse automatically infers partition columns as `LowCardinality(Date)`
+3. **Automatic schema inference** - Schema inference from external formats may incorrectly suggest `LowCardinality` for date or numeric columns
+4. **High cardinality data** - Using `LowCardinality` on columns with many distinct values (>10,000 unique values)
+
+## When you'll see it {#when-youll-see-it}
+
+- **Table creation**: `CREATE TABLE` statements defining columns like `LowCardinality(Date)` or `LowCardinality(UUID)`
+- **ALTER TABLE**: Modifying statistics or structure involving suspicious `LowCardinality` types
+- **Reading external data**: Loading Parquet/ORC files with Hive partitioning where dates are inferred as partition columns
+- **INSERT operations**: Inserting data that triggers automatic type inference with `LowCardinality` wrapper
+
+### Example scenarios {#example-scenarios}
+
+```sql
+-- Direct creation (will fail)
+CREATE TABLE test (date_col LowCardinality(Date)) ENGINE = MergeTree ORDER BY date_col;
+-- Error: Creating columns of type LowCardinality(Date) is prohibited
+
+-- Hive partitioned data (will fail on default settings)
+SELECT * FROM url('s3://bucket/hp=2025-09-24/data.parquet', Parquet);
+-- Error: LowCardinality(Date) prohibited due to 'hp' partition column
+```
+
+## Quick fixes {#quick-fixes}
+
+### 1. Enable the setting (if you really need it) {#enable-setting-if-needed}
+
+```sql
+SET allow_suspicious_low_cardinality_types = 1;
+```
+
+<details>
+<summary>When would I really need this setting?</summary>
+
+Based on actual customer cases and internal discussions, here are the **legitimate scenarios**:
+
+### 1. Low-cardinality UUIDs (Most common legitimate use) {#low-cardinality-uuids}
+
+When you have UUID columns that represent categorical data with limited distinct values:
+
+- **Tenant IDs**: ~1,500 repeating UUIDs across millions of rows (real case from support escalation #3470)
+- **Organization IDs**: UUIDs that appear frequently but have \<10,000 distinct values
+- **Service IDs**: Fixed set of service identifiers in UUID format
+- **API Keys**: Limited set of API keys that appear repeatedly
+
+**Example:** A multi-tenant system where you have 1,500 tenants (UUID identifiers) but millions of events per tenant. Using `LowCardinality(UUID)` can provide significant compression benefits here.
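+
+A sketch of such a table; the setting has to be enabled first because `LowCardinality(UUID)` is otherwise rejected (names are illustrative):
+
+```sql
+SET allow_suspicious_low_cardinality_types = 1;
+
+CREATE TABLE tenant_events
+(
+    tenant_id LowCardinality(UUID),  -- ~1,500 distinct tenants across millions of rows
+    event_time DateTime,
+    payload String
+)
+ENGINE = MergeTree
+ORDER BY (tenant_id, event_time);
+```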
+
+### 2. Limited date ranges (Debatable but sometimes valid) {#limited-date-ranges}
+
+When you have date columns with very few distinct values:
+
+- **Billing periods**: Only 12 distinct dates (monthly billing cycles)
+- **Release dates**: Small set of product release dates
+- **Reporting periods**: Quarterly or annual reporting with limited distinct dates
+
+### 3. Hive-partitioned data (Workaround scenario) {#hive-partitioned-data}
+
+When reading external data with Hive partitioning where ClickHouse auto-infers partition columns as `LowCardinality(Date)`:
+
+```sql
+-- Hive partitioned S3 data like: s3://bucket/hp=2025-09-24/data.parquet
+SELECT * FROM s3('s3://bucket/hp=2025-09-24/*.parquet', Parquet)
+SETTINGS allow_suspicious_low_cardinality_types = 1;
+```
+
+This is more of a **workaround** than a best practice.
+
+#### 4. Integration testing (Development scenario) {#integration-testing}
+
+For automated testing where existing schemas use suspicious types and you need compatibility:
+
+- Testing data migration from other systems
+- Validating schema compatibility
+- CI/CD pipelines with fixed test schemas
+
+
+
+### 2. Remove LowCardinality wrapper {#remove-lowcardinality-wrapper}
+
+```sql
+-- Instead of:
+CREATE TABLE test (date_col LowCardinality(Date)) ENGINE = MergeTree ORDER BY date_col;
+
+-- Use:
+CREATE TABLE test (date_col Date) ENGINE = MergeTree ORDER BY date_col;
+```
+
+### 3. Disable Hive partitioning (for external data) {#disable-hive-partitioning}
+
+```sql
+-- If you don't need Hive partition columns
+SELECT * FROM s3('s3://bucket/hp=2025-09-24/data.parquet', Parquet)
+SETTINGS use_hive_partitioning = 0;
+```
+
+### 4. Use appropriate types {#use-appropriate-types}
+
+- For dates: use `Date` or `DateTime` directly
+- For UUIDs: use `UUID` directly
+- For high-cardinality strings: use `String` or `FixedString`
+- Only use `LowCardinality(String)` for columns with fewer than 10,000 distinct values, as in the sketch below
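+
+As an illustration of these guidelines, here is a sketch of a table that uses each type where it fits (the table and column names are hypothetical):
+
+```sql
+CREATE TABLE page_views
+(
+    view_date Date,                   -- plain Date, no wrapper needed
+    user_id   UUID,                   -- plain UUID, no wrapper needed
+    url       String,                 -- high-cardinality string
+    country   LowCardinality(String)  -- few distinct values: a good fit
+)
+ENGINE = MergeTree
+ORDER BY (view_date, country);
+```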
+
+## Understanding the root cause {#understanding-root-cause}
+
+`LowCardinality` works by creating a dictionary of unique values and storing references to this dictionary. This is efficient when:
+- You have relatively few distinct values (\<10,000)
+- The values are strings or other variable-length types
+
+It's **inefficient** when:
+- The underlying type is already compact, such as `Date` (2 bytes), `DateTime` (4 bytes), or `UUID` (16 bytes), so dictionary encoding adds indirection without saving space
+- The data has high cardinality, so the dictionary grows nearly as large as the original data
+- The overhead of dictionary lookups outweighs any storage savings (the size comparison below shows how to check)
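+
+A practical way to see whether the wrapper is paying off is to compare the on-disk size of the same column stored both ways; the query below is a sketch (the table and column names are hypothetical):
+
+```sql
+-- Compare compressed size of the same column in two table variants
+SELECT
+    table,
+    name,
+    type,
+    formatReadableSize(data_compressed_bytes) AS compressed
+FROM system.columns
+WHERE table IN ('events_plain', 'events_lowcard')
+  AND name = 'tenant_id';
+```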
diff --git a/docs/troubleshooting/error_codes/735_QUERY_WAS_CANCELLED_BY_CLIENT.md b/docs/troubleshooting/error_codes/735_QUERY_WAS_CANCELLED_BY_CLIENT.md
new file mode 100644
index 00000000000..5671e151322
--- /dev/null
+++ b/docs/troubleshooting/error_codes/735_QUERY_WAS_CANCELLED_BY_CLIENT.md
@@ -0,0 +1,213 @@
+---
+slug: /troubleshooting/error-codes/735_QUERY_WAS_CANCELLED_BY_CLIENT
+sidebar_label: '735 QUERY_WAS_CANCELLED_BY_CLIENT'
+doc_type: 'reference'
+keywords: ['error codes', 'QUERY_WAS_CANCELLED_BY_CLIENT', '735']
+title: '735 QUERY_WAS_CANCELLED_BY_CLIENT'
+description: 'ClickHouse error code - 735 QUERY_WAS_CANCELLED_BY_CLIENT'
+---
+
+# Error 735: QUERY_WAS_CANCELLED_BY_CLIENT
+
+:::tip
+This error occurs when your query was stopped because the client application that sent it cancelled the request.
+This is a **client-side cancellation**, not a ClickHouse server issue - it means your application, driver, or tool explicitly told ClickHouse to stop executing the query.
+:::
+
+## What this error means {#what-it-means}
+
+When a client connects to ClickHouse and sends a query, it can later send a "Cancel" packet to stop that query mid-execution.
+ClickHouse receives this cancellation signal and immediately stops processing the query, throwing error 735.
+This is the expected behavior when:
+
+- A user clicks "Stop" in a query tool
+- An application has a timeout and cancels the query
+- A connection is closed or lost
+- A client explicitly calls a cancel/interrupt method
+
+## Potential causes {#potential-causes}
+
+1. **Client timeouts** - Your application or driver has a query timeout shorter than the query execution time
+2. **User cancellation** - A user manually stopped the query in a UI tool (SQL Console, DBeaver, etc.)
+3. **Connection issues** - Network problems causing the client to disconnect and cancel queries
+4. **Application logic** - Your code explicitly cancels queries based on business logic
+5. **Load balancer/proxy timeouts** - Intermediate infrastructure timing out before the query completes
+6. **Resource exhaustion** - Client running out of memory or resources while processing results
+
+## When you'll see it {#when-youll-see-it}
+
+### Common scenarios from production {#common-scenarios}
+
+```sql
+-- Long-running query cancelled by user
+SELECT * FROM large_table WHERE timestamp > now() - INTERVAL 1 YEAR;
+-- User clicks "Stop" button after 30 seconds
+```
+
+The cancelled query fails with an error like:
+
+```text
+Code: 735. DB::Exception: Received 'Cancel' packet from the client, canceling the query.
+(QUERY_WAS_CANCELLED_BY_CLIENT) (version 24.12.1.18350)
+```
+
+### Real-world examples {#real-world-examples}
+
+**Example 1: Grafana dashboard timeout**
+
+```text
+event_time: 2025-05-09 19:52:51
+initial_user: grafana_ro
+exception_code: 735
+exception: Code: 735. DB::Exception: Received 'Cancel' packet from the client,
+canceling the query. (QUERY_WAS_CANCELLED_BY_CLIENT)
+```
+
+**Example 2: Application driver timeout**
+
+```text
+error: write: write tcp 172.30.103.188:51408->18.225.29.123:9440: i/o timeout
+err: driver: bad connection
+```
+
+## Quick fixes {#quick-fixes}
+
+### 1. **Increase client timeout settings** {#increase-client-timeout}
+
+**Go driver (clickhouse-go):**
+
+```go
+conn := clickhouse.OpenDB(&clickhouse.Options{
+ Addr: []string{"host:9000"},
+ Settings: clickhouse.Settings{
+ "max_execution_time": 300, // 5 minutes server-side
+ },
+ DialTimeout: 30 * time.Second,
+ ReadTimeout: 5 * time.Minute, // Increase this
+ WriteTimeout: 5 * time.Minute,
+})
+```
+
+**Python driver (clickhouse-driver):**
+
+```python
+from clickhouse_driver import Client
+
+client = Client(
+ host='hostname',
+ send_receive_timeout=300, # 5 minutes
+ sync_request_timeout=300
+)
+```
+
+**JDBC driver:**
+
+```java
+Properties properties = new Properties();
+properties.setProperty("socket_timeout", "300000"); // 5 minutes in milliseconds
+Connection conn = DriverManager.getConnection(url, properties);
+```
+
+### 2. **Check for query timeout settings** {#check-for-query-timeout}
+
+```sql
+-- Check your current timeout settings
+SELECT
+ name,
+ value
+FROM system.settings
+WHERE name LIKE '%timeout%' OR name LIKE '%execution_time%';
+
+-- Set longer timeout for your session
+SET max_execution_time = 300; -- 5 minutes
+
+-- Or in your query
+SELECT * FROM large_table
+SETTINGS max_execution_time = 600;
+```
+
+### 3. **Optimize slow queries** {#optimize-slow-queries}
+
+If queries are timing out because they're too slow:
+
+```sql
+-- Add LIMIT for testing
+SELECT * FROM large_table LIMIT 1000;
+
+-- Use EXPLAIN to understand query plan
+EXPLAIN SELECT * FROM large_table WHERE condition;
+
+-- Check query progress
+SELECT
+ query_id,
+ elapsed,
+ read_rows,
+ total_rows_approx
+FROM system.processes
+WHERE query NOT LIKE '%system.processes%';
+```
+
+### 4. **Handle cancellations gracefully in your application** {#handle-cancellations}
+
+```go
+// Go example with context
+ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
+defer cancel()
+
+rows, err := conn.QueryContext(ctx, "SELECT * FROM large_table")
+if err != nil {
+ if errors.Is(err, context.DeadlineExceeded) {
+ // Handle timeout
+ log.Println("Query timed out, consider optimization")
+ }
+}
+```
+
+### 5. **Check for infrastructure timeouts** {#check-infrastructure-timeouts}
+
+- **Load balancers**: AWS ALB has 60s default timeout, increase to 300s+
+- **Proxies**: Check HAProxy, Nginx timeouts
+- **Cloud providers**: Check cloud-specific connection limits and compare them with ClickHouse's own network timeouts (see the query below)
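+
+When an intermediate proxy or load balancer is suspected, it can help to compare its timeout against the network timeouts configured in ClickHouse itself (a quick sketch):
+
+```sql
+-- ClickHouse-side network timeouts, in seconds
+SELECT name, value
+FROM system.settings
+WHERE name IN ('send_timeout', 'receive_timeout', 'http_send_timeout', 'http_receive_timeout');
+```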
+
+## Understanding the root cause {#understanding-the-root-cause}
+
+This error is **informational** from ClickHouse's perspective: it tells you that the server successfully cancelled the query, as requested by the client. The actual problem to investigate is:
+
+1. **Why did the client cancel?** (timeout, user action, connection loss)
+2. **Is the query too slow?** (needs optimization)
+3. **Are timeout settings too aggressive?** (need tuning)
+
+## Related errors {#related-errors}
+
+- **Error 159: `TIMEOUT_EXCEEDED`** - Server-side timeout (set by `max_execution_time`)
+- **Error 210: `NETWORK_ERROR`** - Network connection problems
+- **Error 394: `QUERY_WAS_CANCELLED`** - Server-side cancellation (vs client-side 735); the query below shows how often each occurs
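+
+To see how often each of these occurs on a server, the counters in `system.errors` can be compared (a quick sketch):
+
+```sql
+-- Compare client-side cancellations (735) with server-side ones (394) and timeouts (159)
+SELECT
+    name,
+    code,
+    value AS occurrences
+FROM system.errors
+WHERE code IN (159, 394, 735);
+```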
+
+## Troubleshooting steps {#troubleshooting-steps}
+
+1. **Check query logs** to see how long queries ran before cancellation:
+
+ ```sql
+ SELECT
+ query_id,
+ query_duration_ms,
+ exception_code,
+ exception,
+ query
+ FROM system.query_log
+ WHERE exception_code = 735
+ ORDER BY event_time DESC
+ LIMIT 10;
+ ```
+
+2. **Monitor client connection metrics**:
+
+ ```sql
+ SELECT
+ user,
+ client_hostname,
+ client_name,
+ elapsed,
+ read_rows,
+ memory_usage
+ FROM system.processes;
+ ```
+
+3. **Check for patterns**: Are cancellations happening at a specific time threshold? This indicates a timeout setting somewhere in your stack.
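+
+   A rough way to spot such a threshold is to bucket cancelled queries by duration (the one-second bucket size here is arbitrary):
+
+   ```sql
+   SELECT
+       round(query_duration_ms / 1000) AS duration_seconds,
+       count() AS cancelled_queries
+   FROM system.query_log
+   WHERE exception_code = 735
+   GROUP BY duration_seconds
+   ORDER BY cancelled_queries DESC
+   LIMIT 10;
+   ```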
diff --git a/docs/troubleshooting/error_codes/_category_.json b/docs/troubleshooting/error_codes/_category_.json
new file mode 100644
index 00000000000..f1f52da9f96
--- /dev/null
+++ b/docs/troubleshooting/error_codes/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "Error Codes",
+ "className": "error-codes-category"
+}
\ No newline at end of file
diff --git a/docs/troubleshooting/error_codes/index.md b/docs/troubleshooting/error_codes/index.md
new file mode 100644
index 00000000000..4f5e1768933
--- /dev/null
+++ b/docs/troubleshooting/error_codes/index.md
@@ -0,0 +1,637 @@
+---
+slug: /troubleshooting/error-codes
+sidebar_label: 'Error codes'
+doc_type: 'reference'
+keywords: ['error codes']
+title: 'ClickHouse error code reference'
+description: 'Lists all of the error codes in ClickHouse along with their names. The most common ones are linked to troubleshooting pages.'
+---
+
+# ClickHouse error codes
+
+:::note
+Only the most commonly encountered error codes below are linked to individual pages with common causes
+and potential solutions.
+:::
+
+| Code | Name |
+|-------|-----------------------------------------------------------------------------------------|
+| 0 | OK |
+| 1 | [UNSUPPORTED_METHOD](/troubleshooting/error-codes/001_UNSUPPORTED_METHOD) |
+| 2 | UNSUPPORTED_PARAMETER |
+| 3 | [UNEXPECTED_END_OF_FILE](/troubleshooting/error-codes/003_UNEXPECTED_END_OF_FILE) |
+| 4 | EXPECTED_END_OF_FILE |
+| 6 | [CANNOT_PARSE_TEXT](/troubleshooting/error-codes/006_CANNOT_PARSE_TEXT) |
+| 7 | INCORRECT_NUMBER_OF_COLUMNS |
+| 8 | THERE_IS_NO_COLUMN |
+| 9 | SIZES_OF_COLUMNS_DOESNT_MATCH |
+| 10 | [NOT_FOUND_COLUMN_IN_BLOCK](/troubleshooting/error-codes/010_NOT_FOUND_COLUMN_IN_BLOCK) |
+| 11 | POSITION_OUT_OF_BOUND |
+| 12 | PARAMETER_OUT_OF_BOUND |
+| 13 | SIZES_OF_COLUMNS_IN_TUPLE_DOESNT_MATCH |
+| 15 | [DUPLICATE_COLUMN](/troubleshooting/error-codes/013_DUPLICATE_COLUMN) |
+| 16 | NO_SUCH_COLUMN_IN_TABLE |
+| 19 | SIZE_OF_FIXED_STRING_DOESNT_MATCH |
+| 20 | NUMBER_OF_COLUMNS_DOESNT_MATCH |
+| 23 | CANNOT_READ_FROM_ISTREAM |
+| 24 | CANNOT_WRITE_TO_OSTREAM |
+| 25 | CANNOT_PARSE_ESCAPE_SEQUENCE |
+| 26 | CANNOT_PARSE_QUOTED_STRING |
+| 27 | CANNOT_PARSE_INPUT_ASSERTION_FAILED |
+| 28 | CANNOT_PRINT_FLOAT_OR_DOUBLE_NUMBER |
+| 32 | ATTEMPT_TO_READ_AFTER_EOF |
+| 33 | CANNOT_READ_ALL_DATA |
+| 34 | TOO_MANY_ARGUMENTS_FOR_FUNCTION |
+| 35 | TOO_FEW_ARGUMENTS_FOR_FUNCTION |
+| 36 | [BAD_ARGUMENTS](/troubleshooting/error-codes/036_BAD_ARGUMENTS) |
+| 37 | UNKNOWN_ELEMENT_IN_AST |
+| 38 | [CANNOT_PARSE_DATE](/troubleshooting/error-codes/038_CANNOT_PARSE_DATE) |
+| 39 | TOO_LARGE_SIZE_COMPRESSED |
+| 40 | CHECKSUM_DOESNT_MATCH |
+| 41 | [CANNOT_PARSE_DATETIME](/troubleshooting/error-codes/041_CANNOT_PARSE_DATETIME) |
+| 42 | [NUMBER_OF_ARGUMENTS_DOESNT_MATCH](/troubleshooting/error-codes/042_NUMBER_OF_ARGUMENTS_DOESNT_MATCH) |
+| 43 | [ILLEGAL_TYPE_OF_ARGUMENT](/troubleshooting/error-codes/043_ILLEGAL_TYPE_OF_ARGUMENT) |
+| 44 | [ILLEGAL_COLUMN](/troubleshooting/error-codes/044_ILLEGAL_COLUMN) |
+| 46 | [UNKNOWN_FUNCTION](/troubleshooting/error-codes/046_UNKNOWN_FUNCTION) |
+| 47 | [UNKNOWN_IDENTIFIER](/troubleshooting/error-codes/047_UNKNOWN_IDENTIFIER) |
+| 48 | [NOT_IMPLEMENTED](/troubleshooting/error-codes/048_NOT_IMPLEMENTED) |
+| 49 | [LOGICAL_ERROR](/troubleshooting/error-codes/049_LOGICAL_ERROR) |
+| 50 | [UNKNOWN_TYPE](/troubleshooting/error-codes/050_UNKNOWN_TYPE) |
+| 51 | EMPTY_LIST_OF_COLUMNS_QUERIED |
+| 52 | COLUMN_QUERIED_MORE_THAN_ONCE |
+| 53 | [TYPE_MISMATCH](/troubleshooting/error-codes/053_TYPE_MISMATCH) |
+| 55 | STORAGE_REQUIRES_PARAMETER |
+| 56 | UNKNOWN_STORAGE |
+| 57 | TABLE_ALREADY_EXISTS |
+| 58 | TABLE_METADATA_ALREADY_EXISTS |
+| 59 | ILLEGAL_TYPE_OF_COLUMN_FOR_FILTER |
+| 60 | [UNKNOWN_TABLE](/troubleshooting/error-codes/060_UNKNOWN_TABLE) |
+| 62 | [SYNTAX_ERROR](/troubleshooting/error-codes/062_SYNTAX_ERROR) |
+| 63 | UNKNOWN_AGGREGATE_FUNCTION |
+| 68 | CANNOT_GET_SIZE_OF_FIELD |
+| 69 | ARGUMENT_OUT_OF_BOUND |
+| 70 | [CANNOT_CONVERT_TYPE](/troubleshooting/error-codes/070_CANNOT_CONVERT_TYPE) |
+| 71 | CANNOT_WRITE_AFTER_END_OF_BUFFER |
+| 72 | CANNOT_PARSE_NUMBER |
+| 73 | UNKNOWN_FORMAT |
+| 74 | CANNOT_READ_FROM_FILE_DESCRIPTOR |
+| 75 | CANNOT_WRITE_TO_FILE_DESCRIPTOR |
+| 76 | CANNOT_OPEN_FILE |
+| 77 | CANNOT_CLOSE_FILE |
+| 78 | UNKNOWN_TYPE_OF_QUERY |
+| 79 | INCORRECT_FILE_NAME |
+| 80 | INCORRECT_QUERY |
+| 81 | [UNKNOWN_DATABASE](/troubleshooting/error-codes/081_UNKNOWN_DATABASE) |
+| 82 | DATABASE_ALREADY_EXISTS |
+| 83 | DIRECTORY_DOESNT_EXIST |
+| 84 | DIRECTORY_ALREADY_EXISTS |
+| 85 | FORMAT_IS_NOT_SUITABLE_FOR_INPUT |
+| 86 | RECEIVED_ERROR_FROM_REMOTE_IO_SERVER |
+| 87 | CANNOT_SEEK_THROUGH_FILE |
+| 88 | CANNOT_TRUNCATE_FILE |
+| 89 | UNKNOWN_COMPRESSION_METHOD |
+| 90 | EMPTY_LIST_OF_COLUMNS_PASSED |
+| 91 | SIZES_OF_MARKS_FILES_ARE_INCONSISTENT |
+| 92 | EMPTY_DATA_PASSED |
+| 93 | UNKNOWN_AGGREGATED_DATA_VARIANT |
+| 94 | CANNOT_MERGE_DIFFERENT_AGGREGATED_DATA_VARIANTS |
+| 95 | CANNOT_READ_FROM_SOCKET |
+| 96 | CANNOT_WRITE_TO_SOCKET |
+| 99 | UNKNOWN_PACKET_FROM_CLIENT |
+| 100 | UNKNOWN_PACKET_FROM_SERVER |
+| 101 | UNEXPECTED_PACKET_FROM_CLIENT |
+| 102 | UNEXPECTED_PACKET_FROM_SERVER |
+| 104 | TOO_SMALL_BUFFER_SIZE |
+| 107 | [FILE_DOESNT_EXIST](/troubleshooting/error-codes/107_FILE_DOESNT_EXIST) |
+| 108 | NO_DATA_TO_INSERT |
+| 109 | CANNOT_BLOCK_SIGNAL |
+| 110 | CANNOT_UNBLOCK_SIGNAL |
+| 111 | CANNOT_MANIPULATE_SIGSET |
+| 112 | CANNOT_WAIT_FOR_SIGNAL |
+| 113 | THERE_IS_NO_SESSION |
+| 114 | CANNOT_CLOCK_GETTIME |
+| 115 | UNKNOWN_SETTING |
+| 116 | THERE_IS_NO_DEFAULT_VALUE |
+| 117 | INCORRECT_DATA |
+| 119 | ENGINE_REQUIRED |
+| 120 | CANNOT_INSERT_VALUE_OF_DIFFERENT_SIZE_INTO_TUPLE |
+| 121 | [UNSUPPORTED_JOIN_KEYS](/troubleshooting/error-codes/121_UNSUPPORTED_JOIN_KEYS) |
+| 122 | INCOMPATIBLE_COLUMNS |
+| 123 | UNKNOWN_TYPE_OF_AST_NODE |
+| 124 | INCORRECT_ELEMENT_OF_SET |
+| 125 | [INCORRECT_RESULT_OF_SCALAR_SUBQUERY](/troubleshooting/error-codes/125_INCORRECT_RESULT_OF_SCALAR_SUBQUERY) |
+| 127 | ILLEGAL_INDEX |
+| 128 | TOO_LARGE_ARRAY_SIZE |
+| 129 | FUNCTION_IS_SPECIAL |
+| 130 | [CANNOT_READ_ARRAY_FROM_TEXT](/troubleshooting/error-codes/130_CANNOT_READ_ARRAY_FROM_TEXT) |
+| 131 | TOO_LARGE_STRING_SIZE |
+| 133 | AGGREGATE_FUNCTION_DOESNT_ALLOW_PARAMETERS |
+| 134 | PARAMETERS_TO_AGGREGATE_FUNCTIONS_MUST_BE_LITERALS |
+| 135 | [ZERO_ARRAY_OR_TUPLE_INDEX](/troubleshooting/error-codes/135_ZERO_ARRAY_OR_TUPLE_INDEX) |
+| 137 | UNKNOWN_ELEMENT_IN_CONFIG |
+| 138 | EXCESSIVE_ELEMENT_IN_CONFIG |
+| 139 | NO_ELEMENTS_IN_CONFIG |
+| 141 | SAMPLING_NOT_SUPPORTED |
+| 142 | NOT_FOUND_NODE |
+| 145 | UNKNOWN_OVERFLOW_MODE |
+| 152 | UNKNOWN_DIRECTION_OF_SORTING |
+| 153 | ILLEGAL_DIVISION |
+| 156 | DICTIONARIES_WAS_NOT_LOADED |
+| 158 | TOO_MANY_ROWS |
+| 159 | [TIMEOUT_EXCEEDED](/troubleshooting/error-codes/159_TIMEOUT_EXCEEDED) |
+| 160 | TOO_SLOW |
+| 161 | TOO_MANY_COLUMNS |
+| 162 | TOO_DEEP_SUBQUERIES |
+| 164 | READONLY |
+| 165 | TOO_MANY_TEMPORARY_COLUMNS |
+| 166 | TOO_MANY_TEMPORARY_NON_CONST_COLUMNS |
+| 167 | TOO_DEEP_AST |
+| 168 | TOO_BIG_AST |
+| 169 | BAD_TYPE_OF_FIELD |
+| 170 | BAD_GET |
+| 172 | CANNOT_CREATE_DIRECTORY |
+| 173 | CANNOT_ALLOCATE_MEMORY |
+| 174 | CYCLIC_ALIASES |
+| 179 | [MULTIPLE_EXPRESSIONS_FOR_ALIAS](/troubleshooting/error-codes/179_MULTIPLE_EXPRESSIONS_FOR_ALIAS) |
+| 180 | THERE_IS_NO_PROFILE |
+| 181 | [ILLEGAL_FINAL](/troubleshooting/error-codes/181_ILLEGAL_FINAL) |
+| 182 | ILLEGAL_PREWHERE |
+| 183 | UNEXPECTED_EXPRESSION |
+| 184 | [ILLEGAL_AGGREGATION](/troubleshooting/error-codes/184_ILLEGAL_AGGREGATION) |
+| 186 | UNSUPPORTED_COLLATION_LOCALE |
+| 187 | COLLATION_COMPARISON_FAILED |
+| 190 | [SIZES_OF_ARRAYS_DONT_MATCH](/troubleshooting/error-codes/190_SIZES_OF_ARRAYS_DONT_MATCH) |
+| 191 | SET_SIZE_LIMIT_EXCEEDED |
+| 192 | UNKNOWN_USER |
+| 193 | WRONG_PASSWORD |
+| 194 | REQUIRED_PASSWORD |
+| 195 | IP_ADDRESS_NOT_ALLOWED |
+| 196 | UNKNOWN_ADDRESS_PATTERN_TYPE |
+| 198 | [DNS_ERROR](/troubleshooting/error-codes/198_DNS_ERROR) |
+| 199 | UNKNOWN_QUOTA |
+| 201 | QUOTA_EXCEEDED |
+| 202 | [TOO_MANY_SIMULTANEOUS_QUERIES](/troubleshooting/error-codes/202_TOO_MANY_SIMULTANEOUS_QUERIES) |
+| 203 | NO_FREE_CONNECTION |
+| 204 | CANNOT_FSYNC |
+| 206 | ALIAS_REQUIRED |
+| 207 | AMBIGUOUS_IDENTIFIER |
+| 208 | EMPTY_NESTED_TABLE |
+| 209 | [SOCKET_TIMEOUT](/troubleshooting/error-codes/209_SOCKET_TIMEOUT) |
+| 210 | [NETWORK_ERROR](/troubleshooting/error-codes/210_NETWORK_ERROR) |
+| 211 | EMPTY_QUERY |
+| 212 | UNKNOWN_LOAD_BALANCING |
+| 213 | UNKNOWN_TOTALS_MODE |
+| 214 | CANNOT_STATVFS |
+| 215 | [NOT_AN_AGGREGATE](/troubleshooting/error-codes/215_NOT_AN_AGGREGATE) |
+| 216 | [QUERY_WITH_SAME_ID_IS_ALREADY_RUNNING](/troubleshooting/error-codes/216_QUERY_WITH_SAME_ID_IS_ALREADY_RUNNING) |
+| 217 | CLIENT_HAS_CONNECTED_TO_WRONG_PORT |
+| 218 | TABLE_IS_DROPPED |
+| 219 | DATABASE_NOT_EMPTY |
+| 220 | DUPLICATE_INTERSERVER_IO_ENDPOINT |
+| 221 | NO_SUCH_INTERSERVER_IO_ENDPOINT |
+| 223 | UNEXPECTED_AST_STRUCTURE |
+| 224 | REPLICA_IS_ALREADY_ACTIVE |
+| 225 | NO_ZOOKEEPER |
+| 226 | NO_FILE_IN_DATA_PART |
+| 227 | UNEXPECTED_FILE_IN_DATA_PART |
+| 228 | BAD_SIZE_OF_FILE_IN_DATA_PART |
+| 229 | QUERY_IS_TOO_LARGE |
+| 230 | NOT_FOUND_EXPECTED_DATA_PART |
+| 231 | TOO_MANY_UNEXPECTED_DATA_PARTS |
+| 232 | NO_SUCH_DATA_PART |
+| 233 | BAD_DATA_PART_NAME |
+| 234 | NO_REPLICA_HAS_PART |
+| 235 | DUPLICATE_DATA_PART |
+| 236 | ABORTED |
+| 237 | NO_REPLICA_NAME_GIVEN |
+| 238 | FORMAT_VERSION_TOO_OLD |
+| 239 | CANNOT_MUNMAP |
+| 240 | CANNOT_MREMAP |
+| 241 | [MEMORY_LIMIT_EXCEEDED](/troubleshooting/error-codes/241_MEMORY_LIMIT_EXCEEDED) |
+| 242 | [TABLE_IS_READ_ONLY](/troubleshooting/error-codes/242_TABLE_IS_READ_ONLY) |
+| 243 | NOT_ENOUGH_SPACE |
+| 244 | UNEXPECTED_ZOOKEEPER_ERROR |
+| 246 | CORRUPTED_DATA |
+| 248 | INVALID_PARTITION_VALUE |
+| 251 | NO_SUCH_REPLICA |
+| 252 | [TOO_MANY_PARTS](/troubleshooting/error-codes/252_TOO_MANY_PARTS) |
+| 253 | REPLICA_ALREADY_EXISTS |
+| 254 | NO_ACTIVE_REPLICAS |
+| 255 | TOO_MANY_RETRIES_TO_FETCH_PARTS |
+| 256 | PARTITION_ALREADY_EXISTS |
+| 257 | PARTITION_DOESNT_EXIST |
+| 258 | [UNION_ALL_RESULT_STRUCTURES_MISMATCH](/troubleshooting/error-codes/258_UNION_ALL_RESULT_STRUCTURES_MISMATCH) |
+| 260 | CLIENT_OUTPUT_FORMAT_SPECIFIED |
+| 261 | UNKNOWN_BLOCK_INFO_FIELD |
+| 262 | BAD_COLLATION |
+| 263 | CANNOT_COMPILE_CODE |
+| 264 | INCOMPATIBLE_TYPE_OF_JOIN |
+| 265 | NO_AVAILABLE_REPLICA |
+| 266 | MISMATCH_REPLICAS_DATA_SOURCES |
+| 269 | INFINITE_LOOP |
+| 270 | CANNOT_COMPRESS |
+| 271 | CANNOT_DECOMPRESS |
+| 272 | CANNOT_IO_SUBMIT |
+| 273 | CANNOT_IO_GETEVENTS |
+| 274 | AIO_READ_ERROR |
+| 275 | AIO_WRITE_ERROR |
+| 277 | INDEX_NOT_USED |
+| 279 | [ALL_CONNECTION_TRIES_FAILED](/troubleshooting/error-codes/279_ALL_CONNECTION_TRIES_FAILED) |
+| 280 | NO_AVAILABLE_DATA |
+| 281 | DICTIONARY_IS_EMPTY |
+| 282 | INCORRECT_INDEX |
+| 283 | UNKNOWN_DISTRIBUTED_PRODUCT_MODE |
+| 284 | WRONG_GLOBAL_SUBQUERY |
+| 285 | TOO_FEW_LIVE_REPLICAS |
+| 286 | UNSATISFIED_QUORUM_FOR_PREVIOUS_WRITE |
+| 287 | UNKNOWN_FORMAT_VERSION |
+| 288 | DISTRIBUTED_IN_JOIN_SUBQUERY_DENIED |
+| 289 | REPLICA_IS_NOT_IN_QUORUM |
+| 290 | LIMIT_EXCEEDED |
+| 291 | DATABASE_ACCESS_DENIED |
+| 293 | MONGODB_CANNOT_AUTHENTICATE |
+| 294 | CANNOT_WRITE_TO_FILE |
+| 295 | RECEIVED_EMPTY_DATA |
+| 297 | SHARD_HAS_NO_CONNECTIONS |
+| 298 | CANNOT_PIPE |
+| 299 | CANNOT_FORK |
+| 300 | CANNOT_DLSYM |
+| 301 | CANNOT_CREATE_CHILD_PROCESS |
+| 302 | CHILD_WAS_NOT_EXITED_NORMALLY |
+| 303 | CANNOT_SELECT |
+| 304 | CANNOT_WAITPID |
+| 305 | TABLE_WAS_NOT_DROPPED |
+| 306 | TOO_DEEP_RECURSION |
+| 307 | TOO_MANY_BYTES |
+| 308 | UNEXPECTED_NODE_IN_ZOOKEEPER |
+| 309 | FUNCTION_CANNOT_HAVE_PARAMETERS |
+| 318 | INVALID_CONFIG_PARAMETER |
+| 319 | UNKNOWN_STATUS_OF_INSERT |
+| 321 | VALUE_IS_OUT_OF_RANGE_OF_DATA_TYPE |
+| 336 | UNKNOWN_DATABASE_ENGINE |
+| 341 | UNFINISHED |
+| 342 | METADATA_MISMATCH |
+| 344 | SUPPORT_IS_DISABLED |
+| 345 | TABLE_DIFFERS_TOO_MUCH |
+| 346 | CANNOT_CONVERT_CHARSET |
+| 347 | CANNOT_LOAD_CONFIG |
+| 349 | [CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN](/troubleshooting/error-codes/349_CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN) |
+| 352 | [AMBIGUOUS_COLUMN_NAME](/troubleshooting/error-codes/352_AMBIGUOUS_COLUMN_NAME) |
+| 353 | INDEX_OF_POSITIONAL_ARGUMENT_IS_OUT_OF_RANGE |
+| 354 | ZLIB_INFLATE_FAILED |
+| 355 | ZLIB_DEFLATE_FAILED |
+| 358 | INTO_OUTFILE_NOT_ALLOWED |
+| 359 | TABLE_SIZE_EXCEEDS_MAX_DROP_SIZE_LIMIT |
+| 360 | CANNOT_CREATE_CHARSET_CONVERTER |
+| 361 | SEEK_POSITION_OUT_OF_BOUND |
+| 362 | CURRENT_WRITE_BUFFER_IS_EXHAUSTED |
+| 363 | CANNOT_CREATE_IO_BUFFER |
+| 364 | RECEIVED_ERROR_TOO_MANY_REQUESTS |
+| 366 | SIZES_OF_NESTED_COLUMNS_ARE_INCONSISTENT |
+| 369 | ALL_REPLICAS_ARE_STALE |
+| 370 | DATA_TYPE_CANNOT_BE_USED_IN_TABLES |
+| 371 | INCONSISTENT_CLUSTER_DEFINITION |
+| 372 | SESSION_NOT_FOUND |
+| 373 | SESSION_IS_LOCKED |
+| 374 | INVALID_SESSION_TIMEOUT |
+| 375 | CANNOT_DLOPEN |
+| 376 | CANNOT_PARSE_UUID |
+| 377 | ILLEGAL_SYNTAX_FOR_DATA_TYPE |
+| 378 | DATA_TYPE_CANNOT_HAVE_ARGUMENTS |
+| 380 | CANNOT_KILL |
+| 381 | HTTP_LENGTH_REQUIRED |
+| 382 | CANNOT_LOAD_CATBOOST_MODEL |
+| 383 | CANNOT_APPLY_CATBOOST_MODEL |
+| 384 | PART_IS_TEMPORARILY_LOCKED |
+| 385 | MULTIPLE_STREAMS_REQUIRED |
+| 386 | [NO_COMMON_TYPE](/troubleshooting/error-codes/386_NO_COMMON_TYPE) |
+| 387 | DICTIONARY_ALREADY_EXISTS |
+| 388 | CANNOT_ASSIGN_OPTIMIZE |
+| 389 | INSERT_WAS_DEDUPLICATED |
+| 390 | CANNOT_GET_CREATE_TABLE_QUERY |
+| 391 | EXTERNAL_LIBRARY_ERROR |
+| 392 | QUERY_IS_PROHIBITED |
+| 393 | THERE_IS_NO_QUERY |
+| 394 | [QUERY_WAS_CANCELLED](/troubleshooting/error-codes/394_QUERY_WAS_CANCELLED) |
+| 395 | [FUNCTION_THROW_IF_VALUE_IS_NON_ZERO](/troubleshooting/error-codes/395_FUNCTION_THROW_IF_VALUE_IS_NON_ZERO) |
+| 396 | [TOO_MANY_ROWS_OR_BYTES](/troubleshooting/error-codes/396_TOO_MANY_ROWS_OR_BYTES) |
+| 397 | QUERY_IS_NOT_SUPPORTED_IN_MATERIALIZED_VIEW |
+| 398 | UNKNOWN_MUTATION_COMMAND |
+| 399 | FORMAT_IS_NOT_SUITABLE_FOR_OUTPUT |
+| 400 | CANNOT_STAT |
+| 401 | FEATURE_IS_NOT_ENABLED_AT_BUILD_TIME |
+| 402 | CANNOT_IOSETUP |
+| 403 | [INVALID_JOIN_ON_EXPRESSION](/troubleshooting/error-codes/403_INVALID_JOIN_ON_EXPRESSION) |
+| 404 | BAD_ODBC_CONNECTION_STRING |
+| 406 | TOP_AND_LIMIT_TOGETHER |
+| 407 | DECIMAL_OVERFLOW |
+| 408 | BAD_REQUEST_PARAMETER |
+| 410 | EXTERNAL_SERVER_IS_NOT_RESPONDING |
+| 411 | PTHREAD_ERROR |
+| 412 | NETLINK_ERROR |
+| 413 | CANNOT_SET_SIGNAL_HANDLER |
+| 415 | ALL_REPLICAS_LOST |
+| 416 | REPLICA_STATUS_CHANGED |
+| 417 | EXPECTED_ALL_OR_ANY |
+| 418 | UNKNOWN_JOIN |
+| 419 | MULTIPLE_ASSIGNMENTS_TO_COLUMN |
+| 420 | CANNOT_UPDATE_COLUMN |
+| 421 | CANNOT_ADD_DIFFERENT_AGGREGATE_STATES |
+| 422 | UNSUPPORTED_URI_SCHEME |
+| 423 | CANNOT_GETTIMEOFDAY |
+| 424 | CANNOT_LINK |
+| 425 | SYSTEM_ERROR |
+| 427 | CANNOT_COMPILE_REGEXP |
+| 429 | FAILED_TO_GETPWUID |
+| 430 | MISMATCHING_USERS_FOR_PROCESS_AND_DATA |
+| 431 | ILLEGAL_SYNTAX_FOR_CODEC_TYPE |
+| 432 | UNKNOWN_CODEC |
+| 433 | ILLEGAL_CODEC_PARAMETER |
+| 434 | CANNOT_PARSE_PROTOBUF_SCHEMA |
+| 435 | NO_COLUMN_SERIALIZED_TO_REQUIRED_PROTOBUF_FIELD |
+| 436 | PROTOBUF_BAD_CAST |
+| 437 | PROTOBUF_FIELD_NOT_REPEATED |
+| 438 | DATA_TYPE_CANNOT_BE_PROMOTED |
+| 439 | [CANNOT_SCHEDULE_TASK](/troubleshooting/error-codes/439_CANNOT_SCHEDULE_TASK) |
+| 440 | INVALID_LIMIT_EXPRESSION |
+| 441 | CANNOT_PARSE_DOMAIN_VALUE_FROM_STRING |
+| 442 | BAD_DATABASE_FOR_TEMPORARY_TABLE |
+| 443 | NO_COLUMNS_SERIALIZED_TO_PROTOBUF_FIELDS |
+| 444 | UNKNOWN_PROTOBUF_FORMAT |
+| 445 | CANNOT_MPROTECT |
+| 446 | FUNCTION_NOT_ALLOWED |
+| 447 | HYPERSCAN_CANNOT_SCAN_TEXT |
+| 448 | BROTLI_READ_FAILED |
+| 449 | BROTLI_WRITE_FAILED |
+| 450 | BAD_TTL_EXPRESSION |
+| 451 | BAD_TTL_FILE |
+| 452 | SETTING_CONSTRAINT_VIOLATION |
+| 453 | MYSQL_CLIENT_INSUFFICIENT_CAPABILITIES |
+| 454 | OPENSSL_ERROR |
+| 455 | [SUSPICIOUS_TYPE_FOR_LOW_CARDINALITY](/troubleshooting/error-codes/455_SUSPICIOUS_TYPE_FOR_LOW_CARDINALITY) |
+| 456 | UNKNOWN_QUERY_PARAMETER |
+| 457 | BAD_QUERY_PARAMETER |
+| 458 | CANNOT_UNLINK |
+| 459 | CANNOT_SET_THREAD_PRIORITY |
+| 460 | CANNOT_CREATE_TIMER |
+| 461 | CANNOT_SET_TIMER_PERIOD |
+| 463 | CANNOT_FCNTL |
+| 464 | CANNOT_PARSE_ELF |
+| 465 | CANNOT_PARSE_DWARF |
+| 466 | INSECURE_PATH |
+| 467 | CANNOT_PARSE_BOOL |
+| 468 | CANNOT_PTHREAD_ATTR |
+| 469 | VIOLATED_CONSTRAINT |
+| 471 | INVALID_SETTING_VALUE |
+| 472 | READONLY_SETTING |
+| 473 | DEADLOCK_AVOIDED |
+| 474 | INVALID_TEMPLATE_FORMAT |
+| 475 | INVALID_WITH_FILL_EXPRESSION |
+| 476 | WITH_TIES_WITHOUT_ORDER_BY |
+| 477 | INVALID_USAGE_OF_INPUT |
+| 478 | UNKNOWN_POLICY |
+| 479 | UNKNOWN_DISK |
+| 480 | UNKNOWN_PROTOCOL |
+| 481 | PATH_ACCESS_DENIED |
+| 482 | DICTIONARY_ACCESS_DENIED |
+| 483 | TOO_MANY_REDIRECTS |
+| 484 | INTERNAL_REDIS_ERROR |
+| 487 | CANNOT_GET_CREATE_DICTIONARY_QUERY |
+| 489 | INCORRECT_DICTIONARY_DEFINITION |
+| 490 | CANNOT_FORMAT_DATETIME |
+| 491 | UNACCEPTABLE_URL |
+| 492 | ACCESS_ENTITY_NOT_FOUND |
+| 493 | ACCESS_ENTITY_ALREADY_EXISTS |
+| 495 | ACCESS_STORAGE_READONLY |
+| 496 | QUOTA_REQUIRES_CLIENT_KEY |
+| 497 | ACCESS_DENIED |
+| 498 | LIMIT_BY_WITH_TIES_IS_NOT_SUPPORTED |
+| 499 | S3_ERROR |
+| 500 | AZURE_BLOB_STORAGE_ERROR |
+| 501 | CANNOT_CREATE_DATABASE |
+| 502 | CANNOT_SIGQUEUE |
+| 503 | AGGREGATE_FUNCTION_THROW |
+| 504 | FILE_ALREADY_EXISTS |
+| 507 | UNABLE_TO_SKIP_UNUSED_SHARDS |
+| 508 | UNKNOWN_ACCESS_TYPE |
+| 509 | INVALID_GRANT |
+| 510 | CACHE_DICTIONARY_UPDATE_FAIL |
+| 511 | UNKNOWN_ROLE |
+| 512 | SET_NON_GRANTED_ROLE |
+| 513 | UNKNOWN_PART_TYPE |
+| 514 | ACCESS_STORAGE_FOR_INSERTION_NOT_FOUND |
+| 515 | INCORRECT_ACCESS_ENTITY_DEFINITION |
+| 516 | AUTHENTICATION_FAILED |
+| 517 | CANNOT_ASSIGN_ALTER |
+| 518 | CANNOT_COMMIT_OFFSET |
+| 519 | NO_REMOTE_SHARD_AVAILABLE |
+| 520 | CANNOT_DETACH_DICTIONARY_AS_TABLE |
+| 521 | ATOMIC_RENAME_FAIL |
+| 523 | UNKNOWN_ROW_POLICY |
+| 524 | ALTER_OF_COLUMN_IS_FORBIDDEN |
+| 525 | INCORRECT_DISK_INDEX |
+| 527 | NO_SUITABLE_FUNCTION_IMPLEMENTATION |
+| 528 | CASSANDRA_INTERNAL_ERROR |
+| 529 | NOT_A_LEADER |
+| 530 | CANNOT_CONNECT_RABBITMQ |
+| 531 | CANNOT_FSTAT |
+| 532 | LDAP_ERROR |
+| 535 | UNKNOWN_RAID_TYPE |
+| 536 | CANNOT_RESTORE_FROM_FIELD_DUMP |
+| 537 | ILLEGAL_MYSQL_VARIABLE |
+| 538 | MYSQL_SYNTAX_ERROR |
+| 539 | CANNOT_BIND_RABBITMQ_EXCHANGE |
+| 540 | CANNOT_DECLARE_RABBITMQ_EXCHANGE |
+| 541 | CANNOT_CREATE_RABBITMQ_QUEUE_BINDING |
+| 542 | CANNOT_REMOVE_RABBITMQ_EXCHANGE |
+| 543 | UNKNOWN_MYSQL_DATATYPES_SUPPORT_LEVEL |
+| 544 | ROW_AND_ROWS_TOGETHER |
+| 545 | FIRST_AND_NEXT_TOGETHER |
+| 546 | NO_ROW_DELIMITER |
+| 547 | INVALID_RAID_TYPE |
+| 548 | UNKNOWN_VOLUME |
+| 549 | DATA_TYPE_CANNOT_BE_USED_IN_KEY |
+| 552 | UNRECOGNIZED_ARGUMENTS |
+| 553 | LZMA_STREAM_ENCODER_FAILED |
+| 554 | LZMA_STREAM_DECODER_FAILED |
+| 555 | ROCKSDB_ERROR |
+| 556 | SYNC_MYSQL_USER_ACCESS_ERROR |
+| 557 | UNKNOWN_UNION |
+| 558 | EXPECTED_ALL_OR_DISTINCT |
+| 559 | INVALID_GRPC_QUERY_INFO |
+| 560 | ZSTD_ENCODER_FAILED |
+| 561 | ZSTD_DECODER_FAILED |
+| 562 | TLD_LIST_NOT_FOUND |
+| 563 | CANNOT_READ_MAP_FROM_TEXT |
+| 564 | INTERSERVER_SCHEME_DOESNT_MATCH |
+| 565 | TOO_MANY_PARTITIONS |
+| 566 | CANNOT_RMDIR |
+| 567 | DUPLICATED_PART_UUIDS |
+| 568 | RAFT_ERROR |
+| 569 | MULTIPLE_COLUMNS_SERIALIZED_TO_SAME_PROTOBUF_FIELD |
+| 570 | DATA_TYPE_INCOMPATIBLE_WITH_PROTOBUF_FIELD |
+| 571 | DATABASE_REPLICATION_FAILED |
+| 572 | TOO_MANY_QUERY_PLAN_OPTIMIZATIONS |
+| 573 | EPOLL_ERROR |
+| 574 | DISTRIBUTED_TOO_MANY_PENDING_BYTES |
+| 575 | UNKNOWN_SNAPSHOT |
+| 576 | KERBEROS_ERROR |
+| 577 | INVALID_SHARD_ID |
+| 578 | INVALID_FORMAT_INSERT_QUERY_WITH_DATA |
+| 579 | INCORRECT_PART_TYPE |
+| 580 | CANNOT_SET_ROUNDING_MODE |
+| 581 | TOO_LARGE_DISTRIBUTED_DEPTH |
+| 582 | NO_SUCH_PROJECTION_IN_TABLE |
+| 583 | ILLEGAL_PROJECTION |
+| 584 | PROJECTION_NOT_USED |
+| 585 | CANNOT_PARSE_YAML |
+| 586 | CANNOT_CREATE_FILE |
+| 587 | CONCURRENT_ACCESS_NOT_SUPPORTED |
+| 588 | DISTRIBUTED_BROKEN_BATCH_INFO |
+| 589 | DISTRIBUTED_BROKEN_BATCH_FILES |
+| 590 | CANNOT_SYSCONF |
+| 591 | SQLITE_ENGINE_ERROR |
+| 592 | DATA_ENCRYPTION_ERROR |
+| 593 | ZERO_COPY_REPLICATION_ERROR |
+| 594 | BZIP2_STREAM_DECODER_FAILED |
+| 595 | BZIP2_STREAM_ENCODER_FAILED |
+| 596 | INTERSECT_OR_EXCEPT_RESULT_STRUCTURES_MISMATCH |
+| 597 | NO_SUCH_ERROR_CODE |
+| 598 | BACKUP_ALREADY_EXISTS |
+| 599 | BACKUP_NOT_FOUND |
+| 600 | BACKUP_VERSION_NOT_SUPPORTED |
+| 601 | BACKUP_DAMAGED |
+| 602 | NO_BASE_BACKUP |
+| 603 | WRONG_BASE_BACKUP |
+| 604 | BACKUP_ENTRY_ALREADY_EXISTS |
+| 605 | BACKUP_ENTRY_NOT_FOUND |
+| 606 | BACKUP_IS_EMPTY |
+| 607 | CANNOT_RESTORE_DATABASE |
+| 608 | CANNOT_RESTORE_TABLE |
+| 609 | FUNCTION_ALREADY_EXISTS |
+| 610 | CANNOT_DROP_FUNCTION |
+| 611 | CANNOT_CREATE_RECURSIVE_FUNCTION |
+| 614 | POSTGRESQL_CONNECTION_FAILURE |
+| 615 | CANNOT_ADVISE |
+| 616 | UNKNOWN_READ_METHOD |
+| 617 | LZ4_ENCODER_FAILED |
+| 618 | LZ4_DECODER_FAILED |
+| 619 | POSTGRESQL_REPLICATION_INTERNAL_ERROR |
+| 620 | QUERY_NOT_ALLOWED |
+| 621 | CANNOT_NORMALIZE_STRING |
+| 622 | CANNOT_PARSE_CAPN_PROTO_SCHEMA |
+| 623 | CAPN_PROTO_BAD_CAST |
+| 624 | BAD_FILE_TYPE |
+| 625 | IO_SETUP_ERROR |
+| 626 | CANNOT_SKIP_UNKNOWN_FIELD |
+| 627 | BACKUP_ENGINE_NOT_FOUND |
+| 628 | OFFSET_FETCH_WITHOUT_ORDER_BY |
+| 629 | HTTP_RANGE_NOT_SATISFIABLE |
+| 630 | HAVE_DEPENDENT_OBJECTS |
+| 631 | UNKNOWN_FILE_SIZE |
+| 632 | UNEXPECTED_DATA_AFTER_PARSED_VALUE |
+| 633 | QUERY_IS_NOT_SUPPORTED_IN_WINDOW_VIEW |
+| 634 | MONGODB_ERROR |
+| 635 | CANNOT_POLL |
+| 636 | CANNOT_EXTRACT_TABLE_STRUCTURE |
+| 637 | INVALID_TABLE_OVERRIDE |
+| 638 | SNAPPY_UNCOMPRESS_FAILED |
+| 639 | SNAPPY_COMPRESS_FAILED |
+| 640 | NO_HIVEMETASTORE |
+| 641 | CANNOT_APPEND_TO_FILE |
+| 642 | CANNOT_PACK_ARCHIVE |
+| 643 | CANNOT_UNPACK_ARCHIVE |
+| 645 | NUMBER_OF_DIMENSIONS_MISMATCHED |
+| 647 | CANNOT_BACKUP_TABLE |
+| 648 | WRONG_DDL_RENAMING_SETTINGS |
+| 649 | INVALID_TRANSACTION |
+| 650 | SERIALIZATION_ERROR |
+| 651 | CAPN_PROTO_BAD_TYPE |
+| 652 | ONLY_NULLS_WHILE_READING_SCHEMA |
+| 653 | CANNOT_PARSE_BACKUP_SETTINGS |
+| 654 | WRONG_BACKUP_SETTINGS |
+| 655 | FAILED_TO_SYNC_BACKUP_OR_RESTORE |
+| 659 | UNKNOWN_STATUS_OF_TRANSACTION |
+| 660 | HDFS_ERROR |
+| 661 | CANNOT_SEND_SIGNAL |
+| 662 | FS_METADATA_ERROR |
+| 663 | INCONSISTENT_METADATA_FOR_BACKUP |
+| 664 | ACCESS_STORAGE_DOESNT_ALLOW_BACKUP |
+| 665 | CANNOT_CONNECT_NATS |
+| 667 | NOT_INITIALIZED |
+| 668 | INVALID_STATE |
+| 669 | NAMED_COLLECTION_DOESNT_EXIST |
+| 670 | NAMED_COLLECTION_ALREADY_EXISTS |
+| 671 | NAMED_COLLECTION_IS_IMMUTABLE |
+| 672 | INVALID_SCHEDULER_NODE |
+| 673 | RESOURCE_ACCESS_DENIED |
+| 674 | RESOURCE_NOT_FOUND |
+| 675 | CANNOT_PARSE_IPV4 |
+| 676 | CANNOT_PARSE_IPV6 |
+| 677 | THREAD_WAS_CANCELED |
+| 678 | IO_URING_INIT_FAILED |
+| 679 | IO_URING_SUBMIT_ERROR |
+| 690 | MIXED_ACCESS_PARAMETER_TYPES |
+| 691 | UNKNOWN_ELEMENT_OF_ENUM |
+| 692 | TOO_MANY_MUTATIONS |
+| 693 | AWS_ERROR |
+| 694 | ASYNC_LOAD_CYCLE |
+| 695 | ASYNC_LOAD_FAILED |
+| 696 | ASYNC_LOAD_CANCELED |
+| 697 | CANNOT_RESTORE_TO_NONENCRYPTED_DISK |
+| 698 | INVALID_REDIS_STORAGE_TYPE |
+| 699 | INVALID_REDIS_TABLE_STRUCTURE |
+| 700 | USER_SESSION_LIMIT_EXCEEDED |
+| 701 | CLUSTER_DOESNT_EXIST |
+| 702 | CLIENT_INFO_DOES_NOT_MATCH |
+| 703 | INVALID_IDENTIFIER |
+| 704 | QUERY_CACHE_USED_WITH_NONDETERMINISTIC_FUNCTIONS |
+| 705 | TABLE_NOT_EMPTY |
+| 706 | LIBSSH_ERROR |
+| 707 | GCP_ERROR |
+| 708 | ILLEGAL_STATISTICS |
+| 709 | CANNOT_GET_REPLICATED_DATABASE_SNAPSHOT |
+| 710 | FAULT_INJECTED |
+| 711 | FILECACHE_ACCESS_DENIED |
+| 712 | TOO_MANY_MATERIALIZED_VIEWS |
+| 713 | BROKEN_PROJECTION |
+| 714 | UNEXPECTED_CLUSTER |
+| 715 | CANNOT_DETECT_FORMAT |
+| 716 | CANNOT_FORGET_PARTITION |
+| 717 | EXPERIMENTAL_FEATURE_ERROR |
+| 718 | TOO_SLOW_PARSING |
+| 719 | QUERY_CACHE_USED_WITH_SYSTEM_TABLE |
+| 720 | USER_EXPIRED |
+| 721 | DEPRECATED_FUNCTION |
+| 722 | ASYNC_LOAD_WAIT_FAILED |
+| 723 | PARQUET_EXCEPTION |
+| 724 | TOO_MANY_TABLES |
+| 725 | TOO_MANY_DATABASES |
+| 726 | UNEXPECTED_HTTP_HEADERS |
+| 727 | UNEXPECTED_TABLE_ENGINE |
+| 728 | UNEXPECTED_DATA_TYPE |
+| 729 | ILLEGAL_TIME_SERIES_TAGS |
+| 730 | REFRESH_FAILED |
+| 731 | QUERY_CACHE_USED_WITH_NON_THROW_OVERFLOW_MODE |
+| 733 | TABLE_IS_BEING_RESTARTED |
+| 734 | CANNOT_WRITE_AFTER_BUFFER_CANCELED |
+| 735 | [QUERY_WAS_CANCELLED_BY_CLIENT](/troubleshooting/error-codes/735_QUERY_WAS_CANCELLED_BY_CLIENT) |
+| 736 | DATALAKE_DATABASE_ERROR |
+| 737 | GOOGLE_CLOUD_ERROR |
+| 738 | PART_IS_LOCKED |
+| 739 | BUZZHOUSE |
+| 740 | POTENTIALLY_BROKEN_DATA_PART |
+| 741 | TABLE_UUID_MISMATCH |
+| 742 | DELTA_KERNEL_ERROR |
+| 743 | ICEBERG_SPECIFICATION_VIOLATION |
+| 744 | SESSION_ID_EMPTY |
+| 745 | SERVER_OVERLOADED |
+| 746 | DEPENDENCIES_NOT_FOUND |
+| 900 | DISTRIBUTED_CACHE_ERROR |
+| 901 | CANNOT_USE_DISTRIBUTED_CACHE |
+| 902 | PROTOCOL_VERSION_MISMATCH |
+| 903 | LICENSE_EXPIRED |
+| 999 | KEEPER_EXCEPTION |
+| 1000 | POCO_EXCEPTION |
+| 1001 | [STD_EXCEPTION](/troubleshooting/error-codes/1001_STD_EXCEPTION) |
+| 1002 | UNKNOWN_EXCEPTION |
\ No newline at end of file
diff --git a/scripts/aspell-ignore/en/aspell-dict.txt b/scripts/aspell-ignore/en/aspell-dict.txt
index 32f237ad09a..0e4781b8d5c 100644
--- a/scripts/aspell-ignore/en/aspell-dict.txt
+++ b/scripts/aspell-ignore/en/aspell-dict.txt
@@ -57,6 +57,7 @@ AsynchronousInsertThreadsActive
AsynchronousMetricsCalculationTimeSpent
AsynchronousMetricsUpdateInterval
AsynchronousReadWait
+ATTR
AuroraMySQL
AuroraPostgreSQL
Authenticator
@@ -251,6 +252,7 @@ ContentSquare's
Contentsquare
ContextLockWait
Contrib
+CoreDNS
CopilotKit
Copilotkit
CoreDNS
@@ -291,6 +293,9 @@ DIEs
DLOPEN
DLSYM
DOGEFI
+DATALAKE
+DLOPEN
+DLSYM
DSAR
DSPy
DaemonSet
@@ -375,6 +380,7 @@ Durre
ECMA
EDOT
EMQX
+EPOLL
ENIs
EPOLL
ETag
@@ -412,6 +418,8 @@ FILECACHE
FIPS
FOSDEM
FQDN
+FCNTL
+FILECACHE
FSTAT
Fabi
Failover
@@ -455,6 +463,9 @@ GTID
GTIDs
GTest
GUID
+GETPWUID
+GETTIME
+GETTIMEOFDAY
GWLBs
Gb
Gbit
@@ -494,12 +505,14 @@ HANA
HAProxy
HDDs
HHMM
+HIVEMETASTORE
HIPAA
HIVEMETASTORE
HMAC
HNSW
HSTS
HTAP
+HAProxy
HTTPConnection
HTTPThreads
Hashboard's
@@ -534,12 +547,15 @@ IDEs
IDNA
IMDS
IMDb
+INIT
INFILE
INIT
INOUT
INSERTed
INSERTs
INVOKER
+IOSETUP
+ISTREAM
IOPS
IOPrefetchThreads
IOPrefetchThreadsActive
@@ -709,6 +725,7 @@ Lemire
Levenshtein
Lhotsky
Liao
+LIBSSH
LibFuzzer
LibreChat
Lifecycles
@@ -762,6 +779,7 @@ MMappedFiles
MPROTECT
MQTT
MQTTX
+MPROTECT
MREMAP
MSSQL
MSan
@@ -857,6 +875,7 @@ NEKUDOTAYIM
NETLINK
NEWDATE
NEWDECIMAL
+NETLINK
NFKC
NFKD
NIST
@@ -915,6 +934,7 @@ OLAP
OLTP
OOMKilled
OOMs
+OOMKilled
ORCCompression
ORMs
OSContextSwitches
@@ -964,6 +984,7 @@ OSUptime
OSUserTime
OSUserTimeCPU
OSUserTimeNormalized
+OSTREAM
OTLP
OTel
OUTFILE
@@ -1458,6 +1479,8 @@ Uint
Unbatched
UncompressedCacheBytes
UncompressedCacheCells
+UNCOMPRESS
+UNLINK
UnidirectionalEdgeIsValid
UnionStep
UniqThetaSketch
@@ -1479,6 +1502,7 @@ VPNs
Vadim
Valgrind
ValueError
+validator
Vaza
Vectorization
Vectorized
@@ -1493,6 +1517,7 @@ VirtualBox
Vose
WAITPID
WALs
+WAITPID
WSFG
WarpStream
WebUI
@@ -1537,6 +1562,8 @@ ZooKeeperSession
ZooKeeperWatch
ZooKeepers
Zstandard
+ZCONNECTIONLOSS
+ZSESSIONEXPIRED
aarch
abstractmethod
accurateCast
@@ -2063,6 +2090,7 @@ denormals
deployable
dequeued
dequeues
+deregistration
dereference
deregistration
deserialization
@@ -2156,6 +2184,7 @@ equi
erfc
errno
errorCodeToName
+errno
errored
etag
evalMLMethod
@@ -2529,6 +2558,7 @@ kolya
konsole
kostik
kostikConsistentHash
+kube
ksqlDB
kube
kurtPop
@@ -2972,6 +3002,7 @@ preprocessed
preprocessing
preprocessor
presentational
+prestop
prestable
prestop
prettycompact
@@ -3108,6 +3139,8 @@ redash
reddit
redis
redisstreams
+refetch
+refetched
refcounter
refetch
refetched
@@ -3235,6 +3268,7 @@ serverTimeZone
serverTimezone
serverUUID
serverless
+serializeValueIntoMemory
sessionCacheSize
sessionIdContext
sessionTimeout
@@ -3735,6 +3769,7 @@ variadic
variantElement
variantType
varint
+varargs
varpop
varpopstable
varsamp
diff --git a/sidebars.js b/sidebars.js
index e4ca70c8456..73c5762f99d 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -93,7 +93,12 @@ const sidebars = {
collapsed: false,
collapsible: false,
link: { type: "doc", id: "troubleshooting/index" },
- items: []
+ items: [
+ {
+ type: "autogenerated",
+ dirName: "troubleshooting",
+ }
+ ]
},
{
type: "category",
diff --git a/src/css/custom.scss b/src/css/custom.scss
index 5d208081a62..3733eb0245b 100644
--- a/src/css/custom.scss
+++ b/src/css/custom.scss
@@ -1553,3 +1553,14 @@ input::-ms-input-placeholder { /* Microsoft Edge */
.code-viewer {
margin-bottom: var(--ifm-paragraph-margin-bottom);
}
+
+/* Make error codes sidebar items smaller */
+.error-codes-category .menu__list-item {
+ font-size: 0.7rem;
+
+ .menu__link {
+ padding-top: 1px;
+ padding-bottom: 1px;
+ line-height: 1.2;
+ }
+}