1 change: 1 addition & 0 deletions doc/make.jl
@@ -192,6 +192,7 @@ Manual = [
"manual/code-loading.md",
"manual/profile.md",
"manual/stacktraces.md",
"manual/memory-management.md",
"manual/performance-tips.md",
"manual/workflow-tips.md",
"manual/style-guide.md",
4 changes: 2 additions & 2 deletions doc/src/manual/command-line-interface.md
@@ -180,7 +180,7 @@ The following is a complete list of command-line switches available when launching
|`-m`, `--module <Package> [args]` |Run entry point of `Package` (`@main` function) with `args`|
|`-L`, `--load <file>` |Load `<file>` immediately on all processors|
|`-t`, `--threads {auto\|N[,auto\|M]}` |Enable N[+M] threads; N threads are assigned to the `default` threadpool, and if M is specified, M threads are assigned to the `interactive` threadpool; `auto` tries to infer a useful default number of threads to use but the exact behavior might change in the future. Currently sets N to the number of CPUs assigned to this Julia process based on the OS-specific affinity assignment interface if supported (Linux and Windows) or to the number of CPU threads if not supported (MacOS) or if process affinity is not configured, and sets M to 1.|
| `--gcthreads=N[,M]` |Use N threads for the mark phase of GC and M (0 or 1) threads for the concurrent sweeping phase of GC. N is set to the number of compute threads and M is set to 0 if unspecified.|
| `--gcthreads=N[,M]` |Use N threads for the mark phase of GC and M (0 or 1) threads for the concurrent sweeping phase of GC. N is set to the number of compute threads and M is set to 0 if unspecified. See [Memory Management and Garbage Collection](@ref man-memory-management) for more details.|
|`-p`, `--procs {N\|auto}` |Integer value N launches N additional local worker processes; `auto` launches as many workers as the number of local CPU threads (logical cores)|
|`--machine-file <file>` |Run processes on hosts listed in `<file>`|
|`-i`, `--interactive` |Interactive mode; REPL runs and `isinteractive()` is true|
@@ -206,7 +206,7 @@ The following is a complete list of command-line switches available when launching
|`--track-allocation=@<path>` |Count bytes but only in files that fall under the given file path/directory. The `@` prefix is required to select this option. A `@` with no path will track the current directory.|
|`--task-metrics={yes\|no*}` |Enable the collection of per-task metrics|
|`--bug-report=KIND` |Launch a bug report session. It can be used to start a REPL, run a script, or evaluate expressions. It first tries to use BugReporting.jl installed in current environment and falls back to the latest compatible BugReporting.jl if not. For more information, see `--bug-report=help`.|
|`--heap-size-hint=<size>` |Forces garbage collection if memory usage is higher than the given value. The value may be specified as a number of bytes, optionally in units of KB, MB, GB, or TB, or as a percentage of physical memory with %.|
|`--heap-size-hint=<size>` |Forces garbage collection if memory usage is higher than the given value. The value may be specified as a number of bytes, optionally in units of KB, MB, GB, or TB, or as a percentage of physical memory with %. See [Memory Management and Garbage Collection](@ref man-memory-management) for more details.|
|`--compile={yes*\|no\|all\|min}` |Enable or disable JIT compiler, or request exhaustive or minimal compilation|
|`--output-o <name>` |Generate an object file (including system image data)|
|`--output-ji <name>` |Generate a system image data file (.ji)|
177 changes: 177 additions & 0 deletions doc/src/manual/memory-management.md
@@ -0,0 +1,177 @@
# [Memory Management and Garbage Collection](@id man-memory-management)

Julia uses automatic memory management through its built-in garbage collector (GC). This section provides an overview of how Julia manages memory and how you can configure and optimize memory usage for your applications.

## [Garbage Collection Overview](@id man-gc-overview)

Julia features a garbage collector with the following characteristics:

* **Non-moving**: Objects are not relocated in memory during garbage collection
* **Generational**: Younger objects are collected more frequently than older ones
* **Parallel and partially concurrent**: The GC can use multiple threads and run concurrently with your program
* **Mostly precise**: The GC accurately identifies object references for pure Julia code, and it provides conservative scanning APIs for users calling Julia from C

The garbage collector automatically reclaims memory used by objects that are no longer reachable from your program, freeing you from manual memory management in most cases.

## [Memory Architecture](@id man-memory-architecture)

Julia uses a two-tier allocation strategy:

* **Small objects** (currently ≤ 2032 bytes but may change): Allocated using a fast per-thread pool allocator

[Review comment — serenity4 (Member)] Suggested change:

    - * **Small objects** (currently ≤ 2032 bytes but may change): Allocated using a fast per-thread pool allocator
    + * **Small objects** (≤ 2032 bytes at the time of writing but may change): Allocated using a fast per-thread pool allocator

[IanButterworth (Member, Author), Jun 23, 2025] That would give the impression it might not be correct/up to date. The docs should always be correct.

[serenity4 (Member), Jun 23, 2025] I thought that it would be likely that it eventually becomes out of date, but if you believe the opposite, feel free to discard this suggestion.

[IanButterworth (Member, Author)] It may be worth adding a comment in the code that it's documented, to help it stay in sync.

* **Large objects**: Allocated directly through the system's `malloc`

This hybrid approach optimizes for both allocation speed and memory efficiency, with the pool allocator providing fast allocation for the many small objects typical in Julia programs.

## [System Memory Requirements](@id man-system-memory)

### Swap Space

Julia's garbage collector is designed with the expectation that your system has adequate swap space configured. The GC uses heuristics that assume it can allocate memory beyond physical RAM when needed, relying on the operating system's virtual memory management.

[Review comment — Member] I'm not sure this adds much, this is true for every program out there.

[Reply — Member] it is, but HPC people don't agree so IMO it's worth pointing out.

If your system has limited or no swap space, you may experience out-of-memory errors during garbage collection. In such cases, you can use the `--heap-size-hint` option to limit Julia's memory usage.

### Memory Hints

You can provide a hint to Julia about the maximum amount of memory to use:

```bash
julia --heap-size-hint=4G # To set the hint to ~4GB
julia --heap-size-hint=50% # or to 50% of physical memory
```

The `--heap-size-hint` option tells the garbage collector to trigger collection more aggressively when approaching the specified limit. This is particularly useful in:

* Containers with memory limits
* Systems without swap space
* Shared systems where you want to limit Julia's memory footprint

You can also set this via the `JULIA_HEAP_SIZE_HINT` environment variable:

```bash
export JULIA_HEAP_SIZE_HINT=2G
julia
```

## [Multithreaded Garbage Collection](@id man-gc-multithreading)

Julia's garbage collector can leverage multiple threads to improve performance on multi-core systems.

### GC Thread Configuration

By default, Julia uses multiple threads for garbage collection:

* **Mark threads**: Used during the mark phase to trace object references (default: one mark thread, shared with the compute thread, when Julia runs with a single thread; otherwise the number of compute threads)
* **Sweep threads**: Used for concurrent sweeping of freed memory (default: 0, disabled)

You can configure GC threading using:

```bash
julia --gcthreads=4,1 # 4 mark threads, 1 sweep thread
julia --gcthreads=8 # 8 mark threads, 0 sweep threads
```

Or via environment variable:

```bash
export JULIA_NUM_GC_THREADS=4,1
julia
```
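
To confirm what a session actually ended up with, you can query the thread counts at runtime (a minimal check, assuming Julia 1.10 or later for `Threads.ngcthreads`):

```julia
# Total number of GC threads (mark threads plus any concurrent sweep thread);
# available on Julia 1.10 and later.
Threads.ngcthreads()

# For comparison: the number of compute threads.
Threads.nthreads()
```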

### Recommendations

For compute-intensive workloads:

* Use multiple mark threads (the default configuration is usually appropriate)
* Consider enabling concurrent sweeping with 1 sweep thread for allocation-heavy workloads

For memory-intensive workloads:

* Enable concurrent sweeping to reduce GC pauses
* Monitor GC time using `@time` and adjust thread counts accordingly

## [Monitoring and Debugging](@id man-gc-monitoring)

### Basic Memory Monitoring

Use the `@time` macro to see memory allocation and GC overhead:

```julia
julia> @time some_computation()
2.123456 seconds (1.50 M allocations: 58.725 MiB, 17.17% gc time)
```
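
Beyond timing a single call, a couple of built-ins give a rough picture of allocation behaviour and heap size (a minimal sketch; the exact numbers depend on your session and Julia version):

```julia
# Bytes allocated while evaluating a single expression.
bytes = @allocated sum(rand(10^6))

# Approximate size of objects currently live on the GC heap, in MiB.
live_mib = Base.gc_live_bytes() / 2^20
```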

### GC Logging

Enable detailed GC logging to understand collection patterns:

```julia
julia> GC.enable_logging(true)
julia> # Run your code
julia> GC.enable_logging(false)
```

This logs each garbage collection event with timing and memory statistics.

### Manual GC Control

While generally not recommended, you can manually trigger garbage collection:

```julia
GC.gc() # Force a garbage collection
GC.enable(false) # Disable automatic GC (use with caution!)
GC.enable(true) # Re-enable automatic GC
```

**Warning**: Disabling GC can lead to memory exhaustion. Only use this for specific performance measurements or debugging.
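
If you do need to pause the collector around a short, allocation-light critical section, a `try`/`finally` block keeps the window bounded (a sketch of the pattern, where `run_critical_section` stands in for your own code — not a recommendation to disable GC routinely):

```julia
GC.enable(false)            # pause automatic collection
try
    run_critical_section()  # hypothetical allocation-light workload
finally
    GC.enable(true)         # always re-enable, even if an error is thrown
    GC.gc()                 # optionally collect the backlog right away
end
```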

## [Performance Considerations](@id man-gc-performance)

### Reducing Allocations

The best way to minimize GC impact is to reduce unnecessary allocations:

* Use in-place operations when possible (e.g., `x .+= y` instead of `x = x + y`)
* Pre-allocate arrays and reuse them (see the sketch after this list)
* Avoid creating temporary objects in tight loops
* Consider using `StaticArrays.jl` for small, fixed-size arrays
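
As a minimal sketch of the pre-allocation pattern (the function names here are illustrative, not an API):

```julia
# Allocating version: builds a fresh array on every call.
step(x, y) = x .+ 2 .* y

# In-place version: writes into a caller-supplied buffer instead.
function step!(out, x, y)
    @. out = x + 2 * y      # fused broadcast, no temporaries
    return out
end

x, y = rand(1_000), rand(1_000)
buf = similar(x)            # allocate once, outside the loop
for _ in 1:10_000
    step!(buf, x, y)        # reuses `buf` on every iteration
end
```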

### Memory-Efficient Patterns

* Avoid global variables that change type
* Use `const` for global constants (a short example follows)
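
For example (a small sketch; declared-type globals require Julia 1.8 or later):

```julia
const SCALE = 2.5       # the compiler can assume both value and type are fixed

counter::Int = 0        # if a global must be reassigned, declare its type
```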

### Profiling Memory Usage

For detailed guidance on profiling memory allocations and identifying performance bottlenecks, see the [Profiling](@ref man-profiling) section.

## [Advanced Configuration](@id man-gc-advanced)

### Integration with System Memory Management

Julia works best when:

* The system has adequate swap space (recommended: 2x physical RAM)
* Virtual memory is properly configured
* Other processes leave sufficient memory available
* Container memory limits are set appropriately with `--heap-size-hint` (see the example below)
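
For instance, in a container with a hard memory limit you might leave the runtime some headroom below that limit (a sketch; the image name, script, and sizes are placeholders):

```bash
# Container is capped at 8 GiB; ask Julia's GC to stay comfortably below it.
docker run --memory=8g my-julia-image \
    julia --heap-size-hint=6G script.jl
```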

## [Troubleshooting Memory Issues](@id man-gc-troubleshooting)

### High GC Overhead

If garbage collection is taking too much time:

1. **Reduce allocation rate**: Focus on algorithmic improvements
2. **Adjust GC threads**: Experiment with different `--gcthreads` settings
3. **Use concurrent sweeping**: Enable background sweeping with `--gcthreads=N,1`
4. **Profile memory patterns**: Identify allocation hotspots and optimize them (a quick way to measure the GC fraction is sketched below)
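
A quick way to see whether changes help is to compare the GC fraction before and after (a minimal sketch; `workload()` stands in for your own code):

```julia
stats = @timed workload()                      # `workload` is a placeholder
println("GC fraction: ", stats.gctime / stats.time)
```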

### Memory Leaks

While Julia's GC prevents most memory leaks, issues can still occur:

* **Global references**: Avoid holding references to large objects in global variables
* **Closures**: Be careful with closures that capture large amounts of data
* **C interop**: Ensure proper cleanup when interfacing with C libraries (a finalizer sketch follows this list)
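
For the C-interop case, attaching a finalizer that releases the foreign resource is a common pattern (a sketch using `Libc.malloc`/`Libc.free`; real bindings would wrap their library's own allocation and free functions):

```julia
mutable struct CBuffer          # must be mutable to attach a finalizer
    ptr::Ptr{UInt8}
    len::Int
    function CBuffer(len::Integer)
        p = Libc.malloc(len)
        p == C_NULL && throw(OutOfMemoryError())
        buf = new(Ptr{UInt8}(p), len)
        finalizer(buf) do b
            Libc.free(b.ptr)    # runs when the GC decides `buf` is unreachable
        end
        return buf
    end
end
```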

For more detailed information about Julia's garbage collector internals, see the Garbage Collection section in the Developer Documentation.
6 changes: 4 additions & 2 deletions doc/src/manual/multi-threading.md
@@ -84,13 +84,15 @@ julia> Threads.threadid()

### Multiple GC Threads

The Garbage Collector (GC) can use multiple threads. The amount used is either half the number
of compute worker threads or configured by either the `--gcthreads` command line argument or by using the
The Garbage Collector (GC) can use multiple threads. The number used by default matches the number of compute
worker threads, or it can be configured by either the `--gcthreads` command line argument or by using the
[`JULIA_NUM_GC_THREADS`](@ref JULIA_NUM_GC_THREADS) environment variable.

!!! compat "Julia 1.10"
The `--gcthreads` command line argument requires at least Julia 1.10.

For more details about garbage collection configuration and performance tuning, see [Memory Management and Garbage Collection](@ref man-memory-management).

## [Threadpools](@id man-threadpools)

When a program's threads are busy with many tasks to run, tasks may experience
2 changes: 2 additions & 0 deletions doc/src/manual/performance-tips.md
@@ -116,6 +116,8 @@ Consequently, in addition to the allocation itself, it's very likely
that the code generated for your function is far from optimal. Take such indications seriously
and follow the advice below.

For more information about memory management and garbage collection in Julia, see [Memory Management and Garbage Collection](@ref man-memory-management).

In this particular case, the memory allocation is due to the usage of a type-unstable global variable `x`, so if we instead pass `x` as an argument to the function it no longer allocates memory
(the remaining allocation reported below is due to running the `@time` macro in global scope)
and is significantly faster after the first call: