foreBlocks is a flexible and modular deep learning library for time series forecasting, built on PyTorch. It provides a wide range of neural network architectures and forecasting strategies through a clean, research-friendly API, enabling fast experimentation and scalable deployment.
GitHub Repository
```bash
# Clone and install
git clone https://github.com/lseman/foreblocks
cd foreblocks
pip install -e .
```
Or install directly via PyPI:
```bash
pip install foreblocks
```
```python
from foreblocks import TimeSeriesSeq2Seq, ModelConfig, TrainingConfig
import pandas as pd
import torch
# Load your time series dataset
data = pd.read_csv('your_data.csv')
X = data.values
# Configure the model
model_config = ModelConfig(
model_type="lstm",
input_size=X.shape[1],
output_size=1,
hidden_size=64,
target_len=24,
teacher_forcing_ratio=0.5
)
# Initialize and train
model = TimeSeriesSeq2Seq(model_config=model_config)
X_train, y_train, _ = model.preprocess(X, self_tune=True)
# Create DataLoader and start training
from torch.utils.data import DataLoader, TensorDataset
train_dataset = TensorDataset(
torch.tensor(X_train, dtype=torch.float32),
torch.tensor(y_train, dtype=torch.float32)
)
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
history = model.train_model(train_loader)
# X_test: windowed test inputs, prepared the same way as X_train
predictions = model.predict(X_test)
```
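For a quick sanity check of forecast quality, here is a minimal sketch assuming `y_test` holds the true future values aligned with `predictions`:

```python
import numpy as np

# Assumes y_test is an array of true targets with the same shape as `predictions`
# (and, for MAPE, that it contains no zeros)
y_true, y_pred = np.asarray(y_test), np.asarray(predictions)
mae = np.mean(np.abs(y_pred - y_true))
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
print(f"MAE: {mae:.4f}  MAPE: {mape:.2f}%")
```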
| Feature | Description |
|---|---|
| Multiple Strategies | Seq2Seq, Autoregressive, and Direct forecasting modes |
| Modular Design | Easily swap and extend model components |
| Advanced Models | LSTM, GRU, Transformer, VAE, and more |
| Smart Preprocessing | Automatic normalization, differencing, EWT, and outlier handling |
| Attention Modules | Pluggable attention layers for enhanced temporal modeling |
| Multivariate Support | Designed for multi-feature time series with dynamic input handling |
| Training Utilities | Built-in trainer with callbacks, metrics, and visualizations |
| Transparent API | Clean and extensible codebase with complete documentation |
| Section | Description | Link |
|---|---|---|
| Preprocessing | EWT, normalization, differencing, outliers | Guide |
| Custom Blocks | Registering new encoder/decoder/attention blocks | Guide |
| Transformers | Transformer-based modules | Docs |
| Fourier | Frequency-based forecasting layers | Docs |
| Wavelet | Wavelet transform modules | Docs |
| DARTS | Architecture search for forecasting | Docs |
foreBlocks is built around clean and extensible abstractions:

- `TimeSeriesSeq2Seq`: high-level interface for forecasting workflows
- `ForecastingModel`: core model engine combining encoders, decoders, and heads
- `TimeSeriesPreprocessor`: adaptive preprocessing with feature engineering
- `Trainer`: handles the training loop, validation, and visual feedback
```python
# Seq2Seq (encoder-decoder) strategy
model_config = ModelConfig(
    model_type="lstm",
    strategy="seq2seq",
    input_size=3,
    output_size=1,
    hidden_size=64,
    num_encoder_layers=2,
    num_decoder_layers=2,
    target_len=24
)
```
```python
# Autoregressive strategy
model_config = ModelConfig(
    model_type="lstm",
    strategy="autoregressive",
    input_size=1,
    output_size=1,
    hidden_size=64,
    target_len=12
)
```
```python
# Direct strategy
model_config = ModelConfig(
    model_type="lstm",
    strategy="direct",
    input_size=5,
    output_size=1,
    hidden_size=128,
    target_len=48
)
```
```python
# Transformer-based seq2seq strategy
model_config = ModelConfig(
    model_type="transformer",
    strategy="transformer_seq2seq",
    input_size=4,
    output_size=4,
    hidden_size=128,
    dim_feedforward=512,
    nheads=8,
    num_encoder_layers=3,
    num_decoder_layers=3,
    target_len=96
)
```
```python
# Multiple encoder/decoder configuration
model_config = ModelConfig(
    multi_encoder_decoder=True,
    input_size=5,
    output_size=1,
    hidden_size=64,
    model_type="lstm",
    target_len=24
)
```
```python
from foreblocks.attention import AttentionLayer

# Build an attention layer and attach it to the seq2seq model
attention = AttentionLayer(
    method="dot",
    attention_backend="self",
    encoder_hidden_size=64,
    decoder_hidden_size=64
)

model = TimeSeriesSeq2Seq(
    model_config=model_config,
    attention_module=attention
)
```
```python
X_train, y_train, _ = model.preprocess(
    X,
    normalize=True,           # scale the series
    differencing=True,        # apply differencing
    detrend=True,             # remove trend
    apply_ewt=True,           # empirical wavelet transform features
    window_size=48,           # input window length
    horizon=24,               # forecast horizon
    remove_outliers=True,     # handle outliers...
    outlier_method="iqr",     # ...using the IQR rule
    self_tune=True            # let the preprocessor pick sensible settings
)
```
```python
# Sampling schedule: decays linearly from 1.0 to 0.0 over the first 10 epochs
def schedule(epoch):
    return max(0.0, 1.0 - 0.1 * epoch)

model = TimeSeriesSeq2Seq(
    model_config=model_config,
    scheduled_sampling_fn=schedule
)
```
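If you prefer a smoother decay, here is a minimal sketch of an exponential schedule, assuming (as the linear example above suggests) that the callable receives the epoch index and returns a value in [0, 1]:

```python
import math

def exp_schedule(epoch, decay=0.1):
    # Hypothetical alternative: exponential decay from 1.0 toward 0.0
    return math.exp(-decay * epoch)

model = TimeSeriesSeq2Seq(
    model_config=model_config,
    scheduled_sampling_fn=exp_schedule
)
```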
```python
from foreblocks import TimeSeriesSeq2Seq, ModelConfig
from foreblocks.attention import AttentionLayer

model_config = ModelConfig(
    model_type="lstm",
    input_size=3,
    output_size=1,
    hidden_size=64,
    target_len=24
)

attention = AttentionLayer(
    method="dot",
    encoder_hidden_size=64,
    decoder_hidden_size=64
)

model = TimeSeriesSeq2Seq(
    model_config=model_config,
    attention_module=attention
)
```
```python
from foreblocks import TimeSeriesSeq2Seq, ModelConfig, TrainingConfig

model_config = ModelConfig(
    model_type="transformer",
    input_size=4,
    output_size=4,
    hidden_size=128,
    dim_feedforward=512,
    nheads=8,
    num_encoder_layers=3,
    num_decoder_layers=3,
    target_len=96
)

training_config = TrainingConfig(
    num_epochs=100,
    learning_rate=1e-4,
    weight_decay=1e-5,
    patience=15
)

model = TimeSeriesSeq2Seq(
    model_config=model_config,
    training_config=training_config
)
```
| Parameter | Type | Description | Default |
|---|---|---|---|
| `model_type` | str | "lstm", "gru", "transformer", etc. | "lstm" |
| `input_size` | int | Number of input features | required |
| `output_size` | int | Number of output features | required |
| `hidden_size` | int | Hidden layer dimension | 64 |
| `target_len` | int | Forecast steps | required |
| `num_encoder_layers` | int | Encoder depth | 1 |
| `num_decoder_layers` | int | Decoder depth | 1 |
| `teacher_forcing_ratio` | float | Ratio of teacher forcing | 0.5 |
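Only the parameters marked as required need to be supplied explicitly; everything else falls back to the defaults listed above. A minimal sketch:

```python
model_config = ModelConfig(
    input_size=3,    # number of input features (required)
    output_size=1,   # number of output features (required)
    target_len=24    # forecast steps (required)
)
```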
| Parameter | Type | Description | Default |
|---|---|---|---|
| `num_epochs` | int | Training epochs | 100 |
| `learning_rate` | float | Learning rate | 1e-3 |
| `batch_size` | int | Mini-batch size | 32 |
| `patience` | int | Early stopping patience | 10 |
| `weight_decay` | float | L2 regularization | 0.0 |
**Dimension Mismatch**
- Confirm `input_size` and `output_size` match your data
- Ensure encoder/decoder hidden sizes are compatible

**Memory Issues**
- Reduce `batch_size`, `hidden_size`, or sequence length
- Use gradient accumulation or mixed precision

**Poor Predictions**
- Try a different `strategy`
- Use attention mechanisms
- Fine-tune hyperparameters (e.g. `hidden_size`, dropout)

**Training Instability**
- Clip gradients (`clip_grad_norm_`)
- Use learning rate schedulers (`ReduceLROnPlateau`); see the sketch below
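If you manage the training loop yourself rather than using the built-in Trainer, both remedies look roughly like this. This is a generic PyTorch sketch with a toy model, not foreblocks-specific API:

```python
import torch
import torch.nn as nn

# Toy recurrent model and synthetic batch, for illustration only
net = nn.LSTM(input_size=3, hidden_size=64, batch_first=True)
head = nn.Linear(64, 1)
params = list(net.parameters()) + list(head.parameters())

optimizer = torch.optim.Adam(params, lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=5)
loss_fn = nn.MSELoss()

x = torch.randn(32, 48, 3)   # (batch, window, features)
y = torch.randn(32, 48, 1)

for epoch in range(10):
    optimizer.zero_grad()
    out, _ = net(x)
    loss = loss_fn(head(out), y)
    loss.backward()
    # Clip gradients to curb exploding updates
    torch.nn.utils.clip_grad_norm_(params, max_norm=1.0)
    optimizer.step()
    # Lower the learning rate when the monitored loss plateaus
    scheduler.step(loss.item())
```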
- Always normalize input data
- Evaluate with appropriate multi-step metrics (e.g. MAPE, MAE)
- Try simple models (LSTM) before complex ones (Transformer)
- Use `self_tune=True` in preprocessing for best defaults
- Split validation data chronologically (see the sketch below)
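A chronological split simply reserves the most recent slice of the series for validation instead of shuffling. A minimal sketch on a raw array (the variable names are illustrative):

```python
import numpy as np

X = np.random.rand(1000, 3)                  # (time steps, features), stand-in for your data

split = int(len(X) * 0.8)                    # keep the last 20% for validation
X_train_raw, X_val_raw = X[:split], X[split:]

print(X_train_raw.shape, X_val_raw.shape)    # (800, 3) (200, 3)
```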
We welcome contributions! Visit the GitHub repo to:
- Report bugs
- Request features
- Improve documentation
- Submit PRs
This project is licensed under the MIT License (see LICENSE).