Merged
Changes from all commits
52 commits
65486ae
⏺ I've completed the core migration to tsam 3.0.0. Here's a summary o…
FBumann Jan 12, 2026
156bc47
⏺ The tsam 3.0 migration is now complete with the correct API. All 79…
FBumann Jan 12, 2026
46f3418
⏺ The simplification refactoring is complete. Here's what was done:
FBumann Jan 12, 2026
cfb9926
I continued the work on simplifying flixopt's clustering architecture…
FBumann Jan 12, 2026
cf5279a
All the clustering notebooks and documentation have been updated fo…
FBumann Jan 12, 2026
addde0b
Fixes made:
FBumann Jan 12, 2026
65e872b
⏺ All 126 clustering tests pass. I've added 8 new tests in a new Test…
FBumann Jan 12, 2026
1547a36
Summary of Changes
FBumann Jan 12, 2026
5117967
Summary of changes:
FBumann Jan 12, 2026
ddd4d2d
rename to ClusteringResults
FBumann Jan 12, 2026
3365734
New xarray-like interface:
FBumann Jan 12, 2026
a55b9a1
Updated the following notebooks:
FBumann Jan 12, 2026
5056873
ClusteringResults class:
FBumann Jan 12, 2026
6fc34cd
Renamed:
FBumann Jan 12, 2026
72d6f0d
Expose SegmentConfig
FBumann Jan 12, 2026
42e37e1
The segmentation feature has been ported to the tsam 3.0 API. Key c…
FBumann Jan 12, 2026
c5409c8
Added Properties
FBumann Jan 12, 2026
ad6e5e7
Summary of Changes
FBumann Jan 12, 2026
860b15e
⏺ I've completed the implementation. Here's a summary of everything t…
FBumann Jan 13, 2026
b73a6a1
Add method to extract data used for clustering.
FBumann Jan 13, 2026
e6ee2dd
Summary of Refactoring
FBumann Jan 13, 2026
e880fad
Changes Made
FBumann Jan 13, 2026
fbb2b0f
Update Notebook
FBumann Jan 13, 2026
151e4b3
1. Clustering class now wraps AggregationResult objects directly
FBumann Jan 13, 2026
57f59ed
I've completed the refactoring to make the Clustering class derive …
FBumann Jan 13, 2026
b39e994
The issue was that _build_aggregation_data() was using n_timesteps_…
FBumann Jan 13, 2026
8b1daf6
❯ Remove some data wrappers.
FBumann Jan 13, 2026
bf5d7ff
Improve docstrings and types
FBumann Jan 13, 2026
bb5f7aa
Add notebook and preserve input data
FBumann Jan 13, 2026
556e90f
Implemented include_original_data parameter:
FBumann Jan 13, 2026
810c143
Changes made:
FBumann Jan 13, 2026
1696e47
Changes made:
FBumann Jan 13, 2026
5cf85ac
drop_constant_arrays to use std < atol instead of max == min
FBumann Jan 13, 2026
8332eaa
Temp fix (should be fixed in tsam)
FBumann Jan 14, 2026
9ba340c
Revert "Temp fix (should be fixed in tsam)"
FBumann Jan 14, 2026
94477b1
Merge origin/main into feature/tsam-v3+rework
FBumann Jan 14, 2026
13002a0
Updated tsam dependencies to use the PR branch of tsam containing the…
FBumann Jan 15, 2026
fddea30
All fast notebooks now pass. Here's a summary of the fixes:
FBumann Jan 15, 2026
982e75a
⏺ All fast notebooks now pass. Here's a summary of the fixes:
FBumann Jan 15, 2026
9d5d969
Fix notebook
FBumann Jan 15, 2026
946d374
Fix CI...
FBumann Jan 15, 2026
b483ad4
Revert "Fix CI..."
FBumann Jan 15, 2026
c847ef6
Fix CI...
FBumann Jan 15, 2026
872bbbd
Merge branch 'main' into feature/tsam-v3+rework
FBumann Jan 16, 2026
a5f0147
Merge branch 'dev' into feature/tsam-v3+rework
FBumann Jan 16, 2026
450739c
Fix: Correct expansion of segmented clustered systems (#573)
FBumann Jan 16, 2026
ebf2aab
Added @functools.cached_property to timestep_mapping in clustering/b…
FBumann Jan 16, 2026
79d0e5e
perf: 40x faster FlowSystem I/O + storage efficiency improvements (#578)
FBumann Jan 16, 2026
7a4280d
perf: Optimize clustering and I/O (4.4x faster segmented clustering) …
FBumann Jan 17, 2026
1287792
2. Lines 1245-1251 (new guard): Added explicit check after drop_con…
FBumann Jan 17, 2026
efd91f5
Fix/broadcasting (#580)
FBumann Jan 17, 2026
01a775d
Add some defensive validation
FBumann Jan 17, 2026
202 changes: 202 additions & 0 deletions benchmarks/benchmark_io_performance.py
@@ -0,0 +1,202 @@
"""Benchmark script for FlowSystem IO performance.

Tests to_dataset() and from_dataset() performance with large FlowSystems.
Run this to compare performance before/after optimizations.

Usage:
python benchmarks/benchmark_io_performance.py
"""

import time
from typing import NamedTuple

import numpy as np
import pandas as pd

import flixopt as fx


class BenchmarkResult(NamedTuple):
"""Results from a benchmark run."""

name: str
mean_ms: float
std_ms: float
iterations: int


def create_large_flow_system(
n_timesteps: int = 2190,
n_periods: int = 12,
n_components: int = 125,
) -> fx.FlowSystem:
"""Create a large FlowSystem for benchmarking.

Args:
n_timesteps: Number of timesteps (default 2190 = ~1 year at 4h resolution).
n_periods: Number of periods (default 12).
n_components: Number of sink/source pairs (default 125).

Returns:
Configured FlowSystem ready for optimization.
"""
timesteps = pd.date_range('2024-01-01', periods=n_timesteps, freq='4h')
periods = pd.Index([2028 + i * 2 for i in range(n_periods)], name='period')

fs = fx.FlowSystem(timesteps=timesteps, periods=periods)
fs.add_elements(fx.Effect('Cost', '€', is_objective=True))

n_buses = 10
buses = [fx.Bus(f'Bus_{i}') for i in range(n_buses)]
fs.add_elements(*buses)

# Create demand profile with daily pattern
base_demand = 100 + 50 * np.sin(2 * np.pi * np.arange(n_timesteps) / 24)

for i in range(n_components // 2):
bus = buses[i % n_buses]
# Add noise to create unique profiles
profile = base_demand + np.random.normal(0, 10, n_timesteps)
profile = np.clip(profile / profile.max(), 0.1, 1.0)

Comment on lines +53 to +61
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cd benchmarks && head -100 benchmark_io_performance.py | cat -n

Repository: flixOpt/flixopt

Length of output: 3556


🏁 Script executed:

cd benchmarks && sed -n '30,75p' benchmark_io_performance.py | cat -n

Repository: flixOpt/flixopt

Length of output: 1971


🏁 Script executed:

cd benchmarks && sed -n '1,70p' benchmark_io_performance.py | cat -n

Repository: flixOpt/flixopt

Length of output: 2553


Fix n_components semantics and daily-cycle period math

Two issues confirmed:

  1. The docstring defines n_components as "sink/source pairs," but the loop creates only n_components // 2 pairs—with the default value of 125, only 62 pairs are created. This introduces confusion and silently handles odd inputs incorrectly.
  2. With 4-hour timesteps, sin(2*pi*arange(n_timesteps)/24) produces a 4-day cycle, not a daily pattern (24 timesteps × 4h = 96h).

Either clarify the parameter definition or correct the loop count; adjust the sine divisor to match the actual timestep resolution.

🔧 Suggested fix
-    # Create demand profile with daily pattern
-    base_demand = 100 + 50 * np.sin(2 * np.pi * np.arange(n_timesteps) / 24)
+    # Create demand profile with daily pattern
+    hours_per_step = (timesteps[1] - timesteps[0]).total_seconds() / 3600
+    steps_per_day = int(24 / hours_per_step)
+    base_demand = 100 + 50 * np.sin(2 * np.pi * np.arange(n_timesteps) / steps_per_day)
 
-    for i in range(n_components // 2):
+    if n_components % 2 != 0:
+        raise ValueError('n_components must be even (sink/source pairs).')
+    n_pairs = n_components // 2
+    for i in range(n_pairs):
🤖 Prompt for AI Agents
In `@benchmarks/benchmark_io_performance.py` around lines 53 - 61, The loop that
builds profiles treats n_components as "sink/source pairs" incorrectly and the
daily cycle math uses 24 hours instead of timesteps-per-day; update the
generation to iterate over each component (change range(n_components // 2) to
range(n_components)) so the actual number of components matches the n_components
parameter (adjust any pairing logic elsewhere accordingly), and replace the sine
divisor with the number of timesteps per day (e.g. timesteps_per_day = int(24 /
timestep_hours) and use sin(2*pi*np.arange(n_timesteps)/timesteps_per_day)) so
base_demand in base_demand = 100 + 50 * np.sin(...) produces a true daily cycle;
locate these changes around base_demand, the for loop creating profiles, and any
use of timestep_hours or n_timesteps in this file.
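
A quick sanity check of the cycle arithmetic flagged above (a minimal sketch, independent of flixopt; variable names are illustrative):

import pandas as pd

timesteps = pd.date_range('2024-01-01', periods=2190, freq='4h')
hours_per_step = (timesteps[1] - timesteps[0]).total_seconds() / 3600  # 4.0

# As written, the sine repeats every 24 *timesteps*, i.e. 24 * 4h = 96h (a 4-day cycle).
print(24 * hours_per_step)  # 96.0

# Dividing by the number of timesteps per day instead gives a true daily (24h) cycle.
steps_per_day = int(24 / hours_per_step)  # 6
print(steps_per_day * hours_per_step)  # 24.0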

        fs.add_elements(
            fx.Sink(
                f'D_{i}',
                inputs=[fx.Flow(f'Q_{i}', bus=bus.label, size=100, fixed_relative_profile=profile)],
            )
        )
        fs.add_elements(
            fx.Source(
                f'S_{i}',
                outputs=[fx.Flow(f'P_{i}', bus=bus.label, size=500, effects_per_flow_hour={'Cost': 20 + i})],
            )
        )

    return fs


def benchmark_function(func, iterations: int = 5, warmup: int = 1) -> BenchmarkResult:
    """Benchmark a function with multiple iterations.

    Args:
        func: Function to benchmark (callable with no arguments).
        iterations: Number of timed iterations.
        warmup: Number of warmup iterations (not timed).

    Returns:
        BenchmarkResult with timing statistics.
    """
    # Warmup
    for _ in range(warmup):
        func()

    # Timed runs
    times = []
    for _ in range(iterations):
        start = time.perf_counter()
        func()
        elapsed = time.perf_counter() - start
        times.append(elapsed)

    return BenchmarkResult(
        name=func.__name__ if hasattr(func, '__name__') else str(func),
        mean_ms=np.mean(times) * 1000,
        std_ms=np.std(times) * 1000,
        iterations=iterations,
    )


def run_io_benchmarks(
    n_timesteps: int = 2190,
    n_periods: int = 12,
    n_components: int = 125,
    n_clusters: int = 8,
    iterations: int = 5,
) -> dict[str, BenchmarkResult]:
    """Run IO performance benchmarks.

    Args:
        n_timesteps: Number of timesteps for the FlowSystem.
        n_periods: Number of periods.
        n_components: Number of components (sink/source pairs).
        n_clusters: Number of clusters for aggregation.
        iterations: Number of benchmark iterations.

    Returns:
        Dictionary mapping benchmark names to results.
    """
    print('=' * 70)
    print('FlowSystem IO Performance Benchmark')
    print('=' * 70)
    print('\nConfiguration:')
    print(f' Timesteps: {n_timesteps}')
    print(f' Periods: {n_periods}')
    print(f' Components: {n_components}')
    print(f' Clusters: {n_clusters}')
    print(f' Iterations: {iterations}')

    # Create and prepare FlowSystem
    print('\n1. Creating FlowSystem...')
    fs = create_large_flow_system(n_timesteps, n_periods, n_components)
    print(f' Components: {len(fs.components)}')

    print('\n2. Clustering and solving...')
    fs_clustered = fs.transform.cluster(n_clusters=n_clusters, cluster_duration='1D')

    # Try Gurobi first, fall back to HiGHS if not available
    try:
        solver = fx.solvers.GurobiSolver()
        fs_clustered.optimize(solver)
    except Exception as e:
        if 'gurobi' in str(e).lower() or 'license' in str(e).lower():
            print(f' Gurobi not available ({e}), falling back to HiGHS...')
            solver = fx.solvers.HighsSolver()
            fs_clustered.optimize(solver)
        else:
            raise

    print('\n3. Expanding...')
    fs_expanded = fs_clustered.transform.expand()
    print(f' Expanded timesteps: {len(fs_expanded.timesteps)}')

    # Create dataset with solution
    print('\n4. Creating dataset...')
    ds = fs_expanded.to_dataset(include_solution=True)
    print(f' Variables: {len(ds.data_vars)}')
    print(f' Size: {ds.nbytes / 1e6:.1f} MB')

    results = {}

    # Benchmark to_dataset
    print('\n5. Benchmarking to_dataset()...')
    result = benchmark_function(lambda: fs_expanded.to_dataset(include_solution=True), iterations=iterations)
    results['to_dataset'] = result
    print(f' Mean: {result.mean_ms:.1f}ms (std: {result.std_ms:.1f}ms)')

    # Benchmark from_dataset
    print('\n6. Benchmarking from_dataset()...')
    result = benchmark_function(lambda: fx.FlowSystem.from_dataset(ds), iterations=iterations)
    results['from_dataset'] = result
    print(f' Mean: {result.mean_ms:.1f}ms (std: {result.std_ms:.1f}ms)')

    # Verify restoration
    print('\n7. Verification...')
    fs_restored = fx.FlowSystem.from_dataset(ds)
    print(f' Components restored: {len(fs_restored.components)}')
    print(f' Timesteps restored: {len(fs_restored.timesteps)}')
    print(f' Has solution: {fs_restored.solution is not None}')
    if fs_restored.solution is not None:
        print(f' Solution variables: {len(fs_restored.solution.data_vars)}')

    # Summary
    print('\n' + '=' * 70)
    print('Summary')
    print('=' * 70)
    for name, res in results.items():
        print(f' {name}: {res.mean_ms:.1f}ms (+/- {res.std_ms:.1f}ms)')

    return results


if __name__ == '__main__':
    run_io_benchmarks()
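
Besides the command-line usage in the module docstring, the benchmark can be driven from Python with a smaller system for a quick smoke run (a sketch using the run_io_benchmarks signature above; the parameter values are illustrative, and the import assumes the script's directory is on the Python path):

# Hypothetical quick run with reduced problem size and fewer iterations.
from benchmark_io_performance import run_io_benchmarks

results = run_io_benchmarks(n_timesteps=336, n_periods=2, n_components=20, n_clusters=4, iterations=2)
print({name: round(res.mean_ms, 1) for name, res in results.items()})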
10 changes: 9 additions & 1 deletion docs/notebooks/01-quickstart.ipynb
@@ -282,8 +282,16 @@
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"version": "3.11"
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
12 changes: 12 additions & 0 deletions docs/notebooks/02-heat-system.ipynb
@@ -380,6 +380,18 @@
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
12 changes: 12 additions & 0 deletions docs/notebooks/03-investment-optimization.ipynb
@@ -429,6 +429,18 @@
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
12 changes: 12 additions & 0 deletions docs/notebooks/04-operational-constraints.ipynb
@@ -472,6 +472,18 @@
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
10 changes: 9 additions & 1 deletion docs/notebooks/05-multi-carrier-system.ipynb
@@ -541,8 +541,16 @@
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"version": "3.11"
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
15 changes: 14 additions & 1 deletion docs/notebooks/06a-time-varying-parameters.ipynb
@@ -308,7 +308,20 @@
]
}
],
"metadata": {},
"metadata": {
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
10 changes: 9 additions & 1 deletion docs/notebooks/06b-piecewise-conversion.ipynb
@@ -205,8 +205,16 @@
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"version": "3.12.7"
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
10 changes: 9 additions & 1 deletion docs/notebooks/06c-piecewise-effects.ipynb
@@ -312,8 +312,16 @@
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"version": "3.12.7"
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
12 changes: 12 additions & 0 deletions docs/notebooks/08a-aggregation.ipynb
@@ -388,6 +388,18 @@
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,