As a library focused on performance improvements, mesa-frames must maintain its speed advantages over time. To ensure this, we've implemented an automated benchmarking system that runs on every pull request targeting the main branch.
## How the Benchmark Workflow Works
The automated benchmark workflow runs on GitHub Actions and performs the following steps:

1. Sets up a Python environment with all necessary dependencies
2. Installs optional GPU dependencies (if available in the runner)
3. Runs a small subset of our benchmark examples:
- SugarScape model (with 50,000 agents)
- Boltzmann Wealth model (with 10,000 agents)
4. Generates timing results comparing mesa-frames to the original Mesa implementation
5. Produces a visualization of the benchmark results
6. Posts a comment on the PR with the benchmark results
7. Uploads full benchmark artifacts for detailed inspection
## Interpreting Benchmark Results
When reviewing a PR with benchmark results, look for:

1. **Successful execution**: The benchmarks should complete without errors
2. **Performance impact**: Check if the PR introduces any performance regressions
3. **Expected changes**: If the PR is aimed at improving performance, verify that the benchmarks show the expected improvements

The benchmark comment will include:

- Execution time for both mesa-frames and Mesa implementations
- The speedup factor (how many times faster mesa-frames is compared to Mesa; see the short example after this list)
- A visualization comparing the performance
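
To make the speedup factor concrete: it is simply the Mesa runtime divided by the mesa-frames runtime for the same model and agent count. A toy illustration (the timings below are made-up placeholders, not real benchmark output):

```python
# Made-up example timings; real values come from the benchmark run itself.
mesa_seconds = 12.0
mesa_frames_seconds = 1.5

speedup = mesa_seconds / mesa_frames_seconds
print(f"mesa-frames is {speedup:.1f}x faster than Mesa")  # mesa-frames is 8.0x faster than Mesa
```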
## Running Benchmarks Locally

To run the same benchmarks locally and compare your changes against the current main branch, run the benchmark scripts in the `examples` directory on both your branch and on main. Keep in mind that the full benchmarks take longer to run than the CI versions because they test with more agents.
## Adding New Benchmarks
When adding new models or features to mesa-frames, consider adding benchmarks to track their performance:

1. Create a benchmark script in the `examples` directory
2. Implement both mesa-frames and Mesa versions of the model
3. Use the `perfplot` library to measure and visualize performance (a sketch of such a script follows this list)
4. Update the GitHub Actions workflow to include your new benchmark (with a small dataset for CI)
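
To illustrate item 3, here is a minimal, self-contained sketch of a `perfplot`-based benchmark script. The two kernels are deliberately simple stand-ins (a Python loop versus a vectorized NumPy update) rather than real Mesa and mesa-frames models, and the file name `benchmark_plot.png` is just an example:

```python
"""Sketch of a perfplot benchmark script; not the actual code in `examples`."""

import numpy as np
import perfplot


def loop_update(wealth: np.ndarray) -> np.ndarray:
    """Toy stand-in for an agent-by-agent (Mesa-style) update."""
    out = wealth.copy()
    for i in range(len(out)):
        out[i] += 1
    return out


def vectorized_update(wealth: np.ndarray) -> np.ndarray:
    """Toy stand-in for a DataFrame-style (mesa-frames) update."""
    return wealth + 1


results = perfplot.bench(
    setup=lambda n: np.ones(n),            # n plays the role of the number of agents
    kernels=[loop_update, vectorized_update],
    labels=["mesa (loop)", "mesa-frames (vectorized)"],
    n_range=[10**k for k in range(2, 6)],  # 100 to 100,000 elements
)
results.save("benchmark_plot.png")         # or results.show() for an interactive window
```

In a real benchmark script, each kernel would instead instantiate the corresponding model implementation with `n` agents and run it for a fixed number of steps, so that `perfplot` times a full simulation run rather than a single array operation.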
## Tips for Performance Optimization
When optimizing code in mesa-frames:

1. **Always benchmark your changes**: Don't assume changes will improve performance without measuring
2. **Focus on real-world use cases**: Optimize for patterns that users are likely to encounter
3. **Balance readability and performance**: Code should remain maintainable even while being optimized
4. **Document performance characteristics**: Note any trade-offs or specific usage patterns that affect performance
5. **Test on different hardware**: If possible, verify improvements on both CPU and GPU environments

Remember that consistent, predictable performance is often more valuable than squeezing out every last bit of speed at the cost of complexity or stability.