# Benchmarking setup
In this folder, I run some benchmarks on everyone's code.
## How to benchmark
1. Make sure you have the appropriate software installed. You can do so by
activating the Nix shell defined in `shell.nix`, or you can check everyone's
installation requirements in the
[software requirements](#software-requirements) section.
2. Run benchmarks by executing the respective scripts. For example, if you
wish to benchmark Bram's code, you may run `bram.sh` in your terminal. All data
will be collected in the `data/` folder.
3. Once you have collected all the data you wanted to collect, you can open the
Jupyter Notebook by typing `jupyter notebook Benchmarks.ipynb` in the terminal.
4. In the Jupyter Notebook, run all cells to generate the graphs you want.
Graphs are also saved to the `img/` folder.
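Putting the steps together, a typical session might look like this (using Bram's benchmark as the example; any of the other scripts works the same way):

```shell
# 1. enter the Nix shell so the benchmarking tools are available
nix-shell

# 2. run one of the benchmark scripts; results land in data/
./bram.sh

# 3+4. open the notebook and run all cells to produce the graphs in img/
jupyter notebook Benchmarks.ipynb
```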
## Software requirements
The measurements are made with
[hyperfine](https://github.com/sharkdp/hyperfine). The visualization is built
with [Jupyter Notebooks](https://jupyter.org/); installing the dependencies
listed in `requirements.txt` pulls in all necessary Python packages.
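If you want to inspect the collected data outside the notebook, hyperfine's JSON reports can be loaded directly. A minimal sketch, assuming the scripts export results via hyperfine's `--export-json` flag (the `results`/`command`/`mean` field names come from hyperfine's JSON schema; the sample data below is made up):

```python
import json

def mean_runtimes(report: dict) -> dict:
    """Map each benchmarked command to its mean runtime in seconds."""
    return {r["command"]: r["mean"] for r in report["results"]}

# Made-up sample in the shape hyperfine writes with --export-json.
sample = json.loads(
    '{"results": [{"command": "./day01", "mean": 0.042, "stddev": 0.003}]}'
)
runtimes = mean_runtimes(sample)  # {'./day01': 0.042}
```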
- Bob uses [Python](https://www.python.org/).
- Bram uses [Rust](https://rust-lang.org/) at version 1.86.0.
- Brechtje uses [Python](https://www.python.org/).
- Sander uses [Haskell](https://www.haskell.org/) (GHC 9.6.7, installed via
[GHCup](https://www.haskell.org/ghcup/)). Haskell code is typically loaded
into a REPL from which functions are evaluated, so some liberties are taken
here: the script is reformatted in the `sander_hs/` folder and compiled to a
binary in order to approximate performance.
- Vicky uses [Jupyter Notebook](https://jupyter.org/). A notebook is difficult
to benchmark, and the exported `*.py` files do not run correctly because
function definitions and top-level script code end up in the wrong order. The
exported script is reformatted in the `vicky_py/` folder and run as an
ordinary Python script in order to approximate performance.