Add documentation on how to run benchmarks locally #789

Open · wants to merge 3 commits into main
File renamed without changes.
49 changes: 49 additions & 0 deletions benchmarks/README.md
@@ -0,0 +1,49 @@
# Benchmarks

This document describes how to run the various performance benchmarks covering
serialization, validation, structs, garbage collection, and memory usage.

## Setup

The benchmarks' additional dependencies are included in the `bench` extra, so install them with:
```bash
pip install -e ".[dev, bench]"
```

If you want to run the benchmarks against pydantic v1, you'll have to explicitly
downgrade using this command:
```bash
pip install "pydantic<2"
```
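
To confirm which major version of pydantic is active before benchmarking, you can print it (both v1 and v2 expose `pydantic.VERSION`):
```bash
python -c "import pydantic; print(pydantic.VERSION)"
```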

## Running Benchmarks

```bash
# JSON Serialization & Validation
python -m benchmarks.bench_validation

# JSON/MessagePack serialization
python benchmarks/bench_encodings.py --protocol json
python benchmarks/bench_encodings.py --protocol msgpack

# JSON Serialization - Large Data
python benchmarks/bench_large_json.py

# Structs
python benchmarks/bench_structs.py

# Garbage Collection
python benchmarks/bench_gc.py

# Library size comparison
python benchmarks/bench_library_size.py
```
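
For a rough sense of what the serialization and validation benchmarks time, they exercise encode/decode roundtrips along these lines (a minimal sketch, not the actual benchmark harness):
```python
import msgspec

class Point(msgspec.Struct):
    x: int
    y: int

# Encode to JSON bytes, then decode back with schema validation.
data = msgspec.json.encode(Point(1, 2))
point = msgspec.json.decode(data, type=Point)
assert point == Point(1, 2)
```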

## Printing Versions of Benchmarked Libraries
```bash
python -m benchmarks.bench_validation --versions
python benchmarks/bench_encodings.py --protocol json --versions
python benchmarks/bench_encodings.py --protocol msgpack --versions
python benchmarks/bench_large_json.py --versions
python benchmarks/bench_structs.py --versions
```
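
The `--versions` flags report installed package metadata; a standalone sketch of one way to gather it (the helper below is illustrative, not the benchmark scripts' code):
```python
import importlib.metadata

# Illustrative helper: print the installed version of each library,
# or note that it is missing.
def report_versions(packages):
    for name in packages:
        try:
            print(f"{name}: {importlib.metadata.version(name)}")
        except importlib.metadata.PackageNotFoundError:
            print(f"{name}: not installed")

report_versions(["msgspec", "orjson", "ujson", "pydantic", "cattrs"])
```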
2 changes: 1 addition & 1 deletion benchmarks/bench_encodings.py
@@ -7,7 +7,7 @@
 import importlib.metadata
 from typing import Any, Literal, Callable

-from .generate_data import make_filesystem_data
+from generate_data import make_filesystem_data

 import msgspec

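The switch from a relative to a plain top-level import matches the script-style invocation documented in the README above: running the file directly puts `benchmarks/` on `sys.path`, so `generate_data` resolves, whereas a relative import would fail outside a package context. A quick check of the documented invocation:
```bash
python benchmarks/bench_encodings.py --protocol json --versions
```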
2 changes: 1 addition & 1 deletion benchmarks/bench_validation/__main__.py
@@ -1,7 +1,7 @@
 import argparse
 import json
 import tempfile
-from ..generate_data import make_filesystem_data
+from benchmarks.generate_data import make_filesystem_data
 import sys
 import subprocess

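This import becomes package-qualified and absolute, which resolves when the benchmark is invoked as a module from the repository root, as the README documents:
```bash
python -m benchmarks.bench_validation --versions
```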
20 changes: 19 additions & 1 deletion setup.py
@@ -59,13 +59,31 @@
     *yaml_deps,
     *toml_deps,
 ]
-dev_deps = ["pre-commit", "coverage", "mypy", "pyright", *doc_deps, *test_deps]
+bench_deps = [
+    "cattrs",
+    "pydantic",
+    "mashumaro",
+    "orjson",
+    "ujson",
+    "python-rapidjson",
+    "pysimdjson",
+    "ormsgpack",
+]
+dev_deps = [
+    "pre-commit",
+    "coverage",
+    "mypy",
+    "pyright",
+    *doc_deps,
+    *test_deps,
+]

 extras_require = {
     "yaml": yaml_deps,
     "toml": toml_deps,
     "doc": doc_deps,
     "test": test_deps,
+    "bench": bench_deps,
     "dev": dev_deps,
 }

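Note that the new `bench` extra is not folded into `dev`, which is why the README asks for both; a quick sketch of the two install paths (assuming an editable checkout):
```bash
# Development environment that can also run the benchmarks:
pip install -e ".[dev, bench]"

# Benchmark dependencies only:
pip install -e ".[bench]"
```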