👋 The TrustyAI Service is intended to be a hub for all kinds of Responsible AI workflows, such as explainability, drift detection, and Large Language Model (LLM) evaluation. Designed as a REST server wrapping a core Python library, it can run in a local environment, a Jupyter Notebook, or on Kubernetes.
Supported drift detection algorithms:
- Fourier Maximum Mean Discrepancy (FourierMMD)
- Jensen-Shannon
- Approximate Kolmogorov–Smirnov Test
- Kolmogorov–Smirnov Test (KS-Test)
- Meanshift
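As a sketch of what the KS-Test above measures, the two-sample statistic can be computed directly as the largest gap between two empirical CDFs. The data, function name, and shift size below are illustrative only and do not reflect the service's own API:

```python
import numpy as np

def ks_statistic(reference, production):
    """Two-sample KS statistic: max gap between the empirical CDFs."""
    data = np.concatenate([reference, production])
    cdf_ref = np.searchsorted(np.sort(reference), data, side="right") / len(reference)
    cdf_prod = np.searchsorted(np.sort(production), data, side="right") / len(production)
    return float(np.max(np.abs(cdf_ref - cdf_prod)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 1000)  # training-time feature values
shifted = rng.normal(0.5, 1.0, 1000)    # live feature values with a mean shift

print(ks_statistic(reference, reference))  # 0.0: identical samples, no drift
print(ks_statistic(reference, shifted))    # larger value indicates drift
```

In practice the statistic is compared against a significance threshold to decide whether the live distribution has drifted from the reference.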
Supported fairness metrics:
- Statistical Parity Difference
- Disparate Impact Ratio
- Average Odds Ratio (WIP)
- Average Predictive Value Difference (WIP)
- Individual Consistency (WIP)
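As a hedged illustration of the first two metrics above (not the service's actual API), both can be computed directly from binary predictions and a boolean group mask:

```python
import numpy as np

def statistical_parity_difference(y_pred, privileged):
    """P(pred=1 | unprivileged) - P(pred=1 | privileged); 0 is parity."""
    return y_pred[~privileged].mean() - y_pred[privileged].mean()

def disparate_impact_ratio(y_pred, privileged):
    """P(pred=1 | unprivileged) / P(pred=1 | privileged); 1 is parity."""
    return y_pred[~privileged].mean() / y_pred[privileged].mean()

# Toy data: privileged group receives positives at rate 0.75, unprivileged at 0.25.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
privileged = np.array([True, True, True, True, False, False, False, False])

print(statistical_parity_difference(y_pred, privileged))  # -0.5
print(disparate_impact_ratio(y_pred, privileged))         # ≈ 0.333
```

A common rule of thumb treats a disparate impact ratio below 0.8 as a potential fairness concern.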
uv pip install .
uv pip install ".[eval]"
podman build -t $IMAGE_NAME .
podman build -t $IMAGE_NAME --build-arg EXTRAS=eval .
uv pip install ".[protobuf]"
uv run uvicorn src.main:app --host 0.0.0.0 --port 8080
podman run -p 8080:8080 -t $IMAGE_NAME
To run all tests in the project:
python -m pytest
Or with more verbose output:
python -m pytest -v
To run tests with coverage reporting:
python -m pytest --cov=src
To process model inference data from ModelMesh models, install the optional protobuf support. Without it, only KServe models are supported.
Install the required dependencies for protobuf support:
uv pip install -e ".[protobuf]"
After installing dependencies, generate Python code from the protobuf definitions:
# From the project root
bash scripts/generate_protos.sh
Run the tests for the protobuf implementation:
# From the project root
python -m pytest tests/service/data/test_modelmesh_parser.py -v
When the service is running, visit http://localhost:8080/docs to see the OpenAPI documentation!