The Aim package logs your training runs, provides a beautiful UI to compare them, and offers an API to query them programmatically.
Why use Aim?
Modern ML development revolves around collecting and analyzing AI metadata (training metrics, images, distributions, etc.) to explore different aspects of model performance.
There is a need both to explore and compare the metadata manually and to automate its processing for different infrastructure needs.
Aim helps you track AI metadata and:
Explore it manually through the most advanced open-source experiment comparison web UI.
Query it programmatically in your favorite notebook or through scripts for automation.
Use Aim to seamlessly log your ML metadata in your training environment and explore it through the UI and code. Aim is free, open-source, and self-hosted.
What can you do with Aim?
Log metrics and params
Use the Aim SDK to log as many metrics and params as you need for your training and evaluation runs. Aim users track thousands of training runs, sometimes with hundreds of metrics per run and many steps each.
Query metadata on Web UI
Aim provides a powerful pythonic query language to filter through metadata. It's like a Python if statement over everything you have tracked. You can use it on all explorer screens.
The Runs Explorer helps you holistically view all your runs, each metric's last tracked value, and the tracked hyperparameters.
The Metrics Explorer lets you compare hundreds of metrics within a few clicks, saving lots of time compared to other open-source experiment tracking tools.
The Images Explorer lets you track intermediate images, then search and compare them.
The Params Explorer provides a parallel-coordinates view of metrics and params, which is very helpful during hyperparameter search.
Query metadata programmatically
Use the same pythonic if statement to query the data programmatically through the Aim SDK.
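The "pythonic if statement" idea can be illustrated with a standalone sketch. This is plain Python over invented run records, not the Aim SDK itself; the field names (hparams, lr, metric) are chosen to resemble the expressions you would type into an Aim query box, such as `metric.name == "loss" and run.hparams.lr < 0.005`:

```python
# Standalone illustration of a pythonic query over tracked runs.
# The run records and field names are invented for illustration only.
from types import SimpleNamespace

runs = [
    SimpleNamespace(hparams=SimpleNamespace(lr=1e-3, batch_size=32), metric="loss"),
    SimpleNamespace(hparams=SimpleNamespace(lr=1e-2, batch_size=64), metric="loss"),
    SimpleNamespace(hparams=SimpleNamespace(lr=1e-3, batch_size=32), metric="accuracy"),
]

# The query is just a Python boolean expression evaluated per run:
selected = [
    run for run in runs
    if run.metric == "loss" and run.hparams.lr < 5e-3
]
print(len(selected))  # 1
```

The same expression filters runs on the UI and in code, which is what makes the query language uniform across both.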
How does Aim work?
Aim is a Python package with three main components:
A RocksDB-based embedded storage layer where the metadata is stored locally
A simple Python interface for tracking AI metadata
A self-hosted web interface to deeply explore the tracked metadata
Integrated with your favorite tools
Comparisons to familiar tools
Training run comparison
Order of magnitude faster training run comparison with Aim
Tracked params are first-class citizens in Aim. You can search, group, and aggregate via params to deeply explore all the tracked data (metrics, params, images) on the UI.
With TensorBoard, users are forced to record parameters in the training run name to be able to search and compare. This makes comparison super tedious and causes usability issues on the UI when there are many experiments and params. TensorBoard doesn't have features to group or aggregate metrics.
Aim is built to handle thousands of training runs across dozens of experiments - both on the backend and on the UI.
TensorBoard becomes really slow and hard to use when a few hundred training runs are queried or compared.
Beloved TensorBoard visualizations to be added to Aim:
Distributions / gradients visualizations.
Neural network visualization.
MLflow is an end-to-end ML lifecycle tool, while Aim is focused on training tracking. The main differences between Aim and MLflow are UI scalability and run comparison features.
Aim treats tracked parameters as first-class citizens. Users can query runs, metrics, images and filter using the params.
MLflow does have search by tracked config, but no grouping, aggregation, subplotting by hyperparams, or other comparison features.
The Aim UI can smoothly handle several thousand metrics at the same time, each with thousands of steps. It may get shaky when you explore thousands of metrics with tens of thousands of steps each. But we are constantly optimizing!
The MLflow UI becomes slow to use when there are a few hundred runs.
Weights and Biases
Hosted vs self-hosted
Weights and Biases is a hosted closed-source experiment tracker.
Aim is self-hosted, free, and open-source.
Remote self-hosted Aim is coming soon…
If you have questions please: