ax.api
The Ax API
IMetric
- class ax.api.protocols.metric.IMetric(name: str)[source]
Bases: _APIMetric
Metrics automate the process of fetching data from external systems. They are used in conjunction with Runners in the Client.run_trials method to facilitate closed-loop experimentation.
- fetch(trial_index: int, trial_metadata: Mapping[str, Any]) tuple[int, float | tuple[float, float]] [source]
Given trial metadata (the mapping returned from IRunner.run_trial), fetches readings for the metric.
Readings are returned as a pair (progression, outcome), where progression is an integer representing the progression of the trial (e.g. number of epochs for a training job, timestamp for a time series, etc.), and outcome is either a direct reading or a (mean, sem) pair for the metric.
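To make the fetch contract concrete, here is a minimal sketch of what an implementation might look like. It is shown as a standalone class (a real implementation would subclass ax.api.protocols.metric.IMetric), and the "log_dir"/results.json layout is invented for illustration — it assumes the runner's metadata points at a directory where the job writes its readings.

```python
import json
from pathlib import Path
from typing import Any, Mapping, Tuple


class TrainingLossMetric:
    """Sketch of an IMetric-style fetcher (would subclass IMetric in practice).

    Assumes trial_metadata contains a "log_dir" key pointing at a directory
    where the training job writes a results.json file of the invented form
    {"epoch": 10, "loss_mean": 0.12, "loss_sem": 0.01}.
    """

    def __init__(self, name: str) -> None:
        self.name = name

    def fetch(
        self, trial_index: int, trial_metadata: Mapping[str, Any]
    ) -> Tuple[int, Tuple[float, float]]:
        results = json.loads(
            (Path(trial_metadata["log_dir"]) / "results.json").read_text()
        )
        # Return (progression, (mean, sem)): the epoch count as the
        # progression, and a (mean, sem) pair as the outcome.
        return results["epoch"], (results["loss_mean"], results["loss_sem"])
```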
IRunner
- class ax.api.protocols.runner.IRunner[source]
Bases: _APIRunner
Runners automate the process of running trials on external systems. They are used in conjunction with Metrics in the Client.run_trials method to facilitate closed-loop experimentation.
- poll_trial(trial_index: int, trial_metadata: Mapping[str, Any]) TrialStatus [source]
Given trial index and metadata, poll the status of the trial.
- run_trial(trial_index: int, parameterization: Mapping[str, int | float | str | bool]) dict[str, Any] [source]
Given an index and parameterization, run a trial and return a dictionary of any appropriate metadata. This metadata will be used to identify the trial when polling its status, stopping it, fetching data, etc. It may hold information such as the trial's unique identifier on the system it's running on, a directory where the trial is logging results, etc.
The metadata MUST be JSON-serializable (i.e. dict, list, str, int, float, bool, or None) so that Trials may be properly serialized in Ax.
- stop_trial(trial_index: int, trial_metadata: Mapping[str, Any]) dict[str, Any] [source]
Given trial index and metadata, stop the trial. Returns a dictionary of any appropriate metadata.
The metadata MUST be JSON-serializable (i.e. dict, list, str, int, float, bool, or None) so that Trials may be properly serialized in Ax.
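A minimal sketch of the three-method runner contract follows. It is standalone rather than a real IRunner subclass, the in-memory JOB_QUEUE stands in for an external execution system, and the status strings stand in for TrialStatus values — all invented for illustration.

```python
import uuid
from typing import Any, Dict, Mapping


# Stand-in for an external execution system; invented for illustration.
JOB_QUEUE: Dict[str, str] = {}  # job_id -> status string


class LocalQueueRunner:
    """Sketch of an IRunner-style deployer (would subclass IRunner and
    return a real TrialStatus from poll_trial in practice)."""

    def run_trial(
        self, trial_index: int, parameterization: Mapping[str, Any]
    ) -> Dict[str, Any]:
        job_id = uuid.uuid4().hex
        JOB_QUEUE[job_id] = "RUNNING"
        # Everything returned here must be JSON-serializable: it is handed
        # back verbatim as trial_metadata when polling, stopping, fetching.
        return {"job_id": job_id, "parameters": dict(parameterization)}

    def poll_trial(
        self, trial_index: int, trial_metadata: Mapping[str, Any]
    ) -> str:
        return JOB_QUEUE[trial_metadata["job_id"]]

    def stop_trial(
        self, trial_index: int, trial_metadata: Mapping[str, Any]
    ) -> Dict[str, Any]:
        JOB_QUEUE[trial_metadata["job_id"]] = "EARLY_STOPPED"
        return {"stopped": True}
```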
Client
- class ax.api.client.Client(storage_config: StorageConfig | None = None, random_seed: int | None = None)[source]
Bases: WithDBSettingsBase
- attach_baseline(parameters: Mapping[str, int | float | str | bool], arm_name: str | None = None) int [source]
Attaches a custom single-arm trial to an Experiment specifically for use as the baseline or status quo in evaluating relative outcome constraints and improvement over baseline objective value. The trial will be marked as RUNNING and must be completed manually by the user.
- Returns:
The index of the attached trial.
Saves to database on completion if storage_config is present.
- attach_data(trial_index: int, raw_data: Mapping[str, float | tuple[float, float]], progression: int | None = None) None [source]
Attach data without indicating the trial is complete. Missing metrics are allowed, and unexpected metrics will be added to the Experiment as tracking metrics.
Saves to database on completion if storage_config is present.
- attach_trial(parameters: Mapping[str, int | float | str | bool], arm_name: str | None = None) int [source]
Attach a single-arm trial to the Experiment with the provided parameters. The trial will be marked as RUNNING and must be completed manually by the user.
Saves to database on completion if storage_config is present.
- Returns:
The index of the attached trial.
- complete_trial(trial_index: int, raw_data: Mapping[str, float | tuple[float, float]] | None = None, progression: int | None = None) TrialStatus [source]
Indicate the trial is complete and optionally attach data. In non-timeseries settings users should prefer to use complete_trial with raw_data over attach_data. Ax will determine the trial's status automatically:
- If all metrics on the OptimizationConfig are present the trial will be marked as COMPLETED
- If any metrics on the OptimizationConfig are missing the trial will be marked as FAILED
Saves to database on completion if storage_config is present.
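The completion rule above can be sketched as a small predicate. This is a standalone illustration of the stated rule, not Ax's actual implementation; the status strings stand in for TrialStatus values.

```python
from typing import Mapping, Sequence, Tuple, Union


def resolve_trial_status(
    optimization_metrics: Sequence[str],
    raw_data: Mapping[str, Union[float, Tuple[float, float]]],
) -> str:
    """Sketch of the status rule complete_trial applies: COMPLETED if every
    metric on the OptimizationConfig has a reading, FAILED if any is missing.
    Extra (unexpected) metrics are fine -- they become tracking metrics."""
    missing = [m for m in optimization_metrics if m not in raw_data]
    return "FAILED" if missing else "COMPLETED"
```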
- compute_analyses(analyses: Sequence[Analysis] | None = None, display: bool = True) list[AnalysisCard] [source]
Compute AnalysisCards (data about the optimization for end-user consumption) using the Experiment and GenerationStrategy. If no analyses are provided, use a heuristic to determine which analyses to run. If some analyses fail, log the failure and continue to compute the rest.
Note that the Analysis class is NOT part of the API and its methods are subject to change incompatibly between minor versions. Users are encouraged to use the provided analyses or leave this argument as None to use the default analyses.
Saves cards to database on completion if storage_config is present.
- Parameters:
analyses – A list of Analysis classes to run. If None Ax will choose which analyses to run based on the state of the experiment.
display – Whether to display the AnalysisCards if executed in an interactive environment (e.g. Jupyter). Defaults to True. If not in an interactive environment this setting has no effect.
- Returns:
A list of AnalysisCards.
- configure_experiment(parameters: Sequence[RangeParameterConfig | ChoiceParameterConfig], parameter_constraints: Sequence[str] | None = None, name: str | None = None, description: str | None = None, experiment_type: str | None = None, owner: str | None = None) None [source]
Given an ExperimentConfig, construct the Ax Experiment object. Note that validation occurs at time of config instantiation, not at configure_experiment.
This method only constitutes defining the search space and misc. metadata like name, description, and owners.
Saves to database on completion if storage_config is present.
- configure_generation_strategy(method: Literal['balanced', 'fast', 'random_search'] = 'fast', initialization_budget: int | None = None, initialization_random_seed: int | None = None, initialize_with_center: bool = True, use_existing_trials_for_initialization: bool = True, min_observed_initialization_trials: int | None = None, allow_exceeding_initialization_budget: bool = False, torch_device: str | None = None) None [source]
Optional method to configure the way candidate parameterizations are generated during the optimization; if not called a default GenerationStrategy will be used.
Saves to database on completion if storage_config is present.
- configure_metrics(metrics: Sequence[IMetric]) None [source]
Attach Metrics with logic for automating the fetching of a given metric, either by replacing an existing metric's instance with the provided Metric from the metrics sequence input, or by adding the provided Metric to the Experiment as a tracking metric if that metric was not already present.
- configure_optimization(objective: str, outcome_constraints: Sequence[str] | None = None) None [source]
Configures the goals of the optimization by setting the OptimizationConfig. Metrics referenced here by their name will be moved from the Experiment's tracking_metrics if they were already present (i.e. they were attached via configure_metrics) or added as base Metrics.
.- Parameters:
objective – Objective is a string and allows us to express single, scalarized, and multi-objective goals. Ex: “loss”, “ne1 + 2 * ne2”, “-ne, qps”
outcome_constraints – Outcome constraints are also strings and allow us to express a desire to have a metric clear a threshold but not be further optimized. These constraints are expressed as inequalities. Ex: “qps >= 100”, “0.5 * ne1 + 0.5 * ne2 >= 0.95”. To indicate a relative constraint, multiply your bound by “baseline”. Ex: “qps >= 0.95 * baseline” will constrain such that the QPS is at least 95% of the baseline arm’s QPS. Note that scalarized outcome constraints cannot be relative.
Saves to database on completion if storage_config is present.
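The objective strings above follow a small grammar: comma-separated metrics form a multi-objective, "+"-joined weighted terms form a scalarized objective, and a leading "-" requests minimization. A sketch tokenizer for that format follows — it illustrates the string format described above, is not Ax's parser, and flattens both the comma and plus forms into one list of (metric, weight) pairs.

```python
import re
from typing import List, Tuple


def parse_objective(objective: str) -> List[Tuple[str, float]]:
    """Illustrative sketch (not Ax's parser) of the objective-string format.

    Splits on "," (multi-objective) and "+" (scalarization) alike, handles
    a leading "-" (minimize) and a "coefficient * metric" weighted term.
    """
    parsed = []
    for term in re.split(r"[+,]", objective):
        term = term.strip()
        weight = 1.0
        if term.startswith("-"):
            weight, term = -1.0, term[1:].strip()
        if "*" in term:
            coefficient, term = term.split("*")
            weight *= float(coefficient)
            term = term.strip()
        parsed.append((term, weight))
    return parsed
```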
- configure_runner(runner: IRunner) None [source]
Attaches a Runner to the Experiment, to be used for automating trial deployment when using run_trials.
Saves to database on completion if storage_config is present.
- get_best_parameterization(use_model_predictions: bool = True) tuple[Mapping[str, int | float | str | bool], Mapping[str, float | tuple[float, float]], int, str] [source]
Identifies the best parameterization tried in the experiment so far, also called the best in-sample arm.
If use_model_predictions is True, first attempts to do so with the model used in optimization and its corresponding predictions if available. If use_model_predictions is False or the attempt to use the model fails, falls back to the best raw objective based on the data fetched from the Experiment.
Parameterizations which were observed to violate outcome constraints are not eligible to be the best parameterization.
- Returns:
- The parameters predicted to have the best optimization value without violating any outcome constraints.
- The metric values for the best parameterization. Uses model prediction if use_model_predictions=True, otherwise returns observed data.
- The trial which most recently ran the best parameterization.
- The name of the best arm (each trial has a unique name associated with each parameterization).
- get_next_trials(max_trials: int, fixed_parameters: Mapping[str, int | float | str | bool] | None = None) dict[int, Mapping[str, int | float | str | bool]] [source]
Create up to max_trials trials using the GenerationStrategy (or as many as possible before reaching the maximum parallelism defined by the GenerationNode), attach them to the Experiment with status RUNNING, and return a mapping from trial index to its parameterization. If a partial parameterization is provided via fixed_parameters each parameterization will have those parameters set to the provided values.
Saves to database on completion if storage_config is present.
- Returns:
A mapping of trial index to parameterization.
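The net effect of fixed_parameters can be pictured as overlaying the pinned values on every returned parameterization. The sketch below illustrates only that resulting mapping (Ax actually fixes the values during generation, not after the fact); the parameter names are invented.

```python
from typing import Any, Dict, Mapping


def apply_fixed_parameters(
    generated: Dict[int, Dict[str, Any]],
    fixed_parameters: Mapping[str, Any],
) -> Dict[int, Dict[str, Any]]:
    """Sketch of the net effect of fixed_parameters on get_next_trials
    output: every trial's parameterization carries the pinned values."""
    return {
        index: {**params, **fixed_parameters}
        for index, params in generated.items()
    }
```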
- get_pareto_frontier(use_model_predictions: bool = True) list[tuple[Mapping[str, int | float | str | bool], Mapping[str, float | tuple[float, float]], int, str]] [source]
Identifies the parameterizations which are predicted to efficiently trade-off between all objectives in a multi-objective optimization, also called the in-sample Pareto frontier.
- Returns:
- The parameters predicted to have the best optimization value without violating any outcome constraints.
- The metric values for the best parameterization. Uses model prediction if use_model_predictions=True, otherwise returns observed data.
- The trial which most recently ran the best parameterization.
- The name of the best arm (each trial has a unique name associated with each parameterization).
- Return type:
A list of tuples containing the above elements.
- classmethod load_from_database(experiment_name: str, storage_config: StorageConfig | None = None) Self [source]
Restore a Client and its state from the database by the given name.
- Returns:
The restored Client.
- classmethod load_from_json_file(filepath: str = 'ax_client_snapshot.json', storage_config: StorageConfig | None = None) Self [source]
Restore a Client and its state from a JSON-serialized snapshot, residing in a .json file at the given path.
- Returns:
The restored Client.
- mark_trial_abandoned(trial_index: int) None [source]
Manually mark a trial as ABANDONED. ABANDONED trials are typically not able to be re-suggested by get_next_trials, though this is controlled by the GenerationStrategy.
Saves to database on completion if storage_config is present.
- mark_trial_early_stopped(trial_index: int) None [source]
Manually mark a trial as EARLY_STOPPED. This is used when the user has decided (with or without Ax’s recommendation) to stop the trial after some data has been attached but before the trial is completed. Note that if data has not been attached for the trial yet users should instead call mark_trial_abandoned. EARLY_STOPPED trials will not be re-suggested by get_next_trials.
Saves to database on completion if storage_config is present.
- mark_trial_failed(trial_index: int) None [source]
Manually mark a trial as FAILED. FAILED trials typically may be re-suggested by get_next_trials, though this is controlled by the GenerationStrategy.
Saves to database on completion if storage_config is present.
- predict(points: Sequence[Mapping[str, int | float | str | bool]]) list[Mapping[str, float | tuple[float, float]]] [source]
Use the current surrogate model to predict the outcome of the provided list of parameterizations.
- Returns:
A list of mappings from metric name to predicted mean and SEM
- run_trials(max_trials: int, parallelism: int = 1, tolerated_trial_failure_rate: float = 0.5, initial_seconds_between_polls: int = 1) None [source]
Run up to max_trials trials in a loop by creating an ephemeral Scheduler under the hood using the Experiment, GenerationStrategy, Metrics, and Runner attached to this Client along with the provided OrchestrationConfig.
Saves to database on completion if storage_config is present.
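run_trials wires the Runner and Metrics described earlier into a deploy-poll-fetch loop. The standalone sketch below shows how those pieces interlock, using in-memory fakes with invented behavior (jobs "finish" after two polls, the metric is a made-up quadratic) rather than Ax's actual Scheduler.

```python
from typing import Any, Dict, Mapping, Sequence, Tuple


class FakeRunner:
    """In-memory stand-in for an IRunner; jobs finish after two polls."""

    def __init__(self) -> None:
        self.polls: Dict[int, int] = {}

    def run_trial(
        self, trial_index: int, parameterization: Mapping[str, float]
    ) -> Dict[str, Any]:
        self.polls[trial_index] = 0
        return {"params": dict(parameterization)}

    def poll_trial(
        self, trial_index: int, trial_metadata: Mapping[str, Any]
    ) -> str:
        self.polls[trial_index] += 1
        return "COMPLETED" if self.polls[trial_index] >= 2 else "RUNNING"


class FakeMetric:
    """In-memory stand-in for an IMetric measuring an invented quadratic."""

    name = "loss"

    def fetch(
        self, trial_index: int, trial_metadata: Mapping[str, Any]
    ) -> Tuple[int, float]:
        x = trial_metadata["params"]["x"]
        return 1, (x - 3.0) ** 2  # (progression, outcome)


def run_closed_loop(
    candidates: Sequence[Mapping[str, float]], runner: FakeRunner, metric: FakeMetric
) -> Dict[int, Tuple[int, float]]:
    """Sketch of the orchestration run_trials performs: deploy each trial,
    poll until it reaches a terminal status, then fetch its data."""
    results = {}
    for index, params in enumerate(candidates):
        metadata = runner.run_trial(index, params)
        while runner.poll_trial(index, metadata) != "COMPLETED":
            pass  # a real loop would sleep initial_seconds_between_polls
        results[index] = metric.fetch(index, metadata)
    return results
```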
- save_to_json_file(filepath: str = 'ax_client_snapshot.json') None [source]
Save a JSON-serialized snapshot of this Client’s settings and state to a .json file at the given path.
- set_early_stopping_strategy(early_stopping_strategy: BaseEarlyStoppingStrategy) None [source]
This method is not part of the API and is provided (without guarantees of method signature stability) for the convenience of some developers, power users, and partners.
Overwrite the existing EarlyStoppingStrategy with the provided EarlyStoppingStrategy.
Saves to database on completion if storage_config is present.
- set_experiment(experiment: Experiment) None [source]
This method is not part of the API and is provided (without guarantees of method signature stability) for the convenience of some developers, power users, and partners.
Overwrite the existing Experiment with the provided Experiment.
Saves to database on completion if storage_config is present.
- set_generation_strategy(generation_strategy: GenerationStrategy) None [source]
This method is not part of the API and is provided (without guarantees of method signature stability) for the convenience of some developers, power users, and partners.
Overwrite the existing GenerationStrategy with the provided GenerationStrategy.
Saves to database on completion if storage_config is present.
- set_optimization_config(optimization_config: OptimizationConfig) None [source]
This method is not part of the API and is provided (without guarantees of method signature stability) for the convenience of some developers, power users, and partners.
Overwrite the existing OptimizationConfig with the provided OptimizationConfig.
Saves to database on completion if storage_config is present.
- should_stop_trial_early(trial_index: int) bool [source]
Check if the trial should be stopped early. If True and the user wishes to heed Ax’s recommendation the user should manually stop the trial and call mark_trial_early_stopped(trial_index). The EarlyStoppingStrategy may be selected automatically or set manually via set_early_stopping_strategy.
- Returns:
Whether the trial should be stopped early.
- summarize() DataFrame [source]
Special convenience method for producing the DataFrame produced by the Summary Analysis. This method is a convenient way to inspect the state of the Experiment, but because the shape of the resultant DataFrame can change based on the Experiment state both users and Ax developers should prefer other methods for extracting information from the experiment to consume downstream.
The DataFrame computed will contain one row per arm and the following columns (though empty columns are omitted):
- trial_index: The trial index of the arm
- arm_name: The name of the arm
- trial_status: The status of the trial (e.g. RUNNING, SUCCEEDED, FAILED)
- failure_reason: The reason for the failure, if applicable
- generation_node: The name of the GenerationNode that generated the arm
- **METADATA: Any metadata associated with the trial, as specified by the Experiment’s runner.run_metadata_report_keys field
- **METRIC_NAME: The observed mean of the metric specified, for each metric
- **PARAMETER_NAME: The parameter value for the arm, for each parameter
Configs
- class ax.api.configs.ChoiceParameterConfig(name: str, values: list[float] | list[int] | list[str] | list[bool], parameter_type: Literal['float', 'int', 'str', 'bool'], is_ordered: bool | None = None, dependent_parameters: Mapping[int | float | str | bool, Sequence[str]] | None = None)[source]
Bases: object
Allows specifying a discrete dimension of an experiment’s search space and internally validates the inputs. Choice parameters can be either ordinal or categorical; this is controlled via the is_ordered flag.
- class ax.api.configs.RangeParameterConfig(name: str, bounds: tuple[float, float], parameter_type: Literal['float', 'int'], step_size: float | None = None, scaling: Literal['linear', 'log'] | None = None)[source]
Bases: object
Allows specifying a continuous dimension of an experiment’s search space and internally validates the inputs.
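To make "internally validates the inputs" concrete, here is a standalone dataclass mirroring RangeParameterConfig's shape with the kind of checks that phrase suggests. The specific validation rules shown (ordered bounds, positive bounds under log scaling) are plausible assumptions for illustration; the real checks live inside ax.api.configs.

```python
from dataclasses import dataclass
from typing import Literal, Optional, Tuple


@dataclass
class RangeParameterSketch:
    """Sketch mirroring RangeParameterConfig's fields, with assumed checks."""

    name: str
    bounds: Tuple[float, float]
    parameter_type: Literal["float", "int"]
    scaling: Optional[Literal["linear", "log"]] = None

    def __post_init__(self) -> None:
        lower, upper = self.bounds
        # Bounds must describe a non-empty interval.
        if lower >= upper:
            raise ValueError(f"{self.name}: lower bound must be below upper bound")
        # Log scaling is only meaningful on a strictly positive domain.
        if self.scaling == "log" and lower <= 0:
            raise ValueError(f"{self.name}: log scaling requires positive bounds")
```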
Types
From Config
- ax.api.utils.instantiation.from_config.parameter_from_config(config: RangeParameterConfig | ChoiceParameterConfig) Parameter [source]
Create a RangeParameter, ChoiceParameter, or FixedParameter from a ParameterConfig.