Specifying generator options¶
We start with the generator defaults and modify them as needed for conservative exploration, which should help prevent constraint violations.
In [1]:
# set values if testing
import os
SMOKE_TEST = os.environ.get("SMOKE_TEST")
NUM_MC_SAMPLES = 1 if SMOKE_TEST else 128
NUM_RESTARTS = 1 if SMOKE_TEST else 20
# Ignore all warnings
import warnings
warnings.filterwarnings("ignore")
from copy import deepcopy
from xopt import Xopt, Evaluator
from xopt.generators.bayesian import BayesianExplorationGenerator
from xopt.resources.test_functions.tnk import evaluate_TNK, tnk_vocs
vocs = deepcopy(tnk_vocs)
vocs.objectives = {}
vocs.observables = ["y1"]
generator = BayesianExplorationGenerator(vocs=vocs)
generator.max_travel_distances = [0.25, 0.25]
generator.numerical_optimizer.n_restarts = NUM_RESTARTS
generator.n_monte_carlo_samples = NUM_MC_SAMPLES
evaluator = Evaluator(function=evaluate_TNK)
X = Xopt(generator=generator, evaluator=evaluator, vocs=vocs)
X
Out[1]:
Xopt
________________________________
Version: 0+untagged.1510.g94a4d6d.dirty
Data size: 0
Config as YAML:
dump_file: null
evaluator:
  function: xopt.resources.test_functions.tnk.evaluate_TNK
  function_kwargs:
    raise_probability: 0
    random_sleep: 0
    sleep: 0
  max_workers: 1
  vectorized: false
generator:
  computation_time: null
  fixed_features: null
  gp_constructor:
    covar_modules: {}
    custom_noise_prior: null
    mean_modules: {}
    name: standard
    trainable_mean_keys: []
    transform_inputs: true
    use_low_noise_prior: true
  log_transform_acquisition_function: false
  max_travel_distances:
  - 0.25
  - 0.25
  model: null
  n_candidates: 1
  n_interpolate_points: null
  n_monte_carlo_samples: 128
  name: bayesian_exploration
  numerical_optimizer:
    max_iter: 2000
    max_time: null
    n_restarts: 20
    name: LBFGS
  supports_batch_generation: true
  turbo_controller: null
  use_cuda: false
max_evaluations: null
serialize_inline: false
serialize_torch: false
strict: true
vocs:
  constants:
    a: dummy_constant
  constraints:
    c1:
    - GREATER_THAN
    - 0.0
    c2:
    - LESS_THAN
    - 0.5
  objectives: {}
  observables:
  - y1
  variables:
    x1:
    - 0.0
    - 3.14159
    x2:
    - 0.0
    - 3.14159
Run exploration¶
We start by evaluating two points that we know satisfy the constraints, then run a few exploration steps.
In [2]:
X.evaluate_data({"x1":[1.0, 0.75],"x2":[0.7, 0.95]})
Out[2]:
| | x1 | x2 | a | y1 | y2 | c1 | c2 | xopt_runtime | xopt_error |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 1.00 | 0.70 | dummy_constant | 1.00 | 0.70 | 0.584045 | 0.290 | 0.000016 | False |
| 1 | 0.75 | 0.95 | dummy_constant | 0.75 | 0.95 | 0.494833 | 0.265 | 0.000006 | False |
In [3]:
for i in range(2):
    print(f"step {i}")
    X.step()
step 0
step 1
In [4]:
# view the data
X.data
Out[4]:
| | x1 | x2 | a | y1 | y2 | c1 | c2 | xopt_runtime | xopt_error |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 1.000000 | 0.700000 | dummy_constant | 1.000000 | 0.700000 | 0.584045 | 0.290000 | 0.000016 | False |
| 1 | 0.750000 | 0.950000 | dummy_constant | 0.750000 | 0.950000 | 0.494833 | 0.265000 | 0.000006 | False |
| 2 | 1.535397 | 1.735397 | dummy_constant | 1.535397 | 1.735397 | 4.313110 | 2.598255 | 0.000021 | False |
| 3 | 2.320795 | 0.950000 | dummy_constant | 2.320795 | 0.950000 | 5.188811 | 3.517794 | 0.000014 | False |
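The effect of `max_travel_distances` is visible in the data above: consecutive samples never move more than a quarter of each variable's range. A minimal pure-Python sanity check, using the `x1` values copied from the table:

```python
# max_travel_distances = [0.25, 0.25] limits each step to 25% of each
# variable's range; both variables span [0, 3.14159].
var_range = 3.14159
max_step = 0.25 * var_range  # ~0.7854

# consecutive x1 samples from the data table above
x1_samples = [1.0, 0.75, 1.535397, 2.320795]
steps = [abs(b - a) for a, b in zip(x1_samples, x1_samples[1:])]

print(steps)
# every step stays within the travel limit (to rounding)
assert all(s <= max_step + 1e-5 for s in steps)
```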
In [5]:
# plot results
ax = X.data.plot("x1", "x2")
ax.set_aspect("equal")
Introspect models, acquisition function and feasibility prediction¶
During exploration, Gaussian process models are built for each observable and constraint. We demonstrate how to view them below.
In [6]:
fig, ax = X.generator.visualize_model(show_feasibility=True, n_grid=100)
Generator model hyperparameters¶
In [7]:
# print generator model hyperparameters
for name, val in X.generator.model.named_parameters():
    print(f"{name}:{val}")
X.generator.model.models[2].covar_module.base_kernel.lengthscale
X.generator.model.models[2].covar_module.base_kernel.lengthscale
models.0.likelihood.noise_covar.raw_noise:Parameter containing:
tensor([-22.1766], dtype=torch.float64, requires_grad=True)
models.0.mean_module.raw_constant:Parameter containing:
tensor(0.2642, dtype=torch.float64, requires_grad=True)
models.0.covar_module.raw_outputscale:Parameter containing:
tensor(1.9192, dtype=torch.float64, requires_grad=True)
models.0.covar_module.base_kernel.raw_lengthscale:Parameter containing:
tensor([[-0.8180, -0.8447]], dtype=torch.float64, requires_grad=True)
models.1.likelihood.noise_covar.raw_noise:Parameter containing:
tensor([-22.6121], dtype=torch.float64, requires_grad=True)
models.1.mean_module.raw_constant:Parameter containing:
tensor(0.2634, dtype=torch.float64, requires_grad=True)
models.1.covar_module.raw_outputscale:Parameter containing:
tensor(1.9071, dtype=torch.float64, requires_grad=True)
models.1.covar_module.base_kernel.raw_lengthscale:Parameter containing:
tensor([[-0.8057, -0.8525]], dtype=torch.float64, requires_grad=True)
models.2.likelihood.noise_covar.raw_noise:Parameter containing:
tensor([-21.1850], dtype=torch.float64, requires_grad=True)
models.2.mean_module.raw_constant:Parameter containing:
tensor(0.1798, dtype=torch.float64, requires_grad=True)
models.2.covar_module.raw_outputscale:Parameter containing:
tensor(2.4870, dtype=torch.float64, requires_grad=True)
models.2.covar_module.base_kernel.raw_lengthscale:Parameter containing:
tensor([[-1.1313, -0.7034]], dtype=torch.float64, requires_grad=True)
Out[7]:
tensor([[0.2796, 0.4021]], dtype=torch.float64, grad_fn=<SoftplusBackward0>)
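The `raw_lengthscale` values printed above are unconstrained parameters; GPyTorch maps them to positive lengthscales with a softplus transform, which is why the tensor in Out[7] differs from the raw values. A quick pure-Python check, using the raw values for `models.2` copied from the printout:

```python
import math

def softplus(x):
    # GPyTorch's default positivity transform: log(1 + exp(x))
    return math.log1p(math.exp(x))

raw = [-1.1313, -0.7034]  # models.2 raw_lengthscale from the printout above
actual = [softplus(r) for r in raw]
print(actual)  # ~[0.2796, 0.4021], matching the lengthscale tensor in Out[7]
```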
Examine the number of constraint violations¶
Using the convenience function provided by the vocs object, we can evaluate which samples violate either or both of our constraints.
In [8]:
X.vocs.feasibility_data(X.data)
Out[8]:
| | feasible_c1 | feasible_c2 | feasible |
|---|---|---|---|
| 0 | True | True | True |
| 1 | True | True | True |
| 2 | True | False | False |
| 3 | True | False | False |
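These flags can be reproduced directly from the constraint definitions in the VOCS: `c1` is feasible when greater than 0.0 and `c2` when less than 0.5. A sketch using the constraint values copied from the data table:

```python
# (c1, c2) values copied from the data table above
rows = [
    (0.584045, 0.290000),
    (0.494833, 0.265000),
    (4.313110, 2.598255),
    (5.188811, 3.517794),
]
for c1, c2 in rows:
    feasible_c1 = c1 > 0.0  # GREATER_THAN 0.0
    feasible_c2 = c2 < 0.5  # LESS_THAN 0.5
    print(feasible_c1, feasible_c2, feasible_c1 and feasible_c2)
```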
In [9]:
# generate next point
X.generator.generate(1)
Out[9]:
[{'x1': 1.5353975, 'x2': 0.16460249999999998}]
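The generated candidate also respects the travel limit relative to the most recent sample: each coordinate moves by at most 0.25 of the variable range. A quick check with values copied from the outputs above:

```python
last = {"x1": 2.320795, "x2": 0.95}  # most recent sample (row 3 of the data)
candidate = {"x1": 1.5353975, "x2": 0.16460249999999998}  # from Out[9]
max_step = 0.25 * 3.14159  # 25% of each variable's [0, 3.14159] range

for k in last:
    assert abs(candidate[k] - last[k]) <= max_step + 1e-6
print("candidate within travel limits")
```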