Installation

This package can be installed directly from PyPI using pip:

pip install ioh

Usage

The following walks through the typical use cases of IOHexperimenter. A Jupyter notebook version of this walkthrough is also available if you prefer to run it interactively.

Create a function object

The structure of the IOHexperimenter in Python closely mirrors the C++ version, with a few ease-of-use features added, such as easy access to any existing benchmark problem using the ‘get_problem’ function:

# Import the get_problem function
from ioh import get_problem

To check the usage and parameterization of this (and most other) functionality, we provide built-in docstrings, accessible as usual:

#View docstring of get_problem
?get_problem

Based on this, you can then create a problem:

#Create a function object, either by giving the function id from within the suite
f = get_problem(7, dim=5, iid=1, problem_type = 'BBOB')

#Or by giving the function name
f2 = get_problem("Sphere", dim=5, iid=1)

This problem contains a meta_data attribute, which consists of many standard properties, such as n_variables (the dimension), name, …

#Print some properties of the function
print(f.meta_data)

Additionally, the problem contains information on its box constraints:

#Access the box-constraints for this function
f.constraint.lb, f.constraint.ub

The problem also tracks the current state of the optimization, e.g. the number of evaluations done so far:

#Show the state of the optimization
print(f.state)

And of course, the function can be evaluated easily:

#Evaluate the function
f([0,0,0,0,0])

Running an algorithm

We can construct a simple random-search example which accepts an IOHprofiler problem as its argument.

#Create a basic random search algorithm
import ioh
import numpy as np

def random_search(problem: ioh.problem.Real, seed: int = 42, budget: int = None) -> ioh.RealSolution:
    np.random.seed(seed)
    
    if budget is None:
        budget = int(problem.meta_data.n_variables * 1e4)

    for _ in range(budget):
        x = np.random.uniform(problem.constraint.lb, problem.constraint.ub)
        
        # problem automatically tracks the current best search point
        f = problem(x)
        
    return problem.state.current_best
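
The best-so-far bookkeeping that problem.state.current_best provides can be sketched in plain numpy, independent of ioh (the sphere objective and the bounds here are purely illustrative):

```python
import numpy as np

def sphere(x):
    # Illustrative objective: sum of squares, minimal at the origin
    return float(np.sum(x ** 2))

rng = np.random.default_rng(42)
lb, ub = -5.0, 5.0          # illustrative box constraints
best_x, best_y = None, float("inf")

for _ in range(100):
    x = rng.uniform(lb, ub, size=5)
    y = sphere(x)
    # Keep the best point seen so far, mirroring problem.state.current_best
    if y < best_y:
        best_x, best_y = x, y
```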

To record data, we need to add a logger to the problem

#Import the ioh logger module
from ioh import logger

Within IOHexperimenter, several types of logger are available. Here, we will focus on the default logger (called Analyzer as of version 0.32, Default for version 0.31 and earlier). Note that the logging can be customized by adding new triggers. Additionally, starting in version 0.32, search points can be stored directly by using the store_positions parameter.

#Create default logger compatible with IOHanalyzer
l = logger.Analyzer(root="data", folder_name="run", algorithm_name="random_search", algorithm_info="test of IOHexperimenter in python")

This can then be attached to the problem

#Add the logger to the problem
f.attach_logger(l)

Now, we can run the algorithm. The logger will automatically store the relevant performance data.

random_search(f)

For versions of ioh prior to 0.31, we need to explicitly ensure that all data is written, so we should flush the logger after running our experiments. This is no longer required from version 0.32 onwards.

l.flush()

Tracking algorithm parameters

If we want to track adaptive parameters of the algorithm, we require an object in which the parameters of the algorithm are stored. In the example below, the random search algorithm is restructured into a class for this purpose. Alternatively, we could also create a separate object which holds the parameters.

class RandomSearch:
    def __init__(self, budget: int):
        self._budget = budget
        self.seed = np.random.get_state()[1][0]
        self._rng = np.random.default_rng(self.seed)

    def __call__(self, func: ioh.problem.Real):
        for i in range(self._budget):
            x = self._rng.uniform(func.constraint.lb, func.constraint.ub)
            f = func(x)
        #Set a new seed for future runs
        self.seed = np.random.get_state()[1][0]
        self._rng = np.random.default_rng(self.seed)
        return func.state.current_best

    @property
    def param_rate(self) -> int:
        return np.random.randint(100)

    
#Create an instance of this algorithm
o = RandomSearch(1000)

We can then identify three different levels at which to track parameters:

Tracking adaptive parameters

The first type of parameters are the most common: parameters which we want to track during the search procedure, e.g. an adaptive stepsize. To track this type of parameter, we can make use of the ‘watch’ function of the logger as follows:

l.watch(o, ['param_rate'])

Tracking run parameters

The second type of parameter is a per-run parameter. This can be something like the used random seed. To track this, we can use the following:

l.add_run_attributes(o, ['seed'])

Tracking experiment parameters

The final type of parameters to track is the most high-level. This can be for example static algorithm parameters or other information about the experiment, which can be added as follows:

l.add_experiment_attribute('budget', '1000')

NOTE

The methods for tracking parameters, e.g. watch, add_run_attributes and add_experiment_attribute, can only be called before f.attach_logger(l) is called. Otherwise, they will have no effect.


Using the experimenter module

In addition to creating each problem individually, we can make use of the built-in experimenter module, which can be imported as follows:

from ioh import Experiment
?Experiment

At its core, the Experiment object contains three parts:

  • An optimization algorithm (which takes a problem as input)
  • Information on the collection of problems to be executed
  • Information on the logging procedure

An Experiment combining these parts can be created as follows:

exp = Experiment(
    algorithm = o, #Set the optimization algorithm
    fids = [1,2,3], iids = [1,2,3,4,5], dims = [5,10], reps = 5, problem_type = 'BBOB', #Problem definitions
    njobs = 4, #Enable parallelization
    logged = True, folder_name = 'IOH_data', algorithm_name = 'Random_Search', store_positions = True, #Logging specifications
    experiment_attributes = {'budget': '1000'}, run_attributes = ['seed'], logged_attributes = ['param_rate'], #Attribute tracking
    merge_output = True, zip_output = True, remove_data = True #Only keep the data as a single zip-file
)

This can be run as follows:

exp.run()

Using custom functions

In addition to the interfaces to the built-in functions, IOHexperimenter provides an easy way to wrap any problem into the same ioh-problem structure for easy use with the logging and experiment modules. This can be done using the ‘wrap_real_problem’ and ‘wrap_integer_problem’ functions. An example is shown here:

from ioh import problem, OptimizationType

#Define an evaluation method
def f_custom(x):
    return np.sum(x)
#Call the wrapper
problem.wrap_real_problem(f_custom, "custom_name", n_variables=5, optimization_type=OptimizationType.Minimization)

#Call get_problem to instantiate a version of this problem
f = get_problem('custom_name', iid=0, dim=5)

Note that changing the iid of the problem is not yet supported, but changing the dimensionality does work, assuming the evaluate function can handle inputs of the specified size.

f = get_problem('custom_name', iid=0, dim=10)

Using the W-model functions

In addition to the PBO and BBOB functions, the W-model problem generators (one based on OneMax and one based on LeadingOnes) are also available.

?problem.WModelOneMax
f = problem.WModelLeadingOnes(iid=1, dim=100, dummy_select_rate=0.5, epistasis_block_size=1, neutrality_mu=0, ruggedness_gamma=0)