fitbenchmarking.core.fitting_benchmarking module

Main module of the tool. It holds the top-level function that calls lower-level functions to fit and benchmark a set of problems for a given fitting software.

fitbenchmarking.core.fitting_benchmarking.benchmark(options, data_dir, checkpointer, label='benchmark')

Gathers the user input and the list of problem paths, then runs benchmarking on them. The benchmarking structure is:

loop_over_benchmark_problems()
    loop_over_starting_values()
        loop_over_software()
            loop_over_minimizers()
                loop_over_jacobians()
                    loop_over_hessians()
Parameters:
  • options (fitbenchmarking.utils.options.Options) – dictionary containing the software used in fitting the problem, the list of minimizers, and the location of the JSON file containing minimizers

  • data_dir – full path of a directory that holds a group of problem definition files

  • checkpointer (Checkpoint) – The object to use to save results as they’re generated

  • label (str) – The name for the dataset in the checkpoint

Returns:

all results, problems where all fitting failed, and minimizers that were unselected due to algorithm_type

Return type:

list[fitbenchmarking.utils.fitbm_result.FittingResult], list[str], dict[str, list[str]]
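The nested loop structure above can be sketched in plain Python. This is an illustrative stand-in, not the real implementation: `sketch_benchmark` and the string labels are hypothetical, and the point is only that each loop level multiplies the number of fits performed.

```python
# Illustrative sketch of the nested benchmarking loops (hypothetical helper,
# not part of fitbenchmarking): each level multiplies the number of runs.
from itertools import product

def sketch_benchmark(problems, starting_values, softwares, minimizers,
                     jacobians, hessians):
    """Return one run label per innermost loop combination."""
    results = []
    for combo in product(problems, starting_values, softwares,
                         minimizers, jacobians, hessians):
        results.append("/".join(combo))
    return results

runs = sketch_benchmark(["p1", "p2"], ["sv0"], ["scipy"],
                        ["lm", "trf"], ["analytic"], ["default"])
# 2 problems x 1 start x 1 software x 2 minimizers x 1 jac x 1 hess = 4 runs
```

The real loops also short-circuit (e.g. minimizers excluded by algorithm_type) and accumulate failures, which the sketch omits.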

fitbenchmarking.core.fitting_benchmarking.loop_over_benchmark_problems(problem_group, options, checkpointer)

Loops over benchmark problems

Parameters:
  • problem_group (list) – locations of the benchmark problem files

  • options (fitbenchmarking.utils.options.Options) – FitBenchmarking options for current run

  • checkpointer (Checkpoint) – The object to use to save results as they’re generated

Returns:

all results, problems where all fitting failed, and minimizers that were unselected due to algorithm_type

Return type:

list[fitbenchmarking.utils.fitbm_result.FittingResult], list[str], dict[str, list[str]]

fitbenchmarking.core.fitting_benchmarking.loop_over_cost_function(problem, options, start_values_index, grabbed_output, checkpointer, emissions_tracker)

Run benchmarking for each cost function given in options.

Parameters:
  • problem – The problem to run benchmarking on

  • options (fitbenchmarking.utils.options.Options) – FitBenchmarking options for current run

  • start_values_index (int) – Integer that selects the starting values when datasets have multiple ones.

  • grabbed_output (fitbenchmarking.utils.output_grabber.OutputGrabber) – Object that removes third-party output from console

  • checkpointer (Checkpoint) – The object to use to save results as they’re generated

  • emissions_tracker – Object used to track emissions during fitting
Returns:

all results, and minimizers that were unselected due to algorithm_type

Return type:

list[fitbenchmarking.utils.fitbm_result.FittingResult], dict[str, list[str]]

fitbenchmarking.core.fitting_benchmarking.loop_over_fitting_software(cost_func, options, start_values_index, grabbed_output, checkpointer, emissions_tracker)

Loops over fitting software selected in the options

Parameters:
  • cost_func (CostFunction) – a cost_func object containing information used in fitting

  • options (fitbenchmarking.utils.options.Options) – FitBenchmarking options for current run

  • start_values_index (int) – Integer that selects the starting values when datasets have multiple ones.

  • grabbed_output (fitbenchmarking.utils.output_grabber.OutputGrabber) – Object that removes third-party output from console

  • checkpointer (Checkpoint) – The object to use to save results as they’re generated

  • emissions_tracker – Object used to track emissions during fitting

Returns:

all results, and minimizers that were unselected due to algorithm_type

Return type:

list[fitbenchmarking.utils.fitbm_result.FittingResult], dict[str, list[str]]

fitbenchmarking.core.fitting_benchmarking.loop_over_hessians(controller, options, grabbed_output, checkpointer, emissions_tracker)

Loops over Hessians set from the options file

Parameters:
  • controller – The fitting software controller used to run the fit

  • options (fitbenchmarking.utils.options.Options) – FitBenchmarking options for current run

  • grabbed_output (fitbenchmarking.utils.output_grabber.OutputGrabber) – Object that removes third-party output from console

  • checkpointer (Checkpoint) – The object to use to save results as they’re generated

  • emissions_tracker – Object used to track emissions during fitting
Returns:

a FittingResult for each run

Return type:

list[fitbenchmarking.utils.fitbm_result.FittingResult]

fitbenchmarking.core.fitting_benchmarking.loop_over_jacobians(controller, options, grabbed_output, checkpointer, emissions_tracker)

Loops over Jacobians set from the options file

Parameters:
  • controller – The fitting software controller used to run the fit

  • options (fitbenchmarking.utils.options.Options) – FitBenchmarking options for current run

  • grabbed_output (fitbenchmarking.utils.output_grabber.OutputGrabber) – Object that removes third-party output from console

  • checkpointer (Checkpoint) – The object to use to save results as they’re generated

  • emissions_tracker – Object used to track emissions during fitting
Returns:

a FittingResult for each run.

Return type:

list[fitbenchmarking.utils.fitbm_result.FittingResult]

fitbenchmarking.core.fitting_benchmarking.loop_over_minimizers(controller, minimizers, options, grabbed_output, checkpointer, emissions_tracker)

Loops over minimizers in fitting software

Parameters:
  • controller – The fitting software controller used to run the fit

  • minimizers (list) – list of minimizers to run for the current software

  • options (fitbenchmarking.utils.options.Options) – FitBenchmarking options for current run

  • grabbed_output (fitbenchmarking.utils.output_grabber.OutputGrabber) – Object that removes third-party output from console

  • checkpointer (Checkpoint) – The object to use to save results as they’re generated

  • emissions_tracker – Object used to track emissions during fitting
Returns:

all results, and minimizers that were unselected due to algorithm_type

Return type:

list[fitbenchmarking.utils.fitbm_result.FittingResult], list[str]
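The "minimizers that were unselected due to algorithm_type" part of the return can be illustrated with a small sketch. The tag mapping and helper below are hypothetical (not the real fitbenchmarking API); they only show the idea of partitioning minimizers by whether their algorithm tags overlap the allowed types.

```python
# Hypothetical sketch of partitioning minimizers by algorithm type;
# the tag mapping and helper name are illustrative, not fitbenchmarking's.
ALGORITHM_TYPES = {  # assumed mapping: minimizer -> set of algorithm tags
    "lm": {"ls", "deriv"},
    "nelder-mead": {"simplex", "deriv_free"},
    "bfgs": {"general", "deriv"},
}

def partition_minimizers(minimizers, allowed_types):
    """Split minimizers into (selected, unselected) by tag overlap."""
    selected, unselected = [], []
    for minimizer in minimizers:
        if ALGORITHM_TYPES.get(minimizer, set()) & set(allowed_types):
            selected.append(minimizer)
        else:
            unselected.append(minimizer)
    return selected, unselected

sel, unsel = partition_minimizers(["lm", "nelder-mead", "bfgs"], ["deriv"])
# sel -> ["lm", "bfgs"]; unsel -> ["nelder-mead"]
```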

fitbenchmarking.core.fitting_benchmarking.loop_over_starting_values(problem, options, grabbed_output, checkpointer, emissions_tracker)

Loops over starting values from the fitting problem.

Parameters:
  • problem – The problem to run benchmarking on

  • options (fitbenchmarking.utils.options.Options) – FitBenchmarking options for current run

  • grabbed_output (fitbenchmarking.utils.output_grabber.OutputGrabber) – Object that removes third-party output from console

  • checkpointer (Checkpoint) – The object to use to save results as they’re generated

  • emissions_tracker – Object used to track emissions during fitting
Returns:

all results, problems where all fitting failed, and minimizers that were unselected due to algorithm_type

Return type:

list[fitbenchmarking.utils.fitbm_result.FittingResult], list[str], dict[str, list[str]]
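The shape of this loop can be sketched as follows; the helper and the stand-in fit are hypothetical, and the real function also aggregates failed problems and unselected minimizers, which the sketch omits.

```python
# Sketch of looping over several starting-value sets for one problem
# (illustrative only; names are not fitbenchmarking's).
def loop_over_start_sets(start_sets, fit):
    """Run a fit once per starting-value set, keeping the set index."""
    results = []
    for index, values in enumerate(start_sets):
        results.append((index, fit(values)))
    return results

out = loop_over_start_sets(
    [{"a": 1.0}, {"a": 5.0}],       # two starting guesses for parameter a
    fit=lambda params: params["a"] * 2,  # stand-in for a real fit
)
# out -> [(0, 2.0), (1, 10.0)]
```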

fitbenchmarking.core.fitting_benchmarking.perform_fit(controller, options, grabbed_output, emissions_tracker)

Performs a fit using the provided controller and its data. The fit is run the number of times specified by the num_runs option.

Parameters:
  • controller – The fitting software controller used to run the fit

  • options (fitbenchmarking.utils.options.Options) – FitBenchmarking options for current run

  • grabbed_output (fitbenchmarking.utils.output_grabber.OutputGrabber) – Object that removes third-party output from console

  • emissions_tracker – Object used to track emissions during fitting
Returns:

The chi squared, runtimes and emissions of the fit.

Return type:

tuple(float, list[float], float)
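The repeated-run timing can be sketched in plain Python, mirroring the tuple(float, list[float], float) shape of the return (chi squared, one runtime per run, emissions). Everything here is a stand-in: the fit is a toy residual sum, and emissions is a placeholder because no tracker is wired in.

```python
# Sketch of timing a fit num_runs times, in the spirit of perform_fit.
# The fit, chi-squared data, and emissions value are stand-ins.
import time

def sketch_perform_fit(fit, num_runs):
    runtimes = []
    for _ in range(num_runs):
        start = time.perf_counter()
        chi_sq = fit()                      # the same fit, repeated for timing
        runtimes.append(time.perf_counter() - start)
    emissions = 0.0                         # placeholder: no tracker here
    return chi_sq, runtimes, emissions

chi_sq, runtimes, emissions = sketch_perform_fit(
    # toy chi-squared: residuals of y = 2x against two data points
    fit=lambda: sum((y - 2.0 * x) ** 2 for x, y in [(1, 2.1), (2, 3.9)]),
    num_runs=5,
)
```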