fitbenchmarking.core.fitting_benchmarking module

Main module of the tool; it holds the master function that calls lower-level functions to fit and benchmark a set of problems for a given fitting software.

fitbenchmarking.core.fitting_benchmarking.benchmark(options, data_dir)

Gather the user input and list of paths. Call benchmarking on these. The benchmarking structure is:

loop_over_benchmark_problems()
    loop_over_starting_values()
        loop_over_software()
            loop_over_minimizers()
                loop_over_jacobians()
                    loop_over_hessians()
Parameters
  • options (fitbenchmarking.utils.options.Options) – dictionary containing the software used in fitting the problem, the list of minimizers, and the location of the JSON file containing minimizers

  • data_dir – full path of a directory that holds a group of problem definition files

Returns

all results, problems where all fitting failed, and minimizers that were unselected due to algorithm_type

Return type

list[fitbenchmarking.utils.fitbm_result.FittingResult], list[str], dict[str, list[str]]
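A minimal sketch of calling benchmark directly is shown below. The default Options() constructor and the example data_dir path are assumptions for illustration; in practice both come from the user's options file and chosen problem set.

    from fitbenchmarking.core.fitting_benchmarking import benchmark
    from fitbenchmarking.utils.options import Options

    # Assumed: default options; normally these are built from the user's options file.
    options = Options()

    # Illustrative path: any directory holding a group of problem definition files.
    data_dir = "examples/benchmark_problems/NIST/low_difficulty"

    results, failed_problems, unselected_minimizers = benchmark(options, data_dir)
    print(f"{len(results)} results; {len(failed_problems)} problems failed entirely")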

fitbenchmarking.core.fitting_benchmarking.loop_over_benchmark_problems(problem_group, options)

Loops over benchmark problems

Parameters
  • problem_group (list) – paths to the problem definition files in the group

  • options (fitbenchmarking.utils.options.Options) – all the options set by the user

Returns

all results, problems where all fitting failed, and minimizers that were unselected due to algorithm_type

Return type

list[fitbenchmarking.utils.fitbm_result.FittingResult], list[str], dict[str, list[str]]

fitbenchmarking.core.fitting_benchmarking.loop_over_cost_function(problem, options, start_values_index, grabbed_output)

Run benchmarking for each cost function given in options.

Parameters
  • problem (FittingProblem) – the problem to run fitting on

  • options (fitbenchmarking.utils.options.Options) – all the options set by the user

  • start_values_index (int) – index of the set of starting values currently being benchmarked

  • grabbed_output (OutputGrabber) – object that grabs the output of the third-party software

Returns

all results, and minimizers that were unselected due to algorithm_type

Return type

list[fitbenchmarking.utils.fitbm_result.FittingResult], dict[str, list[str]]

fitbenchmarking.core.fitting_benchmarking.loop_over_fitting_software(cost_func, options, start_values_index, grabbed_output)

Loops over fitting software selected in the options

Parameters
  • cost_func (CostFunc) – the cost function to fit

  • options (fitbenchmarking.utils.options.Options) – all the options set by the user

  • start_values_index (int) – index of the set of starting values currently being benchmarked

  • grabbed_output (OutputGrabber) – object that grabs the output of the third-party software

Returns

all results, and minimizers that were unselected due to algorithm_type

Return type

list[fitbenchmarking.utils.fitbm_result.FittingResult], dict[str, list[str]]

fitbenchmarking.core.fitting_benchmarking.loop_over_hessians(controller, options, grabbed_output)

Loops over Hessians set from the options file

Parameters
  • controller (Controller) – the fitting controller for the current software and minimizer

  • options (fitbenchmarking.utils.options.Options) – all the options set by the user

  • grabbed_output (OutputGrabber) – object that grabs the output of the third-party software

Returns

a FittingResult for each run

Return type

list[fitbenchmarking.utils.fitbm_result.FittingResult]

fitbenchmarking.core.fitting_benchmarking.loop_over_jacobians(controller, options, grabbed_output)

Loops over Jacobians set from the options file

Parameters
  • controller (Controller) – the fitting controller for the current software and minimizer

  • options (fitbenchmarking.utils.options.Options) – all the options set by the user

  • grabbed_output (OutputGrabber) – object that grabs the output of the third-party software

Returns

a FittingResult for each run.

Return type

list[fitbenchmarking.utils.fitbm_result.FittingResult]

fitbenchmarking.core.fitting_benchmarking.loop_over_minimizers(controller, minimizers, options, grabbed_output)

Loops over minimizers in fitting software

Parameters
  • controller (Controller) – the fitting controller for the current software

  • minimizers (list) – list of minimizers to run for the current software

  • options (fitbenchmarking.utils.options.Options) – all the options set by the user

  • grabbed_output (OutputGrabber) – object that grabs the output of the third-party software

Returns

all results, and minimizers that were unselected due to algorithm_type

Return type

list[fitbenchmarking.utils.fitbm_result.FittingResult], list[str]
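To make the "unselected minimizers" part of the return value concrete, the schematic sketch below filters a hypothetical set of minimizers by algorithm type; the minimizer names, algorithm labels, and dictionaries are placeholders for illustration, not the module's real data structures.

    # Hypothetical mapping of minimizer name -> algorithm type for one software package.
    algorithm_types = {
        "lm": "ls",               # least-squares
        "simplex": "deriv_free",  # derivative-free
        "trust_region": "general",
    }

    requested = ["ls"]  # stand-in for the user's algorithm_type selection

    selected, unselected = [], []
    for minimizer, alg in algorithm_types.items():
        if alg in requested:
            selected.append(minimizer)
        else:
            unselected.append(minimizer)  # reported back so the user can see what was skipped

    print(selected)    # ['lm']
    print(unselected)  # ['simplex', 'trust_region']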

fitbenchmarking.core.fitting_benchmarking.loop_over_starting_values(problem, options, grabbed_output)

Loops over starting values from the fitting problem.

Parameters
  • problem (FittingProblem) – the problem to benchmark

  • options (fitbenchmarking.utils.options.Options) – all the options set by the user

  • grabbed_output (OutputGrabber) – object that grabs the output of the third-party software

Returns

all results, problems where all fitting failed, and minimizers that were unselected due to algorithm_type

Return type

list[fitbenchmarking.utils.fitbm_result.FittingResult], list[str], dict[str, list[str]]

fitbenchmarking.core.fitting_benchmarking.perform_fit(controller, options, grabbed_output)

Performs a fit using the provided controller and its data. The fit is run the number of times specified by the num_runs option.

Parameters
  • controller (Controller) – the fitting controller with the data and function to fit

  • options (fitbenchmarking.utils.options.Options) – all the options set by the user

  • grabbed_output (OutputGrabber) – object that grabs the output of the third-party software

Returns

The chi squared and runtime of the fit.

Return type

tuple(float, float)
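The repeat-and-time behaviour described above can be sketched in isolation as below; the trivial least-squares objective and the use of timeit are purely illustrative and not the module's actual implementation.

    import timeit

    import numpy as np

    def fit_once():
        # Stand-in for a single call to the controller's fit routine:
        # a trivial linear least-squares solve.
        x = np.linspace(0, 1, 50)
        design = np.vander(x, 3)
        np.linalg.lstsq(design, np.sin(x), rcond=None)

    num_runs = 5  # corresponds to the num_runs option

    # Run the same fit num_runs times and report the average runtime.
    total = timeit.timeit(fit_once, number=num_runs)
    print(f"average runtime over {num_runs} runs: {total / num_runs:.6f} s")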