Simulated Annealing in Octave


The simulated annealing algorithm (proposed by Metropolis et al. in 1953; message me for a reference) draws an analogy to the way a metal, as it cools, settles into a crystalline structure. As the metal cools, it settles into one of its lower-energy configurations. Along the way, though, the energy can temporarily increase as the atoms rearrange; these occasional uphill moves are what let the system escape shallow local minima and reach a lower minimum in the end. Simulated annealing pushes the same analogy into numerical optimization: downhill moves are always accepted, and uphill moves are accepted with a probability that shrinks as the "temperature" is lowered.
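To make the idea concrete, here is a toy sketch of the core Metropolis acceptance step in Octave. All of the names and tuning values (step size, cooling rate, iteration count) are illustrative choices of mine, not anything taken from samin:

```octave
% Toy simulated annealing loop: minimize f(x) = x(1)^2 + x(2)^2.
f = @(x) sum(x.^2);           % objective to minimize
x = [10; 10];                 % current point
T = 1.0;                      % current "temperature"
for k = 1:1000
  trial = x + 0.5 * randn(2, 1);          % random neighbor of x
  delta = f(trial) - f(x);                % change in objective
  % Metropolis rule: always accept downhill moves; accept uphill
  % moves with probability exp(-delta / T), which shrinks as T falls.
  if delta < 0 || rand() < exp(-delta / T)
    x = trial;
  end
  T = 0.999 * T;                          % cool slowly
end
```

As T falls, uphill moves become rare and the search settles into a minimum; samin automates the bookkeeping (bounds, cooling schedule, convergence tests) that this sketch hard-codes.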


In Octave, simulated annealing is implemented as samin (in the optim package). Typing "help samin" at the Octave prompt yields:

help samin
samin: simulated annealing minimization of a function. See samin_example.m

usage: [x, obj, convergence, details] = samin("f", {args}, {control})

* "f": function name (string)
* {args}: a cell array that holds all arguments of the function
* {control}: a cell array with 11 elements:
  * LB – vector of lower bounds
  * UB – vector of upper bounds
  * nt – integer: # of iterations between temperature reductions
  * ns – integer: # of iterations between bounds adjustments
  * rt – (0 < rt < 1): temperature reduction factor
  * maxevals – integer: limit on function evaluations
  * neps – integer: number of values the final result is compared to
  * functol – (> 0): the required tolerance level for the function value
  * paramtol – (> 0): the required tolerance level for parameters
  * verbosity – scalar: 0, 1, or 2
    * 0 = no screen output
    * 1 = only final results to screen
    * 2 = summary at every temperature change
  * minarg – integer: which of the function's arguments is minimized over?

* x: the minimizer
* obj: the value of f() at x
* convergence:
  * 0 if no convergence within maxevals function evaluations
  * 1 if normal convergence to a point interior to the parameter space
  * 2 if convergence to a point very near the bounds of the parameter space
    (suggest re-running with looser bounds)
* details: a p×3 matrix, where p is the number of times an improvement was found.
  The columns record information at the time each improvement was found:
  * first: cumulative number of function evaluations
  * second: temperature
  * third: function value

Example: see samin_example

Additional help for built-in functions and operators is
available in the on-line version of the manual. Use the command
`doc' to search the manual index.

Help and information about Octave is also available on the WWW
at http://www.octave.org and via the help@octave.org
mailing list.

So let's try this out on the example f(x,y) = x^2 + y^2, whose minimum is at (x,y) = (0,0). The objective is encoded in the file obj_fcn1.m, which looks like:
function out = obj_fcn1(x)
  out = sum(x.^2);   % f(x, y) = x^2 + y^2
endfunction

Let's say that our initial estimate is x_0 = [10, 10]'. Recall that samin's first argument is the string "obj_fcn1". The second argument is a cell array. This cell array, written {...}, contains all the arguments of obj_fcn1; in this case x_0 is the only input. The third argument is a cell array of the 11 control values: the lower and upper bounds, the iterations between temperature reductions and between bounds adjustments, the temperature reduction factor, the limit on function evaluations, the number of values the final estimate is compared against, the function and parameter tolerances, the level of screen output, and which of the function's arguments is minimized over. Here is my setup:
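A minimal setup along these lines might look as follows. The specific bounds and control values here are illustrative assumptions of mine; only the ordering of the 11 control elements follows the help text above:

```octave
x_0 = [10; 10];                 % initial estimate
LB  = [-100; -100];             % lower bounds
UB  = [ 100;  100];             % upper bounds
nt  = 20;                       % iterations between temperature reductions
ns  = 5;                        % iterations between bounds adjustments
rt  = 0.5;                      % temperature reduction factor
maxevals = 1e7;                 % limit on function evaluations
neps     = 5;                   % number of values the final result is compared to
functol  = 1e-7;                % function-value tolerance
paramtol = 1e-7;                % parameter tolerance
verbosity = 2;                  % summary at every temperature change
minarg    = 1;                  % minimize over the first function argument

control = {LB, UB, nt, ns, rt, maxevals, neps, functol, paramtol, verbosity, minarg};
[x, obj, convergence, details] = samin("obj_fcn1", {x_0}, control);
```

Note that the control cell must list its 11 elements in exactly the order given by the help text.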

The argument minarg is set to 1 since there is only one input to obj_fcn1.

Here is the final output from the run using the commands above:
samin: intermediate results before next temperature change

temperature 1.000000e-306
current best function value 0.000000
total evaluations so far 2280000
total moves since last temperature reduction 20000
downhill 4824
accepted uphill 4808
rejected uphill 10368
out of bounds trials 0
new minima this temperature 4

parameter search width
-0.000000 0.000000
-0.000000 0.000000

SAMIN results

==> Normal convergence <==

Convergence tolerances:
Function: 1.000000e-07
Parameters: 1.000000e-07

Objective function value at minimum: 0.000000

parameter search width
-0.000000 0.000000
-0.000000 0.000000


Simulated annealing is a stochastic search with good convergence properties. In cases where derivatives are unavailable, or where there are many local minima, SA is a good choice. It has known global convergence properties and may well be the answer for your problem.

3 thoughts on “Simulated Annealing in Octave”

  1. yeah, simulated annealing is quite helpful for my research. Now my project is more complex than the previous version; it needs a loop to make the optimization procedure as good as possible. So I use the Powell method to improve the reliability of the optimization procedure, and it sounds good. Hmm... I think the genetic algorithm is also very interesting. :-)

  2. The genetic algorithm (ga, I think, in Octave) is an interesting algorithm. It's very good in cases when you don't care about the cost of evaluating your objective function, when you have a high-dimensional parameter space, and when you don't know much about your function. Then the ga is fantastic. I'll do an article on it soon. -Jon

  3. Very helpful material indeed. There is also the de_min algorithm, which is very good too. I used samin for my PhD research. Fantastic algorithm.
