Following Wikipedia, a mathematical optimization problem can be represented as:
- Given: a function f: A -> R, from a set A to the real numbers R
- Sought: a value x0 such that f(x0) <= f(x) for all x in A (for a minimization problem).
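As a minimal sketch of this definition, the search can be done by brute force when A is finite. The set A and the function f below are illustrative assumptions, not part of the definition itself:

```python
# Minimal sketch: minimize f over a finite set A by brute force.
# Both f and A are hypothetical examples chosen for illustration.

def f(x):
    """Example objective: a quadratic with its minimum at x = 2."""
    return (x - 2) ** 2

A = [0, 1, 2, 3, 4]   # a finite problem space A

# x0 is the sought value: f(x0) <= f(x) for all x in A
x0 = min(A, key=f)
print(x0, f(x0))      # the minimizer and its objective value
```

For continuous or very large sets A, exhaustive search is of course not feasible, and that is where actual optimization algorithms come in.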
The function f takes one argument, x0; this is the decision variable. The problem space A therefore has one dimension. The dimension of the problem and the number of decision variables are the same concept: if f took two arguments, f(x0, x1), there would be two decision variables.
The dimension of the objective space is the number of values returned by the function f. In our case, f maps the solution set A to the real numbers R, so the objective space has dimension 1.
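To make the two notions of dimension concrete, here is a hypothetical function of two decision variables that still returns a single real value: the problem space has dimension 2, while the objective space has dimension 1.

```python
# Hypothetical example: two decision variables (problem dimension 2),
# one returned value (objective space dimension 1).

def f(x0, x1):
    """Objective with its minimum at (x0, x1) = (1, -3)."""
    return (x0 - 1) ** 2 + (x1 + 3) ** 2

value = f(1.0, -3.0)   # a single real number
print(value)
```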
We could define a multi-objective optimization problem where the function f returns a vector, or equivalently where we try to optimize multiple functions f_k at the same time. The problem would then be defined as:
- Given: a set of functions (f1, f2, ..., fk): A -> R^k, from a set A to the vector space R^k
- Sought: a value x0 such that no x in A yields a vector (f1(x), f2(x), ..., fk(x)) that dominates (f1(x0), f2(x0), ..., fk(x0)) (for a minimization problem); such an x0 is called Pareto optimal.
The problem dimension is still 1, but the objective space now has k dimensions. The objectives can be combined into a single objective using a weighted sum, or optimized directly using a multi-criteria dominance concept such as Pareto dominance.
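Both approaches can be sketched in a few lines. For minimization, a vector u is said to dominate v when u is no worse than v in every objective and strictly better in at least one; the helper names and example vectors below are illustrative assumptions:

```python
def dominates(u, v):
    """Pareto dominance for minimization: u dominates v if u is no worse
    than v in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def weighted_sum(u, weights):
    """Scalarize a k-dimensional objective vector into a single objective."""
    return sum(w * a for w, a in zip(weights, u))

u, v = (1.0, 2.0), (2.0, 2.0)
print(dominates(u, v))              # u is better in the first objective, equal in the second
print(weighted_sum(u, (0.5, 0.5)))  # a single scalar objective value
```

The weighted sum reduces the problem back to the single-objective case, at the cost of having to choose weights up front; dominance-based methods instead return a whole set of Pareto-optimal trade-offs.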