Nomenclature

  1. Feasible set:
    \[\mathbf{X} := \left\{ x \in \mathbf{R}^n ~:~ f_i(x) \le 0, \ \ i=1, \ldots, m \right\}.\]
  2. Solution:
    In an optimization problem, we are usually interested in computing the optimal value of the objective function, and often also a minimizer: a vector that achieves that value, if one exists.
  3. Feasibility problems:
    Sometimes an objective function is not provided; in that case, we are only interested in finding a feasible point, or determining that the problem is infeasible.

    By convention, we set \(f_0\) to be a constant in that case, to reflect the fact that we are indifferent to the choice of a point \(x\) as long as it is feasible.

  4. Optimal value:
    \[p^\ast := \min_x \; f_0(x) ~:~ f_i(x) \le 0, \ \ i=1, \ldots, m.\]

    Denoted \(p^\ast\).

  5. Optimal set:
    The set of feasible points for which the objective function achieves the optimal value:
    \(\mathbf{X}^{\rm opt} := \left\{ x \in \mathbf{R}^n ~:~ f_0(x) = p^\ast, \ \ f_i(x) \le 0, \ \ i=1,\ldots, m \right\}\).
    Equivalently,
    \(\mathbf{X}^{\rm opt} = \arg\min_{x \in \mathbf{X}} f_0(x)\).

    We take the convention that the optimal set is empty if the problem is not feasible.

    A point \(x\) is said to be optimal if it belongs to the optimal set.

    If the optimal value is only attained in the limit, then it is not in the optimal set. For example, minimizing \(f_0(x) = e^{-x}\) over \(x \in \mathbf{R}\) gives \(p^\ast = 0\), but no finite \(x\) achieves it, so the optimal set is empty.

  6. Attainment:
    If the optimal set is not empty, we say that the problem is attained.
  7. Suboptimality:
    The \(\epsilon\)-suboptimal set is defined as:
    \[\mathbf{X}_\epsilon := \left\{ x \in \mathbf{R}^n ~:~ f_i(x) \le 0, \ \ i=1, \ldots, m, \ \ f_0(x) \le p^\ast + \epsilon \right\}.\]

    \(\implies \ \mathbf{X}_0 = \mathbf{X}^{\rm opt}\). (These sets are illustrated in the numerical sketch following this list.)

  8. (Local and Global) Optimality:
    • A point \(z\) is locally optimal if there is a value \(R>0\) such that \(z\) is optimal for the following problem: \(\min_x \; f_0(x) ~:~ f_i(x) \le 0, \ \ i=1, \ldots, m, \ \ \|z-x\|_2 \le R\).

      i.e., a local minimizer \(z\) minimizes \(f_0\), but only over nearby points in the feasible set.

    • A point \(z\) is globally optimal if it achieves the optimal value of the original problem over the whole feasible set.
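
To make these definitions concrete, here is a minimal numerical sketch in Python (assuming NumPy is available). The problem instance, \(f_0(x) = (x-1)^2\) with the single constraint \(x \le 0\), is made up for illustration, and a grid search stands in for a real solver:

```python
import numpy as np

# Toy instance of the standard problem (made-up for illustration):
#   min_x f0(x) : f1(x) <= 0,  with f0(x) = (x - 1)^2 and f1(x) = x.
def f0(x):
    return (x - 1.0) ** 2

def f1(x):
    return x  # the single constraint reads x <= 0

# Approximate the feasible set X on a grid.
grid = np.linspace(-3.0, 3.0, 100001)
feasible = grid[f1(grid) <= 0]

# Optimal value p*: minimum of f0 over the feasible set (here p* = 1, at x = 0).
p_star = f0(feasible).min()

# epsilon-suboptimal set: feasible points with f0(x) <= p* + epsilon.
eps = 0.1
X_eps = feasible[f0(feasible) <= p_star + eps]

print(f"p* ~= {p_star:.4f}")                               # ~ 1.0
print(f"X_eps ~= [{X_eps.min():.3f}, {X_eps.max():.3f}]")  # ~ [-0.049, 0.000]
```

Here the problem is attained: the optimal set is the single point \(x = 0\), and letting \(\epsilon \to 0\) collapses \(\mathbf{X}_\epsilon\) onto it, consistent with \(\mathbf{X}_0 = \mathbf{X}^{\rm opt}\).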

Problem Classes

  1. Least-squares:
    \[\min_x \;\;\;\; \sum_{i=1}^m \left( \sum_{j=1}^n A_{ij} x_j - b_i \right)^2,\]
    where \(A_{ij}, \ b_i, \ 1 \le i \le m, \ 1 \le j \le n\), are given numbers, and \(x \in \mathbf{R}^n\) is the variable (a small solve is sketched after this list).
  2. Linear Programming:
    \[\min_x \; \sum_{j=1}^n c_j x_j ~:~ \sum_{j=1}^n A_{ij} x_j \le b_i , \;\; i=1, \ldots, m,\]
    where \(c_j, \ b_i\) and \(A_{ij}, \ 1 \le i \le m, \ 1 \le j \le n\), are given real numbers.

    This corresponds to the case where the functions \(f_i \ (i=0, \ldots, m)\) in the standard problem are all affine (that is, linear plus a constant term); a small LP solve is sketched after this list.

    Denoted \(LP\).

  3. Quadratic Programming:
    \[\min_x \;\;\;\; \sum_{i=1}^m \left(\sum_{j=1}^n C_{ij} x_j + d_i\right)^2 + \sum_{i=1}^n c_i x_i ~:~ \sum_{j=1}^n A_{ij} x_j \le b_i, \;\;\;\; i=1,\ldots,m.\]

    The objective includes a sum of squared linear functions, in addition to a linear term (a small QP solve is sketched after this list).

    QPs are popular in finance, where the linear term in the objective refers to the expected negative return on an investment, and the squared terms correspond to the risk (or variance of the return).

    QP was introduced by Markowitz in the context of portfolio optimization.

  4. Nonlinear optimization:
    A broad class that includes combinatorial optimization.

    One of the reasons non-linear problems are hard to solve is the issue of local minima.

  5. Convex optimization:
    A generalization of QP, where the objective and constraints involve “bowl-shaped”, or convex, functions.

    They are easy to solve because they do not suffer from the “curse” of local minima: for a convex problem, any local minimum is also a global one.

  6. Combinatorial optimization:
    In combinatorial optimization, some (or all) of the variables are Boolean (or integer-valued), reflecting discrete choices to be made.

    Combinatorial optimization problems are in general extremely hard to solve. Often, they can be approximately solved with linear or convex programming.

  7. Non-convex optimization problems (examples):
    • Boolean/integer optimization: some variables are constrained to be Boolean or integers.

      Convex optimization can sometimes be used to obtain good approximations.

    • Cardinality-constrained problems: we seek to bound the number of non-zero elements in a vector variable.

      Convex optimization can be used to obtain good approximations.

    • Non-linear programming: usually refers to non-convex problems with differentiable objective and constraint functions.

      Algorithms provide only local minima.

    Most (but not all) non-convex problems are hard!
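
The least-squares problem above is unconstrained and can be solved directly. Below is a minimal sketch (the data \(A\) and \(b\) are made up for illustration) using NumPy's np.linalg.lstsq, which minimizes exactly the objective \(\sum_{i=1}^m \left(\sum_{j=1}^n A_{ij} x_j - b_i\right)^2\):

```python
import numpy as np

# Made-up data: m = 4 observations, n = 2 unknowns (a simple line fit).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])

# lstsq minimizes sum_i (sum_j A_ij x_j - b_i)^2.
x, residual, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x)         # least-squares solution, here approximately [3.5, 1.4]
print(residual)  # sum of squared residuals (returned when A has full column rank)
```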
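
For the LP class, here is a sketch using SciPy's linprog (assuming SciPy is installed; the data are made up). Note that linprog solves \(\min\, c^\top x\) subject to \(A_{\rm ub} x \le b_{\rm ub}\) with default bounds \(x \ge 0\), so we pass explicit free bounds to match the pure-inequality form above:

```python
import numpy as np
from scipy.optimize import linprog

# Made-up LP: min x1 + 2*x2  s.t.  -x1 <= 0, -x2 <= 0, x1 + x2 <= 1.
c = np.array([1.0, 2.0])
A = np.array([[-1.0,  0.0],
              [ 0.0, -1.0],
              [ 1.0,  1.0]])
b = np.array([0.0, 0.0, 1.0])

# Free bounds so that only A x <= b constrains x, as in the form above.
res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
print(res.fun, res.x)  # optimal value 0.0, attained at x = (0, 0)
```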
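
For the QP class, here is a sketch using CVXPY (assuming it is installed; CVXPY is one of several modeling tools that can handle this class, not the notes' prescribed method). The data are random, with \(b\) constructed so that \(x = \mathbf{1}\) is strictly feasible:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n = 5, 3
C = rng.standard_normal((m, n))   # rows of the squared linear terms
d = rng.standard_normal(m)
c = rng.standard_normal(n)        # linear term in the objective
A = rng.standard_normal((m, n))
b = A @ np.ones(n) + 1.0          # guarantees x = 1 is strictly feasible

x = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(C @ x + d) + c @ x)
problem = cp.Problem(objective, [A @ x <= b])
problem.solve()
print(problem.value, x.value)
```

Since the squared terms make the objective convex quadratic, this is also an instance of the convex class in item 5.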