Question

I am new to optimization. I am trying to solve a linear least-squares problem using the fmin_slsqp function in scipy.optimize.

The objective function is the squared Frobenius norm ||q0^T * P - q1^T||^2, where q0 and q1 are n×1 vectors and P is an n×n matrix. This is basically a Markov process, with the q vectors giving the probability of being in each state and P being the matrix of transition probabilities.

The objective function would be minimized with respect to P, where the constraints are (written out in full below):

1) all elements in P must be non-negative

2) all the rows in P must sum to 1
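
To be explicit, treating q0 and q1 as column vectors of length n, the problem I want to solve is:

    minimize over P:   || q0^T P - q1^T ||^2
    subject to:        P[i, j] >= 0               for all i, j
                       sum over j of P[i, j] = 1  for each row i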

I have defined this objective function, which I am not sure is correct:

import numpy as np

def func(P, q):
    # squared norm of q0^T * P - q1^T
    return (np.linalg.norm(np.dot(np.transpose(q[0, :]), P) - np.transpose(q[1, :])))**2

The second argument of fmin_slsqp asks for a 1-D ndarray x0 as the initial guess for the independent variable. But here, since my independent variable is P, I would need a 2-D array for my initial guess. I am not sure whether I am framing the problem correctly or whether I'll have to use another function. Thanks.


Solution

So there's an issue in that fmin_slsqp only accepts a 1-D array as the independent variable. One hackish solution is to pass P flattened to 1-D and reshape it back to n×n inside the function:

import numpy as np
from scipy.optimize import fmin_slsqp

# some fake data
d = 3 # dimensionality of the problem
P0 = np.arange(d*d, dtype=float).reshape(d, d)  # float dtype so the in-place division below works
P0 /= P0.sum(1, keepdims=True) # so that each row sums to 1
q = np.random.rand(2, d)       # assuming this is the structure of your q
q /= q.sum(1, keepdims=True)

# the function to minimize
def func(P, q):
    n = q.shape[-1] # or n = int(np.sqrt(P.size))
    P = P.reshape(n, n)
    return np.linalg.norm(np.dot(q[0], P) - q[1])**2  # no changes here, just simplified syntax

def row_sum(P_flat, q):
    """ row sums - 1 are zero """
    n = int(np.sqrt(P_flat.size))  # recover n from the flattened length
    return P_flat.reshape(n, n).sum(1) - 1.

def non_neg(P_flat, q):
    """ all elements >= 0 """
    return P_flat

P_opt = fmin_slsqp(func, P0.ravel(), args=(q,), f_eqcons=row_sum, f_ieqcons=non_neg).reshape(d, d)

assert np.allclose(P_opt.sum(1), 1)
assert np.all(P_opt >= 0)
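
For what it's worth, on a reasonably recent SciPy you can express the same problem through scipy.optimize.minimize with method='SLSQP', where the non-negativity becomes simple bounds and the row sums become an equality constraint. This is just a sketch reusing the P0, q and d defined above:

from scipy.optimize import minimize

def func_flat(P_flat, q):
    """ same objective, with P passed in flattened """
    n = q.shape[-1]
    return np.linalg.norm(np.dot(q[0], P_flat.reshape(n, n)) - q[1])**2

res = minimize(func_flat, P0.ravel(), args=(q,), method='SLSQP',
               bounds=[(0, None)] * (d * d),    # elementwise P >= 0
               constraints={'type': 'eq',
                            'fun': lambda P_flat: P_flat.reshape(d, d).sum(1) - 1.})
P_opt2 = res.x.reshape(d, d)

res.x is the flattened minimizer, so it gets the same reshape back to d×d at the end.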
Licensed under: CC-BY-SA with attribution