Question

So I am writing a program that handles gradient descent. I'm using this method to solve equations of the form

Ax = b
where A is a random 10x10 matrix and b is a random 10x1 column vector

Here is my code:

import numpy as np
import math
import random
def steepestDistance(A,b,xO, e):
    xPrev = xO
    dPrev = -((A * xPrev) - b)
    magdPrev = np.linalg.norm(dPrev)
    danger =  np.asscalar(((magdPrev * magdPrev)/(np.dot(dPrev.T,A * dPrev))))
    xNext = xPrev + (danger * dPrev)
    step = 1
    while (np.linalg.norm((A * xNext) - b) >= e and np.linalg.norm((A * xNext) - b) < math.pow(10,4)):
        xPrev = xNext
        dPrev = -((A * xPrev) - b)
        magdPrev = np.linalg.norm(dPrev)
        danger = np.asscalar((math.pow(magdPrev,2))/(np.dot(dPrev.T,A * dPrev)))
        xNext = xPrev + (danger * dPrev)
        step = step + 1
    return xNext

##print(steepestDistance(np.matrix([[5,2],[2,1]]),np.matrix([[1],[1]]),np.matrix([[0.5],[0]]), math.pow(10,-5)))

def chooseRandMatrix():
    matrix = np.zeros(shape = (10,10))
    for i in range(10):
        for a in range(10):
            matrix[i][a] = random.randint(0,100)
    return matrix.T * matrix

def chooseRandColArray():
    arra = np.zeros(shape = (10,1))
    for i in range(10):
        arra[i][0] = random.randint(0,100)
    return arra
for i in range(4): 
  matrix = np.asmatrix(chooseRandMatrix())
  array = np.asmatrix(chooseRandColArray())  
print(steepestDistance(matrix, array, np.asmatrix(chooseRandColArray()),math.pow(10,-5)))

When I run the method steepestDistance on the random matrix and column, I keep getting an infinite loop. It works fine when simple 2x2 matrices are used for A, but it loops indefinitely for 10x10 matrices. The problem is in np.linalg.norm((A * xNext) - b); it keeps growing indefinitely. That's why I put an upper bound on it; I'm not supposed to have one in the algorithm, however. Can someone tell me what the problem is?


Solution

Solving a linear system Ax = b with gradient descent means minimizing the quadratic function

f(x) = 0.5*x^t*A*x - b^t*x. 

This only works if the matrix A is symmetric, A = A^t, since the derivative (gradient) of f is

f'(x)^t = 0.5*(A+A^t)*x - b,

and additionally A must be positive definite. If A has negative eigenvalues, the descent proceeds toward minus infinity; there is no minimum to be found.
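
You can test both conditions numerically before starting the descent. The following is a minimal diagnostic sketch (the function name isSymmetricPositiveDefinite is made up for illustration, and plain NumPy arrays are assumed instead of np.matrix):

import numpy as np

def isSymmetricPositiveDefinite(A, tol=1e-10):
    # Symmetric? Compare A with its transpose up to rounding error.
    A = np.asarray(A, dtype=float)
    if not np.allclose(A, A.T, atol=tol):
        return False
    # Positive definite? Eigenvalues of a symmetric matrix are real,
    # so eigvalsh applies; they must all be strictly positive.
    return bool(np.linalg.eigvalsh(A).min() > tol)

M = np.random.randint(0, 101, size=(10, 10)).astype(float)
print(isSymmetricPositiveDefinite(M.T @ M))  # almost surely True: M^t*M is symmetric PSD
print(isSymmetricPositiveDefinite(M))        # almost surely False: M itself is not symmetric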


One work-around is to replace b by A^t*b and A by A^t*A, that is, to minimize the function

f(x) = 0.5*||A*x-b||^2
     = 0.5*x^t*A^t*A*x - b^t*A*x + 0.5*b^t*b

with gradient

f'(x)^t = A^t*A*x - A^t*b

But for large matrices A this is not recommended, since the condition number of A^t*A is about the square of the condition number of A, and a larger condition number slows the convergence of the descent.
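
To make the work-around concrete, here is a minimal sketch of steepest descent on the normal equations (the function name steepestDescentNormal is made up, and plain NumPy arrays are assumed instead of np.matrix). The gradient is g = A^t*(A*x - b), and minimizing f(x + alpha*d) exactly along the direction d = -g gives the step alpha = ||g||^2 / ||A*g||^2:

import numpy as np

def steepestDescentNormal(A, b, x0, tol=1e-5, maxSteps=200000):
    # Minimize f(x) = 0.5*||A*x - b||^2 by steepest descent.
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1)
    x = np.asarray(x0, dtype=float).reshape(-1)
    for _ in range(maxSteps):
        g = A.T @ (A @ x - b)            # gradient of f at x
        if np.linalg.norm(g) < tol:      # stop when the gradient vanishes
            break
        alpha = (g @ g) / np.linalg.norm(A @ g)**2  # exact line search
        x = x - alpha * g
    return x

A = np.random.rand(10, 10)
b = np.random.rand(10)
x = steepestDescentNormal(A, b, np.zeros(10))
print(np.linalg.norm(A @ x - b))  # residual: small if the loop converged
print(np.linalg.cond(A.T @ A), np.linalg.cond(A)**2)  # roughly equal

Because cond(A^t*A) is roughly cond(A)^2, the loop above may hit its iteration cap even for a 10x10 system, which illustrates exactly the drawback mentioned in the previous paragraph.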

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow