Research >> Algorithms:

We work on a variety of algorithms for sparse and low-rank recovery.

One of our areas of interest is row-sparse Multiple Measurement Vector (MMV) recovery, where the problem is to solve Y = AX + η; here Y and X are matrices, A is a projection operator, and X is row-sparse, i.e. only a few of its rows have non-zero values, the rest being all zeros.

We have developed a fast solver for this problem based upon the Randomized Kaczmarz algorithm.
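As a rough illustration, a plain randomized Kaczmarz sweep over the rows of A, interleaved with a simple row-hard-thresholding step that keeps only the k strongest rows of X, might look as follows (in Python/NumPy). The thresholding schedule and all parameter names here are illustrative placeholders, not a description of our actual solver.

import numpy as np

def rk_mmv(A, Y, k, n_iter=2000, seed=0):
    """Sketch: recover a row-sparse X with Y ≈ A X via randomized Kaczmarz updates."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    X = np.zeros((n, Y.shape[1]))
    row_norms2 = np.sum(A ** 2, axis=1)
    probs = row_norms2 / row_norms2.sum()           # sample rows of A with probability ∝ squared norm
    for t in range(n_iter):
        i = rng.choice(m, p=probs)
        resid = Y[i] - A[i] @ X                     # residual of the i-th block of equations
        X += np.outer(A[i], resid) / row_norms2[i]  # Kaczmarz projection onto that block
        if (t + 1) % 50 == 0:                       # periodically enforce row sparsity (heuristic)
            keep = np.argsort(np.linalg.norm(X, axis=1))[-k:]
            mask = np.zeros(n, dtype=bool)
            mask[keep] = True
            X[~mask] = 0.0
    return X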

For matrix recovery, the problem is to recover a low-rank matrix from its lower-dimensional projections, i.e. y = A vec(X) + η. The corresponding optimization problem is:

min_X ||X||_* + λ ||y - A vec(X)||_2^2

Here, the nuclear norm (the Schatten-1 norm) of the matrix promotes rank deficiency of the solution. We have developed Split Bregman type techniques to solve this problem; they yield better results than state-of-the-art algorithms in terms of both speed and accuracy.
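For reference, the key inner step of such Split Bregman type schemes is typically singular value thresholding (SVT), the proximal operator of the nuclear norm. The sketch below shows only this step, not our full algorithm.

import numpy as np

def svt(Z, tau):
    """Proximal map of tau * ||.||_* : soft-threshold the singular values of Z."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt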

However, the main bottleneck of such algorithms is computational speed, since all such methods require computing the SVD of the matrix. Currently we are working on techniques to solve these problems without requiring the SVD. Our techniques should not be confused with matrix factorization based approaches: we still solve the same problem, but without computing SVDs.

In some cases, the problem is to solve a system of equations where the solution is both row-sparse and low-rank. In those cases, the optimization problem that needs to be solved is:

min_X ||y - A vec(X)||_2^2 + λ1 ||X||_* + λ2 ||X||_2,1

Here the l2,1-norm promotes a row-sparse solution.

For both of the above-mentioned problems, we have derived efficient algorithms based upon the Split Bregman approach.
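In such splitting schemes, the l2,1 term is handled by a row-wise soft-thresholding (shrinkage) step, its proximal operator, applied alongside the SVT step shown earlier. Again, this is a sketch of one ingredient, not the complete algorithm.

import numpy as np

def prox_l21(Z, tau):
    """Proximal map of tau * ||.||_2,1 : shrink each row of Z towards zero."""
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return Z * scale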

Currently, we are working on approximate yet efficient algorithms for some bilinear problems of the form:

||W(y - A(UV))||_2^2 + λ1 ||W_U(U)||_F^2 + λ2 ||W_V(V)||_F^2

A large class of problems in matrix recovery and sparse recovery can be cast in this form.

When W = W_U = W_V = I, this is a classical matrix factorization problem.
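For this classical case, and under the additional simplifying assumption that A is the identity (so the full matrix Y is observed), an alternating least squares sketch looks as follows; each sub-problem is a ridge-regularized least squares solve, and the rank r and regularization values are placeholders.

import numpy as np

def als_factorize(Y, r, lam1=1e-2, lam2=1e-2, n_iter=50, seed=0):
    """Sketch: alternating ridge solves for Y ≈ U V with U (m x r) and V (r x n)."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((r, n))
    for _ in range(n_iter):
        # U-update: min_U ||Y - U V||_F^2 + lam1 ||U||_F^2
        U = Y @ V.T @ np.linalg.inv(V @ V.T + lam1 * np.eye(r))
        # V-update: min_V ||Y - U V||_F^2 + lam2 ||V||_F^2
        V = np.linalg.inv(U.T @ U + lam2 * np.eye(r)) @ U.T @ Y
    return U, V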

With a properly chosen W and W_U = W_V = I, this still remains a matrix completion problem, but can be used in situations where the noise in the system is sparse, i.e. it can solve:

||y - A(UV)||_1 + λ1 ||U||_F^2 + λ2 ||V||_F^2
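One standard way to read "properly chosen W" is through iteratively reweighted least squares (IRLS): setting the diagonal weights to roughly 1/sqrt(|residual|) makes the weighted squared data fit approximate the l1 data fit above. The snippet below shows only that reweighting rule, under this interpretation; it is not necessarily the construction we use.

import numpy as np

def irls_weights(residual, eps=1e-6):
    """Diagonal IRLS weights so that ||W r||_2^2 ≈ ||r||_1 for the current residual r."""
    return 1.0 / np.sqrt(np.abs(residual) + eps)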

With W = W_U = I and a suitably chosen W_V, we can solve a Blind Compressed Sensing problem:

||y - A(UV)||_2^2 + λ1 ||U||_F^2 + λ2 ||vec(V)||_1
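An illustrative alternating scheme for this form, again under the simplifying assumption that A is the identity (Y = UV observed directly), updates U by ridge least squares and V by a proximal-gradient (ISTA) step that soft-thresholds vec(V). Step sizes and iteration counts below are placeholders.

import numpy as np

def soft(Z, tau):
    """Element-wise soft thresholding, the proximal map of tau * ||vec(.)||_1."""
    return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)

def blind_cs(Y, r, lam1=1e-2, lam2=1e-1, n_outer=50, n_inner=10, seed=0):
    """Sketch: Y ≈ U V with a dense dictionary U and sparse coefficients V."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((r, n))
    for _ in range(n_outer):
        # U-update: ridge least squares
        U = Y @ V.T @ np.linalg.inv(V @ V.T + lam1 * np.eye(r))
        # V-update: ISTA steps on ||Y - U V||_F^2 + lam2 ||vec(V)||_1
        L = 2.0 * np.linalg.norm(U, 2) ** 2 + 1e-12   # Lipschitz constant of the smooth part
        for _ in range(n_inner):
            grad = 2.0 * U.T @ (U @ V - Y)
            V = soft(V - grad / L, lam2 / L)
    return U, V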

Other variations are also possible.

Currently, we are working on stochastic descent methods to efficiently solve these problems.
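As a toy example of what a stochastic update can look like for the bilinear objective (again taking A to be the identity so that single entries of Y can be sampled), one gradient step on an observed entry (i, j) is sketched below. The learning rate and sampling scheme are placeholders, since this line of work is still in progress.

import numpy as np

def sgd_step(U, V, Y, i, j, lam1, lam2, lr):
    """One stochastic gradient update on the (i, j)-th residual of Y ≈ U V."""
    err = Y[i, j] - U[i] @ V[:, j]
    u_old = U[i].copy()
    U[i] += lr * (err * V[:, j] - lam1 * U[i])        # step in the i-th row of U
    V[:, j] += lr * (err * u_old - lam2 * V[:, j])    # step in the j-th column of V
    return U, V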