We work on a variety of algorithms for sparse and low-rank recovery.
One of our areas of interest is row-sparse Multiple Measurement Vector (MMV) recovery, where the problem is to solve Y = AX + η, with Y and X matrices and A a projection operator; moreover, X is row-sparse, i.e. only a few rows have nonzero values, the rest being all zeros. We have developed a fast solver for this based upon the Randomized Kaczmarz algorithm.
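For illustration, here is a minimal sketch of the classical Randomized Kaczmarz iteration applied to the MMV system Y = AX. This shows only the standard iteration (rows sampled with probability proportional to their squared norms, as in the Strohmer-Vershynin variant); the row-sparsity handling in our solver is not shown.

```python
import numpy as np

def randomized_kaczmarz_mmv(A, Y, iters=5000, seed=0):
    """Minimal randomized Kaczmarz sketch for the MMV system Y = A X.

    Each step projects the current iterate onto the hyperplane defined
    by one randomly chosen measurement row, sampled with probability
    proportional to its squared norm.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    X = np.zeros((n, Y.shape[1]))
    probs = (A ** 2).sum(axis=1)
    probs /= probs.sum()
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a = A[i]                       # selected measurement row
        r = Y[i] - a @ X               # residual of that row (a row vector)
        X += np.outer(a, r) / (a @ a)  # project onto the row's hyperplane
    return X

# tiny consistent example: recover a row-sparse X from Y = A X
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
X_true = np.zeros((10, 3))
X_true[[2, 7]] = rng.standard_normal((2, 3))
X_hat = randomized_kaczmarz_mmv(A, A @ X_true)
print(np.allclose(X_hat, X_true, atol=1e-6))
```

For a consistent overdetermined system, the iterates converge linearly to the least-squares solution; a dedicated row-sparsity step (e.g. thresholding) would be layered on top of this.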
For matrix recovery, the problem is to recover a low-rank matrix from its lower-dimensional projections, i.e. y = A vec(X) + η. The corresponding optimization problem is:
min_{X} ||X||_{*} + λ ||y - A vec(X)||_{2}^{2}
Here, the nuclear norm (the Schatten-p norm with p = 1) promotes rank deficiency of the solution. We have developed Split Bregman type techniques to solve this problem; they yield better results than state-of-the-art algorithms in terms of both speed and accuracy.
However, the main bottleneck of such algorithms is computational speed, since all such methods require computing the SVD of the matrix. Currently we are working on techniques to solve these problems without requiring the SVD. Our techniques should not be confused with matrix factorization based approaches: we still solve the same problem, but without computing SVDs.
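The bottleneck arises because the shrinkage (proximal) step for the nuclear norm is singular value thresholding, which requires a full SVD at every iteration. A minimal sketch of that standard step follows (the SVD-free techniques mentioned above are not shown here):

```python
import numpy as np

def svt(Z, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*.

    The SVD computed here, once per iteration, is the computational
    bottleneck of Split Bregman / proximal low-rank solvers.
    """
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # soft-threshold the spectrum
    return (U * s_shrunk) @ Vt

# shrinking with a large enough tau zeroes the small singular values,
# so the output has lower rank than the input
Z = np.arange(12.0).reshape(3, 4)   # rank-2 matrix
print(np.linalg.matrix_rank(svt(Z, 3.0)))  # prints 1
```

With tau = 0 the operator is the identity; increasing tau shrinks the spectrum and lowers the rank of the result.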
In some cases, the problem is to solve a system of equations where the solution is both sparse and low-rank. In those cases, the optimization problem that needs to be solved is:
min_{X} ||y - A vec(X)||_{2}^{2} + λ_{1} ||X||_{*} + λ_{2} ||X||_{2,1}
Here, the l_{2,1}-norm promotes a row-sparse solution.
For both of the above-mentioned problems, we have derived efficient algorithms based upon the Split Bregman approach.
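Split Bregman iterations for such objectives alternate closed-form shrinkage steps, and the step associated with the l_{2,1} term is row-wise soft thresholding. A minimal sketch of that standard operator (not specific to our derivation):

```python
import numpy as np

def prox_l21(X, tau):
    """Row-wise shrinkage: the proximal operator of tau * ||X||_{2,1}.

    Rows whose l2 norm falls below tau are zeroed entirely, which is
    how the l_{2,1} penalty promotes row-sparse solutions.
    """
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * X

X = np.array([[3.0, 4.0],    # row norm 5   -> shrunk toward zero
              [0.1, 0.1]])   # row norm ~0.14 -> zeroed
print(prox_l21(X, 2.0))      # first row becomes [1.8, 2.4], second [0, 0]
```

Each row is scaled by max(1 - tau/||row||, 0), the vector analogue of scalar soft thresholding.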
Currently, we are working on approximate yet efficient
algorithms for some bilinear problems:
min_{U,V} ||W(y - A(UV))||_{2}^{2} + λ_{1} ||W_{U}(U)||_{F}^{2} + λ_{2} ||W_{V}(V)||_{F}^{2}
A large class of problems in matrix recovery and sparse
recovery can be cast in this form.
When W=W_{U}=W_{V}=I, this is a classical matrix
factorization problem.
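In the W = W_{U} = W_{V} = I case (and taking A to be the identity purely for illustration), the problem is ridge-regularized matrix factorization, classically solved by alternating least squares: the objective is bilinear in (U, V) but quadratic in each factor alone, so each update is a closed-form ridge regression. A minimal sketch under those assumptions:

```python
import numpy as np

def als_factorize(Y, rank, lam=1e-3, iters=50):
    """Alternating least squares for
    min_{U,V} ||Y - U V||_F^2 + lam * (||U||_F^2 + ||V||_F^2).

    Each half-step fixes one factor and solves a ridge regression
    for the other in closed form.
    """
    rng = np.random.default_rng(0)
    m, n = Y.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((rank, n))
    reg = lam * np.eye(rank)
    for _ in range(iters):
        U = Y @ V.T @ np.linalg.inv(V @ V.T + reg)   # fix V, solve for U
        V = np.linalg.inv(U.T @ U + reg) @ U.T @ Y   # fix U, solve for V
    return U, V

# a rank-2 matrix is recovered up to the small ridge bias
Y = np.outer([1.0, 2, 3], [1.0, 0, 1]) + np.outer([0.0, 1, 1], [2.0, 1, 0])
U, V = als_factorize(Y, rank=2)
print(np.linalg.norm(Y - U @ V))  # small residual
```

Note that no SVD appears anywhere: the factorized parameterization fixes the rank explicitly instead of penalizing it.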
With a properly chosen W and W_{U} = W_{V} = I, this still remains a matrix completion problem, but it can be used in situations where the noise in the system is sparse, i.e. it can solve:
min_{U,V} ||y - A(UV)||_{1} + λ_{1} ||U||_{F}^{2} + λ_{2} ||V||_{F}^{2}
With W=W_{U}=I and a suitably chosen W_{V}, we
can solve a Blind Compressed Sensing problem:
min_{U,V} ||y - A(UV)||_{2}^{2} + λ_{1} ||U||_{F}^{2} + λ_{2} ||vec(V)||_{1}
Other variations are also possible.
Currently, we are working on stochastic descent methods to
efficiently solve these problems.
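As an illustration of the stochastic flavor (a generic sketch, not our actual method), consider entry-wise stochastic gradient descent for the weighted factorization when W is a 0/1 observation mask and A is the identity: each update touches a single observed entry, so neither an SVD nor a full gradient is ever formed.

```python
import numpy as np

def sgd_factorize(Y, mask, rank=1, lam=0.01, lr=0.05, epochs=500, seed=0):
    """Entry-wise SGD sketch for
    min_{U,V} sum over observed (i,j) of (Y_ij - u_i . v_j)^2 + ridge terms.

    One observed entry per step: cheap updates, no SVD, no full gradient.
    """
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    obs = np.argwhere(mask)
    for _ in range(epochs):
        rng.shuffle(obs)
        for i, j in obs:
            e = Y[i, j] - U[i] @ V[j]          # residual on one entry
            U[i] += lr * (e * V[j] - lam * U[i])
            V[j] += lr * (e * U[i] - lam * V[j])
    return U, V

Y = np.outer([1.0, 2, 3], [1.0, 1, 2])   # rank-1 ground truth
mask = np.ones_like(Y, dtype=bool)
mask[0, 2] = False                        # hold out one entry
U, V = sgd_factorize(Y, mask)
print(abs(Y[0, 2] - U[0] @ V[2]))         # held-out entry is predicted
```

Because the fitted factorization has the same rank as the ground truth, the observed entries pin down the factors and the held-out entry is recovered up to the small ridge bias.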
