How Can $ {L}_{1} $ Norm Minimization with Linear Equality Constraints (Basis Pursuit / Sparse Representation) Be Formulated as Linear Programming?
Conversion of Basis Pursuit to Linear Programming
The Basis Pursuit problem is given by:
$$ \begin{align*} \arg \min_{x} \: & \: {\left\| x \right\|}_{1} \\ \text{subject to} \: & \: A x = b \end{align*} $$
Method A
The term $ {\left\| x \right\|}_{1} $ can be written in element-wise form:
$$ {\left\| x \right\|}_{1} = \sum_{i = 1}^{n} \left| {x}_{i} \right| $$
Then, introducing auxiliary variables $ {t}_{i} $ with $ \left| {x}_{i} \right| \leq {t}_{i} $, one could write:
$$ \begin{align*} \arg \min_{x, t} \: & \: \boldsymbol{1}^{T} t \\ \text{subject to} \: & \: A x = b \\ & \: \left| {x}_{i} \right| \leq {t}_{i} \; \forall i \end{align*} $$
Since $ \left| {x}_{i} \right| \leq {t}_{i} \iff {x}_{i} \leq {t}_{i}, \, {x}_{i} \geq -{t}_{i} $ then:
$$ \begin{align*} \arg \min_{x, t} \: & \: \boldsymbol{1}^{T} t \\ \text{subject to} \: & \: A x = b \\ & \: {x}_{i} \leq {t}_{i} \; \forall i \\ & \: {x}_{i} \geq -{t}_{i} \; \forall i \end{align*} $$
Which can be further refined into matrix form:
$$ \begin{align*} \arg \min_{x, t} \: & \: \boldsymbol{1}^{T} t \\ \text{subject to} \: & \: A x = b \\ & \: I x - I t \preceq \boldsymbol{0} \\ & \: -I x - I t \preceq \boldsymbol{0} \end{align*} $$
Which is a Linear Programming problem.
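As a quick sketch, Method A maps directly onto SciPy's linprog (Python is used here purely for illustration; the function name and the toy instance are my own, not part of the original post):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit_lp_a(A, b):
    """Method A: min 1^T t  s.t.  A x = b,  x - t <= 0,  -x - t <= 0,
    with the stacked decision vector z = [x; t]."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])    # objective touches only t
    A_eq = np.hstack([A, np.zeros((m, n))])          # [A, 0] [x; t] = b
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])             # encodes |x_i| <= t_i
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * n + [(0, None)] * n    # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b, bounds=bounds)
    return res.x[:n]

# Toy underdetermined system with a sparse solution (assumed data)
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))
x_true = np.zeros(8)
x_true[2] = 1.5
b = A @ x_true
x_hat = basis_pursuit_lp_a(A, b)
```

Since $ {x}_{\text{true}} $ is feasible, the recovered solution satisfies $ {\left\| \hat{x} \right\|}_{1} \leq {\left\| {x}_{\text{true}} \right\|}_{1} $.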
Method B
Define:
$$ x = u - v, \; {u}_{i} = \max \left\{ {x}_{i}, 0 \right\}, \; {v}_{i} = \max \left\{ -{x}_{i}, 0 \right\} $$
Then, since at any optimum $ {u}_{i} {v}_{i} = 0 $ (otherwise subtracting $ \min \left\{ {u}_{i}, {v}_{i} \right\} $ from both would preserve $ u - v $ while reducing the objective), one has $ {\left\| x \right\|}_{1} = \sum_{i} {u}_{i} + {v}_{i} $ and the problem becomes:
$$ \begin{align*} \arg \min_{u, v} \: & \: \sum_{i = 1}^{n} {u}_{i} + {v}_{i} \\ \text{subject to} \: & \: A \left( u - v \right) = b \\ & \: u \succeq \boldsymbol{0} \\ & \: v \succeq \boldsymbol{0} \end{align*} $$
Which is also a Linear Programming problem.
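Method B can be sketched the same way with SciPy's linprog (again, Python and the toy instance are my own illustration, not the original code):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit_lp_b(A, b):
    """Method B: min 1^T [u; v]  s.t.  [A, -A] [u; v] = b,  u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)              # sum of u_i + v_i
    A_eq = np.hstack([A, -A])       # A (u - v) = b
    bounds = [(0, None)] * (2 * n)  # u >= 0, v >= 0
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=bounds)
    u, v = res.x[:n], res.x[n:]
    return u - v                    # recover x = u - v

# Toy underdetermined system with a sparse solution (assumed data)
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 10))
x_true = np.zeros(10)
x_true[[1, 7]] = [2.0, -1.0]
b = A @ x_true
x_hat = basis_pursuit_lp_b(A, b)
```

Note that the optimizer drives one of $ {u}_{i}, {v}_{i} $ to zero automatically; no explicit complementarity constraint is needed.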
MATLAB Implementation
The MATLAB implementation is straightforward using the linprog()
function.
The full code, including validation using CVX, can be found in my StackExchange Mathematics Q1639716 GitHub Repository.
Code Snippet - Method A
function [ vX ] = SolveBasisPursuitLp001( mA, vB )
% Method A: min 1^T t s.t. A x = b, x - t <= 0, -x - t <= 0,
% solved over the stacked vector [vX; vT].
numRows = size(mA, 1);
numCols = size(mA, 2);
% Objective touches only the t part of [vX; vT]
vF = [zeros([numCols, 1]); ones([numCols, 1])];
% Equality constraint: [A, 0] [vX; vT] = vB
mAeq = [mA, zeros(numRows, numCols)];
vBeq = vB;
% Inequality constraints encoding |x_i| <= t_i
mAineq = [eye(numCols), -eye(numCols); -eye(numCols), -eye(numCols)];
vBineq = zeros(2 * numCols, 1);
sSolverOptions = optimoptions('linprog', 'Display', 'off');
vXT = linprog(vF, mAineq, vBineq, mAeq, vBeq, [], [], sSolverOptions);
% Keep only the x part of the solution
vX = vXT(1:numCols);
end
Code Snippet - Method B
function [ vX ] = SolveBasisPursuitLp002( mA, vB )
% Method B: write x = u - v with u, v >= 0 and solve
% min 1^T [u; v] s.t. [A, -A] [u; v] = b.
numCols = size(mA, 2);
vF = ones([2 * numCols, 1]);
mAeq = [mA, -mA];
vBeq = vB;
vLowerBound = zeros([2 * numCols, 1]);
vUpperBound = inf([2 * numCols, 1]);
sSolverOptions = optimoptions('linprog', 'Display', 'off');
vUV = linprog(vF, [], [], mAeq, vBeq, vLowerBound, vUpperBound, sSolverOptions);
% Recover x = u - v
vX = vUV(1:numCols) - vUV(numCols + 1:end);
end
I used the code above in Reconstruction of a Signal from Sub Sampled Spectrum by Compressed Sensing.
Figured it out! As Erwin pointed out, the formulation above is valid, provided it is optimized over $ x $ and $ t $ jointly. In order to write it in the form suggested by the problem, I needed to stack $ x $ and $ t $:
$$ \begin{align*} \arg \min_{x, t} \: & \: \begin{bmatrix} \boldsymbol{0} \\ \boldsymbol{1} \end{bmatrix}^{T} \begin{bmatrix} x \\ t \end{bmatrix} \\ \text{subject to} \: & \: \begin{bmatrix} I & -I \\ -I & -I \end{bmatrix} \begin{bmatrix} x \\ t \end{bmatrix} \preceq \boldsymbol{0} \\ & \: \begin{bmatrix} A & 0 \end{bmatrix} \begin{bmatrix} x \\ t \end{bmatrix} = b \\ & \: \begin{bmatrix} x \\ t \end{bmatrix} \succeq \begin{bmatrix} -\boldsymbol{\infty} \\ \boldsymbol{0} \end{bmatrix} \end{align*} $$
(Here $ \boldsymbol{0} $ and $ \boldsymbol{1} $ denote a vector or matrix of all zeros or all ones of the appropriate size.)
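The stacked form above can also be sketched with SciPy's linprog, building exactly those block matrices and the one-sided bound on $ t $ (Python and the toy data are my own illustration, not part of the original answer):

```python
import numpy as np
from scipy.optimize import linprog

# Toy underdetermined system with a sparse solution (assumed data)
rng = np.random.default_rng(7)
m, n = 3, 6
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[4] = -2.0
b = A @ x_true

I = np.eye(n)
c = np.concatenate([np.zeros(n), np.ones(n)])    # [0; 1]^T [x; t]
G = np.block([[I, -I], [-I, -I]])                # [[I,-I],[-I,-I]] [x; t] <= 0
h = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])          # [A, 0] [x; t] = b
bounds = [(None, None)] * n + [(0, None)] * n    # x >= -inf, t >= 0

res = linprog(c, A_ub=G, b_ub=h, A_eq=A_eq, b_eq=b, bounds=bounds)
x_hat = res.x[:n]
```

At the optimum the objective equals $ {\left\| \hat{x} \right\|}_{1} $, which cannot exceed $ {\left\| {x}_{\text{true}} \right\|}_{1} $ since $ {x}_{\text{true}} $ is feasible.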