As pointed out by Snowball, the problem is inherently hard; see also this paper.

However, it can be done much faster in general than generating all the codewords.

I explain the idea for a linear $[n,k]$ code $C$ with $n = 2k$. First, we construct two generator matrices $G_1 = (I\ A)$ and $G_2 = (B\ I)$ of $C$, where $I$ is the $(k\times k)$ identity matrix and $A,B$ are two $(k\times k)$ matrices. This is only possible if the positions $1,\ldots,k$ and $k+1,\ldots,n$ both form information sets of $C$. If this is not the case, we have to apply a suitable coordinate permutation first.
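A minimal sketch of this first step, for the binary case and assuming the first and last $k$ positions really are information sets (the function name `systematic_form` and the toy $[6,3]$ matrix are my own illustrative choices, not part of the answer):

```python
def systematic_form(G, cols):
    """Row-reduce the 0/1 generator matrix G over GF(2) so that the columns
    listed in `cols` carry the (k x k) identity matrix.  Raises ValueError
    if those columns do not form an information set."""
    G = [row[:] for row in G]                      # work on a copy
    k = len(G)
    for r, c in enumerate(cols):
        # find a pivot row with a 1 in column c
        pivot = next((i for i in range(r, k) if G[i][c] == 1), None)
        if pivot is None:
            raise ValueError("columns do not form an information set")
        G[r], G[pivot] = G[pivot], G[r]
        # clear column c in all other rows
        for i in range(k):
            if i != r and G[i][c] == 1:
                G[i] = [a ^ b for a, b in zip(G[i], G[r])]
    return G

# Toy [6,3] example with n = 2k = 6:
G = [[1, 1, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 1],
     [0, 0, 1, 1, 1, 1]]
k = len(G)
G1 = systematic_form(G, range(k))         # (I | A)
G2 = systematic_form(G, range(k, 2 * k))  # (B | I)
```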

Now a nonzero codeword of weight $w$ has weight at most $\lfloor w/2\rfloor$ either in the first $k$ or in the last $k$ coordinates. Thus, we find all codewords of weight $\leq w$ among the codewords $x G_1 = (x, xA)$ and $x G_2 = (xB, x)$, where $x$ runs over all vectors of length $k$ and Hamming weight at most $\lfloor w/2\rfloor$. So we can find the minimum distance $d$ of $C$ by doing this iteratively for $w = 1,2,3,\ldots$ and stopping as soon as a nonzero codeword of weight at most $w$ has been found.
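Continuing the sketch, the enumeration itself could look as follows (binary case, $n = 2k$, reusing `G1` and `G2` from above). The stopping rule is the observation that once every $x$ with $\mathrm{wt}(x) \le t$ has been tried, all codewords of weight $\le 2t+1$ have been seen:

```python
from itertools import combinations

def row_sum(G, support):
    """GF(2) sum of the rows of G indexed by `support`, i.e. the codeword x*G
    for the vector x whose support is `support`."""
    c = [0] * len(G[0])
    for i in support:
        c = [a ^ b for a, b in zip(c, G[i])]
    return c

def min_distance(G1, G2):
    """Minimum distance of the binary code with generator matrices
    G1 = (I|A) and G2 = (B|I): enumerate information vectors x by
    increasing weight t and stop as soon as the smallest codeword
    weight found so far is at most 2t+1."""
    k = len(G1)
    best = float("inf")
    for t in range(1, k + 1):
        for support in combinations(range(k), t):   # all x with wt(x) = t
            for G in (G1, G2):
                best = min(best, sum(row_sum(G, support)))
        if best <= 2 * t + 1:                       # all lighter codewords already seen
            return best
    return best

# With G1, G2 from the previous snippet:
#   min_distance(G1, G2)   # -> 3 for the toy [6,3] example
```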

In this way, you have to generate only a small fraction of all the codewords to find the minimum distance, and the idea can be generalized to any linear code. The first step then is to find a covering of the coordinates with information sets. However, the algorithm is still exponential, of course.
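One possible (certainly not the only) way to carry out that first step in the binary case is a greedy covering: run Gaussian elimination repeatedly, offering the not-yet-covered columns as pivot columns first, so that every round produces an information set containing at least one new coordinate. A rough sketch, assuming no coordinate of the code is identically zero:

```python
def information_set_cover(G):
    """Greedily cover the coordinates of a binary code with information sets.
    Each round runs Gaussian elimination with the uncovered columns offered
    as pivot columns first; the k pivot columns found form an information set."""
    k, n = len(G), len(G[0])
    uncovered = set(range(n))
    cover = []
    while uncovered:
        M = [row[:] for row in G]
        order = sorted(uncovered) + sorted(set(range(n)) - uncovered)
        pivots, r = [], 0
        for c in order:
            pivot = next((i for i in range(r, k) if M[i][c] == 1), None)
            if pivot is None:
                continue                      # column depends on the pivots chosen so far
            M[r], M[pivot] = M[pivot], M[r]
            for i in range(k):
                if i != r and M[i][c] == 1:
                    M[i] = [a ^ b for a, b in zip(M[i], M[r])]
            pivots.append(c)
            r += 1
            if r == k:
                break
        cover.append(pivots)
        uncovered -= set(pivots)
    return cover

# For the toy [6,3] matrix above this returns [[0, 1, 2], [3, 4, 5]].
```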

The webpage codetables.de is a valuable source for bounds on the best known minimum distances for fixed $q$, $n$ and $k$.


Finding the minimum distance $d$ of a linear code is equivalent to finding its minimum weight. According to *On the inherent intractability of certain coding problems*, given the parity-check matrix, deciding whether or not there exists a codeword of some fixed weight $w$ is an NP-complete problem for binary linear codes.

As Gerry commented, you can easily find the minimum distance for particular codes. For example:

  1. If you have a Reed-Solomon code of length $n$ and dimension $k$, its minimum distance is $n-k+1$.

  2. If you have a Hamming code, its minimum distance is $3$ (a quick brute-force check of the $[7,4]$ case is sketched below).
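For a code as small as the $[7,4]$ Hamming code, the claim is easy to verify by brute force over all $2^k$ codewords. A quick sketch, using one standard systematic generator matrix of that code:

```python
from itertools import product

# A standard systematic generator matrix of the [7,4] Hamming code.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def brute_force_distance(G):
    """Minimum weight over all 2^k nonzero codewords (feasible only for small k)."""
    k, n = len(G), len(G[0])
    best = n
    for x in product([0, 1], repeat=k):
        if any(x):
            c = [sum(xi * gij for xi, gij in zip(x, col)) % 2 for col in zip(*G)]
            best = min(best, sum(c))
    return best

print(brute_force_distance(G))   # 3
```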