Is deconvolution simply division in frequency domain?

Is it correct to say that deconvolution is simply division in the frequency domain, and that convolution in the time domain is multiplication in the frequency domain?

And is it a convention to denote a function in the frequency domain with a hat above the letter?

like $\hat{f}\cdot\hat{g}=\widehat{\text{result}}$

Deconvolution in the frequency domain: $\widehat{\text{result}}/\hat{g}=\hat{f}$ for $\hat{g}\neq 0$

And how would you pronounce $\hat{g}$? "Function $g$ in the frequency domain", "frequency-domain function $g$", "function $g$ transformed into the frequency domain", or none of those?


Under some conditions, yes - it is possible. Say we have $h=f\star g$ where $f$ is known, $g$ is unknown, and $h$ is 'measured'. As you state, by the convolution theorem, we have

$$ \hat{h}=\hat{f}\hat{g} $$ Now, if $\hat{f}\neq 0$, we can claim that $\hat{g}=\hat{h}/\hat{f}$. However, this poses two issues: firstly, it is entirely too restrictive to assume that $\hat{f}\neq 0$ everywhere - take for example the ideal low-pass filter,

$$ \hat{f}(k)=\left\{\begin{array}{cc} 1 &-1\leq k\leq 1\\ 0 & \text{else}\end{array}\right. $$ This is a very common convolution kernel, and you'll be unable to perform deconvolution by dividing. In fact, the situation is worse: if $\hat{f}(k)=0$ for any $k$, then the convolution operator $g\mapsto f\star g$ has a nontrivial null space: any function $g$ with $\hat{g}$ supported in the set where $\hat{f}=0$ is mapped to the zero function by the convolution. Hence deconvolution is ill-posed for these kernels, i.e. there will not be a unique solution!
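On the other hand, when $\hat{f}$ vanishes nowhere, division genuinely works. Here is a minimal NumPy sketch, assuming circular (periodic) convolution so the convolution theorem holds exactly bin by bin; the random test signals are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
f = rng.standard_normal(n)   # known kernel
g = rng.standard_normal(n)   # "unknown" signal we will try to recover

# Convolution theorem: circular convolution = pointwise product of DFTs
h = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

# Naive deconvolution by division in the frequency domain.
# This recovers g up to round-off, but only because a generic random
# kernel has a nowhere-vanishing DFT.
g_rec = np.fft.ifft(np.fft.fft(h) / np.fft.fft(f)).real
```

Replacing the random kernel with the low-pass kernel above would put zeros in `np.fft.fft(f)` and the division would produce infinities.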

Secondly, even where division is possible, it is a bad idea as far as numerical accuracy is concerned: wherever $\hat{f}$ is small, dividing amplifies round-off error and any noise in the measurement of $h$.
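A small illustration of that amplification, assuming a hypothetical smooth kernel whose transform decays rapidly but never vanishes, so division is formally allowed everywhere; the $10^{-6}$ noise level is artificial:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
k = np.fft.fftfreq(n, d=1.0 / n)        # integer frequency grid

# Kernel transform: decays exponentially but is never exactly zero
f_hat = np.exp(-0.1 * np.abs(k))

g = np.sin(2 * np.pi * 3 * np.arange(n) / n)         # signal to recover
h_hat = f_hat * np.fft.fft(g)                        # exact measurement
h_hat += 1e-6 * np.fft.fft(rng.standard_normal(n))   # tiny noise

# Naive deconvolution: divide, then transform back
g_naive = np.fft.ifft(h_hat / f_hat).real
rel_err = np.linalg.norm(g_naive - g) / np.linalg.norm(g)
# rel_err ends up orders of magnitude larger than the 1e-6 noise level,
# because dividing by the tiny high-frequency values of f_hat blows
# the noise up by a factor of roughly exp(0.1 * n/2).
```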

As for pronouncing $\hat{g}$, most people say "$g$-hat", though technically it would be more appropriate to say "the Fourier transform of $g$".

Update 1/14:

To really get into deconvolution, one should talk more seriously about regularization methods. The classical way to do deconvolution is to simply take a Tikhonov regularization, i.e. if $Af=f\star g$ and we want to solve $Af=h$ for $f$, we consider a sequence (taking $\gamma\rightarrow 0$) of problems of the sort

$$ \min_{f_\gamma}\|Af_\gamma-h\|_2^2+\gamma\|f_\gamma\|_2^2 $$ This is a "regularized" least squares problem which has explicit solution

$$ f_\gamma=(A^tA+\gamma I)^{-1}A^th $$
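For circular convolution the DFT diagonalizes $A$, so this explicit solution reduces to a pointwise formula per frequency, $\hat{f}_\gamma(k)=\overline{\hat{g}(k)}\,\hat{h}(k)/(|\hat{g}(k)|^2+\gamma)$. A minimal NumPy sketch with an ideal low-pass kernel (so plain division would divide by zero); the test signal and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
t = np.arange(n)

# Ideal low-pass kernel in the frequency domain: hat(g) vanishes
# outside the band |k| <= 40, so plain division is impossible
g_hat = (np.abs(np.fft.fftfreq(n, d=1.0 / n)) <= 40).astype(float)

f_true = np.cos(2 * np.pi * 5 * t / n)               # band-limited, hence recoverable
h_hat = g_hat * np.fft.fft(f_true)
h_hat += 1e-4 * np.fft.fft(rng.standard_normal(n))   # measurement noise

# Tikhonov solution (A^t A + gamma I)^{-1} A^t h, evaluated per frequency
gamma = 1e-3
f_gamma = np.fft.ifft(np.conj(g_hat) * h_hat / (np.abs(g_hat) ** 2 + gamma)).real

rel_err = np.linalg.norm(f_gamma - f_true) / np.linalg.norm(f_true)
```

Note how the factor $\overline{\hat{g}}/(|\hat{g}|^2+\gamma)$ simply zeroes out the frequencies where $\hat{g}=0$ instead of dividing by them, which is the filtering interpretation discussed below.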

This is essentially a filtering technique - we avoid dividing by zero by filtering out those frequencies, then hope that we recover something close to $f$ as we take $\gamma\rightarrow 0$. A better method turns out to be "$l^1$ regularized least squares", where we replace $\|f_\gamma\|_2$ with $\|Wf_\gamma\|_1$, where $W$ is a sparsifying transform such as wavelets. This is a broad topic - see the book "Sparse image and signal processing" by Starck et al. For more info on the classical methods for deconvolution, check out "Introduction to Inverse Problems in Imaging" by Bertero and Boccacci.