The paper presents a unified, flexible framework for the tasks
of audio inpainting, declipping, and dequantization. The concept is
further extended to cover analogous degradation models in a transformed domain, e.g., quantization of the signal’s time-frequency
coefficients. The task of reconstructing an audio signal from degraded observations in two different domains is formulated as an
inverse problem, and several algorithmic solutions are developed.
The viability of the presented concept is demonstrated on an example in which an audio signal is reconstructed from partial and quantized observations of both its time-domain samples and its time-frequency
coefficients.
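For illustration only (the notation below is assumed for this sketch and need not match the paper’s), a two-domain reconstruction task of this kind is commonly posed in the sparsity-based audio restoration literature as a constrained program over the synthesis coefficients $c$ of a time-frequency dictionary $D$:
\[
  \hat{c} \in \operatorname*{arg\,min}_{c}\, \lVert c \rVert_1
  \quad \text{s.t.} \quad
  Dc \in \Gamma_{\mathrm{T}}
  \ \text{and}\
  c \in \Gamma_{\mathrm{TF}},
\]
where $\Gamma_{\mathrm{T}}$ denotes the set of time-domain signals consistent with the reliable, clipped, or quantized samples, and $\Gamma_{\mathrm{TF}}$ the set of coefficients consistent with the quantized time-frequency observations.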