Bayesian regularization and inference for linear and bilinear models

Begin
End
Identification Code
L100751701
Project Focus
theoretical
Project Type (EU)
other
Publications ÚTIA
Abstract
Linear and bilinear models arise in many research areas, including statistics, signal processing, machine learning, approximation theory, and image analysis. When the problem of interest is ill-conditioned or suffers from separation ambiguity, classical solutions such as ordinary least squares or non-negative matrix factorization fail, and additional information is needed to obtain an acceptable solution. The problem is typically recast as an optimization problem with an additional regularization term. In linear models, typical examples are Tikhonov regularization and the least absolute shrinkage and selection operator (LASSO). In bilinear problems the situation is even more difficult, and domain-specific assumptions often need to be made to tackle the problem. Existing state-of-the-art approaches often have tuning parameters in their regularization terms that need to be specified. These parameters typically have a Bayesian interpretation as hyperparameters of prior distributions on the model parameters. Beneficially, Bayesian inference allows the influence of these tuning parameters to be reduced to a very low level. Moreover, the selection of an appropriate regularization can be guided by Bayesian model selection. The challenge is to design prior models that are flexible enough to reflect specific problems while remaining computationally tractable.
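As a minimal illustration of the Bayesian treatment of a regularization tuning parameter described above, the sketch below applies the standard evidence (type-II maximum likelihood) iteration to a linear model with a Gaussian prior. The prior precision alpha and noise precision beta are estimated from the data, so the ratio alpha/beta plays the role of the Tikhonov weight without manual tuning. The synthetic data, dimensions, and initial values are illustrative assumptions, not part of the project description.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ill-conditioned linear problem y = X w + noise (assumed data).
n, d = 50, 20
X = rng.normal(size=(n, d)) @ np.diag(np.logspace(0, -3, d))  # decaying column scales
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.0, 0.5]
y = X @ w_true + 0.1 * rng.normal(size=n)

# Evidence-maximization updates for the prior precision alpha and the noise
# precision beta: the Bayesian counterpart of choosing Tikhonov's lambda.
alpha, beta = 1.0, 1.0
for _ in range(100):
    A = alpha * np.eye(d) + beta * X.T @ X          # posterior precision matrix
    m = beta * np.linalg.solve(A, X.T @ y)          # posterior mean of the weights
    gamma = d - alpha * np.trace(np.linalg.inv(A))  # effective number of parameters
    alpha = gamma / (m @ m)                         # update prior precision
    beta = (n - gamma) / np.sum((y - X @ m) ** 2)   # update noise precision

print("implied Tikhonov weight alpha/beta:", alpha / beta)
print("posterior-mean error:", np.linalg.norm(m - w_true))
```

On this kind of ill-conditioned design, the posterior mean is shrunk along the poorly determined directions, whereas ordinary least squares amplifies the noise there; the hyperparameters are fitted rather than hand-tuned, which is the property the abstract highlights.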