Student profile: Mr Simon Luo



Thesis work

Thesis title: Probabilistic Graphical Models for Unsupervised Feature Learning

Supervisors: Fabio RAMOS, Lamiae AZIZI

Thesis abstract:

Probabilistic Graphical Models (PGMs) play a central role in machine learning. They provide a convenient way to express mathematically the structure, relationships and constraints between random variables. PGMs can use a large number of parameters to gain higher representational power; however, selecting the number of parameters is difficult. With too few parameters, the model cannot express the relations between the features and the target outputs, so it underfits. With too many parameters, the model becomes sensitive to random noise in the training data, so it overfits. Balancing underfitting and overfitting is a classical problem in machine learning known as the bias-variance trade-off.
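The bias-variance trade-off described above can be sketched in a few lines. This is a minimal illustration using polynomial regression as a stand-in for any model whose parameter count we control; the synthetic dataset and the chosen polynomial degrees are illustrative only, not from the thesis.

```python
import numpy as np

# Noisy observations of a smooth target function.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=x.shape)

def train_error(degree):
    # Fit a polynomial with degree + 1 parameters; return training MSE.
    coeffs = np.polyfit(x, y, degree)
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

err_underfit = train_error(1)    # too few parameters: high bias
err_balanced = train_error(5)    # enough parameters to capture the sine
err_overfit = train_error(20)    # many parameters: also fits the noise

print(err_underfit, err_balanced, err_overfit)
```

Training error keeps falling as parameters are added, but the degree-20 fit is tracking the random noise rather than the underlying signal: held-out error would rise again, which is exactly the trade-off the abstract refers to.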

Firstly, we investigate the challenges faced in applying PGMs to real-world applications. Real-world applications often have a complex structure created by complex interactions between features. Tuning the model is often difficult because of the large number of parameters, and domain knowledge is often used to assist in selecting them. For large datasets, however, selecting these parameters manually is often infeasible. To solve this problem, we propose a Bayesian non-parametric approach, which has the flexibility to learn the structure of the dataset. Our approach uses a Dirichlet process prior over clusterings to select the model parameters automatically.
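The Dirichlet-process idea can be illustrated with a standard off-the-shelf model (this is a hedged sketch using scikit-learn's truncated DP mixture, not the thesis's own method): a mixture started with deliberately too many components prunes the surplus ones, so the effective model size is learned from the data rather than hand-tuned. The three-cluster synthetic dataset below is made up for the example.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Synthetic data: three well-separated 2-D Gaussian clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.3, size=(100, 2))
               for loc in [(-3, 0), (0, 3), (3, 0)]])

dpgmm = BayesianGaussianMixture(
    n_components=10,                                   # deliberately too many
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

# Components whose mixing weight stays above a small threshold are the
# ones the model actually uses; the rest are effectively switched off.
effective = int(np.sum(dpgmm.weights_ > 0.05))
print(effective)
```

The count of effective components comes out far below the truncation level of 10, which is the behaviour the abstract describes: the number of model parameters is selected automatically by the prior rather than by hand.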

We then study the behaviour of different configurations of PGMs. PGMs can achieve higher representational power by using hidden nodes and higher-order features. Despite the prevalence of PGMs, the comparison between using hidden nodes and using higher-order features has not been well studied. In this thesis, we propose an efficient algorithm for PGMs that uses an information geometry approach to incorporate higher-order features. With our new formulation of PGMs, we then conduct an empirical study comparing the different configurations.

Lastly, we apply our new information geometry formulation of PGMs to other classical unsupervised feature learning models such as Independent Component Analysis (ICA), Sparse Coding and Self-Taught Learning. The traditional approach to these models requires an independence assumption between features. Our newly formulated information geometry approach allows higher-order features to be included in the model.
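The independence assumption that classical ICA relies on can be seen directly in a toy unmixing experiment. This sketch uses scikit-learn's FastICA (a standard ICA implementation, not the thesis's formulation); the two source signals and the mixing matrix are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two statistically independent source signals plus a little noise.
rng = np.random.default_rng(2)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                        # smooth sinusoidal source
s2 = np.sign(np.sin(3 * t))               # square-wave source
S = np.c_[s1, s2] + 0.02 * rng.normal(size=(2000, 2))

A = np.array([[1.0, 0.5], [0.5, 1.0]])    # mixing matrix
X = S @ A.T                               # observed linear mixtures

# ICA recovers the sources only because it assumes they are independent.
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)

# Estimates match the true sources up to permutation and scale, so each
# true source correlates strongly with one of the recovered components.
corr = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
print(corr.max(axis=1))
```

When sources are not independent, this classical recovery guarantee breaks down, which is the limitation the information-geometry formulation in the thesis is aimed at relaxing by admitting higher-order features.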

Note: This profile is for a student at the University of Sydney. Views presented here are not necessarily those of the University.