Project offerings 2017

Click on the supervisor's name for a description of the projects they are offering.

Projects will be added over the coming weeks.

Supervisor

Project/s

Wei Bao

User mobility analysis in modern networks

Vera Chung

Face recognition using Deep neural network

Face alignment using Convolutional neural network

Vincent Gramoli

Blockchain: Can We Reach Consensus?

Evaluating Consensus Protocols in Distributed Systems

Ralph Holz

Analysing the traffic of mobile messengers

Dependencies in JavaScript projects

Smart Contract Analysis on Blockchains

Enhancing email security with live checks

Routing transparency: an observatory for routing data

Seokhee Hong

Scalable Visual Analytics

Visualisation and Analysis of Massive Complex Social Networks and Biological Networks

Navigation and Interaction Techniques for 2.5D Network Visualisation

Algorithmics for 2.5D Graph Embedding

Beyond Planarity: Algorithmics for Sparse Non-planar Graphs

Jinman Kim

Robotic Surgical Video Processing

Multidisciplinary team Visualization

Machine Learning for Automatic Thyroid Eye Disease Classification

Skin Lesion Analysis for Melanoma Detection

Adrenal Tumor Recognition and Analysis

Kevin Kuan

Helpfulness of Online Consumer Reviews

Sharing Food Pictures on Social Media

Group-Buying Business Model

Illusion on Social Networks

Emotional Contagion on Social Networks

Diffusion of Information on Social Networks

Na Liu

Personalized Coaching for health behaviour change

Health Data Analysis

Mobile phone addiction and self-diagnosis

David Lowe

Lab augmentation using heads-up-displays

Lab augmentation via mobile phone apps

MOOLS: Massive Open Online Labs

Using virtual reality augmentation to support simultaneous use of physical equipment

Enhancing laboratory learning through scripted guidance using Smart Sparrow

Zhiyong Wang

Multimedia Data Summarization

Predictive Analytics of Big Time Series Data

Human Motion Analysis, Modeling, Animation, and Synthesis

Video Captioning

Bing Zhou

Parallel computing in large-scale fMRI data with high-dimensions

Transcription factor network visualisation based on chromatin states

Albert Zomaya

Centre for Distributed & High Performance Computing Honours projects

Projects supervised by Wei Bao

User mobility analysis in modern networks
Mobility is an intrinsic trait of many mobile applications. Uber and Pokemon Go are two exciting examples where location and trajectory are exploited to satisfy users’ requirements. However, user mobility also poses significant challenges for realizing ubiquitous and reliable communication and computing. For example, users’ movement causes frequent handovers among base stations and access points, which increases latency and thus deteriorates users’ experience. In this project, you are expected to investigate in depth the impact of user mobility on cutting-edge technologies such as mobile computing, fog computing, the Internet of Things, and big data. You may choose one of the following topics (but are not limited to them):

  • User mobility oriented applications
  • User mobility data analytics
  • Mobility-aware fog computing
  • Mobility-aware user scheduling

Requirements: Good programming skills in languages such as Python, Matlab, or Java. A strong mathematical background is a plus.

Projects supervised by Vera Chung

Face recognition using Deep neural network
Recently, large numbers of photos have been crawled by search engines and uploaded to social media networks, including a variety of unconstrained material such as objects, faces and scenes. This large volume of data and the increase in computational resources have enabled the use of more powerful data mining methods such as deep neural networks. This project studies how to apply deep neural networks to face recognition.

Requirements: good programming skills in Python
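A common recognition pipeline compares CNN-generated face embeddings by similarity. The sketch below illustrates only the matching step, with hand-made toy vectors standing in for real network outputs; the `identify` helper and the 0.8 threshold are illustrative assumptions, not part of the project brief.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(probe, gallery, threshold=0.8):
    """Return the gallery identity whose embedding is most similar
    to the probe, or None if no similarity clears the threshold."""
    best_name, best_sim = None, -1.0
    for name, emb in gallery.items():
        sim = cosine(probe, emb)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None

# Toy 4-dimensional "embeddings" standing in for CNN outputs.
gallery = {"alice": [0.9, 0.1, 0.0, 0.1], "bob": [0.1, 0.9, 0.2, 0.0]}
probe = [0.85, 0.15, 0.05, 0.1]
match = identify(probe, gallery)
```

In a real system the embeddings would come from a trained network and the threshold would be tuned on validation data.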

Face alignment using Convolutional neural network
Face alignment, or detecting semantic facial landmarks (eyes, ears, mouth and nose), is a fundamental component of many face analysis tasks such as face verification and face recognition. Popular approaches include template fitting and regression-based methods. Many deep neural network models have recently been applied to the face alignment problem. This project will study how to detect facial landmarks by coarse-to-fine regression using a cascade of deep convolutional neural networks (CNNs).

Requirements: good programming skills in Python
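The coarse-to-fine idea can be illustrated without any deep learning machinery: a cascade of stage regressors, each predicting a corrective offset for the current landmark estimate. In this sketch the "regressors" are stand-ins that simply shrink the residual toward a fixed ground-truth point; a real cascade would learn that mapping from image patches with CNNs.

```python
def cascade_refine(initial, stages, image=None):
    """Apply a cascade of regressors; each stage maps the current
    landmark estimate (and, in a real system, image features) to a
    corrective offset, as in coarse-to-fine CNN cascades."""
    est = list(initial)
    for regressor in stages:
        dx, dy = regressor(est, image)
        est = [est[0] + dx, est[1] + dy]
    return est

# Stand-in regressors: each removes half of the remaining error
# toward a known true landmark (a learned model would estimate
# this correction from the image instead).
TRUE = (40.0, 55.0)
def make_stage():
    return lambda est, img: ((TRUE[0] - est[0]) * 0.5,
                             (TRUE[1] - est[1]) * 0.5)

stages = [make_stage() for _ in range(4)]
coarse = [10.0, 20.0]
fine = cascade_refine(coarse, stages)
```

Each stage halves the residual, so four stages reduce the initial error sixteen-fold, mirroring how later CNN stages in a cascade refine the coarse estimate of earlier ones.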

Projects supervised by Vincent Gramoli

Blockchain: Can We Reach Consensus?
Blockchain is a disruptive technology promising to minimize the cost of ownership transfers over the internet. While this technology has already shown promise for exchanging cryptocurrencies like Bitcoin, it can potentially transfer various types of assets [1]. The key underlying principle is a distributed ledger to which miners can append a block encoding newly requested transactions. Once recorded, blocks are immutable, allowing participants to audit and verify the blockchain since its genesis block.

To prevent blocks with conflicting transactions from being appended concurrently to the chain, participants must run a consensus algorithm that guarantees a total order of the blocks. Unfortunately, consensus has been known to be unsolvable for three decades [2], but practical systems have been designed to get as close to a solution as possible. As financial transactions are a clear incentive for malicious users to break the consensus properties, it is crucial to understand the weaknesses of current implementations of Byzantine agreement [3] in order to improve existing alternatives.

The goal of this research project is to investigate existing solutions whose code is available, like Ethereum [1], and design an efficient and secure consensus prototype for blockchains.

[1] Ethereum
[2] Impossibility of Distributed Consensus with One Faulty Process. Fischer, Michael J. and Lynch, Nancy A. and Paterson, Michael S. JACM 1985.
[3] The Byzantine Generals Problem. Lamport, Leslie and Shostak, Robert and Pease, Marshall. TOPLAS 1982.

Evaluating Consensus Protocols in Distributed Systems
Distributed system solutions, like CoreOS used by Facebook, Google and Twitter, exploit a key-value store abstraction to replicate the state and a consensus protocol to totally order the state machine configurations. Unfortunately, there is no way to reconfigure this key-value store service, to include new servers or exclude failed ones, without disruption.

The Paxos consensus algorithm that allows candidate leaders to exchange with majorities could be used to reconfigure a key-value store as well [4]. To circumvent the impossibility of implementing consensus with asynchronous communications, Paxos guarantees termination under partial synchrony while always guaranteeing validity and agreement, despite having competing candidate leaders proposing configurations.

Due to the intricacy of the protocol [1], the tendency has been to switch to an alternative algorithm where requests are centralized at a primary. Zab, a primary-based atomic broadcast protocol, was used in ZooKeeper [2], a distributed coordination service. Raft [1] reused the centralization concept of ZooKeeper to solve consensus. The resulting simplification led to the development of various implementations of Raft in many programming languages.

The goal of this project is to compare a Raft-based implementation to Paxos-based implementations [3] to confirm that Paxos can be better suited than Raft in case of leader failures and explore cases where Raft could be preferable.

[1] Diego Ongaro and John Ousterhout. In search of an understandable consensus algorithm. In ATC, pages 305–319, Philadelphia, PA, 2014. USENIX.
[2] Flavio Junqueira and Benjamin Reed. ZooKeeper: Distributed Process Coordination. O’Reilly Media, Nov. 2013.
[3] Vincent Gramoli, Len Bass, Alan Fekete, Daniel Sun. Rollup: Non-Disruptive Rolling Upgrade. USyd Technical Report 699.
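As a minimal sketch of the two Paxos phases described above (prepare/promise, then accept), the toy single-decree implementation below shows why a competing candidate leader with a higher proposal number must adopt a previously accepted value, preserving agreement. It is illustrative only: it omits networking, message loss, and Byzantine behaviour.

```python
class Acceptor:
    def __init__(self):
        self.promised = -1      # highest proposal number promised
        self.accepted = None    # (number, value) last accepted, if any

    def prepare(self, n):
        # Phase 1b: promise to ignore proposals numbered below n,
        # reporting any value already accepted.
        if n > self.promised:
            self.promised = n
            return ("promise", self.accepted)
        return ("nack", None)

    def accept(self, n, value):
        # Phase 2b: accept unless a higher-numbered prepare was seen.
        if n >= self.promised:
            self.promised = n
            self.accepted = (n, value)
            return True
        return False

def propose(acceptors, n, value):
    """One single-decree Paxos round: a candidate leader proposes
    `value` with proposal number n; returns the value chosen, which
    may be an earlier accepted value."""
    majority = len(acceptors) // 2 + 1
    # Phase 1: collect promises (here we contact everyone; a real
    # leader only needs a majority).
    promises = [a.prepare(n) for a in acceptors]
    granted = [acc for tag, acc in promises if tag == "promise"]
    if len(granted) < majority:
        return None
    # If any acceptor already accepted a value, adopt the one with
    # the highest proposal number instead of our own.
    prior = [acc for acc in granted if acc is not None]
    if prior:
        value = max(prior)[1]
    # Phase 2: ask acceptors to accept (n, value).
    acks = sum(a.accept(n, value) for a in acceptors)
    return value if acks >= majority else None

acceptors = [Acceptor() for _ in range(5)]
first = propose(acceptors, n=1, value="config-A")
# A competing leader with a higher number must adopt the chosen value.
second = propose(acceptors, n=2, value="config-B")
```

Running this, `second` comes back as "config-A" rather than "config-B": the second leader's prepare phase surfaces the earlier accepted value, which is exactly the mechanism that keeps competing candidate leaders from violating agreement.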

Projects supervised by Ralph Holz

Analysing the traffic of mobile messengers
Co-supervisor: Wei Bao
Mobile messengers first replaced text messages (SMS); more recently, they have become serious competition to established social networks such as Facebook. In this work, we are going to analyse the network traffic that mobile messengers produce in the network of the University of Sydney.

We will develop a protocol dissector for the Bro Intrusion Detection System, which is used to analyse traffic passing through the network of the University of Sydney. We will run the dissector to obtain data about the prevalence of mobile messaging, its usage patterns (time, frequency), and interaction with remote servers.

Note: the privacy of university members is not infringed in this project as we do not identify network users and measurement data is not released. We hold ethical clearance for the operation of Bro.
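Once the dissector emits per-message records, deriving usage patterns is a simple aggregation. The sketch below assumes hypothetical (timestamp, messenger) records purely for illustration; the actual Bro log format will differ.

```python
from collections import Counter
from datetime import datetime, timezone

def usage_pattern(flow_log):
    """Aggregate per-hour message counts per messenger from
    (unix timestamp, app) records, as a dissector might emit."""
    by_app_hour = Counter()
    for ts, app in flow_log:
        hour = datetime.fromtimestamp(ts, tz=timezone.utc).hour
        by_app_hour[(app, hour)] += 1
    return by_app_hour

# Hypothetical records: (unix timestamp, messenger name).
log = [(1486963200, "whatsapp"), (1486963260, "whatsapp"),
       (1486966800, "signal")]
pattern = usage_pattern(log)
```

The resulting counter answers frequency questions directly, e.g. how many WhatsApp messages were seen in each hour of the day.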

Dependencies in JavaScript projects
Co-supervisor: Ingo Weber
Most software today is released in the form of packages, which may have dependencies on other software packages that need to be installed first or be available as a library. In this project, we are going to analyse these dependencies and potential vulnerabilities they introduce.

We have already developed a tool chain that allows us to download PHP packages, extract metadata (author, version, etc.), and store everything into a graph DB for analysis. This project can take one of two forms, depending on the students' interests and background.

The first option is to carry out an analysis of dependencies in the PHP universe and to develop an algorithm that finds out which packages depend on other, known-vulnerable packages. The result is a live tracking system that continuously tracks packages affected by new vulnerabilities.

The second option is to add the extraction of JavaScript package dependencies to our toolchain and run first analyses.

The project can be split up for two students to work on it, with each student working on one option.
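The core step of the first option, propagating known vulnerabilities through the dependency graph, amounts to a reachability search over reverse dependencies. A minimal sketch over a hypothetical reverse-dependency map (package names here are invented):

```python
from collections import deque

def affected_packages(dependents, vulnerable):
    """Given a reverse-dependency map {package: [packages that
    depend on it]} and a set of known-vulnerable packages, return
    every package that transitively depends on a vulnerable one."""
    affected = set(vulnerable)
    queue = deque(vulnerable)
    while queue:
        pkg = queue.popleft()
        for dep in dependents.get(pkg, []):
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return affected

# Hypothetical package graph: app -> framework -> crypto-lib.
dependents = {"crypto-lib": ["framework"], "framework": ["app"]}
result = affected_packages(dependents, {"crypto-lib"})
```

In the real system the same traversal would run as a query against the graph database whenever a new vulnerability is published.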

Smart Contract Analysis on Blockchains
Co-supervisor: Bernhard Scholz
Blockchains have become popular platforms for trading assets. They are best known for their two main representatives, Bitcoin and Ethereum. The latter introduced a novelty: smart contracts. These are small programs that are stored on the blockchain and executed by all participants. They allow users to define the autonomous execution of transactions, governed only by code. Applications range from business workflows to the management of entire organisations. Being Turing-complete, smart contracts do come with a risk, however: programming errors can result in serious losses as attackers find ways to interact with contracts in unforeseen ways.

We have developed a toolchain to decompile and analyse smart contracts.

In this work, we are going to improve our toolchain in (at least) two ways. First, we generalise its functionality to detect exploitable contracts even better. Second, we build code that can automatically test if a vulnerability is indeed exploitable - i.e. we construct an exploit generator.

Enhancing email security with live checks
Email remains the most heavily used form of nearly instantaneous communication, with billions of subscribers and a similarly high number of messages exchanged every day. Yet the security of email connections is still often absurdly bad. This is, in part, due to the difficulties of rolling out a working Public Key Infrastructure on a global scale. In this project, we take a new approach. We abandon the attempt to enforce binary security decisions ("secure or not") and instead rely on historical and ongoing observations from Internet measurements to estimate the security of a fresh email connection.

We have already built a toolchain to continuously measure and track the security of connections to a large number of email servers. In this project, we want to extend this to the full Internet and the entire range of email protocols. We will use Internet-scale scanners and build models that predict which characteristics a safe connection should have and whether a fresh connection should be considered secure. We evaluate our work using data from a passive network monitor, i.e. real network traffic, to estimate the benefits our solution can provide.
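One simple way to turn historical observations into a non-binary security estimate is to score how well a fresh connection's features match what has been observed for that server before. The feature names and weights below are illustrative assumptions, not the project's actual model:

```python
def connection_score(observed, history):
    """Score a fresh email (SMTP) connection against historical
    observations of the same server: features matching historically
    seen values raise confidence, deviations lower it."""
    # Illustrative features and weights only.
    weights = {"tls_version": 0.4, "cipher": 0.3, "cert_fingerprint": 0.3}
    score = 0.0
    for feature, weight in weights.items():
        if observed.get(feature) in history.get(feature, set()):
            score += weight
    return score

history = {"tls_version": {"TLSv1.2"},
           "cipher": {"ECDHE-RSA-AES128-GCM-SHA256"},
           "cert_fingerprint": {"ab:cd:ef"}}
good = {"tls_version": "TLSv1.2",
        "cipher": "ECDHE-RSA-AES128-GCM-SHA256",
        "cert_fingerprint": "ab:cd:ef"}
downgraded = {"tls_version": "SSLv3", "cipher": "RC4-MD5",
              "cert_fingerprint": "ab:cd:ef"}
```

A downgrade attack that weakens the TLS version and cipher while keeping the certificate would score 0.3 here instead of 1.0, giving a graded signal rather than a binary secure/insecure verdict.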

Routing transparency: an observatory for routing data
It is a curious but true fact: the functionality that the Internet provides is only made possible by a protocol that is entirely insecure: BGP. Previous research has shown how easy attacks on this protocol really are. While the BGP community has drafted several standards to improve the security of the protocol, these have proven to be very expensive to deploy.

The goal of this project is to bring transparency to Internet routing.

The ownership of Internet routes (and Internet networks) is recorded with the so-called Regional Internet Registries (RIRs); for Australia, for instance, this is APNIC. In this project, we are going to develop code that regularly downloads data from the five existing RIRs and imports it into a database for later analysis. We will develop algorithms that keep track of routing data and provide us with on-the-fly reports of changes. We will evaluate and improve the performance of our solution and, if we find it useful, take additional data sources into account.
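At its core, on-the-fly change reporting is a diff between successive snapshots of the registry data. A minimal sketch over hypothetical {prefix: origin AS} snapshots (the prefixes and AS numbers below are documentation examples, not real registry entries):

```python
def route_changes(old, new):
    """Diff two snapshots of route-origin data ({prefix: origin AS})
    and report announced, withdrawn, and re-originated prefixes."""
    announced = {p: new[p] for p in new.keys() - old.keys()}
    withdrawn = {p: old[p] for p in old.keys() - new.keys()}
    moved = {p: (old[p], new[p]) for p in old.keys() & new.keys()
             if old[p] != new[p]}
    return announced, withdrawn, moved

yesterday = {"203.0.113.0/24": "AS64500", "198.51.100.0/24": "AS64501"}
today = {"203.0.113.0/24": "AS64999", "192.0.2.0/24": "AS64502"}
announced, withdrawn, moved = route_changes(yesterday, today)
```

A prefix suddenly "moving" to a different origin AS is exactly the kind of event a routing observatory would flag for closer inspection.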

Projects supervised by Seokhee Hong

Scalable Visual Analytics
Technological advances such as sensors have increased data volumes in the last few years, and now we are experiencing a “data deluge” in which data is produced much faster than it can be used by humans.
Further, these huge and complex data sets have grown in importance due to factors such as international terrorism, the success of genomics, increasingly complex software systems, and widespread fraud on stock markets.

We aim to develop new visual representation, visualization and interaction methods for humans to find patterns in huge abstract data sets, especially network data sets.
These data sets include social networks, telephone call networks, biological networks, physical computer networks, stock buy-sell networks, and transport networks.
These new visualization and interaction methods are in high demand by industry.

Visualisation and Analysis of Massive Complex Social Networks and Biological Networks
Recent technological advances have led to many massive complex network models in many domains, including social networks, biological networks, webgraphs and software engineering.

Visualization can be an effective analysis tool for such networks. Good visualisation reveals the hidden structure of the networks and amplifies human understanding, thus leading to new insights, new findings and possible prediction of the future.

However, visualisation of such massive complex networks is very challenging due to the scalability and the visual complexity.

This project addresses the challenging issues for visualisation and analysis of massive complex networks by designing and evaluating new efficient and effective algorithms for massive complex social networks and biological networks.

In particular, integration of good analysis method with good visualisation method will be the key approach to solve the research challenge.

Navigation and Interaction Techniques for 2.5D Network Visualisation
Recent technological advances have led to many large and complex network models in many domains, including social networks, biological networks, webgraphs and software engineering.

Visualization can be an effective analysis tool for such networks; good visualisation may reveal the hidden structure of the networks and amplify human understanding, thus leading to new insights, new findings and possible prediction of the future.

However, visualisation itself cannot serve as an effective and efficient analysis tool for large and complex networks, if it is not equipped with suitable interaction and navigation methods.

Well-designed and easy-to-use navigation and interaction techniques can enable users to communicate with a visualization much faster and more effectively to perform various analysis tasks such as finding patterns, trends and unexpected events.

Recently, 2.5D graph visualization methods have been successfully applied for visualization of large and complex networks, arising from biological networks, social networks and internet networks.
However, the corresponding navigation methods have not yet been investigated.

This project aims to design, implement and evaluate new methods for navigating 2.5D layouts of large and complex networks, enabling users to perform analytical tasks.

Algorithmics for 2.5D Graph Embedding
Graph Drawing is concerned with constructing good geometric representations of graphs in two and three dimensions.

Although Graph Drawing has been extensively studied due to its wide range of applications, such as VLSI design, information systems, sociology, biology, networks, and software engineering, the majority of research has been devoted to representations of graphs in two dimensions.

This project will investigate a new MultiPlane framework, which draws graphs using a set of 2D planes, nicely arranged in three dimensions, and satisfying new aesthetic criteria derived from topology and graph theory.

More specifically, this project aims to study Multiplane embeddings from both mathematical and computational points of view: define new mathematical criteria for MultiPlane embeddings and establish lower/upper bounds; characterise MultiPlane graphs; determine the complexity of computing MultiPlane embeddings; and design algorithms for constructing MultiPlane embeddings.
In particular, strong skills and research interests in mathematics, algorithms and theoretical computer science are required.

Beyond Planarity: Algorithmics for Sparse Non-planar Graphs
Graph algorithms are used in many application domains, including many graph mining tools, in domains such as market surveillance, fraud detection, bioinformatics, software re-engineering, and counter-terrorism.

Planar graphs have a long history both in the theoretical mathematical literature and graph algorithms literature; however, the planarity constraint is very restrictive for practical applications, since most of real-world networks are non-planar graphs.

This project will relax this constraint, by considering sparse non-planar graphs, and aims to investigate their structural properties, design algorithms, and evaluate the efficiency and effectiveness of the algorithms.

Projects supervised by Jinman Kim

Robotic Surgical Video Processing
Robotic surgery provides the surgeon with articulated instruments capable of the full range of movements that would otherwise be possible performing open surgery. The Robotic system uses binocular telescopes (which can be recorded as a video) allowing the surgeon to have full depth perception which greatly improves surgical precision.

The ability to process surgical videos can provide an exceptional level of clinical decision support, via real-time data augmentation during surgery, and can also facilitate surgical training and the measurement of surgical competence.

This research will develop the following core algorithms: (i) recognition of surgical tools and gestures from video; (ii) visual tracking of major anatomical structures from video; (iii) cloud-based computation for real-time video data processing; and (iv) a surgical decision support system within Augmented Reality and/or Virtual Reality environments.

This research will be conducted within the Biomedical and Multimedia Information Technology (BMIT) research group, with access to the latest computing facilities; state-of-the-art algorithms and technologies in medical imaging, machine learning and video processing; and clinical partners, institutions and biomedical data.

Requirements: Interest in computer vision and image / video processing will be helpful. Good knowledge in programming.

Multidisciplinary team Visualization
Multidisciplinary team meetings (MDTs) are the standard of care in modern clinical practice. MDTs typically comprise members from a variety of clinical disciplines involved in a patient’s care. In MDT, imaging is critical to decision-making and therefore it is important to be able to communicate the image data to other members. However, the concept of changing the image visualisations for different members, to aid in interpretation, is currently not available. In this project, we will design and develop new MDT visualisations, where we propose the use of a novel ‘optimal view selection’ algorithm to transform the image visualisation to suit the needs of the individual team members. In this approach, a set of visual rules (via qualitative and quantitative modelling) will be defined that ensures the selection of the view that best suits the needs of the different users. Our new MDT visualisation will facilitate better communication between all the clinicians involved in a patient’s care and potentially improve patient outcomes.

We have several ongoing visualisation projects at the Biomedical and Multimedia Information Technology (BMIT) research group. These involve innovative visualisation algorithms using emerging hardware devices (e.g., Oculus Rift and HoloLens, coupled with Kinect/Leap Motion). Students will join a team of researchers and will have the opportunity to work in a clinical environment with clinical staff and students.

Machine Learning for Automatic Thyroid Eye Disease Classification
Thyroid Eye Disease (TED) affects many people around the world. TED is an extremely unpleasant, painful, cosmetically distressing, and occasionally sight-threatening condition. Early diagnosis is particularly important, since early treatment can minimize the risk of losing sight for severe TED patients.

Current TED diagnosis is usually made by summing the scores listed in the clinical activity score system (e.g., redness of the eyelid). However, even for experienced physicians, diagnosis by human vision can be subjective, inaccurate and non-reproducible. This is primarily attributed to the complexity of the eye features that are used to describe the disease.

Machine learning plays an essential role in the medical imaging field, for example in computer-aided diagnosis (CAD), where researchers apply modern machine learning and pattern recognition techniques such as deep learning to solve medical problems. For instance, a CAD system can help physicians with screening procedures, such as indicating the locations of suspicious tumors. However, to the best of our knowledge, there is no such system or method for automatic TED classification.

In this project, we aim to develop a machine learning algorithm using state-of-the-art techniques to automatically produce TED diagnoses. Eventually, the algorithm would help physicians make accurate, objective and reproducible decisions.

Skin Lesion Analysis for Melanoma Detection
Melanoma is one of the most lethal forms of skin cancer. Unfortunately, Australia has the highest incidence of melanoma in the world. On average, 30 Australians will be diagnosed with melanoma every day and more than 1,200 will die from the disease each year (according to Melanoma Institute Australia).

Automated tools on a consumer's computer or smartphone, together with a camera, that can assist in triage, screening, and evaluation of skin lesions will become essential for early detection and diagnosis. Machine learning plays an essential role in the medical imaging field, and modern techniques such as deep learning have been used to solve medical problems. For instance, state-of-the-art machine learning systems can help physicians with screening procedures, such as indicating the locations of suspicious tumors. However, due to the complexity of skin lesions, existing methods for skin lesion analysis and melanoma detection are still not accurate enough for this purpose.

In this project, we aim to develop an accurate automated skin lesion analysis tool for melanoma detection. Eventually, the tool will improve patient awareness and provide a cost-effective solution.

Adrenal Tumor Recognition and Analysis
Adrenal tumors are currently very difficult to assess from the onset of the disease. The abnormal site is visible but cannot be differentiated as either malignant or benign. To confirm the diagnosis, multiple additional tests are typically performed to identify and characterize the disease, such as CT, PET-CT, and MRI. Early confirmation of the disease can lead to significant improvement in the quality of care.

Tumor detection is a fundamental requirement for a computer-aided diagnosis (CAD) system for adrenal tumors to provide essential information as clinical decision support. However, there is currently no algorithm to detect and classify adrenal tumors. In this project, we aim to develop a machine learning algorithm to accurately and efficiently detect adrenal tumors in medical images.

Projects supervised by Kevin Kuan

Helpfulness of Online Consumer Reviews
Consumers increasingly rely on online product reviews to guide purchases. This project aims to study different review characteristics (length, sentiment, reviewer reputation, etc.) and their helpfulness using experiments, surveys, and/or secondary data (e.g., from Yelp.com).

Sharing Food Pictures on Social Media
Many people have taken a photo of their food while eating out and posted it online. This project aims to study the effects of taking and sharing food pictures on the consumer using experiments and/or surveys, and the implications for social media marketing.

Group-Buying Business Model
Group-buying sites, such as Groupon, Yahoo! Deals and LivingSocial, have emerged as popular platforms in social commerce. This project aims to study the factors affecting the success of this emerging e-commerce business model using experiments, surveys, and/or secondary data (e.g., from Groupon).

Illusion on Social Networks
People make decisions and judgments based on their observations of the choices and behaviors of others in their social networks. However, social networks can create the illusion that something is common when it is actually rare. This project aims to study how this social-network illusion can trick the human mind using experiments, surveys, and/or secondary data (e.g., from Twitter).
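The illusion can be made concrete on a small graph: compare how many nodes see a trait in a majority of their neighbours against the trait's true global prevalence. A minimal sketch on a toy star network, where a single highly connected "active" hub makes the trait look common to everyone else:

```python
def majority_illusion(neighbours, active):
    """Return (perceived, actual): the fraction of nodes that see
    the `active` trait in a majority of their neighbours, versus
    the trait's true global prevalence."""
    fooled = 0
    for node, friends in neighbours.items():
        if friends and sum(f in active for f in friends) * 2 > len(friends):
            fooled += 1
    return fooled / len(neighbours), len(active) / len(neighbours)

# Toy star network: one highly connected active hub.
neighbours = {
    "hub": ["a", "b", "c", "d"],
    "a": ["hub"], "b": ["hub"], "c": ["hub"], "d": ["hub"],
}
perceived, actual = majority_illusion(neighbours, active={"hub"})
```

Only 20% of nodes carry the trait, yet 80% of nodes observe it in a majority of their neighbours, which is precisely the illusion the project investigates.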

Emotional Contagion on Social Networks
Emotional contagion refers to the phenomenon of one person’s emotions and related behaviors directly triggering similar emotions and behaviors in others. This project aims to study the contagion of emotional messages in online social networks using secondary data (e.g., from Twitter).

Diffusion of Information on Social Networks
Can information on social networks be used to predict something before it happens? This project aims to study the predictive value of information on social networks using secondary data (e.g., from Twitter).

Projects supervised by Na Liu

Personalized Coaching for health behaviour change
There are many health applications and tools on the market that track users’ health behaviour, provide feedback and promote behaviour change. However, reminders and feedback are gradually ignored by users, and thus become less effective over time.

The project will design and build individual- and group-level motivational mechanisms that provide personalized coaching messages and lead to the formation of a healthy lifestyle.

Minimum requirements: basic web programming skills; mobile application development skills; basic understanding of statistical analysis (e.g. ANOVA, regression).

Health Data Analysis
The project involves analysing health data captured from various sources, such as wearable devices and mobile phones. The patterns/insights derived from the analysis will be used to improve the design of health mobile app and/or health portal.

You will have the opportunity to work on real data and collaborate with clinical clients.

Skills required: Analytical skills, Application development skills, Statistical analysis

Mobile phone addiction and self-diagnosis
More and more people are addicted to their mobile phones, and their lives and work are affected by it. The project will analyse mobile phone usage data and diagnose users’ level of phone addiction. Interventions will be designed and implemented to foster effective and healthy phone usage.

Minimum requirements: mobile app development; basic understanding of statistical analysis (e.g. ANOVA, regression).

Projects supervised by David Lowe

Lab augmentation using heads-up-displays
Investigation of current "heads-up display" technologies and their suitability for use with augmentation of hands-on laboratory experiment activities. The outcome will be a recommendation regarding the most appropriate technological solution, and development of a conceptual design for supporting experiment augmentation.

Lab augmentation via mobile phone apps
Development of a laboratory augmentation prototype that demonstrates the feasibility of using current mobile phones to support augmentation of standard laboratory experiments. The outcome will be a prototype phone app that allows students to point their phone camera at a set of laboratory apparatus and have it supplement the display with additional information related to the apparatus. (Extension) The augmented information varies depending on additional information retrieved live from an external source (nominally connected to the equipment), so that the information represents the current state of the apparatus.

MOOLS: Massive Open Online Labs
This project involves investigation of strategies that allow multiple users to share control of a single item of physical laboratory equipment, with the objective of allowing each user to feel as though they are an active participant in the resultant behaviour of the equipment. The core outcome will be development of the software interface for an online heat transfer experiment that allows gamified shared control of a set of laboratory equipment.

Using virtual reality augmentation to support simultaneous use of physical equipment
This project will adapt concepts from earlier work on augmentation of an experimental environment to allow multiple users to simultaneously undertake experimentation on the same item of laboratory equipment. The equipment will be designed so that each user has their own virtual software agent (manifested just to them using augmented reality) which reacts to the behaviour of the equipment. The outcome will be a simple prototype that demonstrates the feasibility of the approach.

Enhancing laboratory learning through scripted guidance using Smart Sparrow
Investigation of the feasibility of using Smart Sparrow to provide adaptive guidance in carrying out a physical laboratory experiment. This will require consideration of the ways in which the Smart Sparrow adaptation engine can respond to events drawn from the real world (and in particular from the equipment under exploration). The outcome will be an implementation and evaluation of a proof-of-concept prototype and a set of recommendations regarding feasibility and possible design issues.

Projects supervised by Zhiyong Wang

Multimedia Data Summarization
Multimedia data is becoming the biggest form of big data, as technological advances have made it ever easier to produce multimedia content. For example, more than 300 hours of video is uploaded to YouTube every minute. While such a wealth of multimedia data is valuable for deriving insights, it has become extremely time consuming, if not impossible, to watch through a large amount of video content. Multimedia data summarization is to produce a concise yet informative version of a given piece of multimedia content, and is in high demand to help people discover new knowledge in massive amounts of rich multimedia data. This project is to advance this field by developing advanced video content analysis techniques and identifying new applications.
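One simple summarization baseline is keyframe selection: keep a frame whenever its features drift far enough from the last kept frame. The sketch below runs on toy per-frame feature vectors; a real system would extract features from the video itself, and the threshold is an illustrative assumption.

```python
def summarise(frames, threshold=0.5):
    """Pick keyframes whenever the per-frame feature vector drifts
    by more than `threshold` from the last keyframe - a minimal
    shot-change style summary."""
    def dist(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    keyframes = [0]
    for i in range(1, len(frames)):
        if dist(frames[i], frames[keyframes[-1]]) > threshold:
            keyframes.append(i)
    return keyframes

# Toy per-frame features: two static shots with a cut at frame 3.
frames = [[0.0, 0.0], [0.05, 0.0], [0.0, 0.05],
          [1.0, 1.0], [1.0, 0.95]]
keys = summarise(frames)
```

On this toy input the summary keeps one frame per shot, which is the concise-yet-informative property a real summarizer optimizes with far richer content analysis.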

Predictive Analytics of Big Time Series Data
Big time series data are collected to derive insights in almost every field: the clicking/viewing behaviour of users on social media sites, the electricity usage of every household, and traffic flow in transportation, to name a few. Being able to predict the future state of an event is of great importance for effective planning. For example, a social media site such as YouTube could distribute popular video content to its caching servers in advance so that users can start watching with minimal delay. This project will investigate existing algorithms and develop advanced analytic algorithms for higher prediction accuracy.
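A common starting point for the "existing algorithms" the project would survey is an autoregressive (AR) baseline: predict the next value as a linear combination of the previous p values, with coefficients fitted by least squares. This is a minimal sketch, not the project's prescribed method.

```python
import numpy as np

def fit_ar(y, p):
    """Fit an AR(p) model y_t = a_1*y_{t-1} + ... + a_p*y_{t-p}
    by ordinary least squares; returns the coefficient vector."""
    # Each row t holds the p most recent lags (y_{t-1}, ..., y_{t-p}).
    X = np.array([y[t - p:t][::-1] for t in range(p, len(y))])
    Y = y[p:]
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef

def predict_next(y, coef):
    """One-step-ahead forecast from the fitted coefficients."""
    p = len(coef)
    return float(np.dot(coef, y[-1:-p - 1:-1]))
```

Such a baseline gives a prediction-accuracy floor against which more advanced models (e.g. recurrent neural networks) can be compared.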

Human Motion Analysis, Modeling, Animation, and Synthesis
People are the focus of most activities; hence the investigation of human motion has been driven by a wide range of applications such as visual surveillance, 3D animation, novel human computer interaction, sports, and medical diagnosis and treatment. This project will address a number of challenging issues in this area in realistic scenarios, including human tracking, motion detection, recognition, modeling, animation, and synthesis. Students will gain comprehensive knowledge in computer vision (e.g. object segmentation and tracking, and action/event detection and recognition), 3D modeling, computer graphics, and machine learning.

Video Captioning
Video captioning automatically produces meaningful descriptions for a given video, and has a wide range of applications such as information retrieval and next-generation personal assistants. It has been a fundamental challenge in the fields of multimedia computing and artificial intelligence. Recently, there have been significant advances in object recognition and image understanding. This project will explore the forefront of empowering a computer to see and think by developing advanced video content analysis and machine learning techniques.

Projects supervised by Bing Zhou

Parallel computing in large-scale fMRI data with high-dimensions
Functional magnetic resonance imaging (fMRI) is a key technique for mapping human brain activity at precise locations. Neuroimaging data generated from fMRI experiments record whole-brain patterns as voxels whose activity is profiled over a time course. To compare patients with mental illness against healthy individuals, patterns from millions of voxels need to be mapped to specific brain regions using a brain atlas and compared across the time course.

The amount of data generated from fMRI is extremely large in scale and inherently high-dimensional. Parallel and distributed computing is therefore crucial for addressing the computational complexity of fMRI data analysis. In this project, we aim to develop parallel algorithms to compute correlations and associations between different brain regions using high-dimensional fMRI data from patients diagnosed with schizophrenia or bipolar disorder, and to compare these with healthy individuals.
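The core computation described above, pairwise correlations between region-averaged time courses, can be sketched as follows. This is an illustrative toy (region time courses as a NumPy array, pairs farmed out to a thread pool), not the project's actual pipeline, which would shard work across cluster nodes or GPUs.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def region_correlations(ts, workers=4):
    """Pearson correlation for every pair of brain regions.

    ts: array of shape (n_regions, n_timepoints), one row per
    region-averaged fMRI time course. Returns {(i, j): r}.
    """
    # Z-score each region's time course once, up front.
    z = (ts - ts.mean(axis=1, keepdims=True)) / ts.std(axis=1, keepdims=True)
    n_t = ts.shape[1]

    def corr(pair):
        i, j = pair
        # Dot product of z-scored series / length = Pearson r.
        return pair, float(np.dot(z[i], z[j]) / n_t)

    # NumPy releases the GIL inside dot products, so threads already
    # give concurrency; a production version would use processes/MPI.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(corr, combinations(range(ts.shape[0]), 2)))
```

The per-pair structure is what makes the problem embarrassingly parallel: with millions of voxels or thousands of atlas regions, the set of pairs can be partitioned freely across workers.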

This project will enable you to explore cutting-edge parallel computing algorithms for handling neuroimaging data generated by the latest brain imaging techniques, and to shed light on mental illness through this integrative process.

Requirements: This project is mainly about parallel algorithm design, implementation and testing, so it will suit you if you are a strong programmer interested in programming for high-performance computing clusters and GPUs.

This project is in collaboration with Dr Pengyi Yang and Professor Jean Yang (School of Mathematics and Statistics).

Transcription factor network visualisation based on chromatin states
Chromatin states of DNA encode types of regulation and determine the accessibility of DNA to transcription factors. The Encyclopedia of DNA Elements (ENCODE) project has profiled a large collection of histone modifications that together specify chromatin states genome-wide.

We have previously established a parallel computing framework for reconstructing transcription networks by integrating genome-wide binding profiles (ChIP-Seq) of hundreds of transcription factors. This project aims to extend that work by implementing parallel and distributed computing algorithms to reconstruct and visualise transcription factor networks based on chromatin states learnt from histone modification data using a hidden Markov model (HMM).
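To make the HMM step concrete: hidden chromatin states emit observable histone-mark signals, and decoding recovers the most likely state sequence along the genome. The toy below uses two made-up states and a single binarised mark with illustrative probabilities (not fitted to ENCODE data), decoded with the Viterbi algorithm in log space.

```python
import numpy as np

# Toy HMM: two hidden chromatin states emitting a binarised histone
# mark per genomic bin (1 = mark present). All probabilities are
# illustrative assumptions, not values learnt from real data.
states = ["active", "repressed"]
start = np.log([0.5, 0.5])
trans = np.log([[0.9, 0.1],     # chromatin states span long ranges,
                [0.1, 0.9]])    # so self-transitions dominate
emit = np.log([[0.2, 0.8],      # P(mark | active) is high
               [0.8, 0.2]])     # P(mark | repressed) is low

def viterbi(obs):
    """Most likely hidden-state path for a 0/1 observation sequence."""
    v = start + emit[:, obs[0]]          # best log-prob ending in each state
    back = []                            # backpointers per step
    for o in obs[1:]:
        scores = v[:, None] + trans      # scores[i, j]: come from i, go to j
        back.append(scores.argmax(axis=0))
        v = scores.max(axis=0) + emit[:, o]
    path = [int(v.argmax())]
    for b in reversed(back):             # trace the backpointers
        path.append(int(b[path[-1]]))
    path.reverse()
    return [states[s] for s in path]
```

For an observation run of marks followed by no marks, e.g. `[1, 1, 1, 0, 0, 0]`, the decoder segments the sequence into an "active" stretch followed by a "repressed" stretch; the real project would learn the parameters from many histone marks and feed the resulting state segments into the network reconstruction.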

This project will give you the opportunity to develop and apply cutting-edge parallel and distributed computing algorithms and methods to "omics" data generated from state-of-the-art biological platforms. You will be involved in (1) algorithm design, implementation and testing on multicore computers and clusters of PCs; and (2) the design and implementation of an interactive graphical user interface.

Requirements: good programming skills (essential) and experience in graphical user interface development (desirable).

This project is in collaboration with Dr Pengyi Yang (School of Mathematics and Statistics).