
Human-centred technology

Creating accessible and effective technologies for the future

We design, engineer and evaluate new technologies, and conduct fundamental research that reflects the deeply human-centred and collaborative nature of engineering.

Our multidisciplinary teams bring together technical expertise, human-centred research methods and a deep understanding of business, design, education, nutrition, public health, medicine and psychology. At the hardware level, we investigate new device technologies including sensors that detect emotion and affect, and sensors for audio and social robotics. At the software level, we draw on the faculty’s strengths in machine learning and data mining – including social mining and personalisation – as well as natural language understanding and robotics, to create novel interface technologies based on visualisations, natural language, touch and movement.

Technology nodes

Humans in immersive, augmented and virtual environments

Our experts: Associate Professor Craig Jin, Associate Professor Masahiro Takatsuka, Professor Rafael Calvo, Professor Judy Kay, Associate Professor Jinman Kim, Professor Philip Leong, Associate Professor Alistair McEwan

Our collaborators: Dr Martin Tomitsch, Associate Professor Sabina Kleitman

Our research into immersive, augmented and virtual environments ranges from virtual and augmented reality to 3D audio and surface computing based on interactive tabletops, walls and building facades. We aim to create enabling technology that provides new and powerful ways to interact with and access complex data sources (such as visualisations of the effects of global warming) and to sort through complex medical databases. We support diverse application areas such as medical training and tele-proctoring, and large-scale control rooms for coordinating human-to-human interaction at remote sites.

The SYMARE database research explores the relationship between the morphology of human outer ears and their acoustic filtering properties – a relationship that is viewed by many as holding the key to human spatial hearing and the future of 3D personal audio.

It has been estimated that more data has been generated in the past two years than in the entire previous history of the human race. Large-scale visualisation systems offer promise for harnessing that data. Our advanced systems, with immersive visual and auditory displays, support sophisticated user interaction, including natural interaction based on user gestures and facial expressions. This project explores how to bridge the gap between data scientists with expertise in handling big datasets and the domain experts who need to explore large data sets to discover patterns. The work has been applied in a number of health-related big data applications, including sonifications of proteomic and genomic data distributions that enable domain experts to identify neurodegeneration in diseases such as Alzheimer’s and amyotrophic lateral sclerosis (ALS).

In this context, we focus on issues related to natural user interaction, including body movement, gestures and expressions. We develop enabling techniques to support natural interactions among people in a virtual/augmented environment. Our research tackles speaker identification, 3D audio-based speaker localisation, human action recognition and facial expression recognition. Our focus includes challenging scenarios such as coordinating human-human interactions at one remote site and integrating them into a centralised virtual environment.

Network operations research examines the ability to deliver or broadcast immersive, interactive (real-time) AR/VR. Recent advances in computer graphics and display technologies have enabled rapid growth in locally operated AR/VR, where everything is computed and generated locally without requiring internet access. These applications provide immersive and realistic experiences with various types of real-world data. Our work goes beyond current technologies to enable immersive and interactive AR/VR experiences that are rendered and controlled via the internet.

Australia is the guardian of the Great Barrier Reef ecosystem, and human activity is causing CO2 levels to rise, resulting in the acidification of our oceans. Conveying this information to the Australian public is genuinely difficult. We build upon research from the Virtual Human Interaction Laboratory at Stanford University to create a VR simulation showing how a coral reef changes over time as its brilliantly varied and colourful species disappear and are replaced by slimy green algae. The Remote Environment Demonstrator investigates the impact of various immersive VR techniques, and the degree of realism required, to positively influence people’s behaviour towards sustainability.

The ability to produce accurate maps is important in most autonomous vehicle tasks, but it can also be useful for increasing a person’s awareness of their surroundings. In this project, we focus on the real-time generation of 3D surface models from raw sensor information, and the extraction of meaningful features that can be used in a variety of tasks such as user localisation, object detection and obstacle avoidance.

Human-centred data management

Our experts: Associate Professor Uwe Roehm, Professor Alan Fekete, Dr Ralph Holz, Dr Bryn Jeffries, Associate Professor Bob Kummerfeld

Data management is an enabling technology vital to most IT systems that deliver useful services. Data management systems work much better when they take account of the cognitive and physiological characteristics of the people involved: their needs and wishes for managing their data should be central to the design of such systems. These concerns are especially important when there is a trade-off between efficiency, personal privacy and transparency. Recent technology trends provide many ways to collect personal data, supporting personalised services that better address people’s needs and preferences. We focus on how to create data management systems that take account of the people whose data underpins these processes.

As digital objects become increasingly important in people’s lives, people need to understand the provenance (lineage and history) of an important digital object such as those created from large, multi-source collections of personal data. Provenance data is commonly represented as a labelled directed acyclic graph. The challenge is to create effective interfaces onto such graphs so that people can understand the provenance of key digital objects. This unsolved problem is especially challenging in the case of novice and intermittent users as well as complex provenance graphs. We tackle this by creating an interface based on a clustering approach, designed to enable users to view provenance graphs and to simplify complex graphs by combining several nodes.
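As a toy illustration of the clustering idea (the function name, graph representation and example data here are hypothetical, not the project's actual code), collapsing a group of nodes in a labelled provenance DAG into a single summary node while preserving edges to the rest of the graph might be sketched as:

```python
# Hypothetical sketch: simplify a provenance DAG by collapsing a cluster
# of nodes into one summary node, keeping edges to the rest of the graph.
def collapse_cluster(edges, cluster, label):
    """edges: set of (src, dst) pairs; cluster: nodes to merge;
    label: name of the new summary node. Returns the simplified edge set."""
    cluster = set(cluster)
    new_edges = set()
    for src, dst in edges:
        src2 = label if src in cluster else src
        dst2 = label if dst in cluster else dst
        if src2 != dst2:  # drop edges internal to the cluster
            new_edges.add((src2, dst2))
    return new_edges

# Example provenance chain: raw data -> cleaned -> aggregated -> report
edges = {("raw", "clean"), ("clean", "agg"), ("agg", "report")}
simplified = collapse_cluster(edges, {"clean", "agg"}, "processing")
# simplified == {("raw", "processing"), ("processing", "report")}
```

A novice user would then see a single "processing" node in place of the intermediate steps, and could expand it on demand.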

This area involves the analysis of the data privacy issues in cloud computing from an Australian perspective. Its focus concerns two aspects: policy compliance and user awareness. As cloud computing gains traction, there is a call for a more in-depth study into the technology’s privacy implications for users. Expected outcomes include the revelation of privacy loopholes in cloud services that might be exploited. In addition, by bringing this issue to public awareness, we hope that people in Australia will be better equipped to make informed decisions regarding their use of cloud computing technologies.

This area concerns the development of a collaborative genome browser that allows a team of scientists to browse and annotate a shared genome data set collaboratively via a web browser interface, including a facility for text-based interaction.

Here we assess whether the unique features of touch interfaces can enhance the usability of database systems for novice users. Usability, particularly for average computer users, is a major topic in the database community. Touch interfaces may represent a powerful new avenue for designers and researchers to pursue. We have therefore developed a visual query language for relational databases optimised for touch interfaces such as those found in modern tablets and surface computing devices.

Data analytics and visualisation for a circular waste economy

Our expert: Associate Professor Ali Abbas

With an increased focus on the application of big data to solve commercial problems in the digital age, there are opportunities to apply analytic techniques to waste management at a national level. However, there is a well-known problem with the accuracy and completeness of waste data in Australia. Applications that track household or business waste could provide more information and pass knowledge to consumers for better decision-making. Data visualisation techniques could also provide insight and share information about the current state of waste in Australia. These techniques can inform policymakers and waste service providers, while also helping individual households manage their waste.

Human-centred cybersecurity

Our experts: Dr Ralph Holz, Associate Professor Uwe Roehm

We work on understanding and improving real-world security in the digital and networked world. We carry out empirical studies to identify and mitigate critical flaws that arise either from the technology itself or from the way it is being used.

This research group obtains empirical data from passive and active measurements, mining that data to assess the security of a range of internet technologies and to derive insights for building more secure and user-friendly systems.

We review research into current phishing vulnerabilities in cyberspace. Because it manipulates humans rather than machines, phishing is one of the largest vulnerabilities in online defence systems. This research furthers the development of methods for testing human vulnerability to phishing.

Human-centred data science for health and education

Our experts: Associate Professor Kalina Yacef, Associate Professor Irena Koprinska, Professor Rafael Calvo, Professor Judy Kay

Our industry partners: Cancer Council, BePATIENT

Health and education are rich application areas for data mining and data science, due to their wide adoption of technology to support their processes. The unprecedented amount of rich data captured as people interact through or with technology (such as eLearning, eHealth applications, use of wearable devices) offers the potential to discover and follow human processes and patterns and better support data-driven decision making. To achieve this potential, human-centred approaches are crucial. We develop computational techniques and user interfaces that enable people to extract relevant and useful information from these large, multimodal, dynamic data sets.

These projects explore how to better support students learning computer programming through mining their own and past students’ code submissions. We create innovative data mining techniques that provide insights into their learning behaviours, as well as automated and immediate personalised hints should students get stuck.

This interdisciplinary project explores the role that food prepared outside the home versus within the home plays in the diets of young adults. Our original approach uses a newly designed and tested smartphone application that transforms data collection and processing, and involves continuous digital photography. The objective is to create novel data mining techniques to advance experts’ understanding of the outlets where young adults buy food, when they buy it and the nutritional composition and overall contribution to the diet of these foods. The knowledge gained will form the evidence base for policy formulation and electronic and mobile-health promotion to reverse the problem of young adults gaining more weight than any other group in Australia.

The iEngage project leverages new technologies to promote healthier behaviours in children with regard to physical activity and nutrition. It provides children with the information, education and skills to achieve their physical activity and nutrition goals. As a digital platform, it connects with activity trackers to provide continuous feedback and summarise daily activity on a dashboard. The richness of the data collected can help various stakeholders:

– create an unprecedented landscape of health knowledge and actual behaviours

– understand and monitor the impact of the program

– provide further personalisation.

A lack of social relationships has been shown to be an important contributing factor in attrition from MOOCs. This project aims to alleviate this by helping students connect with other students, enhancing MOOCs with a peer-recommender system that encourages interaction, thus improving learning and reducing attrition. It explores various recommendation strategies and different ways to integrate the peer recommender into the MOOC learning process.

Innovative information and education technologies

Our experts: Professor Judy Kay, Associate Professor Bob Kummerfeld, Associate Professor Kalina Yacef, Associate Professor Irena Koprinska

Our collaborators: Associate Professor Rosanne Quinnell, Dr Danny Liu

Our research into innovative tools and methods for education equips and prepares university and school students with the skills and experiences they need to achieve their goals and succeed in the workforce.

This project addresses plant blindness, a condition in which people see only green when they look at a diverse community of plants and are unable to discern the differences between plant families, species and individuals. The work has produced the CampusFlora app, which helps biology students and other enthusiasts become familiar with the key characteristics of plants on the University grounds. Uptake of the app has extended beyond the core user group to the broader University community.

MadMaker is a six-week online challenge in which high school students solve science, technology, engineering and mathematics (STEM) problems by programming an embedded system: the Arduino Esplora board. The board can detect movement and changes in light and temperature, and can control other devices. This online activity is an ideal and gentle introduction to how STEM fields connect to real-world problems.

We explore how to better support students learning computer programming through mining their own and past students’ code submissions. We create innovative data mining techniques to provide insights into their learning behaviours, as well as automated and immediate personalised hints when students are stuck.

Metacognitive skills are important for students to develop as they are fundamental to successful learning experiences. We are promoting metacognitive methods with new student-facing and teacher-facing data dashboards that harness the data collected during semester. The goal is to help students meet their own goals for the courses they are studying by promoting effective metacognitive reflection and planning methods. Our dashboard will allow students to compare their own performance to aggregated data from previous years. 
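A minimal sketch of the comparison idea behind such a dashboard, with made-up marks and hypothetical names: a student's mark is placed against the aggregated distribution from previous cohorts as a percentile rank.

```python
# Illustrative sketch only (hypothetical names, made-up cohort data):
# place a student's mark relative to aggregated marks from previous years.
def percentile_rank(mark, past_marks):
    """Percentage of past marks at or below this mark."""
    below = sum(1 for m in past_marks if m <= mark)
    return 100.0 * below / len(past_marks)

past_marks = [45, 52, 58, 61, 64, 67, 70, 74, 78, 85]  # invented data
rank = percentile_rank(67, past_marks)
# rank == 60.0: six of the ten past marks are at or below 67
```

A real dashboard would of course aggregate many such signals over a semester; the point here is only that the comparison reduces to a simple statistic over historical data.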

We turn unstructured news stories into computable data that can be used to build rich applications where monitoring or searching for entities is important. This unlocks information in text for online reputation management, trading on capital markets and the electronic discovery of legal evidence.
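As a toy illustration only (real entity extraction uses trained models, not patterns like this), pulling candidate entity mentions out of a headline might be sketched as:

```python
import re

# Toy sketch: treat runs of two or more capitalised words as candidate
# entity mentions. Real systems use trained NER models; this is only
# meant to show what "computable data from text" can look like.
def candidate_entities(text):
    pattern = r"\b(?:[A-Z][a-z]+)(?:\s[A-Z][a-z]+)+\b"
    return re.findall(pattern, text)

headline = "Acme Corp agreed to acquire Widget Industries on Monday"
candidates = candidate_entities(headline)
# candidates == ["Acme Corp", "Widget Industries"]
```

Once mentions are extracted and linked to known entities, the text becomes data that monitoring and search applications can query.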

This project investigates low-cost approaches to transport infrastructure expansion through the use of digital information technologies that work in combination with existing hard infrastructures. Our aim is to improve passenger flow through facilities and distribute loads across services by providing practical and novel forms of information to passengers.

Human-centred technology for health and nutrition

Our research aims to engineer new technology so that it addresses people’s health needs. We develop custom technology and mobile applications with the goal of promoting lifelong health and wellness.

Our experts: Professor Judy Kay, Professor Alistair McEwan

Our collaborators: Professor Margaret Allman-Farinelli

We demonstrate that cochlear implants perform considerably better when the electrode array is placed closer to the inner wall of the cochlea. Currently, there are no satisfactory non-invasive visualisation techniques that give the surgeon visual feedback during an insertion procedure.

Currently, there are no cheap, easy-to-use devices to detect neonatal malnutrition, which increases an infant’s susceptibility to infectious and non-infectious diseases after birth. With funding from the Bill and Melinda Gates Foundation, we are developing a device to tackle this problem.

The way we interact with technology can be extended to establish relationships with users and to support their decision-making and reflection processes. This work explores the interactions best able to foster these in patients with a chronic disease as they transition from paediatric care (high supervision) to adult care (self-managed care).

VegeChamp is a theory-based mobile gaming intervention designed to motivate young adults to improve their vegetable intake by enhancing their self-efficacy for cooking with vegetables and providing quick and easy ways to include them in their diet. 

The e-DIA allows dietary intake data to be recorded in real time and is a valid method of measuring group dietary intake at both the nutrient and food group level. The database will be updated to include a large selection of Australian fast food outlets and café chains, and future work will include a collaborative study on young adults’ habits with regard to fast food and eating out.

Usable interfaces for novel technologies

Our research into bespoke and novel interfaces for new technologies sits at the intersection of technocentric engineering and the specific context of the end user. We evaluate novel technologies as they are actually used in situ, with the aim of engineering new technologies that fulfil the needs and aspirations of users.

Our experts: Professor Judy Kay, Associate Professor Masahiro Takatsuka, Associate Professor Fabio Ramos

The Smart Glass system is an automatic activity recognition system that acts as a cognitive aid for people affected by dementia or stroke. Based on sensors embedded in glasses and machine learning techniques, the system will automatically detect activities and provide feedback to patients, caregivers and rehabilitation professionals, enabling tailored treatment that improves rehabilitation.

This project transforms walls into interactive information displays that people control with gestures. It is used for in-the-wild studies, as well as diverse commercial deployments in the creative industries, sales and marketing.

To interact with untrained members of the general public, robots need to behave in ways that are intuitive and readily understood. Here we investigate whether human touch on a robot can be recognised automatically.

This is a set of innovative technologies and processes for creating real-time 3D digital content for holograms and other 3D display systems. It includes virtual tools that enable the direct hand-drawing of subjects, mark by mark.