
Cybersecurity Cluster

Strengthening the defences against next generation cyber threats

Our research addresses security and privacy challenges posed by our increasingly "online lives", where unprecedented volumes of personal information are transferred over the Internet and stored in the cloud.

Our research

Cybersecurity is an ongoing arms race between attackers and security researchers. It is important not only to build algorithms, tools, and systems that address specific scenarios, but also to update and adapt them to the ever-changing cyber threat landscape.

Our team has strong expertise in characterising and exposing cyber risks through empirical studies, developing AI and machine learning based security solutions, and building secure systems.

Key projects

Our expert: Associate Professor Vincent Gramoli

Web3 promises to revolutionise the economy by letting users provide services to others without the need for centralised institutions. Unfortunately, Web3 operates on blockchain systems that are insecure and whose participants are not held accountable for their actions.

Redbelly is a blockchain system that offers security and performance for both UTXO and account models. Its security stems from its deterministic consensus protocol, called Democratic BFT (DBFT), which prevents forks, and from its formal verification with a parameterised model. Its performance comes from its superblock optimisation, which combines the proposed blocks instead of selecting one and discarding the others, and from its lightweight validation. On the Diablo benchmarking framework, Redbelly outperforms six mainstream blockchains.
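
To illustrate the superblock idea in isolation, the following Python sketch combines every proposed block instead of keeping a single winner; the block representation and deduplication rule are simplifications for illustration, not Redbelly's actual code.

    # Illustrative sketch of a superblock combination step (assumed data
    # model, not Redbelly's implementation): each consensus participant
    # proposes a block of transactions, and instead of committing one
    # proposal and discarding the rest, the superblock concatenates all
    # of them, deduplicating repeated transactions.

    def combine_superblock(proposals):
        """proposals: list of transaction-id lists, one per proposer."""
        seen = set()
        superblock = []
        for block in proposals:        # deterministic proposer order
            for tx in block:
                if tx not in seen:     # drop duplicates across proposals
                    seen.add(tx)
                    superblock.append(tx)
        return superblock

    # Three proposers: a single-winner protocol commits one block of two
    # transactions; the superblock commits all five distinct ones.
    print(combine_superblock([["t1", "t2"], ["t2", "t3"], ["t4", "t5"]]))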

This project's goal is to create an accountable version of Web3 by adding accountability to the Redbelly Blockchain system, which features a scalable variant of the Ethereum virtual machine.

This project is partially funded by a grant from the Ethereum Foundation.

Our expert: Associate Professor Vincent Gramoli

Our partner: Rachid Guerraoui (Swiss Federal Institute of Technology, Lausanne)

Diablo is a benchmark suite for evaluating blockchain systems under the same conditions. It was developed in a partnership between the University of Sydney and the Swiss Federal Institute of Technology Lausanne (EPFL) to evaluate blockchain and distributed ledger technologies when running realistic applications. The name Diablo stems from DIstributed Analytical BLOckchain benchmark. We are currently extending Diablo to test the security vulnerabilities of blockchain systems by fuzzing, injecting faults, and implementing malicious behaviours (51% attacks, the attack of the clones, the balance attack, etc.).
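
As a hedged illustration of the fault-injection direction (the message-sending hook and fault rates below are hypothetical, not Diablo's actual interface), the following Python sketch wraps a node's transport so that a configurable fraction of messages is dropped or duplicated, mimicking crash-faulty or malicious participants during a benchmark run.

    import random

    def faulty_send(send, drop_rate=0.1, dup_rate=0.05, seed=42):
        """Wrap a send(msg, peer) function with probabilistic faults."""
        rng = random.Random(seed)
        def wrapped(msg, peer):
            r = rng.random()
            if r < drop_rate:
                return                 # message silently dropped
            send(msg, peer)
            if r > 1 - dup_rate:
                send(msg, peer)        # message duplicated
        return wrapped

    # Usage with a stand-in transport that just logs deliveries:
    log = []
    send = faulty_send(lambda msg, peer: log.append((peer, msg)))
    for i in range(10):
        send(f"block-{i}", "node-2")
    print(len(log), "messages delivered out of 10 sent")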

We believe the results will yield insights that help improve existing blockchain technologies and protect their users.

Our experts: Dr Rahul Gopinath, Dr Sasha Rubin

Our partner: Andreas Zeller (CISPA Helmholtz Center for Information Security and Privacy, Germany)

To fuzz a program effectively, we need its input specification. However, such specifications are rarely available, and when they are, they are often inaccurate or obsolete. While extracting the input specification from the program source is possible, the task becomes difficult when the source code is unavailable. This project aims to leverage the side-channel information that input processors often reveal on rejected inputs to infer the input specification such programs implement.
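
As a minimal sketch of this idea, the Python code below infers a valid input for a toy black-box parser whose only observable side channel is the offset of the first rejected character, the kind of error position many real input processors report; the parser and alphabet are made up for illustration.

    import string
    from collections import deque

    def parse(s):
        """Toy black-box parser for inputs of the form digits '+' digits.
        Returns -1 on acceptance, else the offset of the first offending
        character, mimicking the error positions real processors leak."""
        i = 0
        while i < len(s) and s[i].isdigit():
            i += 1
        if i == 0 or i == len(s):
            return 0 if i == 0 else len(s)   # nothing consumed / incomplete
        if s[i] != "+":
            return i
        j = i + 1
        while j < len(s) and s[j].isdigit():
            j += 1
        if j == i + 1:
            return j                         # '+' not followed by a digit
        return -1 if j == len(s) else j

    def infer_valid_input(alphabet, max_len=5):
        """Breadth-first search for an accepted input, pruned by the side
        channel: a prefix is only extended if the parser consumed all of
        it (the reported error points past its last character)."""
        queue = deque([""])
        while queue:
            prefix = queue.popleft()
            for c in alphabet:
                candidate = prefix + c
                err = parse(candidate)
                if err == -1:
                    return candidate         # accepted input found
                if err == len(candidate) and len(candidate) < max_len:
                    queue.append(candidate)  # fully consumed: keep growing
        return None

    print(infer_valid_input(string.digits + "+"))   # finds '0+0'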

Our experts: Dr Rahul Gopinath, Dr Xi Wu

Our partners: Behnaz Hassanshahi, Paddy Krishnan (Oracle, Australia)

Supply chains are one of the emerging vectors for software exploitation. The problem is that a modern application may have, on average, 500 external dependencies, and a vulnerability in any of these represents a potential threat to the security of the application. This project aims to enhance the security of software systems by automatically recovering the SLSA specifications of a program, allowing a developer to make informed assumptions about the program's susceptibility to external threats through its supply chain.
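
The sketch below shows the kind of audit that recovered provenance enables; the data layout is a simplified stand-in for real SLSA provenance, and the package names and "slsa_level" field are illustrative assumptions.

    # Hedged sketch: flag dependencies whose recovered build provenance
    # is missing or below the required SLSA level. The data model is a
    # simplification; real SLSA provenance is a signed attestation.

    REQUIRED_LEVEL = 2

    dependencies = {
        "libfoo": {"slsa_level": 3, "builder": "ci.example.org"},
        "libbar": {"slsa_level": 1, "builder": "unknown"},
        "libbaz": None,   # no provenance could be recovered at all
    }

    def audit(deps, required=REQUIRED_LEVEL):
        for name, prov in deps.items():
            if prov is None:
                print(f"{name}: no provenance, treat as untrusted")
            elif prov["slsa_level"] < required:
                print(f"{name}: SLSA level {prov['slsa_level']} below {required}")
            else:
                print(f"{name}: ok (built by {prov['builder']})")

    audit(dependencies)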

Our expert: Dr Rahul Gopinath

Our partner: Associate Professor Jens Dietrich (Victoria University of Wellington)

Modern software systems rely on complex algorithms for data processing and often contain worst-case behaviours that adversaries can exploit for denial-of-service and other attacks. This project investigates how to identify and eliminate such vulnerabilities in given programs.
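
A classic instance of such worst-case behaviour is catastrophic backtracking in regular-expression matching ("ReDoS"); the standard textbook example below, which is not drawn from the project itself, shows the super-linear blow-up a short crafted input can trigger.

    import re
    import time

    pattern = re.compile(r"^(a+)+$")   # nested quantifiers: exponentially
                                       # many ways to split the 'a's

    for n in (18, 21, 24):
        attack = "a" * n + "!"         # trailing '!' forces full backtracking
        start = time.perf_counter()
        pattern.match(attack)
        print(f"n={n}: {time.perf_counter() - start:.2f}s")

    # The runtime roughly doubles with each extra character, so a small
    # crafted input can pin a server's CPU: an algorithmic-complexity DoS.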

Our expert: Dr Suranga Seneviratne

Our partners: Professor Aruna Seneviratne (University of New South Wales), Professor Sanjay Chawla (Qatar Computing Research Institute)

PhD students: Bhanuka Malith De Silva, Dishanika Denipitiyage, Nishavi Ranaweera (UNSW), Akila Niroshan (UNSW)

This project aims to develop a novel framework to detect content and privacy malpractices perpetrated by thousands of mobile apps. It will use innovative models and algorithms to achieve unprecedented levels of automation and scalability, making it possible for the first time to identify compliance violations across the global app ecosystem. Outcomes will include a knowledge base of prevalent app malpractices, detection algorithms, and a software framework for scalable app analysis. New evidence and tools will benefit both Australian and global policymakers and regulators in combating malpractices, users in identifying safe mobile apps for themselves, and local and global app market stakeholders in being more diligent about compliance.

Our expert: Dr Suranga Seneviratne

Our partners: Dr Caren Han (University of Western Australia), Ben Doyle (Thales Australia)

PhD student: Fariza Rashid

Industry partner: Defence Innovation Network

Cyber threat intelligence has evolved significantly over the last few years, and many organisations now share intelligence with their peers. Many commercial solutions support both human-in-the-loop and machine-to-machine threat intelligence sharing. Despite its importance in increasing operational efficiency, the sharing of benign intelligence remains largely unexplored. To this end, we are developing learning models and algorithms that extract information from security analyst reports and other online reports on benign events and automatically convert it to formats suitable for machine-to-machine intelligence sharing. We will also use explainable AI techniques to further optimise the threat analysis process.
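
As a hedged sketch of the final conversion step, the Python code below wraps an already-extracted indicator in a STIX-2.1-style object, a common machine-to-machine sharing format; the extraction itself (the learned part of the project) is stubbed out, and the field values are illustrative.

    import json
    import uuid
    from datetime import datetime, timezone

    def to_stix_indicator(ioc, description):
        """Wrap an extracted indicator of compromise in a STIX-style object."""
        now = datetime.now(timezone.utc).isoformat()
        return {
            "type": "indicator",
            "spec_version": "2.1",
            "id": f"indicator--{uuid.uuid4()}",
            "created": now,
            "modified": now,
            "description": description,
            "pattern": f"[ipv4-addr:value = '{ioc}']",
            "pattern_type": "stix",
            "valid_from": now,
        }

    # Pretend an upstream extraction model produced this from a report:
    extracted = {"ioc": "203.0.113.7",
                 "description": "benign scanner observed in analyst report"}
    print(json.dumps(to_stix_indicator(**extracted), indent=2))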

Our experts: Dr Charika Weerasiriwardhane, Dr Suranga Seneviratne

Large-scale class imbalance can adversely affect the performance of deep learning algorithms. To improve model reliability, we need strong generalisation on minority classes. In this research, we investigate reweighting the model's loss function based on sample characteristics (e.g. label, hard negatives, easy positives) to minimise a margin-based generalisation bound. The reweighting approach is intended to be generic so that it can be integrated with standard loss functions such as the hinge loss. We also devise techniques to overcome the barriers to optimising a reweighted loss function: we propose a tight relaxation of the problem and conduct the optimisation in stages. We plan to test the proposed framework on binary classification scenarios in cybersecurity applications such as spam filtering, and on multi-class classification tasks with vision benchmark datasets.
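
A minimal sketch of per-sample reweighting is given below; the inverse-class-frequency weights are a simple stand-in, since the project derives its weights from a margin-based generalisation bound rather than from class frequencies alone.

    import numpy as np

    def reweighted_hinge_loss(scores, labels):
        """scores: real-valued model outputs; labels: +/-1 class labels."""
        margins = labels * scores
        losses = np.maximum(0.0, 1.0 - margins)    # standard hinge loss
        # Stand-in weighting: upweight the minority class by inverse
        # frequency so each class contributes equally to the mean loss.
        n_pos = max((labels == 1).sum(), 1)
        n_neg = max((labels == -1).sum(), 1)
        weights = np.where(labels == 1,
                           len(labels) / (2 * n_pos),
                           len(labels) / (2 * n_neg))
        return np.mean(weights * losses)

    # 90/10 imbalance: each rare positive carries 9x the weight of a negative.
    rng = np.random.default_rng(0)
    labels = np.where(rng.random(1000) < 0.1, 1, -1)
    scores = rng.normal(size=1000)
    print(reweighted_hinge_loss(scores, labels))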

Our expert: Dr Suranga Seneviratne

PhD student: Naveen Karunanayake

With DNNs widely deployed in many mission-critical and personalised applications, it is critical to avoid misbehaviour and information leakage by these models. In an era where global privacy regulations for traditional data access and collection are tightening (such as the GDPR, the General Data Protection Regulation, and the CCPA, the California Consumer Privacy Act), organisations, developers, regulators, and many other stakeholders have no formal understanding of what to expect when attacks on DNN-based systems increase, or of how to protect and regulate such systems. A seemingly innocuous DNN deployment could leak confidential details about the finances, health, and biometrics of millions of people who intentionally or unintentionally provided data to build the model. An undefended model deployed in the real world can also make incorrect or harmful decisions when fed carefully crafted adversarial examples or out-of-distribution data. Models such as DNNs must come with certified guarantees for users to have the confidence to trust AI tools and for regulators to regulate them effectively.
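
The adversarial-example phenomenon itself is easy to demonstrate; the sketch below applies the well-known fast gradient sign method (FGSM) to a toy linear classifier, where the attack direction can be written in closed form. Real attacks target deep networks, but the mechanics are the same.

    import numpy as np

    rng = np.random.default_rng(1)
    w = rng.normal(size=100)             # toy linear model: predict sign(w . x)
    x = rng.normal(size=100)
    x *= np.sign(w @ x)                  # make the clean input a +1 example

    # FGSM step: for a linear model the loss gradient w.r.t. x is
    # proportional to w, so the attack shifts every coordinate by eps in
    # the direction -sign(w). Pick eps just large enough to cross the
    # decision boundary.
    eps = 1.5 * (w @ x) / np.abs(w).sum()
    x_adv = x - eps * np.sign(w)

    print(f"perturbation budget eps = {eps:.3f}")   # small vs typical |x_i| of 0.8
    print("clean prediction:      ", int(np.sign(w @ x)))
    print("adversarial prediction:", int(np.sign(w @ x_adv)))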

Our expert: Dr Clément Canonne

Performing statistical analysis and machine learning tasks on massively distributed data is now routine and ubiquitous; at the same time, the need to guarantee the privacy of this data, which very often includes sensitive information such as medical or personal records, has become increasingly important. This project aims to design and develop practical, simple, data-efficient, and versatile building blocks for key machine learning tasks on personal or sensitive data, providing sound and rigorous privacy guarantees.

We focus on developing such building blocks for three privacy settings, corresponding to three different threat models: central and local differential privacy, and shuffle privacy. This project is supported by an unrestricted gift from Google.
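
In the local model, for instance, each user randomises their own data before it ever leaves their device; the classic randomised response mechanism below is a standard textbook building block of this kind, with the epsilon value chosen purely for illustration.

    import math
    import random

    def randomised_response(bit, epsilon):
        """Report the true bit with probability e^eps / (e^eps + 1)."""
        p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
        return bit if random.random() < p_truth else 1 - bit

    def estimate_mean(reports, epsilon):
        """Debias the aggregate of the randomised reports."""
        p = math.exp(epsilon) / (math.exp(epsilon) + 1)
        observed = sum(reports) / len(reports)
        return (observed - (1 - p)) / (2 * p - 1)

    random.seed(0)
    epsilon = 1.0
    true_bits = [1] * 300 + [0] * 700    # true mean is 0.3
    reports = [randomised_response(b, epsilon) for b in true_bits]
    print(f"private estimate of the mean: {estimate_mean(reports, epsilon):.3f}")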

Our researchers

Study cybersecurity

Our Master of Cybersecurity equips graduates with an expert-level understanding of leading attack and defence techniques, the ability to assess the security of networked systems, and the skills to apply cybersecurity strategies at the organisational level.
