The Machine Learning Research Network brings together researchers from across the University who are passionate about machine learning and its applications. We foster collaborative, multidisciplinary research to achieve international excellence and drive broader scientific and technological innovation.
The University of Sydney has a strong and diverse machine learning research community, spanning multiple disciplines and departments, including but not limited to the School of Computer Science, School of Mathematics and Statistics, and the Business School. Researchers across these faculties explore various aspects of machine learning, from mathematical foundations and theoretical advancements to algorithm development and real-world applications.
This breadth of expertise supports interdisciplinary research that extends beyond traditional computing fields into business, agriculture, transportation, and healthcare, among other domains. As machine learning continues to transform industries and scientific disciplines, the need for a unified, collaborative research network has become increasingly evident.
The Machine Learning Research Network was established to foster collaboration, facilitate knowledge exchange, and drive international excellence in AI and machine learning research.
By bringing together experts from diverse fields, the network aims to encourage cross-disciplinary partnerships, enabling novel approaches to solving complex problems. It supports fundamental research in machine learning theory and algorithms while also promoting the practical application of machine learning innovations in various sectors.
Through workshops, joint research projects, and engagement with industry partners, the network enhances opportunities for innovation, strengthens the University's leadership in AI and machine learning, and contributes to the broader scientific and technological community.
Date: Monday 14 April 2025, 11 AM – 12 PM
Venue: F23.01.105, Michael Spence Building
About: We have witnessed a rapid progression from traditional machine learning models for prediction and classification, to large language models (LLMs) and other foundation models for generative AI, to compositions of LLMs and other components for autonomous agentic AI.
Although the details have evolved, what has remained constant throughout this progression is the societal need for the technology to be human-centered and trustworthy. Such AI systems deliver sufficient basic performance, reliability, human interaction, and aligned purpose, while maintaining human agency and dignity.
In this talk, we will explain how these high-level characteristics are specified for traditional machine learning, LLMs, and agentic AI. We will conclude with a discussion of how LLM-based agentic AI does not yet have a satisfactory theoretical grounding, and how a systems theory may enable better prediction, control, and optimization.
About the speaker: Kush R. Varshney was born in Syracuse, New York in 1982. He received the B.S. degree (magna cum laude) in electrical and computer engineering with honors from Cornell University, Ithaca, New York, in 2004. He received the S.M. degree in 2006 and the Ph.D. degree in 2010, both in electrical engineering and computer science from the Massachusetts Institute of Technology (MIT), Cambridge.
While at MIT, he was a National Science Foundation Graduate Research Fellow. Dr. Varshney is an IBM Fellow based at the Thomas J. Watson Research Center, Yorktown Heights, NY, where he directs Human-Centered Trustworthy AI research. He was a visiting scientist at IBM Research - Africa, Nairobi, Kenya, in 2019, and was the founding co-director of the IBM Science for Social Good initiative from 2015 to 2023. He applies data science and predictive analytics to human capital management, healthcare, olfaction, computational creativity, public affairs, international development, and algorithmic fairness, work that has earned the Extraordinary IBM Research Technical Accomplishment for contributions to workforce innovation and enterprise transformation, as well as IBM Corporate Technical Awards for Trustworthy AI and for AI-Powered Employee Journey.
He and his team created several well-known open-source toolkits, including AI Fairness 360, AI Explainability 360, Uncertainty Quantification 360, and AI FactSheets 360. AI Fairness 360 has been recognized by the Harvard Kennedy School's Belfer Center as a tech spotlight runner-up and by the Falling Walls Science Symposium as a winning science and innovation management breakthrough.
He conducts academic research on the theory and methods of trustworthy machine learning. His work has been recognized through paper awards at the Fusion 2009, SOLI 2013, KDD 2014, and SDM 2015 conferences, and through the 2019 Computing Community Consortium / Schmidt Futures Computer Science for Social Good White Paper Competition. He independently published a book entitled 'Trustworthy Machine Learning' in 2022. He is a fellow of the IEEE.