Complex Systems Seminars

Percolation, Cascades, and Control of Networks

Date: Monday 3 April 2017
Time: 10:30am - 12pm
Venue: Civil Eng J05 Conf Room 438

Presenter: Prof. Raissa D'Souza, University of California, Davis

Abstract: Networks are at the core of modern society, spanning physical, biological and social systems. Each distinct network is typically a complex system, shaped by the collective action of individual agents and displaying emergent behaviors. Moreover, collections of these complex networks often interact and depend upon one another, which can lead to unanticipated consequences such as cascading failures and novel phase transitions. Simple mathematical models of networks, grounded in techniques from statistical physics, can provide important insights into such phenomena. Here we will cover several such models, beginning with control of phase transitions in an individual network and the novel classes of percolation phase transitions that result from repeated, small interventions intended to delay the transition. We will then move on to modeling phenomena in coupled networks, including cascading failures, catastrophe-hopping and optimal interdependence.
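
The "repeated, small interventions intended to delay the transition" can be illustrated with the Achlioptas product rule, one of the best-known interventions of this kind (shown here as a generic illustration, not necessarily the exact model covered in the talk): at each step two candidate edges are drawn at random and only the one joining the components with the smaller size product is kept. A minimal Python sketch using union-find:

```python
import random

def _find(parent, x):
    # path-halving union-find lookup
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def product_rule_percolation(n=1000, steps=1000, seed=0):
    """Grow a graph one edge at a time; at each step two random candidate
    edges are examined and the one whose endpoint components have the
    smaller size product is kept (the Achlioptas 'product rule')."""
    rng = random.Random(seed)
    parent, size = list(range(n)), [1] * n
    giant, history = 1, []
    for _ in range(steps):
        candidates = [(rng.randrange(n), rng.randrange(n)) for _ in range(2)]
        # keep the candidate edge with the smaller component-size product
        u, v = min(candidates,
                   key=lambda e: size[_find(parent, e[0])] * size[_find(parent, e[1])])
        ru, rv = _find(parent, u), _find(parent, v)
        if ru != rv:
            if size[ru] < size[rv]:
                ru, rv = rv, ru
            parent[rv] = ru
            size[ru] += size[rv]
            giant = max(giant, size[ru])
        history.append(giant)   # size of the largest component over time
    return history
```

Compared with adding edges uniformly at random, this rule keeps the largest component small for longer, and the eventual transition is correspondingly more abrupt.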

Mathematical Aspects of Embodied Intelligence

Date: Wednesday 12 April 2017
Time: 10:30am - 12pm
Venue: Civil Eng J05 Conf Room 438

Presenter: Prof. Nihat Ay

Abstract: I will present recent results on the design of embodied systems with concise control architectures, formalising the notion of "cheap design" within the field of embodied intelligence. This notion highlights the fact that high behavioural complexity, seen from the external observer perspective, does not necessarily imply high control complexity. This complexity gap is a result of two different frames of reference, which is closely related to Uexküll's Umwelt concept. If time allows, I will present a measure-theoretic formalisation of this concept and discuss its implications.

Information Geometry and its Application to Complexity Theory

Date: Wednesday 19 April 2017
Time: 2pm
Venue: Carslaw Building, Access Grid Room, 8th floor

Presenter: Prof. Nihat Ay

Abstract: In the first part of my talk, I will review information-geometric structures and highlight the important role of divergences. I will present a novel approach to canonical divergences which extends the classical definition and recovers, in particular, the well-known Kullback-Leibler divergence and its relation to the Fisher-Rao metric and the Amari-Chentsov tensor.
Divergences also play an important role within a geometric approach to complexity. This approach is based on the general understanding that the complexity of a system can be quantified as the extent to which it is more than the sum of its parts. In the second part of my talk, I will motivate this approach and review corresponding work.
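
The relation between the Kullback-Leibler divergence and the Fisher-Rao metric mentioned above can be checked numerically in the simplest case, a Bernoulli family: to second order, KL(theta || theta + eps) is half the Fisher quadratic form, with Fisher information I(theta) = 1/(theta(1 - theta)). A small illustrative sketch (not taken from the talk):

```python
import math

def kl_bernoulli(p, q):
    """Kullback-Leibler divergence (in nats) between Bernoulli(p) and Bernoulli(q)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

theta, eps = 0.3, 1e-3
fisher = 1.0 / (theta * (1 - theta))   # Fisher information of the Bernoulli family
quadratic = 0.5 * fisher * eps ** 2    # half the Fisher quadratic form
exact = kl_bernoulli(theta, theta + eps)
# `exact` and `quadratic` agree to about 0.1%: locally, the KL divergence
# is the squared length of the perturbation in the Fisher-Rao metric.
```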

A Framework and Language for Complex Adaptive System Modeling and Simulation

Date: Wednesday 7 Dec 2016
Time: 10:30am
Venue: Building J05 Civil Engineering, Conference Room 438

Presenters: Lachlan Birdsey and Dr Claudia Szabo, from the Centre for Distributed and Intelligent Technologies – Complex Systems Program at The University of Adelaide.

Abstract: Complex adaptive systems (CAS) exhibit properties beyond those of complex systems, such as self-organization, adaptability and modularity. Designing models of CAS is typically a non-trivial task, as many components are made up of sub-components and rely on a large number of complex interactions. Studying features of these models also requires bespoke work for each system.
Moreover, running these models as simulations with a large number of entities requires a large amount of processing power.
We propose a language, the Complex Adaptive Systems Language (CASL), and a framework to handle these issues. In particular, an extension to CASL that introduces the concept of 'semantic grouping' allows large-scale simulations to execute on relatively modest hardware. A component of our framework, the observation module, aims to provide an extensible set of metrics to study key features of CAS such as aggregation, adaptability, and modularity, while also allowing for more domain-specific techniques.
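
The abstract does not spell out how semantic grouping works; one plausible reading, sketched here purely for illustration (`step_grouped` and its behaviour are assumptions, not CASL's actual mechanism), is that agents sharing the same discretised state are updated once per group rather than once per agent:

```python
from collections import Counter

def step_grouped(states, update):
    """Advance every agent one tick, but call `update` only once per distinct
    state and reuse the result for all agents that share it. (`step_grouped`
    is a hypothetical helper, sketched here to illustrate the idea.)"""
    groups = Counter(states)                 # state -> number of agents in it
    new_counts = Counter()
    for state, count in groups.items():
        new_counts[update(state)] += count   # one update shared by `count` agents
    return list(new_counts.elements())
```

With 100,000 agents occupying only a handful of distinct states, a tick then costs a handful of `update` calls rather than 100,000, which is one way a large-scale simulation could fit on modest hardware.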

Modeling neuromorphic devices: structure and dynamics of atomic switch networks

Date: 27 Oct 2016
Time: 2pm
Venue: Room 4020, Sydney Nanoscience Hub (SNH). Research Wing Level 4

Presenter: Ido Marcus, presenting preliminary results from a pilot project on neuromorphic devices.

Abstract: Self-assembled networks of silver nanowires are next-generation neuromorphic devices. Empirical results show that their collective dynamics share features with the activity of brain tissue. These features include criticality, plasticity and hysteretic behavior akin to memory.
In order to understand the behavior of these neuromorphic chips and the extent of their similarity to biological networks, we have developed a model of their structure and dynamics. Nonlinearity is introduced in the form of voltage-dependent switching of the junctions (i.e., atomic switches) formed at the intersections of nanowires.
Preliminary results show that our model of atomic switch networks (i) reproduces the collective dynamics measured in experiments; (ii) makes it possible to study the microscopic origin of these collective dynamics; and (iii) has dynamics consistent with criticality. Lastly, our results suggest that these systems are interesting in their own right and could be used to model the dynamics of other complex systems.
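
A toy version of such a model (an illustrative sketch, not the authors' implementation) places threshold-switching junctions on a random graph and solves Kirchhoff's laws for the node voltages at each step:

```python
import numpy as np

def simulate_asn(n=30, p=0.2, v_bias=2.0, v_th=0.5,
                 g_off=0.01, g_on=1.0, steps=50, seed=0):
    """Toy atomic switch network: junctions on a random graph switch
    one-way from a low (OFF) to a high (ON) conductance once the local
    voltage drop exceeds a threshold; node voltages follow Kirchhoff's laws."""
    rng = np.random.default_rng(seed)
    upper = np.triu(rng.random((n, n)) < p, 1)
    edges = np.argwhere(upper)              # each row (i, j) is one junction
    g = np.full(len(edges), g_off)          # all junctions start OFF
    src, drn = 0, n - 1                     # bias applied across these nodes
    interior = [k for k in range(n) if k not in (src, drn)]
    n_on = []
    for _ in range(steps):
        # weighted graph Laplacian of the current conductances
        L = np.zeros((n, n))
        for (i, j), gij in zip(edges, g):
            L[i, i] += gij; L[j, j] += gij
            L[i, j] -= gij; L[j, i] -= gij
        L[np.diag_indices(n)] += 1e-9       # regularise any isolated nodes
        # fix the boundary voltages and solve for the interior ones
        v = np.zeros(n)
        v[src] = v_bias
        b = -L[np.ix_(interior, [src, drn])] @ v[[src, drn]]
        v[interior] = np.linalg.solve(L[np.ix_(interior, interior)], b)
        drops = np.abs(v[edges[:, 0]] - v[edges[:, 1]])
        g = np.where(drops > v_th, g_on, g) # threshold switching (one-way)
        n_on.append(int((g == g_on).sum()))
    return n_on
```

Each switching event redistributes the voltage across the remaining OFF junctions, so switching can propagate through the network, a minimal mechanism for the collective dynamics described above.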

The brain as its own decoder: predicting behavior from the structure of brain representations

Date: Sept 2016
Time: 2pm
Venue: Lecture Theatre 2, Physics Building (A28)

Presenter: Associate Professor Thomas A Carlson (ARC Future Fellow, School of Psychology, University of Sydney).

Summary: Multivariate pattern analysis has become an important tool for measuring “information” in the brain. An important question for the future is whether the information measured by neuroscientists is actually utilized by the brain for behaviour; the retina, for example, contains a complete record of the visual world, but the brain does not directly access its content. One potentially fruitful approach to this question is to construct models of how information is “read out” from brain representations, and then to test whether the model’s output can predict behaviour. In this talk, I will discuss a simple “read out” model our lab has been developing. We have shown that this model can predict reaction time (RT) behaviour for object categorisation from recordings of neural activity in human inferior temporal cortex measured using fMRI. Using the same model with MEG data, we found evidence that the brain “reads out” object category information from the optimal brain state. Our most recent work has used single unit recordings from rhesus macaque ITC to test the limits of the model. Our analysis of these recordings has identified several limitations of the model, which represent general challenges for future models of “read out”.
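
The kind of “read out” model described here resembles a distance-to-bound scheme: the further a trial's activity pattern lies from a decision boundary, the faster the predicted categorisation. A minimal sketch (illustrative only; `distance_to_bound` and the difference-of-means discriminant are stand-ins, not the lab's actual classifier):

```python
import numpy as np

def distance_to_bound(patterns, labels):
    """Signed distance of each activity pattern (rows of `patterns`) to a
    linear decision boundary built from the difference of the two class
    means. This simple discriminant is a stand-in for whatever classifier
    an actual 'read out' model would use."""
    mu0 = patterns[labels == 0].mean(axis=0)
    mu1 = patterns[labels == 1].mean(axis=0)
    w = mu1 - mu0
    w = w / np.linalg.norm(w)                # unit normal of the boundary
    midpoint = (mu0 + mu1) / 2.0
    return (patterns - midpoint) @ w         # >0 predicts class 1, <0 class 0
```

Under the distance-to-bound hypothesis, `np.abs(distance_to_bound(X, y))` should correlate negatively with reaction times: trials whose patterns sit far from the boundary are categorised fastest.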

Chaos and Synchronisation in Large Systems of Interacting Particles with Random Connections

Date: Monday 5th September
Time: 2pm
Venue: Carslaw 829

Presenter: James MacLaurin (School of Physics, University of Sydney).

Abstract: I study systems of interacting particles indexed by a lattice. These have many applications, but in this talk I focus on applications in neuroscience. The particles are subject to white noise (Brownian motion), with random connections sampled from a probability distribution that is invariant under translations of the lattice. I study the limiting behaviour of system-wide averages (the 'empirical measure') as the size of the network tends to infinity. I find a variety of behaviours under different scalings of the connection strength. When the connection strength is scaled by N^{-1/2} (N being the network size), the system becomes non-Markovian (meaning that the dynamics is influenced by the entire past). When the connection strength is unscaled but decays spatially, one obtains spatial correlations in the infinite-size limit. I also find conditions under which particles with connections from an Erdos-Renyi random graph synchronize their oscillations in the large-time limit.
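
Synchronization of oscillators coupled through an Erdos-Renyi graph can be illustrated with the Kuramoto model (a standard toy model, used here as an illustration rather than the exact system analysed in the talk); the order parameter r(t) approaches 1 as the phases lock:

```python
import numpy as np

def kuramoto_er(n=60, p=0.3, coupling=4.0, dt=0.01, steps=3000, seed=0):
    """Kuramoto oscillators coupled through an Erdos-Renyi graph.
    Returns the order parameter r(t) in [0, 1]; r -> 1 means the
    phases have locked (synchronisation)."""
    rng = np.random.default_rng(seed)
    upper = np.triu(rng.random((n, n)) < p, 1)
    adj = (upper | upper.T).astype(float)       # symmetric ER adjacency matrix
    omega = rng.normal(0.0, 0.5, n)             # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, n)      # random initial phases
    deg = adj.sum(axis=1).clip(min=1.0)
    order = []
    for _ in range(steps):
        # Euler step of dtheta_i/dt = omega_i + (K/deg_i) sum_j A_ij sin(theta_j - theta_i)
        diff = np.sin(theta[None, :] - theta[:, None])
        theta = theta + dt * (omega + coupling * (adj * diff).sum(axis=1) / deg)
        order.append(float(np.abs(np.exp(1j * theta).mean())))
    return order
```

With coupling well above the critical value, r(t) climbs from roughly 1/sqrt(n) (incoherent phases) towards 1.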

Top Down Causation as the basis of the emergence of complexity

Date: Monday, August 22
Time: 11am
Venue: School of IT Lecture Theatre 123

Presenter: Prof. George Ellis FRS, Emeritus Professor of Applied Mathematics and Senior Research Scholar in the Mathematics Department, University of Cape Town (South Africa).

Abstract: The expanding universe context makes clear that genuine emergence must occur. Because this occurs through adaptive processes, it is only possible through a combination of bottom-up and top-down (contextual) processes. A key feature in making this possible is the multiple realisability of higher-level structures and processes at lower levels. Arguments against top-down causation, such as claims of causal closure at lower levels and the concept of supervenience, will be answered. Five essentially different types of top-down effects can be identified; in the case of both digital computers and the mind, they enable the causal effectiveness of non-physical entities such as computer programs, algorithms, theories, plans, and social agreements.

Information decompositions

Date: Monday 11th April
Time: 3pm
Venue: Peter Nicol Russell (PNR) building Lecture Theatre 1 (Farrell lecture theatre).

Presenter: Prof. Jürgen Jost, Director, Max Planck Institute for Mathematics in the Sciences, Leipzig; External Professor, Santa Fe Institute.

Abstract: When several inputs contribute to an output, it is natural to ask about their respective contributions: what information each of the inputs contributes uniquely to the output, what they possess in common, and what is complementary. Recently, several proposals have been put forward for how to quantify these contributions, and I will present the version developed in our group.

Video: watch recording (requires internal USyd login).

Information theoretic analyses of neural goal functions

Date: April 1st
Time: 11am
Venue: School of IT Lecture Theatre (Level 1)

Presenter: Prof. Michael Wibral, head of the Magnetoencephalography (MEG) unit at the Brain Imaging Center (BIC).

Summary: In many neural systems anatomical motifs are present repeatedly, but despite their structural similarity they can serve very different tasks. A prime example of such a motif is the canonical microcircuit of six-layered neocortex, which is repeated across cortical areas and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a ‘goal function’, of information processing implemented in this structure. By definition such a goal function, if universal, cannot be cast in processing-domain-specific language (e.g. ‘edge filtering’, ‘working memory’). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon’s mutual information), we argue that neural information processing crucially depends on the combination of multiple inputs to create the output of a processor. To account for this, we use a very recent extension of Shannon information theory, called partial information decomposition (PID). PID allows one to quantify the information that several inputs provide individually (unique information), redundantly (shared information) or only jointly (synergistic information) about the output. In my presentation I demonstrate how to leverage information theory in general, and PID in particular, for a better understanding of the principles of neural information processing.
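
The limit of pairwise mutual information, and the need for a synergy term, is easiest to see for Y = XOR(X1, X2) with independent fair inputs: each input alone carries zero information about the output, yet together they determine it completely. A short sketch computing the relevant Shannon quantities (the full PID requires more machinery than shown here):

```python
import itertools, math

def mi(pairs):
    """Mutual information (in bits) from ((x, y), p) joint-probability entries."""
    joint, px, py = {}, {}, {}
    for (x, y), p in pairs:
        joint[(x, y)] = joint.get((x, y), 0.0) + p
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Y = XOR(X1, X2) with independent fair binary inputs
dist = [((x1, x2, x1 ^ x2), 0.25)
        for x1, x2 in itertools.product((0, 1), repeat=2)]

i_x1 = mi([((x1, y), p) for (x1, x2, y), p in dist])          # 0.0 bits
i_x2 = mi([((x2, y), p) for (x1, x2, y), p in dist])          # 0.0 bits
i_both = mi([(((x1, x2), y), p) for (x1, x2, y), p in dist])  # 1.0 bit
```

Any reasonable PID assigns the whole bit in `i_both` to the synergistic term, since neither unique nor shared information can be positive when both single-input informations vanish.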

Video: watch recording (requires internal USyd login).