
Workshops and training

Expand your data skills
We run a wide range of free introductory to advanced training courses spanning data science, statistics, bioinformatics, research computing, and research data management.

Many of our services are available free of charge to University researchers, research students, and affiliates. Workshops and training may also be offered to customers external to the University, in which case a fee-for-service arrangement may apply. Please contact us (sih.info@sydney.edu.au) for more information.

SIH Masterclasses

Introducing our new SIH Masterclasses for 2023! Whether you're a seasoned coder or just starting out, our 60- or 90-minute training sessions are designed to introduce you to essential tools and computing skills that will enhance your research. With hands-on learning and practical application, these Masterclasses are the perfect opportunity to expand your knowledge and develop new skills that will help you take your research to the next level. The SIH Masterclasses run on the fourth Thursday of each month.

Topics


Statistics

In this workshop we provide a systematic workflow to apply to any research data analysis to make your quantitative work comprehensive, efficient and more suitable for top-tier journals.

We introduce you to the resources available from both the Sydney Informatics Hub and across the University that will support you in proceeding from hypothesis generation all the way through to publication. Our research workflow consists of a series of defined steps that will assist you in thinking about your data and preparing it for statistical analysis. Data analysis concepts will be covered in detail, including how experimental design fits into hypothesis generation and your final publication; how to manage your analysis data; and Exploratory Data Analysis (EDA), an essential and often-overlooked stage of data analysis for determining the appropriate statistical methods to apply in your research. We will also show you some of the more advanced statistical analysis methods to give you an idea of what is possible.

Note that this workshop does not require knowledge of, or the use of, specific statistical software. The analysis methods may be performed using a range of University-supported software options.

Open to University of Sydney staff, students and research affiliates
Prerequisites No previous knowledge of statistical methods is required.
Resources Workshop notes
Duration 90 minutes

In this workshop we focus on the key aspects of experimental design that researchers and students may need to apply in their research. Higher degree research students and researchers engaging in new research are especially invited to attend. During the workshop there will be the opportunity to discuss your own research question and associated experimental design.

The workshop will include the following topics:

• your research question
• experimental validity
• randomisation and bias
• blinding and bias
• blocking and confounding
• fixed and random effects
• replication, experimental units

Open to University of Sydney staff, students and research affiliates
Prerequisites No previous knowledge is assumed.
Resources Workshop notes
Duration 90 minutes

In this workshop we will show you how power and sample size calculations help you determine the number of subjects needed for your study, meet ethics and grant requirements, and ensure that you have thoroughly thought through your study design. This workshop covers the theory and concepts of power analysis and includes worked examples using the G*Power software. You will follow the examples on your own laptop (PC or Mac). It is essential that you have G*Power installed on your machine prior to the workshop.

Download the software from the G*Power website (it's free, and available for both Windows and macOS).
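
For context, the same kind of calculation can also be scripted. Below is a minimal, illustrative sketch in base R (the workshop's worked examples use G*Power, not R; the effect size and settings here are hypothetical):

# Illustrative only: sample size for a two-sample t-test, assuming a medium
# effect size (Cohen's d = 0.5), 5% significance level and 80% power.
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.80,
             type = "two.sample", alternative = "two.sided")
# Returns n per group (about 64 here); round up to the next whole subject.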

Open to University of Sydney staff, students and research affiliates.
Prerequisites Knowledge of basic statistics is recommended.
Resources

Workshop notes

Bring your own laptop with G*Power software installed.

Duration 90 minutes

In this workshop we focus on practical data analysis by presenting statistical workflows, applicable in any software, for four of the most common univariate analyses: linear regression, ANOVA, ANCOVA, and repeated measures (a simple mixed model), all assuming a normal (Gaussian) residual. These workflows can be easily extended to more complex models. The R code used to create the output is also included.
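
To give a flavour of the workflows covered, here is a minimal sketch in base R (the variable names are hypothetical and not from the workshop dataset):

# Hypothetical example of an ANCOVA-style linear model assuming Gaussian residuals.
fit <- lm(yield ~ fertiliser + soil_type, data = mydata)
plot(fit)        # residual diagnostics for checking model assumptions
summary(fit)     # coefficients, standard errors and R-squared
anova(fit)       # ANOVA table for the fitted terms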

This is one of our three workshops for researchers interested in statistical methods such as linear regression, ANOVA, ANCOVA, mixed models, logistic/binary and count (Poisson) regression. Each one builds on the preceding workshop, and together they show how all of these analyses can be performed using the same easy-to-understand Generalised Linear Mixed Model (GLMM) framework and workflow. They also show how these methods can be used to analyse experimental designs such as Control vs Treatment, Randomised Control Trials (RCTs), Before-After-Control-Impact (BACI) analysis, repeated measures, and many more. There is also a fourth complementary workshop, Statistical Model Building, which we recommend for those experienced with linear models or who have completed at least the first two of our Linear Models workshops.

The material is organised around Statistical Workflows, applicable in any software, giving practical step-by-step instructions on how to do the analysis, including assumption testing, model interpretation, and presentation of results.

Open to University of Sydney staff, students and research affiliates.
Prerequisites Knowledge of basic statistics is recommended.
Resources Workshop notes
Duration 90 minutes

In this workshop we focus on practical data analysis, applicable in any software, for two of the more common GLMMs: logistic regression for binary data (using a Binomial distribution) and Poisson regression for count data (using a Poisson distribution). The GLM framework is also described in detail. The R code used to create the output is included.
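
As a small, hedged illustration of the two models named above, in base R (variable names are hypothetical, not the workshop data):

# Logistic regression for a binary outcome (Binomial distribution, logit link).
fit_bin <- glm(recovered ~ treatment + age, family = binomial, data = mydata)
# Poisson regression for a count outcome (log link).
fit_cnt <- glm(n_visits ~ treatment + age, family = poisson, data = mydata)
summary(fit_bin)
exp(coef(fit_bin))   # odds ratios; the same call on fit_cnt gives rate ratios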

This is one of our three workshops for researchers interested in statistical methods such as linear regression, ANOVA, ANCOVA, mixed models, logistic/binary and count (Poisson) regression. Each one builds on the preceding workshop, and together they show how all of these analyses can be performed using the same easy-to-understand Generalised Linear Mixed Model (GLMM) framework and workflow. They also show how these methods can be used to analyse experimental designs such as Control vs Treatment, Randomised Control Trials (RCTs), Before-After-Control-Impact (BACI) analysis, repeated measures, and many more. There is also a fourth complementary workshop, Statistical Model Building, which we recommend for those experienced with linear models or who have completed at least the first two of our Linear Models workshops.

The material is organised around Statistical Workflows, applicable in any software, giving practical step-by-step instructions on how to do the analysis, including assumption testing, model interpretation, and presentation of results.

Open to University of Sydney staff, students and research affiliates.
Prerequisites Knowledge of basic statistics is recommended.
Resources Workshop notes
Duration 90 minutes

Statistical analysis is more than just building the best predictive model; it should also enable researchers to make the discoveries required to build new knowledge. Constructing engaging narratives about their research is also invaluable as researchers look to connect with their field, the community and funding bodies. To do this researchers need to test hypotheses, uncover unknown patterns, and present results in insightful, intuitive and memorable ways. In short, researchers need to build interpretable models.

In this workshop we explore tips and tricks to make your statistical analyses do just that. Topics covered will be:

  • Reporting tricks that aid interpretation - estimated marginal means, confidence vs prediction intervals, applying and correcting for multiple comparisons, reporting variable ‘importance’, plus other reporting and interpretation tricks
  • Model Parameterisation using the Design Matrix - interpreting categorical predictor parameters, dummy and effects coding.
  • More on Mixed Models - introducing the random slope.

This is one of our three workshops for researchers interested in statistical methods such as linear regression, ANOVA, ANCOVA, mixed models, logistic/binary and count (Poisson) regression. Each one builds on the preceding workshop, and together they show how all of these analyses can be performed using the same easy-to-understand Generalised Linear Mixed Model (GLMM) framework and workflow. They also show how these methods can be used to analyse experimental designs such as Control vs Treatment, Randomised Control Trials (RCTs), Before-After-Control-Impact (BACI) analysis, repeated measures, and many more. There is also a fourth complementary workshop, Statistical Model Building, which we recommend for those experienced with linear models or who have completed at least the first two of our Linear Models workshops.

The material is organised around Statistical Workflows, applicable in any software, giving practical step-by-step instructions on how to do the analysis, including assumption testing, model interpretation, and presentation of results.
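
As a brief taste of two of the topics listed above, here is a hedged sketch in R (the emmeans package is one common choice; variable names are hypothetical):

# Hypothetical example: inspect how a categorical predictor enters the design
# matrix, then obtain estimated marginal means and corrected comparisons.
library(emmeans)
fit <- lm(response ~ group + covariate, data = mydata)
model.matrix(fit)[1:5, ]       # dummy (treatment) coding of 'group'
emm <- emmeans(fit, ~ group)   # estimated marginal means per group
pairs(emm, adjust = "tukey")   # multiple-comparison correction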

Open to University of Sydney staff, students and research affiliates.
Prerequisites It is recommended that attendees are familiar with concepts of Linear Models explained in Linear Models 1 and 2 workshops.
Resources Workshop notes
Duration 90 minutes

In this workshop we will introduce you to the key aspects and strategies of statistical model building to help you answer your research question, and avoid common pitfalls, erroneous models and incorrect conclusions. Appropriate statistical model building will help you to gain knowledge, as opposed to simply getting the best prediction (although that can be a goal as well).

We will focus on concepts such as variable selection, multi-collinearity, interactions, selecting a model building strategy, comparing models and evaluating models. In general, these concepts are useful for any statistical model building. This workshop will provide generalised linear regression model examples. The focus will be on practical application of concepts, so mathematical descriptions will be kept to a minimum.
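
As a hedged illustration of comparing candidate models (variable names are hypothetical, and the workshop itself is software agnostic):

fit1 <- lm(outcome ~ age + bmi, data = mydata)
fit2 <- lm(outcome ~ age + bmi + age:bmi, data = mydata)   # add an interaction
anova(fit1, fit2)    # nested model comparison (F test)
AIC(fit1, fit2)      # information criteria, also usable for non-nested models
cor(mydata[, c("age", "bmi")])   # quick check for multi-collinearity between predictors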

Open to University of Sydney staff, students and research affiliates
Prerequisites Prior experience with statistical modelling is assumed, as the basics of regression modelling will not be covered. Please consider attending Linear Models 1 and/or Linear Models 2 workshops to come up to speed beforehand. Note that this workshop does not require knowledge of or use of specific statistics software.  The analysis methods may be performed using a wide range of commonly available software.
Resources Workshop notes

Bring your own laptop
Duration 90 minutes

In this workshop we provide a theoretical and practical introduction to meta-analysis as part of a systematic review. We examine the process of performing a meta-analysis, in particular focusing on key statistical concepts such as heterogeneity and Fixed and Random effects modelling.

We will discuss the available choices of statistical software and show you worked examples using the metafor package in R. A basic knowledge of R software is desirable, but not necessary, since you are not expected to produce and run your own code during the workshop.
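
For orientation, a minimal sketch of a random-effects meta-analysis with metafor is shown below (column names are hypothetical; the workshop provides its own worked example):

library(metafor)
# Compute standardised mean differences and their sampling variances per study.
dat <- escalc(measure = "SMD", m1i = m1, sd1i = sd1, n1i = n1,
              m2i = m2, sd2i = sd2, n2i = n2, data = studies)
res <- rma(yi, vi, data = dat, method = "REML")   # random-effects model
summary(res)   # pooled effect plus tau^2 and I^2 heterogeneity statistics
forest(res)    # forest plot of study-level and pooled estimates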

Open to University of Sydney staff, students and research affiliates.
Prerequisites Knowledge of basic statistics is recommended. Basic knowledge of R (programming language) is desirable but not required.
Resources

Workshop Notes

Bring your own laptop. If you want to practice the example during the workshop you will need to have R and RStudio installed.

Duration 90 minutes

Survival analysis is used when you want to measure the time elapsed up to when a specified event occurs. It is commonly used in studies where subjects are followed until death occurs, hence the name.

In this workshop we will introduce some key concepts pertaining to survival analysis, including censoring of cases, the survival function, and the hazard ratio estimator. The Kaplan Meier survival curve will be explained through a worked example and the technique of Cox proportional hazards regression will be introduced using the same example dataset.

You will be provided with software code in SPSS and R to reproduce the analysis presented in the workshop.
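
For orientation, a minimal sketch of the R side using the survival package is shown below (variable names are hypothetical; the workshop supplies its own dataset and the matching SPSS syntax):

library(survival)
km <- survfit(Surv(time, status) ~ group, data = mydata)
plot(km)                                      # Kaplan-Meier curves by group
cox <- coxph(Surv(time, status) ~ group + age, data = mydata)
summary(cox)                                  # hazard ratios with confidence intervals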

Open to: University of Sydney staff, students and research affiliates.
Pre-requisites: Knowledge of basic statistics is recommended.
Resources:

Workshop notes

Bring your own laptop. If you want to practice the example during the workshop you will need to be able to run SPSS syntax or R code.  This is optional.

Duration: 90 minutes

In this workshop we present a range of practical tips and guidelines on how to design, field, and analyse the more commonly used surveys. The initial focus is on how to set up and field a study. A variety of different questions and scales, including some unorthodox and novel ones, will be presented to give an appreciation of what is possible. Some of the topics covered will be line vs discrete scales, the effect of colour, and optimal discrete/Likert scales, among others.

We will then present the basic analysis and reporting of common question types, and discuss the pros and cons of common analyses (e.g. linear vs ordinal regression). The material is software agnostic and can be applied in any software.
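
As a hedged illustration of the linear vs ordinal choice for, say, a 5-point Likert item (MASS::polr is one common option; variable names are hypothetical):

library(MASS)
# Treat the scale as continuous (simple, but assumes equal spacing of categories).
fit_lin <- lm(as.numeric(satisfaction) ~ age + gender, data = survey_data)
# Respect the ordering of categories with an ordinal (proportional odds) model.
fit_ord <- polr(factor(satisfaction, ordered = TRUE) ~ age + gender,
                data = survey_data, Hess = TRUE)
summary(fit_ord)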

Open to University of Sydney staff, students and research affiliates
Prerequisites No previous knowledge of statistical methods is required.
Resources Workshop notes
Duration 90 minutes

In this workshop we build on the information from Surveys 1. We explore topics including questionnaire validation and index creation using methods such as Exploratory Factor Analysis (EFA), Confirmatory Factor Analysis (CFA) using Structural Equation Modelling (SEM), and Conjoint models such as Choice modelling.

The material is software agnostic and can be applied in any software.
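
As a small, hedged illustration of the CFA step (the lavaan package is one common choice; item and factor names are hypothetical):

library(lavaan)
# One latent factor measured by four questionnaire items.
model <- 'wellbeing =~ item1 + item2 + item3 + item4'
fit <- cfa(model, data = survey_data)
summary(fit, fit.measures = TRUE, standardized = TRUE)   # loadings, CFI, RMSEA, etc.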

Open to University of Sydney staff, students and research affiliates
Prerequisites No previous knowledge of statistical methods is required.
Resources Workshop notes
Duration 90 minutes

In multivariate statistics we simultaneously model and estimate variability in more than one variable, often in order to examine the relationships between variables. In this workshop we examine the key aspects of moving from univariate to multivariate analysis, and the situations and scenarios where multivariate analysis is typically applied. We will focus on practical application of concepts through examples.
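
As a hedged example of one common multivariate step, principal component analysis in base R (the column selection is hypothetical):

pca <- prcomp(mydata[, c("x1", "x2", "x3", "x4")], center = TRUE, scale. = TRUE)
summary(pca)    # proportion of variance explained by each component
biplot(pca)     # joint view of observations and variable loadings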

Open to University of Sydney staff, students and research affiliates
Prerequisites Prior experience with statistical modelling is assumed, as the basics of regression modelling will not be covered. Please consider attending Linear Models 1 and/or Linear Models 2 workshops to come up to speed beforehand. Note that this workshop does not require knowledge of or use of specific statistics software.  The analysis methods may be performed using a wide range of commonly available software.
Resources Workshop Notes
Duration 90 minutes

Programming

Research computing

Learn about the University’s High Performance Computer (HPC) ‘Artemis’, including directory structure, software, and how to submit and monitor compute jobs using the PBS Pro scheduling software. Artemis is available at no cost to University of Sydney staff and students.

Open to Staff, research students, and affiliates with a valid University of Sydney UniKey
Prerequisites Competency on the Unix/Linux command line. If you are interested in learning HPC but have no Unix/Linux command-line skills, you must first take an ‘Introduction to Unix/Linux’ course. 
Resources

You must bring your own laptop.

Related courses This course is designed as a 2-part session, with Introduction to the Artemis HPC in the morning and Introduction to data transfer and the research data store in the afternoon. We recommend you register for both; however, you may take these courses on separate days. We also recommend following up with ‘Intermediate HPC (Automation)’.

Learn how to transfer data between your local computer, an external source, the Research Data Store (RDS) and the Artemis HPC. Learn how to back up Artemis HPC output onto the RDS.

Open to Staff, research students, and affiliates with a valid University of Sydney UniKey
Prerequisites Competency on the Unix/Linux command line. If you are interested in learning HPC but have no Unix/Linux command-line skills, you must first take an ‘Introduction to Unix/Linux’ course.
Resources

You must bring your own laptop.

Related courses This course is designed as a 2-part session, with Introduction to the Artemis HPC in the morning and this course in the afternoon. We recommend you register for both; however, you may take these courses on separate days. We also recommend following up with ‘Intermediate HPC (Automation)’.

This course introduces GPU computing, and running GPU jobs on Artemis and other HPC systems. The University of Sydney’s Artemis HPC hosts several NVIDIA V100 GPUs. This course will help you to understand basic concepts of GPU programming. 

  • Learn fundamentals of basic CUDA code, and write and run examples using C/CUDA, Matlab, and Python.
  • Undertake practical applications in Deep Learning using Python, Tensorflow, and Keras.
  • Learn how to set up suitable environments on Artemis for GPU-enabled applications to run, and how to run and submit jobs on the Artemis HPC GPU queue.
Open to Staff, research students, and affiliates with a valid University of Sydney UniKey
Prerequisites Competency on the Unix/Linux command line. If you are interested in learning HPC but have no Unix/Linux command-line skills, you must first take an ‘Introduction to Unix/Linux’ course.
Resources

You must bring your own laptop.

Related courses This course is designed as a 2-part session, with Introduction to the Artemis HPC in the morning and this course in the afternoon. We recommend you register for both; however, you may take these courses on separate days. We also recommend following up with ‘Intermediate HPC (Automation)’.

Learn how to submit jobs to Artemis from Matlab running on your own computer.

Open to Staff, research students, and affiliates with a valid University of Sydney UniKey
Prerequisites A basic understanding of Matlab is assumed. We also highly recommend that you have taken the ‘Introduction to the Artemis HPC’ course, unless you already have experience using Artemis. These are scheduled regularly on campus.
Resources

You must have MATLAB R2017a installed if you wish to complete the training examples. No other version of MATLAB will work with the version of the MDCS currently installed on Artemis.

This course is designed to transition researchers from local Python development and execution to tailoring code for High Performance Computing (traditional and cloud) using specific libraries, functions and common implementations.

  • gain experience with best practices for structuring code and testing modular structures and workflows
  • learn about the libraries, data structures, and functions used for Python multiprocessing
  • explore commonly used code for common problems such as deep learning, parallel computing and multi-threaded applications
  • utilise advanced libraries that outperform standard approaches in speed, ability to handle large data, and design
Open to Staff, research students, and affiliates with a valid University of Sydney Unikey.
Prerequisites

Competency with high performance computing environments, including submitting and running jobs and moving data between local and remote machines. Fundamental Python experience, with a basic grasp of functions, variables and syntax.

These prerequisites can be satisfied by attending courses regularly run on campus.

Resources

You must bring your own laptop. Contact us if you need to borrow one for the course. You must have a Python environment installed with the required modules. Please refer to the course notes for installation and versioning instructions.

This course is designed to transition researchers from local R development and execution to tailoring code for High Performance Computing (traditional and cloud) using specific libraries, functions and common implementations.

  • gain experience with best practices for structuring code and testing modular structures and workflows
  • learn about the libraries, data structures, and functions used for R multiprocessing (see the sketch after this list)
  • explore commonly used code for common problems such as deep learning, parallel computing and multi-threaded applications
  • utilise advanced libraries that outperform standard approaches in speed, ability to handle large data, and design.
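
As a minimal sketch of what R multiprocessing can look like (using the base 'parallel' package; the task function here is a hypothetical stand-in for real work):

library(parallel)
n_cores <- max(1, detectCores() - 1)
slow_task <- function(i) { Sys.sleep(0.1); i^2 }        # stand-in for a real computation
res <- mclapply(1:100, slow_task, mc.cores = n_cores)   # forked workers (Linux/macOS)
# On Windows, or across HPC nodes, a PSOCK cluster is the usual alternative:
# cl <- makeCluster(n_cores); res <- parLapply(cl, 1:100, slow_task); stopCluster(cl)
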
Open to Staff, research students, and affiliates with a valid University of Sydney Unikey.
Prerequisites

Competency with high performance computing environments, including submitting and running jobs and moving data between local and remote machines. Fundamental R experience, with a basic grasp of functions, variables and syntax.

These prerequisites can be satisfied by attending courses regularly run on campus.

Resources

You must bring your own laptop. Contact us if you need to borrow one for the course. You must have an R environment installed with the required packages. Please refer to the course notes for installation and versioning instructions.

OpenFOAM is an open-source computational fluid dynamics (CFD) package used across engineering and science (including chemistry), for example for heat transfer, turbulence and solid mechanics problems and their impact on design in manufacturing processes. The recent popularity of OpenFOAM can be attributed to it being free, having a versatile and easily implemented syntax, and being adopted in both academic and commercial applications. ANSYS Fluent is a commercial package with similar capabilities.

In this course we will cover the basics of each package, running through setting up code and working through demos on your local machine before executing jobs in a high performance computing environment. You will also learn about visualising results with ParaView.

Open to Staff, research students, and affiliates with a valid University of Sydney Unikey
Prerequisites Competency with high performance computing environments (submitting and running jobs, moving data between local and remote machines) will be beneficial but is not mandatory, as we will cover these fundamentals in the workshop. If you have no prior experience with these kinds of environments, we recommend completing the ‘Introduction to HPC’ course first.
Resources

You must bring your own laptop. Contact us if you need to borrow one for the course.

OpenFOAM and ANSYS Fluent must be installed. Please refer to the course notes for installation and versioning instructions.

You have written, compiled and run functioning programs in C and/or Fortran. You know how HPC works and you've submitted batch jobs.

Now you want to move from writing single-threaded programs into the parallel programming paradigm, so you can truly harness the full power of High Performance Computing.

This course is facilitated by Intersect Australia. As a member institution, University of Sydney staff and students can also attend any training hosted by Intersect, at any location, free of charge.


Pre-requisites: Assumed knowledge is basic Unix/Linux and Artemis HPC or other HPC running PBS.

This course is for new users of the National Computational Infrastructure’s (NCI’s) high performance computer, Gadi. This will be a webinar/live-demo style course, and users with access to Gadi are welcome to follow along. Learn about:

  • How to get access and where to get help; 
  • Gadi’s hardware, queues and the filesystem; 
  • Running compute jobs; 
  • How to account for compute resource usage;
  • How to install software; 
  • Tips for optimising code. 

Researchers who have recently been awarded, or are interested in applying for, time on Gadi through allocation schemes (e.g. the National Computational Merit Allocation Scheme (NCMAS) or the SIH HPC Allocation Schemes) are encouraged to attend.

Data science

A series of short 1-1.5 hour online training sessions that showcase the tools and techniques we use internally at SIH:

  • Publication-ready tables in R
  • How fast is your R code: an introduction to code profiling and benchmarking
  • Writing better, tidier R: keep calm and code functionally with purrr
  • More in development

This is a two-day workshop series designed to provide an introduction to practical machine learning with R.

Day 1: Regression

Day 1 focuses on regression. We will provide an introduction to some basic principles of machine learning experimentation, describing how one selects a model to use and the concept of cross-validation. We will demonstrate how these apply to several classical machine learning approaches in R, including supervised (classification and regression, such as K-nearest neighbour and linear regression) and unsupervised (clustering, such as hierarchical and k-means clustering, and dimensionality reduction, such as principal component analysis) methods. We recommend attending both the regression and classification workshops.

Day 2: Classification and unsupervised learning

Day 2 focuses on classification and unsupervised learning approaches. We will build on the first day’s activities to discuss how cross-validation applies in the context of classification problems. We will then demonstrate how these apply to several classical machine learning approaches in R, including supervised (classification and regression, such as K-nearest neighbour and linear regression) and unsupervised (clustering, such as hierarchical and k-means clustering, and dimensionality reduction, such as principal component analysis) methods. We recommend attending both the regression and classification workshops.
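
To give a sense of the style of examples used, here is a minimal base-R sketch (using built-in datasets rather than the workshop materials): a simple train/test split for regression, and k-means clustering.

set.seed(1)
idx   <- sample(nrow(mtcars), floor(0.7 * nrow(mtcars)))   # 70/30 train/test split
fit   <- lm(mpg ~ wt + hp, data = mtcars[idx, ])           # supervised: regression
preds <- predict(fit, newdata = mtcars[-idx, ])
sqrt(mean((preds - mtcars[-idx, "mpg"])^2))                # test-set RMSE
km <- kmeans(scale(iris[, 1:4]), centers = 3)              # unsupervised: clustering
table(km$cluster, iris$Species)                            # compare clusters to species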

Open to Staff, research students, and affiliates with a valid University of Sydney UniKey 
Prerequisites Attendees are expected to have some R background (at least at the level of the “Introductory R” Intersect courses, including the tidyverse suite of packages and the use of R as a data processing tool). It is assumed that attendees have not had previous training in ML, for example as part of an undergraduate semester-long course.
Resources R, RStudio and installation of several key packages will be required.

This is a two-day workshop series designed to provide an introduction to practical machine learning with Python.

Day 1: Regression

Day 1 focuses on regression. We will provide an introduction to some basic principles of machine learning experimentation, describing how one selects a model to use and the concept of cross-validation. We will demonstrate how these apply to several classical machine learning approaches in Python, including supervised (classification and regression, such as K-nearest neighbour and linear regression) and unsupervised (clustering, such as hierarchical and k-means clustering, and dimensionality reduction, such as principal component analysis) methods. We recommend attending both the regression and classification workshops, as the latter builds on the former.

Day 2: Classification and Unsupervised Learning

Day 2 focuses on classification and unsupervised learning approaches. We will build on the first day’s activities to discuss how cross-validation applies in the context of classification problems. We will then demonstrate how these apply to several classical machine learning approaches in Python, including supervised (classification and regression, such as K-nearest neighbour and linear regression) and unsupervised (clustering, such as hierarchical and k-means clustering, and dimensionality reduction, such as principal component analysis) methods. We recommend attending both the regression and classification workshops, as the latter builds on the former.

Open to Staff, research students, and affiliates with a valid University of Sydney UniKey 
Prerequisites Attendees are expected to have some Python background (at least at the level of the “Introductory Python” Intersect courses). It is assumed that attendees have not had previous training in ML, for example as part of an undergraduate semester-long course.
Resources Anaconda Python, Jupyter notebooks and the scikit-learn library will be used in this course.

This two-day workshop follows the Data Carpentry R Geospatial curriculum, with additional details relating to working with geospatial data in Australia. It is designed to introduce learners who are comfortable with R to working with geospatial data, including raster and vector files. By the end of the workshop, learners will be able to load, manipulate and visualise these file types to make maps, and perform basic spatial calculations.
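
As a hedged illustration of the kind of task covered, a small sketch using the sf package (the file path is a placeholder, not a course dataset):

library(sf)
boundaries <- st_read("path/to/boundaries.shp")    # read a vector (shapefile) layer
plot(st_geometry(boundaries))                      # quick map of the geometries
# A basic spatial calculation: polygon areas in square kilometres.
boundaries$area_km2 <- as.numeric(st_area(boundaries)) / 1e6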

Open to Staff, research students, and affiliates with a valid University of Sydney UniKey 
Prerequisites Attendees are expected to have some R background (at least at the level of the “Introductory R” Intersect courses, including the tidyverse suite of packages and the use of R as a data processing tool).
Resources The lessons closely follow the Data Carpentry curriculum, and also include some Australian-specific information in “Introduction to Geospatial Concepts”.

Bioinformatics

This workshop is composed of two parts and aims to teach participants how to perform RNA-seq analysis in a reproducible manner.
 
By the end of the workshop you should be able to:

  • List the steps involved in analysis of RNA-seq data
  • Describe key concepts and considerations for RNA-seq experiments
  • Describe the benefits of using nf-core workflows
  • Deploy an RNA-seq nf-core workflow on Pawsey’s Nimbus Cloud to perform:
        • Quality control
        • Alignment
        • Quantification to generate raw counts
  • Use R/RStudio on Pawsey’s Nimbus Cloud to:
        • Perform quality control
        • Identify differentially expressed genes using DESeq2
        • Perform functional enrichment analysis

This course is for command line users. We use a combination of technologies including Nextflow, Pawsey Nimbus cloud, Singularity and more.  
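
As a hedged illustration of the DESeq2 step listed above (object names are placeholders; the workshop supplies its own count matrix and sample sheet):

library(DESeq2)
dds <- DESeqDataSetFromMatrix(countData = counts,     # raw counts from the nf-core workflow
                              colData   = samples,    # sample metadata, including condition
                              design    = ~ condition)
dds <- DESeq(dds)                                     # fit the model
res <- results(dds, contrast = c("condition", "treated", "control"))
summary(res)                                          # overview of differentially expressed genes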


Open to Staff, research students, and affiliates
Prerequisites

Familiarity with Unix/Linux command line

Familiarity with R/RStudio

Resources

The course requires your own laptop with access to Pawsey Nimbus. 

This workshop is composed of three courses and aims to teach essential concepts for analysing next generation sequencing data on the University’s Artemis High Performance Computer (HPC).

Concepts learnt can be applied to analysing whole genome, exome or transcriptomic data. Attendees will also learn how to automate their jobs using job arrays and how to visualise data on Artemis, removing the need for a local desktop.

Open to Staff, research students, and affiliates with a valid University of Sydney UniKey
Prerequisites Competency on the Unix/Linux command line and basic familiarity with Artemis. You must first take an ‘Introduction to Unix/Linux’ and ‘Introduction to Artemis’ course; these are regularly scheduled on campus. 
Resources

You must bring your laptop. If it has a Windows operating system, please ensure you have a terminal client installed.

This course will introduce you to:

  • Artemis HPC
  • single cell RNA sequencing with the 10X Chromium system
  • 10X Genomics’ Cell Ranger bioinformatics pipeline
  • 10X Genomics’ Loupe Cell Browser

By the end of the course you will:

  • be familiar with the Artemis HPC
  • understand how single cell RNA sequencing works using the 10X system
  • know how to run an end-to-end QC and analysis pipeline using Cell Ranger
  • know how to visualise results using the Loupe Cell Browser
Open to Staff, research students and affiliates with a valid University of Sydney UniKey
Prerequisites Intro to RNA sequence analysis on Galaxy Australia
Resources You must bring your laptop. If it has a Windows operating system, please ensure you have a terminal client installed.

This course provides an introduction to carrying out RNA-seq data analysis using the Artemis HPC and R. We will cover the processes of:

  • obtaining sequencing data (in fastq or other format)
  • generating a count table
  • generating a list of differentially expressed genes
  • pathway analysis basics
Open to Staff, research students, and affiliates with a valid University of Sydney UniKey
Prerequisites

You must have:

  • completed Intro to RNA sequence analysis on Galaxy Australia
  • your own laptop, with R, RStudio, Bioconductor and several other key libraries installed
  • a University of Sydney UniKey (to access the Artemis HPC)
  • a text editor, such as Sublime Text, Notepad++ (Windows only), Visual Studio Code or Atom
  • a terminal application, such as the built-in terminal on a Mac or Linux machine, or Git Bash for Windows.
Resources

You must bring your laptop. If it has a Windows operating system, please ensure you have a terminal client installed.

CLC Genomics Workbench is a comprehensive suite of bioinformatics tools packaged into a user-friendly graphical environment. You can perform a range of analyses on next generation sequencing data and create customisable workflows for studies in genomics, transcriptomics, epigenomics and metagenomics. CLC Genomics Workbench is linked to Artemis HPC, providing users with higher computational power and throughput, and better data security than ever before.

In this course, we will teach you how to submit bioinformatics analyses to be processed on Artemis HPC from the CLC Genomics application on your personal computer.

The course will cover:

  • data management, 
  • importing/exporting data 
  • launching jobs on Artemis
Open to Staff, research students, and affiliates with a valid University of Sydney UniKey
Prerequisites None
Resources

Own laptop is required, with CLC Genomics Workbench 10.1.1 installed. Download it for Mac, Windows, or Linux.

Purchase license subscription to CLC Genomics Workbench

This course is designed to introduce you to basic concepts in whole genome sequencing (WGS) analysis using Galaxy Australia (https://usegalaxy.org.au/), a user-friendly web-based bioinformatics research platform.

In the DNA sequence analysis course, we will investigate a superbug outbreak in a hospital and learn how to:

  • assess read quality
  • de novo assemble a draft genome
  • annotate a draft genome
  • identify the pangenome
  • align reads to a reference genome
  • call variants
  • draw a phylogenetic tree
Open to All (University of Sydney staff, research students and affiliates given priority)
Prerequisites None
Resources

Own laptop is required.

This course is designed to introduce you to basic concepts in RNA sequencing analysis pipelines on Galaxy Australia, a user-friendly web-based bioinformatics research platform.

In the RNA sequencing course, we will use RNA sequencing data to align, visualise and perform differential expression analysis.

Open to All (University of Sydney staff, research students and affiliates given priority)
Prerequisites None
Resources

Own laptop is required.

Galaxy Australia is a free web-based bioinformatics analysis and workflow platform. It contains thousands of bioinformatics tools to combine, analyse and interpret genomic (DNA), transcriptomic (RNA), proteomic (proteins) and metabolomic (small molecules) data. It provides a simple point-and-click graphical user interface and aims to make applying bioinformatics approaches on powerful national computing infrastructure easier.

SIH facilitates a number of bioinformatics courses using Galaxy Australia covering quality control, genome assembly, genome annotation, variant calling, antibiotic-resistant genes, strain subtyping, species identification, RNA-seq and metagenomics.

Galaxy Australia is supported by National Research Infrastructure for Australia, Bioplatforms Australia, the Australian Research Data Commons, UQ RCC, QCIF, Melbourne Bioinformatics.

Open to All (University of Sydney staff, research students and affiliates given priority)
Prerequisites None
Resources

Own laptop is required.


Following the successful annual Sydney Summer School in Pathogen Genomics and Global Health programs from 2017 to 2020, we are pleased to open EOIs for 2021 to microbiologists, clinicians, epidemiologists and public health professionals who are interested in translational research in the field of public health pathogen genomics and communicable disease control. The program includes a mix of inspiring keynotes, masterclasses and practical demonstrations delivered by expert practitioners in a webinar series. We will teach the basics of the genomics of bacteria, viruses and fungi with epidemic potential, and critically examine approaches to the analysis of genomes in a global health context. The webinar series will illustrate the power of genomics, functional genomics and metagenomics in answering important questions on the assessment of evolution, virulence, transmissibility and drug resistance, as well as on the detection of local and international outbreaks and the deciphering of transmission pathways. The special focus this year will be on applications of genomics to tracking the evolution and community spread of SARS-CoV-2.

The number of participants is limited. Organisers will select participants based on their provided information about motivation, prior knowledge and interests.

Topics to be covered

  • What can the analysis of microbial genomes tell translational researchers and clinicians? How to select sequencing and bioinformatics solutions for specific research questions? Genome-wide association studies and patient outcomes
  • Integration of genomic, clinical and epidemiological data: global and local perspectives and solutions
  • Integrated data models, data analytics for knowledge discovery and data visualisation (we will employ phylogenetics and phylodynamics as case studies)
  • Generation and analysis of SARS-CoV-2 genomic data in the context of current pandemic
  • Implementation of next generation sequencing technologies in diagnostic and public health laboratories
  • Effective and ethical data sharing and translation of genomics into precision medicine and public health
  • Modelling and evaluation of genomics-guided interventions in hospital and community settings; genomics knowledge network

Course Information:

Dates: 26th Feb, 5th, 12th, 19th March 2021

Time: 1pm – 5pm   Australian Eastern Standard Time (AEST)

EOI Submission Link (close date 10th Jan 2021):

https://redcap.sydney.edu.au/surveys/?s=TXCPLW99XR

Program: A detailed program for the 4-day webinar series will be released soon. Please refer to the CIDM-PH website for the 2020 program.

Registration Fees (to be confirmed upon EOI):

$400 International and Australian participants

$250 University of Sydney students and academics

Enquiries:  WSLHD-CIDM-PH@health.nsw.gov.au

Research data management

Research Electronic Data Capture (REDCap) is a secure web-based database application maintained by the University. It is ideal for collecting and managing participant data and administering online surveys, with features supporting longitudinal data collection, complex team workflows and exports to a range of statistical analysis programs.

We will cover:

  • how to build a simple data entry project
  • how to choose the appropriate fields for your data collection
  • how to invite collaborators to a project
Open to

University of Sydney staff, students and affiliates

This training session is designed to address the needs of the University’s research community. It includes information specific to the University’s research data systems and platforms.

Resources Bring your own laptop
Related courses This training covers basic functions of REDCap. Many additional features are covered in Surveys in REDCap.

In this training session, we will cover:

  • how to set up a survey
  • how to flow participants through surveys
  • how to distribute surveys

Open to

University of Sydney staff, students and affiliates

This training session is designed to address the needs of the University’s research community. It includes information specific to the University’s research data systems and platforms.

Prerequisites Experience in building REDCap projects using the basic functions
Resources You must bring your own laptop.