Suggest a topic
Let us know if you require training in an area we don't currently offer.
Many of our services are available free of charge to University researchers, research students, and affiliates. Workshops and training may be offered to customers external to the University, but a fee-for-service arrangement may apply. Please contact us (sih.info@sydney.edu.au) for more information.
Introducing our new SIH Masterclasses for 2023! Whether you're a seasoned coder or just starting out, our 60- or 90-minute training sessions are designed to introduce you to essential tools and computing skills that will enhance your research. With hands-on learning and practical application, these Masterclasses are the perfect opportunity to expand your knowledge and develop new skills that will help you take your research to the next level. The SIH Masterclasses will run on the 4th Thursday of each month.
In this workshop we provide a systematic workflow to apply to any research data analysis to make your quantitative work comprehensive, efficient and more suitable for top-tier journals.
We introduce you to the resources available from both the Sydney Informatics Hub and across the University that will support you in proceeding from hypothesis generation all the way through to publication. Our research workflow consists of a series of defined steps that will assist you in thinking about your data and preparing it for statistical analysis. Data analysis concepts will be covered in detail, including: how experimental design fits into hypothesis generation and your final publication; how to manage your analysis data; and Exploratory Data Analysis (EDA) – an essential and often-overlooked stage of data analysis for determining the appropriate statistical methods to apply in your research. We will also show you some of the more advanced statistical analysis methods to give you an idea of what is possible.
Note that this workshop does not require knowledge of or use of specific statistical software. The analysis methods may be performed using a range of university-supported software options.
Open to | University of Sydney staff, students and research affiliates |
Prerequisites | No previous knowledge of statistical methods is required. |
Resources | Workshop notes |
Duration | 90 minutes |
In this workshop we focus on the key aspects of experimental design that researchers and students may need to apply in their research. Higher degree research students and researchers engaging in new research are especially invited to attend. During the workshop there will be the opportunity to discuss your own research question and associated experimental design.
The workshop will include the following topics:
• your research question
• experimental validity
• randomisation and bias (see the R sketch after this list)
• blinding and bias
• blocking and confounding
• fixed and random effects
• replication, experimental units
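To illustrate the randomisation topic above, here is a minimal R sketch (not part of the workshop materials, which do not assume any particular software) of randomly allocating hypothetical experimental units to treatments within blocks:

```r
# Minimal sketch (hypothetical units and treatments): complete randomisation
# of 24 experimental units to 3 treatments (A, B, C) within 4 blocks.
set.seed(42)                                   # make the allocation reproducible
units  <- paste0("unit", 1:24)
blocks <- rep(paste0("block", 1:4), each = 6)  # 6 units per block
# randomise the treatment order independently within each block
treatment <- unlist(lapply(1:4, function(b) sample(rep(c("A", "B", "C"), 2))))
design <- data.frame(unit = units, block = blocks, treatment = treatment)
head(design)
```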
Open to | University of Sydney staff, students and research affiliates |
Prerequisites | No previous knowledge is assumed. |
Resources | Workshop notes |
Duration | 90 minutes |
In this workshop we will show you how power and sample size calculations help you determine the number of subjects needed for your study, satisfy ethics and grant requirements, and ensure that you have thought thoroughly about your study design. This workshop covers the theory and concepts of power analysis and includes worked examples using G*Power software. You will follow the examples on your own laptop (PC or Mac). It is essential that you have G*Power installed on your machine before the workshop.
Download the software from the G*Power website (it is free, and available for both Windows and macOS).
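The workshop itself uses G*Power, but for orientation the same kind of calculation can be done in base R; the sketch below (with assumed effect size and power targets) estimates the per-group sample size for a two-sample t-test:

```r
# Orientation only -- the workshop uses G*Power rather than R.
# Sample size for a two-sample t-test detecting a medium effect
# (difference of 0.5 SD) with 80% power at a 5% significance level.
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.80,
             type = "two.sample", alternative = "two.sided")
# The reported n (~64) is per group; always round up to whole subjects.
```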
Open to | University of Sydney staff, students and research affiliates. |
Prerequisites | Knowledge of basic statistics is recommended. |
Resources | Bring your own laptop with G*Power software installed. |
Duration | 90 minutes |
In this workshop we focus on practical data analysis by presenting statistical workflows, applicable in any software, for four of the most common univariate analyses: linear regression, ANOVA, ANCOVA, and repeated measures (a simple mixed model) – all assuming normally (Gaussian) distributed residuals. These workflows can be easily extended to more complex models. The R code used to create the output is also included.
This is one of our three workshops for researchers interested in statistical methods such as linear regression, ANOVA, ANCOVA, mixed models, logistic/binary and count (Poisson) regression. Each one builds on the preceding workshop, and together they show how all these analyses can be performed using the same easy-to-understand Generalised Linear Mixed Model (GLMM) framework and workflow. They also show how these models can be used to analyse experimental designs such as Control vs Treatment, Randomised Control Trials (RCTs), Before After Control Impact (BACI) analysis, repeated measures, and many more. There is also a fourth complementary workshop, Statistical Model Building, which we recommend for those experienced with linear models or who have done at least the first two of our Linear Models workshops.
The material is organised around Statistical Workflows, applicable in any software, giving practical step-by-step instructions on how to do the analysis, including assumption testing, model interpretation, and presentation of results.
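As a taste of what the workflows cover, here is a minimal R sketch using a hypothetical data frame `dat` (outcome `y`, factor `group`, covariate `x`, repeated-measures factor `time` and a `subject` identifier); it is illustrative only and not the workshop's own code:

```r
# Hypothetical data frame `dat`: outcome y, factor group, covariate x,
# repeated-measures factor time, and subject identifier.
m_lm     <- lm(y ~ x, data = dat)              # linear regression
m_anova  <- lm(y ~ group, data = dat)          # one-way ANOVA
m_ancova <- lm(y ~ group + x, data = dat)      # ANCOVA
anova(m_anova)                                 # ANOVA table
summary(m_ancova)                              # coefficients and model fit
par(mfrow = c(2, 2)); plot(m_lm)               # residual diagnostics (assumption checks)

# Repeated measures as a simple mixed model: random intercept per subject
library(lme4)
m_rm <- lmer(y ~ group * time + (1 | subject), data = dat)
summary(m_rm)
```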
Open to | University of Sydney staff, students and research affiliates. |
Prerequisites | Knowledge of basic statistics is recommended. |
Resources | Workshop notes |
Duration | 90 minutes |
In this workshop we focus on practical data analysis, applicable in any software, for two of the more common GLMMs: logistic regression for binary data (using a binomial distribution) and Poisson regression for count data (using a Poisson distribution). The GLM framework is also described in detail. The R code used to create the output is included.
This is one of our three workshops for researchers interested in statistical methods such as linear regression, ANOVA, ANCOVA, mixed models, logistic/binary and count (Poisson) regression. Each one builds on the preceding workshop, and together they show how all these analyses can be performed using the same easy-to-understand Generalised Linear Mixed Model (GLMM) framework and workflow. They also show how these models can be used to analyse experimental designs such as Control vs Treatment, Randomised Control Trials (RCTs), Before After Control Impact (BACI) analysis, repeated measures, and many more. There is also a fourth complementary workshop, Statistical Model Building, which we recommend for those experienced with linear models or who have done at least the first two of our Linear Models workshops.
The material is organised around Statistical Workflows, applicable in any software, giving practical step-by-step instructions on how to do the analysis, including assumption testing, model interpretation, and presentation of results.
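For orientation, the sketch below (hypothetical data frame `dat` with a binary outcome `disease`, a count outcome `n_events` and predictors `age` and `treatment`) shows how the same R call covers both analyses by changing the distribution family; it is not the workshop's own code:

```r
# Hypothetical data frame `dat`: binary outcome disease, count outcome n_events,
# predictors age and treatment.
m_logit <- glm(disease ~ age + treatment, family = binomial, data = dat)  # logistic regression
m_count <- glm(n_events ~ age + treatment, family = poisson, data = dat)  # Poisson regression
summary(m_logit)
exp(coef(m_logit))   # odds ratios
exp(coef(m_count))   # rate ratios
# Mixed-model (GLMM) versions add random effects via lme4::glmer(), e.g.
# glmer(disease ~ treatment + (1 | site), family = binomial, data = dat)
```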
Open to | University of Sydney staff, students and research affiliates. |
Prerequisites | Knowledge of basic statistics is recommended. |
Resources | Workshop notes |
Duration | 90 minutes |
Statistical analysis is more than just building the best predictive model; it should also enable you to make impactful discoveries that expand our knowledge. Constructing engaging narratives about your research is also invaluable as you look to connect with your field, the community and funding bodies. To do this you need to build interpretable models, test hypotheses, uncover impactful patterns, and present results in insightful, intuitive and memorable ways. In this workshop we explore tips and tricks to make your research do just that. Topics covered will be:
–Building impactful real-world recommendations and guidelines – i) why we need to understand both stated and model-derived importance, ii) how Quadrant Analysis uses both variable performance and importance to develop impactful real-world recommendations and guidelines.
–Reporting tricks that extract insightful and impactful patterns and craft engaging stories – i) establishing the importance of a predictor/risk factor, ii) confidence vs prediction intervals, iii) applying and correcting for multiple comparisons, iv) testing different hypotheses using different model parameterisations of the design matrix, v) interpreting categorical predictors – dummy vs effects coding and estimated marginal means, plus other reporting and interpretation tricks.
–Building interpretable models – it is quite common for researchers to incorrectly use model parameters to establish a variable's 'impact' or 'importance'. We show how multi-collinearity prevents this interpretation, and how to assess and then fix it so parameters can be used to identify important predictors/risk factors and other insightful patterns.
–Mixed models – extending the Linear Models 1 introduction to: i) better explain how mixed models work, ii) use them to test population-wide hypotheses outside your sampled groups, and iii) use a random slope (with examples of the patterns it can explain and hypotheses it can test).
–Using data visualisation to report complex nonlinear models graphically and aid pattern extraction
This is one of our three workshops for researchers interested in statistical methods such as linear regression, ANOVA, ANCOVA, mixed models, logistic (binary) and Poisson (count) regression. Each one builds on the preceding workshop showing how all these analyses can be performed using the same easy to understand Generalised Linear Mixed Model (GLMM) framework and workflow, and how they can be used to analyse experimental designs such as Control vs Treatment, Randomised Control Trials (RCTs), Before After Control Impact (BACI) analysis, repeated measures, plus many more. There is also a fourth complementary workshop called Statistical Model Building which we recommend for those experienced with linear models or for those who have done at least the first two of our Linear Model workshops.
The material is organised around Statistical Workflows, applicable in any software, giving practical step-by-step instructions on how to do the analysis, including assumption testing, model interpretation, and presentation of results. Some workshops also have accompanying Software Workflows for R.
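To ground some of these reporting ideas, here is a minimal R sketch (hypothetical data frame `dat` with outcome `y`, factor `treatment`, covariate `x` and grouping factor `site`); the lme4 and emmeans packages are one common choice, not the only route covered in the workshop:

```r
# Hypothetical data frame `dat`: outcome y, factor treatment, covariate x,
# grouping factor site.
library(lme4)
library(emmeans)

# Mixed model with a random intercept and a random slope for x, varying by site
m <- lmer(y ~ treatment * x + (1 + x | site), data = dat)

# Estimated marginal means for the categorical predictor, with
# Tukey-adjusted pairwise comparisons (a multiple-comparison correction)
emm <- emmeans(m, ~ treatment)
summary(emm)
confint(emm)                  # confidence intervals on the marginal means
pairs(emm, adjust = "tukey")  # all pairwise treatment contrasts
```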
Open to | University of Sydney staff, students and research affiliates. |
Prerequisites | It is recommended that attendees are familiar with concepts of Linear Models explained in Linear Models 1 and 2 workshops. |
Resources | Workshop notes |
Duration | 90 minutes |
In this workshop we will introduce you to the key aspects and strategies of statistical model building to help you answer your research question, and avoid common pitfalls, erroneous models and incorrect conclusions. Appropriate statistical model building will help you to gain knowledge, as opposed to simply getting the best prediction (although that can be a goal as well).
We will focus on concepts such as variable selection, multi-collinearity, interactions, selecting a model building strategy, comparing models and evaluating models. In general, these concepts are useful for any statistical model building. This workshop will provide generalised linear regression model examples. The focus will be on practical application of concepts, so mathematical descriptions will be kept to a minimum.
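As a flavour of two of these checks, the sketch below (hypothetical data frame `dat` with outcome `y` and candidate predictors `x1`–`x3`) screens for multi-collinearity and compares nested models; the workshop itself is software-agnostic:

```r
# Hypothetical data frame `dat`: outcome y and candidate predictors x1, x2, x3.
library(car)   # provides vif()

full    <- glm(y ~ x1 + x2 + x3, family = gaussian, data = dat)
reduced <- glm(y ~ x1 + x2,      family = gaussian, data = dat)

vif(full)                          # variance inflation factors; large values flag collinearity
AIC(full, reduced)                 # lower AIC = better fit/complexity trade-off
anova(reduced, full, test = "F")   # formal comparison of the nested models
```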
Open to | University of Sydney staff, students and research affiliates |
Prerequisites | Prior experience with statistical modelling is assumed, as the basics of regression modelling will not be covered. Please consider attending Linear Models 1 and/or Linear Models 2 workshops to come up to speed beforehand. Note that this workshop does not require knowledge of or use of specific statistics software. The analysis methods may be performed using a wide range of commonly available software. |
Resources | Workshop notes Bring your own laptop |
Duration | 90 minutes |
In this workshop we provide a theoretical and practical introduction to meta-analysis as part of a systematic review. We examine the process of performing a meta-analysis, in particular focusing on key statistical concepts such as heterogeneity and Fixed and Random effects modelling.
We will discuss the available choices of statistical software and show you worked examples using the metafor package in R. A basic knowledge of R software is desirable, but not necessary, since you are not expected to produce and run your own code during the workshop.
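For orientation, a random-effects meta-analysis in metafor looks roughly like the sketch below (hypothetical data frame `dat` holding pre-computed effect sizes `yi` and their variances `vi`):

```r
# Hypothetical data frame `dat`: effect sizes yi and sampling variances vi.
library(metafor)

res_fixed  <- rma(yi, vi, data = dat, method = "FE")    # fixed-effect model
res_random <- rma(yi, vi, data = dat, method = "REML")  # random-effects model

summary(res_random)   # pooled estimate plus tau^2 and I^2 heterogeneity statistics
forest(res_random)    # forest plot of individual and pooled effects
funnel(res_random)    # funnel plot to inspect small-study effects
```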
Open to | University of Sydney staff, students and research affiliates. |
Prerequisites | Knowledge of basic statistics is recommended. Basic knowledge of R (programming language) is desirable but not required. |
Resources | Bring your own laptop. If you want to practise the example during the workshop you will need to have R and RStudio installed. |
Duration | 90 minutes |
Survival analysis is used when you want to measure the time elapsed up to when a specified event occurs. It is commonly used in studies where subjects are followed until death occurs, hence the name.
In this workshop we will introduce some key concepts pertaining to survival analysis, including censoring of cases, the survival function, and the hazard ratio estimator. The Kaplan-Meier survival curve will be explained through a worked example, and the technique of Cox proportional hazards regression will be introduced using the same example dataset.
You will be provided with software code in SPSS and R to reproduce the analysis presented in the workshop.
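The workshop provides its own SPSS and R code; purely for orientation, the sketch below applies the same techniques to the example `lung` dataset that ships with the R survival package (not the workshop dataset):

```r
# Uses the built-in `lung` dataset from the survival package, not the workshop data.
library(survival)

fit_km <- survfit(Surv(time, status) ~ sex, data = lung)      # Kaplan-Meier curves by sex
plot(fit_km, col = 1:2, xlab = "Days", ylab = "Survival probability")

fit_cox <- coxph(Surv(time, status) ~ sex + age, data = lung) # Cox proportional hazards model
summary(fit_cox)    # hazard ratios with confidence intervals
cox.zph(fit_cox)    # check the proportional hazards assumption
```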
Open to | University of Sydney staff, students and research affiliates. |
Prerequisites | Knowledge of basic statistics is recommended. |
Resources | Bring your own laptop. If you want to practise the example during the workshop you will need to be able to run SPSS syntax or R code. This is optional. |
Duration | 90 minutes |
In this workshop we present a range of practical tips and guidelines on how to design, field, and analyse the more commonly used surveys. The initial focus is on how to set up and field a study. A variety of different questions and scales, including some unorthodox and novel ones, will be presented to give an appreciation of what is possible. Some of the topics covered will be line vs discrete scales, the effect of colour, and optimal discrete/Likert scales.
We will then cover basic analysis of common question types and reporting, and discuss the pros and cons of common analyses (e.g. linear vs ordinal regression). The material is software agnostic and can be applied in any software.
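To illustrate the linear vs ordinal comparison mentioned above, here is a minimal R sketch (hypothetical data frame `svy` with a 5-point Likert item `satisfaction` and a predictor `group`); the workshop itself is software-agnostic:

```r
# Hypothetical data frame `svy`: 5-point Likert item `satisfaction`, predictor `group`.
library(MASS)   # provides polr()

# Option 1: treat the Likert response as numeric (ordinary linear regression)
m_lin <- lm(as.numeric(satisfaction) ~ group, data = svy)

# Option 2: respect the ordered categories (proportional-odds ordinal regression)
svy$satisfaction <- factor(svy$satisfaction, ordered = TRUE)
m_ord <- polr(satisfaction ~ group, data = svy, Hess = TRUE)

summary(m_lin)
summary(m_ord)
```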
Open to | University of Sydney staff, students and research affiliates |
Prerequisites | No previous knowledge of statistical methods is required. |
Resources | Workshop notes |
Duration | 90 minutes |
In this workshop we build on the information from Surveys 1. We explore topics including questionnaire validation and index creation using methods such as Exploratory Factor Analysis (EFA), Confirmatory Factor Analysis (CFA) using Structural Equation Modelling (SEM), and Conjoint models such as Choice modelling.
The material is software agnostic and can be applied in any software.
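As a rough illustration of the validation methods named above, the sketch below runs an exploratory and a confirmatory factor analysis in R (hypothetical survey items `q1`–`q6` in a data frame `svy`); again, the workshop itself is software-agnostic:

```r
# Hypothetical data frame `svy` with survey items q1-q6.
library(psych)    # fa() for exploratory factor analysis
library(lavaan)   # cfa() for confirmatory factor analysis / SEM

# Exploratory factor analysis: how many factors, and which items load where?
efa <- fa(svy[, paste0("q", 1:6)], nfactors = 2, rotate = "oblimin")
print(efa$loadings)

# Confirmatory factor analysis of a hypothesised two-factor structure
model <- '
  wellbeing  =~ q1 + q2 + q3
  engagement =~ q4 + q5 + q6
'
fit <- cfa(model, data = svy)
summary(fit, fit.measures = TRUE, standardized = TRUE)   # loadings, CFI, RMSEA, etc.
```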
Open to | University of Sydney staff, students and research affiliates |
Prerequisites | No previous knowledge of statistical methods is required. |
Resources | Workshop notes |
Duration | 90 minutes |
In multivariate statistics we simultaneously model and estimate variability in more than one variable, often in order to examine the relationships between variables. In this workshop we examine the key aspects of moving from univariate to multivariate analysis, and the situations and scenarios where multivariate analysis is typically applied. We will focus on practical application of concepts through examples.
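For a flavour of the univariate-to-multivariate step, the sketch below uses R's built-in iris data; this is an illustration only, as the workshop does not assume any particular software:

```r
# Built-in iris data: several numeric responses measured on each flower.
fit_uni   <- aov(Sepal.Length ~ Species, data = iris)        # univariate ANOVA (one response)
fit_multi <- manova(cbind(Sepal.Length, Sepal.Width, Petal.Length) ~ Species,
                    data = iris)                             # MANOVA (several responses at once)
summary(fit_multi, test = "Pillai")   # multivariate test statistic
summary.aov(fit_multi)                # follow-up univariate ANOVAs

# Principal component analysis, a common multivariate dimension-reduction step
pca <- prcomp(iris[, 1:4], scale. = TRUE)
summary(pca)
```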
Open to | University of Sydney staff, students and research affiliates |
Prerequisites | Prior experience with statistical modelling is assumed, as the basics of regression modelling will not be covered. Please consider attending Linear Models 1 and/or Linear Models 2 workshops to come up to speed beforehand. Note that this workshop does not require knowledge of or use of specific statistics software. The analysis methods may be performed using a wide range of commonly available software. |
Resources | Workshop Notes |
Duration | 90 minutes |
Learn about the University’s High Performance Computer (HPC) ‘Artemis’, including directory structure, software, and how to submit and monitor compute jobs using the PBS Pro scheduling software. Artemis is available at no cost to University of Sydney staff and students.
Open to | Staff, research students, and affiliates with a valid University of Sydney UniKey |
Prerequisites | Competency on the Unix/Linux command line. If you are interested in learning HPC but have no Unix/Linux command-line skills, you must first take an ‘Introduction to Unix/Linux’ course. |
Resources | You must bring your own laptop. |
Related courses | This course is designed as a 2-part session, with Introduction to the Artemis HPC in the morning and Introduction to data transfer and the research data store in the afternoon. We recommend you register for both, however you may take these courses on separate days. We also recommend following up with ‘Intermediate HPC (Automation)’. |
Learn how to transfer data between your local computer, an external source, the Research Data Store (RDS) and the Artemis HPC. Learn how to back up Artemis HPC output onto the RDS.
Open to | Staff, research students, and affiliates with a valid University of Sydney UniKey |
Prerequisites | Competency on the Unix/Linux command line. If you are interested in learning HPC but have no Unix/Linux command-line skills, you must first take an ‘Introduction to Unix/Linux’ course. |
Resources | You must bring your own laptop. |
Related courses | This course is designed as a 2-part session, with Introduction to the Artemis HPC in the morning and this course in the afternoon. We recommend you register for both, however you may take these courses on separate days. We also recommend following up with ‘Intermediate HPC (Automation)’. |
This course introduces GPU computing, and running GPU jobs on Artemis and other HPC systems. The University of Sydney’s Artemis HPC hosts several NVIDIA V100 GPUs. This course will help you to understand basic concepts of GPU programming.
Open to | Staff, research students, and affiliates with a valid University of Sydney UniKey |
Prerequisites | Competency on the Unix/Linux command line. If you are interested in learning HPC but have no Unix/Linux command-line skills, you must first take an ‘Introduction to Unix/Linux’ course. |
Resources | You must bring your own laptop. |
Related courses | This course is designed as a 2-part session, with Introduction to the Artemis HPC in the morning and this course in the afternoon. We recommend you register for both, however you may take these courses on separate days. We also recommend following up with ‘Intermediate HPC (Automation)’. |
Learn how to submit jobs to Artemis from MATLAB running on your own computer.
Open to | Staff, research students, and affiliates with a valid University of Sydney UniKey |
Prerequisites | A basic understanding of MATLAB is assumed. We also highly recommend that you have taken the ‘Introduction to the Artemis HPC’ course, unless you already have experience using Artemis. These are scheduled regularly on campus. |
Resources | You must have MATLAB R2017a installed if you wish to complete the training examples. No other version of MATLAB will work with the version of the MDCS currently installed on Artemis. |
This course is designed to transition researchers from local Python development and execution to tailoring code for High Performance Computing (traditional and cloud) using specific libraries, functions and common implementations.
Open to | Staff, research students, and affiliates with a valid University of Sydney Unikey. |
Prerequisites | Competency with high performance computing environments, including submitting and running jobs and moving data between local and remote machines. Fundamental Python experience, with a basic grasp of functions, variables and syntax. These prerequisites can be satisfied by attending these courses, which are regularly run on campus: |
Resources | You must bring your own laptop. Contact us if you need to borrow one for the course. You must have a Python environment installed with the required modules. Please refer to the course notes for installation and versioning instructions. |
This course is designed to transition researchers from local R development and execution to tailoring code for High Performance Computing (traditional and cloud) using specific libraries, functions and common implementations.
Open to | Staff, research students, and affiliates with a valid University of Sydney Unikey. |
Prerequisites | Competency with high performance computing environments, including submitting and running jobs and moving data between local and remote machines. Fundamental R experience, with a basic grasp of functions, variables and syntax. These prerequisites can be satisfied by attending these courses, which are regularly run on campus: |
Resources | You must bring your own laptop. Contact us if you need to borrow one for the course. You must have an R environment installed with the required packages. Please refer to the course notes for installation and versioning instructions. |
OpenFOAM is an open-source computational fluid dynamics (CFD) package used in engineering and science (for example, heat transfer, turbulence, solid mechanics, and their impact on design and manufacturing processes). The recent popularity of OpenFOAM can be attributed to it being free, having a versatile and easily implemented syntax, and being adopted in both academic and commercial applications. ANSYS Fluent is a commercial package with similar capabilities.
In this course we will cover the basics of each software package, run through setting up code and using demos on your local machine, and then execute jobs in a High Performance Computing environment. You will also learn about visualising results with ParaView.
Open to | Staff, research students, and affiliates with a valid University of Sydney Unikey |
Prerequisites | Competency with high performance computing environments, submitting and running jobs, and moving data between local and remote machines will be beneficial but is not mandatory, as we will cover these fundamentals in the workshop. If you have no prior experience with these kinds of environments, it is recommended that you complete Intro to HPC first. |
Resources | You must bring your own laptop. Contact us if you need to borrow one for the course. OpenFOAM and ANSYS Fluent must be installed. Please refer to the course notes for installation and versioning instructions. |
You have written, compiled and run functioning programs in C and/or Fortran. You know how HPC works and you've submitted batch jobs.
Now you want to move from writing single-threaded programs into the parallel programming paradigm, so you can truly harness the full power of High Performance Computing.
This course is facilitated by Intersect Australia. As a member institution, University of Sydney staff and students can also attend any training hosted by Intersect, at any location, free of charge.
Pre-requisites: Assumed knowledge is basic Unix/Linux and Artemis HPC or other HPC running PBS.
This course is for new users of the National Computational Infrastructure’s (NCI’s) high performance computer, Gadi. This will be a webinar/live-demo style course, and users with access to Gadi are welcome to follow along. Learn about:
Researchers who were recently awarded, or are interested in applying for, time on Gadi through allocation schemes (e.g. the National Computational Merit Allocation Scheme (NCMAS) or the SIH HPC Allocation Scheme) are encouraged to attend.
A series of short 1-1.5 hour online training sessions that showcase the tools and techniques we use internally at SIH:
This is a two-day workshop series designed to provide an introduction to practical machine learning with R.
Day 1 focuses on regression. We will provide an introduction to some basic principles of machine learning experimentation, describing how one selects a model to use and the concept of cross-validation. We will demonstrate how these apply to several classical machine learning approaches in R, including supervised (classification and regression, such as K-nearest neighbour and linear regression) and unsupervised (clustering, such as hierarchical and k-means clustering, and dimensionality reduction, such as principal component analysis) methods. We recommend attending both the regression and classification workshops.
Day 2 focuses on classification and unsupervised learning approaches. We will build on the first day’s activities to discuss how cross-validation applies in the context of classification problems. We will then demonstrate how these apply to several classical machine learning approaches in R, including supervised (classification and regression, such as K-nearest neighbour and linear regression) and unsupervised (clustering, such as hierarchical and k-means clustering, and dimensionality reduction, such as principal component analysis) methods. We recommend attending both the regression and classification workshops.
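To indicate the level, here is a minimal R sketch of the kinds of methods listed above using the built-in iris data (this is not the workshop code, which also covers cross-validation in more depth):

```r
# Built-in iris data; simple hold-out split rather than full cross-validation.
library(class)    # knn()

set.seed(1)
idx   <- sample(nrow(iris), 105)        # ~70% of rows for training
train <- iris[idx, ]
test  <- iris[-idx, ]

# Supervised: k-nearest-neighbour classification
pred <- knn(train[, 1:4], test[, 1:4], cl = train$Species, k = 5)
mean(pred == test$Species)              # hold-out accuracy

# Supervised: linear regression
lm_fit <- lm(Petal.Length ~ Sepal.Length + Sepal.Width, data = train)

# Unsupervised: k-means clustering and principal component analysis
km  <- kmeans(scale(iris[, 1:4]), centers = 3, nstart = 25)
pca <- prcomp(iris[, 1:4], scale. = TRUE)
```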
Open to | Staff, research students, and affiliates with a valid University of Sydney UniKey |
Prerequisites | Attendees are expected to have some R background (at least at the level of the “Introductory R” Intersect courses, including the tidyverse suite of packages and the use of R as a data processing tool). It is assumed that attendees have not had previous training in ML, for example as part of an undergraduate semester-long course. |
Resources | R, Rstudio and installation of several key packages will be required. |
This is a two-day workshop series designed to provide an introduction to practical machine learning with Python.
Day 1 focuses on regression. We will provide an introduction to some basic principles of machine learning experimentation, describing how one selects a model to use and the concept of cross-validation. We will demonstrate how these apply to several classical machine learning approaches in Python, including supervised (classification and regression, such as K-nearest neighbour and linear regression) and unsupervised (clustering, such as hierarchical and k-means clustering, and dimensionality reduction, such as principal component analysis) methods. We recommend attending both the regression and classification workshops, as the latter builds on the former.
Day 2 focuses on classification and unsupervised learning approaches. We will build on the first day’s activities to discuss how cross-validation applies in the context of classification problems. We will then demonstrate how these apply to several classical machine learning approaches in Python, including supervised (classification and regression, such as K-nearest neighbour and linear regression) and unsupervised (clustering, such as hierarchical and k-means clustering, and dimensionality reduction, such as principal component analysis) methods. We recommend attending both the regression and classification workshops, as the latter builds on the former.
Open to | Staff, research students, and affiliates with a valid University of Sydney UniKey |
Prerequisites | Attendees are expected to have some Python background (at least at the level of the “Introductory Python” Intersect courses). It is assumed that attendees have not had previous training in ML, for example as part of an undergraduate semester-long course. |
Resources | Anaconda Python, Jupyter notebooks and the scikit-learn library will be used in this course. |
This two-day workshop follows the Data Carpentry R Geospatial curriculum, with additional details relating to working with geospatial data in Australia. It is designed to introduce learners comfortable with R to working with geospatial data, including raster and vector files. At the end of the workshops, learners will be able to load, manipulate and visualise these file types to make maps, and perform basic spatial calculations.
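For a sense of what this looks like in practice, here is a minimal R sketch with hypothetical file paths; the exact packages used in the Data Carpentry lessons may differ, but sf (vector) and terra (raster) are common choices:

```r
# Hypothetical file paths -- substitute your own data.
library(sf)       # vector data (points, lines, polygons)
library(terra)    # raster data
library(ggplot2)

sites <- st_read("data/field_sites.shp")   # hypothetical shapefile of field sites
dem   <- rast("data/elevation.tif")        # hypothetical elevation raster

ggplot() +
  geom_sf(data = sites, colour = "red") +
  labs(title = "Field sites")              # simple vector map

plot(dem)                                  # quick raster map
terra::extract(dem, vect(sites))           # raster value (elevation) at each site
```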
Open to | Staff, research students, and affiliates with a valid University of Sydney UniKey |
Prerequisites | Attendees are expected to have some R background (at least at the level of the “Introductory R” Intersect courses, including the tidyverse suite of packages and the use of R as a data processing tool). |
Resources | The lessons closely follow the Data Carpentry curriculum, and also include some Australian-specific information in “Introduction to Geospatial Concepts”. |
Australian BioCommons training cooperative
SIH is a member of the national bioinformatics training cooperative. Through the cooperative, you can attend and access free online webinars and workshops delivered by institutions around Australia.
You can find all of our materials on the Australian BioCommons Zenodo page and recordings of all webinars on their YouTube channel.
This workshop comprises two parts and aims to teach participants how to perform RNA-Seq analysis in a reproducible manner.
By the end of the workshop you should be able to:
List the steps involved in analysis of RNA-seq data
Describe key concepts and considerations for RNA-seq experiments
Describe the benefits of using nf-core workflows
Deploy an RNA-seq nf-core workflow on Pawsey’s Nimbus Cloud to perform:
Quality control
Alignment
Quantification to generate raw counts
Use R/RStudio on Pawsey’s Nimbus Cloud to perform:
Quality control
Identify differentially expressed genes using DESeq2 (see the sketch below)
Perform functional enrichment analysis
This course is for command line users. We use a combination of technologies including Nextflow, Pawsey Nimbus cloud, Singularity and more.
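To give a flavour of the differential expression step listed above, here is a minimal DESeq2 sketch (hypothetical count matrix `counts` and sample table `coldata` with a `condition` column); in the workshop the counts come from the nf-core/rnaseq workflow:

```r
# Hypothetical inputs: `counts` (gene-by-sample count matrix) and `coldata`
# (sample table with a `condition` column, e.g. "control"/"treated").
library(DESeq2)

dds <- DESeqDataSetFromMatrix(countData = counts,
                              colData   = coldata,
                              design    = ~ condition)
dds <- DESeq(dds)                                       # normalisation and model fitting
res <- results(dds, contrast = c("condition", "treated", "control"))
summary(res)
head(res[order(res$padj), ])                            # top differentially expressed genes
```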
Open to | Staff, research students, and affiliates |
Prerequisites | Familiarity with Unix/Linux command line Familiarity with R/RStudio |
Resources | The course requires your own laptop with access to Pawsey Nimbus. |
This workshop comprises three courses and aims to teach essential concepts for analysing next generation sequencing data on the University’s Artemis High Performance Computer (HPC).
Concepts learnt can be applied to analysing whole genome, exome or transcriptomic data. Attendees will also learn how to automate their jobs using job arrays and how to visualise data on Artemis, removing the need for a local desktop.
Open to | Staff, research students, and affiliates with a valid University of Sydney UniKey |
Prerequisites | Competency on the Unix/Linux command line and basic familiarity with Artemis. You must first take an ‘Introduction to Unix/Linux’ and ‘Introduction to Artemis’ course; these are regularly scheduled on campus. |
Resources | You must bring your laptop. If it has a Windows operating system, please ensure you have a terminal client installed. |
This course will introduce you to:
By the end of the course you will be familiar with:
Open to | Staff, research students and affiliates with a valid University of Sydney UniKey |
Prerequisites | Intro to RNA sequence analysis on Galaxy Australia |
Resources | You must bring your laptop. If it has a Windows operating system, please ensure you have a terminal client installed. |
This course provides an introduction to carrying out RNA-seq data analysis using the Artemis HPC and R. We will cover the processes of:
Open to | Staff, research students, and affiliates with a valid University of Sydney UniKey |
Prerequisites | You must have: |
Resources | You must bring your laptop. If it has a Windows operating system, please ensure you have a terminal client installed. |
CLC Genomics Workbench is a comprehensive suite of bioinformatics tools packaged into a user-friendly graphical environment. You can perform a range of analyses on next generation sequencing data and create customisable workflows for studies in genomics, transcriptomics, epigenomics and metagenomics. CLC Genomics Workbench is linked to Artemis HPC, providing users with higher computational power and throughput, and better data security than ever before.
In this course, we will teach you how to submit bioinformatics analyses to be processed on Artemis HPC from the CLC Genomics application on your personal computer.
The course will cover:
Open to | Staff, research students, and affiliates with a valid University of Sydney UniKey |
Prerequisites | None |
Resources | Own laptop is required, with CLC Genomics Workbench 10.1.1 installed. Download it for Mac, Windows, or Linux. |
This course is designed to introduce you to basic concepts in whole genome sequencing (WGS) analysis using Galaxy Australia (https://usegalaxy.org.au/), a user-friendly web-based bioinformatics research platform.
In the DNA sequence analysis course, we will investigate a superbug outbreak in a hospital and learn how to:
Open to | All (University of Sydney staff, research students and affiliates given priority) |
Prerequisites | None |
Resources | Own laptop is required. |
This course is designed to introduce you to basic concepts of RNA sequencing analysis pipelines on Galaxy Australia, a user-friendly web-based bioinformatics research platform.
In the RNA sequencing course, we will align RNA sequencing data, visualise it, and perform differential expression analysis.
Open to | All (University of Sydney staff, research students and affiliates given priority) |
Prerequisites | None |
Resources | Own laptop is required. |
Galaxy Australia is a free web-based bioinformatics analysis and workflow platform. It contains thousands of bioinformatics tools to combine, analyse and interpret genomic (DNA), transcriptomic (RNA), proteomic (proteins) and metabolomic (small molecules) data. It provides a simple point-and-click graphical user interface and aims to make it easier to apply bioinformatics approaches on powerful national computing infrastructure.
SIH facilitates a number of bioinformatics courses using Galaxy Australia, covering: quality control, genome assembly, genome annotation, variant calling, antibiotic-resistance genes, strain subtyping, species identification, RNA-seq and metagenomics.
Galaxy Australia is supported by National Research Infrastructure for Australia, Bioplatforms Australia, the Australian Research Data Commons, UQ RCC, QCIF, Melbourne Bioinformatics.
Open to | All (University of Sydney staff, research students and affiliates given priority) |
Prerequisites | None |
Resources | Own laptop is required. |
Research Electronic Data Capture (REDCap) is a secure web-based database application maintained by the University. It is ideal for collecting and managing participant data and administering online surveys, with features supporting longitudinal data collection, complex team workflows and exports to a range of statistical analysis programs.
We will cover:
Open to | University of Sydney staff, students and affiliates. This training session is designed to address the needs of the University’s research community. It includes information specific to the University’s research data systems and platforms. |
Resources | Bring your own laptop |
Related courses | This training covers basic functions of REDCap. Many additional features are covered in Surveys in REDCap. |
In this training session, we will cover:
Open to | University of Sydney staff, students and affiliates. This training session is designed to address the needs of the University’s research community. It includes information specific to the University’s research data systems and platforms. |
Prerequisites | Experience in building REDCap projects using the basic functions |
Resources | You must bring your own laptop. |