All 2019 seminars will be listed here.
For time and location, please see the individual seminar entries.
(1) MATLAB Programming and Transfer Learning (1 August 2019)
(2) Capsule Networks and Information Theory (8 August 2019)
(3) Reinforcement Learning and Optimisation (15 August 2019)
(4) Manifold Learning and Tensor Algebra (22 August 2019)
(5) GAN and Currency Security (29 August 2019)
(6) Autoencoder and Image Noise Removal (05 September 2019)
(7) RNN and Time Series Analysis (12 September 2019)
(8) Deep Learning and Cryptography (19 September 2019)
(9) Probabilistic Graphical Models and Finite Fields (26 September 2019)
(10) Ensemble Learning and Functional Analysis (03 October 2019)
- 28 March – 30 May 2019 (12.00pm, Thursday, WT102) Wei Qi Yan (CS, AUT) Topic on “PG Mathematics for Deep Learning.”
- Most people have difficulty understanding the relevant mathematics when they study deep learning. In this series of talks, we will review the fundamentals of modern mathematics: we start from mathematical analysis and numerical analysis, then detail mathematics at the postgraduate level, such as functional analysis and finite fields in basic algebra; furthermore, we will introduce the mathematics relevant to deep learning, such as tensor algebra, information theory, optimisation theory, graphical models, and manifolds. Our goal is to assist our postgraduate and PhD students in completing high-quality theses on time. The tentative dates and topics include:
(1) 28 March 2019: Mathematical Analysis
(2) 04 April 2019: Numerical Analysis
(3) 11 April 2019: Optimisation Theory
(4) 18 April 2019: Information Theory
(5) 25 April 2019: Probabilistic Graphical Models
(6) 02 May 2019: Manifold Theory
(7) 09 May 2019: Tensor Algebra
(8) 16 May 2019: Functional Analysis
(9) 23 May 2019: Finite Fields
(10) 30 May 2019: Basic Algebra
- 25 March 2019 (6.00pm, Monday, UOA 303-G01) Prof. Nasir Memon (FIEEE, NYU, USA) Topic on “Seeing is Believing? Media Integrity in a Post-Truth World.”
- The emergence of “fake news,” along with sophisticated machine learning techniques for creating realistic-looking media such as Deep Fakes, has led to resurgent interest in digital media forensics. In this talk, I will broadly discuss how media forgeries have traditionally been generated and detected. Then I will look at new machine learning approaches for creating media that are leading us to a world where images and video can no longer be believed, as they can evade traditional detection techniques. I will end by discussing approaches that are being developed to restore integrity and trust in digital media.
- 21 March 2019 (12.00pm, Thursday, WT102) Mr. Jia Lu (CS, AUT) Topic on “Impressive Implementations and Evaluations for Human Behaviour Recognition Using Deep Learning.”
- As a state-of-the-art technology, deep learning has become more and more popular because of its capacity, superior to that of conventional machine learning. Moreover, deep learning, as an end-to-end model, normally does not require feature extraction, which reduces human labour and achieves time efficiency. This talk aims to use GPU-based deep learning to implement a more stable and robust real-time human behaviour recognition prototype. Thus, this project requires a large amount of data for training and a proper model for learning. Various deep learning approaches were implemented and evaluated in our experiments, with positive outcomes that demonstrate their capability and practicability for human behaviour recognition.
- 14 March 2019 (12.00pm, Thursday, WT102) Dr. Chris Rapson (EEE, AUT) Topic on “Detecting headlights and tail lights on vehicles.”
- Computer vision has become an important part of driver assistance and autonomous driving. If we can detect brake lights in an image, it will help us predict a vehicle’s future behaviour. It may also be useful for visible light communications. The state of vehicle lights is not detectable by radar, lidar, or other sensors; thus, it is specifically a computer vision problem. Detecting vehicle lights can be challenging due to variable lighting conditions and a wide range of designs by different manufacturers. As usual, deep learning is a promising strategy for dealing with large variations; however, special attention must be paid to solving the problem in real time. This talk presents the state-of-the-art technology and my work to improve the accuracy and frame rate with which vehicle lights can be detected.
- 21 February 2019 (12.00pm, Thursday, WT102) Dr. Sam Hitchman (AgResearch NZ) Topic on “Capturing the Value of New Zealand Red Meat.”
- The meat industry is one of New Zealand’s biggest export earners. Current chemical and physical methods used to quantify meat quality are precise, but also slow, costly, and typically destroy the sample. Objective, non-invasive, and rapid evaluation of true meat quality has been described as the holy grail of meat science. Many individual sensors have been used to determine meat quality parameters; however, no single technology has been shown to fully characterise red meat. We hypothesise that a range of integrated optical sensors can be used to estimate intramuscular fat percentage (IMF%), pH, and tenderness. I will present our preliminary results from optical technologies such as optical coherence tomography (OCT), hyperspectral imaging (HSI), Raman and NIR spectroscopy. Current work involves developing models which use all of the available data to produce the best estimates of red meat quality parameters.
- 07 February 2019 (12.00pm, Thursday, WT102) Prof. Bodo Rosenhahn (Univ. Hannover, Germany) Topic on “Dehyping Neural Networks.”
- Computer vision has been revolutionised by recent developments in machine learning, especially deep learning. Convolutional neural networks with different topologies and strategies, e.g. based on drop-outs, skip connections, autoencoders, and adversarial networks, together with huge amounts of training data or reinforcement learning paradigms, allow for amazing results and applications. Autonomous driving, recommender systems, medical data analysis, Industry 4.0, games, and even the arts are famous fields of application for machine learning. In this talk, I will give an overview of our research at the Institute for Information Processing in Hannover. Starting with an overview of machine learning and its basic paradigms, I will switch over to current challenges and research, with a glimpse of our applications in industrial projects. I will cover several applications, from object detection, semantic segmentation, and autoencoders to human pose estimation, autonomous navigation, and medical data analysis. Additionally, I will reflect on the pros and cons of neural networks and share some basic insights we have gathered over the last years.
- 31 January 2019 (12.00pm, Thursday, WT102) Dr. Xiuhui Wang (China Jiliang University, China) Topic on “Gait Recognition Based on Machine Learning.”
- Gait is one of the promising biometrics for human identification. Although other biometrics, such as the human face and fingerprints, have been widely used in commercial applications, gait recognition is still in its nascent stage. In this talk, we will review the state-of-the-art approaches to gait recognition. Then, two open-access gait databases will be introduced, i.e., the CASIA gait database and the OU-ISIR gait database. Finally, our experimental results will be evaluated.
- 31 January 2019 (12.30pm, Thursday, WT102) Dr. Xia Li (China Jiliang University, China) Topic on “Gradient Coils Design with Regularization Method for Superconducting Magnetic Resonance Imaging (MRI).”
- In this talk, we propose an approach to the design of gradient coils for superconducting magnetic resonance imaging (MRI). The method makes use of Fourier series expansions to describe the continuous current density on the coil surface and then employs the stream function technique to extract the coil wires. During the numerical simulation, a linear equation is constructed and solved using a Tikhonov regularization scheme. Using this method, gradient coils with a high level of linearity are designed. Our contributions are to expand the current densities of the coils into Fourier series analytically as well as to optimize the parameters of the regularization from the plotted curve.
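The core numerical step in this kind of coil design — solving a linear system with Tikhonov regularization — can be sketched as follows. This is a minimal illustration only: the matrix `A`, right-hand side `b`, and regularization parameter `lam` below are hypothetical stand-ins, not the actual coil-design quantities from the talk.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam ||x||^2 via the normal equations.

    Equivalent to x = (A^T A + lam I)^{-1} A^T b.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Tiny ill-conditioned example: without regularization, small perturbations
# of b would be amplified; the penalty term damps the unstable component.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])   # consistent with the exact solution x = [1, 1]
x_reg = tikhonov_solve(A, b, lam=1e-3)
```

The regularized solution stays close to the exact solution while suppressing the nearly singular direction of `A`; in the coil-design setting, choosing `lam` (e.g. from an L-curve plot, as the abstract suggests) trades field linearity against smoothness of the current density.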
- 24 January 2019 (12.00pm, Thursday, WT102) Mr. Mohammad Norouzifard (EEE, AUT) Topic on “Diagnosis of Glaucomatous Optic Neuropathy by an Optimised Deep Learning Model.”
- Early glaucoma diagnosis and treatment are essential to reduce rates of vision loss. Hence, the development of an image-based computer-aided diagnosis system for medical imaging is required as an auxiliary tool to detect glaucoma at an early stage, which would be very beneficial in primary care. Developing such a system for this kind of application is very challenging. A special-purpose model, designed for a robust application, is promising for accurate classification to detect glaucoma with more flexibility. In this research project, I used GPU-based cloud-computing services to obtain a deep learning model with time efficiency and high performance in the training, validation, and test steps.
In summary, this talk proposes an optimised and robust classifier for an efficient, high-performance system targeting early detection of glaucoma, which is in high demand in New Zealand and worldwide to decrease vision loss. Deep transfer learning architectures would be an appropriate solution to combat the lack of data for glaucoma detection. Data collection has been costly, so I will try to develop an accurate model to classify healthy versus glaucoma patients using private and public datasets within an appropriate time. Therefore, deep multi-layer perceptrons, deep convolutional neural networks, and pre-trained models are compared with each other to achieve an optimised model for glaucoma detection.
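The transfer-learning idea mentioned above — reusing a pre-trained network when labelled medical data is scarce — amounts to freezing a feature extractor and training only a new classification head. The sketch below illustrates that pattern under loud assumptions: the “frozen backbone” is a fixed random projection standing in for a real pre-trained CNN, and the data and labels are synthetic placeholders, not retinal images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pre-trained feature extractor: a fixed nonlinear
# projection whose weights are never updated (a real system would use
# features from a pre-trained convolutional network).
W_frozen = rng.normal(size=(8, 4))

def extract_features(x):
    """Frozen backbone: maps raw inputs to feature vectors."""
    return np.tanh(x @ W_frozen)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_head(features, labels, lr=0.5, epochs=500):
    """Train only the new binary classification head (logistic regression)."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(features @ w + b)
        grad = p - labels                  # gradient of the cross-entropy loss
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

# Toy data standing in for image inputs; the binary labels (healthy vs.
# glaucoma in the talk's setting) are synthetic here, defined by a hidden
# linear rule in feature space so the head has something learnable.
X = rng.normal(size=(200, 8))
F = extract_features(X)
y = (F @ rng.normal(size=4) > 0).astype(float)

w, b = train_head(F, y)
accuracy = ((sigmoid(F @ w + b) > 0.5) == y).mean()
```

Only `w` and `b` are updated; the backbone stays fixed, which is why this approach needs far fewer labelled examples than training a deep network from scratch.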