2018 Seminars


For time and location, please see the individual seminar entries below.

  • 19 April 2018 (1.00pm, Thursday, WT515C) Zhe Liu (CS, AUT) Topic on “Image Denoising Based on a CNN Model.”

    In this talk, we will use a CNN model in deep learning for image denoising. Compared with traditional image denoising methods such as average filtering, Wiener filtering and median filtering, the advantage of using a CNN model is that its parameters can be optimized through network training, whereas in traditional image denoising the parameters of the algorithms are fixed and cannot be adjusted during filtering; that is, they lack adaptivity. In this talk, we design and implement a denoising method based on a linear CNN model. Our experimental results show that the proposed CNN model can effectively remove Gaussian noise and significantly improve on the performance of traditional image filtering methods.
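
    The abstract does not include code; the following is a minimal sketch of the idea, assuming a Keras setup in which the convolution widths, depth and training-pair names (noisy_patches, clean_patches) are illustrative rather than the speaker's actual model.

      from tensorflow.keras import layers, models

      def build_linear_denoiser(channels=1):
          # No nonlinear activations, so the stacked convolutions behave as one
          # learned linear filter whose coefficients are optimized by training,
          # in contrast to the fixed parameters of traditional filters.
          inputs = layers.Input(shape=(None, None, channels))
          x = layers.Conv2D(16, 3, padding="same", use_bias=False)(inputs)
          x = layers.Conv2D(16, 3, padding="same", use_bias=False)(x)
          outputs = layers.Conv2D(channels, 3, padding="same", use_bias=False)(x)
          return models.Model(inputs, outputs)

      model = build_linear_denoiser()
      model.compile(optimizer="adam", loss="mse")
      # model.fit(noisy_patches, clean_patches, epochs=..., batch_size=...)  # hypothetical pairs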

  • 12 April 2018 (1.00pm, Thursday, WT133) Chen Xin (CS, AUT) Topic on “Flame Detection and Recognition Using Deep Learning.”

    Deep learning is a novel approach that can be highly efficient and accurate for flame detection. In this work, we use the YOLO and SSD models to implement flame detection and recognition, and we compare them with shallow learning methods. Our contribution is to make use of an optimized YOLO model for flame detection and an SSD model for flame recognition from video frames. We collected the dataset and trained the models using Google's TensorFlow platform.
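
    As an illustration only (not the speakers' pipeline, which is trained with TensorFlow), the sketch below runs a Darknet-format YOLO detector over video frames using OpenCV's dnn module; the configuration, weight and video file names are hypothetical.

      import cv2

      # Hypothetical files; a YOLO model trained on flame images would go here.
      net = cv2.dnn.readNetFromDarknet("yolo_flame.cfg", "yolo_flame.weights")
      output_layers = net.getUnconnectedOutLayersNames()

      cap = cv2.VideoCapture("flame_video.mp4")   # hypothetical input video
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                       swapRB=True, crop=False)
          net.setInput(blob)
          detections = net.forward(output_layers)
          # Each detection row holds box coordinates and class scores; threshold
          # the scores and apply non-maximum suppression to keep flame boxes.
      cap.release()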

  • 22 March 2018 ~ 24 May 2018 (12.00pm, Thursday) Wei Qi Yan (CS, AUT) Topic on “Ten Talks on Deep Learning.”

    Deep learning has taken the dominant position in visual computing since the landmark publication on deep learning in Nature. In these ten talks, we will start from the chronicle of machine learning and showcase deep learning platforms such as TensorFlow and Caffe. We will also feature state-of-the-art deep learning techniques such as CNNs and RNNs, as well as YOLO2 and SSD. We will use image analysis and computer vision as the prominent applications of deep learning; meanwhile, we will detail Markov Random Fields (MRF), Decision Trees and Random Forests. The goal of these talks is to help our research students quickly engage with their projects related to deep learning. Please find the tentative dates and venues of the series of talks below:

    (1) Deep Learning and Machine Intelligence (22 March 2018, WT515C)
    (2) TensorFlow Programming for Deep Learning (29 Mar 2018, WT515C)
    (3) Mathematics of Deep Learning (05 Apr 2018, WT515C)
    (4) From CNN & R-CNN to YOLO3 & SSD (12 Apr 2018, WT133)
    (5) From SqueezeNet to Compressing Networks (19 Apr 2018, WT515C)
    (6) Deep Markov Random Fields for Image Analysis (26 Apr 2018, WT515C)
    (7) Deep Random Forest for Computer Vision (03 May 2018, WT515C)
    (8) FSM, HMM and RNN for Event Computing (10 May 2018, WT515C)
    (9) Reinforcement Learning (17 May 2018, WT515C)
    (10) Ensemble Learning: Combining Multiple Learners (24 May 2018, WT515C)

  • 15 March 2018 (12pm, Thursday, WT121) Dr. Sergey Zuev and Dr. Hongmou Zhang (DLR Berlin, Germany) Topic on “IPS – a vision-aided navigation system for localization and 3D-mapping.”

    Ego localization is an important prerequisite for several scientific, commercial, and statutory tasks. Only by knowing one’s own position can guidance be provided, inspections be executed and autonomous vehicles be operated. Localization becomes challenging if satellite-based navigation systems are not available or the data quality is not sufficient. To overcome this problem, a team at the German Aerospace Center (DLR) developed a multi-sensor system modelled on the human head and its navigation sensors – the eyes and the vestibular system. This system is called the integrated positioning system (IPS) and contains a stereo camera and an inertial measurement unit for determining the ego pose in six degrees of freedom in a local coordinate system. Additionally, the system provides high-resolution 3D point clouds by using the stereo camera. IPS is able to operate in real time and can be applied to indoor and outdoor scenarios without any external reference or prior knowledge. The system is intended for applications where GPS-based localization is missing or disturbed. The main application fields are the inspection of technical facilities, especially underground mining areas, forestry inventory, and automotive support for autonomous driving. In this talk, the system and its key hardware and software components will be introduced. The main issues during the development of such complex multi-sensor measurement systems are identified and discussed, and the performance of this technology is demonstrated. The development team started from scratch and is now transferring this technology into a commercial product.

  • 08 March 2018 (12pm, Thursday, WT515C) Wei Qi Yan (CS, AUT) Topic on “Secret Sharing and Currency Security.”

    Secret sharing refers to methods for distributing a secret among a group of participants. The secret can be reconstructed only when a sufficient number of secret shares are combined together; individual shares are of no use on their own. Visual cryptography (VC), as a case of visual secret sharing, provides a very powerful technique by which one binary image can be split into two or more black-and-white pieces using secret sharing. In this talk, we will review visual secret sharing and the Chinese Remainder Theorem (CRT), and a raft of VC schemes will be demonstrated. We also consider applying VC to currency security. Our recent research progress in currency security will also be detailed.
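
    As a toy illustration of visual secret sharing (a minimal sketch, not necessarily one of the schemes demonstrated in the talk), the code below splits a small binary image into two (2,2) shares whose stacked combination reveals the secret:

      import numpy as np

      def make_shares(secret, rng=np.random.default_rng()):
          """secret: 2D array of 0/1 pixels (1 = black). Returns two share images."""
          h, w = secret.shape
          share1 = np.zeros((2 * h, 2 * w), dtype=np.uint8)
          share2 = np.zeros_like(share1)
          patterns = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]
          for i in range(h):
              for j in range(w):
                  p = patterns[rng.integers(2)]
                  share1[2*i:2*i+2, 2*j:2*j+2] = p
                  # White pixel: identical blocks; black pixel: complementary blocks.
                  share2[2*i:2*i+2, 2*j:2*j+2] = p if secret[i, j] == 0 else 1 - p
          return share1, share2

      # Stacking the shares (pixel-wise OR) turns black pixels into fully black
      # 2x2 blocks, while white pixels stay half black; each share alone is random.
      s1, s2 = make_shares(np.array([[0, 1], [1, 0]], dtype=np.uint8))
      revealed = np.maximum(s1, s2)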

  • 01 March 2018 (12pm, Thursday, WT515C) Robert Yang (EEE, AUT) Topic on “The use of video to detect and measure pollen on bees entering a hive (2).”

    In the talk on 14 December 2017, the bee detection and tracking model was introduced. This time, we will explain the details of the pollen detection and measurement model. Individual bee images were collected from 400 frames of a bee monitoring video. Image moments are used to analyse the individual bee blob images and remove the main body of each bee from the images. Then, colour thresholds are chosen for detecting the pollen colours (orange, yellow and white). After that, four pollen blob features are identified to distinguish pollen from non-pollen blobs. In this step, a receiver operating characteristic (ROC) analysis is used to find optimal thresholds on these features for pollen discrimination. Finally, the bee tracking model and the pollen measurement model are combined to measure the number of bees carrying pollen in the video.
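
    As an illustration of picking a threshold from an ROC analysis (a sketch on synthetic data, with Youden's J statistic assumed as the selection criterion), the code below returns a cut-off for one blob feature:

      import numpy as np
      from sklearn.metrics import roc_curve

      def optimal_threshold(scores, labels):
          """labels: 1 = pollen blob, 0 = non-pollen blob; scores: one blob feature."""
          fpr, tpr, thresholds = roc_curve(labels, scores)
          j = tpr - fpr                      # Youden's J statistic at each cut-off
          return thresholds[np.argmax(j)]

      # Synthetic feature values standing in for one of the four blob features.
      rng = np.random.default_rng(0)
      labels = np.concatenate([np.ones(100), np.zeros(100)])
      scores = np.concatenate([rng.normal(1.0, 0.3, 100), rng.normal(0.4, 0.3, 100)])
      print("chosen threshold:", optimal_threshold(scores, labels))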

  • 22 February 2018 (12pm, Thursday, WT515C) Ling Ding (Wuhan University, China) Topic on “Traffic-related sign reading from a moving vehicle.”

    A traffic sign is usually represented in the form of graphics or symbols. A moving, automated vehicle on a road is expected to read traffic signs through its mounted cameras. In our advanced driver assistance system (ADAS), we first detect the location of a visual sign in the captured video frames and confirm the candidate areas, referred to as regions of interest (ROIs) or maximally stable extremal regions (MSERs), in the frames; then, the captured video frames are transformed by inverse perspective mapping (IPM) into bird’s-eye-view images, which are sorted into text signs or symbols. These road markings are recognised using histogram of oriented gradients (HOG) features and support vector machines (SVMs). The proposed method is validated using datasets from Wuhan University; both the recognition rate and the recognition speed are improved compared with previous results.
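
    As an illustration of the inverse perspective mapping step only (a sketch with made-up corner points rather than the authors’ camera calibration), the code below warps a frame to a bird’s-eye view with OpenCV:

      import cv2
      import numpy as np

      # A blank image stands in for a captured road-facing video frame.
      h, w = 480, 640
      frame = np.zeros((h, w, 3), dtype=np.uint8)

      # Four points on the road plane and their bird's-eye targets (illustrative
      # values; in practice they come from the camera calibration).
      src = np.float32([[w*0.45, h*0.65], [w*0.55, h*0.65], [w*0.90, h*0.95], [w*0.10, h*0.95]])
      dst = np.float32([[w*0.25, 0], [w*0.75, 0], [w*0.75, h], [w*0.25, h]])

      H = cv2.getPerspectiveTransform(src, dst)
      birds_eye = cv2.warpPerspective(frame, H, (w, h))
      # Road markings in birds_eye can then be described with HOG features and
      # classified with an SVM, as outlined in the abstract.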

  • 15 February 2018 (12pm, Thursday, WT515C) Martin Knoche (N3T, NZ) Topic on “Computer vision testing for driver-assisted / autonomous trucks.”

    This talk will provide a brief overview of heavy commercial vehicle automation, why it is more relevant than autonomous cars, and its key business, technical and ethical challenges. The N3T approach to self-driving truck testing will be explained. This includes Infrastructure-as-a-Service (IaaS), Testing-as-a-Service (TaaS), and developing testing tools such as on-truck telemetry, real-time visualization and simulation, as well as V2X (vehicle-to-cloud/infrastructure/vehicle) communication. After answering these questions, N3T is interested in talking to audience members with the curiosity, drive and skills to help develop computer vision solutions, data analytics or deep learning for testing leading-edge computer vision technology for automotive customers. Experience with software development in Python, OpenCV, OpenDRIVE, TensorFlow or similar is of interest. You are welcome to join a lively discussion on self-driving vehicles and why NZ is the perfect place to test the latest vision technology.

  • 08 February 2018 (12pm, Thursday, WT515C) Amita Dhiman (EEE, AUT) Topic on “Identifying and analyzing road surface distress.”

    Identification of road distress is important both for avoiding traffic accidents and for better driving comfort. The reported research aims at the automatic identification of distress present on the road surface, using stereo vision and a deep learning network to achieve high accuracy and time efficiency. Road surface distress detection differs from other object detection in that the “object” of interest generally lies below the surface of the road and has an irregular shape. To make use of this, road plane modelling has been performed in image-disparity space, without back-projecting the disparity image into 3D space. Using this method, potholes could be detected, but other minor cracks or shadows were also picked up, so a more accurate and robust road distress detector is required. In this research project, stereo vision is used to provide depth information, i.e. to what extent there is road distress. A convolutional neural network is planned to be used in conjunction with this to help identify road distress. The final goal of this research is to track detected road distress across multiple frames, also using visual odometry. This project also aims at providing a labelled dataset for testing road distress techniques, needed for training the neural network used. Such a dataset is expected to assist academic researchers and civil engineers in their work in this field.
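
    As an illustration of road plane modelling in image-disparity space (a minimal sketch on a synthetic disparity map, not the author's pipeline; a plain least-squares fit is used where a more robust RANSAC fit would normally be preferred), the code below fits a plane d = a*u + b*v + c and flags pixels falling well below it:

      import numpy as np

      h, w = 240, 320
      v, u = np.mgrid[0:h, 0:w].astype(np.float32)
      disparity = 0.2 * v + 5.0                      # synthetic ground plane
      disparity[150:170, 140:180] -= 4.0             # synthetic pothole (further away)

      A = np.column_stack([u.ravel(), v.ravel(), np.ones(h * w)])
      coeffs, *_ = np.linalg.lstsq(A, disparity.ravel(), rcond=None)   # plane fit
      residual = disparity - (coeffs[0] * u + coeffs[1] * v + coeffs[2])

      candidate_distress = residual < -1.0           # threshold is illustrative
      print(candidate_distress.sum(), "pixels flagged as possible distress")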

  • 31 January 2018 (12pm, Wednesday, WT121) Tiejun Huang (CS, PKU China) Topic on “Visual information processing – from video to retina.”

    Visual perception is a cornerstone of human and machine intelligence; however, the conventional frame-by-frame videos employed in computer vision systems are totally different from the spike trains carried by the visual fibres from the biological retina to the human brain. This talk will give the background and challenges of visual big data processing; then, our work on simulating the neural circuits of the primate retina will be introduced. A new sensor chip has been designed based on this spiking representation, which can potentially be used for machine vision applications including autonomous driving and robot perception.

  • 25 January 2018 (12pm, Thursday, WT515C) Solmaz Mansouri (EEE, AUT) Topic on “A novel approach for cuff‐less and continuous blood pressure monitoring.”

    The purpose of this study is to develop a novel method to improve the accuracy of cuff-less and continuous blood pressure (BP) measurement. High BP, or hypertension, is the world's biggest killer and a common risk factor for most cardiovascular diseases. Developing a continuous BP monitoring technique is essential for clinicians to improve the rates of prevention, detection, and ideal treatment of hypertension and related diseases. Catheterization, oscillometry, auscultation, volume clamping and tonometry are the main methods available for BP measurement. Nevertheless, they are not suitable for cuff-less and continuous BP monitoring.

    The pulse transit time (PTT) method is a promising technique that is employed in this study. The PTT is defined as the time that a pulse wave takes to travel through the length of the cardiovascular system. It can be calculated as the time interval between the proximal and distal waveforms. Although many PTT-based methods have been proposed recently, none of them has been clinically adopted, and there is room for more research to improve the accuracy and acceptability of such methods.

    One of the issues associated with PTT-based methods is the need for calibration of the BP measurement. Although different adaptive algorithms have been proposed to solve the calibration problem, a simple and accurate calibration technique has not yet been suggested. In addition to improving the accuracy of PTT-based methods for clinical use, this study aims to propose a method to calibrate BP, or to find a way to estimate BP without calibration.

    In this presentation, after reviewing the most widely used cuff-less continuous BP measurement methods and their limitations and challenges, I will discuss my plans and directions for this study.
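
    As a toy illustration of extracting a PTT value from two pulse waveforms (synthetic signals and a simple cross-correlation estimate, not the data or method of this study), see the sketch below:

      import numpy as np

      fs = 500.0                                   # sampling rate in Hz (assumed)
      t = np.arange(0, 5, 1 / fs)

      def pulse_train(delay_s):
          # Gaussian-shaped pulses at 1 Hz, shifted by delay_s seconds.
          return np.exp(-(((t - delay_s) % 1.0) - 0.2) ** 2 / 0.005)

      proximal = pulse_train(0.00)
      distal = pulse_train(0.15)                   # arrives 150 ms later

      xcorr = np.correlate(distal - distal.mean(), proximal - proximal.mean(), mode="full")
      lags = np.arange(-len(t) + 1, len(t))
      ptt = lags[np.argmax(xcorr)] / fs            # ~0.15 s for this example
      print(f"Estimated PTT: {ptt * 1000:.0f} ms")
      # A calibrated regression model would then map PTT to a BP estimate.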

  • 18 January 2018 (12pm, Thursday, WT515C) Subhash Chand (EEE, AUT) Topic on “Analysis of coastline change along the north-east coast of Viti Levu using multi-temporal & multi-scale remotely sensed imagery & GIS.”

    This research, conducted on a 25 km stretch of coastline, focuses on semi- and fully automatic change detection techniques. The study categorizes the movement of the coastline along the north-east coast of Viti Levu in Fiji. The coastline was categorized as eroding or prograding using two different techniques and two data sources: aerial photographs and satellite imagery. The first technique deployed was a pixel-based change detection algorithm applied to multi-temporal, multi-scale, geometrically corrected remotely sensed imagery. Coastline changes were observed over a period of 23 years (1991-2014); the results revealed areas of erosion and accretion across the entire study area. The second technique uses simple mathematical models, the End Point Rate (EPR) and Net Shoreline Movement (NSM), to calculate the rate of change of this coastline over a period of 31 years (1983-2014), based on geometrically corrected and orthorectified high-resolution aerial photographs and satellite imagery. Although two different techniques were used in this research, the results are consistent. The orthorectified images used in this research helped reduce errors and made features easily identifiable for the extraction of vector coastlines. To confirm the changes derived from the desktop analysis, a ground-truthing exercise was conducted along this coastline to compare them with what has actually occurred on the ground; the changes on the ground concurred with 95% of the changes from the desktop analysis. Comparing the multi-scale and multi-temporal classified remotely sensed imagery showed that areas built up with jetties were visible on the change map as prograding coastline. The average rate of erosion in the area is -0.35 m/yr and the average rate of progradation is +0.41 m/yr. In areas lined with large strips of mangrove, the rate of accretion is higher than the rate of erosion. These figures could be maintained if developments are conducted sustainably, removing and clearing only the necessary portions for development and committing developers to maintain the health of the coastal environment.

    Changes in this area were trivial according to the statistical results, but if long-term changes are considered, the situation may worsen if the intensity of hurricanes increases. Most of the changes in this area were due to developments and the settlement of people. Fortifying the eroding coastline using hard engineering solutions (gabion baskets and sea walls) is not viable, whereas a more practical long-term solution is lining the coastline with vegetation (mangroves), which is self-maintaining and provides long-term coastline protection. This trend was noticed along the Volivoli resort coastline, where mangrove trees were maintained, protecting the coastline from erosion and beautifying the landscape. Therefore, long-term coastal durability relies on maintaining the vegetation along the coastline.

    Finally, this research successfully integrated a Geographic Information System (GIS) with the analysis of remotely sensed data to locate areas of coastal erosion and accretion and to quantify the rate of coastline change.
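
    As a small illustration of how the End Point Rate and Net Shoreline Movement are computed for a single shore-perpendicular transect (hypothetical numbers, not the study's measurements):

      def epr_and_nsm(pos_old_m, year_old, pos_new_m, year_new):
          """Shoreline positions along one transect, in metres (seaward positive)."""
          nsm = pos_new_m - pos_old_m            # Net Shoreline Movement (m)
          epr = nsm / (year_new - year_old)      # End Point Rate (m/yr)
          return epr, nsm

      # Hypothetical transect: shoreline moved 12.4 m landward between 1983 and 2014.
      epr, nsm = epr_and_nsm(pos_old_m=0.0, year_old=1983, pos_new_m=-12.4, year_new=2014)
      print(f"NSM = {nsm:.1f} m, EPR = {epr:.2f} m/yr")   # negative values indicate erosion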