2018 Seminars


For time and location, please see the individual seminar entries below.

  • 29 November 2018 (10.30am, Thursday, WT121) Prof. Mohan Kankanhalli (Dean, School of Computing, National University of Singapore, Singapore) Topic on “Social Interactions and Presentation Analytics.” 
  • Social interactions play an important role in our daily lives: people organize themselves in groups to share views, opinions, and thoughts. In many settings, observing these interactions offers deep insights into on-going events and situations. Manual analysis of such interactions is accurate but tedious. Recent developments in sensors and processing make automated analysis of social interactions possible. This talk focuses on the analysis of social interactions from the social-signal perspective in a multi-sensor setting. The talk starts with our work on an extended F-formation system for robust interaction and interactant detection in an ambient sensor environment. Building upon this work, we study the spatial structure of social interactions in a multiple-wearable-sensor environment. We use a search-based method to reconstruct the social interaction structure given multiple first-person views, where each view contributes to a multi-faceted understanding of the social interaction. The talk ends with our work on “presentation analytics”, a very important sub-class of social interactions. A new multi-sensor analytics framework is proposed using ambient and wearable sensors for substantially improved sensing that allows for presentation self-quantification. We have used deep learning techniques for the analysis. Feedback from presenters shows much potential for the use of such analytics.
  • 22 November 2018 (1.00pm, Thursday, WT102) Dr. Ignazio Gallo (University of Insubria in Varese, Italy) Topic on “CNN for Multi-Modal Fusion on Images and Text.”
  • With the rapid rise of e-commerce, the web has increasingly become multi-modal, making the question of multi-modal strategy ever more important. However, the modalities in a multi-modal approach come from different input sources (text/image, audio/video, etc.) and are often characterized by distinct statistical properties, making it difficult to create a joint representation that uniquely captures the “concept” in real-world applications. Multi-modal approaches based on text and image features are extensively employed in a variety of tasks, including modeling semantic relatedness, compositionality, classification and retrieval. In this talk, convolutional neural networks are applied in a novel multi-modal approach that fuses images and text to improve multi-modal tasks in real-world scenarios; a toy two-branch fusion sketch follows below.
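As a toy illustration of the fusion idea (an assumed two-branch architecture, not the speaker's model), the Keras sketch below reduces an image and its accompanying text to feature vectors, concatenates them, and classifies the pair jointly; the input sizes, vocabulary size and class count are made up.

```python
from tensorflow.keras import layers, Model

# Image branch: a small CNN over 64x64 RGB photos (assumed size).
img_in = layers.Input(shape=(64, 64, 3))
x = layers.Conv2D(32, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

# Text branch: embedded word indices of the accompanying description.
txt_in = layers.Input(shape=(50,))                    # 50 tokens per item (assumed)
t = layers.Embedding(input_dim=10000, output_dim=64)(txt_in)
t = layers.GlobalAveragePooling1D()(t)

# Fusion: concatenate the two modality representations, then classify.
fused = layers.Concatenate()([x, t])
out = layers.Dense(20, activation="softmax")(fused)   # 20 classes (assumed)

model = Model(inputs=[img_in, txt_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```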
  • 22 November 2018 (12.00pm, Thursday, WT102) Zarah Moayed (EEE, AUT) Topic on “Object tracking by fusion of Particle Filter tracking and Deep Learning detection algorithms”
  • Object detection and tracking is one of the fundamental problems in computer vision, especially when multiple categories of objects are concerned. In spite of recent improvements in object detection and classification using deep learning approaches, tracking is still a challenge. In this talk, I discuss the latest research in tracking objects while taking classification results into account. Then I explain a proposed method for multi-object detection and tracking in traffic scenes; a toy particle-filter fusion sketch follows below.
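As a toy sketch of the fusion named in the title: a particle filter propagates particles with a random-walk motion model and re-weights them by a detector's confidence at each particle location. The detector below is a stand-in Gaussian score around a given detection, not a deep network.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.normal(loc=[100.0, 100.0], scale=5.0, size=(N, 2))  # (x, y)

def detector_confidence(points, detection):
    """Stand-in for a CNN detector: confidence decays with distance
    from the detected bounding-box centre."""
    d2 = np.sum((points - detection) ** 2, axis=1)
    return np.exp(-d2 / (2 * 15.0 ** 2))

for detection in [np.array([103.0, 101.0]), np.array([107.0, 103.0])]:
    particles += rng.normal(scale=3.0, size=particles.shape)  # predict step
    w = detector_confidence(particles, detection)             # update step
    w /= w.sum()
    estimate = (particles * w[:, None]).sum(axis=0)           # weighted mean
    particles = particles[rng.choice(N, size=N, p=w)]         # resample
    print("track estimate:", estimate.round(1))
```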
  • 15 November 2018 (12.00pm, Thursday, WT102) Arpita Dawda (EEE, AUT) Topic on “3D Measurement of Reflective Surface”
  • A significant number of techniques for acquiring 3D shapes have been proposed, leading to a wide range of applications such as range sensing, industrial inspection of manufactured parts, reverse engineering (digitization of complex, free-form surfaces), object recognition, entertainment, 3D map building, biometrics, and documentation of cultural artifacts. However, these techniques cannot be directly applied to shiny or reflective surfaces. In this talk, I will discuss and compare two conventional methods, stereo vision and structured lighting, for 3D reconstruction of reflective surfaces (the basic stereo triangulation is sketched below). I will also provide an insight into high-dynamic-range (HDR) 3D shape measurement techniques used for shiny surfaces.
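For reference, the stereo-vision baseline rests on triangulation: in a rectified pair, depth is Z = fB/d for focal length f (pixels), baseline B (metres) and disparity d (pixels). A minimal sketch with made-up numbers:

```python
import numpy as np

f = 700.0   # focal length in pixels (assumed)
B = 0.12    # stereo baseline in metres (assumed)

disparity = np.array([[35.0, 28.0],
                      [14.0,  7.0]])            # toy disparity map (pixels)
depth = f * B / np.maximum(disparity, 1e-6)     # guard against zero disparity
print(depth)                                    # 35 px of disparity -> 2.4 m
```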
  • 08 November 2018 (1.00pm, Thursday, WT102) Robert Le (EEE, AUT) Topic on “A Vision Aid for the Visually Impaired using Commodity Dual-Rear-Camera Smartphones.”
  • Dual (or multiple) rear cameras on hand-held smartphones are believed to be the future of mobile photography. Recently, many such new models have been released (mainly with dual rear cameras: one wide-angle and one telephoto). Some of the notable ones are the Apple iPhone 7 and 8 Plus, iPhone X, Samsung Galaxy S9, LG V30 and Huawei Mate 10. With built-in dual-camera systems, these devices are capable not only of producing better-quality pictures but also of acquiring 3D stereo photos (with depth information collected). Thus, they can capture moments in life with depth, just as our two-eye system does. Thanks to this trend, these phones are getting cheaper while becoming more powerful and complete. In this paper, we describe a system that makes use of commercial dual rear-camera phones, such as the iPhone X, to provide aids for people who are visually impaired. We propose a design in which the phone is placed at the centre of the user's chest and the user wears one or two Bluetooth earphones to listen to the phone's audio outputs. Our system consists of three modules: (1) scene context recognition to audio, (2) 3D stereo reconstruction to audio, and (3) interactive audio/voice controls. In slightly more detail, the wide-angle camera captures live photos that are analysed by a GPS-guided deep learning process to describe the scene in front of the user (module 1). The telephoto camera captures a narrower view, which is stereo-reconstructed with the aid of the wide-angle view to form a depth map (a dense, area-based distance map). The map helps determine the distances to all visible objects so that the user can be notified of critical ones (module 2). This module also makes the phone vibrate when an object is located close enough to the user, e.g. within hand's reach. The user can also query the system by asking various questions and receive automatic voice answers (module 3). In addition, a manual rescue module (module 4) is added for when anything else goes wrong. An example of the vision-to-audio output could be “Overall, likely a corridor, one medium object is 0.5 m away – central left”, or “Overall, city pathway, front cleared”. An audio command input may be “read texts”, whereupon the phone will detect and read all text on the closest object. More details on the design and implementation are described in this paper.
  • 08 November 2018 (12.00pm, Thursday, WT102) Mohammad Norouzifard (EEE, AUT) Topic on “Unsupervised Optic Cup and Optic Disk Segmentation on Fundus Images for Glaucoma Detection by ICICA.”
  • Glaucoma is an eye disease that can lead to vision loss by damaging the optic nerve. Although vision loss can often be prevented with early glaucoma detection, the lack of discernible early symptoms makes diagnosis difficult. Measuring the cup-to-disc ratio (CDR) is a common approach to glaucoma detection: glaucoma manifests as a thinning of the neuroretinal rim area, which raises the CDR value. Clustering and image segmentation can divide fundus images into distinct areas to estimate the optic disc (OD) and the optic cup (OC). This paper presents a robust method that uses the improved chaotic imperialistic competition algorithm (ICICA) to determine the positions of the OD and OC on color fundus images for glaucoma detection. The predicted OD and OC boundaries are then used to estimate the CDR for glaucoma diagnosis (a toy CDR computation is sketched below). The performance of the proposed method was evaluated on the publicly available RIGA dataset. It was found that some of the common problems of the K-means clustering algorithm can be addressed by the proposed method, achieving better results. Moreover, the OC and OD regions can be precisely separated from the color image so that ophthalmologists can measure the OC and OD areas more accurately.
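A minimal sketch of a CDR computation from segmented masks; the toy masks stand in for the ICICA segmentation output, and the 0.6 cut-off is a commonly quoted screening threshold, not a value from the paper.

```python
import numpy as np

disc = np.zeros((100, 100), dtype=bool)
cup = np.zeros((100, 100), dtype=bool)
disc[20:80, 20:80] = True      # toy optic-disc mask
cup[35:65, 35:65] = True       # toy optic-cup mask

def vertical_extent(mask):
    """Height of the mask along the vertical image axis."""
    rows = np.where(mask.any(axis=1))[0]
    return rows.max() - rows.min() + 1

cdr = vertical_extent(cup) / vertical_extent(disc)   # vertical cup-to-disc ratio
print(f"vertical CDR = {cdr:.2f}",
      "-> glaucoma suspect" if cdr > 0.6 else "-> within normal range")
```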
  • 01 November ~ 13 December 2018 (11.00am, Thursday, WT102) Wei Qi Yan (CS, AUT) Topic on “Mathematical Knowledge for Deep Learning.”
  • Many people have difficulty understanding the relevant mathematics when they study deep learning. In this series of talks, we will review the fundamental knowledge of modern mathematics. We will start from mathematical analysis and advanced algebra, then detail mathematics at the postgraduate level, such as functional analysis and finite fields in basic algebra; furthermore, we will introduce relevant knowledge such as tensor algebra, information theory, graphical models and manifolds. Our goal is to assist our postgraduates to complete high-quality theses. The tentative dates and topics include:
    (1) 01 Nov 2018: Information Theory
    (2) 08 Nov 2018: Probabilistic Graphical Models
    (3) 15 Nov 2018: Manifold Theory
    (4) 22 Nov 2018: Tensor Algebra
    (5) 29 Nov 2018: IEEE AVSS 2018
    (6) 06 Dec 2018: Functional Analysis
    (7) 13 Dec 2018: Finite Fields and Basic Algebra
  • 04 October 2018 (12.00pm, Thursday, WT102) Jonas Hilty (EEE, AUT) Topic on “Leaf Shape Analysis” 
  • In this talk I present shape analysis based on digital straight segments (DSS). I use thick DSSs to detect turning points along the contour, and to identify the location of the petiole of leaves with complicated shapes.
  • 27 September 2018 (12.00pm, Thursday, WT102) Amita Dhiman (EEE, AUT) Topic on “Identifying and Analyzing Road Distress”
  • This talk is about identifying potholes using two different techniques: a conventional programming technique and state-of-the-art deep learning. The first technique works by accumulating multiple-frame 3D reconstructions that are properly aligned to a road-centred coordinate system. The second technique uses a model developed with a transfer-learning approach. An introduction to different road-distress datasets will also be provided.
  • 06 & 13 September 2018 (12.00pm, Thursday, WT133) Wei Qi Yan (CS, AUT) Topic on “Advanced Deep Learning”
  • After the series on fundamental deep learning (10 talks) and deeply learning deep learning (6 talks), we will further introduce advanced deep learning (4 talks) during the coming semester break. In this series of talks, we will continue detailing our understanding of excellent work in deep learning, with particular emphasis on contributions and applications that have had significant impact on deep learning research. Our goal is to help our students quickly get involved in our deep learning projects. The tentative topics include:
    (1) Awarded work in deep learning (12.00pm-1.00pm, 06 September 2018)
    (2) Deep learning publications in Nature and Science (1.00pm-2.00pm, 06 September 2018)
    (3) CryptoNets: Brain-like cryptography using deep learning (12.00pm-1.00pm, 13 September 2018)
    (4) Deep learning for time series analysis (1.00pm-2.00pm, 13 September 2018)
  • 14 August 2018 (4.00-5.00pm, Tuesday, Room 201, Bioengineering Institute, the University of Auckland) Prof. Yasushi Yagi (Deputy Vice-Chancellor, Osaka University). Topic on “Gait Video Analysis for Person Authentication”
  • We have been studying human gait analysis for more than 10 years. Because everyone’s walking style is unique, human gait is a prime candidate for person authentication tasks. Our gait analysis technologies are now being used in real criminal investigations. We have constructed large-scale gait datasets and proposed several methods of gait analysis. The appearance of gait patterns is influenced by changes in viewpoint, walking direction, speed, clothes, and shoes. To overcome these problems, we have proposed several approaches using a part-based method, an appearance-based view transformation model, a periodic temporal super-resolution method, a manifold-based method and score-level fusion. In this talk, I will briefly give an overview of our gait analysis technologies and show the efficiency of our approaches by evaluating them on our large gait database.
  • 09 August 2018 (12.00pm, Thursday, WT133) Dr. TAKENAGA Tomomi (The University of Tokyo Hospital). Topic on “Detection of Nodular Liver Lesions in Gd-EOB-DTPA-enhanced MRI Images with 4D Convolutional Neural Network Technique”
  • Gd-EOB-DTPA-enhanced MRI tends to show higher diagnostic accuracy than other modalities. However, in the diagnosis of nodular liver lesions, Gd-EOB-DTPA-enhanced MR generates a huge number of images over five time phases. Therefore, in this study we developed a computer-assisted detection scheme for nodular liver lesions in Gd-EOB-DTPA-enhanced MRI with a four-dimensional convolutional neural network (4D CNN) technique. Material and medical imaging: 184 contrast-enhanced MR images including 291 metastatic liver tumors, 274 hepatocellular carcinomas and 10 hemangiomas were used in this study. Each case includes 1-29 nodular liver lesions, and the effective diameter of these lesions ranged from 5.5 mm to 97.7 mm. Firstly, the target image was the hepatocellular-phase image, and the remaining four phase images were non-rigidly registered to the target image using the DROP 3D registration software. Secondly, the liver region was extracted by a 4D fully convolutional residual network. Thirdly, we detected nodular liver lesion candidate voxels by the 4D CNN. Among the 184 cases, 100 were used for CNN training, 42 for CNN validation and the remaining 42 for evaluation. Finally, we determined nodular liver lesion candidates using the probability of the local maximum in the nodular liver lesion. The sensitivity for detection was 70.0% with 13.3 false positives per case. Our proposed method has potential for the detection of nodular liver lesions in Gd-EOB-DTPA-enhanced MRI. More work is required to modify our model and improve detection accuracy.
  • 07 June 2018 ~ 12 July 2018 (12.00pm, Thursday) Dr. Wei Qi Yan (CS, AUT) Topic on “Deeply Learn Deep Learning.” 
  • Deep learning has taken the dominant position in visual computing since the publications in Nature. Having completed the fundamental series, we will take the study of deep learning deeper. The following six talks will focus on the newest knowledge of deep neural networks and generative graphical models. The goal of these talks is to assist our research students to quickly engage in their research projects related to deep learning. The tentative dates and venues of the series of talks are listed below.
    (1) MATLAB 2018 Toolboxes for Deep Learning (07 June 2018, WT 407)
    (2) Autoencoder and Generative Networks (14 June 2018, WT 407)
    (3) Transfer Learning and Representation Learning (21 June 2018, WT 121)
    (4) Deep Boltzmann Machine and Deep Belief Networks (28 June 2018, WT 133)
    (5) Representation and Inference of Graphical Models (05 July 2018, WT 133)
    (6) Deep Learning for Vision Intelligence (12 July 2018, WT 133)
  • 24 May 2018 (1.00pm, Thursday, WT515C) Qian Zhang (CS, AUT) Topic on “Currency Recognition Using Deep Learning.” 
  • In this talk, we will introduce our experimental results in currency recognition, obtained using the deep learning model SSD and the dataset we collected. In the experiments, we selected currencies of four denominations and labelled the positive and negative samples; after training, we were able to accurately identify the location and position of the currency in the tests, which is much better than our previous results based on the YOLO model.
  • 17 May 2018 (1.00pm, Thursday, WT515C) Yiting Shen (CS, AUT) Topic on “Blind Spot Monitoring Using Deep Learning.” 
  • As is well known, the blind spots of a car present potential hazards, especially for long and heavy vehicles such as buses. If a bus is fitted with several cameras pointed at its blind-spot areas, the driver can observe in real time what happens there. In this talk, we will introduce the reasons why we use intelligent surveillance systems to monitor the blind spots of cars. Then, blind-spot monitoring and vehicle prediction using RNNs (LSTM) in deep learning will be detailed; a toy LSTM prediction sketch follows below.
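A minimal sketch of the prediction step under assumed shapes: an LSTM reads a short track of (x, y) positions and regresses the next position. This is an illustration of the technique, not the presented model.

```python
import numpy as np
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(10, 2))       # 10 observed (x, y) positions per track
h = layers.LSTM(32)(inp)                # summarize the motion history
out = layers.Dense(2)(h)                # regress the next (x, y)
model = Model(inp, out)
model.compile(optimizer="adam", loss="mse")

tracks = np.random.rand(4, 10, 2).astype("float32")   # toy tracks
print(model.predict(tracks, verbose=0).shape)         # (4, 2)
```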
  • 10 May 2018 (1.00pm, Thursday, WT515C) Hui Wang (CS, AUT) Topic on “Fast Face Recognition Using Deep Neural Networks.” Face recognition is an important application in video surveillance and computer vision. However, conventional face recognition algorithms are susceptible to many conditions, such as lighting, occlusion, viewing angle or camera rotation. Face recognition based on deep learning can greatly improve recognition speed and tolerate external interference. In this talk, we use deep neural networks for fast face recognition; the networks have the merits of end-to-end learning, sparse connections and weight sharing. In order to identify people, face images are classified based on the location of the detected face. The ultimate goal of this project is to implement face recognition for various faces at different distances and confidence levels. The use of deep neural networks makes face recognition robust to external disturbances.
  • 19 April 2018 (1.00pm, Thursday, WT515C) Zhe Liu (CS, AUT) Topic on “Image Denoising Based on A CNN Model.”
  • In this talk, we will use a CNN model in deep learning for image denoising. Compared with traditional image denoising methods such as average filtering, Wiener filtering and median filtering, the advantage of using a CNN model is that its parameters can be optimized through network training, whereas in traditional image denoising the parameters of the algorithms are fixed and cannot be adjusted during filtering; that is, they lack adaptivity. In this talk, we design and implement a denoising method based on a linear CNN model (a toy trainable denoiser is sketched below). Our experimental results show that the proposed CNN model can effectively remove Gaussian noise and significantly improve on the performance of traditional image filtering methods.
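A minimal sketch of such a trainable denoiser (layer count and widths are assumptions): a few convolutional layers are fitted to map noisy images back to clean ones, so the filter parameters are learned rather than fixed.

```python
import numpy as np
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(None, None, 1))
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
out = layers.Conv2D(1, 3, padding="same")(x)     # reconstructed image
model = Model(inp, out)
model.compile(optimizer="adam", loss="mse")      # parameters optimized by training

# Toy training pairs: clean images plus additive Gaussian noise (sigma = 0.1).
clean = np.random.rand(16, 64, 64, 1).astype("float32")
noisy = clean + np.random.normal(0, 0.1, clean.shape).astype("float32")
model.fit(noisy, clean, epochs=1, verbose=0)
```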
  • 12 April 2018 (1.00pm, Thursday, WT133) Chen Xin (CS, AUT) Topic on “Flame Detection and Recognition Using Deep Learning.”
  • Deep learning offers novel methods that can be much more efficient and accurate for flame detection. In this paper, we use the YOLO and SSD models to implement flame detection and recognition, and we compare them with shallow learning methods. The contribution of this paper is to make use of an optimized YOLO model for flame detection and an SSD model for flame recognition from video frames. We collected the dataset and trained the models using Google's TensorFlow platform.
  • 22 March 2018 ~ 24 May 2018 (12.00pm, Thursday) Wei Qi Yan (CS, AUT) Topic on “Ten Talks on Deep Learning.”
  • Deep learning has taken the dominant position in visual computing since the publications in Nature. In these ten talks, we will start from the chronicle of machine learning and showcase deep learning platforms such as TensorFlow and Caffe. We will also feature state-of-the-art deep learning technologies such as CNN and RNN as well as YOLO2 and SSD. We will use image analysis and computer vision as the prominent applications of deep learning; meanwhile, we will detail Markov Random Fields (MRF) and Decision Trees as well as Random Forests. The goal of these talks is to assist our research students to quickly engage in their projects related to deep learning. Please find the tentative dates and venues of the series of talks below:
    (1) Deep Learning and Machine Intelligence (22 March 2018, WT515C)
    (2) TensorFlow Programming for Deep Learning (29 Mar 2018, WT515C)
    (3) Mathematics for Deep Learning (05 Apr 2018, WT515C)
    (4) From CNN & R-CNN to YOLO3 & SSD (12 Apr 2018, WT133)
    (5) From SqueezeNet to Compressing Networks (19 Apr 2018, WT515C)
    (6) Deep Markov Random Fields for Image Analysis (26 Apr 2018, WT515C)
    (7) Deep Random Forest for Computer Vision (03 May 2018, WT515C)
    (8) FSM, HMM and RNN for Event Computing (10 May 2018, WT515C)
    (9) Reinforcement Learning (17 May 2018, WT515C)
    (10) Ensemble Learning: Combining Multiple Learners (24 May 2018, WT515C)
  • 15 March 2018 (12pm, Thursday, WT 121) Dr. Sergey Zuev and Dr. Hongmou Zhang (DLR Berlin, Germany) Topic on “IPS – a vision aided navigation system for localization and 3D-mapping”. 
  • Ego-localization is an important prerequisite for several scientific, commercial, and statutory tasks. Only by knowing one’s own position can guidance be provided, inspections be executed and autonomous vehicles be operated. Localization becomes challenging if satellite-based navigation systems are not available or data quality is not sufficient. To overcome this problem, a team at the German Aerospace Center (DLR) developed a multi-sensor system modelled on the human head and its navigation sensors: the eyes and the vestibular system. This system is called the integrated positioning system (IPS) and contains a stereo camera and an inertial measurement unit for determining ego pose in six degrees of freedom in a local coordinate system. Additionally, the system provides high-resolution 3D point clouds using the stereo cameras. IPS is able to operate in real time and can be applied to indoor and outdoor scenarios without any external reference or prior knowledge. The system is intended for applications where GPS localization is unavailable or disturbed. The main application fields are inspection of technical facilities, especially underground mining areas, forestry inventory, and automotive support for autonomous driving. In this talk, the system and its key hardware and software components will be introduced. The main issues in the development of such complex multi-sensor measurement systems are identified and discussed, and the performance of this technology is demonstrated. The development team started from scratch and is now transferring this technology into a commercial product.
  • 08 March 2018 (12pm, Thursday, WT515C) Wei Qi Yan (CS, AUT) Topic on “Secret Sharing and Currency Security.”
  • Secret sharing refers to methods for distributing a secret among a group of participants. The secret can be reconstructed only when a sufficient number of secret shares are combined; individual shares are of no use on their own. Visual cryptography (VC), a case of visual secret sharing, provides a very powerful technique by which one binary image can be split into two or more black-and-white pieces using secret sharing. In this talk, we will review visual secret sharing and the Chinese Remainder Theorem (CRT), and a raft of VC schemes will be demonstrated (a toy (2,2) scheme is sketched below). We also consider applying VC to currency security. Our recent research progress in currency security will also be detailed.
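A minimal sketch of the classic (2,2) scheme in the Naor-Shamir style, as one concrete example of VC: each secret pixel expands to a 2x2 block; white pixels get identical random blocks in the two shares, black pixels get complementary blocks, and stacking (OR-ing) the shares reveals the secret.

```python
import numpy as np

rng = np.random.default_rng(1)
patterns = [np.array([[1, 0], [0, 1]]),
            np.array([[0, 1], [1, 0]])]          # 2x2 subpixel patterns, 1 = black

secret = rng.integers(0, 2, size=(8, 8))         # toy binary secret image
share1 = np.zeros((16, 16), dtype=int)
share2 = np.zeros((16, 16), dtype=int)

for i in range(8):
    for j in range(8):
        p = patterns[rng.integers(2)]            # random pattern per pixel
        share1[2*i:2*i+2, 2*j:2*j+2] = p
        # White secret pixel: same block; black pixel: complementary block.
        share2[2*i:2*i+2, 2*j:2*j+2] = p if secret[i, j] == 0 else 1 - p

stacked = share1 | share2                        # physically: overlay transparencies
# Black pixels become fully black 2x2 blocks; white ones stay half black.
assert all(stacked[2*i:2*i+2, 2*j:2*j+2].sum() == 4
           for i in range(8) for j in range(8) if secret[i, j] == 1)
```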
  • 01 March 2018 (12pm, Thursday, WT515C) Robert Yang (EEE, AUT) Topic on “The use of video to detect and measure pollen on bees entering a hive (2).” In the talk on 14th Dec. 2017, the bee detection and tracking model was introduced. This time, we will explain the details of the pollen detection and measurement model. Individual bee images were collected from 400 frames of a bee monitoring video. Image moments are used to analyse the individual bee blob images and remove the bee's main body from each image. Then, color thresholds are chosen for detecting the pollen colors (orange, yellow and white); a toy thresholding sketch follows below. After that, four pollen blob features are identified to distinguish pollen from non-pollen blobs. In this step, the receiver operating characteristic (ROC) algorithm is utilized to analyse the features and find optimal thresholds for pollen discrimination. Finally, the bee tracking model and the pollen measurement model are combined to measure the number of bees carrying pollen in the video.
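A minimal sketch of the color-threshold step with OpenCV; the HSV ranges below are illustrative guesses, not the thresholds tuned via ROC analysis in the talk, and the input file name is hypothetical.

```python
import cv2
import numpy as np

frame = cv2.imread("bee_blob.png")               # hypothetical bee-blob image
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# (lower, upper) HSV bounds per pollen color -- assumed values.
ranges = {
    "orange": ((5, 120, 120), (20, 255, 255)),
    "yellow": ((21, 120, 120), (35, 255, 255)),
    "white":  ((0, 0, 200), (179, 40, 255)),
}

pollen_mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
for name, (lo, hi) in ranges.items():
    mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    pollen_mask = cv2.bitwise_or(pollen_mask, mask)

print("pollen-colored pixels:", int(cv2.countNonZero(pollen_mask)))
```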
  • 22 February 2018 (12pm, Thursday, WT515C) Ling Ding (Wuhan University, China) Topic on “Traffic-related sign reading from a moving vehicle.”
  • A traffic sign is usually represented in the form of graphics or symbols. A moving automated vehicle on a road is expected to read traffic signs through mounted cameras. In our ADAS system, we first detect the location of a visual sign in captured video frames and confirm the candidate areas, found as regions of interest (ROIs) via maximally stable extremal regions (MSERs); then, the captured video frames are transformed by inverse perspective mapping (IPM) into bird's-eye view images, which are sorted into text signs or symbols. These road markings are recognized using histogram of oriented gradients (HOG) features and support vector machines (SVMs); a toy HOG + SVM sketch follows below. The proposed method is validated using datasets from Wuhan University; both the recognition rate and recognition speed are currently improved compared to previous results.
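A minimal sketch of the HOG + SVM classification step; the patches are synthetic stand-ins for bird's-eye-view road markings, and the parameters are assumptions rather than the Wuhan pipeline's settings.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

rng = np.random.default_rng(0)
patches = rng.random((40, 64, 64))        # stand-in 64x64 grey patches
labels = rng.integers(0, 2, size=40)      # 0 = symbol marking, 1 = text marking

feats = np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2)) for p in patches])

clf = SVC(kernel="linear").fit(feats, labels)     # train the marking classifier
print("predicted class of first patch:", clf.predict(feats[:1])[0])
```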
  • 15 February 2018 (12pm, Thursday, WT515C) Martin Knoche (N3T, NZ) Topic on “Computer vision testing for driver assisted / autonomous trucks.” This talk will provide a brief overview of heavy commercial vehicle automation, why it is more relevant than autonomous cars, and its key business, technical and ethical challenges. The N3T approach to self-driving truck testing will be explained. This includes Infrastructure-as-a-Service (IaaS), Testing-as-a-Service (TaaS), and developing testing tools such as on-truck telemetry, real-time visualization and simulation, as well as V2x (vehicle-to-cloud/infrastructure/vehicle) communication. After answering these questions, N3T is interested in talking to audience members with the curiosity, drive and skills to assist with developing computer vision solutions, data analytics or AI deep learning for testing leading-edge computer vision technology for automotive customers. Experience with software development in Python, OpenCV, OpenDrive, TensorFlow or similar is of interest. All are welcome to join a lively discussion on self-driving vehicles and why NZ is the perfect place to test the latest vision technology.
  • 08 February 2018 (12pm, Thursday, WT515C) Amita Dhiman (EEE, AUT) Topic on “Identifying and analyzing road surface distress.”
  • Identification of road distress is important both for avoiding traffic accidents and for better driving comfort. The reported research aims at automatic identification of distress present on the road surface using stereo vision and a deep learning network, for high accuracy and time efficiency. Road surface distress detection differs from other object detection in that the “object” of interest is generally below the surface of the road and of irregular shape. To exploit this, road plane modelling has been performed in image-disparity space, without back-projecting a disparity image into 3D space (a toy disparity-space model is sketched below). Using this method, potholes were detected, but so were minor cracks and shadows, so a more accurate and robust road distress detector is required. In this research project, stereo vision is used to provide input information about depth, i.e. to what extent there is distress on the road. A convolutional neural network is planned to be used in conjunction with this to help identify road distress. The final goal of this research is to track detected road distress across multiple frames, also using visual odometry. This project also aims at providing a labelled dataset for testing road distress techniques, needed for training the neural network used. Such a set is expected to assist academic researchers and civil engineers in this field.
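A minimal sketch of road-plane modelling in image-disparity space, with synthetic numbers: for a planar road the disparity is roughly linear in the image row v, so a line d(v) = av + b is fitted to per-row median disparities, and pixels whose disparity falls well below the fit (i.e. below the road surface) are flagged.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 120, 160
v = np.arange(H)

disp = 0.5 * v[:, None] + 5 + rng.normal(0, 0.2, (H, W))  # synthetic road disparity
disp[90:100, 60:80] -= 4.0                                 # a pothole-like dip

row_med = np.median(disp, axis=1)
a, b = np.polyfit(v, row_med, 1)          # fit the road plane in disparity space
expected = a * v[:, None] + b

pothole_mask = (expected - disp) > 2.0    # pixels well below the road surface
print("flagged pixels:", int(pothole_mask.sum()))
```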
  • 31 January 2018 (12pm, Wednesday, WT121) Tiejun Huang (CS, PKU China) Topic on “Visual information processing – from video to retina.” Visual perception is a cornerstone of human and machine intelligence; however, the conventional frame-by-frame videos employed in computer vision systems are totally different from the spike trains that travel along the visual fibres from the biological retina to the human brain. This talk will give the background and challenges of visual big data processing; then, our work on simulating the neural circuits of the primate retina will be introduced. A new sensor chip has been designed based on this spiking representation, which can potentially be used for machine vision applications including autonomous driving and robot perception.
  • 25 January 2018 (12pm, Thursday, WT515C) Solmaz Mansouri (EEE, AUT) Topic on “A novel approach for cuff‐less and continuous blood pressure monitoring.”
  • The purpose of this study is to develop a novel method to improve the accuracy of cuff-less and continuous blood pressure (BP) measurement. High BP, or hypertension, is the world’s biggest killer and a common risk factor for most cardiovascular diseases. Developing a continuous BP monitoring technique is essential for clinicians to improve the rates of prevention, detection and ideal treatment of hypertension and related diseases. Catheterization, oscillometry, auscultation, volume clamping and tonometry are the main methods available for BP measurement; nevertheless, they are not suitable for cuff-less and continuous BP monitoring. The Pulse Transit Time (PTT) method is a promising technique that was employed for the purpose of this study. The PTT is defined as the time that a pulse wave takes to travel through the length of the cardiovascular system; it can be calculated as the time interval between the proximal and distal waveforms (a toy calculation is sketched below). Although many PTT-based methods have been proposed recently, none of them has been clinically adopted, and there is room for more research to improve the accuracy and acceptability of such methods. One of the issues associated with PTT-based methods is the need to calibrate the BP measurement. Although different adaptive algorithms have been proposed to solve the calibration problem, a simple and accurate calibration technique has not yet been suggested. In addition to improving the accuracy of PTT-based methods for clinical use, this study aims to propose a method to calibrate BP, or to find a way to estimate BP without calibration. In this presentation, after reviewing the most widely used cuff-less continuous BP measurement methods and their limitations and challenges, I will discuss my plans and directions for this study.
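A minimal sketch of the PTT idea with made-up numbers: the transit time is taken as the delay between the ECG R-peak and the foot of the corresponding pulse wave at a distal PPG site (strictly, this delay is the pulse arrival time), and a per-subject calibrated model maps it to BP. The inverse form BP = a/PTT + b is just one simple model from the literature, and the coefficients below are purely illustrative.

```python
import numpy as np

r_peaks = np.array([0.80, 1.62, 2.45])    # ECG R-peak times (s), toy values
ppg_feet = np.array([1.02, 1.85, 2.66])   # distal PPG foot times (s), toy values

ptt = ppg_feet - r_peaks                  # pulse transit times (s)

a, b = 25.0, 10.0                         # calibration constants (assumed)
sbp = a / ptt + b                         # systolic BP estimates (mmHg)
print(ptt.round(2), sbp.round(1))
```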
  • 18 January 2018 (12pm, Thursday, WT515C) Subhash Chand (EEE, AUT) Topic on “Analysis of coastline change along the north east coast of Viti Levu using multi-temporal & multi-scale remote sensed imagery & GIS.”
  • This research, conducted on a 25 km coastline, focuses on semi- and fully-automatic change detection techniques. The study categorizes the movement of the coastline along the north east coast of Viti Levu in Fiji. The coastline was categorized as eroding or prograding using two different techniques and two data sources: aerial photographs and satellite imagery. The first technique deployed was a pixel-based change detection algorithm using multi-temporal, multi-scale, geometrically corrected remotely sensed imagery. Coastline changes were observed over 23 years (1991-2014); the results revealed areas of erosion and accretion across the entire study area. The second technique is a simple mathematical model, End Point Rate (EPR) and Net Shoreline Movement (NSM), used to calculate the rate of change of this coastline over 31 years (1983-2014); a toy calculation of the two statistics is sketched below. This was achieved using geometrically corrected, orthorectified high-resolution aerial photographs and satellite imagery. Although two different techniques were used in this research, the results are consistent. The orthorectified images used in this research helped reduce errors and made features easily identifiable for extracting vector coastlines. To confirm the changes found in the desktop analysis, a ground-truthing exercise along this coastline was conducted to compare against what actually occurred on the ground; the changes on the ground agreed 95% with the changes from the desktop analysis. Comparing multi-scale and multi-temporal classified remotely sensed imagery showed that areas built up with jetties were visible on the change map as prograding coastline. The average rate of erosion in the area is -0.35 m/yr, and the coastline is prograding at an average rate of +0.41 m/yr. In areas lined with large strips of mangrove, the rate of accretion is higher than the rate of erosion. These figures could be maintained if developments are conducted sustainably, removing and clearing only the necessary portions for development and committing developers to maintaining the health of the coastal environment. Changes in this area were trivial according to the statistical results, but over the long term the situation may worsen if the intensity of hurricanes increases. Most of the changes in this area were due to development and the settlement of people. Fortification of the eroding coastline using hard engineering solutions (gabion baskets and sea walls) is not viable, whereas a more practical long-term solution is to line the coastline with vegetation (mangroves), which is self-maintaining and provides long-term coastline protection. This trend was noticed along the Volivoli resort coastline, where mangrove trees were maintained, protecting the coastline from erosion and beautifying the landscape. Therefore, long-term coastal durability relies on maintaining the vegetation along the coastline. Finally, this research successfully integrated Geographical Information Systems with the analysis of remotely sensed data to locate areas of coastal erosion and accretion and to quantify the rate of coastline change.
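A minimal sketch of the two shoreline-change statistics named above: Net Shoreline Movement (NSM) is the distance between the oldest and newest shoreline positions along a transect, and End Point Rate (EPR) divides NSM by the elapsed time. The transect positions are made-up numbers chosen to reproduce the study's average erosion rate.

```python
shore_1983_m = 0.0      # baseline shoreline position on a transect (m)
shore_2014_m = -10.9    # newest position; negative = landward (erosion)
years = 2014 - 1983     # the study's 31-year window

nsm = shore_2014_m - shore_1983_m   # Net Shoreline Movement (m)
epr = nsm / years                   # End Point Rate (m/yr)
print(f"NSM = {nsm:.1f} m, EPR = {epr:.2f} m/yr")   # -> EPR = -0.35 m/yr
```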