Interests
My primary research interests are in computer vision, 3D vision, probabilistic/generative methods (particularly for dense prediction), biological vision (including the creation of prosthetic vision), and saliency and salient object detection.
Research
News:
Congratulations to Sahir Shrestha on the Best Paper Award at DICTA 2021.
Two papers accepted into AAAI 2022: congratulations Siyuan and Jing.
Two papers accepted into WACV 2021 and one to WACV workshops, congratulations Weixuan, Jiawei, and Kaiyue.
Two papers accepted into NeurIPS 2021: congratulations Sameera and Jing.
One paper accepted into ICCV 2021, congratulations Jing.
Paper accepted in IEEE TPAMI:
- Learning Saliency from Single Noisy Labelling: A Robust Model Fitting Perspective, J. Zhang, Y. Dai, T. Zhang, M. T. Harandi, N. Barnes, R. Hartley, IEEE Transactions on Pattern Analysis and Machine Intelligence, accepted and available via early access, 2021.
Three papers accepted into CVPR 2021, congratulations Shi, Wangbo, Yunqiu:
- Simultaneously Localize, Segment and Rank the Camouflaged Objects, Yunqiu Lv, Jing Zhang, Yuchao Dai, Aixuan Li, Bowen Liu, Nick Barnes, Deng-Ping Fan
- Weakly Supervised Salient Object Detection, Wangbo Zhao, Jing Zhang, Long Li, Nick Barnes, Nian Liu, Junwei Han
- Semantic Segmentation for Real Point Cloud Scenes via Bilateral Augmentation and Adaptive Fusion, Shi Qiu, Saeed Anwar, Nick Barnes
Paper accepted in ICLR 2021:
- Conditional Generative Modeling via Learning the Latent Space, Sameera Ramasinghe, Kanchana Nisal Ranasinghe, Salman Khan, Nick Barnes, Stephen Gould
I lead a project with the ANU-Optus Bushfire Research Centre of Excellence to detect bushfires from fire towers.
I am a lead investigator on a project developing computer vision to assist people with visual impairment. We are continuing this work with our partners at Bionic Vision Technologies, the Centre for Eye Research Australia, the Bionics Institute, and Health@CSIRO.
See the vision processing for prosthetic vision homepage, which links to some of the media coverage of our 2014 human mobility trials.
From 2010 to 2015 this was funded by the Australian Research Council through Bionic Vision Australia.
The Bionic Vision Australia partnership developed an Australian bionic eye implant that can restore a sense of vision to profoundly blind individuals. Vision processing is a key part of this approach. This has now become Bionic Vision Technologies, a start-up completing clinical trials with implanted participants.
I am interested in developing algorithms that replicate useful aspects of biological visual performance, both to support therapies for vision loss, such as retinal implants and assistive devices, and to build better computer vision algorithms.
PhD students:
- Current:
- Weixuan Sun (with Jing Zhang, Weakly Supervised segmentation)
- Jiawei Liu (with Jing Zhang, Binary segmentation (camouflage, saliency, including semi-supervised), and uncertainty)
- Peipei Song (with Piotr Koniusz)
- Changkun Ye (with Lars Petersson and Russell Tsuchida)
- Shi Qiu (with Saeed Anwar)
- Ruikai Cui (with Saeed Anwar)
- Sahir Shrestha (with Lars Petersson and Russell Tsuchida)
- Lin Li (with Salman Khan)
- Submitted:
- Kaiyue Liu
- Sameera Ramasinghe
- Graduated:
- Jing Zhang
- Moshiur Farazi (with Salman Khan)
- Shafin Rahman (with Salman Khan), winner of the J. G. Crawford Prize for his PhD thesis
- Cedric Scheerlinck (with Rob Mahony)
- David Feng
- Chris McCarthy
- Lachlan Horne
- John Lim: used the features of spherical vision to develop new algorithms for recovery of self-motion.
- Pengdong Xiao (with Tiberio Caetano and Paulette Lieby)
- Samunda Perera
- Tao Wang (with Xuming He)
- Andrew Dankers: developed a mechatronic simulation of active, dynamic human visual saliency incorporating inhibition of return. The system managed fixations across items of interest in a moving scene with an active stereo head, and segmented the objects corresponding to those points of interest via fixation, using the zero-disparity filter and a Markov random field approach.
- David Shaw
- Chris McCarthy: (with Mandyam Srinivasan, RSBS ANU) As part of his PhD, Chris developed new models for visual docking from optical flow based on the spherical view field of honeybees. Using the spherical vision model, we were able to construct algorithms that rapidly recover time-to-contact for obstacles in the environment (a small illustrative sketch follows this list).
- Giulio Sandini (LIRA Lab, Uni of Genoa, Italy)
- Gavin Baker
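As an aside on the time-to-contact result mentioned above: for pure translation towards a (locally) fronto-parallel surface, the divergence of the optical flow field equals 2/τ, where τ is the time-to-contact, so τ can be estimated directly from flow without knowing depth or speed. The sketch below is purely illustrative (it is not code from the project, and the function name is hypothetical); it assumes a dense flow field sampled on a regular image grid.

```python
import numpy as np

def time_to_contact(u, v, dx=1.0):
    """Estimate time-to-contact (in frames) from a dense optical flow field.

    Illustrative only: for pure translation towards a fronto-parallel surface,
    the flow is an expansion about the focus of expansion and its divergence
    equals 2/tau, where tau is the time-to-contact. u and v are 2D arrays of
    horizontal and vertical flow (pixels/frame); dx is the pixel spacing.
    """
    du_dx = np.gradient(u, dx, axis=1)  # partial derivative of u w.r.t. x
    dv_dy = np.gradient(v, dx, axis=0)  # partial derivative of v w.r.t. y
    divergence = du_dx + dv_dy
    mean_div = np.mean(divergence)      # average over the field for robustness
    return 2.0 / mean_div if mean_div > 0 else np.inf

# Synthetic expanding flow with tau = 20 frames: u = x/tau, v = y/tau.
ys, xs = np.mgrid[-50:51, -50:51].astype(float)
tau_true = 20.0
print(time_to_contact(xs / tau_true, ys / tau_true))  # prints approximately 20.0
```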
We have looked at visual navigation using a fixated log-polar camera. These projects developed corner-based fixation, and direction control for docking with a fixated object using only visual parameters, specifically the rotational component of log-polar optical flow. We have also developed a new space-variant form of the Hough transform that improves the detection of lines in log-polar images.
Corner tracking for a simple corner under robot motion.
Corner tracking in an office scene under robot motion.