“Tracking on Mobile: Opportunities and Challenges”
This talk will examine the rise in computational power of mobile devices and the opportunities this creates for visual processing applications using technologies such as visual tracking. The barriers to further computational growth will be discussed, and various strategies for addressing them will be presented, including techniques for efficient use of mobile devices and the use of asymmetric computing.
“Modeling, Tracking, Annotating and Augmenting a 3D Object in less than 5 Minutes”
Steve Bourgeois, Boris Meden, Vincent Gay-Bellile, Mohamed Tamaazousti and Sebastian Knodel
“Tracking and Integration Aspects of a Mobile Augmented Reality Tool for Shipbuilding”
Juha-Pekka Arimaa, Rami Suominen, Antti Euranto, Olli Lahdenoja, Timo Knuutila and Teijo Lehtonen
“Projective indices for AR/MR benchmarking in TrakMark”
Masayuki Hayashi, Itaru Kitahara, Yoshinari Kameda, Yuichi Ohta, Koji Makita and Takeshi Kurata
“Introduction of TrakMark and its standardization activity at ISO/IEC JTC 1/SC 24/WG 9”
Yoshinari Kameda and Takeshi Kurata
“Real-time tracking and model-building”
Over a number of years the Active Vision Lab has developed various methods for real-time tracking of objects and of mobile cameras, both of which are essential precursors for many augmented reality tasks. Though many visual tracking algorithms are content to report only a 2D x-y position in the image, this is rarely sufficient for AR. In contrast, we have developed methods for simultaneous segmentation and pose recovery, which also open up the possibility of building models on the fly. This talk will discuss the theory behind these methods and show examples of the augmented reality applications that have resulted.
Tom Drummond, Monash University
Professor Drummond grew up in the UK and studied mathematics for his BA at the University of Cambridge. In 1989 he emigrated to Australia and worked for CSIRO in Melbourne for four years before moving to Perth for his PhD in Computer Science at Curtin University. In 1998 he returned to Cambridge as a post-doctoral Research Associate; he was later appointed as a University Lecturer and subsequently promoted to Senior University Lecturer. In 2010 he returned to Melbourne and took up a Professorship at Monash University.
His research is principally in the field of real-time computer vision (i.e. processing information from a video camera in real time, typically at frame rate). This has applications in augmented reality, robotics, assistive technologies for visually impaired users, and medical imaging.
Ian Reid, University of Adelaide
Ian Reid is a Professor of Computer Science and ARC Australian Laureate Fellow at the University of Adelaide, where he has been since September 2012. Prior to that he was a Professor of Engineering Science at the University of Oxford, a post held in association with Exeter College, where he was the senior Engineering tutor. He received a BSc in Computer Science and Mathematics with first class honours from the University of Western Australia in 1987 and was awarded a Rhodes Scholarship in 1988 to study at the University of Oxford, where he obtained a D.Phil. in 1992. Between then and 2000, when he was appointed to a Lectureship, he held various Research Fellowship posts, including an EPSRC Advanced Research Fellowship. His research interests include active vision, visual navigation, visual geometry, human motion capture and intelligent visual surveillance, with an emphasis on real-time implementations wherever possible. He has published 150 papers on these topics in major journals and refereed conferences, with prize-winning papers at BMVC '05, '09 and '10, and CVPR '08. He serves on the program committees of various national and international conferences and on the editorial boards of Image and Vision Computing and IEEE T-PAMI, and has led a number of EU, UK and Australian Research Council sponsored research projects.
Call for Papers
Download the official call for papers here.
Authors are invited to submit original, unpublished manuscripts in standard IEEE proceedings format. The concept of this workshop is to look at pose tracking from an end-to-end point of view. We invite submissions describing practical tracking solutions for AR that address issues including, but not limited to:
- Model-based detection and tracking
- Large scale object recognition
- SLAM and online reconstruction
- Sensor integration and fusion
- User generated content and tracking data
- Networking and persistence
- Flow graphs and plug-in architectures
This year we would especially like to emphasize:
- Hardware for tracking (e.g. FPGA, special cameras)
- CPU offloading (e.g. GPU or DSP processing)
- Datasets for benchmarking of tracking algorithms
- Evaluation methodology for fair comparison of tracking methods
Please submit your manuscript using the EasyChair submission system. We will accept submissions in two different formats:
- Full papers and position statements; maximum length of six pages
- Works-in-progress papers; maximum length of three pages
Organizers and Program Committee
- Daniel Wagner, Qualcomm, Austria
- Yoshinari Kameda, University of Tsukuba, Japan
- Hideaki Uchiyama, Toshiba Corporation, Japan
- Jonathan Ventura, Graz University of Technology, Austria
- Hideo Saito, Keio University, Japan
- Selim Benhimane, Intel, USA