
In the context of interactive AR/VR systems, this talk will present our research on integrating deep learning models with augmented and virtual reality environments, ranging from traditional use cases, such as process modelling in smart manufacturing setups, to novel applications, such as rehabilitation of people with disabilities, pilot training, and enforcing social distancing measures in office spaces. In particular, I shall present the continuum of displays; alternative input modalities such as eye gaze tracking, gesture recognition, and speech recognition; and the basic theory of deep learning models such as Convolutional Neural Networks. The integration of eye gaze tracking, gesture recognition, and deep learning with AR/VR systems will be demonstrated through case studies.