Remote working and collaboration are key to making workplaces flexible and productive, and gained considerable attention during the COVID-19 pandemic, when many people, including vulnerable groups such as older adults and individuals with chronic diseases, were forced to work from home.
In this direction, in Ageing @ Work we present an AR-based platform that aims to improve collaboration efficiency, productivity, and training at the same time, whereas most AR platforms developed for workplaces have so far been application-specific and limited in integrating both remote collaboration and training capabilities. The platform is based on marker-less augmented reality technology and can be used in any environment and workplace by any user, with a smartphone, a tablet, or a Head-Mounted Display (HMD) as the only required equipment. It consists of two applications that communicate with each other: one running on the device of the on-site worker and another running on the remote device of an expert guide. The remote expert receives a video feed from the AR HMD of the on-site worker, sharing the worker's first-person view of the workspace, and guides the worker by inserting virtual cues and annotations into that view; the annotations then become visible to the on-site worker. The on-site user can use a mobile device (smartphone) or a Mixed Reality (MR) HMD, such as the Microsoft HoloLens. The surrounding environment is scanned through the device's sensors; no prior knowledge of the environment or markers is needed, so the platform works in any indoor or outdoor space. Once the call is set up, the remote expert receives a live video view from the on-site worker on their mobile device, and the two users can additionally communicate through real-time voice chat. At any time, the remote expert can freeze a specific frame from the live view, then zoom and pan on it using pinching and dragging touch gestures to focus on a specific part of the worker's view. Subsequently, the expert can insert annotations on the frozen frame, selecting from an array of available symbols (pointing arrows, 3D models) as well as text.
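The expert-to-worker annotation exchange described above can be illustrated with a minimal sketch: a message carrying the ID of the frozen frame, the normalized touch coordinates, and the chosen symbol or text, serialized for transmission over the call's data channel. All field and class names here are illustrative assumptions, not the platform's actual wire format.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical annotation message; field names are illustrative,
# not the actual Ageing @ Work protocol.
@dataclass
class Annotation:
    frame_id: str   # ID of the frozen video frame being annotated
    x: float        # normalized horizontal touch position (0..1)
    y: float        # normalized vertical touch position (0..1)
    kind: str       # e.g. "arrow", "model", or "text"
    payload: str    # model name or free text, depending on kind

def encode(note: Annotation) -> str:
    """Serialize an annotation on the expert's device for sending."""
    return json.dumps(asdict(note))

def decode(raw: str) -> Annotation:
    """Rebuild the annotation on the on-site worker's device."""
    return Annotation(**json.loads(raw))

# The expert taps a point on the frozen frame to insert a pointing arrow:
msg = encode(Annotation("frame-042", 0.31, 0.58, "arrow", "point-left"))
restored = decode(msg)
```

Keeping the coordinates normalized to the frame (rather than in screen pixels) would let the worker's device re-project the annotation correctly regardless of display resolution.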
Insertion is performed intuitively by touching a point in the viewing field. Once the call is terminated, a session summary containing every annotation step is generated and uploaded to a server. These sessions can be accessed at any time by any user, thus contributing to knowledge sharing and reducing training costs and time. The platform also incorporates a push notification system that informs the remote supervisor about incoming calls. Through the same system and a web-based manager back-end, calls can be scheduled between different users for a specific date and time.
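The session summary generated at call termination can be sketched as a single record collecting every annotation step in order, ready for upload and later replay by other users. The schema and field names below are assumptions for illustration, not the platform's actual format.

```python
import json
import time

# Hypothetical session-summary builder; the structure is an assumption,
# not the platform's actual schema.
def build_session_summary(session_id, worker, expert, steps):
    """Collect every annotation step of a finished call into one record."""
    return {
        "session_id": session_id,
        "worker": worker,
        "expert": expert,
        "ended_at": int(time.time()),
        "steps": [
            {"order": i, "frame_id": s["frame_id"], "annotation": s["annotation"]}
            for i, s in enumerate(steps, start=1)
        ],
    }

# Example: two annotation steps recorded during a maintenance call.
steps = [
    {"frame_id": "frame-042", "annotation": "arrow: loosen this bolt"},
    {"frame_id": "frame-051", "annotation": "text: torque to 25 Nm"},
]
summary = build_session_summary("s-001", "worker-7", "expert-2", steps)

# This JSON document is what would be uploaded to the server so that
# other users can retrieve and replay the session later.
doc = json.dumps(summary)
```

Storing the steps with an explicit order field lets a training client replay the session annotation by annotation, which is what makes archived sessions usable as training material.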