I am currently a Principal Research Scientist at Nokia Bell Labs in Cambridge, UK, where I lead the Device Systems team.
In my team, we build multi-device systems to support collaborative and interactive services. With the unprecedented rise of on- and near-body devices, it is common today to find ourselves surrounded by multiple sensory devices. We explore the systems challenges of enabling multi-device, multi-modal, and multi-sensory functionalities on these devices, thereby offering exciting opportunities for accurate, robust, and seamless edge intelligence.
My research interests include mobile and embedded systems, edge intelligence, tiny ML, the Internet of Things (IoT), and social and cultural computing. I enjoy building real, working systems and applications, and I like to collaborate with other domain experts on interdisciplinary research.
Come work with us on exciting device research! We have several positions (postdoc, research scientist, and tech lead) available in our Cambridge lab.
I will be at Slush to demonstrate how we bring cloud-scale machine learning and MLOps to software-defined cameras and IoT devices.
Our paper about on-body microphone collaboration was conditionally accepted to ACM HotMobile 2023. Kudos to our awesome intern, Bhawana!
Our paper about resource characterisation of the MAX78000 received the Best Paper Award at ACM AIChallengeIoT 2022. Kudos to our awesome interns, Hyunjong, Lei, and Arthur!
Two papers were presented at ACM UbiComp 2022: Lingo (a hyper-local conversational agent) and ColloSSL (collaborative self-supervised learning).
Our paper about the multi-device and multi-modal dataset for human energy expenditure estimation was published in Nature Scientific Data.
I am serving as a workshop co-chair for ACM UbiComp/ISWC 2022.
I am serving as a program committee member for ACM MobiSys 2022.
I am serving as a program committee member for ACM MobiCom 2022.
Intelligent cameras such as video doorbells and CCTV are abundant today, yet they are used only for single-purpose, privacy-invasive, and bandwidth-heavy streaming. We have developed a software solution that equips intelligent cameras with automated machine learning operations (MLOps), enabling them to provide a range of services, including traffic flow and pedestrian analysis, asset tracking, and even facial recognition.
Multiple intelligent devices on and around us are on the rise, opening up an exciting opportunity to leverage the redundancy of sensory signals and computing resources. We are building multi-device systems that make ML model inference accurate, robust, and efficient at deployment time, so that applications can benefit from this multiplicity and boost the runtime performance of deployed ML models without model retraining or re-engineering.
Embedded accelerators promise model inference with 100x improvements in speed and energy efficiency. In reality, however, this acceleration comes at the expense of extremely tight coupling, preset configurations, and obscure memory management. We challenge these limitations and uncover immediate opportunities for software acceleration to transform on-chip intelligence.
A multi-device and multi-modal dataset collected from 17 participants with 8 wearable devices placed on 4 body positions.
A multi-modal dataset for modeling mental fatigue and fatigability, containing 13 hours of sensor data collected over 36 sessions from 14 sensors on 4 wearable devices.
An ambient acoustic context dataset for building responsive, context-augmented voice assistants, containing 57,000 1-second segments of activities that occur in a workplace setting.
Battery usage data from 17 Android Wear smartwatch users over a period of about 3 weeks.
Cocoon: On-body Microphone Collaboration for Spatial Awareness
— Bhawana Chhaglani from University of Massachusetts Amherst in 2022
Exploring Model Inference over Distributed Ultra-low Power DNN Accelerators
— Prerna Khanna from Stony Brook University in 2022
Ultra-low Power DNN Accelerators for IoT: Resource Characterisation of the MAX78000
— Hyunjong Lee from KAIST in 2022
A Multi-device and Multi-modal Dataset for Human Energy Expenditure Estimation using Wearable Devices
— Shkurta Gashi from USI in 2021
Cross-camera Collaboration for Video Analytics on Distributed Smart Cameras
— Juheon Yi from Seoul National University in 2021
SleepGAN: Towards Personalized Sleep Therapy Music
— Jing Yang from ETH Zurich in 2021
FatigueSet: A Multi-modal Dataset for Modeling Mental Fatigue and Fatigability
— Manasa Kalanadhabhatta from University of Massachusetts Amherst in 2021
Coordinating Multi-tenant Models on Heterogeneous Processors using Reinforcement Learning
— Jaewon Choi from Yonsei University in 2021
Modelling Mental Stress using Smartwatch and Smart Earbuds
— Andrea Patane from University of Oxford in 2019
Resource Characterisation of Personal-scale Sensing Models on Edge Accelerators
— Mattia Antonini from FBK CREATE-NET and University of Trento in 2019
Resource Characterisation of Personal-scale Sensing Models on Edge Accelerators
— Tran Huy Vu from SMU in 2019
Design and Implementation of Mobile Sensing Applications for Research in Behavioural Understanding
— Dmitry Ermilov from Skoltech in 2018
Automatic Smile and Frown Recognition with Kinetic Earables
— Seungchul Lee from KAIST in 2018
Resource Characterisation of Wi-Fi Sensing for Occupancy Detection
— Zhao Tian from Dartmouth College in 2017