I am a Principal Research Scientist at Nokia Bell Labs in Cambridge, UK, where I lead the Device Systems team.
In my team, we build multi-device systems to support collaborative and interactive services. With the unprecedented rise of on- and near-body devices, it is common today to find ourselves surrounded by multiple sensory devices. We explore the systems challenges of enabling multi-device, multi-modal, and multi-sensory functionality on these devices, opening up exciting opportunities for accurate, robust, and seamless edge intelligence.
My research interests include mobile and embedded systems, edge intelligence, tiny ML, the Internet of Things (IoT), and social and cultural computing. I enjoy building real, working systems and applications, and I like to collaborate with other domain experts on interdisciplinary research.
I will be giving a keynote speech at ACM NetAISys 2024 (co-located with ACM MobiSys 2024, Tokyo, Japan).
We successfully showcased our research prototype, Camera-as-a-Service, to over 10,000 people during the European Athletics U23 Championships in Espoo, Finland.
Come work with us for exciting device research! We have several positions (postdoc, research scientist, and tech lead) available in our Cambridge Lab.
I will be at Slush to demonstrate how we bring cloud-scale machine learning and MLOps to software-defined cameras and IoT devices.
Our paper about cross-camera collaboration for video analytics on distributed smart cameras was accepted to IEEE Transactions on Mobile Computing (TMC).
Our paper about thermal characteristics of AI accelerator-equipped microcontrollers was accepted to ACM BodySys 2024.
Our paper about detecting reactions to daily music listening via earable sensing was accepted to ACM Multimedia 2023.
Our paper about bringing MLOps and multi-tenant model serving to edge devices was accepted to ACM TECS.
Our paper about on-body microphone collaboration was conditionally accepted to ACM HotMobile 2023. Kudos to our awesome intern, Bhawana!
Our paper about resource characterisation of the MAX78000 won the Best Paper Award at ACM AIChallengeIoT 2022. Kudos to our awesome interns, Hyunjong, Lei and Arthur!
Two papers were presented at ACM UbiComp 2022: Lingo (a hyper-local conversational agent) and ColloSSL (collaborative self-supervised learning).
Our paper about the multi-device and multi-modal dataset for human energy expenditure estimation was published in Nature Scientific Data.
I am serving as a program committee member for IEEE PerCom 2025.
I am serving as a program committee member for ACM MobiCom 2025.
I am serving as a program committee member for ACM MobiSys 2024.
I am serving as a student travel grants co-chair for ACM MobiSys 2024.
I am serving as a workshop co-chair for ACM UbiComp/ISWC 2022.
I am serving as a program committee member for ACM MobiSys 2022.
I am serving as a program committee member for ACM MobiCom 2022.
This summer presented us with an exciting opportunity to deploy our Camera-as-a-Service platform at Leppävaara Stadium in Espoo, Finland, for the European Athletics U23 Championships. The setup spanned not only the stadium but also the warm-up and training area, parking spaces, and food stalls. To thousands of spectators, our service brought a unique mix of behind-the-scenes content, dynamic live views from various angles of the stadium, and instant replays, all streamed straight to their smartphones.
Intelligent cameras such as video doorbells and CCTV are abundant today, yet they are typically used for a single purpose and rely on privacy-invasive, bandwidth-heavy streaming. We have developed a software solution that transforms intelligent cameras with automated machine learning operations (MLOps), enabling a single camera to provide a range of services, including traffic-flow monitoring, pedestrian analysis, asset tracking, and even facial recognition.
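As a rough illustration of the idea (the class and service names below are assumptions for exposition, not the actual platform API), a single camera feed could be fanned out to several on-device analytics tenants like this:

```python
from typing import Callable, Dict

Frame = bytes  # stand-in for a decoded video frame

class CameraService:
    """Toy dispatcher: one camera feed shared by several analytics tenants."""

    def __init__(self) -> None:
        self._services: Dict[str, Callable[[Frame], dict]] = {}

    def register(self, name: str, analytic: Callable[[Frame], dict]) -> None:
        # Each tenant registers an analytics function rather than pulling raw video.
        self._services[name] = analytic

    def on_frame(self, frame: Frame) -> Dict[str, dict]:
        # Run every registered analytic on-device and ship only the results.
        return {name: fn(frame) for name, fn in self._services.items()}

svc = CameraService()
svc.register("traffic_flow", lambda f: {"vehicles": 0})   # stub analytic
svc.register("pedestrians", lambda f: {"count": 0})       # stub analytic
print(svc.on_frame(b""))
```

Because only the analytics results leave the device, the raw stream stays local, which is where the privacy and bandwidth savings come from.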
Intelligent devices on and around us are on the rise, opening up an exciting opportunity to leverage the redundancy of their sensory signals and computing resources. We are building multi-device systems that make ML model inference accurate, robust, and efficient at deployment time, so that applications can benefit from this multiplicity and boost the runtime performance of deployed models without retraining or re-engineering them.
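For instance, one simple way to exploit such multiplicity at deployment time is late fusion of per-device predictions. The confidence weighting below is an illustrative sketch, not our specific method:

```python
import numpy as np

def fuse(per_device_probs):
    """Confidence-weighted average of per-device class probabilities."""
    probs = np.stack(per_device_probs)   # shape: (n_devices, n_classes)
    conf = probs.max(axis=1)             # each device's top-class confidence
    weights = conf / conf.sum()          # trust more confident devices more
    fused = (weights[:, None] * probs).sum(axis=0)
    return int(fused.argmax())

# e.g. the same activity classifier running on a watch, an earbud, and a phone
watch  = np.array([0.6, 0.3, 0.1])
earbud = np.array([0.2, 0.7, 0.1])
phone  = np.array([0.5, 0.4, 0.1])
print(fuse([watch, earbud, phone]))  # fused prediction across the three devices
```

No model is retrained here; the devices' existing outputs are simply combined at runtime.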
Embedded AI accelerators promise model inference with 100x improvements in speed and energy efficiency. In reality, however, this acceleration comes at the expense of extremely tight hardware coupling, preset configurations, and opaque memory management. We challenge these limitations and uncover immediate opportunities for software acceleration to transform on-chip intelligence.
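To make the memory constraint concrete, here is a hypothetical planner for fitting multiple DNNs into a fixed on-chip weight memory; 442 KB mirrors the MAX78000's weight-memory capacity, while the model names and sizes are made up:

```python
# Hypothetical planner (not the actual system): decide which DNNs stay resident
# in the accelerator's fixed weight memory and which are swapped in on demand.
WEIGHT_MEMORY_KB = 442  # MAX78000 weight-memory capacity

def plan(models, capacity_kb):
    resident, swapped, used = [], [], 0
    # Greedy: keep the largest models resident to minimise swap traffic.
    for name, size in sorted(models.items(), key=lambda kv: -kv[1]):
        if used + size <= capacity_kb:
            resident.append(name)
            used += size
        else:
            swapped.append(name)
    return resident, swapped

models = {"face_id": 280, "keyword_spotting": 170, "activity": 90}  # made-up sizes in KB
print(plan(models, WEIGHT_MEMORY_KB))
# -> (['face_id', 'activity'], ['keyword_spotting'])
```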
A multi-device and multi-modal dataset collected from 17 participants, with 8 wearable devices placed at 4 body positions.
A Multi-modal Dataset for Modeling Mental Fatigue and Fatigability, containing 13 hours of sensor data collected over 36 sessions from 14 sensors on four wearable devices.
Ambient acoustic context dataset for building responsive, context-augmented voice assistants, containing 57,000 1-second segments for activities that occur in a workplace setting.
Battery usage data from 17 Android Wear smartwatch users over a period of about 3 weeks.
Memory-efficient Multi-DNN Inference System for Tiny AI Accelerators
— Changmin Jeon from Seoul National University in 2024
Battery Balancing for Dynamic Workloads in Earables
— Sidharth Anupkrishnan from University of Massachusetts Amherst in 2024
A Protocol for Secure Data Sharing Between Wearable Devices
— Sujin Han from KAIST in 2024
Ultra-Low Power DNN Accelerators for IoT: Energy Characterization of the MAX78000
— Yushan Huang from Imperial College London in 2024
Zero-interaction Multi-device Authentication without Central Trusted Party
— Adiba Orzikulova from KAIST in 2023
Constructing Input Space for Cross-camera Collaboration
— Ila Gokarn from Singapore Management University in 2023
Exploring Distributed Inference on Tiny AI Accelerators
— Arthur Moss from Newcastle University in 2023
Cocoon: On-body Microphone Collaboration for Spatial Awareness
— Bhawana Chhaglani from University of Massachusetts Amherst in 2022
Exploring Model Inference over Distributed Ultra-low Power DNN Accelerators
— Prerna Khanna from Stony Brook University in 2022
Ultra-low Power DNN Accelerators for IoT: Resource Characterisation of the MAX78000
— Hyunjong Lee from KAIST in 2022
A Multi-device and Multi-modal Dataset for Human Energy Expenditure Estimation using Wearable Devices
— Shkurta Gashi from USI in 2021
Cross-camera Collaboration for Video Analytics on Distributed Smart Cameras
— Juheon Yi from Seoul National University in 2021
SleepGAN: Towards Personalized Sleep Therapy Music
— Jing Yang from ETH Zurich in 2021
FatigueSet: A Multi-modal Dataset for Modeling Mental Fatigue and Fatigability
— Manasa Kalanadhabhatta from University of Massachusetts Amherst in 2021
Coordinating Multi-tenant Models on Heterogeneous Processors using Reinforcement Learning
— Jaewon Choi from Yonsei University in 2021
Modelling Mental Stress using Smartwatch and Smart Earbuds
— Andrea Patane from University of Oxford in 2019
Resource Characterisation of Personal-scale Sensing Models on Edge Accelerators
— Mattia Antonini from FBK CREATE-NET and University of Trento in 2019
Resource Characterisation of Personal-scale Sensing Models on Edge Accelerators
— Tran Huy Vu from SMU in 2019
Design And Implementation Of Mobile Sensing Applications For Research In Behavioural Understanding
— Dmitry Ermilov from Skoltech in 2018
Automatic Smile and Frown Recognition with Kinetic Earables
— Seungchul Lee from KAIST in 2018
Resource Characterisation of Wi-Fi Sensing for Occupancy Detection
— Zhao Tian from Dartmouth College in 2017