I am a Senior Research Scientist working on multimodal perception and intelligent systems. My work translates machine learning and generative AI research into robust, real-world products in complex, data-rich environments. I have authored 30+ peer-reviewed papers and filed 30+ patents.

Current Focus

My current work focuses on building models and systems that learn from complex data, adapt to changing environments, and deliver measurable value in real-world use cases.

My work spans representation learning, multimodal perception, time-series modeling, and human-centered AI, with an emphasis on end-to-end solutions that perform reliably once deployed.

Brief Bio

I received my Ph.D. from the Department of Computer Science at the University of Virginia in December 2014, advised by Prof. John A. Stankovic. My doctoral work focused on real-time intelligent systems, laying the foundation for my current research in AI-powered multimodal perception.

During my graduate studies, I worked at Google as a Software Engineering Intern on the AdSense backend infrastructure team. Since then, I have led AI/ML research initiatives in both academic and industrial settings, with a focus on building scalable intelligent systems that translate research into real-world impact.

Awards and Grants

  • Best Paper Award, EWSN, 2021
  • Best Paper Award Nomination, BuildSys, 2019
  • Bosch Research Performance Recognition Award, 2018
  • $1.3M DOE Grant, Human-in-the-loop Sensing and Control for Commercial Buildings, 2016
  • Best Paper Award Finalist, ICCPS, 2014, 2015
  • Microsoft Research Software Engineering Innovation Foundation (SEIF) Award, 2014

News

  • 09/06/24: NeurIPS Workshop on Behavioral ML ’24 paper accepted
  • 03/08/24: DCOSS-IoT ’24 paper accepted
  • 06/26/23: MASS ’23 paper accepted
  • 06/30/22: IROS ’22 paper accepted
  • 02/19/21: EWSN ’21 Best Paper Award

Service

Reviewer and TPC member for leading AI, ML, and systems venues including ICLR, SenSys, MobiCom, IMWUT, BuildSys, and DCOSS.