AI for Health

Our laboratory is at the forefront of integrating AI into healthcare, focusing on chronic disease monitoring (including Alzheimer’s disease), enhancing medical diagnosis with large language models, and modernizing Traditional Chinese Medicine. These initiatives strive to create scalable, ethical AI solutions that improve patient outcomes, broaden access, and personalize care on a global scale.

Autonomous Driving

Our research in autonomous driving utilizes real-time AI and smart roadside infrastructure to advance systems like Soar, which integrates software and hardware for comprehensive support. The αLiDAR System enhances LiDAR sensors with adaptive scanning capabilities. VILAM leverages infrastructure for precise 3D localization and mapping, correcting vehicle map errors. VI-Map maintains accurate HD maps by merging roadside and on-vehicle data in real-time. These innovations collectively boost precision and reliability in autonomous driving.

MobiCom ’21

In this paper, we present VI-Eye, the first system that aligns vehicle-infrastructure point cloud pairs at centimeter accuracy in real time, which enables a broad range of on-vehicle autonomous driving applications. Evaluations on two self-collected datasets show that VI-Eye outperforms state-of-the-art baselines in accuracy, robustness, and efficiency.

MobiCom ’22

In this paper, we present VIPS, a novel system that fuses the objects detected by the vehicle and the infrastructure to expand the vehicle’s perception in real time, which facilitates a number of autonomous driving applications. We implement VIPS end-to-end and evaluate its performance on two self-collected datasets. The experimental results show that VIPS outperforms existing approaches in accuracy, robustness, and efficiency.

🏆 Best Paper Award Runner-Up

SenSys ’22

This paper presents AutoMatch, the first system that matches traffic camera-vehicle image pairs or traffic camera-HD map image pairs at pixel-level accuracy with low communication/compute overhead in real time, a key technology for leveraging traffic cameras to assist the perception and localization of autonomous driving. We conduct extensive evaluations on two self-collected datasets, which show that AutoMatch outperforms state-of-the-art baselines in robustness, accuracy, and efficiency.

MobiCom ’23

In this paper, we present VI-Map, the first system that utilizes the unique advantages of roadside infrastructure to enhance on-vehicle HD maps by providing accurate and timely infrastructure HD maps. We have implemented VI-Map end-to-end, and the experimental results show that VI-Map enhances existing HD mapping methods in terms of map geometry accuracy, map topology freshness, system robustness, and efficiency.

🏆 Best Community Contribution Award

MobiCom ’24

This paper presents the design and deployment of Soar, the first end-to-end SRI system specifically designed to support AVs. Soar consists of carefully designed components for data and DL task management, I2I and I2V communication, and an integrated hardware platform, which together address a multitude of system and physical challenges, leverage the existing operational traffic infrastructure, and hence lower the barrier to adoption.
 
🏆 Best Artifact Award Runner-Up

NSDI ’24

In this paper, we propose VILAM, a novel framework that leverages intelligent roadside infrastructures to realize high-precision and globally consistent localization and mapping on autonomous vehicles. The key idea of VILAM is to utilize the precise scene measurement from the infrastructure as global references to correct errors in the local map constructed by the vehicle.

Embedded ML/LLM Systems

Our research in embedded ML/LLM systems aims to enhance the functionality and integration of sensor systems. We develop technologies that translate sensor capabilities and data dependencies into vocabularies and grammar rules for large language models, allowing for the conversion of user intentions into executable task plans. Additionally, we leverage foundation models for open-set learning on the edge, improving adaptability and performance. These innovations collectively enhance the efficiency and effectiveness of embedded systems through advanced AI integration.
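One way to picture the idea of turning sensor capabilities and data dependencies into a vocabulary for an LLM planner is to validate a generated task plan against that vocabulary. The sketch below is purely illustrative (the action names and `validate_plan` helper are hypothetical, not the lab's actual system):

```python
# Illustrative sketch (not the lab's actual system): express sensor
# capabilities as a vocabulary of allowed actions, then check that an
# LLM-generated task plan uses only those actions and that each step's
# inputs are produced by an earlier step (a data-dependency check).

SENSOR_VOCAB = {
    "read_imu":      {"produces": "motion"},
    "read_camera":   {"produces": "image"},
    "detect_person": {"needs": "image", "produces": "person"},
}

def validate_plan(plan):
    """Return True if every step is in the vocabulary and its data
    dependencies are satisfied by earlier steps."""
    available = set()
    for step in plan:
        spec = SENSOR_VOCAB.get(step)
        if spec is None:
            return False  # action not in the sensor vocabulary
        needed = spec.get("needs")
        if needed is not None and needed not in available:
            return False  # dependency not yet produced
        available.add(spec["produces"])
    return True

print(validate_plan(["read_camera", "detect_person"]))  # True
print(validate_plan(["detect_person"]))                 # False (no image yet)
```

In practice such a vocabulary can also be compiled into grammar constraints on the LLM's decoding, so invalid plans are never generated in the first place.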

On-Device Deep Learning

SenSys ’21, SenSys ’22, HotMobile ’23

Real-time deep learning (DL) on edge devices faces challenges like high computational demands, diverse task requirements, and limited framework support. This project explores DL task scheduling, model scaling, and latency-accuracy trade-offs to optimize performance for resource-constrained platforms. The goal is to enable efficient on-device DL execution for applications like autonomous driving while meeting real-time constraints.
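A minimal sketch of the latency-accuracy trade-off at the heart of this line of work: given several scaled variants of a model, pick the most accurate one that fits the current per-frame latency budget. The variant names and numbers below are hypothetical, not measurements from the papers:

```python
# Hedged sketch: latency-aware model selection, one simple form of the
# latency-accuracy trade-off. All variants and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class ModelVariant:
    name: str
    latency_ms: float   # measured on-device inference latency
    accuracy: float     # validation accuracy of this variant

def select_variant(variants, budget_ms):
    """Return the most accurate variant whose latency fits the budget."""
    feasible = [v for v in variants if v.latency_ms <= budget_ms]
    if not feasible:
        # Degrade gracefully: fall back to the fastest variant available.
        return min(variants, key=lambda v: v.latency_ms)
    return max(feasible, key=lambda v: v.accuracy)

variants = [
    ModelVariant("detector-small",  8.0, 0.71),
    ModelVariant("detector-medium", 18.0, 0.78),
    ModelVariant("detector-large",  35.0, 0.83),
]

print(select_variant(variants, budget_ms=20.0).name)  # detector-medium
```

A real scheduler must additionally arbitrate among concurrent DL tasks sharing the accelerator, which is where the task-scheduling component comes in.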

🏆 Best Paper Finalist

Edge-Cloud Cooperation

IPSN ’23, IoTDI ’21, SenSys ’23

Deep learning models on IoT devices face challenges in generalizing across diverse environments due to limited resources, despite advancements in algorithms and hardware. Foundation models (FMs) offer strong generalization, but leveraging their knowledge on resource-constrained edge devices remains unexplored.
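One common edge-cloud cooperation pattern, shown here only as a hedged sketch (the `edge_model` and `cloud_model` below are toy stand-ins, not the systems from these papers), is to run a small on-device model and defer only low-confidence inputs to a large cloud-side foundation model:

```python
# Hedged sketch of confidence-gated offloading: answer locally when the
# small edge model is confident; otherwise pay the cost of querying the
# cloud-side foundation model. Models here are toy stand-ins.

def classify(x, edge_model, cloud_model, threshold=0.8):
    label, confidence = edge_model(x)
    if confidence >= threshold:
        return label          # cheap, local answer
    return cloud_model(x)     # expensive, offloaded answer

# Toy stand-ins for demonstration only.
edge_model = lambda x: ("cat", 0.95) if x == "easy" else ("cat", 0.40)
cloud_model = lambda x: "dog"

print(classify("easy", edge_model, cloud_model))  # cat
print(classify("hard", edge_model, cloud_model))  # dog
```

The threshold controls the trade-off between cloud cost/latency and end-to-end accuracy.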

Mobile Sensing

Our research in mobile sensing focuses on developing innovative systems that enhance perception and interaction in challenging environments. We have created systems that leverage advanced technologies such as multi-modal sensors, mmWave radars, and Time-of-Flight (ToF) cameras. These systems enable human-like perception for social assistance, egocentric human mesh reconstruction, and high-resolution sensing in low-light conditions. By addressing limitations like restricted sensing range, occlusion, and noise, our work significantly improves the capabilities and applications of mobile sensing technologies, offering real-time, low-cost, and high-performance solutions for next-generation applications.

Wireless Systems

Our research in wireless systems is driven by a vision to revolutionize wireless communication through cutting-edge technologies. We focus on developing advanced multi-path estimation algorithms like BeamSense and exploring NB-IoT power consumption with tools such as NB-Scope. Our direction involves applying these innovations to Wi-Fi devices and conducting extensive field measurements to push the boundaries of wireless technology.

