
Huaze Liu 刘铧泽

Hi! I am a senior undergraduate student at Harvey Mudd College majoring in Computer Science and Mathematics, with a social science concentration in Economics.

I am a full-stack robotics engineer working on robot design, perception, and control. I am most interested in how autonomous systems can perceive and navigate the real world robustly and safely. At Harvey Mudd College, I worked closely with Prof. Adyasha Mohanty in the Engineering Department on foundation models and sensor fusion for trustworthy autonomous vehicle navigation. Since June 2025, I have been working on humanoid robot catching with Kehlani Fay and Arth Shukla, under the supervision of Prof. Michael Tolley and Prof. Hao Su at UC San Diego.

Email  /  GitHub  /  LinkedIn  /  Google Scholar

profile photo
Questions That Always Interest Me
Question 1: How do we build a robot mind that is both data-efficient and provably reliable?

Modern deep learning gives us powerful perception, but it's often opaque and data-hungry. Classical control theory gives us safety guarantees, but it struggles with the messy, unstructured real world. I am driven by the challenge of bridging this gap: How can we design "trustworthy" learning algorithms that know what they don't know? Whether it's using Conformal Prediction to bound uncertainty in vision models or fusing multi-modal sensors to detect when a neural network is hallucinating, my goal is to create robotic systems that are not just smart, but fundamentally safe and robust in the face of the unknown.
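To make the conformal prediction idea concrete, here is a minimal sketch of split conformal prediction for a classifier. All data here is synthetic and the score function (one minus the true-class softmax probability) is just one common choice, not a description of any specific project; the point is the calibrated threshold, which yields distribution-free coverage of at least 1 − α on exchangeable test data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: softmax outputs of a vision model on a held-out
# calibration set, with true labels (synthetic here for illustration).
n_cal, n_classes = 500, 10
logits = rng.normal(size=(n_cal, n_classes))
labels = rng.integers(0, n_classes, size=n_cal)
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Nonconformity score: 1 - model confidence in the true class.
cal_scores = 1.0 - probs[np.arange(n_cal), labels]

# Split-conformal quantile with the finite-sample correction.
alpha = 0.1  # target: at least 90% coverage
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(cal_scores, q_level, method="higher")

def prediction_set(p):
    """Return every class whose score clears the calibrated threshold."""
    return np.nonzero(1.0 - p <= qhat)[0]

# At test time the set shrinks when the model is confident and grows
# when it is not -- an explicit, distribution-free uncertainty bound.
test_probs = np.exp(rng.normal(size=n_classes))
test_probs /= test_probs.sum()
print(prediction_set(test_probs))
```

The appeal for safety-critical perception is that the guarantee holds regardless of how opaque the underlying network is: a large prediction set is an honest signal that the model does not know, which a downstream planner can treat as a trigger for caution.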


Question 2: Can a generalist robot learn to be safe without needing a god-view of the world?

We often train robots in simulators where they have perfect knowledge of the world (state, physics, future trajectories). But when deployed, they only have noisy sensors and limited onboard compute. This "sim-to-real" gap isn't just about domain randomization; it's about information asymmetry. I am fascinated by how we can train policies that explicitly reason about their own perceptual limitations. Can we embed safety guarantees directly into the learning process? By aligning what the robot sees during training with what it can see in reality, I believe we can unlock true generalist capabilities—allowing robots to act confidently even when their understanding of the world is imperfect.