I build AI systems and products people can trust. I'm a Master of Engineering student in AI at Duke with a background in public policy, trust and safety, and machine learning. My work is at the intersection of model behavior, governance, and human-centered design.
I started on the policy side, studying how technology shapes society and where guardrails fail. But as AI began moving faster than regulation, I realized real impact happens inside the product cycle. I learned AI engineering to understand how models behave at the level of data, training, and failure modes, and to use that insight to shape how AI is built and deployed.
Today, I prototype and evaluate systems that pair technical depth with responsible AI judgment. I've built explainability tools, risk assessment frameworks, and AI-driven features that prioritize clarity, safety, and long-term trust.
My recent work includes:
- Strengthening model transparency with Grad-CAM, attention mapping, and structured evaluation.
- Supporting trust and safety efforts on AIGC detection, red-teaming workflows, and harm classification improvements.
- Designing ethical onboarding processes for generative AI tools.
- Researching governance, copyright, and international AI regulation.
- Building projects like Alba and JobSkills that explore how AI can improve safety, agency, and user experience.
I collaborate across engineering, research, product, and policy because I understand how models work, where bias comes from, and how small design choices affect user experience and risk. I focus on the practical questions around data, model behavior, evaluation, and deployment from the perspective of real users.
I'm passionate about innovation, but I care even more about building it responsibly. My goal is to help develop AI that is reliable, transparent, and safe at scale.