Building AI that is safe, useful, and engineered for impact.

AI engineer and product builder at Duke who works at the intersection of safety, governance, and user experience.

About

About me

I build AI systems and products people can trust. I'm a Master of Engineering student in AI at Duke with a background in public policy, trust and safety, and machine learning. My work is at the intersection of model behavior, governance, and human-centered design.

I started on the policy side, studying how technology shapes society and where guardrails fail. But as AI began moving faster than regulation, I realized real impact happens inside the product cycle. I learned AI engineering to understand how models behave at the levels of data, training, and failure modes, and to use that insight to shape how AI is built and deployed.

Today, I prototype and evaluate systems that pair technical depth with responsible AI judgment. I've built explainability tools, risk assessment frameworks, and AI-driven features that prioritize clarity, safety, and long-term trust.

My recent work includes:

  • Strengthening model transparency with Grad-CAM, attention mapping, and structured evaluation.
  • Supporting trust and safety efforts on AIGC detection, red-teaming workflows, and harm classification improvements.
  • Designing ethical onboarding processes for generative AI tools.
  • Researching governance, copyright, and international AI regulation.
  • Building projects like Alba and JobSkills that explore how AI can improve safety, agency, and user experience.

I collaborate across engineering, research, product, and policy because I understand how models work, where bias comes from, and how small design choices can affect user experience and risk. I'm focused on the practical questions around data, model behavior, evaluation, and deployment from the perspective of real users.

I'm passionate about innovation, but I care even more about building responsibly. My goal is to help develop AI that is reliable, transparent, and safe at scale.

Education

  • May 2026 MEng, Artificial Intelligence · Duke Pratt School of Engineering
  • May 2025 BA, Public Policy · Cum Laude · Digital Intelligence Certificate & Spanish Minor

Projects

Responsible AI projects

Chrome + Node

Alba

Chrome extension and helper server that estimates the energy, carbon, and water cost of every AI prompt.

Alba wraps ChatGPT, Gemini, and other chat surfaces with inline footprint labels, a prompt optimizer, and a floating dashboard so people can ship work while staying within a climate budget.

  • Real-time footprint labels plus Spotify-style recaps driven by the helper server.
  • Inline prompt optimizer that shows projected savings before replacing a draft.
  • Configurable methodology via energyConfig.js so teams can adjust model profiles.
View project
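A per-prompt footprint estimate of the kind Alba's helper server could produce can be sketched in a few lines. This is a minimal illustration only: the model names, per-token energy factors, and carbon/water intensities below are hypothetical placeholders, and Alba's actual methodology lives in energyConfig.js.

```python
# Hypothetical per-model energy profiles; Alba's real values live in energyConfig.js.
MODEL_PROFILES = {
    "gpt-4o": {"wh_per_1k_tokens": 0.4},
    "gemini-pro": {"wh_per_1k_tokens": 0.3},
}

GRID_G_CO2_PER_KWH = 400.0   # assumed grid carbon intensity (g CO2 per kWh)
LITERS_WATER_PER_KWH = 1.8   # assumed data-center water use (L per kWh)

def prompt_footprint(model: str, tokens: int) -> dict:
    """Estimate energy (Wh), carbon (g CO2), and water (L) for one prompt."""
    wh = MODEL_PROFILES[model]["wh_per_1k_tokens"] * tokens / 1000
    kwh = wh / 1000
    return {
        "energy_wh": wh,
        "carbon_g": kwh * GRID_G_CO2_PER_KWH,
        "water_l": kwh * LITERS_WATER_PER_KWH,
    }
```

Keeping the per-model factors in one config object is what lets teams swap in their own model profiles without touching the estimation logic.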
Streamlit + NLP

Resume–Job Skill Gap Analyzer

Embeddings-powered app that compares resumes to live job postings to highlight missing skills.

I vectorized resumes and postings, queried APIs such as Remotive, and surfaced alignment scores plus skill gaps in a Streamlit interface deployed on Hugging Face Spaces.

  • OpenAI embeddings to compare applicants with hundreds of listings.
  • Automated parsing for PDFs/DOCX plus keyword extraction and visualization.
  • Altair charts that make strengths, gaps, and next steps intuitive for job seekers.
View project
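The core matching step is cosine similarity between a resume embedding and each posting embedding. A minimal sketch, using plain NumPy vectors in place of the OpenAI embeddings the app actually queries:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_postings(resume_vec: np.ndarray, posting_vecs: list) -> list:
    """Return posting indices sorted by alignment score, best match first."""
    scores = [cosine_similarity(resume_vec, p) for p in posting_vecs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
```

In the real app the vectors come from an embeddings API and the ranked scores feed the Streamlit interface; the ranking logic itself is this simple.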
XAI research

Explainable Deep Learning with Grad-CAM Variants

Used Grad-CAM, Grad-CAM++, and LayerCAM to visualize where CNNs focus when interpreting facial expressions.

I tuned preprocessing to better align heatmaps with the regions that actually drive predictions, improving alignment by roughly 30% and giving stakeholders clearer grounds to trust the model.

  • Five-image review pipeline comparing attention maps before and after adjustments.
  • Structured evaluation to quantify when the model is “looking” in the right place.
  • Narratives that translate technical findings into product and policy guidance.
View project
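The Grad-CAM computation behind these heatmaps reduces to a weighted sum of activation maps. A minimal sketch of that math, assuming a conv layer's activations and gradients have already been captured (here as NumPy arrays rather than via real framework hooks):

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM heatmap from a conv layer's activations and gradients.

    Both inputs have shape (channels, H, W), as captured during a
    forward/backward pass for the target class.
    """
    # Channel weights: global-average-pool the gradients (Grad-CAM's alpha_k).
    weights = gradients.mean(axis=(1, 2))
    # Weighted sum of activation maps, then ReLU to keep positive evidence only.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    # Normalize to [0, 1] so the map can be overlaid on the input image.
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

Grad-CAM++ and LayerCAM change how the channel weights are computed, but the weighted-sum-plus-ReLU structure is shared across the variants compared in this project.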

Writing

Selected writing

Explainability Isn’t About the Output

What Unmasking AI revealed about datasets, omissions, and the limits of post-hoc explanations.

Read

Babies, Brains, and Robot Sociopaths

What child development and psychopathy can teach us about building safer AI systems.

Read

The Algorithm That Helped Me Choose to Pursue a Master of Engineering

How thinking like a machine learning model helped me make a major career change.

Read

The Antitrust Paradox in the Age of Big Tech

How Apple, Google, Amazon, Microsoft, and Meta rewrote competition and why regulators are still playing catch-up.

Read

Your Body, Their Data

Why convenience comes with risks, and what Flo Health teaches us about privacy in an age of digital surveillance.

Read

The Black Box of Hiring

Applicant tracking systems have reshaped hiring, but at what cost to fairness and equity?

Read

When Your Face Becomes Data

The hidden risks and ethical dilemmas of facial recognition in everyday life.

Read

Deepfakes: Seeing is No Longer Believing

Why deepfake abuse is spreading faster than our protections.

Read