Chaehong Lee is a product-minded AI engineer based in NYC, with a background in computer vision, LLMs, and generative models. She specializes in transforming state-of-the-art research—like function-calling agents, Stable Diffusion pipelines, and edge-native vision systems—into polished, real-time products that feel intuitive and expressive to use.

Currently at Meta Reality Labs, and formerly at Microsoft Mixed Reality, she combines technical depth with strong systems thinking, shipping experience, and visual design instincts. Her recent work spans sparse autoencoder interpretability, multimodal prompt interfaces, and agentic tool-calling.

Open to AI-first teams and startups—especially in NYC, SF, or remote—building next-gen tools at the intersection of usability, creativity, and machine intelligence. Also open to software solutions architect consulting opportunities where strategic technical insight and user-centered design can bring research to life.

CV
Education
  • Washington University in St. Louis
    B.S. in Computer Science
    ↳ Double Major in Applied Mathematics
    ↳ Minor in Communication Design
  • Washington University in St. Louis
    M.S. in Computer Vision
    ↳ Sparse to Dense Optical Flow with Deep Neural Networks
Experience
  • Meta
    New York, NY
    Software Engineer (2025-)
  • Microsoft
    Seattle, WA
    Research Engineer (2020-2024)
last updated 06.15.25
Diffusion Models: Continuing Studies from a Former Art Student

Lately, I've been thinking about how images are represented in feature space:
where in a neural network does the vibe of an image actually reside?
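One way to make that question concrete is to compare images by the cosine similarity of their feature embeddings. A toy sketch with hypothetical vectors (in practice these would come from an encoder layer, e.g. a CLIP image encoder or a U-Net bottleneck):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors: 1.0 means same
    direction in feature space, 0.0 means orthogonal ("unrelated vibes")."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-d "embeddings" of a source sketch and a generated pattern.
sketch = [0.9, 0.1, 0.4, 0.0]
pattern = [0.8, 0.2, 0.5, 0.1]
print(round(cosine_similarity(sketch, pattern), 3))  # → 0.979
```

A high similarity between a seed drawing and its generated pattern would suggest the model kept the "vibe" even where pixels diverge.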

Pattern Studies
[Diagram: the applied generative art process]

In this study, I explore how AI-assisted design can transform natural forms and original artwork into complex, aesthetically pleasing patterns suitable for fashion applications. The process begins with either a hand-drawn sketch or a photograph of a natural object, which serves as the seed for Stable Diffusion models.

Methodology:

  1. Source Image Creation/Selection: Original drawings or photographs of natural objects.
  2. Pattern Generation: Using Stable Diffusion, generate varied patterns based on the source material. Also explore Midjourney, SD3, and SDXL.
  3. Application Visualization: Implement the generated patterns onto conceptual fashion designs, again using AI-assisted image generation.
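The generation step above can be sketched with the Hugging Face diffusers img2img pipeline. Everything here is an illustrative assumption (model ID, prompt, strength), not the study's actual settings; the helper just computes an SD-friendly seed size with sides rounded to multiples of 64:

```python
def seed_dimensions(width: int, height: int, target: int = 512) -> tuple[int, int]:
    """Scale a source image so its longer side is `target`, rounding each
    side to the nearest multiple of 64 (a common SD latent-size constraint)."""
    scale = target / max(width, height)

    def _round64(x: float) -> int:
        return max(64, int(round(x / 64)) * 64)

    return _round64(width * scale), _round64(height * scale)

# Illustrative invocation (requires a GPU and a model download):
#
#   import torch
#   from diffusers import StableDiffusionImg2ImgPipeline
#   pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
#       "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
#   w, h = seed_dimensions(*seed_image.size)
#   out = pipe(prompt="seamless textile pattern, organic forms, high detail",
#              image=seed_image.resize((w, h)),
#              strength=0.6,          # lower = closer to the source drawing
#              guidance_scale=7.5).images[0]

print(seed_dimensions(3000, 2000))  # → (512, 320)
```

The `strength` parameter is the main creative dial in img2img work: it controls how much of the seed drawing survives into the generated pattern.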

Original Drawing - Dancing in my room, 2022
Generated Pattern 1, 2024
Generated Pattern 2, 2024
Concept Shot 1, 2024
Concept Shot 2, 2024
Concept Shot 3, 2024
Concept Shot 4, 2024

Original Object - Sea Urchin, 2024
Generated Pattern, 2024
Generated Pattern, 2024
Concept Shot 1, 2024
Concept Shot 2, 2024
Concept Shot 3, 2024
Concept Shot 4, 2024

Learning:

+ the positives.

  • The resulting generated patterns showed a remarkable ability to adapt to different scales and contexts, from intricate surface textures to bold, sweeping garment designs.
  • The interplay between human creativity (in the initial drawings and curation) and AI capabilities produced outcomes that neither could achieve independently.

- the negatives.

  • The patterns and style of the source image aren't preserved exactly in the generated output, suggesting that further fine-tuning may be required for one-to-one matching.

This study demonstrates the potential of AI as a collaborative tool in the creative process, expanding the possibilities of pattern design while maintaining a connection to the artist's original vision or natural inspiration.



Street style: (image gallery)