Fashion Search Demo

Project Overview

This project is a composite demo that brings together model training, retrieval systems, backend APIs, and user-facing interaction. The system accepts a natural-language query and returns product images by ranking text-image similarity in a learned shared embedding space.
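The ranking step can be sketched in a few lines. This is an illustrative toy, not the project's actual code: the function names, product IDs, and 3-d vectors are placeholders, and a real system would produce the embeddings with a trained text-image model rather than hard-coding them.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec, catalog):
    """Rank catalog items by similarity to the query embedding.

    `catalog` maps product IDs to image embeddings that live in the
    same shared space as the text embedding.
    """
    scored = [(pid, cosine(query_vec, vec)) for pid, vec in catalog.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)

# Toy 3-d embeddings standing in for real model outputs.
catalog = {
    "red-dress": [0.9, 0.1, 0.0],
    "blue-jeans": [0.0, 0.2, 0.9],
}
query = [1.0, 0.0, 0.0]  # stand-in embedding for e.g. "red summer dress"
print(search(query, catalog)[0][0])  # highest-ranked product ID -> red-dress
```

In practice the similarity search would run against a vector index rather than a Python loop, but the ranking logic is the same.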

It was designed as an end-to-end multimodal search application rather than just a model experiment. That means the project covers the full workflow from representation learning and retrieval indexing to API serving and interactive front-end presentation.
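The API-serving layer of such a workflow can be sketched as a single JSON endpoint that accepts the query text and returns ranked results. This is a minimal stdlib sketch under assumed names: `retrieve`, `SearchHandler`, the `/search?q=` route, and the hard-coded result list are all illustrative, not the project's real implementation.

```python
import json
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def retrieve(query: str, k: int = 3):
    """Placeholder retrieval step: a real system would embed the query
    text and rank the image index by similarity."""
    fake_index = ["red-dress", "blue-jeans", "green-scarf"]
    return fake_index[:k]

class SearchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse ?q=... from the request path and run retrieval.
        parsed = urllib.parse.urlparse(self.path)
        params = urllib.parse.parse_qs(parsed.query)
        query = params.get("q", [""])[0]
        body = json.dumps({"query": query, "results": retrieve(query)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

if __name__ == "__main__":
    # Bind to an ephemeral port, serve in a background thread, and
    # issue one round-trip request against the endpoint.
    server = HTTPServer(("127.0.0.1", 0), SearchHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/search?q=red+dress") as r:
        print(json.loads(r.read())["query"])  # -> red dress
    server.shutdown()
```

A frontend then only needs to call this endpoint and render the returned product IDs as images.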

What I Built

- A text-image embedding model trained so that natural-language queries and product images share one embedding space
- A retrieval index that ranks product images by similarity to the query embedding
- A backend API that serves search requests over that index
- An interactive front-end for entering queries and browsing ranked results

Why It Matters

This demo sits at the intersection of applied machine learning and product-minded systems work. It shows how multimodal research can be translated into a usable interface, where the model, retrieval infrastructure, API layer, and front-end experience all need to work together coherently.

It also reflects a broader interest of mine: building systems that make complex ML behavior inspectable rather than opaque.