Holly Huey.

Applied Scientist @ Adobe | Experimental Psychology, PhD | Improving GenAI technologies to enhance human creativity

At Adobe, I'm an Applied Scientist working to improve GenAI image & video systems. I earned my PhD in Experimental Psychology from UC San Diego, under Dr. Judith Fan @ The Cognitive Tools Lab.

I lead the Scientific Evaluation Team's efforts in evaluating state-of-the-art imaging and editing technologies like Adobe Firefly. My research statistically measures improvements & regressions in model behavior related to text-to-visual & visual-to-visual systems, semantic editing, object recognition, and user intentions.

I do this by developing large-scale benchmarks (e.g., 1K-50K assets, each with 3-5 annotations). These benchmarks also enable me to identify market trends in artist and GenAI behavior and to provide insights to product management, data analytics, and model development & engineering teams on how to develop the next generation of creative tools. I also work with a globally and culturally diverse team of professional photographers & editors, graphic designers, and video creators/editors to better understand user needs and human-AI collaboration.

Collaboration with AI Ethics teams is deeply integrated into my work, ensuring that models do not create or enable the creation of harmful content.

My dissertation research evaluated how user intentions shift visual production behavior and downstream user interpretations. I tested this by generating & analyzing large-scale datasets of drawings, diagrams, and data visualizations. In other words, I studied how people make pictures. Now I study how GenAI models make pictures.

research highlights

image generation & editing

My current research evaluates text-to-image, image-to-image, image-to-video, and image editing via instructional prompting, masking, layer segmentation, and more. By measuring state-of-the-art model behavior and gathering insights from professional artists, my work helps identify where the market is today and helps product managers define the market of tomorrow. Day-to-day, I work with modeling and engineering teams to design custom experiments that collect large-scale datasets comparing different models and to analyze the technical quality and production readiness of their outputs.

cross-cultural image models

To develop culturally specific GenAI models, I build and work closely with teams of artists who have extensive lived experience in specific cultures and countries. This humans-in-the-loop collaboration ensures that our models do not contribute to erasure or inappropriate Westernization of non-Western cultures.

video editing

Occasionally, I also conduct qualitative user research. At Adobe Research, I interviewed filmmakers, video editors, and musicians to develop GenAI video editing parameters based on users' creative workflows. These insights helped frame large-scale annotation experiments that eventually fine-tuned different model parameters. For example, our Creativity & Cognition 2024 paper evaluated how people (N > 800) and LLMs select B-roll to enrich videos depending on their goals to make entertaining or informative content.

sketching

During my PhD, I specialized in large-scale crowdsourcing methods for collecting and analyzing sketches. Datasets like these can be used for a wide array of research domains such as computer vision, attention, memory, semantic segmentation of objects, and information prioritization based on user intentions. See my Publications below to access those datasets, such as our Nature Communications 2024 paper on evaluating concept development in children through their drawings.

Human-made. Designed by me. Please excuse any coding errors. I hand-coded this in my first year of grad school to teach myself front-end design and am now paying for the sins of my poor coding practices from 2019 (pre-GenAI era)...