[2602.14357] Key Considerations for Domain Expert Involvement in LLM Design and Evaluation: An Ethnographic Study


Summary

This ethnographic study explores the role of domain experts in the design and evaluation of Large Language Models (LLMs), highlighting key practices and challenges faced by development teams.

Why It Matters

As LLMs are increasingly integrated into professional domains, understanding how domain expertise shapes their design and evaluation is crucial. This study offers insights into effective collaboration between developers and domain experts, which can improve LLM quality and user trust.

Key Takeaways

  • Domain experts play a critical role in shaping LLM design and evaluation.
  • Teams often create workarounds for data collection and evaluation due to constraints.
  • Co-development of evaluation criteria with experts enhances system relevance.
  • Challenges include expert motivation, trust issues, and knowledge integration.
  • Future LLM workflows should prioritize AI literacy and transparent consent.

Computer Science > Human-Computer Interaction

arXiv:2602.14357 (cs) [Submitted on 16 Feb 2026]

Title: Key Considerations for Domain Expert Involvement in LLM Design and Evaluation: An Ethnographic Study

Authors: Annalisa Szymanski, Oghenemaro Anuyah, Toby Jia-Jun Li, Ronald A. Metoyer

Abstract: Large Language Models (LLMs) are increasingly developed for use in complex professional domains, yet little is known about how teams design and evaluate these systems in practice. This paper examines the challenges and trade-offs in LLM development through a 12-week ethnographic study of a team building a pedagogical chatbot. The researcher observed design and evaluation activities and conducted interviews with both developers and domain experts. Analysis revealed four key practices: creating workarounds for data collection, turning to augmentation when expert input was limited, co-developing evaluation criteria with experts, and adopting hybrid expert-developer-LLM evaluation strategies. These practices show how teams made strategic decisions under constraints and demonstrate the central role of domain expertise in shaping the system. Challenges included expert motivation and trust, difficulties structuring participatory design, and questions around ownership and integration of expert...
