[2602.17672] Assessing LLM Response Quality in the Context of Technology-Facilitated Abuse

arXiv - AI · Article · 4 min read

Summary

This article evaluates the effectiveness of large language models (LLMs) in providing support for survivors of technology-facilitated abuse (TFA), highlighting their capabilities and limitations based on expert assessments and user feedback.

Why It Matters

Understanding how LLMs can assist survivors of TFA is crucial as these technologies become more accessible. This research informs the development of better support tools for vulnerable populations, potentially improving their safety and access to resources.

Key Takeaways

  • The study is the first expert-led evaluation of LLMs in the context of TFA.
  • LLMs showed varying effectiveness in responding to TFA-related inquiries.
  • User feedback highlighted the importance of actionable advice for survivors.
  • Recommendations for improving LLM responses are provided based on findings.
  • The research emphasizes the need for tailored AI solutions in sensitive contexts.

Computer Science > Human-Computer Interaction
arXiv:2602.17672 (cs) · Submitted on 11 Jan 2026

Title: Assessing LLM Response Quality in the Context of Technology-Facilitated Abuse
Authors: Vijay Prakash, Majed Almansoori, Donghan Hu, Rahul Chatterjee, Danny Yuxing Huang

Abstract: Technology-facilitated abuse (TFA) is a pervasive form of intimate partner violence (IPV) that leverages digital tools to control, surveil, or harm survivors. While tech clinics are one of the reliable sources of support for TFA survivors, they face limitations due to staffing constraints and logistical barriers. As a result, many survivors turn to online resources for assistance. With the growing accessibility and popularity of large language models (LLMs), and increasing interest from IPV organizations, survivors may begin to consult LLM-based chatbots before seeking help from tech clinics. In this work, we present the first expert-led manual evaluation of four LLMs - two widely used general-purpose non-reasoning models and two domain-specific models designed for IPV contexts - focused on their effectiveness in responding to TFA-related questions. Using real-world questions collected from literature and online forums, we assess the quality of zero-shot single-turn LLM responses generated with a survivor safety-centered prompt...
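The abstract describes the evaluation setup as zero-shot, single-turn responses generated with a survivor safety-centered prompt. Below is a minimal sketch of what such a setup might look like, assuming the OpenAI Python SDK; the model name, the `SAFETY_PROMPT` wording, and the `single_turn_response` helper are illustrative placeholders, not the authors' actual prompt or code.

```python
# Illustrative sketch (not the paper's code): produce a zero-shot,
# single-turn answer to a TFA-related question using a safety-centered
# system prompt, mirroring the evaluation setup described in the abstract.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical safety-centered system prompt; the paper's real prompt is
# not reproduced in the abstract.
SAFETY_PROMPT = (
    "You are assisting a survivor of technology-facilitated abuse. "
    "Prioritize the survivor's physical and digital safety, avoid advice "
    "that could alert the abuser, and point to professional resources "
    "such as tech clinics or hotlines where appropriate."
)

def single_turn_response(question: str, model: str = "gpt-4o") -> str:
    """Return one zero-shot answer with no prior conversation history."""
    completion = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SAFETY_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0,  # reduce sampling variance for manual rating
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(single_turn_response(
        "I think my ex-partner can see my location. What should I check first?"
    ))
```

In a setup like this, each question from the collected corpus would be sent once, with no follow-up turns, and the resulting responses rated manually by experts.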
