Delve accused of misleading customers with ‘fake compliance’ | TechCrunch

TechCrunch - AI 7 min read

An anonymous Substack post published this week accuses compliance startup Delve of "falsely" convincing "hundreds of customers they were compliant" with privacy and security regulations, potentially exposing those customers to "criminal liability under HIPAA and hefty fines under GDPR."

Delve is a Y Combinator-backed startup that last year announced raising a $32 million Series A at a $300 million valuation. (The round was led by Insight Partners.) On Friday, the startup attempted to refute the accusations on its blog, calling the Substack post "misleading" and saying it "contains a number of inaccurate claims."

The Substack post is credited to "DeepDelver," who described themselves as working at a (now former) Delve client. In response to emailed questions from TechCrunch, DeepDelver said that they and their collaborators "chose to remain anonymous out of fear for retaliation by Delve."

In their post, DeepDelver recounted receiving an email in December claiming the startup had "leaked a spreadsheet with confidential client reports." While Delve CEO Karun Kaushik apparently assured customers in a subsequent email that they were in compliance and that no external party gained access to sensitive data, DeepDelver said they and other customers had become suspicious. "Having the shared experience of being underwhelmed with the Delve experience, and having the overall sense that something fishy was going on, we decided to pool resources and investigate together," they wrote. Th...

Originally published on March 22, 2026. Curated by AI News.

