‘Uncanny Valley’: Iran War in the AI Era, Prediction Market Ethics, and Paramount Beats Netflix | WIRED


Wired - AI 35 min read

About this article

In this episode, our hosts unpack the ongoing conflict in the Middle East, particularly as the AI industry has been entrenching itself with the Department of Defense.

This week, the team dives into how disinformation and the AI industry's battles have quickly moved to the center of the ongoing conflict between the US and Iran. They also discuss how prediction markets like Polymarket and Kalshi are increasingly facing insider trading accusations and ethical questions. Also, how did Paramount beat Netflix in its bid for Warner Bros.? Plus: hosts Zoë Schiffer, Brian Barrett, and Leah Feiger share their predictions for the future.

Articles mentioned in this episode:

- X Is Drowning in Disinformation Following US and Israeli Attack on Iran
- How Journalists Are Reporting From Iran With No Internet
- Anthropic Hits Back After US Military Labels It a 'Supply Chain Risk'
- A Former Top Trump Official Is Going After Prediction Markets
- Everything Larry and David Ellison Will Control If Paramount Buys Warner Bros.

You can follow Brian Barrett on Bluesky at @brbarrett, Zoë Schiffer on Bluesky at @zoeschiffer, and Leah Feiger on Bluesky at @leahfeiger. Write to us at uncannyvalley@wired.com.

How to Listen

You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how: if you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for "uncanny valley." We're on Spotify too.

Transcript

Note: This is an automated transcript, whic...

Originally published on March 05, 2026. Curated by AI News.

Related Articles

Machine Learning

[R] I trained a 3k parameter model on XOR sequences of length 20. It extrapolates perfectly to length 1,000,000. Here's why I think that's architecturally significant.

I've been working on an alternative to attention-based sequence modeling that I'm calling Geometric Flow Networks (GFN). The core idea: i...
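The post is truncated, but the task it describes (training on XOR sequences of length 20 and testing extrapolation to length 1,000,000) is commonly set up as a running-parity problem: the target at each step is the cumulative XOR of the input bits so far. A minimal sketch of generating such data, assuming that formulation (the helper name `make_xor_example` is illustrative, not from the post):

```python
import random

def make_xor_example(length, rng=random):
    """Generate one running-parity example.

    Input: a random bit sequence of the given length.
    Target: at each position, the XOR of all input bits up to and
    including that position.
    """
    bits = [rng.randint(0, 1) for _ in range(length)]
    parity, targets = 0, []
    for b in bits:
        parity ^= b  # fold the new bit into the running parity
        targets.append(parity)
    return bits, targets

# Train on short sequences, then evaluate length generalization
# on sequences orders of magnitude longer.
train_set = [make_xor_example(20) for _ in range(1000)]
long_bits, long_targets = make_xor_example(1_000_000)
```

The reason this task is a popular extrapolation benchmark is that the state needed at each step is a single bit, so any architecture that fails at long lengths is failing at state tracking rather than capacity.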

Reddit - Machine Learning · 1 min ·
Machine Learning

[D] Data curation and targeted replacement as a pre-training alignment and controllability method

Hi, r/MachineLearning: has much research been done in large-scale training scenarios where undesirable data has been replaced before trai...

Reddit - Machine Learning · 1 min ·
AI Safety

I’ve come up with a new thought experiment to approach ASI, and it challenges the very notions of alignment and containment

I’ve written an essay exploring what I’m calling the Super-Intelligent Octopus Problem—a thought experiment designed to surface a paradox...

Reddit - Artificial Intelligence · 1 min ·
AI Safety

Bias in AI: Examples and 6 Ways to Fix it in 2026

AI bias is an anomaly in the output of ML algorithms due to prejudiced assumptions. Explore types of AI bias, examples, how to reduce bia...

AI Events · 36 min ·

