Meta’s AI accidentally exposes sensitive data for two hours

An AI-generated suggestion at Meta reportedly led to a serious lapse, exposing sensitive user and company data to internal engineers for a roughly two-hour window.

The incident began when an employee asked for help on an internal forum, where an AI agent offered a solution that was then implemented with little scrutiny.

That is the part people are reacting to: not just the leak itself, but how easily an automated response made its way into a change affecting real data.

The post drew significant engagement, with over a hundred favorites and reblogs, and reactions ranged from disbelief to dark humor about how preventable the incident sounds.

Much of the discussion centers on the risks of relying too heavily on AI in critical systems, especially when basic review steps get skipped.