Whistleblowers have come forward to reveal that social media giants, including Meta and TikTok, have prioritized engagement and profit over user safety, allowing harmful content to spread on their platforms. Internal research showed that outrage-driven content boosts user engagement, and companies weakened safety measures rather than sacrifice those gains. As a result, users, especially teenagers, are being exposed to violent, hateful, and misogynistic content, and some have been radicalized by algorithmic recommendations. The revelations have raised concerns about social media companies' responsibility to protect their users and intensified calls for stricter regulation to curb the spread of harmful content.