FTC Queries AI Companies on Safeguards for Children and Federal Pilot Program Will Use AI to Authorize Medicare Treatments — AI: The Washington Report
Regulators are intensifying their scrutiny of artificial intelligence, with the Federal Trade Commission demanding information from major AI firms on children’s safety measures. Meanwhile, a federal pilot program set to use AI to authorize Medicare treatments in six states is drawing fire from lawmakers who fear it could lead to wrongful denials of care.

Key Takeaways:
- The FTC is investigating how children interact with AI and reviewing safeguards to protect their well-being.
- This investigation was unanimously approved by commissioners, reflecting bipartisan concern about AI’s impact.
- A federal pilot program will deploy AI for Medicare treatment authorizations in six states beginning in 2026.
- Lawmakers have criticized this approach, with Rep. Greg Landsman dubbing the program the “AI death panel.”
- The FTC is exercising its Section 6(b) authority, which allows it to compel companies to provide information for studies rather than for enforcement actions, to collect data from AI firms.
The FTC’s Inquiry into AI and Child Safety
The Federal Trade Commission (FTC) has heightened its oversight of artificial intelligence technologies by demanding documentation from major AI companies, including Meta. The agency’s primary goal is to understand how children use AI-based platforms and what safeguards exist to protect younger audiences. Citing concerns over mental health risks and inappropriate chatbot interactions, commissioners unanimously approved issuing the orders under the FTC’s Section 6(b) authority. This formal step indicates a strong regulatory commitment to protecting minors from potential harms associated with AI.
Broader Regulatory Context
The FTC’s move is seen as part of a broader push to maintain U.S. leadership in AI while also ensuring consumer safety. By issuing these investigative orders, the Commission aims to balance innovation with necessary guardrails, especially in platforms where minors regularly engage. The orders reflect a growing resolve at the federal level to prevent harmful or predatory AI interactions, particularly among vulnerable populations.
AI in Medicare Treatments
Another development drawing significant attention is a federal pilot program set to begin in 2026, in which AI will determine authorizations for Medicare treatments across six states. While supporters argue that AI can streamline the process and reduce inefficiencies, opponents see potential risks in relying on algorithms for complex medical decisions. Critics fear that algorithmic biases or errors could lead to wrongful denials of essential care.
Lawmakers and Experts React
Critics of the Medicare pilot have been vocal, with Rep. Greg Landsman (D-OH) referring to it as the “AI death panel.” The label underscores anxiety that AI-driven approvals might place elderly or vulnerable patients at risk. Even lawmakers and experts who see promise in the program worry that the fast pace of AI integration could outstrip ethical and regulatory safeguards.
Looking Ahead
Both the FTC inquiry and the Medicare pilot program underscore the critical debate on how to regulate AI responsibly while fostering innovation. As artificial intelligence becomes more deeply embedded in healthcare, consumer apps, and other key areas, policymakers, companies, and the public must weigh the promise of efficiency gains against the risk of unintended consequences. With the unanimous backing of the Commission, it is clear that AI oversight—and its intersection with public welfare—will remain a focal point for lawmakers and regulators alike.