The US Federal Trade Commission (FTC) has begun an enquiry into AI chatbots designed to act as companions. These tools, which use generative AI to simulate conversational intimacy, are increasingly promoted as friends, coaches, or confidants. Dario Betti, CEO of the Mobile Ecosystem Forum (MEF), discusses the potential benefits and weighs the concerns raised by the industry players developing these chatbots…
There is concern that these systems may have too much influence on vulnerable users, particularly children and teens. The Commission has issued information requests to seven major companies: Alphabet, Character Technologies, Instagram, Meta Platforms, OpenAI, Snap, and X.AI.
Enquiry into AI chatbot companions: what are the main concerns?
Protecting younger users.
Essentially, the Commission wants to understand how these companies design, monitor, and monetise their products, and what safeguards they have implemented to protect younger users. Regulators also want clarity on whether the companies disclose risks to users and parents, how they restrict or manage access by minors, and to what extent they comply with the Children's Online Privacy Protection Act.
Another focus is monetisation.
By asking companies to explain how they profit from user engagement, the Commission is signalling concern that business incentives may be driving longer and potentially more manipulative interactions with young audiences.
Protecting children’s rights.
Advocacy groups concerned with children's rights and online safety have largely welcomed the enquiry. Many see companion chatbots as posing greater risks than traditional social media, since they invite more personal disclosures and can simulate an emotional bond that may not be in the user's interest.
Industry expresses concern over regulation of AI chatbot companions
At the same time, some industry voices are worried that regulatory pressure might slow an area of development with potential social value. Companion chatbots are being explored not only for entertainment but also for education, eldercare, and mental health support. From their perspective, the risk is that heavy regulation could discourage new entrants and leave only the largest firms with the resources to comply. Market analysts, however, suggest that regulation could ultimately increase trust in AI services, improving adoption where strong protections are demonstrated.
The broader mobile ecosystem
The enquiry carries implications that extend beyond the companies directly named. Companion AIs are distributed through the same infrastructure of app stores, messaging channels, and devices that underpin other parts of the digital economy. This means the expectations around accountability will eventually touch operators, integrators, vendors, and developers. These actors may not design chatbots themselves, but they play a central role in enabling access and monetisation.
Trust and compliance, therefore, will increasingly function as differentiators. Mobile operators and messaging providers who can demonstrate robust measures to support safe deployment will be better positioned as partners for AI developers and regulators alike. The way ecosystem players handle billing, parental controls, or content classification could become a reference point when regulators assess whether services are responsibly structured.
The enquiry's attention to monetisation models also signals a shift. If regulators find that engagement-driven revenue creates risks for children, they may impose constraints on such models. That outcome could have knock-on effects for mobile monetisation practices more generally, particularly for companies that rely heavily on behavioural data or prolonged engagement as primary revenue drivers.
Lessons to learn
There are also parallels with existing messaging regulation. Digital ecosystems have already had to grapple with issues like grey route traffic, fraud prevention, and the need to establish clear trust frameworks. Those precedents may provide useful lessons for AI companion models. If regulators decide that companion characters or certain conversation contexts are higher risk, operators and vendors may need to build systems that flag or restrict such traffic, similar to how mobile players have adapted to fraud detection requirements.
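To make that parallel concrete, the sketch below shows what such a flagging layer might look like in practice. It is a minimal, hypothetical Python example: the age bands, topic tags, thresholds, and routing actions are illustrative assumptions, not anything prescribed by the FTC, COPPA, or existing messaging trust frameworks.

```python
# Hypothetical sketch of a rule-based filter an operator or vendor might
# build to flag higher-risk companion-AI sessions, loosely modelled on
# existing messaging fraud-detection pipelines. All names, categories,
# and thresholds below are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    ALLOW = "allow"
    FLAG = "flag_for_review"
    RESTRICT = "restrict"


@dataclass
class SessionSignal:
    user_age_band: str    # e.g. "under_13", "13_17", "adult" (assumed bands)
    topic_tags: set[str]  # tags produced upstream by a content classifier
    session_minutes: int  # running length of the conversation


# Assumed higher-risk topic tags; a real deployment would derive these
# from regulatory guidance and its own trust framework.
HIGH_RISK_TAGS = {"self_harm", "romance", "financial_advice"}


def classify_session(signal: SessionSignal) -> Action:
    """Return a routing decision for a companion-chat session."""
    risky = bool(signal.topic_tags & HIGH_RISK_TAGS)

    # Minors in high-risk conversation contexts are restricted outright.
    if signal.user_age_band != "adult" and risky:
        return Action.RESTRICT

    # Unusually long sessions with minors are flagged for review,
    # reflecting the enquiry's concern about engagement-driven design.
    if signal.user_age_band != "adult" and signal.session_minutes > 60:
        return Action.FLAG

    return Action.FLAG if risky else Action.ALLOW


if __name__ == "__main__":
    example = SessionSignal("13_17", {"romance"}, 20)
    print(classify_session(example).value)  # -> restrict
```

The design choice mirrors how messaging players handle grey route and fraud traffic: cheap, explainable rules at the edge that route ambiguous cases to review rather than attempting a definitive judgement inline.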
There is also a wider responsibility to frame this discussion constructively. Companion AIs have the potential to deliver positive social value, but only if developed with foresight and adequate safeguards. Finding the balance between innovation and protection is not simply the job of regulators; it requires active engagement from across the ecosystem. Industry bodies like MEF can play a role in convening stakeholders, promoting principles such as consent-by-design, transparency, and authentication, and ensuring that mobile operators, messaging providers, and developers align around common standards.
The development of AI companions is not just about software. It is about the systems, partnerships, and rules that allow innovation to deliver benefits without undermining user trust. The FTC's enquiry is a reminder that ecosystem players must think proactively about their role in that balance.
ABOUT THE AUTHOR
Dario Betti is CEO of MEF (Mobile Ecosystem Forum), a global trade body established in 2000 and headquartered in the UK, with members across the world. As the voice of the mobile ecosystem, it focuses on cross-industry best practices, anti-fraud work, and monetisation. The Forum, which celebrates its 25th anniversary in 2025, provides its members with global and cross-sector platforms for networking, collaboration, and advancing industry solutions.