Inside Philanthropy


Philanthropy Can — and Must — Protect Human Connection in the Age of AI

Michelle Barsa, Guest Contributor | September 11, 2025

OpenAI CEO Sam Altman. Credit: jamesonwu1972/Shutterstock

In late August, two parents filed a wrongful death lawsuit against OpenAI over the death of their son, 16-year-old Adam Raine. He spent months confiding in ChatGPT, seeking empathy, validation and guidance. The chatbot gave him validation — and when Adam asked for advice on his plan to take his own life, ChatGPT obliged.

His story is unfortunately not unique. Last fall, the mother of 14-year-old Sewell Setzer sued Character.ai after her son took his life, alleging that the chatbot encouraged his suicide. The same company was also sued by the parents of an 11-year-old and a 17-year-old after their children were exposed to instructions for self-harm, hypersexualized interactions and encouragement to commit violence.

These families are united in demanding accountability. But their grief also highlights a broader reality: While billions of dollars are flowing into the AI companionship market, less than 1% of that investment goes toward accountability, oversight or research. 

That gap is where philanthropy must step in.

If the “attention economy” helped define the last two decades of tech, AI companions are ushering in an “intimacy economy.” Instead of aiming for our clicks and views, platforms like Replika, Snapchat’s My AI, and Character.ai have capitalized on the universal need for human connection, profiting by emulating closeness and empathy. They remember your name, ask how your day went and, in some cases, step in as your girlfriend, boyfriend or most trusted confidant. But these services are not designed with human wellbeing at the center. They are built by for-profit platforms whose incentives are engagement and subscription revenue, so they are built to flatter.

This is no longer a fringe experiment. The AI companion market is projected to reach $36.8 billion this year. And yet the overwhelming majority of that capital is directed toward scaling and profit — not toward safety, oversight or safeguards to prevent further harm to the children and vulnerable people turning to these bots.

Philanthropy has seen this story before — and helped work toward solutions. From tobacco and climate change mitigation to social media’s harms, it has often been philanthropy that seeded the research, advocacy and public pressure needed to check profit-driven models and protect the public good. The same must occur in the age of AI.

At their most extreme, AI companions can contribute to catastrophic outcomes, including self-harm and suicide. More quietly, they can reshape how we relate to one another. AI companions offer frictionless, always-available “relationships” that provide immediate validation. Over time, they risk hollowing out our tolerance for the conflicts, ruptures and repairs that make human relationships resilient.

Belonging is built in the messiness of human interaction — from disagreements that force us to consider other perspectives to everyday exchanges that bond us with our neighbors and communities. At Omidyar Network, our work on social connection with partners like Noēsis Collaborative and The Rithm Project, among others, has revealed a key tension: Belonging optimizes for the collective, while technology optimizes for the individual. As AI companions draw us inward into one-on-one relationships with machines, they risk draining energy from these shared ties. 

This is not just a question for engineers or regulators, but for philanthropy as well: How do we safeguard the human connections that make belonging possible in an era of machine companionship? Funders have both the tools and the responsibility to intervene.

Specifically, philanthropy can seed and support new norms, ethical standards and legislated guidelines for when, how and for whom AI companionship is appropriate — especially for children. Just as funders once supported anti-tobacco campaigns or data privacy standards, they can now lay the groundwork for norms and protections that define responsible AI. 

Philanthropies can also build and support new tools and independent research to better understand the broad spectrum of possible harms, from direct risks like suicide to more subtle effects on empathy and belonging — and hold platforms accountable. And philanthropies can elevate stories like Adam’s — and so many more — to shift how we understand tech’s role in our emotional lives, how we view our collective power to shape how that tech is governed, and how we might act now to protect the social ties that let each of us thrive. 

There is broad agreement that generative AI will reshape our future. What remains unknown is how. That answer cannot be left to engineers or investors alone. It should be shaped by all of us — by the norms we set, the policies we demand and the values we refuse to compromise. 

Philanthropy has a critical role to play in making this happen. If we get this right, we can prevent a future where Silicon Valley profit dictates human intimacy — and preserve the skills, habits and social ties that enable all of us to belong.

Michelle Barsa is a Principal at Omidyar Network.
