Top Nonprofit AI Policies 2025: Analysis and Trends


The nonprofit sector is at a crossroads with artificial intelligence. While 82% of nonprofits now use AI, according to the 2024 Nonprofit Standards Benchmarking Survey, less than 10% have formal policies governing its use. This massive gap between adoption and governance is creating both opportunities and significant risks that organizations can’t afford to ignore.

We analyzed major nonprofit organizations to understand who’s leading the charge in AI policy development and what trends are shaping the sector. The results reveal a landscape where some organizations are setting strong examples while others are still figuring out their approach.

The Policy Leaders Setting the Standard: Top 5 Nonprofit AI Policies

Several major nonprofits have stepped up to establish comprehensive AI policies, and their approaches offer valuable lessons for the rest of the sector.

1. United Way Worldwide

United Way Worldwide demonstrates a high-level commitment to AI ethics through CEO Angela Williams’ participation on the AI Ethics Council, a joint initiative led by Sam Altman and OpenAI to promote ethical AI use. While they haven’t published a standalone AI policy document, their active participation in high-level ethics forums suggests a structured internal approach to AI governance. Their TechConnect United program, part of United Way’s digital equity initiatives, leverages technology to bridge digital access gaps.

2. American Red Cross

The American Red Cross has taken a more operational approach, establishing a dedicated innovation team that uses AI for supply-demand forecasting and conversational chatbots. They’ve even posted volunteer opportunities for “Generative AI Engineers” to support disaster and blood donation systems, demonstrating structural AI adoption. Their AI applications include “Clara” chatbots, automated damage assessment through AWS and computer vision partnerships, and satellite-imaging AI for mapping vulnerable regions with Intel.

3. International Committee of the Red Cross

The International Committee of the Red Cross (ICRC) introduced an AI principles framework in late 2024, focusing on ethics, neutrality, and humanitarian protection. This framework includes partnerships with organizations like EPFL to deploy AI responsibly, serving as an exemplar for the wider sector. The ICRC, the recognized guardian of the Geneva Conventions and international humanitarian law, also leads the increasingly urgent global discussion on the ethics and legality of AI in humanitarian and conflict settings.

4. Oxfam International

Oxfam International has articulated a comprehensive, rights-based approach to AI governance, one that is rooted in fairness, accountability, and transparency, in its January 2025 submission to the UN Working Group on Business and Human Rights. By grounding AI safeguards in the UN Guiding Principles and its own humanitarian mission, Oxfam offers a strong model for how international nonprofits can balance technological innovation with ethical responsibility across diverse cultural contexts.

5. Save the Children

Save the Children has focused their AI guidelines specifically on child protection and privacy. Their approach ensures AI applications enhance educational and health outcomes for children without compromising safety or privacy, showing how organizations can tailor policies to their unique mission requirements.

Many Organizations Are Still Finding Their Stance

Not every major nonprofit has formal AI policies yet, even when they’re actively using these technologies. Habitat for Humanity is exploring AI applications in project management and volunteer coordination, but hasn’t published a formal AI policy. This represents a common scenario where organizations are experimenting with AI tools while still developing governance frameworks.

The World Wildlife Fund presents an interesting case study. They actively use AI for wildlife conservation, including monitoring endangered species and combating poaching. However, they haven’t published specific AI governance guidelines, highlighting how mission-critical AI applications can develop ahead of formal policy structures.

What the Data Tells Us About Nonprofit AI Adoption

The 2024 Nonprofit Standards Benchmarking Survey reveals some fascinating trends about how nonprofits are actually using AI:

  • Financial management dominates AI adoption, with organizations using AI for forecasting, budgeting, and payment automation. This makes sense because financial tasks offer immediate, measurable benefits while building organizational confidence with AI technologies.
  • Nonprofits aren’t stopping at administrative tasks, though: 36% now use AI for program optimization and impact assessment, showing that organizations are moving beyond back-office applications to apply AI to core mission work. This shift toward program-focused AI use marks a significant change in how nonprofits think about these technologies.
  • The Brookings Institution reports that nonprofits are increasingly adopting AI tools for strategic decision-making processes, using AI technologies to analyze large datasets for informed decisions about resource allocation and future initiatives. This strategic application of AI suggests the sector is maturing in its approach to these technologies.
  • Perhaps most interesting is how AI is revolutionizing philanthropic efforts. Organizations are using AI to gain insights into donor behavior and preferences, allowing them to tailor their approaches and maximize fundraising impact. This data-driven approach to philanthropy represents a fundamental shift in how nonprofits engage with supporters.

The Barriers Holding Nonprofits Back

Despite widespread adoption, significant barriers remain. The primary obstacles include lack of knowledge, infrastructure, and funding. These aren’t surprising challenges, but they’re preventing many organizations from effectively integrating AI technologies into their operations.

More concerning is that approximately one-third of survey respondents cite ethical concerns and employee resistance as significant barriers. This highlights a critical need for clear ethical guidelines and change management strategies to address concerns about AI-based job displacement and ensure responsible AI use.

The Brookings Institution also identifies significant regional disparities in AI readiness, with metropolitan areas like the Bay Area leading in AI adoption. This geographic divide creates unequal access to AI technologies across the nonprofit sector, potentially exacerbating existing inequalities between well-resourced urban organizations and smaller regional nonprofits.

Emerging Ethical Conversations in Nonprofit AI Use

The nonprofit sector is grappling with complex ethical questions that go beyond basic AI governance. Organizations are increasingly focusing on tailoring AI governance frameworks to protect vulnerable populations, designing AI with intentional boundaries to ensure tools don’t inadvertently harm those they aim to help.

Health justice has become a particular focus, with nonprofits exploring AI’s role in promoting equitable healthcare access. This involves using AI to analyze health data and improve care delivery in underserved communities, while confronting the risk that biased training data or unequal digital access could misdiagnose conditions, divert resources toward majority populations, or automate triage rules that deprioritize those already marginalized. By flagging and auditing these pitfalls up front, and by pairing algorithmic outputs with human oversight, organizations aim to ensure technological advances narrow, rather than widen, existing health gaps.

Some organizations are also leveraging AI to foster inclusive governance structures and promote equitable participation in democratic processes. They use AI to enhance transparency, accountability, and representation in decision making so that diverse voices are heard and valued. At the same time, algorithms trained on skewed data or deployed without oversight can automate exclusion, amplify misinformation, or concentrate power. To guard against these risks, nonprofits audit models for bias, disclose how AI informs civic decisions, and keep humans involved whenever outcomes affect community participation.

Public Sector AI Leadership: What’s Next

Several leading organizations are shaping the next phase of AI governance in the nonprofit and public sectors, offering practical guidance and research to help mission-driven groups adopt these technologies responsibly.

Independent Sector and TechSoup continue providing valuable guidance through white papers and reports on AI adoption best practices. Their resources serve as essential guides for nonprofits navigating AI implementation complexities, offering practical advice grounded in sector-specific experience.

The Pew Research Center emphasizes the growing importance of collaborative frameworks involving academic institutions, government agencies, and other organizations. These multi-stakeholder partnerships aim to establish standardized ethical guidelines and best practices, suggesting the sector is moving toward more coordinated approaches to AI governance.

Recent Brookings Institution research highlights AI’s potential to drive economic growth while addressing social challenges, emphasizing the importance of aligning AI technologies with organizational missions and societal values. This research suggests successful AI adoption requires balancing innovation with social responsibility.

The 2025 landscape reveals that successful AI adoption in nonprofits requires more than just implementing new technologies. Organizations need to balance innovation with ethical responsibility, develop formal policies that address their specific mission requirements, and learn from sector leaders who have established comprehensive frameworks. The gap between AI adoption and governance presents both a challenge and an opportunity for organizations willing to invest in thoughtful, responsible AI implementation.

Ready to Build Your Nonprofit’s AI Policy and Ethical AI Capacity?

Whole Whale helps nonprofits develop formal AI policies, train teams on ethical AI practices, and integrate AI tools responsibly through our dedicated AI Capacity & Digital Training service and AI Accelerator package. If your organization is ready to create an AI governance framework, implement ethical AI workflows, or simply needs guidance navigating AI adoption, we’re here to help. Learn more about Whole Whale’s AI services and contact us today to get started.