
Bonterra’s ethical AI values

Ben Miller
SVP, data science
September 09, 2024

At Bonterra, we believe artificial intelligence (AI) must be rooted in transparency, authenticity, and trust, so that our customers can make the best use of this technology to optimize their organization’s impact. To match ambition with action, we’ve opted to publish our ethical AI values publicly and urge others in the space to do the same. Our understanding of AI is imperfect, but we believe industry-wide transparency will help us stay accountable, constantly improve, and strive for a better world together. 

We recognize the advancement of AI is moving quickly and its use has far-reaching implications. Therefore, we set out to ensure our clients understand what we are doing with their data, how we are protecting their privacy, and what considerations we are taking as we continue to unlock innovation within the social good ecosystem.  

We’re announcing our ethical AI values for adoption, outlined below.

Important definitions 

As we walk through our ethical AI values, the following definitions will be used: 

  • AI system: An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy. 

  • AI system lifecycle: AI system lifecycle phases involve:

    1. Design, data, and model (which is a context-dependent sequence encompassing planning and design, data collection and processing, as well as model building) 
    2. Verification and validation 
    3. Deployment 
    4. Operation and monitoring 

    These phases often take place in an iterative manner and are not necessarily sequential. The decision to retire an AI system from operation may occur at any point during the operation and monitoring phase. 

  • AI knowledge: AI knowledge refers to the skills and resources, such as data, code, algorithms, models, research, know-how, training programs, governance, processes, and best practices, required to understand and participate in the AI system lifecycle. 

  • AI actors: AI actors play an active role in the AI system lifecycle, including organizations and individuals that deploy or operate AI. 

  • Stakeholders: Stakeholders encompass all organizations and individuals involved in, or affected by, AI systems, directly or indirectly. AI actors are a subset of stakeholders.1 

Our ethical AI values

Bonterra worked with a group of colleagues who serve the nonprofit sector, as well as Fundraising.AI, which published an AI framework.2 Building from that document and incorporating other existing policies published by the Organization for Economic Co-operation and Development and the National Institute of Standards and Technology, Bonterra is releasing our ethical AI values as follows:1,3 

  1. Privacy and security: As an AI actor, Bonterra will protect personal and sensitive data by following robust security standards within our respective roles, maintaining compliance with relevant data protection regulations, and respecting the privacy of donors, beneficiaries, and stakeholders. 

    Bonterra believes that our clients’ data is theirs, and we only use that data as described in our privacy policy. 

  2. Inclusiveness: Bonterra actively addresses biases and disparities throughout the entire AI system lifecycle, engaging diverse stakeholders and drawing on diverse datasets as we design, monitor, and evaluate our AI systems. Our ethical AI values incorporate ongoing assessments of fairness, including statistical parity and equal opportunity, to identify algorithmic unfairness. When issues are found, we will employ appropriate mitigation strategies, such as reweighting and prejudice removal techniques. We are committed to transparency in our AI practices, educating our team members at the intersection of DEIB and AI, and regularly monitoring our systems for emerging biases. 
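To make the two fairness assessments named above concrete, the sketch below computes a statistical parity difference and an equal opportunity difference for binary predictions split by a sensitive attribute. This is a minimal, hypothetical illustration of the standard definitions, not Bonterra's implementation; the function names and the 0/1 label convention are assumptions for the example.

```python
from typing import Sequence


def _rate(values: Sequence[int]) -> float:
    """Mean of a list of 0/1 values."""
    return sum(values) / len(values)


def statistical_parity_diff(y_pred, group, a, b) -> float:
    """Difference in positive-prediction rates between groups a and b.

    A value near 0 means both groups receive positive predictions
    at similar rates.
    """
    return (_rate([p for p, g in zip(y_pred, group) if g == a])
            - _rate([p for p, g in zip(y_pred, group) if g == b]))


def equal_opportunity_diff(y_true, y_pred, group, a, b) -> float:
    """Difference in true-positive rates (recall) between groups a and b,
    computed only over examples whose true label is positive."""
    def tpr(g):
        preds = [p for t, p, gr in zip(y_true, y_pred, group)
                 if gr == g and t == 1]
        return _rate(preds)
    return tpr(a) - tpr(b)
```

A large gap on either metric would flag the model for the mitigation strategies mentioned above, such as reweighting.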

  3. Accountability: Bonterra AI actors will be accountable for the AI applications we develop, deploy, or utilize, ensuring that they align with our values and that our AI systems are:

    1. Verifiable and replicable
    2. Auditable
    3. Appealable
    4. Remediable
    5. Liable2
  4. Explainable and interpretable: Explainability refers to a representation of the mechanisms underlying AI systems’ operation, whereas interpretability refers to the meaning of AI systems’ output in the context of their designed functional purposes. Together, explainability and interpretability assist those operating or overseeing an AI system, as well as users of an AI system, to gain deeper insights into the functionality and trustworthiness of the system, including its outputs. 

    The underlying assumption is that perceptions of negative risk stem from a lack of ability to make sense of or contextualize system output appropriately. Explainable and interpretable AI systems offer information that will help end-users understand the purposes and potential impact of an AI system. Risk from lack of explainability may be managed by describing how AI systems function, with descriptions tailored to individual differences such as the user’s role, knowledge, and skill level. 

    Explainable systems can be debugged and monitored more easily, and they lend themselves to more thorough documentation, audit, and governance. 

    Risks to interpretability often can be addressed by communicating a description of why an AI system made a particular prediction or recommendation. Transparency, explainability, and interpretability are distinct characteristics that support each other. Transparency can answer the question of “what happened” in the system. Explainability can answer the question of “how” a decision was made in the system. Interpretability can answer the question of “why” a decision was made by the system and its meaning or context to the user.3 

  5. Legal compliance: We commit to being aware of, and abiding by, applicable laws, regulations, and best practices concerning AI development and operations pertaining to fundraising, casework, CSR, data protection, and AI systems.2 

  6. Social impact: We strive to maximize the positive social impact of AI while minimizing any potential harm by focusing on the needs of diverse beneficiaries and communities, ensuring our solutions are inclusive and equitable across various demographic groups.2 

  7. Human control of technology: Our requirements include:

    1. Human review of automated decisions
    2. Ability to opt out of automated decisions
    3. Control over AI systems
  8. Valid and reliable: Validation is the “confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled.”4 Deploying AI systems that are inaccurate, unreliable, or poorly generalized to data and settings beyond their training creates and increases negative AI risks and reduces trustworthiness. 

    Reliability is defined as the “ability of an item to perform as required, without failure, for a given time interval, under given conditions.”5 Reliability is a goal for overall correctness of AI system operation under the conditions of expected use and over a given period of time, including the entire lifetime of the system. 

    Accuracy and robustness contribute to the validity and trustworthiness of AI systems and can be in tension with one another in AI systems.3 

    Accuracy is defined as the “closeness of results of observations, computations, or estimates to the true values or the values accepted as being true.”5 Measures of accuracy should consider computation-centric measures (e.g., false positive and false negative rates) and human-AI teaming, and should demonstrate external validity (generalizability beyond the training conditions). Accuracy measurements should always be paired with clearly defined, realistic test sets that are representative of the conditions of expected use, along with details about test methodology; both should be included in associated documentation. Accuracy measurements may include disaggregation of results for different data segments. 

    Robustness or generalizability is defined as the “ability of a system to maintain its level of performance under a variety of circumstances.”5 Robustness is a goal for appropriate system functionality in a broad set of conditions and circumstances, including uses of AI systems not initially anticipated. Robustness requires not only that the system performs exactly as it does under expected uses, but also that it should perform in ways that minimize potential harm to people if it is operating in an unexpected setting. 

    Validity and reliability for deployed AI systems are often assessed by ongoing testing or monitoring that confirms a system is performing as intended. 

    Measurement of validity, accuracy, robustness, and reliability contribute to trustworthiness and should take into consideration that certain types of failures can cause greater harm. AI risk management efforts should prioritize the minimization of potential negative impacts and may need to include human intervention in cases where the AI system cannot detect or correct errors. 
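The disaggregation of accuracy measurements described above can be sketched as a small helper that reports accuracy, false positive rate, and false negative rate per data segment of a test set. This is a hypothetical illustration, not drawn from the NIST framework or any Bonterra tooling; the segment labels and 0/1 label convention are assumptions for the example.

```python
from collections import defaultdict


def disaggregated_rates(y_true, y_pred, segment):
    """Per-segment accuracy, false positive rate (FPR), and false
    negative rate (FNR) for binary 0/1 labels.

    'segment' is any grouping of the test set (e.g., region or
    program type); rates are None when a segment has no negatives
    or no positives to measure against.
    """
    buckets = defaultdict(list)
    for t, p, s in zip(y_true, y_pred, segment):
        buckets[s].append((t, p))

    report = {}
    for s, pairs in buckets.items():
        acc = sum(t == p for t, p in pairs) / len(pairs)
        neg = [p for t, p in pairs if t == 0]  # true negatives' predictions
        pos = [p for t, p in pairs if t == 1]  # true positives' predictions
        fpr = sum(neg) / len(neg) if neg else None
        fnr = sum(1 - p for p in pos) / len(pos) if pos else None
        report[s] = {"accuracy": acc, "fpr": fpr, "fnr": fnr}
    return report
```

Comparing these rates across segments surfaces the kind of unevenly distributed failures that, per the paragraph above, warrant prioritized mitigation or human intervention.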

  9. Use case review: Bonterra has established an AI Committee that reviews and approves all use cases, including sensitive use cases. We have also established an intake form and review process to ensure that data security and the risks associated with any AI system are evaluated prior to use. 

At Bonterra, we believe AI will help us achieve the greatest good.

Sources 

  1. OECD, "Recommendation of the Council on Artificial Intelligence," OECD Legal Instruments, OECD-LEGAL-0449, accessed July 16, 2024, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449. 

  2. Fundraising.AI, "Framework," accessed July 16, 2024, https://fundraising.ai/framework/. 

  3. National Institute of Standards and Technology, "Artificial Intelligence Risk Management Framework (AI RMF 1.0)," NIST AI 100-1, accessed July 16, 2024, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf. 

  4. International Organization for Standardization, "ISO 9000:2015 Quality management systems - Fundamentals and vocabulary," accessed July 16, 2024, https://www.iso.org/standard/45481.html. 

  5. International Organization for Standardization, "ISO/IEC TS 5723:2022 Trustworthiness - Vocabulary," accessed July 16, 2024, https://www.iso.org/standard/81608.html. 
