Methodology

How we define and identify ethical companies.

1. We prioritize early-stage startups founded after 2015.

Older companies tend to add ethical AI principles after the fact; the companies in our database and market map have genuinely innovated in this space. We make exceptions for companies that are truly ahead of their time and do great work within the Ethical AI ecosystem.

2. Cybersecurity, climate tech, and related fields are important, but outside of our scope.

Our focus is on companies that address society-level ethical issues such as underrepresentation, discrimination, and accountability. Climate, for example, is much more than a society-level issue.

3. We classify companies based on their main line of business.

Our categories are designed to represent several vertical and horizontal business models (see the ecosystem below). Some companies might have products that belong in multiple categories.

4. Companies must have a specialized concentration on fairness / bias mitigation.

For example, there are many model management and quality assurance platforms, but those without specific features for bias detection / monitoring fall outside our scope. AI quality assurance is indirectly ethical in its own right, of course, but a platform must explicitly recognize fairness / bias concerns and articulate how and why it helps address them.
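To make the distinction concrete, the sketch below computes one simple group-fairness signal (per-group selection rates and their gap) over hypothetical model decisions. It is a minimal illustration in plain pandas with made-up column names, not any particular vendor's feature set:

```python
import pandas as pd

# Hypothetical model decisions plus a protected attribute (made-up data).
preds = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# Selection rate: share of positive decisions within each group.
selection_rates = preds.groupby("group")["approved"].mean()

# Demographic parity difference: gap between the best- and worst-treated group.
dp_difference = selection_rates.max() - selection_rates.min()

print(selection_rates)
print(f"Demographic parity difference: {dp_difference:.2f}")
```

A platform that surfaces and tracks signals like this over time, sliced by protected attributes, is in scope; one that only tracks generic accuracy or latency is not.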

The Ecosystem

We believe there are five types of companies within the Ethical AI startup ecosystem.

Data for AI

Requirement: Companies that provide specific services to maintain data privacy, detect data bias early, or offer alternative methods of data collection / generation that avoid amplifying bias later in the ML lifecycle. Many companies in this space are synthetic data or data anonymization specialists (a minimal illustration of the privacy side follows the examples below).

Examples: TripleBlind, Synthesis AI, Mostly AI, Gretel, Co:Census.
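As a rough illustration of the privacy-preserving side of this category (not any listed company's method), the sketch below pseudonymizes a direct identifier with a salted hash before the data moves downstream. Real products rely on stronger techniques such as differential privacy or fully synthetic records; the salt and column names here are hypothetical:

```python
import hashlib
import pandas as pd

def pseudonymize(value: str, salt: str) -> str:
    """Replace a direct identifier with a salted, irreversible hash."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

raw = pd.DataFrame({
    "email": ["a@example.com", "b@example.com"],
    "age":   [34, 29],
})

SALT = "rotate-me-regularly"  # hypothetical per-dataset secret
raw["email"] = raw["email"].map(lambda v: pseudonymize(v, SALT))
print(raw)
```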

ModelOps, Monitoring, & Observability

Requirement: ModelOps companies that provide specific tooling to monitor and detect prediction bias (however it may be defined in context). Usually self-described as "quality assurance for ML," these companies specialize in black-box explainability, continuous distribution monitoring, and multi-metric bias detection (see the drift-check sketch after the examples below). MLOps companies that provide only generic monitoring services are out of scope.

Examples: Fiddler AI, Arthur AI, Arize, TruEra.
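As a minimal sketch of what "continuous distribution monitoring" can look like in practice, the snippet below runs a two-sample Kolmogorov-Smirnov test from SciPy on simulated prediction scores; the data, threshold, and alerting policy are all hypothetical, and production systems layer many such checks per feature and per group:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)

# Reference scores captured at training time vs. a live serving window (simulated drift).
reference_scores = rng.normal(loc=0.45, scale=0.10, size=5_000)
live_scores = rng.normal(loc=0.55, scale=0.12, size=5_000)

# Two-sample Kolmogorov-Smirnov test: has the score distribution shifted?
statistic, p_value = ks_2samp(reference_scores, live_scores)

ALERT_THRESHOLD = 0.01  # hypothetical alerting policy
if p_value < ALERT_THRESHOLD:
    print(f"Drift detected (KS statistic={statistic:.3f}, p={p_value:.2e})")
```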

AI Audits, Governance, Risk, & Compliance

Requirement: Specialist consulting firms or platforms that help establish accountability / governance, quantify risk (both business and model risk), and / or simplify compliance for AI systems.

Examples: BABL AI, ORCAA, Credo AI, EthicsGrade, Anch AI.

Targeted AI Solutions & Technologies

Requirement: AI companies that attempt to solve a particular ethical issue within a vertical, OR companies with specific technology applicable across a horizontal. Companies tackling the former can often be described as "a more ethical way to _____". Broader vertical subcategories include healthtech, fintech, and insurtech. Examples of horizontals are toxic content moderation and ethical facial recognition.

Examples: FairPlay AI, Flock Safety, Pave HR, Spectrum Labs, Zelros.

Open-Source Solutions

Requirement: Fully open-source solutions meant to provide easy access to ethical technologies and responsible AI. The companies behind these frameworks are not usually for-profit (though some are), but their open-source technology is usually a good approximation of the cutting edge of applied ethical AI research. Open-source tools play their own role within the startup ecosystem because they give non-specialist firms inexpensive tools they can adopt (a short usage sketch follows the examples below).

Examples: IBM AIF360, Fairlens, Pymetrics Audit AI, Deepchecks.
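To show what these frameworks offer in practice, here is a minimal sketch based on AIF360's documented BinaryLabelDataset / BinaryLabelDatasetMetric interface, computing two standard fairness metrics on a tiny, made-up hiring table; exact argument names may differ across library versions:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Tiny, made-up hiring dataset; 'sex' is the protected attribute (1 = privileged).
df = pd.DataFrame({
    "hired": [1, 1, 0, 1, 0, 0, 1, 0],
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.4, 0.7, 0.6, 0.3, 0.8, 0.2],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())
```

Checks like these are cheap to run, which is exactly why open-source tooling matters for non-specialist teams.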