By: Meenakshi Das, data equity consultant, trainer, and speaker

To the tech companies with AI in their plans and budgets, now or soon: this letter is for you.

Despite the daily flood of flashy announcements about new AI tools, I feel nervous about AI.

Perhaps that is because my entire career has been built on working closely with a wide variety of data points.

When those data points have “flaws” and “outliers,” I have seen algorithms handle them silently through optimization parameters. When those data points misrepresent or under-represent community needs, I have seen algorithms ignore those discrepancies and learn from them as core sources of truth.

As encouraging as it has been to watch my algorithms produce working output, I am terrified of the harmful consequences when AI is given complete control over choices about data. I can’t help asking: could those algorithms wipe out someone’s story? Perpetuate misrepresentation? Deny someone access to resources and justice?

The fact that the answer to those questions is not a definite, reassuring “no” makes me uncomfortable.

This letter is a call to action, not merely to encourage meaningful innovation but to do so with a deep sense of care, imagination, and responsibility toward the people and the planet. This message is a shared hope that as we progress, we do not lose sight of the values that make us human: empathy, integrity, and the relentless pursuit of equality and justice.

Responsible AI must be considered more than a buzzword or a box to check; it is a commitment to developing transparent, equitable, and accountable technology. It means we need an approach to design that considers the inclusion and representation of all stakeholders, particularly those likely to be easily ignored in technological advancements.

You are organizations with the resources to shift how AI is operationalized in this world. I want you to be better listeners first, attentive to each community’s ‘why’ and its needs, before those needs become part of your sales funnels.

Your actions around AI are not merely about the new features of a product you may develop. Your actions are also an invitation to the communities around you to decide whether or not they can trust you.

In other words, I want you to commit, collectively, as part of your brand, to non-performative, continuous actions toward responsible AI. This means you need to:

  • Invest in diverse teams: Build AI development teams with diverse backgrounds, perspectives, and experiences. This diversity should span race, gender, age, ethnicity, sexual orientation, disability, and more. This is not about hitting annual goals and benchmark metrics; it’s about creating space for different viewpoints, which ultimately shape the very ideas of inclusion and belonging in these AI systems.
  • Center inclusive design in development: Incorporate inclusive design principles from the start of AI project planning. This includes engaging with diverse stakeholders (including representation from the different communities in the data) throughout the development process to understand and mitigate biases in AI models.
  • Continuously push for bias detection and management: Implement rigorous testing for biases in data sets and AI algorithms. This should be an ongoing process, not a one-time check. Employ tools and methodologies designed to identify and mitigate bias, and be prepared to adjust or redesign systems based on what you find (a minimal sketch of one such check follows this list).
  • Own your part in building transparency and accountability: Foster a culture of openness about how AI systems make decisions and who is responsible for them. This includes clearly documenting AI development processes, data sources, and decision-making criteria, and owning accountability for addressing and rectifying biases or errors when they occur.
  • Invest in developing ethical AI guidelines and training: Develop and enforce ethical AI guidelines that outline your organization’s commitment to equitable, responsible AI use. Provide regular training for all employees and customers — not just those in technical roles — on the importance of AI equity and how to achieve it.
  • Invest in regular tech-equity audits: Conduct regular equity audits of algorithms to assess their impact on different communities. When possible, these audits should be performed by independent parties to ensure objectivity and should lead to actionable recommendations for improvement. The outcome here is not merely time saved or dollars added but a deeper understanding of how these AI systems create benefits and harms.
  • Create space for org-wide support for Responsible AI: Ensure that there is org-wide support for AI equity initiatives, including dedicated resources (time, budget, personnel) and leadership backing. AI equity should be integrated into the organization’s core values and operational strategies. That means the entire staff needs to be aware, included, and involved in this endeavor.
  • Build meaningful collaborations outside your organization: Collaborate with other organizations, academia, and non-profits to share best practices, learn from each other’s experiences, and jointly work on initiatives that promote AI equity.
  • Build meaningful diversity if and when hosting AI events: Create space for diverse voices in events, with diversity spanning race, ethnicity, ability, social background, titles, roles, work experience, and more.
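
To make “ongoing, not one-time” bias testing concrete, here is a minimal sketch of one such check: comparing selection rates (the share of positive decisions) across groups in an AI system’s logged output. The column names (‘group’ and ‘predicted_approved’) and the 0.8 threshold, borrowed from the common “four-fifths” rule of thumb, are illustrative assumptions, not a standard; a real audit combines many metrics with community input.

```python
# Minimal sketch of one automated bias check: compare selection rates
# across demographic groups in a model's decisions. Column names and
# the four-fifths threshold are illustrative assumptions, not a standard.
import pandas as pd

def selection_rate_report(df: pd.DataFrame,
                          group_col: str = "group",
                          outcome_col: str = "predicted_approved",
                          min_ratio: float = 0.8) -> pd.DataFrame:
    """Flag groups whose positive-outcome rate falls below `min_ratio`
    of the best-served group's rate (the 'four-fifths' rule of thumb)."""
    rates = df.groupby(group_col)[outcome_col].mean().rename("selection_rate")
    report = rates.to_frame()
    report["ratio_to_max"] = report["selection_rate"] / report["selection_rate"].max()
    report["flagged"] = report["ratio_to_max"] < min_ratio
    return report.sort_values("ratio_to_max")

# Hypothetical decisions logged by an AI system:
decisions = pd.DataFrame({
    "group":              ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "predicted_approved": [ 1,   1,   0,   1,   0,   0,   0,   1,   1 ],
})

# Re-running this on every model or data update (not once) is the point:
# a check like this belongs in CI, next to the accuracy tests.
print(selection_rate_report(decisions))
```

A check like this never settles the question of fairness on its own; its value is in making a disparity visible early and repeatedly, every time the model or its data changes.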

No one denies it: the path to responsible AI is neither straightforward nor easy. It demands an approach that bridges the gaps among technologists, ethicists, policymakers, and the broader public.

It requires us to rethink our success metrics, moving beyond short-term gains and market dominance to prioritize long-term societal well-being and sustainability. It challenges us to foster a culture of openness, where tough questions are welcomed, and critical reflection is encouraged.

As representatives of the technology sector, you have a unique opportunity — and ethical obligation — to shape the future in a way that leaves no one behind.

You have your work cut out for you: to listen closely to the communities around you and to imagine AI’s role in a better world.

As an innate optimist and someone who has spent significant time in tech companies like yours, I believe this kind of commitment is possible. I also believe in the magic your commitment to true AI equity can create for our communities.

To a future designed with your care, imagination, and responsibility.

Meenakshi Das

Meenakshi (Meena) Das (she/her) is the CEO, consultant, and facilitator of two practices, NamasteData and Data Is For Everyone. Both practices work at the intersection of data, AI, and equity. Meena is a specialist in inclusive data collection techniques and in guiding communities as they move towards human-centric AI. You can learn more about her work through her two newsletters, ‘Dear Human’ and ‘Data Uncollected,’ or connect with her directly on LinkedIn: http://www.linkedin.com/in/meenadas. Currently, NamasteData is conducting sector-wide research on data and AI equity trends. Participate in this anonymous and confidential data collection to learn where we are collectively as a sector on these topics.