We’re joining the Artificial Intelligence Safety Institute Consortium

LA Tech4Good will be one of more than 200 AI stakeholders helping to advance the development and deployment of safe, trustworthy AI under a new U.S. Government program, the Artificial Intelligence Safety Institute Consortium, run by the National Institute of Standards and Technology (NIST).

We welcome the Artificial Intelligence Safety Institute Consortium’s efforts to ground responsible AI in a human-centered context and a social lens. The consortium’s inclusion of members across industries will enable the development of a collective vision that transcends “just talk” to create practical, holistic guidelines. LA Tech4Good teaches equitable and ethical data skills to empower individuals and communities to use data responsibly. Our education and advocacy efforts for data justice align with NIST’s efforts to guide the design of safe and trustworthy data and algorithmic systems. We look forward to participating in the consortium and its work.

Will you help fund our membership?

The membership fee for the consortium is $1,000 per year. Will you donate towards that cost to support our participation? Y’all know we’re scrappy and that we will be very grateful for your financial participation.


About our participation

– Eva Sachar, project lead

We are thrilled to join NIST’s U.S. Artificial Intelligence Safety Institute Consortium (AISIC) and contribute our three years of experience teaching our “Leading Equitable Data Practices” workshops to this shared effort. We look forward to being in the room with other nonprofits, government agencies, academics, and corporations to build a collective vision for the design and use of algorithmic tools that positively serve the American public.

What I’m looking forward to

1. Creating frameworks for the design and execution of algorithmic systems that start with ethics, equity, and human-centered principles

Our organization curates tools from Responsible AI, Data Ethics, and Design Justice and puts them into the hands of data practitioners and leaders. Among them, we promote Datasheets for Datasets, the Data Nutrition Project, Model Cards, the Mitigating Bias in AI Playbook from Berkeley Haas, the Feminist Data Manifest-No, and We All Count. We believe that building on these tried-and-tested tools can empower more inclusive data and AI practices.

2. Creating best practices for building human-in-the-loop AI systems

As data and algorithmic tools permeate every sphere of our lives, we need to create norms that appreciate data in its full social context and treat these technologies as sociotechnical systems.

Data is often about people, but the push to optimize efficiency and maximize performance makes this easy to forget. Designing systems requires that we consider human, social, and organizational factors in our development of technologies. We hope to work within the consortium to create best practices for building human-in-the-loop AI systems, prioritizing equitable design practices, involving diverse stakeholders in every stage of the development process, and taking the time to ask the right questions and identify blind spots.

3. Improving data and technology education

As our reliance on technology demands that more and more individuals collect and use data to make and inform decisions, we need new ways to train people to collect and use data alongside their own subject matter expertise. Our work aims to bring data equity to the entire data life cycle, with an intersectional lens.

At the same time, it is also crucial that we educate workers already in technical roles to think of their data and technology as a sociotechnical system and learn more about the context of their work and the consequences of their decisions. We need to improve data and technology education to build more responsible technology that results in positive, intended outcomes.


About the Artificial Intelligence Safety Institute Consortium

LA Tech4Good is collaborating with the National Institute of Standards and Technology (NIST) in the Artificial Intelligence Safety Institute Consortium to develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world. This will help ready the U.S. to address the capabilities of the next generation of AI models or systems, from frontier models to new applications and approaches, with appropriate risk management strategies. NIST does not evaluate commercial products under this Consortium and does not endorse any product or service used.

Additional information on this Consortium can be found here.
