
AI Organizations


Accenture:   Accenture Labs launched a research collaboration with leading thinkers on data ethics to help provide guidelines for security executives and data practitioners and enable development of robust ethical controls throughout data supply chains.


Access Now:   defends and extends the digital rights of users at risk around the world. By combining direct technical support, comprehensive policy engagement, global advocacy, grassroots grantmaking, and convenings such as RightsCon, Access Now fights for human rights in the digital age.


ACM:   Association for Computing Machinery


Ada Lovelace Institute:   mission is to ensure data and AI work for people and society.


ADAPT:   funded by Science Foundation Ireland, ADAPT focuses on developing next-generation digital technologies that transform how people communicate by helping to analyse, personalise and deliver digital data more effectively for businesses and individuals. ADAPT researchers are based in four leading universities: Trinity College Dublin, Dublin City University, University College Dublin and Dublin Institute of Technology.


Alan Turing Institute:   committed to using data science and AI technologies for everyone’s benefit, and to protect society against these technologies’ unintended consequences.


AlgorithmWatch:  non-profit research and advocacy organisation that evaluates and sheds light on algorithmic decision-making processes with social relevance, meaning they are used either to predict or prescribe human action or to make decisions automatically. Also maintains a shared inventory of AI-related principles.


AI4ALL:   working to make artificial intelligence more diverse and inclusive. Their education programs for underrepresented high school students are run in partnership with universities like Stanford, UC Berkeley, Carnegie Mellon, and Princeton. The programs increase access, awareness, and exposure to the field in a variety of ways including through hands-on technical education and connections to role models and mentors in the field.


AI Center Sweden: team of experts who collaborate towards a common goal: Equality, safety, and security in the AI industry.


AI Now Institute:   research institute examining the social implications of artificial intelligence.


AI Sustainability Center: established to create a world-leading multidisciplinary hub to address the scaling of AI in broader ethical and societal contexts.


AJL - Algorithmic Justice League:  collective that aims to highlight algorithmic bias through media, art, and science; provide space for people to voice concerns and experiences with coded bias; and develop practices for accountability during the design, development, and deployment of coded systems.


Allen Institute for Artificial Intelligence (AI2):  non-profit organization founded by Paul Allen with the mission to contribute to humanity through high-impact AI research and engineering. All work is directed towards AI for the Common Good.


Artificial Intelligence Forum of New Zealand (AI Forum):  not-for-profit, non-governmental organisation (NGO) that is funded by members; brings together New Zealand’s community of artificial intelligence technology innovators, end users, investor groups, regulators, researchers, educators, entrepreneurs and interested public to work together to find ways to use AI to help enable a prosperous, inclusive and thriving future for New Zealand.


Association for the Advancement of Artificial Intelligence (AAAI):   an international, nonprofit scientific society devoted to promoting research in, and responsible use of, artificial intelligence.


Atomium - European Institute for Science, Media and Democracy (EISMD):  its AI4People initiative was launched at the European Parliament as the first multi-stakeholder forum bringing together all actors interested in shaping the social impact of new applications of AI, including the European Parliament, civil society organisations, industry and the media.


Auditing Algorithms: Adding Accountability to Automated Authority is a group of events designed to produce a white paper that will help to define and develop the emerging research community for “algorithm auditing,” a research design that has shown promise in diagnosing the unwanted consequences of algorithmic systems.


Berkeley Center for Law & Technology (BCLT):  part of the University of California, Berkeley, School of Law that supports teaching, research, convening, and student activities on issues at the intersection of law and technology; covers legal and policy issues related to intellectual property, privacy, cybercrime and cybersecurity, biotech, the sports and entertainment industries, telecommunications regulation, and artificial intelligence.


Cambridge Centre for Data-Driven Discovery:  brings together researchers and expertise from across academic departments and industry to drive research into the analysis, understanding and use of data science. The Centre is funded by a series of collaborations with partners in business and industry which have an interest in using data science for the benefit of their customers and their organisations.


Center for AI and Digital Policy: nonprofit AI policy & research institute that works to ensure that artificial intelligence and digital policies promote a better society that is more fair, more just, and more accountable – a world where technology promotes broad social inclusion based on fundamental rights, democratic institutions, and the rule of law.


Center for Critical Race and Digital Studies (CR+DS): network of prominent, public scholars of color who produce research, distribute knowledge, and convene stakeholders at the intersections of race and technology.


Center for Data Innovation:   nonprofit research institute focused on the intersection of data, technology, and public policy. With staff in Washington, DC and Brussels, the Center formulates and promotes public policies associated with data, as well as technology trends such as artificial intelligence, open data, and the Internet of Things.


Center for Human-Compatible AI (CHAI):   its goal is to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.


Center for Humane Technology:   envisions a world where technology supports our shared well-being, sense-making, democracy, and ability to tackle complex global challenges.


Centre for Internet and Society (CIS):  non-profit organisation based in India that undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives. The areas of focus include digital accessibility for persons with diverse abilities, access to knowledge, intellectual property rights, openness (including open data, free and open source software, open standards, open access, open educational resources, and open video), internet governance, telecommunication reform, digital privacy, and cyber-security.


Center for Information Technology Policy (CITP):  interdisciplinary center at Princeton University whose research, teaching, and events address digital technologies as they interact with society.


Center for Technology Ethics:   an initiative of ISTCS.org, CSETE develops training programmes and carries out research and consultancy in the field of sustainability engineering and technology ethics.


Centre for the Study of Existential Risk (CSER): interdisciplinary research centre within CRASSH at the University of Cambridge dedicated to the study and mitigation of existential risks.


Civil Infrastructure Platform (CIP):  collaborative, open source project hosted by the Linux Foundation; focused on establishing an open source “base layer” of industrial-grade software to enable the use and implementation of software building blocks in civil infrastructure projects.


Council for Big Data, Ethics, and Society:   brings together researchers from diverse disciplines — from anthropology and philosophy to economics and law — to address issues such as security, privacy, equality, and access in order to help guard against the repetition of known mistakes and inadequate preparation.


De Montfort University - Centre for Computing and Social Responsibility:  research centre for the ethical and social implications of Information and Communications Technology (ICT).


DataEthics:   politically independent ThinkDoTank based in Denmark with a European (and global) outreach; its purpose is to ensure the primacy of the human being in a world of data, based on a European legal and value-based framework, by focusing on collecting, creating and communicating knowledge about data ethics in close interaction with international institutions, organisations and academia.


Data Justice Lab:   examines the intricate relationship between datafication and social justice, highlighting the politics and impacts of data-driven processes and big data. The lab is hosted at Cardiff University’s School of Journalism, Media and Culture.


Data & Society:   advances public understanding of the social implications of data-centric technologies and automation.


Digital Impact:   an initiative of the Digital Civil Society Lab at Stanford PACS that investigates the challenges and opportunities facing civil society organizations in the digital age and develops resources to help organizations use digital data and infrastructure safely, ethically, and effectively. The Lab aims to shape the future of civil society globally by fostering the creation of new mechanisms for using, governing, and donating digital assets for public benefit.


Doteveryone:  works with businesses, civil society organisations and governments to help them understand and practise responsible technology. Also keeps a shared inventory of AI related organizations and principles.


Electronic Frontier Foundation:  nonprofit organization defending civil liberties in the digital world.


EthicsNet:  community with the purpose of experimenting with different potential techniques to create datasets of examples of nice behaviours (such as social norms), to help socialise AI.


Fight for the Future:   non-profit organization whose mission is to ensure that the web continues to hold freedom of expression and creativity at its core; envisions a world where everyone can access the internet affordably, free of interference or censorship and with full privacy.


ForHumanity: non-profit, crowdsourcing organization building audit frameworks for AI and autonomous systems; believes that mitigating risk from the perspectives of ethics, bias, privacy, trust, and cybersecurity in autonomous systems will lead to a better world.


FTC - Office of Technology Research and Investigation (OTech):   provides research and information on technology’s impact on consumers; the Office conducts independent studies, evaluates new marketing practices, and provides guidance to consumers, businesses and policy makers.


Future of Privacy Forum:   nonprofit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies.


Future Society:   its collective intelligence platform synthesizes the main ideas and proposals, which are in turn used to inform solutions and actionable policy tools for the international governance of AI.


GCRI - Global Catastrophic Risk Institute:   nonprofit, nonpartisan think tank. GCRI works on the risk of events that could significantly harm or even destroy human civilization at the global scale.​


GENIA:  US Public Benefit Corporation harnessing the power of machine learning and deep tech to connect Latin America to a regional matrix of artificial intelligence research & development.


Governance Lab:  aims to strengthen the ability of institutions – including but not limited to governments – and people to work more openly, collaboratively, effectively and legitimately to make better decisions and solve public problems.


IEEE Standards Project for Model Process for Addressing Ethical Concerns During System Design outlines an approach for identifying and analyzing potential ethical issues in a system or software program from the outset of the effort. The values-based system design methods address ethical considerations at each stage of development to help avoid negative unintended consequences while increasing innovation.


IEET - Institute for Ethics and Emerging Technologies:   nonprofit think tank which promotes ideas about how technological progress can increase freedom, happiness, and human flourishing in democratic societies; believes that technological progress can be a catalyst for positive human development so long as we ensure that technologies are safe and equitably distributed.


IFTF - Institute for the Future:   provides global forecasts, custom research, and foresight training to navigate complex change and develop world-ready strategies.


Insight Centre for Data Analytics:  one of Europe’s largest data analytics research organisations. Its mission is to undertake high-impact research in data analytics that has significant benefits for the individual, industry and society by enabling better decision making.


Institute for Ethical AI & Machine Learning:   research centre that carries out highly-technical research into processes and frameworks that support the responsible development, deployment and operation of machine learning systems.​


Leverhulme Centre for the Future of Intelligence (CFI):  based at the University of Cambridge, with partners at the Oxford Martin School at the University of Oxford, Imperial College London, and the University of California, Berkeley; its mission is to build a new interdisciplinary community of researchers, with strong links to technologists and the policy world, and a clear practical goal: to work together to ensure that we humans make the best of the opportunities of artificial intelligence as it develops over coming decades.


Machine Intelligence Research Institute: non-profit organization that aims to reduce the risk of catastrophe from the development of smarter-than-human artificial intelligence; activities include research, education, and conferences.


Montreal AI Ethics Institute:  mission is to help define humanity’s place in a world increasingly characterized and driven by algorithms. It does this by creating tangible and applied technical and policy research in the ethical, safe and inclusive development of AI.


MIT-CSAIL Computer Science & Artificial Intelligence Lab:  CSAIL is committed to pioneering new approaches to computing that will bring about positive changes in the way people around the globe live, play, and work.


MIT - The Media Lab:   an antidisciplinary research community and graduate program at MIT focused on the study, invention, and creative use of emerging technologies.


Nesta:  Mapping AI Governance


NIST:  contributes to the research, standards and data required to realize the full promise of artificial intelligence (AI) as an enabler of American innovation across industry and economic sectors.


NSCAI (National Security Commission on Artificial Intelligence):   independent Commission "to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States."


OCEANIS - Open Community for Ethics in Autonomous and Intelligent Systems:  global forum for discussion, debate and collaboration among organizations interested in the development and use of standards to further the development of autonomous and intelligent systems.


OECD.AI Policy Observatory:  combines resources from across the OECD, its partners and all stakeholder groups. OECD.AI facilitates dialogue between stakeholders while providing multidisciplinary, evidence-based policy analysis in the areas where AI has the most impact.


Open Data Institute:   works with companies and governments to build an open, trustworthy data ecosystem, where people can make better decisions using data and manage any harmful impacts.  

 

Partnership on AI: conducts research, organizes discussions, shares insights, provides thought leadership, consults with relevant third parties, responds to questions from the public and media, and creates educational material that advances the understanding of AI technologies including machine perception, learning, and automated reasoning.


Privacy International:  supports people everywhere to protect privacy, dignity, and freedom; targets companies and governments that don’t respect the right to be free from their prying technologies.


ProPublica: Machine Bias Series, a series of reports and journalism articles investigating algorithmic injustice and the formulas that influence our lives.

 

RightsCon:   originally called the Silicon Valley Human Rights Conference, RightsCon rotated between San Francisco and another global city. Now, RightsCon is an annual event that rotates its location each year to new host cities around the world that are power centers for technology and human rights.


Salesforce - Einstein AI:   researches, designs, and develops AI algorithms for Salesforce CRM; pushes the state of AI research and engineering while bringing a principled approach to its work. Also provides a shared inventory of AI-related organizations and principles.


Stanford University HAI (Human-centered AI):   its vision for the future is led by a commitment to promoting human-centered uses of AI and ensuring that humanity benefits from the technology and that the benefits are broadly shared.


Tech Policy Lab:   interdisciplinary collaboration at the University of Washington that aims to enhance technology policy through research, education, and thought leadership; brings together experts from the University’s School of Law, Information School, Computer Science & Engineering, and other units on campus.


UNICRI Centre for Artificial Intelligence and Robotics:   based in The Hague, the Netherlands; focuses on Goal 16 of the 2030 Agenda for Sustainable Development, which is centered on promoting peaceful, just and inclusive societies, free from crime and violence.


University of Leeds - Consumer Data Research Centre:   creates, supplies and maintains data for a wide range of users; works with private and public data suppliers to ensure efficient, effective and safe use of data in social science.


University of Tokyo Next Generation Artificial Intelligence Research Center (AI Center):   attempts to transcend the framework and limitations of current AI technology to construct a novel scheme of science and technology regarding human and artificial intelligence.


Upturn: advances equity and justice in the design, governance, and use of technology.


WITNESS:  makes it possible for anyone, anywhere to use video and technology to protect and defend human rights.


World Economic Forum (WEF): Strategic Intelligence
