Merve Hickok is the founder of AIethicist.org. She is a globally renowned expert on AI policy, ethics, and governance. Her contributions and perspective have been featured in The New York Times, The Guardian, CNN, Forbes, Bloomberg, Wired, Scientific American, The Atlantic, Politico, Protocol, Vox, The Economist, and S&P. Her research, training, and consulting work focuses on the impact of AI systems on individuals, society, and public and private organizations, with particular attention to fundamental rights, democratic values, and social justice. She provides consultancy to C-suite leaders and training services to public and private organizations on Responsible AI development, due diligence, and governance.
​
Merve is the President and Research Director of the Center for AI & Digital Policy, where she is deeply engaged in global AI policy and regulatory work. The Center educates AI policy practitioners and advocates across 80+ countries and advises international organizations such as the European Commission, UNESCO, the Council of Europe, and the OECD. She has provided testimony to the US Congress, the State of California Civil Rights Office, the New York City Department of Consumer and Worker Protection, the Detroit City Council, and many global organizations interested in AI policy and ethics.
At the University of Michigan, she is a lecturer at the School of Information, the Responsible Data and AI Advisor at the Michigan Institute for Data Science, and affiliated faculty at the Gerald R. Ford School of Public Policy (Science, Technology, and Public Policy Program).
​
Merve also works with several non-profit organizations globally to advance both academic and professional research in this field for underrepresented groups. She has been recognized by a number of organizations, most recently as one of the 100 Brilliant Women in AI Ethics™ (2021), as Runner-up for Responsible AI Leader of the Year (Women in AI, 2022), and as a Finalist for the Lifetime Achievement Award, Women in AI of the Year (2023).
​
Previously, Merve held various senior roles at Fortune 100 companies for more than 15 years.
​

CURRENT AI-FOCUSED WORK:
​
- President & Research Director at the Center for AI & Digital Policy
- Data Ethics Lecturer at the University of Michigan, School of Information
- Responsible Data and AI Advisor at the University of Michigan, Michigan Institute for Data Science (MIDAS)
- Affiliated Faculty, University of Michigan, Gerald R. Ford School of Public Policy (Science, Technology, and Public Policy Program)
- Founding editorial board member at the Springer Nature AI & Ethics journal
- Advisory board member at Transatlantic Policy Quarterly
- Advisory board member at the Women in AI Ethics Collective
- Advisor at Better Images of AI
- Advisor at the Civic Software Foundation
- Fellow at ForHumanity Center, working to draft a framework for independent audit of AI systems
- Steering Committee Member at Start Access (organized by the American Association of People with Disabilities (AAPD))
- Working Group member at the Institute of Electrical and Electronics Engineers (IEEE) P7008, 2863, 3119, and IEC SEG10 Working Groups, developing global standards and frameworks on the ethics of autonomous and intelligent systems, with a focus on interdisciplinary collaboration
- Member of the Open Community for Ethics in Autonomous and Intelligent Systems (OCEANIS), alongside national institutions
- Consultant at High Sierra Industries, focused on product and project management of learning systems for individuals with intellectual disabilities
- Member of the board of directors of the Northern Nevada International Center (University of Nevada, Reno). NNIC serves as Northern Nevada's refugee resettlement agency, helping displaced persons and victims of human trafficking, and organizes programs for international delegations through the U.S. Department of State and other federal agencies. It is among the top organizations in the United States, hosting over a dozen large U.S. Department of State Bureau of Educational and Cultural Affairs exchange programs.
​
PUBLICATIONS
CONTRIBUTIONS & COLLABORATIONS
Center for AI and Digital Policy
- Artificial Intelligence & Democratic Values 2022 Index (published March 2023)
- Artificial Intelligence & Democratic Values 2021 Index - Comparative Country Reports
- Artificial Intelligence & Democratic Values 2020 Index - Comparative Country Reports
​
Springer Nature: AI & Ethics journal
- Lessons learned from AI ethics principles for future actions
- A policy primer and roadmap on AI worker surveillance and productivity scoring tools
- Towards intellectual freedom in an AI Ethics Global Community
​
Springer Nature: AI & Society journal
Journal of Leadership, Accountability & Ethics
​
SocArXiv
Transatlantic Policy Quarterly: The State of AI Policy: The Democratic Values Perspective
Council on Foreign Relations: AI and Democratic Values: Next Steps for the United States
The New York Times (editorial): Regulating A.I.: The U.S. Needs to Act
The Economist (editorial): In response to "How to worry wisely about artificial intelligence"
Techonomy: Can the U.S. and Europe agree on rules for AI?
Verfassungsblog: The Council of Europe Creates a Black Box for AI Policy
​
Medium
- What does an AI Ethicist do? A Guide for the Why, the What and the How.
- Why was your job application rejected: Bias in Recruitment Algorithms?
- NYC Bias Audit Law: Clock ticking for Employers and HR Talent Technology Vendors
​
United Nations
​
International Organization for Migration
​
Oxford Insights
​
NYU Center on International Cooperation
​
University of Buckingham - Institute for Ethical AI in Education
​
All Tech is Human
- The Business Case for AI Ethics: Moving From Theory to Action
- Guide to Responsible Tech: How to Get Involved & Build a Better Tech Future
- AI & Human Rights: Building a Tech Future Aligned with Public Interest
​
Public Comments to Legislation
- US Congress Testimony: Expert testimony on the topic "Advances in AI: Are we ready for the tech revolution?"
- White House OSTP: Written Comments for Blueprint for an AI Bill of Rights
- White House OSTP: Written Comments for Automated Worker Surveillance and Management
- Equal Employment Opportunity Commission (EEOC): Written Recommendations for Strategic Enforcement Plan
- New York City Department of Consumer & Worker Protection: Written Comments regarding NYC legislation on Automated Employment Decision Tools (Local Law 144) - June 2022
- New York City Department of Consumer & Worker Protection: Written Comments regarding NYC Local Law 144 - October 2022
- New York City Department of Consumer & Worker Protection: Written Comments regarding NYC Local Law 144 - January 2023
- State of California Civil Rights Council: Written Comments regarding modifications to Employment Regulations and Criminal History - August 2022
MEDIA COVERAGE
- The New York Times: 8 More Companies Pledge to Make A.I. Safe, White House Says
- The Atlantic: Before a Bot Steals Your Job, It Will Steal Your Name
- Guardian: ‘Bossware is coming for almost every worker’: the software you might not realize is watching you
- Wired: Chatbots Got Big—and Their Ethical Red Flags Got Bigger
- Wired: Why scientists are building AI-powered avatars of the dead
- CNN: The FTC should investigate OpenAI and block GPT over ‘deceptive’ behavior, AI policy group claims
- Scientific American: You can probably beat ChatGPT at these math teasers
- Protocol: How will this AI critic influence Biden’s policies? The clues are hiding in plain sight
- Protocol: Inside Eric Schmidt’s push to profit from an AI cold war with China
- Politico: The AI 'gold rush' in Washington
- The New Republic: Congress Is Racing to Catch Up With Artificial Intelligence
- Observer: ChatGPT Sparked Transatlantic Regulatory Threats for All Artificial Intelligence
- Forbes: How The FTC Could Slow OpenAI’s ChatGPT
- Vox: Inside the chaos at Washington’s most connected military tech startup
- The European Association for Biometrics: Facial recognition use in public spaces under the microscope
- The Michigan Daily: MIDAS hosts forum on ethics in artificial intelligence
- Insider: I interviewed a breast-cancer survivor who wanted me to tell her story. She was actually an AI
- arsTechnica: GPT-4 poses too many risks and releases should be halted, AI group tells FTC
- PC Mag: FTC Needs to Probe OpenAI and Halt GPT-4, Nonprofit Says
- University of Michigan, School of Information: AI has the power to undermine human rights. Congress must establish safeguards.
- MENA FN: Non-profit organization files complaint urging investigation into OpenAI's GPT models
- S&P Global Market Intelligence: Biden administration AI policy efforts to be complex balancing act in 2023
- Bloomberg Law: ‘Robot Bosses’ Spur Lawmaker Push to Police AI Job Surveillance
- Daily Beast: Would You Let Your Boss Track Your Sleep Schedule?
- The Cyber Express: Artificial Intelligence and The Top 6 Business Risks
- Welcome to the Jungle: A recruiting revolution: why did NYC delay its landmark AI bias law?
- Security Boulevard: CPDP 2021 – Moderator: Merve Hickok ‘AI Regulation In Europe & Fundamental Rights’
- SHRM: Regulations Ahead on AI
- HR Brew: NYC’s bill to regulate automated hiring software faces challenges
- HR Brew: The federal government is warning employers that hiring AI must comply with civil rights laws
- Medium: Ethical Challenges of Artificial Intelligence: An Interview with Merve Hickok
- Medium: #IamthefutureofAI Series: Merve Hickok
- Montreal AI Ethics Institute: The Future of Teaching Tech Ethics
- Tortoise & Kainos: The future of trust in artificial intelligence
- IndiaAI: Moonshot Dream
​
AWARDS & RECOGNITION
DIET: Data and Diversity, Inclusion and Impact, Ethics and Equity in Teams and Technology
CONFERENCES, WEBINARS, PODCASTS
- Monitoring Big Tech on the Standards of Social Contract for the AI Age
- UCL The Algo Conference