• Research Center for Artificial Intelligence Ethics and Safety

    Beijing Academy of Artificial Intelligence

    The Research Center for Artificial Intelligence Ethics and Safety at the Beijing Academy of Artificial Intelligence focuses on technical research in Artificial Intelligence (AI) ethics. It aims to achieve a series of innovative results in areas such as low-risk, highly safe, and ethical AI models and AI risk detection, so as to reduce the potential technological and ethical risks in the development of AI and to ensure that scientific and technological innovation in AI benefits society, nature, and ecology while developing steadily. The center will work continuously toward the construction of a New Generation AI Innovation and Development Pilot Zone, and help Beijing become an international city for responsible AI innovation and development in service of a human community with a shared future.




    The research center focuses on theoretical exploration, algorithmic models, systems and platforms, and applications of AI ethics and safety in concrete domains. Current research includes:


    (1) Highly safe machine learning models and platforms

    The center will work on the safety of AI learning models and their evaluation. The potential risks and safety issues of deep neural networks and brain-inspired neural networks will be investigated, and validated in real-world scenarios to reduce risks and improve safety. Evaluation frameworks for the safety of machine learning models will also be investigated and validated in the context of specific application areas.


    (2) Intelligent autonomous learning and acquisition models for ethics, morality, and human values

    Creating computational models of the self, theory of mind, and autonomous learning, through which human values, ethics, and morality can be learned via interactions with humans and the environment. Human-AI value alignment is the core of this effort, and the models will be verified in both simulated environments and real-world scenarios.


    (3) Data privacy and safety systems

    Creating data infrastructures with data grading, access control, data auditing, and privacy-preserving platforms to minimize data leaks and reduce risks. Exploring mechanisms and computing platforms that allow users to revoke authorization over their private data. Building a data firewall to stop external attacks and prevent data leaks.


    (4) Safety and ethics for driverless vehicles

    Constructing a database of potential edge-case scene risks, and proposing highly safe machine learning models that can deal with these risks. Establishing a safety engineering architecture for autonomous driving systems. Improving functional safety and information security with AI, so as to increase the social acceptance of autonomous vehicles. Providing application-level guidelines and specifications for machine learning algorithms in the autonomous driving industry.


    (5) Artificial intelligence risk and safety synthetic governance sandbox platform

    Constructing an automatic detection platform for AI risks and safety. Evaluating AI models and services from the perspectives of data and algorithm safety and security, fairness, traceability, degree of impact, etc. Encouraging AI products and services to participate actively in the assessment in order to reduce AI risks across the whole life cycle.

    Director: Yi Zeng

    Yi Zeng is the Director of the Research Center on AI Ethics and Governance, BAAI. He is a Professor and Deputy Director of the Research Center for Brain-inspired Artificial Intelligence, Institute of Automation, Chinese Academy of Sciences, and is also a Professor at the University of Chinese Academy of Sciences. He is a board member of the New Generation Artificial Intelligence Governance Committee of the Ministry of Science and Technology of China, a fellow of the Berggruen Research Center at Peking University, and a member of the World Economic Forum Global Future Council on Values, Ethics and Innovation.

    E-mail: aies@baai.ac.cn

  • Beijing AI Principles