AGI Watchful Guardians

We're open to new collaborations. News is updated weekly.

  • Home
  • About
  • Alignment Newsletter in Chinese
  • Nick Bostrom’s latest work in Chinese
  • Research
  • What would a Provably Safe AGI Framework look like? | Xiaohu Zhu | Apr 8, 2024 | AGI, AI Safety, Beneficial, CSAGI
  • Nexus Research: A Multi-Agent AI Research Platform Technical Deep Dive | Xiaohu Zhu | Nov 27, 2025 | Uncategorized
  • AI 2025 | Xiaohu Zhu | Jun 11, 2025 | Uncategorized
  • Google DeepMind's Responsible Path to AGI | Xiaohu Zhu | Apr 3, 2025 | Uncategorized
  • Recommendations for Technical AI Safety Research Directions | Xiaohu Zhu | Jan 16, 2025 | Uncategorized
  • Sabotage Modal Logic | Xiaohu Zhu | Dec 31, 2024 | Uncategorized
  • Expansion-Contraction Dynamics of Value Alignment | Xiaohu Zhu | Dec 4, 2023 | Uncategorized
  • Artificial General Intelligence Has Already Arrived | Xiaohu Zhu | Oct 11, 2023 | Uncategorized
  • Undecidability of translational monotilings | Xiaohu Zhu | Sep 23, 2023 | Uncategorized
  • On the Ontological Perspective on Agency in Building Safe AGI | Xiaohu Zhu | Aug 30, 2023 | Uncategorized

© AGI Watchful Guardians
