AGI Watchful Guardians

We're open to new collaborations. News is updated weekly.

  • What would a Provably Safe AGI Framework look like?
    Xiaohu Zhu — Apr 8, 2024
    AGI, AI Safety, Beneficial, CSAGI
  • Algebraic Topology and Ontological Kolmogorov Complexity for Safe AGI
    Xiaohu Zhu — Aug 29, 2023
    Uncategorized
  • Ontologically Aligned Super AI: A Vision for the Future
    Xiaohu Zhu — Aug 23, 2023
    Uncategorized
  • Superalignment
    Xiaohu Zhu — Jul 31, 2023
    Uncategorized
  • Navigating the Spectrum of Cooperation for Safe AGI Development
    Xiaohu Zhu — Apr 11, 2023
    AGI, AI Safety, CSAGI, Uncategorized
  • Fragmented but Rational
    Xiaohu Zhu — Apr 3, 2023
    Uncategorized
  • AI Book Recommendations: Add These to Your Reading List
    Xiaohu Zhu — Aug 22, 2022
    AGI, AI Safety, Alignment, Beneficial
  • Alignment Newsletter in Chinese (齐智通讯), Issue 173: Language Models from DeepMind
    Xiaohu Zhu — Jul 22, 2022
    Uncategorized
  • Compositional game theory reading list
    Xiaohu Zhu — May 12, 2022
    Uncategorized
  • Ontological Conflict and the Story of the European Peoples
    Xiaohu Zhu — Mar 31, 2022
    Uncategorized
  • Reading a Paper: Ontological Crises
    Xiaohu Zhu — Mar 29, 2022
    Uncategorized

© AGI Watchful Guardians
