AGI Watchful Guardians

We're open to new collaborations. News is updated weekly.

  • What would a Provably Safe AGI Framework look like?

    Xiaohu Zhu — Apr 8, 2024

    AGI, AI Safety, Beneficial, CSAGI
  • Designing Agent Incentives to Avoid Reward Tampering

    Xiaohu Zhu — Aug 15, 2019

    AGI, AI Safety, CID, DeepMind
  • AN #61 AI Policy and Governance, from Two Experts in the Field

    Xiaohu Zhu — Aug 5, 2019

    AGI, AI Safety, Alignment
  • AN #60 A New AI Challenge: Minecraft Agents That Assist Human Players in Creative Mode

    Xiaohu Zhu — Jul 23, 2019

    AGI, AI Safety, Alignment
  • PapeRman #8

    Xiaohu Zhu — Jul 20, 2019

    Uncategorized
  • Worst-Case Guarantees (Revisited)

    Xiaohu Zhu — Jul 11, 2019

    AGI, AI Safety, Alignment, OpenAI
  • The Shift in the Debate over AI Risk

    Xiaohu Zhu — Jul 11, 2019

    Uncategorized
  • AN #59 How Arguments for AI Risk Have Changed over Time

    Xiaohu Zhu — Jul 11, 2019

    Uncategorized
  • Modeling AGI Safety Frameworks with Causal Influence Diagrams

    Xiaohu Zhu — Jun 26, 2019

    AGI, AI Safety, CID, DeepMind
  • AN #58 Mesa Optimization: What It Is and Why We Should Care

    Xiaohu Zhu — Jun 24, 2019

    Uncategorized
  • PapeRman #8

    Xiaohu Zhu — Jun 16, 2019

    Uncategorized

© AGI Watchful Guardians
