AGI Watchful Guardians
We're open for new collaborations.
News to be updated weekly.
What would a Provably Safe AGI Framework look like?
Xiaohu Zhu — Apr 8, 2024
AGI, AI Safety, Beneficial, CSAGI
Alignment of Language Agents
Xiaohu Zhu — Mar 27, 2022
AGI, AI Safety, Alignment, CSAGI, DeepMind
Progress on Causal Influence Diagrams
Xiaohu Zhu — Jul 1, 2021
Uncategorized
Meta-trained Agents Implement Bayes-Optimal Agents
Xiaohu Zhu — Jan 31, 2021
DeepMind
On F. Chollet’s “On the Measure of Intelligence” (2019)
Xiaohu Zhu — Nov 20, 2020
Intelligence
REALab: Conceptualizing the Tampering Problem
Xiaohu Zhu — Nov 20, 2020
AGI, AI Safety, DeepMind
Why Reducing the Cost of Training Neural Networks Remains a Challenge
Xiaohu Zhu — Nov 18, 2020
Uncategorized
Shakir Mohamed’s Imagination of “Good” and a Mission for Change
Xiaohu Zhu — Nov 2, 2020
Beneficial
AN #108: Why We Need to Carefully Examine the Arguments for AI Risk
Xiaohu Zhu — Jul 16, 2020
AGI, AI Safety, Alignment, CSAGI
AN #107: Convergent Instrumental Subgoals of Goal-Directed Agents
Xiaohu Zhu — Jul 10, 2020
AGI, AI Safety, Alignment, CSAGI
AN #101: Why We Need to Rigorously Measure and Forecast AI Progress
Xiaohu Zhu — Jul 8, 2020
AGI, AI Safety, Alignment