AGI Watchful Guardians
We're open to new collaborations.
News is updated weekly.
What would a Provably Safe AGI Framework look like?
Xiaohu Zhu — Apr 8, 2024
Tags: AGI, AI Safety, Beneficial, CSAGI
Logician | Boris Trakhtenbrot
Xiaohu Zhu — May 31, 2020
Tags: Uncategorized
Alignment Newsletter #98: Understanding neural net training by seeing which gradients were helpful
Xiaohu Zhu — May 19, 2020
Tags: AGI, AI Safety, Alignment
Alignment Newsletter #99: Doubling times for the efficiency of algorithms
Xiaohu Zhu — May 16, 2020
Tags: AGI, AI Safety, Alignment
Approximating KL Divergence
Xiaohu Zhu — Apr 26, 2020
Tags: KL divergence
Specification gaming: the flip side of AI ingenuity
Xiaohu Zhu — Apr 23, 2020
Tags: AGI, AI Safety, CSAGI, DeepMind
Introducing new research: The Incentives that Shape Behaviour
Xiaohu Zhu — Feb 7, 2020
Tags: AGI, AI Safety, DeepMind
AN #75: Solving Atari and Go with learned game models, and thoughts from a MIRI member
Xiaohu Zhu — Dec 2, 2019
Tags: AGI, AI Safety, Alignment
AN #74: Separating beneficial AI into competence, alignment, and coping with impacts
Xiaohu Zhu — Dec 2, 2019
Tags: AGI, AI Safety, Alignment
Gated linear networks
Xiaohu Zhu — Nov 27, 2019
Tags: Uncategorized
AN #73: Detecting catastrophic failures by learning how agents can break
Xiaohu Zhu — Nov 18, 2019
Tags: AGI, AI Safety, Alignment