Experiments show that even when working with social posts and topics from completely different time periods, large language models can infer a great deal from small cues: drawing on writing style, interests, and even demographic traits, they can accurately pick out a user's "past self" from a candidate pool containing thousands of distractors.
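For a sense of what the matching task looks like, here is a deliberately simple sketch: score every candidate account against the query user and take the top hit. This is a classical stylometry baseline (character-trigram cosine similarity), not the LLM-based method the experiments used, and all account names and texts in it are invented for illustration.

```python
from collections import Counter
from math import sqrt

# Toy stylometric baseline for the re-identification task: rank candidate
# "past" accounts by cosine similarity over character-trigram counts.
# NOT the LLM-based method from the experiments; data here is invented.

def trigrams(text: str) -> Counter:
    """Character-trigram profile of a text sample."""
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two trigram profiles."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_candidates(query: str, pool: dict[str, str]) -> list[tuple[str, float]]:
    """Score every candidate account against the query user, best first."""
    q = trigrams(query)
    return sorted(((name, cosine(q, trigrams(text))) for name, text in pool.items()),
                  key=lambda pair: pair[1], reverse=True)

pool = {
    "candidate_a": "lol same tbh, cant even deal with mondays",
    "candidate_b": "Indeed; one must weigh the evidence carefully.",
}
print(rank_candidates("lol tbh cant even", pool)[0][0])  # candidate_a
```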
This SNS account-trading site also apparently sold, among other listings, an X account that posted sexual content while billing itself as a "secret account" (40,000 yen) and a set of 100 old X accounts said to have been created between 2007 and 2017 (9,500 yen).
This started with Addition Under Pressure, where I gave Claude Code and Codex the same prompt: train the smallest possible transformer that can do 10-digit addition with at least 99% accuracy. Claude Code came back with 6,080 parameters; Codex with 1,644. The community has since pushed this dramatically lower.
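For concreteness, here is a minimal sketch of the benchmark loop under one plausible framing: the model sees two 10-digit operands and must emit the exact sum as text. `predict` is a placeholder for whatever tiny transformer is under test, and the real entries' prompt formats (digit order, separators, padding) may well differ, so treat this encoding as an assumption.

```python
import random

# Minimal sketch of the evaluation harness: sample 10-digit addition
# problems and measure exact-match accuracy against the >= 99% bar.
# `predict` stands in for the model under test (an assumption, not the
# actual entries' interface).

def make_example(rng: random.Random) -> tuple[str, str]:
    """One 10-digit addition problem and its exact answer."""
    a = rng.randrange(10**9, 10**10)
    b = rng.randrange(10**9, 10**10)
    return f"{a}+{b}=", str(a + b)

def accuracy(predict, n: int = 10_000, seed: int = 0) -> float:
    """Exact-match accuracy over n sampled problems."""
    rng = random.Random(seed)
    hits = sum(predict(p) == ans for p, ans in (make_example(rng) for _ in range(n)))
    return hits / n

# Sanity-check the harness with a cheating oracle (not a model):
print(accuracy(lambda p: str(eval(p.rstrip("=")))))  # 1.0
```

The interesting part of the challenge is everything this harness leaves out: tokenization, digit ordering (reversed digits make carries local), and how few parameters can still hit the accuracy bar.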