[Accuracy diff No.16] Fix accuracy diff for paddle.cumsum
、 paddle.logcumsumexp
API
#74081
Add this suggestion to a batch that can be applied as a single commit.
This suggestion is invalid because no changes were made to the code.
Suggestions cannot be applied while the pull request is closed.
Suggestions cannot be applied while viewing a subset of changes.
Only one suggestion per line can be applied in a batch.
Add this suggestion to a batch that can be applied as a single commit.
Applying suggestions on deleted lines is not supported.
You must change the existing code in this line in order to create a valid suggestion.
Outdated suggestions cannot be applied.
This suggestion has been applied or marked resolved.
Suggestions cannot be applied from pending reviews.
Suggestions cannot be applied on multi-line comments.
Suggestions cannot be applied while the pull request is queued to merge.
Suggestion cannot be applied right now. Please check back later.
PR Category
Operator Mechanism
PR Types
Bug fixes
Description
错误定位:
ThrustCumsumKernel
自身的精度误差就极大解决方案:
ThrustCumsumKernel
分支处理,进入后续 CUDAcub
计算BlockPrefixCallbackOp
采用 Kahan 算法,参考:https://en.wikipedia.org/wiki/Kahan_summation_algorithmBlockPrefixCallbackOp
对于LogAddExp
算子特例,采用 Kahan + Online Scale,使其数值更稳定其他修改:
np_logcumsumexp_grad
,避免直接计算np.log(-dout)
(dout > 0) 报错测试:
修复后,测试用例大致分为三类:
考虑到超大张量累积误差本身就更大,将 atol、rtol 改为 1 后基本通过测试:

将其添加至 torch_error_skip 跳过精度测试
将其添加至 torch_error_skip 跳过精度测试
补充测试:
张量通过


full
填充为 0.01,则cumsum
期望值约为42949672.97
,logcumsumexp
期望值约为22.1907
发现
cumsum
均与理论值有差异,但数值接近,用时相同;logcumsumexp
的 torch 更接近理论值,paddle 仍有差异,且远远慢于 torch,等待算法进一步修复Pcard-85711