tor-browser

The Tor Browser
git clone https://git.dasho.dev/tor-browser.git

commit 52ce87b407dbf5459b10024489aac07bbe301940
parent 201177984218565a7ee218db48ead3404259a606
Author: luci-bisection@appspot.gserviceaccount.com <luci-bisection@appspot.gserviceaccount.com>
Date:   Wed, 15 Oct 2025 08:57:46 +0000

Bug 1993967 [wpt PR 55387] - Revert "webnn: Enable kokoro_82m_v1_fp16 model on TFLite backend", a=testonly

Automatic update from web-platform-tests
Revert "webnn: Enable kokoro_82m_v1_fp16 model on TFLite backend"

This reverts commit c4f578d5834801a4b0b8a54e671dd2ca386cdcd7.

Reason for revert:
LUCI Bisection has identified this change as the cause of a test failure. See the analysis: https://ci.chromium.org/ui/p/chromium/bisection/test-analysis/b/5737018685390848

Sample build with failed test: https://ci.chromium.org/b/8701302412868775361
Affected test(s):
[ninja://:blink_wpt_tests/virtual/webnn-service-on-npu/external/wpt/webnn/conformance_tests/lstm.https.any.html?npu](https://ci.chromium.org/ui/test/chromium/ninja:%2F%2F:blink_wpt_tests%2Fvirtual%2Fwebnn-service-on-npu%2Fexternal%2Fwpt%2Fwebnn%2Fconformance_tests%2Flstm.https.any.html%3Fnpu?q=VHash%3Abb4467ce740205db)
[ninja://:blink_wpt_tests/virtual/webnn-service-with-gpu/external/wpt/webnn/conformance_tests/lstm.https.any.html?gpu](https://ci.chromium.org/ui/test/chromium/ninja:%2F%2F:blink_wpt_tests%2Fvirtual%2Fwebnn-service-with-gpu%2Fexternal%2Fwpt%2Fwebnn%2Fconformance_tests%2Flstm.https.any.html%3Fgpu?q=VHash%3Abb4467ce740205db)

If this is a false positive, please report it at http://b.corp.google.com/createIssue?component=1199205&description=Analysis%3A+https%3A%2F%2Fci.chromium.org%2Fui%2Fp%2Fchromium%2Fbisection%2Ftest-analysis%2Fb%2F5737018685390848&format=PLAIN&priority=P3&title=Wrongly+blamed+https%3A%2F%2Fchromium-review.googlesource.com%2Fc%2Fchromium%2Fsrc%2F%2B%2F6993849&type=BUG

Original change's description:
> webnn: Enable kokoro_82m_v1_fp16 model on TFLite backend
>
> There is an issue in the decomposition of lstm [1] when running the
> model. If batchSize is 1, squeeze [2] outputs a 1-D tensor, and the
> subsequent matmul fails because the a and b operands of WebNN
> matmul [3] must be at least 2-D. So remove only the size-1 dimension
> at axis 0 with the squeeze_dims option.
>
> [1] https://github.com/webmachinelearning/webnn/issues/889
> [2] https://www.w3.org/TR/webnn/#api-mlgraphbuilder-lstm
> https://source.chromium.org/chromium/chromium/src/+/main:services/webnn/tflite/graph_builder_tflite.cc;l=5699?q=SerializeSubGraphSliceSqueeze&ss=chromium%2Fchromium%2Fsrc
> [3] https://www.w3.org/TR/webnn/#api-mlgraphbuilder-matmul
>
> Bug: 446545294
> Change-Id: I33eae7f3e81e8f2efac0cbe49627acabf58bed97
> Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/6993849
> Reviewed-by: ningxin hu <ningxin.hu@intel.com>
> Commit-Queue: Junwei Fu <junwei.fu@intel.com>
> Reviewed-by: Reilly Grant <reillyg@chromium.org>
> Cr-Commit-Position: refs/heads/main@{#1528547}
>
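The shape problem described in the quoted change can be sketched independently of WebNN: squeezing every size-1 dimension of a [1, batchSize, hiddenSize] tensor when batchSize is 1 leaves a 1-D shape, which a matmul requiring at least 2-D operands rejects; restricting the squeeze to axis 0 keeps the tensor 2-D. A minimal shape-level sketch (the function names are illustrative, not the WebNN or TFLite API):

```python
def squeeze(shape, axes=None):
    # Drop size-1 dimensions; if `axes` is given, drop only those axes
    # (mirroring the squeeze_dims option mentioned in the commit).
    if axes is None:
        return [d for d in shape if d != 1]
    return [d for i, d in enumerate(shape) if not (i in axes and d == 1)]

def can_matmul(a_shape, b_shape):
    # WebNN matmul requires both operands to be at least 2-D.
    return len(a_shape) >= 2 and len(b_shape) >= 2

# batchSize = 1: squeezing every size-1 dim of [1, 1, 2] yields 1-D [2],
# so a following matmul against a 2-D weight would be rejected.
hidden = [1, 1, 2]                         # [step-slice, batchSize, hiddenSize]
assert squeeze(hidden) == [2]
assert not can_matmul(squeeze(hidden), [2, 8])

# Squeezing only axis 0 preserves the batch dimension and stays 2-D.
assert squeeze(hidden, axes=[0]) == [1, 2]
assert can_matmul(squeeze(hidden, axes=[0]), [2, 8])
```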

Bug: 446545294
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Change-Id: I632782208eadb1354fb6b73e63373dd9b2592fed
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/7034051
Reviewed-by: ningxin hu <ningxin.hu@intel.com>
Bot-Commit: Rubber Stamper <rubber-stamper@appspot.gserviceaccount.com>
Reviewed-by: Junwei Fu <junwei.fu@intel.com>
Commit-Queue: ningxin hu <ningxin.hu@intel.com>
Cr-Commit-Position: refs/heads/main@{#1528764}

--

wpt-commits: 1876b1b6fb9ad7e80b9c5c4c29723fa0cb74a649
wpt-pr: 55387

Diffstat:
M testing/web-platform/tests/webnn/conformance_tests/lstm.https.any.js | 65 -----------------------------------------------------------------
1 file changed, 0 insertions(+), 65 deletions(-)

diff --git a/testing/web-platform/tests/webnn/conformance_tests/lstm.https.any.js b/testing/web-platform/tests/webnn/conformance_tests/lstm.https.any.js
@@ -776,71 +776,6 @@ const lstmTests = [
     }
   },
   {
-    'name':
-        "lstm float32 tensors steps=2, batchSize=1 with options.bias, options.recurrentBias, options.activations=['relu', 'relu', 'relu'] and options.direction='backward'",
-    'graph': {
-      'inputs': {
-        'lstmInput': {
-          'data': [1, 2, 2, 1],
-          'descriptor': {shape: [2, 1, 2], dataType: 'float32'}
-        },
-        'lstmWeight': {
-          'data': [1, -1, 2, -2, 1, -1, 2, -2, 1, -1, 2, -2, 1, -1, 2, -2],
-          'descriptor': {shape: [1, 8, 2], dataType: 'float32'},
-          'constant': true
-        },
-        'lstmRecurrentWeight': {
-          'data': [
-            0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
-            0.1, 0.1, 0.1
-          ],
-          'descriptor': {shape: [1, 8, 2], dataType: 'float32'},
-          'constant': true
-        },
-        'lstmBias': {
-          'data': [1, 2, 1, 2, 1, 2, 1, 2],
-          'descriptor': {shape: [1, 8], dataType: 'float32'},
-          'constant': true
-        },
-        'lstmRecurrentBias': {
-          'data': [1, 2, 1, 2, 1, 2, 1, 2],
-          'descriptor': {shape: [1, 8], dataType: 'float32'},
-          'constant': true
-        }
-      },
-      'operators': [{
-        'name': 'lstm',
-        'arguments': [
-          {'input': 'lstmInput'}, {'weight': 'lstmWeight'},
-          {'recurrentWeight': 'lstmRecurrentWeight'}, {'steps': 2},
-          {'hiddenSize': 2}, {
-            'options': {
-              'bias': 'lstmBias',
-              'recurrentBias': 'lstmRecurrentBias',
-              'direction': 'backward',
-              'activations': ['relu', 'relu', 'relu']
-            }
-          }
-        ],
-        'outputs': ['lstmOutput1', 'lstmOutput2']
-      }],
-      'expectedOutputs': {
-        'lstmOutput1': {
-          'data': [
-            21955.08984375, 43092.29296875
-          ],
-          'descriptor': {shape: [1, 1, 2], dataType: 'float32'}
-        },
-        'lstmOutput2': {
-          'data': [
-            867.7901000976562, 1638.4901123046875
-          ],
-          'descriptor': {shape: [1, 1, 2], dataType: 'float32'}
-        }
-      }
-    }
-  },
-  {
     'name': 'lstm float32 tensors steps=2 with all options',
     'graph': {
       'inputs': {