Issue running XNN-pack unittests #3311

Closed
freddan80 opened this issue Apr 24, 2024 · 3 comments
Labels: module: xnnpack (Issues related to xnnpack delegation)

Comments

@freddan80 (Collaborator)

I get a runtime error when running the XNNPACK unittests. Reproducer:

./install_requirements.sh --pybind
or
install_requirements.sh --pybind xnnpack

as per instructions in extension/pybindings/README.md.

pytest -c /dev/null -p no:warnings -s backends/xnnpack/test/models/mobilenet_v2.py -k test_qs8_mv2

Error message:

_________________________________________________________________________ TestMobileNetV2.test_qs8_mv2 __________________________________________________________________________

self = <mobilenet_v2.TestMobileNetV2 testMethod=test_qs8_mv2>

    def test_qs8_mv2(self):
        # Quantization fuses away batchnorm, so it is no longer in the graph
        ops_after_quantization = self.all_operators - {
            "executorch_exir_dialects_edge__ops_aten__native_batch_norm_legit_no_training_default",
        }
    
        dynamic_shapes = (
            {
                2: torch.export.Dim("height", min=224, max=455),
                3: torch.export.Dim("width", min=224, max=455),
            },
        )
    
        (
>           Tester(self.mv2, self.model_inputs, dynamic_shapes=dynamic_shapes)
            .quantize()
            .export()
            .to_edge()
            .check(list(ops_after_quantization))
            .partition()
            .check(["torch.ops.higher_order.executorch_call_delegate"])
            .check_not(list(ops_after_quantization))
            .to_executorch()
            .serialize()
            .run_method_and_compare_outputs(num_runs=10)
        )

backends/xnnpack/test/models/mobilenet_v2.py:66: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.venv/py310_et/lib/python3.10/site-packages/executorch/backends/xnnpack/test/tester/tester.py:566: in run_method_and_compare_outputs
    stage_output = self.stages[stage].run_artifact(inputs_to_run)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <executorch.backends.xnnpack.test.tester.tester.Serialize object at 0x16fddb7f0>
inputs = (tensor([[[[ 0.6119,  0.0162,  1.4876,  ...,  0.3651, -0.2745, -0.6686],
          [-0.8822, -1.8987,  0.2824,  ...,  ... 0.7997,  ..., -0.1194,  1.1729,  0.6180],
          [ 0.2851, -1.0034,  0.8266,  ..., -0.6135, -1.3600, -0.3300]]]]),)

    def run_artifact(self, inputs):
        inputs_flattened, _ = tree_flatten(inputs)
>       executorch_module = _load_for_executorch_from_buffer(self.buffer)
E       RuntimeError: loading method forward failed with error 0x20

.venv/py310_et/lib/python3.10/site-packages/executorch/backends/xnnpack/test/tester/tester.py:323: RuntimeError
------------------------------------------------------------------------------- Captured log call -------------------------------------------------------------------------------
WARNING  root:backend_api.py:373 Disabled validating the partitioner.
============================================================================ short test summary info ============================================================================
FAILED ../../../../dev/::TestMobileNetV2::test_qs8_mv2 - RuntimeError: loading method forward failed with error 0x20
======================================================================= 1 failed, 1 deselected in 17.67s ========================================================================
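
For reference, the failure is in the tester's Serialize stage: the lowered program is serialized to a flatbuffer and then loaded back through the pybind runtime, and it is that load that returns error 0x20. Below is a minimal sketch of the same lower-and-load step outside the tester, assuming a pybind build with XNNPACK support. It is illustrative only: it skips the quantize stage of test_qs8_mv2, and the import paths are the ones the installed executorch wheel exposed at the time.

import torch
import torchvision.models as models

from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge
from executorch.extension.pybindings.portable_lib import _load_for_executorch_from_buffer

# Export MobileNetV2 and lower it to the XNNPACK delegate (fp32 path; the
# failing test additionally quantizes the model before export).
mv2 = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT).eval()
sample_inputs = (torch.randn(1, 3, 224, 224),)

edge = to_edge(torch.export.export(mv2, sample_inputs))
edge = edge.to_backend(XnnpackPartitioner())
program = edge.to_executorch()

# This is the call that fails above with
# "RuntimeError: loading method forward failed with error 0x20".
module = _load_for_executorch_from_buffer(program.buffer)
outputs = module.forward(sample_inputs)
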
@JacobSzwejbka (Contributor)

Sorry, looks like this didn't get a timely response. cc @mcr229 @digantdesai @kimishpatel, can you take a look?

@JacobSzwejbka added the module: xnnpack label on May 9, 2024
@mcr229 (Contributor) commented May 9, 2024

Hi @freddan80, could you rebase and try again? I recloned and ran:

install_requirements.sh --pybind xnnpack
pytest -c /dev/null -p no:warnings -s backends/xnnpack/test/models/mobilenet_v2.py -k test_qs8_mv2

and it seemed to pass for me:

============================= test session starts ==============================
platform darwin -- Python 3.10.0, pytest-7.2.0, pluggy-1.3.0
rootdir: /dev, configfile: null
plugins: anyio-4.3.0, cov-4.1.0, xdist-3.3.1, hypothesis-6.84.2
collecting ... /Users/maxren/miniconda3/envs/executorch/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=MobileNet_V2_Weights.IMAGENET1K_V1`. You can also use `weights=MobileNet_V2_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
/Users/maxren/miniconda3/envs/executorch/lib/python3.10/site-packages/executorch/backends/xnnpack/test/tester/tester.py:340: PytestCollectionWarning: cannot collect test class 'Tester' because it has a __init__ constructor (from: )
  class Tester:
collected 3 items / 1 deselected / 2 selected                                  

../../../../../dev s/Users/maxren/miniconda3/envs/executorch/lib/python3.10/site-packages/torch/ao/quantization/utils.py:356: UserWarning: must run observer before calling calculate_qparams. Returning default values.
  warnings.warn(
/Users/maxren/miniconda3/envs/executorch/lib/python3.10/site-packages/torch/ao/quantization/observer.py:1288: UserWarning: must run observer before calling calculate_qparams.                                    Returning default scale and zero point 
  warnings.warn(
Comparing Stage Serialize with Stage <executorch.backends.xnnpack.test.tester.tester.Export object at 0x309294dc0>
Run 0 with input shapes: [torch.Size([1, 3, 302, 340])]
[program.cpp:130] InternalConsistency verification requested but not available
[method.cpp:939] Overriding output data pointer allocated by memory plan is not allowed.
Run 1 with input shapes: [torch.Size([1, 3, 233, 422])]
[program.cpp:130] InternalConsistency verification requested but not available
[method.cpp:939] Overriding output data pointer allocated by memory plan is not allowed.
Run 2 with input shapes: [torch.Size([1, 3, 448, 437])]
[program.cpp:130] InternalConsistency verification requested but not available
[method.cpp:939] Overriding output data pointer allocated by memory plan is not allowed.
Run 3 with input shapes: [torch.Size([1, 3, 264, 430])]
[program.cpp:130] InternalConsistency verification requested but not available
[method.cpp:939] Overriding output data pointer allocated by memory plan is not allowed.
Run 4 with input shapes: [torch.Size([1, 3, 424, 423])]
[program.cpp:130] InternalConsistency verification requested but not available
[method.cpp:939] Overriding output data pointer allocated by memory plan is not allowed.
Run 5 with input shapes: [torch.Size([1, 3, 246, 349])]
[program.cpp:130] InternalConsistency verification requested but not available
[method.cpp:939] Overriding output data pointer allocated by memory plan is not allowed.
Run 6 with input shapes: [torch.Size([1, 3, 433, 298])]
[program.cpp:130] InternalConsistency verification requested but not available
[method.cpp:939] Overriding output data pointer allocated by memory plan is not allowed.
Run 7 with input shapes: [torch.Size([1, 3, 378, 350])]
[program.cpp:130] InternalConsistency verification requested but not available
[method.cpp:939] Overriding output data pointer allocated by memory plan is not allowed.
Run 8 with input shapes: [torch.Size([1, 3, 322, 278])]
[program.cpp:130] InternalConsistency verification requested but not available
[method.cpp:939] Overriding output data pointer allocated by memory plan is not allowed.
Run 9 with input shapes: [torch.Size([1, 3, 317, 246])]
[program.cpp:130] InternalConsistency verification requested but not available
[method.cpp:939] Overriding output data pointer allocated by memory plan is not allowed.
./Users/maxren/miniconda3/envs/executorch/lib/python3.10/site-packages/_pytest/cacheprovider.py:433: PytestCacheWarning: could not create cache path /dev/.pytest_cache/v/cache/nodeids
  config.cache.set("cache/nodeids", sorted(self.cached_nodeids))
/Users/maxren/miniconda3/envs/executorch/lib/python3.10/site-packages/_pytest/stepwise.py:52: PytestCacheWarning: could not create cache path /dev/.pytest_cache/v/cache/stepwise
  session.config.cache.set(STEPWISE_CACHE_DIR, [])


================= 1 passed, 1 skipped, 1 deselected in 44.38s ==================

@mergennachin (Contributor)

@freddan80 - I also tried reproducing, but it is passing for me.

Reopen if you still encounter the issue.
