
DISABLED test_non_contiguous_input_addmm (__main__.TestMaxAutotune) #126176

Closed
pytorch-bot bot opened this issue May 14, 2024 · 7 comments
Labels: module: flaky-tests (Problem is a flaky test in CI), module: inductor, oncall: pt2, skipped (Denotes a (flaky) test currently skipped in CI), triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments


pytorch-bot bot commented May 14, 2024

Platforms: linux, rocm, slow

This test was disabled because it is failing in CI. See recent examples and the most recent trunk workflow logs.

Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 15 failures and 5 successes.

Debugging instructions (after clicking on the recent samples link):
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:

  1. Click on the workflow logs linked above
  2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
  3. Grep for test_non_contiguous_input_addmm
  4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Sample error message
Traceback (most recent call last):
  File "inductor/test_max_autotune.py", line 660, in test_non_contiguous_input_addmm
    self.assertTrue(torch.allclose(ref, act, atol=4 * 1e-3, rtol=4 * 1e-3))
  File "/opt/conda/envs/py_3.8/lib/python3.8/unittest/case.py", line 765, in assertTrue
    raise self.failureException(msg)
AssertionError: False is not true

To execute this test, run the following from the base repo dir:
    PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_max_autotune.py -k TestMaxAutotune.test_non_contiguous_input_addmm

This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0

Test file path: inductor/test_max_autotune.py
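
For context, the failing assertion compares an eager addmm against the same call compiled with max-autotune on a non-contiguous operand. Below is a minimal sketch of that kind of check (not the actual test body; shapes are illustrative and much smaller than the test's, and the non-contiguity here comes from a simple transpose):

    import torch

    def f(bias, mat1, mat2):
        return torch.addmm(bias, mat1, mat2)

    # A transposed view is one easy way to get a non-contiguous operand; the
    # real test uses much larger shapes (see the autotune log later in this thread).
    mat1 = torch.randn(64, 128, device="cuda", dtype=torch.float16).t()  # 128x64, non-contiguous
    mat2 = torch.randn(64, 32, device="cuda", dtype=torch.float16)
    bias = torch.randn(32, device="cuda", dtype=torch.float16)

    ref = f(bias, mat1, mat2)
    act = torch.compile(f, mode="max-autotune")(bias, mat1, mat2)
    assert torch.allclose(ref, act, atol=4e-3, rtol=4e-3)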

cc @clee2000 @ezyang @msaroufim @bdhirsh @anijain2305 @chauhang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire

pytorch-bot bot added the module: flaky-tests, module: inductor, skipped, and triaged labels on May 14, 2024

pytorch-bot bot commented May 14, 2024

Hello there! From the DISABLED prefix in this issue title, it looks like you are attempting to disable a test in PyTorch CI. The information I have parsed is below:
  • Test name: test_non_contiguous_input_addmm (__main__.TestMaxAutotune)
  • Platforms for which to skip the test: linux, rocm, slow
  • Disabled by pytorch-bot[bot]

Within ~15 minutes, test_non_contiguous_input_addmm (__main__.TestMaxAutotune) will be disabled in PyTorch CI for these platforms: linux, rocm, slow. Please verify that your test name looks correct, e.g., test_cuda_assert_async (__main__.TestCuda).

To modify the platforms list, please include a line in the issue body, like below. The default action will disable the test for all platforms if no platforms list is specified.

Platforms: case-insensitive, list, of, platforms

We currently support the following platforms: asan, dynamo, inductor, linux, mac, macos, rocm, slow, win, windows.


ezyang commented May 15, 2024

This reliably reproduces for me when I run test_max_autotune.py, but I'm not able to bisect it with detect-test-pollution. It's possible a smarter bisection algorithm could figure it out.


pytorch-bot bot commented May 28, 2024

Resolving the issue because the test is no longer flaky after 950 reruns without any failures, and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive.

pytorch-bot bot closed this as completed on May 28, 2024

pytorch-bot bot commented May 29, 2024

Another case of trunk flakiness has been found here. Reopening issue. The list of platforms [linux, rocm, slow] appears to contain all the recently affected platforms [linux, rocm].

pytorch-bot bot reopened this on May 29, 2024
eellison (Contributor) commented:

cc @shunting314 - maybe the atol/rtol is too low for mms
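
As a purely hypothetical illustration of that idea (the test currently uses atol=4e-3, rtol=4e-3, per the traceback in the issue body): an error just above the current threshold fails the check, while a looser 1e-2 tolerance would pass.

    import torch

    ref = torch.zeros(4)
    act = ref + 5e-3  # pretend numerical error of 5e-3, just above atol=4e-3
    print(torch.allclose(ref, act, atol=4e-3, rtol=4e-3))  # False
    print(torch.allclose(ref, act, atol=1e-2, rtol=1e-2))  # True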

shunting314 (Contributor) commented:

I'm not able to repro on an A100:

(pytorch) [shunting@devgpu005.nha1 ~/ws/pytorch (dash)]$ python test/inductor/test_max_autotune.py -k test_non_contiguous_input_addmm
AUTOTUNE addmm(50257x768, 50257x32768, 32768x768)
  addmm 10.5517 ms 100.0%
  bias_addmm 10.6080 ms 99.5%
  triton_mm_16 18.7754 ms 56.2%
  triton_mm_17 18.9425 ms 55.7%
  triton_mm_18 19.8427 ms 53.2%
  triton_mm_9 21.1968 ms 49.8%
  triton_mm_15 22.4176 ms 47.1%
  triton_mm_11 22.6063 ms 46.7%
  triton_mm_10 23.8346 ms 44.3%
  triton_mm_13 27.3663 ms 38.6%
SingleProcess AUTOTUNE benchmarking takes 7.0730 seconds and 0.3409 seconds precompiling
Compiled module path: /tmp/torchinductor_shunting/lh/clhmx66vkn4efzqgyl6va2pxjihm46wwt2sp45iuik74453vepgo.py
frames [('total', 1), ('ok', 1)]
stats [('calls_captured', 1), ('unique_graphs', 1)]
inductor [('fxgraph_cache_miss', 1), ('select_algorithm_precompile', 1), ('select_algorithm_autotune', 1), ('extern_calls', 1)]
aot_autograd [('total', 1), ('ok', 1)]
.
----------------------------------------------------------------------
Ran 1 test in 19.985s

OK

shunting314 (Contributor) commented:

Hmm, running another test first makes it reproducible:

python test/inductor/test_max_autotune.py -v -k test_max_autotune_addmm_zero_size_input_dynamic_False -k test_non_contiguous_input_addmm

shunting314 added a commit that referenced this issue May 29, 2024
…tune.py"


Fix #126176. We should not use torch.empty to generate input data if we are going to do any accuracy test: torch.empty may return NaN, and in that case both the reference and the actual result may contain NaN at the same index, but NaN != NaN, so the test fails.

Also, whether torch.empty returns NaN is not deterministic; it may depend on which other tests ran earlier.

Generating random data instead of calling torch.empty fixes the problem.



shunting314 added a commit that referenced this issue May 29, 2024
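
The failure mode described in the commit message can be reproduced in isolation; the snippet below is illustrative only, not the test code:

    import torch

    # torch.empty returns uninitialized memory, which may or may not contain NaN
    # depending on what ran earlier, so the failure is non-deterministic.
    # NaN compares unequal to itself, so even bit-identical tensors fail allclose.
    t = torch.tensor([float("nan"), 1.0])
    print(torch.allclose(t, t.clone()))                  # False: NaN != NaN
    print(torch.allclose(t, t.clone(), equal_nan=True))  # True only when opted in
    # The fix: generate real (random) data instead of uninitialized memory,
    # e.g. torch.randn(shape) rather than torch.empty(shape).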