Goal
Rework the redundant data masking in store operations with the help of linear layouts. The masking mechanism should be able to handle any tensor shape.
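For intuition, here is a conceptual sketch (not Triton's actual code; the basis representation and the helper names are made up for this illustration). In a linear layout, every hardware-index bit (register, lane, warp) contributes a basis vector to the logical tensor index; a bit with a zero basis is a broadcast bit, so flipping it does not change which element a thread holds. The store mask then only has to require that all broadcast bits are zero, which stays correct for any tensor shape because it is derived from the layout itself.

```python
# Conceptual sketch: derive a "who actually stores" predicate from a linear
# layout. `bases[i]` is the tensor-index contribution of hardware-index bit i;
# a zero basis means the bit is redundant (broadcast). All names here are
# hypothetical and chosen only for this illustration.

def redundant_bit_mask(bases):
    """Bitmask of hardware-index bits that do not change the stored element."""
    mask = 0
    for i, basis in enumerate(bases):
        if basis == 0:
            mask |= 1 << i
    return mask

def should_store(hw_index, bases):
    """Store only from the canonical holder: all redundant bits set to zero."""
    return (hw_index & redundant_bit_mask(bases)) == 0

# Example: 4 lane bits where lane bits 2 and 3 are broadcast (zero basis),
# so each element is held by 4 lanes and exactly one of them performs the store.
bases = [1, 2, 0, 0]
storing_lanes = [lane for lane in range(16) if should_store(lane, bases)]
assert storing_lanes == [0, 1, 2, 3]
```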
DoD
The test_convert_mma2mma tests introduced in #5495 are fully functional on AMD hardware, i.e. no tests are skipped: https://github.com/triton-lang/triton/pull/5495/files#diff-ac90f00a03feff6747a9c9ac5af1f44bb67e56bf911e05db71a2c1e818e897e7R5730
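One way to check this criterion locally (a hedged example; the test name comes from #5495, and the path is assumed to be relative to the repository root):

```python
import pytest

# Run only the mma2mma conversion tests and list any skipped cases with their
# reasons (-rs) in the summary; the DoD is that no skips appear.
pytest.main([
    "python/test/unit/language/test_core.py",
    "-k", "test_convert_mma2mma",
    "-rs",
])
```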
Hints
See PR #5225 for the Nvidia-specific implementation.
Since the test does not require actual tensor/matrix cores, older hardware can be used.
For example, a Radeon 6xxx could be used to run the WMMA tests; to do that you need to (a hedged sketch follows this list):
- triton/python/test/unit/language/test_core.py, line 223 in 11ef427: if "gfx1" in target_arch:
- change [16, 16] to [8, 8]
- triton/lib/Dialect/TritonGPU/IR/LinearLayoutConversions.cpp, line 494 in 11ef427
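One possible reading of the hint, sketched in Python (the real code around line 223 of 11ef427 is certainly shaped differently, and wmma_instr_shape/target_arch are hypothetical names used only here):

```python
def wmma_instr_shape(target_arch: str):
    # Hypothetical helper mirroring the hinted change: on gfx1* targets
    # (RDNA, e.g. Radeon 6xxx) use an [8, 8] instruction shape instead of
    # [16, 16], so the layout-conversion test can run without real matrix
    # cores. The layout side (LinearLayoutConversions.cpp, line 494 in
    # 11ef427) is the other location the hint points at.
    if "gfx1" in target_arch:
        return [8, 8]   # was [16, 16]
    return [16, 16]

assert wmma_instr_shape("gfx1030") == [8, 8]
```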
@simonidaa work on this task next, please.
Before you make a PR, please wait for #5495 to land
would like to work on this
already on it