Multiply complex tensors with float scalars and vice versa #481
Comments
It's a missing broadcast. It should be easy to add; only same-type broadcasts are configured at the moment. See Arraymancer/src/arraymancer/tensor/operators_broadcasted.nim, lines 97 to 139 at bdcdfe1.
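A mixed-type broadcast in the spirit of the comment above might look like the following sketch. This is an assumption about what such an overload would need, not the actual patch; `Tensor`, `map_inline`, and the `*.` broadcast operator are taken from Arraymancer's conventions, and `complex` is Nim's `std/complex` constructor.

```nim
# Hypothetical sketch of a mixed-type scalar broadcast (not the real patch).
import std/complex
import arraymancer

proc `*.`*[T: SomeFloat](val: Complex[T], t: Tensor[T]): Tensor[Complex[T]] =
  ## Broadcasted multiplication of a complex scalar with a float tensor,
  ## lifting each float element to a complex number first.
  t.map_inline(val * complex(x))

proc `*.`*[T: SomeFloat](t: Tensor[T], val: Complex[T]): Tensor[Complex[T]] =
  ## Same operation with the operands swapped.
  t.map_inline(complex(x) * val)
```

The key point is that the result type (`Tensor[Complex[T]]`) differs from the input tensor's element type, which is exactly what the existing same-type broadcasts do not cover.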
I added the missing broadcasts that I thought you meant. They don't seem to help, though: the same error persists and compilation still fails. Alternatively, I tracked down the relevant source code in Arraymancer/src/arraymancer/tensor/operators_blas_l1.nim, lines 67 to 70 and lines 72 to 74 at 9a575de.
I hacked together a horrible solution, with just getting it to compile as my goal. It does compile, but I'm certain there's a much better way of doing this.
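A workaround overload along these lines (hypothetical, not the commenter's actual hack) could define plain `*` for the two mixed scalar/tensor cases by converting the float side to complex element by element:

```nim
# Hypothetical workaround: mixed complex/float `*` overloads.
# Tensor and map_inline are Arraymancer names; this is a sketch, not the
# commenter's actual code.
import std/complex
import arraymancer

proc `*`*[T: SomeFloat](t: Tensor[Complex[T]], s: T): Tensor[Complex[T]] =
  ## Complex tensor times float scalar.
  t.map_inline(x * complex(s))

proc `*`*[T: SomeFloat](s: T, t: Tensor[Complex[T]]): Tensor[Complex[T]] =
  ## Float scalar times complex tensor.
  t.map_inline(complex(s) * x)
```

This compiles the mixed cases away by lifting the scalar, at the cost of an extra `complex(s)` per element unless the compiler hoists it; a cleaner fix would live next to the existing scalar operators in operators_blas_l1.nim.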
I'm trying to use Nim and Arraymancer for a scientific project (and teach myself Nim in the process). I stumbled across an inconsistency, and I'm not sure whether it is intentional (and, if it is, why it works that way).
Essentially, I'm trying to multiply a float tensor by a complex number to produce a complex tensor, or to multiply a complex tensor by a float scalar. Both fail with a type mismatch error, while multiplying a float scalar by a complex scalar is valid.
Is this intentional? I know I can work around it by converting types, but that becomes cumbersome very quickly.
Minimal test case for illustration:
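The original snippet did not survive extraction; a reconstruction along the lines described (my assumption of what it showed, using Arraymancer's `toTensor` and Nim's `std/complex`) would be:

```nim
# Reconstructed minimal test case (the original code block is missing).
import std/complex
import arraymancer

let ft = [1.0, 2.0, 3.0].toTensor   # a float tensor
let c  = complex(0.0, 1.0)          # a complex scalar

echo 2.0 * c     # float scalar * complex scalar: compiles fine
# echo c * ft    # complex scalar * float tensor: type mismatch at compile time
# echo ft * c    # float tensor * complex scalar: type mismatch at compile time

# Workaround: lift the float tensor to a complex tensor element by element.
let ct = ft.map(proc(x: float): Complex[float64] = complex(x))
echo ct * c      # complex tensor * complex scalar
```

Note the asymmetry: the scalar-scalar case works because `std/complex` provides mixed `*` overloads, while the tensor operators only accept matching element types.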