Multiply complex tensors with float scalars and vice versa #481

Open

cycomanic opened this issue Dec 19, 2020 · 2 comments
@cycomanic

I'm trying to use Nim and Arraymancer for a scientific project (and teach myself Nim in the process). I stumbled across an inconsistency, and I'm not sure whether it is intentional (and if it is, why).

Essentially, I'm trying to multiply a float tensor by a complex scalar to produce a complex tensor, or to multiply a complex tensor by a float scalar. Both fail with a type mismatch error, while multiplying a float scalar by a complex scalar works fine.
Is this intentional? I know I can work around it by converting types, but that becomes cumbersome very quickly.

Minimal test case for illustration:

import math, arraymancer, sequtils, complex, sugar

const I = complex64(0,1)

var
  N : int = 2^10
  tmax : float
  x : Complex[float64]
  t = newTensor[float]([N])
  g = newTensor[Complex[float64]]([N])
  gg = newTensor[Complex[float64]]([N])

tmax = 10
t = arange(0.0, tmax)
g = I * t  # does not work
g = I * t.astype(Complex[float64]) # works
x = tmax*I  # works
gg = g * tmax  # does not work
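
For reference, the conversion workaround for that last line promotes the scalar instead of the tensor (using std/complex's complex constructor, whose imaginary part defaults to 0.0):

gg = g * complex(tmax)  # Complex[float64] scalar now matches the Tensor[T] * T overload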
@mratsim (Owner) commented Jan 5, 2021

It's a missing broadcast.

It should be easy to add; only same-type broadcasts are configured at the moment.

# ##############################################
# # Broadcasting Tensor-Scalar and Scalar-Tensor

proc `+.`*[T: SomeNumber|Complex[float32]|Complex[float64]](val: T, t: Tensor[T]): Tensor[T] {.noInit.} =
  ## Broadcasted addition for scalar + tensor.
  returnEmptyIfEmpty(t)
  result = t.map_inline(x + val)

proc `+.`*[T: SomeNumber|Complex[float32]|Complex[float64]](t: Tensor[T], val: T): Tensor[T] {.noInit.} =
  ## Broadcasted addition for tensor + scalar.
  returnEmptyIfEmpty(t)
  result = t.map_inline(x + val)

proc `-.`*[T: SomeNumber|Complex[float32]|Complex[float64]](val: T, t: Tensor[T]): Tensor[T] {.noInit.} =
  ## Broadcasted subtraction for scalar - tensor.
  returnEmptyIfEmpty(t)
  result = t.map_inline(val - x)

proc `-.`*[T: SomeNumber|Complex[float32]|Complex[float64]](t: Tensor[T], val: T): Tensor[T] {.noInit.} =
  ## Broadcasted subtraction for tensor - scalar.
  returnEmptyIfEmpty(t)
  result = t.map_inline(x - val)

proc `/.`*[T: SomeNumber|Complex[float32]|Complex[float64]](val: T, t: Tensor[T]): Tensor[T] {.noInit.} =
  ## Broadcasted division of a scalar by a tensor.
  returnEmptyIfEmpty(t)
  when T is SomeInteger:
    result = t.map_inline(val div x)
  else:
    result = t.map_inline(val / x)

proc `/.`*[T: SomeNumber|Complex[float32]|Complex[float64]](t: Tensor[T], val: T): Tensor[T] {.noInit.} =
  ## Broadcasted division of a tensor by a scalar.
  returnEmptyIfEmpty(t)
  when T is SomeInteger:
    result = t.map_inline(x div val)
  else:
    result = t.map_inline(x / val)

proc `^.`*[T: SomeFloat|Complex[float32]|Complex[float64]](t: Tensor[T], exponent: T): Tensor[T] {.noInit.} =
  ## Compute element-wise exponentiation.
  returnEmptyIfEmpty(t)
  result = t.map_inline pow(x, exponent)
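
For the mixed-type cases in this report, overloads whose scalar and tensor element types differ would be needed. A minimal sketch of the shape such an overload could take (hypothetical code, not currently in Arraymancer; it relies on map accepting a closure that changes the element type):

proc `+.`*[T: SomeFloat](val: Complex[T], t: Tensor[T]): Tensor[Complex[T]] {.noInit.} =
  ## Broadcasted addition of a complex scalar and a real tensor,
  ## promoting the result to a complex tensor.
  t.map(proc(x: T): Complex[T] = val + x)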

@alifarazz commented Mar 21, 2021

> It's a missing broadcast.
>
> It should be easy to add; only same-type broadcasts are configured at the moment.

I added the missing broadcasts that I thought you meant, but they don't seem to help: the same type-mismatch error persists and compilation still fails.


Alternatively, I tracked down the source code for the `*` operator used in g = I * t. It is:

proc `*`*[T: SomeNumber|Complex[float32]|Complex[float64]](a: T, t: Tensor[T]): Tensor[T] {.noInit.} =
  ## Element-wise multiplication by a scalar
  returnEmptyIfEmpty(t)
  t.map_inline(x * a)

and

proc `*`*[T: SomeNumber|Complex[float32]|Complex[float64]](t: Tensor[T], a: T): Tensor[T] {.noInit.} =
  ## Element-wise multiplication by a scalar
  a * t

Since T must resolve to a single type, neither overload matches I * t (a Complex[float64] scalar with a Tensor[float]). I hacked together a solution with the sole goal of getting it to compile. It does compile, but I'm certain there's a much better way of doing this:
https://github.com/alifarazz/Arraymancer/blob/37148f36b53e916ac5c00353c3d72a3d3de46b0f/src/arraymancer/tensor/operators_blas_l1.nim#L72-L76
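
The rough shape I was aiming for is a pair of mixed-type `*` overloads that promote the result to a complex tensor. A sketch only (the SomeFloat constraint and the use of map with a type-changing closure are my assumptions, not Arraymancer's actual code):

proc `*`*[T: SomeFloat](a: Complex[T], t: Tensor[T]): Tensor[Complex[T]] {.noInit.} =
  ## Multiply a real tensor by a complex scalar, yielding a complex tensor.
  t.map(proc(x: T): Complex[T] = a * x)

proc `*`*[T: SomeFloat](t: Tensor[Complex[T]], a: T): Tensor[Complex[T]] {.noInit.} =
  ## Multiply a complex tensor by a real scalar.
  t.map(proc(x: Complex[T]): Complex[T] = x * a)

With these, both g = I * t and gg = g * tmax from the original example would resolve without explicit conversions.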
