Re: Implications of array broadcasting
Bradley Lucier (08 Nov 2024 20:51 UTC)
On 11/6/24 19:26, Alex Shinn wrote:
> On Tue, Nov 5, 2024 at 12:07 PM Bradley Lucier <xxxxxx@purdue.edu> wrote:
>
>     Can you give an example where a broadcast just amounts to a
>     reshape?  I can't think of one.
>
> Any time the broadcast is just dropping a trivial dimension, it's a
> reshape.  This does come up fairly often.

The NumPy documentation says:

========
Input arrays do not need to have the same number of dimensions. The
resulting array will have the same number of dimensions as the input
array with the greatest number of dimensions, where the size of each
dimension is the largest size of the corresponding dimension among the
input arrays. Note that missing dimensions are assumed to have size one.
========

I don't see that dropping a trivial dimension is a possibility with
this definition.  Adding one is, but that means the other array
arguments would have a trivial dimension as their leading axis, which
would likely not happen very often.

> [...]
>
>     What I'm leading to is that I believe that we can achieve the
>     effects of array broadcasting in NumPy simply with
>
>       (array-map-with-broadcasting f A_1 A_2 ...)
>
>     which automatically broadcasts arguments when given a set of
>     "compatible" arrays, without reifying broadcast arrays for
>     further purposes.
>
> Unfortunately, this doesn't support all use cases.  The most common
> (and most important) operation is matrix multiplication, which is
> not a map.

I don't see how matrix multiplication can be written using array
broadcasting.  Perhaps that's not what you mean.

> Also, even for mappable operations like `array+`, what I've been
> doing is roughly equivalent to:
>
> (define (array+ a b)
>   (receive (a b) (array-broadcast-both a b)
>     (if (can-use-fast-path? a b)
>         (fast-array+ a b)
>         (array-map + a b))))
>
> where `fast-array+` currently uses BLAS, but I plan to move to CUDA.
> The point here is that the fast path relies on the arguments being
> normal arrays; trying to move the broadcasting logic into the fast
> path would slow it down and defeat the purpose.

I don't see the need to broadcast the arrays before using the fast
path, and, indeed, I don't see how the BLAS routines that you use to
implement fast-array+ could even give the correct answer if given a
nontrivially-broadcast argument.

I understand the desire to avoid impeding fast implementations; I
tried to do that with the sample implementation code that moves array
elements, relying on memmove in the best case.  But I don't see yet
how you think array broadcasting interacts with that.

Brad
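
P.S.  To make the NumPy rule quoted above concrete, here is a rough
sketch of it in Scheme, for shapes represented as lists of dimension
sizes.  The names (broadcast-shape, pad-left) are made up for
illustration; nothing like this is being proposed:

(define (pad-left shape n)
  ;; Prepend trivial dimensions until SHAPE has length N.
  (if (< (length shape) n)
      (pad-left (cons 1 shape) n)
      shape))

(define (broadcast-shape shape-a shape-b)
  ;; Compute the broadcast shape per the quoted rule, or return #f
  ;; if some pair of corresponding dimensions is incompatible.
  (let* ((n (max (length shape-a) (length shape-b)))
         (a (pad-left shape-a n))
         (b (pad-left shape-b n)))
    (call-with-current-continuation
     (lambda (return)
       (map (lambda (da db)
              (cond ((= da db) da)
                    ((= da 1)  db)
                    ((= db 1)  da)
                    (else (return #f))))
            a b)))))

So (broadcast-shape '(3 1) '(2 1 4)) => (2 3 4): trivial dimensions
are only ever added on the left, never dropped, which is my point
above.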
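
P.P.S.  On the fast-path question: what I mean by a
"nontrivially-broadcast argument" can be written as a generalized
array whose getter simply ignores the broadcast axis.  A hypothetical
sketch in SRFI 231 terms (broadcast-v is just a variable name here,
not an actual SRFI 231 procedure or proposal):

(import (scheme base) (srfi 231))

;; A one-dimensional array of length 4 ...
(define v
  (make-array (make-interval '#(4))
              (lambda (j) (vector-ref '#(10 20 30 40) j))))

;; ... broadcast to a 3x4 array by ignoring the row index.
(define broadcast-v
  (make-array (make-interval '#(3 4))
              (lambda (i j) ((array-getter v) j))))

((array-getter broadcast-v) 0 1)  ;; => 20
((array-getter broadcast-v) 2 1)  ;; => 20, same element in every row

There is no 3x4 block of storage behind broadcast-v for a BLAS routine
to walk, which is why I don't see how fast-array+ can consume such an
argument directly.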