The current Scorch API:
Creation
(torch-byte-tensor number-list)
(torch-char-tensor number-list)
(torch-short-tensor number-list)
(torch-int-tensor number-list)
(torch-long-tensor number-list)
(torch-float-tensor number-list)
(torch-double-tensor number-list)
(torch-cuda-byte-tensor number-list)
(torch-cuda-char-tensor number-list)
(torch-cuda-short-tensor number-list)
(torch-cuda-int-tensor number-list)
(torch-cuda-long-tensor number-list)
;; This should be called torch-cuda-float-tensor but is not, for compatibility
;; with the Torch naming conventions.
(torch-cuda-tensor number-list)
(torch-cuda-double-tensor number-list)
(torch-fill-byte dimensions number)
(torch-fill-char dimensions number)
(torch-fill-short dimensions number)
(torch-fill-int dimensions number)
(torch-fill-long dimensions number)
(torch-fill-float dimensions number)
(torch-fill-double dimensions number)
(torch-fill-cuda-byte dimensions number)
(torch-fill-cuda-char dimensions number)
(torch-fill-cuda-short dimensions number)
(torch-fill-cuda-int dimensions number)
(torch-fill-cuda-long dimensions number)
;; This should be called torch-fill-cuda-float but is not, for compatibility
;; with the Torch naming conventions.
(torch-fill-cuda dimensions number)
(torch-fill-cuda-double dimensions number)
(torch-randn-float dimensions)
(torch-randn-double dimensions)
;; This should be called torch-randn-cuda-float but is not, for compatibility
;; with the Torch naming conventions.
(torch-randn-cuda dimensions)
(torch-randn-cuda-double dimensions)
(torch-normal-float dimensions mean stdv)
(torch-normal-double dimensions mean stdv)
;; This should be called torch-normal-cuda-float but is not, for compatibility
;; with the Torch naming conventions.
(torch-normal-cuda dimensions mean stdv)
(torch-normal-cuda-double dimensions mean stdv)
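For illustration, here is a hypothetical session with these constructors; the
quoted-list form of the dimensions argument and the results described in the
comments are assumptions, not verified output:
(define v (torch-float-tensor '(1.0 2.0 3.0)))  ;; 1D host float tensor
(define z (torch-fill-double '(2 3) 0.0))       ;; 2x3 tensor of zeros
(define r (torch-randn-float '(4 4)))           ;; 4x4 standard-normal samples
(define n (torch-normal-double '(10) 0.0 2.0))  ;; 10 samples, mean 0, stdv 2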
Type predicates
(byte-tensor? thing)
(char-tensor? thing)
(short-tensor? thing)
(int-tensor? thing)
(long-tensor? thing)
(float-tensor? thing)
(double-tensor? thing)
(cuda-byte-tensor? thing)
(cuda-char-tensor? thing)
(cuda-short-tensor? thing)
(cuda-int-tensor? thing)
(cuda-long-tensor? thing)
;; This should be called cuda-float-tensor? but is not, for compatibility
;; with the Torch naming conventions.
(cuda-tensor? thing)
(cuda-double-tensor? thing)
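A hedged sketch of dispatching on these predicates, here to move a host
tensor to the GPU using the coercions listed further below; ensure-cuda is a
hypothetical helper name, and coercing a double tensor to a CUDA float tensor
with torch-cuda is an assumption:
(define (ensure-cuda t)
  (cond ((cuda-tensor? t) t)                 ;; already a CUDA float tensor
        ((float-tensor? t) (torch-cuda t))   ;; host float -> CUDA float
        ((double-tensor? t) (torch-cuda t))  ;; host double -> CUDA float
        (else (error "cannot move to GPU:" t))))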
RNG
(start-generator [seed])
(stop-generator)
CUDA
(start-cuda [GPU-or-list-of-GPUs])
(stop-cuda)
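The RNG and CUDA calls bracket a session. A hypothetical setup, where the
seed and GPU indices are illustrative:
(start-cuda 0)         ;; or (start-cuda '(0 1)) for multiple GPUs
(start-generator 42)   ;; the seed argument is optional
;; ... tensor computations ...
(stop-generator)
(stop-cuda)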
Accessors
(torch-size tensor)
(tensor->list 1D-tensor)
;; Our standard library provides
(nested-list->float-tensor l)
(nested-list->double-tensor l)
;; This should be called nested-list->cuda-float-tensor but is not, for
;; compatibility with the Torch naming conventions.
(nested-list->cuda-tensor l)
(nested-list->cuda-double-tensor l)
(tensor->nested-list t)
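A sketch of a round trip through the accessors; the row-major lists-of-lists
nesting and the shape of the torch-size result are assumptions:
(define m (nested-list->double-tensor '((1.0 2.0) (3.0 4.0))))
(torch-size m)           ;; presumably => (2 2)
(tensor->nested-list m)  ;; => ((1.0 2.0) (3.0 4.0))
(tensor->list (torch-double-tensor '(5.0 6.0)))  ;; => (5.0 6.0)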
Coercion
(torch-byte tensor)
(torch-char tensor)
(torch-short tensor)
(torch-int tensor)
(torch-long tensor)
(torch-float tensor)
(torch-double tensor)
(torch-cuda-byte tensor)
(torch-cuda-char tensor)
(torch-cuda-short tensor)
(torch-cuda-int tensor)
(torch-cuda-long tensor)
;; This should be called torch-cuda-float but is not, for compatibility
;; with the Torch naming conventions.
(torch-cuda tensor)
(torch-cuda-double tensor)
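A hedged example of coercing between element types and devices; the rounding
behavior of float-to-int coercion is an assumption:
(define h (torch-float-tensor '(0.5 1.5 2.5)))
(define i (torch-int h))        ;; host float -> host int
(define g (torch-cuda h))       ;; host float -> CUDA float
(define b (torch-cuda-byte h))  ;; host float -> CUDA byte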
Some Scheme primitives that are overloaded to apply pointwise to tensors:
+
-
*
/
max
min
sqrt
exp
log
sin
cos
atan
=
<
>
<=
>=
zero?
positive?
negative?
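A sketch of the pointwise overloading; whether and how mixed tensor/scalar
arguments broadcast is not specified here:
(define x (torch-float-tensor '(1.0 4.0 9.0)))
(define y (torch-float-tensor '(1.0 2.0 3.0)))
(+ x y)        ;; elementwise sum
(* x y)        ;; elementwise product
(sqrt x)       ;; elementwise square root, presumably (1.0 2.0 3.0)
(< y x)        ;; elementwise comparison
(positive? x)  ;; elementwise test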
Some Torch primitives:
(torch-view tensor number-list)
(torch-transpose tensor)
(torch-dot tensor tensor)
(torch-sumall tensor)
(torch-addmv tensor tensor)
(torch-addmm tensor tensor)
(torch-addr tensor tensor)
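Hypothetical uses of these primitives; the results in the comments follow the
usual Torch semantics and are assumptions about this binding:
(define v (torch-float-tensor '(1.0 2.0 3.0)))
(torch-dot v v)                   ;; inner product => 14.0
(torch-sumall v)                  ;; sum of all elements => 6.0
(define m (torch-view v '(3 1)))  ;; reshape to a 3x1 matrix
(torch-transpose m)               ;; its 1x3 transpose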
Some less-clean parts of the API:
;; Initially, we had a very general mechanism for convolutions of any
;; dimension, and we divided the API into parts that did padding, chopping
;; (its inverse), decimation, and interpolation, separately from the
;; convolution itself.  But we now use cuDNN, which combines all of these
;; and is hardwired to particular numbers of dimensions.  (A sketch of pad
;; and chop follows this list.)
(pad tensor number-list)
(chop tensor number-list)
(decimate tensor number-list)
(interpolate tensor number-list)
(interpolate-to-size tensor number-list)
(ReLU tensor)
(convolve2d ...)
;; Batch normalization is stateful.
(batch-normalization ...)
(batch-normalization-validation ...)
(initialize-batch-normalization ...)
(convolve-add-tied ...)
(max-pool2d ...)
(avg-pool2d ...)
;; This was an attempt to separate dropout into a part that was
;; deterministic and a part that was not.
(get-dropout-masks ...)
(get-dropout-planewise-masks ...)
(index-of-max ...)
(cross-entropy-loss ...)
;; We have hardcoded support for training on ImageNet.
(reduce-ten-crop ...)
(setup-weight-streamer ...)
(stream-weights ...)
(make-data-loader ...)
(get-next-batch ...)
(read-image-as-float-tensor ...)
(resize-image ...)
(center-crop-image ...)
;; These serialize and deserialize tensors.
(save-weights ...)
(load-weights ...)
(setup-cache-allocator ...)
(update-parameters ...)
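As promised above, a sketch of the padding/chopping pair, the one group here
whose signatures are fully listed; reading the number-list as per-dimension
amounts is an assumption:
(define t (torch-fill-float '(4 4) 1.0))
(define p (pad t '(1 1)))  ;; presumably a 6x6 tensor, padded with zeros
(chop p '(1 1))            ;; presumably recovers the original 4x4 tensor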
We adopt a convention where neural network layers are defined in a curried
fashion:
(define (((layer hyperparameters ...) weights ...) inputs ...) ...)
With this, one can build a conventional network by function composition.
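A minimal sketch of a layer in this style, using only the pointwise overloads
listed earlier; activation-layer, w1, b1, w2, b2, and input are hypothetical
names, not part of the API:
(define (((activation-layer activation) w b) x)
  (activation (+ (* w x) b)))  ;; pointwise affine map plus nonlinearity
(define (compose2 f g) (lambda (x) (f (g x))))
(define network
  (compose2 ((activation-layer exp) w2 b2)
            ((activation-layer ReLU) w1 b1)))  ;; weights defined elsewhere
(network input)  ;; input: a tensor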
Using this API, we have been able to implement and train ResNet-152 on
ImageNet (using GPUs). We have also been able to tensorize two different ray
tracers that also render using the GPU.
Jeff (http://engineering.purdue.edu/~qobi)