grain.cudnn.softmaxForward

Compute the softmax over all C for each H, W, N.
version (grain_cuda)
void softmaxForward(cudnnSoftmaxAlgorithm_t A, T, size_t dim)(
    Variable!(T, dim, DeviceStorage) x,
    Variable!(T, dim, DeviceStorage) y,
    T alpha = 1.0,
    T beta = 0.0
)
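The summary above says the softmax is taken over all C for each (H, W, N) location, i.e. cuDNN's channel mode, and (per the standard cuDNN convention) alpha/beta blend the result into the output as y = alpha * softmax(x) + beta * y. As a minimal, hedged sketch of those semantics only (plain NumPy, not part of grain and not calling cuDNN), a channel-mode softmax over an NCHW array looks like:

```python
import numpy as np

def softmax_channel(x):
    """Softmax over the C axis of an NCHW array: one probability
    distribution per (n, h, w) location (channel-mode semantics)."""
    m = x.max(axis=1, keepdims=True)   # subtract the per-location max for stability
    e = np.exp(x - m)
    return e / e.sum(axis=1, keepdims=True)

x = np.arange(24, dtype=np.float64).reshape(1, 2, 3, 4)  # N=1, C=2, H=3, W=4
y = softmax_channel(x)
# each (n, h, w) column over C sums to 1
print(np.allclose(y.sum(axis=1), 1.0))
```

The shape is preserved; only the values along the C axis are normalized into a distribution at every spatial position.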