Python chainer.cuda.cudnn_enabled() Examples
The following are 2 code examples of chainer.cuda.cudnn_enabled. (Note that although the page title writes it with parentheses, in both examples below it is accessed as a module-level attribute, not called as a function.) You can vote up the ones you like or vote down the ones you don't like, and go to the original project or source file by following the links above each example. You may also want to check out all available functions/classes of the module chainer.cuda, or try the search function.
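As background for the examples below: cuda.cudnn_enabled reports whether Chainer's CUDA backend was imported together with working cuDNN bindings. The following standalone sketch mimics that kind of availability probe; the helper name and the cupy.cudnn import path are illustrative assumptions, not Chainer's actual implementation.

```python
import importlib.util


def cudnn_available():
    """Hypothetical stand-in for a cuda.cudnn_enabled-style flag.

    Returns True only if CuPy is importable and its cuDNN bindings
    load without error; otherwise False.
    """
    # Cheap check first: is CuPy installed at all?
    if importlib.util.find_spec('cupy') is None:
        return False
    try:
        import cupy.cudnn  # noqa: F401  (assumed binding module path)
        return True
    except ImportError:
        return False
```

Code that branches on such a flag (as both examples below do) degrades gracefully: the cuDNN path runs when available, and a pure-NumPy fallback runs otherwise.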
Example #1
Source File: adaptive_softmax.py, from the models project (MIT License) | 6 votes
def backward_log_softmax(self, x, y, gy):
    if cuda.cudnn_enabled:
        cudnn = cuda.cudnn
        libcudnn = cuda.cuda.cudnn
        _algorithm = libcudnn.CUDNN_SOFTMAX_LOG
        _mode = libcudnn.CUDNN_SOFTMAX_MODE_CHANNEL
    xp = cuda.get_array_module(x)
    if xp is not numpy and chainer.should_use_cudnn('>=auto', 3000):
        oz_dtype = 'd' if x.dtype == 'd' else 'f'
        one = numpy.array(1, dtype=oz_dtype).ctypes
        zero = numpy.array(0, dtype=oz_dtype).ctypes
        handle = cudnn.get_handle()
        gx = xp.empty(x.shape, dtype=x.dtype)
        gx_cube = gx.reshape(gx.shape[:2] + (-1, 1))
        desc = cudnn.create_tensor_descriptor(gx_cube)
        libcudnn.softmaxBackward(
            handle, _algorithm, _mode, one.data, desc.value,
            y.data.ptr, desc.value, gy.data.ptr, zero.data,
            desc.value, gx.data.ptr)
    else:
        gx = gy - xp.exp(y) * gy.sum(axis=1, keepdims=True)
    return gx
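The fallback branch of Example #1 is the whole backward pass for log-softmax in one line: gx = gy - exp(y) * sum(gy, axis=1). A minimal NumPy-only sketch of that pair of operations, with a stable forward pass added for context (the forward implementation here is an assumption for illustration, not code from the example above):

```python
import numpy as np


def log_softmax(x):
    # Numerically stable log-softmax along axis 1:
    # subtract the row max before exponentiating.
    m = x.max(axis=1, keepdims=True)
    z = x - m
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))


def backward_log_softmax_cpu(y, gy):
    # The CPU fallback from Example #1: given the forward output y
    # and the upstream gradient gy, the input gradient is
    # gy minus softmax(x) (= exp(y)) scaled by the row sum of gy.
    return gy - np.exp(y) * gy.sum(axis=1, keepdims=True)
```

A quick sanity property: because softmax rows sum to 1, every row of the returned gradient sums to zero, which is characteristic of gradients through a normalized output.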
Example #2
Source File: function_batch_normalization.py, from the GUINNESS project (GNU General Public License v2.0) | 5 votes
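Example #2 uses cuda.cudnn_enabled only to validate a parameter up front: cuDNN's batch normalization rejects eps values at or below 1e-5, so the constructor fails fast rather than erroring later on the GPU. A hypothetical standalone version of just that guard (the function name and boolean parameters are illustrative, not part of the example):

```python
def check_bn_eps(eps, use_cudnn=True, cudnn_enabled=True):
    """Mirror the eps guard from Example #2 as a free function.

    Raises RuntimeError when the cuDNN path would be taken but eps
    is at or below cuDNN's minimum of 1e-5; otherwise returns eps.
    """
    if cudnn_enabled and use_cudnn and eps <= 1e-5:
        raise RuntimeError(
            'cuDNN does not allow an eps value less than 1e-5.')
    return eps
```

Note that the check only applies when the cuDNN path is actually enabled; a pure-CPU configuration can pass a smaller eps without tripping the guard.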
def __init__(self, eps=2e-5, mean=None, var=None, train=False,
             decay=0.9, use_cudnn=True):
    self.running_mean = mean
    self.running_var = var
    self.train = train
    self.eps = eps
    if cuda.cudnn_enabled and use_cudnn:
        if eps <= 1e-5:
            msg = 'cuDNN does not allow an eps value less than 1e-5.'
            raise RuntimeError(msg)
    self.use_cudnn = use_cudnn
    self.mean_cache = None
    self.decay = decay