Python torch.nn.functional.group_norm() Examples
The following are 7 code examples of torch.nn.functional.group_norm().
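For reference, the functional signature is F.group_norm(input, num_groups, weight=None, bias=None, eps=1e-05): the channel dimension of the input is split into num_groups groups, each group is normalized to zero mean and unit variance, and an optional per-channel affine transform is applied. Before the project examples, here is a minimal standalone sketch of a direct call; the tensor shapes are illustrative, not taken from any of the projects below:

import torch
import torch.nn.functional as F

x = torch.randn(8, 32, 16, 16)       # (N, C, H, W); C must be divisible by num_groups
out = F.group_norm(x, num_groups=4)  # 4 groups of 8 channels, no affine parameters
print(out.shape)                     # torch.Size([8, 32, 16, 16])

# with per-channel affine parameters
weight = torch.ones(32)
bias = torch.zeros(32)
out = F.group_norm(x, 4, weight, bias, eps=1e-5)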
Example #1
Source File: layers.py From BigGAN-PyTorch with MIT License
def groupnorm(x, norm_style):
    # If number of channels specified in norm_style:
    if 'ch' in norm_style:
        ch = int(norm_style.split('_')[-1])
        groups = max(int(x.shape[1]) // ch, 1)
    # If number of groups specified in norm style
    elif 'grp' in norm_style:
        groups = int(norm_style.split('_')[-1])
    # If neither, default to groups = 16
    else:
        groups = 16
    return F.group_norm(x, groups)


# Class-conditional bn
# output size is the number of channels, input size is for the linear layers
# Andy's Note: this class feels messy but I'm not really sure how to clean it up
# Suggestions welcome! (By which I mean, refactor this and make a pull request
# if you want to make this more readable/usable).
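The groupnorm helper above derives the group count from a string tag: 'ch_<n>' fixes the number of channels per group, 'grp_<n>' fixes the number of groups directly, and anything else falls back to 16 groups. A hedged usage sketch with an illustrative feature map (assumes groupnorm above is in scope):

import torch
import torch.nn.functional as F

x = torch.randn(2, 64, 8, 8)    # (N, C, H, W) with C = 64

out = groupnorm(x, 'ch_16')     # 16 channels per group -> 64 // 16 = 4 groups
out = groupnorm(x, 'grp_8')     # 8 groups directly
out = groupnorm(x, 'default')   # no 'ch'/'grp' tag -> default of 16 groups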
Example #2
Source File: mixture_batchnorm.py From Parsing-R-CNN with MIT License
def forward(self, x):
    output = F.group_norm(x, self.num_groups, self.weight, self.bias, self.eps)
    size = output.size()
    y = self.attention_weights(x)  # TODO: use output as attention input
    weight = y @ self.weight_
    bias = y @ self.bias_
    weight = weight.unsqueeze(-1).unsqueeze(-1).expand(size)
    bias = bias.unsqueeze(-1).unsqueeze(-1).expand(size)
    return weight * output + bias
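Here the affine parameters are not fixed per channel: an attention head produces per-sample mixing weights y, which blend K banks of per-channel scales and shifts. A shape walkthrough under assumed dimensions (every tensor below is a stand-in for one of the module's attributes, not the Parsing-R-CNN code itself):

import torch

N, C, K = 2, 32, 4                           # batch, channels, mixture components (assumed)
output = torch.randn(N, C, 8, 8)             # stands in for the group-normalized activations
y = torch.softmax(torch.randn(N, K), dim=1)  # stands in for self.attention_weights(x)
weight_ = torch.randn(K, C)                  # stands in for self.weight_
bias_ = torch.randn(K, C)                    # stands in for self.bias_

weight = (y @ weight_).unsqueeze(-1).unsqueeze(-1).expand(output.size())  # (N, C) -> (N, C, 8, 8)
bias = (y @ bias_).unsqueeze(-1).unsqueeze(-1).expand(output.size())
result = weight * output + bias              # same shape as output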
Example #3
Source File: misc.py From seamseg with BSD 3-Clause "New" or "Revised" License
def forward(self, x):
    x = functional.group_norm(x, self.num_groups, self.weight, self.bias, self.eps)
    if self.activation == "relu":
        return functional.relu(x, inplace=True)
    elif self.activation == "leaky_relu":
        return functional.leaky_relu(x, negative_slope=self.activation_param, inplace=True)
    elif self.activation == "elu":
        return functional.elu(x, alpha=self.activation_param, inplace=True)
    elif self.activation == "identity":
        return x
    else:
        raise RuntimeError("Unknown activation function {}".format(self.activation))
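Examples #3, #5 and #6 all fuse the normalization with a configurable activation so callers get a single norm-then-activate layer. A minimal standalone sketch of the pattern with the branch hard-coded to leaky ReLU; the shapes and parameter values are illustrative:

import torch
from torch.nn import functional

x = torch.randn(2, 32, 8, 8)
weight = torch.ones(32)
bias = torch.zeros(32)

x = functional.group_norm(x, 4, weight, bias, 1e-5)
x = functional.leaky_relu(x, negative_slope=0.01, inplace=True)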
Example #4
Source File: fp32_group_norm.py From fairseq with MIT License
def forward(self, input):
    output = F.group_norm(
        input.float(),
        self.num_groups,
        self.weight.float() if self.weight is not None else None,
        self.bias.float() if self.bias is not None else None,
        self.eps,
    )
    return output.type_as(input)
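This wrapper exists because normalization statistics computed in reduced precision can lose accuracy during mixed-precision training, so the input and affine parameters are upcast to float32 and the result is cast back to the input's dtype. A minimal sketch of the same round-trip outside the module, with assumed shapes:

import torch
import torch.nn.functional as F

x = torch.randn(2, 32, 8, 8, dtype=torch.float16)
weight = torch.ones(32, dtype=torch.float16)
bias = torch.zeros(32, dtype=torch.float16)

out = F.group_norm(x.float(), 4, weight.float(), bias.float(), 1e-5)  # compute in fp32
out = out.type_as(x)                                                  # cast back to fp16
print(out.dtype)                                                      # torch.float16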
Example #5
Source File: norm_act.py From pytorch-image-models with Apache License 2.0
def forward(self, x):
    x = F.group_norm(x, self.num_groups, self.weight, self.bias, self.eps)
    if self.act is not None:
        x = self.act(x)
    return x
Example #6
Source File: activated_group_norm.py From pytorch-tools with MIT License
def forward(self, x):
    x = F.group_norm(x, self.num_groups, self.weight, self.bias, self.eps)
    func = ACT_FUNC_DICT[self.activation]
    if self.activation == ACT.LEAKY_RELU:
        return func(x, inplace=True, negative_slope=self.activation_param)
    elif self.activation == ACT.ELU:
        return func(x, inplace=True, alpha=self.activation_param)
    else:
        return func(x, inplace=True)
Example #7
Source File: fp32_group_norm.py From attn2d with MIT License
def forward(self, input):
    output = F.group_norm(
        input.float(),
        self.num_groups,
        self.weight.float() if self.weight is not None else None,
        self.bias.float() if self.bias is not None else None,
        self.eps,
    )
    return output.type_as(input)