nn_conv1d function

Conv1D module

Applies a 1D convolution over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size $(N, C_{\text{in}}, L)$ and output $(N, C_{\text{out}}, L_{\text{out}})$ can be precisely described as:

nn_conv1d(
  in_channels,
  out_channels,
  kernel_size,
  stride = 1,
  padding = 0,
  dilation = 1,
  groups = 1,
  bias = TRUE,
  padding_mode = "zeros"
)

Arguments

  • in_channels: (int): Number of channels in the input signal
  • out_channels: (int): Number of channels produced by the convolution
  • kernel_size: (int or tuple): Size of the convolving kernel
  • stride: (int or tuple, optional): Stride of the convolution. Default: 1
  • padding: (int, tuple or str, optional): Padding added to both sides of the input. Default: 0
  • dilation: (int or tuple, optional): Spacing between kernel elements. Default: 1
  • groups: (int, optional): Number of blocked connections from input channels to output channels. Default: 1
  • bias: (bool, optional): If TRUE, adds a learnable bias to the output. Default: TRUE
  • padding_mode: (string, optional): 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'

Details

$$\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{\text{in}} - 1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)$$

where $\star$ is the valid cross-correlation operator, $N$ is a batch size, $C$ denotes a number of channels, and $L$ is a length of signal sequence.

  • stride controls the stride for the cross-correlation, a single number or a one-element tuple.

  • padding controls the amount of implicit zero-paddings on both sides for padding number of points.

  • dilation controls the spacing between the kernel points; this is also known as the à trous algorithm. It is harder to describe in words, but animated visualizations of dilated convolutions make the effect much easier to see.

  • groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,

    • At groups=1, all inputs are convolved to all outputs.
    • At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated.
    • At groups = in_channels, each input channel is convolved with its own set of filters, of size $\left\lfloor \frac{\text{out\_channels}}{\text{in\_channels}} \right\rfloor$.
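As a sketch of how groups changes the layer (assuming the torch R package is installed), the weight shape shows how many input channels each filter sees:

```r
library(torch)

# groups = 1: every output channel is connected to all 16 input channels
m1 <- nn_conv1d(16, 32, kernel_size = 3, groups = 1)
m1$weight$shape  # 32, 16, 3

# groups = 4: each output channel sees only 16 / 4 = 4 input channels
m4 <- nn_conv1d(16, 32, kernel_size = 3, groups = 4)
m4$weight$shape  # 32, 4, 3
```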

Note

Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.
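A minimal sketch of this effect (assuming torch is installed): with no padding, a kernel of size 5 drops four positions, while padding = 2 restores the input length:

```r
library(torch)

x <- torch_randn(1, 1, 10)

# valid cross-correlation: L_out = 10 - (5 - 1) = 6
m_valid <- nn_conv1d(1, 1, kernel_size = 5)
m_valid(x)$shape  # 1, 1, 6

# padding = 2 on both sides keeps the full length
m_padded <- nn_conv1d(1, 1, kernel_size = 5, padding = 2)
m_padded(x)$shape  # 1, 1, 10
```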

When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also known in the literature as a depthwise convolution. In other words, for an input of size $(N, C_{\text{in}}, L_{\text{in}})$, a depthwise convolution with depthwise multiplier K can be constructed with the arguments $(C_{\text{in}} = C_{\text{in}}, C_{\text{out}} = C_{\text{in}} \times K, ..., \text{groups} = C_{\text{in}})$.
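For instance (a sketch assuming torch is installed), a depthwise convolution with in_channels = 4 and multiplier K = 2 looks like this:

```r
library(torch)

in_channels <- 4
K <- 2

# groups == in_channels gives each input channel its own K filters
m <- nn_conv1d(in_channels, in_channels * K, kernel_size = 3,
               groups = in_channels)
m$weight$shape  # 8, 1, 3: each filter sees a single input channel
```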

Shape

  • Input: $(N, C_{\text{in}}, L_{\text{in}})$
  • Output: $(N, C_{\text{out}}, L_{\text{out}})$ where
$$L_{\text{out}} = \left\lfloor \frac{L_{\text{in}} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel\_size} - 1) - 1}{\text{stride}} + 1 \right\rfloor$$
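The formula can be checked with a small helper (conv1d_out_len is a hypothetical name, not part of torch):

```r
# L_out for a 1D convolution, following the formula above
conv1d_out_len <- function(l_in, kernel_size, stride = 1, padding = 0,
                           dilation = 1) {
  floor((l_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)
}

conv1d_out_len(50, kernel_size = 3, stride = 2)   # 24
conv1d_out_len(10, kernel_size = 5, padding = 2)  # 10 ("same" length)
```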

Attributes

  • weight (Tensor): the learnable weights of the module, of shape $(\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}}, \text{kernel\_size})$. The values of these weights are sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{\text{groups}}{C_{\text{in}} \times \text{kernel\_size}}$.
  • bias (Tensor): the learnable bias of the module, of shape (out_channels). If bias is TRUE, the values are sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{\text{groups}}{C_{\text{in}} \times \text{kernel\_size}}$.
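These shapes and the initialization bound can be inspected directly (a sketch assuming torch is installed):

```r
library(torch)

m <- nn_conv1d(16, 33, kernel_size = 3)
m$weight$shape  # 33, 16, 3  (out_channels, in_channels / groups, kernel_size)
m$bias$shape    # 33

# k = groups / (in_channels * kernel_size); all weights lie in (-sqrt(k), sqrt(k))
k <- 1 / (16 * 3)
max(abs(as_array(m$weight))) <= sqrt(k)  # TRUE
```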

Examples

if (torch_is_installed()) {
  m <- nn_conv1d(16, 33, 3, stride = 2)
  input <- torch_randn(20, 16, 50)
  output <- m(input)
}
  • Maintainer: Daniel Falbel
  • License: MIT + file LICENSE
  • Last published: 2025-02-14