torch.cat() torch.Tensor.expand() torch.squeeze() torch.Tensor.repeat()
torch.Tensor.narrow() torch.Tensor.view() torch.Tensor.resize_()
torch.Tensor.permute()
<>Concatenating tensors

torch.cat(seq, dim=0, out=None) → Tensor

* seq (sequence of Tensors) - any Python sequence of tensors of the same type
* dim (int, optional) - the dimension along which the tensors are concatenated
* out (Tensor, optional) - the output tensor

>>> x = torch.randn(2, 3)
>>> x
-0.5866 -0.3784 -0.1705
-1.0125  0.7406 -1.2073
[torch.FloatTensor of size 2x3]
>>> torch.cat((x, x, x), 0)
-0.5866 -0.3784 -0.1705
-1.0125  0.7406 -1.2073
-0.5866 -0.3784 -0.1705
-1.0125  0.7406 -1.2073
-0.5866 -0.3784 -0.1705
-1.0125  0.7406 -1.2073
[torch.FloatTensor of size 6x3]
>>> torch.cat((x, x, x), 1)
-0.5866 -0.3784 -0.1705 -0.5866 -0.3784 -0.1705 -0.5866 -0.3784 -0.1705
-1.0125  0.7406 -1.2073 -1.0125  0.7406 -1.2073 -1.0125  0.7406 -1.2073
[torch.FloatTensor of size 2x9]
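The shapes must agree in every dimension except the one being concatenated. A minimal sketch of the rule, using the modern torch.tensor-style API (variable names are illustrative):

```python
import torch

x = torch.randn(2, 3)
# Along dim 0, the remaining dims must match: (2,3)+(2,3) -> (4,3)
rows = torch.cat((x, x), 0)
# Along dim 1, dim 0 must match: (2,3)+(2,3) -> (2,6)
cols = torch.cat((x, x), 1)
print(rows.shape, cols.shape)
```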
<>Stacking tensors

torch.stack(seq, dim=0) → Tensor

>>> a = torch.IntTensor([[1, 2, 3], [11, 22, 33]])
>>> b = torch.IntTensor([[4, 5, 6], [44, 55, 66]])
>>> c = torch.stack([a, b], 0)
>>> d = torch.stack([a, b], 1)
>>> e = torch.stack([a, b], 2)
>>> print(c)
tensor([[[ 1,  2,  3],
         [11, 22, 33]],

        [[ 4,  5,  6],
         [44, 55, 66]]], dtype=torch.int32)
>>> print(d)
tensor([[[ 1,  2,  3],
         [ 4,  5,  6]],

        [[11, 22, 33],
         [44, 55, 66]]], dtype=torch.int32)
>>> print(e)
tensor([[[ 1,  4],
         [ 2,  5],
         [ 3,  6]],

        [[11, 44],
         [22, 55],
         [33, 66]]], dtype=torch.int32)
When dim = 0: c = [ a, b ]

When dim = 1: d = [ [ a[0], b[0] ], [ a[1], b[1] ] ]

When dim = 2: e = [ [ [ a[0][0], b[0][0] ], [ a[0][1], b[0][1] ], [ a[0][2], b[0][2] ] ],
[ [ a[1][0], b[1][0] ], [ a[1][1], b[1][1] ], [ a[1][2], b[1][2] ] ] ]
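The three cases above follow a single rule: stacking along dim d is the same as inserting a new size-1 dimension at d in each input and concatenating there. A small check of this identity (a sketch using the modern API):

```python
import torch

a = torch.tensor([[1, 2, 3], [11, 22, 33]])
b = torch.tensor([[4, 5, 6], [44, 55, 66]])

for d in range(3):
    stacked = torch.stack([a, b], d)
    # same result as unsqueeze at d, then cat along d
    catted = torch.cat([a.unsqueeze(d), b.unsqueeze(d)], d)
    assert torch.equal(stacked, catted)
```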

<>Expanding a tensor

torch.Tensor.expand(*sizes) → Tensor

* sizes (torch.Size or int...) – the desired expanded size

>>> x = torch.Tensor([[1], [2], [3]])
>>> x.size()
torch.Size([3, 1])
>>> x.expand(3, 4)
 1  1  1  1
 2  2  2  2
 3  3  3  3
[torch.FloatTensor of size 3x4]
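expand() only materializes size-1 dimensions and does not copy data: the result is a view of the original storage, so a write to the original shows up in every expanded position. A sketch:

```python
import torch

x = torch.tensor([[1.], [2.], [3.]])   # shape (3, 1)
y = x.expand(3, 4)                     # shape (3, 4), no memory copied
x[0, 0] = 9.0                          # all of y's first row views this element
print(y[0])                            # tensor([9., 9., 9., 9.])
```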
<>Squeezing a tensor

torch.squeeze(input, dim=None, out=None) → Tensor

Returns a tensor with all the size-1 dimensions of input removed. For example, if input has shape (A×1×B×C×1×D), the output has shape (A×B×C×D).

If dim is given, the squeeze is applied only in that dimension. For example, if input has shape (A×1×B), squeeze(input, 0) leaves the tensor unchanged, while squeeze(input, 1) squeezes it to shape (A×B).

* input (Tensor) – the input tensor
* dim (int, optional) – if given, squeeze only in this dimension
* out (Tensor, optional) – the output tensor

>>> x = torch.zeros(2, 1, 2, 1, 2)
>>> x.size()
torch.Size([2, 1, 2, 1, 2])
>>> y = torch.squeeze(x)
>>> y.size()
torch.Size([2, 2, 2])
>>> y = torch.squeeze(x, 0)
>>> y.size()
torch.Size([2, 1, 2, 1, 2])
>>> y = torch.squeeze(x, 1)
>>> y.size()
torch.Size([2, 2, 1, 2])
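The inverse operation is unsqueeze(), which inserts a size-1 dimension at a given position; together the two convert between 1-D vectors and row/column matrices. A sketch:

```python
import torch

v = torch.tensor([1., 2., 3.])    # shape (3,)
row = v.unsqueeze(0)              # shape (1, 3): a row matrix
col = v.unsqueeze(1)              # shape (3, 1): a column matrix
# squeezing the inserted dim recovers the original vector
assert torch.equal(row.squeeze(0), v)
assert torch.equal(col.squeeze(1), v)
```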
<>Repeating a tensor

torch.Tensor.repeat(*sizes)

* sizes (torch.Size or int...) - the number of times to repeat the tensor along each dimension

>>> x = torch.Tensor([1, 2, 3])
>>> x.repeat(4, 2)
 1  2  3  1  2  3
 1  2  3  1  2  3
 1  2  3  1  2  3
 1  2  3  1  2  3
[torch.FloatTensor of size 4x6]
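Unlike expand(), repeat() copies the data, and its arguments are repetition counts rather than target sizes; a 1-D tensor given two counts is first treated as shape (1, n). A sketch:

```python
import torch

x = torch.tensor([1., 2., 3.])   # treated as shape (1, 3) for a 2-argument repeat
y = x.repeat(4, 2)               # 4 copies along dim 0, 2 along dim 1 -> (4, 6)
y[0, 0] = 0.0                    # repeat copies, so x is untouched
assert x[0] == 1.0
```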
<>torch.Tensor.unfold(dim, size, step) → Tensor

* dim (int) - the dimension in which unfolding happens
* size (int) - the size of each slice that is unfolded
* step (int) - the step between each slice

>>> x = torch.arange(1, 8)
>>> x.unfold(0, 2, 1)
 1  2
 2  3
 3  4
 4  5
 5  6
 6  7
[torch.FloatTensor of size 6x2]
>>> x.unfold(0, 2, 2)
 1  2
 3  4
 5  6
[torch.FloatTensor of size 3x2]
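unfold() is a sliding-window view: with window size s and step t over a dimension of length n, it yields floor((n - s) / t) + 1 windows. A sketch:

```python
import torch

x = torch.arange(1., 8.)          # [1, 2, ..., 7], length 7
overlapping = x.unfold(0, 2, 1)   # (7-2)//1 + 1 = 6 windows of length 2
disjoint = x.unfold(0, 2, 2)      # (7-2)//2 + 1 = 3 windows
assert overlapping.shape == (6, 2)
assert disjoint.tolist() == [[1., 2.], [3., 4.], [5., 6.]]
```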
<>Narrowing a tensor

torch.Tensor.narrow(dimension, start, length) → Tensor

* dimension (int) – the dimension along which to narrow
* start (int) – the starting index in that dimension
* length (int) – the number of elements to keep

>>> x = torch.Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> x.narrow(0, 0, 2)
 1  2  3
 4  5  6
[torch.FloatTensor of size 2x3]
>>> x.narrow(1, 1, 2)
 2  3
 5  6
 8  9
[torch.FloatTensor of size 3x2]
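Like expand(), narrow() returns a view into the same storage rather than a copy, so writes through the narrowed tensor modify the original. A sketch:

```python
import torch

x = torch.tensor([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
mid = x.narrow(1, 1, 2)   # columns 1..2, shape (3, 2)
mid[0, 0] = 0.0           # writes through to x[0, 1]
assert x[0, 1] == 0.0
```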
<>Reshaping a tensor

torch.Tensor.view(*args) → Tensor

* args (torch.Size or int...) - the desired size

>>> x = torch.randn(4, 4)
>>> x.size()
torch.Size([4, 4])
>>> y = x.view(16)
>>> y.size()
torch.Size([16])
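view() never copies; it only reinterprets a contiguous block of memory, so the total element count must stay the same. One dimension may be given as -1 and is inferred from the rest. A sketch:

```python
import torch

x = torch.randn(4, 4)
flat = x.view(16)        # total element count must remain 16
auto = x.view(-1, 8)     # -1 is inferred as 2 -> shape (2, 8)
assert flat.shape == (16,)
assert auto.shape == (2, 8)
```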
<>Resizing a tensor in place

torch.Tensor.resize_(*sizes)

* sizes (torch.Size or int...) - the desired size

>>> x = torch.Tensor([[1, 2], [3, 4], [5, 6]])
>>> x.resize_(2, 2)
>>> x
 1  2
 3  4
[torch.FloatTensor of size 2x2]
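resize_() modifies the tensor in place: when shrinking, the leading elements in storage order are kept, and when growing, the newly exposed elements are uninitialized. A sketch of the shrinking case:

```python
import torch

x = torch.tensor([[1., 2.], [3., 4.], [5., 6.]])
x.resize_(2, 2)   # keeps the first 4 elements in storage order, drops the rest
assert x.tolist() == [[1., 2.], [3., 4.]]
```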
<>Permuting tensor dimensions

torch.Tensor.permute(*dims)

* dims (int...) - the desired ordering of dimensions

>>> x = torch.randn(2, 3, 5)
>>> x.size()
torch.Size([2, 3, 5])
>>> x.permute(2, 0, 1).size()
torch.Size([5, 2, 3])
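permute() also returns a view over the same storage; the result is generally non-contiguous, so a following view() needs an explicit .contiguous() first (or use .reshape()). A sketch:

```python
import torch

x = torch.randn(2, 3, 5)
y = x.permute(2, 0, 1)          # shape (5, 2, 3), same storage, strides reordered
assert not y.is_contiguous()
flat = y.contiguous().view(-1)  # copy into contiguous memory, then flatten
assert flat.shape == (30,)
```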
<>Byte size of a single tensor element

torch.Tensor.element_size() → int

>>> torch.FloatTensor().element_size()
4
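element_size() depends only on the dtype, not on the tensor's shape. A few common sizes, sketched with the modern dtype-based constructors:

```python
import torch

# bytes per element for common dtypes
assert torch.empty(0, dtype=torch.float32).element_size() == 4
assert torch.empty(0, dtype=torch.float64).element_size() == 8
assert torch.empty(0, dtype=torch.int16).element_size() == 2
assert torch.empty(0, dtype=torch.uint8).element_size() == 1
```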

<https://zhuanlan.zhihu.com/p/31495102>