Learning Notes on Li Mu's "Dive into Deep Learning" (5): Chapter 1 Preliminaries, Section 3 Linear Algebra
2022-07-21 22:01:00 【Artificial Idiots】
1.3 Linear Algebra
1.3.1 Scalar
# A scalar is represented by a tensor with just one element
from mxnet import np, npx
npx.set_np()
x = np.array(3.0)
y = np.array(2.0)
x + y, x * y, x / y, x**y
(array(5.), array(6.), array(1.5), array(9.))
1.3.2 Vectors
x = np.arange(4)
x
array([0., 1., 2., 3.])
len(x)
4
x.shape
(4,)
x = np.array([[0],[1],[2],[3]])
x.shape
(4, 1)
1.3.3 Matrices
A = np.arange(20).reshape(5,4)
A
array([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.],
[12., 13., 14., 15.],
[16., 17., 18., 19.]])
A.T
array([[ 0., 4., 8., 12., 16.],
[ 1., 5., 9., 13., 17.],
[ 2., 6., 10., 14., 18.],
[ 3., 7., 11., 15., 19.]])
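A handy property of the transpose: a symmetric matrix equals its own transpose. A quick check (sketched here with standard NumPy, whose API the mxnet.np module mirrors; the matrix B below is made up for illustration):

```python
import numpy as np

# A symmetric matrix: entries mirror across the main diagonal.
B = np.array([[1., 2., 3.],
              [2., 0., 4.],
              [3., 4., 5.]])

# A symmetric matrix is equal to its transpose.
print((B == B.T).all())
```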
1.3.4 Tensors
X = np.arange(24).reshape(2,3,4)
X
array([[[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]],
[[12., 13., 14., 15.],
[16., 17., 18., 19.],
[20., 21., 22., 23.]]])
1.3.5 Basic properties of tensor arithmetic
A = np.arange(20).reshape(5,4)
B = A.copy()
A, A + B
(array([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.],
[12., 13., 14., 15.],
[16., 17., 18., 19.]]),
array([[ 0., 2., 4., 6.],
[ 8., 10., 12., 14.],
[16., 18., 20., 22.],
[24., 26., 28., 30.],
[32., 34., 36., 38.]]))
# Elementwise multiplication of two matrices is called the Hadamard product
A * B, A**2
(array([[ 0., 1., 4., 9.],
[ 16., 25., 36., 49.],
[ 64., 81., 100., 121.],
[144., 169., 196., 225.],
[256., 289., 324., 361.]]),
array([[ 0., 1., 4., 9.],
[ 16., 25., 36., 49.],
[ 64., 81., 100., 121.],
[144., 169., 196., 225.],
[256., 289., 324., 361.]]))
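Note that A * B is the elementwise Hadamard product, not matrix multiplication; np.dot computes the latter. A small contrast (standard NumPy, with 2x2 matrices chosen purely for illustration):

```python
import numpy as np

A = np.arange(4.).reshape(2, 2)   # [[0., 1.], [2., 3.]]
B = A.copy()

hadamard = A * B        # elementwise product
matmul = np.dot(A, B)   # matrix product
print(hadamard)
print(matmul)
```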
# Multiplying a tensor by a scalar or adding a scalar to it does not change the tensor's shape; the operation is applied to every element
a = 2
X = np.arange(24).reshape(2,3,4)
a + X, (a * X).shape
(array([[[ 2., 3., 4., 5.],
[ 6., 7., 8., 9.],
[10., 11., 12., 13.]],
[[14., 15., 16., 17.],
[18., 19., 20., 21.],
[22., 23., 24., 25.]]]),
(2, 3, 4))
1.3.6 Reduction (sum and mean)
x = np.arange(4)
x, x.sum()
(array([0., 1., 2., 3.]), array(6.))
A, A.shape, A.sum()
(array([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.],
[12., 13., 14., 15.],
[16., 17., 18., 19.]]),
(5, 4),
array(190.))
A_sum_axis0 = A.sum(axis = 0)
A_sum_axis0, A_sum_axis0.shape
(array([40., 45., 50., 55.]), (4,))
A_sum_axis1 = A.sum(axis = 1)
A_sum_axis1, A_sum_axis1.shape
(array([ 6., 22., 38., 54., 70.]), (5,))
A.sum(axis = [0,1]) # Sum over both rows and columns; equivalent to A.sum()
array(190.)
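Reducing over both axes at once collapses the matrix to a single scalar, the same value A.sum() returns. (mxnet.np accepts a list for axis; standard NumPy, used in this sketch, takes a tuple.)

```python
import numpy as np

A = np.arange(20, dtype=np.float64).reshape(5, 4)

# Summing over both axes is the same as summing every element.
total = A.sum(axis=(0, 1))
print(total == A.sum(), total)
```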
# Averaging
A.mean(), A.sum() / A.size
(array(9.5), array(9.5))
A.mean(axis = 0), A.sum(axis = 0) / A.shape[0]
(array([ 8., 9., 10., 11.]), array([ 8., 9., 10., 11.]))
# Non-reduction sum: pass keepdims = True to keep the number of axes unchanged when computing a sum or mean
sum_A = A.sum(axis = 1, keepdims = True)
sum_A
array([[ 6.],
[22.],
[38.],
[54.],
[70.]])
A / sum_A
array([[0. , 0.16666667, 0.33333334, 0.5 ],
[0.18181819, 0.22727273, 0.27272728, 0.3181818 ],
[0.21052632, 0.23684211, 0.2631579 , 0.28947368],
[0.22222222, 0.24074075, 0.25925925, 0.2777778 ],
[0.22857143, 0.24285714, 0.25714287, 0.27142859]])
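The division above relies on broadcasting: because keepdims=True left sum_A with shape (5, 1), it is stretched across the 4 columns, so every element of A is divided by its own row's sum and each row of the result sums to 1. A check in standard NumPy:

```python
import numpy as np

A = np.arange(20, dtype=np.float64).reshape(5, 4)
sum_A = A.sum(axis=1, keepdims=True)   # shape (5, 1)

# Broadcasting stretches sum_A across the columns of A.
P = A / sum_A

# Every row of P is now a distribution summing to 1.
print(np.allclose(P.sum(axis=1), 1.0))
```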
# Compute the cumulative sum of the elements of A along an axis
A.cumsum(axis = 0)
array([[ 0., 1., 2., 3.],
[ 4., 6., 8., 10.],
[12., 15., 18., 21.],
[24., 28., 32., 36.],
[40., 45., 50., 55.]])
1.3.7 Dot product
y = np.ones(4)
x, y, np.dot(x, y), np.sum(x * y)
(array([0., 1., 2., 3.]), array([1., 1., 1., 1.]), array(6.), array(6.))
1.3.8 Matrix-vector products
A.shape, x.shape, np.dot(A, x)
((5, 4), (4,), array([ 14., 38., 62., 86., 110.]))
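Element i of the matrix-vector product Ax is the dot product of row i of A with x. The loop below (standard NumPy, written out only to make the definition concrete) reproduces np.dot(A, x) row by row:

```python
import numpy as np

A = np.arange(20, dtype=np.float64).reshape(5, 4)
x = np.arange(4, dtype=np.float64)

# Row i of the result is the dot product of A's row i with x.
manual = np.array([np.dot(A[i], x) for i in range(A.shape[0])])
print(np.allclose(np.dot(A, x), manual))
```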
1.3.9 Matrix-matrix multiplication
B = np.ones(shape = (4,3))
np.dot(A, B)
array([[ 6., 6., 6.],
[22., 22., 22.],
[38., 38., 38.],
[54., 54., 54.],
[70., 70., 70.]])
1.3.10 Norms
u = np.array([3,-4])
np.linalg.norm(u)
array(5.)
np.abs(u).sum()
array(7.)
np.linalg.norm(np.ones((4,9)))
array(6.)
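The results above follow directly from the definitions: the L2 norm is the square root of the sum of squares, the L1 norm is the sum of absolute values, and the Frobenius norm of a matrix is the L2 norm of its flattened elements. Spelled out in standard NumPy:

```python
import numpy as np

u = np.array([3., -4.])

l2 = np.sqrt((u ** 2).sum())                  # sqrt(9 + 16) = 5.0
l1 = np.abs(u).sum()                          # 3 + 4 = 7.0
fro = np.sqrt((np.ones((4, 9)) ** 2).sum())   # sqrt(36) = 6.0
print(l2, l1, fro)
```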