
PyTorch Study Notes (xiaotudui)

Commonly used packages
import torch
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.tensorboard import SummaryWriter
PyTorch

Installing PyTorch

Preparing the environment


  • Install the Anaconda toolkit
  • Install Python
  • Install PyCharm
With those in place, the real PyTorch installation can begin. Don't worry, it's easy.
1. Open the Anaconda Prompt and create a new environment for PyTorch:
conda create -n pytorch python=3.11  # substitute whichever Python version you want
Answer y to the subsequent prompts to confirm the installation.
2. Activate the environment, again in the Anaconda Prompt:
conda activate pytorch

3. On the PyTorch website, find the install command that matches your configuration:

  • On Windows, Conda is the usual choice
  • On Linux, Pip is the usual choice

Decide first whether your machine has a discrete GPU; if not, just pick the CPU build.
If you do have a discrete GPU, check on the NVIDIA website which CUDA version your card supports and choose accordingly. You can then confirm the install succeeded by running the following in a Python session (it should return True):
torch.cuda.is_available()
If you installed the CPU build, verify through PyCharm instead. First create a PyTorch project, as follows:




import torch

print(torch.cuda.is_available())
print(torch.backends.cudnn.is_available())
print(torch.cuda_version)
print(torch.backends.cudnn.version())
print(torch.__version__)
Seeing False, False, None, None and thinking something went wrong? No: we installed the CPU build, which ships without CUDA or cuDNN at all, so those checks are expected to fail. It is enough to confirm that torch itself imports and reports a version.
OK! Congratulations, PyTorch is installed. On to the learning!
Introduction: Python's two indispensable helper functions



# see what the torch package contains
for name in dir(torch):
    print(name)
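The two helper functions referred to here are dir() and help(): dir() lists what an object or module contains, and help() prints its documentation. A minimal sketch (demonstrated on the standard-library math module; the same two calls work on torch, torchvision, or any of their submodules):

```python
# dir() lists the attributes of any object; help() prints its documentation.
import math

names = dir(math)       # every name defined in the math module
print("sqrt" in names)  # True: dir tells us sqrt exists
help(math.sqrt)         # help shows its signature and docstring
```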

A first look at loading data in PyTorch

import torch
from torch.utils.data import Dataset

  • Take a look at what Dataset contains:

Dataset in practice
from torch.utils.data import Dataset
from PIL import Image
import os

class MyData(Dataset):
    def __init__(self, root_dir, label_dir):
        self.root_dir = root_dir
        self.label_dir = label_dir
        self.path = os.path.join(self.root_dir, self.label_dir)  # join the root path with the label folder
        self.img_path = os.listdir(self.path)  # img_path[0] is the file name of the first image

    def __getitem__(self, idx):
        """Read one image and return it with its label."""
        img_name = self.img_path[idx]
        img_item_path = os.path.join(self.root_dir, self.label_dir, img_name)
        img = Image.open(img_item_path)
        label = self.label_dir
        return img, label

    def __len__(self):
        """Number of images, i.e. the size of the dataset."""
        return len(self.img_path)

# img_path = "E:\\Project\\Code_Python\\Learn_pytorch\\learn_pytorch\\dataset\\training_set\\cats\\cat.1.jpg"
# img = Image.open(img_path)
# print(img)
# img.show()
root_dir = "dataset/training_set"
cats_label_dir = "cats"
dogs_label_dir = "dogs"
cats_dataset = MyData(root_dir, cats_label_dir)
dogs_dataset = MyData(root_dir, dogs_label_dir)
img1, label1 = cats_dataset[1]
img2, label2 = dogs_dataset[1]
# img1.show()
# img2.show()
train_dataset = cats_dataset + dogs_dataset  # concatenate the two datasets
print(len(train_dataset))
print(len(cats_dataset))
print(len(dogs_dataset))
Using TensorBoard (1)


from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("logs")  # directory where the event files are stored
# writer.add_image()
# plot y = 2x
for i in range(100):
    writer.add_scalar("y=2x", 2*i, i)  # add a scalar: 2*i is the y value, i is the x-axis step
writer.close()

  • Install TensorBoard
pip install tensorboard

  • Run TensorBoard
tensorboard --logdir=logs --port=6007
(--port sets the port explicitly; leave it out to use the default, 6006)


  • Read an image with OpenCV to get it as a numpy array
import numpy as np
from torch.utils.tensorboard import SummaryWriter
import cv2

writer = SummaryWriter("logs")  # directory where the event files are stored
img_array = cv2.imread("./dataset/training_set/cats/cat.2.jpg")
# print(img_array.shape)
writer.add_image("test", img_array, 2, dataformats='HWC')
# y = 2x
for i in range(100):
    writer.add_scalar("y=2x", 2 * i, i)
writer.close()

Using Transforms

from torchvision import transforms
from PIL import Image

# usage in Python
# the tensor data type
# transforms.ToTensor answers two questions:
# 1. how to use transforms (in Python)
# 2. why we need the Tensor data type: it wraps the attributes a neural network needs for training
# absolute path: E:\Project\Code_Python\Learn_pytorch\learn_pytorch\dataset\training_set\cats\cat.6.jpg
# relative path: dataset/training_set/cats/cat.6.jpg
img_path = "dataset/training_set/cats/cat.6.jpg"
img = Image.open(img_path)
# 1. how to use transforms
tensor_trans = transforms.ToTensor()
tensor_img = tensor_trans(img)
print(tensor_img.shape)

Common Transforms

from PIL import Image
from torchvision import transforms
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("logs")
img = Image.open("dataset/training_set/cats/cat.11.jpg")
print(img)
# ToTensor
trans_totensor = transforms.ToTensor()
img_tensor = trans_totensor(img)
writer.add_image("ToTensor", img_tensor)
# Normalize
print(img_tensor[0][0][0])
trans_norm = transforms.Normalize([1, 1, 1], [1, 1, 1])
img_norm = trans_norm(img_tensor)
print(img_norm[0][0][0])
writer.add_image("Normalize", img_norm, 0)
# Resize
print(img.size)
trans_resize = transforms.Resize((512, 512))
# img (PIL) -> resize -> img_resize (PIL)
img_resize = trans_resize(img)
# img_resize (PIL) -> totensor -> img_resize (tensor)
img_resize = trans_totensor(img_resize)
# print(img_resize)
writer.add_image("Resize", img_resize, 1)
# Compose - resize - 2
trans_resize_2 = transforms.Resize(144)
# PIL -> PIL -> tensor
trans_compose = transforms.Compose([trans_resize_2, trans_totensor])
img_resize_2 = trans_compose(img)
writer.add_image("Resize_Compose", img_resize_2, 2)
writer.close()
Using the datasets in torchvision
import torchvision
from torch.utils.tensorboard import SummaryWriter

dataset_transforms = torchvision.transforms.Compose([
    torchvision.transforms.ToTensor()
])
# download the dataset
train_set = torchvision.datasets.CIFAR10(root="./dataset", train=True, transform=dataset_transforms, download=True)
test_set = torchvision.datasets.CIFAR10(root="./dataset", train=False, transform=dataset_transforms, download=True)
print(test_set[0])
print(test_set.classes)
img, target = test_set[0]
print(img)
print(target)
print(test_set.classes[target])
# img.show()
writer = SummaryWriter("p10")
for i in range(10):
    img, target = test_set[i]
    writer.add_image("test_set", img, i)
writer.close()


Using DataLoader

import torchvision
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

# the test split, used as our dataset
test_data = torchvision.datasets.CIFAR10("./dataset", train=False, transform=torchvision.transforms.ToTensor())
test_loader = DataLoader(dataset=test_data, batch_size=64, shuffle=True, num_workers=0, drop_last=False)
# this dataset was downloaded from the official site earlier
# first image of the test set and its target
img, target = test_data[0]
print(img.shape)
print(target)
writer = SummaryWriter("dataloader")
step = 0
for data in test_loader:
    imgs, targets = data
    # print(imgs.shape)
    # print(targets)
    writer.add_images("test_data", imgs, step)
    step = step + 1
writer.close()



  • The last batch contains fewer than 64 images, so set drop_last=True to discard it
import torchvision
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

# the test split, used as our dataset
test_data = torchvision.datasets.CIFAR10("./dataset", train=False, transform=torchvision.transforms.ToTensor())
test_loader = DataLoader(dataset=test_data, batch_size=64, shuffle=True, num_workers=0, drop_last=True)
# this dataset was downloaded from the official site earlier
# first image of the test set and its target
img, target = test_data[0]
print(img.shape)
print(target)
writer = SummaryWriter("dataloader_drop_last")
step = 0
for data in test_loader:
    imgs, targets = data
    # print(imgs.shape)
    # print(targets)
    writer.add_images("test_data", imgs, step)
    step = step + 1
writer.close()


  • shuffle
  • True: the images are drawn in a different order on each pass
  • False: the images are drawn in the same order on each pass
import torchvision
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

# the test split, used as our dataset
test_data = torchvision.datasets.CIFAR10("./dataset", train=False, transform=torchvision.transforms.ToTensor())
test_loader = DataLoader(dataset=test_data, batch_size=64, shuffle=True, num_workers=0, drop_last=True)
# this dataset was downloaded from the official site earlier
# first image of the test set and its target
img, target = test_data[0]
print(img.shape)
print(target)
writer = SummaryWriter("dataloader")
for epoch in range(2):
    step = 0
    for data in test_loader:
        imgs, targets = data
        # print(imgs.shape)
        # print(targets)
        writer.add_images("Epoch: {}".format(epoch), imgs, step)
        step = step + 1
writer.close()

The basic skeleton of a neural network
import torch
from torch import nn

class ConvModel(nn.Module):
    def __init__(self, *args, **kwargs) -> None:
        super().__init__(*args, **kwargs)

    def forward(self, input):
        output = input + 1
        return output

convmodel = ConvModel()
x = torch.tensor(1.0)
output = convmodel(x)
print(output)
The convolution operation

import torch
import torch.nn.functional as F

# the convolution input
input = torch.tensor([[1, 2, 0, 3, 1],
                      [0, 1, 2, 3, 1],
                      [1, 2, 1, 0, 0],
                      [5, 2, 3, 1, 1],
                      [2, 1, 0, 1, 1]])
# the convolution kernel
kernel = torch.tensor([[1, 2, 1],
                       [0, 1, 0],
                       [2, 1, 0]])
# reshape to the (batch, channels, height, width) layout conv2d expects
input = torch.reshape(input, (1, 1, 5, 5))
kernel = torch.reshape(kernel, (1, 1, 3, 3))
print(input.shape)
print(kernel.shape)
output = F.conv2d(input, kernel, stride=1)
print(output)
output2 = F.conv2d(input, kernel, stride=2)
print(output2)
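The output size of a 2-D convolution follows H_out = floor((H_in + 2*padding - kernel) / stride) + 1. Checking the F.conv2d calls above against this formula (5x5 input, 3x3 kernel):

```python
def conv_out(h_in, kernel, stride=1, padding=0):
    # standard 2-D convolution output-size formula (dilation = 1)
    return (h_in + 2 * padding - kernel) // stride + 1

print(conv_out(5, 3, stride=1))             # 3 -> stride=1 yields a 3x3 output
print(conv_out(5, 3, stride=2))             # 2 -> stride=2 yields a 2x2 output
print(conv_out(5, 3, stride=1, padding=1))  # 5 -> padding=1 keeps the 5x5 size
```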


  • padding
# padding: the fill value defaults to 0
output3 = F.conv2d(input, kernel, stride=1, padding=1)
print(output3)
Result:
tensor([[[[ 1,  3,  4, 10,  8],
[ 5, 10, 12, 12,  6],
[ 7, 18, 16, 16,  8],
[11, 13,  9,  3,  4],
[14, 13,  9,  7,  4]]]])
Neural networks: the convolution layer
import torch
import torchvision
from torch.utils.data import DataLoader
from torch import nn
from torch.nn import Conv2d
from torch.utils.tensorboard import SummaryWriter

dataset = torchvision.datasets.CIFAR10("./dataset", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)
dataloader = DataLoader(dataset, batch_size=64)

class NN_Conv2d(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = Conv2d(in_channels=3, out_channels=6, kernel_size=3, stride=1, padding=0)

    def forward(self, x):
        x = self.conv1(x)
        return x

nn_conv2d = NN_Conv2d()
# print(nn_conv2d)
writer = SummaryWriter("./logs")
step = 0
for data in dataloader:
    imgs, targets = data
    output = nn_conv2d(imgs)
    print(f"imgs: {imgs.shape}")
    print(f"output: {output.shape}")
    # input size: torch.Size([64, 3, 32, 32])
    writer.add_images("input", imgs, step)
    # output size after convolution: torch.Size([64, 6, 30, 30]) --> [xxx, 3, 30, 30]
    output = torch.reshape(output, (-1, 3, 30, 30))
    writer.add_images("output", output, step)
    step += 1
writer.close()
# import numpy as np
import torch
import torchvision
from torch import nn
from torch.nn import Conv2d
from torch.utils.tensorboard import SummaryWriter
import cv2
from torchvision import transforms

# build the convolution model
class NN_Conv2d(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = Conv2d(in_channels=3, out_channels=6, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        x = self.conv1(x)
        return x

nn_conv2d = NN_Conv2d()
writer = SummaryWriter('logs_test')
input_img = cv2.imread("dataset/ice.jpg")
# convert to a tensor
trans_tensor = transforms.ToTensor()
input_img = trans_tensor(input_img)
# set the input size
input_img = torch.reshape(input_img, (-1, 3, 1312, 2100))
print(input_img.shape)
writer.add_images("input_img", input_img, 1)
# run the convolution
output = nn_conv2d(input_img)
output = torch.reshape(output, (-1, 3, 1312, 2100))
print(output.shape)
writer.add_images('output_test', output, 1)
writer.close()

Neural networks: max pooling

import torch
from torch import nn
from torch.nn import MaxPool2d

input_img = torch.tensor([[1, 2, 0, 3, 1],
                          [0, 1, 2, 3, 1],
                          [1, 2, 1, 0, 0],
                          [5, 2, 3, 1, 1],
                          [2, 1, 0, 1, 1]], dtype=torch.float32)
input_img = torch.reshape(input_img, (-1, 1, 5, 5))
print(input_img.shape)

# a simple network with one pooling layer
class Nn_Conv_Maxpool(nn.Module):
    def __init__(self):
        super().__init__()
        self.maxpool1 = MaxPool2d(kernel_size=3, ceil_mode=True)

    def forward(self, input_img):
        output = self.maxpool1(input_img)
        return output

nn_conv_maxpool = Nn_Conv_Maxpool()
output = nn_conv_maxpool(input_img)
print(output)
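With kernel_size=3 the stride defaults to 3, and ceil_mode=True keeps the partially covered windows at the right and bottom edges, so the 5x5 input is pooled into a 2x2 output. Tracing the four windows by hand:

```python
import torch
from torch.nn import MaxPool2d

x = torch.tensor([[1, 2, 0, 3, 1],
                  [0, 1, 2, 3, 1],
                  [1, 2, 1, 0, 0],
                  [5, 2, 3, 1, 1],
                  [2, 1, 0, 1, 1]], dtype=torch.float32).reshape(1, 1, 5, 5)

# ceil_mode=True: 5/3 windows per dimension rounds up to 2
pool = MaxPool2d(kernel_size=3, ceil_mode=True)
out = pool(x)
print(out)
# max of each window: top-left 2, top-right 3, bottom-left 5, bottom-right 1
```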
import torch
import torchvision
from torch import nn
from torch.nn import MaxPool2d
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

dataset = torchvision.datasets.CIFAR10('./dataset', train=False, download=True,
                                       transform=torchvision.transforms.ToTensor())
dataloader = DataLoader(dataset, batch_size=64)

# a simple network with one pooling layer
class Nn_Conv_Maxpool(nn.Module):
    def __init__(self):
        super().__init__()
        self.maxpool1 = MaxPool2d(kernel_size=3, ceil_mode=True)

    def forward(self, input_img):
        output = self.maxpool1(input_img)
        return output

nn_conv_maxpool = Nn_Conv_Maxpool()
writer = SummaryWriter('logs_maxpool')
step = 0
for data in dataloader:
    imgs, targets = data
    writer.add_images('input', imgs, step)
    output = nn_conv_maxpool(imgs)
    writer.add_images('output', output, step)
    step += 1
writer.close()

Neural networks: non-linear activations


  • ReLU
import torch
from torch import nn
from torch.nn import ReLU

input = torch.tensor([[1, -0.5],
                      [-1, 3]])
input = torch.reshape(input, (-1, 1, 2, 2))
print(input.shape)

class Nn_Network_Relu(nn.Module):
    def __init__(self):
        super().__init__()
        self.relu1 = ReLU()

    def forward(self, input):
        output = self.relu1(input)
        return output

nn_relu = Nn_Network_Relu()
output = nn_relu(input)
print(output)

  • Demonstration on images
import torch
import torchvision
from torch import nn
from torch.nn import ReLU
from torch.nn import Sigmoid
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

input = torch.tensor([[1, -0.5],
                      [-1, 3]])
input = torch.reshape(input, (-1, 1, 2, 2))
print(input.shape)
dataset = torchvision.datasets.CIFAR10('./dataset', train=False, download=True,
                                       transform=torchvision.transforms.ToTensor())
dataloader = DataLoader(dataset, batch_size=64)

class Nn_Network_Relu(nn.Module):
    def __init__(self):
        super().__init__()
        self.relu1 = ReLU()
        self.sigmoid1 = Sigmoid()

    def forward(self, input):
        output = self.sigmoid1(input)  # this forward applies the sigmoid
        return output

nn_relu = Nn_Network_Relu()
nn_sigmoid = Nn_Network_Relu()
writer = SummaryWriter('logs_sigmoid')
step = 0
for data in dataloader:
    imgs, targets = data
    writer.add_images("input_imgs", imgs, step)
    output = nn_sigmoid(imgs)
    writer.add_images("output", output, step)
    step += 1
writer.close()

Neural networks: the linear layer and other layers


  • A linear layer is usually also called a fully connected layer; in deep learning models the two terms refer to the same kind of layer. It multiplies the input by a weight matrix, adds a bias, and (typically followed by a non-linear activation) produces the output. It can serve for feature extraction, dimensionality reduction, and so on.
  • Taking the VGG16 network as an example, it has three fully connected layers sized 4096-4096-1000, where 1000 is the number of classes in the ImageNet dataset.
import torch
import torchvision
from torch.utils.data import DataLoader
from torch import nn
from torch.nn import Linear

dataset = torchvision.datasets.CIFAR10('./dataset', train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)
# drop the last, smaller batch so the flattened size is always 64*3*32*32 = 196608
dataloader = DataLoader(dataset, batch_size=64, drop_last=True)

class Nn_LinearModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = Linear(196608, 10)

    def forward(self, input):
        output = self.linear1(input)
        return output

nn_linearmodel = Nn_LinearModel()
for data in dataloader:
    imgs, targets = data
    print(imgs.shape)
    output = torch.flatten(imgs)
    print(output.shape)
    output = nn_linearmodel(output)
    print(output.shape)

  • torch.flatten: flattens the input Tensor into a one-dimensional tensor
  • In practice the batch dimension is usually kept; with one MNIST batch as an example the shapes go:
    [64, 1, 28, 28] -> [64, 784] -> [64, 128]
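The distinction matters in the code above: torch.flatten(imgs) collapses the batch dimension too (64*3*32*32 = 196608 values), which is why the Linear layer there takes 196608 inputs. To keep the batch dimension, pass start_dim=1 (which is also what nn.Flatten does by default):

```python
import torch

imgs = torch.ones(64, 3, 32, 32)  # one CIFAR10-shaped batch

print(torch.flatten(imgs).shape)               # torch.Size([196608]) - batch collapsed
print(torch.flatten(imgs, start_dim=1).shape)  # torch.Size([64, 3072]) - batch kept
```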

Neural networks: practice, and Sequential

import torch
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear

class Nn_SeqModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = Conv2d(3, 32, 5, padding=2)
        self.maxpool1 = MaxPool2d(2)
        self.conv2 = Conv2d(32, 32, 5, padding=2)
        self.maxpool2 = MaxPool2d(2)
        self.conv3 = Conv2d(32, 64, 5, padding=2)
        self.maxpool3 = MaxPool2d(2)
        self.flatten = Flatten()
        self.linear1 = Linear(1024, 64)
        self.linear2 = Linear(64, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.maxpool2(x)
        x = self.conv3(x)
        x = self.maxpool3(x)
        x = self.flatten(x)
        x = self.linear1(x)
        x = self.linear2(x)
        return x

if __name__ == '__main__':
    nn_seqmodel = Nn_SeqModel()
    print(nn_seqmodel)
    # sanity-check the network with a dummy batch
    input = torch.ones((64, 3, 32, 32))
    output = nn_seqmodel(input)
    print(output.shape)
Nn_SeqModel(
(conv1): Conv2d(3, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(maxpool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv2): Conv2d(32, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(maxpool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv3): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(maxpool3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(flatten): Flatten(start_dim=1, end_dim=-1)
(linear1): Linear(in_features=1024, out_features=64, bias=True)
(linear2): Linear(in_features=64, out_features=10, bias=True)
)
torch.Size([64, 10])
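The in_features=1024 of linear1 can be derived by tracing the shapes: every Conv2d here uses a 5x5 kernel with padding=2, which preserves the spatial size, and each MaxPool2d(2) halves it. A small trace of that arithmetic:

```python
# shape trace for the model above, as (channels, height, width)
shape = (3, 32, 32)
for out_ch in (32, 32, 64):
    # Conv2d(kernel=5, padding=2, stride=1) keeps height/width: (32 + 2*2 - 5)//1 + 1 = 32
    shape = (out_ch, shape[1], shape[2])
    # MaxPool2d(2) halves height and width
    shape = (shape[0], shape[1] // 2, shape[2] // 2)
    print(shape)

flat = shape[0] * shape[1] * shape[2]
print(flat)  # 64 * 4 * 4 = 1024, the in_features of the first Linear layer
```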


  • Sequential makes the code more concise
import torch
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential

class Nn_SeqModel(nn.Module):
    def __init__(self):
        super().__init__()
        # the nine layer attributes from the previous version collapse into one Sequential
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        # one call replaces the nine explicit layer calls
        x = self.model1(x)
        return x

if __name__ == '__main__':
    nn_seqmodel = Nn_SeqModel()
    print(nn_seqmodel)
    # sanity-check the network with a dummy batch
    input = torch.ones((64, 3, 32, 32))
    output = nn_seqmodel(input)
    print(output.shape)
Nn_SeqModel(
(model1): Sequential(
(0): Conv2d(3, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(2): Conv2d(32, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(4): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(6): Flatten(start_dim=1, end_dim=-1)
(7): Linear(in_features=1024, out_features=64, bias=True)
(8): Linear(in_features=64, out_features=10, bias=True)
)
)
torch.Size([64, 10])
from torch.utils.tensorboard import SummaryWriter

if __name__ == '__main__':
    nn_seqmodel = Nn_SeqModel()
    print(nn_seqmodel)
    # sanity-check the network with a dummy batch
    input = torch.ones((64, 3, 32, 32))
    output = nn_seqmodel(input)
    print(output.shape)
    # log the network graph for inspection
    writer = SummaryWriter('./logs_seq')
    writer.add_graph(nn_seqmodel, input)
    writer.close()

  • View the network structure in TensorBoard

Loss functions and backpropagation


  • loss functions




  • Note the expected input and output shapes
import torch
from torch.nn import L1Loss

inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)
inputs = torch.reshape(inputs, (1, 1, 1, 3))
targets = torch.reshape(targets, (1, 1, 1, 3))
loss = L1Loss()  # reduction='mean' by default; pass reduction='sum' to sum instead
result = loss(inputs, targets)
print(result)
tensor(0.6667)


  • Cross-entropy loss
x = torch.tensor([0.1, 0.2, 0.3])
y = torch.tensor([1])
x = torch.reshape(x, (1, 3))
loss_cross = nn.CrossEntropyLoss()
result_cross = loss_cross(x, y)
print(f"The result_cross of CrossEntropyLoss: {result_cross}")
The result_cross of CrossEntropyLoss: 1.1019428968429565
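CrossEntropyLoss takes raw, unnormalized scores (logits) and combines log-softmax with negative log-likelihood: loss = -x[class] + log(sum_j exp(x[j])). Recomputing the value above by hand:

```python
import math

x = [0.1, 0.2, 0.3]  # logits for 3 classes
target = 1           # the true class index

# cross-entropy = -x[target] + log(sum of exp over all classes)
loss = -x[target] + math.log(sum(math.exp(v) for v in x))
print(loss)  # ~1.1019, matching nn.CrossEntropyLoss above
```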


  • Trying it on a real network
import torch
import torchvision
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.tensorboard import SummaryWriter
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10("./dataset", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)
dataloader = DataLoader(dataset, batch_size=1)

class Nn_LossNetworkModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

loss = nn.CrossEntropyLoss()

if __name__ == '__main__':
    nn_lossmodel = Nn_LossNetworkModel()
    for data in dataloader:
        imgs, targets = data
        outputs = nn_lossmodel(imgs)
        result_loss = loss(outputs, targets)
        print(f"the result_loss is : {result_loss}")


  • Run backpropagation so gradient descent can use the gradients
  • Set a breakpoint in the debugger to inspect grad
if __name__ == '__main__':
    nn_lossmodel = Nn_LossNetworkModel()
    for data in dataloader:
        imgs, targets = data
        outputs = nn_lossmodel(imgs)
        result_loss = loss(outputs, targets)
        # print(f"the result_loss is : {result_loss}")
        result_loss.backward()
        print("ok")
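After backward() every parameter of the model carries a populated .grad tensor, which is what the debugger would show. A self-contained sketch of the same effect on a small made-up model (the layer sizes here are illustrative, not the tutorial's network):

```python
import torch
from torch import nn

# a tiny model: conv keeps the 4x4 size (padding=1), so flatten gives 8*4*4 = 128
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.Flatten(), nn.Linear(8 * 4 * 4, 10))
loss_fn = nn.CrossEntropyLoss()

imgs = torch.randn(2, 3, 4, 4)   # a fake batch of two 4x4 "images"
targets = torch.tensor([1, 3])

loss = loss_fn(model(imgs), targets)
print(model[0].weight.grad)      # None: no backward pass has run yet
loss.backward()
print(model[0].weight.grad.shape)  # torch.Size([8, 3, 3, 3]): gradients now populated
```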
The optimizer
import torch
import torchvision
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.tensorboard import SummaryWriter
from torch.utils.data import DataLoader

# load the dataset, converted to tensors
dataset = torchvision.datasets.CIFAR10("./dataset", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)
# wrap the dataset in a DataLoader
dataloader = DataLoader(dataset, batch_size=1)

# build the network
class Nn_LossNetworkModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

if __name__ == '__main__':
    loss = nn.CrossEntropyLoss()
    nn_lossmodel = Nn_LossNetworkModel()
    optim = torch.optim.SGD(nn_lossmodel.parameters(), lr=0.01)
    for data in dataloader:
        imgs, targets = data
        outputs = nn_lossmodel(imgs)
        result_loss = loss(outputs, targets)
        optim.zero_grad()
        result_loss.backward()
        optim.step()
if __name__ == '__main__':
    loss = nn.CrossEntropyLoss()
    nn_lossmodel = Nn_LossNetworkModel()
    optim = torch.optim.SGD(nn_lossmodel.parameters(), lr=0.01)
    for epoch in range(20):
        running_loss = 0.0
        for data in dataloader:
            imgs, targets = data
            outputs = nn_lossmodel(imgs)
            result_loss = loss(outputs, targets)
            optim.zero_grad()
            result_loss.backward()
            optim.step()
            running_loss = running_loss + result_loss
        print("running_loss: ", running_loss)
Files already downloaded and verified
running_loss:  tensor(18788.4355, grad_fn=<AddBackward0>)
running_loss:  tensor(16221.9961, grad_fn=<AddBackward0>)
........
Using and modifying existing network models
import torchvision
import torch
from torch import nn

# train_data = torchvision.datasets.ImageNet("./data_image_net", split="train",
#                                            transform=torchvision.transforms.ToTensor(), download=True)
vgg16_false = torchvision.models.vgg16(pretrained=False)
vgg16_true = torchvision.models.vgg16(pretrained=True)
print('ok')
print(vgg16_true)
train_data = torchvision.datasets.CIFAR10('./dataset', train=True, transform=torchvision.transforms.ToTensor(),
                                          download=True)
# vgg16_true.add_module('add_linear', nn.Linear(1000, 10))
# append an extra linear layer inside the classifier block, mapping 1000 classes down to 10
vgg16_true.classifier.add_module('add_linear', nn.Linear(1000, 10))
print(vgg16_true)
print(vgg16_false)
# or replace the last classifier layer outright
vgg16_false.classifier[6] = nn.Linear(4096, 10)
print(vgg16_false)
Saving and loading network models

  • save
import torch
import torchvision
from torch import nn

vgg16 = torchvision.models.vgg16(pretrained=False)
# saving method 1: model structure + parameters
torch.save(vgg16, "vgg16_method1.pth")
# saving method 2: parameters only (officially recommended)
torch.save(vgg16.state_dict(), "vgg16_method2.pth")

# pitfall: saving a custom model this way
class Nn_Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, 3)

    def forward(self, x):
        x = self.conv1(x)
        return x

nn_model = Nn_Model()
torch.save(nn_model, "nnModel_method1.pth")

  • load
import torch
import torchvision
from torch import nn
from p19_model_save import *

# loading method 1 ---> matches saving method 1: loads the whole model
model = torch.load("vgg16_method1.pth")
# print(model)
# loading method 2: this gives a state dict, not a model
model2 = torch.load("vgg16_method2.pth")
print(model2)
# to restore the network structure for method 2, rebuild the model and load the parameters
vgg16 = torchvision.models.vgg16(pretrained=False)
vgg16.load_state_dict(torch.load("vgg16_method2.pth"))
print(vgg16)
# pitfall 1: loading a custom model requires its class definition in scope
# (here it comes in through the import at the top)
# class Nn_Model(nn.Module):
#     def __init__(self):
#         super().__init__()
#         self.conv1 = nn.Conv2d(3, 64, 3)
#
#     def forward(self, x):
#         x = self.conv1(x)
#         return x
model1 = torch.load("nnModel_method1.pth")
print(model1)
The complete model-training recipe (1)


  • Create two files, train.py and model.py
  • model.py
import torch
from torch import nn

# build the network
class Nn_Neural_NetWork(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            nn.Conv2d(3, 32, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, 1, 2),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 64),
            nn.Linear(64, 10)
        )

    def forward(self, x):
        x = self.model(x)
        return x

if __name__ == '__main__':
    # quick sanity check of the model
    nn_model = Nn_Neural_NetWork()
    input = torch.ones((64, 3, 32, 32))
    output = nn_model(input)
    print(output.shape)

  • train.py
import torch
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from model import *

# prepare the datasets
train_data = torchvision.datasets.CIFAR10(root='./data', train=True, transform=torchvision.transforms.ToTensor(),
                                          download=True)
test_data = torchvision.datasets.CIFAR10(root='./data', train=False, transform=torchvision.transforms.ToTensor(),
                                         download=True)
train_data_size = len(train_data)
test_data_size = len(test_data)
print("Training set size: {}".format(train_data_size))
print("Test set size: {}".format(test_data_size))
# load the datasets with DataLoader
train_loader = DataLoader(train_data, batch_size=64)
test_loader = DataLoader(test_data, batch_size=64)
# create the network model
nn_model = Nn_Neural_NetWork()
# loss function
loss_fn = nn.CrossEntropyLoss()
# optimizer
# 1e-2 = 1 x 10^(-2) = 1/100 = 0.01
learning_rate = 0.01
optimizer = torch.optim.SGD(nn_model.parameters(), lr=learning_rate)
# training bookkeeping
total_train_step = 0  # number of training steps taken
total_test_step = 0   # number of test rounds run
epoch = 10            # number of epochs
for i in range(epoch):
    print("-------- epoch {} --------".format(i + 1))
    # training phase
    for data in train_loader:
        imgs, targets = data
        output = nn_model(imgs)
        loss = loss_fn(output, targets)
        # let the optimizer update the model
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total_train_step += 1
        print("train step: {}, Loss: {}".format(total_train_step, loss.item()))
The complete model-training recipe (2)


  • train.py
  • adds TensorBoard logging
  • adds the accuracy metric
import torch
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.tensorboard import SummaryWriter
from p20_model import *

# prepare the datasets
train_data = torchvision.datasets.CIFAR10(root='./data', train=True, transform=torchvision.transforms.ToTensor(),
                                          download=True)
test_data = torchvision.datasets.CIFAR10(root='./data', train=False, transform=torchvision.transforms.ToTensor(),
                                         download=True)
train_data_size = len(train_data)
test_data_size = len(test_data)
print("Training set size: {}".format(train_data_size))
print("Test set size: {}".format(test_data_size))
# load the datasets with DataLoader
train_loader = DataLoader(train_data, batch_size=64)
test_loader = DataLoader(test_data, batch_size=64)
# create the network model
nn_model = Nn_Neural_NetWork()
# loss function
loss_fn = nn.CrossEntropyLoss()
# optimizer
# 1e-2 = 1 x 10^(-2) = 1/100 = 0.01
learning_rate = 0.01
optimizer = torch.optim.SGD(nn_model.parameters(), lr=learning_rate)
# training bookkeeping
total_train_step = 0  # number of training steps taken
total_test_step = 0   # number of test rounds run
epoch = 10            # number of epochs
# (optional) add TensorBoard
writer = SummaryWriter('./logs_train')
for i in range(epoch):
    print("-------- epoch {} --------".format(i + 1))
    # training phase
    for data in train_loader:
        imgs, targets = data
        output = nn_model(imgs)
        loss = loss_fn(output, targets)
        # let the optimizer update the model
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total_train_step += 1
        if total_train_step % 100 == 0:
            print("train step: {}, Loss: {}".format(total_train_step, loss.item()))
            writer.add_scalar("train_loss", loss.item(), total_train_step)
    # evaluation phase
    total_test_loss = 0
    total_accuracy = 0  # number of correct predictions
    with torch.no_grad():
        for data in test_loader:
            imgs, targets = data
            outputs = nn_model(imgs)
            loss = loss_fn(outputs, targets)
            total_test_loss += loss
            accuracy = (outputs.argmax(1) == targets).sum()
            total_accuracy += accuracy
    print("Total loss on the test set: {}".format(total_test_loss))
    print("Accuracy on the test set: {}".format(total_accuracy / test_data_size))
    writer.add_scalar("test_loss", total_test_loss, total_test_step)
    writer.add_scalar("test_accuracy", total_accuracy / test_data_size, total_test_step)
    total_test_step += 1
    # save the model after each epoch
    torch.save(nn_model, "model_{}.pth".format(i))
    print("model saved")
writer.close()
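The accuracy line relies on argmax over the class dimension: for each row of logits, argmax(1) picks the predicted class, and comparing against the targets counts the correct predictions. A tiny worked example of that counting (with made-up logits):

```python
import torch

outputs = torch.tensor([[0.1, 0.9],   # predicted class 1
                        [0.8, 0.2]])  # predicted class 0
targets = torch.tensor([1, 1])

preds = outputs.argmax(1)           # tensor([1, 0])
correct = (preds == targets).sum()  # tensor(1): one of two correct
print(correct.item() / len(targets))  # 0.5
```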
Source: https://www.cnblogs.com/Do1y/p/17819002.html