[Flask + deep learning] Displaying deep-learning detection result images on Ubuntu, serving them on the web, and allowing access from a mobile browser
Published: 2019-05-23


Introduction

I wrote an earlier post on having Flask talk to MongoDB:

https://blog.csdn.net/qq_41358574/article/details/117845077

Tools to install first: PyCharm, PyTorch together with its related packages, and the Flask packages.

This machine has no CUDA, so the model is loaded onto the CPU:

```python
device = torch.device('cpu')

# Set up model
model = Darknet(opt.model_def, img_size=opt.img_size).to(device)
if opt.weights_path.endswith(".weights"):
    # Load darknet weights
    model.load_darknet_weights(opt.weights_path)
else:
    # Load checkpoint weights, remapping GPU tensors onto the CPU
    model.load_state_dict(torch.load(opt.weights_path, map_location=device))
```
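The key is the map_location argument: a checkpoint saved on a GPU machine stores CUDA tensors, and torch.load would fail on a CPU-only box without it. A minimal, standalone sketch of the device-agnostic pattern (not tied to this project's Darknet class):

```python
import torch

# Pick CUDA when available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# map_location remaps any CUDA tensors stored in the checkpoint onto the
# chosen device, so a GPU-trained .pth file also loads on a CPU-only machine.
state_dict = torch.load("checkpoints/ckpt_88.pth", map_location=device)
```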

The project loads a trained model under the PyTorch framework, runs detection on the images, and saves the annotated results to a folder; Flask then renders a front-end page that displays the results with an animated heading. The page can also be opened in a phone's browser by entering ip:port.

The result:

[Two screenshots of the result page]

(The heading text is animated, which the screenshots cannot show.)

The Flask files

Directory structure:

[Screenshot: the project tree]

utils and models.py contain the deep-learning code. detect.py plays the role of main.py; the object-detection code is written there as well, so the file is fairly long.

```python
from __future__ import division

from models import *
from utils.utils import *
from utils.datasets import *
from utils.augmentations import *
from utils.transforms import *

import os
import sys
import time
import datetime
import argparse

from PIL import Image

import torch
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision import datasets
from torch.autograd import Variable

import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib.ticker import NullLocator

from flask import Flask, request, make_response, render_template
import socket
from time import sleep

myhost = socket.gethostbyname(socket.gethostname())
app = Flask(__name__)
igpath = '/home/heziyi/pic/'


@app.route('/', methods=['GET', 'POST'])  # handle different HTTP methods via the methods argument
def home():
    return render_template('index.html')


# @app.route('/img/<filename>', methods=['GET'])
@app.route('/img/', methods=['GET'])
def display():
    if request.method == 'GET':
        # if filename is None:
        #     pass
        # else:
        image = open("/static/he_21.png", "rb").read()
        # image = open(igpath + filename, "rb").read()
        response = make_response(image)
        response.headers['Content-Type'] = 'image/png'  # the file is a PNG
        return response


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--image_folder", type=str, default="data/custom/dd", help="path to dataset")
    parser.add_argument("--model_def", type=str, default="config/yolov3-custom.cfg", help="path to model definition file")
    parser.add_argument("--weights_path", type=str, default="checkpoints/ckpt_88.pth", help="path to weights file")
    parser.add_argument("--class_path", type=str, default="data/custom/classes.names", help="path to class label file")
    parser.add_argument("--conf_thres", type=float, default=0.8, help="object confidence threshold")
    parser.add_argument("--nms_thres", type=float, default=0.4, help="iou threshold for non-maximum suppression")
    parser.add_argument("--batch_size", type=int, default=1, help="size of the batches")
    parser.add_argument("--n_cpu", type=int, default=0, help="number of cpu threads to use during batch generation")
    parser.add_argument("--img_size", type=int, default=416, help="size of each image dimension")
    parser.add_argument("--checkpoint_model", type=str, default="checkpoints/ckpt_88.pth", help="path to checkpoint model")
    opt = parser.parse_args()
    print(opt)

    # device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    device = torch.device('cpu')

    os.makedirs("../output", exist_ok=True)

    # Set up model
    model = Darknet(opt.model_def, img_size=opt.img_size).to(device)
    if opt.weights_path.endswith(".weights"):
        # Load darknet weights
        model.load_darknet_weights(opt.weights_path)
    else:
        # Load checkpoint weights -- map_location is required on a CPU-only machine!
        model.load_state_dict(torch.load(opt.weights_path, map_location=device))

    model.eval()  # Set in evaluation mode

    dataloader = DataLoader(
        ImageFolder(opt.image_folder,
                    transform=transforms.Compose([DEFAULT_TRANSFORMS, Resize(opt.img_size)])),
        batch_size=opt.batch_size,
        shuffle=False,
        num_workers=opt.n_cpu,
    )

    classes = load_classes(opt.class_path)  # Extracts class labels from file

    Tensor = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor

    imgs = []            # Stores image paths
    img_detections = []  # Stores detections for each image index

    print("\nPerforming object detection:")
    prev_time = time.time()
    for batch_i, (img_paths, input_imgs) in enumerate(dataloader):
        # Configure input
        input_imgs = Variable(input_imgs.type(Tensor))

        # Get detections
        with torch.no_grad():
            detections = model(input_imgs)
            detections = non_max_suppression(detections, opt.conf_thres, opt.nms_thres)

        # Log progress
        current_time = time.time()
        inference_time = datetime.timedelta(seconds=current_time - prev_time)
        prev_time = current_time
        print("\t+ Batch %d, Inference Time: %s" % (batch_i, inference_time))

        # Save image and detections
        imgs.extend(img_paths)
        img_detections.extend(detections)

    # Bounding-box colors
    cmap = plt.get_cmap("tab20b")
    colors = [cmap(i) for i in np.linspace(0, 1, 20)]

    print("\nSaving images:")
    # Iterate through images and save plot of detections
    for img_i, (path, detections) in enumerate(zip(imgs, img_detections)):
        print("(%d) Image: '%s'" % (img_i, path))

        # Create plot
        img = np.array(Image.open(path))
        plt.figure()
        fig, ax = plt.subplots(1)
        ax.imshow(img)

        # Draw bounding boxes and labels of detections
        if detections is not None:
            # Rescale boxes to original image
            detections = rescale_boxes(detections, opt.img_size, img.shape[:2])
            unique_labels = detections[:, -1].cpu().unique()
            n_cls_preds = len(unique_labels)
            bbox_colors = random.sample(colors, n_cls_preds)
            for x1, y1, x2, y2, conf, cls_conf, cls_pred in detections:
                print("\t+ Label: %s, Conf: %.5f" % (classes[int(cls_pred)], cls_conf.item()))

                box_w = x2 - x1
                box_h = y2 - y1

                color = bbox_colors[int(np.where(unique_labels == int(cls_pred))[0])]
                # Create a Rectangle patch
                bbox = patches.Rectangle((x1, y1), box_w, box_h, linewidth=2, edgecolor=color, facecolor="none")
                print(int(box_w) * int(box_h))  # box area
                # if box_w * box_h > 10000:
                #     se.write("1".encode())
                #     time.sleep(3)
                #     se.write("0".encode())
                # Add the bbox to the plot
                ax.add_patch(bbox)
                # Add label
                plt.text(
                    x1,
                    y1,
                    s=classes[int(cls_pred)],
                    color="white",
                    verticalalignment="top",
                    bbox={"color": color, "pad": 0},
                )

        # Save generated image with detections
        plt.axis("off")
        plt.gca().xaxis.set_major_locator(NullLocator())
        plt.gca().yaxis.set_major_locator(NullLocator())
        filename = os.path.basename(path).split(".")[0]
        output_path = os.path.join("../output", f"{filename}.png")
        plt.savefig(output_path, bbox_inches="tight", pad_inches=0.0)
        plt.close()

    app.run()  # start the Flask server
```

The display() function returns an image directly when its URL is entered in the browser:

```python
@app.route('/img/', methods=['GET'])
def display():
    if request.method == 'GET':
        # if filename is None:
        #     pass
        # else:
        image = open("/static/he_21.png", "rb").read()
        # image = open(igpath + filename, "rb").read()
        response = make_response(image)
        response.headers['Content-Type'] = 'image/png'  # the file is a PNG
        return response
```
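The commented-out lines hint at a parameterized route that could serve any result image by name. A sketch of that idea using Flask's built-in send_from_directory, which guards against path traversal and picks the Content-Type from the file extension (the output directory path here is an assumption; point it at wherever detect.py writes its PNGs):

```python
from flask import Flask, send_from_directory

app = Flask(__name__)
OUTPUT_DIR = "/home/heziyi/output"  # assumption: folder where the annotated PNGs are saved

@app.route('/img/<filename>', methods=['GET'])
def display(filename):
    # send_from_directory refuses paths that escape OUTPUT_DIR and
    # sets the MIME type from the extension (.png -> image/png).
    return send_from_directory(OUTPUT_DIR, filename)
```

With this route, a phone on the same network could open http://server-ip:8000/img/he_21.png directly.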

Except for the Flask routes at the top and the final app.run(), everything here is deep-learning code; if you have studied it before, it should be easy to follow.

Front-end code

templates/index.html (a minimal skeleton; the heading is animated by the CSS below and the image comes from the /img/ route):

```html
<!DOCTYPE html>
<html>
<head>
    <meta charset="UTF-8">
    <title>Title</title>
    <link rel="stylesheet" href="/static/style.css">
</head>
<body>
    <h1>检测结果</h1>
    <img src="/img/" alt="detection result">
    <p>hello!!!!!!!!!!!!</p>
    <p>this is the detection result</p>
</body>
</html>
```

The CSS (saved as static/style.css to match the link above):

```css
html {
    width: 100%;
    height: 100%;
    overflow: hidden;
    font-family: sans-serif;
}

body {
    width: 100%;
    height: 100%;
    font-family: 'Open Sans', sans-serif;
    margin: 0;
    background-color: #4A374A;
}

img {
    width: 300px;
    height: 300px;
    border: 1px solid red;
}

h1 {
    text-align: center;
    color: #fff;
    font-size: 48px;
    text-shadow: 1px 1px 1px #ccc, 0 0 10px #fff, 0 0 20px #fff, 0 0 30px #fff,
                 0 0 40px #ff00de, 0 0 70px #ff00de, 0 0 80px #ff00de,
                 0 0 100px #ff00de, 0 0 150px #ff00de;
    transform-style: preserve-3d;
    -moz-transform-style: preserve-3d;
    -webkit-transform-style: preserve-3d;
    -ms-transform-style: preserve-3d;
    -o-transform-style: preserve-3d;
    animation: run ease-in-out 9s infinite;
    -moz-animation: run ease-in-out 9s infinite;
    -webkit-animation: run ease-in-out 9s infinite;
    -ms-animation: run ease-in-out 9s infinite;
    -o-animation: run ease-in-out 9s infinite;
}

@keyframes run {
    0% { transform: rotateX(-5deg) rotateY(0); }
    50% {
        transform: rotateX(0) rotateY(180deg);
        text-shadow: 1px 1px 1px #ccc, 0 0 10px #fff, 0 0 20px #fff, 0 0 30px #fff,
                     0 0 40px #3EFF3E, 0 0 70px #3EFFff, 0 0 80px #3EFFff,
                     0 0 100px #3EFFee, 0 0 150px #3EFFee;
    }
    100% { transform: rotateX(5deg) rotateY(360deg); }
}

/* The prefixed variants below repeat the same frames for older browsers. */
@-webkit-keyframes run {
    0% { transform: rotateX(-5deg) rotateY(0); }
    50% {
        transform: rotateX(0) rotateY(180deg);
        text-shadow: 1px 1px 1px #ccc, 0 0 10px #fff, 0 0 20px #fff, 0 0 30px #fff,
                     0 0 40px #3EFF3E, 0 0 70px #3EFFff, 0 0 80px #3EFFff,
                     0 0 100px #3EFFee, 0 0 150px #3EFFee;
    }
    100% { transform: rotateX(5deg) rotateY(360deg); }
}

@-moz-keyframes run {
    0% { transform: rotateX(-5deg) rotateY(0); }
    50% {
        transform: rotateX(0) rotateY(180deg);
        text-shadow: 1px 1px 1px #ccc, 0 0 10px #fff, 0 0 20px #fff, 0 0 30px #fff,
                     0 0 40px #3EFF3E, 0 0 70px #3EFFff, 0 0 80px #3EFFff,
                     0 0 100px #3EFFee, 0 0 150px #3EFFee;
    }
    100% { transform: rotateX(5deg) rotateY(360deg); }
}

@-ms-keyframes run {
    0% { transform: rotateX(-5deg) rotateY(0); }
    50% {
        transform: rotateX(0) rotateY(180deg);
        text-shadow: 1px 1px 1px #ccc, 0 0 10px #fff, 0 0 20px #fff, 0 0 30px #fff,
                     0 0 40px #3EFF3E, 0 0 70px #3EFFff, 0 0 80px #3EFFff,
                     0 0 100px #3EFFee, 0 0 150px #3EFFee;
    }
    100% { transform: rotateX(5deg) rotateY(360deg); }
}
```

Launch commands

```bash
export FLASK_APP="detect.py"   # Linux syntax for setting the environment variable
flask run --host=0.0.0.0 --port=8000
```
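One caveat: flask run imports detect.py and serves its routes, but nothing under if __name__ == "__main__": runs on import, so the detection pass is skipped. To run detection and then serve in a single step, start the script with python detect.py and have the final call bind to all interfaces, e.g.:

```python
# last line of detect.py, replacing the bare app.run()
app.run(host="0.0.0.0", port=8000)  # listen on all interfaces so LAN devices can connect
```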
Remember to open the port in Ubuntu's firewall:

sudo ufw allow 8000
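To verify that the port is actually reachable before trying it from a phone, a quick check with Python's standard library (the IP below is a placeholder for the Ubuntu machine's LAN address):

```python
import socket

HOST, PORT = "192.168.1.100", 8000  # placeholder LAN IP and the Flask port

try:
    # create_connection completes a full TCP handshake, so success means
    # the firewall allows the port and the server is listening.
    with socket.create_connection((HOST, PORT), timeout=3):
        print("port %d is reachable" % PORT)
except OSError as exc:
    print("cannot reach port %d: %s" % (PORT, exc))
```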

Appendix:
Allow / deny a port or service

sudo ufw allow|deny [service]

sudo ufw allow smtp: allow all external IPs to access local port 25/tcp (smtp)

sudo ufw allow 22/tcp: allow all external IPs to access local port 22/tcp (ssh)

sudo ufw allow 53: allow external access to port 53 (both tcp and udp)

sudo ufw allow from 192.168.1.100: allow this IP to access every local port

sudo ufw allow proto udp from 192.168.0.1 port 53 to 192.168.0.2 port 53: allow UDP traffic from 192.168.0.1 port 53 to 192.168.0.2 port 53

Check the firewall status

sudo ufw status
