<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>TensorFlow + WeChat</title>
<link rel="icon" type="image/x-icon" href="img/favo.ico"/>
<meta name="description" content="Deploying deep-learning models with TensorFlow Serving and building a WeChat-based VQA application">
<meta name="author" content="Xu Jing">
<meta name="apple-mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-status-bar-style" content="black-translucent">
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
<link rel="stylesheet" href="css/reveal.css">
<link rel="stylesheet" href="css/theme/black.css" id="theme">
<!-- Theme used for syntax highlighting of code -->
<link rel="stylesheet" href="lib/css/zenburn.css">
<!-- Printing and PDF exports -->
<script>
var link = document.createElement( 'link' );
link.rel = 'stylesheet';
link.type = 'text/css';
link.href = window.location.search.match( /print-pdf/gi ) ? 'css/print/pdf.css' : 'css/print/paper.css';
document.getElementsByTagName( 'head' )[0].appendChild( link );
</script>
<!--[if lt IE 9]>
<script src="lib/js/html5shiv.js"></script>
<![endif]-->
</head>
<body>
<div class="reveal">
<!-- Any section element inside of this container is displayed as a slide -->
<div class="slides">
<!-- <section data-background-video="img/video.mp4" data-background-color="#000000" > -->
<section>
<h3 style="font-family:'STLiti',华文隶书"><font color="#0000FF">A WeChat-Based Visual Question Answering Application</font></h3>
<h6 style="text-transform:none">Deep Learning + TensorFlow + WeChat</h6>
<br>
<br>
<p>
<small>Created By <a href="https://dataxujing.github.io" target='_blank'>徐静</a> </small>
</p>
<p>
<small>Date 2018-11-11</small>
</p>
</section>
<section style="text-align: left;">
<h5>Contents</h5>
<hr style="transition:width 0.5s linear">
<p>☠ Simple TensorFlow Serving</p>
<p>☠ TensorFlow Serving + Docker </p>
<p>☠ A WeChat-based image-recognition application <a href="https://dataxujing.github.io/Fiery-and-RweiXin/" target="_blank">[source of inspiration]</a></p>
<hr>
</section>
<!-- Section 1: 如何部署tf model -->
<section>
<!-- 下箭头翻页 1-->
<section>
<h5 style="font-family:'STLiti',华文隶书">1. How to Deploy a TensorFlow-Based Application?</h5>
</section>
<section>
<br>
<ul style="font-size:30px">
<li style="list-style-type:none;">✍ For traditional machine-learning models we already have plenty of options:
<ul>
<li><a href="https://yq.aliyun.com/articles/614406?utm_content=m_1000007161" target='_blank'>Code2Code</a>✔</li>
<li><a href="https://dataxujing.github.io/R_online/" target='_blank'>API services (R, Python)</a>✔</li>
<li><a href="https://github.com/DataXujing/boston_model" target="_blank">GUI development (R: rattle, Ricetl; Python: PyQt5, Kivy)</a>✔</li>
<li><a href="https://www.ibm.com/developerworks/cn/opensource/ind-PMML1/" target='_blank'>PMML</a>✘</li>
<li>Embedded✘</li>
</ul>
</li>
<li style="list-style-type:none;">✍ The usual workflows:
<ul>
<li>Train and predict entirely online</li>
<li>Train offline (persist the necessary parameters, hyperparameters, and model structure in model files, pickle files, binary files, etc.), then predict online</li>
</ul>
</ul>
</li>
</ul>
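<p style="font-size:25px;text-align:left">The offline-training / online-prediction split can be sketched with the standard library alone (MeanModel below is a made-up toy model, not a real framework object):</p>
<pre>
<code class="hljs" data-trim contenteditable>
# Offline: train a toy model and persist it with pickle
import pickle

class MeanModel:
    """Hypothetical model: always predicts the mean of its training data."""
    def fit(self, xs):
        self.mean_ = sum(xs) / len(xs)
        return self
    def predict(self):
        return self.mean_

blob = pickle.dumps(MeanModel().fit([1.0, 2.0, 3.0]))  # ship this file to the server

# Online: restore the persisted model and predict
model = pickle.loads(blob)
print(model.predict())  # -> 2.0
</code>
</pre>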
</section>
<section style="text-align: left;">
<p>What about a model trained with TensorFlow?</p>
<br>
<ul style="font-size:30px">
<li style="list-style-type:none;">☠ <a href="https://github.com/DataXujing/xiaoX" target="_blank">How to fix slow inference after persisting a TensorFlow model?</a></li>
<li style="list-style-type:none;">☠ <a href="https://gyang274.github.io/tensorflow-serving-tutorial/" target="_blank">Deploy it with TensorFlow Serving</a> [a tip from a meetup]</li>
<li style="list-style-type:none;">☠ Then I stumbled on an even simpler option: <a href="https://stfs.readthedocs.io/en/latest/" target="_blank">Simple TensorFlow Serving</a></li>
<li style="list-style-type:none;">☠ Support multiple models of TensorFlow/ MXNet/ PyTorch/ Caffe2/ CNTK/ ONNX/ H2o/ Scikit-learn/ XGBoost/ PMML ✌</li>
</ul>
</section>
<section data-background="img/sfts1.jpeg" >
<!-- <img src="img/sfts1.jpeg" alt="SFTS supports multiple clients and model frameworks" /> -->
</section>
<section data-background="img/sfts2.jpeg" >
<!-- <img src="img/sfts2.jpeg" alt="SFTS throughput" /> -->
</section>
<section data-transition="slide" data-background="#4d7e65" data-background-transition="zoom">
<p>We will deploy a trained CNN model and build a WeChat image-recognition application on top of it ☕ </p>
</section>
</section>
<!-- section2: STFS -->
<section>
<section>
<h3 style="font-family:'STLiti',华文隶书;text-transform:none">2. Simple TensorFlow Serving</h3>
<br>
<br>
<br>
<ul style="font-size:25px;list-style-type:none;">
<li>☠ Tutorial:
<ul style="font-size:20px">
<li>
<a href="https://stfs.readthedocs.io/en/latest/" target="_blank">https://stfs.readthedocs.io/en/latest/</a></li>
</ul>
</li>
<li>☠ Project repository:
<ul style="font-size:20px">
<li>
<a href="https://github.com/tobegit3hub/simple_tensorflow_serving" target="_blank">https://github.com/tobegit3hub/simple_tensorflow_serving</a></li>
</ul>
</li>
</ul>
</section>
<section style="text-align: left; ">
<p style="font-size:30px">
☠ Support distributed TensorFlow models <br>
☠ Support the general RESTful/HTTP APIs <br>
☠ Support inference with accelerated GPU <br>
☠ Support curl and other command-line tools <br>
☠ Support clients in any programming language <br>
☠ Support code-gen client by models without coding <br>
☠ Support inference with raw file for image models <br>
☠ Support statistical metrics for verbose requests <br>
☠ Support serving multiple models at the same time <br>
☠ Support dynamic online and offline for model versions <br>
☠ Support loading new custom op for TensorFlow models <br>
☠ Support secure authentication with configurable basic auth<br>
☠ Support multiple models of TensorFlow/MXNet/PyTorch/Caffe2/CNTK/ONNX/H2o/Scikit-learn/XGBoost/PMML
</p>
</section>
<section>
<pre>
<code class="hljs" data-trim contenteditable>
# Common commands
simple_tensorflow_serving --model_base_path="./model" --model_platform="scikitlearn" --model_version="v0.1.0"
simple_tensorflow_serving --model_config_file="./examples/model_config_file.json"

# Typical service configuration (model_config_file.json)
{
  "model_config_list": [
    {
      "name": "ResNet For WeChat Server",
      "base_path": "/home/soft/simple_tf_server/ResNet_v1",
      "platform": "tensorflow",
      "version": "lx_v0.1.0"
    }, {
      "name": "server2",
      "base_path": "model_save_path2",
      "platform": "scikitlearn"
    }, {
      "name": "server3",
      "base_path": "path3",
      "platform": "mxnet"
    }
  ]
}
</code>
</pre>
</section>
<section style="text-align: left;">
<h5 style="text-transform:none;">Saving and Loading Models with TensorFlow SavedModel</h5>
<p style="font-size:30px">
☠ The familiar tf.train.Saver. Saving a model this way produces several kinds of files:</p>
<p style="font-size:20px"><span class="fragment">.meta: stores the definition of the entire graph;</span>
<span class="fragment">checkpoint: a bookkeeping file recording the saved checkpoints;</span>
<span class="fragment">.index: stores the current parameter names;</span><span class="fragment">.data: stores the current parameter values.</span></p>
<pre>
<code class="hljs" data-trim contenteditable>
# Saving a model
import tensorflow as tf

checkpoint_dir = "mysaver"

# First define a simple graph
graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=[], name='input')
    y = tf.Variable(initial_value=0, dtype=tf.float32, name="y_variable")
    update_y = y.assign(x)
    saver = tf.train.Saver(max_to_keep=3)
    init_op = tf.global_variables_initializer()

# Train the model and save it every 4000 iterations
sess = tf.Session(graph=graph)
sess.run(init_op)
for i in range(1, 10000):
    y_result = sess.run(update_y, feed_dict={x: i})
    if i % 4000 == 0:
        saver.save(sess, checkpoint_dir, global_step=i)

# Restoring the model
tf.reset_default_graph()
restore_graph = tf.Graph()
with tf.Session(graph=restore_graph) as restore_sess:
    restore_saver = tf.train.import_meta_graph('mysaver-8000.meta')
    restore_saver.restore(restore_sess, tf.train.latest_checkpoint('./'))
    print(restore_sess.run("y_variable:0"))
</code>
</pre>
</section>
<section style="text-align: left;">
<p style="font-size:25px">☠ tf.train.Saver() has drawbacks</p><br>
<pre>
<code class="hljs" data-trim contenteditable>
# Given the path to model.ckpt-n.meta, tf.train.import_meta_graph
# loads the graph structure and returns a Saver object
import tensorflow as tf

# Load the persisted graph directly
saver = tf.train.import_meta_graph("path/xxx.meta")
with tf.Session() as sess:
    saver.restore(sess, "path/xxx.ckpt")
    # Fetch a tensor by its name
    print(sess.run(tf.get_default_graph().get_tensor_by_name("add:0")))
</code>
</pre>
</section>
<section style="text-align: left;">
<h6 style="text-transform:none;">SavedModel</h6>
<p style="font-size:25px">SavedModel is the recommended format. SavedModel is a language-neutral, recoverable, hermetic serialization format. TensorFlow provides several mechanisms for interacting with a SavedModel, such as the tf.saved_model API, the Estimator API, and the CLI.</p>
<p style="font-size:25px"><span class="fragment">1. Create a tf.saved_model.builder.SavedModelBuilder.</span> <br>
<span class="fragment">2. Use that builder to add the current graph and variables: SavedModelBuilder.add_meta_graph_and_variables(...)</span> <br>
<span class="fragment">3. Add further meta graphs with SavedModelBuilder.add_meta_graph</span><br></p>
<h6 style="text-transform:none;">Loading a SavedModel</h6>
<p class="fragment" style="font-size:25px">tf.saved_model.loader.load()</p>
<p style="font-size:25px"><span class="fragment">1. The session in which to restore the graph definition and variables</span><br>
<span class="fragment">2. The tags identifying the MetaGraphDef to load</span><br>
<span class="fragment">3. The location (directory) of the SavedModel</span></p>
<aside class="notes">
https://tensorflow.google.cn/programmers_guide/saved_model#models
</aside>
</section>
<section style="text-align: left;">
<p style="font-size:30px">☠ tf.saved_model.builder.SavedModelBuilder</p>
<pre>
<code class="hljs" data-trim contenteditable>
class tf.saved_model.builder.SavedModelBuilder

# Constructor
__init__(export_dir)

# Add the current graph and variables
add_meta_graph_and_variables(
    sess,
    tags,
    signature_def_map=None,
    assets_collection=None,
    legacy_init_op=None,
    clear_devices=False,
    main_op=None
)

# Load a saved model
tf.saved_model.loader.load(
    sess,
    tags,
    export_dir,
    **saver_kwargs
)
</code>
</pre>
</section>
<section style="text-align: left;">
<p style="font-size:30px">☠ Directory layout of a saved model</p>
<pre>
<code class="hljs" data-trim contenteditable>
assets/            # subfolder with auxiliary (external) files, such as vocabularies
assets.extra/      # subfolder where higher-level libraries and users can add their own
                   # assets; they co-exist with the model but are not loaded by the graph
variables/         # subfolder containing the output of tf.train.Saver
    variables.data-?????-of-?????
    variables.index
saved_model.pb|saved_model.pbtxt  # the SavedModel protocol buffer: the graph
                                  # definition as a MetaGraphDef protocol buffer
</code>
</pre>
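<p style="font-size:25px;text-align:left">Before pointing a server at an export directory, a stdlib-only sanity check of this layout can help (check_saved_model is a hypothetical helper, not part of TensorFlow):</p>
<pre>
<code class="hljs" data-trim contenteditable>
import os

def check_saved_model(export_dir):
    """Return True if export_dir has the minimal SavedModel layout."""
    has_pb = any(os.path.exists(os.path.join(export_dir, name))
                 for name in ("saved_model.pb", "saved_model.pbtxt"))
    has_vars = os.path.isdir(os.path.join(export_dir, "variables"))
    return has_pb and has_vars

# e.g. check_saved_model("/home/soft/simple_tf_server/ResNet_v1") before serving
</code>
</pre>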
</section>
<section style="text-align: left;">
<p style="font-size:30px">☠ Example</p>
<pre>
<code class="hljs" data-trim contenteditable>
# `input` and `output` are the model's input and output tensors
tensor_info_input = tf.saved_model.utils.build_tensor_info(input)
tensor_info_output = tf.saved_model.utils.build_tensor_info(output)

sess = tf.Session()
builder = tf.saved_model.builder.SavedModelBuilder(export_path)
signatures = (
    tf.saved_model.signature_def_utils.build_signature_def(
        inputs={'input': tensor_info_input},
        outputs={'output': tensor_info_output},
        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME
    )
)
sess.run(tf.global_variables_initializer())
builder.add_meta_graph_and_variables(
    sess=sess,
    tags=[tf.saved_model.tag_constants.SERVING],
    signature_def_map={'predict': signatures}
)
builder.save()
</code>
</pre>
</section >
<section style="text-align: left;">
<p style="font-size:30px">☠ Parameter list</p>
<img src='img/sig_para.png'>
</section>
<section style="text-align: left;">
<p style="font-size:30px">☠ The tags parameter</p>
<img src='img/tag.png'>
</section>
<section style="text-align: left;">
<p style="font-size:30px">☠ <a href="https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/signature_defs.md" target="_blank">Using SignatureDef</a></p>
<p style="font-size:20px">1. A SignatureDef wraps the information about the input and output tensors and gives each one a custom alias, so you can name tensors freely while building the model; when saving the trained model, just assign the agreed-upon aliases in the SignatureDef.</p>
<p style="font-size:20px">2. TensorFlow's examples for this make heavy use of <a href="https://tensorflow.google.cn/api_docs/python/tf/saved_model/signature_constants" target="_blank">signature_constants</a>; these constants mainly provide convenient, consistent naming.</p>
<pre>
<code class="hljs" data-trim contenteditable>
# Build a signature
tf.saved_model.signature_def_utils.build_signature_def(
    inputs=None,
    outputs=None,
    method_name=None
)
# Build the tensor info
tf.saved_model.utils.build_tensor_info(tensor)
</code>
</pre>
</section>
<section style="text-align: left;">
<p class="fragment grow" style="font-size:30px"><a href="coede_static/code_stfs1.html" target="_blank">A small example</a></p>
<p class="fragment grow" style="font-size:30px">The trained model must be exported as a SavedModel</p>
<p class="fragment grow" style="font-size:30px"><a href="http://172.16.100.147:8500/" target="_blank">Take a look at our deployed deep-learning model</a></p>
<pre>
<code class="hljs" data-trim contenteditable>
simple_tensorflow_serving -h
simple_tensorflow_serving --model_base_path="/home/soft/model"
</code>
</pre>
</section>
<section style="text-align: left;">
<p style="font-size:25px">☠ Loading a SavedModel</p>
<pre>
<code class="hljs" data-trim contenteditable>
export_dir = ...
...
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, [tag_constants.TRAINING], export_dir)
    ...
</code>
</pre>
<p style="font-size:25px" class="fragment grow">Simple saving: tf.saved_model.simple_save </p>
<p style="font-size:25px" class="fragment grow">Using SavedModel with Estimators </p>
<p style="font-size:25px" class="fragment grow">Inspecting and executing a SavedModel with the CLI </p>
</section>
<section data-background="img/score.jpeg" >
</section>
</section>
<!-- section3 tfs+docker -->
<section>
<section>
<h3 style="font-family:'STLiti',华文隶书;text-transform:none">3.TensorFlow Serving + Docker</h3>
</section>
<section style="text-align: left;">
<p style="font-size:30px">☠ Docker: jump right in!</p>
<br>
<blockquote cite="http://www.runoob.com/docker/docker-tutorial.html" style="font-size:25px">
Docker is an open-source application container engine written in Go and released under the Apache 2.0 license.<br>
Docker lets developers package an application with all of its dependencies into a lightweight, portable container that can then be shipped to any popular Linux machine; it can also be used for virtualization.<br>
Containers are fully sandboxed, with no interfaces between one another (much like iPhone apps), and, crucially, their performance overhead is extremely low.
</blockquote>
</section>
<section style="text-align: left;">
<a href="http://www.docker.org.cn/" target="_blank"><img src="img/docker_CN.png" style="border:0;"></a><br>
<!-- <a href="http://www.runoob.com/docker/docker-tutorial.html" target="_blank"><img src="img/cainiao.png" style="border:0;"></a> -->
<ul style="font-size:30px;list-style-type:none;">
<li>☠ Docker is typically used for:
<ul style="font-size:25px">
<li>Automated packaging and release of web applications</li>
<li>Automated testing and continuous integration/delivery</li>
<li>Deploying and scaling databases or other backend applications in service environments</li>
<li>Building your own PaaS, either from scratch or by extending an existing OpenShift or Cloud Foundry platform</li>
</ul>
</li>
<li>☠ Not the focus of this talk; feel free to study it on your own</li>
</ul>
</section>
<section style="text-align: left;">
<p style="font-size:30px">☠ 1. Save the model as a SavedModel</p>
<br>
<a href="coede_static/code_avemodel.html" target="_blank"><p>[See the official example]</p></a>
</section>
<section style="text-align: left;">
<p style="font-size:30px">☠ 2. Install TensorFlow Serving via Docker</p>
<br>
<p class="fragment highlight-red" style="font-size:30px">There are several ways to install it; Docker is the officially recommended one</p>
<pre>
<code class="hljs" data-trim contenteditable>
# Run the serving image (pulled from Docker Hub if not present)
docker run tensorflow/serving

# The GPU build additionally requires nvidia-docker
docker pull tensorflow/serving:latest-gpu

# List the images on this machine
docker images

# Other docker commands we will use frequently
docker pull IMAGE
docker ps        # list running containers
docker stop CONTAINER_ID
docker rmi IMAGE_ID
docker rm CONTAINER
</code>
</pre>
</section>
<section style="text-align: left;">
<p style="font-size:30px">☠ 3. Deploy TensorFlow Serving with Docker</p>
<br>
<pre>
<code class="hljs" data-trim contenteditable>
# On our GPU server
docker run -p 8501:8501 \
--mount type=bind,source=/home/xujing/lx_soft/saved_model_half_plus_two_cpu,target=/models/half_plus_two \
-e MODEL_NAME=half_plus_two -t tensorflow/serving &
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
-X POST http://localhost:8501/v1/models/half_plus_two:predict
</code>
</pre>
<pre>
<code class="hljs" data-trim contenteditable>
# On our GPU server
docker run -p 8501:8501 \
--mount type=bind,source=/home/xujing/lx_soft/resnet,target=/models/resnet \
-e MODEL_NAME=resnet -t tensorflow/serving &
</code>
</pre>
<p style="font-size:25px" class="fragment grow">Start our TensorFlow Serving instance and deploy ResNet-50</p>
</section>
<section style="text-align: left;">
<p style="font-size:30px">☠ 4. Test the deployed model</p>
<br>
<p style="font-size:25px" class="fragment grow">A huge pitfall that took me two days to debug!</p>
<pre>
<code class="hljs" data-trim contenteditable>
from __future__ import print_function
import base64
import requests

SERVER_URL = 'xxx'
# from scipy import ndimage  # converts an image to an n-d array
# image_ndarray = ndimage.imread("pic3/1.jpg", mode="RGB")  # RGB

def predict_api():
    # Read the image as bytes
    with open('pic3/1.jpg', 'rb') as f:
        res = f.read()
    predict_request = '{"instances" : [{"b64": "%s"}]}' % base64.b64encode(res).decode()
    print(predict_request)
    headers = {'Content-Type': 'application/json'}
    response = requests.post(SERVER_URL, data=predict_request)
    response.raise_for_status()
    prediction = response.json()['predictions'][0]
    proba = max(prediction['probabilities']) * 100
    class1000 = prediction['classes']
    print('Prediction class: %s and Prediction proba: %s%%' % (class1000, proba))

if __name__ == "__main__":
    predict_api()
</code>
</pre>
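<p style="font-size:25px;text-align:left">One way to sidestep this kind of pitfall is to let json.dumps build the request body instead of interpolating strings by hand (a sketch; the raw bytes below stand in for a real JPEG file):</p>
<pre>
<code class="hljs" data-trim contenteditable>
import base64
import json

def build_b64_request(image_bytes):
    """Build the TF Serving REST body for a base64-encoded image."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return json.dumps({"instances": [{"b64": b64}]})

body = build_b64_request(b"fake-jpeg-bytes")
# The server decodes instances[0]["b64"] back into the original bytes
</code>
</pre>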
<p style="font-size:25px" class="fragment grow">Call our deployed ResNet-50 and check the prediction</p>
</section>
<section data-transition="slide" data-background="#b5533c" data-background-transition="zoom">
<p>The last step: the WeChat-side development!</p>
</section>
</section>
<!-- section4 基于微信的图像识别应用 -->
<section>
<section>
<h3 style="font-family:'STLiti',华文隶书;text-transform:none">4. A WeChat-Based Image Recognition Application</h3>
</section>
<section style="text-align: left;">
<p style="font-size:25px">☠ Building the final VQA system on WeChat</p><br>
<ul style="font-size:25px;list-style-type:none;">
<li>☠ Three possible approaches:
<ul style="font-size:25px">
<li>The WeChat chat interface (Python: itchat, wxpy)</li>
<li>An official account (RWeixin)</li>
<li>A WeChat mini program</li>
</ul>
</li>
</ul>
<br>
<p style="font-size:25px">⚡ We go with the WeChat chat interface</p>
</section>
<section style="text-align: left;">
<p style="font-size:25px">☠ The core code:</p>
<pre>
<code class="hljs" data-trim contenteditable>
'''
Workflow:
  1. Log in to WeChat
  2. Check for image messages and identify the sender
  3. Fetch the image and do simple preprocessing
  4. Call the TensorFlow Serving REST API
  5. Get the prediction
  6. Send the result back to the sender on WeChat
'''
__version__ = "v0.1.0"
__author__ = "Xu Jing"

import requests
import json
import datetime
import itchat
# import wxpy
import time
import base64
import pandas as pd

img_label = pd.read_excel("imgeNet_label/Image_label.xlsx")
SERVER_URL = 'http://172.16.100.202:8501/v1/models/resnet:predict'

def predict_api(img_path):
    # Read the image as bytes
    with open(img_path, 'rb') as f:
        res = f.read()
    predict_request = '{"instances" : [{"b64": "%s"}]}' % base64.b64encode(res).decode()
    headers = {'Content-Type': 'application/json'}
    response = requests.post(SERVER_URL, data=predict_request)
    response.raise_for_status()
    prediction = response.json()['predictions'][0]
    proba = max(prediction['probabilities']) * 100
    class1000 = prediction['classes']
    class_EN = list(img_label[img_label['class_NO'] == class1000]['class_EN'])[0]
    return (class_EN, proba)

# Login & message replies
@itchat.msg_register(itchat.content.PICTURE, isGroupChat=True)
def reply_text(msg):
    chatroom_id = msg['FromUserName']
    chatroom_NickName = [item['NickName'] for item in chatrooms if item['UserName'] == chatroom_id]
    username = msg['ActualNickName']
    print(chatroom_NickName[0] + '@' + username)
    if chatroom_NickName[0] == 'XXXXX':
        itchat.send("[%s] Received a message from @%s: %s\n" % (time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(msg['CreateTime'])), username, "an image"), 'filehelper')
        msg.download(msg.fileName)
        # Echo the file back to the chat
        itchat.send('@%s@%s' % (username, msg['FileName']))
        print('%s received' % msg['Type'])
        my_class, my_proba = predict_api(msg.fileName)
        return '@%s Let me guess: this image has a %s%% chance of being: %s \n' % (username, my_proba, my_class)
    elif chatroom_NickName[0] != 'XXXXXX':
        return

itchat.auto_login()
chatrooms = itchat.get_chatrooms(update=True, contactOnly=True)
itchat.run()
input()
</code>
</pre>
</section>
<section style="text-align: left;">
<p style="font-size:25px">☠ The WeChat-based VQA system in action:</p>
<style>
#my_div{
display:table-cell;
height:600px;
}
</style>
<div id='my_div'><img src="img/wechat1.jpg" align="middle" /></div>
<div id='my_div'><img src="img/wechat2.jpg" align="middle" /></div>
<div id='my_div'><img src="img/wechat3.jpg" align="middle" /></div>
</section>
<section style="text-align: left;">
<p style="font-size:25px">☠ The WeChat-based VQA system in action:</p>
<div id='my_div'><img src="img/wechat4.jpg" align="middle" /></div>
<div id='my_div'><img src="img/wechat5.jpg" align="middle" /></div>
<div id='my_div'><img src="img/wechat6.jpg" align="middle" /></div>
</section>
<section data-transition="slide" data-background="#4d7e65" data-background-transition="zoom">
<p>Finally, let's start the service and try the WeChat-based VQA system! ☕ </p>
</section>
</section>
<section>
<section>
<h3 style="font-family:'STLiti',华文隶书;text-transform:none">5. Summary</h3>
</section>
<section>
<ul style="font-size:35px;list-style-type:none;">
<li>☠ How traditional machine-learning models are deployed</li><br>
<li>☠ Deploying deep-learning models: speeding up model inference under hardware constraints:</li>
<ul style="font-size:30px">
<li>Traditional deployment methods</li>
<li>Simple TensorFlow Serving</li>
<li>TensorFlow Serving + Docker</li>
<li>Building a WeChat-based VQA system</li>
</ul>
<br>
<li>☠ Compared with the traditional approach, inference latency drops to the millisecond range, with support for GPUs, distributed serving, and multiple frameworks</li>
</ul>
</section>
</section>
<!-- section6 the end -->
<section style="text-align: left;">
<h1>THE END</h1>
<p>
- <a href="https://slides.com" target="_blank">Thanks Reveal.js</a> <br>
- <a href="https://github.com/hakimel/reveal.js" target="_blank"> Reveal.js</a> <br>
- <a href="https://github.com/DataXujing/tensorflow-serving-Wechat" target="_blank">Source code & documentation</a><br>
- <a href="https://dataxujing.github.io/tensorflow-serving-Wechat/#/" target="_blank">
Online Slides</a><br>
- <a href="https://dataxujing.github.io" target="_blank">About Xu Jing</a>
</p>
</section>
<section>
<section>
<h3 style="font-family:'STLiti',华文隶书;text-transform:none">Not done yet! What about Keras?</h3>
<p>Besides STFS and TFS, is there a better way to deploy Keras models?</p>
</section>
<section>
<ul style="font-size:35px;list-style-type:none;">
<li>☠ Train our model with Keras</li>
<li>☠ Save our model </li>
<li>☠ Write our Flask backend to serve our saved model (then look at the supporting files: load.py, index.js, index.html)</li>
<li>☠ Deploy our code to ** Cloud</li>
</ul>
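<p style="font-size:25px;text-align:left">A minimal sketch of such a backend, written here with Python's built-in http.server instead of Flask so it stays dependency-free; the predict function is a stub standing in for a real Keras model:</p>
<pre>
<code class="hljs" data-trim contenteditable>
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(pixels):
    # Stub: a real backend would call model.predict(...) here
    return {"class": "cat", "proba": 0.97}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and answer with a JSON prediction
        length = int(self.headers["Content-Length"])
        data = json.loads(self.rfile.read(length))
        body = json.dumps(predict(data.get("pixels", []))).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("", 5000), PredictHandler).serve_forever()
</code>
</pre>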
</section>
<section data-background="img/keras0.png">
</section>
<section style="text-align: left;">
<div id='my_div'>
<a href="https://github.com/transcranial/keras-js" target="_blank"><img src="img/keras1.png" align="middle" /></a></div>
<div id='my_div'><a href="https://github.com/mil-tokyo/webdnn" target="_blank"><img src="img/keras2.png" align="middle" /></a></div>
<div id='my_div'><a href="https://github.com/scienceai/neocortex" target="_blank"><img src="img/keras3.png" align="middle" /></a></div>
</section>
</section>
</div>
</div>
<!-- Other Code -->
<script src="lib/js/head.min.js"></script>
<script src="js/reveal.js"></script>
<script>
// More info https://github.com/hakimel/reveal.js#configuration
Reveal.initialize({
controls: true,
progress: true,
history: true,
center: true,
transition: 'slide', // none/fade/slide/convex/concave/zoom
// More info https://github.com/hakimel/reveal.js#dependencies
dependencies: [
{ src: 'lib/js/classList.js', condition: function() { return !document.body.classList; } },
{ src: 'plugin/markdown/marked.js', condition: function() { return !!document.querySelector( '[data-markdown]' ); } },
{ src: 'plugin/markdown/markdown.js', condition: function() { return !!document.querySelector( '[data-markdown]' ); } },
{ src: 'plugin/highlight/highlight.js', async: true, callback: function() { hljs.initHighlightingOnLoad(); } },
{ src: 'plugin/search/search.js', async: true },
{ src: 'plugin/zoom-js/zoom.js', async: true },
{ src: 'plugin/notes/notes.js', async: true }
]
});
</script>
</body>
</html>