{"id":34833,"date":"2025-01-10T12:38:23","date_gmt":"2025-01-10T04:38:23","guid":{"rendered":"https:\/\/www.zhidianwl.net\/zhidianwl\/?p=34833"},"modified":"2025-01-10T12:38:23","modified_gmt":"2025-01-10T04:38:23","slug":"%e8%8b%b9%e6%9e%9c%e4%b8%8a%e6%9e%b6tf%e6%b5%8b%e8%af%95%e6%8a%80%e6%9c%af%e5%8e%9f%e7%90%86%e4%bb%8b%e7%bb%8d","status":"publish","type":"post","link":"https:\/\/www.zhidianwl.net\/zhidianwl\/2025\/01\/10\/%e8%8b%b9%e6%9e%9c%e4%b8%8a%e6%9e%b6tf%e6%b5%8b%e8%af%95%e6%8a%80%e6%9c%af%e5%8e%9f%e7%90%86%e4%bb%8b%e7%bb%8d\/","title":{"rendered":"\u82f9\u679c\u4e0a\u67b6tf\u6d4b\u8bd5\u6280\u672f\u539f\u7406\u4ecb\u7ecd"},"content":{"rendered":"
TF (TensorFlow) is a popular machine learning framework developed and maintained by Google. It runs on a wide range of platforms, including desktop, mobile devices, and the cloud. On Apple devices, TF models can be deployed through the Core ML framework. Core ML is Apple's machine learning framework: it runs trained models that have been converted into a format suitable for iOS devices. In this article, we show how to convert a TF model into Core ML format and use it on an iOS device.
1. Prerequisites
Before starting, make sure the following software is installed (a quick version check follows the list):
- TensorFlow 1.13 or a later 1.x release (the export code below uses the TF 1.x graph API)
- Xcode 10 or later
- The TensorFlow Python API
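Since both the export code and the tfcoreml converter used in step 3 target TF 1.x graphs, it is worth confirming the environment before going further. A minimal check, assuming a standard pip installation of TensorFlow:

```python
# Confirm a TF 1.x installation before running the export code below.
import tensorflow as tf

print(tf.__version__)  # expect a 1.13+ release from the 1.x line
assert tf.__version__.startswith('1.'), 'the export code uses the TF 1.x graph API'
```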
2. Exporting the TF Model
First, we need to define a TF model in Python and export it as a pb file for conversion. The converter expects a pb file containing both the model's structure and its weights; note that the code below saves the weights to a checkpoint and writes only the graph structure to model.pb, so a freezing step (shown after the listing) is needed to merge the two.
The code to export the model is as follows:
```python
import tensorflow as tf

# Define the model: a simple CNN for classifying MNIST digits.
input_tensor = tf.placeholder(tf.float32, shape=[None, 28, 28, 1], name='input_tensor')
conv1 = tf.layers.conv2d(inputs=input_tensor, filters=32, kernel_size=[5, 5], padding='same', activation=tf.nn.relu)
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
conv2 = tf.layers.conv2d(inputs=pool1, filters=64, kernel_size=[5, 5], padding='same', activation=tf.nn.relu)
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
flatten = tf.layers.flatten(inputs=pool2)
dense1 = tf.layers.dense(inputs=flatten, units=1024, activation=tf.nn.relu)
dropout = tf.layers.dropout(inputs=dense1, rate=0.4)
logits = tf.layers.dense(inputs=dropout, units=10)

# Export: save the weights to a checkpoint and the graph structure to a pb file.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver = tf.train.Saver()
    saver.save(sess, 'model.ckpt')
    tf.train.write_graph(sess.graph_def, '.', 'model.pb', as_text=False)
```

This code defines a simple convolutional neural network for classifying the MNIST handwritten-digit dataset and saves it to the current directory (in a real workflow you would train the network before saving it). As noted above, model.pb holds only the graph structure while the weights live in model.ckpt, so the two must be frozen into a single pb file before conversion; a minimal sketch follows.
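A freezing sketch using TF 1.x's freeze_graph utility (this step is not in the original listing; the output node name dense_1/BiasAdd matches the logits layer defined above, but verify it against your own graph):

```python
# Merge the checkpoint weights into the graph, producing a single frozen pb file.
from tensorflow.python.tools import freeze_graph

freeze_graph.freeze_graph(
    input_graph='model.pb',               # structure written by write_graph
    input_saver='',
    input_binary=True,                    # model.pb was written with as_text=False
    input_checkpoint='model.ckpt',        # weights saved by tf.train.Saver
    output_node_names='dense_1/BiasAdd',  # the logits node; confirm in your graph
    restore_op_name='save/restore_all',
    filename_tensor_name='save/Const:0',
    output_graph='frozen_model.pb',       # single file: structure + weights
    clear_devices=True,
    initializer_nodes='')
```

The resulting frozen_model.pb, rather than the raw model.pb, is what we hand to the converter in the next step.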
3. Converting to Core ML Format

Next, we need to convert the frozen pb file to Core ML format. For this we can use the tfcoreml converter, which turns TF models into Core ML's .mlmodel format; Xcode then generates a Swift or Objective-C interface for the model automatically, for use in an iOS app.

First, install tfcoreml. In a terminal, run:

```bash
pip install tfcoreml
```

Once installed, the conversion is driven from Python:

```python
import tfcoreml

tfcoreml.convert(tf_model_path='frozen_model.pb',  # the frozen graph from step 2
                 mlmodel_path='model.mlmodel',
                 output_feature_names=['dense_1/BiasAdd:0'],
                 input_name_shape_dict={'input_tensor:0': [1, 28, 28, 1]},  # shape must be fully determined
                 image_input_names=['input_tensor:0'],
                 image_scale=1/255.0)
```

This converts the pb file to Core ML format and saves it as model.mlmodel. The output_feature_names parameter names the output node, input_name_shape_dict gives the input node's name and its (fully determined) shape, image_input_names marks which inputs are images, and image_scale is the factor applied to pixel values (here scaling 0-255 down to 0-1).

4. Using the Model in an iOS App

Now that the TF model has been converted to Core ML format and saved as model.mlmodel, we can use it for inference in an iOS app.

Create a new iOS app in Xcode and add the model file to the project. Xcode generates a model class named after the file, so rename it to MNIST.mlmodel to get the MNIST class used below. Then add the following code to ViewController.swift:

```swift
import UIKit
import CoreML

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        let model = MNIST()

        // Load a test image and convert it to the grayscale pixel buffer the model expects.
        guard let image = UIImage(named: "test.png"), let pixelBuffer = image.pixelBuffer() else {
            fatalError("could not load or convert the test image")
        }

        guard let output = try? model.prediction(input_tensor: pixelBuffer) else {
            fatalError("prediction failed")
        }

        print(output.classLabel)
    }
}

extension UIImage {

    // Renders the image into a single-channel (grayscale) CVPixelBuffer for Core ML.
    func pixelBuffer() -> CVPixelBuffer? {
        let width = Int(self.size.width)
        let height = Int(self.size.height)
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                         width,
                                         height,
                                         kCVPixelFormatType_OneComponent8,
                                         attrs,
                                         &pixelBuffer)
        guard let buffer = pixelBuffer, status == kCVReturnSuccess else {
            return nil
        }

        CVPixelBufferLockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
        defer {
            CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
        }

        let pixelData = CVPixelBufferGetBaseAddress(buffer)
        let grayColorSpace = CGColorSpaceCreateDeviceGray()
        guard let context = CGContext(data: pixelData,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                      space: grayColorSpace,
                                      bitmapInfo: CGImageAlphaInfo.none.rawValue),
              let cgImage = self.cgImage else {
            return nil
        }

        // Draw the image into the buffer-backed grayscale context.
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return buffer
    }
}
```
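Before wiring everything into the app, it can save a debugging round-trip to sanity-check the converted model from Python on a Mac. A minimal sketch, assuming coremltools and Pillow are installed and a 28x28 test image test.png is at hand (both assumptions; the input name below is the sanitized one tfcoreml typically produces, so verify it in the printed model description):

```python
# Sanity-check the converted model on macOS before embedding it in the app.
import coremltools
from PIL import Image

model = coremltools.models.MLModel('model.mlmodel')
print(model)  # shows the input/output descriptions Xcode will see

# The image input must match the declared 28x28 grayscale shape.
img = Image.open('test.png').convert('L').resize((28, 28))
result = model.predict({'input_tensor__0': img})  # assumed sanitized name; check print(model)
print(result)
```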