    How to Integrate a Segmentation Network Model with Your iPhone App

    Segmenting a region of interest (ROI) in an image is a key problem in computer vision. With the rise of deep learning, numerous network architectures have been proposed to overcome the challenge of semantic segmentation. A few years ago, deploying a trained segmentation network model on a smartphone was far from reality. However, thanks to the powerful hardware in recent smartphones, coupled with the powerful software tools at our disposal, deploying a segmentation network in your smartphone app is only a few lines of code away!


    This article walks you through the process of integrating a trained segmentation model with your iPhone app. The process consists of two steps: (1) converting the model to the .mlmodel format, and (2) using the converted model to segment the ROI. To get familiar with these two steps, we shall use a U-Net model trained (using the Keras framework) to segment the sclera region of an external eye image.


    Step 1: Converting the model to a mobile compatible format

    To convert the Keras model to the .mlmodel format, we can use the coremltools Python library. It can be installed using the following command.


    pip install coremltools

    After installing the library, the following code snippet lets you perform the conversion and save the converted model in the specified location.


    Converting the .h5 model file to a .mlmodel model file
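    A minimal conversion script looks like the following. This is a sketch assuming coremltools 4+ and a tf.keras model; the file names, the input layer name input_1, and the 1/255 scale are illustrative and depend on how the model was trained.

    import coremltools as ct
    from tensorflow import keras

    # Load the trained Keras U-Net (file name is illustrative).
    keras_model = keras.models.load_model("sclera_unet.h5")

    # Convert to Core ML. Declaring the input as an image lets the Vision
    # framework hand CGImages to the model directly in Step 2.
    # "input_1" and the 1/255 scale are assumptions about the trained model.
    mlmodel = ct.convert(
        keras_model,
        inputs=[ct.ImageType(name="input_1", scale=1 / 255.0)],
    )

    # Save the converted model in the specified location.
    mlmodel.save("ScleraUNet.mlmodel")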

    Step 2: Using the converted model to segment the ROI

    After obtaining the .mlmodel file, drag and drop it inside your project folder in the Xcode navigation area. The Vision framework in Swift lets you work with the .mlmodel file when integrating it with the app.


    We start by importing the libraries, in this case UIKit and Vision, to our code as below.


    import UIKit
    import Vision

    Then we define our class ViewController, conforming to UIImagePickerControllerDelegate so that we can pick an image from the photo library. To keep things simple, we shall not add any text fields or labels to our app at this point.


    class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {}

    Inside this class we can start defining the functions that build up our app. First we define the UIImageView property that will display the selected external eye image from the photo library and the segmentation result.


    @IBOutlet weak var photoImageView: UIImageView!

    Then we can define our segmentation model with the following line. (ScleraUNet is the class that Xcode auto-generates from the ScleraUNet.mlmodel file.)


    let SegmentationModel = ScleraUNet()

    Then we define the Vision properties request and visionModel.


    var request: VNCoreMLRequest?
    var visionModel: VNCoreMLModel?

    Now we write a function that sets up the SegmentationModel and the request.


    Setting up the model
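    A possible implementation of setupModel() is sketched below. It assumes ScleraUNet is the class Xcode auto-generated from the .mlmodel file; the crop-and-scale option is an assumption.

    func setupModel() {
        // Wrap the class Xcode auto-generated from ScleraUNet.mlmodel
        // in a Vision model.
        if let visionModel = try? VNCoreMLModel(for: SegmentationModel.model) {
            self.visionModel = visionModel
            // visionRequestDidComplete is defined later in this article.
            request = VNCoreMLRequest(model: visionModel,
                                      completionHandler: visionRequestDidComplete)
            // Let Vision resize the input image to the size the model expects.
            request?.imageCropAndScaleOption = .scaleFill
        } else {
            fatalError("Could not create a VNCoreMLModel")
        }
    }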

    Here you can notice that the completionHandler of the VNCoreMLRequest is set to a function called visionRequestDidComplete that we have not defined yet. In the visionRequestDidComplete function, we can include the actions that should be executed after the request outputs a segmentation mask using the segmentation model. We shall define the visionRequestDidComplete function later in this article.


    To set up the model once the app has loaded, we add the following code snippet inside the class ViewController.


    override func viewDidLoad() {
        super.viewDidLoad()
        print("Setting up model...")
        setupModel()
        print("Model is ready")
        // nameTextField.delegate = self  // uncomment once the name text field
        //                                // is added at the end of this article
    }

    Now we can write the functions that let us select an external eye image from the photo library, as in the sketch below.
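    A standard UIImagePickerController flow along the following lines would do. This is a sketch; the action name and the way it is wired to the UI are assumptions.

    // Present the photo library when the image view is tapped
    // (how this action is wired up in the storyboard is an assumption).
    @IBAction func selectImageFromPhotoLibrary(_ sender: UITapGestureRecognizer) {
        let imagePickerController = UIImagePickerController()
        imagePickerController.sourceType = .photoLibrary
        imagePickerController.delegate = self
        present(imagePickerController, animated: true, completion: nil)
    }

    // Called when the user picks an image; display it in photoImageView.
    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        guard let selectedImage = info[.originalImage] as? UIImage else {
            fatalError("Expected a UIImage, but got \(info)")
        }
        photoImageView.image = selectedImage
        dismiss(animated: true, completion: nil)
    }

    // Called when the user cancels the picker.
    func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
        dismiss(animated: true, completion: nil)
    }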


    With that we can access the selected image with photoImageView.image.


    In addition, we would like to have an interactive button which executes the segmentation process when pressed.


    @IBAction func segementImage(_ sender: UIButton) {
        print("Segmenting... ")
        let input_image = photoImageView.image?.cgImage
        predict(with: input_image!)
    }

    When the button is pressed, the above function activates and runs the predict function. We will define this predict function later.


    After adding the aforementioned code snippets, the ViewController class now holds the image view outlet, the model and Vision properties, viewDidLoad, the photo-picking functions, and the segmentation button action.


    Now our code is almost complete. But we still have to define the visionRequestDidComplete and predict functions that we left out earlier. We define them inside an extension ViewController placed below the class ViewController.


    In visionRequestDidComplete we define the tasks that have to be completed after the model predicts the segmentation result. In this case, we would like to binarize the segmentation result and display it on the photoImageView we defined earlier.
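    Here is a sketch of that extension. It assumes the model's mask arrives as a VNPixelBufferObservation, and it binarizes with the CIColorThreshold filter (available from iOS 14); the 0.5 threshold is an assumed value.

    extension ViewController {

        // Run the Vision request on the selected image.
        func predict(with cgImage: CGImage) {
            guard let request = request else { fatalError("Request is not set up") }
            let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
            try? handler.perform([request])
        }

        // Called by Vision when the segmentation mask is ready.
        func visionRequestDidComplete(request: VNRequest, error: Error?) {
            guard let observations = request.results as? [VNPixelBufferObservation],
                  let maskBuffer = observations.first?.pixelBuffer else { return }

            // Binarize the raw mask with a threshold and display it.
            let mask = CIImage(cvPixelBuffer: maskBuffer)
            let filter = CIFilter(name: "CIColorThreshold")
            filter?.setValue(mask, forKey: kCIInputImageKey)
            filter?.setValue(0.5, forKey: "inputThreshold")
            guard let binaryMask = filter?.outputImage else { return }

            DispatchQueue.main.async {
                self.photoImageView.image = UIImage(ciImage: binaryMask)
            }
        }
    }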


    After adding the extension ViewController to our code, the app logic is complete.


    To polish our work a bit, we can add a few labels, buttons and text fields to the UI. In this case, we add a few labels, a clear button, and a text field that takes the name of the subject to whom the external eye image belongs. Here are some screenshots of our newly created simple app, which can segment the sclera region from a given external eye image.


    Left: Opening view of the app. Middle: View after selecting an external eye image from the photo library. Right: View after the sclera region is segmented. (Image by Author)

    It is also possible to overlay the segmentation mask on the original image, but that is beyond the scope of this article.


    That is the end of this article! I hope it helps you properly integrate a segmentation model into your iPhone app.


    Translated from: https://towardsdatascience.com/how-to-integrate-a-segmentation-network-model-with-your-iphone-app-5c11736b95a

