Building Websites and Mobile Apps in the AI Era, Part 29 - Detecting Objects in Video with a YOLO Model

·        Summary

The YOLO (You Only Look Once) model can recognize not only images but also video content, a capability that is essential for devices such as self-driving cars, drones, and aerial-photography aircraft.

Figure: A drone capturing aerial footage and recognizing its content (illustration)

·        Detecting objects in video with a YOLO model

Technically, a video is nothing more than a series of images played back in rapid succession (for example, 30 or 60 images per second). Take the following video as an example:

Figure 1: A short video containing two animals, a dog and a cat

It is produced by playing many images back-to-back at high speed. Note that what a webcam's lens observes has exactly the same structure: it, too, is a continuous series of images. For example, the video above is composed of the following frames:

Figure 2: The consecutive images that make up the video

So as long as a YOLO model can detect the content of an image, it can certainly detect the content of a video.

·        About the Emgu.CV library

Emgu.CV is a library that wraps the image-processing features of OpenCV, allowing applications on the Microsoft .NET platform to call OpenCV conveniently from any .NET-supported programming language. Emgu.CV integrates with Visual Studio and Unity, and can be deployed to Windows, Linux, and macOS, and even to mobile platforms such as iOS and Android.

·       Using Emgu.CV to split the video to recognize into a set of images

First, add the video you want to recognize to a folder named Video in the project, then open the [Properties] window and set the video file's [Copy to Output Directory] property to: Copy if newer.

string videoPath = "Video/CatnDog.mp4";     // the video to analyze
string outputFolder = "images";             // folder for the extracted images

Directory.CreateDirectory(outputFolder);    // create the output folder
VideoCapture capture = new VideoCapture(videoPath); // create Emgu.CV's VideoCapture
int frameNumber = 0;
while (true)                                // split the video frame by frame
{
    Mat frame = capture.QueryFrame();       // grab the next frame of the video
    if (frame == null)                      // no more frames to grab
        break;                              // stop splitting the video
    // save the extracted frame as an image file
    frame.Save(Path.Combine(outputFolder, $"frame_{frameNumber:D5}.jpg"));
    frameNumber++;                          // advance the image file number
}
capture.Dispose();                          // release the VideoCapture
// report how many image files were extracted
Console.WriteLine($"Extracted {frameNumber} frames to {outputFolder}.");

Once the video has been split into individual images and saved to the specified folder, recognizing its content works exactly the same way as recognizing the content of ordinary images.

·        Preparing the pretrained YOLO model

First, browse to the following link (model download URL: https://github.com/dotnet/machinelearning-samples/tree/main/samples/csharp/end-to-end-apps/ObjectDetection-Onnx/OnnxObjectDetection/ML/OnnxModels), download the TinyYolo2_model.onnx file into a folder named Models in the project, then open the [Properties] window and set the TinyYolo2_model.onnx file's [Copy to Output Directory] property to: Copy if newer.

·        Running object detection with the pretrained YOLO model

First, define the BoundingBox class that describes a detected object's position:

public class BoundingBox
{
    public float X { get; set; }           // X coordinate of the top-left corner
    public float Y { get; set; }           // Y coordinate of the top-left corner
    public float Height { get; set; }      // height
    public float Width { get; set; }       // width
}

Define the ImageData class that describes an image to detect:

public class ImageData
{
    [LoadColumn(0)]
    public string ImagePath;        // path of the image to detect
    [LoadColumn(1)]
    public string Label;            // label of the image to detect

    // read every image in the folder of images to recognize
    public static IEnumerable<ImageData> ReadFromFile(string imageFolder)
    {
        return Directory
            .EnumerateFiles(imageFolder)
            .Where(filePath => Path.GetExtension(filePath) != ".md")
            .Select(filePath => new ImageData { ImagePath = filePath,
                                                Label = Path.GetFileName(filePath) });
    }
}

Define the ImageSettings structure that describes the image dimensions the model expects:

public struct ImageSettings
{
    public const int imageHeight = 416;     // image height
    public const int imageWidth = 416;      // image width
}

Define the ModelSettings structure that names the YOLO model's input and output columns:

public struct ModelSettings
{
    public const string ModelInput = "image";       // input column name
    public const string ModelOutput = "grid";       // output column name
}

Define the OnnxModel class that wraps the ONNX model's functionality:

public class OnnxModel
{
    private readonly string imagesFolder;
    private readonly string modelLocation;
    private readonly MLContext mlContext;

    // constructor
    public OnnxModel(string imagesFolder, string modelLocation,
                     MLContext mlContext)
    {
        this.imagesFolder = imagesFolder;
        this.modelLocation = modelLocation;
        this.mlContext = mlContext;
    }

    // load the YOLO model
    private ITransformer LoadModel(string modelLocation)
    {
        Trace.WriteLine("Read model");
        Trace.WriteLine($"Model location: {modelLocation}");
        Trace.WriteLine($"Default parameters: image size=({ImageSettings.imageWidth},{ImageSettings.imageHeight})");
        // obtain the schema of the input data
        var data = mlContext.Data.LoadFromEnumerable(new List<ImageData>());
        // define the object-detection pipeline
        var pipeline = mlContext.Transforms.LoadImages(outputColumnName: "image",
                imageFolder: "", inputColumnName: nameof(ImageData.ImagePath))
            .Append(mlContext.Transforms.ResizeImages(outputColumnName: "image",
                imageWidth: ImageSettings.imageWidth,
                imageHeight: ImageSettings.imageHeight, inputColumnName: "image"))
            .Append(mlContext.Transforms.ExtractPixels(outputColumnName: "image"))
            .Append(mlContext.Transforms.ApplyOnnxModel(modelFile: modelLocation,
                outputColumnNames: new[] { ModelSettings.ModelOutput },
                inputColumnNames: new[] { ModelSettings.ModelInput }));
        // build the object-detection model
        var model = pipeline.Fit(data);
        // return the model just built
        return model;
    }

    // the application calls Score with the images to detect and gets the results back
    public IEnumerable<float[]> Score(IDataView data)
    {
        // load the object-detection model
        var model = LoadModel(modelLocation);
        // call PredictDataUsingModel to run detection and return the results
        return PredictDataUsingModel(data, model);
    }

    // run object detection on the images passed in
    private IEnumerable<float[]> PredictDataUsingModel(IDataView testData,
                                                       ITransformer model)
    {
        Trace.WriteLine($"Images location: {imagesFolder}");
        Trace.WriteLine("");
        Trace.WriteLine("=====Identify the objects in the images=====");
        Trace.WriteLine("");
        // run detection and collect the output
        IDataView scoredData = model.Transform(testData);
        // extract the confidence scores from the detection results
        IEnumerable<float[]> probabilities =
            scoredData.GetColumn<float[]>(ModelSettings.ModelOutput);
        // return the extracted confidence scores
        return probabilities;
    }
}

Define the YoloBoundingBox class that describes a detected object:

public class YoloBoundingBox
{
    // position of the detected object
    public BoundingBox Dimensions { get; set; }
    // class of the detected object
    public string Label { get; set; }
    // confidence score of the detected object
    public float Confidence { get; set; }
    // the detected object's position as a RectangleF, used for drawing the marker
    public RectangleF Rect
    {
        get { return new RectangleF(
            Dimensions.X, Dimensions.Y, Dimensions.Width, Dimensions.Height); }
    }
    // color used to draw the bounding rectangle
    public Color BoxColor { get; set; }
}

Define the YoloOutputParser class that interprets the YOLO model's detection output:

public class YoloOutputParser
{
    public const int ROW_COUNT = 13;             // grid rows
    public const int COL_COUNT = 13;             // grid columns
    public const int CHANNEL_COUNT = 125;        // 5 boxes x (5 features + 20 classes)
    public const int BOXES_PER_CELL = 5;
    public const int BOX_INFO_FEATURE_COUNT = 5; // x, y, w, h, objectness
    public const int CLASS_COUNT = 20;
    public const float CELL_WIDTH = 32;
    public const float CELL_HEIGHT = 32;
    private int channelStride = ROW_COUNT * COL_COUNT;
    private float[] anchors = new float[]
    {
        1.08F, 1.19F, 3.42F, 4.41F, 6.63F, 11.38F, 9.42F, 5.11F, 16.62F, 10.52F
    };
    // the object classes the model can detect
    private string[] labels = new string[]
    {
        "aeroplane", "bicycle", "bird", "boat", "bottle",
        "bus", "car", "cat", "chair", "cow",
        "diningtable", "dog", "horse", "motorbike", "person",
        "pottedplant", "sheep", "sofa", "train", "tvmonitor"
    };

    // colors used to draw each object class
    private static Color[] classColors = new Color[]
    {
        Color.Khaki, Color.Fuchsia, Color.Silver, Color.RoyalBlue, Color.Green,
        Color.DarkOrange, Color.Purple, Color.Gold, Color.Red, Color.Aquamarine,
        Color.Lime, Color.AliceBlue, Color.Sienna, Color.Orchid, Color.Tan,
        Color.LightPink, Color.Yellow, Color.HotPink, Color.OliveDrab,
        Color.SandyBrown, Color.DarkTurquoise
    };

 

    // squash the input into a value between 0 and 1
    private float Sigmoid(float value)
    {
        var k = (float)Math.Exp(value);
        return k / (1.0f + k);
    }

    // convert the input array into a probability distribution
    private float[] Softmax(float[] values)
    {
        var maxVal = values.Max();
        var exp = values.Select(v => Math.Exp(v - maxVal));
        var sumExp = exp.Sum();
        return exp.Select(v => (float)(v / sumExp)).ToArray();
    }

    // compute the index of the requested channel value for grid cell (x, y)
    private int GetOffset(int x, int y, int channel)
    {
        return (channel * this.channelStride) + (y * COL_COUNT) + x;
    }

    // extract the position of a detected object
    private BoundingBox ExtractBoundingBoxDimensions(
        float[] modelOutput, int x, int y, int channel)
    {
        return new BoundingBox
        {
            X = modelOutput[GetOffset(x, y, channel)],
            Y = modelOutput[GetOffset(x, y, channel + 1)],
            Width = modelOutput[GetOffset(x, y, channel + 2)],
            Height = modelOutput[GetOffset(x, y, channel + 3)]
        };
    }

    // extract the confidence score
    private float GetConfidence(float[] modelOutput, int x, int y, int channel)
    {
        return Sigmoid(modelOutput[GetOffset(x, y, channel + 4)]);
    }

    // convert the BoundingBox into coordinates relative to the grid cell
    private BoundingBox MapBoundingBoxToCell(int x, int y, int box,
                                             BoundingBox boxDimensions)
    {
        return new BoundingBox
        {
            X = ((float)x + Sigmoid(boxDimensions.X)) * CELL_WIDTH,
            Y = ((float)y + Sigmoid(boxDimensions.Y)) * CELL_HEIGHT,
            Width = (float)Math.Exp(boxDimensions.Width) * CELL_WIDTH * anchors[box * 2],
            Height = (float)Math.Exp(boxDimensions.Height) * CELL_HEIGHT * anchors[box * 2 + 1],
        };
    }

    // extract the class scores of a detected object
    public float[] ExtractClasses(float[] modelOutput, int x, int y, int channel)
    {
        float[] predictedClasses = new float[CLASS_COUNT];
        int predictedClassOffset = channel + BOX_INFO_FEATURE_COUNT;
        for (int predictedClass = 0; predictedClass < CLASS_COUNT; predictedClass++)
        {
            predictedClasses[predictedClass] =
                modelOutput[GetOffset(x, y, predictedClass + predictedClassOffset)];
        }
        return Softmax(predictedClasses);
    }

    // get the most likely class
    private ValueTuple<int, float> GetTopResult(float[] predictedClasses)
    {
        return predictedClasses
            .Select((predictedClass, index) => (Index: index, Value: predictedClass))
            .OrderByDescending(result => result.Value)
            .First();
    }

    // compute the IOU (Intersection over Union) of two rectangles
    private float IntersectionOverUnion(RectangleF boundingBoxA,
                                        RectangleF boundingBoxB)
    {
        var areaA = boundingBoxA.Width * boundingBoxA.Height;
        if (areaA <= 0)
            return 0;
        var areaB = boundingBoxB.Width * boundingBoxB.Height;
        if (areaB <= 0)
            return 0;
        var minX = Math.Max(boundingBoxA.Left, boundingBoxB.Left);
        var minY = Math.Max(boundingBoxA.Top, boundingBoxB.Top);
        var maxX = Math.Min(boundingBoxA.Right, boundingBoxB.Right);
        var maxY = Math.Min(boundingBoxA.Bottom, boundingBoxB.Bottom);
        var intersectionArea = Math.Max(maxY - minY, 0) * Math.Max(maxX - minX, 0);
        return intersectionArea / (areaA + areaB - intersectionArea);
    }

    // parse the raw detection output
    public IList<YoloBoundingBox> ParseOutputs(float[] yoloModelOutputs,
                                               float threshold = .3F)
    {
        var boxes = new List<YoloBoundingBox>();
        for (int row = 0; row < ROW_COUNT; row++)
        {
            for (int column = 0; column < COL_COUNT; column++)
            {
                for (int box = 0; box < BOXES_PER_CELL; box++)
                {
                    var channel = (box * (CLASS_COUNT + BOX_INFO_FEATURE_COUNT));
                    BoundingBox boundingBoxDimensions =
                        ExtractBoundingBoxDimensions(yoloModelOutputs,
                                                     row, column, channel);
                    float confidence = GetConfidence(yoloModelOutputs, row,
                                                     column, channel);
                    BoundingBox mappedBoundingBox = MapBoundingBoxToCell(
                        row, column, box, boundingBoxDimensions);
                    if (confidence < threshold)
                        continue;
                    float[] predictedClasses = ExtractClasses(yoloModelOutputs,
                                                              row, column, channel);
                    var (topResultIndex, topResultScore) =
                        GetTopResult(predictedClasses);
                    var topScore = topResultScore * confidence;
                    if (topScore < threshold)
                        continue;
                    boxes.Add(new YoloBoundingBox()
                    {
                        Dimensions = new BoundingBox
                        {
                            X = (mappedBoundingBox.X - mappedBoundingBox.Width / 2),
                            Y = (mappedBoundingBox.Y - mappedBoundingBox.Height / 2),
                            Width = mappedBoundingBox.Width,
                            Height = mappedBoundingBox.Height,
                        },
                        Confidence = topScore,
                        Label = labels[topResultIndex],
                        BoxColor = classColors[topResultIndex]
                    });
                }
            }
        }
        return boxes;
    }

    // filter out the correct bounding boxes
    public IList<YoloBoundingBox> FilterBoundingBoxes(IList<YoloBoundingBox> boxes,
                                                      int limit, float threshold)
    {
        var activeCount = boxes.Count;
        var isActiveBoxes = new bool[boxes.Count];
        for (int i = 0; i < isActiveBoxes.Length; i++)
            isActiveBoxes[i] = true;
        var sortedBoxes = boxes.Select((b, i) => new { Box = b, Index = i })
                               .OrderByDescending(b => b.Box.Confidence)
                               .ToList();
        var results = new List<YoloBoundingBox>();
        for (int i = 0; i < boxes.Count; i++)
        {
            if (isActiveBoxes[i])
            {
                var boxA = sortedBoxes[i].Box;
                results.Add(boxA);
                if (results.Count >= limit)
                    break;
                for (var j = i + 1; j < boxes.Count; j++)
                {
                    if (isActiveBoxes[j])
                    {
                        var boxB = sortedBoxes[j].Box;
                        if (IntersectionOverUnion(boxA.Rect, boxB.Rect) > threshold)
                        {
                            isActiveBoxes[j] = false;
                            activeCount--;
                            if (activeCount <= 0)
                                break;
                        }
                    }
                }
                if (activeCount <= 0)
                    break;
            }
        }
        return results;
    }
}
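A few of the parser's helpers are worth unpacking. Softmax subtracts the maximum value before exponentiating so that the exponential never overflows, then normalizes the results into probabilities. A Python sketch of the same computation (illustrative only, not part of the project):

```python
import math

def softmax(values):
    # Shift by the max so exp() stays in range, then normalize to sum to 1.
    max_val = max(values)
    exps = [math.exp(v - max_val) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
print(probs)  # three probabilities summing to 1, the largest for input 3.0
```

Because of the shift, softmax([1001, 1002, 1003]) yields exactly the same probabilities as softmax([1, 2, 3]) instead of overflowing.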
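GetOffset assumes the model's 125×13×13 output tensor arrives flattened channel-first, so the value for grid cell (x, y) in a given channel sits at channel*13*13 + y*13 + x. A quick Python check of that layout (illustrative only):

```python
ROW_COUNT = COL_COUNT = 13
CHANNEL_COUNT = 125
channel_stride = ROW_COUNT * COL_COUNT  # 169 values per channel plane

def get_offset(x, y, channel):
    # Channel-first (CHW) flattening: whole 13x13 planes, then rows, then columns.
    return channel * channel_stride + y * COL_COUNT + x

# Build a flat tensor where each element encodes its own (channel, y, x) triple,
# then confirm get_offset finds the expected element.
flat = [c * 10000 + y * 100 + x
        for c in range(CHANNEL_COUNT)
        for y in range(ROW_COUNT)
        for x in range(COL_COUNT)]
print(len(flat))                               # 21125 = 125 * 13 * 13
print(flat[get_offset(x=5, y=7, channel=24)])  # 240705, i.e. channel 24, y 7, x 5
```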
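MapBoundingBoxToCell is the standard Tiny YOLOv2 decoding step: the box centre is the cell index plus a sigmoid-squashed offset, scaled by the 32-pixel cell size, while the width and height are exponentials scaled by that box's anchor pair. The same arithmetic in Python (an illustrative sketch reusing the anchor values listed in the class above):

```python
import math

CELL_WIDTH = CELL_HEIGHT = 32.0
ANCHORS = [1.08, 1.19, 3.42, 4.41, 6.63, 11.38, 9.42, 5.11, 16.62, 10.52]

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def map_box_to_cell(x, y, box, tx, ty, tw, th):
    # Centre: cell index plus an offset squashed into 0..1, in pixels.
    cx = (x + sigmoid(tx)) * CELL_WIDTH
    cy = (y + sigmoid(ty)) * CELL_HEIGHT
    # Size: exp() of the raw prediction times the anchor prior for this box.
    w = math.exp(tw) * CELL_WIDTH * ANCHORS[box * 2]
    h = math.exp(th) * CELL_HEIGHT * ANCHORS[box * 2 + 1]
    return cx, cy, w, h

# Raw predictions of 0 decode to the centre of cell (3, 2) at the anchor's size:
# centre (112.0, 80.0), width/height approximately (34.56, 38.08).
cx, cy, w, h = map_box_to_cell(x=3, y=2, box=0, tx=0.0, ty=0.0, tw=0.0, th=0.0)
print(cx, cy)  # 112.0 80.0 (that is, (3.5 * 32, 2.5 * 32))
```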
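Finally, FilterBoundingBoxes is a greedy non-maximum suppression pass: walk the boxes in descending confidence, keep a box only if it does not overlap an already-kept box beyond the IoU threshold, and stop once the limit is reached. A compact Python sketch of the same idea (the boxes here are hypothetical (left, top, width, height, confidence) tuples, not the project's classes):

```python
def iou(a, b):
    # a and b are (left, top, width, height); IoU = overlap area / union area.
    inter_w = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    inter_h = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = inter_w * inter_h
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, limit, threshold):
    # Highest-confidence box first; drop boxes overlapping an accepted one.
    results = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(iou(box[:4], kept[:4]) <= threshold for kept in results):
            results.append(box)
            if len(results) >= limit:
                break
    return results

detections = [(0, 0, 10, 10, 0.9), (1, 1, 10, 10, 0.8), (50, 50, 10, 10, 0.7)]
kept = nms(detections, limit=5, threshold=0.5)
print([b[4] for b in kept])  # [0.9, 0.7]: the 0.8 box overlaps the 0.9 box
```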

·        Implementing object detection with the YOLO model

// let the user pick the image to recognize
private void btnSelect_Click(object sender, EventArgs e)
{
    if (openFileDialog1.ShowDialog() == DialogResult.OK)
    {
        picOriginal.ImageLocation = openFileDialog1.FileName;
    }
}

// run object detection
private void btnDetect_Click(object sender, EventArgs e)
{
    var modelFilePath = "Models/TinyYolo2_model.onnx"; // location of the YOLO model
    MLContext mlContext = new MLContext();             // create the MLContext object
    try
    {
        // load every image in the specified folder for object detection
        IEnumerable<ImageData> images = ImageData.ReadFromFile("images");
        IDataView imageDataView = mlContext.Data.LoadFromEnumerable(images);
        // create the OnnxModel object
        var modelScorer = new OnnxModel("images", modelFilePath, mlContext);
        // run detection on the loaded images and collect the results
        IEnumerable<float[]> probabilities = modelScorer.Score(imageDataView);
        // parse the detection results
        YoloOutputParser parser = new YoloOutputParser();
        // get the positions and sizes of the detected objects
        var boundingBoxes =
            probabilities
            .Select(probability => parser.ParseOutputs(probability))
            .Select(boxes => parser.FilterBoundingBoxes(boxes, 5, .5F));
        // draw the successfully detected objects on each image
        for (var i = 0; i < images.Count(); i++)
        {
            string imageFileName = images.ElementAt(i).Label;
            IList<YoloBoundingBox> detectedObjects = boundingBoxes.ElementAt(i);
            DrawBoundingBox("images", imageFileName, detectedObjects);
            LogDetectedObjects(imageFileName, detectedObjects);
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.ToString());
    }
}

// draw the bounding rectangle of each detected object
void DrawBoundingBox(string inputImageLocation, string imageName,
                     IList<YoloBoundingBox> filteredBoundingBoxes)
{
    Image image = Image.FromFile(Path.Combine(inputImageLocation, imageName));
    var originalImageHeight = image.Height;
    var originalImageWidth = image.Width;
    foreach (var box in filteredBoundingBoxes)
    {
        // Get Bounding Box Dimensions
        var x = (uint)Math.Max(box.Dimensions.X, 0);
        var y = (uint)Math.Max(box.Dimensions.Y, 0);
        var width = (uint)Math.Min(originalImageWidth - x, box.Dimensions.Width);
        var height = (uint)Math.Min(originalImageHeight - y, box.Dimensions.Height);
        // Resize To Image
        x = (uint)originalImageWidth * x / ImageSettings.imageWidth;
        y = (uint)originalImageHeight * y / ImageSettings.imageHeight;
        width = (uint)originalImageWidth * width / ImageSettings.imageWidth;
        height = (uint)originalImageHeight * height / ImageSettings.imageHeight;
        // Bounding Box Text
        string text = $"{box.Label} ({(box.Confidence * 100).ToString("0")}%)";
        using (Graphics thumbnailGraphic = Graphics.FromImage(image))
        {
            thumbnailGraphic.CompositingQuality = CompositingQuality.HighQuality;
            thumbnailGraphic.SmoothingMode = SmoothingMode.HighQuality;
            thumbnailGraphic.InterpolationMode = InterpolationMode.HighQualityBicubic;
            // Define Text Options
            Font drawFont = new Font("Arial", 12, FontStyle.Bold);
            SizeF size = thumbnailGraphic.MeasureString(text, drawFont);
            SolidBrush fontBrush = new SolidBrush(Color.Black);
            Point atPoint = new Point((int)x, (int)y - (int)size.Height - 1);
            // Define BoundingBox options
            Pen pen = new Pen(box.BoxColor, 3.2f);
            SolidBrush colorBrush = new SolidBrush(box.BoxColor);
            // Draw text on image
            thumbnailGraphic.FillRectangle(colorBrush, (int)x,
                (int)(y - size.Height - 1), (int)size.Width, (int)size.Height);
            thumbnailGraphic.DrawString(text, drawFont, fontBrush, atPoint);
            // Draw bounding box on image
            thumbnailGraphic.DrawRectangle(pen, x, y, width, height);
        }
    }
    image.Save(imageName);
    picDetected.ImageLocation = imageName;
}
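The "Resize To Image" step above maps the box coordinates from the 416×416 space the model saw back to the original image with a simple proportional scale (original size / 416). The scaling alone, as a Python sketch mirroring the integer arithmetic:

```python
MODEL_WIDTH = MODEL_HEIGHT = 416

def scale_box(x, y, w, h, image_width, image_height):
    # Proportionally stretch 416x416 model coordinates to the real image size,
    # using integer division like the uint arithmetic in the C# code.
    return (image_width * x // MODEL_WIDTH,
            image_height * y // MODEL_HEIGHT,
            image_width * w // MODEL_WIDTH,
            image_height * h // MODEL_HEIGHT)

print(scale_box(104, 52, 208, 104, image_width=1920, image_height=1080))
# (480, 135, 960, 270)
```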

// log the recognition results
void LogDetectedObjects(string imageName, IList<YoloBoundingBox> boundingBoxes)
{
    Trace.WriteLine($"The objects in the image {imageName} are detected as below....");
    foreach (var box in boundingBoxes)
    {
        Trace.WriteLine($"{box.Label} and its Confidence score: {box.Confidence}");
    }
    Trace.WriteLine("");
}

Running the code above displays the prediction result for every image file: each successfully detected object is outlined with a rectangle and labeled with its class, as shown in Figure 3:

Figure 3: The result of running video recognition with the YOLO object-detection model

Sample download:

https://github.com/aiguy1995/VideoAnalyzer.git
