MultiStream YOLOv8S Object Detection#

Introduction#

In this tutorial, we will show how to use the MultiStreamAsyncAccl API to perform multistream real-time object detection on the MX3 and demonstrate the MultiStream capability of the API. We will use the YOLOv8S model for our demo.

Note

This tutorial assumes a four-chip solution is correctly connected.

Download the Model#

The YOLOv8S pre-trained models are available on the official YOLOv8 GitHub page. For this tutorial, we exported both TFLite and ONNX versions of the model for you to download. The models can be found in the compressed folder yolov8_object_detection_c++.zip attached to this tutorial.

Compile the Model#

The YOLOv8S model was exported with the option to include a post-processing section in the model graph. Hence, it needs to be compiled with the Neural Compiler's autocrop option.

After compilation, the compiler will generate the DFP file for the main section of the model (yolov8s.dfp) and a file for the cropped post-processing section (yolov8s_post.tflite or yolov8s_post.onnx).

The compilation step is typically needed once and can be done using the Neural Compiler API or Tool.

Hint

You can use the pre-compiled DFP and post-processing section included in yolov8_object_detection_c++.zip attached to this tutorial and skip the compilation step.

To compile the TFLite model using the Neural Compiler Python API:

from memryx import NeuralCompiler
nc = NeuralCompiler(num_chips=4, models="yolov8s.tflite", verbose=1, dfp_fname="yolov8s", autocrop=True)
dfp = nc.run()

Alternatively, in your command line, type:

mx_nc -v -m yolov8s.tflite --autocrop -c 4

In your Python code, point to the DFP via the generated file path:

dfp = "yolov8s.dfp"

In your C++ code, point to the DFP via the generated file path.
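
A minimal C++ sketch (the variable name is illustrative):

#include <string>

// Point to the DFP file generated by the Neural Compiler
const std::string dfp_path = "yolov8s.dfp";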

To compile the ONNX model using the Neural Compiler Python API:

from memryx import NeuralCompiler
nc = NeuralCompiler(num_chips=4, models="yolov8s.onnx", verbose=1, dfp_fname="yolov8s", autocrop=True)
dfp = nc.run()

Alternatively, in your command line, type:

mx_nc -v -m yolov8s.onnx --autocrop -c 4

In your Python code, point to the DFP via the generated file path:

dfp = "yolov8s.dfp"

In your C++ code, point to the DFP via the generated file path.
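
As with TFLite, a minimal C++ sketch (variable name illustrative):

#include <string>

// Point to the DFP file generated by the Neural Compiler
const std::string dfp_path = "yolov8s.dfp";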

Note

YOLOv8s can be run using both TFLite and ONNX. Only the model path needs to be changed in the script.

CV Pipelines#

In this tutorial, we will show an end-to-end implementation of the CV graph. Here, the overlay and the output display are part of the output function connected to the MultiStreamAsyncAccl API. The following flowchart shows the different parts of the pipeline. Note that the input camera frame is saved (queued) so that it can later be overlaid and displayed.

graph LR
    input([Input Function]) --> accl[Accelerator]
    accl --> output([Output Function])
    input -.-> q[[Frames Queue]]
    q -.-> output
    style input fill:#CFE8FD, stroke:#595959
    style accl fill:#FFE699, stroke:#595959
    style output fill:#A9D18E, stroke:#595959
    style q fill:#dbd9d3, stroke:#595959

CV Initializations#

We will import the needed libraries, initialize the CV pipeline, and define common variables in this step.

import cv2
from queue import Queue, Full
from threading import Thread
from matplotlib import pyplot as plt

# Stream-related containers: active-stream flags, display-window flags,
# and per-stream frame/detection queues
self.streams = []
self.streams_idx = [True] * self.num_streams
self.stream_window = [False] * self.num_streams
self.cap_queue = {i: Queue(maxsize=10) for i in range(self.num_streams)}
self.dets_queue = {i: Queue(maxsize=10) for i in range(self.num_streams)}
self.outputs = {i: [] for i in range(self.num_streams)}

# Open a capture for each input video and keep track of it
vidcap = cv2.VideoCapture(video_path)
self.streams.append(vidcap)

Model Pre-/Post-Processing#

The pre-/post-processing steps are typically provided by the model authors and are outside the scope of this tutorial. The tutorial's compressed folder includes a helper class that implements the pre- and post-processing of YOLOv8, which you can consult for reference. You can use the helper class as follows:

from yolov8 import YoloV8 as YoloModel
# Initialize video captures, models, and dimensions for each stream
for i, video_path in enumerate(video_paths):
    vidcap = cv2.VideoCapture(video_path)
    # Initialize the model with the stream dimensions
    self.model[i] = YoloModel(stream_img_size=(self.dims[i][1], self.dims[i][0], 3))
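
Inside the same loop, the stream dimensions can be read directly from each capture. A minimal sketch (storing self.dims as a (width, height) tuple per stream is an assumption; the full code file defines its own dimension handling):

    # Read the frame dimensions OpenCV reports for this stream
    self.dims[i] = (int(vidcap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                    int(vidcap.get(cv2.CAP_PROP_FRAME_HEIGHT)))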

The accl.set_postprocessing_model() function will automatically retrieve the output from the chip, apply the cropped post-processing section of the graph using the TFLite/ONNX runtime (depending on which model you pass), and generate the final output.

accl.set_postprocessing_model('tflite/model_0_yolov8s_post.tflite', model_idx=0)

In this case, we will use the TFLite runtime since the model passed is in TFLite format. Users can also write their own post-processing if TFLite/ONNX Runtime is not available on their system.
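
For reference, running the cropped post-processing section manually with the TFLite interpreter could look like the following sketch (the input ordering and dtype are assumptions; set_postprocessing_model handles all of this for you):

import numpy as np
import tensorflow as tf

# Load the cropped post-processing section of the graph
interp = tf.lite.Interpreter(model_path="tflite/model_0_yolov8s_post.tflite")
interp.allocate_tensors()

def run_post(mxa_outputs):
    # Feed each MXA output feature map to the post-processing model
    for detail, arr in zip(interp.get_input_details(), mxa_outputs):
        interp.set_tensor(detail['index'], arr.astype(np.float32))
    interp.invoke()
    return [interp.get_tensor(d['index']) for d in interp.get_output_details()]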

After that, the output can be sent to the post-processing code in the YOLOv8 helper class to get the detections on the output image. The class is part of the full code file.

Similarly, in C++, the accl->connect_post_model() function will automatically retrieve the output from the chip, apply the cropped post-processing section of the graph using the TFLite/ONNX runtime, and generate the final output. As in Python, the TFLite runtime is used here since the model passed is in TFLite format, and users can write their own post-processing if the TFLite/ONNX runtime is not available on their system.
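
A minimal sketch of the corresponding C++ call (the exact signature is documented in the C++ API reference; the path argument here simply mirrors the Python example above):

// Attach the cropped post-processing model to the accelerator
accl->connect_post_model("tflite/model_0_yolov8s_post.tflite");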

Define an Input Function#

We need to define an input function for the accelerator to use. In this case, our input function will get a new frame from the video stream and pre-process it.

def capture_and_preprocess(self, stream_idx):
    """
    Captures a frame from the video stream and pre-processes it.
    """
    got_frame, frame = self.streams[stream_idx].read()

    if not got_frame:
        self.streams_idx[stream_idx] = False
        return None

    try:
        # Put the frame in the cap_queue to be processed later
        self.cap_queue[stream_idx].put(frame, timeout=2)

        # Pre-process the frame using the corresponding model
        frame = self.model[stream_idx].preprocess(frame)
        return frame

    except Full:
        print('Dropped frame .. exiting')
        return None

Note

In the above code, the method preprocess is used as the pre-processing step. This method is part of the full code file.

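For reference, a typical YOLOv8 pre-processing step resizes the frame to the model's input resolution, converts BGR to RGB, and scales pixel values to [0, 1]. A minimal sketch (the helper class in the tutorial package may differ, e.g. by letterboxing):

import cv2
import numpy as np

def preprocess(frame, input_size=(640, 640)):
    """Resize to the model input size, convert BGR to RGB, and scale to [0, 1]."""
    img = cv2.resize(frame, input_size)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    return img.astype(np.float32) / 255.0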

Define Output Functions#

We also need to define an output function for the accelerator to use. Our output function will post-process the accelerator output and push the detection results to a queue.

Besides collecting and post-processing the MXA data, the pipeline also overlays the detections on the saved frames and displays them; this happens on a separate display thread that consumes the queues.

def postprocess(self, stream_idx, *mxa_output):
    """
    Post-process the MXA output.
    """
    dets = self.model[stream_idx].postprocess(mxa_output)

    # Push the detection results to the queue
    self.dets_queue[stream_idx].put(dets)

    # Track frame-to-frame times in a rolling window to estimate FPS
    self.dt_array[stream_idx][self.dt_index[stream_idx]] = time.time() - self.frame_end_time[stream_idx]
    self.dt_index[stream_idx] += 1

    # Refresh the FPS estimate every 15 frames
    if self.dt_index[stream_idx] % 15 == 0:
        self.fps[stream_idx] = 1 / np.average(self.dt_array[stream_idx])

    # Wrap around the 30-sample rolling window
    if self.dt_index[stream_idx] >= 30:
        self.dt_index[stream_idx] = 0

    self.frame_end_time[stream_idx] = time.time()

Note

In the above code, the method postprocess is used as the post-processing step. This method is part of the full code file.

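For reference, the display thread consumes the frames and detections queues, overlays the boxes, and shows one window per stream. A minimal sketch (the detection dictionary keys and window handling are assumptions; the full code file contains the actual implementation):

def display(self):
    """Overlay detections on queued frames and show one window per stream."""
    while not self.done:
        for stream_idx in range(self.num_streams):
            # Skip streams that have no frame/detections ready yet
            if self.cap_queue[stream_idx].empty() or self.dets_queue[stream_idx].empty():
                continue
            frame = self.cap_queue[stream_idx].get()
            dets = self.dets_queue[stream_idx].get()
            # Draw each detection box and class label (keys are assumed)
            for det in dets:
                x1, y1, x2, y2 = map(int, det['bbox'])
                cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
                cv2.putText(frame, str(det['class']), (x1, y1 - 5),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
            cv2.imshow(f"Stream {stream_idx}", frame)
        # Allow OpenCV to refresh the windows; 'q' quits
        if cv2.waitKey(1) == ord('q'):
            self.done = True
    cv2.destroyAllWindows()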

Connect the Accelerator#

The run function creates the accelerator, starts it with the specified number of streams, and waits for it to finish.

# Requires: from memryx import MultiStreamAsyncAccl
def run(self):
    """
    The function that starts the inference on the MXA.
    """
    accl = MultiStreamAsyncAccl(dfp='tflite/yolov8s.dfp')
    print("YOLOv8s inference on MX3 started")
    accl.set_postprocessing_model('tflite/model_0_yolov8s_post.tflite', model_idx=0)

    self.display_thread.start()

    start_time = time.time()

    # Connect the input and output functions and let the accl run
    accl.connect_streams(self.capture_and_preprocess, self.postprocess, self.num_streams)
    accl.wait()

    self.done = True

    # Join the display thread
    self.display_thread.join()
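
Putting it all together, the application can be launched as follows (MultiStreamYolo is an illustrative name for the wrapper class in the full code file; the video paths are placeholders):

if __name__ == "__main__":
    video_paths = ["video1.mp4", "video2.mp4"]
    app = MultiStreamYolo(video_paths)  # illustrative wrapper class
    app.run()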

In the C++ version, the main() function creates the accelerator and the YoloV8 object, starts the accelerator, and waits for it to finish.

The YoloV8() constructor connects the input streams to the accelerator.

Third-Party License#

This tutorial uses third-party software, models, and libraries. Below are the details of the licenses for these dependencies:

Summary#

This tutorial showed how to use the MultiStreamAsyncAccl API to run multistream real-time inference with an object-detection model. The code and the resources used in the tutorial are available to download:

See also