Automating DFP Generation with the Compiler API#

The MemryX Neural Compiler offers a versatile command-line interface (CLI) to compile models and generate DFPs for the MemryX Accelerator (MXA). In addition to the CLI, MemryX provides a robust Compiler API, which allows DFP generation to be fully automated.

In this step-by-step guide, we will demonstrate how to leverage the Compiler API to build an automation tool that reads input parameters from a JSON file and generates one or multiple DFPs programmatically.

Defining the Input JSON Format#

The first step is to define a structured input format that includes all necessary parameters for the Compiler API. Below is a sample JSON file that can be used as input.

{
  "dfps": [
    {
      "dfp_name": "yolov7Tiny416_2chip",
      "models": [
        "MX_API/samples/models/yolov7Tiny416/yolov7-tiny_416.onnx"
      ],
      "num_chips": 2,
      "chip_gen": "mx3",
      "autocrop": true,
      "effort": "normal",
      "dfp_fname":"fname.dfp",
      "verbose": 4,
      "show_optimization": true
    }
  ]
}

This JSON structure defines a list of DFP groups, each containing the model path(s) and the compiler parameters to apply to that group.

For multi-model compilations, list multiple model paths in the models field. Per-model values such as inputs and outputs are separated with the | delimiter, as described in the Compiler API documentation.

Here’s an example of a multi-model input JSON:

{
  "dfps": [
    {
      "dfp_name": "yolov7Tiny416_multimodelinputtest",
      "models": [
        "MX_API/samples/models/yolov7Tiny416/yolov7-tiny_416.onnx",
        "MX_API/samples/models/yolov7Tiny416/yolov7-tiny_416.onnx"
      ],
      "num_chips": 4,
      "chip_gen": "mx3",
      "autocrop": false,
      "effort": "normal",
      "dfp_fname":"testingfname.dfp",
      "inputs":"images | images",
      "outputs":"/model/model.77/m.0/Conv_output_0,/model/model.77/m.1/Conv_output_0,/model/model.77/m.2/Conv_output_0 | /model/model.77/m.0/Conv_output_0,/model/model.77/m.1/Conv_output_0,/model/model.77/m.2/Conv_output_0",
      "verbose": 4,
      "show_optimization": true
    }
  ]
}

Additionally, to generate several DFPs in a single run, you can include multiple DFP groups in the JSON file, as illustrated below:

{
  "dfps": [
    {
      "dfp_name": "yolov7Tiny416_firsttime",
      "models": [
        "MX_API/samples/models/yolov7Tiny416/yolov7-tiny_416.onnx"
      ],
      "num_chips": 2,
      "chip_gen": "mx3",
      "autocrop": true,
      "effort": "normal",
      "dfp_fname":"testingfname.dfp",
      "verbose": 4,
      "show_optimization": true
    },
    {
      "dfp_name": "yolov7Tiny416_secondtime",
      "models": [
        "MX_API/samples/models/yolov7Tiny416/yolov7-tiny_416.onnx"
      ],
      "num_chips": 2,
      "chip_gen": "mx3",
      "autocrop": false,
      "effort": "normal",
      "inputs":"images",
      "outputs":"/model/model.77/m.0/Conv_output_0,/model/model.77/m.1/Conv_output_0,/model/model.77/m.2/Conv_output_0",
      "verbose": 4,
      "show_optimization": true
    }
  ]
}
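
Before writing the full automation script, it can help to sanity-check a configuration file. The following is a minimal sketch using only the Python standard library; it simply verifies that every DFP group contains the required dfp_name and models fields, and the file name passed on the command line is just an example.

import json
import sys

def validate_config(json_file):
    # Load the configuration and check that the top-level "dfps" list exists
    with open(json_file, "r") as file:
        data = json.load(file)

    groups = data.get("dfps", [])
    if not groups:
        raise ValueError("The configuration must contain a non-empty 'dfps' list.")

    # Every group needs at least a name and one model path
    for group in groups:
        for required in ("dfp_name", "models"):
            if required not in group:
                raise ValueError(f"A DFP group is missing the '{required}' field: {group}")

    print(f"{json_file}: {len(groups)} DFP group(s) look well formed.")

if __name__ == "__main__":
    validate_config(sys.argv[1])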

Automating the Process with Python#

Now that the input format is defined, we can move on to writing a Python script that parses this input file and invokes the Compiler API based on the provided configurations.

First, import the required Python libraries along with the MemryX libraries.

import json
import argparse
import os
from memryx import NeuralCompiler

Next, parse the DFP group data from the input file and configure the compiler dynamically for each group.

def compile_models_recursively(nc, groups_data):

    # Base case: if no DFP groups are left to compile, stop and return
    if not groups_data:
        return

    # Compile the first DFP group of models
    group_data = groups_data[0]
    compile_group(nc, group_data)

    # Recursively compile the remaining dfp groups
    compile_models_recursively(nc, groups_data[1:])

def compile_groups(json_file):
    # Load the dfp groups and their parameters from the input JSON file
    with open(json_file, 'r') as file:
        groups_data = json.load(file)["dfps"]
    
    #Initialize the NeuralCompiler
    nc = NeuralCompiler()

    # Start the recursive group compilation
    compile_models_recursively(nc, groups_data)

if __name__ == "__main__":
    # Parse CLI arguments to accept the input JSON file path
    parser = argparse.ArgumentParser(description="Compile neural network models using configurations given in a JSON file")
    parser.add_argument("json_file", type=str, help="Path to the input JSON configuration file.")
    
    # Get the file path from the command line input
    args = parser.parse_args()

    # Compile models based on the provided JSON file
    compile_groups(args.json_file)

The script iterates through each DFP group and compiles its models as specified. Each DFP group is processed in its own folder, so the output files are kept separate. The DFP file name defaults to the group name, but you can override it with the dfp_fname parameter in the input JSON. The helper functions below apply the group parameters to the compiler and run the compilation:


def set_compiler_config(nc, params):
    for key, value in params.items():
        if isinstance(value, dict):  # If a nested dictionary, call recursively
            set_compiler_config(nc, value)
        else:
            # Set the parameter on the compiler configuration
            nc.set_config(**{key: value})

def compile_group(nc, group_data):
    
    # Reset configuration of the neural compiler for the new dfp group
    nc.reset_config()

    # Set group-level configurations
    group_params = {key: value for key, value in group_data.items() if key not in ["models", "dfp_name"]}
    set_compiler_config(nc, group_params)

    # Compile all models in the group together
    models = group_data["models"]
    nc.set_config(models=models)

    # Use dfp_fname from the JSON if given, otherwise default to "<dfp_name>.dfp"
    dfp_fname = group_data.get("dfp_fname", f"{group_data['dfp_name']}.dfp")
    nc.set_config(dfp_fname=dfp_fname)

    # Verify and print the current configuration
    config = nc.get_config()
    print(f"Compiling group: {group_data.get('dfp_name')} with models: {models}")

    print("printing config after final set")
    print(config)

    
    # Create a directory named after the DFP group (it is reused if it already exists)
    dfp_dir = group_data['dfp_name']
    os.makedirs(dfp_dir, exist_ok=True)

    # Change into the new directory so the generated files are placed there
    original_dir = os.getcwd()
    os.chdir(dfp_dir)

    try:
        # Run the compiler and generate the DFP file
        dfp = nc.run()
        print(f"DFP saved to: {dfp_fname}")

    finally:
        # Move back to the original directory
        os.chdir(original_dir)
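
The helpers above handle one group at a time, so you can also call compile_group directly with a dictionary that mirrors a single entry of the dfps list, without going through a JSON file. The snippet below is a minimal sketch; the group name and model path are placeholders.

from memryx import NeuralCompiler

# Hypothetical single-group compilation; replace the placeholders with real values
nc = NeuralCompiler()
compile_group(nc, {
    "dfp_name": "single_group_example",     # placeholder group name
    "models": ["path/to/your_model.onnx"],  # placeholder model path
    "num_chips": 2,
    "chip_gen": "mx3",
    "autocrop": True,
    "effort": "normal"
})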


Running the Script#

To execute the script, follow these steps:

# Activate the MemryX virtual environment (see the install page for creating the venv and installing the SDK)
. ~/mx/bin/activate
python recur_compiler.py input_sample.json

You should see output similar to this:

Compiling group: yolov7Tiny416_2chip with models: ['MX_API/samples/models/yolov7Tiny416/yolov7-tiny_416.onnx']
printing config after final set
{'models': ['MX_API/samples/models/yolov7Tiny416/yolov7-tiny_416.onnx'], 'num_chips': 2, 'input_shapes': [], 'effort': 'normal', 'inputs': 'images', 'outputs': '/model/model.77/m.0/Conv_output_0,/model/model.77/m.1/Conv_output_0,/model/model.77/m.2/Conv_output_0', 'model_in_out': None, 'autocrop': False, 'target_fps': 'max', 'dfp_fname': 'testingfname.dfp', 'verbose': 4, 'show_optimization': True, 'hpoc': None, 'hpoc_file': None, 'wbtable': None, 'exp_auto_dp': False}
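
After the run completes, each group's folder should contain the generated DFP file. The following is a minimal check using only the standard library; the folder and file names are taken from the first example group and are purely illustrative.

import os

# Illustrative path based on the first example group: "<dfp_name>/<dfp_fname>"
dfp_path = os.path.join("yolov7Tiny416_2chip", "fname.dfp")

if os.path.isfile(dfp_path):
    print(f"Found {dfp_path} ({os.path.getsize(dfp_path)} bytes)")
else:
    print(f"{dfp_path} was not generated")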

Summary#

In this tutorial, we covered how to programmatically use the MemryX Compiler API to automate DFP generation for one or multiple models. You can find the full code implementation here: