
MemryX software enables developers to efficiently use MemryX AI accelerator chips to build systems and applications. It is designed to be simple, effective, and flexible, with no complex coding setup and no need to alter already-trained AI models. You can quickly simulate and deploy AI models, choosing the best balance of performance, power, latency, and utilization for any application.


First Steps

① Get Started Quickly

Follow our Getting Started guide to install, set up, and verify all necessary software and hardware.

Get Started: get_started/index.html
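
Once the install is complete, a quick import check confirms the Python SDK is available (a minimal sketch; the Getting Started guide covers the full driver and hardware verification):

# Verify the MemryX Python SDK is importable
python3 -c "import memryx; print('MemryX SDK import OK')"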

② Run Your First Model

Experience how easy it is to download, compile, deploy, and benchmark an AI model with MemryX, either from the command line or through the Python API.

Command line:

# Download a model
python3 -c "import keras; keras.applications.MobileNet().save('mobilenet.h5');"

# Compile the model
mx_nc -m mobilenet.h5

# Deploy and Benchmark
mx_bench -d mobilenet.dfp -f 1000

Python API:

from memryx import NeuralCompiler, Benchmark
import keras

# Load a model
mobilenet = keras.applications.MobileNet()

# Compile the model
dfp = NeuralCompiler(models=mobilenet, verbose=1).run()

# Deploy and Benchmark
with Benchmark(dfp=dfp) as accl:
    _, _, fps = accl.run(frames=1000)
    print(f"FPS of MobileNet Accelerated on MXA: {fps:.2f}")

For a step-by-step walkthrough of this example, see the Hello MXA! tutorial.

③ Explore Models

Use the Model eXplorer to find the best model for your AI needs. Search, filter, and compare hundreds of models from multiple sources.

Model eXplorer: model_explorer/models.html

④ Learn with Tutorials

Get hands-on with our Tutorials to dive into MemryX-powered AI projects.

Tutorials: tutorials/tutorials.html

⑤ End-to-End Examples

Explore complete AI pipelines with our End-to-End Examples.

MemryX eXamples: https://github.com/memryx/MemryX_eXamples

Documentation

How it works

The SDK streamlines AI deployment on MemryX hardware with tools including the Neural Compiler, runtime APIs, drivers, a benchmarking utility, and a simulator. These tools can be used independently or integrated into a variety of software stacks, and a typical workflow follows the steps below.

1. Select a model
2. Compile your model with the Neural Compiler
3. Deploy and benchmark with the HW Benchmark tool or the Simulator
4. Integrate into your app with the Accelerator APIs, Driver APIs, and more helper tools (see the sketch after this list)
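
The sketch below illustrates step 4, streaming frames through the accelerator with the Python Accelerator API. It assumes the asynchronous AsyncAccl interface with connect_input/connect_output callbacks, a compiled mobilenet.dfp from the earlier steps, and random data in place of real frames; consult the Accelerator API reference for the exact signatures.

import numpy as np
from memryx import AsyncAccl

# Open the accelerator with the DFP compiled earlier (file name is an assumption)
accl = AsyncAccl(dfp='mobilenet.dfp')

frames_left = 10  # number of placeholder frames to stream

# Input callback: return one preprocessed frame per call, or None to stop
def send_frame():
    global frames_left
    if frames_left == 0:
        return None
    frames_left -= 1
    return np.random.rand(224, 224, 3).astype(np.float32)  # stand-in for a real frame

# Output callback: receives the model outputs for each processed frame
def handle_output(*outputs):
    print("Output shape:", outputs[0].shape)

accl.connect_input(send_frame)
accl.connect_output(handle_output)
accl.wait()  # block until every queued frame has been processed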