MemryX software enables developers to use MemryX AI accelerator chips efficiently when building systems and applications. It is designed to be simple, effective, and flexible, avoiding complex build setups and the need to retrain or alter already-trained AI models. You can quickly simulate and deploy AI models, choosing the best balance of performance, power, latency, and utilization for any application.
First Steps
① Get Started Quickly
Follow our Getting Started guide to install, set up, and verify all necessary software and hardware.
② Run Your First Model
Experience how easy it is to download, compile, deploy, and benchmark an AI model with MemryX.
# Download a model
python3 -c "import keras; keras.applications.MobileNet().save('mobilenet.h5');"
# Compile the model
mx_nc -m mobilenet.h5
# Deploy and Benchmark
mx_bench -d mobilenet.dfp -f 1000
from memryx import NeuralCompiler, Benchmark
import keras
# Load a model
mobilenet = keras.applications.MobileNet()
# Compile the model
dfp = NeuralCompiler(models=mobilenet, verbose=1).run()
# Deploy and Benchmark
with Benchmark(dfp=dfp) as accl:
    _, _, fps = accl.run(frames=1000)

print(f"FPS of MobileNet Accelerated on MXA: {fps:.2f}")
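The FPS figure reported above is simply frames divided by elapsed wall-clock time. Here is a minimal sketch of that arithmetic in plain Python, with a no-op standing in for the real inference call; `measure_fps` is a hypothetical helper for illustration, not part of the MemryX API, so no accelerator hardware is required to run it:

```python
import time

def measure_fps(run_inference, frames):
    """Time a callable over `frames` iterations and return frames per second."""
    start = time.perf_counter()
    for _ in range(frames):
        run_inference()
    elapsed = time.perf_counter() - start
    return frames / elapsed

# A no-op stands in for a real inference call here.
fps = measure_fps(lambda: None, frames=1000)
print(f"FPS: {fps:.2f}")
```

In a real benchmark the callable would submit a frame to the accelerator and wait for the result, so the measured rate reflects the full input-to-output pipeline rather than compute alone.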
③ Explore Models
Use the Model eXplorer to find the best model for your AI needs. Search, filter, and compare hundreds of models from multiple sources.
Documentation
Tools
Compile, Deploy, and Benchmark
APIs
Integrate MemryX into your application using the APIs
Specs
Specifications and Supported Operators
Get Help
Troubleshooting and FAQs
What’s New?
Release notes for version 1.0.1
How it works
The SDK simplifies AI deployment on MemryX hardware with tools including the neural compiler, runtime APIs, drivers, a benchmarking utility, and a simulator. Each tool can be used independently or integrated into existing software stacks, streamlining AI application deployment.