# directml

Here are 19 public repositories matching this topic...

The compatibility engine behind the LabVIEW Deep Learning Toolkit, ensuring that every ONNX operator behaves consistently across hardware targets. It validates each node against multiple execution providers to guarantee reliable and predictable AI deployment.

  • Updated Nov 18, 2025
  • Python
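The cross-provider validation this description refers to boils down to a tolerance check between outputs produced by different ONNX Runtime execution providers. The sketch below illustrates that comparison step with plain NumPy; the function names and the reference-provider convention are assumptions for illustration, not part of the toolkit's actual API.

```python
import numpy as np

def outputs_match(reference, candidate, rtol=1e-3, atol=1e-5):
    """Return True if a candidate provider's outputs agree with the
    reference (e.g. CPU) outputs within floating-point tolerance."""
    return all(
        np.allclose(r, c, rtol=rtol, atol=atol)
        for r, c in zip(reference, candidate)
    )

def validate_node(per_provider_outputs, reference="CPUExecutionProvider"):
    """Compare every provider's outputs for one ONNX node against the
    reference provider; return the names of providers that disagree."""
    ref_outs = per_provider_outputs[reference]
    return [
        name for name, outs in per_provider_outputs.items()
        if name != reference and not outputs_match(ref_outs, outs)
    ]

# Small floating-point drift (e.g. from a DirectML kernel) passes the
# tolerance check; a genuinely divergent result is flagged.
results = {
    "CPUExecutionProvider": [np.array([1.0, 2.0, 3.0])],
    "DmlExecutionProvider": [np.array([1.0, 2.0, 3.0]) + 1e-6],
    "BrokenProvider":       [np.array([1.0, 2.0, 9.0])],
}
print(validate_node(results))  # ['BrokenProvider']
```

In a real harness, `per_provider_outputs` would be populated by running the same single-node model through each `onnxruntime.InferenceSession`, one per provider, and the CPU provider is the usual choice of reference.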

A low-latency, real-time object detection and tracking pipeline built in Python, featuring zero-allocation preprocessing, optimized GDI-based screen capture, and GPU-accelerated inference via ONNX Runtime with TensorRT and CUDA backends. Designed for high-FPS, production-grade performance experimentation.

  • Updated Feb 5, 2026
  • Python
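"Zero-allocation preprocessing" generally means reusing buffers allocated once at startup instead of creating new arrays on every captured frame. A minimal NumPy sketch of the idea, where the frame shape, normalization constant, and function name are assumptions for illustration:

```python
import numpy as np

# Allocated once at startup and reused for every frame, so the hot loop
# performs no per-frame heap allocation.
FRAME_SHAPE = (480, 640, 3)  # assumed HWC capture resolution
chw_buffer = np.empty((3, 480, 640), dtype=np.float32)

def preprocess_into(frame_u8, out):
    """Convert an HWC uint8 frame into a normalized CHW float32 tensor,
    writing into a caller-supplied buffer (no new allocations)."""
    # transpose() returns a view; copyto() writes (and casts) into `out`,
    # and the divide is in place.
    np.copyto(out, frame_u8.transpose(2, 0, 1))
    out /= 255.0
    return out

frame = np.random.randint(0, 256, FRAME_SHAPE, dtype=np.uint8)
tensor = preprocess_into(frame, chw_buffer)
assert tensor is chw_buffer  # same memory every frame
```

The returned tensor is exactly the preallocated buffer, which is what lets the capture-preprocess-infer loop run at high FPS without triggering the allocator or garbage collector mid-frame.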

A robust benchmarking framework for evaluating GPU/CPU performance across NVIDIA, AMD, and Intel hardware (including DirectML), using PyTorch, TensorRT, TensorFlow, Pytest, and Allure reporting dashboards, and leveraging CI/CD, Docker, and Kubernetes.

  • Updated Nov 2, 2025
  • Python
