TT-Forge™

MLIR: A Unified Approach to AI Compilation

Engineered for Innovation

tt-torch is an MLIR-native, open-source front-end built on PyTorch 2.x and torch-mlir. It provides StableHLO (SHLO) graphs to tt-mlir.
It supports ingesting PyTorch models via the PyTorch 2.x compile path and ONNX models via torch-mlir (ONNX -> SHLO).
It can also break a PyTorch graph down into individual operations, which makes it possible to discover bugs or missing operations in parallel.
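The per-operation triage described above can be sketched in plain Python. This is not tt-torch's actual IR or API; the graph representation, op names, and `check_op`/`triage` helpers are all hypothetical, illustrating only the idea that isolating each op lets failures be found independently and in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical mini-graph: each node is (op_name, callable, sample_inputs).
# The structure is illustrative only, not tt-torch's real representation.
GRAPH = [
    ("add",    lambda a, b: a + b, (2, 3)),
    ("mul",    lambda a, b: a * b, (4, 5)),
    ("matmul", None,               (None, None)),  # stand-in for an op with no lowering yet
]

def check_op(node):
    """Run a single op in isolation and report whether it is supported."""
    name, fn, args = node
    if fn is None:
        return name, "missing lowering"
    try:
        fn(*args)
        return name, "ok"
    except Exception as exc:
        return name, f"failed: {exc}"

def triage(graph):
    # Each single-op test case is independent, so the checks parallelize trivially.
    with ThreadPoolExecutor() as pool:
        return dict(pool.map(check_op, graph))

results = triage(GRAPH)
```

Because every op is checked standalone, one unsupported op (here, the placeholder `matmul`) is reported precisely instead of failing the whole model.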
TT-Forge-FE is a graph compiler that optimizes and transforms computational graphs for deep learning models, improving their performance and efficiency.
It supports ingesting PyTorch, ONNX, TensorFlow, and similar ML frameworks via TVM (tt-tvm).
Building on TVM IR, it breaks graphs from different frameworks down into individual operations, making the model bring-up effort data-driven.
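"Data-driven bring-up" can be illustrated with a small sketch: once graphs are decomposed into individual operations, you can count which unsupported ops appear most often across target models and prioritize those. The model names, op lists, and helper below are hypothetical, not TT-Forge-FE's actual tooling.

```python
from collections import Counter

# Hypothetical op lists, as if extracted from three models by a TVM-style
# decomposition; all names are illustrative.
MODEL_OPS = {
    "resnet": ["conv2d", "relu", "add", "conv2d", "maxpool"],
    "bert":   ["matmul", "softmax", "add", "layernorm", "matmul"],
    "unet":   ["conv2d", "relu", "concat", "conv2d", "upsample"],
}

# Assumed current op coverage (illustrative).
SUPPORTED = {"conv2d", "relu", "add", "matmul"}

def bringup_priorities(model_ops, supported):
    """Rank unsupported ops by how often they occur across all models."""
    counts = Counter(op for ops in model_ops.values() for op in ops)
    return [(op, n) for op, n in counts.most_common() if op not in supported]

todo = bringup_priorities(MODEL_OPS, SUPPORTED)
```

The resulting ranked list turns "which op should we implement next?" into a question answered by measured frequency rather than guesswork.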
tt-xla uses the PJRT interface to connect JAX (and, in the future, other frameworks) with tt-mlir and Tenstorrent hardware.
It supports ingesting JAX models via jit compilation, providing a StableHLO (SHLO) graph to the tt-mlir compiler.
The tt-xla plugin is loaded natively in JAX to compile and run JAX models with the tt-mlir compiler and runtime.


