ADHDecode

TensorRT Articles

50 articles

TensorRT INT8 Quantization: Calibrate for Accuracy

The most surprising thing about TensorRT INT8 quantization is that it often improves accuracy, not just performance, by forcing your model to be more robust…

3 min read
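The calibration idea behind this article can be sketched in a few lines: pick a per-tensor scale from observed activations, then quantize symmetrically to INT8. This is a toy illustration with made-up activation values, not TensorRT's actual calibrator API:

```python
# Minimal sketch of symmetric INT8 quantization with a max-calibration scale.
# The activation values below are made-up sample data, not from any real model.

def calibrate_scale(samples):
    """Pick a per-tensor scale so the largest observed magnitude maps to 127."""
    return max(abs(v) for v in samples) / 127.0

def quantize(x, scale):
    """Round to the nearest INT8 step and clamp to the representable range."""
    return max(-128, min(127, round(x / scale)))

def dequantize(q, scale):
    return q * scale

activations = [-2.5, -0.3, 0.0, 0.7, 1.9, 2.54]
scale = calibrate_scale(activations)
recovered = [dequantize(quantize(v, scale), scale) for v in activations]
worst_error = max(abs(a - b) for a, b in zip(activations, recovered))
print(f"scale={scale:.5f}, worst round-trip error={worst_error:.5f}")
```

The round-trip error is bounded by half a quantization step (scale / 2), which is why a calibration set that misses the true dynamic range hurts accuracy far more than the rounding itself.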

TensorRT on Jetson: Edge Deployment and Optimization

The most surprising thing about TensorRT on Jetson is that it's not just about making your neural nets faster; it's about making them run at all on hardware…

3 min read

TensorRT KV Cache: LLM Memory Optimization

The KV cache in LLMs is a performance bottleneck that most people try to optimize, but the real win is realizing it's not just about size, it's about access…

2 min read

TensorRT P99 Latency Optimization: Cut Tail Latency

Cutting P99 latency in TensorRT isn't about making your model faster on average; it's about taming the worst-case outliers that make your application feel…

4 min read
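The metric this article targets is easy to compute but easy to misread: a handful of outliers barely move the mean while completely dominating P99. A toy nearest-rank percentile over synthetic latencies (not production data) shows the gap:

```python
# Sketch: P99 is the latency below which 99% of requests fall. The sample
# latencies here are synthetic; in practice they come from production traces.

def percentile(samples, pct):
    """Nearest-rank percentile: the value at rank ceil(pct/100 * n)."""
    ordered = sorted(samples)
    rank = max(1, -(-len(ordered) * pct // 100))  # ceil without math.ceil
    return ordered[rank - 1]

# 95 fast requests plus 5 slow outliers: the mean barely moves,
# but the tail is what users feel.
latencies_ms = [10.0] * 95 + [250.0] * 5
print("mean:", sum(latencies_ms) / len(latencies_ms))  # 22.0 ms
print("p50 :", percentile(latencies_ms, 50))           # 10.0 ms
print("p99 :", percentile(latencies_ms, 99))           # 250.0 ms
```

A 22 ms mean looks healthy while one request in twenty takes a quarter of a second, which is exactly the situation tail-latency tuning exists to fix.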

TensorRT Layer Fusion: Fuse Operations for Speed

TensorRT's layer fusion is less about combining layers and more about aggressively optimizing the computational graph to eliminate overhead and maximize…

3 min read

TensorRT Linux Production Deployment: Step-by-Step

TensorRT deployment isn't just about copying files; it's about coaxing a highly optimized, hardware-specific execution engine into behaving predictably…

2 min read

TensorRT-LLM: Large Language Model Inference at Scale

The most surprising thing about TensorRT-LLM is that it's not just about making LLMs faster; it's about making them behave differently, unlocking capabilities…

3 min read

TensorRT-LLM Quantization: AWQ and SmoothQuant

Quantization in TensorRT-LLM isn't just about making models smaller; it's about unlocking performance by using lower-precision numbers without sacrificing…

3 min read

TensorRT-LLM Throughput Benchmarking: Tokens per Second

TensorRT-LLM is a library that optimizes large language models (LLMs) for inference on NVIDIA GPUs, and its throughput is typically measured in tokens per second.

3 min read
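Tokens-per-second figures hide a distinction worth keeping straight: the aggregate number across all concurrent requests versus what a single user sees. A small illustration with made-up request counts and timing, not benchmark results:

```python
# Sketch of the two throughput numbers LLM benchmarks usually report.
# The request counts and wall time below are illustrative, not measurements.

def tokens_per_second(token_counts, wall_seconds):
    """Aggregate decode throughput across all concurrent requests."""
    return sum(token_counts) / wall_seconds

# 8 concurrent requests, each generating 256 tokens, finishing in 4 s:
batch = [256] * 8
aggregate = tokens_per_second(batch, 4.0)  # what "tokens/sec" headlines mean
per_request = 256 / 4.0                    # what one user actually experiences
print(f"aggregate: {aggregate:.0f} tok/s, per-request: {per_request:.0f} tok/s")
```

Batching drives the aggregate number up eightfold here while each individual request still decodes at 64 tok/s, which is why throughput and interactivity have to be reported separately.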

TensorRT Multi-GPU Tensor Parallelism: Scale Across GPUs

The most surprising thing about TensorRT's multi-GPU tensor parallelism is that it doesn't actually split tensors across GPUs; it splits the operations…

4 min read

TensorRT Multi-Instance GPU: MIG Partitioning Setup

MIG allows you to carve up a single GPU into smaller, isolated instances, each with its own dedicated compute, memory, and cache.

3 min read

TensorRT Network Definition API: Add Custom Ops

You can define custom operations in TensorRT, but it's not about adding them to TensorRT itself; it's about telling TensorRT how to execute an operation.

3 min read

TensorRT ONNX GraphSurgeon: Edit and Optimize Models

GraphSurgeon is a library for manipulating and optimizing ONNX models, often used as a preprocessing step before TensorRT conversion.

2 min read

TensorRT ONNX to Engine: Convert and Serialize

TensorRT ONNX to Engine: Convert and Serialize — practical guide covering TensorRT setup, configuration, and troubleshooting with real-world examples.

4 min read

TensorRT vs ONNXRuntime: GPU Inference Comparison

TensorRT and ONNX Runtime are both powerful tools for accelerating deep learning inference on GPUs, but they approach optimization from fundamentally different…

2 min read

TensorRT Paged KV Cache: Configure for LLM Serving

The TensorRT Paged KV Cache is a memory management technique that dramatically improves LLM serving performance by treating KV cache like a garbage-collected…

3 min read
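The core paging idea behind this article fits in a short sketch: each sequence's cache is a list of fixed-size physical blocks plus a table mapping logical token positions onto them, so memory grows a block at a time instead of pre-reserving max sequence length. Block size and the free-list allocator here are simplified assumptions, not TensorRT-LLM internals:

```python
# Toy paged-KV bookkeeping: a block table per sequence over a shared free list.

BLOCK_SIZE = 16  # tokens per physical block (an assumed value)

class PagedSequence:
    def __init__(self):
        self.block_table = []  # logical block index -> physical block id
        self.length = 0

    def append_token(self, free_blocks):
        if self.length % BLOCK_SIZE == 0:      # current block full (or none yet)
            self.block_table.append(free_blocks.pop())
        self.length += 1

    def physical_slot(self, pos):
        """Where token `pos` lives: (physical block id, offset within block)."""
        return self.block_table[pos // BLOCK_SIZE], pos % BLOCK_SIZE

free_blocks = list(range(100))  # toy free list standing in for GPU memory
seq = PagedSequence()
for _ in range(40):             # 40 tokens -> only 3 blocks, not max-length worth
    seq.append_token(free_blocks)
print("blocks used:", len(seq.block_table), "token 39 at:", seq.physical_slot(39))
```

The payoff is that short sequences never hog memory sized for long ones, and freed blocks can be handed to any other sequence immediately.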

TensorRT Performance Profiling: Nsight Systems Analysis

NVIDIA's TensorRT is a powerful SDK for high-performance deep learning inference, but understanding why it's fast or not fast enough requires looking under the hood…

6 min read

TensorRT Custom Plugin: Add Unsupported Operations

TensorRT doesn't let you just add operations to its graph; it forces you to reimplement them within its own C++ framework.

6 min read

TensorRT Polygraphy: Debug Model Accuracy and Conversion

Polygraphy's accuracy debugging is a powerful tool for pinpointing discrepancies between your model's output and a reference, often due to TensorRT's optimizations…

3 min read

TensorRT Pose Estimation: Deploy Body Keypoint Models

Deploying body keypoint models with TensorRT can be surprisingly tricky because the framework aggressively optimizes for speed, often changing the model…

3 min read

TensorRT Production Monitoring: GPU Metrics and Alerts

The most surprising thing about TensorRT production monitoring is that the most critical performance indicators often aren't found in the typical application…

3 min read

TensorRT Python API: Build and Run Inference Engines

Building and running inference engines with TensorRT's Python API is less about writing Python code and more about orchestrating a complex C++ compilation…

3 min read

TensorRT PyTorch Model Optimization: torch2trt Guide

torch2trt is a PyTorch extension that converts PyTorch models into TensorRT engines, allowing for optimized inference on NVIDIA GPUs.

3 min read

TensorRT Recommendation System: Embedding Table Inference

TensorRT's embedding table inference is surprisingly efficient because it treats embedding lookups as a highly parallelized matrix multiplication, not a…

3 min read

TensorRT RNN and LSTM Optimization: Sequence Models

The surprising truth about TensorRT's RNN and LSTM optimization is that it doesn't magically speed up every sequence model; it's a highly targeted process…

3 min read

TensorRT ResNet and EfficientNet: Image Classification

The most surprising thing about using TensorRT for ResNet and EfficientNet is that it doesn't just make them faster, it fundamentally changes how they operate…

5 min read

TensorRT Semantic Segmentation: Deploy Pixel Models

TensorRT can deploy semantic segmentation models, but the real magic is how it aggressively optimizes them for inference speed on NVIDIA GPUs, often outperforming…

4 min read

TensorRT SmoothQuant: Weight-Only Quantization for LLMs

SmoothQuant makes LLMs run faster by smoothing away activation outliers so that weights and activations can both be quantized, but it only works if you get the scaling factors just right.

5 min read

TensorRT Sparsity: Accelerate Pruned Models

TensorRT Sparsity isn't about making your models smaller; it's about making them run faster by exploiting the sparsity pattern of their weights.

3 min read

TensorRT with Triton: Inference Server Integration

TensorRT with Triton: Inference Server Integration — practical guide covering TensorRT setup, configuration, and troubleshooting with real-world examples.

2 min read

TensorRT trtexec: CLI Tool for Profiling and Benchmarking

TensorRT trtexec: CLI Tool for Profiling and Benchmarking — practical guide covering TensorRT setup, configuration, and troubleshooting with real-world examples.

3 min read

TensorRT vs OpenVINO: GPU vs Intel Inference Compared

TensorRT, NVIDIA's inference optimizer, and OpenVINO, Intel's equivalent, aren't just about running models faster; they fundamentally change how models…

3 min read

TensorRT Windows Setup: Visual Studio and CUDA Config

TensorRT on Windows can feel like a tangled mess of dependencies, but once you untangle the Visual Studio and CUDA configuration, it clicks into place.

3 min read

TensorRT Workspace Memory: Builder Config Optimization

The most surprising thing about TensorRT's workspace memory is that its size isn't just a passive requirement, but an active knob you can turn to dramatically…

3 min read

TensorRT YOLO: Deploy Object Detection Models Fast

TensorRT can make your YOLO object detection models run blazing fast, but it's not magic. It's a compiler that optimizes your model for NVIDIA GPUs by fusing…

3 min read

Fix TensorRT Accuracy Loss After Post-Training Quantization

TensorRT's post-training quantization (PTQ) is failing to preserve model accuracy because the calibration process isn't adequately capturing the dynamic range…

3 min read

TensorRT Batch Size: Optimize for Maximum Throughput

TensorRT batch size is the most counterintuitive knob you have for throughput, often leading people to believe larger is always better when the opposite…

4 min read
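The larger-isn't-always-better effect is easy to see with a toy latency model: a fixed per-launch overhead that batching amortizes, plus per-sample compute that degrades once the GPU saturates. The numbers below are a made-up illustration, not measured TensorRT behavior:

```python
# Toy model: throughput = batch / latency rises while overhead amortizes,
# then falls once per-sample cost grows past the saturation knee.

def batch_latency_ms(batch):
    overhead = 2.0                                            # per-launch cost
    per_sample = 0.5 if batch <= 32 else 0.5 * (batch / 32)   # saturation knee
    return overhead + per_sample * batch

for batch in (1, 8, 32, 128):
    lat = batch_latency_ms(batch)
    print(f"batch={batch:4d} latency={lat:7.1f} ms "
          f"throughput={batch / lat * 1000:7.0f}/s")
```

Under this model throughput peaks at batch 32 and collapses at 128, which is why the article's advice is to sweep batch sizes on your own hardware rather than assume bigger is better.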

TensorRT BERT and Transformer: NLP Inference Optimization

BERT and other Transformer models are notoriously slow for inference, and TensorRT is the go-to solution for speeding them up.

3 min read

TensorRT Calibration Dataset: Prepare for INT8 Accuracy

The most surprising thing about TensorRT calibration is that the dataset itself is far more impactful on INT8 accuracy than the specific calibration algorithm.

3 min read

TensorRT vs CUDA: Measure Speedup on Your Model

TensorRT is not just an optimization layer on top of CUDA; it's a fundamentally different way to execute deep learning models that can achieve orders of magnitude…

3 min read
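Whatever the speedup turns out to be, measuring it fairly follows the same shape: warm up first (engine builds and first-launch costs otherwise pollute the numbers), then compare medians rather than single runs. The two workloads below are cheap stand-ins for a baseline and an optimized path, not real inference:

```python
# Minimal benchmarking harness: warmup runs, then median-of-N timing.
import time
from statistics import median

def bench(fn, warmup=3, runs=20):
    for _ in range(warmup):           # discard cold-start effects
        fn()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return median(samples)            # median resists outlier runs

def baseline():                        # stand-in "unoptimized" workload
    return sum(i * i for i in range(20_000))

def optimized():                       # stand-in "optimized" workload
    return sum(i * i for i in range(2_000))

speedup = bench(baseline) / bench(optimized)
print(f"median speedup: {speedup:.1f}x")
```

Swapping the two stand-ins for a CUDA baseline and a TensorRT engine call gives the apples-to-apples number the article is after.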

Debug TensorRT NaN and Inf: Fix Accuracy Issues

The core problem is that your TensorRT inference engine is producing NaN (Not a Number) or Inf (Infinity) values, which are corrupting your model's output…

3 min read

TensorRT DeepStream: Video Analytics Pipeline Setup

DeepStream is a powerful SDK from NVIDIA for building efficient video analytics pipelines, leveraging TensorRT for hardware-accelerated AI inference.

4 min read

TensorRT Depthwise Separable Conv: Optimize Efficiently

Depthwise separable convolutions are a cornerstone of efficient deep learning, but TensorRT's optimization of them can feel like a black box.

3 min read

TensorRT NGC Container: Pull and Run GPU Inference

The most surprising thing about TensorRT is that it doesn't fundamentally change your model's architecture; it optimizes the execution of that architecture.

3 min read

TensorRT Dynamic Shapes: Optimization Profiles Setup

TensorRT's dynamic shapes let you build a single engine that can handle a range of input dimensions, but setting them up is more about defining what you…

4 min read
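The contract an optimization profile expresses is simple to state: for each dynamic dimension you declare (min, opt, max), and any runtime shape outside [min, max] is rejected. A sketch of that check with illustrative profile values (the shapes are assumptions, not a real model's):

```python
# Toy optimization-profile check: every dimension must sit within [min, max].

PROFILE = {
    "input": {
        "min": (1, 3, 224, 224),    # smallest shape the engine must support
        "opt": (8, 3, 224, 224),    # shape the builder tunes kernels for
        "max": (32, 3, 224, 224),   # largest shape the engine must support
    }
}

def shape_allowed(name, shape, profile=PROFILE):
    lo, hi = profile[name]["min"], profile[name]["max"]
    return all(l <= d <= h for l, d, h in zip(lo, shape, hi))

print(shape_allowed("input", (16, 3, 224, 224)))  # batch 16 is within [1, 32]
print(shape_allowed("input", (64, 3, 224, 224)))  # batch 64 exceeds max
```

The `opt` shape doesn't constrain anything at runtime; it only tells the builder which point in the range to optimize hardest for, so shapes far from it run correctly but possibly slower.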

TensorRT Engine Serialization: Serialize and Load Engines

Serializing and loading TensorRT engines is how you save a compiled model for later use, avoiding the costly recompilation step.

3 min read
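The build-once, load-forever pattern the article describes can be sketched without a GPU: check a cache path first, and only fall back to the expensive build when nothing is there. The "engine" below is a stand-in dict serialized with pickle; TensorRT itself serializes with `engine.serialize()` and loads via a `Runtime`'s `deserialize_cuda_engine()`:

```python
# Toy engine cache: load a serialized plan if present, else build and save it.
import os
import pickle
import tempfile

def build_engine(model_name):
    """Stand-in for the slow TensorRT builder step."""
    return {"model": model_name, "plan": b"optimized-kernels"}

def get_engine(model_name, cache_dir):
    path = os.path.join(cache_dir, model_name + ".plan")
    if os.path.exists(path):                  # fast path: load serialized plan
        with open(path, "rb") as f:
            return pickle.load(f)
    engine = build_engine(model_name)         # slow path: compile, then cache
    with open(path, "wb") as f:
        pickle.dump(engine, f)
    return engine

with tempfile.TemporaryDirectory() as d:
    first = get_engine("resnet50", d)    # builds and writes the cache file
    second = get_engine("resnet50", d)   # loads from disk, skipping the build
    print(first == second)
```

One caveat the real thing adds: a serialized TensorRT plan is tied to the GPU and TensorRT version it was built with, so production caches usually include both in the cache key.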

TensorRT Engine Compatibility: Version Lock and Rebuild

TensorRT engine compatibility is a surprisingly fragile beast, often leading to "version lock" where an engine built with one TensorRT version will outright…

5 min read

TensorRT FP16 Precision: Speed Up Inference with Half

TensorRT FP16 precision fundamentally changes how neural network weights are stored and processed, allowing for significant speedups by using 16-bit floating-point…

3 min read
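What 16-bit floats actually cost in precision can be seen without any GPU library, using Python's built-in half-float codec (the `struct` format `'e'`) to round-trip values through IEEE 754 half precision:

```python
# Round-trip floats through FP16 to see where the ~10-bit mantissa bites.
import struct

def to_fp16(x):
    """Encode x as an IEEE 754 half float and decode it back."""
    return struct.unpack('e', struct.pack('e', x))[0]

print(to_fp16(1.0001))   # the small tail is rounded away -> 1.0
print(to_fp16(2049.0))   # half-float spacing at 2048 is 2 -> 2048.0
```

This is why FP16 is usually safe for well-scaled weights and activations but risky for quantities with large magnitudes or tiny differences, and why mixed-precision keeps sensitive layers in FP32.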

TensorRT Inflight Batching: LLM Serving Throughput

LLM serving is often a game of throughput, and TensorRT-LLM's inflight batching is a surprisingly effective way to squeeze more requests through your GPU.

4 min read

TensorRT Installation: CUDA, cuDNN, and Setup Guide

TensorRT doesn't actually install like a typical library; it's more of a sophisticated compilation toolkit that leverages existing CUDA and cuDNN installations.

3 min read
© 2026 ADHDecode. All content is free.
