
16: Production Observability Stack 🆕


Overview

Phase: Observability Trilogy | Difficulty: Production | Duration: 45 min

Learn to build a production observability stack for LLM inference on Kaggle's dual Tesla T4 GPUs.

Learning Objectives

  • ✅ Implement multi-layer observability
  • ✅ Combine OpenTelemetry + llama.cpp + GPU monitoring
  • ✅ Create unified Graphistry 2D + Plotly 2D/3D dashboard
  • ✅ Deploy production observability stack

Topics Covered

  • 📚 Multi-layer telemetry
  • 📚 Graphistry + Plotly
  • 📚 Complete stack

Prerequisites

  • llamatelemetry v0.1.0 installed
  • Kaggle dual Tesla T4 environment (30GB VRAM)
  • Basic Python knowledge
  • Completed Tutorial 01

Quick Start

# Install llamatelemetry
!pip install -q --no-cache-dir git+https://github.com/llamatelemetry/llamatelemetry.git@v0.1.0

# Verify GPU environment
!nvidia-smi --query-gpu=index,name,memory.total --format=csv

Key Concepts

🆕 Observability Trilogy

This tutorial is part of the Observability Trilogy (Notebooks 14-16), which introduces production-grade observability features:

  • Notebook 14: OpenTelemetry integration with distributed tracing
  • Notebook 15: Real-time performance monitoring with live dashboards
  • Notebook 16: Complete production observability stack

Step-by-Step Guide

Step 1: Environment Setup

Verify your Kaggle environment has dual Tesla T4 GPUs.
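
A minimal check (assuming the standard Kaggle GPU runtime, where nvidia-smi is on the PATH): it lists the visible GPUs and fails fast if the dual-T4 accelerator was not selected.

# Confirm two Tesla T4s are visible to the runtime
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout
gpus = [line for line in out.strip().splitlines() if line]
print("\n".join(gpus))
assert len(gpus) == 2, "Expected 2 GPUs: select the 'GPU T4 x2' accelerator in Kaggle settings"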

Step 2: Installation

Install llamatelemetry v0.1.0 with all dependencies.
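
A quick post-install sanity check; the import name and the __version__ attribute are assumptions about the package layout, so adjust them if the module is exposed differently.

# Install the pinned release and confirm it imports
!pip install -q --no-cache-dir git+https://github.com/llamatelemetry/llamatelemetry.git@v0.1.0

import llamatelemetry  # assumed import name
print("llamatelemetry:", getattr(llamatelemetry, "__version__", "version attribute not found"))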

Step 3: Configuration

Configure the llama.cpp server (GPU layer offload, split mode, context size) for the Kaggle T4s, as sketched below.
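
As an illustration, here is one way to launch a llama.cpp llama-server split across both T4s. The binary and model paths are placeholders, and the flag values are starting points rather than tuned recommendations.

# Launch llama-server with layers split across both T4s.
# --n-gpu-layers 99 offloads every layer; --split-mode layer spreads them over GPU 0 and GPU 1.
# Lower --ctx-size to 2048 if you hit out-of-memory errors (see Common Issues).
!nohup ./llama-server \
    --model /kaggle/working/models/model-Q4_K_M.gguf \
    --n-gpu-layers 99 \
    --split-mode layer \
    --ctx-size 4096 \
    --host 127.0.0.1 --port 8080 > server.log 2>&1 &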

Step 4: Implementation

Wire the layers together: OpenTelemetry spans around each inference request, llama.cpp server metrics, and GPU utilization sampled from nvidia-smi (see the sketch below).
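
A sketch of the three layers, assuming a llama.cpp server on 127.0.0.1:8080 and the stock OpenTelemetry SDK with a console exporter. llamatelemetry presumably wraps much of this for you, so treat it as an outline of the idea rather than the library's API.

# Layer 1: OpenTelemetry tracing around each inference request
# Layer 2: llama.cpp server response data for that request
# Layer 3: GPU utilization sampled from nvidia-smi
import subprocess
import requests
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("llm-inference")

def gpu_utilization():
    """Return (gpu_util_percent, vram_used_mib) for each GPU via nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [tuple(map(int, line.split(", "))) for line in out.strip().splitlines()]

with tracer.start_as_current_span("llm.completion") as span:
    resp = requests.post(
        "http://127.0.0.1:8080/completion",   # llama.cpp server endpoint
        json={"prompt": "Explain observability in one sentence.", "n_predict": 64},
        timeout=120,
    ).json()
    span.set_attribute("llm.response_chars", len(resp.get("content", "")))
    for i, (util, mem) in enumerate(gpu_utilization()):
        span.set_attribute(f"gpu.{i}.utilization_percent", util)
        span.set_attribute(f"gpu.{i}.memory_used_mib", mem)

Each span then carries both the request-level result and a GPU snapshot, which is the kind of per-request data the Graphistry/Plotly dashboard layers can consume.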

Step 5: Verification

Send a test request to the server and confirm that completions, timing metrics, and traces are all produced (see the check below).
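
A quick end-to-end check, again assuming the server is on 127.0.0.1:8080; llama.cpp completions include a timings block, which is a convenient signal that the metrics layer has real data.

# Verify the server answers and that per-request timing metrics are present
import requests

resp = requests.post(
    "http://127.0.0.1:8080/completion",
    json={"prompt": "ping", "n_predict": 8},
    timeout=60,
)
resp.raise_for_status()
body = resp.json()
print("completion:", body.get("content", "")[:80])
print("timings:", body.get("timings", {}))   # includes predicted_per_second on recent builds
assert body.get("content"), "Empty completion: check server.log for startup errors"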

Expected Output

After completing this tutorial, you should be able to:

  • ✅ Implement multi-layer observability
  • ✅ Combine OpenTelemetry + llama.cpp + GPU monitoring
  • ✅ Create unified Graphistry 2D + Plotly 2D/3D dashboard
  • ✅ Deploy production observability stack

Common Issues

Issue: Server Fails to Start

Solution: Check GPU memory and ensure no other processes are using the GPUs.

nvidia-smi

Issue: Out of Memory

Solution: Reduce the context size or use a more heavily quantized model.

ctx_size=2048  # Instead of 4096

Performance Benchmarks

Expected performance on Kaggle dual Tesla T4:

Model          Quantization   Speed        VRAM
Gemma-3 1B     Q4_K_M         ~85 tok/s    ~1 GB
Gemma-3 4B     Q4_K_M         ~42 tok/s    ~2.5 GB
Llama-3.1 8B   Q4_K_M         ~25 tok/s    ~5 GB
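
To reproduce numbers like these, the decode speed can be read straight from the server's timings block (a rough single-request measurement against the same local server assumed above).

# Measure decode throughput (tok/s) from one completion request
import requests

body = requests.post(
    "http://127.0.0.1:8080/completion",
    json={"prompt": "Write a short paragraph about GPUs.", "n_predict": 128},
    timeout=300,
).json()
timings = body.get("timings", {})
print(f"decode speed: {timings.get('predicted_per_second', float('nan')):.1f} tok/s")
print(f"prompt eval:  {timings.get('prompt_per_second', float('nan')):.1f} tok/s")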

Next Steps

This is the final notebook in the Observability Trilogy; revisit Notebooks 14 and 15 if you skipped the distributed-tracing or live-dashboard foundations this stack builds on.

Resources

Full Notebook

View and run the complete notebook on Kaggle.

