Docker Deployment
Get TinyOlly running on Docker in minutes!
TinyOlly UI showing distributed traces
All examples are launched from the repo - clone it first or download the current GitHub release archive:
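For example (the repository URL below is a placeholder; substitute the actual TinyOlly repo):

```bash
git clone https://github.com/<org>/tinyolly.git   # placeholder URL
cd tinyolly
```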
1. Deploy TinyOlly Core (Required)
Start the observability backend (OTel Collector, TinyOlly Receiver, Redis, UI):
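A minimal sketch, assuming the core stack is defined in a Compose file such as docker/docker-compose.yml (the file names in these examples are placeholders; use whichever Compose files or helper scripts the repo actually ships):

```bash
docker compose -f docker/docker-compose.yml up -d
```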
This starts:
- OTel Collector: Listening on localhost:4317 (gRPC) and localhost:4318 (HTTP)
- OpAMP Server: ws://localhost:4320/v1/opamp (WebSocket), localhost:4321 (HTTP REST API)
- TinyOlly UI: http://localhost:5005
- TinyOlly OTLP Receiver and Redis: the OTLP observability backend and its storage

Images are rebuilt automatically if code changes are detected.
Open the UI: http://localhost:5005 (empty until you send data)
OpenTelemetry Collector + OpAMP Config Page: Navigate to the "OpenTelemetry Collector + OpAMP Config" tab in the UI to view and manage collector configurations remotely. See the OpAMP Configuration section below for setup instructions.
Stop core services:
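For example, with the placeholder Compose file from above:

```bash
docker compose -f docker/docker-compose.yml down
```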
2. Deploy Demo Apps (Optional)
Deploy two Flask microservices with automatic traffic generation:
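A sketch, assuming the demo apps ship with their own Compose file (placeholder name):

```bash
docker compose -f docker/docker-compose.demo.yml up -d
```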
Wait 30 seconds. The demo apps automatically generate traffic - traces, logs, and metrics will appear in the UI!
Stop demo apps:
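For example:

```bash
docker compose -f docker/docker-compose.demo.yml down
```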
This leaves TinyOlly core running. To stop everything:
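For example, with both placeholder Compose files:

```bash
docker compose -f docker/docker-compose.demo.yml down
docker compose -f docker/docker-compose.yml down
```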
3. AI Agent Demo with Ollama (Optional)
Deploy an AI agent demo with zero-code OpenTelemetry auto-instrumentation for GenAI:
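A sketch, assuming the AI demo has its own Compose file (placeholder name):

```bash
docker compose -f docker/docker-compose.ai-demo.yml up -d
```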
Note
First run will pull the Ollama image and TinyLlama model (~1.5GB total). This may take a few minutes.
This starts:
- Ollama: Local LLM server with TinyLlama model (http://localhost:11434)
- AI Agent: Python agent making LLM calls every 15 seconds, auto-instrumented with opentelemetry-instrumentation-ollama
View AI Traces: Navigate to the AI Agents tab in TinyOlly UI to see:
- Prompts and responses for each LLM call
- Token usage (input ↓ / output ↑) with color coding
- Latency per request
- Click any row to expand the full span JSON
Watch agent logs:
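For example, assuming the agent container is named ai-agent (check `docker ps` for the actual name):

```bash
docker logs -f ai-agent
```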
Stop AI demo:
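For example:

```bash
docker compose -f docker/docker-compose.ai-demo.yml down
```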
Cleanup (remove Ollama model volumes):
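For example, adding -v to also remove the named volumes (including the Ollama model volume):

```bash
docker compose -f docker/docker-compose.ai-demo.yml down -v
```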
4. OpenTelemetry Demo (~20 Services - Optional)
Prerequisites: Clone the OpenTelemetry Demo first:
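The demo lives in the official OpenTelemetry repository:

```bash
git clone https://github.com/open-telemetry/opentelemetry-demo.git
cd opentelemetry-demo
```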
Configure: Edit src/otel-collector/otelcol-config-extras.yml:
```yaml
exporters:
  otlphttp/tinyolly:
    endpoint: http://otel-collector:4318

service:
  pipelines:
    traces:
      exporters: [spanmetrics, otlphttp/tinyolly]
```
Deploy:
```bash
export OTEL_COLLECTOR_HOST=host.docker.internal
docker compose up \
  --scale otel-collector=0 \
  --scale prometheus=0 \
  --scale grafana=0 \
  --scale jaeger=0 \
  --scale opensearch=0 \
  --force-recreate \
  --remove-orphans \
  --detach
```
Stop:
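For example, from the opentelemetry-demo directory:

```bash
docker compose down --remove-orphans
```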
Note
This disables the demo's built-in collector, Jaeger, OpenSearch, Grafana, and Prometheus. All telemetry is routed to the TinyOlly OTel Collector and from there into TinyOlly.
5. Use TinyOlly with Your Own Apps
After deploying TinyOlly core (step 1 above), instrument your application to send telemetry:
For apps running in Docker containers:
Point your OpenTelemetry exporter to:
- gRPC: http://otel-collector:4317
- HTTP: http://otel-collector:4318
For apps running on your host machine (outside Docker):
The Compose setup publishes the collector's ports on localhost, so point your OpenTelemetry exporter to:
- gRPC: http://localhost:4317
- HTTP: http://localhost:4318
Example environment variables:
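A minimal sketch for an app on the host, using the standard OpenTelemetry SDK environment variables (the service name is illustrative):

```bash
export OTEL_SERVICE_NAME=my-app
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
```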
The OTel Collector forwards everything to TinyOlly's OTLP Receiver, which processes the telemetry and stores it in Redis in OTel format for the backend and UI to access.
6. TinyOlly Core-Only Deployment: Use Your Own Docker OpenTelemetry Collector
If you already have an OpenTelemetry Collector or want to send telemetry directly to the TinyOlly Receiver, you can deploy the core components without the bundled OTel Collector.
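A sketch, assuming a core-only Compose file (placeholder name):

```bash
docker compose -f docker/docker-compose.core-only.yml up -d
```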
This starts:
- TinyOlly OTLP Receiver: Listening on localhost:4343 (gRPC only)
- OpAMP Server: ws://localhost:4320/v1/opamp (WebSocket), localhost:4321 (HTTP REST API)
- TinyOlly UI: http://localhost:5005
- TinyOlly Redis: localhost:6579
You can swap the bundled OTel Collector for any OTel Collector distribution. Point your OpenTelemetry exporters to tinyolly-otlp-receiver:4343, for example:
```yaml
# The otlp receiver, batch processor, and spanmetrics connector referenced below
# are assumed to be defined elsewhere in your collector config.
exporters:
  debug:
    verbosity: detailed
  otlp:
    endpoint: "tinyolly-otlp-receiver:4343"
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlp, spanmetrics]
    metrics:
      receivers: [otlp, spanmetrics]
      processors: [batch]
      exporters: [debug, otlp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlp]
```
Your OTel Collector forwards everything to TinyOlly's OTLP Receiver, which processes the telemetry and stores it in Redis in OTel format for the backend and UI to access.
OpAMP Configuration (Optional)
The OpenTelemetry Collector + OpAMP Config page in the TinyOlly UI allows you to view and manage collector configurations remotely. To enable this feature, add the OpAMP extension to your collector config:
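A minimal sketch using the collector-contrib opamp extension (the exact endpoint and TLS settings depend on where your collector runs; the bundled template described below is the authoritative example):

```yaml
extensions:
  opamp:
    server:
      ws:
        # Address of the TinyOlly OpAMP server as seen from the collector
        # (ws://localhost:4320/v1/opamp from the host, or the Docker service name).
        endpoint: ws://localhost:4320/v1/opamp
        tls:
          insecure: true   # plain ws:// endpoint without TLS

service:
  extensions: [opamp]
```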
The default configuration template (located at docker/otelcol-configs/config.yaml) shows a complete example with OTLP receivers, OpAMP extension, batch processing, and spanmetrics connector. Your collector will connect to the OpAMP server and receive configuration updates through the TinyOlly UI.
Stop core-only services:
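For example, with the placeholder core-only Compose file:

```bash
docker compose -f docker/docker-compose.core-only.yml down
```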
Building Images
By default, all deployment scripts pull pre-built images from Docker Hub. For building images locally or publishing to Docker Hub, see build/README.md.