# Dashboard Import Instructions
## Available Dashboards
### ✅ TradePsykl - API Overview (`api-overview.json`)
- Status: Fully functional
- Metrics: Process memory, CPU, GC activity
- Data Source: grafanacloud-t4apps-prom (Prometheus metrics)
- Update Frequency: 30-second refresh
### ✅ TradePsykl - Engine Metrics (`engine-metrics.json`)
- Status: Fully functional
- Metrics: Process memory, CPU, GC activity
- Data Source: grafanacloud-t4apps-prom (Prometheus metrics)
- HTTP Server: Port 8080 with `/metrics` and `/health` endpoints (quick check below)
- Update Frequency: 30-second refresh
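If the engine container is running, a quick way to confirm those endpoints are serving data is to hit them with `curl`. This is a sketch that assumes port 8080 is published to the host as-is; adjust the host/port to match your `docker-compose.yml`.

```bash
# Assumes the engine's port 8080 is published to the host (adjust if mapped differently).
curl -fsS http://localhost:8080/health          # should report a healthy status
curl -fsS http://localhost:8080/metrics | head  # should show Prometheus-format metric lines
```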
## Datasource Configuration
These dashboards are pre-configured for the Grafana Cloud Prometheus datasource: grafanacloud-t4apps-prom
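If your stack uses a different datasource name, one option (a sketch, not the only route; the target name `my-prom-datasource` is a placeholder) is to check and rewrite the references in the JSON files before importing:

```bash
# Confirm which datasource name the dashboard JSON references
grep -o 'grafanacloud-t4apps-prom' api-overview.json engine-metrics.json | sort | uniq -c

# Optionally rewrite it to match your own datasource name (placeholder shown)
sed 's/grafanacloud-t4apps-prom/my-prom-datasource/g' api-overview.json > api-overview.local.json
```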
## Ready to Import
The dashboards are ready to import directly into your Grafana Cloud instance. They will automatically use the correct datasource.
To import:
- Go to your Grafana Cloud instance
- Navigate to Dashboards → New → Import
- Upload `api-overview.json` or `engine-metrics.json`
- Click Import
If your datasource name is different, update it in Dashboard Settings → Variables → datasource after import
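The UI import above is the simplest path. If you would rather script it, Grafana also accepts dashboards over its HTTP API; the sketch below assumes a service-account token with dashboard write permissions and a placeholder stack URL, and wraps the dashboard JSON in the payload shape the `/api/dashboards/db` endpoint expects.

```bash
# Placeholder values: set your real stack URL and export a service-account token as GRAFANA_TOKEN.
GRAFANA_URL="https://your-stack.grafana.net"

# Wrap the dashboard JSON in the expected payload, then POST it.
# (Skip the wrapping if the file already has a top-level "dashboard" key.)
jq '{dashboard: ., overwrite: true}' api-overview.json \
  | curl -fsS -X POST "$GRAFANA_URL/api/dashboards/db" \
      -H "Authorization: Bearer $GRAFANA_TOKEN" \
      -H "Content-Type: application/json" \
      --data-binary @-
```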
## Current Setup: OTLP Push to Grafana Cloud
Your services are configured to:
- Export traces via OTLP → Grafana Cloud Tempo
- Export metrics via OTLP → Grafana Cloud Mimir (Prometheus-compatible storage)
- Expose metrics at a `/metrics` endpoint (for optional Prometheus scraping)
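The exact wiring lives in the service configuration, but OTLP export like this is typically driven by the standard OpenTelemetry environment variables. The values below are placeholders (region, protocol, and credentials are assumptions) and are shown only to illustrate the shape of the configuration:

```bash
# Standard OpenTelemetry SDK environment variables (placeholder values only).
export OTEL_EXPORTER_OTLP_ENDPOINT="https://otlp-gateway-prod-us-central-0.grafana.net/otlp"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
# Grafana Cloud expects Basic auth of <instance-id>:<token>; some SDKs need the space URL-encoded as %20.
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic <base64 of instance-id:token>"
export OTEL_RESOURCE_ATTRIBUTES="service.name=api,deployment.environment=dev"
```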
## Metrics Available
These metrics are pushed to Grafana Cloud Mimir:
| Metric Name | Type | Description | Labels |
|---|---|---|---|
| `process_resident_memory_bytes` | Gauge | Memory usage in bytes | deployment_environment, service_name |
| `process_cpu_seconds_total` | Counter | Total CPU time used | deployment_environment, service_name |
| `python_gc_objects_collected_total` | Counter | Objects collected by GC | deployment_environment, service_name, generation |
| `python_info` | Gauge | Python version info | deployment_environment, service_name, version, implementation |
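Once these series are in Mimir you can query them from Grafana Explore, or directly against the Prometheus-compatible HTTP API. The hostname, the `/api/prom` prefix, and the credentials below are placeholders to adjust for your stack:

```bash
# Placeholder endpoint and credentials for the Prometheus-compatible query API.
PROM_URL="https://prometheus-prod-XX-prod-us-central-0.grafana.net/api/prom"

curl -fsS -u "$PROM_USER:$PROM_READ_TOKEN" \
  --data-urlencode 'query=process_resident_memory_bytes{deployment_environment="dev"}' \
  "$PROM_URL/api/v1/query"
```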
## Troubleshooting
"No data" in dashboards
**Cause 1: OTLP export timeouts (check logs first)**
- Check container logs: `docker compose logs api engine | grep -i error`
- OTLP export may be timing out due to network issues
- Fix: Resolve the timeout errors first. Check your network/firewall, or try from a different network. A connectivity check is sketched below this list.
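One way to narrow a timeout down (a sketch, assuming `curl` is available inside the container and `OTEL_EXPORTER_OTLP_ENDPOINT` is set) is to test outbound reachability from inside the service container itself:

```bash
# Check that the container can reach the OTLP endpoint at all (DNS + TLS + routing).
# An HTTP error status still proves reachability; a hang or timeout confirms a network problem.
docker compose exec api sh -c 'curl -sv -o /dev/null "$OTEL_EXPORTER_OTLP_ENDPOINT"'
```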
**Cause 2: Metrics not yet in Grafana Cloud**
- OTLP metrics export runs periodically (every 60 seconds by default)
- Fix: Wait a few minutes, then check Grafana Cloud Explore → Prometheus → query `process_resident_memory_bytes`
**Cause 3: Label mismatch**
- OpenTelemetry converts dots to underscores in label names
- `deployment.environment` becomes `deployment_environment`; `service.name` becomes `service_name`
- Fix: The dashboards already use the correct underscore format
### Verify metrics in Grafana Cloud
- Go to Grafana Cloud → Explore
- Select your Prometheus datasource
- Run query: `process_resident_memory_bytes{deployment_environment="dev"}`
- If you see data, the dashboards should work (after the datasource name fix)
- If no data, check the OTLP export errors in the logs
## Alternative: Local Prometheus Setup
If you prefer scraping metrics locally instead of OTLP push:
```yaml
# docker-compose.yml - add this service
prometheus:
  image: prom/prometheus:latest
  ports:
    - "9090:9090"
  volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml
  command:
    - '--config.file=/etc/prometheus/prometheus.yml'
```

```yaml
# prometheus.yml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'api'
    static_configs:
      - targets: ['api:8000']
        labels:
          service_name: 'api'
          deployment_environment: 'dev'
```

Then update the dashboards to use the local Prometheus datasource.
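With those two files in place, the stack can be brought up and checked; this sketch assumes the `api` service is reachable on the compose network as `api:8000` and that Prometheus's port 9090 is published to the host.

```bash
# Start (or restart) the stack with the new Prometheus service.
docker compose up -d prometheus

# Confirm the scrape target is healthy, then try queries in the local UI.
curl -fsS http://localhost:9090/api/v1/targets | grep -o '"health":"[a-z]*"'
# UI: http://localhost:9090 → Status → Targets
```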