
Dashboard Import Instructions

Available Dashboards

✅ TradePsykl - API Overview (api-overview.json)

  • Status: Fully functional
  • Metrics: Process memory, CPU, GC activity
  • Data Source: grafanacloud-t4apps-prom (Prometheus metrics)
  • Update Frequency: 30-second refresh

✅ TradePsykl - Engine Metrics (engine-metrics.json)

  • Status: Fully functional
  • Metrics: Process memory, CPU, GC activity
  • Data Source: grafanacloud-t4apps-prom (Prometheus metrics)
  • HTTP Server: Port 8080 with /metrics and /health endpoints
  • Update Frequency: 30-second refresh

Datasource Configuration

These dashboards are pre-configured for the Grafana Cloud Prometheus datasource: grafanacloud-t4apps-prom

Ready to Import

The dashboards are ready to import directly into your Grafana Cloud instance. They will automatically use the correct datasource.

To import:

  1. Go to your Grafana Cloud instance
  2. Navigate to Dashboards → New → Import
  3. Upload api-overview.json or engine-metrics.json
  4. Click Import
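
If you prefer to automate the import instead of clicking through the UI, Grafana's HTTP API can be used. A minimal sketch in Python; the stack URL and service-account token are placeholders, and the UI steps above remain the simplest path:

```python
# Sketch only: importing a dashboard JSON file via the Grafana HTTP API.
# GRAFANA_URL and API_TOKEN are placeholders for your Grafana Cloud stack and a
# service-account token with dashboard write permissions.
import json
import requests

GRAFANA_URL = "https://<your-stack>.grafana.net"
API_TOKEN = "<service-account-token>"

with open("api-overview.json") as f:
    dashboard = json.load(f)
dashboard.pop("id", None)  # let Grafana assign a new internal ID

resp = requests.post(
    f"{GRAFANA_URL}/api/dashboards/db",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"dashboard": dashboard, "overwrite": True},
    timeout=10,
)
resp.raise_for_status()
print(resp.json().get("url"))
```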

If your datasource name is different, update it in Dashboard Settings → Variables → datasource after import.

Current Setup: OTLP Push to Grafana Cloud

Your services are configured to:

  • Export traces via OTLP → Grafana Cloud Tempo
  • Export metrics via OTLP → Grafana Cloud Mimir (Prometheus-compatible storage)
  • Expose metrics at /metrics endpoint (for optional Prometheus scraping)
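
A minimal sketch of this kind of OTLP setup, assuming the OpenTelemetry Python SDK with the OTLP/HTTP exporters (the actual service code may differ; the endpoint and credentials are normally supplied via the standard OTEL_EXPORTER_OTLP_* environment variables):

```python
# Sketch only: OTLP push of traces and metrics, assuming the OpenTelemetry Python SDK.
from opentelemetry import metrics, trace
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Resource attributes become the deployment_environment / service_name labels
# shown in the dashboards (dots are converted to underscores).
resource = Resource.create({
    "service.name": "api",
    "deployment.environment": "dev",
})

# Traces: pushed to Grafana Cloud Tempo via OTLP.
tracer_provider = TracerProvider(resource=resource)
tracer_provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(tracer_provider)

# Metrics: pushed to Grafana Cloud Mimir via OTLP (every 60 seconds by default).
reader = PeriodicExportingMetricReader(OTLPMetricExporter())
metrics.set_meter_provider(MeterProvider(resource=resource, metric_readers=[reader]))
```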

Metrics Available

These metrics are pushed to Grafana Cloud Mimir:

| Metric Name | Type | Description | Labels |
| --- | --- | --- | --- |
| process_resident_memory_bytes | Gauge | Memory usage in bytes | deployment_environment, service_name |
| process_cpu_seconds_total | Counter | Total CPU time used | deployment_environment, service_name |
| python_gc_objects_collected_total | Counter | Objects collected by GC | deployment_environment, service_name, generation |
| python_info | Gauge | Python version info | deployment_environment, service_name, version, implementation |

Troubleshooting

"No data" in dashboards

Cause 1: OTLP export timeouts (check logs first)

  • Check container logs: docker compose logs api engine | grep -i error
  • OTLP export may be timing out due to network issues
  • Fix: Resolve the timeout errors first. Check your network/firewall rules, or try exporting from a different network.

Cause 2: Metrics not yet in Grafana Cloud

  • OTLP metrics export runs periodically (every 60 seconds by default)
  • Fix: Wait a few minutes, then check Grafana Cloud Explore → Prometheus → query process_resident_memory_bytes
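
For faster feedback while debugging "no data", the export interval can be shortened when the metric reader is constructed. A sketch, assuming the OpenTelemetry Python SDK; the 15-second value is illustrative, not the services' actual setting:

```python
# Sketch only: shorter export interval for debugging (the default is 60 000 ms).
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(),
    export_interval_millis=15_000,  # push every 15 s instead of every 60 s
)
```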

Cause 3: Label mismatch

  • OpenTelemetry converts dots to underscores in label names
  • deployment.environment becomes deployment_environment
  • service.name becomes service_name
  • Fix: The dashboards already use the underscore label format, so no change is needed

Verify metrics in Grafana Cloud

  1. Go to Grafana Cloud → Explore
  2. Select your Prometheus datasource
  3. Run query: process_resident_memory_bytes{deployment_environment="dev"}
  4. If you see data, the dashboards should work (after correcting the datasource name, if needed)
  5. If there is no data, check the container logs for OTLP export errors
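
If you prefer to verify from a script instead of the Explore UI, the same query can be run against the standard Prometheus HTTP API. A minimal sketch; the query URL, instance ID, and API key are placeholders for your Grafana Cloud stack's values:

```python
# Sketch only: querying Grafana Cloud's Prometheus-compatible API directly.
# PROM_URL, USER_ID, and API_KEY are placeholders for your stack's query endpoint
# and a token with metrics read access.
import requests

PROM_URL = "https://prometheus-<your-stack>.grafana.net/api/prom"
USER_ID = "<prometheus-instance-id>"
API_KEY = "<grafana-cloud-api-key>"

resp = requests.get(
    f"{PROM_URL}/api/v1/query",
    params={"query": 'process_resident_memory_bytes{deployment_environment="dev"}'},
    auth=(USER_ID, API_KEY),
    timeout=10,
)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    print(series["metric"].get("service_name"), series["value"][1])
```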

Alternative: Local Prometheus Setup

If you prefer to scrape metrics locally instead of pushing them via OTLP, add the following Prometheus service and scrape config:

```yaml
# docker-compose.yml - add this service
prometheus:
  image: prom/prometheus:latest
  ports:
    - "9090:9090"
  volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml
  command:
    - '--config.file=/etc/prometheus/prometheus.yml'
```
```yaml
# prometheus.yml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'api'
    static_configs:
      - targets: ['api:8000']
        labels:
          service_name: 'api'
          deployment_environment: 'dev'
```

Then update the dashboards to use the local Prometheus datasource.
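
The scrape config above assumes each service serves Prometheus-format metrics at its scrape target (the services already expose /metrics, as noted earlier). For completeness, a minimal sketch of one way to do this with the OpenTelemetry Prometheus exporter; the port and wiring here are illustrative assumptions, not a description of the actual service code:

```python
# Sketch only: exposing OpenTelemetry metrics in Prometheus format for local scraping.
# Assumes the opentelemetry-exporter-prometheus and prometheus_client packages.
from opentelemetry import metrics
from opentelemetry.exporter.prometheus import PrometheusMetricReader
from opentelemetry.sdk.metrics import MeterProvider
from prometheus_client import start_http_server

start_http_server(port=8000)       # serves /metrics; port must match the scrape target
reader = PrometheusMetricReader()  # bridges OTel metrics into the prometheus_client registry
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))
```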
