
Prerequisites

Data hygiene requirements you need to be aware of before pushing to slaOS


Welcome to slaOS, home to the most flexible computation engine in the service level management space. Our platform is designed to accommodate a wide array of data sources and types, enabling you to build sophisticated computations that truly reflect your service's performance.

Our Flexibility Promise

The slaOS engine is built to handle:

  • Multiple data sources

  • Various data types (logs, metrics, events)

  • Different granularities (from raw event data to pre-aggregated metrics)

  • Diverse key structures

Whether you're dealing with log-level data, metric-level information, event tables, or pre-aggregated statistics, slaOS is equipped to process and analyze it all.

Requirements

There are a few key requirements your data must meet to ensure a successful integration with slaOS:

  • Organization Identification

    • Organization identification is crucial for determining how your data is grouped and analyzed in slaOS.

    • Each data point must be associated with an organization identifier.

    • This can be a customer ID, vendor ID, account number, or any unique identifier that distinguishes between your service consumers, integrations, vendors, etc.

Real-world example

Example: RPC Provider Service

Consider an RPC provider tracking:

  • Request latency per downstream provider

  • Overall API availability

  • Request volume per provider

Your monitoring tracks:

  • provider_id (e.g., "alchemy", "infura")

  • request_type (e.g., "eth_call", "eth_getBalance")

  • response_status ("success", "error", "timeout")

Provider-Specific Metrics

{
  "organization_id": "alchemy",
  "key": "rpc_requests",
  "values": {
    "latency": 0.23,
    "success_rate": 0.998,
    "requests": 1000000
  }
}

Service-Wide Metrics

API availability tracked across all providers:

{
  "organization_id": "global_service",
  "key": "api_status",
  "values": {
    "availability": 0.9999,
    "error_rate": 0.001
  }
}

This enables:

  1. Per-provider latency SLOs (e.g., "Alchemy p95 latency < 250ms")

  2. Global availability SLOs (e.g., "API availability > 99.99%")

  3. Per-provider capacity planning
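As a sketch of what those checks look like, the payloads above can be evaluated against the two example SLO targets. The second provider's numbers and the thresholds are illustrative assumptions, not slaOS behavior:

```python
# Data points shaped like the JSON payloads above; the "infura" point
# is an invented example for contrast.
points = [
    {"organization_id": "alchemy", "key": "rpc_requests",
     "values": {"latency": 0.23, "success_rate": 0.998, "requests": 1_000_000}},
    {"organization_id": "infura", "key": "rpc_requests",
     "values": {"latency": 0.31, "success_rate": 0.995, "requests": 800_000}},
    {"organization_id": "global_service", "key": "api_status",
     "values": {"availability": 0.9999, "error_rate": 0.001}},
]

# Per-provider SLO check: latency under 250 ms
for p in points:
    if p["key"] == "rpc_requests":
        met = p["values"]["latency"] < 0.25
        print(p["organization_id"], "latency SLO met:", met)

# Global SLO check: availability at or above 99.99%
global_point = next(p for p in points if p["organization_id"] == "global_service")
print("availability SLO met:", global_point["values"]["availability"] >= 0.9999)
```

Because each point carries its own organization_id, per-provider and service-wide objectives can be computed from the same stream without any extra joins.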

The organization_id could represent:

  • Provider ID (tracking RPC providers)

  • Region ID (monitoring geographical performance)

  • Network ID (tracking network-specific metrics)

  • Method ID (monitoring specific RPC methods)

Best Practices

  • Avoid generic IDs (e.g., "default_provider")

  • Use "global_service" for service-wide metrics

  • Keep provider identification consistent

  • Document your organization ID schema
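A minimal sketch of enforcing these practices before data leaves your pipeline. The banned-ID list and the formatting pattern are assumptions for illustration; your own schema may differ:

```python
import re

# Generic placeholders to reject, per the best practices above
# (list contents are an assumption for this sketch).
GENERIC_IDS = {"default_provider", "unknown", "n/a", ""}

# Example convention: lowercase alphanumerics with underscores/hyphens.
ID_PATTERN = re.compile(r"^[a-z0-9_\-]+$")

def validate_organization_id(org_id: str) -> bool:
    """Reject generic placeholders and inconsistently formatted IDs."""
    if org_id in GENERIC_IDS:
        return False
    return bool(ID_PATTERN.match(org_id))

print(validate_organization_id("alchemy"))           # True
print(validate_organization_id("default_provider"))  # False
print(validate_organization_id("global_service"))    # True
```

Running a check like this at ingestion time keeps provider identification consistent and makes the documented ID schema enforceable rather than advisory.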

organization_id maps to "customers" for vendors using slaOS, and conversely to "integrations" or "vendors" if you are a consumer of services standing up your SLA surface on the platform.

  • Timestamp

    • Every event or metric should include a timestamp. This allows for time-based analysis and tracking of SLIs over time.

  • Consistent Keys

    • A key identifies a unique data stream within slaOS. For example, keys like api_logs or etl_logs represent different types of data streams.

    • Consistency within each data type is crucial. Define and stick to a naming convention for your metrics and event types to ensure accurate and efficient analysis.
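Putting the three requirements together, a well-formed data point carries an organization identifier, a timestamp, and a consistent key. The helper below is a sketch; the field names mirror the JSON examples earlier on this page, and the helper itself is not part of the slaOS API:

```python
import time

def make_data_point(organization_id: str, key: str, values: dict) -> dict:
    """Assemble a data point meeting the three prerequisites above."""
    return {
        "organization_id": organization_id,  # who this point belongs to
        "key": key,                          # consistent data-stream name, e.g. api_logs
        "timestamp": int(time.time()),       # Unix epoch seconds for time-based analysis
        "values": values,                    # the measurements themselves
    }

point = make_data_point(
    "alchemy", "rpc_requests", {"latency": 0.23, "success_rate": 0.998}
)
print(point["key"])  # rpc_requests
```

Keeping the key fixed per stream (rpc_requests for every RPC data point, api_logs for every API log line) is what lets slaOS group points into the same stream for analysis.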

Next Steps

Once you've ensured your data meets these prerequisites:

  1. Choose your preferred integration method (pre-built integrations, custom adapters, or the direct Data API).

  2. Set up your data pipelines to start flowing data into slaOS.

  3. Begin defining your Service Level Indicators (SLIs) and Objectives (SLOs) on slaOS.

Need help preparing your data to onboard to slaOS? Contact us at hello@rated.co!