
The architecture is genuinely different from ClickHouse and Postgres-based solutions. Object storage is first-class (not a tiering afterthought) so costs stay predictable as telemetry volume grows. Native OTLP ingestion means we went from SDK to queryable SQL with no queue, no workers, no transformation pipeline. For AI observability specifically, the Gen-AI semantic conventions land directly as columns, which makes building cost and latency dashboards straightforward. Review collected by and hosted on G2.com.
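To illustrate the kind of dashboard logic this enables: once the Gen-AI semantic-convention attributes are queryable as columns, rolling spans up into per-model cost and latency is a small aggregation. This is a minimal sketch over plain dicts standing in for query results; the attribute names (`gen_ai.request.model`, `gen_ai.usage.input_tokens`, `gen_ai.usage.output_tokens`) come from the OpenTelemetry Gen-AI conventions, while the `duration_ms` column name and the per-token prices are made-up illustration values, not real rates or the product's actual schema.

```python
# Sketch: per-model cost and average latency from span rows whose
# Gen-AI semconv attributes are exposed as columns. Prices are
# hypothetical ($ per 1K tokens: input, output), purely for illustration.
from collections import defaultdict

PRICE_PER_1K = {
    "gpt-4o-mini": (0.15, 0.60),  # assumed rates, not real pricing
}

def summarize(rows):
    """Aggregate rows into {model: {cost, avg latency_ms, calls}}."""
    totals = defaultdict(lambda: {"cost": 0.0, "latency_ms": 0.0, "calls": 0})
    for row in rows:
        model = row["gen_ai.request.model"]
        inp_price, out_price = PRICE_PER_1K.get(model, (0.0, 0.0))
        agg = totals[model]
        agg["cost"] += (row["gen_ai.usage.input_tokens"] / 1000 * inp_price
                        + row["gen_ai.usage.output_tokens"] / 1000 * out_price)
        agg["latency_ms"] += row["duration_ms"]
        agg["calls"] += 1
    for agg in totals.values():
        agg["latency_ms"] /= agg["calls"]  # sum -> mean latency
    return dict(totals)

rows = [
    {"gen_ai.request.model": "gpt-4o-mini",
     "gen_ai.usage.input_tokens": 1000,
     "gen_ai.usage.output_tokens": 500,
     "duration_ms": 320.0},
    {"gen_ai.request.model": "gpt-4o-mini",
     "gen_ai.usage.input_tokens": 2000,
     "gen_ai.usage.output_tokens": 1000,
     "duration_ms": 480.0},
]
summary = summarize(rows)
print(summary)  # one entry per model with cost, mean latency, call count
```

In practice the same rollup would be a `GROUP BY` over the semconv columns in SQL rather than client-side Python; the point is only that no transformation pipeline is needed before the attributes are usable.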
It's a younger project, and you'll occasionally hit rough edges before the broader ecosystem does. We ran into a few bugs (binary timestamp parameters in cluster mode, Unicode encoding in query results) that needed attention. But the team is super responsive and everything was addressed within 2 days of reporting.
The reviewer uploaded a screenshot or submitted the review in-app, verifying them as a current user.
Validated through LinkedIn
Invitation from G2. This reviewer was not provided any incentive by G2 for completing this review.

