March 24, 2026 · 4 min read

The real reason your dashboards contradict each other

Nanda Vijadev

In part five of our series "Closing the IT Data Trust Gap: From Raw Records to Decision‑Grade Intelligence," we look at how UI-layer filtering is the reason your user count changes depending on which report you open.

You’re in a vendor review meeting. You pull up the overview dashboard: 4,200 users. The procurement lead pulls up the license utilization report: 3,800 users. Your security analyst has the Shadow IT report open: it references 5,100 identities. Three screens, three numbers, same platform.

This isn’t a rounding error. It’s a design flaw baked into how most IT asset management platforms handle data.

The Problem: Filtering at the UI Layer

Most platforms ingest raw data from source systems and store it as-is. Then, at the point of display, each dashboard or report applies its own filtering logic. The overview page might exclude disabled accounts. The utilization report might include them but exclude guest accounts. The security view might include everything.

Each screen defines “user” differently. Each definition is defensible in isolation. But when you look across screens, the numbers contradict — and there’s no way for the person reading the dashboard to understand why.
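A minimal sketch of the divergence, using invented records and field names (none of this reflects any particular platform's schema). Three views apply their own, individually defensible predicate to the same raw rows and arrive at three different "user" counts:

```python
# Hypothetical raw identity records; all fields are invented for illustration.
RAW_IDENTITIES = [
    {"id": 1, "type": "human",   "disabled": False, "guest": False},
    {"id": 2, "type": "human",   "disabled": True,  "guest": False},
    {"id": 3, "type": "human",   "disabled": False, "guest": True},
    {"id": 4, "type": "service", "disabled": False, "guest": False},
]

# Each screen applies its own definition of "user" at display time.
def overview_count(rows):
    # Overview page: excludes disabled accounts.
    return sum(1 for r in rows if not r["disabled"])

def utilization_count(rows):
    # Utilization report: humans only, excludes guests, includes disabled.
    return sum(1 for r in rows if r["type"] == "human" and not r["guest"])

def security_count(rows):
    # Security view: includes everything.
    return len(rows)

print(overview_count(RAW_IDENTITIES))     # 3
print(utilization_count(RAW_IDENTITIES))  # 2
print(security_count(RAW_IDENTITIES))     # 4
```

Each filter is reasonable on its own; the contradiction only appears when the three screens are compared side by side.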

The same thing happens with application counts, spend figures, and utilization metrics. Application spend on the overview page doesn’t match the spend in the license utilization report. Every inconsistency erodes trust.

The Fix: Enrichment at the Data Layer

The alternative is to push enrichment, classification, and filtering upstream — to the data layer, before any dashboard ever renders.

When every identity is classified (human user, service account, guest, admin, etc.) and every application is typed (user-facing, CDN, infrastructure, auth endpoint) at the point of ingestion, the definitions are set once. Every downstream screen, tile, report, and data product draws from the same enriched dataset.

A “user” count on the overview page matches the “user” count in the utilization report — because both are reading the same classified, scoped, enriched entity.
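A sketch of the same data under data-layer enrichment (again with invented records and a hypothetical `classify` function, not any platform's actual API). Each identity is classified once at ingestion, and every view counts "users" by reading the same enriched field, so the numbers cannot diverge:

```python
# Hypothetical raw records; fields and classes are invented for illustration.
RAW = [
    {"id": 1, "type": "human",   "disabled": False, "guest": False},
    {"id": 2, "type": "human",   "disabled": True,  "guest": False},
    {"id": 3, "type": "human",   "disabled": False, "guest": True},
    {"id": 4, "type": "service", "disabled": False, "guest": False},
]

def classify(raw):
    # Classification happens once, at ingestion, not per screen.
    if raw["guest"]:
        return "guest"
    if raw["type"] == "service":
        return "service_account"
    if raw["disabled"]:
        return "disabled_user"
    return "human_user"

# Enrich at the data layer: every record carries its class.
ENRICHED = [{**r, "class": classify(r)} for r in RAW]

def user_count(rows):
    # Every screen counts "users" the same way: by the shared class field.
    return sum(1 for r in rows if r["class"] == "human_user")

overview = user_count(ENRICHED)
utilization = user_count(ENRICHED)
print(overview == utilization)  # True: one definition, one number
```

The filtering logic still exists, but it lives in one place; the dashboards become read-only consumers of a single definition.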

What This Enables

When enrichment happens at the data layer, every downstream capability gets better without any additional work:

  • Software Visibility presents an accurate app landscape by type and business function, because the classification already exists in the data
  • Utilization analysis focuses on paid, user-facing applications, because infrastructure and free tools were already excluded at the entity level
  • Shadow IT surfaces genuine risk, because CDNs, auth endpoints, and bundled apps were already classified
  • Renewal intelligence is grounded in true user counts and properly attributed entitlements, because vendor product families and bundle relationships are resolved in the knowledge graph

This is the compounding effect of data-layer enrichment. You solve it once, and every product surface inherits the benefit. But consistent, trusted data doesn't just improve dashboards; it makes something else possible: AI agents that can act on what they see. When an agent can trust that a "user" is really a user and an "app" is really an app, it can reclaim licenses, prepare renewal briefs, and flag compliance gaps automatically. That's the Execute layer, and it's where data intelligence becomes IT asset optimization.

One enrichment layer. One knowledge graph. One source of truth.  

Consistent numbers across every product surface.