- Two in five companies do not trust their AI/ML model results
- Only one in three has implemented or optimized data observability programs
- Observability must be standard across the entire data lifecycle
New research from Ataccama suggests a considerable proportion of companies still do not trust the output of their AI models – but that could simply be because their data is not yet in order.
The study revealed that two in five organizations (42%) do not trust the results of their AI/ML models, yet only three in five (58%) have implemented or optimized data observability programs.
Ataccama says that could be a problem, because traditional observability tools are not designed to monitor unstructured data such as PDFs and images.
Don’t trust AI? A lack of proper data could be the problem
The report also revealed the ad hoc approach companies often adopt, with observability frequently implemented reactively, resulting in fragmented governance and silos across the organization.
Ataccama defines an effective program as proactive, automated, and integrated throughout the data lifecycle. More advanced observability could also include automated data quality checks and remediation workflows, which could ultimately prevent further problems upstream.
The report notes: “They invested in tools, but they did not operationalize trust.”
However, ongoing skills shortages and limited budgets still pose challenges along the way. Ataccama also noted that unstructured inputs continue to grow with the rise of generative AI and RAG adoption, yet currently fewer than one in three organizations monitor the unstructured data feeding their models.
The report goes on to explain: “The most mature programs close this gap by integrating observability directly into their data engineering and governance frameworks.”
With proper observability in place, companies can expect better data reliability, faster decision-making, and reduced operational risk.