- Google Cloud Services Dominate Leaked Credentials in the Android Ecosystem
- Hundreds of Firebase databases show clear signs of automated compromise
- Exposed storage buckets leaked hundreds of millions of files
A major security investigation analyzed 1.8 million Android apps available on the Google Play Store, focusing on those that explicitly claim AI features, and identified worrying security flaws that could reveal secrets.
From the initial search pool, Cybernews researchers identified 38,630 Android AI apps and examined their internal code for exposed credentials and cloud service references, finding widespread failures in data management that extended far beyond isolated developer errors.
Overall, researchers found that nearly three-quarters (72%) of Android AI apps analyzed contained at least one hard-coded secret directly embedded in the app’s code – and on average, each affected app leaked 5.1 secrets.
Hardcoded secrets remain common in Android AI apps
In total, the researchers identified 197,092 unique secrets in the dataset, demonstrating that insecure coding practices remain widespread despite long-standing warnings.
More than 81% of all detected secrets were related to Google Cloud infrastructure, including project IDs, API keys, Firebase databases, and storage buckets.
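Many of these Google-related credentials follow recognizable string formats, which is what makes them easy to find in decompiled app code. A minimal sketch of that kind of scan is below; the regex rules are illustrative only and are not the researchers' actual tooling, which would use far broader rule sets plus entropy checks:

```python
import re

# Illustrative patterns for common Google-related secrets found in app code.
# Real scanners combine many more rules with entropy and context checks.
PATTERNS = {
    "google_api_key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
    "firebase_db_url": re.compile(r"https://[a-z0-9\-]+\.firebaseio\.com"),
    "gcs_bucket_url": re.compile(r"https://storage\.googleapis\.com/[a-z0-9._\-]+"),
}

def scan_for_secrets(text: str) -> dict:
    """Return every pattern match found in a blob of decompiled code."""
    hits = {}
    for name, pattern in PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[name] = found
    return hits
```

In practice a scanner like this runs over every string constant and resource file extracted from an APK, then verifies each hit against the live service before counting it as a real exposure.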
In total, 26,424 hardcoded Google Cloud endpoints were detected, although about two-thirds pointed to infrastructure that no longer existed.
Of the remaining endpoints, 8,545 Google Cloud storage buckets still existed and required authentication, while hundreds were misconfigured and left publicly accessible, possibly exposing more than 200 million files, totaling nearly 730 TB of user data.
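Whether a bucket is "publicly accessible" in this sense comes down to a single unauthenticated request: a misconfigured Google Cloud Storage bucket will answer an anonymous listing request with an XML object index, while a locked-down one returns a 401/403. A minimal sketch of that probe, assuming a hypothetical bucket name:

```python
import urllib.error
import urllib.request

def listing_url(bucket_name: str) -> str:
    """Anonymous XML-API listing request; max-keys=1 keeps the probe small."""
    return f"https://storage.googleapis.com/{bucket_name}?max-keys=1"

def bucket_is_public(bucket_name: str) -> bool:
    """Return True if an unauthenticated client can list the bucket's objects."""
    try:
        with urllib.request.urlopen(listing_url(bucket_name), timeout=10) as resp:
            return resp.status == 200  # 200 + XML index means publicly listable
    except urllib.error.HTTPError:
        return False  # 401/403 means access is restricted
```

The check is non-destructive: it asks for at most one object name and never downloads file contents.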
The study also identified 285 Firebase databases without any authentication checks, collectively leaking at least 1.1 GB of user data.
In 42% of these exposed databases, researchers found tables labeled as proof of concept, indicating previous compromise by attackers.
Other databases contained administrator accounts created with attacker-style email addresses, showing that the exploitation was not theoretical but already underway.
Many of these databases remained insecure even after clear signs of intrusion, suggesting poor monitoring rather than one-off errors.
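An open Firebase Realtime Database is equally simple to identify: appending `/.json` to the database URL returns the stored data when the security rules allow public reads, and an HTTP 401 "Permission denied" otherwise. A hedged sketch of that check, using a hypothetical project name:

```python
import urllib.error
import urllib.request

def database_url(project: str) -> str:
    # shallow=true asks Firebase for top-level keys only, keeping the probe small
    return f"https://{project}.firebaseio.com/.json?shallow=true"

def database_is_open(project: str) -> bool:
    """Return True if the Realtime Database permits unauthenticated reads.

    A correctly configured database rejects this request with HTTP 401;
    an open one serves its contents to any anonymous client.
    """
    try:
        with urllib.request.urlopen(database_url(project), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

Because the same request works for attackers, any database that answers it should be treated as already compromised, which matches the proof-of-concept tables the researchers observed.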
Despite concerns about AI functionality, leaks of API keys from large language models were relatively rare: only a small number of keys associated with major vendors such as OpenAI, Google Gemini, and Claude were detected in the entire dataset.
In typical configurations, these leaked keys would allow attackers to submit new requests, but would not provide access to stored conversations, historical prompts, or previous responses.
Some of the most serious exposures involved live payments infrastructure, including the leak of Stripe secret keys capable of granting full control over payment systems.
Other leaked credentials allowed access to communications, analytics, and customer data platforms, enabling application impersonation or unauthorized data extraction.
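Stripe secret keys are among the easiest of these credentials to spot, because they carry a fixed prefix that distinguishes live keys from test ones. A hedged sketch of that detection (the tail length varies by key generation, so the pattern below is deliberately loose):

```python
import re

# Stripe secret keys begin with sk_live_ (production) or sk_test_ (test mode).
STRIPE_SECRET = re.compile(r"sk_(live|test)_[0-9a-zA-Z]{16,}")

def find_stripe_keys(text: str) -> list:
    """Return all Stripe-style secret keys found in a blob of code."""
    return [m.group(0) for m in STRIPE_SECRET.finditer(text)]
```

A hit on an `sk_live_` key is the worst case: that credential authenticates the full Stripe API, not just a single payment flow.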
These failures cannot be mitigated by basic defenses such as a firewall, or by post-exposure malware removal.
The scale of data exposed and the number of apps already compromised suggest that app store controls alone have not reduced systemic risk.