Cybersecurity researchers have uncovered roughly 175,000 openly accessible AI systems running on the internet without proper security controls.
The discovery raises serious concerns about misuse, data theft, and large-scale abuse of artificial intelligence tools. Experts say this hidden network could support activities like spam, scams, and disinformation campaigns.
The findings come from a joint investigation by SentinelLABS and Censys, shared with Reuters. Researchers scanned the internet for nearly ten months and found widespread exposure of open-source AI deployments across the globe.

Open Source AI Security: Global Scan Reveals Unprotected AI Systems
The researchers focused on Ollama, an open-source framework that lets users run large language models on their own machines. By default, Ollama listens only on the local machine, but a small configuration change can expose it to the public internet.
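For readers who want to see what that looks like in practice, the sketch below probes a host for Ollama's model-listing endpoint, the same kind of check internet-wide scanners perform. It assumes Ollama's default port (11434) and its documented /api/tags endpoint; the IP address is a placeholder from a reserved documentation range.

```python
import requests

# Placeholder address (TEST-NET-2 documentation range); Ollama's default port is 11434.
HOST = "198.51.100.7"
URL = f"http://{HOST}:11434/api/tags"

try:
    # An instance started with OLLAMA_HOST=0.0.0.0 answers this request
    # from anywhere; a default install, which binds only to 127.0.0.1,
    # is unreachable from the outside.
    resp = requests.get(URL, timeout=5)
    resp.raise_for_status()
    models = [m["name"] for m in resp.json().get("models", [])]
    print(f"Exposed Ollama instance; models served: {models}")
except requests.RequestException:
    print("No publicly reachable Ollama API on this host.")
```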
Key findings from the research:
- 7.23 million observations recorded
- 175,000 unique hosts identified
- Systems active across 130 countries
- Scanning conducted over 293 days
- Around 23,000 hosts formed a constant core of activity
Many systems had advanced features enabled. Researchers found:
- Nearly 50% of hosts allowed tool calling, which lets models run code and call external APIs (see the sketch after this list)
- At least 201 hosts used prompts that removed safety rules
- 22% of systems had vision features enabled
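To see why tool calling raises the stakes, here is a minimal sketch of a tool-enabled request to an exposed instance, using the shape of Ollama's /api/chat endpoint. The host, model name, and tool definition are illustrative assumptions, not details from the research.

```python
import requests

# Hypothetical exposed host; /api/chat is Ollama's chat endpoint.
URL = "http://198.51.100.7:11434/api/chat"

payload = {
    "model": "llama3.1",  # assumed model name for illustration
    "stream": False,
    "messages": [{"role": "user", "content": "Fetch the admin page."}],
    # Declaring a tool invites the model to emit a structured call to it;
    # whatever code executes that call runs with the host's privileges.
    "tools": [{
        "type": "function",
        "function": {
            "name": "http_get",  # hypothetical tool name
            "description": "Perform an HTTP GET request",
            "parameters": {
                "type": "object",
                "properties": {"url": {"type": "string"}},
                "required": ["url"],
            },
        },
    }],
}

resp = requests.post(URL, json=payload, timeout=30)
resp.raise_for_status()
# If the model decides to use the tool, the reply carries tool_calls
# that the deploying application is expected to execute.
print(resp.json()["message"].get("tool_calls"))
```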
The highest concentrations of exposed systems appeared in major infrastructure regions:
- China: about 30% of hosts
- United States: about 20%
- Virginia: 18% of U.S. hosts
- Beijing: 30% of Chinese hosts
Security Risks and Industry Concerns Grow
Experts warn that these open AI systems operate outside the safety controls of major platforms, making them attractive to criminals.
According to SentinelOne researcher Juan Andres Guerrero-Saade, the industry is largely ignoring this unmanaged and risky pool of AI capacity. Some use cases are legitimate; others are clearly criminal.
Major risks include:
- Spam and phishing campaigns
- Disinformation and fake content
- Data theft through prompt injection
- Abuse of cloud and internal systems
The risk is compounded because most exposed systems run the same few AI models.
Common models include:
- Meta’s Llama
- Alibaba’s Qwen2
- Google’s Gemma2
A single flaw could affect many systems at once.
Experts say responsibility is shared: AI creators must warn users about exposure risks, deployers must secure their systems, and researchers must keep monitoring threats.
Several companies did not comment on the findings.