AI Index · Israel

Our Sources

The score is built from daily monitoring of dozens of public sources, ranging from frontier labs to independent safety evaluations, incident databases, academic research, policy bodies, and cyber intelligence. Here's the complete list.

Frontier AI Labs

Official safety publications from the companies leading the field

Independent Safety Evaluation

Third-party organizations that evaluate models independently of the companies that built them

Incident Databases & Case Studies

Real documentation of AI harms and incidents — not speculation

Academic Research — arXiv

The primary source for new research papers; four categories we monitor daily

Policy & Standards

Government and international bodies setting the rules

Industry Synthesis — Newsletters

Newsletters that do the hard work of reading everything and summarizing it

Chinese AI Ecosystem

A major share of global model progress, and a blind spot for most Western trackers

Cyber Intelligence

Who uses AI to attack, and how — real field reports

Open Source Risks

Supply-chain incidents, among the best-documented AI risks in 2026

📋 How We Work with Sources

Each day, the research engine automatically scans these sources. Every public event is evaluated for its impact on one of four pillars (capability, autonomy, integration, bypass-of-control), with an impact of ±0.1 to ±2.0 points. The daily score sums the four pillars. Negative changes (effective regulation, risks that failed to materialize) offset positive ones. No internal estimates, no secret information: only verifiable public sources.
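The aggregation described above can be sketched in a few lines of Python. The pillar names and the ±0.1 to ±2.0 range come from the text; the event data, function names, and validation logic are hypothetical illustrations, not the engine's actual implementation:

```python
# Minimal sketch of the daily scoring described above.
# Pillar names come from the text; all event data is hypothetical.

PILLARS = {"capability", "autonomy", "integration", "bypass-of-control"}

def validate_impact(impact: float) -> float:
    """Each event's impact magnitude must lie between 0.1 and 2.0 points."""
    if not 0.1 <= abs(impact) <= 2.0:
        raise ValueError(f"impact {impact} outside the +/-0.1..+/-2.0 range")
    return impact

def daily_score(events):
    """Sum impacts per pillar; negative events offset positive ones.

    `events` is an iterable of (pillar, impact) pairs.
    Returns (total score, per-pillar totals).
    """
    totals = {p: 0.0 for p in PILLARS}
    for pillar, impact in events:
        if pillar not in PILLARS:
            raise ValueError(f"unknown pillar: {pillar}")
        totals[pillar] += validate_impact(impact)
    return sum(totals.values()), totals

# Hypothetical day: two positive events and one offsetting negative one.
events = [
    ("capability", +1.5),         # e.g. a frontier model release
    ("integration", +0.4),        # e.g. a new enterprise deployment
    ("bypass-of-control", -0.3),  # e.g. effective regulation passed
]
score, per_pillar = daily_score(events)
# score is approximately 1.6 (1.5 + 0.4 - 0.3)
```

The key design point mirrored here is that the score is a plain sum: no weighting between pillars, with regulation and non-events entering as negative terms rather than being tracked separately.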
