AI models fail to predict 2018 World Cup winners
Occurred: June - July 2018
Page published: December 2025
A number of AI and machine learning models developed by Goldman Sachs, UBS, ING and academic researchers made wildly inaccurate forecasts for the 2018 FIFA World Cup in Russia, highlighting overconfidence in data-driven forecasting for complex, high-variance real-world events.
Ahead of the 2018 World Cup in Russia, a range of AI systems, from machine learning betting models to academic prediction engines and corporate AI demos, attempted to forecast the tournament’s outcome.
Many models chose favourites such as Germany, Spain, Brazil, or France, each for different probabilistic reasons, yet they failed to predict knockout-stage dynamics and, in many cases, the eventual champion (France).
Goldman Sachs initially predicted a Brazil vs. Germany final, with Brazil winning. An academic team from TU Dortmund, Ghent University, and the Technical University of Munich used a machine learning model to simulate the tournament 100,000 times, predicting Spain as the most likely winner, followed closely by Germany.
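To illustrate the general approach behind such forecasts, the following is a minimal Monte Carlo sketch: assumed team ratings feed a simple match-level win probability, and a knockout bracket is replayed many times to estimate title chances. The ratings, the reduced eight-team bracket, and the logistic win-probability curve are illustrative assumptions only, not the methodology of any of the models involved.

```python
# Minimal Monte Carlo tournament sketch (illustrative ratings and a simple
# Elo-style win probability; NOT the methodology of any cited model).
import random
from collections import Counter

# Hypothetical strength ratings for a reduced 8-team knockout bracket.
RATINGS = {
    "Brazil": 2085, "Germany": 2070, "France": 2060, "Spain": 2040,
    "Belgium": 2010, "Argentina": 2000, "England": 1950, "Croatia": 1945,
}

def win_prob(team_a, team_b):
    """Probability that team_a beats team_b under a logistic (Elo-style) curve."""
    diff = RATINGS[team_a] - RATINGS[team_b]
    return 1.0 / (1.0 + 10 ** (-diff / 400.0))

def simulate_knockout(bracket):
    """Play out a single-elimination bracket once and return the champion."""
    while len(bracket) > 1:
        bracket = [a if random.random() < win_prob(a, b) else b
                   for a, b in zip(bracket[0::2], bracket[1::2])]
    return bracket[0]

def forecast(n_runs=100_000):
    """Repeat the simulation and report each team's estimated title probability."""
    teams = list(RATINGS)
    counts = Counter(simulate_knockout(teams) for _ in range(n_runs))
    return {team: counts[team] / n_runs for team in teams}

if __name__ == "__main__":
    for team, p in sorted(forecast().items(), key=lambda kv: -kv[1]):
        print(f"{team:10s} {p:.3f}")
```

Even this toy version makes the core weakness visible: running 100,000 simulations adds no validity to the assumed ratings or the single-match model, so the headline probabilities inherit every weakness of those inputs.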
Some predictions were widely publicised through the media and corporate communications, creating an impression of precision and reliability.
The incident revealed the inability of these systems to account for rare events, tactical surprises, and high-variance outcomes inherent in international football, leading to misplaced trust and, in certain cases, financial or reputational losses for those who relied on the forecasts.
The failures appear to have stemmed from several factors:
Model limitations: Football outcomes depend heavily on non-quantifiable factors (injuries, referee decisions, team morale), which most models failed to capture.
Small datasets: International tournaments provide limited historical samples, reducing statistical robustness.
Overfitting and spurious correlations: Some models leaned heavily on past performance or Elo-type rating metrics (a minimal sketch follows this list) that did not generalise to an unpredictable tournament.
Opaque methodology: Many commercial prediction systems lacked transparency about their inputs, assumptions, and uncertainty ranges, leading users to overestimate their accuracy.
Corporate incentives: Tech companies promoted their models as showcases of AI capability, sometimes downplaying uncertainty or methodological weaknesses.
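To make the Elo-type metrics mentioned above concrete, here is a minimal sketch of a standard Elo rating update; the K-factor and ratings are illustrative assumptions rather than parameters used by any of the models involved. Because international teams play only a handful of competitive matches, such ratings move slowly and largely encode past reputation, which is exactly the small-sample and overfitting concern described above.

```python
# Minimal Elo update sketch (standard Elo formula with an assumed K = 20;
# none of the cited models necessarily used these values).
def expected_score(rating_a, rating_b):
    """Expected score of team A against team B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating_a, rating_b, score_a, k=20.0):
    """Return updated ratings; score_a is 1.0 for a win, 0.5 for a draw, 0.0 for a loss."""
    exp_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - exp_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - exp_a))
    return new_a, new_b

# Example: an 1800-rated favourite loses to a 1700-rated underdog.
print(update(1800, 1700, 0.0))  # favourite drops, underdog gains, but only by ~13 points
```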
For gamblers, analysts, and organisations using AI-driven forecasts, the incident demonstrated the risks of overreliance on algorithmic predictions in contexts characterised by randomness and sparse data.
For society, the event serves as a cautionary tale: AI forecasting is often marketed as precise even when uncertainty dominates, and clearer communication of model limitations is required, particularly when public literacy about AI is low.
Unknown
Developer: Blue Yonder; Technische Universität Dortmund; Technical University of Munich; Ghent University; Goldman Sachs; ING; UBS
Country: Global
Sector: Banking/financial services
Purpose: Predict World Cup winners
Technology: Machine learning; Prediction algorithm
Issue: Accuracy/reliability; Transparency
Hassan, A., et al. (2020). Predicting Wins, Losses and Attributes’ Sensitivities in the Soccer World Cup 2018 Using Neural Network Analysis
Groll, A., et al. (2018). Probabilistic forecasts for the 2018 FIFA World Cup based on the bookmaker consensus model
Lepschy, H., et al. (2021). Success Factors in the FIFA 2018 World Cup in Russia and FIFA 2014 World Cup in Brazil
AIAAIC Repository ID: AIAAIC0182