Page published: February 2023 | Page last updated: November 2025
Gemini is a "multimodal" generative AI system developed and managed by Google, capable of generating text, images, audio, video, and software code, and able to take an input in one mode and produce an output in another through "cross-modal reasoning".
First launched as Bard in March 2023 in response to OpenAI's earlier launch of ChatGPT, the system was initially based on Google's LaMDA family of large language models.
Bard was relaunched as Gemini in February 2024 and comprised three models: Gemini Ultra, designed for "highly complex tasks"; Gemini Pro, designed for "a wide range of tasks"; and Gemini Nano, designed for "on-device tasks".
The same month, Google launched Gemma, a family of free and open-source LLMs that serve as a lightweight version of Gemini.
Generative artificial intelligence
Generative artificial intelligence (generative AI, GenAI, or GAI) is artificial intelligence capable of generating text, images, videos, or other data using generative models, often in response to prompts.
Source: Wikipedia 🔗
Website: Gemini 🔗
Released: 2023
Purpose: Generate text
Type: Chatbot; Generative AI
Technique: Machine learning
Upstream. Google provides almost no information about the data used to train Gemini, its use of human labour to train the model, or how it mitigates privacy and copyright risks. Nor does the company provide information about the model's energy usage, carbon emissions, or broader environmental impact.
Model. Basic information about how Gemini works, including its capabilities, known limitations, and risks, is not disclosed.
Downstream. Google provides little information on how user data is protected in Gemini, which uses are restricted or prohibited, and what impacts the model has. It also fails to provide visible channels for users to give feedback, submit complaints, or lodge appeals.
Center for Research on Foundation Models. Gemini 1.0 Ultra API transparency report 🔗
Gemini has been criticised for a wide range of actual and potential harms, including:
Accuracy and reliability. The model sometimes struggles with complex or factual topics, leading to unreliable or inaccurate outputs. It can confidently and convincingly generate responses that are inaccurate or misleading, and can even invent non-existent facts or sources.
Bias and discrimination. The model's training data, often sourced from the public internet, reflects and sometimes amplifies existing societal biases, including political biases; racial, ethnic and other forms of discrimination; gender, religious and ethnic stereotypes; and historical distortions.
Security, confidentiality and privacy. Gemini's design makes it vulnerable to prompt injections and data extraction/leakage; the sharing of proprietary, financial, or legally protected business information; and the collection, retention, and use of personal data.
Copyright. Gemini is trained on massive datasets scraped from the public internet. This vast corpus likely includes copyrighted material (books, articles, code, images, etc.) that was used without explicit license or permission from the creators.
Dual use. Users may exploit Gemini's capabilities for unethical or malicious purposes, such as generating code for malware, engaging in phishing, or creating harmful content.
Environment. The environmental impact of Gemini spans energy consumption, carbon emissions, and water usage, primarily centred on the massive data centres required to train and run the system.
November 2025. Google accused of using Gemini AI to snoop on users
October 2025. Political chatbots provide biased political advice about Dutch elections
May 2025. Fortnite accused of unfairly using AI to replicate Darth Vader's voice
May 2025. US law firms fined for false AI-generated legal citations, quotations
July 2024. AI increases Google emissions by 48 percent
June 2024. Study: Top chatbots spread Russian misinformation
March 2024. Google fined for training Gemini on news content without consent
March 2024. Chatbots misinform citizens about European Parliament elections
February 2024. Gemini characterises Indian PM's policies as 'fascist'
February 2024. AI models found to generate inaccurate and untrue US election info
February 2024. Google Gemini generates 'woke' 'diverse' racial images
December 2023. Google AI systems falsely call conservative activist Robby Starbuck a “child rapist”
December 2023. Michael Cohen supplies fake AI legal citations to lawyer
November 2023. Australian academics make false AI-generated allegations about Deloitte, KPMG
October 2023. Large language models perpetuate healthcare racial bias
September 2023. Asylum claim rejected by French authorities using Google Bard
September 2023. Google Search indexes Bard personal chats
August 2023. Google AI bots espouse slavery, fascism
July 2023. Chatbot guardrails bypassed using lengthy character suffixes
July 2023. Study: Google Bard lets users generate phishing emails, ransomware
April 2023. C4 dataset contains unsafe, copyright-protected web content
March 2023. Google Bard says the UK's exit from the European Union is a 'bad idea'
March 2023. Study: Google Bard exhibits left-leaning political bias
February 2023. Google Bard makes factual error about James Webb Space Telescope
Check Point Research (2023). Lowering the Bar(d)? Check Point Research's security analysis spurs concerns over Google Bard's limitations
AIAAIC Repository ID: AIAAIC0943