ChatGPT mostly gets programming questions wrong

Occurred: August 2023

ChatGPT incorrectly answers more than half of the software engineering questions it receives, according to a research study.

Purdue University researchers analysed how ChatGPT responded to 517 questions posed on Stack Overflow, assessing the correctness, consistency, comprehensiveness, and conciseness of the chatbot's answers using linguistic and sentiment analysis, and by surveying a dozen volunteer participants.

The analysis showed that 52 percent of ChatGPT's answers were incorrect and 77 percent were verbose. 'Nonetheless,' the researchers said, 'ChatGPT answers are still preferred 39.34 percent of the time due to their comprehensiveness and well-articulated language style.'

The paper raised questions about ChatGPT's ability to generate high-quality engineering information.

Databank

Operator: Stack Overflow; OpenAI
Developer: OpenAI
Country: USA
Sector: Technology
Purpose: Generate text
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Accuracy/reliability; Transparency