More developers are coding with AI than you think, Stack Overflow survey finds


One of ChatGPT’s biggest claims to fame was the chatbot’s ability to code. Shortly after its release, people noticed that ChatGPT could perform advanced coding tasks such as debugging code.

As a result, developers, whose jobs rely heavily on coding, have adopted the technology. 

Also: Who owns the code? If ChatGPT’s AI helps write your app, does it still belong to you?

Stack Overflow’s 2023 Developer Survey polled over 90,000 developers to gather industry insights, including developer sentiments towards AI. 

Of those respondents, 70% said they use AI tools in their development process or plan to use them this year. Only 29.4% said they don't use such tools and don't plan to. 

The same survey question also showed that developers who are learning to code are more likely to use AI tools than professional developers (82% compared to 70%). This suggests the tools are valuable not only as learning aids but also in professional workflows, since most professional developers use them too. 

Also: How to use ChatGPT to write code 

In terms of personal sentiments towards AI, 77% of all respondents expressed that they have either a favorable or very favorable stance on using AI tools as part of their development workflow. 

Developers reported that the top use cases for AI tools in their workflows include writing code (83%), debugging and getting help (49%), documenting code (35%), learning about a codebase (30%), and testing code (24%).

Despite the positive sentiment and widespread adoption, many developers remain hesitant about the accuracy of these AI tools. 

Only 42% of respondents trust the accuracy of the output, while 31% are on the fence and 27% either somewhat or highly distrust it. 

Also: If you use AI-generated code, what’s your liability exposure? 

This distrust in results is likely rooted in the hallucinations that AI models are prone to: the incorrect output or misinformation that the models can generate at times. 

The consequences of these hallucinations can be as small as an incorrect answer or significant enough to get OpenAI sued. To get the most out of AI assistance while maintaining accuracy, it is best to have a human work in tandem with the AI.
