Creating and testing code at the touch of a button through generative artificial intelligence (AI) models, such as GitHub Copilot or ChatGPT, almost seems too good to be true. So good, in fact, that there has to be a catch.
While software professionals are embracing AI as a power tool to build, launch, and update applications, there is also nervousness about its intellectual property and security implications. Is that AI-generated code scraped from someone else’s intellectual property? Does the model draw on internal corporate data that should be kept secure?
Technologists recognize that AI adoption requires attention to rights, privacy, security, productivity, and training, according to a GitLab survey of 1,001 developers and executives, conducted in June.
The majority of respondents (79%) expressed concern about AI tools having access to private information or intellectual property, with the chief worry being that sensitive information, such as customer data, could be exposed.
Copyright tops the list of worries about using AI-generated code. Close to half of respondents (48%) were concerned that code generated with AI might not be subject to the same copyright protection as human-written code, and another 39% worried that such code could contain security vulnerabilities.
Still, technologists are optimistic that these issues can be worked through, and they continue to forge ahead. Among respondents whose organizations are using AI in software development today, 90% felt confident using AI in their daily tasks at work. In addition, 60% said they use AI daily, and 22% said they use it several times a week. More than half (51%) rated their organization's efforts to incorporate AI into the software development lifecycle as "very" or "extremely" successful.
AI is seen as an important investment from a software development perspective. Among respondents whose organizations are using AI or plan to in the future, 83% said they have or will have budget specifically allocated to AI for software development. Benefits cited included improved efficiency (55%), faster cycle times (44%), and increased innovation (41%).
Training and skills also emerged as a common theme among the obstacles and concerns respondents identified. Fully 81% said they need more training to use AI at work, and 87% said organizations will need to reskill employees to adapt to the changes AI will bring. The top skills-related concern was that AI will introduce a whole new set of skills to learn (42%), followed by a lack of the appropriate skill sets to use AI or interpret its output (34%).
The bottom line is that AI cannot replace human oversight and innovation. More experienced professionals “accept AI as a supportive tool for skill development, but don’t think it can completely replace the expertise, knowledge, and problem-solving of seasoned professionals like themselves,” the survey’s authors assert.
“Ultimately, it comes down to more than simply human versus machine. Leveraging the experience of human team members alongside AI is the best — and perhaps only — way organizations can fully address the concerns around security and intellectual property.”
AI might be able to generate code more quickly than a human developer, “but a human team member needs to verify that the AI-generated code is free of errors, security vulnerabilities, or copyright issues before it goes to production,” they said.