Google has launched a new secret project, Pitchfork, in which the company intends to train artificial intelligence to write and correct code. This could have serious implications for the future of the company and for the developers who write that code.
The project is part of Google’s broader generative artificial intelligence initiative, which uses algorithms to generate images, videos, code, and more.
The project originated at X, Alphabet's research division, but is now overseen by the Google Labs group, Habr reports. Google Labs, among other things, develops virtual and augmented reality projects.
Pitchfork is run by the AI Developer Assistance Team, led by Olivia Hatalsky, a longtime X employee who previously worked on Google Glass and several other projects.
The goal of Pitchfork is to create an AI tool that learns programming styles and writes new code based on that knowledge. The project was originally set up to update Google's Python codebase to a newer version of the language.
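To make that goal concrete, here is a minimal sketch of the kind of mechanical Python modernization such a tool would need to perform. The specific constructs shown are assumptions chosen for illustration, not details of Pitchfork itself.

```python
# Illustrative only: the sort of Python 2 -> Python 3 rewrite an automated
# code-updating tool would have to carry out. Names here are hypothetical.

# Legacy form (Python 2):
#     print "user %s logged in" % name
#     for key in data.iterkeys():
#         handle(key)

def handle(key):
    """Placeholder handler so the example runs on its own."""
    print("handling", key)

def log_logins(data, name):
    # print is a function in Python 3, and f-strings replace %-formatting
    print(f"user {name} logged in")
    # dict.iterkeys() was removed; iterate over the dict directly
    for key in data:
        handle(key)

if __name__ == "__main__":
    log_logins({"alice": 1, "bob": 2}, "alice")
```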
OpenAI already released a similar system, Codex, in 2021: it automatically converts simple English phrases into code and is based on GPT-3.
In June 2021, Microsoft and GitHub introduced the Copilot programmer assistant, built on the Codex neural network. The system is trained to work with various frameworks and programming languages. In August, an improved version of Codex was released that translates English phrases into program code.
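The prompt-to-code pattern behind these tools looks roughly like the sketch below: a developer writes an English description, and the model completes it with code. The completion shown is an assumed example of such output, not actual Codex or Copilot output.

```python
# Illustrative only: English description in, code out.

# Prompt (written by the developer):
#   "Return the n largest values from a list of numbers."

# Completion (the kind of code the model might generate):
import heapq

def n_largest(numbers, n):
    """Return the n largest values from a list of numbers."""
    return heapq.nlargest(n, numbers)

if __name__ == "__main__":
    print(n_largest([3, 1, 4, 1, 5, 9, 2, 6], 3))  # [9, 6, 5]
```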
Developers noticed that the neural network assistant reproduces lines from open source projects without complying with their original licenses. They also complained that Copilot generated dozens of lines of quoted code and comments from open source projects instead of just a few lines of code. GitHub clarified that Copilot does not typically reproduce exact code snippets but creates derivative works from previously received inputs, and the company claims this happens only 0.1% of the time.
NIXSolutions notes that GitHub later admitted that, when training Copilot, the developers used all the public code available in the service's repositories, regardless of license type. In November 2022, programmer and lawyer Matthew Butterick filed a lawsuit against Microsoft, GitHub, and OpenAI, alleging that Copilot violates the terms of open source project licenses and infringes on programmers' rights. The developer demanded $9 billion in compensation. It also turned out that about 40% of the code produced by Copilot contains errors and vulnerabilities.