In this April 23, 2021 article for Wired, Will Knight explores the latest capability acquired by GPT-3: writing code based on nothing more than a short description of what the code should do [to learn about GPT-3, read GPT-3: THE IMPRESSIVE BABY STEPS TOWARDS AGI].
According to Furkan Bektes, founder and CEO of SourceAI, “While testing the tool, we realized that it could generate code… That’s when we had the idea to develop SourceAI.” Meanwhile, another company, DeBuild, plans to commercialize a GPT-3-based technology that can create custom web apps.
Unlike DeBuild, however, SourceAI wants its users to be able to create a wider range of programs in different languages. The company says this will automate much more of software creation. As Bektes puts it, “Developers will save time in coding, while people with no coding knowledge will also be able to develop applications.”
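To illustrate the kind of task these tools target, consider a one-line description such as “return the n-th Fibonacci number.” The snippet below is our own hypothetical sketch of the sort of code such a tool might produce from that description; it is not actual output from SourceAI or GPT-3.

```python
# Hypothetical illustration only: code of the kind an "auto programming"
# tool might generate from the plain-English prompt
# "return the n-th Fibonacci number".
def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number, with fibonacci(0) == 0."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

The point of tools like SourceAI is that a user would write only the one-line description; the function body, naming, and edge-case handling would all come from the model.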
Editor’s Note: While we believe in empowering people, the problem we see with SourceAI is its potential to proliferate known biases in computer algorithms [see THE BIASES THAT CAN BE FOUND IN AI ALGORITHM and ARE WE PASSING ON OUR BIASES TO ROBOTS?]. Tech experts identified these AI biases years ago, yet no solution has been found. The explosion of software produced by this “auto programming” system could make it impossible for programmers to debug code once these biases begin to compound.
That last point reminds us of Nick Bostrom’s book Superintelligence: Paths, Dangers, Strategies. Perhaps now is a good time to revisit it.