GPT-3 may just be an autocomplete program, but it is not as simple as it sounds. The program took years to develop, and unlike regular software, it utilizes deep learning to find patterns in data. It uses 175 billion parameters to complete its tasks, and the data it was trained on is mammoth, to say the least. To give you an idea, the English Wikipedia, which contains 6 million articles, makes up only 0.6% of its training data.
GPT-3 is currently being used as a question-based search engine, a chatbot that allows you to talk to historical figures, and recently, an AI philosopher [read OpenAI’s Philosopher AI Will Answer Existential Questions But Will Say Nothing On Controversial Issues].
Editor’s Note: In a tweet, OpenAI CEO Sam Altman said, “The GPT-3 hype is way too much. It’s impressive…but it still has serious weaknesses and sometimes makes silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse.”
It is true that GPT-3 still has a long way to go before it can make autonomous decisions, but both the potential and the pitfalls of the technology are plain to see.