ChatGPT is an impressive advance in artificial intelligence: it can understand many requests made in natural language and provide complex responses. Even so, it relies on a human giving it commands. What if that weren't necessary? Some developers are building tools for this purpose, such as Auto-GPT and BabyAGI.
Both Auto-GPT and BabyAGI appeared in a report on the site Ars Technica. They take slightly different approaches but pursue a common goal: an artificial intelligence capable of carrying out tasks without needing a human to give orders.
Auto-GPT tries to chain together multiple GPT-4 responses so that a task gets done with less user interaction. It runs locally as a Python script, or as a more limited web version called AgentGPT.
The program works as follows: you name your agent and describe a goal for it. From there, it breaks that goal into smaller tasks and sends them to GPT-4 one by one. In some cases, the response to one task is fed into the next.
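The chaining described above can be sketched as a simple loop: each step's answer is appended to the context for the next prompt. This is a minimal, illustrative sketch, not Auto-GPT's actual code; `ask_llm` is a hypothetical stand-in for a real GPT-4 API call.

```python
def ask_llm(prompt: str) -> str:
    """Placeholder for a call to a language model API (hypothetical stub)."""
    return f"(model response to: {prompt})"

def run_agent(name: str, goal: str, max_steps: int = 3) -> list[str]:
    """Break a goal into steps and feed each result into the next prompt."""
    # In Auto-GPT the model itself proposes the subtasks; here we just
    # simulate the chaining with a fixed number of steps.
    results = []
    context = f"Agent {name}, goal: {goal}"
    for step in range(1, max_steps + 1):
        prompt = f"{context}\nStep {step}: what should be done next?"
        answer = ask_llm(prompt)
        results.append(answer)
        # Chain: the response to this task becomes context for the next one.
        context += f"\nResult of step {step}: {answer}"
    return results

steps = run_agent("Shopper", "find a good pair of running shoes")
print(len(steps))  # 3
```

The key design point is that no human intervenes between iterations; only the initial name and goal come from the user.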
In the local version, Auto-GPT requires the user to provide an API key from OpenAI and another from Google Search. AgentGPT does not ask for these, but it terminates jobs early, since access limitations prevent it from running the agent for long.
In theory, the script could search the internet for a product and automatically pick one of the stores. It couldn't buy the item, though, since it has no tool for completing purchases.
BabyAGI works similarly, with local and web versions, the latter on a site called God Mode. The big difference from Auto-GPT is that you can choose to go step by step.
When I asked it to help me choose a productivity method for someone with short-deadline tasks, it gave me three steps: identify, evaluate, select. I could choose which of these steps to follow, or add new ones.
At each step, I had to approve the sequence and could add new instructions. In other words, it is more automatic, chaining tasks together, than autonomous, which would mean solving everything by itself.
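The approval flow described above amounts to a human-in-the-loop gate between steps. This is a hedged sketch of the idea, not BabyAGI's real code; `approve_fn` and `execute_fn` are hypothetical callbacks standing in for the user clicking "approve" and the agent doing the work.

```python
def run_with_approval(steps, approve_fn, execute_fn):
    """Run only the steps the user approves, in order."""
    executed = []
    for step in steps:
        if approve_fn(step):            # human decides whether to proceed
            executed.append(execute_fn(step))
        else:
            break                       # chain stops if a step is rejected
    return executed

# Example: the user approves the first two of three proposed steps.
proposed = ["identify methods", "evaluate methods", "select one"]
done = run_with_approval(
    proposed,
    approve_fn=lambda s: s != "select one",
    execute_fn=lambda s: f"done: {s}",
)
print(done)  # ['done: identify methods', 'done: evaluate methods']
```

This is what makes the tool automatic rather than autonomous: the loop halts whenever the human withholds approval.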
General AI is still just imagination
The name BabyAGI comes from AGI, short for "artificial general intelligence".
The term describes an AI that could perform practically any task, even without receiving specific training for it. The developers of Auto-GPT and BabyAGI say this is their goal.
Microsoft says it has seen hints of AGI-like behavior in GPT-4. Even so, such a model remains hypothetical.
What experiments like Auto-GPT and BabyAGI show is that, as impressive as GPT-4 is, it still has well-defined limits and can't stray very far from what it was trained to do.
An important point here is that LLMs (short for "large language models") still "hallucinate", the jargon for when an AI writes something that has no basis in reality.
When GPT-4 is made to link one task to another, it may hallucinate halfway through, with consequences for every step that follows on the way to the goal.
For now, what we do know is that existing AIs work better together with humans.
An MIT study showed that an AI architecture tool achieves better results when it receives human feedback to correct flaws at each step.
In other words: you still can't send an AI off on its own to buy a shoe or choose the best organization method for your needs.
With information: Ars Technica