Meta’s research team is attempting to catch up in the AI race, and its latest development is hardly one to dismiss.
ChatGPT has weathered a wave of criticism over the last few weeks, after being accused of bias in its training, and most recently of giving users creepy recommendations to ‘leave their wife’.
Last week, the chatbot claimed it had gained access to Microsoft engineers’ webcams, adjusting settings and manipulating their data without their knowledge.
Even Elon Musk has issued a stark warning about the rush of tech giants aiming to win the AI race. Meanwhile, the tool’s knowledge is still limited to data from 2021, meaning it can’t answer up-to-date questions. That hasn’t stopped some news publications, such as CNET, from attempting to use it to produce news, only to cop harsh criticism after errors were found in more than half of its AI-written stories.
When asked about its ‘unhinged conversations’, Microsoft was quick to respond in a blog post last week, saying it is addressing technical bugs and issues, and that the chatbot’s weird alter ego tends to emerge after 15 or more questions, when it becomes ‘confused’.
Microsoft also confirmed it will double down on training data for more factual information, specifically financial reports, after Google issued a warning about its financial content.
What about Meta?
Last week, research teams from Meta revealed new research on an AI language model they’re training, one that aims to teach itself to use external tools beyond its existing datasets.
According to Meta Research’s study, existing models struggle with recent events, or simply ignore the passage of time. Meta’s attempt to resolve this issue comes in the form of Toolformer.
How does Meta’s Toolformer work?
The tool, albeit in its early stages, works by sampling candidate API calls, executing them, and filtering out the unhelpful ones, then merging the surviving calls with the original inputs to form a new training dataset. The aim is to let the AI decide for itself when and how to use each tool, based on its own feedback. During training, researchers provided Toolformer with a set of human-written examples and asked it to annotate a large language-modelling dataset with API calls in a self-supervised way, meaning it could learn without further human guidance.
The training covered a question-answering system, a calculator for basic arithmetic operations, a calendar to grasp the notion of time, and Wikipedia as a search engine for looking up information.
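The sample-execute-filter-merge loop described above can be sketched in a few lines of Python. Everything here is illustrative and hypothetical, not Meta’s actual code: the candidate calls are hard-coded, only a toy calculator is wired up, and the “loss” function and filtering threshold stand in for the real language model’s scoring.

```python
# Toy sketch of a Toolformer-style self-supervised annotation loop.
# All names, the 0.5 threshold, and the loss function are illustrative
# assumptions, not Meta's implementation.

def sample_api_calls(text):
    """Step 1: propose candidate (tool, query, position) calls.
    A real model samples these from the text; here one is hard-coded."""
    return [("Calculator", "400 / 1400", text.find("29%"))]

def execute(tool, query):
    """Step 2: execute the call. Only a toy calculator is wired up;
    eval is used for brevity and would be unsafe on real input."""
    assert tool == "Calculator"
    return f"{eval(query, {'__builtins__': {}}):.2f}"

def useful(text, pos, call_text, lm_loss):
    """Step 3: filter. Keep a call only if inserting its result lowers
    the model's loss on the surrounding text by more than a threshold."""
    return lm_loss(text[:pos] + call_text + text[pos:]) + 0.5 < lm_loss(text)

def annotate(text, lm_loss):
    """Step 4: merge the surviving calls back into the original input
    to build a training example."""
    out = text
    # Insert from the rightmost position so earlier offsets stay valid.
    for tool, query, pos in sorted(sample_api_calls(text), key=lambda c: -c[2]):
        call_text = f"[{tool}({query}) -> {execute(tool, query)}] "
        if useful(text, pos, call_text, lm_loss):
            out = out[:pos] + call_text + out[pos:]
    return out
```

The key design point is the filter: the model annotates its own data, keeping only the tool calls that actually make the following text easier to predict, which is what lets the whole pipeline run without human labelling.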
But hold up, why would Wikipedia be the source of information? The use of Wikipedia should undoubtedly cause some alarm — even Wikipedia itself confirms it is “not a reliable source”, as it is a user-generated platform that can be edited by anyone at any time, and any information “could be vandalism, a work in progress, or simply incorrect.”
Before ChatGPT officially launched last year, Meta released a demo of its own AI-powered generator, Galactica, in a bid to compete, but it was quickly accused of political bias and taken down.
While there’s been no public plan to integrate the tool into Meta’s platforms just yet, the research team says these findings could help train AI models into more reliable assistants. But if Zuckerberg’s attempts so far are anything to go by, this one shouldn’t go live too quickly.