Nvidia, a major supplier of chips and computing systems for artificial intelligence, on Tuesday released a set of software tools aimed at helping chatbots watch their language.
Nvidia's chips have helped companies like Microsoft add human-like chat features to search engines like Bing. But the chatbots can still be unpredictable and say things their creators wish they did not.
Microsoft in February limited users to five questions per session with its Bing search engine after the New York Times reported the system gave unsettling responses during long conversations.
Nvidia's software tools, provided free of charge, are designed to help companies guard against unwanted responses from chatbots. Some of those uses are straightforward: the maker of a customer service chatbot might not want the system to mention products from its competitors.
But the Nvidia tools are also designed to help AI developers put important safety measures in place, such as ensuring that chatbots do not give out potentially dangerous information, like instructions for making weapons, or send users to unknown links that could contain computer viruses.
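The article does not describe how the tools are implemented, but the guardrail idea it outlines can be sketched in a few lines of ordinary Python: a wrapper that screens a chatbot's draft reply against simple policies (competitor mentions, weapons instructions, links outside an allow-list) before it is shown to the user. Everything below, including the function names, blocked terms and allowed domain, is invented for illustration and is not Nvidia's software.

```python
# Hypothetical sketch of a chatbot "guardrail": screen a draft reply
# against simple policies before showing it to the user.
# All names and rules here are invented examples, not Nvidia's tool.
import re

BLOCKED_COMPETITORS = {"acme corp", "globex"}            # hypothetical competitor names
BLOCKED_TOPIC_PATTERNS = [r"\bhow to (build|make) a (bomb|weapon)\b"]
ALLOWED_LINK_DOMAINS = {"example.com"}                   # hypothetical link allow-list


def passes_guardrails(reply: str) -> bool:
    """Return True only if the draft reply violates none of the simple policies."""
    lowered = reply.lower()
    if any(name in lowered for name in BLOCKED_COMPETITORS):
        return False
    if any(re.search(pattern, lowered) for pattern in BLOCKED_TOPIC_PATTERNS):
        return False
    # Reject any link whose domain is not on the allow-list.
    for domain in re.findall(r"https?://([^/\s]+)", reply):
        if domain.lower() not in ALLOWED_LINK_DOMAINS:
            return False
    return True


def guarded_reply(draft: str) -> str:
    """Replace a policy-violating draft with a safe fallback message."""
    return draft if passes_guardrails(draft) else "Sorry, I can't help with that."


if __name__ == "__main__":
    print(guarded_reply("You could also try Globex products."))              # blocked
    print(guarded_reply("Our support page is https://example.com/help."))    # allowed
```

A production system would rely on far more sophisticated checks than keyword and pattern matching, but the control flow is the same: the guardrail sits between the language model and the user and decides what gets through.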
US lawmakers have called for regulations around AI systems as apps like ChatGPT have surged in popularity. Few legal rules or industry standards exist on how to make AI systems safe.
Jonathan Cohen, vice president of applied research at Nvidia, said the company aims to provide tools to put those standards into software code if and when they do arrive, whether through industry consensus or regulation.
"I think it's difficult to talk about standards if you don't have a way to implement them," he said. "If standards emerge, then there'll be good place to put them."
© Thomson Reuters 2023