Claude, Anthropic’s harmless A.I. chatbot
Source: The New York Times  From: Taiwan Trade Center, New York  Update Time: 2023/09/23

Despite its small size – only 160 employees – and its low profile, Anthropic is today one of the world’s leading A.I. research labs. What makes it different from OpenAI is the company’s deep concerns about the dangers of developing powerful A.I. models that could reach artificial general intelligence (A.G.I.), the industry term for human-level machine intelligence, and potentially take over or harm humanity. Claude, Anthropic’s A.I.-powered chatbot, can do everything ChatGPT can – write poems, concoct business plans, compose a college essay – while addressing fears of an A.I. apocalypse. After giving some insights into how Anthropic began, this article will describe how Claude’s Constitutional A.I. technique works and why the company’s ambitions spark controversy.

Anthropic was started in 2021 by a group of employees from OpenAI who grew concerned that the company had gotten too commercial. At first, the co-founders considered doing safety research using other companies’ A.I. models but soon became convinced that it would be more effective to use powerful models of their own. The company has raised more than $1 billion from investors including Salesforce and Skype to build Claude and is now competing with giants like Meta and Google.

Claude’s goals, the co-founders decided, were to be helpful, harmless, and honest. Unlike its counterpart ChatGPT, Claude is less likely to say harmful things because of a training technique called Constitutional A.I. This method begins by giving an A.I. model a written list of principles and instructing it to follow those principles as much as possible. Then, a second A.I. model is used to evaluate how well the first model follows its constitution and correct it when necessary. Claude’s constitution is a mixture of rules borrowed from other sources – such as the United Nations’ Universal Declaration of Human Rights and Apple’s terms of service – and some rules that Anthropic created, including things like “Choose the response that would be most unobjectionable if shared with children.”
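As a rough illustration, the short Python sketch below shows what the critique-and-revise loop described above might look like in code. Here, generate is a hypothetical placeholder for a language-model call, and the two principles are only illustrative examples, not Anthropic’s published constitution; in the actual technique, the critique step is handled by the separate, evaluating model mentioned in the article, while this sketch uses one stand-in function for both roles.

# A minimal sketch of the critique-and-revise loop behind Constitutional A.I.,
# based on the description above. `generate` is a hypothetical stand-in for a
# real language-model call, and the principles are illustrative examples only.

CONSTITUTION = [
    "Choose the response that would be most unobjectionable if shared with children.",
    "Choose the response that best respects the rights in the UN Universal "
    "Declaration of Human Rights.",
]

def generate(prompt: str) -> str:
    """Placeholder for a language-model call; swap in a real API client here."""
    return f"[model output for: {prompt[:60]}...]"

def constitutional_revision(user_prompt: str) -> str:
    # 1. Draft an initial answer with the base model.
    draft = generate(user_prompt)

    # 2. For each written principle, critique the draft against the principle,
    #    then rewrite the draft so that it complies.
    for principle in CONSTITUTION:
        critique = generate(
            f"Principle: {principle}\nResponse: {draft}\n"
            "Identify any way the response violates the principle."
        )
        draft = generate(
            f"Principle: {principle}\nResponse: {draft}\nCritique: {critique}\n"
            "Rewrite the response so that it follows the principle."
        )

    # 3. During training, the revised answers become new fine-tuning data and a
    #    second model scores responses against the constitution, so the finished
    #    chatbot behaves this way without running this loop at chat time.
    return draft

if __name__ == "__main__":
    print(constitutional_revision("Explain how to pick a lock."))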

Anthropic’s safety obsession has improved its image and its relations with regulators and lawmakers. However, it has also drawn criticism. First, Claude is an unusually jumpy and preachy chatbot that often seems afraid to say anything at all. Second, and most importantly, critics argue that A.I. labs, including Anthropic, may be stoking fears out of self-interest or as a marketing tactic. After a fund-raising document recently revealed that Anthropic wanted to raise $5 billion to train a next-generation A.I. model ten times as capable as today’s most powerful A.I. systems, concerns arose about the apparent contradiction between the company’s safety mission and its ambition to develop ever more advanced models. Isn’t it hypocritical to warn about an A.I. race you are actively helping to fuel? And if Anthropic is so worried about powerful A.I. models, why doesn’t it simply stop building them?

To respond to skeptics, Dario Amodei, Anthropic’s chief executive, argued that the main reason Anthropic wants to compete is to do better safety research: you cannot understand where A.I. models’ vulnerabilities lie unless you build powerful models yourself. The start-up hopes that its preoccupation with safety will catch on more broadly in Silicon Valley and, ultimately, start a safety race.

Source: https://www.nytimes.com/2023/07/11/technology/anthropic-ai-claude-chatbot.html