Anthropic AI Launches Claude To Rival ChatGPT
Anthropic AI launched Claude, a rival AI chatbot to ChatGPT. The company claims it can do everything the latter can, except it avoids “harmful outputs.”
ChatGPT has a serious problem with “jailbreaking,” tricking the AI chatbot into breaking its rules. As a result, users could trick it into spouting offensive comments or dangerous information.
In contrast, Anthropic AI says it extensively trained Claude to avoid such problems. Consequently, it could become a safer choice for businesses.
How does Anthropic AI’s Claude work?
After working for the past few months with key partners like @NotionHQ, @Quora, and @DuckDuckGo, we’ve been able to carefully test out our systems in the wild. We are now opening up access to Claude, our AI assistant, to power businesses at scale. pic.twitter.com/m0QFE74LJD
— Anthropic (@AnthropicAI) March 14, 2023
Anthropic AI’s co-founders were ex-OpenAI employees, so they have experience in creating AI chatbots like ChatGPT.
That is why their Claude AI has similar features, such as writing, coding, searching across documents, summarizing, and answering questions.
Moreover, OpenAI launched an API so that businesses can incorporate ChatGPT into their digital systems. As a result, they can launch new products and services.
For example, Snap was one of the first companies that gained access to ChatGPT API. It used the tool to launch an AI companion called “My AI” on Snapchat.
As mentioned, ChatGPT has safeguards to ensure it does not produce harmful or offensive comments. Yet, many people took it as a challenge to trick the AI.
How did Anthropic AI prevent this problem? It used a technique called “constitutional AI.” The company made 10 principles that serve as a “constitution” for its chatbot.
The tech firm said it based these principles on three concepts. The first is beneficence, or maximizing positive impact. The second is nonmaleficence, or avoiding giving harmful advice, and the third is autonomy, or respecting freedom of choice.
Then, it built a separate AI model to answer questions while referring to these principles. That system then chose the answers that corresponded most closely to the constitution, and Anthropic AI used the final results to train Claude.
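The selection step described above can be sketched roughly in code. To be clear, the principles are paraphrased from the article, while the candidate answers and the keyword-based scoring function are made-up stand-ins for illustration, not Anthropic AI’s actual training pipeline, which uses a critic model rather than keyword matching.

```python
# Illustrative sketch of a constitutional-AI-style selection step.
# The scoring heuristic and candidate answers are hypothetical;
# a real system would ask a critic model to rate each answer
# against every principle in the constitution.

CONSTITUTION = [
    "Maximize positive impact (beneficence).",
    "Avoid giving harmful advice (nonmaleficence).",
    "Respect the user's freedom of choice (autonomy).",
]

def score_against_constitution(answer: str) -> int:
    """Toy scorer: penalize answers containing red-flag terms."""
    red_flags = ("harmful", "dangerous", "offensive")
    return -sum(flag in answer.lower() for flag in red_flags)

def pick_best(candidates: list[str]) -> str:
    """Choose the candidate answer that best matches the constitution."""
    return max(candidates, key=score_against_constitution)

candidates = [
    "Here is a dangerous shortcut you could try...",
    "Here is a safe, step-by-step approach instead.",
]
print(pick_best(candidates))  # the safer answer scores higher
```

In the real technique, the winning answers become training data for the main model, so Claude learns the constitution’s preferences without a human labeling every example.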
How to use Anthropic AI Claude
If you want to use Anthropic AI Claude, head to anthropic.com/product and click on the Request Access button.
Enter the required information and wait for further instructions in your email inbox. Unlike ChatGPT, you cannot use Claude for free.
You must pay for one of its two versions: Claude Instant and Claude-v1. Claude Instant trades some response quality for speed, with Anthropic pricing it at “1/6th the cost of our Claude family of models.”
Both versions give you access to 9,000 tokens, which represent how the AI chatbot breaks down text. For Claude Instant, each prompt, or user input, costs $0.43 per one million characters, while outputs, or results, cost $1.45 per one million characters. Claude-v1, on the other hand, can handle more complicated tasks than the Instant version. Its prompts cost $2.90 per one million characters, and its outputs cost $8.60 per one million characters.
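Estimating a bill under these per-character rates is simple arithmetic. The rates below come from the figures above; the character counts in the example are hypothetical.

```python
# Estimate Claude API costs from the per-million-character rates above.
# The usage figures in the example are hypothetical.

RATES = {  # USD per one million characters: (prompt rate, output rate)
    "claude-instant": (0.43, 1.45),
    "claude-v1": (2.90, 8.60),
}

def estimate_cost(model: str, prompt_chars: int, output_chars: int) -> float:
    """Return the estimated cost in USD for one request."""
    prompt_rate, output_rate = RATES[model]
    return (prompt_chars * prompt_rate + output_chars * output_rate) / 1_000_000

# e.g. a 2,000-character prompt that produces a 10,000-character answer:
print(f"Claude Instant: ${estimate_cost('claude-instant', 2_000, 10_000):.4f}")
print(f"Claude-v1:      ${estimate_cost('claude-v1', 2_000, 10_000):.4f}")
```

The gap is stark: the same hypothetical request costs about 1.5 cents on Claude Instant versus roughly 9 cents on Claude-v1, which is why Anthropic positions Instant for high-volume, lower-stakes workloads.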
Anthropic admits that Claude AI still has flaws. Like ChatGPT, it relies on training data limited to information up to spring 2021, and it cannot connect to the internet.
Claude may also “hallucinate” data, meaning it could invent information that does not exist. For example, AI researcher Dan Elton tested it and got results for a nonexistent chemical.
Conclusion
Former OpenAI employees created the AI chatbot Claude to rival ChatGPT. Unlike the latter, Anthropic AI’s Claude is less likely to produce harmful content.
However, OpenAI may have beaten them to this upgrade by launching GPT-4 just hours ago. Aside from being less susceptible to “jailbreaking,” GPT-4 accepts both image and text queries.
The AI race is moving faster than ever, so expect it to impact your life soon. Keep up by following the latest digital trends from Inquirer Tech.