The AI chatbot Claude from start-up Anthropic has received an update that allows it to handle 100,000 “tokens” of context in a single conversation, which is equivalent to about 75,000 words. The bot could thus ‘read’ an entire book in less than a minute, says the start-up.
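The 100,000-token / 75,000-word equivalence implies a rule-of-thumb ratio of about 0.75 words per token. A minimal sketch of that conversion (the ratio is a rough heuristic, not an exact property of any tokenizer):

```python
def tokens_to_words(tokens: int, words_per_token: float = 0.75) -> int:
    """Estimate word count from a token budget using a rule-of-thumb ratio."""
    return int(tokens * words_per_token)

def words_to_tokens(words: int, words_per_token: float = 0.75) -> int:
    """Estimate how many tokens a text of a given word count consumes."""
    return int(words / words_per_token)

# 100,000 tokens ≈ 75,000 words, matching the figures in the article.
print(tokens_to_words(100_000))  # 75000
print(words_to_tokens(75_000))   # 100000
```

Actual token counts depend on the tokenizer and the text; this only reproduces the article's back-of-the-envelope figures.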
Before this, the bot had a limit of 9,000 tokens. By comparison, OpenAI’s standard GPT-4 model currently has a context limit of about 8,000 tokens, while the largest version supports 32,000. Anthropic tested the increase by feeding Claude the entire text of The Great Gatsby with one sentence modified, then asking it which sentence had been changed. The bot found the alteration within 22 seconds, says the start-up.
According to Anthropic, this allows companies to have hundreds of pages of material analyzed by Claude. The start-up suggests it can be used, for example, to summarize up to about six hours of transcribed audio, or to weigh the pros and cons of a particular piece of legislation. It also allows conversations to go on for several hours or even days before the context limit is reached. The new version of the chatbot is currently only available to Anthropic’s business partners via the API.
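To see why roughly six hours of transcribed audio fits in the window, assume a typical speaking rate of about 150 words per minute (an assumption of this sketch, not a figure from the article) and the ~0.75 words-per-token heuristic:

```python
def transcript_tokens(hours: float,
                      words_per_minute: float = 150.0,
                      words_per_token: float = 0.75) -> int:
    """Estimate the token count of a transcribed audio recording."""
    words = hours * 60 * words_per_minute
    return int(words / words_per_token)

# Six hours of speech at ~150 wpm is about 54,000 words,
# or roughly 72,000 tokens — within a 100,000-token context window.
print(transcript_tokens(6))  # 72000
```

Faster speakers or denser tokenization would push the estimate up, but it stays comfortably under the new limit.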
Earlier, Google parent company Alphabet invested $300 million in the start-up, though further details of that investment have not been disclosed. The start-up did say it wants to partner with Google Cloud to run its artificial intelligence systems.