4 things Claude AI can do that ChatGPT can’t


Users can not only add documents to Claude, but can also drop links into the chat and have it summarize their contents. Anthropic warns that Claude might hallucinate content when links are input into the chat, which I found to be true about a third of the time.
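The article tests this through Claude's chat interface, but for readers curious what the same workflow looks like programmatically, here is a minimal sketch using Anthropic's Python SDK, where you fetch the article text yourself and ask Claude to summarize it. The model name, prompt wording, and fetching approach are assumptions for illustration, not details from the article, and the chat UI's own link handling works separately from this.

import anthropic
import requests

def summarize_article(url: str) -> str:
    # Fetch the page yourself; the Messages API does not follow links,
    # so the page text is passed in as part of the prompt.
    # A real version would strip HTML and truncate very long pages.
    page_text = requests.get(url, timeout=30).text

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # assumed model name for illustration
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": f"Summarize the main points of this article:\n\n{page_text}",
        }],
    )
    return message.content[0].text

print(summarize_article("https://example.com/some-article"))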

The example below shows how Claude "summarized" a Wall Street Journal article when I gave it just the link (left) and when I gave it the text I accessed with my subscription (right). This test surfaced two striking issues: major hallucinations, and the possibility of accessing paywalled content.

The article discusses controversies involving restaurant tipping in Chicago, where new regulations are being considered. Claude's major hallucination was placing the debate in New York City instead of Chicago, which changes the main subject of the article.


The fact that Claude appears able to access content through links raises the question of whether it can bypass paywalls, a capability that could infringe copyright law. To test this, I gave Claude paywalled links, and it produced a summary similar to the one it gave from the copied text, albeit with hallucinations.

The bottom line is that I wouldn't trust the summaries Claude generates from links, as the results can be wildly inaccurate. However, summarization is a major feature that Anthropic could improve and build upon, as long as restrictions to respect paywalled content are in place.


