Two years ago, in my article “The Pros and Cons (and Ethics) of Using AI Content Tools for Writing,” I opined on the future of generative AI for writers and discussed how writers might use it ethically, if they should use it at all. At the time, tools like Jasper AI (formerly Jarvis) were new technologies just hitting the mainstream, and many bloggers and SEO bloodhounds were taking advantage of their ability to produce quick but bland articles that summarized topics without adding anything creative or thought-provoking to them.
When I wrote that article, I was merely speculating on what could become of the practice of using AI to generate written content. I had no real reference for what would actually come.
In late 2023, I have a stronger picture of the real pros and cons of generative AI for writers (and other creators), thanks to the mainstreaming of tools like ChatGPT and Jasper AI and the numerous art, music, and video generators coming onto the scene. My opinion of it is much more optimistic today, even if the AI writers are still in the habit of writing uninspiring content.
For example, tools like ChatGPT and Recast are powerful summarizers of shortform content, built to give users mostly solid answers to questions that could otherwise take hours of their own research to answer. Recast, in particular, is an interesting addition to the AI content summary space because it reinterprets web articles as two- to seven-minute podcasts between “knowledgeable” speakers, using a ChatGPT architecture for the summary and AI voices for the podcasters. This makes it easier for “readers” to consume an article’s main points while they’re driving or making breakfast and gives them enough of an overview to decide whether the rest of the article is worth their time.
I’d even “recasted” my own article “A Tale of Two Vikings Movies” to test it out and found the results impressive.
Here is the original article.
And here is the Recast version.
Of course, I did this back when the service was free to use on any article, not just the ones from the approved list of top publishers. Now that it’s also got a paid “Pro” version, I’d have to spend a hundred dollars a year to add new recasts of my content, and my budget isn’t ready for every cool new toy on the block just yet.
Nevertheless, the fact that such technology exists supercharges my imagination, and I’m glad AI has given us new ways to explore old content and refresh tired ideas into something exciting again.
That brings me to Claude, a longform content analyzer and ChatGPT competitor.
Claude, Like ChatGPT, but Better (Mostly)
Most of us are aware of ChatGPT by OpenAI. For the rare uninitiated, ChatGPT is an advanced chatbot trained to answer questions on almost any topic, provided the information was available as of September 2021 and doesn’t trigger the robot’s sensitivity buttons or push against its moral, politically correct leanings.
Claude by Anthropic is similar to ChatGPT in its ability to synthesize information the user feeds it and apologize for information it gets wrong. But it does so on a newer training model (it knows events from 2023) and with a much higher token limit.
Quick Note: All generative AI models, from text to art to music, operate on a “token” system. A token is the computer’s processing currency, and one token generally covers about four characters of text. This is why ChatGPT and other older generative AI models give short answers. Most are capped at 4,096 tokens per exchange, a limit that covers the characters in both the question and the answer. So the practical ceiling for most generative text models is about 3,000 words per generation. Some push this limit further, but the standard is still best suited to short answers or short content. As of this writing, tools like ChatGPT either struggle with longer works or simply cannot process them.
Claude by Anthropic boasts a healthy 100,000 tokens per conversation, or about 75,000 words of text processing. For the attention-deprived, this means Claude can analyze entire books (of typical feature length), supply summaries and specific details, and answer most questions about the content.
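To see how that arithmetic shakes out, here’s a rough back-of-envelope sketch in Python, assuming the approximate four-characters-per-token and 0.75-words-per-token rules of thumb described above (real tokenizers vary by model, so treat these as estimates, not guarantees):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Estimate how many tokens a piece of text will consume,
    using the rough ~4 characters-per-token rule of thumb."""
    return round(len(text) / chars_per_token)

def fits_in_context(word_count: int, token_limit: int = 100_000,
                    words_per_token: float = 0.75) -> bool:
    """Check whether a manuscript of a given word count fits
    within a model's context window."""
    return word_count <= token_limit * words_per_token

# A 4,096-token model tops out around 3,000 words per exchange:
print(fits_in_context(3_000, token_limit=4_096))   # True, just barely

# Claude's 100,000 tokens cover roughly 75,000 words:
print(fits_in_context(75_000))   # True: right at the stated ceiling
print(fits_in_context(90_000))   # False: a 90,000-word novel won't fit
```

This is also why a novel-length manuscript needs splitting before Claude can read it, a limitation I ran into myself (see the update at the end of this article).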
It’s also useful for helping fiction writers better understand their stories, which I’ll get into in just a minute.
But the user does not have to copy and paste the epic content as he would for the free version of ChatGPT. By simply uploading a PDF or text document to the chat box, the user can ask Claude to “read” the entire text. This is especially useful for summarizing complex documents or creating matrices out of the material inside, as Claude effectively becomes an expert on the content.
Note: This “expertise” is simulated and subject to the same errors and “hallucinations” that ChatGPT and similar generative models occasionally demonstrate. More on that in a moment.
Now, most of us might assume Claude’s power is best suited for the technology and science writers out there, considering its strength is in summarizing dense material, and what content is more subject to summary than a piece of intelligently written nonfiction?
But Claude is better than that. Claude’s got a mean “book club member” streak about him, if you give him a chance to flex that muscle.
To demonstrate, you can watch me feed Claude a copy of my novella Gutter Child and ask it follow-up questions about the story’s characters, locations, and story categories.
Or you can read the transcript below of my similar questions about my short story “Amusement,” a story that you can read for free in its entirety on this website.
Note: The “Amusement” PDF asks questions beyond those asked in the Gutter Child demonstration on YouTube.
Claude’s Breakdown of Gutter Child and “Amusement”: An Analysis
According to Anthropic’s FAQ about the Pro version of Claude (a $20/month membership), Claude has to re-read any document you’ve fed it to answer any question you’ve asked it, which can tax its resources rather quickly if you ask it a bunch of new questions about the same document. So, whether you’re on the free plan (about five questions every four to eight hours) or the paid plan (5x the free plan), asking individual questions can use up your quota much faster than if you just ask it several questions in the same prompt.
However, what Anthropic doesn’t tell you (that I’ve found) is that single questions still get better answers.
Take the “Amusement” results, for example. My first question borrows the four individual questions I’d asked in my original Gutter Child test (again, watch the video if you’ve missed it), lumps them all into one prompt, and adds three more questions for good measure.
Claude’s answer to this seven-layered question is sufficient but dull. Compared to the answers it had provided for Gutter Child a few days earlier, the lumped answers for “Amusement” were next to empty.
So, I broke up the layers and used my remaining free questions (of that particular eight-hour block) to ask Claude to analyze “Amusement” the same way I’d asked it to summarize Gutter Child. The results were better, though not dramatically so.
Look at the character question as an example:
Question: Could you please give me a list of every character in this story and tell me what their roles are in the story. Could you also explain their relationship to the main character?
Claude (as part of the seven-layered lump sum):
– Sammy McGuinness – Main character, businessman
– Mr. Chip – Sammy’s boss
– Parking garage attendant – Tries to charge Sammy more to park
– Ticket collector – Won’t let Sammy into the park without a ticket
– Kids and young adults – Harass Sammy while he’s in the Bubby costume
– Waitress – Offers food at the diner
– Nippy the Cat – Main cartoon character that interacts with Sammy
– Other Wacky Fuzzies characters – Supporting cartoon characters
Claude (answering just those two questions):
Here is a list of the characters and their roles/relationships:
Sammy McGuinness – Main character, businessman going to a meeting at an amusement park.
Mr. Chip – Sammy’s boss, wants Sammy to experience the amusement park for marketing research.
Parking garage attendant – Tries to overcharge Sammy for parking, gets in an argument with him.
Ticket collector – Won’t let Sammy into the park without a ticket, argues with Sammy.
Kids and young adults – Bother Sammy while he’s wearing the Bubby mascot costume, no direct relationship.
Waitress – Offers food at the Wacky Walnut diner, no direct relationship.
Nippy the Cat – Main cartoon character, tries to get Sammy to be happy in the cartoon world, adversarial relationship.
Other Wacky Fuzzies characters – Supporting cartoon characters, no direct relationship.
Bubby the Bear – Missing cartoon character that Sammy is meant to replace.
As you can see, the individual question gets a better answer than the one sandwiched inside the bigger question. So, right off the bat, we’d need to anticipate the quality of Claude’s answer before we use the credits to ask the question. If we just want short snapshot answers to our questions, then lumpy questions are adequate. But if we want richer answers, or at least those with more thoughtful or detailed outputs, then we need to ask individual questions with detailed prompts.
But just like ChatGPT and other generative AI models, Claude does occasionally get it wrong.
Take the location questions for both my Gutter Child and “Amusement” transcripts.
In both examples, Claude got most of the locations by chapter correct, but it hallucinated the DNA testing center in Gutter Child (mentioning it in Chapter 4 when it doesn’t appear until about Chapter 10) and the locker room in “Amusement” (mentioning it in Part 6 when it actually appears in Part 2). In both cases, it also ignores the macro locations (the university in Gutter Child—in fairness, the only name we gather from the story is that its initials are PP, so I can’t fault Claude for not knowing it—and Happy Fun Land in “Amusement”). For a question about locations, I’d want it to identify the prime location as much as the specific places within that location. That said, I appreciate that it tries to identify scenes by their specific compartments, as that makes scene tracking easier on the writer.
Of course, if I’d asked it for macro-locations, I wonder if it would’ve also mentioned Happy Fun Land (in the “Amusement” query).
More impressive than the “underlined” content it can scour, however, is Claude’s ability to interpret the text for ideas, themes, and, in the case of the specific question I’d asked it, genre specifications.
Claude understands genre. It understands what makes a story a thriller. It knows the tropes of horror (I assume). And it correctly identified Gutter Child as a coming-of-age story and “Amusement” as a satire.
For writers who have difficulty classifying their works, this ability is virtually miraculous. Claude takes the guesswork out of the genre-defining process.
And these are just the features I’ve discovered about Claude from using its free plan a few times. I’m sure a Claude power user could milk a lot more out of it.
For example, a user might want to ask Claude to identify the story beats in his manuscript that follow the hero’s journey. Whether Claude can figure this out would be an interesting experiment, but a valuable one if it gets it right. The writer could then map out those beats on a spreadsheet and use the resulting line to decide if the pacing is working.
Likewise, the writer could ask Claude to identify where key characters appear in the story to see how important they are versus how important they should be. If Claude’s story analysis is strong enough, it might even recommend a character’s promotion or demotion among main, major, minor, and walk-on parts. This would help the writer decide who should keep their roles, who should become composites of others, and who should leave the story entirely to improve pacing.
In essence, the writer could use Claude’s analysis of his story to better understand what his story is doing and whether it follows a known or necessary structure.
The main limitation right now is the allotment of credits. Claude’s free plan is sufficient for casual users but a waiting game for those who need more information in a hurry. And even the paid plans are limiting if the user has a lot of questions to ask, though Anthropic insists that four out of five users never run out of questions within the usage window. My take is that the free plan resets often enough to justify staying on it, especially if you’re analyzing just one story.
But everyone has different needs. Some users may need to play the lawyer card when analyzing their stories. In this case, the paid plan might be worth it, especially if their questions are long, complex, or both.
It’s also worth remembering Claude’s processing limits. Compared to ChatGPT, Claude is a beast (and very much like the X-Men’s Beast if you know your superhero pop culture). But it’s not unlimited. It’s unlikely Claude will be able to process your 1,000-page tome, so larger works may need splitting into several smaller chunks. It’s also worth noting that Claude can accept up to five files at a time but has a combined file limit of 10 MB (suitable for most PDFs). This is useful if you need it to search a collection of works, not just one, as long as those works are smaller. And remember that Claude reads text files and PDFs. It does not read Word documents (as of this writing).
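Those limits are easy to trip over, so a quick sanity check before uploading can save a wasted attempt. Here’s a minimal sketch in Python using the limits described above (the function and its messages are my own invention for illustration, not part of any official Claude tool):

```python
import os

MAX_FILES = 5                           # Claude accepts up to five files at a time
MAX_COMBINED_BYTES = 10 * 1024 * 1024   # ...with a combined limit of 10 MB
ALLOWED_EXTS = {".pdf", ".txt"}         # PDFs and text files; no Word docs

def check_upload(paths: list[str]) -> list[str]:
    """Return a list of problems with a proposed batch of uploads.
    An empty list means the batch looks safe to try."""
    problems = []
    if len(paths) > MAX_FILES:
        problems.append(f"too many files: {len(paths)} > {MAX_FILES}")
    total_bytes = 0
    for path in paths:
        ext = os.path.splitext(path)[1].lower()
        if ext not in ALLOWED_EXTS:
            problems.append(
                f"unsupported type: {path} (convert Word docs to PDF or .txt)")
        total_bytes += os.path.getsize(path)
    if total_bytes > MAX_COMBINED_BYTES:
        problems.append(f"combined size {total_bytes} bytes exceeds 10 MB")
    return problems
```

A writer batching several short stories for comparison could run this over the folder first, then split or convert anything the check flags before feeding it to Claude.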
So, what do you think of Claude? Are you the type of writer who would commission it to read all your works and give you feedback? Or would you be one of those SEO-flippers who’d rather have Claude just write the whole thing for you and figure out whether it’s done a good job after Google starts crawling it for the search engines?
Leave your comments below if you want.
Epilogue: After writing this article, I went back to Claude and asked it to craft a synopsis and book blurb for “Amusement.” I’ve included the extra content in the transcript file above. Also, Microsoft seems to be in the process of adding a similar file search feature to its Microsoft Office suite, so at some point you may find yourself in a position to compare search results between Claude and Microsoft. In short, the world is crazy.
Additional Note: If you decide to use Claude for story analysis, it would be best to feed it a manuscript with just the title and content. Including unnecessary front or back matter would just waste the tokens required to process it. That said, I did not remove any content for either the Gutter Child or “Amusement” analyses. But I would remove it for The Computer Nerd should I attempt to ask Claude to process a 90,000-word novel.
The Computer Nerd Update: When I uploaded the PDF for The Computer Nerd, Claude informed me that the document was 41% over the message limit. So, it would be difficult to analyze a complete work over 50,000 words as of Claude 2, the current generation of Claude. I imagine Claude 3 will be able to handle it, however, since it seems to be Anthropic’s goal to increase the tokens available for processing. If you’re interested in using Claude for this purpose, then I’d keep watch on its progress.