How to get better results from Claude

This section introduces you to Claude, covering the basics of your first conversation and essential features. Understanding these fundamentals will set the stage for maximizing your interactions with Claude.

5 audio clips · 4:19


Common problems when working with Claude and how to solve them

0:56
When you start working with Claude, you will inevitably encounter situations where the response doesn't match your expectations. This is normal, and it's an opportunity to improve your approach. Here are the most common problems. If the answer is too general, the request lacks context: add details about the audience, role, or constraints. Instead of "write a letter about a project delay," try "write a letter to a corporate client explaining a two-week delay, considering this is already the second delay." If the answer is too long or too short, Claude is simply guessing the required length; specify it explicitly: "give a two-paragraph summary" or "I need a detailed analysis, length doesn't matter." If Claude doesn't follow the required format, show an example: "use bulleted lists with headings." If the tone is off, describe the one you want: "make it conversational" or "I need an authoritative formal style." If you receive confident-sounding but incorrect information, verify important facts independently and ask Claude to cite sources.
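The fixes above share one mechanism: attach the missing constraint to the request. This can be sketched as a tiny prompt builder; the helper name and field choices are illustrative, not part of any official API.

```python
# Minimal sketch: turn a vague request into a specific one by attaching
# context, audience, format, and length constraints. Purely illustrative.

def build_prompt(task: str, context: str = "", audience: str = "",
                 fmt: str = "", length: str = "") -> str:
    """Compose a prompt from a task plus optional constraints."""
    parts = [task]
    if context:
        parts.append(f"Context: {context}")
    if audience:
        parts.append(f"Audience: {audience}")
    if fmt:
        parts.append(f"Format: {fmt}")
    if length:
        parts.append(f"Length: {length}")
    return "\n".join(parts)

vague = build_prompt("Write a letter about a project delay.")
specific = build_prompt(
    "Write a letter explaining a two-week project delay.",
    context="This is already the second delay on this project.",
    audience="a corporate client",
    fmt="formal business letter",
    length="two short paragraphs",
)
print(specific)
```

The point is not the helper itself but the habit: every constraint you state explicitly is one less thing Claude has to guess.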

Iterative thinking when working with AI

0:43
One of the most important shifts when working with Claude is the realization that the first prompt rarely delivers the perfect result. And that's okay. Treat the first request as the start of a conversation, not a one-off assignment. Effective Claude users work according to the following logic. First: treat first drafts as a starting point. Study what works and what doesn't, and refine. Second: give specific feedback. "Make it shorter" is fine, but "remove the first two paragraphs and make the conclusion more concrete" is better. Third: know when to start over. If the conversation has gone off track, it's sometimes faster to open a new chat with a clearer prompt than to try to correct the current one. Iterative thinking is not a sign that you are doing something wrong, but the right strategy for working with any AI tool. The more often you refine and adjust, the more accurately Claude understands what you need.
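The refine-or-restart choice above can be sketched as conversation state. This is a minimal illustration, not a real API call: the role/content message format mirrors common chat interfaces, and both helper functions are hypothetical names introduced here.

```python
# Sketch of the iterate-or-restart decision as conversation state.
# The role/content dict format mirrors common chat APIs; nothing
# here calls a real service.

def refine(history: list[dict], feedback: str) -> list[dict]:
    """Continue the same conversation with specific feedback."""
    return history + [{"role": "user", "content": feedback}]

def restart(clearer_prompt: str) -> list[dict]:
    """Abandon tangled context and open a fresh conversation."""
    return [{"role": "user", "content": clearer_prompt}]

history = [
    {"role": "user", "content": "Summarize the Q3 report."},
    {"role": "assistant", "content": "(first draft of the summary)"},
]
# Specific feedback beats vague feedback:
history = refine(history, "Remove the first two paragraphs and "
                          "make the conclusion more concrete.")
```

Refining keeps all prior context; restarting deliberately discards it. The trade-off is exactly the one described above: accumulated context helps until it starts to mislead.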

What is AI literacy and the four-competency framework

0:58
AI literacy is the ability to collaborate effectively with artificial intelligence tools: not just knowing which buttons to press, but developing the judgment to use AI well in different situations. The four-competency framework, developed through a research collaboration between Professors Rick Dakan and Joseph Feller, highlights four key competencies. The first is delegation: deciding what work is done by the human and what by the AI, and how to distribute tasks between them. The second is description: communicating effectively with AI systems, clearly defining outcomes and desired behavior. The third is discernment: thoughtfully and critically evaluating AI outputs for quality, accuracy, appropriateness, and directions for improvement. The fourth is diligence: using AI responsibly and ethically, being transparent, and taking responsibility for work done with AI. The prompt structure from lesson two (set the context, define the task, specify the rules) draws on the description competency, while the troubleshooting methods in this lesson rely on discernment and diligence.

Evaluating Claude for your workflows

0:53
As you integrate Claude into more of your work, a question arises: how do you know whether Claude actually handles a specific task well? This is where evaluations, or simply evals, come in: systematic ways to check how well Claude performs the specific types of tasks that matter to you. A simple approach looks like this. Step one: gather examples. Take five to ten examples of a task you perform regularly: letters you have written, reports you have created, analyses you have done. Step two: create test prompts. Write prompts that should generate similar results. Step three: compare results. Run the prompts and compare Claude's responses with your examples: is the key information captured? Are the tone and style appropriate? What is missing? Step four: improve the approach. Based on the findings, adjust the prompts, add examples, or determine where human review is necessary. This approach helps you develop intuition for working with Claude on exactly the tasks that matter to you.
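The four steps above can be sketched as a tiny eval harness. This is a minimal sketch under stated assumptions: a `generate(prompt)` function stands in for an actual model call (stubbed here so the harness runs standalone), and the pass criterion is simple keyword matching, where real evals would use richer comparisons or human review.

```python
# Minimal eval harness sketch. generate() is a stub standing in for a
# real model call; the keyword criteria are deliberately simplistic.

def generate(prompt: str) -> str:
    # Stub: in practice this would return Claude's response.
    return "Dear client, delivery slips by two weeks; a revised plan is attached."

def evaluate(cases: list[dict]) -> float:
    """Score each case by whether the output contains every required phrase."""
    passed = 0
    for case in cases:
        output = generate(case["prompt"]).lower()
        if all(term in output for term in case["must_contain"]):
            passed += 1
    return passed / len(cases)

# Step one and two: examples of a recurring task turned into test prompts.
cases = [
    {"prompt": "Write a delay notice to a corporate client.",
     "must_contain": ["two weeks", "client"]},
]

# Step three: run and compare.
print(f"pass rate: {evaluate(cases):.0%}")
```

Step four happens outside the code: when the pass rate is low, you adjust the prompts, add examples to the cases, or flag the task as one that needs human review.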

When it's worth starting a conversation over

0:49
Not every conversation with Claude is worth continuing to the bitter end. Sometimes the most effective decision is to start a new chat. There are several signs that it's time to start over. If the conversation has drifted significantly off topic and attempts to steer it back produce tangled responses, it's easier to formulate the task from scratch in a new chat, now that you know exactly what to clarify. If the conversation has accumulated a lot of context that the current task doesn't actually need, that excess context can hurt the accuracy of responses. If Claude has started giving contradictory answers or "forgetting" earlier instructions, a fresh start clears the accumulated misunderstandings. Starting a new conversation is not a defeat but part of the iterative approach. Every time you start over with a clearer prompt, you train your ability to formulate tasks. Over time, Claude's first responses will hit the mark more often, precisely because you have learned to explain what you want more clearly.