The power of asking the right questions: Elevating LLMs to the next level


“Asking the right question is the first step toward getting the right answer.” This timeless wisdom applies not only to humans but also to Large Language Models (LLMs). While these AI models are capable of incredible feats, they often overlook a critical element: ensuring that the user’s input is clear, complete, and meaningful. The consequences of this oversight? Misaligned answers, wasted processing power, and a frustrating user experience.

As a product manager, I have learned that success often hinges on identifying and solving the right problem. The same principle holds true for LLMs. Imagine if these models could emulate skilled problem-solvers, actively helping users refine vague questions or intelligently deriving missing context from prior interactions. This seemingly “small hack” of enabling LLMs to ask clarifying questions could dramatically improve their performance, reduce operational costs, and deliver exceptional user satisfaction.

The art of prompt engineering: Shaping input for optimal output

The concept of refining inputs to achieve better outputs is not new—it is the essence of prompt engineering. At its core, prompt engineering is the art and science of crafting prompts that elicit the desired response from an AI model. However, today’s prompt engineering primarily relies on the user to anticipate the AI’s needs. What if we flipped the script? What if the AI could take a more active role in ensuring the quality of the input?

Consider these examples of how an LLM empowered with clarifying capabilities could transform user interactions:

  • Ambiguity resolution: If a user’s input is ambiguous, the AI could proactively ask clarifying questions to pinpoint the specific information needed (still missing in all major LLMs).
  • Contextual awareness: If essential context is missing, the AI could draw from previous interactions within the same conversation or session to fill in the gaps (all major LLMs already do this today).
  • Intent discovery: The AI could probe deeper to understand not just what the user is asking, but why, allowing it to tailor responses to underlying goals (builds on contextual awareness).
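To make the first bullet concrete, here is a minimal sketch of an ambiguity screen that could sit in front of a model. The word list and function names are purely illustrative assumptions, not any vendor’s API; a real system would use a small model rather than keyword matching.

```python
# Illustrative only: map vague words to the clarifying question they trigger.
AMBIGUOUS_TERMS = {
    "good": "Good in what sense (cost, quality, speed)?",
    "best": "Best by which criterion?",
    "it": "What does 'it' refer to?",
}

def clarifying_questions(prompt: str) -> list[str]:
    """Return clarifying questions triggered by vague words in the prompt."""
    words = prompt.lower().replace("?", "").split()
    return [q for term, q in AMBIGUOUS_TERMS.items() if term in words]

# A vague prompt yields a question back instead of a guessed answer.
print(clarifying_questions("Is Toronto a good city?"))
```

Even this toy version shows the shape of the idea: the system either answers or asks, and the decision happens before any expensive generation step.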

This approach becomes especially critical as we move toward agentic AI systems. Without clear instructions, these autonomous agents risk producing irrelevant—or worse, harmful—outcomes. It all comes down to the age-old adage: “Garbage In, Garbage Out.” The quality of the input determines the quality of the output.

Why clarifying questions matter: The ROI of input refinement

Improving the quality of user prompts is not just about creating a smoother user experience; it translates into tangible benefits:

  • Lower operational costs: Refined inputs lead to more concise and focused responses, reducing token usage and thus processing costs.
  • Faster query resolution: Clearer questions lead to quicker, more accurate answers, saving users time and effort.
  • Enhanced trust and satisfaction: Users receive more relevant, insightful responses, fostering trust in the AI and leading to greater satisfaction.

The quality of the input is just as important as the quality of the output.

Can LLMs learn to ask clarifying questions?

Here is the million-dollar question: Can LLMs be developed to natively refine user queries? Imagine a model that mirrors human conversational skills—validating questions, posing clarifying queries, and ensuring it truly understands the user’s intent before responding.

This ideal interaction flow could look like this:

  1. User input: The user asks a question or provides a prompt.
  2. Validation and clarification: The LLM checks for issues like grammatical errors, missing context, or ambiguity. It then proactively asks clarifying questions or suggests improvements to the prompt (missing in major frontier models, at least from a user perspective).
  3. Response generation: The refined, unambiguous input is used to generate a precise and relevant response.
  4. Continuous learning: Over time, the LLM refines its ability to ask effective clarifying questions based on historical interactions, becoming better at understanding user intent and anticipating their needs (context length will play an important role here, and major frontier model providers are already making significant progress toward virtually unlimited context windows).
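The four steps above can be sketched as a simple loop. Everything here is a stand-in: `needs_clarification`, `ask_user`, and `generate` represent whatever model and UI calls a real system would make, and the length check is a toy proxy for genuine ambiguity detection.

```python
def respond(prompt: str, history: list[str]) -> str:
    # Step 2: keep asking until the prompt passes validation.
    while needs_clarification(prompt):
        prompt = f"{prompt} {ask_user('Could you add more detail?')}"
    history.append(prompt)          # Step 4: retain refined prompts for future turns.
    return generate(prompt)         # Step 3: answer the refined, unambiguous input.

def needs_clarification(prompt: str) -> bool:
    return len(prompt.split()) < 4  # Toy proxy for "too vague to answer".

def ask_user(question: str) -> str:
    return "about its food scene"   # Stub for an interactive user reply.

def generate(prompt: str) -> str:
    return f"[answer to: {prompt}]" # Stub for the actual model call.
```

The point of the loop structure is that generation is gated behind validation: the expensive step never runs on input the system knows is incomplete.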

This iterative process, incorporating a feedback loop, could unlock significant cost savings and performance improvements. While features like memory in ChatGPT add some contextual awareness, integrating a robust prompt refinement capability would push the boundaries of LLMs to a whole new level.

A simple analogy: Understanding intent

Why is intent so important? And why do I keep harping on it?

Let us say your nephew asks, “Is Toronto a good city?” You would likely pause and consider the context. Is he inquiring about its living standards, vibrant food scene, unpredictable weather, or perhaps its renowned concert venues? Your response would depend heavily on clarifying his intent.

A well-designed LLM should behave similarly, proactively seeking to understand the dimensions of intent before generating a response. Doing so not only aligns the response with the user’s goals but also creates a more meaningful and human-like interaction.

Currently, LLMs rely heavily on prediction, similar to assuming a person asking for “water” on a hot day wants a drink. While humans excel at predicting context based on experience, we also instinctively recognize the value of asking simple clarifying questions: “Do you need it for drinking, watering plants, or perhaps to cool something down?”

This ability to seek clarification becomes even more crucial in an agent-driven world. Autonomous agents will be tasked with executing increasingly complex instructions, making it absolutely critical to ensure they have complete, accurate, and unambiguous context before taking action.

Why do LLMs struggle with ambiguity?

When confronted with incomplete or ambiguous prompts, most current LLMs fall back on their training data, making educated guesses and producing generic or safe answers. While these coping mechanisms can sometimes suffice, they often fall short of true understanding. Leading LLMs like ChatGPT, Gemini, and Claude still struggle to ask genuinely insightful clarifying questions, a crucial aspect of natural, interactive dialogue. Building a coherent multi-turn conversation-management flow is difficult, but a simple hack of putting a small model in front to improve the prompt makes things much better from a user perspective. It could open up a new dimension of ‘prompt scaling’ (after pre-training scaling, post-training scaling, and test-time scaling).
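The "small model in front" hack can be sketched as a cheap rewriting pass before the expensive model is called. In this sketch, `small_rewrite` uses trivial string cleanup as a stand-in for a small rewriter model, and `big_model` stands in for the frontier model; neither name comes from a real API.

```python
def small_rewrite(prompt: str) -> str:
    """Cheap pass: normalize whitespace and make an implicit question explicit."""
    cleaned = " ".join(prompt.split())
    if not cleaned.endswith("?") and cleaned.lower().startswith(("is", "what", "why", "how")):
        cleaned += "?"
    return cleaned

def big_model(prompt: str) -> str:
    return f"[response to: {prompt}]"  # Stub for the expensive frontier-model call.

def answer(prompt: str) -> str:
    # The rewrite step runs first, so the big model only ever sees cleaned input.
    return big_model(small_rewrite(prompt))
```

Because the rewriter is small and runs once per query, its cost is negligible next to the tokens it can save the main model downstream.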

Small changes in how we interact with AI—specifically, embracing the power of the clarifying question—can unlock massive benefits: lower costs, higher accuracy, and more satisfying user experiences. As with most things in life, the quality of the answers we receive is directly tied to the quality of the questions we ask. By actively focusing on refining prompts on the AI side, we can redefine how we interact with AI and pave the way for a smarter, more efficient, and more human-centric future.

Disclaimer: https://vinaysachdeva.com/disclaimer/. The opinions expressed in the blog post are my own and do not reflect the view(s) of my employer.
