
Precision Prompt Engineering: How to Stop AI from Generating "High-EQ" Fluff

Sep 3, 2025 · 1000 words

Recently, I developed a habit: after finishing an article, I hand it over to Gemini for an evaluation. I expected it to act like a strict teacher, pointing out my flaws with sharp insight, or like a senior editor, accurately predicting the article's potential for virality.

However, things didn't go as planned. No matter what content I entered, the responses seemed to cycle randomly through three options: "Great job!", "Very nice!", and "Good, but you could make minor improvements at points A and B." This feedback was polite and gentle, yet completely worthless. Whatever the quality of my writing, the model would flatter me first, leaving me overconfident. I know this is a side effect of Large Language Models (LLMs) being trained for agreeable, high-EQ conversation, but even when I emphasized that it should provide an "objective evaluation," I still couldn't get the results I wanted.

Initially, I thought it was a limitation of the AI's capability. But after several attempts and reflections, I gradually realized that the problem wasn't the AI—it was me. My prompt, "Evaluate this article," was simply too vague. It’s like telling a subordinate at work to "just do a good job" without specifying the task or defining the standards.

Through continuous experimentation, I developed a prompt that allows the AI to evaluate my writing effectively.

Why Do Vague Instructions Always Fail?

"Help me polish this copy," "Write a random marketing idea," "I want something that makes people's eyes light up." These types of instructions are common in daily collaboration, relying on the other person's experience, tacit understanding, and intuition to fill in the details.

However, AI lacks "intuition." It doesn't know if your "polishing" is meant to be more commercial or more artistic; it doesn't understand what kind of "random" you mean; and it certainly cannot guess the specific standard for "making eyes light up" in your mind. You are expressing a "feeling," but AI requires "quantification." Lacking clear goals and constraints, the AI can only provide the safest, most generalized answer, which naturally results in mediocrity.

The First Upgrade: Replace Vague Requests with Precise Instructions

We must recognize that AI has no subjective intent and cannot guess the specific standards deep within our minds. It is like a powerful executor with vast knowledge and capability, but it requires a clear objective.

Therefore, we need to eliminate all vague requests and replace them with precise instructions.

Vague Request:

Evaluate this article.

Precise Instruction:

Evaluate my article based on three dimensions: logical consistency, strength of argument, and clarity of language. Score each dimension (on a scale of 1 to 10) and provide specific suggestions for improvement.

Precise instructions define the specific dimensions for the AI to evaluate, causing a qualitative change in its response. It no longer outputs generic comments and empty praise; instead, it acts like a rigorous medical report—analyzing item by item, with evidence and reasoning.
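As a minimal sketch, the precise instruction above can be turned into a reusable prompt template so the same evaluation dimensions are applied to every article. The function name and structure here are illustrative assumptions, not part of any particular library:

```python
def build_evaluation_prompt(article: str) -> str:
    """Build a precise evaluation prompt with explicit dimensions and a scoring scale."""
    dimensions = ["logical consistency", "strength of argument", "clarity of language"]
    instruction = (
        "Evaluate my article based on three dimensions: "
        + ", ".join(dimensions)
        + ". Score each dimension on a scale of 1 to 10 "
        "and provide specific suggestions for improvement."
    )
    # Separate the instruction from the article so the model cannot confuse the two.
    return instruction + "\n\n---\n\n" + article

prompt = build_evaluation_prompt("My draft article text...")
```

Encoding the dimensions in one place also makes it easy to add or reweight criteria later without rewriting the whole prompt by hand.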

The precision of the instruction determines the value of the output. We need to act like a project director assigning tasks to a team, breaking down a grand goal into specific, actionable, and measurable instructions. The AI cannot read your mind, so you need to translate all your requirements into instructions for it to follow. This is the first step in taming AI and making it work for us.

The Second Upgrade: Shifting from "What to Do" to "How to Do It"

When I began evaluating more complex articles—such as analyzing potential readers and dissemination effects—I found that merely defining evaluation dimensions wasn't enough. Sometimes the AI's analysis still felt scattered, lacking a coherent perspective.

In these cases, we need to equip the AI with the ability to think systematically. How do we achieve this? The answer is to design a "thinking process" for it.

We can define a workflow instruction as follows:

  • Step 1: First, analyze who the most likely target audience for this article is.
  • Step 2: Based on the audience persona from Step 1, evaluate the content across three dimensions: emotional resonance, intellectual inspiration, and viral potential.
  • Step 3: Synthesize the above evaluations to provide an overall conclusion on the article's potential impact.
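As a sketch, the three steps above can be assembled into a single ordered workflow prompt programmatically. The step texts mirror the list; the helper name is made up for illustration:

```python
# Each step builds on the previous one, forcing a coherent order of analysis.
WORKFLOW_STEPS = [
    "First, analyze who the most likely target audience for this article is.",
    "Based on the audience persona from Step 1, evaluate the content across three "
    "dimensions: emotional resonance, intellectual inspiration, and viral potential.",
    "Synthesize the above evaluations to provide an overall conclusion "
    "on the article's potential impact.",
]

def build_workflow_prompt(article: str) -> str:
    """Join the steps into one numbered instruction block, then append the article."""
    steps = "\n".join(
        f"Step {i}: {text}" for i, text in enumerate(WORKFLOW_STEPS, start=1)
    )
    return f"Follow this workflow in order:\n{steps}\n\n---\n\n{article}"

prompt = build_workflow_prompt("My draft article text...")
```

Numbering the steps in code rather than by hand keeps the ordering consistent when steps are added or removed.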

After using such workflow instructions, the quality of the AI's output leaped forward once again. It was no longer simply listing points; it acted like a true content analyst—defining the user first, then evaluating the content, and finally forming insights. The entire response was logically sound, progressive, and highly persuasive.

Guiding the AI's thinking process is the key to unlocking its deep capabilities. If you only tell the AI "what to do," it may become a short-sighted executor. But if you string the tasks into a holistic process and tell the AI "how to do it," the AI will truly possess the capacity for systematic thinking.

By guiding the AI along a logical track we have laid out, we not only make the results more reliable but also gain visibility into its reasoning process, allowing us to adjust the prompt quickly when the output deviates.

The Essence of LLMs: A Reasoning Engine That Needs Directing

We must realize that the essence of a Large Language Model is that of a loyal conversational assistant, not an omniscient prophet.

We cannot expect to toss out a vague question and receive a perfect answer. That would require the model to conjure information out of thin air and read our minds from insufficient data.

It is more like a powerful, logically rigorous reasoning engine. The destination this engine reaches depends entirely on the workflow we set for it. This workflow is injected through the prompt.
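As one hedged sketch of what "injecting the workflow through the prompt" can look like with a chat-style API, the workflow goes into the system message and the article into the user message. The message shape follows the common role/content convention; the actual client call and model name are omitted, since those depend on the provider:

```python
workflow = (
    "You are a content analyst. Follow this workflow in order:\n"
    "Step 1: Identify the most likely target audience for the article.\n"
    "Step 2: Evaluate emotional resonance, intellectual inspiration, "
    "and viral potential for that audience.\n"
    "Step 3: Synthesize an overall conclusion on the article's potential impact."
)

messages = [
    # The system message carries the reasoning track we have laid out.
    {"role": "system", "content": workflow},
    # The user message carries the input the model should reason over.
    {"role": "user", "content": "My draft article text..."},
]
# This messages list can then be passed to any chat-completion endpoint.
```

Keeping the workflow in the system message and the content in the user message separates "how to think" from "what to think about," which is exactly the architect's job described above.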

Therefore, maximizing the potential of LLMs means undergoing a shift in identity: from a passive questioner to an active workflow architect. We need to design a path that allows the AI, through reasoning, to ultimately generate the answer we desire. This is likely the ultimate secret to keeping AI consistently "obedient."