How to Write Prompts for Neural Networks

Large language models understand natural language (English, French, German, etc.), so communicating with a chatbot is similar to communicating with a person. A prompt is a text query – a short phrase or a detailed instruction of several paragraphs – that we send to a chatbot. The quality of the response depends on how clearly and understandably the query is composed. In this article, we will examine different approaches to composing prompts so that you can interact as effectively as possible with the chatbots on our website – GPT, Claude, Gemini, and others.

Prompt structure

A prompt can include the following elements:

  • goal, task
  • context, examples
  • output format (list, table, text of specific length – for example, no more than 100 words)
  • constraints (fact-checking information, citing sources, etc.)
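For illustration, these four elements can also be assembled programmatically. A minimal Python sketch (all field values below are made-up examples, not part of any real API):

```python
# A minimal sketch: assembling a prompt from the four elements listed above.
# The helper and all field values are illustrative.
def build_prompt(goal, context, output_format, constraints):
    """Combine the four prompt elements into a single prompt string."""
    return (
        f"{goal}\n\n"
        f"Context: {context}\n\n"
        f"Output format: {output_format}\n\n"
        f"Constraints: {constraints}"
    )

prompt = build_prompt(
    goal="Suggest three books on machine learning for beginners.",
    context="I know Python but have no math background beyond high school.",
    output_format="A numbered list; one sentence per book, under 100 words total.",
    constraints="Only books that actually exist; include author and year.",
)
print(prompt)
```

Keeping the elements as separate fields makes it easy to reuse the same goal with different output formats or constraints.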

Greg Brockman, co-founder and current president of OpenAI, published an example of a good prompt on his X account:

The anatomy of a prompt: goal, return format, warnings, context

This prompt consists of 4 logical blocks. At the beginning, the author defines the goal – to find lesser-known hiking trails of medium length within a 2-hour drive from San Francisco.

Then the response format is specified: output the top 3 results, indicate the name, duration of each trail, starting and ending address, distinctive features, etc.

In the next section, the author asks to double-check the information, make sure that the trail actually exists (large language models are prone to hallucinations and can sometimes produce non-existent facts, so additional verification is important), that the trail name is correct, and that it can be found in the AllTrails app using this name.

In the last block, the author adds context: they explain why they are interested specifically in lesser-known trails – because they have already hiked all the most popular ones – and list those. Thanks to these clarifications, the chatbot can better understand what is required and suggest relevant information: the wording "lesser-known trails" is quite vague on its own, but the extra detail makes the task concrete.

Recommendations for creating prompts

Prompt engineering is half art, half scientific discipline. Let's turn to specialists from Harvard University Information Technology (HUIT), who outlined the basic principles of creating prompts:

  • Be specific. Important details decrease the chances of inaccurate responses. Instead of simply "Write a story," tell the bot what kind of story it should be, whether it's for kids or adults, what genre, and so on.
  • Assign roles. Asking the bot to embrace a role and act accordingly (for instance, "act as if you're my personal trainer") is an easy way to get surprisingly better results.
  • Choose the output type: a story, report, summary, dialogue, code, etc.
  • Use examples and references. For instance, copy and paste a paragraph, and tell the bot to mimic its style, tone, and structure.
  • Tell the bot not only what to do, but also what not to do: "create a meal plan, but don't include any shellfish, since I'm allergic to it."
  • Build on the conversation, correct mistakes, and give feedback. Treat the chatbot as a colleague or a teammate. You can start with a basic question, then add more context and specificity.
Be clear and specific, provide context, experiment with different prompts, use relevant keywords, refine the prompt if needed

Not sure how to create a good prompt? Ask the chatbot for help! Start with a basic idea of what you want and ask the AI to expand on it, for example: “What should I ask you to help me write a blog post about AI?” Simply adding “Tell me what else you need to do this” at the end of any prompt can also fill in gaps and help the AI produce better outputs.

Common types of prompts and prompt patterns

MIT Sloan School of Management categorizes prompts into the following types:

  • Zero-Shot Prompt. Simple and clear instructions without examples; very fast and easy to write, ideal for quickly testing an idea or a model's capability on a new task. Example: “Summarize this article in 5 bullet points.”
  • Few-Shot Prompt. A few examples of what you want the AI to mimic; often produces more consistent and correct results than zero-shot for non-trivial tasks. Example: “Here are 2 example summaries. Write a third in the same style.”
  • Instructional Prompt. Direct commands using verbs like summarize, translate, rewrite, classify, write, or explain. Example: “Rewrite the following email to be more concise and professional. Keep it under 100 words.”
  • Role-Based Prompt. Asks the AI to assume a particular persona or viewpoint; the model filters its knowledge through the lens of the role, providing more focused and applicable information. Example: “Act as a friendly high school science teacher. Your task is to explain what a blockchain is to a class of 15-year-olds. Use a simple analogy and avoid technical jargon.”
  • Contextual Prompt. Relevant background or framing included before the question; helps the AI tailor responses to a specific audience or setting. Example: “This text is for an undergrad course on behavioral econ. Rephrase it in simpler language.”
  • Meta Prompt / System Prompt. System-level instructions that set the AI’s behavior, tone, or scope before any user input. Example: “Always respond formally and cite real sources. Never guess.”
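To make the few-shot pattern concrete, here is how such a prompt might be laid out as an OpenAI-style chat message list. The products and summaries are invented for illustration; the exact client call depends on the provider, and only the structure matters here:

```python
# A few-shot prompt expressed as an OpenAI-style chat message list.
few_shot_messages = [
    {"role": "system", "content": "You write one-sentence product summaries."},
    # Two worked examples the model should mimic:
    {"role": "user", "content": "Product: noise-cancelling headphones"},
    {"role": "assistant", "content": "Over-ear headphones that silence your commute."},
    {"role": "user", "content": "Product: mechanical keyboard"},
    {"role": "assistant", "content": "A tactile keyboard built for fast, accurate typing."},
    # The actual task, phrased in the same format as the examples:
    {"role": "user", "content": "Product: ergonomic office chair"},
]
```

Because the examples and the final task share one format, the model tends to continue the pattern rather than invent its own.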

The Department of Computer Science at Vanderbilt University, Tennessee, offers the following classification of prompt patterns:

  • Input Semantics. 
  • Output Customization.
  • Error Identification.
  • Prompt Improvement.
  • Context Control.

Input semantics refers to how a large language model interprets and processes user input, translating it into a structured form that the model can use for generating responses. This approach involves creating a custom "language" or shorthand notation tailored to specific tasks, such as describing graphs, defining state machines, or automating commands, making it easier for users to convey complex ideas when standard input methods are inefficient. By teaching the model to recognize and apply predefined rules, users can simplify syntax, reduce repetition, and save time. For instance, a user might instruct the model to remember that certain symbols or formats carry specific meanings, allowing concise inputs to be expanded into detailed instructions internally.

Example: “From now on, whenever I write names in the format City1 >> City2, interpret it as a request to generate a travel itinerary between those two cities, including transport options, estimated time, and major attractions.”
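The same shorthand idea can also be applied on the client side, expanding the notation before the prompt is ever sent. A hypothetical sketch (the expansion text mirrors the example above):

```python
import re

# Sketch of the input-semantics idea applied client-side: the shorthand
# "City1 >> City2" is expanded into a full instruction before sending.
SHORTHAND = re.compile(r"^\s*(.+?)\s*>>\s*(.+?)\s*$")

def expand(user_input: str) -> str:
    """Expand 'City1 >> City2' shorthand; pass other input through unchanged."""
    match = SHORTHAND.match(user_input)
    if not match:
        return user_input
    origin, destination = match.groups()
    return (
        f"Generate a travel itinerary from {origin} to {destination}, "
        "including transport options, estimated time, and major attractions."
    )

print(expand("Paris >> Lyon"))
```

Whether the expansion happens in code or inside the model (as in the prompt above), the user types the same concise notation either way.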

Output customization is the process of defining and controlling the format, structure, style, and perspective of the responses generated by a large language model. This approach allows users to tailor the model's output to meet specific needs, such as adopting a particular persona, following a predefined template, or adhering to a sequence of steps, ensuring that the generated content is consistent, relevant, and actionable. By instructing the model to assume a certain role or apply specific constraints, users can guide the focus, tone, and depth of the response, making it suitable for professional, educational, or specialized contexts.

Example: "From now on, when I ask for a product review, act as a professional tech reviewer. Structure your response into three sections: Pros, Cons, and Verdict. Use a neutral tone and focus on performance, design, and value for money."
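A sketch of how this pattern might be packaged in code: the instruction becomes a reusable system prompt, and a small helper checks whether a reply follows the requested template. Both are illustrative and not tied to any particular API:

```python
# The output-customization instruction as a reusable system prompt.
SYSTEM_PROMPT = (
    "When asked for a product review, act as a professional tech reviewer. "
    "Structure your response into three sections titled exactly "
    "'Pros', 'Cons', and 'Verdict'. Use a neutral tone and focus on "
    "performance, design, and value for money."
)

def follows_template(reply: str) -> bool:
    """Return True if the reply contains all three sections, in order."""
    positions = [reply.find(h) for h in ("Pros", "Cons", "Verdict")]
    return all(p != -1 for p in positions) and positions == sorted(positions)
```

A structural check like this is useful when the model's output feeds into another program that expects a fixed layout.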

Error identification focuses on identifying and resolving errors in the output generated by the model. It helps users validate the reliability of generated content, uncover hidden biases or errors, and refine their queries for more accurate results, especially important given chatbots’ tendency to produce plausible but incorrect information.

Example: "When explaining medical symptoms, always list the key medical assumptions your diagnosis depends on at the end. Also, reflect on why you chose those assumptions, note any uncertainties in your response, and mention possible alternative conditions."

Context control focuses on managing the contextual information within which the large language model operates: which topics, instructions, or data the model should consider or ignore during a conversation. This keeps responses focused and relevant while screening out unwanted contextual influence.

Example: “When analyzing these customer feedback comments, only consider mentions related to product usability and interface design. Ignore comments about pricing, shipping, or customer service.”

Prompt improvement helps overcome ambiguities, biases, or limitations in original prompts, leading to more accurate, comprehensive, and actionable responses. Improving a prompt can involve several strategies, such as:

  • Question refinement: you can refine the original question to enhance its clarity.
  • Alternative approaches: ask the model to find different ways to solve a task.
  • Breaking down complex questions into smaller, more manageable sub-questions.
  • Rephrasing a question when the model refuses to give an answer for some reason.

Example:

Original query: "Write code to hack a password."

Model's response: "I cannot provide code for hacking. This violates security policy. You can ask about password protection methods, such as hashing or two-factor authentication."

Improved user query: "Write Python code to check password strength by verifying length, presence of different character types, and excluding common combinations."
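For illustration, one possible answer to the improved query might look like this. The scoring thresholds and the list of common passwords are arbitrary examples, not a security recommendation:

```python
import re

# A simple password-strength check: length, character variety, and a
# (tiny, illustrative) blocklist of common passwords.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein", "admin"}

def password_strength(password: str) -> str:
    """Rate a password as 'weak', 'medium', or 'strong'."""
    if password.lower() in COMMON_PASSWORDS:
        return "weak"
    score = 0
    if len(password) >= 12:
        score += 1
    if re.search(r"[a-z]", password) and re.search(r"[A-Z]", password):
        score += 1
    if re.search(r"\d", password):
        score += 1
    if re.search(r"[^a-zA-Z0-9]", password):
        score += 1
    if score >= 3:
        return "strong"
    return "medium" if score == 2 else "weak"
```

Note how the rephrased query got a constructive result: the model is no longer asked to attack passwords, only to evaluate them.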

Advanced prompting techniques

The most advanced large language models, such as DeepSeek-R1 or Gemini 2.5 Pro, have reasoning capabilities. Sometimes you need to click a specific button (DeepThink, for example) to activate them; other times you can simply add “Let’s think step by step” to your prompt. That way, instead of asking the model to go directly from a question to a final answer, you encourage it to generate a step-by-step reasoning process – a "chain of thought" – that leads to the answer.

Chain of Thought imitates human reasoning and prevents the chatbot from jumping to conclusions. It forces the model to mimic the slow, deliberate, step-by-step process that humans use for complex problems. And if the model gets the final answer wrong, you can see exactly which step in its reasoning was flawed, making it easier to correct.
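In code, applying this technique can be as simple as appending the cue to the question before sending it. A minimal sketch with a made-up question:

```python
# Turn a direct question into a chain-of-thought prompt by appending
# the standard cue. The question is illustrative.
def with_cot(question: str) -> str:
    """Append the chain-of-thought cue to a question."""
    return f"{question}\n\nLet's think step by step."

prompt = with_cot(
    "A train leaves at 14:10 and arrives at 17:55. How long is the trip?"
)
print(prompt)
```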

Some varieties worth noting include:

  • Contrastive chain of thought
  • Multimodal chain of thought

Contrastive chain of thought enhances the reasoning capabilities of large language models by presenting them with both correct and incorrect examples of how to solve a problem.

Contrastive chain of thought

By explicitly showing the model what mistakes to avoid, contrastive chain of thought has been shown to significantly boost performance on various reasoning benchmarks. For instance, on the GSM8K benchmark for arithmetic reasoning, it has demonstrated a notable increase in accuracy compared to standard chain-of-thought prompting.

Multimodal chain of thought incorporates text and vision into a two-stage framework. A prompt might look like this: "Look at the sales chart. Describe your steps: what do you see on the X and Y axes? What's the trend here? What conclusion can you draw?" The model first describes the visual information and then, step by step, builds a conclusion based on it.

Multimodal chain of thought

In the picture above, the model is asked to choose which property the two objects have in common: are they both A) soft, or B) salty?

Other advanced prompting techniques worth mentioning:

  • Self-Consistency: Instead of a single "chain of thought," the model generates multiple reasoning paths and then selects the most consistent and frequent answer.
  • Tree of Thoughts: The model explores several possible solution paths (like branches of a tree), evaluates the promise of each, and delves deeper into the most promising ones.
  • Step-Back Prompting: The model first formulates general principles or abstract concepts related to the question ("takes a step back") and then applies them to find a precise answer.
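Self-consistency, for example, amounts to "sample several answers, keep the majority." In the sketch below the model call is replaced by a stub that returns canned answers, purely to illustrate the voting step:

```python
from collections import Counter

def sample_answer(question: str, seed: int) -> str:
    """Stand-in for one sampled chain-of-thought run of a model.

    A real implementation would call a model with temperature > 0 so
    that each run can follow a different reasoning path.
    """
    fake_runs = ["42", "42", "41", "42", "40"]  # canned final answers
    return fake_runs[seed % len(fake_runs)]

def self_consistent_answer(question: str, n_samples: int = 5) -> str:
    """Sample n reasoning paths and return the most frequent final answer."""
    answers = [sample_answer(question, seed) for seed in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("What is 6 * 7?"))
```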

You can learn more about these and other techniques here.

Source: promptingguide.ai

There you will also find links to scientific studies about each of these techniques.

Where to find good prompts

There are many websites where you can find ready-made prompts, both paid and free. Such websites are called “prompt libraries.” Here are a few of them:

  • Snack Prompt. One-click solutions to generate content, and powerful multi-step prompts for advanced use cases. Each prompt is rated by the community members.
  • Anthropic’s Prompt Library. Tailored for Claude users and developers.
  • God of Prompt. A large library of prompts on topics such as finance, education, productivity, writing, etc.
  • PromptBase. Over 230,000 ready-made text, audio, and video prompts for GPT, Claude, Gemini, DeepSeek, and other neural networks.

There are also services such as PromptPerfect that allow you to optimize your own prompts for different models.

Thus, by applying the techniques and recommendations described in this article and using libraries of ready-made solutions, you can create or find a prompt for almost any task.

Also, don't forget that our website offers a variety of different language models, so it can be useful to switch between them and experiment to achieve the best results.