ai cheatsheet 2nd issue

AI Cheatsheet #2

In the first edition of the AI Cheatsheet, we looked at how LLMs are formed and gave an overview of crucial components. We had said that Context + Reasoning = Output.

The 2nd edition of the AI Cheatsheet focuses on the first part of this equation, the Context. Context is where models learn and comprehend text by considering the context within which words, phrases, or sentences occur. So, it forms the baseline for any assumptions to be made and directly affects the accuracy of an LLM’s answers.

Let’s take a deep dive into context and prompt engineering to discover how we can get the most accurate results out of our model. 

Context

Prompt with relevant info

 Prompt input = Perception

Reasoning

Model and configuration

Trained knowledge + prompt memory = Decision making

Conclusion

Action (give answers, use tools)

 Non-deterministic = Same input, different conclusion

Prompt engineering

> Utilise prompt engineering techniques to increase the accuracy of the answers to your questions. Treat the LLM as if it were oblivious to your question context and explain precisely what your expectations is, what the system should know about, and how to present the answer. In this particular case, the LLM behaved as a human, and without additional information it will fill in the gaps in a way that we cannot predict.

> Lead by example and ask the system to prepare an answer following the list of steps to construct it. This is especially useful for an algorithmic approach or multi-step operations.
 
> Increase answer confidence and reverse engineer the answer. Ask the LLM to rate its own answer or guide you through its thought process step-by-step. This forces the model to treat its own answer as input. Analysing the answer can trigger a different order of validation and enable the model to reach different areas of its training.
 
The Zartis I²A² principle: 
  • Interrogate: Don’t trust the outcome
  • Iterate: Refine output via iterations
  • Assist the Assistant: Give it all the necessary context

W³HI - prompt engineering formula

The W³HI formula is a proprietary Zartis formula that we devised and use for internal purposes. We hope you find it as useful as our team does! 

W³HI

Prompt Engineering

How to share context properly

Explanation

Prompt engineering is the cheapest way to improve the quality of the outcome.

WHY

The situation or the goal

Why you need this and what you want to achieve. Provide background.

WHAT

The specific task

Specific task with requirements. Reduce ambiguity. Easier to divide complex task in simple ones.

WHO

Who is the target and who do I want you to act as

Who is the target audience and what’s the role of the assistant.

HOW

Format and size, as well as tone and style

Provide an output example, and define style & restrictions.

Info

Relevant knowledge (information or data)Add knowledge as part of the context. The assistant will look for the answer there.
AI prompt engineering

Simple additions to improve outcome:

“Take your time, work step by step”

“Think through it step by step”

“Provide a chain of thought”

“The accuracy of the answer is very important”

“This matters a lot to me” 

W³HI in action

WHY

I’m working on an AI cheatsheet. I’m trying to summarize the most important concepts in prompt engineering.

WHAT

I want you to create a list of key elements that are relevant and important. Do it step by step following the instructions. The quality of the output is important to me.

WHO

The cheatsheet is for tech leaders in the software industry and you should act as an AI teacher or coach helping them to improve their prompt skills.

HOW

We need a list of key concepts following this structure:
WHY: “Here you can insert your ideas”
WHAT: “Here you can insert your ideas”
WHO: “Here you can insert your ideas”
HOW: “Here you can insert your ideas”
INFO: “Here you can insert your ideas”

This must be direct, professional and with a maximum of 15 letters for each explanation.

Please, also provide an example for this prompt.

Certainly, I’ll craft a summary of key concepts in prompt engineering for AI, tailored for tech leaders in the software business. Given the constraints for brevity and clarity, I’ll keep each explanation within 15 characters. This structure should guide users in understanding why they’re engaging with prompt engineering, what it entails, who it’s for, how to approach it, and where to look for further information.

Prompt Engineering Cheat Sheet

– WHY: Enhance AI output
Explanation: To improve the relevance and accuracy of AI-gerated content.

– WHAT: Crafting queries
Explanation: The process of designing and refining prompts to guide AI responses.

– WHO: Tech leaders
Explanation: Aimed at decision-makers in technology to leverage AI effectively.

– HOW: Iterative refining
Explanation: Continuously adjusting prompts based on output quality.

– INFO: Follow trends
Explanation: Stay updated with AI advancements and community insights.

Example Prompt:

– Objective: Generate a brief for a new AI-driven project management tool.

– WHY: Streamline planning
– WHAT: Define tool scope
– WHO: Project managers
– HOW: List features
– INFO: Current PM tools

– Prompt Example: “For tech leaders seeking to enhance project efficiency: Design a prompt that outlines the core functionalities of an AI-driven tool aimed at simplifying project planning and execution. Consider key features that differentiate it from existing solutions.”

This format ensures a concise yet comprehensive approach to prompt engineering, tailored for an audience that values efficiency and precision.

Certainly! Let’s break down the key concepts in prompt engineering for your AI cheatsheet:

  1. WHY (Purpose): To optimize AI model responses.

  2. WHAT (Definition): Designing inputs (prompts) for AI models.

  3. WHO (Role): For prompt engineers.

  4. HOW (Techniques): Provide clear, specific, and well-defined prompts.

  5. INFO (Additional Details): Types of prompts (open-ended, closed-ended, factual, opinion-based, instructional).

Now, let’s weave these concepts into a cohesive example:

Example Prompt for AI Model: “Generate a professional email response for client X.”

  1. WHY: We create this prompt to ensure our AI model generates accurate and contextually relevant email content.

  2. WHAT: The prompt is a clear instruction to compose an email response.

  3. WHO: Prompt engineers design this query, bridging user intent with AI capabilities.

  4. HOW: By crafting a specific prompt, we guide the AI to produce a polished email.

  5. INFO: This prompt falls under the “instructional” category, aiming for a specific output.

Remember, prompt engineering impacts content quality and efficiency, making it a crucial skill for tech leaders. 🚀

Other prompt frameworks

Security considerations for prompting

prompt injection virus

Prompt injection

Attackers could add: “Ignore the above directions and…”

ai hacker attack

Prompt leaking

A model that might answer with confidential information

Jailbreaking

A prompt which can overcome the model limitations

DAN prompt to unlock AI limitations

Do Anything Now

DAN enables LLMs to go beyond its usual limitations and answers

More on context

> Let’s remember the components we had listed under Context in the AI Cheatsheet #1.

GenAI context

Optimisation by prompting

> Different prompts can result in answers of various accuracy, even if they are semantically equivalent or at least similar. There is no way of verifying which prompt is going to provide the best answer. Firstly, we need to address the ranking problem and define how we want to rank the accuracy.

> Additionally, we can run an optimisation process which comes up with prompts and runs them through ranking in order to pick the best prompt. See OPRO for more info.

Query expansion and transformation

> Expand the user queries in order to provide additional context which can result in more accurate responses. There are a number of approaches that can be followed – all use the LLM to perform the expansions and transformations. This bases on the assumption that the original user query cannot be controlled, but its expanded version can.

> The engineer’s insight into how the LLM works can increase the accuracy of the user’s query who does not possess this knowledge.

Multi-modal input (text, image, video, sound)

> Feed various types of input to your MLLM as a way to express your context. Everything can be expressed in text, but we all know that a picture is worth a thousand words, right? Paste a diagram, a PDF, image, or voice in order to converse with the LLM. The key thing is that you can provide any input and expect (in your prompt) any output and the MLLM is going to reason with it. Try this after your next brain storming or design session with your team!

List of useful links and papers:

Discover some of the resources that feed our research as well as useful links to tools and papers that are fueling advancements in the AI world.

Links:

Papers:

Large Language Models as Optimisers – Optimisation by Promting: OPRO

Seven Failure Points When Engineering a Retrieval Augmented Generation System: by S. Barnett & Co.