The Power of AI Prompt Engineering in a Developer’s Toolbox

by Kate Davis | Feb 13, 2024 | Blog

Kate Davis – Junior Database Developer with a foundation in Risk Management. My journey began analyzing data in Workers Compensation, Benefits, and Unemployment, where I discovered my love of all things data and a passion for turning insights into action. I’m currently immersed in a 9-month ETL project for a major client. My diverse background in Linguistics, and a former life as a teacher prior to Risk Management, sets me apart: it lets me approach programming languages from a distinct perspective and communicate effectively with stakeholders, meeting them where they are and how they learn best.

Prompt engineering is an emerging field that is revolutionizing the way developers work. It’s a game-changer, making workflows faster and easier. This article delves into how prompt engineering can be applied in a real-world project, particularly an Extract, Transform, Load (ETL) project for a maintenance ticketing system.

Prompt engineering aims to help developers write clean, efficient, and error-free code by reducing cognitive load and smoothing the development process. It also aims to improve code consistency across a team, saving time and effort in debugging and troubleshooting.

The Magic of Prompt Engineering in Data Cleaning and Management

Recently, at 3Ci, we embarked on a big clean-up job for one of our largest clients. The project was massive and involved a significant amount of data manipulation and consolidation. The raw data, quite messy and disorganized, needed to be cleaned and managed in Excel before being transferred to the Maximo system. The project began in October 2023 and is expected to continue through June 2024.

One innovative approach I adopted was creating an Excel visualization that served as a dummy Maximo system. This proved crucial in reassuring the client about the project’s progress and provided a clear picture of the final hierarchical layout of the data.

My use of prompt engineering was central to generating VBA code for the project. I used it to create the macro involved in the data validation process, enabling sophisticated information handling and making my task more efficient and manageable. I used fake placeholder information that did not match my exact data to ensure our client’s data stayed secure.

I provided GPT-3 with bulleted instructions, specific details, and prompts to ask me questions for clarification. This approach resulted in more refined results and successful code generation. Additionally, GPT-3 preemptively troubleshot issues on my behalf. I discovered that using clear and straightforward language, such as describing data sets in an easily understandable way, improved the reliability and effectiveness of the AI-generated results.
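
As a simplified sketch (with made-up sheet and column names rather than any real client data), a prompt in that style might look like this:

  You are helping me write an Excel VBA macro for data validation.
  - The worksheet is named "AssetList" and holds one asset per row.
  - Column A is an asset ID (text), column B is a five-character location code, column C is an install date.
  - Flag any row where the asset ID is blank, the location code is not five characters, or the date is missing.
  - Write the flagged rows to a new worksheet named "ValidationReport".
  Before writing any code, ask me clarifying questions about anything that is ambiguous.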

Now, several months into the project, what began as a simple check that our work would meet the client’s needs has become a critical step in the process for each site, limiting errors and the need for corrections in their system.

Types of Prompting

In the realm of prompt engineering, developers employ various strategies, including soft prompting, in-context learning, and symbolic reasoning.

Soft Prompting is a technique that gently nudges the model in a specific direction, without explicitly stating the task. It allows for a wider range of outputs, enabling the model to think more creatively and produce unique solutions. It’s like guiding the model along a path without dictating its exact journey.

For instance, instead of asking the model to “Translate the following English text to French,” a soft prompt could be “Imagine you’re a translator working for the United Nations, tasked with translating an English document to French.”

In-context learning, on the other hand, involves providing additional context to help the model better understand the task. This context can include specific information about the desired output format or more general data about the project or domain. Providing context greatly enhances the accuracy and relevance of the model’s outputs.

For example, when working on a text generation task for a scientific project, providing the model with context about the scientific field, project goals, and target audience can result in more precise and relevant outputs.
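
As an illustration, with the field, goals, and audience invented for the example, a context-setting prompt might read:

  Context: You are writing for a materials-science research group. The goal is to summarize battery-degradation experiments for practicing engineers, not specialists.
  Format: a 200-word plain-language summary followed by three bullet-point takeaways.
  Task: Summarize the findings below in that format.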

Symbolic Reasoning plays a crucial role in advanced logic and mathematics. It offers powerful capabilities for formal proofs, code generation, game strategies, and mathematical problem-solving. By leveraging symbolic reasoning in prompts, users can achieve precise verification, efficient software development, enhanced gameplay, and solutions to complex problems. To get the most out of it, it is important to make sure the model understands the vocabulary involved and to specify the reasoning approach you want it to take.

For example, in a coding task, a directed prompt could be “Write a Python function that takes two integers as input and returns their sum.” This instructs the model specifically on what to do, ensuring the output matches the exact requirement.
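
For a directed prompt like that, the response would typically be a small, exact function; a minimal sketch of what the model might return:

  def add_integers(a: int, b: int) -> int:
      # Return the sum of the two input integers, exactly as the prompt specifies.
      return a + b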

Prompt Engineering Best Practices

To fully leverage the power of prompt engineering, it is imperative to adhere to a set of best practices. These practices encompass a variety of key considerations and strategies that can greatly enhance the effectiveness and efficiency of AI prompting.

  1. Use the Latest Model: Always use the most recent model available, as it has been trained with the latest data and improvements.
  2. Know the Model’s Strengths and Weaknesses: Understanding what the model can do well and where it might struggle can help you craft better prompts.
  3. Be Specific: Make your instructions as clear and detailed as possible. The more specific your prompt, the better the results (see the example after this list).
  4. Form Matters: The way you phrase your prompt can significantly affect the output. Soft prompting is one example of how differently you can engage with the model. Experiment with different phrasings to get the best results.
  5. Provide Context: Giving the model enough context can improve the relevance and accuracy of its responses.
  6. Iterate Gradually: Try different approaches and make small changes one at a time. This iterative process allows for fine-tuning and improvement.
  7. Understand the Desired Outcome: Have a clear goal or outcome in mind when creating your prompt. This will guide you in crafting effective prompts.
  8. Maintain Strict Security Measures: When using AI in prompt engineering, security is of paramount importance. It’s crucial to protect the data that feeds the AI algorithms as well as the outputs they generate.
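
To make the “Be Specific” point concrete, compare a vague prompt with a more detailed one (both invented for illustration):

  Vague: "Clean up this spreadsheet."
  Specific: "Write an Excel VBA macro that trims whitespace from every cell in columns A through D of the active worksheet, converts the dates in column C to YYYY-MM-DD text, and highlights any row where column A is blank."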

AI as a Tool, Not a Replacement

While AI and prompt engineering have proven to be incredibly useful tools, they are not replacements for the human workforce. They have limitations in creativity and critical intervention, particularly when it comes to detecting and rectifying errors.

It’s crucial to always have a well-versed human in the loop to oversee and manage these processes. This ensures the code’s accuracy and promotes a healthy balance between human and AI involvement in development.

In conclusion, prompt engineering is a powerful tool in a developer’s toolbox. It can significantly speed up workflows, automate processes, and simplify handling large-scale projects. However, it’s important to remember that while AI and prompt engineering are beneficial tools, they are not replacements for human oversight and intervention.