Category
Personal study

Used tools
Figma, ChatGPT

Oct 2025

Prompt Engineering for everyone

AI has already become part of our everyday lives. I used it myself just today, to generate an image and write an email. How many people don’t use AI these days? At least in my circle, especially among people working in tech, almost everyone has tried it at least once. It’s now completely natural to use AI to reach goals faster and work more efficiently.

But to be honest, I sometimes get frustrated when using AI tools. The whole point of using them is to get results quickly, yet there are times when they give me something totally off, wasting my time instead. For example, when I was working on a healthcare-related project, I needed to generate an image using AI. In my head, I pictured a scene of a patient smiling while video chatting with a doctor. Then I opened the image generator and stared at the empty text box. My first thought was, “Uh… how am I supposed to describe this?” I could see the image so clearly in my head, but I didn’t know how to turn it into words. So I just started typing whatever came to mind.


“A female patient and a female doctor are on a video call, from the patient’s point of view. The patient is using a laptop to see the doctor on screen. Both are smiling and sitting in chairs.”


After writing that, I thought, “AI is smart. It’ll understand what I mean.” Then I hit the generate button. But of course, the result looked nothing like what I had imagined. So I tried again. And again. More than five times.


“Make it horizontal.” “I want warmer colors.” “Move the camera angle a bit to the left.” “Reduce the contrast.”


Eventually, I got the result I wanted, but by then I was already annoyed with my computer. Text-based commands give us incredible freedom, but that same freedom often leaves us stuck, unsure how to express what we want. And yes, it’s not really the AI’s fault. It’s mine for not being specific enough. But that raises the question: what does it actually mean to give a “clear instruction”?

The commands we give to AI are called prompts, and the practice of writing them well is called prompt engineering. I see it as a kind of language-based programming, a form of coding with words instead of syntax. There are even professionals who work full-time as prompt engineers. Yet because AI tools are so easy to access, most people never learn what prompt engineering actually involves. They may know the term, but not how to use it effectively. That made me wonder: how could people who aren’t familiar with prompt engineering still write better prompts and use AI more intuitively?

To find out, I began looking through research papers about different prompting methods, and I discovered there are far more types of prompts than I had imagined. One paper even introduced more than forty. From there, I selected four that I thought would be practical and approachable for everyday users.*


After analyzing several prompt examples, I identified common elements that appeared repeatedly and organized them into 23 components. These were divided into two main categories: Blocks and Parameters, both of which can be used for text and image generation. Blocks represent large structural elements such as the subject, goal, and rules (10 total). Parameters are the smaller details within each block that refine or specify its meaning (13 total).
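To make the idea concrete, here is a minimal sketch in Python of how Blocks and their nested Parameters could be composed into a single prompt string. The block and parameter names below are illustrative stand-ins, not the actual 23 components from my analysis:

```python
# Sketch: composing a prompt from Blocks and nested Parameters.
# The block/parameter names here are illustrative, not the real 23 components.

def render_block(name, content, parameters=None):
    """Render one Block as a labeled section, with its Parameters indented below."""
    lines = [f"[{name}] {content}"]
    for key, value in (parameters or {}).items():
        lines.append(f"  - {key}: {value}")
    return "\n".join(lines)

def build_prompt(blocks):
    """Join rendered Blocks into one prompt, one section per Block."""
    return "\n".join(render_block(*b) for b in blocks)

prompt = build_prompt([
    ("Subject", "A patient on a video call with a doctor",
     {"view": "patient's point of view", "mood": "warm, smiling"}),
    ("Goal", "Generate a horizontal illustration for a healthcare article"),
    ("Rules", "No text in the image",
     {"contrast": "low", "colors": "warm"}),
])
print(prompt)
```

The point of the structure is that each Block stays a self-contained unit, so adding or removing a Parameter never disturbs the rest of the prompt.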

Later, I designed an interface that lets users actually build prompts from these Blocks and Parameters. Because prompt engineering feels similar to coding, I gave it a code-like interface. When you type /, a component menu appears where you can select Blocks and Parameters. The usual flow is to pick a Block first, then add Parameters inside it. Once the prompt is complete, you press the Generate button and the output appears on the right side of the screen.

Next, I tested this system by writing prompts using all four methods: Zero-shot, Few-shot, CoT, and ReAct. The results were fascinating. Zero-shot gave me the simplest yet most accurate result. Few-shot followed the input-output example pattern faithfully and produced consistent, concise text. CoT didn’t apply the five steps as part of the generation process, but instead used them as the structure of the writing itself. The output even included numbered sections like 1, 2, and 3. ReAct included real information from the reference link I had provided. All of them, except CoT, matched my intent. From CoT, I learned that if I want the model to follow a step-by-step process while generating, I need to specify that more clearly in the prompt.
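For reference, the four methods differ mainly in what the prompt itself contains. Here is a rough sketch of the four shapes; the task and wording are illustrative, not the exact prompts from my test:

```python
# Illustrative templates for the four prompting methods.
# The wording is a sketch, not the exact prompts used in the experiment.

task = "Summarize the benefits of telehealth in two sentences."

# Zero-shot: just the instruction, no examples.
zero_shot = task

# Few-shot: show an input->output pair before the real task.
few_shot = (
    "Input: Summarize the benefits of exercise in two sentences.\n"
    "Output: Exercise improves cardiovascular health and mood. It also "
    "supports better sleep and weight management.\n\n"
    f"Input: {task}\nOutput:"
)

# Chain-of-Thought: ask the model to reason through steps before answering.
cot = (
    f"{task}\n"
    "Think step by step: list the main benefits first, "
    "then condense them into two sentences."
)

# ReAct: interleave reasoning with actions, such as consulting a source.
react = (
    f"{task}\n"
    "Use this loop: Thought (what do I still need?), Action (consult the "
    "reference link), Observation (what it says), then give the final answer."
)

for name, p in [("Zero-shot", zero_shot), ("Few-shot", few_shot),
                ("CoT", cot), ("ReAct", react)]:
    print(f"--- {name} ---\n{p}\n")
```

Seen side by side like this, it is easier to predict the behavior I observed: Few-shot imitates the example pair, while CoT only describes steps, which is why the model treated them as an outline for the writing rather than a hidden reasoning process.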

In the end, I realized that having even a simple Block and Parameter guideline makes prompt writing much clearer and easier. Once I understood what to include, writing prompts became more intuitive and less stressful. I plan to keep using this system. As my prompt-writing experience builds up, I believe AI’s responses will become more precise and realistic. And when that day comes, I’ll be back with another post to share how it went. See you then 👋

[ReAct]

[Few-Shot]

[CoT]

[Zero-Shot]

Schulhoff, S., Ilie, M., Balepur, N., Kahadze, K., Liu, A., Si, C., Li, Y., Gupta, A., Han, H., Schulhoff, S., Dulepet, P. S., Vidyadhara, S., Ki, D., Agrawal, S., Pham, C., Kroiz, G., Li, F., Tao, H., Srivastava, A., . . . Resnik, P. (2024). The Prompt Report: A Systematic Survey of Prompt Engineering Techniques. arXiv. https://arxiv.org/abs/2406.06608

Bsharat, S. M., Myrzakhan, A., & Shen, Z. (2023). Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4. arXiv. https://arxiv.org/abs/2312.16171

Sahoo, P., Singh, A. K., Saha, S., Jain, V., Mondal, S., & Chadha, A. (2024). A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications. arXiv. https://arxiv.org/abs/2402.07927

Gozalo-Brizuela, R., & Merchan, E. E. G. (2024). A Survey of Generative AI Applications. Journal of Computer Science, 20(8), 801-818. https://doi.org/10.3844/jcssp.2024.801.818
