# Prompt Engineering


Prompt engineering is the practice of designing and refining inputs (called prompts) to get the most useful, accurate, or creative responses from AI systems such as language models (e.g., ChatGPT or Gemini). It’s about figuring out what to say to an AI, and how to say it, to make it do what you want.

Article Pietro Di Leo · Oct 9, 2025 6m read

Introduction

In my previous article, I introduced the FHIR Data Explorer, a proof-of-concept application that connects InterSystems IRIS, Python, and Ollama to enable semantic search and visualization over healthcare data in FHIR format. The project is currently participating in the InterSystems External Language Contest.

In this follow-up, we’ll see how I integrated Ollama to generate patient history summaries directly from structured FHIR data stored in IRIS, using lightweight local large language models (LLMs) such as Llama 3.2:1B or Gemma 2:2B.

The goal was to build a completely local AI pipeline that can extract, format, and narrate patient histories while keeping data private and under full control.

All patient data used in this demo comes from FHIR bundles, which were parsed and loaded into IRIS via the IRIStool module. This approach makes it straightforward to query, transform, and vectorize healthcare data using familiar pandas operations in Python. If you’re curious about how I built this integration, check out my previous article Building a FHIR Vector Repository with InterSystems IRIS and Python through the IRIStool module.
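To make that pipeline concrete, here is a minimal sketch of the summarization step, assuming the patient’s resources have already been loaded into a pandas DataFrame. The column names ("date", "resource_type", "description") and the prompt wording are illustrative, not the project’s actual schema:

```python
# Minimal local-summarization sketch: flatten a patient's structured rows
# into text and ask a small local model (via Ollama) for a narrative summary.
import pandas as pd
import ollama  # pip install ollama; requires a running local Ollama server

def summarize_patient(history_df: pd.DataFrame) -> str:
    # Flatten structured rows (conditions, observations, ...) into plain text
    history = "\n".join(
        f"{row['date']}: {row['resource_type']} - {row['description']}"
        for _, row in history_df.iterrows()
    )
    # The data never leaves the machine: the model runs locally under Ollama
    response = ollama.chat(
        model="llama3.2:1b",
        messages=[
            {"role": "system", "content": "You summarize patient histories for clinicians."},
            {"role": "user", "content": f"Summarize this patient history:\n{history}"},
        ],
    )
    return response["message"]["content"]
```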

Both IRIStool and FHIR Data Explorer are available on the InterSystems Open Exchange — and part of my contest submissions. If you find them useful, please consider voting for them!

Article Henry Pereira · Jul 31, 2025 5m read

artisan cover

If you’ve ever watched a true artisan—whether a potter turning clay into a masterpiece or a luthier bringing raw wood to life as a marvelous guitar—you know the magic isn’t in the materials, but in care, craft, and process. I know this firsthand: my handmade electric guitar is a daily inspiration, but I’ll admit—creating something like that is a talent I don’t have.

Yet, in the digital world, I often see people hoping for “magic” from generative AI by typing vague, context-free prompts like “build an app.” The results are usually frustratingly shallow—no artistry, no finesse. Too many expect AI to work miracles with zero context or structure. That frustration is what motivated us to build dc-artisan—a tool for digital prompt artisans. Our goal: to enable anyone to transform rough, wishful prompts into efficient, functional, and context-rich masterpieces.

Like watching a master artisan transform raw materials into art, creating with GenAI is about intent, preparation, and thoughtful crafting. The problem isn’t with AI itself—it’s how we use it. Just as a luthier must carefully select and shape each piece of wood, effective prompt engineering demands clear context, structure, and intention.

We believe the world deserves more than “magical prompts” that lead to disappointment. Powerful generative AI arises from thoughtful human guidance: precise context, real objectives, and deliberate structure. No artisan creates beauty by accident—reliable AI outputs require care and preparation.

dc-artisan approaches prompt engineering as a true craft—systematic, teachable, and testable. It offers a comprehensive toolkit for moving beyond trial, error, and guesswork.

dc-artisan first aims to understand your prompt the way a thoughtful collaborator would. When you begin drafting, the tool engages directly with your input:

  • Clarifying questions: dc-artisan analyzes your initial prompt and asks focused questions to uncover your core objective, target audience, expected format, and any missing context. For example:
    • “What kind of output are you expecting—text summary, code, or structured data?”
    • “Who is the target audience?”
    • “What type of input or data will this prompt be used with?”

prompt enhance

These interactions help you clarify not just what you want the prompt to say, but also why.

Once your intent is clear, dc-artisan reviews the structure and offers tailored suggestions—enhancing clarity, improving tone, and filling in missing details critical for context-rich, actionable output.

And the best thing? You use all these features right inside your beloved editor, VS Code! You can insert variables directly in your prompt (like {task} or {audience}) for flexibility and reuse, instantly previewing how final prompts look with different substitutions—so you see exactly how they will work in practice.
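For illustration, here is the underlying idea of that {variable} substitution in plain Python (this is not dc-artisan’s implementation, just the pattern it builds on):

```python
# Previewing one prompt template under different substitutions.
TEMPLATE = (
    "You are writing for {audience}.\n"
    "Task: {task}\n"
    "Keep the answer under 200 words."
)

for audience in ("clinicians", "first-year students"):
    # Each substitution yields a concrete, ready-to-send prompt
    print(TEMPLATE.format(task="explain what a vector database is", audience=audience))
    print("-" * 40)
```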

But that’s not all. dc-artisan supports prompt tuning for optimal performance. Upload a CSV of test cases to automatically evaluate consistency, output quality, and the impact of your prompt structure across varied inputs. dc-artisan evaluates each response and generates comprehensive reports with quality scores and similarity metrics—so you can measure and optimize your prompts’ effectiveness with confidence.
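The sketch below shows the general shape of that kind of CSV-driven evaluation. The column names ("input", "expected") and the call_model() stub are assumptions you would adapt; dc-artisan’s own scoring is more comprehensive:

```python
# CSV-driven prompt testing: run each test case through a model and score
# the output's similarity against an expected answer.
import csv
from difflib import SequenceMatcher

def call_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def evaluate(prompt_template: str, csv_path: str) -> list[dict]:
    results = []
    with open(csv_path, newline="") as f:
        for case in csv.DictReader(f):
            output = call_model(prompt_template.format(input=case["input"]))
            # Crude textual similarity score against the expected answer (0.0-1.0)
            score = SequenceMatcher(None, output, case["expected"]).ratio()
            results.append({"input": case["input"], "score": round(score, 3)})
    return results
```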

testing

Prompting Without Context Isn’t Craft — It’s Chaos

Prompt engineering without structure is like carving wood blindfolded. You might produce something, but it likely won’t play a tune.

Many resort to vague or overloaded prompts—short, ambiguous commands or pages of raw content without structure. Either the model has no real idea what you want, or it’s lost in a swamp of noise.

When a prompt’s context becomes too long or cluttered, even advanced LLMs can lose focus. Instead of reasoning or generating new strategies, they often get distracted, repeating earlier content or sticking to familiar patterns from the beginning of your prompt history. Ironically, larger models with bigger context windows (like 32k tokens) are even more susceptible to this. Simply providing more context (more documents, bigger prompts, entire knowledge bases) frequently backfires, resulting in context overload, missed objectives, and confused outputs.

That’s precisely the gap that RAG (Retrieval-Augmented Generation) is designed to fill: not by giving LLMs more information, but by feeding them the most relevant knowledge at the right moment.

How dc-artisan and RAG Pipeline Mode Help

dc-artisan unifies prompt crafting and context management. It doesn’t just help you write better prompts; it ensures your AI receives curated, relevant information, not a tidal wave of trivia.

With RAG Pipeline Mode, you can:

  • 📄 Upload & Chunk Documents: PDF, DOCX, Markdown, TXT—easily embedded into your vector database (see the sketch below).
  • 🧬 Inspect Chunks: View each atomic unit of embedded text with precision.
  • 🗑️ Smart Cleanup: Delete unwanted or outdated content directly from the extension, keeping your AI’s knowledge base curated and relevant.

rag

This workflow is inspired by the InterSystems Ideas Portal (see DPI-I-557).
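For a feel of what chunking and retrieval involve, here is a generic chunk-embed-retrieve loop using sentence-transformers. It illustrates the core idea only; it is not the extension’s actual code or vector store:

```python
# Chunk a document, embed the chunks, and retrieve the most relevant one
# for a query - the essence of a RAG pipeline.
from sentence_transformers import SentenceTransformer, util

def chunk(text: str, size: int = 500) -> list[str]:
    # Naive fixed-size chunking; production pipelines split on semantic boundaries
    return [text[i:i + size] for i in range(0, len(text), size)]

model = SentenceTransformer("all-MiniLM-L6-v2")
chunks = chunk(open("document.txt").read())
chunk_vecs = model.encode(chunks, convert_to_tensor=True)

query_vec = model.encode("How is the adapter configured?", convert_to_tensor=True)
best = util.cos_sim(query_vec, chunk_vecs).argmax().item()
print(chunks[best])  # the one chunk worth injecting into the prompt
```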

Inside the Backend: InterSystems IRIS Interoperability and a Custom liteLLM Adapter

What truly sets dc-artisan apart is its robust backend, engineered for both interoperability and flexibility. The extension’s engine runs on InterSystems IRIS Interoperability, through a custom-built liteLLM adapter.

This architecture means you’re not locked into a single large language model (LLM) provider. Instead, you can seamlessly connect and switch between a wide range of leading LLM platforms—including OpenAI, Gemini, Claude, Azure OpenAI, and others—all managed from a unified, enterprise-grade backend.
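That flexibility is easy to picture with LiteLLM’s unified interface: the same completion() call can target different providers just by changing the model string. The model names below are examples, and API keys are expected in environment variables:

```python
# One call signature, many providers: LiteLLM routes by model string.
import litellm

for model in ("gpt-4o", "gemini/gemini-1.5-pro", "anthropic/claude-3-5-sonnet-20240620"):
    response = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": "One-sentence summary of FHIR?"}],
    )
    # Responses follow the OpenAI schema regardless of provider
    print(f"{model}: {response.choices[0].message.content}")
```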

Closing Thoughts

More developers are discovering that prompting isn’t about guessing the “magic words.” It’s about thoughtful goals, clear language, and powerful context—writing prompts like engineers, not wizards. Just as luthiers shape wood into instruments with soul, you can shape prompts into reliable, context-enriched AI workflows using tools crafted for your craft.

dc-artisan is more than a tool—it’s a mindset shift from vibe coding toward clarity, precision, and true digital artistry.

🎸 Ready to build prompts with your own hands?
⚙️ Fire up VS Code, install dc-artisan, and start crafting your AI like an artisan—not a magician.

🗳️ And if you like what we’ve built, vote for us in the InterSystems IRIS Dev Tools Contest—your support means a lot!

dc-artisan

Article Henry Pereira · May 29, 2025 6m read


You know that feeling when you get your blood test results and it all looks like Greek? That's the problem FHIRInsight is here to solve. It started with the idea that medical data shouldn't be scary or confusing – it should be something we can all use. Blood tests are incredibly common for checking our health, but let's be honest, understanding them is tough for most folks, and sometimes even for medical staff who don't specialize in lab work. FHIRInsight wants to make that whole process easier and the information more actionable.

FHIRInsight logo

🤖 Why We Built FHIRInsight

It all started with a simple but powerful question:

“Why is reading a blood test still so hard — even for doctors sometimes?”

If you’ve ever looked at a lab result, you’ve probably seen a wall of numbers, cryptic abbreviations, and a “reference range” that may or may not apply to your age, gender, or condition. It’s a diagnostic tool, sure — but without context, it becomes a guessing game. Even experienced healthcare professionals sometimes need to cross-reference guidelines, research papers, or specialist opinions to make sense of it all.

That’s where FHIRInsight steps in.

We didn’t build it just for patients — we built it for the people on the frontlines of care. For the doctors pulling back-to-back shifts, for the nurses catching subtle patterns in vitals, for every health worker trying to make the right call with limited time and lots of responsibility. Our goal is to make their jobs just a little bit easier — by turning dense, clinical FHIR data into something clear, useful, and grounded in real medical science. Something that speaks human.

FHIRInsight does more than just explain lab values. It also:

  • Provides contextual advice on whether a test result is mild, moderate, or severe
  • Suggests potential causes and differential diagnoses based on clinical signs
  • Recommends next steps — whether that’s follow-up tests, referrals, or urgent care
  • Leverages RAG (Retrieval-Augmented Generation) to pull in relevant scientific articles that support the analysis

Imagine a young doctor reviewing a patient’s anemia panel. Instead of Googling every abnormal value or digging through medical journals, they receive a report that not only summarizes the issue but cites recent studies or WHO guidelines that support the reasoning. That’s the power of combining AI and vector search over curated research.

And what about the patient?

They’re no longer left staring at a wall of numbers, wondering what something like “bilirubin 2.3 mg/dL” is supposed to mean — or whether they should be worried. Instead, they get a simple, thoughtful explanation. One that feels more like a conversation than a clinical report. Something they can actually understand — and bring into the discussion with their doctor, feeling more prepared and less anxious.

Because that’s what FHIRInsight is really about: turning medical complexity into clarity, and helping both healthcare professionals and patients make better, more confident decisions — together.

🔍 Under the Hood

Of course, all that simplicity on the surface is made possible by some powerful tech working quietly in the background.

Here’s what FHIRInsight is built on:

  • FHIR (Fast Healthcare Interoperability Resources) — This is the global standard for health data. It’s how we receive structured information like lab results, patient history, demographics, and encounters. FHIR is the language that medical systems speak — and we translate that language into something people can actually use.
  • Vector Search for RAG (Retrieval-Augmented Generation): FHIRInsight enhances its diagnostic reasoning by indexing scientific PDF papers and trusted URLs into a vector database using InterSystems IRIS native vector search. When a lab result looks ambiguous or nuanced, the system retrieves relevant content to support its recommendations — not from memory, but from real, up-to-date research.
  • Prompt Engineering for Medical Reasoning: We’ve fine-tuned our prompts to guide the LLM toward identifying a wide spectrum of blood-related conditions. Whether it’s iron deficiency anemia, coagulopathies, hormonal imbalances, or autoimmune triggers — the prompt guides the LLM through variations in symptoms, lab patterns, and possible causes.
  • LiteLLM Integration: A custom adapter routes requests to multiple LLM providers (OpenAI, Anthropic, Ollama, etc.) through a unified interface, enabling fallback, streaming, and model switching with ease.

All of this happens in a matter of seconds — turning raw lab values into explainable, actionable medical insight, whether you’re a doctor reviewing 30 patient charts or a patient trying to understand what your numbers mean.
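As an example of what the retrieval step can look like against IRIS native vector search, here is a hedged sketch using the intersystems-irispython driver. The table, column names, and embedding model are illustrative, not FHIRInsight’s actual schema:

```python
# Embed a clinical question, then rank stored research chunks in IRIS by
# cosine similarity using native vector search.
import iris  # pip install intersystems-irispython
from sentence_transformers import SentenceTransformer

conn = iris.connect("localhost", 1972, "USER", "_SYSTEM", "SYS")
encoder = SentenceTransformer("all-MiniLM-L6-v2")

query_vec = encoder.encode("causes of elevated bilirubin").tolist()
cursor = conn.cursor()
cursor.execute(
    """SELECT TOP 3 ChunkText
       FROM Research.Papers
       ORDER BY VECTOR_COSINE(Embedding, TO_VECTOR(?, DOUBLE)) DESC""",
    [",".join(str(x) for x in query_vec)],
)
for (chunk_text,) in cursor.fetchall():
    print(chunk_text)
```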

🧩 Creating the LiteLLM Adapter: One Interface to Rule All Models

Behind the scenes, FHIRInsight’s AI-powered reporting is driven by LiteLLM — a brilliant abstraction layer that allows us to call 100+ LLMs (OpenAI, Claude, Gemini, Ollama, etc.) through a single OpenAI-style interface.

But integrating LiteLLM into InterSystems IRIS required something more permanent and reusable than Python scripts tucked away in a Business Operation. So, we created our own LiteLLM Adapter.

Meet LiteLLMAdapter

This adapter class handles everything you’d expect from a robust LLM integration:

  • Accepts parameters like prompt, model, and temperature
  • Loads your environment variables (e.g., API keys) dynamically
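In Python terms, the adapter’s core responsibility might look roughly like this. This is a sketch only: the real dc.LLM.LiteLLMAdapter is an ObjectScript class, and the dotenv usage is an assumption about how keys are loaded:

```python
# Sketch of the adapter's core: accept prompt/model/temperature, load keys,
# and delegate to LiteLLM's unified completion interface.
import litellm
from dotenv import load_dotenv  # pip install python-dotenv

def call_llm(prompt: str, model: str = "openai/gpt-4o", temperature: float = 0.7) -> str:
    load_dotenv()  # pick up provider API keys (OPENAI_API_KEY, etc.) at call time
    response = litellm.completion(
        model=model,
        temperature=temperature,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```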

To plug this into our interoperability production, we wrapped it in a dedicated Business Operation:

  • Handles production configuration via standard LLMModel setting
  • Integrates with the FHIRAnalyzer component for real-time report generation
  • Acts as a central “AI bridge” for any future components needing LLM access

Here’s the core flow simplified:

// Ask the adapter for a completion: prompt, model identifier, and temperature
set response = ##class(dc.LLM.LiteLLMAdapter).CallLLM("Tell me about hemoglobin.", "openai/gpt-4o", 0.7)
// Print the generated text
write response

🧭 Conclusion

When we started building FHIRInsight, our mission was simple: make blood tests easier to understand — for everyone. Not just patients, but doctors, nurses, caregivers... anyone who’s ever stared at a lab result and thought, “Okay, but what does this actually mean?”

We’ve all been there.

By blending the structure of FHIR, the speed of InterSystems IRIS, the intelligence of LLMs, and the depth of real medical research through vector search, we created a tool that turns confusing numbers into meaningful stories. Stories that help people make smarter decisions about their health — and maybe even catch something early that would’ve gone unnoticed.

But FHIRInsight isn’t just about data. It’s about how we feel when we look at data. We want it to feel clear, supportive, and empowering. We want the experience to be... well, kind of like “vibecoding” healthcare — that sweet spot where smart code, good design, and human empathy come together.

We hope you’ll try it, break it, question it — and help us improve it.

Tell us what you’d like to see next. More conditions? More explainability? More personalization?

This is just the beginning — and we’d love for you to help shape what comes next.
