#Python

0 Followers · 456 Posts

Python is an interpreted, high-level programming language for general-purpose programming. Created by Guido van Rossum and first released in 1991, Python has a design philosophy that emphasizes code readability, notably using significant whitespace.

Official site.

InterSystems Python Binding Documentation.

Article Thomas Dyar · Mar 25, 2025 2m read

Introduction

In InterSystems IRIS 2024.3 and subsequent IRIS versions, the AutoML component is delivered as a separate Python package that is installed separately, after IRIS itself is installed. Unfortunately, some recent versions of the Python packages that AutoML relies on have introduced incompatibilities that can cause failures when training models (TRAIN MODEL statement). If you see an error mentioning "TypeError" and the keyword argument "fit_params" or "sklearn_tags", read on for a quick fix.

Root Cause

1
0 231
Article Kate Lau · Oct 13, 2025 13m read

Hi all,

Let's do some more work on test data generation and export the results via a REST API.😁

Here, I would like to reuse the datagen.restservice class that was built in the previous article, Writing a REST api service for exporting the generated patient data in .csv.

This time, we are planning to generate a FHIR bundle that includes multiple resources for testing the FHIR repository.

Here is a reference for you if you want to know more about FHIR: The Concept of FHIR: A Healthcare Data Standard Designed for the Future.

OK... Let's start😆
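Before diving in, here is a rough idea of the target output. This is only a hedged sketch in plain Python (the resource contents and IDs are made up for illustration, not taken from the article): a transaction Bundle can be assembled as an ordinary dictionary and serialized to JSON.

import json
import uuid

# Hedged sketch: a minimal FHIR transaction bundle holding two made-up resources.
patient_uuid = str(uuid.uuid4())
bundle = {
    "resourceType": "Bundle",
    "type": "transaction",
    "entry": [
        {
            "fullUrl": f"urn:uuid:{patient_uuid}",
            "resource": {"resourceType": "Patient",
                         "name": [{"family": "Test", "given": ["Kate"]}]},
            "request": {"method": "POST", "url": "Patient"},
        },
        {
            "resource": {"resourceType": "Observation",
                         "status": "final",
                         "code": {"text": "Body temperature"},
                         "subject": {"reference": f"urn:uuid:{patient_uuid}"},
                         "valueQuantity": {"value": 36.8, "unit": "C"}},
            "request": {"method": "POST", "url": "Observation"},
        },
    ],
}

print(json.dumps(bundle, indent=2))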

6
0 106
Announcement Derek Robinson · Oct 16, 2025

Hi Community,

Are you a Python developer? If so, you can already start building apps with InterSystems IRIS without learning a new programming language!

Use Python with InterSystems IRIS. Try the exercise. 

👨‍💻Try this exercise to get started quickly with using Python's familiar DB-API interface to connect to an InterSystems IRIS database and run SQL queries.
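For reference, here is a minimal sketch of that DB-API flow. It assumes a recent intersystems-irispython release and uses placeholder connection details and a hypothetical table name, so adjust everything to your own instance:

import iris

# Hypothetical connection details; replace with your own instance settings.
conn = iris.connect(hostname="127.0.0.1", port=1972, namespace="USER",
                    username="_SYSTEM", password="SYS")

cur = conn.cursor()
cur.execute("SELECT TOP 5 Name FROM Sample.Person")  # hypothetical table
for row in cur.fetchall():
    print(row[0])

cur.close()
conn.close()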

💬What was your experience with the exercise? Let me know in the comments!

0
0 47
Article Kate Lau · Oct 13, 2025 5m read

Hi all,

It's me again 😁. In the previous article, Writing a REST api service for exporting the generated FHIR bundle in JSON, we actually generated a DocumentReference resource, with its content data encoded in Base64.

Question!! Is it possible to write a REST service for decoding it? I am very curious about what the message data is saying 🤔🤔🤔

OK, Let's start!

1. Create a new utility class datagen.utli.decodefhirjson.cls for decoding the data inside the DocumentReference
 

Class datagen.utli.decodefhirjson Extends %RegisteredObject
{
}
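For context, the decoding itself is straightforward in Python. Here is a hedged sketch (the field path follows the standard DocumentReference layout and the file name is hypothetical; this is not the article's actual code):

import base64
import json

# Load a DocumentReference resource from disk (hypothetical file name).
with open("documentreference.json", "r", encoding="utf-8") as f:
    doc_ref = json.load(f)

# DocumentReference stores its payload under content[].attachment.data as Base64.
b64_data = doc_ref["content"][0]["attachment"]["data"]
decoded = base64.b64decode(b64_data).decode("utf-8")
print(decoded)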
2
1 67
Article Kate Lau · Oct 9, 2025 6m read

Hi,

It's me again 😁. Recently I have been working on generating some fake patient data for testing purposes, with the help of ChatGPT, using Python. At the same time, I would like to share my learning curve.😑

First of all, building a custom REST API service is easy: just extend %CSP.REST.

Creating a REST Service Manually

Let's Start !😂

1. Create a class datagen.restservice which extends  %CSP.REST 

Class datagen.restservice Extends %CSP.REST
{
Parameter CONTENTTYPE = "application/json";
}

 

2. Add a function genpatientcsv() to generate the patient data and package it into a CSV string (a rough sketch of the idea follows).
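Here is a rough sketch of that idea in plain Python, standard library only (the field names and sample values are made up for illustration, not the article's actual generator):

import csv
import io
import random

def genpatientcsv(count=5):
    # Generate fake patient rows and return them as a single CSV string.
    given = ["Alex", "Sam", "Ming", "Priya", "Noor"]
    family = ["Chan", "Lee", "Wong", "Patel", "Smith"]
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["PatientID", "GivenName", "FamilyName", "Gender", "BirthYear"])
    for i in range(1, count + 1):
        writer.writerow([i, random.choice(given), random.choice(family),
                         random.choice(["male", "female"]),
                         random.randint(1940, 2020)])
    return buf.getvalue()

print(genpatientcsv(3))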

3
1 76
Article Pietro Di Leo · Oct 9, 2025 6m read

Introduction

In my previous article, I introduced the FHIR Data Explorer, a proof-of-concept application that connects InterSystems IRIS, Python, and Ollama to enable semantic search and visualization over healthcare data in FHIR format, a project currently participating in the InterSystems External Language Contest.

In this follow-up, we’ll see how I integrated Ollama for generating patient history summaries directly from structured FHIR data stored in IRIS, using lightweight local language models (LLMs) such as Llama 3.2:1B or Gemma 2:2B.

The goal was to build a completely local AI pipeline that can extract, format, and narrate patient histories while keeping data private and under full control.

All patient data used in this demo comes from FHIR bundles, which were parsed and loaded into IRIS via the IRIStool module. This approach makes it straightforward to query, transform, and vectorize healthcare data using familiar pandas operations in Python. If you’re curious about how I built this integration, check out my previous article Building a FHIR Vector Repository with InterSystems IRIS and Python through the IRIStool module.
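As a hedged illustration of that pandas-style workflow (this sketch uses plain pandas with the sqlalchemy-iris dialect rather than the IRIStool API itself, and the connection string, table, and column names are placeholders):

import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection string; adjust user, password, host, port and namespace.
engine = create_engine("iris://_SYSTEM:SYS@localhost:1972/USER")

# Pull FHIR-derived rows into a DataFrame and transform them with ordinary pandas.
df = pd.read_sql("SELECT * FROM SQLUser.Patient", engine)  # hypothetical table
print(df.groupby("Gender").size())                         # hypothetical column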

Both IRIStool and FHIR Data Explorer are available on the InterSystems Open Exchange — and part of my contest submissions. If you find them useful, please consider voting for them!

0
1 44
Article Pietro Di Leo · Oct 9, 2025 4m read

Introduction

In a previous article, I presented the IRIStool module, which seamlessly integrates the pandas Python library with the IRIS database. Now, I'm explaining how we can use IRIStool to leverage InterSystems IRIS as a foundation for intelligent, semantic search over healthcare data in FHIR format.

This article covers what I did to create the database for another of my projects, the FHIR Data Explorer. Both projects are candidates in the current InterSystems contest, so please vote for them if you find them useful.

You can find them at the Open Exchange:

In this article we'll cover:

  • Connecting to InterSystems IRIS database through Python
  • Creating a FHIR-ready database schema
  • Importing FHIR data with vector embeddings for semantic search
0
0 46
Article Pietro Di Leo · Oct 6, 2025 4m read
2
0 73
Article Pietro Di Leo · Oct 6, 2025 5m read

Hi everyone! 👋
I’m excited to share the project I’ve submitted to the current InterSystems .Net, Java, Python, and JavaScript Contest — it’s called IRIStool and Data Manager, and you can find it on the InterSystems Open Exchange and on my GitHub page.

1
2 67
Article Eric Fortenberry · Oct 7, 2025 3m read

While working with external languages for IRIS (such as Python and Node.js), one of the first things you must accomplish is making a connection to an IRIS instance.

For instance, to make a connection in Python (from https://pypi.org/project/intersystems-irispython/):

import iris

# Open a connection to the server
args = {
	'hostname':'127.0.0.1', 
	'port': 1972,
	'namespace':'USER', 
	'username':'username', 
	'password':'password'
}
conn = iris.connect(**args)

# Create an iris object
irispy = iris.createIRIS(conn)

# Create a global array in the USER namespace on the server
irispy.set("hello world!", "myGlobal")  # value first, then the global name

To establish a connection, you must either hard-code connection information in your script or you must prompt the user for the information.

To help manage these IRIS connections in my own projects, I created irisconns on Open Exchange.

irisconns allows you to decouple your connection information from your project/code by allowing you to save that connection information into files that are separate from your code. (Think "DSN" and "ODBC" for your IRIS Native SDK connections.)

Getting Started

To get started with irisconns, create either an irisconns or .irisconns file in your project's working directory, any parent directory to your working directory, or your home directory. Populate your irisconns file with connection information in an INI file format:

# 'default' is the connection returned if no name is provided.
[default]
hostname = localhost
port = 1972
namespace = USER
username = _SYSTEM
# confirm password? true or false?
confirm = false

# This connection name is "TEST".
[TEST]
hostname = test-server
port = 1972
namespace = USER
username = _SYSTEM
# confirm password? true or false?
confirm = false

# This connection name is "PROD".
[PROD]
hostname = prod-server
port = 1972
namespace = %SYS
username = _SYSTEM
# confirm password? true or false?
confirm = false

You will also need to copy the associated irisconns.py or irisconns.js libraries into your project so that you can import the irisconns module from your code. (Currently, only Python and Node.js libraries exist.) You also need to install the IRIS native packages for your programming language:

# Install Dependencies for Python
cp /path/to/irisconns/irisconns.py ./irisconns.py
pip install intersystems-irispython

# Install Dependencies for Node.js
cp /path/to/irisconns/irisconns.js ./irisconns.js
npm install @intersystems/intersystems-iris-native

Using "irisconns"

Once installed, you should be able to use/prompt for your connection configuration:

# Python Connection Example
import irisconns

# default connection
irispy = irisconns.get_irispy()

# named connection
irispy = irisconns.get_irispy('TEST')

# usage
irispy.set('hello world!', 'test', 1)

// JavaScript Connection Example

// Import the package
const irisconns = require('./irisconns.js');

// Wrap in an async function so we can await the connection...
(async () => {
  // 'default' connection
  const iris = await irisconns.get_iris();

  // named (i.e. "PROD") connection
  // const iris = await irisconns.get_iris('PROD');

  // usage
  iris.set('hello world!','test',1);
})()

The above code will produce prompts, similar to the following:

# Connecting to default
Hostname    : localhost (default)
Port        : 11972
Namespace   : USER (default)
Username    : _SYSTEM
Password    : [hidden]
Confirm     : [hidden]

Closing

You can find more information about irisconns on the Open Exchange page. Hopefully you will find it useful!

Thanks!

0
0 33
Question Eugene.Forde · Aug 31, 2025

I’ve been exploring options for connecting Google Cloud Pub/Sub with InterSystems IRIS/HealthShare, but I noticed that IRIS doesn’t seem to ship with any native inbound/outbound adapters for Pub/Sub. Out of the box, IRIS offers adapters for technologies like Kafka, HTTP, FTP, and JDBC, which are great for many use cases, but Pub/Sub appears to be missing from the list.

Has anyone here implemented such an integration successfully?

For example:

2
1 75
Article sween · Mar 31, 2025 8m read

Vanna.AI - Personalized AI InterSystems OMOP Agent

Along this OMOP Journey, from the OHDSI book to Achilles, you can begin to understand the power of the OMOP Common Data Model when you see the mix of well-written R and SQL deriving results for large-scale analytics that are shareable across organizations. I, however, do not have a third-normal-form brain, and about a month ago on the Journey we employed Databricks Genie to generate SQL for us utilizing InterSystems OMOP and Python interoperability. This was fantastic, but it left some magic under the hood in Databricks: how the RAG "model" was being constructed and which LLM was used to pull it off.

This point in the OMOP Journey we met Vanna.ai on the same beaten path...

Vanna is a Python package that uses retrieval augmentation to help you generate accurate SQL queries for your database using LLMs. Vanna works in two easy steps: train a RAG "model" on your data, then ask questions that will return SQL queries which can be set up to run automatically on your database.

Vanna exposes all the pieces to do it ourselves with more control and our own stack against the OMOP Common Data Model.

The approach from the Vanna camp I found particularly fantastic, and conceptually it felt like a magic trick was being performed, and one could certainly argue that was exactly what was happening.

Vanna needs three choices to pull off its magic trick: a SQL database, a vector database, and an LLM. Just envision a dealer handing you three piles and making you choose from each one.


So, if it's not obvious: our SQL database is InterSystems OMOP implementing the Common Data Model, our LLM of choice is Gemini, and for a quick and dirty evaluation we are using Chroma DB as the vector database to get to the point quickly in Python.

Gemini

So I cut a quick key, grew up a little bit, and actually paid for it. I tried the free route with its rate limits of 50 prompts a day and 1 per minute, and it was unsettling... I may be happier being completely broke anyway, so we will see.


InterSystems OMOP

I am using the same fading trial as in the [other posts](https://community.intersystems.com/smartsearch?search=OMOP+Journey). The CDM is loaded with about a 100-patient population per United States region, with the practitioners and organizations to boot.


Vanna

Let's turn the letters (get it?) notebook style and spin the wheel (get it again?) and put Vanna to work...
pip3 install 'vanna[chromadb,gemini,sqlalchemy-iris]'

Let's organize our pythons.

from vanna.chromadb import ChromaDB_VectorStore
from vanna.google import GoogleGeminiChat
from sqlalchemy import create_engine

import pandas as pd
import ssl
import time

Initialize the star of our show and introduce her to our model. Kind of weird, right? Vanna (White) is a model.

class MyVanna(ChromaDB_VectorStore, GoogleGeminiChat):
    def __init__(self, config=None):
        ChromaDB_VectorStore.__init__(self, config=config)
        GoogleGeminiChat.__init__(self, config={'api_key': "shaazButt", 'model': "gemini-2.0-flash"})

vn = MyVanna()

Let's connect to our InterSystems OMOP Cloud deployment using sqlalchemy-iris from @caretdev. The work done on this dialect is quickly becoming the key ingredient for modern data interoperability of IRIS products in the data world.

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.verify_mode = ssl.CERT_OPTIONAL
context.check_hostname = False
context.load_verify_locations("vanna-omop.pem")

engine = create_engine("iris://SQLAdmin:LordFauntleroy!!!@k8s-0a6bc2ca-adb040ad-c7bf2ee7c6-e6b05ee242f76bf2.elb.us-east-1.amazonaws.com:443/USER", connect_args={"sslcontext": context})

conn = engine.connect()

Define a function that takes a SQL query as a string and returns a pandas DataFrame. This gives Vanna a function it can use to run SQL against the OMOP Common Data Model.

def run_sql(sql: str) -> pd.DataFrame:
    df = pd.read_sql_query(sql, conn)
    return df

vn.run_sql = run_sql
vn.run_sql_is_set = True

Feeding the Model with a Menu

The information schema query may need some tweaking depending on your database, but this is a good starting point. It will break up the information schema into bite-sized chunks that can be referenced by the LLM... If you like the plan, run the training step below to train Vanna.

df_information_schema = vn.run_sql("SELECT * FROM INFORMATION_SCHEMA.COLUMNS")

plan = vn.get_training_plan_generic(df_information_schema)
plan

vn.train(plan=plan)

Training

The following are methods for adding training data. Make sure you modify the examples to match your database. DDL statements are powerful because they specify table names, column names, types, and potentially relationships. These DDLs were generated with the now-supported DatabaseConnector, as outlined in this [post](https://community.intersystems.com/post/omop-odyssey-celebration-house-hades).
vn.train(ddl="""
--iris CDM DDL Specification for OMOP Common Data Model 5.4
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.person (
			person_id integer NOT NULL,
			gender_concept_id integer NOT NULL,
			year_of_birth integer NOT NULL,
			month_of_birth integer NULL,
			day_of_birth integer NULL,
			birth_datetime datetime NULL,
			race_source_concept_id integer NULL,
			ethnicity_source_value varchar(50) NULL,
			ethnicity_source_concept_id integer NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.observation_period (
			observation_period_id integer NOT NULL,
			person_id integer NOT NULL,
			observation_period_start_date date NOT NULL,
			observation_period_end_date date NOT NULL,
			period_type_concept_id integer NOT NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.visit_occurrence (
			visit_occurrence_id integer NOT NULL,
			discharged_to_source_value varchar(50) NULL,
			preceding_visit_occurrence_id integer NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.visit_detail (
			visit_detail_id integer NOT NULL,
			person_id integer NOT NULL,
			visit_detail_concept_id integer NOT NULL,
			provider_id integer NULL,
			care_site_id integer NULL,
			visit_detail_source_value varchar(50) NULL,
			visit_detail_source_concept_id integer NULL );

--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.condition_occurrence (
			condition_occurrence_id integer NOT NULL,
			person_id integer NOT NULL,
			visit_detail_id integer NULL,
			condition_source_value varchar(50) NULL,
			condition_source_concept_id integer NULL,
			condition_status_source_value varchar(50) NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.drug_exposure (
			drug_exposure_id integer NOT NULL,
			person_id integer NOT NULL,
			dose_unit_source_value varchar(50) NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.procedure_occurrence (
			procedure_occurrence_id integer NOT NULL,
			person_id integer NOT NULL,
			procedure_concept_id integer NOT NULL,
			procedure_date date NOT NULL,
			procedure_source_concept_id integer NULL,
			modifier_source_value varchar(50) NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.device_exposure (
			device_exposure_id integer NOT NULL,
			person_id integer NOT NULL,
			device_concept_id integer NOT NULL,
			unit_source_value varchar(50) NULL,
			unit_source_concept_id integer NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.observation (
			observation_id integer NOT NULL,
			person_id integer NOT NULL,
			observation_concept_id integer NOT NULL,
			observation_date date NOT NULL,
			observation_datetime datetime NULL,
<SNIP>

""")

Sometimes you may want to add documentation about your business terminology or definitions. Here I like to add the resource names from FHIR that were transformed to OMOP.

vn.train(documentation="Our business is to provide tools for generating evidence in the OHDSI community from the CDM")
vn.train(documentation="Another word for care_site is organization.")
vn.train(documentation="Another word for provider is practitioner.")

Now let's add all the tables from the InterSystems OMOP Common Data Model. There is probably a better way to do this, but I get paid by the byte.

cdmtables = ["care_site", "cdm_source", "cohort", "cohort_definition", "concept", "concept_ancestor", "concept_class", "concept_relationship", "concept_synonym", "condition_era", "condition_occurrence", "cost", "death", "device_exposure", "domain", "dose_era", "drug_era", "drug_exposure", "drug_strength", "episode", "episode_event", "fact_relationship", "location", "measurement", "metadata", "note", "note_nlp", "observation", "observation_period", "payer_plan_period", "person", "procedure_occurrence", "provider", "relationship", "source_to_concept_map", "specimen", "visit_detail", "visit_occurrence", "vocabulary"]
for table in cdmtables:
    vn.train(sql="SELECT * FROM OMOPCDM54." + table)
    time.sleep(60)

I added the ability for Gemini to see the data here; make sure you actually want to do this in your own travels before handing Google your OMOP data with a sleight of hand.

Let's do our best Pat Sajak and boot the shiny Vanna app.

from vanna.flask import VannaFlaskApp
app = VannaFlaskApp(vn,allow_llm_to_see_data=True, debug=False)
app.run()


Skynet!

This is a bit hackish, but it is really where I want to go with AI going forward, integrating with apps: here we ask a question in natural language, which returns a SQL query, and then we immediately use that query against the InterSystems OMOP deployment using sqlalchemy-iris.

import io
import sys

while True:
    old_stdout = sys.stdout
    sys.stdout = io.StringIO()  # Redirect stdout to a dummy stream

    question = 'How Many Care Sites are there in Los Angeles?'
    sys.stdout = old_stdout

    sql_query = vn.ask(question)
    print("Ask Vanna to generate a query from a question of the OMOP database...")
    #print(type(sql_query))
    raw_sql_to_send_to_sqlalchemy_iris = sql_query[0]
    print("Vanna returns the query to use against the database.")
    gar = raw_sql_to_send_to_sqlalchemy_iris.replace("FROM care_site","FROM OMOPCDM54.care_site")
    print(gar)
    print("Now use sqlalchemy-iris with the generated query back to the OMOP database...")

    result = conn.exec_driver_sql(gar)
    #print(result)
    for row in result:
        print(row[0])
    time.sleep(3)

Utilities

At any time you can inspect what OMOP data the Vanna package is able to reference. You can also remove training data if there's obsolete/incorrect information (you can do this through the UI too).
training_data = vn.get_training_data()
training_data
vn.remove_training_data(id='omop-ddl')

About Using IRIS Vectors

Wish me luck here: if I manage to crush all the things that need crushing and resist the sun coming out, I'll implement IRIS vectors in Vanna with the following repo.


1
0 195
Article Alberto Fuentes · Sep 16, 2025 4m read

In the previous article, we saw how to build a customer service AI agent with smolagents and InterSystems IRIS, combining SQL, RAG with vector search, and interoperability.

In that case, we used cloud models (OpenAI) for the LLM and embeddings.

This time, we’ll take it one step further: running the same agent, but with local models thanks to Ollama.


Why run models locally?

Using LLMs in the cloud is the simplest option to get started:

  • ✅ Models already optimized and maintained
  • ✅ Easy access with a simple API
  • ✅ Serverless service: no need to worry about hardware or maintenance
  • ❌ Usage costs
  • ❌ Dependency on external services
  • ❌ Privacy restrictions when sending data

On the other hand, running models locally gives us:

  • ✅ Full control over data and environment
  • ✅ No variable usage costs
  • ✅ Possibility to fine-tune or adapt models with techniques such as LoRA (Low-Rank Adaptation), which allows training certain layers of the model to adapt it to your specific domain without retraining the entire model
  • ❌ Higher resource consumption on your server
  • ❌ Limitations on model size depending on your hardware

That’s where Ollama comes into play.


What is Ollama?

Ollama is a tool that makes it easy to run language models and embeddings on your own computer with a very simple experience:

  • Download models with an ollama pull
  • Run them locally, exposed as an HTTP API
  • Integrate them directly into your applications, just like you would with OpenAI

In short: the same API you’d use in the cloud, but running on your laptop or server.


Basic Ollama setup

First, install Ollama from its website and verify that it works:

ollama --version

Then, download a couple of models:

# Download an embeddings model
ollama pull nomic-embed-text:latest

# Download a language model
ollama pull llama3.1:8b

# See all available models
ollama list

You can test embeddings directly with a curl:

curl http://localhost:11434/api/embeddings -d '{
  "model": "nomic-embed-text:latest",
  "prompt": "Ollama makes it easy to run LLMs locally."
}'
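The same call from Python, in case you prefer it to curl (a small sketch assuming the requests package is installed and Ollama is listening on its default port):

import requests

# Ask the local Ollama server for an embedding of a piece of text.
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text:latest",
          "prompt": "Ollama makes it easy to run LLMs locally."},
    timeout=60,
)
resp.raise_for_status()
embedding = resp.json()["embedding"]
print(len(embedding), embedding[:5])  # vector length and the first few components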

Using Ollama in the IRIS agent

The Customer Support Agent Demo repository already includes the configuration for Ollama. You just need to:

  1. Download the models you need and run them in Ollama. I used nomic-embed-text for vector search embeddings and devstral as the LLM.

  2. Configure IRIS to use Ollama embeddings with the local model:

INSERT INTO %Embedding.Config (Name, Configuration, EmbeddingClass, VectorLength, Description)
  VALUES ('ollama-nomic-config', 
          '{"apiBase":"http://host.docker.internal:11434/api/embeddings", 
            "modelName": "nomic-embed-text:latest"}',
          'Embedding.Ollama', 
          768,  
          'embedding model in Ollama');
  3. Adjust the column size to store vectors in the sample tables (the local model has a different vector size than the original OpenAI one).
ALTER TABLE Agent_Data.Products DROP COLUMN Embedding;
ALTER TABLE Agent_Data.Products ADD COLUMN Embedding VECTOR(FLOAT, 768);

ALTER TABLE Agent_Data.DocChunks DROP COLUMN Embedding;
ALTER TABLE Agent_Data.DocChunks ADD COLUMN Embedding VECTOR(FLOAT, 768);
  4. Configure the .env environment file to specify the models we want to use:
OPENAI_MODEL=devstral:24b-small-2505-q4_K_M
OPENAI_API_BASE=http://localhost:11434/v1
EMBEDDING_CONFIG_NAME=ollama-nomic-config
  5. Update the embeddings

Since we have a different embedding model than the original, we need to update the embeddings using the local nomic-embed-text:

python scripts/embed_sql.py
  6. Run the agent so that it uses the new configuration

The code will now use the configuration so that both embeddings and the LLM are served from the local endpoint.

With this configuration, you can ask questions such as:

  • “Where is my order #1001?”
  • “What is the return period?”

And the agent will use:

  • IRIS SQL for structured data
  • Vector search with Ollama embeddings (local)
  • Interoperability to simulate external API calls
  • A local LLM to plan and generate code that calls the necessary tools to obtain the answer

Conclusion

Thanks to Ollama, we can run our Customer Support Agent with IRIS without relying on the cloud:

  • Privacy and control of data
  • Zero cost per token
  • Total flexibility to test and adapt models (LoRA)

The challenge? You need a machine with enough memory and CPU/GPU to run large models. But for prototypes and testing, it’s a very powerful and practical option.


Useful references

0
5 92
Article Alberto Fuentes · Sep 1, 2025 7m read

Customer support questions span structured data (orders, products 🗃️), unstructured knowledge (docs/FAQs 📚), and live systems (shipping updates 🚚). In this post we’ll ship a compact AI agent that handles all three—using:

  • 🧠 Python + smolagents to orchestrate the agent’s “brain”
  • 🧰 InterSystems IRIS for SQL, Vector Search (RAG), and Interoperability (a mock shipping status API)

⚡ TL;DR (snack-sized)

  • Build a working AI Customer Support Agent with Python + smolagents orchestrating tools on InterSystems IRIS (SQL, Vector Search/RAG, Interoperability for a mock shipping API).
  • It answers real questions (e.g., “Was order #1001 delivered?”, “What’s the return window?”) by combining tables, documents, and interoperability calls.
  • You’ll spin up IRIS in Docker, load schema and sample data, embed docs for RAG, register tools (SQL/RAG/API), and run the agent via CLI or Gradio UI.



🧭 What you’ll build

An AI Customer Support Agent that can:

  • 🔎 Query structured data (customers, orders, products, shipments) via SQL
  • 📚 Retrieve unstructured knowledge (FAQs & docs) via RAG on IRIS Vector Search
  • 🔌 Call a (mock) shipping API via IRIS Interoperability, with Visual Trace to inspect every call

Architecture (at a glance)

User ➜ Agent (smolagents CodeAgent)
               ├─ SQL Tool ➜ IRIS tables
               ├─ RAG Tool ➜ IRIS Vector Search (embeddings + chunks)
               └─ Shipping Tool ➜ IRIS Interoperability (mock shipping) ➜ Visual Trace

New to smolagents? It’s a tiny agent framework from Hugging Face where the model plans and uses your tools—other alternatives are LangGraph and LlamaIndex.


🧱 Prerequisites

  • 🐍 Python 3.9+
  • 🐳 Docker to run IRIS in a container
  • 🧑‍💻 VS Code handy to check out the code
  • 🔑 OpenAI API key for the LLM + embeddings — or run locally with Ollama if you prefer

1) 🧩 Clone & set up Python

git clone https://github.com/intersystems-ib/customer-support-agent-demo
cd customer-support-agent-demo

python -m venv .venv
# macOS/Linux
source .venv/bin/activate
# Windows (PowerShell)
# .venv\Scripts\Activate.ps1

pip install -r requirements.txt
cp .env.example .env   # add your OpenAI key

2) 🐳 Start InterSystems IRIS (Docker)

docker compose build
docker compose up -d

Open the Management Portal (http://localhost:52773 in this demo).


3) 🗃️ Load the structured data (SQL)

From SQL Explorer (Portal) or your favorite SQL client:

LOAD SQL FROM FILE '/app/iris/sql/schema.sql' DIALECT 'IRIS' DELIMITER ';';
LOAD SQL FROM FILE '/app/iris/sql/load_data.sql' DIALECT 'IRIS' DELIMITER ';';

This is the schema you have just loaded (schema diagram in the original post).

Run some queries and get familiar with the data. The agent will use this data to resolve questions:

-- List customers
SELECT * FROM Agent_Data.Customers;

-- Orders for a given customer
SELECT o.OrderID, o.OrderDate, o.Status, p.Name AS Product
FROM Agent_Data.Orders o
JOIN Agent_Data.Products p ON o.ProductID = p.ProductID
WHERE o.CustomerID = 1;

-- Shipment info for an order
SELECT * FROM Agent_Data.Shipments WHERE OrderID = 1001;

✅ If you see rows, your structured side is ready.


4) 📚 Add unstructured knowledge with Vector Search (RAG)

Create an embedding config (example below uses an OpenAI embedding model—tweak to taste):

INSERT INTO %Embedding.Config
  (Name, Configuration, EmbeddingClass, VectorLength, Description)
VALUES
  ('my-openai-config',
   '{"apiKey":"YOUR_OPENAI_KEY","sslConfig":"llm_ssl","modelName":"text-embedding-3-small"}',
   '%Embedding.OpenAI',
   1536,
   'a small embedding model provided by OpenAI');

Need the exact steps and options? Check the documentation

Then embed the sample content:

python scripts/embed_sql.py

Check the embeddings are already in the tables:

SELECT COUNT(*) AS ProductChunks FROM Agent_Data.Products;
SELECT COUNT(*) AS DocChunks     FROM Agent_Data.DocChunks;

🔎 Bonus: Hybrid + vector search directly from SQL with EMBEDDING()

A major advantage of IRIS is that you can perform semantic (vector) search right inside SQL and mix it with classic filters—no extra microservices needed. The EMBEDDING() SQL function generates a vector on the fly for your query text, which you can compare against stored vectors using operations like VECTOR_DOT_PRODUCT.

Example A — Hybrid product search (price filter + semantic ranking):

SELECT TOP 3
    p.ProductID,
    p.Name,
    p.Category,
    p.Price,
    VECTOR_DOT_PRODUCT(p.Embedding, EMBEDDING('headphones with ANC', 'my-openai-config')) score
FROM Agent_Data.Products p
WHERE p.Price < 200
ORDER BY score DESC

Example B — Semantic doc-chunk lookup (great for feeding RAG answers):

SELECT TOP 3
    c.ChunkID  AS chunk_id,
    c.DocID      AS doc_id,
    c.Title         AS title,
    SUBSTRING(c.ChunkText, 1, 400) AS snippet,
    VECTOR_DOT_PRODUCT(c.Embedding, EMBEDDING('warranty coverage', 'my-openai-config')) AS score
FROM Agent_Data.DocChunks c
ORDER BY score DESC

Why this is powerful: you can pre-filter by price, category, language, tenant, dates, etc., and then rank by semantic similarity—all in one SQL statement.


5) 🔌 Wire a live (mock) shipping API with Interoperability

The project exposes a tiny /api/shipping/status endpoint through IRIS Interoperability—perfect to simulate “real world” calls:

curl -H "Content-Type: application/json" \
  -X POST \
  -d '{"orderStatus":"Processing","trackingNumber":"DHL7788"}' \
  http://localhost:52773/api/shipping/status

Now open Visual Trace in the Portal to watch the message flow hop-by-hop (it’s like airport radar for your integration ✈️).


6) 🤖 Meet the agent (smolagents + tools)

Peek at these files:

  • agent/customer_support_agent.py — boots a CodeAgent and registers tools
  • agent/tools/sql_tool.py — parameterized SQL helpers
  • agent/tools/rag_tool.py — vector search + doc retrieval
  • agent/tools/shipping_tool.py — calls the Interoperability endpoint

The CodeAgent plans with short code steps and calls your tools. You bring the tools; it brings the brains, using an LLM.


7) ▶️ Run it!

One-shot (quick tests)

python -m cli.run --email alice@example.com --message "Where is my order #1001?"
python -m cli.run --email alice@example.com --message "Show electronics that are good for travel"
python -m cli.run --email alice@example.com --message "Was my headphones order delivered, and what’s the return window?"

Interactive CLI

python -m cli.run --email alice@example.com

Web UI (Gradio)

python -m ui.gradio
# open http://localhost:7860

🛠️ Under the hood

The agent’s flow (simplified):

  1. 🧭 Plan how to resolve the question and which available tools must be used: e.g., “check order status → fetch returns policy”.

  2. 🛤️ Call tools as needed

    • 🗃️ SQL for customers/orders/products
    • 📚 RAG over embeddings for FAQs/docs (and remember, you can prototype RAG right inside SQL using EMBEDDING() + vector ops as shown above)
    • 🔌 Interoperability API for shipping status
  3. 🧩 Synthesize: stitch results into a friendly, precise answer.

Add or swap tools as your use case grows: promotions, warranties, inventory, you name it.


🎁 Wrap-up

You now have a compact AI Customer Support Agent that blends:

  • 🧠 LLM reasoning (smolagents CodeAgent)
  • 🗃️ Structured data (IRIS SQL)
  • 📚 Unstructured knowledge (IRIS Vector Search + RAG) — with the bonus that EMBEDDING() lets you do hybrid + vector search directly from SQL
  • 🔌 Live system calls (IRIS Interoperability + Visual Trace)
0
1 101
Article Muhammad Waseem · Aug 18, 2025 7m read

Interoperability on Python (IoP) is a proof-of-concept project designed to showcase the power of the InterSystems IRIS Interoperability Framework when combined with a Python-first approach. IoP leverages Embedded Python (a feature of InterSystems IRIS) to enable developers to write interoperability components in Python that integrate seamlessly with the robust IRIS platform. This guide has been crafted for beginners and provides a comprehensive introduction to IoP, its setup, and practical steps to create your first interoperability component. By the end of this article, you will have a clear understanding of how to use IoP to build scalable, Python-based interoperability solutions.
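To give a flavor of what a Python-first component looks like, here is a minimal sketch assuming IoP's BusinessOperation base class and its on_init/on_message hooks (the class name and behavior are made up for illustration; see the full article for the real setup):

from iop import BusinessOperation

class GreetingOperation(BusinessOperation):
    # Hypothetical operation: log the request it receives and echo it back.

    def on_init(self):
        # Called once when the component starts inside the production.
        self.log_info("GreetingOperation started")

    def on_message(self, request):
        # Called for every message routed to this operation.
        self.log_info(f"Received: {request}")
        return request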

6
5 326
Article Ashok Kumar T · Jul 21, 2025 13m read

This article is a continuation of the IRIS JSON project and features additional methods and insights.

Let's continue with the instance methods

%GetTypeOf(Key)

This instance method is used to determine the JSON data type of the value stored at a given key in a %DynamicObject or %DynamicArray.

It returns one of the following strings:

1
3 239
Question Hannah Sullivan · Aug 6, 2025

I have a custom Buffer class designed to capture written/printed statements to the device, so that the captured text can be transformed into a string or stream type. I have used this in ObjectScript to capture ObjectScript write statements and return a string. I would like to try to use this with a [ Language = python ] method as follows. This class will be called by a scheduled task.
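For comparison, one possible approach purely on the Python side (not the asker's Buffer class, just a standard-library sketch of capturing printed output inside a [ Language = python ] method):

import contextlib
import io

def run_and_capture():
    # Run some code and return everything it printed as a single string.
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        print("hello from embedded Python")  # stand-in for the real work
    return buffer.getvalue()

captured = run_and_capture()
print("captured:", repr(captured))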

4
0 79
Article Guillaume Rongier · Jul 31, 2025 4m read


This article will introduce you to the concept of virtual environments in Python, which are essential for managing dependencies and isolating projects from the OS.

What is a Virtual Environment?

A virtual environment is a folder that contains:

  • A specific version of Python
  • Initially, an empty site-packages directory

Virtual environments will help you to isolate your project from the OS Python installation and from other projects.

How to use it?

To use virtual environments, you can follow these steps:

  1. Create a virtual environment: You can create a virtual environment using the venv module that comes with Python. Open your terminal and run:

    python -m venv .venv
    

    Replace .venv with your desired environment name.

  2. Activate the virtual environment: After creating the virtual environment, you need to activate it. The command varies depending on your operating system:

    • On Windows:
    .venv\Scripts\Activate.ps1
    

    If you encounter an error, you may need to run the following command in your terminal:

    Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope Process -Force; .venv\Scripts\Activate.ps1
    
    • On macOS and Linux:
    source .venv/bin/activate
    

Once activated, your terminal prompt will change to indicate that you are now working within the virtual environment.

Example:

(.venv) user@machine:~/project$

Notice the (.venv) prefix in the terminal prompt, which indicates that the virtual environment is active.

Now you can install packages using pip, and they will be installed in the virtual environment rather than in the global Python installation.
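A quick way to confirm from Python itself that you are inside the virtual environment is to compare sys.prefix with sys.base_prefix; they differ when a venv is active:

import sys

# In an active venv, sys.prefix points at the .venv folder,
# while sys.base_prefix still points at the base Python installation.
print("prefix     :", sys.prefix)
print("base prefix:", sys.base_prefix)
print("in venv    :", sys.prefix != sys.base_prefix)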

Can I use Virtual Environments in IRIS?

Humm, good question!

The answer is simple: Yes and No.

  • No, because IRIS does not officially support virtual environments.
  • Yes, because after going through all those articles we now understand how Python works, how IRIS works, and what a virtual environment is, so maybe we can simulate a virtual environment within IRIS with the right configuration and setup.

How to simulate a virtual environment in IRIS?

A virtual environment is two things:

  • A specific version of Python
  • A site-packages directory

We have in IRIS what we call the Flexible Python Runtime, which allows us to:

  • use a specific version of Python.
  • update the sys.path to include a specific directory.

So, we can simulate a virtual environment in IRIS by using the Flexible Python Runtime to select a specific version of Python and configuring sys.path to include a specific site-packages directory. 🥳

Setting up a Flexible Python Runtime in IRIS is easy; you can follow the steps in the IRIS documentation.

In a nutshell, you need to:

  1. Configure the PythonRuntimeLibrary to point to the Python shared library file of the specific Python version you want to use.

    Example:

    • Windows : C:\Program Files\Python311\python3.dll (Python 3.11 on Windows)
    • Linux : /usr/lib/x86_64-linux-gnu/libpython3.11.so.1.0 (Python 3.11 on Ubuntu 22.04 on the x86 architecture)
  2. Configure the PythonPath to point to the site-packages directory of the specific Python version you want to use.

    Example:

    • Use your virtual environment site-packages directory, which is usually located in the .venv/lib/python3.x/site-packages directory.

⚠️ This will set up your whole IRIS instance to use a specific version of Python and a specific site-packages directory.

🩼 Limitation :

  • You will not end up with exactly the same sys.path as in a virtual environment, because IRIS automatically adds some directories to sys.path, like <installation_directory>/lib/python and others we have seen in the module article.

🤫 If you want to make it automatic, you can use this awesome package: iris-embedded-python-wrapper

To use it, you need to:

Be in your venv environment, then install the package:

(.venv) user@machine:~/project$
pip install iris-embedded-python-wrapper

Then, simply bind this venv to IRIS with the following command:

(.venv) user@machine:~/project$
bind_iris

You will see the following message:

INFO:iris_utils._find_libpyton:Created backup at /opt/intersystems/iris/iris.cpf.0f4a1bebbcd4b436a7e2c83cfa44f515
INFO:iris_utils._find_libpyton:Created merge file at /opt/intersystems/iris/iris.cpf.python_merge
IRIS Merge of /opt/intersystems/iris/iris.cpf.python_merge into /opt/intersystems/iris/iris.cpf
INFO:iris_utils._find_libpyton:PythonRuntimeLibrary path set to /usr/local/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/Python
INFO:iris_utils._find_libpyton:PythonPath set to /xxxx/.venv/lib/python3.11/site-packages
INFO:iris_utils._find_libpyton:PythonRuntimeLibraryVersion set to 3.11

To unbind the venv from IRIS, you can use the following command:

(.venv) user@machine:~/project$
unbind_iris

Conclusion

We have seen the benefits of using virtual environments in Python, how to create and use them, and how to simulate a virtual environment in IRIS using the Flexible Python Runtime.

3
2 133
Question Oliver Wilms · Apr 21, 2025

I am brand new to using AI. I downloaded some medical visit progress notes from my Patient Portal and extracted the text from the PDF files. I found a YouTube video that showed how to extract metadata using an OpenAI query/prompt such as this one:

ollama-ai-iris/data/prompts/medical_progress_notes_prompt.txt at main · oliverwilms/ollama-ai-iris
 

I combined @Rodolfo Pscheidt Jr https://github.com/RodolfoPscheidtJr/ollama-ai-iris with some files from @Guillaume Rongier https://openexchange.intersystems.com/package/iris-rag-demo.

I attempted to run

1
0 128