# InterSystems IRIS

1 Follower · 5.4K Posts

InterSystems IRIS is a Complete Data Platform
InterSystems IRIS gives you everything you need to capture, share, understand, and act upon your organization’s most valuable asset – your data.
As a complete platform, InterSystems IRIS eliminates the need to integrate multiple development technologies. Applications require less code, fewer system resources, and less maintenance.

Article Andrew Sklyarov · Oct 3, 2025 8m read

I was really surprised that such a flexible integration platform, with a rich toolset specifically for app connections, has no out-of-the-box Enterprise Service Bus solution like Apache ServiceMix, Mule ESB, or SAP PI/PO. What's the reason? What do you think? Has this pattern lost its relevance completely nowadays, and has everybody moved to message brokers, maybe?

Article Raef Youssef · Oct 2, 2025 5m read

Why This Matters

Managing IAM can be tedious when done manually — especially when your APIs are already well-documented using OpenAPI (Swagger) specs. Wouldn't it be great if you could automatically generate Kong services and routes directly from your OpenAPI spec?

That's exactly what this ObjectScript method does: it reads an OpenAPI 2.0 spec stored in the XData block of your spec class and generates a decK-compatible YAML file that can be used to sync your IAM configuration.

This approach:

  • Reduces manual configuration errors
  • Keeps your gateway in sync with your API spec
  • Speeds up deployment and onboarding

Prerequisites:

  • InterSystems IRIS or an IRIS-based platform
  • InterSystems API Manager (IAM)
  • decK CLI tool

What the Method Does

The method ConvertOpenAPIXDataToDeckYAML:

  1. Reads the OpenAPI spec from an XData block named OpenAPI in a given class.
  2. Parses the JSON into a dynamic object.
  3. Extracts endpoints and HTTP methods.
  4. Generates a YAML file that defines:
    • A Kong service pointing to the API host and base path
    • Routes for each endpoint
    • A rate-limiting plugin on each route (optional enhancement)

Example Spec Class

You can use the sample spec class below, or use the class generated from posting the spec file included in the previous post linked below.

Class MyApp.spec
{
XData OpenAPI [ MimeType = application/json ]
{
{
  "swagger": "2.0",
  "host": "api.example.com",
  "basePath": "/v1",
  "paths": {
    "/users": {
      "get": { "summary": "Get users" },
      "post": { "summary": "Create user" }
    },
    "/products": {
      "get": { "summary": "Get products" }
    }
  }
}
}
}

The Method and Use

Below is the ClassMethod. You can, of course, tweak it to suit your needs.

/// Convert OpenAPI XData to decK YAML
ClassMethod ConvertOpenAPIXDataToDeckYAML(specClassName As %String, outputFilePath As %String = "") As %Status
{
    Try {
        // Read the XData block named "OpenAPI"
        Set reader = ##class(%Dictionary.XDataDefinition).%OpenId(specClassName _ "||OpenAPI")
        If reader = "" {
            Write "Error: XData block 'OpenAPI' not found in class ", specClassName, !
            Return $$$ERROR($$$GeneralError, "XData block 'OpenAPI' not found in class " _ specClassName)
        }
        // Read the stream content into a string
        Set stream = reader.Data
        Set specJSON = ""
        While 'stream.AtEnd {
            Set specJSON = specJSON _ stream.ReadLine()
        }

        // Parse the JSON into a dynamic object
        Set spec = ##class(%DynamicObject).%FromJSON(specJSON)

        // Use LF as the line terminator so the YAML parses cleanly
        Set eol = $CHAR(10)

        // Initialize YAML structure
        Set deckYAML = "services:" _ eol

        // Extract host and basePath from the spec
        Set host = spec.host
        Set basePath = spec.basePath

        // Create service block
        Set deckYAML = deckYAML _ "  - name: " _ host _ eol
        Set deckYAML = deckYAML _ "    url: " _ $CHAR(34) _ "http://" _ host _ basePath _ $CHAR(34) _ eol
        Set deckYAML = deckYAML _ "    routes:" _ eol

        // Iterate over paths; each value holds the HTTP methods for that path
        Set pathIter = spec.paths.%GetIterator()
        While pathIter.%GetNext(.pathKey, .pathValue) {
            Set routeName = $REPLACE(pathKey, "/", "")
            Set deckYAML = deckYAML _ "      - name: " _ routeName _ eol
            Set deckYAML = deckYAML _ "        strip_path: false" _ eol
            Set deckYAML = deckYAML _ "        preserve_host: false" _ eol
            Set deckYAML = deckYAML _ "        paths:" _ eol
            Set deckYAML = deckYAML _ "          - " _ pathKey _ eol
            Set deckYAML = deckYAML _ "        methods:" _ eol
            Set methodIter = pathValue.%GetIterator()
            While methodIter.%GetNext(.methodKey, .methodValue) {
                Set deckYAML = deckYAML _ "          - " _ $ZCONVERT(methodKey, "U") _ eol
            }
            // Optional enhancement: a rate-limiting plugin on each route
            Set deckYAML = deckYAML _ "        plugins:" _ eol
            Set deckYAML = deckYAML _ "        - name: rate-limiting" _ eol
            Set deckYAML = deckYAML _ "          config:" _ eol
            Set deckYAML = deckYAML _ "            minute: 20" _ eol
            Set deckYAML = deckYAML _ "            hour: 500" _ eol
        }

        // Output to file or console
        If outputFilePath '= "" {
            Set file = ##class(%Stream.FileCharacter).%New()
            Set file.Filename = outputFilePath
            Do file.Write(deckYAML)
            Do file.%Save()
            Write "YAML saved to: ", outputFilePath, !
        } Else {
            Write deckYAML, !
        }
        Return $$$OK
    } Catch ex {
        Write "Error: ", ex.DisplayString(), !
        Return ex.AsStatus()
    }
}

To use the method and have it generate a YAML file, just call it from a terminal session (the IRIS CLI):

Do ##class(MyUtils.APIConverter).ConvertOpenAPIXDataToDeckYAML("MyApp.spec", "/tmp/deck.yaml")

Or you can just print the YAML to the console:

Do ##class(MyUtils.APIConverter).ConvertOpenAPIXDataToDeckYAML("MyApp.spec")
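For the sample spec class above, the generated YAML should look roughly like this (a sketch of what the method as written produces; exact spacing may differ):

services:
  - name: api.example.com
    url: "http://api.example.com/v1"
    routes:
      - name: users
        strip_path: false
        preserve_host: false
        paths:
          - /users
        methods:
          - GET
          - POST
        plugins:
        - name: rate-limiting
          config:
            minute: 20
            hour: 500
      - name: products
        strip_path: false
        preserve_host: false
        paths:
          - /products
        methods:
          - GET
        plugins:
        - name: rate-limiting
          config:
            minute: 20
            hour: 500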


What to Do Next

Once the YAML file is generated, you can use decK to sync it with your IAM instance:

deck gateway sync --workspace <WorkSpace> deck.yaml

This will create or update services and routes in IAM based on your spec.


Final Thoughts

This method bridges the gap between spec-driven development and gateway configuration. It's ideal for teams using InterSystems IRIS or HealthShare together with IAM in their architecture.

Want to extend it?

  • Include authentication plugins
  • Generate multiple services based on tags

Let me know in the comments or reach out if you'd like help customizing it!

Article Megumi Kakechi · Oct 2, 2025 2m read

InterSystems FAQ rubric

The ^%GCMP utility can be used to compare the contents of two globals.

For example, to compare ^test in the USER namespace with ^test in the SAMPLES namespace, it would look like this:
*In the example below, 700 identical globals are created in the two namespaces, and the contents of one of them is changed to make it the detection target.
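The setup might look like this (a minimal sketch following the description above; the FAQ's actual example may differ):

// Create an identical global in both namespaces
Set $Namespace = "USER"
For i=1:1:700 Set ^test(i) = "value " _ i
Set $Namespace = "SAMPLES"
For i=1:1:700 Set ^test(i) = "value " _ i
// Change one node so it becomes the detection target
Set ^test(350) = "changed"
// Run the comparison utility; it prompts for the globals and namespaces to compare
Set $Namespace = "USER"
Do ^%GCMP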

Question Darima Budazhapova · Oct 2, 2025

Hi community,

A colleague gets ERROR #822: Access denied every time he tries to log in via the Management Portal. It is NOT a case of wrong credentials: I reset his password to a temporary one so it would prompt him to create a new one upon first login. He did get the prompt, changed his password, and his next attempt at logging in displayed the same error.

The audit log record displays this:
Error message: ERROR #862: User is restricted from running application /csp/sys/op, %Admin_Operate:U required -- cannot execute.
Web Application: /csp/sys/op
$I: |TCP|1972|1533396
$P: |TCP|1972|1533396

Article Kurro Lopez · Sep 29, 2025 13m read

I am truly excited to continue my "InterSystems for Dummies" series of articles, and today, we want to tell you everything about one of the most powerful features we have for interoperability.

Hey, even if you have already had a go, we plan to take a really close look at how to get the most out of them and make our production even better.

What Is Record Mapper?

In essence, a Record Mapper is a tool that lets you map data from text files to production messages and vice versa. The Management Portal interface allows you to create a visual representation of a text file and a valid object model of that data, mapping them to a single persistent production message object.

Therefore, if you wish to import data from a CSV file into your persistent class, you can play with a couple of inbound classes to do it (by FTP or File directory... ). Do not rush, though! We will get to each of those points in due course.


TIP: All the examples and classes described in this article can be downloaded from the following link: https://github.com/KurroLopez/iris-recordmap-fordummies.git


How to Start?

Let's get to the point and specify our scenario!

We need to import information from our customers, including their name, date of birth, national identification number, address, city, and country.

Open your IRIS portal and select Interoperability – Build – Record Maps.

Create a new Record Map with the package and class name.

In our example, the package name is Demo.Data, whereas the class name is PersonalInfo.

The first step is to configure the CSV file: determine the separator character, whether the string fields are enclosed in double quotes, and so on.

If you use Windows OS, the common record terminator is CRLF ($CHAR(13) followed by $CHAR(10)).

Since my CSV file is a standard one, separated by a semicolon (;), I must define that character as the field separator.

Now, I am going to declare the fields of the customer profile (name, surname, date of birth, national identification number, address, city, and country).

This is a basic definition, but you can set more conditions regarding your CSV file if you wish.

Remember that by default, a %String field has a maximum length of 50 characters. Therefore, I will update this value to allow more characters in the address field (a maximum of 100).

I will also define the date format using the ISO layout (yyyy-mm-dd), which corresponds to the number 3.

In addition, I will make the first name, surname, and date of birth fields mandatory.

Everything is ready! Let’s go and press the “Generate” button to create the persistent class!

Let's take a look at the generated class:
/// THIS IS GENERATED CODE. DO NOT EDIT.<br/>
/// RECORDMAP: Generated from RecordMap 'Demo.Data.PersonalInfo'
/// on 2025-07-14 at 08:37:00.646 [2025-07-14 08:37:00.646 UTC]
/// by user SuperUser
Class Demo.Data.PersonalInfo.Record Extends (%Persistent, %XML.Adaptor, Ens.Request, EnsLib.RecordMap.Base) [ Inheritance = right, ProcedureBlock ]
{

Parameter INCLUDETOPFIELDS = 1;

Property Name As %String [ Required ];

Property Surname As %String [ Required ];

Property DateOfBirth As %Date(FORMAT = 3) [ Required ];

Property NationalId As %String;

Property Address As %String(MAXLEN = 100);

Property City As %String;

Property Country As %String;

Parameter RECORDMAPGENERATED = 1;

Storage Default
{
<Data name="RecordDefaultData">
<Value name="1">
<Value>%%CLASSNAME</Value>
</Value>
<Value name="2">
<Value>Name</Value>
</Value>
<Value name="3">
<Value>%Source</Value>
</Value>
<Value name="4">
<Value>DateOfBirth</Value>
</Value>
<Value name="5">
<Value>NationalId</Value>
</Value>
<Value name="6">
<Value>Address</Value>
</Value>
<Value name="7">
<Value>City</Value>
</Value>
<Value name="8">
<Value>Country</Value>
</Value>
<Value name="9">
<Value>Surname</Value>
</Value>
</Data>
<DataLocation>^Demo.Data.PersonalInfo.RecordD</DataLocation>
<DefaultData>RecordDefaultData</DefaultData>
<ExtentSize>2000000</ExtentSize>
<IdLocation>^Demo.Data.PersonalInfo.RecordD</IdLocation>
<IndexLocation>^Demo.Data.PersonalInfo.RecordI</IndexLocation>
<StreamLocation>^Demo.Data.PersonalInfo.RecordS</StreamLocation>
<Type>%Storage.Persistent</Type>
}

}

As you can see, each property corresponds to a field in our CSV file.

At this point, we will create a CSV file with the structure below to test our Record Mapper:

Name;Surname;DateOfBirth;NationalId;Address;City;Country
Matthew O.;Wellington;1964-07-31;208-36-1552;1485 Stiles Street;Pittsburgh;USA
Deena C.;Nixon;1997-03-03;495-26-8850;1868 Mandan Road;Columbia;USA
Florence L.;Guyton;2005-04-10;21 069 835 790;Invalidenstrasse 82;Contwig;Germany
Maximilian;Hahn;1945-10-17;92 871 402 258;Boxhagener Str. 97;Hamburg;Germany
Amelio;Toledo Zavala;1976-06-07;93789292F;Plaza Mayor, 71;Carbajosa de la Sagrada;Spain

You can use it as a test now.

Click “Select sample file”, pick the sample in /irisrun/repo/Samples, and choose PersonalInfo-Test.csv.

At this moment, you can observe how your data is being imported:

The Problems Grow

Just as you think everything is ready, you receive a new specification from your boss:

"We need the data to be able to load the client's phone number and store more than one of them (landline, cell phone, etc.)"

Oops… I need to upgrade my Record Map and add a phone number. However, a customer might have more than one… How can I do it?


Note: You can do it directly in the same class. Yet, I will create a new one for explanation purposes and store it in the examples. This way, you can review and run the code, following all the steps in this article.


Okay, it is time to reopen the Record Map we have just created.

Add the new field “Phone”, but remember to indicate that this field is “Repeating” this time.

Since we have appointed this field as "Repeating", we must define the separator character for replicated data. This indicator is in the same place where we typically specify the field separator.

Perfect! Let's load the example CSV file with phone numbers separated by #.

If we take a look at the persistent class we produced, we can notice that the "Phone" field is of type list Of %String:

Property Phone As list Of %String(MAXLEN = 20);
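Once records are stored, the repeating values come back as a standard list collection. Here is a quick way to inspect them (a sketch; it assumes the new class is named Demo.Data.PersonalInfoPhone and a record with ID 1 exists):

// Open a stored record and walk its list of phone numbers
Set record = ##class(Demo.Data.PersonalInfoPhone.Record).%OpenId(1)
For i=1:1:record.Phone.Count() {
    Write "Phone ", i, ": ", record.Phone.GetAt(i), !
}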

Ok Kurro, but How Can We Upload This File?

It is a really nice question, my dear reader.

InterSystems IRIS provides us with two inbound classes: EnsLib.RecordMap.Service.FileService and EnsLib.RecordMap.Service.FTPService.

I will not go into depth with these classes because it would take too long. Yet, we can check out their main functions.

In summary, the service monitors a defined folder, captures files stored in that directory, loads them, reads them line by line, and sends each record to the designated business process.

It works the same way for both local server directories and FTP directories.

Let's get to the point…


Note: I will present my examples using the EnsLib.RecordMap.Service.FileService class. However, EnsLib.RecordMap.Service.FTPService class has the same operations.


If you have downloaded the sample code, you should notice that a production has been built with two components:

A service class (EnsLib.RecordMap.Service.FileService), which loads the files, and a business process class (Demo.BP.ProcessData), which processes each of the records read from the file. In this case, we will use the latter ONLY to view communication traces.
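As a sketch of what that business process can look like (an assumed minimal shape; the class in the sample repo may differ), it only needs an OnRequest method to receive each record:

Class Demo.BP.ProcessData Extends Ens.BusinessProcess
{

Method OnRequest(pRequest As Demo.Data.PersonalInfo.Record, Output pResponse As Ens.Response) As %Status
{
    // Each record read from the file arrives here as a request message;
    // tracing it is enough to make it visible in the message traces.
    $$$TRACE("Received record for: " _ pRequest.Name _ " " _ pRequest.Surname)
    Quit $$$OK
}

}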

It is important to configure some parameters in the business service class.

File Path: It is the directory the class monitors for files pending processing. When a file is placed in this directory, the upload process triggers automatically and sends each record to the class defined as the Business Process.

File Spec: It is a file pattern to search for (by default, it is *, but we can define patterns to differentiate files from other processes). For instance, we can have two inbound listening classes in the same directory, each using a different RecordMap class. We can assign the extension .pi1 for the files to be processed by the PersonalInfo class, whereas .pi2 will flag files to be processed by the PersonalInfoPhone class.

Archive Path: It is a directory where files are moved after being processed.

Work Path: It is a directory where the adapter places the input file while processing the data in it. This setting is beneficial when the same filename is used for repeated file submissions. If Work Path is not specified, the adapter will not move the file during processing.

Call Interval: It is how often (in seconds) the adapter checks for input files in the specified locations.

RecordMap: It is the name of the RecordMap class, containing the definition of the data in the file.

Target Config Name: It is the name of the Business Process that handles the data stored in the file.

Subdirectory Levels: It is how many directory levels the service searches for new files. For instance, if we have a process that adds a file every day (Monday, Tuesday, Wednesday, Thursday, and Friday), it will search all subdirectories starting from the root directory, provided that we specify level 1. By default, level 0 means that it will only search the root directory.

Delete From Server: This setting indicates that the file will be deleted from the root directory if a directory for processed files is not specified.

File Access Timeout: It is the time (in seconds) allowed for accessing the file. If the file is read-only or another problem obstructs access to the directory, an error will be displayed.

Header Count: It is an important setting indicating the number of header lines to ignore. For example, if the file has a header specifying the fields it contains, you must state how many header lines it consists of, so that they can be skipped and only the data lines are read.
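To see where those settings live, here is a sketch of how the service might be declared inside a production class (assumed item names; the paths follow the sample repo, and the sample's actual production may differ):

Class Demo.Production Extends Ens.Production
{

XData ProductionDefinition
{
<Production Name="Demo.Production">
  <Item Name="PersonalInfo.FileService" ClassName="EnsLib.RecordMap.Service.FileService">
    <Setting Target="Adapter" Name="FilePath">/opt/irisbuild/process/</Setting>
    <Setting Target="Adapter" Name="FileSpec">*.csv</Setting>
    <Setting Target="Adapter" Name="ArchivePath">/opt/irisbuild/archive/</Setting>
    <Setting Target="Adapter" Name="CallInterval">5</Setting>
    <Setting Target="Host" Name="RecordMap">Demo.Data.PersonalInfo</Setting>
    <Setting Target="Host" Name="TargetConfigNames">Demo.BP.ProcessData</Setting>
  </Item>
  <Item Name="Demo.BP.ProcessData" ClassName="Demo.BP.ProcessData"/>
</Production>
}

}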

Uploading a File

As I previously mentioned, the upload process is triggered when a file is placed in the process directory. Note: The following instructions are based on the sample code. In the “samples” folder, you can find the file PersonalInfoPhone-Test.csv. You should copy this file to the process folder, and it will be handled automatically.


NOTE: If you are working with a Docker sample, use the following command:

docker cp .\PersonalInfoPhone-Test.csv containerId:/opt/irisbuild/process/

containerId is the id of your container, e.g.: docker cp .\PersonalInfoPhone-Test.csv 66f96b825d43398ba6a1edcb2f02942dc799d09f1b906627e0563b1392a58da1:/opt/irisbuild/process/


For each record, the service makes a call to the business process with all the data.

Amazing job! With just a few steps, you managed to create a process that can read files from a directory and manage that data quickly and easily. What else could you possibly ask for in your Interoperability processes?

Complex Record Map

Nobody wants to have a complex life, but I promise you will fall in love with complex Record Maps!

Complex Record Maps are precisely what their name indicates: a combination of several Record Maps that provides us with more complete and structured information.

Let's imagine that our boss came to us and gave us the following requirements:

“We need customer information with more phone numbers, including country codes and prefixes. We also need more contact addresses, including postal codes, countries, and state names.

One customer can have one phone number, two, or none.”

If we require more information about phone numbers and addresses, as we have previously seen, including this information in a single line would be too complicated. Let's separate the different parts we need:

  • Customer information that is required.
  • Phone numbers, which can be from 0 to 5.
  • Mailing address information, which can be from 0 to 2.

For each section, we will create an alias to differentiate what type of information it includes.

Let's build each of the sections:

Step 1 Design a new Record Map for customer information (First Name, Last Name, Date of Birth, and National Identity Document), and include an identifier to indicate that it is the USER section.

The section name must be unique for "User" data types, since it is responsible for setting the columns and positions for each piece of information. The content should look like the following:

USER|Matthew O.;Wellington;1964-07-31;208-36-1552

Here, USER is the section name, and everything after the | is the content.

Step 2 Create PHONE and ADDR sections for phone numbers and postal addresses.

Remember to specify the section name and activate the Complex Record Map option.

Now, we should have three classes:

  • Demo.Data.ComplexUser
  • Demo.Data.ComplexPhone
  • Demo.Data.ComplexAddress

Step 3 Complete the Complex Record Map.

Open the “Complex Record Maps” option:

The first thing we can see here is a structure with a header and a footer. The header can be another Record Map to hold information from the data packet (e.g., user department information, etc).

Since these sections are optional, we will ignore them in our example.

Set the name of this record (e.g., PersonalInfo), and add new records for each section.

If we wish one of the sections to have repetitions, we must indicate the minimum and maximum repetition values.

According to the specifications above, the file with the information will look like the following:

USER|Matthew O.;Wellington;1964-07-31;208-36-1552
PHONE|1;305;2089160
PHONE|1;805;9473136
ADDR|1485 Stiles Street;Pittsburgh;15286;PA;USA

If we want to upload a file, we require a service that can read these kinds of files, and InterSystems IRIS provides us with two inbound classes for that:

EnsLib.RecordMap.Service.ComplexBatchFileService and EnsLib.RecordMap.Service.ComplexBatchFTPService. As I mentioned earlier, we will use the EnsLib.RecordMap.Service.ComplexBatchFileService class as an example. However, the process for FTP is identical.

It uses the same configuration as the Record Map service, except for the Header Count, because this kind of file does not need one:

As I stated before, the upload process is triggered when a file is placed in the process directory.

Note: The following instructions are based on the sample code.

In the “samples” folder, you can find the file PersonalInfoComplex.txt. You should copy this file to the process folder, and it will be handled automatically.


NOTE: If you work with the Docker sample, use the following command:

docker cp .\PersonalInfoComplex.txt containerId:/opt/irisbuild/process/

containerId is the id of your container, e.g.: docker cp .\PersonalInfoComplex.txt 66f96b825d43398ba6a1edcb2f02942dc799d09f1b906627e0563b1392a58da1:/opt/irisbuild/process/


Here, we can see each row calling the Business Service:

As you must have realized by now, Record Maps are a powerful tool for importing data in a complex and structured way. They allow us to save information in related tables or process each piece of data independently.

Thanks to this tool, you can quickly create batch data loading processes and store them without having to perform complex data reading, field separation, data type validation, and so on.

I hope you find this article helpful.

See you in the next “InterSystems for Dummies.”

Question Eugene.Forde · Aug 31, 2025

I’ve been exploring options for connecting Google Cloud Pub/Sub with InterSystems IRIS/HealthShare, but I noticed that IRIS doesn’t seem to ship with any native inbound/outbound adapters for Pub/Sub. Out of the box, IRIS offers adapters for technologies like Kafka, HTTP, FTP, and JDBC, which are great for many use cases, but Pub/Sub appears to be missing from the list.

Has anyone here implemented such an integration successfully?

For example:

Question Abdul Majeed · Oct 1, 2025

I'm trying to access the Bearer token from the Authorization header in my REST service class, but I'm getting a 500 Internal Server Error when I try to use %request.GetCgiEnv("HTTP_AUTHORIZATION").

My Environment:

  • InterSystems Ensemble 2018
  • Using EnsLib.REST.Service with HTTP Inbound Adapter
  • REST API URL: http://ip:port/api-kiosk/patientData

My Code:


Article Beatrice Zorzoli · Sep 10, 2025 4m read

I joined InterSystems less than a year ago. Diving into ObjectScript and IRIS was exciting, but also full of small surprises that tripped me up at the beginning. In this article I collect the most common mistakes I, and many new colleagues, make, explain why they happen, and show concrete examples and practical fixes. My goal is to help other new developers save time and avoid the same bumps in the road.

1. Getting lost among system classes and where to start

Announcement Anastasia Dyubaylo · Sep 24, 2025

Hi Community,

We’re excited to share a brand-new Instruqt tutorial: 

🧑‍🏫 RAG using InterSystems IRIS Vector Search

This hands-on lab walks you through building a Retrieval Augmented Generation (RAG) AI chatbot powered by InterSystems IRIS Vector Search. You’ll see how vector search can be leveraged to deliver up-to-date and accurate responses, combining the strengths of IRIS with generative AI.

✨ Why try it?

Article Developer Community Admin · Sep 30, 2025 7m read

InterSystems IRIS Data Platform is a comprehensive, multi-model, multi-workload data platform that is ideal for accommodating the challenging requirements of applications for the Internet of Things. It is a complete platform for developing, executing, and maintaining IoT applications in a single, consistent, unified environment. It features a distributed architecture to support massive data-ingest rates and data volumes, while providing the flexibility and durability of an enterprise-grade transactional multi-model database to ingest, process, and persist data from a wide range of devices in different formats. It features a complete set of integration, event-processing, and integrated analytics capabilities, including full SQL support and text processing, business process orchestration, and a standards-based development environment.

Connect to, ingest, and persist a wide range of disparate device data types and formats

The data types associated with IoT applications are often heterogeneous, as they may originate from various devices with diverse functions and manufactured by different vendors. The underlying data platform must be able to ingest and process a wide range of raw data types in their original formats. Many applications also require the data platform to persist all of the disparate source data to detect deviations from normal ranges, accommodate downstream ad hoc analytics, maintain regulatory compliance, and fulfill other purposes.

Question Matthew Martinez · Sep 29, 2025

The default request class, Ens.Request, is fine for our initial workflow.

We want to define other workflows that will reuse the same BPL class.  These workflows would send messages inbound to the BPL as different request classes.

Is this possible or is it required that we send in a request class matching the context request class in the context tab?

Thank you

Article David Hockenbroch · Apr 2, 2024 9m read

One of the most common kinds of integration we are asked to do is emailing. One of the most typical email services our customers use is Microsoft’s Office 365. After setting up the right configuration on the Microsoft side, we can email from IRIS with two HTTP requests. By the end of this article, we will be able to send an email with an attachment through our Microsoft 365 service!
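A rough sketch of that two-request flow (assuming the Microsoft identity platform token endpoint and the Graph sendMail action; the tenant, client ID, secret, SSL configuration name, and addresses below are all placeholders, and the article's own walkthrough may differ):

ClassMethod SendGraphMail() As %Status
{
    // Request 1: get an access token via client credentials
    Set req = ##class(%Net.HttpRequest).%New()
    Set req.Server = "login.microsoftonline.com"
    Set req.Https = 1
    Set req.SSLConfiguration = "MySSL"   // placeholder SSL configuration name
    Do req.InsertFormData("grant_type", "client_credentials")
    Do req.InsertFormData("client_id", "<client-id>")
    Do req.InsertFormData("client_secret", "<client-secret>")
    Do req.InsertFormData("scope", "https://graph.microsoft.com/.default")
    Set sc = req.Post("/<tenant-id>/oauth2/v2.0/token")
    If $$$ISERR(sc) Quit sc
    Set token = ##class(%DynamicObject).%FromJSON(req.HttpResponse.Data).%Get("access_token")

    // Request 2: send the message through Microsoft Graph
    Set mail = ##class(%Net.HttpRequest).%New()
    Set mail.Server = "graph.microsoft.com"
    Set mail.Https = 1
    Set mail.SSLConfiguration = "MySSL"
    Do mail.SetHeader("Authorization", "Bearer " _ token)
    Set mail.ContentType = "application/json"
    Set body = {"message": {"subject": "Hello from IRIS", "body": {"contentType": "Text", "content": "Sent via two HTTP requests"}, "toRecipients": [{"emailAddress": {"address": "someone@example.com"}}]}}
    Do mail.EntityBody.Write(body.%ToJSON())
    Quit mail.Post("/v1.0/users/<sender-upn>/sendMail")
}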

Question Ashok Kumar T · Sep 28, 2025

Hello Community

The InitialExpression keyword value does not seem to set default values for properties in classes that extend %CSP.Page, unlike other class types such as %Persistent or %RegisteredObject, where it works as expected during object instantiation (typically via %New()).

  1. Is %CSP.Page instantiated using %New() under the hood, or does it use a different initialization mechanism?
  2. Are there specific limitations or behaviors in CSP pages that prevent InitialExpression from working as expected?

Thank you!

Article Robert Cemper · Sep 16, 2025 1m read

If one of your packages on OEX receives a review, you get notified by OEX only about YOUR own package.
The rating reflects the experience of the reviewer with the status found at the time of review.   
It is kind of a snapshot and might have changed meanwhile.   
Reviews by other members of the community are marked by * in the last column.

I also placed a bunch of Pull Requests on GitHub when I found a problem I could fix.    
Some were accepted and merged, and some were just ignored.     
So if you made a major change and expect a changed review, just let me know.

Announcement Derek Gervais · Sep 26, 2025

Hey Community,

The InterSystems team recently held another monthly Developer Meetup in the AWS Boston office location in the Seaport, breaking our all-time attendance record with over 80 attendees! This meetup was our second time being hosted by our friends at AWS, and the venue was packed with folks excited to learn from our awesome speakers.

Article Steve Lubars · Sep 21, 2025 5m read

Background

For a variety of reasons, users may wish to mount a persistent volume on two or more pods spanning multiple availability zones. One such use case is to make data stored outside of IRIS available to both mirror members in case of failover.

Unfortunately, the built-in storage classes in most Kubernetes implementations (whether cloud or on-prem) do not provide this capability:

  • They do not support the "ReadWriteMany" access mode
  • They do not support being mounted on more than one pod at a time
  • They do not support access across availability zones
Article sween · Mar 31, 2025 8m read

Vanna.AI - Personalized AI InterSystems OMOP Agent

Along this OMOP Journey, from the OHDSI book to Achilles, you can begin to understand the power of the OMOP Common Data Model when you see the mix of well-written R and SQL deriving results for large-scale analytics that are shareable across organizations. I, however, do not have a third-normal-form brain, and about a month ago on the Journey we employed Databricks Genie to generate SQL for us utilizing InterSystems OMOP and Python interoperability. This was fantastic, but it left some magic under the hood in Databricks as to how the RAG "model" was being constructed and which LLM was in use to pull it off.

At this point in the OMOP Journey, we met Vanna.ai on the same beaten path...

Vanna is a Python package that uses retrieval augmentation to help you generate accurate SQL queries for your database using LLMs. Vanna works in two easy steps: train a RAG “model” on your data, and then ask questions that will return SQL queries that can be set up to run automatically on your database.

Vanna exposes all the pieces to do it ourselves with more control and our own stack against the OMOP Common Data Model.

I found the approach from the Vanna camp particularly fantastic; conceptually, it felt like a magic trick was being performed, and one could certainly argue that was exactly what was happening.

Vanna needs three choices to pull off its magic trick: a SQL database, a vector database, and an LLM. Just envision a dealer handing you three piles and making you choose from each one.


So, if it's not obvious: our SQL database is InterSystems OMOP implementing the Common Data Model, our LLM of choice is Gemini, and for the quick-and-dirty evaluation we are using Chroma DB as the vector database to get to the point quickly in Python.

Gemini

So I cut a quick key, grew up a little bit, and actually paid for it. I tried the free route with its rate limits of 50 prompts a day and 1 per minute, and it was unsettling... I may be happier being completely broke anyway, so we will see.


InterSystems OMOP

I am using the same fading trial as in the [other posts](https://community.intersystems.com/smartsearch?search=OMOP+Journey). The CDM is loaded with about 100 patients per United States region, with the pracs and orgs to boot.


Vanna

Let's turn the letters (get it?) notebook style and spin the wheel (get it again?) and put Vanna to work...

pip3 install 'vanna[chromadb,gemini,sqlalchemy-iris]'

Let's organize our pythons.

from vanna.chromadb import ChromaDB_VectorStore
from vanna.google import GoogleGeminiChat
from sqlalchemy import create_engine

import pandas as pd
import ssl
import time

Initialize the star of our show and introduce her to our model. Kind of weird, right? Vanna (White) is a model.

class MyVanna(ChromaDB_VectorStore, GoogleGeminiChat):
    def __init__(self, config=None):
        ChromaDB_VectorStore.__init__(self, config=config)
        GoogleGeminiChat.__init__(self, config={'api_key': "shaazButt", 'model': "gemini-2.0-flash"})

vn = MyVanna()

Let's connect to our InterSystems OMOP Cloud deployment using sqlalchemy-iris from @caretdev. The work done on this dialect is quickly becoming the key ingredient for modern data interoperability of IRIS products in the data world.

# Build the SSL context first, then hand it to the engine
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.verify_mode = ssl.CERT_OPTIONAL
context.check_hostname = False
context.load_verify_locations("vanna-omop.pem")

engine = create_engine("iris://SQLAdmin:LordFauntleroy!!!@k8s-0a6bc2ca-adb040ad-c7bf2ee7c6-e6b05ee242f76bf2.elb.us-east-1.amazonaws.com:443/USER", connect_args={"sslcontext": context})

conn = engine.connect()

You define a function that takes in a SQL query as a string and returns a pandas DataFrame. This gives Vanna a function it can use to run the SQL on the OMOP Common Data Model.

def run_sql(sql: str) -> pd.DataFrame:
    df = pd.read_sql_query(sql, conn)
    return df

vn.run_sql = run_sql
vn.run_sql_is_set = True

Feeding the Model with a Menu

The information schema query may need some tweaking depending on your database, but this is a good starting point. It will break up the information schema into bite-sized chunks that can be referenced by the LLM... If you like the plan, run the vn.train(plan=plan) call below to train Vanna.

df_information_schema = vn.run_sql("SELECT * FROM INFORMATION_SCHEMA.COLUMNS")

plan = vn.get_training_plan_generic(df_information_schema)
plan

vn.train(plan=plan)

Training

The following are methods for adding training data. Make sure you modify the examples to match your database. DDL statements are powerful because they specify table names, column names, types, and potentially relationships. These DDLs were generated with the now-supported DatabaseConnector, as outlined in this [post](https://community.intersystems.com/post/omop-odyssey-celebration-house-hades).
vn.train(ddl="""
--iris CDM DDL Specification for OMOP Common Data Model 5.4
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.person (
			person_id integer NOT NULL,
			gender_concept_id integer NOT NULL,
			year_of_birth integer NOT NULL,
			month_of_birth integer NULL,
			day_of_birth integer NULL,
			birth_datetime datetime NULL,
			race_source_concept_id integer NULL,
			ethnicity_source_value varchar(50) NULL,
			ethnicity_source_concept_id integer NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.observation_period (
			observation_period_id integer NOT NULL,
			person_id integer NOT NULL,
			observation_period_start_date date NOT NULL,
			observation_period_end_date date NOT NULL,
			period_type_concept_id integer NOT NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.visit_occurrence (
			visit_occurrence_id integer NOT NULL,
			discharged_to_source_value varchar(50) NULL,
			preceding_visit_occurrence_id integer NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.visit_detail (
			visit_detail_id integer NOT NULL,
			person_id integer NOT NULL,
			visit_detail_concept_id integer NOT NULL,
			provider_id integer NULL,
			care_site_id integer NULL,
			visit_detail_source_value varchar(50) NULL,
			visit_detail_source_concept_id integer NULL );

--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.condition_occurrence (
			condition_occurrence_id integer NOT NULL,
			person_id integer NOT NULL,
			visit_detail_id integer NULL,
			condition_source_value varchar(50) NULL,
			condition_source_concept_id integer NULL,
			condition_status_source_value varchar(50) NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.drug_exposure (
			drug_exposure_id integer NOT NULL,
			person_id integer NOT NULL,
			dose_unit_source_value varchar(50) NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.procedure_occurrence (
			procedure_occurrence_id integer NOT NULL,
			person_id integer NOT NULL,
			procedure_concept_id integer NOT NULL,
			procedure_date date NOT NULL,
			procedure_source_concept_id integer NULL,
			modifier_source_value varchar(50) NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.device_exposure (
			device_exposure_id integer NOT NULL,
			person_id integer NOT NULL,
			device_concept_id integer NOT NULL,
			unit_source_value varchar(50) NULL,
			unit_source_concept_id integer NULL );
--HINT DISTRIBUTE ON KEY (person_id)
CREATE TABLE omopcdm54.observation (
			observation_id integer NOT NULL,
			person_id integer NOT NULL,
			observation_concept_id integer NOT NULL,
			observation_date date NOT NULL,
			observation_datetime datetime NULL,
<SNIP>

""")

Sometimes you may want to add documentation about your business terminology or definitions, here I like to add the resource names from FHIR that were transformed to OMOP.

vn.train(documentation="Our business is to provide tools for generating evicence in the OHDSI community from the CDM")
vn.train(documentation="Another word for care_site is organization.")
vn.train(documentation="Another word for provider is practitioner.")

Now let's add all the tables from the InterSystems OMOP Common Data Model. There is probably a better way to do this, but I get paid by the byte.

cdmtables = ["care_site", "cdm_source", "cohort", "cohort_definition", "concept", "concept_ancestor", "concept_class", "concept_relationship", "concept_synonym", "condition_era", "condition_occurrence", "cost", "death", "device_exposure", "domain", "dose_era", "drug_era", "drug_exposure", "drug_strength", "episode", "episode_event", "fact_relationship", "location", "measurement", "metadata", "note", "note_nlp", "observation", "observation_period", "payer_plan_period", "person", "procedure_occurrence", "provider", "relationship", "source_to_concept_map", "specimen", "visit_detail", "visit_occurrence", "vocabulary"]
for table in cdmtables:
    vn.train(sql="SELECT * FROM  WHERE OMOPCDM54." + table)
    time.sleep(60)

I added the ability for Gemini to see the data here; make sure you actually want to do this in your travels, or you will give Google your OMOP data with sleight of hand.

Let's do our best Pat Sajak and boot the shiny Vanna app.

from vanna.flask import VannaFlaskApp
app = VannaFlaskApp(vn,allow_llm_to_see_data=True, debug=False)
app.run()


Skynet!

This is a bit hackish, but it is really where I want to go with AI going forward, integrating with apps: here we ask a question in natural language, which returns a SQL query, and then we immediately use that query against the InterSystems OMOP deployment using sqlalchemy-iris.

import io
import sys

while True:
    question = 'How Many Care Sites are there in Los Angeles?'

    # Temporarily redirect stdout to a dummy stream so vn.ask() runs quietly
    old_stdout = sys.stdout
    sys.stdout = io.StringIO()
    sql_query = vn.ask(question)
    sys.stdout = old_stdout

    print("Ask Vanna to generate a query from a question of the OMOP database...")
    # vn.ask() returns a tuple; the generated SQL is the first element
    raw_sql_to_send_to_sqlalchemy_iris = sql_query[0]
    print("Vanna returns the query to use against the database.")
    # Qualify the table name with its schema before running the query
    gar = raw_sql_to_send_to_sqlalchemy_iris.replace("FROM care_site", "FROM OMOPCDM54.care_site")
    print(gar)
    print("Now use sqlalchemy-iris with the generated query back to the OMOP database...")

    result = conn.exec_driver_sql(gar)
    for row in result:
        print(row[0])
    time.sleep(3)

Utilities

At any time, you can inspect what OMOP data the Vanna package is able to reference. You can also remove training data if there is obsolete/incorrect information (you can do this through the UI too).

training_data = vn.get_training_data()
training_data

vn.remove_training_data(id='omop-ddl')

About Using IRIS Vectors

Wish me luck here: if I manage to crush all the things to crush and resist the sun coming out, I'll implement IRIS vectors in Vanna with the following repo.


Article Padmaja Konduru · Jan 14, 2025 2m read

The utility returns the desired values from the text and displays multiple values, if they exist, based on a starting and ending string.

Class Test.Utility.FunctionSet Extends %RegisteredObject
{

/// W !,##class(Test.Utility.FunctionSet).ExtractValues("Some random text VALUE=12345; some other VALUE=2345; more text VALUE=345678;","VALUE=",";")
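/// A sketch of what the method body could look like (an assumed
/// implementation; the article's original code is not shown here).
ClassMethod ExtractValues(pText As %String, pStart As %String, pEnd As %String) As %String
{
    Set result = ""
    Set pos = 1
    For {
        // $FIND returns the position just after the match, or 0 if not found
        Set from = $FIND(pText, pStart, pos)
        Quit:from=0
        Set to = $FIND(pText, pEnd, from)
        Quit:to=0
        // Extract everything between the start and end strings
        Set result = result _ $LISTBUILD($EXTRACT(pText, from, to - $LENGTH(pEnd) - 1))
        Set pos = to
    }
    Quit $LISTTOSTRING(result, ", ")
}

}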
 

Article André Dienes Friedrich · Sep 26, 2025 3m read

 

Building a Sensor Data Demo with Spring Boot and InterSystems IRIS

In the era of IoT and connected devices, sensor data is everywhere—tracking temperature in logistics, monitoring equipment performance, or recording environmental conditions. But capturing, storing, and analyzing this data in real time requires more than just hardware sensors.

Article Robert Cemper · Sep 22, 2025 2m read

Finishing my previous example for multiple IRIS instances, I tried to compose a local single-instance version. The step from the external Python app to a version using embedded Python seemed obvious. This was a wrong assumption, as some Python libraries simply refused to install into my local Windows-based environment.

Article Arsh Hasan · Jan 14, 2025 1m read

In this tutorial, I will discuss how you can connect your IRIS data platform to a SQL Server database.

Prereq: 

Article Megumi Kakechi · Sep 25, 2025 2m read

InterSystems FAQ rubric

One way to optimize query performance is to use query parallelism on a per-query or system-wide basis (a standard feature).

This is a technique for dividing the execution of a particular query among processors on a multi-processor system. The query optimizer will execute parallel processing only if there is a possibility of benefiting from parallel processing. Parallel processing is only applicable to SELECT statements.
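As a sketch of the per-query option (assuming the Sample.Person class from the samples is installed), the %PARALLEL keyword in the FROM clause suggests parallel execution for a single SELECT; the optimizer may still decide it would not help:

// Run one SELECT with a parallel-processing hint and print the results
Set sql = "SELECT Home_State, COUNT(*) FROM %PARALLEL Sample.Person GROUP BY Home_State"
Set rs = ##class(%SQL.Statement).%ExecDirect(, sql)
While rs.%Next() {
    Write rs.%GetData(1), ": ", rs.%GetData(2), !
}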

Question Saju Abraham · Sep 24, 2025

Our vendor is developing an interface API on their end to capture HL7 data on a Server Port, and they require us to send a pre-defined HL7 Order message for testing every hour until the API is completely operational.

Is it possible to accomplish that in a Business Operation automatically without utilizing a service or process? The BO is a standard TCP/IP connection.

I'm manually sending the message again from the Operations right now. I do not have access to the System Operation to use the Task Manager feature.

Article Steve Lubars · Sep 9, 2025 8m read

Background

For a variety of reasons, users may wish to mount a persistent volume on two or more pods spanning multiple availability zones. One such use case is to make data stored outside of IRIS available to both mirror members in case of failover.

Unfortunately, the built-in storage classes in most Kubernetes implementations (whether cloud or on-prem) do not provide this capability:

  • They do not support the "ReadWriteMany" access mode
  • They do not support being mounted on more than one pod at a time
  • They do not support access across availability zones