What are the best practices for creating a multi-tenant app in IRIS? How can I isolate data per tenant using namespaces, control resource usage, and delegate access via roles securely?
We’re encountering occasional deadlocks when accessing persistent objects. How can I trace lock acquisition and identify cyclic dependencies in real time?
We want to expose both REST and GraphQL endpoints over the same data models. Is there a way to implement or integrate GraphQL with ObjectScript and map to class methods?
I have large joins involving millions of rows. How can I profile and tune the SQL engine’s parallel execution? Are there EXPLAIN plan features to inspect threading and task distribution?
When writing dynamic SQL queries using embedded SQL, how can I force or ensure that filter conditions are pushed down to the data access layer rather than evaluated in memory?
We run mixed workloads in IRIS. For analytical queries, are bitmap indexes effective? What are the caveats for concurrent OLTP updates, and how should I maintain bitmap indexes efficiently?
The built-in task manager is limited. How can I implement a robust, distributed job scheduler in IRIS with support for dependencies, CRON syntax, and failover recovery?
We need to authenticate users via Azure AD or Okta. What are the best practices to implement federated authentication using OAuth2/OIDC or SAML in IRIS Management Portal or custom web apps?
Can the IRIS SQL engine be extended with custom optimization rules (e.g., prioritizing certain indices or join orders)? If not, is there a supported way to influence cost models?
For geographically distributed nodes using async mirroring or ECP, how can I detect and resolve data conflicts manually (custom logic) while maintaining eventual consistency?
I’m using recursive CTEs for hierarchical data, but the planner seems to produce inefficient plans. Can I influence or extend the query optimizer behavior in IRIS?
We are using IRIS with a sharded architecture. Complex SQL queries (with joins, aggregates, and subqueries) are performing slowly. How can I design queries or indexes to optimize distributed execution across shards?
Instead of default storage classes, I want to implement my own SQL storage mapping for a persistent class (e.g., denormalized or sparse matrix structures). How do I define and manage custom storage definitions?
Deploying new IRIS instances can be a time-consuming task, especially when setting up multiple environments with mirrored configurations.
I’ve encountered this issue many times and want to share my experience and recommendations for using Ansible to streamline the IRIS installation process. My approach also includes handling additional tasks typically performed before and after installing IRIS.
To control the accumulation of production data, InterSystems IRIS enables users to manage database size by periodically purging data. This purge can apply to messages, logs, business processes, and managed alerts.
Please check the documentation for more details on the settings of the purge task:
https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=EGMG_purge#EGMG_purge_settings
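For orientation, here is a minimal sketch of running the same purge programmatically. It assumes the standard Ens.Util.Tasks.Purge task class and its usual settings (NumberOfDaysToKeep, TypesToPurge, KeepIntegrity, BodiesToo); please verify the names against your IRIS version before relying on it.

Set tPurge = ##class(Ens.Util.Tasks.Purge).%New()
Set tPurge.NumberOfDaysToKeep = 30   // keep the most recent 30 days
Set tPurge.TypesToPurge = "all"      // messages, logs, business processes, managed alerts
Set tPurge.KeepIntegrity = 1         // skip messages that are still part of an active session
Set tPurge.BodiesToo = 1             // purge message bodies along with the headers
Set tSC = tPurge.OnTask()            // same entry point the Task Manager calls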
The following code lists the configured production items along with their enabled or disabled status.
Include (Ensemble, EnsUI, EnsUtil)
Class Test.ProductionConfig
{
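/// The original snippet ends here, so the method below is only a minimal sketch of
/// how such a class might list each configured item with its Enabled flag. It assumes
/// the standard Ens.Config.Production and Ens.Config.Item classes; the method name
/// ListItems is my own choice, not part of the original post.
ClassMethod ListItems(pProductionName As %String) As %Status
{
    Set tProduction = ##class(Ens.Config.Production).%OpenId(pProductionName)
    If '$IsObject(tProduction) Quit $$$ERROR($$$GeneralError, "Production not found: "_pProductionName)
    For i = 1:1:tProduction.Items.Count() {
        Set tItem = tProduction.Items.GetAt(i)
        Write tItem.Name, " : ", $Select(tItem.Enabled:"Enabled", 1:"Disabled"), !
    }
    Quit $$$OK
}

}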
I am creating a class to validate the JSON body of incoming requests. When I use the %ValidateObject method to check for errors in the object, and I define some properties as %DynamicObject or %DynamicArray with the Required keyword, the validation is ignored for those properties; it only works for properties of type %String, %Integer, and so on.
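For illustration only, here is a minimal reproduction of the situation the question describes; the class and property names are hypothetical, not taken from the original code.

Class Demo.RequestBody Extends (%RegisteredObject, %JSON.Adaptor)
{

/// Required is enforced here when %ValidateObject() is called
Property Name As %String [ Required ];

/// According to the question, Required appears to be ignored for this property
Property Details As %DynamicObject [ Required ];

}

Per the question, calling %ValidateObject() on a new instance reports the missing Name but not the missing Details.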
Introduction
InterSystems recently announced the discontinuation of support for InterSystems Studio starting from version 2023.2, in favor of developing extensions exclusively for the Visual Studio Code (VSC) IDE, believing that the latter offers a superior experience compared to Studio. As a result, many of us developers have switched to VSC or are beginning to use it. Many may have wondered how to open the Terminal to perform operations, since VSC has neither an Output panel like Studio did nor an integrated feature to open the IRIS terminal, other than through the plugins developed by InterSystems.
Hello everybody,
I would like to export project contents using Visual Studio Code, just as I do with InterSystems Studio.
However, while attempting the export through the InterSystems extension I get the following error:
- There are no folders in the current workspace that code can be exported to.
These are the steps to reproduce the error:
- Create and save a new VSCode workspace
- Open a local directory
- Go to the InterSystems extension
- Open "Projects" panel and right-click on the project you want to export
- Click on "Export Project Contents"
Can anyone help me solve this issue?
Hi Community,
It seems our Developer Community AI has decided to take a coffee break ☕️ (probably after answering one too many tricky ObjectScript questions).
For now, it’s gone mysteriously silent and refuses to generate answers. We suspect it might be rethinking its life choices after reading one too many deeply philosophical ObjectScript questions.
We’ll let you know as soon as our digital colleague is back online, refreshed, and ready to assist you again. In the meantime, if you notice it suddenly waking up and replying again, please do let us know in the comments before it changes its mind.
Hi,
It's me again 😁. Recently I have been working on generating some fake patient data for testing purposes, using Python with the help of ChatGPT, and I would like to share my learning curve along the way. 😑
First of all, building a custom REST API service is easy: just extend %CSP.REST.
Creating a REST Service Manually
Let's start! 😂
1. Create a class datagen.restservice which extends %CSP.REST
Class datagen.restservice Extends %CSP.REST
{
Parameter CONTENTTYPE = "application/json";
}
2. Add a function genpatientcsv() to generate the patient data and package it into a CSV string; a rough sketch of what this could look like follows below.
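The original post stops at this step, so the snippet below is only a sketch of what such a method and its REST route could look like inside the datagen.restservice class shown above; the /patientcsv URL, the field layout, and the use of $Random for fake values are my own assumptions rather than the article's actual code.

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
  <Route Url="/patientcsv" Method="GET" Call="genpatientcsv"/>
</Routes>
}

/// Build a small CSV of fake patients and write it to the HTTP response.
ClassMethod genpatientcsv() As %Status
{
    Set %response.ContentType = "text/csv"   // override the class-level JSON content type
    Write "ID,Name,DOB,Gender", !
    For i = 1:1:5 {
        Set tYear = 1950 + $Random(60)
        Write i, ",Patient", i, ",", tYear, "-01-01,", $Select($Random(2):"M", 1:"F"), !
    }
    Quit $$$OK
}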
I know of the following approaches:
1. Place all environment-specific settings in environment variables. You have a different .env file for each environment, and you must add some code to the production to read and apply these values. This works well for container deployments, but it becomes hard to manage with a large production: many settings can vary per environment (enabled flag, pool size, timeouts, and so on), not only endpoints. A minimal sketch of this pattern is shown below.
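As a rough illustration of option 1 (my own sketch, not code from this thread), a business host can read such variables at startup with $SYSTEM.Util.GetEnviron(); the class name, adapter choice, and variable names below are hypothetical.

/// Hypothetical business operation that overrides its configured settings from
/// environment variables when the production item starts.
Class Demo.Env.Operation Extends Ens.BusinessOperation
{

Parameter ADAPTER = "EnsLib.HTTP.OutboundAdapter";

Method OnInit() As %Status
{
    // MYAPP_ENDPOINT and MYAPP_TIMEOUT are made-up names supplied by the .env file
    Set tServer = $SYSTEM.Util.GetEnviron("MYAPP_ENDPOINT")
    Set:tServer'="" ..Adapter.HTTPServer = tServer
    Set tTimeout = $SYSTEM.Util.GetEnviron("MYAPP_TIMEOUT")
    Set:tTimeout'="" ..Adapter.ResponseTimeout = tTimeout
    Quit $$$OK
}

}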
This one's for those of you who feel that InterSystems tech makes your life easier... whether you’re working on integration projects, building innovative solutions, or evolving existing ones.
Join our next in-person Developer Meetup in Boston to explore Security & AI for Developers and Startups.
This event is hosted at CIC Venture Cafe.
Talk 1: When Prompts Become Payloads
Speaker: Mark-David McLaughlin, Director, Corporate Security, InterSystems
Talk 2: Serial Offenses: Common Vulnerability Types
Speaker: Jonathan Sue-Ho, Senior Security Engineer, InterSystems
Introduction
In my previous article, I introduced the FHIR Data Explorer, a proof-of-concept application that connects InterSystems IRIS, Python, and Ollama to enable semantic search and visualization over healthcare data in FHIR format, a project currently participating in the InterSystems External Language Contest.
In this follow-up, we’ll see how I integrated Ollama for generating patient history summaries directly from structured FHIR data stored in IRIS, using lightweight local language models (LLMs) such as Llama 3.2:1B or Gemma 2:2B.
The goal was to build a completely local AI pipeline that can extract, format, and narrate patient histories while keeping data private and under full control.
All patient data used in this demo comes from FHIR bundles, which were parsed and loaded into IRIS via the IRIStool module. This approach makes it straightforward to query, transform, and vectorize healthcare data using familiar pandas operations in Python. If you’re curious about how I built this integration, check out my previous article Building a FHIR Vector Repository with InterSystems IRIS and Python through the IRIStool module.
Both IRIStool and FHIR Data Explorer are available on the InterSystems Open Exchange — and part of my contest submissions. If you find them useful, please consider voting for them!
Introduction
In a previous article, I presented the IRIStool module, which seamlessly integrates the pandas Python library with the IRIS database. Now, I'm explaining how we can use IRIStool to leverage InterSystems IRIS as a foundation for intelligent, semantic search over healthcare data in FHIR format.
This article covers what I did to create the database for another of my projects, the FHIR Data Explorer. Both projects are candidates in the current InterSystems contest, so please vote for them if you find them useful.
You can find them at the Open Exchange:
In this article we'll cover:
- Connecting to InterSystems IRIS database through Python
- Creating a FHIR-ready database schema
- Importing FHIR data with vector embeddings for semantic search
Hello Developers! 👋
I’m excited to share the project I’ve submitted to the current InterSystems .Net, Java, Python, and JavaScript Contest — it’s called FHIR Data Explorer with Hybrid Search and AI Summaries, and you can find it on the InterSystems Open Exchange and on my GitHub page.
Hi everyone! 👋
I’m excited to share the project I’ve submitted to the current InterSystems .Net, Java, Python, and JavaScript Contest — it’s called IRIStool and Data Manager, and you can find it on the InterSystems Open Exchange and on my GitHub page.
With the rapid adoption of telemedicine, remote consultations, and digital dictation, healthcare professionals are communicating more through voice than ever before. Patients engaging in virtual conversations generate vast amounts of unstructured audio data, so how can clinicians or administrators search and extract information from hours of voice recordings?
Enter IRIS Audio Query - a full-stack application that transforms audio into a searchable knowledge base. With it, you can:
This is an example for the External Languages Contest 2025.
You can get almost any information about your IRIS databases using the System Management Portal. However, after clicking through several levels, you often end up with a wide list of items in which the interesting ones are hard to find.
