Hi all.
I'm trying to create an indexed table with a vector field so I can search by the vector value.
I've been investigating and found that, to get the vector value from a text (token), you can use a Python method like the following:
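A rough sketch of the idea (assuming the sentence-transformers package is installed; the model name is only an example, not necessarily the one I will use):

```python
# Rough sketch: turn a piece of text into an embedding vector.
# Assumes the sentence-transformers package; the model name is only an example.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def text_to_vector(text: str) -> list[float]:
    """Return the embedding vector for the given text."""
    return model.encode(text).tolist()

print(text_to_vector("hello world")[:5])  # first few components of the vector
```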
JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write.
A REST API (Representational State Transfer) is an interface that allows different applications to communicate with each other through the HTTP protocol, using standard operations such as GET, POST, PUT, and DELETE. REST APIs are widely used in software development to expose services accessible by other applications, enabling integration between different systems.
However, to ensure that APIs are easy to understand and use, good documentation is essential. This is where OpenAPI comes in.
Hello,
In the world of APIs, REST is ubiquitous. But what happens when you need more flexibility in your data-fetching strategies, for instance letting the client choose which fields it receives? Enter GraphQL, a query language for your APIs that provides a flexible alternative to REST.
In this post, we will compare REST and GraphQL, build a GraphQL server on top of InterSystems IRIS using Python (SQLAlchemy, Graphene, and Flask), and deploy it as a WSGI application.
Grab a coffee ☕ and let's go:
Let’s start by comparing REST and GraphQL side by side:
| Feature | REST | GraphQL |
|---|---|---|
| Data Fetching | Multiple endpoints for resources | Single endpoint for all queries |
| Flexibility | Predefined responses | Query exactly what you need |
| Overfetching | Endpoints often return more data than the client needs | Clients request only the fields they need |
| Underfetching | Multiple calls may be needed to gather related data | A single query can fetch nested, related data |
| Versioning | Requires version management (e.g. /api/v1/users) | "Versionless" as clients can choose the fields they need. Also, schemas are typed and can evolve |
The good news is that both use HTTP so they are pretty easy to consume from client applications.
GraphQL is built around three main concepts:
Queries: Retrieve data from your API.
query {
  allEmployees(hiredAfter: "2020-01-01") {
    name
    hiredon
  }
}
- The query keyword indicates a read operation (although it can be omitted).
- allEmployees is the field that represents a list of employees. On the server, a resolver function in the GraphQL schema will fetch the data. It takes (hiredAfter: "2020-01-01") as an argument that is used to filter the data.
- name and hiredon are the subfields we are selecting to be returned.

Mutations: Modify data
mutation {
  createEmployee(name: "John", lastname: "Doe", position: "Developer", hiredon: "2021-11-30", departmentId: "1") {
    employee {
      name
      lastname
      department {
        name
      }
    }
  }
}
- mutation indicates an operation that will modify data.
- createEmployee is the mutation field that handles the creation of a new employee (a resolver function in the GraphQL schema will handle that). It takes different arguments that define the new employee.
- employee is the structure we ask to get back in the response, containing name, lastname and department (with some nested fields).

Queries and mutations are sent over HTTP to a single endpoint, typically /graphql. For example, the previous query could be sent as:
curl -X POST \
-H "Content-Type: application/json" \
-d '{"query": "query { allEmployees(hiredAfter: \"2020-01-01\") { name hiredon } }"}' \
http://localhost:5000/graphql
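Equivalently, the same query could be sent from Python. A quick sketch using the requests package (assuming the server we build below is running on port 5000):

```python
# Sketch: send the same GraphQL query with Python's requests package.
import requests

query = """
query {
  allEmployees(hiredAfter: "2020-01-01") {
    name
    hiredon
  }
}
"""

response = requests.post(
    "http://localhost:5000/graphql",
    json={"query": query},  # GraphQL requests are plain JSON over HTTP
)
print(response.json()["data"]["allEmployees"])
```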
And the response data could be:
{"data":{"allEmployees":[{"name":"Alice","hiredon":"2020-06-15"},{"name":"Clara","hiredon":"2021-09-01"},{"name":"Eve","hiredon":"2022-01-05"},{"name":"Frank","hiredon":"2020-11-15"},{"name":"John","hiredon":"2021-11-30"},{"name":"Phileas","hiredon":"2023-11-27"}]}}
We will use these tools ⚒️: InterSystems IRIS Community, Python, SQLAlchemy, Graphene, and Flask.
📝 You will find detailed instructions on setting up InterSystems IRIS Community and a Python virtual environment in iris-graphql-demo.
Now let's focus on the main steps you need to do.
We will use a simple model with Employee and Department tables that will be created and populated in InterSystems IRIS.
Create a models.py file with:
from sqlalchemy import *
from sqlalchemy.orm import relationship, declarative_base, Session, scoped_session, sessionmaker, relationship, backref
engine = create_engine('iris://superuser:SYS@localhost:1972/USER')
db_session = scoped_session(sessionmaker(bind=engine))
Base = declarative_base()
Base.query = db_session.query_property()
class Department(Base):
    __tablename__ = 'departments'
    __table_args__ = {'schema': 'dc_graphql'}

    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    description = Column(String)

    employees = relationship('Employee', back_populates='department')

class Employee(Base):
    __tablename__ = 'employees'
    __table_args__ = {'schema': 'dc_graphql'}

    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    lastname = Column(String, nullable=False)
    hiredon = Column(Date)
    position = Column(String)
    department_id = Column(Integer, ForeignKey('dc_graphql.departments.id'))

    department = relationship('Department', back_populates='employees')
Let's define the GraphQL schema with Graphene, using the models we have just defined.
In the related OpenExchange application you will find a version that also implements a mutation.
Create a schema.py file with this content:
import graphene
from graphene_sqlalchemy import SQLAlchemyObjectType, SQLAlchemyConnectionField
from models import db_session, Employee, Department
class DepartmentType(SQLAlchemyObjectType):
    class Meta:
        model = Department
        interfaces = ()

class EmployeeType(SQLAlchemyObjectType):
    class Meta:
        model = Employee
        interfaces = ()

class Query(graphene.ObjectType):
    all_departments = graphene.List(DepartmentType)
    all_employees = graphene.List(
        EmployeeType,
        hired_after=graphene.Date(),
        position=graphene.String()
    )

    def resolve_all_departments(self, info):
        return db_session.query(Department).all()

    def resolve_all_employees(self, info, hired_after=None, position=None):
        query = db_session.query(Employee)
        if hired_after:
            query = query.filter(Employee.hiredon > hired_after)
        if position:
            query = query.filter(Employee.position == position)
        return query.all()

schema = graphene.Schema(query=Query)
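As mentioned above, the OpenExchange version also implements a mutation. As a rough sketch (names and structure are illustrative and may differ from the published code), a createEmployee mutation with Graphene could extend schema.py like this:

```python
# Sketch of a createEmployee mutation (illustrative only).
# Assumes the imports and types already defined in schema.py above.
class CreateEmployee(graphene.Mutation):
    class Arguments:
        name = graphene.String(required=True)
        lastname = graphene.String(required=True)
        position = graphene.String()
        hiredon = graphene.Date()
        department_id = graphene.ID()

    employee = graphene.Field(EmployeeType)

    def mutate(self, info, name, lastname, position=None, hiredon=None, department_id=None):
        employee = Employee(
            name=name, lastname=lastname, position=position,
            hiredon=hiredon, department_id=department_id,
        )
        db_session.add(employee)
        db_session.commit()
        return CreateEmployee(employee=employee)

class Mutation(graphene.ObjectType):
    create_employee = CreateEmployee.Field()

# The schema would then be built with both query and mutation:
# schema = graphene.Schema(query=Query, mutation=Mutation)
```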
We will also need a Flask application that sets up the /graphql endpoint.
Create an app.py file like this:
from flask import Flask, request, jsonify
from flask_graphql import GraphQLView

from models import db_session
from schema import schema

app = Flask(__name__)

@app.teardown_appcontext
def shutdown_session(exception=None):
    db_session.remove()

app.add_url_rule(
    '/graphql',
    view_func=GraphQLView.as_view(
        'graphql',
        schema=schema,
        graphiql=True,
    )
)

if __name__ == '__main__':
    app.run(debug=True)
We could create the tables and insert some data using SQL or ObjectScript.
But if you are exploring the capabilities of Python + InterSystems IRIS, I recommend using Python 🐍 directly.
Open an interactive Python session:
python
And then:
from datetime import date
from models import db_session, engine, Base, Department, Employee

# this will create the tables for you in InterSystems IRIS
Base.metadata.create_all(bind=engine)
# add departments
engineering = Department(name='Engineering', description='Handles product development and technology')
hr = Department(name='Human Resources', description='Manages employee well-being and recruitment')
sales = Department(name='Sales', description='Responsible for sales and customer relationships')
db_session.add_all([engineering, hr, sales])
# add employees
employees = [
    Employee(name='Alice', lastname='Smith', hiredon=date(2020, 6, 15), position='Software Engineer', department=engineering),
    Employee(name='Bob', lastname='Brown', hiredon=date(2019, 3, 10), position='QA Engineer', department=engineering),
    Employee(name='Clara', lastname='Johnson', hiredon=date(2021, 9, 1), position='Recruiter', department=hr),
    Employee(name='David', lastname='Davis', hiredon=date(2018, 7, 22), position='HR Manager', department=hr),
    Employee(name='Eve', lastname='Wilson', hiredon=date(2022, 1, 5), position='Sales Executive', department=sales),
    Employee(name='Frank', lastname='Taylor', hiredon=date(2020, 11, 15), position='Account Manager', department=sales)
]
db_session.add_all(employees)
db_session.commit()
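As a quick optional sanity check, you can query the data back from the same session:

```python
# Optional sanity check: count the rows we just inserted.
print(db_session.query(Department).count())  # expected: 3
print(db_session.query(Employee).count())    # expected: 6
```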
First, you will need to run your server:
python app.py
Then you can start testing your server using the included GraphiQL UI at http://localhost:5000/graphql.

If you need to work with very large datasets or you need pagination, consider using Relay. Relay introduces Edge and Node concepts for efficient pagination and handling of large datasets.
In the related OpenExchange application you will also find a GraphQL schema implementation using Relay.
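As a rough idea of what that looks like (a sketch only; the published code may differ), a Relay-style schema with graphene-sqlalchemy could be defined like this:

```python
# Sketch of a Relay-style schema (illustrative only).
import graphene
from graphene import relay
from graphene_sqlalchemy import SQLAlchemyObjectType, SQLAlchemyConnectionField
from models import Employee

class EmployeeNode(SQLAlchemyObjectType):
    class Meta:
        model = Employee
        interfaces = (relay.Node,)  # exposes a global ID per employee

class Query(graphene.ObjectType):
    node = relay.Node.Field()
    # Connection field provides edges, nodes and cursor-based pagination.
    # Depending on the graphene-sqlalchemy version, SQLAlchemyConnectionField(EmployeeNode)
    # may be used instead of EmployeeNode.connection.
    all_employees = SQLAlchemyConnectionField(EmployeeNode.connection)

schema = graphene.Schema(query=Query)
```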
In case you don't know what WSGI is, just check the article WSGI Support Introduction and the related posts that explain how to deploy Flask, Django and FastAPI applications in IRIS, by @Guillaume Rongier.
In a nutshell, InterSystems IRIS can now deploy WSGI applications, so you can deploy your GraphQL server.
You only need to set up a web application like this:

After that, your endpoint will be http://localhost:52773/graphqlserver/graphql
Congratulations! You’ve built a GraphQL server on top of InterSystems IRIS, explored its basic features, and even deployed it as a WSGI application. This demonstrates how flexible API approaches can integrate seamlessly with enterprise-grade data platforms like InterSystems IRIS.
If you are still awake, thanks for reading 😄! I'll give you a final fun fact:
Hi Dev Community
I thought I would share a little method I knocked together to traverse and compare two JSON objects for basic equivalence. I'm currently working on some data migration, and wanted a basic sanity check to validate that the JSON output is essentially equivalent between the old and the new, excluding a few things like timestamps.
It's a basic little recursive method that will bubble up any differences over a nested structure. It's very low tech, as that's all I need it to do, but I thought it might be useful for others.
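The method itself is written in ObjectScript; purely to illustrate the idea, here is a rough Python sketch of the same recursive comparison (the ignored keys are just an example):

```python
# Rough Python sketch of the idea: recursively compare two JSON-like
# structures and bubble up the paths that differ, skipping some keys.
def json_diff(old, new, path="$", ignore=("timestamp", "updatedAt")):
    diffs = []
    if isinstance(old, dict) and isinstance(new, dict):
        for key in set(old) | set(new):
            if key in ignore:
                continue
            diffs += json_diff(old.get(key), new.get(key), f"{path}.{key}", ignore)
    elif isinstance(old, list) and isinstance(new, list):
        if len(old) != len(new):
            diffs.append(f"{path}: list length {len(old)} != {len(new)}")
        for i, (o, n) in enumerate(zip(old, new)):
            diffs += json_diff(o, n, f"{path}[{i}]", ignore)
    elif old != new:
        diffs.append(f"{path}: {old!r} != {new!r}")
    return diffs
```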
REST API with Swagger in InterSystems IRIS
Hello
The HTTP protocol allows you to obtain resources, such as HTML documents. It is the basis of any data exchange on the Web and a client-server protocol, meaning that requests are initiated by the recipient, usually a Web browser.
REST APIs take advantage of this protocol to exchange messages between client and server. This makes REST APIs fast, lightweight, and flexible. REST APIs use the HTTP verbs GET, POST, PUT, DELETE, and others to indicate the actions they want to perform.
Hi community,
I am having a hard time figuring out how to properly use %ZEN.Auxiliary.jsonProvider to serialize a %RegisteredObject to JSON. Would anyone be so kind as to share a simple usage example?
Thanks
Jiri
We are trying to create a JWT in order to get oauth2 token from login.microsoftonline.com and then use that token to get key/secret from an Azure Key Vault.
It seemed like it would be quite straightforward by using the Create method of class %Net.JSON.JWT
But already trying to set the first header parameter alg PS256 as specified by https://learn.microsoft.com/en-us/entra/identity-platform/certificate-c… becomes a problem. Just to test, I do this:
set JOSE = {
"alg": "PS256"
}
and then using that, together with arbitrary claims, as parameters in the Create method gives this:
Using Flask, REST API, and IAM with InterSystems IRIS
Part 3 – IAM
InterSystems API Manager (IAM) is a component that allows you to monitor, control, and manage traffic from HTTP-based APIs. It also acts as an API gateway between InterSystems IRIS applications and servers.
Using Flask, REST API, and IAM with InterSystems IRIS
Part 2 – Flask App
Flask is a web development microframework written in Python. It is known for being simple, flexible, and enabling rapid application development.
Installing Flask is very simple. Once you have Python installed correctly on your operating system, you need to install the flask library with the pip command. For REST API consumption, it is advisable to use the requests library. The following link provides a guide to installing Flask: https://flask.palletsprojects.com/en/stable/installation/
Using Flask, REST API, and IAM with InterSystems IRIS
Part 1 - REST API
Hello
In this article we will see the implementation of a REST API that performs CRUD maintenance, using Flask and IAM.
In this first part of the article we will see the construction and publication of the REST API in Iris.
First, let's create our persistent class to store the data. To do this, we go to Iris and create our class:
Using SQL Gateway with Python, Vector Search, and Interoperability in InterSystems Iris
Part 2 – Python and Vector Search
Since we have access to the data from our external table, we can use everything that Iris has to offer with this data. Let's, for example, read the data from our external table and generate a polynomial regression with it.
For more information on using python with Iris, see the documentation available at https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=AFL_epython
Let's now consume the data from the external database to calculate a polynomial regression. To do this, we will use Python code to run a SQL query that reads our MySQL table and turns it into a pandas dataframe:
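The code could look roughly like this (a sketch only: the table and column names are placeholders, and pandas and numpy are assumed to be installed):

```python
# Sketch: read the linked MySQL table through IRIS SQL into a pandas
# DataFrame and fit a polynomial regression. Table/column names are
# placeholders for illustration.
import pandas as pd
import numpy as np
from sqlalchemy import create_engine

engine = create_engine("iris://superuser:SYS@localhost:1972/USER")
df = pd.read_sql("SELECT x_value, y_value FROM linked_schema.my_table", engine)

# Fit a 2nd-degree polynomial y = a*x^2 + b*x + c
coeffs = np.polyfit(df["x_value"], df["y_value"], deg=2)
print("polynomial coefficients:", coeffs)
```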
I have a scenario where I send a GET request to a broker and receive a FHIR response. When I attempted to use the built-in InterSystems functions to convert this FHIR response into SDA, the transformation failed—likely because it is not a standard FHIR request.
How should I handle this situation? Is there a recommended approach to processing FHIR responses in this context?
I understand that InterSystems provides functions to facilitate transactions between FHIR and HL7 via the SDA segment. My question is:
From the previous article, we identified some issues when working with JSON in SQL.
IRIS offers a dedicated feature for handling JSON documents, called DocDB.
InterSystems IRIS® data platform DocDB is a facility for storing and retrieving database data. It is compatible with, but separate from, traditional SQL table and field (class and property) data storage and retrieval. It is based on JSON (JavaScript Object Notation) which provides support for web-based data exchange. InterSystems IRIS provides support for developing DocDB databases and applications in REST and in ObjectScript, as well as providing SQL support for creating or querying DocDB data.
By its nature, InterSystems IRIS Document Database is a schema-less data structure. That means that each document has its own structure, which may differ from other documents in the same database. This has several benefits when compared with SQL, which requires a pre-defined data structure.
The word “document” is used here as a specific industry-wide technical term, as a dynamic data storage structure. “Document”, as used in DocDB, should not be confused with a text document, or with documentation.
Let's explore how DocDB can help store JSON in the database and integrate it into projects that rely solely on xDBC protocols.
Hello All,
I need help integrating the vendor-provided code into my current code to check whether the data is an array and, if it is, iterate through each item using a for-each loop. Also, I need to hit every error handler mentioned at the bottom, along with the transform, in both instances.
My Current Code:
/// Given a patient number in a JSON string format, this includes required transformations and makes use of
/// the MPI Query Handler to pull basic demographics from Epic.
Class CUH.Proc.DCIQGetPatient Extends Ens.BusinessProcessBPL [ ClassType = persistent, ProcedureBlock ]
{
While working on getting JSON support for some Python libraries, I discovered some capabilities IRIS provided.
Hi, I am currently setting up a new API using %CSP.REST - I've gotten swagger spec generation to work like so:
Hello,
Is there a single ObjectScript operator or method to concatenate two %DynamicArrays?
I'm looking for something that will do the following:
set arr1 = [ 1, 2, 3 ] set arr2 = [ 4, 5, 6 ] set arrcombined = arr1.%Concatenate(arr2)
or
set arrcombined arr1_arr2
With end result:
zw arrcombined arr1=[1,2,3,4,5,6] ; <DYNAMIC ARRAY>
I can iterate and %Pop over the 2nd array and %Push each popped entry to the 1st array, but I was looking for something more succinct.
Thanks in advance.
I tried executing the SQL JSON_TABLE query with a large JSON string (more than 200000 characters) and I got the below error. I'm curious about the under-the-hood workflow and how it reaches MAXSTRING.
Thanks!
Imagine the scene. You are working happily at Widgets Direct, the internet's premier retailer of Widgets and Widget Accessories. Your boss has some devastating news: some customers might not be fully happy with their widgets, and we need a helpdesk application to track these complaints. To make things interesting, he wants this with a very small code footprint and challenges you to deliver an application in less than 150 lines of code using InterSystems IRIS. Is this even possible?
Last Chapter: Creating a REST client to get Tracks from Spotify REST API - Part4 Save the Search Result
Git link: https://github.com/ecelg/InterSystems-IRIS-as-a-Spotify-REST-client
OK.... based on what I have done.... I am able to
1. Query Track information by making use of the Spotify API
2. Store the necessary data into my own album, artists, and track table
so.... what next?🤔 How about I set up my own REST API service on my IRIS for the other people to query my table?🤔🤨
ok... 1st... start from document Introduction to Creating REST Services
Last Chapter: Creating a REST client to get Tracks from Spotify REST API - Part3 Get some data (e.g. Artists)
Git link: https://github.com/ecelg/InterSystems-IRIS-as-a-Spotify-REST-client
OK we create a method to get data and lets try to get some Tracks 😁
Run the following line
w ##class(rest.utli.requestUtli).getdata("Spotify","/search","offset=5&limit=10&query=Shape%20of%20you&type=track&market=SG")

ooooo no, it seems a huge amount of data is returned.....😥
I would like to know what information can be found in 1 track....🤔 how about querying only 1 track?
Last Chapter: Creating a REST client to get Tracks from Spotify REST API - Part2 Save and Refresh Token
Git link: https://github.com/ecelg/InterSystems-IRIS-as-a-Spotify-REST-client
Ok, now I am pretty sure I have a valid token for making queries.😀
Shall we try to query something from the API?
Again, its time to go through the API document https://developer.spotify.com/documentation/web-api/tutorials/getting-started
Search for Request artist data
the suggested code is like the following
Last Chapter: Creating a REST client to get Tracks from Spotify REST API - Part1 Check out token
Git link: https://github.com/ecelg/InterSystems-IRIS-as-a-Spotify-REST-client
Ok... Now we can check out a token, but it will expire in 3600 seconds.
Two questions come up🤔
1. How to save this token????🙄
2. How to refresh this token????🤨🤔
Lets come back to the API document https://developer.spotify.com/documentation/web-api/tutorials/getting-started
Based on my understanding, this part of the API does not return a token called refresh_token. As a result, we can assume the logic is as follows:
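Roughly speaking, if the stored token has expired, we simply request a brand-new one with the client-credentials flow. A Python sketch of the idea only (the series itself implements this in ObjectScript, and the names here are illustrative):

```python
# Sketch of the idea: with the client-credentials flow there is no
# refresh_token, so "refresh" means requesting a new token once the
# old one has expired.
import time
import requests

TOKEN_URL = "https://accounts.spotify.com/api/token"
_token, _expires_at = None, 0

def get_token(client_id, client_secret):
    global _token, _expires_at
    if _token is None or time.time() >= _expires_at:
        resp = requests.post(
            TOKEN_URL,
            data={"grant_type": "client_credentials"},
            auth=(client_id, client_secret),
        )
        payload = resp.json()
        _token = payload["access_token"]
        _expires_at = time.time() + payload["expires_in"] - 60  # renew a bit early
    return _token
```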
As we keep updating our software, we often realize that we require more and more modern solutions. So far, only one major piece of our software relies on reading barcodes in documents and images. Since Cache did not have a means of reading barcodes in the past, we have always achieved our goals by using a Visual Basic 6 application. However, it is no longer an ideal solution because it is currently complicated to maintain it. IRIS also lacks this capability, but it has recently got an option that makes up for it: embedded Python!
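For example (a sketch only, assuming a barcode library such as pyzbar together with Pillow is available to embedded Python; the file name is illustrative):

```python
# Sketch: decode barcodes from an image file with pyzbar + Pillow.
# Library choice and file path are illustrative.
from PIL import Image
from pyzbar.pyzbar import decode

for barcode in decode(Image.open("document.png")):
    print(barcode.type, barcode.data.decode("utf-8"))
```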
I need to create a JWT to connect to EPIC FHIRserver sandbox.
https://fhir.epic.com/Documentation?docId=oauth2&section=BackendOAuth2G…
You will generate a one-time use JSON Web Token (JWT) to authenticate your app to the authorization server and obtain an access token that can be used to authenticate your app's web service calls. There are several libraries for creating JWTs. See jwt.io for some examples.
The header and payload are then base64 URL encoded, combined with a period separating them, and cryptographically signed using the private key to generate a signature.
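For illustration only, one of those libraries could be used from Python like this (a sketch: PyJWT with the cryptography package is assumed, and every claim value, URL, and the signing algorithm are placeholders to be replaced per the Epic documentation):

```python
# Sketch: build the one-time-use client-authentication JWT with PyJWT.
# Claim values are placeholders; check the Epic documentation for the
# exact requirements (issuer, audience, expiry, signing algorithm).
import time
import uuid
import jwt  # PyJWT, with the cryptography package installed

with open("privatekey.pem", "rb") as f:
    private_key = f.read()

claims = {
    "iss": "my-client-id",          # placeholder client id
    "sub": "my-client-id",
    "aud": "https://fhir.epic.com/interconnect-fhir-oauth/oauth2/token",  # example audience
    "jti": str(uuid.uuid4()),       # unique token id
    "exp": int(time.time()) + 300,  # short-lived
}

token = jwt.encode(claims, private_key, algorithm="RS384")  # algorithm per Epic docs
print(token)
```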

This is a template for a FastAPI application that can be deployed in IRIS as a native web application.
git clone
cd iris-fastapi-template
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
docker-compose up
The base URL is http://localhost:53795/fastapi/.
- /iris - Returns a JSON object with the top 10 classes present in the IRISAPP namespace.
- /interop - A ping endpoint to test the interoperability framework of IRIS.
- /posts - A simple CRUD endpoint for a Post object.
- /comments - A simple CRUD endpoint for a Comment object.

See the WSGI introduction article: wsgi-introduction.
TL;DR : You can toggle the DEBUG flag in the Security portal to make changes to be reflected in the application as you develop.
app.py
This is the main file of the FastAPI application. It contains the FastAPI application and the routes.
from fastapi import FastAPI, Request
import iris
from grongier.pex import Director
# import models
from models import Post, Comment, init_db
from sqlmodel import Session,select
app = FastAPI()
# create a database engine
url = "iris+emb://IRISAPP"
engine = init_db(url)
- from fastapi import FastAPI, Request - Import the FastAPI class and the Request class.
- import iris - Import the IRIS module.
- from grongier.pex import Director - Import the Director class to bind the FastAPI app to the IRIS interoperability framework.
- from models import Post, Comment, init_db - Import the models and the init_db function.
- from sqlmodel import Session, select - Import the Session class and the select function from the sqlmodel module.
- app = FastAPI() - Create a FastAPI application.
- url = "iris+emb://IRISAPP" - Define the URL of the IRIS namespace.
- engine = init_db(url) - Create a database engine for the sqlmodel ORM.

models.py
This file contains the models for the application.
from sqlmodel import Field, SQLModel, Relationship, create_engine
class Comment(SQLModel, table=True):
    id: int = Field(default=None, primary_key=True)
    post_id: int = Field(foreign_key="post.id")
    content: str
    post: "Post" = Relationship(back_populates="comments")

class Post(SQLModel, table=True):
    id: int = Field(default=None, primary_key=True)
    title: str
    content: str
    comments: list["Comment"] = Relationship(back_populates="post")
Not much to say here, just the definition of the models with foreign keys and relationships.
The init_db function is used to create the database engine.
def init_db(url):
    engine = create_engine(url)
    # create the tables
    SQLModel.metadata.drop_all(engine)
    SQLModel.metadata.create_all(engine)
    # initialize database with fake data
    from sqlmodel import Session
    with Session(engine) as session:
        # Create fake data
        post1 = Post(title='Post The First', content='Content for the first post')
        ...
        session.add(post1)
        ...
        session.commit()
    return engine
- engine = create_engine(url) - Create a database engine.
- SQLModel.metadata.drop_all(engine) - Drop all the tables.
- SQLModel.metadata.create_all(engine) - Create all the tables.
- with Session(engine) as session: - Create a session to interact with the database.
- post1 = Post(title='Post The First', content='Content for the first post') - Create a Post object.
- session.add(post1) - Add the Post object to the session.
- session.commit() - Commit the changes to the database.
- return engine - Return the database engine.

/iris endpoint
######################
# IRIS Query example #
######################
@app.get("/iris")
def iris_query():
    query = "SELECT top 10 * FROM %Dictionary.ClassDefinition"
    rs = iris.sql.exec(query)
    # Convert the result to a list of dictionaries
    result = []
    for row in rs:
        result.append(row)
    return result
- @app.get("/iris") - Define a GET route for the /iris endpoint.
- query = "SELECT top 10 * FROM %Dictionary.ClassDefinition" - Define the query to get the top 10 classes in the IRIS namespace.
- rs = iris.sql.exec(query) - Execute the query.
- result = [] - Create an empty list to store the results.
- for row in rs: - Iterate over the result set.
- result.append(row) - Append the row to the result list.
- return result - Return the result list.

/interop endpoint
########################
# IRIS interop example #
########################
bs = Director.create_python_business_service('BS')

@app.get("/interop")
@app.post("/interop")
@app.put("/interop")
@app.delete("/interop")
def interop(request: Request):
    rsp = bs.on_process_input(request)
    return rsp
- bs = Director.create_python_business_service('BS') - Create a Python business service.
- @app.get("/interop") - Define a GET route for the /interop endpoint.
- @app.post("/interop") - Define a POST route for the /interop endpoint.
- def interop(request: Request): - Define the route handler.
- rsp = bs.on_process_input(request) - Call the on_process_input method of the business service.
- return rsp - Return the response.

/posts endpoint
############################
# CRUD operations posts #
############################
@app.get("/posts")
def get_posts():
    with Session(engine) as session:
        posts = session.exec(select(Post)).all()
        return posts

@app.get("/posts/{post_id}")
def get_post(post_id: int):
    with Session(engine) as session:
        post = session.get(Post, post_id)
        return post

@app.post("/posts")
def create_post(post: Post):
    with Session(engine) as session:
        session.add(post)
        session.commit()
        return post
This endpoint is used to perform CRUD operations on the Post object.
Not much to say here, just the definition of the routes to get all posts, get a post by id, and create a post.
Everything is done using the sqlmodel ORM.
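The template shows the read and create routes; as an illustration (a sketch, not necessarily the template's exact code), update and delete routes would follow the same pattern:

```python
# Sketch: update and delete routes for Post, following the same pattern.
@app.put("/posts/{post_id}")
def update_post(post_id: int, post: Post):
    with Session(engine) as session:
        db_post = session.get(Post, post_id)
        if db_post is None:
            return {"error": "post not found"}
        db_post.title = post.title
        db_post.content = post.content
        session.add(db_post)
        session.commit()
        session.refresh(db_post)
        return db_post

@app.delete("/posts/{post_id}")
def delete_post(post_id: int):
    with Session(engine) as session:
        db_post = session.get(Post, post_id)
        if db_post is not None:
            session.delete(db_post)
            session.commit()
        return {"deleted": post_id}
```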
/comments endpoint
############################
# CRUD operations comments #
############################
@app.get("/comments")
def get_comments():
    with Session(engine) as session:
        comments = session.exec(select(Comment)).all()
        return comments

@app.get("/comments/{comment_id}")
def get_comment(comment_id: int):
    with Session(engine) as session:
        comment = session.get(Comment, comment_id)
        return comment

@app.post("/comments")
def create_comment(comment: Comment):
    with Session(engine) as session:
        session.add(comment)
        session.commit()
        return comment
This endpoint is used to perform CRUD operations on the Comment object.
Not much to say here, just the definition of the routes to get all comments, get a comment by id, and create a comment.
Everything is done using the sqlmodel ORM.
You can always run the application standalone with the following command:
python3 /irisdev/app/community/app.py
NB : You must be inside of the container to run this command.
docker exec -it iris-fastapi-template-iris-1 bash
In DEBUG mode, make multiple calls to the application and the changes will be reflected in the application.
You can access the IRIS Management Portal by going to http://localhost:53795/csp/sys/UtilHome.csp.
For this you need to have IRIS installed on your machine.
Next you need to create a namespace named IRISAPP.
Install the requirements.
Install IoP :
#init iop
iop --init
# load production
iop -m /irisdev/app/community/interop/settings.py
# start production
iop --start Python.Production
Configure the application in the Security portal.
Hello to my fellow Cache Gurus:
I ran into two issues with Cache to XML Export and Cache to JSON Export in regard to array sequences. So before I waste time opening a WRC ticket, I figured I would poll the Developer Community, since there is always so much wonderful feedback and so many suggestions here! Many thanks in advance for everyone's input! Go Team!
Hi folks!
Examining FHIR profile validation with the InterSystems FHIR server. FHIR profiles are a very useful feature of the FHIR standard: they let an organization or solution establish constraints on the very broad FHIR standard so that only the parts relevant to a particular business solution apply. Learn more on FHIR profiles.
I created a very simple FHIR profile with the following JSON: