# InterSystems IRIS

1 Follower · 5.4K Posts

InterSystems IRIS is a Complete Data Platform
InterSystems IRIS gives you everything you need to capture, share, understand, and act upon your organization’s most valuable asset – your data.
As a complete platform, InterSystems IRIS eliminates the need to integrate multiple development technologies. Applications require less code, fewer system resources, and less maintenance.

Announcement Larry Finlayson · Nov 3, 2025
    Using InterSystems Embedded Analytics – Virtual December 1-5, 2025

    Embed analytics capabilities in applications and create the supporting business intelligence cubes.
    This 5-day course teaches developers and business intelligence users how to embed real-time analytics capabilities in their applications using InterSystems IRIS® Business Intelligence.
    This course presents the basics of building data models from transactional data using the InterSystems IRIS BI Architect, exploring those models and building pivot tables and charts using the InterSystems IRIS BI Analyzer, as well as creating

Question Michael Akselrod · Nov 3, 2025

Only a method or routine executed as "do Method..." is included in $ESTACK. How can I modify a method or routine that returns a value so it is also included in $ESTACK?

Example, upon executing via Studio Debug ($ESTACK is converted to a string with ">" as the delimiter):

    Expected: stackstr = %Debugger.System.DebugStub > LogGenerate > Entry > DirChain > Attributes > LogNew > LogSize > LogFormat > LogWrite
        DirChain, LogNew, LogSize, and LogFormat are executed as "If Method..."
        Attributes is executed as "set a = Attributes..."
    Current: stackstr = %Debugger.System.DebugStub >

Article Chi Wan Chan · Oct 30, 2025 2m read

Hi All,

First, I want to give a shout out to @Theo Stolker and @Rupert.Young, because they helped me with the solution.

When you use EnsLib.SQL.Snapshot as a property in a response message to return snapshot data (e.g., from a Business Operation to a Business Process), the snapshot data won't be cleaned up by the purge-messages task/service.

Class ResponseMessage Extends Ens.Response
{
    Property SnapshotProp As EnsLib.SQL.Snapshot;
}

The data will be stuck in the global: ^Ens.AppData. You can find it with this query in System>Globals:  ^Ens.AppData("EnsLib.SQL.Snapshot",

Question José Pereira · Oct 27, 2025

Hi everyone,

I'm dealing with a situation where LOAD DATA operations — especially large batches with data inconsistencies — are consuming a lot of disk space. I've noticed that the same error messages are being repeatedly logged in the %SQL_Diag.Result and %SQL_Diag.Message tables, which is significantly increasing the size of the database.

One idea was to move these diagnostic tables to a separate database with a configured size limit, but before going down that path, I'd like to ask:

Is there a simpler or more efficient way to handle this?
For example:

Question Michael Akselrod · Oct 29, 2025

Environment:
    The targeted *.inc file (with hundreds of defined macros) is in use throughout the application and included in every class declaration.
    The statement "set a = $$$TestIf(3)" is placed in a classmethod with no other code in it. Expected output: 5
Macro definition options tried in the *.inc:
    #define TestIf(%arr)    if %arr>0 QUIT 5
    #define TestIf(%arr)    if (%arr>0) {QUIT 5}
Issue:
    The class fails to compile, with the same error for all of the tried definition options:

Article Julio Esquerdo · Feb 14, 2025 5m read

HTTP and HTTPS with REST API

Hello

The HTTP protocol allows you to obtain resources, such as HTML documents. It is the basis of any data exchange on the Web, and it is a client-server protocol, meaning that requests are initiated by the recipient of the resource, usually a Web browser.

REST APIs take advantage of this protocol to exchange messages between client and server. This makes REST APIs fast, lightweight, and flexible. REST APIs use the HTTP verbs GET, POST, PUT, DELETE, and others to indicate the actions they want to perform.

When we make a call to a REST API, what actually happens is an HTTP call. The API receives this call and, according to the requested verb and path, performs the desired action. In the case of the IRIS implementation, we can see this clearly in the URLMap definition area:

Article Thomas Dyar · Mar 25, 2025 2m read

Introduction

In InterSystems IRIS 2024.3 and subsequent IRIS versions, the AutoML component is delivered as a separate Python package that is installed after the IRIS installation. Unfortunately, some recent versions of the Python packages that AutoML relies on have introduced incompatibilities and can cause failures when training models (the TRAIN MODEL statement). If you see an error mentioning "TypeError" and the keyword argument "fit_params" or "sklearn_tags", read on for a quick fix.

Root Cause

Article Luis Angel Pérez Ramos · Oct 31, 2025 5m read

Yes, yes! Welcome! You haven't made a mistake, you are in your beloved InterSystems Developer Community in Spanish.

You may be wondering what the title of this article is about. Well, it's very simple: today we are gathered here to honor the Inquisitor and praise the great work he performed.

So, who or what is the Inquisitor?

Perfect, now that I have your attention, it's time to explain what the Inquisitor is. The Inquisitor is a solution developed with InterSystems technology to subject public contracts published daily on the platform  https://contrataciondelestado.es/ to scrutiny.

Discussion Benjamin De Boe · Oct 29, 2025

Hi, 

We very much appreciate the interest in the Developer Community for IRIS Vector Search and hope our technology has helped many of you build innovative applications or advance your R&D efforts. With a dedicated index, integrated embeddings generation, and deep integration with our SQL engine now available in InterSystems IRIS, we're looking at the next frontier, and would love to hear your feedback on the technology to prioritize our investments.

Article sween · Oct 28, 2025 3m read

InterSystems IRIS Community Edition HAOS Add-On

Run InterSystems IRIS inside of Home Assistant, as an add-on. Before you dismiss this article as just a gimmick, I'd like you to step back and take a look at how easy it is to launch IRIS-based applications using this platform. If you look at Open Exchange, you will see dozens of applications worthy of launching that are basically hung out to dry as gitware, launchable only if you want to get into a laptop battle with containerd or Docker. With a simple git repo and a specification, you can now build your app on IRIS and make it launchable through a marketplace with minimal hassle for your end users. Run it alongside Ollama and the LLM/LAM implementations, expose anything in IRIS as a sensor, or expose an endpoint in your IRIS app to interact with anything you've connected to HAOS. Want to restart an IRIS production with the flick of a physical switch or an AI assistant? You can do it with this add-on, or your own, right alongside the home automation hackers.

Article Andreas Schneider · Apr 23, 2025 3m read

The first part of this article provides all the background information. It also includes links to the DATATYPE_SAMPLE database, which you can use to follow along with the examples.

In that section, we explored an error type ("Access Failure") that is easy to detect, as it immediately triggers a clear error message when attempting to read the data via the database driver.

The errors discussed in this section are more subtle and harder to detect. I’ve referred to them as “Silent Corruption” and “Undetected Mutation”.

Announcement Liubov Zelenskaia · Oct 28, 2025

Join our next in-person Developer Meetup in Boston to explore AI for Developers and Startups.

This event is hosted at CIC Venture Cafe.

Talk 1: Building Agentic conversational systems
Speaker: Suprateem Banerjee, Sales Engineer - AI Specialist, InterSystems

Talk 2: Let’s Talk about Agentic Benchmarks: the good, the bad, the ugly
Speaker: Jayesh Gupta, Solutions Developer, InterSystems

>> Register here

Article Eric Fortenberry · Feb 19, 2025 19m read

What is TLS?

TLS, the successor to SSL, stands for Transport Layer Security and provides security (i.e. encryption and authentication) over a TCP/IP connection. If you have ever noticed the "s" on "https" URLs, you have recognized an HTTP connection "secured" by SSL/TLS. In the past, only login/authorization pages on the web would use TLS, but in today's hostile internet environment, best practice indicates that we should secure all connections with TLS.

Why use TLS?

So, why would you implement TLS for HL7 connections? As data breaches, ransomware, and vulnerabilities continue to rise, every measure you take to add security to these valuable data feeds becomes more crucial. TLS is a proven, well-understood method of protecting data in transit.

TLS provides two main features that are beneficial to us: 1) encryption and 2) authentication.

Encryption

Encryption transforms the data in transit so that only the two parties in the conversation can read/understand the information being exchanged. In most cases, only the application processes involved in the TLS connection can interpret the data being transferred. This means that any bad actors on the communicating servers or networks will not be able to read the data, even if they happen to capture the raw TCP packets with a packet sniffer (think wiretapping, wireshark, tcpdump, etc.).

Without TLS

Authentication

Authentication ensures that each side is communicating with their intended party and not an impostor. By relying on the exchange of certificates (and the associated proof-of-ownership verification that occurs during a TLS handshake), when using TLS, you can be certain that you are exchanging data with a trusted party. There are several attacks that involve tricking a server into communicating with a bad actor by redirecting traffic to the wrong server (for instance, DNS and ARP poisoning). When TLS is involved, the impostors would not only have to redirect traffic, but they would also have to steal the certificates and keys belonging to the trusted party.

Authentication not only protects against intentional attacks by hackers/bad actors, but it can also protect against accidental misconfigurations that could send data to the wrong system(s). For example, if you accidentally change the IP address of an HL7 connection to a server that is not using the expected certificate, the TLS handshake will fail verification before sending any data to the incorrect server.

Host Verification

When performing verification, a client has the option of performing host verification. This verification compares the IP or hostname used in the connection with the IPs and hostnames embedded in the certificate. If enabled and the connection IP/host does not match an IP/host found in the certificate, the TLS handshake will not succeed. You can find the IPs and hostnames in the "Subject" and "Subject Alternative Name" X.509 fields that are discussed below.

Proving Ownership of a Certificate with a Private Key

To prove ownership of the certificates exchanged with TLS, you also need access to the private key tied to the public key embedded in the certificate. We won't discuss the cryptography used to prove ownership with a private key, but you need to realize that access to your certificate's private key is necessary during the TLS handshake.
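One practical consequence of this key pairing: you can confirm that a certificate and a private key belong together by comparing the public key derived from the private key with the public key embedded in the certificate. A minimal sketch using the OpenSSL CLI (file names are illustrative; the self-signed certificate exists only for the demo):

```shell
# Generate a throwaway key and self-signed certificate to work with.
openssl req -x509 -newkey rsa:2048 -keyout own-key.pem -out own-cert.pem \
    -days 30 -nodes -subj "/CN=ownership-demo" 2>/dev/null

# Extract the public key from the private key...
openssl pkey -in own-key.pem -pubout -out pub-from-key.pem

# ...and the public key embedded in the certificate.
openssl x509 -in own-cert.pem -noout -pubkey > pub-from-cert.pem

# If the two public keys match, this private key can prove
# ownership of this certificate during a TLS handshake.
diff pub-from-key.pem pub-from-cert.pem && echo "key matches certificate"
```

This same comparison is a handy sanity check when an SSL config refuses to start because the certificate and key files were mixed up.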

Mutual TLS

With most https connections made by your web browser, only the web server's authenticity/certificate is verified. Web servers typically do not authenticate the client with certificates. Instead, most web servers rely upon application-level client authentication (login forms, cookies, passwords, etc.).

With HL7, it is preferred that both sides of the connection are authenticated. When both sides are authenticated, it is called "mutual TLS". With mutual TLS, both the server and the client exchange their certificates and the other side verifies the provided certificates before continuing with the connection and exchanging data.

X.509 Certificates

X.509 Certificate Fields

To provide encryption and authentication, information about each party's public key and identity is exchanged in X.509 certificates. Below are some of the common fields of an X.509 certificate that we will focus on:

  • Serial Number: A number unique to a CA that identifies this specific certificate
  • Subject Public Key Info: Public key of the owner
  • Subject: Distinguished name (DN) of the server/service this certificate represents
    • This can be blank, if Subject Alternative Names are provided.
  • Issuer: Distinguished name (DN) of the CA that issued/signed this certificate
  • Validity Not Before: Start date that this certificate becomes valid
  • Validity Not After: Expiration date when this certificate becomes invalid
  • Basic Constraints: Indicates whether this is a CA or not
  • Key Usage: The intended usage of the public key provided by this certificate
    • Example values: digitalSignature, contentCommitment, keyEncipherment, dataEncipherment, keyAgreement, keyCertSign, cRLSign, encipherOnly, decipherOnly
  • Extended Key Usage: Additional intended usages of the public key provided by this certificate
    • Example values: serverAuth, clientAuth, codeSigning, emailProtection, timeStamping, OCSPSigning, ipsecIKE, msCodeInd, msCodeCom, msCTLSign, msEFS
    • Both serverAuth and clientAuth usages are needed for mutual TLS connections.
  • Subject Key Identifier: Identifies the subject's public key provided by this certificate
  • Authority Key Identifier: Identifies the issuer's public key used to verify this certificate
  • Subject Alternative Name: Contains one or more alternative names for this subject
    • DNS names and IP addresses are common alternative names provided in this field.
    • Subject Alternative Name is sometimes abbreviated SAN.
    • The DNS name or IP address used in the connection should be in this list or the Subject's Common Name for host verification to be successful.

Distinguished Names

The Subject and Issuer fields of an X.509 certificate are defined as Distinguished Names (DN). Distinguished names are made up of multiple attributes, where each attribute has the format <attr>=<value>. While not an exhaustive list, here are several common attributes found in Subject and Issuer fields:

| Abbreviation | Name | Example | Notes |
| --- | --- | --- | --- |
| CN | Common Name | CN=server1.domain.com | Usually, the Fully Qualified Domain Name (FQDN) of a server/service |
| C | Country | C=US | Two-character country code |
| ST | State (or Province) | ST=Massachusetts | Full state/province name |
| L | Locality | L=Cambridge | City, county, region, etc. |
| O | Organization | O=Best Corporation | Organization's name |
| OU | Organizational Unit | OU=Finance | Department, division, etc. |

Given the examples in the table above, the full DN for this example would be C=US, ST=Massachusetts, L=Cambridge, O=Best Corporation, OU=Finance, CN=server1.domain.com

Note that the Common Name found in the Subject is used during host verification and normally matches the fully qualified domain name (FQDN) of the server or service associated with the certificate. The Subject Alternative Names from the certificate can also be used during host verification.
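You can see these fields on a real certificate with the OpenSSL CLI. The sketch below creates a certificate carrying the example DN from the table above plus a Subject Alternative Name, then prints the fields used during host verification (names and the IP are illustrative; `-addext` requires OpenSSL 1.1.1 or newer):

```shell
# Self-signed certificate with the example Subject DN and a SAN entry.
openssl req -x509 -newkey rsa:2048 -keyout dn-key.pem -out dn-cert.pem \
    -days 30 -nodes \
    -subj "/C=US/ST=Massachusetts/L=Cambridge/O=Best Corporation/OU=Finance/CN=server1.domain.com" \
    -addext "subjectAltName=DNS:server1.domain.com,IP:10.1.2.3" 2>/dev/null

# Print the Subject, Issuer, and SAN -- the fields host verification checks.
openssl x509 -in dn-cert.pem -noout -subject -issuer -ext subjectAltName
```

Because this certificate is self-signed, the Subject and Issuer printed here will match, which is exactly the self-signed test described later in this article.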

Certificate Expiration

The Validity Not Before and Validity Not After fields in the certificate provide the range of dates between which the certificate is valid.

Typically, leaf certificates are valid for a year or two (though there is a push for web sites to reduce their expiration windows to much shorter ranges). Certificate authorities tend to have an expiration window of several years.

Certificate expiration is a necessary but inconvenient feature of TLS. Before adding TLS to your HL7 connections, be sure to have a plan for replacing the certificates prior to their expiration. Once a certificate expires, you will no longer be able to establish a TLS connection with it.
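The OpenSSL CLI can check the validity window for you, which is useful in a monitoring script that warns before an HL7 interface's certificate expires. A sketch (file names illustrative; the 30-day certificate exists only for the demo):

```shell
# Issue a throwaway certificate valid for 30 days.
openssl req -x509 -newkey rsa:2048 -keyout exp-key.pem -out exp-cert.pem \
    -days 30 -nodes -subj "/CN=expiry-demo" 2>/dev/null

# Show the Validity Not Before / Not After window.
openssl x509 -in exp-cert.pem -noout -dates

# -checkend N exits 0 if the certificate is still valid N seconds from now.
# Here: "will it still be valid one day from now?" -- handy in a renewal cron job.
openssl x509 -in exp-cert.pem -noout -checkend 86400 && echo "not expiring within a day"
```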

X.509 Certificate Formats

These X.509 certificate fields (along with others) are arranged in ASN.1 format and typically saved to file in one of the following formats:

  • DER (binary format)
  • PEM (base64)

An example PEM-encoding of an X.509 certificate:

-----BEGIN CERTIFICATE-----
MIIEVTCCAz2gAwIBAgIQMm4hDSrdNjwKZtu3NtAA9DANBgkqhkiG9w0BAQsFADA7
MQswCQYDVQQGEwJVUzEeMBwGA1UEChMVR29vZ2xlIFRydXN0IFNlcnZpY2VzMQww
CgYDVQQDEwNXUjIwHhcNMjUwMTIwMDgzNzU0WhcNMjUwNDE0MDgzNzUzWjAZMRcw
FQYDVQQDEw53d3cuZ29vZ2xlLmNvbTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IA
BDx/pIz8HwLWsWg16BG6YqeIYBGof9fn6z6QwQ2v6skSaJ9+0UaduP4J3K61Vn2v
US108M0Uo1R1PGkTvVlo+C+jggJAMIICPDAOBgNVHQ8BAf8EBAMCB4AwEwYDVR0l
BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQU3rId2EvtObeF
NL+Beadr56BlVZYwHwYDVR0jBBgwFoAU3hse7XkV1D43JMMhu+w0OW1CsjAwWAYI
KwYBBQUHAQEETDBKMCEGCCsGAQUFBzABhhVodHRwOi8vby5wa2kuZ29vZy93cjIw
JQYIKwYBBQUHMAKGGWh0dHA6Ly9pLnBraS5nb29nL3dyMi5jcnQwGQYDVR0RBBIw
EIIOd3d3Lmdvb2dsZS5jb20wEwYDVR0gBAwwCjAIBgZngQwBAgEwNgYDVR0fBC8w
LTAroCmgJ4YlaHR0cDovL2MucGtpLmdvb2cvd3IyLzlVVmJOMHc1RTZZLmNybDCC
AQMGCisGAQQB1nkCBAIEgfQEgfEA7wB2AE51oydcmhDDOFts1N8/Uusd8OCOG41p
wLH6ZLFimjnfAAABlIMTadcAAAQDAEcwRQIgf6SEH+xVO+nGDd0wHlOyVTbmCwUH
ADj7BJaSQDR1imsCIQDjJjt0NunwXS4IVp8BP0+1sx1BH6vaxgMFOATepoVlCwB1
AObSMWNAd4zBEEEG13G5zsHSQPaWhIb7uocyHf0eN45QAAABlIMTaeUAAAQDAEYw
RAIgBNtbWviWZQGIXLj6AIEoFKYQW4pmwjEfkQfB1txFV20CIHeouBJ1pYp6HY/n
3FqtzC34hFbgdMhhzosXRC8+9qfGMA0GCSqGSIb3DQEBCwUAA4IBAQCHB09Uz2gM
A/gRNfsyUYvFJ9J2lHCaUg/FT0OncW1WYqfnYjCxTlS6agVUPV7oIsLal52ZfYZU
lNZPu3r012S9C/gIAfdmnnpJEG7QmbDQZyjF7L59nEoJ80c/D3Rdk9iH45sFIdYK
USAO1VeH6O+kAtFN5/UYxyHJB5sDJ9Cl0Y1t91O1vZ4/PFdMv0HvlTA2nyCsGHu9
9PKS0tM1+uAT6/9abtqCBgojVp6/1jpx3sx3FqMtBSiB8QhsIiMa3X0Pu4t0HZ5j
YcAkxtIVpNJ8h50L/52PySJhW4gKm77xNCnAhAYCdX0sx76eKBxB4NqMdCR945HW
tDUHX+LWiuJX
-----END CERTIFICATE-----

As you can see, PEM encoding wraps the base64-encoded ASN.1 data of the certificate with -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----.
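Converting between the two formats is a one-line OpenSSL operation, and the conversion is lossless. A sketch with illustrative file names:

```shell
# Make a throwaway certificate to convert.
openssl req -x509 -newkey rsa:2048 -keyout fmt-key.pem -out fmt-cert.pem \
    -days 30 -nodes -subj "/CN=format-demo" 2>/dev/null

# PEM -> DER (binary), then DER -> PEM again.
openssl x509 -in fmt-cert.pem -outform DER -out fmt-cert.der
openssl x509 -in fmt-cert.der -inform DER -out fmt-cert-roundtrip.pem

# The round trip preserves the certificate byte for byte.
diff fmt-cert.pem fmt-cert-roundtrip.pem && echo "round trip OK"
```

This is the conversion you will reach for when a CA hands you a DER-encoded certificate but your interface engine expects PEM (see step 3 of the setup section below... the same `openssl x509` invocation applies).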

Building Trust with Certificate Authorities

On the open internet, it would be impossible for your web browser to know about and trust every website's certificate. There are just too many!

To get around this problem, your web browser delegates trust to a pre-determined set of certificate authorities (CAs). Certificate authorities are entities which verify that a person requesting a certificate for a web site or domain actually owns and is responsible for the server, domain, or business associated with the certificate request. Once the CA has verified an owner, it is able to issue the requested certificate.

Each certificate authority is represented by one or more X.509 certificates. These CA certificates are used to sign any certificates issued by the CA. If you look in the Issuer field of an X.509 certificate, you will find a reference to the CA certificate that created and signed this certificate.

If a certificate is created without a certificate authority, the certificate is called a self-signed certificate. You know a certificate is self-signed if the Subject and Issuer fields of the certificate match.

Generally, the CA will create a self-signed root certificate with a long expiration window. This root certificate will then be used to generate a couple of intermediate certificate authorities, that have a slightly shorter expiration window. The root CA will be securely locked down and rarely be used after creating the intermediate CAs. The intermediate CAs will be used to issue and sign leaf certificates on a day-to-day basis.

The reason for creating intermediate CAs instead of using the root CA directly is to minimize impact in the case of a breach or mishandled certificate. If a single intermediate CA is compromised, the company will still have the other CAs available to continue providing service.

Certificate Chains

A connection's certificate and all of the CA certificates involved in issuing and signing this certificate can be arranged into a structure called a certificate chain. This certificate chain (as described below) will be used to verify and trust the connection's certificate.

If you follow a connection's leaf certificate to the issuing CA (using the Issuer field), and then walk from that CA to its issuer (and so on, until you reach a self-signed root certificate), you will have walked the certificate chain.

Building a Certificate Chain

Trusting a Certificate

Your web browser and operating system typically maintain a list of trusted certificate authorities. When configuring an HL7 interface or other application, you will likely point your interface to a CA-bundle file that contains a list of trusted CAs. This file will usually contain a list of one or more CA certificates encoded in PEM format. For example:

# Maybe an Intermediate CA
-----BEGIN CERTIFICATE-----
MIIDQTCCAimgAwIBAgITBmyfz5m/jAo54vB4ikPmljZbyjANBgkqhkiG9w0BAQsF
...
rqXRfboQnoZsG4q5WTP468SQvvG5
-----END CERTIFICATE-----

# Maybe the Root CA
-----BEGIN CERTIFICATE-----
MIIDqDCCApCgAwIBAgIJAP7c4wEPyUj/MA0GCSqGSIb3DQEBBQUAMDQxCzAJBgNV
...
WyH8EZE0vkHve52Xdf+XlcCWWC/qu0bXu+TZLg==
-----END CERTIFICATE-----

When your web browser (or HL7 interface) attempts to make a TLS connection, it will use this list of trusted CA certificates to determine if it trusts the certificate exchanged during the TLS handshake.

The process will start at the leaf certificate and traverse the certificate chain to the next CA certificate. If the CA certificate is not found in the trust store or CA-bundle, then the leaf certificate is not trusted, and the TLS connection fails.

If the CA certificate is found in the trust store or CA-bundle file, then the process continues walking up the certificate chain, verifying that each CA along the way is in the trust store. Once the root CA certificate at the top of the chain is verified (along with all of the intermediate CA certificates along the way), the process can trust the server's leaf certificate.
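The chain walk above can be exercised end-to-end with the OpenSSL CLI: mint a throwaway root CA, have it sign a leaf certificate, then verify the leaf against the CA bundle. All names are illustrative, and real CAs would add intermediates between root and leaf:

```shell
# A throwaway self-signed root CA.
openssl req -x509 -newkey rsa:2048 -keyout ca-key.pem -out ca-cert.pem \
    -days 30 -nodes -subj "/CN=Demo Root CA" 2>/dev/null

# A leaf key and certificate signing request...
openssl req -newkey rsa:2048 -keyout leaf-key.pem -out leaf.csr \
    -nodes -subj "/CN=leaf.example.com" 2>/dev/null

# ...which the CA signs, producing the leaf certificate.
openssl x509 -req -in leaf.csr -CA ca-cert.pem -CAkey ca-key.pem \
    -CAcreateserial -days 15 -out leaf-cert.pem 2>/dev/null

# Walk the chain: verify the leaf against the trusted CA bundle.
# Prints "leaf-cert.pem: OK" when the chain checks out.
openssl verify -CAfile ca-cert.pem leaf-cert.pem
```

Swapping in the wrong `-CAfile` here fails verification, which is the same failure mode you will see when an HL7 peer sends a certificate your CA-bundle does not cover.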

Determining Trust

The TLS Handshake

To add TLS to a TCP/IP connection (such as an HL7 feed), the client and server must perform a TLS handshake after the TCP/IP connection has been established. This handshake involves agreeing on encryption ciphers/methods, agreeing on TLS version, exchanging X.509 certificates, proving ownership of these certificates, and validating that each side trusts the other.

The high-level steps of a TLS handshake are:

  1. Client makes TCP/IP connection to the server.
  2. Client starts the TLS handshake.
  3. Server sends its certificate (and proof-of-ownership) to the client.
  4. Client verifies the server certificate.
  5. If mutual TLS, the client sends its certificate (and proof-of-ownership) to the server.
  6. If mutual TLS, the server verifies the client certificate.
  7. Client and server send encrypted data back and forth.

TLS Handshake

1. Client makes TCP/IP connection to the server.

During step #1, the client and server perform a TCP 3-way handshake to establish a TCP/IP connection between them. In a 3-way handshake:

  1. The client sends a SYN packet.
  2. The server sends a SYN-ACK packet.
  3. The client sends an ACK packet.

Once this handshake is complete, the TCP/IP connection is established. The next step is to start the TLS handshake.

2. Client starts the TLS handshake.

After a TCP connection is established, one of the sides must act as the client and start the TLS handshake. Typically, the process that initiated the TCP connection is also responsible for initiating the TLS handshake, but in rare cases this can be flipped.

To start the TLS handshake, the client sends a ClientHello message to the server. This message contains various options used to negotiate the security settings of the connection with the server.

3. Server sends its certificate (and proof-of-ownership) to the client.

After receiving the client's ClientHello message, the server in turn responds with a ServerHello message. This includes the negotiated security settings.

Following the ServerHello message, the server will also send a Certificate and CertificateVerify message to the client. This shares the X.509 certificate chain with the client and provides proof-of-ownership of the associated private key for the certificate.

4. Client verifies the server certificate.

Once the client receives the ServerHello, Certificate, and CertificateVerify messages, the client will verify that the certificate is valid and trusted (by comparing the CAs to trusted CA-bundle files, the operating system certificate store, or web browser certificate store). The client will also do any host verification (see above) to make sure the connection address matches the certificate addresses/IPs.

5. If mutual TLS, the client sends its certificate (and proof-of-ownership) to the server.

If this is a mutual TLS connection (determined by the server sending a CertificateRequest message), the client will send a Certificate message including its certificate chain and then a CertificateVerify message to prove ownership of the associated private key.

6. If mutual TLS, the server verifies the client certificate.

Again, if this is a mutual TLS connection, the server will verify that the certificate chain sent by the client is valid and trusted.

7. Client and server send encrypted data back and forth.

If the TLS handshake makes it this far without failing, the client and server will exchange Finished messages to complete the handshake. After this, encrypted data can be sent back-and-forth between the client and the server.

Setting Up TLS on HL7 Interfaces

Congratulations on making it this far! Now that you know about TLS, how would you go about implementing it on your HL7 connections? In general, here are the steps you will need to perform to set up TLS on your HL7 connections.

  1. Choose a certificate authority.
  2. Create a key and certificate signing request.
  3. Obtain your certificate from your CA.
  4. Obtain the certificate chain for your peer.
  5. Create an SSL config for the connection.
  6. Add the SSL config to the interface, bounce the interface, and verify message flow.

1. Choose a certificate authority.

The process you use to obtain a certificate and key for your server will greatly depend upon the security policies of your company. In most scenarios, you will end up with one of the following CAs signing your certificate:

  1. An internal, company CA will sign your certificate.
  • This is my favorite option, as your company already has the infrastructure in place to maintain certificates and CAs. You just need to work with the team that owns this infrastructure to get your own certificate for your HL7 interfaces.
  2. A public CA will sign your certificate.
  • This option is nice in the sense that the public CA also has all of the infrastructure in place to maintain certificates and CAs. This option is probably overkill for most HL7 interfaces, as public CAs typically provide certificates for the open internet; HL7 interfaces tend to connect over private intranet, not the public internet.
  • Obtaining certificates from a public CA may incur a cost, as well.
  3. A CA you create and maintain will sign your certificate.
  • This option may work well for you, but unfortunately, this means you bear the burden of maintaining and securing your CA configuration and software.
  • Use at your own risk!
  • This option is the most complex. Get ready for a steep learning curve.
  • You can use open source, proven software packages for managing your CA and certificates. The OpenSSL suite is a great option. Other options are EJBCA, step-ca, and cfssl.

2. Create a key and certificate signing request.

After you have chosen your CA, your next step is to create a private key and certificate signing request (CSR). How you generate the key and CSR will depend upon your company policy and the CA that you chose. For now, we'll just talk about the steps from a high-level.

When generating a private key, the associated public key is also generated. The public key will be embedded within your CSR and your signed certificate. These two keys will be used to prove ownership of your signed certificate when establishing a TLS connection.

CAUTION! Make sure that you save your private key in a secure location (preferably in a password-protected format). If you lose this key, your certificate will no longer be usable. If someone else gains access to this key, they will be able to impersonate your server.

The certificate signing request will include information about your server, your company, your public key, how you will use the certificate, etc. It will also include proof that you own the associated private key. This CSR will then be provided to your CA to generate and sign your certificate.

NOTE: When creating the CSR, make sure that you request an Extended Key Usage of both serverAuth and clientAuth if you are using mutual TLS. Most CAs are used to signing certificates with only the serverAuth key usage. Unfortunately, such a certificate cannot be used as a client certificate in a mutual TLS connection.
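With the OpenSSL CLI, the key and CSR generation (including the dual Extended Key Usage) can be sketched as follows. The hostname is illustrative, `-addext` requires OpenSSL 1.1.1 or newer, and whether the requested extensions survive into the signed certificate is ultimately up to your CA's policy:

```shell
# Private key plus CSR requesting both serverAuth and clientAuth,
# so the signed certificate works on either end of a mutual TLS connection.
# (-nodes skips key encryption for the demo; protect real keys with a password.)
openssl req -newkey rsa:2048 -keyout mtls-key.pem -out mtls.csr -nodes \
    -subj "/CN=hl7.example.com" \
    -addext "extendedKeyUsage=serverAuth,clientAuth" \
    -addext "subjectAltName=DNS:hl7.example.com" 2>/dev/null

# Confirm the requested extensions before submitting the CSR to the CA.
openssl req -in mtls.csr -noout -text | grep -A1 "Extended Key Usage"
```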

3. Obtain your certificate from your CA.

After creating your key and CSR, submit the CSR to your certificate authority. After performing several checks, your CA should be able to provide you with a signed certificate and the associated certificate chain. You will want this certificate and chain saved in PEM format. If the CA provided your certificate in a different format, you will need to convert it using a tool like OpenSSL.

4. Obtain the certificate chain for your peer.

The previous steps were focused on obtaining a certificate for your server. You should be able to use this certificate (and the associated key) with each HL7 connection to/from this server. You will also have to obtain the certificate chains for each of the systems/peers to which you will be connecting.

The certificate chains for each peer will need to be saved in a file in PEM format. This CA-bundle will not need to contain the leaf certificates; it only needs to contain the intermediate and root CA certificates.

Be sure to provide your peer with a CA-bundle containing your intermediate and root CAs. This will allow them to trust your certificate when you make a connection.

5. Create an SSL config for the connection.

In InterSystems Health Connect, you will need to create client and server SSL configs for each system that your server will be connecting to. These SSL configs will point to the associated system's CA-bundle file and will also point to your server's key and certificate files.

Client SSL configs are used on operations to initiate the TLS handshake. Server SSL configs are used on services to respond to TLS handshakes. If a system has both inbound services and outbound operations, you will need to configure both a client and server SSL config for that system.

To create a client SSL config:

  1. Go to System Administration > Security > SSL/TLS Configurations.
  2. Click Create New Configuration.
  3. Give your SSL configuration a Configuration Name and Description.
  4. Make sure your SSL configuration is Enabled.
  5. Choose Client as the Type.
  6. Choose Require for the Server certificate verification field. This performs host verification on the connection.
  7. Point File containing trusted Certificate Authority certificate(s) to the CA-bundle file that contains the intermediate and root CAs (in PEM format) for the system to which you are connecting.
  8. Point File containing this client's certificate to the file that holds your server's X.509 certificate in PEM format.
  9. Point File containing associated private key to the file containing your certificate's private key.
  10. Private key type will most likely be RSA. This should match the type of your private key.
  11. If your private key is password protected (as it should be), fill in the password in both the Private key password and Private key password (confirm) fields.
  12. You can likely leave the other fields at their default values.

To create a server SSL config:

  1. Go to System Administration > Security > SSL/TLS Configurations.
  2. Click Create New Configuration.
  3. Give your SSL configuration a Configuration Name and Description.
  4. Make sure your SSL configuration is Enabled.
  5. Choose Server as the Type.
  6. Choose Require for the Client certificate verification field. This will make sure that mutual TLS is performed.
  7. Point File containing trusted Certificate Authority certificate(s) to the CA-bundle file that contains the intermediate and root CAs (in PEM format) for the system to which you are connecting.
  8. Point File containing this server's certificate to the file that holds your server's X.509 certificate in PEM format.
  9. Point File containing associated private key to the file containing your certificate's private key.
  10. Private key type will most likely be RSA. This should match the type of your private key.
  11. If your private key is password protected (as it should be), fill in the password in both the Private key password and Private key password (confirm) fields.
  12. You can likely leave the other fields at their default values.
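Before pointing either config at your certificate and key files, it can save a debugging session to confirm that the certificate and private key actually match. A sketch using a throwaway self-signed pair (the file names my.pem and my.key are illustrative):

```shell
set -e
# Throwaway key + self-signed certificate, for illustration only.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example" -keyout my.key -out my.pem
# The public key extracted from the certificate must equal the
# public key derived from the private key.
cert_pub=$(openssl x509 -in my.pem -noout -pubkey)
key_pub=$(openssl pkey -in my.key -pubout)
[ "$cert_pub" = "$key_pub" ] && echo "certificate and key match"
```

If the two differ, the SSL config will fail to start the handshake, so this is worth checking whenever certificates are renewed.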

Creating SSL Configs

6. Add the SSL config to the interface, bounce the interface, and verify message flow.

Once you've created the client and server SSL configs, you are ready to activate TLS on the interfaces. On each service or operation, select the associated SSL config on the Connection Settings > SSL Configuration dropdown found on the Settings tab of the interface.

After bouncing the interface, you should see the connection reestablish. When a new message is transferred, a Completed status indicates that TLS is working. If TLS is not working, the connection will drop every time a message is attempted.

To help debug issues with TLS, you may need to use tools such as tcpdump, Wireshark, or OpenSSL's s_client utility.
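For example, OpenSSL's s_client can confirm that a handshake and certificate verification succeed against a listener. The sketch below stands up a local s_server with a throwaway certificate in place of the remote interface; the port and file names are arbitrary:

```shell
set -e
# Throwaway self-signed certificate for the test listener.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=localhost" -keyout srv.key -out srv.pem
# Start a local TLS endpoint standing in for the peer interface.
openssl s_server -accept 14433 -cert srv.pem -key srv.key -quiet &
SRV_PID=$!
sleep 1
# -CAfile plays the role of the CA-bundle; -verify_return_error
# makes s_client exit non-zero if certificate verification fails.
echo | openssl s_client -connect 127.0.0.1:14433 -CAfile srv.pem \
  -verify_return_error > handshake.log 2>&1
kill $SRV_PID
grep "Verify return code: 0 (ok)" handshake.log
```

Against a real peer, you would point -connect at their host:port and -CAfile at their CA-bundle; a non-zero verify return code in the output pinpoints exactly which part of the chain failed.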

Summary

This has been a very long deep-dive into the topic of SSL/TLS. There is so much more information that was not included in this article. Hopefully, this has provided you with enough of an overview of how TLS works that you can research the details and learn more information as needed.

If you are looking for an in-depth resource on TLS, check out Ivan Ristić's website, feistyduck.com, and book, Bulletproof TLS and PKI. I have found this book to be a great resource for learning more about the details of TLS.

1
6 466
Article Kate Lau · Oct 13, 2025 13m read

Hi all,

Let's do some more work about the testing data generation and export the result by REST API.😁

Here, I would like to reuse the datagen.restservice class built in the previous article Writing a REST api service for exporting the generated patient data in .csv

This time, we are planning to generate a FHIR bundle that includes multiple resources for testing the FHIR repository.

Here is some reference for you, if you want to know more about FHIR: The Concept of FHIR: A Healthcare Data Standard Designed for the Future

OK... Let's start😆

6
0 106
Question Colin Nagle · Oct 24, 2024

I have an API set up in IRIS which is secured using an IRIS authentication service, so there is a bearer token being passed down in the request header.

I've already set Parameter HandleCorsRequest = 1; on the spec class, and all the endpoints I have (a mix of GET, POST, PATCH and DELETE) work from Postman without issue. The problem is when consuming from the web front-end and the browser instigates preflight checks. Most of the endpoints work in the browser, but some trigger the preflight (OPTIONS) check, causing the CORS issue.

This is what I am seeing in the browser:-

5
0 287
Question David.Satorres6134 · Oct 9, 2023

Hello all,

I'm trying to build a cube based on a linked table but seems that IRIS is not able to do it :O

Long story short, I have a linked table in IRIS that sources a Microsoft SQL table (using standard linked feature from the portal). It works fine, I can access it using SQL as many other times. On top of that, I've created in DeepSee (ok, Analytics) a cube that uses this class as source. It compiles correctly, no errors given. When I build it with 100 records, all goes well and using Analyzer I can see results.

1
0 195
Question Jerry Wang · Apr 27, 2023

Hi experts

I'm trying to configure an IRIS ODBC connection with "Windows NT authentication using the network login ID". I have created the System DSN as below:

and user (PROD\test) in the SQL Gateway connection 

However, as the error message suggests, IRIS is trying to connect with PROD\svc_mist, rather than PROD\test configured above. 

Is there anyway to configure the ODBC connection with specified account with Windows Auth method? 

3
0 262
Article Evgeny Shvarov · Feb 13, 2022 2m read

Folks!

Recently I found several one-line-long ObjectScript commands on DC and think that it'd be great not to lose them and to collect more!

So I decided to gather a few first cases, put in one OEX project, and share them with you!

And here is how you can use them.

1. Create client SSL configuration.

set $namespace="%SYS", name="DefaultSSL" do:'##class(Security.SSLConfigs).Exists(name) ##class(Security.SSLConfigs).Create(name)

Useful if you need to read content from a URL.

Don't forget to return to a previous namespace. Or add 

n $namespace
23
8 1252
Article Robert Cemper · Oct 21, 2025 2m read

If you start with InterSystems ObjectScript, you will meet the XECUTE command.
And beginners may ask: Where and why might I need to use this?

The official documentation has a rich collection of code snippets, but no practical case.
Just recently, I met a use case that I'd like to share with you.

The scenario:

When you build an IRIS container with Docker, in most cases
you run the initialization script

iris session iris < iris.script 
1
3 78
Question Scott Roth · Oct 24, 2025

According to the Documentation  EnsLib.Workflow.TaskRequest has the following fields...

  • %Action
  • %Command
  • %FormFields
  • %FormTemplate
  • %FormValues
  • %Message
  • %Priority
  • %Subject
  • %TaskHandler
  • %Title
  • %UserName

I want to be able to capture the Source, Session ID, and any other Identifiers outside of the Error so it will show up on the Task List.

I am struggling with how to build a CSP template to capture additional fields to send to the Workflow Operation.

0
0 30
InterSystems Official Dipak Bhujbal · Oct 24, 2025

Overview 

This release focuses on upgrade reliability, security expansion, and support experience improvements across multiple InterSystems Cloud Services. With this version, all major offerings—including FHIR Server, InterSystems Data Fabric Studio (IDS), IDS with Supply Chain, and IRIS Managed Services—now support Advanced Security, providing a unified and enhanced security posture. 

New Features and Enhancements 

0
0 60
Question Pietro Di Leo · Jan 17, 2024

Hi everyone, 

Does anyone know how to export projects via VSC? 

I opened the project through the "InterSystems Tools" plugin (command is "Edit Code in Project") and I can correctly work on it.

However, when I try using the "ObjectScript" plugin to export the project (right click on the project -> "Export Project Contents")

This message appears and it is not possible to export the project:

I've tried also to open a new window, then a folder and finally the project, but nothing changes. 

This is an example of my workspace: 


Anyone knows how to do it? 

Thank you! 

6
0 2978
Article Murray Oldfield · Aug 26, 2025 6m read

I am regularly contacted by customers about memory sizing when they get alerts that free memory is below a threshold, or they observe that free memory has dropped suddenly. Is there a problem? Will their application stop working because it has run out of memory for running system and application processes? Nearly always, the answer is no, there is nothing to worry about. But that simple answer is usually not enough. What's going on?

Consider the chart below. It shows the output of the free metric in vmstat. There are other ways to display a system's free memory, for example, the free -m command. Sometimes, free memory will gradually disappear over time. The chart below is an extreme example, but it illustrates well what's going on.

image

As you can see, at around 2 am, some memory is freed, then suddenly drops close to zero. This system is running the IntelliCare EHR application on the InterSystems IRIS database. The vmstat information came from a ^SystemPerformance HTML file that collects vmstat, iostat and many other system metrics. What else is going on on this system? It is the middle of the night, so I don't expect much is happening in the hospital. Let's look at iostat for the database volumes.

image

There is a burst of reads at the same time as the free memory drops. The drop in reported free memory aligns with a spike in large block-sized reads (2048 KB request size) shown in iostat for the database disk. This is very likely a backup process or file copy operation. OK, so correlation isn't causation, but this is worth looking at, and, it turns out, it explains what's going on.

Let's look at some other output from ^SystemPerformance. The command free -m is run at the same rate as vmstat (for example, every 5 seconds), and is output with a date and time stamp, so we can also chart the counters in free -m.

The counters are:

  • Memtotal – Total physical RAM.
  • used – RAM in active use (apps + OS + cache).
  • free – Completely unused RAM.
  • shared – Memory shared between processes.
  • buff/cache – RAM used for buffers & cache, reclaimable if needed.
  • available – RAM still usable without swapping.
  • swaptotal – Total swap space on disk.
  • swapused – Swap space currently in use.
  • swapfree – Unused swap space.

Why does free memory drop at 2 am?

  • The large sequential reads fill the filesystem page cache, temporarily consuming memory that appears as "used" in free -m.
  • Linux aggressively uses otherwise idle memory for caching I/O to improve performance.
  • Once the backup ends (≈ 03:00), memory is gradually reclaimed as processes need it.
  • Around 6 am, the hospital starts to get active, and memory is used for IRIS and other processes.

Low free memory is not a shortage, but rather the system utilising "free" memory for caching. This is normal Linux behaviour! The backup process is reading large amounts of data, which Linux aggressively caches in the buffer/cache memory. The Linux kernel converts "free" memory into "cache" memory to speed up I/O operations.
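This behaviour is easy to reproduce on any Linux machine: writing and re-reading a file makes the page cache (the Cached counter in /proc/meminfo) grow. A sketch, with an arbitrary 64 MB file and file name:

```shell
# Page cache demonstration: data written and re-read below stays cached,
# so it shows up as "buff/cache" rather than "free" memory.
before=$(awk '/^Cached:/ {print $2; exit}' /proc/meminfo)
dd if=/dev/zero of=cache_demo.bin bs=1M count=64 2>/dev/null
cat cache_demo.bin > /dev/null
after=$(awk '/^Cached:/ {print $2; exit}' /proc/meminfo)
echo "Cached before=${before} kB, after=${after} kB"
rm cache_demo.bin
```

On a quiet system the Cached value rises by roughly the file size; under memory pressure the kernel reclaims it again, which is exactly what happens after the 2 am backup.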

Summary

The filesystem cache is designed to be dynamic. If a process requires memory, it will be immediately reclaimed. This is a normal part of Linux memory management.


Does Huge Pages have an impact?

For performance and to reserve memory for IRIS shared memory, the best practice for production IRIS deployments on servers with large memory is to use Linux Huge Pages. For IntelliCare, a rule of thumb I use is 8 GB of memory per core and around 75% of memory for IRIS shared memory -- Routine and Global buffers, GMHEAP, and other shared memory structures. How shared memory is divided up depends on application requirements. Your requirements could be completely different. For example, using that CPU to memory ratio, is 25% enough for your application IRIS processes and OS processes?

InterSystems IRIS uses direct I/O for database and journal files, which bypasses the filesystem cache. Its shared memory segments (globals, routines, gmheap, etc.) are allocated from Huge Pages.

  • These huge pages are dedicated to IRIS shared memory and do not appear as "free" or "cache" in free -m.
  • Once allocated, huge pages are not available for filesystem cache or user processes.

This explains why the free -m metrics look "low" even though the IRIS database itself is not starved of memory.


How is the free memory for a process calculated?

From above, in free -m, the relevant lines are:

  • free – Completely unused RAM.
  • available – RAM still usable without swapping.

Available is a good indicator — it includes reclaimable cache and buffers, showing what’s actually available for new processes without swapping. What processes? For a review, have a look at InterSystems Data Platforms and Performance Part 4 - Looking at Memory . A simple list is: Operating system, other non-IRIS application processes, and IRIS processes.
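The distinction shows up directly in /proc/meminfo, which is where free gets its numbers. A quick sketch (Linux assumed):

```shell
# Read the raw counters (in kB) that free -m summarises.
free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
cached_kb=$(awk '/^Cached:/ {print $2; exit}' /proc/meminfo)
echo "MemFree=${free_kb} kB  MemAvailable=${avail_kb} kB  Cached=${cached_kb} kB"
# MemAvailable is normally well above MemFree, because it counts
# reclaimable page cache and buffers as usable without swapping.
```

This is why monitoring should alert on MemAvailable (or free's available column), not on the free column.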

Let's look at a chart of the free -m output.

image

Although free drops near zero during the backup, available remains much higher (tens of GB). That means the system could provide that memory to processes if needed.

Where do huge pages appear in free?

By default, free -m does not show huge pages directly. To see them, you need /proc/meminfo entries like HugePages_Total, HugePages_Free, and Hugepagesize.

Because the OS reserves huge pages at startup, they are effectively invisible to free -m. They are locked away from the general memory pool.
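A quick sketch of reading those entries directly (Linux assumed; on a system with no reserved huge pages, HugePages_Total will simply be 0):

```shell
# Huge page counters do not appear in free -m; read /proc/meminfo directly.
hp_total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
hp_free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
hp_size=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
echo "HugePages_Total=${hp_total} HugePages_Free=${hp_free} Hugepagesize=${hp_size} kB"
```

Multiplying HugePages_Total by Hugepagesize gives the memory the OS has locked away from the general pool, which accounts for the gap between physical RAM and what free -m reports.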

Summary

  • The low "free memory" seen around 02:00 is caused by the Linux page cache being filled by backup reads. This is expected behaviour and does not indicate a memory shortage.
  • Huge pages reserved for IRIS are unaffected and continue serving the database efficiently.
  • The actual memory available to applications is best measured by the available column, which shows the system still has sufficient headroom.

But wait, what about if I don't use Huge Pages?

It is common not to use Huge Pages on non-production systems or systems with limited memory -- the performance gains of Huge Pages are usually not significant under 64 GB, although it is still best practice to use Huge Pages to protect IRIS shared memory.

Sidebar. I have seen sites get into trouble by allocating Huge Pages smaller than shared memory, which causes IRIS to try and start with very small global buffers or fail to start if memlock is used (consider memlock=192 for production systems).

Without Huge Pages, IRIS shared memory segments (globals, routines, gmheap, etc.) are allocated from normal OS memory pages. This would show up under “used” memory in free -m. It would also contribute to “available” going lower, because that memory can’t easily be reclaimed.

  • used – Much higher, reflecting IRIS shared memory + kernel + other processes.
  • free – Likely lower, because more RAM is permanently allocated to IRIS in the regular pool.
  • buff/cache – Would still rise during backups, but the apparent headroom for processes would look tighter because IRIS memory is in the same pool.
  • available – Closer to the true “free + reclaimable cache” minus IRIS memory. This would look smaller than in your Huge Pages setup.

So, should you use Huge Pages on production systems?

YES!

For memory protection. IRIS shared memory is protected from:

  • Swap out during memory pressure.
  • Competition with filesystem operations like backups and file copies, as we have seen in this example.

Other notes - into the weeds...

How is the data collected?

The command used in ^SystemPerformance for a 24-hour collection (17,280 samples at a 5-second interval) is:

free -m -s 5 -c 17280 | awk '{now=strftime("%m/%d/%y %T"); print now " " $0; fflush()}' > /filepath/logs/20250315_000100_24hours_5sec_12.log
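A cut-down, portable variant of the same idea (2 samples at a 1-second interval, using date instead of awk's strftime, which is a gawk extension) shows the log format it produces:

```shell
# Timestamp each line of free's repeated output, as ^SystemPerformance does.
free -m -s 1 -c 2 | while IFS= read -r line; do
  printf '%s %s\n' "$(date '+%m/%d/%y %T')" "$line"
done > free_sample.log
# One timestamped "Mem:" line per sample:
grep -c "Mem:" free_sample.log   # prints 2
```

Because every line carries a date and time stamp, the counters can later be charted against the vmstat and iostat data collected over the same window.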

2
7 172