# Continuous Delivery

0 Followers · 43 Posts

Continuous delivery (CD) is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time. It aims at building, testing, and releasing software faster and more frequently.

Article Enzo Medina · Oct 10, 2025 9m read

Deploying new IRIS instances can be a time-consuming task, especially when setting up multiple environments with mirrored configurations.

I’ve encountered this issue many times and want to share my experience and recommendations for using Ansible to streamline the IRIS installation process. My approach also includes handling additional tasks typically performed before and after installing IRIS.

Question Cristiano Silva · Oct 1, 2025

Hi all,

I'm developing an Azure Pipeline to automate the deployment process in Caché.

I use a self-hosted agent to execute code on my Caché server.

My problem is that csession execution via cmd always terminates with exit code 1 and the pipeline finishes with an error, but the execution in Caché is fine; the executed method returns $$$OK.
I use the following line to execute a class in Caché.
C:\InterSystems\Cache\bin\csession.exe CACHE -U  %RELEASE_TRIGGERINGARTIFACT_ALIAS% "##Class(sgf.pipeline.DeploymentManager).ProcessDeployment()"
Below is a screenshot of the execution in Azure:

The problem is the exit code 1

Article Dmitry Maslennikov · Jul 31, 2025 5m read

Overview I'm excited to announce the release of testcontainers-iris-node, a Node.js library that makes it easy to spin up temporary InterSystems IRIS containers for integration and E2E testing. This project is a natural addition to the existing family of Testcontainers adapters for IRIS, including testcontainers-iris-python and testcontainers-iris-java.

Why testcontainers-iris-node? As a Node.js developer working with InterSystems IRIS, I often faced challenges when setting up test environments that mimic production. testcontainers-iris-node solves this by leveraging the testcontainers-node framework to create isolated IRIS environments on-demand.

This is particularly valuable for:

  • Integration testing with IRIS databases
  • Testing data pipelines or microservices
  • Automating test environments in CI pipelines

Features

  • Launches IRIS in Docker containers using Testcontainers
  • Supports custom Docker images and configuration
  • Wait strategies to ensure IRIS is ready before tests begin
  • Clean teardown between test runs
InterSystems Official Daniel Palevski · Jul 23, 2025

InterSystems is pleased to announce the General Availability (GA) of the 2025.2 release of InterSystems IRIS® data platform. This is a Continuous Delivery (CD) release. Please note that the GA versions of InterSystems IRIS for Health™ and HealthShare® Health Connect™ 2025.2 are currently withheld due to mirroring limitations introduced by security updates (details below).

Release Highlights

This release introduces impactful enhancements across security, developer experience, operations, and interoperability. Notable new features include:

Article Eduard Lebedyuk · Jan 7, 2025 1m read

When you deploy code from a repo, class (file) deletion might not be reflected by your CI/CD system.
Here's a simple one-liner to automatically delete all classes in a specified package that have not been imported. It can be easily adjusted for a variety of adjunct tasks:

set packages = "USER.*,MyCustomPackage.*"
set dir = "C:\InterSystems\src\"
set sc = $SYSTEM.OBJ.LoadDir(dir, "ck", .err, 1, .loaded)
set sc = $SYSTEM.OBJ.Delete(packages _ ",'" _ $LTS($LI($LFS(loaded_",",".cls,"), 1, *-1), ",'"),, .err2)

The first command compiles classes and also returns a list of loaded classes. The second command deletes all classes from specified packages, except for the classes loaded just before that.
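If the nested $LISTFROMSTRING/$LIST/$LISTTOSTRING expression is hard to read, the exception-list construction can be transliterated into Python (purely illustrative; the ObjectScript one-liner above is the real code, and the helper name here is invented for this sketch):

```python
def build_delete_spec(packages: str, loaded: str) -> str:
    """Transliteration of the ObjectScript expression
    packages _ ",'" _ $LTS($LI($LFS(loaded_",", ".cls,"), 1, *-1), ",'")
    Each loaded class is stripped of its .cls extension and prefixed
    with ' so $SYSTEM.OBJ.Delete treats it as an exception (keep it)."""
    # $LFS(loaded_",", ".cls,"): split on the ".cls," delimiter
    pieces = (loaded + ",").split(".cls,")
    # $LI(..., 1, *-1): drop the trailing empty piece left by the split
    classes = pieces[:-1]
    # Join with ",'" and prepend ",'" so every class carries the ' prefix
    return packages + ",'" + ",'".join(classes)

print(build_delete_spec("USER.*,MyCustomPackage.*", "USER.A.cls,USER.B.cls"))
# USER.*,MyCustomPackage.*,'USER.A,'USER.B
```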

Question Dmitrii Baranov · Dec 10, 2024

I am developing locally on my IRIS instance using VS Code and the client-side editing approach. How can I automatically export a single .cls file or a whole package to a remote TEST/PREPROD server using a script or command line and recompile the unit remotely? Are there any simpler, more straightforward ways than the CI/CD process explained in Eduard's series of articles?

InterSystems Official Daniel Palevski · Nov 27, 2024

InterSystems announces General Availability of InterSystems IRIS, InterSystems IRIS for Health, and HealthShare Health Connect 2024.3

The 2024.3 release of InterSystems IRIS® data platform, InterSystems IRIS® for Health, and HealthShare® Health Connect is now Generally Available (GA).

Release Highlights

In this release, you can expect a host of exciting updates, including:

  1. Much faster extension of database and WIJ files
  2. Ability to resend messages from Visual Trace
  3. Enhanced Rule Editor capabilities
  4. Vector search enhancements
  5. and more.
Question Adam Raszkiewicz · Jul 26, 2024

I was watching this video about IRIS and GitHub, and it is all clear to me how it works and how code from each branch gets deployed to each IRIS environment, but the deployment process is manual. My question is: how can I, if possible, utilize git-source-control from a GitLab CI/CD pipeline to deploy code automatically after PR approval, instead of going to the Git UI?

Thanks

Article Murray Oldfield · Jun 19, 2024 14m read

I have created some example Ansible modules that add functionality above and beyond simply using the Ansible builtin Command or Shell modules to manage IRIS. You can use these as an inspiration for creating your own modules. My hope is that this will be the start of creating an IRIS community Ansible collection.

I expect some editing and changes during the few weeks after the Global Summit (June 2024) so check GitHub.


Ansible modules

To give you an idea of where I am going with this, consider the following: there are many (~100) collections containing many thousands of Ansible modules. A full list is here: https://docs.ansible.com/ansible/latest/collections/index.html

By design, modules are very granular and do one job well. For example, the built-in module ansible.builtin.file can create or delete a folder. There are multiple parameters for setting owner and permissions, etc., but the module focuses on this one task. The philosophy is that you should not create complex logic in your Ansible playbooks. You want to make your scripts simple to read and maintain.

You can write your own modules, and this post illustrates that. Modules can be written in nearly any language, and they can even be binaries. Ansible is written in Python, so I will use Python to handle the complex logic in the module. However, the logic is hidden within a few lines of YAML that the user interacts with.


How to stop and start IRIS using the Ansible built-in command module

You can start or stop IRIS using the built-in command module in an Ansible task or play. The command module runs command line commands with optional parameters on the target hosts. For example:

- name: Start IRIS using built-in command  
  ansible.builtin.command: iris start "{{ iris_instance }}"
  register: iris_output  # Capture the output from the module  
  
- name: Display IRIS Start Output test 1  
  ansible.builtin.debug:  
    msg: "IRIS Start Output test 1: {{ iris_output.stdout }}"  
  when: iris_output.stdout is defined  # Ensures stdout is displayed only if defined

"{{ iris_instance }}" is a variable. In this case, "iris_instance" is the instance name set some time earlier; for example, it could be "IRIS", "PRODUCTION", or anything else. Variables are a way to make your scripts reusable. Using register: iris_output will capture stdout from the command into the variable "iris_output", so we can display or use the output later.

If successful, the output "msg" is the same as if you had run the command on the command line. For example, it will be like this:

TASK [db_server : Display IRIS Start Output test 1] ***********************************************************
ok: [dbserver1] => {
    "msg": "IRIS Start Output test 1: Using 'iris.cpf' configuration file\n\nStarting Control Process\nAllocated 4129MB shared memory\n2457MB global buffers, 512MB routine buffers\nThis copy of InterSystems IRIS has been licensed for use exclusively by:\nISC TrakCare Development\nCopyright (c) 1986-2024 by InterSystems Corporation\nAny other use is a violation of your license agreement\nStarting IRIS"
}

If there is an error (for example, IRIS is already started, the instance name is wrong, or the return code is non-zero for some other reason), the playbook will fail and no additional tasks will run, which is not ideal. There are ways to manage failure in Ansible scripts, but that gets messy and more complicated to maintain. For example, your playbook will come to a halt with this error message if IRIS is already started. Note failed=1 in the PLAY RECAP below.

TASK [db_server : Start IRIS using built-in command] **********************************************************
fatal: [dbserver1]: FAILED! => {"changed": true, "cmd": ["iris", "start", "IRIS"], "delta": "0:00:00.014049", "end": "2024-05-09 03:47:05.027348", "msg": "non-zero return code", "rc": 1, "start": "2024-05-09 03:47:05.013299", "stderr": "", "stderr_lines": [], "stdout": "IRIS is already up!", "stdout_lines": ["IRIS is already up!"]}

PLAY RECAP ****************************************************************************************************
dbserver1                  : ok=2    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

Sidebar: If you have used IRIS for a while, you may ask: why do I not recommend running IRIS as a systemd service on Unix? I do recommend that, but it would get in the way of my current story! As you will see, there are reasons to start and stop IRIS manually. Read on :) I have an example playbook and templates for setting up IRIS as a service in the demo examples that accompany this post.

How to use a custom module to start and stop IRIS

The custom Ansible module to start and stop IRIS gracefully handles errors and output. Apart from having cleaner and easier-to-maintain playbooks, here is some background on why you might do this.

Sidebar: Hugepages

Linux systems use pages for memory allocation. The default page size is 4KB. Hugepages are larger memory pages, typically 2MB but sometimes larger. By reducing memory management overhead, hugepages can significantly improve database performance. Hugepages are large, contiguous blocks of physical memory that a process must explicitly use. Using hugepages is good practice for IRIS databases that keep frequently accessed data in memory.

So, hugepages are good for IRIS database servers. The best practice for IRIS is to put IRIS shared memory in hugepages, including global buffers, routine buffers, and GMHEAP. A common rule of thumb for a database server is to use 70-80% of memory for IRIS shared memory in hugepages. Your requirements will vary depending on how much free memory you need for other processes.

Estimating the exact size of IRIS shared memory is complex and can change depending on your IRIS configuration and between IRIS versions.

You want to be as close as possible to the shared memory size IRIS uses when configuring the number of huge pages on your system.

  • If you allocate too few hugepages for all IRIS shared memory, by default IRIS will start up using standard pages in whatever memory is left, wasting the memory set aside for hugepages!
    • This is a common cause of application performance issues. By default, IRIS keeps downsizing buffers until it can start using the available memory (not hugepages). For example, starting with smaller global buffers increases IO requirements. In the worst case, over time, if not enough memory is available, the system will need to page process memory to the swap file, which will severely impact performance.
  • If you allocate more hugepages than IRIS shared memory, the remainder is wasted!

The recommended process after installing IRIS or making configuration changes that affect shared memory is:

  • Calculate the memory that is required for IRIS and other processes. If in doubt, start with the common 30% other memory / 70% for shared memory rule mentioned above.
  • Calculate the major structures that will use shared memory from the remainder of memory: Global buffers (8K, 64K, etc.), Routine buffers, and GMHeap.
    • See this link for more details.
  • Update the settings in your IRIS configuration file.
  • Stop and start IRIS and review the startup output, either at the command line or in messages.log.
  • Make the OS kernel hugepages size change based on the actual shared memory used.
  • Finally, restart IRIS and make sure everything is as you expect it to be.

An Ansible workflow to right-size hugepages

You may be sizing hugepages and shared memory because you have just installed IRIS, upgraded, or increased or decreased the host memory size for capacity planning reasons. In the example above, we saw that if the start command is successful, some useful information is returned, for example, the amount of shared memory:

Allocated 2673MB shared memory
1228MB global buffers, 512MB routine buffers
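Inside the custom module, extracting these numbers and turning them into a hugepage count is straightforward Python. Here is a minimal sketch of the kind of logic involved; the function name and the roughly 2% headroom used when converting shared memory to 2MB hugepages are assumptions of this sketch, not the module's published formula:

```python
import math
import re

def parse_iris_start(stdout: str, headroom: float = 1.02) -> dict:
    """Pull shared-memory figures out of `iris start` output and
    estimate the number of 2MB hugepages needed. The 2% safety
    margin is an assumption for this sketch."""
    info = {}
    m = re.search(r"Allocated (\d+)MB shared memory", stdout)
    if m:
        info["shared_memory"] = int(m.group(1))
    m = re.search(r"(\d+)MB global buffers, (\d+)MB routine buffers", stdout)
    if m:
        info["global_buffers"] = int(m.group(1))
        info["routine_buffers"] = int(m.group(2))
    if "shared_memory" in info:
        info["hugepages_2MB"] = math.ceil(info["shared_memory"] * headroom / 2)
    return info

out = "Allocated 4129MB shared memory\n2457MB global buffers, 512MB routine buffers\nStarting IRIS"
info = parse_iris_start(out)
print(info)
# {'shared_memory': 4129, 'global_buffers': 2457, 'routine_buffers': 512, 'hugepages_2MB': 2106}
```

With 4129MB of shared memory, this particular formula lands on 2106 hugepages, matching the memory_info output shown later in the post.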

Ansible playbooks and plays are more like Linux commands than a programming language, although they can be used like a language. In addition to monitoring and handling errors, you could capture the output of the iris start command inline in your playbook and process it to make decisions based on it. But that all gets messy and breaks the DRY principle we should be aiming for when building our automation.

The DRY principle stands for "Don't Repeat Yourself." It is a fundamental concept in software development aimed at reducing the repetition of software patterns, for example, by replacing them with abstractions, which we will now do.

I have created several custom Ansible modules. In Ansible terms, a group of modules is a collection. This is the start of an open-source IRIS collection. The source and examples are here: GitHub.

  • The IRIS modules collection for these demos is in the /library folder.

Example playbooks

The module iris_start_stop is used like any other Ansible module. The stanzas in the following playbook extract:

  • Run the custom iris_start_stop module. In this case, stop if already running and restart.
    • Register (or store) the output in a JSON object in a variable named iris_output.
  • As a demo, display the stdout part as a message.
  • As a demo, display the stderr part as a message.
  • Display an additional part named memory_info as a message.
- name: Stop IRIS instance test 1  
  iris_start_stop:  
    instance_name: 'IRIS'  
    action: 'stop'  
    quietly: true  
    restart: true  
  register: iris_output  # Capture the output from the stop command  
  
- name: Display IRIS Stop Output test 1  
  ansible.builtin.debug:  
    msg: "IRIS Stop Output test 1: {{ iris_output.stdout }}"  
  when: iris_output.stdout is defined  # Display stdout from stop command  
  
- name: Display IRIS Stop Error test 1  
  ansible.builtin.debug:  
    msg: "IRIS Stop Error test 1: {{ iris_output.stderr }}"  
  when: iris_output.stderr is defined  # Display stderr from stop command  
  
- name: Display IRIS Stop memory facts test 1  
  ansible.builtin.debug:  
    msg: "IRIS Stop memory facts test 1: {{ iris_output.memory_info }}"  
  when: iris_output.memory_info is defined

The example output shows that memory_info is returned as a Python dictionary (key: value pairs).

TASK [db_server : Stop IRIS instance test 1] ************************************************************************************************************
changed: [monitor1]

TASK [db_server : Display IRIS Stop Output test 1] ******************************************************************************************************
ok: [monitor1] => {
    "msg": "IRIS Stop Output test 1: Starting Control Process\nAllocated 4129MB shared memory\n2457MB global buffers, 512MB routine buffers\nThis copy of InterSystems IRIS has been licensed for use exclusively by:\nISC TrakCare Development\nCopyright (c) 1986-2024 by InterSystems Corporation\nAny other use is a violation of your license agreement\nStarting IRIS"
}

TASK [db_server : Display IRIS Stop Error test 1] *******************************************************************************************************
ok: [monitor1] => {
    "msg": "IRIS Stop Error test 1: "
}

TASK [db_server : Display IRIS Stop memory facts test 1] ************************************************************************************************
ok: [monitor1] => {
    "msg": "IRIS Stop memory facts test 1: {'shared_memory': 4129, 'global_buffers': 2457, 'routine_buffers': 512, 'hugepages_2MB': 2106}"
}

As you can see, stdout displays the startup message.

However, if you look closely at the memory_info output, you can see that the information has been put in a dictionary, which will be useful soon. It also contains the key: value pair 'hugepages_2MB': 2106.

Starting and stopping IRIS using an Ansible IRIS module means that a system administrator using Ansible doesn't need to create complex playbooks to handle error checking and calculations or even have a deep understanding of IRIS. The details of how that information was extracted during startup are hidden, as is the calculation of the number of hugepages required for the actual shared memory used by IRIS.

Now that we know the hugepages requirements, we can go on and:

Create a playbook to configure hugepages.

The complete playbook is below.

  • Stop IRIS if it's running, and restart it to capture shared memory.
  • Loop over the memory_info dictionary and create variables from key: value pairs.
  • Stop IRIS.
  • Set hugepages using sysctl, passing the hugepages variable. Note: this step is in its own playbook (DRY principles again).
  • Start IRIS.
---  
- name: IRIS hugepages demo  
  ansible.builtin.debug:  
    msg: "IRIS Set hugepages based on IRIS shared memory"  
  
# Stop iris, in Ansible context "quietly" is required, else the command hangs  
# iris stop has output if there is a restart, use that to display changed status  
  
- name: Stop IRIS instance and restart  
  iris_start_stop:  
    instance_name: 'IRIS'  
    action: 'stop'  
    quietly: true  
    restart: true  
  register: iris_output  # Capture the output from the stop command  
  
- name: Set dynamic variables using IRIS start output  
  ansible.builtin.set_fact:  
    "{{ item.key }}": "{{ item.value }}"  
  loop: "{{ iris_output.memory_info | ansible.builtin.dict2items }}"  
  
# Stop IRIS  
  
- name: Stop IRIS instance  
  iris_start_stop:  
    instance_name: 'IRIS'  
    action: 'stop'  
    quietly: true  
  
# Set hugepages  
  
- name: Set hugepages  
  ansible.builtin.include_tasks: set_hugepages.yml  
  vars:  
    hugepages: "{{ hugepages_2MB }}"  
  
# Start, quietly. 
  
- name: Start IRIS instance again  
  iris_start_stop:  
    instance_name: 'IRIS'  
    action: 'start'  
    quietly: true  
  register: iris_output  # Capture the output from the module

The following is the output from the playbook.

TASK [db_server : IRIS hugepages demo] **********************************************************************
ok: [dbserver1] => {
    "msg": "IRIS Set hugepages based on IRIS shared memory"
}

TASK [db_server : Stop IRIS instance and restart] ***********************************************************
changed: [dbserver1]

TASK [db_server : Set dynamic variables using IRIS start output] ********************************************
ok: [dbserver1] => (item={'key': 'iris_start_shared_memory', 'value': 4129})
ok: [dbserver1] => (item={'key': 'iris_start_global_buffers', 'value': 2457})
ok: [dbserver1] => (item={'key': 'iris_start_routine_buffers', 'value': 512})
ok: [dbserver1] => (item={'key': 'iris_start_hugepages_2MB', 'value': 2106})

TASK [db_server : Stop IRIS instance] ***********************************************************************
changed: [dbserver1]

TASK [db_server : Set hugepages] ****************************************************************************
included: /.../roles/db_server/tasks/set_hugepages.yml for dbserver1

TASK [db_server : Set hugepages] ****************************************************************************
ok: [dbserver1] => {
    "msg": "Set hugepages to 2106"
}

:
:
:

TASK [db_server : Start IRIS instance again] ****************************************************************
changed: [dbserver1]

PLAY RECAP **************************************************************************************************
dbserver1                  : ok=13   changed=4    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

Note that I have edited out the "set hugepages" playbook output. The short story is that the playbook sets hugepages depending on the OS type. If contiguous memory is unavailable and the required number of hugepages cannot be set, the server is rebooted (you might defer this if the process is part of an initial build). The playbook then waits for the server to be available before continuing. The reboot steps are skipped if memory is available.


Running IRIS as a systemd service

Managing an InterSystems IRIS database instance as a systemd service on systems that use systemd (like most modern Linux distributions: Red Hat, Ubuntu, etc.) offers several advantages regarding consistency, automation, monitoring, and system integration. The main reason I recommend using systemd is that systemd allows you to configure services to start automatically at boot, which is crucial for production environments to ensure that your database is always available unless deliberately stopped. Likewise, it ensures the database shuts down gracefully when the system is rebooting or shutting down.
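As a rough illustration, a minimal unit file might look like the sketch below. The binary path, instance name, and service Type are assumptions of this sketch; see the playbook and templates in the accompanying repo for the real thing:

```ini
# /etc/systemd/system/iris.service -- minimal sketch only; the iris
# binary path, instance name, and Type are assumptions
[Unit]
Description=InterSystems IRIS instance IRIS
After=network.target

[Service]
Type=forking
ExecStart=/usr/bin/iris start IRIS quietly
ExecStop=/usr/bin/iris stop IRIS quietly
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```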

An example is at this link: iris_start_stop_systemd.yml

You can also manually start and stop IRIS while running as a service.


Running qlist

Many of the iris commands could benefit from being made into their own modules. I have created iris_qlist.py as another example. The value of using a custom module is that the output is in a dictionary that can easily be turned into variables for use in your Ansible scripts. For example:

- name: Execute IRIS qlist  
  iris_qlist:  
    instance_name: 'IRIS'  
  register: qlist_output  
  
- name: Debug qlist_output  
  ansible.builtin.debug:  
    var: qlist_output  
  
- name: Display qlist  
  ansible.builtin.debug:  
    msg: "qlist {{ qlist_output.fields }}"  
  
- name: Create variables from dictionary  
  ansible.builtin.set_fact:  
    "{{ item.key }}": "{{ item.value }}"  
  loop: "{{ lookup('dict', qlist_output.fields) }}"

And the output, which populates variables with IRIS details:

TASK [db_server : IRIS iris_qlist module demo] **************************************************************
ok: [dbserver1] => {
    "msg": "IRIS iris_qlist module demo"
}

TASK [db_server : Execute IRIS qlist] ***********************************************************************
ok: [dbserver1]

:
:
:

TASK [db_server : Create variables from dictionary] *********************************************************
ok: [dbserver1] => (item={'key': 'iris_qlist_instance_name', 'value': 'IRIS'})
ok: [dbserver1] => (item={'key': 'iris_qlist_instance_install_directory', 'value': '/iris'})
ok: [dbserver1] => (item={'key': 'iris_qlist_version_identifier', 'value': '2024.1.0.263.0'})
ok: [dbserver1] => (item={'key': 'iris_qlist_current_status_for_the_instance', 'value': 'running, since Thu May  9 06:50:55 2024'})
ok: [dbserver1] => (item={'key': 'iris_qlist_configuration_file_name_last_used', 'value': 'iris.cpf'})
ok: [dbserver1] => (item={'key': 'iris_qlist_SuperServer_port_number', 'value': '1972'})
ok: [dbserver1] => (item={'key': 'iris_qlist_WebServer_port_number', 'value': '0'})
ok: [dbserver1] => (item={'key': 'iris_qlist_JDBC_Gateway_port_number', 'value': '0'})
ok: [dbserver1] => (item={'key': 'iris_qlist_Instance_status', 'value': 'ok'})
ok: [dbserver1] => (item={'key': 'iris_qlist_Product_name_of_the_instance', 'value': 'IRISHealth'})
ok: [dbserver1] => (item={'key': 'iris_qlist_Mirror_member_type', 'value': ''})
ok: [dbserver1] => (item={'key': 'iris_qlist_Mirror_Status', 'value': ''})
ok: [dbserver1] => (item={'key': 'iris_qlist_Instance_data_directory', 'value': '/iris'})
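Under the hood there is not much magic: iris qlist emits one caret-delimited line per instance, so the module only has to split the line and label the fields. A hypothetical Python sketch (the field order follows the output above; the helper name is this sketch's invention):

```python
# Field names in the order they appear in `iris qlist` output,
# matching the iris_qlist_* facts shown above
QLIST_FIELDS = [
    "instance_name", "instance_install_directory", "version_identifier",
    "current_status_for_the_instance", "configuration_file_name_last_used",
    "SuperServer_port_number", "WebServer_port_number",
    "JDBC_Gateway_port_number", "Instance_status",
    "Product_name_of_the_instance", "Mirror_member_type", "Mirror_Status",
    "Instance_data_directory",
]

def parse_qlist(line: str) -> dict:
    """Turn one caret-delimited `iris qlist` line into the
    iris_qlist_* fact dictionary shown in the playbook output."""
    values = line.strip().split("^")
    return {"iris_qlist_" + k: v for k, v in zip(QLIST_FIELDS, values)}

line = ("IRIS^/iris^2024.1.0.263.0^running, since Thu May  9 06:50:55 2024"
        "^iris.cpf^1972^0^0^ok^IRISHealth^^^/iris")
facts = parse_qlist(line)
print(facts["iris_qlist_SuperServer_port_number"])  # 1972
```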

I will expand the use of Ansible IRIS modules in the future and create more community posts as I progress.

Article Eduard Lebedyuk · Feb 9, 2024 6m read

Welcome to the next chapter of my CI/CD series, where we discuss possible approaches toward software development with InterSystems technologies and GitLab. Today, we continue talking about Interoperability, specifically monitoring your Interoperability deployments. If you haven't yet, set up Alerting for all your Interoperability productions to get alerts about errors and production state in general.

Inactivity Timeout is a setting common to all Interoperability Business Hosts. A business host has an Inactive status after it has not received any messages within the number of seconds specified by the Inactivity Timeout field. The production Monitor Service periodically reviews the status of business services and business operations within the production and marks the item as Inactive if it has not done anything within the Inactivity Timeout period. The default value is 0 (zero). If this setting is 0, the business host will never be marked Inactive, no matter how long it stands idle.

This is an extremely useful setting since it generates alerts, which, together with configured alerting, allows for real-time notifications about production issues. A Business Host being idle means there might be some issues with the production, integrations, or network connectivity worth looking into. However, a Business Host can have only one constant Inactivity Timeout setting, which might generate unnecessary alerts during known periods of low traffic: nights, weekends, holidays, etc. In this article, I will outline several approaches towards a dynamic Inactivity Timeout implementation. While I do provide a working example (currently running in production for one of our customers), this article is more of a guideline for building your own dynamic Inactivity Timeout implementation, so don't consider the proposed solution the only option.

Idea

The interoperability engine maintains a special HostMonitor global, which contains each Business Host as a subscript and a timestamp of the last activity as a value. Instead of using Inactivity Timeout, we will monitor this global ourselves and generate alerts based on the state of the HostMonitor. HostMonitor is maintained regardless of the Inactivity Timeout value being set - it's always on.

Implementation

To start with, here's how we can iterate the HostMonitor global:

Set tHost=""
For { 
  Set tHost=$$$OrderHostMonitor(tHost) 
  Quit:""=tHost
  Set lastActivity = $$$GetHostMonitor(tHost,$$$eMonitorLastActivity)
}

To create our Monitor Service, we need to perform the following checks for each Business Host:

  1. Decide if the Business Host is under the scope of our Dynamic Inactivity Timeout at all (for example, high-traffic HL7 interfaces can work with the usual Inactivity Timeout).
  2. If the Business Host is in scope, we need to calculate the time since the last activity.
  3. Now, based on inactivity time and any number of conditions (day/night time, day of week), we need to decide if we do want to send an alert.
  4. If we do send an alert, we need to record the Last Activity time so that we won't send an alert twice.

Our code now looks like this:

Set tHost=""
For { 
  Set tHost=$$$OrderHostMonitor(tHost) 
  Quit:""=tHost
  Continue:'..InScope(tHost)
  Set lastActivity = $$$GetHostMonitor(tHost,$$$eMonitorLastActivity)
  ; Last activity we already alerted on, recorded at the bottom of the loop
  Set lastActivityReported = $Get($$$EnsJobLocal("LastActivity", tHost))
  Set tDiff = $$$timeDiff($$$timeUTC, lastActivity)
  Set tTimeout = ..GetTimeout(tDayTimeout) ; tDayTimeout: per-host day timeout from the Scopes setting
  If (tDiff > tTimeout) && ((lastActivityReported="") || ($system.SQL.DATEDIFF("s",lastActivityReported,lastActivity)>0)) {
    Set tText = $$$FormatText("InactivityTimeoutAlert: Inactivity timeout of '%1' seconds exceeded for host '%2'", +$fn(tDiff,,0), tHost)
    Do ..SendAlert(##class(Ens.AlertRequest).%New($LB(tHost, tText)))
    Set $$$EnsJobLocal("LastActivity", tHost) = lastActivity
  } 
}

You need to implement InScope and GetTimeout methods, which will actually hold your custom logic, and you're good to go. In my example, there are Day Timeouts (which might be different for each Business Host, but with a default value) and Night Timeout (which is the same for all tracked Business Hosts), so the user needs to provide the following settings:

  • Scopes: List of Business Host names (or parts of names) paired with their custom DayTimeout value, one per line. Only Business Hosts that are in scope (satisfy $find(host, scope) condition for at least one scope) would be tracked. Leave empty to monitor all Business Hosts. Example: OperationA=120
  • DayStart: Seconds since 00:00:00, after which a day starts. It must be lower than DayEnd. I.e. 06:00:00 AM is 6*3600 = 21600
  • DayEnd: Seconds since 00:00:00, after which a day ends. It must be higher than DayStart. I.e. 08:00:00 PM is (12+8)*3600 = 72000
  • DayTimeout: Default timeout value in seconds to raise alerts during the day.
  • NightTimeout: Timeout value in seconds to raise alerts during the night.
  • WeekendDays: Days of the week that are considered the weekend, comma separated. On weekend days, NightTimeout applies 24 hours a day. Example: 1,7. Check a date's DayOfWeek value by running: $SYSTEM.SQL.Functions.DAYOFWEEK(date-expression). By default, the returned values represent these days: 1 — Sunday, 2 — Monday, 3 — Tuesday, 4 — Wednesday, 5 — Thursday, 6 — Friday, 7 — Saturday.
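Put together, the timeout selection driven by these settings boils down to a few comparisons. The following Python transliteration is for illustration only; the real implementation is an ObjectScript method on the service, and all default values here are placeholders:

```python
def get_timeout(seconds_since_midnight: int, day_of_week: int,
                day_timeout: int = 3600, night_timeout: int = 7200,
                day_start: int = 21600, day_end: int = 72000,
                weekend_days: tuple = (1, 7)) -> int:
    """Pick the applicable timeout: NightTimeout applies 24 hours a day
    on weekend days and outside the DayStart..DayEnd window; DayTimeout
    applies during the day. Defaults are illustrative placeholders."""
    if day_of_week in weekend_days:
        return night_timeout  # weekend: NightTimeout around the clock
    if day_start <= seconds_since_midnight < day_end:
        return day_timeout
    return night_timeout

# 07:00 on a Monday (day_of_week=2): day timeout
print(get_timeout(7 * 3600, 2))   # 3600
# 23:00 on a Monday: night timeout
print(get_timeout(23 * 3600, 2))  # 7200
# Any time on Sunday (day_of_week=1): night timeout
print(get_timeout(12 * 3600, 1))  # 7200
```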

Here's the full code, but I don't think there's anything interesting in there. It just implements InScope and GetTimeout methods. You can use other criteria and adjust InScope and GetTimeout methods as needed.

Issues

There are two issues to speak of:

  • No yellow icon for Inactive Business Hosts (since the host's InactivityTimeout setting value is zero).
  • Out-of-host setting - developers need to remember to update this custom monitoring service each time they add a new Business Host and want to use dynamic inactivity timeouts.

Alternatives

I explored these approaches before implementing the above solution:

  1. Create the Business Service that changes InactivityTimeout settings when day/night starts. Initially, I tried to go this route but encountered a number of issues, mainly the requirement to restart all affected Business Hosts every time we changed the InactivityTimeout setting.
  2. In the custom Alert Processor, add rules that, instead of sending the alert, suppress it if it is a nightly InactivityTimeout and return the host state to OK. But an inactivity alert from Ens.MonitorService updates the LastActivity value, so from a custom Alert Processor, I don't see a way to get the "true" last activity timestamp (besides querying Ens.MessageHeader, I suppose?).
  3. Extending Ens.MonitorService does not seem possible except for OnMonitor callback, but it serves another purpose.

Conclusion

Always configure alerting for all your Interoperability productions to get alerts about errors and production state in general. If a static Inactivity Timeout is not enough, you can easily create a dynamic implementation.

Article Eduard Lebedyuk · Aug 14, 2018 6m read

In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

  • Git 101
  • Git flow (development process)
  • GitLab installation
  • GitLab Workflow
  • Continuous Delivery
  • GitLab installation and configuration
  • GitLab CI/CD
  • Why containers?
  • Containers infrastructure
  • CD using containers
  • CD using ICM
  • Container architecture

In this article, we will talk about building your own container and deploying it.

14
3 1464
Article Eduard Lebedyuk · May 10, 2018 9m read

In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

  • Git 101
  • Git flow (development process)
  • GitLab installation
  • GitLab Workflow
  • Continuous Delivery
  • GitLab installation and configuration
  • GitLab CI/CD
  • Why containers?
  • Containers infrastructure
  • CD using containers

In the first article, we covered Git basics, why a high-level understanding of Git concepts is important for modern software development, and how Git can be used to develop software.

In the second article, we covered GitLab Workflow - a complete software life cycle process and Continuous Delivery.

In the third article, we covered GitLab installation and configuration and connecting your environments to GitLab.

In the fourth article, we wrote a CD configuration.

In the fifth article, we talked about containers and how (and why) they can be used.

In the sixth article, let's discuss the main components you'll need to run a continuous delivery pipeline with containers and how they all work together.

In this article, we'll build Continuous Delivery configuration discussed in the previous articles.

10
4 2795
Article Benjamin De Boe · Sep 13, 2022 8m read

In the vast and varied SQL database market, InterSystems IRIS stands out as a platform that goes way beyond just SQL, offering a seamless multimodel experience and supporting a rich set of development paradigms. Especially the advanced Object-Relational engine has helped organizations use the best-fit development approach for each facet of their data-intensive workloads, for example ingesting data through Objects and simultaneously querying it through SQL. Persistent Classes correspond to SQL tables, their properties to table columns and business logic is easily accessed using User-Defined Functions or Stored Procedures. In this article, we'll zoom in on a little bit of the magic just below the surface, and discuss how it may affect your development and deployment practices. This is an area of the product where we have plans to evolve and improve, so please don't hesitate to share your views and experiences using the comments section below.

6
0 1139
Article Oliver Wilms · Apr 4, 2023 6m read

IRIS configurations and user accounts contain various data elements that need to be tracked, and many people struggle to copy or sync those system configurations and user accounts between IRIS instances. So how can this process be simplified?

In software engineering, CI/CD or CICD is the set of combined practices of continuous integration (CI) and (more often) continuous delivery or (less often) continuous deployment (CD). Can CI/CD eliminate all our struggles?

I work in a team which develops and deploys IRIS clusters. We run IRIS in containers on Red Hat OpenShift container platform.

1
2 769
Question Scott Roth · Feb 16, 2023

Can someone confirm that HealthShare Health Connect 2022.2 is the correct latest release that is available via the Online Distribution? I tried looking at the HealthShare Health Connect on WRC and now do not see a 2022.2 or 2022.3 version. Is this correct? So I shouldn't be running 2022.2? Did Health Connect get renamed? A couple of months ago I downloaded HealthConnect-2022.2.0.368.0-lnxrh8x64 but not seeing it now on the WRC site.

2
0 358
Article Alex Woodhead · Jan 28, 2023 3m read

Some Usage cases

1. A deployment may consist of two high availability instances and two disaster recovery instances in a different data center.

The corresponding UAT environment could replicate this, giving a total of 8 instances. How do you confirm CPF and scheduled task alignment across ALL instances?

2
0 491
Article Eduard Lebedyuk · Sep 26, 2022 11m read

Welcome to the next chapter of my CI/CD series, where we discuss possible approaches toward software development with InterSystems technologies and GitLab.

Today, let's talk about interoperability.

Issue

When you have an active interoperability production, you have two separate process flows: a working production that processes messages and a CI/CD process flow that updates code, production configuration and system default settings.

Clearly, CI/CD process affects interoperability. But questions are:

  • What exactly happens during an update?
  • What do we need to do to minimize or eliminate production downtime during an update?

Terminology

  • Business Host (BH) - one configurable element of Interoperability Production: Business Service (BS), Business Process (BP, BPL), or Business Operation (BO).
  • Business Host Job (Job) - InterSystems IRIS job that runs Business Host code and is managed by Interoperability production.
  • Production - interconnected collection of Business Hosts.
  • System Default Settings (SDS) - values that are specific to the environment where InterSystems IRIS is installed.
  • Active Message - a request which is currently being processed by one Business Host Job. One Business Host Job can have a maximum of one Active Message. Business Host Job, which does not have an Active Message, is idle.

What's going on?

Let's start with the Production Lifecycle.

Production Start

First of all, a Production can be started. Only one production per namespace can run at a time, and in general (unless you really know what you're doing and why), only one production should ever run per namespace. Switching back and forth between two or more different productions in one namespace is not recommended. Starting a production starts all enabled Business Hosts defined in it. Failure of some Business Hosts to start does not affect the Production start.

Tips:

  • Start the production from the System Management Portal or by calling: ##class(Ens.Director).StartProduction("ProductionName")
  • Execute arbitrary code on Production start (before any Business Host Job is started) by implementing an OnStart method
  • Production start is an auditable event. You can always see who started it and when in the Audit Log.
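For illustration, an OnStart callback might be sketched like this (the class name and global are hypothetical; the production definition XData is omitted for brevity):

```objectscript
/// Hypothetical production class showing the OnStart callback,
/// which runs before any Business Host Job is started.
Class MyApp.Production Extends Ens.Production
{

ClassMethod OnStart(pTimeStarted As %String) As %Status
{
    // Record the start time, e.g. for external tooling to pick up
    set ^MyApp("LastProductionStart") = pTimeStarted
    quit $$$OK
}

}
```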

Production Update

After the Production has been started, Ens.Director continuously monitors it. Two production states exist: the target state, defined in the production class and System Default Settings; and the running state - the currently running jobs with the settings that were applied when those jobs were created. If the target and running states are identical, everything is good; if there's a difference, the production can (and should) be updated. Usually, you see that as a red Update button on the Production Configuration page in the System Management Portal.

Updating production means an attempt to get the current Production state to match the target Production state.

When you run ##class(Ens.Director).UpdateProduction(timeout=10, force=0) to update the production, it does the following for each Business Host:

  1. Compares active settings to production/SDS/class settings
  2. If, and only if, (1) shows a mismatch, the Business Host is marked as out-of-date and requiring an update.

After running this for each Business Host, UpdateProduction builds the set of changes:

  • Business Hosts to stop
  • Business Hosts to start
  • Production settings to update

And after that, applies them.

This way, “updating” settings without changing anything results in no production downtime.

Tips:

  • Update the production from the System Management Portal or by calling: ##class(Ens.Director).UpdateProduction(timeout=10, force=0)
  • The default System Management Portal update timeout is 10 seconds. If you know that processing your messages takes longer than that, call ##class(Ens.Director).UpdateProduction() with a larger timeout.
  • Update Timeout is a production setting, and you can change it to a larger value. This setting applies to the System Management Portal.

Code Update

UpdateProduction DOES NOT UPDATE the BHs with out-of-date code. This is a safety-oriented behavior, but if you want to automatically update all running BHs if the underlying code changes, follow these steps:

First, load and compile like this:

do $system.OBJ.LoadDir(dir, "", .err, 1, .load)
do $system.OBJ.CompileList(load, "curk", .errCompile, .listCompiled)

Now, listCompiled contains all items that were actually compiled, thanks to the u flag (use git diffs to minimize the loaded set). Use listCompiled to build a $lb of all classes that were compiled:

set classList = ""
set class = $o(listCompiled(""))
while class'="" { 
  set classList = classList _ $lb($p(class, ".", 1, *-1))
  set class=$o(listCompiled(class))
}

And after that, calculate a list of BHs which need a restart:

SELECT %DLIST(Name) bhList
FROM Ens_Config.Item 
WHERE 1=1
  AND Enabled = 1
  AND Production = :production
  AND ClassName %INLIST :classList

Finally, after obtaining bhList stop and start affected hosts:

for stop = 1, 0 {
  for i=1:1:$ll(bhList) {
    set host = $lg(bhList, i)
    set sc = ##class(Ens.Director).TempStopConfigItem(host, stop, 0)
  }
  set sc = ##class(Ens.Director).UpdateProduction()
}

Production Stop

Productions can be stopped, which means sending a request to all Business Host Jobs to shut down (safely, after they are done with their active messages, if any).

Tips:

  • Stop the production from the System Management Portal or by calling: ##class(Ens.Director).StopProduction(timeout=10, force=0)
  • The default System Management Portal stop timeout is 120 seconds. If you know that processing your messages takes longer than that, call ##class(Ens.Director).StopProduction() with a larger timeout.
  • Shutdown Timeout is a production setting. You can change that to a larger value. This setting applies to the System Management Portal.
  • Execute arbitrary code on Production stop by implementing an OnStop method
  • Production stop is an auditable event; you can always see who stopped it and when in the Audit Log.

The important thing here is that Production is a sum total of the Business Hosts:

  • Starting production means starting all enabled Business Hosts.
  • Stopping production means stopping all running Business Hosts.
  • Updating production means calculating a subset of Business Hosts which are out of date, so they are first stopped and immediately after that started again. Additionally, a newly added Business Host is only started, and a Business Host deleted from production is just stopped.

That brings us to the Business Hosts lifecycle.

Business Host Start

A Business Host is composed of identical Business Host Jobs (their number is set by the Pool Size setting). Starting a Business Host means starting all of its Business Host Jobs; they are started in parallel.

Individual Business Host Job starts like this:

  1. Interoperability JOBs a new process that would become a Business Host Job.
  2. The new process registers as an Interoperability job.
  3. Business Host code and Adapter code is loaded into process memory.
  4. Settings related to the Business Host and Adapter are loaded into memory. The order of precedence is:
    a. Production Settings (override System Default and Class Settings)
    b. System Default Settings (override Class Settings)
    c. Class Settings
  5. Job is ready and starts accepting messages.

After (4) is done, the Job can't change settings or code, so importing new (or the same) code and new (or the same) System Default Settings does not affect currently running Interoperability Jobs.

Business Host Stop

Stopping a Business Host Job means:

  1. Interoperability orders Job to stop accepting any more messages/inputs.
  2. If there’s an active message, the Business Host Job has timeout seconds to process it (by completing it: finishing the OnMessage method for a BO, OnProcessInput for a BS, the state S<int> method for BPL BPs, and the On* method for custom BPs).
  3. If the active message has not been processed by the timeout and force=0, the production update fails for that Business Host (and you’ll see a red Update button in the SMP).
  4. Stop succeeds if anything on this list is true:
    • No active message
    • Active message was processed before the timeout
    • Active message was not processed before the timeout BUT force=1
  5. Job is deregistered with Interoperability and halts.

Business Host Update

Business host update means stopping currently running Jobs for the Business Host and starting new Jobs.

Business Rules, Routing Rules, and DTLs

All Business Hosts immediately start using new versions of Business Rules, Routing Rules, and DTLs as they become available. A restart of a Business Host is not required in this situation.

Offline updates

Sometimes, however, Production updates require downtime of individual Business Hosts.

Rules depend on new code

Consider the situation. You have a current Routing Rule X which routes messages to either Business Process A or B based on arbitrary criteria. In a new commit, you add, simultaneously:

  • Business Process C
  • A new version of Routing Rule X, which routes messages to A, B, or C.

In this scenario, you can't just load the rule first and update the production second, because the newly compiled rule would immediately start routing messages to Business Process C, which InterSystems IRIS might not have compiled yet, or which Interoperability has not yet updated to use. In this case, you need to disable the Business Host with the Routing Rule, update the code, update the production, and enable the Business Host again.
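As a sketch, the disable → deploy → enable sequence might look like this ("MyApp.RouterX" and the paths are hypothetical):

```objectscript
// Disable the router and apply the change to the running production
set sc = ##class(Ens.Director).EnableConfigItem("MyApp.RouterX", 0, 1)
// Load and compile the new rule and Business Process C
do $system.OBJ.LoadDir("/deploy/src", "", .err, 1, .load)
do $system.OBJ.CompileList(.load, "curk")
// Re-enable the router; the production update starts it with the new rule
set sc = ##class(Ens.Director).EnableConfigItem("MyApp.RouterX", 1, 1)
```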

Notes:

  • If you update a production using a production deployment file, it automatically disables/enables all affected BHs.
  • For InProc invoked hosts, the compilation invalidates the cache of the particular host held by the caller.

Dependencies between Business Hosts

Dependencies between Business Hosts are critical. Imagine you have Business Processes A and B, where A sends messages to B. In a new commit, you add, simultaneously:

  • A new version of Process A, which sets a new property X in a request to B
  • A new version of Process B which can process a new property X

In this scenario, we MUST update Process B first and A second. You can do this in one of two ways:

  • Disable Business Hosts for the duration of the update
  • Split the update into two: first, update Process B only, and after that, in a separate update, start sending messages to it from Process A.

A more challenging variation on this theme, where new versions of Processes A and B are incompatible with old versions, requires Business Host downtime.

Queues

If you know that after the update, a Business Host will not be able to process old messages, you need to guarantee that the Business Host Queue is empty before the update. To do that, disable all Business Hosts that send messages to the Business Host and wait till its queue becomes empty.
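Waiting for a queue to drain might be sketched like this ("MyApp.ProcessB" is a hypothetical host name; this assumes the GetCount method of the Ens.Queue class is available on your version):

```objectscript
// Poll until the Business Host's queue is empty before updating it.
// All senders to MyApp.ProcessB must already be disabled.
set queue = "MyApp.ProcessB"
for {
    quit:##class(Ens.Queue).GetCount(queue)=0
    hang 1  // wait a second and re-check
}
// The queue is now empty - safe to update the host
```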

State change in BPL Business Processes

First, a little intro into how BPL BPs work. After you compile a BPL BP, two classes are generated in a package with the same name as the full BPL class name:

  • Thread1 class contains methods S1, S2, ... SN, which correspond to activities within BPL
  • Context class has all context variables and also the next state which BPL would execute (e.g., S5)

Also, the BPL class itself is persistent and stores the requests currently being processed.

BPL works by executing S methods in a Thread class and correspondingly updating the BPL class table, Context table, and Thread1 table where one message "being processed" is one row in a BPL table. After the request is processed, BPL deletes the BPL, Context, and Thread entries. Since BPL BPs are asynchronous, one BPL job can simultaneously process many requests by saving information between S calls and switching between different requests. For example, BPL processed one request till it got to a sync activity - waiting for an answer from BO. It would save the current context to disk, with %NextState property (in Thread1 class) set to response activity S method, and work on other requests until BO answers. After BO answers, BPL would load Context into memory and execute the method corresponding to a state saved in %NextState property.

Now, what happens when we update the BPL? First, we need to check that at least one of the two conditions is satisfied:

  • During the update, the Context table is empty, meaning no active messages are being worked on.
  • The New States are the same as the old States, or new States are added after the old States.

If at least one condition is satisfied, we are good to go. There are either no pre-update requests for post-update BPL to process, or States are added at the end, meaning old requests can also go there (assuming that pre-update requests are compatible with post-update BPL activities and processing).

But what if you have active requests in processing and BPL changes state order? Ideally, if you can wait, disable BPL callers and wait till the Queue is empty. Validate that the Context table is also empty. Remember that the Queue shows only unprocessed requests, and the Context table stores requests which are being worked on, so you can have a situation where a very busy BPL shows zero Queue size, and that's normal. After that, disable the BPL, perform the update and enable all previously disabled Business Hosts.

If that's not possible (usually in a case where there is a very long BPL, i.e., I remember updating one that took around a week to process a request, or the update window is too short), use BPL versioning.

Alternatively, you can write an update script. In this update script, map old next states to new next states and run it on Thread1 table so that updated BPL can process old requests. BPL, of course, must be disabled for the duration of the update. That said, it's an extremely rare situation, and usually, you don't have to do this, but if you ever need to do that, that's how.
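Such an update script might be sketched as follows (the table and column names are hypothetical - check the SQL projection of your generated Thread1 class before running anything like this):

```objectscript
// Hypothetical state-remap script: map old BPL next-states to new ones.
set map("S5") = "S7"  // old state -> new state
set map("S6") = "S8"
set old = ""
for {
    set old = $order(map(old), 1, new)
    quit:old=""
    // Table/column names are illustrative
    set sql = "UPDATE MyPkg_MyBPL.Thread1 SET NextState = ? WHERE NextState = ?"
    set rs = ##class(%SQL.Statement).%ExecDirect(, sql, new, old)
}
```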

Conclusion

Interoperability implements a sophisticated algorithm to minimize the number of actions required to actualize Production after the underlying code change. Call UpdateProduction with a safe timeout on every SDS update. For every code update, you need to decide on an update strategy.

Minimizing the amount of compiled code by using git diffs helps with the compilation time, but "updating" the code with itself and recompiling it or "updating" the settings with the same values does not trigger or require a Production Update.

Updating and compiling Business Rules, Routing Rules, and DTLs makes them immediately accessible without a Production Update.

Finally, Production Update is a safe operation and usually does not require downtime.

Links

Author would like to thank @James MacKeith, @Dmitry Zasypkin, and @Regilo Regilio Guedes de Souza for their invaluable help with this article.

0
1 610
Article Eduard Lebedyuk · Jul 13, 2022 7m read

After almost four years on hiatus, my CI/CD series is back! Over the years, I have worked with several InterSystems clients, developing CI/CD pipelines for different use cases. I hope the information presented in this article will be helpful to someone.

This series of articles discusses several possible approaches toward software development with InterSystems technologies and GitLab.

We have an exciting range of topics to cover: today, let's talk about things beyond the code - namely configurations and data.

Issue

Previously we discussed code promotions, and that was, in a way, stateless - we always go from a (presumably) empty instance to a complete codebase. But sometimes, we need to provide data or state. There are different data types:

  • Configuration: users, web apps, LUTs, custom schemas, tasks, business partners, and many more
  • Settings: environment-specific key-value pairs
  • Data: reference tables and such often must be provided for your app to work

Let's discuss all these data types and how they can be first committed into source control and later deployed.

Configuration

System configuration is spread across many different classes, but InterSystems IRIS can export most of them into XML. First of all, there is the Security package, containing information about:

  • Web Applications
  • DocDBs
  • Domains
  • Audit Events
  • KMIPServers
  • LDAP Configs
  • Resources
  • Roles
  • SQL Privileges
  • SSL Configs
  • Services
  • Users

All these classes provide Exists, Export, and Import methods, allowing you to move them between environments.
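As an illustration, a one-off transfer of users and web applications could be sketched like this (run in the %SYS namespace; file paths are made up):

```objectscript
// On the source instance (in %SYS)
zn "%SYS"
set sc = ##class(Security.Users).Export("/deploy/users.xml")
set sc = ##class(Security.Applications).Export("/deploy/webapps.xml")

// On the target instance (in %SYS)
set sc = ##class(Security.Users).Import("/deploy/users.xml")
set sc = ##class(Security.Applications).Import("/deploy/webapps.xml")
```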

A few caveats:

  • Users and SSL Configurations might contain sensitive information, such as passwords. It is generally NOT recommended to store them in source control for security reasons. Use Export/Import methods to facilitate one-off transfers.
  • By default, Export/Import methods output everything in one file, which might not be source control friendly. Here's a utility class that can export and import Lookup Tables, Custom Schemas, Business Partners, Tasks, Credentials, and SSL Configuration. It exports one item per file, so you get a directory with LUT, another directory with Custom Schemas, and so on. For SSL Configurations, it also exports files: certificates and keys.

It is also worth noting that, instead of export/import, you can use %Installer or Merge CPF to create most of these. Both tools also support the creation of namespaces and databases. Merge CPF can adjust system settings, such as the global buffer size.

Tasks

%SYS.Task class stores tasks and provides ExportTasks and ImportTasks methods. You can also check the utility class above to import and export tasks one by one. Note that when you import tasks, you can get import errors (ERROR #7432: Start Date and Time must be after the current date and time) if StartDate or other schedule-related properties are in the past. As a solution, set LastSchedule to 0, and InterSystems IRIS would reschedule a newly imported task to run in the nearest future.
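A minimal sketch of the task transfer (the file path is illustrative):

```objectscript
// Run in %SYS on the source instance
zn "%SYS"
set sc = ##class(%SYS.Task).ExportTasks("/deploy/tasks.xml")

// On the target instance; if an import fails with ERROR #7432,
// clear LastSchedule on the offending task and retry the import:
//   set task = ##class(%SYS.Task).%OpenId(id)
//   set task.LastSchedule = 0
//   do task.%Save()
set sc = ##class(%SYS.Task).ImportTasks("/deploy/tasks.xml")
```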

Interoperability

Interoperability productions contain:

  • Business Partners
  • System Default Settings
  • Credentials
  • Lookup Tables

The first two are available in Ens.Config package with %Export and %Import methods. Export Credentials and Lookup Tables using the utility class above. In recent versions, Lookup Tables can be exported/imported via $system.OBJ class.

Settings

System Default Settings - is a default interoperability mechanism for environment-specific settings:

The purpose of system default settings is to simplify the process of copying a production definition from one environment to another. In any production, the values of some settings are determined as part of the production design; these settings should usually be the same in all environments. Other settings, however, must be adjusted to the environment; these settings include file paths, port numbers, and so on.

System default settings should specify only the values that are specific to the environment where InterSystems IRIS is installed. In contrast, the production definition should specify the values for settings that should be the same in all environments.

I highly recommend making use of them in production environments. Use %Export and %Import to transfer system default settings.

Application Settings

Your application probably also uses settings. In that case, I recommend using System Default Settings. While it's an interoperability mechanism, settings can be accessed via: %GetSetting(pProductionName, pItemName, pHostClassName, pTargetType, pSettingName, Output pValue) (docs). You can write a wrapper which would set the defaults you don't care about, for example:

ClassMethod GetSetting(name, Output value) As %Boolean [ CodeMode = expression ]
{
##class(Ens.Config.DefaultSettings).%GetSetting("myAppName", "default", "default", , name, .value)
}

If you want more categories, you can also expose pItemName and/or pHostClassName arguments. Settings can be initially set by importing, using System Management Portal, creating objects of Ens.Config.DefaultSettings class, or setting ^Ens.Config.DefaultSettingsD global.
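Usage of such a wrapper might then look like this (the class name MyApp.Settings is hypothetical):

```objectscript
// Read an environment-specific value through the wrapper
if ##class(MyApp.Settings).GetSetting("SFTPHost", .host) {
    // host now holds the value for the current environment
}
```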

My main advice here would be to keep settings in one place (it can be either System Default Settings or a custom solution), and the application must get the settings using only a provided API. This way, the application itself does not know about the environment, and what's left is supplying the centralized setting storage with environment-specific values. To do that, create a settings folder in your repository containing settings files, with file names matching the environment branch names. Then, during the CI/CD phase, use the $CI_COMMIT_BRANCH environment variable to load the correct file.

DEV.xml
TEST.xml
PROD.xml

If you have several settings files per environment, use folders named after the environment branches. To get an environment variable value from inside InterSystems IRIS, use $System.Util.GetEnviron("name").
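During CI, picking the right file could be sketched like this (the repository path is an assumption):

```objectscript
// Load the settings file matching the environment branch
set branch = $system.Util.GetEnviron("CI_COMMIT_BRANCH")
set file = "/repo/settings/" _ branch _ ".xml"
if ##class(%File).Exists(file) {
    do $system.OBJ.Load(file, "ck")
}
```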

Data

If you want to make some data (reference tables, catalogs, etc.) available, you have several ways of doing it:

  • Global export. Use either a binary GOF export or the newer XML export. With GOF export, remember that locales on the source and target systems must match (or at least the global's collation must be available on the target system). XML export takes more space. You can improve that by exporting the global into an xml.gz file; $system.OBJ methods automatically (un)archive xml.gz files as required. The main disadvantage of this approach is that the data is not human-readable, even in XML - most of it is base64-encoded.
  • CSV. Export CSV and import it with LOAD DATA. I prefer CSV as it's the most storage-efficient human-readable format, which anything can import.
  • JSON. Make class JSON Enabled.
  • XML. Make class XML Enabled to project objects into XML. Use it if your data has a complex structure.

Which format to choose depends on your use case. Here I listed the formats in the order of storage efficiency, but that's not a concern if you don't have a lot of data.
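For example, the global export/import route could be sketched as (global and file names are illustrative):

```objectscript
// Export a reference global as compressed XML; $system.OBJ methods
// (un)archive xml.gz files automatically
do $system.OBJ.Export("MyApp.RefData.GBL", "/deploy/refdata.xml.gz")

// On the target system
do $system.OBJ.Load("/deploy/refdata.xml.gz")
```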

Conclusions

State adds additional complexity for your CI/CD deployment pipelines, but InterSystems IRIS provides a vast array of tools to manage it.

Links

4
3 854
Discussion Dmitry Maslennikov · Jul 10, 2022

I would say it is a post of pain after years of using InterSystems IRIS Docker images in many projects.

And I hope InterSystems will hear me and do something with it.

We have a lot of issues with Docker images, but I see no progress in solving them.

  • containers.intersystems.com - any new release replaces the previous version, which makes builds irreproducible
    • ARM64 images have separate names, which makes them a pain to use
  • flags in iris-main appear and disappear from version to version, which may cause the container to fail to start
  • healthcheck does not work as expected
3
0 782
Question Chris Marais · Jun 28, 2022

I have followed the article Continuous Delivery of your InterSystems solution using GitLab. On our own Linux environment it works well, but when I try to run it on a client's Windows server, the following command is run by the runner:

irissession IRISHEALTH -U TEST "##class(isc.git.GitLab).load()" 

we get the following "error"

<NOTOPEN>>

I do not know why this happens, it does look like it has something to do with user rights but I am totally lost at this point as we have done everything I can think off relating to grant correct access etc.

1
0 405
InterSystems Official Thomas Dyar · Jan 24, 2022

The Data Platforms team is very pleased to announce the 2021.2 release of InterSystems IRIS Data Platform, InterSystems IRIS for Health and HealthShare Health Connect, which are now Generally Available (GA) to our customers and partners.

Release Highlights

InterSystems IRIS Data Platform 2021.2 makes it even easier to develop, deploy and manage augmented applications and business processes that bridge data and application silos. It has many new capabilities including:

Enhancements for application and interface developers, including:

17
1 1574
Article Eduard Lebedyuk · Jan 26, 2022 4m read

If you're deploying to more than one environment/region/cloud/customer, you will inevitably encounter the issue of configuration management.

While all (or just several) of your deployments can share the same source code, some parts, such as configuration (settings, passwords) differ from deployment to deployment and must be managed somehow.

In this article, I will try to offer several tips on that topic. This article talks mainly about container deployments.

4
0 482
InterSystems Official Steven LeBlanc · Aug 21, 2020

I am pleased to announce the availability of InterSystems Container Registry. This provides a new distribution channel for customers to access container-based releases and previews. All Community Edition images are available in a public repository with no login required. All full released images (IRIS, IRIS for Health, Health Connect, System Alerting and Monitoring, InterSystems Cloud Manager) and utility images (such as arbiter, Web Gateway, and PasswordHash) require a login token, generated from your WRC account credentials.

14
7 2260
Job Utsavi Gajjar · Jan 5, 2021

We are looking to hire a DevOps engineer with expertise in InterSystems technologies such as Ensemble and/or IRIS.


The main responsibility of the role will be to implement version control and an automated CI/CD pipeline for code build and deployment, via tools and automation scripts, for the current InterSystems platforms within the organisation.

If interested please email your resume to utsavi.gajjar@mater.org.au

0
0 444
Question Rajasekar Balasubramaniyan · Oct 6, 2020

Hi, I am new to IRIS, and we are planning to set up a CI pipeline on an AWS VM deploying the IRIS data platform container. I am trying to find out which folders need to be in source control and where (which exact folder) the updated code needs to be pulled into the container. I would be much obliged if anyone can point me to the CI/CD-related documentation.

Thanks,
Raj

1
0 657