# Continuous Integration


In software engineering, continuous integration (CI) is the practice of merging all developer working copies to a shared mainline several times a day.

Article Sanjib Pandey · Oct 17, 2025 13m read

Overview

This web interface is designed to facilitate the management of Data Lookup Tables via a user-friendly web page. It is particularly useful when your lookup table values are large, dynamic, and frequently changing. By granting end-users controlled access to this web interface (read, write, and delete permissions limited to this page), they can efficiently manage lookup table data according to their needs.

The data managed through this interface can be seamlessly utilized in HealthConnect rules or data transformations, eliminating the need for constant manual monitoring and management of the lookup tables and thereby saving significant time.
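
For reference, Interoperability keeps Data Lookup Table entries in the ^Ens.LookupTable global, which is what a page like this ultimately reads and writes, and rules/DTLs read them through the Lookup() function. A minimal sketch of programmatic access (table and key names are illustrative):

// Create or update an entry
Set ^Ens.LookupTable("MyTable", "sourceCode") = "targetCode"
// Read an entry the way the Lookup() function in rules/DTLs does
Write ##class(Ens.Util.FunctionSet).Lookup("MyTable", "sourceCode")
// Delete an entry
Kill ^Ens.LookupTable("MyTable", "sourceCode")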

Note:
If the standard Data Lookup Table does not meet your mapping requirements, you can create a custom table and adapt this web interface along with its supporting class with minimal modifications. Sample class code is available upon request.

Question Cristiano Silva · Oct 1, 2025

Hi all,

I'm developing an Azure Pipeline to automate the deployment process in Caché.

I use a self-hosted agent to execute code on my Caché server.

My problem is that csession execution via cmd always terminates with exit code 1, so the pipeline finishes with an error, even though the execution in Caché is fine and the executed method returns $$$OK.
I use the following line to execute a class in Caché.
C:\InterSystems\Cache\bin\csession.exe CACHE -U  %RELEASE_TRIGGERINGARTIFACT_ALIAS% "##Class(sgf.pipeline.DeploymentManager).ProcessDeployment()"
Below is a screenshot of the execution in Azure:

The problem is the exit code 1
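
One common workaround (a hedged sketch, not a verified fix for this exact setup): have the CI entry point translate the %Status into an explicit OS exit code, since csession's own exit status may not reflect the method result. The wrapper method below is hypothetical:

/// Hypothetical CI entry point: map the %Status to an OS exit code
ClassMethod RunForPipeline()
{
    Set exitCode = 1
    Try {
        Set sc = ##class(sgf.pipeline.DeploymentManager).ProcessDeployment()
        // 0 tells the calling shell/pipeline that we succeeded
        Set:$$$ISOK(sc) exitCode = 0
    } Catch ex {
        Set exitCode = 1
    }
    // Halt the csession process with an explicit exit status
    Do $SYSTEM.Process.Terminate($Job, exitCode)
}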

Question Flávio Lúcio Naves Júnior · Nov 7, 2023

Hello everyone,

I am attempting to implement continuous integration using Docker with Caché 2018.1, and I am in the process of creating an image for our client. I have already installed Caché 2018.1 on a Red Hat server, and I am now working on a script to create the database and namespace. For the database, I used the following code:

do ##class(SYS.Database).CreateDatabase("/usr/cachepoc/cache2018/mgr/poc/")

However, I have encountered some issues with this code. For instance, I am unable to view this local database in the portal's list:
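
For what it's worth, SYS.Database.CreateDatabase only creates the database file; the portal lists databases from the instance configuration, so the new database also needs to be registered there. A minimal sketch using the Config API in the %SYS namespace (the "POC" names are illustrative):

// Run in the %SYS namespace
Set dir = "/usr/cachepoc/cache2018/mgr/poc/"
Set sc = ##class(SYS.Database).CreateDatabase(dir)
// Register the database in the configuration so the portal can list it
Set props("Directory") = dir
Set sc = ##class(Config.Databases).Create("POC", .props)
// Optionally, create a namespace on top of it
Kill props
Set props("Globals") = "POC", props("Routines") = "POC"
Set sc = ##class(Config.Namespaces).Create("POC", .props)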

Article Eduard Lebedyuk · Jan 7, 2025 1m read

When you deploy code from a repo, class (file) deletion might not be reflected by your CICD system.
Here's a simple one-liner to automatically delete all classes in a specified package that were not part of the import. It can be easily adjusted for a variety of related tasks:

set packages = "USER.*,MyCustomPackage.*"
set dir = "C:\InterSystems\src\"
set sc = $SYSTEM.OBJ.LoadDir(dir, "ck", .err, 1, .loaded)
set sc = $SYSTEM.OBJ.Delete(packages _ ",'" _ $LTS($LI($LFS(loaded_",",".cls,"), 1, *-1), ",'"),, .err2)

The first command compiles classes and also returns a list of loaded classes. The second command deletes all classes from specified packages, except for the classes loaded just before that.

Question Adam Raszkiewicz · Jul 26, 2024

I was watching this video about IRIS and GitHub, and it's all clear to me how it works and how code from each branch gets deployed to each IRIS environment, but the deployment process is manual. My question is: how can I, if possible, utilize git-source-control from a GitLab CI/CD pipeline to deploy code automatically after PR approval, instead of going to the Git UI?

Thanks

Article Eduard Lebedyuk · Feb 9, 2024 6m read

Welcome to the next chapter of my CI/CD series, where we discuss possible approaches toward software development with InterSystems technologies and GitLab. Today, we continue talking about Interoperability, specifically monitoring your Interoperability deployments. If you haven't yet, set up Alerting for all your Interoperability productions to get alerts about errors and production state in general.

Inactivity Timeout is a setting common to all Interoperability Business Hosts. A business host has an Inactive status after it has not received any messages within the number of seconds specified by the Inactivity Timeout field. The production Monitor Service periodically reviews the status of business services and business operations within the production and marks the item as Inactive if it has not done anything within the Inactivity Timeout period. The default value is 0 (zero). If this setting is 0, the business host will never be marked Inactive, no matter how long it stands idle.

This is an extremely useful setting since it generates alerts, which, together with configured alerting, allows for real-time notifications about production issues. Business Host being idle means there might be some issues with production, integrations, or network connectivity worth looking into. However, Business Host can have only one constant Inactivity Timeout setting, which might generate unnecessary alerts during known periods of low traffic: nights, weekends, holidays, etc. In this article, I will outline several approaches towards dynamic Inactivity Timeout implementation. While I do provide a working example (currently running in production for one of our customers), this article is more of a guideline for building your own dynamic Inactivity Timeout implementation, so don't consider the proposed solution as the only alternative.

Idea

The interoperability engine maintains a special HostMonitor global, which contains each Business Host as a subscript and a timestamp of the last activity as a value. Instead of using Inactivity Timeout, we will monitor this global ourselves and generate alerts based on the state of the HostMonitor. HostMonitor is maintained regardless of the Inactivity Timeout value being set - it's always on.

Implementation

To start with, here's how we can iterate the HostMonitor global:

Set tHost=""
For { 
  Set tHost=$$$OrderHostMonitor(tHost) 
  Quit:""=tHost
  Set lastActivity = $$$GetHostMonitor(tHost,$$$eMonitorLastActivity)
}

To create our Monitor Service, we need to perform the following checks for each Business Host:

  1. Decide if the Business Host is under the scope of our Dynamic Inactivity Timeout at all (for example, high-traffic HL7 interfaces can work with the usual Inactivity Timeout).
  2. If the Business Host is in scope, we need to calculate the time since the last activity.
  3. Now, based on inactivity time and any number of conditions (day/night time, day of week), we need to decide if we do want to send an alert.
  4. If we do send an alert, we need to record the Last Activity time so that we won't send the same alert twice.

Our code now looks like this:

Set tHost=""
For { 
  Set tHost=$$$OrderHostMonitor(tHost) 
  Quit:""=tHost
  Continue:'..InScope(tHost)
  Set lastActivity = $$$GetHostMonitor(tHost,$$$eMonitorLastActivity)
  // last activity we already alerted on, if any
  Set lastActivityReported = $Get($$$EnsJobLocal("LastActivity", tHost))
  Set tDiff = $$$timeDiff($$$timeUTC, lastActivity)
  Set tTimeout = ..GetTimeout(tHost)
  If (tDiff > tTimeout) && ((lastActivityReported="") || ($system.SQL.DATEDIFF("s",lastActivityReported,lastActivity)>0)) {
    Set tText = $$$FormatText("InactivityTimeoutAlert: Inactivity timeout of '%1' seconds exceeded for host '%2'", +$fn(tDiff,,0), tHost)
    Do ..SendAlert(##class(Ens.AlertRequest).%New($LB(tHost, tText)))
    // remember what we alerted on so we don't alert twice
    Set $$$EnsJobLocal("LastActivity", tHost) = lastActivity
  } 
}

You need to implement the InScope and GetTimeout methods, which hold your custom logic, and you're good to go. In my example, there are Day Timeouts (which might differ per Business Host, but with a default value) and a Night Timeout (which is the same for all tracked Business Hosts), so the user needs to provide the following settings:

  • Scopes: List of Business Host names (or parts of names) paired with their custom DayTimeout values, one per line. Only Business Hosts that are in scope (satisfy the $find(host, scope) condition for at least one scope) would be tracked. Leave empty to monitor all Business Hosts. Example: OperationA=120
  • DayStart: Seconds since 00:00:00 after which a day starts. It must be lower than DayEnd. E.g., 06:00:00 AM is 6*3600 = 21600.
  • DayEnd: Seconds since 00:00:00 after which a day ends. It must be higher than DayStart. E.g., 08:00:00 PM is (12+8)*3600 = 72000.
  • DayTimeout: Default timeout value in seconds to raise alerts during the day.
  • NightTimeout: Timeout value in seconds to raise alerts during the night.
  • WeekendDays: Days of the week which are considered the weekend, comma-separated. On weekend days, NightTimeout applies 24 hours a day. Example: 1,7. Check a date's day-of-week value by running $SYSTEM.SQL.Functions.DAYOFWEEK(date-expression). By default, the returned values represent these days: 1 — Sunday, 2 — Monday, 3 — Tuesday, 4 — Wednesday, 5 — Thursday, 6 — Friday, 7 — Saturday.

Here's the full code, but I don't think there's anything interesting in there: it just implements the InScope and GetTimeout methods. You can use other criteria and adjust InScope and GetTimeout as needed; a minimal GetTimeout sketch follows.
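
Here is what such a GetTimeout could look like, assuming the settings above are exposed as properties of the monitoring service; GetDayTimeout, which resolves a per-host DayTimeout from Scopes, is left as a hypothetical helper:

Method GetTimeout(pHost As %String) As %Integer
{
  // Day of week for today: 1 - Sunday ... 7 - Saturday
  Set day = $SYSTEM.SQL.Functions.DAYOFWEEK(+$Horolog)
  // WeekendDays is a comma-separated setting, e.g. "1,7"
  If ("," _ ..WeekendDays _ ",") [ ("," _ day _ ",") Quit ..NightTimeout
  // Seconds since midnight, from the time part of $Horolog
  Set seconds = $Piece($Horolog, ",", 2)
  If (seconds >= ..DayStart) && (seconds < ..DayEnd) Quit ..GetDayTimeout(pHost)
  Quit ..NightTimeout
}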

Issues

There are two issues to speak of:

  • No yellow icon for Inactive Business Hosts (since the host's InactivityTimeout setting value is zero).
  • Out-of-host setting - developers need to remember to update this custom monitoring service each time they add a new Business Host and want to use dynamic inactivity timeouts.

Alternatives

I explored these approaches before implementing the above solution:

  1. Create a Business Service that changes InactivityTimeout settings when day/night starts. Initially, I tried to go this route but encountered a number of issues, mainly the requirement to restart all affected Business Hosts every time we changed the InactivityTimeout setting.
  2. In the custom Alert Processor, add rules that suppress the alert instead of sending it if it arrives during the nightly InactivityTimeout window, returning the host state to OK while it's "night". But an inactivity alert from Ens.MonitorService updates the LastActivity value, so from a custom Alert Processor, I don't see a way to get the "true" last activity timestamp (besides querying Ens.MessageHeader, I suppose).
  3. Extending Ens.MonitorService does not seem possible except for the OnMonitor callback, but it serves another purpose.

Conclusion

Always configure alerting for all your Interoperability productions to get alerts about errors and production state in general. If a static Inactivity Timeout is not enough, you can easily create a dynamic implementation.


Article Timothy Leavitt · Mar 24, 2020 5m read

This article will describe processes for running unit tests via the InterSystems Package Manager (aka IPM - see https://openexchange.intersystems.com/package/InterSystems-Package-Manager-1), including test coverage measurement (via https://openexchange.intersystems.com/package/Test-Coverage-Tool).

Unit testing in ObjectScript

There's already great documentation about writing unit tests in ObjectScript, so I won't repeat any of that. You can find the Unit Test tutorial here: https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=TUNT_preface
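
Just for orientation, the smallest possible test case looks roughly like this (class and method names are made up):

Class MyApp.UnitTests.TestExample Extends %UnitTest.TestCase
{

Method TestAddition()
{
    // Test* methods are discovered and run automatically
    Do $$$AssertEquals(1 + 1, 2, "1 + 1 should equal 2")
}

}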

Article Eduard Lebedyuk · Aug 14, 2018 6m read

In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

  • Git 101
  • Git flow (development process)
  • GitLab installation
  • GitLab Workflow
  • Continuous Delivery
  • GitLab installation and configuration
  • GitLab CI/CD
  • Why containers?
  • Containers infrastructure
  • CD using containers
  • CD using ICM
  • Container architecture

In this article, we'll talk about building your own container and deploying it.

Article Eduard Lebedyuk · May 10, 2018 9m read

In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

  • Git 101
  • Git flow (development process)
  • GitLab installation
  • GitLab Workflow
  • Continuous Delivery
  • GitLab installation and configuration
  • GitLab CI/CD
  • Why containers?
  • Containers infrastructure
  • CD using containers

In the first article, we covered Git basics, why a high-level understanding of Git concepts is important for modern software development, and how Git can be used to develop software.

In the second article, we covered GitLab Workflow - a complete software life cycle process and Continuous Delivery.

In the third article, we covered GitLab installation and configuration and connecting your environments to GitLab.

In the fourth article, we wrote a CD configuration.

In the fifth article, we talked about containers and how (and why) they can be used.

In the sixth article, we discussed the main components you'll need to run a continuous delivery pipeline with containers and how they all work together.

In this article, we'll build Continuous Delivery configuration discussed in the previous articles.

Article AndreClaude Gendron · Sep 19, 2017 1m read

It is with great pleasure that the CIUSSS de l'Estrie - CHUS is sharing the mocking framework it developed and presented at the InterSystems Summit 2017. I will update this post with more detailed instructions in the next few weeks, but I wanted to share the code and presentation quickly:

https://gitlab.com/ciussse-drit-srd-public/Mocking-Framework

I hope you'll find this useful for your unit testing. We have been using it extensively for the last two years, and it really works well! The repo is public, so feel free to submit enhancements!

Article Benjamin De Boe · Sep 13, 2022 8m read

In the vast and varied SQL database market, InterSystems IRIS stands out as a platform that goes way beyond just SQL, offering a seamless multimodel experience and supporting a rich set of development paradigms. Especially the advanced Object-Relational engine has helped organizations use the best-fit development approach for each facet of their data-intensive workloads, for example ingesting data through Objects and simultaneously querying it through SQL. Persistent Classes correspond to SQL tables, their properties to table columns and business logic is easily accessed using User-Defined Functions or Stored Procedures. In this article, we'll zoom in on a little bit of the magic just below the surface, and discuss how it may affect your development and deployment practices. This is an area of the product where we have plans to evolve and improve, so please don't hesitate to share your views and experiences using the comments section below.

Article Oliver Wilms · Apr 4, 2023 6m read

IRIS configurations and user accounts contain various data elements that need to be tracked, and many people struggle to copy or sync those system configurations and user accounts between IRIS instances. So how can this process be simplified?

In software engineering, CI/CD or CICD is the set of combined practices of continuous integration (CI) and (more often) continuous delivery or (less often) continuous deployment (CD). Can CI/CD eliminate all our struggles?

I work in a team which develops and deploys IRIS clusters. We run IRIS in containers on Red Hat OpenShift container platform.

Question Kari Vatjus-Anttila · Jan 28, 2023

Hello,

I read a great article by @Timothy Leavitt here, where he wrote about unit testing and test coverage with ZPM. I am facing a slight problem and was wondering if someone has some insight into the matter.

I am running my unit tests in the following way with ZPM, as instructed. They work well and test reports are generated correctly. Test coverage is also measured correctly according to the logs. However, even though I instructed ZPM to generate Cobertura-style coverage reports, it is not generating one. When I run the GenerateReport() method manually, the report is generated correctly.
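
For comparison, the invocation pattern from that article, as I recall it, passes the Cobertura report class and target file as user parameters; treat the parameter names below as assumptions to verify against the article:

// Hypothetical invocation; verify the parameter names against the article
zpm "my-package test -only -DUnitTest.UserParam.CoverageReportClass=TestCoverage.Report.Cobertura.ReportGenerator -DUnitTest.UserParam.CoverageReportFile=/tmp/coverage.xml"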

Question Chris Marais · Nov 15, 2021

When running the command 

do ##class(TestCoverage.Manager).RunTest(,"/nodelete",.userParams)

I now get the following error. These tests used to run fine.

LogStateStatus:0:TestCoverage.Manager:OnBeforeAllTests:ERROR #6060: Somebody else is using the Monitor. <<==== **FAILED**

 

Can someone please point me in the right direction?
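
For ERROR #6060 specifically, the usual cause is a leftover line-by-line monitor session still holding the Monitor (a later question on this page notes that stopping it resolves #6060). A minimal sketch:

// Release the line-by-line monitor held by a previous run
Do ##class(%Monitor.System.LineByLine).Stop()
// Then re-run the tests
Do ##class(TestCoverage.Manager).RunTest(,"/nodelete", .userParams)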

Article Eduard Lebedyuk · Sep 26, 2022 11m read

Welcome to the next chapter of my CI/CD series, where we discuss possible approaches toward software development with InterSystems technologies and GitLab.

Today, let's talk about interoperability.

Issue

When you have an active interoperability production, you have two separate process flows: a working production that processes messages and a CI/CD process flow that updates code, production configuration and system default settings.

Clearly, the CI/CD process affects interoperability. But the questions are:

  • What exactly happens during an update?
  • What do we need to do to minimize or eliminate production downtime during an update?

Terminology

  • Business Host (BH) - one configurable element of Interoperability Production: Business Service (BS), Business Process (BP, BPL), or Business Operation (BO).
  • Business Host Job (Job) - InterSystems IRIS job that runs Business Host code and is managed by Interoperability production.
  • Production - interconnected collection of Business Hosts.
  • System Default Settings (SDS) - values that are specific to the environment where InterSystems IRIS is installed.
  • Active Message - a request which is currently being processed by one Business Host Job. One Business Host Job can have a maximum of one Active Message. A Business Host Job that does not have an Active Message is idle.

What's going on?

Let's start with the Production Lifecycle.

Production Start

First of all, Production can be started. Only one production per namespace can run simultaneously, and in general (unless you really know what you're doing and why), only one production should ever run per namespace. Switching back and forth between two or more different productions in one namespace is not recommended. Starting a production starts all enabled Business Hosts defined in it. Failure of some Business Hosts to start does not affect the Production start.

Tips:

  • Start the production from the System Management Portal or by calling: ##class(Ens.Director).StartProduction("ProductionName")
  • Execute arbitrary code on Production start (before any Business Host Job is started) by implementing an OnStart method; a minimal sketch follows these tips.
  • Production start is an auditable event. You can always see who did that and when in the Audit Log.
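
A minimal sketch of such an OnStart callback in a production class (the class name and logging global are illustrative):

Class MyApp.Production Extends Ens.Production
{

ClassMethod OnStart(pTimeStarted As %String) As %Status
{
    // Runs on Production start, before any Business Host Job is started
    Set ^MyApp.ProductionLog($Increment(^MyApp.ProductionLog)) = "Started at " _ pTimeStarted
    Quit $$$OK
}

}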

Production Update

After Production has been started, Ens.Director continuously monitors the running production. Two production states exist: the target state, defined in the production class and System Default Settings, and the running state, i.e., the currently running jobs with the settings applied when those jobs were created. If the target and running states are identical, everything is good; if there's a difference, production could (and should) be updated. Usually, you see that as a red Update button on the Production Configuration page in the System Management Portal.

Updating production means an attempt to get the current Production state to match the target Production state.

When you run ##class(Ens.Director).UpdateProduction(timeout=10, force=0) to update the production, it does the following for each Business Host:

  1. Compares active settings to production/SDS/class settings
  2. If, and only if, (1) shows a mismatch, the Business Host is marked as out-of-date and requiring an update.

After running this for each Business Host, UpdateProduction builds the set of changes:

  • Business Hosts to stop
  • Business Hosts to start
  • Production settings to update

And after that, applies them.

This way, “updating” settings without changing anything results in no production downtime.

Tips:

  • Update the production from the System Management Portal or by calling: ##class(Ens.Director).UpdateProduction(timeout=10, force=0)
  • The default System Management Portal update timeout is 10 seconds. If you know that processing your messages takes more than that, call ##class(Ens.Director).UpdateProduction() with a larger timeout.
  • Update Timeout is a production setting, and you can change it to a larger value. This setting applies to updates started from the System Management Portal.

Code Update

UpdateProduction DOES NOT UPDATE the BHs with out-of-date code. This is a safety-oriented behavior, but if you want to automatically update all running BHs whenever the underlying code changes, follow these steps:

First, load and compile like this:

do $system.OBJ.LoadDir(dir, "", .err, 1, .load)
do $system.OBJ.CompileList(.load, "curk", .errCompile, .listCompiled)

Thanks to the u flag, listCompiled will contain only the items that were actually compiled (use git diffs to minimize the loaded set). Use listCompiled to get a $lb of all classes which were compiled:

set classList = ""
set class = $o(listCompiled(""))
while class'="" { 
  set classList = classList _ $lb($p(class, ".", 1, *-1))
  set class=$o(listCompiled(class))
}

And after that, calculate a list of BHs which need a restart:

SELECT %DLIST(Name) bhList
FROM Ens_Config.Item 
WHERE 1=1
  AND Enabled = 1
  AND Production = :production
  AND ClassName %INLIST :classList

Finally, after obtaining bhList, stop and start the affected hosts:

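// Pass 1 (stop = 1) marks each affected host to be stopped; pass 2 (stop = 0)
// marks them to be started again. The UpdateProduction() call after each pass
// applies the pending stops and starts.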
for stop = 1, 0 {
  for i=1:1:$ll(bhList) {
    set host = $lg(bhList, i)
    set sc = ##class(Ens.Director).TempStopConfigItem(host, stop, 0)
  }
  set sc = ##class(Ens.Director).UpdateProduction()
}

Production Stop

Productions can be stopped, which means sending a request to all Business Host Jobs to shut down (safely, after they are done with their active messages, if any).

Tips:

  • Stop the production from the System Management Portal or by calling: ##class(Ens.Director).StopProduction(timeout=10, force=0)
  • The default System Management Portal stop timeout is 120 seconds. If you know that processing your messages takes more than that, call ##class(Ens.Director).StopProduction() with a larger timeout.
  • Shutdown Timeout is a production setting. You can change that to a larger value. This setting applies to the System Management Portal.
  • Execute arbitrary code on Production stop by implementing an OnStop method.
  • Production stop is an auditable event. You can always see who did that and when in the Audit Log.

The important thing here is that Production is a sum total of the Business Hosts:

  • Starting production means starting all enabled Business Hosts.
  • Stopping production means stopping all running Business Hosts.
  • Updating production means calculating a subset of Business Hosts which are out of date, so they are first stopped and immediately after that started again. Additionally, a newly added Business Host is only started, and a Business Host deleted from production is just stopped.

That brings us to the Business Hosts lifecycle.

Business Host Start

Business Hosts are composed of identical Business Host Jobs (their number is set by the Pool Size setting). Starting a Business Host means starting all of its Business Host Jobs; they are started in parallel.

An individual Business Host Job starts like this:

  1. Interoperability JOBs a new process that would become a Business Host Job.
  2. The new process registers as an Interoperability job.
  3. Business Host code and Adapter code are loaded into process memory.
  4. Settings related to the Business Host and Adapter are loaded into memory. The order of precedence is:
    a. Production Settings (override System Default and Class Settings)
    b. System Default Settings (override Class Settings)
    c. Class Settings
  5. The Job is ready and starts accepting messages.

After (4) is done, the Job can't change settings or code, so importing new (or the same) code and new (or the same) System Default Settings does not affect currently running Interoperability jobs.

Business Host Stop

Stopping a Business Host Job means:

  1. Interoperability orders the Job to stop accepting any more messages/inputs.
  2. If there's an active message, the Business Host Job has timeout seconds to process it (by completing it - finishing the OnMessage method for a BO, OnProcessInput for a BS, the state S<int> method for BPL BPs, and the On* methods for BPs).
  3. If an active message has not been processed before the timeout and force=0, the production update fails for that Business Host (and you'll see a red Update button in the SMP).
  4. Stop succeeds if anything on this list is true:
    • No active message
    • Active message was processed before the timeout
    • Active message was not processed before the timeout BUT force=1
  5. Job is deregistered with Interoperability and halts.

Business Host Update

Business host update means stopping currently running Jobs for the Business Host and starting new Jobs.

Business Rules, Routing Rules, and DTLs

All Business Hosts immediately start using new versions of Business Rules, Routing Rules, and DTLs as they become available. A restart of a Business Host is not required in this situation.

Offline updates

Sometimes, however, Production updates require downtime of individual Business Hosts.

Rules depend on new code

Consider the situation: you have a current Routing Rule X, which routes messages to either Business Process A or B based on arbitrary criteria. In a new commit, you add, simultaneously:

  • Business Process C
  • A new version of Routing Rule X, which routes messages to A, B, or C.

In this scenario, you can't just load the rule first and update the production second, because the newly compiled rule would immediately start routing messages to Business Process C, which InterSystems IRIS might not have compiled yet, or which Interoperability has not yet updated to use. In this case, you need to disable the Business Host with the Routing Rule, update the code, update the production, and enable the Business Host again.
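
A hedged sketch of that disable/update/enable sequence (item and directory names are illustrative, and EnableConfigItem is used as I recall its signature):

// Disable the router that uses Routing Rule X and apply the change
Set sc = ##class(Ens.Director).EnableConfigItem("RouterX", 0, 1)
// Load and compile the new rule and Business Process C
Do $system.OBJ.LoadDir("/deploy/src/", "cuk", .err, 1)
// Re-enable the router; it picks up the new code on restart
Set sc = ##class(Ens.Director).EnableConfigItem("RouterX", 1, 1)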

Notes:

  • If you update a production using a production deployment file, it will automatically disable/enable all affected BHs.
  • For InProc invoked hosts, the compilation invalidates the cache of the particular host held by the caller.

Dependencies between Business Hosts

Dependencies between Business Hosts are critical. Imagine you have Business Processes A and B, where A sends messages to B. In a new commit, you add, simultaneously:

  • A new version of Process A, which sets a new property X in a request to B
  • A new version of Process B which can process a new property X

In this scenario, we MUST update Process B first and A second. You can do this in one of two ways:

  • Disable Business Hosts for the duration of the update
  • Split the update into two: first, update Process B only, and after that, in a separate update, start sending messages to it from Process A.

A more challenging variation on this theme, where new versions of Processes A and B are incompatible with old versions, requires Business Host downtime.

Queues

If you know that after the update, a Business Host will not be able to process old messages, you need to guarantee that the Business Host Queue is empty before the update. To do that, disable all Business Hosts that send messages to the Business Host and wait till its queue becomes empty.

State change in BPL Business Processes

First, a little intro into how BPL BPs work. After you compile a BPL BP, two classes are generated in a package with the same name as the full BPL class name:

  • The Thread1 class contains methods S1, S2, ... SN, which correspond to activities within the BPL.
  • The Context class has all context variables and also the next state which the BPL would execute (e.g., S5).

The BPL class itself is also persistent and stores the requests currently being processed.

BPL works by executing S methods in the Thread class and correspondingly updating the BPL class table, the Context table, and the Thread1 table, where one message "being processed" is one row in the BPL table. After the request is processed, BPL deletes the BPL, Context, and Thread entries. Since BPL BPs are asynchronous, one BPL job can simultaneously process many requests by saving information between S calls and switching between different requests. For example, BPL processes one request until it gets to a sync activity (waiting for an answer from a BO). It saves the current context to disk, with the %NextState property (in the Thread1 class) set to the response activity's S method, and works on other requests until the BO answers. After the BO answers, BPL loads the Context into memory and executes the method corresponding to the state saved in the %NextState property.

Now, what happens when we update the BPL? First, we need to check that at least one of the two conditions is satisfied:

  • During the update, the Context table is empty, meaning no active messages are being worked on.
  • The New States are the same as the old States, or new States are added after the old States.

If at least one condition is satisfied, we are good to go. There are either no pre-update requests for post-update BPL to process, or States are added at the end, meaning old requests can also go there (assuming that pre-update requests are compatible with post-update BPL activities and processing).

But what if you have active requests in processing and the BPL changes state order? Ideally, if you can wait: disable the BPL's callers and wait till the Queue is empty. Validate that the Context table is also empty. Remember that the Queue shows only unprocessed requests, while the Context table stores requests which are being worked on, so you can have a situation where a very busy BPL shows zero Queue size, and that's normal. After that, disable the BPL, perform the update, and enable all previously disabled Business Hosts.
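
Following the naming convention above, a BPL class MyPkg.MyBPL generates a MyPkg.MyBPL.Context class, so the in-flight request count can be checked with a query like this (class name illustrative):

SELECT COUNT(*) FROM MyPkg_MyBPL.Context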

If that's not possible (usually in a case where there is a very long BPL, i.e., I remember updating one that took around a week to process a request, or the update window is too short), use BPL versioning.

Alternatively, you can write an update script. In this update script, map old next states to new next states and run it on Thread1 table so that updated BPL can process old requests. BPL, of course, must be disabled for the duration of the update. That said, it's an extremely rare situation, and usually, you don't have to do this, but if you ever need to do that, that's how.

Conclusion

Interoperability implements a sophisticated algorithm to minimize the number of actions required to actualize Production after the underlying code change. Call UpdateProduction with a safe timeout on every SDS update. For every code update, you need to decide on an update strategy.

Minimizing the amount of compiled code by using git diffs helps with the compilation time, but "updating" the code with itself and recompiling it or "updating" the settings with the same values does not trigger or require a Production Update.

Updating and compiling Business Rules, Routing Rules, and DTLs makes them immediately accessible without a Production Update.

Finally, Production Update is a safe operation and usually does not require downtime.


The author would like to thank @James MacKeith, @Dmitry Zasypkin, and @Regilo Regilio Guedes de Souza for their invaluable help with this article.

Discussion Dmitry Maslennikov · Sep 22, 2022

For quite some time, InterSystems IRIS has supported CPF merge. With it, you can define only the desired configuration changes and have them applied even to a vanilla Docker image.

And I thought it could be useful when used with a Dockerfile: configure IRIS during docker build this way instead of using an Installer manifest.
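
A minimal sketch of that idea, assuming the documented ISC_CPF_MERGE_FILE mechanism; the image tag and settings are illustrative. First the merge file, then the Dockerfile:

merge.cpf (only the settings we want to change):

[config]
globals=0,0,800,0,0,0

Dockerfile:

FROM intersystemsdc/iris-community:latest
COPY merge.cpf /merge.cpf
ENV ISC_CPF_MERGE_FILE=/merge.cpf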

Discussion Dmitry Maslennikov · Jul 10, 2022

I would say this is a post of pain after years of using InterSystems IRIS Docker images in many projects.

And I hope InterSystems will hear me and do something about it.

We have a lot of issues with Docker images, but I see no progress in solving them.

  • containers.intersystems.com - any new release substitutes the previous version, which makes builds unreproducible
    • ARM64 images have separate names, which makes them a pain to use
  • flags in iris-main appear and disappear from version to version, which may cause the container to fail to start
  • healthcheck does not work as expected
Article Eduard Lebedyuk · Mar 20, 2018 8m read

In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

  • Git 101
  • Git flow (development process)
  • GitLab installation
  • GitLab Workflow
  • Continuous Delivery
  • GitLab installation and configuration
  • GitLab CI/CD

In the first article, we covered Git basics, why a high-level understanding of Git concepts is important for modern software development, and how Git can be used to develop software.

Question Chris Marais · Jun 28, 2022

I have followed the article Continuous Delivery of your InterSystems solution using GitLab. On our own Linux environment it works well, but when I try to run it on a client's Windows server, the following command is run by the runner

irissession IRISHEALTH -U TEST "##class(isc.git.GitLab).load()" 

we get the following "error"

<NOTOPEN>>

I do not know why this happens. It does look like it has something to do with user rights, but I am totally lost at this point, as we have done everything I can think of relating to granting correct access, etc.

Article Eduard Lebedyuk · Mar 13, 2018 4m read

In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

  • Git 101
  • Git flow (development process)
  • GitLab installation
  • GitLab Workflow
  • Continuous Delivery
  • GitLab installation and configuration
  • GitLab CI/CD

In the first article, we covered Git basics, why a high-level understanding of Git concepts is important for modern software development, and how Git can be used to develop software.

In the second article, we covered GitLab Workflow - a complete software life cycle process and Continuous Delivery.

In this article, we'll discuss:

  • GitLab installation and configuration
  • Connecting your environments to GitLab
Question Rajasekaran Dhandapani · Apr 9, 2022

I am new to InterSystems. In our project, we connect directly to the server (environment) using the InterSystems VS Code extensions and publish our changes from our local machines. This is not our usual development process.

Is it possible to implement continuous integration, so that developers can check their code into GitHub, integrate Jenkins, and automate the deployment?

Could you please help me with this?

Question Zoltán Mencser · Mar 24, 2022

Hello everyone!

When running the command 

do ##class(TestCoverage.Manager).RunTest(,"/nodelete",.userParams)

I now get the following error.

ERROR #5002 : ObjectScript error:  <FUNCTION>zStart+45^%Monitor.System.LineByLine.1  <<==== **FAILED** TestCoverage.Manager:OnBeforeAllTests:::

These tests run fine in Studio. 

Also, TestCoverage works fine in all the other namespaces. So far, I'm experiencing this problem in this one namespace only.

Stopping the line-by-line monitor, which is the solution in the case of ERROR #6060, doesn't help here.

Can someone please point me in the right direction?

InterSystems Official Steven LeBlanc · Aug 21, 2020

I am pleased to announce the availability of InterSystems Container Registry. This provides a new distribution channel for customers to access container-based releases and previews. All Community Edition images are available in a public repository with no login required. All full released images (IRIS, IRIS for Health, Health Connect, System Alerting and Monitoring, InterSystems Cloud Manager) and utility images (such as arbiter, Web Gateway, and PasswordHash) require a login token, generated from your WRC account credentials.

Job Utsavi Gajjar · Jan 5, 2021

We are looking to hire a DevOps engineer with expertise in InterSystems technologies like Ensemble and/or IRIS as essential.

The main responsibility of the role will be to implement version control and an automated CI/CD pipeline for code build and deployment, via tools and automation scripts, for the current InterSystems platforms within the organisation.

If interested please email your resume to utsavi.gajjar@mater.org.au

Question Rajasekar Balasubramaniyan · Oct 6, 2020

Hi, I am new to IRIS, and we are planning to set up a CI pipeline on an AWS VM deploying the IRIS data platform container. I am trying to find out which folders need to be in source control and where (exact folder) the updated code needs to be pulled in the container. I would be much obliged if anyone can point me to the CI/CD-related documentation.

Thanks,
Raj
