# Best Practices


Best Practices: recommendations on how to better develop, test, deploy, and manage solutions on InterSystems Data Platforms.

Article Guillaume Rongier · Feb 5, 2024 20m read

I have been using Embedded Python on a daily basis for more than 2 years now. Maybe it's time to share some feedback about this journey.

Why write this feedback? Because, I guess, I'm like most of the people here: an ObjectScript developer. I think the community would benefit from this feedback, could better understand the pros & cons of choosing Embedded Python for developing things in IRIS, and could avoid some pitfalls.


Introduction

I have been a developer since 2010 and have been working with ObjectScript since 2013.

So, roughly 10 years of experience with ObjectScript.

Since 2021 and the release of Embedded Python in IRIS, I have set myself a challenge:

  • Learn Python
  • Do as much as possible in Python

When I started this journey, I had no idea of what Python was. So I started with the basics, and I'm still learning every day.

Starting with Python

The good thing with Python is that it's easy to learn. It's even easier when you already know ObjectScript.

Why? They have a lot in common.

| ObjectScript | Python |
| --- | --- |
| Untyped | Untyped |
| Scripting language | Scripting language |
| Object Oriented | Object Oriented |
| Interpreted | Interpreted |
| Easy C integration | Easy C integration |

So, if you know ObjectScript, you already know a lot about Python.

But, there are some differences, and some of them are not easy to understand.

Python is not ObjectScript

To keep it simple, I will focus on the main differences between ObjectScript and Python.

For me, there are mainly 3 differences:

  • PEP 8
  • Modules
  • Dunders

PEP 8

What the hell is PEP 8?

It's a set of rules to write Python code.

pep8.org

A few of them are:

  • naming convention
  • variable names
    • snake_case
  • class names
    • CamelCase
  • indentation
  • line length
  • etc.

Why is it important?

Because it's the way Python code is written. If you don't follow these rules, you will have a hard time reading other people's code, and they will have a hard time reading yours.

As ObjectScript developers, we also have some rules to follow, but they are not as strict as PEP 8.

I learned PEP 8 the hard way.

For the story: I'm a sales engineer at InterSystems, and I do a lot of demos. One day, I was demoing Embedded Python to a customer who was a Python developer, and the conversation was cut short when he saw my code. He told me that my code was not Pythonic at all (he was right); I was coding in Python as if I were coding in ObjectScript. Because of that, he told me he was no longer interested in Embedded Python. I was shocked, and I decided to learn Python the right way.

So, if you want to learn Python, learn PEP 8 first.
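To make the contrast concrete, here is a small hypothetical sketch: the same function written first the way an ObjectScript developer might instinctively write it, then the PEP 8 way:

# Not Pythonic: ObjectScript-style Pascal-cased names and p/t prefixes
def GetFullName(pFirstName, pLastName):
    tResult = pFirstName + " " + pLastName
    return tResult

# Pythonic: snake_case names, 4-space indentation, an f-string
def get_full_name(first_name: str, last_name: str) -> str:
    return f"{first_name} {last_name}"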

Modules

Modules are something that we don't have in ObjectScript.

Usually, in object-oriented languages, you have classes and packages. In Python, you have classes, packages, and modules.

What is a module?

It's a file with a .py extension. And it's the way to organize your code.

You didn't understand? Me neither, at the beginning. So let's take an example.

Usually, when you want to create a class in ObjectScript, you create a .cls file, and you put your class in it. And if you want to create another class, you create another .cls file. And if you want to create a package, you create a folder, and you put your .cls files in it.

In Python, it's the same, but Python brings the ability to have multiple classes in a single file. This file is called a module. FYI, it's Pythonic to have multiple classes in a single file.

So plan ahead how you will organize your code and how you will name your modules, so you don't end up like me with a lot of modules bearing the same name as their classes.

A bad example:

MyClass.py

class MyClass:
    def __init__(self):
        pass

    def my_method(self):
        pass

To instantiate this class, you will do:

from MyClass import MyClass  # weird, right?

my_class = MyClass()

Weird, right?
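A better layout (a hypothetical sketch) is to give the module a name describing a theme and group several related classes in it:

# shapes.py - one module, several related classes
class Circle:
    def __init__(self, radius):
        self.radius = radius

class Square:
    def __init__(self, side):
        self.side = side

The imports then read naturally:

from shapes import Circle, Square

circle = Circle(radius=2.0)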

Dunders

Dunders are special methods in Python. They are called dunders because their names start and end with double underscores.

They are kind of like our % methods in ObjectScript.

They are used for :

  • constructor
  • operator overloading
  • object representation
  • etc.

Example:

class MyClass:
    def __init__(self, value=0):
        self.value = value

    def __repr__(self):
        return f"MyClass({self.value})"

    def __add__(self, other):
        # return a new object instead of recursing into __add__ again
        return MyClass(self.value + other.value)

Here we have 3 dunder methods:

  • __init__ : constructor
  • __repr__ : object representation
  • __add__ : operator overloading

Dunder methods are everywhere in Python. They are a major part of the language, but don't worry, you will learn them quickly.
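To see what these buy you, here is how the example class above behaves once it holds a value (matching the fixed example):

a = MyClass(1)
b = MyClass(2)

print(repr(a))  # calls __repr__ and prints MyClass(1)
print(a + b)    # calls __add__ (then __repr__) and prints MyClass(3)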

Conclusion

Python is not ObjectScript, and you will have to learn it. But it's not that hard, and you will pick it up quickly. Just keep in mind that you will have to learn PEP 8, how to organize your code with modules, and the dunder methods.

Good sites to learn Python:


Embedded Python

Now that you know a little bit more about Python, let's talk about Embedded Python.

What is Embedded Python?

Embedded Python is a way to execute Python code inside IRIS. It's a feature of IRIS 2021.2+, and it means that your Python code is executed in the same process as IRIS. What's more, every ObjectScript class is a Python class, same for methods and attributes, and vice versa. 🥳 This is neat!

How to use Embedded Python?

There are 3 main ways to use Embedded Python:

  • Using the language tag in ObjectScript
    • Method Foo() As %String [ Language = python ]
  • Using the ##class(%SYS.Python).Import() function
  • Using the python interpreter
    • python3 -c "import iris; print(iris.system.Version.GetVersion())"
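As a sketch of that last option, the iris module lets you reach any ObjectScript class from Python; Demo.MyClass below is a hypothetical example class:

# Run inside the IRIS Python interpreter (e.g. via irispython)
import iris

# Call a system classmethod, equivalent to $system.Version.GetVersion()
print(iris.cls('%SYSTEM.Version').GetVersion())

# Instantiate an ObjectScript class; % in method names becomes _
obj = iris.cls('Demo.MyClass')._New()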

But if you want to be serious about Embedded Python, you will have to avoid using the language tag.


Why?

  • Because it's not Pythonic
  • Because it's not ObjectScript either
  • Because you don't have a debugger
  • Because you don't have a linter
  • Because you don't have a formatter
  • Because you don't have a test framework
  • Because you don't have a package manager
  • Because you are mixing 2 languages in the same file
  • Because when your process crashes, you don't have a stack trace
  • Because you can't use virtual environments or conda environments
  • ...

Don't get me wrong: it works, and it can be useful if you want to test something quickly, but IMO it's not a good practice.

So, what did I learn from these 2 years of Embedded Python, and what is the right way to use it?

How I use Embedded Python

For me, you have two options:

  • Use Python libraries as if they were ObjectScript classes
    • with the ##class(%SYS.Python).Import() function
  • Use a Python-first approach

Use Python libraries and code as if they were ObjectScript classes

You still want to use Python in your ObjectScript code, but you don't want to use the language tag. So what can you do ?

"Simply" use Python libraries and code as they were ObjectScript classes.

Let's take an example:

You want to use the requests library (a library to make HTTP requests) in your ObjectScript code.

With the language tag

ClassMethod Get() As %Status [ Language = python ]
{
	import requests

	url = "https://httpbin.org/get"
	# make a get request
	response = requests.get(url)
	# get the json data from the response
	data = response.json()
	# iterate over the data and print key-value pairs
	for key, value in data.items():
		print(key, ":", value)
}

Why do I think it's not a good idea?

Because you are mixing 2 languages in the same file, and you don't have a debugger, a linter, a formatter, etc. If this code crashes, you will have a hard time debugging it: you don't have a stack trace, and you don't know where the error comes from. And you don't have auto-completion.

Without the language tag

ClassMethod Get() As %Status
{
    set status = $$$OK
    set url = "https://httpbin.org/get"
    // Import Python module "requests" as an ObjectScript class
    set request = ##class(%SYS.Python).Import("requests")
    // Call the get method of the request class
    set response = request.get(url)
    // Call the json method of the response class
    set data = response.json()
    // Here data is a Python dictionary
    // To iterate over a Python dictionary, you have to use dunder methods and items()
    // Import the built-in Python module
    set builtins = ##class(%SYS.Python).Import("builtins")
    // Here we are using len from the builtins module to get the length of the dictionary
    for i = 0:1:builtins.len(data)-1 {
        // Convert the items of the dictionary to a list, then get the key and the value using the dunder method __getitem__
        write builtins.list(data.items())."__getitem__"(i)."__getitem__"(0),": ",builtins.list(data.items())."__getitem__"(i)."__getitem__"(1),!
    }
    quit status
}

Why do I think it's a good idea?

Because you are using Python as if it were ObjectScript. You import the requests library as if it were an ObjectScript class, and you use it as one. All the logic is in ObjectScript, and Python serves as a library. Even for maintenance it's easier to read and understand: any ObjectScript developer can understand this code. The drawback is that you have to know how to use dunder methods, and how to use Python as if it were ObjectScript.

Conclusion

Believe me, this way you will end up with more robust code, and you will be able to debug it easily. At first it seems hard, but you will find the benefits of learning Python faster than you think.

Use a Python-first approach

This is the way I prefer to use Embedded Python.

I have built a lot of tools using this approach, and I'm very happy with it.

A few examples:

So, what is a Python-first approach?

There is only one rule: Python code must be in .py files, ObjectScript code must be in .cls files.

How to achieve this?

The whole idea is to create ObjectScript wrapper classes to call Python code.
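As a minimal sketch of the pattern (the module and class names are hypothetical), the Python side is a plain module living in a .py file, fully testable and lintable on its own:

# src/python/greeter.py
class Greeter:
    def greet(self, name: str) -> str:
        return f"Hello {name}"

The matching ObjectScript wrapper then only has to import the module, for example with ##class(%SYS.Python).Import("greeter"), and delegate to it. That is exactly what the FHIR example below does at a larger scale.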


Let's take the example of iris-fhir-python-strategy:

Example : iris-fhir-python-strategy

First of all, we have to understand how the IRIS FHIR Server works.

Every IRIS FHIR Server implements a Strategy.

A Strategy is a set of two classes:

| Superclass | Subclass Parameters |
| --- | --- |
| HS.FHIRServer.API.InteractionsStrategy | StrategyKey: specifies a unique identifier for the InteractionsStrategy. InteractionsClass: specifies the name of your Interactions subclass. |
| HS.FHIRServer.API.RepoManager | StrategyClass: specifies the name of your InteractionsStrategy subclass. StrategyKey: specifies a unique identifier for the InteractionsStrategy; must match the StrategyKey parameter in the InteractionsStrategy subclass. |

Both classes are Abstract classes.

  • HS.FHIRServer.API.InteractionsStrategy is an Abstract class that must be implemented to customize the behavior of the FHIR Server.
  • HS.FHIRServer.API.RepoManager is an Abstract class that must be implemented to customize the storage of the FHIR Server.

Remarks

For our example, we will only focus on the HS.FHIRServer.API.InteractionsStrategy class, even though the HS.FHIRServer.API.RepoManager class is also mandatory to customize the FHIR Server. The HS.FHIRServer.API.RepoManager class is implemented by the HS.FHIRServer.Storage.Json.RepoManager class, which is the default implementation of the FHIR Server.

Where to find the code

All source code can be found in this repository: iris-fhir-python-strategy. The src folder contains the following folders:

  • python : contains the python code
  • cls : contains the ObjectScript code that is used to call the python code

How to implement a Strategy

In this proof of concept, we will only be interested in how to implement a Strategy in Python, not how to implement a RepoManager.

To implement a Strategy, you need to create at least two classes:

  • A class that inherits from HS.FHIRServer.API.InteractionsStrategy class
  • A class that inherits from HS.FHIRServer.API.Interactions class

Implementation of InteractionsStrategy

The HS.FHIRServer.API.InteractionsStrategy class aims to customize the behavior of the FHIR Server by overriding the following methods:

  • GetMetadataResource : called to get the metadata of the FHIR Server
    • this is the only method we will override in this proof of concept

HS.FHIRServer.API.InteractionsStrategy also has two parameters:

  • StrategyKey : a unique identifier for the InteractionsStrategy
  • InteractionsClass : the name of your Interactions subclass

Implementation of Interactions

The HS.FHIRServer.API.Interactions class aims to customize the behavior of the FHIR Server by overriding the following methods:

  • OnBeforeRequest : called before the request is sent to the server
  • OnAfterRequest : called after the request is sent to the server
  • PostProcessRead : called after the read operation is done
  • PostProcessSearch : called after the search operation is done
  • Read : called to read a resource
  • Add : called to add a resource
  • Update : called to update a resource
  • Delete : called to delete a resource
  • and many more...

We implement the HS.FHIRServer.API.Interactions class in src/cls/FHIR/Python/Interactions.cls.

 
Spoiler

Class FHIR.Python.Interactions Extends (HS.FHIRServer.Storage.Json.Interactions, FHIR.Python.Helper)
{

Parameter OAuth2TokenHandlerClass As %String = "FHIR.Python.OAuth2Token";

Method %OnNew(pStrategy As HS.FHIRServer.Storage.Json.InteractionsStrategy) As %Status
{
	// %OnNew is called when the object is created.
	// The pStrategy parameter is the strategy object that created this object.
	// The default implementation does nothing
	// First set the python path from an env var
	set ..PythonPath = $system.Util.GetEnviron("INTERACTION_PATH")
	// Then set the python class name from the env var
	set ..PythonClassname = $system.Util.GetEnviron("INTERACTION_CLASS")
	// Then set the python module name from the env var
	set ..PythonModule = $system.Util.GetEnviron("INTERACTION_MODULE")

	if (..PythonPath = "") || (..PythonClassname = "") || (..PythonModule = "") {
		//quit ##super(pStrategy)
		set ..PythonPath = "/irisdev/app/src/python/"
		set ..PythonClassname = "CustomInteraction"
		set ..PythonModule = "custom"
	}

	// Then set the python class
	do ..SetPythonPath(..PythonPath)
	set ..PythonClass = ##class(FHIR.Python.Interactions).GetPythonInstance(..PythonModule, ..PythonClassname)

	quit ##super(pStrategy)
}

Method OnBeforeRequest(
	pFHIRService As HS.FHIRServer.API.Service,
	pFHIRRequest As HS.FHIRServer.API.Data.Request,
	pTimeout As %Integer)
{
	// OnBeforeRequest is called before each request is processed.
	if $ISOBJECT(..PythonClass) {
		set body = ##class(%SYS.Python).None()
		if pFHIRRequest.Json '= "" {
			set jsonLib = ##class(%SYS.Python).Import("json")
			set body = jsonLib.loads(pFHIRRequest.Json.%ToJSON())
		}
		do ..PythonClass."on_before_request"(pFHIRService, pFHIRRequest, body, pTimeout)
	}
}

Method OnAfterRequest(
	pFHIRService As HS.FHIRServer.API.Service,
	pFHIRRequest As HS.FHIRServer.API.Data.Request,
	pFHIRResponse As HS.FHIRServer.API.Data.Response)
{
	// OnAfterRequest is called after each request is processed.
	if $ISOBJECT(..PythonClass) {
		set body = ##class(%SYS.Python).None()
		if pFHIRResponse.Json '= "" {
			set jsonLib = ##class(%SYS.Python).Import("json")
			set body = jsonLib.loads(pFHIRResponse.Json.%ToJSON())
		}
		do ..PythonClass."on_after_request"(pFHIRService, pFHIRRequest, pFHIRResponse, body)
	}
}

Method PostProcessRead(pResourceObject As %DynamicObject) As %Boolean
{
	// PostProcessRead is called after a resource is read from the database.
	// Return 1 to indicate that the resource should be included in the response.
	// Return 0 to indicate that the resource should be excluded from the response.
	if $ISOBJECT(..PythonClass) {
		if pResourceObject '= "" {
			set jsonLib = ##class(%SYS.Python).Import("json")
			set body = jsonLib.loads(pResourceObject.%ToJSON())
		}
		return ..PythonClass."post_process_read"(body)
	}
	quit 1
}

Method PostProcessSearch(
	pRS As HS.FHIRServer.Util.SearchResult,
	pResourceType As %String) As %Status
{
	// PostProcessSearch is called after a search is performed.
	// Return $$$OK to indicate that the search was successful.
	// Return an error code to indicate that the search failed.
	if $ISOBJECT(..PythonClass) {
		return ..PythonClass."post_process_search"(pRS, pResourceType)
	}
	quit $$$OK
}

Method Read(
	pResourceType As %String,
	pResourceId As %String,
	pVersionId As %String = "") As %DynamicObject
{
	return ##super(pResourceType, pResourceId, pVersionId)
}

Method Add(
	pResourceObj As %DynamicObject,
	pResourceIdToAssign As %String = "",
	pHttpMethod = "POST") As %String
{
	return ##super(pResourceObj, pResourceIdToAssign, pHttpMethod)
}

/// Returns VersionId for the "deleted" version
Method Delete(
	pResourceType As %String,
	pResourceId As %String) As %String
{
	return ##super(pResourceType, pResourceId)
}

Method Update(pResourceObj As %DynamicObject) As %String
{
	return ##super(pResourceObj)
}

}

The FHIR.Python.Interactions class inherits from the HS.FHIRServer.Storage.Json.Interactions class and the FHIR.Python.Helper class.

The HS.FHIRServer.Storage.Json.Interactions class is the default implementation of the FHIR Server.

The FHIR.Python.Helper class aims to help call Python code from ObjectScript.

The FHIR.Python.Interactions class overrides the following methods:

  • %OnNew : called when the object is created
    • we use this method to set the python path, python class name and python module name from environment variables
    • if the environment variables are not set, we use default values
    • we also set the python class
    • we call the %OnNew method of the parent class
Method %OnNew(pStrategy As HS.FHIRServer.Storage.Json.InteractionsStrategy) As %Status
{
	// First set the python path from an env var
	set ..PythonPath = $system.Util.GetEnviron("INTERACTION_PATH")
	// Then set the python class name from the env var
	set ..PythonClassname = $system.Util.GetEnviron("INTERACTION_CLASS")
	// Then set the python module name from the env var
	set ..PythonModule = $system.Util.GetEnviron("INTERACTION_MODULE")

	if (..PythonPath = "") || (..PythonClassname = "") || (..PythonModule = "") {
		// use default values
		set ..PythonPath = "/irisdev/app/src/python/"
		set ..PythonClassname = "CustomInteraction"
		set ..PythonModule = "custom"
	}

	// Then set the python class
	do ..SetPythonPath(..PythonPath)
	set ..PythonClass = ..GetPythonInstance(..PythonModule, ..PythonClassname)

	quit ##super(pStrategy)
}
  • OnBeforeRequest : called before the request is sent to the server
    • we call the on_before_request method of the python class
    • we pass the HS.FHIRServer.API.Service object, the HS.FHIRServer.API.Data.Request object, the body of the request and the timeout
Method OnBeforeRequest(
	pFHIRService As HS.FHIRServer.API.Service,
	pFHIRRequest As HS.FHIRServer.API.Data.Request,
	pTimeout As %Integer)
{
	// OnBeforeRequest is called before each request is processed.
	if $ISOBJECT(..PythonClass) {
		set body = ##class(%SYS.Python).None()
		if pFHIRRequest.Json '= "" {
			set jsonLib = ##class(%SYS.Python).Import("json")
			set body = jsonLib.loads(pFHIRRequest.Json.%ToJSON())
		}
		do ..PythonClass."on_before_request"(pFHIRService, pFHIRRequest, body, pTimeout)
	}
}
  • OnAfterRequest : called after the request is sent to the server
    • we call the on_after_request method of the python class
    • we pass the HS.FHIRServer.API.Service object, the HS.FHIRServer.API.Data.Request object, the HS.FHIRServer.API.Data.Response object and the body of the response
Method OnAfterRequest(
	pFHIRService As HS.FHIRServer.API.Service,
	pFHIRRequest As HS.FHIRServer.API.Data.Request,
	pFHIRResponse As HS.FHIRServer.API.Data.Response)
{
	// OnAfterRequest is called after each request is processed.
	if $ISOBJECT(..PythonClass) {
		set body = ##class(%SYS.Python).None()
		if pFHIRResponse.Json '= "" {
			set jsonLib = ##class(%SYS.Python).Import("json")
			set body = jsonLib.loads(pFHIRResponse.Json.%ToJSON())
		}
		do ..PythonClass."on_after_request"(pFHIRService, pFHIRRequest, pFHIRResponse, body)
	}
}
  • And so on...

Interactions in Python

The FHIR.Python.Interactions class calls the on_before_request, on_after_request, etc. methods of the Python class.

Here is the abstract Python class:

import abc
import iris

class Interaction(abc.ABC):

    @abc.abstractmethod
    def on_before_request(self, 
                          fhir_service:'iris.HS.FHIRServer.API.Service',
                          fhir_request:'iris.HS.FHIRServer.API.Data.Request',
                          body:dict,
                          timeout:int):
        """
        on_before_request is called before the request is sent to the server.
        param fhir_service: the fhir service object iris.HS.FHIRServer.API.Service
        param fhir_request: the fhir request object iris.HS.FHIRServer.API.Data.Request
        param timeout: the timeout in seconds
        return: None
        """
        

    @abc.abstractmethod
    def on_after_request(self,
                         fhir_service:'iris.HS.FHIRServer.API.Service',
                         fhir_request:'iris.HS.FHIRServer.API.Data.Request',
                         fhir_response:'iris.HS.FHIRServer.API.Data.Response',
                         body:dict):
        """
        on_after_request is called after the request is sent to the server.
        param fhir_service: the fhir service object iris.HS.FHIRServer.API.Service
        param fhir_request: the fhir request object iris.HS.FHIRServer.API.Data.Request
        param fhir_response: the fhir response object iris.FHIRServer.API.Data.Response
        return: None
        """
        

    @abc.abstractmethod
    def post_process_read(self,
                          fhir_object:dict) -> bool:
        """
        post_process_read is called after the read operation is done.
        param fhir_object: the fhir object
        return: True the resource should be returned to the client, False otherwise
        """
        

    @abc.abstractmethod
    def post_process_search(self,
                            rs:'iris.HS.FHIRServer.Util.SearchResult',
                            resource_type:str):
        """
        post_process_search is called after the search operation is done.
        param rs: the search result iris.HS.FHIRServer.Util.SearchResult
        param resource_type: the resource type
        return: None
        """

Implementation of the abstract python class

from FhirInteraction import Interaction

class CustomInteraction(Interaction):

    def on_before_request(self, fhir_service, fhir_request, body, timeout):
        #Extract the user and roles for this request
        #so consent can be evaluated.
        self.requesting_user = fhir_request.Username
        self.requesting_roles = fhir_request.Roles

    def on_after_request(self, fhir_service, fhir_request, fhir_response, body):
        #Clear the user and roles between requests.
        self.requesting_user = ""
        self.requesting_roles = ""

    def post_process_read(self, fhir_object):
        #Evaluate consent based on the resource and user/roles.
        #Returning 0 indicates this resource shouldn't be displayed - a 404 Not Found
        #will be returned to the user.
        return self.consent(fhir_object['resourceType'],
                        self.requesting_user,
                        self.requesting_roles)

    def post_process_search(self, rs, resource_type):
        #Iterate through each resource in the search set and evaluate
        #consent based on the resource and user/roles.
        #Each row marked as deleted and saved will be excluded from the Bundle.
        rs._SetIterator(0)
        while rs._Next():
            if not self.consent(rs.ResourceType,
                            self.requesting_user,
                            self.requesting_roles):
                #Mark the row as deleted and save it.
                rs.MarkAsDeleted()
                rs._SaveRow()

    def consent(self, resource_type, user, roles):
        #Example consent logic - only allow users with the role '%All' to see
        #Observation resources.
        if resource_type == 'Observation':
            if '%All' in roles:
                return True
            else:
                return False
        else:
            return True

Too long? Here's a summary

The FHIR.Python.Interactions class is a wrapper that calls the Python class.

IRIS abstract classes are implemented to wrap Python abstract classes 🥳.

That helps us keep Python code and ObjectScript code separated, and thus benefit from the best of both worlds.

Article Sean McKenna · Feb 12, 2024 3m read

One of the great features in InterSystems IRIS is Monitoring InterSystems IRIS Using REST API. It gives every InterSystems HealthShare instance a REST interface that provides statistics about the instance, including many out-of-the-box statistics and metrics about the underlying InterSystems IRIS instance.

You also have the ability to create application-level statistics and metrics.
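As a quick illustration, the metrics endpoint can be scraped with a few lines of Python; the host, port, and credentials below are placeholders to adapt to your instance:

import requests

# /api/monitor/metrics returns metrics in Prometheus exposition format
response = requests.get(
    "http://localhost:52773/api/monitor/metrics",
    auth=("_SYSTEM", "SYS"),  # placeholder credentials
)
for line in response.text.splitlines():
    if line.startswith("iris_"):  # IRIS metric names share this prefix
        print(line)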

Article Eduard Lebedyuk · Feb 9, 2024 6m read

Welcome to the next chapter of my CI/CD series, where we discuss possible approaches toward software development with InterSystems technologies and GitLab. Today, we continue talking about Interoperability, specifically monitoring your Interoperability deployments. If you haven't yet, set up Alerting for all your Interoperability productions to get alerts about errors and production state in general.

Inactivity Timeout is a setting common to all Interoperability Business Hosts. A business host has an Inactive status after it has not received any messages within the number of seconds specified by the Inactivity Timeout field. The production Monitor Service periodically reviews the status of business services and business operations within the production and marks the item as Inactive if it has not done anything within the Inactivity Timeout period. The default value is 0 (zero). If this setting is 0, the business host will never be marked Inactive, no matter how long it stands idle.

This is an extremely useful setting since it generates alerts, which, together with configured alerting, allows for real-time notifications about production issues. A Business Host being idle means there might be an issue with production, integrations, or network connectivity worth looking into. However, a Business Host can have only one constant Inactivity Timeout setting, which might generate unnecessary alerts during known periods of low traffic: nights, weekends, holidays, etc. In this article, I will outline several approaches to a dynamic Inactivity Timeout implementation. While I do provide a working example (currently running in production for one of our customers), this article is more of a guideline for building your own dynamic Inactivity Timeout implementation, so don't consider the proposed solution the only alternative.

Idea

The interoperability engine maintains a special HostMonitor global, which contains each Business Host as a subscript and a timestamp of the last activity as a value. Instead of using Inactivity Timeout, we will monitor this global ourselves and generate alerts based on the state of the HostMonitor. HostMonitor is maintained regardless of the Inactivity Timeout value being set - it's always on.

Implementation

To start with, here's how we can iterate the HostMonitor global:

Set tHost=""
For { 
  Set tHost=$$$OrderHostMonitor(tHost) 
  Quit:""=tHost
  Set lastActivity = $$$GetHostMonitor(tHost,$$$eMonitorLastActivity)
}

To create our Monitor Service, we need to perform the following checks for each Business Host:

  1. Decide if the Business Host is under the scope of our Dynamic Inactivity Timeout at all (for example, high-traffic HL7 interfaces can work with the usual Inactivity Timeout).
  2. If the Business Host is in scope, we need to calculate the time since the last activity.
  3. Now, based on inactivity time and any number of conditions (day/night time, day of week), we need to decide if we do want to send an alert.
  4. If we do send an alert, we need to record the Last Activity time so that we won't send the same alert twice.

Our code now looks like this:

Set tHost=""
For { 
  Set tHost=$$$OrderHostMonitor(tHost) 
  Quit:""=tHost
  Continue:'..InScope(tHost)
  Set lastActivity = $$$GetHostMonitor(tHost,$$$eMonitorLastActivity)
  // Last activity we already alerted on for this host, if any (set below)
  Set lastActivityReported = $GET($$$EnsJobLocal("LastActivity", tHost))
  Set tDiff = $$$timeDiff($$$timeUTC, lastActivity)
  // GetTimeout holds your custom day/night logic (see the settings below)
  Set tTimeout = ..GetTimeout(tHost)
  If (tDiff > tTimeout) && ((lastActivityReported="") || ($system.SQL.DATEDIFF("s",lastActivityReported,lastActivity)>0)) {
    Set tText = $$$FormatText("InactivityTimeoutAlert: Inactivity timeout of '%1' seconds exceeded for host '%2'", +$fn(tDiff,,0), tHost)
    Do ..SendAlert(##class(Ens.AlertRequest).%New($LB(tHost, tText)))
    Set $$$EnsJobLocal("LastActivity", tHost) = lastActivity
  } 
}

You need to implement InScope and GetTimeout methods, which will actually hold your custom logic, and you're good to go. In my example, there are Day Timeouts (which might be different for each Business Host, but with a default value) and Night Timeout (which is the same for all tracked Business Hosts), so the user needs to provide the following settings:

  • Scopes: List of Business Host names (or parts of names) paired with their custom DayTimeout value, one per line. Only Business Hosts that are in scope (satisfy $find(host, scope) condition for at least one scope) would be tracked. Leave empty to monitor all Business Hosts. Example: OperationA=120
  • DayStart: Seconds since 00:00:00 after which the day starts. Must be lower than DayEnd. E.g. 06:00:00 AM is 6*3600 = 21600
  • DayEnd: Seconds since 00:00:00 after which the day ends. Must be higher than DayStart. E.g. 08:00:00 PM is (12+8)*3600 = 72000
  • DayTimeout: Default timeout value in seconds to raise alerts during the day.
  • NightTimeout: Timeout value in seconds to raise alerts during the night.
  • WeekendDays: Days of Week which are considered Weekend. Comma separated. For Weekend, NightTimeout applies 24 hours a day. Example: 1,7 Check the date's DayOfWeek value by running: $SYSTEM.SQL.Functions.DAYOFWEEK(date-expression). By default, the returned values represent these days: 1 β€” Sunday, 2 β€” Monday, 3 β€” Tuesday, 4 β€” Wednesday, 5 β€” Thursday, 6 β€” Friday, 7 β€” Saturday.
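To make the day/night decision concrete, here is an illustrative Python sketch of the GetTimeout logic under the settings above (the real implementation is in ObjectScript; the names and defaults here are assumptions):

import datetime

def get_timeout(now: datetime.datetime,
                day_start: int = 21600,    # 06:00:00 as seconds since midnight
                day_end: int = 72000,      # 20:00:00 as seconds since midnight
                day_timeout: int = 600,
                night_timeout: int = 7200,
                weekend_days=frozenset({6, 7})) -> int:  # ISO: 6=Saturday, 7=Sunday
    """Return the inactivity timeout (in seconds) applicable right now."""
    # On weekend days, the night timeout applies 24 hours a day
    if now.isoweekday() in weekend_days:
        return night_timeout
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    seconds_since_midnight = int((now - midnight).total_seconds())
    if day_start <= seconds_since_midnight < day_end:
        return day_timeout
    return night_timeout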

Here's the full code, but I don't think there's anything interesting in there. It just implements InScope and GetTimeout methods. You can use other criteria and adjust InScope and GetTimeout methods as needed.

Issues

There are two issues to speak of:

  • No yellow icon for Inactive Business Hosts (since the host's InactivityTimeout setting value is zero).
  • Out-of-host setting - developers need to remember to update this custom monitoring service each time they add a new Business Host and want to use dynamic inactivity timeouts.

Alternatives

I explored these approaches before implementing the above solution:

  1. Create the Business Service that changes InactivityTimeout settings when day/night starts. Initially, I tried to go this route but encountered a number of issues, mainly the requirement to restart all affected Business Hosts every time we changed the InactivityTimeout setting.
  2. In the custom Alert Processor, add rules that, instead of sending the alert, suppress it if the nightly InactivityTimeout has not been exceeded yet, and, if it's "night", return the host state to OK. But an inactivity alert from Ens.MonitorService updates the LastActivity value, so from a custom Alert Processor I don't see a way to get the "true" last activity timestamp (besides querying Ens.MessageHeader, I suppose?).
  3. Extending Ens.MonitorService does not seem possible except for OnMonitor callback, but it serves another purpose.

Conclusion

Always configure alerting for all your Interoperability productions to get alerts about errors and production state in general. If a static Inactivity Timeout is not enough, you can easily create a dynamic implementation.

Links

Article Luis Angel Pérez Ramos · Feb 7, 2024 6m read

In this article we are going to see how we can use the WhatsApp instant messaging service from InterSystems IRIS to send messages to different recipients. To do this we must create and configure an account in Meta and configure a Business Operation to send the messages we want.

Let's look at each of these steps in more detail.

Setting up an account on Meta

This is possibly the most complicated point of the entire configuration, since we will have to configure a series of accounts until we can have the messaging functionality.

Here you can read the official Meta documentation.
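Once the Meta side is configured, the actual send boils down to one HTTP POST to the WhatsApp Cloud API, which is what the Business Operation wraps. A minimal Python sketch, assuming a Cloud API setup (the phone number ID, token, and API version are placeholders):

import requests

PHONE_NUMBER_ID = "<your-phone-number-id>"  # placeholder
ACCESS_TOKEN = "<your-meta-access-token>"   # placeholder

def send_text(to: str, body: str) -> dict:
    """Send a plain-text WhatsApp message through the Cloud API."""
    url = f"https://graph.facebook.com/v17.0/{PHONE_NUMBER_ID}/messages"
    payload = {
        "messaging_product": "whatsapp",
        "to": to,  # recipient number in international format
        "type": "text",
        "text": {"body": body},
    }
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    return requests.post(url, json=payload, headers=headers).json()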

Article Mark Bolinsky · Feb 5, 2019 9m read

There are often questions surrounding the ideal Apache HTTPD Web Server configuration for HealthShare.  The contents of this article will outline the initial recommended web server configuration for any HealthShare product. 

As a starting point, Apache HTTPD version 2.4.x (64-bit) is recommended. Earlier versions such as 2.2.x are available; however, version 2.2 is not recommended for the performance and scalability of HealthShare.

Article Theo Stolker · Feb 2, 2024 9m read

In a customer project, I was asked how to keep track of database changes: who changed what, and at which date and time. The goal was to track inserts, updates, and deletes for both SQL and object access.

This is the table that I created to keep the Change Log:

/// Changelog, keep track of changes to any table
Class ChangeLog.DB.ChangeLog Extends (%Persistent, %JSON.Adaptor)
{

/// Action 
Property Action As %String(%JSONFIELDNAME = "action", DISPLAYLIST = ",Create,Update,Delete", MAXLEN = 1, VALUELIST = ",0,1,2");

/// Classname of the %Persistent class
Property ClassName As %String(%JSONFIELDNAME = "table", MAXLEN = "") [ SqlFieldName = TableName ];

/// ID of the record
Property DRecordId As %String(%JSONFIELDNAME = "id") [ SqlFieldName = RecordId ];

/// Name of the user that made the change
Property DUsername As %String(%JSONFIELDNAME = "user") [ SqlFieldName = Username ];

/// ISO 8601 formatted UTC timestamp e.g 2023-03-20T15:14:45.7384083Z
Property ETimestamp As %String(%JSONFIELDNAME = "timestamp", MAXLEN = 28) [ SqlFieldName = Timestamp ];

/// Changed Data (only there for Action < 2)
Property NewData As %String(%JSONFIELDNAME = "changed-data", MAXLEN = "");

/// Old Data (only there for Action > 0)
Property OldData As %String(%JSONFIELDNAME = "old-data", MAXLEN = "");

}

The table for which I wanted to track changes was a simple Name-Value type:

Class ChangeLog.DB.NameValues Extends %Persistent
{

/// Name
Property name As %String;

Index nameIndex On name [ Unique ];

/// Value
Property value As %String(MAXLEN = "");

/// CreateOrUpdate
ClassMethod CreateOrUpdate(name As %String, value As %String = "") As %Status
{
    if ..nameIndexExists(name)
    {
        if (value = "")
        {
            return ..nameIndexDelete(name)
        }

        set record = ..nameIndexOpen(name)
    }
    else
    {
        set record = ..%New()
        set record.name = name
    }

    if (value = "") // Do not store!
    {
        return $$$OK        
    }

    set record.value = value

    return record.%Save()
}

}

I first attempted using an %OnAfterSave() method, which was easy enough, but it wasn't called when the update happened via SQL. So I learned that I had to write a Trigger instead; see https://docs.intersystems.com/healthconnectlatest/csp/docbook/DocBook.UI.Page.cls?KEY=GSQL_triggers

Once I got used to the specific syntax rules that apply, it was relatively straightforward to write a Trigger, so I added the following Triggers to the NameValues class:

/// Write the DB changelog
Trigger LogUpdate [ Event = INSERT/UPDATE, Foreach = row/object, Time = AFTER ]
{
    New changelog
    set changelog = ##class(ChangeLog.DB.ChangeLog).%New()
    set changelog.ClassName = $CLASSNAME()
    set changelog.DRecordId = {ID}
    set changelog.Action = (%oper = "UPDATE")
    set changelog.DUsername = $SYSTEM.Process.UserName()
    set changelog.ETimestamp = $ZDATETIME($ZTIMESTAMP, 3, 7, 7)
    if (%oper = "UPDATE") // also add old data
    {
        set changelog.OldData = { "name": ({name*O}), "value": ({value*O}) }.%ToJSON()
    }
    set changelog.NewData = { "name": ({name*N}), "value": ({value*N}) }.%ToJSON()
    do changelog.%Save()
}

/// Write delete to changelog
Trigger LogDelete [ Event = DELETE, Foreach = row/object ]
{
    New changelog
    set changelog = ##class(ChangeLog.DB.ChangeLog).%New()
    set changelog.ClassName = $CLASSNAME()
    set changelog.DRecordId = {ID}
    set changelog.Action = 2
    set changelog.DUsername = $SYSTEM.Process.UserName()
    set changelog.ETimestamp = $ZDATETIME($ZTIMESTAMP, 3, 7, 7)
    set changelog.OldData = { "name": ({name*O}), "value": ({value*O}) }.%ToJSON()
    do changelog.%Save()
}

The above code has been written specifically for the Name-Values table, because it uses the trigger-specific syntax to access old and new property values, like {name*O} for the old value of the name property and {value*N} for the new value of the value property. In addition, I could have used {value*C} to check whether the value property changed during a specific update.

Now that I was able to create a Trigger for a specific table, I wondered how I could change it to work the same but be completely generic, given that the Trigger syntax supports specific property names, but no wildcards.

In a recent post (see https://community.intersystems.com/post/how-add-webterminal-when-you-have-no-terminal-access) I had used [ CodeMode = objectgenerator ], and I wondered if I could use that here too.

Interestingly enough, section https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GOBJ_generators#GOBJ_methodgen showed me right away how to list properties during object generation. And yes, CodeMode = objectgenerator is supported for Triggers too!

With that background, I set out to change the Triggers to be completely generic, and here is the resulting code:

/// This class adds logic to write records to ChangeLog.DB.ChangeLog for %Persistent classes
/// Extend from %Persistent before this class, like in:
///    Class ChangeLog.DB.NameValues Extends (%Persistent, ChangeLog.DB.ChangeLogWriter)
/// That way you avoid records with only an id column being created in the table ChangeLog.DB.ChangeLogWriter
Class ChangeLog.DB.ChangeLogWriter Extends %Persistent [ Abstract, PropertyClass = ChangeLog.DB.ChangeLogPropertyParameters ]
{

/// Write the DB changelog
Trigger LogUpdate [ CodeMode = objectgenerator, Event = INSERT/UPDATE, Foreach = row/object, Time = AFTER ]
{
    do %code.WriteLine(" new changelog")
    do %code.WriteLine(" set changelog = ##class(ChangeLog.DB.ChangeLog).%New()")
    do %code.WriteLine(" set changelog.ClassName = ..%ClassName()")
    do %code.WriteLine(" set changelog.DRecordId = {ID}")
    do %code.WriteLine(" set changelog.Action = (%oper = ""UPDATE"")")
    do %code.WriteLine(" set changelog.DUsername = $UserName")
    do %code.WriteLine(" set changelog.ETimestamp = $ZDATETIME($ZTIMESTAMP, 3, 7, 7)")
    do %code.WriteLine(" if (%oper = ""UPDATE"") {")
    do %code.WriteLine("     new data")
    do %code.WriteLine("     set data = {}")

    for i = 1:1:%compiledclass.Properties.Count()
    {
        set prop = %compiledclass.Properties.GetAt(i)
        set propName = prop.Parameters.GetAt("%JSONFIELDNAME")

        if (propName = "")
        {
            set propName = prop.Name
        }

        if (prop.Name '[ "%") && 'prop.Transient && (prop.Type ? 1"%Library"0.E)
        {
            do %code.WriteLine("     do data.%Set(""" _ propName _ """, {"_ prop.Name _ "*O})")
        }
    }

    do %code.WriteLine("     set changelog.OldData = data.%ToJSON()")
    do %code.WriteLine(" }")
    do %code.WriteLine(" set data = {}")

    for i = 1:1:%compiledclass.Properties.Count()
    {
        set prop = %compiledclass.Properties.GetAt(i)
        set propName = prop.Parameters.GetAt("%JSONFIELDNAME")

        if (propName = "")
        {
            set propName = prop.Name
        }

        if (prop.Name '[ "%") && 'prop.Transient && (prop.Type ? 1"%Library"0.E)
        {
            do %code.WriteLine(" if {"_ prop.Name _ "*C} && '$ISOBJECT({"_ prop.Name _ "*N}) {")
            do %code.WriteLine("     do data.%Set(""" _ propName _ """, {"_ prop.Name _ "*N})")
            do %code.WriteLine(" }")
        }
    }
    do %code.WriteLine(" set changelog.NewData = data.%ToJSON()")
    do %code.WriteLine(" do changelog.%Save()")
    return $$$OK
}

/// Write delete to changelog
Trigger LogDelete [ CodeMode = objectgenerator, Event = DELETE, Foreach = row/object ]
{
    do %code.WriteLine(" new changelog")
    do %code.WriteLine(" set changelog = ##class(ChangeLog.DB.ChangeLog).%New()")
    do %code.WriteLine(" set changelog.ClassName = ..%ClassName()")
    do %code.WriteLine(" set changelog.DRecordId = {ID}")
    do %code.WriteLine(" set changelog.Action = 2")
    do %code.WriteLine(" set changelog.DUsername = $UserName")
    do %code.WriteLine(" set changelog.ETimestamp = $ZDATETIME($ZTIMESTAMP, 3, 7, 7)")
    do %code.WriteLine(" new data")
    do %code.WriteLine(" set data = {}")

    for i = 1:1:%compiledclass.Properties.Count()
    {
        set prop = %compiledclass.Properties.GetAt(i)
        set propName = prop.Parameters.GetAt("%JSONFIELDNAME")

        if (propName = "")
        {
            set propName = prop.Name
        }

        if (prop.Name '[ "%") && 'prop.Transient && (prop.Type ? 1"%Library"0.E)
        {
            do %code.WriteLine(" do data.%Set(""" _ propName _ """, {"_ prop.Name _ "*O})")
        }
    }

    do %code.WriteLine(" set changelog.OldData = data.%ToJSON()")
    do %code.WriteLine(" do changelog.%Save()")
    return $$$OK
}

}

I have tested this of course with the Name-Values class:

/// Test table with name values
/// Each change must be recorded in the ChangeLog Table
Class ChangeLog.DB.NameValues Extends (%Persistent, ChangeLog.DB.ChangeLogWriter)
{

/// Name
Property name As %String;

Index nameIndex On name [ Unique ];

/// Value
Property value As %String(MAXLEN = "");

/// CreateOrUpdate
ClassMethod CreateOrUpdate(name As %String, value As %String = "") As %Status
{
    if ..nameIndexExists(name)
    {
        if (value = "")
        {
            return ..nameIndexDelete(name)
        }

        set record = ..nameIndexOpen(name)
    }
    else
    {
        set record = ..%New()
        set record.name = name
    }

    if (value = "") // Do not store!
    {
        return $$$OK        
    }

    set record.value = value

    return record.%Save()
}

}

This worked great! After executing the following 3 commands:

  1. Inserting a record into NameValues using
w ##class(ChangeLog.DB.NameValues).CreateOrUpdate("name", "value1")
  2. Updating that instance using
w ##class(ChangeLog.DB.NameValues).CreateOrUpdate("name", "value2")
  3. Deleting all records using
delete FROM ChangeLog_DB.NameValues

This is what the ChangeLog looked like:

| ID | Action | TableName | RecordId | Username | Timestamp | NewData | OldData |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Create | NameValues | 1 | _SYSTEM | 2023-11-27T12:52:05.8768627Z | {"name":"name","value":"value1"} | |
| 2 | Update | NameValues | 1 | _SYSTEM | 2023-11-27T12:52:09.7573293Z | {"value":"value2"} | {"name":"name","value":"value1"} |
| 3 | Delete | NameValues | 1 | _SYSTEM | 2023-11-27T12:53:15.2558132Z | | {"name":"name","value":"value2"} |

Then, I changed all 12 classes extending %Persistent defined in the customer project to extend ChangeLog.DB.ChangeLogWriter instead. This led to a couple of changes (already in the code above):

  1. I wanted to exclude the %%OID and %Concurrency properties, which are part of the class by default.
  2. Transient properties need to be excluded, as these do not exist as SQL properties.
  3. Given that we log the old and new data as JSON, it made sense to use the %JSONFIELDNAME parameter as the property name when it is defined.

I did run into one unexplained issue, where a class using a unique index named "UniqueIndex" compiled when extending %Persistent, but no longer compiled when extending ChangeLog.DB.ChangeLogWriter. Changing the index name to UniqueIndex2 solved the issue.

To summarize, I am really excited about the power of [ CodeMode = objectgenerator ], and I hope this article is helpful to you too!

Article Alberto Fuentes · Jan 29, 2024 12m read

We have a yummy dataset with recipes written by multiple Reddit users; however, most of the information is free text, such as the title or description of a post. Let's find out how we can very easily load the dataset, extract some features, and analyze it using an OpenAI large language model with Embedded Python and the LangChain framework.

Loading the dataset

First things first: we need to load the dataset. Or can we just connect to it?

There are different ways to achieve this: for instance, the CSV Record Mapper you can use in an interoperability production, or even nice OpenExchange applications like csvgen.

We will use Foreign Tables: a very useful capability to project data physically stored elsewhere into IRIS SQL. We can use it to get a very first view of the dataset files.

We create a Foreign Server:

CREATE FOREIGN SERVER dataset FOREIGN DATA WRAPPER CSV HOST '/app/data/'

And then a Foreign Table that connects to the CSV file:

CREATE FOREIGN TABLE dataset.Recipes (
  CREATEDDATE DATE,
  NUMCOMMENTS INTEGER,
  TITLE VARCHAR,
  USERNAME VARCHAR,
  COMMENT VARCHAR,
  NUMCHAR INTEGER
) SERVER dataset FILE 'Recipes.csv' USING
{
  "from": {
    "file": {
       "skip": 1
    }
  }
}

And that's it: we can immediately run SQL queries on dataset.Recipes.
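If you prefer, the same query can be run from Embedded Python; here is a small sketch using the iris.sql module (run inside an IRIS Python session):

import iris

# Query the foreign table projection from Embedded Python
rs = iris.sql.exec("SELECT TOP 5 TITLE, NUMCOMMENTS FROM dataset.Recipes")
for row in rs:
    print(row)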

What data do we need?

The dataset is interesting, and we are hungry. However, if we want to decide which recipe to cook, we will need some more information that we can use for analysis.

We are going to work with two persistent classes (tables):

  • yummy.data.Recipe: a class containing the title and description of the recipe and some other properties that we want to extract and analyze (e.g. Score, Difficulty, Ingredients, CuisineType, PreparationTime)
  • yummy.data.RecipeHistory: a simple class for logging what are we doing with the recipe

We can now load our yummy.data* tables with the contents from the dataset:

do ##class(yummy.Utils).LoadDataset()

It looks good, but we still need to find out how we are going to generate data for the Score, Difficulty, Ingredients, PreparationTime, and CuisineType fields.

Analyze the recipes

We want to process each recipe title and description and:

  • Extract information like Difficulty, Ingredients, CuisineType, etc.
  • Build our own score based on our criteria so we can decide what we want to cook.

We are going to use the following:

LLMs (large language models) are really a great tool for processing natural language.

LangChain is ready to work in Python, so we can use it directly in InterSystems IRIS using Embedded Python.

The full SimpleOpenAI class looks like this:

/// Simple OpenAI analysis for recipes
Class yummy.analysis.SimpleOpenAI Extends Analysis
{

Property CuisineType As %String;

Property PreparationTime As %Integer;

Property Difficulty As %String;

Property Ingredients As %String;

/// Run
/// You can try this from a terminal:
/// set a = ##class(yummy.analysis.SimpleOpenAI).%New(##class(yummy.data.Recipe).%OpenId(8))
/// do a.Run()
/// zwrite a
Method Run()
{
    try {
        do ..RunPythonAnalysis()

        set reasons = ""

        // my favourite cuisine types
        if "spanish,french,portuguese,italian,korean,japanese"[..CuisineType {
            set ..Score = ..Score + 2
            set reasons = reasons_$lb("It seems to be a "_..CuisineType_" recipe!")
        }

        // don't want to spend whole day cooking :)
        if (+..PreparationTime < 120) {
            set ..Score = ..Score + 1
            set reasons = reasons_$lb("You don't need too much time to prepare it") 
        }
        
        // bonus for fav ingredients!
        set favIngredients = $listbuild("kimchi", "truffle", "squid")
        for i=1:1:$listlength(favIngredients) {
            set favIngred = $listget(favIngredients, i)
            if ..Ingredients[favIngred {
                set ..Score = ..Score + 1
                set reasons = reasons_$lb("Favourite ingredient found: "_favIngred)
            }
        }

        set ..Reason = $listtostring(reasons, ". ")

    } catch ex {
        throw ex
    }
}

/// Update recipe with analysis results
Method UpdateRecipe()
{
    try {
        // call parent class implementation first
        do ##super()

        // add specific OpenAI analysis results
        set ..Recipe.Ingredients = ..Ingredients
        set ..Recipe.PreparationTime = ..PreparationTime
        set ..Recipe.Difficulty = ..Difficulty
        set ..Recipe.CuisineType = ..CuisineType

    } catch ex {
        throw ex
    }
}

/// Run analysis using embedded Python + Langchain
/// do ##class(yummy.analysis.SimpleOpenAI).%New(##class(yummy.data.Recipe).%OpenId(8)).RunPythonAnalysis(1)
Method RunPythonAnalysis(debug As %Boolean = 0) [ Language = python ]
{
    # load OpenAI APIKEY from env
    import os
    from dotenv import load_dotenv, find_dotenv
    _ = load_dotenv('/app/.env')

    # account for deprecation of LLM model
    import datetime
    current_date = datetime.datetime.now().date()
    # date after which the model should be set to "gpt-3.5-turbo"
    target_date = datetime.date(2024, 6, 12)
    # set the model depending on the current date
    if current_date > target_date:
        llm_model = "gpt-3.5-turbo"
    else:
        llm_model = "gpt-3.5-turbo-0301"

    from langchain.chat_models import ChatOpenAI
    from langchain.prompts import ChatPromptTemplate
    from langchain.chains import LLMChain

    from langchain.output_parsers import ResponseSchema
    from langchain.output_parsers import StructuredOutputParser

    # init llm model
    llm = ChatOpenAI(temperature=0.0, model=llm_model)

    # prepare the responses we need
    cuisine_type_schema = ResponseSchema(
        name="cuisine_type",
        description="What is the cuisine type for the recipe? \
                     Answer in 1 word max in lowercase"
    )
    preparation_time_schema = ResponseSchema(
        name="preparation_time",
        description="How much time in minutes do I need to prepare the recipe?\
                     Answer with an integer number, or null if unknown",
        type="integer",
    )
    difficulty_schema = ResponseSchema(
        name="difficulty",
        description="How difficult is this recipe?\
                     Answer with one of these values: easy, normal, hard, very-hard"
    )
    ingredients_schema = ResponseSchema(
        name="ingredients",
        description="Give me a comma separated list of ingredients in lowercase or empty if unknown"
    )
    response_schemas = [cuisine_type_schema, preparation_time_schema, difficulty_schema, ingredients_schema]

    # get format instructions from responses
    output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
    format_instructions = output_parser.get_format_instructions()
    
    analysis_template = """\
    Interpret and evaluate a recipe whose title is: {title}
    and the description is: {description}
    
    {format_instructions}
    """
    prompt = ChatPromptTemplate.from_template(template=analysis_template)

    messages = prompt.format_messages(title=self.Recipe.Title, description=self.Recipe.Description, format_instructions=format_instructions)
    response = llm(messages)

    if debug:
        print("======ACTUAL PROMPT")
        print(messages[0].content)
        print("======RESPONSE")
        print(response.content)

    # populate analysis with results
    output_dict = output_parser.parse(response.content)
    self.CuisineType = output_dict['cuisine_type']
    self.Difficulty = output_dict['difficulty']
    self.Ingredients = output_dict['ingredients']
    if type(output_dict['preparation_time']) == int:
        self.PreparationTime = output_dict['preparation_time']

    return 1
}

}

The RunPythonAnalysis method is where the OpenAI stuff happens :). You can run it directly from your terminal for a given recipe:

do ##class(yummy.analysis.SimpleOpenAI).%New(##class(yummy.data.Recipe).%OpenId(12)).RunPythonAnalysis(1)

We will get an output like this:

USER>do ##class(yummy.analysis.SimpleOpenAI).%New(##class(yummy.data.Recipe).%OpenId(12)).RunPythonAnalysis(1)
======ACTUAL PROMPT
                    Interpret and evaluate a recipe whose title is: Folded Sushi - Alaska Roll
                    and the description is: Craving for some sushi but don't have a sushi roller? Try this easy version instead. It's super easy yet equally delicious!
[Video Recipe](https://www.youtube.com/watch?v=1LJPS1lOHSM)
# Ingredients
Serving Size:  \~5 sandwiches      
* 1 cup of sushi rice
* 3/4 cups + 2 1/2 tbsp of water
* A small piece of konbu (kelp)
* 2 tbsp of rice vinegar
* 1 tbsp of sugar
* 1 tsp of salt
* 2 avocado
* 6 imitation crab sticks
* 2 tbsp of Japanese mayo
* 1/2 lb of salmon  
# Recipe     
* Place 1 cup of sushi rice into a mixing bowl and wash the rice at least 2 times or until the water becomes clear. Then transfer the rice into the rice cooker and add a small piece of kelp along with 3/4 cups plus 2 1/2 tbsp of water. Cook according to your rice cookers instruction.
* Combine 2 tbsp rice vinegar, 1 tbsp sugar, and 1 tsp salt in a medium bowl. Mix until everything is well combined.
* After the rice is cooked, remove the kelp and immediately scoop all the rice into the medium bowl with the vinegar and mix it well using the rice spatula. Make sure to use the cut motion to mix the rice to avoid mashing them. After thats done, cover it with a kitchen towel and let it cool down to room temperature.
* Cut the top of 1 avocado, then slice into the center of the avocado and rotate it along your knife. Then take each half of the avocado and twist. Afterward, take the side with the pit and carefully chop into the pit and twist to remove it. Then, using your hand, remove the peel. Repeat these steps with the other avocado. Dont forget to clean up your work station to give yourself more space. Then, place each half of the avocado facing down and thinly slice them. Once theyre sliced, slowly spread them out. Once thats done, set it aside.
* Remove the wrapper from each crab stick. Then, using your hand, peel the crab sticks vertically to get strings of crab sticks. Once all the crab sticks are peeled, rotate them sideways and chop them into small pieces, then place them in a bowl along with 2 tbsp of Japanese mayo and mix until everything is well mixed.
* Place a sharp knife at an angle and thinly slice against the grain. The thickness of the cut depends on your preference. Just make sure that all the pieces are similar in thickness.
* Grab a piece of seaweed wrap. Using a kitchen scissor, start cutting at the halfway point of seaweed wrap and cut until youre a little bit past the center of the piece. Rotate the piece vertically and start building. Dip your hand in some water to help with the sushi rice. Take a handful of sushi rice and spread it around the upper left hand quadrant of the seaweed wrap. Then carefully place a couple slices of salmon on the top right quadrant. Then place a couple slices of avocado on the bottom right quadrant. And finish it off with a couple of tsp of crab salad on the bottom left quadrant. Then, fold the top right quadrant into the bottom right quadrant, then continue by folding it into the bottom left quadrant. Well finish off the folding by folding the top left quadrant onto the rest of the sandwich. Afterward, place a piece of plastic wrap on top, cut it half, add a couple pieces of ginger and wasabi, and there you have it.
                    
                    The output should be a markdown code snippet formatted in the following schema, including the leading and trailing "```json" and "```":
{
        "cuisine_type": string  // What is the cuisine type for the recipe?                                  Answer in 1 word max in lowercase
        "preparation_time": integer  // How much time in minutes do I need to prepare the recipe?                                    Anwer with an integer number, or null if unknown
        "difficulty": string  // How difficult is this recipe?                               Answer with one of these values: easy, normal, hard, very-hard
        "ingredients": string  // Give me a comma separated list of ingredients in lowercase or empty if unknown
}

                    
======RESPONSE
{
        "cuisine_type": "japanese",
        "preparation_time": 30,
        "difficulty": "easy",
        "ingredients": "sushi rice, water, konbu, rice vinegar, sugar, salt, avocado, imitation crab sticks, japanese mayo, salmon"
}

That looks good. It seems that our OpenAI prompt is capable of returning some useful information. Let's run the whole analysis class from the terminal:

set a = ##class(yummy.analysis.SimpleOpenAI).%New(##class(yummy.data.Recipe).%OpenId(12))
do a.Run()
zwrite a
USER>zwrite a
a=37@yummy.analysis.SimpleOpenAI  ; <OREF>
+----------------- general information ---------------
|      oref value: 37
|      class name: yummy.analysis.SimpleOpenAI
| reference count: 2
+----------------- attribute values ------------------
|        CuisineType = "japanese"
|         Difficulty = "easy"
|        Ingredients = "sushi rice, water, konbu, rice vinegar, sugar, salt, avocado, imitation crab sticks, japanese mayo, salmon"
|    PreparationTime = 30
|             Reason = "It seems to be a japanese recipe!. You don't need too much time to prepare it"
|              Score = 3
+----------------- swizzled references ---------------
|           i%Recipe = ""
|           r%Recipe = "30@yummy.data.Recipe"
+-----------------------------------------------------

Analyzing all the recipes!

Naturally, you would like to run the analysis on all the recipes we have loaded.

You can analyze a range of recipe IDs this way:

USER>do ##class(yummy.Utils).AnalyzeRange(1,10)
> Recipe 1 (1.755185s)
> Recipe 2 (2.559526s)
> Recipe 3 (1.556895s)
> Recipe 4 (1.720246s)
> Recipe 5 (1.689123s)
> Recipe 6 (2.404745s)
> Recipe 7 (1.538208s)
> Recipe 8 (1.33001s)
> Recipe 9 (1.49972s)
> Recipe 10 (1.425612s)

After that, have a look again at your recipe table and check the results:

select * from yummy_data.Recipe

image

I think I could give the Acorn Squash Pizza or the Korean Tofu Kimchi with Pork a try :). I will have to double-check at home anyway :)

Final notes

You can find the full example in https://github.com/isc-afuentes/recipe-inspector

With this simple example, we've learned how to use LLM techniques to add features or to analyze parts of your data in InterSystems IRIS.

With this starting point, you could think about:

  • Using InterSystems BI to explore and navigate your data using cubes and dashboards.
  • Create a webapp and provide some UI (e.g. Angular) for this; you could leverage packages like RESTForms2 to automatically generate REST APIs for your persistent classes.
  • What about storing whether or not you like a recipe, and then trying to determine whether you will like a new one? You could try an IntegratedML approach, or even an LLM approach, providing some example data and building a RAG (Retrieval Augmented Generation) use case.

What other things could you try? Let me know what you think!

Article Alex Woodhead · Jan 26, 2024 8m read

Consider the new business interest in applying generative AI to local, commercially sensitive private data and information, without exposure to public clouds. Like a match that needs the energy of striking to ignite, the tech lead's new "activation energy" challenge is to show how investing in GPU hardware could support novel competitive capabilities, capabilities that in turn reveal the use cases that provide new value and savings.

Sharpening this axe begins with a functional protocol for running LLMs on a local laptop.

Article Erin Spencer · Jan 24, 2024 9m read

The traditional use of an IRIS production is for an inbound adapter to receive input from an external source, send that input to an IRIS service, then have that service send that input through the production.

With a custom inbound adapter though, we can make an IRIS production do more. We can use an IRIS production to process data from our own database without any external trigger.

By using an IRIS production in this way, your data processing tasks get to leverage all the built-in features of an IRIS production, including:

Article Keren Skubach · Jan 22, 2024 2m read

Did you know that you can get JSON data directly from your SQL tables?

Let me introduce you to two useful SQL functions that are used to retrieve JSON data from SQL queries: JSON_ARRAY and JSON_OBJECT.
You can use these functions in the SELECT statement with other types of select items, and they can be specified in other locations where an SQL function can be used, such as in a WHERE clause.

The JSON_ARRAY function takes a comma-separated list of expressions and returns a JSON array containing those values.
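
For instance, from Embedded Python you could run such a query like this (a minimal sketch; the demo.Person table and its columns are made up for illustration):

# Hypothetical example: returning JSON directly from an SQL query
# via the Embedded Python iris.sql module.
import iris

rs = iris.sql.exec(
    "SELECT JSON_OBJECT('name': Name, 'age': Age) FROM demo.Person"
)
for row in rs:
    print(row[0])  # e.g. {"name":"Ann","age":42}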

Article Robbie Luman · Jan 12, 2024 7m read

With the advent of Embedded Python, a myriad of use cases are now possible from within IRIS, directly using Python libraries for more complex operations. One such operation is the use of natural language processing tools such as textual similarity comparison.

Setting up Embedded Python to Use the Sentence Transformers Library
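
As a taste, computing the similarity of two sentences with that library might look like this (a minimal sketch; it assumes the sentence-transformers package is installed, and the model name is just an example):

# Minimal textual-similarity sketch (pip install sentence-transformers)
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(
    ["Patient reports chest pain", "Chest discomfort reported by patient"],
    convert_to_tensor=True,
)
print(float(util.cos_sim(embeddings[0], embeddings[1])))  # close to 1.0 for similar sentences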

Article Seisuke Nakahashi · Jan 10, 2024 5m read

[Background]

The InterSystems IRIS family has a nice utility, ^SystemPerformance (also known as ^pButtons in Caché and Ensemble), which outputs database performance information into a readable HTML file. When you run ^SystemPerformance on IRIS for Windows, an HTML file is created that includes both our own performance log, mgstat, and the Windows performance log.

Article Nicholai Mitchko · Aug 12, 2020 2m read

Updated Jan 19th, 2023.

Hi all,

I want to share a quick little method you can use to enable ssl with a self signed certificate on your local development instance of IRIS/HealthShare. This enables you to test https-specific features such as OAuth without a huge lift.

1. Install OpenSSL

Windows     : Download from https://www.openssl.org or another prebuilt OpenSSL binary.

Debian Linux: $ sudo apt-get -y install openssl

RHEL        : $ sudo yum install openssl

 

2. Create a self-signed certificate pair. In your terminal (PowerShell, bash, zsh, etc.):
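
The exact command is cut off in this excerpt, but a typical self-signed pair can be produced with OpenSSL, for example via this small Python wrapper (file names and subject are illustrative):

# Hypothetical helper: generate a self-signed certificate pair with OpenSSL,
# equivalent to running the openssl command directly in your shell.
import subprocess

subprocess.run(
    [
        "openssl", "req", "-x509", "-newkey", "rsa:2048",
        "-keyout", "selfsigned.key", "-out", "selfsigned.crt",
        "-days", "365", "-nodes",   # -nodes: no passphrase on the key
        "-subj", "/CN=localhost",   # subject is illustrative
    ],
    check=True,
)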

Article Pravin Barton · Sep 1, 2022 4m read

Say I've been developing a web application that uses IRIS as the back end. I've been working on it with unauthenticated access. It's getting to the point where I would like to deploy it to users, but first I need to add authentication. Rather than using the default IRIS password authentication, I'd like users to sign in with my organization's Single Sign On, or some other popular identity provider like Google or GitHub. I've read that OpenID Connect is a common authentication standard, and it's supported by IRIS. What is the simplest way to get up and running?

Example 1: a plain CSP app

Article Adel Elsayed · Jan 8, 2024 9m read

InterSystems IRIS Document Database (DocDB) offers a flexible and dynamic approach to managing database data. DocDB embraces the power of JSON (JavaScript Object Notation), providing a schema-less environment for storing and retrieving data.

It is a powerful tool that enables developers to bypass a ton of boilerplate code when interacting with existing applications and handling serialization, pagination, and integration. The seamless flow of DocDB with Interoperability REST services and operations is a big leap in API production and management.

The full DocDB documentation is here. In the context of this article, I will showcase a use case for which DocDB is a perfect fit.
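
To give a flavor of the API, here is a minimal sketch from Embedded Python (the database name and document are made up; "%"-prefixed ObjectScript methods are exposed to Python with a leading underscore):

# Minimal DocDB sketch; names are illustrative, not from the article.
import iris

db = iris.cls("%DocDB.Database")._CreateDatabase("demo.People")
doc = iris.cls("%DynamicObject")._FromJSON('{"name": "Ann", "city": "Boston"}')
db._SaveDocument(doc)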

Article Ben Spead · Dec 20, 2023 11m read

You may not realize it, but your InterSystems Login Account can be used to access a very wide array of InterSystems services that help you learn and use InterSystems IRIS and other InterSystems technologies more effectively. Continue reading to learn how to unlock new technical knowledge and tools using your InterSystems Login account. Also, after reading, please participate in the poll at the bottom, so we can see how this article was useful to you!

What is an InterSystems Login Account? 

Article Lily Taub · Mar 19, 2019 9m read

Intro

Most server-client communication on the web is based on a request and response structure. The client sends a request to the server and the server responds to this request. The WebSocket protocol provides a two-way channel of communication between a server and client, allowing servers to send messages to clients without first receiving a request. For more information on the WebSocket protocol and its implementation in InterSystems IRIS, see the links below.

This tutorial is an update of "Asynchronous Websockets -- a quick tutorial" for Caché 2016.2+ and InterSystems IRIS 2018.1+.

Asynchronous vs Synchronous Operation

In InterSystems IRIS, a WebSocket connection can be implemented synchronously or asynchronously. How the WebSocket connection between client and server operates is determined by the "SharedConnection" property of the %CSP.WebSocket class.

  • SharedConnection=1: Asynchronous operation

  • SharedConnection=0: Synchronous operation

A WebSocket connection between a client and a server hosted on an InterSystems IRIS instance includes a connection between the IRIS instance and the Web Gateway. In synchronous WebSocket operation, the connection uses a private channel. In asynchronous WebSocket operation, a group of WebSocket clients share a pool of connections between the IRIS instance and the Web Gateway. The advantage of an asynchronous implementation of WebSockets stands out when one has many clients connecting to the same server, as this implementation does not require that each client be handled by an exclusive connection between the Web Gateway and IRIS instance.

In this tutorial we will be implementing WebSockets asynchronously. Thus, all open chat windows share a pool of connections between the Web Gateway and the IRIS instance that hosts the WebSocket server class.

Chat Application Overview

The β€œhello world” of WebSockets is a chat application in which a user can send messages that are broadcast to all users logged into the application. In this tutorial, the components of the chat application include:

  • Server: implemented in a class that extends %CSP.WebSocket

  • Client: implemented by a CSP page

The implementation of this chat application will achieve the following:

  • Users can broadcast messages to all open chat windows

  • Online users will appear in the "Online Users" list of all open chat windows

  • Users can change their username by composing a message starting with the "alias" keyword; this message will not be broadcast but will update the "Online Users" list

  • When users close their chat window they will be removed from the "Online Users" list

For the chat application source code, please visit this GitHub repository.

The Client

The client side of our chat application is implemented by a CSP page containing the styling for the chat window, the declaration of the WebSocket connection, WebSocket events and methods that handle communication to and from the server, and helper functions that package messages sent to the server and process incoming messages.

First, we'll look at how the application initiates the WebSocket connection using the JavaScript WebSocket API.

    ws = new WebSocket(((window.location.protocol === "https:")? "wss:":"ws:")
                    + "//"+ window.location.host + "/csp/user/Chat.Server.cls");

new creates a new instance of the WebSocket class. This opens a WebSocket connection to the server using the "wss" scheme (which indicates the use of TLS for the WebSocket communication channel) or the "ws" scheme. The server is specified by the web server port number and host name of the instance that defines the Chat.Server class (this information is contained in the window.location.host variable). The name of our server class (Chat.Server.cls) is included in the WebSocket opening URI as a GET request for the resource on the server.
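
Incidentally, you can smoke-test such an endpoint outside the browser too; for example, with a minimal Python client (assuming the third-party websockets package; host, port, and path must match your own instance):

# Minimal smoke test for the chat endpoint (pip install websockets).
import asyncio
import base64
import json

import websockets

async def main():
    uri = "ws://localhost:52773/csp/user/Chat.Server.cls"  # adjust to your instance
    async with websockets.connect(uri) as ws:
        message = {"Message": base64.b64encode(b"hello").decode(), "Author": "tester"}
        await ws.send(json.dumps(message))
        print(await ws.recv())  # expect a JSON status update or chat broadcast

asyncio.run(main())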

The ws.onopen event fires when the WebSocket connection is successfully established, transitioning from a connecting to an open state.

    ws.onopen = function(event){
        document.getElementById("headline").innerHTML = "CHAT - CONNECTED";
    };

This event updates the header of the chat window to indicate that the client and server are connected.

Sending Messages

The action of a user sending a message triggers the send function. This function serves as a wrapper around the ws.send method, which contains the mechanics for sending the client message to the server over the WebSocket connection.

function send() {
    var line = $("#inputline").val();
    if (line.substr(0, 5) == "alias") {
        alias = line.split(" ")[1];
        if (alias == "") {
            alias = "default";
        }
        var data = {};
        data.User = alias;
        ws.send(JSON.stringify(data));
    } else {
        var msg = btoa(line);
        var data = {};
        data.Message = msg;
        data.Author = alias;
        if (ws && msg != "") {
            ws.send(JSON.stringify(data));
        }
    }
    $("#inputline").val("");
}

send packages the information to be sent to the server in a JSON object, defining key/value pairs according to the type of information being sent (alias update or general message). btoa translates the contents of a general message into a base-64 encoded ASCII string.

Receiving Messages

When the client receives a message from the server, the ws.onmessage event is triggered.

ws.onmessage = function(event) {
    var d = JSON.parse(event.data);
    if (d.Type == "Chat") {
        $("#chat").append(wrapmessage(d));
        $("#chatdiv").animate({ scrollTop: $("#chatdiv").prop("scrollHeight") }, 1000);
    } else if (d.Type == "userlist") {
        var ul = document.getElementById("userlist");
        while (ul.firstChild) { ul.removeChild(ul.firstChild) };
        $("#userlist").append(wrapuser(d.Users));
    } else if (d.Type == "Status") {
        document.getElementById("headline").innerHTML = "CHAT - connected - " + d.WSID;
    }
};

Depending on the type of message the client receives ("Chat", "userlist", or "Status"), the onmessage event calls wrapmessage or wrapuser to populate the appropriate sections of the chat window with the incoming data. If the incoming message is a status update, the status header of the chat window is updated with the WebSocket ID, which identifies the bidirectional WebSocket connection associated with the chat window.

Additional Client Components

An error in the communication between the client and the server triggers the WebSocket onerror method, which issues an alert that notifies us of the error and updates the page's status header.

ws.onerror = function(event) {
    document.getElementById("headline").innerHTML = "CHAT - error";
    alert("Received error");
};

The onclose method is triggered when the WebSocket connection between the client and server is closed and updates the status header.

ws.onclose = function(event) {
    ws = null;
    document.getElementById("headline").innerHTML = "CHAT - disconnected";
};

The Server

The server side of the chat application is implemented by the Chat.Server class, which extends %CSP.WebSocket. Our server class inherits various properties and methods from %CSP.WebSocket, a few of which I'll discuss below. Chat.Server also implements custom methods to process messages from and broadcast messages to the client(s).

Before Starting the Server

OnPreServer() is executed before the WebSocket server is created and is inherited from the %CSP.WebSocket class.

Method OnPreServer() As %Status
{
    set ..SharedConnection=1
    if (..WebSocketID '= ""){ 
        set ^Chat.WebSocketConnections(..WebSocketID)=""
    } else {
        set ^Chat.Errors($INCREMENT(^Chat.Errors),"no websocketid defined")=$HOROLOG 
    }
    Quit $$$OK
}

This method sets the SharedConnection property to 1, indicating that our WebSocket connection will be asynchronous and supported by multiple processes that define connections between the InterSystems IRIS instance and the Web Gateway. The SharedConnection property can only be changed in OnPreServer(). OnPreServer() also stores the WebSocket ID associated with the client in the ^Chat.WebSocketConnections global.

The Server Method

The main body of logic executed by the server is contained in the Server() method.

Method Server() As %Status
{
    do ..StatusUpdate(..WebSocketID)
    for {		
        set data=..Read(.size,.sc,1) 
        if ($$$ISERR(sc)){
            if ($$$GETERRORCODE(sc)=$$$CSPWebSocketTimeout) {
                //$$$DEBUG("no data")
            }
            if ($$$GETERRORCODE(sc)=$$$CSPWebSocketClosed){
                kill ^Chat.WebSocketConnections(..WebSocketID)
                do ..RemoveUser($g(^Chat.Users(..WebSocketID)))	
                kill ^Chat.Users(..WebSocketID)
                quit  // Client closed WebSocket
            }
        } else {
            if (data["User") {
                do ..AddUser(data,..WebSocketID)
            } else {
                set mid=$INCREMENT(^Chat.Messages)
                set ^Chat.Messages(mid)=data
                do ..ProcessMessage(mid)
            }
        }
    }
    Quit $$$OK
}

This method reads incoming messages from the client (using the Read method of the %CSP.WebSocket class), adds the received JSON objects to the ^Chat.Messages global, and calls ProcessMessage() to forward the message to all other connected chat clients. When a user closes their chat window (thus terminating the WebSocket connection to the server), the Server() method's call to Read returns an error code that evaluates to the macro $$$CSPWebSocketClosed, and the method proceeds to handle the closure accordingly.

Processing and Distributing Messages

ProcessMessage() adds metadata to the incoming chat message and calls SendData(), passing the message as a parameter.

ClassMethod ProcessMessage(mid As %String)
{
    set msg = ##class(%DynamicObject).%FromJSON($GET(^Chat.Messages(mid)))
    set msg.Type="Chat"
    set msg.Sent=$ZDATETIME($HOROLOG,3)
    do ..SendData(msg)
}

ProcessMessage() retrieves the JSON formatted message from the ^Chat.Messages global and converts it to an InterSystems IRIS object using the %DynamicObject class's %FromJSON method. This allows us to easily edit the data before we forward the message to all connected chat clients. We add a Type attribute with the value "Chat", which the client uses to determine how to deal with the incoming message. SendData() sends the message out to all the other connected chat clients.

ClassMethod SendData(data As %DynamicObject)
{
    set c = ""
    for {
        set c = $order(^Chat.WebSocketConnections(c))
        if c="" Quit
        set ws = ..%New()
        set sc = ws.OpenServer(c)
        if $$$ISERR(sc) { do ..HandleError(c,"open") } 
        set sc = ws.Write(data.%ToJSON())
        if $$$ISERR(sc) { do ..HandleError(c,"write") }
    }
}

SendData() converts the InterSystems IRIS object back into a JSON string (data.%ToJSON()) and pushes the message to all the chat clients. SendData() gets the WebSocket ID associated with each client-server connection from the ^Chat.WebSocketConnections global and uses the ID to open a WebSocket connection via the OpenServer method of the %CSP.WebSocket class. We can use the OpenServer method to do this because our WebSocket connections are asynchronous: we pull from the existing pool of IRIS-Web Gateway processes and assign one the WebSocket ID that identifies the server's connection to a specific chat client. Finally, the Write() method of %CSP.WebSocket pushes the JSON string representation of the message to the client.

Conclusion

This chat application demonstrates how to establish WebSocket connections between a client and a server hosted by InterSystems IRIS. To continue reading about the protocol and its implementation in InterSystems IRIS, take a look at the links in the introduction.

Article Eduard Lebedyuk · Dec 19, 2023 8m read

If you're running IRIS in a mirrored configuration for HA in Azure, the question of providing a Mirror VIP (Virtual IP) becomes relevant. Virtual IP offers a way for downstream systems to interact with IRIS using one IP address. Even when a failover happens, downstream systems can reconnect to the same IP address and continue working.

The main issue when deploying to Azure is that the IRIS VIP requires IRIS to be, essentially, a network admin, per the docs.

To get HA, IRIS mirror members must be deployed to different availability zones in one subnet (which is possible in Azure, as subnets can span several zones). One of the solutions might be load balancers, but they, of course, cost extra, and you need to administer them.

In this article, I would like to provide a way to configure a Mirror VIP without using the load balancers suggested in most other Azure reference architectures.

Architecture

image

We have a subnet running across two availability zones (I simplify here; of course, you'll probably have public subnets, an arbiter in another AZ, and so on, but this is the absolute minimum needed to demonstrate the approach). The subnet's CIDR is 10.0.0.0/24, which means it is allocated IPs 10.0.0.1 to 10.0.0.255. As Azure reserves the first four addresses and the last address, we can use 10.0.0.4 to 10.0.0.254.

We will implement both public and private VIPs at the same time. If you want, you can implement only the private VIP.

Idea

Virtual Machines in Azure have Network Interfaces. These Network Interfaces have IP Configurations. An IP configuration is a combination of public and/or private IPs, and it's routed automatically to the Virtual Machine associated with the Network Interface, so there is no need to update routes. What we'll do, during a mirror failover event, is delete the VIP IP configuration from the old primary and create it on the new primary. Altogether, these operations take 5 to 20 seconds for a private-only VIP, and from 5 seconds up to a minute for a public/private VIP combination.

Implementing VIP

  1. Allocate an External IP to use as a public VIP. Skip this step if you want a private VIP only. If you do allocate the VIP, it must reside in the same resource group and the same region, and be available in all zones used by the primary and backup. You'll need the External IP name.
  2. Decide on a private VIP value. I will use the last available IP: 10.0.0.254.
  3. On each VM, allocate the private VIP IP address on the eth0:1 network interface.
cat << EOFVIP >> /etc/sysconfig/network-scripts/ifcfg-eth0:1
DEVICE=eth0:1
ONPARENT=on
IPADDR=10.0.0.254
PREFIX=32
EOFVIP
sudo chmod -x /etc/sysconfig/network-scripts/ifcfg-eth0:1
sudo ifconfig eth0:1 up

If you just want to test, run this instead (but it won't survive a system restart):

sudo ifconfig eth0:1 10.0.0.254

Depending on the OS, you might need to run:

ifconfig eth0:1
systemctl restart network
  4. For each VM, enable a System or User assigned identity.
  5. For each identity, assign the permissions to modify Network Interfaces. To do that, create a custom role; the minimum permission set in that case would be:
{
  "roleName": "custom_nic_write",
  "description": "IRIS Role to assign VIP",
  "assignableScopes": [
    "/subscriptions/{subscriptionid}/resourceGroups/{resourcegroupid}/providers/Microsoft.Network/networkInterfaces/{nicid_primary}",
    "/subscriptions/{subscriptionid}/resourceGroups/{resourcegroupid}/providers/Microsoft.Network/networkInterfaces/{nicid_backup}"
  ],
  "permissions": [
    {
      "actions": [
        "Microsoft.Network/networkInterfaces/write",
        "Microsoft.Network/networkInterfaces/read"
      ],
      "notActions": [],
      "dataActions": [],
      "notDataActions": []
    }
  ]
}

For non-production environments you might use a Network Contributor system role on the resource group, but that is not a recommended approach as Network Contributor is a very broad role.

  6. Each network interface in Azure can have a set of IP configurations. When a current mirror member becomes primary, we'll use a ZMIRROR callback to delete the VIP IP configuration on the other mirror member's network interface and create a VIP IP configuration pointing at itself:

Here are the Azure CLI commands for both nodes assuming rg resource group, vip IP configuration, and my_vip_ip External IP:

az login --identity
az network nic ip-config delete --resource-group rg --name vip --nic-name mirrorb280_z2
az network nic ip-config create --resource-group rg --name vip --nic-name mirrora290_z1 --private-ip-address 10.0.0.254 --public-ip-address my_vip_ip

and:

az login --identity
az network nic ip-config delete --resource-group rg --name vip --nic-name mirrora290_z1
az network nic ip-config create --resource-group rg --name vip --nic-name mirrorb280_z2 --private-ip-address 10.0.0.254 --public-ip-address my_vip_ip

And the same code as a ZMIRROR routine:

ROUTINE ZMIRROR

NotifyBecomePrimary() PUBLIC {
    #include %occMessages
    set rg = "rg"
    set config = "vip"
    set privateVIP = "10.0.0.254"
    set publicVIP = "my_vip_ip"

    set nic = "mirrora290_z1"
    set otherNIC = "mirrorb280_z2"
    if ##class(SYS.Mirror).DefaultSystemName() [ "MIRRORB" {
        // we are on mirrorb node, swap
        set $lb(nic, otherNIC)=$lb(otherNIC, nic)
    }

    set rc1 = $zf(-100, "/SHELL", "export", "AZURE_CONFIG_DIR=/tmp", "&&", "az", "login", "--identity")
    set rc2 = $zf(-100, "/SHELL", "export", "AZURE_CONFIG_DIR=/tmp", "&&", "az", "network", "nic", "ip-config", "delete", "--resource-group", rg, "--name", config, "--nic-name", otherNIC)
    set rc3 = $zf(-100, "/SHELL", "export", "AZURE_CONFIG_DIR=/tmp", "&&", "az", "network", "nic", "ip-config", "create", "--resource-group", rg, "--name", config, "--nic-name",      nic,  "--private-ip-address", privateVIP, "--public-ip-address", publicVIP)
    quit 1
}

The routine is the same for both mirror members; we just swap the NIC names based on the current mirror member's name. You might not need export AZURE_CONFIG_DIR=/tmp, but sometimes az tries to write credentials into the root home dir, which might fail. Instead of /tmp, it's better to use the IRIS user's home subdirectory (or you might not even need that environment variable, depending on your setup).

And if you want to use Embedded Python, here's Azure Python SDK code:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import NetworkInterface, NetworkInterfaceIPConfiguration, PublicIPAddress

sub_id = "AZURE_SUBSCRIPTION_ID"
client = NetworkManagementClient(credential=DefaultAzureCredential(), subscription_id=sub_id)

resource_group_name = "rg"
nic_name = "mirrora290_z1"
other_nic_name = "mirrorb280_z2"
public_ip_address_name = "my_vip_ip"
private_ip_address = "10.0.0.254"
vip_configuration_name = "vip"


# remove old VIP configuration
nic: NetworkInterface = client.network_interfaces.get(resource_group_name, other_nic_name)
ip_configurations_old_length = len(nic.ip_configurations)
nic.ip_configurations[:] = [ip_configuration for ip_configuration in nic.ip_configurations if
                            ip_configuration.name != vip_configuration_name]

if ip_configurations_old_length != len(nic.ip_configurations):
    poller = client.network_interfaces.begin_create_or_update(
        resource_group_name,
        other_nic_name,
        nic
    )
    nic_info = poller.result()

# add new VIP configuration
nic: NetworkInterface = client.network_interfaces.get(resource_group_name, nic_name)
ip: PublicIPAddress = client.public_ip_addresses.get(resource_group_name, public_ip_address_name)
vip = NetworkInterfaceIPConfiguration(name=vip_configuration_name,
                                      private_ip_address=private_ip_address,
                                      private_ip_allocation_method="Static",
                                      public_ip_address=ip,
                                      subnet=nic.ip_configurations[0].subnet)
nic.ip_configurations.append(vip)

poller = client.network_interfaces.begin_create_or_update(
    resource_group_name,
    nic_name,
    nic
)
nic_info = poller.result()

Initial start

NotifyBecomePrimary is also called automatically on system start (after mirror reconnection), but if you want your non-mirrored environments to acquire the VIP the same way, use a ZSTART routine:

SYSTEM() PUBLIC {
  if '$SYSTEM.Mirror.IsMember() {
    do NotifyBecomePrimary^ZMIRROR()
  }
  quit 1
}

Conclusion

And that's it! We change the IP configuration to point at the current mirror primary whenever the NotifyBecomePrimary event happens.

Article Luis Angel Pérez Ramos · Oct 16, 2023 10m read

We resume our series of articles on the FHIR Adapter tool available to HealthShare HealthConnect and InterSystems IRIS users.

In the previous articles we presented the small application on which we set up our workshop and showed the architecture deployed in our IRIS instance after installing the FHIR Adapter. In today's article we will see an example of how to perform one of the most common CRUD (Create - Read - Update - Delete) operations, the read operation, and we will do it by retrieving a Resource.

What is a Resource?

Article Luis Angel Pérez Ramos · Oct 11, 2023 3m read

We return to our example of using the FHIR Adapter. In this article we are going to review how we can configure it in our IRIS instances and what the result of the installation is.

The steps taken to configure the project are the same as indicated in the official documentation, you can review them directly here. Well, let's get to work!

Installation

Article Luis Angel Pérez Ramos · Oct 10, 2023 3m read

Surely you have all heard about FHIR as the panacea and solution to all interoperability and compatibility problems between systems. Right here we can see one of its classic defenders holding a FHIR resource in his hand and enjoying it immensely:

But for the rest of us mortals, let's start with a small introduction.

What is FHIR?

Article Lorenzo Scalese · Nov 10, 2023 13m read

Hi, developers!

Currently, I'm working on a project that requires highly dynamic event management. In the context of the Java programming language, my first instinct would be to opt for the "Observer Pattern", which is an approach to managing interactions between objects by establishing a notification mechanism. It allows multiple observers to react to changes in the state of a subject autonomously, promoting code flexibility and modularity. If you are not familiar with this design pattern, check out Wikipedia to find more information about it.


While it's natural and commonly used in certain programming languages such as Java and C++, in ObjectScript it's quite a different story.
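
For reference, the pattern itself is tiny; in Python it can be sketched like this (class names are generic):

# Generic Observer pattern sketch: observers register with a subject
# and are notified whenever its state changes.
class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        for observer in self._observers:
            observer.update(event)

class LogObserver:
    def update(self, event):
        print(f"received event: {event}")

subject = Subject()
subject.attach(LogObserver())
subject.notify("state changed")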

InterSystems Official Andreas Dieckow · Jan 18, 2024 2m read

For your convenience, InterSystems is posting the typical install steps for the operating systems supported by InterSystems IRIS.

For Microsoft Windows, please consult the InterSystems product documentation.

The IRIS installer will detect if a web server is installed on the same machine, which gives you the option to automatically have the web server configured.

All Apache installations will require sudo (recommended) or root permission to install the web server. This requirement supports recommended best practices. 

Article Heloisa Paiva · Sep 22, 2022 4m read

Here you'll find a simple program that uses Python in an IRIS environment and another simple program that uses ObjectScript in a Python environment. Also, I'd like to share a few of the troubles I went through while learning to implement this.

Python in IRIS environment

Let's say, for example, you're in an IRIS environment and you want to solve a problem that you find easy, or more efficient with Python.

You can simply change the environment: create your method as any other and, at the end of its name and specifications, add [ Language = python ]:
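
A minimal sketch of such a method (the class and method names are made up; the body is plain Python):

Class Demo.PyUtils Extends %RegisteredObject
{

/// The body runs as Python inside IRIS; the return value is marshalled back.
ClassMethod Reverse(text As %String) As %String [ Language = python ]
{
    return text[::-1]
}

}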

Article sween · Oct 20, 2023 6m read

image

This article will cover turning over control of provisioning the InterSystems Kubernetes Operator, and starting your journey managing your own "Cloud" of InterSystems Solutions through Git Ops practices. This deployment pattern is also the fulfillment path for the PID^TOO||| FHIR Breathing Identity Resolution Engine.

Git Ops

I encourage you to do your own research or ask your favorite LLM about Git Ops, but I can paraphrase it here for you as we understand it. Git Ops is an alternative deployment paradigm, where the Kubernetes Cluster itself is "pulling" updates from manifests that reside in source control to manage the state of your solutions, making "Git" an integral part of the name.

Prerequisites

  • Provision a Kubernetes cluster; this has been tested on EKS, GKE, and MicroK8s clusters
  • Provision a GitLab, GitHub, or other Git repo that is accessible from your Kubernetes cluster

Argo CD

The star of our show here is Argo CD, which provides a declarative approach to continuous delivery with a ridiculously well done UI. Getting it going on your cluster is a snap with just a couple of keystrokes.

kubectl create namespace argocd
kubectl apply -n argocd -f \
https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Let's get logged into the UI for Argo CD on your Kubernetes cluster. To do this, you need to grab the secret that was created for the UI and set up a port forward to make it accessible on your system.

Grab Secret
Decode it and put it on your clipboard; for example, kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d (the secret name assumes a default Argo CD install). image

Port Forward
Redirect port 4000 (or whatever) to your localhost, e.g. kubectl -n argocd port-forward svc/argocd-server 4000:443 (the service name assumes a default install).

image

UI
Navigate to https://0.0.0.0:4000, supply the secret to the login screen, and log in.

image

InterSystems Kubernetes Operator (IKO)

Instructions for obtaining the IKO Helm chart are in the documentation itself; once you get it, check it in to your Git repo on a feature branch. I would provide a sample repo for this, but unfortunately I can't do that without violating redistribution terms, as the chart does not appear to be available in a public repository.

Create yourself a feature branch in your git repository and unpack the IKO Helm chart into a single directory. As below, this is iko/iris_operator_amd-3.5.48.100 off the root of the repo.

On feature/iko branch as an example:

├── iko
│   ├── AIKO.pdf
│   └── iris_operator_amd-3.5.48.100
│       ├── chart
│       │   └── iris-operator
│       │       ├── Chart.yaml
│       │       ├── templates
│       │       │   ├── apiregistration.yaml
│       │       │   ├── appcatalog-user-roles.yaml
│       │       │   ├── cleaner.yaml
│       │       │   ├── cluster-role-binding.yaml
│       │       │   ├── cluster-role.yaml
│       │       │   ├── deployment.yaml
│       │       │   ├── _helpers.tpl
│       │       │   ├── mutating-webhook.yaml
│       │       │   ├── service-account.yaml
│       │       │   ├── service.yaml
│       │       │   ├── user-roles.yaml
│       │       │   └── validating-webhook.yaml
│       │       └── values.yaml

IKO Setup
Create the isc namespace, and add a secret for containers.intersystems.com to it.

kubectl create ns isc

kubectl create secret docker-registry \
pidtoo-pull-secret --namespace isc \
--docker-server=https://containers.intersystems.com \
--docker-username='ron@pidtoo.com' \
--docker-password='12345'

This concludes the setup for IKO and lets us delegate it entirely to Argo CD through Git Ops.

Connect Git to Argo CD

This is a simple step in the Argo CD UI to connect the repo. This step ONLY "connects" the repo; further configuration will live in the repo itself.

image

Declare Branch to Argo CD

Configure Kubernetes to poll the branch through the Argo CD values.yml in the Argo CD chart. Where most of these things live in the Git repo is really up to you, but an opinionated way to declare things in your repo is the "App of Apps" paradigm.

Consider creating the folder structure below; the files that need to be created are annotated as a table of contents:

├── argocd
│   ├── app-of-apps
│   │   ├── charts
│   │   │   └── iris-cluster-collection
│   │   │       ├── Chart.yaml  ## Chart
│   │   │       ├── templates
│   │   │       │   ├── iris-operator-application.yaml  ## IKO As Application
│   │   │       └── values.yaml ## Application Chart Values
│   │   └── cluster-seeds
│   │       ├── seed.yaml  ## Cluster Seed

Chart

apiVersion: v1
description: 'pidtoo IRIS cluster'
name: iris-cluster-collection
version: 1.0.0
appVersion: 3.5.48.100
maintainers:
  - name: intersystems
    email: support@intersystems.com  

IKO As Application

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: iko
  namespace: argocd
spec:
  destination:
    namespace: isc
    server: https://kubernetes.default.svc
  project: default
  source:
    path: iko/iris_operator_amd-3.5.48.100/chart/iris-operator
    repoURL: {{ .Values.repoURL }}
    targetRevision: {{ .Values.targetRevision }}
  syncPolicy:
    automated: {}
    syncOptions:
    - CreateNamespace=true

IKO Application Chart Values

targetRevision: main
repoURL: https://github.com/pidtoo/gitops_iko.git

Cluster Seed

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitops-iko-seed
  namespace: argocd
  labels:
    isAppOfApps: 'true'
spec:
  destination:
    namespace: isc
    server: https://kubernetes.default.svc
  project: default
  source:
    path: argocd/app-of-apps/charts/iris-cluster-collection
    repoURL: https://github.com/pidtoo/gitops_iko.git
    targetRevision: main
  syncPolicy:
    automated: {}
    syncOptions:
    - CreateNamespace=true

Seed the Cluster!

This is the final manual interaction with your Argo CD/IKO cluster applications; the rest is up to Git!

kubectl apply -n argocd -f argocd/app-of-apps/cluster-seeds/seed.yaml

Merge to Main

OK, this is where we see how we did. In the UI, you should immediately start seeing Argo CD applications coming to life.

The apps view:
image

InterSystems Kubernetes Operator View
image
image

Welcome to GitOps with the InterSystems Kubernetes Operator!

Git Demos are the Best! - Live October 19, 2023

Ron Sweeney, Principal Architect, Integration Required, LLC (PID^TOO)
Dan McCracken, COO, Devsoperative, INC

Article Lorenzo Scalese · Aug 16, 2023 11m read

Hi developers!

Today I would like to address a subject that has given me a hard time. I am sure this must have been the case for quite a number of you already (the so-called "bottleneck"). Since this is a broad topic, this article will only focus on identifying incoming HTTP requests that could be causing slowness issues. I will also provide you with a small tool I have developed to help identify them.

Our software is becoming more and more complex, processing a large number of requests from different sources, be it front-end or third-party back-end applications. To ensure optimal performance, it is essential to have a logging system capable of taking a few key measurements, such as the response time, the number of global references, and the number of lines of code executed for each HTTP response. As part of my work, I get involved in the development of EMR software as well as in incident analysis. Since user load comes mostly from HTTP requests (REST API or CSP applications), the need for this type of measurement when generalized slowness issues occur has become obvious.
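
As an illustration of the kind of measurement involved, you can read such counters around a block of code; here is a rough sketch using %SYSTEM.Process from Embedded Python (the profiled function is a hypothetical stand-in):

# Rough sketch: measure elapsed time, global references, and lines executed
# around a block of work, using %SYSTEM.Process counters.
import time
import iris

def handle_request():
    pass  # hypothetical stand-in for the code being profiled

proc = iris.cls("%SYSTEM.Process")
t0, g0, l0 = time.perf_counter(), proc.GlobalReferences(), proc.LinesExecuted()

handle_request()

print("seconds:", time.perf_counter() - t0)
print("global refs:", proc.GlobalReferences() - g0)
print("lines executed:", proc.LinesExecuted() - l0)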
