# Deployment

0 Followers · 200 Posts

Software deployment is all of the activities that make a software system available for use. The general deployment process consists of several interrelated activities with possible transitions between them. 

Question Alexei Yugov · Nov 15, 2024

Hello. On some hosts, IRIS in containers.intersystems.com/intersystems/iris-community:2024.1 crashes with a core dump.

auser:~$ docker run --rm -it --entrypoint=""  containers.intersystems.com/intersystems/iris-community:2024.1 bash
irisowner@6170dcdbe77c:~$ iris start IRIS
Illegal instruction (core dumped)

Coredump stack:

(gdb) bt
#0  0x000055688cf44743 in osregopen ()
#1  0x000055688cf4060a in ListConfig ()
#2  0x000055688cf3dcd7 in main ()

Are there any hardware requirements for the IRIS Docker container? Or maybe some specific settings?
Host details:
 

2
1 212
Question Alexander Rischke · Nov 22, 2024

Good morning dear community,

This is my first post in this community. Let's see how this turns out.
I have a question about the InterSystems Kubernetes Operator (IKO) and the deployment of Web Gateways.
I am responsible for hosting and deploying the apps. In the future we are planning to host our application in a Kubernetes cluster, and I am using the IKO for this.
I am deploying Web Gateways as separate pods for external access, and as sidecar containers for internal access, such as the Management Portal.

0
0 94
Discussion Joel Solon · Nov 11, 2024

The IRIS Installation Guide for Linux, Installation Directory section, says "Do not choose the /home directory, any of its subdirectories, or the /usr/local/etc/irissys directory." but there are no suggestions or any default.

What are your opinions on this? For example, I see that IRIS in a Docker container is installed in /usr/irissys. I'm wondering why that directory was chosen.

The official Linux filesystem docs say:

4
0 298
Article Evgeny Shvarov · Apr 22, 2024 3m read

Hi folks!

Often, when we develop commercial solutions, there is a necessity to deploy solutions without source code, e.g., in order to preserve the IP.

One of the ways how this can be achieved is to use InterSystems Package Manager.

As an illustration, I asked Midjourney to paint "intellectual property of software".

How can this be achieved with IPM?

In fact, it is very simple: just add the Deploy="true" attribute to the Resource element in your module.xml manifest (Documentation).
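For example, a deployed class resource looks like this (a minimal sketch; the resource name dc.Sample.MyClass is hypothetical):

<Resource Name="dc.Sample.MyClass.CLS" Deploy="true"/>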

I decided to provide the simplest possible example to illustrate how it works, and also to give you a development environment template to let you start building and deploying your own modules without source code. Here we go!

2
2 395
Question Charles TETU · Sep 12, 2024

Hi there,

I'm discovering IRIS and I need to POC the solution, with a constraint: containerization. I'm used to deploying my apps in a Swarm cluster, and all my bind volumes are written on a GlusterFS volume. The problem here: when I start my stack, the first log line is:

[WARN] ISC_DATA_DIRECTORY is located on a mount of type 'fuse.glusterfs' which is not supported, consider a named volume for '/iris_conf'

And of course the deployment fails. Any idea? How can I provide my data on all my cluster nodes? I read this article: https://community.intersystems.com/post/deploying-sharded-cluster-docke…

0
0 102
Article Ariel Glikman · Sep 2, 2024 1m read

Say I want to uninstall the IKO - all I need to do is:

> helm uninstall intersystems

What happens behind the scenes is that helm will uninstall what was installed when you ran:

> helm install intersystems <relative/path/to/iris-operator>

In some sense this is symmetric to when we ran install - however, with a different image.

You'll notice that when you install, it knows what image to take from:

operator:
  registry: containers.intersystems.com
  repository: intersystems/iris-operator-amd
  tag: 3.7.13.100

For uninstall the image to take note of is:

0
0 225
Article Muhammad Waseem · Mar 25, 2024 7m read

In this article, we will cover below topics:

  • What is Kubernetes?
  • Main Kubernetes (K8s) Components


What is Kubernetes?

Kubernetes is an open-source container orchestration framework developed by Google. In essence, it automates the deployment, scaling, and management of containers and helps you run applications consisting of multiple containers. Additionally, it allows you to operate them in different environments, e.g., physical machines, virtual machines, cloud environments, or even hybrid deployment environments.
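To make this concrete, here is a minimal sketch of a Kubernetes Deployment manifest that keeps several replicas of a containerized application running (the name my-app and the image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                 # Kubernetes keeps three replicas running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.0   # placeholder image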


What problems does it solve?

0
3 357
Article Ariel Glikman · Mar 11, 2024 3m read

In case you're planning on deploying IRIS For Health, or any of our containerized products, via the IKO on OpenShift, I wanted to share some of the hurdles we had to overcome.

As with any IKO-based installation, we first need to deploy the IKO itself. However, we were getting this error:

Warning FailedCreate 75s (x16 over 3m59s) replicaset-controller Error creating: pods "intersystems-iris-operator-amd-f6757dcc-" is forbidden: unable to validate against any security context constraint:

followed by a list of all the security context constraints (SCCs) it could not validate against.

0
0 366
Article Ben Spead · Dec 20, 2023 11m read

You may not realize it, but your InterSystems Login Account can be used to access a very wide array of InterSystems services to help you learn and use InterSystems IRIS and other InterSystems technologies more effectively. Continue reading to learn how to unlock new technical knowledge and tools using your InterSystems Login account. Also, after reading, please participate in the poll at the bottom, so we can see how this article was useful to you!

What is an InterSystems Login Account? 

4
1 657
Article Nikolay Solovyev · Jun 24, 2020 2m read

ZPM is a package manager designed for convenient deployment of applications and modules on the IRIS platform.

Module developers, in order for their module to be installed using ZPM, need to follow a series of simple steps.

  • Write module code
  • Create a module.xml file that contains the meta description of the module
  • Using the test registry, publish the module, verify that it is published
  • Install the module from the test registry
  • Publish the module. To publish in the public registry pm.community.intersystems.com, you need to publish the module in https://openexchange.intersystems.com, specifying the GitHub URL of your package and ticking the "Publish in Package Manager" checkbox (a rough command sequence for loading and publishing is sketched below).
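For the load-and-publish steps, the flow in the zpm shell looks roughly like this (a sketch only; my-module and the path are placeholders, and the target registry must already be configured with the repo command):

USER>zpm
zpm: USER>load /temp/zzz
zpm: USER>my-module publish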

Creating a module.xml file manually can be tedious, so the generate command was added to zpm (starting with version 0.2.3).

The generate command is for creating module.xml for your project.

How to use:

Run zpm in the terminal and then type generate:

USER>zpm
zpm: USER>generate /temp/zzz

As an argument (in this case /temp/zzz), specify the path to the directory containing your project. The module.xml file will be created in this directory.

Then answer the questions:

zpm: USER>generate /temp/zzz

Enter module name: my-module
Enter module version: 1.0.0 => 1.0.1
Enter module description: module description
Enter module keywords: test,zpm,docker
Enter module source folder: src => 

Existing Web Applications:
    /csp/user
    /registry
    Enter a comma separated list of web applications or * for all: /csp/user
    Enter path to csp files for /csp/user:  web
Dependencies:
    Enter module:version or empty string to continue: sslclient:*  
    Enter module:version or empty string to continue: 
zpm: USER>
  • module source folder – relative path to your code (classes, routines), usually src. All classes and routines in this folder are loaded into the current namespace.
  • If your module includes web applications, indicate which web applications from the current namespace should be added to module.xml
  • If your module contains dependencies, specify the module and its version. Use * for the latest version.
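For orientation, the generated module.xml has roughly this shape (a sketch based on the answers above; element names follow the IPM manifest format, web-application elements are omitted, and details may differ between zpm versions):

<?xml version="1.0" encoding="UTF-8"?>
<Export generator="Cache" version="25">
  <Document name="my-module.ZPM">
    <Module>
      <Name>my-module</Name>
      <Version>1.0.1</Version>
      <Description>module description</Description>
      <Keywords>test,zpm,docker</Keywords>
      <SourcesRoot>src</SourcesRoot>
      <Dependencies>
        <ModuleReference>
          <Name>sslclient</Name>
          <Version>*</Version>
        </ModuleReference>
      </Dependencies>
    </Module>
  </Document>
</Export>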

If you need to add author and license information to module.xml, use the -author (-a) modifier.

zpm: USER>generate -author /temp/zzz

The generate command also supports a different option: the -template (-t) modifier. The module.xml is then created with fictional data, which you need to change manually.

zpm: USER>generate -template /temp/zzz

Watch this video demonstrating the usage of the generate command.

2
2 594
Article Murray Oldfield · Jun 6, 2017 17m read

I am often asked by customers, vendors or internal teams to explain CPU capacity planning for large production databases running on VMware vSphere.

In summary there are a few simple best practices to follow for sizing CPU for large production databases:

  • Plan for one vCPU per physical CPU core.
  • Consider NUMA and ideally size VMs to keep CPU and memory local to a NUMA node.
  • Right-size virtual machines. Add vCPUs only when needed.

Generally this leads to a couple of common questions:

  • Because of hyper-threading VMware lets me create VMs with 2x the number of physical CPUs. Doesn’t that double capacity? Shouldn’t I create VMs with as many CPUs as possible?
  • What is a NUMA node? Should I care about NUMA?
  • VMs should be right-sized, but how do I know when they are?

I answer these questions with examples below. But also remember, best practices are not written in stone. Sometimes you need to make compromises. For example, it is likely that large production database VMs will NOT fit in a NUMA node, and as we will see that's OK. Best practices are guidelines that you will have to evaluate and validate for your applications and environment.

Although I am writing this with examples for databases running on InterSystems data platforms, the concepts and rules apply generally for capacity and performance planning for any large (Monster) VMs.


For virtualisation best practices and more posts on performance and capacity planning, see [the list of other posts in the InterSystems Data Platforms and performance series here](https://community.intersystems.com/post/capacity-planning-and-performance-series-index).

# Monster VMs

This post is mostly about deploying *Monster VMs*, sometimes called Wide VMs. The CPU resource requirements of high transaction databases mean they are often deployed on Monster VMs.

A monster VM is a VM with more Virtual CPUs or memory than a physical NUMA node.


# CPU architecture and NUMA

Current Intel processor architecture uses Non-Uniform Memory Access (NUMA). For example, the servers I am using to run tests for this post have:

  • Two CPU sockets, each with a processor with 12 cores (Intel E5-2680 v3).
  • 256 GB memory (16 x 16GB RDIMM)

Each 12-core processor has its own local memory (128 GB of RDIMM and local cache) and can also access memory on the other processor in the same host. Each 12-core package of CPU, CPU cache, and 128 GB of RDIMM memory is a NUMA node. To access memory on another processor, NUMA nodes are connected by a fast interconnect.

Processes running on a processor accessing local RDIMM and Cache memory have lower latency than going across the interconnect to access remote memory on another processor. Access across the interconnect increases latency, so performance is non-uniform. The same design applies to servers with more than two sockets. A four socket Intel server has four NUMA nodes.

ESXi understands physical NUMA, and the ESXi CPU scheduler is designed to optimise performance on NUMA systems. One of the ways ESXi maximises performance is to create data locality on a physical NUMA node. In our example, if you have a VM with 12 vCPUs and less than 128 GB of memory, ESXi will assign that VM to run on one of the physical NUMA nodes, which leads to the rule:

If possible size VMs to keep CPU and memory local to a NUMA node.

If you need a Monster VM larger than a NUMA node, that is OK; ESXi does a very good job of automatically calculating and managing requirements. For example, ESXi will create virtual NUMA nodes (vNUMA) that are intelligently scheduled onto the physical NUMA nodes for optimal performance. The vNUMA structure is exposed to the operating system. For example, if you have a host server with two 12-core processors and a VM with 16 vCPUs, ESXi may use eight physical cores on each of the two processors to schedule the VM's vCPUs, and the operating system (Linux or Windows) will see two NUMA nodes.
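To see what the guest was presented, you can check the NUMA layout inside a Linux guest, for example with numactl (the output below is an illustrative sketch for a 16 vCPU VM spread across two vNUMA nodes):

$ numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7
node 1 cpus: 8 9 10 11 12 13 14 15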

It is also important to right-size your VMs and not allocate more resources than are needed, as that can lead to wasted resources and loss of performance. As well as helping you size for NUMA, it is more efficient, and will result in better performance, to have a 12 vCPU VM with high (but safe) CPU utilisation than a 24 vCPU VM with low or middling CPU utilisation, especially if there are other VMs on the host needing to be scheduled and competing for resources. This also reinforces the rule:

Right-size virtual machines.

Note: There are differences between Intel and AMD implementations of NUMA. AMD has multiple NUMA nodes per processor. It's been a while since I have seen AMD processors in a customer server, but if you have them, review the NUMA layout as part of your planning.


## Wide VMs and Licensing

For best NUMA scheduling, configure wide VMs with 1 vCPU per socket (correction, June 2017). For example, by default a VM with 24 vCPUs should be configured as 24 CPU sockets, each with one core.

Follow VMware best practice rules.

Please see this post on the VMware blogs for examples. 

The VMware blog post goes into detail, but the author, Mark Achtemichuk, recommends the following rules of thumb:

  • While there are many advanced vNUMA settings, only in rare cases do they need to be changed from defaults.
  • Always configure the virtual machine vCPU count to be reflected as Cores per Socket, until you exceed the physical core count of a single physical NUMA node.
  • When you need to configure more vCPUs than there are physical cores in the NUMA node, evenly divide the vCPU count across the minimum number of NUMA nodes.
  • Don’t assign an odd number of vCPUs when the size of your virtual machine exceeds a physical NUMA node.
  • Don’t enable vCPU Hot Add unless you’re okay with vNUMA being disabled.
  • Don’t create a VM larger than the total number of physical cores of your host.

Caché licensing counts cores, so this is not a problem. However, for software or databases other than Caché, specifying that a VM has 24 sockets could make a difference to software licensing, so you must check with your vendors.


# Hyper-threading and the CPU scheduler

Hyper-threading (HT) often comes up in discussions; I hear: "hyper-threading doubles the number of CPU cores". Obviously, at the physical level it can't — you have as many physical cores as you have. Hyper-threading should be enabled and will increase system performance. An expectation is maybe a 20% or more application performance increase, but the actual amount is dependent on the application and the workload. But certainly not double.

As I posted in the VMware best practice post, a good starting point for sizing large production database VMs is to assume that each vCPU has full physical core dedication on the server — basically, ignore hyper-threading when capacity planning. For example:

For a 24-core host server plan for a total of up to 24 vCPU for production database VMs knowing there may be available headroom.

Once you have spent time monitoring the application, operating system and VMware performance during peak processing times, you can decide if higher VM consolidation is possible. In the best practice post I stated the rule as:

One physical CPU (includes hyper-threading) = One vCPU (includes hyper-threading).


## Why Hyper-threading does not double CPU

HT on Intel Xeon processors is a way of creating two logical CPUs on one physical core. The operating system can efficiently schedule against the two logical processors — if a process or thread on a logical processor is waiting, for example for IO, the physical CPU resources can be used by the other logical processor. Only one logical processor can be progressing at any point in time, so although the physical core is more efficiently utilised, performance is not doubled.

With HT enabled in the host BIOS, when creating a VM you can configure a vCPU per HT logical processor. For example, on a 24-physical-core server with HT enabled you can create a VM with up to 48 vCPUs. The ESXi CPU scheduler will optimise processing by running VM processes on separate physical cores first (while still considering NUMA). Later in the post I explore whether allocating more vCPUs than physical cores to a Monster database VM helps scaling.

## Co-stop and CPU scheduling

After monitoring host and application performance, you may decide that some overcommitment of host CPU resources is possible. Whether this is a good idea will be very dependent on the applications and workloads. An understanding of the scheduler and a key metric to monitor can help you be sure that you are not overcommitting host resources.

I sometimes hear: for a VM to be progressing, there must be the same number of free logical CPUs as there are vCPUs in the VM. For example, a 12 vCPU VM must 'wait' for 12 logical CPUs to be 'available' before execution progresses. However, since ESXi version 3 this has not been the case: ESXi uses relaxed co-scheduling of CPUs for better application performance.

Because multiple cooperating threads or processes frequently synchronise with each other, not scheduling them together can increase latency in their operations; for example, a thread waiting in a spin loop for another thread to be scheduled. For best performance, ESXi tries to schedule as many sibling vCPUs together as possible. But the CPU scheduler can flexibly schedule vCPUs when there are multiple VMs competing for CPU resources in a consolidated environment. If there is too much time difference as some vCPUs make progress while siblings don't (the time difference is called skew), then the leading vCPU will decide whether to stop itself (co-stop). Note that it is vCPUs that co-stop (or co-start), not the entire VM. This works very well even when there is some overcommitment of resources; however, as you would expect, too much overcommitment of CPU resources will inevitably impact performance. I show an example of overcommitment and co-stop later in Example 2.

Remember, it is not a flat-out race for CPU resources between VMs; the ESXi CPU scheduler's job is to ensure that policies such as CPU shares, reservations and limits are followed while maximising CPU utilisation and ensuring fairness, throughput, responsiveness and scalability. A discussion of using reservations and shares to prioritise production workloads is beyond the scope of this post and dependent on your application and workload mix. I may revisit this at a later time if I find any Caché-specific recommendations. There are many factors that come into play with the CPU scheduler; this section just skims the surface. For a deep dive see the VMware white paper and other links in the references at the end of the post.


# Examples

To illustrate the different vCPU configurations, I ran a series of benchmarks using a high transaction rate, browser-based Hospital Information System application, a similar concept to the DVD Store database benchmark developed by VMware.

The scripts for the benchmark are created based on observations and metrics from live hospital implementations and include high use workflows, transactions and components that use the highest system resources. Driver VMs on other hosts simulate web sessions (users) by executing scripts with randomised input data at set workflow transaction rates. A benchmark with a rate of 1x is the baseline. Rates can be scaled up and down in increments.

Along with the database and operating system metrics a good metric to gauge how the benchmark database VM is performing is component (also could be a transaction) response time as measured on the server. An example of a component is part of an end user screen. An increase in component response time means users would start to see a change for the worse in application response time. A well performing database system must provide consistent high performance for end users. In the following charts, I am measuring against consistent test performance and an indication of end user experience by averaging the response time of the 10 slowest high-use components. Average component response time is expected to be sub-second, a user screen may be made up of one component, or complex screens may have many components.

Remember you are always sizing for peak workload, plus a buffer for unexpected spikes in activity. I usually aim for average 80% peak CPU utilisation.

A full list of benchmark hardware and software is at the end of the post.


## Example 1. Right-sizing - single monster VM per host

It is possible to create a database VM that is sized to use all the physical cores of a host server, for example a 24 vCPU VM on the 24 physical core host. Rather than run the server “bare-metal” in a Caché database mirror for HA or introduce the complication of operating system failover clustering, the database VM is included in a vSphere cluster for management and HA, for example DRS and VMware HA.

I have seen customers follow old-school thinking and size a primary database VM for expected capacity at the end of five years of hardware life, but as we know from above it is better to right-size; you will get better performance and consolidation if your VMs are not oversized, and managing HA will be easier. Think Tetris if there is maintenance or a host failure and the monster database VM has to migrate or restart on another host. If the transaction rate is forecast to increase significantly, vCPUs can be added ahead of time during planned maintenance.

Note: the CPU 'hot add' option disables vNUMA, so do not use it for monster VMs.

Consider the following chart showing a series of tests on the 24-core host. 3x transaction rate is the sweet spot and the capacity planning target for this 24-core system.

  • A single VM is running on the host.
  • Four VM sizes were used to show performance at 12, 24, 36 and 48 vCPU.
  • Transaction rates (1x, 2x, 3x, 4x, 5x) were run for each VM size (if possible).
  • Performance/user experience is shown as component response time (bars).
  • Average CPU% utilisation in the guest VM (lines).
  • Host CPU utilisation reached 100% (red dashed line) at 4x rate for all VM sizes.

![24 Physical Core Host Single guest VM average CPU% and Component Response time ](https://community.intersystems.com/sites/default/files/inline/images/single_guest_vm.png "Single Guest VM")

There is a lot going on in this chart, but we can focus on a couple of interesting things.

  • The 24 vCPU VM (orange) scaled up smoothly to the target 3x transaction rate. At 3x rate the in-guest VM is averaging 76% CPU (peaks were around 91%). Host CPU utilisation is not much more than the guest VM. Component response time is pretty much flat up to 3x, so users are happy. As far as our target transaction rate — this VM is right-sized.

So much for right-sizing; what about increasing vCPUs further, which means using hyper-threads? Is it possible to double performance and scalability? The short answer is no!

In this case the answer can be seen by looking at component response time from 4x onwards. While the performance is 'better' with more logical cores (vCPUs) allocated, it is still not as flat and consistent as it was up to 3x. Users will be reporting slower response times at 4x no matter how many vCPUs are allocated. Remember, at 4x the *host* is already flat-lined at 100% CPU utilisation as reported by vSphere. At higher vCPU counts, even though in-guest CPU metrics (vmstat) report less than 100% utilisation, this is not the case for physical resources. Remember, the guest operating system does not know it is virtualised and is just reporting on resources presented to it. Also note the guest operating system does not see HT threads; all vCPUs are presented as physical cores.

The point is that the database processes (there are more than 200 Caché processes at the 3x transaction rate) are very busy and make very efficient use of the processors; there is not a lot of slack for logical processors to schedule more work, or to consolidate more VMs onto this host. For example, a large part of Caché processing happens in-memory, so there is not a lot of waiting on IO. So while you can allocate more vCPUs than physical cores, there is not a lot to be gained because the host is already 100% utilised.

Caché is very good at handling high workloads. Even when the host and VM are at 100% CPU utilisation the application is still running, and the transaction rate is still increasing — scaling is not linear, and as we can see response times are getting longer and user experience will suffer — but the application does not 'fall off a cliff', and although it is not a good place to be, users can still work. If you have an application that is not so sensitive to response times, it is good to know you can push to the edge, and beyond, and Caché still works safely.

Remember, you do not want to run your database VM or your host at 100% CPU. You need capacity for unexpected spikes and growth in the VM, and the ESXi hypervisor needs resources for all the networking, storage and other activities it does.

I always plan for peaks of 80% CPU utilisation. Even then, sizing vCPUs only up to the number of physical cores leaves some headroom for the ESXi hypervisor on logical threads, even in extreme situations.

If you are running a hyper-converged (HCI) solution you MUST also factor in HCI CPU requirements at the host level. See my previous post on HCI for more details. Basic CPU sizing of VMs deployed on HCI is the same as for other VMs.

Remember, you must validate and test everything in your own environment and with your applications.


## Example 2. Over-committed resources

I have seen customer sites reporting ‘slow’ application performance while the guest operating system reports there are CPU resources to spare.

Remember, the guest operating system does not know it is virtualised. Unfortunately, in-guest metrics, for example as reported by vmstat (such as in pButtons), can be deceiving; you must also get host-level metrics and ESXi metrics (for example from esxtop) to truly understand system health and capacity.
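For example, in esxtop's interactive CPU view (press 'c'), the per-VM %RDY and %CSTP columns are the ones to watch; the figures below are an illustrative sketch of an over-committed host, not measured data:

NAME        %USED    %RUN    %RDY   %CSTP
DB-VM-1    2210.5  2250.1   310.2   140.7
DB-VM-2    2150.3  2190.8   295.6   133.2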

As you can see in the chart above, when the host is reporting 100% utilisation the guest VM can be reporting a lower utilisation. The 36 vCPU VM (red) is reporting 80% average CPU utilisation at the 4x rate while the host is reporting 100%. Even a right-sized VM can be starved of resources if, for example, after go-live other VMs are migrated onto the host, or resources are over-committed through badly configured DRS rules.

To show key metrics, for this series of tests I configured the following:

  • Two database VMs running on the host.
    • a 24 vCPU VM running at a constant 2x transaction rate (not shown on the chart).
    • a 24 vCPU VM running at 1x, 2x, 3x (these metrics are shown on the chart).

With another database VM using resources, at the 3x rate the guest OS (RHEL 7) vmstat reports only 86% average CPU utilisation, and the run queue averages only 25. However, users of this system will be complaining loudly as component response time shoots up while processes are slowed.

As shown in the following chart, Co-stop and Ready Time tell the story of why user performance is so bad. The Ready Time (%RDY) and Co-stop (%CoStop) metrics show CPU resources are massively over-committed at the target 3x rate. This should not really be a surprise, as the host is running the other VM's 2x rate plus this database VM's 3x rate.


![](https://community.intersystems.com/sites/default/files/inline/images/overcommit_3.png "Over-committed host")

The chart shows Ready time increases when total CPU load on the host increases.

Ready time is time that a VM is ready to run but cannot because CPU resources are not available.

Co-stop also increases. There are not enough free logical CPUs to allow the database VM to progress (as I detailed in the HT section above). The end result is processing is delayed due to contention for physical CPU resources.

I have seen exactly this situation at a customer site, where our support view from pButtons and vmstat only showed the virtualised operating system. While vmstat reported CPU headroom, the user performance experience was terrible.

The lesson here is that it was not until ESXi metrics and a host-level view were made available that the real problem was diagnosed: over-committed CPU resources, caused by a general cluster CPU resource shortage and, making the situation worse, bad DRS rules that caused high transaction database VMs to migrate together and overwhelm host resources.


## Example 3. Over-committed resources

In this example I used a baseline 24 vCPU database VM running at 3x transaction rate, then two 24 vCPU database VMs at a constant 3x transaction rate.

The average baseline CPU utilisation (see Example 1 above) was 76% for the VM and 85% for the host. A single 24 vCPU database VM uses all 24 physical cores. Running two 24 vCPU VMs means the VMs are competing for resources and are using all 48 logical execution threads on the server.


![](https://community.intersystems.com/sites/default/files/inline/images/overcommit_2vm.png "Over-committed host")

Remembering that the host was not 100% utilised with a single VM, we can still see a significant drop in throughput and performance as two very busy 24 vCPU VMs attempt to use the 24 physical cores on the host (even with HT). Although Caché is very efficient at using the available CPU resources, there is still a 16% drop in database throughput per VM and, more importantly, a more than 50% increase in component (user) response time.


## Summary

My aim for this post is to answer the common questions. See the reference section below for a deeper dive into CPU host resources and the VMware CPU scheduler.

Even though there are many levels of nerd-knob twiddling and ESXi rat holes to go down to squeeze the last drop of performance out of your system, the basic rules are pretty simple.

For large production databases:

  • Plan for one vCPU per physical CPU core.
  • Consider NUMA and ideally size VMs to keep CPU and memory local to a NUMA node.
  • Right-size virtual machines. Add vCPUs only when needed.

If you want to consolidate VMs remember large databases are very busy and will heavily utilise CPUs (physical and logical) at peak times. Don't oversubscribe them until your monitoring tells you it is safe.


## References
## Tests

I ran the examples in this post on a vSphere cluster made up of two-processor Dell R730s attached to an all-flash array. During the examples there were no bottlenecks on the network or storage.

  • Caché 2016.2.1.803.0

PowerEdge R730

  • 2x Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
  • 16x 16GB RDIMM, 2133 MT/s, Dual Rank, x4 Data Width
  • SAS 12Gbps HBA External Controller
  • HyperThreading (HT) on

PowerVault MD3420, 12G SAS, 2U-24 drive

  • 24x 960GB Solid State Drive, SAS Read Intensive MLC, 12Gbps, 2.5in Hot-plug Drive, PX04SR
  • 2 Controller, 12G SAS, 2U MD34xx, 8G Cache

VMware ESXi 6.0.0 build-2494585

  • VMs are configured for best practice; VMXNET3, PVSCSI, etc.

RHEL 7

  • Large pages

The baseline 1x rate averaged 700,000 glorefs/second (database accesses/second). The 5x rate averaged more than 3,000,000 glorefs/second for 24 vCPUs. The tests were allowed to burn in until constant performance was achieved, and then 15-minute samples were taken and averaged.

These examples are only to show the theory; you MUST validate with your own application!


7
0 6626
Question Wesley West · Nov 15, 2023

I am trying to execute a program from within Caché using a $ZF call:

S X=$ZF(-1,"C:\""Program Files (x86)""\Car-Part\Messaging\iCPM.exe")

For the sake of this post, I changed it to open Notepad:

S X=$ZF(-1,"C:\Windows\notepad.exe")

If I call it directly from the terminal, Notepad opens and all is happy.

If I add it to a program we use to run certain tasks once an hour, or even every 10 minutes, it will fire off Notepad, but it will be in the background.

The messaging application we use will not work at all in the background and needs to be in the foreground.  

2
1 312
Article Mihoko Iijima · Nov 2, 2023 3m read

InterSystems FAQ rubric

For routines (*.mac)

You can hide the source by exporting/importing only the *.obj that is generated after compiling the source program.

The command execution example specifies EX1Sample.obj and EX2Sample.obj, which are generated by compiling EX1Sample.mac and EX2Sample.mac, as export targets and exports them to the file given in the second argument.

After moving to another namespace, the exported XML file is used to perform the import.
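A sketch of the pattern in the terminal (file paths are placeholders):

// In the source namespace: export only the compiled object code
Do $system.OBJ.Export("EX1Sample.obj,EX2Sample.obj","C:\temp\exportobj.xml")

// In the target namespace: load the exported XML file
Do $system.OBJ.Load("C:\temp\exportobj.xml")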

1
0 634
Question Nael Nasereldeen · Nov 12, 2019

Hi,

I am trying to use the %Net.HttpRequest Class, and the request has to pass through a proxy server that requires authentication.

I am usually able to do that by using the following code:

S httprequest.ProxyServer=proxyServer
S httprequest.ProxyPort=proxyPort
S httprequest.ProxyAuthorization="Basic xyzxyzxyz"
 

I have a problem accessing a site that is accessible only using SSL-

The combination of the proxy authentication code and the SSL code-

S httprequest.Https=1
S httprequest.SSLConfiguration = "XYZ"
S httprequest.ProxyHTTPS=1
 

Does not work- I get the following error:

5
0 1134
Question Jordan Everett · Oct 10, 2023

Hello! I just had a quick question for anyone out there with more experience on deploying Ensemble productions.

I'm currently trying to export my Ensemble production WITH the Business Partners so I don't have to rebuild or add to that table after I import my Ensemble Production. I know that this is a small thing and doesn't actually affect anything in the production in terms of performance, but I like to have it for better documentation.

I've tried including it in my export, but I'm unable to find it and I'm wondering if there is something obvious that I'm missing?

3
0 228
Question Andrew Smith · Sep 18, 2023

We have some custom tools that do project based deployment from environment to environment. In VS Code, we're struggling to find the best way to export PRJ files that contain the contents of a project the way we were able to in Studio. We can edit existing projects, and in server side editing we can create projects, but we can't seem to extract the project files (Not the classes, the wrapper) to store in our source control solution for deployment purposes.

1
1 203
Announcement Brenna Quirk · Sep 11, 2023

Hi all! I just wanted to share a new video – the first in a series of three – addressing the migration from the InterSystems Private Web Server to an external web server. This video covers the process of installing a web server and upgrading your IRIS instance in a Linux/Unix environment. Two more videos on the way will show the process for a mirrored setup in Linux/Unix, and for a single instance in Windows. 

Migrating a Single Instance to an External Web Server in Linux or Unix 

0
0 224
Article Lorenzo Scalese · Nov 10, 2022 8m read

REST API for Security Package

Hi community,

In this article, we will learn how to set up a REST API for the IRIS Security Package. We will be able to create users and roles, add applications, etc., via simple HTTP requests, as well as generate a client application in ObjectScript.

Requirements

We need:

  1. An IRIS instance (installation kit or docker).
  2. ObjectScript package manager (ZPM).
  3. (Optional) A second IRIS instance to generate an ObjectScript client.

We will use a set of existing applications and libraries on OpenExchange. The package manager (ZPM) will make their installations much easier. If you don't have ZPM on your instance, you can easily install it by copying this line into an IRIS terminal:

set $namespace="%SYS" do ##class(Security.SSLConfigs).Create("ssl") set r=##class(%Net.HttpRequest).%New(),r.Server="pm.community.intersystems.com",r.SSLConfiguration="ssl" do r.Get("/packages/zpm/latest/installer"),$system.OBJ.LoadStream(r.HttpResponse.Data,"c")

Create the web application

A REST service dedicated to the package is available starting from version 1.4.0 of the Config-API application. We will simply install this application via the ZPM command:

zpm "install config-api"

By default, installing Config-API does not expose the REST services, so we need to create the web application "/config-api". A ready-to-use script is available to avoid manual intervention in the management portal:

Do ##class(Api.Config.Developers.Install).installMainRESTApp()

The REST services of the security package are now available via the path "/config-api/security/".

API security

Now when the "/config-api" web application is created, you can view the details in the management portal.


By default, the application will only be accessible by login and password (basic authentication) provided that the user has the %Admin_Secure resource.

Of course, this is not enough. Requests must use the HTTPS protocol, and communications between the Web Gateway and the IRIS instance must be encrypted. Using an API manager (IAM) might be beneficial; that way, you can have finer access control and, for example, accept HTTP requests from certain IP addresses only. We won't go into the details of how to configure this, as it is beyond the scope of this article. However, you can find more articles on this subject in the community and in the official documentation. If you wish, leave me a comment and I will provide you with a Docker-based repository with Web Gateway HTTPS and SSL/TLS.

Custom OnPreDispatch (optional)

The REST dispatch class "Api.Config.REST.Main" has an implementation of the "OnPreDispatch" method. This method is called before request processing. Put any common code you want to run for each request here. Remember that if pContinue is set to 0, the request will not be processed.

Class Api.Config.REST.Main Extends %CSP.REST
{

....

ClassMethod OnPreDispatch(pUrl As %String, pMethod As %String, ByRef pContinue As %Boolean) As %Status
{
    Set sc = $$$OK, class = ##class(Api.Config.REST.OnPreDispatchAbstract).GetSubClass()

    Set pContinue = $$$YES
    
    Return:class="" sc

    Return $CLASSMETHOD(class, "OnPreDispatch", pUrl, pMethod, .pContinue)
}
}

The default implementation checks for the presence of a subclass of "Api.Config.REST.OnPreDispatchAbstract". If it exists, it will be executed. Since we have a hook to execute custom code, it can be an alternative to perform additional access checks or logging if you don't have an API manager.

Here is an example of an implementation that only logs incoming requests:

Class dc.sample.RestSecurity Extends Api.Config.REST.OnPreDispatchAbstract
{

ClassMethod OnPreDispatch(pUrl As %String, pMethod As %String, ByRef pContinue As %Boolean) As %Status
{
    Set sc = $$$OK

    /// Implement your custom access verifications here.

    Set key = $Increment(^RestSecurity.log)
    Set ^RestSecurity.log(key) = $ZDateTime($Horolog,3,1) _ " " _ pMethod _ " " _ pUrl _ " (IP: " _ $Get(%request.CgiEnvs("REMOTE_ADDR")) _ ")"

    Merge ^RestSecurity.log(key, "CgiEnvs") = %request.CgiEnvs
    Merge ^RestSecurity.log(key, "Data") = %request.Data

    // Example to stop the execution :
    // Set %response.Status = "401 Unauthorized"
    // Set pContinue = $$$NO

    Quit sc
}
}

Testing the REST API

The API is provided with a specification (swagger 2.0) available at http://localhost:52773/config-api/security/ (adapt the port number if necessary). Therefore, we can easily generate a client application with, for example, the swagger-ui application available on OpenExchange.

Install swagger-ui:

zpm "install swagger-ui"

Now open the browser at the URL http://localhost:52773/csp/swagger-ui/index.html

Once you open the page, you will most likely see an error.


This error occurs because, by default, swagger-ui tries to retrieve a specification from a non-existent URL. To avoid this error, you can force swagger-ui to open the URL of our REST service by default:

Do ##class(Api.Config.Developers.Install).SetSwaggerUIDefaultPath("/config-api/security/")

Now refresh your browser. Log in with a user that has the %Admin_Secure resource.


At this stage, you can see on this interface that **C**reate **R**ead **U**pdate **D**elete operations are available for: users, roles, resources, SSL configuration, and web applications.

It's a bit different for services and system settings, which can only be read (GET) and modified (PUT).

The interface is self-documented thanks to the rather detailed swagger specification.


It is not useful to describe here how to test each request because by clicking on POST /user, you can get a detailed description, a sample body for the request, and complete documentation of the model. In this article, we will only describe the particular case of SQL privileges.

Adding an SQL privilege

POST /sqlprivileges

Set SQL Privileges.

Here is an example of a body for adding the "select" privilege.

{
  "Grantable": "1",
  "Grantee": "MyRoleName",
  "Grantor": "_system",
  "Namespace": "USER",
  "Privilege": "s",
  "SQLObject": "1,schema_name.table_name"
}

Privilege can take the following values: s (select), i (insert), u (update), d (delete), r (reference), and e (execute).

SQLObject starts with "1," for tables, "3," for views, and "9," for stored procedures.
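For example, with curl, adding this privilege could look like the following (a sketch; the host, port, and credentials are placeholders for your instance):

curl -X POST "http://localhost:52773/config-api/security/sqlprivileges" \
     -u "_SYSTEM:SYS" \
     -H "Content-Type: application/json" \
     -d '{"Grantable":"1","Grantee":"MyRoleName","Grantor":"_system","Namespace":"USER","Privilege":"s","SQLObject":"1,schema_name.table_name"}'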

If you need to assign privileges to a large number of tables, it is not very convenient... In this case, it is better to use "(PUT) /sqlhelper".

Deleting a privilege

DELETE /sqlprivileges/{id}

The deletion is done by the ID, which is composed of a namespace, SQLObject, privilege, grantee and grantor. The ID corresponding to the creation of our previous privilege is "USER||1,schema_name.table_name||s||MyRoleName||_system". Be careful with escaping special characters.

Adding privileges to all tables in a schema

PUT /sqlhelper

This service allows you to assign a set of privileges to all the tables of one or more schemas. Here is an example of a body:

{
  "Grantable": "1",
  "Grantee": "MyNewRole",
  "Grantor": "_system",
  "Namespace": "USER",
  "Table": {
    "Schemas": [
      "schema_name",
      "schema_name_2"
    ],
    "Privileges": [
      "S",
      "I",
      "U",
      "D"
    ]
  }
}

Generating an HTTP ObjectScript client

Swagger editor allows us to generate clients from a swagger specification in different languages, but unfortunately ObjectScript is not on the list. However, it's possible to generate one using the "OpenApi-client-gen" application on OpenExchange. Currently, it is only compatible with swagger 2.0 specifications.

Install openapi-client-gen:

zpm "install openapi-client-gen"

Generate the client application:

Set features("simpleHttpClientOnly") = 1
Set sc = ##class(dc.openapi.client.Spec).generateApp("IrisSecurity", "https://raw.githubusercontent.com/lscalese/iris-config-api/master/swagger-security.json", .features)
Write !,"Status : ",$SYSTEM.Status.GetOneErrorText(sc)

Below you can see an example of code allowing you to make an HTTP request to retrieve the list of web applications and display the result in the terminal:

Class iris.dc.sample.ObjectScriptRestClient
{

ClassMethod GetRequestObj() As %Net.HttpRequest
{
    Set httpRequest = ##class(%Net.HttpRequest).%New()
    Set httpRequest.Username = "_system"
    Set httpRequest.Password = "SYS"
    Set httpRequest.Server = "iris-security-rest-server"
    Set httpRequest.Port = 52773
    Set httpRequest.Https = 0
    Quit httpRequest
}

ClassMethod ExampleGetWebAppList() As %Status
{
    Set sc = $$$OK
    Set httpClient = ##class(IrisSecurity.HttpClient).%New()
    Set httpRequest = ..GetRequestObj()
    Do httpRequest.SetHeader("accept", "application/json")
    Set msg = ##class(IrisSecurity.msg.GetListOfWebAppsRequest).%New()
    Set sc = httpClient.GETGetListOfWebApps(msg,.response,.httpRequest,.httpresponse)
    Write !,"Status           : ",$SYSTEM.Status.GetOneErrorText(sc)
    Quit:$$$ISERR(sc) sc
    Write !,"Http Status Code : ",response.httpStatusCode
    Write !
    zw response
    
    If response.httpStatusCode = 200 {
        Set formatter=##class(%JSON.Formatter).%New()
        Do formatter.Format({}.%FromJSON(response.body))
    } 

    Quit sc
}

}

GitHub

If you are a docker user, everything we have seen in this article is available on the following GitHub repository: https://github.com/lscalese/iris-sample-security-rest-api.

You just have to do this:

git clone https://github.com/lscalese/iris-sample-security-rest-api.git
cd iris-sample-security-rest-api
docker-compose up -d

You can now open your browser at the URL http://localhost:32773/swagger-ui/index.html to test the REST API directly.

It is also possible to test the generated ObjectScript client application by opening a terminal on the "iris-cli" service:

docker exec -it iris-security-rest-client iris session iris
Do ##class(iris.dc.sample.ObjectScriptRestClient).ExampleGetWebAppList()

You should see the list of web applications displayed in JSON format.

1
0 778
Question Scott Roth · May 1, 2023

I am running into an error trying to send an Alert Email to test the functionality of IRIS HealthShare Health Connect 2022.1 compared to Caché HealthShare Health Connect 2018.1.3. I was trying to send an alert email when I got the following error on my EMailAlert operation, which uses EnsLib.EMail.OutboundAdapter.

ERROR <Ens>ErrException: <UNDEFINED>FText+4 ^%occMessages *msg -- logged as '-'
number - @''

I verified the message to the EMailAlert was populated, so what could be throwing this error...

5
0 377
Article Ward De Backer · Apr 21, 2023 5m read

When you install an IRIS or Caché instance on Windows Server, you'll usually need to install it under a specific user account that has network access permissions. This is very handy when you need to access network resources for creating files or directly accessing printers.

TL;DR: see key takeaways at the bottom!

When you need to change the Windows user account the IRIS/Caché service is running as, you can configure (after installation):

0
1 628
Question Tani Frankel · Feb 8, 2023

Does anyone happen to have a sample Configuration (CPF) Merge file that includes Action parameters setting up authentication methods (e.g. Password, Kerberos) for certain Services and Web Applications (e.g. via the ModifyService or Modify/CreateApplication AutheEnabled property)?
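For context, such a merge file would look roughly like this (a sketch only: the [Actions] section and the ModifyService/CreateApplication verbs are from the CPF merge documentation, but the AutheEnabled bitmask values, e.g. 32 for password authentication, need to be verified for your version):

[Actions]
ModifyService:Name=%Service_Telnet,Enabled=1,AutheEnabled=32
CreateApplication:Name=/myapp,NameSpace=USER,AutheEnabled=32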

Thanks!

2
0 267
Article Kurro Lopez · Feb 4, 2023 2m read

Usually, if you want to deploy a solution, you need to add the items and configure your lookup tables and default configuration manually.
It's okay if you have all the permissions and privileges to perform these actions. If you want to deploy to a client's production server and you don't have the permissions, you need to indicate in a document ALL the steps that the deployment manager has to perform.

0
0 338
Article Alex Woodhead · Jan 28, 2023 3m read

Some usage cases

1. A deployment may consist of two high availability instances and two disaster recovery instances in a different data center.

The corresponding UAT environment could replicate this, giving a total of 8 instances. How do you confirm CPF and scheduled task alignment across ALL instances?

2
0 491
Question Oleksandr Kyrylov · Dec 15, 2022

I am trying to make an application deployable using an installation manifest.

I use IRISHealth_Community-2022.2.0.368.0-win_x64.exe to run the manifest during installation.

I run it from the Windows command line using the following command:

IRISHealth_Community-2022.2.0.368.0-win_x64.exe INSTALLERMANIFEST="C:\FixxerInstall\src\Installer.xml" INSTALLERMANIFESTLOGFILE="Log.txt" INSTALLERMANIFESTLOGLEVEL=3 INSTALLERMANIFESTPARAMS="FlaggerCSPDir=C:\FixxerInstall\src\csp\flagger,ClassImportDir=C:\FixxerInstall\src\import\"

It works, but a little strangely.

I am trying to create a CSP application using the manifest.

My XData:

4
0 478