# System Administration

0 Followers · 540 Posts

System administration refers to the management of one or more hardware and software systems.

Documentation on InterSystems system administration.

Article Sylvain Guilbaud · Aug 24, 2023 1m read

It sometimes happens that due to an adverse event the AUDIT database (IRISAUDIT) has grown to such proportions that the disk it resides on is full and the daily purge cannot be expected to reclaim disk space.

Because IRISAUDIT is a system database required at startup, there is no question of simply deleting the IRIS.DAT in <IRIS ROOT>/mgr/irisaudit/ and restarting IRIS, nor of hot-swapping it through system manipulations (dismount, replace, remount), since the database simply cannot be dismounted.

IRISAUDIT database: mounting required when starting IRIS

Article Megumi Kakechi · Aug 10, 2023 2m read

InterSystems FAQ rubric

Note: Use this method if you want to compare databases that have been replicated using mirroring, shadowing, or some other mechanism.

You can use the DATACHECK utility to compare global variables. Please refer to the document below.
Overview of DataCheck [IRIS]

***

Routine comparisons use the system routine %RCMP or the Management Portal.

Below is how to use it in the Management Portal.

Article Mihoko Iijima · Jul 20, 2023 4m read

InterSystems FAQ rubric

You can search for a specific global variable in the journal file using the ByTimeReverseOrder query of the %SYS.Journal.File class and the List query of the %SYS.Journal.Record class.

The role of each query is as follows.

A) ByTimeReverseOrder query of the %SYS.Journal.File class

Returns journal file names, in descending order of journal file name.

Article Murray Oldfield · May 25, 2023 12m read

I am often asked to review customers' IRIS application performance data to understand if system resources are under or over-provisioned.

This recent example is interesting because it involves an application that has done a "lift and shift" migration of a large IRIS database application to the Cloud. AWS, in this case.

A key takeaway is that once you move to the Cloud, resources can be right-sized over time as needed. You do not have to buy and provision on-premises infrastructure years in advance, sized for growth you only expect.

Continuous monitoring is required. Your application's transaction rate will change as your business, the application's usage, or the application itself changes, and with it the system resource requirements. Planners should also consider seasonal peaks in activity. Of course, an advantage of the Cloud is that resources can be scaled up or down as needed.

For more background information, there are several in-depth posts on AWS and IRIS in the community. A search for "AWS reference" is an excellent place to start. I have also added some helpful links at the end of this post.

AWS services are like Lego blocks: different sizes and shapes can be combined. I have ignored networking, security, and standing up a VPC for this post, and focused on two of the Lego block components:

  • Compute requirements.
  • Storage requirements.

Overview

The application is a healthcare information system used at a busy hospital group. The architecture components I am focusing on here include two database servers in an InterSystems mirror failover cluster.

Sidebar: Mirrors are in separate availability zones for additional high availability.


Compute requirements

EC2 Instance Types

Amazon EC2 provides a wide selection of instance types optimised for different use cases. Instance types comprise fixed combinations of CPU and memory, with fixed upper limits on storage and networking capacity. Each instance type includes one or more instance sizes.

EC2 instance attributes to look at closely include:

  • vCPU cores and Memory.
  • Maximum IOPS and IO throughput.

For IRIS applications like this one with a large database server, two types of EC2 instances are a good fit: 

  • EC2 R5 and R6i are in the Memory Optimised family of instances and are an ideal fit for memory-intensive workloads, such as IRIS. There is 8GB memory per vCPU.
  • EC2 M5 and M6i are in the General Purpose family of instances. There is 4GB memory per vCPU. They are used more for web servers, print servers and non-production servers.

Note: Not all instance types are available in all AWS regions. R5 instances were used in this case because the more recently released R6i was unavailable.

Capacity Planning

When an existing on-premises system is available, capacity planning means measuring current resource use, translating that to public cloud resources, and adding resources for expected short-term growth. Generally, if there are no other resource constraints, IRIS database applications scale linearly on the same processors. For example, imagine adding a new hospital to the group: increasing system use (transaction rate) by 20% will require 20% more vCPU resources on the same processor types. Of course, that's not guaranteed; validate your applications.

vCPU requirements

Before the migration, CPU utilisation peaked near 100% at busy times; the on-premises server has 26 vCPUs. A good rule of thumb is to size systems with an expected peak of 80% CPU utilisation. This allows for transient spikes in activity or other unusual activity. An example CPU utilisation chart for a typical day is shown below.

image

Monitoring the on-premises servers would prompt an increase in vCPUs to 30 cores to bring general peak utilisation below 80%. The customer was anticipating adding 20% transaction growth in the short term. So, a 20% buffer is added to the calculations, also allowing some extra headroom for the migration period.

A simple calculation: 30 cores + a 20% growth and migration buffer = 36 vCPU cores required.

Sizing for the cloud

Remember, AWS EC2 instances in each family type come in fixed sizes of vCPU and memory and set upper limits on IOPS, storage, and network throughput.

For example, available instance types in the R5 and R6i families include:

  • 16 vCPUs and 128GB memory
  • 32 vCPUs and 256 GB memory
  • 48 vCPUs and 384 GB memory
  • 64 vCPUs and 512 GB memory
  • And so on.

Rule of thumb: A simplified way to size an EC2 instance from known on-premises metrics to the cloud is to round up the recommended on-premises vCPU requirements to the next available EC2 instance size.
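As a rough sketch of this rule of thumb in code (the size list is just the R5/R6i vCPU counts shown above, and the 80% target and 20% buffer are this article's example numbers, not universal constants):

```python
# Sketch: round a vCPU requirement up to the next available EC2 size.
# Sizes are the R5/R6i vCPU counts listed above; the utilisation target
# and growth buffer are the figures used in this article's example.

R_FAMILY_VCPUS = [16, 32, 48, 64, 96]  # available instance sizes (vCPUs)

def required_vcpus(measured_peak_vcpus, target_utilisation=0.80, growth=0.20):
    """Cores needed to keep peaks under the target, plus a growth buffer."""
    sized_for_peak = measured_peak_vcpus / target_utilisation
    return sized_for_peak * (1 + growth)

def next_instance_size(vcpus_needed, sizes=R_FAMILY_VCPUS):
    """Round up to the smallest instance size that satisfies the requirement."""
    for size in sizes:
        if size >= vcpus_needed:
            return size
    raise ValueError("requirement exceeds largest instance in the family")

# The article's example: ~26 on-premises vCPUs peaking near 100% utilisation.
needed = required_vcpus(26)          # 39.0, in the same ballpark as 36
print(next_instance_size(needed))    # 48, i.e. the 48-vCPU instance size
```

The exact buffer arithmetic differs slightly from the article's round numbers, but the selection lands on the same 48-vCPU instance size.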

Caveats: There can be many other considerations. For example, differences between on-premises and EC2 processor types and speeds, or more performant storage in the cloud than on an old on-premises system, can change vCPU requirements: with more IO, more work can be done in less time, increasing peak vCPU utilisation. On-premises servers may count a full CPU core, including hyper-threading, while cloud instance vCPUs are a single hyper-thread. On the other hand, EC2 instances are optimised to offload some processing to onboard Nitro cards, allowing the main vCPU cores to spend more cycles processing workloads and thus delivering better instance performance. In summary, though, the above rule is a good starting guide. The advantage of the cloud is that with continuous monitoring, you can plan and change the instance type to optimise performance and cost.

For example, to translate 30 or 36 vCPUs on-premises to similar EC2 instance types:

  • The r5.8xlarge has 32 vCPUs, 256 GB memory and a maximum of 30,000 IOPS.
  • The r5.12xlarge has 48 vCPUs, 384 GB memory and a maximum of 40,000 IOPS.

Note the maximum IOPS. This will become important later in the story.

Results

An r5.12xlarge instance was selected for the IRIS database mirrors for the migration.

In the weeks after migration, monitoring showed the 48 vCPU instance type with occasional sustained peaks near 100% vCPU utilisation. Generally, though, processing peaked at around 70%, well within the acceptable range. If the periods of high utilisation can be traced to a process that can be optimised, there is plenty of headroom to consider right-sizing to a lower-specification, cheaper EC2 instance type.

image

Some time later, with the instance type unchanged, a system performance check showed that general peak vCPU utilisation had dropped to around 50%. However, there were still transient peaks near 100%.

image

Recommendation

Continuous monitoring is required. With constant monitoring, the system can be right-sized to achieve the necessary performance and be cheaper to run.

The transient spikes in vCPU utilisation should be investigated. For example, a report or batch job may be moved out of business hours, lowering the overall vCPU peak and lessening any adverse impact on interactive application users.

Review the storage IOPS and throughput requirements before changing the instance type; remember, instance types have fixed limits on maximum IOPS.

Instances can be right-sized by using failover mirroring. Simplified steps are:

  • Power off the backup mirror.
  • Power on the backup mirror using a smaller or larger instance with configuration changes to mount the EBS storage and account for a smaller memory footprint (think about things like Linux hugepages and IRIS global buffers).
  • Let the backup mirror catch up.
  • Failover the backup mirror to become primary.
  • Repeat, resize the remaining mirror, return it online, and catch up.

Note: During the mirror failover, there will be a short outage for all users, interfaces, etc. However, if ECP application servers are used, users need not experience any interruption. Application servers can also be part of an autoscaling solution.

Other cost-saving options include running the backup mirror on a smaller instance. However, there is a significant risk of reduced performance (and unhappy users) if a failover occurs at peak processing times.

Caveats: Instance vCPU and memory are fixed. Restarting with a smaller instance with a smaller memory footprint will mean a smaller global buffer cache, which can increase the database read IOPS. Please take into account the storage requirements before reducing the instance size. Automate and test rightsizing to minimise the risk of human error, especially if it is a common occurrence.


Storage requirements

Predictable storage IO performance with low latency is vital to provide scalability and reliability for your applications.

Storage types

Amazon Elastic Block Store (EBS) storage is recommended for most high transaction rate IRIS database applications. EBS provides multiple volume types that allow you to optimise storage performance and cost for a broad range of applications. SSD-backed storage is required for transactional workloads such as applications using IRIS databases.

Of the SSD storage types, gp3 volumes are generally recommended for IRIS databases to balance price and performance for transactional applications; however, for exceptional cases with very high IOPS or throughput, io2 can be used (typically for a higher cost). There are other options, such as locally attached ephemeral storage and third-party virtual array solutions. If you have requirements beyond io2 capabilities, talk to InterSystems about your needs.

Storage comes with limits and costs. For example:

  • gp3 volumes deliver a baseline performance of 3,000 IOPS and 125 MiB/s at any volume size, with single-digit millisecond latency 99% of the time, for the base cost of the storage GB capacity. gp3 volumes can scale up to 16,000 IOPS and 1,000 MiB/s throughput for an additional cost. Storage is priced per GB and on provisioned IOPS over the 3,000 IOPS baseline.
  • io2 volumes deliver a consistent baseline performance of up to 500 IOPS/GB to a maximum of 64,000 IOPS, with single-digit millisecond latency 99.9% of the time. Storage is priced per GB and on provisioned IOPS.

Remember: EC2 instances also have limits on total EBS IOPS and throughput. For example, the r5.8xlarge has 32 vCPUs and a maximum of 30,000 IOPS. Not all instance types are optimised to use EBS volumes.

Capacity Planning

When an existing on-premises system is available, capacity planning means measuring current resource use, translating that to public cloud resources, and adding resources for expected short-term growth.

The two essential resources to consider are:

  • Storage capacity. How many GB of database storage do you need, and what is the expected growth? For example, you know your on-premises system's historical average database growth for a known transaction rate. In that case, you can calculate future database sizes based on any anticipated transaction rate growth. You will also need to consider other storage, such as journals.
  • IOPS and throughput. This is the most interesting and is covered in detail below.

Database requirements

Before the migration, database disk reads were peaking at around 8,000 IOPS.

image

Read plus write IOPS peaked above 40,000 on some days, although during business hours the peaks were much lower.

image

The total throughput of reads plus writes was peaking at around 600 MB/s.

image

Remember, EC2 instances and EBS volumes have limits on IOPS AND throughput. Whichever limit is reached first will result in the throttling of that resource by AWS, causing performance degradation and likely impacting the users of your system. You must provision IOPS AND throughput.

Sizing for the cloud

For a balance of price and performance, gp3 volumes are used. However, in this case, the limit of 16,000 IOPS for a single gp3 volume is exceeded, and there is an expectation that requirements will increase in the future.

To allow for the provisioning of higher IOPS than is possible on a single gp3 volume, an LVM stripe is used.

For the migration, the database is deployed using an LVM stripe of four gp3 volumes with the following:

  • Provisioned 8,000 IOPS on each volume (for a total of 32,000 IOPS).
  • Provisioned throughput of 250 MB/s on each volume (for a total of 1,000 MB/s).
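The stripe arithmetic above can be sketched as follows (the per-volume ceilings are the gp3 limits quoted earlier; this is an illustration, not a provisioning tool):

```python
# Sketch: split a total IOPS/throughput requirement evenly across an LVM
# stripe of gp3 volumes, checking each volume stays under the gp3 ceilings.

GP3_MAX_IOPS = 16_000        # per-volume gp3 IOPS limit
GP3_MAX_THROUGHPUT = 1_000   # per-volume gp3 throughput limit, MiB/s

def provision_stripe(total_iops, total_mbps, volumes):
    """Per-volume IOPS and throughput for an even LVM stripe."""
    per_vol_iops = total_iops / volumes
    per_vol_mbps = total_mbps / volumes
    if per_vol_iops > GP3_MAX_IOPS or per_vol_mbps > GP3_MAX_THROUGHPUT:
        raise ValueError("add more volumes or consider io2")
    return per_vol_iops, per_vol_mbps

# The article's deployment: 32,000 IOPS and 1,000 MB/s over four volumes.
print(provision_stripe(32_000, 1_000, 4))  # (8000.0, 250.0)
```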

The same capacity planning process was applied to the Write Image Journal (WIJ) and transaction journal on-premises disks. The WIJ and journal disks were each provisioned on a single gp3 volume.

For more details and an example of using an LVM stripe, see: https://community.intersystems.com/post/using-lvm-stripe-increase-aws-ebs-iops-and-throughput

Rule of thumb: If your requirements exceed the limits of a single gp3 volume, investigate the cost difference between using LVM gp3 and io2 provisioned IOPS.

Caveats: Ensure the EC2 instance does not limit IOPS or throughput.

Results

In the weeks after migration, database write IOPS peaked at around 40,000, similar to on-premises. However, database read IOPS were much lower.

Lower read IOPS is expected due to the EC2 instance having more memory available for caching data in global buffers. More application working set data in memory means it does not have to be called in from much slower SSD storage. Remember, the opposite will happen if you reduce the memory footprint.

image

During peak processing times, the database volume had latency spikes above 1 ms. However, the spikes were transient and did not impact the users' experience. Storage performance is excellent.

image

Later, a system performance check shows that although there are some peaks, generally, read IOPS is still lower than on-premises.

image

Recommendation

Continuous monitoring is required. With constant monitoring, the system can be right-sized to achieve the necessary performance and be cheaper to run.

An application process responsible for the 20 minutes of high overnight database write IOPS (chart not shown) should be reviewed to understand what it is doing. Writes are not affected by large global buffers and are still in the 30-40,000 IOPS range. The process could be completed with lower IOPS provisioning. However, there will be a measurable impact on database read latency if the writes overwhelm the IO path, adversely affecting interactive users. Read latency must be monitored closely if reads are throttled for an extended period.

The database disk IOPS and throughput provisioning can be adjusted via AWS APIs or interactively via the AWS console. Because four EBS volumes comprise the LVM disk, the IOPS and throughput attributes of the EBS volumes must be adjusted equally.

The WIJ and journal should also be continuously monitored to understand if any changes can be made to the IOPS and throughput provisioning.

Note: The WIJ volume has high throughput requirements (not IOPS) due to the 256 kB block size. WIJ volume IOPS may be under the baseline of 3,000 IOPS, but throughput is currently above the throughput baseline of 125 MB/s. Additional throughput is provisioned in the WIJ volume.
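The note above follows from throughput = IOPS × block size. A quick sanity check (256 kB is the WIJ block size quoted above; the 8 kB database block size is an assumption for contrast):

```python
# Sketch: why a large block size hits the throughput limit before the
# IOPS limit. throughput (MB/s) = IOPS * block size (kB) / 1000.

def throughput_mbps(iops, block_kb):
    return iops * block_kb / 1000

# gp3 baselines from this article: 3,000 IOPS and 125 MB/s.
# At the WIJ's 256 kB block size, the throughput baseline is exhausted
# at under 500 IOPS -- far below the 3,000 IOPS baseline:
print(throughput_mbps(500, 256))   # 128.0 MB/s, already above the baseline
# Whereas small (assumed 8 kB) database blocks can use the full IOPS baseline:
print(throughput_mbps(3000, 8))    # 24.0 MB/s, well under 125 MB/s
```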

Caveats: Decreasing IOPS provisioning to throttle the period of high overnight writes will result in a longer write daemon cycle (WIJ plus random database writes). This may be acceptable if the writes finish within 30-40 seconds. However, there may be a severe impact on read IOPS and read latency and, therefore, the experience of interactive users on the system for 20 minutes or longer. Please be sure to proceed with caution.


Helpful links

AWS


Article Megumi Kakechi · May 4, 2023 2m read

InterSystems FAQ rubric

In Windows, set the processes with the following image names as monitoring targets.

[irisdb.exe]

contains important system processes.
* Please refer to the attachment for how to check important system processes that should be monitored.

[IRISservice.exe]

This is the process for handling IRIS instances via services.
When this process ends, it does not directly affect the IRIS instance itself, but stopping IRIS (stopping the service) is no longer possible.

[ctelnetd.exe]

Article Hiroshi Sato · Apr 27, 2023 2m read

InterSystems FAQ rubric

Migrating data to another system takes two steps.

1. Migrating class definitions

To migrate the class definition to another system, export it to a file in XML format or UDL format (extension .cls).

The export procedure in Studio is as follows.

Tools > Export

> Select multiple classes you want to migrate with the [Add] button

> Check [Export to local file]

> Confirm that the file type is XML, enter a file name, and click [OK].

After this, import the exported XML and UDL files in the studio on another system. The import procedure in Studio is as follows.

Tools > Import from Local

Article Ward De Backer · Apr 21, 2023 5m read

When you install an IRIS or Caché instance on Windows Server, you'll usually need to install it under a specific user account that has network access permissions. This is very handy when you need to access network resources for creating files or directly accessing printers.

TL;DR: see key takeaways at the bottom!

When you need to change the Windows user account the IRIS/Caché service is running as, you can configure (after installation):

Question Colin Brough · Apr 20, 2023

On a developer's laptop, having had two or three Ensemble installs with different settings/config changes made, and encountering unexplained errors compiling classes, wanting to scrub as much of the previous installs off the machine before doing any fresh installation... But can't find clear documentation on doing a complete uninstall!

Have stopped the server.

Am I safe to remove C:\InterSystems\Ensemble (for the instance installed into that folder)?

What other files/Registry/locations contain references to an Ensemble install, and are there tools or documentation on clearing those out?

Question Scott Roth · Apr 20, 2023

I am trying to finish build for moving to IRIS HealthShare Health Connect 2022.1 from HealthShare Health Connect 2018.1.3. I am currently using Delegated Authentication using an AD group to match up to the Role in IRIS. The Role has access to everything but the HS Resources because we don't really use the HS Resources for anything. We are mainly using IRIS for the Interoperability Engine. 

Article Mihoko Iijima · Apr 13, 2023 6m read

InterSystems FAQ rubric

In this article, we will introduce how to deal with the situation: "I accidentally deleted a global!"

Backup files and journals are used to recover specific globals that have been accidentally deleted. Restoration is performed by specifying conditions and restoring journal records using the ^ZJRNFILT utility. In this way, you can apply a point-in-time backup of the database up to and including deleting a specific global for journal records that contain deletions. For more information on the ^ZJRNFILT utility, please refer to the document below:

Article Sean McKenna · Aug 26, 2016 8m read

Enterprise Monitor is a component of Ensemble and can help organizations monitor multiple productions running on different namespaces within the same instance or namespaces running on multiple instances.

Documentation can be found at:

http://docs.intersystems.com/ens20161/csp/docbook/DocBook.UI.Page.cls?KEY=EMONITOR_all#EMONITOR_enterprise

In Ensemble 2016.1 there were changes made to make this utility work with HealthShare environments.

This article will:

  • Show how to set up Enterprise Monitor for HealthShare sites
  • Show some features of Enterprise Monitor
  • Show some features of Enterprise Message Viewer

For this article, I used the following version of HealthShare:

Cache for Windows (x86-64) 2016.1 (Build 656U) Fri Mar 11 2016 17:42:42 EST [HealthShare Modules:Core:14.02.2415 + Linkage Engine:14.02.2415 + Patient Index:14.02.2415 + Clinical Viewer:14.02.2415 + Active Analytics:14.02.2415]

Question Scott Roth · Mar 10, 2023

I am looking into creating a ZSTOP as you probably have seen from my previous posts, is there a way to capture the type of shutdown that occurred? So say if there was an unknown hardware failure (forced), vs a user shutdown? Mainly looking for user or system shutdown when we force another destination to become the primary in the mirror. So if a user shutdown the production to do.,... Task A, Task B etc..

Thanks

Scott 

Question Robert Cemper · Mar 14, 2023

To prepare a migration to IRIS I use Docker images.
The (aged) application is built around Caché Terminal
And on Windows,  IRIS uses the same ctelnetd.exe as Caché.

In my Docker installation, Telnet Settings are just grayed out in SMP.
and my Terminal can't connect.
Port mapping is OK and verified with TCP

Working from the console in Docker  with the whole set of ESC and
screen formatting is not acceptable.
We tried WebTerminal but there is just no Partition behind as in Terminal.

How can I switch on Telnet support in the Docker image?
 

Article Murray Oldfield · Nov 14, 2019 6m read

Released with no formal announcement in IRIS preview release 2019.4 is the /api/monitor service exposing IRIS metrics in Prometheus format. Big news for anyone wanting to use IRIS metrics as part of their monitoring and alerting solution. The API is a component of the new IRIS System Alerting and Monitoring (SAM) solution that will be released in an upcoming version of IRIS.

However, you do not have to wait for SAM to start planning and trialling this API to monitor your IRIS instances. In future posts, I will dig deeper into the metrics available and what they mean and provide example interactive dashboards. But first, let me start with some background and a few questions and answers.

IRIS (and Caché) is always collecting dozens of metrics about itself and the platform it is running on. There have always been multiple ways to collect these metrics to monitor Caché and IRIS. I have found that few installations use IRIS and Caché built-in solutions. For example, History Monitor has been available for a long time as a historical database of performance and system usage metrics. However, there was no obvious way to surface these metrics and instrument systems in real-time.

IRIS platform solutions (along with the rest of the world) are moving from single monolithic applications running on a few on-premises instances to distributed solutions deployed 'anywhere'. For many use cases, existing IRIS monitoring options do not fit these new paradigms. Rather than completely reinvent the wheel, InterSystems looked to popular and proven open-source solutions for monitoring and alerting.

Prometheus?

Prometheus is a well-known and widely deployed open-source monitoring system based on proven technology. It has a wide variety of plugins. It is designed to work well in cloud environments but is just as useful on-premises. Plugins cover operating systems, web servers such as Apache, and many other applications. Prometheus is often used with a front-end client, for example Grafana, which provides a great, extremely customisable UI/UX experience.

Grafana?

Grafana is also open source. As this series of posts progresses, I will provide sample templates of monitoring dashboards for common scenarios. You can use the samples as a base to design dashboards for what you care about. The real power comes when you combine IRIS metrics in context with metrics from your whole solution stack: the platform components, the operating system, IRIS, and especially instrumentation from your own applications.

Haven't I seen this before?

Monitoring IRIS and Caché with Prometheus and Grafana is not new. I have been using these applications for several years to monitor my development and test systems. If you search the Developer Community for "Prometheus", you will find other posts (for example, some excellent posts by Mikhail Khomenko) that show how to expose Caché metrics for use by Prometheus.

The difference now is that the /api/monitor API is included and enabled by default. There is no requirement to code your own classes to expose metrics.


Prometheus Primer

Here is a quick orientation to Prometheus and some terminology. The aim is to give you the high-level picture, lay some groundwork, and open the door to thinking about how to visualise or consume the metrics provided by IRIS or other sources.

Prometheus works by scraping or pulling time series data exposed from applications as HTTP endpoints (APIs such as IRIS /api/monitor). Exporters and client libraries exist for many languages, frameworks, and open-source applications — for example, for web servers like Apache, operating systems, docker, Kubernetes, databases, and now IRIS.

Exporters are used to instrument applications and services and to expose relevant metrics on an endpoint for scraping. Standard components such as web servers, databases, and the like - are supported by core exporters. Many other exporters are available open-source from the Prometheus community.

Prometheus Terminology

A few key terms are useful to know:

  • Targets are the services you care about: a host, an application or service such as Apache or IRIS, or your own application.
  • Prometheus scrapes targets over HTTP collecting metrics as time-series data.
  • Time-series data is exposed by applications, for example, IRIS or via exporters.
  • Exporters are available for things you don't control like Linux kernel metrics.
  • The resulting time-series data is stored locally on the Prometheus server in a database **.
  • The time-series database can be queried using an optimised query language (PromQL). For example, to create alerts or by client applications such as Grafana, to display the metrics in a dashboard.

** Spoiler alert: for security, scaling, high availability and other operational efficiency reasons, the database used for Prometheus time-series data in the new SAM solution is IRIS! However, access to the Prometheus database -- on IRIS -- is transparent, and applications such as Grafana do not know or care.

Prometheus Data Model

Metrics returned by the API are in Prometheus format. Prometheus uses a simple text-based metrics format with one metric per line; the format is:

<identifier> [ (time n, value n), ....]

Metrics can have labels as (key, value) pairs. Labels are a powerful way to filter metrics as dimensions. As an example, examine a single metric returned for IRIS /api/monitor. In this case journal free space:

iris_jrn_free_space{id="WIJ",dir="/fast/wij/"} 401562.83

The identifier tells you what the metric is and where it came from:

iris_jrn_free_space

Multiple labels can be used to decorate the metrics, and then used to filter and query. In this example, you can see the WIJ and the directory where the WIJ is stored:

id="WIJ",dir="/fast/wij/"

And a value: 401562.83 (MB).
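A minimal sketch of pulling apart a metric line like the one above (the regex is a simplification for illustration, not the full Prometheus exposition-format grammar):

```python
import re

# Sketch: extract the identifier, labels, and value from one metric line.
# Simplified regex; it does not cover the full Prometheus text format.
METRIC_RE = re.compile(r'^(\w+)(?:\{(.*)\})?\s+(\S+)$')

def parse_metric(line):
    name, raw_labels, value = METRIC_RE.match(line).groups()
    labels = dict(re.findall(r'(\w+)="([^"]*)"', raw_labels or ""))
    return name, labels, float(value)

line = 'iris_jrn_free_space{id="WIJ",dir="/fast/wij/"} 401562.83'
print(parse_metric(line))
# ('iris_jrn_free_space', {'id': 'WIJ', 'dir': '/fast/wij/'}, 401562.83)
```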


What IRIS metrics are available?

The preview documentation has a list of metrics; however, be aware that there may be changes. You can also simply query the /api/monitor/metrics endpoint and see the list. I use Postman, which I will demonstrate in the next community post.
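If you would rather browse the list from a script than from Postman, a sketch (the URL is an assumed default local-instance address, and the sample payload below is illustrative; adjust both to your deployment):

```python
from urllib.request import urlopen  # for fetching the live endpoint

def metric_names(text, prefix=""):
    """Return the distinct metric names in a Prometheus-format payload."""
    names = set()
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE comments
        names.add(line.split("{")[0].split()[0])
    return sorted(n for n in names if n.startswith(prefix))

# Assumed URL for a default local IRIS instance; adjust to your deployment:
# text = urlopen("http://localhost:52773/api/monitor/metrics").read().decode()

# Illustrative sample payload (the second metric name is made up):
sample = 'iris_jrn_free_space{id="WIJ"} 401562.83\niris_glo_ref_per_sec 120'
print(metric_names(sample, prefix="iris_"))
# ['iris_glo_ref_per_sec', 'iris_jrn_free_space']
```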


What should I monitor?

Keep these points in mind as you think about how you will monitor your systems and applications.

  • When you can, instrument key metrics that affect users.
    • Users don't care that one of your machines is short of CPU.
    • Users care if the service is slow or having errors.
    • For your primary dashboards focus on high-level metrics that directly impact users.
  • For your dashboards avoid a wall of graphs.
    • Humans can't deal with too much data at once.
    • For example, have a dashboard per service.
  • Think about services, not machines.
    • Once you have isolated a problem to one service, then you can drill down and see if one machine is the problem.

References

Documentation and downloads for: Prometheus and Grafana

I presented a pre-release overview of SAM (including Prometheus and Grafana) at InterSystems Global Summit 2019; you can find the link at InterSystems learning services. If the direct link does not work, go to the InterSystems learning services website and search for "System Alerting and Monitoring Made Easy".

Search here on the community for "Prometheus" and "Grafana".

Question Scott Roth · Mar 7, 2023

I am working on setting up our Failover techniques as we move to a Mirror Environment with a Arbiter, 2 Failover Nodes, and a Async (DR) Node. There are some system commands that I would like to call when the Mirror moves, and I am working on a ZMIRROR routine for that, but I also wanted to create an additional step if we wanted to manually shutdown and for the Mirror to move. So I was looking at using ZSTOP to call a couple of different items while shutting down, while the documentation has an example a couple of questions come to mind about using ZSTOP.

Question Scott Roth · Feb 21, 2023

I am new to setting up a mirror environment....

We will have a Arbiter, Two Failover members (A,B), and a Async (DR) member (C). I have the two failover members in sync and are configured for Arbiter Control.

My question is about the Async member, when I initially set it up I pointed it to the mirror on the primary node A.

Is that correct?

Question Scott Roth · Nov 13, 2018

We are trying to script a High Availability Shutdown/Start script in case we need to fail over to one of our other servers we can be back up within mins. Is there a way to configure the startup procedure to Automatically Stop/Start the JDBC server when shutting down or starting up cache? is there an auto setting we can change?

Thanks

Scott Roth

The Ohio State University Wexner Medical Center

Question Scott Roth · Feb 16, 2023

Can someone confirm that HealthShare Health Connect 2022.2 is the correct latest release that is available via the Online Distribution? I tried looking at the HealthShare Health Connect on WRC and now do not see a 2022.2 or 2022.3 version. Is this correct? So I shouldn't be running 2022.2? Did Health Connect get renamed? A couple of months ago I downloaded HealthConnect-2022.2.0.368.0-lnxrh8x64 but not seeing it now on the WRC site.

Question Mary George · Dec 28, 2022

Hi Team , 

Can I please check if anyone has encountered a SOAP authentication error when trying to submit a certificate signing request or when trying to get a certificate.

I configured a local CA server without SMTP configuration and I configured a local CA client. These steps worked okay.

Then I tried to Submit Certificate Signing Request to Certificate Authority server and I am getting the following error :

Similar error is appearing when I try to use the Get Certificate(s) from Certificate Authority server option
