Timo Lindenschmid · Feb 26, 2025

Hi,

IRIS comes with a PDF rendering engine based on Apache FOP. However, it is geared more toward creating PDF documents from scratch than toward converting existing documents to PDF.

PDF render config documentation

It is used by the now-deprecated ZEN Reports (%ZEN.Report.PrintServer - InterSystems IRIS Data Platform 2024.3).
The other option is to make use of InterSystems Reports, but again, this is for creating new PDFs from data contained in the database, not for converting existing documents to PDF.

Timo Lindenschmid · Mar 2, 2025

Hi Harshita,

please raise an iService ticket and someone from support will assist you.

Best Regards

Timo

Timo Lindenschmid · Mar 3, 2025

Just a note on embedded SQL: you can modify the compiler options, e.g. in VS Code, to include /compileembedded=1. This will then compile your embedded SQL at class compile time and highlight any errors you might have there, such as missing tables.
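If you are using the community ObjectScript extension for VS Code, the flag can be appended to the compile flags setting; a minimal sketch (the setting name assumes the intersystems-community vscode-objectscript extension, and "cuk" are its usual default flags):

```json
{
    "objectscript.compileFlags": "cuk /compileembedded=1"
}
```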

Timo Lindenschmid · Mar 6, 2025

Hi,
just wondering what you want to achieve.

Is this for outputting a report? If so, there are better options available, e.g. InterSystems Reports or, although deprecated, ZEN Reports.
 

Timo Lindenschmid · Mar 6, 2025

Hi Jude, the better option to get help is to open an iService ticket for specialist assistance.
Just a high level:

1. make sure that the parameters you want to use are added via the URL expression on the menu item used to call the report

2. then the parameters can be used in the report manager definition and assigned

Also make sure the parameter is in the format expected. IRIS dates are usually in $Horolog format and not in yyyy-mm-dd as might be expected by Logi Reports.
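Converting between the two formats in ObjectScript is a one-liner each way; a quick sketch (format code 3 is the ODBC yyyy-mm-dd format):

```objectscript
set h = $zdateh("2025-03-19",3)  // ODBC yyyy-mm-dd -> $Horolog date part
write $zdate(h,3)                // back to yyyy-mm-dd
```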

Timo Lindenschmid · Mar 13, 2025

Hi Evan,

I think the only way is using %SYS.ProcessQuery, like this:
set currentUser = ##class(%SYS.ProcessQuery).%OpenId($job).OSUserName
 

Timo Lindenschmid · Apr 3, 2025

You might want to look into the Work Queue Manager. It can be configured to use multiple agents to process anything in a queue. This approach is best if the queue is fixed at the start and for the duration of the processing, i.e. no items are added while the run is in progress.
If you are more after a spooling-type setup, you can use an interoperability production to monitor a spool global and start jobs based on pool size etc.
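The basic queue-then-wait pattern looks roughly like this (MyApp.Task is a hypothetical class with a Process classmethod; this is a sketch, not production code):

```objectscript
// create a work queue, feed it items, then block until all agents finish
set queue = $system.WorkMgr.%New()
for i=1:1:100 {
    set sc = queue.Queue("##class(MyApp.Task).Process", i)
    quit:$$$ISERR(sc)
}
set sc = queue.WaitForComplete()
```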


ref: Using the Work Queue Manager | InterSystems IRIS Data Platform 2025.1
 

Timo Lindenschmid · Apr 7, 2025

Hi,

what SSH client are you using? PuTTY, perchance?
If so, try setting the keepalive interval to something other than 0, say 60 seconds.

This usually solves the issue of being disconnected for me.

Timo Lindenschmid · Apr 9, 2025

This sounds like Tune Table messed up the table statistics. I would look at the table statistics for that boolean field. I would also open a support ticket with the WRC on this.

Timo Lindenschmid · Apr 13, 2025

If you add a calculated field to a class definition you don't have to "update" your data for the field to be populated. It will be calculated on record access, i.e. when the record is selected with the field included in the SELECT.
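A minimal sketch of such a property (the property and field names are made up for illustration):

```objectscript
// computed on access - no stored data to backfill
Property FullName As %String [ Calculated, SqlComputed, SqlComputeCode = { set {*} = {FirstName}_" "_{LastName} } ];
```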

Timo Lindenschmid · Apr 14, 2025

my approach would be:
1. run through each list and generate a value -> list index
     e.g. List(3) would result in index entries ^||idx(5,3)="", ^||idx(8,3)="", ^||idx(9,3)=""
     also add the list to a "still valid" list
2. iterate over the index, find the first value with only one entry and add that list to the result list, then run through the list and remove all value index entries for values contained in the list. Remove the list from the still valid list.
3. if no value has just one list entry, pick the list with the most entries that is on the still valid list. Iterate over the list and check each value against the value index; if the value is still in the index, remove the value index entry and add the list to the result list. Remove the list from the still valid list.
4. repeat the above until either the value index has no more entries, or the still valid list has no more entries.
5. the result list contains all lists required for maximum coverage
Hope that makes sense.

Timo Lindenschmid · Apr 15, 2025

Here is the implementation:
 

ClassMethod MaxCoverage(ByRef SourceLists, Output Solution As %List)
{
    /*
    1. run through each source list and generate a value -> list index
       e.g. List(3) would result in index entries ^||idx(5,3)="", ^||idx(8,3)="", ^||idx(9,3)=""
       also add the list to a still valid list
    2. iterate over the index, find the first value with only one entry and add that list to the result list,
       then run through the list and remove all value index entries for values contained in the list.
       Remove the list from the still valid list.
    3. if no value has just one list entry, pick the list with the most entries that is on the still valid list.
       Iterate over the list and check each value against the value index; if the value is still in the index,
       remove the value index entry and add the list to the result list. Remove the list from the still valid list.
    4. repeat until either the value index has no more entries, or the still valid list has no more entries.
    5. the result list contains all lists required for maximum coverage
    */
    kill Solution
    kill ^||lengthIdx
    kill ^||idx
    kill ^||covered
    set idx=""
    for {
        set idx=$order(SourceLists(idx))
        quit:idx=""
        set listid=0
        set stillAvailable(idx)=""
        set ^||lengthIdx($listlength(SourceLists(idx)),idx)=idx
        while $listnext(SourceLists(idx),listid,listval) {
            set ^||idx(listval,idx)=""
        }
    }

    set listid=""
    // loop - exit when either ^||idx or the still valid list has no more entries
    for {
        if $data(stillAvailable)=0 {
            // no more lists to process
            quit
        }
        if $data(^||idx)=0 {
            // no more values to process
            quit
        }
        // find the first value with only one entry
        set val=""
        set found=0
        for {
            quit:found=1
            set val=$order(^||idx(val))
            quit:val=""
            set listid=""
            for {
                set listid=$order(^||idx(val,listid))
                quit:listid=""
                // found a value, now check if there is more than one entry
                if $order(^||idx(val,listid))="" {
                    // found a value with only one entry
                    set found=1
                    quit
                }
            }
        }

        if found=0 {
            // haven't found one yet, so use the list with the most entries from ^||lengthIdx
            set res=$query(^||lengthIdx(""),-1,val)
            if res'="" {
                set listid=val
            } else {
                // no more entries - should never hit this
                quit
            }
        }

        if listid'="" {
            // got a list, now process it
            // first remove the list from the available lists
            kill stillAvailable(listid)
            kill ^||lengthIdx($listlength(SourceLists(listid)),listid)
            // iterate through the list and check each value against the value index
            set listval=0
            w !,"found listid:"_listid,!
            set ptr=0
            set added=0
            while $listnext(SourceLists(listid),ptr,listval) {
                // check if the value is still in the index
                w !,"   checking value:"_listval
                // count how often each value is covered
                if $increment(^||covered(listval))
                if $data(^||idx(listval)) {
                    w " - found it!"
                    // remove the value from the index
                    kill ^||idx(listval)
                    // add the list to the result list (once)
                    if added=0 {
                        set Solution=$select($get(Solution)="":$listbuild(listid),1:Solution_$listbuild(listid))
                        set added=1
                    }
                }
            }
        }
    }
}

And the execution result:

DEV>set List(1)=$lb(3,5,6,7,9),List(2)=$lb(1,2,6,9),List(3)=$lb(5,8,9),List(4)=$lb(2,4,6,8),List(5)=$lb(4,7,9)

DEV>d ##class(Custom.codegolf).MaxCoverage(.List,.res)
found listid:2

   checking value:1 - found it!
   checking value:2 - found it!
   checking value:6 - found it!
   checking value:9 - found it!
found listid:1

   checking value:3 - found it!
   checking value:5 - found it!
   checking value:6
   checking value:7 - found it!
   checking value:9
found listid:5

   checking value:4 - found it!
   checking value:7
   checking value:9
found listid:4

   checking value:2
   checking value:4
   checking value:6
   checking value:8 - found it!
DEV>zw res
res=$lb("2","1","5","4")
DEV>zw ^||lengthIdx
^||lengthIdx(3,3)=3
DEV>zw ^||covered
^||covered(1)=1
^||covered(2)=2
^||covered(3)=1
^||covered(4)=2
^||covered(5)=1
^||covered(6)=3
^||covered(7)=2
^||covered(8)=1
^||covered(9)=3

Timo Lindenschmid · May 6, 2025

%ExecDirectNoPrivs just omits the access check on prepare; access rights are still checked when the SQL is executed.

You can create a security role that grants SQL access to the required storage table via the Management Portal, then assign this access role to the UnknownUser.
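In SQL terms the setup is roughly the following (the role and table names are made up for illustration):

```sql
CREATE ROLE DataReader
GRANT SELECT ON SQLUser.MyStorageTable TO DataReader
GRANT DataReader TO UnknownUser
```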

Timo Lindenschmid · May 20, 2025

Hi,
Couple of things to check.

Is there any difference in server design? e.g. number of disks, SCSI controllers, volume/storage distribution etc.
Is the VM definition the same? e.g. storage driver versions (generic SCSI controller vs Hyper-V SCSI controller)
Is the OS on the host and in Hyper-V the same?
Is the storage provider design the same?
Is the IRIS config the same (i.e. the cpf file), and especially, are the settings below present?

[config]
wduseasyncio=1
asyncwij=8

I guess both IRIS versions are exactly the same build, although I have never heard of that affecting disk performance.

Timo Lindenschmid · May 20, 2025

Hi Pietro,

this depends on your application.
In general, you cannot define DB write access without having read access.
That said, you can define a user that has only SQL INSERT rights to specific tables without SELECT rights.
I have not tested this, but the SMP allows this type of setup.

Best Regards

Timo

Timo Lindenschmid · Jun 3, 2025

Hi Scott,
Check the Mirror Monitor to see whether all databases are caught up on the backup member, or whether one database is stuck dejournaling because of that journal file.
This usually happens if the backup has been out of sync for a long time and the file got corrupted/deleted and is no longer available on the primary or other mirror members.

Two options that I know of here: 1. restore that file from backup and supply it in the folder that the BACKUP member complains about, or 2. rebuild the backup member from your primary.

Timo Lindenschmid · Jun 3, 2025

Just as an addendum here: the PWS is configured to be a very stable management platform. This stability comes at the cost of performance, and if you put any load on the PWS it will not cope very well. During my time using it I always experienced lags and CSP timeouts when trying to work with the PWS with more than 4 concurrent power users.

Timo Lindenschmid · Jun 10, 2025

Hi Norman,

we need to separate 2 areas of fragmentation.
1. filesystem/OS-level fragmentation:
     nothing we can do about this, except running your trusted defrag tool if the filesystem has one and is actually in need of defragging.
2. database/global fragmentation:
     This is a very interesting topic. Usually nothing needs to be done for an IRIS database; IRIS is pretty good at managing global block density. (refer to https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cl…)
You can use the output of an integrity check to see the global density per global in the database. Both defrag and compact operations are non-destructive and non-interruptive, so even if they don't finish they can just be started again and will continue on.

Timo Lindenschmid · Jun 24, 2025

I would safeguard the code execution in the daemon by checking %Dictionary.CompiledClass to see if the chunk classes are compiled yet.
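A minimal sketch of such a guard (the chunk class name is made up for illustration):

```objectscript
// skip the work until the generated class has been compiled
if '##class(%Dictionary.CompiledClass).%ExistsId("MyApp.Chunk1") {
    quit  // not compiled yet - try again on the next daemon cycle
}
```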

Timo Lindenschmid · Jul 11, 2025

From a performance perspective I would not use objects to retrieve the data, but SQL.

SQL will take care of the conversion for you.

e.g.

select PAADM_PAPMI_DR->PAPMI_PAPER_DR->PAPER_StName
from SQLUser.PA_Adm
where PAADM_Hospital_DR = 2
and PAADM_AdmDate >= '19/03/2025'
and PAADM_AdmDate <= '19/03/2025'

Timo Lindenschmid · Jul 15, 2025

Hi Dimitrii,

There are various options here. You can use the JOB command to start a new process and then continue with your main process, or you can use the Work Queue Manager to create a work queue and feed it with items to process.
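The JOB variant might look like this (MyApp.Background is a hypothetical class; this is a sketch, not production code):

```objectscript
// fire-and-forget: spawn a background process and carry on in the main process
job ##class(MyApp.Background).Process(inputId)
write "spawned child process ",$zchild,!
```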

Best Regards

Timo

Timo Lindenschmid · Jul 15, 2025

It would be good to understand which versions you are talking about. You marked this as IRIS 2024.1, but you are talking about Caché ODBC drivers. It would also be good to know which licenses you are using, as you are talking of a paywall... Usually IRIS is not limited if you are using a full license. The only limitations in the Community edition are resources, connections, and access to some enterprise-level features (like ECP, sharding, API Manager).

Timo Lindenschmid · Jul 15, 2025

Hi,

a 404 error usually comes from Apache. As we don't know your Apache setup, it's difficult to advise. It might be that you need to add additional config to allow the new path to be accessible.
Also, seeing that you call this using a port number other than 80/443, I guess you are still using the PWS, which is not supported for production loads.
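If an external Apache with the Web Gateway module is in play, making a new application path accessible typically means adding something like the following to the Apache config (the /myapp path is made up for illustration):

```apache
# route the new application path through the InterSystems Web Gateway
<Location /myapp>
    CSP On
</Location>
```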