Amir Samary · Sep 26, 2017

Hi!

I would add that they exist for compatibility with systems migrated from other databases. On other SQL databases, it's common practice to use a meaningful attribute of the tuple, such as an SSN, a code or a part number, as the primary key. I think this is a very bad practice even on normal SQL databases, since a primary key, once created, can't be easily changed. If you want one of the meaningful fields of your tuple to be your primary key, use IdKey; I strongly advise against it, though.

So, instead of having one of the attributes of the tuple as the primary key (using IdKey), it's much better to have a sequential number. This is provided by different databases in different ways: Oracle has its SEQUENCEs, SQL Server has its IDENTITY columns. Continuing with the Oracle and SQL Server examples, many fields/columns on a table can be populated with values from a sequence (Oracle) or be auto-incremented (SQL Server), but only one of the fields can be the primary key. The primary key is used by other rows on the table, or by other tables, to reference this row. It can't be changed, precisely because other rows may be relying on it, and that is fine.

In Caché, we have the ID field, which is an auto-increment kind of primary key. Other rows/objects from this or other tables will reference this object through its ID. You can create your own unique fields, as many as you want: Code, PartNum, SSN, etc. You can define Unique indices for them all. Don't use IdKey to do that; that is not its purpose. IdKey will make that field THE primary key of the class: it will make the ID be the field you picked. That is very bad (IMHO).

On the other hand, there are cases where performance can be increased by using IdKey. It will save you a trip to the index global to get the real ID for that field before going to the data global to get the other fields you need. If we are talking about millions of accesses per second and you are pretty sure you won't need to change the value of that field once it is created, use IdKey; it will give you better performance. But if you do, beware that by choosing your own IdKey, you may not be able to use special kinds of indices such as bitmap indices or even iFind. It will depend on the type of IdKey you pick: if it's a string, for instance, iFind and bitmap indices won't work with your class anymore, since they rely on having a numeric sequential ID.
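
To make this concrete, here is a minimal sketch (class and property names are hypothetical):

Class Demo.Person Extends %Persistent
{

/// A meaningful business key: keep it unique, but leave the sequential ID alone
Property SSN As %String;

/// A plain unique index: lookups pay one extra trip to the index global
Index SSNIndex On SSN [ Unique ];

/// The IdKey alternative, for comparison. This would make SSN the physical ID,
/// saving that extra global reference, but SSN could never change again and
/// bitmap/iFind indices would stop working for this class.
// Index SSNKey On SSN [ IdKey ];

}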

Kind regards,

AS

Amir Samary · Oct 2, 2017

Hi!

HL7 has an XML schema that you can import as Ensemble classes. You can use these classes to implement the web services behind your business services and operations. Here is the XML schema link.

Another way of using this XML schema is not to import it as classes. Instead, use the XML schema import tool in the Ensemble Management Portal and use EnsLib.EDI.XML.Document with it. You could receive plain XML text and transform it into an EnsLib.EDI.XML.Document; that is faster to process than transforming it into objects. If you are interested, I can share some code.
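
For instance, a minimal sketch of that second approach (the schema name is a placeholder; check the exact ImportFromString signature in your version's class reference):

 // Turn the plain XML text you received into a virtual document
 Set tDoc = ##class(EnsLib.EDI.XML.Document).ImportFromString(tXMLString, .tSC)
 Quit:$System.Status.IsError(tSC) tSC
 // Assign a DocType so routing rules and DTLs can address paths by name
 Set tDoc.DocType = "MyXMLSchemaCategory:MyDocumentType"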

All that being said, I strongly suggest NOT TO DO ANY OF THIS.

HL7v2 XML encoding isn't used anywhere, and the usual reasoning behind adopting it just doesn't hold up. If you are asking someone to support HL7v2 and they are willing to spend their time implementing the standard, then explain to them that it is best to simply implement it the way 99% of the world uses it: with the normal |^~\& separators. It's a text string that they can simply embed into their web service call.

There are these two XML encoding versions:

  • EnsLib.HL7.Util.FormatSimpleXMLv2 - provided by Ensemble (the example you already have in this post)
  • The XML encoding from HL7 itself (link I gave you above) 

Both encodings are horrible, since they won't give you pretty field names that let you transform the message into an object and understand the meaning of the fields just by reading their names. Here is what a class generated from the standard HL7v2 XML schema looks like (the PID segment):

Class HL7251Schema.PID.CONTENT Extends (%Persistent, %XML.Adaptor) [ ProcedureBlock ]
{
Parameter XMLNAME = "PID.CONTENT";
Parameter XMLSEQUENCE = 1;
Parameter XMLTYPE = "PID.CONTENT";
Parameter XMLIGNORENULL = 1;
Property PID1 As HL7251Schema.PID.X1.CONTENT(XMLNAME = "PID.1", XMLREF = 1);
Property PID2 As HL7251Schema.PID.X2.CONTENT(XMLNAME = "PID.2", XMLREF = 1);
Property PID3 As list Of HL7251Schema.PID.X3.CONTENT(XMLNAME = "PID.3", XMLPROJECTION = "ELEMENT", XMLREF = 1) [ Required ];
Property PID4 As list Of HL7251Schema.PID.X4.CONTENT(XMLNAME = "PID.4", XMLPROJECTION = "ELEMENT", XMLREF = 1);

You see? No semantics, just structure. So why not simply send the standard HL7v2 text through your web service or HTTP service? Here are the advantages of using the standard HL7v2 text encoding instead of the XML encoding:

  • You will be working with the HL7 standard the way 99% of the world that uses HL7v2 does, instead of trying to use something that never caught on (the XML encoding)
  • Ensemble brings business services and business operations out of the box to receive/send HL7v2 text messages over TCP, FTP, File, HTTP and SOAP, so you won't have to code much in Ensemble to process them.
  • It takes less space (XML takes a lot of space)
  • It takes less processing (XML is heavier to parse)
  • Using the XML schema won't really help you with anything, since both XML encodings provide just a dumb structure for your data, without semantics.

Kind regards,

AS

Amir Samary · Oct 18, 2017

Hi!

The method %FileIndices(id) may help you. The problem is that you won't know which IDs to call it with: have new objects been added? Which ones have been changed?

I assume you have taken an old-style global, mapped it to classes, and created an index for it. That's OK. Normally, old-style applications will have their own indices on the same global or another one. They probably already have an index on Name that they use, and you should try to map that existing subscript as the name index on your mapped class instead of creating a new index.

But if you really want to go on and create a new index, you have three choices:

  1. Change the old-style code to use objects/SQL and stop setting the globals directly for that specific entity.
  2. Change the old-style code to set the new index global together with the data global (this is very common practice in old-style applications).
  3. Ask the programmers of the old-style application to call %FileIndices(id) themselves after adding/changing a record, so that all the indices defined on your mapped class are (re)computed (including the indices they already set manually and you have mapped on your class). You could offer to call this method to populate the index globals for them, instead of them setting the index globals themselves, to eliminate the redundant work; but your mappings would have to be very well defined to replace their code, with all their indices. (See the sketch below.)
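
For option 3, the call the legacy programmers would add is tiny. A minimal sketch, assuming a hypothetical mapped class named MyApp.Person:

 // After the legacy code sets ^PERSON(id) directly:
 Set tSC = ##class(MyApp.Person).%FileIndices(id)
 If $System.Status.IsError(tSC) Do $System.Status.DisplayError(tSC)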

As you can see, all options imply changing the old-style code. I would push for option 1, as this can be the start of some really good improvements to your old-style application.

There is no middle ground here, unless you want to rebuild all your indices every day.

Depending on how many places this global is referenced in your application, how the application is compartmentalized, and whether you are using indirection throughout your application, these changes can be very simple or very complex to make and test.

Another way of going about this is to use the journal APIs to read the journal files, detect changes to this specific global, and update the index accordingly. That would save you from touching the old-style application. I have never done this before, but I believe it is quite feasible.

Kind regards,

AS

Amir Samary · Oct 27, 2017

Hi!

Are you calling the workflow using an asynchronous <call> activity and using a <sync> activity to wait on it? For how long does your <sync> activity wait? Beware that a workflow task only stays in the workflow inbox while that <sync> activity is pending. So, if you don't have a <sync> activity, or if the timeout on the <sync> activity is too short, the task will not appear in the workflow inbox, or will appear only momentarily.

The idea is that the BPL can put a timeout, as an SLA, on that task. If no one finishes the task before that timeout expires, you will receive whatever (incomplete) response is there and can, perhaps, escalate to other roles, send alerts, etc.
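
Here is a minimal BPL fragment illustrating the pattern. The role name and subject are placeholders, and you should verify the exact <sync> timeout syntax for your version in the BPL reference:

<call name='AskManager' target='ManagerWorkflowRole' async='1'>
   <request type='EnsLib.Workflow.TaskRequest'>
      <assign property='callrequest.%Subject' value='"Please review this order"' />
   </request>
   <response type='EnsLib.Workflow.TaskResponse' />
</call>
<!-- The task stays in the workflow inbox only while this sync is pending -->
<sync name='WaitForManager' calls='AskManager' type='all' timeout='86400' />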

Kind regards,

AS

Amir Samary · Oct 30, 2017

Hi!

I like to use the EnsLib.EDI.XML.Document class to move XML around, because it shows up nicely in the Visual Message Trace and I can use message routing and DTLs to route and transform it, since it is a virtual document like HL7, ASTM or EDIFACT. Try looking at the documentation here:

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=EXML_tools_classes

You will notice there are out-of-the-box business services to grab a file from an FTP server and take its XML content as an EnsLib.EDI.XML.Document. If you can't use one directly, you will at least have an example to start with and make one of your own.

That is the Business Service. 

For your Business Operation, you have two options:

  1. Use the SOAP Wizard to create a business operation for you, based on your WSDL, and use it without changes
  2. Use the SOAP Wizard to create only the proxy to the web service, and build your own business operation around it

Option 2 is nice if you want to hide the intricacies and details of using that web service. Sometimes a web service will ask for a username and password in the body of the message, for instance, and you don't want to expose that. You would create a credential property on the business operation and use it instead, hiding these details from the user of the business operation. Another reason for implementing the business operation yourself is to change the data types and names of properties. That is our case here: your web service probably expects to receive an XML string, while we are dealing with an EnsLib.EDI.XML.Document. Obviously, that won't work out of the box.

So, I suggest you create a business operation that receives the EnsLib.EDI.XML.Document and calls the web service proxy underneath, extracting the XML string from the EnsLib.EDI.XML.Document request. You can use the SOAP Wizard to create the first version of your business operation and use it as a starting point, making the appropriate changes so it receives an EnsLib.EDI.XML.Document as its request.
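
Here is a minimal sketch of such a business operation. Every Demo.* name is a placeholder, the proxy call must match whatever the SOAP Wizard generated for you, and you should confirm the OutputToString arguments for your version:

Class Demo.XML.SoapOperation Extends Ens.BusinessOperation
{

    Method SendDocument(pRequest As EnsLib.EDI.XML.Document, Output pResponse As Ens.Response) As %Status
    {
        // Extract the raw XML text from the virtual document
        Set tXML = pRequest.OutputToString()

        // Call the wizard-generated proxy (hypothetical class and method names)
        Set tProxy = ##class(Demo.MyService.Proxy).%New()
        Set tAnswer = tProxy.Send(tXML)

        // Map tAnswer into your own response class as needed
        Set pResponse = ##class(Ens.Response).%New()
        Quit $$$OK
    }

    XData MessageMap
    {
    <MapItems>
        <MapItem MessageType="EnsLib.EDI.XML.Document">
            <Method>SendDocument</Method>
        </MapItem>
    </MapItems>
    }

}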

Kind regards,

AS

Amir Samary · Nov 1, 2017

Hi!

Yes, there is definitely a rollback if you return an error, so that approach won't ever work. If you want to prevent people from deleting records in this table, simply don't give them permission to do it.

If you configure your security right, you can have a role for the user that is being used to access the application (by JDBC, ODBC or dynamic result sets such as %SQL.Statement or %Library.ResultSet) and give that role the right permissions: permissions on the database (say, %DB_MYDATABASE) and specific GRANTs for SELECT, INSERT, UPDATE and DELETE on specific tables.

Then you can leave this specific table without the DELETE grant and oblige everyone to UPDATE the Deleted property instead.
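
For example (schema, table and role names are hypothetical), the grants could be issued like this:

 // Grant everything except DELETE, so applications must flip the Deleted flag instead
 Set tResult = ##class(%SQL.Statement).%ExecDirect(,"GRANT SELECT, INSERT, UPDATE ON MyApp.Data_Record TO MyAppRole")
 If tResult.%SQLCODE<0 Write "Error: ",tResult.%Message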

You can even use row-level security to hide such records from "normal" users while letting them appear to "report" or "analytics" users.

Beware, though, that the enforcement of SQL privileges only works through JDBC, ODBC or dynamic result set objects such as %SQL.Statement. If you issue a %DeleteId() on an object, it will be deleted. That is by design: if you are running ObjectScript code and have RW privileges on the database, you can do almost anything, so it makes no sense to check SQL privileges on every method call; the system would spend most of its time checking privileges instead of working. What is normally done instead is to set up your security roles and users properly and protect the entry points of your application. For instance, you can use custom resources to label and protect:

  • CSP Pages using the SECURITYRESOURCE parameter
  • General methods - by checking once whether the user holds a specific resource, using the $System.Security.Check() method
  • SOAP and REST services - by protecting the CSP application that exposes them, by using the SECURITYRESOURCE parameter on your %CSP.REST class, or by using $System.Security.Check() in each individual method.

The idea is to protect entire areas of the application instead of checking privileges on every single database access.
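
For example, a guard at the top of an entry-point method could look like this ("MyApp.Admin" is a hypothetical resource name):

 If '$System.Security.Check("MyApp.Admin","USE")
 {
     Quit $$$ERROR($$$GeneralError,"You are not authorized to perform this action")
 }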

HTH!

Kind regards,

AS

Amir Samary · Dec 5, 2017

Hi!

I can see Auditing is enabled on your system. Have you checked under "Configure System Events" that the event "%Ensemble/%Production/StartStop" is enabled?

Kind regards,

Amir Samary

Amir Samary · Oct 26, 2018

Hi!

It is not hard to build a task that searches for the message headers from a specific service, process or operation in a date range and deletes the headers and their message bodies. You can start by querying Ens.MessageHeader; you will notice there are indices that let you search by date range and source config item pretty quickly (see the sketch below).
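
A minimal sketch of the core of such a task (the config item name and date range are placeholders; check what your message body classes require before deleting):

 Set tSQL = "SELECT ID, MessageBodyClassName, MessageBodyId FROM Ens.MessageHeader"
 Set tSQL = tSQL_" WHERE SourceConfigName = ? AND TimeCreated BETWEEN ? AND ?"
 Set tRS = ##class(%SQL.Statement).%ExecDirect(,tSQL,"My.Service","2018-10-01 00:00:00","2018-10-31 23:59:59")
 While tRS.%Next()
 {
     // Delete the message body first, then its header
     If tRS.MessageBodyClassName'="" Do $classmethod(tRS.MessageBodyClassName,"%DeleteId",tRS.MessageBodyId)
     Do ##class(Ens.MessageHeader).%DeleteId(tRS.ID)
 }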

BUT: 

I always recommend that my customers create a separate production for each distinct business need. That allows you to have different backup strategies, purging strategies and even maintenance strategies (sometimes you want to touch one production and be sure the others will not be touched).

You are also able to take these productions to other servers when you need them to scale independently (oops! now this new service is receiving a big volume of messages and is disturbing the others!). On different servers, you may be able to patch one production (one server) while the others are running. This is especially feasible now with InterSystems IRIS: you can easily migrate from Ensemble to IRIS and have one production running on a VM with 2 cores, another on a VM with 4 cores, and so on. You don't have to start with 8 cores and scale in increments of 4 like Ensemble. So you can pay the same you are paying today, but with the possibility of having more governance.

PS: Mirroring on IRIS is licensed differently from mirroring with Ensemble: with IRIS, you do have to pay for the mirrors as well, while with Ensemble you don't.

So, I sincerely believe you will be happier having another production. It's a good start toward more governance.

Kind regards,

Amir Samary

Senior Sales Engineer - InterSystems

Amir Samary · Nov 2, 2018
/// <p>This Reader is to be used with %ResultSet. It will allow you to scan a CSV file, record by record.
/// There are three queries available for reading:</p>
/// <ol>
/// <li>CSV - comma separated
/// <li>CSV2 - semicolon separated
/// <li>TSV - tab separated
/// </ol>
/// <p>Use the query according to your file type! The reader expects to find a header on the first line with the names of the fields.
/// This will be used to name the columns when you use the method Get().</p>
/// <p>The reader can deal with fields quoted and unquoted automatically.</p>  
///
///    <p>This assumes that your CSV file has a header line.
/// This custom query supports Comma Separated Values (CSV), Semicolon Separated Values (CSV2)
/// and Tab Separated Values (TSV).</p>    
///
/// <EXAMPLE>
///
///            ; Comma Separated Values
///            ;Set oCSVFileRS=##class(%ResultSet).%New("IRISDemo.Util.FileReader:CSV")
///            ;
///            ; Semicolon Separated Values
///            ;Set oCSVFileRS=##class(%ResultSet).%New("IRISDemo.Util.FileReader:CSV2")
///            ;
///            ; Tab Separated Values
///            Set oCSVFileRS=##class(%ResultSet).%New("IRISDemo.Util.FileReader:TSV")
///            
///            Set tSC = oCSVFileRS.Execute(pFileName)
///            Quit:$System.Status.IsError(tSC)
///            
///            While oCSVFileRS.Next()
///            {
///                Set tDataField1 = oCSVFileRS.Get("Column Name 1")
///                Set tDataField2 = oCSVFileRS.Get("Column Name 2")
///                Set tDataField3 = oCSVFileRS.GetData(3)
///                
///                // Your code goes here            
///            }
///    </EXAMPLE>                    
///
Class IRISDemo.Util.FileReader Extends %RegisteredObject
{

    /// Use this Query with %ResultSet class to read comma (,) separated files.
    Query CSV(pFileName As %String, pFileEncoding As %String = "UTF8") As %Query
    {
    }
    
    /// Use this Query with %ResultSet class to read semicolon (;) separated files.
    Query CSV2(pFileName As %String, pFileEncoding As %String = "UTF8") As %Query
    {
    }
    
    /// Use this Query with %ResultSet class to read tab separated files.
    Query TSV(pFileName As %String, pFileEncoding As %String = "UTF8") As %Query
    {
    }
    
    /// Index a header from a CSV file so we can later get every row of the file
    /// and read every column by its name. This method supports fields that are single-quoted,
    /// double-quoted or unquoted, or a mix of the three. The only thing required for this to work
    /// is that the correct field separator is specified (Comma, Semicolon or Tab).
    ClassMethod IndexHeader(pHeader As %String, pSeparator As %String = ",", Output pHeaderIndex)
    {
        //How many columns?
        Set pHeaderIndex=$Length(pHeader,pSeparator)
        
        Set tRegexSeparator=pSeparator
        
        If tRegexSeparator=$Char(9)
        {
            Set tRegexSeparator="\x09"
        }
        
        //Let's build a regular expression to read all the data without the quotes...
        Set pHeaderIndex("Regex")=""
        For i=1:1:pHeaderIndex
        {
            Set $Piece(pHeaderIndex("Regex"),tRegexSeparator,i)="\""?'?(.*?)\""?'?"
        }
        Set pHeaderIndex("Regex")="^"_pHeaderIndex("Regex")
        
        //Let's use this regular expression to index the column names...
        Set oMatcher = ##class(%Regex.Matcher).%New(pHeaderIndex("Regex"))
        Do oMatcher.Match(pHeader)
        
        //Now let's index the column names
        For i=1:1:oMatcher.GroupCount
        {
            Set pHeaderIndex("Columns",i)=$ZStrip(oMatcher.Group(i),"<>W")
        }
    }

    ClassMethod IndexRow(pRow As %String, ByRef pHeaderIndex As %String, Output pIndexedRow As %String)
    {
        Set oMatcher = ##class(%Regex.Matcher).%New(pHeaderIndex("Regex"))
        Do oMatcher.Match(pRow)
        
        //Now let's index the field values of this row
        For i=1:1:oMatcher.GroupCount
        {
            Set $List(pIndexedRow,i)=oMatcher.Group(i)
        }
    }

    ClassMethod CSVGetInfo(colinfo As %List, parminfo As %List, idinfo As %List, qHandle As %Binary, extoption As %Integer = 0, extinfo As %List) As %Status
    {
            Quit ..FileGetInfo(.colinfo, .parminfo, .idinfo, .qHandle, .extoption, .extinfo)
    }
    
    ClassMethod CSV2GetInfo(colinfo As %List, parminfo As %List, idinfo As %List, qHandle As %Binary, extoption As %Integer = 0, extinfo As %List) As %Status
    {
            Quit ..FileGetInfo(.colinfo, .parminfo, .idinfo, .qHandle, .extoption, .extinfo)
    }
    
    ClassMethod TSVGetInfo(colinfo As %List, parminfo As %List, idinfo As %List, qHandle As %Binary, extoption As %Integer = 0, extinfo As %List) As %Status
    {
            Quit ..FileGetInfo(.colinfo, .parminfo, .idinfo, .qHandle, .extoption, .extinfo)
    }
    
    ClassMethod FileGetInfo(colinfo As %List, parminfo As %List, idinfo As %List, qHandle As %Binary, extoption As %Integer = 0, extinfo As %List) As %Status
    {
        Merge tHeaderIndex = qHandle("HeaderIndex")
        Set colinfo = ""
        For i=1:1:tHeaderIndex
        {
            Set tColName = tHeaderIndex("Columns",i)
            Set colinfo=colinfo_$LB($LB(tColName))    
        }
        
        Set parminfo=$ListBuild("pFileName","pFileEncoding")
        Set extinfo=""
        Set idinfo=""
        Quit $$$OK
    }

    ClassMethod CSVExecute(ByRef qHandle As %Binary, pFileName As %String, pFileEncoding As %String = "UTF8") As %Status
    {
        Set qHandle("Separator")=","
        Quit ..FileExecute(.qHandle, pFileName, pFileEncoding)
    }

    ClassMethod CSV2Execute(ByRef qHandle As %Binary, pFileName As %String, pFileEncoding As %String = "UTF8") As %Status
    {
        Set qHandle("Separator")=";"
        Quit ..FileExecute(.qHandle, pFileName, pFileEncoding)
    }

    ClassMethod TSVExecute(ByRef qHandle As %Binary, pFileName As %String, pFileEncoding As %String = "UTF8") As %Status
    {
        Set qHandle("Separator")=$Char(9)
        Quit ..FileExecute(.qHandle, pFileName, pFileEncoding)
    }

    ClassMethod FileExecute(ByRef qHandle As %Binary, pFileName As %String, pFileEncoding As %String = "UTF8") As %Status
    {
        Set tSC = $System.Status.OK()
        Try
        {
            Set oFile = ##class(%Stream.FileCharacter).%New()
            If pFileEncoding'="" Set oFile.TranslateTable=pFileEncoding
            Set oFile.Filename=pFileName
            
            Set tHeader=oFile.ReadLine()
            Do ..IndexHeader(tHeader, qHandle("Separator"), .tHeaderIndex)
            
            Merge qHandle("HeaderIndex")=tHeaderIndex
            Set qHandle("File")=oFile
        }
        Catch (oException)
        {
            Set tSC = oException.AsStatus()
        }
        
        Quit tSC
    }

    ClassMethod CSVClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileClose(.qHandle)
    }

    ClassMethod CSV2Close(ByRef qHandle As %Binary) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileClose(.qHandle)
    }

    ClassMethod TSVClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileClose(.qHandle)
    }

    ClassMethod FileClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = FileExecute ]
    {
        Set tSC = $System.Status.OK()
        Try
        {
            Kill qHandle("File")
            Kill qHandle("HeaderIndex")
        }
        Catch (oException)
        {
            Set tSC = oException.AsStatus()
        }
        
        Quit tSC
    }

    ClassMethod CSVFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileFetch(.qHandle, .Row, .AtEnd)
    }

    ClassMethod CSV2Fetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileFetch(.qHandle, .Row, .AtEnd)
    }

    ClassMethod TSVFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = FileExecute ]
    {
        Quit ..FileFetch(.qHandle, .Row, .AtEnd)
    }

    ClassMethod FileFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = FileExecute ]
    {
        Set tSC = $System.Status.OK()
        Try
        {
            Set Row = ""
            
            Set oFile = qHandle("File")
            If oFile.AtEnd
            {
                Set AtEnd=1
                Quit
            }
    
            Merge tHeaderIndex=qHandle("HeaderIndex")
            
            //Skip blank lines; if only blank lines remain, we are done
            Set tRow=""
            While 'oFile.AtEnd
            {
                Set tRow=oFile.ReadLine()
                Continue:tRow=""
                Quit
            }
            If tRow=""
            {
                Set AtEnd=1
                Quit
            }

            Do ..IndexRow(tRow, .tHeaderIndex, .Row)
        }
        Catch (oException)
        {
            Set tSC = oException.AsStatus()
        }
        
        Quit tSC
    }
}
Amir Samary · Feb 25, 2020

"What files/directories should I keep track of from the durable %SYS directory? (e.g: I want to bind-mount those files/directories and import the code and see the Namespace and all the instance configuration )."

Answer: None.

Your configuration needs to be code. Right now, the best approach to having your namespaces/databases created and configured, along with CSP applications, security, etc., is to use %Installer manifests during the build process of the Dockerfile. I personally don't use Durable %SYS on my development machine. I prefer to use %Installer to configure everything, and if I need to pre-load tables with data, I load them from CSV files that are on GitHub along with the source code.
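
As a minimal sketch of what such a manifest looks like (namespace, database name and path are placeholders):

Class MyApp.Installer
{

XData MyInstall [ XMLNamespace = INSTALLER ]
{
<Manifest>
    <Namespace Name="MYAPP" Create="yes" Code="MYAPP" Data="MYAPP">
        <Configuration>
            <Database Name="MYAPP" Dir="/usr/irissys/mgr/myapp" Create="yes"/>
        </Configuration>
    </Namespace>
</Manifest>
}

/// Entry point to call during the Dockerfile build, e.g. Do ##class(MyApp.Installer).setup()
ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
{
    Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "MyInstall")
}

}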

That allows you to source-control your code table contents as well. Test records can be inserted with the same procedure.

For an example of this, look at this demo (based on IRIS):

https://github.com/intersystems-community/irisdemo-demo-fraudprevention

Look at the normalized_datalake image. It loads CSV files into the tables as part of the Dockerfile build process. You will notice that this image is based on a base image that has some standard reusable code. The source for this base image is here:

https://github.com/intersystems-community/irisdemo-base-irisdb-community

I was using Atelier when I built this image, but the principle is the same. I am now doing the same with VS Code.

This is another demo based on IRIS for Health:

https://github.com/intersystems-community/irisdemo-demo-readmission

Look at the riskengine image. It loads data from JSON files into the data model as part of the build process. The JSON files are created by Synthea, an open-source tool for generating synthetic patient data.

If you use this method, any developer will be able to jump between versions of your software very quickly. If you need to fix a problem on an old version, you can just check out that version tag, build the image (which will load the tables) and make the changes you want, looking at that exact version with the exact data needed for it to work.

When you are done, you can go back to your previous branch, rebuild the image again using the current source code (and data in CSV/JSON files) and keep going with your new features.

Just to be clear: I don't mean you shouldn't use Durable %SYS. You must use Durable %SYS on Production!

But I have strong reservations about using it on your PC (the developer environment). That's all. Even a central development environment (where unit tests could be run) wouldn't need it.

But your UAT/QA and production environments should definitely use Durable %SYS, and you should come up with your own DevOps approach to deploying your software on those environments so you can test the upgrade procedure.

Kind regards,

AS

Amir Samary · Jul 11, 2020
<scope xpos='200' ypos='250' xend='200' yend='800' >
<code name='Transform' xpos='200' ypos='350' >
<![CDATA[ 
    
    /*
     If I use a <transform> activity with indirection (@$parameter(request,"%DTL")) and
     assign the returned object to the context, and an error is produced when the context is
     saved because the returned object has, say, missing properties, IRIS will only see this
     error too late, and the scope we defined will not be able to help us deal with this fault.
     
     So, to avoid this bug, we call the transformation from a <code> activity instead,
     and try to save the returned object before assigning it to the context and returning.
     If we can't save it, we assign the error to the status variable and leave the context
     alone. This way the <scope> will capture the problem and take us to the
     <catchall> activity. 
    */
    
    Set tTransformClass=$parameter(request,"%DTL")
    Set status = $classmethod(tTransformClass, "Transform", request, .normalizedRequest)
    If $$$ISERR(status) Quit
    
    Set status = normalizedRequest.%Save()
    If $$$ISERR(status) Quit
    
    Set context.NormalizedData=normalizedRequest
]]>
</code>
<assign name="Done" property="context.Action" value="&quot;Done&quot;" action="set" xpos='200' ypos='550' />
<faulthandlers>
   <catchall name='Normalization error' xpos='200' ypos='650' xend='200' yend='1150' >
      <trace name='Normalization problem' value='"Normalization problem"' xpos='200' ypos='250' />
   </catchall>
</faulthandlers>
</scope>