#Embedded Python

0 Followers · 283 Posts

Embedded Python refers to the integration of the Python programming language into the InterSystems IRIS kernel, allowing developers to work with data and develop server-side business logic in Python.

Documentation.

Question Ashok Kumar T · Sep 2, 2024

Hello Community,

I got a PROTECT error while running functions, but I am able to call classmethods and methods in a class definition with classMethodObject, classMethodValue, etc. from Python without any errors.
Python code:

irispy.functionString('fnString','IRISPython',14)
irispy.function('fnString','IRISPython',14)
raise RuntimeError(error_message)
RuntimeError: <PROTECT> *Function not allowed
IRISPython.mac
fnString(fn1) public {
  quit "Hello "_fn1
}

7
0 250
Question Timothy Leavitt · Mar 31, 2025

I'm exploring this right now: given a bunch of types defined as Pydantic models, how can I come up with an equivalent %RegisteredObject/%SerialObject and convert to/from (e.g., to support persistence and match validation as much as possible)?

People who know Python better than I do (e.g., your average undergraduate from this decade): is this a stupid idea or a cool idea? Has anyone else done this before?

5
0 159
Article Muhammad Waseem · Apr 5, 2025 6m read

Hi Community,
Traditional keyword-based search struggles with nuanced, domain-specific queries. Vector search, however, leverages semantic understanding, enabling AI agents to retrieve and generate responses based on context—not just keywords.
This article provides a step-by-step guide to creating an Agentic AI RAG (Retrieval-Augmented Generation) application.

Implementation Steps:

  1. Create Agent Tools
    • Add ingest functionality: automatically ingest and index documents (e.g., InterSystems IRIS 2025.1 Release Notes).
    • Implement Vector Search Functionality
  2. Create Vector Search Agent
  3. Handoff to Triage (Main Agent)
  4. Run The Agent 
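
For a flavor of step 1, here is a minimal, hypothetical sketch of a vector search tool in embedded Python (the docs_table name, its columns, and the embedding input are invented; it assumes the IRIS SQL vector functions TO_VECTOR and VECTOR_DOT_PRODUCT):

import iris  # the iris module is available inside embedded Python

def vector_search(query_embedding: str, top_k: int = 3) -> list:
    # docs_table and its content/embedding columns are hypothetical
    sql = f"""
        SELECT TOP {int(top_k)} content
        FROM docs_table
        ORDER BY VECTOR_DOT_PRODUCT(embedding, TO_VECTOR(?, double)) DESC
    """
    return [row[0] for row in iris.sql.exec(sql, query_embedding)]
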
0
1 440
Announcement Derek Gervais · Apr 4, 2025

You can learn a lot from a first impression; we want to hear about yours.

As a continuation of our User Insights Interview program (see this post for more details), we’re expanding our scope to include Python developers, particularly those of you who are new to working with InterSystems technology. We’re looking to conduct one-on-one interviews to hear your honest thoughts about getting started: What made sense, what didn’t, and where we could improve.

Interested in sharing your thoughts? Sign up to participate here.

0
0 95
Article Henry Pereira · Apr 2, 2025 17m read

Image generated by OpenAI DALL·E

I'm a huge sci-fi fan, and while I'm fully on board the Star Wars train (apologies to my fellow Trekkies!), I've always appreciated the classic episodes of Star Trek from my childhood. The diverse crew of the USS Enterprise, each member mastering a unique role, is a perfect metaphor for understanding AI agents and their power in projects like Facilis. So, let's embark on an intergalactic mission, leveraging AI as our ship's crew, and boldly go where no man has gone before! This teamwork concept is a wonderful analogy for how AI agents work and how we use them in our DC-Facilis project. So, let's dive in and assume the role of a starship captain, leading an AI crew into unexplored territories!

Welcome to CrewAI!

To manage our AI crew, we use a fantastic framework called CrewAI. It's lean, lightning-fast, and operates as a multi-agent Python platform. One of the reasons we love it, besides the fact that it was created by another Brazilian, is its incredible flexibility and role-based design.

from crewai import Agent, Task, Crew

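Before meeting the crew, here is a minimal, self-contained sketch (not taken from Facilis; the agent and task are placeholders) of how CrewAI wires agents and tasks together. It assumes the crewai package is installed and an LLM is configured via environment variables:

from crewai import Agent, Task, Crew, Process

greeter = Agent(
    role="Greeter",
    goal="Produce a short greeting",
    backstory="You greet new crew members warmly.",
)
greeting_task = Task(
    description="Write a one-line greeting for a new crew member.",
    expected_output="A single greeting sentence.",
    agent=greeter,
)
# A Crew runs its tasks (sequentially here) and returns the final output
crew = Crew(agents=[greeter], tasks=[greeting_task], process=Process.sequential)
print(crew.kickoff())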

Meet the Planners

In Facilis, our AI agents are divided into two groups. Let's start with the first one, which I like to call "The Planners."

The Extraction Agent

The main role of Facilis is to take a natural language description of a REST service and auto-magically create all the necessary interoperability. So, our first crew member is the Extraction Agent. This agent is tasked with "extracting" API specifications from a user's prompt description.

Here's what the Extraction Agent looks out for:

  • Host (required)
  • Endpoint (required)
  • Params (optional)
  • Port (if available)
  • JSON model (for POST/PUT/PATCH/DELETE)
  • Authentication (if applicable)
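    # The snippets below assume the article's full source provides:
    # import json
    # from textwrap import dedent
    # from typing import Dict, List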
    def create_extraction_agent(self) -> Agent:
        return Agent(
            role='API Specification Extractor',
            goal='Extract API specifications from natural language descriptions',
            backstory=dedent("""
                You are specialized in interpreting natural language descriptions
                and extracting structured API specifications.
            """),
            allow_delegation=True,
            llm=self.llm
        )

    def extract_api_specs(self, descriptions: List[str]) -> Task:
        return Task(
            description=dedent(f"""
                Extract API specifications from the following descriptions:
                {json.dumps(descriptions, indent=2)}
                
                For each description, extract:
                - host (required)
                - endpoint (required)
                - HTTP_Method (required)
                - params (optional)
                - port (if available)
                - json_model (for POST/PUT/PATCH/DELETE)
                - authentication (if applicable)
                
                Mark any missing required fields as 'missing'.
                Return results in JSON format as an array of specifications.
            """),
            expected_output="""A JSON array containing extracted API specifications with all required and optional fields""",
            agent=self.extraction_agent
        )

The Validation Agent

Next up, the Validation Agent! Their mission is to ensure that the API specifications gathered by the Extraction Agent are correct and consistent. They check:

  1. Valid host format
  2. Endpoint starting with '/'
  3. Valid HTTP methods (GET, POST, PUT, DELETE, PATCH)
  4. Valid port number (if provided)
  5. JSON model presence for applicable methods (POST/PUT/PATCH/DELETE)

    def create_validation_agent(self) -> Agent:
        return Agent(
            role='API Validator',
            goal='Validate API specifications for correctness and consistency',
            backstory=dedent("""
                You are an expert in API validation, ensuring all specifications
                meet the required standards and format.
            """),
            allow_delegation=False,
            llm=self.llm
        )

    def validate_api_spec(self, extracted_data: Dict) -> Task:
        return Task(
            description=dedent(f"""
                Validate the following API specification:
                {json.dumps(extracted_data, indent=2)}
                
                Check for:
                1. Valid host format
                2. Endpoint starts with '/'
                3. Valid HTTP method (GET, POST, PUT, DELETE, PATCH)
                4. Valid port number (if provided)
                5. JSON model presence for POST/PUT/PATCH/DELETE methods
                
                Return validation results in JSON format.
            """),
            expected_output="""A JSON object containing validation results with any errors or confirmation of validity""",
            agent=self.validation_agent
        )

The Interaction Agent

Moving on, we meet the Interaction Agent, our User Interaction Specialist. Their role is to obtain any missing API specification fields that were marked by the Extraction Agent and validate them based on the Validation Agent's findings. They interact directly with users to fill any gaps.

The Production Agent

We need two crucial pieces of information to create the necessary interoperability: namespace and production name. The Production Agent engages with users to gather this information, much like the Interaction Agent.

The Documentation Transformation Agent

Once the specifications are ready, it's time to convert them into OpenAPI documentation. The Documentation Transformation Agent, an OpenAPI specialist, takes care of this.

    def create_transformation_agent(self) -> Agent:
        return Agent(
            role='OpenAPI Transformation Specialist',
            goal='Convert API specifications into OpenAPI documentation',
            backstory=dedent("""
                You are an expert in OpenAPI specifications and documentation.
                Your role is to transform validated API details into accurate
                and comprehensive OpenAPI 3.0 documentation.
            """),
            allow_delegation=False,
            llm=self.llm
        )

    def transform_to_openapi(self, validated_endpoints: List[Dict], production_info: Dict) -> Task:
        return Task(
            description=dedent(f"""
                Transform the following validated API specifications into OpenAPI 3.0 documentation:
                
                Production Information:
                {json.dumps(production_info, indent=2)}
                
                Validated Endpoints:
                {json.dumps(validated_endpoints, indent=2)}
                
                Requirements:
                1. Generate complete OpenAPI 3.0 specification
                2. Include proper request/response schemas
                3. Document all parameters and request bodies
                4. Include authentication if specified
                5. Ensure proper path formatting
                
                Return the OpenAPI specification in both JSON and YAML formats.
            """),
            expected_output="""A JSON object containing the complete OpenAPI 3.0 specification with all endpoints and schemas""",
            agent=self.transformation_agent
        )

The Review Agent

After transformation, the OpenAPI documentation undergoes a meticulous review to ensure compliance and quality. The Review Agent follows this checklist:

  1. OpenAPI 3.0 Compliance
  • Correct version specification
  • Required root elements
  • Schema structure validation
  2. Completeness
  • All endpoints documented
  • Fully specified parameters
  • Defined request/response schemas
  • Properly configured security schemes
  3. Quality Checks
  • Consistent naming conventions
  • Clear descriptions
  • Proper use of data types
  • Meaningful response codes
  4. Best Practices
  • Proper tag usage
  • Consistent parameter naming
  • Appropriate security definitions
Finally, if everything looks good, the Review Agent reports a healthy JSON object with the following structure:

{
 "is_valid": boolean,
 "approved_spec": object (the reviewed and possibly corrected OpenAPI spec),
 "issues": [array of strings describing any issues found],
 "recommendations": [array of improvement suggestions]
}

    def create_reviewer_agent(self) -> Agent:
        return Agent(
            role='OpenAPI Documentation Reviewer',
            goal='Ensure OpenAPI documentation compliance and quality',
            backstory=dedent("""
                You are the final authority on OpenAPI documentation quality and compliance.
                With extensive experience in OpenAPI 3.0 specifications, you meticulously
                review documentation for accuracy, completeness, and adherence to standards.
            """),
            allow_delegation=True,
            llm=self.llm
        )


    def review_openapi_spec(self, openapi_spec: Dict) -> Task:
        return Task(
            description=dedent(f"""
                Review the following OpenAPI specification for compliance and quality:
                
                {json.dumps(openapi_spec, indent=2)}
                
                Review Checklist:
                1. OpenAPI 3.0 Compliance
                - Verify correct version specification
                - Check required root elements
                - Validate schema structure
                
                2. Completeness
                - All endpoints properly documented
                - Parameters fully specified
                - Request/response schemas defined
                - Security schemes properly configured
                
                3. Quality Checks
                - Consistent naming conventions
                - Clear descriptions
                - Proper use of data types
                - Meaningful response codes
                
                4. Best Practices
                - Proper tag usage
                - Consistent parameter naming
                - Appropriate security definitions
                
                You must return a JSON object with the following structure:
                {{
                    "is_valid": boolean,
                    "approved_spec": object (the reviewed and possibly corrected OpenAPI spec),
                    "issues": [array of strings describing any issues found],
                    "recommendations": [array of improvement suggestions]
                }}
            """),
            expected_output="""A JSON object containing: is_valid (boolean), approved_spec (object), issues (array), and recommendations (array)""",
            agent=self.reviewer_agent
        )

The Iris Agent

The last agent in the planner group is the Iris Agent, who sends the finalized OpenAPI documentation to Iris.


    def create_iris_i14y_agent(self) -> Agent:
        return Agent(
            role='Iris I14y Integration Specialist',
            goal='Integrate API specifications with Iris I14y service',
            backstory=dedent("""
                You are responsible for ensuring smooth integration between the API
                documentation system and the Iris I14y service. You handle the
                communication with Iris, validate responses, and ensure successful
                integration of API specifications.
            """),
            allow_delegation=False,
            llm=self.llm
        )

    def send_to_iris(self, openapi_spec: Dict, production_info: Dict, review_result: Dict) -> Task:
        return Task(
            description=dedent(f"""
                Send the approved OpenAPI specification to Iris I14y service:

                Production Information:
                - Name: {production_info['production_name']}
                - Namespace: {production_info['namespace']}
                - Is New: {production_info.get('create_new', False)}

                Review Status:
                - Approved: {review_result['is_valid']}
                
                Return the integration result in JSON format.
            """),
            expected_output="""A JSON object containing the integration result with Iris I14y service, including success status and response details""",
            agent=self.iris_i14y_agent
        )

import asyncio
import json
import logging
import os
from typing import Dict

import aiohttp

class IrisIntegrationError(Exception):
    """Raised on Iris I14y integration failures (stub; defined in the project's full source)."""

class IrisI14yService:
    def __init__(self):
        self.logger = logging.getLogger('facilis.IrisI14yService')
        self.base_url = os.getenv("FACILIS_URL", "http://dc-facilis-iris-1:52773") 
        self.headers = {
            "Content-Type": "application/json"
        }
        self.timeout = int(os.getenv("IRIS_TIMEOUT", "504"))  # in milliseconds
        self.max_retries = int(os.getenv("IRIS_MAX_RETRIES", "3"))
        self.logger.info("IrisI14yService initialized")

    async def send_to_iris_async(self, payload: Dict) -> Dict:
        """
        Send payload to Iris generate endpoint asynchronously
        """
        self.logger.info("Sending payload to Iris generate endpoint")
        if isinstance(payload, str):
            try:
                json.loads(payload)  
            except json.JSONDecodeError:
                raise ValueError("Invalid JSON string provided")
        
        retry_count = 0
        last_error = None

        # Create timeout for aiohttp
        timeout = aiohttp.ClientTimeout(total=self.timeout / 1000)  # Convert ms to seconds

        while retry_count < self.max_retries:
            try:
                self.logger.info(f"Attempt {retry_count + 1}/{self.max_retries}: Sending request to {self.base_url}/facilis/api/generate")
                
                async with aiohttp.ClientSession(timeout=timeout) as session:
                    async with session.post(
                        f"{self.base_url}/facilis/api/generate",
                        json=payload,
                        headers=self.headers
                    ) as response:
                        if response.status == 200:
                            return await response.json()
                        response.raise_for_status()

            except asyncio.TimeoutError as e:
                retry_count += 1
                last_error = e
                error_msg = f"Timeout occurred (attempt {retry_count}/{self.max_retries})"
                self.logger.warning(error_msg)
                
                if retry_count < self.max_retries:
                    wait_time = 2 ** (retry_count - 1)
                    self.logger.info(f"Waiting {wait_time} seconds before retry...")
                    await asyncio.sleep(wait_time)
                continue

            except aiohttp.ClientError as e:
                error_msg = f"Failed to send to Iris: {str(e)}"
                self.logger.error(error_msg)
                raise IrisIntegrationError(error_msg)

        error_msg = f"Failed to send to Iris after {self.max_retries} attempts due to timeout"
        self.logger.error(error_msg)
        raise IrisIntegrationError(error_msg, last_error)
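
A quick, hypothetical usage sketch for this service (the payload shown is illustrative only):

import asyncio

service = IrisI14yService()
payload = {"openapi": "3.0.0", "info": {"title": "Demo", "version": "1.0.0"}}
result = asyncio.run(service.send_to_iris_async(payload))
print(result)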

Meet the Generators

Our second set of agents is the Generators. They transform the OpenAPI specifications into InterSystems IRIS interoperability components. There are eight of them in this group.

The first one is the Analyzer Agent. Like a planner mapping out the route, its job is to delve into the OpenAPI specs and figure out which IRIS Interoperability components are needed.


    def create_analyzer_agent():
        return Agent(
            role="OpenAPI Specification Analyzer",
            goal="Thoroughly analyze OpenAPI specifications and plan IRIS Interoperability components",
            backstory="""You are an expert in both OpenAPI specifications and InterSystems IRIS Interoperability. 
            Your job is to analyze OpenAPI documents and create a detailed plan for how they should be 
            implemented as IRIS Interoperability components.""",
            verbose=False,
            allow_delegation=False,
            tools=[analyze_openapi_tool],
            llm=get_facilis_llm()
        )

    analysis_task = Task(
        description="""Analyze the OpenAPI specification and plan the necessary IRIS Interoperability components. 
        Include a list of all components that should be in the Production class.""",
        agent=analyzer,
        expected_output="A detailed analysis of OpenAPI spec and plan for IRIS components, including Production components list",
        input={
            "openapi_spec": openApiSpec,
            "production_name": "${production_name}" 
        }
    )

Next up, the Business Services (BS) and Business Operations (BO) Agents take over. They generate the Business Services and Business Operations based on the OpenAPI endpoints, using a handy tool called MessageClassTool to generate the right message classes and ensure smooth communication between components.


    def create_bs_generator_agent():
        return Agent(
            role="IRIS Production and Business Service Generator",
            goal="Generate properly formatted IRIS Production and Business Service classes from OpenAPI specifications",
            backstory="""You are an experienced InterSystems IRIS developer specializing in Interoperability Productions.
            Your expertise is in creating Business Services and Productions that can receive and process incoming requests based on
            API specifications.""",
            verbose=False,
            allow_delegation=True,
            tools=[generate_production_class_tool, generate_business_service_tool],
            llm=get_facilis_llm()
        )

    def create_bo_generator_agent():
        return Agent(
            role="IRIS Business Operation Generator",
            goal="Generate properly formatted IRIS Business Operation classes from OpenAPI specifications",
            backstory="""You are an experienced InterSystems IRIS developer specializing in Interoperability Productions.
            Your expertise is in creating Business Operations that can send requests to external systems
            based on API specifications.""",
            verbose=False,
            allow_delegation=True,
            tools=[generate_business_operation_tool, generate_message_class_tool],
            llm=get_facilis_llm()
        )

    bs_generation_task = Task(
        description="Generate Business Service classes based on the OpenAPI endpoints",
        agent=bs_generator,
        expected_output="IRIS Business Service class definitions",
        context=[analysis_task]
    )

    bo_generation_task = Task(
        description="Generate Business Operation classes based on the OpenAPI endpoints",
        agent=bo_generator,
        expected_output="IRIS Business Operation class definitions",
        context=[analysis_task]
    )

    class GenerateMessageClassTool(BaseTool):
        name: str = "generate_message_class"
        description: str = "Generate an IRIS Message class"
        input_schema: Type[BaseModel] = GenerateMessageClassToolInput

        def _run(self, message_name: str, schema_info: Union[str, Dict[str, Any]]) -> str:
            writer = IRISClassWriter()
            try:
                if isinstance(schema_info, str):
                    try:
                        schema_dict = json.loads(schema_info)
                    except json.JSONDecodeError:
                        return "Error: Invalid JSON format for schema info"
                else:
                    schema_dict = schema_info

                class_content = writer.write_message_class(message_name, schema_dict)
                # Store the generated class
                writer.generated_classes[f"MSG.{message_name}"] = class_content
                return class_content
            except Exception as e:
                return f"Error generating message class: {str(e)}"

Once BS and BO have done their thing, it's time for the Production Agent to shine! This agent pulls everything together to create a cohesive production environment.

After everything is set up, next in line is the Validation Agent. This one comes in for a final checkup, making sure each IRIS class is OK.

Then we have the Export Agent and the Collection Agent. The Export Agent generates the .cls files, while the Collection Agent gathers all the file names. Everything gets passed along to the importer, which compiles everything into InterSystems IRIS.


    def create_exporter_agent():
        return Agent(
            role="IRIS Class Exporter",
            goal="Export and validate IRIS class definitions to proper .cls files",
            backstory="""You are an InterSystems IRIS deployment specialist. Your job is to ensure 
            that generated IRIS class definitions are properly exported as valid .cls files that 
            can be directly imported into an IRIS environment.""",
            verbose=False,
            allow_delegation=False,
            tools=[export_iris_classes_tool, validate_iris_classes_tool],
            llm=get_facilis_llm()
        )
        
    def create_collector_agent():
        return Agent(
            role="IRIS Class Collector",
            goal="Collect all generated IRIS class files into a JSON collection",
            backstory="""You are a file system specialist responsible for gathering and 
            organizing generated IRIS class files into a structured collection.""",
            verbose=False,
            allow_delegation=False,
            tools=[CollectGeneratedFilesTool()],
            llm=get_facilis_llm()
        )

    export_task = Task(
        description="Export all generated IRIS classes as valid .cls files",
        agent=exporter,
        expected_output="Valid IRIS .cls files saved to output directory",
        context=[bs_generation_task, bo_generation_task],
        input={
            "output_dir": "/home/irisowner/dev/output/iris_classes"  # Optional
        }
    )

    collection_task = Task(
        description="Collect all generated IRIS class files into a JSON collection",
        agent=collector,
        expected_output="JSON collection of all generated .cls files",
        context=[export_task, validate_task],
        input={
            "directory": "./output/iris_classes"
        }
    )

Limitations and Challenges

Our project started as an exciting experiment in which my fellow musketeers and I aimed to create a fully automated tool using agents. It was a wild ride! Our main focus was on REST API integrations. It's always a joy to get a task with an OpenAPI specification to integrate; however, legacy systems can be a whole different story, and we thought automating these tasks could be incredibly useful.

But every adventure has its twists: one of the biggest challenges was instructing the AI to convert OpenAPI to IRIS Interoperability. We started with OpenAI's GPT-3.5-turbo model, which on initial tests proved difficult to debug and prone to breaking. Switching to Anthropic's Claude 3.7 Sonnet showed better results for the Generator group, but not so much for the Planners. This led us to split our environment configuration, using different LLM providers for flexibility: GPT-3.5-turbo for planning and Claude Sonnet for generation, a great combo! This combination worked well, but we did encounter issues with hallucinations. Moving to GPT-4o improved results, yet we still faced hallucinations when creating IRIS classes, and sometimes unnecessary OpenAPI specifications appeared, like the renowned Pet Store OpenAPI example. We had a blast learning along the way, and I'm super excited about the amazing future of this field, with countless possibilities!

0
1 131
Article Muhammad Waseem · Apr 1, 2025 7m read

Hi Community,
In this article, I will introduce my application, iris-AgenticAI.

The rise of agentic AI marks a transformative leap in how artificial intelligence interacts with the world, moving beyond static responses to dynamic, goal-driven problem-solving. The application is powered by the OpenAI Agents SDK, which enables you to build agentic AI apps in a lightweight, easy-to-use package with very few abstractions; it is a production-ready upgrade of OpenAI's previous experimentation for agents, Swarm.
This application showcases the next generation of autonomous AI systems capable of reasoning, collaborating, and executing complex tasks with human-like adaptability.

Application Features

  • Agent Loop 🔄 A built-in loop that autonomously manages tool execution, sends results back to the LLM, and iterates until task completion.
  • Python-First 🐍 Leverage native Python syntax (decorators, generators, etc.) to orchestrate and chain agents without external DSLs.
  • Handoffs 🤝 Seamlessly coordinate multi-agent workflows by delegating tasks between specialized agents.
  • Function Tools ⚒️ Decorate any Python function with @tool to instantly integrate it into the agent’s toolkit.
  • Vector Search (RAG) 🧠 Native integration of vector store (IRIS) for RAG retrieval.
  • Tracing 🔍 Built-in tracing to visualize, debug, and monitor agent workflows in real time (think LangSmith alternatives).
  • MCP Servers 🌐 Support for Model Context Protocol (MCP) via stdio and HTTP, enabling cross-process agent communication.
  • Chainlit UI 🖥️ Integrated Chainlit framework for building interactive chat interfaces with minimal code.
  • Stateful Memory 🧠 Preserve chat history, context, and agent state across sessions for continuity and long-running tasks.
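
To give a flavor of these features, here is a minimal, hypothetical sketch (not taken from iris-AgenticAI) of a function tool plus a handoff using the OpenAI Agents SDK; it assumes the openai-agents package and an OPENAI_API_KEY in the environment:

from agents import Agent, Runner, function_tool

@function_tool
def get_order_status(order_id: str) -> str:
    """Stubbed lookup, used only for illustration."""
    return f"Order {order_id} is in transit."

support_agent = Agent(
    name="Support",
    instructions="Answer order questions using the available tools.",
    tools=[get_order_status],
)
triage_agent = Agent(
    name="Triage",
    instructions="Route order-related questions to the Support agent.",
    handoffs=[support_agent],
)

result = Runner.run_sync(triage_agent, "Where is order 42?")
print(result.final_output)
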
0
0 199
Article Luis Angel Pérez Ramos · Apr 1, 2025 5m read

I just realized I never finished this series of articles!

In today's article, we'll take a look at the production process that extracts the ICD-10 diagnoses most similar to our text, so we can select the most appropriate option from our frontend.

Looking for diagnostic similarities:

From the screen that shows the diagnostic requests received in HL7 in our application, we can search for the ICD-10 diagnoses closest to the text entered by the professional.

0
0 98
Article lando miller · Mar 31, 2025 2m read

Prompt

First, we need to understand what prompts are and what their functions are.

Prompt Engineering

Prompt engineering is a method specifically designed for optimizing language models.
Its goal is to guide these models to generate more accurate and targeted output text by designing and adjusting the input prompts.

Core Functions of Prompts

0
5 82
Article Luis Angel Pérez Ramos · Mar 11, 2025 53m read

Since the introduction of Embedded Python, there has always been doubt about its performance compared to ObjectScript, and I have discussed this with @Guillaume Rongier on more than one occasion. Well, taking advantage of the fact that I was building a small application to capture data from public tenders in Spain and to perform searches using the capabilities of VectorSearch, I saw the opportunity to carry out a small test.

Data for the test

Public tender information is provided monthly in XML files from this URL, and the typical format of a tender entry is as follows:

6
1 261
Article Luis Angel Pérez Ramos · Jul 25, 2024 4m read

With the introduction of vector data types and the Vector Search functionality in IRIS, a whole world of possibilities opens up for the development of applications. An example is one that I recently saw published in a public tender by the Ministry of Health of Valencia, which requested a tool to assist in ICD-10 coding using AI models.

How could we implement an application similar to the one requested? Let's see what we would need:

2
2 391
InterSystems Official Daniel Palevski · Mar 26, 2025

InterSystems Announces General Availability of InterSystems IRIS, InterSystems IRIS for Health, and HealthShare Health Connect 2025.1

The 2025.1 release of InterSystems IRIS® data platform, InterSystems IRIS® for Health™, and HealthShare® Health Connect is now Generally Available (GA). This is an Extended Maintenance (EM) release.

Release Highlights

In this exciting release, users can expect several new features and enhancements, including:

0
1 286
Article Adam Coppola · Mar 26, 2025 3m read

Hello interface engineers and app developers,     

Did you know that you can use Python in productions or integrations?

The production configuration user interface is low-code, but in many projects, you may reach a point where you need to write some code. As discussed in the Integration Architecture course, business processes are full of places for inserting code, specifically the BPL editor (for BPL business processes) and the DTL editor (for data transformations).

0
0 163
Question Eduard Lebedyuk · Mar 4, 2025

I have an Embedded Python method, which is essentially a call to one third-party module.

Most of the time, the method takes <0.1 seconds to execute, but sometimes it takes 30 or 60 seconds.

The server is relatively idle (20-30% CPU load).

How can I debug this issue further? Ideally, I want to know where this library spends the time. The library is mainly Python code (it's boto3, so it's not a Python C API proxy library).

2
0 126
Article Guillaume Rongier · May 31, 2023 6m read

I'm proud to announce the new release of iris-pex-embedded-python (v2.3.1) with a new command line interface.

This command line is called iop for Interoperability On Python.

First, I would like to present the project in a few words and the main changes since version 1.

A brief history of the project

Version 1.0 was a proof of concept to show how the interoperability framework of IRIS can be used with a Python-first approach while remaining compatible with any existing ObjectScript code.

What does it mean? It means that any python developer can use the IRIS interoperability framework without any knowledge of ObjectScript.

Example:

from grongier.pex import BusinessOperation

class MyBusinessOperation(BusinessOperation):

    def on_message(self, request):
        self.log.info("Received request")

Great, isn't it?

With version 1.1, I added the possibility to register those Python classes to IRIS with a helper function.

from grongier.pex import Utils

Utils.register_file("/src/MyBusinessOperation.py")

Version 2.0 was a major release because now you can install this project with pip.

pip install iris-pex-embedded-python

What's new in version 2.3.1

Version 2.3.1 is a major release because it introduces a new command line interface.

This command line interface can be used with Python projects based on this module, or even with projects that don't use this module.

Let me introduce it and explain why it can be used in non-Python projects.

The command line interface

The command line is part of this project; to install it, you just have to install the project with pip.

pip install iris-pex-embedded-python

Then you can use the command line iop to start the interoperability framework.

iop

output:

usage: iop [-h] [-d DEFAULT] [-l] [-s START] [-k] [-S] [-r] [-M MIGRATE] [-e EXPORT] [-x] [-v] [-L]
optional arguments:
  -h, --help            display help and default production name
  -d DEFAULT, --default DEFAULT
                        set the default production
  -l, --lists           list productions
  -s START, --start START
                        start a production
  -k, --kill            kill a production (force stop)
  -S, --stop            stop a production
  -r, --restart         restart a production
  -M MIGRATE, --migrate MIGRATE
                        migrate production and classes with settings file
  -e EXPORT, --export EXPORT
                        export a production
  -x, --status          status a production
  -v, --version         display version
  -L, --log             display log

default production: UnitTest.Production

Let's go through a few examples.

help

The help command displays the help and the default production name.

iop -h

output:

usage: python3 -m grongier.pex [-h] [-d DEFAULT] [-l] [-s START] [-k] [-S] [-r] [-M MIGRATE] [-e EXPORT] [-x] [-v] [-L]
...
default production: PEX.Production

default

The default command sets the default production.

With no argument, it displays the default production.

iop -d

output:

default production: PEX.Production

With an argument, it sets the default production.

iop -d PEX.Production

lists

The lists command lists the productions.

iop -l

output:

{
    "PEX.Production": {
        "Status": "Stopped",
        "LastStartTime": "2023-05-31 11:13:51.000",
        "LastStopTime": "2023-05-31 11:13:54.153",
        "AutoStart": 0
    }
}

start

The start command starts a production.

To exit the command, press CTRL+C.

iop -s PEX.Production

If no argument is given, the start command starts the default production.

iop -s

output:

2021-08-30 15:13:51.000 [PEX.Production] INFO: Starting production
2021-08-30 15:13:51.000 [PEX.Production] INFO: Starting item Python.FileOperation
2021-08-30 15:13:51.000 [PEX.Production] INFO: Starting item Python.EmailOperation
...

kill

The kill command kills a production (force stop).

It is the same as the stop command, but with a force stop.

It doesn't take an argument because only one production can be running.

iop -k 

stop

The stop command stops a production.

It doesn't take an argument because only one production can be running.

iop -S 

restart

The restart command restarts a production.

It doesn't take an argument because only one production can be running.

iop -r 

migrate

The migrate command migrates a production and classes using a settings file.

The migrate command must take the absolute path of the settings file.

The settings file must be in the same folder as the Python code.

iop -M /tmp/settings.py

export

The export command exports a production.

If no argument is given, the export command exports the default production.

iop -e

If an argument is given, the export command exports the production given as argument.

iop -e PEX.Production

output:

{
    "Production": {
        "@Name": "PEX.Production",
        "@TestingEnabled": "true",
        "@LogGeneralTraceEvents": "false",
        "Description": "",
        "ActorPoolSize": "2",
        "Item": [
            {
                "@Name": "Python.FileOperation",
                "@Category": "",
                "@ClassName": "Python.FileOperation",
                "@PoolSize": "1",
                "@Enabled": "true",
                "@Foreground": "false",
                "@Comment": "",
                "@LogTraceEvents": "true",
                "@Schedule": "",
                "Setting": [
                    {
                        "@Target": "Adapter",
                        "@Name": "Charset",
                        "#text": "utf-8"
                    },
                    {
                        "@Target": "Adapter",
                        "@Name": "FilePath",
                        "#text": "/irisdev/app/output/"
                    },
                    {
                        "@Target": "Host",
                        "@Name": "%settings",
                        "#text": "path=/irisdev/app/output/"
                    }
                ]
            }
        ]
    }
}

status

The status command displays the status of a production.

It doesn't take an argument because only one production can be running.

iop -x 

output:

{
    "Production": "PEX.Production",
    "Status": "stopped"
}

Status can be:

  • stopped
  • running
  • suspended
  • troubled

version

The version command displays the version.

iop -v

output:

2.3.0

log

The log command displays the log.

To exit the command, press CTRL+C.

iop -L

output:

2021-08-30 15:13:51.000 [PEX.Production] INFO: Starting production
2021-08-30 15:13:51.000 [PEX.Production] INFO: Starting item Python.FileOperation
2021-08-30 15:13:51.000 [PEX.Production] INFO: Starting item Python.EmailOperation
...

Can it be used outside of the iris-pex-embedded-python project?

That's your choice.

But, before you leave, let me tell you why I think it can be used outside of the iris-pex-embedded-python project.

First, because it can interact with a production without the need to use an IRIS shell. That means it's easier to use in a script.

Second, because settings.py can be used to import productions and classes with environment variables.

Here is an example of settings.py:

import os

PRODUCTIONS = [
        {
            'UnitTest.Production': {
                "Item": [
                    {
                        "@Name": "Python.FileOperation",
                        "@ClassName": "Python.FileOperation",
                        "Setting": {
                            "@Target": "Host",
                            "@Name": "%settings",
                            "#text": os.environ['SETTINGS']
                        }
                    }
                ]
            }
        } 
    ]

Pay attention to the #text value. It's an environment variable. Neat, isn't it?

Do you see yourself using this command-line tool? Is it worth it to keep developing it?

Thanks for reading, and your feedback is welcome.

11
0 587
Article Sylvain Guilbaud · Feb 28, 2025 1m read

Hello,

as it took me some time to figure out what's wrong, I would like to share this experience, so that you do not fall into the same trap.

I've just noticed that if you name your package "code" (all lowercase) in a class using some embedded Python with [Language = python], you'll face the <THROW> *%Exception.PythonException <PYTHON EXCEPTION> 246 <class 'ModuleNotFoundError'>: No module named 'code.basics'; 'code' is not a package error. (The name clashes with Python's standard library module code, which is a module, not a package.)

Class code.basics Extends %RegisteredObject
{

ClassMethod Welcome() As %Status [ Language = python ]
{
print('Welcome!')
return True
}
}
2
0 149
Article Tani Frankel · Jan 14, 2025 6m read

Using embedded Python while building your InterSystems-based solution can add very powerful and deep capabilities to your toolbox.

I'd like to share one sample use case I encountered: enabling CDC (Change Data Capture) for a MongoDB collection, capturing those changes, digesting them through an interoperability flow, and eventually updating an EMR via a REST API.
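
As a flavor of the capture step, here is a minimal sketch (not the article's code; the connection string and collection names are placeholders) using pymongo's change streams, which require MongoDB to run as a replica set:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
# watch() yields one event per insert/update/delete on the collection
with client["emr"]["patients"].watch() as stream:
    for change in stream:
        print(change["operationType"], change.get("documentKey"))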

1
2 497
Article Claudio Devecchi · Jan 17, 2025 2m read

Hello everyone

For those interested, here is an implementation example I used in one of my projects to interoperate with MongoDB using the pymongo package.

iris-mongodb

Interoperate with MongoDB (via the pymongo package) from InterSystems IRIS (including an outbound adapter)

Remember to import pymongo package.

implementation example for testing here

NAMESPACE> do ##class(custom.python.pymongo.test).TestMongoDBCrud()

For interoperability, use the following adapter in your business operation:

Parameter ADAPTER = "custom.python.pymongo.outboundAdapter";

Parameter INVOCATION = "Queue";

Query example:

/// parameters: (dbName, collectionName, query, fields, sort, limit) => query, fields and sort are %DynamicObjects
/// resultset is an array of objects (%DynamicArray)
set resultset = ..Adapter.Query("mydb","mycollection", {"_id":(tId)})

Insert example:

set newUser = {
            "name":"Claudio Devecchi Junior",
            "email":"devechi@inters.com"
}
/// parameters: (dbName, collectionName, objOrArray, filterForUpdate) => objOrArray and filterForUpdate are %DynamicObjects
/// insertResponse is an object with the inserted id/ids (%DynamicObject)
set insertResponse = ..Adapter.InsertOrUpdate("sample_mflix", "users", newUser)

Update example:

set newValues = {"email":"claudio.devechi@inter.com"}
set filter = {"_id":(tObjectId)} //is the filter for update
/// parameters: (dbName, collectionName, objOrArray, filterForUpdate) => objOrArray and filterForUpdate are %DynamicObjects
/// filterForUpdate defines update action
/// updateResponse is an object with the updated id/ids (%DynamicObject)
set updateResponse = ..Adapter.InsertOrUpdate("sample_mflix", "users", newValues, filter)

Delete example:

set filter = {"_id":(tObjectId)} //is the filter for deletion
/// parameters: (dbName, collectionName, filter,  deleteMany) => filter is %DynamicObject. deleteMany is false by default
/// deleteResponse is an object with the deleted information (%DynamicObject)
set deleteResponse = ..Adapter.Delete("sample_mflix", "users", filter)
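
For reference, here is a hedged sketch (not the adapter's actual code) of the raw pymongo calls that an adapter like this typically wraps:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
col = client["sample_mflix"]["users"]

doc_id = col.insert_one({"name": "Ada"}).inserted_id           # insert
col.update_one({"_id": doc_id}, {"$set": {"email": "a@b.c"}})  # update
col.delete_one({"_id": doc_id})                                # delete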

P.S.: It's compatible with IRIS versions that support embedded Python.

Any feedback is welcome!

4
0 270
Article Shuheng Liu · Jul 15, 2024 3m read

Introduction to running WSGI in IRIS

With IRIS 2024+, users can host WSGI applications using Security.Applications. As an example, a user can do something like this

Minimum working example

zn "%SYS"
Kill props
Set props("Description") = "Sample WSGI Application"
Set props("MatchRoles") = ":%All"
Set props("WSGIAppLocation") = "/path/to/flaskapp"
Set props("WSGIAppName") = "myapp"
Set props("WSGICallable") = "app"
Set props("DispatchClass") = "%SYS.Python.WSGI" // important, otherwise will be recognized as CSP application
Set sc = ##class(Security.Applications).Create("/flask", .props)
zw sc

where the /path/to/flaskapp directory contains a myapp.py file that reads

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello, WSGI!"

Now, go to the URL http(s)://<host>:<port>/<optional-prefix>/flask/. It should show "Hello, WSGI!" in plain text.

Common pitfalls

  1. If the URL http(s)://<host>:<port>/<optional-prefix>/flask/ is not working, check for the trailing slash first, which must be present.

  2. Also, when running for the first time, flask needs to be installed for embedded Python (not your local OS-level Python interpreter); see the note after this list. Verify the installation is successful by going to the embedded Python shell and running import flask.

  3. Finally, read permission for whatever OS user IRIS is running as must be granted to /path/to/flaskapp/myapp.py and all parent folders.

  4. If the error still can't be resolved, check for entries in messages.log. You can also reach out to us by posting an issue.
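
A note for pitfall 2: a hedged sketch, assuming a default Linux install directory of /usr/irissys (paths vary per instance), of installing Flask into IRIS's embedded Python package directory and verifying it:

/usr/irissys/bin/irispython -m pip install --target /usr/irissys/mgr/python flask
/usr/irissys/bin/irispython -c "import flask; print(flask.__version__)"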

Using IPM to ship WSGI applications for easy install

IPM makes the process easier by

  1. copying the flask app directory to a place with guaranteed read access
  2. installing relevant Python dependencies from a requirements.txt file

Package Example

Here is an example that can be installed easily wherever IPM (v0.7.2+) is installed on IRIS 2024+. Clone this package to a suitable <PACKAGE_ROOT>, and start an IRIS terminal

zn "%SYS"
zpm "load <PACKAGE_ROOT>"

After successful installation, you should be able to visit http(s)://<host>:<port>/<optional-instance-prefix>/my/flask/demo/. In my case, the URL is http://localhost:8080/iris-ml-wsgi/my/flask/demo/ and it reads:

This is a sample WSGI application using Flask!

Hint: you need to install zpm by following the instructions here first for the zpm command to work.

The module.xml of the above repo is also listed here for quick reference

<?xml version="1.0" encoding="UTF-8"?>
<Export generator="Cache" version="25">
  <Document name="flask-demo.ZPM">
    <Module>
      <Name>flask-demo</Name>
      <Version>1.0.0</Version>
      <Description>This is a demo of a flask application</Description>
      <Keywords>flask</Keywords>
      <Author>
        <Person>Shuheng Liu</Person>
        <Organization>InterSystems</Organization>
        <CopyrightDate>2024</CopyrightDate>
        <License>MIT</License>
        <Notes>notes</Notes>
      </Author>
      <Packaging>module</Packaging>
      <SystemRequirements Version=">=2024.1" />
      <SourcesRoot>src</SourcesRoot>
      <FileCopy Name="src/python/flaskapp/" Target="${libdir}flask-demo/flaskapp/"/>
      <SystemSetting Name="CSP.DefaultFileCharset" Value="UTF-8"/>

      <WSGIApplication
        Url="/my/flask/demo"
        UnauthenticatedEnabled="1"
        Description="Sample WSGI application using Flask"
        MatchRoles=":${dbrole}"
        WSGIAppLocation="${libdir}flask-demo/flaskapp/"
        WSGIAppName="app"
        WSGICallable="app"
       />
    <AfterInstallMessage>Module installed successfully!</AfterInstallMessage>     
    </Module>    
  </Document>
</Export>
4
1 370
Article Chris Stewart · Feb 7, 2025 9m read

Learning LLM Magic

The world of Generative AI has been pretty inescapable for a while; commercial models running on paid cloud instances are everywhere. With your data stored securely on-prem in IRIS, it might seem daunting to start getting the benefit of experimentation with Large Language Models without having to navigate a minefield of governance and rapidly evolving API documentation. If only there was a way to bring an LLM to IRIS, preferably in a very small code footprint...

Some warnings before we start

0
5 406
Question John McBride · Feb 5, 2025

Hello,

When setting up a new web app in IRIS (IRIS is in a container), IRIS complains that a WSGI framework is not installed. I have installed Python into the container, as well as both Flask and Django via a Python virtual environment (see second screenshot), and the Python language server is running.

Is this the wrong way to install Flask? How do I get the container version to recognize that Flask is installed?

2
0 94
Article Heloisa Paiva · Feb 17, 2023 2m read

Why am I writing this?

Last year I wrote an article for starters on using embedded Python. Later, it sparked a little discussion on how to return values with Python, and I found some interesting observations that are worth a little article. Also, hopefully I can reach more people by writing this.

Possible situations

There are two things you'll need to care about when returning a value with Python. The first is the type you're trying to return, and the second is where you're returning it.

1
0 645
Question Robert Cemper · Feb 3, 2025
USER>do $System.Python.Shell()
 
ERROR #5002: ObjectScript-Error: <OBJECT DISPATCH>Shell+16^%SYS.Python.1 
*Failed to Load Python: Check documentation and messages.log, 
Check CPF parameters:[PythonRuntimeLibrary,PythonRuntimeLibraryVersion], 
Check sys.path setup in: $INSTANCE/lib/python/iris_site.py
02/03/25-18:27:39:497 (13156) 1 [Generic.Event] CPF settings (PythonRuntimeLibraryVersion) do not specify python correctly - Python can not be loaded
02/03/25-18:27:39:498 (13156) 1 [Generic.Event] CPF settings (PythonRuntimeLibrary) do not specify python correctly - Python can not be loaded
9
0 227
Article shan yue · May 15, 2024 2m read

You need to install the application first. If it is not installed, please refer to the previous article.

Application demonstration

After successfully running the IRIS image vector search application, some data needs to be stored to support image retrieval, as the library is not initialized with any data.

Image storage

Firstly, drag and drop the image or click the upload icon, select the image, and click the upload button to upload and vectorize it. This process may be a bit slow.

This process uses embedded Python to call the CLIP model and vectorize the image into 512-dimensional vector data.
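
For illustration, a hedged sketch (not the application's code) of the vectorization step described above, assuming the sentence-transformers package and its CLIP wrapper, which produces 512-dimensional embeddings:

from PIL import Image
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32")    # CLIP model, 512-dim output
vector = model.encode(Image.open("photo.jpg"))  # numpy array of shape (512,)
print(vector.shape)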

3
1 278
Question Phillip Wu · Jan 16, 2025

Hi,

I'm using embedded Python as follows:

AUMHTESTTC01:%SYS>do ##class(%SYS.Python).Shell()
Python 3.6.15 (default, Sep 23 2021, 15:41:43) [GCC] on linux
Type quit() or Ctrl-D to exit this shell.
>>> import iris
>>> mirror=iris.cls('SYS.Mirror')
>>> pri=[]
>>> alt=[]
>>> status=mirror.GetFailoverMemberStatus(pri,alt)
>>> print(status)
1
>>> print(pri)
[]
>>> print(alt)
[]
>>> exit()

However, the equivalent ObjectScript works:

4
0 118
Article Ashok Kumar T · Sep 13, 2024 7m read

In the previous article, Practices of class members and their execution within embedded Python, we covered class members and their execution. We will now turn our attention to the process of switching namespaces, accessing global variables, traversing them, and executing routines within embedded Python.

Before proceeding to the other functions, let us briefly review the execute function within the iris package. This function is exceptionally beneficial for executing arbitrary ObjectScript functions and class invocations.

6
1 475