It's a semi-connected environment
Keep in mind that this is a semi-connected environment: it can temporarily operate in disconnected mode, but the technology relies on both outbound and inbound connections to function effectively.
Outbound connections (firewall-friendly) are used for:
- fetching the desired state and sending to the cloud information about the Azure Arc synchronization components (agent version, synchronization state, etc.) together with information about the resources deployed in the Kubernetes cluster (e.g. deployments, pods).
- sending Container Apps environment signals such as logging and billing information.
- accessing cloud functionality that is not yet available in Logic Apps Hybrid, such as Integration Accounts and Managed Connectors.
Inbound connections, on the other hand, are necessary for accessing workflow execution history from the Azure Portal. To enable this, a LoadBalancer service is created to allow traffic from Azure to query the run history API and retrieve data stored on the remote SQL Server. Yes, your security team may not be thrilled, but this requirement is essential for maintaining visibility into what’s happening on the edge.
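As a quick sanity check of both directions, assuming the Azure CLI with the connectedk8s extension and kubectl access to the cluster (the resource group, cluster, and namespace names below are placeholders), you can verify the Arc connectivity status and locate the LoadBalancer service:

```bash
# Outbound: the Arc connected cluster should report "Connected"
# (placeholders: <rg>, <cluster>).
az connectedk8s show --resource-group <rg> --name <cluster> \
  --query connectivityStatus -o tsv

# Inbound: list the services in the namespace backing the connected environment
# and look for the one of TYPE LoadBalancer with an EXTERNAL-IP assigned.
kubectl get svc -n <namespace> -o wide
```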
Arc and Container Apps concepts are blended
Given an Arc-enabled connected cluster, we can create multiple (Arc) custom locations but only one (ACA) connected environment per location. A connected environment is the target destination where we can deploy one or more logic apps. Translated into the Kubernetes world, the connected environment is a digital twin of a Kubernetes namespace.
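As a minimal sketch of how these pieces fit together, assuming the cluster is already Arc-connected and the Container Apps extension is installed (all names and resource IDs below are placeholders, and exact flags may vary across CLI extension versions):

```bash
# 1. Create a custom location that maps to a namespace on the connected cluster
#    (placeholders: <rg>, <custom-location>, <k8s-namespace>, resource IDs).
az customlocation create \
  --resource-group <rg> \
  --name <custom-location> \
  --namespace <k8s-namespace> \
  --host-resource-id <connected-cluster-resource-id> \
  --cluster-extension-ids <container-apps-extension-resource-id>

# 2. Create the (single) connected environment bound to that custom location.
az containerapp connected-env create \
  --resource-group <rg> \
  --name <connected-env> \
  --custom-location <custom-location-resource-id> \
  --location <azure-region>
```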
It leverages KEDA and its extensibility points
Logic Apps Hybrid leverages the target-based scaling mechanism to dynamically calculate the desired number of replicas needed to handle the workload efficiently. To achieve this, an external Logic Apps custom scaler is installed in the remote cluster, allowing KEDA (Kubernetes Event-Driven Autoscaler) to fetch workflow metrics stored in the local SQL Server.
KEDA will then leverage the Horizontal Pod Autoscaler (HPA) to adjust the number of Logic Apps pod replicas based on real-time workload demand. By continuously monitoring execution metrics and scaling events, this mechanism ensures optimal performance, resource efficiency, and high availability.
The picture below shows the sequence of events during a scale out and back.
Logic Apps Hybrid Scaling out and back
Note that scaling to zero instances is not allowed, even though the underlying technologies powering Logic Apps Hybrid support it.
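A simple way to observe this mechanism from the cluster side is to inspect the KEDA objects directly, assuming kubectl access to the namespace backing the connected environment (the ScaledObject name below is a placeholder; the actual object is created by the Logic Apps runtime):

```bash
# KEDA ScaledObjects created for the logic app and the HPAs that KEDA manages.
kubectl get scaledobjects.keda.sh -n <namespace>
kubectl get hpa -n <namespace>

# Inspect the scaler configuration; given that scale to zero is not allowed,
# Min Replica Count is expected to be at least 1.
kubectl describe scaledobject <logic-app-scaledobject> -n <namespace>

# Watch the replica count change while the workload scales out and back.
kubectl get pods -n <namespace> -w
```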
No drift detection
Logic Apps Hybrid allows you to host workflows on customer-managed infrastructure (e.g. bare-metal Kubernetes) while still benefiting from the Logic Apps platform. This flexibility, however, introduces additional operational considerations, particularly around configuration consistency.
Drift occurs when the actual state of a Kubernetes cluster diverges from the declared state in the control plane (Azure Resource Manager), often due to manual changes or failed updates. Logic Apps Hybrid does not include native drift detection or automatic reconciliation. Once deployed on an edge cluster, the infrastructure components do not actively monitor the cluster to detect unintended changes. This means that:
- Configuration changes made directly on the cluster (e.g., modifying deployments, removing pods, or tweaking resource limits) will persist until manually corrected.
- Failed updates or partial deployments might leave the environment in an inconsistent state without automatic rollback or remediation.
- Security risks may increase if unauthorized modifications go unnoticed, potentially impacting workflow execution and resource availability.
Organizations leveraging this solution should consider adopting policy enforcement and monitoring strategies, as sketched below, to ensure that their Logic Apps workloads remain in a predictable and well-managed state.
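As a minimal sketch of such a check, assuming the expected manifests are exported from version control to a local directory (paths and names below are placeholders), a periodic kubectl diff can surface drift between the declared and the live state:

```bash
# Compare the declared manifests with the live cluster state.
# kubectl diff exits with 1 when differences are found (and >1 on errors).
kubectl diff -f ./expected-manifests/ -n <namespace>
if [ $? -eq 1 ]; then
  echo "Drift detected: live state differs from the declared manifests"
fi
```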