Simple API Enhancements: Enterprise Integration Patterns

Plan

Date: 2026-03-05 | Status: Draft | Author: API Management Team


1. Objective

Define which Enterprise Integration Patterns (EIPs) HIP will support through its Simple API capability, producing a platform capability specification that:

  • Establishes a shared vocabulary based on the canonical EIP reference
  • Maps each selected pattern to its Apache Camel YAML DSL implementation
  • Provides HMRC-aligned examples with sequence diagrams for each pattern
  • Clarifies which patterns are platform-managed vs producer-configured
  • Documents which EIPs are out of scope and why

2. Context

Current State

Today, a Simple API proxies requests almost as-is to a single egress destination. API producers upload an OAS file, select a destination, and the platform handles routing, auth, and transport. A Simple API requires no Java application, no custom Docker image, and no Helm chart — the platform runs the API from configuration alone.

Enhancement

The enhancement introduces Camel YAML workflow support for Simple APIs, enabling producers to define multi-step integration pipelines for more complex cases. Pipelines are authored in Camel YAML DSL — declarative configuration that the platform executes. For most pipelines, all logic is expressed through Camel’s DSL elements and embedded Groovy/Simple expressions. For complex cases, producers can author Java components (processors, aggregation strategies, transformers) that the platform builds, tests, and includes on the pipeline’s classpath — still without requiring a Docker image or Helm chart. This combination of declarative YAML for orchestration and Java for complex logic is intended to eventually retire the Advanced API (full Java application) model. This specification defines the integration patterns that those pipelines can express.

Key Design Decisions

These architectural decisions shape which patterns apply and where they sit:

  1. Canonical model defined in OAS — The canonical model is defined in OAS using JSON Schema. Within the pipeline, the Camel exchange body is a Java object representation of this model (typically `Map<String, Object>`). Producers interact with it through Groovy expressions and Camel’s Simple language, accessing fields directly (e.g., `body.declarationType`). Serialisation to/from wire formats (JSON, XML, etc.) happens at boundaries — ingress, egress, and Kamelet calls — not between pipeline steps.
  2. Consumer-driven content negotiation — Consumers specify Content-Type (what they’re sending) and Accept (what they want back) independently. A consumer can send XML and request a JSON response, or vice versa. The platform deserialises the request body into canonical Java objects based on Content-Type, and serialises the response based on Accept. The pipeline is unaffected by either choice.
  3. Backend-driven content transformation — If a backend requires a different format (XML, SOAP, etc.), the Kamelet handles the conversion. The pipeline passes Java objects to the Kamelet; the Kamelet serialises to/from the backend’s format transparently.
  4. Events use the same model — Event-triggered pipelines will use the same workflow patterns, but events are not currently in scope.
  5. Producer-authored Java components — For logic that exceeds what Camel YAML DSL and Groovy can cleanly express, producers can author Java components that the platform builds, tests, and includes on the pipeline’s classpath. The pipeline remains Camel YAML; Java handles complex business logic within individual steps. This is a step towards retiring the Advanced API model.
  6. Reusable component library — Every API in the HIP catalogue is automatically available as a callable block with strongly typed inputs derived from its OAS. The platform favours central catalogue registration. For APIs not in the catalogue, producers can build custom blocks from an OAS, XML Schema, or WSDL. Calling syntax is intended to be a first-class DSL element, subject to POC.
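As an illustrative sketch only — the route id, `direct:` endpoint URIs, and field names below are invented for this example, and the platform-provided ingress wiring may differ — a pipeline interacting with the canonical model could look like:

```yaml
- route:
    id: vat-return-example            # illustrative route id
    from:
      uri: "direct:ingress"           # placeholder — the platform wires the real ingress
      steps:
        - setHeader:
            name: "vrn"
            groovy: "body.vatRegistrationNumber"   # direct field access on the canonical model
        - to: "direct:validate-return"
        - to: "direct:calculate-liability"
        # the body remains a Java object (Map<String, Object>) between steps;
        # serialisation happens only at the ingress/egress boundaries
```

Note how no step performs JSON or XML handling: per decisions 1–3, serialisation is confined to the boundaries.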

3. Approach

Three-Section Model

Rather than organising patterns by the traditional EIP taxonomy (Routing, Transformation, etc.), we organise by who or what drives the behaviour:

| Section | Description | Ownership |
| --- | --- | --- |
| Consumer-Driven | Platform feature — behaviour specified at invocation time by the API consumer (e.g., content type via `Accept` header, idempotency keys) | Platform-managed |
| Pipeline | The Camel YAML workflow that the API producer defines. This is the core of the specification. | Producer-configured |
| Backend-Driven | Platform feature — behaviour handled based on what the backend system supports (e.g., protocol, content type, auth) | Platform-managed (Kamelets / Egress Gateways) |

A fourth section, Events, covers future-scope patterns that will use the same workflow model when event-driven pipelines are introduced.

This model makes ownership clear: Consumer-Driven and Backend-Driven patterns are platform capabilities that work automatically. Pipeline patterns are what producers compose.

Pattern Selection

We selected 25 patterns from the 65 in the canonical EIP reference. Patterns are included if they are applicable to the Simple API model — the pattern addresses a concern that arises in synchronous REST API orchestration, content negotiation, or backend integration. Patterns specific to asynchronous messaging, persistent channels, or system operations are excluded unless they apply to the future Events scope.

The remaining 40 patterns are documented in an “Out of Scope” section with rationale for exclusion. Most excluded patterns are implementable in Camel YAML DSL — they are excluded because they don’t apply to HIP’s synchronous, stateless, design-time pipeline model, not because of implementation limitations.

Organisation: Selected patterns are grouped by who or what drives the behaviour — Consumer-Driven, Pipeline, or Backend-Driven — rather than by the traditional EIP taxonomy.

Pattern Documentation Structure

Each pattern entry follows a consistent structure:

```markdown
### Pattern Name

> **EIP Reference**: [link to enterpriseintegrationpatterns.com]
> **Camel DSL**: `element-name`
> **Status**: Supported | Planned | Platform-managed

**Problem**: When/why you need this pattern (from EIP).

**HIP Context**: Why this matters for HIP Simple APIs. How it fits
into the three-section model.

**Example**: HMRC-aligned scenario description.

[Mermaid sequence diagram]

[Camel YAML DSL snippet where applicable]
```

Example Domains

Examples draw from a mix of HMRC domains to show breadth of applicability:

| Domain | Example Scenarios |
| --- | --- |
| Customs | Declarations, tariff classification, goods item processing, trader authorisation |
| Excise | Duty submissions, validation and calculation, multi-backend orchestration |
| VAT | Returns, registration, taxpayer enrichment |
| PAYE | Filings, bulk processing, employee data validation |
| Self-Assessment | Large submissions, liability summaries |
| Agent Services | Authorisation workflows, multi-step credential verification |

4. EIP Section Mapping

This table maps each of the 8 top-level sections from the canonical EIP reference to our three-section model:

| EIP Section | Brief Description | Our Section(s) |
| --- | --- | --- |
| Integration Styles | Foundational approaches to connecting systems: File Transfer, Shared Database, Remote Procedure Invocation, Messaging | Context — HIP uses Remote Procedure Invocation (REST APIs) as its primary style, with Messaging (events) as future |
| Messaging Systems | Fundamental building blocks: channels, messages, pipes and filters, routers, translators, endpoints | All sections — these are foundational primitives. Pipes and Filters is Pipeline. Message Translator appears in Consumer-Driven and Backend-Driven. |
| Messaging Channels | How messages are transported: point-to-point, pub-sub, dead letter, guaranteed delivery | Platform-managed / Out of Scope — channels are infrastructure. Publish-Subscribe maps to future Events. |
| Message Construction | Intent, form and content of messages: commands, documents, events, request-reply, correlation | Consumer-Driven + Backend-Driven + Events — describes how messages enter and leave the system |
| Message Routing | Directing messages to correct receivers: content-based routing, splitting, aggregating, orchestration | Pipeline — almost entirely producer-configured workflow patterns. This is the richest section for us. |
| Message Transformation | Changing message content: translating, enriching, filtering, normalising | All sections — most cross-cutting category. Platform-managed at Consumer-Driven/Backend-Driven, producer-configured in Pipeline. |
| Messaging Endpoints | How applications connect: gateways, consumers, dispatchers | Consumer-Driven + Backend-Driven + Events |
| System Management | Operating the system: monitoring, tracing, testing | Platform-managed / Out of Scope — operational concerns not producer-visible |

5. Pattern Selection Summary

Consumer-Driven Patterns (4)

| Pattern | Description | Example |
| --- | --- | --- |
| Message Translator | Converts a message from one format to another so sender and receiver can use their preferred representations | Consumer sends XML VAT return via `Content-Type: application/xml`; platform translates to canonical Java objects before pipeline |
| Envelope Wrapper | Wraps/unwraps payload data to comply with infrastructure requirements (headers, auth tokens, transport metadata) | API Gateway strips transport headers, extracts OAuth token claims, presents clean payload + auth context to pipeline |
| Canonical Data Model | Defines a common data format independent of any specific application, eliminating point-to-point translations | All Simple APIs define their canonical model in OAS (JSON Schema). A customs declaration is the same canonical Java objects regardless of consumer wire format |
| Messaging Gateway | Encapsulates access to the integration system from external consumers, providing a single entry point | Simple API endpoint on the API Gateway is the gateway — consumers see a clean REST API; underlying Camel orchestration is invisible |

Pipeline Patterns (14)

Flow Structure:

| Pattern | Description | Example |
| --- | --- | --- |
| Pipes and Filters | Decomposes processing into a sequence of independent steps, each performing a single transformation or action | PAYE filing: validate employee data, calculate tax, check anomalies, store filing, return confirmation |
| Routing Slip | Attaches a predetermined sequence of processing steps to a message, visited in order | Agent authorisation: validate credentials, check relationship, verify scope, record grant |
| Process Manager | Manages a complex multi-step workflow where the next step depends on the outcome of the previous step | Trader registration: validate identity, branch on entity type, assign category, provision, notify |
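The PAYE filing example above could be sketched as a Pipes and Filters chain. This is illustrative only — the `direct:` step endpoint names are invented here:

```yaml
# Pipes and Filters: each filter is an independent, single-purpose step
- route:
    id: paye-filing                    # illustrative route id
    from:
      uri: "direct:ingress"            # placeholder for the platform ingress
      steps:
        - to: "direct:validate-employee-data"
        - to: "direct:calculate-tax"
        - to: "direct:check-anomalies"
        - to: "direct:store-filing"
        # the final exchange body becomes the confirmation response
```

Each `direct:` target would itself be a small YAML route, keeping every filter independently testable and reorderable.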

Conditional and Filtering:

| Pattern | Description | Example |
| --- | --- | --- |
| Content-Based Router | Inspects message content and routes to different processing paths based on data values | Customs declaration: route to “simplified” or “full declaration” branch based on `declarationType` |
| Message Filter | Examines message content and short-circuits messages that don’t match criteria | Excise duty: validate calculation; if invalid, return 422 immediately without calling backends |
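A minimal sketch of both patterns, assuming the `declarationType` and an illustrative `calculationValid` field on the canonical body (endpoint names are invented):

```yaml
# Content-Based Router: branch on declaration type
- choice:
    when:
      - simple: "${body[declarationType]} == 'simplified'"
        steps:
          - to: "direct:simplified-declaration"
    otherwise:
      steps:
        - to: "direct:full-declaration"

# Message Filter: short-circuit invalid calculations with a 422,
# before any backend is called
- choice:
    when:
      - groovy: "!body.calculationValid"
        steps:
          - setHeader:
              name: "CamelHttpResponseCode"
              constant: "422"
          - stop: {}
```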

Transformation and Enrichment:

| Pattern | Description | Example |
| --- | --- | --- |
| Content Enricher | Augments a message with additional data from an external source when the original lacks required information | VAT return: enrich with taxpayer registration details and compliance status from taxpayer API |
| Content Filter | Removes unwanted fields from a message, passing only what the next step or response needs | Self-assessment: backend returns full taxpayer record, consumer needs liability summary only |
| Normalizer | Routes messages from different sources through appropriate translators to produce consistent output | Multiple excise backends return different structures; normalise into standard duty calculation model |
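The first two patterns could be sketched as follows. The endpoint URI, field names, and the `mergeTaxpayerDetails` aggregation bean are assumptions for illustration:

```yaml
# Content Enricher: call the taxpayer lookup and merge the result into the body
- enrich:
    constant: "direct:taxpayer-lookup"          # illustrative lookup endpoint
    aggregationStrategy: "mergeTaxpayerDetails" # assumed producer-supplied strategy bean

# Content Filter: keep only the summary fields the consumer needs
- setBody:
    groovy: "[liabilityTotal: body.liabilityTotal, dueDate: body.dueDate]"
```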

Splitting, Aggregation and Parallel Processing:

| Pattern | Description | Example |
| --- | --- | --- |
| Splitter | Breaks a composite message into individual elements, each processed independently | Customs declaration with 50 line items: split for per-item tariff classification and duty calculation |
| Aggregator | Collects and combines related messages into a single composite message | After splitting and calculating duty per item, aggregate back into single response with total |
| Scatter-Gather | Sends a message to multiple recipients concurrently and aggregates their replies | Trader overview: simultaneously query excise, customs, and VAT registration APIs; merge results |
| Composed Message Processor | Splits a composite message, processes each part individually, and reassembles the results | Import declaration: split goods, classify each against tariff API, calculate duty, reassemble |
| Recipient List | Routes a single message to a dynamically determined list of recipients | Duty payment: notify accounting, trader’s agent, and compliance ledger in parallel |
| Request-Reply | Sends a request and waits for a reply, enabling synchronous interaction within the pipeline | Call excise backend to validate and calculate duty; store response in context for subsequent steps |
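The customs example above could be sketched as a Splitter with an inline Aggregator. The `goodsItems` field, step endpoints, and the `collectDutyTotals` strategy bean are illustrative assumptions:

```yaml
# Splitter + Aggregator: process each goods item, then reassemble with totals
- split:
    simple: "${body[goodsItems]}"
    aggregationStrategy: "collectDutyTotals"  # assumed producer-supplied strategy bean
    parallelProcessing: true                  # items are independent, so fan out
    steps:
      - to: "direct:classify-tariff"
      - to: "direct:calculate-duty"
```

Split with an aggregation strategy in one element is also how Composed Message Processor would typically be expressed.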

Backend-Driven Patterns (4)

| Pattern | Description | Example |
| --- | --- | --- |
| Message Translator (egress) | Converts the pipeline’s canonical Java objects into the format the backend requires | Pipeline works with canonical Java objects; excise backend requires SOAP/XML — Kamelet serialises and deserialises transparently |
| Envelope Wrapper (egress) | Adds transport headers, auth credentials, protocol wrappers required by the backend | Excise backend requires mTLS + WS-Security; egress gateway adds these, pipeline doesn’t know |
| Command Message | Encapsulates a request to invoke a specific action on the receiving system | `POST /submissions` to tax platform — Kamelet translates into backend’s command format |
| Correlation Identifier | Attaches a unique identifier to request/reply so the pipeline can match responses to originating requests | Each Kamelet call carries correlation ID visible in distributed tracing |
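From the pipeline’s perspective, a backend call collapses to a single step. As an illustrative sketch (the Kamelet name and correlation header are invented here):

```yaml
# Correlation Identifier: tag the call so tracing can match request to reply
- setHeader:
    name: "correlationId"
    simple: "${exchangeId}"

# The Kamelet owns Message Translator + Envelope Wrapper at egress:
# SOAP/XML serialisation, mTLS, WS-Security — all invisible to the pipeline
- to: "kamelet:excise-duty-backend"
```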

Event Patterns — Future Scope (3)

| Pattern | Description | Example |
| --- | --- | --- |
| Event Message | Transmits notification of a change without requiring a response | Publish `declaration.submitted` after storing customs declaration |
| Event-Driven Consumer | Automatically processes messages as they become available on an event channel | Pipeline triggered by `payment.received` event |
| Publish-Subscribe Channel | Broadcasts a message to all interested subscribers | `duty.calculated` published once; compliance, analytics, notification each receive independently |

6. Composite Scenarios

The spec includes 5 end-to-end HMRC scenarios showing how patterns compose across sections:

  1. Customs Declaration Processing — Consumer-Driven (Message Translator, Canonical Data Model) + Pipeline (Pipes and Filters, Content Enricher, Content-Based Router, Splitter, Aggregator) + Backend-Driven (Message Translator, Command Message)
  2. VAT Return Submission — Consumer-Driven (Envelope Wrapper) + Pipeline (Content Enricher, Content Filter, Request-Reply) + Backend-Driven (Command Message) + Future (Event Message)
  3. Excise Duty Multi-Backend Orchestration — Pipeline (Scatter-Gather, Normalizer, Aggregator, Content Filter) + Backend-Driven (Message Translator per backend)
  4. Agent Authorisation Workflow — Pipeline (Process Manager, Content-Based Router, Request-Reply) + Future (Event Message)
  5. PAYE Bulk Filing — Pipeline (Splitter, Composed Message Processor, Aggregator) + Backend-Driven (Command Message) + Future (Event Message)
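The Scatter-Gather step at the heart of scenario 3 could be sketched with `multicast`. Endpoint names and the `mergeBackendResults` strategy bean are illustrative assumptions:

```yaml
# Scatter-Gather: fan out to three backends in parallel, merge the replies
- multicast:
    parallelProcessing: true
    aggregationStrategy: "mergeBackendResults"  # assumed producer-supplied strategy bean
    steps:
      - to: "direct:query-excise"
      - to: "direct:query-customs"
      - to: "direct:query-vat"
# Normalizer and Content Filter steps would follow, shaping the merged result
```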

7. Beyond Declarative YAML

The spec includes a section on handling complexity that exceeds what Camel YAML DSL and Groovy can cleanly express. Producers can author Java components (processors, aggregation strategies, transformers) that the platform builds, tests, and includes on the pipeline’s classpath. The model: YAML for orchestration and simple steps, Java for complex business logic.

Five patterns are examined where this is most likely to be needed:

| Pattern | Simple case (YAML/Groovy) | Complex case (Java) |
| --- | --- | --- |
| Aggregator | List collection, sums | Partial failure, dedup, conflict resolution |
| Scatter-Gather | Basic parallel fan-out | Timeout-aware, partial results, failure policy |
| Process Manager | Nested choice blocks | Multi-step decision trees calling `direct:` YAML routes |
| Splitter / CMP | Sequential/parallel split | Batch awareness, concurrency control, error recovery |
| Normalizer | choice + Groovy transform | Type-safe, backend-aware, unit-testable |
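From the YAML side, a producer-authored Java component would appear only as a named reference. The bean names below are invented for illustration:

```yaml
# Complex Aggregator: "dedupAggregation" is an assumed producer-authored
# Java AggregationStrategy handling partial failure and deduplication
- split:
    simple: "${body[filings]}"
    aggregationStrategy: "dedupAggregation"
    steps:
      - to: "direct:process-filing"

# Complex per-step logic: "anomalyDetector" is an assumed producer-authored
# Java Processor on the pipeline's classpath
- process:
    ref: "anomalyDetector"
```

The orchestration stays declarative and visible in YAML; only the logic inside the referenced components moves to Java.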

This capability supports the eventual retirement of the Advanced API (full Java application, Docker image, Helm chart) model.

8. Reusable Component Library

The spec defines a three-tier component library that provides the building blocks producers use to compose patterns:

| Tier | Source | Description |
| --- | --- | --- |
| 1. Platform Catalogue APIs (primary) | HIP API catalogue | Every registered API is automatically available as a callable block with strongly typed inputs from its OAS |
| 2. Central Catalogue Registration | Governance | APIs should be registered centrally for discoverability, deduplication, and consistency |
| 3. Producer-Specific Blocks | Escape hatch | For APIs not in the catalogue — producer provides OAS, XSD, or WSDL; expectation is these graduate to the catalogue |

Three calling syntax options are presented for POC evaluation, with a first-class DSL element (`hip-api:`) as the preferred direction over Kamelet-based or URI-based approaches. Key requirement: strongly typed named inputs derived from the API’s OAS request schema, enabling design-time validation and Visual Pipeline Builder integration.
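HYPOTHETICAL sketch only — the `hip-api:` element is subject to POC and its shape may change; the API name and input names below are invented:

```yaml
# First-class DSL element for calling a catalogue API (hypothetical syntax):
# named inputs would be validated at design time against the API's OAS schema
- hip-api:
    name: "taxpayer-details"                    # illustrative catalogue API name
    inputs:
      vrn: "${body[vatRegistrationNumber]}"     # illustrative typed input
```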

9. Deliverables

| File | Description |
| --- | --- |
| `plan.md` | This document — approach, rationale, structure |
| `spec.md` | The specification — full pattern documentation with examples, diagrams, and Camel YAML DSL |
| `metadata.json` | Project metadata for .ai tooling |

10. Next Steps

  1. Review and refine spec.md pattern entries
  2. Validate Camel YAML DSL snippets against Camel 4.x documentation
  3. Review composite scenarios with domain experts
  4. Feed into Visual Pipeline Builder design (ui-simple-apis project) when UI work begins
  5. Use as input for Kamelet design and egress gateway patterns
  6. POC: Evaluate calling syntax options for reusable component library (Section 10 of spec), favouring the first-class DSL element approach
  7. POC: Validate Java component build/test/include mechanism with a representative complex pipeline (Aggregator or Process Manager)
  8. Define the catalogue API registration workflow and schema requirements