Extending Simple APIs to support more of the Enterprise Integration Patterns
Date: 2026-03-05 Status: Draft Version: 0.1 Canonical Reference: Enterprise Integration Patterns (Hohpe & Woolf)
1. Introduction
Purpose
This specification defines which Enterprise Integration Patterns (EIPs) HIP will support through its Simple API capability. It serves as a platform capability specification that:
- Establishes a shared vocabulary based on the canonical EIP reference
- Maps each selected pattern to its Apache Camel YAML DSL implementation
- Provides HMRC-aligned examples with sequence diagrams
- Clarifies which patterns are platform-managed vs producer-configured
- Documents which EIPs are out of scope and why
Simple API: Current and Enhanced
Today: A Simple API proxies requests to a single egress destination. API producers upload an OAS file, select a destination, and the platform handles routing, authentication, and transport. This covers ~85% of use cases. A Simple API requires no Java application, no custom Docker image, and no Helm chart — the platform runs the API from configuration alone.
Enhanced: For the remaining ~15% of cases requiring orchestration, transformation, or multi-backend integration, producers will be able to specify a Camel YAML workflow alongside their OAS. The pipeline is authored in Camel YAML DSL — declarative configuration that the platform executes. For most pipelines, all logic is expressed through Camel’s DSL elements and embedded Groovy/Simple expressions. For complex cases, producers can author Java components (processors, aggregation strategies, transformers) that the platform builds, tests, and includes on the pipeline’s classpath — without requiring a Docker image or Helm chart. This combination of declarative YAML for orchestration and Java for complex logic tends towards retiring the need for the Advanced API (full Java application) model. This specification defines the integration patterns those workflows can express.
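To make the enhanced model concrete, a minimal workflow might look like the sketch below. All route IDs, step names, and the Kamelet identifier are illustrative only — the exact calling syntax is subject to POC (see the Reusable Component Library decision).

```yaml
# Hypothetical minimal Camel YAML workflow — names are illustrative, not platform syntax
- route:
    id: example-simple-api-workflow
    from:
      uri: direct:example-operation        # bound to an OAS operation by the platform
    steps:
      - to:
          uri: direct:validate-request     # declarative orchestration step
      - to:
          uri: kamelet:example-backend-call  # reusable backend block
      - transform:
          groovy: |
            // simple logic stays in Groovy; complex logic can move to a
            // producer-authored Java bean on the pipeline's classpath
            return [status: 'accepted', reference: body.reference]
```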
Key Design Decisions
Six architectural decisions shape which patterns apply and where they sit:
- Canonical model defined in OAS — The canonical model is defined in OAS using JSON Schema. Within the pipeline, the Camel exchange body is a Java object representation of this model (typically `Map<String, Object>`). Producers interact with it through Groovy expressions and Camel’s Simple language, accessing fields directly (e.g., `body.declarationType` or `${body[totalDuty]}`). Serialisation to/from wire formats (JSON, XML, etc.) happens at boundaries — ingress, egress, and Kamelet calls — not between pipeline steps.
- Consumer-driven content negotiation — Consumers specify `Content-Type` (what they’re sending) and `Accept` (what they want back) independently. A consumer can send XML and request a JSON response, or vice versa. The platform deserialises the request body into canonical Java objects based on `Content-Type`, and serialises the response based on `Accept`. The pipeline is unaffected by either choice. This is a platform feature, not something producers configure.
- Backend-driven content transformation — If a backend system requires a different format (SOAP/XML, legacy protocols), the Kamelet handles the conversion transparently. The pipeline passes Java objects to the Kamelet; the Kamelet serialises to the backend’s format, makes the call, deserialises the response, and returns Java objects to the pipeline. Any necessary field mapping and restructuring between canonical and backend schemas is still pipeline work.
- Events use the same model — Event-triggered pipelines will use the same workflow patterns as API-triggered ones, but events are not currently in scope.
- Producer-authored Java components — For logic that exceeds what Camel YAML DSL and Groovy can cleanly express, producers can author Java components (processors, aggregation strategies, transformers). The platform provides build, test, and include tooling — Java is compiled and made available on the pipeline’s classpath without requiring a Docker image or Helm chart. The pipeline remains Camel YAML; Java components are referenced as named beans or classes. YAML handles orchestration and simple steps; Java handles complex business logic within individual steps. The bridge between them is bidirectional: YAML references Java via bean/class, and Java can call back into named YAML routes via `direct:`. This capability, combined with the declarative YAML pipeline, tends towards the retirement of the Advanced API (full Java application) model. (See Section 9.)
- Reusable component library — Pipelines compose their integration logic by calling reusable blocks that represent APIs and services. Every API in the HIP catalogue is automatically available as a callable block with strongly typed inputs derived from its OAS. The platform favours central catalogue registration for discoverability and governance. For APIs not in the catalogue, producers can build custom blocks from an OAS, XML Schema, or WSDL. The calling syntax is intended to be a first-class DSL element (not a URI string), enabling design-time schema validation and Visual Pipeline Builder integration. The exact syntax is subject to POC. (See Section 10.)
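The YAML↔Java bridge described in the producer-authored Java components decision can be sketched as follows. The bean name, method, and route IDs are hypothetical, and the exact wiring depends on the platform's build tooling:

```yaml
# Hypothetical sketch: a YAML route delegating one step to a producer-authored Java bean
- route:
    id: duty-recalculation
    from:
      uri: direct:recalculate-duty
    steps:
      - bean:
          ref: dutyCalculator           # Java component compiled onto the pipeline classpath
          method: recalculate
      - to:
          uri: direct:format-response   # Java code can likewise call named routes via direct:
```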
How to Read This Document
Patterns are organised into three sections based on who or what drives the behaviour, plus a future-scope section:
| Section | Description | Ownership |
|---|---|---|
| Consumer-Driven | Platform feature — behaviour specified at invocation time by the API consumer | Platform-managed |
| Pipeline | The Camel YAML workflow that the API producer defines | Producer-configured |
| Backend-Driven | Platform feature — behaviour handled based on what the backend system supports | Platform-managed (Kamelets / Egress Gateways) |
| Events | Future scope — same workflow model for event-driven pipelines | Planned |
The document also includes:
| Section | Description |
|---|---|
| Composite Pattern Scenarios | Real-world HMRC scenarios showing how patterns compose across sections |
| Out of Scope | EIP patterns not included, with rationale for exclusion |
| Beyond Declarative YAML | How Java components handle complexity that exceeds YAML’s strengths |
| Reusable Component Library | The building blocks producers use to compose patterns — catalogue APIs, typed blocks, calling syntax |
Each pattern entry includes:
- EIP Reference — link to the canonical pattern definition
- Camel DSL — the Camel YAML DSL element(s) that implement it
- Status — Supported, Planned, or Platform-managed
- Problem — when and why you need this pattern
- HIP Context — how it fits into the Simple API capability
- Example — an HMRC-aligned scenario
- Sequence Diagram — Mermaid diagram showing message flow
- Camel YAML — DSL snippet (where applicable)
2. EIP Navigation Guide
The canonical EIP reference organises its 65 patterns into 8 sections. This table maps each EIP section to our model, helping readers familiar with the book orient themselves.
| EIP Section | Brief Description | Our Section(s) |
|---|---|---|
| Integration Styles | Foundational approaches to connecting systems: File Transfer, Shared Database, Remote Procedure Invocation, Messaging | Context — HIP uses Remote Procedure Invocation (REST APIs) as its primary integration style, with Messaging (events) as a future addition. These describe our architectural choice, not patterns we implement in pipelines. |
| Messaging Systems | The fundamental building blocks of any integration architecture: channels, messages, pipes and filters, routers, translators, endpoints | All sections — these are foundational primitives. Pipes and Filters is core to Pipeline. Message Translator appears in both Consumer-Driven and Backend-Driven. Message Router underpins Pipeline routing patterns. |
| Messaging Channels | How messages are transported between systems: point-to-point, publish-subscribe, dead letter, guaranteed delivery | Platform-Managed / Out of Scope — channels are infrastructure concerns. Point-to-Point describes our synchronous API model implicitly. Publish-Subscribe maps to the future Events section. Dead Letter Channel and Guaranteed Delivery are Camel runtime or event platform concerns. |
| Message Construction | The intent, form and content of messages: commands, documents, events, request-reply, correlation identifiers | Consumer-Driven + Backend-Driven + Events — describes how messages enter and leave the system. Request-Reply is the interaction model when calling backends. Command Message is an egress pattern. Event Message is future scope. |
| Message Routing | How messages are directed to the correct receivers: content-based routing, splitting, aggregating, scatter-gather, process management | Pipeline — almost entirely producer-configured workflow patterns. This is the richest and most relevant EIP section for Simple API enhancements. |
| Message Transformation | Changing message content: translating formats, enriching with external data, filtering unwanted fields, normalising across sources | All sections — the most cross-cutting EIP category. Platform-managed at Consumer-Driven (ingress content negotiation) and Backend-Driven (egress format conversion). Producer-configured in Pipeline (enrichment, filtering, normalisation). |
| Messaging Endpoints | How applications connect to the integration system: gateways, consumers, dispatchers, idempotent receivers | Consumer-Driven + Backend-Driven + Events — Messaging Gateway is consumer-facing (API Gateway). Event-Driven Consumer is future Events. Idempotent Receiver spans Consumer-Driven and Events. |
| System Management | Operating and monitoring the integration system: tracing, debugging, testing, control | Platform-Managed / Out of Scope — Wire Tap maps to platform observability (distributed tracing). These are operational concerns handled by the platform, not producer-visible. |
Pattern Coverage Summary
| Section | Patterns Selected | Primary EIP Categories |
|---|---|---|
| Consumer-Driven | 4 | Transformation, Endpoints, Messaging Systems |
| Pipeline | 14 | Routing, Transformation, Construction |
| Backend-Driven | 4 | Transformation, Construction |
| Events (Future) | 3 | Construction, Channels, Endpoints |
| Total Selected | 25 | |
| Out of Scope | 40 | Channels, System Management, Endpoints |
3. Consumer-Driven Patterns
Platform feature — behaviour specified at invocation time by the API consumer.
These patterns describe how the platform handles incoming requests before they reach the producer’s pipeline. Consumers influence behaviour through standard HTTP mechanisms (content type headers, idempotency keys). Producers benefit from these automatically — they don’t configure them.
3.1 Message Translator (Ingress)
EIP Reference: Message Translator Camel DSL: `marshal` / `unmarshal`, `dataFormat` Status: Platform-managed
Problem: Systems using different data formats need to communicate. The sender uses one format; the receiver expects another.
HIP Context: API consumers may send requests in XML or other formats. The platform deserialises the incoming content into the canonical Java object representation (as defined by the OAS) before the pipeline processes it. The pipeline always works with Java objects — producers never handle format conversion at ingress. Content-Type and Accept are independent: a consumer can send XML and request a JSON response, or vice versa. The platform handles both translations independently — deserialising the request body based on Content-Type and serialising the response based on Accept.
Example: A legacy customs system submits an import declaration as XML via Content-Type: application/xml but requests a JSON response via Accept: application/json. The platform deserialises the XML into canonical Java objects for the pipeline. On the response path, it serialises the Java objects to JSON as requested by the Accept header.
sequenceDiagram
  participant Consumer
  participant Gateway as API Gateway
  participant Translator as Message Translator
  participant Pipeline
  Consumer->>Gateway: POST /customs/declarations<br/>Content-Type: application/xml<br/>Accept: application/json<br/>[XML payload]
  Gateway->>Translator: Forward request
  Translator->>Translator: Deserialise XML → Java objects<br/>(per OAS schema)
  Translator->>Pipeline: Canonical Java objects
  Pipeline-->>Translator: Java objects (response)
  Translator->>Translator: Serialise Java objects → JSON<br/>(per Accept header)
  Translator-->>Consumer: JSON response
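Although this translation is platform-managed and never authored by producers, it conceptually corresponds to Camel's `unmarshal`/`marshal` data-format elements. The sketch below is an illustration of that correspondence, not platform configuration; the data-format choices are assumptions:

```yaml
# Conceptual only — the platform performs this internally; producers do not write it
- route:
    from:
      uri: direct:ingress
    steps:
      - unmarshal:
          jacksonXml: {}          # Content-Type: application/xml → Java objects
      - to:
          uri: direct:pipeline    # pipeline works with canonical Java objects throughout
      - marshal:
          json:
            library: Jackson      # Accept: application/json → JSON response
```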
3.2 Envelope Wrapper (Ingress)
EIP Reference: Envelope Wrapper Camel DSL: Header manipulation (platform-level) Status: Platform-managed
Problem: Existing systems participate in messaging exchanges that place specific requirements on message format, such as header fields or encryption. The business payload needs to be separated from transport concerns.
HIP Context: The API Gateway strips transport-specific headers, extracts OAuth token claims, and validates authentication before presenting a clean business payload with auth context to the pipeline. The producer’s workflow receives only the canonical Java objects and relevant auth claims — not raw HTTP headers, TLS details, or transport metadata.
Example: A VAT return submission arrives with OAuth bearer token and rate-limiting metadata. The API Gateway validates the token, extracts the taxpayer identifier from claims, strips transport headers, and passes the clean VAT return payload plus taxpayerId context to the pipeline.
sequenceDiagram
  participant Consumer
  participant Gateway as API Gateway
  participant Pipeline
  Consumer->>Gateway: POST /vat/returns<br/>Authorization: Bearer {token}<br/>X-Request-Id: abc123<br/>[VAT return payload]
  Gateway->>Gateway: Validate OAuth token<br/>Extract taxpayerId from claims<br/>Strip transport headers
  Gateway->>Pipeline: Canonical Java objects<br/>+ auth context (taxpayerId, scopes)
3.3 Canonical Data Model
EIP Reference: Canonical Data Model Camel DSL: N/A (design pattern, not a runtime element) Status: Platform design decision
Problem: When integrating multiple applications that use different data formats, point-to-point translations create an unsustainable number of format mappings (N x N problem).
HIP Context: This is a foundational design decision, not a runtime pattern. Every Simple API defines its canonical model in OAS using JSON Schema. At runtime, this model is represented as Java objects within the pipeline. Consumer-Driven Message Translation deserialises inbound wire formats into these canonical Java objects. Backend-Driven Message Translation serialises them to whatever format the backend requires. The pipeline only ever works with the canonical model as Java objects — it is never concerned with wire format.
Example: A customs declaration has one canonical model defined in OAS (JSON Schema). Whether a consumer sends JSON or XML, and whether the backend stores it in a relational database, an XML document store, or a legacy mainframe format, the pipeline always processes the same canonical Java objects with fields like declarationType, goodsItems[], trader.eori, and totalDuty.
sequenceDiagram
  participant XMLConsumer as XML Consumer
  participant JSONConsumer as JSON Consumer
  participant Platform as Platform (Canonical Java Objects)
  participant SOAPBackend as SOAP Backend
  participant RESTBackend as REST Backend
  XMLConsumer->>Platform: XML declaration
  JSONConsumer->>Platform: JSON declaration
  Note over Platform: Both deserialised to canonical<br/>Java objects (OAS-defined schema)
  Platform->>SOAPBackend: SOAP/XML (backend format)
  Platform->>RESTBackend: JSON (backend format)
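A fragment of what such an OAS-defined canonical model might look like is sketched below. The field set mirrors the customs example above and is illustrative only, not a real HMRC schema:

```yaml
# Illustrative OAS component schema for the canonical declaration model
components:
  schemas:
    CustomsDeclaration:
      type: object
      properties:
        declarationType:
          type: string
          enum: [simplified, full]
        trader:
          type: object
          properties:
            eori:
              type: string
        goodsItems:
          type: array
          items:
            type: object
        totalDuty:
          type: number
```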
3.4 Messaging Gateway
EIP Reference: Messaging Gateway Camel DSL: N/A (API Gateway) Status: Platform-managed
Problem: Application code becomes tightly coupled to the integration infrastructure. A gateway encapsulates access to the messaging system, providing a clean interface.
HIP Context: The Simple API endpoint on the API Gateway is the Messaging Gateway. Consumers interact with a standard REST API — they have no knowledge of whether the request is proxied directly to a backend or processed through a Camel YAML workflow. The gateway abstracts the entire integration layer. From the consumer’s perspective, the REST contract is the same regardless of backend complexity.
Example: A consumer calls POST /excise/duty-submissions with a JSON payload. They receive a standard REST response. They don’t know (or care) whether the platform proxied directly to a single backend or orchestrated calls across three backend systems with conditional logic and transformation. The gateway contract is the OAS — everything behind it is an implementation detail.
sequenceDiagram
  participant Consumer
  participant Gateway as API Gateway (Messaging Gateway)
  participant Simple as Simple Proxy
  participant Workflow as Camel Workflow
  Consumer->>Gateway: POST /excise/duty-submissions
  alt Simple API (proxy mode)
    Gateway->>Simple: Forward to single egress
  else Enhanced Simple API (workflow mode)
    Gateway->>Workflow: Execute Camel pipeline
  end
  Gateway-->>Consumer: Standard REST response
  Note over Consumer: Consumer sees identical<br/>REST contract either way
4. Pipeline Patterns
Producer-configured — the Camel YAML workflow that the API producer defines.
These are the core patterns that producers compose within their Camel YAML workflows. They represent the value of the Simple API enhancement: enabling complex integration logic without writing full applications.
4a. Flow Structure
Patterns that define how steps are sequenced and how overall flow is managed.
4.1 Pipes and Filters
EIP Reference: Pipes and Filters Camel DSL: Sequential route steps, `direct` Status: Supported
Problem: Complex processing needs to be decomposed into independent, reusable steps. Monolithic processing is hard to maintain, test, and extend.
HIP Context: This is the foundational pattern for Simple API workflows. Every Camel YAML pipeline is a Pipes and Filters architecture — a sequence of steps where each step receives input, processes it, and passes output to the next. Each step is independent and testable. Steps can be reordered or inserted without affecting others.
Example: A PAYE filing pipeline processes an employer’s monthly payroll submission through five sequential steps: validate employee records against HMRC’s database, calculate tax and NI contributions, check for anomalies against historical patterns, store the filing with the tax platform, and return a confirmation with the filing reference.
sequenceDiagram
  participant Consumer
  participant Step1 as Validate
  participant Step2 as Calculate Tax
  participant Step3 as Check Anomalies
  participant Step4 as Store Filing
  participant Consumer2 as Consumer
  Consumer->>Step1: PAYE filing payload
  Step1->>Step2: Validated payload
  Step2->>Step3: Payload + tax calculations
  Step3->>Step4: Payload + anomaly flags
  Step4->>Consumer2: Filing confirmation {ref: "PAYE-2026-03"}
- route:
id: paye-monthly-filing
from:
uri: direct:paye-filing
steps:
- to:
uri: direct:validate-employees
- to:
uri: direct:calculate-tax-ni
- to:
uri: direct:check-anomalies
- to:
uri: kamelet:tax-platform-storeFiling
- transform:
groovy: |
            // Assemble confirmation response
4.2 Routing Slip
EIP Reference: Routing Slip Camel DSL: Sequential route steps (design-time sequence) Status: Supported
Problem: A message needs to be routed through a series of processing steps, but the sequence must be defined per use case rather than hardcoded into a single monolithic route.
HIP Context: In Simple API workflows, the routing slip is implicit — the producer defines the sequence of steps at design-time for each route in their OAS. Different routes (e.g., POST /authorisations vs GET /authorisations/{id}) have different step sequences. Each sequence is a routing slip attached to that route. Unlike a Process Manager, the sequence is fixed at design-time — it doesn’t change based on runtime data.
Example: An agent authorisation request follows a four-step sequence: validate the agent’s credentials against the agent services API, check the agent-client relationship exists, verify the requested scope is permitted for the relationship type, and record the authorisation grant. Every authorisation request follows this exact sequence.
sequenceDiagram
  participant Consumer
  participant Step1 as Validate Credentials
  participant Step2 as Check Relationship
  participant Step3 as Verify Scope
  participant Step4 as Record Grant
  participant Consumer2 as Consumer
  Consumer->>Step1: Authorisation request
  Step1->>Step2: Credentials valid ✓
  Step2->>Step3: Relationship confirmed ✓
  Step3->>Step4: Scope permitted ✓
  Step4->>Consumer2: 201 Created {authorisationId}
- route:
id: agent-authorisation
from:
uri: direct:authorise-agent
steps:
- to:
uri: kamelet:agent-services-validateCredentials
- to:
uri: kamelet:agent-services-checkRelationship
- to:
uri: kamelet:agent-services-verifyScope
- to:
          uri: kamelet:agent-services-recordGrant
4.3 Process Manager
EIP Reference: Process Manager Camel DSL: `choice` + sequential steps (conditional flow) Status: Supported
Problem: A message must be routed through multiple processing steps, but the required steps are not fixed — the next step depends on the outcome of the previous step.
HIP Context: Unlike a Routing Slip where the sequence is fixed, a Process Manager adapts the flow at runtime. In Simple API workflows, this is achieved by combining sequential steps with Content-Based Router (CHOICE) blocks. The pipeline evaluates results from each step and determines the next action. This is the most complex flow pattern and maps to multi-step business processes.
Example: A new trader registration adapts based on entity type. The pipeline validates the trader’s identity, then branches: if the trader is a sole trader, it checks their self-assessment record; if a limited company, it verifies against Companies House. Based on the result, it assigns a registration category (standard, simplified, or authorised economic operator), provisions the trader’s accounts, and sends a notification.
sequenceDiagram
  participant Consumer
  participant Pipeline
  participant SelfAssess as Self-Assessment API
  participant CompHouse as Companies House API
  participant Provision as Provisioning API
  Consumer->>Pipeline: POST /traders/registrations
  Pipeline->>Pipeline: Validate identity
  alt entityType = "sole-trader"
    Pipeline->>SelfAssess: GET /self-assessment/{utr}
    SelfAssess-->>Pipeline: SA record
  else entityType = "limited-company"
    Pipeline->>CompHouse: GET /companies/{crn}
    CompHouse-->>Pipeline: Company record
  end
  Pipeline->>Pipeline: Determine registration category
  Pipeline->>Provision: POST /accounts
  Provision-->>Pipeline: Account created
  Pipeline-->>Consumer: 201 Created {traderId, category}
- route:
id: trader-registration
from:
uri: direct:register-trader
steps:
- to:
uri: direct:validate-identity
- choice:
when:
- simple: "${exchangeProperty.entityType} == 'sole-trader'"
steps:
- to:
uri: kamelet:self-assessment-getRecord/reg-sa
- simple: "${exchangeProperty.entityType} == 'limited-company'"
steps:
- to:
uri: kamelet:companies-house-getCompany/reg-ch
- to:
uri: direct:determine-category
- to:
          uri: kamelet:provisioning-createAccount/reg-prov
4b. Conditional and Filtering
Patterns that route or filter messages based on content inspection.
4.4 Content-Based Router
EIP Reference: Content-Based Router Camel DSL: `choice`, `when`, `otherwise` Status: Supported
Problem: A single logical function is spread across multiple physical systems, or different processing is needed based on message content. The router inspects the message and directs it to the appropriate path.
HIP Context: Content-Based Router is implemented as a CHOICE block in the Camel YAML workflow. The producer defines conditions based on fields in the canonical model (Java objects) and routes to different step sequences. This is one of the most frequently needed patterns — almost any non-trivial integration involves branching logic.
Example: A customs declaration pipeline routes based on declarationType. Simplified declarations (low-value goods, trusted traders) go through a fast-track path with fewer checks. Full declarations go through the standard path with complete tariff classification, valuation, and compliance checks.
sequenceDiagram
  participant Consumer
  participant Router as Content-Based Router
  participant FastTrack as Fast-Track Path
  participant Standard as Standard Path
  participant Consumer2 as Consumer
  Consumer->>Router: POST /customs/declarations
  alt declarationType = "simplified"
    Router->>FastTrack: Simplified checks
    FastTrack->>Consumer2: 201 Created (fast-track ref)
  else declarationType = "full"
    Router->>Standard: Full classification + valuation
    Standard->>Consumer2: 201 Created (full ref)
  end
- choice:
when:
- simple: "${body[declarationType]} == 'simplified'"
steps:
- to:
uri: direct:simplified-declaration
- simple: "${body[declarationType]} == 'full'"
steps:
- to:
uri: direct:full-declaration
otherwise:
steps:
- setHeader:
name: CamelHttpResponseCode
constant: 400
- transform:
            constant: '{"error": "Unknown declaration type"}'
4.5 Message Filter
EIP Reference: Message Filter Camel DSL: `choice` + error response / `stop` Status: Supported
Problem: A component receives messages that it cannot or should not process. It needs to discard or reject them without passing them further down the pipeline.
HIP Context: In the Simple API context, Message Filter manifests as early validation with short-circuit. The pipeline checks a condition and, if it fails, returns an error response immediately without invoking any backend systems. This is critical for efficiency — why call three backend APIs if the input is invalid? It differs from Content-Based Router in that the “filtered” path terminates the pipeline rather than routing to an alternative.
Example: An excise duty submission pipeline validates the duty calculation before doing anything else. If the calculation fails validation (e.g., negative duty amount, missing product codes, duty period out of range), the pipeline returns a 422 response immediately without calling the storage or notification backends.
sequenceDiagram
  participant Consumer
  participant Pipeline
  participant Validator as Excise Validation API
  participant Storage as Storage API
  Consumer->>Pipeline: POST /excise/duty-submissions
  Pipeline->>Validator: Validate duty calculation
  Validator-->>Pipeline: {valid: false, errors: [...]}
  alt valid = false
    Pipeline-->>Consumer: 422 Unprocessable Entity<br/>{errors: [...]}
    Note over Storage: Not called
  else valid = true
    Pipeline->>Storage: Store submission
    Storage-->>Pipeline: {submissionRef: "EX-001"}
    Pipeline-->>Consumer: 201 Created {submissionRef}
  end
- route:
id: excise-duty-submission
from:
uri: direct:submit-duty
steps:
- to:
uri: kamelet:excise-validateCalculation/exc-val
- choice:
when:
- simple: "${exchangeProperty.validationResult[valid]} == false"
steps:
- setHeader:
name: CamelHttpResponseCode
constant: 422
          - transform:
              groovy: |
                // Keep the body as Java objects — serialisation happens at the boundary
                def result = exchange.getProperty('validationResult')
                return [errors: result.errors]
- stop: {}
- to:
          uri: kamelet:tax-platform-storeSubmission/exc-store
4c. Transformation and Enrichment
Patterns that modify, augment, or reduce message content within the pipeline.
4.6 Content Enricher
EIP Reference: Content Enricher Camel DSL: `kamelet` (API call) + `setProperty` Status: Supported
Problem: A message doesn’t contain all the data required for processing. Additional data must be obtained from an external source and merged into the message.
HIP Context: Content Enricher is one of the most common pipeline patterns. The producer calls a backend API to fetch additional data, stores the result in the exchange context, and uses it in subsequent steps. This is how pipelines add information that the consumer didn’t (or couldn’t) provide. The Kamelet abstraction handles the mechanics of the backend call; the producer focuses on what data to fetch and where to use it.
Example: A VAT return submission includes the taxpayer’s VRN (VAT Registration Number) but not their full registration details or current compliance status. The pipeline enriches the return by calling the taxpayer API with the VRN to fetch registration details, then calls the compliance API to get the taxpayer’s current risk score. Both are added to the context for use in subsequent processing steps.
sequenceDiagram
  participant Consumer
  participant Pipeline
  participant TaxpayerAPI as Taxpayer API
  participant ComplianceAPI as Compliance API
  participant StorageAPI as Storage API
  Consumer->>Pipeline: POST /vat/returns<br/>{vrn: "GB123456789", ...}
  Pipeline->>TaxpayerAPI: GET /taxpayers/{vrn}
  TaxpayerAPI-->>Pipeline: {name, address, registrationDate, ...}
  Note over Pipeline: Store as exchangeProperty.taxpayer
  Pipeline->>ComplianceAPI: GET /compliance/{vrn}
  ComplianceAPI-->>Pipeline: {riskScore: "low", lastFiled: ...}
  Note over Pipeline: Store as exchangeProperty.compliance
  Pipeline->>StorageAPI: POST /returns (enriched payload)
  StorageAPI-->>Pipeline: {returnRef: "VAT-2026-Q1"}
  Pipeline-->>Consumer: 201 Created {returnRef}
- route:
id: vat-return-enriched
from:
uri: direct:submit-vat-return
steps:
- to:
uri: kamelet:taxpayer-getByVrn/vat-tp?vrn=${body[vrn]}
- setProperty:
name: taxpayer
simple: "${body}"
- to:
uri: kamelet:compliance-getRiskScore/vat-comp?vrn=${exchangeProperty.originalVrn}
- setProperty:
name: compliance
simple: "${body}"
- to:
uri: direct:assemble-enriched-return
- to:
          uri: kamelet:tax-platform-storeReturn/vat-store
4.7 Content Filter
EIP Reference: Content Filter Camel DSL: `transform`, `setBody` Status: Supported
Problem: A message contains more data than the next step or the consumer needs. Passing the entire message is wasteful, may expose sensitive data, or may confuse downstream processing.
HIP Context: Content Filter is used to strip fields from a response before returning it to the consumer, or to reduce a message before passing it to the next step. Common uses: removing internal backend fields, stripping personally identifiable information, selecting only the fields the consumer requested (sparse fieldsets). The canonical model may contain fields accumulated from multiple enrichment steps — Content Filter shapes the final output.
Example: A self-assessment enquiry returns the taxpayer’s full record from the backend, including address history, bank details, correspondence preferences, and historical filings. The consumer only needs the liability summary. The pipeline filters the response to include only the current year’s liability, payment status, and any amounts due.
sequenceDiagram
  participant Consumer
  participant Pipeline
  participant Backend as Self-Assessment API
  Consumer->>Pipeline: GET /self-assessment/{utr}/liability
  Pipeline->>Backend: GET /taxpayers/{utr}/full-record
  Backend-->>Pipeline: Full record (50+ fields)
  Pipeline->>Pipeline: Content Filter:<br/>Keep only liability fields
  Pipeline-->>Consumer: {currentYear: "2025-26",<br/>liability: 4250.00,<br/>paid: 2125.00,<br/>balanceDue: 2125.00}
- route:
id: sa-liability-summary
from:
uri: direct:get-liability
steps:
- to:
uri: kamelet:self-assessment-getFullRecord/sa-full
- transform:
groovy: |
// body is a Java Map — access fields directly
def summary = [
currentYear: body.taxYear,
liability: body.calculations.totalLiability,
paid: body.payments.totalPaid,
balanceDue: body.calculations.totalLiability - body.payments.totalPaid
]
            return summary
4.8 Normalizer
EIP Reference: Normalizer Camel DSL: `choice` + `transform` Status: Supported
Problem: Semantically equivalent messages arrive in different formats from different sources. Downstream processing needs a single consistent format.
HIP Context: While the Canonical Data Model handles wire format differences at ingress (consumer-side) and egress (backend-side), the Normalizer handles structural differences between backend responses within the pipeline. When a pipeline calls multiple backends that return the same logical data in different object structures, the Normalizer brings them into a common shape before aggregation or further processing. This is distinct from Message Translator (which handles wire format conversion like XML→JSON at boundaries) — the Normalizer handles structural differences between Java objects within the pipeline.
Example: A duty calculation pipeline queries two excise backend systems. The legacy system returns a flat structure ({productCode: "W100", dutyRate: 28.74, dutyAmount: 143.70}), while the modern system returns a nested structure ({product: {code: "W100"}, duty: {rate: 28.74, calculated: {amount: 143.70}}}). The Normalizer converts both to a standard duty calculation model before aggregation.
sequenceDiagram
  participant Pipeline
  participant Legacy as Legacy Excise API
  participant Modern as Modern Excise API
  participant Normalizer as Normalizer
  participant Next as Next Step
  Pipeline->>Legacy: GET /duty-calc (product W100)
  Legacy-->>Pipeline: {productCode: "W100",<br/>dutyRate: 28.74, dutyAmount: 143.70}
  Pipeline->>Normalizer: Flat structure
  Normalizer->>Normalizer: Detect format → apply flat transform
  Pipeline->>Modern: GET /duty-calc (product B200)
  Modern-->>Pipeline: {product: {code: "B200"},<br/>duty: {rate: 12.50, calculated: {amount: 62.50}}}
  Pipeline->>Normalizer: Nested structure
  Normalizer->>Normalizer: Detect format → apply nested transform
  Normalizer->>Next: Standard model:<br/>{code, rate, amount} for both
- choice:
when:
- simple: "${body.containsKey('productCode')}"
steps:
- transform:
groovy: |
// body is a Java Map — access and reshape directly
return [code: body.productCode, rate: body.dutyRate, amount: body.dutyAmount]
- simple: "${body.containsKey('product')}"
steps:
- transform:
groovy: |
// body is a Java Map — access nested fields directly
return [code: body.product.code, rate: body.duty.rate, amount: body.duty.calculated.amount]
4d. Splitting, Aggregation and Parallel Processing
Patterns that decompose messages, process parts independently, and recombine results.
4.9 Splitter
EIP Reference: Splitter
Camel DSL: split
Status: Supported
Problem: A composite message contains multiple elements that each need to be processed independently — potentially with different routing or transformation for each element.
HIP Context: Many HMRC submissions contain arrays of items that need individual processing. The Splitter breaks the array into individual messages, each processed through subsequent pipeline steps. After splitting, each item has access to the parent context (e.g., the declaration-level fields) as well as its own item-level fields. Commonly paired with Aggregator to reassemble results.
Example: A customs declaration contains 50 goods items. Each item needs individual tariff classification — the commodity code determines the duty rate, and different goods may have different requirements (licenses, quotas, preferences). The Splitter processes each goods item independently.
sequenceDiagram
  participant Consumer
  participant Splitter
  participant TariffAPI as Tariff API
  participant Next as Next Step
  Consumer->>Splitter: Declaration with 3 goods items
  Splitter->>TariffAPI: Item 1: {commodityCode: "2204.21"}
  TariffAPI-->>Splitter: {dutyRate: 32.0, measures: [...]}
  Splitter->>TariffAPI: Item 2: {commodityCode: "0901.11"}
  TariffAPI-->>Splitter: {dutyRate: 0.0, preference: "GSP"}
  Splitter->>TariffAPI: Item 3: {commodityCode: "8471.30"}
  TariffAPI-->>Splitter: {dutyRate: 0.0, measures: [...]}
  Splitter->>Next: 3 classified items
- split:
jsonpath: "$.goodsItems"
steps:
# toD is needed so the dynamic ${...} placeholder in the URI is evaluated per item
- toD:
uri: kamelet:tariff-classifyItem/cls-${exchangeProperty.CamelSplitIndex}
- setProperty:
name: classificationResult
simple: "${body}"
4.10 Aggregator
EIP Reference: Aggregator
Camel DSL: aggregate
Status: Supported
Problem: Related messages that were processed individually need to be recombined into a single composite message. The aggregator must determine which messages belong together and when the set is complete.
HIP Context: Aggregator is the natural counterpart to Splitter. After splitting a customs declaration into individual goods items and classifying each, the Aggregator recombines the classified items into a single declaration response with per-item duties and a total. In a config-only pipeline, aggregation is achieved by collecting results into exchange properties during the split and then using a Groovy transform after the split block to assemble the final response.
Example: After splitting and classifying 50 goods items from a customs declaration, the Aggregator combines all classification results into a single response containing per-item duty amounts and a total customs duty for the declaration.
sequenceDiagram
  participant Split as Split Items
  participant Agg as Aggregator
  participant Consumer
  Split->>Agg: Item 1: {duty: 320.00}
  Split->>Agg: Item 2: {duty: 0.00, preference: "GSP"}
  Split->>Agg: Item 3: {duty: 0.00}
  Note over Agg: All 3 items received.<br/>Combine results.
  Agg->>Consumer: {items: [...],<br/>totalDuty: 320.00,<br/>itemCount: 3}
- split:
jsonpath: "$.goodsItems"
aggregationStrategy: useLatest
steps:
# toD evaluates the dynamic ${...} placeholders in the URIs
- toD:
uri: kamelet:tariff-classifyItem/cls-${exchangeProperty.CamelSplitIndex}
- toD:
uri: kamelet:duty-calculateItem/dty-${exchangeProperty.CamelSplitIndex}
- setProperty:
name: processedItems
groovy: |
// initialise the collection on the first split iteration
def items = exchange.getProperty('processedItems') ?: []
items << body
return items
# After the split block, transform the collected results
# into the final aggregated response
- transform:
groovy: |
def items = exchange.getProperty('processedItems')
def totalDuty = items.sum { it.dutyAmount ?: 0 }
return [items: items, totalDuty: totalDuty, itemCount: items.size()]
For complex aggregation logic (partial failure handling, deduplication, conflict resolution), a producer-authored Java AggregationStrategy can be used — see Section 9.1.
4.11 Scatter-Gather
EIP Reference: Scatter-Gather
Camel DSL: multicast + aggregationStrategy
Status: Supported
Problem: A message needs to be sent to multiple recipients concurrently, and their replies must be aggregated into a single combined response.
HIP Context: Scatter-Gather sends the same request (or derived requests) to multiple backend APIs in parallel and combines their responses. This is more efficient than sequential calls when the backends are independent. The multicast element in Camel sends to all recipients simultaneously; the aggregationStrategy defines how to combine the results.
Example: A trader overview API needs to present a unified view of a trader’s status across three independent domains. The pipeline queries the excise registration API, customs authorisation API, and VAT registration API simultaneously using the trader’s EORI number, then merges the three responses into a single trader status document.
sequenceDiagram
  participant Consumer
  participant Pipeline
  participant Excise as Excise Registration
  participant Customs as Customs Authorisation
  participant VAT as VAT Registration
  Consumer->>Pipeline: GET /traders/{eori}/overview
  par Scatter
    Pipeline->>Excise: GET /excise/registrations/{eori}
    Pipeline->>Customs: GET /customs/authorisations/{eori}
    Pipeline->>VAT: GET /vat/registrations/{eori}
  end
  Excise-->>Pipeline: {exciseStatus: "active", ...}
  Customs-->>Pipeline: {aeoStatus: "authorised", ...}
  VAT-->>Pipeline: {vatStatus: "registered", ...}
  Note over Pipeline: Aggregate responses
  Pipeline-->>Consumer: {eori, excise: {...},<br/>customs: {...}, vat: {...}}
- route:
id: trader-overview
from:
uri: direct:get-trader-overview
steps:
- setProperty:
name: eori
simple: "${body[eori]}"
- multicast:
aggregationStrategy: useLatest
parallelProcessing: true
steps:
- pipeline:
steps:
- to:
uri: kamelet:excise-getRegistration/to-exc
- setProperty:
name: exciseResult
simple: "${body}"
- pipeline:
steps:
- to:
uri: kamelet:customs-getAuthorisation/to-cust
- setProperty:
name: customsResult
simple: "${body}"
- pipeline:
steps:
- to:
uri: kamelet:vat-getRegistration/to-vat
- setProperty:
name: vatResult
simple: "${body}"
- transform:
groovy: |
return [
eori: exchange.getProperty('eori'),
excise: exchange.getProperty('exciseResult'),
customs: exchange.getProperty('customsResult'),
vat: exchange.getProperty('vatResult')
]
For complex aggregation with timeout handling and partial failure policies, a producer-authored Java AggregationStrategy can be used — see Section 9.2.
4.12 Composed Message Processor
EIP Reference: Composed Message Processor
Camel DSL: split + processing steps + implicit aggregation
Status: Supported
Problem: A composite message contains multiple elements, each of which may require different processing. The results must be reassembled into a single output. This combines Splitter, Router (per-element), and Aggregator into a single logical pattern.
HIP Context: Composed Message Processor is a common pattern for HMRC submissions containing line items. Each item may need individual validation, classification, or calculation — potentially calling different APIs depending on the item type. The key distinction from a simple Splitter is that each element may be routed differently based on its content.
Example: An import declaration contains goods items of different types. Each item needs tariff classification, but controlled goods (firearms, pharmaceuticals, dual-use) require an additional licence check. The pipeline splits items, routes each based on control status, and reassembles the full declaration with per-item results.
sequenceDiagram
  participant Consumer
  participant Splitter
  participant Tariff as Tariff API
  participant Licence as Licence API
  participant Agg as Aggregator
  Consumer->>Splitter: Declaration with mixed items
  Splitter->>Tariff: Item 1 (wine) → classify
  Tariff-->>Splitter: Classified, no licence needed
  Splitter->>Tariff: Item 2 (pharmaceutical) → classify
  Tariff-->>Splitter: Classified, controlled
  Splitter->>Licence: Item 2 → check licence
  Licence-->>Splitter: Licence valid ✓
  Splitter->>Tariff: Item 3 (electronics) → classify
  Tariff-->>Splitter: Classified, no licence needed
  Splitter->>Agg: Reassemble results
  Agg-->>Consumer: Declaration with per-item classification + licence status
- split:
jsonpath: "$.goodsItems"
aggregationStrategy: useLatest
steps:
# toD is needed so the dynamic ${...} placeholder in the URI is evaluated per item
- toD:
uri: kamelet:tariff-classifyItem/cmp-cls-${exchangeProperty.CamelSplitIndex}
- choice:
when:
- simple: "${body[controlled]} == true"
steps:
- toD:
uri: kamelet:licence-checkItem/cmp-lic-${exchangeProperty.CamelSplitIndex}
- setProperty:
name: processedItems
groovy: |
// initialise the collection on the first split iteration
def items = exchange.getProperty('processedItems') ?: []
items << body
return items
- transform:
groovy: |
def items = exchange.getProperty('processedItems')
return [items: items, itemCount: items.size()]
For complex cases requiring per-item error recovery, batch-aware splitting, or concurrency control, producer-authored Java components can be used — see Section 9.4.
4.13 Recipient List
EIP Reference: Recipient List
Camel DSL: multicast / recipientList
Status: Supported
Problem: A message needs to be sent to multiple recipients, but unlike Publish-Subscribe, the sender determines the list of recipients (potentially dynamically based on message content).
HIP Context: Recipient List is used when a pipeline needs to notify or update multiple systems as part of processing. Unlike Scatter-Gather, the pipeline may not need to wait for or aggregate the responses — some recipients may be fire-and-forget. The list of recipients can be determined by the message content (e.g., notify the agent only if the submission was made via an agent).
Example: A duty payment confirmation needs to be sent to three systems: the accounting system (to record the payment), the trader’s registered agent (if the submission was agent-mediated), and the compliance ledger (to update the trader’s payment history). The pipeline sends to all applicable recipients in parallel.
sequenceDiagram
  participant Pipeline
  participant Accounting as Accounting API
  participant Agent as Agent Notification
  participant Compliance as Compliance Ledger
  Note over Pipeline: Payment confirmed
  par Notify recipients
    Pipeline->>Accounting: POST /payments (record)
    Pipeline->>Agent: POST /notifications (if agent-mediated)
    Pipeline->>Compliance: PUT /ledger/{eori} (update history)
  end
  Accounting-->>Pipeline: Recorded ✓
  Agent-->>Pipeline: Notified ✓
  Compliance-->>Pipeline: Updated ✓
- multicast:
parallelProcessing: true
steps:
- to:
uri: kamelet:accounting-recordPayment/rl-acc
- choice:
when:
- simple: "${exchangeProperty.agentMediated} == true"
steps:
- to:
uri: kamelet:agent-notify/rl-agent
- to:
uri: kamelet:compliance-updateLedger/rl-comp
4.14 Request-Reply
EIP Reference: Request-Reply
Camel DSL: kamelet (synchronous call)
Status: Supported
Problem: An application sends a message and needs to receive a response from the receiver to continue processing. This is the fundamental synchronous interaction pattern.
HIP Context: Request-Reply is the basic interaction model for every backend API call in a pipeline. When a Kamelet calls a backend API, it sends a request and waits for the response before the pipeline continues to the next step. The response is stored in the exchange body (or as an exchange property) for use by subsequent steps. This pattern underpins Content Enricher, Scatter-Gather, and most other pipeline patterns that involve backend calls.
Example: The pipeline calls the excise backend to validate and calculate duty on a submission. It sends the submission payload, waits for the validation result (which includes calculated duty amounts), stores the result in the exchange context, and uses it in the next step (either proceeding to storage or returning a validation error).
sequenceDiagram
  participant Pipeline
  participant Excise as Excise Validation API
  Pipeline->>Excise: POST /validate-and-calculate<br/>{productCode: "W100",<br/>quantity: 500, ...}
  Note over Excise: Process request
  Excise-->>Pipeline: {valid: true,<br/>dutyRate: 28.74,<br/>dutyAmount: 143.70}
  Note over Pipeline: Store response as<br/>exchangeProperty.validationResult
  Pipeline->>Pipeline: Continue to next step
- to:
uri: kamelet:excise-validateAndCalculate/exc-val
- setProperty:
name: validationResult
simple: "${body}"
# Pipeline continues with validationResult available in context
5. Backend-Driven Patterns
Platform feature — behaviour determined by what the backend system supports.
These patterns describe how the platform handles communication with backend systems via Kamelets and egress gateways. The backend’s characteristics (protocol, format, authentication) determine the behaviour. Producers interact with backends through the Kamelet abstraction and don’t need to manage these concerns directly.
5.1 Message Translator (Egress)
EIP Reference: Message Translator
Camel DSL: marshal / unmarshal, dataFormat (inside Kamelet)
Status: Platform-managed (Kamelet)
Problem: The receiving system expects data in a specific format that differs from the sender’s format. A translator converts between the two.
HIP Context: The pipeline always works with canonical Java objects. When a backend requires a different format (SOAP/XML, fixed-width, legacy binary), the Kamelet handles the conversion transparently. The producer calls kamelet:excise-validateAndCalculate and passes/receives Java objects — the Kamelet serialises to SOAP/XML, makes the call, and deserialises the response back to Java objects. This means pipelines are never coupled to backend data formats. If a backend migrates from SOAP to REST, only the Kamelet changes — every pipeline using it continues to work unchanged.
Example: The excise validation backend is a legacy SOAP service. The pipeline passes canonical Java objects to the Kamelet. The Kamelet serialises them to SOAP/XML, calls the backend, deserialises the SOAP/XML response back to Java objects, and returns them to the pipeline. The producer’s Camel YAML never deals with wire formats.
sequenceDiagram
  participant Pipeline
  participant Kamelet
  participant Backend as Excise SOAP Backend
  Pipeline->>Kamelet: Java objects: {productCode: "W100", qty: 500}
  Kamelet->>Kamelet: Serialise → SOAP/XML
  Kamelet->>Backend: SOAP: <validateDuty>...</validateDuty>
  Backend-->>Kamelet: SOAP: <dutyResult>...</dutyResult>
  Kamelet->>Kamelet: Deserialise → Java objects
  Kamelet-->>Pipeline: Java objects: {valid: true, dutyAmount: 143.70}
  Note over Pipeline: Pipeline only sees Java objects
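Producers never author this conversion, but for illustration the inside of such a Kamelet could be sketched roughly as follows. This is an assumption-laden sketch, not the platform's actual Kamelet definition: the backend URL is hypothetical, and jacksonXml stands in for whatever data format the real Kamelet uses.

```yaml
# Illustrative sketch only, not the platform's real Kamelet internals.
# The Kamelet marshals canonical Java objects to XML before the call
# and unmarshals the response back to Java objects afterwards.
- marshal:
    jacksonXml: {}        # Java objects → XML (assumed data format)
- to:
    uri: https://legacy-excise.internal/validateDuty   # hypothetical backend
- unmarshal:
    jacksonXml: {}        # XML response → Java objects
```

Because this lives inside the Kamelet, a backend migration from SOAP to REST changes only these marshal/unmarshal steps.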
5.2 Envelope Wrapper (Egress)
EIP Reference: Envelope Wrapper
Camel DSL: Header manipulation (inside Kamelet / Egress Gateway)
Status: Platform-managed (Egress Gateway)
Problem: The receiving system requires specific transport wrappers — authentication headers, protocol-specific envelopes, certificates — that are not part of the business payload.
HIP Context: Backend systems have varying authentication and transport requirements: OAuth tokens, API keys, mutual TLS, WS-Security headers, custom correlation headers. The egress gateway and Kamelet handle all of this. The pipeline passes canonical Java objects to a Kamelet; the egress layer wraps the serialised payload with whatever the backend needs. This is critical for security — credentials never appear in producer-authored Camel YAML.
Example: The excise backend requires mutual TLS client certificates and a WS-Security UsernameToken header. The pipeline calls kamelet:excise-validateAndCalculate with canonical Java objects. The egress gateway attaches the client certificate, the Kamelet adds the WS-Security header, and the request is sent to the backend. None of these details are visible in the pipeline.
sequenceDiagram
  participant Pipeline
  participant Kamelet
  participant Egress as Egress Gateway
  participant Backend as Excise Backend
  Pipeline->>Kamelet: Java objects (no auth)
  Kamelet->>Kamelet: Add WS-Security header
  Kamelet->>Egress: SOAP request
  Egress->>Egress: Attach mTLS client cert
  Egress->>Backend: Secured SOAP request
  Backend-->>Egress: SOAP response
  Egress-->>Kamelet: Response
  Kamelet-->>Pipeline: Java objects (auth stripped)
5.3 Command Message
EIP Reference: Command Message
Camel DSL: kamelet (POST/PUT/DELETE call)
Status: Supported (via Kamelet)
Problem: Messaging is used to invoke a specific procedure or action on a receiving system, not just to transfer data. The message represents a command to be executed.
HIP Context: When a pipeline calls a backend to create, update, or delete a resource, the Kamelet sends a Command Message. The distinction from Request-Reply (which is about the interaction pattern) is that Command Message describes the intent — this is an action, not a query. In practice, this maps to POST/PUT/DELETE Kamelet calls. The Kamelet serialises the pipeline’s canonical Java objects into whatever format the backend expects.
Example: After validating a customs declaration and calculating duties, the pipeline stores the declaration by sending a POST /submissions command to the tax platform storage API. This is a command — it creates a new resource and returns a reference. The Kamelet handles the serialisation from the pipeline’s canonical Java objects to the backend’s specific submission format.
sequenceDiagram
  participant Pipeline
  participant Kamelet as Storage Kamelet
  participant Backend as Tax Platform Storage
  Pipeline->>Kamelet: Command: Store declaration<br/>{declarationType: "full",<br/>totalDuty: 320.00, ...}
  Kamelet->>Backend: POST /submissions<br/>(backend format)
  Backend-->>Kamelet: 201 Created<br/>{submissionId: "SUB-2026-001"}
  Kamelet-->>Pipeline: {submissionId: "SUB-2026-001"}
# The Kamelet call is a Command Message when it performs
# a state-changing operation on the backend
- to:
uri: kamelet:tax-platform-storeSubmission/store-sub
# The Kamelet handles:
# - Java objects → backend format translation
# - POST method and endpoint
# - Auth via egress gateway
# - Backend response → Java objects translation
5.4 Correlation Identifier
EIP Reference: Correlation Identifier
Camel DSL: Exchange ID, trace headers
Status: Platform-managed
Problem: When a requestor sends multiple requests, it needs to match each reply to the correct originating request. A unique identifier links request and reply.
HIP Context: Each Kamelet call within a pipeline carries a correlation identifier that links the pipeline step to the specific backend request/response pair. This is managed automatically by the Camel exchange and the platform’s distributed tracing (trace ID + span ID). Producers don’t need to manage correlation — it’s built into the platform. The correlation is visible in observability tooling, enabling end-to-end tracing of a declaration through all backend calls.
Example: A customs declaration pipeline makes three backend calls (validate, classify, store). Each call generates a unique span in the distributed trace, correlated by the overall trace ID. If the storage call fails, an operator can trace back through the classify and validate calls to see the complete processing history for that specific declaration.
sequenceDiagram
  participant Consumer
  participant Pipeline
  participant Validate as Validate API
  participant Classify as Classify API
  participant Store as Store API
  participant Tracing as Distributed Tracing
  Consumer->>Pipeline: POST /declarations<br/>traceId: T-001
  Pipeline->>Validate: spanId: S-001 (parent: T-001)
  Validate-->>Pipeline: Response (correlated)
  Pipeline->>Classify: spanId: S-002 (parent: T-001)
  Classify-->>Pipeline: Response (correlated)
  Pipeline->>Store: spanId: S-003 (parent: T-001)
  Store-->>Pipeline: Response (correlated)
  Note over Tracing: All 3 calls linked<br/>via traceId T-001
  Pipeline-->>Consumer: 201 Created
6. Event Patterns (Future Scope)
Planned — same workflow model for event-driven pipelines.
Events are not currently in scope, but the platform envisages offering the same workflow patterns for event-triggered pipelines. This section documents the key event-related EIPs that will be supported when event-driven Simple APIs are introduced. The pipeline patterns from Section 4 will apply identically — the only difference is the trigger mechanism.
6.1 Event Message
EIP Reference: Event Message
Camel DSL: Event producer DSL
Status: Planned
Problem: An application needs to notify other applications that something has happened, without requiring a response. The notification is a statement of fact, not a command.
HIP Context: Pipelines will be able to publish events as a step in their workflow. After storing a customs declaration, the pipeline could publish a declaration.submitted event to trigger downstream systems (compliance checks, analytics, trader notifications). Event publishing would be fire-and-forget from the pipeline’s perspective — it doesn’t wait for subscribers to process the event.
Example: A customs declaration pipeline stores the declaration, then publishes a declaration.submitted event containing the declaration reference, trader EORI, and total duty amount. Multiple downstream systems receive this event independently.
sequenceDiagram
  participant Pipeline
  participant Storage as Storage API
  participant EventBus as Event Platform
  participant Compliance as Compliance System
  participant Analytics as Analytics System
  Pipeline->>Storage: Store declaration
  Storage-->>Pipeline: {ref: "DEC-001"}
  Pipeline->>EventBus: Publish: declaration.submitted<br/>{ref: "DEC-001", eori: "GB123",<br/>totalDuty: 320.00}
  par Event delivery
    EventBus->>Compliance: declaration.submitted
    EventBus->>Analytics: declaration.submitted
  end
  Pipeline-->>Pipeline: Continue (fire-and-forget)
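The event-publishing DSL has not been designed yet, but a publishing step might look broadly like the sketch below. The Kamelet name, parameter, and payload handling are purely speculative.

```yaml
# Speculative sketch: the event-publishing DSL is not yet defined.
- to:
    uri: kamelet:event-publish/pub-declaration    # hypothetical Kamelet
    parameters:
      eventType: declaration.submitted
# Fire-and-forget: the pipeline continues without waiting for subscribers.
```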
6.2 Event-Driven Consumer
EIP Reference: Event-Driven Consumer
Camel DSL: Event consumer DSL
Status: Planned
Problem: An application needs to automatically process messages as they become available, rather than being triggered by a synchronous request.
HIP Context: Event-Driven Consumer would allow Simple API pipelines to be triggered by platform events instead of HTTP requests. The event payload would become the pipeline’s input context (equivalent to the HTTP request body), and the same pipeline patterns (Content Enricher, Content-Based Router, Splitter, etc.) would apply. This enables reactive integration — processing that happens in response to business events rather than API calls.
Example: A pipeline triggered by a payment.received event enriches the payment with taxpayer details from the taxpayer API, updates the taxpayer’s ledger, and publishes a payment.applied event. The pipeline uses the same patterns as an HTTP-triggered pipeline — only the trigger differs.
sequenceDiagram
  participant EventBus as Event Platform
  participant Pipeline
  participant TaxpayerAPI as Taxpayer API
  participant Ledger as Ledger API
  EventBus->>Pipeline: Event: payment.received<br/>{paymentRef: "P-001",<br/>vrn: "GB123", amount: 2125.00}
  Pipeline->>TaxpayerAPI: GET /taxpayers/{vrn}
  TaxpayerAPI-->>Pipeline: Taxpayer details
  Pipeline->>Ledger: PUT /ledger/{vrn}/payments
  Ledger-->>Pipeline: Updated ✓
  Pipeline->>EventBus: Publish: payment.applied
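A speculative sketch of an event-triggered route, assuming a hypothetical event-source Kamelet as the trigger (the trigger DSL is not yet defined). Everything after the from step uses the same pattern vocabulary as an HTTP-triggered pipeline:

```yaml
# Speculative sketch: the event trigger DSL is not yet defined.
- route:
    id: payment-received-handler
    from:
      uri: kamelet:event-source/payment-received    # hypothetical trigger
      steps:
        - to:
            uri: kamelet:taxpayer-getDetails/evt-tp       # Content Enricher
        - to:
            uri: kamelet:ledger-applyPayment/evt-ledger   # Command Message
```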
6.3 Publish-Subscribe Channel
EIP Reference: Publish-Subscribe Channel
Camel DSL: Event platform (infrastructure)
Status: Planned
Problem: A sender needs to broadcast a message to all interested receivers without knowing who they are. New receivers can be added without modifying the sender.
HIP Context: The event platform provides Publish-Subscribe channels. When a pipeline publishes an event (Event Message pattern), the event platform delivers it to all subscribers. This decouples the publishing pipeline from downstream consumers. A customs declaration pipeline publishes declaration.submitted — it doesn’t know or care whether zero, one, or ten systems subscribe to that event.
Example: The duty.calculated event is published once by the excise duty pipeline. Three independent systems subscribe: the compliance team’s risk assessment system, the analytics platform for duty revenue forecasting, and the trader notification service. Each receives the event independently and processes it on their own schedule.
sequenceDiagram
  participant Pipeline as Excise Pipeline
  participant EventBus as Event Platform
  participant Risk as Risk Assessment
  participant Analytics as Analytics
  participant Notify as Trader Notifications
  Pipeline->>EventBus: Publish: duty.calculated<br/>{ref: "EX-001", duty: 143.70}
  par Deliver to subscribers
    EventBus->>Risk: duty.calculated
    EventBus->>Analytics: duty.calculated
    EventBus->>Notify: duty.calculated
  end
  Note over Risk: Assess risk profile
  Note over Analytics: Update revenue forecast
  Note over Notify: Send trader notification
7. Composite Pattern Scenarios
Real-world HMRC scenarios showing how patterns compose across sections. Each scenario identifies which patterns are used and from which section they come.
7.1 Customs Declaration Processing
A full customs declaration submission involving content negotiation, multi-step validation, per-item classification, and storage.
Patterns used:
- Consumer-Driven: Message Translator, Canonical Data Model
- Pipeline: Pipes and Filters, Content Enricher, Content-Based Router, Splitter, Aggregator, Request-Reply
- Backend-Driven: Message Translator (egress), Command Message
Scenario: A customs broker submits an import declaration in XML. The platform deserialises to canonical Java objects. The pipeline enriches with trader details, routes based on declaration type (simplified vs full), splits goods items for individual tariff classification and duty calculation, aggregates results, and stores the completed declaration. The storage backend receives the data in its own format via the Kamelet.
sequenceDiagram
  participant Broker as Customs Broker
  participant Ingress as Consumer-Driven<br/>(Message Translator)
  participant Pipeline
  participant TraderAPI as Trader API
  participant TariffAPI as Tariff API
  participant DutyAPI as Duty Calc API
  participant Storage as Storage API
  participant Egress as Backend-Driven<br/>(Message Translator)
  Broker->>Ingress: POST /customs/declarations<br/>Content-Type: application/xml
  Ingress->>Pipeline: Canonical Java objects
  Note over Pipeline: Pipes and Filters begins
  Pipeline->>TraderAPI: Content Enricher:<br/>GET /traders/{eori}
  TraderAPI-->>Pipeline: Trader details
  alt declarationType = "simplified"
    Note over Pipeline: Content-Based Router
    Pipeline->>Pipeline: Fast-track validation
  else declarationType = "full"
    Pipeline->>Pipeline: Full compliance checks
  end
  Note over Pipeline: Splitter: process each goods item
  loop Each goods item
    Pipeline->>TariffAPI: Request-Reply: classify item
    TariffAPI-->>Pipeline: Classification result
    Pipeline->>DutyAPI: Request-Reply: calculate duty
    DutyAPI-->>Pipeline: Duty amount
  end
  Note over Pipeline: Aggregator: combine results
  Pipeline->>Egress: Command Message: store declaration
  Egress->>Storage: Backend format
  Storage-->>Egress: {declarationRef: "DEC-2026-001"}
  Egress-->>Pipeline: Java objects
  Pipeline-->>Ingress: Java objects
  Ingress-->>Broker: XML response
7.2 VAT Return Submission
A VAT return enriched with taxpayer data, filtered for the response, and stored.
Patterns used:
- Consumer-Driven: Envelope Wrapper
- Pipeline: Content Enricher, Content Filter, Request-Reply
- Backend-Driven: Command Message
- Future: Event Message
Scenario: A taxpayer submits a quarterly VAT return. The platform extracts auth context (Envelope Wrapper). The pipeline enriches the return with taxpayer registration details and compliance status, stores the return, filters the response to return only the filing reference and calculated amounts (not the full enriched record), and in future would publish a vat-return.submitted event.
sequenceDiagram
  participant Taxpayer
  participant Platform as Consumer-Driven<br/>(Envelope Wrapper)
  participant Pipeline
  participant TaxpayerAPI as Taxpayer API
  participant ComplianceAPI as Compliance API
  participant StorageAPI as Storage API
  Taxpayer->>Platform: POST /vat/returns<br/>Authorization: Bearer {token}<br/>{vrn, period, boxes: {...}}
  Platform->>Pipeline: Clean payload + auth context
  Pipeline->>TaxpayerAPI: Content Enricher:<br/>GET /taxpayers/{vrn}
  TaxpayerAPI-->>Pipeline: Registration details
  Pipeline->>ComplianceAPI: Content Enricher:<br/>GET /compliance/{vrn}
  ComplianceAPI-->>Pipeline: Risk score + history
  Pipeline->>StorageAPI: Command Message:<br/>POST /returns (enriched)
  StorageAPI-->>Pipeline: {returnRef: "VAT-2026-Q1"}
  Pipeline->>Pipeline: Content Filter:<br/>Keep returnRef, amounts only
  Pipeline-->>Taxpayer: 201 Created<br/>{returnRef: "VAT-2026-Q1",<br/>netVat: 12500.00}
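Assembled as a single route, this scenario could be sketched roughly as follows. The Kamelet names and response field names are assumptions; the Envelope Wrapper runs in the platform before the route, the compliance enrichment follows the same shape as the registration one, and the future event publication is omitted.

```yaml
# Illustrative sketch of the VAT return pipeline. Kamelet names are assumed.
- route:
    id: vat-return-submission
    from:
      uri: direct:submit-vat-return
      steps:
        - setProperty:
            name: vatReturn                 # keep the original return payload
            simple: "${body}"
        - to:
            uri: kamelet:taxpayer-getRegistration/vat-tp   # Content Enricher
        - transform:
            groovy: |
              // merge the return with the enrichment result before storage
              def enriched = new HashMap(exchange.getProperty('vatReturn'))
              enriched.registration = body
              return enriched
        - to:
            uri: kamelet:storage-storeReturn/vat-store     # Command Message
        - transform:
            groovy: |
              // Content Filter: keep only the reference and calculated amount
              return [returnRef: body.returnRef, netVat: body.netVat]
```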
7.3 Excise Duty Multi-Backend Orchestration
Querying multiple backend systems in parallel, normalising their different response formats, and aggregating into a unified response.
Patterns used:
- Pipeline: Scatter-Gather, Normalizer, Aggregator, Content Filter
- Backend-Driven: Message Translator (egress) per backend
Scenario: An excise duty overview for a warehouse keeper requires data from three independent backend systems that evolved separately. The pipeline queries all three in parallel (Scatter-Gather), normalises their structurally different responses into a standard model (Normalizer), aggregates the results (Aggregator), and filters to return only the summary fields the consumer needs (Content Filter).
sequenceDiagram
  participant Consumer
  participant Pipeline
  participant Legacy as Legacy Duty System
  participant Modern as Modern Duty System
  participant Warehouse as Warehouse System
  Consumer->>Pipeline: GET /excise/overview/{warehouseId}
  par Scatter-Gather
    Pipeline->>Legacy: GET /duties (flat format)
    Pipeline->>Modern: GET /duties (nested format)
    Pipeline->>Warehouse: GET /stock-levels
  end
  Legacy-->>Pipeline: Flat: {code, rate, amount}
  Modern-->>Pipeline: Nested: {product: {code}, duty: {rate, calc: {amount}}}
  Warehouse-->>Pipeline: {items: [...]}
  Note over Pipeline: Normalizer: convert both<br/>duty formats to standard model
  Note over Pipeline: Aggregator: merge all three
  Note over Pipeline: Content Filter: summary only
  Pipeline-->>Consumer: {warehouseId,<br/>totalDutyLiability: 15420.00,<br/>stockValue: 89000.00,<br/>products: [{code, duty}, ...]}
7.4 Agent Authorisation Workflow
A multi-step workflow where the next step depends on the outcome of the previous step.
Patterns used:
- Pipeline: Process Manager, Content-Based Router, Request-Reply
- Future: Event Message
Scenario: A tax agent requests authorisation to act on behalf of a client. The pipeline validates the agent’s credentials, checks whether an agent-client relationship exists, and then branches based on the relationship type: individual clients require identity verification via self-assessment records, while business clients require verification via Companies House. After verification, the authorisation is recorded and a confirmation returned.
```mermaid
sequenceDiagram
    participant Agent
    participant Pipeline
    participant AgentAPI as Agent Services API
    participant SAAPI as Self-Assessment API
    participant CHAPI as Companies House API
    Agent->>Pipeline: POST /agent/authorisations<br/>{agentRef, clientRef, scope}
    Pipeline->>AgentAPI: Request-Reply:<br/>Validate credentials
    AgentAPI-->>Pipeline: Agent valid ✓
    Pipeline->>AgentAPI: Request-Reply:<br/>Check relationship
    AgentAPI-->>Pipeline: {exists: true,<br/>clientType: "individual"}
    alt clientType = "individual"
        Note over Pipeline: Content-Based Router
        Pipeline->>SAAPI: Verify via SA record
        SAAPI-->>Pipeline: Identity confirmed
    else clientType = "business"
        Pipeline->>CHAPI: Verify via Companies House
        CHAPI-->>Pipeline: Company confirmed
    end
    Pipeline->>AgentAPI: Command Message:<br/>Record authorisation
    AgentAPI-->>Pipeline: {authorisationId: "AUTH-001"}
    Pipeline-->>Agent: 201 Created {authorisationId}
```
7.5 PAYE Bulk Filing
Processing a bulk payroll submission by splitting into individual employee records, processing each, and aggregating the results.
Patterns used:
- Pipeline: Splitter, Composed Message Processor, Content-Based Router, Aggregator, Request-Reply
- Backend-Driven: Command Message
- Future: Event Message
Scenario: An employer submits a monthly PAYE filing containing records for 200 employees. The pipeline splits the filing into individual employee records. For each employee, it calculates tax and NI contributions, checks whether the employee has a student loan (requiring additional deduction), and stores the individual result. After all employees are processed, the aggregator combines the results into a filing summary with total tax, total NI, and total student loan deductions.
```mermaid
sequenceDiagram
    participant Employer
    participant Pipeline
    participant TaxCalc as Tax Calc API
    participant StudentLoan as Student Loan API
    participant Storage as Filing Storage
    Employer->>Pipeline: POST /paye/filings<br/>{period: "2026-03", employees: [200 records]}
    Note over Pipeline: Splitter: 200 employees
    loop Each employee
        Pipeline->>TaxCalc: Calculate tax + NI
        TaxCalc-->>Pipeline: {tax, ni}
        alt hasStudentLoan = true
            Pipeline->>StudentLoan: Calculate deduction
            StudentLoan-->>Pipeline: {deduction}
        end
    end
    Note over Pipeline: Aggregator: combine 200 results
    Pipeline->>Storage: Command Message:<br/>POST /filings (summary + details)
    Storage-->>Pipeline: {filingRef: "PAYE-2026-03-EMP001"}
    Pipeline-->>Employer: 201 Created<br/>{filingRef,<br/>totalTax: 45200.00,<br/>totalNI: 12800.00,<br/>totalStudentLoan: 3200.00,<br/>employeeCount: 200}
```
8. Patterns Not Supported / Out of Scope
The following EIP patterns are not included in the Simple API capability specification. For each, a brief rationale explains why.
Note: Most excluded patterns are implementable in Camel YAML DSL — they are excluded because they don’t apply to HIP’s synchronous, stateless, design-time pipeline model, not because of implementation limitations. The exclusion rationale falls into three categories: Infrastructure/Implicit (the platform provides this transparently), Platform/Runtime (handled by Camel runtime, Kubernetes, or future event platform), and Not Applicable (the pattern addresses concerns specific to asynchronous messaging that don’t arise in synchronous REST APIs).
Infrastructure / Implicit Patterns
These patterns describe infrastructure that HIP provides implicitly or that are not relevant to the API-based integration model.
| EIP Pattern | EIP Section | Rationale |
|---|---|---|
| File Transfer | Integration Styles | Not applicable — HIP is API-based, not file-based |
| Shared Database | Integration Styles | Not applicable — HIP is stateless; no shared data store between integrations |
| Remote Procedure Invocation | Integration Styles | Implicit — this is HIP’s primary integration style (REST APIs). Not a pattern to implement; it’s the architecture. |
| Messaging | Integration Styles | Future — event-driven messaging is planned but not the current primary model |
| Message Channel | Messaging Systems | Implicit — every API call and pipeline step is a message channel. Not producer-visible. |
| Message | Messaging Systems | Implicit — everything flowing through the pipeline is a message |
| Message Endpoint | Messaging Systems | Implicit — the Simple API (consumer-side) and Kamelet (backend-side) are message endpoints |
| Point-to-Point Channel | Channels | Implicit — each synchronous API call is point-to-point by nature |
| Datatype Channel | Channels | Implicit — the canonical model (OAS-defined JSON Schema) serves this purpose |
| Message Bus | Channels | HIP is the message bus — the platform itself is the integration backbone |
Platform / Runtime Concerns
These patterns are handled by the platform infrastructure (Camel runtime, event platform, Kubernetes) and are not producer-visible.
| EIP Pattern | EIP Section | Rationale |
|---|---|---|
| Dead Letter Channel | Channels | Camel runtime concern — failed messages are handled by Camel’s error handling, not by producer-authored pipelines |
| Invalid Message Channel | Channels | Handled by pipeline validation (Message Filter pattern) and HTTP error responses — no separate channel needed |
| Guaranteed Delivery | Channels | Future event platform concern — not applicable to synchronous API calls |
| Channel Adapter | Channels | Subsumed by Kamelets — the Kamelet is the channel adapter connecting the pipeline to backend systems |
| Messaging Bridge | Channels | Not applicable — single platform, no bridging between messaging systems |
| Competing Consumers | Endpoints | Platform scaling concern — Kubernetes pod scaling handles this, not producer-visible |
| Durable Subscriber | Endpoints | Future event platform concern — subscribers maintain their position in the event stream |
| Selective Consumer | Endpoints | Not applicable — each pipeline handles its specific route; filtering happens within the pipeline via Message Filter |
| Message Dispatcher | Endpoints | Platform-managed — the API Gateway routes requests to the correct pipeline; not producer-visible |
| Transactional Client | Endpoints | Not applicable — stateless pipeline model; transactional concerns handled at the backend level |
| Messaging Mapper | Endpoints | Subsumed by Content Enricher and Content Filter in the pipeline — these patterns handle the mapping |
| Service Activator | Endpoints | Subsumed by the Kamelet abstraction — Kamelets activate backend services |
| Control Bus | System Management | Platform operational concern — system management is handled by platform tooling, not producer pipelines |
| Detour | System Management | Not applicable at pipeline level — debugging and testing routes are platform concerns |
| Wire Tap | System Management | Platform observability — distributed tracing captures this; not producer-configured |
| Message History | System Management | Platform distributed tracing — trace ID propagation provides message history automatically |
| Message Store | System Management | Platform operational concern — audit logging and message archival are infrastructure |
| Smart Proxy | System Management | Not applicable — the platform doesn’t need to track reply-to addresses across dynamic routing |
| Test Message | System Management | Not applicable at pipeline level — health checking is a platform/Kubernetes concern |
| Channel Purger | System Management | Not applicable — synchronous request/response model; no persistent channels to purge |
| Claim Check | Transformation | Not applicable — HIP is a stateless platform with no persistence layer. Claim Check requires intermediate storage to hold large payloads and retrieve them later. |
Backend Responsibility
These patterns are valid integration concerns but are the responsibility of the backend systems, not the platform. The platform is stateless — it does not persist request/response state between invocations.
| EIP Pattern | EIP Section | Rationale |
|---|---|---|
| Idempotent Receiver | Endpoints | HIP is a stateless platform — it does not store request/response state between invocations, so it cannot detect or deduplicate retried requests. Backend systems that receive non-idempotent commands (e.g., payment submissions, resource creation) are expected to implement their own idempotency handling, typically via a client-supplied idempotency key that the backend checks against its own persistent store. The platform will pass through any Idempotency-Key header (or equivalent) from the consumer to the backend via the Kamelet, but deduplication logic and response caching are backend concerns. Future possibility: If a pattern emerges where many backends lack idempotency support, the platform could introduce an optional idempotency layer (request hash + response cache with TTL) as a platform capability. This would require introducing state (a cache or store), which is a significant architectural change and would only be pursued if backend-side idempotency proves insufficient in practice. |
Not Applicable to Synchronous API Model
These patterns address concerns specific to asynchronous messaging systems that don’t arise in HIP’s synchronous request/response model.
| EIP Pattern | EIP Section | Rationale |
|---|---|---|
| Resequencer | Routing | Not applicable — synchronous request model processes one request at a time in order; no out-of-sequence messages |
| Dynamic Router | Routing | Pipelines are defined at design-time; runtime routing changes are not supported. Content-Based Router handles conditional logic. |
| Message Sequence | Construction | Not applicable — each API call is a single request/response; no need to sequence multiple related messages |
| Message Expiration | Construction | Not applicable — synchronous requests have timeouts, not expiration. Future event platform may use TTL. |
| Return Address | Construction | Implicit — synchronous HTTP responses go back to the caller automatically |
| Document Message | Construction | Implicit — all API payloads are document messages by nature |
| Format Indicator | Construction | Handled by OAS versioning and content-type headers — not a pattern producers implement |
| Polling Consumer | Endpoints | Not applicable — Simple APIs are request-driven, not polling-driven |
9. Beyond Declarative YAML: Handling Complexity with Java Components
Camel YAML DSL with embedded Groovy/Simple expressions handles the majority of integration patterns described in this specification. For most pipelines — content-based routing, sequential enrichment, simple splits, request-reply — pure YAML is sufficient and preferred for its readability and simplicity. However, some patterns have a complexity threshold beyond which YAML and Groovy become unwieldy. For these cases, producers can author Java components (processors, aggregation strategies, transformers) that the platform builds, tests, and includes on the pipeline’s classpath. The pipeline remains Camel YAML — Java handles complex business logic within individual steps, not the overall orchestration.
The model is:
| Concern | YAML/Groovy | Java component |
|---|---|---|
| Flow orchestration (linear, simple branches) | Yes | — |
| Flow orchestration (complex multi-step decision trees) | — | Yes, calling back into named YAML routes via direct: |
| Individual processing steps | Yes (composable, reusable) | — |
| Backend calls | Yes (via reusable blocks / Kamelets) | — |
| Aggregation (simple) | Groovy: list collection, sums | — |
| Aggregation (complex) | — | Yes: partial failure, dedup, conflict resolution |
| Normalisation / Mapping (simple) | Groovy: field renaming, reshaping | — |
| Normalisation / Mapping (complex) | — | Yes: type-safe, unit-testable |
The bridge between Java and YAML is bidirectional: YAML references Java components via bean or class name, and Java components can call back into named YAML routes (direct:) using Camel’s ProducerTemplate. This means complex orchestration logic in Java can delegate to simple, readable YAML routes for each step — the Java component decides what to do; the YAML route does it.
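As a rough sketch of this bridge (the route, bean, and kamelet names below are illustrative, not platform-defined):

```yaml
# Illustrative sketch — names are not part of the platform contract.
- route:
    from:
      uri: "direct:agent-authorisation"
      steps:
        # YAML -> Java: delegate to a producer-authored Processor by bean name
        - process:
            ref: "authorisationOrchestrator"
# Java -> YAML: the Processor calls back into named routes like this one,
# e.g. producerTemplate.requestBody("direct:verify-client", body)
- route:
    from:
      uri: "direct:verify-client"
      steps:
        - to: "kamelet:client-verification"
```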
This capability, combined with the declarative YAML pipeline, tends towards the retirement of the Advanced API (full Java application, Docker image, Helm chart) model. The combination covers the full complexity spectrum without requiring producers to build and operate a separate application.
The following subsections examine the patterns most likely to require Java components and how the responsibility splits between YAML and Java for each.
9.1 Aggregator
Simple case (YAML/Groovy): Collecting split results into a list and computing a sum or count. A Groovy expression collects items into an exchange property during the split, and a transform after the split block assembles the final response. This works well when aggregation is straightforward — every item succeeds, and the combination logic is simple.
Complex case (Java): A Java AggregationStrategy receives the old exchange and new exchange at each step and returns the merged result. This is needed when:
- Partial failure — some split items fail and the pipeline must decide whether to skip, flag, or fail the whole batch
- Deduplication — items may produce duplicate results that need merging
- Conflict resolution — two enrichment sources return contradictory data for the same field
- Weighted or conditional merging — different items contribute differently to the aggregate
What stays in YAML: the split, per-item processing steps, and post-aggregation flow. What moves to Java: the “how to combine two results” logic, referenced as aggregationStrategy: "#class:com.example.DutyAggregator".
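A hedged sketch of that split (the kamelet name and split expression are illustrative; `com.example.DutyAggregator` is the class referenced above):

```yaml
# The split and per-item step stay in YAML; "how to combine two
# results" lives in the referenced Java AggregationStrategy.
- split:
    simple: "${body[duties]}"
    aggregationStrategy: "#class:com.example.DutyAggregator"
    steps:
      - to: "kamelet:duty-calculation"   # illustrative per-item step
```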
9.2 Scatter-Gather
Simple case (YAML/Groovy): Parallel fan-out to multiple backends with results collected into exchange properties and assembled via a Groovy transform. Works when all backends are expected to respond and the combination is straightforward.
Complex case (Java): A Java aggregation strategy with:
- Timeout awareness — what to do when one leg doesn’t respond within its SLA
- Partial failure policy — return whatever results are available vs fail the entire request
- Result merging with conflict resolution — two backends return overlapping data with different values
What stays in YAML: the multicast structure, per-leg processing steps, and the steps each leg executes. What moves to Java: the failure policy and aggregation logic.
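A minimal sketch of the multicast structure, assuming illustrative route names, an illustrative timeout, and a hypothetical `OverviewAggregator` strategy class:

```yaml
# Multicast fans the request out to each leg in parallel; the Java
# aggregation strategy owns the timeout and partial-failure policy.
- multicast:
    parallelProcessing: true
    timeout: 5000                        # illustrative overall budget (ms)
    aggregationStrategy: "#class:com.example.OverviewAggregator"
    steps:
      - to: "direct:legacy-duties"
      - to: "direct:modern-duties"
      - to: "direct:warehouse-stock"
```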
9.3 Process Manager
Simple case (YAML/Groovy): Nested choice blocks — works well for two or three decision points with clear conditions. Most Simple API workflows will fall into this category.
Complex case (Java): A Java Processor acts as the orchestrator for workflows with many decision points, where the next step depends on accumulated state from multiple prior steps. The Java component:
- Evaluates the current state of the exchange (properties accumulated from prior steps)
- Determines the next step to execute
- Calls the appropriate named YAML route (`direct:validate-identity`, `direct:check-entity-type`, etc.) via `ProducerTemplate`
- Evaluates the result and decides the next action
Each direct: route is a YAML-defined pipeline segment — simple, testable, readable. The complex decision logic lives in Java, where it benefits from type safety, testability, and proper control flow. This is the pattern where Java + YAML composition is most powerful: the Process Manager is Java; the steps it manages are YAML.
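The YAML side of that composition might be sketched as follows (route and kamelet names are illustrative); the Java Process Manager invokes these segments via `ProducerTemplate`:

```yaml
# Each segment the Java orchestrator can call is a small, independently
# testable YAML route. Names are illustrative only.
- route:
    from:
      uri: "direct:validate-identity"
      steps:
        - to: "kamelet:self-assessment-verify"
- route:
    from:
      uri: "direct:check-entity-type"
      steps:
        - to: "kamelet:agent-services-lookup"
```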
9.4 Splitter / Composed Message Processor
Simple case (YAML/Groovy): The split element processes items sequentially or in parallel, with Groovy-based result collection. Works for moderate item counts where every item follows the same processing path and errors are handled uniformly.
Complex case (Java): A Java component handles:
- Batch awareness — deciding whether to split and call individually vs make a single batch call (if the backend supports it). This is a significant performance consideration: 50 individual calls vs 1 batch call.
- Concurrency control — limiting concurrent backend calls to avoid overwhelming a legacy system (e.g., “process 50 items but no more than 5 concurrently”)
- Per-item error recovery — retry failed items, skip and flag them in the result, or fail the batch after N failures
What stays in YAML: per-item processing steps (classification, validation, calculation). What moves to Java: the splitting strategy, concurrency management, and error recovery policy.
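As a sketch of the simple parallel case with a bounded thread pool (the `#payeThreadPool` reference, its size, and the aggregator class are assumptions, not platform defaults):

```yaml
# Per-item steps stay in YAML; the splitting strategy, concurrency
# management, and error policy move to Java when they outgrow this shape.
- split:
    simple: "${body[employees]}"
    parallelProcessing: true
    executorService: "#payeThreadPool"   # e.g. a 5-thread pool bounding concurrent backend calls
    aggregationStrategy: "#class:com.example.FilingAggregator"
    steps:
      - to: "kamelet:tax-calculation"
```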
9.5 Normalizer
Simple case (YAML/Groovy): A choice block detects the response format (by inspecting keys in the body) and applies a Groovy transform to reshape it into the standard model. Works when there are few variants and the detection logic is reliable.
Complex case (Java): A Java Normalizer that:
- Uses exchange properties (which backend was called, set by the preceding step) rather than brittle response structure sniffing
- Applies type-safe mapping with compile-time validation
- Can be unit-tested independently of the pipeline
- Handles edge cases (missing fields, unexpected structures) with proper error handling
What stays in YAML: the backend calls, the flow after normalisation. What moves to Java: the normalisation/mapping logic itself.
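The simple YAML/Groovy case might be sketched as follows (field names follow the duty example in Section 7.3; the detection test and standard model are illustrative, and assume the body has been unmarshalled to a map):

```yaml
# Detect the variant by inspecting the body, then reshape to the standard model.
- choice:
    when:
      - groovy: "body.product != null"   # nested (modern) variant
        steps:
          - transform:
              groovy: |
                // nested -> standard model
                return [code: body.product.code, rate: body.duty.rate, amount: body.duty.calc.amount]
    otherwise:
      steps:
        - transform:
            groovy: |
              // flat (legacy) variant already matches the standard model
              return [code: body.code, rate: body.rate, amount: body.amount]
```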
10. Reusable Component Library
The patterns in Sections 3–6 describe what integration logic is possible. The reusable component library provides the building blocks producers use to compose that logic in practice. Every backend call, API invocation, or service interaction in a pipeline uses a reusable block — a callable component with a defined contract.
10.1 Component Sources
The component library has three tiers, with Tier 1 as the primary and preferred mechanism.
Tier 1: Platform Catalogue APIs
Every API registered in the HIP API catalogue is automatically available as a callable block in any pipeline. The block’s contract — its inputs and outputs — is derived from the API’s OAS.
To call a catalogue API from a pipeline, the producer specifies:
- Service name — the API as registered in the catalogue
- Operation — HTTP method and path
- Input mapping — how fields from the current exchange map to the API’s request schema
The platform handles everything else: catalogue resolution, schema validation, egress routing, authentication, transport, and protocol concerns. Producers don’t configure connection details, credentials, or endpoints — that’s the catalogue’s responsibility.
This is the expected default. Most pipeline steps should call catalogue APIs.
Tier 2: Central Catalogue Registration (governance)
APIs should be registered in the central catalogue rather than configured ad-hoc by individual producers. When multiple producers need the same backend, there should be one catalogue entry — not three bespoke configurations.
Central registration provides:
- Discoverability — producers can find what APIs are available without asking around
- Governance — clear ownership, versioning, deprecation lifecycle
- Deduplication — one set of connection settings, auth patterns, and SLAs
- Consistency — all consumers of an API use the same configuration
APIM discovers and uses APIs from the catalogue. This is a governance principle: favour registering APIs centrally, and consume them from there.
Tier 3: Producer-Specific Blocks (escape hatch)
For APIs not in the catalogue — external partners, legacy systems being migrated, temporary integrations — producers can build their own reusable block by providing a schema definition:
- OAS — for REST APIs
- XML Schema (XSD) — for XML-based services
- WSDL — for SOAP services
The platform generates a callable block from the schema definition. The producer is responsible for connection configuration (endpoint URL, auth credentials via the platform’s secrets management).
The expectation is that producer-specific blocks should eventually graduate to the catalogue (Tier 1) if they prove useful beyond a single pipeline.
10.2 Calling Syntax: Design Options
The calling syntax determines the producer’s authoring experience and the platform’s ability to validate pipelines at design-time. Four options are presented for evaluation, with Option D (First-Class DSL Element) as the preferred direction.
Scenario: A pipeline needs to call the customs declarations API to submit a declaration. The request requires three fields from the OAS schema: declarationType (string), traderEori (string), and goodsItems (array).
Option A: Raw Camel YAML (Status quo)
How it works today: Producers manually construct backend calls using standard Camel components, with no catalogue integration or schema validation.
```yaml
- setHeader:
    name: Content-Type
    constant: application/json
- setBody:
    groovy: |
      return [
        declarationType: body.type,
        traderEori: exchangeProperty.eori,
        goodsItems: body.items
      ]
- to:
    uri: http://backend.example.com/api/v1/declarations
```
Characteristics:
- No catalogue integration — endpoint hardcoded
- No schema validation — parameter mapping is manual, untyped
- No IDE support — autocompletion not available
- Manual auth/transport setup — credentials in configuration or embedded
- Works but requires expertise and is error-prone
Option B: Kamelet with Typed Parameters
How it would work: Platform auto-generates Kamelets from catalogue OAS definitions. Kamelet parameters are defined with names and types derived from the OAS request schema.
```yaml
- to:
    uri: kamelet:customs-declarations-submit
    parameters:
      declarationType: "${body[type]}"
      traderEori: "${exchangeProperty.eori}"
      goodsItems: "${body[items]}"
```
Implementation approach:
- For each catalogue API, generate a Kamelet with parameter definitions
- Parameter names match OAS request schema fields
- Kamelet encapsulates: schema validation (on parameters), auth, egress routing, transport
- Camel’s standard Kamelet validation applies to parameters at deploy-time
- IDE tooling can introspect Kamelet definitions for autocompletion (limited — Kamelet parameters are flat key-value, not structured)
Characteristics:
- Uses standard Camel Kamelet model — portable, well-documented
- Parameters are typed (Kamelet parameter definitions)
- Some IDE support via Kamelet introspection (names, basic types)
- Catalogue-aware via auto-generated Kamelets
- Limitation: flat key-value parameters don’t naturally express nested request bodies
- Parameter validation at Kamelet definition time, not YAML authoring time
Option C: URI-Based Component
How it would work: A custom Camel component that resolves catalogue APIs from a URI scheme, with parameters encoded in URI or body.
```yaml
- to:
    uri: hip-api:customs-declarations/POST/declarations
    parameters:
      declarationType: "${body[type]}"
      traderEori: "${exchangeProperty.eori}"
      goodsItems: "${body[items]}"
```
Or with body encoding:
```yaml
- setBody:
    groovy: |
      return [declarationType: body.type, traderEori: exchangeProperty.eori, goodsItems: body.items]
- to:
    uri: hip-api:customs-declarations/POST/declarations
```
Implementation approach:
- Register a custom `hip-api:` Camel component
- Component resolver: takes the URI scheme, looks up service/method/path in the catalogue, retrieves the OAS
- Component instantiation: creates dynamic endpoint with appropriate auth, egress routing
- Parameter handling: flat map passed to component, no structured validation
- No schema validation at YAML authoring time — validation happens at runtime
Characteristics:
- Catalogue-aware via URI scheme resolution
- Simpler than DSL extension — standard Camel component model
- No structural schema validation — parameters are strings or untyped maps
- No IDE support — URI scheme is opaque
- Quick to implement, viable stepping stone towards richer syntax
Option D: First-Class DSL Element (preferred direction)
How it would work: A platform-specific extension to the Camel YAML DSL that treats API calls as structured, typed elements with strongly typed inputs from the OAS schema.
```yaml
- hip-api:
    service: customs-declarations
    method: POST
    path: /declarations
    input:
      declarationType: "${body[type]}"
      traderEori: "${exchangeProperty.eori}"
      goodsItems: "${body[items]}"
```
Where `declarationType`, `traderEori`, and `goodsItems` are field names from the OAS request schema for POST /declarations — not arbitrary keys.
Implementation approach:
- Extend the Camel YAML DSL schema with a `hip-api:` element definition
- Element schema: auto-generated from catalogue OAS definitions
- For each catalogue API, the DSL schema includes the request schema structure (field names, types)
- YAML validation: checks that producer’s input mapping matches the OAS request schema
- Design-time validation: IDE can validate YAML before deployment
- Tooling: IDE can provide autocompletion of service names, paths, input field names
- Runtime: platform resolves service from catalogue, applies auth, egress routing, transport
Characteristics:
- Structured YAML — each field has defined purpose, validated by YAML schema
- Strong IDE support — autocompletion of services, paths, typed input fields
- Design-time schema validation — mismatched inputs caught before deployment
- Natural fit for Visual Pipeline Builder — each block is a node with typed input/output ports
- Clear, readable pipelines — intent is immediately obvious
- Catalogue-aware — service discovery, auth, egress routing handled by platform
Trade-offs:
- Requires platform-specific DSL extension — not standard Camel
- Requires defining and maintaining the DSL schema (auto-generated from catalogue)
- More complex to implement than Options B/C
Recommendation
Option D (First-Class DSL Element) is the preferred direction for the long term. It provides the best authoring experience, enables design-time validation, and integrates naturally with the Visual Pipeline Builder.
Implementation path:
- Short term (POC): Validate feasibility of extending Camel YAML DSL with custom elements. Start with a single catalogue API as a proof-of-concept.
- Medium term: If DSL extension proves viable, implement Option D as the primary calling syntax. Option B (Kamelets) is a viable fallback for APIs that don’t require nested request bodies.
- Fallback: If DSL extension is impractical, Option C (URI-based component) provides catalogue awareness with minimal implementation effort, though with reduced IDE support.
10.3 Strongly Typed Inputs and Outputs
A core requirement of the component library is that block inputs and outputs are strongly typed, derived from the API’s schema definition (OAS, XSD, or WSDL).
Inputs: When a producer calls a block, the input fields are the named fields from the API’s request schema — not arbitrary key-value pairs. The platform validates at design-time (and optionally at runtime) that the producer’s input mapping matches the API’s expected schema. This enables:
- Early error detection — misconfigured pipelines are caught before deployment, not at runtime
- Autocompletion — tooling knows what fields are available for each block
- Self-describing contracts — the block’s interface is derived from its OAS, not documented separately
Outputs: The block’s response conforms to the API’s OAS response schema. The response is available in the exchange body (or a named exchange property) for subsequent pipeline steps. Subsequent steps can access response fields by name with confidence that they exist and have the expected types.
10.4 Relationship to Visual Pipeline Builder
The reusable component library is the foundation for the Visual Pipeline Builder (see UI Simple APIs Architecture):
- Each catalogue block becomes a draggable node in the visual editor
- Typed inputs become connection ports — the editor can validate that output types from one step match input types of the next
- The first-class DSL syntax (Option D) maps directly to the visual representation — what you see is what you get in the YAML
- The catalogue provides the palette of available blocks — producers browse and drag rather than writing URIs from memory
Appendix: References
- Enterprise Integration Patterns — Gregor Hohpe & Bobby Woolf (canonical reference)
- Apache Camel YAML DSL — Camel 4.x YAML DSL reference
- HIP System Context — Platform architecture and constraints
- UI Simple APIs Architecture — Visual Pipeline Builder (future UI)
- Domain APIs Lessons Learned — Practical Camel YAML experience
- Domain APIs Integration Template — Reusable route patterns
- Camel YAML DSL Evaluation — YAML DSL evaluation