diff --git a/src/Apps/W1/EDocument/App/CLAUDE.md b/src/Apps/W1/EDocument/App/CLAUDE.md new file mode 100644 index 0000000000..74f2a47ff5 --- /dev/null +++ b/src/Apps/W1/EDocument/App/CLAUDE.md @@ -0,0 +1,64 @@ +# E-Document Core + +E-Document Core is Business Central's foundation framework for electronic document exchange. It sits between BC's posting engine and external e-invoicing networks (PEPPOL, country clearance systems, etc.), providing the plumbing so that connector and format apps don't have to reinvent lifecycle tracking, error handling, or status management. Think of it as the "middleware" that turns a posted sales invoice into a tracked, auditable electronic document that flows through configurable services. + +## Quick reference + +- **ID range**: 6100-6199, 6208-6209, 6231-6232, 6234 +- **Dependencies**: None -- this is the base layer. Everything else depends on it. +- **Namespace**: `Microsoft.eServices.EDocument` + +## How it works + +The app is organized in three conceptual layers. The **Core** layer owns the E-Document record, its lifecycle states, logging, and workflow orchestration. The **Document Format** layer (plugged in via the `E-Document` interface and the `E-Document Format` enum) handles serialization -- turning a BC document into XML/JSON and vice versa. The **Service Integration** layer (plugged in via `IDocumentSender`, `IDocumentReceiver`, and friends on the `Service Integration` enum) handles the actual HTTP communication with external networks. + +Documents flow bidirectionally. **Outbound**: when a sales invoice is posted, `EDocumentSubscribers` fires, creating an E-Document record, exporting it through the configured format, then routing it through a BC Workflow (`EDOC` category) that determines which service(s) send it. Sending can be synchronous or asynchronous -- if the connector implements `IDocumentResponseHandler`, the framework polls for responses via `GetResponse`. 
**Inbound**: a service's `IDocumentReceiver` fetches documents from an API, then each document flows through a multi-stage pipeline: Structure (convert PDF to XML via Azure Document Intelligence (ADI) or similar) -> Read into Draft (parse into `E-Document Purchase Header/Line`) -> Prepare Draft (resolve vendor, items, GL accounts) -> Finish Draft (create the actual BC purchase invoice). + +There are two import processing versions. **V1** (legacy, behind `#if not CLEAN27` guards) is a monolithic path where `GetCompleteInfoFromReceivedDocument` on the format interface does everything at once -- parse, resolve, and create the BC document in a single call. **V2** breaks this into four discrete steps with undo capability, each driven by a separate interface (`IStructureReceivedEDocument`, `IStructuredFormatReader`, `IProcessStructuredData`, `IEDocumentFinishDraft`). V2 is the active development path; V1 exists only for backward compatibility. + +Batch processing supports two modes: **Threshold** (accumulate N documents, then send as one blob) and **Recurrent** (job queue fires on a schedule, sends whatever is pending). Each mode creates a single export blob from multiple E-Documents via `CreateBatch` on the format interface.
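Both pluggable layers are wired up through AL enum extensions. As a hedged sketch (the object ID, value name, and target codeunit below are illustrative, not part of the framework; the enum and interface names are), a format app registers its serializer like this:

```al
// Illustrative only: IDs and names in the 50100 range are assumptions.
enumextension 50100 "Sample Format Ext." extends "E-Document Format"
{
    value(50100; "Sample Format")
    {
        Caption = 'Sample Format';
        // "Sample Format Impl." would be a codeunit implementing the "E-Document" interface.
        Implementation = "E-Document" = "Sample Format Impl.";
    }
}
```

A connector registers on the `Service Integration` enum the same way, pointing its `IDocumentSender` / `IDocumentReceiver` implementations at its own codeunits.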
+ +## Structure + +- `src/Document/` -- The E-Document table itself, status enums, direction/type enums, and the inbound/outbound list pages +- `src/Service/` -- E-Document Service configuration, supported document types, and service participants +- `src/Integration/` -- The V2 integration interfaces (`IDocumentSender`, `IDocumentReceiver`, etc.), context codeunits (`SendContext`, `ReceiveContext`, `ActionContext`), and the send/receive runners +- `src/Processing/` -- The heavy lifting: export (`EDocExport`), import pipeline (`ImportEDocumentProcess`), AI-powered matching (Copilot tools), order matching, and all the V2 import interfaces +- `src/Processing/Import/Purchase/` -- Draft purchase document tables and history tables used by V2 import +- `src/Logging/` -- E-Document Log, Integration Log (HTTP request/response), and Data Storage (blob table) +- `src/Mapping/` -- Field-level value mapping (find/replace on RecordRef fields during export/import) +- `src/Workflow/` -- BC Workflow integration: EDOC category setup, flow processing, step arguments +- `src/Helpers/` -- Error helper, JSON helper, import helper, log helper +- `src/Format/` -- PEPPOL BIS 3.0 import handler +- `src/DataExchange/` -- Data Exchange Definition integration for PEPPOL +- `src/ClearanceModel/` -- QR code handling for clearance-model countries (posted invoice extensions with QR viewer) +- `src/Extensions/` -- Page/table extensions that embed E-Document functionality into standard BC pages + +## Documentation + +- [docs/data-model.md](docs/data-model.md) -- How the data fits together +- [docs/business-logic.md](docs/business-logic.md) -- Processing flows and gotchas +- [docs/extensibility.md](docs/extensibility.md) -- Extension points and how to customize +- [docs/patterns.md](docs/patterns.md) -- Recurring code patterns (and legacy ones to avoid) + +## Things to know + +- The E-Document table's `Status` field (3 values: In Progress / Processed / Error) is an **aggregate** derived from the 
per-service `E-Document Service Status` (24+ granular values). Each service status value implements `IEDocumentStatus` to declare which aggregate it maps to. Don't set E-Document.Status directly -- modify the service status and call `ModifyEDocumentStatus`. + +- The `E-Document Service Status` enum is `Extensible = true` with `DefaultImplementation = IEDocumentStatus = "E-Doc In Progress Status"`. This means any new enum value added by an extension defaults to "In Progress" unless it explicitly declares a different implementation. + +- E-Documents link to their source BC document via `Document Record ID` (a RecordId field). This is fragile across company renames and data migrations. The `Table ID` field exists separately for FlowField lookups. + +- The V2 import pipeline stores state across steps using enum fields on the E-Document record itself (`Structure Data Impl.`, `Read into Draft Impl.`, `Process Draft Impl.`). Each step's output determines which implementation the next step uses. This is a form of runtime dispatch chain. + +- `Commit()` is called deliberately before `Codeunit.Run()` in export and send paths. This is the error isolation pattern -- if the format/connector implementation throws a runtime error, the E-Document record survives and the error is logged rather than rolling back everything. + +- Duplicate detection uses the combination of `Incoming E-Document No.` + `Bill-to/Pay-to No.` + `Document Date`. The `IsDuplicate` method checks this and logs telemetry. Deleting non-duplicate E-Documents requires user confirmation. + +- The `E-Doc. Data Storage` table is a blob store. E-Documents reference it twice: `Unstructured Data Entry No.` (the original received file, e.g. PDF) and `Structured Data Entry No.` (the parsed structured version, e.g. XML). The structuring step may produce a new Data Storage entry, or reuse the original if the document was already structured. + +- Mapping (`E-Doc. Mapping` table) works at the RecordRef/FieldRef level. 
It applies find/replace transformations on field values during export or import, per service. The `For Import` flag distinguishes export mappings from import mappings. + +- The `Service Participant` table maps Customer/Vendor codes to external identifiers (PEPPOL participant IDs, etc.) per service. This is how the framework resolves "who does this document go to on the external network." + +- V1 integration code is wrapped in `#if not CLEAN26` / `#if not CLEAN27` guards and is being actively removed. The `Service Integration` field (old enum) is obsolete; use `Service Integration V2` instead. diff --git a/src/Apps/W1/EDocument/App/docs/business-logic.md b/src/Apps/W1/EDocument/App/docs/business-logic.md new file mode 100644 index 0000000000..beec68ced8 --- /dev/null +++ b/src/Apps/W1/EDocument/App/docs/business-logic.md @@ -0,0 +1,165 @@ +# Business logic + +This document covers how documents flow through the E-Document Core framework -- the actual processing sequences, decision points, and error handling. Read [data-model.md](data-model.md) first for table relationships. + +## Outbound flow + +When a sales invoice (or credit memo, service invoice, reminder, etc.) is posted, the BC posting engine fires events that `EDocumentSubscribers` handles. The subscriber reads the Document Sending Profile for the customer, and if it's configured for "Extended E-Document Service Flow," the export pipeline kicks in. 
+ +```mermaid +flowchart TD + A[Document posted] --> B[Get Document Sending Profile] + B --> C{Electronic Document = Extended Flow?} + C -- No --> Z[Exit -- not an e-document] + C -- Yes --> D[Get Workflow + Services from flow] + D --> E[Create E-Document record] + E --> F{For each service} + F --> G{Document type supported?} + G -- No --> F + G -- Yes --> H{Batch processing enabled?} + H -- Yes --> I[Set status: Pending Batch] + H -- No --> J[Export via format interface] + J --> K{Export succeeded?} + K -- Yes --> L[Status: Exported] + K -- No --> M[Status: Export Error] + L --> N[Start E-Document Created Flow] + N --> O[Workflow: Send step] + O --> P[Commit before send] + P --> Q[Call IDocumentSender.Send] + Q --> R{Connector implements IDocumentResponseHandler?} + R -- No --> S[Status: Sent] + R -- Yes --> T[Status: Pending Response] + T --> U[Job Queue polls GetResponse] + U --> V{Response received?} + V -- No --> U + V -- Yes --> S +``` + +The key codeunits involved: + +- **`EDocExport.CreateEDocument`** (in `EDocExport.Codeunit.al`) -- Creates the E-Document record, populates it from the source document header via RecordRef field reads, inserts service status records for each supporting service, then exports non-batch documents immediately. +- **`EDocExport.ExportEDocument`** -- Applies field mappings, then calls `EDocumentCreate.Run()` (which invokes the format interface's `Create` method inside a `Codeunit.Run` error boundary). The exported blob is logged. +- **`EDocIntegrationManagement.Send`** -- Retrieves the exported blob from the log, sets up `SendContext` with a default status of "Sent," then runs `SendRunner` inside a `Codeunit.Run` error boundary. After send, it logs the HTTP request/response and updates the service status. +- **`EDocumentGetResponse`** -- A job queue handler that polls `IDocumentResponseHandler.GetResponse`. Returns `true` when the service confirms receipt; `false` keeps polling. 
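The polling contract from the last bullet can be sketched as follows. This is illustrative, not framework code: the codeunit ID/name and the endpoint are invented, and only the interface signature is the framework's.

```al
codeunit 50110 "Sample Response Handler" implements IDocumentResponseHandler
{
    // Returning true stops polling (status moves from "Pending Response" to "Sent");
    // returning false keeps the job queue polling.
    procedure GetResponse(var EDocument: Record "E-Document"; var EDocumentService: Record "E-Document Service"; SendContext: Codeunit SendContext): Boolean
    var
        Client: HttpClient;
        Response: HttpResponseMessage;
    begin
        // Placeholder URL -- a real connector would build this from its service setup.
        if not Client.Get('https://api.example.invalid/status/' + Format(EDocument."Entry No"), Response) then
            exit(false); // transient network failure: try again on the next poll

        exit(Response.IsSuccessStatusCode());
    end;
}
```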
+ +The workflow engine (`EDocumentWorkFlowProcessing`) routes the E-Document through configured workflow steps. Each step can target a different service (stored in `Workflow Step Argument."E-Document Service"`). The framework supports email sending steps too, where the exported document is attached to an email. + +### Batch sending + +When `Use Batch Processing` is enabled on a service, individual E-Documents are exported but not immediately sent. Instead: + +1. **Threshold mode**: When the count of pending-batch documents reaches `Batch Threshold`, `EDocRecurrentBatchSend` fires and calls `ExportEDocumentBatch` to create a single combined blob, then sends it. +2. **Recurrent mode**: A job queue entry runs on a schedule (`Batch Start Time` + `Batch Minutes between runs`), sending whatever is pending. + +The format interface's `CreateBatch` method receives all E-Documents and their mapped source records at once, producing a single `TempBlob`. + +## Inbound V2 flow + +V2 import processing is a four-step pipeline where each step transitions the E-Document to a new `Import Processing Status`. Every step is individually undoable, allowing users to fix data and re-run. 
+ +```mermaid +flowchart TD + A[Service: ReceiveDocuments] --> B[For each document in batch] + B --> C[Create E-Document record] + C --> D[Service: DownloadDocument] + D --> E[Store blob in Data Storage] + E --> F["Status: Unprocessed"] + + F --> G["Step 1: Structure received data"] + G --> H{Already structured?} + H -- Yes --> I["Reuse unstructured as structured"] + H -- No --> J["Call IStructureReceivedEDocument"] + J --> K["Store structured blob"] + K --> L["Determine Read into Draft impl"] + I --> M["Status: Readable"] + L --> M + + M --> N["Step 2: Read into Draft"] + N --> O["Call IStructuredFormatReader.ReadIntoDraft"] + O --> P["Populate E-Doc Purchase Header/Lines"] + P --> Q["Status: Ready for draft"] + + Q --> R["Step 3: Prepare draft"] + R --> S["Call IProcessStructuredData.PrepareDraft"] + S --> T["Resolve vendor, items, accounts"] + T --> U["Status: Draft ready"] + + U --> V["Step 4: Finish draft"] + V --> W["Call IEDocumentFinishDraft.ApplyDraftToBC"] + W --> X["Create BC Purchase Invoice"] + X --> Y["Link E-Document to Purchase Invoice"] + Y --> ZZ["Status: Processed"] +``` + +The import processing statuses form an ordered sequence: + +| Status | After step | Meaning | +|--------|-----------|---------| +| Unprocessed | (initial) | Document received, blob stored | +| Readable | Structure received data | Structured content available | +| Ready for draft | Read into Draft | Draft purchase header/lines populated | +| Draft ready | Prepare draft | Vendor, items, accounts resolved | +| Processed | Finish draft | BC document created and linked | + +**`ImportEDocumentProcess`** (codeunit 6104) is the orchestrator. It's a `Codeunit.Run` target configured with a step to execute and an optional undo flag. The `ConfigureImportRun` method sets the step, and then `OnRun` dispatches to the appropriate local procedure. + +### Receiving + +The receive path starts in `EDocIntegrationManagement`. For V2: + +1. 
`IDocumentReceiver.ReceiveDocuments` is called, populating a `Temp Blob List` with metadata for each available document. +2. For each metadata blob, a new E-Document record is created (direction = Incoming, status = Created). +3. `IDocumentReceiver.DownloadDocument` fetches the actual content, storing it in `ReceiveContext.GetTempBlob()`. +4. The blob is saved to `E-Doc. Data Storage` and linked as the E-Document's `Unstructured Data Entry No.`. +5. If the service has `Automatic Import Processing` enabled, the pipeline starts immediately. + +### Undo behavior + +Each step can be undone, which reverts the E-Document to the previous status: + +- **Undo Finish Draft**: calls `IEDocumentFinishDraft.RevertDraftActions` (deletes the created BC document), clears `Document Record ID`. +- **Undo Prepare Draft**: deletes header mappings, clears vendor assignment, resets `Document Type` to None. +- **Undo Structure**: clears `Structured Data Entry No.`. +- **Undo Read into Draft**: handled by re-running -- the draft tables are overwritten. + +The `E-Document Log` entry for an undone step gets its `Step Undone` flag set to `true`. + +### V1 legacy path + +V1 documents (where `Import Process = "Version 1.0"`) bypass the four-step pipeline entirely. When `ImportEDocumentProcess` detects V1, it only responds to the "Finish Draft" step, delegating to `EDocImport.V1_ProcessEDocument`. This calls the old format interface methods (`GetBasicInfoFromReceivedDocument` and `GetCompleteInfoFromReceivedDocument`) in sequence. There is no undo capability in V1. + +## Error recovery + +Errors are handled at two levels: + +1. **Runtime errors** in format/connector code are caught by the `Codeunit.Run` pattern. The framework calls `Commit()` before running the interface code, so if it throws, the E-Document and its state survive. The error text is captured via `GetLastErrorText()` and logged through `EDocumentErrorHelper.LogSimpleErrorMessage`. + +2. 
**Business errors** are logged by connectors calling `EDocumentErrorHelper` directly. These don't throw runtime errors but still transition the service status to an error state. + +Error statuses include: `Export Error`, `Sending Error`, `Cancel Error`, `Imported Document Processing Error`, `Approval Error`. Each maps to the aggregate `E-Document Status::Error` via the `E-Doc Error Status` implementation of `IEDocumentStatus`. + +Recovery paths: + +- **Re-export**: Call `EDocExport.Recreate` to re-run the export step. +- **Re-send**: The workflow can be restarted, or the send step can be retried. +- **Re-import**: Use the undo mechanism to revert to an earlier step, fix data, and re-run forward. + +## Order matching + +For incoming invoices that reference a purchase order, the PO matching feature resolves draft purchase lines against existing PO lines or receipt lines. + +The matching can be manual (user selects PO lines on `EDocSelectPOLines` page) or AI-assisted (Copilot). The Copilot path (`EDocPOCopilotMatching`) uses Azure OpenAI function calling to propose matches based on descriptions, quantities, and amounts. + +Configuration lives in `E-Doc. PO Matching Setup` (table in `PurchaseOrderMatching/`). It defines whether to match against PO lines, receipt lines, or both, and the tolerance levels for amount/quantity differences. + +Matches are stored in `E-Doc. Purchase Line PO Match` as a junction between draft lines, PO lines, and receipt lines (all via SystemId). When `Finish Draft` runs, these matches determine whether to update an existing PO or create a new purchase invoice. + +## Batch import processing + +Auto-import is configured per service via `Auto Import`, `Import Start Time`, and `Import Minutes between runs`. The `EDocumentBackgroundJobs` codeunit manages the job queue entries: + +- `HandleRecurrentImportJob` creates or updates a recurring job queue entry. +- `HandleRecurrentBatchJob` does the same for batch sending. 
+- Job queue entry IDs are stored on the service record and cleaned up on service deletion. + +When auto-import fires, it calls the receive flow, then optionally processes documents through the pipeline based on `Automatic Import Processing` and the service's `GetDefaultImportParameters` configuration. diff --git a/src/Apps/W1/EDocument/App/docs/data-model.md b/src/Apps/W1/EDocument/App/docs/data-model.md new file mode 100644 index 0000000000..441b495748 --- /dev/null +++ b/src/Apps/W1/EDocument/App/docs/data-model.md @@ -0,0 +1,125 @@ +# Data model + +This document covers how the E-Document Core tables relate to each other and why they exist. Tables are grouped by conceptual area. The goal is to help you understand what to query and where relationships live -- not to list every field. + +## E-Document lifecycle + +The central record is `E-Document` (table 6121). Every electronic document -- inbound or outbound -- gets exactly one row here. It tracks the document's identity (vendor/customer, amounts, dates), its relationship to the BC source document (`Document Record ID`), and its aggregate processing state (`Status`). + +Each E-Document is processed by one or more services. The `E-Document Service Status` (table 6138) tracks the granular state of each E-Document-to-Service pair. The composite key is `(E-Document Entry No, E-Document Service Code)`. The aggregate `E-Document.Status` is derived by iterating all related service status records and checking their `IEDocumentStatus` implementation. + +`E-Doc. Data Storage` (table 6125) is a blob store. The E-Document references it via two foreign keys: `Unstructured Data Entry No.` (original received file -- PDF, image, etc.) and `Structured Data Entry No.` (parsed structured content -- XML, JSON). Outbound documents typically only have a structured entry. Inbound unstructured documents get both after the "Structure" step produces a structured version. 
+ +```mermaid +erDiagram + E-DOCUMENT ||--o{ E-DOCUMENT-SERVICE-STATUS : "has per-service state" + E-DOCUMENT ||--o| E-DOC-DATA-STORAGE : "unstructured content" + E-DOCUMENT ||--o| E-DOC-DATA-STORAGE : "structured content" + E-DOCUMENT-SERVICE-STATUS }o--|| E-DOCUMENT-SERVICE : "tracks service" +``` + +Key non-obvious details: + +- `E-Document.Service` is a denormalized copy of the service code, set at creation time for inbound documents. For outbound documents with multiple services (via workflow), the per-service tracking lives entirely in `E-Document Service Status`. +- `E-Document.Document Record ID` is a RecordId pointing to the BC source/target document (e.g., Sales Invoice Header, Purchase Header). This is set during creation for outbound docs, and during "Finish Draft" for inbound docs. +- Deleting an E-Document cascades: the `OnDelete` trigger removes all related log entries, integration logs, service statuses, document attachments, mapping logs, and imported lines. + +## Service configuration + +`E-Document Service` (table 6103) is the configuration hub for each electronic document service. It defines which format to use (`Document Format` enum), which connector to use (`Service Integration V2` enum), batch processing settings, auto-import scheduling, and import processing options. + +`E-Doc. Service Supported Type` (table 6122) is a simple junction table listing which `E-Document Type` values a service accepts. The key is `(E-Document Service Code, Source Document Type)`. If a document type isn't listed here, the service won't process that document. + +`Service Participant` (table 6104) maps BC entities (customers or vendors) to their external identifiers on a per-service basis. For example, a customer's PEPPOL participant ID for a specific access point. The key is `(Service, Participant Type, Participant)`. + +`E-Doc. Service Data Exch. 
Def.` (table 6139) links a service's format to Data Exchange Definitions, separately for import and export, per document type. + +```mermaid +erDiagram + E-DOCUMENT-SERVICE ||--o{ E-DOC-SERVICE-SUPPORTED-TYPE : "accepts types" + E-DOCUMENT-SERVICE ||--o{ SERVICE-PARTICIPANT : "has participants" + E-DOCUMENT-SERVICE ||--o{ E-DOC-SERVICE-DATA-EXCH-DEF : "uses data exchange defs" +``` + +Key non-obvious details: + +- The `E-Document Service` table has two integration enum fields. `Service Integration` (old, field 4) is obsolete and behind `CLEANSCHEMA29` guards. `Service Integration V2` (field 27) is the active one. Code referencing the old field will be removed. +- Batch processing configuration (`Use Batch Processing`, `Batch Mode`, `Batch Threshold`, `Batch Start Time`, etc.) is all on the service record. The service also holds two job queue entry GUIDs (`Batch Recurrent Job Id`, `Import Recurrent Job Id`) for its background jobs. +- `Import Process` field distinguishes V1 vs V2 processing. `Automatic Import Processing` controls whether imported documents are automatically pushed through the full pipeline. + +## Logging and audit + +Every significant state transition creates an `E-Document Log` (table 6124) entry. The log captures the service code, the service status at that point, and optionally a reference to a `E-Doc. Data Storage` entry containing the document blob at that stage. This gives you a full audit trail of what was exported, sent, received, etc. + +`E-Document Integration Log` (table 6127) stores HTTP communication details: the request blob, response blob, response status code, HTTP method, and URL. One integration log entry per API call. This is your debugging tool when a connector fails. + +`E-Doc. Mapping Log` (table 6123) records which field mappings were applied during a specific export or import. It links to the E-Document Log entry and the E-Document, capturing the table ID, field ID, original value, and replacement value. 
+ +```mermaid +erDiagram + E-DOCUMENT-LOG }o--|| E-DOCUMENT : "logs for" + E-DOCUMENT-LOG ||--o| E-DOC-DATA-STORAGE : "stores blob" + E-DOCUMENT-LOG ||--o{ E-DOC-MAPPING-LOG : "mapping applied" + E-DOCUMENT-INTEGRATION-LOG }o--|| E-DOCUMENT : "HTTP trace for" +``` + +Key non-obvious details: + +- The `E-Document Log` has a `Step Undone` boolean field. When a V2 import step is undone, the log entry is marked rather than deleted. This preserves the audit trail. +- `E-Document Log.Status` is an `E-Document Service Status` enum value, not the aggregate `E-Document Status`. This means log entries record the granular service-level state. +- Integration logs store request and response as BLOBs, not text. They can be exported to files via `ExportRequestMessage` / `ExportResponseMessage`. + +## Import processing (V2) + +The V2 import pipeline uses its own set of tables to hold the draft document state between steps. + +`E-Document Purchase Header` and `E-Document Purchase Line` (tables in the `Processing.Import.Purchase` namespace) are the draft tables. They mirror purchase header/line structures but with both "external" fields (from the received document -- product code, description, unit price as received) and "BC" fields (resolved vendor no., item no., GL account, etc.). The purchase line table key is `(E-Document Entry No., Line No.)`. + +`E-Doc. Purchase Line History` (table 6140) records what draft line values were used when a purchase invoice was ultimately posted. This feeds into the historical matching AI -- when a new e-document arrives from the same vendor with similar line descriptions, the system can suggest the same item/account assignments. + +`E-Doc. Vendor Assign. History` (table 6108) stores the external vendor identifiers (company name, address, VAT ID, GLN) alongside the BC vendor number that was assigned. This powers vendor auto-matching for future documents. + +`E-Doc. 
Purchase Line PO Match` (table 6114) is a junction table linking draft purchase lines to existing Purchase Order lines and/or Purchase Receipt lines. This is used by the PO matching feature where an incoming invoice is matched against an existing order. + +`E-Doc. Import Parameters` (table 6106) is a **temporary table** that configures how a single import run behaves: which step to run, whether to target a specific status, V1 vs V2 behavior flags, and an optional existing document RecordId to link to instead of creating new. + +```mermaid +erDiagram + E-DOCUMENT ||--|| E-DOC-PURCHASE-HEADER : "draft header" + E-DOC-PURCHASE-HEADER ||--o{ E-DOC-PURCHASE-LINE : "draft lines" + E-DOC-PURCHASE-LINE ||--o{ E-DOC-PURCHASE-LINE-PO-MATCH : "matched to PO" + E-DOCUMENT ||--o{ E-DOC-VENDOR-ASSIGN-HISTORY : "vendor matching history" + E-DOC-PURCHASE-LINE ||--o{ E-DOC-PURCHASE-LINE-HISTORY : "line matching history" +``` + +Key non-obvious details: + +- The `E-Doc. Purchase Line PO Match` table uses SystemId GUIDs as foreign keys to both `E-Document Purchase Line` and `Purchase Line`, not integer IDs. This is because purchase lines can be renumbered. +- `E-Doc. Import Parameters` is temporary (in-memory only). It's constructed fresh for each import run and never persisted. The "Step to Run / Desired Status" option field determines whether processing is step-driven or status-driven. +- `E-Document Header Mapping` and `E-Document Line Mapping` tables (in the Import folder) handle the mapping between external field names and BC field references during import. + +## Mapping + +`E-Doc. Mapping` (table 6118) defines find/replace rules that transform field values during export or import. Each mapping rule targets a specific table + field + service, with a "Find Value" and "Replace Value". The `For Import` boolean distinguishes export-time from import-time mappings. Rules can use BC's `Transformation Rule` system for complex transforms. + +`E-Doc. 
Mapping Log` (table 6123) records which mappings were actually applied, linking back to the E-Document Log entry. + +```mermaid +erDiagram + E-DOC-MAPPING }o--|| E-DOCUMENT-SERVICE : "defined for" + E-DOC-MAPPING-LOG }o--|| E-DOCUMENT-LOG : "applied during" + E-DOC-MAPPING-LOG }o--|| E-DOCUMENT : "for document" +``` + +Key non-obvious details: + +- Mapping operates at the RecordRef/FieldRef level. The `EDocMapping.MapRecord` codeunit copies a source RecordRef into a temporary mapped RecordRef, applying transformations. The original record is never modified -- the mapped copy is what gets passed to the format interface. +- The `Used` boolean on `E-Doc. Mapping` is reset to `false` before each export/import run, then set to `true` for rules that matched. This lets you see which rules are actually being used. + +## Cross-cutting concerns + +**SystemId linking**: Several relationships use `SystemId` (GUID) rather than primary key integers. The `E-Document.SystemId` is used in the purchase header's `E-Document Link` field (V1 legacy). The PO match table uses SystemId GUIDs for all three parties. This makes relationships stable across renumbering but harder to query manually. + +**Blob storage pattern**: Binary content is never stored on the E-Document record itself. It always goes through `E-Doc. Data Storage`, which is referenced by integer `Entry No.` from both the E-Document (for unstructured/structured content) and the E-Document Log (for point-in-time snapshots). Deleting a log entry cascades to its data storage entry. + +**Change detection**: The `E-Document Notification` table (table in `Document/Notification/`) tracks notification state for role center cue tiles. It uses a simple enum-based type system to track things like "new inbound documents available." 
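As a concrete illustration of the per-service status pattern from the lifecycle section, the granular states behind an E-Document's aggregate `Status` can be enumerated like this (hedged sketch: the helper procedure is invented; the table and key field names are the ones documented above):

```al
// List the per-service states behind an E-Document's aggregate Status field.
procedure ListServiceStatuses(EDocument: Record "E-Document")
var
    EDocServiceStatus: Record "E-Document Service Status";
begin
    EDocServiceStatus.SetRange("E-Document Entry No", EDocument."Entry No");
    if EDocServiceStatus.FindSet() then
        repeat
            Message('%1 -> %2', EDocServiceStatus."E-Document Service Code", EDocServiceStatus.Status);
        until EDocServiceStatus.Next() = 0;
end;
```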
diff --git a/src/Apps/W1/EDocument/App/docs/extensibility.md b/src/Apps/W1/EDocument/App/docs/extensibility.md new file mode 100644 index 0000000000..7e7f1b2846 --- /dev/null +++ b/src/Apps/W1/EDocument/App/docs/extensibility.md @@ -0,0 +1,244 @@ +# Extensibility + +This document is organized by developer intent. Each section describes a specific customization scenario, the interfaces involved, and what the framework expects from your implementation. Read [business-logic.md](business-logic.md) first for the processing flow context. + +## How to implement a document format + +Implement the `E-Document` interface (in `Document/Interfaces/EDocument.Interface.al`) and register it on the `E-Document Format` enum. + +The format interface handles both export (BC document to XML/JSON) and import (XML/JSON to BC document). It has five methods: + +```al +interface "E-Document" +{ + procedure Check(var SourceDocumentHeader: RecordRef; EDocumentService: Record "E-Document Service"; + EDocumentProcessingPhase: Enum "E-Document Processing Phase") + procedure Create(EDocumentService: Record "E-Document Service"; var EDocument: Record "E-Document"; + var SourceDocumentHeader: RecordRef; var SourceDocumentLines: RecordRef; var TempBlob: Codeunit "Temp Blob") + procedure CreateBatch(EDocumentService: Record "E-Document Service"; var EDocuments: Record "E-Document"; + var SourceDocumentHeaders: RecordRef; var SourceDocumentsLines: RecordRef; var TempBlob: Codeunit "Temp Blob") + procedure GetBasicInfoFromReceivedDocument(var EDocument: Record "E-Document"; var TempBlob: Codeunit "Temp Blob") + procedure GetCompleteInfoFromReceivedDocument(var EDocument: Record "E-Document"; + var CreatedDocumentHeader: RecordRef; var CreatedDocumentLines: RecordRef; var TempBlob: Codeunit "Temp Blob") +} +``` + +**What to know**: + +- `Check` is called at release/post time, before the document is posted. Validate that all required data is present and throw an error if not. 
The `EDocumentProcessingPhase` tells you whether this is a release check or a post check. +- `Create` receives mapped RecordRefs (after `E-Doc. Mapping` transformations). Write the exported document into `TempBlob`. The E-Document record is writeable -- you can update fields like `Receiving Company VAT Reg. No.` here. +- `CreateBatch` is only called when batch processing is enabled. The `EDocuments` record set contains multiple E-Documents. Write a single combined blob. +- `GetBasicInfoFromReceivedDocument` is the V1 import entry point. Populate basic E-Document fields (vendor, amounts, dates) from the blob. This is called first for every received document. +- `GetCompleteInfoFromReceivedDocument` is the V1 import method that creates the actual BC purchase document. The `CreatedDocumentHeader` and `CreatedDocumentLines` RecordRefs should be populated with the newly created records. + +For V2 import, these last two methods are still called for backward compatibility but the real work happens through the V2 interfaces described below. + +## How to build a service connector + +A service connector handles the network communication -- sending documents to and receiving documents from an external API. Implement one or more of these interfaces and register them on the `Service Integration` enum (in `Integration/ServiceIntegration.Enum.al`). + +### Sending: `IDocumentSender` + +In `Integration/Interfaces/IDocumentSender.Interface.al`: + +```al +interface IDocumentSender +{ + procedure Send(var EDocument: Record "E-Document"; var EDocumentService: Record "E-Document Service"; + SendContext: Codeunit SendContext); +} +``` + +The `SendContext` gives you: + +- `SendContext.GetTempBlob()` -- the exported document blob. +- `SendContext.Http().GetHttpRequestMessage()` / `GetHttpResponseMessage()` -- HTTP objects that will be automatically logged if populated. +- `SendContext.Status().SetStatus(...)` -- override the resulting service status (defaults to "Sent"). 
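Putting those pieces together, a minimal sender might look like the sketch below. Everything beyond the interface signature and the documented context accessors is an assumption -- the object ID/name, the endpoint, and how a production connector surfaces the response for logging:

```al
codeunit 50120 "Sample Document Sender" implements IDocumentSender
{
    procedure Send(var EDocument: Record "E-Document"; var EDocumentService: Record "E-Document Service"; SendContext: Codeunit SendContext)
    var
        Client: HttpClient;
        Request: HttpRequestMessage;
        Response: HttpResponseMessage;
        Content: HttpContent;
        DocInStream: InStream;
    begin
        // The framework placed the exported blob on the context.
        SendContext.GetTempBlob().CreateInStream(DocInStream);
        Content.WriteFrom(DocInStream);

        // Populate the request held by the context so the framework can log it.
        Request := SendContext.Http().GetHttpRequestMessage();
        Request.SetRequestUri('https://api.example.invalid/edocuments'); // placeholder endpoint
        Request.Method := 'POST';
        Request.Content := Content;

        Client.Send(Request, Response);
        // No SetStatus call is needed on success: the default status "Sent" applies,
        // or "Pending Response" if this codeunit also implements IDocumentResponseHandler.
    end;
}
```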
+ +The framework determines sync vs async based on whether your implementation also implements `IDocumentResponseHandler`. If it does, the status will be set to "Pending Response" after a successful send. + +### Async response: `IDocumentResponseHandler` + +In `Integration/Interfaces/IDocumentResponseHandler.Interface.al`: + +```al +interface IDocumentResponseHandler +{ + procedure GetResponse(var EDocument: Record "E-Document"; var EDocumentService: Record "E-Document Service"; + SendContext: Codeunit SendContext): Boolean; +} +``` + +Return `true` when the service confirms the document was received/processed. Return `false` to keep polling. If you log an error via `EDocumentErrorHelper`, the status transitions to "Sending Error" and polling stops. + +### Receiving: `IDocumentReceiver` + +In `Integration/Interfaces/IDocumentReceiver.Interface.al`: + +```al +interface IDocumentReceiver +{ + procedure ReceiveDocuments(var EDocumentService: Record "E-Document Service"; + DocumentsMetadata: Codeunit "Temp Blob List"; ReceiveContext: Codeunit ReceiveContext) + procedure DownloadDocument(var EDocument: Record "E-Document"; var EDocumentService: Record "E-Document Service"; + DocumentMetadata: Codeunit "Temp Blob"; ReceiveContext: Codeunit ReceiveContext) +} +``` + +`ReceiveDocuments` queries the API for available documents and adds one `Temp Blob` per document to the `DocumentsMetadata` list. Each blob typically contains the document ID or metadata needed by `DownloadDocument`. The count of blobs determines how many E-Documents will be created. + +`DownloadDocument` is called once per document. Fetch the actual content and write it to `ReceiveContext.GetTempBlob()`. You can also update `EDocument` fields (like `Incoming E-Document No.`). + +### Marking fetched: `IReceivedDocumentMarker` + +In `Integration/Interfaces/IReceivedDocumentMarker.Interface.al`. Optional. 
If your connector implements this, the framework calls it after successfully downloading a document, so you can mark it as fetched on the external API to prevent re-downloading. + +### Privacy consent: `IConsentManager` + +In `Integration/Interfaces/IConsentManager.Interface.al`. Optional. If your connector implements this, the framework calls `ObtainPrivacyConsent()` when a user selects your integration on a service. Return `false` to block activation. + +## How to add custom actions + +Actions are post-send operations like approval checks and cancellation requests. + +### `ISentDocumentActions` + +In `Integration/Interfaces/ISentDocumentActions.Interface.al`: + +```al +interface ISentDocumentActions +{ + procedure GetApprovalStatus(var EDocument: Record "E-Document"; var EDocumentService: Record "E-Document Service"; + ActionContext: Codeunit ActionContext): Boolean; + procedure GetCancellationStatus(var EDocument: Record "E-Document"; var EDocumentService: Record "E-Document Service"; + ActionContext: Codeunit ActionContext): Boolean; +} +``` + +These are pre-built action types. The framework provides `SentDocumentApproval` and `SentDocumentCancellation` codeunits that call your implementation and update the status accordingly (Approved, Rejected, Canceled, or error). + +### `IDocumentAction` + +In `Integration/Interfaces/IDocumentAction.Interface.al`: + +```al +interface IDocumentAction +{ + procedure InvokeAction(var EDocument: Record "E-Document"; var EDocumentService: Record "E-Document Service"; + ActionContext: Codeunit ActionContext): Boolean +} +``` + +This is the generic action interface. Return `true` to update the E-Document service status to whatever `ActionContext.Status().SetStatus(...)` was set to. Return `false` to leave the status unchanged. + +`ActionContext` provides the same `Http()` and `Status()` accessors as `SendContext`. 
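A hedged sketch of a generic action, assuming an external status check whose details are omitted (the codeunit name/ID are illustrative):

```al
// Hypothetical action sketch -- name and ID are illustrative.
codeunit 50101 "Demo Document Action" implements IDocumentAction
{
    procedure InvokeAction(var EDocument: Record "E-Document"; var EDocumentService: Record "E-Document Service";
        ActionContext: Codeunit ActionContext): Boolean
    begin
        // Query the external service for the document's current state here (omitted).
        // If the state changed, set the target status and return true so the
        // framework persists it; return false to leave the status unchanged.
        ActionContext.Status().SetStatus(Enum::"E-Document Service Status"::Approved);
        exit(true);
    end;
}
```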
+ +## How to customize import processing + +The V2 import pipeline has four steps, each driven by a separate interface. To customize a step, implement the relevant interface and register it on the corresponding enum. + +### Step 1 -- Structure received data: `IStructureReceivedEDocument` + +In `Processing/Interfaces/IStructureReceivedEDocument.Interface.al`: + +```al +interface IStructureReceivedEDocument +{ + procedure StructureReceivedEDocument(EDocumentDataStorage: Record "E-Doc. Data Storage"): Interface IStructuredDataType +} +``` + +Takes raw blob data (e.g., a PDF) and returns a structured representation. The returned `IStructuredDataType` declares its file format and preferred `Read into Draft` implementation. Register on the `Structure Received E-Doc.` enum. + +The built-in "Already Structured" value skips this step (reuses the unstructured blob as-is). This is the path for documents received as XML/JSON. + +### Step 2 -- Read into draft: `IStructuredFormatReader` + +In `Processing/Interfaces/IStructuredFormatReader.Interface.al`. Takes structured content and populates the `E-Document Purchase Header` and `E-Document Purchase Line` draft tables. Returns an `E-Doc. Process Draft` enum value that determines which `IProcessStructuredData` implementation runs next. Register on the `E-Doc. Read into Draft` enum. + +### Step 3 -- Prepare draft: `IProcessStructuredData` + +In `Processing/Interfaces/IProcessStructuredData.Interface.al`: + +```al +interface IProcessStructuredData +{ + procedure PrepareDraft(EDocument: Record "E-Document"; EDocImportParameters: Record "E-Doc. Import Parameters"): Enum "E-Document Type"; + procedure GetVendor(EDocument: Record "E-Document"; Customizations: Enum "E-Doc. Proc. Customizations"): Record Vendor; + procedure OpenDraftPage(var EDocument: Record "E-Document"); + procedure CleanUpDraft(EDocument: Record "E-Document"); +} +``` + +`PrepareDraft` resolves BC entities (vendor, items, GL accounts) and returns the document type. 
`GetVendor` is called separately to populate the E-Document's vendor fields. `OpenDraftPage` opens the UI for manual review. `CleanUpDraft` is called when the E-Document is deleted. + +### Step 4 -- Finish draft: `IEDocumentFinishDraft` + +In `Processing/Interfaces/IEDocumentFinishDraft.Interface.al`: + +```al +interface IEDocumentFinishDraft +{ + procedure ApplyDraftToBC(EDocument: Record "E-Document"; EDocImportParameters: Record "E-Doc. Import Parameters"): RecordId; + procedure RevertDraftActions(EDocument: Record "E-Document"); +} +``` + +`ApplyDraftToBC` creates the actual BC document (purchase invoice, journal line, etc.) and returns its `RecordId`. `RevertDraftActions` undoes this -- typically deleting the created document. This interface is registered on the `E-Document Type` enum, meaning different document types can have different finish behaviors. + +### `IPrepareDraft` + +In `Processing/Interfaces/IPrepareDraft.Interface.al`: + +```al +interface IPrepareDraft +{ + procedure PrepareDraft(EDocument: Record "E-Document"; EDocImportParameters: Record "E-Doc. Import Parameters"): Enum "E-Document Type"; +} +``` + +A simpler alternative to `IProcessStructuredData` when you only need to customize the draft preparation logic without vendor resolution or UI. + +## How to customize export eligibility + +### `IExportEligibilityEvaluator` + +In `Processing/Interfaces/IExportEligibilityEvaluator.Interface.al`: + +```al +interface IExportEligibilityEvaluator +{ + procedure ShouldExport(EDocumentService: Record "E-Document Service"; + SourceDocumentHeader: RecordRef; DocumentType: Enum "E-Document Type"): Boolean; +} +``` + +Register on the `Export Eligibility Evaluator` enum (set on the E-Document Service). Called after the document type support check passes. The default implementation (`DefaultExportEligibility`) always returns `true`. 
Override this to add custom conditions -- for example, only export invoices above a threshold amount, or only for specific customer groups. + +## Key integration events + +These are the most useful events for extending behavior without implementing a full interface: + +**Export path** (in `EDocExport.Codeunit.al`): + +- `OnBeforeEDocumentCheck` -- Skip or override the pre-post validation. Has `IsHandled` pattern. +- `OnAfterEDocumentCheck` -- Run additional validations after the standard check. +- `OnBeforeCreateEDocument` -- Modify the E-Document record before it's inserted. +- `OnAfterCreateEDocument` -- React to E-Document creation (e.g., add custom logging). + +**Import path** (in `ImportEDocumentProcess.Codeunit.al`): + +- `OnADIProcessingCompleted` -- Fired after Azure Document Intelligence structures a PDF. +- `OnFoundVendorNo` -- Fired when vendor resolution succeeds during Prepare Draft. + +**Service status** (in `EDocumentProcessing.Codeunit.al`): + +- `OnAfterModifyServiceStatus` -- React to any service status change. + +**Log export** (in `EDocumentLog.Table.al`): + +- `OnBeforeExportDataStorage` -- Customize the filename when a user exports log data. + +**Service configuration** (in `EDocumentService.Table.al`): + +- `OnAfterGetDefaultFileExtension` -- Override the default `.xml` file extension for a service. diff --git a/src/Apps/W1/EDocument/App/docs/patterns.md b/src/Apps/W1/EDocument/App/docs/patterns.md new file mode 100644 index 0000000000..de994eb1d3 --- /dev/null +++ b/src/Apps/W1/EDocument/App/docs/patterns.md @@ -0,0 +1,162 @@ +# Patterns + +This document covers recurring code patterns you'll encounter when reading or extending the E-Document Core codebase. Understanding these patterns prevents you from fighting the framework. + +## Active patterns + +### Commit-before-Run for error isolation + +The most important pattern in this codebase. 
Whenever the framework calls third-party code (format interfaces, connectors), it follows this sequence: + +``` +Commit(); +EDocumentCreate.SetSource(...); +if not EDocumentCreate.Run() then + EDocumentErrorHelper.LogSimpleErrorMessage(EDocument, GetLastErrorText()); +``` + +The `Commit()` persists the E-Document record and its current state. The `Codeunit.Run()` executes the interface code in an implicit error boundary -- if it throws a runtime error, the transaction inside `Run` is rolled back, but the committed E-Document survives. The error text is captured and logged. + +You'll see this in `EDocExport.CreateEDocument` (line ~399), `EDocExport.CreateEDocumentBatch`, `EDocIntegrationManagement.Send`, and throughout the integration layer. Never remove these `Commit()` calls -- they're load-bearing. + +**Gotcha**: Because of the commit, any changes made to the E-Document *before* the `Codeunit.Run` call are persisted even if the interface code fails. The framework re-reads the record after the call: `EDocument.Get(EDocument."Entry No")`. + +### Dual status model + +The E-Document has two status systems that must stay in sync: + +1. **`E-Document.Status`** -- 3 values: In Progress, Processed, Error. This is what the user sees. +2. **`E-Document Service Status.Status`** -- 24+ values: Created, Exported, Sending Error, Sent, Pending Response, Imported, etc. This is the real state machine. + +The bridge is the `IEDocumentStatus` interface implemented by each enum value of `E-Document Service Status`: + +- Values like Created, Exported, Imported use `E-Doc In Progress Status` (default) -- maps to "In Progress" +- Values like Sent, Approved, Imported Document Created use `E-Doc Processed Status` -- maps to "Processed" +- Values like Sending Error, Export Error use `E-Doc Error Status` -- maps to "Error" + +The aggregate is computed in `EDocumentProcessing.ModifyEDocumentStatus`. 
It iterates all `E-Document Service Status` records for the document. If any is Error, the aggregate is Error. If all are Processed, the aggregate is Processed. Otherwise, In Progress. + +**Rule**: Never set `E-Document.Status` directly. Always update the service status and call `ModifyEDocumentStatus`. + +### Context object pattern + +Send, receive, and action operations pass state through context codeunits rather than loose parameters: + +- **`SendContext`** (`Integration/Send/SendContext.Codeunit.al`) -- Holds TempBlob (document content), HttpMessageState (request/response for logging), and IntegrationActionStatus (resulting status). +- **`ReceiveContext`** (`Integration/Receive/ReceiveContext.Codeunit.al`) -- Same structure for the receive path. +- **`ActionContext`** (`Integration/Actions/ActionContext.Codeunit.al`) -- Same structure for post-send actions. + +These codeunits are `Access = Public` and use `this.` prefixed globals. They act as mutable state containers passed by reference to interface implementations. The pattern keeps interface method signatures clean and lets the framework automatically log HTTP communication after the call returns. + +**Key behavior**: `SendContext.Status()` defaults to `Sent` before `IDocumentSender.Send` is called. If the connector doesn't explicitly change it, the document goes to "Sent." If it also implements `IDocumentResponseHandler`, the framework overrides this to "Pending Response" (see `SendRunner.SendV2`). 
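The receive path uses the same container idea. A hedged `IDocumentReceiver` sketch, assuming the external API calls are omitted (codeunit name/ID and payload strings are illustrative):

```al
// Hypothetical receiver sketch -- name, ID, and payloads are illustrative.
codeunit 50102 "Demo Document Receiver" implements IDocumentReceiver
{
    procedure ReceiveDocuments(var EDocumentService: Record "E-Document Service";
        DocumentsMetadata: Codeunit "Temp Blob List"; ReceiveContext: Codeunit ReceiveContext)
    var
        TempBlob: Codeunit "Temp Blob";
        OutStream: OutStream;
    begin
        // One Temp Blob per available document; each blob carries the metadata
        // (here just an id) that DownloadDocument will need. (API call omitted.)
        TempBlob.CreateOutStream(OutStream);
        OutStream.WriteText('DOC-001'); // illustrative document id
        DocumentsMetadata.Add(TempBlob);
    end;

    procedure DownloadDocument(var EDocument: Record "E-Document"; var EDocumentService: Record "E-Document Service";
        DocumentMetadata: Codeunit "Temp Blob"; ReceiveContext: Codeunit ReceiveContext)
    var
        OutStream: OutStream;
    begin
        // Fetch the content for the id in DocumentMetadata (omitted) and write it
        // into the context blob; the framework stores it as the document data.
        ReceiveContext.GetTempBlob().CreateOutStream(OutStream);
        OutStream.WriteText('<Invoice/>'); // illustrative payload
    end;
}
```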
+ +### V1 vs V2 integration architecture + +V1 and V2 coexist behind conditional compilation guards: + +- `#if not CLEAN26` -- V1 integration code (the monolithic `E-Document Integration` interface) +- `#if not CLEAN27` -- V1 import processing code (direct Purchase Header linking via `E-Document Link` GUID field) + +In `SendRunner.OnRun()`: + +```al +#if not CLEAN26 + if GlobalEDocumentService."Service Integration V2" <> Enum::"Service Integration"::"No Integration" then + SendV2() + else + if GlobalEDocumentService."Use Batch Processing" then + SendBatch() + else + Send(); +#else + SendV2(); +#endif +``` + +V1 checks the old `Service Integration` enum field; V2 checks `Service Integration V2`. When `CLEAN26` is defined, V1 code is removed entirely. + +**Rule**: All new development should use V2 interfaces. V1 exists only for backward compatibility during the migration period. + +### IsHandled pattern in events + +Integration events use the standard BC `IsHandled` pattern: + +```al +OnBeforeEDocumentCheck(EDocSourceRecRef, EDocumentProcessingPhase, IsHandled); +if IsHandled then + exit; +``` + +Subscribers set `IsHandled := true` to completely replace the default behavior. This is used sparingly in E-Document Core -- mainly in the export check path. Most events are notification-only (no `IsHandled` parameter). + +### RecordRef-based mapping + +The `E-Doc. Mapping` system operates entirely on `RecordRef` and `FieldRef` to be table-agnostic. The `EDocMapping.MapRecord` codeunit: + +1. Opens a temporary RecordRef of the same table as the source. +2. Copies all fields from the source record to the temp record. +3. For each matching mapping rule (same table ID, field ID), reads the field value, applies the find/replace, and writes it back. +4. Inserts the temp record into the temporary table. + +The mapped temp records are what the format interface receives -- never the original data. This means format implementations don't need to know about mappings at all. 
The mapping rules reference tables and fields by integer IDs, making them resilient to code changes but opaque to read. + +The `For Import` boolean on `E-Doc. Mapping` distinguishes export-time mappings from import-time ones. The same service can have different mappings for each direction. + +### Telemetry scopes + +The codebase follows a consistent Start/End scope pattern for telemetry: + +```al +Telemetry.LogMessage('0000LBF', EDocTelemetryCreateScopeStartLbl, Verbosity::Normal, + DataClassification::OrganizationIdentifiableInformation, TelemetryScope::All, TelemetryDimensions); +// ... actual work ... +Telemetry.LogMessage('0000LBG', EDocTelemetryCreateScopeEndLbl, Verbosity::Normal, + DataClassification::OrganizationIdentifiableInformation, TelemetryScope::All); +``` + +Each scope has a unique telemetry ID pair (start/end). Dimensions are built via `EDocumentProcessing.GetTelemetryDimensions` and include service and document identifiers. The `Locked = true` labels prevent translation of telemetry strings. + +Feature uptake telemetry (`EDocTok: Label 'W1 E-Document'`) tracks overall module adoption. + +### Import step state machine + +The V2 import pipeline uses a linear state machine where each step's output determines the next step's implementation. The chain lives on the E-Document record itself: + +1. `Structure Data Impl.` (enum) -- set by the receive path or auto-detected from file format +2. `Read into Draft Impl.` (enum) -- set by the `IStructuredDataType` returned from step 1 +3. `Process Draft Impl.` (enum) -- set by `IStructuredFormatReader.ReadIntoDraft` in step 2 + +Each enum value implements the corresponding interface. This is runtime polymorphism via enum dispatch -- the E-Document record carries its own processing pipeline configuration. + +`ImportEDocumentProcess.GetNextStep` and `GetPreviousStep` implement bidirectional navigation. `StatusStepIndex` maps each status to a numeric index (0-4) for comparison. 
`IsEDocumentInStateGE` checks if the document has reached or passed a given state. + +## Legacy patterns (avoid in new code) + +### V1 Integration Interface + +The `E-Document Integration` interface (in `Integration/EDocumentIntegration.Interface.al`) was the original monolithic connector interface. It combined send, receive, batch send, and response handling into a single interface with many methods. It's registered on the `E-Document Integration` enum (field 4 on E-Document Service). + +**Why to avoid**: V2 splits this into focused interfaces (`IDocumentSender`, `IDocumentReceiver`, `IDocumentResponseHandler`). This allows connectors to implement only what they need, and it enables the context object pattern for cleaner state management. The V1 interface passes raw `HttpRequestMessage`/`HttpResponseMessage` as var parameters rather than using contexts. + +### Direct error message manipulation + +Older code sometimes creates error messages directly or uses `Error()` calls. The modern pattern is to use `EDocumentErrorHelper`: + +- `LogSimpleErrorMessage(EDocument, ErrorText)` -- logs a text error +- `LogErrorMessage(EDocument, Record, FieldNo, ErrorText)` -- logs an error with field context +- `LogWarningMessage(EDocument, Record, FieldNo, WarningText)` -- logs a warning +- `ErrorMessageCount(EDocument)` -- counts errors (used to detect if an operation added new errors) + +The error helper integrates with BC's Error Message framework, making errors visible on the E-Document page's error factbox. Direct `Error()` calls bypass this and lose the audit trail. + +### Hardcoded workflow steps + +Early implementations sometimes hardcoded service routing logic. The correct approach is to use BC Workflow with the `EDOC` category. The `EDocumentWorkFlowSetup` codeunit registers the workflow events and responses. The `Workflow Step Argument` extension adds the `E-Document Service` field so each workflow step can target a different service. 
+ +This allows administrators to configure multi-service flows (e.g., "export via PEPPOL, then email a PDF copy") without code changes. + +### Purchase Header E-Document Link field + +The V1 import path linked E-Documents to purchase headers via a GUID field (`E-Document Link`) on the Purchase Header table extension. This is a direct table relationship that bypasses the standard `Document Record ID` mechanism. + +V2 uses `E-Document.Document Record ID` exclusively, which works with any table (not just Purchase Header) and follows the standard BC pattern for cross-table references. The old field is behind `#if not CLEAN27` guards. diff --git a/src/Apps/W1/EDocument/App/src/ClearanceModel/docs/CLAUDE.md b/src/Apps/W1/EDocument/App/src/ClearanceModel/docs/CLAUDE.md new file mode 100644 index 0000000000..e79b3272c8 --- /dev/null +++ b/src/Apps/W1/EDocument/App/src/ClearanceModel/docs/CLAUDE.md @@ -0,0 +1,20 @@ +# ClearanceModel + +QR code generation and display for posted documents that have been cleared by a tax authority (e.g., Saudi Arabia ZATCA). This module adds QR code blob fields to posted documents and surfaces them on pages and printed reports. It is country-specific infrastructure that is a silent no-op when no QR data is populated. + +## How it works + +Four table extensions (`PostedSalesInvoicewithQR.TableExt.al` and siblings) add `QR Code Image` (MediaSet) and `QR Code Base64` (Blob) fields to Sales Invoice Header, Sales Cr.Memo Header, Service Invoice Header, and Service Cr.Memo Header. Corresponding report extensions add the QR Code Image column to the standard printed document reports (Word layout), and page extensions add a QR viewer action to the posted document pages. + +`EDocumentQRCodeManagement.Codeunit.al` is the core logic. `InitializeAndRunQRCodeViewer` reads the Base64-encoded QR data from the posted document, copies it to a temporary `EDoc QR Buffer` record, and opens the viewer page. 
`SetQRCodeImageFromBase64` decodes the Base64 string into a PNG binary and imports it into the MediaSet field for display. `ExportQRCodeToFile` lets users download the QR code as a PNG file. + +`EDocQRBuffer.Table.al` is a temporary-only table used as a display container for the QR viewer page. It holds the document type, document number, Base64 blob, and decoded QR image. + +## Things to know + +- The QR code data itself is written to the posted document by country-specific clearance connectors (e.g., ZATCA). This module only reads and displays it -- it does not generate the QR content. +- If `QR Code Base64` has no value, the viewer shows a message and exits. There is no error -- this is the expected path for countries that do not use clearance. +- The four table/report/page extension triplets are structurally identical across Sales Invoice, Sales Credit Memo, Service Invoice, and Service Credit Memo. +- The report extensions provide Word layout files (`.docx`) stored in `.resources/Template/` that include the QR code image placeholder. +- Base64 decoding uses the standard `Base64 Convert` codeunit. The decoded binary is imported as a `MediaSet` for rendering in the AL client. +- The `EDoc QR Buffer` table is `TableType = Temporary` by declaration, so it never persists data to the database. diff --git a/src/Apps/W1/EDocument/App/src/DataExchange/docs/CLAUDE.md b/src/Apps/W1/EDocument/App/src/DataExchange/docs/CLAUDE.md new file mode 100644 index 0000000000..ba09b3adaa --- /dev/null +++ b/src/Apps/W1/EDocument/App/src/DataExchange/docs/CLAUDE.md @@ -0,0 +1,20 @@ +# DataExchange + +Bridges the E-Document framework to BC's standard Data Exchange Framework for PEPPOL BIS 3.0 import and export. This module is the older, Data Exchange Definition-based approach to format handling -- the newer approach is the `IEDocument` interface in the Format module. Both coexist and can be selected per service. 
+ +## How it works + +`EDocDataExchangeImpl.Codeunit.al` implements the `IEDocument` interface using Data Exchange Definitions. On export, it looks up the correct Data Exchange Definition from `E-Doc. Service Data Exch. Def.` for the service + document type combination, creates a `Data Exch.` record, and runs `ExportFromDataExch`. On import, `GetBasicInfoFromReceivedDocument` tries every registered import definition to find the best match (the one producing the most intermediate data records), then `GetCompleteInfoFromReceivedDocument` processes that intermediate data into Purchase Header/Line records. + +The `PEPPOL Data Exchange Definition` subfolder contains pre-mapping codeunits (`PreMapSalesInvLine`, `PreMapSalesCrMemoLine`, `PreMapServiceInvLine`, `PreMapServiceCrMemoLine`) that transform BC records before export. `EDocDEDPEPPOLSubscribers.Codeunit.al` is a SingleInstance codeunit that hooks into data exchange events to handle PEPPOL-specific logic like tax subtotals and allowance charges. `EDocDEDPEPPOLExternal.Codeunit.al` provides external handler entry points. + +## Things to know + +- The `FindDataExchAndDocumentType` method is a brute-force approach: it runs every registered import definition against the incoming blob and picks the one yielding the most intermediate records. This involves Commit() calls inside a loop. +- Batch processing is explicitly not supported (`BatchNotSupportedErr`) -- this implementation works document-by-document only. +- The `OnBeforeCheckRecRefCount` subscriber suppresses empty-record-count errors for e-document exports, since not all related data (e.g., Document Attachments) will exist for every document. +- Import creates temporary Purchase Header/Line records from intermediate data, handling field type conversion via `Config. Validate Management`. Document attachments embedded as Base64 in the XML are also extracted. +- `E-Doc. Service Data Exch. 
Def.` (`EDocServiceDataExchDef.Table.al`) links a service code + document type to separate import and export Data Exchange Definition codes. +- The import intermediate table safety check (`DataExchDefUsesIntermediate`) ensures that only definitions using intermediate tables are considered, preventing accidental direct database inserts. +- Header field extraction during `GetBasicInfo` uses XPath navigation with namespace handling -- the root element's default namespace and all `xmlns:` prefixed namespaces are registered on the namespace manager before queries. +- Integration events `OnAfterDataExchangeInsert` and `OnBeforeDataExchangeExport` let extensions modify the Data Exch. record or inject additional processing before the actual export runs. diff --git a/src/Apps/W1/EDocument/App/src/Document/docs/CLAUDE.md b/src/Apps/W1/EDocument/App/src/Document/docs/CLAUDE.md new file mode 100644 index 0000000000..f16e57b07f --- /dev/null +++ b/src/Apps/W1/EDocument/App/src/Document/docs/CLAUDE.md @@ -0,0 +1,23 @@ +# Document + +This area defines the core E-Document entity (table 6121), its status model, direction routing, type classification, and notification system. Everything else in E-Document Core ultimately references back to this table. + +## How it works + +An E-Document record represents one electronic document, either outgoing (created from a posted sales/service doc) or incoming (received from an external service). The `Direction` enum (`EDocumentDirection.Enum.al`) has two values -- Outgoing and Incoming -- and controls which UI pages, processing pipelines, and validation logic apply. `Document Record ID` is a RecordId pointing to the source/target BC document (e.g. Posted Sales Invoice, Purchase Header); it can be empty for newly imported incoming documents that haven't been linked to a BC record yet. + +The status model has two layers. `E-Document Status` (enum 6108) is the top-level aggregate with three values: In Progress, Processed, Error. 
`E-Document Service Status` (enum 6106) is the granular per-service status with ~24 values (Created, Exported, Sent, Imported, Order Linked, Sending Error, etc.). Each service status value declares which `IEDocumentStatus` implementation it uses -- for example, "Exported" and "Sent" map to `EDocProcessedStatus`, "Sending Error" and "Export Error" map to `EDocErrorStatus`, and most others default to `EDocInProgressStatus`. This interface-based mapping means the top-level status is derived from the service status, not set independently. + +The `E-Document Type` enum (enum 6121) has 22 values covering sales, purchase, service, finance charge, reminder, journal, and transfer document types. It implements `IEDocumentFinishDraft` -- only Purchase Invoice has a concrete implementation (`E-Doc. Create Purchase Invoice`); all others use the default unspecified implementation. This reflects the fact that inbound document creation is currently purchase-focused. + +Duplicate detection uses a secondary key on `(Incoming E-Document No., Bill-to/Pay-to No., Document Date)` via the `IsDuplicate` method. When a duplicate is found, telemetry is logged and the user is warned. + +## Things to know + +- Deletion is restricted: you cannot delete a Processed E-Document or one with a non-empty `Document Record ID` (linked to a BC document). Deleting a non-duplicate requires UI confirmation +- Validating `Document Record ID` triggers `EDocAttachmentProcessor.MoveAttachmentsAndDelete`, which moves Document Attachment records from the E-Document to the newly linked BC document +- `CleanupDocument` cascades deletion to logs, service statuses, attachments, imported lines, mapping logs, and Purchase Header E-Document Links -- it also calls `IProcessStructuredData.CleanUpDraft` for version 2 processing cleanup +- The `Structured Data Entry No.` and `Unstructured Data Entry No.` fields point to `E-Doc. Data Storage` records (e.g. 
XML and PDF respectively) -- structured means machine-parseable, unstructured means human-readable +- The notification table (`EDocumentNotification.Table.al`) is keyed by (E-Document Entry No., ID, User Id) -- notifications are per-user per-document, currently used for vendor matching warnings +- `Service` field on E-Document references an `E-Document Service` record and is set at creation time along with `Service Integration` -- this determines which integration and format implementation processes the document +- The `Import Processing Status` is a FlowField that reads from `E-Document Service Status`, not stored directly on the E-Document diff --git a/src/Apps/W1/EDocument/App/src/Extensions/docs/CLAUDE.md b/src/Apps/W1/EDocument/App/src/Extensions/docs/CLAUDE.md new file mode 100644 index 0000000000..e04e254427 --- /dev/null +++ b/src/Apps/W1/EDocument/App/src/Extensions/docs/CLAUDE.md @@ -0,0 +1,23 @@ +# Extensions + +This area hooks E-Document Core into standard Business Central documents. It extends posted and unposted sales, purchase, service, finance charge, and reminder pages with E-Document actions, extends key tables with E-Document fields, and subscribes to posting events so E-Documents are created automatically when documents are posted. + +## How it works + +The central piece is `EDocumentSubscribers.Codeunit.al` (codeunit 6103), which subscribes to BC posting events across all supported document types. The pattern for outgoing documents is: subscribe to `OnAfterPost*` events (e.g. `OnAfterPostSalesDoc`, `OnAfterPostServiceDoc`, `OnAfterIssueReminder`) and call `CreateEDocumentFromPostedDocument`. The subscriber first checks whether the customer has a Document Sending Profile with Electronic Document set to "Extended E-Document Service Flow" -- if not, no E-Document is created. 
For purchase documents, the pattern is different: `OnAfterPostPurchaseDoc` calls `PointEDocumentToPostedDocument`, which re-links an existing incoming E-Document from the Purchase Header to the posted purchase invoice/credit memo. + +Before posting, separate `OnBefore*` subscribers run `RunEDocumentCheck` on release and post events to validate that required E-Document fields are present. This means validation failures surface before the document is committed, not after. + +Page extensions follow a repeating pattern -- see `EDocPostedSalesInv.PageExt.al` as representative. Each posted document page gets an "E-Document" action group with Open (enabled when E-Document exists for the record) and Create (enabled when it does not). The `OnAfterGetRecord` trigger checks `EDocument.SetRange("Document Record ID", Rec.RecordId())` to set visibility. Purchase pages like `EDocPurchaseOrder.PageExt.al` also surface inbound E-Document factboxes and order matching actions. + +Role center page extensions (`RoleCenter/` subfolder) embed the `E-Document Activities` page part, which shows cue tiles for outgoing/incoming documents split by status (Processed, In Progress, Error) plus counts for linked purchase orders and waiting purchase E-Invoices. 
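The posting hook described above can be sketched as a plain event subscriber. This is an illustrative stub (codeunit name/ID invented), relying on AL's rule that subscribers may declare only the event parameters they use, matched by name:

```al
// Hypothetical subscriber sketch -- name and ID are illustrative.
codeunit 50103 "Demo Posting Hook"
{
    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Sales-Post", 'OnAfterPostSalesDoc', '', false, false)]
    local procedure HandlePostedSalesDoc(var SalesHeader: Record "Sales Header")
    begin
        // E-Document Core's own subscriber runs at this point: it checks the
        // customer's Document Sending Profile and, if it is set to
        // "Extended E-Document Service Flow", creates the E-Document record.
    end;
}
```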
+ +## Things to know + +- The subscriber codeunit lives outside this folder, in `Processing/EDocumentSubscribers.Codeunit.al`, but its logic is tightly coupled to the extensions here +- Outgoing E-Documents are only created when the Document Sending Profile has `"Electronic Document" = "Extended E-Document Service Flow"` -- the `EDocSendProfileElecDoc.EnumExt.al` adds this value to the standard BC enum +- The `E-Document Link` field on Purchase Header (`EDocPurchaseHeader.TableExt.al`) is a Guid matching `E-Document.SystemId`, not a record reference -- it links a purchase document to the incoming E-Document that spawned it +- `Receive E-Document To` on Vendor/Vendor Template (`EDocumentVendor.TableExt.al`) controls whether incoming docs become Purchase Orders or Purchase Invoices -- defaults to Purchase Order, only those two values are allowed +- Sending extensions in the `Sending/` subfolder add "E-Document" and "PDF & E-Document" email attachment types, and wire up the Electronic Service Flow (workflow code) on Document Sending Profile +- The `E-Doc. Attachment` table extension tags Document Attachment records with an E-Document Entry No., enabling attachment movement when `Document Record ID` changes (e.g. when a draft purchase becomes a posted invoice) +- Deleting a Purchase Header linked to an E-Document triggers a confirmation dialog and resets the E-Document back to "Draft Ready" status via `E-Doc. Import` diff --git a/src/Apps/W1/EDocument/App/src/Format/docs/CLAUDE.md b/src/Apps/W1/EDocument/App/src/Format/docs/CLAUDE.md new file mode 100644 index 0000000000..96daf977b6 --- /dev/null +++ b/src/Apps/W1/EDocument/App/src/Format/docs/CLAUDE.md @@ -0,0 +1,21 @@ +# Format + +PEPPOL BIS 3.0 format implementation -- the built-in `IEDocument` interface implementation that ships with the E-Document Core app. This is the reference implementation that connector and format developers can study to understand how to build their own. 
+ +## How it works + +`EDocPEPPOLBIS30.Codeunit.al` implements the `IEDocument` interface. On export (`Create`), it dispatches by document type to XMLport-based generators for invoices, credit memos, and financial results (reminders, finance charge memos), or to codeunit-based generators for sales shipments and transfer shipments. It also fires `OnAfterCreatePEPPOLXMLDocument` so extensions can modify the generated XML. On import (`GetBasicInfoFromReceivedDocument` / `GetCompleteInfoFromReceivedDocument`), it delegates to `EDocImportPEPPOLBIS30.Codeunit.al`. + +`EDocImportPEPPOLBIS30.Codeunit.al` parses PEPPOL XML using `XML Buffer` (not XmlDocument directly). `ParseBasicInfo` detects the document type from the root element name (Invoice vs CreditNote), extracts header fields (vendor, dates, amounts, currency), and resolves the vendor through a multi-step chain: GLN, VAT number, service participant, then name+address. `ParseCompleteInfo` builds temporary Purchase Header/Line records by walking the XML buffer path-by-path. It also extracts embedded Base64 document attachments. + +`EDocPEPPOLValidation.Codeunit.al` validates Reminders and Finance Charge Memos for PEPPOL compliance (required fields, currency code length, country ISO code length, company/customer GLN or VAT). Sales invoice and credit memo validation reuses the standard `PEPPOL Validation` codeunit. `EDocShipmentExportToXml.Codeunit.al` and `EDocTransferShptToXML.Codeunit.al` generate custom XML for shipment document types that are not covered by the standard PEPPOL XMLports. + +## Things to know + +- When `Document Format` is set to "PEPPOL BIS 3.0" on a service, the `OnAfterValidateDocumentFormat` subscriber auto-populates supported document types (Sales Invoice, Sales Credit Memo, Service Invoice, Service Credit Memo) if none exist. +- The import parser determines Invoice vs Credit Memo from the XML root element name, not from any content inside the document. 
+- Currency codes are compared against `LCY Code` from General Ledger Setup -- if the document currency matches LCY, the Currency Code field is left blank (BC convention). +- The `OnAfterParseInvoice` and `OnAfterParseCreditMemo` integration events fire for every XML buffer row, letting extensions handle custom PEPPOL paths. +- Shipment and Transfer Shipment exports build XML programmatically via `XML DOM Management` rather than using XMLports, because the standard PEPPOL XMLports do not cover these document types. +- The `Embed PDF in export` service flag is passed through to the XMLport generators, which embed a report-generated PDF as a Base64 attachment in the PEPPOL XML. +- `FinResultsPEPPOLBIS30.XmlPort.al` handles the Financial Results export (Issued Reminders and Issued Finance Charge Memos) as a separate XMLport. diff --git a/src/Apps/W1/EDocument/App/src/Helpers/docs/CLAUDE.md b/src/Apps/W1/EDocument/App/src/Helpers/docs/CLAUDE.md new file mode 100644 index 0000000000..56cb52e6f9 --- /dev/null +++ b/src/Apps/W1/EDocument/App/src/Helpers/docs/CLAUDE.md @@ -0,0 +1,24 @@ +# Helpers + +Utility codeunits used across the E-Document app for error handling, logging convenience, import data resolution, and JSON parsing. These are shared building blocks, not standalone features. + +## How it works + +`EDocumentHelper.Codeunit.al` provides general-purpose utilities: checking if a record is an electronic document (via Document Sending Profile), retrieving an E-Document's service from workflow step arguments, enabling HttpClient permissions for the app, and opening the correct draft page based on the import process version (V1 vs V2). + +`EDocumentErrorHelper.Codeunit.al` wraps BC's Error Message table with E-Document context. All methods call `ErrorMessage.SetContext(EDocument)` so errors are scoped to the specific document. It also logs every error to Feature Telemetry with the document's string representation and the error callstack. 
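+As an illustration, a connector could report a document-scoped error like this. This is a sketch: the codeunit name `"E-Document Error Helper"` and the exact `LogSimpleErrorMessage` parameter list are assumptions inferred from the descriptions here, not verified signatures.
+
+```al
+procedure ValidateResponse(var EDocument: Record "E-Document"; StatusCode: Integer)
+var
+    EDocumentErrorHelper: Codeunit "E-Document Error Helper";
+begin
+    // free-form error text; the helper scopes it to the E-Document and logs telemetry
+    if StatusCode <> 200 then
+        EDocumentErrorHelper.LogSimpleErrorMessage(EDocument, StrSubstNo('Service returned HTTP %1', StatusCode));
+end;
+```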
+ +`EDocumentImportHelper.Codeunit.al` is the most complex helper. It contains the vendor resolution chain: find by No., then by GLN, then by VAT Registration No., then by service participant, then by phone number, then by name+address (fuzzy matching with 95% nearness threshold). For line items, the resolution chain is: Item Reference (vendor-specific) then GTIN then G/L Account (via Text-to-Account Mapping then Purchases & Payables defaults). It also handles UoM resolution, line discount validation, invoice discount application, and total verification. + +`EDocumentLogHelper.Codeunit.al` is a thin public facade over the internal `E-Document Log` codeunit, exposing `InsertLog` and `InsertIntegrationLog` for use by connector extensions. + +`EDocumentJsonHelper.Codeunit.al` parses structured JSON responses (from Azure Document Intelligence), extracting header fields and line arrays from a specific `outputs/1/result` structure with typed accessors for text, date, number, and currency values. + +## Things to know + +- The vendor resolution in Import Helper uses fuzzy string matching (`RecordMatchMgt.CalculateStringNearness`) with a 95% threshold and normalizing factor of 100 -- near-misses in vendor name or address will not match. +- `FindVendorByBankAccount` prefers non-blocked vendors, falling back to payment-blocked, then all-blocked, rather than just returning the first match. +- `FindGLAccountForLine` tries Text-to-Account Mapping with the vendor number first, then without it, then falls back to Purchases & Payables Setup default accounts. Multiple mapping matches log an error rather than picking one. +- Error Helper's `LogSimpleErrorMessage` does not require a related record or field number -- use it for free-form error text. +- JSON Helper's structure (`outputs/1/result/fields` and `outputs/1/result/items`) is specific to Azure Document Intelligence CAPI responses. It uses TryFunction wrappers to silently handle null JSON values. 
+- Import Helper validates self-billing vendors: if a vendor has `Self-Billing Agreement = true`, incoming e-documents are blocked with an error. diff --git a/src/Apps/W1/EDocument/App/src/Integration/docs/CLAUDE.md b/src/Apps/W1/EDocument/App/src/Integration/docs/CLAUDE.md new file mode 100644 index 0000000000..29ba607eff --- /dev/null +++ b/src/Apps/W1/EDocument/App/src/Integration/docs/CLAUDE.md @@ -0,0 +1,22 @@ +# Integration + +Integration defines how e-documents are transmitted to and received from external services. It provides the V2 interface contracts (send, receive, actions), context objects that encapsulate HTTP and status state, and runner codeunits that orchestrate the calls. This module deliberately contains no business logic about document content -- it only cares about transport and lifecycle. + +## How it works + +The module is organized around three operations: **Send**, **Receive**, and **Action**. Each has an interface, a context object, and a runner codeunit. + +For sending, `SendRunner` dispatches to the service's `IDocumentSender.Send()` implementation, passing a `SendContext` that carries the document blob, HTTP message state, and an integration action status. After the call, the runner checks if the sender also implements `IDocumentResponseHandler` -- if so, the send is async and a background job polls `GetResponse()` via `GetResponseRunner` until the service confirms receipt. + +For receiving, `ReceiveDocuments` calls `IDocumentReceiver.ReceiveDocuments()` which populates a `Temp Blob List` with document metadata. Then `DownloadDocument` is called per document to fetch the actual content into the `ReceiveContext`. If the receiver also implements `IReceivedDocumentMarker`, the framework calls `MarkFetched` to tell the external service the document has been downloaded, preventing duplicates. + +Actions (`EDocumentActionRunner`) handle post-send lifecycle events like approval checks and cancellation requests. 
The `ISentDocumentActions` interface provides `GetApprovalStatus` and `GetCancellationStatus`. For custom actions, `IDocumentAction.InvokeAction` is the generic entry point. All action calls receive an `ActionContext` with HTTP state and a status object that the implementation sets to control whether the e-document status should be updated. + +## Things to know + +- Async sending is determined by interface implementation, not configuration -- if your `IDocumentSender` implementation also implements `IDocumentResponseHandler`, the framework treats the send as async automatically. +- Context objects (`SendContext`, `ReceiveContext`, `ActionContext`) all expose `.Http()` for HTTP request/response state and `.Status()` for the integration action status. If you populate the HTTP objects, request/response content is automatically logged to communication logs. +- The V1 interface (`E-Document Integration`, guarded by `#if not CLEAN26`) combined send, receive, batch, response, approval, and cancellation into a single interface with raw `HttpRequestMessage`/`HttpResponseMessage` parameters. V2 splits this into focused interfaces with context objects. V1 is deprecated in 26.0 and will be removed at CLEAN26. +- `SendRunner` and `GetResponseRunner` still contain V1 fallback paths that extract HTTP messages from legacy interface calls and inject them back into the context objects for unified logging. +- `IConsentManager` is called before service operations to obtain privacy consent. The implementation decides how to prompt the user and stores the consent state. +- Batch sending in V2 is handled by setting filters on the EDocument record passed to `IDocumentSender.Send()` -- the record contains multiple documents. This replaces the V1 `SendBatch` method. 
diff --git a/src/Apps/W1/EDocument/App/src/Integration/docs/extensibility.md b/src/Apps/W1/EDocument/App/src/Integration/docs/extensibility.md new file mode 100644 index 0000000000..a6ed23abfa --- /dev/null +++ b/src/Apps/W1/EDocument/App/src/Integration/docs/extensibility.md @@ -0,0 +1,108 @@ +# Integration extensibility + +The Integration module exposes 7 interfaces (in `Interfaces/`) that define how connector extensions communicate with external e-document services. All are in the `Microsoft.eServices.EDocument.Integration.Interfaces` namespace. Implementations are registered via the `Service Integration` enum on the E-Document Service record. + +## How to build a send connector + +Implement **IDocumentSender** to send e-documents to an external service. The framework calls `Send` with the E-Document, service configuration, and a `SendContext` that carries the exported blob and HTTP state. + +```al +interface IDocumentSender +{ + procedure Send(var EDocument: Record "E-Document"; var EDocumentService: Record "E-Document Service"; SendContext: Codeunit SendContext); +} +``` + +The `SendContext` provides: + +- `GetTempBlob()` / `SetTempBlob()` -- the exported document content +- `Http()` -- returns `HttpMessageState` where you set your `HttpRequestMessage` and `HttpResponseMessage`. If populated, the framework automatically logs them to communication logs. +- `Status()` -- returns `IntegrationActionStatus` for setting the resulting service status + +For batch sending, the framework sets filters on the `EDocument` record so it contains multiple records. Your implementation iterates them. + +For **synchronous** sending, just implement `IDocumentSender`. The framework sets the status to Sent after a successful call. + +For **asynchronous** sending, implement `IDocumentSender` **and** `IDocumentResponseHandler` on the same codeunit. 
The framework detects this via an `is` check (`IDocumentSender is IDocumentResponseHandler`) and automatically queues a background job to poll for the response. + +## How to handle async responses + +Implement **IDocumentResponseHandler** on the same codeunit as your `IDocumentSender`. The framework calls `GetResponse` on a recurring schedule until it returns `true` or an error is logged. + +```al +interface IDocumentResponseHandler +{ + procedure GetResponse(var EDocument: Record "E-Document"; var EDocumentService: Record "E-Document Service"; SendContext: Codeunit SendContext): Boolean; +} +``` + +Return `true` when the service confirms the document was received -- the status moves to Sent. Return `false` when the service hasn't finished processing -- the status stays at Pending Response and the job retries. If a runtime error occurs or you log an error via the error helper, the status moves to Sending Error and polling stops. + +## How to build a receive connector + +Implement **IDocumentReceiver** to fetch documents from an external service. Receiving is a two-phase process: + +```al +interface IDocumentReceiver +{ + procedure ReceiveDocuments(var EDocumentService: Record "E-Document Service"; DocumentsMetadata: Codeunit "Temp Blob List"; ReceiveContext: Codeunit ReceiveContext); + procedure DownloadDocument(var EDocument: Record "E-Document"; var EDocumentService: Record "E-Document Service"; DocumentMetadata: Codeunit "Temp Blob"; ReceiveContext: Codeunit ReceiveContext); +} +``` + +**Phase 1 -- `ReceiveDocuments`**: Query the service for available documents and add one `Temp Blob` per document to the `DocumentsMetadata` list. Each blob typically contains a document ID or metadata JSON. The count of blobs determines how many E-Documents will be created. + +**Phase 2 -- `DownloadDocument`**: Called once per document. Read the metadata blob to get the document ID, then fetch the actual content (XML, PDF, etc.) and write it into `ReceiveContext.GetTempBlob()`. 
Set the file format via `ReceiveContext.SetFileFormat()` and the name via `ReceiveContext.SetName()`. + +The `ReceiveContext` provides the same `Http()` and `Status()` accessors as `SendContext`, plus file format and name setters. + +Optionally implement **IReceivedDocumentMarker** to tell the service a document has been successfully fetched (prevents duplicate downloads): + +```al +interface IReceivedDocumentMarker +{ + procedure MarkFetched(var EDocument: Record "E-Document"; var EDocumentService: Record "E-Document Service"; var DocumentBlob: Codeunit "Temp Blob"; ReceiveContext: Codeunit ReceiveContext); +} +``` + +This is called after the document is successfully created in BC. If `MarkFetched` errors, the document creation is rolled back. + +## How to add custom actions + +The action framework handles post-send lifecycle events. There are two levels: + +**ISentDocumentActions** -- provides the two built-in action types: approval and cancellation. Implement this for services that support checking whether a sent document was approved or requesting cancellation. + +```al +interface ISentDocumentActions +{ + procedure GetApprovalStatus(var EDocument: Record "E-Document"; var EDocumentService: Record "E-Document Service"; ActionContext: Codeunit ActionContext): Boolean; + procedure GetCancellationStatus(var EDocument: Record "E-Document"; var EDocumentService: Record "E-Document Service"; ActionContext: Codeunit ActionContext): Boolean; +} +``` + +Both return `true` if the action should update the E-Document service status, `false` otherwise. The built-in implementations (`SentDocumentApproval`, `SentDocumentCancellation`) delegate to this interface via the `Integration Action Type` enum. + +**IDocumentAction** -- the generic action interface for custom action types beyond approval/cancellation. Each action type is registered in the `Integration Action Type` enum and resolved to an `IDocumentAction` implementation. 
+ +```al +interface IDocumentAction +{ + procedure InvokeAction(var EDocument: Record "E-Document"; var EDocumentService: Record "E-Document Service"; ActionContext: Codeunit ActionContext): Boolean; +} +``` + +The `ActionContext` provides `Http()` for HTTP state and `Status()` where your implementation sets the target service status via `SetStatus()`. The `EDocumentActionRunner` calls `InvokeAction` and uses the boolean return to decide whether to persist the status change. + +## How to manage consent + +Implement **IConsentManager** to customize the privacy consent flow shown before service operations. + +```al +interface IConsentManager +{ + procedure ObtainPrivacyConsent(): Boolean; +} +``` + +Return `true` if the user granted consent, `false` to block the operation. The default implementation uses the standard BC `Customer Consent Mgt.` codeunit, but you can replace it with a custom consent message or external consent flow. diff --git a/src/Apps/W1/EDocument/App/src/Logging/docs/CLAUDE.md b/src/Apps/W1/EDocument/App/src/Logging/docs/CLAUDE.md new file mode 100644 index 0000000000..201bb39814 --- /dev/null +++ b/src/Apps/W1/EDocument/App/src/Logging/docs/CLAUDE.md @@ -0,0 +1,21 @@ +# Logging + +Three-tier logging infrastructure for E-Documents: document state transitions, HTTP integration traces, and binary data storage. This module owns the audit trail and is used by nearly every other module in the app. + +## How it works + +`E-Document Log` (`EDocumentLog.Table.al`) records one entry per state transition per service. Each entry captures the service code, status enum, document format, integration version, and an optional reference to `E-Doc. Data Storage` for the document payload. The `E-Document Log` codeunit (`EDocumentLog.Codeunit.al`) provides the insert API and manages the relationship between logs, data storage, and integration logs. 
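+Because entries are insert-only, the log doubles as a per-service audit trail. A hedged sketch of walking it -- the field names `"E-Doc. Entry No"` and `"Entry No"` are assumptions, not verified against the table definition:
+
+```al
+procedure CountStateTransitions(EDocument: Record "E-Document"): Integer
+var
+    EDocumentLog: Record "E-Document Log";
+begin
+    // one entry per state transition per service, never updated in place
+    EDocumentLog.SetRange("E-Doc. Entry No", EDocument."Entry No");
+    exit(EDocumentLog.Count());
+end;
+```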
+ +`E-Document Integration Log` (`EDocumentIntegrationLog.Table.al`) stores raw HTTP request/response data as BLOB fields, plus the URL, method, and response status code. One E-Document can have many integration log entries (one per HTTP call). The log codeunit's `InsertIntegrationLog` skips logging when there is no integration configured or when the request URI is empty. + +`E-Doc. Data Storage` (`EDocDataStorage.Table.al`) holds binary payloads (XML, JSON, PDF) in a BLOB field with a cached `Data Storage Size` integer and a `File Format` enum implementing `IEDocFileFormat`. Multiple log entries can reference the same Data Storage record -- this is intentional for batch scenarios where one exported blob covers multiple documents. The OnDelete trigger on E-Document Log cleans up the referenced Data Storage only if it exists. + +## Things to know + +- Document Log entries are immutable by design -- there is no update API, only insert. Each state change creates a new entry. +- The `E-Doc. File Format` enum (`EDocFileFormat.Enum.al`) has four values: Unspecified, PDF, XML, JSON. Each implements `IEDocFileFormat` for format-specific handling during import processing. +- `GetDocumentBlobFromLog` filters by service, integration version, format, and processing status to find the correct log entry. If it fails, it logs telemetry rather than throwing an error. +- `ModifyDataStorageEntryNo` errors if the log entry already has a Data Storage reference -- this prevents accidental overwrites of the blob link. +- Integration log entries store request and response bodies as BLOBs via `TempBlob.ToRecordRef`, not direct BLOB field writes. +- The `InsertDataStorage` method returns 0 (not an error) if the TempBlob has no value, so callers must handle the zero-entry-no case. +- Mapping logs are also written through this codeunit's `InsertMappingLog` method, tying field-level change tracking to specific E-Document Log entries. 
diff --git a/src/Apps/W1/EDocument/App/src/Mapping/docs/CLAUDE.md b/src/Apps/W1/EDocument/App/src/Mapping/docs/CLAUDE.md new file mode 100644 index 0000000000..65e7114ee6 --- /dev/null +++ b/src/Apps/W1/EDocument/App/src/Mapping/docs/CLAUDE.md @@ -0,0 +1,20 @@ +# Mapping + +Field-level transformation rules applied to document records during import or export. This module lets administrators define find/replace pairs and transformation rules per service without writing code, keeping country-specific or partner-specific value mappings as configuration data. + +## How it works + +`EDocMapping.Table.al` stores rules keyed by service code. Each rule targets a Table ID + Field ID combination and specifies either a `Transformation Rule` reference (from BC's standard Transformation Rule table) or a simple `Find Value` / `Replace Value` pair. The `For Import` flag separates import-time mappings from export-time ones. + +`EDocMapping.Codeunit.al` applies the rules via RecordRef reflection. It processes mappings in three passes: first specific-table-and-field rules, then any-field-on-specific-table rules (Field ID = 0), then fully generic rules (Table ID = 0, Field ID = 0). Only Text and Code fields are eligible -- other field types are silently skipped by `ValidateFieldRef`. When a rule fires, the `Used` flag is set on the mapping record and a temporary change record is collected for the audit trail. + +`EDocMappingLog.Table.al` stores the audit trail. Each entry references an E-Document Log entry and records the table, field, original value, and replacement value. The log is written by `E-Document Log.InsertMappingLog`. + +## Things to know + +- Mappings operate on RecordRef, so a typo in the Table ID or Field ID will not cause a compile error -- it will either silently skip the field or fail at runtime. +- The target record must be temporary (`NonTempRecordErr` guard). Mapping always writes to a temp copy, never directly to the database. 
+- The `Used` flag is reset to false before each preview or export run via `ModifyAll(Used, false)`, so it reflects the most recent execution only. +- Generic rules (Table ID = 0) apply to every text/code field on every record -- use them carefully to avoid unintended replacements. +- The preview feature (`PreviewMapping`) lets users select a service and see what changes would be applied to a real document before committing, using the `EDocChangesPreview` page. +- Mapping log entries are linked to E-Document Log entries, not directly to E-Documents, enabling per-state-transition audit. diff --git a/src/Apps/W1/EDocument/App/src/Processing/docs/CLAUDE.md b/src/Apps/W1/EDocument/App/src/Processing/docs/CLAUDE.md new file mode 100644 index 0000000000..da4f4fd873 --- /dev/null +++ b/src/Apps/W1/EDocument/App/src/Processing/docs/CLAUDE.md @@ -0,0 +1,21 @@ +# Processing + +Processing is the largest module in the E-Document Core app. It owns both the export pipeline (turning BC documents into e-document blobs) and the import pipeline (turning received blobs into BC purchase documents). The import side is significantly more complex because it must handle unstructured formats like PDF, resolve vendors and items from external data, and optionally use AI to fill in what deterministic matching cannot. + +## How it works + +**Export** starts in `EDocExport.Codeunit.al`. When a document is posted and a Document Sending Profile triggers the e-document workflow, `CreateEDocument` populates the E-Document record from the source document header, then `ExportEDocument` applies field mappings and delegates to the document format interface to produce the output blob. An `IExportEligibilityEvaluator` gates whether a given service should receive the document at all. + +**Import V2** is a 5-state machine defined in `ImportEDocumentProcess.Codeunit.al`. Each state transition runs exactly one step. 
The pipeline is: Unprocessed -> (Structure received data) -> Readable -> (Read into Draft) -> Ready for draft -> (Prepare draft) -> Draft ready -> (Finish draft) -> Processed. Every step can be undone, which resets the E-Document back to the previous state and cleans up intermediate data. The V1 legacy path collapses all steps into a single "Finish draft" call. + +**AI matching** runs during "Prepare draft". After deterministic matching (item references, text-to-account mappings), `PreparePurchaseEDocDraft` invokes three sequential Copilot steps: historical matching, GL account matching, and deferral matching. Each is an `IEDocAISystem` + `AOAI Function` implementation that builds a JSON user message, calls GPT-4 via the AOAI platform with function-calling, and applies the returned matches to draft lines. + +## Things to know + +- The import state machine is strict about ordering -- `GetNextStep` and `GetPreviousStep` use an integer index on the status enum, so adding states requires updating those mappings. +- Draft purchase tables use field ranges: [2-100] for external data (vendor name, product code) and [101-200] for BC-resolved data (`[BC] Vendor No.`, `[BC] Purchase Type No.`). The `[BC]` prefix is the naming convention. +- Vendor resolution in `EDocProviders.Codeunit.al` tries GLN/VAT ID lookup, then Service Participant, then name+address fuzzy match. If all fail, the draft proceeds but the user must assign the vendor manually. +- AI matching is sequential and subtractive: historical matching runs first on unmatched lines, then GL account matching on still-unmatched lines, then deferral matching on lines that have an account but no deferral code. +- PO matching (`EDocPOMatching.Codeunit.al`) tracks matches in a separate link table (`E-Doc. Purchase Line PO Match`) using SystemId references, not primary keys -- this survives record renumbers. 
+- Historical matching loads up to 5000 posted purchase invoice lines from the last year and uses similar-description detection to find candidates before sending them to the LLM for final selection. +- The export pipeline checks `IExportEligibilityEvaluator` before producing the blob -- this is the hook for suppressing export of specific documents per service. diff --git a/src/Apps/W1/EDocument/App/src/Processing/docs/business-logic.md b/src/Apps/W1/EDocument/App/src/Processing/docs/business-logic.md new file mode 100644 index 0000000000..e2674cbef1 --- /dev/null +++ b/src/Apps/W1/EDocument/App/src/Processing/docs/business-logic.md @@ -0,0 +1,105 @@ +# Processing business logic + +## Import V2 state machine + +The import pipeline in `ImportEDocumentProcess.Codeunit.al` progresses an E-Document through five states. Each transition executes exactly one step, and every step supports an undo operation that reverts to the previous state. + +```mermaid +flowchart LR + A[Unprocessed] -->|Structure received data| B[Readable] + B -->|Read into Draft| C[Ready for draft] + C -->|Prepare draft| D[Draft ready] + D -->|Finish draft| E[Processed] + E -->|Undo finish| D + D -->|Undo prepare| C + C -.->|Undo read| B + B -->|Undo structure| A +``` + +**Structure received data** takes the unstructured blob (PDF, XML, JSON) stored in `E-Doc. Data Storage` and converts it to structured data. The step first resolves which `IStructureReceivedEDocument` implementation to use -- if the E-Document has none set, it falls back to the file format's `PreferredStructureDataImplementation()`. For PDFs this typically routes to the ADI (Azure Document Intelligence) handler; for XML it returns an "Already Structured" marker. The original unstructured blob is saved as an attachment, and the structured output is stored in a new data storage entry. The `IStructuredDataType` returned by the converter also specifies which `IStructuredFormatReader` to use in the next step. 
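+The fallback resolution described above can be sketched roughly as follows; the field name `"Structure Data Impl."` and the `Unspecified` enum value are illustrative assumptions, while `PreferredStructureDataImplementation()` comes from the `IEDocFileFormat` interface:
+
+```al
+procedure ResolveStructureImplementation(EDocument: Record "E-Document"; FileFormat: Interface IEDocFileFormat): Enum "Structure Received E-Doc."
+begin
+    // prefer the implementation already chosen for this E-Document, if any
+    if EDocument."Structure Data Impl." <> EDocument."Structure Data Impl."::Unspecified then
+        exit(EDocument."Structure Data Impl.");
+    // otherwise fall back to the file format's preferred implementation
+    exit(FileFormat.PreferredStructureDataImplementation());
+end;
+```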
+ +**Read into Draft** invokes the `IStructuredFormatReader` to parse the structured data and populate the draft purchase tables (`E-Document Purchase Header` and `E-Document Purchase Line`). The PEPPOL handler parses XML; the ADI handler parses the ADI JSON schema; the MLLM handler processes LLM-structured JSON. The reader returns an enum indicating which `IProcessStructuredData` implementation should run next. + +**Prepare draft** resolves external data into BC entities. `PreparePurchaseEDocDraft.Codeunit.al` orchestrates this: it finds the vendor via `IVendorProvider`, looks up a matching purchase order via `IPurchaseOrderProvider`, applies historical header mappings, then iterates each line to resolve UoM via `IUnitOfMeasureProvider` and account/item via `IPurchaseLineProvider`. After deterministic matching, AI matching runs (see below). If the vendor cannot be resolved, the draft proceeds anyway -- the user assigns it manually in the draft page. + +**Finish draft** calls `IEDocumentFinishDraft.ApplyDraftToBC()`, which creates the actual BC purchase document (invoice or credit memo) and sets the `Document Record ID` on the E-Document. For PO-matched lines, receipt matches are transferred to the invoice. Undo deletes the created document via `RevertDraftActions`. + +### V1 compatibility + +If the service's import process version is "Version 1.0", the state machine is bypassed entirely. Only the "Finish draft" step is recognized, and it delegates to `EDocImport.V1_ProcessEDocument` which uses the old monolithic import path. + +## Draft processing -- vendor, item, and additional field resolution + +Vendor resolution follows a strict fallback chain in `EDocProviders.GetVendor`: + +1. GLN / VAT ID lookup via `E-Document Import Helper` +2. Service Participant table -- first filtered to the specific service, then without service filter +3. Name + address fuzzy match + +If no vendor is found, a warning is logged but processing continues. 
The user sees the unresolved draft and can assign the vendor manually. + +Line resolution in `EDocProviders.GetPurchaseLine` tries: + +1. Item Reference lookup (vendor-specific, filtered by product code, UoM, and date range -- tries exact UoM, then blank UoM, then any) +2. Text-to-Account Mapping (matches description against vendor-specific mappings) + +Lines that are not resolved by either mechanism are left for AI matching or manual assignment. + +Header-level historical matching (`EDocPurchaseHistMapping`) checks whether a similar invoice from the same vendor was previously processed and applies any header-level overrides (like default dimension codes) from that history. + +## PO matching + +PO matching links incoming invoice lines to existing purchase order lines. The matching state is tracked in the `E-Doc. Purchase Line PO Match` table using SystemId references to purchase lines, e-document purchase lines, and receipt lines. + +```mermaid +flowchart TD + A[E-Document Purchase Line] -->|MatchPOLinesToEDocumentLine| B[Purchase Order Line] + B -->|MatchReceiptLinesToEDocumentLine| C[Receipt Line] + A -->|Finish draft / TransferPOMatchesFromEDocumentToInvoice| D[Purchase Invoice Line] + D -->|Receipt No. / Receipt Line No.| C +``` + +The flow works as follows: + +1. Available PO lines are loaded by filtering on the same vendor (pay-to) and optionally filtered to the order number specified in the e-document header. If the order number filter yields no results, it falls back to all orders for that vendor. +2. The user (or automated process) selects PO lines to match. Validation ensures the PO line belongs to the same vendor, is not already matched to another e-document line, and all matched lines share the same type, number, and UoM. +3. Optionally, receipt lines are matched. `SuggestReceiptsForMatchedOrderLines` auto-matches the first receipt line that covers the full invoice quantity. +4. 
Warnings are computed by comparing invoice quantity (I) against remaining-to-invoice (R = Ordered - Previously Invoiced) and invoiceable quantity (J = Received - Previously Invoiced). The warnings are: I > J (not yet received), I > R (exceeds remaining), and I = R but I < J (over-receipt). +5. At finish-draft time, `TransferPOMatchesFromEDocumentToInvoice` copies the matched receipt references onto the created purchase invoice lines. + +PO matching behavior is configurable per vendor via `E-Doc. PO Matching Setup`. The receipt configuration controls whether receipt is required before matching ("Always ask", "Always receive at posting", "Never receive at posting") and can be overridden per vendor. + +## AI/Copilot matching + +AI matching runs during the "Prepare draft" step, after all deterministic matching has been applied. It uses the `E-Doc. AI Tool Processor` codeunit, which wraps Azure OpenAI with function-calling. Each AI tool implements both `AOAI Function` (the tool definition and execute handler) and `IEDocAISystem` (system prompt, tool list, feature name). + +The three tools run sequentially and subtractively -- each only sees lines that previous steps could not resolve: + +**1. Historical matching** (`EDocHistoricalMatching.Codeunit.al`, function name: `match_lines_historical`) loads up to 5000 posted purchase invoice lines from the last year (scoped to the same vendor in the control group, all vendors in experiment variants). It collects potential matches by product code (exact), description (exact), and similar descriptions (fuzzy). These candidates are sent as JSON alongside the unmatched e-document lines. The LLM selects the best match per line and returns the purchase type, item number, deferral code, dimension codes, and a reasoning string. Confidence scoring accounts for same-vendor vs cross-vendor matches (20% penalty for different vendors). + +**2.
GL account matching** (`EDocGLAccountMatching.Codeunit.al`, function name: `match_gl_account`) sends all direct-posting GL accounts (with their full category hierarchy) alongside the still-unmatched lines. The LLM assigns a GL account to each line with a reasoning explanation and candidate count (used for confidence scoring -- single candidate = Medium, multiple = Low). + +**3. Deferral matching** (`EDocDeferralMatching.Codeunit.al`, function name: `match_lines_deferral`) sends all deferral templates alongside lines that have an account but no deferral code. The LLM assigns deferral templates where appropriate. + +All three tools use GPT-4.1 (latest) at temperature 0 with a 125K input token limit and 32K output token limit. System prompts are loaded from embedded resource files and optionally augmented with security prompts from Azure Key Vault. Results are validated (TryFunction pattern) before being applied, and every match is logged via the Activity Log and telemetry systems. + +## Export pipeline + +Export starts in `EDocExport.CreateEDocument`, triggered by the Document Sending Profile workflow: + +```mermaid +flowchart TD + A[Document posted] -->|Document Sending Profile triggers| B[CreateEDocument] + B -->|For each supported service| C[ExportEDocument] + C --> D[Apply field mappings] + D --> E[Call document format interface] + E --> F[Log blob + status] + F -->|Batch?| G[Skip - handled separately] + F -->|Not batch| H[Start Created Flow] +``` + +`CreateEDocument` populates the E-Document record from the source document header using RecordRef field access -- it handles Sales, Purchase, Service, Finance Charge, Reminder, Shipment, and Transfer document types. For each service in the workflow, it checks `IExportEligibilityEvaluator.ShouldExport()` and `IsDocumentTypeSupported()` before proceeding. + +`ExportEDocument` applies the E-Doc. 
Mapping rules (which perform field-level transformations on the source header and lines), then delegates to `EDocumentCreate.SetSource` which invokes the document format interface to produce the output blob. The blob is logged, and the service status is set to Exported or Export Error. + +Batch export follows the same pattern but collects all documents first, applies mappings, then calls `CreateEDocumentBatch` to produce a single blob for all documents. diff --git a/src/Apps/W1/EDocument/App/src/Processing/docs/extensibility.md b/src/Apps/W1/EDocument/App/src/Processing/docs/extensibility.md new file mode 100644 index 0000000000..68438d0f4d --- /dev/null +++ b/src/Apps/W1/EDocument/App/src/Processing/docs/extensibility.md @@ -0,0 +1,136 @@ +# Processing extensibility + +The Processing module exposes 18 interfaces (in `Interfaces/`) organized around specific developer scenarios. All are in the `Microsoft.eServices.EDocument.Processing.Interfaces` namespace unless noted. + +## How to add a new file format handler + +These interfaces let you support a new file type (e.g., EDI, CSV) in the import pipeline. + +**IEDocFileFormat** -- describes the file format itself. Implement this to define the extension, preview behavior, and which structuring implementation should be used by default. + +```al +procedure FileExtension(): Text; +procedure PreviewContent(FileName: Text; TempBlob: Codeunit "Temp Blob"); +procedure PreferredStructureDataImplementation(): Enum "Structure Received E-Doc."; +``` + +**IBlobType** (obsolete, `#if not CLEAN26`) -- the predecessor to IEDocFileFormat. Checked whether a blob was structured, had a converter, and returned the converter. Replaced by the IEDocFileFormat + IStructureReceivedEDocument split. + +**IBlobToStructuredDataConverter** (obsolete, `#if not CLEAN26`) -- old conversion interface. Replaced by IStructureReceivedEDocument. 
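As a sketch, a handler for a new file type plugs into the same extension point. The codeunit id/name and the returned enum value below are assumptions; only the three-procedure `IEDocFileFormat` shape comes from the interface:

```al
// Hypothetical sketch -- object id/name and the enum value are illustrative.
codeunit 50100 "CSV File Format" implements IEDocFileFormat
{
    procedure FileExtension(): Text
    begin
        exit('csv');
    end;

    procedure PreviewContent(FileName: Text; TempBlob: Codeunit "Temp Blob")
    begin
        // A real implementation might open a preview page or offer the
        // raw file for download; left empty in this sketch.
    end;

    procedure PreferredStructureDataImplementation(): Enum "Structure Received E-Doc."
    begin
        // Return whichever "Structure Received E-Doc." value should
        // structure this format by default (value name assumed).
        exit("Structure Received E-Doc."::"Already Structured");
    end;
}
```

The returned enum value is what ties the new file format to its default `IStructureReceivedEDocument` implementation.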
+ +## How to customize document structuring + +These interfaces control how raw received data becomes structured data that the pipeline can parse. + +**IStructureReceivedEDocument** -- the main structuring hook. Given the raw `E-Doc. Data Storage` record, produce an `IStructuredDataType` that holds the structured output. The built-in implementations are the PEPPOL handler (XML pass-through), the ADI handler (PDF-to-JSON via Azure Document Intelligence), and the MLLM handler (LLM-based structuring). + +```al +procedure StructureReceivedEDocument(EDocumentDataStorage: Record "E-Doc. Data Storage"): Interface IStructuredDataType; +``` + +**IStructuredDataType** -- returned by the structuring step. Encapsulates the file format, content text, and which `IStructuredFormatReader` implementation should parse it. This is how a single file format (e.g., JSON) can have multiple schemas (ADI JSON vs MLLM JSON). + +```al +procedure GetFileFormat(): Enum "E-Doc. File Format"; +procedure GetContent(): Text; +procedure GetReadIntoDraftImpl(): Enum "E-Doc. Read into Draft"; +``` + +**IStructuredFormatReader** -- parses structured data into the draft purchase tables. The `ReadIntoDraft` method populates `E-Document Purchase Header` and `E-Document Purchase Line` records. Returns an enum specifying which `IProcessStructuredData` runs next. + +```al +procedure ReadIntoDraft(EDocument: Record "E-Document"; TempBlob: Codeunit "Temp Blob"): Enum "E-Doc. Process Draft"; +procedure View(EDocument: Record "E-Document"; TempBlob: Codeunit "Temp Blob"); +``` + +## How to customize draft preparation + +These interfaces control how external draft data is resolved into BC entities (vendor, items, accounts). + +**IPrepareDraft** -- a simpler alternative to `IProcessStructuredData` for scenarios that don't need the full vendor/line resolution pipeline. Just receives the E-Document and import parameters and returns the document type. 
+ +```al +procedure PrepareDraft(EDocument: Record "E-Document"; EDocImportParameters: Record "E-Doc. Import Parameters"): Enum "E-Document Type"; +``` + +**IProcessStructuredData** -- the full draft preparation interface used by `PreparePurchaseEDocDraft`. Resolves vendors, opens the draft page, and handles cleanup. The default implementation orchestrates vendor resolution, line matching, and AI invocation. + +```al +procedure PrepareDraft(EDocument: Record "E-Document"; EDocImportParameters: Record "E-Doc. Import Parameters"): Enum "E-Document Type"; +procedure GetVendor(EDocument: Record "E-Document"; Customizations: Enum "E-Doc. Proc. Customizations"): Record Vendor; +procedure OpenDraftPage(var EDocument: Record "E-Document"); +procedure CleanUpDraft(EDocument: Record "E-Document"); +``` + +## How to customize draft finalization + +These interfaces control how a fully prepared draft becomes an actual BC document. + +**IEDocumentFinishDraft** -- creates the BC document from the draft and supports reversal. `ApplyDraftToBC` returns the RecordId of the created document. `RevertDraftActions` deletes it. + +```al +procedure ApplyDraftToBC(EDocument: Record "E-Document"; EDocImportParameters: Record "E-Doc. Import Parameters"): RecordId; +procedure RevertDraftActions(EDocument: Record "E-Document"); +``` + +**IEDocumentCreatePurchaseInvoice** -- a narrower hook specifically for changing how purchase invoices are created from the draft. The default implementation creates a standard purchase invoice and copies lines from the draft. + +```al +procedure CreatePurchaseInvoice(EDocument: Record "E-Document"): Record "Purchase Header"; +``` + +## How to provide custom data for matching + +These provider interfaces supply data during the "Prepare draft" step. They are resolved from the `E-Doc. Proc. Customizations` enum, which means a single enum value implements all of them. The default implementation is `EDocProviders.Codeunit.al`. 
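Because one `E-Doc. Proc. Customizations` value binds the whole provider family, a custom provider set amounts to an enum extension plus a single codeunit. A hedged sketch (object ids/names are illustrative, and only two of the provider interfaces are shown):

```al
// Hypothetical sketch -- ids and names are illustrative.
enumextension 50101 "My Proc. Customizations" extends "E-Doc. Proc. Customizations"
{
    value(50101; "My Customization")
    {
        // One codeunit implements the provider family for this value.
        Implementation = IVendorProvider = "My EDoc Providers",
                         IUnitOfMeasureProvider = "My EDoc Providers";
    }
}

codeunit 50102 "My EDoc Providers" implements IVendorProvider, IUnitOfMeasureProvider
{
    procedure GetVendor(EDocument: Record "E-Document"): Record Vendor
    var
        Vendor: Record Vendor;
    begin
        // Custom resolution, e.g. by a proprietary registration number
        // carried in the e-document header.
        exit(Vendor);
    end;

    procedure GetUnitOfMeasure(EDocument: Record "E-Document"; EDocumentLineId: Integer; ExternalUnitOfMeasure: Text): Record "Unit of Measure"
    var
        UnitOfMeasure: Record "Unit of Measure";
    begin
        // Fall back to an empty record if the external code is unknown.
        if UnitOfMeasure.Get(CopyStr(ExternalUnitOfMeasure, 1, MaxStrLen(UnitOfMeasure.Code))) then;
        exit(UnitOfMeasure);
    end;
}
```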
+ +**IVendorProvider** -- resolves the vendor for an incoming e-document. + +```al +procedure GetVendor(EDocument: Record "E-Document"): Record Vendor; +``` + +**IItemProvider** -- resolves an item for a given e-document line, vendor, and unit of measure. + +```al +procedure GetItem(EDocument: Record "E-Document"; EDocumentLineId: Integer; Vendor: Record Vendor; UnitOfMeasure: Record "Unit of Measure"): Record Item; +``` + +**IUnitOfMeasureProvider** -- resolves the BC unit of measure from the external unit string. + +```al +procedure GetUnitOfMeasure(EDocument: Record "E-Document"; EDocumentLineId: Integer; ExternalUnitOfMeasure: Text): Record "Unit of Measure"; +``` + +**IPurchaseLineProvider** -- resolves purchase line fields (type, number, variant, item reference) for a draft line. Replaces the older `IPurchaseLineAccountProvider`. + +```al +procedure GetPurchaseLine(var EDocumentPurchaseLine: Record "E-Document Purchase Line"); +``` + +**IPurchaseLineAccountProvider** (obsolete, tag 27.0) -- the predecessor. Only returned account type and number, now replaced by `IPurchaseLineProvider` which can set all line fields. + +**IPurchaseOrderProvider** -- finds a matching purchase order for the incoming invoice based on the order number in the e-document. + +```al +procedure GetPurchaseOrder(EDocumentPurchaseHeader: Record "E-Document Purchase Header"): Record "Purchase Header"; +``` + +## How to customize export eligibility + +**IExportEligibilityEvaluator** -- called during export to determine whether a specific document should be exported via a given service. The default implementation always returns true. Override this to suppress exports based on customer, document fields, or external criteria. 
+ +```al +procedure ShouldExport(EDocumentService: Record "E-Document Service"; SourceDocumentHeader: RecordRef; DocumentType: Enum "E-Document Type"): Boolean; +``` + +## How to add AI capabilities + +**IEDocAISystem** (in namespace `Microsoft.eServices.EDocument.Processing.AI`) -- defines an AI processing scenario. Implement this alongside the `AOAI Function` interface to add a new AI tool to the E-Document matching pipeline. The `E-Doc. AI Tool Processor` uses `IEDocAISystem` to get the system prompt and tool list, then calls Azure OpenAI with function-calling. + +```al +procedure GetSystemPrompt(UserLanguage: Text): SecretText; +procedure GetTools(): List of [Interface "AOAI Function"]; +procedure GetFeatureName(): Text; +``` + +The built-in implementations (`EDocHistoricalMatching`, `EDocGLAccountMatching`, `EDocDeferralMatching`) each implement both `IEDocAISystem` and `AOAI Function` on the same codeunit -- the codeunit is both the AI system configuration and the function tool the LLM can call. diff --git a/src/Apps/W1/EDocument/App/src/Service/docs/CLAUDE.md b/src/Apps/W1/EDocument/App/src/Service/docs/CLAUDE.md new file mode 100644 index 0000000000..410a2bdf1f --- /dev/null +++ b/src/Apps/W1/EDocument/App/src/Service/docs/CLAUDE.md @@ -0,0 +1,20 @@ +# Service + +The E-Document Service (`EDocumentService.Table.al`) is the central configuration record that binds together a document format, an integration connector, batch settings, and import behavior for a specific e-document exchange channel. Each service defines how documents are exported, sent, received, and processed. Adjacent modules (Workflow, Logging, Format) all reference a service by its Code. + +## How it works + +A service record stores which format implementation to use (`Document Format` enum), which connector to use (`Service Integration V2` enum), and a rich set of import-processing flags (resolve UoM, lookup item reference, lookup GTIN, validate line discount, etc.). 
The `Import Process` field selects between V1 and V2 processing architectures -- V1 uses the legacy `IEDocument` interface flow while V2 uses the newer structured-data pipeline with draft documents. This choice fundamentally changes which codeunits handle inbound documents. + +`E-Doc. Service Supported Type` is a simple N:M junction table that links services to document types (Sales Invoice, Credit Memo, etc.). `E-Document Service Status` tracks per-service progress for each E-Document with a rich enum of ~24 states, compared to the 3-state status on the E-Document header itself. The `Service Participant` table (`Participant\ServiceParticipant.Table.al`) associates customers or vendors with a service using a service-specific identifier (e.g., a PEPPOL endpoint ID), enabling participant resolution during import. + +## Things to know + +- Toggling `Use Batch Processing` or `Auto Import` immediately creates or removes Job Queue entries via `EDocumentBackgroundJobs` -- this is not deferred. +- Changing `Service Integration V2` away from "No Integration" triggers a privacy consent dialog; if the user declines, the change is silently reverted. +- Deleting a service that is referenced by an active workflow raises an error. The OnDelete trigger also cleans up all supported types and background jobs. +- Batch mode (`EDocumentBatchMode.Enum.al`) is extensible -- Threshold and Recurrent are built-in, but custom modes can be added and handled via the `OnBatchSendWithCustomBatchMode` event. +- `GetPDFReaderService` auto-creates a hardcoded Azure Document Intelligence service record (`MSEOCADI`) for PDF import if it does not exist. +- The old `Service Integration` enum field (V1) is obsolete since 26.0 and removed in 29.0; the `#if CLEAN26` guards manage the transition. +- Import-related fields (start time, minutes between runs) all re-schedule the recurrent job on every validate, so editing these fields has an immediate side effect. 
+- The `Export Eligibility Evaluator` enum field allows services to plug in custom logic for deciding whether a posted document should be exported, without modifying the core posting flow. diff --git a/src/Apps/W1/EDocument/App/src/Workflow/docs/CLAUDE.md b/src/Apps/W1/EDocument/App/src/Workflow/docs/CLAUDE.md new file mode 100644 index 0000000000..07c242fb22 --- /dev/null +++ b/src/Apps/W1/EDocument/App/src/Workflow/docs/CLAUDE.md @@ -0,0 +1,20 @@ +# Workflow + +Integration with BC's Workflow engine to orchestrate E-Document export, send, and email actions. This module registers events, responses, and templates in the EDOC workflow category and handles the runtime execution of workflow steps. + +## How it works + +`EDocumentWorkFlowSetup.Codeunit.al` registers four events (Created, Status Changed, Imported, Exported) and five responses (Send, Import, Export, Email E-Document, Email PDF+E-Document) into the Workflow library via event subscribers. It also installs two workflow templates: single-service (EDOCTOS) and multi-service (EDOCTOM). The multi-service template chains response steps so that Service A's status change triggers Service B's send. + +`EDocumentWorkFlowProcessing.Codeunit.al` is the runtime engine. When a response step executes, it resolves the E-Document from the RecordRef (which can be either an E-Document or E-Document Service Status record), validates the workflow step instance matches the document's workflow code, retrieves the service from the step argument, and dispatches to `DoSend` or `DoBatchSend`. After each send completes, `HandleNextEvent` fires the Status Changed event to advance to the next workflow step. + +`EDocWorkflowStepArgument.TableExt.al` extends the standard Workflow Step Argument with an `E-Document Service` field, which is how each workflow response step knows which service to use. `EDocumentCreatedFlow.Codeunit.al` is a Job Queue handler that fires the EDocCreated event for a specific E-Document. 
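The runtime path through `EDocumentWorkFlowProcessing` described above can be sketched as:

```mermaid
flowchart TD
    A[Response step executes] --> B[Resolve E-Document from RecordRef]
    B --> C[ValidateFlowStep: workflow code match]
    C --> D[Read E-Document Service from step argument]
    D -->|Single| E[DoSend]
    D -->|Batch| F[DoBatchSend]
    E --> G[HandleNextEvent fires Status Changed]
    F --> G
    G --> H[Next workflow step]
```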
+ +## Things to know + +- Each workflow step argument must have a non-empty `E-Document Service` -- if it is blank, the step logs an error and exits without sending. +- `ValidateFlowStep` checks that the step's workflow code matches the E-Document's `Workflow Code`. A mismatch raises an error, which matters when multiple workflows are active simultaneously. +- The first workflow step that runs on an E-Document sets its `Workflow Step Instance ID`. Subsequent steps for the same document must match this ID or they are silently skipped. +- Batch send with Threshold mode accumulates documents in "Pending Batch" status until the count reaches `Batch Threshold`, then exports and sends them all at once. Recurrent mode defers to the background job. +- Deleting a service that is used in any enabled workflow is blocked by `IsServiceUsedInActiveWorkflow`, which scans all active workflow step arguments. +- Email responses do not require an E-Document Service in the step argument -- they use the document's sending profile directly.
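As a sketch of the custom batch mode extension point mentioned in the Service docs, a new mode is an enum extension plus a subscriber to `OnBatchSendWithCustomBatchMode`. The object ids/names and the subscriber's publisher and parameter list below are assumptions -- verify them against the actual event declaration:

```al
// Hypothetical sketch -- ids, names, and event parameters are illustrative.
enumextension 50103 "My Batch Modes" extends "E-Document Batch Mode"
{
    value(50103; "Daily Cutoff") { }
}

codeunit 50104 "Daily Cutoff Batch Handler"
{
    // Publisher object and parameters are assumed; check the real
    // OnBatchSendWithCustomBatchMode signature before relying on this.
    [EventSubscriber(ObjectType::Codeunit, Codeunit::"E-Document WorkFlow Processing", 'OnBatchSendWithCustomBatchMode', '', false, false)]
    local procedure HandleDailyCutoff(var EDocuments: Record "E-Document"; EDocumentService: Record "E-Document Service"; var IsHandled: Boolean)
    begin
        // Accumulate until a daily cutoff time, then export and send
        // everything still pending for this service.
        IsHandled := true;
    end;
}
```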