Archiving Inbound Data in Original EDI Format

The best thing you can do for long-term visibility into your data, and for ease of access during resubmissions and analysis, is to archive the data the moment it arrives. Archiving the original EDI document can be done in several ways, but to capture it in its original format, with the envelope and all of the data intact, you must archive it before it runs through the EDIReceive pipeline in BizTalk. Here are two approaches:

  • Use a series of ports to receive the data.  If, for example, your data arrives via FTP, you could pick it up with a Receive Port that has the EDIReceive pipeline on it.  That means only one port, but it also means the data is converted to XML and validated before it can be archived (unless you use a custom pipeline – see the next bullet).  To archive the data in its original EDI format without a custom pipeline, create a Receive Port that picks up the data from the FTP site using the “PassThrough” pipeline.  Next, create two Send Ports that subscribe to it.  One writes the file out to an archive directory (your archiving requirement is solved that easily!).  The other writes the file out to a temporary “staging” folder.  Finally, create a Receive Port that subscribes to this staging folder, picks up everything that arrives, and runs it through the EDIReceive pipeline.
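As a sketch, both Send Ports in the approach above can subscribe to everything the PassThrough Receive Port publishes by filtering on the receive port name.  The port name FTP_EDI_PassThrough below is hypothetical; substitute the name of your own Receive Port:

```
BTS.ReceivePortName == FTP_EDI_PassThrough
```

Apply the same filter to both the archive Send Port and the staging Send Port so each gets its own copy of every inbound interchange.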
  • Use a custom pipeline.  If you don’t want the extra hops of the previous option, or your solution doesn’t lend itself to that kind of multiple-port design, you can create a custom pipeline that writes the data to a directory.  You will need to write a custom pipeline component in C# that writes the incoming message to disk, then include that component alongside the EDI Disassembler component (and potentially others!) in your custom pipeline.  The final custom pipeline mimics the out-of-the-box EDIReceive pipeline, but adds the extra custom component that writes the file.  This is much more labor intensive, but it opens up some additional options around archiving.
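As a rough sketch of the component described above, the Execute method below copies the original message stream to an archive folder before handing the message on to the next pipeline stage.  This is not a complete component (a production component must also implement IBaseComponent, IPersistPropertyBag, and IComponentUI), and the ArchivePath property and the GUID-based file name are assumptions for illustration:

```csharp
using System;
using System.IO;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public class EdiArchiveComponent : IComponent
{
    // Hypothetical design-time property pointing at the archive directory.
    public string ArchivePath { get; set; }

    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        // Grab the untouched inbound stream before the EDI Disassembler sees it.
        Stream original = pInMsg.BodyPart.GetOriginalDataStream();

        // Unique file name per interchange; adjust naming to suit your needs.
        string fileName = Path.Combine(ArchivePath, Guid.NewGuid() + ".edi");

        using (FileStream archive = File.Create(fileName))
        {
            original.CopyTo(archive);
        }

        // Rewind so downstream components can still read the message body.
        if (original.CanSeek)
        {
            original.Seek(0, SeekOrigin.Begin);
        }
        pInMsg.BodyPart.Data = original;

        return pInMsg;
    }
}
```

Place this component in the Decode stage of the custom pipeline, ahead of the EDI Disassembler, so the file is written before any parsing or validation occurs.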