The blender is an automated method of consolidating multiple feeds into a single source of clean data. Unlike the filter, which looks only at past data from the same source, the blender looks at multiple data sources (including computed data generated by other real-time data modules) to generate high-frequency data of the highest quality. In many respects, the blender behaves like an artificial market maker.

The blender works on the principle of weighting the latest datum from each source. A weight is credited according to the source of the datum and then reduced according to the datum's age and spread (and, possibly, the inverse of its filter credibility).
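The weighting can be sketched as follows. The exponential age decay, the spread penalty, and all parameter names and values here are illustrative assumptions; the text specifies only that the base weight is reduced by age and spread.

```python
# Illustrative sketch of per-datum weight reduction. The decay forms and
# parameters (age_half_life, spread_scale) are assumptions for the example,
# not the blender's actual configuration.

def net_weight(base_weight, age_seconds, spread,
               age_half_life=5.0, spread_scale=0.001):
    """Reduce a source's base weight by the age and spread of its latest datum."""
    age_factor = 0.5 ** (age_seconds / age_half_life)    # older data count less
    spread_factor = 1.0 / (1.0 + spread / spread_scale)  # wider spreads count less
    return base_weight * age_factor * spread_factor
```

A fresh, zero-spread datum keeps its full base weight; a datum one half-life old contributes half as much.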

The algorithm can be understood by imagining a two-column table:

  • Data publication is triggered when the first tick arrives from any source after a configured sleep time following the last publication.
  • When publication is triggered, the latest middle prices from each feed are collected and placed in sorted order in the first column of our hypothetical table.
  • For each of the middle prices, the corresponding net weight (after appropriate reduction) is entered in the second column.
  • A target weight is calculated as one-half of the sum of the weights in the second column.
  • The second column is accumulated by addition, starting from the top row, until reaching the row (the select row) at which the running total equals or exceeds the target weight.
  • The datum associated with the select row is published.
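The steps above amount to selecting a weighted median of the latest middle prices. A minimal sketch, assuming the middle prices and their net weights have already been collected:

```python
def blend(mids_and_weights):
    """Publish the datum at the select row of the two-column table:
    sort by middle price (column 1), set the target to half the sum of the
    net weights (column 2), then accumulate weights from the top row until
    the target is equaled or exceeded."""
    rows = sorted(mids_and_weights)            # column 1 in sorted order
    target = sum(w for _, w in rows) / 2.0     # half the sum of column 2
    running = 0.0
    for mid, weight in rows:
        running += weight
        if running >= target:                  # select row reached
            return mid                         # datum to publish
    raise ValueError("no data to blend")
```

For example, with mids and net weights `[(1.1012, 0.5), (1.1010, 1.0), (1.1015, 0.2)]`, the target is 0.85 and the first sorted row already reaches it, so 1.1010 is published.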

Technical Paper on the Blender

See a published technical paper on the Blender application for forex spot data.


The ideas presented in this paper can easily be applied to other instrument sets using the same techniques. Forex spot data, however, has the advantage that even from a single data feed one can construct multiple virtual feeds of computed forex cross rates through vehicle currencies. (Of course, for the purposes of the blender, different institutional contributors can also be regarded as distinct feeds.)
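Constructing such a virtual feed is simple arithmetic: a cross rate A/B is computed from A/V and V/B through a vehicle currency V. The function name and the example pairs and values below are illustrative, not taken from the paper.

```python
def cross_rate(mid_a_vehicle, mid_vehicle_b):
    """Compute the A/B cross-rate mid through a vehicle currency V:
    A/B = (A/V) * (V/B). Values are illustrative."""
    return mid_a_vehicle * mid_vehicle_b

# e.g. a hypothetical EUR/JPY mid from EUR/USD and USD/JPY mids,
# with USD as the vehicle currency:
eur_jpy = cross_rate(1.25, 160.0)
```

Each such computed cross rate can then be fed into the blender alongside directly quoted feeds for the same pair.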