- Server Components
- Feed / Platform
- Interface Bridge
- Real-Time Data Apps
- Real-Time + Historical Data Apps
- Repo API
- Repo Server
- Data Distributor
- Historical Data Apps
This Server can be configured to include the RIDE System, which is a sophisticated and adaptable data-retrieval system developed by Olsen.
Sun/Solaris, Intel/Solaris, and Intel/Linux platforms are supported, depending on the availability of the live market-data interface for each platform.
The Server is deployed at the customer's site under a maintenance contract that assumes remote access will be possible.
The schematic of a generic Olsen Data Server installation is presented below. (Typical installations may not take advantage of multi-platform support.)
The maintenance schedule and deployment architecture can range from low to high criticality, up to configurations with no single point of failure.
- Reuters Select Feed/Reuters Triarch
- Reuters Select Feed/Bridge TRS
- Tenfore (satellite or Internet)/QuoteSpeed and Tapi2TCP (from Tenfore)
- Bloomberg/Data License (not for real-time applications)
- Bloomberg/Logiscope (proposed)
- Reuters Select Feed/TIBCO Rendezvous (proposed)
To convert data to the standardized format, the collector must organize it using feed-specific logic: determining the kind of instrument, the nature of the update, and the critical fields that must be parsed, and checking the internal dependencies between fields, which must pass logic tests to ensure the data is valid before it is written to the Repository.
This logic becomes increasingly complicated when dealing with expiring instruments and futures. For example: when converting bond future data to the internal standard format, the collector may need to access cached information for other symbols to determine the cheapest-to-deliver underlier. The collector attempts to collect all information regarding the price update—as far as possible—from the Feed it is built to service. As mentioned above, the Olsen Data Server is designed to handle multiple Feed/Platform interfaces, and each interface needs to be supported by its own collector process to handle the unique logic (of internal relations between symbols) of the interface in question.
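The conversion step described above can be sketched as follows. This is a simplified illustration; the field names, the instrument-classification rule, and the validation checks are invented for the example and are not Olsen's actual schema:

```python
def convert_update(raw):
    """Convert one feed-specific price update to a standard format.

    Returns a normalized dict, or None if the update fails validation.
    All field names here are illustrative, not from any real feed.
    """
    # Feed-specific logic: decide what kind of instrument this is,
    # which determines the critical fields that must be parsed.
    kind = "fx_spot" if raw.get("symbol", "").endswith("=") else "other"
    try:
        bid = float(raw["bid"])
        ask = float(raw["ask"])
    except (KeyError, ValueError):
        return None  # a critical field is missing or unparseable
    # Internal dependency check: bid and ask must be mutually consistent
    # before the tick may be passed on for writing to the Repository.
    if not (0.0 < bid <= ask):
        return None
    return {"kind": kind, "symbol": raw["symbol"], "bid": bid, "ask": ask}
```

A real collector would carry one such conversion routine per Feed/Platform interface, which is why each interface needs its own collector process.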
Human input errors as well as automated quoting algorithms run by market makers inevitably produce bad data—especially for non-binding quotes for OTC instruments. Exchange-traded instruments suffer somewhat less from these problems, but tick data (for transactions or quotes) from exchanges also show signs of contamination.
Traders watching real-time data graphs on-screen filter out bad data intuitively, so writing filter algorithms may seem trivial; but experience shows how quickly market changes can make a filter look excessively conservative or excessively rash. The development of good filter algorithms borders on work in artificial intelligence: the ability to adapt to rapidly evolving market conditions is paramount.
Any analysis of high-frequency data, as well as risk-management exercises (such as VaR calculations), is strongly influenced by bad data. Olsen learned this 20 years ago, when it began publishing the first papers to investigate the behavior of markets at high frequency.
The filter component is embedded within the collector process. The Repository writes every tick arriving from the live data interface, but assigns each tick a credibility number between 0 and 1 (higher credibility means a better tick). A credibility of 0.5 or above may be regarded as a default threshold. However, the data-retrieval system, RIDE, can extract data at a custom credibility level, allowing the user to determine the stringency of the filter process. So, while the credibility number is assigned when the data is written, filtering occurs only when the data is retrieved for use in an application.
The embedded filter also maintains a checkpoint of the filter state for each of the thousands of instruments being collected. In case the system must be restarted, this enables the recovery of the latest filter state.
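The store-everything, filter-on-retrieval design can be sketched as follows. The names and the in-memory list are illustrative only; this is not the RIDE interface:

```python
from dataclasses import dataclass

@dataclass
class Tick:
    symbol: str
    price: float
    credibility: float  # assigned by the embedded filter at write time, 0..1

# Every tick is written, regardless of credibility (hypothetical contents).
repository = [
    Tick("EURUSD", 1.0852, 0.97),
    Tick("EURUSD", 1.9852, 0.02),  # likely a bad quote; low credibility
    Tick("EURUSD", 1.0854, 0.91),
]

def retrieve(repo, symbol, min_credibility=0.5):
    """Filter only at retrieval time, using a caller-chosen threshold."""
    return [t for t in repo
            if t.symbol == symbol and t.credibility >= min_credibility]
```

Lowering `min_credibility` to 0 returns the raw, unfiltered stream, which is useful for auditing the filter itself.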
Olsen Financial Technologies consulting services can build such bridges to facilitate integration, maintain services during times of transition, or preserve old applications. It is not always possible to recover all the fields of Platform B from the data of Platform A for all instruments. In general, we have to analyze the complexity of the information in order to provide bridges that work for specific symbols or sets of instruments, so that existing applications can be supported. The purpose of this component is not a complete and exact translation of one feed into another.
Real-time applications interfacing with the Olsen server can take advantage of data from multiple live data feeds, making them highly fail-safe and robust.
A common usage of this component is as a platform for building real-time forecasting and trading models. Another common application is generating real-time computed market indicators, such as continuously updated volatilities and correlations from forex spot rates, or zero-coupon rates from bond prices.
Data generated by this component (forecast or indicators) can also be fed back to the Repo Server and stored in the Repository, as indicated on the diagram.
These are real-time applications that also require historical data. Although conceptually simple, we consider these apps a distinct category because their dual purpose raises both unique and classic problems. Many of these problems, which we have solved, concern the point of transition between historical and real-time data, and how best to handle the rare conditions that arise at that boundary.
The Repo API allows Real-Time + Historical Apps to be coded elegantly, with outwardly seamless access to the data.
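One common way to handle the historical-to-real-time transition is to replay history up to its last timestamp and then switch to the live stream, dropping any live ticks that overlap the replayed range. The sketch below is a hypothetical client-side illustration of that idea, not the actual Repo API:

```python
def seamless_stream(historical, live):
    """Yield historical ticks, then live ticks, without gaps or overlap.

    `historical` and `live` are iterables of (timestamp, price) pairs,
    each assumed to be in time order. Live ticks at or before the last
    historical timestamp are dropped, so the consumer sees one
    seamless, time-ordered stream.
    """
    last_ts = None
    for ts, price in historical:
        last_ts = ts
        yield ts, price
    for ts, price in live:
        if last_ts is not None and ts <= last_ts:
            continue  # overlaps the replayed range; skip
        yield ts, price
```

Burying this splice inside the API is what lets applications treat the combined stream as a single data source.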
The layer that guards the Repository. It controls writing to the database and allows multiple processes to register for this purpose. It ensures that ticks are written in time order and prevents ill-formatted information from reaching the database.
The Repo Server also re-transmits real-time data to consumer applications that subscribe to this data via the Repo API.
Finally, the Repo Server facilitates maintenance of the database via specialized functions. Relatively complex transactions, such as replacing sections of data without switching off the collection process, are achievable through run-time commands issued to the Repo Server.
Repository = the software architecture
Database = the content

The Repository has evolved from our long experience with the need to store and maintain large sets of high-frequency financial data. It is associated with several maintenance tools that allow our experts to maintain it remotely.
Our database contains tick-by-tick data collected since 1986. The Olsen Server, deployed at a customer site, can be initialized with a Repository containing an appropriate subset of the database to support mission-critical applications that require access to historical data. The subset can be constructed with respect to time as well as to instrument type.
The following figures reflect the scope of the database:
| Data type | Number of ticks |
| --- | --- |
| FX quotes | 330.5 million |
| Equity index levels | 212.2 million |
| Interest rate deposit quotes | 16.3 million |
| Interest rate swap quotes | 52.6 million |
| Benchmark bond quotes | 45.1 million |
- Data publication is triggered when the first tick arrives from any source after a configured sleep time following the last publication.
- When publication is triggered, the latest middle prices from each feed are collected and placed in sorted order in the first column of our hypothetical table.
- For each of the middle prices, the corresponding net weight (after appropriate reduction) is entered in the second column.
- A target weight is calculated as one-half of the sum of the weights in the second column.
- The second column is accumulated by addition, starting from the top row, until the row (the "select" row) where the cumulative weight equals or exceeds the target weight.
- The datum associated with the select row is published.
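The steps above amount to computing a weighted median of the latest middle prices. A minimal sketch, with function and variable names chosen for illustration rather than taken from the Olsen API:

```python
def publish_price(quotes):
    """Select the weighted median of the latest middle prices.

    `quotes` is a list of (middle_price, net_weight) pairs, one per feed,
    where net_weight is the weight after appropriate reduction.
    Returns the price of the row where the accumulated weight first
    equals or exceeds half the total weight, per the steps above.
    """
    # Column 1: middle prices in sorted order; column 2: their net weights.
    rows = sorted(quotes, key=lambda q: q[0])
    # Target weight: one-half of the sum of the weights.
    target = sum(w for _, w in rows) / 2.0
    # Accumulate the second column from the top row down.
    cumulative = 0.0
    for price, weight in rows:
        cumulative += weight
        if cumulative >= target:
            return price  # the select row's datum is published
    raise ValueError("no quotes to publish")
```

Because the selection is a weighted median rather than a mean, a single outlier quote from one feed cannot drag the published price.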