- Hot Topics
- White Papers
a) FAST is going to take off.
b) Background questions, confusion and patchy documentation for a while now.
c) Usual design-by-committee and no reference implementation for 1.1 (what a grand idea)
d) Big exchanges taking it on and interpreting it their own way.
I have a few very simple questions:
Who designed this mess?
Why wasn't the problem tackled with something other than the usual mixing-up of bitwise logic with schemas, and XML bloat?
And if it just works and FAST is the-end-of-it-all, why do vendors still provide proprietary interfaces to their data?
This work increasingly seems to be a typical hacking, bloating and consultancy wreck waiting to collapse, and all the talk of latency improvement is as close to efficiency as FIX is to God-Object designs.
Seems like a great exercise in enforcing, and then dumping, yet another proprietary protocol, doing work network cards do better and faster, while humans are left encoding and decoding this ciphertext of engineering.
I'm not sure if this is flame-bait, but I will answer anyway.
To respond to your questions first:
2. Why wasn't the problem tackled with something other than
3. And if it just works and FAST is the-end-of-it-all, why do
A few comments on your observations:
c1) "Usual design-by-committee"
c2) "no reference implementation for 1.1 (what a grand idea)"
c3) "Big exchanges taking it on and interpreting it their own way"
A question to you regarding your second question:
And, please identify your company affiliation.
I was part of the design team along with Rolf, and your assumptions about the committee process are far from the truth. We spent, and still spend, a considerable amount of time working with the FAST specification and designing future versions. If you have a better solution, or an enhancement to the specification, then speak up. My guess is you will not.
Prove me wrong and provide some constructive criticism, and offer some solutions.
> I'm not sure if this is flame-bait, but I will answer anyway.
Not at all. And response is appreciated.
> 1. Who designed this mess?
Right, so what was the primary motive for doing this?
Bandwidth reduction, better recovery, lower-latency processing, better throughput etc., all of it coming from an adaptation for streaming bits rather than bytes? Yes, that is all there is to it. And no, it doesn't do the job; ask the pros, such as Real Software and the hype their G2 caused.
None of it justifies any of the complexity or has any technical value in my opinion.
Why? Because it is already done by hardware and I would recommend talking to a few network card manufacturers and optical providers.
> 2. Why wasn't the problem tackled with something other than
I don't think I have confused anything.
The wire representation is doing what NICs are doing and better.
FAST as a state machine is a classic flaw. Stop bits and presence maps add nothing that existing network infrastructure, or application protocols that are done right, cannot already do.
XML is used to express templates and flaky schemas (another topic too long to bother with), repeating the same old FIX mistake where everybody tacks on whatever they like (and it varies between exchanges to a silly extent); it defines no protocol at all, only fields and values.
Beyond providing flaky, breakage-prone semantics, using XML to template and describe a protocol is nothing XML hasn't done for the past 11 years, and SGML did it for much longer before that.
Even schema-driven bitwise encoding and decoding code generation is nothing new; it has existed since the 1970s.
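For readers unfamiliar with the template approach under discussion, a toy FAST-style XML template can be handled entirely with standard-library tools. The template name, id and fields below are made up for illustration, not taken from any exchange's real template file:

```python
# Parse a minimal FAST-style XML template (illustrative names only).
import xml.etree.ElementTree as ET

TEMPLATE = """
<template name="MDIncRefresh" id="1">
  <uInt32 name="MsgSeqNum"><increment/></uInt32>
  <decimal name="MDEntryPx"><delta/></decimal>
  <uInt32 name="MDEntrySize"><copy/></uInt32>
</template>
"""

root = ET.fromstring(TEMPLATE)
# Each field element carries its type as the tag and its operator as a child.
fields = [(f.tag, f.get("name"), f[0].tag) for f in root]
print(root.get("name"), fields)
```

A code generator in this style walks the field list and emits per-field encode/decode routines, which is the decades-old technique the post alludes to.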
Perhaps you're too young to remember Baudot code and where stop-bit-like ideas come from; the same applies to the 'presence map', which has a long history in computing (basic logic) and even in relatively new (a decade or more) applications in semantic modelling.
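Since stop-bit encoding keeps coming up in this thread, here is a minimal sketch of the scheme FAST uses for unsigned integers: seven data bits per byte, with the high bit (0x80) set on the final byte so no length prefix is needed. The helper names are mine, not from any library:

```python
def encode_stop_bit(value: int) -> bytes:
    """Encode a non-negative integer as 7-bit groups with a stop bit."""
    groups = [value & 0x7F]
    value >>= 7
    while value:
        groups.append(value & 0x7F)
        value >>= 7
    groups.reverse()            # most significant group first
    groups[-1] |= 0x80          # stop bit marks the last byte
    return bytes(groups)

def decode_stop_bit(data: bytes) -> tuple[int, int]:
    """Decode one stop-bit field; return (value, bytes consumed)."""
    value = 0
    for i, b in enumerate(data):
        value = (value << 7) | (b & 0x7F)
        if b & 0x80:            # stop bit reached
            return value, i + 1
    raise ValueError("no stop bit found")

assert encode_stop_bit(5) == bytes([0x85])
assert encode_stop_bit(942755) == bytes([0x39, 0x45, 0xA3])
```

The presence map is the same mechanism applied to a bitmap: a stop-bit-delimited run of bytes whose data bits flag which optional fields follow, which is indeed old technology.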
>3. And if it just works and FAST is the-end-of-it-all, why do
If it were inertia they wouldn't invest in their feeds, would they? FAST has been around for far too long without any concrete proof beyond 'look Ma, I can compress and publish numbers'. Compared to what, other than FIX?
You should look up those players and see why they did it (hint: hard data).
If it was different goal functions, what makes anything FIX or FASTFix better?
I have no hypothesis, I just see that both CME and Eurex, the drivers of this process, would rather keep:
a) reselling bandwidth instead of tackling the problem of ancient, COBOL-like legacy
> A few comments on your observations:
No reference implementation, and no valid technical reason, data, or substantiated (not manufactured) samples showing why existing infrastructure, with far simpler approaches, cannot do the job better or in less complicated fashion.
Remember VHS vs Betamax? Sure, it means nothing; FAST could be a winner, but then came the CD-ROM, etc.
If you can provide a set of samples, I will provide proof FAST does not gain anything more than 3% over existing methods done in hardware and very little work done in app-level space. And that 3% will come at a huge cost that can be put to better use elsewhere.
Reading networking hardware manuals and looking at their chip designs should do the job as well as looking at what non-FIX major players did.
> c2) "no reference implementation for 1.1 (what a grand idea)"
It is more than a problem. Nothing out there has done well, even with a great reason to exist, without an implementation against which the vapour-versus-raw-FIX claims and supposed benefits can really be measured, and, most importantly, measured against alternatives.
The key is hard data to prove your ideas and concepts (which are not new anyway) against what can already be done. Not assumptions, and not off-on-a-tangent benchmarks.
It is not a flame at all, it is requesting a justification in samples so we can see for ourselves whether this makes any sense or not (hint: it doesn't).
> c3) "Big exchanges taking it on and interpreting it their own way"
I can agree it is a classic FIX approach problem and not just FAST, and..
> Some of the exchanges have been careful to get feedback and
Those same exchanges should focus on what does provide more value for everyone not just their interest.
They should enhance their designs so as not to redistribute redundant things such as depth, pile gruesome hacks on snapshots and rewinds, or require silly private lines, VPNs and huge pipes for their idiotic designs, and plenty more.
But no, there is more interest in bolting another layer that will not address the problem better than a network card, they would rather mix-up application and network concepts.
> A question to you regarding your second question:
Simple. For the big players that adopted this totally unnecessary process and hype: why not measure and test against an implementation that distributes only what is required, with classic or streaming compression (there are stacks of implementations out there)?
That way people don't pay the exchanges' mark-up on bandwidth (revenue before actually providing any service), and for their lack of interest in fixing the problem that led to FAST.
That way we get to see what the tangible benefits are, not hypothesis or designs for the sake of design and show off that leads to no advance.
> And, please identify your company affiliation.
Retired. And after 30 years of dealing with the same old thing, seeing the same thing repeat all over again.
Why not just FIX the problem: the FIX itself is within those exchanges and their infrastructure implementations.
It has been, and still is, dead obvious even to fresh-out-of-college networking graduates screaming "bitwise, tagging and XML horror ahead" (i.e. a story for my son and his friends).
> Most of your questions can be answered by reading either the
I suggest a "proof" of concept that would reduce the problem where it needs complexity reduction. I suggest a "proof" as in a tangible, substantiated set of samples (a month of it, really) compared against a number of alternatives. Heck, run it like a Google Summer of Code competition and you will be shocked and awed by better, cheaper and more sensible implementations.
> and the current specification. How much time have you really taken to try to understand where FAST has come from
Enough to teach my grandchildren about it.
Does (stop-bit encoding) and (presence map) and implementation of software and hardware for pre-Internet communication count?
Please see my response to Panter Engineer.
> and what the design considerations were ?
What are they? Have they been proven to achieve anything than alternatives? And which alternatives?
> If there are specific parts of the specification you are having trouble grasping, the forum is a great place to get help.
It is actually the entire spec.
As the subject says, I have trouble grasping entropy encoding, bit- and wire-level signalling hacks and more in application-level designs. Suffice it to say it is odd beyond any justification, by any criteria out there as well as by history.
We are not dealing with MPG29 here.
> I was part of the design team along with Rolf, and your assumptions
So where is the proof? Where is the data? And compared to what? Raw FIX surely cannot count, just as comparing a compressed file against the original text cannot.
Advertising huge message processing rates is a typically flawed benchmark people are always keen to provide.
> We spent, and still spend, a considerable amount of time working with the FAST specification and designing future versions.
Well I'm sure you'll keep designing something network equipment manufacturers have been doing better for decades.
So copies, constants, defaults, deltas, is there really something to design there?
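To make the "is there really something to design" point concrete, the four field operators named above amount to very little code. This is my own sketch of their resolution rules, not any reference decoder; a real decoder drives this from templates and presence maps:

```python
def apply_operator(op, prev, wire_value=None, constant=None):
    """Resolve one field value given its operator and the previous value."""
    if op == "constant":      # value never sent on the wire
        return constant
    if op == "copy":          # absent on the wire -> reuse previous value
        return wire_value if wire_value is not None else prev
    if op == "default":       # absent on the wire -> template default
        return wire_value if wire_value is not None else constant
    if op == "delta":         # wire carries only the difference
        return (prev or 0) + (wire_value or 0)
    raise ValueError(op)

# A price stream sent as deltas: 100, +2, -1  ->  100, 102, 101
prices, prev = [], None
for d in (100, 2, -1):
    prev = apply_operator("delta", prev, d)
    prices.append(prev)
assert prices == [100, 102, 101]
```

The engineering effort in a production decoder is in template management and error recovery rather than in the operators themselves.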
> If you have a better solution, or an enhancement to the
It is not my job to prove you wrong; it is your job to prove yourself right, and you ought to be aware of one major player dumping FAST already (which says a lot).
My guess is you are coming from the same stock exchange background as much of the FIX community that produced the initial, highly ambiguous, sprawling and gap-ridden designs.
That is why the specification even to date (FIX 5.0 included) still lacks basic domain knowledge and keeps tacking things on, leading to what I clearly stated (and under a set of valid questions):
Despite that being a huge inefficiency by itself and a massive invitation-for-hacking-as-proven-by-all-adopters:
I suggest the committee get some exposure to existing CME and Eurex infrastructure, from deep inside, and realise where the problems really are.
They have no interest in reducing bandwidth requirements, just to expand product range over existing bloat. They have no interest in fixing problems of data distribution, they prefer to resell bandwidth at a huge premium if you're new to this same old industry.
And besides, all the FAST efforts are going to be running over TCP and/or IP. Please enquire and attain knowledge of alternatives. There are hardware solutions out there already that beat all of it, hands down, in latency, bandwidth reduction and throughput goals already.
Next thing I can foresee is the committee doing a new-age HDLC and X.25 in specification. Please don't go down those and similar routes, it all fell apart and rapidly in modern day communication and development.
The best advice I can give is to go and talk to Cisco, Broadcom and the IETF, anyone with a good few decades of experience, and see what they suggest in chip/hardware and new protocols. PLEASE: don't make the classic 'standards'-body mistake of paying them to endorse exchange or new-protocol ideas for the sake of publication and marketing of supposedly new tech; do it for the sake of advance, to have the best there can be for the least overhead and price paid.
Modern stacks in hardware + app-level logic beat FAST in price, lack of complexity, and choice of compression, as is well demonstrated by highest-load exchanges that did try it (hint: not talking of that famous options exchange in US or CME, there is another one that tried it all and gave up on FAST pretty quick).
In the hope this helps you guys produce something more sensible that actually deals with *their*/exchange/ECN problems beyond bits and bobs and blobs and deltas.
I do doubt (don't care really), there will be anything as such though ever suggested purely because it is the type of industry open to abuse and monopoly and 'open standards' (none of which really exist apart from tag/value mega-universe) and expensive and highly inefficient *application-level* designs in networking on their part.
That is what the real problem is.. not mine for sure.
It's Pantor not panter.
So, retired, what's your interest in this?
Sorry, you haven't (yet) convinced me that you are not a troll.
> > I'm not sure if this is flame-bait, but I will answer anyway.
> Bandwidth reduction, better recovery, less latent processing rates,
> Why? Because it is already done by hardware and I would recommend
FAST too is already processed in hardware. By HPC Platform (my company) at least and probably by others. And BTW we don't only decode FAST in hardware, we also filter, maintain the order book and compute whatever it takes. All that in hardware and much, much faster than the alternative you propose.
> The wire representation is doing what NICs are doing and better.
Again, this is wrong, we process FAST encoded multicast packets at wire speed and this is much faster than what any NIC + OS stack can do.
Stop bits, presence maps and operators are not a problem.
FAST is a domain specific compression scheme and this is fine IMO.
XML too is fine as long as it does not go onto the wire. Personally I would have preferred an s-expression based description but I'm probably the only one. :)
You are missing the point, again read the proof of concept and original design goals, specifically
There were several design constraints that we had to adhere to, and multiple proposed solutions were evaluated before we settled on FAST. Your focus on a hardware based, NIC solution is very short sighted. There are far too many transport and media formats to consider.
How can you say the reduction of bandwidth by over 70% in most cases is not a valid proof of concept? Yes, data elimination is always a better way of reducing bandwidth than data compression, but we cannot dictate what an exchange or vendor can or cannot include in a data feed.
And what is with all the secrecy? Retired from where? What exchanges have dropped FAST?
> If you can provide a set of samples, I will provide proof FAST does
Anyway, I do not think your comments really warrant any more of my time. Your motive is an ulterior one, I am just not sure what yet.
a correction to my comment on c2 in the post below:
> Hi Majkara,
Eurex did not fix a bandwidth problem, we had 64kbps lines until a few years ago. We also used presence maps in our previous proprietary implementation and welcome the fact that nobody tried to reinvent the wheel with FAST. It simply puts together proven technologies and gave people a chance to agree upon something to be used for encoding. Usually, everybody knows it better than everybody else and does his or her own, proprietary thing (vendors included). FAST has proven to be an enabler for FIX market data. It has and will make life easier for our customers accessing multiple exchanges and it is only for them that exchanges do it, not because FAST is the latest and greatest technology.
I am equally unsure what your intentions are but you have certainly received a fair amount of attention. Maybe that is sufficient as a motive. I am not retired and live in the real world.
> > I'm not sure if this is flame-bait, but I will answer anyway.
> Why? Because it is already done by hardware and I would recommend
> FAST too is already processed in hardware. By HPC Platform (my company) at least and probably by others. And BTW we don't only decode FAST in hardware, we also filter, maintain the order book and compute whatever it takes. All that in hardware and much, much faster than the alternative you propose.
You are missing the point completely.
You can do whatever you wish in hardware, FPGAs are cheap these days; the problem is that you have already tied your customers into your invention. And who's to say Intel won't smash it apart with a NIC close to the CPU anyway. Again, talk to these people and then do a hardwired implementation (heck, even if you drive it via XML you have achieved nothing to boast about; remember that is how all platforms failed).
How can you talk about "much faster" and the alternative "proposed":
a) When it is about the application, and the remote end doing COBOL, passing you stacks of redundant data and legacy bloat, and on top of it all doing it via *their* *protocol* (read that and you'll see that no matter what you do in hardware, it is not necessarily faster than alternatives that already exist; not my job to teach you about it though).
b) When you have nothing to compare it against?
For both to be resolved, please go talk to the exchanges that have done it right and then do a comparison and only then your sentence will stand.
>> The wire representation is doing what NICs are doing and better.
> Again, this is wrong, we process FAST encoded multicast packets at wire speed and this is much faster than what any NIC + OS stack can do.
You can process nothing at wire speed, simple as that, because wire speed is the speed of light. If you want to address latency, see what Cisco does. Is your hardware better than theirs? Then why doesn't it sell on that scale?
> Stop bits, presence maps and operators are not a problem.
Yes they are.
> FAST is a domain specific compression scheme and this is fine IMO.
The domain of which is bits and bytes, great.
> XML too is fine as long as it does not go onto the wire. Personally I would have preferred an s-expression based description but I'm probably the only one. :)
You are not. Similar 'grand' designs went into networking Lisp VMs and the outcome:
> You are missing the point, again read the proof of concept and original design goals, specifically
LSE is the worst 'options' (warrants really) example you could have taken. Little liquidity if any at all.
Plus it is not a heavy load exchange at all.
Plus there is no data for *all* markets for one week, hardly a month.
> How can you say the reduction of bandwidth by over 70% in most cases is not a valid proof of concept? Yes, data elimination is always a better way of reducing bandwidth than data compression, but we cannot dictate what an exchange or vendor can or cannot include in a data feed.
Reduction in comparison to what, what alternative?
Where's the sample set? Where can I get a complete day+week+month sample set to try out basic streaming compression and prove that FAST will not gain you anything over an exchange that does distribute data you are only interested in.
Is there an FTP to a month of samples? (amateurs will base it on one example and one sample only).
> And what is with all the secrecy ? Retired from where ? What exchanges have dropped FAST ?
Retired from chasing COBOL legacy and Baudot-era bit inventions. I review in retirement, mainly on the performance side.
No secrecy, just cannot represent people I used to work for.
> The sample data is on the FAST web site, but you already know that since you have read all the information available.
You must be joking or be new to this. Any decent exchange spits out at least 300MB of data a day and that's from ages ago.
Where are those samples and not stats and contrived proofs of concept?
Is there an FTP for at least a week of real data from any of you?
> Anyway, I do not think your comments really warrant any more of my time. Your motive is an ulterior one, I am just not sure what yet.
My motive is simple:
Prove there is any meat before introducing another 're-invention' that doesn't address any of the problems of the adopters pushing it.
Or, for good of mankind.
I have seen nothing to warrant my time either, so I asked very basic questions to which there are no samples or any proof to play with at all. You know, so we can see for ourselves.
> Hanno Klein / Deutsche Börse Systems
> Eurex did not fix a bandwidth problem, we had 64kbps lines until a
And you are not mentioning your latency which can, quite often, reach, wait for it... wait a bit longer:
Wow, that's great, Accenture tops it.
> We also used presence maps in our previous proprietary implementation > and welcome the fact that nobody tried to reinvent the wheel with
Fact? I don't see any fact in just stating FACT. Your previous proprietary implementation ranks No. 1, right after the FX ECNs (which are famous themselves).
> It simply puts together proven technologies and gave people a chance > to agree upon something to be used for encoding.
Please see all previous response, proven how, where, where are the samples?
There is no one agreeing but you guys, pushing for more ideas bordering on insanity.
And again if Eurex is not the greatest example of latency in the world, I really do not know what is. Forget your co-location resales as they are about the only thing beating resales of bandwidth:
Resale of rack-space and electricity.
> Usually, everybody knows it better than everybody else and does his
Like single-threaded dispatch, oh dear..
You state-machine people really have no idea what people did back in the 70s, and you still hold on to broadcasting depth and inventing proprietary account management and what not.
> FAST has proven to be an enabler for FIX market data.
Facts please and facts given in samples that any kid out there will prove is:
Plain Wrong Assumption.
> It has and will make life easier for our customers accessing multiple > exchanges
Oh, you guys can't even agree on clearing, and you are talking of multiple exchanges and FIX-tag semantic and syntactic hacking.. hmm.
> and it is only for them that exchanges do it, not because FAST is the > latest and greatest technology.
It isn't latest nor greatest, see previous responses.
I would suggest fixing your implementation so your reputation for high-latency goes down rather than up and that is without asking for rack space.
> I am equally unsure what your intentions are but you have certainly
Benefit of Mankind.
> received a fair amount of attention.
I am not after attention.
I am after sample data I can throw at Google kids.
> Maybe that is sufficient as a motive. I am not retired and live in
There is a challenge for your real world. Sentence by sentence.
Please, provide, sample, data, so, people, can, see, it, is, HYPE.
For the record, until such time the adopters sober up:
Most basic streaming compression turns 200,909 KB of FIX data into 20,106 KB.
And that is as primitive as it gets.
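Anyone can reproduce that kind of ratio on their own data. Here is a hedged sketch using generic streaming compression (zlib) on made-up FIX-style tag=value messages; the messages are synthetic, so the exact ratio on a real feed will differ:

```python
# Compress synthetic, repetitive FIX-style messages with plain zlib.
import zlib

SOH = "\x01"  # FIX field delimiter
msgs = []
for i in range(10_000):
    # Illustrative tags/values only; checksum field is a placeholder.
    body = (f"35=X{SOH}49=SENDER{SOH}56=TARGET{SOH}34={i}{SOH}"
            f"268=1{SOH}279=0{SOH}269=0{SOH}270=100.{i % 100:02d}{SOH}271=500{SOH}")
    msgs.append(f"8=FIX.4.4{SOH}9={len(body)}{SOH}{body}10=000{SOH}")

raw = "".join(msgs).encode()
packed = zlib.compress(raw, level=6)
print(f"{len(raw)} -> {len(packed)} bytes "
      f"({100 * (1 - len(packed) / len(raw)):.0f}% smaller)")
```

Because tag names and most values repeat message to message, a generic dictionary compressor removes the bulk of the redundancy, which is exactly the baseline the post wants FAST measured against.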
Provide real samples and real data and demonstrate your optimisation makes any sense whatsoever *against alternatives*.
Hint: they do not, try it out. Try it out and see for yourself if you are all reluctant to provide any substantiated proof against any alternative.
Contrived examples for the sake of proof never did any good to anyone. They are only delusional.
If you do need a gauntlet, here it is: I will provide mine if you provide one month of samples for everyone to play with.
If you do not, I wouldn't care much. It isn't my job to execute anything and especially execute poorly while repeating historical mistakes leading to another casualty of self-delusion.
Samples Gentlemen, Samples..
Typo in your response (order of magnitude), Eurex latency is less than 5ms and not 500ms. Speed of light will add on to that depending on your geographic location. Co-location was not invented by exchanges but by their customers who perceived themselves at a disadvantage due to their geographical distance from the central matching engine.
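To sanity-check the propagation point: light in optical fibre covers roughly 200 km per millisecond (about two-thirds of c), so distance alone sets a latency floor regardless of the protocol. The city distances below are rough great-circle approximations, used only for illustration:

```python
# Back-of-the-envelope one-way propagation delay in optical fibre.
C_FIBER_KM_PER_MS = 200.0  # ~2/3 of the speed of light in vacuum

routes = [("Frankfurt -> London", 640),      # approximate distance, km
          ("Frankfurt -> Chicago", 6950)]    # approximate distance, km

for route, km in routes:
    one_way_ms = km / C_FIBER_KM_PER_MS
    print(f"{route}: ~{one_way_ms:.1f} ms one way")
```

On a continental route the fibre itself already contributes milliseconds, which is why geographic location, and hence co-location, matters independently of encoding.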
Sample data is simply not possible due to confidentiality requirements to protect exchange members. You need to approach them and not us. Tampering with the data is not an option because this opens the door to manipulation.
In the real world changes come at a cost. That is why such changes are typically incremental and are only introduced if a benefit has been proven. You doubt the due diligence capabilities of an entire industry here. Open software and specifications do not incur a lock-in as a hardware purchase does. There are more aspects than just the best technical solution to which you are reducing this discussion thread. That is why non-technical people usually run the business but I am pretty sure you are not in favor of that setup. I as a technical person do not have a problem with it and see the dependencies between the business and the technology.
> Hi Majkara,
I think it was very much flame-bait, with the original poster going off and spewing rants about Intel and network hardware vendors, etc. I have been around long enough to have proposed a few OIDs in the ASN.1 specification (S/MIME anyone?), in an ancient past life. But coding up to the FAST spec was a LOT simpler than writing a full-scale ASN.1 library, just based on personal experience. All of his rants can apply to ASN.1 just as easily; it was clearly "design by committee", and "people just extend it however they wish". Is FAST perfect? Of course not, but I believe under the constraints of the original WG, under the FIX charter, Rolf, Daniel and the WG guys did the job admirably. And what is better news is that the exchanges are pushing for FAST as a way to get out of the bandwidth monster jam.
Maybe my perspective has changed (moved from a pure technologist to now running a trading firm). But technology is a tool, a wonderfully powerful tool; just don't let it become an end in itself.