
Market Timing

Wednesday, April 13 2016

Did we see the Bulls blast out to the upside today?
- New run to at least 212.0 or 214.0???

Posted by: Dr. G. Paul Distefano AT 08:51 pm   |  Permalink   |  Email
Monday, April 04 2016

Where do we go from here?

The SPY is sitting close to 204, up from about 181. The Bears want to push the SPY back down to 181, but the Bulls want it to continue up to the all-time-high resistance level at 214. When the SPY hit 204, it was like hitting a triple-reinforced brick wall (with lots of waffling around since then). "The Battle of 204" will turn out to be quite a battle.

Lots of people are simply peeking over the fence (watching) to see what might happen next, but MIPS is "in the gears" analyzing (calculating) what is most likely to happen next. I'm betting on MIPS!!!

PEEKING...

MIPS'ing...

Posted by: Dr. G. Paul Distefano AT 09:45 pm   |  Permalink   |  Email
Sunday, April 03 2016


Rightfully so, many MIPS members are concerned about curve fitting, data mining, etc., in the timing models on the market today (and, of course, in the MIPS models themselves). I can tell you point blank that we do not curve fit in the development of the MIPS models. I can also say with a high degree of certainty that any engineer with an advanced degree from a good school who has written software for control systems (say for nuclear power plants, space rockets, fighter jets, commercial aircraft, etc.), where failure is expensive or disastrous, knows the difference between curve fitting and the development of new algorithms to improve their software's performance.

I say "rightfully so" above because many (or most) model developers do curve fit in the "development" of their models (if you want to call it "development"). The problem is that "curve fitting" is not really "development". It is merely forcing a bad model to look good on a certain set of data. And yes, a curve-fit model may produce good results on the data set it was fit to, but it will almost certainly fail with any other set of data. I can also tell you that I know several model "developers" who curve fit in some form or fashion in their models, and they don't even know that they are doing so.
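To make that concrete, here is a minimal, hypothetical sketch (my own generic illustration, not MIPS code): a simple moving-average crossover rule whose two lookback lengths are grid-searched to maximize the backtested return on one simulated price history, and which is then re-run on a second, independent history from the same process. Every function name and number below is made up for the illustration.

# Hypothetical sketch of curve fitting a timing rule (not MIPS code).
# The "developer" grid-searches moving-average lengths to maximize the
# backtested return on ONE price history, then the tuned rule is re-run
# on a second, independent history drawn from the same random process.
import numpy as np

rng = np.random.default_rng(1)

def random_prices(n=1500):
    """A trendless random walk -- a stand-in for one period of market data."""
    return 100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.01, n)))

def crossover_return(prices, fast, slow):
    """Total log return of a long/flat moving-average crossover rule."""
    fast_ma = np.convolve(prices, np.ones(fast) / fast, mode="valid")
    slow_ma = np.convolve(prices, np.ones(slow) / slow, mode="valid")
    m = min(len(fast_ma), len(slow_ma))
    signal = (fast_ma[-m:] > slow_ma[-m:]).astype(float)   # 1 = long, 0 = cash
    rets = np.diff(np.log(prices[-m:]))
    return float(np.sum(signal[:-1] * rets))                # hold prior day's signal

history_a = random_prices()   # the data set the rule is tuned on
history_b = random_prices()   # fresh data the tuned rule will actually face

# "Curve fitting": keep whichever (fast, slow) pair happened to score best on history_a.
grid = [(f, s) for f in range(5, 60, 5) for s in range(20, 260, 20) if f < s]
best = max(grid, key=lambda p: crossover_return(history_a, *p))

print("parameters tuned on history A:", best)
print(f"log return, history A (in-sample):     {crossover_return(history_a, *best):+.3f}")
print(f"log return, history B (out-of-sample): {crossover_return(history_b, *best):+.3f}")

The tuned parameters are, by construction, whatever happened to score best on the first history; there is no reason to expect that "edge" to carry over to the second history, which is exactly the failure mode described above.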

In reality, there can be a fine line between:
   (1) curve fitting an existing model to perform better or
   (2) introducing new algorithms to make a revised model perform better than its predecessor. 

Let's see if I can come up with an example of this. In many cases, coming up with an example to explain something is more complicated than the thing it is meant to explain.

Allow me to try to explain fixing a fighter jet's control-system software so that it better adheres to the design specs for how fast the jet should climb in relation to how far back the pilot pulls "the stick". In any decent software of this type, the formula would be based on the physical fact that, because of the curvature of the top of the wing compared to the flat bottom, the air traveling over the top of the wing moves faster than the air along the bottom, and hence the air pressure on the top of the wing is lower than that on the bottom. This pressure difference, of course, is what "pushes" the jet up into the lower-pressure zone. This is why the wing is designed and built the way it is.
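For readers who want a number attached to that argument, here is a rough back-of-the-envelope sketch using Bernoulli's relation (pressure plus one-half times density times velocity squared is constant along the flow). The airspeeds and wing area below are assumed values chosen only for illustration, not the specs of any real aircraft.

# Back-of-the-envelope numbers for the pressure-difference argument above,
# using Bernoulli's relation p + 0.5*rho*v^2 = constant along the flow.
# All inputs are assumed values chosen for illustration only.
rho = 1.225          # air density at sea level, kg/m^3
v_top = 260.0        # assumed airspeed over the curved top of the wing, m/s
v_bottom = 240.0     # assumed airspeed along the flatter bottom, m/s
wing_area = 40.0     # assumed wing area, m^2

# Faster air on top -> lower pressure on top; the higher pressure underneath pushes up.
delta_p = 0.5 * rho * (v_top**2 - v_bottom**2)   # pressure difference in Pa
lift = delta_p * wing_area                       # N, treating delta_p as uniform over the wing

print(f"pressure difference: {delta_p:,.0f} Pa")
print(f"approximate lift:    {lift:,.0f} N (about {lift / 9.81:,.0f} kgf)")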

Now, let's say the fighter jet isn't working the way it should, and two teams set out to fix it. Team #1 applies curve fitting and Team #2 chooses to redesign.

Team #1
Curve fitting in this case could be "developers" tweaking certain parameters in the existing formula to force the jet to climb at a certain rate depending upon the speed of the jet.  They may make this one jet perform better, but their "design" will likely fail for all others.

Team #2
Real development in this control-system software would be where the design engineers who developed the formulas for how fast the air over the wing should travel realize that those formulas were developed for Mach1 speeds, but the current jets travel at Mach3. They also know that, at Mach3, the faster airflow heats the air going over the wing more than it does at Mach1, and this hotter air gets "lighter" (lower pressure). So, lighter air on the top makes the pressure from the bottom more effective, and the jet moves up faster.

Therefore, rather than the one-time, random, unexplainable "adjustments" from Team #1 (the curve fitters), the "real" design engineers on Team #2 introduced new mathematical algorithms that take the speed of the jet into consideration and adjust accordingly and automatically, so the software works again at all speeds.

The main difference between these two approaches, of course, is that, when the software controlling the jet needs to be improved:
   Team #1 members "need and use" the raw data to adjust/develop their model, whereas
   Team #2 members introduce new mathematical algorithms to more closely model the behavior of the jet itself (and they use the data only to "prove" that the new algorithms/equations did improve the performance of the jet in the way in which it was designed).
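To restate that contrast in code form, here is a hypothetical sketch; the function names, gains, and the speed-correction term are all made up for illustration and do not come from any real control system. Team #1 hard-codes fudge factors tuned to the test flights it happened to have, while Team #2 adds a term that models the speed dependence, so one formula holds at any Mach number.

# Hypothetical sketch of the two approaches; all names and numbers are illustrative.

def climb_rate_team1(stick_pull, mach):
    """Team #1 (curve fitting): fudge factors tuned to the test flights at hand.
    It matches those flights, but there is no principle behind the numbers."""
    gain = 0.92 if mach < 1.5 else 1.47   # one-off tweaks that made the test data look right
    return gain * stick_pull * 100.0      # climb rate in arbitrary units

def climb_rate_team2(stick_pull, mach):
    """Team #2 (redesign): model the speed dependence instead of patching the output.
    The correction term stands in for the heating/pressure effect described above."""
    speed_correction = 1.0 + 0.25 * (mach - 1.0)
    return speed_correction * stick_pull * 100.0

# Team #1 only matches whatever it was tuned to; Team #2 varies smoothly with speed.
for mach in (1.0, 1.4, 2.0, 3.0):
    print(f"Mach {mach}: team1={climb_rate_team1(0.5, mach):6.1f}  "
          f"team2={climb_rate_team2(0.5, mach):6.1f}")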

Is this understandable?

Posted by: Dr. G. Paul Distefano AT 06:55 pm   |  Permalink   |  Email
Saturday, April 02 2016

GREAT NEWS FOR MIPS MEMBERS!!!

We have spent the last 15 months developing what has turned out to be some of the very best models on the market today. We call this version of our models the "Blaster Series". Even though we added many new algorithms that make our models better, the main contribution is how we now handle "flat markets" (aka sideways trading patterns, consolidation patterns, etc.). Call them what you will, but they wreak havoc on all types of timing models: partially because they are "trendless", and partially because these markets can change direction so frequently (every few days) that the timing models available on the market today usually get whipsawed trying to follow them (illustrated in the sketch after the list below).

Therefore, we added new code in our MIPS models to successfully deal with:
  (a) low volatility markets that "wiggle" either in a flat or very slow growing/degrading trend (2005),
  (b) high volatility markets that shoot up/down in big cycles, and end up where they started (2011), and
  (c) high volatility markets that trade in a "very tight trading range" of plus/minus 3-5% and change direction very frequently (like every 4-7 days, as in the first 8 months of 2015).
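As a generic illustration of the whipsaw problem described above (this is a simple moving-average crossover on simulated data, not the MIPS models): in a tight range that reverses every few days, the crossover flips position again and again, each time just after the short move it was chasing has already ended. All parameters below are assumptions chosen for the illustration.

# Generic illustration of whipsaw in a tight, frequently-reversing trading range.
# This is a plain moving-average crossover, not the MIPS models.
import numpy as np

rng = np.random.default_rng(2)
days = np.arange(170)                                   # roughly 8 months of trading days
# Price oscillates about +/-4% around 100 and reverses direction every ~6 days.
prices = 100.0 * (1.0 + 0.04 * np.sin(2 * np.pi * days / 12.0)
                  + rng.normal(0.0, 0.003, days.size))

fast = np.convolve(prices, np.ones(3) / 3, mode="valid")    # 3-day average
slow = np.convolve(prices, np.ones(10) / 10, mode="valid")  # 10-day average
m = min(len(fast), len(slow))
signal = (fast[-m:] > slow[-m:]).astype(float)              # 1 = long, 0 = cash

rets = np.diff(np.log(prices[-m:]))
strategy_log_return = float(np.sum(signal[:-1] * rets))
flips = int(np.sum(np.abs(np.diff(signal))))                # number of position changes

print(f"position changes (whipsaws): {flips}")
print(f"strategy total log return:   {strategy_log_return:+.3f}")
print(f"buy-and-hold log return:     {np.log(prices[-1] / prices[0]):+.3f}")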

The results are a developer's dream:
Our tests of the new models show that, compared to the performance when we started developing this new series, 

   1) the CAGR of the Blaster models is 30-50% higher,
   2) the Maximum Drawdowns have been reduced by about 35%, and
   3) the average number of annual trades is about the same as, or a little less than, before (see the sketch below for how CAGR and drawdown are defined).
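For reference, here are the standard definitions of the two metrics quoted above, applied to a small placeholder equity curve; the numbers in the sketch are arbitrary and are not the Blaster results.

# Standard definitions of the two metrics quoted above, applied to a
# placeholder equity curve (these numbers are NOT the Blaster results).
import numpy as np

equity = np.array([100.0, 112.0, 105.0, 130.0, 118.0, 155.0, 149.0, 180.0])  # one value per year

# CAGR: the constant annual growth rate that turns the first value into the last.
n_years = len(equity) - 1
cagr = (equity[-1] / equity[0]) ** (1.0 / n_years) - 1.0

# Maximum drawdown: the worst peak-to-trough decline along the curve.
running_peak = np.maximum.accumulate(equity)
max_drawdown = float(np.max(1.0 - equity / running_peak))

print(f"CAGR:          {cagr:.1%}")
print(f"Max drawdown:  {max_drawdown:.1%}")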


Nomenclature:
Below are the names that we will use going forward to distinguish between the new MIPS "Blaster Series" models and the names of the prior models from which they came.
Pre-Blaster Models     Blaster Models
MIPS1                  No new model
MIPS2                  MIPS22
MIPS3                  MIPS33
MIPS4                  MIPS44
MIPS/Nitro             MIPS/Nitro5

Blaster Performance: (chart not shown)

Posted by: Dr. G. Paul Distefano AT 04:46 pm   |  Permalink   |  Email

MIPS Timing Systems
P.O. Box 925214
Houston, TX  77292

An affordable and efficient stock market timing tool. Contact MIPS
281-251-MIPS (6477)
E-mail: support@mipstiming.com