This new logic I’m working on is pretty cool, now that I think about it. I was really forced to think about it more critically, in a general sense, when I tried to explain it to a friend of mine today. I’m so used to just knowing how it’s going to work, from all the details, that it was quite the exercise to condense and simplify it so that someone else could understand it. Now that I’ve done that, I might as well document it forever so that I can look back and go, wow.

This won’t go into details that could hurt my biz of course 🙂

First off, start with about 15,000 stocks. The basic anatomy of a stock is open, high, low, close, and volume, recorded for every stock, for every trade, for every day. Looking historically, one usually only looks at the end-of-day stuff. What we do is create about 200 different “indicators” based on this information (exponential moving averages, highs, lows, trend lines, MACDs, RSI, etc. – basically a lot of different formulas).

To create the indicators, I’ve written software that goes to an ‘indicators’ directory and looks for files following a specific naming convention (this also lets me put them in a specific order, which is important for indicators that depend upon each other). So I can simply create a new file, put the formula in it (as a function), and my system automatically includes it and creates its database table from a standard template. The table has just three fields – symbol, date, indicator value – to varying degrees of accuracy (the database is dumb as far as this is concerned; it simply stores DECIMAL(21,11) values). Throughout the day these values change; at the end of the day they stay the same, and the next day gets added on, keeping a record of the EOD data for each indicator so we can do further analyses later.
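To make the plugin idea concrete, here’s a minimal sketch of how such a directory scan and table template might look. Everything here is an assumption – the numeric-prefix naming convention, the `compute` function each file is expected to export, and the SQLite backend are all invented for illustration:

```python
import importlib.util
import sqlite3
from pathlib import Path

def load_indicators(directory="indicators"):
    """Load indicator modules named like '010_ema.py' in sorted order,
    so the numeric prefix controls evaluation order (dependencies first).
    Each file is assumed to export a compute(symbol, bars) function."""
    indicators = []
    for path in sorted(Path(directory).glob("[0-9][0-9][0-9]_*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        indicators.append((path.stem, module.compute))
    return indicators

def create_indicator_table(conn, name):
    """Every indicator gets the same dumb three-column table
    from a single standard template."""
    conn.execute(
        f"CREATE TABLE IF NOT EXISTS {name} ("
        "symbol TEXT, date TEXT, value DECIMAL(21,11), "
        "PRIMARY KEY (symbol, date))"
    )
```

The nice property of this shape is that adding an indicator is just dropping a file in the directory – no registration code anywhere else.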

Now that we have all this data, what do we do with it? Well, we’ve developed about 30 different signals, each either on or off (1 or 0), that depend upon these indicators in different ways. Some depend upon only one or two. Others depend upon the time of day, plus ten indicators, if this one then not that one, what phase the moon is in (joke), etc.
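A signal in this sense is just a boolean function over the indicator values. Here’s a toy example – the indicator names and thresholds are completely made up, not our actual logic:

```python
def signal_trend_ok(indicators):
    """Hypothetical signal: on (1) when the close sits above its 20-day
    EMA and the 14-day RSI isn't overbought; otherwise off (0)."""
    close = indicators["close"]
    ema_20 = indicators["ema_20"]
    rsi_14 = indicators["rsi_14"]
    return 1 if close > ema_20 and rsi_14 < 70 else 0
```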

Each of these signals is also stored in the database. We keep a minute by minute copy of these for up to 5 months.

And, another level of abstraction! We have developed four different strategies that look at and grade each of these signals differently (all for different purposes, and with an accuracy – I can’t explain how we define ‘accuracy’ – of 94%–100%).

And one more thing to make it EVEN more complicated (not finished yet): the system adapts itself. Using our own algorithms, it analyses the past and present accuracy of each piece of logic for each symbol. If the accuracy falls below a certain level for that symbol, its rating falls as well. So if it shows up on a list, it will be graded lower – or not even show up, depending on the situation.
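The adaptive downgrade could be sketched like this – the thresholds and the linear penalty are placeholders I’ve invented, not the real algorithm:

```python
def adjust_rating(base_rating, accuracy, floor=0.94, drop_below=0.85):
    """Cut a symbol's rating when its tracked accuracy slips below a
    floor, and hide it entirely when accuracy falls too far.
    (Hypothetical thresholds; the real grading logic is proprietary.)"""
    if accuracy < drop_below:
        return None                                # don't show it on lists at all
    if accuracy < floor:
        return base_rating * (accuracy / floor)    # graded lower, proportionally
    return base_rating
```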

Near the end of this year we’ll be working on building a Mathematica model to replicate this so we can do further analysis on it – and abstract even more. I want the system to grade the symbols not only individually, but as part of a whole (e.g. compared to the grade of the TSX or NYSE) or of their sector, and use those as baselines to ‘normalize’ the grading process over time. That way, if a symbol goes into a grading ‘slump’, we won’t downgrade it (permanently) if its entire sector is in a slump as well. Lots to think about there.
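The simplest version of that normalization is to grade a symbol relative to its sector’s average rather than in absolute terms – a sketch, not the eventual Mathematica model:

```python
def normalized_grade(symbol_grade, sector_grades):
    """Express a symbol's grade relative to its sector's average,
    so a sector-wide slump doesn't read as one symbol underperforming."""
    sector_avg = sum(sector_grades) / len(sector_grades)
    return symbol_grade - sector_avg
```

A symbol whose raw grade drops from 90 to 70 looks bad in isolation, but if its whole sector dropped by 20 as well, its normalized grade is unchanged.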

But the holy grail is that once we have all this data, decision tables, etc. at our disposal in Mathematica, we can use things like ‘Rough Sets’ to do the analysis – where the computer will find strange relationships between data and decisions that may not have been apparent before. I’ve read about a marketing company that did this and found a relationship between mustard and golf: advertise mustard during a golf game and sales double, or something. Weird. Walmart uses this heavily, as they keep a detailed database of all their sales transactions.
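For the curious, the core rough-sets idea is small: group rows by their condition attributes (the indiscernibility classes), then ask which classes are *certainly* inside a decision (the lower approximation) versus *possibly* inside it (the upper approximation). A toy sketch with invented data:

```python
from collections import defaultdict

def rough_approximations(rows, target_decision):
    """rows: iterable of (condition_attributes_tuple, decision).
    Returns (lower, upper) approximations of the target decision:
    lower = classes whose rows ALL have the decision (certain),
    upper = classes where AT LEAST ONE row has it (possible)."""
    classes = defaultdict(list)
    for attrs, decision in rows:
        classes[attrs].append(decision)
    lower, upper = set(), set()
    for attrs, decisions in classes.items():
        if any(d == target_decision for d in decisions):
            upper.add(attrs)
            if all(d == target_decision for d in decisions):
                lower.add(attrs)
    return lower, upper
```

Classes in the upper but not the lower approximation are the “boundary” – the ambiguous cases where the attributes don’t fully determine the decision, which is exactly where the interesting hidden relationships tend to live.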

That’s enough.
