The 2013 edition of the IEEE Congress on Evolutionary Computation (CEC) took place in Cancún, Mexico - a rather exotic setting for us Europeans. Beyond the pleasant location, it is worth mentioning that CEC is one of the largest and most important conferences dedicated to evolutionary computation - which gives enthusiasts in the field, such as myself, at least two reasons to take part in this event.
Over 200 researchers, professors and PhD students, mainly from America and Asia, took part in the event to present their results, but also to take advantage of the unique opportunity to exchange ideas and start new collaborations with well-known people in the field. The world of IT companies was almost invisible - I was probably the only representative of the industry. The academic environment is clearly ahead of everyone else in making discoveries using IT-specific techniques and methods, unfortunately without the support of the companies specialised in this sector. The topics addressed during the conference cover a very wide range of fields: biochemistry, biology, medicine, economics, game theory, meteorology, etc. The applications of evolutionary algorithms mostly cover hybridization and optimisation experiments on known methods, while waiting for major discoveries or simply for high-performance software that would reduce the number of human errors of interpretation - for instance, determining a tumour type by analysing a radiograph. For someone with general knowledge of evolutionary computation, attending the CEC presentations brings new perspectives on the field, but it also whets the appetite for application-oriented research.
The financial applications at CEC were exclusively oriented towards stock market forecasting. From the very beginning of their presentations, the authors admitted the risks of approaching such a topic, given the scepticism of many regarding the existence of patterns in financial fluctuations, or the fine line between legality and illegality. The Efficient Market Hypothesis (EMH) encourages the sceptics, claiming that information is available to all participants right away and that stock prices immediately reflect the current state of the market. EMH implies that all participants in the stock market can obtain the same winnings regardless of their experience in the field, as prices are completely random. Malkiel explains in "The Efficient Market Hypothesis and Its Critics" how EMH rejects the utility of technical and fundamental analysis - the basic methods incorporated in such applications, with a success ratio that manages to cast doubt on the validity of EMH. Lately, investors have been using fundamental analysis independently, based on macroeconomic indices and exchange rates, or technical analysis, based on stock prices and transaction volumes. Technical analysis of historical stock exchange data is used in the applications presented at CEC 2013, which differ in the optimization algorithms they employ - genetic programming, tabu search. The optimisation methods in the works discussed in this article rely on different internal structures, but they all aim to give the user an answer: whether a block of shares is worth buying or selling, and the profit resulting from the operation.
Kampouridis et al. propose in "Metaheuristics Application on a Financial Forecasting Problem" hybridizations along the main line of the algorithm of an application already at its 8th version, EDDIE (ED), which has been the object of their research for many years. ED is based on a version of genetic programming that tries to generate optimal trading strategies. The algorithm takes as input the values of technical analysis indicators applied to historical data (daily closing price and transaction volume), the actual prices for a limited period of time, and the result of a historical classification (1 - decision to buy, 0 - decision not to buy). The technical analysis indicators used are: "moving average" (MA), "trade break out" (TBR), "filter" (FLR), "volatility" (Vol), "momentum" (Mom) and "momentum moving average" (MomMA). The values of the indicators involved in the algorithm are calculated over a certain number of days, for the short and the long term. In ED7 the calculation periods of the technical indicators were constant, namely 12 and 50 days - these periods being taken directly from the daily practice of economic analysts. The structure to be optimised is a genetic decision tree conforming to the following grammar:
<tree> ::= if-then-else <condition> <tree> <tree> | <decision>
<condition> ::= <condition> "AND" <condition> |
<condition> "OR" <condition> |
"NOT" <condition> |
<variable> <operator> <threshold>
<variable> ::= MA12 | MA50 |
TBR12 | TBR50 | FLR12 |
FLR50 | Vol12 | Vol50 |
Mom12 | Mom50 | MomMA12 |
MomMA50
<operator> ::= < | > | =
<threshold> ::= rational number
<decision> ::= 0 | 1
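
To make the grammar concrete, here is a minimal Python sketch - my own illustration, not code from the paper - of how such a genetic decision tree can be represented and evaluated for a single trading day; the indicator values (e.g. MA12) are assumed to be precomputed:

import operator

# Comparison operators named in the <operator> production.
OPS = {"<": operator.lt, ">": operator.gt, "=": operator.eq}

def holds(cond, ind):
    kind = cond[0]
    if kind == "cmp":                      # <variable> <operator> <threshold>
        _, var, op, thr = cond
        return OPS[op](ind[var], thr)
    if kind == "NOT":
        return not holds(cond[1], ind)
    left, right = holds(cond[1], ind), holds(cond[2], ind)
    return left and right if kind == "AND" else left or right

def evaluate(tree, ind):
    """Return the 0/1 decision of a tree for one day's indicator values."""
    if tree[0] == "decision":              # <decision> ::= 0 | 1
        return tree[1]
    _, cond, then_t, else_t = tree         # if-then-else node
    return evaluate(then_t if holds(cond, ind) else else_t, ind)

# Example: "if MA12 > 0.5 AND Mom50 < 0 then buy (1) else don't buy (0)"
tree = ("if-then-else",
        ("AND", ("cmp", "MA12", ">", 0.5), ("cmp", "Mom50", "<", 0.0)),
        ("decision", 1), ("decision", 0))
print(evaluate(tree, {"MA12": 0.62, "Mom50": -0.1}))   # -> 1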
A tree described by this grammar represents a solution for a trading decision concerning a specific block of shares. A set of such decisions is evaluated by computing the ratio between correct and wrong decisions, with different weights - an error in a decision to buy carrying the highest weight. The genetic algorithm in ED7 generates trees from a search space limited by the fixed calculation periods of the technical indicators; with ED8 the periods become variable, opening up the search space and avoiding premature convergence. The results of ED8 were reported as promising, but search efficiency dropped; therefore, in order to maintain the diversity of solutions while recovering the efficiency of the search, search metaheuristics were added to the algorithm to optimise specific characteristics of the trees: the variables describing the new calculation periods of the technical indicators. Simulated annealing (SA) is an algorithm that accepts a worse solution with a certain probability (to avoid getting stuck in a local optimum); in ED8, SA is applied to the leaf nodes of randomly chosen trees from the population, changing the period by a value from the interval [-10, +10], as sketched below. Tabu search (TS) is a metaheuristic that prevents the search from returning to already visited solutions by keeping them in a tabu list whose entries have a limited lifetime; in ED8, TS is applied in the same manner as SA.
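
As an illustration of this SA step - a sketch under my own assumptions, not the authors' code - the period of a leaf node can be perturbed and the perturbation accepted with the textbook annealing criterion:

import math
import random

def anneal_period(period, fitness, temperature):
    """One SA step on a leaf node's indicator period: propose
    period + delta with delta drawn from [-10, +10], keep improvements,
    and accept worse candidates with probability exp(delta_f / T) -
    this is what lets the search escape local optima."""
    candidate = max(1, period + random.randint(-10, 10))  # keep periods >= 1 day
    delta_f = fitness(candidate) - fitness(period)
    if delta_f >= 0 or random.random() < math.exp(delta_f / temperature):
        return candidate
    return period

# Toy usage with a fitness that happens to prefer periods near 30 days;
# the cooling schedule below is an arbitrary illustration.
fit = lambda p: -abs(p - 30)
period = 12
for temp in (10.0, 5.0, 2.0, 1.0, 0.5):
    period = anneal_period(period, fit, temp)
print(period)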
The application was tested on 10 data sets obtained from finance.yahoo.com. While ED8 achieves an average precision of 0.5735 over these 10 data sets and 0.75 for its best solution, ED8-SA returns 0.5773 on the same data and 0.81 for the best solution; ED8-TS returns 0.5591, with 0.81 for the best solution. Judging by these results, the improvement in the average value is not remarkable, but investors using this application will certainly look for the best solution, and for that purpose the new hybrids prove effective.
Kuo et al. present in "Dynamic Stock Trading System based on Quantum-Inspired Tabu Search Algorithm" a new dynamic system able to generate complex buy and sell strategies in a realistic context. The basic algorithm of the application, a new form of tabu search with concepts borrowed from quantum physics, was developed by the same team. The idea of the algorithm is to assign each possible solution a quantum matrix, each characteristic of the solution having a value in the interval [0, 1] - a quantum probability. Tabu search translates in this context into updating the matrix over all solutions: subtracting weight for the characteristics of the worst solution and adding weight for the characteristics of the best solution. The dynamism of the application comes from the sliding window concept - the training data sets change from one iteration to the next, together with the test sets. The technical analysis indicators are again selected from economic theory, with fixed calculation periods per indicator. A selection of certain indicators represents a buy/sell strategy - a solution. A solution is evaluated by estimating the profit, in cash and in number of stocks. The system was assessed on historical data and compared against other solutions suggested in the past. The simulations presented in the paper report, for certain data sets, a maximum profit of 69.94%, compared to approximately 14% for other implementations on the same data.
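
To illustrate the mechanism - a hedged sketch in which the step size, the sampling scheme and the toy objective are my assumptions, not values from the paper - the quantum matrix can be reduced to a probability vector over indicators that is nudged towards the best observed solution and away from the worst:

import random

DELTA = 0.05   # assumed learning step, not a value from the paper

def sample(q):
    """Draw a strategy: indicator i is selected with probability q[i]."""
    return [random.random() < qi for qi in q]

def update(q, best, worst):
    """Reinforce the best solution's choices and suppress the worst's,
    clamping every probability to [0, 1]."""
    return [min(1.0, max(0.0, qi + DELTA * (b - w)))
            for qi, b, w in zip(q, best, worst)]

# Toy objective: pretend indicators 0 and 2 are the profitable ones.
profit = lambda s: s[0] + s[2] - 0.5 * s[1]

q = [0.5, 0.5, 0.5]                 # start undecided on three indicators
for _ in range(200):
    population = sorted((sample(q) for _ in range(10)), key=profit)
    q = update(q, best=population[-1], worst=population[0])
print([round(qi, 2) for qi in q])   # drifts towards [1.0, 0.0, 1.0]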
The two applications described above are comparable only with respect to their application field, as the first offers just a support mechanism for investors' decisions, while the second is a complex, standalone piece of software. In terms of implementation, the two are alike in their use of technical analysis indicators as decision support. From an algorithmic point of view, the two proposals differ in how they treat the size of the search space - Kampouridis opens it up considerably, while Kuo keeps it tightly controlled (even though the authors do not provide details), I might say. These comparisons are made only to illustrate the diversity of approaches that may lead to high-end solutions, and not as a review of these papers, which were very well received by the audience at CEC.
On a final, more lyrical note, participating in CEC 2013 was a unique experience, thanks to the wealth of ideas and methods presented, the numerous participants open to discussions and debates at the algorithmic level, the good organisation, and the exotic ambiance of the conference venue. The conference is organised annually - the next edition, CEC 2014, will be held in Beijing, China. So, keep on researching!