My Technical Books


Two news flashes!!!

1) In October 2019 my book "Statistically Sound Indicators for Financial Market Trading" was published.  Please click the "Indicators" link for more information on this book.

2) In February 2020 my book "Permutation and Randomization Tests for Trading System Development" was published.  Please click the "Market Trading" link for more information on this book.


A quick note about my books...

The key word for my writing style is practical.  I've used neural networks in a wide variety of applications throughout my professional career, so I know what works in real life and what doesn't.  My books focus on those tools that I have found most useful, and they avoid drowning the reader in theory.  For most topics that I cover, the order of presentation is as follows:

1) Provide an intuitive justification for the technique.
2) Present only the most fundamental equations, those that are absolutely necessary.
3) Show commented C or C++ source code that implements the algorithm.
4) Provide references to the original source of the algorithm, so that interested readers can see proofs and other details.

Most readers love this approach, as it maximizes the amount of useful information in the book while minimizing the amount of detail that is extraneous to the application at hand.  However, a few readers complain about my lack of rigor.  They want to see proofs right there in the text so that they do not need to bother looking them up in scholarly papers.  If you are such a person, more interested in theory than practice, you might want to skip my books.  But if you want to dig right in and solve your problem, leaving the fine mathematical details to cited references, you've come to the right place!

Practical Neural Network Recipes in C++

This book was and still is my most popular.  It was an Academic Press "Super Seller" for its first few years in print, garnered rave reviews from technical journals, and almost 30 years after being written it is still selling briskly.
This is because it focuses on the timeless aspects of developing neural-network solutions to problems, those things that never change.
The book covers such topics as intelligently designing the training set, properly coding predictors and targets, and computing confidence in the network's predictions or decisions.
It also covers aspects of neural-net solutions that are usually ignored in other texts, such as awareness of hidden bias in training set design, handling circular discontinuities, and other subtleties that can make or break an application.

Note that the book's code for computing area under a ROC curve is flawed.  For a correct code snippet, click here.
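
In the meantime, here is a minimal sketch of one standard way to compute the area, using the Mann-Whitney interpretation of the AUC as the fraction of (positive, negative) pairs that are correctly ordered.  The function and variable names are my illustration here, not the book's code.

    #include <vector>

    // Area under the ROC curve via the Mann-Whitney statistic: the
    // fraction of (positive, negative) pairs in which the positive case
    // gets the higher score.  Ties count one half.  This is O(n*m), fine
    // for modest datasets; sort-based methods are faster for huge ones.
    double roc_auc ( const std::vector<double> &pos ,   // Scores of positive cases
                     const std::vector<double> &neg )   // Scores of negative cases
    {
       double sum = 0.0 ;
       for (double p : pos) {
          for (double n : neg) {
             if (p > n)
                sum += 1.0 ;
             else if (p == n)
                sum += 0.5 ;
             }
          }
       return sum / ((double) pos.size() * (double) neg.size()) ;
    }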

For the complete Table of Contents, click here.

For the accompanying code disk, click here.



Signal and Image Processing with Neural Networks

The crown jewel of this book is its extensive treatment of complex-valued neural networks.

Much of my real-world experience has involved signal and image processing, and when one deals with frequency-domain predictors (and perhaps even targets), complex numbers play a major role.  Extensive experience indicates that allowing the neurons in the network to operate in the complex domain, as opposed to restricting them to (possibly paired) real numbers, tremendously increases the power of the network.  Despite this fact, complex-valued neural networks have not caught on as they should.  This may be because the mathematics of complex-valued neural networks is hairy.

But have no fear.  This book derives formulas for forward and backward propagation and gradients in the complex domain.  It includes complete source code for designing, training, and testing complex-valued neural networks.  Honestly, if your work involves complex-valued predictors (which covers most items in the frequency domain) you need to try these networks.  The increase in power relative to real-valued networks can be astonishing.
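
To give a taste of what is involved, here is a minimal sketch of the forward pass through a single complex-domain neuron.  The activation shown, which squashes the magnitude with tanh while preserving the phase, is one common convention, not necessarily the one derived in the book.

    #include <vector>
    #include <complex>
    #include <cmath>

    // Forward pass through one complex-valued neuron.  The weighted sum
    // is an ordinary complex dot product plus a complex bias.
    std::complex<double> complex_neuron (
       const std::vector< std::complex<double> > &inputs ,
       const std::vector< std::complex<double> > &weights ,  // Same length as inputs
       std::complex<double> bias )
    {
       std::complex<double> net = bias ;
       for (size_t i = 0 ; i < inputs.size() ; i++)
          net += weights[i] * inputs[i] ;

       double mag = std::abs ( net ) ;
       if (mag == 0.0)
          return net ;              // Avoid division by zero below

       // Squash the magnitude, preserve the phase
       return net * (std::tanh ( mag ) / mag) ;
    }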

The remainder of the book covers preprocessing and the design of predictors that I have found particularly useful in signal and image processing.  These include Morlet wavelets (vastly superior to Daubechies wavelets and other orthogonal wavelets in my applications), as well as Gabor filters and techniques for isolating signals amid noise.
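
For orientation, the real Morlet wavelet is just a cosine windowed by a Gaussian, as in the sketch below.  The parameterization shown is one of several in the literature, not necessarily the book's.

    #include <cmath>

    // One common real Morlet form: a cosine of frequency f, windowed by
    // a Gaussian of width sigma, centered at t = 0.  Sampling this at
    // shifted and scaled values of t produces the filter coefficients.
    double morlet ( double t , double f , double sigma )
    {
       const double pi = 3.141592653589793 ;
       return std::exp ( -t * t / (2.0 * sigma * sigma) )
            * std::cos ( 2.0 * pi * f * t ) ;
    }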

Note that this text does not attempt to be in any way a comprehensive treatment of signal and image processing.  Rather, it focuses on those techniques that I have found to be exceptionally useful in neural network applications.

For the complete Table of Contents, click here.

For the accompanying code disk, click here.



Advanced Algorithms for Neural Networks

This book is a collection of assorted neural network techniques that I have found useful in many applications.  It includes advanced training algorithms such as conjugate gradients, Levenberg-Marquardt, simulated annealing, and stochastic smoothing.
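
As a point of reference, here is a bare-bones sketch of simulated annealing minimizing an arbitrary criterion function.  The geometric cooling schedule and Gaussian perturbations are illustrative choices only; the book's treatment is far more refined.

    #include <vector>
    #include <random>
    #include <functional>
    #include <cmath>

    // Minimal simulated annealing: perturb the parameters, always accept
    // an improvement, and accept a degradation with probability
    // exp(-delta/temp).  The temperature decays geometrically.
    std::vector<double> anneal (
       std::function<double(const std::vector<double>&)> criterion ,
       std::vector<double> params ,      // Starting point
       int iters , double start_temp , double decay )
    {
       std::mt19937 rng ( 12345 ) ;
       std::normal_distribution<double> gauss ( 0.0 , 1.0 ) ;
       std::uniform_real_distribution<double> unif ( 0.0 , 1.0 ) ;

       double current = criterion ( params ) ;
       double temp = start_temp ;

       for (int i = 0 ; i < iters ; i++) {
          std::vector<double> trial = params ;
          for (double &p : trial)
             p += temp * gauss ( rng ) ;  // Step size shrinks with temperature
          double val = criterion ( trial ) ;
          if (val < current  ||  unif ( rng ) < std::exp ( (current - val) / temp )) {
             params = trial ;
             current = val ;
             }
          temp *= decay ;
          }
       return params ;  // A production version would also track the best point visited
    }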

The book also includes a detailed treatment of the extremely useful Probabilistic Neural Network, along with its real-valued generalization, the General Regression Neural Network.  Computation of gradients of the PNN/GRNN weight vectors is covered, a topic I've never seen treated anywhere else and one that allows advanced training algorithms to be used with these models.
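
For readers who have not met the GRNN, its prediction is simply a Gaussian-kernel weighted average of the training targets.  A minimal sketch with a single shared sigma follows; the weight vectors mentioned above generalize this single sigma, which is where the gradient computation earns its keep.

    #include <vector>
    #include <cmath>

    // GRNN prediction: a kernel-weighted average in which every training
    // case contributes its target, weighted by a Gaussian function of its
    // distance from the case being predicted.
    double grnn_predict (
       const std::vector< std::vector<double> > &train_x ,  // Training predictors
       const std::vector<double> &train_y ,                 // Training targets
       const std::vector<double> &x ,                       // Case to predict
       double sigma )                                       // Kernel width
    {
       double numer = 0.0 , denom = 0.0 ;
       for (size_t i = 0 ; i < train_x.size() ; i++) {
          double dist = 0.0 ;
          for (size_t j = 0 ; j < x.size() ; j++) {
             double diff = train_x[i][j] - x[j] ;
             dist += diff * diff ;
             }
          double w = std::exp ( -dist / (2.0 * sigma * sigma) ) ;
          numer += w * train_y[i] ;
          denom += w ;
          }
       return numer / denom ;
    }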

It discusses the use of principal components for dimension reduction of the dataset, which is vital in most neural network applications.
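
Once the principal component directions (eigenvectors of the predictors' covariance matrix) have been found, the reduction itself is just a projection.  This sketch assumes the eigenvectors are already computed and the data already centered.

    #include <vector>

    // Project a centered case onto the leading principal components.
    // Each row of 'components' is one eigenvector of the covariance
    // matrix, sorted by decreasing eigenvalue; keeping only the first
    // few rows accomplishes the dimension reduction.
    std::vector<double> pca_project (
       const std::vector< std::vector<double> > &components ,
       const std::vector<double> &x )     // Centered predictor vector
    {
       std::vector<double> z ( components.size() , 0.0 ) ;
       for (size_t k = 0 ; k < components.size() ; k++)
          for (size_t j = 0 ; j < x.size() ; j++)
             z[k] += components[k][j] * x[j] ;
       return z ;
    }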

Last but certainly not least, it provides extensive coverage of modern algorithms for assessing generalization ability of trained models.  This includes bootstrap and jackknife algorithms, cross validation, and Efron's E0 and E632 estimators.
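
As a baseline for comparison with those estimators, here is a sketch of plain k-fold cross validation.  The training routine is a hypothetical stand-in for your own, and the squared-error criterion is just one possible choice.

    #include <vector>

    // K-fold cross validation.  'Trainer' is any callable that fits a
    // model to the supplied cases and returns a callable predictor;
    // it is a hypothetical stand-in for your own training routine.
    template <typename Trainer>
    double cross_validate (
       const std::vector< std::vector<double> > &x ,   // Predictors
       const std::vector<double> &y ,                  // Targets
       int k ,                                         // Number of folds
       Trainer train )
    {
       int n = (int) x.size() ;
       double total = 0.0 ;
       for (int fold = 0 ; fold < k ; fold++) {
          std::vector< std::vector<double> > tx ;
          std::vector<double> ty ;
          for (int i = 0 ; i < n ; i++)        // Train on cases outside this fold
             if (i % k != fold) {
                tx.push_back ( x[i] ) ;
                ty.push_back ( y[i] ) ;
                }
          auto model = train ( tx , ty ) ;
          for (int i = 0 ; i < n ; i++)        // Test on cases inside this fold
             if (i % k == fold) {
                double err = model ( x[i] ) - y[i] ;
                total += err * err ;           // Squared-error criterion
                }
          }
       return total / n ;                      // Mean squared out-of-sample error
    }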

For the complete Table of Contents, click here.

For the accompanying code disk, click here.



Neural, Novel, and Hybrid Algorithms for Time Series Prediction

Like the Advanced Algorithms for Neural Networks book, this one is a collection of assorted algorithms that I have found useful for time series prediction.  It is far from comprehensive, and it emphasizes the importance of heuristics and intuition in developing effective time series prediction algorithms.  This drives some readers crazy because it is the opposite of most such books, which cover less material but present detailed analyses and mathematical proofs for every technique described.  You pays your money and you takes your choice.

The book also uses nonstandard but nicely descriptive terminology when it discusses in-phase, in-quadrature, and quadrature-mirror filters.  Be warned.  If you are an electrical engineer already familiar with these filters, you will see terms bandied about in ways that you've never seen before but that make complete sense.  Some readers can handle this, and some can't.
 
Probably the most distinctive and useful aspect of this book is that it generalizes the Box-Jenkins ARMA model in two significant ways.
First, it makes the model fully multivariate.  The original ARMA model is univariate, although Box and Jenkins did extend it to a sort of bivariate version that they called a transfer function model.
This book, in contrast, allows multiple related series to interact with each other through AR and/or MA terms.  Techniques for training this exceedingly complex model are discussed, along with complete source code.

Second, and even more interesting, the traditional ARMA model assumes that the shocks are random noise.  This book introduces the concept of shocks that are predictable, just not predictable with an ARMA model.  So the book presents an algorithm and complete source code for fitting an ARMA model, computing the implied shocks, fitting a neural network model to the shocks, using this model to predict future shocks, and then inverting the ARMA model with these predicted shocks to predict the original series.  In some applications this has astonishing power.
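
To make the flow concrete, here is a sketch of the final prediction step for a simple ARMA(1,1) version of this hybrid.  The coefficient names and the neural model supplying the predicted shock are illustrative, not the book's code.

    // One step of the hybrid prediction described above, for ARMA(1,1)
    // (using the sign convention in which the MA term is added).  An
    // ordinary ARMA forecast assumes the next shock is zero; the hybrid
    // model replaces that zero with a neural network's prediction.
    double hybrid_predict (
       double phi ,              // AR coefficient
       double theta ,            // MA coefficient
       double mean ,             // Series mean
       double last_value ,       // Most recent observation
       double last_shock ,       // Most recent implied shock
       double predicted_shock )  // The neural net's prediction of the next shock
    {
       return mean + phi * (last_value - mean)
                   + theta * last_shock
                   + predicted_shock ;
    }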

The bottom line is that the methods in this book work in real life, or at least they do in my life.  I have found them to be enormously valuable members of my toolbox, even though some of them may feel a bit heuristic to the theoretically inclined.

For the complete Table of Contents, click here.

For the accompanying code disk, click here.



Assessing and Improving Prediction and Classification

This book begins by presenting methods for performing practical, real-life assessment of the performance of prediction and classification models.  It then goes on to discuss techniques for improving the performance of such models by intelligent resampling of training/testing data, combining multiple models into sophisticated committees, and making use of exogenous information to dynamically choose modeling methodologies.  Rigorous statistical techniques for computing confidence in predictions and decisions receive extensive treatment.  Finally, a hundred pages are devoted to the use of information theory in evaluating and selecting useful predictors.  Special attention is paid to Schreiber's Information Transfer, a recent generalization of Granger causality.
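
To illustrate the flavor of the information-theoretic material, here is a sketch of estimating the mutual information between a discretized candidate predictor and the target from a joint histogram.  The binning is assumed done beforehand, and this simple plug-in estimator ignores the bias corrections that a careful treatment requires.

    #include <vector>
    #include <cmath>

    // Plug-in estimate of mutual information (in nats) between a
    // discretized predictor and target.  Large values suggest that the
    // predictor carries information about the target.
    double mutual_info (
       const std::vector<int> &xbin ,   // Predictor bin index for each case
       const std::vector<int> &ybin ,   // Target bin index for each case
       int nx , int ny )                // Number of bins for each variable
    {
       int n = (int) xbin.size() ;
       std::vector<double> joint ( nx * ny , 0.0 ) , px ( nx , 0.0 ) , py ( ny , 0.0 ) ;
       for (int i = 0 ; i < n ; i++) {
          joint[xbin[i]*ny+ybin[i]] += 1.0 / n ;
          px[xbin[i]] += 1.0 / n ;
          py[ybin[i]] += 1.0 / n ;
          }
       double mi = 0.0 ;
       for (int ix = 0 ; ix < nx ; ix++)
          for (int iy = 0 ; iy < ny ; iy++) {
             double p = joint[ix*ny+iy] ;
             if (p > 0.0)
                mi += p * std::log ( p / (px[ix] * py[iy]) ) ;
             }
       return mi ;      // Divide by log(2.0) for bits
    }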

561 pages

As in all of my books, heavily commented C++ code is given for every algorithm and technique.

For the complete Preface, click here.

For the complete Table of Contents, click here.

For complete source code, click here.



Deep Belief Nets in C++ and CUDA C

Please click the "Deep learning" tab on the left for more information on these three books and their accompanying free programs.



Data Mining Algorithms in C++

Please click the "Data Mining" tab on the left for more information on this book and its accompanying free program.