Jim Killingsworth

Latest Posts

Generalized Hyperbolic Distributions

The generalized hyperbolic distribution was pioneered by Ole Barndorff-Nielsen with applications related to wind-blown sand. There are several probability distributions that can be expressed as special cases of the generalized hyperbolic distribution, which is indicative of its versatility. As we shall see in this post, this distribution seems to work pretty well for modeling price fluctuations in financial markets.

Characteristic Functions and Stable Distributions

In the previous post, I explored the use of the generalized normal distribution to model the price movements of financial instruments. This approach offered better fitting distributions than the normal and Laplace distributions studied in earlier posts. But the shape of the fitted distributions still didn’t quite match the shape of the histogram. In this post, I want to explore a class of probability distributions known as Lévy alpha-stable distributions. And to explore these distributions, we need to understand characteristic functions.
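
For reference, the characteristic function of a random variable X with density f_X is given by the standard definition below; this is general background rather than a formula taken from the post itself:

$$ \varphi_X(t) = \operatorname{E}\!\left[e^{itX}\right] = \int_{-\infty}^{\infty} e^{itx} f_X(x) \, dx $$

Stable distributions are usually specified through their characteristic functions because, outside of a few special cases, their density functions have no closed form.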

Generalized Normal Distributions

The generalized normal distribution is a family of probability distributions that vary according to a shape parameter. The symmetrical variant of this distribution may go by other names such as the generalized error distribution, the generalized Gaussian distribution, etc. In this post, we will explore this probability distribution and its relationship with the normal distribution and the Laplace distribution. I’ll also show some examples illustrating the use of the maximum likelihood method to estimate the parameters of the distribution using real-life data.
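
For reference, the symmetric generalized normal density is commonly written with location μ, scale α, and shape β, as below; this is the standard parameterization and not necessarily the exact notation used in the post:

$$ f(x) = \frac{\beta}{2 \alpha \, \Gamma(1/\beta)} \exp\!\left[ -\left( \frac{\lvert x - \mu \rvert}{\alpha} \right)^{\beta} \right] $$

Setting β = 1 recovers the Laplace distribution, while β = 2 recovers the normal distribution (with α = σ√2).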

Convolutions and First Differences

This is a study of the convolution operation and its applications to time series data and probability distributions. In this post, I first demonstrate the use of the convolution operation to find the first differences of some randomly generated time series data. I then show how to find the distribution of the first differences based on the distribution of the values in the original series. I also show how to work backwards using deconvolution, which is the inverse of the convolution operation.
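
As a minimal sketch of the first idea (not code from the post itself), convolving a series with the two-point kernel [1, -1] produces its first differences; the example below assumes NumPy is available.

```python
import numpy as np

# Randomly generated time series (cumulative sum of random steps).
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=10))

# Convolving with the kernel [1, -1] yields x[t] - x[t-1] at each step.
# The "valid" mode keeps only the fully overlapping terms.
first_differences = np.convolve(series, [1, -1], mode="valid")

# Same result as taking the difference of adjacent values directly.
assert np.allclose(first_differences, np.diff(series))
```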

Using the Kelly Criterion for Optimal Bet Sizing

Suppose a gambler is playing a game in which he has a statistical advantage. And let’s assume he can quantify his advantage with a fair amount of accuracy. If the gambler plays this game over and over again, what percentage of his bankroll should he bet on each round if he wants to maximize his winnings? In this post, I explore this question using a series of examples illustrating the application of the Kelly criterion. Many of the ideas presented here are inspired by materials written by Ed Seykota, Edward O. Thorp, and J. L. Kelly.
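
As a rough point of reference, the textbook Kelly fraction for a simple binary bet with win probability p and net payout odds b is f* = p - (1 - p) / b; the little helper below only illustrates that formula and is not code from the post.

```python
def kelly_fraction(p: float, b: float) -> float:
    """Fraction of bankroll to wager on a binary bet.

    p is the probability of winning, and b is the net payout per unit
    wagered (b = 1.0 for an even-money bet). A negative result means
    the bet has no edge and should be skipped.
    """
    return p - (1.0 - p) / b

# An even-money bet won 55% of the time suggests wagering about 10%
# of the bankroll on each round.
print(kelly_fraction(p=0.55, b=1.0))  # approximately 0.10
```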

Understanding the Discrete Fourier Transform

For the longest time, the Fourier transform remained a bit of a mystery to me. I knew it involved transforming a function in the time domain into a representation in the frequency domain. And I knew it had something to do with sinusoidal waves. But I didn’t understand what it meant to have a frequency domain representation of a function. As it turns out, it’s quite a simple thing once you realize what the frequency values represent. In this post, I explain the discrete Fourier transform by working through a set of examples.
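
As a small illustration of the definition (not an excerpt from the post), each output of the discrete Fourier transform is a sum of the input samples weighted by complex exponentials at one particular frequency.

```python
import numpy as np

def dft(x):
    """Discrete Fourier transform computed directly from the definition:
    X[k] = sum_n x[n] * exp(-2j * pi * k * n / N)
    """
    x = np.asarray(x, dtype=complex)
    n = np.arange(len(x))
    k = n.reshape(-1, 1)
    return np.exp(-2j * np.pi * k * n / len(x)) @ x

samples = [0.0, 1.0, 0.0, -1.0]  # one cycle of a sine wave
assert np.allclose(dft(samples), np.fft.fft(samples))
```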

Higher Order Polynomial Approximation

This is an extension of one of my earlier posts on polynomial approximations. Previously, I showed how to find approximate solutions to the weighted coin toss problem using first, second, and third order polynomials to describe the weights of the biased coins. In this post, I demonstrate a generalized method for applying this technique using higher order polynomials.

More Coin Toss Performance Enhancements

This post is an extension of the previous post in which I explored some techniques for speeding up the calculations used to find approximate solutions to the coin toss problem. Here I want to examine a couple of enhancements to these ideas. First, I describe an enhanced computation method that cuts the number of floating-point operations required almost in half. Second, I introduce a progressive polynomial approximation technique that can reduce the number of iterations needed to find a solution.

Performance Tuning for the Coin Toss Model

I wrapped up the last post expressing a desire to study the approximation technique using larger models of the coin toss game. Up until now, I was using a naive implementation of the computation method to perform the calculations, one that was too crude and too slow for larger models. In this post, I demonstrate an alternative approach that has a much better performance profile. I also describe a simple technique that can be used to reduce the number of iterations required when applying the hill climbing algorithm.

Approximations with Polynomials

The previous post demonstrates the use of biases derived from a simple line formula to find an approximate solution to the weighted coin toss problem. In this post, I want to expand on some of these ideas using various polynomial formulas to describe the weights of the biased coins. As this experiment demonstrates, higher order polynomials do seem to yield better results.

Approximating the Target Distribution

In previous studies of the weighted coin toss game, our focus was on finding a set of weights for the biased coins that would yield a given target distribution for the expected outcome. In this post, I want to explore a different approach. Instead of finding an exact solution, I want to try finding an approximate solution using a set of weights based on a parameterized formula. This might produce an approximate solution that is good enough for practical purposes while also being easier to compute for a model with a large number of coin toss events per round.

Generalizing the Coin Toss Markov Model

This is a continuation of a series of posts on weighted coin toss games. In previous posts, we explored variations of the weighted coin toss game using two, three, and four flips per round. In each variation, the game was described using a Markov model with a fixed number of coin toss events. This post presents a generalized form of the Markov model that can be used to model a game with an arbitrary number of coin toss events. I also show a few examples using a model of the coin toss game with ten flips per round.

Visualizing Saddle Points and Minimums

The two previous posts demonstrated how to use the method of Lagrange multipliers to find the optimum solution for a coin toss game with biased coins of unknown weight. In one case, we found the minimum of a cost function based on the Lagrangian function. In the other case, we found the saddle point of the Lagrangian function itself. The purpose of this post is to provide some visual representations of these functions.

Finding the Roots with Newton’s Method

In the last post, we explored the use of gradient descent and other optimization methods to find the root of a Lagrangian function. These optimization methods work by finding the minimum of a cost function. In this post, I want to explore the multivariate form of Newton’s method as an alternative. Unlike optimization methods such as gradient descent, Newton’s method can find solutions that lie on a saddle point, eliminating the need for a cost function. This may or may not be a better approach.
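
To make the idea concrete, here is a minimal sketch of the multivariate Newton iteration applied to a toy system of equations; the Lagrangian system from the post would be substituted for the hypothetical F and its Jacobian J shown here.

```python
import numpy as np

def newton(F, J, x0, iterations=20):
    """Multivariate Newton's method: solve F(x) = 0 by repeatedly
    solving the linear system J(x) * step = -F(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        x = x + np.linalg.solve(J(x), -F(x))
    return x

# Toy example: intersection of a circle and a line.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])

print(newton(F, J, x0=[1.0, 0.0]))  # converges to [0.7071..., 0.7071...]
```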

Equality Constraints and Lagrange Multipliers

My last few posts have centered around a weighted coin toss game in which the weights of a set of biased coins are determined based on a known target distribution. And while multiple solutions are possible, the inclusion of a scoring function allowed for a unique solution to be found. Until now, I was not sure how to include the scoring function in such a way that I could solve the problem numerically for an arbitrary number of coin tosses. In this post, I show how to use the method of Lagrange multipliers to minimize the scoring function while conforming to the constraints of the coin toss problem.
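
In general terms (using a generic objective f and constraint function g rather than the post’s specific scoring function and coin toss constraints), the method looks for stationary points of the augmented Lagrangian function:

$$ \mathcal{L}(x, \lambda) = f(x) + \lambda^{\mathsf{T}} g(x), \qquad \nabla_{x} \mathcal{L}(x, \lambda) = 0, \qquad g(x) = 0 $$

Solving the stationarity conditions together with the constraint equations yields a candidate for the constrained minimum.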

Minimizing with Gradient Descent

The previous post demonstrates the use of a hill climbing algorithm to find a set of parameters that minimize a cost function associated with a coin toss game. In this post, I want to explore the use of a gradient descent algorithm as an alternative. The two classes of algorithms are very similar in that they both iteratively update an estimated parameter set. But while the hill climbing algorithm only updates one parameter at a time, the gradient descent approach updates all parameters in proportion to the direction of steepest descent.
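
A minimal sketch of that update rule, written generically rather than with the post’s actual cost function, looks like this:

```python
import numpy as np

def gradient_descent(gradient, x0, learning_rate=0.1, iterations=1000):
    """Update every parameter at once by stepping against the gradient,
    which points in the direction of steepest ascent."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        x = x - learning_rate * gradient(x)
    return x

# Toy cost function: f(x) = (x0 - 3)^2 + (x1 + 1)^2, minimized at (3, -1).
gradient = lambda x: np.array([2.0 * (x[0] - 3.0), 2.0 * (x[1] + 1.0)])
print(gradient_descent(gradient, x0=[0.0, 0.0]))  # approximately [3.0, -1.0]
```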

Visualizing the Climb up the Hill

The hill climbing algorithm described in my previous post finds the weights of biased coins for a coin toss game in which the distribution of possible outcomes is known. In the example presented, there are many possible solutions. A cost function is used to find a valid solution, and a scoring function is used to narrow down the set of valid solutions to a single result. In this post, I want to look at some visualizations to get a better feel for how the algorithm works.

Hill Climbing and Cost Functions

If you’re climbing a hill, you know you’ve reached the top when you can’t take any further steps that lead to a higher elevation. But if the hill is actually a plateau with a flat top, the topmost point you reach can depend largely on where you started climbing. In this post, I elaborate on the topic of my previous post titled Estimating the Weights of Biased Coins. This post presents the results of an improved hill climbing algorithm and also some ideas for ranking the different solutions that fall on a plateau of valid values.

Estimating the Weights of Biased Coins

Suppose we flip a coin four times. If the coin lands on heads, we win a dollar. If the coin lands on tails, we lose a dollar. After four tosses of the coin, the best possible outcome is a winning total of four dollars. The worst possible outcome is a loss of four dollars. Let’s assume the coin is a biased coin. Furthermore, let’s also assume a different biased coin is used on each flip depending on the total amount won or lost since the beginning of the game. How can we determine the bias of each coin given a probability mass function of the expected outcome?
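
To make the setup concrete, here is a small simulation of the game as described, with a hypothetical weights dictionary mapping the running total to the probability of heads for the coin used on the next flip; the actual weights are what the post sets out to estimate.

```python
import random

def play_round(weights, flips=4):
    """Play one round: win a dollar on heads, lose a dollar on tails,
    choosing a differently biased coin based on the running total."""
    total = 0
    for _ in range(flips):
        p_heads = weights[total]          # bias of the coin used for this flip
        total += 1 if random.random() < p_heads else -1
    return total                          # final outcome between -4 and +4

# Hypothetical biases keyed by every running total reachable during a round.
weights = {0: 0.5, 1: 0.4, -1: 0.6, 2: 0.3, -2: 0.7, 3: 0.2, -3: 0.8}
outcomes = [play_round(weights) for _ in range(100_000)]
# A histogram of the outcomes approximates the probability mass function.
```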

Separating Signal from Noise

I want to experiment with modeling price changes over time as the combination of a smooth trend component overlaid with a random noise component. My goal is to examine the statistical properties of each constituent component and compare the results to the statistical properties of the undecomposed market price.

The Very Strange Chinese Yuan

In my previous post, I explored the distribution of price fluctuations for a variety of different markets and time frames. Across all data sets, plotting the log returns in a histogram appears to roughly approximate the density function of a Laplace distribution. The intraday prices of the Chinese yuan, however, seem to exhibit a distinctly strange phenomenon.

The Distribution of Price Fluctuations

Are price fluctuations in the financial markets normally distributed? If I understand history correctly, it was French mathematician Louis Bachelier who was the first to explore this topic over 100 years ago. While Bachelier’s work assumed that price movements were normally distributed, a mathematician named Benoit Mandelbrot made some interesting observations that suggest otherwise.

Distributions on a Logarithmic Scale

In this post, I want to explore the logarithmic analogues of the normal and Laplace distributions. We can define a log-normal probability distribution as a distribution whose logarithm is normally distributed. Likewise, a log-Laplace distribution is a distribution whose logarithm has a Laplace distribution. If we have a given probability density function, how can we determine its logarithmic equivalent?
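
The standard change-of-variables result answers that question in general terms (this is textbook material rather than a formula quoted from the post): if Y = e^X and X has density f_X, then for y > 0,

$$ f_Y(y) = \frac{1}{y} \, f_X(\ln y) $$

Substituting a normal or Laplace density for f_X gives the log-normal or log-Laplace density, respectively.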

Normal and Laplace Distributions

I’m interested in studying the Laplace distribution. I was once under the impression that price fluctuations in the financial markets were normally distributed. However, as I plan to show in a later post, stock prices seem to move up and down according to a Laplace distribution instead. Before analyzing any historical price data, I first want to lay some groundwork and compare the Laplace distribution to the normal distribution.

Weighted Linear Regression

When doing a regression analysis, you might want to weight some data points more heavily than others. For example, when fitting a model to historic stock price data, you might want to assign more weight to recently observed price values. In this post, I demonstrate how to estimate the coefficients of a linear model using weighted least squares regression. As with the previous post, I also show an alternative derivation using the maximum likelihood method.
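
As a minimal sketch of weighted least squares (not the post’s own code), the closed-form estimate scales each observation’s contribution to the normal equations by its weight.

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Solve for coefficients minimizing sum_i w_i * (y_i - X_i @ beta)^2
    using the weighted normal equations (X'WX) beta = X'Wy."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Fit a line to five points, weighting recent observations more heavily.
t = np.arange(5.0)
X = np.column_stack([np.ones_like(t), t])     # intercept and slope columns
y = np.array([1.0, 1.9, 3.2, 3.9, 5.1])
w = np.array([0.2, 0.4, 0.6, 0.8, 1.0])       # more weight on recent values
print(weighted_least_squares(X, y, w))        # [intercept, slope]
```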

Least Squares and Normal Distributions

The method of least squares estimates the coefficients of a model function by minimizing the sum of the squared errors between the model and the observed values. In this post, I show the derivation of the parameter estimates for a linear model. In addition, I show that the maximum likelihood estimation is the same as the least squares estimation when we assume the errors are normally distributed.
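
For reference, the least squares estimate for a linear model y = Xβ + ε has the familiar closed form below; this is the standard result rather than the post’s exact derivation:

$$ \hat{\beta} = \arg\min_{\beta} \, \lVert y - X\beta \rVert^{2} = (X^{\mathsf{T}} X)^{-1} X^{\mathsf{T}} y $$

If the errors are assumed to be independent and normally distributed, the log-likelihood is, up to additive constants, proportional to the negative sum of squared errors, so maximizing the likelihood minimizes the same quantity.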

Least Squares Moving Averages

Moving averages are often overlaid on stock price charts to give a smooth representation of choppy price movements. But a simple moving average can lag significantly in a trending market. In this post, I explore the use of least squares regression methods to generate more accurate moving averages.
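
One common way to build such a moving average, sketched here under the assumption that the post’s version works along similar lines, is to fit a line by least squares over each trailing window and take the fitted value at the window’s endpoint.

```python
import numpy as np

def least_squares_moving_average(prices, window=10):
    """For each trailing window, fit a straight line by least squares and
    use the fitted value at the most recent point as the average."""
    prices = np.asarray(prices, dtype=float)
    t = np.arange(window)
    result = np.full(len(prices), np.nan)
    for i in range(window - 1, len(prices)):
        slope, intercept = np.polyfit(t, prices[i - window + 1 : i + 1], 1)
        result[i] = intercept + slope * t[-1]   # fitted value at the endpoint
    return result
```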

Linear and Log Scale Distributions

In my previous post titled Fixed Fractions and Fair Games, I explored the properties of two different betting strategies applied to a repeated coin toss game. The focus was on the expected value for each of the two betting strategies. In this post, I take a deeper look at the distribution of possible outcomes after a large number of plays.

Fixed Fractions and Fair Games

A gambler has a $100 bankroll. He’s feeling lucky and he wants to make some bets. But he only wants to play fair games where the expectation is breakeven for a large number of plays. If the gambler plays a fair game repeatedly using a constant bet amount, would it still be a fair game if he decides to bet a fixed fraction of his bankroll instead of betting a fixed constant amount?

How Much Is an Option Worth?

Consider an at-the-money call option with a strike price of $50. The underlying asset is currently trading at $50 per share. Assume it’s a European-style option. One trader wants to take the long side of the contract. Another trader wants to take the short side. How can they agree on a fair price?
