Jim Killingsworth

Distributions on a Logarithmic Scale

In this post, I want to explore the logarithmic analogues of the normal and Laplace distributions. We can define a log-normal probability distribution as a distribution whose logarithm is normally distributed. Likewise, a log-Laplace distribution is a distribution whose logarithm has a Laplace distribution. If we have a given probability density function, how can we determine its logarithmic equivalent?

Determining the Logarithmic Equivalent

Suppose we have a continuous random variable. We can define the cumulative distribution function of the random variable like this:

Figure 1

Let’s also assume we know the probability density function of the random variable. The density function is the derivative of the cumulative distribution function. We can define the cumulative distribution function based on the probability density function like this:

Figure 2

The probability of observing a realization of the random variable in a range between two points can be expressed like this:

Figure 3
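As a concrete sketch of this idea (my own example; the standard normal density used here isn't assumed anywhere above), the probability of landing between two points can be computed either from the cumulative distribution function or by numerically integrating the density:

```python
import math

def normal_pdf(x):
    """Standard normal probability density function."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def normal_cdf(x):
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def integrate(f, a, b, n=10_000):
    """Midpoint-rule numerical integration of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

a, b = -1.0, 1.0
p_from_cdf = normal_cdf(b) - normal_cdf(a)  # difference of the CDF at the endpoints
p_from_pdf = integrate(normal_pdf, a, b)    # integral of the density between the endpoints
print(p_from_cdf, p_from_pdf)               # both about 0.6827
```

Both routes give the same number, which is the relationship the equation above expresses.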

Now suppose we have two continuous random variables, where one is the logarithm of the other:

Figure 4

Our goal is to derive the density function of one based on the density function of the other. Let’s use the following notation:

Figure 5

With this notation, we can express the relationship between these two distributions using the following equation:

Figure 6

The substitution rule for integration can be used to evaluate this further. Let’s consider the following substitution:

Figure 7

Let’s also consider its derivative:

Figure 8

Plugging in the substitution, we can compute the probability of observing the random variable between two points on a logarithmic scale like this:

Figure 9

The substitution rule for definite integrals gives us the following identity:

Figure 10

With this, we can compute the same probability of observing the random variable between two points, but this time on a linear scale:

Figure 11

We can now state the following solution:

Figure 12

Using these steps, we can determine the logarithmic equivalent of any continuous distribution for which we know the formula for the probability density function.
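The result can be sanity-checked numerically. The sketch below (my own illustration, not part of the derivation above) applies the change of variables f_X(x) = f_Y(ln x) / x, which is what the solution works out to, taking a standard normal density on the log scale and confirming that the transformed density on the linear scale still integrates to one:

```python
import math

def f_log(y):
    """Density on the logarithmic scale: standard normal, chosen as an example."""
    return math.exp(-y * y / 2) / math.sqrt(2 * math.pi)

def f_linear(x):
    """Density on the linear scale via the change of variables:
    f_X(x) = f_Y(ln x) / x, for x > 0."""
    return f_log(math.log(x)) / x

def integrate(f, a, b, n=100_000):
    """Midpoint-rule numerical integration of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# A valid density must integrate to one; the tails beyond [1e-9, 100]
# carry negligible mass for this example.
total = integrate(f_linear, 1e-9, 100.0)
print(total)  # close to 1
```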

The Log-Normal Distribution

To give an example, we can use the probability density function for the normal distribution to determine the probability density for the log-normal distribution. Recall the density function for the normal distribution:

Figure 13

The logarithmic equivalent is:

Figure 14
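In code, this density is just the normal density evaluated at ln x, divided by x. The sketch below (my own; the function name and parameter values are illustrative) also checks a known property of the log-normal distribution, namely that its peak sits at exp(μ − σ²):

```python
import math

def lognormal_pdf(x, mu, sigma):
    """Log-normal density: the normal density evaluated at ln(x), divided by x."""
    z = (math.log(x) - mu) / sigma
    return math.exp(-z * z / 2) / (x * sigma * math.sqrt(2 * math.pi))

mu, sigma = 0.5, 0.75

# Locate the peak of the density with a simple grid search over (0, 10).
xs = [i / 10_000 for i in range(1, 100_000)]
peak = max(xs, key=lambda x: lognormal_pdf(x, mu, sigma))
print(peak, math.exp(mu - sigma * sigma))  # grid peak vs. exp(mu - sigma^2)
```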

If we have a set of samples of a random variable that we know to have a log-normal distribution, the parameters of the distribution can be estimated using the maximum likelihood method outlined in my previous post. I’ll skip the intermediate steps and jump straight to the results.

Here is the estimate for the mean:

Figure 15

Here is the estimate for the standard deviation:

Figure 16

Not surprisingly, the formulas to compute the parameter estimates for the log-normal distribution are nearly the same as those of the normal distribution. The only difference is that we take the logarithm of the observed data points.
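A minimal sketch of these estimates (the function name is mine; only the standard library is assumed). It draws samples from a known log-normal distribution and recovers the parameters by applying the usual normal-distribution formulas to the logs:

```python
import math
import random

def fit_lognormal(samples):
    """Maximum-likelihood estimates for a log-normal sample: the usual
    normal-distribution formulas applied to the log of each data point."""
    logs = [math.log(x) for x in samples]
    n = len(logs)
    mu_hat = sum(logs) / n
    sigma_hat = math.sqrt(sum((v - mu_hat) ** 2 for v in logs) / n)
    return mu_hat, sigma_hat

random.seed(42)
data = [random.lognormvariate(1.0, 0.5) for _ in range(100_000)]
mu_hat, sigma_hat = fit_lognormal(data)
print(mu_hat, sigma_hat)  # close to 1.0 and 0.5
```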

The Log-Laplace Distribution

The logarithmic equivalent of the Laplace distribution can be found in the same way as the logarithmic equivalent of the normal distribution. Consider the probability density function for the Laplace distribution:

Figure 17

The logarithmic equivalent is:

Figure 18
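As with the log-normal case, the log-Laplace density is the Laplace density evaluated at ln x, divided by x. The check below (my own sketch, using μ for location and b for scale in the usual Laplace parameterization) confirms the transformed density still integrates to one:

```python
import math

def loglaplace_pdf(x, mu, b):
    """Log-Laplace density: the Laplace density evaluated at ln(x), divided by x."""
    return math.exp(-abs(math.log(x) - mu) / b) / (2 * b * x)

def integrate(f, lo, hi, n=200_000):
    """Midpoint-rule numerical integration of f over [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

# The tails outside [1e-9, 200] carry negligible mass for these parameters.
total = integrate(lambda x: loglaplace_pdf(x, 0.0, 0.5), 1e-9, 200.0)
print(total)  # close to 1
```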

If we have a set of samples of a random variable that we know to have a log-Laplace distribution, the parameters can be estimated as before using the maximum likelihood method. You can see my previous post for full details. We first need to rank the samples in ascending order:

Figure 19

We also need to determine the middle value:

Figure 20

Here is the estimate for the location parameter:

Figure 21

Here is the estimate for the scale parameter:

Figure 22

Once again, the formulas to compute the parameter estimates for the log-Laplace distribution are nearly the same as those of the regular Laplace distribution. For the logarithmic equivalent, we simply take the logarithm of the observed data points.
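A sketch of the estimation procedure (function names are mine; for an even sample count I average the two middle values, which is one common convention for the middle value). The sampler draws Laplace variates by inverse-CDF sampling and exponentiates them to get log-Laplace data:

```python
import math
import random

def fit_loglaplace(samples):
    """Maximum-likelihood estimates for a log-Laplace sample: the usual
    Laplace-distribution formulas applied to the log of each data point."""
    logs = sorted(math.log(x) for x in samples)
    n = len(logs)
    mid = n // 2
    # Location: the median (middle value) of the log-transformed samples.
    mu_hat = logs[mid] if n % 2 == 1 else (logs[mid - 1] + logs[mid]) / 2
    # Scale: the mean absolute deviation from the location estimate.
    b_hat = sum(abs(v - mu_hat) for v in logs) / n
    return mu_hat, b_hat

def sample_loglaplace(mu, b):
    """One log-Laplace draw: inverse-CDF sample of a Laplace variate, exponentiated."""
    u = random.random() - 0.5
    y = mu - b * math.copysign(math.log(1 - 2 * abs(u)), u)
    return math.exp(y)

random.seed(7)
data = [sample_loglaplace(0.25, 1.5) for _ in range(100_000)]
mu_hat, b_hat = fit_loglaplace(data)
print(mu_hat, b_hat)  # close to 0.25 and 1.5
```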

Comparison

When plotted on a graph, the probability density function for the log-normal distribution looks like a distorted version of the normal distribution’s density function. This isn’t too surprising. What’s interesting to me, however, is the shape of the log-Laplace density function. It looks like a skateboard ramp:

Figure 23

The flat top of the log-Laplace density function looks peculiar. I wasn’t expecting it, and I initially thought I had made a mistake when generating the chart. However, the flat part only exists when the scale parameter is set to the standard value of one. The shape of the graph changes as the scale parameter is adjusted up or down. Take a look at the same chart when the horizontal axis has a logarithmic scale:
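The flat top can be verified directly. With location zero and scale one, the density e^(−|ln x|) / (2x) reduces to x / (2x) = 1/2 for x between zero and one, and to 1 / (2x²) beyond that, which is exactly the skateboard ramp shape (a quick check of my own, not from the original charts):

```python
import math

def loglaplace_pdf(x, mu=0.0, b=1.0):
    """Log-Laplace density: the Laplace density evaluated at ln(x), divided by x."""
    return math.exp(-abs(math.log(x) - mu) / b) / (2 * b * x)

# With the standard scale b = 1, the density is flat at 1/2 over (0, 1)...
print([round(loglaplace_pdf(x), 6) for x in (0.1, 0.3, 0.5, 0.9)])
# ...and decays like 1 / (2 x^2) to the right of x = 1.
print(loglaplace_pdf(2.0), 1 / (2 * 2.0 ** 2))
```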

Figure 24

Notice how the shape of the log-normal density function looks very much like the symmetrical shape of the regular normal distribution. This trait does not exist for the log-Laplace distribution, however. When plugging in smaller values for the scale parameter, the shape of the log-Laplace density function tends to have a closer resemblance to that of the regular Laplace distribution, but it doesn’t exhibit the same symmetry.

Accompanying source code is available on GitHub.
