Standard Deviation and Ultimate Return

In order to address this issue, let me first give you some basics on standard deviation (also called volatility). Hopefully you are all familiar with the standard bell-shaped curve. (They don't all look so neat- they can be skewed left or right- but this is fine for the example.) Most of what happens falls into the middle of such a curve, with lesser amounts at the edges. Let's take golf for example. Almost all of us have played it at one point in our lives, and most people have simply average ability that is best represented by, say, the three bars in the middle (see below). We'll say those represent 68% of all people that play- and that number also represents one standard deviation. The average score might be 100 and the two side bars represent plus or minus 15 strokes. That means that 68% of the time, people score 100 on average, with scores ranging from 85 to 115. If you include the next two bars on each side, 95% of all people are covered by the graph. That is the definition of two standard deviations. It would still be an average score of 100, but plus or minus 30- scores as low as 70 or as high as 130- for 95% of players. You can keep doing this by adding another two bars to cover 99% of the scores/population. And then finally you get to the extreme left with the likes of Tiger Woods, or the extreme right with scores of, say, 150 or more.

Put into terms for securities, the average return might be 10% with a standard deviation (68% of the time) of 21%. That means the return could be as low as -11% (10% minus 21%) or as high as 31% (10% plus 21%). It also (generally) references statistics over a period of one year. A graph might look like the one below, with the shaded area representing 68% of all occurrences. The middle point reflects the 10% "average" return and the plus and minus are represented by the extreme ends of the shaded area.

[Figure: normal distribution of annual returns, with the shaded area covering roughly 68% of occurrences]

So, believe it or not, you now know what standard deviation means. It is the return generated with the pluses and minuses you might expect 68% of the time over a period of one year.
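A minimal sketch of that arithmetic in Python, using the 10% average and 21% standard deviation from the example above (the numbers are illustrative, not a forecast):

    # One-standard-deviation range around an assumed average annual return.
    average_return = 0.10   # 10% "average" return (illustrative)
    std_dev = 0.21          # 21% standard deviation (volatility)

    low = average_return - std_dev    # -11%
    high = average_return + std_dev   # +31%

    print(f"Roughly 68% of one-year outcomes fall between {low:.0%} and {high:.0%}")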

The basics of standard deviation go one step further- though, as the article below makes clear, the way it is used reflects a distortion of risk by many planners. Anyway, standard deviation is reduced the longer you hold a security. Let's say there was a standard deviation of 30%. Pretty high. But if you potentially held the security for five years, the standard deviation is reduced to "only" 13.42% because you simply divide the annual deviation by the square root of the years held- in this case 5 years. The square root of 5 is 2.236 (just another reason why you or your adviser needs direct competency with a financial calculator). And if you held it for 10 years, you divide by 3.16 for a volatility of just 9.49%. If you had started with just 20% volatility, then a 5 year holding period would result in an 8.94% volatility and a 10 year holding period in only 6.32%.
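The same division in a short Python sketch (the 30% and 20% figures are the ones used above; whether this scaling is a fair picture of risk is exactly what the rest of this page questions):

    import math

    def scaled_deviation(annual_std_dev, years):
        """Annual standard deviation divided by the square root of the holding period."""
        return annual_std_dev / math.sqrt(years)

    for sd in (0.30, 0.20):
        for years in (1, 5, 10):
            print(f"annual SD {sd:.0%}, {years:>2} years -> {scaled_deviation(sd, years):.2%}")
    # 30% becomes 13.42% over 5 years and 9.49% over 10 years;
    # 20% becomes  8.94% over 5 years and 6.32% over 10 years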

How do you use standard deviation with various investments? Well, for any given level of return- say 10%- you want the thinnest graph (representing the least amount of deviation from your anticipated return). In other words, you would prefer a 10% return with a standard deviation of 12% rather than a 10% return with a deviation of 20%. Are there "investments" with NO deviation? Taking a little latitude, consider a 7.50% FDIC-insured CD for one year. Guaranteed return with no change in ultimate return. That's unusual- but it does exist right now. Over time, however, fixed rates of return do terribly- particularly after accounting for inflation and taxes.

Many brochures talk about standard deviation but give the reader no real-life example of what in the world they are talking about- probably because they don't know or can't explain it. And, remember, standard deviation is NOT taught or tested for securities representatives. It may be taught to some extent for planners. But having discussed this with many- and certainly with brokers and insurance agents- they still don't have a clue about the real world. That's why you have to read further.

Monte Carlo, Shomonte Carlo: The whole planning industry- certainly that element concerned with asset allocation- is all agog over the supposedly new element of Monte Carlo simulations- repeated random number generation. In a nutshell, it provides all types of returns based on statistics, INCLUDING the recognition of risk when something can really go wrong (though the same element exists when there is excess on the positive side as well as identifying all types of returns). A Financial Planning Interactive poll noted, "The majority of financial planners prefer standard asset allocation over Monte Carlo simulation when structuring a client's portfolio. In short, 60% of respondents said they use standard asset allocation and 13% said they prefer Monte Carlo. Thirty-one respondents, or 26%, said they use both methods." The issue is primarily to show that ultimate returns may be, among other things, far less than projected.

The problem is that such "simulations" addressing the downside potential were always required, but the issue was universally unknown or ignored by all planners until recently. The point? The book on Investments by Bodie, Kane and Marcus, 1989, page 222, Appendix C, called "The Fallacy of Time Diversification". The comments addressed the 30% deviation shown above and noted that an investor would be "emotionally relieved" that over a 5 year period the volatility would be reduced to an "acceptable" 13.42%. But it notes that the impact of a one-time standard deviation over the entire portfolio could reduce the amount anticipated by almost 50%. (Probably earlier Investment books also identify the statistical fallacy, but that is the book I have used.)

"A standard deviation in the average return over the five year period will affect final wealth by a factor of (1 - .1342) to the fifth power = .487. (That's the formula: one minus the standard deviation for the time period selected, with the result raised to the power X, where X represents the number of years in question- see the short sketch after these excerpts.) That means that final wealth will be less than one half its expected value."

"Time diversification does NOT reduce risk. It is true that the per year average rate of return has a smaller standard deviation for a longer time horizon. It is also true that the uncertainty compounds over a greater number of years. Unfortunately, this latter effect dominates in the sense that the total return becomes more uncertain the longer the investment horizon."

"The lesson is that one should NOT use the rate of return analysis to compare portfolios of different size. Investing for more than one holding period  means that the amount at risk is GROWING. This is analogous to an insurer taking on more insurance policies. The fact that these policies are independent of each other does not offset the effect of placing more funds at risk. Focus on the standard deviation of the rate of return should NEVER obscure the more proper emphasis on the possible dollar values of a portfolio strategy". (Such risk  for insurers actually DID occur. Remember the monstrous hurricane that swept across mid Florida years ago? Certainly the insurance companies were well "diversified" since they had thousands of different homes covered. But the loss was so substantial that many insurers opted to leave the Florida market altogether.)

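The (1 - .1342) to the fifth power factor quoted above is easy to verify; a minimal Python sketch:

    # Bodie/Kane/Marcus illustration: one standard deviation applied to the
    # average return over the whole five-year period scales final wealth by
    # (1 - 0.1342) ** 5.
    period_std_dev = 0.1342   # 30% annual deviation divided by the square root of 5
    years = 5

    wealth_factor = (1 - period_std_dev) ** years
    print(f"Final wealth factor: {wealth_factor:.3f}")   # about 0.487, i.e. less than half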
But here we go again. If 60% of planners are simply using the standard asset allocation packages without addressing the real issue of randomness and, more significantly, of a potential negative return, they are ignoring a significant caveat to long term investing- or, more likely in my mind, are either oblivious to the seriousness of the problem or incompetent to address it fully. But so what? Regardless of the fault, the consumer/retiree is left in a disastrous situation should the problem of poor returns actually occur.

What about the Monte Carlo simulations and the recognition of this risk? Unquestionably a viable issue, and retirees should know/recognize that the projections of basic software allocations (almost all calculators on the web) may not even come close to providing needed projections because they omit what can go wrong and do not advise what to do. Without the capability to address what is happening- or might happen to the market- based on economics, it is (perhaps) nothing more than a treatise on random numbers with no real life application. Indicating that something can happen provides nil reassurance that the adviser will do anything about it until well after the fact. That's not going to do it. Nor is telling clients to simply "stay the course" since the market has "always" come back. True overall- but could you wait 12+ years?? Read on.

Is there a better way of doing the same thing and making it real life? Well, I have been advising both clients and students for years of the actual problems of 1973 and 1974. That represents your Monte Carlo problem in real life, since investors lost over 45% of their monies in less than 2 years. And it took about 12+ years to break even with what they could have earned, on a present value basis, had they just invested in Treasuries. But that reference, in itself, is almost worthless unless the adviser is astute enough (meaning intensive reading) to view the economics to determine if the market is or will be entering a bad situation and to make proper adjustments prior to or in the beginning stages of such a bear market. And there is no question that such a bear market will happen again. When? No idea. Maybe it's when Greenspan dies. Or AIDS completely decimates Africa. Or ???? But you need to recognize that no software- including Financial Engines- with Monte Carlo tells you when to make the adjustment necessary to alleviate the losses that no reasonable person could sustain- certainly not a retiree. (The above comments were written before 2000. I did adjust funds according to my perception of the risk in 2000 and subsequently changed all equity positions in 2001. Was I perfect in my 'timing'? No. Did I respond to changing economics? Yes. Did clients miss all the mess in late 2001 and all of 2002? Yes.)

It is my contention that the simple reference to the issue via software does not relieve the adviser (or investor) of the duty to make a change. But many will not, since their ability is almost solely defined by a computer program that can only provide direction AFTER the fact. Further, many investors will not sell because they are loath to eliminate a losing position, no matter what they made previously and no matter how clear the economics are.

Standard Deviation (2002) - The range of standard deviations for ultra-short term bond funds is a mere 0.14 to 1.32, with an average 0.67. The standard deviations for precious metals funds range from 17.58 to 37.23, with an average of 25.74.

Monte Carlo: (NY Times 2002) Monte Carlo simulations, which use random numbers to imitate behavior, run investments through thousands of situations to assess the probability of reaching a financial goal. Calculations assume variations in inflation, interest rates and market returns based in part on history's wide range of returns, going back to the 1920's.

William F. Sharpe, the chairman of Financial Engines and a Nobel winner in economics in 1990, began applying Monte Carlo simulations to individual investors' portfolios later in the decade. With Mr. Jones, he created a software recipe that mixes thousands of pieces of data by tracking 15 types of investments, or asset classes, like long-term government bonds or large-cap value stocks, along with financial and economic factors like interest rate shifts and their effects on capital markets. Information about 20,000 stocks and mutual funds contributes to forecasting performance.

Because Monte Carlo simulation takes into account the inherent uncertainty of financial markets, Dr. Sharpe regards it as superior to other ways of gauging future performance — like calculating a linear, or fixed rate of return. One such linear calculator, found on Quicken.com, tallies investment performance by assuming an annual average rate of return, like 8 percent, and projecting it forward. Monte Carlo simulations use more varied assumptions.

In one example, a 48-year-old investor with a $300,000 portfolio of stock and bond mutual funds who saves an additional $6,500 annually has an 83 percent likelihood of retiring with at least a $50,000 income at age 65 (including about $20,000 in Social Security income). The result of the calculation is a range of outcomes, the most likely being $64,700. The calculation also indicates a 5 percent likelihood of hitting $103,000 a year, and an equal probability of receiving only $42,100 a year.

With the Quicken calculator, the same investor would get one outcome. Using fixed projections, the investor would retire at 65 with an average income of $71,240, based on an 8 percent expected return and a 3 percent inflation rate.
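Neither the Times nor Financial Engines publishes the actual recipe, but the contrast between the two approaches can be sketched in Python with simplified assumptions. The 8 percent return and the 48-to-65 horizon come from the example above; the 12% volatility is my own illustrative assumption, and the sketch stops at the ending portfolio value rather than converting it to retirement income:

    import random
    import statistics

    # Illustrative assumptions only - not Financial Engines' actual model.
    START = 300_000        # current portfolio
    SAVINGS = 6_500        # added each year
    YEARS = 17             # roughly age 48 to 65
    MEAN_RETURN = 0.08     # assumed average annual return
    STD_DEV = 0.12         # assumed volatility (my assumption)
    TRIALS = 5_000

    def fixed_projection():
        """Linear calculator: the same 8% every single year."""
        balance = START
        for _ in range(YEARS):
            balance = balance * (1 + MEAN_RETURN) + SAVINGS
        return balance

    def monte_carlo_trial():
        """One randomly generated path of annual returns."""
        balance = START
        for _ in range(YEARS):
            balance = balance * (1 + random.gauss(MEAN_RETURN, STD_DEV)) + SAVINGS
        return balance

    outcomes = sorted(monte_carlo_trial() for _ in range(TRIALS))
    print(f"Fixed 8% projection:        ${fixed_projection():,.0f}")
    print(f"Monte Carlo median outcome: ${statistics.median(outcomes):,.0f}")
    print(f"5th percentile (bad luck):  ${outcomes[int(0.05 * TRIALS)]:,.0f}")
    print(f"95th percentile:            ${outcomes[int(0.95 * TRIALS)]:,.0f}")

The single fixed-rate number hides how wide the range of ending values really is- which is the whole point of running the simulation.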

This is a comment I made to a reader regarding Monte Carlo: "The Times article is good but still misses the point of why anyone would want to be in equities as they take a beating. The theory is valid but it still misses real life application."

The fallacy of Monte Carlo Analysis: (Still River software 2004) Monte Carlo and similar models are being used to answer the wrong question, and so the results they produce are of little or no use.

The essence of the Monte Carlo approach is to create a large number – usually a few thousand – of randomly generated scenarios, compute the financial characteristics of each scenario, and then compile the results to determine the overall likelihood of a satisfactory outcome. This is a useful exercise, and helpful to a point, but in the context of financial planning for retirees, it is off the mark. There are four main reasons for this:

1. The success rates that result are not what they purport to be.

2. The results are not meaningful to most people.

3. The method fails to address the questions that retirees are actually asking.

4. Conducting a Monte Carlo analysis is impractical in key situations.

Monte Carlo models are being used to predict the likelihood of retirees not outliving their assets. If a computer model could indeed tell us that, the results would be of interest. But that is not what the computer models actually reveal. What they show is the likelihood of the model coming out OK.

•Most risk factors are ignored. The models claim to be “stochastic” (i.e., using variable assumptions) rather than “deterministic” (i.e., using fixed assumptions), but in reality they are only semi-stochastic. They randomize one or two or a few factors in the analysis, but they make fixed assumptions about a wide array of other factors. In real life, nothing is pre-determined.

•They ignore discontinuous and unpredictable risks, such as the possibility that Social Security benefits will change or be eliminated, the possibility that new financial instruments will be invented, the possibility that there will be a radical change in mortality, or that something completely unexpected will occur. Twenty years beforehand, no one could have predicted the Great Depression, World War II, or 9/11. Yet these events had widespread and long-lasting implications. The models assume that nothing unprecedented occurs. Yet the one thing we know from history is that something unprecedented is bound to occur sooner or later.

•Analysis of financial risk is based on data gathered mainly over the past 75 years, as if this somehow defines the outer limits of future possibilities.

The fact is that 75 years is not nearly enough of a statistical base to determine the probabilities associated with future financial performance. Yes, it's the best we have, so we have to use it, but let's not pretend that it actually enables us to calculate probabilities for the next 30 years or so. It could be that the 21st century will more closely resemble the 14th than the 20th, and if it does, what use will the current models have been?

Similar logic applies to all other elements of the analysis, whether deterministic or stochastic. We simply don’t have sufficient basis for determining the range and likelihood of future events.

The upshot of these problems is that Monte Carlo models can project, say, a 90% chance of success within the framework of the model. But what is the probability that the model itself corresponds with reality? Probably about zero. This does not mean that the model is worthless, of course. Models do bear some resemblance to reality. The real problem is that we don't know how much.

So if an actual model predicts a 90% success rate, a hypothetically perfect model might instead predict a 95% rate, or an 85% rate, or a 65% rate. Unfortunately, we don’t have any way to measure the divergence between the actual model and reality.

Only in the ivory tower world of retirement income model building do people actually try to come up with plans designed to work for the rest of their lives. Real people are usually more realistic. They want a plan that is geared to work well for them under normal circumstances, and that will also make adequate provision for the contingencies that are of the highest concern to them. And they want to know what steps they need to take now to get such a plan going, understanding that adjustments are inevitable along the way."

Up until the last couple of sentences, the article was right on. But to suggest that an adviser- or a piece of software- will provide the intuitive understanding of a change in economics and what to do when the market adjusts is ludicrous. Consumers will NEVER read material from the FED, never mind understand it. As such, they are bound to be like the bulk of investors in 2000-2002 who held onto portfolios that made no valid sense. Many with 401(k) plans kept putting monies into equities as they dropped further. Why? Because they trusted their company, their broker, their planner- all, almost exclusively, lacking in the fundamentals of investing and clueless about economic issues. That will not change in my lifetime (short as it now may be).

Risk Analysis in Investment Appraisal (Savvakis C. Savvides 2004) The basics on which Monte Carlo is derived.

This paper was prepared for the purpose of presenting the methodology and uses of the Monte Carlo simulation technique as applied in the evaluation of investment projects to analyse and assess risk. The first part of the paper highlights the importance of risk analysis in investment appraisal. The second part presents the various stages in the application of the risk analysis process. The third part examines the interpretation of the results generated by a risk analysis application including investment decision criteria and various measures of risk based on the expected value concept. The final part draws some conclusions regarding the usefulness and limitations of risk analysis in investment appraisal.

Standard deviation


Fat tails: The empirical distributions of many economic and financial time series exhibit fat tails. This has been well documented in the literature. The presence of fat tails warrants the use of probability distributions that can accommodate the likelihood of large positive or negative shocks impacting the economy. Gaussian distributions do not admit this possibility. However, these distributions are well understood and analytically tractable.

Normal Gaussian distribution

STANDARD DEVIATION AND VARIANCE LINK:  From a statistical viewpoint

On the Risk of Stocks in the Long Run (Zvi Bodie 2006) A familiar proposition is that investing in common stocks is less risky the longer an investor plans to hold them. Robert C. Merton and Paul A. Samuelson have written numerous articles over the years showing the fallacy in such statements. If this proposition were true, then the cost of insuring against earning less than the risk-free rate of interest should decline as the investment horizon lengthens. This paper shows that the opposite is true, even if stock returns are mean reverting in the long run. The case for young people investing more heavily than older people in stocks cannot, therefore, rest solely on the long-run properties of stock returns. For guarantors of money-fixed annuities, the proposition that stocks in their portfolios are a better hedge the longer the maturity of their obligations is unambiguously wrong.

Asset allocation for individuals should be viewed in the broader context of deciding on an allocation of total wealth between risk-free and risky assets.

Time and Money (Norstad 2007) In the random walk model of the S&P 500 stock market index, the probability that a stock investment will earn less than a bank account earning 6% interest is 42% after 1 year. After 40 years this probability decreases to only 10%. Doesn't this prove that risk decreases with time?

The problem with this argument is that it treats all shortfalls equally. A loss of $1000 is treated the same as a loss of $1! This is clearly not fair. For example, if I invest $5000, a loss of $1000, while less likely, is certainly a more devastating loss to me than is a loss of $1, and it should be weighted more heavily in the argument. Similarly, the argument treats all gains equally, which is not fair by the same reasoning.

John Norstad has provided one of the best arguments for the fallacy of time diversification. I approach the problem slightly differently since I actively manage clients' monies- but the essence is the same. Time does not reduce the risk of losing money. It INCREASES the risk. That is why literally all financial and investment plans nationally are incompetent at best and fraudulent at worst. But if one is acting as a fiduciary, the point is the same- the consumer has been provided information that is detrimental to their financial future.

Just plain wrong. (2007) Read this article. This professional article treats risk as synonymous with standard deviation. It's wrong. Risk has many definitions, but it is not standard deviation, ipso facto. Every time you read anything with this type of commentary, just recognize you are reading something that is erroneous.

Volatility is Not Risk. (2007)

A shareholder from Los Angeles referred to the fact that many people talk about “sigmas” (the standard deviations of price changes) and equate volatility with risk. He asked why a rational person would substitute the opinions of the public (as reflected in volatility caused by mass decisions) for one’s own measurement of the inherent risk of a company.

Buffett: The measurement of volatility: it’s nice, it’s mathematical, and wrong. Volatility is not risk. Those who have written about risk don’t know how to measure risk. Past volatility does not measure risk. When farm prices crashed, [farm price] volatility went up, but a farm priced at $600 per acre that was formerly $2,000 per acre isn’t riskier because it’s more volatile. [Measures like] beta let people who teach finance use the math they’ve learned. That’s nonsense. Risk comes from not knowing what you’re doing. Dexter Shoes was a terrible mistake—I was wrong about the business, but not because shoe prices were volatile. If you understand the business you own, you’re not taking risk. Volatility is useful for people who want a career in teaching. I cannot recall a case where we lost a lot of money due to volatility. The whole concept of volatility as a measure of risk has developed in my lifetime and isn’t any use to us.

Munger: Finance taught in business schools is about 50% twaddle. We early recognized that very smart people do very dumb things. We wanted to figure out when and why…and who, so we could avoid them.

Comment: This was one of the best questions asked, and Buffett and Munger were characteristically straightforward in their answer. If volatility is risk, then an investment that does nothing but shoot sharply upward (that's volatility, too) is risky. Similarly, suppose that an average worker regularly saves a modest amount from each paycheck and invests in T-Bills for retirement. It's unlikely that this worker will amass sufficient purchasing power to retire comfortably, but because T-Bills aren't volatile should we say that this investment approach is low risk? Investment managers may be quick with their opinions, but at least you can usually see their investment track records before you judge their insightfulness. Academics are free to spout nonsense, and there is usually nothing to alert the public that they may not know what they're talking about. As the questioner implied, it's a mistake to let Mr. Market (or Professor Beta) decide what's risky and what isn't.

Theoretical determination of standard deviation (2007) - Standard deviation scales (increases) in proportion to the square root of time. Therefore, if the daily standard deviation is 1.1%, and if there are 250 trading days in a year, the annualized standard deviation is the daily standard deviation of 1.1% multiplied by the square root of 250 (1.1% x 15.8 ≈ 17.4%). Knowing this, we can "annualize" the interval standard deviations for the S&P 500 by multiplying by the square root of the number of intervals in a year.
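The same scaling in a short Python sketch (250 trading days assumed, as above):

    import math

    TRADING_DAYS = 250
    daily_std_dev = 0.011                      # a 1.1% daily standard deviation

    annualized = daily_std_dev * math.sqrt(TRADING_DAYS)
    print(f"Annualized volatility: {annualized:.1%}")   # about 17.4%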

Another theoretical property of volatility may or may not surprise you: it erodes returns. This is due to the key assumption of the random walk idea: that returns are expressed in percentages. Imagine you start with $100 and then gain 10% to get $110. Then you lose 10%, which nets you $99 ($110 x 90% = $99). Then you gain 10% again, to net $108.90 ($99 x 110% = $108.9). Finally, you lose 10% to net $98.01. It may be counter-intuitive, but your principal is slowly eroding even though your average gain is 0%!

If, for example, you expect an average annual gain of 10% per year (i.e. arithmetic average), it turns out that your long-run expected gain is something less than 10% per year. In fact, it will be reduced by about half the variance (where variance is the standard deviation squared). In the pure hypothetical below, we start with $100 and imagine five years of volatility (+15%, 0%, +20%, -5%, +20%) to end with about $157.

The average annual return over the five years was 10% ((15% + 0% + 20% - 5% + 20%) ÷ 5 = 10%), but the compound annual growth rate (CAGR, or geometric return) is a more accurate measure of the realized gain, and it was only 9.49%. Volatility eroded the result, and the difference is about half the variance of 1.1%. These results aren't from a historical example, but in terms of expectations, given a standard deviation of σ (variance is the square of the standard deviation, σ²) and an expected average gain of μ, the expected annualized return is approximately μ - (σ² ÷ 2).
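A minimal Python sketch of that example, using the same five hypothetical yearly returns:

    import statistics

    returns = [0.15, 0.00, 0.20, -0.05, 0.20]   # the five hypothetical years above

    balance = 100.0
    for r in returns:
        balance *= 1 + r                          # ends at about $157.32

    arithmetic_mean = statistics.mean(returns)               # 10.0%
    cagr = (balance / 100.0) ** (1 / len(returns)) - 1       # about 9.49%
    variance = statistics.pvariance(returns)                 # about 0.011

    print(f"Ending value:        ${balance:.2f}")
    print(f"Arithmetic average:  {arithmetic_mean:.2%}")
    print(f"Geometric (CAGR):    {cagr:.2%}")
    print(f"mu - variance/2:     {arithmetic_mean - variance / 2:.2%}")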

Are Returns Well-Behaved?

The theoretical framework is no doubt elegant, but it depends on well-behaved returns- namely, a normal distribution and a random walk (i.e. independence from one period to the next). How does this compare to reality? We collected daily returns over the last 10 years for the S&P 500 and Nasdaq (about 2,500 daily observations).

As you may expect, the volatility of the Nasdaq (annualized standard deviation of 28.8%) is greater than the volatility of the S&P 500 (annualized standard deviation of 18.1%). We can observe two differences between the normal distribution and actual returns. First, the actual returns have taller peaks- meaning a greater preponderance of returns near the average. Second, actual returns have fatter tails. (Our findings align somewhat with more extensive academic studies, which also tend to find 'tall peaks' and 'fat tails'; the technical term for this is kurtosis.) Let's say we consider minus three standard deviations (a daily loss of about -3.4%) to be a big loss. The normal curve predicts such a loss would occur about three times in 10 years, but the S&P 500 actually experienced it 14 times!
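To see where "about three times in 10 years" comes from, the expected count under a normal curve can be worked out with the standard library (the 14 actual occurrences are an empirical count, not something the formula produces):

    import math

    def normal_cdf(z):
        """Cumulative probability of the standard normal distribution."""
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    trading_days = 2_500                       # roughly 10 years of daily observations
    prob_big_loss = normal_cdf(-3)             # about 0.00135
    expected = trading_days * prob_big_loss

    print(f"Normal curve expects about {expected:.1f} three-sigma down days in 10 years")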

These are distributions of separate interval returns, but what does theory say about returns over time? As a test, let's take a look at the actual daily distributions of the S&P 500 above. In this case, the average annual return (over the last 10 years) was about 10.6% and, as discussed, the annualized volatility was 18.1%. Here we perform a hypothetical trial by starting with $100 and holding it over 10 years, but we expose the investment each year to a random outcome that averaged 10.6% with a standard deviation of 18.1%. This trial was done 500 times, making it a so-called Monte Carlo simulation. The final price outcomes of 500 trials are shown below:
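A rough reconstruction of that trial in Python (the random draws will differ from the article's, so the exact counts of extreme outcomes will not match):

    import random

    TRIALS = 500
    YEARS = 10
    MEAN_RETURN = 0.106      # 10.6% average annual return
    STD_DEV = 0.181          # 18.1% annualized volatility

    def run_trial():
        """Grow $100 for 10 years with a random, normally distributed return each year."""
        balance = 100.0
        for _ in range(YEARS):
            balance *= 1 + random.gauss(MEAN_RETURN, STD_DEV)
        return balance

    outcomes = [run_trial() for _ in range(TRIALS)]
    print(f"Average ending value: ${sum(outcomes) / TRIALS:,.2f}")
    print(f"Outcomes above $700:  {sum(1 for x in outcomes if x > 700)}")
    print(f"Outcomes below $50:   {sum(1 for x in outcomes if x < 50)}")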

A normal distribution is shown as a backdrop solely to highlight the very non-normal price outcomes. Technically, the final price outcomes are lognormal (meaning that if the x-axis were converted to the natural log of x, the distribution would look more normal). The point is that several price outcomes are way over to the right: out of 500 trials, six outcomes produced a $700 end-of-period result! These precious few outcomes managed to earn over 20% on average, each year, over 10 years. On the left-hand side, because a declining balance reduces the cumulative effects of percentage losses, we only got a handful of final outcomes that were less than $50. To summarize a difficult idea, we can say that interval returns - expressed in percentage terms - are normally distributed, but final price outcomes are log-normally distributed.

Finally, another finding of our trials is consistent with the 'erosion effects' of volatility: if your investment earned exactly the average each year, you would hold about $273 at the end (10.6% compounded over 10 years). But in this experiment, our overall expected gain was closer to $250. In other words, the average (arithmetic) annual gain was 10.6%, but the cumulative (geometric) gain was less.

It is critical to keep in mind that our simulation assumes a random walk: it assumes that returns from one period to the next are totally independent. We have not proven that by any means, and it is not a trivial assumption. If you believe returns follow trends, you are technically saying they show positive serial correlation. If you think they 'revert to the mean', then technically you are saying they show negative serial correlation. Neither stance is consistent with independence.

Conclusion

Volatility is annualized standard deviation of returns. In the traditional theoretical framework, it not only measures risk, but affects the expectation of long-term (multi-period) returns. As such, it asks us to accept the dubious assumptions that interval returns are normally distributed and independent. If these assumptions are true, high volatility is a double-edged sword: it erodes your expected long-term return (it reduces the arithmetic average to the geometric average), but it also provides you with more chances to make a few big gains.

Risk over time (Morningstar) Assume a risk premium (geometric) of 1% (e.g., above 20-year TIPS)
Assume a standard deviation of annual returns of 18% (=long term S&P volatility)
Convert the geometric risk premium to an arithmetic average risk premium: 1% + (18%² ÷ 2) = 1% + 1.62% = 2.62%.

Using Excel's NORMSDIST function, find % of results where average portfolio growth will be less than sticking with safe assets. We'll call these "lean" results.

NORMSDIST(-(arithmetic average risk premium) / (annual standard deviation / SQRT(n years)))

For n years, I get the following % of "lean" results:

n years......% lean results
1...............44.21%
5...............37.24%
10.............32.27%
20.............25.75%
30.............21.27%
40.............17.86%
50.............15.17%

To keep one's risk (measured as a probability of having "lean" years) constant as one's expected portfolio duration drops from 40 years to 10 years, one would need to reduce the standard deviation of their portfolio by (1 - 17.86%/32.27%) = 45%. If that person started with a 60% AA between risky and risk-free assets with 40 years to go, they could maintain the same risk by decreasing their AA in risky assets to 33% by the time they had 10 years to go.
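The table above can be reproduced outside of Excel as well; a short Python sketch using the same assumptions (1% geometric premium, 18% volatility):

    import math

    def normal_cdf(z):
        """Equivalent of Excel's NORMSDIST."""
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    geometric_premium = 0.01
    std_dev = 0.18
    arithmetic_premium = geometric_premium + std_dev ** 2 / 2   # 2.62%

    for n in (1, 5, 10, 20, 30, 40, 50):
        lean = normal_cdf(-arithmetic_premium / (std_dev / math.sqrt(n)))
        print(f"{n:>2} years: {lean:.2%} lean results")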

 Monte Carlo- (2008) Analytical models are the most elegant of the pricing methodologies. In this approach, we begin with a set of assumptions about how the relevant variables behave. We then translate these assumptions into mathematical equations. We then use the mathematical equations to derive a formal relationship between the input variables and the output variable (in this case the output variable is the fair option premium). The analytical model is the end result of the derivation process and takes the form of a “formula” or “equation” that ties the inputs to the output. This formula is often referred to as the “solution.” The beauty of analytical models is that they allow us to quickly produce a precise valuation. But there are problems associated with analytical models as well. First, the models are quite difficult to derive without an advanced knowledge of stochastic calculus. Second, in some cases it is not possible to derive analytical solutions. Third, analytical models once derived are completely inflexible. That is, they can only be applied in situations where the exact set of assumptions used to build the model hold.