# Value at risk: a quantitative study on the Nordic stock exchange

Abstract

The role of risk management has gained momentum in recent years, most notably after the recent financial crisis. This thesis uses a quantitative approach to evaluate the theory of value at risk, which is considered a benchmark for measuring financial risk. The thesis makes use of both parametric and non-parametric approaches to evaluate the effectiveness of VAR as a standard tool for measuring the risk of a stock portfolio.


This study applies the normal distribution, the Student's t-distribution, historical simulation and the exponentially weighted moving average, at the 95% and 99% confidence levels, to the stock returns of Sony Ericsson, the three-month Swedish Treasury bill and Nordea. The evaluation of the VAR models is based on the Kupiec test. From a general perspective, the results of the study indicate that VAR as a risk measure has some imprecision in its estimates. However, this imprecision is not the same for all the approaches. The results indicate that models which assume normality of the return distribution perform worse at both confidence levels than models which assume fatter tails or have leptokurtic characteristics. Another interesting finding is that during periods of high volatility, such as the crisis of 2008, the imprecision of VAR estimates increases. The normal distribution VAR performed most poorly of all the models. This is particularly the case for methods such as historical simulation, which rely on historical data to determine future returns and thereby fail to take the weights of past events into consideration. The Student's t-distribution and the exponentially weighted moving average outperform all the other models.

Keywords: value at risk, back testing, Kupiec test, Student's t-distribution, historical simulation, normal distribution, exponentially weighted moving average.

## 1.0 INTRODUCTION

The role of risk management in financial institutions has greatly expanded over recent decades. This has led to the development of measures which can be used to manage risk in a sustainable way that creates economic value for financial assets. Technological development and increased trading volumes have raised concerns about the effectiveness of risk measures. These concerns were further highlighted by the collapse of the global stock market in 1987, the collapse of Orange County, the Asian crisis in 1997 and the recent financial crisis which started in the summer of 2008. These crises exposed the weaknesses of risk management tools and exposed the global economy to huge negative consequences. One of the most centralized and widely used risk management tools for modeling financial risk in banks is value at risk (VAR), which was developed in the early 1990s (for details see Basle Committee on Banking Supervision, 2004). This risk measure combines two main ideas: that risk should be measured at the level of the institution by top management, and that it should be measured at the level of the portfolio. The latter idea had earlier been propounded by Harry Markowitz (1952), who highlighted the necessity of measuring risk at the level of the portfolio, based on its current positions. With varied financial products, VAR is able to measure risk across different portfolios. VAR uses statistical properties to measure the worst loss of a portfolio of stocks at a given confidence level and time horizon. VAR risk estimates are usually verified and tested for accuracy through a process of back testing. This involves comparing the risk estimates predicted by the various VAR approaches with the actual outcomes, thereby suggesting areas of improvement in the model. The Kupiec (1995) test uses the binomial distribution to check whether the VAR model estimates are consistent with the realized returns at the given confidence level.
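The Kupiec test just described can be sketched in code. The following is a minimal illustration (our own sketch, not an implementation from any cited source): it computes the proportion-of-failures likelihood-ratio statistic, which under the null hypothesis is asymptotically chi-square distributed with one degree of freedom.

```python
import math

def kupiec_pof(T, x, p):
    """Kupiec (1995) proportion-of-failures likelihood-ratio statistic.

    T: number of days in the backtest window
    x: number of VAR exceptions (days the actual loss exceeded the VAR)
    p: expected exception rate, e.g. 0.01 for a 99% VAR
    Under the null the statistic is asymptotically chi-square with 1 df.
    """
    if x == 0:
        # observed failure rate is zero; avoid log(0) in the alternative
        return -2.0 * T * math.log(1.0 - p)
    phat = x / T
    log_null = (T - x) * math.log(1.0 - p) + x * math.log(p)
    log_alt = (T - x) * math.log(1.0 - phat) + x * math.log(phat)
    return -2.0 * (log_null - log_alt)

# 9 exceptions in 250 days against a 99% VAR is too many:
# kupiec_pof(250, 9, 0.01) exceeds the 3.84 chi-square(1) critical value.
```

A model is rejected when the statistic exceeds the chi-square critical value for the chosen test size (3.84 at 5%).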
Parametric approaches to VAR rely heavily on assumptions made about the return distribution. For example, they may assume that stock market data is normally distributed. This assumption, however, has been shown to be unrealistic by numerous empirical studies, which find that financial data is not normally distributed and has fatter tails. On the other hand, the non-parametric approaches make no distributional assumption about returns; they rely on historical observations to make predictions about future returns. However, such predictions made by the non-parametric methods may not be valid, because historical data and events may not reflect current market conditions. While VAR is in wide use, there is no general consensus among scholars as to which of these approaches is best. The motivation for this study is to examine the various VAR approaches in a concise manner that helps us to understand their weaknesses and how accurate each of these approaches is, using back testing. This is particularly important given that risk exposure factors have increased with the advent of globalization. This has affected financial assets which had traditionally been considered to have low volatility, such as government bills and bonds. These traditional assets now face high fluctuations in their prices and are exposed to default risk which had never been considered in previous decades. To the best of our knowledge, a study which measures the accuracy of VAR approaches using government bills (assumed to have low volatility) together with other traditional stock exchange assets such as stocks (assumed to have high volatility) has not been a focal point for previous researchers. Most previous research in this area, such as that carried out by Blake et al. (2004), Jeff et al. (2009), Artzner et al. (1999) and Duffie et al. (1997), has been more involved in

sensitivity analysis, and in finding other robust and coherent risk measures as alternatives to value at risk which are applicable over longer time horizons, using the square root rule to scale the time horizon. These studies have often used mainly traditional stock price indices together with benchmark indices such as Standard and Poor's. This has left a number of issues about VAR's applicability and accuracy open, thereby hindering the generalization of VAR approaches across all types of assets. Thus our approach to this study of value at risk offers a new dimension, comparing VAR approaches on relatively stable government bills with low returns against traditionally volatile and diversified stocks from the banking industry and the fast-growing mobile technology industry.

## 1.1 BACKGROUND OF VAR

In this contemporary volatile business environment, with increased uncertainty in financial markets and the recent global financial meltdown of 2008, effective measures of market risk have become crucial in most financial institutions. Vulnerability emerging from extensive movements in market prices of financial assets, as well as the increased use of derivatives, calls for a standard measure of risk that will capture and mitigate the growing financial risks. Supervisory authorities and management ask for a quantitative measure of market risk in order to make sound investment decisions in allocating risk capital or fulfilling external regulations. According to Jorion (2001), market risk is the volatility of unexpected outcomes. In other words, it is the risk that an investment loses its value due to movements in market risk factors such as equity prices, exchange rates, interest rates and commodity prices. The volatility of financial markets creates risks and opportunities that must be measured and managed. The scope of this study is limited to market risk management using Value at Risk (VAR).

The origin of VAR dates as far back as 1952; it evolved naturally from Markowitz's portfolio theory (PT) in the mean-variance framework. Notwithstanding, there are some important differences between PT and VAR, as stated by Dowd (2005, p. 11):

1. PT interprets risk in terms of standard deviation, while VAR interprets it in terms of maximum likely loss.

2. PT assumes distributions close to normal, while VAR accommodates wide range of possible distributions.

3. PT is limited to market risk, while VAR can be applied to other types of risk.

Value at Risk became a popular tool for measuring exposure in the aftermath of the infamous financial disasters of the 1990s that involved Orange County, Barings, Metallgesellschaft, Daiwa and many others (Jorion, 2001). The common lesson drawn from these disasters is that billions of dollars can be lost due to poor supervision and management of financial risks. As a result, financial institutions and regulators sought a means to deal with this and turned to VAR, which they found to be an easy-to-understand method for quantifying market risks (Jorion, 2007). This is why VAR is fast becoming an essential tool for conveying trading risks to senior management, directors and shareholders. In spite of its earlier origin, the term value at risk became well known with the G-30 report published in 1993. Much of this has been attributed to the efforts of Till Guldimann, who was head of global research at J.P. Morgan in the late 1980s (Jorion, 2001). J.P. Morgan was one of the first banks to disclose its VAR: it revealed in its 1994 Annual Report that its trading VAR averaged $15 million at the 95% level over 1 day. Given this information, shareholders can assess whether they are comfortable with that level of risk. VAR can be defined statistically as a measure of downside risk based on current positions (Jorion, 2007). Jorion further describes VAR as "the quantile of the projected distribution of gains and losses over the target horizon". This definition indicates that risk can be considered both as a gain and as a loss for the investor, but in this thesis we are concerned with the loss side. It also means that there are two components which we must take into consideration when calculating VAR: the time horizon and the confidence level. Although value at risk was initially limited to the calculation of market risk, the use of VAR as an active risk management tool has gone well beyond derivatives.
Value at Risk has been recommended as a standardized measure of risk by the Basel Committee (Basel II Accord, 2004), the committee in charge of bank supervision and regulation, and also by the U.S. Federal Reserve and the U.S. Securities and Exchange Commission. Jorion (2002) pointed out that a number of factors have contributed to the use of VAR as a standard tool for risk measurement and control. These include pressure on regulatory authorities to put in place a better and more accurate measure to control risk; the globalization of financial markets, which has increased the number of risk exposure factors that firms face; and technological innovations, which have increased the need for measures to control enterprise-wide risk.

## 1.2 PROBLEM STATEMENT

Do VAR approaches accurately measure the risk on a stock portfolio?

The aftermath of the deregulation and globalization of the financial industry in the 1970s witnessed much competition among financial firms all over the world. As firms compete with each other, risk exposure factors have increased; coupled with the recent financial crisis, governments and other regulatory authorities have become more actively involved in the need to put in place accurate risk control measures. However, some governments have also contributed to financial crises as a result of their interference in the private sector through actions such as currency devaluation, leading to systematic misallocation of capital and trade imbalances across nations. A typical example is the Asian crisis of 1997, which was largely attributed to the unsustainable economic policies of the Asian governments. From a basic point of view, VAR can be used by institutions whose activities are exposed to market risk in three ways:

1) Passive (information reporting): this involves using VAR as a tool to calculate the overall risk to which the company is exposed. This is the original purpose for which VAR was predominantly used.

2) Defensive (risk control): this involves using VAR as a tool to set limits on the trading positions of the activities of a company.

3) Active (risk management): this involves using VAR as a management tool to set and allocate capital for various trading activities, for example options, put positions on forward contracts and many other trades that are exposed to risk.

In spite of the convergence on VAR as a benchmark for measuring financial risk, Jorion pointed out that "there exist as many approaches to calculate VAR as there are users, each claiming to be the best approach". As such, there has been no consensus among scholars and practitioners as to which VAR approach should be used. Given that VAR is widely used even in non-financial firms to measure different kinds of financial risks, it appears that VAR has come to stay. As future financiers, we think it important to contribute to the ongoing "VAR revolution", a term used by Jorion (2001) to describe the increasing use of VAR as a benchmark in risk management.

## 1.3 PURPOSE OF STUDY

The purpose of this study is to evaluate the accuracy of Value at Risk approaches in measuring risk, by comparing the VAR estimates of the stock return distribution with the actual portfolio returns to check whether they are consistent. This test of accuracy is done using a back testing model based on the Kupiec (1995) test, from which we expect to find the best model(s) for risk management. The parametric methods used are the normal distribution VAR and the t-distribution VAR, while the non-parametric models are historical simulation and the EWMA. These approaches will be discussed in detail in subsequent chapters. The choice of this mixture of technology, banking and government securities in our thesis is intended to show how VAR is measured and applied in three sectors which are exposed to varied risk factors. In addition, we want to study how the VAR measures can be complemented with other risk measures so that these models could correctly measure market risk on a stock portfolio. We use the 95% and 99% confidence levels. Lastly, we will make proposals on how the risk measures could be improved.

## 1.4 DELIMITATIONS

Delimitations are necessary due to the time limitation of the thesis procedure. The complex nature of some of the models, and the quantitative skills needed to really understand them, lead us to limit ourselves to the models above, hoping that they will meet the purpose of this study and improve our understanding of VAR calculation.

Furthermore, the companies chosen are from different business sectors, which increases and widens the scope of the study. Covering other financial markets, stock indices and foreign exchange would have been of great value to our thesis, since it might have made it easier to interpret differences in the results, if any exist. Our study is limited to only one back testing technique, the Kupiec (1995) test. It would have been good to incorporate more than one testing technique into the study because of the merits and demerits of each back testing technique. Moreover, using four models in the thesis is not sufficient to find the most accurate approach, because identifying an accurate model requires many models, and this could have improved the outcome of our results.

## 1.5 DISPOSITIONS

In the second section of this thesis we discuss what value at risk is about and the various approaches to VAR measurement. This section also reviews literature related to the topic. Section three presents the data used, and the research method and strategy applied in this thesis. It also considers the reasons for the selection of the stock indices used. In chapter four we present the results of the study. Chapter five presents the analysis and discussion of the results. In chapter six we draw conclusions from the findings of this study and suggest areas for possible future research.

## 2.0 THEORETICAL FRAMEWORK

In this chapter of the thesis, we discuss what Value at Risk is, with an interpretation of the VAR formula, and present the different approaches to VAR measurement together with the mathematical models on which they rely.

## 2.1 VALUE AT RISK (VAR)

As mentioned earlier, it is important that the top management of firms be aware of the risks the company is exposed to, including trading positions taken by traders on behalf of the firm. This is because some traders may not follow the rules and may gamble huge sums of the firm's capital on very risky business. Some examples mentioned earlier include the loss of 1.1 billion dollars by a trader at Daiwa Bank in Japan over a period of 11 years; the huge loss Leeson caused Barings Bank as a result of taking unauthorized positions in derivatives; and the collapse of Orange County, caused by the highly leveraged position assumed by the county treasurer (Ruppert, 2004). As a result of these financial disasters, there is high demand for a better tool that can be used to quantify risk. There are many statistical models used in measuring risk, but since VAR can measure the risk of almost any kind of asset portfolio, with the loss expressed in money terms, it has come to be widely used in risk quantification.

VAR can be defined intuitively as a summary of the worst loss over a given time frame that falls within a particular confidence interval (Jorion, 2006, p. 17). The given time frame is known as the time horizon or the holding period, and the confidence interval is known as the level of certainty. The confidence level and the time horizon can be denoted 1 − α and T, respectively. For example, suppose the time horizon is one day, the confidence coefficient is 99% (so that α = 0.01), and the VAR is 5 million over the next day. An interpretation of this result is that there is only a 1% chance of our loss going beyond 5 million over the next day. This interpretation indicates that VAR depends on the value of α, or on both α and the time horizon T; this can be denoted VAR(α) or VAR(α, T), respectively. Once this VAR amount has been made known to shareholders and senior management, they can decide whether they are comfortable with that level of risk. The VAR calculation can be reviewed or recalculated to determine another amount of loss which shareholders find acceptable, depending on their risk tolerance. The choices of these variables are very important and depend on the particular institution and other factors.

### 2.1.1 Choice of the Time Horizon

The most commonly used holding periods are one day and one month. Four main factors account for the choice of the holding period. The first is the liquidity of the markets in which the firm's assets are traded. The holding period here refers to the time it takes to liquidate a trading position in the market in an orderly manner. For example, if we expect to be able to liquidate a position quickly and in an orderly fashion, we may prefer a short time horizon for calculating the VAR of that trading position. On the other hand, if liquidating a position in an orderly manner takes much longer because the firm trades in a thin market, such as the over-the-counter (OTC) market, where the thin size of the market may make it slow to find a counterparty to trade with, we may prefer a longer time horizon. One fundamental assumption underlying the VAR calculation is that the portfolio return is stationary during the holding period. The longer the time horizon, the greater the risk, and consequently the higher the VAR. This is why the international bank capital standard is set at a 10-day holding period, since banks have very high turnover and trade in liquid assets which need to be sold quickly, whereas investment managers have a longer holding period of up to one month, which is taken as the maximum time for an asset portfolio to be liquidated. The implication of this assumption is that most of the discussion of VAR in this thesis will be centered around a one-day holding period, given that the above-mentioned assumption is only applicable in the short run.
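Under the stationarity assumption just mentioned, a one-day VAR is commonly scaled to longer horizons with the square-root-of-time rule referred to in section 1.0. A minimal sketch (the figures are illustrative, not taken from our data):

```python
import math

def scale_var(var_1day, horizon_days):
    """Scale a 1-day VAR to a longer horizon with the square-root-of-time
    rule, which assumes i.i.d. returns over the holding period."""
    return var_1day * math.sqrt(horizon_days)

# e.g. a 1-day VAR of 15 (million dollars) over Basel's 10-day horizon:
var_10day = scale_var(15.0, 10)  # about 47.4
```

The rule holds only if returns are independent and identically distributed over the horizon, which is one reason longer-horizon VAR figures should be treated with caution.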

### 2.1.2 Choice of Confidence Level

Dowd (1998) outlines a number of factors which influence the choice of the confidence level. It depends on the purpose of the risk measure, which could be to validate VAR systems, determine internal capital requirements, provide inputs for internal risk management, or make comparisons among different institutions. The choice of confidence level may also be influenced by the behavior of the distribution, for example whether we assume normality or some other probability distribution such as the t-distribution (Dowd, 1998, p. 52). A lower confidence level is usually used for system validation, and it may also depend on the choice of the regulatory supervisor who is verifying the VAR system used in the firm. The risk tolerance of the firm influences the confidence level used for the purpose of capital requirements: a more risk-averse firm will use a higher confidence level, because it wants to reserve more capital in liquid assets to meet unexpectedly low returns. The choice of VAR for accounting and comparison purposes differs between institutions. For example, J.P. Morgan, one of the prominent advocates of VAR, uses a 95% confidence level, Bankers Trust uses a 99% confidence level and City Trust uses a 95.4% confidence level. The most commonly used confidence levels are 95% and 99%. In summary, a low confidence level is used for validation, a high confidence level for risk management and capital requirements, and a medium or low one for comparison and accounting purposes (Dowd, 1998).

This can be expressed mathematically as: Pr(L > VAR) ≤ 1 − c, where Pr is the probability, c the confidence level and L the loss. This is illustrated graphically in Figure 1 below.

Figure 1: The 5% value at risk of a hypothetical profit-and-loss probability density function

Source: AaCBrown on June 21, 2010 for use in the Value-at-Risk article

This graph can be further explained as follows: if a portfolio of stocks has a one-day 95% VAR of $100 million, there is a 0.05 probability that the portfolio will fall in value by more than $100 million over a one-day period, assuming markets are normal and there is no trading. Informally, a loss of $100 million or more on this portfolio is expected on 1 day in 20. A loss which exceeds the VAR threshold is termed a "VAR break".

The most interesting thing about VAR is that it summarizes the potential risk of a stock portfolio in a single number. We can say that VAR is the amount of money a portfolio can lose within a particular time horizon at a given confidence level. This can also be interpreted as the probability that less than a particular sum of money will be lost within a specified time horizon, where the percentage is one minus the target confidence level. In general, the VAR formula is:

**VAR(W₀, α, Δt) = W₀ · N_α · σ · √Δt** ………………………………………….equation (1)

where **Pr(ΔW₀ ≥ W₀ · N_α · σ · √Δt) = α**

Here **W₀** stands for the current value of the portfolio, **α** is the confidence limit, **Δt** is the time horizon, **N_α** is the number of standard deviations corresponding to **α**, **σ** is the standard deviation of the portfolio's return distribution, **ΔW₀** is the absolute value of a negative change in **W₀**, and **Pr(·)** stands for the probability. Statistically, VAR corresponds to a certain percentage point (quantile) of the return distribution of a stock portfolio, the one whose underlying cumulative probability is equal to **α**. Practically, a small **α** is needed to be able to catch extreme events, and that is the reason why 1% has been used as a standard setting in financial institutions. For instance, if a bank announces that the 1-day VAR of its trading portfolio is $30 million at the 1% confidence level, it means that there is only 1 day out of 100 on which the bank would make a loss greater than $30 million over a one-day period; that is, the VAR measure is an estimate of a decrease of more than $30 million in the trading portfolio value that could occur with 1% probability over the next trading day.
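Equation (1) can be sketched directly in code. The following minimal example is our own illustration (the portfolio value and volatility are hypothetical, not from our data set):

```python
import math
from statistics import NormalDist

def normal_var(w0, sigma_daily, alpha=0.01, horizon_days=1):
    """Parametric (normal) VAR in the spirit of equation (1):
    VAR = W0 * N_alpha * sigma * sqrt(dt), with N_alpha the standard
    normal quantile that leaves probability alpha in the tail."""
    n_alpha = NormalDist().inv_cdf(1.0 - alpha)  # about 2.33 for alpha = 0.01
    return w0 * n_alpha * sigma_daily * math.sqrt(horizon_days)

# A $100m portfolio with 1.5% daily volatility at the 99% level:
var_99 = normal_var(100e6, 0.015, alpha=0.01)  # about $3.49m
```

Note that σ here is the daily volatility, so the √Δt factor is 1 for a one-day horizon; longer horizons scale by the square-root rule discussed earlier.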

VAR models are based on the assumption that the composition of the portfolio does not change over the time horizon. Since this assumption is only plausible for short time periods, the majority of the discussion of VAR measurement is centered on the one-day horizon. To calculate VAR, some assumptions are made, most notably that the daily fluctuations in stock prices have a cumulative density function, which is usually assumed to follow a normal distribution. The merit of this assumption is that it makes the VAR estimates easier to compute and understand, but it also has disadvantages: changes in asset prices do not follow the normal distribution curve, and in the presence of observations in the tails beyond what the normal distribution allows, the VAR measurement under the normality approach consistently understates the losses that can possibly occur. The t-distribution is one solution to this problem, since it accounts for tails fatter than those of the normal distribution. According to Dowd (1998), the t-distribution's fatter tails give higher losses, which leads to a higher VAR.
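To illustrate the effect of fatter tails, the following sketch (our own, with hypothetical parameter values) estimates VAR under a Student-t return distribution by Monte Carlo. A t-variate with ν degrees of freedom can be generated as Z/√(V/ν) with V chi-square distributed; rescaling by √((ν − 2)/ν) keeps the simulated returns at the target volatility:

```python
import math
import random

def t_var_mc(w0, sigma_daily, alpha=0.01, nu=5, n_sims=100_000, seed=42):
    """Monte Carlo VAR under a Student-t return distribution.

    A t-variate with nu degrees of freedom is Z / sqrt(V / nu), where
    V is chi-square(nu) built from nu squared normals; the sqrt((nu-2)/nu)
    factor rescales so the returns keep the target daily volatility.
    """
    rng = random.Random(seed)
    scale = sigma_daily * math.sqrt((nu - 2) / nu)
    losses = []
    for _ in range(n_sims):
        z = rng.gauss(0.0, 1.0)
        v = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(nu))
        losses.append(-w0 * scale * z / math.sqrt(v / nu))
    losses.sort()
    return losses[int((1.0 - alpha) * n_sims)]  # empirical loss quantile
```

With ν = 5 the 99% quantile of the rescaled t is about 2.61 standard deviations versus 2.33 for the normal, so the t-based VAR comes out larger, consistent with Dowd's (1998) observation above.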

VAR can measure not only market risk but other risk factors as well, and it can measure the risk of almost any kind of asset portfolio, with the loss expressed in probability and money terms. As mentioned above, from its creation in the early 1990s VAR quickly established itself as the dominant risk measure, used not only by investment banks but also by commercial banks, pension funds and other financial institutions (Dowd, 2005, pp. 10-11). Despite its popularity and widespread usage, one must be careful of VAR's drawbacks: VAR calculations are simplest under the normal distribution assumption, and yet the use of VAR was motivated precisely because financial data are not normally distributed.

VAR measurement has been highly criticized by its opponents for its shortcomings. Taleb (1997) proposed that VAR should not be used as a risk measurement tool because (1) VAR losses are limited only within the given confidence interval, (2) greater losses can result when we depend too much on VAR, and (3) the use of the VAR concept is a delicate practice, since it leads to principal-agent problems and is not valid in real-life situations. Other opponents of the VAR model, Danielsson & Zigrand (2003), added that the use of VAR as a regulatory requirement can (4) alter good risk management practices, and that (5) VAR as a risk measure is non-sub-additive, which results in inconsistency of the VAR model; this is regarded as the most serious drawback of the model, since it cannot account for the diversification effect in the case of non-normality. Tasche (2001) is of the opinion that for any risk measurement tool to be coherent it must meet the axiom of sub-additivity. This axiom states that the risk of a portfolio of stocks, for example, should be at most the sum of the risks of the individual stocks in the portfolio. VAR can only be sub-additive if the normality assumption on the return distribution is applied, which is contrary to the real-life behavior of financial time series. Coupled with the aforementioned shortcomings, (6) VAR can be calculated using several methods with different assumptions, and each of these methods has its pros and cons and its specific performance; considering the popularity of VAR, we believe that looking into the comparison problem of the different VAR methods constitutes important information for VAR users. Ener et al. (2006) write that stock market returns exhibit excess kurtosis and fat tails. This means that there is a higher frequency of extreme events than would occur under a normal distribution. The study also indicates that jumps and stochastic volatility are some of the likely causes of kurtosis. Because of this kurtosis, volatility modeling becomes essential, and it is very important in VAR measurement.
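The non-sub-additivity point can be made concrete with a standard textbook-style example (the numbers are hypothetical): two independent loans each lose 100 with probability 4%, so each has a 95% VAR of zero, yet the two-loan portfolio has a 95% VAR of 100, so the VAR of the sum exceeds the sum of the VARs.

```python
def var_discrete(outcomes, conf):
    """VAR of a discrete loss distribution, given as (loss, probability)
    pairs: the smallest level L such that P(loss > L) <= 1 - conf."""
    tail = 1.0 - conf
    for level in sorted({loss for loss, _ in outcomes}):
        if sum(p for loss, p in outcomes if loss > level) <= tail:
            return level
    raise ValueError("probabilities do not sum to 1")

# Two independent loans: each loses 100 with probability 0.04, else 0.
single = [(0, 0.96), (100, 0.04)]
# The two-loan portfolio under independence: loss 0, 100 or 200.
portfolio = [(0, 0.96 ** 2), (100, 2 * 0.96 * 0.04), (200, 0.04 ** 2)]

assert var_discrete(single, 0.95) == 0       # each loan alone: VAR = 0
assert var_discrete(portfolio, 0.95) == 100  # combined: VAR = 100 > 0 + 0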

The VAR concept has been greatly criticized because it does not take into consideration the statistical properties of the significant losses above the confidence level, and also because it is not a coherent risk measure. Nevertheless, the model still stands the test of time when it comes to risk quantification, because it is simple and easy to calculate. Moreover, proponents of the model argue that irrespective of its pitfalls, the VAR model can be useful in many ways: (1) the Basel II Accord, under the new risk-based capital adequacy framework which is a revision of the Basel I Accord, recommended VAR as a standard tool for measuring credit risk and for determining the capital requirements of firms; in addition, according to the Basel Committee, banks should reserve sufficient cash to be able to cover market losses over 10 days with 99% probability for all their traded portfolios, and this amount of cash is to be determined by VAR; (2) an increase in VAR means an increase in firm risk, so management can set targets for their total risk and from that determine their corresponding risk positions; (3) VAR information can be used to provide remuneration rules for traders and managers; (4) investment, hedging, trading and portfolio management decisions can be guided by VAR-based decision rules; (5) VAR is used by firms in reporting and disclosing their risk levels; and (6) systems based on VAR can measure other risks such as credit, liquidity and operational risks (Dowd, 2005). Since its implementation is simple and free of model risk, financial analysts prefer using historical simulation together with the bootstrap in real financial markets. Pant and Chang (2001) and Heikkinen and Kanto (2002) have made remarkable progress on heavy-tailed portfolio distributions; they assume that portfolio loss follows a t-distribution. The t-distribution can describe actual market data more effectively than the normal distribution because of its heavy tails. According to Platen and Stahl (2003), the t-distribution gives a better approximation of the returns of the majority of stocks in their empirical analyses.

## 2.2 VAR APPROACHES

In this section we present the different approaches used in this thesis, with the pros and cons that affect their VAR measurement. These approaches make assumptions that handle, in different ways, the return characteristics which influence each approach's VAR calculation.

### 2.2.1 NON PARAMETRIC APPROACH

A non-parametric method is a general method that makes no assumption that asset returns are distributed according to a given probability distribution. The term non-parametric does not indicate that the approach has no parameters; there are parameters, but their nature and number are flexible and not fixed in advance. According to Dowd (1998), this method is widely used due to the ease of calculating VAR and the fact that it avoids many problems while still producing good measurements. The non-parametric approach is more suitable when extreme events happen more regularly than the normal distribution implies, and also when asset returns do not follow the normal distribution curve. Another important feature of this model is that it makes no use of normality assumptions. This method depends heavily on historical data, and no events that have not taken place in the past are included. As mentioned above, the two non-parametric approaches we make use of in this thesis are historical simulation and weighted volatility using the Exponentially Weighted Moving Average (EWMA).
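The EWMA volatility estimate follows the well-known RiskMetrics recursion; a minimal sketch of the update (our own illustration; λ = 0.94 is the standard RiskMetrics daily decay factor):

```python
def ewma_volatility(returns, lam=0.94):
    """Exponentially weighted moving average volatility (the RiskMetrics
    recursion): sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2,
    so recent squared returns carry more weight than older ones."""
    sigma2 = returns[0] ** 2  # seed the recursion with the first squared return
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1.0 - lam) * r ** 2
    return sigma2 ** 0.5
```

The resulting σ can then be fed into a parametric VAR formula; because of the exponential weighting, a recent market shock raises the VAR estimate faster than an equally weighted historical estimate would.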

### 2.2.1.1 HISTORICAL SIMULATION (HS)

This approach is the most famous among the non parametric approaches, and is sometimes referred to as non parametric VAR, because it can be used on all kinds of derivative instruments, can easily be explained and implemented, can implicitly be used to calculate volatilities and correlations, eludes model risk, and can be used without normality assumptions. The purpose of the historical simulation approach is to forecast future VAR directly from historical distributions of changes in asset prices. The sample of historical return changes is collected at a given time interval covering at least one year of current daily stock returns, which are assumed to be an appropriate data set representing future return changes. According to Giovanni B.A. & Kostas G. (2000), ''A longer period is more appropriate when available, but availability of longer period historical data is often problematic for the whole (linear) contracts or other risk factors.'' Also, Hendricks (1996, p. 54) gives the example that an estimation period of 1250 days is so long that it renders the VAR estimate insensitive to new information and gives little information about changes in risk factors over time. Assuming that the historical distribution of returns is a good measure of the distribution of returns we will face in the future, we can present the historical simulation function below (Dowd, 1998, p. 99):

r_{p,t} = ∑_{i=1}^{n} w_i r_{i,t},  t = 0, …, T………………………………………………..equation (2)

Where t indexes the sample from time 0 to time T, r_{i,t} is the return on asset i at time t, w_i is the relative weight of asset i in the portfolio, n is the number of assets in the portfolio, and r_{p,t} is the portfolio return at time t. Each sample t gives a particular portfolio return. The expected sample distribution of historical portfolio returns is obtained from the sample of historical observations. This approach simply reads the VAR from the histogram of returns (Choi et al 2011, pp. 2). For example, with a sample of 2000 daily observations and a VAR at a 99% confidence level, you would expect the actual loss to exceed the VAR on 1% of days, a total of 20 days, and the VAR will be the 21st worst loss.
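As a minimal sketch of how historical simulation reads VAR from the empirical histogram of returns, the following uses simulated returns rather than real market data; the sample size, volatility and seed are illustrative assumptions, not values from this study:

```python
import numpy as np

def historical_var(returns, confidence=0.99):
    """Historical-simulation VAR: the empirical left-tail quantile of
    past returns, reported as a positive loss by convention."""
    alpha = 1.0 - confidence              # tail probability, e.g. 0.01
    return -np.quantile(returns, alpha)   # e.g. near the 21st worst of 2000

# Simulated sample of 2000 daily returns with 1% daily volatility
rng = np.random.default_rng(42)
sample = rng.normal(loc=0.0, scale=0.01, size=2000)
var_99 = historical_var(sample, confidence=0.99)   # a daily VAR rate
```

Because the sample here happens to be normal, the estimate should land near 2.326 × 1% ≈ 2.3% of capital; on real, fat-tailed return data the empirical quantile can differ substantially from that normal benchmark, which is precisely the point of the method.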

According to Robert (2009, pp. 1-4), the advantages of this method are that it is simple, and that the data needed are readily available from public sources, a characteristic that enables risk managers to easily report their risk situation to senior managers. Another merit is that it does not depend on assumptions about the distribution of returns: whether or not the returns follow the normal distribution is irrelevant to the method. The approach does not require the assumption that returns are independently and identically distributed (IID). It does assume that the return distribution is stable over the periods, that is, that it remains the same in the future as it was in the past. Dowd (1998) also mentions another interesting feature of the HS approach, namely that it is less restrictive in its assumptions than approaches based on specific distributional assumptions such as normality, and that it has no problem accommodating the fat tails that affect the VAR calculation under the normal approach.

Despite these advantages, the HS approach has some problems, which have been pointed out by several authors. Jorion (2001) notes that the primary problem of this approach is related to data, since it depends on a large amount of historical data in order to perform efficiently at higher confidence intervals. When estimating VAR at a 99% confidence interval, intuitively at least 100 historical observations have to be inputted, but even then the approach only produces one observation in the tail. Perhaps not enough historical data is available to produce a good VAR estimate, a problem that can, on the other hand, occur for most of the VAR approaches. Further, a crucial argument was made by Dowd (1998, p. 102), who stated that the problem with this approach is that only events that occurred within the collected historical data period have a probability of occurring in the future and can be covered in the risk estimation. For example, if no devaluation occurred in the historical data period, the HS procedure would implicitly regard exchange rate risk as very low, whereas the real exchange rate risk may be very high, particularly where the market expects the exchange rate to change after a long period of stability. Dowd (1998, p. 102) also posits that the HS approach has a problem regarding the duration of the estimation period: the more extreme the tail, the longer the estimation period required. Since we assume that the distribution process of the return data remains the same over the estimation period, we would equally want the longest possible period to maximize the accuracy of the result. For instance, for VARs based on high confidence levels, say 95%, one has to wait on average 20 days to expect a single loss in excess of VAR; at a 99% confidence level, we would expect to wait 100 days to see a single loss in excess of VAR.

In conclusion, the HS approach has both advantages and disadvantages, which is why it is recommended to complement it with other statistical tests so that it can pick up those risks or events which were not well represented in the historical data or occurred beyond the confidence level.

### 2.2.1.2 HISTORICAL WEIGHTED VOLATILITY USING EWMA

This approach allows the volatility to vary from one period to another and so captures volatility clustering, since higher than average volatility in one period is likely to lead to higher than average volatility in the next period (Dowd, 1998, p. 95). The exponentially weighted moving average (EWMA) is used by RiskMetrics as a benchmark in the financial industry for VAR estimation because it is more responsive to unforeseen movements of the market. It is one of the methods of modeling volatility, has only one parameter (λ), and is easily applicable, with λ assigned the value 0.94 for daily observations and 0.97 for monthly observations when used in RiskMetrics. This method is preferred among the methods of modeling time-varying volatility because of the greater weight it puts on more recent observations, unlike moving average (MA) estimates. MA is calculated as the average of historical volatility, which gives the same weight to past data or events as to present or future events, even if the past data are not likely to occur again. Under EWMA, the volatility forecast is a weighted average of the current period's squared return and the past period's volatility. This forecast can be presented as:

**σ_t² = λσ_{t−1}² + (1 − λ)r_t²…………………………………equation (4)**

Here λ is the weight placed on the previous period's volatility forecast relative to the most recently measured squared return.

σ_t² = the forecast of return volatility at day t, σ_{t−1}² = the previous forecast of return volatility at day t−1, and r_t² = the squared return at day t. Therefore, from the above formula we can write the exponentially weighted moving average formula as:

**σ_t² = (1 − λ) ∑_{i=0}^{∞} λ^i r_{t−i}²…………………………………………………………equation (5)**

Where σ_t² is the volatility estimator, λ reflects the exponential rate of decay, and i reflects the age of the observations, with i = 0 the most recent.

The EWMA approach assigns more weight to recent observations and less weight to older observations, thus making the recent observations have a large influence on the forecasted VAR estimate. The approach lets old observations decay gradually rather than fall out of the sample abruptly, thereby correcting some of the errors which lead to underestimation or overestimation of the VAR estimate when using the historical simulation approach. By taking the current volatility into account and adjusting the whole sample accordingly, the EWMA approach produces a more accurate expectation of the VAR during the particular period. For instance, if the current volatility of asset returns is 2% per day and three months ago the volatility was only 1.5% per day, the data observed three months ago understate the changes we expect to see at present. Conversely, if the current volatility is 1.5% per day and was 2% per day three months ago, the data observed three months ago overestimate the changes we expect to see now.
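The recursion in equation (4) can be sketched as follows. The returns are simulated, λ = 0.94 is the RiskMetrics daily value mentioned above, and seeding the recursion with an initial sample variance is a modelling choice, not part of the formula:

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """Run the EWMA recursion sigma2_t = lam*sigma2_{t-1} + (1-lam)*r_t^2
    over the sample and return the forecast volatility for the next day."""
    sigma2 = float(np.var(returns[:30]))   # seed with an initial sample variance
    for r in returns:
        sigma2 = lam * sigma2 + (1.0 - lam) * r * r
    return float(np.sqrt(sigma2))

rng = np.random.default_rng(7)
returns = rng.normal(0.0, 0.015, size=500)   # simulated daily returns
sigma = ewma_volatility(returns)             # next-day volatility forecast
var_95_rate = 1.645 * sigma                  # 95% VAR rate, normal quantile assumed
```

Note that with λ = 0.94 the weight on an observation halves roughly every 11 days, so the forecast is dominated by the most recent few weeks of returns.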

The advantages of this approach are as follows: (1) the samples are allowed to grow over time, which reduces the impact of extreme events over time and thereby reduces the ghost effects which would otherwise cause leaps in our sample returns. This is not possible under HS, because all old observations have equal weight no matter the sample length (Dowd, 2005, pp. 93-94). (2) One of the real advantages of the EWMA is that recent volatilities are taken into consideration, so the VAR estimates from this approach can exceed those of the HS approach; for instance, recent returns will carry more weight in times of high volatility. (3) It also has the ability to assign more weight to recent observations than to previous ones.

The major disadvantage of this method is that it does not take into account mean reversion, which is considered by superior approaches such as GARCH.

### 2.2.2 PARAMETRIC APPROACHES

This approach to calculating VAR involves making assumptions about the distribution of the returns, for example that the returns are normally distributed. The normality assumption made in the parametric VAR approaches makes VAR estimation simple, and the estimates are quite accurate compared to non parametric VAR. Parametric VAR estimates are easy to calculate, especially when there is a large number of assets in the portfolio compared to when there is just one (Ruppert, 2004, p. 348).

### 2.2.2.1 NORMAL VAR

The normal distribution VAR is based on the assumption that returns are normally distributed. This assumption has the advantage of making the VAR calculation much more straightforward and simple. VAR based on the normal distribution makes use of the mean (µ) and the standard deviation (σ). If the normality assumption holds, it becomes easy to state the confidence level in terms of alpha (α) alone, which tells us how far the cut-off values of the two tails are from µ, expressed in units of the standard deviation σ (Robert, 2009, pp. 5). The accurate value for α can be obtained from a probability distribution table. Below is a normal distribution curve along with the t-distribution with five degrees of freedom; the area under each curve has a probability of one, indicating that the probabilities of observing negative and positive values sum to one. To calculate VAR using the normal distribution approach, let us assume that we use a 99% confidence interval (c), so that our alpha is (1 − c) = α = 1%. Assuming normality of the return distribution, we can use the standard normal probability distribution table to get the critical value for α, which is −2.326. We can therefore say that over the next trading day our standardized return would fall below −2.326 with 1% probability. For example, assume a mean return of zero and a standard deviation (σ) of returns of 0.5% (0.005). Our VAR rate for the next day is then 2.326 × 0.005 = 0.01163 = 1.163% at the 99% level of confidence. To express this in absolute terms we multiply this VAR rate by the initial capital investment: if we had 100 million dollars as initial capital, our absolute VAR is 100 million × 1.163% = 1.163 million. By convention, value at risk is always stated as a positive number, so we have 1.163 million.

Figure 3 above represents a standard normal distribution and the t-distribution with five degrees of freedom. Note that the t-distribution has fatter tails than the normal distribution, indicating that it gives a higher probability of observing extreme events. Value at risk is concerned with the left tail of the distribution.

Source: Robert (2009, pp. 4)

VAR (absolute) = −µW₀ − ασW₀ ————————————– equation (6)

VAR (relative) = −ασW₀ ———————————————– equation (7)

Where W₀ is the initial investment and α is the (negative) standard normal critical value, e.g. −2.326 at the 99% confidence level.
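Equations (6) and (7) can be sketched with the worked numbers from the example above (µ = 0, σ = 0.5% per day, W₀ = 100 million); the only difference from the hand calculation is that the critical value is taken from the standard library rather than read from a table:

```python
from statistics import NormalDist

mu, sigma, w0 = 0.0, 0.005, 100e6     # daily mean, daily volatility, position size
alpha = NormalDist().inv_cdf(0.01)    # 1% left-tail critical value, about -2.326

var_absolute = -mu * w0 - alpha * sigma * w0   # equation (6): about 1.163 million
var_relative = -alpha * sigma * w0             # equation (7): equals (6) when mu = 0
```

With a zero mean the two definitions coincide; with a positive expected return, absolute VAR is smaller than relative VAR because the expected gain offsets part of the potential loss.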

Absolute VAR is based on the mean µ and on σ; relative VAR depends solely on σ. However, both depend on the confidence parameter α (Dowd 1998, p. 43). Below are some of the advantages of the normal distribution VAR, as outlined by Dowd (1998): (1) It is easy to use in the VAR calculation. (2) Informativeness: for VAR estimates to be useful and informative they must be based on some set of assumptions and parameters, and the normal VAR satisfies this, given that it uses a holding period and a confidence level. (3) Translatability across confidence levels: this is a very useful property because it makes VAR estimates across different confidence levels easy to understand and informative. For example, if we are interested in VAR estimates at a 99% confidence level and we had initially calculated VAR using a 95% level of confidence, it is easy to change to the level of confidence we are interested in. (4) Translatability across holding periods: VAR estimates using the normal approach can easily be converted from one holding period to another. The normal approach to VAR, based on any particular confidence interval and holding period, tells us accurate information about all other VAR estimates for other confidence intervals and holding periods (Dowd 1998, p. 65); the normality assumption gives us a clear picture of our likely loss over a wide range of confidence levels and time horizons. On the other hand, according to Dowd (1998) there exists large empirical evidence that return distributions are not normally distributed as assumed in the normal VAR approaches. Stock returns are often negatively skewed, meaning that returns are more likely to experience a loss than a gain (Dowd, 1998).
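The translatability properties across confidence levels and holding periods described above can be illustrated as follows; both conversions are properties of the zero-mean normal model (with independent daily returns for the square-root-of-time rule), not of the data:

```python
from math import sqrt
from statistics import NormalDist

z95 = NormalDist().inv_cdf(0.05)   # about -1.645
z99 = NormalDist().inv_cdf(0.01)   # about -2.326

var_95_1d = 1.0                    # a 1-day 95% VAR in some currency unit
var_99_1d = var_95_1d * z99 / z95  # switch confidence level: about 1.41x larger
var_99_10d = var_99_1d * sqrt(10)  # switch holding period to 10 days
```

These are exactly the conversions behind the Basel requirement of a 10-day, 99% VAR: under normality it can be read off directly from a 1-day estimate at any confidence level.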
Lechner et al (2010) also point to the fact that the normality assumption, which is often made in the normal VAR calculation, frequently leads to misleading estimates, as most financial data are characterized by negative skewness and leptokurtosis (fat tails). Hendricks (1996) supports this by noting that stock market and other financial data often have extreme outcomes that occur more frequently than predicted by the normal distribution. Einmahl et al (2005) also note that there exists some evidence of a high frequency of extreme events which is not reflected or captured by VAR estimates based on the normal distribution. VAR estimates based on normal or Gaussian-based statistics often produce faulty results when dealing with skewed data (Lechner et al, 2010).

### 2.2.2.2 T-DISTRIBUTION VAR

Since the returns of financial data are not normally distributed, the normal distribution approach to VAR does not account for extreme events. It therefore becomes necessary to search for ways to adjust for the non-normality of the distribution, taking into consideration fatter tails and excess kurtosis while retaining the simplicity and convenience of the normal distribution. One such alternative is the student t-distribution approach to calculating VAR. In a study by Lechner et al (2010), in which they compared the student t-distribution and the normal distribution as techniques that could be used to capture leptokurtosis (the fat-tailed and asymmetrical behavior of stock returns), they found that the t-distribution was considerably better at capturing fat-tail events than the normal distribution VAR.

Dowd (1998) outlines a number of advantages of the t-distribution. These include that it provides an easy way of capturing risk around the standard deviation of the portfolio, and that it is observed to provide better estimates than VAR based on the normal distribution. This is supported by a study carried out by Wilson (1993, pp. 38), in which he compared the number of VAR violations that occurred using the normal distribution and the t-distribution at a 99% level of confidence. The study found that, instead of losses exceeding the VAR estimate 1% of the time, actual losses using the normal distribution produced VAR violations 2.7% of the time, while VAR violations using the t-distribution exceeded the predicted rate by only 0.7%.

The t-distribution is also easy to use in practice due to the wide availability of standard tables, which can easily be read and understood. The fatter tails of the t-distribution indicate that it covers more extreme events, which are usually ignored by the normal distribution. In spite of these advantages, the t-distribution has some drawbacks. As a result of its inability to constrain maximum possible losses, it is bound to produce inaccurate VAR estimates at higher levels of confidence; in this respect the t-distribution is no match for extreme value theory, which covers extreme events. Also, the additivity criterion is not met by the t-distribution, which therefore is not a good tool for risk-adjusted returns: unlike normally distributed variables, variables which individually follow t-distributions do not sum to another t-variable (Dowd, 2005, pp. 159-160).

**VAR(α) = −µ + √((v − 2)/v)·σ·t(α, v)…………………………equation (8)**

Where v is the degrees of freedom, σ the standard deviation of returns, and t(α, v) the critical value of the t-distribution at tail probability α.
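Equation (8) can be sketched as follows; the critical value t(0.01, 5) ≈ 3.365 is read from standard t-tables, and µ and σ are hypothetical daily parameters, not estimates from this study's data:

```python
from math import sqrt

v = 5                     # degrees of freedom
t_crit = 3.365            # one-tailed 1% critical value of t with 5 d.o.f. (tables)
mu, sigma = 0.0, 0.005    # assumed daily mean and volatility

# Equation (8): the sqrt((v-2)/v) factor rescales sigma because a
# t-variable with v degrees of freedom has variance v/(v-2), not 1.
var_99 = -mu + sqrt((v - 2) / v) * sigma * t_crit   # about 1.30% of capital
```

Comparing with the normal VAR of 1.163% for the same σ shows how the fatter tails of the t-distribution raise the estimate at high confidence levels.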

## 2.3 BACK TESTING

Given that we have obtained a value for VAR, it is important for us to know how precise the estimate is. Dowd (1998) points to this: ''the usefulness of any VAR estimate is thus dependent on its precision, and an estimate can vary from being highly precise (and therefore highly informative) to being so vague as to tell us absolutely nothing''. Jorion (2001) points out that the expected number of VAR violations can be calculated as (1 − confidence level) × number of observations. For example, if we have a sample of 100 observations, with a VAR calculated at a 95% level of confidence, then we should expect (1 − 0.95) × 100 = 5 VAR violations. The choice of confidence level matters for the VAR measurement because the higher the confidence level of the VAR forecasts, the lower the number of exceptions, which makes it harder to judge whether the model is accurate. For instance, with a 99% confidence level we will have fewer exception points than with a 95% confidence level, so selecting the right confidence level gives a better test of the model's accuracy.

Back testing, as used by regulatory authorities such as the Basel Commission, ensures that VAR disclosures are consistent with actual losses in the market. An amendment to the Basel advisory report in 1996 gave the way forward for back testing to be used to assess the goodness of the VAR approaches used by banks, comparing them with the actual risk exposures in the market (Lechner et al, 2010). Given that capital charges are usually calculated based on VAR, banks and other firms may have an interest in lowering their reported VAR values so that they are subject to lower capital requirements. The Basel accord, which acts as a supervisory and regulatory framework for the banking sector, has put in place a number of measures which can identify banks that do not disclose their true VAR; this acts as a means of back testing. If we have an observation in which the actual stock return loss is beyond VAR, we call that a VAR violation or a VAR break. Costello et al (2008) favor this method because, according to them, it is an important way of verifying which VAR approaches are better at giving accurate risk forecasts. Back testing is a quantitative method to verify whether the results of the VAR fall in line with the assumptions made in the approach. Given that VAR estimates are used by companies to meet regulatory requirements and to take trading positions in the market, there is a need for a process of continuous validation to check for, and correct, any problem that may bias the VAR model or undermine the reliability of the VAR forecast. Back testing plays an essential role in solving these problems (Dowd, 1998, p. 38).
To improve the quality of the back testing process, there is a need to back test the expected tail loss (ETL) or expected tail gain (ETG); the resulting number tells us how well the model accommodates the size of the ETL or ETG over VAR. An efficient way of performing a back testing evaluation is to determine the accuracy of the approach in predicting both the size and the frequency of expected losses. The two most important back tests are the one developed by Kupiec (1995) and the Christoffersen test. In this study we focus on the Kupiec test for back testing model validity, since it is simple and straightforward.

## 2.4 PREVIOUS RESEARCH

In a study carried out by Chu-Hsiung Lin & Shan-Shan Shen (2006), the authors wanted to find out the accuracy of the student t-distribution in estimating VAR for stock market indices. They compared the normal distribution VAR, the student t-distribution VAR and the VAR estimate modeled on extreme value theory (EVT) to see which model accurately measures market risk, using the Kupiec (1995) test to evaluate the accuracy of the three models. Using closing prices of the S&P 500, NASDAQ, DAX and FTSE 100 stock market indices from January 2, 1990 to June 19, 2003, they found that VAR modeled on the normal distribution underestimates risk at high confidence levels. The study also found that as the confidence level exceeds 95%, the t-distribution VAR and the VAR based on extreme value measures outperform the normal distribution VAR. They concluded that using the t-distribution improves the accuracy of the VAR estimates, particularly when a tail index technique is used to determine the degrees of freedom and when a 98.5% confidence coefficient is exceeded (Chu et al 2006).

In another study, Pownall et al (1999) wanted to test the role of extreme value theory in VAR estimation based on Asian stock markets. They compared value at risk calculated based on the normal distribution with RiskMetrics, developed by Morgan (1996). The findings indicated that VAR estimates based on extreme value theory were more accurate than those generated with the normal distribution. Pownall et al (1999) stated that this superiority of the extreme-value-based VAR estimate was due to the ability of extreme value theory to fit fat-tailed time series. This superiority is supported by the findings of a study carried out by Bali (2007), who suggests that the normality assumption generates VAR estimates that may not reflect the actual risk that financial institutions face. He argues that many VAR measures assume normality of the return distribution, which is inconsistent with empirical evidence showing that asset returns are not normally distributed but exhibit skewness and fat tails. This means that normal VAR estimates fail to provide accurate measures during the volatile periods associated with financial crises. Bali (2007), in a study comparing VAR estimates based on the normal distribution, the student t-distribution and extreme value theory, used daily stock market index data for the Dow Jones Industrial Average (DJIA) over a period of 4 years for the Dow 30 equity index, with a total of 28,758 daily observations, and concluded that the statistical extreme value approach is a more natural, robust and accurate method for calculating VAR than the normal and student t-distributions.

In a study carried out by K. Tolikas et al (2007), the authors used extreme value theory to investigate the distribution of extreme minima in the German stock market over the period 1973 to 2001, using a data set from DataStream consisting of 7257 daily logarithmic returns. They found that the normal distribution approach to VAR overestimated risk at lower confidence levels. The historical simulation method performed better at high confidence levels, but needed approximately 1000 past periods of data to achieve accuracy in the lower tail, and this was achieved at the expense of poor accuracy at lower confidence levels. The major findings of the study tied in with previous studies confirming that extreme value methods are very useful in risk measurement, especially when the focus is on tail returns with very low probabilities. They also found that the other VAR method which could compete with extreme value theory was historical simulation; however, they argue that the accuracy of this method was compromised by the large number of data points it needs to produce accurate VAR estimates at high confidence levels, and they point to this limitation as a very serious constraint on the reliability of the model.

In a study of the properties of coherent methods of measuring risk, Artzner et al (1999) compared VAR to the Standard Portfolio Analysis of Risk system (SPAN 1995) and the Security and Exchange Commission (SEC) rules used by the National Association of Security Dealers (NASD 1996). They used the subadditivity axiom to test which risk measure was coherent. This axiom states that the risk of a portfolio of assets should be less than or equal to the sum of the risks of the individual assets combined; Artzner et al (1999) put it this way: “A merger does not create extra risk”. They showed that VAR fails to meet this axiom. However, Heyde et al (2007) consider this subadditivity criterion misleading, arguing that VAR is a “natural risk statistic”. They base their argument on replacing the subadditivity axiom with the comonotonic subadditivity axiom, under which only random variables that have a direct relationship are considered when assessing subadditivity. They nevertheless showed that VAR measures and approaches performed poorly and were less sensitive to extreme events.

In a study by Juan-Angel et al (2009, pp. 3), the authors wanted to provide an explanation and a set of prescriptions for managing VAR under the Basel II accord. They used conditional volatility, RiskMetrics (1996), the normal distribution and other stochastic approaches to VAR. Using daily stock data, they concluded that no risk measure perfectly dominates all the others at all times. As such, they recommended that VAR risk models should be changed often so that they represent the daily trading positions of portfolios.

Yamai et al (2004) highlight the disadvantages of VAR. They argue that VAR does not capture a complete risk profile, and support Expected Shortfall (ES) as broader in that sense. The article focuses on market stress applied to a concentrated credit portfolio and foreign exchange rates, and concludes that VAR should be complemented with ES to eliminate the current limitations of using one standardized financial risk measure. Furthermore, Taleb, the author of the book The Black Swan, pointed out the disadvantages of the standardized risk measure VAR and its simplicity: “Proponents of VAR will argue that it has its shortcomings but it's better than what you had before”. Taleb's quote and Yamai and Yoshiba's article therefore raise questions about the adequacy of VAR, and thus implicitly of Basel II (Filippa & Maria, spring 2010, p. 6). Further, Engle et al (2004) point out that if financial institutions use VAR to determine the capital required to cover the market risks of their operations, they need to estimate these risks accurately; if not, they may overestimate or underestimate their market risks, resulting in financial resources being inappropriately allocated and, as a consequence, excessively high or low capital requirements.

In a study carried out by Choi et al (2011), the authors wanted to find out the main factors affecting the performance of unconditional and conditional approaches, using a wide range of methods such as the student t-distribution, normal distribution, RiskMetrics, historical simulation, exponentially weighted moving average and extreme value theory. They used three stock market indices and stock price series. Using the binomial distribution test, the results indicated that approaches which were more flexible outperformed those which were inflexible, and they concluded that no approach clearly outperforms the others.

In a study carried out by Dimistris et al (2010), the authors wanted to determine the “optimal” VAR approaches for equity portfolios in both emerging and developed markets, using data from 16 emerging and four developed stock markets to test the accuracy of VAR approaches. Daily closing prices were collected from DataStream from 1995 to 2003, giving a total of 2094 observations for each portfolio, and the Kupiec test was used for back testing. The results showed that the historical simulation method outperformed the other models (GARCH with normal errors, moving average (MA), autoregressive moving average (ARMA) and the student t-distribution). They attributed the outperformance of this VAR approach over the others to the exponential weighting scheme.

## 2.5 Hypothesis

The null and alternative hypotheses for the test of the frequency and accuracy of each of the VAR approaches are given as:

H₀: p = X/T (the expected failure rate is equal to the observed failure rate)

H_A: p ≠ X/T (the expected failure rate is not equal to the observed failure rate)

The main goal here is to determine whether the expected failure rate p suggested by the confidence level is significantly different from the realized failure rate X/T. We accept the null hypothesis when the expected failure rate is equal to the observed failure rate, and reject it otherwise.

## 3.0 METHODOLOGY

This section presents an explanation of how the VAR measures will be tested for accuracy, thereby accepting or rejecting the null hypothesis. It also discusses the research strategy, the motivation for the sample size, and the source of the data used, as well as the motivation for the choice of the various VAR parameters (confidence interval and holding period) used in the study. Finally, it gives a description of the sample of empirical data and an explanation of the different tools used to calculate the VAR estimates for the various approaches using the formulas mentioned earlier.

## 3.1 KUPIEC TEST

We validate our hypothesis by back testing using the Kupiec test, and we also validate the accuracy of our approaches by linking our results to the existing literature in this area. This test is also known as the test of frequency of tail losses: it checks whether the reported VAR is violated more or less than α·100% of the time. The purposes of the Kupiec test are:

– To determine whether the frequency of expected exceptions is consistent with the frequency of reported or observed exceptions, in accordance with the chosen confidence interval and VAR model.

– To test the null hypothesis that the model is correct, under which the number of exceptions follows a binomial distribution, given as:

**Pr(x | n, p) = C(n, x) p^x (1 − p)^(n−x) ………………………………………….equation (9)**

Where **x** is the number of exceptions, **p** is the probability of an exception for a given confidence level, and **n** is the number of trials.

If the estimated probability is above the chosen significance level, say 1% or 5%, the model is accepted; we reject the model when the estimated probability is lower than the significance level, and we say that the model is not correct. The loss and gain exception tests have been performed on the return data of the three underlying assets in this study to determine how accurately each model predicts the frequency of losses and gains beyond the VAR numbers. For instance, if a confidence level of 95% is used, the null hypothesis is that the frequency of tail losses equals **p** = (1 − c) = 1 − 0.95 = 5%. Assuming that the model is accurate, the observed failure rate (X/T) should act as an unbiased measure of **p**, and thus converge to 5% as the sample size is increased.
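To make the binomial logic above concrete, here is a minimal Python sketch using only the standard library; the 250-day sample and the exception count are hypothetical, not the thesis's data:

```python
from math import comb

def prob_exceptions(x: int, n: int, p: float) -> float:
    """Binomial probability of observing exactly x exceptions in n trials,
    equation (9): Pr(x | n, p) = C(n, x) * p**x * (1 - p)**(n - x)."""
    return comb(n, x) * p**x * (1 - p) ** (n - x)

# Hypothetical example: 250 trading days of a 95% VAR, so p = 0.05,
# and a correct model is expected to produce about n * p = 12.5 exceptions.
n, p = 250, 0.05
print(round(prob_exceptions(12, n, p), 4))
```

The probabilities over all possible exception counts sum to one, which is a quick sanity check on the formula.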

According to Kupiec (1995), the test of frequency is best conducted by the likelihood ratio test; the test statistic for each portfolio and confidence level is calculated by inputting the return data (number of observations, number of exceptions and confidence level) into the test statistic function:

**LRuc = −2 ln[(1 − p)^(T−X) p^X] + 2 ln{[1 − (X/T)]^(T−X) (X/T)^X} ……………equation (10)**

Where LR is the log-likelihood ratio, X is the number of exceptions, T is the number of sample data points, X/T is the failure rate and p is the true probability (Jorion, 2001, p. 134). We can go further and calculate the daily log returns using the formula below.

** Rt = log (Pt/Pt-1)…………………………………………………….equation (11)**

Where Pt represents closing price for time t and Pt-1 represents closing price for the previous day.
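Equations (10) and (11) can be sketched together in Python; this is a minimal illustration under stated assumptions (the prices and exception counts are hypothetical, not the thesis's data):

```python
from math import log

def log_returns(prices):
    """Equation (11): r_t = log(P_t / P_{t-1}) for consecutive closing prices."""
    return [log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

def kupiec_lr(x: int, t: int, p: float) -> float:
    """Equation (10): Kupiec's unconditional-coverage likelihood ratio.

    x: observed exceptions, t: sample size, p: expected failure rate (1 - c).
    """
    rate = x / t
    ll_null = (t - x) * log(1 - p) + x * log(p)            # likelihood at the true rate p
    ll_observed = (t - x) * log(1 - rate) + x * log(rate)  # likelihood at the observed rate X/T
    return -2 * ll_null + 2 * ll_observed

# Hypothetical backtest: 1000 out-of-sample days, 95% VAR, 65 exceptions.
lr = kupiec_lr(65, 1000, 0.05)
# Compare to the chi-square(1) critical value of 3.841 at the 5% level.
print(round(lr, 2), lr > 3.841)
```

When the observed failure rate exactly equals the expected rate the statistic is zero, and it grows as the two rates diverge.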

In order to balance type I and type II errors, a critical value (confidence level), say 5%, is fixed for the type I error rate, and the test is arranged so as to minimize the type II error rate, that is, to maximize the power of the test (Jorion, 2011, p. 360).

A type I error concerns the probability of rejecting a correct model as a result of bad luck, while a type II error concerns the probability of not rejecting a wrong model. A confidence level of this magnitude for both type I and II errors implies that the model will be rejected only if the evidence against it is fairly strong. The Kupiec non-rejection region is presented in the table below:

| Probability level (p) | VAR confidence level (c) | T = 252 days | T = 510 days | T = 1000 days |
|---|---|---|---|---|
| 0.01 | 99% | N < 7 | 1 | 4 |
| 0.025 | 97.50% | 2 | 6 | 15 |
| 0.05 | 95% | 6 | 16 | 37 |
| 0.075 | 92.50% | 11 | 27 | 59 |
| 0.1 | 90% | 16 | 38 | 81 |

Table 1: Non-Rejection Region for Number of Failures N. Adapted from Kupiec (1995)

The table above shows N, the number of failures that could be observed in the sample data T without rejecting the null hypothesis that p is the correct probability, at the 95% and 99% confidence levels.
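Under stated assumptions, a non-rejection band of this kind can be reproduced by scanning exception counts N and keeping those whose likelihood ratio stays below the chi-square(1) critical value (3.841 for a 5% test level); exact endpoints depend on the test's own confidence level, so this sketch only approximates the published table:

```python
from math import log

def _loglik(x: int, t: int, q: float) -> float:
    """Binomial log-likelihood with the 0 * log(0) = 0 convention."""
    ll = 0.0
    if t - x > 0:
        ll += (t - x) * log(1 - q)
    if x > 0:
        ll += x * log(q)
    return ll

def non_rejection_region(t: int, p: float, crit: float = 3.841):
    """Smallest and largest exception counts N whose Kupiec likelihood
    ratio stays below the chi-square(1) critical value."""
    accepted = [n for n in range(t + 1)
                if -2 * _loglik(n, t, p) + 2 * _loglik(n, t, n / t) < crit]
    return min(accepted), max(accepted)

# For T = 1000 days and p = 0.05 the scan accepts counts strictly
# between 37 and 65, matching the T = 1000 entry in the table above.
print(non_rejection_region(1000, 0.05))
```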

There are two demerits of the Kupiec test which limit its credibility for backtesting the accuracy of VAR models. To begin with, it is a statistically weak test with sample sizes consistent with the current regulatory framework of one year; this limitation was recognized by Kupiec himself. Secondly, the test of frequency considers only the frequency of losses and not the time when they occur. As a matter of fact, it can fail to reject a model that produces clustered exceptions. Therefore, model backtesting should not depend solely upon tests of unconditional coverage (Campbell, 2005).

## 3.2 Data Collection

In this thesis we use secondary data. We use DataStream, an electronic database at Umea University library, to obtain historical closing prices for the three-month Swedish Treasury bills. The historical closing prices of stock returns for NORDEA Bank and SONNYERICSSON were obtained from http://www.nasdaqomxnordic.com. The market risk is measured in U.S. dollars. The stock data obtained for this study covers the period from 3rd January 2000 to 31st December 2010, giving a total of 2763 observations for each of the stocks. Our motivation for using this source is that it contains abundant data relating to our research questions, and that it saves time. It would also have been very difficult to create new data for this study, given that such data would have failed to capture the historical prices of stock returns which are essential for the non-parametric approaches selected for this study.

## 3.3 RESEARCH STRATEGY

We used the daily closing prices to calculate the daily log returns (Rt) using the formula in equation (11). After calculating the log returns we break the sample into two distinct periods: a six-year ''in sample'' period and a four-year ''out sample'' period. The essence of separating the data into two distinct periods is to enable us to apply backtesting to verify whether the 'out sample' profit and loss falls within the VAR estimate predicted by the in-sample period using each of the approaches. If the out-sample VAR prediction falls within the confidence interval of the Kupiec test over the four-year ''out sample'' period, then we say that the model has been successful. On the other hand, if it falls beyond VAR, then we consider that a VAR violation has occurred, meaning the model has been unsuccessful.

After separating the sample we calculated the various statistical properties (mean, standard deviation, variance, percentile, quantile, maximum, kurtosis, skewness) for the in-sample and out-sample periods. Due to the difficulty of evaluating each of the models at a glance, we deploy the Kupiec (1995) test using the hypothesis stated earlier in chapter 2. These statistical properties were computed with the help of Excel sheets and later done manually using the formulas mentioned in chapter two for each of the approaches. The statistical properties are then used to calculate VAR values for each approach using Excel sheets. SPSS is used to plot the time series for each of the assets at the various confidence levels. The time series plots are drawn so that we can see how each asset fluctuates around its mean value; they are necessary so that we can link our VAR estimates and Kupiec test results to the volatility properties of each asset in our analysis chapter.
The major software used in our analysis of the data is SPSS 19; it was mainly used for the exponentially weighted moving average, since this proved complicated to implement in Excel. We selected a time interval of 10 years so that we could obtain a better estimate for the non-parametric approaches, such as the historical simulation method, which requires a large set of data points to achieve some accuracy in VAR estimates. This is in accordance with the assumption that historical returns of financial data are good estimates and indicators of the future behavior of the volatility of stock returns. We applied the non-parametric and parametric approaches to the stock returns of NORDEA Bank, Sonny Ericsson and the three-month Swedish Treasury bills, which gave us a total of four models. The parametric methods used are the student t-distribution and the normal distribution, while the non-parametric methods are the historical simulation and the exponentially weighted moving average. We selected these approaches to calculate VAR because they represent a clear contrast between the parametric and non-parametric approaches. While parametric approaches such as the normal distribution make assumptions such as normality of the return distribution, this is clearly contrasted by the non-parametric approaches, represented most clearly by the historical simulation, which relies on no distributional assumption but solely on historical returns to make forecasts about future returns. The choice of the three assets used in the thesis helps us to understand how VAR is measured and applied to assets which are subject to different risk exposure factors and volatility. We use daily stock returns in preference to annual returns because annual returns would tend to understate the market risk exposure factors.
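The exponentially weighted moving average mentioned above (run in SPSS for the thesis) can be sketched in Python; λ = 0.94 is the standard RiskMetrics daily decay factor, and the return series and position value below are hypothetical assumptions:

```python
def ewma_volatility(returns, lam=0.94):
    """Recursive EWMA variance: s2_t = lam * s2_{t-1} + (1 - lam) * r_t**2."""
    sigma2 = returns[0] ** 2              # seed with the first squared return
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1 - lam) * r ** 2
    return sigma2 ** 0.5

# Hypothetical daily log returns and a 95% one-day VAR (normal quantile 1.645).
returns = [0.001, -0.004, 0.002, -0.012, 0.008, -0.003]
position = 1_000_000
print(round(1.645 * ewma_volatility(returns) * position, 2))
```

Because recent squared returns carry weight (1 − λ), the estimate reacts to new volatility much faster than an equally weighted window, which is why the EWMA behaves so differently from the historical simulation in the results chapter.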
Also, over a longer time horizon the historical simulation may be very faulty, because its underlying assumption that the future will be similar to a relevant past would not hold (Robert Sollis, 2009). Annual estimates may not be useful for VAR measurement, given that the daily risk positions of the stocks would not be known, which makes annual VAR estimates of little use to management for daily decisions. This is also consistent with the idea that stock market returns are very volatile and that it would be difficult to predict any particular patterns. We selected the stock return of NORDEA to represent the financial service industry in our study because it is one of the largest known banks in Sweden and the Nordic countries. Our choice of Sonny Ericsson to represent the technological industry was based on the fact that it is one of the largest mobile technology firms in the Nordic countries with a Swedish origin. The selection of the three-month Swedish Treasury bill is based on the assumption that it is the most stable compared to the other assets. These assets have different underlying risk exposure factors which represent the nature of each industry's volatility; this allows the VAR approaches to be applied across a diversified portfolio. We have selected stocks because it is easy to apply the selected VAR approaches to this asset class. Stocks are also among the most widely traded financial assets and are very sensitive to market changes, so it is better to demonstrate the accuracy of the VAR approaches on an asset which reflects real-life situations.
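The rolling-window procedure described in this section can be sketched as follows; the window size, confidence level and toy return series are illustrative assumptions, not the thesis's actual data:

```python
def historical_var(window, confidence=0.95):
    """Historical-simulation VAR: the empirical (1 - c) quantile of past
    returns, reported as a positive loss number."""
    ordered = sorted(window)
    k = int(len(ordered) * (1 - confidence))
    return -ordered[k]

def count_violations(returns, window_size=1000, confidence=0.95):
    """Roll a fixed window through the series and count the days whose loss
    exceeds the VAR forecast made from the preceding window."""
    violations = 0
    for t in range(window_size, len(returns)):
        var_t = historical_var(returns[t - window_size:t], confidence)
        if -returns[t] > var_t:
            violations += 1
    return violations

# A calm hypothetical window followed by one crash day yields one violation.
print(count_violations([0.0] * 1000 + [-0.05]))  # -> 1
```

The violation count returned here is exactly the quantity that the Kupiec test evaluates against its non-rejection region.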

The authors acknowledge that this selection of assets is a crude representation of the real trading assets in these selected industries across the Nordic countries, and Sweden in particular. We equally suspect that the performance of the parametric approaches used in this study will be partly determined by how well our data points fit the normality assumption. It would have been very difficult to create new data for this study, given that such data would have failed to capture the historical prices of stock returns which are essential. Excel sheets are used to simplify the calculation of the VAR estimates; however, much of this was done manually, which makes it difficult to engage in more complex VAR approaches to demonstrate the research question.

## 3.4 Research Method

Quantitative research is concerned with measurement and the testing of hypotheses, whereas qualitative research is concerned with the interpretation of social phenomena. A quantitative strategy has been employed in this study, whereby we use mathematical models to explain a social phenomenon. The use of mathematical theories and models is central to quantitative research because it serves as a connecting nexus between the observed phenomenon, the hypothesis and the theories used in this thesis. We have applied mathematical theories to the historical stock prices of Nordea, Sonny Ericsson and the three-month Treasury bills, and the testing of the validity of the VAR results is done with backtesting, based on the daily changes in the historical prices of the stocks of the three institutions. The purpose of this study is to identify which approach produces the most accurate outcome, and the outcome depends on the figures from the measurements. In addition, the outcome of the measurement is not only a number: it also takes into consideration the interpretation and significance of that number.

In addition, a deductive approach has been employed in the study, with the research logic of moving from the general to the particular. We began the research with the view that the parametric VAR approaches, which make assumptions about the return distribution such as normality, measure risk accurately. We then narrowed ourselves down to four approaches, which we applied to the stock indices of three companies. To validate the approaches we constructed a hypothesis, collected data related to our study and hypothesis, and, using backtesting, arrived at our conclusion about the null hypothesis. The approaches can at first be very easy to understand, but as they are extended, for instance from the normal distribution to the t-distribution, they become more complex and can better capture extreme events that the normal distribution is unable to. The main aim of this thesis is to contribute to existing research on VAR by testing approaches, not by creating new ones; by testing these approaches we make suggestions for improvement.

## 3.5 Sample

Here we break the sample into two distinct periods: a seven-year ''in sample'' period and a three-year ''out sample'' period. After this we apply the procedure and number-of-failures test proposed by Kupiec (1995). The essence of this test is to assess the seven-year in-sample VAR measures across the different VAR approaches used. If the 'out sample' profit and loss falls within the VAR estimate, we say that the model has been successful; if it falls beyond VAR, we consider that a VAR violation has occurred, meaning the model has been unsuccessful. We used the formulas stated earlier to calculate the VAR values for each of the approaches with the use of Excel sheets and SPSS. The major software used in our analysis of the data is SPSS 19. We also made use of Excel sheets to simplify the calculation of the VAR estimates; however, much of this was done manually, which makes it difficult to engage in more complex VAR approaches. We selected a time interval of 10 years so that we could obtain a better estimate for the non-parametric approaches, such as the historical simulation method, which requires a large set of data points to achieve some accuracy in VAR estimates. This is in line with the standard practice for historical and non-parametric VAR of using a sample of recent data for the VAR calculations, which analysts then use to make predictions about future risk estimates (Robert, 2009, p. 2). The choice of a total of 2763 observations is in a bid to ensure that all the approaches are accurately measured; this gives some stability to the procedures we use to estimate our parameters. The 1000 observations for the backtesting process provide a set of data points large enough for us to carry out the Kupiec (1995) test.
The use of a rolling window of 1000 observations is consistent with similar studies carried out by Dimitris et al. (2010) and Olle Billinger and Bjorn Eriksson (2009). It is also in line with the fact that the estimation performance of the historical simulation critically depends on the rolling window used for the estimation (Choi et al., 2011, p. 1). This argument agrees with that put forward by Hendricks (1996), who points out that VAR approaches are more sensitive to small estimation windows than to windows with more observations, which tend to produce smooth, reliable and stable estimates.

## 3.6 CHOICE OF VARIABLES (Stock return indices)

The mix of technological, banking and governmental stocks in this study is intended to facilitate our understanding of how VAR is measured and applied in sectors that are exposed to varied market risk. We have selected stocks because it is easy to apply the selected VAR approaches to this group of assets. Stocks are among the most widely traded financial assets and are very sensitive to market fluctuations, so it is useful to demonstrate the accuracy of the VAR approaches on a set of financial assets which reflects the highly volatile nature of present-day stock market returns. Our choice of NORDEA was motivated by the fact that it is one of the largest and safest financial service groups in the Nordic and Baltic Sea region. Our choice of Sonny Ericsson was based on the fact that it is one of the leading mobile telecommunication firms in the Nordic region, and we selected the three-month Swedish Treasury bill because of its price stability and the low risk assumed to be associated with government bills. We believe the choice of these variables will help us estimate the performance of the various VAR approaches and establish which of them performs better than the others.

## 3.7 CHOICE OF VAR PARAMETERS

The accuracy of VAR estimates may be greatly compromised if the duration of the historical stock returns and the confidence interval used are not well chosen. For example, the historical simulation requires a large number of data points to reflect an accurate VAR estimate. We believe that the choice of a seven-year ''in sample'' period and a three-year ''out sample'' period may resolve this problem. We use 95% and 99% confidence intervals to test the various VAR approaches, and we believe that with this time period and these confidence intervals the VAR estimates across the various approaches will be unbiased and reliable. There is no recommended optimal sample size for testing each of the approaches, but as noted in the theoretical framework, a considerable sample size is necessary to obtain better estimates.
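The effect of the two chosen confidence levels on a parametric (normal) VAR can be illustrated with the standard normal quantiles; the daily mean, volatility and position value below are hypothetical assumptions, not thesis data:

```python
# Standard normal quantiles for the two confidence levels used in the thesis.
Z = {0.95: 1.645, 0.99: 2.326}

def normal_var(mu, sigma, confidence, position=1.0):
    """Parametric VAR under normality: (z * sigma - mu) scaled by position."""
    return (Z[confidence] * sigma - mu) * position

mu, sigma = 0.0005, 0.02     # hypothetical daily mean and volatility
for c in (0.95, 0.99):
    print(c, round(normal_var(mu, sigma, c, position=1_000_000)))
```

Moving from 95% to 99% confidence raises the quantile from 1.645 to 2.326 standard deviations, which is why the 99% VAR numbers in the results chapter are uniformly larger and produce far fewer expected exceptions.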

## 3.8 RELIABILITY AND VALIDITY

Bryman & Bell (2007, p. 41) point out three general criteria for quality research: reliability, replication and validity. To decide whether an outcome or value is reliable, we need to run the test several times to see if the results are the same; Bryman & Bell (2007, p. 41) refer to this as test and re-test. Since we used secondary data collected from public sources to compute VAR, the test can be reproduced several times; to ensure this criterion is met, we check the p-value of our test statistic at each confidence level for consistency with the Kupiec test results and the non-rejection table. Validity is more important than the other quality criteria when considering whether the outcome of a quantitative approach can be justified: the outcome becomes useless if one cannot answer the question of whether it captured the truth. The higher the validity, the closer we get to the truth of the situation. Validity can be enforced by continuous adaptation between the theories and the methods used in the examination (Holme & Solvang, 1991). Validity in this research is enhanced by our use of the Kupiec test, which involves using our ''in sample'' estimates to make predictions about the out-sample profit and loss. This can be realized by selecting the approach which is capable of bringing out the effects that different asset characteristics can have on the VAR calculation. Worth noting in this study is that the approaches give figures which represent VAR values, but these figures are always estimates of future possible losses; the future losses can be larger than anticipated and exceed the confidence level within which VAR is calculated, and the figure is not the absolute truth. Bryman and Bell (2007, pp. 400-422) discuss other criteria which should be considered to achieve the validity criterion.
One such criterion is transferability. This involves finding out whether our study may be correct in one context but prove to be wrong in another; for example, if we applied this study in another country, such as the United States (U.S.), would the result of the study prove to be the same? We think that the transferability of this study within the Nordic stock exchange market would be valid. This may not hold true in other regions, given that the assets under study may be exposed to different risk factors which may require different approaches to account for the value at risk.

## 4.0 DATA PRESENTATION AND RESULT

In this chapter of the thesis we present the results for the assets Sonny Ericsson, three-month Swedish Treasury bills and Nordea, based on SPSS output and using descriptive tables. The time series plots and histograms with fitted normal curves describe the underlying characteristics of each of the assets. We use these time series plots and histograms to capture the statistical properties of the data presented; this will guide us in later chapters in explaining variations in our VAR estimates. A detailed analysis of the results is handled in the next chapter.

The table below shows statistical characteristics of each of the underlying assets calculated.

| UNDERLYING ASSETS | SKEWNESS | KURTOSIS | STANDARD DEVIATION |
|---|---|---|---|
| SONNY ERICSSON | 2.222 | 453.807 | 0.054 |
| STB3M | -1.0604 | 101.144 | 0.051 |
| NORDEA | -0.178 | 9.473 | 0.024 |

Table 2: Statistical characteristics of Asset Log Returns.
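Statistics like those in the table can be reproduced from a return series as sketched below; these are population-moment (1/n) formulas, whereas SPSS reports bias-corrected excess kurtosis, so its values differ slightly. The sample data is hypothetical:

```python
from math import sqrt

def moments(xs):
    """Mean, standard deviation, skewness and kurtosis (normal = 3) of a
    sample, using population (1/n) formulas."""
    n = len(xs)
    mean = sum(xs) / n
    dev = [x - mean for x in xs]
    sd = sqrt(sum(d * d for d in dev) / n)
    skew = sum(d ** 3 for d in dev) / n / sd ** 3
    kurt = sum(d ** 4 for d in dev) / n / sd ** 4
    return mean, sd, skew, kurt

# Hypothetical daily log returns with one large negative outlier: the
# outlier drives the skewness negative and inflates the kurtosis.
sample = [0.001, -0.002, 0.0015, -0.001, 0.002, -0.03]
print(moments(sample))
```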

## 4.1 SONNY ERICSSON

Figure 1: Histogram showing the Daily Log Returns combined with a Normal Distribution Curve

Figure 1 above is a histogram combined with a normal distribution curve. It shows a very high kurtosis of the log returns, 453.807, referred to as a leptokurtic distribution: there is a high peak around the mean, indicating a lower probability of return values near the mean than for returns following the normal distribution, and the fatter tails indicate a higher probability of accommodating extreme events than the normal distribution. Distributions with high kurtosis can be better analyzed with the student t-distribution, since it can accommodate extreme events. The skewness of this distribution is 2.222, a positive skew showing that the returns are distributed to the right, which does not respect the normality assumption.

**Figure2: Time series of the Daily Log Returns of Ericsson**

The time series plot above indicates a fairly constant variation of the log returns of Sonny Ericsson. This is confirmed by the low value of the variance, which is almost zero, and by the mean of the distribution being zero. This indicates that the stock prices of this company are stationary, even though the graph shows some high volatility around sequence numbers 101 and 2201; these episodes might have happened by chance and are minimal compared with the rest of the series. However, they might also be due to abnormal events which took place during this period, such as the global financial crisis in 2008, which adversely affected the stocks of this company. Time series of the log returns facilitate the estimation of other statistical properties over multiple time periods.

## 4.2 THREE MONTHS SWEDISH TREASURY BILL

Figure 3: Histogram showing the Daily Log Returns combined with a Normal Distribution Curve

Figure 3 above shows a histogram combined with a normal distribution curve, with a kurtosis of the return distribution of 101.144, which is higher than that of the Nordea returns but lower than that of the Ericsson return distribution. This suggests that the values are narrower than those of the normal distribution, and with fatter tails the distribution can also accommodate extreme events. At lower confidence levels it is more appropriate and compatible with the parametric approach. The negative skewness of -1.0604 indicates that the distribution is skewed to the left with large negative values, which does not respect the normality assumption.

**Figure 4: Time series of the Daily Log Returns of ST3M**

The time series plot above illustrates that the three-month Swedish Treasury bill is the most stable of all the assets, while at the same time portraying some extreme events. The low volatility is reflected by a standard deviation of 0.051. The low volatility of this asset can be explained by the low market rates it secures, given that it is exposed to relatively low risk and its rates are guaranteed by the Swedish government, in contrast to the stock returns of listed companies such as Nordea and Sonny Ericsson, whose returns are strongly affected by changes in the business environment. There is, however, a period of high volatility between sequence numbers 2201 and 2701; these fluctuations can be attributed to abnormal events such as the financial crisis, which affected stock returns. This impact was not limited to private companies; it also affected government securities. The extreme events indicated by the time series plot, and supported by the histogram with fitted normal curve mentioned above, make the normal distribution a poor fit for these returns. The student t-distribution, with fatter tails and excess kurtosis, can be a good fit, as it is able to accommodate the extreme events of the 2008 financial crisis. The high-volatility period across 2008 and 2009 shows when the Riksbank substantially lowered the market rates for the three-month Treasury bill.

## 4.3 NORDEA

Figure 5: Histogram showing the Daily Log Returns combined with a Normal Distribution Curve

The figure above illustrates a histogram with a fitted normal curve. The log returns for Nordea have a low volatility, with a standard deviation of 0.024. This low volatility indicates that Nordea's stock returns fluctuate closely around the mean and that the stock may be well diversified. It has a positive kurtosis of 9.473, the smallest among the log returns of the three assets, but still in excess of that of the normal distribution by 6.473. It has a skewness of -0.178, which indicates that the log returns almost follow a normal distribution but fail to meet the zero skewness assumed for a normal distribution. The positive excess kurtosis indicates that the distribution has fatter tails than the normal distribution, whose kurtosis is 3. The inability of the normal distribution to capture the fat-tail events associated with this kind of return distribution makes it unfit and inaccurate for measuring risk in this situation. The negative skewness indicates that the distribution is skewed to the left, implying a higher probability of negative events (losses) than positive returns (profits) in a profit and loss distribution.

Figure 6: Time series plot for Nordea

The time series plot above shows the movement of Nordea's daily returns over time. While the series shows some signs of stationarity, for example the constant variation around the mean, two distinct periods of extreme volatility can be seen: the periods between sequence numbers 1-801 and 2301-2501. The earlier period of high volatility reflects the aftermath of the banking crisis in Sweden around the year 2000, when stock prices on the Stockholm exchange witnessed a sharp downturn. The existence of extreme events in this series also confirms that financial market data do not seem to be normally distributed, as assumed by the parametric methods of VAR calculation, most particularly the parametric approach based on the normal distribution.

## 5. ANALYSIS AND CONCLUSION

In this chapter we analyze the performance of each of the VAR approaches for the three assets chosen, linking this to the statistical characteristics of each asset described in the previous chapter. It is important to know how accurate these approaches are. The increasing volatility of the financial market has made the need for better VAR estimation models more important than before, and banks constantly review their VAR measures to ensure they reflect current trading positions and other risk factors. For example, the Nordea stock return series, which shows great volatility during 2008 and 2009, may explain why the bank chose to revise its VAR model. The Kupiec (1995) test mentioned earlier and the chosen confidence intervals will be used to analyze these approaches further, in addition to the time series plots and histograms displayed in chapter four.

The in-sample period, which covers 2000-2007, was relatively calm and less volatile for the Sonny Ericsson, ST3M and Nordea stock returns. This contrasts with the ''out of sample'' period (2007-2010) associated with the global financial crisis, noticeable in the high level of volatility shown in the time series of each asset. The high volatility of the 2008 and 2009 financial crisis makes the ''in sample'' and ''out sample'' periods demonstrate contrasting characteristics. Given the devastating effects of the recent financial crisis and the difficulty of predicting how and when future financial crises may occur, the results of this study should not be underestimated.

In analyzing the results, we relate them to the advantages and disadvantages of each of the approaches, and critically analyze the statistical properties of each approach and the confidence intervals used. The use of 95% and 99% confidence intervals indicates that 5% and 1% of the data, respectively, should be found in the left tail of the distribution. We therefore make a critical analysis of the normal distribution VAR, the student t-distribution VAR, the historical simulation and the exponentially weighted moving average. This critical analysis is directed towards the normality assumption of stock market returns that is usually made by the parametric approaches.

## 5.1 HISTORICAL SIMULATION APPROACH

Looking at the backtesting results, this approach may not be the best for calculating VAR; as mentioned earlier, historical simulation is a widely used model for estimating VAR values due to its mathematical simplicity. The summary of the historical simulation backtesting results is shown in Table 3 below, which presents the backtesting calculation for the three assets using a rolling window of 1000 observations, equivalent to four years of business days, at both the 95% and 99% confidence levels. The table shows the minimum and maximum values given by the Kupiec test derived from the confidence probability, the target number of VAR violations, and the number of VAR violations produced by the historical simulation. In the Kupiec test results, VAR violation counts marked green fall inside the interval and those marked red fall above or below it.

**KUPIEC TEST – HISTORICAL SIMULATION WITH A ROLLING WINDOW OF 1000 OBSERVATIONS**

| ASSETS | OBSERVATIONS | MIN (95%) | TARGET (95%) | RESULTS (95%) | MAX (95%) | MIN (99%) | TARGET (99%) | RESULTS (99%) | MAX (99%) |
|---|---|---|---|---|---|---|---|---|---|
| SONNYERICSSON | 2762 | 116 | 138 | 37 | 161 | 14 | 28 | 36 | 41 |
| STB3M | 2752 | 116 | 138 | 95 | 160 | 14 | 28 | 288 | 41 |
| NORDEA | 2761 | 116 | 138 | 41 | 161 | 14 | 28 | 81 | 41 |

Table 3: Backtesting results with historical simulation

From the table, the results indicate that the HS approach performs poorly at the 95% confidence level and better at the 99% confidence level. At the 95% confidence level there are too few VAR violations, indicating an overestimation of VAR. At the 99% confidence level this approach performs the same as the EWMA approach but better than the normal and t-distribution approaches. The approach is almost accepted by the Kupiec test at the higher confidence level, because it produces one result which falls within the confidence interval, but it is rejected at the lower confidence level, where no result falls within the interval. In this approach there is a tradeoff between historical information and recent information, as mentioned earlier in section 2.2.1.1. Choosing a shorter time interval would have increased the weight of current observations, paying more attention to recent market conditions instead of past conditions. The approach reacts more slowly to changes in current information than to past information, because it considers only the most recent 1000 daily log returns when calculating its volatility. Therefore, the HS approach shows inertia when confronted with changing volatility and rapidly changing market conditions, which indicates that VAR is overestimated during periods of low volatility or calmness and underestimated during periods of higher volatility.

Table 3 above presents the VAR estimated with the historical simulation at the 99% confidence level for Sonny Ericsson, for which the approach gives its best results of the three assets. In general the historical simulation approach apparently underestimates VAR, resulting in many VAR violations. The statistical properties of the Ericsson returns suggested that the historical simulation approach would fit this asset well, while the parametric normal and t distributions would fit it poorly. Of the three assets, Sonny Ericsson has the highest volatility, but that volatility is comparatively predictable, or high yet stable. Thanks to this stability, the historical simulation works best with Sonny Ericsson. As discussed previously, the historical simulation approach makes no assumption about the return distribution; instead it assumes that the present return distribution is the same as the past one. The good result for Ericsson is due to the stability of its returns.

Underestimation and overestimation are vital aspects to note when testing the accuracy of an approach. The results for STB3M and Nordea are roughly equal, the difference lying in the degree to which they underestimate value at risk. Although the volatility of STB3M and Nordea increases over time, it is not as high as that of Sonny Ericsson, whose value at risk is overestimated. Had a shorter time period been chosen, the results for STB3M would have looked very different, since the large volatility changes that appeared around 2008 would have been excluded. The historical simulation would then have produced better results because the fluctuations in return would have been smaller.

The overestimation of value at risk for Sonny Ericsson and the underestimation for STB3M and Nordea can be attributed to the large rolling window chosen for this approach. At the 95% confidence level the window seems too large: extreme observations are added to the tail of the empirical distribution and fatten it, so the approach produces too few VAR breaks and overestimates VAR for all three assets at this level, and as table 3 shows the Kupiec test rejects the historical simulation at the 95% confidence level. With a large window, the smaller returns that occur more frequently receive too much weight, which drifts the approach away from the informative outliers. The rolling window is better fitted to the 99% confidence level, where the same effect is present but weaker; a more appropriate window size at the 95% confidence level could have produced a better result.

At the higher confidence level, where the large rolling window is more appropriate, the historical simulation approach tends to produce better results. Because of its nonparametric nature, the historical simulation takes into account all the outliers of the log returns that the parametric approaches leave out. This is one good reason why it performed better at the 99% than at the 95% confidence level.

Assigning equal weight to all returns in the distribution makes it difficult for the historical simulation approach to capture fluctuations in the underlying asset's returns. This means that the VAR value over a longer time period is affected by old extreme outliers. The Ericsson returns suffer from leptokurtosis, which gives the few extreme outliers a large impact on the VAR value at a given confidence level and window size, yielding an average VAR much greater than the average return. The historical simulation approach can handle fat-tailed returns, but its accuracy deteriorates when the leptokurtosis becomes too large for it to accommodate. It is therefore important to consider both the confidence level and the kurtosis before choosing the size of the historical rolling window. The approach's assumption that the return distribution is stationary over time makes it essential to look at past returns in the hope of predicting future ones.
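The degree of leptokurtosis referred to here can be checked directly from the return series. A small sketch using standard sample moments (our own illustration, not the thesis's code):

```python
import statistics

def excess_kurtosis(returns):
    """Sample excess kurtosis: the fourth central moment divided by the
    squared variance, minus 3. Positive values mean fatter tails than
    the normal distribution (leptokurtosis)."""
    mu = statistics.fmean(returns)
    n = len(returns)
    var = sum((r - mu) ** 2 for r in returns) / n
    m4 = sum((r - mu) ** 4 for r in returns) / n
    return m4 / var ** 2 - 3.0
```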

Looking at all the approaches, the historical simulation performs best at the higher confidence level because the size of its rolling historical window is more appropriate for the 99% than for the 95% confidence level, and because it considers the extreme values (outliers) that fall outside the normal distribution. An example can be seen in figure 3a above, which shows the histogram of daily log returns overlaid with the normal distribution. For Ericsson a small number of observations lie in the tails, while for STB3M and Nordea most observations fall in the middle of the distribution, causing the approach to overestimate VAR for Ericsson. When estimating VAR at a higher confidence level, historical simulation can be recommended for returns that are stationary and exhibit high kurtosis.

## 5.2 EXPONENTIALLY WEIGHTED MOVING AVERAGE (EWMA)

The back testing results for the three assets calculated with the exponentially weighted moving average are presented in table 4 below. This approach is among the longest-used approaches for calculating VAR. Despite its elementary nature, the model still stands the test under favourable conditions, producing better results at the lower than at the higher confidence level.

**KUPIEC TEST - EWMA WITH A ROLLING WINDOW OF 1000 OBSERVATIONS**

| ASSETS | OBSERVATIONS | 95% MIN | 95% TARGET | 95% RESULTS | 95% MAX | 99% MIN | 99% TARGET | 99% RESULTS | 99% MAX |
|---|---|---|---|---|---|---|---|---|---|
| SONNYERICSSON | 2762 | 116 | 138 | 124 | 161 | 14 | 28 | 71 | 41 |
| STB3M | 2752 | 116 | 138 | 148 | 160 | 14 | 28 | 63 | 41 |
| NORDEA | 2761 | 116 | 138 | 14 | 161 | 14 | 28 | 39 | 41 |

Table 4: Back testing results with exponentially weighted moving averages.

At the 99% confidence level the exponentially weighted moving average underestimates VAR for two of the assets, because the return distributions have fatter tails than the normal distribution assumes; the Kupiec test accordingly rejected the approach for those two assets and accepted it for the third. The approach produces better results at the 95% confidence level, where the normality assumption is largely met: the Kupiec test accepted it for two of the assets and rejected it for one, whose VAR is overestimated.

As mentioned in chapter four, the statistical properties of the Ericsson log returns indicate that the normality assumption does not hold for them, while the STB3M returns are not as skewed and their kurtosis is not as large. Even so, the approach performs better on the Ericsson returns, despite their distance from the normality assumption. This phenomenon can be explained by looking at the graphs of the asset returns: the Ericsson returns are more skewed and leptokurtic than those of STB3M and Nordea, but they are also more stable, whereas the returns of STB3M and Nordea show strong signs of volatility clustering.

In table 4, the poor performance at the high confidence level for the STB3M returns results from their bad compatibility with the exponentially weighted moving average approach: the extreme volatility peaks roughly double the number of VAR violations compared with the other two assets. These violations occurred when outliers appeared alone, without a prior increase in volatility on the day before the violation during volatility clustering, which also accounts for the high kurtosis of the returns. Forecasting such a sudden increase in volatility, and hence the occurrence of the VAR violation, is impossible for the exponentially weighted moving average approach. Once the outlier is incorporated into the rolling window after the violation, the VAR measure increases significantly; but since the extreme outcome occurs only once, no further violation accompanies it.
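The EWMA recursion underlying table 4 can be sketched as follows. This is a RiskMetrics-style illustration with the standard decay factor lambda = 0.94 assumed here; the thesis's exact parameter choice may differ.

```python
import numpy as np

def ewma_var(returns, lam=0.94, z=1.645):
    """EWMA volatility and VAR: sigma2[t] = lam*sigma2[t-1] + (1-lam)*r[t]^2.
    VAR is z times the volatility forecast; z = 1.645 for the 95% level
    and 2.326 for the 99% level under normality."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty(len(r))
    sigma2[0] = r[0] ** 2                    # simple initialisation
    for t in range(1, len(r)):
        sigma2[t] = lam * sigma2[t - 1] + (1.0 - lam) * r[t] ** 2
    return z * np.sqrt(sigma2)
```

Because recent squared returns carry geometrically more weight, the forecast adapts to volatility clustering faster than the equally weighted historical simulation, but an isolated outlier with no preceding volatility build-up is still missed, as discussed above.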

## 5.3 NORMAL DISTRIBUTION

**The normal VAR approach, which assumes that returns follow a normal distribution, is back tested below.**

Kupiec Test, Normal distribution using a rolling window of 1000 observations

| ASSETS | OBSERVATIONS | 95% MIN | 95% TARGET | 95% RESULTS | 95% MAX | 99% MIN | 99% TARGET | 99% RESULTS | 99% MAX |
|---|---|---|---|---|---|---|---|---|---|
| SONNYERICSSON | 2762 | 116 | 139 | 108 | 161 | 14 | 28 | 93 | 41 |
| STB3M | 2752 | 116 | 137 | 138 | 160 | 14 | 28 | 199 | 41 |
| NORDEA | 2761 | 116 | 138 | 89 | 161 | 14 | 28 | 213 | 41 |

Table 5: Results that fall outside the interval indicate the inability of the normal distribution to capture the actual number of failures within the prescribed confidence interval.

As seen in table 5, the normal distribution performs poorly at both confidence levels. This can be explained by its assumption that financial data follow a normal distribution, which in practice often fails to hold, as empirical studies have shown that stock data follow a random walk. Surprisingly, however, the normal distribution performs well on the three-month Swedish Treasury bill data. One explanation could be that the Treasury bill returns are relatively calm and stable throughout the time series. At the higher confidence level the normal distribution underestimates the risk, which can be explained by its inability to capture extreme tail events and makes it unfit for use during volatile periods.
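For reference, the parametric normal calculation behind table 5 reduces to a mean and a standard deviation over the rolling window. A minimal sketch (our illustration, with quantiles hard-coded for the two levels used):

```python
import statistics

def normal_var(window_returns, level=0.99):
    """Parametric normal VAR over one window of log returns:
    VAR = z*sigma - mu, reported as a positive loss threshold.
    z is the standard-normal quantile (1.645 at 95%, 2.326 at 99%)."""
    z = {0.95: 1.645, 0.99: 2.326}[level]
    mu = statistics.fmean(window_returns)
    sigma = statistics.stdev(window_returns)
    return z * sigma - mu
```

Everything rests on the pair (mu, sigma): when the empirical tails are fatter than the normal, as for the assets studied here, the 99% quantile is too small and VAR is underestimated, exactly as the table shows.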

## 5.4 The Student t distribution VAR

Kupiec Test, Student t distribution using a rolling window of 1000 observations

| ASSETS | OBSERVATIONS | 95% MIN | 95% TARGET | 95% RESULTS | 95% MAX | 99% MIN | 99% TARGET | 99% RESULTS | 99% MAX |
|---|---|---|---|---|---|---|---|---|---|
| SONNYERICSSON | 2762 | 116 | 139 | 145 | 161 | 14 | 28 | 124 | 41 |
| STB3M | 2752 | 116 | 137 | 132 | 160 | 14 | 28 | 118 | 41 |
| NORDEA | 2761 | 116 | 138 | 153 | 161 | 14 | 28 | 58 | 41 |

Table 6: Results that fall outside the Kupiec interval indicate failure of the t distribution under the Kupiec test, while results within the non-rejection interval mean the model cannot be rejected; that is, we cannot reject the null hypothesis that the violation probability is not significantly different from the target failure rate.

The t distribution, with its fatter tails, makes better estimates than the normal distribution at the 95% confidence level. As the confidence level increases to 99%, however, its VAR predictions become inaccurate, and the Kupiec test results fail to fall within the non-rejection interval. This suggests that when financial return data are non-normal, the t distribution may be a more useful tool for estimating VAR than the normal distribution. As the sample size grows and the degrees of freedom tend to infinity, the t distribution converges to the normal distribution and the two yield the same estimates. At the higher confidence level and in extreme events, both the student t and the normal distribution underestimate the risk. This inability to capture tail events calls for other measures, such as extreme value theory and expected shortfall, which can accommodate clustering effects and the non-normality of stock market returns. The t distribution nevertheless outperforms the normal distribution at both the 95% and the 99% levels. At the 95% level it tends to overestimate the risk, yet it makes a better estimate for the Nordea stock return, perhaps because it can accommodate the extreme events visible in the time series plot of that return.
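The mechanics of the comparison can be illustrated with the variance-rescaled t quantile. This is a sketch: the degrees of freedom (nu = 5) and the quantile value 3.365 are illustrative table values, not the thesis's fitted parameters.

```python
import math

def student_t_var(sigma, nu, t_quantile):
    """Student-t VAR for zero-mean returns: a t with nu d.o.f. has
    variance nu/(nu-2), so it is rescaled by sqrt((nu-2)/nu) to match
    the sample volatility sigma before taking the tail quantile."""
    return t_quantile * sigma * math.sqrt((nu - 2) / nu)

# With nu = 5 the one-sided 99% t quantile is about 3.365, against
# 2.326 for the normal, so the t VAR exceeds the normal VAR by roughly
# 12% at the same sigma, which is how the fatter tail raises the estimate.
```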

The Kupiec (1995) test using the binomial distribution at the 5% probability level gives a test value of 0.034819, which is less than alpha = 0.05. We therefore reject the null hypothesis that the actual probability of failure equals the failure rate, indicating that the VAR measure at this confidence level is imprecise; the model appears to overestimate the risk here. There are thus far fewer days on which losses beyond the VAR estimate are observed than the 138 forecast by the VAR model, and in the Kupiec table at the 5% probability level the value of N lies outside the non-rejection region. Applied to the out-of-sample period, the test indicates that N has been overestimated and should have been about (0.034819 * 1000) = 35 days. The Kupiec test at the 1% probability level gives a value of 0.075365, which is greater than alpha = 0.01, so we cannot reject the null hypothesis that the failure rate equals the target number of VAR breaks. The test statistic does not by itself show clearly whether the associated value lies within the non-rejection region, but it does tell us that our value of N for the out-of-sample period lies within the non-rejection interval of the Kupiec test, and the test value of 0.075365 points to this VAR approach underestimating risk at the higher confidence level.
Such a situation is unfavourable for a financial institution, because it would reserve inadequate capital to meet its losses. It may be influenced by the relatively calm in-sample period, whose volatility is low compared with the out-of-sample period, which is characterised by the high volatility associated with the financial crisis that started in the summer of 2008.

## 6.0 Conclusion

This chapter presents the conclusions of the study and makes recommendations for future research. Table 7 summarises the conclusions for the VAR approaches on which our analysis and discussion have been based.

| VAR APPROACHES | UNDERLYING ASSET | CONFIDENCE LEVEL (95%) | CONFIDENCE LEVEL (99%) |
|---|---|---|---|
| Normal Distribution VAR | SONNYERICSSON | Reject | Reject |
| | STB3M | Accept | Reject |
| | NORDEA | Reject | Reject |
| T Distribution VAR | SONNYERICSSON | Accept | Reject |
| | STB3M | Accept | Reject |
| | NORDEA | Accept | Reject |
| Historical Simulation VAR | SONNYERICSSON | Reject | Accept |
| | STB3M | Reject | Reject |
| | NORDEA | Reject | Reject |
| EWMA VAR | SONNYERICSSON | Accept | Reject |
| | STB3M | Accept | Reject |
| | NORDEA | Reject | Accept |

Table 7: Summary statistics of VAR approaches relating to the null hypothesis.

As can be seen from the table above, the Kupiec test indicates that no VAR estimation approach absolutely outperformed the others. One significant aspect of the results is that the parametric approaches were better at the lower confidence level but performed poorly at the higher one. The normal distribution and the historical simulation were the weakest approaches, performing poorly at both confidence levels. The results thus suggest that the normality assumption made by parametric methods such as the normal distribution is a great drawback, leaving them unable to accommodate tail events such as periods of high volatility during a financial crisis or a market boom. The results also suggest that the choice of VAR approach for an asset may depend on particular characteristics of the underlying asset, which ties in with previous studies stipulating that the main factor behind differences in performance is the flexibility of the model in reflecting asset characteristics. Over all the models, we can say that the t distribution, with its fatter tails, performed rather better than the others.

## 6.1 Further Research

This study has been based on three assets and two confidence levels. Future research might examine value at risk approaches on commodities such as crude oil and gold, using more confidence levels and a larger number of data points. It could also expand the set of parametric approaches that fit VAR calculations better when the normality assumption for stock data does not hold. Such research would be important for showing how the particular characteristics of each asset affect the choice and accuracy of each VAR approach. We also think it would be valuable to carry out such research with varied assets, because it could give guidance on how trading positions can be hedged against numerous risk factors and help risk managers take calculated, smart risks.

## Reference list

Silva, A. D., Beatriz, V., & Melo, M. (2003). Value at Risk and Extreme Returns in Asian Stock Markets. International Journal of Business, Vol. 8, p. 17-40.

Artzner, P., Delbaen, F., Eber, J. M., & Heath, D. (1999). Coherent Measures of Risk: Mathematical Finance. Vol. 9, No. 3, p. 203-228.

Basel Committee on Banking Supervision (2004). Basel II: International Convergence of Capital Measurement and Capital Standards: A Revised Framework (June 2004). http://www.bis.org.

Blake, D., Dowd, K., & Andrew, C. (2004). Long Term Value at Risk. Journal of Risk Finance, Vol. 5, No. 2, p. 52-57.

Boudoukh, J., Richardson, M., & Whitelaw, R. (1998). The Best of Both Worlds: A Hybrid Approach in Calculating Value at Risk.

Boudoukh, J., Richardson, M., & Whitelaw, R. (1998). The Best of Both Worlds. Risk, Vol. 11, p. 64-67.

Bryman, A., & Bell, E. (2007). Business Research Methods, 2nd edition, Oxford University Press.

Campbell, S. D. (2005). A Review of Back Testing and Back Testing Procedures. Board of Governors of the Federal Reserve System.

Choi, P., & Insik, M. (2011). A Comparison of the Conditional and Unconditional Approaches in Value at Risk Estimation. Japanese Economic Review, Vol. 62, No. 1, p. 99-115.

Chu-Hsiung, L., & Shan-Shan, S. (2006). Can the Student t-distribution Provide Accurate Value at Risk? Journal of Risk Finance, Vol. 7, No. 3, p. 292-300.

Duffie, D., & Pan, J. (1997). An Overview of Value at Risk. The Journal of Derivatives, Vol. 4, No. 3, p. 7-49.

Ruppert, D. (2004). Statistics and Finance: An Introduction.

Einmahl, J., Foppen, W., Laseroms, O., & De Vries, C. (2005). VaR Stress Tests for Highly Non-linear Portfolios. Journal of Risk Finance, Vol. 6, p. 382-387.

Ender, S., & Thomas, W. K. (2006). Asian Pacific Stock Market Volatility Modeling and Value at Risk Analysis. Emerging markets finance and trade, Vol.42,No.2, p.18-62.

Engle, R.F., Focardi, S.M., & Fabozzi, F.J. (2008). ARCH/GARCH Models in Applied Financial Econometrics: Chapter in Handbook Series in Finance by Frank J. Fabozzi, John Wiley & Sons.

Clark, G. L., Dixon, A. D., & Monk, A. H. B. (2009). Managing Financial Risks: From Global to Local. Oxford University Press.

Diebold, F.X., Schuermann, T., & Stoughair, J. (2000). Pitfalls and opportunities in the use of extreme value theory in risk management, Journal of risk finance. Vol.1, p.30-36.

Hendricks, D. (1996). Evaluation of Value-at-Risk Models Using Historical Data. Economic Policy Review, Federal Reserve Bank of New York, New York, NY, April 1996, Vol. 2, No. 1.

Heyde, C. C., Kou, S. G., & Peng, X.H. (2007). What is a Good External Risk Measure: Bridging the gaps between robustness, subadditivty, and insurance risk measures. Working paper, Department of Industrial Engineering and Operations Research, New York, Columbia University.

Hardle, W., Kleinow, T., & Stahl, G. (2002). Applied Quantitative Finance: Theory and Computational Tools. Springer-Verlag Berlin Heidelberg, Germany.

Jordan, J. V., & Mackay, R. J. (1995). Assessing Value at Risk for Equity Portfolio: Implementing Alternative Techniques. Working Paper, Washington, DC. George Washington University.

Jorion, P. (2000). Value at Risk: The New Benchmark for Managing Financial Risk, McGraw-Hill Professional.

Jorion, P. (2001). Value at Risk – The New Benchmark for Managing Financial Risk 2nd Edition, New York: McGraw Hill.

Jimenez-Martin, J. A., McAleer, M., & Perez-Amaral, T. (2009). The Ten Commandments for Managing Value at Risk under the Basel II Accord. Journal of Economic Surveys, Vol. 23, No. 5, p. 850-855.

Dowd, K. (1998). Beyond value at Risk: The new science of risk management. New York: John Wiley & sons.

Konstantinos, T., Althanassios, K., & Richard, A. B. (2007). Extreme Risk and Value at Risk in the German Stock Market. The European Journal of Finance. Vol. 13, No.4, p. 373-395.

Jeff. L. H., & Guangwu, L. (2009). Simulating Sensitivities of conditional value at risk. Management Science. Vol. 55, No.2, p.281-293.

Lindsay, A., Lechner, T., & Ovaert, C. (2010). Value-at-Risk: Techniques to Account for Leptokurtosis and Asymmetric Behavior in Returns Distributions. Journal of Risk Finance, Vol. 11, No. 5, p. 464-480.

Linsmeier, T. J. & Pearson, N. D. (1996). Risk Measurement: An Introduction to Value at Risk.Working Paper, Urbana Champaign, University of Illinois.

Luenberger, D. G. (1998). Investment Science, Oxford University Press, Inc. New York.

Moore, D. S., McCabe, G. P., Duckworth, W. M., & Alwan, L. C. (2009). Practice of Business Statistics: Using Data for Decisions. 2nd edition. H. Freeman and Company, New York.

Morgan, J.P. (1996). Riskmetrics-Technical Documents, 4th edition (New York: JP Morgan.)

Olle, B. & Bjorn, E. (2009). Star Wars: Finding the Optimal Value at Risk Approach for the Banking Industry. Master Thesis, University of Lund.

Christoffersen, P., & Pelletier, D. (2004). Backtesting Value at Risk: A Duration-Based Approach. Journal of Financial Econometrics, Vol. 2, No. 1, p. 84-108.

Pownall, R. A. J & Koedijk, K.G. (1999). Capturing Downside risk in Financial Markets: The Case of the Asian Crisis. Journal of International Money and Finance. Vol. 18, No.6, P. 853-870.

Pritsker, M. (2001). The Hidden Dangers of Historical Simulation; Working Paper; 2001-27; Board of Governors of the Federal Reserve System.

Robert, S. (2009). Value at risk: A critical Overview. Journal of financial Regulators and Compliance. Vol.17, No.4, p.398-414.

Turan, G. B. (2007). A Generalized Extreme Value Approach to Financial Risk Management. Journal of Money & Banking. Vol. 39, No. 7, p. 1613-1649.

Vlaar, P. J. G. (2000). Value at Risk Models for Dutch Bond Portfolios. Journal of Banking and Finance, Vol. 24, p. 1131-1154.

Xing, J., & Allen, X. Z. (2006). Reclaiming Quasi –Monte Carlo Efficiency in portfolio Value at Risk Simulation Through Fourier Transform, Journal of management science. Vol. 52, No.6, p. 925-938.

Yamai, Y., & Yoshiba, T. (2002). On the Validity of Value-at-Risk: Comparative Analysis with Expected Shortfall. Monetary and Economic Studies, Bank of Japan.

Yamai, Yasuhiro & Yoshiba T. (2005). Value-at-Risk versus Expected Shortfall: A practical perspective. Journal of Banking and Finance. Vol. 29, No. 4, P. 997-1015.