Market Capitalization Essay

To test whether share prices exhibit long memory and/or non-linearity, a variety of techniques are applied to the time series of the top 25 and bottom 25 companies (by market capitalization) in both the FTSE and the AIM from January 1, 1998 to December 31, 2004, inclusive. The selection allows two hypotheses to be investigated. The first is that there are differences in information processing between listed markets: the expectation is that AIM-listed companies will be subject to less scrutiny by market participants than their FTSE counterparts and will therefore be more prone to informational inefficiencies.

Their share prices are therefore more likely to exhibit long memory. The second hypothesis is the relative size hypothesis: larger companies should be subject to greater scrutiny by market participants and therefore be less prone to market inefficiencies, so the tendency towards long memory should be more prominent in smaller companies. Dolado, Gonzalo and Mayoral (2002) adapted the traditional Dickey-Fuller test to a more general framework in order to test for long memory. This fractionally augmented version of the Dickey-Fuller test (FADF) consists of estimating the regression given below.
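The regression itself is not reproduced in the source text; the following is a sketch of its general form, following Dolado, Gonzalo and Mayoral (2002), with an assumed lag length $p$ standing in for whatever augmentation the original study used:

$$\Delta y_t = \phi\, \Delta^{d} y_{t-1} + \sum_{i=1}^{p} \zeta_i\, \Delta y_{t-i} + \varepsilon_t$$

Here $y_t$ is the (log) share price and $\Delta^{d}$ is the fractional difference operator. The null hypothesis of a unit root, $H_0: \phi = 0$, is tested against the fractional alternative $H_1: \phi < 0$ with $d < 1$.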

Using various consistent estimates of d (Shimotsu's exact Whittle estimator, 2006; Geweke and Porter-Hudak's semi-parametric estimator, 1983; and two parametric estimators from Doornik and Ooms, 1999), the test statistic is asymptotically normal under H0. To investigate the possibility of non-linearity in the series, both traditional parametric tests for structural breaks and random-field-based inference are used. In both cases the possibility that the share prices follow a non-linear AR(1) process, or some form of transition model, is considered.
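As an illustration of the semi-parametric approach, the following is a minimal Python sketch of the Geweke and Porter-Hudak log-periodogram estimator of d. The function name and the bandwidth choice m = floor(sqrt(n)) are assumptions made for illustration and are not taken from the study described above.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Geweke & Porter-Hudak (1983) log-periodogram estimator of the
    long-memory parameter d (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    if m is None:
        # A common (assumed) bandwidth choice: m = floor(sqrt(n)).
        m = int(np.floor(np.sqrt(n)))
    j = np.arange(1, m + 1)
    omega = 2.0 * np.pi * j / n                  # first m Fourier frequencies
    fft = np.fft.fft(x - x.mean())
    periodogram = np.abs(fft[1:m + 1]) ** 2 / (2.0 * np.pi * n)
    # Regress log I(omega_j) on log(4 sin^2(omega_j / 2)); the slope estimates -d.
    regressor = np.log(4.0 * np.sin(omega / 2.0) ** 2)
    X = np.column_stack([np.ones(m), regressor])
    beta, *_ = np.linalg.lstsq(X, np.log(periodogram), rcond=None)
    d_hat = -beta[1]
    se = np.pi / np.sqrt(24.0 * m)               # approximate asymptotic s.e.
    return d_hat, se

# Example usage: for white noise the estimate of d should be close to zero.
rng = np.random.default_rng(0)
d_hat, se = gph_estimate(rng.standard_normal(2000))
print(round(d_hat, 3), round(se, 3))
```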

Structural breaks were investigated using the suite of tests proposed by Bai and Perron (2003). Two special cases of the model introduced in Hamilton (2001) served as random-field tests for non-linearity, where $x_t = (P_{t-1}, t)'$ and $\odot$ denotes element-by-element multiplication:

$$P_t = \alpha + \rho P_{t-1} + \lambda\, m(g \odot P_{t-1}) + \varepsilon_t$$

and

$$P_t = \alpha + \rho P_{t-1} + \delta t + \lambda\, m(g \odot x_t) + \varepsilon_t$$

Here $m$ denotes the random field and $g$ is a $(k \times 1)$ vector with $k = 1$ or $2$. The parameter $g$ measures the impact of either $P$ or $t$ on the non-linearity, and the scalar parameter $\lambda$ measures the degree of non-linearity. A simple test for non-linearity is therefore a test of $H_0 : \lambda = 0$.
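Bai and Perron's procedure allows for multiple breaks at unknown dates; as a simplified, hedged illustration of the underlying idea, the sketch below computes a single-break sup-F statistic for an AR(1) regression of the price series. The function names, the AR(1) specification and the 15 per cent trimming are assumptions made for illustration; this is not the full Bai and Perron (2003) procedure used in the study.

```python
import numpy as np

def ols_ssr(y, X):
    """Residual sum of squares from an OLS fit (helper)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def sup_f_single_break(prices, trim=0.15):
    """Sup-F statistic for a single unknown break in an AR(1) regression.

    Simplified illustration of structural-break testing, not the full
    Bai & Perron (2003) multiple-break procedure."""
    p = np.asarray(prices, dtype=float)
    y, x = p[1:], p[:-1]                       # AR(1): P_t regressed on P_{t-1}
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    ssr_full = ols_ssr(y, X)                   # restricted model: no break
    k = X.shape[1]
    lo, hi = int(trim * n), int((1 - trim) * n)
    best_f, best_idx = -np.inf, None
    for b in range(lo, hi):
        # Unrestricted model: separate coefficients before and after candidate break b.
        ssr_split = ols_ssr(y[:b], X[:b]) + ols_ssr(y[b:], X[b:])
        f = ((ssr_full - ssr_split) / k) / (ssr_split / (n - 2 * k))
        if f > best_f:
            best_f, best_idx = f, b
    return best_f, best_idx

# Example usage on a simulated random walk (no true break expected):
rng = np.random.default_rng(1)
prices = np.cumsum(rng.standard_normal(500))
print(sup_f_single_break(prices))
```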

Hamilton’s approach was to assume a Gaussian random field, so that the first two moments fully define the field, and to derive a simple χ² test of the null hypothesis from a generalised linear model interpretation. Dahl and González-Rivera (2003) extended the basic test, introducing three further tests that take the problem of nuisance parameters into account. They showed that these four tests are generally more powerful than other tests of non-linearity and appear insensitive to model misspecification. The testing of the hypotheses here follows Dahl and González-Rivera's suggestion of combining the four tests to obtain a more powerful result.
