Do we need statistical quality criteria?

Category: Experiment, Psychology
Last Updated: 27 Jan 2021

The intention is to make statistical analysis methods and their dependencies comparable. The transformation into quantitative variables is meant to have a rationalizing effect: extensive data can be reduced to its core. To check the comparability this requires, quality criteria have been developed. According to the current interpretation, these criteria are met to a higher degree the more easily the research content, the course of the investigation and the setting can be standardized. There are five different criteria of measurement, and they are divided into two groups.

These are the main group and the subgroup. Objectivity, reliability and validity belong to the main group; acceptability and economy form the subgroup. This paper focuses on the main group.

Chapter 2 - Criteria of Measurement

1. Objectivity

Objectivity is the extent to which a test result in implementation, analysis and interpretation is unaffected by the investigator, i.e. the extent to which several investigators come to matching results. Neither the implementation nor the analysis and interpretation carried out by different researchers may therefore produce different results.
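To make the idea of "matching results" concrete, here is a minimal sketch in Python of how agreement between two investigators scoring the same protocols might be checked; the rater names and scores are purely illustrative assumptions, not data from the text.

```python
# Minimal sketch: two hypothetical investigators score the same ten
# test protocols; close agreement suggests the result does not depend
# on who carries out the analysis (illustrative numbers only).
import numpy as np

rater_a = np.array([12, 15, 9, 20, 14, 11, 18, 7, 16, 13])
rater_b = np.array([11, 16, 9, 19, 15, 10, 18, 8, 15, 13])

# Correlation between raters: values near 1 indicate high objectivity
# of analysis and interpretation.
inter_rater_r = np.corrcoef(rater_a, rater_b)[0, 1]

# Mean absolute difference as a second, simpler agreement check.
mean_abs_diff = np.mean(np.abs(rater_a - rater_b))

print(f"inter-rater correlation: {inter_rater_r:.2f}")
print(f"mean absolute difference: {mean_abs_diff:.2f}")
```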

Implementation objectivity requires that the test result is unaffected by the user. Interpretation objectivity requires that individual views of the researcher do not enter into the interpretation of the result. For example, measuring the length of a screw with calibrated measuring equipment is an objective measurement, whereas a survey of employees about leadership conducted by their own team leader is very subjective.

2. Reliability

Reliability indicates the dependability of a measurement method. An investigation is described as reliable if it comes to the same conclusion when the measurement is repeated under the same conditions.

It can be determined, inter alia, by a repeat examination (test-retest method) or by an equivalent test (parallel-test method). The measure is the reliability coefficient, which is defined as the correlation of the two investigations. An example of a reliable question is "How many employees does your department have?" A second question has lower reliability if it is unclear who is defined as a "team player", so that different opinions can arise.
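As a minimal sketch of the test-retest idea, assuming the scores of the two administrations are already available (all numbers are invented for illustration), the reliability coefficient can be computed as their correlation:

```python
# Minimal sketch of the test-retest method: the same test is given twice
# under the same conditions and the reliability coefficient is taken as
# the correlation of the two administrations (illustrative scores only).
import numpy as np

first_administration = np.array([34, 28, 41, 25, 37, 30, 44, 22])
second_administration = np.array([33, 30, 40, 24, 38, 29, 45, 23])

reliability = np.corrcoef(first_administration, second_administration)[0, 1]
print(f"test-retest reliability coefficient: {reliability:.2f}")
# Values close to 1 indicate a reliable measurement; the parallel-test
# method would be computed the same way, using an equivalent test
# instead of a repetition.
```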

3. Validity

Some people say that there is no single validity but rather quite different kinds of validity. This is certainly true, yet they have something in common: the validity of a measurement describes the degree of accuracy with which a method measures what it purports to measure. An intelligence test, for example, is only valid if intelligence is measured and not primarily, say, steadiness. A test has content that is "true"; the problem is to determine that. The term "intelligence" (like "aggression", "anxiety" etc.) is a so-called construct: constructs are concepts that appear more or less theoretically meaningful. What they describe is not observed directly but inferred from indicators.

Under the theory, constructs are useful assumptions. Basically, the problem of the "truth" of statements is hidden in the concept of validity: are the claims true? In the example above: does a test that is sold as an "intelligence test" deserve this name? Psychologically more important is "empirical validity", meaning concurrent and predictive validity. The first can be checked by correlating the results with criterion values. In this way we could, for instance, demonstrate the validity of a calculation exam for the third school year by correlating the test results with the teachers' judgments.
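A minimal sketch of this check, assuming hypothetical test scores and teachers' grades (coded 1 = best to 5 = worst); the numbers are invented for illustration:

```python
# Minimal sketch of checking empirical validity: correlate the results of
# a calculation test with an external criterion such as the teachers'
# judgments (hypothetical data, grades coded 1 = best to 5 = worst).
from scipy.stats import pearsonr

test_scores = [55, 62, 47, 70, 58, 65, 43, 74]
teacher_grades = [3, 2, 4, 1, 3, 2, 5, 1]

r, p = pearsonr(test_scores, teacher_grades)
print(f"validity coefficient: r = {r:.2f} (p = {p:.3f})")
# Because low grade numbers mean good performance here, a strongly
# negative correlation would support the validity of the test.
```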

Determining predictive validity suggests itself when, for example, designing a school readiness test: after the test survey one should wait at least until the end of the first school year and then assess the correlation between test results and school performance. The test is valid if the correlation turns out high. Validity is the most important quality criterion because it indicates the degree of accuracy with which a study captures what it is meant to capture (e.g. personality traits or behaviors). Validation is performed using the correlation with an external criterion. There are different types of validity:

3.1 Construct validity

Construct validity is present when the measurements capture what they are supposed to measure (if the construct is inferred only from a high correlation, construct validity amounts to no more than reliability). If hypotheses derived from a construct are confirmed, this means high construct validity and consequently good empirical confirmation of these hypotheses. Low construct validity does not necessarily speak against the measure; it can also speak against the construct itself. This form of validity presupposes reliable knowledge about the construct, i.e. knowledge of the relevant theories and findings.

The validity of an aggression test may be supported, for example, when men achieve higher scores than women and when young men (about 20 years old) have higher values than older ones (about 40 years old). In general, aggressiveness in our culture is more pronounced among young men than among women and older men (detectable in the crime statistics). The results of a test must therefore ultimately agree with the general knowledge about the construct. A special procedure for determining construct validity is factor analysis: complex computational procedures are used to identify clumps of test tasks.

Usually it is not particularly difficult to interpret these clumps (factors): one sees, for example, that among many (intelligence) tasks those requiring work with numbers form a special factor, and they can then be combined into one subtest of "number-bound thinking". Factor analysis is guided on the one hand by the theoretical knowledge of the researchers; on the other hand this knowledge is supplemented, or even corrected, by the empirically derived factors. Especially when computers are used, many subjective decisions remain, since there are many variants of factor analysis.
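A minimal sketch of this idea, using simulated scores on six hypothetical tasks (three number-bound, three verbal); scikit-learn's FactorAnalysis stands in here for whatever software a researcher would actually use:

```python
# Minimal sketch of factor analysis: extract factors from scores on
# several tasks and inspect which tasks "clump" together. The data are
# random placeholders; in practice real item scores would be used.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_persons = 200

# Simulate two underlying abilities ("number-bound" and "verbal") ...
number_ability = rng.normal(size=n_persons)
verbal_ability = rng.normal(size=n_persons)

# ... and six tasks, three driven by each ability plus noise.
tasks = np.column_stack([
    number_ability + rng.normal(scale=0.5, size=n_persons),
    number_ability + rng.normal(scale=0.5, size=n_persons),
    number_ability + rng.normal(scale=0.5, size=n_persons),
    verbal_ability + rng.normal(scale=0.5, size=n_persons),
    verbal_ability + rng.normal(scale=0.5, size=n_persons),
    verbal_ability + rng.normal(scale=0.5, size=n_persons),
])

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(tasks)

# The loadings show which tasks load on which factor; the first three
# tasks and the last three should separate onto different factors.
print(np.round(fa.components_, 2))
```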

An example of high construct validity is the Milgram experiment. In this experiment people were appointed as teachers and had to punish a student who gave a wrong answer to a question. The subjects were not informed about the actual purpose of the experiment. The punishment was carried out using electric shocks from five up to 400 volts. The teacher (the subject) could not see the student but could hear him or her. The student was not actually harmed. The experiment was meant to measure people's obedience under a certain authority. The independent variable was the authority of the experimenter, and obedience could be clearly measured by the voltage reached.

The question was: at what voltage does a participant break off the experiment? So one can say: the higher the voltage reached, the more obedient the participant. The experiment in Germany, the USA and Israel led to an alarming result: in all countries 85% of the participants carried the experiment out to the end. During the experiment the students at higher voltage levels (from about 350 volts) no longer dared to make a single sound. Almost all of the participants were convinced that they had actually tortured a person.

3.2 Criterion validity

Criterion validity is a special aspect of construct validity.

Criterion validity is present when the measurements correlate highly with a different, construct-valid measurement (the criterion). There is a risk of circularity if construct validity is defined solely by criterion validity (test A is valid because it correlates with test B, which correlates with test C, which correlates with test A). If, looked at differently, all tests that conform to the construct correlate with each other (a nomological network), this is stronger proof of validity than a single pair of validation measurements.
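A minimal sketch of this "network" check, with invented scores for a new test, two established tests of the same construct, and one test of a different construct:

```python
# Minimal sketch: build the correlation matrix of several tests. High
# correlations among same-construct tests and low correlations with the
# unrelated test would be the network-style evidence described above.
# All scores are illustrative assumptions.
import numpy as np

scores = {
    "new_test":        [10, 14, 9, 17, 12, 15, 8, 16],
    "established_A":   [11, 13, 10, 18, 12, 14, 9, 17],
    "established_B":   [9, 15, 8, 16, 13, 14, 7, 18],
    "other_construct": [4, 2, 5, 3, 4, 1, 5, 2],
}

names = list(scores)
matrix = np.corrcoef([scores[name] for name in names])

for i, name in enumerate(names):
    row = "  ".join(f"{matrix[i, j]:5.2f}" for j in range(len(names)))
    print(f"{name:>15}: {row}")
```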

For instance, a test to measure depression: we apply this test to persons who have demonstrably suffered from depression and check how closely the test results agree with other established assessments (e.g. with the assessments of psychotherapists). Usually four types of criterion validity are distinguished: convergent validity, discriminant validity, concurrent validity and predictive validity.

3.2.1 Convergent validity

Convergent validity exists when a test correlates highly with other tests that purport to measure the same construct: of several alternative criteria, of which only some have high construct validity, the measurement correlates highly with those of high validity. For example, the measurement of the observation criterion "conflict resolution skills" in a negotiation exercise should correlate with the measurement of the same criterion in a team exercise.

3.2.2 Discriminant validity

Discriminant validity means that the measurement correlates only weakly with measurements of different constructs. The measurement of the observation criterion "conflict resolution skills" in a negotiation exercise should, at this point, not correlate with the measurement of results orientation in the same exercise. Constructs that differ in content should generally not correlate with each other, not even when the same measurement procedure is used. If one still finds a correlation, the measurement method usually has too strong an influence on the measurement and should be revised.

3.2.3 Concurrent validity

Concurrent validity means that measurement and criterion are applied simultaneously.

The measurement is assessed at the same time as the measurement of the criterion.

3.2.4 Predictive validity

The difference between concurrent and predictive validity is that with concurrent validity, forecasts are based on measurements taken at the same time. Predictive validity means that the criterion is assessed after the measurement, i.e. the measurement is meant to predict the criterion. An instrument has predictive validity if predictions based on a first measurement can be confirmed by later measurements with another instrument (Schnabel, Hill, Seer 1995).

3.3 Content validity

Content validity is actually a specific aspect of construct validity. It is present when the contents captured by the measurement represent the content that is to be measured. Content validity can be formally assessed only if the totality of the content to be measured is known, but this is rarely the case. It is mostly used for simple tests, for instance a knowledge test or a spelling test. Content validity is assumed if, according to experts, the individual test items are a good sample of all possible tasks.

A calculation test for the third school year is valid if the tasks represent the subject matter of this age group well.

3.4 Ecological validity

A method is ecologically valid if the S-conditions (S stands for stimulus) introduced by the method are an unbiased sample of the population of all living conditions, i.e. of the individual's S-conditions. The method is ecologically invalid for an individual if the introduced S-conditions are not, or only rarely, represented in this combination (Pallid, 1976). For example, the number of days missed at work is a valid indicator of the health of employees but not of their satisfaction: if they are at work, you do not know whether they are satisfied or not. The attempt to measure the length of a screw with a thermometer is another example of a non-valid measurement.

4. Acceptability

Acceptability determines whether a measurement is acceptable, in other words whether it is consistent with written and unwritten social norms and with the investigation partners, and is thus accepted as such, for example in an interview.

5. Economy

Time and money are always scarce goods, so the aspect of economy has to be considered. Of two measurements, the more economical one is the one achieved with less cost and time.

6. Result

In short, it is very important to follow the criteria of measurement. If you do not, your result is not valid, and an invalid result will not be your only problem: you can carry out a measurement and obtain a result, but the result does not represent what you wanted to measure. The best way to measure is to measure with two groups, because with two groups you have the possibility of comparing them.
