Comments 18/4/12

Animals in research

Can correlation show causality?

Gender Bias in Psychological Research

Can correlation show causality?

Should psychology be written for the layman?

Psychology covers many areas of human behaviour and social interaction, both of which can be hugely complex and difficult to understand. I’m going to discuss some of the arguments about whether psychology, particularly psychological research papers, should be written for the average layman to understand.

If psychology were written in layman’s terms, it would certainly reach out to more people; having a journal simplified would allow a greater number of people the opportunity to gain knowledge of the areas of psychology. I know from experience that if I read an article I didn’t understand I would put it down, purely because my interest wasn’t captured by it.

However, the problem with trying to simplify research journals/articles is that prior knowledge is sometimes needed, perhaps not of the subject area, but of the methods of investigation, the statistical procedures and the ability to understand the statistical output. Without such knowledge, some people may find it difficult to understand large aspects of the paper. Take for example the paper on the well-being of mothers of children with disabilities by Marji Erickson Warfield. This paper, like many psychological papers, presents statistical output, and states that the mean score for the sample was 122.7 (SD = 24.2), with a Cronbach’s alpha reliability coefficient of .94. A scientist or psychologist can draw understanding from this sentence; a non-scientist may struggle to comprehend its relevance.
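For anyone curious where a figure like that alpha comes from, here is a minimal sketch of how Cronbach’s alpha could be computed. The questionnaire scores below are entirely hypothetical (they are not the data from Warfield’s paper) and are only there to make the formula concrete:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical questionnaire scores: 5 respondents, 4 items
scores = [[4, 5, 4, 5],
          [2, 2, 3, 2],
          [5, 4, 5, 5],
          [3, 3, 2, 3],
          [1, 2, 1, 1]]

alpha = cronbach_alpha(scores)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Because the respondents answer all four items consistently, alpha comes out close to 1, which is why a reported value of .94 signals high reliability.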

On the other hand, you could say that maybe this argument is irrelevant and that understanding the statistical output is not important. And I would agree. The paper by Marji Erickson Warfield discusses how children with intellectual and developmental disabilities can have an adverse effect on a mother’s well-being. A basic understanding of this subject area can be extracted from the abstract, introduction, discussion and conclusion sections of the paper. The results offer an in-depth analysis of the investigation, but an understanding of the research paper can be achieved quite easily without having to delve into the somewhat mind-numbing monotony of statistical output.

I would further argue that if the harder sciences (biology, chemistry and physics) can be simplified so that even young children are able to grasp an understanding, is it not reasonable to assume that the same can be done for psychology? After all, the basic principles behind Freud, Bandura and Skinner’s work are not overly complex. Yes, their work is taught at undergraduate level, where we are required to fully analyse, critique and discuss it until our arms fall off, but that is not something children would be required to do to obtain an understanding.

Then again, you could argue that the reason for this is that, comparatively, psychology is a lot less scientific than the big three, although let’s not go down that road.

Psychology is complex, but so are mathematics, biochemistry and anatomy, and yet these subjects have been broken down into layman’s terms so that someone of below average intelligence, or even school children, can gain a basic understanding. However, I do not believe psychology should be written exclusively for the layman, merely that simplified papers/articles/journals on psychology should be provided.

Comments for 14/3/12

Bias in SONA

http://anythingforadegree.wordpress.com/2012/03/11/false-results-in-psychological-research/#comment-65

Mind over body.

Is it ethically ok to use internet sources as data for qualitative studies?

Can correlations show causality?

It is a well-known fact that correlations indicate a predictive relationship between two variables. For example, you can show that ice cream sales correlate positively with increases in temperature. Many of you will be familiar with the conventional saying that correlation does not imply causality. However, is this statement entirely correct?

The long-standing view that correlations cannot establish a causal relationship between two variables is largely supported. However, that is not to say that correlations cannot indicate the potential of a causal relationship. Take for instance the above example about ice cream sales. We cannot directly conclude from a correlation that hot weather causes people to buy more ice cream. However, one could reasonably predict from the correlation that hot weather could cause an increase in ice cream sales.
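The ice cream example can be sketched in a few lines. The figures below are made up for illustration; the point is that Pearson’s r quantifies the strength of the association without saying anything about what causes what:

```python
import numpy as np

# Hypothetical daily readings: temperature (degrees C) and ice cream sales (units)
temperature = np.array([14, 16, 19, 22, 25, 28, 31])
sales = np.array([120, 135, 160, 190, 230, 260, 300])

# Pearson's r measures the strength of the linear relationship only;
# a value near +1 says nothing about the direction of causation
r = np.corrcoef(temperature, sales)[0, 1]
print(f"Pearson's r = {r:.2f}")
```

A third variable (say, school holidays) could drive both series and produce the same high r, which is exactly why the correlation alone cannot settle the causal question.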

However, the problem with this is that causal relationships may be underlying, indirect or unknown. What’s more, high correlations have a tendency to overlap with identity relations, where no actual causation exists. I will concede, however, that some variable relationships can be causally transparent, for example the relationship between male teenagers’ age and the amount of facial hair they have. One would assume that the older the boys get, the more facial hair they will have. On the other hand, other relationships are not so clear cut. Take for example the relationship between gender and depression. It is much harder to draw a logical prediction from these two variables. To predict a causal relationship between them is ambitious at best.

As a result, I do not believe it can be conclusively stated that correlations show causality, because establishing a correlation between two variables is not enough to suggest that a causal process exists. However, correlations can, on occasion, indicate potential causal relationships between two variables, but this effect is inconsistent across correlations.

Comments for 22/2/12

Mr.Qualitative vs Mrs.Quantitative

Faking it!

Qualitative or Quantitative data?

DECEPTION

Is there anything that can’t be measured by psychologists?

As we are all aware, human behaviour and human interaction are umbrella terms covering a huge range of things that can be measured. Psychologists have measured reaction time, perception, personality disorders, the effect of facial expressions, intellectual disabilities; I could go on, but you get the point. In this blog, I’m going to discuss whether there is actually anything psychologists cannot measure, and the reasons behind it.

Now, I’m going to drop a name that you’ll all know: Sigmund Freud. I understand that his theories are dated and controversial, but I must be honest, I am a fan. Despite what people say about his work, and I do agree with some of it, he was a pioneer in the field of psychology and I respect that. However, I have one issue with Freud: the idea of the mind. The mind is an abstract concept; it doesn’t physically exist in the brain and it can’t be displayed in an observable way so that researchers can report and measure its effects. However, we use the term so regularly that sometimes it might as well exist :).

So, staying on the same lines of abstract concepts, I’m going to move to something a bit more controversial: the idea of love. Some of you may have fallen in love or may be in love right now; films are based solely around the idea and it can affect a person’s entire life, but can it be measured? Well, I would be the biggest hypocrite in the world if I said you could. It is, after all, an abstract concept; it doesn’t physically exist, just like the mind. However, I would argue that you could observe, not love itself, but the effects that it can have on an individual. We are all aware of how people can devote their life to one person just through the idea of love: to buy them gifts, commit their time, follow them to the ends of the earth and all that. What’s more, I think you could also measure the effect that a “broken heart” can have on someone. People have been known to fall into an UBER state of depression when they’ve had their heart broken. They may not leave the house, display little energy in their day-to-day life, or just dig into a giant tub of Ben & Jerry’s.

The range of things that a psychologist can measure is frankly incredible. I think we can safely assume that, in regards to human behaviour, there is a lot more that can be measured than cannot. But one thing that I do not believe can be measured, especially by today’s standards, is the mind. More than anything else, it cannot be observed by researchers; you can’t physically view its effects. At least with love, although an abstract concept, I do believe it is possible to extract some useful observations from its effect on an individual’s behaviour.

Comment marking for 8th Feb

Observational Studies.

Is it okay to use internet sources for research?

Why do we bother to conduct research and statistical analyses?

3rd February: Recruiting participants for studies

Thanks.

Should researchers conduct an exploratory data analysis?

Before we consider the arguments around this question, we must understand what an exploratory data analysis is. Then we can discuss whether researchers should conduct such an analysis.

An exploratory data analysis (EDA) is a way to analyse sets of data, whereby a summary of the main characteristics can be put into an easy-to-understand form, often using graphs to do so.

There are four main objectives of the EDA:

  • to suggest hypotheses about the causes of an observed phenomenon.
  • to support the selection of appropriate statistical tools and techniques.
  • to provide a basis for further data collection through surveys or experiments.
  • to assess the assumptions on which statistical inference will be based.

What’s more, EDA uses a variety of graphical and quantitative techniques, including histograms, multi-vari charts, scatter plots, ordination and rootograms. This gives researchers the benefit of being able to find the appropriate statistical test for their data.
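To make those techniques concrete, here is a minimal EDA sketch on simulated well-being scores. The data are made up (loosely echoing the mean and SD quoted earlier in this blog), and the crude text histogram stands in for the proper plots a researcher would draw:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated well-being scores for 200 respondents (hypothetical data)
scores = rng.normal(loc=122.7, scale=24.2, size=200)

# The numerical side of an EDA: a five-number summary
summary = {
    "min": scores.min(),
    "q1": np.percentile(scores, 25),
    "median": np.median(scores),
    "q3": np.percentile(scores, 75),
    "max": scores.max(),
}
for name, value in summary.items():
    print(f"{name:>6}: {value:6.1f}")

# The graphical side: a crude text histogram of the distribution's shape
counts, edges = np.histogram(scores, bins=8)
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:6.1f} - {hi:6.1f} | {'#' * n}")
```

Even a rough summary and histogram like this are enough to spot skew or outliers before choosing a statistical test, which is exactly the role the EDA’s objectives describe.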

From the outset, the EDA appears to be a well-structured and useful form of analysis. And in essence it is. It follows a scientific method in its objectives, and various statistical tests can be brought in to help carry out an analysis. It also has the added bonus of being able to take a complex set of figures and simplify the data to make it comprehensible. It is no surprise that it has become popular with many researchers.

The EDA lends itself very well to researchers working with quantitative data sets. However, for qualitative research, this approach is not advantageous. The most obvious reason is that there are few qualitative procedures that work efficiently with EDA. This would suggest that the EDA, although well-structured and useful, is only beneficial for researchers working with quantitative data sets.

To conclude, even though the EDA suffers from not being of great use to researchers using qualitative data sets, for those whose data it does suit, the EDA offers many advantages. It is my opinion that researchers working with quantitative data sets should conduct an exploratory data analysis.

Comments for Julie 8/12/11

http://statstastic.wordpress.com/2011/11/24/is-it-ethical-to-conduct-research-on-the-internet/

http://petesays.wordpress.com/2011/10/07/do-you-need-statistics-to-understand-your-data/

http://leilla92.wordpress.com/2011/11/25/is-qualitative-research-more-important-quantitative-methods/

http://psuc9f.wordpress.com/2011/12/06/must-the-need-for-documented-ethics-procedures-hinder-research-progress/

Do the ends justify the means?

It is common, necessary practice to follow strict ethical guidelines when carrying out psychological research. Research will be heavily criticised if it fails in any one area of ethical guidance; that could include informed consent, the right to withdraw or the protection of participants. The main argument in regards to ethics is whether the ends justify the means. I will discuss the main points of this argument in this blog, using the Milgram study as a key example.

If someone attacks you, is it okay to fight back? Self-defence does sound like a viable argument, but is it also okay, once someone has attacked you, to counterattack them without any moral restraint? This is the issue with this topic; the boundary between what is acceptable and what is not is very thin.

Take for example Stanley Milgram, who looked into the effect of obedience in response to an authority figure. Milgram was guilty of deceiving and harming participants during the study. Consequently, he was criticised by his peers for his apparent lack of ethical considerations. However, roughly 80% of participants later said that they were happy to have taken part in the experiment.

Despite the lack of ethical guidance, Milgram was able to uncover some very interesting results. He helped to explain the influence that authority can have on an individual, an influence that can lead people to do things of disastrous proportions. An example of this is the individuals who tortured others in Cambodia during the control of the Khmer Rouge, despite many of the torturers not sharing the principles of the Khmer Rouge.

If Milgram hadn’t presented such interesting results, it is likely his research would have been discarded. Combined with the fact that he followed few ethical guidelines, his ends wouldn’t have justified the means. Nevertheless, even though Milgram failed on various aspects of ethical standards, he managed to display some incredible results in regards to social influence. Therefore, I would argue that his methods were justified by his results.

For any researcher’s end results to be viewed as justifying unethical methods, those results have to be of great interest, significance and/or relevance. If the results are viewed as such, the research will be considered to have ends which justify the means. However, if the results prove to be substandard and uninspiring, such end results are not justified by the means.