University of Warsaw - Central Authentication System

Experimental design in Social Research

General data

Course ID: 2500-PL-PS-SP15-07
Erasmus code / ISCED: 14.4 / (0313) Psychology
Course title: (unknown)
Name in Polish: Experimental design in Social Research
Organizational unit: Faculty of Psychology
Course groups:
ECTS credit allocation (and other scores): 2.00
Basic information on ECTS credit allocation principles:
  • the annual workload required of a student to achieve the expected learning outcomes for a given stage is 1500-1800 hours, corresponding to 60 ECTS credits;
  • the student's weekly workload is 45 hours;
  • 1 ECTS credit corresponds to 25-30 hours of student work needed to achieve the assumed learning outcomes;
  • the weekly student workload needed to achieve the assumed learning outcomes yields 1.5 ECTS credits;
  • the work required to pass a course that has been assigned 3 ECTS credits constitutes 10% of the semester student workload.
Language: English
Prerequisites (description):

Course for specialization 315

Short description:

The aim of the course is to expand students' knowledge of the theory and practice of planning experiments in social psychology and beyond. Students will learn how to design adequately powered and valid experiments, covering methodological aspects of experimental design such as participant sampling, stimulus sampling, power analysis, and accounting for publication bias when interpreting effect sizes. Students will also discuss recent methodological innovations in the social sciences and how to apply them in their own research.

Learning outcomes:

- Develop skills to critically assess the size of effects reported in experimental research.

- Learn how to conduct power analysis and how to select an adequate sample size in experimental studies.

- Deepen knowledge of the factors that can improve or reduce the power of an experimental design.

- Become acquainted with different methods of improving the validity of experiments in social psychology.

- Learn how to clearly present the results of experiments in social psychology.

- Develop skills to design high-quality experimental research that is both interesting and reliable.

Classes in period "Winter semester 2024/25" (past)

Time span: 2024-10-01 - 2025-01-26
Type of class:
Classes, 15 hours
Coordinators: Wiktor Soral
Group instructors: Wiktor Soral
Credit: Course - Grading
Classes - Grading
Full description:

The aim of the course is to expand students' knowledge of the theory and practice of planning experiments in social psychology and beyond. Students will learn how to design adequately powered and valid experiments, covering methodological aspects of experimental design such as participant sampling, stimulus sampling, power analysis, and accounting for publication bias when interpreting effect sizes. Students will also discuss recent methodological innovations in the social sciences and how to apply them in their own research.

After the course, students should be comfortable selecting an appropriate effect size for their planned research and, based on that, an adequate sample size for their experiment. They should know various ways of improving the statistical power of their experiments, as well as the potential limitations of these methods. Furthermore, students should better understand obstacles to the validity of their research and how to avoid them. They will also become acquainted with recent methodological topics in psychology, such as new ways of visualizing experimental results.
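The sample-size planning the description refers to can be illustrated with a short, self-contained sketch. This is not course material, just a normal-approximation formula for a two-sided, two-sample t-test (it slightly underestimates the exact t-based n); the function name and numbers are illustrative:

```python
import math
from statistics import NormalDist

def sample_size_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample t-test
    detecting a standardized effect size d (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A modest effect (d = 0.4) at alpha = .05 and 80% power:
print(sample_size_per_group(0.4))  # -> 99 participants per group
```

Dedicated tools such as G*Power, or PANGEA from the reading list, handle more complex designs (repeated measures, crossed random factors) where this simple formula no longer applies.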

Bibliography:

1. Hello. Do we need experiments to prove causality? Basics of experimental design – recap.

• No readings

2. Effect sizes and planning sample sizes

• Funder, D. C., & Ozer, D. J. (2019). Evaluating effect size in psychological research: Sense and nonsense. Advances in Methods and Practices in Psychological Science, 2(2), 156-168.

• Westfall, J. (2015). PANGEA: Power analysis for general ANOVA designs. Unpublished manuscript. Available at https://jakewestfall.org/publications/pangea.pdf

• Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample-size planning for more accurate statistical power: A method adjusting sample effect sizes for publication bias and uncertainty. Psychological Science, 28(11), 1547-1562.

3. Various approaches to improving power of experiments

• Kanyongo, G. Y., Brook, G. P., Kyei-Blankson, L., & Gocmen, G. (2007). Reliability and statistical power: How measurement fallibility affects power and required sample sizes for several parametric and nonparametric statistics. Journal of Modern Applied Statistical Methods, 6(1), 81-90.

• Charness, G., Gneezy, U., & Kuhn, M. A. (2012). Experimental methods: Between-subject and within-subject design. Journal of Economic Behavior & Organization, 81(1), 1-8.

• Baker, D. H., Vilidaite, G., Lygo, F. A., Smith, A. K., Flack, T. R., Gouws, A. D., & Andrews, T. J. (2021). Power contours: Optimising sample size and precision in experimental psychology and human neuroscience. Psychological Methods, 26(3), 295–314.

4. Improving validity of experiments

• Kenny, D. A. (2019). Enhancing validity in psychological research. American Psychologist, 74(9), 1018–1028.

• Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61-83. (without commentaries)

• Wells, G. L., & Windschitl, P. D. (1999). Stimulus sampling and social psychological experimentation. Personality and Social Psychology Bulletin, 25, 1115–1125.

• Baumeister, R. F., Vohs, K. D., & Funder, D. C. (2007). Psychology as the science of self-reports and finger movements: Whatever happened to actual behavior? Perspectives on Psychological Science, 2(4), 396-403.

5. Doing research that is both interesting and reliable. Guidelines for better presentation of results.

• Gray, K., & Wegner, D. M. (2013). Six guidelines for interesting research. Perspectives on Psychological Science, 8(5), 549-553.

• Fiedler, K. (2017). What constitutes strong psychological science? The (neglected) role of diagnosticity and a priori theorizing. Perspectives on Psychological Science, 12(1), 46-61.

• Hehman, E., & Xie, S. Y. (2021). Doing Better Data Visualization. Advances in Methods and Practices in Psychological Science, 4(4), 25152459211045334.

Classes in period "Winter semester 2025/26" (past)

Time span: 2025-10-01 - 2026-01-25
Type of class:
Classes, 15 hours
Coordinators: (unknown)
Group instructors: Wiktor Soral
Credit: Course - Grading
Classes - Grading
Full description:

The aim of the course is to expand students' knowledge of the theory and practice of planning experiments in social psychology and beyond. Students will learn how to design adequately powered and valid experiments, covering methodological aspects of experimental design such as participant sampling, stimulus sampling, power analysis, and accounting for publication bias when interpreting effect sizes. Students will also discuss recent methodological innovations in the social sciences and how to apply them in their own research.

After the course, students should be comfortable selecting an appropriate effect size for their planned research and, based on that, an adequate sample size for their experiment. They should know various ways of improving the statistical power of their experiments, as well as the potential limitations of these methods. Furthermore, students should better understand obstacles to the validity of their research and how to avoid them. They will also become acquainted with recent methodological topics in psychology, such as new ways of visualizing experimental results.
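One lever for improving power discussed in the readings is measurement reliability: under classical test theory, an unreliable measure attenuates the observed standardized effect (observed d ≈ true d × √reliability), which inflates the required sample size. A minimal sketch, assuming a hypothetical true effect of d = 0.5 and a normal-approximation sample-size formula (illustrative only, not course material):

```python
import math
from statistics import NormalDist

def required_n(d, alpha=0.05, power=0.80):
    """Per-group n for a two-sided, two-sample t-test (normal approximation)."""
    z = NormalDist()
    return math.ceil(2 * ((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) / d) ** 2)

true_d = 0.5  # hypothetical true effect size
for reliability in (1.0, 0.8, 0.6):
    observed_d = true_d * math.sqrt(reliability)  # classical-test-theory attenuation
    print(f"reliability={reliability:.1f}  observed d={observed_d:.2f}  "
          f"n per group={required_n(observed_d)}")
```

Under these assumptions, dropping reliability from 1.0 to 0.6 raises the required sample from 63 to 105 participants per group, which is why improving measures can be cheaper than recruiting more participants.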

Bibliography:

1. Hello. Do we need experiments to prove causality? Basics of experimental design – recap.

• No readings

2. Effect sizes and planning sample sizes

• Funder, D. C., & Ozer, D. J. (2019). Evaluating effect size in psychological research: Sense and nonsense. Advances in Methods and Practices in Psychological Science, 2(2), 156-168.

• Westfall, J. (2015). PANGEA: Power analysis for general ANOVA designs. Unpublished manuscript. Available at https://jakewestfall.org/publications/pangea.pdf

• Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample-size planning for more accurate statistical power: A method adjusting sample effect sizes for publication bias and uncertainty. Psychological Science, 28(11), 1547-1562.

3. Various approaches to improving power of experiments

• Kanyongo, G. Y., Brook, G. P., Kyei-Blankson, L., & Gocmen, G. (2007). Reliability and statistical power: How measurement fallibility affects power and required sample sizes for several parametric and nonparametric statistics. Journal of Modern Applied Statistical Methods, 6(1), 81-90.

• Charness, G., Gneezy, U., & Kuhn, M. A. (2012). Experimental methods: Between-subject and within-subject design. Journal of Economic Behavior & Organization, 81(1), 1-8.

• Baker, D. H., Vilidaite, G., Lygo, F. A., Smith, A. K., Flack, T. R., Gouws, A. D., & Andrews, T. J. (2021). Power contours: Optimising sample size and precision in experimental psychology and human neuroscience. Psychological Methods, 26(3), 295–314.

4. Improving validity of experiments

• Kenny, D. A. (2019). Enhancing validity in psychological research. American Psychologist, 74(9), 1018–1028.

• Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61-83. (without commentaries)

• Wells, G. L., & Windschitl, P. D. (1999). Stimulus sampling and social psychological experimentation. Personality and Social Psychology Bulletin, 25, 1115–1125.

• Baumeister, R. F., Vohs, K. D., & Funder, D. C. (2007). Psychology as the science of self-reports and finger movements: Whatever happened to actual behavior? Perspectives on Psychological Science, 2(4), 396-403.

5. Doing research that is both interesting and reliable. Guidelines for better presentation of results.

• Gray, K., & Wegner, D. M. (2013). Six guidelines for interesting research. Perspectives on Psychological Science, 8(5), 549-553.

• Fiedler, K. (2017). What constitutes strong psychological science? The (neglected) role of diagnosticity and a priori theorizing. Perspectives on Psychological Science, 12(1), 46-61.

• Hehman, E., & Xie, S. Y. (2021). Doing Better Data Visualization. Advances in Methods and Practices in Psychological Science, 4(4), 25152459211045334.

Course descriptions are protected by copyright.
Copyright by University of Warsaw.
Krakowskie Przedmieście 26/28
00-927 Warszawa
tel: +48 22 55 20 000 https://uw.edu.pl/
USOSweb 7.2.0.0-12 (2026-02-26)