Visual Neuroscience Lab


Calculating a priori sample size using G*Power

Power Analysis

Information and descriptions of how to calculate a priori sample size using G*Power

Author: CY

Published: March 21, 2025

1 Power analysis

Power analysis is a statistical technique that helps scientists determine the sample size required to detect an effect of a given size with a desired degree of certainty. It helps researchers understand the likelihood that their study will detect a meaningful difference or relationship (if one exists) between groups or variables being studied. By performing a power analysis before conducting an experiment, scientists can ensure that the study is neither too small (which would result in a lack of statistical power to detect an effect, leading to a possible Type II error) nor too large (which could waste resources and potentially expose participants to unnecessary procedures). It’s a crucial step in the research design process that helps in making informed decisions and in enhancing the credibility and replicability of scientific findings.

Calculate the a priori sample size for the statistical test on which you will base your argument. For example, if you are interested only in the results of the ANOVA comparisons, run the calculation for your ANOVA test. If you also plan to follow up with multiple comparisons, however, calculate the required sample size for your post-hoc method, which is commonly the paired t test.
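The trade-off just described can also be checked in code: base R’s power.t.test() (from the built-in stats package, no installation needed) solves for whichever parameter is left at NULL. A minimal sketch for a paired t test, assuming a Cohen’s d of 0.4:

```r
# Solve for the required number of pairs, given alpha, power, and effect size.
# With sd = 1, delta is the mean difference in standard-deviation units,
# i.e. Cohen's d (here an assumed value of 0.4).
res <- power.t.test(
  n = NULL,          # left NULL: this is the parameter to solve for
  delta = 0.4,       # assumed effect size (Cohen's d)
  sd = 1,
  sig.level = 0.05,  # alpha
  power = 0.80,
  type = "paired"
)
res$n  # about 51 pairs; round up to the next whole participant
```

Leaving power = NULL instead (and supplying n) turns the same call into a post-hoc power calculation.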

2 G*Power

G*Power is a very popular and free software program that researchers use to conduct power analysis. It was developed by Franz Faul at the Universität Kiel, Germany, and is frequently updated and improved upon. The software is designed to help researchers with a variety of statistical tests, including t-tests, F-tests, chi-squared tests, z-tests, and some exact tests.

G*Power provides a flexible tool for calculating the required sample size for a study based on the effect size, significance level, and power – which refers to the probability of correctly rejecting a false null hypothesis. Researchers can also use G*Power to calculate the power of their study given the sample size and effect size.

The program is user-friendly, allowing both beginners and experienced users to carry out a range of power analyses that might otherwise be complex and time-consuming. Its broad applicability across various statistical analyses makes it a valuable tool in the research planning process, helping to ensure that studies are properly designed to yield valid and reliable results.

See the official page of G*Power

2.1 How to calculate a priori sample size on G*Power

To calculate an a priori sample size using G*Power, you need to follow several steps. The specific steps might change slightly depending on the statistical test you plan to use, but the overall process is similar:

  1. Download and Open G*Power

  2. Choose the Test

    In the G*Power interface, select the statistical test that you intend to use from the ‘Test family’ (e.g., t tests, F tests, etc.) and the ‘Statistical test’ (e.g., ANOVA, linear multiple regression, etc.) menus according to the hypothesis of your study.

  3. Determine the Type of Power Analysis

    Since you want to calculate the sample size, you should choose ‘A priori: Compute required sample size - given α, power, and effect size’.

  4. Input Parameters

    • Effect Size: Estimate the effect size for your desired test, which indicates the magnitude of the relationship or difference that you expect to find. You can use previous literature or pilot studies to estimate this.

    • α Error Probability (Significance Level): Typically, this is set at 0.05.

    • Power (1-β Error Probability): Commonly set at 0.80, indicating an 80% chance of detecting an effect if there really is one.

    • Allocation Ratio (for comparative studies): This is the ratio of the number of participants in one group to the number of participants in the other group(s). For equal group sizes, this is set to 1.

  5. Calculate

    Once all parameters are entered, click on the ‘Calculate’ button. G*Power will display the required sample size for your study based on the input values.

  6. Review Output

    G*Power will provide the calculated sample size along with other details related to the power analysis. Carefully review this output to determine if it is suitable for the needs of your study.

Keep in mind that determining these parameters, especially the effect size, can sometimes be the most challenging part of a power analysis. Consult with a statistician, your team members or relevant literature if you are unsure about these inputs. Finally, remember that power analysis is an important part of designing your study, but it should be combined with practical considerations like resource availability, ethical constraints, and methodological limitations.
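As a rough sanity check on the number G*Power reports, the required sample size for a two-sided paired t test can be approximated by hand from the normal quantiles: n ≈ ((z₁₋α/₂ + z₁₋β) / d)². This is only a normal approximation (the exact calculation uses the noncentral t distribution), but it shows how the three inputs combine:

```r
# Normal approximation to the required n for a two-sided paired t test
alpha <- 0.05
power <- 0.80
d     <- 0.4   # assumed effect size (Cohen's d)

z_alpha <- qnorm(1 - alpha / 2)  # 1.96 for alpha = .05
z_beta  <- qnorm(power)          # 0.84 for power = .80

n_approx <- ceiling(((z_alpha + z_beta) / d)^2)
n_approx  # 50; the exact t-based answer is slightly larger
```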

2.2 Repeated measures ANOVA on G*Power

As an example, let’s say we have 3 different stimulus types. In each session, we will present one stimulus type at 4 different contrast levels. The task will be the same throughout the experiment. Our aim is to determine the effect of contrast.

In such a case, we need to use a repeated measures ANOVA (or a 2-way mixed ANOVA, also called a split-plot ANOVA). Calculating an a priori sample size for a mixed-design ANOVA in G*Power can be a bit tricky, as G*Power does not directly provide a pre-built module for mixed ANOVAs with both within-subjects (repeated measures) and between-subjects factors. However, you can still approximate the necessary calculation by considering the most powerful test within your design, which is typically the repeated-measures part if the effect sizes are similar. Here is a general guide on how to do this:

  1. Open G*Power

  2. Select the Statistical Test

    • Choose ‘F tests’ as the test family.

    • Choose ‘ANOVA: Repeated measures, within factors’.

      (You can also choose ‘ANOVA: Repeated measures, between factors’ depending on which effect you expect to be larger or more important to your hypothesis testing. However, a conservative approach is often to consider the within-effects when uncertain.)

  3. Choose the Type of Power Analysis

    • Select ‘A priori: Compute required sample size - given α, power, and effect size’.
  4. Input the Parameters

    • Effect Size: Determine the effect size with the direct method so that the repeated-measures structure is taken into account. To do that, click on ‘Determine =>’. In the pop-up window, select ‘Direct’, enter the eta squared value, and click ‘Calculate and transfer to main window’.

      Eta squared (\(\eta^{2}\)) is an effect size indicating the proportion of the total variance that is explained by the within-subject variable (in our case, the different contrast levels). Approximate conventions for partial eta squared are small = 0.02, medium = 0.06, large = 0.14.

    • α Error Probability: The significance level is typically set at 0.05.

    • Power: Set this commonly to 0.80, indicating an 80% probability of correctly rejecting the null hypothesis.

    • Number of Groups: For the between-subjects factor, input the number of levels or conditions. If you compare, for example, a healthy group vs. a patient group, the number of groups is 2. For purely within-subject designs, set the number of groups to 1.

    • Number of Measurements: For the within-subjects factor, input the number of levels or conditions. If you have more than one within-subjects variable, compute the number of measurements by multiplying the number of conditions of each factor. For example, with three stimulus types (k = 3), each presented at four contrast levels (m = 4), you have k × m = 3 × 4 = 12 measurements.

    • Correlation among Repeated Measures: This is determined by the expected correlation between the measures at different levels of contrast. If you don’t have an estimate, using a default value like 0.5 can be a starting point.

    • Nonsphericity Correction ε: If you have an estimate for this, input it here; otherwise, you can keep it at 1 or use the default provided by G*Power.

  5. Calculate

  6. Review Results

Please note that this approach is an approximation, as it does not fully represent the complexity of a mixed-design ANOVA. It essentially treats the between-subjects factor as another within-subjects level, which may not be totally accurate but can give you a ballpark figure for your sample size. If possible, consult with a statistician to ensure that the calculation is correctly tailored to your study’s specific needs and to take into account other complexities that G*Power may not directly address for mixed designs.
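For reference, G*Power’s repeated-measures module works internally with Cohen’s f, and the ‘Direct’ option in step 4 converts the eta squared you enter via f = sqrt(η² / (1 − η²)). A small sketch of that conversion, applied to the conventional benchmarks above:

```r
# Convert (partial) eta squared to Cohen's f,
# the effect-size metric G*Power uses internally
eta2_to_f <- function(eta2) sqrt(eta2 / (1 - eta2))

benchmarks <- c(small = 0.02, medium = 0.06, large = 0.14)
round(eta2_to_f(benchmarks), 2)
#  small medium  large
#   0.14   0.25   0.40
```

The medium and large values line up with Cohen’s conventional f benchmarks of 0.25 and 0.40.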

See another example on YouTube

3 Power analysis by R packages

If you want to use R to calculate your power level or required sample size, you can use one of the available packages, depending on your needs. You can use the ready-to-use function with default parameters: Script is here.

3.1 pwr

This is the most common and basic package for power analysis. However, it is not suitable for complex statistical tests such as repeated measures ANOVA.

pwr::pwr.t.test(n = NULL,  # NULL to calculate a priori sample size
  d = 0.4,  # Cohen's d, effect size
  sig.level = 0.05,  # alpha value
  power = 0.80,  # desired power level
  type = "paired",  # type of t test
  alternative = "two.sided"  # alternative hypothesis
)

     Paired t test power calculation 

              n = 51.00945
              d = 0.4
      sig.level = 0.05
          power = 0.8
    alternative = two.sided

NOTE: n is number of *pairs*

3.2 WebPower

This is a strong tool for different types of ANOVA.

WebPower::wp.rmanova(
  n = NULL,  # Compute required sample size
  ng = 1,   # Only one group if it's a within-subjects design
  nm = 3*3,  # Number of measurements (n*m for n x m design)
  f = 0.4,  # Effect size
  nscor = 1,  # Nonsphericity correction (assumed sphericity for now)
  alpha = 0.05,  # significance level, aka alpha value
  power = 0.80,  # desired power 
  type = 1 # 0: between-effect; 1: within-effect; 2: interaction effect
)
Repeated-measures ANOVA analysis

           n   f ng nm nscor alpha power
    94.86041 0.4  1  9     1  0.05   0.8

NOTE: Power analysis for within-effect test
URL: http://psychstat.org/rmanova

3.3 Superpower

As the name suggests, it is super for power analysis :) However, you need to simulate your data to calculate the required sample size. It can be useful for estimating the actual power level, but in general it is hard to build realistic simulations for our studies in the Visual Neuroscience Lab.

# Experimental design
design_result <- Superpower::ANOVA_design(
  design = "3w*3w",  # 3x3 within-subjects design
  n = 30,  # as a start point to find the required sample size
  mu = c(500, 520, 540, 490, 510, 530, 480, 500, 520),  # average values for dependent variable
  sd = 50,  # estimated standard deviation
  r = 0.5,  # correlation between repeated measures
  labelnames = c("Stimulus", "A", "B", "C", "Condition", "1", "2", "3")
)

# simulation for power analysis
Superpower::ANOVA_power(design_result, nsims = 1000)
Power and Effect sizes for ANOVA tests
                         power effect_size
anova_Stimulus            91.9     0.21484
anova_Condition          100.0     0.50625
anova_Stimulus:Condition   5.2     0.03282

Power and Effect sizes for pairwise comparisons (t-tests)
                                                power effect_size
p_Stimulus_A_Condition_1_Stimulus_A_Condition_2  54.3    0.404293
p_Stimulus_A_Condition_1_Stimulus_A_Condition_3  98.5    0.825044
p_Stimulus_A_Condition_1_Stimulus_B_Condition_1  19.7   -0.208131
p_Stimulus_A_Condition_1_Stimulus_B_Condition_2  19.0    0.209843
p_Stimulus_A_Condition_1_Stimulus_B_Condition_3  89.5    0.631395
p_Stimulus_A_Condition_1_Stimulus_C_Condition_1  53.7   -0.403817
p_Stimulus_A_Condition_1_Stimulus_C_Condition_2   4.3   -0.001351
p_Stimulus_A_Condition_1_Stimulus_C_Condition_3  55.3    0.411029
p_Stimulus_A_Condition_2_Stimulus_A_Condition_3  56.3    0.413685
p_Stimulus_A_Condition_2_Stimulus_B_Condition_1  88.3   -0.613982
p_Stimulus_A_Condition_2_Stimulus_B_Condition_2  15.2   -0.194671
p_Stimulus_A_Condition_2_Stimulus_B_Condition_3  18.5    0.218655
p_Stimulus_A_Condition_2_Stimulus_C_Condition_1  98.4   -0.805800
p_Stimulus_A_Condition_2_Stimulus_C_Condition_2  53.6   -0.407364
p_Stimulus_A_Condition_2_Stimulus_C_Condition_3   5.8    0.002806
p_Stimulus_A_Condition_3_Stimulus_B_Condition_1 100.0   -1.028144
p_Stimulus_A_Condition_3_Stimulus_B_Condition_2  88.2   -0.612244
p_Stimulus_A_Condition_3_Stimulus_B_Condition_3  16.4   -0.198233
p_Stimulus_A_Condition_3_Stimulus_C_Condition_1 100.0   -1.229456
p_Stimulus_A_Condition_3_Stimulus_C_Condition_2  99.2   -0.821577
p_Stimulus_A_Condition_3_Stimulus_C_Condition_3  55.0   -0.412438
p_Stimulus_B_Condition_1_Stimulus_B_Condition_2  56.5    0.417876
p_Stimulus_B_Condition_1_Stimulus_B_Condition_3  98.9    0.838376
p_Stimulus_B_Condition_1_Stimulus_C_Condition_1  18.2   -0.197593
p_Stimulus_B_Condition_1_Stimulus_C_Condition_2  18.6    0.209454
p_Stimulus_B_Condition_1_Stimulus_C_Condition_3  87.2    0.617345
p_Stimulus_B_Condition_2_Stimulus_B_Condition_3  56.0    0.416909
p_Stimulus_B_Condition_2_Stimulus_C_Condition_1  87.3   -0.618780
p_Stimulus_B_Condition_2_Stimulus_C_Condition_2  19.0   -0.210307
p_Stimulus_B_Condition_2_Stimulus_C_Condition_3  17.4    0.199834
p_Stimulus_B_Condition_3_Stimulus_C_Condition_1  99.9   -1.034277
p_Stimulus_B_Condition_3_Stimulus_C_Condition_2  90.9   -0.630142
p_Stimulus_B_Condition_3_Stimulus_C_Condition_3  21.7   -0.215556
p_Stimulus_C_Condition_1_Stimulus_C_Condition_2  55.7    0.407045
p_Stimulus_C_Condition_1_Stimulus_C_Condition_3  99.0    0.811379
p_Stimulus_C_Condition_2_Stimulus_C_Condition_3  57.1    0.412050


Within-Subject Factors Included: Check MANOVA Results