D: 16 May 2024

Association analysis

Linear and logistic/Firth regression with covariates

--glm ['zs'] ['omit-ref'] [{sex | no-x-sex}] ['log10'] ['pheno-ids']
      [{genotypic | hethom | dominant | recessive | hetonly}] ['interaction']
      ['hide-covar'] ['skip-invalid-pheno'] ['allow-no-covars']
      ['qt-residualize'] [{intercept | cc-residualize | firth-residualize}]
      ['single-prec-cc'] [{no-firth | firth-fallback | firth}]
      ['cols='<col set desc.>] ['local-covar='<file>] ['local-psam='<file>]
      ['local-pos-cols='<key col #s> | 'local-pvar='<file>] ['local-haps']
      ['local-omit-last' | 'local-cats='<cat. ct> | 'local-cats0='<cat. ct>]
  (aliases: --linear, --logistic)

--ci <size>
--condition <variant ID> [{dominant | recessive}] ['multiallelic']
--condition-list <variant ID file> [{dominant | recessive}] ['multiallelic']
--parameters <number(s)/range(s)...>
--tests ['all'] [number(s)/range(s)...]

--vif <max VIF>

--max-corr <val>

--glm is PLINK 2.0's primary association analysis command.

For quantitative phenotypes, --glm fits the linear model

   y = Gβ_G + Xβ_X + e

for every variant (one at a time), where y is the phenotype vector, G is the genotype/dosage matrix for the current variant, X is the fixed-covariate matrix, and e is the error term subject to least-squares minimization. (Dosages are always used when present; if you want to analyze hardcalled genotypes instead, run "--make-pgen erase-dosage" first.) X always contains an all-1 intercept column, along with anything loaded by --covar. Missing-dosage rows are excluded, not mean-imputed.
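
As a sketch of this model fit (simulated data; NumPy's least-squares solver standing in for --glm's internal one, with missing-dosage handling and significance testing omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                                        # samples
y = rng.normal(size=n)                         # quantitative phenotype
g = rng.integers(0, 3, size=n).astype(float)   # genotype dosages, 0..2
covars = rng.normal(size=(n, 2))               # e.g. top principal components

# X always contains an all-1 intercept column plus the fixed covariates.
X = np.column_stack([g, np.ones(n), covars])

# Least-squares fit of y = G*beta_G + X*beta_X + e; beta[0] is the
# per-variant additive effect.  Rows with missing dosage would be
# dropped here, not mean-imputed.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```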

For binary phenotypes, --glm fits a logistic or Firth regression model instead, with the same Gβ_G + Xβ_X terms.

Before we continue, three usage notes.

  • It is now standard practice to include top principal components (usually computed by --pca) as covariates in any association analysis, to correct for population stratification. See Price AL, Patterson NJ, Plenge RM, Weinblatt ME, Shadick NA, Reich D (2006) Principal components analysis corrects for stratification in genome-wide association studies for discussion.
  • This method does not properly adjust for small-scale family structure. As a consequence, it is usually necessary to prune close relations with e.g. --king-cutoff before using --glm for genome-wide association analysis. (Note that biobank data usually comes with a relationship-pruned sample ID list; you can use --keep on that list, instead of performing your own expensive --king-cutoff run.) If this throws out more samples than you'd like, consider using mixed model association software such as SAIGE, BOLT-LMM, GCTA, or FaST-LMM instead; or regenie's whole genome regression.
  • Finally, the statistics computed by --glm are not calibrated well[1] when the minor allele count is very small. "--mac 20" is a reasonable filter to apply before --glm; it's possible to make good use of --glm results for rarer variants (e.g. they could be input for a gene-based test), but some sophistication is required. Also, when working with unbalanced binary phenotypes, be aware that Firth regression can be similar to adding a pseudocount of 0.5 to the number of case and control minor allele observations, so weird things happen when the expected number of case minor allele observations is less than 0.5. You probably don't want to throw out every variant with MAC < 300 when your case:control ratio is 1:600 (you may still have excellent power to detect positive association between the minor allele and case status, after all), but you shouldn't take reported odds-ratios or p-values literally for those variants.
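
The pseudocount intuition in the last note can be made concrete with a 2x2 allele-count table. This is only a rough analogue (Firth regression is a penalized-likelihood method, not literally a pseudocount), and the counts below are invented:

```python
# Odds ratio from a 2x2 case/control x minor/major-allele count table,
# with 0.5 added to every cell: a rough analogue of Firth regression's
# small-count behavior in the covariate-free case.
def firth_like_or(case_minor, case_major, ctrl_minor, ctrl_major):
    a, b = case_minor + 0.5, case_major + 0.5
    c, d = ctrl_minor + 0.5, ctrl_major + 0.5
    return (a * d) / (b * c)

# Balanced counts: the pseudocount is harmless.
balanced = firth_like_or(5, 195, 5, 195)

# 1:600 case:control ratio, MAC=10, zero observed case minor alleles:
# the expected case minor-allele count is far below 0.5, so the
# pseudocount alone produces a large "odds ratio" that should not be
# taken literally.
unbalanced = firth_like_or(0, 200, 10, 119790)
```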

Now, the technical details:

  • For biallelic variants, G normally contains a single column with minor allele dosages. To make it always contain ALT allele dosages instead, add the 'omit-ref' modifier. (Why isn't omit-ref the default? We'll get to that.)
    This allele is listed in the A1 column of the main report. (Note that 'A1' just means "current allele" in PLINK 2.0 output files; it no longer is the overloaded global setting that it was in PLINK 1.x.)
    • Be aware that minor alleles are dataset-dependent. If rs71220063 has {freq(C)=0.493, freq(T)=0.507} in one dataset and {freq(C)=0.502, freq(T)=0.498} in another, C is the minor allele for rs71220063 in the first dataset and T is the minor allele in the second dataset, and --glm results will differ accordingly. When this is a problem, you can use --read-freq on an agreed set of allele frequencies to force all --glm runs to use the same set of minor alleles.
  • Similarly, for multiallelic variants, G normally contains one column for each nonmajor[2] allele. 'omit-ref' changes this to one column for each ALT allele.
    If some but not all of these allele columns are constant, the constant columns are omitted. (Before 20 Mar 2020, the entire variant was skipped in this case.)
    For each such variant, the main report normally contains one line for each nonmajor allele, followed by a line for each covariate. The allele-specific lines have just one allele in the A1 column, while the covariate lines list all nonmajor alleles in the A1 column.
  • chrX is special in two ways:
    • First, sex (as defined in the .fam/.psam input file) is normally included as an additional covariate. If you don't want this, add the 'no-x-sex' modifier. Or you can add the 'sex' modifier to include .fam/.psam sex as a covariate everywhere. Whatever you do, don't include sex from the .fam/.psam file and the --covar file at the same time; otherwise the duplicated column will cause the regression to fail.
      Note that PLINK 2.0 encodes the .fam/.psam sex covariate as male = 1, female = 2, to match the actual numbers in the input file. This is a minor change from PLINK 1.x.
    • See --xchr-model below.
    The Keinan Lab's XWAS software provides additional chrX analysis options.
  • Outside of chrX, dosage is on a 0..1 scale on haploid chromosomes (chrY, chrM).
  • For each phenotype, --glm writes a regression report to plink2.<pheno name>.glm.<regression type>[.zst].
    • Yes, 'each' phenotype. This is a change from PLINK 1.x; the old --all-pheno flag is now effectively always on.
      • If you have multiple quantitative phenotypes with either no missing values, or missing values for the same samples, analyze them all in a single --glm run! PLINK 2.0's linear regression 'only' tends to be a few hundred times as fast as PLINK 1.9 when you analyze one quantitative phenotype at a time. But --glm also has a quantitative-phenotype-group optimization that can multiply the speedup by another factor of ~10.
    • The regression-type file extension is always 'linear' for quantitative phenotypes.
    • For binary phenotypes, there are now three regression modes: 'no-firth' (pure logistic regression), 'firth-fallback' (the default: logistic regression, falling back to Firth regression when the logistic fit fails), and 'firth' (Firth regression everywhere).
      The corresponding file extensions are 'logistic', 'logistic.hybrid', and 'firth', respectively.
    • By default, for every variant, this file contains a line for each genotype column and a line for each non-intercept covariate column. If you're not actually using any information in the covariate lines, the 'hide-covar' modifier can greatly reduce file sizes. (See also --pfilter below.) Or, going in the other direction, the 'intercept' modifier lets you also see the intercept-column fit.
    • To trade off some accuracy for speed:
      • You can use the 'single-prec-cc' modifier to request use of single-precision instead of double-precision floating-point numbers during logistic and Firth regression.
      • You can use the 'firth-residualize' or 'cc-residualize' modifier, which implements the shortcut described in Mbatchou J et al. (2021) Computationally efficient whole genome regression for quantitative and binary traits to just Firth, or both Firth and logistic, regression respectively. Similarly, you can use 'qt-residualize' to regress out covariates upfront for quantitative traits. (These must be used with 'hide-covar', disable some other --glm features, and are not recommended if you have a significant number of missing genotypes or have any other reason to expect covariate betas to change in a relevant way between variants.)
    • Since running --glm without at least e.g. principal component covariates is usually an analytical mistake, the 'allow-no-covars' modifier is now required when you're intentionally running --glm without a covariate file. (This modifier did not exist, and the corresponding check was not performed, before 28 Mar 2020.)
    • The 'log10' modifier causes p-values to be reported in -log10(p) form. This works[3] even for p-values smaller than DBL_MIN.
    • --ci causes confidence intervals with the given width to be reported for each beta or odds-ratio.
    • Refer to the file format entry for a list of supported column sets.
  • A multicollinearity check is performed before each regression. When it fails, the regression is skipped and 'NA' results are reported.
    • This is a change from PLINK 1.9, which only performed the check for linear regressions.
    • The main part of this check is a variance inflation factor calculation. If that value is larger than 50, the check fails. You can change the upper bound with --vif.
    • Correlations between predictors are also checked; if any correlation is larger than 0.999, the check fails. You can change this upper bound with --max-corr.
    • The ERRCODE column reports which variants failed the multicollinearity check.
      This column distinguishes some other error types, too. The following error codes are currently reported:
      • '.': No error.
      • 'SAMPLE_CT<=PREDICTOR_CT': Too few samples.
      • 'CONST_OMITTED_ALLELE': The omitted allele had constant dosage, so the entire variant was skipped. (This usually means that all alleles are constant.)
      • 'CONST_ALLELE': This allele has constant dosage, and was skipped; at least one other allele was nonconstant. (Meaning was different before 20 Mar 2020.)
      • 'CORR_TOO_HIGH': The correlation between two predictors exceeded the --max-corr threshold. (Note that, in 'genotypic' mode, this happens for every biallelic variant with no homozygous-minor genotypes, since the additive and dominance-deviation columns are identical in that case. Those variants must be analyzed without 'genotypic'.)
      • 'VIF_INFINITE': The predictor correlation matrix couldn't be inverted at all.
      • 'VIF_TOO_HIGH': VIF exceeded the --vif threshold.
      • 'SEPARATION': [Quasi-]complete separation was detected, and --glm was operating in no-firth mode.
      • 'RANK_DEFICIENT': The final predictor matrix could not be inverted.
      • 'LOGISTIC_CONVERGE_FAIL': Logistic regression failed to converge, and --glm was operating in no-firth mode.
      • 'FIRTH_CONVERGE_FAIL': Firth regression failed to converge.
      • 'UNFINISHED': Logistic/Firth regression didn't fail in an obvious manner, but the result didn't satisfy the usual convergence criteria when the iteration limit was hit. (The result is still reported in this case, but it's less accurate than usual.)
      • 'INVALID_RESULT': While the underlying regression ran to completion, the result was extreme enough that inversion of the predictor matrix plausibly 'should' have failed.
    • The VIF check is known to be overly strict in several common scenarios; in particular, categorical covariates with a large number of categories will set it off. "When Can You Safely Ignore Multicollinearity?" has more discussion of this. Do not be afraid to greatly increase the --vif threshold after you have studied the problem and confirmed that moderate multicollinearity does not interfere with your analysis.
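
As an illustrative sketch of the VIF part of this check (assuming the standard definition, the diagonal of the inverted predictor correlation matrix; PLINK's internal computation may differ in detail):

```python
import numpy as np

def max_vif(preds):
    # Largest variance inflation factor among predictor columns:
    # VIF_j is the j-th diagonal entry of the inverse of the
    # predictors' correlation matrix.
    corr = np.corrcoef(preds, rowvar=False)
    return float(np.diag(np.linalg.inv(corr)).max())

rng = np.random.default_rng(0)
a = rng.normal(size=500)
b = rng.normal(size=500)
c = a + 0.1 * rng.normal(size=500)          # nearly collinear with a

low = max_vif(np.column_stack([a, b]))      # independent: VIF near 1
high = max_vif(np.column_stack([a, b, c]))  # fails the default bound of 50
```
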
  • Covariates are also checked before any variants are processed. This includes a covariate-only version of the multicollinearity check described above, along with a covariate-scale check (which identifies scenarios where --covar-variance-standardize can be expected to help a lot). By default, if this check fails, PLINK 2 errors out; to just skip the affected regressions instead, add the 'skip-invalid-pheno' modifier.
  • Finally, if PLINK 2 determines that any samples and covariates are irrelevant to all regressions (specifically, a covariate could be constant, or zero-valued for all but one sample), they are removed before any variants are processed. You can use the 'pheno-ids' modifier to make PLINK 2 report the remaining samples to (per-phenotype) .id files. (When the sample set changes on chrX or chrY, additional .id files are written for those chromosomes.)
  • If the phenotype is constant across the remaining samples at this point, PLINK 2 errors out (or, if 'skip-invalid-pheno' was specified, skips the phenotype).
  • Occasionally, it is useful to include selected variants in the immediate dataset as fixed covariates. This can be accomplished by running "--export A" on those variants and cut+pasting the data columns onto the end of the --covar input file. But there's also a shorthand: --condition adds a single variant as a fixed covariate, while --condition-list does the same for all variants named in a file. The 'dominant'/'recessive' modifiers let you change how these covariate columns are encoded (see below).
  • It is also possible to include "local covariates", which are not constant across all variants, in the regression. (These can be e.g. local ancestry coefficients, or polygenic effect predictions from a whole-genome fitting step.) To do so, add the 'local-covar=' and 'local-psam=' modifiers, use full filenames for each, and use either 'local-pvar=' or 'local-pos-cols=' to provide variant ID or position information.
    • Normally, the local-covar file should have cn real-valued columns, where the first c columns correspond to the first sample in the local-psam file, columns (c+1) to 2c correspond to the second sample, etc.; and the mth line of the local-covar file corresponds to the mth nonheader line of the local-pvar file. (Variants not mentioned in the local-pvar file are excluded from the regression.) The local covariates are assigned the names LOCAL1, LOCAL2, etc. To exclude the last local covariate from the regression (necessary if they are e.g. local ancestry coefficients which sum to 1), add the 'local-omit-last' modifier.
    • Alternatively, when 'local-cats='<k> is specified, the local-covar file is expected to have n columns with integer-valued entries in [1, k]. (This range is [0, k-1] with 'local-cats0='.) These category assignments are expanded into (k-1) local covariates, with the last category omitted.
    • When position information is in the local-covar file, this should be indicated by 'local-pos-cols='<number of header rows>,<chrom col #>,<pos start col #>,<first covariate col #>.
    • 'local-haps' indicates that there's one column or column-group per haplotype instead of per sample; they are averaged by --glm.
    • As a practical matter, if you only have a single set of local covariate values per chromosome, you're probably better off with per-chromosome --glm runs which don't use local-covar= at all; that enables some additional optimizations.
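
A minimal sketch of the 'local-cats=' expansion described above (the function name is illustrative, not a PLINK API):

```python
def expand_local_cats(assignments, k):
    # Expand per-sample category assignments in [1, k] into (k-1)
    # indicator covariates, omitting the last category (mirroring
    # 'local-cats='; 'local-cats0=' would use the range [0, k-1]).
    return [[1.0 if cat == j else 0.0 for j in range(1, k)]
            for cat in assignments]

# Each row has k-1 = 2 entries; category 3 (the last) is all zeros.
rows = expand_local_cats([1, 3, 2, 3], k=3)
```
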
  • The 'genotypic' modifier adds a dominance-deviation column (heterozygous-A1 = 1, any other genotype = 0, linear interpolation applied to dosages; "0..1..0" for short) to G, and adds genotype + dominance-deviation joint F-test[4] results to the main report; the test name is "GENO_2DF". The 'hethom' modifier does almost the same thing, except that the first genotype column is also replaced, by 'HOM' column(s) with 0..0..1 encoding.
    Note that these two modifiers only make sense when analyzing variants with fairly high MAF. Otherwise, you are very likely to get a 'NA' result, since there are too few homozygous-minor genotypes to reliably distinguish the additive and dominance-deviation effects from each other.
  • The 'dominant' modifier specifies a model assuming full dominance for the A1 allele, i.e. the first genotype column is changed to 0..1..1 encoding. Similarly, 'recessive' makes the first genotype column use 0..0..1 encoding.
  • The 'hetonly' modifier replaces the genotype column with a 0..1..0 dominance-deviation column.
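
The genotype-column encodings above can be sketched as piecewise-linear functions of an A1 dosage in [0, 2]; the interpolation formulas are my reading of the 0..1..0 / 0..1..1 / 0..0..1 shorthand with "linear interpolation applied to dosages":

```python
def domdev(x):     # 'genotypic'/'hetonly' dominance deviation: 0..1..0
    return min(x, 2.0 - x)

def dominant(x):   # 'dominant': full dominance for the A1 allele, 0..1..1
    return min(x, 1.0)

def recessive(x):  # 'recessive' (also the 'hethom' HOM column): 0..0..1
    return max(x - 1.0, 0.0)
```
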
  • The 'interaction' modifier adds genotype x covariate interaction terms to G. More precisely, the additional columns are entrywise (Hadamard) products between a genotype/dosage column and a (non-intercept) covariate column.
    • When G contains a major allele with >90% frequency, the interaction terms can be very highly correlated with the genotype column. This is likely to cause the multicollinearity check to fail, and it isn't a situation where overriding the multicollinearity-check defaults is wise—numerical stability problems are likely.
      So you probably don't want to use 'omit-ref' when performing interaction testing. (And this is why omit-ref is no longer --glm's default setting; it was, back in 2017, until the ~5th time this specific problem came up...)
    • For multiallelic variants, 'interaction' causes a separate model to be fitted for each A1 allele. In each model, interaction terms are only added for one of the alleles in G; the other genotype/dosage columns are treated much like fixed covariates (except that no interaction term is created between them and the A1 allele).
  • If you want to include some, but not all, interaction terms, use --parameters to specify your choices. This flag takes a list of 1-based indices (or ranges of them; syntax is similar to --chr) referring to the sequence of predictors which would normally be included in the model, and removes the unlisted predictors. The sequence is:
    1. Genotype/dosage additive effect column (or 'HOM' column)
    2. Dominance deviation, if present
    3. Local covariate(s), if present
    4. --condition[-list] covariate(s), if present
    5. --covar covariate(s), if present
    6. Genotype x non-sex covariate 'interaction' terms, if present
    7. Sex, if present
    8. Sex-genotype interaction(s), if present

For example, if tmp.cov contains two covariates, the command

plink2 --pfile mydata \
       --glm genotypic interaction \
       --covar tmp.cov \
       --parameters 1-4, 7

induces the following parameter sequence:

  1. Additive effect ('ADD')
  2. Dominance deviation ('DOMDEV')
  3. First covariate ('COVAR1' if not named in the file)
  4. Second covariate
  5. Dosage-first covariate interaction ('ADDxCOVAR1')
  6. Dominance deviation-first covariate interaction ('DOMDEVxCOVAR1')
  7. Dosage-second covariate interaction ('ADDxCOVAR2')
  8. Dominance deviation-second covariate interaction ('DOMDEVxCOVAR2')

so "--parameters 1-4, 7" causes the dosage-first covariate interaction term, and both dominance deviation interaction terms, to be excluded from the model.
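
The selection logic can be checked with a small helper (parse_ranges is hypothetical, handles only plain numbers and ranges, and is not a PLINK API):

```python
def parse_ranges(spec):
    # Parse a PLINK-style 1-based number/range list like "1-4, 7".
    idx = []
    for tok in spec.replace(',', ' ').split():
        if '-' in tok:
            lo, hi = tok.split('-')
            idx.extend(range(int(lo), int(hi) + 1))
        else:
            idx.append(int(tok))
    return idx

terms = ['ADD', 'DOMDEV', 'COVAR1', 'COVAR2',
         'ADDxCOVAR1', 'DOMDEVxCOVAR1', 'ADDxCOVAR2', 'DOMDEVxCOVAR2']
kept = [terms[i - 1] for i in parse_ranges('1-4, 7')]
# kept == ['ADD', 'DOMDEV', 'COVAR1', 'COVAR2', 'ADDxCOVAR2']
```

After this selection, --tests indices refer to positions in the surviving sequence; ADDxCOVAR2 is term 5 of kept.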

No, this interface won't be winning any ease-of-use awards. But it does let you get the job done; it's backward-compatible; it isn't actually restricted to interaction testing; and when all else fails you can usually look at the order in which predictors appear in --glm's main report (the same sequence is used, with one uncommon exception: when 'interaction' is specified and a sex covariate is present, the --glm report will include the sex covariate before the interaction terms).

--tests causes a (joint) test to be performed on the specified term(s). The test name in the report is of the form "USER_<#>DF". The syntax is similar to that of --parameters, except that

  • you can use the 'all' modifier to include all predictors in the test, and
  • if --tests is used with --parameters, the --tests indices refer to the term sequence after --parameters has acted on it. For example,

plink2 --pfile mydata \
       --glm interaction \
       --covar tmp.cov \
       --parameters 1-4, 7 \
       --tests 1, 5

adds an ADD=0, ADDxCOVAR2=0 joint test, since ADDxCOVAR2 is the fifth remaining term after --parameters has been processed.

One last tip. Since --glm linear regression is now much faster than logistic/Firth regression, it is reasonable to recode binary phenotypes as quantitative phenotypes (by e.g. adding 2 to all the values, and ensuring missing values are encoded as 'NA') for exploratory analysis of very large datasets. See "Details and considerations of the UK Biobank GWAS" from the Neale lab blog for detailed discussion of an example (executed with the Hail data analysis platform, but equivalent to the standard PLINK-based workflow). However, the results should only be treated as a rough approximation. There is no guarantee that a genome-wide significant association which would be revealed by logistic/Firth regression will be significant under the misspecified linear model (especially when you have fewer than ~25 minor allele observations in the case group).

[1]: PLINK 1.x --linear/--logistic's adaptive permutation mode mostly addresses the calibration problem, and this mode will be added to --glm before PLINK 2.0 is finished. However, the imprecision and high computational cost of this mode make it a last resort; realistically, its main function is to provide an unbiased ground-truth-approximation for researchers developing more computationally practical methods to compare against.
[2]: If freq(REF)=0.2, freq(ALT1)=0.4, and freq(ALT2)=0.4, ALT1 is treated as the major allele.
[3]: Actually, --glm also correctly reports p-values smaller than DBL_MIN when 'log10' is not specified, and commands like --adjust-file can read them without significant loss of precision. But other programs are much more likely to be able to make sense of tiny p-values when they are in -log10(p) form.
[4]: This is a minor change from PLINK 1.x, which used chi-square approximations instead of joint F-tests.

--pfilter <threshold>

--pfilter causes only associations with p-values no larger than the given threshold to be reported. (Variants with 'NA' p-values are excluded as well; if that's the only effect you want, use "--pfilter 1".) This can dramatically reduce the report's size, and you can extract the remaining variant IDs with a command sequence like the following:

plink2 --pfile main_data --glm hide-covar --pfilter 0.001 --out report1

tail -n +2 report1.PHENO1.glm.linear | sed s/^\ *//g | tr -s ' ' ' ' | cut -f 2 -d ' ' > candidates.txt

The second command

  1. removes the top line of the report,
  2. strips leading spaces from each line,
  3. collapses subsequent groups of spaces into single spaces,
  4. and finally, extracts the second column (variant ID) from each line. The resulting file can be used with e.g. --extract.
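
The pipeline can be exercised on a toy stand-in for the report (the contents below are fabricated, with the variant ID in the second column as the pipeline assumes; a real .glm.linear file has more columns):

```shell
# Fabricate a miniature --glm report (whitespace-padded columns, as in
# PLINK's human-readable output), then run the extraction pipeline.
cat > report1.PHENO1.glm.linear <<'EOF'
 #CHROM  ID         P
  1      rs111      0.0004
  1      rs222      0.0009
EOF

tail -n +2 report1.PHENO1.glm.linear | sed s/^\ *//g | tr -s ' ' ' ' \
  | cut -f 2 -d ' ' > candidates.txt

cat candidates.txt
```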

--xchr-model <mode number>

PLINK 2 dosages are on a 0..2 scale on regular diploid chromosomes, and 0..1 on regular haploid chromosomes. However, chrX doesn't fit neatly in either of those categories. --xchr-model lets you control its encoding in several contexts (--glm, --condition[-list], --score[-list], --variant-score).

The following three modes are supported:

  • 0. Skip chrX. (This no longer causes other haploid chromosomes to be skipped.)
  • 1. Male dosages are on a 0..1 scale on chrX, while females are 0..2. This was the PLINK 1.x default.
  • 2. Males and females are both on a 0..2 scale on chrX. This is the PLINK 2 default.

(PLINK 1.x's mode 3 is no longer supported since it duplicates --glm's interaction testing options, and does not apply to the other commands now covered by --xchr-model.)
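
Read literally, the mode definitions amount to the following male-dosage coding (the function is illustrative; females are on a 0..2 scale in modes 1 and 2):

```python
# chrX dosage coding for a male carrying a copies of the A1 allele
# (a in {0, 1}), under each --xchr-model mode.
def chrx_male_dosage(a, mode):
    if mode == 0:
        return None        # chrX is skipped entirely
    if mode == 1:
        return float(a)    # males on a 0..1 scale (PLINK 1.x default)
    if mode == 2:
        return 2.0 * a     # males on a 0..2 scale (PLINK 2 default)
    raise ValueError(mode)
```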

Reformat for GWAS Catalog

--gwas-ssf ['zs'] ['delete-orig-glm'] ['a1freq-lower-limit='<bound>]
           ['rsid='<mode>] ['file='<filename>] ['file-list='<filename>]
           [{real-ref-alleles | allow-ambiguous-indels}]

The --gwas-ssf command reformats PLINK 2 association test results as GWAS-SSF, for the GWAS Catalog. Output files have '.ssf.tsv' appended to the original filenames.

  • You will need to prepare companion metadata file(s) before submission.
  • If --glm was specified in the same command-line, --gwas-ssf will postprocess the current --glm run's results, and optionally delete --glm's own output afterward ('delete-orig-glm' modifier).
  • To postprocess some preexisting file(s), use file= and/or file-list=. Note that these files must contain the not-yet-default 'a1freq' column, along with most default columns. (This is why PLINK 1.x --linear/--logistic results are not supported.)
    • By default, unless the 'provref' column is present in the input file, true REF alleles are assumed to be unknown, since --glm is sometimes run on PLINK 1 filesets which don't track that information. Use the 'real-ref-alleles' modifier to specify that REF alleles are accurate.
  • For indels, REF alleles are normally required to be known, because (unlike the case for SNPs) swapping REF/ALT order for an indel actually changes the genetic variation that is referred to. If --gwas-ssf errors out for this reason, but you have an upstream VCF or similar file from which the correct REF/ALT labels can be recovered (see the --ref-allele documentation for details), you are strongly encouraged to recover those labels.
    If it's genuinely impossible to recover them, you can use the 'allow-ambiguous-indels' modifier to override the error. In this case, some of your indel results will probably be unusable.
  • Variants outside {chr1..chr22, chrX, chrY, chrM}, and variants with non-ACGT characters in the A1 or OMITTED allele code, are skipped. (The PAR1, PAR2, and XY chromosome codes are converted to chrX.)
  • If the 'omitted' column is absent from an input file, --gwas-ssf will skip (unsplit) multiallelic variants.
    Conversely, if unsplit multiallelic variants and the 'omitted' column are present, the multiallelic variants will be handled properly, and the results should be more reliable than what you'd get from splitting the variants. Specify "other minor alleles" (or "other ALT alleles" if you ran --glm with the 'omit-ref' modifier) as part of the "adjustedCovariates" metadata entry in this case.
  • By default, a 'rsid' column appears in the output iff at least one variant in the current dataset has an ID that is a syntactically valid rsID (mode='infer'). rsid= modes 'no' and 'yes' do what you'd expect.
  • You can use the 'a1freq-lower-limit=' modifier to mask very low allele frequencies, when the raw values would compromise privacy.
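
The rsid='infer' behavior can be sketched as follows; the regular expression is a plausible reading of "syntactically valid rsID" ('rs' followed by digits) and may not match --gwas-ssf's exact rule:

```python
import re

RSID_RE = re.compile(r'rs[0-9]+')

def infer_rsid_column(variant_ids):
    # Mimics rsid='infer': emit the column iff at least one ID in the
    # dataset looks like an rsID.
    return any(RSID_RE.fullmatch(vid) is not None for vid in variant_ids)
```
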

Basic multiple testing correction

--adjust-file <filename> ['zs'] ['gc'] ['cols='<col set descriptor>]
              ['log10'] ['input-log10'] ['test='<test name, case-sensitive>]
--adjust-chr-field <field name search order>
--adjust-pos-field <field name search order>
--adjust-id-field <field name search order>
--adjust-ref-field <field name search order>
--adjust-alt-field <field name search order>
--adjust-provref-field <field name search order>
--adjust-a1-field <field name search order>
--adjust-test-field <field name search order>
--adjust-p-field <field name search order>

--adjust ['zs'] ['gc'] ['log10'] ['cols='<col set descriptor>]

--lambda <value>

Given an unfiltered PLINK association analysis report, --adjust-file reports some basic multiple-testing corrections (Bonferroni, FDR...), sorted in increasing-p-value order, to plink2.adjusted[.zst].

  • By default, the genomic-control lambda (inflation factor) is estimated from the data (as <median 1df chi-square stat> / 0.456), and this estimate is reported in the log. --lambda lets you manually set it.
  • 'gc' causes genomic-controlled instead of unadjusted p-values to be used as input for the main multiple-testing correction algorithms.
  • 'log10' causes negative base 10 logs of p-values to be reported, instead of raw p-values. 'input-log10' specifies that the input file contains -log10(p) values.
  • If the input file contains multiple tests per variant which are distinguished by a 'TEST' column (true for --linear/--logistic/--glm), you must use 'test=' to select the test to process. (Most of the time, you want "test=ADD".)
  • The --adjust-...-field flags let you set the field-name search order for each field --adjust-file might scrape. (The default settings are designed to work with PLINK 1.x --linear/--logistic and PLINK 2.0 --glm output.) When multiple arguments are provided for one of these flags, --adjust-file will first search for the first argument in the header line, and then only search for the second argument if the first search fails, etc.
  • Refer to the file format entry for a list of supported column sets. (That's where the old 'qq-plot' functionality now lives.)
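
A standard-library sketch of the lambda estimate and 'gc' adjustment described above (PLINK's exact rounding/clamping behavior is not documented on this page), plus the closed-form 1df chi-square tail used to map statistics back to p-values:

```python
import math
import statistics

def gc_lambda(chisq_stats):
    # Genomic-control inflation factor:
    # <median 1df chi-square stat> / 0.456.
    return statistics.median(chisq_stats) / 0.456

def gc_adjust(chisq_stats):
    # Divide each 1df chi-square statistic by lambda (the 'gc' modifier
    # then feeds the corresponding p-values into the correction algorithms).
    lam = gc_lambda(chisq_stats)
    return [s / lam for s in chisq_stats]

def chisq1_p(stat):
    # Upper-tail p-value of a 1df chi-square statistic.
    return math.erfc(math.sqrt(stat / 2.0))
```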

In combination with --glm, --adjust performs the same multiple-testing corrections for every phenotype in the current analysis; output filenames are of the form plink2.<pheno name>.adjusted[.zst].
