Chapter 5 Data analysis

5.1 Visual inspection of the scatter plot

The starting point for NCA’s data analysis is a visual inspection of the \(XY\) scatter plot. This is a qualitative examination of the data pattern. By visual inspection the following questions can be answered:

  • Is the expected corner of the \(XY\) plot empty?
  • What is an appropriate ceiling line?
  • What are potential outliers?
  • What is the data pattern in the rest of the plot?

Visual inspection of the scatter plot is illustrated with nca.example, in particular for testing the hypothesis that Individualism is necessary for Innovation performance. Figure 5.1 shows the scatter plot of Individualism versus Innovation performance with all 28 cases as black dots.


Figure 5.1: Scatter plot of Individualism and Innovation performance for evaluating the empty space, the appropriateness of the ceiling line, potential outliers, and the density of the cases in relevant areas.

5.1.1 Is the expected corner empty?

According to the hypothesis a high level of \(X\) is necessary for a high level of \(Y\) such that the upper left corner is expected to be empty. Figure 5.1 shows that cases with a low value of \(X\) have a low value of \(Y\), and that cases with a high value of \(Y\) have a high value of \(X\). The upper left corner is indeed empty. NCA’s effect size quantifies the size of the empty area in the scatter plot. After visual inspection the size of the empty space can be obtained with the NCA software by using the summaries argument of the nca_output function.
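A minimal sketch of this step in R, assuming the NCA package and its built-in nca.example dataset (column 1 is Individualism, column 3 is Innovation performance):

```r
# Minimal sketch: quantify the empty space after visually inspecting the plot
library(NCA)
data(nca.example)

model <- nca_analysis(nca.example, 1, 3)  # X = Individualism, Y = Innovation performance
nca_output(model, summaries = TRUE)       # summaries report the effect size d
```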

5.1.2 What is an appropriate ceiling line?

The ceiling line represents the border between the area with cases and the area without cases. Figure 5.1 shows the two default ceiling lines. The CR-FDH ceiling is often selected when the border is theoretically assumed to be linear, or when the condition and outcome have many levels (e.g., are continuous). The CE-FDH ceiling is often selected when the condition or outcome has few levels (e.g., is discrete). When the decision about the ceiling line is not made a priori, the ceiling line can be selected by visual inspection of the scatter plot. When the border looks linear the CR-FDH line may be selected, and when it looks non-linear the CE-FDH line may be the best choice. After visual inspection, the appropriateness of the ceiling line may be verified with NCA’s ceiling accuracy and fit measures. Ceiling accuracy is the percentage of cases on or below the ceiling line. The ceiling accuracy of an appropriate ceiling line is close to 100%. NCA’s fit measure is the effect size of a selected ceiling as a percentage of the effect size of the CE-FDH line. The selected ceiling line may be inappropriate when fit deviates considerably from 100%. Ceiling accuracy and fit can be obtained with the NCA software by using the summaries argument of the nca_output function.
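A sketch of this verification step, under the same assumptions as above:

```r
# Minimal sketch: estimate both default ceiling lines and compare them
library(NCA)
data(nca.example)

model <- nca_analysis(nca.example, 1, 3, ceilings = c("ce_fdh", "cr_fdh"))
nca_output(model, summaries = TRUE)  # summaries report ceiling accuracy and fit
```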

5.1.3 What are potential outliers?

Outliers are cases that are relatively ‘far away’ from other cases. These cases can be identified by visual inspection of the scatter plot. In NCA potential outliers are the cases that construct the ceiling line (‘ceiling outliers’) and the cases that construct the scope (‘scope outliers’). NCA defines an outlier as a case that has a large influence on the effect size when removed (see Section 4.8). After visual inspection, potential outliers can be verified with the NCA software by using the nca_outlier function.
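A sketch of this verification step, again assuming the nca.example data:

```r
# Minimal sketch: rank cases by their influence on the effect size when removed
library(NCA)
data(nca.example)

nca_outlier(nca.example, 1, 3)  # lists potential ceiling and scope outliers
```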

5.1.4 What is the data pattern in the rest of the plot?

Although NCA focuses on the empty area in the scatter plot, the full space can also be informative for necessity. In particular the density of cases in other corners, the density of cases near the scope limits, and the density of cases near the ceiling line contain relevant information for necessity.

5.1.4.1 Density of cases in other corners

The presence of many cases in the lower left corner, thus cases with a low value of \(X\) that do not show a high value of \(Y\), is supportive for the necessary condition hypothesis. Similarly, the presence of many cases in the upper right corner, thus cases with a high value of \(X\) that show a high value of \(Y\), is also supportive for the necessary condition hypothesis. If only a few cases were present in these corners, the emptiness of the upper left corner could be a random result. This also applies when the majority of cases are in the lower right corner. After visual inspection, the randomness of the empty space (when the variables are unrelated) is evaluated with NCA’s statistical test that is part of the NCA software. NCA’s p value can be obtained by using the test.rep argument in the nca_analysis function and subsequently using the summaries argument in the nca_output function.
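A sketch of this test (the number of resamples is an arbitrary choice here):

```r
# Minimal sketch: approximate permutation test with 10,000 resamples
library(NCA)
data(nca.example)

model <- nca_analysis(nca.example, 1, 3, test.rep = 10000)
nca_output(model, summaries = TRUE)  # summaries report the p value
```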

Although NCA focuses on the emptiness of the corner that is expected to be empty according to the hypothesis, it is also possible to explore the emptiness of other corners. When theoretical support is available, the emptiness of another corner could be formulated as an additional necessary condition hypothesis. Such a hypothesis is formulated in terms of the absence or a low value of \(X\) or \(Y\) (see Section 3.6). After visual inspection, any corner in the scatter plot can be evaluated with the NCA software by using the corner argument in the nca_analysis function.
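For example, a sketch of analysing the upper right corner instead of the default upper left corner:

```r
# Minimal sketch: evaluate the upper right corner (corner = 2)
library(NCA)
data(nca.example)

model <- nca_analysis(nca.example, 1, 3, corner = 2)
nca_output(model, plots = TRUE)  # plot shows the ceiling of the chosen corner
```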

5.1.4.2 Density of cases near the scope limits

Cases near the scope limit of \(X = X_{max}\) have met the necessary condition of a high level of \(X\) being necessary for a high level of \(Y\). For levels of \(X > X_c\), where \(X_c\) is the intersection between the ceiling line and the line \(Y = Y_{max}\), \(X\) is not necessary for \(Y\) (‘condition inefficiency’). When most cases are in the condition inefficiency area \(X_c < X < X_{max}\), the necessary condition could be considered ‘trivial’: most cases have met the necessary condition, whereas only a few cases have not. Cases near the scope limit of \(Y = Y_{min}\) have a low level of the outcome. Up to a level \(Y = Y_c\), where \(Y_c\) is the intersection between the ceiling line and the line \(X = X_{min}\), \(Y\) is not constrained by \(X\) (‘outcome inefficiency’). When most cases are in the outcome inefficiency area \(Y_{min} < Y < Y_c\), the necessary condition could be considered ‘irrelevant’; the condition is not constraining the outcome in that area. The condition and outcome inefficiencies can be obtained with the NCA software by using the summaries argument of the nca_output function.

5.1.4.3 Density of cases near the ceiling line

Cases near the ceiling line will not be able to achieve a higher outcome unless the necessary condition increases. These cases can be considered “best cases” (assuming that the outcome is desirable, and the condition is an effort): for a given level of the condition, the maximum possible outcome for that level is achieved (the remaining variation being due to other, unknown variables). Thus, for a certain level of the condition, cases near the ceiling have achieved a relatively high level of the outcome compared to cases with a similar level of the condition. Furthermore, cases near the ceiling ‘support’ the estimation of the ceiling line. With many cases near the ceiling line, the support for the ceiling line is high.

5.2 The ‘empty space’: Necessary condition ‘in kind’

Necessary condition hypotheses are qualitative statements like ‘\(X\) is necessary for \(Y\)’. The statement indicates that it is impossible to have \(Y\) without \(X\). Consequently, if the statement holds, the space corresponding to having \(Y\) but not \(X\) in an \(XY\) plot or contingency table contains no observations, thus is empty. To calculate the size of the empty space, NCA draws a border line (ceiling line) between the empty space without observations and the full space with observations. The size of the empty space relative to the full space is the effect size and indicates the constraint that \(X\) poses on \(Y\). When the effect size is relevant (e.g., \(d > 0.1\)), and when the empty space is unlikely to be a random result of unrelated \(X\) and \(Y\) (e.g., \(p < 0.05\)), the researcher may conclude that there is empirical evidence for the hypothesis. Thus, after having evaluated theoretical support, the effect size, and the p value, the hypothesis ‘\(X\) is necessary for \(Y\)’ may be supported. This hypothesis is formulated in a qualitative way and describes the necessary condition in kind. The effect size and its p value can be produced with the NCA software using the nca_analysis function.

Even for small samples, NCA can estimate an effect size and a relevant p value. An effect size can be calculated for a sample size \(N > 1\) (at least two cases must be available) and a relevant p value can be calculated for a sample size \(N > 3\). The reasons for these thresholds are as follows. NCA’s effect size can be calculated as long as a ceiling line can be drawn in the \(XY\) plot. NCA estimates the ceiling line from empirical data, and for drawing a line at least two points are needed (\(N = 2\)). NCA assumes that the ceiling line is not purely horizontal and not purely vertical (although the CE-FDH step function has horizontal and vertical parts). When the ceiling line represents the situation that the presence or high value of \(X\) is necessary for the presence or high value of \(Y\), the empty space is located in the upper left corner of the scatter plot. For \(N = 2\), when the first case is in the lower left corner (\(x = 0, y = 0\)) and the second case is in the upper right corner (\(x = 1, y = 1\)), the effect size is 1 (CE-FDH). For all other situations with \(N = 2\), the effect size is 0 or not defined (purely horizontal or purely vertical line). For calculating the p value, NCA’s statistical test (see Section 7.2) compares the effect size of the observed sample (with \(N\) observed cases) with the effect sizes of created samples with \(N\) fictive cases in which \(X\) and \(Y\) are unrelated. Fictive cases that represent non-relatedness between \(X\) and \(Y\) are obtained by combining observed \(X\) values with observed \(Y\) values into new samples. The total number of created samples with a unique combination (permutation) of fictive cases is \(N!\) (\(N\) factorial). Only one of the permutations corresponds to the observed sample. For each sample the effect size is calculated. The p value is then defined as the probability (p) that the effect size of a sample is equal to or larger than the effect size of the observed sample. If this probability is small (e.g., \(p < .05\)), a ‘significant’ result is obtained. This means that the observed sample is unlikely to be the result of a random process of unrelated variables (the null hypothesis is rejected), suggesting support for an alternative hypothesis.

For example, for a sample size of \(N = 2\) the total number of unique samples or permutations (including the observed sample) is \(1 * 2 = 2\). When the observed sample consists of case 1 \((x_1,y_1)\) and case 2 \((x_2,y_2)\), the observed \(x\)-values \((x_1,x_2)\) and the observed \(y\)-values \((y_1,y_2)\) can be combined into two unique samples of size \(N = 2\): the observed sample with observed case 1 \((x_1,y_1)\) and observed case 2 \((x_2, y_2)\), and the alternative created sample with two fictive cases \((x_1,y_2)\) and \((x_2, y_1)\). For example, when in the observed sample case 1 is \((0,0)\) and case 2 is \((1,1)\), the observed effect size is 1. The alternative sample has the cases \((1,0)\) and \((0,1)\), with effect size 0. The probability that the effect size of a sample is equal to or larger than the effect size of the observed sample is 1 in 2, thus \(p = 0.5\). This means that the effect size of 1 is a ‘non-significant’ result because the probability that the observed effect size is caused by unrelated variables is not small enough (not below 0.05). Similarly, for sample size \(N = 3\) the total number of unique samples is \(1 * 2 * 3 = 6\) and the smallest possible p value is \(1/6 = 0.167\); for \(N = 4\) the total number of unique samples is \(1 * 2 * 3 * 4 = 24\) and the smallest possible p value is \(1/24 = 0.042\); and for \(N = 5\) the total number of unique samples is \(1 * 2 * 3 * 4 * 5 = 120\) and the smallest possible p value is \(1/120 = 0.008\), etc. Thus, only from \(N = 4\) onward is it possible to obtain a p value that is small enough for making conclusions about the ‘statistical significance’ of an effect size. When sample size increases, the total number of unique samples increases rapidly, and the smallest possible p value decreases rapidly. For \(N = 10\) the total number of samples (permutations) is 3,628,800 and the corresponding smallest possible p value is smaller than 0.0000003. For \(N = 20\) the total number of samples is a 19-digit number, with a corresponding smallest possible p value that has 18 zeros after the decimal point. When \(N\) is large, the computation of the effect sizes of all samples (permutations) requires unrealistic computation times. Therefore, NCA draws a random selection from all possible samples and estimates the p value with this selection: NCA’s statistical test is an ‘approximate permutation test’. In the NCA software this test is implemented with the test.rep argument in the nca_analysis function, which specifies the number of samples to be analyzed, for example 10000 (the larger, the more precision, but also the more computation time). When the test.rep value is larger than \(N!\) (which is relevant for small \(N\)), the software selects all \(N!\) samples for the p value calculation. The NCA software can thus also be used for estimating the effect size and p value for small \(N\).
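The arithmetic behind these thresholds can be reproduced in a few lines of R:

```r
# Minimal sketch: number of permutations and smallest attainable p value (1/N!)
N <- 2:10
data.frame(N            = N,
           permutations = factorial(N),
           smallest_p   = signif(1 / factorial(N), 2))
# From N = 4 onward the smallest attainable p value drops below 0.05
```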

5.3 The bottleneck table: Necessary conditions ‘in degree’

For additional insights, a necessity relationship can be formulated in a quantitative way as a necessary condition in degree: “level of \(X\) is necessary for level of \(Y\)”. The bottleneck table is a helpful tool for evaluating necessary conditions in degree. The bottleneck table is the tabular representation of the ceiling line. The first column is the outcome \(Y\) and the next columns are the necessary conditions. The values in the table are levels of \(X\) and \(Y\) corresponding to the ceiling line. By reading the bottleneck table row by row from left to right, it can be evaluated which threshold levels of the conditions \(X\) are necessary for a particular level of \(Y\). The bottleneck table includes only conditions that are supposed to be necessary conditions in kind. Conditions that are not supposed to be necessary (e.g., because the effect size is too small or the p value too large) are usually excluded from the table (see Section 5.7). The bottleneck table can be produced with the NCA software using the argument bottlenecks = TRUE in the nca_output function.
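A sketch of producing such a table, with the nca.example data as a stand-in:

```r
# Minimal sketch: default bottleneck table (levels as percentage of range)
library(NCA)
data(nca.example)

model <- nca_analysis(nca.example, 1, 3)
nca_output(model, bottlenecks = TRUE)
```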

Figure 5.2 shows the software output of a particular bottleneck table (Dul, Hauff, et al., 2021) with two personality traits of sales persons, Ambition (\(X_1\)) and Sociability (\(X_2\)), that were identified as necessary conditions for Sales performance (\(Y\)): there is theoretical support for them, the effect sizes are relatively large (\(d > 0.10\)), and the p values are relatively low (\(p < 0.05\)).


Figure 5.2: Bottleneck table with two necessary conditions for sales performance (Adapted from Dul, Hauff, et al., 2021). NN = not necessary.

The example shows that up to level 40 of Sales performance, Ambition and Sociability are not necessary (NN). For levels 50 and 60 of Sales performance only Ambition is necessary, and for higher levels of Sales performance both personality traits are necessary. For example, for a level of 70 of Sales performance, Ambition must be at least 29.4 and Sociability at least 23.2. If a case (a sales person) has a level of a condition that is lower than the threshold value, this person cannot achieve the corresponding level of Sales performance. The condition is a bottleneck. For the highest level of Sales performance, the required threshold levels of Ambition and Sociability are 66.8 and 98.4 respectively.

5.3.1 Levels expressed as percentage of range

Figure 5.2 is an example of a default bottleneck table produced by the NCA software. This table has 11 rows with levels of \(X\) and \(Y\) expressed as ‘percentage.range’. The range is the maximum level minus the minimum level. A level of 0% of the range corresponds to the minimum level, 100% to the maximum level, and 50% to the middle level between these extremes. In the first row, the \(Y\) level is 0% of the \(Y\) range, and in the eleventh row it is 100%. In the default bottleneck table the levels of \(Y\) and the \(X\)’s are expressed as percentages of their respective ranges.

5.3.2 Levels expressed as actual values

The levels of \(X\) and \(Y\) in the bottleneck table can also be expressed as actual values. Actual values are the values of \(X\) and \(Y\) as they are in the original dataset (and in the scatter plot). This way of expressing the levels can help to compare the bottleneck table results with the original data and with the scatter plot results. Expressing the levels of \(X\) with actual values can be done with the argument bottleneck.x = 'actual' in the nca_analysis function of the NCA software, and the actual values of \(Y\) can be shown in the bottleneck table with the argument bottleneck.y = 'actual'.
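A sketch, again with the nca.example data as a stand-in:

```r
# Minimal sketch: bottleneck table with X and Y levels as actual values
library(NCA)
data(nca.example)

model <- nca_analysis(nca.example, 1, 3,
                      bottleneck.x = "actual", bottleneck.y = "actual")
nca_output(model, bottlenecks = TRUE)
```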

One practical use of a bottleneck table with actual values is illustrated with the example of sales performance. Figure 5.3 combines the bottleneck table with the scatter plots.


Figure 5.3: Left: Bottleneck table with two necessary conditions for sales performance. Right: The red dot is a specific sales person with sales performance = 4, Ambition = 65 and Sociability = 59 (Adapted from Dul, Hauff, et al., 2021). NN = not necessary.

The actual values of Sales performance range from 0 to 5.5, and for both conditions from 0 to 100. The left side of Figure 5.3 shows that up to level 2 of Sales performance, none of the conditions is necessary. The scatter plots show that cases with a very low level of Ambition or Sociability can still achieve level 2 of Sales performance. However, for level 4 of Sales performance, Ambition must be at least 33 and Sociability at least 29. The scatter plots show that several cases have not reached these threshold levels of the conditions. This means that these cases do not reach level 4 of Sales performance. The scatter plots also show that cases exist with (much) higher levels of the conditions than these threshold levels that yet do not reach level 4 of Sales performance: Ambition and Sociability are necessary, but not sufficient, for Sales performance.

The right side of Figure 5.3 shows a particular case with Sales performance level 4, Ambition level 65 and Sociability level 59. For this person, Sociability is the bottleneck for moving from level 4 to level 5 of Sales performance. Level 5 of Sales performance requires level 55 of Ambition, which this person has already achieved. However, the person’s level of Sociability is 59, which is below the threshold level required for level 5 of Sales performance. For reaching this level of Sales performance, this person must increase the level of Sociability (e.g., by training). Improving Ambition without improving Sociability has no effect and is a waste of effort. For other persons, the individual situation might be different (e.g., Ambition but not Sociability is the bottleneck, both are bottlenecks, or neither is a bottleneck). This type of bottleneck analysis allows the researcher to better understand the bottlenecks of individual cases and how to act on individual cases.

5.3.3 Levels expressed as percentiles

The levels of \(X\) and \(Y\) can also be expressed as percentiles. This can be helpful for selecting ‘important’ necessary conditions when acting on a group of cases. A percentile is a score below which a certain percentage of scores fall. For example, the 90th percentile of \(X\) is a score of \(X\) so high that 90% of the observed \(X\)-scores fall below it. Similarly, the 5th percentile of \(X\) is a score of \(X\) so low that only 5% of the observed \(X\)-scores fall below it. The bottleneck table can be expressed with percentiles by using the argument bottleneck.x = 'percentile' and/or bottleneck.y = 'percentile' in the nca_analysis function of the NCA software. Figure 5.4 shows the example bottleneck table with the level of \(Y\) expressed as actual values, and the levels of the \(X\)s as percentiles.


Figure 5.4: Bottleneck table with levels of \(Y\) expressed as actual and levels of \(X\) as percentiles. Between brackets are the cumulative number of cases that have not reached the threshold levels. The total number of cases is 108. (Adapted from Dul, Hauff, et al., 2021).

A percentile level of \(X\) corresponds to the percentage of cases that are unable to reach the threshold level of \(X\), and thus the corresponding level of \(Y\), in the same row. If in a certain row (e.g., the row with \(Y\) = 3) the percentile of \(X\) is small (e.g., 4% for Ambition), only a few cases (4 of 108 = 4%) were not able to reach the required level of \(X\) for the corresponding level of \(Y\). If the percentile of \(X\) is large (e.g., the row with \(Y\) = 5.5), many cases were not able to reach the required level of \(X\) for the corresponding level of \(Y\): 68 of 108 = 63% for Ambition and 106 of 108 = 98% for Sociability. Therefore, the percentile for \(X\) is an indication of the ‘importance’ of the necessary condition: how many cases were unable to reach the required level of the necessary condition for a particular level of the outcome. When several necessary conditions exist, this information may be relevant for prioritizing a collective action on a group of cases by focusing on bottleneck conditions with high percentile values of \(X\).
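A sketch of producing a table in the format of Figure 5.4 (\(Y\) as actual values, the \(X\)s as percentiles), with the nca.example data as a stand-in:

```r
# Minimal sketch: Y as actual values, X as percentiles (cf. Figure 5.4)
library(NCA)
data(nca.example)

model <- nca_analysis(nca.example, 1, 3,
                      bottleneck.x = "percentile", bottleneck.y = "actual")
nca_output(model, bottlenecks = TRUE)
```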

Further details of using and interpreting the bottleneck table in this specific example can be seen in this video.

5.3.4 Levels expressed as percentage of maximum

The final way of expressing the levels of \(X\) and \(Y\) is less commonly used. It is possible to express levels of \(X\) and \(Y\) as percentage of their maximum levels. The bottleneck table can be expressed with percentage of maximum by using the argument bottleneck.x = 'percentage.max' or bottleneck.y = 'percentage.max' in the nca_analysis function of the NCA software.
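A sketch, analogous to the previous variants:

```r
# Minimal sketch: X and Y levels as percentage of their maximum
library(NCA)
data(nca.example)

model <- nca_analysis(nca.example, 1, 3,
                      bottleneck.x = "percentage.max",
                      bottleneck.y = "percentage.max")
nca_output(model, bottlenecks = TRUE)
```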

5.3.5 Interpretation of the bottleneck table with other corners

The above sections refer to analyzing the upper left corner in the \(XY\) plot, corresponding to the situation that the presence or a high value of \(X\) is necessary for the presence or a high value of \(Y\). Then, when \(X\) is expressed as ‘percentage range’, ‘actual value’ or ‘percentage maximum’ the bottleneck table shows the minimum required level of \(X\) for a given value of \(Y\). When \(X\) is expressed as ‘percentiles’, the bottleneck table shows the percentage of cases that are unable to reach the required level of \(X\) for a given value of \(Y\).

The interpretation of the bottleneck table is different when other corners than the upper left corner are analysed (see Section 2.3).

5.3.5.1 Interpretation of the bottleneck table with corner = 2

For corner = 2, the upper right corner is empty suggesting that the absence or low value of \(X\) is necessary for the presence or high value of \(Y\). This means that for a given value of \(Y\), the value of \(X\) must be equal to or lower than the threshold value according to the ceiling line. When \(X\) is expressed as ‘percentage range’, ‘actual value’ or ‘percentage maximum’ the bottleneck table shows the maximum required level of \(X\) for a given value of \(Y\). In other words, for each row in the bottleneck table (representing the desired level of \(Y\)), the corresponding level of \(X\) must be at most the level mentioned in the bottleneck table. When \(X\) is expressed as ‘percentiles’, the bottleneck table still shows the percentage and number of cases that are unable to reach the required level of \(X\) for a given value of \(Y\).

5.3.5.2 Interpretation of the bottleneck table with corner = 3

For corner = 3, the lower left corner is empty, suggesting that the presence or a high value of \(X\) is necessary for the absence or a low value of \(Y\). This means that for a given value of \(Y\), the value of \(X\) must be equal to or higher than the threshold value according to the ceiling line. When \(X\) is expressed as ‘percentage range’, ‘actual value’ or ‘percentage maximum’, the bottleneck table shows the minimum required level of \(X\) for a given value of \(Y\). In other words, for each row in the bottleneck table (representing the desired level of \(Y\)), the corresponding level of \(X\) must be at least the level mentioned in the bottleneck table. However, the first column in the bottleneck table (representing \(Y\)) is now reversed. The first row has a high \(Y\) value and the last row has a low \(Y\) value, as the desired outcome is low. When percentages are used for \(Y\) (percentile or percentage of maximum), 0% corresponds to the highest level of \(Y\) and 100% to the lowest level of \(Y\). When \(X\) is expressed as ‘percentiles’, the bottleneck table still shows the percentage and number of cases that are unable to reach the required level of \(X\) for a given value of \(Y\).

5.3.5.3 Interpretation of the bottleneck table with corner = 4

For corner = 4, the lower right corner is empty, suggesting that the absence or a low value of \(X\) is necessary for the absence or a low value of \(Y\). This means that for a given value of \(Y\), the value of \(X\) must be equal to or lower than the threshold value according to the ceiling line. When \(X\) is expressed as ‘percentage range’, ‘actual value’ or ‘percentage maximum’, the bottleneck table shows the maximum required level of \(X\) for a given value of \(Y\). In other words, for each row in the bottleneck table (representing the desired level of \(Y\)), the corresponding level of \(X\) must be at most the level mentioned in the bottleneck table. However, the first column in the bottleneck table (representing \(Y\)) is now reversed. The first row has a high \(Y\) value and the last row has a low \(Y\) value, as the desired outcome is low. When percentages are used for \(Y\) (percentile or percentage of maximum), 0% corresponds to the highest level of \(Y\) and 100% to the lowest level of \(Y\). When \(X\) is expressed as ‘percentiles’, the bottleneck table still shows the percentage and number of cases that are unable to reach the required level of \(X\) for a given value of \(Y\).

5.3.6 NN and NA in the bottleneck table

A NN (Not Necessary) in the bottleneck table means that \(X\) is not necessary for \(Y\) for the particular level of \(Y\). With any value of \(X\) it is possible to achieve the particular level of \(Y\).

An NA (Not Applicable) in the bottleneck table is a warning that it is not possible to compute a value for \(X\). There are two possible reasons for it, the first occurring more often than the second:

  1. The maximum possible value of the condition for the particular level of \(Y\) according to the ceiling line is lower than the actually observed maximum value. This can happen, for example, when the CR ceiling (which is a trend line) runs at \(X = X_{max}\) (the right vertical line of the scope in the scatter plot) under the line \(Y = Y_{max}\) (the upper horizontal line of the scope). If this happens the researcher can either explain why this NA appears in the bottleneck table, or change NA into the highest observed level of \(X\). The latter can be done with the argument cutoff = 1 in the nca_analysis function (see the sketch after this list).

  2. In a bottleneck table with multiple conditions, one case determines the \(Y_{max}\) value and that case has a missing value for the condition with the NA (but not for another condition). When all cases are complete (have no missing values) or when at least one case exists that has a complete observation (\(X\), \(Y_{max}\)) for the given condition, the NA will not appear. The action to be taken is either to explain why this NA appears, or to delete the incomplete case from the bottleneck table analysis (and accept that the \(Y_{max}\) in the bottleneck table does not correspond to the actually observed \(Y_{max}\)).
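A sketch of the cutoff remedy for the first situation (the CR-FDH ceiling and the nca.example data are used here purely for illustration):

```r
# Minimal sketch: replace NA by the highest observed level of X (cutoff = 1)
library(NCA)
data(nca.example)

model <- nca_analysis(nca.example, 1, 3, ceilings = "cr_fdh", cutoff = 1)
nca_output(model, bottlenecks = TRUE)
```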

5.4 Combining NCA with regression analysis

5.4.1 Introduction

Regression is the mother of all data analyses in the social sciences. It was invented more than 100 years ago when Francis Galton (Galton, 1886) quantified the pattern in the scores of parental height and child height (see Figure 5.5 showing the original graph).


Figure 5.5: Francis Galton’s (1886) graph with data on Parent height (‘mid-parents height’) and Child height (‘Adult children height’).

In Figure 5.6 Galton’s data are shown in two \(XY\) scatter plots.


Figure 5.6: Scatter plots of the relationship between Parent height (\(X\)) and Child height (\(Y\)) (after Galton, 1886). A. With a regression line. B. With a ceiling line.

Galton drew lines through the middle of the data for describing the average trend between Parental height and Child height: the regression line (Figure 5.6 A). For example, with a Parent height of 175 cm, the estimated average Child height is about 170 cm. Galton could also have drawn a line on top of the data for describing the necessity of Parent height for Child height: the ceiling line (Figure 5.6 B). For example, with a Parent height of 175 cm, the estimated maximum possible Child height is about 195 cm. But Galton did not draw a ceiling line, and the social sciences have adopted the average trend line as the basis for many data analysis approaches. Regression analysis has developed over the years and many variants exist. The main variant is Ordinary Least Squares (OLS) regression. It is used for example in Simple Linear Regression (SLR), Multiple Linear Regression (MLR), path analysis, variance-based Structural Equation Modeling (SEM) and Partial Least Squares Structural Equation Modeling (PLS-SEM). In this section I compare OLS regression with NCA.

5.4.2 Logic and theory

OLS regression uses additive, average effect logic. The regression line (Figure 5.6 A) predicts the average \(Y\) for a given \(X\). Because the cases are scattered, for a given \(X\), values of \(Y\) higher and lower than the average are also possible. With one \(X\) (Simple Linear Regression), \(Y\) is predicted by the regression equation:

\[\begin{equation} \tag{5.1} Y = β_0 + β_1 X + ɛ(X) \end{equation}\]

where \(β_0\) is the intercept of the regression line, \(β_1\) is the slope of the regression line, and \(ɛ(X)\) is the error term representing the scatter around the regression line for a given \(X\). The slope of the regression line (regression coefficient) is estimated by minimizing the squared vertical distances between the observed \(Y\)-values and the regression line (‘least squares’). The error term includes the effect of all other factors that can contribute to the outcome \(Y\).

For the parent-child data, the regression equation is \(Y = 57.5 + 0.64 X + ɛ(X)\). OLS regression assumes that on average \(ɛ(X) = 0\). Thus, when \(X\) (Parent height) is 175 cm, the estimated average Child height is about 170 cm. In contrast, NCA’s C-LP ceiling line is defined by \(Y_c = -129 + 1.85 X\). Thus, when \(X\) (Parent height) is 175 cm, the estimated maximum possible Child height is about 195 cm. Normally, in NCA the ceiling line is interpreted inversely (e.g., in the bottleneck table): \(X_c = (Y_c + 129)/1.85\), indicating, while assuming a non-decreasing ceiling line, that a minimum level of \(X = X_c\) is necessary (but not sufficient) for a desired level of \(Y = Y_c\). When parents wish to have a child of 200 cm, it is necessary (but not sufficient) that their Parent height is at least about 177 cm.
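A sketch of both estimates on Galton’s data (assuming the HistData package, whose Galton data are in inches, so the coefficients differ from the centimetre-based values quoted above):

```r
# Minimal sketch: average trend line versus ceiling line on Galton's data
library(HistData)  # provides the Galton dataset (columns: parent, child)
library(NCA)

data(Galton)

coef(lm(child ~ parent, data = Galton))  # OLS: the average trend

model <- nca_analysis(Galton, "parent", "child", ceilings = "c_lp")
nca_output(model, plots = TRUE, summaries = TRUE)  # C-LP: the ceiling
```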

To allow statistical testing with OLS, it is usually assumed that the error term for a given \(X\) is normally distributed (with average value 0): cases close to the regression line for the given \(X\) are more likely than cases far from the regression line. The normal distribution is unbounded, hence very high or very low values of \(Y\) are possible, though not likely. This implies that any high value of \(Y\) is possible. Even without the assumption of a normally distributed error term, a fundamental assumption of OLS is that the \(Y\) value is unbounded (Berry, 1993). Thus, very large child heights (e.g., 300 cm) are theoretically possible in OLS, but unlikely. This assumption contradicts NCA’s logic, in which \(X\) and \(Y\) are presumed bounded: \(X\) puts a limit on \(Y\) and thus there is a border, represented by the ceiling line. The limits can be empirically observed in the sample (e.g., the height of the tallest observed person in the sample is 205 cm), defining NCA’s empirical scope, or can be theoretically defined (e.g., the height of the tallest person ever observed, 272 cm), defining NCA’s theoretical scope.

Additivity is another part of regression logic. It is assumed that the terms of the regression equation are added. Next to \(X\), the error term is always added in the regression equation. Possibly also other \(X\)’s or combinations of \(X\)’s are added (multiple regression, see below). This means that the terms that make up the equation can compensate for each other. For example, when \(X\) is low, \(Y\) can still be achieved when other terms (the error term or other \(X\)’s) give a higher contribution to \(Y\). The additive logic implies that for achieving a certain level of \(Y\), no \(X\) is necessary. This additive logic contradicts NCA’s logic that \(X\) is necessary: \(Y\) cannot be achieved when the necessary factor does not have the right level, and this absence cannot be compensated by other factors.

Results of a regression analysis are usually interpreted in terms of probabilistic sufficiency. A common probabilistic sufficiency-type hypothesis is ‘\(X\) likely increases \(Y\)’ or ‘\(X\) has an average positive effect on \(Y\)’. Such a hypothesis can be tested with regression analysis. The hypothesis is considered to be supported if the regression coefficient is positive. Often, it is then suggested that \(X\) is sufficient to increase the likelihood of the outcome \(Y\). The results also suggest that a given \(X\) is not necessary for producing the outcome \(Y\) because other factors in the regression model (other \(X\)’s and the error term) can compensate for the absence or a low level of \(X\).

5.4.3 Data analysis

Most regression models include more than one \(X\). The black box of the error term is opened and other \(X\)’s are added to the regression equation (Multiple Linear Regression - MLR), for example:

\[\begin{equation} \tag{5.2} Y = β_0 + β_1 X_1 + β_2 X_2 + ɛ \end{equation}\]

where \(β_1\) and \(β_2\) are the regression coefficients. By adding more factors that contribute to \(Y\) into the equation, \(Y\) is predicted for given combinations of \(X\)’s, and a larger part of the scatter is explained. R\(^2\) is the amount of explained variance of a regression model and can have values between 0 and 1. By adding more factors, usually more variance is explained, resulting in higher values of R\(^2\).

Another reason to add more factors is to avoid ‘omitted variable bias’. This bias is the result of not including factors that correlate with \(X\) and \(Y\), which results in biased estimates of the regression coefficients. Hence, the common standard of regression is not simple OLS regression with one factor, but multiple regression with many factors, including control variables to reduce omitted variable bias. By adding more relevant factors, the prediction of \(Y\) becomes better and the risk of omitted variable bias is reduced. Adding factors to the equation is not just adding new single factors (\(X\)). Some factors may be combined, such as squaring a factor (\(X^2\)) to represent a non-linear effect of \(X\) on \(Y\), or taking the product of two factors (\(X_1 * X_2\)) to represent the interaction between these factors. Such combinations of factors are added as separate terms in the regression equation. Also other regression-based approaches such as SEM and PLS-SEM include many factors. In SEM models, factors are ‘latent variables’ of the measurement model of the SEM approach.

A study by Bouquet & Birkinshaw (2008) is an example of the prediction of an average outcome using MLR by adding many terms, including combined factors, to the regression equation. This highly cited article in the Academy of Management Journal studies multinational enterprises (MNEs) to predict how subsidiary companies gain attention from their headquarters (\(Y\)). The authors use a multiple regression model with 25 terms (\(X\)’s and combinations of \(X\)’s) and an error term \(ɛ\). With the regression model the average outcome (average attention) for a group of cases (or for the theoretical ‘average case’) for given values of the terms can be estimated. The error term represents all unknown factors that have a positive or negative effect on the outcome but are not included in the model, assuming that the average effect of the error term is zero. \(β_0\) is a constant and the other \(β_i\)’s are the regression coefficients of the terms, indicating how strongly the term is related to the outcome (when all other terms are constant). The regression model is:

\(Attention = \\ β_0 \\ + β_1\ Subsidiary\ size \\ + β_2\ Subsidiary\ age \\ + β_3\ (Subsidiary\ age)^2 \\ + β_4\ Subsidiary\ autonomy \\ + β_5\ Subsidiary\ performance \\ + β_6\ (Subsidiary\ performance)^2 \\ + β_7\ Subsidiary\ functional\ scope \\ + β_8\ Subsidiary\ market\ scope \\ + β_9\ Geographic\ area\ structure \\ + β_{10}\ Matrix\ structure \\ + β_{11}\ Geographic\ scope \\ + β_{12}\ Headquarter\ AsiaPacific\ parentage \\ + β_{13}\ Headquarter\ NorthAmerican\ parentage \\ + β_{14}\ Headquarter\ subsidiary\ cultural\ distance \\ + β_{15}\ Presence\ of\ MNEs\ in\ local\ market \\ + β_{16}\ Local\ market\ size \\ + β_{17}\ Subsidiary\ strength\ within\ MNE\ Network \\ + β_{18}\ Subsidiary\ initiative\ taking \\ + β_{19}\ Subsidiary\ profile\ building \\ + β_{20}\ Headquarter\ subsidiary\ geographic\ distance \\ + β_{21}\ (Initiative\ taking * Geographic\ distance) \\ + β_{22}\ (Profile\ building * Geographic\ distance) \\ + β_{23}\ Subsidiary\ downstream\ competence \\ + β_{24}\ (Initiative\ taking * Downstream\ competence) \\ + β_{25}\ (Profile\ building * Downstream\ competence) \\ + \epsilon\)

The 25 terms of single and combined factors in the regression equation explain 27% of the variance (R\(^2\) = 0.27). Thus, the error term (representing the factors not included) represents the other 73% (unexplained variance). Single terms predict only a small part of the outcome. For example, ‘subsidiary initiative taking’ (term 18) contributes only 2% to the explained variance.

The example shows that adding more factors makes the model more complex, less understandable, and therefore less useful in practice. The contrast with NCA is large. NCA can have a model with only one factor that perfectly explains the absence of a certain level of an outcome when the factor is not present at the right level for that outcome. Whereas regression models must include factors that correlate with other factors and with the outcome to avoid biased estimation of the regression coefficients, NCA’s effect size for a necessary factor is not influenced by the absence or presence of other factors in the model. This is illustrated with an example about the effect of a sales person’s personality on sales performance, using data of 108 cases (sales representatives from a large USA food manufacturer) obtained with the Hogan Personality Inventory (HPI), a personality assessment tool for predicting organizational performance (Hogan & Hogan, 2007). Details of the example are in Dul, Hauff, et al. (2021). The statistical descriptives of the data (mean, standard deviation, correlation) are shown in Figure 5.7 A. Ambition and Sociability are correlated with \(Y\) as well as with each other. Hence, if one of them is omitted from the model the regression results may be biased.


Figure 5.7: Example of the results of a regression analysis and NCA. Effect of four personality traits of sales persons on sales performance. A. Descriptive statistics. B. Results of regression analysis and NCA for two different models. Data from Hogan & Hogan (2007)

The omission of one variable is shown in Figure 5.7 B, middle column. The full model (Model 1) includes all four personality factors. The regression results show that Ambition and Sociability have positive average effects on Sales performance (regression coefficients 0.13 and 0.16, respectively), and Learning approach has a negative average effect on Sales performance (regression coefficient -0.11). Interpersonal sensitivity has virtually no average effect on Sales performance (regression coefficient 0.01). The p values for Ambition and Sociability are relatively small. Model 2 has only three factors because Sociability is omitted. The regression results show that the regression coefficients of all three remaining factors have changed. The regression coefficient for Ambition has increased to 0.18, and the regression coefficients of the other two factors show minor differences (because these factors are less correlated with the omitted variable). Hence, when a regression model is misspecified, because factors that correlate with both the included factors and the outcome are left out, the regression coefficients of the included factors may be biased (omitted variable bias). However, the results of the NCA analysis do not change when a variable is omitted (Figure 5.7 B, right column). This means that an NCA model does not suffer from omitted variable bias.
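The mechanism can be illustrated with a small simulation (all variable names hypothetical, not the HPI data):

```r
# Minimal sketch: omitting a correlated predictor shifts the regression
# coefficient, but leaves NCA's effect size unchanged
library(NCA)
set.seed(1)

n  <- 200
x1 <- runif(n)
x2 <- 0.6 * x1 + 0.4 * runif(n)  # x2 correlates with x1
y  <- pmin(x1, x2) * runif(n)    # both x1 and x2 constrain y from above
d  <- data.frame(x1, x2, y)

coef(lm(y ~ x1 + x2, data = d))["x1"]  # full model
coef(lm(y ~ x1, data = d))["x1"]       # x2 omitted: the coefficient changes

# NCA's effect size for x1 depends only on the x1-y scatter plot,
# so it is the same in both specifications
nca_output(nca_analysis(d, c("x1", "x2"), "y"), summaries = TRUE)
nca_output(nca_analysis(d, "x1", "y"), summaries = TRUE)
```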

5.4.4 How to combine NCA and regression

Regression and NCA are fundamentally different and complementary. A regression analysis can be added to an NCA study to evaluate the average effect of the identified necessary condition on the outcome. However, the researcher must then include all relevant factors, also those that are not expected to be necessary, to avoid omitted variable bias, and must obtain measurement scores for these factors. When NCA is added to a regression study, not much extra effort is required. Such a regression study can be a multiple regression analysis, the analysis of the structural model of variance-based structural equation modeling (SEM) or Partial Least Squares structural equation modeling (PLS-SEM), or another regression technique. If a theoretical argument is available for a factor (indicator or latent construct) being necessary for an outcome, any factor that is included in the regression model (independent variables, moderators, mediators) can also be treated as a potential necessary condition that can be tested with NCA. This could be done systematically:

  • For all factors in the regression model that are potential necessary conditions.

  • For those factors that provide a surprising result in the regression analysis (e.g., in terms of the direction of the regression coefficient; see the situation of disparate duality below) to better understand the result.

  • For those factors that show no or a limited effect in the regression analysis (small regression coefficient) to check whether such factors that are ‘unimportant’ on average may still be necessary for a certain outcome.

  • For those factors that have a large effect in the regression analysis (large regression coefficient) to check whether a factor that is ‘important’ on average may also be necessary.

When NCA is added to a regression analysis, more insight into the effect of \(X\) on \(Y\) can be obtained.

Figure 5.8 shows an example of a combined NCA and simple OLS regression analysis.


Figure 5.8: Example of interpretation of findings from regression analysis and NCA

\(X\) and \(Y\) are related with correlation coefficient 0.58. The data may be interpreted as an indication of a necessity relationship, an indication of an average effect relationship, or both. The average relationship is illustrated with a green OLS regression line with intercept 0.66 and slope 0.9 (p < 0.001). The ceiling line (C-LP) is illustrated with the dashed blue line with intercept 1.1 and slope 1.0 (effect size d = 0.24; p < 0.001). According to the ceiling line, for an outcome value \(Y = 1.3\), a minimum level of the condition of \(X = 0.15\) is necessary. According to the vertical line \(X = 0.15\), the average value of the outcome for this value of the condition is \(Y = 0.8\). In other words, for a given value of \(X\) (e.g., \(X = 0.15\)), NCA predicts the maximum possible value of \(Y\) (\(Y = 1.3\)), whereas OLS regression predicts the average value of \(Y\) (\(Y = 0.8\)). OLS regression does not predict a maximum value of \(Y\); regression analysis assumes that any value of \(Y\) is possible, even very high values up to infinity, though these are unlikely. The horizontal line \(Y = 1.3\) not only intersects the ceiling line, showing the required level of \(X = 0.15\) that all cases must have satisfied to be able to reach \(Y = 1.3\), but also the OLS regression line, showing that the average outcome \(Y = 1.3\) is obtained when \(X = 0.72\). The vertical line \(X = 0.72\) intersects the ceiling line at \(Y = 1.9\), showing that this outcome level is the maximum possible when the condition level is \(X = 0.72\).

In this example the regression line has the same direction as the ceiling line. There is a positive average effect and a ‘positive’ necessity effect (a high level of \(X\) is necessary for a high level of \(Y\)). I call this situation ‘commensurate duality’. Commensurate duality also applies when both lines have negative slopes. It is also possible, but less common, that the slope of the ceiling line is positive and the slope of the regression line is negative. In this situation there is a negative average effect and a ‘positive’ necessity effect: a high level of \(X\) is necessary for a high level of \(Y\), but increasing \(X\) on average reduces \(Y\). This can occur when a relatively high number of cases are in the lower right corner of the scatter plot (cases with high \(X\) that do not reach high \(Y\)). The situation where the two lines have opposite slopes is called ‘disparate duality’. A third possible situation is that there is no significant average effect and yet a significant necessity effect, or the other way around. I call this situation ‘impact duality’. The word ‘duality’ refers to a data interpretation battle: should the observed data pattern be described with regression analysis, NCA, or both? The answer depends on the researcher’s theory. When a conventional theory with additive/average effect logic is used, regression analysis ensures theory-method fit. When a necessity theory is used, NCA ensures theory-method fit. When a conventional theory and a necessity theory are both used (e.g., in an ‘embedded necessity theory’, see Bokrantz & Dul, 2023, and Chapter 3), both methods should be used to ensure theory-method fit. Hence, the underlying theory determines which method(s) should be selected to analyse the data.

5.4.5 What is the same in NCA and regression?

I showed that regression has several characteristics that are fundamentally different from those of NCA. Regression is about average trends, uses additive logic, assumes unbounded \(Y\) values, is prone to omitted variable bias, needs control variables, and is used for testing sufficiency-type hypotheses, whereas NCA uses necessity logic, assumes bounded \(X\) and \(Y\), is immune to omitted variable bias, does not need control variables, and is used for testing necessity hypotheses.

However, NCA and regression also share several characteristics. Both NCA and regression are variance-based approaches and use linear algebra (although NCA can also be applied with the set-theoretic approach with Boolean algebra; see Section 5.5 on NCA and QCA). Both methods need good (reliable and valid) data without measurement error, although NCA may be more sensitive to measurement error. For statistical generalization from sample to population, both methods need a probability sample that is representative of the population, and larger samples usually give more reliable estimations of the population parameters, although NCA can handle small sample sizes. Additionally, for generalization of the findings of a study, both methods need replications with different samples; a one-shot study is not conclusive. Neither method can make strong causal interpretations when observational data are used; then at least theoretical support is also needed. When null hypothesis testing is used in either method, such tests and the corresponding p values have strong limitations and are prone to misinterpretation; a low p value only indicates that the data are unlikely to be the result of randomness and is not proof of the specific alternative hypothesis of interest (average effect, or necessity effect).

When a researcher uses NCA or OLS, these common fundamental limitations should be acknowledged. When NCA and OLS are used in combination the fundamental differences between the methods should be acknowledged. It is important to stress that one method is not better than the other. NCA and OLS are different and address different research questions. To ensure theory-method fit, OLS is the preferred method when the researcher is interested in an average effect of \(X\) on \(Y\), and NCA is the preferred method when the researcher is interested in the necessity effect of \(X\) on \(Y\).

5.5 Combining NCA with QCA

5.5.1 Introduction to QCA

5.5.1.1 History of QCA

Qualitative Comparative Analysis (QCA) has its roots in political science and sociology, and was developed by Charles Ragin (Ragin, 1987, 2000, 2008). QCA has steadily evolved and been used over the years, and currently many types of QCA approaches exist. A common interpretation of QCA is described by Schneider & Wagemann (2012) and Mello (2021), which I follow in this book.

5.5.1.2 Logic and theory of QCA

Set theory is at the core of QCA. This means that relations between sets, rather than relations between variables, are studied. A case can be part of a set or not part of the set. For example, the Netherlands is a case (of all countries) that is ‘in the set’ of rich countries, and Ethiopia is a case that is ‘out of the set’ of rich countries. Set membership scores (rather than variable scores) are linked to a case. Regarding the set of rich countries, the Netherlands has a set membership score of 1 and Ethiopia of 0. In the original version of QCA the set membership scores could only be 0 or 1. This version of QCA is called crisp-set QCA (csQCA). Later also fuzzy-set QCA (fsQCA) was developed, in which the membership scores can also have values between 0 and 1. For example, Croatia could be allocated a set membership score of 0.7, indicating that it is ‘more in the set’ than ‘out of the set’ of rich countries.

In QCA relations between sets are studied. Suppose that one set is the set of rich countries (\(X\)), and another set is the set of countries with happy people (‘happy countries’, \(Y\)). QCA uses Boolean (binary) algebra and expresses the relationship between condition \(X\) and outcome \(Y\) in terms of the presence or absence of \(X\) being related to the presence or absence of \(Y\). More specifically, the relations are expressed in terms of sufficiency and necessity. For example, the presence of \(X\) (being a country that is part of the set of rich countries) could be theoretically stated as sufficient for the presence of \(Y\) (being a country that is part of the set of happy countries). All rich countries are happy countries; no rich country is not a happy country. The set of rich countries is a subset of the set of happy countries: set \(X\) is a subset of set \(Y\). Alternatively, in another theory it could be stated that the presence of \(X\) is necessary for the presence of \(Y\). All happy countries are rich countries; no happy country is not a rich country. The set of happy countries is a subset of the set of rich countries: set \(X\) is a superset of set \(Y\).

QCA’s main interest is sufficiency. QCA assumes that a configuration of single conditions produces the outcome. For example, the condition of being in the set of rich countries (\(X_1\)) AND the condition of being in the set of democratic countries (\(X_2\)) is sufficient for the outcome of being in the set of happy countries (\(Y\)). QCA’s Boolean logic statement for this sufficiency relationship is expressed as follows:

\[\begin{equation} \tag{5.3} X_1*X_2 → Y \end{equation}\]

where the symbol ‘\(*\)’ means the logical ‘AND’, and the symbol ‘\(→\)’ means ‘is sufficient for’.

Furthermore, QCA assumes that several alternative configurations may exist that can produce the outcome, known as ‘equifinality’. This is expressed in the following example:

\[\begin{equation} \tag{5.4} X_1*X_2 + X_2*X_3*X_4 → Y \end{equation}\]

where the symbol ‘\(+\)’ means the logical ‘OR’.

It is also possible that the absence of a condition is part of a configuration. This is shown in the following example:

\[\begin{equation} \tag{5.5} X_1*X_2 + X_2*{\sim}X_3*X_4 → Y \end{equation}\]

where the symbol ‘\(\sim\)’ means ‘absence of’.
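For fuzzy membership scores, such expressions are evaluated with the standard fuzzy-set operators: AND is the minimum, OR is the maximum, and ‘absence of’ is one minus the membership score. A sketch with fictitious scores:

```r
# Minimal sketch: fuzzy-set evaluation of X1*X2 + X2*~X3*X4
x1 <- c(0.9, 0.2, 0.7)
x2 <- c(0.8, 0.6, 0.4)
x3 <- c(0.1, 0.9, 0.3)
x4 <- c(0.7, 0.8, 0.9)

pmax(pmin(x1, x2), pmin(x2, 1 - x3, x4))  # membership in the solution term
```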

Single conditions in a configuration that is sufficient for the outcome are called INUS conditions (Mackie, 1965). An INUS condition is an ‘Insufficient but Non-redundant (i.e., Necessary) part of an Unnecessary but Sufficient condition’. In this expression, the words ‘part’ and ‘condition’ are somewhat confusing because ‘part’ refers to the single condition and ‘condition’ refers to the configuration that consists of single conditions. Insufficient refers to the fact that a part (single condition) is not itself sufficient for the outcome. Non-redundant refers to the necessity of the part (single condition) for the configuration being sufficient for the outcome. Unnecessary refers to the possibility that other configurations can also be sufficient for the outcome. Sufficient refers to the fact that the configuration is sufficient for the outcome. Although a single condition may be locally necessary for the configuration to be sufficient for the outcome, it is not globally necessary for the outcome because the single condition may be absent in another sufficient configuration. INUS conditions are thus usually not necessary conditions for the outcome (the latter are the conditions that NCA considers). Hence, in the above generic logical statements about relations between sets, see (5.3), (5.4), (5.5), \(X\) and \(Y\) can only be absent or present (Boolean algebra), even though the individual members of the sets can have fuzzy scores. Both csQCA and fsQCA use only binary levels (absent or present) of the condition and the outcome when formulating the solution. In fsQCA, absence means a set membership score < 0.5 and presence means a set membership score > 0.5.

5.5.1.3 Data and data analysis of QCA

The starting point of the QCA data analysis is to transform variable scores into set membership scores. The transformation process is called ‘calibration’. Calibration can be based on the distribution of the data, the measurement scale, or expert knowledge. The goal of calibration is to get scores of 0 or 1 (csQCA) or between 0 and 1 (fsQCA) that represent the extent to which the case belongs to the set (set membership score). Particularly in large \(N\) fsQCA studies and in the business and management field, a ‘mechanistic’ (data-driven) transformation is often used. For this transformation the non-linear logistic function is usually selected. This selection is somewhat arbitrary (it is built into popular QCA software) and moves the variable scores to the extremes (0 and 1) in comparison to a simple (linear) normalization of the data: low values move to 0 and high values move to 1. When no substantive reason exists for the logistic transformation, I have proposed (Dul, 2016b) to use a standard transformation (normalization). This transformation keeps the distribution of the data intact. The reason for my proposal is that moving the scores to the extremes implies that cases in the \(XY\) scatter plot with low to middle values of \(X\) move to the left and cases with middle to high values of \(Y\) move upwards. As a result, the upper left corner becomes more filled with cases. Consequently, potentially meaningful empty spaces in the original data (indicating necessity) may not be identifiable. With the standard transformation the cases stay where they are; an empty space in a corner of the \(XY\) plot with the original data stays empty. The standard transformation is an alternative to an arbitrary transformation: it just changes variable scores into set membership scores, without affecting the distribution of the data. A calibration evaluation tool to check the effect of calibration on the necessity effect size is available at https://r.erim.eur.nl/r-apps/qca/.
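
The contrast between the two transformations can be illustrated with a short R sketch. The call to calibrate() follows the QCA package (the exact signature may vary by package version); the variable scores and the three calibration thresholds (full non-membership, crossover, full membership) are hypothetical:

```r
# Contrast logistic ('mechanistic') calibration with linear normalization.
# Thresholds and data are hypothetical, for illustration only.
library(QCA)

x <- c(2, 3.5, 5, 6.5, 8)  # hypothetical variable scores

# Logistic calibration pushes scores towards the extremes 0 and 1
fuzzy <- calibrate(x, type = "fuzzy", thresholds = c(e = 3, c = 5, i = 7))

# Linear normalization keeps the distribution of the data intact
linear <- (x - min(x)) / (max(x) - min(x))

round(cbind(x, fuzzy, linear), 2)
```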

QCA performs two separate analyses with calibrated data: a necessity analysis for identifying (single) necessary conditions, and a sufficiency analysis (‘truth table’ analysis) for identifying sufficient configurations. In this book I focus on the necessity analysis of single necessary conditions, which precedes the sufficiency analysis. In csQCA the necessity analysis is similar to a dichotomous necessary condition analysis of NCA with the contingency table approach, when \(X\) and \(Y\) are dichotomous set membership scores that can only be present (in the set) or absent (not in the set). By visual inspection of the contingency table a necessary condition ‘in kind’ can be identified when the upper left cell is empty (Figure 5.9), using set membership scores 0 and 1.

Necessity analysis by dichotomous NCA and crisp-set QCA.

Figure 5.9: Necessity analysis by dichotomous NCA and crisp-set QCA.

For fuzzy set membership scores the necessity analyses of fsQCA and NCA differ. In fsQCA a diagonal is drawn in the \(XY\) scatter plot (Figure 5.10A, with data from Rohlfing & Schneider (2013); see also Vis & Dul (2018)). For necessity, there can be no cases above the diagonal. Necessity consistency is a measure of the extent to which cases are not above the diagonal, which can range from 0 to 1. When some cases are present in the ‘empty’ zone above the diagonal, fsQCA considers these cases as ‘deviant cases’. FsQCA accepts some deviant cases as long as the necessity consistency level, which is computed from the total vertical distances of the deviant cases to the diagonal, is not smaller than a certain threshold, usually 0.9. When the necessity consistency is large enough, fsQCA makes a qualitative (‘in kind’) statement about the necessity of \(X\) for \(Y\): ‘\(X\) is necessary for \(Y\)’, e.g., the presence of \(X\) (membership score > 0.5) is necessary for the presence of \(Y\) (membership score > 0.5).

Comparison of necessity analysis with fsQCA (A) and with NCA (B).

Figure 5.10: Comparison of necessity analysis with fsQCA (A) and with NCA (B).
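
A common formula for fsQCA’s necessity consistency is \(\sum \min(X, Y) / \sum Y\) (Ragin, 2008; Schneider & Wagemann, 2012), which equals 1 when no case lies above the diagonal. A minimal R sketch with hypothetical membership scores:

```r
# fsQCA necessity consistency: sum(min(X, Y)) / sum(Y).
# Equals 1 when no case lies above the diagonal; fsQCA typically requires
# at least 0.9 to accept necessity. Scores are hypothetical.
necessity_consistency <- function(x, y) sum(pmin(x, y)) / sum(y)

x <- c(0.9, 0.7, 0.4, 0.8, 0.3)  # membership in the condition
y <- c(0.8, 0.6, 0.5, 0.7, 0.2)  # membership in the outcome

necessity_consistency(x, y)      # compare against the 0.9 threshold
```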

5.5.2 The differences between NCA and QCA

The major differences between NCA and QCA are summarized in Figure 5.11 (see also Dul (2016a) and Vis & Dul (2018)) and discussed below.

Comparison of NCA and QCA.

Figure 5.11: Comparison of NCA and QCA.

5.5.2.1 Logic and theory

NCA and QCA are the same only in one very specific situation of ‘in kind’ necessity: a single \(X\) is necessary for \(Y\), and \(X\) and \(Y\) are dichotomous set membership scores (0 and 1). In that situation the analyses of NCA and QCA are exactly the same. However, NCA normally uses variable scores, but it can also use set membership scores when NCA is applied in combination with QCA (see below). In addition to the ‘in kind’ necessity that both methods share, NCA also formulates ‘in degree’ necessity. QCA also formulates ‘in kind’ necessity of ‘OR’ combinations of conditions, as well as ‘in kind’ sufficiency of configurations of conditions. The main interest of NCA is the necessity ‘in degree’ of single factors that enable the outcome, whereas the main interest of QCA is the sufficiency ‘in kind’ of (alternative) configurations of conditions.

5.5.2.2 Data and data analysis

Regarding research strategy, most NCA studies are observational studies (both small \(N\) and large \(N\)), although experiments are also possible. Most QCA studies are small \(N\) observational studies, although large \(N\) studies are increasingly employed with QCA, in particular in the business and management area. The experimental research strategy is rare (if not absent) in QCA. Regarding case selection/sampling, purposive sampling is the main case selection strategy in QCA; it is also possible in small \(N\) NCA studies. For large \(N\) studies, sampling strategies such as those used in regression analysis (preferably probability sampling) are used in both NCA and QCA.

Regarding measurement, NCA uses valid and reliable variable scores unless it is used in combination with QCA, in which case NCA uses calibrated set membership scores. QCA uses calibrated set membership scores and cannot use variable scores. In QCA, data with variable scores may be used as input for the ‘calibration’ process that transforms variable scores into set membership scores.

Regarding data analysis, in fsQCA a necessary condition is assumed to exist if the area above the diagonal reference line in an \(XY\) scatter plot is virtually empty (see Figure 5.10 A). In contrast, NCA uses the ceiling line as the reference line (see Figure 5.10 B) for evaluating the necessity of \(X\) for \(Y\) (with possibly some cases above the ceiling line; accuracy below 100%). In situations where fsQCA observes ‘deviant cases’, NCA includes these cases in the analysis by ‘moving’ the reference line from the diagonal position to the boundary between the zone with cases and the zone without cases. (Note that moving QCA’s diagonal parallel upward to address inaccuracy has been suggested by Ragin, 2000 (p. 225, Figure 8.4), but this was not followed up by other QCA researchers.) NCA considers cases around the ceiling line (and usually above the diagonal) as ‘best practice’ cases rather than ‘deviant’ cases. These cases are able to reach a high level of outcome (e.g., an output that is desired) for a relatively low level of condition (e.g., an input that requires effort).

For deciding about necessity, NCA evaluates the size of the ‘empty’ zone as a fraction of the total zone (empty plus full zone), a ratio called the necessity effect size. If the effect size is greater than zero (an empty zone is present), and if according to NCA’s statistical test this is unlikely to be a random result of unrelated \(X\) and \(Y\), NCA identifies a necessary condition ‘in kind’ that can be formulated as: ‘\(X\) is necessary for \(Y\)’, indicating that for at least a part of the range of \(X\) and the range of \(Y\) a certain level of \(X\) is necessary for a certain level of \(Y\). Additionally, NCA can quantitatively formulate a necessary condition ‘in degree’ by using the ceiling line: ‘level \(X_c\) of \(X\) is necessary for level \(Y_c\) of \(Y\)’. The ceiling line represents all combinations of \(X\) and \(Y\) where \(X\) is necessary for \(Y\). Although fsQCA’s diagonal reference line would also allow quantitative necessary condition statements, e.g., \(X > 0.3\) is necessary for \(Y = 0.3\), fsQCA does not make such statements. When the ceiling line coincides with the diagonal (corresponding to the situation that fsQCA considers) the statement ‘\(X\) is necessary for \(Y\)’ applies to all \(X\)-levels [0,1] and all \(Y\)-levels [0,1], and the results of the qualitative necessity analysis of fsQCA and NCA are the same. When the ceiling line is positioned above the diagonal, ‘\(X\) is necessary for \(Y\)’ only applies to a specific range of \(X\) and a specific range of \(Y\). Outside these ranges \(X\) is not necessary for \(Y\) (‘necessity inefficiency’). Then the results of the qualitative necessity analysis of fsQCA and NCA can be different. Normally, NCA identifies more necessary conditions than fsQCA, mostly because fsQCA uses the diagonal as its reference line. In the example of Figure 5.10, NCA identifies that \(X\) is necessary for \(Y\) because there is an empty zone above the ceiling line. However, fsQCA would conclude that \(X\) is not necessary for \(Y\), because the necessity consistency level is below 0.9. FsQCA’s necessity analysis can be considered a special case of NCA: an NCA analysis with discrete or continuous fuzzy set membership scores for \(X\) and \(Y\), a ceiling line that is the diagonal, an allowance of a specific number of cases in the empty zone given by the necessity consistency threshold, and the formulation of a qualitative ‘in kind’ necessity statement.
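
To see the two reference lines side by side in practice, the same (hypothetical) fuzzy membership scores used above can be analysed with the NCA package; the column names are illustrative:

```r
# NCA on fuzzy membership scores: the ceiling line, not the diagonal,
# is the reference line. Data are hypothetical.
library(NCA)

d <- data.frame(X = c(0.9, 0.7, 0.4, 0.8, 0.3),
                Y = c(0.8, 0.6, 0.5, 0.7, 0.2))

model <- nca_analysis(d, "X", "Y", ceilings = "ce_fdh", test.rep = 1000)
nca_output(model, summaries = TRUE, plots = TRUE)  # effect size d, p value, plot
```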

5.5.3 Recommendation for combining NCA and QCA

Although in most NCA applications variable scores are used for quantifying condition \(X\) and outcome \(Y\), NCA can also employ set membership scores for the conditions and the outcome, which allows NCA and QCA to be combined. The other way around is not possible: combining QCA with a regular NCA that uses variable scores is not possible, because QCA is a set-theoretic approach that does not use variable scores.

How can NCA with membership scores complement QCA? To answer this question, another question must first be raised: how does QCA integrate an identified necessary condition in kind with the results of the sufficiency analysis, i.e., the identified sufficient configurations? By definition the necessary condition must be part of all sufficient configurations, because otherwise a configuration cannot produce the outcome. However, within the QCA community five different views exist about how to integrate necessary conditions into the sufficiency solution. In the first view only sufficient configurations that include the necessary conditions are considered as a result. Hence, all selected configurations have the necessary condition. In the second view the truth table analysis to find the sufficient configurations is done without the necessary conditions, and afterwards the necessary conditions are added to the configurations. This also ensures that all configurations have the necessary conditions. In the third view configurations that do not include the necessary condition are excluded from the truth table before this table is further analysed to find the sufficient configurations. This ‘ESA’ approach (Schneider & Wagemann, 2012) also ensures that all configurations have the necessary conditions. In the fourth view sufficient configurations are analysed without worrying about necessary conditions. Afterwards, the necessary conditions are discussed separately. In the fifth view a separate necessity analysis is not done, or necessity is ignored. All views have been employed in QCA; hence no consensus exists yet.

An additional complexity of integrating necessity with sufficient configurations is that NCA produces necessary conditions in degree, whereas QCA’s necessary conditions and sufficient configurations are in kind: the conditions that are part of the sufficient configurations can only be absent or present. Given these complexities, I suggest a combination of the second and the fourth view:

  1. Perform the NCA analysis ‘in degree’ before QCA’s sufficiency analysis.

  2. Integrate a part of the results of NCA’s necessity analysis into QCA’s sufficient configurations, namely the conditions (‘in degree’) that are necessary for outcome > 0.5.

  3. Discuss the full results of NCA’s necessity analysis afterwards.

In particular, it can be discussed that the specific levels of necessary membership scores found by NCA must be present in each sufficient configuration found by QCA. If that membership in degree is not in place, the configuration will not produce the outcome. A minimal sketch of this workflow is given below.
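
The sketch uses the NCA package. The data frame and column names (qca_data, C1–C3, OUT) are hypothetical, and the bottleneck arguments follow the NCA package documentation as I understand it (check ?nca_analysis):

```r
# Step 1: NCA 'in degree' before QCA's sufficiency analysis.
# Data frame and column names are hypothetical.
library(NCA)

model <- nca_analysis(qca_data, c("C1", "C2", "C3"), "OUT",
                      ceilings = "ce_fdh", test.rep = 10000,
                      bottleneck.x = "actual", bottleneck.y = "actual")

# Step 2: read from the bottleneck table which level of each condition is
# necessary for OUT > 0.5, and add these levels to each sufficient
# configuration found by QCA's truth table analysis.
nca_output(model, summaries = TRUE, bottlenecks = TRUE)

# Step 3: discuss the full NCA necessity results (all outcome levels)
# alongside the QCA sufficiency solution.
```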

5.5.4 Examples

I discuss two examples of integrating NCA in QCA according to this recommendation. The first example is a study by Emmenegger (2011) about the effects of six conditions (S = state-society relationships, C = non-market coordination, L = strength of the labour movement, R = denomination, P = strength of religious parties, and V = veto points) on JSR = job security regulation in Western European countries. He performed an fsQCA analysis with a necessity analysis and a sufficiency analysis. His necessity analysis showed that no condition was necessary for job security regulation (the necessity consistency of each condition was < 0.9).

Example of a necessity analysis with NCA (data from Emmenegger (2011)).

Figure 5.12: Example of a necessity analysis with NCA (data from Emmenegger (2011)).

However, the NCA analysis in degree with the six conditions and using the CE-FDH ceiling line (Figure 5.12) shows the following effect sizes and corresponding p values (Figure 5.13).

NCA necessity analysis. d = effect size; p = p value.

Figure 5.13: NCA necessity analysis. d = effect size; p = p value.

A condition could be considered necessary when the effect size has a small p value (e.g., p < 0.05). Hence, the NCA analysis shows that a certain strength of the labour movement (L), a certain level of denomination (R), and a certain strength of religious parties (P) are necessary for high levels of job security regulation (JSR). From Figure 5.12 it can be observed that the following conditions are necessary for JSR > 0.5:

  • L > 0.29 is necessary for JSR > 0.5 (presence of JSR)

  • R > 0.20 is necessary for JSR > 0.5 (presence of JSR)

  • P > 0.20 is necessary for JSR > 0.5 (presence of JSR)

Although in QCA’s binary logic these small necessary membership scores of L, R, and P (all < 0.5) would be framed as ‘absence’ of the condition, in NCA these membership scores are considered small yet must be present for the outcome to occur. Thus, according to NCA these low levels of membership must be present, otherwise the sufficient configurations identified by QCA will not produce the outcome. Emmenegger (2011) identified four possible sufficient configurations for the outcome JSR:

  1. S*R*~V (presence of S AND presence of R AND absence of V)

  2. S*L*R*P (presence of S AND presence of L AND presence of R AND presence of P)

  3. S*C*R*P (presence of S AND presence of C AND presence of R AND presence of P)

  4. C*L*P*~V (presence of C AND presence of L AND presence of P AND absence of V)

This combination of solutions can be expressed by the following logical expression:

\[S*R*{\sim}V + S*L*R*P + S*C*R*P + C*L*P*{\sim}V → JSR\]

The presence of a condition and the outcome means that the membership score is > 0.5. The absence of a condition means that the membership score is < 0.5. A common way to summarize the results is shown in Figure 5.14.

QCA sufficiency solutions (sufficient configurations).

Figure 5.14: QCA sufficiency solutions (sufficient configurations).

The NCA necessity results can be combined with the QCA sufficiency results as shown in Figure 5.15.

Combining QCA sufficiency results with NCA necessity results.

Figure 5.15: Combining QCA sufficiency results with NCA necessity results.

Small full square symbols (▪) are added to sufficient configurations (according to QCA) to ensure that the minimum required necessity membership score (according to NCA) is fulfilled. This symbol is inspired by Greckhamer (2016), who added a large full square symbol (▪) to the solutions table to indicate the presence of a necessary condition (membership score > 0.5) according to QCA’s necessity analysis.

The NCA result that L > 0.29 is necessary for JSR > 0.5 is already satisfied in QCA’s sufficient configurations 2 and 4, but not in configurations 1 and 3. In these latter configurations the requirement L > 0.29 is added to the configuration (▪). Similarly, R > 0.20 is added to configuration 4, and P > 0.20 is added to configuration 1. Without adding these requirements, a configuration cannot produce the outcome. Only configuration 2 includes all three necessary conditions according to NCA, without a need for adding them. If the results of NCA were ignored, configurations 1, 3 and 4 would not produce the outcome (or only if the required minimum levels of the ignored necessary conditions happened to be present by chance). Additionally, the NCA results can show what levels of the conditions would be necessary for a higher level of the outcome than a membership score > 0.5. This can be observed in Figure 5.12. For example, for a membership score of JSR of 0.9, it is necessary to have membership scores of S = 1, L > 0.45, R = 1, P > 0.2, and ~V > 0.35. I therefore recommend presenting the NCA necessity results together with the QCA sufficient configuration results as in Figure 5.15, and additionally discussing the full results of NCA for a deeper understanding of the sufficient configurations.

The second example is from a study by Skarmeas et al. (2014) that I discuss in my book (Dul, 2020, pp. 77-83). This study is about the effect of four organizational motives (Egoistic-driven motives, absence of Value-driven motives, Strategy-driven motives, Stakeholder-driven motives) on customer scepticism about an organization’s Corporate Social Responsibility (CSR) initiative. The results of NCA’s necessity analysis with raw scores and with calibrated set membership scores are shown in Figure 5.16 and Figure 5.17, respectively.

NCA necessity analysis with raw scores.

Figure 5.16: NCA necessity analysis with raw scores.

NCA necessity analysis with calibrated scores.

Figure 5.17: NCA necessity analysis with calibrated scores.

NCA with raw variable scores shows that Absence of Value-driven motives and Strategy-driven motives could be considered necessary for Scepticism, given the medium effect sizes and the small p values (p < 0.05). Also, the NCA with calibrated set membership scores shows that these two conditions have low p values; however, their effect sizes are small (0.04 and 0.02, respectively). This means that these necessary conditions may be statistically significant but not practically significant: nearly all cases reached the required level of necessity. Also, Egoistic-driven motives are statistically, but not practically, significant. NCA with raw variable scores (the conventional NCA approach) can be used in combination with regression analysis, as regression analysis uses raw variable scores. NCA with calibrated set scores (set membership scores) can be used in combination with QCA, because QCA uses calibrated set membership scores.

Figure 5.18 shows the two sufficient configurations according to the QCA analysis of Skarmeas et al. (2014).

Sufficient configurations (QCA) and necessity requirements (NCA) of the study by Skarmeas et al., 2014.

Figure 5.18: Sufficient configurations (QCA) and necessity requirements (NCA) of the study by Skarmeas et al., 2014.

In each configuration the necessity of Egoistic-driven motives and the Absence of Value-driven motives is ensured. However, the necessity of Strategy-driven motives is not ensured in sufficient configuration 1. Therefore, the minimum required level of Strategy-driven motives according to NCA (0.01) is added to ensure that the configuration is able to produce the outcome. However, the practical meaning of this addition is very limited because the necessity effect size is small. It is added here to illustrate the recommendation.

5.5.5 Logical misinterpretations when combining NCA and QCA

When NCA is combined with QCA, sometimes a logical misinterpretation is made about the role of necessary conditions in sufficient configurations. Although a factor that is a necessary condition for the outcome must be part of each sufficient configuration, the opposite is not true: a factor that is part of a sufficient configuration is not automatically a necessary condition for the outcome. The latter misinterpretation has been introduced in the tourism and hospitality field (Dul, 2022) and can be found, for example, in studies by Farmaki et al. (2022), Pappas & Farmaki (2022), Pappas & Glyptou (2021b), Pappas & Glyptou (2021a), Pappas (2021), and Farmaki & Pappas (2022). The misinterpretation was also exported to other fields (e.g., Mostafiz et al., 2023).

This misinterpretation may be caused by confusing necessary conditions for the outcome (as analysed with NCA) with necessary parts of a sufficient configuration (INUS conditions). An INUS condition is an Insufficient but Necessary part of an Unnecessary but Sufficient configuration. INUS conditions are the necessary elements of a configuration that make that configuration sufficient. NCA captures necessary conditions for the outcome, not INUS conditions; see also Dul, Vis, et al. (2021).

5.6 Robustness checks

When conducting empirical research, a researcher must make a variety of theoretical and methodological choices to obtain results. Often, other plausible choices could also have been made. In a robustness check the sensitivity of the results to other plausible choices is evaluated. The check focuses on the sensitivity of the main estimates and the researcher’s main conclusion (e.g., about a hypothesis). For example, in regression analysis the regression coefficient is a main estimate that can be sensitive to the inclusion or exclusion of potential confounders in the regression model (Lu & White, 2014) or to the selection of the functional form of the regression equation (linear or non-linear). Another example of a researcher’s choice is the threshold p value (e.g., \(p = 0.05\) or \(p = 0.01\)) for (non)rejection of the hypothesis (Benjamin et al., 2018). In NCA, the effect size and the p value are two main estimates for concluding whether a necessity relationship is credible. These estimates can be sensitive to several choices. Some choices are outside the realm of NCA, for example choices related to the data that are used as input to NCA (research design, sampling, measurement, preprocessing of data) or the threshold level of statistical significance. Other choices are NCA-specific, such as the choice of the ceiling line (e.g., CE-FDH or CR-FDH), the choice of the scope (empirical or theoretical), the threshold level of the necessity effect size (e.g., 0.1), and the handling of NCA-relevant outliers. In this section, I discuss these four NCA-specific choices that are candidates for a robustness check.

5.6.1 Ceiling line choice

In NCA, model specification refers to the variables that are selected as potential necessary conditions. Functional form refers to the form of the ceiling line. In contrast to model specification in regression analysis, where confounders can change the result, NCA is a (multiple) bivariate analysis in which the inclusion or exclusion of other variables does not affect the results. However, similar to regression analysis, the functional form of the necessity model may affect the results. The two default ceiling lines are linear (the straight line CR-FDH) and piece-wise linear (the step function CE-FDH), and the estimates of effect size and p value depend on the choice of the ceiling line. The researcher usually selects the ceiling line based on the type of data (discrete or continuous) or on whether the expected or observed pattern of the border between the area with and without data is linear or non-linear. For example, when the data have a small number of discrete levels with a jumpy border, the researcher may decide to select the CE-FDH ceiling line. When the researcher assumes a linear ceiling line in the population, the straight ceiling line CR-FDH may be selected. The choice of the ceiling line is partly subjective because there are no hard criteria for the best ceiling form (see also Section 7.3.3). Therefore, a robustness check with another plausible ceiling line is warranted. For example, Figure 5.19 shows the scatter plot with the two default ceiling lines for testing the hypothesis that in Western countries a culture of individualism is necessary for innovation performance (Dul, 2020).

Example of a ceiling robustness check: Comparing the results of two ceiling lines.

Figure 5.19: Example of a ceiling robustness check: Comparing the results of two ceiling lines.

In this example the choice of the ceiling line is not obvious. As suggested in Dul (2020), a robustness check could consist of performing the analysis with both ceiling lines and comparing the results. The results for the CE-FDH ceiling line are \(d = 0.58\) and \(p = 0.002\), and for the CR-FDH ceiling line \(d = 0.51\) and \(p = 0.003\). Given the researcher’s other choices (e.g., effect size threshold = 0.1; p value threshold = 0.05), the ceiling line robustness check consists of evaluating whether the researcher’s conclusion about the necessity hypothesis remains the same (robust result) or differs (fragile result). In this case both ceiling lines produce the same conclusion (the necessity hypothesis is supported), which suggests a robust result.
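
Such a check can be scripted with the NCA package by requesting both ceilings in one call; the data frame and column names below (western, Individualism, Innovation) are assumptions for illustration:

```r
# Ceiling line robustness check: run the analysis with both default
# ceilings and compare effect sizes and p values. Names are hypothetical.
library(NCA)

model <- nca_analysis(western, "Individualism", "Innovation",
                      ceilings = c("ce_fdh", "cr_fdh"), test.rep = 10000)
nca_output(model, summaries = TRUE)
# Robust result: both ceilings lead to the same conclusion about the
# hypothesis, given the chosen d and p thresholds.
```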

5.6.2 Choice of the threshold level of the effect size

The researcher could also have made another choice for the effect size threshold level. If the threshold level is changed from the common value of 0.1 (“small effect”) to a larger value of 0.2 (“medium effect”), the conclusion in the above example about the hypothesis would not change. The robustness check of the effect size threshold thus suggests a robust result.

5.6.3 Choice of the scope

Whereas regression analysis assumes that the variables of the regression model are unbounded (they can have values between minus and plus infinity, see Section 5.4), NCA assumes that the variables in the necessity model are bounded (they have minimum and maximum values). By default, NCA selects the bounds based on the observed minimum and maximum values of the condition and the outcome, which define the ‘empirical scope’. However, it is also possible to conduct NCA with a theoretically defined scope (‘theoretical scope’). The robustness of the result can be checked by selecting another plausible scope (a theoretical scope) instead of the default empirical scope or, when the original choice was a theoretical scope, by selecting the empirical scope or a different theoretical scope. For example, the researcher may have selected the empirical scope for an analysis with variables measured on a Likert scale whose minimum and maximum values are not observed in the data. Then the researcher can conduct a robustness check with a theoretical scope defined by the extreme values of the scale. Changing the scope to a larger theoretical scope generally results in a larger effect size. Therefore, such a change may affect the researcher’s original conclusion that the hypothesis is rejected, but will not change an original conclusion that the hypothesis is supported. The latter is shown in Figure 5.20. The scatter plot has a theoretical scope that is larger than the empirical scope. The condition ranges from 20 to 100 and the outcome from 0 to 120. The effect size for the CE-FDH ceiling line has increased from \(d = 0.58\) (\(p = 0.002\)) to \(d = 0.61\) (\(p = 0.002\)). Changing a theoretical scope to a smaller scope generally results in a smaller effect size. Such a change may affect the researcher’s original conclusion that the hypothesis is supported, but will not change an original conclusion that the hypothesis is rejected.

Example of a scope robustness check: Comparing the results with the empirical and the theoretical scope (difference is shaded).

Figure 5.20: Example of a scope robustness check: Comparing the results with the empirical and the theoretical scope (difference is shaded).

When the researcher originally selected a theoretical scope for the analysis and changes it to a smaller theoretical scope, the effect size will be smaller, which may change the researcher’s conclusion about the hypothesis from ‘supported’ to ‘not supported’.
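
A scope robustness check can be scripted as follows. The scope argument of nca_analysis() is assumed here to take the bounds as c(x.low, x.high, y.low, y.high), using the values from the example above; the data frame and column names are also assumptions (check ?nca_analysis for the exact format):

```r
# Scope robustness check: compare the empirical scope (default) with a
# theoretical scope. Scope format and names are assumptions.
library(NCA)

empirical   <- nca_analysis(western, "Individualism", "Innovation",
                            ceilings = "ce_fdh", test.rep = 10000)
theoretical <- nca_analysis(western, "Individualism", "Innovation",
                            ceilings = "ce_fdh", test.rep = 10000,
                            scope = c(20, 100, 0, 120))  # x.low, x.high, y.low, y.high

nca_output(empirical, summaries = TRUE)
nca_output(theoretical, summaries = TRUE)
```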

5.6.4 Outlier choice

The fourth NCA-specific robustness check is the outlier robustness check. NCA results can be sensitive to outliers (see Section 4.8). The researcher may or may not decide to remove an outlier. Outliers change the effect size when removed. Ceiling outliers (outliers that define the ceiling line) and scope outliers (outliers that define the scope) usually increase the effect size when removed. For example, in Figure 5.21 Finland is a potential ceiling outlier that increases the CE-FDH effect size from 0.58 (\(p = 0.002\)) to 0.62 (\(p = 0.001\)), and Mexico is a potential scope outlier that increases the effect size slightly. However, when an outlier is both a ceiling outlier and a scope outlier, the effect size may decrease. This applies to the potential outlier USA, where the effect size decreases from 0.58 (\(p = 0.002\)) to 0.52 (\(p = 0.012\)) when this potential outlier is removed. This outlier robustness check suggests that the conclusion about the hypothesis is robust.

Example of an outlier robustness check: Comparing the results with and without removing an outlier. Mexico = potential scope outlier; Finland = potential ceiling outlier; USA = potential scope and ceiling outlier.

Figure 5.21: Example of an outlier robustness check: Comparing the results with and without removing an outlier. Mexico = potential scope outlier; Finland = potential ceiling outlier; USA = potential scope and ceiling outlier.
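
Potential outliers can be listed with the nca_outlier() function (see Section 5.1.3), which reports how the effect size changes when a candidate case is removed; the data frame and column names below are again assumptions:

```r
# Outlier robustness check: identify cases with a large influence on the
# effect size when removed. Data frame and column names are hypothetical.
library(NCA)

nca_outlier(western, "Individualism", "Innovation")
# Re-run nca_analysis() without a flagged case (e.g., the USA) and compare
# d and p with the original results to judge robustness.
```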

5.7 Causal inference

Applications of NCA often use cross-sectional research designs. To make a causal interpretation of findings from a cross-sectional study, several requirements need to be met. Cross-sectional studies observe cases at a single point in time without manipulating variables. This contrasts with experimental studies, where potential causes are manipulated and effects are observed, which allows a direct causal interpretation. Cross-sectional studies can infer causality only indirectly, if certain criteria are fulfilled. These criteria include: relevant association, temporal sequence, theoretical plausibility, and control of confounders. Because of the way necessity is defined, NCA findings are immune to confounding: the observed necessity relationship is not influenced by including or excluding other variables. Therefore, the three main requirements for inferring causality from an NCA cross-sectional study are (1) observing a relevant association, (2) justifying a temporal sequence, and (3) formulating a plausible theory.

In NCA the requirement for observing a relevant association between \(X\) and \(Y\) means observing that one of the corners of an \(XY\) plot is empty (rather than observing a correlation). When it is expected that the presence of \(X\) is necessary for the presence of \(Y\), the expected empty corner is the upper left corner.

The requirement for observing a relevant association can be satisfied by having proper values of NCA’s effect size (e.g., \(d \geq 0.10\)), representing practical relevance, and p value (e.g., \(p \leq 0.05\)), representing statistical relevance.
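
As a minimal illustration, the two estimates reported earlier in this chapter (CE-FDH: \(d = 0.58\), \(p = 0.002\)) can be checked against these thresholds:

```r
# Check the 'relevant association' requirement against common thresholds.
# The d and p values are the CE-FDH results reported in Section 5.6.
d <- 0.58
p <- 0.002
relevant <- (d >= 0.10) && (p <= 0.05)
relevant  # TRUE: the empty corner is practically and statistically relevant
```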

The requirement for justifying the temporal sequence implies that there must be evidence that \(X\) precedes \(Y\): first \(X\), then \(Y\). If \(X\) precedes \(Y\), the empty space is compatible with \(X\) being necessary for \(Y\). However, if \(Y\) precedes \(X\), the empty space is compatible with \(Y\) being sufficient for \(X\) (reverse causality). If both directions can be theoretically justified, the empty space may be compatible with reciprocal (bi-directional) causality: \(X\) is necessary for \(Y\), and \(Y\) is sufficient for \(X\). The requirement for justifying a temporal sequence can be satisfied as part of the formulation of a plausible theory (see below).

The requirement for formulating a plausible theory implies that there should be a rational explanation of how \(X\) is a necessary cause of \(Y\). This means that the expected causal direction and the causal mechanism are explained theoretically. This requirement is usually satisfied by formulating a hypothesis, for example “the presence of \(X\) is a necessary cause of the presence of \(Y\)”, while specifying and justifying the temporal direction and explaining the mechanism why this relationship exists. Explaining the necessity mechanism can be done by explaining the enabling mechanism why cases with the outcome must have the condition, or by explaining the constraining mechanism why cases without the condition cannot have the outcome and why the absence of this condition cannot be compensated.

If the above requirements are not convincingly met, an alternative research design may be selected to infer causality. In order of increasing confidence, this can be the time-lagged study design, in which \(X\) is measured first, followed by \(Y\); the longitudinal study design, in which \(X\) and \(Y\) are measured at multiple points in time; and the experimental study design, in which \(X\) is manipulated and the effect on \(Y\) is observed.

In practical applications of NCA each of the following requirements needs to be fulfilled to interpret an empty space as a necessary causal condition:

  1. Theoretical support that \(X\) is a necessary cause of \(Y\) (causal direction, mechanism).
  2. The corresponding empty space is relevant (e.g., \(d \geq 0.10\)).
  3. The p value of the empty space is small (e.g., \(p \leq 0.05\)).

If one of these requirements is not met, it cannot be concluded that there is empirical evidence for necessity causality.

References

Benjamin, D. J., Berger, J. O., Johannesson, M., Nosek, B. A., Wagenmakers, E.-J., Berk, R., Bollen, K. A., Brembs, B., Brown, L., Camerer, C., et al. (2018). Redefine statistical significance. Nature Human Behaviour, 2(1), 6–10. https://doi.org/10.1038/s41562-017-0189-z
Berry, W. D. (1993). Understanding regression assumptions (Vol. 92). Sage. https://us.sagepub.com/en-us/nam/book/understanding-regression-assumptions
Bokrantz, J., & Dul, J. (2023). Building and testing necessity theories in supply chain management. Journal of Supply Chain Management, 59, 48–65. https://doi.org/10.1111/jscm.12287
Bouquet, C., & Birkinshaw, J. (2008). Weight versus voice: How foreign subsidiaries gain attention from corporate headquarters. Academy of Management Journal, 51(3), 577–601. https://www.jstor.org/stable/20159527
Dul, J. (2016a). Identifying single necessary conditions with NCA and fsQCA. Journal of Business Research, 69(4), 1516–1523. https://doi.org/10.1016/j.jbusres.2015.10.134
Dul, J. (2016b). Necessary Condition Analysis (NCA) logic and methodology of “necessary but not sufficient” causality. Organizational Research Methods, 19(1), 10–52. https://doi.org/10.1177/1094428115584005
Dul, J. (2020). Conducting necessary condition analysis. Sage. https://uk.sagepub.com/en-gb/eur/conducting-necessary-condition-analysis-for-business-and-management-students/book262898
Dul, J. (2022). Problematic applications of Necessary Condition Analysis (NCA) in tourism and hospitality research. Tourism Management, 93, 104616. https://doi.org/10.1016/j.tourman.2022.104616
Dul, J., Hauff, S., & Tóth, Z. (2021). Necessary Condition Analysis in marketing research. In R. Nunkoo, V. Teeroovengadum, & C. Ringle (Eds.), Handbook of research methods for marketing management (pp. 51–72). Edward Elgar Publishing. https://www.e-elgar.com/shop/gbp/handbook-of-research-methods-for-marketing-management-9781788976947.html
Dul, J., Vis, B., & Goertz, G. (2021). Necessary Condition Analysis (NCA) does exactly what it should do when applied properly: A reply to a comment on NCA. Sociological Methods & Research, 50(2), 926–936. https://doi.org/10.1177/0049124118799383
Emmenegger, P. (2011). Job security regulations in western democracies: A fuzzy set analysis. European Journal of Political Research, 50(3), 336–364. https://doi.org/10.1111/j.1475-6765.2010.01933.x
Farmaki, A., & Pappas, N. (2022). Poverty and tourism decision-making: A chaordic perspective. International Journal of Contemporary Hospitality Management, 34(3), 1012–1036. https://doi.org/10.1108/IJCHM-07-2021-0859
Farmaki, A., Pappas, N., Kvasova, O., & Stergiou, D. P. (2022). Hotel CSR and job satisfaction: A chaordic perspective. Tourism Management, 91, 104526. https://doi.org/10.1016/j.tourman.2022.104526
Galton, F. (1886). Regression towards mediocrity in hereditary stature. The Journal of the Anthropological Institute of Great Britain and Ireland, 15, 246–263. https://www.jstor.org/stable/2841583
Greckhamer, T. (2016). CEO compensation in relation to worker compensation across countries: The configurational impact of country-level institutions. Strategic Management Journal, 37(4), 793–815. https://doi.org/10.1002/smj.2370
Hogan, R., & Hogan, J. (2007). The Hogan Personality Inventory manual (3rd ed.). Hogan Assessment Systems, Tulsa, OK.
Lu, X., & White, H. (2014). Robustness checks and robustness tests in applied economics. Journal of Econometrics, 178, 194–206. https://doi.org/10.1016/j.jeconom.2013.08.016
Mackie, J. L. (1965). Causes and conditions. American Philosophical Quarterly, 2(4), 245–264. https://www.jstor.org/stable/20009173
Mello, P. A. (2021). Qualitative comparative analysis: An introduction to research design and application. Georgetown University Press. http://press.georgetown.edu/book/georgetown/qualitative-comparative-analysis
Mostafiz, M. I., Sambasivan, M., Goh, S.-K., & Ahmad, P. (2023). Configuring foreign market knowledge and opportunity recognition capabilities to predict the performance of export-manufacturing firms. Knowledge Management Research & Practice, 21(2), 316–330. https://doi.org/10.1080/14778238.2021.1919573
Pappas, N. (2021). COVID19: Holiday intentions during a pandemic. Tourism Management, 84, 104287. https://doi.org/10.1016/j.tourman.2021.104287
Pappas, N., & Farmaki, A. (2022). Attributes attitudes and chaordic travel intentions during COVID-19. Current Issues in Tourism, 25(24), 4014–4030. https://doi.org/10.1080/13683500.2022.2056004
Pappas, N., & Glyptou, K. (2021a). Accommodation decision-making during the COVID-19 pandemic: Complexity insights from greece. International Journal of Hospitality Management, 93, 102767. https://doi.org/10.1016/j.ijhm.2020.102767
Pappas, N., & Glyptou, K. (2021b). Risk-induced competitive productivity in times of recession: A chaordic tourism decision-making perspective. International Journal of Contemporary Hospitality Management, 33(9), 2932–2949. https://doi.org/10.1108/IJCHM-09-2020-1046
Ragin, C. C. (1987). The comparative method: Moving beyond qualitative and quantitative strategies. University of California Press. https://www.jstor.org/stable/10.1525/j.ctt1pnx57
Ragin, C. C. (2000). Fuzzy-set social science. University of Chicago Press. https://press.uchicago.edu/ucp/books/book/chicago/F/bo3635786.html
Ragin, C. C. (2008). Redesigning social inquiry: Fuzzy sets and beyond. University of Chicago Press. https://press.uchicago.edu/ucp/books/book/chicago/R/bo5973952.html
Rohlfing, I., & Schneider, C. Q. (2013). Improving research on necessary conditions: Formalized case selection for process tracing after QCA. Political Research Quarterly, 66(1), 220–235. https://www.jstor.org/stable/23563606
Schneider, C. Q., & Wagemann, C. (2012). Set-theoretic methods for the social sciences: A guide to qualitative comparative analysis. Cambridge University Press. https://www.cambridge.org/nl/academic/subjects/social-science-research-methods/qualitative-methods/set-theoretic-methods-social-sciences-guide-qualitative-comparative-analysis?format=PB
Skarmeas, D., Leonidou, C. N., & Saridakis, C. (2014). Examining the role of CSR skepticism using fuzzy-set qualitative comparative analysis. Journal of Business Research, 67(9), 1796–1805. https://doi.org/10.1016/j.jbusres.2013.12.010
Vis, B., & Dul, J. (2018). Analyzing relationships of necessity not just in kind but also in degree: Complementing fsQCA with NCA. Sociological Methods & Research, 47(4), 872–899. https://doi.org/10.1177/0049124115626179