
Significance Testing

This article explains what significance testing is and how to apply and interpret it in SightX.

What is Significance Testing?

Significance testing is used when analyzing data to determine whether an observed difference in results between groups is statistically meaningful or simply due to chance.

Use significance testing in SightX when you need more confidence that score differences among audiences, products, or groups are real.

For example, let's say you've presented respondents with two potential ads for a new product you're releasing, and you asked them, on a one-to-five scale, how likely they would be to purchase the product after seeing each ad. Let's assume 64% of respondents say they would be "Likely" or "Extremely likely" to buy the product after seeing ad #1, while 72% say the same after seeing ad #2. You can use significance testing to determine whether the 8-percentage-point advantage for ad #2 is meaningful.
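To make the arithmetic concrete, here is a minimal sketch of this kind of comparison in Python with SciPy, outside of SightX itself. The 64% and 72% figures come from the example above; the sample size of 200 respondents per ad is an assumption added purely for illustration.

```python
# Minimal sketch of a two-group proportion comparison via a Chi Square test.
# Assumption: 200 respondents saw each ad (n is illustrative, not from SightX).
from scipy.stats import chi2_contingency

n = 200                      # assumed respondents per ad
likely_ad1 = int(0.64 * n)   # 128 said "Likely"/"Extremely likely" after ad #1
likely_ad2 = int(0.72 * n)   # 144 said the same after ad #2

# 2x2 contingency table: rows = ads, columns = likely vs. not likely
table = [
    [likely_ad1, n - likely_ad1],
    [likely_ad2, n - likely_ad2],
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"p-value: {p_value:.3f}")  # below 0.05 would mean significant at 95%
```

With this assumed sample size the gap may or may not clear the 95% bar; the same percentages on a larger sample are more likely to test as significant, which is why sample size matters as much as the size of the difference itself.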

Types of Significance Testing Used by SightX

While there are a variety of significance tests that can be applied, SightX significance testing relies on two widely used techniques: Chi Square (for testing differences in proportions/percentage breakdowns) and Analysis of Variance (aka ANOVA, for testing differences among mean scores).

Chi Square

Chi Square significance tests examine whether the observed frequencies in a dataset differ from the expected frequencies. The test is often used when a researcher is comparing the frequencies of subgroups within a categorical variable or across categorical variables.
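As a rough illustration of "observed versus expected" (not SightX's internal implementation), here is a minimal Chi Square goodness-of-fit sketch in Python with SciPy; the answer counts and the even expected split are hypothetical.

```python
# Minimal sketch of a Chi Square goodness-of-fit test.
# Hypothetical data: do 100 respondents split evenly across three options?
from scipy.stats import chisquare

observed = [48, 35, 17]      # hypothetical counts for three answer options
expected = [100 / 3] * 3     # expected counts under a perfectly even split

chi2, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p-value = {p_value:.4f}")  # small p => not even
```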

Analysis of Variance (ANOVA)

The ANOVA significance test is used when a researcher wants to test the differences among the mean scores of two or more groups.
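For intuition, here is a minimal one-way ANOVA sketch in Python with SciPy; the three groups of one-to-five ratings are hypothetical and stand in for any groups whose mean scores you want to compare.

```python
# Minimal sketch of a one-way ANOVA comparing mean ratings of three groups.
from scipy.stats import f_oneway

group_a = [4, 5, 3, 4, 5, 4]  # hypothetical 1-5 ratings from group A
group_b = [3, 2, 4, 3, 3, 2]  # group B
group_c = [5, 4, 5, 5, 4, 4]  # group C

f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p-value = {p_value:.4f}")  # small p => means differ
```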

How to Use Significance Testing in SightX

Significance Testing on Concept Test Data

In the Concept Test dashboard, open the analysis toolbox and select the Significance Testing icon. Click "Apply" in the toolbox to apply Significance Testing to the dashboard.

All of the question visualizations on the page will update with significance testing data. Each option with a significant result will show a star in the chart, and all of the statistically significant insights from the question will be listed to the right of each chart. Click an insight to highlight the corresponding data in the chart.

Most question types have multiple views of the data on the Concept Test dashboard: a default view showing the average score for each concept, plus Details by Option and Details by Concept views. Switching between views reveals the statistically significant statements associated with the selected view.

Significance Testing in Crosstabs

In the Crosstabs tab, generate a table with the groups you want to compare in either the columns or the rows. Next, open the toolbox on the right side of the page and select the "Significance testing" icon.

Here, you can review the confidence level and choose to display p-values before clicking "Calculate" in the upper right corner to calculate significance in the table.

Tip: The confidence level is defaulted to 95%, which is the industry-standard level. However, if needed, you can change it using the arrow to the left of the Sig Testing toggle. See the Confidence Level section below to learn more.

When significance testing is turned on, star indicators will appear in any cell that is significant at the specified confidence level.

Click on the star indicator in a cell to highlight the cells that it is significantly different from. 

If you export your crosstab to Excel, the significant differences will be labeled using a letter system.


Confidence Level

Significance is calculated at a confidence level, which expresses how sure you can be that the results are statistically significant. For example, a confidence level of 95% means you can be 95% certain that your results are statistically significant and are not due to chance.

P-values

P-values are the outputs of significance testing calculations. They are values between 0 and 1 that indicate whether results are significant or could be due to chance, with values closer to 0 indicating greater significance. A p-value is essentially the complement of the confidence level. A p-value of 0.01 means there is only a 1% chance that the differences between variables are the result of random error, so we can be 99% confident the results reflect actual differences between the variables. A p-value of 0.10 means there is a 10% chance that the differences between variables are the result of random error, so we can be only 90% confident that those differences exist.
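To show the relationship in code, here is a minimal sketch of the p-value/confidence-level mapping described above; the helper function is hypothetical, not part of SightX.

```python
# Minimal sketch of the p-value / confidence level relationship.
def is_significant(p_value: float, confidence_level: float = 0.95) -> bool:
    """A result is significant when p is below 1 minus the confidence level."""
    return p_value < (1 - confidence_level)

print(is_significant(0.01))  # True: 99% confident, clears the 95% bar
print(is_significant(0.10))  # False: only 90% confident, misses the 95% bar
```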