JTA Analysis Guide


1. Overview

The main objective of the JTA analysis is to determine the average criticality (i.e., importance to the job/role) of each task statement.

The JTA analysis tab automatically analyzes the JTA survey data. This involves three steps: 

  1. Perform any desired data cleaning to ensure that only attentive, representative respondents are included in the analysis; 
  2. Calculate average criticalities for each task; and 
  3. Adjust the default threshold, if desired, to classify tasks as critical or non-critical. Non-critical tasks are ignored when calculating Blueprint weights. 
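
For readers who want a concrete picture of the computation, here is a minimal sketch of the three steps in Python. The data layout, column names, and the simple per-task mean are illustrative assumptions, not the tool's internal implementation:

```python
import pandas as pd

def analyze_jta(ratings: pd.DataFrame, flags: pd.Series,
                max_flags: int = 2, threshold: float = 3.0) -> pd.DataFrame:
    """Sketch of the three JTA analysis steps (hypothetical data layout).

    ratings: one row per respondent, one column per task, values 1-5.
    flags:   number of response-quality flags per respondent.
    """
    # Step 1: data cleaning -- drop respondents with too many quality flags.
    clean = ratings[flags < max_flags]

    # Step 2: average criticality per task (a plain mean here; the tool
    # combines rating scales as a weighted sum, sketched in section 2).
    result = clean.mean().to_frame(name="mean_criticality")
    result["n"] = clean.notna().sum()

    # Step 3: classify each task against the criticality threshold.
    result["critical"] = result["mean_criticality"] >= threshold
    return result
```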

Tips:

  • The default settings usually work well, although you may wish to adjust the criterion for minimum survey time. If you do need custom settings, apply them before interpreting the average criticalities.
  • If you need to perform additional analyses, the survey data can be downloaded from the reporting portal for further analysis in Excel or in a statistical analysis package. 


2. Analysis of the criticality of tasks

The left side of the screen shows the tasks (in order, or grouped by domain), listing: 

  • The task statements (column “Tasks”) 
  • The domain(s) assigned to each task (column “Domains”) 
  • The sample size (“N”) and average rating (“Mean”) for each rating scale (by default, the rating scales are “FREQUENCY” and “IMPORTANCE”) 
  • The sample size (“N”) and average (“Mean”) of the criticality, calculated as the weighted sum of the ratings; criticality represents the overall importance of the task across all rating scales (see the sketch after this list) 
  • The classification of each task as critical or not critical (column “Critical”). Only critical tasks contribute to the Blueprint weights in the next step. 
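
As a concrete illustration of how a task's “N” and “Mean” criticality could be computed, here is a minimal sketch in Python, assuming the two default rating scales and illustrative weights that sum to 1 (so the weighted sum stays on the same 1-5 scale as the ratings); the weights here are assumptions, not the tool's actual configuration:

```python
import pandas as pd

# Hypothetical ratings for ONE task on the two default scales
# (one row per respondent).
task = pd.DataFrame({
    "FREQUENCY":  [4, 5, 3, 4],
    "IMPORTANCE": [5, 4, 4, 5],
})

# Illustrative scale weights that sum to 1 (assumed for this sketch).
weights = {"FREQUENCY": 0.5, "IMPORTANCE": 0.5}

# Criticality per respondent: weighted sum of that respondent's ratings.
criticality = sum(task[scale] * w for scale, w in weights.items())

print(criticality.mean())   # the "Mean" criticality shown for this task (4.25)
print(criticality.count())  # the "N" shown for this task (4)
```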

To view all the tasks, use the controls at the bottom right to change the number of tasks shown per page and to move between pages.  

Tips:

  • The criticality of each task ranges from 1 to 5, and average criticalities usually fall between 3 and 5. 
  • Calculating an accurate estimate of the criticality of each task is the main goal of the JTA analysis. 


3. Analysis criteria

The “Analysis criteria” panel on the right side of the screen can be used to adjust the data cleaning criteria and the default criticality threshold.  

Criticality Threshold is the cutoff score used to classify tasks as critical or not critical. By default, tasks with average criticalities greater than or equal to 3 are considered critical. 

Tip: Consider sorting the (ungrouped) tasks by mean criticality in descending order to see the tasks near the criticality threshold, and ask a subject matter expert to judge where the threshold should be set. 
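
A minimal sketch of the classification and of the sorting suggested in this tip (task names and values are hypothetical):

```python
import pandas as pd

# Hypothetical mean criticalities produced by the analysis.
tasks = pd.DataFrame({
    "task": ["Configure equipment", "File reports", "Audit logs", "Greet visitors"],
    "mean_criticality": [4.4, 3.1, 2.9, 1.8],
})

THRESHOLD = 3.0  # the default; adjustable in the Analysis criteria panel

# Classify, then sort descending so tasks near the threshold sit together.
tasks["critical"] = tasks["mean_criticality"] >= THRESHOLD
print(tasks.sort_values("mean_criticality", ascending=False))
```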



Response Quality Criteria Flagging is used to remove respondents from the analysis if the pattern of their responses indicates inattentiveness or low response quality. 

Each criterion produces a “flag” for every respondent who meets it. You can adjust the number of flags required to exclude a respondent from the analysis; by default, respondents with two or more flags are removed (a minimal sketch of this logic follows the tip below).  

Tip: Cases are never deleted from the dataset; you can adjust the criteria at any time, and the data cleaning and analysis are regenerated automatically. 
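
A minimal sketch of the flag-counting logic described above, with hypothetical respondents and criterion columns:

```python
import pandas as pd

# Hypothetical flags: one column per response-quality criterion,
# one row per respondent (True means the criterion was met).
flags = pd.DataFrame({
    "crtc_below_020":     [True,  True,  False, False],
    "crtc_at_or_below_0": [False, True,  False, False],
    "time_below_1_min":   [False, False, True,  False],
}, index=["r1", "r2", "r3", "r4"])

MAX_FLAGS = 2  # default: two or more flags exclude a respondent

flag_count = flags.sum(axis=1)      # flags per respondent: 1, 2, 1, 0
excluded = flag_count >= MAX_FLAGS  # only r2 reaches the default of 2
print(flag_count[excluded].index.tolist())  # ['r2']
```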


You can use as many criteria as desired to exclude respondents from the analysis. Each criterion is based on one of the following four quality indices, and multiple criteria can be based on the same index at the same time. 
 
Corrected Rater-Total Correlation (CRTC) is the correlation between the ratings of a JTA survey respondent and the average ratings of the rest of the sample (i.e., the entire sample after removing that respondent). Values close to zero (CRTC < 0.20) or zero/negative (CRTC <= 0) indicate that a respondent’s rating pattern differs from that of the other respondents. By default, both criteria (CRTC < 0.20 and CRTC <= 0) are used, so raters with small positive CRTC values receive one flag, while raters with zero or negative CRTC values receive two flags and are excluded. 
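
Here is a minimal sketch of a leave-one-out rater-total correlation, assuming a respondents-by-tasks ratings matrix; the tool's exact computation (including its handling of missing data) may differ:

```python
import pandas as pd

def crtc(ratings: pd.DataFrame) -> pd.Series:
    """Corrected rater-total correlation per respondent.

    ratings: rows = respondents, columns = tasks (hypothetical layout).
    """
    out = {}
    for rater in ratings.index:
        others_mean = ratings.drop(index=rater).mean()  # leave-one-out task means
        out[rater] = ratings.loc[rater].corr(others_mean)
    return pd.Series(out)

def crtc_flags(ratings: pd.DataFrame) -> pd.Series:
    """Apply the two default criteria: CRTC < 0.20 and CRTC <= 0."""
    c = crtc(ratings)
    # A small positive CRTC earns one flag; a zero/negative CRTC earns both.
    return (c < 0.20).astype(int) + (c <= 0).astype(int)
```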

Total Survey Minutes is the total administration time in minutes. An extremely short administration time usually indicates that the respondent rushed through the survey without being attentive. The default (TIME < 1 minute) should be adjusted to match the typical completion time of your survey. 

Within-Rater SD is the standard deviation of a single rater’s ratings. Values near zero indicate that a rater gave similar or identical ratings to all tasks, which may be a sign of inattention. By default, raters with SD < 0.20 are flagged. 

Percent of Responses Missing is the percentage of ratings the respondent left unanswered. Missing many ratings may indicate an atypical respondent who should not be included in the analysis. By default, respondents with Missing >= 30% are flagged. 
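
Minimal sketches of the remaining three indices, under the same hypothetical layout (a respondents-by-tasks ratings matrix plus a per-respondent completion time in minutes):

```python
import pandas as pd

def quality_flags(ratings: pd.DataFrame, minutes: pd.Series) -> pd.DataFrame:
    """One column of flags per default criterion (CRTC is sketched above)."""
    return pd.DataFrame({
        # Total Survey Minutes: default TIME < 1 minute.
        "too_fast": minutes < 1,
        # Within-Rater SD: default SD < 0.20 (near-identical ratings).
        "low_sd": ratings.std(axis=1) < 0.20,
        # Percent of Responses Missing: default Missing >= 30%.
        "too_much_missing": ratings.isna().mean(axis=1) >= 0.30,
    })
```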

Response quality criteria can be added, edited, and removed. 

Tip: Smaller survey samples may require less stringent flagging. Large samples are always better, but even small, representative samples of attentive survey respondents are useful: samples of size 6, 12, 36, 100, and 1000 reduce uncertainty about average task criticality by 59%, 71%, 83%, 90% and 97%.  


Item Response Criteria are used to remove respondents from the analysis based on their responses to the background questions. No default item response criteria are provided.  

Tip: These criteria are only available when the background questions have been enabled. 
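
A minimal sketch of this kind of filter, using a hypothetical background question:

```python
import pandas as pd

# Hypothetical background-question responses, one row per respondent.
background = pd.DataFrame({
    "years_experience": ["<1", "1-5", "5+", "<1"],
}, index=["r1", "r2", "r3", "r4"])

# Example criterion: exclude respondents with under one year of experience.
excluded = background["years_experience"] == "<1"
print(excluded[excluded].index.tolist())  # ['r1', 'r4']
```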



4. Next steps

When the analysis of task criticality is complete, the next step is to review the Blueprint Preview and publish the JTA. 

Tips:

  • In the next step, Blueprint weights are calculated using only the average criticalities of the tasks that are classified as critical. 
  • You should make a permanent record of the analysis settings and results before publishing. 

 

For additional guidance, explore our step-by-step how-to guide available [here].

Contact Us

If you have any questions or need additional assistance, please contact us either by emailing support@certiverse.com or by submitting a ticket from this article.