Purpose: The objective was to quantify the effect of data management practices on confidence interval (CI) sizes for resulting model parameters.
Methods: Experimental microbial reduction data sets (n = 100) with random errors were simulated (YOBS). The five low-count data management practices described above were applied to these data sets, and log-linear and Weibull models were fit to the results. The CIs of the model parameters were estimated for each case, and the CI sizes were compared by ANOVA with Tukey's test.
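The fitting-and-CI step of the workflow can be sketched as follows. This is a minimal illustration, not the study's actual code: the model forms and parameter names (decimal reduction time D; Weibull scale delta and shape p) are standard conventions in predictive microbiology assumed here, and the error level and sampling times are arbitrary.

```python
# Sketch: simulate log-reduction data with random error, fit log-linear and
# Weibull survival models, and derive 95% CIs from the fit covariance.
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def log_linear(t, logN0, D):
    # log10 N(t) = log10 N0 - t/D   (D = decimal reduction time)
    return logN0 - t / D

def weibull(t, logN0, delta, p):
    # log10 N(t) = log10 N0 - (t/delta)^p   (Mafart parameterization)
    return logN0 - (t / delta) ** p

t = np.linspace(0.5, 10.0, 11)                                 # sampling times (h)
y_obs = log_linear(t, 7.0, 2.5) + rng.normal(0, 0.2, t.size)   # simulated "YOBS"

fits = {}
for model, p0 in [(log_linear, (7.0, 2.0)), (weibull, (7.0, 2.0, 1.0))]:
    popt, pcov = curve_fit(model, t, y_obs, p0=p0)
    dof = t.size - len(popt)
    # 95% CI half-width per parameter: t-critical value * standard error
    ci_half = stats.t.ppf(0.975, dof) * np.sqrt(np.diag(pcov))
    fits[model.__name__] = (popt, ci_half)
    print(model.__name__, popt.round(2), ci_half.round(2))
```

In the study, this step would be repeated for each of the five data-management variants of the simulated data sets, and the resulting CI sizes compared across practices by ANOVA with Tukey's test.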
Results: The ranking of CI sizes among data management practices varied with data and model type. The Y+ approach, previously shown to be the most accurate (smallest RMSE, P < 0.05), nevertheless most often had the largest CIs, up to double (P < 0.05) those for YOBS. The CIs for most of the other approaches fell between those of YOBS and Y+; 22 of 30 cases yielded CIs larger (P < 0.05) than those for YOBS.
Significance: These results suggest that the application of low-count data management practices significantly affects both the accuracy and the uncertainty of the model parameters. That the CIs of Y+ were most often the largest indicates that predictions based on these results, even if more accurate, are also more uncertain. This could influence model selection and utility in risk assessment and food safety management.