Title: 共同方法偏誤之檢驗方法 (Approaches for Testing Common Method Bias)
Author: Chen, Chien-Fan (陳建帆)
Institution: National Yang Ming Chiao Tung University
Department: Institute of Business and Management
Advisor: Ding, C. G. (丁承)
Degree: Doctoral
Publication year: 2023
Keywords: bias; chi-square difference test; bootstrapping; adjusted bootstrap p value; common method variance (CMV); common method bias (CMB); confirmatory factor analysis (CFA); method effect size; Monte Carlo simulation; ULMC technique

Given journal reviewers' growing concern about the problems caused by common method variance (CMV), survey researchers can hardly avoid having to provide evidence that the hypothesized relationships in their studies are not distorted by CMV. However, CMV itself does not necessarily produce statistically meaningful bias in the estimated parameters. In other words, even when CMV is unavoidable because the data come from a single method, its effect may be too small to alter the study's conclusions.
Nevertheless, whether and how CMV actually produces bias has not been clearly addressed in the literature. This dissertation hypothesizes that when the method effects are small, significant common method bias (CMB) does not necessarily arise despite the presence of CMV. The size of the CMV effect is measured by the effect size f² = R²/(1 − R²) (Cohen, 1988, pp. 410–412) so that the degree of bias can be properly identified. The research comprises two parts. In the first part, we examine the influence of the method effect size on the estimates of factor loadings and correlations in confirmatory factor analysis (CFA), using different types of models together with data of various sample sizes and distributional characteristics. In the second part, we compare several approaches to testing for CMB in order to gain more statistically meaningful insights. The Monte Carlo simulation results show that, whether or not the data are normally distributed, the estimates exhibit no CMB when the average method effect size is below the "medium" level. For practical application, we examine several approaches to testing for CMB and find that the chi-square difference test (the Method-R and Method-S models of Williams et al.) performs poorly. In contrast, this study proposes an approach that integrates the bootstrap with an adjusted p value (Bollen & Stine, 1992) and the refined ULMC technique (Ding et al., 2023), and this approach performs well in testing for CMB. Finally, the proposed approach is illustrated with both simulated and real data. For empirical studies facing the threat of CMV, we recommend using the proposed approach to test for CMB.
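The effect size cited above is presumably Cohen's (1988, pp. 410–412) f². A minimal LaTeX rendering under that assumption, with R² read here as the proportion of indicator variance attributable to the method factor, together with Cohen's conventional benchmarks for small, medium, and large effects:

```latex
% Cohen's (1988) effect size; R^2 is read here as the proportion of
% variance accounted for by the method factor (assumed interpretation).
f^2 = \frac{R^2}{1 - R^2}, \qquad
f^2 \approx .02 \ (\text{small}), \quad
.15 \ (\text{medium}), \quad
.35 \ (\text{large}).
```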
In light of the growing concern among journal reviewers about the impact of common method variance (CMV), survey researchers can hardly avoid providing evidence that the hypothesized relationships in their studies are not distorted by CMV. However, CMV does not necessarily produce meaningful biases or effects in parameter estimates. Stated differently, even if CMV unavoidably exists in the data because a single method is used, the method effects may be too trivial to change the conclusions.
Whether and how CMV leads to bias is an important issue, yet it has not been well addressed in the literature. We hypothesize that when the method effects are small, significant common method bias (CMB) in parameter estimates is not necessarily found despite the presence of CMV. The amount of CMV can be measured by the effect size f² = R²/(1 − R²) (Cohen, 1988, pp. 410–412) to properly identify the biasing level. In Study 1, we examine the influence of the method effect size on the biases in estimates of trait loadings and trait correlations based on confirmatory factor analysis (CFA); the conclusions are confirmed using different types of models with various sample sizes and distributional characteristics. In Study 2, we compare several methods for detecting CMB to gain further statistical insight into meaningful biases. The Monte Carlo simulation results indicate that, regardless of whether the data are normal or nonnormal, no CMB exists when the average method effect size is smaller than the "medium" level. For practical application, several methods for detecting CMB are examined, and the chi-square difference (restricted vs. unrestricted) test commonly used in the literature appears to perform poorly. This study proposes an approach that integrates the modified bootstrap method (Bollen & Stine, 1992) with the refined unmeasured latent method construct (ULMC) technique (Ding et al., 2023) and shows that the approach performs well. The approach is illustrated with a simulated dataset and a real dataset. We suggest that the proposed approach be used to detect CMB in empirical studies facing the threat of CMV.
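To make the bootstrap component of the proposed approach concrete, the following is a minimal Python sketch of the Bollen–Stine (1992) transformation and the resulting bootstrap p value. The routine fit_chisq(data), which fits the ULMC-style CFA model and returns its chi-square statistic, is a hypothetical placeholder, and the add-one adjustment shown is one common convention rather than necessarily the exact adjustment used in the dissertation.

```python
import numpy as np
from numpy.linalg import inv
from scipy.linalg import sqrtm  # symmetric matrix square root

def bollen_stine_transform(X, sigma_model):
    """Rotate the raw data so that the hypothesized model holds exactly
    in the transformed sample (Bollen & Stine, 1992): Z = Xc S^(-1/2) Sigma^(1/2)."""
    Xc = X - X.mean(axis=0)                      # center the observed data (n x p)
    S = np.cov(Xc, rowvar=False, ddof=1)         # sample covariance matrix
    S_inv_half = inv(np.real(sqrtm(S)))          # S^(-1/2)
    sigma_half = np.real(sqrtm(sigma_model))     # model-implied Sigma^(1/2)
    return Xc @ S_inv_half @ sigma_half          # transformed data whose covariance equals Sigma

def bootstrap_p_value(X, sigma_model, fit_chisq, n_boot=1000, seed=1):
    """Bootstrap p value for the model chi-square.
    fit_chisq(data) is a user-supplied (hypothetical) routine that fits the
    CFA/ULMC model to `data` and returns its chi-square test statistic."""
    rng = np.random.default_rng(seed)
    t_obs = fit_chisq(X)                         # chi-square in the original sample
    Z = bollen_stine_transform(X, sigma_model)
    n = Z.shape[0]
    exceed = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)         # resample rows with replacement
        if fit_chisq(Z[idx]) >= t_obs:           # refit and compare with the observed statistic
            exceed += 1
    return (exceed + 1) / (n_boot + 1)           # add-one adjusted bootstrap p value
```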
Batista-Foguet, J. M., Revilla, M., Saris, W. E., Boyatzis, R., & Serlavós, R. (2014). Reassessing the effect of survey characteristics on common method bias in emotional and social intelligence competencies assessment. Structural Equation Modeling: A Multidisciplinary Journal, 21, 596–607. https://doi.org/10.1080/10705511.2014.934767
Bollen, K. A., & Stine, R. A. (1992). Bootstrapping goodness-of-fit measures in structural equation models. Sociological Methods & Research, 21, 205–229. https://doi.org/10.1177/004912419202100
Conway, J. M., & Lance, C. E. (2010). What reviewers should expect from authors regarding common method bias in organizational research. Journal of Business and Psychology, 25, 325–334. https://doi.org/10.1007/s10869-010-9181-6
Cote, J. A., & Buckley, M. R. (1987). Estimating trait, method, and error variance: Generalizing across 70 construct validation studies. Journal of Marketing Research, 24, 315–318. https://doi.org/10.1177/002224378702400308
Curran, P. J., West, S. G., & Finch, J. F. (1996). The robustness of test statistics to nonnormality and specification error in confirmatory factor analysis. Psychological Methods, 1, 16–29. https://doi.org/10.1037/1082-989X.1.1.16
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Ding, C. G., Chen, C.-F., & Jane, T.-D. (2023). Improving the performance of the unmeasured latent method construct technique in common method variance detection and correction. Journal of Organizational Behavior, 44(3), 519–542. https://doi.org/10.1002/job.2673
Doty, D. H., & Glick, W. H. (1998). Common methods bias: Does common methods variance really bias results? Organizational Research Methods, 1(4), 374–406. https://doi.org/10.1177/109442819814002
Efron, B. (1979). Bootstrap methods: Another look at the jackknife. The Annals of Statistics, 7(1), 1–26. https://doi.org/10.1214/aos/1176344552
Efron, B. (1982). The jackknife, the bootstrap, and other resampling plans (CBMS-NSF Regional Conference Series in Applied Mathematics, No. 38). Philadelphia, PA: SIAM. https://doi.org/10.1137/1.9781611970319
Efron, B. (1983). Estimating the error rate of a prediction rule: Improvement on cross-validation. Journal of the American Statistical Association, 78, 316–331. https://doi.org/10.1080/01621459.1983.10477973
Efron, B., & Tibshirani, R. (1986). Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy. Statistical Science, 1(1), 54–75. https://doi.org/10.1214/ss/1177013815
Efron, B., & Tibshirani, R. (1994). An introduction to the bootstrap. Chapman and Hall/CRC. https://doi.org/10.1201/9780429246593
Eid, M., Lischetzke, T., Nussbeck, F. W., & Trierweiler, L. I. (2003). Separating trait effects from trait-specific method effects in multitrait–multimethod models: A multiple-indicator CT-C(M−1) model. Psychological Methods, 8(1), 38–60. https://doi.org/10.1037/1082-989X.8.1.38
Fan, X., & Fan, X. (2005). Using SAS for Monte Carlo simulation research in SEM. Structural Equation Modeling, 12(2), 299–333. https://doi.org/10.1207/s15328007sem1202_7
Fleishman, A. I. (1978). A method for simulating non-normal distributions. Psychometrika, 43, 521–532. https://doi.org/10.1007/BF02293811
Hair, J. F. Jr., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis: A global perspective (7th ed.). Pearson Education International.
Fuller, C. M., Simmering, M. J., Atinc, G., Atinc, Y., & Babin, B. J. (2016). Common methods variance detection in business research. Journal of Business Research, 69, 3192–3198. https://doi.org/10.1016/j.jbusres.2015.12.008
Kaiser, H. F., & Dickman, K. (1962). Sample and population score matrices and sample correlation matrices from an arbitrary population correlation matrix. Psychometrika, 27, 179–182. https://doi.org/10.1007/BF02289635
Lance, C. E., Dawson, B., Birkelbach, D., & Hoffman, B. J. (2010). Method effects, measurement error, and substantive conclusions. Organizational Research Methods, 13, 435–455. https://doi.org/10.1177/1094428109352528
Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. Journal of Applied Psychology, 86, 114–121. https://doi.org/10.1037/0021-9010.86.1.114
MacKenzie, S. B., & Podsakoff, P. M. (2012). Common method bias in marketing: Causes, mechanisms, and procedural remedies. Journal of Retailing, 88(4), 542–555. https://doi.org/10.1016/j.jretai.2012.08.001
Malhotra, N. K., Kim, S. S., & Patil, A. (2006). Common method variance in IS research: A comparison of alternative approaches and a reanalysis of past research. Management Science, 52(12), 1865–1883. https://doi.org/10.1287/mnsc.1060.0597
Nevitt, J., & Hancock, G. R. (2001). Performance of bootstrapping approaches to model test statistics and parameter standard error estimation in structural equation modeling. Structural Equation Modeling, 8(3), 353–377. https://doi.org/10.1207/S15328007SEM0803_2
Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903. https://doi.org/10.1037/0021-9010.88.5.879
Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2012). Sources of method bias in social science research and recommendations on how to control it. Annual Review of Psychology, 63, 539–569. https://doi.org/10.1146/annurev-psych-120710-100452
Richardson, H. A., Simmering, M. J., & Sturman, M. C. (2009). A tale of three perspectives: Examining post hoc statistical techniques for detection and correction of common method variance. Organizational Research Methods, 12(4), 762–800. https://doi.org/10.1177/1094428109332834
Rindskopf, D. (1983). Parameterizing inequality constraints on unique variances in linear structural models. Psychometrika, 48, 73–83. https://doi.org/10.1007/BF02314677
Satorra, A., & Bentler, P. M. (1988). Scaling corrections for chi-square statistics in covariance structure analysis. In ASA 1988 Proceedings of the Business and Economic Statistics Section (pp. 308–313). Alexandria, VA: American Statistical Association.
Satorra, A., & Bentler, P. M. (1994). Corrections to test statistics and standard errors in covariance structure analysis. In A. von Eye & C. C. Clogg (Eds.), Latent variables analysis: Applications for developmental research (pp. 399–419). Thousand Oaks, CA: Sage.
Satorra, A., & Bentler, P. M. (2001). A scaled difference chi-square test statistic for moment structure analysis. Psychometrika, 66, 507–514. https://doi.org/10.1007/BF02296192
Schaller, T. K., Patil, A., & Malhotra, N. K. (2015). Alternative techniques for assessing common method variance: An analysis of the theory of planned behavior research. Organizational Research Methods, 18(2), 177–206. https://doi.org/10.1177/1094428114554398
Singer, J. D., & Willett, J. B. (2003). Applied longitudinal data analysis: Modeling change and event occurrence. New York: Oxford University Press.
Spector, P. E. (2006). Method variance in organizational research: Truth or urban legend? Organizational Research Methods, 9(2), 221–232. https://doi.org/10.1177/1094428105284955
Vale, C. D., & Maurelli, V. A. (1983). Simulating multivariate nonnormal distributions. Psychometrika, 48, 465–471. https://doi.org/10.1007/BF02293687
Wilcox, R. R. (2016). Comparing dependent robust correlations. British Journal of Mathematical and Statistical Psychology, 69(3), 215–224. https://doi.org/10.1111/bmsp.12069
Williams, L. J., & Brown, B. K. (1994). Method variance in organizational behavior and human resources research: Effects on correlations, path coefficients, and hypothesis testing. Organizational Behavior and Human Decision Processes, 57(2), 185–209. https://doi.org/10.1006/obhd.1994.1011
Williams, L. J., Gavin, M. B., & Williams, M. L. (1996). Measurement and nonmeasurement processes with negative affectivity and employee attitudes. Journal of Applied Psychology, 81(1), 88–101. https://doi.org/10.1037/0021-9010.81.1.88
Williams, L. J., Hartman, N., & Cavazotte, F. (2010). Method variance and marker variables: A review and comprehensive CFA marker technique. Organizational Research Methods, 13(3), 477–514. https://doi.org/10.1177/1094428110366036
Williams, L. J., & McGonagle, A. K. (2016). Four research designs and a comprehensive analysis strategy for investigating common method variance with self-report measures using latent variables. Journal of Business and Psychology, 31(3), 339–359. https://doi.org/10.1007/s10869-015-9422-9
Yuan, K.-H., & Hayashi, K. (2003). Bootstrap approach to inference and power analysis based on three test statistics for covariance structure models. British Journal of Mathematical and Statistical Psychology, 56(1), 93–110. https://doi.org/10.1348/000711003321645368