Title: Simultaneous Control of Item Exposure and Test Overlap in Computerized Adaptive Testing with Testlet Items and Multidimensional Items
Author: Ya-Hui Su (蘇雅蕙)
Institution: National Chung Cheng University
Department: Department of Psychology
Advisor: Wen-Chung Wang (王文中)
Degree: Doctoral dissertation
Year: 2007
Keywords: computerized adaptive testing; testlet response theory; multidimensional item response theory; test security; item exposure control; item exposure; test overlap
Item exposure control procedures in computerized adaptive testing (CAT) are used to reduce the administration of popular items that would otherwise be overexposed and likely compromised. Once examinees have prior knowledge of an item because of its frequent administration, its fine psychometric properties no longer hold, and test scores no longer reflect examinees' true abilities. The necessity of item exposure control is most relevant to high-stakes tests. Methods for simultaneous control of item exposure and test overlap have been proposed (Chen, Ankenmann, & Spray, 2003; Chen & Lei, 2005; Davey & Parshall, 1995). The Davey and Parshall (DP; Davey & Parshall, 1995) procedure reduces the amount of test overlap but fails to control the test overlap rate exactly. Chen and Lei (2005) successfully implemented test overlap control within the Sympson and Hetter (SH; 1985) procedure, called the SH procedure with test overlap control (SHT), in which item exposure and test overlap can be controlled simultaneously. However, the item exposure parameters of these procedures can be obtained only through time-consuming simulations prior to operational CAT. Unlike many other item exposure control methods, the online approach (Chen, 2004; Chen, 2005; Ju, 2005; van der Linden, 2000; van der Linden & Veldkamp, 2004; Veldkamp & van der Linden, 2002; Wu, 2006) does not require such simulations to derive item exposure control parameters, and is therefore regarded as "on the fly" control of item exposure. When the operational item pool or the examinee population differs from that assumed in prior simulations, only the SH online procedure with freeze control (SHOF; Chen, 2005) remains unaffected. To address the SHOF procedure's high test overlap rates for examinees with extreme abilities, Chen (2005) proposed adding test overlap control to the SHOF procedure, yielding the SHTOF procedure.
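The core of the Sympson-Hetter family of procedures described above is a probabilistic administration rule: the item selection algorithm proposes the most informative item, but the item is actually administered only with probability equal to its exposure control parameter. The following is a minimal sketch of that rule; the function name `sh_select` and the dictionary-based parameter store are illustrative choices, not part of any cited implementation.

```python
import random

def sh_select(candidates, k, rng=random):
    """Sympson-Hetter administration rule.

    candidates: item IDs ordered from most to least informative.
    k: exposure control parameters; item i, once selected, is
       administered with probability k[i] (default 1.0 = no control).
    """
    for item in candidates:
        # Administer the proposed item with probability k[item];
        # otherwise reject it and fall through to the next candidate.
        if rng.random() < k.get(item, 1.0):
            return item
    return candidates[-1]  # all candidates rejected: fall back

# An item with k = 0.0 is never administered, so selection passes
# to the next-best candidate.
print(sh_select(['a', 'b'], {'a': 0.0, 'b': 1.0}))
```

In the operational (non-online) procedures, the parameters `k` are tuned through iterative pre-operational simulations so that each item's realized exposure rate stays below the target maximum; the online procedures adjust them on the fly instead.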
To reduce item overexposure of the SHOF procedure for examinees with extreme abilities, in this dissertation I propose the SHOF conditional (denoted as SHCOF) procedure and assess its efficiency under various CAT contexts.
The purposes of this study are to (a) implement test overlap control procedures onto item exposure control methods to form the Stocking and Lewis (1998) procedure with test overlap control (denoted SLT) and the SHCOF procedure, such that item exposure and test overlap can be controlled simultaneously, and (b) conduct a series of simulations to investigate the efficiency of these two newly developed procedures and the existing procedures under unidimensional CAT (UCAT), multidimensional CAT (MCAT), and testlet CAT (TCAT) contexts. Ten procedures are compared: the Sympson and Hetter (SH; 1985) procedure, the Sympson and Hetter conditional (SHC) procedure (Chang, Ansley, & Twu, 2002), the SH procedure with test overlap control (SHT; Chen & Lei, 2005), the Stocking and Lewis (1995) unconditional multinomial (SL) procedure, the Stocking and Lewis (1998) conditional multinomial (SLC) procedure, the SL procedure with test overlap control (SLT; developed in the present study), the Davey and Parshall (DP; 1995) procedure, the SH online procedure with freeze control (SHOF; Chen, 2005), the SH online procedure with test overlap control (SHTOF; Chen, 2005), and the SH online conditional procedure (SHCOF; developed in the present study). A multinomial model for content ordering (Chen, Ankenmann, & Spray, 2003) is used to randomize content area selection. Each procedure is compared to a baseline condition in which no item exposure or test overlap control is implemented.
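The multinomial model for content ordering mentioned above randomizes which content area supplies the next item, rather than visiting areas in a fixed sequence. A simple way to realize this is to draw the next area with probability proportional to the number of items still needed from each area; the sketch below assumes that scheme (the function name and quota representation are illustrative, not from the cited source).

```python
import random

def next_content_area(remaining, rng=random):
    """Draw the next content area with probability proportional to the
    number of items still required from each area (multinomial draw).

    remaining: dict mapping content area -> items still needed.
    """
    total = sum(remaining.values())
    u = rng.random() * total  # uniform point on [0, total)
    for area, need in remaining.items():
        if u < need:          # falls inside this area's slice
            return area
        u -= need
    # Numerical edge case: return any area with quota left.
    return next(a for a, n in remaining.items() if n > 0)

# An exhausted area (quota 0) is never drawn.
print(next_content_area({'algebra': 2, 'geometry': 0}))
```

The caller would decrement the chosen area's quota after each administered item, so the ordering of content areas varies across examinees while the final content balance is preserved.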
Unlike the online procedures (i.e., SHOF, SHTOF, and SHCOF), the item exposure parameters of the non-online procedures (i.e., SH, SHC, SHT, SL, SLC, SLT, and DP) can be obtained only through iterative simulations prior to operational CAT. In this dissertation, the effectiveness of these procedures is evaluated via a series of simulations under two major conditions: (a) simulees drawn from standard normal distributions, and (b) simulees drawn from uniform distributions with equal intervals. Measurement precision (bias and RMSE), item exposure rates, test overlap rates, and pool usage rates are computed to assess the efficiency of the procedures.
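The security indices above have direct empirical definitions: an item's exposure rate is the proportion of examinees who saw it, the test overlap rate for a pair of examinees is the proportion of items they share, and pool usage is the proportion of the pool administered at least once. A minimal sketch of computing them from a set of administered tests (variable names are illustrative):

```python
from itertools import combinations

def security_indices(tests, pool_size, test_length):
    """Compute empirical item exposure rates, the mean pairwise test
    overlap rate, and the pool usage rate.

    tests: one iterable of item IDs per examinee.
    """
    n = len(tests)
    counts = {}  # item -> number of examinees who received it
    for t in tests:
        for item in t:
            counts[item] = counts.get(item, 0) + 1
    exposure = {i: c / n for i, c in counts.items()}
    # Overlap for a pair of examinees = shared items / test length,
    # averaged over all examinee pairs.
    overlaps = [len(set(a) & set(b)) / test_length
                for a, b in combinations(tests, 2)]
    mean_overlap = sum(overlaps) / len(overlaps)
    pool_usage = len(counts) / pool_size
    return exposure, mean_overlap, pool_usage
```

Conditional versions of these indices, as used for the SHC, SLC, and SHCOF procedures, are obtained by computing the same quantities within ability strata rather than over the whole simulee sample.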
The simulation results showed that the procedures conditional on ability (i.e., SHC, SLC, and SHCOF) yielded well-controlled conditional maximum item exposure rates, the lowest test overlap rates, and the largest pool usage rates, but also the largest root mean squared errors. The procedures with item exposure control or test overlap control achieved the pre-specified item exposure and test overlap rates. The three online procedures (i.e., SHOF, SHCOF, and SHTOF) performed fairly well after 1000 simulees had been administered, especially the SHCOF procedure. No significant differences in maximum item exposure rates, mean test overlap rates, or pool usage rates were found between the three CAT contexts. The global and conditional root mean squared errors of all procedures under the TCAT context were larger than those under the UCAT and MCAT contexts, mainly because only 12 testlets (and thus only 12 ability adjustments) were selected under the TCAT context. Compared to the 500-item pool, the 2000-item pool yielded lower root mean squared errors, smoother maximum item exposure rates, lower mean test overlap rates, lower pool usage rates, lower conditional root mean squared errors, larger conditional maximum item exposure rates, and larger conditional mean test overlap rates.
This dissertation not only provides guidelines for improving test security in operational CAT, but also sheds new light on theoretical issues of test security in CAT. A series of simulations was conducted to investigate the efficiency of item exposure and test overlap control for the two newly proposed procedures (i.e., SLT and SHCOF) and other popular procedures under the UCAT, MCAT, and TCAT contexts. Several important conclusions can be drawn from this study. First, the SLT and SHCOF procedures were successfully developed. Second, the SHCOF procedure is recommended for practical use because it is efficient and does not require intensive simulation prior to operational CAT. Third, setting a lower target maximum item exposure rate (e.g., 0.1) is more helpful for obtaining stable item exposure parameters in conditional procedures than using a large conditional sample.
Automated Sciences Group & CACI. (1988). CAT-ASVAB program: Concept of operation and cost/benefit analysis. Fairfax, VA: Author.
Ackerman, T. A. (1989). Unidimensional IRT calibration of compensatory and non-compensatory multidimensional items. Applied Psychological Measurement, 13, 113-127.
Ackerman, T. A. (1991). The use of unidimensional parameter estimates of multidimensional items in adaptive testing. Applied Psychological Measurement, 15, 13-24.
Adams, R. J., Wilson, M. R., & Wang, W. C. (1997). The multidimensional random coefficients multinomial logit model. Applied Psychological Measurement, 21(1), 1-23.
Ansley, T. N., & Forsyth, R. A. (1985). An examination of the characteristics of unidimensional IRT parameter estimates derived from two-dimensional data. Applied Psychological Measurement, 9, 37-48.
Beguin, A. A., & Glas, C. A. W. (2001). MCMC estimation and some model-fit analysis of multidimensional IRT models. Psychometrika, 66, 541-562.
Bergstrom, B. A., Lunz, M. E., & Gershon, R. C. (1992). Altering the level of difficulty in computer adaptive testing. Applied Measurement in Education, 5, 137-149.
Birnbaum, A. (1968). Some latent trait models and their use in inferring an examinee’s ability. In F. M. Lord & M. R. Novick, Statistical theories of mental test scores (pp. 397-479). Reading, MA: Addison-Wesley.
Bock, R. D., & Mislevy, R. J. (1982). Adaptive EAP estimation of ability in a microcomputer environment. Applied Psychological Measurement, 6, 431-444.
Boyd, A. M. (2003). Strategies for controlling testlet exposure rates in computerized adaptive testing systems. Unpublished doctoral dissertation, University of Texas at Austin.
Bradlow, E. T., Wainer, H., & Wang, X. (1999). A Bayesian random effects model for testlets. Psychometrika, 64, 153-168.
Chang, H.-H., & Ying, Z. (1996). A global information approach to computerized adaptive testing. Applied Psychological Measurement, 20, 213-229.
Chang, H.-H., & Ying, Z. (1999). a-Stratified multistage computerized adaptive testing. Applied Psychological Measurement, 23, 211-222.
Chang, S.-W., & Ansley, T. N. (2003). A comparative study of item exposure control methods in computerized adaptive testing. Journal of Educational Measurement, 40, 71-103.
Chang, S.-W., Ansley, T. N., & Lin, S. H. (2000). Performance of item exposure control methods in computerized adaptive testing: Further explorations. Paper presented at the Annual Meeting of the American Educational Research Association, New Orleans, LA.
Chang, S.-W., Ansley, T. N., & Twu, B.-Y. (2002). Performance of item exposure control methods in computerized adaptive testing: Further explorations. Psychological Testing, 49(2), 235-263.
Chang, S. W., & Twu, B.-Y. (2001). Effects of changes in the Examinees’ ability distribution on the exposure control method in CAT. Paper presented at the Annual Meeting of the American Educational Research Association, Seattle, WA.
Chen, S. (2004). Controlling item exposure on the fly in computerized adaptive testing. Paper presented at the Annual Meeting of the Taiwanese Psychological Association, Taipei, Taiwan.
Chen, S. (2005). Controlling item exposure and test overlap on the fly in computerized adaptive testing. Paper presented at the IMPS 2005 Annual Meeting of the Psychometric Society, Tilburg, The Netherlands.
Chen, S., Ankenmann, R. D., & Spray, J. A. (2003). The relationship between item exposure and test overlap in computerized adaptive testing. Journal of Educational Measurement, 40(2), 129-145.
Chen, S., & Lei, P. (2005). Controlling item exposure and test overlap in computerized adaptive testing. Applied Psychological Measurement, 29(3), 204-271.
Davey, T., & Nering, M. (2002). Controlling item exposure and maintaining item security. In C. N. Mills, M. T. Potenza, & J. J. Fremer (Eds.), Computer-based testing: Building the foundation for future assessments (pp. 165-191). Mahwah, NJ: Lawrence Erlbaum Associates.
Davey, T., & Parshall, C. G. (1995). New algorithms for item selection and exposure control with computerized adaptive testing. Paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA.
Drasgow, F., & Parsons, C. K. (1983). Application of unidimensional item response theory models to multidimensional data. Applied Psychological Measurement, 7, 189-199.
Educational Testing Service. (1993). The GRE Computer Adaptive Testing Program (CAT): Integrating Conveniences, Assessment, and Technology. Princeton, NJ: Educational Testing Service.
Eggen, T. J. H. M. (2001). Overexposure and underexposure of items in computerized adaptive testing (Measurement and Research Department Reports 2001-1). Arnhem, The Netherlands: Citogroep.
Eignor, D. R., Stocking, M. L., Way, W. D., & Steffen, M. (1993). Case studies in computer adaptive test design through simulation (Research Report 93-56). Princeton, NJ: Educational Testing Service.
Eliason, S. R. (1993). Maximum likelihood estimation: Logic and practice. Newbury Park, Calif.: Sage.
Embretson, S. (1980). Multicomponent latent trait models for ability tests. Psychometrika, 45, 479-494.
Embretson, S. E. & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Fischer, G. H., & Seliger, E. (1997). Multidimensional linear logistic models for change. In W. J. van der Linden & R. K. Hambleton (Eds.), Handbook of modern item response theory (pp. 323-346). New York: Springer-Verlag.
Flaugher, R. (2000). Item pools. In H. Wainer (Ed.), Computerized adaptive testing: A primer (2nd ed., pp. 37-59). Mahwah, NJ: Lawrence Erlbaum Associates.
Folk, V. G., & Green, B. F. (1989). Adaptive estimation when the unidimensionality assumption of IRT is violated. Applied Psychological Measurement, 13, 373-389.
Glas, C. A. W., Wainer, H., & Bradlow, E. T. (2000). MML and EAP estimation in testlet-based adaptive testing. In W. J. van der Linden & C. A. W. Glas (Eds.), Computerized adaptive testing: Theory and practice (pp. 271-287). London: Kluwer Academic Publishers.
Hambleton, R. K., & Swaminathan, H. (1985). Item response theory: Principles and applications. Boston: Kluwer-Nijhoff Publishing.
Hambleton, R. K., Zaal, J. N., & Pieters, J. P. M. (1991). Computerized adaptive testing: Theory, applications, and standards. In R. K. Hambleton & J. N. Zaal (Eds.), Advances in educational and psychological testing: Theory and applications. Boston: Kluwer Academic Publishers.
Harrison, D. A. (1986). Robustness of IRT parameter estimation to violations of the unidimensionality assumption. Journal of Educational Statistics, 11, 91-115.
Hattie, J. (1981). Decision criteria for determining unidimensionality. Unpublished doctoral dissertation, University of Toronto, Canada.
Hetter, R. D., & Sympson, J. B. (1997). Item exposure control in CAT-ASVAB. In W. A. Sands, B. K. Waters, & J. R. McBride (Eds.), Computerized adaptive testing: From inquiry to operation (pp.141-144). Washington, DC: American Psychological Association.
Ju, Y. (2005). Item exposure control in a-stratified computerized adaptive testing. Unpublished master’s thesis, National Chung Cheng University, Chia-Yi, Taiwan.
Kelderman, H. (1996). Multidimensional Rasch models for partial-credit scoring. Applied Psychological Measurement, 20, 155-168.
Kim, J. K. & Nicewander, W. A. (1993). Ability estimation for conventional tests. Psychometrika, 58, 587-599.
Kingsbury, G. G., & Zara, A. R. (1989). Procedures for selecting items for computerized adaptive tests. Applied Measurement in Education, 2, 359-375.
Kingsbury, G. G., & Zara, A. R. (1991). A comparison of procedures for content-sensitive item selection in computerized adaptive tests. Applied Measurement in Education, 4, 241-261.
Lord, F. M. (1977). A broad-range tailored test of verbal ability. Applied Psychological Measurement, 1, 95-100.
Lord, F. M. (1980). Applications of item response theory to practical testing problems. Hillsdale, NJ: Lawrence Erlbaum.
McBride, J. R., & Martin, J. T. (1983). Reliability and validity of adaptive ability tests in a military setting. In D. J. Weiss (Ed.), New Horizons in Testing (pp. 223-226). New York, NY: Academic Press.
McDonald, R. P. (1982). Linear versus nonlinear models in item response theory. Applied Psychological Measurement, 6, 379-396.
McDonald, R. P. (1997). Normal-ogive multidimensional model. In W. J. van der Linden & R. K. Hambleton (Eds.), Handbook of modern item response theory (pp. 258-270). New York: Springer-Verlag.
McKinley, R. L., & Reckase, M. D. (1983). MAXLOG: A computer program for the estimation of the parameters of a multidimensional logistic model. Behavior Research Methods and Instrumentation, 15, 389-390.
Meiser, T. (1996). Loglinear Rasch models for the analysis of stability and change. Psychometrika, 61, 629-645.
Mislevy, R. J., & Stocking, M. L. (1989). A consumer's guide to LOGIST and BILOG. Applied Psychological Measurement, 13, 57-75.
Owen, R. J. (1969). A Bayesian approach to tailored testing (Research Report 69-92). Princeton, NJ: Educational Testing Service.
Owen, R. J. (1975). A Bayesian sequential procedure for quantal response in the context of adaptive mental testing. Journal of the American Statistical Association, 70, 351-356.
Parshall, C. G., Davey, T., & Nering, M. L. (1998). Test development exposure control of adaptive testing. Paper presented at the annual meeting of the National Council on Measurement in Education, San Diego, CA.
Parshall, C. G., Hogarty, K., & Kromrey, J. (1999). Item exposure in adaptive tests: An empirical investigation of control strategies. Paper presented at the Annual Meeting of the Psychometric Society, Lawrence, KS.
Pastor, D. A., Dodd, B. G., & Chang, H. H. (2002). A comparison of item selection techniques and exposure control mechanisms in CATs using the generalized partial credit model. Applied Psychological Measurement, 26(2), 147-163.
Reckase, M. D. (1979). Unifactor latent trait models applied to multifactor tests: Results and implications. Journal of Educational Statistics, 4, 207-230.
Reckase, M. R. (1985). The difficulty of test items that measure more than one dimension. Applied Psychological Measurement, 9, 401-412.
Reckase, M. D., Ackerman, T. A., & Carlson, J. E. (1988). Building a unidimensional test using multidimensional items. Journal of Educational Measurement, 25, 193-203.
Revuelta, J., & Ponsoda, V. (1998). A comparison of item-exposure control methods in computerized adaptive testing. Journal of Educational Measurement, 38, 311-327.
Rost, J., & Carstensen, C. H. (2002). Multidimensional Rasch measurement via item component models and faceted designs. Applied Psychological Measurement, 26, 42-56.
Sands, W. A., & Waters, B. K. (1997). Introduction to ASVAB and CAT. In W. A. Sands, B. K. Waters, & J. R. McBride (Eds.), Computerized adaptive testing: From inquiry to operation (pp. 3-9). Washington, DC: American Psychological Association.
Schaeffer, G. A., Reese, C. M., Steffen, M., McKinley, R. L., & Mills, C. N. (1993). Field test of a computer-based GRE General Test (RR-93-07). Princeton, NJ: Educational Testing Service.
Schaeffer, G. A., Steffen, M., Golub-Smith, M., Mills, C., & Durso, R. (1995). The introduction and comparability of the computer adaptive GRE general test (GRE Board Professional Report No. 88-08aP; ETS Research Report 95-20). Princeton, NJ: Educational Testing Service.
Segall, D. C. (1987). ACAP item pools: Analysis and recommendations. San Diego, CA: Navy Personnel Research and Development Center.
Segall, D. O. (1996). Multidimensional adaptive testing. Psychometrika, 61(2), 331-354.
Segall, D. O. (2000). Principles of multidimensional adaptive testing. In W. J. van der Linden & C. A. W. Glas (Eds.), Computerized adaptive testing: Theory and practice (pp. 53-73). London: Kluwer Academic Publishers.
Seong, T. (1990). Sensitivity of marginal maximum likelihood estimation of item and ability parameters to the characteristics of the prior ability distributions. Applied Psychological Measurement, 14, 299-311.
Sireci, S. G., Wainer, H., & Thissen, D. (1991). On the reliability of testlet-based tests. Journal of Educational Measurement, 28, 237-247.
Stocking, M. L. (1994). Three practical issues for modern adaptive testing item pools (ETS Research Report 94-5). Princeton, NJ: Educational Testing Service.
Stocking, M. L., & Lewis, C. (1995). A new method for controlling item exposure in computerized adaptive testing (ETS Research Report RR-95-25). Princeton, NJ: Educational Testing Service.
Stocking, M. L., & Lewis, C. (1998). Controlling item exposure conditional on ability in computerized adaptive testing. Journal of Educational and Behavioral Statistics, 23, 57-75.
Stocking, M. L., & Lewis, C. (2000). Methods of controlling the exposure of items in CAT. In W. J. van der Linden & C. A. W. Glas (Eds.), Computerized adaptive testing: Theory and practice (pp. 163-182). London: Kluwer Academic Publishers.
Stocking, M. L., & Swanson, L. (1993). A method for severely constrained item selection in adaptive testing. Applied Psychological Measurement, 17 (3), 277-292.
Stroud, A. H., & Sechrest, D. (1966). Gaussian quadrature formulas. Englewood Cliffs, NJ: Prentice-Hall.
Swanson, L., & Stocking, M. L. (1993). A model and heuristic for solving very large item selection problems. Applied Psychological Measurement, 17 (2), 151-166.
Sympson, J. B., & Hetter, R. D. (1985). Controlling item-exposure rates in computerized adaptive testing. Proceedings of the 27th annual meeting of the Military Testing Association (pp. 973-977). San Diego, CA: Navy Personnel Research and Development Center.
Thissen, D., & Mislevy, R. J. (2000). Testing algorithms. In H. Wainer (Ed.), Computerized adaptive testing: A primer (2nd ed., pp. 101-133). Mahwah, NJ: Lawrence Erlbaum Associates.
Urry, V. W. (1977). Tailored testing: A successful application of latent trait theory. Journal of Educational Measurement, 14, 181-196.
van der Linden, W. J. (1999). Multidimensional adaptive testing with a minimum error-variance criterion. Journal of Educational and Behavioral Statistics, 24(4), 398-412.
van der Linden, W. J. (2000). Constrained adaptive testing with shadow tests. In W. J. van der Linden & C. A. W. Glas (Eds.), Computerized adaptive testing: Theory and practice (pp. 27-52). London: Kluwer Academic Publishers.
van der Linden, W. J. (2003). Some alternatives to Sympson-Hetter item-exposure control in computerized adaptive testing. Journal of Educational and Behavioral Statistics, 28(3), 249-265.
van der Linden, W. J., & Glas, C. A. W. (2000). Computerized adaptive testing: Theory and practice. London: Kluwer Academic Publishers.
van der Linden, W. J., & Pashley, P. J. (2000). Item selection and ability estimation in adaptive testing. In W. J. van der Linden & C. A. W. Glas (Eds.), Computerized adaptive testing: Theory and practice (pp. 1-25). London: Kluwer Academic Publishers.
van der Linden, W. J., & Veldkamp, B. P. (2004). Constraining item exposure in computerized adaptive testing with shadow tests. Journal of Educational and Behavioral Statistics, 29(3), 273-291.
Veldkamp, B. P., & van der Linden, W. J. (2002). Multidimensional adaptive testing with constraints on test content. Psychometrika, 67(4), 575-588.
Vos, H. J., & Glas, C. A. W. (2000). Testlet-based adaptive mastery testing. In W. J. van der Linden & C. A. W. Glas (Eds.), Computerized adaptive testing: Theory and practice (pp. 289-309). London: Kluwer Academic Publishers.
Wainer, H. (2000). Computerized adaptive testing: A primer (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
Wainer, H., Bradlow, E. T., & Du, Z. (2000). Testlet response theory: An analog for the 3PL model useful in testlet-based adaptive testing. In W. J. van der Linden & C. A. W. Glas (Eds.), Computerized adaptive testing: Theory and practice (pp. 245-269). London: Kluwer Academic Publishers.
Wainer, H., & Kiely, G. L. (1987). Item clusters and computerized adaptive testing: A case for testlets. Journal of Educational Measurement, 24(3), 185-201.
Wainer, H., & Mislevy, R. J. (2000). Item response theory, item calibration, and proficiency estimation. In H. Wainer (Ed.), Computerized adaptive testing: A primer (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
Wang, T., & Vispoel, W. P. (1998). Properties of ability estimation methods in computerized adaptive testing. Journal of Educational Measurement, 35(2), 109-135.
Wang, X., Bradlow, E. T., & Wainer, H. (2002). A general Bayesian model for testlets: Theory and applications. Applied Psychological Measurement, 26, 109-128.
Wang, W.-C., & Chen, P.-H. (2004). Implementation and measurement efficiency of multidimensional computerized adaptive testing. Applied Psychological Measurement, 28(5), 295-316.
Wang, W.-C., & Wilson, M. (2005). The Rasch testlet model. Applied Psychological Measurement, 29(2), 126-149.
Ward, W. C. (1988). The college board Computerized Placement Tests: An application of computerized adaptive testing. Machine-Mediated Learning, 2, 217-282.
Way, W. D., Ansley, T. N., & Forsyth, R. A. (1988). The comparative effects of compensatory and non-compensatory two-dimensional data on unidimensional IRT estimation. Applied Psychological Measurement, 12, 239-252.
Way, W. D., Zara, A., & Leahy, J. (1996). Modifying the NCLEXTM CAT item selection algorithm to improve item exposure. Paper presented at the annual meeting of the American Educational Research Association, New York, NY.
Weiss, D. J. (1982). Improving measurement quality and efficiency with adaptive testing. Applied Psychological Measurement, 6, 473-492.
Wu, M. (2006). Controlling item exposure on the fly in computerized adaptive testing. Unpublished master's thesis, National Chung Cheng University, Chia-Yi, Taiwan.
Yen, W. M. (1984). Effects of local item dependence on the fit and equating performance of the three-parameter logistic model. Applied Psychological Measurement, 8, 125-145.
Yen, W. M. (1993). Scaling performance assessment: Strategies for managing local item dependence. Journal of Educational Measurement, 30, 187-213.