
Title: 教學實驗中的考驗力分析 (Power Analysis in Instructional Experiments)
Author: Chang, Han-I (張漢宜)
University: National Kaohsiung Normal University
Department: Department of Education
Advisor: 謝季宏
Degree: Doctoral
Year: 2003
Keywords: statistical power, sample size, effect size (統計考驗力、樣本大小、效果值)
This study analyzes statistical power in instructional experiments and examines the relationship between statistical power and reliability. Its purposes are: (1) to analyze the statistical power of instructional experiments and, using existing computer software, to design a simple procedure for computing multivariate statistical power; (2) to compare the statistical power of instructional experiments with that of non-experimental educational research; (3) to compare the present results with those of power studies conducted abroad; (4) to examine whether increasing reliability raises statistical power; and (5) to offer recommendations for instructional-experiment research based on the findings.
The analysis covered 53 instructional experiments with 1,427 statistical tests and 53 non-experimental educational studies with 4,330 tests, for 106 studies and 5,757 tests in all. Power was obtained by entering sample size, significance level, and effect size into power-analysis software. Sample sizes were taken from the number of participants actually used in each test; the significance level was set at .05, with all tests treated as nondirectional; and effect sizes followed Cohen's (1988) definitions of small, medium, and large. A procedure for computing multivariate statistical power was also built from existing power software.
The power of instructional experiments to detect small, medium, and large effects was .20, .65, and .87, respectively; for non-experimental educational research it was .62, .97, and .99, and for educational research overall, .41, .81, and .93. At every effect-size level, the power of non-experimental educational research was significantly higher than that of instructional experiments, and the power of educational research overall was significantly higher than that reported in studies abroad. Statistical power and reliability have no direct relationship; however, if the measurement-error variance is held constant, raising reliability lowers power, whereas if the true-score variance is held constant, raising reliability increases power.
From these results, the conclusions are as follows:
1. The statistical power of instructional experiments is somewhat inadequate, and their statistical-conclusion validity appears low.
(1) Power to detect small effects is rather poor;
(2) power to detect medium effects is marginal;
(3) power to detect large effects is adequate.
2. For small, medium, and large effects alike, non-experimental educational research has higher power than instructional experiments.
3. For all effect sizes, the power of educational research overall exceeds that reported in studies abroad.
4. The computer procedure for multivariate power helps researchers compute power quickly and correctly.
5. Increasing test reliability does not necessarily raise statistical power.
(1) There is no direct or functional relation between statistical power and reliability;
(2) high reliability does not necessarily mean high power;
(3) low reliability can coexist with high power.
Based on these conclusions, recommendations are offered on designing instructional experiments with high statistical power, using power analysis to determine sample size, using the concept of power to aid data analysis, reducing measurement error to raise power, attending to the potential of alternative data-analysis methods, and possible directions for future research.
The purpose of this study was to analyze statistical power in instructional experiment research and to explore the relation between statistical power and reliability of measurement. First, statistical power in instructional experiment research was assessed, and a procedure for calculating multivariate statistical power was constructed using power software. Second, statistical power in instructional experiment research was compared with that of non-experimental educational research. Third, the power of instructional experiment research combined with non-experimental educational research was compared with that of foreign power studies. Fourth, the study explored whether power increases as reliability of measurement increases. Finally, suggestions for future research were offered.
A total of 106 theses and dissertations were examined: 53 instructional experiment research investigations and 53 non-experimental educational research investigations. Power was calculated for 5,757 statistical tests in all, 1,427 in instructional experiment research and 4,330 in non-experimental educational research. Power is a function of sample size, significance level, and effect size, and was calculated with power software. To maintain consistency with other reported statistical power surveys, several conditions were standardized: (1) α was held at a uniform value of .05; (2) all statistical tests were treated as nondirectional; and (3) Cohen's (1988) definitions of small, medium, and large effect sizes were adopted. A procedure for calculating multivariate statistical power was also constructed from existing power software.
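The dependence of power on these three standardized inputs can be illustrated with a minimal sketch (not the software used in the study): a normal approximation to the power of a nondirectional two-sample t test, evaluated at Cohen's small, medium, and large benchmarks. The per-group sample size of 64 is an arbitrary illustrative value.

```python
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a nondirectional two-sample t test.

    d            -- standardized effect size (Cohen's d)
    n_per_group  -- sample size in each of the two groups
    alpha        -- significance level (.05, as in the study)

    Uses the normal approximation to the noncentral t distribution.
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)        # two-tailed critical value
    delta = d * (n_per_group / 2) ** 0.5     # noncentrality parameter
    # Probability of rejecting H0 in either tail
    return z.cdf(delta - z_crit) + z.cdf(-delta - z_crit)

# Cohen's (1988) small, medium, and large benchmarks
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: power = {approx_power(d, 64):.2f}")
```

Exact calculations would use the noncentral t distribution itself, but the approximation is enough to show how power rises with sample size and effect size.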
The power of instructional experiment research to detect small, medium, and large effects was .20, .65, and .87, respectively; the power of non-experimental educational research was .62, .97, and .99; and the power of instructional experiment research combined with non-experimental educational research was .41, .81, and .93. For all levels of effect size, the difference in power between non-experimental educational research and instructional experiment research was statistically significant, as was the difference between the combined educational research and foreign power studies. There is a relation between statistical power and reliability of measurement, but it is not a functional relation unless one other variable, either the true variance or the error variance, has a definite value. Specifically, if the true variance is assumed constant, power is a monotonically increasing function of the reliability coefficient; if the error variance is assumed constant, power is a monotonically decreasing function of the reliability coefficient.
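The two monotonic relations follow from classical test theory: reliability is the ratio of true-score variance to observed-score variance, the standardized effect size for a fixed raw mean difference is that difference divided by the observed standard deviation, and power increases with the magnitude of the standardized effect. A small sketch with illustrative values (not data from the study) makes both cases concrete:

```python
def effect_size(delta, rho, true_var=None, error_var=None):
    """Standardized effect size d = delta / sd(observed scores),
    where reliability rho = true_var / (true_var + error_var).
    Exactly one of true_var / error_var must be held fixed."""
    if true_var is not None:
        # error variance implied: true_var * (1 - rho) / rho
        total_var = true_var / rho
    else:
        # true variance implied: error_var * rho / (1 - rho)
        total_var = error_var / (1 - rho)
    return delta / total_var ** 0.5

delta = 1.0  # fixed raw mean difference (arbitrary illustrative value)
# True-score variance fixed: d (hence power) rises with reliability
print([round(effect_size(delta, r, true_var=1.0), 3) for r in (0.5, 0.7, 0.9)])
# Error variance fixed: d (hence power) falls as reliability rises
print([round(effect_size(delta, r, error_var=1.0), 3) for r in (0.5, 0.7, 0.9)])
```

Because power is monotone in the standardized effect size, these two opposite trends in d are exactly the two opposite trends in power described above.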
The conclusions of this research were as follows:
1. The statistical power of instructional experiment research was low.
(1) Power to detect small effects was poor.
(2) Power to detect medium effects was marginal.
(3) Power to detect large effects was adequate.
2. For all levels of effect size, power of non-experimental educational research was higher than that of instructional experiment research.
3. For all levels of effect size, power of instructional experiment research combined with non-experimental educational research was higher than that of foreign power studies.
4. The procedure for calculating multivariate statistical power can assist researchers in computing correct power quickly.
5. Statistical power does not always increase as reliability increases.
(1) There is not a functional relation between statistical power and reliability of measurement.
(2) Increasing reliability does not necessarily increase statistical power.
(3) It is possible for statistical power to increase as reliability decreases.
Recommendations for designing instructional experiment research with high power, for using power analysis to determine sample size, for analyzing data with power concepts, for reducing measurement error, and for attending to alternative methods of data analysis, together with suggestions for future research, are discussed.
I. Chinese-Language References
王文科(民84)。教育研究法(四版)。台北:五南。
李金泉(民83)。SAS PC 實務與應用統計分析。台北:松崗。
林幸台譯(民75)(Edwin E. Ghiselli、John P. Campbell與Sheldon Zedeck著)。心理測量導論。台北:五南。
林清山(民81)。心理與教育統計學。台北:東華。
林清山(民72)。多變項分析統計法(三版)。台北:東華。
林偉人(民84)。國小學生場地獨立性及制握信念對數學科電腦輔助教學學習成效之研究。國民教育研究集刊,1,267-295。
吳明清(民80)。教育研究:基本觀念與方法之分析。台北:五南。
陳坤木(民84)。電腦輔助學習在國民小學學生數學科精熟學習應用之研究。國民教育研究集刊,1,329-343。
郭生玉(民76)。心理與教育測驗。台北:精華。
國立編譯館(民86)。統計學名詞。台北:水牛。
張德榮(民81)。母數統計與非母數統計:既生亮何生瑜?輔導學報,15,1-16。
張德榮(民71)。教育心理研究的統計考驗力分析。輔導學報,5,119-137。
詹志禹(民77)。後設分析:量化的文獻探討。思與言,26,311-325。
鄭博真(民85)。寫作修改策略對國小學生寫作修改表現、寫作能力、寫作品質和寫作態度之影響研究。國立台南師範學院國民教育研究所碩士論文(未出版)。
簡茂發(民76)。心理測驗與統計方法。台北:心理。
謝季宏、塗金堂(民87)。t考驗的統計考驗力之研究。教育學刊,14,93-114。
謝廣全(民76)。最新實用心理與教育統計學(增訂三版)。高雄:復文。
II. English-Language References
Acklin, M. W., McDowell, C. J., & Orndoff, S. (1992). Statistical power and the Rorschach: 1975-1991. Journal of Personality Assessment, 59, 366-379.
Allison, D. B. (1995). When is it worth measuring a covariate in a randomized clinical trial? Journal of Consulting and Clinical Psychology, 63, 339-343.
Bakan, D. (1966). The test of significance in psychological research. Psychological Bulletin, 66, 423-437.
Borenstein, M., Rothstein, H., Cohen, J., & SPSS Inc. (1997). SamplePower (Version 1.20) [Computer software]. SPSS Inc.
Borenstein, M., & Cohen, J. (1988). Statistical power analysis (Version 1.0) [Computer software]. Hillsdale, NJ: Erlbaum.
Borg, W. R., & Gall, M. D. (1989). Educational research (5th ed.). New York: Longman.
Boss, M. W., & Taylor, M. C. (1989). The relationship between locus of control and academic level and sex of secondary students. Contemporary Educational Psychology, 14, 315-322.
Brewer, J. K. (1972). On the power of statistical tests in the American Educational Research Journal. American Educational Research Journal, 9, 391-401.
Brewer, J. K., & Owen, P. W. (1973). A note on the power of statistical tests in the Journal of Educational Measurement. Journal of Educational Measurement, 10, 71-74.
Brown, B. W., Brauner, C., Chan, A., Gutierrez, D., Herson, J., Lovato, J., Polsley, J., & Russell, K. (1994). STPLAN (Version 4.1) [Computer software]. Houston, TX: The University of Texas, M. D. Anderson Cancer Center, Department of Biomathematics.
Chase, L. J., & Baran, S. J. (1976). An assessment of quantitative research in mass communication. Journalism Quarterly, 53, 308-311.
Chase, L. J., & Chase, R. B. (1976). A statistical power analysis of applied psychological research. Journal of Applied Psychology, 61, 234-237.
Christensen, J. E., & Christensen, C. E. (1977). Statistical power analysis of health, physical education, and recreation research. Research Quarterly, 48, 204-208.
Cohen, J.(1962). The statistical power of abnormal-social psychological research: A review. Journal of Abnormal and Social Psychology, 65, 145-153.
Cohen, J.(1973). Statistical power and research results. American Educational Research Journal, 10, 225-230.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
Cohen, J.(1990). Things I have learned(so far). American Psychologist, 45, 1304-1312.
Cohen, J.(1992). A power primer. Psychological Bulletin, 112, 155-159.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues for field settings. Boston: Houghton Mifflin Company.
Cooper, H., & Findley, M. (1982). Expected effect size: Estimates for statistical power analysis in social psychology. Personality and Social Psychology Bulletin, 8, 168-173.
Cowles, M., & Davis, C.(1982). On the origins of the .05 level of statistical significance. American Psychologist, 37, 553-558.
Crocker, L., & Algina, J. (1986). Introduction to classical and modern test theory. New York: Holt, Rinehart & Winston.
Cronbach, L. J., & Snow, R. E. (1977). Aptitudes and instructional methods: A handbook for research on interactions. New York: Irvington.
Crane, J. A.(1976). The power of social intervention experiments to discriminate differences between experimental and control groups. Social Service Review, 50, 224-242.
CSIRO Biometrics Unit (1996). PowerPlant (Version beta release) [Computer software]. Wembley, Australia: Author.
Daniel, T. D. (1993a). A statistical analysis of the quantitative techniques used in the Journal of Research in Music Education, 1987 through 1991. Paper presented at the Annual Meeting of the Mid-South Educational Research Association, New Orleans, LA. (ERIC Document Reproduction Service No. ED 365721)
Daniel, T. D. (1993b). A statistical power analysis of the quantitative techniques used in the Journal of Research in Music Education, 1987 through 1991. Unpublished doctoral dissertation, Auburn University, Auburn.
Dayton, C. M., Schafer, W. D., & Rogers, B. G. (1973). On appropriate uses and interpretations of power analysis: A comment. American Educational Research Journal, 10, 231-234.
Dyer, F. J. (1980). Application of power analysis concepts to test reliability research. Educational and Psychological Measurement, 40, 301-306.
Dallal, G. E. (1986). PC-SIZE: A program for sample size determination. The American Statistician, 40, 52.
Dupont, W. D. (199). PS (Version ) [Computer software]. Nashville, TN: Dept. of Preventive Medicine, Vanderbilt University.
Eller, E. G. (1994). A comparison of analysis of covariance and blocking designs in a gender and science context. Paper presented at the Annual Meeting of the Mid-South Educational Research Association, Nashville, TN. (ERIC Document Reproduction Service No. 281 561)
Fagley, N. S. (1985). Applied statistical power analysis and interpretation of nonsignificant results by research consumers. Journal of Counseling Psychology, 32, 391-396.
Faul, F., & Erdfelder, E. (1992). GPOWER: A priori, post-hoc, and compromise power analyses for MS-DOS (Version 2.0) [Computer program]. Bonn, FRG: Bonn University, Dept. of Psychology.
Feldt, L. S. (1973). What size samples for method/materials experiments? Journal of Educational Measurement, 10, 221-226.
Fisher, R. A. (1993). Statistical methods, experimental design, scientific inference (A re-issue of Statistical methods for research workers, The design of experiments, and Statistical methods and scientific inference). Oxford, Great Britain: Oxford University Press.
Fleiss, S. A. (1976). Comment on Overall and Woodward's asserted paradox concerning the measurement of change. Psychological Bulletin, 83, 774-775.
Forsyth, R. A. (1978). A note on "Planning an Experiment in the Company of Measurement Error" by Levin and Subkoviak. Applied Psychological Measurement, 2, 377-381.
Friedman, H.(1968). Magnitude of experimental effect and a table for its rapid estimation. Psychological Bulletin, 70, 245-251.
Friedman, H.(1982). Simplified determinations of statistical power, magnitude of effect and research sample sizes. Educational and Psychological Measurement, 42, 521-526.
Friel, S. N., Curcio, F. R., & Bright, G. W. (2001). Making sense of graphs: Critical factors influencing comprehension and instructional implications. Journal for Research in Mathematics Education, 32, 124-158.
Fisicaro, S. A., & Lautenschlager, G. J. (1992). Power and reliability: The case of homogeneous true score regression across treatments. Educational and Psychological Measurement, 52, 505-511.
Gay, L. R. (1996). Educational research: Competencies for analysis and application (5th ed.). Englewood Cliffs, NJ: Prentice-Hall.
Gerrodette, T. (1993). TRENDS (Version 1.0) [Computer software]. La Jolla, CA: Southwest Fisheries Science Center, National Oceanic and Atmospheric Administration.
Gibbs, J. P. (1995). Monitor (Version 6.2) [Computer software]. New York, NY: Dept. of Earth and Environmental Sciences, Columbia University.
Gigerenzer, G., & Murray, D. J. (1987). Cognition as intuitive statistics. Hillsdale, NJ: Erlbaum.
Gigerenzer, G., Swijtink, Z., Porter, T., Daston, L. J., Beatty, J., & Krüger, L. (1989). The empire of chance: How probability changed science and everyday life. Cambridge, England: Cambridge University Press.
Glass, G. V., Peckham, P. D., & Sanders, J. R. (1972). Consequences of failure to meet assumptions underlying the fixed effects analysis of variance and covariance. Review of Educational Research, 42, 237-288.
Glass, G. V., & Hopkins, K. D.(1984). Statistical methods in education and psychology(2nd ed.). Englewood Cliffs, NJ: Prentice-Hall.
Green, S. B.(1991). How many subjects does it take to do a regression analysis. Multivariate Behavioral Research, 26, 499-510.
Greenwald, A. G.(1975). Consequences of prejudice against the null hypothesis. Psychological Bulletin, 82, 1-20.
Haase, R. F., Waechter, D. M., & Solomon, G. S. (1982). How significant is a significant difference? Average effect size of research in counseling psychology. Journal of Counseling Psychology, 29, 58-65.
Haase, R. F., & Ellis, M. (1987). Multivariate analysis of variance. Journal of Counseling Psychology, 34, 404-413.
Hacking, I. (1965). Logic of statistical inference. New York: Cambridge.
Hays, W. L. (1994). Statistics (5th ed.). Orlando, FL: Holt, Rinehart and Winston.
Heppner, P. P., Kivlighan, D. M., Jr., & Wampold, B. E. (1992). Research design in counseling. Pacific Grove, CA: Brooks/Cole.
Hinkle, D. E., & Oliver, J. D. (1983). How large should the sample be? A question with no simple answer? Or…. Educational and Psychological Measurement, 43, 1041-1060.
Hoenig, J. M., & Heisey, D. M. (2001). The abuse of power: The pervasive fallacy of power calculations for data analysis. The American Statistician, 55, 19-24.
Hopkins, K. D., & Hopkins, B. R. (1979). The effect of the reliability of the dependent variable on power. Journal of Special Education, 13, 463-466.
Huberty, C. J. (1993). Historical origins of statistical testing practices: The treatment of Fisher versus Neyman-Pearson views in textbooks. Journal of Experimental Education, 61, 317-333.
Humphreys, L. G. (1993). Further comments on reliability and power of significance tests. Applied Psychological Measurement, 17, 11-14.
Humphreys, L. G., & Drasgow, F. (1989a). Some comments on the relation between reliability and statistical power. Applied Psychological Measurement, 13, 419-425.
Humphreys, L. G., & Drasgow, F. (1989b). Paradoxes, contradictions, and illusions. Applied Psychological Measurement, 13, 429-431.
Jaccard, J., & Becker, M. A. (1990). Statistics for the behavioral sciences (2nd ed.). Belmont, CA: Wadsworth.
Johnsson, J. I. (1996). Statistics and biological sense: A reply to Thomas & Juanes. Animal Behaviour, 52, 860.
Jones, B. J., & Brewer, J. K. (1972). An analysis of the power of statistical tests reported in the Research Quarterly. Research Quarterly, 43, 23-30.
Katzer, J., & Sodt, J. (1973). An analysis of the use of statistical testing in communication research. The Journal of Communication, 23, 251-265.
Kawano, T.(1993). School Psychology Journals: Relationships with related journal and external and internal quality indices. Journal of School Psychology, 31, 407-424.(ERIC Document Reproduction Service NO. EJ 470254).
Kazdin, A. E., & Bass, D.(1989). Power to detect differences between treatments in comparative psychotherapy outcome research. Journal of Consulting and Clinical Psychology, 57, 138-147.
Keppel, G. (1991). Design and analysis: A researcher's handbook (3rd ed.). Englewood Cliffs, NJ: Prentice-Hall.
Kiess, H. O.(1989). Statistical concepts for the behavioral sciences. Needham Heights, MA: Allyn and Bacon.
Kirk, R. E. (1995). Experimental design: Procedures for the behavioral sciences (3rd ed.). Pacific Grove, CA: Brooks/Cole.
Kirk, R. E. (1996). Practical significance: A concept whose time has come. Educational and Psychological Measurement, 56, 746-759.
Kopriva, R. J., & Shaw, D. (1991). Power estimates: The effect of dependent variable reliability on the power of one-factor ANOVAs. Educational and Psychological Measurement, 51, 585-595.
Kraemer, H. C., & Thiemann, S. (1987). How many subjects? Statistical power analysis in research. Beverly Hills, CA: Sage.
Lenth, R. V. (1996). PiFace[Computer software]. Department of Statistics and Actuarial Science,University of Iowa: Author.
Levin, J. (1986). Note on the relation between the power of significance test and the reliability of the measuring instrument. Multivariate Behavioral Research, 21, 255-261.
Levin, J. R.(1975). Determining sample size for planned and post-hoc analysis of variance comparisons. Journal of Educational Measurement, 12, 99-108.
Lord, F. M., & Novick, M. R. (1968). Statistical theories of mental test scores. Reading, MA: Addison-Wesley.
Maxwell, S. E. (1980). Dependent variable reliability and determination of sample size. Applied Psychological Measurement, 4, 253-260.
Mazen, A. M. M., Graf, L. A., Kellogg, C. E., & Hemmasi, M. (1987). Statistical power in contemporary strategy research. Academy of Management Journal, 30, 369-380.
Mazen, A. M. M., Hemmasi, M., & Lewis, M. F. (1987). Assessment of statistical power in contemporary strategy research. Strategic Management Journal, 8, 403-410.
Measurement and Evaluation in Counseling and Development. (1992). Guidelines for authors. Measurement and Evaluation in Counseling and Development, 25, 143.
Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806-834.
Murray, L. W., & Dosser, D. A., Jr. (1987). How significant is a significant difference? Problems with the measurement of magnitude of effect. Journal of Counseling Psychology, 34, 68-72.
Neyman, J., & Pearson, E. S. (1928). On the use and interpretation of certain test criteria for purposes of statistical inference. Biometrika, 20A, 175-240, 263-294.
Nicewander, W. A., & Price, J. M. (1978). Dependent variable reliability and the power of statistical tests. Psychological Bulletin, 85, 405-409.
Nicewander, W. A. , & Price, J. M. (1983). Reliability of measurement and the power of statistical tests: Some new results. Psychological Bulletin, 94, 524-533.
NCSS Statistical Software(1997). PASS(Version 6.0) [Computer software]. Kaysville, UT: Author.
Oehlert, G. W., & Bingham, C. (1997). MacAnova (Version 4.05) [Computer software]. St. Paul, MN: School of Statistics, University of Minnesota.
Olson, C. L. (1976). On choosing a test statistic in multivariate analysis of variance. Psychological Bulletin, 83, 579-586.
Orme, J. G., & Combs-Orme, T. D.(1986). Statistical power and type II errors in social work research. Social Work Research & Abstracts, 22, 3-10.
Orme, J. G., & Tolman, R. M. (1986). The statistical power of a decade of social work education research. Social Service Review, 60, 619-632.
Ottenbacher, K. J., & Mass, F. (1999). Quantitative research series: How to detect effects: Statistical power and evidence-based practice in occupational therapy research. American Journal of Occupational Therapy, 53, 181-188.
Overall, J. E. (1989a). Contradictions can never a paradox resolve. Applied Psychological Measurement, 13, 426-428.
Overall, J. E. (1989b). Distinguishing between measurements and dependent variables. Applied Psychological Measurement, 13, 432-433.
Overall, J. E. , & Woodward, J. A. (1975). Unreliability of difference scores: A paradox for measurement of change. Psychological Bulletin, 82, 85-86.
Overall, J. E. , & Woodward, J. A. (1976). Reassertion of the paradoxical power of tests of significance based on unreliable difference scores. Psychological Bulletin, 83, 776-777.
Overall, J. E. (1969). Classical statistical hypothesis testing within the context of Bayesian theory. Psychological Bulletin, 71, 285-292.
Pearson, E. S., & Hartley, H. O. (1951). Charts of the power function for analysis of variance tests, derived from the non-central F-distribution. Biometrika, 38, 112-130.
Peladeau, N.(1995). Simstat(Version 3.5)[Computer software].Montreal, Canada: Provalis Research.
Peterman, R. M. (1990). Statistical power analysis can improve fisheries research and management. Can. J. Fish. Aquat. Sci., 47, 2-15.
Rogers, W. T., & Hopkins, K. D. (1988). Power estimates in the presence of a covariate and measurement error. Educational and Psychological Measurement, 48, 647-656.
Rohlf, F. J.(1999). TpsPower(Version 1.01) [Computer software]. Stony Brook, NY: Author.
Rosnow, R. L., & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist, 44, 1276-1284.
Rossi, J. (1990). Statistical power of psychological research: What have we gained in 20 years? Journal of Consulting and Clinical Psychology, 58, 646-656.
Rosenthal, R., & Rubin, D. B. (1982). A simple, general purpose display of magnitude of experimental effect. Journal of Educational Psychology, 74, 166-169.
Rosenthal, R. (1979). The "file drawer problem" and tolerance for null results. Psychological Bulletin, 86, 638-641.
Sawyer, A. G., & Ball, A. D. (1981). Statistical power and effect size in marketing research. Journal of Marketing Research, 18, 275-290.
Scariano, S. M., & Davenport, J. M. (1987). The effects of violations of independence assumptions in the one-way ANOVA. The American Statistician, 41, 123-129.
Schafer, W. D. (1991). Power analysis in interpreting statistical nonsignificance. Measurement and Evaluation in Counseling and Development, 23, 146-148.
Schafer, W. D. (1993). Interpreting statistical significance and nonsignificance. Journal of Experimental Education, 61, 383-387.
Sedlmeier, P., & Gigerenzer, G. (1989). Do studies of statistical power have an effect on the power of studies? Psychological Bulletin, 105, 309-316.
Siegel, S., & Castellan, N. J., Jr. (1988). Nonparametric statistics for the behavioral sciences (2nd ed.). New York: McGraw-Hill.
Smith, M. L., & Glass, G. V. (1977). Meta-analysis of psychotherapy outcome studies. American Psychologist, 32, 752-760.
Stevens, J.(1980). Power of the multivariate analysis of variance tests. Psychological Bulletin, 88, 728-737.
Stevens, J. (1986). Applied multivariate statistics for the social sciences. Hillsdale, NJ: Harper & Row.
Stevens, J. (1992). Applied multivariate statistics for the social sciences (2nd ed.). Hillsdale, NJ: Harper & Row.
Subkoviak, M. J., & Levin, J. R. (1977). Fallibility of a measurement and the power of a statistical test. Journal of Educational Measurement, 14, 47-52.
Sutcliffe, J. P. (1980). On the relationship of reliability to statistical power. Psychological Bulletin, 88, 509-515.
Tress, M. V.(1997).Distribution Function and Probability Calculator(Version 2.4) [Computer software]. Arlington, TX: Author.
Tukey, J. W. (1977). Exploratory data analysis. Reading, MA: Addison-Wesley.
Tversky, A., & Kahneman, D. (1971). Belief in the law of small numbers. Psychological Bulletin, 76, 105-110.
Vogt, W. P. (1999). Dictionary of statistics & methodology: A nontechnical guide for the social sciences (2nd ed.). Thousand Oaks, CA: Sage.
Williams, R. H. , Zimmerman, D. W. , & Zumbo, B. D. (1995). Impact of measurement error on statistical power: Review of an old paradox. Journal of Experimental Education, 63, 363-370.
Woolley, T. W. (1983). A comprehensive power-analytic investigation of research in medical education. Journal of Medical Education, 58, 710-715.
Woolley, T. W., & Dawson, G. O. (1983). A follow-up power analysis of the statistical tests used in the Journal of Research in Science Teaching. Journal of Research in Science Teaching, 20, 673-681.
Wu, Y. C., & McLean, J. E. (1994). A priori versus post-hoc: Comparing statistical power among ANOVA, block designs, and ANCOVA. Paper presented at the Annual Meeting of the Mid-South Educational Research Association, Nashville, TN. (ERIC Document Reproduction Service No. 387 531)
Zimmerman, D. W. , & Williams, R. H. (1986). Note on the reliability of experimental measures and the power of significance tests. Psychological Bulletin, 100, 123-124.
Zimmerman, D. W. , Williams, R. H. , & Zumbo, B. D. (1993). Reliability of measurement and power of significance tests based on differences. Applied Psychological Measurement, 17, 1-9.
 
 
 
 