
Detailed Record

Title: Deep Learning Techniques Applied to Medical Imaging
Author: 陶雨涵
Author (romanized): Tao, Yu-Han
Institution: National Yang Ming Chiao Tung University
Department: Institute of Information Management
Advisor: 蔡銘箴
Degree: Doctoral
Publication year: 2023
Keywords: Support Vector Machine (SVM), Convolutional Neural Network, Deep Learning, Medical Image Classification
[1] R. C. Gonzalez, R. E. Woods, Digital Image Processing, third ed., Prentice Hall, New Jersey, 2008.
[2] R. C. Gonzalez, R. E. Woods, S. L. Eddins, Digital Image Processing Using MATLAB, second ed., Gatesmark, United States, 2009.
[3] C. W. Hsu, C. C. Chang, C. J. Lin, A Practical Guide to Support Vector Classification, National Taiwan University, Taipei, 2003.
[4] M. Goldbaum, S. Moezzi, A. Taylor, S. Chatterjee, J. Boyd, E. Hunter, R. Jain, “Automated diagnosis and image understanding with object extraction, object classification, and inferencing in retinal images,” in Proceedings of the 3rd IEEE International Conference on Image Processing, pp. 695–698, 1996.
[5] S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, M. Goldbaum, “Detection of blood vessels in retinal images using two-dimensional matched filters,” IEEE Transactions on Medical Imaging, vol. 8, no. 3, pp. 263–269, 1989.
[6] A. Hoover, M. Goldbaum, “Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels,” IEEE Transactions on Medical Imaging, vol. 22, no. 8, pp. 951–958, 2003.
[7] A. Hoover, V. Kouznetsova, M. Goldbaum, “Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response,” IEEE Transactions on Medical Imaging, vol. 19, no. 3, pp. 203–210, 2000.
[8] A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, S. Thrun, “Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks,” Nature, vol. 542, pp. 115–118, 2017.
[9] B. E. Bejnordi, M. Veta, P. J. van Diest, B. van Ginneken, N. Karssemeijer, G. Litjens, J. A. van der Laak, M. Hermsen, Q. F. Manson, M. Balkenhol, et al., “Diagnostic Assessment of Deep Learning Algorithms for Detecting Lymph Node Metastases in Women with Breast Cancer,” JAMA, vol. 318, pp. 2199–2210, 2017.
[10] A. Kleppe, F. Albregtsen, L. Vlatkovic, M. Pradhan, B. Nielsen, T. S. Hveem, H. A. Askautrud, G. B. Kristensen, A. Nesbakken, J. Trovik, et al., “Chromatin organisation and cancer prognosis: a pan-cancer study,” The Lancet Oncology, vol. 19, pp. 356–369, 2018.
[11] D. Bychkov, N. Linder, R. Turkki, S. Nordling, P. E. Kovanen, C. Verrill, M. Walliander, M. Lundin, C. Haglund, J. Lundin, “Deep learning based tissue analysis predicts outcome in colorectal cancer,” Scientific Reports, vol. 8, 2018.
[12] A. Krizhevsky, I. Sutskever, G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” in Advances in Neural Information Processing Systems 25 (NIPS 2012), pp. 1097–1105, 2012.
[13] M. D. Zeiler, R. Fergus, “Visualizing and Understanding Convolutional Networks,” in European Conference on Computer Vision, pp. 818–833, 2014.
[14] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, T. Darrell, “DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition,” in Proceedings of the 31st International Conference on Machine Learning, pp. 647–655, 2014.
[15] A. S. Razavian, H. Azizpour, J. Sullivan, S. Carlsson, “CNN Features Off-the-Shelf: An Astounding Baseline for Recognition,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 512–519, 2014.
[16] J. Yosinski, J. Clune, Y. Bengio, H. Lipson, “How transferable are features in deep neural networks?,” in Advances in Neural Information Processing Systems 27 (NIPS 2014), pp. 3320–3328, 2014.
[17] T. DeVries, G. W. Taylor, “Improved Regularization of Convolutional Neural Networks with Cutout,” arXiv preprint arXiv:1708.04552, 2017.
[18] Z. Zhong, L. Zheng, G. Kang, S. Li, Y. Yang, “Random Erasing Data Augmentation,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 13001–13008, 2020.
[19] E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, Q. V. Le, “AutoAugment: Learning Augmentation Strategies from Data,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[20] E. D. Cubuk, B. Zoph, J. Shlens, Q. V. Le, “RandAugment: Practical Automated Data Augmentation with a Reduced Search Space,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020.
[21] S. Xie, R. Girshick, P. Dollár, Z. Tu, K. He, “Aggregated Residual Transformations for Deep Neural Networks,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5987–5995, 2017.
[22] J. Hu, L. Shen, G. Sun, “Squeeze-and-Excitation Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 8, 2020.
[23] M. Tan, Q. V. Le, “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks,” in Proceedings of the 36th International Conference on Machine Learning, pp. 6105–6114, 2019.
[24] T. Y. Lin, P. Goyal, R. Girshick, K. He, P. Dollár, “Focal Loss for Dense Object Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 2, pp. 318–327, 2020.
[25] M. Berman, A. Rannen Triki, M. B. Blaschko, “The Lovász-Softmax Loss: A Tractable Surrogate for Optimizing the Intersection-over-Union Measure in Neural Networks,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018.
[26] M.J. Tsai, Y.H. Tao, “Deep Learning Techniques for the Classification of Colorectal Cancer Tissue,” in 2020 14th International Conference on Signal Processing and Communication Systems (ICSPCS), pp. 263–269, 2020.
[27] A. K. Mikkilineni, O. Arslan, P. J. Chiang, R. M. Kumontoy, J. P. Allebach, G. T. Chiu, et al., “Printer forensics using SVM techniques,” in NIP21: 21st International Conference on Digital Printing Technologies, Baltimore, MD, pp. 223–226, 2005.
[28] M. J. Tsai, J. S. Yin, I. Yuadi, J. Liu, “Digital forensics of printed source identification for Chinese characters,” in 2016 IEEE International Conference on Multimedia and Expo (ICME), 2016.
[29] M. J. Tsai, J. Liu, “Digital forensics for printed source identification,” in 2013 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 2347–2350, 2013.
[30] A. Ferreira, L. C. Navarro, G. Pinheiro, J. A. D. Santos, A. Rocha, “Laser printer attribution: Exploring new features and beyond,” Forensic Science International, vol. 247, pp. 105–125, 2015.
[31] A. Ferreira, L. Bondi, L. Baroffio, P. Bestagini, J. Huang, J. D. Santos, S. Tubaro, A. Rocha, “Data-Driven Feature Characterization Techniques for Laser Printer Attribution,” IEEE Transactions on Information Forensics and Security, vol. 12, no. 8, pp. 1860–1873, 2017.
[32] J. H. Choi, H. Y. Lee, H. K. Lee, “Color laser printer forensic based on noisy feature and support vector machine classifier,” Multimedia Tools and Applications, vol. 67, pp. 363–382, 2013.
[33] D. G. Kim, H. K. Lee, “Color laser printer identification using photographed halftone images,” in 2014 22nd European Signal Processing Conference (EUSIPCO), pp. 795–799, 2014.
[34] M. J. Tsai, Y. H. Tao, I. Yuadi, “Deep learning for printed document source identification,” Signal Processing: Image Communication, vol. 70, pp. 184–198, 2019.
[35] S. J. Ryu, K. Y. Lee, D. H. Im, J. H. Choi, H. K. Lee, “Electrophotographic printer identification by halftone texture analysis,” in 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1846–1849, 2010.
[36] S. Tong, D. Koller, “Support vector machine active learning with applications to text classification,” Journal of Machine Learning Research, vol. 2, pp. 45–66, 2001.
[37] K. I. Kim, K. Jung, S. H. Park, H. J. Kim, “Support vector machines for texture classification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 11, pp. 1542–1550, 2002.
[38] D. Yu, L. Deng, Automatic Speech Recognition: A Deep Learning Approach, Springer-Verlag, London, 2015.
[39] Y. Freund, R. E. Schapire, “A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting,” Journal of Computer and System Sciences, vol. 55, no. 1, pp. 119–139, 1997.
[40] M. J. Tsai, I. Yuadi, Y. H. Tao, J. S. Yin, “Source Identification for Printed Documents,” in 2017 IEEE 3rd International Conference on Collaboration and Internet Computing (CIC), 2017.
[41] J. N. Kather, C.-A. Weis, F. Bianconi, S. M. Melchers, L. R. Schad, T. Gaiser, A. Marx, F. Zöllner, “Multi-class texture analysis in colorectal cancer histology,” Scientific Reports, vol. 6, article 27988, 2016.
[42] M. J. Tsai, Y. H. Tao, “Machine Learning Based Common Radiologist-Level Pneumonia Detection on Chest X-rays,” in 2019 13th International Conference on Signal Processing and Communication Systems (ICSPCS), 2019.
[43] J. Xu, X. Luo, G. Wang, H. Gilmore, A. Madabhushi, “A Deep Convolutional Neural Network for segmenting and classifying epithelial and stromal regions in histopathological images,” Neurocomputing, vol. 191, pp. 214–223, 2016.
[44] Y. Du, R. Zhang, A. Zargari, T. C. Thai, C. C. Gunderson, K. M. Moxley, H. Liu, B. Zheng, Y. Qiu, “Classification of Tumor Epithelium and Stroma by Exploiting Image Features Learned by Deep Convolutional Neural Networks,” Annals of Biomedical Engineering, vol. 46, no. 12, pp. 1988–1999, 2018.
[45] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, A. Oliva, “Learning Deep Features for Scene Recognition using Places Database,” in Advances in Neural Information Processing Systems 27 (NIPS 2014), pp. 487–495, 2014.
[46] Y. Xu, Z. Jia, L.-B. Wang, Y. Ai, F. Zhang, M. Lai, E. I.-C. Chang, “Large scale tissue histopathology image classification, segmentation, and visualization via deep convolutional activation features,” BMC Bioinformatics, vol. 18, article 281, 2017.
[47] O. Russakovsky, J. Deng, H. Su, J. Krause, et al., “ImageNet Large Scale Visual Recognition Challenge,” International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.
[48] B. E. Bejnordi, M. Mullooly, R. M. Pfeiffer, S. Fan, P. M. Vacek, D. L. Weaver, S. Herschorn, L. A. Brinton, B. van Ginneken, N. Karssemeijer, et al., “Using deep convolutional neural networks to identify and classify tumor-associated stroma in diagnostic breast biopsies,” Modern Pathology, vol. 31, pp. 1502–1512, 2018.
[49] J. N. Kather, J. Krisam, P. Charoentong, T. Luedde, E. Herpel, C.-A. Weis, T. Gaiser, A. Marx, N. A. Valous, D. Ferber, et al., “Predicting survival from colorectal cancer histology slides using deep learning: A retrospective multicenter study,” PLOS Medicine, vol. 16, 2019.
[50] K. Simonyan, A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” arXiv preprint arXiv:1409.1556, 2014.
[51] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, K. Keutzer, “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size,” arXiv preprint arXiv:1602.07360, 2016.
[52] X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, R. M. Summers, “ChestX-ray8: Hospital-Scale Chest X-Ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2097–2106, 2017.
[53] P. O. Pinheiro, R. Collobert, “From image-level to pixel-level labeling with Convolutional Networks,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1713–1721, 2015.
[54] P. Tschandl, C. Rosendahl, H. Kittler, “The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions,” Scientific Data, vol. 5, article 180161, 2018.
[55] D. S. Kermany, M. Goldbaum, W. Cai, C. C. Valentim, H. Liang, S. L. Baxter, A. McKeown, G. Yang, X. Wu, F. Yan, et al., “Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning,” Cell, vol. 172, no. 5, pp. 1122–1131, 2018.
[56] W. Al-Dhabyani, M. Gomaa, H. Khaled, A. Fahmy, “Dataset of breast ultrasound images,” Data in Brief, vol. 28, 2020.
[57] D. H. Hubel, T. N. Wiesel, “Receptive fields, binocular interaction and functional architecture in the cat's visual cortex,” The Journal of Physiology, vol. 160, no. 1, pp. 106–154, 1962.
[58] G. Hinton, R. Salakhutdinov, “Reducing the Dimensionality of Data with Neural Networks,” Science, vol. 313, pp. 504–507, 2006.
[59] J. Markoff, “How Many Computers to Identify a Cat? 16,000,” The New York Times, 2012.
[60] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[61] M. Lin, Q. Chen, S. Yan, “Network in Network,” arXiv preprint arXiv:1312.4400, 2014.
[62] D. E. Rumelhart, G. E. Hinton, R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, pp. 533–536, 1986.
[63] S. S. Yadav, S. M. Jadhav, “Deep convolutional neural network based medical image classification for disease diagnosis,” Journal of Big Data, vol. 6, pp. 1–18, 2019.
[64] A. Krizhevsky, I. Sutskever, G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in NIPS'12: Proceedings of the 25th International Conference on Neural Information Processing Systems, pp. 1097–1105, 2012.
[65] K. He, X. Zhang, S. Ren, J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016.
[66] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, Z. Wojna, “Rethinking the Inception Architecture for Computer Vision,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2818–2826, 2016.
[67] G. Huang, Z. Liu, L. van der Maaten, K. Q. Weinberger, “Densely Connected Convolutional Networks,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[68] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, H. Adam, “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” arXiv preprint arXiv:1704.04861, 2017.
[69] F. Chollet, “Xception: Deep Learning with Depthwise Separable Convolutions,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1800–1807, 2017.
 
 
 
 