
Record Details

Title: 電腦生成的新聞有多真?--文字自動生成技術運用於經濟新聞的評估 [How Real Is Computer-Generated News? An Evaluation of Automated Text Generation Applied to Economic News]
Journal: 圖書資訊學刊 (Journal of Library and Information Studies)
Authors: 曾元顯、林郁綺
Authors (English): Tseng, Yuen-hsien; Lin, Yu-chi
Publication date: 2021
Volume/Issue: 19(1)
Pages: 43-65
Keywords: 電腦生成新聞、文字自動生成、新聞偵測、深度學習、人工智慧
Keywords (English): Computer-generated news; Automated text generation; News detection; Deep learning; Artificial intelligence
Journal Articles
1. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
2. Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.
3. Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211-236.
4. Shu, K., Sliva, A., Wang, S., Tang, J., & Liu, H. (2017). Fake news detection on social media: A data mining perspective. ACM SIGKDD Explorations Newsletter, 19(1), 22-36.
5. Conroy, N. K., Rubin, V. L., & Chen, Y. (2015). Automatic deception detection: Methods for finding fake news. Proceedings of the Association for Information Science and Technology, 52(1), 1-4.
6. 楊德倫、曾元顯 (2020). 建置與評估文字自動生成的情感對話系統 [Building and evaluating an emotional dialogue system with automated text generation]. 教育資料與圖書館學 (Journal of Educational Media & Library Sciences), 57(3), 355-378.
7. Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780.
Conference Papers
1. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems (pp. 5998-6010). Neural Information Processing Systems Foundation.
2. Ruchansky, N., Seo, S., & Liu, Y. (2017). CSI: A hybrid deep model for fake news detection. In Proceedings of the 2017 ACM Conference on Information and Knowledge Management (pp. 797-806). Association for Computing Machinery.
3. Shu, K., Cui, L., Wang, S., Lee, D., & Liu, H. (2019). dEFEND: Explainable fake news detection. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 395-405). Association for Computing Machinery.
4. Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems (pp. 1097-1105). Curran Associates Inc.
Books
1. Russell, S. J., & Norvig, P. (2009). Artificial intelligence: A modern approach. Prentice-Hall, Inc.
Preprints and Reports
1. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805. https://doi.org/10.48550/arXiv.1810.04805
2. Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training. https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf
3. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., ... Amodei, D. (2020). Language models are few-shot learners. arXiv:2005.14165. https://arxiv.org/abs/2005.14165
4. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
Other
1. Du, Z., Cheng, H., Chiu, H., & Yida (2019). GPT2-Chinese: Tools for training GPT2 model in Chinese language. https://github.com/Morizeyao/GPT2-Chinese
2. ByteDance (2019). WSDM: Fake news classification. https://www.kaggle.com/c/fake-news-pair-classification-challenge
 
 
 
 