Title: Persons and Personalized AI: Conflicts and Opportunities in the Algorithmic Society
Author: 葛陵偉
Author (English): Greene, Travis
Institution: National Tsing Hua University
Department: Institute of Service Science
Advisor: 徐茉莉
Degree: Doctoral
Publication date: 2023
Keywords: algorithms; personalization; society; artificial intelligence; machine learning; platforms; data science ethics
Abstract: This dissertation investigates AI-driven personalization, digital platforms, the philosophical concept of the person, and the legal regulation of personalization from a sociotechnical perspective. It aims to articulate how AI-based personalization affects individual persons, society, and the practice and advancement of scientific research. Drawing on a range of interdisciplinary sources and perspectives, the dissertation advances a critical yet holistic conception of personalization, suggesting that personalization can be viewed as the goal-directed control of human persons. The four papers that comprise the dissertation cover tensions in both the private sector (i.e., commerce) and the public sector (i.e., criminal justice). These tensions include new engineering-centric personalization technologies that threaten to stifle human autonomy; conflicts of interest between the business goals of corporate-controlled platforms and the goals of scientific researchers; ethical and economic issues related to the commercial viability of explainable AI; ideological divisions separating the AI research community; and frictions between the traditional normative goals of criminal justice and the increasing use of algorithmic personalized risk predictions in evidence-based criminal justice policy. Based on the premise that philosophy, law, and technology mutually shape and influence one another, we argue that these tensions simultaneously represent opportunities to better align the technology of personalization with legal and philosophical conceptions of the person.
[Abbasi et al., 2016] Abbasi, A., Sarker, S., and Chiang, R. H. (2016). Big data research in information systems: Toward an inclusive research agenda. Journal of the association for information systems, 17(2):3.
[Abbeel and Ng, 2004] Abbeel, P. and Ng, A. Y. (2004). Apprenticeship learning via inverse reinforcement learning. In Proceedings of the twenty-first international conference on Machine learning, page 1.
[Abdollahpouri et al., 2020] Abdollahpouri, H., Adomavicius, G., Burke, R., Guy, I., Jannach, D., Kamishima, T., Krasnodebski, J., and Pizzato, L. (2020). Multistakeholder recommendation: Survey and research directions. User Modeling and User-Adapted Interaction, 30(1):127–158.
[Abdollahpouri et al., 2019] Abdollahpouri, H., Burke, R., and Mobasher, B. (2019). Managing popularity bias in recommender systems with personalized re-ranking. arXiv preprint arXiv:1901.07555.
[Abel et al., 2021] Abel, D., Dabney, W., Harutyunyan, A., Ho, M. K., Littman, M., Precup, D., and Singh, S. (2021). On the expressivity of markov reward. Advances in Neural Information Processing Systems, 34:7799–7812.
[Abel et al., 2016] Abel, D., MacGlashan, J., and Littman, M. L. (2016). Reinforcement learning as a framework for ethical decision making. In Workshops at the thirtieth AAAI conference on artificial intelligence.
[Ackrill, 1988] Ackrill, J. L. (1988). A new Aristotle reader. Princeton University Press.
[ACLU, 2020] ACLU (March 27, 2020). Summary judgment opinion. https:// www.aclu.org/legal-document/summary-judgment-opinion-0. ACLU, On- line, accessed February 24, 2022.
[Acquisti et al., 2015] Acquisti, A., Brandimarte, L., and Loewenstein, G. (2015). Privacy and human behavior in the age of information. Science, 347(6221):509– 514.
[Adler et al., 2016] Adler, J. M., Lodi-Smith, J., Philippe, F. L., and Houle, I. (2016). The incremental validity of narrative identity in predicting well-being: A review of the field and recommendations for the future. Personality and Social Psychology Review, 20(2):142–175.
[Adomavicius et al., 2013] Adomavicius, G., Bockstedt, J. C., Curley, S. P., and Zhang, J. (2013). Do recommender systems manipulate consumer preferences? a study of anchoring effects. Information Systems Research, 24(4):956–975.
[Adomavicius and Tuzhilin, 2005a] Adomavicius, G. and Tuzhilin, A. (2005a). Personalization technologies: a process-oriented perspective. Communications of the ACM, 48(10):83–90.
[Adomavicius and Tuzhilin, 2005b] Adomavicius, G. and Tuzhilin, A. (2005b). Toward the next generation of recommender systems: A survey of the state- of-the-art and possible extensions. IEEE transactions on knowledge and data engineering, 17(6):734–749.
[Adomavicius and Yang, 2022] Adomavicius, G. and Yang, M. (2022). Integrating behavioral, economic, and technical insights to understand and address algo- rithmic bias: A human-centric perspective. ACM Transactions on Management Information Systems (TMIS), 13(3):1–27.
[Adorno, 2017] Adorno, T. W. (2017). An introduction to dialectics. John Wiley & Sons.
[Adorno and Horkheimer, 1997] Adorno, T. W. and Horkheimer, M. (1997). Di- alectic of enlightenment. Verso.
[Adrian and Ashcraft, 2012] Adrian, T. and Ashcraft, A. B. (2012). Shadow bank- ing regulation. Annu. Rev. Financ. Econ., 4(1):99–140.
[Aggarwal, 2016] Aggarwal, C. C. (2016). Recommender systems. Springer.
[Aguirre et al., 2015] Aguirre, E., Mahr, D., Grewal, D., De Ruyter, K., and Wet- zels, M. (2015). Unraveling the personalization paradox: The effect of information collection and trust-building strategies on online advertisement effective- ness. Journal of retailing, 91(1):34–49.
[Aizenberg and van den Hoven, 2020] Aizenberg, E. and van den Hoven, M. (2020). Designing for human rights in ai. Big Data & Society, 7(2).
[Akaike et al., 1998] Akaike, H., Parzen, E., Tanabe, K., and Kitagawa, G. (1998). Selected papers of hirotugu akaike. Springer Science & Business Media.
[Albonetti, 1986] Albonetti, C. A. (1986). Criminality, prosecutorial screening, and uncertainty: Toward a theory of discretionary decision making in felony case processings. Criminology, 24(4):623–644.
[Albonetti, 1990] Albonetti, C. A. (1990). Race and the probability of pleading guilty. Journal of Quantitative Criminology, 6(3):315–334.
[Albonetti, 1992] Albonetti, C. A. (1992). Charge reduction: An analysis of prose- cutorial discretion in burglary and robbery cases. Journal of Quantitative Crim- inology, 8(3):317–333.
[Albrecht, 2016] Albrecht, J. (2016). How the gdpr will change the world. European Data Protection Law Review, 2(3):287–289.
[Albu and Flyverbom, 2019] Albu, O. B. and Flyverbom, M. (2019). Organiza- tional transparency: Conceptualizations, conditions, and consequences. Busi- ness & Society, 58(2):268–297.
[Alexy, 2010] Alexy, R. (2010). A theory of constitutional rights. Oxford University Press, USA.
[Allen, 2019] Allen, H. J. (2019). Regulatory sandboxes. Geo. Wash. L. Rev., 87:579.
[Allen & Overy, 2016] Allen & Overy (2016). Binding corporate rules. (white paper).
[Amodei et al., 2016] Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schul- man, J., and Man ́e, D. (2016). Concrete problems in ai safety. arXiv preprint arXiv:1606.06565.
[Anagnostopoulos et al., 2016] Anagnostopoulos, I., Zeadally, S., and Exposito, E. (2016). Handling big data: research challenges and future directions. The Journal of Supercomputing, 72:1494–1516.
[Anandkumar, 2020] Anandkumar, A. (December 16, 2020). My heartfelt apology. https://anima-ai.org/2020/12/16/my-heartfelt-apology/. Anima on AI, Online, accessed August 6, 2022.
[Anderson, 1995] Anderson, E. (1995). Value in ethics and economics. Harvard University Press.
[Anderson, 2011] Anderson, E. (2011). Democracy, public policy, and lay assess- ments of scientific testimony. Episteme, 8(2):144–164.
[Anderson and Anderson, 2007] Anderson, M. and Anderson, S. L. (2007). Ma- chine ethics: Creating an ethical intelligent agent. AI magazine, 28(4):15–15.
[Andrade, 2012] Andrade, N. N. G. d. (2012). Oblivion: The right to be different from oneself-reproposing the right to be forgotten. In VII international con- ference on internet, law & politics. Net neutrality and other challenges for the future of the Internet”, IDP. Revista de Internet, Derecho y Pol ́ıtica, number 13, pages 122–137.
[Angerschmid et al., 2022] Angerschmid, A., Zhou, J., Theuermann, K., Chen, F., and Holzinger, A. (2022). Fairness and explanation in ai-informed decision making. Machine Learning and Knowledge Extraction, 4(2):556–579.
[Angrist and Pischke, 2009] Angrist, J. D. and Pischke, J.-S. (2009). Mostly harm- less econometrics: An empiricist’s companion. Princeton university press.
[Angwin et al., 2016] Angwin, J., Larson, J., Mattu, S., and Kirchner, L. (2016). Machine bias. Online, accessed February 24, 2022.
[Anonymous, ] Anonymous. Tmlr ethics guidelines. https://www.jmlr.org/ tmlr/ethics.html. TMLR Ethics Guidelines, Online, accessed August 12, 2022.
[Anonymous, 2022] Anonymous (2022). Ethics guidelines. https://cvpr2022. thecvf.com/ethics-guideline. The IEEE/CVF Computer Vision and Pat- tern Recognition Conference (CVPR), Online, accessed August 9, 2022.
[Aral, 2021] Aral, S. (2021). The hype machine: how social media disrupts our elections, our economy, and our health–and how we must adapt. Currency.
[Arendt and Kroh, 1964] Arendt, H. and Kroh, J. (1964). Eichmann in Jerusalem. Viking Press New York.
[Argall et al., 2009] Argall, B. D., Chernova, S., Veloso, M., and Browning, B. (2009). A survey of robot learning from demonstration. Robotics and au- tonomous systems, 57(5):469–483.
[Aristotle, 1998] Aristotle (1998). The Nicomachean Ethics (D. Ross, Trans.). Oxford University Press, Oxford, England.
[Arrieta et al., 2020] Arrieta, A. B., D ́ıaz-Rodr ́ıguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garc ́ıa, S., Gil-Lo ́pez, S., Molina, D., Benjamins, R., et al. (2020). Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. Information fusion, 58:82– 115.
[Arya et al., 2019] Arya, V., Bellamy, R. K. E., Chen, P.-Y., Dhurandhar, A., Hind, M., Hoffman, S. C., Houde, S., Liao, Q. V., Luss, R., Mojsilovi ́c, A., Mourad, S., Pedemonte, P., Raghavendra, R., Richards, J., Sattigeri, P., Shan- mugam, K., Singh, M., Varshney, K. R., Wei, D., and Zhang, Y. (2019). One explanation does not fit all: A toolkit and taxonomy of ai explainability tech- niques.
[Arzate Cruz and Igarashi, 2020] Arzate Cruz, C. and Igarashi, T. (2020). A sur- vey on interactive reinforcement learning: Design principles and open challenges. In Proceedings of the 2020 ACM designing interactive systems conference, pages 1195–1209.
[Asaro, 2007] Asaro, P. M. (2007). Robots and responsibility from a legal perspec- tive. Proceedings of the IEEE, 4(14):20–24.
[Ashby, 1957] Ashby, W. R. (1957). An introduction to cybernetics.
[Ashford and Risinger, 1969] Ashford, H. A. and Risinger, D. M. (1969). Presump- tions, assumptions, and due process in criminal cases: A theoretical overview. The Yale Law Journal, 79(2):165–208.
[Ashmore and Jussim, 1997] Ashmore, R. D. and Jussim, L. (1997). Self and iden- tity: Fundamental issues. Oxford University Press.
[Ashurst et al., 2022] Ashurst, C., Hine, E., Sedille, P., and Carlier, A. (2022). Ai ethics statements: Analysis and lessons learnt from neurips broader impact statements. In 2022 ACM Conference on Fairness, Accountability, and Trans- parency, pages 2047–2056.
[Ashworth, 2005] Ashworth, A. (2005). Sentencing and criminal justice. Cambridge University Press.
[Ashworth and Blake, 1996] Ashworth, A. and Blake, M. (1996). The presumption of innocence in english criminal law. Criminal Law Review, pages 306–317.
[ ̊Astro ̈m and Murray, 2021] ̊Astro ̈m, K. J. and Murray, R. M. (2021). Feedback systems: an introduction for scientists and engineers. Princeton university press.
[Athey and Imbens, 2016] Athey, S. and Imbens, G. (2016). Recursive partition- ing for heterogeneous causal effects. Proceedings of the National Academy of Sciences, 113(27):7353–7360.
[Athey and Imbens, 2017] Athey, S. and Imbens, G. W. (2017). The state of applied econometrics: Causality and policy evaluation. Journal of Economic per- spectives, 31(2):3–32.
[Athey and Luca, 2019] Athey, S. and Luca, M. (2019). Economists (and economics) in tech companies. Journal of Economic Perspectives, 33(1):209–30.
[Auerhahn, 1999] Auerhahn, K. (1999). Selective incapacitation and the problem of prediction. Criminology, 37(4):703–734.
[Augustine, 2009] Augustine, S. (2009). The city of God. Hendrickson Publishers.
[Aulisio et al., 2000] Aulisio, M. P., Arnold, R. M., Youngner, S. J., for Health, S., and Values, H. (2000). Health care ethics consultation: nature, goals, and competencies: a position paper from the society for health and human values. Annals of internal medicine, 133(1):59–69.
[Avram et al., 2020] Avram, M., Micallef, N., Patil, S., and Menczer, F. (2020). Exposure to social engagement metrics increases vulnerability to misinformation. arXiv preprint arXiv:2005.04682.
[Ayer, 1952] Ayer, A. J. (1952). Language, truth, and logic. Courier Corporation.
[Ayuso et al., 2019] Ayuso, M., Guillen, M., and Nielsen, J. P. (2019). Improving automobile insurance ratemaking using telematics: incorporating mileage and driver behaviour data. Transportation, 46(3):735–752.
[Baars et al., 2003] Baars, B. J., Ramsøy, T. Z., and Laureys, S. (2003). Brain, conscious experience and the observing self. Trends in neurosciences, 26(12):671–675.
[Badri Satya et al., 2016] Badri Satya, P. R., Lee, K., Lee, D., Tran, T., and Zhang, J. (2016). Uncovering fake likers in online social networks. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pages 2365–2370.
[Baehr, 2011] Baehr, J. (2011). The inquiring mind: On intellectual virtues and virtue epistemology. OUP Oxford.
[Baer, 2013] Baer, D. (April 18, 2013). Why data god jeffrey hammerbacher left facebook to found cloudera. https://www.fastcompany.com/3008436/ why-data-god-jeffrey-hammerbacher-left-facebook-found-cloudera. Bloomberg, Online, accessed August 6, 2022.
[Bak-Coleman et al., 2021] Bak-Coleman, J. B., Alfano, M., Barfuss, W., Bergstrom, C. T., Centeno, M. A., Couzin, I. D., Donges, J. F., Galesic, M., Ger- sick, A. S., Jacquet, J., et al. (2021). Stewardship of global collective behavior. Proceedings of the National Academy of Sciences, 118(27):e2025764118.
[Balkin, 2017] Balkin, J. M. (2017). 2016 sidley austin distinguished lecture on big data law and policy: The three laws of robotics in the age of big data. Ohio St. LJ, 78:1217.
[Bambauer and Zarsky, 2018] Bambauer, J. and Zarsky, T. (2018). The algorithm game. Notre Dame L. Rev., 94:1.
[Bapna et al., 2016] Bapna, R., Ramaprasad, J., Shmueli, G., and Umyarov, A. (2016). One-way mirrors in online dating: A randomized field experiment. Man- agement Science, 62(11):3100–3122.
[Bar-Gill and Ben-Shahar, 2021] Bar-Gill, O. and Ben-Shahar, O. (2021). Manip- ulation by mislaid priorities. Harvard Law School John M. Olin Center Discus- sion Paper, (1063).
[Barabas et al., 2020] Barabas, C., Doyle, C., Rubinovitz, J., and Dinakar, K. (2020). Studying up: reorienting the study of algorithmic fairness around issues of power. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 167–176.
[Bareinboim and Pearl, 2016] Bareinboim, E. and Pearl, J. (2016). Causal inference and the data-fusion problem. Proceedings of the National Academy of Sciences, 113(27):7345–7352.
[Bargh and Chartrand, 1999] Bargh, J. A. and Chartrand, T. L. (1999). The unbearable automaticity of being. American psychologist, 54(7):462.
[Barocas et al., 2019] Barocas, S., Hardt, M., and Narayanan, A. (2019). Fairness and Machine Learning. Fairmlbook. http://www.fairmlbook.org.
[Barocas and Selbst, 2016a] Barocas, S. and Selbst, A. D. (2016a). Big data’s disparate impact. California law review, pages 671–732.
[Barocas and Selbst, 2016b] Barocas, S. and Selbst, A. D. (2016b). Big data’s disparate impact. Calif. L. Rev., 104:671.
[Barocas et al., 2020] Barocas, S., Selbst, A. D., and Raghavan, M. (2020). The hidden assumptions behind counterfactual explanations and principal reasons. In Proceedings of the 2020 Conference on Fairness, Accountability, and Trans- parency, pages 80–89.
[Barricelli et al., 2019] Barricelli, B. R., Casiraghi, E., and Fogli, D. (2019). A survey on digital twin: Definitions, characteristics, applications, and design implications. IEEE access, 7:167653–167671.
[Barrow, 1999] Barrow, J. D. (1999). Impossibility: The limits of science and the science of limits. Oxford University Press.
[Barry-Jester et al., 2015] Barry-Jester, A., Casselman, B., and Goldstein, D. (2015). The new science of sentencing: Should prison sentences be based on crimes that haven’t been committed yet? Online, accessed May 12, 2021.
[Barto and Dietterich, 2004] Barto, A. G. and Dietterich, T. G. (2004). Reinforce- ment learning and its relationship to supervised learning. Handbook of learning and approximate dynamic programming, 10.
[Bartoletti, 2020] Bartoletti, I. (2020). An artificial revolution: On power, politics and AI. Black Spot Books.
[Bassen et al., 2020] Bassen, J., Balaji, B., Schaarschmidt, M., Thille, C., Painter, J., Zimmaro, D., Games, A., Fast, E., and Mitchell, J. C. (2020). Reinforcement learning for the adaptive scheduling of educational activities. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1–12.
[Bastos and Walker, 2018] Bastos, M. (April 11, 2018). Facebook’s
and Walker, S. T. data lockdown is a disas- ter for academic researchers. https://theconversation.com/
facebooks-data-lockdown-is-a-disaster-for-academic-researchers-94533. The Conversation, Online, accessed February 24, 2022.
[Bates, 2005] Bates, D. (2005). Crisis between the wars: Derrida and the origins of undecidability. Representations, 90(1):1–27.
[Batty and Torrens, 2005] Batty, M. and Torrens, P. M. (2005). Modelling and prediction in a complex world. Futures, 37(7):745–766.
[Baughan et al., 2022] Baughan, A., Zhang, M. R., Rao, R., Lukoff, K., Schaad- hardt, A., Butler, L. D., and Hiniker, A. (2022). “i don’t even remember what i read”: How design influences dissociation on social media. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pages 1–13.
[Baum, 2017] Baum, W. M. (2017). Understanding behaviorism: Behavior, culture, and evolution. John Wiley & Sons.
[Baumeister, 2005] Baumeister, R. F. (2005). The cultural animal: Human nature, meaning, and social life. Oxford University Press.
[Baumeister et al., 1998] Baumeister, R. F., Bratslavsky, E., Muraven, M., and Tice, D. M. (1998). Ego depletion: Is the active self a limited resource? Journal or Personality ami Social Psychology, 74(5):1252–1265.
[Beam et al., 2018] Beam, M. A., Hutchens, M. J., and Hmielowski, J. D. (2018). Facebook news and (de) polarization: reinforcing spirals in the 2016 US election. Information, Communication & Society, 21(7):940–958.
[Beauchamp and Childress, 2001] Beauchamp, T. L. and Childress, J. F. (2001). Principles of Biomedical Ethics. Oxford University Press, USA.
[Becker, 1997] Becker, E. (1997). The denial of death. Simon and Schuster.
[Behrens et al., 2007] Behrens, T. E., Woolrich, M. W., Walton, M. E., and Rush- worth, M. F. (2007). Learning the value of information in an uncertain world. Nature neuroscience, 10(9):1214–1221.
[Ben-Shahar and Porat, 2021] Ben-Shahar, O. and Porat, A. (2021). Personalized Law: Different Rules for Different People. Oxford University Press.
[Benbya et al., 2020] Benbya, H., Nan, N., Tanriverdi, H., and Yoo, Y. (2020). Complexity and information systems research in the emerging digital world. Mis Quarterly, 44(1):1–17.
[Bengio et al., 2013] Bengio, Y., Courville, A., and Vincent, P. (2013). Represen- tation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828.
[Bengio et al., 2009] Bengio, Y., Louradour, J., Collobert, R., and Weston, J. (2009). Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41–48.
[Benjamin, 2019] Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. John Wiley & Sons.
[Benn, 1975] Benn, S. I. (1975). Freedom, autonomy and the concept of a person. In Proceedings of the Aristotelian Society, volume 76, pages 109–130. JSTOR.
[Benson et al., 2016] Benson, A. R., Kumar, R., and Tomkins, A. (2016). Model- ing user consumption sequences. In Proceedings of the 25th International Con- ference on World Wide Web, pages 519–529.
[Berdichevsky and Neuenschwander, 1999] Berdichevsky, D. and Neuenschwan- der, E. (1999). Toward an ethics of persuasive technology. Communications of the ACM, 42(5):51–58.
[Bering, 2006] Bering, J. M. (2006). The folk psychology of souls. Behavioral and brain sciences, 29(5):453–462.
[Berk, 2019] Berk, R. (2019). Machine learning risk assessments in criminal jus- tice settings. Springer.
[Berk and Bleich, 2014] Berk, R. and Bleich, J. (2014). Forecasts of violence to inform sentencing decisions. Journal of Quantitative Criminology, 30(1):79–96.
[Berk et al., 2021] Berk, R., Heidari, H., Jabbari, S., Kearns, M., and Roth, A. (2021). Fairness in criminal justice risk assessments: The state of the art. Sociological Methods & Research, 50(1):3–44.
[Berk and Hyatt, 2015] Berk, R. and Hyatt, J. (2015). Machine learning forecasts of risk to inform sentencing decisions. Federal Sentencing Reporter, 27(4):222– 228.
[Berkovsky et al., 2012] Berkovsky, S., Freyne, J., and Oinas-Kukkonen, H. (2012). Influencing individually: fusing personalization and persuasion.
[Berlin, 2002] Berlin, I. (2002). Liberty. Oxford University Press.
[Berlin, 2013] Berlin, I. (2013). The roots of romanticism. Princeton University
Press.
[Bermu ́dez, 1995] Bermu ́dez, J. L. (1995). Nonconceptual content: From per- ceptual experience to subpersonal computational states. Mind & Language, 10(4):333–369.
[Berridge, 2009] Berridge, K. C. (2009). Wanting and liking: Observations from the neuroscience and psychology laboratory. Inquiry, 52(4):378–398.
[Berridge et al., 2009] Berridge, K. C., Robinson, T. E., and Aldridge, J. W. (2009). Dissecting components of reward:‘liking’,‘wanting’, and learning. Cur- rent opinion in pharmacology, 9(1):65–73.
[Berry, 2012] Berry, D. (2012). Understanding Digital Humanities. Springer.
[Beshears et al., 2008] Beshears, J., Choi, J. J., Laibson, D., and Madrian, B. C. (2008). How are preferences revealed? Journal of public economics, 92(8- 9):1787–1794.
[Betz, 2013] Betz, G. (2013). In defence of the value free ideal. European Journal for Philosophy of Science, 3(2):207–220.
[Beven, 2002] Beven, K. (2002). Towards a coherent philosophy for modelling the environment. Proceedings of the royal society of London. Series A: mathematical, physical and engineering sciences, 458(2026):2465–2484.
[Bhargava, 2022] Bhargava, H. K. (2022). The creator economy: Managing ecosystem supply, revenue sharing, and platform design. Management Science, 68(7):5233–5251.
[Bhaskar, 2013] Bhaskar, R. (2013). A realist theory of science. Routledge.
[Bhattacharjee et al., 2014] Bhattacharjee, A., Berger, J., and Menon, G. (2014). When identity marketing backfires: Consumer agency in identity expression. Journal of Consumer Research, 41(2):294–309.
[Bidar, 2021] Bidar, M. (October 25, 2021). Liberals to ”moscow mitch,” conservatives to qanon: Facebook researchers saw how its al- gorithms led to misinformation. https://www.cbsnews.com/news/ facebook-algorithm-news-feed-conservatives-liberals-india/. CBS News, Online, accessed February 24, 2022.
[Bietti, 2020] Bietti, E. (2020). From ethics washing to ethics bashing: a view on tech ethics from within moral philosophy. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 210–219.
[Biggs, 2009] Biggs, M. (2009). Self-fulfilling prophecies. In Hedstro ̈m, P. and Bearman, P., editors, The Oxford Handbook of Analytical Sociology. Oxford Uni- versity Press.
[Bignold et al., 2022] Bignold, A., Cruz, F., Dazeley, R., Vamplew, P., and Foale, C. (2022). Human engagement providing evaluative and informative advice for interactive reinforcement learning. Neural Computing and Applications, pages 1–16.
[Binns, 2018a] Binns, R. (2018a). Algorithmic accountability and public reason. Philosophy & technology, 31(4):543–556.
[Binns, 2018b] Binns, R. (2018b). Fairness in machine learning: Lessons from po- litical philosophy. In Conference on Fairness, Accountability and Transparency, pages 149–159. PMLR.
[Binns, 2020] Binns, R. (2020). On the apparent conflict between individual and group fairness. In Proceedings of the 2020 conference on fairness, accountability, and transparency, pages 514–524.
[Bird et al., 2016] Bird, S., Barocas, S., Crawford, K., Diaz, F., and Wallach, H. (2016). Exploring or exploiting? social and ethical implications of autonomous experimentation in ai. In Workshop on Fairness, Accountability, and Trans- parency in Machine Learning.
[Birhane, 2021] Birhane, A. (2021). The impossibility of automating ambiguity. Artificial Life, 27(1):44–61.
[Bishop, 2002] Bishop, M. A. (2002). The art and science of computer security. Addison-Wesley Longman Publishing Co., Inc.
[Black and Murray, 2019] Black, J. and Murray, A. D. (2019). Regulating ai and machine learning: setting the regulatory agenda. European journal of law and technology, 10(3).
[Blackmore and Welsh, 1983] Blackmore, J. and Welsh, J. (1983). Selective inca- pacitation: Sentencing according to risk. Crime & Delinquency, 29(4):504–528.
[Blanc, 2023] Blanc, S. (2023). Deliberative democracy and corporate constitu- tionalism: Considering corporate constitutional courts. Journal of Business Ethics, pages 1–15.
[Blumer, 1986] Blumer, H. (1986). Symbolic interactionism: Perspective and method. Univ of California Press.
[Bohman, 2006] Bohman, J. (2006). Deliberative democracy and the epistemic benefits of diversity. Episteme, 3(3):175–191.
[Bok, 1976] Bok, D. C. (1976). Can ethics be taught? Change: The Magazine of Higher Learning, 8(9):26–30.
[Boka, 2016] Boka, Z. (June 23, 2016). Facebook’s research ethics board needs to stay far away from facebook. https://www.wired.com/2016/06/ facebooks-research-ethics-board-needs-stay-far-away-facebook/. Wired Magazine, Online, accessed February 24, 2022.
[Bolsin et al., 2005] Bolsin, S., Faunce, T., and Oakley, J. (2005). Practical virtue ethics: healthcare whistleblowing and portable digital technology. Journal of medical ethics, 31(10):612–618.
[Borgeaud et al., 2022] Borgeaud, S., Mensch, A., Hoffmann, J., Cai, T., Ruther- ford, E., Millican, K., Van Den Driessche, G. B., Lespiau, J.-B., Damoc, B., Clark, A., et al. (2022). Improving language models by retrieving from trillions of tokens. In International conference on machine learning, pages 2206–2240. PMLR.
[Borra ́s and Edler, 2020] Borra ́s, S. and Edler, J. (2020). The roles of the state in the governance of socio-technical systems’ transformation. Research Policy, 49(5):103971.
[Borsboom, 2005] Borsboom, D. (2005). Measuring the mind: Conceptual issues in contemporary psychometrics. Cambridge University Press.
[Borsboom et al., 2004] Borsboom, D., Mellenbergh, G. J., and Van Heerden, J. (2004). The concept of validity. Psychological review, 111(4):1061.
[Bostrom, 2005] Bostrom, N. (2005). In defense of posthuman dignity. Bioethics, 19(3):202–214.
[Bostrom, 2012] Bostrom, N. (2012). The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines, 22(2):71–85.
[Botvinick et al., 2020] Botvinick, M., Wang, J. X., Dabney, W., Miller, K. J., and Kurth-Nelson, Z. (2020). Deep reinforcement learning and its neuroscientific implications. Neuron, 107(4):603–616.
[Boulding, 1956] Boulding, K. E. (1956). General systems theory—the skeleton of science. Management science, 2(3):197–208.
[Bouneffouf et al., 2014] Bouneffouf, D., Laroche, R., Urvoy, T., F ́eraud, R., and Allesiardo, R. (2014). Contextual bandit for active learning: Active thompson sampling. In Neural Information Processing: 21st International Conference, ICONIP 2014, Kuching, Malaysia, November 3-6, 2014. Proceedings, Part I 21, pages 405–412. Springer.
[Box-Steffensmeier et al., 2022] Box-Steffensmeier, J. M., Burgess, J., Corbetta, M., Crawford, K., Duflo, E., Fogarty, L., Gopnik, A., Hanafi, S., Herrero, M., Hong, Y.-y., et al. (2022). The future of human behaviour research. Nature Human Behaviour, 6(1):15–24.
[boyd and Crawford, 2012] boyd, D. and Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phe- nomenon. Information, communication & society, 15(5):662–679.
[Brady et al., 2021] Brady, W. J., McLoughlin, K., Doan, T. N., and Crockett, M. J. (2021). How social learning amplifies moral outrage expression in online social networks. Science Advances, 7(33):eabe5641.
[Bratman, 2000] Bratman, M. E. (2000). Reflection, planning, and temporally extended agency. The Philosophical Review, 109(1):35–61.
[Bratton and Malinowski, 2008] Bratton, W. J. and Malinowski, S. W. (2008). Police performance management in practice: Taking compstat to the next level. Policing: A Journal of Policy and Practice, 2(3):259–265.
[Brazier and Lobjoit, 2005] Brazier, M. and Lobjoit, M. (2005). Protecting the vulnerable: Autonomy and consent in health care, volume 1. Routledge.
[Breiman, 2001a] Breiman, L. (2001a). Random forests. Machine learning, 45(1):5–32.
[Breiman, 2001b] Breiman, L. (2001b). Statistical modeling: The two cultures. Statistical science, 16(3):199–231.
[Breitenbach et al., 2010] Breitenbach, M., Dieterich, W., Brennan, T., and Fan, A. (2010). Creating risk-scores in very imbalanced datasets: Predicting ex- tremely violent crime among criminal offenders following release from prison. In Rare association rule mining and knowledge discovery: Technologies for infre- quent and critical event detection, pages 231–254. IGI Global.
[Brendel, 2004] Brendel, E. (2004). Intuition pumps and the proper use of thought experiments. Dialectica, 58(1):89–108.
[Brennan, 1987] Brennan, T. (1987). Classification: An overview of selected methodological issues. Crime and Justice, 9:201–248.
[Brennan and Dieterich, 2018] Brennan, T. and Dieterich, W. (2018). Correc- tional offender management profiles for alternative sanctions (compas). In Singh, J. P., Kroner, D. G., Wormith, J. S., Desmarais, S. L., and Hamilton, Z., editors, Handbook of Recidivism Risk/Needs Assessment Tools. John Wiley & Sons.
[Brennan et al., 2009] Brennan, T., Dieterich, W., and Ehret, B. (2009). Evalu- ating the predictive validity of the compas risk and needs assessment system. Criminal Justice and behavior, 36(1):21–40.
[Bridgman, 1927] Bridgman, P. W. (1927). The logic of modern physics. Macmil- lan New York.
[Brin and Page, 1998] Brin, S. and Page, L. (1998). The anatomy of a large-scale hypertextual web search engine. Computer networks and ISDN systems, 30(1- 7):107–117.
[Bringas Colmenarejo et al., 2022] Bringas Colmenarejo, A., Nannini, L., Rieger, A., Scott, K. M., Zhao, X., Patro, G. K., Kasneci, G., and Kinder-Kurlanda, K. (2022). Fairness in agreement with european values: An interdisciplinary perspective on ai regulation. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, pages 107–118.
[Bringmann et al., 2022] Bringmann, L. F., Elmer, T., and Eronen, M. I. (2022). Back to basics: The importance of conceptual clarification in psychological sci- ence. Current Directions in Psychological Science, 31(4):340–346.
[Bringmann and Eronen, 2016] Bringmann, L. F. and Eronen, M. I. (2016). Heating up the measurement debate: What psychologists can learn from the history of physics. Theory & psychology, 26(1):27–43.
[Brinkmann, 2005] Brinkmann, S. (2005). Human kinds and looping effects in psychology: Foucauldian and hermeneutic perspectives. Theory & Psychology, 15(6):769–791.
[Broeders et al., 2017] Broeders, D., Hirsch Ballin, E., et al. (2017). Big data and security policies: Serving security, protecting freedom. Technical report.
[Broome, 1984] Broome, J. (1984). Selecting people randomly. Ethics, 95(1):38–55.
[Broussard, 2018] Broussard, M. (2018). Artificial unintelligence: How computers misunderstand the world. MIT Press.
[Browne, 2015] Browne, S. (2015). Dark matters. Duke University Press.
[Bruce and Fleming, 2021] Bruce, P. C. and Fleming, G. (2021). Responsible Data Science. John Wiley & Sons.
[Bruner, 1991] Bruner, J. (1991). The narrative construction of reality. Critical inquiry, 18(1):1–21.
[Brusilovsky and Maybury, 2002] Brusilovsky, P. and Maybury, M. T. (2002). From adaptive hypermedia to the adaptive web. Communications of the ACM, 45(5):30–33.
[Bryan et al., 2021] Bryan, C. J., Tipton, E., and Yeager, D. S. (2021). Behavioural science is unlikely to change the world without a heterogeneity revolution. Nature human behaviour, 5(8):980–989.
[Brynjolfsson and Kahin, 2002] Brynjolfsson, E. and Kahin, B. (2002). Understanding the digital economy: data, tools, and research. MIT Press.
[Brys et al., 2017] Brys, T., Harutyunyan, A., Vrancx, P., Nowé, A., and Taylor, M. E. (2017). Multi-objectivization and ensembles of shapings in reinforcement learning. Neurocomputing, 263:48–59.
[Bubic et al., 2010] Bubic, A., Von Cramon, D. Y., and Schubotz, R. I. (2010). Prediction, cognition and the brain. Frontiers in human neuroscience, page 25.
[Buchanan et al., 2011] Buchanan, E., Aycock, J., Dexter, S., Dittrich, D., and Hvizdak, E. (2011). Computer science security research and human subjects: Emerging considerations for research ethics boards. Journal of Empirical Research on Human Research Ethics, 6(2):71–83.
[Bunge, 2000] Bunge, M. (2000). Systemism: the alternative to individualism and holism. The Journal of Socio-Economics, 29(2):147–157.
[Buolamwini and Gebru, 2018] Buolamwini, J. and Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency, pages 77–91. PMLR.
[Burch and Furman, 2019] Burch, M. and Furman, K. (2019). Objectivity in science and law: A shared rescue strategy. International journal of law and psychiatry, 64:60–70.
[Burnell et al., 2023] Burnell, R., Schellaert, W., Burden, J., Ullman, T. D., Martinez-Plumed, F., Tenenbaum, J. B., Rutar, D., Cheke, L. G., Sohl-Dickstein, J., Mitchell, M., et al. (2023). Rethink reporting of evaluation results in AI. Science, 380(6641):136–138.
[Burr et al., 2018] Burr, C., Cristianini, N., and Ladyman, J. (2018). An analysis of the interaction between intelligent software agents and human users. Minds and machines, 28(4):735–774.
[Business Insider, 2022] Business Insider (2022). Here’s the latest on Amazon’s $31 billion ad business, which continues to grow despite the economic upheaval. https://www.businessinsider.com/inside-amazons-growing-ad-business-everything-we-know-2019-5. Business Insider, Online, accessed March 24, 2023.
[Cadwalladr, 2018] Cadwalladr, C. (March 18, 2018). ‘I made Steve Bannon’s psychological warfare tool’: meet the data war whistleblower. https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump. The Guardian, Online, accessed August 6, 2022.
[Cai et al., 2017] Cai, L., Wu, C., Meimandi, K. J., and Gerber, M. S. (2017). Adaptive mobile behavior change intervention using reinforcement learning. In 2017 International Conference on Companion Technology (ICCT), pages 1–2. IEEE.
[Calders and Žliobaitė, 2013] Calders, T. and Žliobaitė, I. (2013). Why unbiased computational processes can lead to discriminative decision procedures. In Discrimination and privacy in the information society, pages 43–57. Springer.
[Calo, 2013] Calo, R. (2013). Digital market manipulation. Geo. Wash. L. Rev., 82:995.
[Cane, 2002] Cane, P. (2002). Responsibility in law and morality. Hart: Oxford.
[Cao, 2022] Cao, L. (2022). Beyond IID: Non-IID thinking, informatics, and learning. IEEE Intelligent Systems, 37(4):5–17.
[Card and Smith, 2020] Card, D. and Smith, N. A. (2020). On consequentialism and fairness. Frontiers in Artificial Intelligence, 3:34.
[Carlsmith et al., 2002] Carlsmith, K. M., Darley, J. M., and Robinson, P. H. (2002). Why do we punish? Deterrence and just deserts as motives for punishment. Journal of personality and social psychology, 83(2):284.
[Carp, 2012] Carp, J. (2012). On the plurality of (methodological) worlds: estimating the analytic flexibility of fMRI experiments. Frontiers in neuroscience, 6:149.
[Carpenter et al., 2015] Carpenter, S. R., Brock, W. A., Folke, C., Van Nes, E. H., and Scheffer, M. (2015). Allowing variance may enlarge the safe operating space for exploited ecosystems. Proceedings of the National Academy of Sciences, 112(46):14384–14389.
[Cath et al., 2018] Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., and Floridi, L. (2018). Artificial intelligence and the ‘good society’: the US, EU, and UK approach. Science and engineering ethics, 24:505–528.
[Caudill and Murphy, 2000] Caudill, E. M. and Murphy, P. E. (2000). Consumer online privacy: Legal and ethical issues. Journal of Public Policy & Marketing, 19(1):7–19.
[Cecez-Kecmanovic, 2011] Cecez-Kecmanovic, D. (2011). Doing critical information systems research–arguments for a critical research methodology. European Journal of Information Systems, 20(4):440–455.
[Celeste and De Gregorio, 2022] Celeste, E. and De Gregorio, G. (2022). Digital humanism: the constitutional message of the GDPR. Global Privacy Law Review, 33(1):4–18.
[Chabris et al., 2008] Chabris, C. F., Laibson, D., Morris, C. L., Schuldt, J. P., and Taubinsky, D. (2008). Individual laboratory-measured discount rates predict field behavior. Journal of risk and uncertainty, 37:237–269.
[Chambers et al., 2016] Chambers, C. P., Echenique, F., et al. (2016). Revealed Preference Theory. Cambridge University Press.
[Chaney et al., 2018] Chaney, A. J., Stewart, B. M., and Engelhardt, B. E. (2018). How algorithmic confounding in recommendation systems increases homogeneity and decreases utility. In Proceedings of the 12th ACM conference on recommender systems, pages 224–232.
[Chang, 2004] Chang, H. (2004). Inventing temperature: Measurement and scientific progress. Oxford University Press.
[Chang et al., 2016] Chang, Y.-c., Chen, K.-P., and Lin, C.-C. (2016). Anchoring effect in real litigation: An empirical study. University of Chicago Coase-Sandor Institute for Law & Economics Research Paper, (744).
[Chater and Loewenstein, 2022] Chater, N. and Loewenstein, G. (2022). The i-frame and the s-frame: How focusing on individual-level solutions has led behavioral public policy astray. Behavioral and Brain Sciences, pages 1–60.
[Chazan, 2002] Chazan, P. (2002). The moral self. Routledge.
[Chemerinsky, 1998] Chemerinsky, E. (1998). Substantive due process. Touro L. Rev., 15:1501.
[Chen et al., 2020] Chen, J., Dong, H., Wang, X., Feng, F., Wang, M., and He, X. (2020). Bias and debias in recommender system: A survey and future directions. arXiv preprint arXiv:2010.03240.
[Chen et al., 2019a] Chen, M., Beutel, A., Covington, P., Jain, S., Belletti, F., and Chi, E. H. (2019a). Top-K off-policy correction for a REINFORCE recommender system. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 456–464.
[Chen et al., 2019b] Chen, X., Li, S., Li, H., Jiang, S., Qi, Y., and Song, L. (2019b). Generative adversarial user model for reinforcement learning based recommendation system. In International Conference on Machine Learning, pages 1052–1061. PMLR.
[Cheng, 2009] Cheng, E. K. (2009). A practical solution to the reference class problem. Colum. L. Rev., 109:2081.
[Cheng et al., 2017] Cheng, L., Liu, F., and Yao, D. (2017). Enterprise data breach: causes, challenges, prevention, and future directions. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 7(5):e1211.
[Cherkassky and Mulier, 2007] Cherkassky, V. and Mulier, F. M. (2007). Learning from data: concepts, theory, and methods. John Wiley & Sons.
[Chiappa and Isaac, 2018] Chiappa, S. and Isaac, W. S. (2018). A causal Bayesian networks viewpoint on fairness. In IFIP International Summer School on Privacy and Identity Management, pages 3–20. Springer.
[Chomsky, 1986] Chomsky, N. (1986). Knowledge of language: Its nature, origin, and use. Greenwood Publishing Group.
[Chouldechova, 2017] Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big data, 5(2):153–163.
[Christiano et al., 2017] Christiano, P. F., Leike, J., Brown, T., Martic, M., Legg, S., and Amodei, D. (2017). Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30.
[Christman, 1991] Christman, J. (1991). Autonomy and personal history. Canadian Journal of Philosophy, 21(1):1–24.
[Churchill, 2013] Churchill, E. F. (2013). Putting the person back into personal- ization. interactions, 20(5):12–15.
[Ciampiconi et al., 2023] Ciampiconi, L., Elwood, A., Leonardi, M., Mohamed, A., and Rozza, A. (2023). A survey and taxonomy of loss functions in machine learning. arXiv preprint arXiv:2301.05579.
[Cihon et al., 2021] Cihon, P., Schuett, J., and Baum, S. D. (2021). Corporate governance of artificial intelligence in the public interest. Information, 12(7):275.
[Citron, 2007] Citron, D. K. (2007). Technological due process. Wash. UL Rev., 85:1249.
[Clarke, 1994] Clarke, R. (1994). Human identification in information systems: Management challenges and public policy issues. Information Technology & People, 7(4):6–37.
[Coase, 1960] Coase, R. (1960). The problem of social cost. Journal of Law and Economics, 3:1–44.
[Coglianese and Ben Dor, 2021] Coglianese, C. and Ben Dor, L. (2021). AI in adjudication and administration. Brooklyn Law Review, Forthcoming, U of Penn Law School, Public Law Research Paper, (19-41).
[Cohen et al., 2007] Cohen, J. D., McClure, S. M., and Yu, A. J. (2007). Should I stay or should I go? How the human brain manages the trade-off between exploitation and exploration. Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1481):933–942.
[Cohen, 1981] Cohen, L. J. (1981). Can human irrationality be experimentally demonstrated? Behavioral and Brain Sciences, 4(3):317–331.
[Colaner, 2022] Colaner, N. (2022). Is explainable artificial intelligence intrinsically valuable? AI & SOCIETY, 37(1):231–238.
[Cole and Smith, 2007] Cole, G. and Smith, C. (2007). The American System of Criminal Justice. Belmont, CA: Thomson. Wadsworth Publishing.
[Cole-Turner, 2011] Cole-Turner, R. (2011). Transhumanism and transcendence: Christian hope in an age of technological enhancement. Georgetown University Press.
[Collins and Murphy, 2004] Collins, L. and Murphy, S. (2004). A conceptual framework for adaptive preventive interventions. Prevention Science, 5(3):185–196.
[Collins, 2002] Collins, P. H. (2002). Black feminist thought: Knowledge, consciousness, and the politics of empowerment. Routledge.
[Conant and Ashby, 1970] Conant, R. C. and Ashby, W. (1970). Every good regulator of a system must be a model of that system. International journal of systems science, 1(2):89–97.
[Constantinides et al., 2018] Constantinides, P., Henfridsson, O., and Parker, G. G. (2018). Introduction—platforms and infrastructures in the digital age. Information Systems Research, 29(2):381–400.
[Cook et al., 2002] Cook, T. D., Campbell, D. T., and Shadish, W. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.
[Cooke and Michie, 2010] Cooke, D. J. and Michie, C. (2010). Limitations of diagnostic precision and predictive utility in the individual case: A challenge for forensic practice. Law and human behavior, 34(4):259–274.
[Cooley, 1902] Cooley, C. H. (1902). Looking-glass self. The production of reality: Essays and readings on social interaction, 6:126–128.
[Coors, 2010] Coors, C. (2010). Headwind from europe: The new position of the german courts on personality rights after the judgment of the european court of human rights. German Law Journal, 11(5):527–537.
[Corbett-Davies et al., 2017] Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., and Huq, A. (2017). Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, pages 797–806.
[Cormier and Magnan, 1999] Cormier, D. and Magnan, M. (1999). Corporate environmental disclosure strategies: determinants, costs and benefits. Journal of Accounting, Auditing & Finance, 14(4):429–451.
[Coronato et al., 2020] Coronato, A., Naeem, M., De Pietro, G., and Paragliola, G. (2020). Reinforcement learning for intelligent healthcare applications: A survey. Artificial Intelligence in Medicine, 109:101964.
[Corral-Acero et al., 2020] Corral-Acero, J., Margara, F., Marciniak, M., Rodero, C., Loncaric, F., Feng, Y., Gilbert, A., Fernandes, J. F., Bukhari, H. A., Wajdan, A., et al. (2020). The ‘digital twin’ to enable the vision of precision cardiology. European heart journal, 41(48):4556–4564.
[Cosley et al., 2003] Cosley, D., Lam, S. K., Albert, I., Konstan, J. A., and Riedl, J. (2003). Is seeing believing? How recommender system interfaces affect users’ opinions. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 585–592.
[Costa, 2012] Costa, L. (2012). Privacy and the precautionary principle. Computer Law & Security Review, 28(1):14–24.
[Cotter, 2019] Cotter, K. (2019). Playing the visibility game: How digital influencers and algorithms negotiate influence on Instagram. New media & society, 21(4):895–913.
[Couldry and Mejias, 2019] Couldry, N. and Mejias, U. A. (2019). Data colonialism: Rethinking big data’s relation to the contemporary subject. Television & New Media, 20(4):336–349.
[Council of Europe, 2019] Council of Europe (2019). Practical examples of AI implemented in other countries. Online, accessed May 20, 2021.
[Covington et al., 2016] Covington, P., Adams, J., and Sargin, E. (2016). Deep neural networks for youtube recommendations. In Proceedings of the 10th ACM conference on recommender systems, pages 191–198.
[Coyle and Weller, 2020] Coyle, D. and Weller, A. (2020). “explaining” machine learning reveals policy challenges. Science, 368(6498):1433–1434.
[Craig, 2005] Craig, E. (2005). The shorter Routledge encyclopedia of philosophy. Routledge.
[Cranor, 1975] Cranor, C. (1975). Toward a theory of respect for persons. American philosophical quarterly, 12(4):309–319.
[Craven and Shavlik, 1995] Craven, M. and Shavlik, J. (1995). Extracting tree-structured representations of trained networks. Advances in neural information processing systems, 8.
[Crawford, 2021] Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
[Cremonesi et al., 2010] Cremonesi, P., Koren, Y., and Turrin, R. (2010). Performance of recommender algorithms on top-n recommendation tasks. In Proceedings of the fourth ACM conference on Recommender systems. ACM.
[Crisp and Slote, 1997] Crisp, R. and Slote, M. (1997). Virtue ethics, volume 10. Oxford readings in philosophy.
[Cristianini et al., 2021] Cristianini, N., Scantamburlo, T., and Ladyman, J. (2021). The social turn of artificial intelligence. AI & SOCIETY, pages 1–8.
[Cronbach and Meehl, 1955] Cronbach, L. J. and Meehl, P. E. (1955). Construct validity in psychological tests. Psychological bulletin, 52(4):281.
[Crowder, 2006] Crowder, G. (2006). Value pluralism and communitarianism. Contemporary Political Theory, 5(4):405–427.
[Crowder et al., 2020] Crowder, S., Delker, C., Forrest, E., and Martin, N. (2020). Introduction to Statistics in Metrology. Springer.
[Cui, 2020] Cui, Y. (2020). Artificial intelligence and judicial modernization. Springer.
[Cullen et al., 2011] Cullen, F. T., Jonson, C. L., and Nagin, D. S. (2011). Prisons do not reduce recidivism: The high cost of ignoring science. The Prison Journal, 91(3):48S–65S.
[Dagher, 2018] Dagher, J. (2018). Regulatory cycles: revisiting the political economy of financial crises. International Monetary Fund.
[d’Alessandro et al., 2017] d’Alessandro, B., O’Neil, C., and LaGatta, T. (2017). Conscientious classification: A data scientist’s guide to discrimination-aware classification. Big data, 5(2):120–134.
[Dalton, 2017] Dalton, R. J. (2017). Political trust in North America. In Handbook on political trust, pages 375–394. Edward Elgar Publishing.
[Damiani et al., 2003] Damiani, E., Di Vimercati, S. D. C., and Samarati, P. (2003). Managing multiple and dependable identities. IEEE Internet Computing, 7(6):29–37.
[D’Amour et al., 2020a] D’Amour, A., Heller, K., Moldovan, D., Adlam, B., Alipanahi, B., Beutel, A., Chen, C., Deaton, J., Eisenstein, J., Hoffman, M. D., et al. (2020a). Underspecification presents challenges for credibility in modern machine learning. arXiv preprint arXiv:2011.03395.
[D’Amour et al., 2020b] D’Amour, A., Srinivasan, H., Atwood, J., Baljekar, P., Sculley, D., and Halpern, Y. (2020b). Fairness is not static: deeper understanding of long term fairness via simulation studies. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 525–534.
[Dan and Loewenstein, 2019] Dan, O. and Loewenstein, Y. (2019). From choice architecture to choice engineering. Nature communications, 10(1):1–4.
[Darley et al., 2000] Darley, J. M., Carlsmith, K. M., and Robinson, P. H. (2000). Incapacitation and just deserts as motives for punishment. Law and human behavior, 24(6):659–683.
[Darwall, 1977] Darwall, S. L. (1977). Two kinds of respect. Ethics, 88(1):36–49.
[Dastile and Celik, 2021] Dastile, X. and Celik, T. (2021). Making deep learning-based predictions for credit scoring explainable. IEEE Access, 9:50426–50440.
[Dastin, 2018] Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1M. Reuters, Online, accessed September 29, 2022.
[Dave and Dastin, 2020] Dave, P. and Dastin, J. (December 23, 2020). Google told its scientists to ‘strike a positive tone’ in AI research - documents. https://www.reuters.com/article/us-alphabet-google-research-focus-idUSKBN28X1CB. Reuters, Online, accessed February 24, 2022.
[Davidoff, 2009] Davidoff, F. (2009). Heterogeneity is not always noise: lessons from improvement. JAMA, 302(23):2580–2586.
[Davidson et al., 2023] Davidson, E., Wessel, L., Winter, J. S., and Winter, S. (2023). Future directions for scholarship on data governance, digital innovation, and grand challenges. Information and Organization, 33(1):100454.
[Davis and Marcus, 2016] Davis, E. and Marcus, G. (2016). The scope and limits of simulation in automated reasoning. Artificial Intelligence, 233:60–72.
[Davis, 2021] Davis, G. F. (2021). Corporate purpose needs democracy. Journal of Management Studies, 58(3):902–913.
[de Bruijn et al., 2022] de Bruijn, H., Warnier, M., and Janssen, M. (2022). The perils and pitfalls of explainable AI: Strategies for explaining algorithmic decision-making. Government Information Quarterly, 39(2):101666.
[De Cnudde et al., 2019] De Cnudde, S., Moeyersoms, J., Stankova, M., Tobback, E., Javaly, V., and Martens, D. (2019). What does your facebook profile reveal about your creditworthiness? Using alternative data for microfinance. Journal of the Operational Research Society, 70(3):353–363.
[de Myttenaere et al., 2014] de Myttenaere, A., Le Grand, B., Golden, B., and Rossi, F. (2014). Reducing offline evaluation bias in recommendation systems. In 23rd annual Belgian-Dutch Conference on Machine Learning (Benelearn 2014), pages 55–62.
[De Vries, 2010] De Vries, K. (2010). Identity, profiling algorithms and a world of ambient intelligence. Ethics and information technology, 12(1):71–85.
[Deacon, 1997] Deacon, T. W. (1997). The symbolic species: The co-evolution of language and the brain. Number 202. WW Norton & Company.
[Deaton and Cartwright, 2018] Deaton, A. and Cartwright, N. (2018). Understanding and misunderstanding randomized controlled trials. Social science & medicine, 210:2–21.
[DeCew, 1986] DeCew, J. W. (1986). The scope of privacy in law and ethics. Law and Philosophy, pages 145–173.
[Deisenroth et al., 2020] Deisenroth, M. P., Faisal, A. A., and Ong, C. S. (2020). Mathematics for machine learning. Cambridge University Press.
[Del Olmo and Gaudioso, 2008] Del Olmo, F. H. and Gaudioso, E. (2008). Evaluation of recommender systems: A new approach. Expert Systems with Applications, 35(3):790–804.
[Delacroix and Lawrence, 2019] Delacroix, S. and Lawrence, N. D. (2019). Bottom-up data trusts: disturbing the ‘one size fits all’ approach to data governance. International data privacy law, 9(4):236–252.
[Deleuze, 1992] Deleuze, G. (1992). Postscript on the societies of control. October, 59:3–7.
[den Hengst et al., 2020] den Hengst, F., Grua, E. M., el Hassouni, A., and Hoogendoorn, M. (2020). Reinforcement learning for personalization: A systematic literature review. Data Science, pages 1–41.
[Dennett, 1984] Dennett, D. C. (1984). Cognitive wheels: The frame problem of AI. Minds, machines and evolution, pages 129–151.
[Dennett, 1989] Dennett, D. C. (1989). The intentional stance. MIT Press.
[Dennett, 2013] Dennett, D. C. (2013). Intuition pumps and other tools for thinking. WW Norton & Company.
[Dennett, 2015] Dennett, D. C. (2015). Elbow Room, new edition: The Varieties of Free Will Worth Wanting. MIT Press.
[Dennis et al., 2020] Dennis, M., Jaques, N., Vinitsky, E., Bayen, A., Russell, S., Critch, A., and Levine, S. (2020). Emergent complexity and zero-shot transfer via unsupervised environment design. Advances in neural information processing systems, 33:13049–13061.
[Denzin, 1986] Denzin, N. K. (1986). Postmodern social theory. Sociological theory, 4(2):194–204.
[Denzin and Lincoln, 2011] Denzin, N. K. and Lincoln, Y. S. (2011). The Sage handbook of qualitative research. Sage.
[Derrida, 1992] Derrida, J. (1992). Force of law: The mystical foundation of authority. In Cornell, D., Rosenfield, M., and Carlson, D., editors, Deconstruction and the Possibility of Justice, pages 3–67. New York: Routledge.
[Descartes, 1999] Descartes, R. (1999). Discourse on method and meditations on first philosophy. Hackett Publishing.
[Descartes, 2013] Descartes, R. (2013). Meditations on first philosophy. Broadview Press.
[Determann, 2016] Determann, L. (2016). Adequacy of data protection in the USA: myths and facts. International Data Privacy Law, 6(3):244–250.
[Dewey, 1903] Dewey, J. (1903). Democracy in education. The elementary school teacher, 4(4):193–204.
[Dewey, 1998] Dewey, J. (1998). The essential Dewey: Pragmatism, education, democracy, volume 1. Indiana University Press.
[Dewey and Rogers, 2012] Dewey, J. and Rogers, M. L. (2012). The public and its problems: An essay in political inquiry. Penn State Press.
[Dezfouli et al., 2020] Dezfouli, A., Nock, R., and Dayan, P. (2020). Adversarial vulnerabilities of human decision-making. Proceedings of the National Academy of Sciences, 117(46):29221–29228.
[Di Bello and O’Neil, 2020] Di Bello, M. and O’Neil, C. (2020). Profile evidence, fairness, and the risks of mistaken convictions. Ethics, 130(2):147–178.
[Di Porto and Zuppetta, 2021] Di Porto, F. and Zuppetta, M. (2021). Co-regulating algorithmic disclosure for digital platforms. Policy and society, 40(2):272–293.
[Dieterich et al., 2016] Dieterich, W., Mendoza, C., and Brennan, T. (2016). COMPAS risk scales: Demonstrating accuracy equity and predictive parity. Northpointe Inc, 7(4).
[D’Ignazio and Klein, 2020] D’Ignazio, C. and Klein, L. F. (2020). Data feminism. MIT Press.
[Dillahunt and Lu, 2019] Dillahunt, T. R. and Lu, A. (2019). DreamGigs: designing a tool to empower low-resource job seekers. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pages 1–14.
[Dilthey, 1989] Dilthey, W. (1989). Introduction to the human sciences, volume 1. Princeton University Press.
[Djeffal et al., 2022] Djeffal, C., Siewert, M. B., and Wurster, S. (2022). Role of the state and responsibility in governing artificial intelligence: a comparative analysis of AI strategies. Journal of European Public Policy, 29(11):1799–1821.
[Doll et al., 2012] Doll, B. B., Simon, D. A., and Daw, N. D. (2012). The ubiquity of model-based reinforcement learning. Current opinion in neurobiology, 22(6):1075–1081.
[Domingos, 2012] Domingos, P. (2012). A few useful things to know about machine learning. Communications of the ACM, 55(10):78–87.
[Domingos, 2021] Domingos, P. (January 27, 2021). Beating back cancel culture: A case study from the field of artificial intelligence. https://quillette.com/2021/01/27/beating-back-cancel-culture-a-case-study-from-the-field-of-artificial-intelligen. Quillette, Online, accessed July 27, 2022.
[Donald, 1991] Donald, M. (1991). Origins of the modern mind: Three stages in the evolution of culture and cognition. Harvard University Press.
[Dong and Rudin, 2020] Dong, J. and Rudin, C. (2020). Exploring the cloud of variable importance for the set of all good models. Nature Machine Intelligence, 2(12):810–824.
[Dong, 2022] Dong, J. Q. (2022). Using simulation in information systems research. Journal of the Association for Information Systems, 23(2):408–417.
[Dorie et al., 2019] Dorie, V., Hill, J., Shalit, U., Scott, M., and Cervone, D. (2019). Automated versus do-it-yourself methods for causal inference: Lessons learned from a data analysis competition.
[Doshi-Velez and Kim, 2017] Doshi-Velez, F. and Kim, B. (2017). Towards a rigorous science of interpretable machine learning.
[Dosovitskiy et al., 2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. (2021). An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations.
[Dou and Wu, 2021] Dou, Y. and Wu, D. (2021). Platform competition under network effects: Piggybacking and optimal subsidization. Information Systems Research, 32(3):820–835.
[Douglas and Ney, 1998] Douglas, M. and Ney, S. (1998). Missing persons: A critique of the personhood in the social sciences, volume 1. Univ of California Press.
[Dourish and Mainwaring, 2012] Dourish, P. and Mainwaring, S. D. (2012). Ubicomp’s colonial impulse. In Proceedings of the 2012 ACM conference on ubiquitous computing, pages 133–142.
[Downing, 2003] Downing, S. M. (2003). Validity: on the meaningful interpretation of assessment data. Medical education, 37(9):830–837.
[Dressel and Farid, 2018] Dressel, J. and Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science advances, 4(1):eaao5580.
[Dreyfus, 1992] Dreyfus, H. L. (1992). What computers still can’t do: A critique of artificial reason. MIT press.
[Drigas and Koukianakis, 2009] Drigas, A. and Koukianakis, L. (2009). Government online: An e-government platform to improve public administration operations and services delivery to the citizen. WSKS (1), 5736:523–532.
[Driver, 2011] Driver, J. (2011). Consequentialism. Routledge.
[Du and Xie, 2021] Du, S. and Xie, C. (2021). Paradoxes of artificial intelligence in consumer markets: Ethical challenges and opportunities. Journal of Business Research, 129:961–974.
[Dubey and Griffiths, 2020] Dubey, R. and Griffiths, T. L. (2020). Understanding exploration in humans and machines by formalizing the function of curiosity. Current Opinion in Behavioral Sciences, 35:118–124.
[Duckworth et al., 2016] Duckworth, A. L., Gendler, T. S., and Gross, J. J. (2016). Situational strategies for self-control. Perspectives on Psychological Science, 11(1):35–55.
[Durlauf and Nagin, 2011] Durlauf, S. N. and Nagin, D. S. (2011). Imprisonment and crime: Can both be reduced? Criminology & Public Policy, 10(1):13–54.
[Dutt et al., 2018] Dutt, R., Deb, A., and Ferrara, E. (2018). “Senator, we sell ads”: Analysis of the 2016 Russian Facebook ads campaign. In International conference on intelligent information technologies, pages 151–168. Springer.
[Duwe, 2014] Duwe, G. (2014). The development, validity, and reliability of the Minnesota screening tool assessing recidivism risk (MnSTARR). Criminal Justice Policy Review, 25(5):579–613.
[Duwe and Kim, 2017] Duwe, G. and Kim, K. (2017). Out with the old and in with the new? An empirical comparison of supervised learning algorithms to predict recidivism. Criminal Justice Policy Review, 28(6):570–600.
[Duwe and Rocque, 2017] Duwe, G. and Rocque, M. (2017). Effects of automating recidivism risk assessment on reliability, predictive validity, and return on investment (ROI). Criminology & Public Policy, 16(1):235–269.
[Dweck, 2000] Dweck, C. S. (2000). Self-theories: Their role in motivation, per- sonality, and development. Psychology press.
[Dwork et al., 2015] Dwork, C., Feldman, V., Hardt, M., Pitassi, T., Reingold, O., and Roth, A. (2015). The reusable holdout: Preserving validity in adaptive data analysis. Science, 349(6248):636–638.
[Dwork et al., 2012] Dwork, C., Hardt, M., Pitassi, T., Reingold, O., and Zemel, R. (2012). Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference, pages 214–226.
[Dworkin, 1978] Dworkin, R. (1978). Liberalism. In Hampshire, S., editor, Public and private morality, chapter 6, pages 113–143. Cambridge University Press.
[Dworkin, 1986] Dworkin, R. (1986). Law’s empire. Harvard University Press.
[Dworkin, 2013] Dworkin, R. (2013). Taking rights seriously. A&C Black.
[Eaneff et al., 2020] Eaneff, S., Obermeyer, Z., and Butte, A. J. (2020). The case for algorithmic stewardship for artificial intelligence and machine learning technologies. JAMA, 324(14):1397–1398.
[Easterbrook, 1992] Easterbrook, F. H. (1992). Abstraction and authority. The University of Chicago Law Review, 59(1):349–380.
[Eccles, 2009] Eccles, J. (2009). Who am I and what am I going to do with my life? Personal and collective identities as motivators of action. Educational psychologist, 44(2):78–89.
[Eckersley, 2018] Eckersley, P. (2018). Impossibility and uncertainty theorems in AI value alignment (or why your AGI should not have a utility function). arXiv preprint arXiv:1901.00064.
[Eckhouse et al., 2019] Eckhouse, L., Lum, K., Conti-Cook, C., and Ciccolini, J. (2019). Layers of bias: A unified approach for understanding problems with risk assessment. Criminal Justice and Behavior, 46(2):185–209.
[Eckles and Bakshy, 2021] Eckles, D. and Bakshy, E. (2021). Bias and high-dimensional adjustment in observational studies of peer effects. Journal of the American Statistical Association, 116(534):507–517.
[Ehsan and Riedl, 2021] Ehsan, U. and Riedl, M. O. (2021). Explainability pitfalls: Beyond dark patterns in explainable AI.
[Eide et al., 2021] Eide, S., Leslie, D. S., Frigessi, A., Rishaug, J., Jenssen, H., and Verrewaere, S. (2021). FINN.no slates dataset: A new sequential dataset logging interactions, all viewed items and click responses/no-click for recommender systems research. In Proceedings of the 15th ACM Conference on Recommender Systems, pages 556–558.
[Ekstrand et al., 2015] Ekstrand, M. D., Kluver, D., Harper, F. M., and Konstan, J. A. (2015). Letting users choose recommender algorithms: An experimental study. In Proceedings of the 9th ACM Conference on Recommender Systems, pages 11–18.
[Ekstrand and Willemsen, 2016] Ekstrand, M. D. and Willemsen, M. C. (2016). Behaviorism is not enough: better recommendations through listening to users. In Proceedings of the 10th ACM Conference on Recommender Systems, pages 221–224.
[Elgin, 2013] Elgin, C. Z. (2013). Fact–value distinction. International Encyclopedia of Ethics.
[Ellenberg et al., 2019] Ellenberg, S. S., Fleming, T. R., and DeMets, D. L. (2019). Data monitoring committees in clinical trials: a practical perspective. John Wiley & Sons.
[Elster, 2016] Elster, J. (2016). Sour grapes. Cambridge University Press.
[Endsley, 2017] Endsley, M. R. (2017). From here to autonomy: lessons learned from human–automation research. Human Factors, 59(1):5–27.
[Engle, 2009] Engle, E. (2009). Third party effect of fundamental rights (drittwirkung). Hanse L. Rev., 5:165.
[Enoch, 2011] Enoch, D. (2011). Taking morality seriously: A defense of robust realism. Oxford University Press on Demand.
[Ensign et al., 2018] Ensign, D., Friedler, S. A., Neville, S., Scheidegger, C., and Venkatasubramanian, S. (2018). Runaway feedback loops in predictive policing. In Conference on Fairness, Accountability and Transparency, pages 160–171. PMLR.
[Epstein, 1973] Epstein, R. A. (1973). A theory of strict liability. The Journal of Legal Studies, 2(1):151–204.
[Etzioni, 1996] Etzioni, A. (1996). The responsive community: A communitarian perspective. American Sociological Review, pages 1–11.
[Eubanks, 2018] Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
[European Commission, 2021] European Commission (2021). Regulation of the european parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Technical report, European Commission. European Commission, Online, accessed February 24, 2022.
[European Commission, 2022a] European Commission (2022a). Proposal for a directive of the european parliament and of the council on liability for defective products. Technical report, European Commission. European Commission, Online, accessed September 29, 2022.
[European Commission, 2022b] European Commission (2022b). Questions and answers: Digital services act. https://ec.europa.eu/commission/presscorner/detail/en/QANDA_20_2348. European Commission, Online, accessed November 27, 2022.
[European Data Protection Board, 2020] European Data Protection Board (2020). Guidelines 05/2020 on consent under regulation 2016/679, version 1.1. Technical report, European Data Protection Board. European Data Protection Board, Online, accessed September 24, 2022.
[European Union, 2010] European Union (2010). Charter of fundamental rights of the european union. Technical report, Brussels.
[Evans and Kasirzadeh, 2021] Evans, C. and Kasirzadeh, A. (2021). User tampering in reinforcement learning recommender systems. arXiv preprint arXiv:2109.04083.
[Evans and Schmalensee, 2016] Evans, D. S. and Schmalensee, R. (2016). Match- makers: The new economics of multisided platforms. Harvard Business Review Press.
[Everitt et al., 2021] Everitt, T., Hutter, M., Kumar, R., and Krakovna, V. (2021). Reward tampering problems and solutions in reinforcement learning: A causal influence diagram perspective. Synthese, 198(27):6435–6467.
[Eyal, 2014] Eyal, N. (2014). Hooked: How to build habit-forming products. Penguin.
[Faden and Beauchamp, 1986] Faden, R. R. and Beauchamp, T. L. (1986). A history and theory of informed consent. Oxford University Press.
[Fagundes, 2001] Fagundes, D. (2001). Note, what we talk about when we talk about persons: The language of a legal fiction. Harvard Law Review, 114(6).
[Falco and Kleinhans, 2018] Falco, E. and Kleinhans, R. (2018). Beyond technology: Identifying local government challenges for using digital platforms for citizen engagement. International Journal of Information Management, 40:17–20.
[Fan and Poole, 2006] Fan, H. and Poole, M. S. (2006). What is personalization? perspectives on the design and implementation of personalization in information systems. Journal of Organizational Computing and Electronic Commerce, 16(3-4):179–202.
[Faulin et al., 2010] Faulin, J., Juan, A. A., Martorell, S., and Ramírez-Márquez, J.-E. (2010). Simulation methods for reliability and availability of complex systems, volume 315. Springer.
[Fawcett, 2006] Fawcett, T. (2006). An introduction to roc analysis. Pattern recognition letters, 27(8):861–874.
[Fazel and Wolf, 2018] Fazel, S. and Wolf, A. (2018). Selecting a risk assessment tool to use in practice: a 10-point guide. Evidence-based mental health, 21(2):41–43.
[Fedus et al., 2020] Fedus, W., Ramachandran, P., Agarwal, R., Bengio, Y., Larochelle, H., Rowland, M., and Dabney, W. (2020). Revisiting fundamentals of experience replay. In International Conference on Machine Learning, pages 3061–3071. PMLR.
[Feeley and Simon, 1992] Feeley, M. M. and Simon, J. (1992). The new penology: Notes on the emerging strategy of corrections and its implications. Criminology, 30(4):449–474.
[Feenberg, 2000] Feenberg, A. (2000). From essentialism to constructivism: Philosophy of technology at the crossroads. Technology and the good life, pages 294–315.
[Feinberg, 1970] Feinberg, J. (1970). Doing & deserving; essays in the theory of responsibility. Princeton University Press.
[Feinman, 2010] Feinman, J. M. (2010). Law 101: Everything you need to know about American law. Oxford University Press.
[Fell et al., 2021] Fell, J., Shmueli, G., Greene, T., Wang, J.-C., Ray, S., and Wu, S.-Y. (2021). Seeing humans in the data: Ethical blind spots of taiwan academic researchers in the era of behavioral big data. In Proceedings of the 54th Hawaii international conference on system sciences, page 6599.
[Ferguson et al., 2019] Ferguson, K. D., McCann, M., Katikireddi, S. V., Thomson, H., Green, M. J., Smith, D. J., and Lewsey, J. D. (2019). Evidence synthesis for constructing directed acyclic graphs (ESC-DAGs): a novel and systematic method for building directed acyclic graphs. International Journal of Epidemiology, 49(1):322–329.
[Fernández-Loría and Provost, 2022] Fernández-Loría, C. and Provost, F. (2022). Causal decision making and causal effect estimation are not the same... and why it matters. INFORMS Journal on Data Science.
[Fernández-Loría et al., 2020] Fernández-Loría, C., Provost, F., and Han, X. (2020). Explaining data-driven decisions made by ai systems: The counterfactual approach.
[Ferrario et al., 2023] Ferrario, A., Gloeckler, S., and Biller-Andorno, N. (2023). Ethics of the algorithmic prediction of goal of care preferences: from theory to practice. Journal of Medical Ethics, 49(3):165–174.
[Ferrario and Loi, 2020] Ferrario, A. and Loi, M. (2020). A series of unfortunate counterfactual events: the role of time in counterfactual explanations.
[Feyerabend, 2010] Feyerabend, P. (2010). Against method: Outline of an anarchistic theory of knowledge. Verso Books.
[Feyerabend, 2018] Feyerabend, P. (2018). Science in a free society. Verso Books.
[Figueroa et al., 2021] Figueroa, C. A., Hernandez-Ramos, R., Boone, C. E., Gómez-Pathak, L., Yip, V., Luo, T., Sierra, V., Xu, J., Chakraborty, B., Darrow, S., et al. (2021). A text messaging intervention for coping with social distancing during covid-19 (staywell at home): Protocol for a randomized controlled trial. JMIR research protocols, 10(1):e23592.
[Fisher et al., 2019] Fisher, A., Rudin, C., and Dominici, F. (2019). All models are wrong, but many are useful: Learning a variable’s importance by studying an entire class of prediction models simultaneously. J. Mach. Learn. Res., 20(177):1–81.
[Fisher, 1987] Fisher, D. H. (1987). Knowledge acquisition via incremental conceptual clustering. Machine learning, 2:139–172.
[Fisher, 1935] Fisher, R. A. (1935). The logic of inductive inference. Journal of the royal statistical society, 98(1):39–82.
[Fisher, 1985] Fisher, W. R. (1985). The narrative paradigm: In the beginning. Journal of communication, 35(4):74–89.
[Fiske, 2022] Fiske, S. T. (2022). Twitter manipulates your feed: Ethical considerations. Proceedings of the National Academy of Sciences, 119(1):e2119924119.
[Flach, 2019] Flach, P. (2019). Performance evaluation in machine learning: the good, the bad, the ugly, and the way forward. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 9808–9814.
[Floridi, 2011] Floridi, L. (2011). The informational nature of personal identity. Minds and machines, 21(4):549.
[Floridi, 2016] Floridi, L. (2016). On human dignity as a foundation for the right to privacy. Philosophy & Technology, 29(4):307–312.
[Floridi et al., 2018] Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., et al. (2018). Ai4people—an ethical framework for a good ai society: Opportunities, risks, principles, and recommendations. Minds and machines, 28(4):689–707.
[Fogg, 2002] Fogg, B. J. (2002). Persuasive technology: using computers to change what we think and do. Morgan Kaufmann.
[Fogliato et al., 2020] Fogliato, R., Chouldechova, A., and G’Sell, M. (2020). Fairness evaluation in presence of biased noisy labels. In International Conference on Artificial Intelligence and Statistics, pages 2325–2336.
[Forde et al., 2018] Forde, J., Head, T., Holdgraf, C., Panda, Y., Nalvarete, G., Ragan-Kelley, B., and Sundell, E. (2018). Reproducible research environments with repo2docker. In ICML Reproducibility in Machine Learning Workshop.
[Fowers and Davidov, 2006] Fowers, B. J. and Davidov, B. J. (2006). The virtue of multiculturalism: Personal transformation, character, and openness to the other. American Psychologist, 61(6):581.
[Fox, 2015] Fox, J. A. (2015). Social accountability: what does the evidence really say? World Development, 72:346–361.
[Frankel, 1975] Frankel, M. E. (1975). The search for truth: An umpireal view. University of Pennsylvania Law Review, 123(5):1031–1059.
[Frankfurt, 1987] Frankfurt, H. (1987). Equality as a moral ideal. Ethics, 98(1):21–43.
[Frankfurt, 1971] Frankfurt, H. G. (1971). Freedom of the will and the concept of a person. The Journal of Philosophy, 68(1):5–20.
[Frankl, 1985] Frankl, V. E. (1985). Man’s search for meaning. Simon and Schuster.
[Frase, 2000] Frase, R. S. (2000). Is guided discretion sufficient-overview of state sentencing guidelines. St. Louis ULJ, 44:425.
[Freedman, 1987] Freedman, B. (1987). Equipoise and the ethics of clinical research. The New England Journal of Medicine, 317(3):141–145.
[Freedman, 1975] Freedman, M. H. (1975). Judge frankel’s search for truth. University of Pennsylvania Law Review, 123(5):1060–1066.
[Frénay and Verleysen, 2013] Frénay, B. and Verleysen, M. (2013). Classification in the presence of label noise: a survey. IEEE transactions on neural networks and learning systems, 25(5):845–869.
[French, 1999] French, R. M. (1999). Catastrophic forgetting in connectionist networks. Trends in cognitive sciences, 3(4):128–135.
[Fried, 2020] Fried, I. (December 9, 2020). Scoop: Google CEO pledges to investigate exit of top AI ethicist. https://www.axios.com/sundar-pichai-memo-timnit-gebru-exit-18b0efb0-5bc3-41e6-ac28-2956732ed78b.html. Axios, Online, accessed February 24, 2022.
[Friedler et al., 2021] Friedler, S. A., Scheidegger, C., and Venkatasubramanian, S. (2021). The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making. Communications of the ACM, 64(4):136–143.
[Friedman and Hendry, 2019] Friedman, B. and Hendry, D. G. (2019). Value sensitive design: Shaping technology with moral imagination. MIT Press.
[Friedman, 2007] Friedman, M. (2007). The social responsibility of business is to increase its profits. In Corporate ethics and corporate governance, pages 173–178. Springer.
[Friston, 2010] Friston, K. (2010). The free-energy principle: a unified brain theory? Nature reviews neuroscience, 11(2):127–138.
[Friston, 2022] Friston, K. (2022). Computational psychiatry: from synapses to sentience. Molecular Psychiatry, pages 1–13.
[Frow et al., 2011] Frow, P., Payne, A., Wilkinson, I. F., and Young, L. (2011). Customer management and crm: addressing the dark side. Journal of Services Marketing.
[Fudenberg et al., 2022] Fudenberg, D., Kleinberg, J., Liang, A., and Mullainathan, S. (2022). Measuring the completeness of economic models. Journal of Political Economy, 130(4):956–990.
[Fukuyama, 2018] Fukuyama, F. (2018). Identity: The demand for dignity and the politics of resentment. Farrar, Straus and Giroux.
[Fuller, 1964] Fuller, L. L. (1964). The morality of law. Yale University Press.
[Fung, 2013a] Fung, A. (2013a). The principle of affected interests: An interpretation and defense. In Nagel, J. H. and Smith, R. M., editors, Representation: Elections and Beyond. University of Pennsylvania Press.
[Fung, 2013b] Fung, A. (2013b). The principle of affected interests: An interpretation and defense. Representation: Elections and beyond, page 236.
[Fung et al., 2007] Fung, A., Graham, M., and Weil, D. (2007). Full disclosure: The perils and promise of transparency. Cambridge University Press.
[Furuta et al., 2021] Furuta, H., Matsushima, T., Kozuno, T., Matsuo, Y., Levine, S., Nachum, O., and Gu, S. S. (2021). Policy information capacity: Information-theoretic measure for task complexity in deep reinforcement learning. In International Conference on Machine Learning, pages 3541–3552. PMLR.
[Gabriel, 2020] Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and machines, 30(3):411–437.
[Gabriel, 2022] Gabriel, I. (2022). Toward a theory of justice for artificial intelligence. Daedalus, 151(2):218–231.
[Gadamer et al., 2004] Gadamer, H.-G., Weinsheimer, J., and Marshall, D. G. (2004). EPZ truth and method. Bloomsbury Publishing USA.
[Gallie, 1955] Gallie, W. B. (1955). Essentially contested concepts. In Proceedings of the Aristotelian society, volume 56, pages 167–198. JSTOR.
[Galston, 2002] Galston, W. A. (2002). Liberal pluralism: The implications of value pluralism for political theory and practice. Cambridge University Press.
[Gama et al., 2014] Gama, J., Žliobaitė, I., Bifet, A., Pechenizkiy, M., and Bouchachia, A. (2014). A survey on concept drift adaptation. ACM computing surveys (CSUR), 46(4):1–37.
[García and Fernández, 2015] García, J. and Fernández, F. (2015). A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 16(1):1437–1480.
[Garrett and Monahan, 2020] Garrett, B. L. and Monahan, J. (2020). Judging risk. Calif. L. Rev., 108:439.
[Gastwirth, 1992] Gastwirth, J. L. (1992). Statistical reasoning in the legal setting. The American Statistician, 46(1):55–69.
[Gauci et al., 2018] Gauci, J., Conti, E., Liang, Y., Virochsiri, K., He, Y., Kaden, Z., Narayanan, V., Ye, X., Chen, Z., and Fujimoto, S. (2018). Horizon: Facebook’s open source applied reinforcement learning platform. arXiv preprint arXiv:1811.00260.
[Gauci et al., 2019] Gauci, J., Liu, H., Ghavamzadeh, M., and Nahmias, R. (10/16/2019). Open-sourcing ReAgent, a modular, end-to-end platform for building reasoning systems. https://ai.facebook.com/blog/open-sourcing-reagent-a-platform-for-reasoning-systems/. Facebook AI, Online, accessed February 24, 2022.
[Gebru et al., 2021] Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé, H., and Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12):86–92.
[Gelman and Loken, 2014] Gelman, A. and Loken, E. (2014). The statistical crisis in science: data-dependent analysis—a “garden of forking paths”—explains why many statistically significant comparisons don’t hold up. American Scientist, 102(6):460.
[Gendler, 2014] Gendler, T. S. (2014). Thought experiment: On the powers and limits of imaginary cases. Routledge.
[Gepperth and Hammer, 2016] Gepperth, A. and Hammer, B. (2016). Incremental learning algorithms and applications. In European symposium on artificial neural networks (ESANN).
[Gergen, 1992] Gergen, K. J. (1992). The social constructionist movement in modern psychology. American Psychologist.
[Ghazimatin et al., 2020] Ghazimatin, A., Balalau, O., Saha Roy, R., and Weikum, G. (2020). Prince: Provider-side interpretability with counterfactual explanations in recommender systems. In Proceedings of the 13th International Conference on Web Search and Data Mining, pages 196–204.
[Giddens, 1991] Giddens, A. (1991). Modernity and self-identity: Self and society in the late modern age. Stanford university press.
[Gigerenzer and Gaissmaier, 2011] Gigerenzer, G. and Gaissmaier, W. (2011). Heuristic decision making. Annual review of psychology, 62:451–482.
[Gilbert et al., 2022] Gilbert, T. K., Dean, S., Lambert, N., Zick, T., and Snoswell, A. (2022). Reward reports for reinforcement learning. arXiv preprint arXiv:2204.10817.
[Gilboa et al., 2012] Gilboa, I., Postlewaite, A., and Schmeidler, D. (2012). Rationality of belief or: why savage’s axioms are neither necessary nor sufficient for rationality. Synthese, 187:11–31.
[Gilpin et al., 2018] Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., and Kagal, L. (2018). Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th International Conference on data science and advanced analytics (DSAA), pages 80–89. IEEE.
[Ginsberg, 1963] Ginsberg, M. (1963). The concept of justice. Philosophy, 38(144):99–116.
[Glimcher and Rustichini, 2004] Glimcher, P. W. and Rustichini, A. (2004). Neuroeconomics: the consilience of brain and decision. Science, 306(5695):447–452.
[Glymour et al., 2016] Glymour, M., Pearl, J., and Jewell, N. P. (2016). Causal inference in statistics: A primer. John Wiley & Sons.
[Godfrey-Smith, 2002] Godfrey-Smith, P. (2002). Environmental complexity and the evolution of cognition.
[Goertzel, 2016] Goertzel, K. M. (2016). Legal liability for bad software. CrossTalk, 23.
[Goethals et al., 2023] Goethals, S., Martens, D., and Calders, T. (2023). PreCoF: counterfactual explanations for fairness. Machine Learning, pages 1–32.
[Gohel et al., 2021] Gohel, P., Singh, P., and Mohanty, M. (2021). Explainable AI: current status and future directions.
[Goldfarb et al., 2022] Goldfarb, A., Tucker, C., and Wang, Y. (2022). Conducting research in marketing with quasi-experiments. Journal of Marketing, 86(3):1–20.
[Gollwitzer, 1993] Gollwitzer, P. M. (1993). Goal achievement: The role of intentions. European review of social psychology, 4(1):141–185.
[Gollwitzer, 1999] Gollwitzer, P. M. (1999). Implementation intentions: strong effects of simple plans. American psychologist, 54(7):493.
[Gomez-Uribe and Hunt, 2015] Gomez-Uribe, C. A. and Hunt, N. (2015). The Netflix recommender system: Algorithms, business value, and innovation. ACM Transactions on Management Information Systems (TMIS), 6(4).
[Gonçalves, 2020] Gonçalves, M. E. (2020). The risk-based approach under the new eu data protection regulation: a critical perspective. Journal of Risk Research, 23(2):139–152.
[Goodfellow et al., 2016] Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep learning. MIT press.
[Goodin, 1986] Goodin, R. E. (1986). Protecting the vulnerable: A re-analysis of our social responsibilities. University of Chicago Press.
[Gordon et al., 2019] Gordon, B. R., Zettelmeyer, F., Bhargava, N., and Chapsky, D. (2019). A comparison of approaches to advertising measurement: Evidence from big field experiments at facebook. Marketing Science, 38(2):193–225.
[Gordon and Rosenthal, 2003] Gordon, M. J. and Rosenthal, J. S. (2003). Capitalism’s growth imperative. Cambridge Journal of Economics, 27(1):25–48.
[Gorski, 2013] Gorski, P. S. (2013). Beyond the fact/value distinction: Ethical naturalism and the social sciences. Society, 50(6):543–553.
[Gorwa et al., 2020] Gorwa, R., Binns, R., and Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1):2053951719897945.
[Gottesman et al., 2019] Gottesman, O., Johansson, F., Komorowski, M., Faisal, A., Sontag, D., Doshi-Velez, F., and Celi, L. A. (2019). Guidelines for reinforcement learning in healthcare. Nature medicine, 25(1):16–18.
[Gottfredson, 1987] Gottfredson, D. M. (1987). Prediction and classification in criminal justice decision making. Crime and justice, 9:1–20.
[Gottfredson, 1997] Gottfredson, L. S. (1997). Mainstream science on intelligence: An editorial with 52 signatories, history, and bibliography. Intelligence, 24(1):13–23.
[Gottlieb et al., 2013] Gottlieb, J., Oudeyer, P.-Y., Lopes, M., and Baranes, A. (2013). Information-seeking, curiosity, and attention: computational and neural mechanisms. Trends in cognitive sciences, 17(11):585–593.
[Gouwens, 1982] Gouwens, D. J. (1982). Kierkegaard on the ethical imagination. The Journal of Religious Ethics, pages 204–220.
[Grace, 2016] Grace, Y. Y. (2016). Statistical analysis with measurement error or misclassification. Springer.
[Gray et al., 2018] Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., and Toombs, A. L. (2018). The dark (patterns) side of ux design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pages 1–14.
[Green, 2020] Green, B. (2020). The false promise of risk assessments: epistemic reform and the limits of fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 594–606.
[Green, 2021] Green, B. (2021). Data science as political action: grounding data science in a politics of justice. Journal of Social Computing, 2(3):249–265.
[Green and Hu, 2018] Green, B. and Hu, L. (2018). The myth in the methodology: Towards a recontextualization of fairness in machine learning. In Proceedings of the machine learning: the debates workshop.
[Greene et al., 2022a] Greene, T., Martens, D., and Shmueli, G. (2022a). Barriers to academic data science research in the new realm of algorithmic behaviour modification by digital platforms. Nature Machine Intelligence, 4(4):323–330.
[Greene et al., 2022b] Greene, T., Shmueli, G., Fell, J., Lin, C.-F., and Liu, H.-W. (2022b). Forks over knives: Predictive inconsistency in criminal justice algorithmic risk prediction tools. Journal of the Royal Statistical Society: Series A (Statistics in Society), pages 1–32.
[Greene et al., 2019] Greene, T., Shmueli, G., Ray, S., and Fell, J. (2019). Adjusting to the gdpr: The impact on data scientists and behavioral researchers. Big data, 7(3):140–162.
[Griffin, 2009] Griffin, J. (2009). On human rights. OUP Oxford.
[Griffith et al., 2013] Griffith, S., Subramanian, K., Scholz, J., Isbell, C. L., and Thomaz, A. L. (2013). Policy shaping: Integrating human feedback with reinforcement learning. Advances in neural information processing systems, 26.
[Grossman et al., 1993] Grossman, R. L., Adams, F. T., and Levenstein, C. (1993). Taking care of business: Citizenship and the charter of incorporation. New Solutions: A Journal of Environmental and Occupational Health Policy, 3(3):7–18.
[Grosz et al., 2019] Grosz, B. J., Grant, D. G., Vredenburgh, K., Behrends, J., Hu, L., Simmons, A., and Waldo, J. (2019). Embedded ethics: integrating ethics across cs education. Communications of the ACM, 62(8):54–61.
[Grover et al., 2020] Grover, V., Lindberg, A., Benbasat, I., and Lyytinen, K. (2020). The perils and promises of big data research in information systems. Journal of the Association for Information Systems, 21(2):9.
[Groves et al., 2011] Groves, R. M., Fowler Jr, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., and Tourangeau, R. (2011). Survey methodology. John Wiley & Sons.
[Gruson et al., 2019] Gruson, A., Chandar, P., Charbuillet, C., McInerney, J., Hansen, S., Tardieu, D., and Carterette, B. (2019). Offline evaluation to make decisions about playlist recommendation algorithms. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 420–428.
[Gu et al., 2016] Gu, S., Lillicrap, T., Sutskever, I., and Levine, S. (2016). Continuous deep q-learning with model-based acceleration. In International conference on machine learning, pages 2829–2838. PMLR.
[Guardian, 2021] Guardian, T. (February 20, 2021). Google fires Margaret Mitchell, another top researcher on its AI ethics team. https://www.theguardian.com/technology/2021/feb/19/google-fires-margaret-mitchell-ai-ethics-team. The Guardian, Online, accessed February 24, 2022.
[Guidotti, 2022] Guidotti, R. (2022). Counterfactual explanations and how to find them: literature review and benchmarking. Data Mining and Knowledge Discovery, pages 1–55.
[Gureckis and Markant, 2012] Gureckis, T. M. and Markant, D. B. (2012). Self-directed learning: A cognitive and computational perspective. Perspectives on Psychological Science, 7(5):464–481.
[Guyer, 2016] Guyer, P. (2016). The Virtues of Freedom: Selected Essays on Kant. Oxford University Press.
[Habermas, 1971] Habermas, J. (1971). Knowledge and Human Interests. Beacon.
[Habermas, 1975] Habermas, J. (1975). Legitimation crisis, volume 519. Beacon Press.
[Habermas, 1985] Habermas, J. (1985). The theory of communicative action: Volume 2: Lifeworld and system: A critique of functionalist reason, volume 2. Beacon Press.
[Habermas, 1990] Habermas, J. (1990). Moral consciousness and communicative action. MIT press.
[Habermas, 2015] Habermas, J. (2015). Between facts and norms: Contributions to a discourse theory of law and democracy. John Wiley & Sons.
[Hacking, 1983] Hacking, I. (1983). Representing and intervening: Introductory topics in the philosophy of natural science. Cambridge university press.
[Hacking, 2007] Hacking, I. (2007). Kinds of people: Moving targets. In Proceedings of the British Academy, volume 151, pages 285–318.
[Hadad et al., 2021] Hadad, V., Hirshberg, D. A., Zhan, R., Wager, S., and Athey, S. (2021). Confidence intervals for policy evaluation in adaptive experiments. Proceedings of the National Academy of Sciences, 118(15).
[Hadfield-Menell et al., 2016] Hadfield-Menell, D., Russell, S. J., Abbeel, P., and Dragan, A. (2016). Cooperative inverse reinforcement learning. Advances in neural information processing systems, 29.
[Hafen et al., 2014] Hafen, E., Kossmann, D., and Brand, A. (2014). Health data cooperatives–citizen empowerment. Methods of information in medicine, 53(02):82–86.
[Hagendorff, 2020] Hagendorff, T. (2020). Ai virtues–the missing link in putting ai ethics into practice. arXiv preprint arXiv:2011.12750.
[Hagendorff, 2021] Hagendorff, T. (2021). Blind spots in ai ethics. AI and Ethics, pages 1–17.
[Hájek, 2007] Hájek, A. (2007). The reference class problem is your problem too. Synthese, 156(3):563–585.
[Hamilton, 2015] Hamilton, M. (2015). Risk-needs assessment: Constitutional and ethical challenges. Am. Crim. L. Rev., 52:231.
[Hampshire, 1978] Hampshire, S. (1978). Public and private morality. Cambridge University Press.
[Hand, 2006] Hand, D. J. (2006). Classifier technology and the illusion of progress. Statistical science, 21(1):1–14.
[Hand, 2009] Hand, D. J. (2009). Measuring classifier performance: a coherent alternative to the area under the roc curve. Machine learning, 77(1):103–123.
[Hannah-Moffat and Shaw, 2001] Hannah-Moffat, K. and Shaw, M. (2001). Taking risks: Incorporating gender and culture into the classification and assessment of federally sentenced women in Canada. Ottawa, ON: Status of Women Canada, Government of Canada.
[Hansmann and Kraakman, 2017] Hansmann, H. and Kraakman, R. (2017). The end of history for corporate law. In Corporate governance, pages 49–78. Gower.
[Hanson et al., 2017] Hanson, R. K., Babchishin, K. M., Helmus, L. M., Thornton, D., and Phenix, A. (2017). Communicating the results of criterion referenced prediction measures: Risk categories for the static-99r and static-2002r sexual offender risk assessment tools. Psychological Assessment, 29(5):582.
[Hansotia and Rukstales, 2002] Hansotia, B. and Rukstales, B. (2002). Incremental value modeling. Journal of Interactive Marketing, 16(3):35–46.
[Harambam et al., 2019] Harambam, J., Bountouridis, D., Makhortykh, M., and Van Hoboken, J. (2019). Designing for the better by taking users into account: A qualitative evaluation of user control mechanisms in (news) recommender systems. In Proceedings of the 13th ACM Conference on Recommender Systems, pages 69–77.
[Harambam et al., 2018] Harambam, J., Helberger, N., and Van Hoboken, J. (2018). Democratizing algorithmic news recommenders: how to materialize voice in a technologically saturated media ecosystem. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133):20180088.
[Hardin, 1985] Hardin, G. (1985). Filters against folly. Viking Books.
[Hardt et al., 2016] Hardt, M., Price, E., and Srebro, N. (2016). Equality of opportunity in supervised learning. Advances in neural information processing systems, 29.
[Hare, 1991] Hare, R. M. (1991). The language of morals. Oxford Paperbacks.
[Harman, 1977] Harman, G. (1977). The Nature of Morality: An Introduction to Ethics. Oxford University Press.
[Harper et al., 2015] Harper, F. M., Xu, F., Kaur, H., Condiff, K., Chang, S., and Terveen, L. (2015). Putting users in control of their recommendations. In Proceedings of the 9th ACM Conference on Recommender Systems, pages 3–10.
[Harris, 1976] Harris, M. (1976). History and significance of the emic/etic distinction. Annual review of anthropology, 5(1):329–350.
[Hart, 1961] Hart, H. (1961). The Concept of Law. OUP Oxford.
[Hart and Honoré, 1985] Hart, H. and Honoré, T. (1985). Causation in the Law. OUP Oxford.
[Hart, 2008] Hart, H. L. A. (2008). Punishment and responsibility: Essays in the philosophy of law. Oxford University Press.
[Hart and Zingales, 2017a] Hart, O. and Zingales, L. (2017a). Companies should maximize shareholder welfare not market value. Journal of Law, Finance, and Accounting, 2:247–274.
[Hart and Zingales, 2017b] Hart, O. and Zingales, L. (2017b). Serving shareholders doesn’t mean putting profit above all else. Harvard Business Review, 12:2–6.
[Hartmann and Wenzelburger, 2021] Hartmann, K. and Wenzelburger, G. (2021). Uncertainty, risk and the use of algorithms in policy decisions: a case study on criminal justice in the usa. Policy Sciences, 54(2):269–287.
[Hartzog, 2021] Hartzog, W. (2021). What is privacy? that’s the wrong question. U. Chi. L. Rev., 88:1677.
[Harvey, 2007] Harvey, D. (2007). A brief history of neoliberalism. Oxford University Press, USA.
[Hastie et al., 2009] Hastie, T., Tibshirani, R., and Friedman, J. (2009). The elements of statistical learning: data mining, inference, and prediction. Cham: Springer.
[Häubl and Trifts, 2000] Häubl, G. and Trifts, V. (2000). Consumer decision making in online shopping environments: The effects of interactive decision aids. Marketing science, 19(1):4–21.
[Haugeland, 1997] Haugeland, J. (1997). Mind design II: Philosophy, psychology, and artificial intelligence. MIT press.
[Haugen, 2021] Haugen, F. (October 04, 2021). Statement of Frances Haugen. https://www.commerce.senate.gov/services/files/FC8A558E-824E-4914-BEDB-3A7B1190BD49. Whistleblower Aid, Online, accessed February 24, 2022.
[Hayek, 1973] Hayek, F. A. (1973). Law, legislation and liberty, Volume 1: Rules and order. University of Chicago Press.
[Hayek, 1980] Hayek, F. A. (1980). Individualism and economic order. University of chicago Press.
[Haykin, 2009] Haykin, S. (2009). Neural networks and learning machines. Pearson Education India.
[He et al., 2014] He, X., Pan, J., Jin, O., Xu, T., Liu, B., Xu, T., Shi, Y., Atallah, A., Herbrich, R., Bowers, S., et al. (2014). Practical lessons from predicting clicks on ads at facebook. In Proceedings of the eighth international workshop on data mining for online advertising, pages 1–9.
[Hecht et al., 2021] Hecht, B., Wilcox, L., Bigham, J. P., Schöning, J., Hoque, E., Ernst, J., Bisk, Y., De Russis, L., Yarosh, L., Anjum, B., et al. (2021). It's time to do something: Mitigating the negative impacts of computing through a change to the peer review process. arXiv preprint arXiv:2112.09544.
[Heckman, 1979] Heckman, J. (1979). Sample selection bias as a specification error. Econometrica, 47(1):153–161.
[Heidegger, 1977] Heidegger, M. (1977). The question concerning technology.
[Heidegger, 2010] Heidegger, M. (2010). Being and time. Suny Press.
[Heinze et al., 2018] Heinze, G., Wallisch, C., and Dunkler, D. (2018). Variable selection–a review and recommendations for the practicing statistician. Biometrical Journal, 60(3):431–449.
[Helberger et al., 2018] Helberger, N., Karppinen, K., and D’acunto, L. (2018). Exposure diversity as a design principle for recommender systems. Information, Communication & Society, 21(2):191–207.
[Held et al., 2006] Held, V. et al. (2006). The ethics of care: Personal, political, and global. Oxford University Press on Demand.
[Helfat et al., 2009] Helfat, C. E., Finkelstein, S., Mitchell, W., Peteraf, M., Singh, H., Teece, D., and Winter, S. G. (2009). Dynamic capabilities: Understanding strategic change in organizations. John Wiley & Sons.
[Helmond, 2015] Helmond, A. (2015). The platformization of the web: Making web data platform ready. Social Media+ Society, 1(2):1–11.
[Helwig, 2006] Helwig, C. C. (2006). The development of personal autonomy throughout cultures. Cognitive Development, 21(4):458–473.
[Henderson et al., 2018] Henderson, P., Islam, R., Bachman, P., Pineau, J., Precup, D., and Meger, D. (2018). Deep reinforcement learning that matters. In Proceedings of the AAAI conference on artificial intelligence, volume 32.
[Hendrycks et al., 2020] Hendrycks, D., Burns, C., Basart, S., Critch, A., Li, J., Song, D., and Steinhardt, J. (2020). Aligning ai with shared human values. arXiv preprint arXiv:2008.02275.
[Hertwig and Grüne-Yanoff, 2017] Hertwig, R. and Grüne-Yanoff, T. (2017). Nudging and boosting: Steering or empowering good decisions. Perspectives on Psychological Science, 12(6):973–986.
[Heywood, 2021] Heywood, A. (2021). Political ideologies: An introduction. Bloomsbury Publishing.
[Hijmans and Raab, 2018] Hijmans, H. and Raab, C. D. (2018). Ethical dimensions of the gdpr. Commentary on the General Data Protection Regulation, Cheltenham: Edward Elgar (2018, Forthcoming).
[Hildebrandt, 2015] Hildebrandt, M. (2015). Smart technologies and the end (s) of law: novel entanglements of law and technology. Edward Elgar Publishing.
[Hildebrandt, 2022] Hildebrandt, M. (2022). The issue of proxies and choice architectures. Why eu law matters for recommender systems. Frontiers in Artificial Intelligence, page 73.
[Hildebrandt and de Vries, 2013] Hildebrandt, M. and de Vries, K. (2013). Privacy, due process and the computational turn: The philosophy of law meets the philosophy of technology. Routledge.
[Hildebrandt and Gutwirth, 2008] Hildebrandt, M. and Gutwirth, S. (2008). Profiling the European citizen. Springer.
[Hill et al., 2006] Hill, S., Provost, F., Volinsky, C., et al. (2006). Network-based marketing: Identifying likely adopters via consumer networks. Statistical Science, 21(2):256–276.
[Hilton et al., 2015] Hilton, N. Z., Scurich, N., and Helmus, L.-M. (2015). Communicating the risk of violent and offending behavior: Review and introduction to this special issue. Behavioral Sciences & the Law, 33(1):1–18.
[Hinton, 2022] Hinton, G. (2022). The forward-forward algorithm: Some preliminary investigations. arXiv preprint arXiv:2212.13345.
[Hinton and Salakhutdinov, 2006] Hinton, G. E. and Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507.
[Hirschheim and Klein, 1989] Hirschheim, R. and Klein, H. K. (1989). Four paradigms of information systems development. Communications of the ACM, 32(10):1199–1216.
[Hirschheim and Klein, 1994] Hirschheim, R. and Klein, H. K. (1994). Realizing emancipatory principles in information systems development: the case for ethics. MIS quarterly, pages 83–109.
[Hirschman, 1982] Hirschman, A. O. (1982). Rival interpretations of market society: Civilizing, destructive, or feeble? Journal of economic literature, 20(4):1463–1484.
[Ho et al., 2017] Ho, M. K., MacGlashan, J., Littman, M. L., and Cushman, F. (2017). Social is special: A normative framework for teaching with and learning from evaluative feedback. Cognition, 167:91–106.
[Hofman et al., 2021] Hofman, J. M., Watts, D. J., Athey, S., Garip, F., Griffiths, T. L., Kleinberg, J., Margetts, H., Mullainathan, S., Salganik, M. J., Vazire, S., et al. (2021). Integrating explanation and prediction in computational social science. Nature, 595(7866):181–188.
[Hofree and Winkielman, 2012] Hofree, G. and Winkielman, P. (2012). On (not) knowing and feeling what we want and like. In Vazire, S. and Wilson, T. D., editors, Handbook of self-knowledge, pages 210–221. Guilford Press.
[Hohnhold et al., 2015] Hohnhold, H., O’Brien, D., and Tang, D. (2015). Focusing on the long-term: It’s good for users and business. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1849–1858.
[Holbrook, 2005] Holbrook, J. B. (2005). Assessing the science–society relation: The case of the us national science foundation's second merit review criterion. Technology in Society, 27(4):437–451.
[Holland, 1986] Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81(396):945–960.
[Hong et al., 2021] Hong, J., Kveton, B., Zaheer, M., Chow, Y., and Ahmed, A. (2021). Non-stationary off-policy optimization. In International Conference on Artificial Intelligence and Statistics, pages 2494–2502. PMLR.
[Hong and Page, 2004] Hong, L. and Page, S. E. (2004). Groups of diverse problem solvers can outperform groups of high-ability problem solvers. Proceedings of the National Academy of Sciences, 101(46):16385–16389.
[Honneth, 2004] Honneth, A. (2004). Recognition and justice: Outline of a plural theory of justice. Acta Sociologica, 47(4):351–364.
[Horkheimer, 2013] Horkheimer, M. (2013). Critique of instrumental reason. Verso Books.
[Horton, 1964] Horton, J. (1964). The dehumanization of anomie and alienation: a problem in the ideology of sociology. The British Journal of Sociology, 15(4):283–300.
[Horwitz, 2020] Horwitz, J. (October 23, 2020). Facebook seeks shutdown of nyu research project into political ad targeting. https://www.wsj.com/articles/facebook-seeks-shutdown-of-nyu-research-project-into-political-ad-targeting-1160. Wall Street Journal, Online, accessed February 24, 2022.
[Hosanagar, 2020] Hosanagar, K. (2020). A human’s guide to machine intelligence: how algorithms are shaping our lives and how we can stay in control. Penguin.
[Hoy, 1982] Hoy, D. C. (1982). The critical circle: Literature, history, and philosophical hermeneutics. University of California Press.
[Hron et al., 2022] Hron, J., Krauth, K., Jordan, M. I., Kilbertus, N., and Dean, S. (2022). Modeling content creator incentives on algorithm-curated platforms. arXiv preprint arXiv:2206.13102.
[Hu, 2020] Hu, D. (2020). An introductory survey on attention mechanisms in nlp problems. In Intelligent Systems and Applications: Proceedings of the 2019 Intelligent Systems Conference (IntelliSys) Volume 2, pages 432–448. Springer.
[Hua et al., 2022] Hua, Y., Horta Ribeiro, M., Ristenpart, T., West, R., and Naaman, M. (2022). Characterizing alternative monetization strategies on youtube. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2):1–30.
[Huang et al., 2020] Huang, J., Oosterhuis, H., De Rijke, M., and Van Hoof, H. (2020). Keeping dataset biases out of the simulation: A debiased simulator for reinforcement learning based recommender systems. In Proceedings of the 14th ACM Conference on Recommender Systems, pages 190–199.
[Huang et al., 2021] Huang, J. Y., Gupta, A., and Youn, M. (2021). Survey of eu ethical guidelines for commercial ai: case studies in financial services. AI and Ethics, 1(4):569–577.
[Hudson and Ozanne, 1988] Hudson, L. A. and Ozanne, J. L. (1988). Alternative ways of seeking knowledge in consumer research. Journal of consumer research, 14(4):508–521.
[Huemer, 2007] Huemer, M. (2007). Ethical intuitionism. Springer.
[Huh and Malthouse, 2020] Huh, J. and Malthouse, E. C. (2020). Advancing computational advertising: Conceptualization of the field and future directions. Journal of Advertising, 49(4):367–376.
[Hume, 2003] Hume, D. (2003). A treatise of human nature. Courier Corporation.
[Hurd, 1999] Hurd, I. (1999). Legitimacy and authority in international politics. International Organization, 53(2):379–408.
[Hussein et al., 2017] Hussein, A., Gaber, M. M., Elyan, E., and Jayne, C. (2017). Imitation learning: A survey of learning methods. ACM Computing Surveys (CSUR), 50(2):1–35.
[Husserl, 1970] Husserl, E. (1970). The crisis of European sciences and transcendental phenomenology: An introduction to phenomenological philosophy. Northwestern University Press.
[Huszár et al., 2022] Huszár, F., Ktena, S. I., O'Brien, C., Belli, L., Schlaikjer, A., and Hardt, M. (2022). Algorithmic amplification of politics on twitter. Proceedings of the National Academy of Sciences, 119(1):e2025334119.
[Hutsebaut-Buysse et al., 2022] Hutsebaut-Buysse, M., Mets, K., and Latré, S. (2022). Hierarchical reinforcement learning: A survey and open research challenges. Machine Learning and Knowledge Extraction, 4(1):172–221.
[Hutter, 2004] Hutter, M. (2004). Universal artificial intelligence: Sequential decisions based on algorithmic probability. Springer Science & Business Media.
[Hwang, 2020] Hwang, T. (2020). Subprime attention crisis: Advertising and the time bomb at the heart of the Internet. FSG originals.
[Ibarz et al., 2021] Ibarz, J., Tan, J., Finn, C., Kalakrishnan, M., Pastor, P., and Levine, S. (2021). How to train your robot with deep reinforcement learning: lessons we have learned. The International Journal of Robotics Research, 40(4-5):698–721.
[Ie et al., 2019] Ie, E., Hsu, C.-w., Mladenov, M., Jain, V., Narvekar, S., Wang, J., Wu, R., and Boutilier, C. (2019). Recsim: A configurable simulation platform for recommender systems. arXiv preprint arXiv:1909.04847.
[IEEE Standards Association, 2019] IEEE Standards Association (2019). The IEEE global initiative on ethics of autonomous and intelligent systems. Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems.
[Imbens and Rubin, 2015] Imbens, G. W. and Rubin, D. B. (2015). Causal inference in statistics, social, and biomedical sciences. Cambridge University Press.
[Imrey and Dawid, 2015] Imrey, P. B. and Dawid, A. P. (2015). A commentary on statistical assessment of violence recidivism risk. Statistics and Public Policy, 2(1):1–18.
[Jackson, 1986] Jackson, F. (1986). What Mary didn't know. The Journal of Philosophy, 83(5):291–295.
[Jacobs and Wallach, 2021] Jacobs, A. Z. and Wallach, H. (2021). Measurement and fairness. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pages 375–385.
[Jagerman et al., 2019] Jagerman, R., Markov, I., and de Rijke, M. (2019). When people change their mind: Off-policy evaluation in non-stationary recommendation environments. In Proceedings of the twelfth ACM international conference on web search and data mining, pages 447–455.
[Jameson and Longo, 2015] Jameson, J. and Longo, D. (2015). Precision medicine–personalized, problematic, and promising. The New England Journal of Medicine, 372(23):2229–2234.
[Jansen and Mullen, 2008] Jansen, B. J. and Mullen, T. (2008). Sponsored search: an overview of the concept, history, and technology. International Journal of Electronic Business, 6(2):114–131.
[Japkowicz and Shah, 2011] Japkowicz, N. and Shah, M. (2011). Evaluating learning algorithms: a classification perspective. Cambridge University Press.
[Jeník and Duff, 2020] Jeník, I. and Duff, S. (2020). How to build a regulatory sandbox: A practical guide for policy makers. https://www.cgap.org/sites/default/files/publications/2020_09_Technical_Guide_How_To_Build_Regulatory_Sandbox.pdf.
[Jensen, 2001] Jensen, M. C. (2001). Value maximization, stakeholder theory, and the corporate objective function. Journal of Applied Corporate Finance, 14(3):8–21.
[Jeunen, 2019] Jeunen, O. (2019). Revisiting offline evaluation for implicit-feedback recommender systems. In Proceedings of the 13th ACM Conference on Recommender Systems, pages 596–600.
[Jeunen et al., 2022] Jeunen, O., Murphy, S., and Allison, B. (2022). Learning to bid with auctiongym.
[Jiang et al., 2021] Jiang, L., Hwang, J. D., Bhagavatula, C., Bras, R. L., Forbes, M., Borchardt, J., Liang, J., Etzioni, O., Sap, M., and Choi, Y. (2021). Delphi: Towards machine ethics and norms. arXiv preprint arXiv:2110.07574.
[Jiang and Fu, 2018] Jiang, M. and Fu, K.-W. (2018). Chinese social media and big data: big data, big brother, big profit? Policy & Internet, 10(4):372–392.
[Jiang et al., 2022] Jiang, M., Rocktäschel, T., and Grefenstette, E. (2022). General intelligence requires rethinking exploration. arXiv preprint arXiv:2211.07819.
[Jin et al., 2017] Jin, Y., Cardoso, B., and Verbert, K. (2017). How do different levels of user control affect cognitive load and acceptance of recommendations? In Proceedings of the 4th Joint Workshop on Interfaces and Human Decision Making for Recommender Systems co-located with ACM Conference on Recommender Systems (RecSys 2017), volume 1884, pages 35–42. CEUR Workshop Proceedings.
[Joachims et al., 2021] Joachims, T., London, B., Su, Y., Swaminathan, A., and Wang, L. (2021). Recommendations as treatments. AI Magazine, 42(3):19–30.
[Joachims and Swaminathan, 2016] Joachims, T. and Swaminathan, A. (2016). Counterfactual evaluation and learning for search, recommendation and ad placement. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, pages 1199–1201.
[Jobin et al., 2019] Jobin, A., Ienca, M., and Vayena, E. (2019). The global landscape of ai ethics guidelines. Nature Machine Intelligence, 1(9):389–399.
[Johnson and Verdicchio, 2017] Johnson, D. G. and Verdicchio, M. (2017). Ai anxiety. Journal of the Association for Information Science and Technology, 68(9):2267–2270.
[Johnson, 2020] Johnson, K. (2020). Amsterdam and helsinki launch algorithm registries to bring transparency to public deployments of ai. Online, accessed April 1, 2022.
[Jonas, 1982] Jonas, H. (1982). Technology as a subject for ethics. Social Research, pages 891–898.
[Jones and Sims, 1997] Jones, M. and Sims, B. (1997). Recidivism of offenders released from prison in North Carolina: A gender comparison. The Prison Journal, 77(3):335–348.
[Kaelbling et al., 1998] Kaelbling, L. P., Littman, M. L., and Cassandra, A. R. (1998). Planning and acting in partially observable stochastic domains. Artificial intelligence, 101(1-2):99–134.
[Kaelbling et al., 1996] Kaelbling, L. P., Littman, M. L., and Moore, A. W. (1996). Reinforcement learning: A survey. Journal of artificial intelligence research, 4:237–285.
[Kahneman et al., 2016] Kahneman, D., Rosenfield, A., Gandhi, L., and Blaser, T. (2016). Noise. Harvard Business Review, pages 38–46.
[Kahneman and Tversky, 1979] Kahneman, D. and Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2):263–292.
[Kallus, 2017] Kallus, N. (2017). Recursive partitioning for personalization using observational data. In International conference on machine learning, pages 1789–1798. PMLR.
[Kamel Boulos and Zhang, 2021] Kamel Boulos, M. N. and Zhang, P. (2021). Digital twins: from personalised medicine to precision public health. Journal of Personalized Medicine, 11(8):745.
[Kamenica, 2019] Kamenica, E. (2019). Bayesian persuasion and information design. Annual Review of Economics, 11:249–272.
[Kane et al., 2021] Kane, G. C., Young, A. G., Majchrzak, A., and Ransbotham, S. (2021). Avoiding an oppressive future of machine learning: A design theory for emancipatory assistants. MIS Quarterly, 45(1):371–396.
[Kane, 2001] Kane, R. (2001). Free will. John Wiley & Sons.
[Kant, 1948] Kant, I. (1948). Groundwork of the Metaphysics of Morals, trans. HJ Paton. London: Hutchinson.
[Kant, 2017] Kant, I. (2017). Kant: The metaphysics of morals. Cambridge University Press.
[Kaplan et al., 2002] Kaplan, F., Oudeyer, P.-Y., Kubinyi, E., and Miklósi, A. (2002). Robotic clicker training. Robotics and Autonomous Systems, 38(3-4):197–206.
[Kaptein et al., 2015] Kaptein, M., Markopoulos, P., De Ruyter, B., and Aarts, E. (2015). Personalizing persuasive technologies: Explicit and implicit personalization using persuasion profiles. International Journal of Human-Computer Studies, 77:38–51.
[Kaptein et al., 2010] Kaptein, M. C., Markopoulos, P., De Ruyter, B., and Aarts, E. (2010). Persuasion in ambient intelligence. Journal of Ambient Intelligence and Humanized Computing, 1(1):43–56.
[Karimi et al., 2021a] Karimi, A.-H., Barthe, G., Schölkopf, B., and Valera, I. (2021a). A survey of algorithmic recourse: contrastive explanations and consequential recommendations. ACM Computing Surveys (CSUR).
[Karimi et al., 2021b] Karimi, A.-H., Schölkopf, B., and Valera, I. (2021b). Algorithmic recourse: from counterfactual explanations to interventions. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pages 353–362.
[Karliuk, 2022] Karliuk, M. (2022). Proportionality principle for the ethics of artificial intelligence. AI and Ethics, pages 1–6.
[Karppi and Crawford, 2016] Karppi, T. and Crawford, K. (2016). Social media, financial algorithms and the hack crash. Theory, culture & society, 33(1):73–92.
[Katyal, 2019] Katyal, S. K. (2019). Private accountability in the age of artificial intelligence. UCLA L. Rev., 66:54.
[Kaye, 2022] Kaye, K. (July 11, 2022). Not my job: Ai researchers building surveillance tech and deepfakes resist ethical concerns. https://www.protocol.com/enterprise/ai-computer-vision-cvpr-ethics. Protocol, Online, accessed August 9, 2022.
[Keeney, 1988] Keeney, R. L. (1988). Structuring objectives for problems of public interest. Operations Research, 36(3):396–405.
[Keeney et al., 1993] Keeney, R. L., Raiffa, H., and Meyer, R. F. (1993). Decisions with multiple objectives: preferences and value trade-offs. Cambridge University Press.
[Kehl and Kessler, 2017] Kehl, D. L. and Kessler, S. A. (2017). Algorithms in the criminal justice system: Assessing the use of risk assessments in sentencing. Responsive Communities Initiative, Berkman Klein Center for Internet & Society, Harvard Law School.
[Kelleher et al., 2015] Kelleher, J. D., Mac Namee, B., and D'arcy, A. (2015). Fundamentals of machine learning for predictive data analytics: algorithms, worked examples, and case studies. MIT Press.
[Kelleher et al., 2020] Kelleher, J. D., Mac Namee, B., and D'arcy, A. (2020). Fundamentals of machine learning for predictive data analytics: algorithms, worked examples, and case studies. MIT Press.
[Kellert et al., 2006] Kellert, S. H., Longino, H. E., and Waters, C. K. (2006). Scientific pluralism. U of Minnesota Press.
[Kendall and Gal, 2017] Kendall, A. and Gal, Y. (2017). What uncertainties do we need in bayesian deep learning for computer vision? Advances in neural information processing systems, 30.
[Kenton et al., 2021] Kenton, Z., Everitt, T., Weidinger, L., Gabriel, I., Mikulik, V., and Irving, G. (2021). Alignment of language agents. arXiv preprint arXiv:2103.14659.
[Kiely, 2005] Kiely, T. F. (2005). Forensic evidence: science and the criminal law. CRC Press.
[King and Persily, 2020] King, G. and Persily, N. (2020). A new model for industry–academic partnerships. PS: Political Science & Politics, 53(4):703–709.
[Kirilenko et al., 2017] Kirilenko, A., Kyle, A. S., Samadi, M., and Tuzun, T. (2017). The flash crash: High-frequency trading in an electronic market. The Journal of Finance, 72(3):967–998.
[Kitcher, 2001] Kitcher, P. (2001). Science, truth, and democracy. Oxford University Press.
[Kitcher, 2003] Kitcher, P. (2003). Science, truth, and democracy. Oxford University Press.
[Kitchin, 2017] Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1):14–29.
[Kleinberg et al., 2018] Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., and Mullainathan, S. (2018). Human decisions and machine predictions. The quarterly journal of economics, 133(1):237–293.
[Kleinberg et al., 2016] Kleinberg, J., Mullainathan, S., and Raghavan, M. (2016). Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807.
[Kleinberg and Raghavan, 2020] Kleinberg, J. and Raghavan, M. (2020). How do classifiers induce agents to invest effort strategically? ACM Transactions on Economics and Computation (TEAC), 8(4):1–23.
[Kline, 2012] Kline, M. (2012). Mathematics and the physical world. Courier Corporation.
[Knottnerus, 2016] Knottnerus, J. A. (2016). Research data as a global public good. Journal of Clinical Epidemiology, 70:270–271.
[Knox et al., 2012] Knox, W. B., Glass, B. D., Love, B. C., Maddox, W. T., and Stone, P. (2012). How humans teach agents: A new experimental perspective. International Journal of Social Robotics, 4:409–421.
[Knox and Stone, 2009] Knox, W. B. and Stone, P. (2009). Interactively shaping agents via human reinforcement: The tamer framework. In Proceedings of the fifth international conference on Knowledge capture, pages 9–16.
[Koepke and Robinson, 2018] Koepke, J. L. and Robinson, D. G. (2018). Danger ahead: Risk assessment and the future of bail reform. Wash. L. Rev., 93:1725.
[Kohavi and Thomke, 2017] Kohavi, R. and Thomke, S. (2017). The surprising power of online experiments. Harvard Business Review.
[Koob and Volkow, 2016] Koob, G. F. and Volkow, N. D. (2016). Neurobiology of addiction: a neurocircuitry analysis. The Lancet Psychiatry, 3(8):760–773.
[Kornblith, 2010] Kornblith, H. (2010). What reflective endorsement cannot do. Philosophy and Phenomenological Research, 80(1):1–19.
[Korsgaard, 1996] Korsgaard, C. (1996). The sources of normativity. Cambridge University Press.
[Korsgaard, 2009] Korsgaard, C. M. (2009). Self-constitution: Agency, identity, and integrity. OUP Oxford.
[Korula et al., 2015] Korula, N., Mirrokni, V., and Nazerzadeh, H. (2015). Optimizing display advertising markets: Challenges and directions. IEEE Internet Computing, 20(1):28–35.
[Kosack and Fung, 2014] Kosack, S. and Fung, A. (2014). Does transparency improve governance? Annual Review of Political Science, 17.
[Kosinski et al., 2013] Kosinski, M., Stillwell, D., and Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences, 110(15):5802–5805.
[Koster et al., 2022] Koster, R., Balaguer, J., Tacchetti, A., Weinstein, A., Zhu, T., Hauser, O., Williams, D., Campbell-Gillingham, L., Thacker, P., Botvinick, M., et al. (2022). Human-centred mechanism design with democratic ai. Nature Human Behaviour, 6(9):1320.
[Kotkov et al., 2016] Kotkov, D., Wang, S., and Veijalainen, J. (2016). A survey of serendipity in recommender systems. Knowledge-Based Systems, 111:180–192.
[Kovatchev et al., 2009] Kovatchev, B. P., Breton, M., Dalla Man, C., and Cobelli, C. (2009). In silico preclinical trials: A proof of concept in closed-loop control of type 1 diabetes. Journal of Diabetes Science and Technology, 3(1):44–55.
[Krakovna et al., 2020] Krakovna, V., Uesato, J., Mikulik, V., Rahtz, M., Everitt, T., Kumar, R., Kenton, Z., Leike, J., and Legg, S. (2020). Specification gaming: the flip side of ai ingenuity. https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity. DeepMind Blog, Online, accessed November 8, 2022.
[Kramer et al., 2014] Kramer, A. D., Guillory, J. E., and Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24):8788–8790.
[Krishna et al., 2022] Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., and Lakkaraju, H. (2022). The disagreement problem in explainable machine learning: A practitioner’s perspective.
[Krizhevsky et al., 2017] Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2017). Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84–90.
[Kroll et al., 2017] Kroll, J. A., Huey, J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., and Yu, H. (2017). Accountable algorithms. University of Pennsylvania Law Review.
[Krueger et al., 2020] Krueger, D., Maharaj, T., and Leike, J. (2020). Hidden in- centives for auto-induced distributional shift. arXiv preprint arXiv:2009.09153.
[Kshetri, 2021] Kshetri, N. (2021). Data labeling for the artificial intelligence industry: Economic impacts in developing countries. IT Professional, 23(2):96–99.
[Kuhn and Johnson, 2013] Kuhn, M. and Johnson, K. (2013). Applied predictive modeling. Springer.
[Kuhn, 1970] Kuhn, T. S. (1970). The structure of scientific revolutions. Univer- sity of Chicago Press.
[Kumm, 2007] Kumm, M. (2007). Institutionalising socratic contestation: the rationalist human rights paradigm, legitimate authority and the point of judicial review. Eur. J. Legal Stud., 1:153.
[Kusner et al., 2017] Kusner, M. J., Loftus, J., Russell, C., and Silva, R. (2017). Counterfactual fairness. Advances in neural information processing systems, 30.
[Lacey, 2005] Lacey, H. (2005). Is science value free?: Values and scientific understanding. Routledge.
[Lacey, 2016] Lacey, H. (2016). Science, respect for nature, and human well-being: democratic values and the responsibilities of scientists today. Foundations of Science, 21(1):51–67.
[Lamiell, 1998] Lamiell, J. T. (1998). 'Nomothetic' and 'idiographic': Contrasting Windelband's understanding with contemporary usage. Theory & Psychology, 8(1):23–38.
[Landry and Banville, 1992] Landry, M. and Banville, C. (1992). A disciplined methodological pluralism for mis research. Accounting, Management and Information Technologies, 2(2):77–97.
[Langan and Levin, 2002] Langan, P. and Levin, D. (2002). Recidivism of prison- ers released in 1994. Online, accessed May 12, 2021.
[Langer, 1975] Langer, E. J. (1975). The illusion of control. Journal of personality and social psychology, 32(2):311.
[Langley and Leyshon, 2017] Langley, P. and Leyshon, A. (2017). Platform capitalism: the intermediation and capitalization of digital economic circulation. Finance and Society, 3(1):11–31.
[Lardinois, 2017] Lardinois, F. (September 25, 2017). Microsoft finally starts doing something with linkedin by integrating it into office 365. https://techcrunch.com/2017/09/25/microsoft-finally-starts-doing-something-with-linkedin-by-integrating-it-into-of. Tech Crunch, Online, accessed February 24, 2022.
[Latour and Woolgar, 2013] Latour, B. and Woolgar, S. (2013). Laboratory life. Princeton University Press.
[Lazar, 2022] Lazar, S. (2022). Legitimacy, authority, and the political value of explanations.
[Lazer, 2015] Lazer, D. (2015). The rise of the social algorithm. Science, 348(6239):1090–1091.
[Lazer et al., 2020] Lazer, D. M., Pentland, A., Watts, D. J., Aral, S., Athey, S., Contractor, N., Freelon, D., Gonzalez-Bailon, S., King, G., Margetts, H., et al. (2020). Computational social science: Obstacles and opportunities. Science, 369(6507):1060–1062.
[Le Moing et al., 2021] Le Moing, G., Ponce, J., and Schmid, C. (2021). Ccvs: Context-aware controllable video synthesis. Advances in Neural Information Processing Systems, 34:14042–14055.
[LeCun, 2022] LeCun, Y. (2022). A path towards autonomous machine intelligence. Version 0.9.2, 2022-06-27.
[LeCun et al., 2015] LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature, 521(7553):436–444.
[Lee et al., 2018] Lee, D., Hosanagar, K., and Nair, H. S. (2018). Advertising content and consumer engagement on social media: Evidence from facebook. Management Science, 64(11):5105–5131.
[Lefebvre et al., 2022] Lefebvre, G., Summerfield, C., and Bogacz, R. (2022). A normative account of confirmation bias during reinforcement learning. Neural Computation, 34(2):307–337.
[Legomsky, 2007] Legomsky, S. H. (2007). Learning to live with unequal justice: Asylum and the limits to consistency. Stan. L. Rev., 60:413.
[Lehtiniemi, 2017] Lehtiniemi, T. (2017). Personal data spaces: An intervention in surveillance capitalism? Surveillance & Society, 15(5):626–639.
[Leidner and Tona, 2021] Leidner, D. E. and Tona, O. (2021). The care theory of dignity amid personal data digitalization. MIS Quarterly, 45(1).
[Leiter, 2007] Leiter, B. (2007). Objectivity in law and morals. Cambridge: Cambridge University Press.
[Leiva et al., 2021] Leiva, L. A., Arapakis, I., and Iordanou, C. (2021). My mouse, my rules: Privacy issues of behavioral user profiling via mouse tracking. In Proceedings of the 2021 Conference on Human Information Interaction and Retrieval, pages 51–61.
[Leotti et al., 2010] Leotti, L. A., Iyengar, S. S., and Ochsner, K. N. (2010). Born to choose: The origins and value of the need for control. Trends in cognitive sciences, 14(10):457–463.
[Lerner, 1972] Lerner, A. P. (1972). The economics and politics of consumer sovereignty. The American Economic Review, 62(1/2):258–266.
[Leslie et al., 2021] Leslie, D., Burr, C., Aitken, M., Cowls, J., Katell, M., and Briggs, M. (2021). Artificial intelligence, human rights, democracy, and the rule of law: a primer. arXiv preprint arXiv:2104.04147.
[Letouzey, 2014] Letouzey, F. (2014). Fruit 1.0 uci (with source code) now available. Computer Chess Club Archives.
[Levine, 1983] Levine, J. (1983). Materialism and qualia: The explanatory gap. Pacific philosophical quarterly, 64(4):354–361.
[Levine, 2022] Levine, S. (2022). Understanding the world through action. In Conference on Robot Learning, pages 1752–1757. PMLR.
[Levine et al., 2020] Levine, S., Kumar, A., Tucker, G., and Fu, J. (2020). Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643.
[Lewin, 1996] Lewin, S. B. (1996). Economics and psychology: Lessons for our own day from the early twentieth century. Journal of economic literature, 34(3):1293–1323.
[Lewis, 1973] Lewis, D. (1973). Causation. The Journal of Philosophy, 70(17):556–567.
[Li et al., 2019] Li, G., Gomez, R., Nakamura, K., and He, B. (2019). Human-centered reinforcement learning: A survey. IEEE Transactions on Human-Machine Systems, 49(4):337–349.
[Li et al., 2010] Li, L., Chu, W., Langford, J., and Schapire, R. E. (2010). A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th international conference on World wide web, pages 661–670.
[Li et al., 2011] Li, L., Chu, W., Langford, J., and Wang, X. (2011). Unbiased offline evaluation of contextual-bandit-based news article recommendation algorithms. In Proceedings of the fourth ACM international conference on Web search and data mining, pages 297–306.
[Li, 2020] Li, X. (2020). Research on the building of china’s smart court in the internet era. China Legal Sci., 8:30.
[Liang et al., 2006] Liang, T.-P., Lai, H.-J., and Ku, Y.-C. (2006). Personalized content recommendation and user satisfaction: Theoretical synthesis and empirical findings. Journal of Management Information Systems, 23(3):45–70.
[Liao et al., 2022] Liao, Q. V., Zhang, Y., Luss, R., Doshi-Velez, F., and Dhurandhar, A. (2022). Connecting algorithmic research and usage contexts: A perspective of contextualized evaluation for explainable ai.
[Liao et al., 2021] Liao, T., Taori, R., Raji, I. D., and Schmidt, L. (2021). Are we learning yet? a meta review of evaluation failures across machine learning. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
[Lilienfeld et al., 2009] Lilienfeld, S. O., Ammirati, R., and Landfield, K. (2009). Giving debiasing away: Can psychological research on correcting cognitive errors promote human welfare? Perspectives on psychological science, 4(4):390–398.
[Lima et al., 2022] Lima, G., Grgić-Hlača, N., Jeong, J. K., and Cha, M. (2022). The conflict between explainable and accountable decision-making algorithms. In 2022 ACM Conference on Fairness, Accountability, and Transparency. ACM.
[Lindblom, 1959] Lindblom, C. E. (1959). The science of "muddling through". Public administration review, pages 79–88.
[Linders, 2012] Linders, D. (2012). From e-government to we-government: Defining a typology for citizen coproduction in the age of social media. Government information quarterly, 29(4):446–454.
[Lindström et al., 2021] Lindström, B., Bellander, M., Schultner, D. T., Chang, A., Tobler, P. N., and Amodio, D. M. (2021). A computational reward learning account of social media engagement. Nature communications, 12(1):1–10.
[Littman, 1996] Littman, M. L. (1996). Algorithms for sequential decision-making. PhD thesis, Brown University.
[Littman, 2015] Littman, M. L. (2015). Reinforcement learning improves behaviour from evaluative feedback. Nature, 521(7553):445–451.
[Liu and Abbeel, 2021] Liu, H. and Abbeel, P. (2021). Behavior from the void: Unsupervised active pre-training. Advances in Neural Information Processing Systems, 34:18459–18473.
[Liu et al., 2019] Liu, H.-W., Lin, C.-F., and Chen, Y.-J. (2019). Beyond State v Loomis: artificial intelligence, government algorithmization and accountability. International journal of law and information technology, 27(2):122–141.
[Liu, 2022] Liu, J. (2022). Social data governance: Towards a definition and model. Big Data & Society, 9(2):20539517221111352.
[Liu and Chao, 2020] Liu, P. and Chao, W. (2020). Computational Advertising: Market and Technologies for Internet Commercial Monetization. CRC Press.
[Locke, 1847] Locke, J. (1847). An essay concerning human understanding. Kay & Troutman.
[Loewenstein et al., 2014] Loewenstein, G., Sunstein, C. R., and Golman, R. (2014). Disclosure: Psychology changes everything. Annu. Rev. Econ., 6(1):391–419.
[Loftin et al., 2016] Loftin, R., Peng, B., MacGlashan, J., Littman, M. L., Taylor, M. E., Huang, J., and Roberts, D. L. (2016). Learning behaviors via human-delivered discrete feedback: modeling implicit feedback strategies to speed up learning. Autonomous agents and multi-agent systems, 30:30–59.
[Lohr, 2011] Lohr, S. (2011). Sampling: design and analysis. Boca Raton: CRC Press.
[Lombrozo and Carey, 2006] Lombrozo, T. and Carey, S. (2006). Functional explanation and the function of explanation. Cognition, 99(2):167–204.
[Lorenz-Spreen et al., 2020] Lorenz-Spreen, P., Lewandowsky, S., Sunstein, C. R., and Hertwig, R. (2020). How behavioural sciences can promote truth, autonomy and democratic discourse online. Nature human behaviour, 4(11):1102–1109.
[Loughlin, 2010] Loughlin, M. (2010). Foundations of public law. Oxford University Press.
[Lovegrove, 1997] Lovegrove, A. (1997). The framework of judicial sentencing: A study in legal decision making. Cambridge University Press Cambridge.
[Lu, 2020] Lu, S. (2020). Algorithmic opacity, private accountability, and corporate social disclosure in the age of artificial intelligence. Vand. J. Ent. & Tech. L., 23:99.
[Luca and Smith, 2013] Luca, M. and Smith, J. (2013). Salience in quality disclosure: Evidence from the US News college rankings. Journal of Economics & Management Strategy, 22(1):58–77.
[Luhmann, 2013] Luhmann, N. (2013). A sociological theory of law. Routledge.
[Luijken et al., 2019] Luijken, K., Groenwold, R. H., Van Calster, B., Steyerberg, E. W., and van Smeden, M. (2019). Impact of predictor measurement heterogeneity across settings on the performance of prediction models: A measurement error perspective. Statistics in medicine, 38(18):3444–3459.
[Lukoff et al., 2022] Lukoff, K., Lyngs, U., and Alberts, L. (2022). Designing to support autonomy and reduce psychological reactance in digital self-control tools. In Position Papers for the Workshop “Self-Determination Theory in HCI: Shaping a Research Agenda” at the Conference on Human Factors in Computing Systems (CHI’22), volume 5.
[Lukoff et al., 2021] Lukoff, K., Lyngs, U., Zade, H., Liao, J. V., Choi, J., Fan, K., Munson, S. A., and Hiniker, A. (2021). How the design of youtube influences user sense of agency. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1–17.
[Lundberg and Lee, 2017] Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in neural information processing systems, 30.
[Lupton, 2016] Lupton, D. (2016). The quantified self. John Wiley & Sons.
[Luxon, 2008] Luxon, N. (2008). Ethics and subjectivity: Practices of self-governance in the late lectures of Michel Foucault. Political Theory, 36(3):377–402.
[Lyngs et al., 2019] Lyngs, U., Lukoff, K., Slovak, P., Binns, R., Slack, A., Inzlicht, M., Van Kleek, M., and Shadbolt, N. (2019). Self-control in cyberspace: Applying dual systems theory to a review of digital self-control tools. In Proceedings of the 2019 CHI conference on human factors in computing systems, pages 1–18.
[Lynskey, 2015] Lynskey, O. (2015). The foundations of EU data protection law. Oxford University Press.
[Lyons et al., 2021] Lyons, H., Velloso, E., and Miller, T. (2021). Conceptualising contestability: Perspectives on contesting algorithmic decisions. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1):1–25.
[Lyotard, 1984] Lyotard, J.-F. (1984). The postmodern condition: A report on knowledge, volume 10. U of Minnesota Press.
[Lyytinen and Hirschheim, 1988] Lyytinen, K. and Hirschheim, R. (1988). Information systems as rational discourse: An application of Habermas's theory of communicative action. Scandinavian Journal of Management, 4(1-2):19–30.
[Lyytinen et al., 2021] Lyytinen, K., Nickerson, J. V., and King, J. L. (2021). Metahuman systems = humans + machines that learn. Journal of Information Technology, 36(4):427–445.
[MacCallum, 1967] MacCallum, G. C. (1967). Negative and positive freedom. The philosophical review, pages 312–334.
[Macdonald, 2017] Macdonald, P. (2017). History of the concept of mind: Volume 1: Speculations about soul, mind and spirit from Homer to Hume. Routledge.
[MacIntyre, 1984] MacIntyre, A. (1984). After virtue. University of Notre Dame Press.
[MacIntyre, 1985] MacIntyre, A. (1985). How psychology makes itself true – or false. In Koch, S. and Leary, D. E., editors, A Century of Psychology As Science, pages 897–903.
[Mackie, 1990] Mackie, J. (1990). Ethics: Inventing right and wrong. Penguin UK.
[MacKinnon, 1989] MacKinnon, C. A. (1989). Toward a feminist theory of the state. Harvard University Press.
[Maclin and Shavlik, 1996] Maclin, R. and Shavlik, J. W. (1996). Creating advice-taking reinforcement learners. Machine Learning, 22(1-3):251–281.
[Mahieu, 2021] Mahieu, R. (2021). The right of access to personal data: A genealogy. Technology and Regulation, 2021:62–75.
[Mahmood and Ricci, 2007] Mahmood, T. and Ricci, F. (2007). Learning and adaptivity in interactive recommender systems. In Proceedings of the ninth international conference on Electronic commerce, pages 75–84.
[Maltz, 2019] Maltz, M. (2019). Bridging gaps in police crime data. report. Online, accessed May 16, 2021.
[Manders-Huits, 2010] Manders-Huits, N. (2010). Practical versus moral identities in identity management. Ethics and information technology, 12(1):43–55.
[Manheim and Garrabrant, 2018] Manheim, D. and Garrabrant, S. (2018). Categorizing variants of Goodhart's law. arXiv preprint arXiv:1803.04585.
[March, 1991] March, J. G. (1991). Exploration and exploitation in organizational learning. Organization science, 2(1):71–87.
[Marcus, 2009] Marcus, G. (2009). Kluge: The haphazard evolution of the human mind. Houghton Mifflin Harcourt.
[Marcus, 1980] Marcus, R. B. (1980). Moral dilemmas and consistency. The Journal of Philosophy, 77(3):121–136.
[Marcuse, 1941] Marcuse, H. (1941). Some social implications of modern technology. Zeitschrift für Sozialforschung, 9(3):414–439.
[Marjanovic and Cecez-Kecmanovic, 2020] Marjanovic, O. and Cecez-Kecmanovic, D. (2020). Open government data platforms – a complex adaptive sociomaterial systems perspective. Information and Organization, 30(4):100323.
[Markus and Wurf, 1987] Markus, H. and Wurf, E. (1987). The dynamic self-concept: A social psychological perspective. Annual review of psychology, 38(1):299–337.
[Martens, 2022] Martens, D. (2022). Data Science Ethics: Concepts, Techniques and Cautionary Tales. Oxford University Press.
[Martens et al., 2007] Martens, D., Baesens, B., Van Gestel, T., and Vanthienen, J. (2007). Comprehensible credit scoring models using rule extraction from support vector machines. European journal of operational research, 183(3):1466–1476.
[Martens and Provost, 2014] Martens, D. and Provost, F. (2014). Explaining data-driven document classifications. MIS quarterly, 38(1):73–100.
[Martens et al., 2016] Martens, D., Provost, F., Clark, J., and de Fortuny, E. J. (2016). Mining massive fine-grained behavior data to improve predictive analytics. MIS quarterly, 40(4):869–888.
[Martin et al., 2016] Martin, T., Hofman, J. M., Sharma, A., Anderson, A., and Watts, D. J. (2016). Exploring limits to prediction in complex social systems. In Proceedings of the 25th international conference on world wide web, pages 683–694.
[Marx et al., 2020] Marx, C., Calmon, F., and Ustun, B. (2020). Predictive multiplicity in classification. In International Conference on Machine Learning, pages 6765–6774. PMLR.
[Mathur et al., 2019] Mathur, A., Acar, G., Friedman, M. J., Lucherini, E., Mayer, J., Chetty, M., and Narayanan, A. (2019). Dark patterns at scale: Findings from a crawl of 11k shopping websites. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW):1–32.
[Mattioli, 2017] Mattioli, M. (2017). The data-pooling problem. Berkeley Technology Law Journal, 32(1):179–236.
[Mattu et al., 2020] Mattu, S., Yin, L., Waller, A., and Keegan, J. (January 5, 2020). How we built a Facebook inspector. https://themarkup.org/citizen-browser/2021/01/05/how-we-built-a-facebook-inspector. The Markup, Online, accessed February 24, 2022.
[Matz et al., 2017] Matz, S. C., Kosinski, M., Nave, G., and Stillwell, D. J. (2017). Psychological targeting as an effective approach to digital mass persuasion. Proceedings of the national academy of sciences, 114(48):12714–12719.
[Matz and Netzer, 2017] Matz, S. C. and Netzer, O. (2017). Using big data as a window into consumers' psychology. Current Opinion in Behavioral Sciences, 18:7–12. Big data in the behavioural sciences.
[Maxfield, 2005] Maxfield, L. D. (2005). Measuring recidivism under the federal sentencing guidelines. Federal Sentencing Reporter, 17(3):166–170.
[Maxwell et al., 2008] Maxwell, S. E., Kelley, K., and Rausch, J. R. (2008). Sample size planning for statistical power and accuracy in parameter estimation. Annual Review of Psychology, 59(1):537–563.
[Mayson, 2017] Mayson, S. G. (2017). Dangerous defendants. Yale LJ, 127:490.
[Mazzine et al., 2021] Mazzine, R., Goethals, S., Brughmans, D., and Martens, D. (2021). Counterfactual explanations for employment services. In International workshop on Fair, Effective And Sustainable Talent management using data science, pages 1–7.
[McAdams, 1996] McAdams, D. P. (1996). Personality, modernity, and the storied self: A contemporary framework for studying persons. Psychological inquiry, 7(4):295–321.
[McAuley, 2022] McAuley, J. (2022). Personalized Machine Learning. Cambridge University Press.
[McCormack et al., 2011] McCormack, T., Hoerl, C., and Butterfill, S. (2011). Tool Use and Causal Cognition. Oxford University Press.
[McGuigan, 2021] McGuigan, L. (January 6, 2021). This tool lets you confuse Google's ad network, and a test shows it works. https://www.technologyreview.com/2021/01/06/1015784/adsense-google-surveillance-adnauseam-obfuscation/. MIT Technology Review, Online, accessed February 24, 2022.
[McInerney et al., 2021] McInerney, J., Elahi, E., Basilico, J., Raimond, Y., and Jebara, T. (2021). Accordion: a trainable simulator for long-term interactive systems. In Proceedings of the 15th ACM Conference on Recommender Systems, pages 102–113.
[McInerney et al., 2018] McInerney, J., Lacker, B., Hansen, S., Higley, K., Bouchard, H., Gruson, A., and Mehrotra, R. (2018). Explore, exploit, and explain: personalizing explainable recommendations with bandits. In Proceedings of the 12th ACM conference on recommender systems, pages 31–39.
[McKay, 2020] McKay, C. (2020). Predicting risk in criminal procedure: actuarial tools, algorithms, ai and judicial decision-making. Current Issues in Criminal Justice, 32(1):22–39.
[Medvedev et al., 2019] Medvedev, I., Gordon, T., and Wu, H. (2019). Powered by AI: Instagram's Explore recommender system. https://ai.facebook.com/blog/powered-by-ai-instagrams-explore-recommender-system/.
[Meinert, 1980] Meinert, C. L. (1980). Toward more definitive clinical trials. Controlled Clinical Trials, 1(3):249–262.
[Meissner et al., 2019] Meissner, F., Grigutsch, L. A., Koranyi, N., Müller, F., and Rothermund, K. (2019). Predicting behavior with implicit measures: Disillusioning findings, reasonable explanations, and sophisticated solutions. Frontiers in Psychology, 10:2483.
[Menczer, 2020] Menczer, F. (2020). 4 reasons why social media make us vulnerable to manipulation. In Fourteenth ACM Conference on Recommender Systems, pages 1–1.
[Merchant, 1980] Merchant, C. (1980). The death of nature: Women, ecology, and the scientific revolution. HarperOne.
[Merel et al., 2019] Merel, J., Botvinick, M., and Wayne, G. (2019). Hierarchical motor control in mammals and machines. Nature communications, 10(1):5489.
[Messick, 1995] Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons' responses and performances as scientific inquiry into score meaning. American psychologist, 50(9):741.
[Messing et al., 2020] Messing, S., DeGregorio, C., Hillenbrand, B., King, G., Mahanti, S., Mukerjee, Z., Nayak, C., Persily, N., State, B., and Wilkins, A. (June 5, 2020). Dataverse. https://socialscience.one/facebook-dataverse. Social Science One, Online, accessed February 24, 2022.
[Metcalf and Crawford, 2016] Metcalf, J. and Crawford, K. (2016). Where are human subjects in big data research? the emerging ethics divide. Big Data & Society, 3(1):2053951716650211.
[Micheli et al., 2020] Micheli, M., Ponti, M., Craglia, M., and Berti Suman, A. (2020). Emerging models of data governance in the age of datafication. Big Data & Society, 7(2):2053951720948087.
[Michie et al., 2013] Michie, S., Richardson, M., Johnston, M., Abraham, C., Francis, J., Hardeman, W., Eccles, M. P., Cane, J., and Wood, C. E. (2013). The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: building an international consensus for the reporting of behavior change interventions. Annals of behavioral medicine, 46(1):81–95.
[Milano et al., 2021] Milano, S., Mittelstadt, B., Wachter, S., and Russell, C. (2021). Epistemic fragmentation poses a threat to the governance of online targeting. Nature Machine Intelligence, 3(6):466–472.
[Milano et al., 2020] Milano, S., Taddeo, M., and Floridi, L. (2020). Recommender systems and their ethical challenges. AI & SOCIETY, 35(4):957–967.
[Mill, 1887] Mill, J. S. (1887). On liberty. Longmans, Green.
[Mill, 2015] Mill, J. S. (2015). On Liberty, Utilitarianism, and other essays. Oxford University Press, USA.
[Miller, 2014] Miller, A. (2014). Contemporary metaethics: an introduction. John Wiley & Sons.
[Miller, 1969] Miller, G. A. (1969). Psychology as a means of promoting human welfare. American psychologist, 24(12):1063.
[Miller, 2013] Miller, P. (2013). Justifying fiduciary duties. McGill Law Journal/Revue de droit de McGill, 58(4):969–1023.
[Miller et al., 1975] Miller, R. L., Brickman, P., and Bolen, D. (1975). Attribution versus persuasion as a means for modifying behavior. Journal of personality and social psychology, 31(3):430.
[Miller, 2019] Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial intelligence, 267:1–38.
[Mills, 2022] Mills, S. (2022). Personalized nudging. Behavioural Public Policy, 6(1):150–159.
[Miltenberger, 2011] Miltenberger, R. G. (2011). Behavior modification: Principles and procedures. Cengage Learning.
[Mingers, 2001] Mingers, J. (2001). Combining is research methods: towards a pluralist methodology. Information systems research, 12(3):240–259.
[Minsky, 1961] Minsky, M. (1961). Steps toward artificial intelligence. Proceedings of the IRE, 49(1):8–30.
[Minsky and Riecken, 1994] Minsky, M. and Riecken, D. (1994). A conversation with marvin minsky about agents. Communications of the ACM, 37(7):22–29.
[Mirowski, 2013] Mirowski, P. (2013). Never let a serious crisis go to waste: How neoliberalism survived the financial meltdown. Verso Books.
[Mitchell et al., 2019] Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., and Gebru, T. (2019). Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency, pages 220–229.
[Mitchell et al., 2021] Mitchell, S., Potash, E., Barocas, S., D’Amour, A., and Lum, K. (2021). Algorithmic fairness: Choices, assumptions, and definitions. Annual Review of Statistics and Its Application, 8:141–163.
[Mitchell, 1997] Mitchell, T. M. (1997). Machine learning. McGraw-Hill New York.
[Mittelstadt, 2019] Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11):501–507.
[Mittelstadt et al., 2016] Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., and Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2):2053951716679679.
[Mnih et al., 2015] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540):529–533.
[Moerel, 2012] Moerel, L. (2012). Binding corporate rules: corporate self-regulation of global data transfers. OUP Oxford.
[Mohai et al., 2009] Mohai, P., Pellow, D., and Roberts, J. T. (2009). Environmental justice. Annual review of environment and resources, 34:405–430.
[Mökander et al., 2021] Mökander, J., Morley, J., Taddeo, M., and Floridi, L. (2021). Ethics-based auditing of automated decision-making systems: nature, scope, and limitations. Science and Engineering Ethics, 27(4):1–30.
[Monahan and Skeem, 2016] Monahan, J. and Skeem, J. L. (2016). Risk assessment in criminal sentencing. Annual review of clinical psychology, 12:489–513.
[Montaner et al., 2003] Montaner, M., López, B., and De La Rosa, J. L. (2003). A taxonomy of recommender agents on the internet. Artificial intelligence review, 19:285–330.
[Moon, 2000] Moon, Y. (2000). Intimate exchanges: Using computers to elicit self-disclosure from consumers. Journal of consumer research, 26(4):323–339.
[Moore et al., 1984] Moore, M., Estrich, S., McGillis, D., and Spelman, W. (1984). Dangerous Offenders: The Elusive Target of Justice. Cambridge, Mass.: Harvard University Press.
[Moran, 2002] Moran, D. (2002). Introduction to phenomenology. Routledge.
[Moreno-Torres et al., 2012] Moreno-Torres, J. G., Raeder, T., Alaiz-Rodríguez, R., Chawla, N. V., and Herrera, F. (2012). A unifying view on dataset shift in classification. Pattern recognition, 45(1):521–530.
[Morris and Miller, 1985] Morris, N. and Miller, M. (1985). Predictions of dangerousness. Crime and Justice, 6:1–50.
[Morrison et al., 2019] Morrison, T. H., Adger, W. N., Brown, K., Lemos, M. C., Huitema, D., Phelps, J., Evans, L., Cohen, P., Song, A., Turner, R., et al. (2019). The black box of power in polycentric environmental governance. Global Environmental Change, 57:101934.
[Moses and Chan, 2014] Moses, L. B. and Chan, J. (2014). Using big data for legal and law enforcement decisions: Testing the new tools. University of New South Wales Law Journal, the, 37(2):643–678.
[Mouffe, 1999] Mouffe, C. (1999). Deliberative democracy or agonistic pluralism? Social research, pages 745–758.
[Mourby et al., 2018] Mourby, M., Mackey, E., Elliot, M., Gowans, H., Wallace, S. E., Bell, J., Smith, H., Aidinlis, S., and Kaye, J. (2018). Are 'pseudonymised' data always personal data? Implications of the GDPR for administrative data research in the UK. Computer Law & Security Review, 34(2):222–233.
[Mousourakis, 2019] Mousourakis, G. (2019). Comparative Law and Legal Traditions. Springer.
[Mullainathan and Obermeyer, 2017] Mullainathan, S. and Obermeyer, Z. (2017). Does machine learning automate moral hazard and error? American Economic Review, 107(5):476–80.
[Muller, 2019] Muller, J. Z. (2019). The tyranny of metrics. Princeton University Press.
[Mulligan and Bamberger, 2019] Mulligan, D. K. and Bamberger, K. A. (2019). Procurement as policy: Administrative process for machine learning. Berkeley Tech. LJ, 34:773.
[Murthi and Sarkar, 2003] Murthi, B. and Sarkar, S. (2003). The role of the management sciences in research on personalization. Management Science, 49(10):1344–1362.
[Myers and Klein, 2011] Myers, M. D. and Klein, H. K. (2011). A set of principles for conducting critical research in information systems. MIS quarterly, pages 17–36.
[Myhill, 1952] Myhill, J. (1952). Some philosophical implications of mathematical logic: I. three classes of ideas. The review of Metaphysics, 6(2):165–198.
[Na and Gottfredson, 2013] Na, C. and Gottfredson, D. C. (2013). Police officers in schools: Effects on school crime and the processing of offending behaviors. Justice Quarterly, 30(4):619–650.
[Naffine, 2003] Naffine, N. (2003). Who are law’s persons? from cheshire cats to responsible subjects. The Modern Law Review, 66(3):346–367.
[Nagel, 2012] Nagel, T. (2012). Mind and cosmos: why the materialist neo-Darwinian conception of nature is almost certainly false. Oxford University Press.
[Nahum-Shani et al., 2018] Nahum-Shani, I., Smith, S. N., Spring, B. J., Collins, L. M., Witkiewitz, K., Tewari, A., and Murphy, S. A. (2018). Just-in-time adaptive interventions (jitais) in mobile health: key components and design principles for ongoing health behavior support. Annals of Behavioral Medicine, 52(6):446–462.
[Najar and Chetouani, 2021] Najar, A. and Chetouani, M. (2021). Reinforcement learning with human advice: a survey. Frontiers in Robotics and AI, 8:584075.
[Neftci and Averbeck, 2019] Neftci, E. O. and Averbeck, B. B. (2019). Reinforcement learning in artificial and biological systems. Nature Machine Intelligence, 1(3):133–143.
[Nelson et al., 2011] Nelson, R. M., Beauchamp, T., Miller, V. A., Reynolds, W., Ittenbach, R. F., and Luce, M. F. (2011). The concept of voluntary consent. The American Journal of Bioethics, 11(8):6–16.
[Nemitz, 2018] Nemitz, P. (2018). Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133):20180089.
[Nesse, 2005] Nesse, R. M. (2005). Natural selection and the regulation of defenses: A signal detection analysis of the smoke detector principle. Evolution and human behavior, 26(1):88–105.
[Neumann, 1966] Neumann, J. v. (1966). Theory of self-reproducing automata. Mathematics of Computation, 21:745.
[Neuwirth, 2022] Neuwirth, R. J. (2022). The EU Artificial Intelligence Act: Regulating Subliminal AI Systems. Taylor & Francis.
[Ng et al., 2000] Ng, A. Y., Russell, S., et al. (2000). Algorithms for inverse reinforcement learning. In ICML, volume 1, page 2.
[Nietzsche, 2001] Nietzsche, F. (2001). Nietzsche: The Gay Science: With a Prelude in German Rhymes and an Appendix of Songs. Cambridge University Press.
[Nissenbaum, 1996] Nissenbaum, H. (1996). Accountability in a computerized society. Science and engineering ethics, 2(1):25–42.
[Nissenbaum, 2004] Nissenbaum, H. (2004). Privacy as contextual integrity. Wash. L. Rev., 79:119.
[Noddings, 2013] Noddings, N. (2013). Caring: A relational approach to ethics and moral education. Univ of California Press.
[Nord and Peter, 1980] Nord, W. R. and Peter, J. P. (1980). A behavior modification perspective on marketing. Journal of Marketing, 44(2):36–47.
[Nowotny, 2005] Nowotny, H. (2005). The increase of complexity and its reduction: Emergent interfaces between the natural sciences, humanities and social sciences. Theory, Culture & Society, 22(5):15–31.
[Nussbaum, 2000] Nussbaum, M. C. (2000). Women and human development: The capabilities approach, volume 3. Cambridge university press.
[Obermeyer et al., 2019] Obermeyer, Z., Powers, B., Vogeli, C., and Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464):447–453.
[Oberst and Sontag, 2019] Oberst, M. and Sontag, D. (2019). Counterfactual off-policy evaluation with Gumbel-max structural causal models. In International Conference on Machine Learning, pages 4881–4890. PMLR.
[O'Connor, 1994] O'Connor, T. (1994). Emergent properties. American Philosophical Quarterly, 31(2):91–104.
[Official Journal of the European Union, 2016] Official Journal of the European Union (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Technical report, European Parliament and Council.
[O’keefe, 2015] O’keefe, D. J. (2015). Persuasion: Theory and research. Sage Publications.
[Olssen and Peters, 2005] Olssen, M. and Peters, M. A. (2005). Neoliberalism, higher education and the knowledge economy: From the free market to knowledge capitalism. Journal of education policy, 20(3):313–345.
[Olteanu et al., 2019] Olteanu, A., Castillo, C., Diaz, F., and Kıcıman, E. (2019). Social data: Biases, methodological pitfalls, and ethical boundaries. Frontiers in Big Data, 2:13.
[Omohundro, 2007] Omohundro, S. M. (2007). The nature of self-improving artificial intelligence. Singularity Summit, 2008.
[O'neil, 2016] O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway books.
[O'neill, 2002] O'Neill, O. (2002). Autonomy and trust in bioethics. Cambridge University Press.
[O'neill, 2003] O'Neill, O. (2003). Autonomy: the emperor's new clothes. In Aristotelian Society Supplementary Volume, volume 77, pages 1–21. Wiley Online Library.
[Oreskes et al., 1994] Oreskes, N., Shrader-Frechette, K., and Belitz, K. (1994). Verification, validation, and confirmation of numerical models in the earth sciences. Science, 263(5147):641–646.
[Oshana, 2006] Oshana, M. (2006). Personal autonomy in society. Ashgate Publishing, Ltd.
[Ostrom, 2010] Ostrom, E. (2010). Polycentric systems for coping with collective action and global environmental change. Global environmental change, 20(4):550–557.
[Oswald, 2018] Oswald, M. (2018). Algorithm-assisted decision-making in the public sector: framing the issues using administrative law rules governing discretionary power. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128):20170359.
[Outhwaite, 1987] Outhwaite, W. (1987). New philosophies of social science: realism, hermeneutics and critical theory. Bloomsbury Publishing.
[Ouyang et al., 2022] Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al. (2022). Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155.
[Palmer and Koenig-Lewis, 2009] Palmer, A. and Koenig-Lewis, N. (2009). An experiential, social network-based approach to direct marketing. Direct Marketing: An International Journal.
[Pan et al., 2022] Pan, A., Bhatia, K., and Steinhardt, J. (2022). The effects of reward misspecification: Mapping and mitigating misaligned models. arXiv preprint arXiv:2201.03544.
[Pariser, 2011] Pariser, E. (2011). The filter bubble: How the new personalized web is changing what we read and how we think. Penguin.
[Park and Levine, 2023] Park, S. and Levine, S. (2023). Predictable MDP abstraction for unsupervised model-based RL. arXiv preprint arXiv:2302.03921.
[Parker et al., 2016] Parker, G. G., Van Alstyne, M. W., and Choudary, S. P. (2016). Platform revolution: How networked markets are transforming the economy and how to make them work for you. WW Norton & Company.
[Parker et al., 2018] Parker, T. H., Griffith, S. C., Bronstein, J. L., Fidler, F., Foster, S., Fraser, H., Forstmeier, W., Gurevitch, J., Koricheva, J., Seppelt, R., et al. (2018). Empowering peer reviewers with a checklist to improve transparency. Nature ecology & evolution, 2(6):929–935.
[Partnership on AI, 2016] Partnership on AI (2016). Report on algorithmic risk assessment tools in the U.S. criminal justice system. Online, accessed May 20, 2021.
[Pasquale, 2015] Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
[Patrick et al., 2021] Patrick, M., Asano, Y. M., Kuznetsova, P., Fong, R., Henriques, J. F., Zweig, G., and Vedaldi, A. (2021). On compositions of transformations in contrastive self-supervised learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9577–9587.
[Patro et al., 2022] Patro, G. K., Porcaro, L., Mitchell, L., Zhang, Q., Zehlike, M., and Garg, N. (2022). Fair ranking: a critical review, challenges, and future directions. In 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 1929–1942.
[Pearl, 2009] Pearl, J. (2009). Causality. Cambridge university press, 2 edition.
[Pearl, 2019] Pearl, J. (2019). The seven tools of causal inference, with reflections on machine learning. Communications of the ACM, 62(3):54–60.
[Pearl, 2021] Pearl, J. (2021). Radical empiricism and machine learning research. Journal of Causal Inference, 9(1):78–82.
[Pednault et al., 2002] Pednault, E., Abe, N., and Zadrozny, B. (2002). Sequential cost-sensitive decision making with reinforcement learning. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 259–268.
[Pei et al., 2019] Pei, C., Yang, X., Cui, Q., Lin, X., Sun, F., Jiang, P., Ou, W., and Zhang, Y. (2019). Value-aware recommendation based on reinforcement profit maximization. In The World Wide Web Conference, pages 3123–3129.
[Pellegrino and Thomasma, 1993] Pellegrino, E. D. and Thomasma, D. C. (1993). The virtues in medical practice. Oxford University Press.
[Pels, 2000] Pels, D. (2000). Reflexivity: One step up. Theory, Culture & Society, 17(3):1–25.
[Pennycook and Rand, 2019] Pennycook, G. and Rand, D. G. (2019). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188:39–50.
[Pentland, 2014] Pentland, A. (2014). Social physics: How good ideas spread-the lessons from a new science. Penguin.
[Perez, 2006] Perez, O. (2006). The institutionalization of inconsistency: from fluid concepts to random walk. In Perez, O. and Teubner, G., editors, Paradoxes and Inconsistencies in the Law. Bloomsbury Publishing.
[Persad et al., 2009] Persad, G., Wertheimer, A., and Emanuel, E. J. (2009). Principles for allocation of scarce medical interventions. The Lancet, 373(9661):423–431.
[Persons, 2018] Persons, T. M. (2018). Artificial intelligence: emerging opportunities, challenges, and implications for policy and research. Technical report.
[Pettit and Western, 2004] Pettit, B. and Western, B. (2004). Mass imprisonment and the life course: Race and class inequality in US incarceration. American Sociological Review, 69(2):151–169.
[Pettitt et al., 2016] Pettitt, D., Raza, S., Naughton, B., Roscoe, A., Ramakrishnan, A., Ali, A., Davies, B., Dopson, S., Hollander, G., Smith, J., et al. (2016). The limitations of QALY: a literature review. Journal of Stem Cell Research and Therapy, 6(4).
[Pfeiffer, 2021] Pfeiffer, S. (2021). The greater transformation: digitalization and the transformative power of distributive forces in digital capitalism. International Critical Thought, 11(4):535–552.
[Philp, 2001] Philp, M. (2001). Access, accountability and authority: corruption and the democratic process. Crime, Law and Social Change, 36(4):357–377.
[Pickering, 1995] Pickering, A. (1995). The Mangle of Practice: Time, Agency, and Science. University of Chicago Press.
[Pojman and Fieser, 2016] Pojman, L. P. and Fieser, J. (2016). Ethics: Discovering right and wrong. Cengage Learning.
[Polanyi, 2001] Polanyi, K. (2001). The great transformation: The political and economic origins of our time. Beacon Press.
[Polkinghorne, 1988] Polkinghorne, D. E. (1988). Narrative knowing and the human sciences. SUNY Press.
[Pommeranz et al., 2012] Pommeranz, A., Broekens, J., Wiggers, P., Brinkman, W.-P., and Jonker, C. M. (2012). Designing interfaces for explicit preference elicitation: a user-centered investigation of preference representation and elicitation process. User Modeling and User-Adapted Interaction, 22:357–397.
[Popper, 2012] Popper, K. (2012). The open society and its enemies. Routledge.
[Posner and Weyl, 2018] Posner, E. A. and Weyl, E. G. (2018). Radical Markets. Princeton University Press.
[Posner, 1978] Posner, R. A. (1978). Economic theory of privacy. Regulation, 2:19.
[Poursabzi-Sangdeh et al., 2021] Poursabzi-Sangdeh, F., Goldstein, D. G., Hofman, J. M., Wortman Vaughan, J. W., and Wallach, H. (2021). Manipulating and measuring model interpretability. In Proceedings of the 2021 CHI conference on human factors in computing systems, pages 1–52.
[Powers, 1973] Powers, W. T. (1973). Feedback: Beyond behaviorism: Stimulus-response laws are wholly predictable within a control-system model of behavioral organization. Science, 179(4071):351–356.
[Pozsar et al., 2010] Pozsar, Z., Adrian, T., Ashcraft, A., and Boesky, H. (2010). Shadow banking. Technical report, Federal Reserve Bank of New York Staff Report.
[Praet et al., 2018] Praet, S., Van Aelst, P., Martens, D., et al. (2018). I like, therefore I am: Predictive modeling to gain insights in political preference in a multi-party system. Research paper, University of Antwerp, Faculty of Business and Economics, pages 1–34.
[Prosperi et al., 2020] Prosperi, M., Guo, Y., Sperrin, M., Koopman, J. S., Min, J. S., He, X., Rich, S., Wang, M., Buchan, I. E., and Bian, J. (2020). Causal inference and counterfactual prediction in machine learning for actionable healthcare. Nature Machine Intelligence, 2(7):369–375.
[Provost and Fawcett, 2013] Provost, F. and Fawcett, T. (2013). Data Science for Business: What you need to know about data mining and data-analytic thinking. O’Reilly Media, Inc.
[Prunkl, 2022] Prunkl, C. (2022). Human autonomy in the age of artificial intelligence. Nature Machine Intelligence, 4(2):99–101.
[Prunkl et al., 2021] Prunkl, C. E., Ashurst, C., Anderljung, M., Webb, H., Leike, J., and Dafoe, A. (2021). Institutionalizing ethics in AI through broader impact requirements. Nature Machine Intelligence, 3(2):104–110.
[Puiutta and Veith, 2020] Puiutta, E. and Veith, E. M. (2020). Explainable reinforcement learning: A survey. In International Cross-Domain Conference for Machine Learning and Knowledge Extraction, pages 77–95. Springer.
[Pundik, 2008] Pundik, A. (2008). Statistical evidence and individual litigants: a reconsideration of Wasserman's argument from autonomy. The International Journal of Evidence & Proof, 12(2):303–324.
[Purdy, 2014] Purdy, J. (2014). Neoliberal constitutionalism: Lochnerism for a new economy. Law & Contemp. Probs., 77:195.
[Purtova, 2015] Purtova, N. (2015). The illusion of personal data as no one’s property. Law, Innovation and Technology, 7(1):83–111.
[Puterman, 2014] Puterman, M. L. (2014). Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons.
[Putnam, 2004] Putnam, H. (2004). The collapse of the fact/value dichotomy and other essays. Harvard University Press.
[Putnam et al., 2000] Putnam, R. D. et al. (2000). Bowling alone: The collapse and revival of American community. Simon and Schuster.
[Quiñonero-Candela et al., 2009] Quiñonero-Candela, J., Sugiyama, M., Schwaighofer, A., and Lawrence, N. D. (2009). Dataset shift in machine learning. The MIT Press.
[Rafieian and Yoganarasimhan, 2022] Rafieian, O. and Yoganarasimhan, H. (2022). AI and personalization. Available at SSRN 4123356.
[Ragan-Kelley and Willing, 2018] Ragan-Kelley, B. and Willing, C. (2018). Binder 2.0-reproducible, interactive, sharable environments for science at scale. In Proceedings of the 17th Python in Science Conference (F. Akici, D. Lippa, D. Niederhut, and M. Pacer, eds.), pages 113–120.
[Rahwan, 2018] Rahwan, I. (2018). Society-in-the-loop: programming the algorithmic social contract. Ethics and Information Technology, 20(1):5–14.
[Rahwan et al., 2019] Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., Crandall, J. W., Christakis, N. A., Couzin, I. D., Jackson, M. O., et al. (2019). Machine behaviour. Nature, 568(7753):477–486.
[Rajkumar et al., 2022] Rajkumar, K., Saint-Jacques, G., Bojinov, I., Brynjolfsson, E., and Aral, S. (2022). A causal test of the strength of weak ties. Science, 377(6612):1304–1310.
[Ramon et al., 2020] Ramon, Y., Martens, D., Provost, F., and Evgeniou, T. (2020). A comparison of instance-level counterfactual explanation algorithms for behavioral and textual data: SEDC, LIME-C and SHAP-C. Advances in Data Analysis and Classification, pages 1–19.
[Ramsey et al., 1926] Ramsey, F. P. et al. (1926). Truth and probability. History of Economic Thought, pages 156–198.
[Ransbotham et al., 2016] Ransbotham, S., Fichman, R. G., Gopal, R., and Gupta, A. (2016). Special section introduction—ubiquitous IT and digital vulnerabilities. Information Systems Research, 27(4):834–847.
[Rasheed et al., 2020] Rasheed, A., San, O., and Kvamsdal, T. (2020). Digital twin: Values, challenges and enablers from a modeling perspective. IEEE Access, 8:21980–22012.
[Rawls, 1971] Rawls, J. (1971). A Theory of Justice. Harvard University Press.
[Rawls, 2005] Rawls, J. (2005). Political liberalism. Columbia university press.
[Raz, 1984] Raz, J. (1984). Human rights without foundations. Journal of International Law, 78:607.
[Raz, 1987] Raz, J. (1987). Government by consent. NOMOS: Am. Soc’y Pol. Legal Phil., 29:76.
[Reath and Timmermann, 2013] Reath, A. and Timmermann, J. (2013). Kant's 'Critique of Practical Reason': A Critical Guide. Cambridge University Press.
[Rector, 1958] Rector, M. G. (1958). Factors in measuring recidivism as presented in annual reports. Crime & Delinquency, 4:218.
[Reeve et al., 2004] Reeve, C. D. et al. (2004). Plato: Republic. Hackett, Indianapolis.
[Reichenbach, 1938] Reichenbach, H. (1938). Experience and prediction: An analysis of the foundations and the structure of knowledge.
[Rendle et al., 2012] Rendle, S., Freudenthaler, C., Gantner, Z., and Schmidt-Thieme, L. (2012). BPR: Bayesian personalized ranking from implicit feedback. arXiv preprint arXiv:1205.2618.
[Resnik, 1982] Resnik, J. (1982). Managerial judges. Harv. L. Rev., 96:374.
[Resnik, 1995] Resnik, M. (1995). Scientific vs. mathematical realism: The indispensability argument. Philosophia Mathematica, 3(2):166–174.
[Rhee, 2007] Rhee, R. J. (2007). Probability, policy and the problem of reference class. The International Journal of Evidence & Proof, 11(4):286–291.
[Rhoen, 2017] Rhoen, M. (2017). Rear view mirror, crystal ball: Predictions for the future of data protection law based on the history of environmental protection law. Computer Law & Security Review, 33(5):603–617.
[Ribeiro et al., 2020] Ribeiro, M. H., Ottoni, R., West, R., Almeida, V. A., and Meira Jr, W. (2020). Auditing radicalization pathways on YouTube. In Proceedings of the 2020 conference on fairness, accountability, and transparency, pages 131–141.
[Ribeiro et al., 2016] Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144.
[Rice and Harris, 1995] Rice, M. E. and Harris, G. T. (1995). Violent recidivism: Assessing predictive validity. Journal of Consulting and Clinical Psychology, 63(5):737.
[Ricoeur, 1994] Ricoeur, P. (1994). Oneself as another. University of Chicago Press.
[Ridgeway, 2013] Ridgeway, G. (2013). The pitfalls of prediction. NIJ Journal, 271:34–40.
[Rieder and Sire, 2014] Rieder, B. and Sire, G. (2014). Conflicts of interest and incentives to bias: A microeconomic critique of Google's tangled position on the web. New Media & Society, 16(2):195–211.
[Riedmiller et al., 2022] Riedmiller, M., Springenberg, J. T., Hafner, R., and Heess, N. (2022). Collect & infer-a fresh look at data-efficient reinforcement learning. In Conference on Robot Learning, pages 1736–1744. PMLR.
[Riley, 2008] Riley, D. (2008). Engineering and social justice. Morgan & Claypool Publishers.
[Ritter, 2013] Ritter, N. (2013). Predicting recidivism risk: New tool in Philadelphia shows great promise. National Institute of Justice Journal, 271(February):4–13.
[Robbins, 1932] Robbins, L. B. (1932). The nature and significance of economic science. Macmillan London.
[Robertson et al., 2021] Robertson, R. E., Olteanu, A., Diaz, F., Shokouhi, M., and Bailey, P. (2021). "I can't reply with that": Characterizing problematic email reply suggestions. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1–18.
[Robins and Greenland, 1992] Robins, J. M. and Greenland, S. (1992). Identifiability and exchangeability for direct and indirect effects. Epidemiology, 3(2):143–155.
[Rochet and Tirole, 2006] Rochet, J.-C. and Tirole, J. (2006). Two-sided markets: a progress report. The RAND journal of economics, 37(3):645–667.
[Rogers, 1951] Rogers, C. (1951). Client-centered therapy: Its current practice, implications and theory. London: Constable.
[Rohde et al., 2018] Rohde, D., Bonner, S., Dunlop, T., Vasile, F., and Karatzoglou, A. (2018). RecoGym: A reinforcement learning environment for the problem of product recommendation in online advertising. arXiv preprint arXiv:1808.00720.
[Rolf and Asada, 2014] Rolf, M. and Asada, M. (2014). Where do goals come from? a generic approach to autonomous goal-system development. arXiv preprint arXiv:1410.5557.
[Rollwage and Fleming, 2021] Rollwage, M. and Fleming, S. M. (2021). Confirmation bias is adaptive when coupled with efficient metacognition. Philosophical Transactions of the Royal Society B, 376(1822):20200131.
[Ropohl, 1999] Ropohl, G. (1999). Philosophy of socio-technical systems. Society for Philosophy and Technology Quarterly Electronic Journal, 4(3):186–194.
[Rorty, 1992] Rorty, R. (1992). The linguistic turn: Essays in philosophical method. University of Chicago Press.
[Rorty, 2009] Rorty, R. (2009). Philosophy and the Mirror of Nature. Princeton university press.
[Rorty et al., 1989] Rorty, R. (1989). Contingency, irony, and solidarity. Cambridge University Press.
[Rosecrance, 1988] Rosecrance, J. (1988). Maintaining the myth of individualized justice: Probation presentence reports. Justice Quarterly, 5(2):235–256.
[Rosen, 2011] Rosen, J. (2011). The right to be forgotten. Stan. L. Rev. Online, 64:88.
[Rosenau, 1991] Rosenau, P. M. (1991). Post-modernism and the social sciences: Insights, inroads, and intrusions. Princeton University Press.
[Rosenbaum, 2000] Rosenbaum, E. F. (2000). What is a market? On the methodology of a contested concept. Review of Social Economy, 58(4):455–482.
[Rosenberg, 1990] Rosenberg, M. (1990). The mother of invention: Evolutionary theory, territoriality, and the origins of agriculture. American Anthropologist, 92(2):399–415.
[Rosenbloom and Piotrowski, 2005] Rosenbloom, D. H. and Piotrowski, S. J. (2005). Outsourcing the constitution and administrative law norms. The American Review of Public Administration, 35(2):103–121.
[Rosenfeld, 2000] Rosenfeld, M. (2000). The rule of law and the legitimacy of constitutional democracy. S. Cal. L. Rev., 74:1307.
[Roth, 2002] Roth, A. E. (2002). The economist as engineer: Game theory, experimentation, and computation as tools for design economics. Econometrica, 70(4):1341–1378.
[Roth, 2015] Roth, A. E. (2015). Who gets what–and why: the new economics of matchmaking and market design. Houghton Mifflin Harcourt.
[Rouvroy and Poullet, 2009] Rouvroy, A. and Poullet, Y. (2009). The right to informational self-determination and the value of self-development: Reassessing the importance of privacy for democracy. In Reinventing data protection?, pages 45–76. Springer.
[Rudin, 2019] Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206–215.
[Rudin et al., 2022] Rudin, C., Chen, C., Chen, Z., Huang, H., Semenova, L., and Zhong, C. (2022). Interpretable machine learning: Fundamental principles and 10 grand challenges. Statistics Surveys, 16:1–85.
[Rudner, 1953] Rudner, R. (1953). The scientist qua scientist makes value judgments. Philosophy of science, 20(1):1–6.
[Russell, 2019] Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Penguin.
[Russell et al., 2015] Russell, S., Dewey, D., and Tegmark, M. (2015). Research priorities for robust and beneficial artificial intelligence. AI Magazine, 36(4):105–114.
[Russell and Norvig, 2010] Russell, S. J. and Norvig, P. (2010). Artificial Intelligence-A Modern Approach. Pearson Education, 3rd intl edition.
[Russell and Norvig, 2016] Russell, S. J. and Norvig, P. (2016). Artificial intelligence: a modern approach. Pearson Education Limited.
[Rutjens et al., 2010] Rutjens, B. T., Van Der Pligt, J., and Van Harreveld, F. (2010). Deus or Darwin: Randomness and belief in theories about the origin of life. Journal of Experimental Social Psychology, 46(6):1078–1080.
[Ryan and Deci, 2017] Ryan, R. M. and Deci, E. L. (2017). Self-determination theory: Basic psychological needs in motivation, development, and wellness. Guilford Publications.
[Ryan et al., 2014] Ryan, T., Chester, A., Reece, J., and Xenos, S. (2014). The uses and abuses of Facebook: A review of Facebook addiction. Journal of Behavioral Addictions, 3(3):133–148.
[Saar-Tsechansky et al., 2009] Saar-Tsechansky, M., Melville, P., and Provost, F. (2009). Active feature-value acquisition. Management Science, 55(4):664–684.
[Saar-Tsechansky and Provost, 2007] Saar-Tsechansky, M. and Provost, F. (2007). Handling missing values when applying classification models. Journal of Machine Learning Research.
[Sadowski, 2020] Sadowski, J. (2020). The internet of landlords: Digital platforms and new mechanisms of rentier capitalism. Antipode, 52(2):562–580.
[Sadowski et al., 2021a] Sadowski, J., Viljoen, S., and Whittaker, M. (2021a). Everyone should decide how their digital data are used–not just tech companies. Nature, 595:169–171.
[Sadowski et al., 2021b] Sadowski, J., Viljoen, S., and Whittaker, M. (2021b). Everyone should decide how their digital data are used—not just tech companies. Nature, 595(7866):169–171.
[Sætra et al., 2022] Sætra, H. S., Borgebund, H., and Coeckelbergh, M. (2022). Avoid diluting democracy by algorithms. Nature Machine Intelligence, 4(10):804–806.
[Sager and Rosser, 2009] Sager, F. and Rosser, C. (2009). Weber, Wilson, and Hegel: Theories of modern bureaucracy. Public Administration Review, 69(6):1136–1147.
[Saito et al., 2020] Saito, Y., Aihara, S., Matsutani, M., and Narita, Y. (2020). Open bandit dataset and pipeline: Towards realistic and reproducible off-policy evaluation. arXiv preprint arXiv:2008.07146.
[Salahdine and Kaabouch, 2019] Salahdine, F. and Kaabouch, N. (2019). Social engineering attacks: A survey. Future Internet, 11(4):89.
[Salakhutdinov et al., 2007] Salakhutdinov, R., Mnih, A., and Hinton, G. (2007). Restricted Boltzmann machines for collaborative filtering. In Proceedings of the 24th international conference on Machine learning, pages 791–798.
[Salge et al., 2014] Salge, C., Glackin, C., and Polani, D. (2014). Empowerment – an introduction. Guided Self-Organization: Inception, pages 67–114.
[Sandel, 1984] Sandel, M. J. (1984). The procedural republic and the unencumbered self. Political Theory, 12(1):81–96.
[Sandel, 2012] Sandel, M. J. (2012). What money can’t buy: the moral limits of markets. Macmillan.
[Sandel, 2013] Sandel, M. J. (2013). Market reasoning as moral reasoning: why economists should re-engage with political philosophy. Journal of Economic Perspectives, 27(4):121–40.
[Santy et al., 2021] Santy, S., Rani, A., and Choudhury, M. (2021). Use of formal ethical reviews in NLP literature: Historical trends and current practices. arXiv preprint arXiv:2106.01105.
[Sargent, 2010] Sargent, R. G. (2010). Verification and validation of simulation models. In Proceedings of the 2010 winter simulation conference, pages 166–183. IEEE.
[Sarker et al., 2019] Sarker, S., Chatterjee, S., Xiao, X., and Elbanna, A. (2019). The sociotechnical axis of cohesion for the IS discipline: Its historical legacy and its continued relevance. MIS Quarterly, 43(3):695–720.
[Satz, 2010] Satz, D. (2010). Why some things should not be for sale: The moral limits of markets. Oxford University Press.
[Sauer et al., 2021] Sauer, H., Blunden, C., Eriksen, C., and Rehren, P. (2021). Moral progress: Recent developments. Philosophy Compass, 16(10):e12769.
[Sauerbrei et al., 2020] Sauerbrei, W., Perperoglou, A., Schmid, M., Abrahamowicz, M., Becher, H., Binder, H., Dunkler, D., Harrell, F. E., Royston, P., and Heinze, G. (2020). State of the art in selection of variables and functional forms in multivariable analysis—outstanding issues. Diagnostic and Prognostic Research, 4(1):1–18.
[Savage, 1972] Savage, L. J. (1972). The foundations of statistics. Courier Corporation.
[Sayedi, 2018] Sayedi, A. (2018). Real-time bidding in online display advertising. Marketing Science, 37(4):553–568.
[Sayer, 2009] Sayer, A. (2009). Who’s afraid of critical social science? Current sociology, 57(6):767–786.
[Sayers, 2003] Sayers, S. (2003). Creative activity and alienation in Hegel and Marx. Historical Materialism, 11(1):107–128.
[Schachter, 1983] Schachter, O. (1983). Human dignity as a normative concept. American Journal of International Law, 77(4):848–854.
[Schaefer et al., 2014] Schaefer, G. O., Kahane, G., and Savulescu, J. (2014). Autonomy and enhancement. Neuroethics, 7(2):123–136.
[Schafer et al., 1999] Schafer, J. B., Konstan, J., and Riedl, J. (1999). Recommender systems in e-commerce. In Proceedings of the 1st ACM conference on Electronic commerce, pages 158–166.
[Schauer, 2006] Schauer, F. (2006). Profiles, probabilities, and stereotypes. Harvard University Press.
[Schauer, 2009] Schauer, F. (2009). Thinking like a lawyer: a new introduction to legal reasoning. Harvard University Press.
[Schechtman, 2018] Schechtman, M. (2018). The constitution of selves. Cornell University Press.
[Scherer, 2015] Scherer, M. U. (2015). Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harv. JL & Tech., 29:353.
[Schindler and Hilborn, 2015] Schindler, D. E. and Hilborn, R. (2015). Prediction, precaution, and policy under global change. Science, 347(6225):953–954.
[Schmidhuber, 1991] Schmidhuber, J. (1991). Curious model-building control systems. In Proc. international joint conference on neural networks, pages 1458–1463.
[Schmidhuber, 2015] Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural networks, 61:85–117.
[Schnabel et al., 2016] Schnabel, T., Swaminathan, A., Singh, A., Chandak, N., and Joachims, T. (2016). Recommendations as treatments: Debiasing learning and evaluation. In International Conference on Machine Learning, pages 1670–1679. PMLR.
[Schneider et al., 2018] Schneider, C., Weinmann, M., and Vom Brocke, J. (2018). Digital nudging: guiding online user choices through interface design. Communications of the ACM, 61(7):67–73.
[Schor et al., 2016] Schor, J. et al. (2016). Debating the sharing economy. Journal of self-governance and management economics, 4(3):7–22.
[Schrittwieser et al., 2020] Schrittwieser, J., Antonoglou, I., Hubert, T., Simonyan, K., Sifre, L., Schmitt, S., Guez, A., Lockhart, E., Hassabis, D., Graepel, T., et al. (2020). Mastering Atari, Go, chess and shogi by planning with a learned model. Nature, 588(7839):604–609.
[Schuler and Namioka, 1993] Schuler, D. and Namioka, A. (1993). Participatory design: Principles and practices. CRC Press.
[Schütze, 2012] Schütze, R. (2012). European constitutional law. Cambridge University Press.
[Scofield, 1993] Scofield, G. R. (1993). Ethics consultation: The least dangerous profession? Cambridge Quarterly of Healthcare Ethics, 2(4):417–426.
[Scurich, 2018] Scurich, N. (2018). The case against categorical risk estimates. Behavioral Sciences & the Law, 36(5):554–564.
[Searle, 2010] Searle, J. (2010). Making the social world: The structure of human civilization. Oxford University Press.
[Searle, 1993] Searle, J. R. (1993). Rationality and realism, what is at stake? Daedalus, 122(4):55–83.
[Sedlacek, 1987] Sedlacek, W. E. (1987). Black students on white campuses: Twenty years of research. Journal of College Student Personnel, 28(6):484–495.
[Seger, 1994] Seger, C. A. (1994). Implicit learning. Psychological bulletin, 115(2):163.
[Selbst and Barocas, 2018a] Selbst, A. D. and Barocas, S. (2018a). The intuitive appeal of explainable machines. Fordham L. Rev., 87:1085.
[Selbst and Barocas, 2018b] Selbst, A. D. and Barocas, S. (2018b). The intuitive appeal of explainable machines. Fordham L. Rev., 87:1085.
[Selbst et al., 2019] Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., and Vertesi, J. (2019). Fairness and abstraction in sociotechnical systems. In Proceedings of the conference on fairness, accountability, and transparency, pages 59–68.
[Sell, 2008] Sell, J. (2008). Introduction to deception debate. Social Psychology Quarterly, 71(3):213.
[Semenova et al., 2019] Semenova, L., Rudin, C., and Parr, R. (2019). A study in Rashomon curves and volumes: A new perspective on generalization and model simplicity in machine learning. arXiv preprint arXiv:1908.01755.
[Sen, 1997] Sen, A. (1997). Resources, values and development. Harvard University Press.
[Sen, 1999] Sen, A. (1999). On ethics and economics. OUP Catalogue.
[Sen and Williams, 1982] Sen, A. and Williams, B. A. O. (1982). Utilitarianism and beyond. Cambridge University Press.
[Settles, 2009] Settles, B. (2009). Active learning literature survey.
[Shadish, 1993] Shadish, W. R. (1993). Critical multiplism: A research strategy and its attendant tactics. New Directions for Program Evaluation, 1993(60):13–57.
[Shafer, 1986] Shafer, G. (1986). Savage revisited. Statistical Science, pages 463–485.
[Shardanand and Maes, 1995] Shardanand, U. and Maes, P. (1995). Social information filtering: Algorithms for automating "word of mouth". In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 210–217.
[Sharma, 2020] Sharma, P. (2020). Coronavirus news, markets and AI: the COVID-19 diaries. Routledge India.
[Sheldon, 2014] Sheldon, K. M. (2014). Becoming oneself: The central role of self-concordant goal selection. Personality and Social Psychology Review, 18(4):349–365.
[Shi et al., 2019] Shi, J.-C., Yu, Y., Da, Q., Chen, S.-Y., and Zeng, A.-X. (2019). Virtual-Taobao: Virtualizing real-world online retail environment for reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4902–4909.
[Shipley, 2008] Shipley, D. E. (2008). Due process rights before eu agencies: the rights of defense. Ga. J. Int’l & Comp. L., 37:1.
[Shkabatur, 2019] Shkabatur, J. (2019). The global commons of data. Stan. Tech. L. Rev., 22:354.
[Shleifer and Vishny, 1997] Shleifer, A. and Vishny, R. W. (1997). A survey of corporate governance. The journal of finance, 52(2):737–783.
[Shmueli et al., 2021] Shmueli, B., Fell, J., Ray, S., and Ku, L.-W. (2021). Beyond fair pay: Ethical implications of NLP crowdsourcing. arXiv preprint arXiv:2104.10097.
[Shmueli, 2017] Shmueli, G. (2017). Research dilemmas with behavioral big data. Big data, 5(2):98–119.
[Shmueli, 2019] Shmueli, G. (2019). Lift up and act! Classifier performance in resource-constrained applications. arXiv preprint arXiv:1906.03374.
[Shmueli et al., 2017] Shmueli, G., Bruce, P. C., Yahav, I., Patel, N. R., and Lichtendahl Jr, K. C. (2017). Data mining for business analytics: concepts, techniques, and applications in R. John Wiley & Sons.
[Shmueli et al., 2010] Shmueli, G. et al. (2010). To explain or to predict? Statistical Science, 25(3):289–310.
[Shmueli and Tafti, 2023] Shmueli, G. and Tafti, A. (2023). How to “improve” prediction using behavior modification. International Journal of Forecasting.
[Shneiderman, 2020] Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6):495–504.
[Signatories, 2020] Signatories, I. (2020). An open letter to the Communications of the ACM.
[Silver et al., 2017] Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., et al. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676):354–359.
[Silver et al., 2021] Silver, D., Singh, S., Precup, D., and Sutton, R. S. (2021). Reward is enough. Artificial Intelligence, 299:103535.
[Simmons et al., 2011] Simmons, J. P., Nelson, L. D., and Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11):1359–1366.
[Simonsohn et al., 2020] Simonsohn, U., Simmons, J. P., and Nelson, L. D. (2020). Specification curve analysis. Nature Human Behaviour, 4(11):1208–1214.
[Singer, 1981] Singer, P. (1981). The expanding circle. Citeseer.
[Singh et al., 2020] Singh, A., Halpern, Y., Thain, N., Christakopoulou, K., Chi, E., Chen, J., and Beutel, A. (2020). Building healthy recommendation sequences for everyone: A safe reinforcement learning approach. In FAccTRec Workshop.
[Singh, 2013] Singh, J. P. (2013). Predictive validity performance indicators in violence risk assessment: A methodological primer. Behavioral Sciences & the Law, 31(1):8–22.
[Singh et al., 2011] Singh, J. P., Grann, M., and Fazel, S. (2011). A comparative study of violence risk assessment tools: A systematic review and metaregression analysis of 68 studies involving 25,980 participants. Clinical psychology review, 31(3):499–513.
[Singh et al., 2009] Singh, S., Lewis, R. L., and Barto, A. G. (2009). Where do rewards come from. In Proceedings of the annual conference of the cognitive science society, pages 2601–2606. Cognitive Science Society.
[Singler, 2020] Singler, B. (2020). “blessed by the algorithm”: Theistic conceptions of artificial intelligence in online discourse. AI & society, 35:945–955.
[Skinner, 1963] Skinner, B. F. (1963). Behaviorism at fifty: The rapid growth of a scientific analysis of behavior calls for a restatement of the philosophy of psychology. Science, 140(3570):951–958.
[Skinner, 1965] Skinner, B. F. (1965). Science and human behavior. Simon and Schuster.
[Skinner, 2002] Skinner, Q. (2002). Visions of Politics: Regarding Method, volume 1. Cambridge University Press.
[Slivkins et al., 2019] Slivkins, A. et al. (2019). Introduction to multi-armed bandits. Foundations and Trends® in Machine Learning, 12(1-2):1–286.
[Slobogin, 2017] Slobogin, C. (2017). Principles of risk assessment: Sentencing and policing. Ohio St. J. Crim. L., 15:583.
[Sloman and Hagmayer, 2006] Sloman, S. A. and Hagmayer, Y. (2006). The causal psycho-logic of choice. Trends in cognitive sciences, 10(9):407–412.
[Slote, 2007] Slote, M. (2007). The ethics of care and empathy. Routledge.
[Smart and Williams, 1973] Smart, J. J. C. and Williams, B. (1973). Utilitarianism: For and against. Cambridge University Press.
[Smiley, 2009] Smiley, M. (2009). Moral responsibility and the boundaries of community. University of Chicago Press.
[Smith, 2021] Smith, B. (December 5, 2021). How TikTok reads your mind. www.nytimes.com/2021/12/05/business/media/tiktok-algorithm.html. The New York Times, Online, accessed November 8, 2022.
[Smith and Linden, 2017] Smith, B. and Linden, G. (2017). Two decades of recommender systems at Amazon.com. IEEE Internet Computing, 21(3):12–18.
[Smith, 2011] Smith, C. (2011). What is a person?: Rethinking humanity, social life, and the moral good from the person up. University of Chicago Press.
[Smuha, 2021] Smuha, N. A. (2021). From a 'race to AI' to a 'race to AI regulation': regulatory competition for artificial intelligence. Law, Innovation and Technology, 13(1):57–84.
[Snow, 2012] Snow, C. P. (2012). The two cultures. Cambridge University Press.
[Soares and Fallenstein, 2014] Soares, N. and Fallenstein, B. (2014). Aligning superintelligence with human interests: A technical research agenda. Machine Intelligence Research Institute (MIRI) technical report, 8.
[Sokol and Flach, 2020] Sokol, K. and Flach, P. (2020). One explanation does not fit all. KI – Künstliche Intelligenz, 34(2):235–250.
[Sokoloff, 2005] Sokoloff, W. W. (2005). Between justice and legality: Derrida on decision. Political Research Quarterly, 58(2):341–352.
[Solove, 2000] Solove, D. J. (2000). Privacy and power: Computer databases and metaphors for information privacy. Stan. L. Rev., 53:1393.
[Soman and Yeung, 2020] Soman, D. and Yeung, C. (2020). The behaviourally informed organization. University of Toronto Press.
[Sorensen, 1998] Sorensen, R. A. (1998). Thought experiments. Oxford University Press on Demand.
[Sornette, 2002] Sornette, D. (2002). Predictability of catastrophic events: Material rupture, earthquakes, turbulence, financial crashes, and human birth. Proceedings of the National Academy of Sciences, 99(suppl 1):2522–2529.
[Sowell, 2002] Sowell, T. (2002). A conflict of visions: Ideological origins of political struggles. Basic Books (AZ).
[Spahn, 2012] Spahn, A. (2012). And lead us (not) into persuasion...? Persuasive technology and the ethics of communication. Science and Engineering Ethics, 18:633–650.
[Speicher et al., 2018] Speicher, T., Heidari, H., Grgic-Hlaca, N., Gummadi, K. P., Singla, A., Weller, A., and Zafar, M. B. (2018). A unified approach to quantifying algorithmic unfairness: Measuring individual & group unfairness via inequality indices. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2239–2248.
[Srinivasan, 2020] Srinivasan, D. (2020). Why Google dominates advertising markets. Stan. Tech. L. Rev., 24:55.
[Srnicek, 2017] Srnicek, N. (2017). Platform capitalism. John Wiley & Sons.
[Stace, 1924] Stace, W. T. (1924). The philosophy of Hegel: A systematic exposition. Macmillan.
[Stahl, 2008] Stahl, B. C. (2008). The ethical nature of critical research in information systems. Information systems journal, 18(2):137–163.
[Stahl, 2022] Stahl, B. C. (2022). Responsible innovation ecosystems: Ethical implications of the application of the ecosystem concept to artificial intelligence. International Journal of Information Management, 62:102441.
[Stahl et al., 2021] Stahl, B. C., Andreou, A., Brey, P., Hatzakis, T., Kirichenko, A., Macnish, K., Shaelou, S. L., Patel, A., Ryan, M., and Wright, D. (2021). Artificial intelligence for human flourishing–beyond principles for machine learning. Journal of Business Research, 124:374–388.
[Stanford Law School Policy Lab, 2019a] Stanford Law School Policy Lab (2019a). Risk assessment fact sheet. Online, accessed May 20, 2021.
[Stanford Law School Policy Lab, 2019b] Stanford Law School Policy Lab (2019b). Stanford pretrial risk assessment tools factsheet. Online, accessed May 20, 2021.
[Starr, 2014] Starr, S. B. (2014). Evidence-based sentencing and the scientific rationalization of discrimination. Stan. L. Rev., 66:803.
[Starr, 2015] Starr, S. B. (2015). The new profiling: Why punishing based on poverty and identity is unconstitutional and wrong. Federal Sentencing Reporter, 27(4):229–236.
[Steegen et al., 2016] Steegen, S., Tuerlinckx, F., Gelman, A., and Vanpaemel, W. (2016). Increasing transparency through a multiverse analysis. Perspectives on Psychological Science, 11(5):702–712.
[Steinkamp et al., 2008] Steinkamp, N. L., Gordijn, B., and Ten Have, H. A. (2008). Debating ethical expertise. Kennedy Institute of Ethics Journal, 18(2):173–192.
[Stephens-Davidowitz and Pabon, 2017] Stephens-Davidowitz, S. and Pabon, A. (2017). Everybody lies: Big data, new data, and what the internet can tell us about who we really are. HarperCollins New York.
[Sterman, 2010] Sterman, J. (2010). Business dynamics. Irwin/McGraw-Hill.
[Sternberg, 1997] Sternberg, R. J. (1997). The concept of intelligence and its role in lifelong learning and success. American psychologist, 52(10):1030.
[Stevenson, 1944] Stevenson, C. L. (1944). Ethics and language. Yale University Press.
[Stevenson, 2018] Stevenson, M. (2018). Assessing risk assessment in action. Minn. L. Rev., 103:303.
[Stiglitz, 1999] Stiglitz, J. E. (1999). Knowledge as a global public good. In Global public goods: International cooperation in the 21st century, pages 308–325.
[Stilgoe et al., 2013] Stilgoe, J., Owen, R., and Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9):1568–1580.
[Stobbs et al., 2017] Stobbs, N., Hunter, D., and Bagaric, M. (2017). Can sentencing be enhanced by the use of artificial intelligence? Criminal Law Journal, 41(5):261–277.
[Stöger et al., 2021] Stöger, K., Schneeberger, D., and Holzinger, A. (2021). Medical artificial intelligence: the European legal perspective. Communications of the ACM, 64(11):34–36.
[Stooke et al., 2021] Stooke, A., Mahajan, A., Barros, C., Deck, C., Bauer, J., Sygnowski, J., Trebacz, M., Jaderberg, M., Mathieu, M., et al. (2021). Open-ended learning leads to generally capable agents. arXiv preprint arXiv:2107.12808.
[Stout, 2012] Stout, L. A. (2012). The shareholder value myth: How putting shareholders first harms investors, corporations, and the public. Berrett-Koehler Publishers.
[Stray et al., 2022] Stray, J., Halevy, A., Assar, P., Hadfield-Menell, D., Boutilier, C., Ashar, A., Beattie, L., Ekstrand, M., Leibowicz, C., Sehat, C. M., Johansen, S., Kerlin, L., Vickrey, D., Singh, S., Vrijenhoek, S., Zhang, A., Andrus, M., Helberger, N., Proutskova, P., Mitra, T., and Vasan, N. (2022). Building human values into recommender systems: An interdisciplinary synthesis.
[Strehl et al., 2010] Strehl, A., Langford, J., Li, L., and Kakade, S. M. (2010). Learning from logged implicit exploration data. Advances in neural information processing systems, 23.
[Strevens, 2020] Strevens, M. (2020). The knowledge machine: How irrationality created modern science. Liveright Publishing.
[Stryker and Burke, 2000] Stryker, S. and Burke, P. J. (2000). The past, present, and future of an identity theory. Social Psychology Quarterly, 63(4):284–297.
[Stumpf et al., 2009] Stumpf, S., Rajaram, V., Li, L., Wong, W.-K., Burnett, M., Dietterich, T., Sullivan, E., and Herlocker, J. (2009). Interacting meaningfully with machine learning systems: Three experiments. International journal of human-computer studies, 67(8):639–662.
[Su et al., 2021] Su, C., Zhou, H., Gong, L., Teng, B., Geng, F., and Hu, Y. (2021). Viewing personalized video clips recommended by tiktok activates default mode network and ventral tegmental area. NeuroImage, 237:118136.
[Suess and Trumbo, 2010] Suess, E. A. and Trumbo, B. E. (2010). Introduction to probability simulation and Gibbs sampling with R. Springer Science & Business Media.
[Sun et al., 2020] Sun, W., Nasraoui, O., and Shafto, P. (2020). Evolution and impact of bias in human and machine learning algorithm interaction. Plos one, 15(8):e0235502.
[Sunstein et al., 2002] Sunstein, C., Kahneman, D., Schkade, D., and Ritov, I. (2002). Predictably incoherent judgments. Stanford Law Review, 54(1190):34.
[Sunstein, 2000] Sunstein, C. R. (2000). Cognition and cost-benefit analysis. The Journal of Legal Studies, 29(S2):1059–1103.
[Sunstein, 2005] Sunstein, C. R. (2005). Laws of fear: beyond the precautionary principle. Cambridge University Press.
[Sunstein, 2022] Sunstein, C. R. (2022). Sludge audits. Behavioural Public Policy, 6(4):654–673.
[Suppe, 2000] Suppe, F. (2000). Understanding scientific theories: An assessment of developments, 1969-1998. Philosophy of science, 67:102–115.
[Supreme Court of Iowa, 2017] Supreme Court of Iowa (2017). Iowa v. Gordon. Online, accessed March 30, 2022.
[Suresh and Guttag, 2019] Suresh, H. and Guttag, J. V. (2019). A framework for understanding sources of harm throughout the machine learning life cycle. arXiv preprint arXiv:1901.10002.
[Susser et al., 2019] Susser, D., Roessler, B., and Nissenbaum, H. (2019). Technology, autonomy, and manipulation. Internet Policy Review, 8(2).
[Sutton, 2019] Sutton, R. (2019). The bitter lesson.
[Sutton and Barto, 2018] Sutton, R. S. and Barto, A. G. (2018). Reinforcement
learning: An introduction. MIT press.
[Sutton et al., 2022] Sutton, R. S., Bowling, M. H., and Pilarski, P. M. (2022).
The alberta plan for ai research. arXiv preprint arXiv:2208.11173.
[Swaminathan and Joachims, 2015] Swaminathan, A. and Joachims, T. (2015). Counterfactual risk minimization: Learning from logged bandit feedback. In International Conference on Machine Learning, pages 814–823. PMLR.
[Swan, 2013] Swan, M. (2013). The quantified self: Fundamental disruption in big data science and biological discovery. Big data, 1(2):85–99.
[Tabassi, 2021] Tabassi, E. (2021). Trustworthy and responsible ai. https://www.nist.gov/programs-projects/trustworthy-and-responsible-ai.
[Taddeo and Floridi, 2018] Taddeo, M. and Floridi, L. (2018). How ai can be a force for good. Science, 361(6404):751–752.
[Tamanaha et al., 2004] Tamanaha, B. Z. et al. (2004). On the rule of law: His- tory, politics, theory. Cambridge University Press.
[Tang, 2017] Tang, P. (2017). Reinforcement mechanism design. In IJCAI, pages 5146–5150.
[Tanriverdi et al., 2010] Tanriverdi, H., Rai, A., and Venkatraman, N. (2010). Research commentary—reframing the dominant quests of information systems strategy research for complex adaptive business systems. Information systems research, 21(4):822–834.
[Taylor, 1979] Taylor, C. (1979). Atomism. In Kontos, A., editor, Powers, possessions and freedom: Essays in Honour of C.B. Macpherson, pages 39–62. University of Toronto Press.
[Taylor, 1980] Taylor, C. (1980). Understanding in human science. The Review of Metaphysics, 34(1):25–38.
[Taylor, 1989] Taylor, C. (1989). Sources of the self: The making of the modern identity. Harvard University Press.
[Taylor, 1992] Taylor, C. (1992). Sources of the self: The making of the modern identity. Harvard University Press.
[Taylor, 1994] Taylor, C. (1994). Multiculturalism. In Gutmann, A., editor, Multiculturalism: Examining the Politics of Recognition. Princeton University Press.
[Tennant et al., 2021] Tennant, P. W., Murray, E. J., Arnold, K. F., Berrie, L., Fox, M. P., Gadd, S. C., Harrison, W. J., Keeble, C., Ranker, L. R., Textor, J., et al. (2021). Use of directed acyclic graphs (dags) to identify confounders in applied health research: review and recommendations. International journal of epidemiology, 50(2):620–632.
[Tennie et al., 2009] Tennie, C., Call, J., and Tomasello, M. (2009). Ratcheting up the ratchet: on the evolution of cumulative culture. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1528):2405–2415.
[Tenopir et al., 2011] Tenopir, C., Allard, S., Douglass, K., Aydinoglu, A. U., Wu, L., Read, E., Manoff, M., and Frame, M. (2011). Data sharing by scientists: practices and perceptions. PloS one, 6(6):e21101.
[Teschl and Comim, 2005] Teschl, M. and Comim, F. (2005). Adaptive preferences and capabilities: Some preliminary conceptual explorations. Review of social economy, 63(2):229–247.
[Thakor, 1996] Thakor, A. V. (1996). The design of financial systems: An overview. Journal of Banking & Finance, 20(5):917–948.
[Theocharous et al., 2019] Theocharous, G., Healey, J., Mahadevan, S., and Saad, M. (2019). Personalizing with human cognitive biases. In Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization, pages 13–17.
[Theocharous et al., 2015] Theocharous, G., Thomas, P. S., and Ghavamzadeh, M. (2015). Personalized ad recommendation systems for life-time value optimization with guarantees. In Twenty-Fourth International Joint Conference on Artificial Intelligence.
[Thomas et al., 2019] Thomas, P. S., Castro da Silva, B., Barto, A. G., Giguere, S., Brun, Y., and Brunskill, E. (2019). Preventing undesirable behavior of intelligent machines. Science, 366(6468):999–1004.
[Thomas and Uminsky, 2020] Thomas, R. and Uminsky, D. (2020). The problem with metrics is a fundamental problem for ai. arXiv preprint arXiv:2002.08512.
[Thomaz et al., 2006] Thomaz, A. L., Breazeal, C., et al. (2006). Reinforcement learning with human teachers: Evidence of feedback and guidance with implications for learning performance. In AAAI, volume 6, pages 1000–1005. Boston, MA.
[Tiboni et al., 2022] Tiboni, G., Arndt, K., and Kyrki, V. (2022). Dropo: Sim-to-real transfer with offline domain randomization. arXiv preprint arXiv:2201.08434.
[Tillers, 2005] Tillers, P. (2005). If wishes were horses: discursive comments on attempts to prevent individuals from being unfairly burdened by their reference classes. Law, Probability and Risk, 4(1-2):33–49.
[Tindell et al., 2009] Tindell, A. J., Smith, K. S., Berridge, K. C., and Aldridge, J. W. (2009). Dynamic computation of incentive salience: “wanting” what was never “liked”. Journal of Neuroscience, 29(39):12220–12228.
[Tiwana, 2013] Tiwana, A. (2013). Platform ecosystems: Aligning architecture, governance, and strategy. Newnes.
[Tobback et al., 2017] Tobback, E., Bellotti, T., Moeyersoms, J., Stankova, M., and Martens, D. (2017). Bankruptcy prediction for smes using relational data. Decision Support Systems, 102:69–81.
[Tobin et al., 2017] Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., and Abbeel, P. (2017). Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS), pages 23–30. IEEE.
[Tobler, 1999] Tobler, C. (1999). The standard of judicial review of administrative agencies in the us and eu: accountability and reasonable agency action. BC Int’l & Comp. L. Rev., 22:213.
[Todorov, 2009] Todorov, E. (2009). Efficient computation of optimal actions. Proceedings of the national academy of sciences, 106(28):11478–11483.
[Tollenaar and Van Der Heijden, 2019] Tollenaar, N. and Van Der Heijden, P. G. (2019). Optimizing predictive performance of criminal recidivism models using registration data with binary and survival outcomes. PloS one, 14(3):e0213245.
[Tominaga et al., 2011] Tominaga, T., Asahina, Y., Uyama, Y., and Kondo, T. (2011). Regulatory science as a bridge between science and society. Clinical Pharmacology & Therapeutics, 90(1):29–31.
[Torra, 2017] Torra, V. (2017). Data privacy: foundations, new developments and the big data challenge. Springer.
[Tosun, 2013] Tosun, J. (2013). How the eu handles uncertain risks: Understanding the role of the precautionary principle. Journal of European Public Policy, 20(10):1517–1528.
[Touretzky and Saksida, 1997] Touretzky, D. S. and Saksida, L. M. (1997). Operant conditioning in skinnerbots. Adaptive Behavior, 5(3-4):219–247.
[Triandis, 2018] Triandis, H. C. (2018). Individualism and collectivism. Routledge.
[Tsang and Williams, 2012] Tsang, E. W. and Williams, J. N. (2012). Generalization and induction: Misconceptions, clarifications, and a classification of induction. MIS quarterly, pages 729–748.
[Tsesis, 2017] Tsesis, A. (2017). Constitutional ethos: liberal equality for the common good. Oxford University Press.
[Tufekci, 2014] Tufekci, Z. (2014). Big questions for social media big data: representativeness, validity and other methodological pitfalls. In Proceedings of the International AAAI Conference on Web and Social Media, volume 8.
[Turing, 1950] Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236):433–460.
[Turner et al., 2021] Turner, A., Smith, L., Shah, R., Critch, A., and Tadepalli, P. (2021). Optimal policies tend to seek power. Advances in Neural Information Processing Systems, 34:23063–23074.
[Tushnet, 2012] Tushnet, M. (2012). Constitution-making: an introduction. Tex. L. Rev., 91:1983.
[Tushnet et al., 2013] Tushnet, M. V., Fleiner, T., and Saunders, C. (2013). Routledge handbook of constitutional law. Routledge USA.
[Tversky and Kahneman, 1974] Tversky, A. and Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases: Biases in judgments reveal some heuristics of thinking under uncertainty. science, 185(4157):1124–1131.
[Tyler, 2006] Tyler, T. R. (2006). Why people obey the law. Princeton university press.
[UCI, 2022] UCI (2022). Activities that require IRB review. https://research.uci.edu/compliance/human-research-protections/researchers/activities-irb-review.html. Online, accessed February 24, 2022.
[Ulnicane et al., 2021] Ulnicane, I., Eke, D. O., Knight, W., Ogoh, G., and Stahl, B. C. (2021). Good governance as a response to discontents? Déjà vu, or lessons for ai from other emerging technologies. Interdisciplinary Science Reviews, 46(1-2):71–93.
[Underwood, 1979] Underwood, B. D. (1979). Law and the crystal ball: Predicting behavior with statistical inference and individualized judgment. The Yale Law Journal, 88(7):1408–1448.
[Urbina et al., 2022] Urbina, F., Lentzos, F., Invernizzi, C., and Ekins, S. (2022). Dual use of artificial-intelligence-powered drug discovery. Nature Machine Intelligence, 4(3):189–191.
[Ustun et al., 2019] Ustun, B., Spangher, A., and Liu, Y. (2019). Actionable recourse in linear classification. In Proceedings of the conference on fairness, accountability, and transparency, pages 10–19.
[Vaccaro et al., 2018] Vaccaro, K., Huang, D., Eslami, M., Sandvig, C., Hamilton, K., and Karahalios, K. (2018). The illusion of control: Placebo effects of control settings. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pages 1–13.
[Vale and Zanfir-Fortuna, 2022] Vale, S. B. and Zanfir-Fortuna, G. (2022). Automated decision-making under the gdpr: Practical cases from courts and data protection authorities. Technical report, Future of Privacy Forum.
[van de Poel, 2020] van de Poel, I. (2020). Embedding values in artificial intelligence (ai) systems. Minds and Machines, 30(3):385–409.
[Van der Sloot, 2017] Van der Sloot, B. (2017). Decisional privacy 2.0: the procedural requirements implicit in article 8 echr and its potential impact on profiling. International Data Privacy Law, 7(3):190–201.
[Van Dijck, 2013a] Van Dijck, J. (2013a). The culture of connectivity: A critical history of social media. Oxford University Press.
[Van Dijck, 2013b] Van Dijck, J. (2013b). The culture of connectivity: A critical history of social media. Oxford University Press.
[Van Dijck et al., 2018] Van Dijck, J., Poell, T., and De Waal, M. (2018). The platform society: Public values in a connective world. Oxford University Press.
[Van Doorn et al., 2010] Van Doorn, J., Lemon, K. N., Mittal, V., Nass, S., Pick, D., Pirner, P., and Verhoef, P. C. (2010). Customer engagement behavior: Theoretical foundations and research directions. Journal of service research, 13(3):253–266.
[Van Fraassen, 1980] Van Fraassen, B. C. (1980). The scientific image. Oxford University Press.
[Van Otterlo, 2013] Van Otterlo, M. (2013). A machine learning view on profiling. In Privacy, Due Process and the Computational Turn: Philosophers of Law Meet Philosophers of Technology. Abingdon: Routledge.
[Varela et al., 2017] Varela, F. J., Thompson, E., and Rosch, E. (2017). The embodied mind, revised edition: Cognitive science and human experience. MIT press.
[Varian, 2016] Varian, H. R. (2016). Causal inference in economics and marketing.
Proceedings of the National Academy of Sciences, 113(27):7310–7315.
[Vaswani et al., 2017a] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017a). Attention is all you need. Advances in neural information processing systems, 30.
[Vaswani et al., 2017b] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017b). Attention is all you need. In Advances in neural information processing systems, pages 5998–6008.
[Venkatasubramanian and Alfano, 2020] Venkatasubramanian, S. and Alfano, M. (2020). The philosophical basis of algorithmic recourse. In Proceedings of the 2020 conference on fairness, accountability, and transparency, pages 284–293.
[Verbeke et al., 2014] Verbeke, W., Martens, D., and Baesens, B. (2014). Social network analysis for customer churn prediction. Applied Soft Computing, 14:431–446.
[Verhulst, 2022] Verhulst, S. G. (2022). Operationalizing digital self determination. arXiv preprint arXiv:2211.08539.
[Verma et al., 2020] Verma, S., Dickerson, J., and Hines, K. (2020). Counterfactual explanations for machine learning: A review. arXiv preprint arXiv:2010.10596.
[Vesanen, 2007] Vesanen, J. (2007). What is personalization? A conceptual framework. European Journal of Marketing, 41(5/6):409–418.
[Vial, 2023] Vial, G. (2023). Data governance and digital innovation: A translational account of practitioner issues for is research. Information and Organization, 33(1):100450.
[Viljoen et al., 2021] Viljoen, S., Goldenfein, J., and McGuigan, L. (2021). Design choices: Mechanism design and platform capitalism. Big data & society, 8(2):20539517211034312.
[Vinitsky et al., 2020] Vinitsky, E., Du, Y., Parvate, K., Jang, K., Abbeel, P., and Bayen, A. (2020). Robust reinforcement learning using adversarial populations. arXiv preprint arXiv:2008.01825.
[Visentin et al., 2018] Visentin, R., Campos-Náñez, E., Schiavon, M., Lv, D., Vettoretti, M., Breton, M., Kovatchev, B. P., Dalla Man, C., and Cobelli, C. (2018). The uva/padova type 1 diabetes simulator goes from single meal to single day. Journal of diabetes science and technology, 12(2):273–281.
[Vogell et al., 2022] Vogell, H., Coryne, H., and Little, R. (2022). Rent Going Up? One Company’s Algorithm Could Be Why. https://www.propublica.org/article/yieldstar-rent-increase-realpage-rent. Accessed: 2022-12-04.
[Volokh, 2018] Volokh, E. (2018). Chief justice robots. Duke LJ, 68:1135.
[Von Bertalanffy, 1972] Von Bertalanffy, L. (1972). The history and status of general systems theory. Academy of management journal, 15(4):407–426.
[von Hirsch, 1984] von Hirsch, A. (1984). The ethics of selective incapacitation: Observations on the contemporary debate. Crime & Delinquency, 30(2):175–194.
[von Schomberg, 2013] von Schomberg, R. (2013). A vision of responsible research and innovation. Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, pages 51–74.
[Vredenburgh, 2022] Vredenburgh, K. (2022). The right to explanation. Journal of Political Philosophy, 30(2):209–229.
[Wachter et al., 2017] Wachter, S., Mittelstadt, B., and Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the gdpr. Harv. JL & Tech., 31:841.
[Wachter et al., 2021] Wachter, S., Mittelstadt, B., and Russell, C. (2021). Why fairness cannot be automated: Bridging the gap between eu non-discrimination law and ai. Computer Law & Security Review, 41:105567.
[Waldman, 2020] Waldman, A. E. (2020). Cognitive biases, dark patterns, and the ‘privacy paradox’. Current opinion in psychology, 31:105–109.
[Waldron, 1987] Waldron, J. (1987). Theoretical foundations of liberalism. The Philosophical Quarterly (1950-), 37(147):127–150.
[Walker and Muchnik, 2014] Walker, D. and Muchnik, L. (2014). Design of randomized experiments in networks. Proceedings of the IEEE, 102(12):1940–1951.
[Wallach and Allen, 2008] Wallach, W. and Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford University Press.
[Walton, 1988] Walton, D. N. (1988). Burden of proof. Argumentation, 2:233–254.
[Walzer, 2008] Walzer, M. (2008). Spheres of justice: A defense of pluralism and equality. Basic books.
[Wang and He, 2023] Wang, J. and He, X. (2023). Subgroup analysis and adaptive experiments crave for debiasing. Wiley Interdisciplinary Reviews: Computational Statistics, page e1614.
[Wang et al., 2017] Wang, J., Zhang, W., Yuan, S., et al. (2017). Display advertising with real-time bidding (rtb) and behavioural targeting. Foundations and Trends in Information Retrieval, 11(4-5):297–435.
[Wang et al., 2021] Wang, K., Zou, Z., Deng, Q., Shang, Y., Zhao, M., Wu, R., Shen, X., Lyu, T., and Fan, C. (2021). Rl4rs: A real-world benchmark for reinforcement learning based recommender system. arXiv preprint arXiv:2110.11073.
[Wang et al., 2022a] Wang, S., Wang, T., He, C., and Hu, Y. J. (2022a). Can your toothpaste shopping predict mutual funds purchasing?—transferring knowledge from consumer goods to financial products via machine learning. Transferring Knowledge from Consumer Goods to Financial Products Via Machine Learning (January 27, 2022).
[Wang et al., 2022b] Wang, W., Li, B., Luo, X., and Wang, X. (2022b). Deep reinforcement learning for sequential targeting. Management Science.
[Ward, 2022] Ward, J. (2022). The Loop: How Technology is Creating a World Without Choices and How to Fight Back. Hachette Books.
[Warren, 2017] Warren, M. E. (2017). A problem-based approach to democratic theory. American Political Science Review, 111(1):39–53.
[Wasserman, 1991] Wasserman, D. T. (1991). The morality of statistical proof and the risk of mistaken liability. Cardozo L. Rev., 13:935.
[Watson and Floridi, 2021] Watson, D. S. and Floridi, L. (2021). The explanation game: a formal framework for interpretable machine learning. Synthese, 198(10).
[Weber, 1958] Weber, M. (1958). Science as a vocation. Daedalus, 87(1):111–134.
[Wegner, 2004] Wegner, D. M. (2004). Précis of the illusion of conscious will. Behavioral and Brain Sciences, 27(5):649–659.
[Weidinger et al., 2023] Weidinger, L., McKee, K. R., Everett, R., Huang, S., Zhu, T. O., Chadwick, M. J., Summerfield, C., and Gabriel, I. (2023). Using the veil of ignorance to align ai systems with principles of justice. Proceedings of the National Academy of Sciences, 120(18):e2213709120.
[Weidinger et al., 2022] Weidinger, L., Uesato, J., Rauh, M., Griffin, C., Huang, P.-S., Mellor, J., Glaese, A., Cheng, M., Balle, B., Kasirzadeh, A., et al. (2022). Taxonomy of risks posed by language models. In 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 214–229.
[Weinstein, 1994] Weinstein, B. D. (1994). The possibility of ethical expertise. Theoretical Medicine, 15(1):61–75.
[Weiss, 2013] Weiss, G. M. (2013). Foundations of imbalanced learning. Imbalanced Learning: Foundations, Algorithms, and Applications, pages 13–41.
[Weller and Kinder-Kurlanda, 2016] Weller, K. and Kinder-Kurlanda, K. E. (2016). A manifesto for data sharing in social media research. In Proceedings of the 8th ACM Conference on Web Science, pages 166–172.
[Wertenbroch et al., 2020] Wertenbroch, K., Schrift, R. Y., Alba, J. W., Barasch, A., Bhattacharjee, A., Giesler, M., Knobe, J., Lehmann, D. R., Matz, S., Nave, G., et al. (2020). Autonomy in consumer choice. Marketing Letters, 31(4):429–439.
[Werth, 2019] Werth, R. (2019). Risk and punishment: The recent history and uncertain future of actuarial, algorithmic, and “evidence-based” penal techniques. Sociology Compass, 13(2):e12659.
[Wheeler and Berger, 2007] Wheeler, S. C. and Berger, J. (2007). When the same prime leads to different effects. Journal of Consumer Research, 34(3):357–368.
[White House, 2022] White House (2022). Blueprint for an ai bill of rights: Making automated systems work for the american people. https://www.whitehouse.gov/ostp/ai-bill-of-rights/. Online, accessed November 20, 2022.
[Whittle et al., 2018] Whittle, R., Peat, G., Belcher, J., Collins, G. S., and Riley, R. D. (2018). Measurement error and timing of predictor values for multivariable risk prediction models are poorly reported. Journal of clinical epidemiology, 102:38–49.
[Whittlestone et al., 2021] Whittlestone, J., Arulkumaran, K., and Crosby, M. (2021). The societal implications of deep reinforcement learning. Journal of Artificial Intelligence Research, 70:1003–1030.
[Wiener, 1988] Wiener, N. (1988). The human use of human beings: Cybernetics and society. Number 320. Da Capo Press.
[Wiesböck and Hess, 2020] Wiesböck, F. and Hess, T. (2020). Digital innovations. Electronic Markets, 30(1):75–86.
[Wiesel et al., 2011] Wiesel, T., Pauwels, K., and Arts, J. (2011). Practice prize paper—marketing’s profit impact: Quantifying online and off-line funnel progression. Marketing Science, 30(4):604–611.
[Wigner, 1960] Wigner, E. (1960). The unreasonable effectiveness of mathematics in the natural sciences. Communications on pure and applied mathematics, 13:1–14.
[Williams, 2006] Williams, B. (2006). Ethics and the Limits of Philosophy. Routledge.
[Wilson et al., 2020] Wilson, J. Q., DiIulio Jr, J. J., Bose, M., and Levendusky, M. S. (2020). American Government: Institutions and Policies, Enhanced. Cengage Learning.
[Winick, 1992] Winick, B. J. (1992). On autonomy: Legal and psychological perspectives. Vill. L. Rev., 37:1705.
[Winsberg, 2003] Winsberg, E. (2003). Simulated experiments: Methodology for a virtual world. Philosophy of science, 70(1):105–125.
[Wolf, 1993] Wolf, S. (1993). Freedom within reason. Oxford University Press on Demand.
[Wolpert, 1996] Wolpert, D. H. (1996). The lack of a priori distinctions between learning algorithms. Neural computation, 8(7):1341–1390.
[Wong and Dobson, 2019] Wong, K. L. X. and Dobson, A. S. (2019). We’re just data: Exploring china’s social credit system in relation to digital platform rat- ings cultures in westernised democracies. Global Media and China, 4(2):220–232.
[Wong, 2020] Wong, P.-H. (2020). Democratizing algorithmic fairness. Philosophy & Technology, 33(2):225–244.
[Wu and Taneja, 2020] Wu, A. X. and Taneja, H. (2020). Platform enclosure of human behavior and its measurement: Using behavioral trace data against platform episteme. New Media & Society, pages 2650–2667.
[Wu et al., 2022] Wu, J., Zhang, Z., Feng, Z., Wang, Z., Yang, Z., Jordan, M. I., and Xu, H. (2022). Sequential information design: Markov persuasion process and its efficient reinforcement learning. arXiv preprint arXiv:2202.10678.
[Xiao and Benbasat, 2007] Xiao, B. and Benbasat, I. (2007). E-commerce product recommendation agents: Use, characteristics, and impact. MIS quarterly, pages 137–209.
[Xin et al., 2020] Xin, X., Karatzoglou, A., Arapakis, I., and Jose, J. M. (2020). Self-supervised reinforcement learning for recommender systems. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, pages 931–940.
[Xiong et al., 2022] Xiong, F., Xie, M., Zhao, L., Li, C., and Fan, X. (2022). Recognition and evaluation of data as intangible assets. Sage Open, 12(2):21582440221094600.
[Yacoby et al., 2022] Yacoby, Y., Green, B., Griffin, C. L., and Velez, F. D. (2022). “If it didn’t happen, why would I change my decision?”: How judges respond to counterfactual explanations for the public safety assessment. arXiv preprint arXiv:2205.05424.
[Yahav et al., 2016] Yahav, I., Shmueli, G., and Mani, D. (2016). A tree-based approach for addressing self-selection in impact studies with big data. MIS Quarterly, 40(4):819–848.
[Yampolskiy, 2020] Yampolskiy, R. V. (2020). Unpredictability of ai: On the impossibility of accurately predicting all actions of a smarter agent. Journal of Artificial Intelligence and Consciousness, 7(01):109–118.
[Yang et al., 2020] Yang, J., Eckles, D., Dhillon, P., and Aral, S. (2020). Targeting for long-term outcomes. arXiv preprint arXiv:2010.15835.
[Yao et al., 2021] Yao, S., Halpern, Y., Thain, N., Wang, X., Lee, K., Prost, F., Chi, E. H., Chen, J., and Beutel, A. (2021). Measuring recommender system effects with simulated users. arXiv preprint arXiv:2101.04526.
[Yeung, 2017] Yeung, K. (2017). ‘hypernudge’: Big data as a mode of regulation by design. Information, Communication & Society, 20(1):118–136.
[Yew and Hadfield-Menell, 2022] Yew, R.-J. and Hadfield-Menell, D. (2022). A penalty default approach to preemptive harm disclosure and mitigation for ai systems. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, pages 823–830.
[You et al., 2018] You, C.-W., Lin, Y.-F., Chuang, Y., Lee, Y.-H., Hsu, P.-Y., Lin, S.-Y., Chang, C.-C., Chung, Y.-J., Chen, Y.-L., Huang, M.-C., et al. (2018). Sobermotion: Leveraging the force of probation officers to reduce the risk of dui recidivism. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(3):1–34.
[Zagzebski, 2001] Zagzebski, L. (2001). The uniqueness of persons. Journal of religious Ethics, 29(3):401–423.
[Zambak, 2013] Zambak, A. F. (2013). The frame problem. In Müller, V. C., editor, Philosophy and Theory of Artificial Intelligence, pages 307–319. Springer.
[Zarsky, 2016] Zarsky, T. Z. (2016). Incompatible: the gdpr in the age of big data. Seton Hall L. Rev., 47:995.
[Završnik, 2020] Završnik, A. (2020). Criminal justice, artificial intelligence systems, and human rights. In ERA Forum, volume 20, pages 567–583. Springer.
[Završnik, 2021] Završnik, A. (2021). Algorithmic justice: Algorithms and big data in criminal justice settings. European Journal of criminology, 18(5):623–642.
[Zbontar et al., 2021] Zbontar, J., Jing, L., Misra, I., LeCun, Y., and Deny, S. (2021). Barlow twins: Self-supervised learning via redundancy reduction. In International Conference on Machine Learning, pages 12310–12320. PMLR.
[Zemel et al., 2013] Zemel, R., Wu, Y., Swersky, K., Pitassi, T., and Dwork, C. (2013). Learning fair representations. In International conference on machine learning, pages 325–333. PMLR.
[Zeng et al., 2019] Zeng, G., Chen, Y., Cui, B., and Yu, S. (2019). Continual learning of context-dependent processing in neural networks. Nature Machine Intelligence, 1(8):364–372.
[Zezula et al., 2006] Zezula, P., Amato, G., Dohnal, V., and Batko, M. (2006). Similarity search: the metric space approach, volume 32. Springer Science & Business Media.
[Zgoba and Dayal, 2015] Zgoba, K. M. and Dayal, N. P. (2015). Recidivism. The Encyclopedia of Crime and Punishment, pages 1–5.
[Zhang et al., 2019] Zhang, S., Yao, L., Sun, A., and Tay, Y. (2019). Deep learning based recommender system: A survey and new perspectives. ACM Computing Surveys (CSUR), 52(1):1–38.
[Zhao et al., 2018] Zhao, J., Qiu, G., Guan, Z., Zhao, W., and He, X. (2018). Deep reinforcement learning for sponsored search real-time bidding. In Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining, pages 1021–1030.
[Zhao et al., 2019] Zhao, X., Xia, L., Tang, J., and Yin, D. (2019). Deep reinforce- ment learning for search, recommendation, and online advertising: a survey. ACM SIGWEB Newsletter, (Spring):1–15.
[Zhao et al., 2009] Zhao, Y., Kosorok, M. R., and Zeng, D. (2009). Reinforcement learning design for cancer clinical trials. Statistics in medicine, 28(26):3294– 3315.
[Zhenbin et al., 2020] Zhenbin, Y., Kankanhalli, A., Ha, S., and Tayi, G. K. (2020). What drives public agencies to participate in open government data initiatives? an innovation resource perspective. Information & Management, 57(3):103179.
[Zhou et al., 2020] Zhou, S., Dai, X., Chen, H., Zhang, W., Ren, K., Tang, R., He, X., and Yu, Y. (2020). Interactive recommender system via knowledge graph-enhanced reinforcement learning. In Proceedings of the 43rd Interna- tional ACM SIGIR Conference on Research and Development in Information Retrieval, pages 179–188.
[Zhou et al., 2010] Zhou, T., Kuscsik, Z., Liu, J.-G., Medo, M., Wakeling, J. R., and Zhang, Y.-C. (2010). Solving the apparent diversity-accuracy dilemma of recommender systems. Proceedings of the National Academy of Sciences, 107(10):4511–4515.
[Zhu et al., 2023] Zhu, Y., Ma, J., and Li, J. (2023). Causal inference in recom- mender systems: A survey of strategies for bias mitigation, explanation, and generalization. arXiv preprint arXiv:2301.00910.
[Zhuang and Hadfield-Menell, 2020] Zhuang, S. and Hadfield-Menell, D. (2020). Consequences of misaligned ai. Advances in Neural Information Processing Sys- tems, 33:15763–15773.
[Zou et al., 2019] Zou, L., Xia, L., Ding, Z., Song, J., Liu, W., and Yin, D. (2019). Reinforcement learning to optimize long-term user engagement in recommender systems. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2810–2818.
[Zuboff, 2019] Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.
[Zuiderveen Borgesius, 2015] Zuiderveen Borgesius, F. J. (2015). Personal data processing for behavioural targeting: which legal basis? International Data Privacy Law, 5(3):163–176.
[Zwick and Dholakia, 2004] Zwick, D. and Dholakia, N. (2004). Whose identity is it anyway? consumer representation in the age of database marketing. Journal of Macromarketing, 24(1):31–43.