How do you use AI?
Share the details about how you use AI on campus, and you can be featured in the Join the Conversation showcase.
Use the button below to submit your AI use case via Qualtrics form.
Wes Bethel, associate professor of Computer Science, is the principal investigator of a research project studying methods for automating the generation of the software tools and processes used to build machine learning models.
The work supports an effort funded by the U.S. Department of Energy's Office of Fusion Energy Sciences to leverage artificial intelligence (AI) and machine learning tools to reduce computational time-to-solution for specific physics calculations, with the ultimate objective of predicting plasma behavior in real time in fusion tokamak devices.
Bethel, who joined SF State in 2022 after a career as a computer scientist at Lawrence Berkeley National Laboratory, has assembled a team of Computer Science graduates and undergraduates (V. Cramer, C. Pestano, A. del Rio and S. Verma), and other faculty (Computer Science Lecturer Lothar Narins) to study this problem. The SF State team is part of a larger multi-institutional effort led by J. Wright at the Massachusetts Institute of Technology and includes researchers from the Princeton Plasma Physics Laboratory and Lawrence Berkeley National Laboratory.
The key idea behind the team’s approach is to leverage recent advances in cloud-based AI tools, such as Large Language Model implementations like OpenAI’s ChatGPT and GitHub’s Copilot, to quickly produce code that builds other AI models and model validation processes. The intent is to reduce the time to solution for trained models from years to weeks.
Bethel looks forward to reproducing this work on other computational challenges within the SF State scientific community.
The Certificate in Ethical Artificial Intelligence (AI) offers professionals and graduate students the opportunity to gain a deeper grasp of the ethical, legal and policy issues raised by developments in artificial intelligence. The program addresses areas of impact including pharmaceutical and healthcare research and distribution, business practices in data and finance, law enforcement, live and social media, the development and filtering of information and news, autonomous transportation (including automobiles and mass transit), the availability and distribution of government services and other sectors of society.
The program consists of three courses and a short research report on a specific application of ethical issues in AI. The award of the certificate means the holder has completed the required courses and research project at an acceptable level of academic accomplishment. The certificate indicates to potential employers and to other academic programs that the holder has achieved a foundation in the basic principles of artificial intelligence, the latest developments in AI and their ethical implications for society.
To learn more about the Graduate Certificate in Ethical Artificial Intelligence (AI), visit the Lam Family College of Business (LFCoB) Graduate Certificate in Ethical Artificial Intelligence page, or the Bulletin page.
Montemayor, C. (2023). The Prospect of a Humanitarian Artificial Intelligence: Agency and Alignment
In this open access book, Carlos Montemayor illuminates the development of artificial intelligence (AI) by examining our drive to live a dignified life.
He uses the notions of agency and attention to consider our pursuit of what is important. His method shows how the best way to guarantee value alignment between humans and potentially intelligent machines is through attention routines that satisfy similar needs. Setting out a theoretical framework for AI, Montemayor acknowledges its legal, moral and political implications and takes into account how epistemic agency differs from moral agency.
Through his insightful comparisons between human and animal intelligence, Montemayor makes it clear why adopting a need-based attention approach justifies a humanitarian framework. This is an urgent, timely argument for developing AI technologies based on international human rights agreements.