Dr. William Wang, associate professor of computer science and co-director of the natural language processing group at UCSB, gave a virtual lecture on Tuesday titled "What Is Responsible AI?"
Dr. Wang’s lecture served as the kick-off event for UCSB Reads, a campus- and community-wide reading program run by the UCSB Library.
“Their new season is centered on Ted Chiang’s Exhalation, a short story collection that addresses essential questions about human and computer interaction, including the use of artificial intelligence,” according to a press release from UCSB.
In the lecture, Dr. Wang addressed the concerns and challenges that arise because artificial intelligence (AI) systems, which are built and programmed by humans, are subject to human bias.
“Sometimes these biases could lead to the ‘rich getting richer’ phenomenon after the AI systems are deployed. That’s why in addition to accuracy, it is important to conduct research in fair and responsible AI systems, including the definition of fairness, measurement, detection and mitigation of biases in AI systems,” said Dr. Wang.
“It is important to first define what fairness means. Defining fairness is really difficult to do. There is a trade-off between group fairness and individual fairness,” Dr. Wang told the News-Press when asked how research can be conducted into fair and responsible AI.
“One practical thing to think about is the trade-off. Accuracy and efficiency are also important and should be taken into consideration. If you have well-defined concepts, how would you be able to optimize? The challenge with the system is that data is always changing; if you only rely on historical data, you will see an accuracy drop, known as the distribution shift,” Dr. Wang continued.
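The accuracy drop Dr. Wang describes can be illustrated with a toy experiment. The sketch below (a hypothetical illustration, not from the lecture) trains a simple threshold classifier on historical data, then evaluates it after the data distribution has moved; the numbers and the shift amount are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Historical data: negatives centered at 0.0, positives at 2.0.
x_old = np.concatenate([rng.normal(0.0, 1, 500), rng.normal(2.0, 1, 500)])
y_old = np.concatenate([np.zeros(500), np.ones(500)])

# A simple learned rule: predict positive above the midpoint of the class means.
threshold = (x_old[y_old == 0].mean() + x_old[y_old == 1].mean()) / 2

def accuracy(x, y, t):
    """Fraction of examples the threshold rule classifies correctly."""
    return ((x > t) == y).mean()

# New data after a distribution shift: every input has drifted upward by 1.5.
x_new = x_old + 1.5

acc_old = accuracy(x_old, y_old, threshold)
acc_new = accuracy(x_new, y_old, threshold)
print(f"historical accuracy: {acc_old:.2f}, post-shift accuracy: {acc_new:.2f}")
```

Because the threshold was fit to the old distribution, accuracy falls sharply once the inputs drift, which is the failure mode of relying only on historical data.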
Dr. Wang addressed two primary areas that need to change in order to decrease the likelihood of bias in AI: data set collection and models.
On the topic of data set collection, one example Dr. Wang gave was that of female snowboarders. Wikipedia articles suggest only about 20% of snowboarders are female, whereas peer-reviewed surveys report a higher percentage, a discrepancy that stems from differing amounts of underlying data.
Models were the second area Dr. Wang said needed improvement to decrease the likelihood of bias.
“How do we build better models? Models that will not exemplify bias in the data set. How would you be able to build a system that considers different user groups, long-term queries, etc.?” said Dr. Wang.
“In reality, because we are using human-generated data, it is very easy to encode human bias. But it is our responsibility to build systems that serve more people. However, it is also important to focus on key strategic areas. Bias that leads to offline violence and harms, those are the worst cases and we should avoid those. Energy efficiency is another area in which we must be responsible,” Dr. Wang told the News-Press when asked if AI would always be subject to bias to a certain extent.
“Really think about your data. How can your data set better represent diversity of groups? You can develop better algorithms and better machine learning systems. But you don’t want your systems to predict this sensitive variable. However you want to understand the objectives. Think about the data, the algorithms and the model,” Dr. Wang told the News-Press when asked how we can continually decrease the percentage of bias in AI.
“At UCSB we take machine learning research very seriously. Interdisciplinary research is very unique to UCSB and it is our strength. I hope we have more people interested in working together and learning about AI so that we have a good ecosystem. It is a new area and it is an emerging area. We want human-centered technology,” said Dr. Wang.