Published on 00/00/0000
Last updated on 00/00/0000
As the use of AI across industry grows rapidly, responsible AI is gaining increasing attention from both the AI research community and industry. This trend is not surprising: many AI pioneers and well-known advocates have long pointed out the need to establish reliable, fair, transparent, and robust standards and practices for AI development.
At Cisco Research, we have considered responsible AI a critical research direction for the past two years, recognizing its impact on many aspects of the future of AI development, from societal and ethical concerns to the reliability, robustness, and trustworthiness of AI systems. Since the start of this initiative, we have worked to understand the real issues in responsible AI development by engaging with teams and AI experts across Cisco as well as with our customers. We have also funded dozens of fascinating, novel contributions to many aspects of responsible AI research through our sponsored research and gift programs.
In an effort to bring many of these research projects together and to promote new developments in this area, we recently held our first-ever Responsible AI Summit, where some of our amazing collaborators from top schools came together in a virtual event to present their responsible AI research.

While various techniques exist for measuring and mitigating bias in structured data, we have few or no solutions for problems in AI where sensitive information is not directly accessible or explicitly annotated. To address this, Judy Hoffman, our collaborator from Georgia Tech, presented new results on identifying and mitigating bias in an unsupervised manner. This is one of the hardest problems in unsupervised learning, and her work is a promising direction for tackling it. Even for structured data, we often lack access to sensitive demographic information because of concerns such as privacy. Kai Shu's research at the Illinois Institute of Technology (IIT) considers techniques and novel algorithms for these types of problems, including the use of correlated attributes as proxies to estimate and mitigate bias. Parinaz Naghizadeh from Ohio State presented a novel approach to adaptively adjusting decision boundaries in scenarios where the data distribution shifts and relying on limited training data can result in seriously biased and unfair decision making. Stevie Chancellor from the University of Minnesota reminded us that finding the sources of bias and gathering reliable data for ethical AI is not an easy task; it requires new ways of designing experiments and conducting multidisciplinary studies, especially in areas such as mental health.
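To make the structured-data case concrete, here is a minimal sketch of one widely used fairness metric, the demographic parity difference: the gap in positive-prediction rates between two demographic groups. This is a generic illustration, not code from any of the presented projects; the function name and toy data are ours.

```python
def demographic_parity_difference(preds, groups):
    """Absolute gap in positive-prediction rates between group 0 and group 1.

    preds  -- binary model predictions (0 or 1) per individual
    groups -- demographic group label (0 or 1) per individual
    """
    rate = {}
    for g in (0, 1):
        member_preds = [p for p, grp in zip(preds, groups) if grp == g]
        rate[g] = sum(member_preds) / len(member_preds)
    return abs(rate[0] - rate[1])

# Toy example: 8 individuals, 4 per group.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5: group 0 gets
# positive outcomes 75% of the time versus 25% for group 1
```

Note that computing this at all requires the group labels; the research above addresses exactly the situations where such labels are unavailable and must be handled indirectly, for example via correlated proxy attributes.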
A reliable and trustworthy AI system should be robust against noise and against malicious attackers who try to poison our models or data. Baharan Mirzasoleiman from UCLA presented new results on scalable defenses against several types of adversarial attacks.
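To give a flavor of what an adversarial (evasion) attack looks like in the simplest setting, here is a toy sketch, not taken from the talk: for a linear classifier with score w·x, the worst-case perturbation within an L-infinity ball of radius eps is eps·sign(-w), which shifts the score by eps·‖w‖₁. All weights and inputs below are illustrative numbers.

```python
def score(w, x):
    """Linear classifier score: positive score means the positive class."""
    return sum(wi * xi for wi, xi in zip(w, x))

def worst_case_perturbation(w, eps):
    """Perturbation with |delta_i| <= eps that lowers the score the most."""
    return [-eps if wi > 0 else eps for wi in w]

w = [0.5, -1.0, 2.0]
x = [1.0, 1.0, 1.0]                      # clean input, score = 1.5 (positive)
delta = worst_case_perturbation(w, eps=0.5)
x_adv = [xi + di for xi, di in zip(x, delta)]

print(score(w, x))      # 1.5
print(score(w, x_adv))  # -0.25: a small per-feature change flips the decision
```

Even this tiny example shows why robustness is hard: the score drop, eps·‖w‖₁, grows with the number of features, so high-dimensional models can be flipped by perturbations that are imperceptibly small per feature. Defending modern deep models against such attacks, and doing so at scale, is the harder problem the research above targets.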
Last but not least, Polo Chau, our collaborator from Georgia Tech, presented an amazing ensemble of visualization techniques his group has developed that can help shed light on the opaque nature of black-box AI.
At the end of these presentations, our panel of experts engaged with researchers from Cisco Research, as well as experts from other parts of Cisco, to discuss many of the open problems in responsible AI development.
In addition to these efforts and our collaborations with top universities, the responsible AI research team at Cisco has been working on open-source tools for the AI community. At the summit we presented our latest project, RAI, a Python library and dashboarding tool for responsible AI development, and we discussed the importance of reliable, accessible open-source tools in bringing the outcomes of state-of-the-art research to a broader audience. Tune in to the Cisco Research portal and our tech blog for more on this soon-to-be-released open-source project.