Yale Cyber Leadership Forum hosts discussion on “AI ethics and security”








The Yale Cyber Leadership Forum held its final session of the year last Friday, featuring two roundtable discussions on the ethics of AI.

This year’s forum, titled “Bridging the Divide: National Security Implications of Artificial Intelligence,” is a collaboration between the Jackson Institute for Global Affairs and Yale Law School, with the goal of bridging the gap between law, policy and technology. Friday’s event was the last of three that the forum organized this school year.

“The series of events that I saw at Yale and got to be a part of, for me, is really a sign of how the field is changing, how computing is opening up to this kind of interdisciplinary dialogue,” Brian Christian, an author who was one of the panelists, told the News. “More and more, we see policymakers, philosophers, legal theorists in the same room as experts in machine learning […] and so for me what I take away is that it’s a team effort, and it’s going to require some collaboration across those traditional disciplinary lines, and I think that’s what we’re starting to have.”

The first panel included Elena Kvochko, chief trust officer of enterprise software giant SAP; Andi Peng ’18, PhD student at Massachusetts Institute of Technology; and Dragomir Radev, professor of computer science at Yale. Ted Wittenstein, Executive Director of International Security Studies, moderated the panel.

The discussion focused on research into technical security and trust in AI. Radev opened with a presentation on the limits of natural language processing – the ability of an AI to learn and understand human language. He illustrated this with the example of an ethics-focused AI known as Ask Delphi, which was trained on data from the social media site Reddit and began giving racist, homophobic, sexist and illogical answers to ethical questions. Radev also pointed out that a significant amount of AI and machine learning research is conducted only in English, which can lead to linguistic bias in its results.

Peng discussed the risks and limitations of deploying robots physically in the real world and the difficulty of communicating an intended goal to a machine when designing machine-learning systems. In response to an audience question, she described how one early cleaning robot was trained to maximize the amount of dust it could vacuum up. When the robot was deployed in a home, it vacuumed up the dust, dumped it back out and vacuumed it up again, since doing so maximized the amount of dust it collected. This communication problem is only exacerbated, she said, when robots then interact with data that may be impure or biased.
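Peng’s anecdote is a textbook case of reward misspecification. The sketch below is a minimal, purely illustrative toy – not any system discussed at the forum – showing how a reward defined as the cumulative amount of dust picked up makes dumping dust back out and re-collecting it score higher than simply cleaning the room once.

```python
# Toy illustration of a misspecified reward, assuming a one-room world where the
# robot can either "vacuum" (pick up all dust on the floor) or "dump" (put the
# dust it is holding back on the floor). Reward = cumulative dust picked up.

def total_dust_collected(actions, dust_on_floor=10):
    held, reward = 0, 0
    for action in actions:
        if action == "vacuum":
            reward += dust_on_floor   # credited for everything it picks up
            held += dust_on_floor
            dust_on_floor = 0
        elif action == "dump":
            dust_on_floor += held     # puts the same dust back on the floor
            held = 0
    return reward

honest_policy = ["vacuum"]
gaming_policy = ["vacuum", "dump", "vacuum", "dump", "vacuum"]

print(total_dust_collected(honest_policy))  # 10
print(total_dust_collected(gaming_policy))  # 30 -- "better" under the flawed reward
```

The flaw here is not in the learning algorithm but in the objective: the designer wanted a clean room and rewarded dust collected, and the two come apart.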

“I think a key question that we are currently facing as a field is the question of alignment, that is, how do we align the intended goals of our users with those of the real systems we have trained?” Peng said during the panel.

The discussion then turned to trust in the private sector and its ability to fend off cyber threats. Today’s “threat landscape,” Kvochko told the audience, poses a number of challenges. She cited figures showing that 36 billion records were exposed in data breaches, 58 percent of which involved personal data. Additionally, she said, more than 90 percent of breaches begin with a phishing email, highlighting how vulnerable many industries and individuals are.

The challenges facing the cybersecurity industry are continually evolving, Kvochko said, with an increasingly remote workforce representing a new vulnerability to attack and shortages of cybersecurity personnel leaving too little manpower and expertise to address the problem. She highlighted the role of private companies like SAP in fostering trust by avoiding biased training data, allowing customers to opt out of sharing data with third-party vendors and addressing data breaches, among other measures. She also stressed the importance of the research community partnering with the business community to answer questions about trust in automated systems.

Peng echoed Kvochko’s sentiment, stressing the value of exposure to and collaboration with industry alongside academia.

“As an academic, I would say, first and foremost, a lot of times we get super siloed within these little research bubbles that we work in, such that we end up motivating our own work according to the values of our community, and often we forget what matters in the world and what doesn’t,” Peng told the panel.

The second panel featured Christian and Scott Shapiro, a professor of law and philosophy at Yale. It was moderated by Oona Hathaway, professor of international law and director of the Yale Law School Center for Global Legal Challenges.

Focused on the development of AI ethics and standards, the panel began with Christian describing the “unusual position” he found himself in as a human subject in a 2009 Turing test competition – an annual contest in which computer programs are tested against human control subjects to see which is hardest to distinguish from a human. Christian recalled with amusement that there was both an award for the most convincing computer program and a “Most Human Human” award.

“I think it’s fair enough to say that with the rise of large language models […] we can view the Turing test as being in the rearview mirror,” Christian said. “I don’t think Alan Turing could have imagined that in 2022 the Turing tests are basically like one of the boring chores of being someone who’s on the internet.”

Christian’s most recent book, “The Alignment Problem,” examines the gap between what humans operationalize in a system and what they hoped the system would do. He argued that the alignment problem is one that “worsens rather than improves” as models become more powerful. If a program like GitHub Copilot, a Microsoft product that uses large language models to autocomplete a user’s code, is presented with buggy code, it tends to autocomplete that code with even more bugs.

Christian spoke about the application of artificial intelligence in criminal justice, where statistics such as predictions of re-arrest often rely on very simple models with a handful of parameters. He cited a model by Cynthia Rudin, a computer scientist at Duke University, that “fits in one English sentence”: it predicts recidivism for men under 20, those under 23 with two prior arrests or anyone with three or more prior arrests. That model reportedly rivals in accuracy COMPAS, the controversial proprietary, closed-source system used by many states.
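Rudin’s broader argument is that a model this small can be written out in full. As a hypothetical encoding of the rule exactly as paraphrased above – the published model’s thresholds and scoring may differ – it amounts to a few lines of code, in contrast to a proprietary black box:

```python
# Hypothetical encoding of the one-sentence rule paraphrased in the article; the
# actual published model may use different thresholds or a point-based score.
# It exists only to show how few parameters such an interpretable model needs.

def predicts_rearrest(age: int, prior_arrests: int) -> bool:
    return (
        age < 20                                # under 20
        or (age < 23 and prior_arrests >= 2)    # under 23 with two prior arrests
        or prior_arrests >= 3                   # three or more prior arrests
    )

print(predicts_rearrest(age=19, prior_arrests=0))  # True
print(predicts_rearrest(age=30, prior_arrests=1))  # False
```

Because every rule is visible, anyone can see exactly which factors drove a prediction – the transparency that, as Christian noted, closed-source systems do not offer.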

Because governments increasingly mandate the use of these tools, he said, they are effectively becoming extensions of the law. Yet not all of the data behind these AI systems is made public. There have been instances, Christian said, in which defendants were unable to access the data fed into the model.

Shapiro, who describes himself as a “philosopher [who] walks into an AI lab,” examined the implications of using artificial intelligence in the courtroom. He cited and repudiated Chief Justice John Roberts’ call for the development of an AI system that would yield “better” decisions in legal cases. Relying on AI to make data-driven sentencing decisions, as opposed to using sentencing guidelines, is simply an “unnecessary extension of the law,” Shapiro argued. He suggested that such a debate returns to the same questions about how much discretion should be given to decision makers – or, in the case of AI, decision-making tools – since in both AI and law there will always be “a human in the loop” to make the final call.

“One of the things that really strikes me, when I think back to the history of AI and machine learning, is that the field really started with a kind of interdisciplinary spirit,” Christian said. “If I go back to the mid-1950s, there was a series of events – the Macy conferences specifically – that brought together, you know, cybernetics experts and neurologists, mathematicians, but also anthropologists and psychiatrists, and it seems to me that we are just now beginning to recapture some of that truly interdisciplinary spirit as we appreciate the inherently interdisciplinary nature of these issues.”

This year’s forum was the fifth since the program began.

MIRANDA JEYARETNAM


Miranda Jeyaretnam is the beat reporter covering the Jackson Institute for Global Affairs and developments at the National University of Singapore and Yale-NUS for the YDN University desk. She was previously an opinion editor for the Yale Daily News under the 2022 YDN board and wrote as a staff columnist for her opinion column “Crossing the Aisle” in spring 2020. From Singapore, she is a sophomore in Pierson College, majoring in English.




