CERIAS Weekly Security Seminar - Purdue University
CERIAS
599 episodes
1 week ago
The weekly CERIAS security seminar has been held every semester since spring of 1992. We invite personnel at Purdue and visitors from outside to present on topics of particular interest to them in the areas of computer and network security, computer crime investigation, information warfare, information ethics, public policy for computing and security, the computing "underground," and other related topics.
52 minutes
1 week ago
Abulhair Saparov, Can/Will LLMs Learn to Reason?
Reasoning, the process of drawing conclusions from prior knowledge, is a hallmark of intelligence. Large language models, and more recently large reasoning models, have demonstrated impressive results on many reasoning-intensive benchmarks. Careful studies over the past few years have revealed that LLMs may exhibit some reasoning behavior, and larger models tend to do better on reasoning tasks. However, even the largest current models still struggle with various kinds of reasoning problems. In this talk, we will try to address the question: are the observed reasoning limitations of LLMs fundamental in nature, or will they be resolved by further increasing the size and training data of these models, or by better techniques for training them? I will describe recent work that tackles this question from several different angles. The answer will help us better understand the risks posed by future LLMs as vast resources continue to be invested in their development.

About the speaker: Abulhair Saparov is an Assistant Professor of Computer Science at Purdue University. His research focuses on applications of statistical machine learning to natural language processing, natural language understanding, and reasoning. His recent work closely examines the reasoning capacity of large language models, identifying fundamental limitations and developing new methods and tools to address or work around those limitations. He has also explored the use of symbolic and neurosymbolic methods to both understand and improve the reasoning capabilities of AI models. He is broadly interested in other applications of statistical machine learning, such as in the natural sciences.