AI: AX - introspection
mcgrof
8 episodes
1 day ago
The art of looking into a model and understanding what is going on through introspection is referred to as AX.
Technology
PA-LRP & absLRP
AI: AX - introspection
19 minutes 31 seconds
3 months ago

We focus on two evolutions of AX that advance the explainability of deep neural networks, particularly Transformers, by improving Layer-Wise Relevance Propagation (LRP) methods. One source introduces Positional Attribution LRP (PA-LRP), a novel approach that addresses the oversight of positional encoding in prior LRP techniques, showing that it significantly enhances the faithfulness of explanations in areas such as natural language processing and computer vision. The other proposes Relative Absolute Magnitude Layer-Wise Relevance Propagation (absLRP) to overcome issues with conflicting relevance values and varying activation magnitudes in existing LRP rules, demonstrating superior performance in generating clear, contrastive, and noise-free attribution maps for image classification. Both works also contribute new evaluation metrics to better assess the quality and reliability of attribution-based explainability methods, aiming to foster more transparent and interpretable AI models.
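As background, plain LRP redistributes the relevance arriving at a layer's output back onto its inputs in proportion to each input's contribution to the pre-activation. A minimal NumPy sketch of the standard epsilon rule for a single linear layer (illustrative only; the function name and arguments are our own, not code from either paper, and the variants discussed in the episode modify this rule):

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Epsilon-rule LRP for one linear layer.

    a     : (n_in,)        input activations to the layer
    W     : (n_in, n_out)  weight matrix
    b     : (n_out,)       bias
    R_out : (n_out,)       relevance arriving at the layer's output
    Returns the relevance (n_in,) redistributed onto the inputs.
    """
    z = a @ W + b                 # forward pre-activations
    z = z + eps * np.sign(z)      # stabilizer avoids division by zero
    s = R_out / z                 # relevance per unit of pre-activation
    return a * (W @ s)            # each input gets its contribution share

# Toy example: with zero bias, relevance is (approximately) conserved.
a = np.array([1.0, 2.0])
W = np.array([[1.0, -1.0],
              [0.5,  2.0]])
b = np.zeros(2)
R_in = lrp_epsilon(a, W, b, R_out=np.array([1.0, 1.0]))
print(R_in, R_in.sum())
```

With zero bias the sum of the propagated relevance matches the sum of the incoming relevance (up to the epsilon stabilizer), which is the conservation property that both PA-LRP and absLRP aim to preserve while fixing its failure modes.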


Sources:


1) June 2025 - https://arxiv.org/html/2506.02138v1 - Revisiting LRP: Positional Attribution as the Missing Ingredient for Transformer Explainability

2) December 2024 - https://arxiv.org/pdf/2412.09311 - Advancing Attribution-Based Neural Network Explainability through Relative Absolute Magnitude Layer-Wise Relevance Propagation and Multi-Component Evaluation


To help with context, the original 2024 AttnLRP paper was also given as a source:


3) June 2024 - https://arxiv.org/pdf/2402.05602 - AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers

