Topic: Adversarial Robustness in Machine Learning

This cluster of papers focuses on the robustness of deep learning models against adversarial attacks, covering adversarial examples, security, uncertainty estimation, defenses, and verification. It examines the challenges of, and potential solutions for, ensuring the resilience of neural networks against malicious inputs.
Highly Cited Publications (Past 5 Years)
Segment Anything
article · OpenAlex citations: 8689 · FWCI: 999.6111

On Assessing ML Model Robustness: A Methodological Framework (Academic Track)
preprint · OpenAlex citations: 4567 · FWCI: 1331.4379

Towards Total Recall in Industrial Anomaly Detection
article · OpenAlex citations: 1310 · FWCI: 123.2631

Explainable AI (XAI): Core Ideas, Techniques, and Solutions
review · OpenAlex citations: 1108 · FWCI: 127.0185

A survey of uncertainty in deep neural networks
article · OpenAlex citations: 1106 · FWCI: 178.7623

Mixup: Beyond empirical risk minimization
article · OpenAlex citations: 961 · FWCI: 0

Hands-On Bayesian Neural Networks—A Tutorial for Deep Learning Users
article · OpenAlex citations: 800 · FWCI: 92.6029

No More Strided Convolutions or Pooling: A New CNN Building Block for Low-Resolution Images and Small Objects
book chapter · OpenAlex citations: 704 · FWCI: 272.6085

Planning-oriented Autonomous Driving
article · OpenAlex citations: 700 · FWCI: 74.9289

SCConv: Spatial and Channel Reconstruction Convolution for Feature Redundancy
article · OpenAlex citations: 688 · FWCI: 79.2509