Cybersecurity

As cyber threats grow more sophisticated, traditional rule-based defenses are no longer enough. This category breaks down how artificial intelligence is transforming the cybersecurity landscape. We cover the latest research and real-world applications in AI-driven defense, including predictive analytics for network security, automated malware classification, adversarial machine learning, and phishing detection. Stay updated on how neural networks are learning to predict and neutralize cyberattacks before they happen.

ACCF: Adversarial Contrastive Collaborative Filtering

Recommender Systems · Knowledge-Based Systems 2026 · 14 min read
ACCF: Teaching Recommender Systems to Learn from Adversity Through Contrastive Learning. A novel training paradigm that integrates adversarial perturbations with instance-sensitive optimization to enhance robustness and generalization in graph neural network-based […]

The FedDRLPD system architecture.

FedDRLPD: Deep Reinforcement Learning Defense Against Poisoning Attacks in Federated Learning

Federated Learning Security · Knowledge-Based Systems 2026 · 16 min read
FedDRLPD: Teaching AI to Defend Itself Against Poisoning Attacks Through Deep Reinforcement Learning. A novel defense framework that integrates Deep Q-Network algorithms with Mahalanobis […]

PDF: PUF-based DNN Fingerprinting for Knowledge Distillation Traceability

Model Security · DAC 2026, Long Beach, CA · 15 min read
The Hardware Fingerprint That Traces Stolen AI Models Back to Their Source. A novel PUF-based framework embeds unclonable hardware signatures into teacher models during knowledge distillation, […]

SegTrans: The Breakthrough Framework That Makes AI Segmentation Models Vulnerable to Transfer Attacks

In the high-stakes world of autonomous driving, medical diagnostics, and satellite imagery analysis, semantic segmentation models are the unsung heroes. These sophisticated AI systems perform pixel-level classification, allowing them to precisely identify and outline objects like pedestrians, tumors, or road markings within complex images. Their accuracy is critical for safety and reliability. However, a groundbreaking […]
