Abstract:
Cyber threats have become more sophisticated and difficult to counter than ever before with the rapid development of artificial intelligence (AI). This study focuses on three key aspects of AI-driven cyber threats: social engineering using deepfake media, self-modifying malware that attacks autonomously, and the difficulty of defending systems against AI-backed cyberweapons. Deepfake technology, built on generative AI, allows criminals to carry out highly convincing phishing and impersonation attacks against human targets with little effort. AI-driven autonomous malware can rapidly mutate to bypass conventional anti-malware systems.
AI-enabled cyberweapons, such as tools for automated exploit generation, pose a serious threat to computer networks. Through a comprehensive literature review and a proposed mixed methodology, this study examines how these threats operate, what effects they have, and how they can be controlled.
The results suggest that AI-based attacks are becoming stealthier and more scalable, so new defensive methods are required.
Participant responses address the effectiveness of AI in detecting attackers, the shortcomings of current cybersecurity methods, and the ethical boundaries of using AI in cybersecurity. To address these emerging hazards, this paper proposes a multi-layered defense system that combines machine learning, deep learning, and metaheuristic algorithms. The study urges the adoption of rapid, adaptive, and ethical cybersecurity practices to defend critical systems in a world increasingly dominated by AI.
This study contributes to ongoing conversations in the field by providing new findings for researchers, policymakers, and cybersecurity practitioners.