The advent of artificial intelligence (AI) brings advantages that are still being discovered as the technology is implemented, but it also gives cybercriminals greater opportunities to carry out their activities.
In terms of security, AI enables faster threat detection, intelligence analysis, and fraud identification, as well as border surveillance and monitoring, among many other advantages.
“Artificial intelligence can strengthen cybersecurity through predictive analytics, anomaly detection, and automated responses, improving the ability to defend against emerging threats,” Lieutenant Ricardo Sánchez, head of the Cyber Intelligence Section of the Panamanian National Border Service (SENAFRONT), told Diálogo. “But its use presents challenges, such as the creation of more sophisticated attacks that take advantage of machine learning to circumvent traditional security measures.”
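The anomaly detection Lt. Sánchez mentions can be pictured with a minimal, purely illustrative sketch: flag network events whose volume deviates sharply from a learned baseline. The function name, traffic figures, and threshold below are assumptions for illustration only, not part of any SENAFRONT or Fortinet system.

```python
# Illustrative z-score anomaly detector using only the Python standard
# library. Real defensive systems use far richer models, but the idea is
# the same: learn what "normal" looks like and flag large deviations.
from statistics import mean, stdev

def find_anomalies(samples, threshold=2.0):
    """Return indices of samples more than `threshold` standard
    deviations from the mean of the series."""
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:  # all values identical: nothing stands out
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]

# Hypothetical hourly connection counts; the spike at index 5
# simulates an automated attack burst.
traffic = [120, 115, 130, 125, 118, 950, 122, 128]
print(find_anomalies(traffic))  # the spike at index 5 is flagged
```

A single extreme value inflates the standard deviation itself, which is why the threshold here is modest; production detectors typically use robust statistics or trained models instead.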
On the other hand, “artificial intelligence could cause damage by manipulating sensitive information, carrying out massive automated attacks, or even deceiving detection systems by generating apparently normal behaviors,” Lt. Sánchez said.
The 2024 Threat Predictions report by FortiGuard Labs, the research and threat intelligence organization of U.S. cybersecurity company Fortinet, supports Lt. Sánchez’s viewpoint, indicating that generative AI offers cybercriminals innumerable tools.
The report predicts that in 2024 there will be much more focused and stealthy attacks designed to evade the most robust security controls, making AI one of the main threats to digital security.
For U.S. cybersecurity firm BeyondTrust, AI will become more dangerous in 2024. As the technology develops further, experts such as programmers and others who rely on AI will continue to introduce threats into the system, often unintentionally, through simple human error.
“Modern wars are no longer fought on traditional battlefields but in cyberspace, and it is technology that will replace traditional means and resources,” says Severino Mejía, coordinator of Security Projects and Programs of the Panamanian government. “Its use will reduce the costs of conflict, translating into fewer lives lost and lower economic costs. What is coming now is not an arms race but a race in which technology will be the main protagonist.”
The challenges AI poses for cybersecurity demand constant adaptation to threats, the rapid development of countermeasures, and protection against malicious use of the technology itself, Lt. Sánchez says.
“Cyberterrorism and cybercrime require a comprehensive effort to mitigate the threats we are experiencing today. To think that a State can face it alone is utopian. Establishing approved policies, with internal norms that are aligned with new regulations on cyberspace security threats, is imperative,” Mejía added.
Lt. Sánchez explains that studies are currently underway, focused on cost-benefit analysis to determine feasibility. He recommends strengthening collaboration among government entities, companies, and cybersecurity experts, as well as investing in continuous training and technological updates.
Against this backdrop, and as in other areas of the fight against organized crime, interoperability among forces is very useful, allowing best practices and experiences to be shared.
“It’s fundamental because there is an asymmetry between the learning curve of developed countries and those that do not have the training, resources, and expertise on an issue that affects us all,” Mejía said. “Not only in the training of the armed forces or police forces, but also in public and private institutions.”
For Lt. Sánchez, regional cooperation in cybersecurity can facilitate the exchange of threat information and best practices, and the creation of joint strategies to address the challenges of AI.
Currently, Panama’s security institutions are participating in a working group focused on cybersecurity, where “it is considered essential that we have a rapprochement with countries in the region that have more experience in this field and thus have a vision from another perspective,” Lt. Sánchez said.
In 2024, AI will be a subject of study both for those who work for the good of society and for those with less benign intentions toward humanity, Mejía added. “That’s why cybersecurity professionals need skills in data analysis, programming, and understanding machine learning algorithms, along with solid integrity, to work with artificial intelligence effectively and ethically.”