Google Cloud/Cloud Security Alliance Report: IT and Security Pros Are ‘Cautiously Optimistic’ About AI

The C-suite is more familiar with AI technologies than their IT and security staff, according to a report from the Cloud Security Alliance commissioned by Google Cloud. The report, published on April 3, addressed whether IT and security professionals fear AI will replace their jobs, the benefits and challenges of the rise of generative AI and more.

Of the IT and security professionals surveyed, 63% believe AI will improve security within their organization. Another 24% are neutral on AI’s effect on security measures, while 12% do not believe AI will improve security within their organization. Only a small minority (12%) predict AI will replace their jobs.

The survey behind the report was conducted globally in November 2023, drawing responses from 2,486 IT and security professionals and C-suite leaders at organizations across the Americas, APAC and EMEA.

Cybersecurity professionals outside leadership are less clear than the C-suite on possible use cases for AI in cybersecurity, with just 14% of staff (compared to 51% of C-levels) saying they are “very clear.”

“The disconnect between the C-suite and staff in understanding and implementing AI highlights the need for a strategic, unified approach to successfully integrate this technology,” said Caleb Sima, chair of the Cloud Security Alliance’s AI Safety Initiative, in a press release.

Some questions in the report specified that responses should relate to generative AI, while other questions used the term “AI” broadly.

The AI knowledge gap in security

C-level professionals face top-down pressure that may have led them to become more familiar with use cases for AI than security professionals.

Many (82%) C-suite professionals say their executive leadership and boards of directors are pushing for AI adoption. However, the report states that this approach may cause implementation problems down the line.

“This may highlight a lack of appreciation for the difficulty and knowledge needed to adopt and implement such a unique and disruptive technology (e.g., prompt engineering),” wrote lead author Hillary Baron, senior technical director of research and analytics at the Cloud Security Alliance, and a team of contributors.

There are a few reasons this knowledge gap might exist:

  • Cybersecurity professionals may not be as informed about the way AI can affect overall strategy.
  • Leaders may underestimate how difficult it could be to implement AI strategies within existing cybersecurity practices.

The report authors note that some data (Figure A) shows respondents are about as familiar with generative AI and large language models as they are with older terms like natural language processing and deep learning.

Figure A

Responses to the instruction “Rate your familiarity with the following AI technologies or systems.” Image: Cloud Security Alliance

The report authors note that the predominance of familiarity with older terms such as natural language processing and deep learning might indicate a conflation between generative AI and popular tools like ChatGPT.

“It’s the difference between being familiar with consumer-grade GenAI tools vs professional/enterprise level, which is more critical in terms of adoption and implementation,” said Baron in an email to TechRepublic. “That is something we’re seeing generally across the board with security professionals at all levels.”

Will AI replace cybersecurity jobs?

A small group (12%) of security professionals believe AI will completely replace their jobs over the next five years. Others are more optimistic:

  • 30% think AI will help enhance parts of their skill set.
  • 28% predict AI will support them overall in their current role.
  • 24% believe AI will replace a large part of their role.
  • 5% expect AI will not affect their role at all.

Organizations’ goals for AI reflect this, with 36% seeking the outcome of AI enhancing security teams’ skills and knowledge.

The report points out an interesting inconsistency: although enhancing skills and knowledge is a highly desired outcome, talent sits at the bottom of the list of challenges. This could mean that immediate tasks such as identifying threats take priority in day-to-day operations, while talent is a longer-term concern.

Benefits and challenges of AI in cybersecurity

Respondents were split on whether AI would be more beneficial for defenders or attackers:

  • 34% see AI as more beneficial for security teams.
  • 31% view it as equally advantageous for both defenders and attackers.
  • 25% see it as more beneficial for attackers.

Professionals who are concerned about the use of AI in security cite the following reasons:

  • Poor data quality leading to unintended bias and other problems (38%).
  • Lack of transparency (36%).
  • Skills/expertise gaps when it comes to managing complex AI systems (33%).
  • Data poisoning (28%).

Hallucinations, privacy, data leakage or loss, accuracy and misuse were other options respondents could flag as concerns; each of these options received under 25% of the votes in the survey, where respondents were invited to select their top three concerns.

SEE: The UK National Cyber Security Centre found generative AI may enhance attackers’ arsenals. (TechRepublic)

Over half (51%) of respondents said “yes” when asked whether they are concerned about the potential risks of over-reliance on AI for cybersecurity; another 28% were neutral.

Planned uses for generative AI in cybersecurity

Among organizations planning to use generative AI for cybersecurity, intended uses span a very wide range (Figure B). Common uses include:

  • Rule creation.
  • Attack simulation.
  • Compliance violation monitoring.
  • Network detection.
  • Minimizing false positives.

Figure B

Responses to the question “How does your organization plan to use Generative AI for cybersecurity? (Select top 3 use cases).” Image: Cloud Security Alliance

How organizations are structuring their teams in the age of AI

Of those surveyed, 74% say their organizations plan to create new teams to oversee the safe use of AI within the next five years. How those teams are structured can vary.

Today, some organizations tackling AI deployment put it in the hands of their security team (24%). Others give primary responsibility for AI implementation to the IT department (21%), the data science/analytics team (16%), a dedicated AI/ML team (13%) or senior management/leadership (9%). In rarer cases, DevOps (8%), cross-functional teams (6%) or a team that did not fit into any of these categories (listed as “other” at 1%) took responsibility.

SEE: Hiring kit: Prompt engineer (TechRepublic Premium)

“It appears that AI in cybersecurity is not just transforming existing roles but also paving the way for new specialized positions,” wrote lead author Hillary Baron and the team of contributors.

What sort of positions? Generative AI governance is a growing sub-field, Baron told TechRepublic, as is AI-focused training and upskilling.

“In general, we’re also starting to see job postings that include more AI-specific roles like prompt engineers, AI security architects, and security engineers,” said Baron.
