DFS Letter: Beware of Cyber Risks From A.I.

Abstract: The New York State Department of Financial Services (DFS) has cautioned the entities it regulates to be alert to cybersecurity risks resulting from using artificial intelligence (AI) technology.
Body:


The New York State Department of Financial Services (DFS) has cautioned the entities it regulates to be alert to cybersecurity risks resulting from using artificial intelligence (AI) technology. The department also described steps for reducing those risks.

In an October 16, 2024, industry letter, DFS responded to questions about the cyber risks AI poses and what to do about them. The letter did not add new requirements to those in the department’s cybersecurity regulation. Instead, it explained how entities should use the regulation’s existing provisions to assess and address AI risks.

THE RISKS: WHY YOU SHOULD BE AFRAID

Among the risks the department highlighted were:

Social Engineering: “Social engineering” is a cyber attack in which the attacker uses human interaction to obtain an organization’s information or to compromise its information or computer systems. For example, a hacker may convincingly impersonate a manager within the organization and persuade an employee to transfer funds to an illegitimate account.

According to the letter, AI has made these attacks more effective. It said, “Threat actors are increasingly using AI to create realistic and interactive” so-called “deepfakes” (audio, video, and text communications that appear to be from an internal manager but are not). Hackers deliver these communications by email, phone, text message, videoconferencing, and online postings. “For example,” DFS said, “in February 2024, a Hong Kong finance worker was tricked into transferring $25 million to threat actors after they set up a video call in which every other person participating, including the Chief Financial Officer, was a video deepfake.”

Enhanced Cyber Attacks: AI can scan and analyze large volumes of data much faster than a human can, which enables hackers to find and exploit security holes much more quickly. Once inside a network, they can use AI to figure out how best to deploy malware and steal information. They can also use it to quickly develop new versions of malware and ransomware that elude security controls.

Lastly, AI tools enable hackers who lack coding chops to develop and launch their own attacks. “This lower barrier to entry for threat actors, in conjunction with AI-enabled deployment speed,” the letter said, “has the potential to increase the number and severity of cyberattacks, especially in the financial services sector, where the maintenance of highly sensitive NPI [nonpublic information] creates a particularly attractive and lucrative target for threat actors.”

Entities’ Use of AI Tools: Products that use AI rely on collecting and processing large amounts of data, some of which will be NPI. One summary of the New York cybersecurity regulation is, “What you collect, you have to protect.” Entities using AI products may therefore have to protect much more information than they otherwise would. That information could include biometric data such as facial characteristics and fingerprints, which multi-factor authentication (MFA) systems use to verify a network user’s identity. Hackers who steal it can impersonate a trusted user and log into the network.

Third Parties: Third-party service providers and vendors may either provide data to the entity or have access to the entity’s NPI. If they suffer cyber incidents, the entity’s NPI and systems may be vulnerable to attack.

THE CONTROLS: WHAT YOU CAN DO ABOUT THE RISKS

The letter listed several measures the regulation already requires that entities can use to reduce these risks.

  • Include the potential for deepfakes and other AI threats when performing the annual risk assessment.
  • Design the risk assessment to address:
    • The entity’s use of AI.
    • AI technologies its third-party service providers and vendors use.
    • Any vulnerabilities that might result from AI technologies and that could threaten the computer network and NPI.
  • Update the entity’s cybersecurity policies and procedures to reflect the threats uncovered during the assessment.
  • Larger entities that do not qualify for a limited exemption must create and implement plans for investigating and mitigating cyber incidents. They must also have plans for incident response, business continuity, and disaster recovery. Limited-exempt entities might want to give these subjects some thought even though the regulation does not require them to create formal plans; planning ahead means less flailing if an incident occurs.
  • Create a workplace culture that includes cybersecurity awareness.
  • When performing due diligence on third-party service providers, consider their use of AI, the threats it could pose to them, and how cyber incidents they experience could affect your entity.
  • Implement strong controls on access to the entity’s network, starting with MFA; the regulation requires all entities to implement MFA by November 1, 2024. Also conduct annual reviews of which network users have access to NPI and whether they still need it.
  • Provide annual cybersecurity awareness training for all employees, including training on the risks of social engineering attacks. The regulation requires all entities to begin this training by November 1, 2024.
  • Larger entities must have formal system monitoring tools in place. Limited-exempt agencies should at least be alert to signs of unusual activity and watch for employees using the system for purposes the agency has not approved.
  • Place sensible limits on the amount of NPI the agency collects and retains; these will vary with the agency’s business needs. What you collect, you must protect, so do not retain more data than you are prepared to protect.

AI technologies are here to stay, and their use will only grow with time. If your agency has not yet registered with technology consulting firm Catalyit, we urge you to do so now; last spring it presented a series of webinars explaining how using AI can benefit your business. These technologies offer plenty of benefits, but as with any other part of your operation, they carry risks. DFS published this letter to make you aware of those risks and to suggest ways to control them while you reap the benefits.

Category: Ask Tim; Cyber
Published: 10/31/2024 3:04 PM
Author: Tim Dodge