In December 2022, the Swiss Financial Market Supervisory Authority (FINMA) published "Circular 2023/1 Operational Risks and Resilience – Banks". This new circular provides banks operating in Switzerland with a complete revision of the previous guidance found in Circular 2008/21 Operational Risks – Banks. The circular adopts the revised principles for managing operational risk and the new principles on operational resilience published by the Basel Committee on Banking Supervision. The new requirements for cybersecurity are extensive and enforceable as of January 1, 2024.
Operational risk refers to “the risk of financial loss resulting from inadequate or failed internal processes or systems, inappropriate actions taken by people or mistakes made by them, or from external events.” Circular 2023/1 defines six primary principles for Operational Risk Management as well as principles for “Ensuring operational resilience” and “Continuation of critical services during the resolution and recovery of systemically important banks.”
This blog will focus on the first four operational risk management principles, in particular as they relate to critical data.
What is Critical Data?
Maintaining the confidentiality, integrity, and availability of data is a requirement in banking. The Circular defines critical data as “…data that, in view of the institution’s size, complexity, structure, risk profile and business model, are of such crucial significance that they require increased security measures. These are data that are crucial for the successful and sustainable provision of the institution’s services or for regulatory purposes.”
Principle A makes clear that risk management, including management of “ICT risks, the cyber risks, the risks relating to critical data,” is the responsibility of the executive management and board of directors of institutions. It requires biannual approval by executive management and annual approval by the board of “the risk tolerance for operational risk,” and that the board “regularly approves strategies for dealing with ICT, cyber risks, critical data and BCM, and monitors their application.”
From a DLP standpoint, it is important that teams be able to easily demonstrate the controls in place to protect critical data. Reveal audit trails provide organizations with evidence that controls are in place and working.
The Circular’s information and communication technology (ICT) risk management principle covers strategy and governance, change management, operations, and incident management. This includes visibility into critical data as well as “dependencies within the institution as well as interfaces to significant external service providers.”
When protecting critical data, it is important to remember that some of it must be shared with third parties. Traditional DLP solutions that rely on granular rules struggle with this use case: as new partners are added, new rules are required to dictate the actions allowed for each user with each class of data.
Reveal takes a different approach, using machine learning on the endpoint to create behavioral baselines for individual users, both inside and outside the organization. This surfaces anomalies for each individual and isolates risk to each user and device.
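To make the idea concrete, here is a minimal sketch of per-user baselining using a simple rolling z-score model. The class name, window size, and threshold are hypothetical illustrations of the general technique, not Reveal's actual algorithms.

```python
from collections import defaultdict
from statistics import mean, stdev

class UserBaseline:
    """Illustrative per-user baseline: flags activity that deviates
    sharply from that user's own history (hypothetical sketch)."""

    def __init__(self, window=30, threshold=3.0):
        self.window = window        # number of recent observations kept
        self.threshold = threshold  # z-score above which we flag
        self.history = defaultdict(list)

    def observe(self, user, value):
        """Record a new observation (e.g. MB uploaded today) and
        return True if it is anomalous for this user."""
        past = self.history[user]
        anomalous = False
        if len(past) >= 5:          # need some history before judging
            mu, sigma = mean(past), stdev(past)
            if sigma > 0 and (value - mu) / sigma > self.threshold:
                anomalous = True
        past.append(value)
        if len(past) > self.window:
            past.pop(0)
        return anomalous
```

Because each user is compared only against their own history, a contractor who routinely transfers large files is judged differently from an employee who never does, which is the point of individual baselining.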
Cyber risk management includes the “protection of the inventoried ICT assets and the electronic critical data from cyber attacks by implementing appropriate protective measures, particularly with regard to the confidentiality, integrity and availability” as well as “implementing appropriate processes for taking rapid containment and remediation measures.”
Insider Risk and DLP solutions like Reveal can help meet this principle by building individual baselines that surface individual user anomalies. When anomalous activity is detected, controls can be enforced that block actions, isolate devices from the network, lock out user sessions, take screenshots (static/in motion), display messages, block uploads, and kill processes.
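As an illustration of how detected anomalies might map to graduated containment controls of the kind listed above, here is a simple playbook sketch. The severity levels and action names are hypothetical and do not represent Reveal's API.

```python
# Hypothetical mapping from anomaly severity to ordered containment
# actions, illustrating graduated response (not Reveal's actual API).
RESPONSE_PLAYBOOK = {
    "low":    ["display_message"],
    "medium": ["display_message", "block_upload", "capture_screenshot"],
    "high":   ["block_upload", "kill_process", "lock_session",
               "isolate_device"],
}

def respond(severity):
    """Return the ordered list of containment actions for a severity;
    unknown severities get no automated action."""
    return RESPONSE_PLAYBOOK.get(severity, [])
```

Driving response from a declarative table like this keeps the escalation path auditable, which supports the Circular's requirement for "rapid containment and remediation measures."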
The Circular requires that “the executive board shall appoint a unit to establish the framework for ensuring the confidentiality, integrity and availability of critical data and to monitor its observance.” This includes ensuring that data is “adequately protected from being accessed and used by unauthorised persons during operations and during the development, change and migration of ICT.” It further requires that personnel be provided regular training in protecting critical data. Monitoring, control, and protection must extend to service providers that can process or view critical data.
Given the short timeframe until enforcement of Circular 2023/1, organizations must consider their approach to identifying and protecting critical data. Legacy approaches to DLP require teams to identify and classify all the sensitive data in an organization before data protection can begin. This delays protection for months (or years) while the solution scans network shares and endpoints. It is often further extended as new types of data are identified, requiring the classification exercise to be repeated.
Instead of spending months attempting to identify and classify data, delaying protection all the while, Reveal classifies data in real time as it is created and used in modern enterprises. Contextual inspection identifies sensitive data in both structured and unstructured content without predefined policies. This policy-free approach to data protection allows organizations to realize value in weeks, not months.
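A simplified illustration of the contextual inspection idea: a card-like number on its own is ambiguous, but the same number near payment-related words is likely sensitive. The pattern, context words, and labels below are hypothetical, not Reveal's classification engine.

```python
import re

# A 16-digit card-like number, allowing spaces or dashes as separators.
CARD_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")
# Hypothetical context words that make a match payment-related.
CONTEXT_WORDS = {"card", "visa", "mastercard", "payment", "pan"}

def classify(text):
    """Return 'critical' when a card-like number appears near
    payment-related context, else 'unclassified' (illustrative only)."""
    match = CARD_RE.search(text)
    if not match:
        return "unclassified"
    # Inspect a small window of text around the match for context.
    window = text[max(0, match.start() - 40): match.end() + 40].lower()
    if any(word in window for word in CONTEXT_WORDS):
        return "critical"
    return "unclassified"
```

The design choice is that the pattern match alone never triggers classification; surrounding context must corroborate it, which reduces false positives on things like order IDs or serial numbers.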
FINMA Circular 2023/1 requires organizations to implement controls over critical data in a short timeframe. Organizations attempting to meet these requirements using legacy DLP solutions will face multiple challenges.
We can help. We built Reveal for today’s work environment, technology stack, and threat space. It is cloud native, with smart agents for fast deployment, immediate visibility into risk, and rapid time to value. Reveal leverages machine learning on each endpoint to identify and protect critical data as it is created and used, on and off the corporate network. It begins baselining activity upon installation and uses multiple behavioral analytics algorithms that monitor user, entity, and network behavior to define typical and anomalous behavior.