The Impact of Financial Regulations on Cloud: Top 8 Questions Answered
Which regulations apply to our cloud programme? Do we in-source or out-source solutions? What due diligence needs to be performed, and how much is enough? Chaos engineering? Exit plans? Evidence?!
The fog surrounding exactly how regulations affect your cloud strategy is thick, and we’ve had many conversations with our UK customers about how to avoid costly, over-complex solutions in response.
Contino recently hosted a breakfast briefing at the Andaz Hotel in London to address these very questions.
Our panel included Greg Hawkins, CTO and Technology Advisor for Starling Bank; Brendan Foxen, Contino’s CTO; and Federico (Fed) Fregosi, Principal Consultant at Contino. In this blog post, I’ll summarise our expert panel’s responses to the most burning cloud questions.
1) Which of the FCA/EBA regulations apply?
The panel discussion opened with a question on which of the FCA/EBA guidelines need to be followed.
Greg acknowledged the confusion between the two bodies and advised the people in the room, who represented banks and other FS organisations, to pay closer attention to the EBA guidelines.
This is because the FCA guidelines have recently been updated so that they no longer apply to banks, building societies and some investment firms. Brendan echoed this advice, clarifying that while the EBA guidelines are the relevant ones for banks and building societies, the FCA guidelines apply more to insurers and other non-bank financial firms.
Nevertheless, as Greg pointed out, ultimately you have to review it all.
2) Which resources should be kept in-house?
In the context of the new regulations, the panel covered which resources to keep in-house to monitor the cloud provider, a topic that created a healthy debate among the trio.
Fed noted that both sets of regulations (FCA and EBA) mandate maintaining or obtaining cloud skills within the bank (i.e. in-sourcing). This is interesting because it means that, while you can outsource your systems to cloud providers, you must at the same time retain the skills in-house to maintain supervision of your cloud provider. For example, if you’re creating an exit plan that involves moving to a different cloud provider, you’ll need in-house skills to work with that other provider as well.
Both sets of guidelines are very specific that you need in-house skills not only to monitor the provider but also to work with it. Fed recommended checking your SLAs to determine whether the provider is delivering the level of service that was agreed. Greg also stressed the importance of proper vendor management of your cloud provider: monthly meetings, along with monthly status reporting on the quality of your service.
3) What due diligence needs to be performed before outsourcing to a cloud provider?
The panel moved on to a discussion over what needs to happen from a due diligence perspective when outsourcing. This covered everything from SLAs and the ‘Shared Responsibility Model’ to contract addendums and the vendor management relationship.
Brendan began by encouraging people to look at it from a commercial point of view.
If you have an application or workload that you’re looking to put into the cloud, you will have various service levels defined for that particular stack. So when you’re looking at the cloud provider and deciding which services to use, you need to ask how they align with your service levels, and how to architect the workload so that it stays aligned and can tolerate failure. In other words, at the very start of the journey, before you even start putting workloads into the cloud, you need to set the standards you will adhere to. Brendan pointed to the Shared Responsibility Model here as a key asset in understanding where your responsibility lies.
Greg went on to raise the issue of contract addendums. As he mentioned, both sets of guidelines outline a number of things that you need to make sure are in your contract with the cloud provider, and he warned that unless you specifically sign an addendum covering them, they won’t be. For example, an addendum aligned with the EBA guidelines will provide for your ability to send auditors to data centres to actually audit AWS.
Fed added that while both sets of guidelines are very clear on a number of clauses that you need to put in your contract with the cloud provider, these regulations apply to outsourcing in general. He went on to explain that the cloud providers are very mature here, so they offer pre-packaged addendums to their standard contracts that are customised to comply with FCA regulations. However, as Fed pointed out, if you start outsourcing IT functions in a different way, e.g. to a Software-as-a-Service provider, the new provider will need to be vetted to make sure you have the right clauses in your contract with them. While cloud providers are very mature on this, most SaaS tools are not.
Fed also echoed Greg’s point about cloud providers, highlighting the importance of establishing a good relationship with them, to ensure you’re getting the right services.
4) How do you introduce chaos engineering into the organisation and move incrementally towards that posture?
Chaos engineering is a pretty bold step to take. Greg and Fed offered their advice on how best to approach it.
Greg was clear on his view that “you’ve just got to do it!”.
As an industry, he described how we used to treat our production systems like delicate crystal, never touching them for fear they might fall over or shatter. Today, he explained, we actually beat up our production systems as much as we can, because we know that’s how they grow solid, and this has largely been enabled by the flexibility and transparency of the cloud.
But how do we go about doing this?
Greg commented that there’s a difference between taking out a server here and there and taking out a whole region. There are different types of things you can do. It doesn’t have to be as extreme as walking into a data centre and pulling out all of the wires! He described how one of the biggest benefits of the cloud stems from the fact that the base compute unit is actually not very resilient at all. Because of this, we’ve grown used to designing hyper-resilient architectures.
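To make “taking out a server here and there” concrete, here is a minimal chaos sketch in Python with boto3. The chaos-eligible tag, the region and the assumption that an auto-scaling group replaces the victim are all illustrative; a real programme would add scheduling, alerting and blast-radius limits.

```python
import random

import boto3  # pip install boto3

# Assumption: instances opted in to experiments carry a chaos-eligible=true tag.
ec2 = boto3.client("ec2", region_name="eu-west-2")
resp = ec2.describe_instances(
    Filters=[
        {"Name": "tag:chaos-eligible", "Values": ["true"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)
instances = [
    i["InstanceId"] for r in resp["Reservations"] for i in r["Instances"]
]

if instances:
    victim = random.choice(instances)
    print(f"Terminating {victim}; the auto-scaling group should replace it.")
    ec2.terminate_instances(InstanceIds=[victim])
else:
    print("No chaos-eligible instances found; nothing to do.")
```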
Fed compared the major cloud providers’ approaches to resilience. With Google, when you provision your VPC, or virtual private cloud, you are effectively deploying one across the globe. You can then restrict certain regions and so on but, by default, when you sign up to Google, you can network across the globe. With AWS, by contrast, you’ve got a regional network to work with; multiple regions can then be connected if needed. So, with one click, your operational resilience becomes global and you can provide service anywhere in the world. At this point, as Greg was saying, you can start building highly complex, very resilient architectures that you could never have built on-premises, and you can apply this across multiple services and regions.
Fed described how we now have standardised ways of deploying services across multiple regions in minutes. This is extraordinary given that only five years ago, doing the same thing in a data centre would have taken six months, a great deal of coordination, leased lines and a number of engineers. Now, with cloud, you can apply a standard template that rolls out changes across multiple regions in a regulated and standardised fashion.
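As an illustration of that kind of standardised multi-region rollout, the hedged sketch below applies one CloudFormation template to several regions with boto3. The stack name, region list and template file are placeholders, not a prescription.

```python
import boto3  # pip install boto3

REGIONS = ["eu-west-1", "eu-west-2", "us-east-1"]  # placeholder regions

# Hypothetical template file describing the service's infrastructure.
with open("service-template.yaml") as f:
    template_body = f.read()

# Applying the identical template region by region keeps every deployment
# standardised and auditable rather than hand-built.
for region in REGIONS:
    cloudformation = boto3.client("cloudformation", region_name=region)
    cloudformation.create_stack(
        StackName="payments-service",  # placeholder name
        TemplateBody=template_body,
    )
    print(f"Stack creation started in {region}")
```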
5) What practices and controls can you put in place at a cloud provider to ensure that data can’t move transiently across the world to regions where it’s not supposed to go?
With data sovereignty and the concerns about moving data in financial services in mind, the panel discussed the various options available in the cloud to restrict the movement of data.
Brendan outlined the options available on AWS, based on his experience working with this cloud provider. For example, you can lock particular regions to an account on AWS. He also advised looking at your account structure policy: if you have accounts whose data can’t reside outside the EU, you can put those workloads into that bundle and lock it down at the policy level. As he pointed out, there is an element of trust in the provider here as well.
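As a sketch of what that policy-level lockdown can look like, the snippet below creates an AWS Service Control Policy that denies any request outside two EU regions, using the documented aws:RequestedRegion condition key. The policy name, allowed regions and OU id are assumptions, and a production SCP would normally also exempt global services such as IAM.

```python
import json

import boto3  # pip install boto3; requires AWS Organizations permissions

# Deny every action requested outside the approved EU regions.
# NOTE: a real SCP would usually carve out global services (IAM, Route 53...).
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["eu-west-1", "eu-west-2"]
                }
            },
        }
    ],
}

orgs = boto3.client("organizations")
policy = orgs.create_policy(
    Name="eu-only-regions",  # placeholder name
    Description="Block API calls outside approved EU regions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
# Attach to the OU holding EU-restricted workloads (OU id is a placeholder).
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-exampleid",
)
```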
While AWS offers prescribed controls to block certain regions, other cloud providers have different strategies. Fed described how, in the case of GCP, you can specify service controls so that, even for managed services such as BigQuery, you can lock your data not just into one region but within your virtual private cloud. In other words, you can not only block or allow specific regions, you can specify as a general policy that only things within a region can access data within that region.
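Preventive controls like these are configured on the GCP side, but it’s also easy to add a detective check of your own. Here’s a hedged sketch, assuming the google-cloud-bigquery client and an example allow-list of locations, that flags any BigQuery dataset sitting somewhere it shouldn’t.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

# Assumed allow-list: the EU multi-region and London.
ALLOWED_LOCATIONS = {"EU", "europe-west2"}

client = bigquery.Client()  # uses default project and credentials

# Walk every dataset in the project and flag anything outside the allow-list.
for item in client.list_datasets():
    dataset = client.get_dataset(item.reference)
    if dataset.location not in ALLOWED_LOCATIONS:
        print(f"NON-COMPLIANT: {dataset.dataset_id} is in {dataset.location}")
```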
6) What evidence should you provide the regulator when demonstrating your exit strategy?
The panel was asked whether, when you’re speaking to the regulator about your exit strategy, you need to provide anything in the form of documentation or a practical report as proof.
As Brendan put it, it’s not a case of “show me your policies and test plan” but rather “show me how you exercise it”.
Fed agreed with this point, explaining that most of the time it’s a months-long process and comes down to personal relationships: you build trust with the regulator over time as you build your exit plan, and you should be able to have a discussion about what else they would like to see in it. He also noted that a previous version of the EBA guidelines included a template for an exit plan, but this has since been removed.
Greg suggested that it will be different for everyone. In his experience, he has not seen regulators looking at test reports; however, regulators do place a lot of weight on auditors and audit reports. These auditors are there to check that you’re doing what you say you’re doing. At the end of the day, though, you are responsible for demonstrating your exit plan and, as Greg said, “it has to be coherent, consistent and compelling”.
7) What are some of the ‘dos and don'ts’ from an application portability perspective?
The panel was asked whether they would ever put a broker in front of all their cloud providers for portability.
Brendan was clear that cloud brokers are not the answer.
He recognised that the portability challenge is always going to be around data and the sheer volume of it. To address these data concerns, he stressed the importance of lifecycle policies in limiting the size of your data. These allow you to move out records that aren’t needed for the day-to-day operation of the application but that you might still need to keep under your data retention policies. Brendan also pointed to open source database technologies, which let you run the same data without having to refactor, reform or transform it in any way when you move it across.
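As an example of the kind of lifecycle policy Brendan described, the sketch below uses boto3 to move older records to archival storage and expire them at the end of an assumed seven-year retention period; the bucket name, prefix and day counts are all illustrative.

```python
import boto3  # pip install boto3

s3 = boto3.client("s3")

# Records move to Glacier after 90 days and are deleted after ~7 years.
# Bucket, prefix and both periods are illustrative assumptions, not advice.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-records-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-records",
                "Filter": {"Prefix": "records/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```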
Fed went on to say that multi-cloud is more mature than it was two years ago. There are now standardised patterns for cloud-native tools like Kubernetes, which you can apply across your cloud providers. If you can package your workloads as Docker containers, in most cases you can deploy them in a standard way.
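To illustrate that standard deployment path, here is a hedged sketch using the official Kubernetes Python client: one Deployment manifest, defined once, applied to two clusters simply by switching kubeconfig contexts. The context names, image and namespace are assumptions.

```python
from kubernetes import client, config  # pip install kubernetes

# One manifest, many clouds: the same Deployment can target EKS or GKE.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "payments-api"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "payments-api"}},
        "template": {
            "metadata": {"labels": {"app": "payments-api"}},
            "spec": {
                "containers": [
                    {
                        "name": "payments-api",
                        # Placeholder image in a private registry.
                        "image": "registry.example.com/payments-api:1.4.2",
                        "ports": [{"containerPort": 8080}],
                    }
                ]
            },
        },
    },
}

# Assumed kubeconfig context names for an EKS and a GKE cluster.
for context in ["aws-eks-prod", "gcp-gke-dr"]:
    config.load_kube_config(context=context)
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment
    )
    print(f"Deployment applied via context {context}")
```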
Greg expanded on Fed’s point, stating that Kubernetes is a pathway to portability for financial institutions. However, each CSP implements Kubernetes slightly differently, meaning there will always be some refactoring of images, applications or networking boundaries should you wish to move an application at the flick of a switch.
The other issue to consider is the use of public endpoints, which might be exposed to third parties yet serve a critical part of the business service line. If an application is moved, these endpoints would change, and modifying them would ultimately depend on the third party updating their configuration, which could have a lead time of weeks. The same can be said for networking links and connections, which can take many months to be installed or re-routed. Advance planning and preparation are key.
8) Any final thoughts?
Greg: It’s very easy to grumble about regulation and we’ve all been there. But actually most of the time, regulators are just trying to get you to do what you should be doing anyway and what you really want to be doing anyway. So sometimes their view of the world doesn’t quite match with your view of the world and sometimes there’s an artificial difference that can be smoothed over or finessed. And sometimes you have to remember that we had 2008. What if in 2020 we have a massive AWS outage?
Fed: On my side, multi-cloud is a natural strategy for sure. There are a number of high-level, cloud-native services that can be leveraged to simplify the implementation of multi-cloud in large financial institutions, and these can be used to move your workloads if and when needed.
Brendan: Regarding vendor lock-in: what we tend to find is that you’re locked in more by your own internal processes, your engineering capability and the way your teams work than by the provider; the cloud provider is actually really helpful when it comes to an exit. So it’s about making a start. Don’t worry about the speed you’re going.