Dan's Dev Corner - August 2018
*This is a guest blog, written by SeeUnity CTO, Dan Hunsinger.*
Cloud Development Challenges – Part 1
As you would expect, more and more of our development efforts are being directed at cloud-based services, whether document management systems like NetDocuments or iManage, file sharing services like Box or HighQ, or records-based systems like Salesforce. Even though it's usually easier to get started integrating cloud services, they bring some extra challenges as well.
What we have typically tried to do is officially support new versions of an integrated system within a couple months of release. That gives us time to regression test risk areas and make any adjustments that might be required. At that point, we can coordinate with customers to do a timed update so there’s no operational disruption.
In the cloud world it's a little different. Releases are "pushed" to you, usually whether you want them or not. On one hand, it's great having somebody else update your system all the time. On the flip side, a couple of things are needed to keep integrations working.
- It's extremely helpful to have active development partnerships with cloud vendors. These partnerships provide advance notice of platform pushes, details of what will change in each push, and, in many cases, access to a sandbox environment for testing integrations before a push goes live. At SeeUnity, part of our job is to develop and maintain these partnerships so we know when change is coming.
- Generally, the way we interact with cloud-based services is through a REST API. One of the nice things about REST is that it's fairly easy to version the interface: functionality and extensions can be added to the API while still maintaining backward compatibility. As we build out our connectors with REST, we build them to use the lowest version of each method that meets our needs, and optionally use later versions only if the customer's instance supports them.
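As a rough illustration of that "lowest version first" approach, here's a minimal sketch in Python. The version scheme (`v1`, `v2`, ...) and the idea of a per-tenant list of supported versions are assumptions for the example, not any particular vendor's API:

```python
# Ordered list of API versions our hypothetical connector knows about,
# from oldest to newest.
SUPPORTED_VERSIONS = ["v1", "v2", "v3"]

def pick_version(required: str, tenant_supports: list) -> str:
    """Return the lowest version at or above `required` that the
    customer's instance actually exposes."""
    start = SUPPORTED_VERSIONS.index(required)
    # Walk upward from the required version until we find a match.
    for version in SUPPORTED_VERSIONS[start:]:
        if version in tenant_supports:
            return version
    raise RuntimeError(
        f"tenant supports none of the required versions ({required}+)"
    )

def build_url(base: str, required: str, tenant_supports: list, path: str) -> str:
    """Assemble a versioned endpoint URL for one API call."""
    return f"{base}/{pick_version(required, tenant_supports)}/{path}"
```

For example, a method that only needs `v1` stays on `v1` even when the tenant also offers `v2`, while a method requiring `v2` falls forward to `v3` if that's what the instance exposes.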
By now, nearly all cloud services use some form of rate limiting. It's basically a way to slow down API requests based on factors such as the level of service a customer has purchased and current service utilization. Vendors have to do this so that the infrastructure powering the service doesn't get overwhelmed.
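A common way to implement this kind of limiting is a token bucket: each request spends a token, and tokens refill at a fixed rate. The sketch below is purely illustrative of the mechanism; the rates and capacities are made up, not any vendor's actual limits:

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: requests are allowed while tokens
    remain, and tokens refill at `rate_per_sec` up to `capacity`."""

    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to the time elapsed.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: reject or delay this request
```

The "level of service" knob from the paragraph above maps naturally onto the refill rate and bucket capacity.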
People typically would like a migration to the cloud (or to anywhere for that matter) to be done as fast as possible. This is at odds with rate limiting.
As with platform version issues, this is where a good relationship with the service vendor is critical. Under the right circumstances, such as working with a trusted development partner, vendors are usually willing to turn off their rate limiting "features" for a brief period of time.
Even though most cloud services implement some kind of rate limiting, the way it surfaces in the API can vary considerably. Some vendors simply slow down requests. Others return additional HTTP headers with each response to let the caller know how much of the quota has been used within a "window" and how much remains before requests start failing. And some just return errors once the maximum number of requests has been exceeded.
One of the things the SeeUnity connector architecture does is seamlessly handle all of these scenarios so that our solutions, like Echo or Velocity, can operate normally without "errors" caused by rate limiting. In addition, if there's a question about a slower-than-expected process, we can always run our CIS performance monitor to see exactly what's being limited and by how much.
In the next part, I’ll discuss other challenges with cloud service integrations: authentication, security, and database access.