Category: google cloud platform
8 common reasons why enterprises migrate to the cloud
Cloud migration team, Product Manager
[Editor’s note: This post originally appeared on the Velostrata blog. Velostrata has since come into the Google Cloud fold, and we’re pleased to now bring you their seasoned perspective on deciding to migrate to cloud. There’s more here on how Velostrata’s accelerated migration technology works. ]
At Velostrata, we’ve spent a lot of time talking about how to optimize the cloud migration process. But another question we get a lot is: What drives an enterprise’s cloud migration in the first place? For this post, we chatted with customers and dug into our own data, along with market data from organizations like RightScale, to find the most common reasons businesses move to the cloud. If you think moving to the cloud may be in your future, this can help you determine what kinds of events may result in starting a migration plan.
1. Data center contract renewals
Many enterprises have contracts with private data centers that need to be periodically renewed. When you get to renegotiation time for these contracts, considerations like cost adjustments or other limiting factors often come up. Consequently, it’s during these contract renewal periods that many businesses begin to consider moving to the cloud.
2. Mergers and acquisitions
When companies merge, it’s often a challenge to match up application landscapes and data—and doing this across multiple on-prem data centers can be all the more challenging. Lots of enterprises undergoing mergers find that moving key applications and data into the cloud makes the process easier. Using cloud also makes it easier to accommodate new geographies and employees, ultimately resulting in a smoother transition.
3. Increased capacity requirements
Whether it’s the normal progression of a growing business or the need to accommodate huge capacity jumps during seasonal shifts, your enterprise can benefit from being able to rapidly increase or decrease compute. Instead of having to pay the maximum for on-prem capacity, you can shift your capacity on-demand with cloud and pay as you go.
4. Software and hardware refresh cycles
When you manage an on-prem data center, it’s up to you to keep everything up to date. This can mean costly on-prem software licenses and hardware upgrades to handle the requirements of newly upgraded software. We’ve seen that when evaluating an upcoming refresh cycle, many enterprises find it’s significantly less expensive to decommission on-prem software and hardware and consider either a SaaS subscription or a lift-and-shift of that application into the public cloud. Which path you choose will depend greatly on the app (and available SaaS options), but either way it’s the beginning of a cloud migration project.
5. Security threats
With security threats only increasing in scale and severity, we know many enterprises that are migrating to the cloud to mitigate risk. Public cloud providers offer vast resources for protecting against threats—more than nearly any single company could invest in.
6. Compliance needs
If you’re working in industries like financial services and healthcare, ensuring data compliance is essential for business operations. Moving to the cloud means businesses are using cloud-based tools and services that are already compliant, helping remove some of the burden of compliance from enterprise IT teams.
7. Product development benefits
By taking advantage of benefits like a pay-as-you-go cost model and dynamic provisioning for product development and testing, many enterprises are finding that the cloud helps them get products to market faster. We see businesses migrating to the cloud not just to save time and money, but also to realize revenue faster.
8. End-of-life events
All good things must come to an end—software included. Increasingly, when critical data center software has an end-of-life event announcement, it can be a natural time for enterprise IT teams to look for ways to replicate those services in the cloud instead of trying to extend the life cycle on-prem. This means enterprises can decommission old licenses and hardware along with getting the other benefits of cloud.
As you can see, there are a lot of reasons why organizations decide to kick off their cloud journeys. In some cases, they’re already in the migration process when they discover even more ways to take advantage of cloud services. Understanding the types of events that frequently result in a cloud migration can help you determine the right cloud architecture and migration strategy to get your workloads to the cloud.
Learn more here about cloud migration with Velostrata.
How to connect Cloudera’s CDH to Cloud Storage
Strategic Cloud Engineer, Google Cloud Professional Services
In this post, we’ll help you get started deploying the Cloud Storage connector for your CDH clusters. The methods and steps we discuss here apply to both on-premise clusters and cloud-based clusters. Keep in mind that the Cloud Storage connector uses Java, so you’ll want to make sure that the appropriate Java 8 packages are installed on your CDH cluster, and that Java 8 is configured as your default Java Development Kit.
[Check out this post if you’re deciding how and when to use Cloud Storage over the Hadoop Distributed File System (HDFS).]
Here’s how to get started:
Distribute using the Cloudera parcel
If you’re running a large Hadoop cluster or more than one cluster, it can be hard to deploy libraries and configure Hadoop services to use those libraries without making mistakes. Fortunately, Cloudera Manager provides a way to install packages with parcels. A parcel is a binary distribution format that consists of a gzipped (compressed) tar archive file with metadata.
We recommend using the CDH parcel to install the Cloud Storage connector. There are some big advantages of using a parcel instead of manual deployment and configuration to deploy the Cloud Storage connector on your Hadoop cluster:
Self-contained distribution: All related libraries, scripts and metadata are packaged into a single parcel file. You can host it at an internal location that is accessible to the cluster or even upload it directly to the Cloudera Manager node.
No need for sudo or root access: The parcel is not deployed under /usr or any of the system directories. Cloudera Manager deploys it through its agents, which eliminates the need for sudo or root access during deployment.
Create your own Cloud Storage connector parcel
To create the parcel for your clusters, download and use this script. You can do this on any machine with access to the internet.
This script will execute the following actions:
Download Cloud Storage connector to a local drive
Package the connector Java Archive (JAR) file into a parcel
Place the parcel under the Cloudera Manager’s parcel repo directory
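To make those steps concrete, here is a rough Python sketch of what building a parcel involves: packaging a JAR into a gzipped tar with parcel.json metadata and writing a SHA-1 checksum file beside it. This is a hypothetical illustration, not the actual create_parcel.sh; the download step is omitted and the JAR contents are passed in as bytes.

```python
import hashlib
import io
import json
import os
import tarfile


def build_parcel(name, version, distro, jar_bytes, out_dir):
    """Sketch: package a connector JAR into a Cloudera-style parcel
    (a gzipped tar containing parcel.json metadata) and write a
    .parcel.sha checksum file next to it."""
    root = f"{name}-{version}"
    parcel_path = os.path.join(out_dir, f"{root}-{distro}.parcel")

    # Minimal metadata; the real script emits a fuller parcel.json.
    meta = json.dumps({"schema_version": 1,
                       "name": name,
                       "version": version}).encode()

    with tarfile.open(parcel_path, "w:gz") as tar:
        for member, data in [(f"{root}/meta/parcel.json", meta),
                             (f"{root}/lib/gcs-connector.jar", jar_bytes)]:
            info = tarfile.TarInfo(member)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

    # Cloudera Manager validates the parcel against a SHA-1 checksum file.
    with open(parcel_path, "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()
    with open(parcel_path + ".sha", "w") as f:
        f.write(digest + "\n")
    return parcel_path
```

The checksum file is what lets Cloudera Manager verify the parcel’s integrity before distributing it to cluster hosts.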
If you’re connecting an on-premise CDH cluster or cluster on a cloud provider other than Google Cloud Platform (GCP), follow the instructions from this page to create a service account and download its JSON key file.
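For reference, once the key file is in place, the connector is typically wired up through Hadoop properties along these lines. This is a sketch: the keyfile path is a placeholder, and on CDH you would normally set these values through Cloudera Manager rather than editing core-site.xml by hand.

```xml
<!-- Sketch of core-site.xml properties for the Cloud Storage connector.
     The keyfile path below is a placeholder. -->
<property>
  <name>fs.gs.impl</name>
  <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.gs.impl</name>
  <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS</value>
</property>
<property>
  <name>google.cloud.auth.service.account.enable</name>
  <value>true</value>
</property>
<property>
  <name>google.cloud.auth.service.account.json.keyfile</name>
  <value>/path/to/service-account-key.json</value>
</property>
```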
Create the Cloud Storage parcel
Next, run the script to create the parcel and checksum files and let Cloudera Manager find them, with the following steps:
1. Place the service account JSON key file and the create_parcel.sh script under the same directory. Make sure that there are no other files under this directory.
2. Run the script, which will look something like this:
$ ./create_parcel.sh -f <parcel_name> -v <version> -o <os_distro_suffix>
- parcel_name: the name of the parcel, as a single string with no spaces or special characters (e.g., gcsconnector)
- version: the version of the parcel in the format x.x.x (e.g., 1.0.0)
- os_distro_suffix: like RPM and deb packages, parcels follow a distribution naming convention. A full list of possible distribution suffixes can be found here.
- -d: an optional flag that deploys the parcel to the Cloudera Manager parcel repo folder. If not provided, the parcel file is created in the same directory where the script ran.
3. The script’s logs can be found in /var/log/build_script.log.
Distribute and activate the parcel
Once you’ve created the Cloud Storage parcel, Cloudera Manager has to recognize the parcel and install it on the cluster.
The script you ran generated a .parcel file and a .parcel.sha checksum file. Put these two files on the Cloudera Manager node under directory /opt/cloudera/parcel-repo. If you already host Cloudera parcels somewhere, you can just place these files there and add an entry in the manifest.json file.
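If you do host parcels at a shared repo, the manifest.json entry might look roughly like this. This is a sketch: the parcel name is a placeholder, and the hash field holds the contents of the .parcel.sha file.

```json
{
  "lastUpdated": 1542000000000,
  "parcels": [
    {
      "parcelName": "gcsconnector-1.0.0-el7.parcel",
      "components": [
        { "name": "gcsconnector", "version": "1.0.0" }
      ],
      "hash": "<contents of gcsconnector-1.0.0-el7.parcel.sha>"
    }
  ]
}
```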
On the Cloudera Manager interface, go to Hosts -> Parcels and click Check for New Parcels to refresh the list and load any new parcels. The Cloud Storage connector parcel should then show up in the list.
Knative: bringing serverless to Kubernetes everywhere
Director of Product Management, Google Cloud; Group Product Manager
Knative, the open-source framework that provides serverless building blocks for Kubernetes, is on a roll, and the GKE serverless add-on, the first commercial Knative offering, which we announced this summer, is enjoying strong uptake with our customers. Today, we are announcing that we’ve updated the GKE serverless add-on to support Knative 0.2. In addition, today at KubeCon, Red Hat, IBM, and SAP announced their own commercial offerings based on Knative. We are excited about this growing ecosystem of products based on Knative.
Knative allows developers to easily leverage the power of Kubernetes, the de-facto cross-cloud container orchestrator. Although Kubernetes provides a rich toolkit for empowering the application operator, it offers less built-in convenience for application developers. Knative solves this by integrating automated container build, fast serving, autoscaling and eventing capabilities on top of Kubernetes so you get the benefits of serverless, all on the extensible Kubernetes platform. In addition, Knative applications are fully portable, enabling hybrid applications that can run both on-prem and in the public cloud.
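As a sketch of what deploying a serverless application on Knative looks like, here is a hypothetical Service manifest using the v1alpha1 API of the Knative 0.2 era; the service name and container image are placeholders.

```yaml
# Hypothetical Knative Service manifest (serving.knative.dev/v1alpha1,
# the API current as of Knative 0.2). Name and image are placeholders.
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/my-project/helloworld  # placeholder image
            env:
              - name: TARGET
                value: "Knative"
```

From this single manifest, Knative builds out the route, configuration, and autoscaled revisions for you, which is what lets developers stay out of the lower-level Kubernetes objects.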
Knative plus Kubernetes together form a general-purpose platform with the unique ability to run serverless, stateful, batch, and machine learning (ML) workloads alongside one another. That means developers can use existing Kubernetes capabilities for monitoring, logging, authentication, identity, security and more, across all their modern applications. This consistency saves time and effort, reduces errors and fragmentation, and improves your time to market. As a user you get the ease of use of Knative where you want it, with the power of Kubernetes when you need it.
In the four months since we announced Knative, an active and diverse community of companies has contributed to the project. Google Kubernetes Engine (GKE) users have been actively using the GKE serverless add-on since its launch in July and have provided valuable feedback leading to many of the improvements in Knative 0.2.
In addition to Google, multiple partners are now delivering commercial offerings based on Knative. Red Hat announced that you can now start trying Knative as part of its OpenShift container application platform. IBM has committed to supporting Knative on its IBM Cloud Kubernetes Service. SAP is using Knative as part of its SAP Cloud Platform and open-source Kyma project.
A consistent experience, with the flexibility to run where you want, resonates with many enterprises and startups. We are pleased that Red Hat, IBM, and SAP are embracing Knative as a powerful open industry-wide approach to serverless. Here’s what Knative brings to each of the new commercial offerings:
“The serverless paradigm has already demonstrated that it can accelerate developer productivity and significantly optimize compute resources utilization. However, serverless offerings have also historically come with deep vendor lock-in. Red Hat believes that Knative, with its availability on Red Hat OpenShift, and collaboration within the open source community behind the project, will enable enterprises to benefit from the advantages of serverless while also minimizing lock-in, both from a perspective of application portability, as well as that of day-2 operations management.” – Reza Shafii, VP of product, platform services, at Red Hat
“IBM believes open standards are key to success as enterprises are shifting to the era of hybrid multi-cloud, where portability and no vendor lock-in are crucial. We think Knative is a key technology that enables the community to unify containers, apps, and functions deployment on Kubernetes.” – Jason McGee, IBM Fellow, VP and CTO, Cloud Platform.
“SAP’s focus has always been centered around simplifying and facilitating end-to-end business processes. SAP Cloud Platform Extension Factory is addressing the need to integrate and extend business solutions by providing a central point of control, allowing developers to react on business events and orchestrate complex workflows across all connected systems. Under the hood, we are leveraging cloud-native technologies such as Knative, Kubernetes, Istio and Kyma. Knative tremendously simplifies the overall architecture of SAP Cloud Platform Extension Factory and we will continue to collaborate and actively contribute to the Knative codebase together with Google and other industry leaders.” – Michael Wintergerst, SVP, SAP Cloud Platform
We’re excited to deliver enterprise-grade Knative functionality as part of Google Kubernetes Engine, and to see its momentum in the industry. To get started, take part in the GKE serverless add-on alpha. To learn more about the Knative ecosystem, check out our post on the Google Open Source blog.