The challenges of truly embracing cloud native

Source: Tech News – Enterprise

There is a tendency at any conference to get lost in the message. Spending several days immersed in any subject tends to do that. The purpose of such gatherings is, after all, to sell the company or technologies being featured.

Against the beautiful backdrop of the city of Barcelona last week, we got the full cloud native message at KubeCon and CloudNativeCon. The Cloud Native Computing Foundation (CNCF), which houses Kubernetes and related cloud native projects, had certainly honed the message, along with the community that came to celebrate its fifth anniversary. The large crowds that wandered the long hallways of the Fira Gran Via conference center proved it was getting through, at least to a specific group.

Cloud native computing involves a combination of software containerization along with Kubernetes and a growing set of adjacent technologies to manage and understand those containers. It also involves the idea of breaking down applications into discrete parts known as microservices, which in turn leads to a continuous delivery model, where developers can create and deliver software more quickly and efficiently. At the center of all this is the notion of writing code once and being able to deliver it on any public cloud, or even on-prem. These approaches were front and center last week.
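To make that concrete, here is a minimal sketch of what declaring one such microservice to Kubernetes can look like, using the official Kubernetes Python client. The service name, container image and port are hypothetical, and a production deployment would add health checks and resource settings:

```python
from kubernetes import client, config

# A toy three-replica Deployment for a hypothetical "orders" microservice.
config.load_kube_config()  # assumes a kubeconfig pointing at a cluster

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="orders"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # run three identical copies of the container
        selector=client.V1LabelSelector(match_labels={"app": "orders"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "orders"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="orders",
                        image="example/orders:1.0",  # hypothetical image
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Because the same declaration works against any conformant cluster, public or on-prem, this is also the mechanical basis of the write-once, run-anywhere pitch.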

Five years in, many developers have embraced these concepts, but cloud native projects have reached a size and scale where they need to move beyond the early adopters and true believers and make their way deep into the enterprise. It turns out that it might be a bit harder for larger companies with hardened systems to make wholesale changes in the way they develop applications, just as it is difficult for large organizations to take on any type of substantive change.

Putting up stop signs

IBM-Maersk blockchain shipping consortium expands to include other major shipping companies

Source: Tech News – Enterprise

Last year IBM and Danish shipping conglomerate Maersk announced the limited availability of a blockchain-based shipping tool called TradeLens. Today, the two partners announced that a couple of other major shippers have come on board.

The partners announced that CMA CGM and MSC Mediterranean Shipping Company have joined TradeLens. Together with Maersk, the consortium now encompasses almost half of the world’s cargo container shipments, according to data supplied by IBM.

That’s important, because shipping has traditionally been a paper-intensive and largely manual process. It’s still challenging to track where a container might be in the world and which government agency might be holding it up. When it comes to auditing, it can take weeks of intensive effort to gather the paperwork generated throughout a journey from factory or field to market. Suffice it to say, cargo touches a lot of hands along the way.

It’s been clear for years that shipping could benefit from digitization, but to this point, previous attempts like electronic data interchange (EDI) have not been terribly successful. The hope is that by using blockchain to solve the problem, all the participants can easily follow the flow of shipments along the chain and trust that the immutable record has not been altered at any point.

As Marie Wieck, general manager for IBM Blockchain, told TechCrunch at the time of last year’s announcement, the blockchain brings some key benefits to the shipping workflow:

The blockchain provides a couple of obvious advantages over previous methods. For starters, [Wieck said] it’s safer because data is distributed, making it much more secure with digital encryption built in. The greatest advantage though is the visibility it provides. Every participant can check any aspect of the flow in real time, or an auditor or other authority can easily track the entire process from start to finish by clicking on a block in the blockchain instead of requesting data from each entity manually.
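The tamper evidence Wieck describes comes from chaining cryptographic hashes, so that rewriting any past record invalidates everything after it. The toy Python sketch below illustrates only that general mechanism; it is not TradeLens’s implementation, and the container numbers and statuses are invented:

```python
import hashlib
import json
import time

def block_hash(body: dict) -> str:
    # Hash the block's contents, including the previous block's hash,
    # so changing any past record changes every later hash.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def add_event(chain: list, event: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev": prev, "ts": time.time()}
    chain.append({**body, "hash": block_hash(body)})

def verify(chain: list) -> bool:
    prev = "0" * 64
    for block in chain:
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["prev"] != prev or block["hash"] != block_hash(body):
            return False
        prev = block["hash"]
    return True

chain: list = []
add_event(chain, {"container": "MSKU1234567", "status": "loaded", "port": "Rotterdam"})
add_event(chain, {"container": "MSKU1234567", "status": "customs cleared"})
assert verify(chain)

chain[0]["event"]["port"] = "Antwerp"  # tamper with history...
assert not verify(chain)               # ...and verification catches it
```

In a consortium ledger, copies of the chain are replicated across participants, which is why no single shipper, port or customs agency can quietly rewrite it.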

The TradeLens partners certainly see the benefits of digitizing the process. “We believe that TradeLens, with its commitment to open standards and open governance, is a key platform to help usher in this digital transformation,” Rajesh Krishnamurthy, executive vice president for IT & Transformations at CMA CGM Group, said in a statement.

Today’s announcement is a big step toward gaining more adoption for this approach. While there are many companies working on blockchain supply chain products, the more shipping companies and adjacent entities like customs agencies that join TradeLens, the more effective it’s going to be.


Serverless and containers: Two great technologies that work better together

Source: Tech News – Enterprise

Cloud native models using containerized software in a continuous delivery approach could benefit from serverless computing, where the cloud vendor provisions the exact amount of resources required to run a workload on the fly. While the major cloud vendors have recognized this and are already creating products to abstract away the infrastructure, it may not work for every situation in spite of the benefits.

Cloud native, put simply, involves using containerized applications and Kubernetes to deliver software in small packages called microservices. This enables developers to build and deliver software faster and more efficiently in a continuous delivery model. In the cloud native world, you should be able to develop code once and run it anywhere, on-prem or on any public cloud, or at least that is the ideal.

Serverless is actually a bit of a misnomer. There are servers underlying the model, but instead of dedicated virtual machines, the cloud vendor delivers exactly the right number of resources to run a particular workload for the right amount of time and no more.

Nothing is perfect

Such an arrangement would seem perfectly suited to a continuous delivery model, and vendors have recognized the beauty of such an approach. But as one engineer pointed out, there is never a free lunch in processes this complex, and it won’t be a perfect solution for every situation.

Arpana Sinha, director of product management at Google, says the Kubernetes community has really embraced the serverless idea, but she argues that it is limited in its current implementation, delivered in the form of functions with products like AWS Lambda, Google Cloud Functions and Azure Functions.
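A “function” in this sense is a small piece of code that the platform runs, scales and bills per invocation. For a concrete reference point, a minimal AWS Lambda handler in Python looks like the sketch below; the greeting logic is just a placeholder:

```python
# handler.py — the developer supplies only this function; the platform
# provisions the compute, scales with traffic and bills per invocation.
def handler(event, context):
    # "event" carries the trigger payload; "context" carries runtime metadata.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

That simplicity is the appeal, but in Sinha’s view it is also the ceiling.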

“Actually, I think the functions concept is a limited concept. It is unfortunate that that is the only thing that people associate with serverless,” she said.

She says that Google has tried to be more expansive in its definition: “It’s basically a concept for developers where you are able to seamlessly go from writing code to deployment and the infrastructure takes care of all of the rest, making sure your code is deployed in the appropriate way across the appropriate, most resilient parts of the infrastructure, scaling it as your app needs additional resources, scaling it down as your traffic goes down, and charging you only for what you’re consuming,” she explained.

But Matt Whittington, senior engineer on the Kubernetes team at Atlassian, says that while it sounds good in theory, in practice fully automated infrastructure could be unrealistic in some instances. “Serverless could be promising for certain workloads because it really allows developers to focus on the code, but it’s not a perfect solution. There is still some underlying tuning.”

He says you may not be able to leave it completely up to the vendor unless there is a way to specify the requirements for each container, such as a minimum container load time, a certain container kill time, or the need to deliver it in a specific location. He says in reality it won’t be fully automated, at least while developers fiddle with the settings to make sure they are getting the resources they need without over-provisioning and paying for more than they need.
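In Kubernetes terms, some of those per-container knobs already exist. The sketch below, using the official Python client, is a hedged illustration: the service, image and numbers are hypothetical, and “kill time” maps only loosely to the termination grace period:

```python
from kubernetes import client

# Per-container requirements of the kind Whittington describes: a floor
# and ceiling on compute, how long the container gets to shut down
# cleanly, and a constraint on where it runs.
container = client.V1Container(
    name="checkout",               # hypothetical service
    image="example/checkout:1.4",  # hypothetical image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},  # scheduling floor
        limits={"cpu": "500m", "memory": "512Mi"},    # enforcement ceiling
    ),
)
pod_spec = client.V1PodSpec(
    containers=[container],
    termination_grace_period_seconds=30,  # a rough analog of "kill time"
    node_selector={"topology.kubernetes.io/region": "eu-west-1"},  # placement
)
```

The tension Whittington describes is that every value here is a judgment call, which is exactly the fiddling a fully automated platform is supposed to remove.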

Vendors bringing solutions

The vendors are putting in their two cents, trying to create tools that bring this ideal together. For instance, Google announced a service called Google Cloud Run at Google Cloud Next last month. It’s based on the open-source Knative project, and in essence combines the benefits of serverless with those of containers for developers. Other similar services include AWS Fargate and Azure Container Instances, both of which attempt to bring together these two technologies in a similar package.

In fact, Gabe Monroy, partner program manager at Microsoft, says Azure Container Instances is designed to solve this problem without being dependent on a functions-driven programming approach. “What Azure Container Instances does is it allows you to run containers directly on the Azure compute fabric, no virtual machines, hypervisor isolated, pay-per-second billing. We call it serverless containers,” he said.

While serverless and containers might seem like a good fit, as Monroy points out, there isn’t a one-size-fits-all approach to cloud native technologies. Some people will continue to use a function-driven serverless approach like AWS Lambda or Azure Functions, and others will shift to containers and look for other ways to bring these technologies together. Whatever happens, as developer needs change, it is clear the open source community and vendors will respond with tools to help them. Bringing serverless and containers together is just one example of that.



HPE is buying Cray for $1.3 billion

Source: Tech News – Enterprise

HPE announced it was buying Cray for $1.3 billion, giving it access to the company’s high performance computing portfolio, and perhaps a foothold into quantum computing in the future.

The purchase price was $35 a share, a $5.19 premium over yesterday’s close of $29.81 a share. Cray was founded in the 1970s and for a time represented the cutting edge of supercomputing in the United States, but times have changed, and as the market has shifted, a deal like this makes sense.

Ray Wang, founder and principal analyst at Constellation Research, says this is about consolidation at the high end of the market. “This is a smart acquisition for HPE. Cray has been losing money for some time but had a great portfolio of IP and patents that is key for the quantum era,” he told TechCrunch.

While HPE’s president and CEO Antonio Neri didn’t see it in those terms, he did see an opportunity in combining the two organizations. “By combining our world-class teams and technology, we will have the opportunity to drive the next generation of high performance computing and play an important part in advancing the way people live and work,” he said in a statement.

Cray CEO and president Peter Ungaro agreed. “We believe that the combination of Cray and HPE creates an industry leader in the fast-growing High-Performance Computing and AI markets and creates a number of opportunities that neither company would likely be able to capture on their own,” he wrote in a blog post announcing the deal.

While it’s not clear how this will work over time, this type of consolidation usually involves some job loss on the operations side of the house as the two companies become one. It is also unclear how this will affect Cray’s customers as it moves to become part of HPE, but HPE has plans to create a high performance computing product family using its new assets.

HPE was formed when HP split into two companies in 2015. HP Inc. took the PC and printer business, while HPE took the enterprise side.

The deal is subject to the typical regulatory oversight, but if all goes well, it is expected to close in HPE’s fiscal Q1 2020.



Health[at]Scale lands $16M Series A to bring machine learning to healthcare

Source: Tech News – Enterprise

Health[at]Scale, a startup whose founders have both medical and engineering expertise, wants to bring machine learning to bear on healthcare treatment options to produce outcomes with better results and less aftercare. Today the company announced a $16 million Series A. Optum, which is part of UnitedHealth Group, was the sole investor.

Today, when people look at treatment options, they may look at a particular surgeon or hospital, or simply at what the insurance company will cover, but they typically lack the data to make truly informed decisions. This is true across every part of the healthcare system, particularly in the U.S. The company believes that using machine learning, it can produce better results.

“We are a machine learning shop, and we focus on what I would describe as precision delivery. So in other words, we look at this question of how do we match patients to the right treatments, by the right providers, at the right time,” Zeeshan Syed, Health[at]Scale’s CEO, told TechCrunch.

The founders see the current system as fundamentally flawed, and while they see their customers as insurance companies, hospital systems and self-insured employers, they say the tools they are putting into the system should help everyone in the loop get a better outcome.

The idea is to make treatment decisions more data-driven. While they aren’t sharing their data sources, they say they have information ranging from patients with a given condition, to doctors who treat that condition, to facilities where the treatment happens. By looking at a patient’s individual treatment needs and medical history, they believe they can do a better job of matching that person to the best doctor and hospital for the job. They say this will result in the fewest post-operative treatment requirements, whether that involves trips to the emergency room or time in a skilled nursing facility, all of which would add significant cost.
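Health[at]Scale has not published its models, so the following is purely illustrative of the general idea rather than the company’s method: framed naively, matching could be treated as a nearest-neighbor search over outcome-related features. Every feature and number below is invented:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Rows are providers; columns are invented features, e.g. volume of
# similar cases handled and observed complication rate for a condition.
# A real system would use far richer, properly normalized clinical data.
providers = np.array([
    [120, 0.04],  # provider A
    [40,  0.09],  # provider B
    [300, 0.02],  # provider C
])

# A patient's needs projected into the same feature space (invented).
patient = np.array([[260, 0.03]])

index = NearestNeighbors(n_neighbors=1).fit(providers)
_, best = index.kneighbors(patient)
print(f"Best-matching provider row: {best[0][0]}")  # -> 2 (provider C)
```

The pitch is that doing this kind of matching well, at clinical scale, is what trims the post-operative costs described above.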

If you’re thinking this is strictly about cost savings for these large institutions, Mohammed Saeed, the company’s chief medical officer, who has an MD from Harvard and a PhD in electrical engineering from MIT, insists that isn’t the case. “From our perspective, it’s a win-win situation since we provide the best recommendations that have the patient interest at heart, but from a payer or provider perspective, when you have lower complication rates you have better outcomes and you lower your total cost of care long term,” he said.

The company says the solution is being used by large hospital systems and insurer customers, although it couldn’t share any names. The founders also said the company has studied the outcomes after using its software and that the machine learning models have produced better results, although it couldn’t provide the data to back that up at this time.

The company was founded in 2015 and currently has 11 employees. It plans to use today’s funding to build out sales and marketing to bring the solution to a wider customer set.



SugarCRM moves into marketing automation with Salesfusion acquisition

Source: Tech News – Enterprise

SugarCRM announced today that it has acquired Atlanta-based Salesfusion to help build out the marketing automation side of its business. The deal closed last Friday. The companies did not share the purchase price.

CEO Craig Charlton, who joined the company in February, says he recognized that marketing automation was an area of the platform that badly needed enhancing. Faced with a build or buy decision, he decided it would be faster to buy a company and began looking for an acquisition target.

“We spent the last three or four months doing a fairly intensive market scan and dealing with a number of the possible opportunities, and we decided that Salesfusion was head and shoulders above the rest for a variety of reasons,” he told TechCrunch.

Among those was the fact that the company was still growing; some of the targets Sugar looked at were actually shrinking in size. The real attraction for him was Salesfusion’s customer focus. “They have a very differentiated on-boarding process, which I hadn’t seen before. I think that’s one of the reasons why they get such a quick time to value for the customers is because they literally hold their hand for 12 weeks until they graduate from the on-boarding process. And when they graduate, they’re actually live with the product,” he said.

Brent Leary, principal at CRM Essentials, who is also based in Atlanta, thinks this firm could help Sugar by giving it a marketing automation story all its own. “Salesfusion gives Sugar a marketing automation piece they can fully bring into their fold and not have to be at the whims of marketing automation vendors, who end up not being the best fit as partners, whether it’s due to acquisition or instability of leadership at chosen partners,” Leary told TechCrunch.

It has been a period of transition for SugarCRM, which has had a hard time keeping up with giants in the industry, particularly Salesforce. The company dipped into the private equity market last summer and took a substantial investment from Accel-KKR, which several reports pegged as a nine-figure deal and which PitchBook characterized as a leveraged buyout.

As part of that investment, the company replaced long-time CEO Larry Augustin with Charlton and began creating a plan to spend some of that money. In March, it bought email integration firm Collabspot, and Charlton says they aren’t finished yet with possibly two or three more acquisitions on target for this quarter alone.

“We’re looking to make some waves and grow very aggressively, and to drive home some really compelling differentiation that we have, and that will be building over the next 12 to 24 months,” he said.

Salesfusion, which was founded in 2007 and raised $32 million, will continue to operate out of its offices in Atlanta. The company’s 50 employees are now part of Sugar.



VMware acquires Bitnami to deliver packaged applications anywhere

Source: Tech News – Enterprise

VMware announced today that it’s acquiring Bitnami, the packaged application company that was a member of the Y Combinator Winter 2013 class. The companies didn’t share the purchase price.

With Bitnami, the company can now deliver more than 130 popular software packages in a variety of formats, such as Docker containers or virtual machines, an approach that should be attractive for VMware as it transforms into more of a cloud services company.

“Upon close, Bitnami will enable our customers to easily deploy application packages on any cloud — public or hybrid — and in the most optimal format — virtual machine (VM), containers and Kubernetes helm charts. Further, Bitnami will be able to augment our existing efforts to deliver a curated marketplace to VMware customers that offers a rich set of applications and development environments in addition to infrastructure software,” the company wrote in a blog post announcing the deal.

Per usual, Bitnami’s founders see the exit through the prism of being able to build out the platform faster with the help of a much larger company. “Joining forces with VMware means that we will be able to both double-down on the breadth and depth of our current offering and bring Bitnami to even more clouds as well as accelerating our push into the enterprise,” the founders wrote in a blog post on the company website.

The company has raised a modest $1.1 million since its founding in 2011 and says that it has been profitable since the early days, when it took that funding. In the blog post, the company states that nothing will change for customers from their perspective.

“In a way, nothing is changing. We will continue to develop and maintain our application catalog across all the platforms we support and even expand to additional ones. Additionally, if you are a company using Bitnami in production, a lot of new opportunities just opened up.”

Time will tell whether that is the case, but it is likely that Bitnami will be able to expand its offerings as part of a larger organization like VMware.

VMware is a member of the Dell federation of products and came over as part of the massive $67 billion EMC deal in 2016. The company operates independently, trades as a separate company on the stock market and makes its own acquisitions.



Solo.io wants to bring order to service meshes with centralized management hub

Source: Tech News – Enterprise

As containers and microservices have proliferated, a new kind of tool called the service mesh has developed to help manage and understand interactions between services. While Kubernetes has emerged as the clear container orchestration tool of choice, there is much less certainty in the service mesh market. Solo.io announced a new open source tool called Service Mesh Hub today, designed to help companies manage multiple service meshes in a single interface.

It is early days for the service mesh concept, but there are already multiple offerings, including Istio, Linkerd (pronounced Linker-Dee) and Convoy. While the market sorts itself out, it requires a new set of tools, a management layer, so that developers and operations can monitor and understand what’s happening inside the various service meshes they are running.

Idit Levine, founder and CEO at Solo, says she formed the company because she saw an opportunity to develop a set of tooling for a nascent market. Since its founding in 2017, the company has developed several open source tools to fill that service mesh tool vacuum.

Levine says that she recognized that companies would be using multiple service meshes for multiple situations and that not every company would have the technical capabilities to manage this. That is where the idea for the Service Mesh Hub was born.

It’s a centralized place for companies to add the different service mesh tools they are using, understand the interactions happening within the mesh and add extensions to each one from a kind of extension app store. Solo wants to make adding these tools a simple matter of pointing and clicking. While it obviously still requires a certain level of knowledge about how these tools work, it removes some of the complexity around managing them.

Solo.io Service Mesh Hub. Screenshot: Solo.io

“The reason we created this is because we believe service mesh is something big, and we want people to use it, and we feel it’s hard to adopt right now. We believe by creating that kind of framework or platform, it will make it easier for people to actually use it,” Levine told TechCrunch.

The vision is that eventually companies will be able to add extensions to the store for free, or even at some point for a fee, and it is through these paid extensions that the company will be able to make money. She recognized that some companies will be creating extensions for internal use only, and in those cases, they can add them to the hub and mark them as private and only that company can see them.

For every abstraction, it seems, there is a new set of problems to solve. The service mesh is a response to the problem of managing multiple services. It solves three key issues, according to Levine: it routes traffic between microservices, provides visibility into them through the mesh’s logs and metrics, and supplies security by managing which services can talk to each other.
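As a taste of the routing piece, the hedged sketch below submits a traffic-splitting rule to Istio, one of the meshes named above, through the Kubernetes Python client; the “reviews” service and the 90/10 split are hypothetical:

```python
from kubernetes import client, config

# Route 90% of traffic to v1 of a hypothetical "reviews" service and
# 10% to v2, using Istio's VirtualService custom resource.
config.load_kube_config()

virtual_service = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "VirtualService",
    "metadata": {"name": "reviews"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{
            "route": [
                {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
            ]
        }],
    },
}
client.CustomObjectsApi().create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1alpha3",
    namespace="default",
    plural="virtualservices",
    body=virtual_service,
)
```

Multiply that by several meshes, each with its own resource types, and the case for a single management hub becomes clearer.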

Levine’s company is a response to the issues that have developed around understanding and managing the service meshes themselves. She says she doesn’t worry about a big company coming in and undermining her mission, because those companies are too focused on their own tools to create an uber-management layer like this (though that doesn’t mean Solo wouldn’t be an attractive acquisition target).

So far, the company has taken in over $13 million in funding, according to Crunchbase data.



Egnyte brings native G Suite file support to its platform

Source: Tech News – Enterprise

Egnyte announced today that customers can now store G Suite files inside its storage, security and governance platform. This builds on the support the company previously had for Office 365 documents.

Egnyte CEO and co-founder Vineet Jain says that while many enterprise customers have seen the value of a collaborative office suite like G Suite, they might have stayed away because of compliance concerns (whether that was warranted or not).

He said that Google has been working on an API for some time that allows companies like Egnyte to decouple G Suite documents from Google Drive. Previously, if you wanted to use G Suite, you had no choice but to store the documents in Google Drive.

Jain acknowledges that the actual integration is pretty much the same as his competitors’ because Google determined the features. In fact, Box and Dropbox announced similar capabilities over the last year, but he believes his company has some differentiating features on its platform.

“I honestly would be hard pressed to tell you this is different than what Box or Dropbox is doing, but when you look at the overall context of what we’re doing…I think our advanced governance features are a game changer,” Jain told TechCrunch.

What that means is that G Suite customers can open a document and get the same editing experience they would get inside Google Drive, while getting all the compliance capabilities built into Egnyte via Egnyte Protect. What’s more, they can store the files wherever they like, whether that’s in Egnyte itself, an on-premises file store or any cloud storage option that Egnyte supports.

G Suite documents stored on the Egnyte platform.

Long before it was commonplace, Egnyte tried to differentiate itself from a crowded market by being a hybrid play where files can live on-premises or in the cloud. It’s a common way of looking at cloud strategy now, but it wasn’t always the case.

Jain has always emphasized a disciplined approach to growing the company, and it has grown to 15,000 customers and 600 employees over 11 years in business. He won’t share exact revenue, but says the company is generating “multi-millions in revenue” each month.

He has been talking about an IPO for some time, and that remains a goal for the company. In a recent letter to employees that Egnyte shared with TechCrunch, Jain put it this way. “Our leadership team, including our board members, have always looked forward to an IPO as an interim milestone — and that has not changed. However, we now believe this company has the ability to not only be a unicorn but to be a multi-billion dollar company in the long-term. This is a mindset that we all need to have moving forward,” he wrote.

Egnyte was founded in 2007 and has raised over $137 million, according to Crunchbase data.

