GitHub launches Sponsors, lets you pay your favorite open source contributors

Source: Microsoft

GitHub today launched Sponsors, a new tool that lets you give financial support to open source developers. Developers will be able to opt into having a “Sponsor me” button on their GitHub repositories and open source projects will also be able to highlight their funding models, no matter whether that’s individual contributions to developers or using Patreon, Tidelift, Ko-fi or Open Collective.

The mission here, GitHub says, is to “expand the opportunities to participate in and build on open source.”

That’s likely to be a bit controversial among some open source developers who don’t want financial interests to influence what people work on. There may be some truth to that, as sponsorships could push open source developers to focus on projects that are likely to attract financial contributions over more esoteric projects that are interesting and challenging but unlikely to find financial backers on GitHub. We asked GitHub for a comment about this but did not receive a response by the time this article went live.

The program is only open to open source developers. During the first year of a developer’s participation, GitHub (and by extension, its corporate overlords at Microsoft) will also match up to $5,000 in contributions. GitHub won’t charge any payment processing fees during those first twelve months either (though it will do so after this period is over).

Payouts will be available in every country where GitHub itself does business. “Expanding opportunities to participate on that team is at the core of our mission, so we’re proud to make this new tool available to developers worldwide,” the company says.

It’s worth noting that this isn’t just about code and developers, but all open source contributors, including those who write documentation, provide leadership or mentor new developers, for example. As long as they have a GitHub profile, they’ll be eligible to receive support, too.

To make this work, GitHub is also launching a ‘Community Contributors’ hovercard to highlight the people who built the code your applications depend on, for example.

It will definitely be interesting to see how the community will react to Sponsors. The idea isn’t completely novel, of course, and there are projects like Beerpay that already integrate with GitHub. Still, the traditional route to get paid for open source is to find a job at a company that will let you contribute to projects, either as a full-time or part-time job.

In addition to Sponsors, GitHub is also launching a number of new security features. The company today announced that it has acquired Dependabot, for example, a tool that ensures that projects use the most up-to-date libraries. GitHub Enterprise is getting improved audit features, which are now generally available, and maintainers will now get beta access to a private space in GitHub to discuss potential security issues so that their public chats don’t tip off potential hackers. GitHub is also taking token scanning into general availability, which is meant to prevent developers from accidentally leaking their credentials from services like Alibaba Cloud, Amazon Web Services, Microsoft Azure, Google Cloud, Mailgun, Slack, Stripe and Twilio.
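As a rough illustration of what a token scanner does, here is a minimal sketch in Python. The pattern below is the widely documented shape of an AWS access key ID, not GitHub's actual scanning rules, which cover many more services and formats:

```python
import re

# Illustrative only: AWS access key IDs are documented as "AKIA" followed
# by 16 uppercase letters or digits. Real scanners use many such patterns
# plus validity checks with the issuing provider.
AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_for_tokens(text):
    """Return any substrings of `text` that look like AWS access key IDs."""
    return AWS_KEY_ID.findall(text)
```

A scanner like this would run over commit contents and flag matches before (or shortly after) they land in a public repository.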

GitHub’s enterprise edition is also getting a few updates, including more fine-grained permissions, which are now generally available. Also generally available are Enterprise accounts, while new features like internal repos and organizational insights are now in beta.


Microsoft makes a push for service mesh interoperability

Source: Microsoft

Service meshes. They are the hot new thing in the cloud-native computing world. At KubeCon, the twice-yearly festival of all things cloud native, Microsoft today announced that it is teaming up with a number of companies in this space to create a generic service mesh interface. This will make it easier for developers to adopt the concept without locking them into a specific technology.

In a world where the number of network endpoints continues to increase as developers launch new microservices, containers and other systems at a rapid clip, service meshes make the network smarter again by handling encryption, traffic management and other functions so that the actual applications don’t have to worry about them. With a number of competing service mesh technologies, though, including the likes of Istio and Linkerd, developers currently have to choose which one of these to support.

“I’m really thrilled to see that we were able to pull together a pretty broad consortium of folks from across the industry to help us drive some interoperability in the service mesh space,” Gabe Monroy, Microsoft’s lead product manager for containers and the former CTO of Deis, told me. “This is obviously hot technology — and for good reasons. The cloud-native ecosystem is driving the need for smarter networks and smarter pipes and service mesh technology provides answers.”

The partners here include Buoyant, HashiCorp, Solo.io, Red Hat, AspenMesh, Weaveworks, Docker, Rancher, Pivotal, Kinvolk and VMware. That’s a pretty broad coalition, though it notably doesn’t include cloud heavyweights like Google, the company behind Istio, and AWS.

“In a rapidly evolving ecosystem, having a set of common standards is critical to preserving the best possible end-user experience,” said Idit Levine, founder and CEO of Solo.io. “This was the vision behind SuperGloo – to create an abstraction layer for consistency across different meshes, which led us to the release of Service Mesh Hub last week. We are excited to see service mesh adoption evolve into an industry level initiative with the SMI specification.”

For the time being, the interoperability features focus on traffic policy, telemetry and traffic management, which Monroy argues are the most pressing problems right now. He stressed that this common interface still allows the different service mesh tools to innovate and that developers can always work directly with their APIs when needed. He also noted that the Service Mesh Interface (SMI), as this new specification is called, does not provide any of its own implementations of these features; it only defines a common set of APIs.
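As an example of the kind of API SMI defines, a traffic-splitting resource might look roughly like this. Field names follow the early v1alpha1 draft of the spec (exact versions and weight formats have varied over time), and the service names are hypothetical:

```yaml
# Sketch of an SMI TrafficSplit resource: a vendor-neutral way to shift
# traffic between service versions, regardless of which mesh implements it.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: checkout-rollout
spec:
  service: checkout          # root service that clients address
  backends:
  - service: checkout-v1     # most traffic stays on the current version
    weight: 900m
  - service: checkout-v2     # a small share canaries to the new version
    weight: 100m
```

The point of the specification is that a resource like this behaves the same whether Istio, Linkerd or another compliant mesh is doing the actual routing.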

Currently, the most well-known service mesh is probably Istio, which Google, IBM and Lyft launched about two years ago. SMI may just bring a bit more competition to this market since it will allow developers to bet on the overall idea of a service mesh instead of a specific implementation.

In addition to SMI, Microsoft also today announced a couple of other updates around its cloud-native and Kubernetes services. It announced the first alpha of the Helm 3 package manager, for example, as well as the 1.0 release of its Kubernetes extension for Visual Studio Code and the general availability of its AKS virtual nodes, using the open source Virtual Kubelet project.

 



Atlassian puts its Data Center products into containers

Source: Tech News – Enterprise

It’s KubeCon + CloudNativeCon this week and in the slew of announcements, one name stood out: Atlassian. The company is best known as the maker of tools that allow developers to work more efficiently, not as a cloud infrastructure provider. In this age of containerization, though, even Atlassian can bask in the glory that is Kubernetes, because the company today announced that it is launching Atlassian Software in Kubernetes (ASK), a new solution that allows enterprises to run and manage its on-premises applications, like Jira Data Center, as containers and with the help of Kubernetes.

To build this solution, Atlassian partnered with Praqma, a continuous delivery and DevOps consultancy. It’s also making ASK available as open source.

As the company admits in today’s announcement, running a Data Center application and ensuring high availability can be a lot of work using today’s methods. With ASK and by containerizing the applications, scaling and management should become easier — and downtime more avoidable.

“Availability is key with ASK. Automation keeps mission-critical applications running whatever happens,” the company explains. “If a Jira server fails, Data Center will automatically redirect traffic to healthy servers. If an application or server crashes Kubernetes automatically reconciles by bringing up a new application. There’s also zero downtime upgrades for Jira.”

ASK handles the scaling and most admin tasks, in addition to offering a monitoring solution based on the open-source Grafana and Prometheus projects.
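The failover behavior the company describes is standard Kubernetes reconciliation. A minimal, hypothetical sketch (names and image are illustrative, not Atlassian's actual charts):

```yaml
# If a pod or node fails, the Deployment controller notices the actual
# state no longer matches the declared state and starts a replacement.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jira-node
spec:
  replicas: 3                # Kubernetes keeps three pods running
  selector:
    matchLabels:
      app: jira
  template:
    metadata:
      labels:
        app: jira
    spec:
      containers:
      - name: jira
        image: example/jira-datacenter:8.1   # hypothetical image name
        readinessProbe:      # unhealthy pods stop receiving traffic
          httpGet:
            path: /status
            port: 8080
```

Declaring the desired replica count, rather than scripting recovery by hand, is what makes the "automatically reconciles" behavior quoted above possible.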

Containers are slowly becoming the distribution medium of choice for a number of vendors. As enterprises move their existing applications to containers, it makes sense for them to also expect to manage their existing on-premises applications from third-party vendors in the same systems. For some vendors, that may mean a shift away from per-server licensing to per-seat licensing, so there are business implications to this, but in general, it’s a logical move for most.



Microsoft open-sources a crucial algorithm behind its Bing Search services

Source: Microsoft

Microsoft today announced that it has open-sourced a key piece of what makes its Bing search services able to quickly return search results to its users. By making this technology open, the company hopes that developers will be able to build similar experiences for their users in other domains where users search through vast data troves, including in retail, though in this age of abundant data, chances are developers will find plenty of other enterprise and consumer use cases, too.

The piece of software the company open-sourced today is a library Microsoft developed to make better use of all the data it collected and the AI models it built for Bing.

“Only a few years ago, web search was simple. Users typed a few words and waded through pages of results,” the company notes in today’s announcement. “Today, those same users may instead snap a picture on a phone and drop it into a search box or use an intelligent assistant to ask a question without physically touching a device at all. They may also type a question and expect an actual reply, not a list of pages with likely answers.”

With the Space Partition Tree and Graph (SPTAG) algorithm that is at the core of the open-sourced Python library, Microsoft is able to search through billions of pieces of information in milliseconds.

Vector search itself isn’t a new idea, of course. What Microsoft has done, though, is apply this concept to working with deep learning models. First, the team takes a pre-trained model and encodes that data into vectors, where every vector represents a word or pixel. Using the new SPTAG library, it then generates a vector index. As queries come in, the deep learning model translates that text or image into a vector and the library finds the most related vectors in that index.
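To make the idea concrete, here is a minimal brute-force sketch of vector search in Python using cosine similarity. SPTAG itself builds tree-and-graph index structures precisely to avoid this exhaustive scan at billion-vector scale, and its actual API differs:

```python
import numpy as np

def build_index(vectors):
    """Normalize vectors so cosine similarity reduces to a dot product."""
    v = np.asarray(vectors, dtype=np.float64)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def search(index, query, k=3):
    """Return indices of the k vectors most similar to the query.

    In a real system the query vector comes from the same deep learning
    model that encoded the indexed items (text, images, etc.).
    """
    q = np.asarray(query, dtype=np.float64)
    q = q / np.linalg.norm(q)
    scores = index @ q               # cosine similarity against every item
    return np.argsort(-scores)[:k]   # highest-scoring items first
```

For example, `search(build_index(docs), query_vector, k=5)` returns the positions of the five closest items; the approximate nearest-neighbor structures in libraries like SPTAG trade a little accuracy for dramatically less work per query.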

“With Bing search, the vectorizing effort has extended to over 150 billion pieces of data indexed by the search engine to bring improvement over traditional keyword matching,” Microsoft says. “These include single words, characters, web page snippets, full queries and other media. Once a user searches, Bing can scan the indexed vectors and deliver the best match.”

The library is now available under the MIT license and provides all of the tools to build and search these distributed vector indexes. You can find more details about how to get started with using this library — as well as application samples — here.



Algorithmia raises $25M Series B for its AI automation platform

Source: Tech News – Enterprise

Algorithmia, a Seattle-based startup that offers a cloud-agnostic AI automation platform for enterprises, today announced a $25 million Series B funding round led by Norwest Venture Partners. Madrona, Gradient Ventures, Work-Bench, Osage University Partners and Rakuten Ventures also participated in this round.

While the company started out five years ago as a marketplace for algorithms, it now mostly focuses on machine learning and helping enterprises take their models into production.

“It’s actually really hard to productionize machine learning models,” Algorithmia CEO Diego Oppenheimer told me. “It’s hard to help data scientists to not deal with data infrastructure but really being able to build out their machine learning and AI muscle.”

To help them, Algorithmia essentially built out a machine learning DevOps platform that allows data scientists to train their models on the platform and with the framework of their choice, bring it to Algorithmia — a platform that has already been blessed by their IT departments — and take it into production.

“Every Fortune 500 CIO has an AI initiative but they are bogged down by the difficulty of managing and deploying ML models,” said Rama Sekhar, a partner at Norwest Venture Partners, who has now joined the company’s board. “Algorithmia is the clear leader in building the tools to manage the complete machine learning lifecycle and helping customers unlock value from their R&D investments.”

With the new funding, the company will double down on this focus by investing in product development to solve these issues, but also by building out its team, with a plan to double its headcount over the next year. A year from now, Oppenheimer told me, he hopes that Algorithmia will be a household name for data scientists and, maybe more importantly, their platform of choice for putting their models into production.

“How does Algorithmia succeed? Algorithmia succeeds when our customers are able to deploy AI and ML applications,” Oppenheimer said. “And although there is a ton of excitement around doing this, the fact is that it’s really difficult for companies to do so.”

The company previously raised a $10.5 million Series A round led by Google’s AI fund. Its customers now include the United Nations, a number of U.S. intelligence agencies and Fortune 500 companies. In total, over 90,000 engineers and data scientists are now on the platform.



Microsoft easily beats the Street as its cloud run rate passes $20B a year early

Source: Microsoft
Microsoft today announced its quarterly earnings for its first financial quarter of 2018 (yes, I know it’s early, but that’s how Microsoft’s financial quarters work). Wall Street’s crack team of financial analysts expected the company to report revenue of about $23.56 billion and earnings per share of $0.72.

Google and Cisco announce hybrid cloud partnership

Source: Tech News – Enterprise
Google and Cisco today announced a new partnership around helping their customers build more efficient hybrid cloud solutions. Unsurprisingly, given Google’s recent focus, this partnership centers around the Google-incubated Kubernetes container orchestration tool, as well as the Istio service mesh for connecting and securing microservices across clouds. “Google Cloud and Cisco…

Microsoft’s rebranded Azure Container Service shifts its focus to Kubernetes

Source: Tech News – Enterprise
As far as container orchestration goes, Kubernetes is quickly becoming the de facto standard, even as Docker Swarm and Mesos/Mesosphere DC/OS continue to find their own niches. For the longest time, Microsoft argued that one of the advantages of its managed Azure Container Service (ACS) was its support for multiple orchestration tools, but that’s shifting a bit today.

Redkix, an email-friendly team messaging platform, launches its public beta

Source: Tech News – Enterprise
 When you first look at Redkix, it looks like any other Slack clone, but while you could definitely use it just like Slack, the team offers an important twist on the standard company chat theme: it plays nice with email. After a year of private testing with about 7,000 users, the team is opening up its public beta today and launching its paid premium program in private beta.
Oudi Antebi…