Splunk acquires cloud monitoring service SignalFx for $1.05B

Source: Tech News – Enterprise

Splunk, the publicly traded data processing and analytics company, today announced that it has acquired SignalFx for a total price of about $1.05 billion. Approximately 60 percent of this will be in cash and 40 percent in Splunk common stock. The companies expect the acquisition to close in the second half of Splunk's fiscal year 2020.

SignalFx, which emerged from stealth in 2015, provides real-time cloud monitoring solutions, predictive analytics and more. Upon close, Splunk argues, this acquisition will allow it to become a leader “in observability and APM for organizations at every stage of their cloud journey, from cloud-native apps to homegrown on-premises applications.”

Indeed, the acquisition will likely make Splunk a far stronger player in the cloud space as it expands its support for cloud-native applications and the modern infrastructures and architectures those rely on.


Ahead of the acquisition, SignalFx had raised a total of $178.5 million, according to Crunchbase, including a recent Series E round. Investors include General Catalyst, Tiger Global Management, Andreessen Horowitz and CRV. Its customers include the likes of AthenaHealth, Change.org, Kayak, NBCUniversal, and Yelp.

“Data fuels the modern business, and the acquisition of SignalFx squarely puts Splunk in position as a leader in monitoring and observability at massive scale,” said Doug Merritt, President and CEO, Splunk, in today’s announcement. “SignalFx will support our continued commitment to giving customers one platform that can monitor the entire enterprise application lifecycle. We are also incredibly impressed by the SignalFx team and leadership, whose expertise and professionalism are a strong addition to the Splunk family.”



Join The New Stack for Pancake & Podcast with Q&A at TC Sessions: Enterprise

Source: Tech News – Enterprise

Popular enterprise news and research site The New Stack is coming to TechCrunch Sessions: Enterprise on September 5 for a special Pancake & Podcast session with live Q&A, featuring, you guessed it, delicious pancakes and awesome panelists!

Here’s the “short stack” of what’s going to happen:

  • Pancake buffet opens at 7:45 am on Thursday, September 5 at TC Sessions: Enterprise
  • At 8:15 am the panel discussion/podcast kicks off; the topic: “The People and Technology You Need to Build a Modern Enterprise”
  • After the discussion, the moderators will host a live audience Q&A session with the panelists
  • Once the Q&A is done, attendees will get the chance to win some amazing raffle prizes

You can only take part in this fun pancake-breakfast podcast if you register for a ticket to TC Sessions: Enterprise. Use the code TNS30 to get 30% off the conference registration price!

Here’s the longer version of what’s going to happen:

At 8:15 a.m., The New Stack founder and publisher Alex Williams takes the stage as the moderator and host of the panel discussion. Our topic: “The People and Technology You Need to Build a Modern Enterprise.” We’ll start with intros of our panelists and then dive into the topic with Sid Sijbrandij, founder and CEO at GitLab, and Frederic Lardinois, enterprise reporter and editor at TechCrunch, as our initial panelists. More panelists to come!

Then it’s time for questions. Questions we could see getting asked (hint, hint): Who’s on your team? What makes a great technical team for the enterprise startup? What are the observations a journalist has about how the enterprise is changing? What about when the time comes for AI? Who will I need on my team?

And just before 9 a.m., we’ll pick a ticket out of the hat and announce our raffle winner. It’s the perfect way to start the day.

On a side note, the pancake-breakfast discussion will be published as a podcast on The New Stack Analysts.

But there’s only one way to get a prize and network with fellow attendees, and that’s by registering for TC Sessions: Enterprise and joining us for a short stack with The New Stack. Tickets are now $349, but you can save 30% with code TNS30.




Ally raises $8M Series A for its OKR solution

Source: Tech News – Enterprise

OKRs, or Objectives and Key Results, are a popular planning method in Silicon Valley. Like most of those methods that make you fill in some form once every quarter, I’m pretty sure employees find them rather annoying and a waste of their time. Ally wants to change that and make the process more useful. The company today announced that it has raised an $8 million Series A round led by Accel Partners, with participation from Vulcan Capital, Founders Co-op and Lee Fixel. The company, which launched in 2018, previously raised a $3 million seed round.

Ally founder and CEO Vetri Vellore tells me that he learned his management lessons and the value of OKR at his last startup, Chronus. After years of managing large teams at enterprises like Microsoft, he found himself challenged to manage a small team at a startup. “I went and looked for new models of running a business execution. And OKRs were one of those things I stumbled upon. And it worked phenomenally well for us,” Vellore said. That’s where the idea of Ally was born, which Vellore pursued after selling his last startup.

Most companies that adopt this methodology, though, tend to work with spreadsheets and Google Docs. Over time, that simply doesn’t work, especially as companies get larger. Ally, then, is meant to replace these other tools. The service is currently in use at “hundreds” of companies in more than 70 countries, Vellore tells me.

One of its early adopters was Remitly. “We began by using shared documents to align around OKRs at Remitly. When it came time to roll out OKRs to everyone in the company, Ally was by far the best tool we evaluated. OKRs deployed using Ally have helped our teams align around the right goals and have ultimately driven growth,” said Josh Hug, COO of Remitly.

[Screenshot: Desktop team OKRs]

Vellore tells me that he has seen teams go from annual or bi-annual OKRs to more frequently updated goals, too, which is something that’s easier to do when you have a more accessible tool for it. Nobody wants to use yet another tool, though, so Ally features deep integrations into Slack, with other integrations in the works (something Ally will use this new funding for).

Since adopting OKRs isn’t always easy for companies that previously used other methodologies (or nothing at all), Ally also offers training and consulting services with online and on-site coaching.

Pricing for Ally starts at $7 per month per user for a basic plan, but the company also offers a flat $29 per month plan for teams with up to 10 users, as well as an enterprise plan, which includes some more advanced features and single sign-on integrations.



How Facebook does IT

Source: Tech News – Enterprise

If you have ever worked at any sizable company, the word “IT” probably doesn’t conjure up many warm feelings. If you’re working for an old, traditional enterprise company, you probably don’t expect anything else. If you’re working for a modern tech company, though, chances are your expectations are a bit higher. And once you’re at the scale of a company like Facebook, a lot of the third-party services that work for smaller companies simply don’t work anymore.

To discuss how Facebook thinks about its IT strategy and why it now builds most of its IT tools in-house, I sat down with the company’s CIO, Atish Banerjea, at its Menlo Park headquarters.

Before joining Facebook in 2016 to head up what it now calls its “Enterprise Engineering” organization, Banerjea was the CIO or CTO at companies like NBCUniversal, Dex One and Pearson.

“If you think about Facebook 10 years ago, we were very much a traditional IT shop at that point,” he told me. “We were responsible for just core IT services, responsible for compliance and responsible for change management. But basically, if you think about the trajectory of the company, we were probably about 2,000 employees around the end of 2010. But at the end of last year, we were close to 37,000 employees.”

Traditionally, IT organizations rely on third-party tools and software, but as Facebook grew to this current size, many third-party solutions simply weren’t able to scale with it. At that point, the team decided to take matters into its own hands and go from being a traditional IT organization to one that could build tools in-house. Today, the company is pretty much self-sufficient when it comes to running its IT operations, but getting to this point took a while.

“We had to pretty much reinvent ourselves into a true engineering product organization and went to a full ‘build’ mindset,” said Banerjea. That’s not something every organization is obviously able to do, but, as Banerjea joked, one of the reasons why this works at Facebook “is because we can — we have that benefit of the talent pool that is here at Facebook.”


The company then took this talent and basically replicated the kind of team it would have on the customer side to build out its IT tools, with engineers, designers, product managers, content strategists and researchers. “We also made the decision at that point that we will hold the same bar and we will hold the same standards so that the products we create internally will be as world-class as the products we’re rolling out externally.”

One of the tools that wasn’t up to Facebook’s scaling challenges was video conferencing. The company was using a third-party tool for that, but that just wasn’t working anymore. In 2018, Facebook was consuming about 20 million conference minutes per month. In 2019, the company is now at 40 million per month.

Besides the obvious scaling challenge, Facebook is also doing this to be able to offer its employees custom software that fits their workflows. It’s one thing to adapt existing third-party tools, after all, and another to build custom tools to support a company’s business processes.

Banerjea told me that creating this new structure was a relatively easy sell inside the company. Every transformation comes with its own challenges, though. For Facebook’s Enterprise Engineering team, that included having to recruit new skill sets into the organization. The first few months of this process were painful, Banerjea admitted, as the company had to up-level the skills of many existing employees and shed a significant number of contractors. “There are certain areas where we really felt that we had to have Facebook DNA in order to make sure that we were actually building things the right way,” he explained.

Facebook’s structure creates an additional challenge for the team. When you’re joining Facebook as a new employee, you have plenty of teams to choose from, after all, and if you have the choice of working on Instagram or WhatsApp or the core Facebook app — all of which touch millions of people — working on internal tools with fewer than 40,000 users doesn’t sound all that exciting.

“When young kids who come straight from college and they come into Facebook, they don’t know any better. So they think this is how the world is,” Banerjea said. “But when we have experienced people come in who have worked at other companies, the first thing I hear is ‘oh my goodness, we’ve never seen internal tools of this caliber before.’ The way we recruit, the way we do performance management, the way we do learning and development — every facet of how that employee works has been touched in terms of their life cycle here.”


Facebook first started building these internal tools around 2012, though it wasn’t until Banerjea joined in 2016 that it rebranded the organization and set up today’s structure. He also noted that some of those original tools were good, but not up to the caliber employees would expect from the company.

“The really big change that we went through was up-leveling our building skills to really become at the same caliber as if we were to build those products for an external customer. We want to have the same experience for people internally.”

The company went as far as replacing and rebuilding the commercial Enterprise Resource Planning (ERP) system it had been using for years. If there’s one thing that big companies rely on, it’s their ERP systems, given they often handle everything from finance and HR to supply chain management and manufacturing. That’s basically what all of their backend tools rely on (and what companies like SAP, Oracle and others charge a lot of money for). “In that 2016/2017 time frame, we realized that that was not a very good strategy,” Banerjea said. In Facebook’s case, the old ERP handled the inventory management for its data centers, among many other things. When that old system went down, the company couldn’t ship parts to its data centers.

“So what we started doing was we started peeling off all the business logic from our backend ERP and we started rewriting it ourselves on our own platform,” he explained. “Today, for our ERP, the backend is just the database, but all the business logic, all of the functionality is actually all custom written by us on our own platform. So we’ve completely rewritten our ERP, so to speak.”

In practice, all of this means that ideally, Facebook’s employees face far less friction when they join the company, for example, or when they need to replace a broken laptop, get a new phone to test features or simply order a new screen for their desk.

One classic use case is onboarding, where new employees get their company laptop, mobile phones and access to all of their systems, for example. At Facebook, that’s also the start of a six-week bootcamp that gets new engineers up to speed with how things work at Facebook. Back in 2016, when new classes tended to still have less than 200 new employees, that was still mostly a manual task. Today, with far more incoming employees, the Enterprise Engineering team has automated most of that — and that includes managing the supply chain that ensures the laptops and phones for these new employees are actually available.

But the team also built the backend that powers the company’s more traditional IT help desks, where employees can walk up and get their issues fixed (and passwords reset).


To talk more about how Facebook handles the logistics of that, I sat down with Koshambi Shah, who heads up the company’s Enterprise Supply Chain organization, which pretty much handles every piece of hardware and software the company delivers and deploys to its employees around the world (and that global nature of the company brings its own challenges and additional complexity). The team, which has fewer than 30 people, is made up of employees with experience in manufacturing, retail and consumer supply chains.

Typically, enterprises offer their employees a minimal set of choices when it comes to the laptops and phones they issue, and the operating systems that can run on them tend to be limited. Facebook’s engineers have to be able to test new features on a wide range of devices and operating systems. There are, after all, still users on the iPhone 4s or BlackBerry that the company wants to support. To do this, Shah’s organization actually makes thousands of SKUs available to employees and is able to deliver 98% of them within three days or less. It’s not just sending a laptop via FedEx, though. “We do the budgeting, the financial planning, the forecasting, the supply/demand balancing,” Shah said. “We do the asset management. We make sure the asset — what is needed, when it’s needed, where it’s needed — is there consistently.”

In many large companies, every asset request is second-guessed. Facebook, on the other hand, places a lot of trust in its employees, it seems. There’s a self-service portal, the Enterprise Store, that allows employees to easily request phones, laptops, chargers (which get lost a lot) and other accessories as needed, without having to wait for approval (though if you request a laptop every week, somebody will surely want to have a word with you). Everything is obviously tracked in detail, but the overall experience is closer to shopping at an online retailer than using an enterprise asset management system. The Enterprise Store will tell you where a device is available, for example, so you can pick it up yourself (but you can always have it delivered to your desk, too, because this is, after all, a Silicon Valley company).


For accessories, Facebook also offers self-service vending machines, and employees can walk up to the help desk.

The company also recently introduced an Amazon Locker-style setup that allows employees to check out devices as needed. At these smart lockers, employees simply have to scan their badge, choose a device and, once the appropriate door has opened, pick up the phone, tablet, laptop or VR devices they were looking for and move on. Once they are done with it, they can come back and check the device back in. No questions asked. “We trust that people make the right decision for the good of the company,” Shah said. For laptops and other accessories, the company does show the employee the price of those items, though, so it’s clear how much a certain request costs the company. “We empower you with the data for you to make the best decision for your company.”

Talking about cost, Shah told me the Supply Chain organization tracks a number of metrics. One of those is obviously cost. “We do give back about 4% year-over-year, that’s our commitment back to the businesses in terms of the efficiencies we build for every user we support. So we measure ourselves in terms of cost per supported user. And we give back 4% on an annualized basis in the efficiencies.”

Unsurprisingly, the company has by now gathered enough data about employee requests (Shah said the team fulfills about half a million transactions per year) that it can use machine learning to understand trends and be proactive about replacing devices, for example.


Facebook’s Enterprise Engineering group doesn’t just support internal customers, though. It also runs the company’s internal and external events, including the likes of F8, the company’s annual developer conference. To do this, the company built out conference rooms that can seat thousands of people, with all of the logistics that go with that.

The company also showed me one of its newest meeting rooms where there are dozens of microphones and speakers hanging from the ceiling that make it easier for everybody in the room to participate in a meeting and be heard by everybody else. That’s part of what the organization’s “New Builds” team is responsible for, and something that’s possible because the company also takes a very hands-on approach to building and managing its offices.

Facebook also runs a number of small studios in its Menlo Park and New York offices, where both employees and the occasional external VIP can host Facebook Live videos.


Indeed, live video, it seems, is one of the cornerstones of how Facebook employees collaborate and help employees who work from home. Typically, you’d just use the camera on your laptop or maybe a webcam connected to your desktop to do so. But because Facebook actually produces its own camera system with the consumer-oriented Portal, Banerjea’s team decided to use that.

“What we have done is we have actually re-engineered the Portal,” he told me. “We have connected with all of our video conferencing systems in the rooms. So if I have a Portal at home, I can dial into my video conferencing platform and have a conference call just like I’m sitting in any other conference room here in Facebook. And all that software, all the engineering on the portal, that has been done by our teams — some in partnership with our production teams, but a lot of it has been done with Enterprise Engineering.”

Unsurprisingly, there are also groups that manage some of the core infrastructure and security for the company’s internal tools and networks. All of those tools run in the same data centers as Facebook’s consumer-facing applications, though they are obviously sandboxed and isolated from them.

It’s one thing to build all of these tools for internal use, but now, the company is also starting to think about how it can bring some of these tools it built for internal use to some of its external customers. You may not think of Facebook as an enterprise company, but with its Workplace collaboration tool, it has an enterprise service that it sells externally, too. Last year, for the first time, Workplace added a new feature that was incubated inside of Enterprise Engineering. That feature was a version of Facebook’s public Safety Check that the Enterprise Engineering team had originally adapted to the company’s own internal use.

“Many of these things that we are building for Facebook, because we are now very close partners with our Workplace team — they are in the enterprise software business and we are the enterprise software group for Facebook — and many [features] we are building for Facebook are of interest to Workplace customers.”

As Workplace hit the market, Banerjea ended up talking to the CIOs of potential users, including the likes of Delta Air Lines, about how Facebook itself used Workplace internally. But as companies started to adopt Workplace, they realized that they needed integrations with existing third-party services like ERP platforms and Salesforce. Those companies then asked Facebook if it could build those integrations or work with partners to make them available. But at the same time, those customers got exposed to some of the tools that Facebook itself was building internally.

“Safety Check was the first one,” Banerjea said. “We are actually working on three more products this year.” He wouldn’t say what these are, of course, but there is clearly a pipeline of tools that Facebook has built for internal use that it is now looking to commercialize. That’s pretty unusual for any IT organization, which, after all, tends to only focus on internal customers. I don’t expect Facebook to pivot to an enterprise software company anytime soon, but initiatives like this are clearly important to the company and, in some ways, to the morale of the team.

This creates a bit of friction, too, though, given that the Enterprise Engineering group’s mission is to build internal tools for Facebook. “We are now figuring out the deployment model,” Banerjea said. Who, for example, is going to support the external tools the team built? Is it the Enterprise Engineering group or the Workplace team?

Chances are then, that Facebook will bring some of the tools it built for internal use to more enterprises in the long run. That definitely puts a different spin on the idea of the consumerization of enterprise tech. Clearly, not every company operates at the scale of Facebook and needs to build its own tools — and even some companies that could benefit from it don’t have the resources to do so. For Facebook, though, that move seems to have paid off and the tools I saw while talking to the team definitely looked more user-friendly than any off-the-shelf enterprise tools I’ve seen at other large companies.



SAP HANA tenant database system copy and recovery

Source: Veeam

Navigation:

Part 1 — 3 steps to protect your SAP HANA database
Part 2 — How to optimize and backup your SAP HANA environment
Part 3 — SAP HANA tenant database system copy and recovery

 

Throughout this blog series dedicated to the Veeam Plug-in for SAP HANA, I have highlighted the challenges SAP administrators face in maintaining reliability and minimizing downtime. What we have not yet had the opportunity to discuss is how the demand for data preservation has increased sharply in recent years.

This could be due to regulations or the need to preserve critical records. Now businesses are grappling with maintaining data to leverage it for long-term strategic business decisions, testing, or even monetizing it. For the final blog in this series, I will provide you with answers to typical customer questions I receive before or when enabling the Veeam Plug-in for SAP HANA.

How to plan for long-term retention requirements and restore scenarios

The biggest challenge regarding this topic is how to restore data without knowing much about the backup. You might sit in front of an empty SAP HANA database without any catalog entries, on a different host, or even on a different version of SAP HANA.

When performing a restore, the same version of SAP HANA or later is required, hence I highly recommend documenting the HANA version and consulting SAP’s HANA update table. Please also consult any related SAP documentation before performing a restore to a different version of SAP HANA to prevent unnecessary challenges.
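A quick way to record the version as part of your backup documentation is to query the database directly. A minimal sketch, assuming an hdbuserstore key named BACKUPKEY (the key name is hypothetical):

# document the exact HANA version alongside your backup notes
hdbsql -d SYSTEMDB -U BACKUPKEY "SELECT VERSION FROM SYS.M_DATABASE"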

In most cases you will have a retention policy of several days or even weeks. Customers typically delete older backups (you may recall we discussed proceeding carefully in the first blog of this series), so you may encounter a missing catalog entry for this restore point. Depending on your retention policy, you might even have deleted all the <EBID>.vab files from your repository already.

To make this backup recoverable, you need to proceed cautiously. Creating a proper backup would be a good start. In my example I only cover the tenant database, but the process for creating and restoring the database is the same for the SYSTEM DB — and you should always have both backups available to you.

Create a backup of tenant database

Please refer to the first blog in this series to create a backup with SAP HANA Studio if needed and document it. This will be important during the restore process.

Often at Veeam, we stress the importance of naming conventions for tags, backup jobs, and other related information so they are intuitive to others. After clicking Next through the wizard, the backup will be created. A recommended practice is to copy the backup log into your backup documentation for this long-term backup.

You will now find your recently created backup in the backup catalog and can document the EBID numbers for your backup services.

All EBIDs have corresponding <EBID>.vab files in our repositories. Go to your repository and find these files and copy them into your long-term vault (e.g. file to tape, long-term S3 bucket).

It is important to note that if these <EBID>.vab files are deleted from the repository through backint, a restore will no longer be possible. To avoid this, you MUST delete this specific data backup from the catalog before your retention policy is applied. The next screenshot shows how to delete a specific data backup from the catalog. Only the catalog entry is deleted; the files remain in the repository.

The Catalog and Backup Location option will delete the <EBID>.vab files inside the repository — which we wish to avoid.

After refreshing the catalog, you’ll see it’s gone forever from the SAP HANA catalog. Check the file system for the repository and you’ll still find the files.

Now you can copy/move/archive these <EBID>.vab files with file to tape or other tools.
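A minimal shell sketch of that archiving step, with hypothetical repository and vault paths (adjust the paths and the EBID list to your environment):

# copy the .vab files belonging to the documented EBIDs into the long-term vault
REPO=/backups/VeeamPluginforSAPHANA   # example repository folder
VAULT=/mnt/longterm-vault             # example vault mount (e.g. file-to-tape staging)

for EBID in 1054 1055; do             # EBIDs documented from the backup catalog
    find "$REPO" -name "${EBID}*.vab" -exec cp -av {} "$VAULT"/ \;
done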

Recovery of tenant database

To recover it, please follow the steps below:

  1. Copy the <EBID>.vab files back to your original repository folder
  2. Restore the data backup using the known backup prefix.
  3. Start a recovery of your tenant and select Recover the database to a specific data backup.

Click Next and choose Recover without the backup catalog.

Now is the moment where we’ll be glad we documented the original backup. Have your Backup Prefix ready.

Click Next — you don’t have many other options to restore your database.

Make sure to check the summary — review the SQL statement if needed — and press Finish.
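For reference, the generated recovery statement will look roughly like the sketch below; the tenant name and backup prefix are placeholders from my documented backup:

RECOVER DATA FOR T01 USING BACKINT ('LONGTERM_2019_08') CLEAR LOG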

Tenant database System Copy with Veeam

The next question I often receive is how to create an SAP HANA system copy with Veeam Backup & Replication. System copies are a commonly used SAP Basis process to create DB copies for your quality assurance, development or sandbox systems. Keep in mind this is only the first step of the process — the database copy. Please consult SAP documentation on further steps like system renaming and cleanups.

Please follow these steps to make this happen:

Create a backup of the tenant database — see above for how to create it with a meaningful name, and document your backup properly.

This is a backup of my S/4 production system.

Before I can restore this tenant on my S/4 QA system, I need to tell my secondary HANA system to search within a different job entry via the command-line interface (CLI). Keep in mind you need access rights to the relevant repositories to see the other SAP HANA system entries.

 

sles15hana04:/opt/veeam/VeeamPluginforSAPHANA # SapBackintConfigTool --help

--help                Show help

--show-config         Show configuration parameters

--wizard              Start configuration wizard

--set-credentials arg Set credentials

--set-host arg        Set backup server

--set-port arg        Set backup server port

--set-repository      Set backup repository

--set-restore-server  Set source restore server for system copy processing

--map-backup          Map backup
 

sles15hana04:/opt/veeam/VeeamPluginforSAPHANA # SapBackintConfigTool --set-restore-server

Select source SAP HANA plug-in server to be used for system copy restore:

1. sles15hana04
2. sles15hana03
3. sles15hana02
4. sles15hana01

Enter server number: 2

Available backup repositories:

1. Default Backup Repository
2. SOBR1
3. w2k19repo_ext1
4. sles15repo_ext1

Enter repository number: 2

sles15hana04:/opt/veeam/VeeamPluginforSAPHANA #

 

This set of commands tells the Veeam Backup & Replication SAP backint client to search inside the sles15hana03/SOBR1 backup job to find this specific backup. This has no impact on your running backups. If you are not able to modify the configuration, check your user permissions and the file permissions on /opt/veeam/VeeamPluginforSAPHANA/veeam_config.xml

The next step is to perform a restore on your copy system. While this seems easy, I recommend reviewing the comments below for important steps you may not be familiar with.

Select the tenant you want to restore to.

Recover the database to a specific data backup — the one we just created.

Select Recover without backup catalog and enable Backint System Copy. It is important to add the Source system with tenant@SID naming.

Now add your Backup Prefix:

No other options are possible, so click Next.

Review the summary before initiating the restore.

Now SAP HANA Studio will shut down this specific tenant and start the recovery.

If everything goes well, it should look like this and the recovery process will begin.

The tenant will restart.

And thankfully, the HANA system copy process is done. Just keep in mind this was only the database copy. Additional SAP tasks, such as system renaming, can now begin.

As a result, I have created an SAP HANA DB system copy of my production S/4 database on a new HANA system with a new SID.

Closing thoughts

I hope you enjoyed this blog series and are now better prepared to leverage Veeam to more effectively back up, restore, and protect your SAP HANA environment. As SAP S/4HANA is the market-leading intelligent ERP solution with over 11,000 customers, I’m thrilled Veeam Backup & Replication 9.5 Update 4 offers an SAP backint certified solution to help customers minimize disruption and downtime for mission-critical applications like your SAP system. I highly encourage you to check out this capability, and I also encourage you to check out the Veeam Plug-in for Oracle RMAN, another great feature in Update 4.

The post SAP HANA tenant database system copy and recovery appeared first on Veeam Software Official Blog.



How to copy AWS EMR cluster hive scripts

Source: Veeam

I recently had a situation where I was discussing a scenario with Veeam Backup & Replication deployed completely in the public cloud. In fact, many organizations have the Veeam Backup & Replication server in the cloud and are managing either AWS EC2 or Azure VM backups with Veeam Agent for Microsoft Windows and Veeam Agent for Linux.

In this situation, the conversation turned to some other services in the cloud, in particular AWS EMR (Elastic MapReduce), and to managing the hive scripts that are part of the EMR cluster. I offered a simple solution:

Veeam File Copy Job

Since the Veeam Backup & Replication server is in the public cloud, the EMR cluster can be inventoried. This means it is visible in the Veeam infrastructure. Generically speaking, the Linux requirements for a repository or other Linux functions are documented here. But mainly, Perl and SSH are what is needed to add the EMR cluster into Veeam Backup & Replication. This is shown in the figure below:

 

The second entry, ec2-54-196-190-21.compute-1.amazonaws.com, is the EMR cluster’s master public DNS entry. From there, it is visible for some basic interaction with Veeam. This reminds me of a challenge I had nearly 10 years ago: I had a physical server with millions of files that I needed to move, and the Veeam FastSCP engine saved the day.
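If the cluster will not register, it is worth confirming the two prerequisites mentioned above from a console first. A quick sketch (the key file name is hypothetical; hadoop is the default SSH user on EMR):

# verify SSH access and the Perl installation on the EMR master node
ssh -i emr-key.pem hadoop@ec2-54-196-190-21.compute-1.amazonaws.com 'perl -v'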

That old situation came back to me with the EMR cluster. When you have some hive scripts that are inventoried in EMR, Veeam can copy those off to another system so you have a copy of them. It’s important to note that this is not a backup — but a file copy job. Nonetheless, it can be helpful if you need a copy of your EMR scripts on another system. In the screenshot above, I could copy them to the ec2-18-206-235-47.compute-1.amazonaws.com host, as that is a Linux EC2 instance and I have Veeam Agent for Linux on that system. So, I could have a “proper” backup as well.

One other nice feature here is that I can use the Veeam text editor (what used to be the FastSCP Editor). Simply right-click your hive script and select Edit. This may be easier than dusting off your vi skills:

 

Then from there, I can build the File Copy Job. The file copy job is a simple copy with an optional schedule. You select a source and a target, and it will do just as it sounds. It’s handy in that you can coordinate its schedule with other jobs you may already have in the public cloud. In fact, that’s exactly what I have done in the example below. The file copy job is placing the hive scripts and some other folders I care about from my EMR cluster on a system I’m backing up with Veeam Agent for Linux. That is scheduled to run right after the Linux Agent job. This is a view of the file copy job:

 

Have you ever used the file copy job? It may be handy for services such as EMR clusters that you need to copy files to or from. You can use the file copy job in the free Veeam Community Edition, so download it now and try it for yourself.

The post How to copy AWS EMR cluster hive scripts appeared first on Veeam Software Official Blog.



Healthcare backup vs record retention

Source: Veeam

Healthcare overspends on long term backup retention

There is a dramatic range of perspective on how long hospitals should keep their backups: some keep theirs for 30 days while others keep their backups forever. Many assume the long retention is due to regulatory requirements, but that is not actually the case. Retention times longer than needed have significant cost implications and lead to capital spending 50-70% higher than necessary. At a time when hospitals are concerned with optimization and cost reduction across the board, this is a topic that merits further exploration and inspection.

What is the role of data protection?

The primary role of data protection is recovery in response to failure, malice, or accident. Wherever we have applications and data, we need assurance that those applications can be restored quickly (RTO) with tolerable near-term information loss (RPO). That is an IT deliverable common to all hospitals.

What are the relevant regulations?

HIPAA mandates that Covered Entities and Business Associates have backup and recovery procedures for Patient Health Information (PHI) to avoid loss of data. Nothing regarding duration is specified (CFR 164.306, CFR 164.308). State regulations govern how long PHI must be retained, usually ranging from six to 25 years, sometimes longer.

The retention regulations refer to the PHI records themselves, not the backups thereof. This is an important distinction and a source of confusion and debate. In the absence of deeper understanding, hospitals often opt for long term backup retention, which has significant cost implications without commensurate value.

How do we translate applicable regulations into policy?

There are actually two policies at play: PHI retention and Backup retention. PHI retention should be the responsibility of data governance and/or application data owners. Backup retention is IT policy that governs the recoverability of systems and data.

I have yet to encounter a hospital that actively purges PHI when permitted by regulations. There’s good reason not to: older records still have value as part of analytics datasets but only if they are present in live systems. If PHI is never purged, records in backups from one year ago will also be present in backups from last night. So, what value exists in the backups from one year ago, or even six months ago?

Keeping backups long term increases capital requirements and the complexity of data protection systems, and limits hospitals’ ability to transition to new data protection architectures that offer a lower TCO, all without mitigating additional risk or adding additional value.

What is the right backup retention period for hospital systems?

Most agree that the right answer is 60-90 days. Thirty days may expose some risk from undesirable system changes that require going further back at the system (if not the data) level; examples given include changes that later caused a boot error. Beyond 90 days, it’s very difficult to identify scenarios where the data or systems would be valuable.

What about legacy applications?

Most hospitals have a list of legacy applications that contain older PHI that was not imported into the current primary EMR system or other replacement application. The applications exist purely for reference purposes, and they often have other challenges such as legacy operating systems and lack of support, which increases risk.

For PHI that only exists in legacy systems, we have only two choices: keep those aging apps in service or migrate those records to a more modern platform that replicates the interfaces and data structures. Hospitals that have pursued this path have been very successful reducing risk by decommissioning legacy applications, using solutions from Harmony, Mediquant, CITI, and Legacy Data Access.

What about email?

Hospitals have a great deal of freedom to define their email policies. Most agree that PHI should not be in email and actively prevent it by policy and process. Without PHI in email, each hospital can define whatever email retention policy they wish.

Most hospitals do not restrict how long emails can be retained, though many do restrict the ultimate size of user mailboxes. There is a trend, however, often led by legal, to reduce email history. It is often phased in gradually: one year they will cut off the email history at ten years, then at eight or six, and so on.

It takes a great deal of collaboration and unity among senior leaders to effect such changes, but the objectives align the interests of legal, finance, and IT. Legal reduces discoverable information; finance reduces cost and risk; and IT reduces the complexity and weight of infrastructure.

The shortest email history I have encountered is two years at a Detroit health system: once an item in a user mailbox reaches two years old, it is actively removed from the system by policy. They also only keep their backups for 30 days. They are the leanest healthcare data protection architecture I have yet encountered.

Closing thoughts

It is fascinating that hospitals serving the same customer needs, bound by broadly similar regulatory requirements, come to such different conclusions about backup retention. That should be a signal that there is real optimization potential with both PHI and email:

  • There is no additional value in backups older than 90 days.
  • Significant savings can be achieved through reduced backup retention of 60-90 days.
  • Longer backup retention times inflate capital costs by as much as 70% and hinder migration to more cost-effective architectures.
  • Email retention can be greatly shortened to reduce liability and cost through set policy.

The post Healthcare backup vs record retention appeared first on Veeam Software Official Blog.



How to optimize and backup your SAP HANA environment

Source: Veeam

SAP administrators face many challenges, including maintaining reliability, minimizing downtime and providing business units the intelligence they need when they need it, while managing the complexity of disparate systems and layers dependent upon business intelligence. My first blog, 3 steps to protect your SAP HANA database, provided insight as to how Veeam can help with these challenges, including installing and configuring the Veeam Plug-in for SAP HANA and leveraging it to perform backup and restore operations. This blog expands that discussion and focuses on SAP HANA backint configuration, as well as recommendations on protecting your SAP application environment.

I hope you never have to leverage Veeam’s unique restore capabilities for SAP. If you do, minimizing the downtime associated with your SAP infrastructure is critical. The 2019 Veeam Cloud Data Management Report surveyed over 1,500 senior business and IT decision makers worldwide and found that lost data from mission-critical application downtime costs organizations $102,450 per hour, on average. Now downtime costs vary by industry, scale of business operations and an assortment of other factors, but it is still a staggering amount, causing SAP administrators to review disaster recovery capabilities for upcoming SAP S/4HANA migrations from ECC.

Additional configuration items in SAP HANA

In my previous blog, I discussed the recommended practices for installing and configuring the Veeam Plug-in for SAP HANA. I also recommended that SAP Basis administrators work alongside their Veeam administrator counterparts; by working more closely together, you can achieve success and potentially make your life easier.

For SAP HANA backint, most configuration items reside in a database configuration file called global.ini. Inside it is a backup section.

Many of these parameters are self-explanatory but you may consult the SAP HANA Administration Guide, if needed.

Which ones are important for Veeam Backups?

catalog_backup_using_backint — This determines whether the catalog backup will be written to a local disk or to the Veeam repository. There are advantages and disadvantages to each, but changing the value to true allows the catalog backup to be written to Veeam repositories instead of local disk.

parallel_data_backup_backint_channels — This determines how many channels will be used to write the data to a Veeam repository. The HANA service must hold more than 128 GB of data (not RAM) to use parallel channels. The maximum number of channels is 32, but I would encourage you to be cautious. The axiom I often suggest: use as many channels as needed for performance and as few as possible to conserve resources. A recommended approach is to start with as few as two or as many as four and double the count per run to gauge the impact on the total time needed for the backup. You should expect the time for the backup to complete to change in a linear fashion. If not, you may be using too many channels, consuming significant and valuable resources on the HANA instance and the Veeam repository. This can impact overall backint performance, which I will comment on later in this blog.

Do not forget to adjust the data_backup_buffer_size according to your channels. Please consult the SAP HANA admin guide for more information.
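A minimal sketch of how these settings might look in the [backup] section of global.ini; the values are illustrative, not recommendations, so size the channel count and buffer for your own system:

[backup]
catalog_backup_using_backint = true
parallel_data_backup_backint_channels = 4
data_backup_buffer_size = 2048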

Other parameters should be set based upon guidance from SAP and/or Veeam support, as they are dependent upon specific elements within your environment and architecture. For example, larger systems may require you to change the backint_response_timeout parameter to a higher value, possibly in conjunction with log_backup_interval_mode. Both parameters influence how log backups are written into backint.

Performance of Veeam Plug-in for SAP HANA

As there are many opinions on this topic, I wanted to be sure to address performance expectations with the Veeam Plug-in for SAP HANA. In my experience, businesses often have high expectations for throughput values but have trouble achieving them. Yet there are ways to optimize performance and I wish to share with you my top recommendations in hopes you will find them helpful.

  1. Network throughput. As SAP HANA backint communicates via network only, this is a great first place to look. My personal recommendation for customers is to use iperf3 to test throughput from the SAP HANA system to the Veeam repository. On a 10 Gb network I would expect you to see similar results to the ones in the screenshot below. While you may believe these values to be suboptimal, I have seen environments that have struggled to achieve even these values.
    1. Start iperf3 -s on your repository.
    2. Start iperf3 -c ‘repository system name’ on your non-productive SAP HANA system (both commands are shown together after this list).
  2. Run iostat and Windows System Resource Manager (disks) to understand your storage in further detail and examine throughput and latency.
  3. If you need to evaluate the maximum throughput your infrastructure can deliver, or better understand any bottlenecks, you can simply test your infrastructure without using any Veeam components. To do this you need to mount your CIFS or NFS repository share. It is important to use the uid or gid so that <sid>adm has write access.
    1. mount -t cifs -o username=username,domain=domain,vers=3.0,rw,uid=<sid>adm //server/cifsshare /mnt/cifs/
    2. Then run a file-based backup to the mounted share. SAP HANA will report the runtime of that backup, and after performance optimization, SAP HANA backint usually yields a more desirable value.
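Put together, the network test looks like this (the repository host name is a placeholder; -P sets the number of parallel streams and -t the test duration in seconds):

# on the Veeam repository server
iperf3 -s

# on a non-productive SAP HANA system
iperf3 -c repo01.example.com -P 4 -t 30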

Now that you have a good overview of what you should look for in terms of configuration and tuning recommendations, I want to address operational considerations, including scheduling and what to backup beyond SAP HANA backint data.

Scheduling options for SAP HANA

As described previously, the Veeam SAP HANA backint Plug-In follows an SAP database centric approach, hence scheduling options are SAP centric. Depending on your daily operation requirements, I can provide an overview for the following scheduling options:

  • External scheduling software like autonomy, cron and others
  • SAP HANA Cockpit
  • SAP S/4 DBA Planning Calendar (DB13)
  • Veeam shell job for scheduling

All these options have unique advantages and disadvantages, hence you will want to discuss them with your operations team to decide which one best fits your environment.

External scheduling software

For using external scheduling software, we provide a sample script/example available via GitHub.

The steps are straightforward and can easily be scripted:

1. Log in to the OS as <sid>adm (if not performed by the scheduling software)

2. Start hdbsql -d SYSTEMDB -U <HDBUSERSTOREKEY> -I <filename>

In the file use the following SQL commands as needed:

 

# Full Backup SYSTEMDB

# backup data using backint ('$backupcomment');

# Diff Backup SYSTEMDB

# backup data differential using backint ('$backupcomment');

# INCR Backup SYSTEMDB

# backup data incremental using backint ('$backupcomment');

 

# Full Backup tenant <tenant name>

# backup data for <tenant name> using backint ('$backupcomment');

# Diff Backup tenant <tenant name>

# backup data differential for <tenant name> using backint ('$backupcomment');

# INCR Backup tenant <tenant name>

# backup data incremental for <tenant name> using backint ('$backupcomment');

 

If you run into timeouts, scripting the ASYNCHRONOUS option might help, as the script will start the backup but return without waiting for the backup run to finish.

# backup data for VDQ using backint ('$backupcomment') ASYNCHRONOUS;

 

If you have a very dynamic environment with a lot of tenant changes, you can also easily incorporate the following SQL command to capture all tenants at the beginning of the backup.

# list all tenant DBs (logon to SYSTEM DB first) including all information

select * from "SYS"."M_DATABASES"

# list all tenant DBs (logon to SYSTEM DB first) - only names

select DATABASE_NAME from "SYS"."M_DATABASES"
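A minimal sketch of how these pieces combine under cron; the hdbuserstore key, tenant name, backup comment and script path are all hypothetical:

# crontab entry for <sid>adm: full backup every night at 23:00
# 0 23 * * * /home/<sid>adm/scripts/hana_backint_full.sh

#!/bin/bash
# hana_backint_full.sh: full backint backups for SYSTEMDB and one tenant
hdbsql -d SYSTEMDB -U BACKUPKEY "backup data using backint ('nightly_full')"
hdbsql -d SYSTEMDB -U BACKUPKEY "backup data for T01 using backint ('nightly_full')"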

SAP HANA Cockpit

SAP HANA Cockpit is the new SAP HANA administrative tool, replacing SAP HANA Studio. After logging into SAP HANA Cockpit, select your database and go to Database Administration — Manage database backups.

 

Inside your database backups there is a Go to Schedules button.

From there you can enable your scheduler and then create schedules for your database.

After enabling the scheduler, click the scheduling table or press the + sign. Select Schedule a Series of Backups.

Make sure to use a meaningful or intuitive name for your schedule.

Choose your backup type (e.g., full backup) and select Backint as the destination type. Change the backup prefix as needed.

Choose your recurrence type.

Configure your recurrence details.

Check the summary and save the schedule.

 

You will find your newly created schedule in the backup schedules table.

SAP S/4 DBA Planning Calendar (DB13)

You also have the option to use your DBA Planning Calendar. Just jump into DB13 and create your schedule from there.

Usage of Veeam jobs for scheduling

For those who want to control the backup from Veeam Backup & Replication, you can create a Veeam job for an agent or a VM. Simply create the job, add the desired agent or VM, enable application-aware processing and use a script to start the backup. See the steps for the pre-freeze or post-thaw script under external scheduling software.

How to back up your complete SAP system

Now that we have explored scheduling options, I want to offer recommendations on how to secure your complete SAP environment, including ASCS, dialog and batch instances, and the HANA system, including the OS and database installation. The steps are similar regardless of whether you are running Windows or Linux application servers, or whether they are physical or virtual.

Create a job, which includes all systems.

 

In this example, you see a HANA database system and one application server hosting the ASCS and the first dialog instance. You can add both of these VMs to the backup job.

 

Next, it is important to enable application-aware processing, so that the database will be backed up in snapshot mode. A script can also be added for the database VM to activate an application-aware backup.

 

Click Applications and configure a script for the HANA system.

 

This makes sure the HANA database enters snapshot mode before the backup is taken and closes the snapshot afterwards. If you are using Veeam storage integrations, this will significantly shorten the time a snapshot is open.
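At the core of such a script pair are two SQL statements along these lines. This is a sketch only: the hdbuserstore key is hypothetical, and the backup ID passed to the close call must be read back from the create step:

# pre-freeze: put the whole database into snapshot mode
hdbsql -d SYSTEMDB -U BACKUPKEY "BACKUP DATA FOR FULL SYSTEM CREATE SNAPSHOT COMMENT 'veeam'"

# post-thaw: confirm the snapshot once the VM backup snapshot is complete
hdbsql -d SYSTEMDB -U BACKUPKEY "BACKUP DATA FOR FULL SYSTEM CLOSE SNAPSHOT BACKUP_ID <id> SUCCESSFUL 'veeam'"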

Sample scripts may be found via GitHub.

For those seeking recommendations on scheduling, unfortunately there is no simple answer. It depends on your environment, your service level agreements (SLAs), i.e. recovery point objective and recovery time objective, and your SAP environment’s behavior. At a minimum, I suggest at least once a week, though I would urge you to consider once a day. Recently, ESG published a survey that found that 74% of servers have a downtime tolerance of less than two hours.[1]

This survey was conducted across all workloads, not just mission-critical, and more information on this survey can be found in The race to zero downtime and zero data loss vlog. The good news is that everything besides the HANA database is typically very static and with Change Block Tracking and storage integrations this should be simple from a capacity and performance perspective.

Closing thoughts

Now that you have progressed through the second blog in this series, you are not only more educated regarding SAP backups and restores but are well prepared in case of a disaster — which will hopefully never happen to you. I know there are many other elements to consider for disaster recovery (e.g. clusters, HANA System Replication and VMware FT for the ASCS), but in case you lose an application server or your ASCS, you can simply recover it with Veeam via Instant VM Recovery. Veeam Instant VM Recovery immediately restores a VM back into your production environment by running it directly from the backup file in minutes, minimizing disruption and downtime for mission-critical applications like your SAP system.

If something happens to your HANA server, simply restore the system with Veeam Instant VM Recovery and check that your database is up and running. Then restore your latest full backup via backint and replay all the needed logs via the SAP-certified solution.

In the next blog we will discuss retention policies, including long-term retention, as well as HANA system copies. In the meantime, make sure to install the Veeam Plug-in for SAP HANA, which is included with the Veeam Backup & Replication installation package.

 

[1] ESG: Real-world SLAs and Availability Requirements by Edwin Yuen on May 4, 2018

The post How to optimize and backup your SAP HANA environment appeared first on Veeam Software Official Blog.



Monday.com raises $150M more, now at $1.9B valuation, for workplace collaboration tools

Source: Tech News – Enterprise

Workplace collaboration platforms have become a crucial cornerstone of the modern office: workers’ lives are guided by software and what we do on our computers, and collaboration tools provide a way for us to let each other know what we’re working on, and how we’re doing it, in a format that’s (at best) easy to use without too much distraction from the work itself.

Now, Monday.com, one of the faster growing of these platforms, is announcing a $150 million round of equity funding — a whopping raise that points both to its success so far, and the opportunity ahead for the wider collaboration space, specifically around better team communication and team management.

The Series D funding — led by Sapphire Ventures, with Hamilton Lane, HarbourVest Partners, ION Crossover Partners and Vintage Investment Partners also participating — is coming in at what reliable sources tell me is a valuation of $1.9 billion, or nearly four times Monday.com’s valuation when it last raised money a year ago.

The big bump is due in part to the company’s rapid expansion: it now has 80,000 organizations as customers, up from a mere 35,000 a year ago, with the number of actual employees within those organizations numbering as many as 4,000 or as few as two, spanning some 200 industry verticals, including a fair number of companies that are non-technical in nature (but still rely on using software and computers to get their work done). The client list includes Carlsberg, Discovery Channel, Philips, Hulu and WeWork, as well as a number of Fortune 500 companies.

“We have built flexibility into the platform,” said Roy Mann, the CEO who co-founded the company with Eran Zinman. That flexibility, he believes, is one reason why it has found a lot of stickiness among the wider field of knowledge workers looking for products that work not unlike the apps they use as average consumers.

All those figures are also helping to put Monday.com on track for an IPO in the near future, Mann said.

“An IPO is something that we are considering for the future,” he said in an interview. “We are just at 1% of our potential, and we’re in a position for huge growth.” In terms of when that might happen, he and Zinman would not specify a timeline, but Mann added that this potentially could be the last round before a public listing.

On the other hand, there are some big plans ahead for the startup, including adding a free usage tier (to date, the only free option on Monday.com has been a free trial; all usage tiers have otherwise been paid), expanding geographically and into more languages, and continuing to develop the integration and automation technology that underpins the product. The aim is to have 200 applications working with Monday.com by the end of this year.

While the company is already generating cash and has just raised a significant round, in the current market that has definitely not kept venture-backed startups from raising more. (Monday.com, which first started life as Dapulse in 2014, has raised $234.1 million to date.)

Monday.com’s rise and growth are coming at an interesting moment for productivity software. There have been software platforms on the market for years aimed at helping workers communicate with each other, as well as to better track how projects and other activity are progressing. Despite being a relatively late entrant, Slack, the now-public workplace chat platform, has arguably defined the space. (It has even entered the modern work lexicon, where people now Slack each other, as a verb.)

That speaks to the opportunity to build products even when it looks like the market is established, but also — potentially — competition. Mann and Zinman are clear to point out that they definitely do not see Slack as a rival, though. “We even use Slack ourselves in the office,” Zinman noted.

The closer rivals, they note, are the likes of Airtable (now valued at $1.1 billion) and Notion (which, the company confirmed to us, has now officially closed a round of $10 million at an equally outsized valuation of $800 million), as well as the wider field of project management tools like Jira, Wrike and Asana — although, as Mann playfully pointed out, all of those could also feasibly be integrated into Monday.com and they would work better…

The market is still so nascent for collaboration tools that even with this crowded field, Mann said he believes that there is room for everyone and the differentiations that each platform currently offers: Notion, he noted as an example, feels geared towards more personal workspace management, while Airtable is more about taking on spreadsheets.

Within that, Monday.com hopes to position itself as the ever-powerful and smart go-to place to get an overview of everything that’s happening, with low-chat noise and no need for technical knowledge to gain understanding.

“Monday.com is revolutionizing the workplace software market and we’re delighted to be partnering with Roy, Eran, and the rest of the team in their mission to transform the way people work,” said Rajeev Dham, managing partner at Sapphire Ventures, in a statement. “Monday.com delivers the quality and ease of use typically reserved for consumer products to the enterprise, which we think unlocks significant value for workers and organizations alike.”

