Application-level monitoring for your workloads

Source: Veeam

If you haven’t noticed, Veeam ONE took on an incredible number of new capabilities with the 9.5 Update 4 release.

One capability that can be a difference-maker is application-level monitoring. This is a big deal for keeping applications available and is part of a bigger Availability story. Paired with the incredible backup capabilities of Veeam Backup & Replication, application-level monitoring extends Availability to the applications on the workloads that need it most. What’s more, you can combine it with actions in Veeam ONE Monitor to define the handling you want when applications don’t behave as expected.

Let’s take a look at application-level monitoring in Veeam ONE. This capability lives in Veeam ONE Monitor, which is my personal favorite “part” of Veeam ONE. I’ve always said of Veeam ONE, “I guarantee that Veeam ONE will tell you something about your environment that you didn’t know, but need to fix.” And with application-level monitoring, that story is stronger than ever. Let’s start with the processes and services inside a running virtual machine in Veeam ONE Monitor:

 

I’ve selected the SQL Server service, which is likely important on any system where it is present. Veeam ONE Monitor offers a number of handling options for this service. The first are simple start, stop and restart options that are passed to the service control manager. But we can also set up alarms based on the service:

 

The alarms for monitored services allow very explicit handling, and you can match them to the SLA or expectations your stakeholders have. Take how this alarm is configured: if the service is not running for 5 minutes, the alarm is triggered as an error. I’ll get to what happens next in a moment, but this 5-minute window (which is configurable) can be set to a reasonable amount of time for most routine maintenance to complete. If the outage exceeds 5 minutes, something may not be operating as expected, and chances are the service should be restarted. This is especially true if you have a fiddlesome application that occasionally (or constantly) requires manual intervention. This 5-minute threshold may even be quick enough to avoid being paged in the middle of the night! The alarm rules are shown below:
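As an aside, the 5-minute rule in that alarm is essentially a debounce on the service state. Here is a rough sketch of the idea in Python; the function, thresholds and sample data are hypothetical illustrations, not Veeam ONE’s implementation:

```python
# Illustrative sketch of a "service not running for N minutes" alarm.
# Names and data are hypothetical, not Veeam ONE's actual logic.

def evaluate_alarm(samples, threshold_seconds=300):
    """samples: list of (timestamp_seconds, is_running) tuples, oldest first.
    Returns "error" if the service was continuously down for at least
    threshold_seconds at any point, otherwise "ok"."""
    down_since = None
    state = "ok"
    for ts, running in samples:
        if running:
            down_since = None          # service recovered; reset the timer
        else:
            if down_since is None:
                down_since = ts        # start of the outage
            if ts - down_since >= threshold_seconds:
                state = "error"        # outage exceeded the window
    return state

# A 2-minute blip stays quiet; a 6-minute outage trips the alarm.
short_outage = [(0, True), (60, False), (180, False), (240, True)]
long_outage = [(0, True), (60, False), (240, False), (420, False)]
print(evaluate_alarm(short_outage))  # ok
print(evaluate_alarm(long_outage))   # error
```

The key point is that a brief stop (for example, during patching) never surfaces, while a sustained outage does.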

 

The alarm by itself is good, but sometimes we need more. That’s where a different Veeam ONE capability helps with remediation actions. It’s natural to equate the remediation actions with the base capability, but they are distinct: the base capability is the application-level monitoring, while the remediation actions are the means to fully leverage it.

With remediation actions, the proper handling can be applied for this application. In the screenshot below, I’ve put in a specific PowerShell script that runs automatically when the alarm is triggered. Let your ideas go crazy here: it can be as simple as restarting the service, but you may also want to notify application owners that the application was remediated if they are not using Veeam ONE. That alone may be the motivation needed to set up read-only access for the application team to their applications. The configuration to run the script and automatically resolve the alarm is shown below:

 

Another piece of intelligence regarding services: application-level monitoring in Veeam ONE also lets you set an alarm based on the number of services changing. For example, if one or more services are added, an alarm is triggered. This can be an indicator of an unauthorized software install or possibly a ransomware service.
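Conceptually, that service-count alarm is a set comparison against a recorded baseline. A minimal sketch (hypothetical names and data, not Veeam ONE’s actual logic):

```python
# Illustrative sketch of a "services changed" alarm: compare the current
# service list against a recorded baseline and flag anything new.

def detect_new_services(baseline, current):
    """Return the set of services present now but absent from the baseline."""
    return set(current) - set(baseline)

baseline = {"MSSQLSERVER", "Spooler", "W32Time"}
current = {"MSSQLSERVER", "Spooler", "W32Time", "xmrig_miner"}

added = detect_new_services(baseline, current)
if added:
    print("ALARM: new services detected:", sorted(added))
```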

Don’t let your creativity stop at service state; that’s just one example, and application-level monitoring can be used for many other use cases. Processes, for example, can have alarms built on many criteria (including resource utilization), as shown below:

 

If we look closer at process CPU, we can see that alarms can be built to fire when a process’s CPU usage (as well as other metrics) goes beyond specified thresholds. As in the previous example, we can also add remediation actions to sort the situation based on pre-defined conditions. The warning and error thresholds are shown below:

 

As you can see, application-level monitoring used in conjunction with other new Veeam ONE capabilities can really set the bar high for MORE Availability. The backup, the application and more can be looked after with the exact amount of care you want to provide. Have you seen this new capability in Veeam ONE? If you haven’t, check it out!

You can find more information on Veeam Availability Suite 9.5 Update 4 here.


The post Application-level monitoring for your workloads appeared first on Veeam Software Official Blog.



Backup infrastructure at your fingertips with Heatmaps


One of the best things an organization can do is have a well-performing backup infrastructure. This is usually done by fine-tuning backup proxies, sizing repositories, having specific conversations with business stakeholders about backup windows and more. Getting that set up and running is a great milestone, but there is a problem. Things change. Workloads grow, new workloads are introduced, storage consumption increases and more challenges come into the mix every day.

Veeam Availability Suite 9.5 Update 4 introduced a new capability that can help organizations adjust to the changes:

Heatmaps!

Heatmaps are part of Veeam ONE Reporter and do an outstanding job of giving an at-a-glance view of the backup infrastructure, helping you quickly see whether the environment is performing both as expected AND as you designed it. Let’s dig into the new heatmaps.

The heatmaps are available in the Veeam ONE Reporter web user interface and are very easy to get started with. To show the heatmaps, I’m going to use two different environments: one that I’ve intentionally set up to perform in a non-optimized fashion, and one that is in good shape and balanced, so the visual element of the heatmap is easy to see.

Let’s first look at the heatmap of the environment that is well balanced:

Here you can see a number of things: the repositories are getting a bit low on free space, including one that is rather small. The proxies carry a nice green color scheme and do not show too much variation in their work during their backup windows. Conversely, if a backup proxy shows dark green, that indicates it is not in use, which is not a good thing.

We can click on a backup proxy to get a much more detailed view; you can see that this proxy has a small amount of work during the backup window (mid-day in this environment) and carries a 50% busy load:

When we look at the environment that is not so balanced, the proxies tell a different story:

First of all, you can see there are three proxies, and the color changes show one of them doing much more work than the rest. This clearly tells me the proxies are not balanced: the selected proxy is doing far more work than the others during the overnight backup window, which stretches out the backup window.
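What the color gradient surfaces here is simply variance in per-proxy load during the window. As a rough sketch of the underlying idea (the threshold factor and data are hypothetical, and this is not how Veeam ONE computes its colors):

```python
# Illustrative sketch: flag a backup proxy whose busy percentage is far
# above the fleet average. The 1.5x factor and proxy data are hypothetical.

def find_overloaded(busy_pct, factor=1.5):
    """busy_pct: dict of proxy name -> average % busy during the window.
    Returns proxies whose load exceeds factor * the fleet average."""
    avg = sum(busy_pct.values()) / len(busy_pct)
    return sorted(p for p, b in busy_pct.items() if b > factor * avg)

proxies = {"proxy-01": 92.0, "proxy-02": 18.0, "proxy-03": 15.0}
print(find_overloaded(proxies))  # ['proxy-01']
```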

One of the coolest parts of the heatmap capability is that we can drill into a timeframe in the grid (this timeline can have a set observation window) that will tell us which backup jobs are causing the proxies to be so busy during this time, shown below:

In the details of the proxy usage, you can see the specific jobs that are consuming the CPU cycles during that window.

How can this help me tune my environment?

This is very useful, as it may indicate a number of things, such as backup jobs being configured not to use the correct proxies, or proxies lacking the connectivity they need to perform the intended type of backup job. An example would be one or more proxies configured for Hot-Add mode only when they are physical machines, which makes that mode impossible. Such a proxy would never be selected for a job, and the remaining proxies would have to carry the backup work. The backup jobs would still complete successfully, but the situation would extend the backup window, and it is all visible in the heatmap. How cool is that?

Beyond proxy usage, repositories are also well reported in the heatmaps, including Scale-Out Backup Repositories. This allows you to view the underlying storage free space. The following animation shows this in action:

Show me the heatmaps!

As you can see, the heatmaps add an incredible visibility element to your backup infrastructure. You can see how it is performing, including when things are successful yet not as expected. You can find more information on Veeam Availability Suite 9.5 Update 4 here.

 


The post Backup infrastructure at your fingertips with Heatmaps appeared first on Veeam Software Official Blog.



Blog your way to VeeamON 2019 in Miami!


This is your chance to win one of three exclusive VeeamON 2019 packages that include an event pass, airfare up to $1,500 and three nights’ accommodations in beautiful Miami! Plus, we are awarding event passes to ten additional contestants.

To enter the contest, publish the most engaging and meaningful content you can about VeeamON 2019 by April 15, and send the link to your blog post to sponsorship@veeam.com.

Read full details of the contest in the Terms and Conditions.

Here is some VeeamON 2019 content to consider:

  • Breakout sessions: We will have 50+ breakout sessions across various tracks on the latest industry trends and worldwide technology best practices, along with discussion panels and how-to sessions on Veeam capabilities.
  • Industry keynote speakers: The speakers list includes Anton Gostev, Danny Allan, Jason Buffington, Michael Cade, Rick Vanover and more!
  • Veeam Certified Engineer (VMCE) trainings are offered at more than a 50% discount exclusively for VeeamON 2019 attendees. More information can be found here.
  • Check out last year’s recorded sessions.

Feel free to contact us with any questions at sponsorship@veeam.com.

The post Blog your way to VeeamON 2019 in Miami! appeared first on Veeam Software Official Blog.



SAP HANA integrated backup is here!


SAP HANA is one of the most critical enterprise applications out there, and if you have worked with it, you know it likely runs part, if not all, of a business. In Veeam Availability Suite 9.5 Update 4, we are pleased to now have native, certified SAP support for backups and recoveries directly via backint.

What problem does it solve?

SAP HANA’s in-memory database platform requires a backup solution that is integrated with and aware of the platform. This SAP HANA support gives you a certified solution for SAP HANA backups, reduces the impact of taking backups, ensures operational consistency, and lets you leverage the additional capabilities Veeam Availability Suite has to offer, including point-in-time restores, database integrity checks, and storage efficiencies such as compression and deduplication.

This milestone comes after years of organizations wanting Veeam backups for their SAP installations. We spent many years advocating backing up SAP with BRTOOLS and leveraging image-based backups to prepare for tests. Now the story becomes even stronger, with support for Veeam to drive backint backups from SAP and store them in a Veeam repository. Specifically, this means a backint backup can happen for SAP HANA and Veeam manages the storage of that backup. It is important to note that the Veeam Plug-In for SAP HANA, which makes this native support work, is also supported for use with SAP HANA on Microsoft Azure.

How does it work?

The Veeam Plug-In for SAP HANA becomes a target available for native backups performed with SAP HANA tools such as SAP HANA Studio, SAP HANA Cockpit or SQL-based command line entries. Several backup types and targets can be selected, all native to the application: file-based backups (plain copies of files), snapshots, and complete data backups using backint. Backint is an API framework that allows third-party tools (such as Veeam) to connect the backup infrastructure directly to the SAP HANA database. The backint backup interval is configured in SAP HANA Studio and can be very small, such as 5 minutes. It is also recommended to enable log backups (again, configured in SAP HANA Studio) for more granular restores, which will be covered a bit later on.
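For reference, on the SAP HANA side a backint backup is driven by a native SQL statement, run from the SQL console in SAP HANA Studio or via hdbsql. A minimal example is below; the backup prefix is an arbitrary label, and the exact syntax can vary by SAP HANA version:

```sql
-- Complete data backup through the configured backint interface;
-- 'COMPLETE_DATA_BACKUP' is just a prefix for the backup files.
BACKUP DATA USING BACKINT ('COMPLETE_DATA_BACKUP');
```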

SAP HANA can also trigger snapshots of its own database; while snapshots do not include consistency or corruption checks, they are a great addition to the overall backup strategy. By most common perspectives, backint is the best approach for backing up SAP HANA systems, but snapshots add more options for recovery. The data flow for a backint backup through the plug-in, as implemented in Veeam Availability Suite 9.5 Update 4, is shown in the figure below:

 

 

One of the key benefits of a backint backup of SAP HANA is direct restores to a specific point in time, either from snapshots or from the backint backups with point-in-time recovery. This is very important considering how critical SAP HANA is to many organizations. So, when it comes to how often a backup is done, select the interval that meets your organization’s requirements, and make sure the option to enable automatic log backup is selected as well.

Bring on the Enterprise applications!

Application support is a recent trend here at Veeam, and I do not expect this to slow down any time soon! The SAP HANA Plug-In support, along with the Oracle RMAN plug-in, are two big steps in bringing application support to Veeam for critical, enterprise applications. You can find more information on Veeam Availability Suite 9.5 Update 4 here.

The post SAP HANA integrated backup is here! appeared first on Veeam Software Official Blog.



Have you heard? You can join the launch of Veeam Availability Suite 9.5 Update 4!


Things are changing around here lately! And we are happy to soon be sharing some of the newest changes with the public in a special way. On Tuesday, Jan. 22, you can join Veeam online as we present to the market the next update of Veeam Availability Suite, of which we are very proud.

Over the last few months, Veeam has been diligently working toward a large update for our core products: Veeam Backup & Replication, Veeam ONE, Veeam Agent for Microsoft Windows and Veeam Agent for Linux. Beyond this milestone, additional related updates are also coming soon for other products, with the first among them being Veeam Availability Console.

On Jan. 22, we will host an online event showcasing some of these accomplishments. Some people are already enjoying these new Veeam capabilities: we have provided this important update to service providers ahead of the event that will mark its general availability.

The theme for the technical capabilities of the Veeam Availability Suite 9.5 Update 4 release is to bring more cost savings, more cloud flexibility, and security and compliance for how data is made available.

This update is also special because it coincides with our new Veeam Velocity event. Velocity is an event that, for the first time this year, will have partners in attendance. The primary objective of Velocity is to enable all Veeam employees worldwide throughout the year. Given how partner-focused Veeam is, it only made sense that we also allow our partners of all types to partake in the event in-person. This includes channel partners, alliance partners, service-provider partners, distribution partners, integration partners and more! An even bigger step is that now YOU can also be a part of this experience.

Part of the event will be streamed online. I take special pride in this part of the agenda because I, along with two other technologists, Michael Cade and Anthony Spiteri, will be personally showcasing some of the latest capabilities live to you.

While I can’t quite tell you everything we are going to do at this online event, I can tell you that it will be a well-spent hour or so. Please join us by signing up here to be a part of Veeam Velocity!

The post Have you heard? You can join the launch of Veeam Availability Suite 9.5 Update 4! appeared first on Veeam Software Official Blog.



Veeam Vanguard nominations for 2019 are now open!


As 2018 comes to a close, we are proud to have completed the fourth year of the Veeam Vanguard Program. The Veeam Vanguard Program is Veeam’s global, top-level influencer program. Each Vanguard is nominated (either by themselves or others) and selected by the Veeam Product Strategy team. There are Vanguards of all types and all backgrounds — what will the 2019 program have in store?


You can nominate yourself or someone you know for the Veeam Vanguard Program for 2019.

Nominations close on January 2nd, 2019.

We have grown the program over the years: as Veeam has added different technologies, new influencers have an opportunity to be discovered. The Vanguards have unparalleled external access to Veeam initiatives, product updates, betas, incredible swag and more. One of the highlights of the annual program is the Veeam Vanguard Summit. This year it was held in Prague, where we hosted over 40 Vanguards for a week of technology and community fun.


One of the best ways to describe the Veeam Vanguard Program can come right from the Vanguards themselves.


Paul Stringfellow, UK:

“As a new member of the Vanguard program in 2018, it’s been a great pleasure to work with such a diverse group of people. It’s a truly global group big enough to support many different experiences, skill sets and opinions, while small enough to be a group that can get together to share ideas, technology developments and strategies. A fantastic group of people who it’s been a true pleasure to get to know this year. Veeam should be truly proud of the program they have built and the tremendous team who support it. Each one is a fine ambassador for their company.”

Dave and Kristal Kawula, Canada:

“The Veeam Vanguard Program is hands down one of the most diverse community programs in the industry. Microsoft, VMware, Amazon, Dell, HPE, Cisco and other technology professionals all in one room. Simply put, the Veeam Vanguard Program is awesome! “

Florian Raack, Germany:

“I always thought accelerating in a fast car was fun, but the acceleration of my skills in the Veeam Vanguard program surpasses this by far. I’ve been on board this unique community program for two years now and I don’t regret a single minute. Besides deep insights into the Veeam product world and co-determination of some portfolio decisions, it just feels good to have a group of the world’s best IT experts behind me. Besides the pure Veeam product knowledge, I take the experience of the other Vanguards with me for my daily work. The whole thing is refined with first-class swag.”

Didier Van Hoye, Belgium:

“The diversity in both the depth and breadth in skill sets, IT environments and background combined with the passion and drive of the Veeam Vanguards is impressive. This group of people experiences, deals and works with a huge variety of challenges that need to be addressed while protecting the data and services of their employers and/or customers. They share this cumulative knowledge freely with the global community for the benefit of all. The Veeam Vanguard program shows Veeam’s appreciation for these community efforts while supporting it. Veeam in return gets insights in real live ecosystems as well as open and honest feedback that they need to improve and evolve.”

These are just a few views of the program, and everyone experiences the Vanguard program in their own way. If you like what you are seeing, it’s natural to have a few questions on this type of program, so I’ve created a few Frequently Asked Questions (FAQ):

Who can apply to be a Vanguard?

Anyone active in a technology community.

Can Veeam employees be awarded Vanguard status?

No, but employees can nominate persons for consideration. Former Veeam employees who have been separated for more than 1 year are eligible for nomination.

What criteria are needed to be awarded Vanguard status?

The criteria are decided by our team that looks across communities to find the persons who embody our brand the best.

How will nominees be notified of the result?

The Product Strategy team (my staff and I) are going to review the nominations after the registration closes, and then we will deliberate the results. Look for communication either way after that.

If you see a Vanguard in the wild, let us know via nominations or nominate yourself. Nominations will close on January 2nd, 2019!

The post Veeam Vanguard nominations for 2019 are now open! appeared first on Veeam Software Official Blog.



Migration is never fun – Backups are no exception


One of the interesting things I’ve seen over the years is people switching backup products. It is reasonable to say that the average organization has more than one backup product, and at Veeam we’ve seen this over time as organizations got started with our solutions. This was especially the case before Veeam had any solutions for the non-virtualized (physical server and workstation device) space. In the early days of Veeam, effectively 100% of business was displacing other products, or sitting next to them for workloads where Veeam suited the client’s needs better.


The question of migration is something that should be discussed, as it is not necessarily easy. It reminds me of personal collections of media such as music or movies. For movies, I have VHS tapes, DVDs and DVR recordings, and use them each differently. For music, I have CDs, MP3s and streaming services — used differently again. Backup data is, in a way, similar. This means that the work to change has to be worth the benefit.

There are many reasons people migrate to a new backup product: a product may be too complicated or error-prone, too costly, or discontinued (a current example is VMware vSphere Data Protection). Even at Veeam we’ve deprecated products over the years. In my time here, I’ve observed that backup products in the industry come, change and go. Further, almost all of Veeam’s most strategic partners have at least one backup product of their own, yet we forge a path built on joint value, strong capabilities and broad platform support.

When the migration topic comes up, it is very important to have a clear understanding of what happens if a solution no longer fits the needs of the organization. As stated above, this can be because a product exits the market, drops support for a key platform or simply isn’t meeting expectations. How can the backup data accumulated over time be trusted to still meet any requirements that may arise? This is an important forethought that should be raised in any migration scenario: the time to think about what migration away from a product would look like is actually before that solution is ever deployed.

Veeam takes this topic seriously, and the ability to handle it is built into the backup data itself. My colleagues and I on the Veeam Product Strategy Team have casually referred to Veeam backups as “self-describing data”: you can open a backup easily and clearly see what it is. One way to realize this is that Veeam backup products provide an extract utility. The extract utility is very helpful for recovering data from the command line, a good use case if an organization is no longer a Veeam client (but we all know that won’t be the case!). Here is a blog by Vanguard Andreas Lesslhumer on this little-known tool.

Why do I bring up the extract utility when it comes to switching backup products? Because it hits on something I have taken very seriously of late: what I call Absolute Portability. This is a very significant topic in a world where organizations passionately want to avoid lock-in. Take the example I mentioned before of VMware vSphere Data Protection going end-of-life: Veeam Vanguard Andrea Mauro highlights how customers can migrate to a new solution, but chances are that will be a different experience. Lock-in can occur in many ways: cloud lock-in, storage device lock-in, or services lock-in. Veeam is completely against lock-in, and arguably so agnostic that it is sometimes hard to make a specific recommendation!

I want to underscore the ability to move data — in, out and around — as organizations see fit. For organizations who choose Veeam, there are great capabilities to keep data available.

So, why move? Because expanded capabilities will give organizations what they need.

The post Migration is never fun – Backups are no exception appeared first on Veeam Software Official Blog.



Windows Server 2019 and what we need to do now: Migrate and Upgrade!


IT pros around the world were happy to hear that Windows Server 2019 is now generally available, and since then there have been some changes to the release. This is a huge milestone, and I would like to congratulate the Microsoft team for launching the latest release of this amazing platform as a big highlight of Microsoft Ignite.

As important as this new operating system is, there is an important subtle point that needs to be raised now (and don’t worry, Veeam can help): both SQL Server 2008 R2 and Windows Server 2008 R2 will soon reach the end of extended support. This can be a significant topic to tackle, as many organizations have applications deployed on these systems.

What is the right thing to do today to prepare for leveraging Windows Server 2019? I’m convinced there is no single answer on the best way to address these systems; rather the right approach is to identify options that are suitable for each workload. This may also match some questions you may have. Should I move the workload to Azure? How do I safely upgrade my domain functional level? Should I use Azure SQL? Should I take physical Windows Server 2008 R2 systems and virtualize them or move to Azure? Should I migrate to the latest Hyper-V platform? What do I do if I don’t have the source code? These are all indeed natural questions to have now.

These are questions we need to ask today to move to Windows Server 2019, but how do we get there without any surprises? Let me re-introduce you to the Veeam DataLab. This technology was first launched by Veeam in 2010 and has evolved in every release and update since. Today, it is just what many organizations need to safely perform tests in an isolated environment and ensure there are no surprises in production. The figure below shows a DataLab:

Let’s deconstruct this a bit. An application group is an application you care about, and it can include multiple VMs. The proxy appliance isolates the DataLab from the production network, yet reproduces the IP space in the private network without interference via a masquerade IP address. With this configuration, the DataLab allows Veeam users to test changes to systems without risk to production, including upgrading to Windows Server 2019, changing database versions, and more. Over the next weeks and months, I’ll be writing a more comprehensive whitepaper that takes you through setting up a DataLab and performing specific tasks like upgrading to Windows Server 2019 or a newer version of SQL Server, as well as migrating to Azure.

Another key technology where Veeam can help is the ability to restore Veeam backups to Microsoft Azure. This technology has been available for a long while and is now built into Veeam Backup & Replication. This is a great way to get workloads into Azure with ease starting from a Veeam backup. Additionally, you can easily test other changes to Windows and SQL Server with this process — put it into an Azure test environment to test the migration process, connectivity and more. If that’s a success, repeat the process as part of a planned migration to Azure. This cloud mobility technique is very powerful and is shown below for Azure:

Why Azure?

This is because Microsoft announced that Extended Security Updates will be available for FREE in Azure for Windows Server 2008 R2 for an additional three years after the end-of-support deadline. Customers can rehost these workloads to Azure with no application code changes, giving them more time to plan their future upgrades. Read more here.

What is also great about moving workloads to Azure is that this applies to almost anything Veeam can back up: Windows Servers, Linux Agents, vSphere VMs, Hyper-V VMs and more!

Migrating to the latest platforms is a great way to stay in a supported configuration for critical applications in the data center. The difference is being able to do the migration without surprises and with complete confidence. This is where Veeam DataLabs and Veeam Recovery to Microsoft Azure can work in conjunction to provide you a seamless experience in migrating to the latest SQL Server and Windows Server platforms.

Have you started testing Windows Server 2019? How many Windows Server 2008 R2 and SQL Server 2008 systems do you have? Let’s get DataLabbing!

The post Windows Server 2019 and what we need to do now: Migrate and Upgrade! appeared first on Veeam Software Official Blog.



Daily administration meets software-defined storage with the Scale-Out Backup Repository


This post is admittedly long overdue. The Scale-Out Backup Repository (SOBR) is a very powerful management technology that has been in Veeam Backup & Replication since v9, but I recently had a situation in our lab that reminded me how powerful it is, and I thought it appropriate to re-introduce this feature.

The situation was that I needed to remove a backup repository and I didn’t want to lose any backup data or restore points. It’s easy to do this with the SOBR, but there is so much more to it. Let’s re-introduce the SOBR!

What is the SOBR?

The SOBR is a logical collection of individual backup repositories (where backups land from a storage perspective) in one pool. The underlying repositories are referred to as extents, and the parent SOBR aggregates the extents and summarizes their capacity. A picture helps describe this, so let’s look at the figure below:

This SOBR is a collection of six extents of different types holding backups from Veeam Agents, VMware vSphere, Microsoft Hyper-V and Nutanix AHV.

Why have this technology?

There are many reasons why the SOBR can add benefits to how organizations manage data in their environment. A few of the use cases that are available right now (and note – there will be more capabilities coming later this year) include:

  • The ability to easily migrate underlying backup repositories in and out of the SOBR
  • Data locality selection to keep backup files together within a job
  • Performance policy to keep types of backup files on appropriately performing storage resources
  • Invoke maintenance mode and evacuate extents as underlying repository needs change

These are just a few ways that the SOBR can solve real challenges for the backup infrastructure in the data center. If you have not already, check out the SOBR pages in the Veeam Help Center, where you can find nearly 20 sub-pages covering how the SOBR is administered and what it can do.

What does the SOBR do?

Like a regular repository, the SOBR holds backup data. The real benefits come when there need to be changes to the backup infrastructure. This will save administrators a lot of work in the following situations:

  • A backup repository needs to be offline for maintenance
  • A backup repository needs to be removed (such as reaching end-of-life or coming off lease)
  • Data needs to be evacuated from a backup repository
  • Performance design can be improved

The performance design is something that can be really intriguing for those who have a mix of different storage systems. Some SOBR implementations will put incremental backups on a NAS or low-end SAN device and full backups on a deduplication appliance. This is an attractive arrangement because the performance profile of each type of backup file is aligned with the storage capabilities of those extents.
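As a rough illustration of that placement idea, the routine below routes backup files by type. This is a hypothetical sketch, not Veeam’s actual policy engine, and the extent names are invented; .vbk and .vib, however, are the real extensions Veeam uses for full and incremental backup files:

```python
# Map backup file types to the extent best suited to their I/O profile:
# sequential fulls to a dedup appliance, incrementals to faster NAS/SAN.
PLACEMENT = {
    ".vbk": "Dedup-01",  # full backup files
    ".vib": "NAS-01",    # incremental backup files
}


def choose_extent(backup_file: str) -> str:
    """Pick a target extent name based on the backup file's extension."""
    for suffix, extent in PLACEMENT.items():
        if backup_file.endswith(suffix):
            return extent
    raise ValueError(f"unknown backup file type: {backup_file}")


print(choose_extent("job1_full.vbk"))  # Dedup-01
print(choose_extent("job1_incr.vib"))  # NAS-01
```

The real performance policy considers far more than a file extension, but the principle is the same: each file type lands on storage whose performance matches its access pattern.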

Taking a backup repository offline is a very easy step. In the figure below, I have a SOBR with three extents: two local storage resources and one NAS device. One of the local storage resources is on the C: drive, which is not optimal for backup placement. I can simply right-click and put this extent into maintenance mode.

Once a repository is in maintenance mode, an important fact must be considered: backups can still run. In this example, there are two other extents ready to receive backup jobs. This is a very powerful characteristic: changes need to be made to the backup infrastructure over time, but we don’t want to miss restore points in order to make them. This has been one of the key deliverables of the SOBR for Veeam installations from the beginning.

The extent can then have the backups evacuated, which will place the data on the remaining extents.

Once those backups are evacuated, the extent that is in maintenance mode can be removed from the configuration of the SOBR (and the remaining extents left in place) with ease.
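The maintenance-mode, evacuate, remove sequence can be sketched in a few lines of Python. This is an illustrative model only: the extent names and the simplistic fill-the-emptiest placement rule are my own assumptions, not Veeam’s actual logic:

```python
class Extent:
    def __init__(self, name: str):
        self.name = name
        self.maintenance = False
        self.restore_points: list[str] = []


class SOBR:
    def __init__(self, extents: list[Extent]):
        self.extents = extents

    def active_extents(self) -> list[Extent]:
        # Backups keep running: jobs simply skip extents in maintenance.
        return [e for e in self.extents if not e.maintenance]

    def store(self, restore_point: str) -> None:
        # Naive placement rule for illustration: fill the emptiest extent.
        target = min(self.active_extents(), key=lambda e: len(e.restore_points))
        target.restore_points.append(restore_point)

    def evacuate(self, extent: Extent) -> None:
        # Move existing restore points onto the remaining active extents.
        assert extent.maintenance, "put the extent into maintenance mode first"
        for rp in extent.restore_points:
            self.store(rp)
        extent.restore_points = []

    def remove(self, extent: Extent) -> None:
        assert not extent.restore_points, "evacuate before removing"
        self.extents.remove(extent)


c_drive, nas, san = Extent("C:"), Extent("NAS-01"), Extent("SAN-01")
sobr = SOBR([c_drive, nas, san])
for i in range(6):
    sobr.store(f"rp{i}")

c_drive.maintenance = True  # step 1: enter maintenance mode
sobr.evacuate(c_drive)      # step 2: evacuate the backups
sobr.remove(c_drive)        # step 3: remove the extent
print(len(sobr.extents))    # 2
```

The key property the sketch demonstrates is that no restore point is lost at any step, and new backups could keep landing on the active extents the entire time.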

What about smaller environments, can the SOBR help here also?

Truth be told, this was one of the first use cases for the SOBR! As the story was told to me, one of the first ideas came from a scenario where an organization had a single backup repository. How could they move the backup job configuration and data to a new backup repository with ease? The SOBR made this challenge easy.

I refer to this as a “Single-Instance SOBR,” or basically a SOBR with one extent. The thought is to have the SOBR defined but backed by only one extent. When the time arrives that the backup storage needs to be replaced or can’t be scaled to the size needed, an additional extent is added, the original repository is placed into maintenance mode, its backups are evacuated, and it is then removed from the SOBR configuration. Just like that, a new backup storage resource is in place without missing a single backup job. The figure below logically shows a Single-Instance SOBR:

In this configuration, the sole repository defined in the SOBR can be replaced with ease by adding another extent through the following steps (the unchanged part of the infrastructure is shown in gray for simplicity):

The Scale-Out Backup Repository and you

The beautiful thing here is that we can let the world of day-to-day backup infrastructure administration leverage software-defined capabilities for backup storage. The management of the backup storage through this mechanism makes it very simple to make changes to the underlying storage while having absolute portability of the critical backup data.

Additionally, later this year we will be releasing more capabilities for the SOBR that will further drive the benefits needed today: portability, data management and new locations.

Are you using the SOBR? If so, how do you have it configured? Share your comments below.

The post Daily administration meets software-defined storage with the Scale-Out Backup Repository appeared first on Veeam Software Official Blog.


SysAdmin Day 2018: Are we administering systems like it is 2018?

Source: Veeam

Days of appreciation in the workplace are an interesting event. Whether it’s Administrative Professionals’ Day, Boss’s Day, Day of the Programmer, or others I may have missed (and many in other fields, such as medicine), they are a way to offer thanks for professions that are at times hard. However, I contend that the acknowledgement of a profession like system administration comes with a big caveat: Has this process been innovated for 2018?

SysAdmin Day started (it can be traced to the year 2000) as a great way to give thanks to the professionals for the hard work that goes along with being a system administrator. Consider the context at the beginning of this century, however. Things were harder then. There were more manual IT tasks, more equipment and less automation. This was an important time when the IT space was ripe for innovation: virtualization was just emerging as a platform, and the cloud was strictly a meteorological term.

The challenge I pose for today is to ask if systems are indeed being administered like it’s 2018. Are SysAdmins seeking investments (not just in products, but also in personal skills) in automation? Are SysAdmins looking to have visibility into all of their data? Are SysAdmins able to have the mobility for workloads that they need today? These are important questions that are in line with the spirit of SysAdmin Day, but I contend that the skills of a SysAdmin depend on embracing the capabilities of the modern era.

Each of those questions is important in today’s IT landscape. The mobility aspect is one that I am very passionate about, and it can avoid problems later, so I’ll discuss it in a bit more detail. When a SysAdmin mentions mobility, what comes to mind? Answers could range from moving an application to new hardware, upgrading to a new version, or relocating an application to a higher-performing network or site. I contend that today’s mobility expectation is that applications can move to the best platforms. This includes the cloud, a hypervisor platform such as Hyper-V, vSphere or Acropolis, or even a next-generation technology for the application. SysAdmins need to be careful not to create traps in their IT practice by keeping obsolete components in the mix.

One common example is obsolete applications on obsolete hardware. I have occasionally spoken to organizations that run obsolete applications on obsolete operating systems which require obsolete hardware. This really strikes me as bad practice today. I’m usually talking to these organizations about options related to backup and Availability technologies; however, we reach a stopping point with some of the obsolete museum pieces that are still critical to their operation. I commonly have to advise that these organizations have bigger problems than backup. There can be a bigger business issue if the organization is dependent on something that can’t be made available due to obsolete technologies.

These are just a few examples, but the life of the SysAdmin is a tough job. It always has been, and always will be. There is a debate about whether the SysAdmin job will even exist in the near future due to newer technologies (such as the cloud). I contend that it will, but only if the SysAdmins of today adapt to current conditions and deliver the best service with the best technologies that don’t put their organizations at risk. For the SysAdmins out there: great job, keep up the good work, and always be on the lookout for what you can do better next time, for the next project and for whatever comes up tomorrow.

The post SysAdmin Day 2018: Are we administering systems like it is 2018? appeared first on Veeam Software Official Blog.
