Windows Server 2019 and what we need to do now: Migrate and Upgrade!

Source: Veeam

IT pros around the world were happy to hear that Windows Server 2019 is now generally available, even though there have been some changes to the release since the initial announcement. This is a huge milestone, and I would like to congratulate the Microsoft team on launching the latest release of this amazing platform as a big highlight of Microsoft Ignite.

As important as this new operating system is now, there is a subtle but important point that needs to be raised (and don’t worry – Veeam can help): extended support for both SQL Server 2008 R2 and Windows Server 2008 R2 is ending soon. This can be a significant topic to tackle, as many organizations still have applications deployed on these systems.

What is the right thing to do today to prepare for leveraging Windows Server 2019? I’m convinced there is no single best answer for these systems; rather, the right approach is to identify options that are suitable for each workload. A number of questions naturally arise: Should I move the workload to Azure? How do I safely upgrade my domain functional level? Should I use Azure SQL? Should I virtualize physical Windows Server 2008 R2 systems or move them to Azure? Should I migrate to the latest Hyper-V platform? What do I do if I don’t have the source code? These are all natural questions to have now.

These are questions we need to ask today to move to Windows Server 2019, but how do we get there without any surprises? Let me re-introduce you to the Veeam DataLab. This technology was first launched by Veeam in 2010 and has evolved with every release and update since. Today, it is just what many organizations need to safely perform tests in an isolated environment and ensure that there are no surprises in production. The figure below shows a DataLab:

Let’s deconstruct this a bit first. An application group is an application you care about, and it can include multiple VMs. The proxy appliance isolates the DataLab from the production network yet reproduces the IP space of the private network, without interference, via masquerade IP addresses. With this configuration, the DataLab allows Veeam users to test changes to systems without risk to production. This can include upgrading to Windows Server 2019, changing database versions, and more. Over the coming weeks I’ll be writing a more comprehensive whitepaper that takes you through the process of setting up a DataLab and performing specific tasks like upgrading to Windows Server 2019 or a newer version of SQL Server, as well as migrating to Azure.
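
To make the masquerading idea concrete, here is a small Python sketch of the underlying address arithmetic: the host bits of the production IP are kept and the network bits are swapped for the masquerade network. The networks shown are made-up examples, and this is only an illustration of the concept, not Veeam’s implementation.

```python
# Illustrative sketch of masquerade addressing: keep the host bits of the
# production IP, substitute the masquerade network bits. Networks are examples.
import ipaddress

def masquerade_ip(prod_ip: str, prod_net: str, masq_net: str) -> ipaddress.IPv4Address:
    """Map a production IP into the masquerade network, preserving host bits."""
    ip = ipaddress.ip_address(prod_ip)
    src = ipaddress.ip_network(prod_net)
    dst = ipaddress.ip_network(masq_net)
    host_bits = int(ip) & int(src.hostmask)
    return ipaddress.ip_address(int(dst.network_address) | host_bits)

# A lab copy of the VM at 192.168.1.10 is reachable from production as 172.17.1.10
print(masquerade_ip("192.168.1.10", "192.168.0.0/16", "172.17.0.0/16"))
```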

Another key technology where Veeam can help is the ability to restore Veeam backups to Microsoft Azure. This technology has been available for a long while and is now built into Veeam Backup & Replication. This is a great way to get workloads into Azure with ease starting from a Veeam backup. Additionally, you can easily test other changes to Windows and SQL Server with this process — put it into an Azure test environment to test the migration process, connectivity and more. If that’s a success, repeat the process as part of a planned migration to Azure. This cloud mobility technique is very powerful and is shown below for Azure:

Why Azure?

This is because Microsoft announced that Extended Security Updates will be available for FREE in Azure for Windows Server 2008 R2 for an additional three years after the end-of-support deadline. Customers can rehost these workloads to Azure with no application code changes, giving them more time to plan their future upgrades. Read more here.

What is also great about moving workloads to Azure is that this applies to almost anything Veeam can back up: Windows servers, Linux agents, vSphere VMs, Hyper-V VMs and more!

Migrating to the latest platforms is a great way to stay in a supported configuration for critical applications in the data center. The difference is being able to do the migration without any surprises and with complete confidence. This is where Veeam DataLabs and Veeam Recovery to Microsoft Azure can work in conjunction to provide you with a seamless experience in migrating to the latest SQL Server and Windows Server platforms.

Have you started testing Windows Server 2019? How many Windows Server 2008 R2 and SQL Server 2008 systems do you have? Let’s get DataLabbing!

The post Windows Server 2019 and what we need to do now: Migrate and Upgrade! appeared first on Veeam Software Official Blog.



How to Enable Rapid Patch Testing with Veeam Backups and Veeam DataLabs

Source: Veeam

Unfortunately, bad patches are something everyone has experienced at one point or another. Just take the most recent example, the Microsoft Windows October 2018 Update that impacted both desktop and server versions of Windows. The update resulted in missing files on impacted systems and has temporarily been paused while Microsoft investigates.

Because of incidents like this, organizations are often slow to adopt patches. This is one of the reasons the WannaCry ransomware was so impactful. Unpatched systems introduce risk into environments, as new exploits for old problems are on the rise. Before patching a system, organizations must first do two things: back up the systems to be patched, and perform patch testing.

A recent, verified Veeam Backup

Before we patch a system, we always want to make sure we have a backup that matches our organization’s Recovery Point Objective (RPO), and that the backup was successful. Luckily, Veeam Backup & Replication makes this easy to schedule, or even run on demand as needed.
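
Checking this programmatically is possible if you run Veeam Backup Enterprise Manager, which exposes a REST API (by default on port 9398). The sketch below is a rough illustration only: the endpoint shapes and field names follow the v1.4 reference but may differ in your version, and the host, job name and credentials are placeholders.

```python
# Rough sketch: confirm the newest session of a backup job succeeded within the
# RPO window before patching. Verify endpoint/field names against your EM version.
from datetime import datetime, timedelta, timezone
import requests

BASE = "https://em.example.com:9398/api"   # placeholder Enterprise Manager host
RPO = timedelta(hours=24)

# Logging in returns a session id in the X-RestSvcSessionId response header.
login = requests.post(f"{BASE}/sessionMngr/?v=latest",
                      auth=("DOMAIN\\backupadmin", "password"), verify=False)
headers = {"X-RestSvcSessionId": login.headers["X-RestSvcSessionId"],
           "Accept": "application/json"}

# Ask the query service for the most recent session of one job, newest first.
resp = requests.get(f"{BASE}/query", headers=headers, verify=False, params={
    "type": "BackupJobSession", "format": "Entities",
    "filter": 'JobName=="SQL-Prod-Backup"',
    "sortDesc": "CreationTime", "pageSize": 1})
sessions = resp.json()["Entities"]["BackupJobSessions"]["BackupJobSessions"]

latest = sessions[0]  # adjust the JSON path if your EM version nests differently
ended = datetime.fromisoformat(latest["EndTimeUTC"].replace("Z", "+00:00"))
fresh = datetime.now(timezone.utc) - ended < RPO
print("OK to patch" if latest["Result"] == "Success" and fresh
      else "Hold off: no recent successful backup")
```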

Beyond the backup itself succeeding, we also want to verify that the backup works correctly. Veeam’s SureBackup technology allows for this by booting the VM in an isolated environment and then testing the VM to make sure it is functioning properly. Veeam SureBackup gives organizations additional peace of mind that their backups have not only succeeded, but will be usable.
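
SureBackup can also run custom test scripts against the booted VMs and treat a non-zero exit code as a failed verification. As a hedged illustration (how the address and port are passed in depends on how you wire the script into the job), a minimal Python probe might simply check that the application port answers inside the lab:

```python
# Minimal sketch of a SureBackup-style verification probe: exit 0 if the
# application port answers inside the isolated lab, non-zero otherwise.
import socket
import sys

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host, port = sys.argv[1], int(sys.argv[2])  # e.g. the lab VM's masquerade IP
    sys.exit(0 if port_open(host, port) else 1)
```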

Rapid patch testing with Veeam DataLabs

Veeam DataLabs enable us to test patches rapidly, without impacting production. In fact, we can use that most recent backup we just took of our environment to perform the patch testing. Remember the isolated environment we just talked about with Veeam SureBackup technology? You guessed it, it is powered by Veeam DataLabs.

Veeam DataLabs allows us to spin up complete applications in an isolated environment. This means that we can test patches across a variety of servers with different functions, all without even touching our production environment. Perfect for patch testing, right?

Now, let’s take a look at how the Veeam DataLab technology works.

Veeam DataLabs are configured in Veeam Backup & Replication. Once they are configured, a virtual appliance is created in VMware vSphere to house the virtual machines to be tested. Beyond the virtual machines you plan on testing, you can also include key infrastructure services such as Active Directory, or anything else the virtual machines you plan on testing require to work correctly. This group of supporting VMs is called an Application Group.


In the above diagram, you can see the components that support a Veeam DataLab environment.

Remember, these are just copies from the latest backup, they do not impact the production virtual machines at all. To learn more about Veeam DataLabs, be sure to take a look at this great overview hosted here on the Veeam.com blog.

So what happens if we apply a bad patch to a Veeam DataLab environment? Absolutely nothing. At the end of the DataLab session, the VMs are powered off, and the changes made during the session are thrown away. There is no impact to the production virtual machines or the backups leveraged inside the Veeam DataLab. With Veeam DataLabs, patch testing is no longer a big deal, and organizations can proceed with their patching activities with confidence.

This DataLab can then be leveraged for testing, or for running Veeam SureBackup jobs. SureBackup jobs also provide reports upon completion. To learn more about SureBackup jobs, and see how easy they are to configure, be sure to check out the SureBackup information in the Veeam Help Center.

Patch testing to improve confidence

The hesitance to apply patches is understandable in organizations; however, there can be significant risk if patches are not applied in a timely manner. By leveraging Veeam backups along with Veeam DataLabs, organizations can quickly test as many servers and environments as they would like before installing patches on production systems. The ability to rapidly test patches ensures any potential issue is discovered long before any data loss or negative impact to production occurs.

No VMs? No problem!

What about the other assets in your environment that can be impacted by a bad patch, such as physical servers, desktops, laptops, and full Windows tablets? You can still protect these assets by backing them up using Veeam Agent for Microsoft Windows. These agents can be automatically deployed to your assets from Veeam Backup & Replication. To learn more about Veeam Agents, take a look at the Veeam Agent Getting Started Guide.

To see the power of Veeam Backup & Replication, Veeam DataLabs, and Veeam Agent for Microsoft Windows for yourself, be sure to download the 30-day free trial of Veeam Backup & Replication here.

The post How to Enable Rapid Patch Testing with Veeam Backups and Veeam DataLabs appeared first on Veeam Software Official Blog.



Considerations in a multi-cloud world

Source: Veeam

With the infrastructure world in constant flux, more and more businesses are adopting a multi-cloud deployment model. The challenges this creates are becoming more complex and, in some cases, cumbersome. Consider the impact on the data alone. Ten years ago, all anyone worried about was whether the SAN would stay up and, if it didn’t, whether their data would be protected. Fast forward to today: even a small business can have data scattered across the globe. Maybe it has a few vSphere hosts at HQ, with branch offices using workloads running in the cloud or Software-as-a-Service applications. Maybe backups are stored in an object storage repository (somewhere — but only one guy knows where). This is happening in the smallest of businesses, so as a business grows and scales, the challenges become even more complex.

Potential pitfalls

Now this blog is not about how Veeam manages data in a multi-cloud world, it’s more about how to understand the challenges and the potential pitfalls. Take a look at the diagram below:

Veeam supports a number of public clouds and different platforms. This is a typical scenario in a modern business. Picture the scene: workloads are running on top of a hypervisor like VMware vSphere or Nutanix, with some services running in AWS. The company is leveraging Microsoft Office 365 for its email services (people rarely build Exchange environments anymore) with Active Directory extended into Azure. Throw in some SAP or Oracle workloads, and your data management solution has just gone from “I back up my SAN every night to tape” to “where is my data now, and how do I restore it in the event of a failure?” If worrying about business continuity didn’t keep you awake 10 years ago, it surely does now. This is the impact of modern life. The more agility we provide on the front end for an IT consumer, the more complexity there has to be on the back end.

With the ever-growing complexity, global reach and scale of public clouds, as well as a more hands-off approach from IT admins, this is a real challenge to protect a business, not only from an outage, but from a full-scale business failure.

Managing a multi-cloud environment

When looking to manage a multi-cloud environment, it is important to understand these complexities and how to avoid costly mistakes. The simplistic approach to any environment, whether it is running on premises or in the cloud, is to consider all the options. Sounds obvious, but that has not always been the case. Where or how you deploy a workload is becoming irrelevant, but how you protect that workload still is. Think about the public cloud: if you deploy a virtual machine and set the firewall ports to any:any (that would never happen, would it?), you can be pretty sure someone will gain access to that virtual machine at some point. Making sure that workload is protected and recoverable is critical in this instance. The same considerations and requirements apply whether you run on premises or off premises: how do you protect the data, and how do you recover it in the event of a failure or security breach?
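
One small, practical defense is auditing exported rule sets for exactly that kind of any:any exposure. The sketch below uses a hypothetical rule format purely for illustration; map it onto whatever your cloud or firewall tooling actually emits.

```python
# Illustrative audit: flag overly permissive ("any:any") allow rules in an
# exported rule set. The rule format here is hypothetical.
rules = [
    {"name": "web-in",   "src": "0.0.0.0/0", "ports": "443", "action": "allow"},
    {"name": "temp-rdp", "src": "0.0.0.0/0", "ports": "*",   "action": "allow"},
]

def too_permissive(rule: dict) -> bool:
    # "Any source" plus "any port" plus "allow" is the red flag combination.
    return (rule["action"] == "allow"
            and rule["src"] == "0.0.0.0/0"
            and rule["ports"] == "*")

for rule in rules:
    if too_permissive(rule):
        print(f"WARNING: rule '{rule['name']}' allows any source on any port")
```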

What to consider when choosing a cloud platform?

This is something often overlooked, but it has become clear in recent years that organizations do not choose a cloud platform for single, specific reasons like cost savings, higher performance or quicker service times, but rather because the cloud is the right platform for a specific application. Sure, individual benefits may come into play, but you should always question the “why” behind any platform selection.

When you’re looking at data management platforms, consider not only what your environment looks like today, but also what it will look like tomorrow. Does the platform you’re purchasing today have a roadmap for the future? If you can see that the company has a clear vision and understanding of what is happening in the industry, then you can feel safe trusting that platform to manage your data anywhere in the world, on any platform. If a roadmap is not forthcoming, or they just don’t get the vision you are sharing about your own environment, perhaps it’s time to look at other vendors. It’s definitely something to think about next time you’re choosing a data management solution or platform.

The post Considerations in a multi-cloud world appeared first on Veeam Software Official Blog.



More tips and tricks for a smooth Veeam Availability Orchestrator deployment

Source: Veeam

Welcome to even more tips and tricks for a smooth Veeam Availability Orchestrator deployment. In the first part of our series, we covered the following topics:

  • Plan first, install next
  • Pick the right application to protect to get a feel for the product
  • Decide on your categorization strategy, such as using VMware vSphere Tags, and implement it
  • Start with a fresh virtual machine

Configure the DR site first

After you have installed Veeam Availability Orchestrator, the first site you configure will be your DR site. If you are also deploying production sites, it is important to note that you cannot change your site’s personality after the initial configuration. This is why it is so important to plan before you install, as we discussed in the first article in this series.

As you are configuring your Veeam Availability Orchestrator site, you will see an option for installing the Veeam Availability Orchestrator Agent on a Veeam Backup & Replication server. Remember, you have two options here:

  1. Use the embedded Veeam Backup & Replication server that is installed with Veeam Availability Orchestrator
  2. Push the Veeam Availability Orchestrator Agent to existing Veeam Backup & Replication servers

If you change your mind and do in fact want to use an existing Veeam Backup & Replication server, it is very easy to install the agent after initial configuration. In the Veeam Availability Orchestrator configuration screen, simply click VAO Agents, then Install. You will just need to know the name of the Veeam Backup & Replication server you would like to add and have the proper credentials.

Ensure replication jobs are configured

No matter which Veeam Backup & Replication server you choose to use for Veeam Availability Orchestrator, it is important to ensure your replication jobs are configured in Veeam Backup & Replication before you get too far in configuring your Veeam Availability Orchestrator environment. After all, Veeam Availability Orchestrator cannot fail replicas over if they are not there!

If for some reason you forget this step, do not worry. Veeam Availability Orchestrator will let you know when a Readiness Check is run on a Failover Plan. As the last step in creating a Failover Plan, Veeam Availability Orchestrator will run a Readiness Check unless you specifically un-check this option.

If you did forget to set up your replication jobs, Veeam Availability Orchestrator will let you know, because your Readiness Check will fail, and you will not see green checkmarks like this in the VM section of the Readiness Check Report.

For a much more in-depth overview of the relationship between Veeam Backup & Replication and Veeam Availability Orchestrator, be sure to read the white paper Technical Overview of Veeam Availability Orchestrator Integration with Veeam Backup & Replication.

Do not forget to configure Veeam DataLabs

Before you can run a Virtual Lab Test on your new Failover Plan (you can find a step-by-step guide to configuring your first Failover Plan here), you must first configure your Veeam DataLab in Veeam Backup & Replication. If you have not worked with Veeam DataLabs before (previously known as Veeam Virtual Labs), be sure to read the white paper I mentioned above, as configuration of your first Veeam DataLab is also covered there.

After you have configured your Veeam DataLab in Veeam Backup & Replication, you will then be able to run Virtual Lab Tests on your Failover Plan, as well as schedule Veeam DataLabs to run whenever you would like. Scheduling Veeam DataLabs is ideal for providing an isolated copy of the production environment for application testing, and it can help you make better use of those idle DR resources.

Veeam DataLabs can be run on demand or scheduled from the Virtual Labs screen. When running or scheduling a lab, you can also select the duration of time you would like the lab to run for, which can be handy when scheduling Veeam DataLab resources for use by multiple teams.

There you have it, even more tips and tricks to help you get Veeam Availability Orchestrator up and running quickly and easily. Remember, a free 30-day trial of Veeam Availability Orchestrator is available, so be sure to download it today!

The post More tips and tricks for a smooth Veeam Availability Orchestrator deployment appeared first on Veeam Software Official Blog.



Why our software-driven, hardware agnostic approach makes sense for backups

Source: Veeam

Having been hands-on in service provider land for the entirety of my career prior to joining Veeam, I understand the pain points that come with offering backup and recovery services. I’ve spent countless hours working on getting the best combination of hardware and software for those services. I also know firsthand the challenges that storage platforms pose for architecture, engineering and operations teams who design, implement and manage these platforms.

Storage scalability

An immutable truth in our world is that backup and storage go hand in hand; you can’t have one without the other. In recent times, there has been extreme growth in the amount of data being backed up, and the sprawl of that data has become increasingly challenging to manage. While data is growing quicker than it ever has, in relative terms the issues it creates haven’t changed in the last ten or so years, though they have been magnified.

Focusing on storage, those who have deployed any storage platform understand that there will come a point where hardware and software constraints start to come into play. I’ve not yet experienced or heard of a storage system that doesn’t impose some limit on scale or performance at some point. Whether you are constrained by physical disk or controller-based limits or by software overheads, the reality is that no system is infinitely scalable and free of challenges.

The immediate solution to these challenges, in my experience (and anecdotally), has always been to throw more hardware at the platform by purchasing more. Whether the constraint is performance or disk capacity, the end result is always to expand capacity or upgrade the core hardware components to get the system back to a point where it performs as expected.

That said, there are a number of systems that do work well and, if architected and managed correctly, will offer longer-term service sustainability. When it comes to designing storage for backup data, the principles used to design for other workloads, such as virtual machines, cannot simply be applied. Backup data is a long game, and portability of that data should be paramount when choosing what storage to use.

How Veeam helps

Veeam offers tight integration with a number of top storage vendors via our storage integrations. Not only do these integrations offer flexibility to our customers and partners, but they also offer absolute choice and mobility when it comes to the short- and long-term retention of backup data.

Extending that portability message: the way backup data is stored should mean that when storage systems reach the end of their lifetime, the data isn’t held prisoner by the hardware. Another inevitability of storage is that there will come a time when it needs replacing. This is where Veeam’s hardware-agnostic, software-defined approach to backup comes into play.

Recently, there have been a number of products that have come into the market that offer an all-in-one solution for data protection in the form of software tied to hardware appliances. The premise of these offerings is ease of use and single platform to manage. While it’s true that all-in-one solutions are attractive, there is a sting in the tail of any platform that offers software that is tied to hardware.

Conclusion

Fundamentally, the issues that apply to storage platforms apply to these all-in-one appliances. They will reach a point where performance starts to struggle, upgrades are required and, ultimately, systems need to be replaced. This is where the ability to have freedom of choice and a decoupled approach to software and hardware ultimately results in total control of where your backup data is stored, how it performs and when that data is required to be moved or migrated.

You only achieve this through backup software that’s separated from the hardware. While an all-in-one solution might seem like a panacea, there needs to be consideration of what it means three, five or ten years into the future. Again, portability and choice are king when it comes to choosing a backup vendor. Lock-in should be avoided at all costs.

The post Why our software-driven, hardware agnostic approach makes sense for backups appeared first on Veeam Software Official Blog.



Daily administration meets software-defined storage with the Scale-Out Backup Repository

Source: Veeam

This post is admittedly long overdue. The Scale-Out Backup Repository (SOBR) is a very powerful management technology that has been in Veeam Backup & Replication since v9, but I recently had a situation in our lab that made me remember how powerful this technology is, and I thought it appropriate to re-introduce this feature.

The situation was that I needed to remove a backup repository and I didn’t want to lose any backup data or restore points. It’s easy to do this with the SOBR, but there is so much more to it. Let’s re-introduce the SOBR!

What is the SOBR?

The SOBR is a logical collection of individual backup repositories (where backups go, from a storage perspective) in one pool. The underlying repositories are referred to as extents, and the parent SOBR is a collection of all the extents, summarizing their capacity. A picture helps describe this, so let’s look at the figure below:

This SOBR is a collection of six extents of different types holding backups from Veeam Agents, VMware vSphere, Microsoft Hyper-V and Nutanix AHV.
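
Conceptually, the SOBR is little more than a container that presents its extents as one pool. A toy Python model (names and capacities invented for illustration) shows the idea:

```python
# Toy model of how a SOBR presents one logical pool: capacity is simply the
# sum across its extents. Names and numbers are made up.
from dataclasses import dataclass

@dataclass
class Extent:
    name: str
    capacity_tb: float
    free_tb: float

class ScaleOutRepository:
    def __init__(self, extents: list):
        self.extents = extents

    @property
    def capacity_tb(self) -> float:
        return sum(e.capacity_tb for e in self.extents)

    @property
    def free_tb(self) -> float:
        return sum(e.free_tb for e in self.extents)

sobr = ScaleOutRepository([Extent("NAS-01", 40, 12), Extent("SAN-01", 20, 5)])
print(f"{sobr.capacity_tb} TB total, {sobr.free_tb} TB free")  # 60 TB total, 17 TB free
```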

Why have this technology?

There are many reasons why the SOBR can add benefits to how organizations manage data in their environment. A few of the use cases that are available right now (and note – there will be more capabilities coming later this year) include:

  • The ability to easily migrate underlying backup repositories in and out of the SOBR
  • Data locality selection to keep backup files together within a job
  • Performance policy to keep types of backup files on appropriately performing storage resources
  • Invoke maintenance mode and evacuate extents as underlying repository needs change

These are just a few ways the SOBR can solve real challenges in the data center for the backup infrastructure. If you have not already, please check out the SOBR pages in the Veeam Help Center, where you can find nearly 20 sub-pages on how the SOBR can be administered and what it can do.

What does the SOBR do?

Like a regular repository, the SOBR holds backup data. The real benefits come when there need to be changes to the backup infrastructure. This will save administrators a lot of work in the following situations:

  • A backup repository needs to be offline for maintenance
  • A backup repository needs to be removed (such as being end-of-life or lease is up)
  • Data needs to be evacuated from a backup repository
  • Performance design can be improved

The performance design is something that can really be intriguing for those who have a mix of different storage systems. Some SOBR implementations will put incremental backups on a NAS or low-end SAN device and full backups on a deduplication appliance. This is an attractive arrangement as the performance profile of each of these types of backups files is in alignment with the storage capabilities of those extents.
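
In rough pseudo-Python, the performance policy boils down to routing by backup file type; the extent names below are invented for illustration:

```python
# Toy illustration of the "performance policy" idea: incrementals to cheaper
# storage, full backups to the deduplication appliance. Extent names are invented.
extents = {
    "nas-01":    {"tier": "incremental", "free_tb": 8.0},
    "dedupe-01": {"tier": "full",        "free_tb": 40.0},
}

def pick_extent(backup_type: str) -> str:
    # Choose a candidate of the right tier, preferring the most free space.
    candidates = [n for n, e in extents.items() if e["tier"] == backup_type]
    return max(candidates, key=lambda n: extents[n]["free_tb"])

print(pick_extent("full"))         # dedupe-01
print(pick_extent("incremental"))  # nas-01
```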

Taking a backup repository offline is a very easy step. In the figure below, I have a SOBR with three extents: two local storage resources and one NAS device. The one local storage resource is on the C: drive — which is not optimal for backup placement. I can simply right-click and put this extent into maintenance mode.

Once a repository is in maintenance mode, an important fact must be considered: backups can still run. In this example, there are two other extents ready to receive backup jobs. This is a very powerful characteristic: changes need to be made to the backup infrastructure over time, but we don’t want to miss restore points in order to make them. This is one of the key deliverables the SOBR has brought to Veeam installations from the beginning.

The extent can then have the backups evacuated, which will place the data on the remaining extents.

Once those backups are evacuated, the extent that is in maintenance mode can be removed from the configuration of the SOBR (and the remaining extents left in place) with ease.

What about smaller environments, can the SOBR help here also?

Truth be told, this was one of the first use cases of the SOBR! As the story is told to me, one of the first ideas came for the scenario where an organization had one backup repository. How could they move backup job configuration and data to a new backup repository with ease? This challenge was made easy with the SOBR.

I refer to this as a “Single-Instance SOBR” or basically a SOBR with one extent. The thought is, let’s have the SOBR defined but backed by only one extent. In this way, when the time arrives that the backup storage needs to be replaced or can’t be scaled to the size needed, an additional extent is added and then the first repository is placed into maintenance mode, backups evacuated, then removed from the SOBR configuration. Just like that — a new backup storage resource is in place without missing a single backup job. The figure below logically shows a Single-Instance SOBR:

In this configuration, the sole repository that is defined in the SOBR could be replaced by adding another extent with ease through the following steps (the unchanged part of the infrastructure made gray for simplicity):

The Scale-Out Backup Repository and you

The beautiful thing here is that we can let the world of day-to-day backup infrastructure administration leverage software-defined capabilities for backup storage. The management of the backup storage through this mechanism makes it very simple to make changes to the underlying storage while having absolute portability of the critical backup data.

Additionally, later this year we will be releasing more capabilities for the SOBR that will further drive the benefits needed today: portability, data management and new locations.

Are you using the SOBR? If so, how do you have it configured? Share your comments below.

The post Daily administration meets software-defined storage with the Scale-Out Backup Repository appeared first on Veeam Software Official Blog.



Veeam Availability on Cisco HyperFlex is here

Source: Veeam

Veeam and Cisco have been partnering closely since 2013 to deliver solutions addressing the needs of our joint customers. Our customers have grown to expect the Cisco and Veeam hallmarks — performance, manageability and reliability. Our relationship continues to evolve and grow deeper. We started with meet-in-the-channel solutions with reference architectures based on Cisco UCS storage servers, then added integration with HyperFlex snapshots. And in 2017, Veeam was selected as Cisco’s global ISV partner of the year and was also added to the Cisco price list to give partners and customers easier access to our joint solutions.

Hyperconverged Infrastructure meets Intelligent Data Management

Recently we announced a new joint solution, Veeam Availability on Cisco HyperFlex. This new solution brings together the power of the Veeam Hyper-Availability Platform and Cisco’s industry-leading Hyperconverged Infrastructure (HCI) solution, HyperFlex, to deliver an enterprise-grade, highly available Intelligent Data Management platform. So, what’s different?

To date, Veeam has worked with Cisco HyperFlex through native snapshot integration to provide Availability for HyperFlex data, with HyperFlex as the data source. With the introduction of Cisco HyperFlex with large form factor (LFF) drives, Veeam can now utilize HyperFlex as the target for the Veeam backup repository. In fact, all the Veeam services — the backup manager, proxy servers and repository — can efficiently run on the HyperFlex platform. The result is complete peace of mind.

The combination of Veeam on HyperFlex means customers get all the benefits of two industry-leading vendors from brands they can trust. Now the data protection platform is protected against node or disk failure and Veeam can leverage advanced resiliency features in HyperFlex like failover, clustering and self-healing. Add HyperFlex’s seamless horizontal scalability of storage and throughput and you now have an enterprise class solution. This joint solution can easily meet the needs of enterprise IT organizations that are struggling with unreliable legacy technologies that cannot scale and are unable to provide the Hyper-Availability today’s enterprises require.

Veeam and Cisco closely collaborated to develop optimized configurations that will be available and supported through Cisco, packaged into a single SKU to make it easier and faster to purchase. That means it’s a win-win for customers in that this provides benefits such as:

  • Hyper-Availability for all workloads — virtual, physical and cloud
  • Seamless scalability and reduced operational costs
  • Reduced risk and accelerated time to value
  • Simplified and optimized deployment: A single Cisco SKU that includes all installed software and right-sized hardware
  • Simple, single point of acquisition and support from Cisco

We trust you’ll agree with Peter McKay, President and Co-CEO of Veeam, when he deems this “a game changer.”

Check out this new solution brief to learn more about Veeam Availability on Cisco HyperFlex.

The post Veeam Availability on Cisco HyperFlex is here appeared first on Veeam Software Official Blog.



How to bring balance into your infrastructure

Source: Veeam

Veeam Backup & Replication is known for its ease of installation and a moderate learning curve. That is something we take as a great achievement, but as we see in our support practice, it can sometimes lead to a “deploy and forget” approach, without fine-tuning the software or learning the nuances of how it works. In our previous blog posts, we examined tape configuration considerations and some common misconfigurations. This time, the blog post aims to give the reader some insight into a Veeam Backup & Replication infrastructure, how data flows between the components and, most importantly, how to properly load-balance backup components so that the system works stably and efficiently.

Overview of a Veeam Backup & Replication infrastructure

Veeam Backup & Replication is a modular system. This means that Veeam as a backup solution consists of a number of components, each with a specific function. Examples of such components are the Veeam server itself (as the management component), proxy, repository, WAN accelerator and others. Of course, several components can be installed on a single server (provided that it has sufficient resources) and many customers opt for all-in-one installations. However, distributing components can give several benefits:

  • For customers with branch offices, it is possible to localize the majority of backup traffic by deploying components locally.
  • It allows you to scale out easily. If your backup window increases, you can deploy an additional proxy. If you need to expand your backup repository, you can switch to a scale-out backup repository and add new extents as needed.
  • You can achieve high availability for some of the components. For example, if you have multiple proxies and one goes offline, backups will still be created.

Such a system can only work efficiently if everything is balanced. An unbalanced backup infrastructure can slow down due to unexpected bottlenecks or even cause backup failures because of overloaded components.

Let’s review how data flows in a Veeam infrastructure during a backup (we’re using a vSphere environment in this example):

All data in Veeam Backup & Replication flows between source and target transport agents. Let’s take a backup job as an example: a source agent runs on a backup proxy, and its job is to read the data from a datastore, apply compression and source-side deduplication and send it over to a target agent. The target agent runs directly on a Windows/Linux repository, or on a gateway if a CIFS share is used. Its job is to apply target-side deduplication and save the data in a backup file (.VBK, .VIB, etc.).

That means there are always two components involved, even if they are essentially on the same server and both must be taken into account when planning the resources.
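
As a conceptual sketch (not Veeam’s actual algorithm or on-disk format), the source agent’s job looks roughly like this: split the disk into blocks, skip blocks the target already knows about, and compress what remains before it crosses the wire.

```python
# Conceptual sketch of source-side deduplication plus compression.
import hashlib
import zlib

def send_backup(disk: bytes, known: set, block_size: int = 4096) -> int:
    """Return how many compressed bytes would actually cross the wire."""
    sent = 0
    for off in range(0, len(disk), block_size):
        block = disk[off:off + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest in known:              # target already has this block: skip it
            continue
        known.add(digest)
        sent += len(zlib.compress(block))  # compress only what must be sent
    return sent

known_blocks: set = set()
first = send_backup(b"\x00" * 1_000_000, known_blocks)   # identical blocks transfer once
second = send_backup(b"\x00" * 1_000_000, known_blocks)  # unchanged data: nothing to send
print(first, second)
```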

Tasks balancing between proxy and repository

To start, we must examine the notion of a “task.” In Veeam Backup & Replication, a task equals one VM disk transfer. So, if you have a job with 5 VMs, each with 2 virtual disks, there is a total of 10 tasks to process. Veeam Backup & Replication can process multiple tasks in parallel, but the number is still limited.

If you go to the proxy properties, on the first step you can configure the maximum concurrent tasks this proxy can process in parallel:


On the repository side, you can find a very similar setting:

For normal backup operations, a task on the repository side also means one virtual disk transfer.

This brings us to our first important point: it is crucial to keep the resources and the number of tasks in balance between proxies and repositories. Suppose you have 3 proxies set to 4 tasks each (meaning that on the source side, 12 virtual disks can be processed in parallel), but the repository is set to only 4 tasks (the default setting). In that case, only 4 tasks will be processed at a time, leaving idle resources.
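
The arithmetic is worth spelling out, because the bottleneck is always the smaller side:

```python
# Back-of-the-envelope check: effective parallelism is capped by whichever
# side has fewer task slots.
proxy_slots = [4, 4, 4]   # three proxies, 4 concurrent tasks each = 12 source slots
repo_slots = 4            # repository limit (the default setting)

effective = min(sum(proxy_slots), repo_slots)
print(f"Concurrent disk transfers: {effective}")   # 4 -- eight proxy slots sit idle
```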

The meaning of a task on a repository is different when it comes to synthetic operations (like creating a synthetic full). Recall that synthetic operations do not use proxies; they happen locally on a Windows/Linux repository or between a gateway and a CIFS share. In this case, for normal backup chains a task is a backup job (so 4 tasks mean that 4 jobs can generate synthetic fulls in parallel), while for per-VM backup chains a task is still a VM (so 4 tasks mean that the repository can generate 4 separate VBKs for 4 VMs in parallel). Depending on the setup, the same number of tasks can create a very different load on a repository! Be sure to analyze your setup (the backup job mode, the job scheduling, the per-VM option) and plan resources accordingly.

Note that, unlike for a proxy, you can disable the limit on the number of parallel tasks for a repository. In this case, the repository will accept all incoming data flows from proxies. This might seem convenient at first, but we highly discourage disabling this limitation, as it may lead to overload and even job failures. Consider this scenario: a job has many VMs with a total of 100 virtual disks to process, and the repository uses the per-VM option. The proxies can process 10 disks in parallel, and the repository is set to an unlimited number of tasks. During an incremental backup, the load on the repository will be naturally limited by the proxies, so the system will be in balance. But then a synthetic full starts. Synthetic fulls do not use proxies, and all operations happen solely on the repository. Since the number of tasks is not limited, the repository will try to process all 100 tasks in parallel! This will demand immense resources from the repository hardware and will likely cause an overload.

Considerations when using CIFS share

If you are using a Windows or Linux repository, the target agent starts directly on the server. When using a CIFS share as a repository, the target agent starts on a special component called a “gateway” that receives the incoming traffic from the source agent and sends the data blocks to the CIFS share. The gateway must be placed as close as possible to the system sharing the folder over SMB, especially in scenarios with a WAN connection. You should not create topologies with a proxy/gateway on one site and the CIFS share on another site “in the cloud” — you will likely encounter periodic network failures.

The same load-balancing considerations described previously apply to gateways as well. However, the gateway setup requires additional attention because there are two options available: set the gateway explicitly or use the automatic selection mechanism.

Any Windows “managed server” can become a gateway for a CIFS share. Depending on the situation, both options can come handy. Let’s review them.

You can set the gateway explicitly. This option can simplify resource management: there can be no surprises as to where the target agent will start. It is recommended if access to the share is restricted to specific servers, or in distributed environments — you don’t want your target agent to start far away from the server hosting the share!

Things become more interesting if you choose automatic selection. If you are using several proxies, automatic selection gives you the ability to use more than one gateway and distribute the load. Automatic does not mean random, though; there are indeed strict rules involved.

The target agent starts on the proxy that is doing the backup. In the case of normal backup chains, if several jobs are running in parallel and each is processed by its own proxy, then multiple target agents can start as well. However, within a single job, even if the VMs in the job are processed by several proxies, the target agent starts on only one proxy, the first to begin processing. For per-VM backup chains, a separate target agent starts for each VM, so you get load distribution even within a single job.

Synthetic operations do not use proxies, so the selection mechanism is different: the target agent starts on the mount server associated with the repository (with the ability to fail over to the Veeam server if the mount server is unavailable). This means that the load of synthetic operations will not be distributed across multiple servers. As mentioned above, we discourage setting the number of tasks to unlimited — that can cause a huge load spike on the mount/Veeam server during synthetic operations.

Additional notes

Scale-out backup repository. The SOBR is essentially a collection of regular repositories (called extents). You cannot point a backup job to a specific extent, only to the SOBR; however, extents retain some of their settings, including load control. So what was discussed about standalone repositories pertains to SOBR extents as well. A SOBR with the per-VM option (enabled by default), the “Performance” placement policy and backup chains spread across extents will be able to optimize resource usage.

Backup copy. Instead of a proxy, source agents start on the source repository. All considerations described above apply to source repositories as well (although in the case of a backup copy job, synthetic operations on a source repository are logically not possible). Note that if the source repository is a CIFS share, the source agents start on the mount server (with failover to the Veeam server).

Deduplication appliances. For Data Domain and StoreOnce (and possibly other appliances in the future) with Veeam integration enabled, the same considerations apply as for CIFS share repositories. For a StoreOnce repository with source-side deduplication (Low Bandwidth mode), the requirement to place the gateway as close to the repository as possible does not apply — for example, a gateway on one site can be configured to send data to a StoreOnce appliance on another site over a WAN.

Proxy affinity. A feature added in 9.5, proxy affinity creates a “priority list” of proxies that should be preferred when a certain repository is used.

If a proxy from the list is not available, a job will use any other available proxy. However, if a listed proxy is available but has no free task slots, the job will pause and wait for slots to free up. Even though proxy affinity is a very useful feature for distributed environments, it should be used with care, especially because it is very easy to set this option and forget about it. Veeam Support has encountered cases of “hanging” jobs that came down to an affinity setting that was enabled and forgotten. More details on proxy affinity.

Conclusion

Whether you are setting up your backup infrastructure from scratch or have been using Veeam Backup & Replication for a long time, we encourage you to review your setup with the information from this blog post in mind. You might be able to optimize the use of resources or mitigate some pending risks!

The post How to bring balance into your infrastructure appeared first on Veeam Software Official Blog.



Life after vSphere 5.5 End of General Support

Source: Veeam

General support for vSphere 5.5 ended on September 19, 2018. Customers still sitting on vSphere 5.5 may have a number of questions, like: What version do I upgrade to? And what considerations do I need to make as part of the upgrade path?

You have four options you first need to consider:

The first option is to stay on vSphere 5.5 in an unsupported fashion. This wouldn’t be the greatest of decisions, but I understand that if this is the first time you are hearing of the end of general support, you may not have the time and resources to make the change and upgrade. A second option is to explore extended support options with VMware.

The third option is getting everything to vSphere 6.0, which will be supported until March 12, 2020, buying you some time before this comes around again. This is the easiest step in terms of time and resources. For more information about this path, read this VMware KB article, and don’t miss this important information before upgrading to vSphere 6.0 Update 1.

Now we get to the more interesting, and final, option because of the features and functionality that will become available with these next two releases.

vSphere 6.5 and vSphere 6.7 will see you in general support until Nov. 15, 2021. I will touch on some of the functionality you will gain by moving to this platform over the others later on.
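
Before picking a target version, it helps to take stock of what you are actually running. Here is a hedged sketch using the pyVmomi SDK (pip install pyvmomi) to list the vCenter build and each host’s version; the host name and credentials are placeholders.

```python
# Hedged sketch: inventory vCenter and ESXi host versions with pyVmomi
# before planning an upgrade path. Placeholders: host, user, pwd.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; trust certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
print("vCenter:", si.content.about.fullName)

# Walk every HostSystem in the inventory and print its product version.
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    if host.config:                       # config is None for disconnected hosts
        print(host.name, "-", host.config.product.fullName)

Disconnect(si)
```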

vSphere 6.5

  1. vCenter Server Appliance — If you are running a Windows-based vCenter Server today, it might be time to consider this upgrade path, since 6.5 brought a fully featured version of the vCenter appliance. The appliance, as you will read later, is the future. It brings not only concise management, but also scale, ease of migration and the infamous Update Manager embedded into the appliance.
  2. VM encryption — Always a bone of contention, because with encryption of any data you lose out on something. In vSphere 6.5, you have the ability to encrypt virtual machines at rest, within the hypervisor, at the point the I/O comes out of the virtual disk controller. This has its own benefits, and there are more details here.
  3. vSAN — This won’t be applicable to all, especially if you are running your shared storage system to provide your VMware datastores where your virtual machines are stored, but a consideration to be made is to look at the capabilities that come with vSAN in the 6.5 release: erasure coding, stretch clustering, QoS, encryption and the list keeps on going.
  4. vCenter Management — Introducing the HTML5-based vSphere Client. Now this should in no way be the deciding factor for jumping to this version, as the HTML5 interface is not complete and you will find yourself jumping between interfaces to get things done, but it’s a good step. And as we get to vSphere 6.7, this becomes the primary way to manage your vSphere environment.

vSphere 6.7

The upgrade to 6.7 will depend on many things, including whether your hardware is compatible with the latest GA version from VMware. A lot has changed on that front, so be sure to check. vSphere 6.7 was released in April 2018, and it came with a ton of new and exciting features and functionality. And it wasn’t just vSphere; the surrounding products in the wider VMware offering saw updates as well.

Manageability

I mentioned the HTML5 client. Well, in this release, it takes a much broader step toward being the management interface, including not having to jump between multiple windows to perform certain tasks. Another thing to add here: vSphere 6.7 will be the last version to support running vCenter Server on Windows — VCSA all the way.

Storage

A couple of things to note for vSphere 6.7 on the storage front: the ability to use PMEM (persistent memory), which has characteristics similar to memory but retains data through power cycles, really assists those enterprise applications that just require everything to be faster. There is a whole white paper on this.

The most notable and significant vSAN release came with vSAN 6.7. Firstly, management can be done through the HTML5 interface rather than having to go through the CLI and APIs. The vSAN iSCSI service now supports Windows Server Failover Clusters (WSFC). In 6.5, vSAN already supported modern Windows application-layer clustering technologies, such as Microsoft SQL Server Always On Availability Groups (AAG), Microsoft Exchange Database Availability Groups (DAG) and Oracle Real Application Clusters (RAC).

There is also the Adaptive Resync feature, which ensures a fair share of resources is available for VM I/O and resync I/O during dynamic changes in load on the system.

“vSAN continues to see rapid adoption with more than 10,000 customers and growing. A 600 million dollar run rate was announced for Q4FY2018, and IDC named it the #1 and fastest growing HCI Software Solution.”

Loads more on this can be found here in the What’s New with VMware vSAN 6.7.

There are heaps of other things to consider, but these are just a few things. Also, take a look at the security enhancements and TPM that also landed in vSphere 6.7.

More details on some useful smaller enhancements that could help a decision tree can be found in this VMware article.

How can Veeam help?

Veeam takes vSphere platform support very seriously and has done so since the company’s beginnings. One of the most effective ways for organizations to migrate to a newer vSphere platform is Veeam replication.

This is a very powerful technique, as organizations can migrate workloads to a new cluster with very little downtime and with the ability to fail back if needed. Additionally, this introduces the option of a new cluster. How many times are there things about the old cluster that you would like to change rather than carry forward and be stuck with forever? Migrating to a new cluster via Veeam replication allows you to put in the new design elements that are the right choice today. You can find more about Veeam replication in the Veeam Help Center.

The post Life after vSphere 5.5 End of General Support appeared first on Veeam Software Official Blog.



Get your data ready for vSphere 5.5 End of Support

Source: Veeam

There have been lots of articles and walkthroughs on how to make that upgrade work for you and how to get to a supported level of vSphere. This VMware article is very thorough, walking through each step of the process.

But we wanted to touch on making sure your data is protected prior to, during and after the upgrade events.

If we look at the best-practice upgrade path for vSphere, we’ll see how to make sure we’re protected at each step along the way:

Upgrade Path

The first thing to consider is which path you’ll take to get away from the end of general support for vSphere 5.5. You have two options:

  • vSphere 6.5, which is now supported until November 2021 (five years from its original release).
  • vSphere 6.7 which is the latest released version from VMware.

Another consideration here is support from surrounding and ecosystem partners, including Veeam. Today, Veeam fully supports vSphere 6.5 and 6.7; however, vSphere 6.5 U2 is NOT officially supported with Veeam Backup & Replication Update 3a due to a vSphere API regression.

The issue is isolated to over-provisioned environments with heavily loaded hosts (so more or less individual cases).

It’s also worth noting that there is no direct upgrade path from 5.5 to 6.7. If you’re currently running vSphere 5.5, you must first upgrade to either vSphere 6.0 or vSphere 6.5 before upgrading to vSphere 6.7.
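
Because the hop matters, it is worth encoding the documented paths before you start. This small sketch (major versions only, ignoring edition and build nuances) shows why 5.5 needs an intermediate stop:

```python
# Sanity check of supported upgrade hops per VMware's documented paths:
# 5.5 cannot jump straight to 6.7.
SUPPORTED_HOPS = {"5.5": {"6.0", "6.5"}, "6.0": {"6.5", "6.7"}, "6.5": {"6.7"}}

def plan(current: str, target: str):
    """Return one valid upgrade path, inserting an intermediate hop if needed."""
    if target in SUPPORTED_HOPS.get(current, set()):
        return [current, target]
    for mid in SUPPORTED_HOPS.get(current, set()):
        if target in SUPPORTED_HOPS.get(mid, set()):
            return [current, mid, target]
    return None

print(plan("5.5", "6.7"))  # e.g. ['5.5', '6.0', '6.7'] -- an intermediate hop is required
print(plan("6.5", "6.7"))  # ['6.5', '6.7'] -- direct upgrade is fine
```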

Management – VMware Virtual Center

The first step of the vSphere upgrade path, after you’ve decided on the appropriate version, is to make sure you have a backup of your vCenter Server. The vSphere 5.5 vCenter could be a Windows machine, or it could be the VCSA.

Both variants can be protected with Veeam; note that the VCSA runs on an embedded Postgres database. Be sure to take an image-level backup with Veeam, and there is also a database backup option within the appliance. Details of the second step can be found in this knowledge base article.

If you’re an existing Veeam customer, you’ll already be protecting the virtual center as part of one of your existing backup jobs.

You must also enable VMware Tools quiescence to create transactionally consistent backups and replicas for VMs that do not support Microsoft VSS (for example, Linux VMs). In this case, Veeam Backup & Replication will use VMware Tools to freeze the file system and application data on the VM before backup or replication. VMware Tools quiescence is enabled at the job level for all VMs added to the job. By default, this option is disabled.

You must also ensure Application-Aware Image Processing (AAIP) is either disabled or excluded for the VCSA VM.

Virtual Machine Workloads

If you are already a Veeam customer, you’ll have your backup jobs created and working successfully before the upgrade process begins. However, as part of the upgrade, you’ll want to make sure that all backup job processes that initiate through the vCenter Server are paused for the duration.
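
If you want to script that pause, Veeam Backup Enterprise Manager’s REST API exposes a per-job action for flipping the schedule. A rough sketch follows; as before, the host and credentials are placeholders, and the endpoint and action names should be checked against your EM version (note the action toggles the schedule, so run it against enabled jobs only):

```python
# Rough sketch: pause job schedules via Enterprise Manager's REST API before
# the vCenter upgrade. Verify endpoint/action names against your EM version.
import requests

BASE = "https://em.example.com:9398/api"   # placeholder Enterprise Manager host
login = requests.post(f"{BASE}/sessionMngr/?v=latest",
                      auth=("DOMAIN\\backupadmin", "password"), verify=False)
headers = {"X-RestSvcSessionId": login.headers["X-RestSvcSessionId"],
           "Accept": "application/json"}

jobs = requests.get(f"{BASE}/jobs", headers=headers, verify=False).json()
for ref in jobs.get("Refs", []):
    job_id = ref["UID"].split(":")[-1]     # UID looks like urn:veeam:Job:<guid>
    # toggleScheduleEnabled flips the schedule on/off -- apply to enabled jobs only.
    requests.post(f"{BASE}/jobs/{job_id}?action=toggleScheduleEnabled",
                  headers=headers, verify=False)
    print("Toggled schedule for", ref["Name"])
```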

If the upgrade path consists of new hardware but with no vMotion licensing, then the following section will help.

Quick Migration

Veeam Quick Migration enables you to promptly migrate one or more VMs between ESXi hosts and datastores. Quick Migration allows for the migration of VMs in any state with minimum disruption.

More information on Quick Migration can be found in our user guide.

During the upgrade process

As already mentioned in the virtual machine workloads section, it is recommended to stop all vCenter-based actions prior to the upgrade. This includes Veeam, but also any other application or service that communicates with your vCenter environment. It is also worth noting that while vCenter is unavailable, vSphere Distributed Resource Scheduler (DRS) and vSphere HA will not work.

Veeam vSphere Web Client

If you’re moving to vSphere 6.7 and you have the Veeam vSphere Web Client installed as a vSphere plug-in, you’ll need to install the new Veeam vSphere web client plug-in from an upgraded Veeam Enterprise Manager.

More detail can be found in Anthony Spiteri’s blog post on new HTML5 plug-in functionality.

You’ll also need to ensure that any VMware-based or other integrated products that work with vCenter are on their latest versions as you upgrade to a newer version of vSphere.

Final Considerations

From a Veeam Availability perspective, the above steps are the areas that we can help and make sure that you are constantly protected against failure during the process. Each environment is going to be different and other considerations will need to be made.

Another useful link that should be used as part of your planning: Update sequence for vSphere 5.5 and its compatible VMware products (2057795)

One last thing is a shout out to one of my colleagues who has done an in-depth look at the vSphere upgrade process.

The post Get your data ready for vSphere 5.5 End of Support appeared first on Veeam Software Official Blog.

