
Office 365 Backup now available in the Azure Marketplace!

Source: Veeam

When we released Veeam Backup for Microsoft Office 365 in July, we saw a huge adoption rate and many inquiries about running the solution within Azure. It is with great pleasure that we announce Veeam Backup for Microsoft Office 365 is now available in the Azure Marketplace!

A simple deployment model

Veeam Backup for Microsoft Office 365 within Azure falls under the BYOL (Bring Your Own License) model, which means you only have to buy the number of licenses you need, in addition to the Azure infrastructure costs.

The deployment is easy: just define your project and instance details, add an administrator login, and you’re good to go. You will notice a default VM size is selected; however, this can always be changed. Keep in mind that it is advised to follow the minimum system requirements, which can be found in the User Guide.
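
If you would rather sanity-check sizes programmatically before deploying, here is a minimal sketch using the Azure SDK for Python (azure-identity and azure-mgmt-compute). The subscription ID and the 4-core/8 GB thresholds are placeholders for illustration only; take the real minimums from the User Guide:

    # Sketch: list VM sizes in a region that meet assumed minimum requirements.
    # The subscription ID and the 4-core / 8 GB thresholds are placeholders;
    # take the real minimums from the Veeam Backup for Microsoft Office 365
    # User Guide before choosing a size.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    MIN_CORES, MIN_MEMORY_MB = 4, 8 * 1024  # assumed, not official minimums

    client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
    for size in client.virtual_machine_sizes.list(location="westeurope"):
        if size.number_of_cores >= MIN_CORES and size.memory_in_mb >= MIN_MEMORY_MB:
            print(f"{size.name}: {size.number_of_cores} cores, "
                  f"{size.memory_in_mb / 1024:.0f} GB RAM")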

Figure: VBO365 Azure deployment

Once you’ve added your disks and configured the networking, you’re good to go, and the Azure portal will even share details on the Azure infrastructure costs, such as the example below for a Standard A4 v2 VM.

Figure: VBO365 Azure pricing

If you are wondering how to calculate the amount of storage needed for Exchange, SharePoint and OneDrive data, Microsoft provides great reports for this within the Microsoft 365 admin center under the Reports option.
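
If you export those usage reports as CSV files, a few lines of Python can total them up into a rough repository sizing figure. This is only a sketch; the column name below is an assumption, so adjust it to match the headers in the reports you actually export:

    # Sketch: sum storage usage across exported Microsoft 365 usage reports
    # to get a rough starting point for repository sizing. The column name is
    # an assumption -- check the headers of your exported CSVs.
    import csv

    def total_bytes(path, column="Storage Used (Byte)"):
        with open(path, newline="", encoding="utf-8") as f:
            return sum(int(row[column] or 0) for row in csv.DictReader(f))

    reports = ["exchange_usage.csv", "sharepoint_usage.csv", "onedrive_usage.csv"]
    total = sum(total_bytes(report) for report in reports)
    print(f"Estimated source data: {total / 1024**4:.2f} TB")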

Once the VM has been deployed, you can connect over RDP to a pre-installed VBO installation and you are good to go. Keep in mind that by default, the retention on the repository is set to 3 years, so you may need to modify this to match your organization’s needs.

Two ways to get started!

You can provision Veeam Backup for Microsoft Office 365 in Azure and bring a 30-day trial key with you to begin testing.

You can also deploy the solution within Azure and back up all your Office 365 data free forever – limited to a maximum of 10 users and 1TB of SharePoint data within your organization.

Ready to get started? Head over to the Azure Marketplace and try it out today!


Native Snapshot Integration for NetApp HCI and SolidFire

Source: Veeam

Four years ago, Veeam delivered ground-breaking native snapshot integration with NetApp’s flagship ONTAP storage operating system. In addition to operational simplicity, improved efficiency, reduced risk and increased ROI, the Veeam Hyper-Availability Platform and ONTAP continue to help customers of all sizes accelerate their Digital Transformation initiatives and compete more effectively in the digital economy.

Today I’m pleased to announce that a native storage integration with Element Software, the storage operating system that powers NetApp HCI and SolidFire, is coming to Veeam Backup & Replication 9.5 with the upcoming Update 4.


Figure: Key milestones in the Veeam + NetApp Alliance

Veeam continues to deliver deeper integration across the NetApp Data Fabric portfolio to provide our joint customers with the ability to attain the highest levels of application performance, efficiency, agility and Hyper-Availability across hybrid cloud environments. Together with NetApp, we enable organizations to attain the best RPOs and RTOs for all applications and data through native snapshot-based integrations.

How Veeam integration makes NetApp HCI Hyper-Available

With Veeam Availability Suite 9.5 Update 3, we released a brand-new framework called the Universal Storage API. This set of APIs allows Veeam to accelerate the adoption of storage-based integrations that decrease the impact on the production environment, significantly improve RPOs and deliver operational benefits that would not otherwise be attainable.

Let’s talk about how the new Veeam integration with NetApp HCI and SolidFire delivers these benefits.

Backup from Element Storage Snapshots

The Veeam Backup from Storage Snapshots technology is designed to dramatically reduce the performance impact typically associated with traditional API-driven VMware backup on the primary hypervisor infrastructure. Rather than holding a VMware snapshot open for the duration of the backup, Veeam keeps it only long enough to trigger an Element Storage Snapshot, and then reads the backup data from that snapshot.

This process dramatically improves backup performance, with the added benefit of reducing the performance impact on the production VMware infrastructure.

Granular application item recovery from Element Storage Snapshots

If you’re a veteran of enterprise storage systems and VMware, you undoubtedly know the pain of trying to recover individual Windows or Linux files, or application items, from a Storage Snapshot. The good news is that Veeam makes this process fast, easy and painless. With our new integration with Element snapshots, you can quickly recover application items directly from the Storage Snapshot, such as:

  • Individual Windows or Linux guest files
  • Exchange items
  • MS SQL databases
  • Oracle databases
  • Microsoft Active Directory items
  • Microsoft SharePoint items

What’s great about this functionality is that it works with any Storage Snapshot created by Veeam and NetApp, and the only requirement is that the VMs be in the VMDK format.

Hyper-Available VMs with Instant VM Recovery from Element Snapshots

Everyone knows that time is money, and that every second a critical workload is offline your business is losing money, prestige and possibly even customers. What if I told you that you could recover an entire virtual machine, no matter its size, in a very short timeframe? Sound far-fetched? Instant VM Recovery technology from Veeam, leveraging Element Snapshots for NetApp HCI and SolidFire, makes this a reality.

Not only is this process extremely fast, there is also no performance loss afterwards, because once recovered, the VM is running from your primary production storage system!


Figure: Veeam Instant VM Recovery on NetApp HCI

Element Snapshot orchestration for better RPO

It’s common to see a nightly or twice-daily backup schedule in most organizations. The problem with this strategy is that it leaves your organization with a potential data loss window of 12-24 hours. We call the amount of acceptable data loss your RPO, or recovery point objective. Getting your RPO as low as possible just makes good business sense. With Veeam and Element Snapshot management, we can supplement the off-array backup schedule with more frequent storage array-based snapshots. One common example is taking hourly storage-based snapshots in between nightly off-array Veeam backups. When a restore event happens, you then have hourly snapshots, as well as a Veeam backup, to choose from when executing the recovery operation.
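
The arithmetic behind this is simple enough to sketch: your worst-case data loss is bounded by the shortest interval between restore points, so interleaving hourly snapshots between nightly backups shrinks that bound from roughly 24 hours to roughly one:

    # Sketch: worst-case RPO is bounded by the shortest restore-point interval.
    # Interleaving hourly snapshots between nightly backups shrinks the bound
    # from ~24 hours to ~1 hour.
    from datetime import timedelta

    def worst_case_rpo(*point_intervals):
        """The most frequent restore point type sets the worst-case loss."""
        return min(point_intervals)

    print(worst_case_rpo(timedelta(hours=24)))                      # nightly backup only
    print(worst_case_rpo(timedelta(hours=24), timedelta(hours=1)))  # plus hourly snapshots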

Put your Storage Snapshots to work with Veeam DataLabs

Wouldn’t it be great if there were more ways to leverage your investments in Storage Snapshots for additional business value? Enter Veeam DataLabs — the easy way to create copies of your production VMs in a virtual lab protected from the production network by a Veeam network proxy.

The big idea behind this technology is to provide your business with near real-time copies of your production VMs for operations like dev/test, data analytics, proactive DR testing for compliance, troubleshooting, sandbox testing, employee training, penetration testing and much more! Veeam makes the process of test lab rollouts and refreshes easy and automated.

NetApp + Veeam = Better Together

NetApp Storage Technology and Veeam Availability Suite are perfectly matched to create a Hyper-Available data center. Element storage integrations provide fast, efficient backup capabilities, while significantly lowering RPOs and RTOs for your organization.

Find out more about how you can simplify IT, reduce risk, enhance operational efficiency and increase ROI with NetApp HCI and Veeam.


USB Drives

Source: SANS security tip

Be very careful with any lost USB drives you may find (such as in a parking lot or local coffee shop) or USB drives you are given at public events, like conferences. It is very easy for these devices to be infected with malware. Never use such devices for work; use only authorized devices issued to you by your employer.

Windows Server 2019 and what we need to do now: Migrate and Upgrade!

Source: Veeam

IT pros around the world were happy to hear that Windows Server 2019 is now generally available, even though there have since been some changes to the release. This is a huge milestone, and I would like to congratulate the Microsoft team on launching the latest release of this amazing platform as a big highlight of Microsoft Ignite.

As important as this new operating system is, there is a subtle point that needs to be raised now (and don’t worry, Veeam can help): extended support for both SQL Server 2008 R2 and Windows Server 2008 R2 is ending soon. This can be a significant topic to tackle, as many organizations still have applications deployed on these systems.

What is the right thing to do today to prepare for leveraging Windows Server 2019? I’m convinced there is no single best answer; rather, the right approach is to identify options that are suitable for each workload. This may match some questions you already have. Should I move the workload to Azure? How do I safely upgrade my domain functional level? Should I use Azure SQL? Should I virtualize physical Windows Server 2008 R2 systems or move them to Azure? Should I migrate to the latest Hyper-V platform? What do I do if I don’t have the source code? These are all natural questions to be asking now.

These are the questions we need to ask today to move to Windows Server 2019, but how do we get there without any surprises? Let me re-introduce you to the Veeam DataLab. This technology was first launched by Veeam in 2010 and has evolved in every release and update since. Today, it is just what many organizations need to safely perform tests in an isolated environment and ensure there are no surprises in production. The figure below shows a DataLab:

Let’s deconstruct this a bit. An application group is an application you care about, and it can include multiple VMs. The proxy appliance isolates the DataLab from the production network, yet reproduces the production IP space inside the private network, without interference, via masquerade IP addresses. With this configuration, the DataLab allows Veeam users to test changes to systems without risk to production, including upgrades to Windows Server 2019, changes to database versions, and more. Over the coming weeks, I’ll be writing a more comprehensive whitepaper that will take you through the process of setting up a DataLab and performing specific tasks like upgrading to Windows Server 2019 or a newer version of SQL Server, as well as migrating to Azure.
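
To make the masquerading idea concrete, here is a small sketch of the address translation involved. The subnets are invented for illustration and have nothing to do with Veeam’s actual implementation; the point is that the lab VM keeps its production IP while the proxy exposes it under a parallel masquerade network:

    # Sketch: masquerade addressing. A lab VM keeps its production IP
    # (192.168.1.25), while the proxy makes it reachable from production via
    # a masquerade network (172.30.99.0/24). All addresses are invented.
    import ipaddress

    def masquerade(prod_ip, prod_net, masq_net):
        prod_net = ipaddress.ip_network(prod_net)
        masq_net = ipaddress.ip_network(masq_net)
        host_part = int(ipaddress.ip_address(prod_ip)) - int(prod_net.network_address)
        return ipaddress.ip_address(int(masq_net.network_address) + host_part)

    print(masquerade("192.168.1.25", "192.168.1.0/24", "172.30.99.0/24"))
    # -> 172.30.99.25: same host identity, no clash with production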

Another key technology where Veeam can help is the ability to restore Veeam backups to Microsoft Azure. This capability has been available for a long while and is now built into Veeam Backup & Replication. It is a great way to get workloads into Azure with ease, starting from a Veeam backup. Additionally, you can easily test other changes to Windows and SQL Server with this process: put the workload into an Azure test environment to verify the migration process, connectivity and more. If that’s a success, repeat the process as part of a planned migration to Azure. This cloud mobility technique is very powerful and is shown below for Azure:

Why Azure?

Microsoft announced that Extended Security Updates will be available for FREE in Azure for Windows Server 2008 R2 for an additional three years after the end-of-support deadline. Customers can rehost these workloads to Azure with no application code changes, giving them more time to plan their future upgrades. Read more here.

What is also great about moving workloads to Azure is that this applies to almost anything Veeam can back up: Windows servers, Linux agents, vSphere VMs, Hyper-V VMs and more!

Migrating to the latest platforms is a great way to stay in a supported configuration for critical applications in the data center. The difference is being able to do the migration without any surprises and with complete confidence. This is where Veeam DataLabs and Veeam Recovery to Microsoft Azure can work in conjunction to provide you with a seamless experience in migrating to the latest SQL Server and Windows Server platforms.

Have you started testing Windows Server 2019? How many Windows Server 2008 R2 and SQL Server 2008 systems do you have? Let’s get DataLabbing!


How to Enable Rapid Patch Testing with Veeam Backups and Veeam DataLabs

Source: Veeam

Bad patches are something everyone has experienced at one point or another. Just take the most recent example: the Microsoft Windows October 2018 Update, which impacted both desktop and server versions of Windows. Unfortunately, this update resulted in missing files on impacted systems, and it has temporarily been paused while Microsoft investigates.

Because of incidents like this, organizations are often hesitant to adopt patches quickly. This is one of the reasons the WannaCry ransomware was so impactful: unpatched systems introduce risk into environments, as new exploits for old problems are on the rise. Before patching a system, organizations must do two things: back up the systems to be patched, and perform patch testing.

A recent, verified Veeam Backup

Before we patch a system, we always want to make sure we have a successful backup that meets our organization’s Recovery Point Objective (RPO). Luckily, Veeam Backup & Replication makes this easy to schedule, or even run on demand as needed.
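
As a simple illustration of that discipline, a pre-patch gate might refuse to proceed whenever the last successful backup is older than the RPO. This sketch hard-codes the timestamp for brevity; in practice you would pull it from your backup server (for example, via its REST API or PowerShell cmdlets):

    # Sketch: block patching when the last successful backup is outside the
    # RPO window. The timestamp is hard-coded for illustration; fetch it from
    # your backup server in a real workflow.
    from datetime import datetime, timedelta

    RPO = timedelta(hours=24)

    def safe_to_patch(last_successful_backup):
        return datetime.now() - last_successful_backup <= RPO

    last_backup = datetime(2018, 10, 15, 1, 30)  # placeholder value
    if safe_to_patch(last_backup):
        print("Backup is fresh -- proceed to patch testing.")
    else:
        print("Backup is stale -- run an on-demand backup first.")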

Beyond the backup itself succeeding, we also want to verify that the backup works correctly. Veeam’s SureBackup technology allows for this by booting the VM in an isolated environment, then testing it to make sure it functions properly. Veeam SureBackup gives organizations additional peace of mind that their backups have not only succeeded, but will be usable.

Rapid patch testing with Veeam DataLabs

Veeam DataLabs enable us to test patches rapidly, without impacting production. In fact, we can use the most recent backup of our environment to perform the patch testing. Remember the isolated environment we just talked about with Veeam SureBackup technology? You guessed it: it is powered by Veeam DataLabs.

Veeam DataLabs allow us to spin up complete applications in an isolated environment. This means we can test patches across a variety of servers with different functions, all without touching our production environment. Perfect for patch testing, right?

Now, let’s take a look at how the Veeam DataLab technology works.

Veeam DataLabs are configured in Veeam Backup & Replication. Once configured, a virtual appliance is created in VMware vSphere to house the virtual machines to be tested. Beyond the virtual machines you plan on testing, you can also include key infrastructure services such as Active Directory, or anything else those virtual machines require to work correctly. This group of supporting VMs is called an Application Group.

Figure: Veeam DataLab components

In the above diagram, you can see the components that support a Veeam DataLab environment.

Remember, these are just copies from the latest backup; they do not impact the production virtual machines at all. To learn more about Veeam DataLabs, be sure to take a look at this great overview hosted on the Veeam.com blog.
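
For intuition on why the Application Group matters, consider boot order: a lab copy of an application server is useless if its domain controller isn’t up first. Here is a toy sketch of that dependency resolution; the class and VM names are invented and are not Veeam’s object model:

    # Sketch: resolving boot order for an application group so supporting
    # services (e.g., a domain controller) start before the VMs that need
    # them. Purely illustrative; not Veeam's actual object model.
    from dataclasses import dataclass, field

    @dataclass
    class LabVM:
        name: str
        depends_on: list = field(default_factory=list)

    def boot_order(vms):
        ordered, seen = [], set()
        def visit(vm):
            if vm.name in seen:
                return
            seen.add(vm.name)
            for dep in vm.depends_on:
                visit(dep)      # dependencies boot first
            ordered.append(vm.name)
        for vm in vms:
            visit(vm)
        return ordered

    dc = LabVM("DC01")
    sql = LabVM("SQL01", depends_on=[dc])
    app = LabVM("APP01", depends_on=[dc, sql])
    print(boot_order([app]))  # -> ['DC01', 'SQL01', 'APP01']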

So what happens if we apply a bad patch to a Veeam DataLab environment? Absolutely nothing. At the end of the DataLab session, the VMs are powered off, and the changes made during the session are thrown away. There is no impact to the production virtual machines or the backups leveraged inside the Veeam DataLab. With Veeam DataLabs, patch testing is no longer a big deal, and organizations can proceed with their patching activities with confidence.

This DataLab can then be leveraged for testing, or for running Veeam SureBackup jobs. SureBackup jobs also provide reports upon completion. To learn more about SureBackup jobs, and see how easy they are to configure, be sure to check out the SureBackup information in the Veeam Help Center.

Patch testing to improve confidence

The hesitance to apply patches is understandable; however, there can be significant risk in not applying patches in a timely manner. By leveraging Veeam backups along with Veeam DataLabs, organizations can quickly test as many servers and environments as they would like before installing patches on production systems. The ability to rapidly test patches ensures any potential issue is discovered long before any data loss or negative impact to production occurs.

No VMs? No problem!

What about the other assets in your environment that can be impacted by a bad patch, such as physical servers, desktops, laptops and full Windows tablets? You can still protect these assets by backing them up with Veeam Agent for Microsoft Windows. These agents can be deployed to your assets automatically from Veeam Backup & Replication. To learn more about Veeam Agents, take a look at the Veeam Agent Getting Started Guide.

To see the power of Veeam Backup & Replication, Veeam DataLabs, and Veeam Agent for Microsoft Windows for yourself, be sure to download the 30-day free trial of Veeam Backup & Replication here.


Considerations in a multi-cloud world

Source: Veeam

With the infrastructure world in constant flux, more and more businesses are adopting a multi-cloud deployment model. The challenges that result are becoming more complex and, in some cases, cumbersome. Consider the impact on the data alone. Ten years ago, all anyone worried about was whether the SAN would stay up and, if it didn’t, whether their data would be protected. Fast forward to today: even a small business can have data scattered across the globe. Maybe it has a few vSphere hosts at an HQ, with branch offices using workloads running in the cloud or Software-as-a-Service-based applications. Maybe backups are stored in an object storage repository (somewhere — but only one guy knows where). If this is happening in the smallest of businesses, then as a business grows and scales, the challenges become even more complex.

Potential pitfalls

Now, this blog is not about how Veeam manages data in a multi-cloud world; it’s about understanding the challenges and the potential pitfalls. Take a look at the diagram below:

Veeam supports a number of public clouds and different platforms. This is a typical scenario in a modern business. Picture the scene: workloads are running on top of a hypervisor like VMware vSphere or Nutanix, with some services running in AWS. The company is leveraging Microsoft Office 365 for its email services (people rarely build Exchange environments anymore), with Active Directory extended into Azure. Throw in some SAP or Oracle workloads, and your data management solution has just gone from “I back up my SAN every night to tape” to “where is my data now, and how do I restore it in the event of a failure?” If worrying about business continuity didn’t keep you awake 10 years ago, it surely does now. This is the impact of modern life: the more agility we provide on the front end for an IT consumer, the more complexity there has to be on the back end.

With the ever-growing complexity, global reach and scale of public clouds, as well as a more hands-off approach from IT admins, it is a real challenge to protect a business, not only from an outage, but from a full-scale business failure.

Managing a multi-cloud environment

When looking to manage a multi-cloud environment, it is important to understand these complexities and how to avoid costly mistakes. The simplistic approach to any environment, whether it is running on premises or in the cloud, is to consider all the options. That sounds obvious, but it has not always been the case. Where or how you deploy a workload is becoming irrelevant, but how you protect that workload still matters. Think about the public cloud: if you deploy a virtual machine and set the firewall ports to any:any (that would never happen, would it?), you can be pretty sure someone will gain access to that virtual machine at some point. Making sure that workload is protected and recoverable is critical in this instance. The same considerations and requirements always apply, whether running on premises or off premises: how do you protect the data, and how do you recover it in the event of a failure or security breach?
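
As one concrete guard against exactly that any:any scenario, a short audit script can walk every network security group in an Azure subscription and flag inbound rules that are open to the world. Here is a sketch using the Azure SDK for Python (azure-identity and azure-mgmt-network); the subscription ID is a placeholder:

    # Sketch: flag over-permissive ("any:any"-style) inbound NSG rules in an
    # Azure subscription. Substitute your own subscription ID.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient

    client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

    for nsg in client.network_security_groups.list_all():
        for rule in nsg.security_rules or []:
            if (rule.direction == "Inbound" and rule.access == "Allow"
                    and rule.source_address_prefix in ("*", "0.0.0.0/0", "Internet")
                    and rule.destination_port_range == "*"):
                print(f"{nsg.name}: rule '{rule.name}' allows any source to any port")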

What should you consider when choosing a cloud platform?

This is something often overlooked, but it has become clear in recent years that organizations do not choose a cloud platform for a single, specific reason like cost savings, higher performance or quicker service times, but rather because the cloud is the right platform for a specific application. Sure, individual benefits may come into play, but you should always question the “why” behind any platform selection.

When you’re looking at data management platforms, consider not only what your environment looks like today, but also what it will look like tomorrow. Does the platform you’re purchasing today have a roadmap for the future? If you can see that the company has a clear vision and understanding of what is happening in the industry, then you can feel safe trusting that platform to manage your data anywhere in the world, on any platform. If a roadmap is not forthcoming, or the vendor just doesn’t get the vision you are sharing about your own environment, perhaps it’s time to look at other vendors. It’s definitely something to think about next time you’re choosing a data management solution or platform.


More tips and tricks for a smooth Veeam Availability Orchestrator deployment

Source: Veeam

Welcome to even more tips and tricks for a smooth Veeam Availability Orchestrator deployment. In the first part of our series, we covered the following topics:

  • Plan first, install next
  • Pick the right application to protect to get a feel for the product
  • Decide on your categorization strategy, such as using VMware vSphere Tags, and implement it
  • Start with a fresh virtual machine

Configure the DR site first

After you have installed Veeam Availability Orchestrator, the first site you configure will be your DR site. If you are also deploying production sites, it is important to note that you cannot change a site’s personality after the initial configuration. This is why it is so important to plan before you install, as we discussed in the first article in this series.

As you are configuring your Veeam Availability Orchestrator site, you will see an option for installing the Veeam Availability Orchestrator Agent on a Veeam Backup & Replication server. Remember, you have two options here:

  1. Use the embedded Veeam Backup & Replication server that is installed with Veeam Availability Orchestrator
  2. Push the Veeam Availability Orchestrator Agent to existing Veeam Backup & Replication servers

If you change your mind and decide you do want to use an existing Veeam Backup & Replication server, it is very easy to install the agent after the initial configuration. In the Veeam Availability Orchestrator configuration screen, simply click VAO Agents, then Install. You just need to know the name of the Veeam Backup & Replication server you would like to add and have the proper credentials.

Ensure replication jobs are configured

No matter which Veeam Backup & Replication server you choose to use for Veeam Availability Orchestrator, it is important to ensure your replication jobs are configured in Veeam Backup & Replication before you get too far into configuring your Veeam Availability Orchestrator environment. After all, Veeam Availability Orchestrator cannot fail replicas over if they are not there!

If for some reason you forget this step, do not worry: Veeam Availability Orchestrator will let you know when a Readiness Check is run on a Failover Plan. As the last step in creating a Failover Plan, Veeam Availability Orchestrator runs a Readiness Check unless you specifically uncheck this option.

If you did forget to set up your replication jobs, Veeam Availability Orchestrator will let you know: your Readiness Check will fail, and you will not see green checkmarks in the VM section of the Readiness Check Report.
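
Conceptually, the Readiness Check is performing a pre-flight comparison like the sketch below: every VM in the plan needs an existing replica, and anything missing turns into a failed check. The VM names are invented, and in reality VAO gathers these lists from Veeam Backup & Replication itself:

    # Sketch: the gist of what a Readiness Check catches -- every VM in the
    # failover plan must already have a replica. Names are invented; VAO
    # gathers the real lists from Veeam Backup & Replication.
    plan_vms = {"DC01", "SQL01", "APP01"}
    replicated_vms = {"DC01", "SQL01"}   # e.g., collected from replication jobs

    missing = plan_vms - replicated_vms
    if missing:
        print(f"Readiness Check fails; no replicas for: {sorted(missing)}")
    else:
        print("All plan VMs have replicas -- ready to fail over.")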

For a much more in-depth overview of the relationship between Veeam Backup & Replication and Veeam Availability Orchestrator, be sure to read the white paper Technical Overview of Veeam Availability Orchestrator Integration with Veeam Backup & Replication.

Do not forget to configure Veeam DataLabs

Before you can run a Virtual Lab Test on your new Failover Plan (you can find a step-by-step guide to configuring your first Failover Plan here), you must first configure a Veeam DataLab in Veeam Backup & Replication. If you have not worked with Veeam DataLabs before (previously known as Veeam Virtual Labs), be sure to read the white paper I mentioned above, as the configuration of your first Veeam DataLab is covered there as well.

After you have configured your Veeam DataLab in Veeam Backup & Replication, you will be able to run Virtual Lab Tests on your Failover Plan, as well as schedule Veeam DataLabs to run whenever you would like. Scheduled Veeam DataLabs are ideal for providing an isolated copy of the production environment for application testing, and can help you make better use of those idle DR resources.

Veeam DataLabs can be run on demand or scheduled from the Virtual Labs screen. When running or scheduling a lab, you can also select how long you would like the lab to run, which can be handy when scheduling Veeam DataLab resources for use by multiple teams.

There you have it, even more tips and tricks to help you get Veeam Availability Orchestrator up and running quickly and easily. Remember, a free 30-day trial of Veeam Availability Orchestrator is available, so be sure to download it today!


Why our software-driven, hardware agnostic approach makes sense for backups

Source: Veeam

Having been hands-on in service provider land for the entirety of my career prior to joining Veeam, I understand the pain points that come with offering backup and recovery services. I’ve spent countless hours working out the best combination of hardware and software for those services, and I know firsthand the challenges that storage platforms pose for the architecture, engineering and operations teams who design, implement and manage them.

Storage scalability

An immutable truth of our world is that backup and storage go hand in hand; you can’t have one without the other. In recent times, there has been extreme growth in the amount of data being backed up, and the sprawl of that data has become increasingly challenging to manage. While data is growing quicker than it ever has, in relative terms the issues it creates haven’t changed in the last ten or so years, though they have been magnified.

Focusing on storage, those who have deployed any storage platform understand that there will come a point where hardware and software constraints start to come into play. I’ve not yet experienced or heard of a storage system that doesn’t hit some limitation on scale or performance at some point. Whether you are constrained by physical disk or controller-based limits or by software overheads, the reality is that no system is infinitely scalable and free of challenges.

In my experience (and anecdotally), the immediate solution to these challenges has always been to throw more hardware at the platform. Whether the constraint is performance or disk capacity, the end result is always to expand capacity or upgrade the core hardware components to get the system back to a point where it performs as expected.

That said, there are a number of systems that do work well and, if architected and managed correctly, will offer longer-term service sustainability. When it comes to designing storage for backup data, the principles used to design for other workloads, such as virtual machines, cannot simply be applied. Backup data is a long game, and portability of that data should be paramount when choosing what storage to use.

How Veeam helps

Veeam offers tight integration with a number of top storage vendors via our storage integrations. Not only do these integrations offer flexibility to our customers and partners, they also offer absolute choice and mobility when it comes to the short- and long-term retention of backup data.

Extending that portability message: the way backup data is stored should mean that when a storage system reaches the end of its lifetime, the data isn’t held prisoner by the hardware. Another inevitability of storage is that there will come a time when it needs replacing. This is where Veeam’s hardware-agnostic, software-defined approach to backup comes into play.

Recently, a number of products have come onto the market that offer an all-in-one solution for data protection in the form of software tied to hardware appliances. The premise of these offerings is ease of use and a single platform to manage. While it’s true that all-in-one solutions are attractive, there is a sting in the tail of any platform whose software is tied to its hardware.

Conclusion

Fundamentally, the issues that apply to storage platforms apply to these all-in-one appliances as well. They will reach a point where performance starts to struggle, upgrades are required and, ultimately, the system needs to be replaced. This is where freedom of choice and a decoupled approach to software and hardware give you total control over where your backup data is stored, how it performs, and when that data needs to be moved or migrated.

You only achieve this through backup software that is separated from the hardware. While an all-in-one solution might seem like a panacea, you need to consider what it will mean three, five or ten years into the future. Again, portability and choice are king when it comes to choosing a backup vendor; lock-in should be avoided at all costs.
