Greetings friends, the series on Microsoft Azure Blob comes to its last post; I hope you liked it and find it useful when selecting an Object Storage for your Veeam backups. We still have this last entry, about monitoring Microsoft Azure Blob with Grafana. Let's get to it; after following all the steps, we will be able to obtain a result like this:
Greetings friends, we are in the penultimate post of this great series on Veeam Cloud/Capacity Tier in Microsoft Azure Blob, and today we are going to see how we can monitor this repository in Veeam ONE.
Capacity monitoring and Offload Tier jobs in Veeam ONE v9.5 U4 - Veeam ONE Monitor
Since Update 4 for Veeam ONE, we can now monitor the size of, and the information we have sent to, our Object Storage Repositories. To do this we go to Veeam ONE Monitor, and under the Data Protection view - Backup Repositories - your Object Storage section, we can see the following: It is fairly complete monitoring that will help us understand the consumption of this repository, and keep an eye on it if it is in the cloud and we have a fixed budget. If we also want to see the growth of this repository, predictions, etc., we can go to Reports and see the following.
Greetings friends, we have already seen during this series how to configure our Capacity/Cloud Tier: the configuration within Veeam, the configuration in Microsoft Azure, and of course a brief introduction to why you would use this new technology to store your oldest backups. Today I leave you some notes that will help you better understand the Capacity Tier process, how to launch it manually, and more.
How often does the Scale-Out Backup Repository Offload run?
The Scale-Out Backup Repository offload task runs automatically every four hours once we have configured our Capacity Tier, as we saw in the previous entry. This means that, once everything is configured, every four hours Veeam checks whether any files meet the following criteria:
- The file exceeds the age selected in the Capacity Tier options
- The file belongs to a closed (sealed) backup chain
- The override option has been selected and the configured usage percentage has been reached in the Performance Tier
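As a rough sketch (the function name and simplified rules here are my own, not Veeam's internal logic), the offload decision for each backup file could be modeled like this:

```python
def eligible_for_offload(file_age_days, chain_closed, operational_window_days,
                         performance_tier_used_pct=0.0, override_threshold_pct=None):
    """Simplified model of the Capacity Tier offload check.

    A file is offloaded when it belongs to a closed (sealed) backup chain
    AND is older than the configured operational restore window, or when
    the optional override kicks in because the Performance Tier usage has
    reached the configured percentage.
    """
    if not chain_closed:
        return False  # open chains are never offloaded
    if file_age_days > operational_window_days:
        return True   # older than the window configured in the Capacity Tier
    if override_threshold_pct is not None:
        return performance_tier_used_pct >= override_threshold_pct
    return False
```

For example, a 30-day-old file in a sealed chain with a 14-day window qualifies, while the same file in an open chain never does.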
Greetings friends, now that we have seen the introduction to this great series and have created the Microsoft Azure Blob storage, we can go to Veeam and create everything relevant to the Cloud/Capacity Tier.
Scale-Out Backup Repository - Basics
Before we go any further, it is important that we understand what we intend to do. Cloud/Capacity Tier builds on Veeam's Scale-Out Backup Repository to combine a Performance Tier and a Capacity Tier. In a very simple diagram we would have the following: a combination of local extents (Backup Repositories) called the Performance Tier, to which we add a Capacity Tier based on Object Storage, where we send the backups we don't need to keep in the Performance Tier:
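To make the combination concrete, here is a minimal sketch (the class and field names are illustrative, not Veeam's API) of a Scale-Out Backup Repository as local Performance Tier extents plus an Object Storage Capacity Tier:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SOBR:
    """Toy model of a Scale-Out Backup Repository."""
    performance_extents: List[str]       # local Backup Repositories
    capacity_tier: Optional[str] = None  # Object Storage repository
    placements: dict = field(default_factory=dict)

    def place(self, backup_file: str, needs_fast_restore: bool) -> str:
        # Recent restore points stay on a local extent;
        # older ones can go to the Object Storage tier.
        if needs_fast_restore or self.capacity_tier is None:
            target = self.performance_extents[0]
        else:
            target = self.capacity_tier
        self.placements[backup_file] = target
        return target
```

A quick usage example: `SOBR(["repo1", "repo2"], "azure-blob")` keeps fresh files on `repo1` and sends older ones to `azure-blob`.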
Greetings friends, in the previous post we saw the introduction to this great series on Microsoft Azure Blob Storage and how to send our backups there. In today's chapter we are going to walk through a comfortable step by step on how to create our Microsoft Azure Blob container, but first a bit of theory. Before moving on to the content, I would like to show you the different components that make up Microsoft Azure Blob:
- Storage Account: In this component we select whether we want the Storage Account to be General Purpose v1, General Purpose v2, or Blob Storage, as well as whether we want the Hot tier or the Cool tier. In the Storage Account we can also select how we want Microsoft Azure to protect the account: geo-redundant, locally redundant within the region, and so on.
- Container: A logical resource within a Storage Account. We can grant containers access permissions and make them public if we want.
- Blob: The content inside the container, specially prepared to store petabytes of information and exposed through a RESTful API.
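These three components map directly onto the URL of every object. As a quick illustration (the account, container, and blob names below are made up), a blob's REST endpoint is composed like this:

```python
from urllib.parse import quote

def blob_url(storage_account: str, container: str, blob_name: str) -> str:
    """Compose the public REST endpoint of an Azure blob.

    Layout: https://<account>.blob.core.windows.net/<container>/<blob>
    """
    return (f"https://{storage_account}.blob.core.windows.net/"
            f"{quote(container)}/{quote(blob_name)}")

print(blob_url("veeamdemo", "backups", "vm1.vbk"))
# https://veeamdemo.blob.core.windows.net/backups/vm1.vbk
```

This is also why the Storage Account name must be globally unique: it becomes the DNS hostname of the endpoint.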
Greetings friends, a few weeks ago we saw the new functionality included for free in the latest Veeam Availability Suite v9.5 Update 4, Cloud/Capacity Tier, which allows us to move backups that are already in long-term retention to the cloud to save disk space. Today I bring you the beginning of a series on Microsoft Azure Blob, and on why this cloud platform is one of the best candidates for storing those backup files.
Diagram of how it works
I would like to show you this diagram so that we understand the Veeam workflow between our local data center and Microsoft Azure Blob: As we already saw in the article about Cloud/Capacity Tier, only backup files that no longer have dependencies can be uploaded to the cloud; this means no other files, such as incrementals or synthetic fulls, depend on them. The GFS case is easier to understand, since those files are full backups and can therefore be uploaded directly.
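A simplified way to see which files have no dependencies (illustrative logic only, not Veeam's actual algorithm): an incremental depends on the restore points before it back to its full, so only fulls not followed by incrementals qualify:

```python
def dependency_free_fulls(chain):
    """Return the full backups that nothing else depends on.

    `chain` is an ordered list of (name, kind) tuples, where kind is
    "full" or "incremental". A full is dependency-free only when it is
    not followed by incrementals before the next full, which is the
    typical situation of a GFS full backup.
    """
    free = []
    for i, (name, kind) in enumerate(chain):
        if kind != "full":
            continue
        has_dependents = False
        for _, later_kind in chain[i + 1:]:
            if later_kind == "full":
                break  # a newer full seals this chain
            if later_kind == "incremental":
                has_dependents = True
                break
        if not has_dependents:
            free.append(name)
    return free
```

So in a chain `full1 -> inc1 -> gfs-full`, only `gfs-full` would be uploadable, exactly as the diagram describes.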
Greetings friends, some time ago I showed you all the news in Veeam Availability Suite v9.5 U4, and among it, across several articles, all the power of Veeam Capacity/Cloud Tier, which lets us take advantage of Object Storage providers to store large numbers of backups that normally require long retention: months, semesters, or years. Many of you, in the community, have wondered whether you can use storage in your own datacenters that offers Object Storage services, and the answer is YES, of course: as long as the product publishes an S3 service following modern standards, it should work. Veeam has an unofficial list of products and solutions that you can deploy in your datacenter. And it is from this list that I picked one of the products that most caught my attention, because of the company behind it, Dell EMC, and because it has a Community Edition: I'm talking about Dell EMC ECS. As this blog entry is quite long, I leave you the menu here so you can jump wherever you want:
- Dell EMC ECS at a Glance
- Dell EMC ECS CE at a Glance
- Dell EMC ECS CE System Requirements
- Dell EMC ECS CE OVA Deployment on VMware vSphere 6.7 U2
- Configuring Dell EMC ECS CE OVA
- Object Storage Repository and Capacity Tier Configuration in Veeam Backup & Replication v9.5 U4
Dell EMC ECS at a Glance
Dell EMC ECS is an industry-leading object storage platform designed to support traditional and next-generation workloads. Available in multiple consumption models, it is software-defined and can be purchased as a turnkey appliance or as a service operated by Dell EMC. It enables organizations of all sizes to economically store and manage unstructured data at any scale and for any length of time. ECS is an object storage system that makes use of persistent storage containers for cloud storage protocols; it supports AWS S3 and OpenStack Swift, and in file-enabled buckets it can provide NFS exports for access to file-level objects. We find two main models: the smallest starts at a not inconsiderable 60TB, and the largest reaches configurations of up to 8.6PB per rack. As we can imagine, what Dell EMC ECS provides is object-based storage, scalable and at a reduced price. If we think of an architecture in which a user writes a block using an S3 connector, it would look like this: the block is replicated between all the nodes, and the write is considered valid only once the block has been stored on all of them. If we imagine a similar operation, but this time reading an S3 block, it would look like this: the request enters through Node 1, which gathers the information from all the nodes; at the end the data is sent from Node 3 to Node 1, and from there it is presented to the client.
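The write path described above, where a block is valid only once every node has stored it, can be sketched as follows (the class, node names, and in-memory "cluster" are of course invented for illustration):

```python
class ECSClusterSketch:
    """Toy model of the all-nodes write acknowledgement described above."""

    def __init__(self, nodes):
        # one in-memory key/value store per node
        self.nodes = {name: {} for name in nodes}

    def write(self, key, block):
        """An S3 PUT: replicate to every node; valid only if all acknowledge."""
        acks = 0
        for store in self.nodes.values():
            store[key] = block
            acks += 1
        return acks == len(self.nodes)  # all nodes must hold the block

    def read(self, key, entry_node):
        """An S3 GET entering through one node, gathered from the cluster.

        In the real system the node owning the data forwards it to the
        entry node, which returns it to the client.
        """
        for store in self.nodes.values():
            if key in store:
                return store[key]
        raise KeyError(key)
```

Real ECS uses erasure coding and chunk ownership rather than full copies on every node, so treat this only as a picture of the acknowledgement flow.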
Greetings friends, a few days ago Veeam announced Veeam Backup for Microsoft Office 365 v3.0, and with this new version come many improvements; some I already covered in the previous post, and others are somewhat more hidden within the RESTful API and PowerShell. Today I want to talk about a PowerShell cmdlet called Measure-VBOOrganizationFullBackupSize that lets us gather information about the organization or organizations we want to protect, without having to go to the Admin Center and review the elements one by one, as I showed you in this other post.
Measure-VBOOrganizationFullBackupSize at a glance
With this new cmdlet it will literally take us a few seconds to know the full size of the organizations we want to protect. It is very useful when sizing repositories, and even for dividing repositories by application or by tenant, for example.
Greetings everyone, if you have followed this blog for quite some time, I am sure you have stopped by the blog entry about how to monitor a vSphere environment using Grafana, InfluxDB, and telegraf. That blog post has tons of comments and feedback from all of you around the globe, which is great. Lately I've started receiving comments about a common error that some of you have seen once the solution is deployed; the error reads something like this:
Task Name: Remote View Manager, Status: The request refers to an unexpected or unknown type
And it will look like this in the vSphere Client (thanks to Stuart Kennedy for the screenshot): This is caused by an old telegraf version you might be running, and the solution is quite simple: upgrade to the latest telegraf version.
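If you are not sure whether your telegraf is too old, a quick way to check programmatically (the parsing logic is mine, and the minimum version is a placeholder; set it to whatever release fixed the issue for you):

```python
import re

def needs_upgrade(version_output: str, minimum=(1, 10, 0)) -> bool:
    """Parse `telegraf --version` output and compare against a minimum.

    Typical output looks like: "Telegraf 1.9.4 (git: HEAD abc1234)".
    """
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", version_output)
    if not match:
        raise ValueError("could not find a version number")
    current = tuple(int(part) for part in match.groups())
    return current < minimum  # tuple comparison: major, minor, patch
```

Feed it the output of `telegraf --version` from your collector host to decide whether to upgrade.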
Greetings friends, I have told you on numerous occasions how to deploy Nutanix Community Edition on different platforms: on VMware Fusion, using the ISO, using PowerShell to create a three-node cluster, and so on. Today I bring you something much simpler; it only takes 5 minutes to have a single-node cluster, since I have altruistically created an OVF image ready for you to try Community Edition today.
System requirements to deploy Nutanix Community Edition 5.10 in OVF format
I want to emphasize that this image is based on the latest version of Nutanix Community Edition, 5.10, which was announced a few days ago, and that this OVF is intended only to be deployed nested on vSphere or ESXi. On to the requirements; the image is configured as follows, but you can edit the CPU and RAM:
- Intel CPU, 4 cores minimum, with VT-x support enabled. The image has 8 cores: 2 processors of 4 cores each.
- Memory, 16GB minimum. The image has 24GB.
- Hot Tier (SSD): one SSD per server minimum to install the Acropolis Hypervisor, ≥ 16GB.
- Hot Tier (SSD): one SSD per server minimum, ≥ 200GB per server. We'd better deploy it over SSD.
- Cold Tier (HDD): one HDD per server minimum, ≥ 500GB per server.
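A quick sanity check of a nested VM's configuration against these minimums could look like this (the field names of the spec dictionary are my own):

```python
def check_ce_requirements(vm):
    """Compare a nested VM spec against the Nutanix CE minimums above.

    `vm` is a dict with cores, ram_gb, hypervisor_ssd_gb, hot_tier_gb,
    and cold_tier_gb. Returns the list of unmet requirements (empty = OK).
    """
    minimums = {
        "cores": 4,               # Intel CPU with VT-x enabled
        "ram_gb": 16,
        "hypervisor_ssd_gb": 16,  # SSD for the Acropolis Hypervisor
        "hot_tier_gb": 200,       # SSD hot tier
        "cold_tier_gb": 500,      # HDD cold tier
    }
    return [key for key, minimum in minimums.items()
            if vm.get(key, 0) < minimum]
```

The OVF image described above (8 cores, 24GB RAM) passes this check with room to spare.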