The Blog of Jorge de la Cruz

Everything about VMware, Veeam, InfluxData, Grafana, Zimbra, etc.

FreeNAS: Configure Veeam Backup Repository Object Storage connected to FreeNAS (MinIO) and launch Capacity Tier

22nd August 2019 - Written in: linux, opensource

Greetings friends, throughout this series of posts on FreeNAS I have shown you how to deploy it on VMware vSphere in a very comfortable way, how to add a Let’s Encrypt SSL certificate so the FreeNAS services are published securely, and how to configure the FreeNAS Object Storage service (based on MinIO) with just a few clicks.

To conclude the series, I’d like to talk about how we can combine what we’ve learned with Veeam Capacity/Cloud Tier.

Scale-Out Backup Repository – Basics

Before we go any further, it is important that we understand what we intend to do. Cloud/Capacity Tier builds on Veeam’s Scale-Out Backup Repository to combine Performance Tier and Capacity Tier.

In a very simple diagram it looks like this: a set of local extents (Backup Repositories) called the Performance Tier, to which we add a Capacity Tier based on Object Storage, where the backups we no longer need to keep on the performance tier are sent:

Backup Jobs Cloud/Capacity Tier

If, for example, we wanted to send our Backup Jobs with a 30-day retention policy to the Capacity Tier, taking into account that we create a synthetic or active full each week, we would end up with something like this between the performance tier and the capacity tier:

The advantage of this method is that on disk we only keep the open backup chain, that is, the latest full and its incrementals, while everything else needed to complete those 30 days lives in Object Storage (each chain is uploaded to the cloud the day after it is closed), which is why good connectivity to FreeNAS is necessary. Remember that when the currently open backup chain is sealed, the oldest chain we see on the left is deleted from the cloud, and so on, always respecting our retention policy.
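
To make that offload rule easier to follow, here is a minimal Python sketch of the idea (an illustration only, with hypothetical names and dates, not Veeam’s actual logic): sealed chains older than the operational restore window go to object storage, the open chain stays on the performance tier, and chains beyond retention are removed.

```python
from datetime import date, timedelta

# Hypothetical weekly backup chains under a 30-day retention policy.
# A chain starts with a full (synthetic or active) and is "sealed" once the
# next full exists; only sealed chains are candidates for the capacity tier.
today = date(2019, 8, 22)
retention_days = 30
operational_window_days = 0  # "older than 0 days": offload as soon as sealed

chains = [
    {"full_created": today - timedelta(weeks=w), "sealed": w != 0}
    for w in range(6)  # six weekly chains, newest (still open) first
]

for chain in chains:
    age_days = (today - chain["full_created"]).days
    if age_days > retention_days:
        placement = "deleted (outside the retention policy)"
    elif chain["sealed"] and age_days > operational_window_days:
        placement = "capacity tier (object storage on FreeNAS/MinIO)"
    else:
        placement = "performance tier (local extent, open chain)"
    print(f"Chain with full from {chain['full_created']}: {placement}")
```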

Cloud/Capacity Tier of Backup Copy Jobs with GFS

If instead we wanted to send our GFS restore points, which perhaps makes more sense, we could offload the weekly, monthly and yearly fulls. Looking at a whole year, it would look like this:

On disk we would keep only the minimum that the Backup Copy Jobs need to build those fulls, while in Object Storage we would have one full for each month and one for each of the most recent weeks; if we had any yearly, it would be there as well.

Now that the whole concept of the Scale-Out Backup Repository and what gets uploaded to Object Storage is clearer, let’s walk through the configuration steps in Veeam.

Object Storage Repository and Capacity Tier configuration in Veeam Backup & Replication v9.5 U4

All that is left is the Veeam configuration. Remember that we can only send files that are complete, that is, files whose backup chain is not open. In a Backup Copy Job chain, for example, the closed files are the GFS fulls created with the retention we select, in my case 4 weekly, 12 monthly and 1 yearly, so the files that satisfy the GFS policy and have already been created, being fulls, are perfect candidates to send.
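
As a rough back-of-the-envelope for how much ends up in the capacity tier with that GFS retention, here is a small sketch; the 500 GB size of a full is an assumed example value, not a measurement from my lab.

```python
# Steady-state GFS points kept in object storage with the retention used here:
# 4 weekly, 12 monthly and 1 yearly full backups.
gfs_retention = {"weekly": 4, "monthly": 12, "yearly": 1}
full_backup_size_gb = 500  # assumed size of one GFS full; adjust to your jobs

points_in_object_storage = sum(gfs_retention.values())  # 17 fulls
capacity_estimate_gb = points_in_object_storage * full_backup_size_gb

print(f"GFS points offloaded at steady state: {points_in_object_storage}")
print(f"Rough capacity tier usage: {capacity_estimate_gb} GB")
```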

We will start by creating a Backup Repository of the Object Storage type, pointing to this FreeNAS. Select S3 Compatible and enter a name for your repository. Next we introduce the Service point, which for FreeNAS is https://YOURIP:9000, leave the region as it is, and make sure we select the credentials we created previously. We will then see the Object Storage bucket, where we have to create a folder. Once everything is ready, we click Finish.

The next and last step is to create a Scale-Out Backup Repository, where we combine local repositories with Object Storage repositories, which is where the Capacity Tier comes in. From the Performance Tier we select a repository that will hold the GFS copies and leave the placement policy on Data Locality. In the Capacity Tier we select the Object Storage repository; in my case I want to send everything that is already sealed and older than 0 days, which means that if it is Saturday and the weekly full is created by policy, within about four hours at most that copy will be sent to the object storage. If all is well, we click Finish.

Finally, with CTRL + right-click on the Scale-Out Backup Repository, I run the offload job manually, since I want to send everything to my object storage right away. The job will begin, and since the Object Storage is on our own network, the performance is really powerful, which is another of the advantages, apart from the security, data protection, and so on. If we go back to our FreeNAS dashboard we can see the space consumption, and on the performance graph we can also see the data moving.

That’s all friends, I hope you like it, that you consider creating your own free and very powerful Object Storage in your labs with FreeNAS, and that you leave comments on the article.
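
Besides the FreeNAS dashboard, we can also confirm from any machine that the offloaded data really landed in the bucket. Here is a minimal sketch using Python and boto3; the service point matches the one above, while the access key, secret key and bucket name are hypothetical placeholders to replace with your own.

```python
import boto3

# Hypothetical placeholders: use the service point, keys and bucket name
# configured on your FreeNAS/MinIO instance and in the Veeam wizard.
s3 = boto3.client(
    "s3",
    endpoint_url="https://YOURIP:9000",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
    verify=True,  # or the path to a CA bundle if you use your own certificate chain
)

# The bucket we pointed Veeam at should show up here.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# Once the offload job has run, the folder Veeam created inside the bucket
# fills up with objects; listing a few of them confirms the Capacity Tier works.
response = s3.list_objects_v2(Bucket="veeam-bucket", MaxKeys=10)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```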

I leave you here the whole menu with the entries in the FreeNAS series:

  • FreeNAS: Initial installation and configuration of FreeNAS 11.x as VM within vSphere
  • FreeNAS: Enable and configure Object Storage in FreeNAS 11.x compatible with S3 APIs – Based on MinIO
  • FreeNAS: How to Deploy a Let’s Encrypt SSL Certificate in FreeNAS 11.x and HTTPS Configuration
  • FreeNAS: Configure Veeam Backup Repository Object Storage connected to FreeNAS (MinIO) and launch Capacity Tier

Filed Under: linux, opensource Tagged With: freenas, freenas aws, freenas installation, freenas object storage, freenas object storage api, freenas s3, freenas ui, freenas vmware, freenas vsphere

Reader Interactions

Comments

  1. Gerardo Altman says

    13th November 2019 at 3:41 am

    Hi JORGE

    Nice article, we are currently looking at testing this in our lab and by pure chance came across your article.

    Now that it’s been a few months of using FreeNAS + ZFS and MinIO, how are you finding the ZFS performance?

    Trawling through the FreeNAS and Veeam forums there have been mixed results with ZFS performance as a target, but this is directly via CIFS or NFS shares, and S3 hasn’t been covered at all.

    would be interested to discuss the project results in more detail.

    Cheers
    Gerardo

  2. jorgeuk says

    13th November 2019 at 10:41 am

    Hello Gerardo,
    In my opinion it runs as fast as you size it. FreeNAS has some recommendations for deployment: if you are not doing anything fancy like dedupe or compression on the ZFS volumes, the requirements are lower, but enabling MinIO still consumes more RAM. So just take a look at the CPU and RAM requirements, and also disk, of course.

    I am biased as I am running all on VSAN with NVMe and SSD, so you can imagine the performance of this system 🙂 It is just an abstraction of the resources you give to it. My recommendation? Give it a go, it is free 🙂

  3. Gerardo Altman says

    13th November 2019 at 10:45 am

    Hi George

    We are looking to experiment with an all SSD ZFS config starting with 10 x 8 TB drives expandable to 24 on a single box.

    It will be interesting to see where ZFS performance starts to degrade.

  4. jorgeuk says

    13th November 2019 at 10:47 am

    Thinking of FreeNAS just for the Object Storage part, or CIFS/NFS as well?

  5. Gerardo Altman says

    13th November 2019 at 10:50 am

    Hi Jorge

    thanks for the encouragement 🙂

    we will be playing with an all-SSD ZFS box starting with 10 x 8 TB drives and eventually extending out to 24, trying to see where performance starts to degrade.

    it will be interesting to see how it performs compared directly to MinIO erasure coding, which is meant to scale better than ZFS; less functional, but it scales much better.

    So you’re not using this in production? Was it just a dev deployment?

    Cheers
    G

  6. jorgeuk says

    13th November 2019 at 10:56 am

    Hello Gerardo,
    That is right, just for my lab, with real backups of my homelab and my Office 365 accounts, but I would not call this production, even if it is doing 50 VM backups and a few Office 365 accounts.

  7. Gerardo Altman says

    13th November 2019 at 10:58 am

    That’s a good question; it will depend on how FreeNAS performs as a storage target for Veeam once backup and restore processes are tested under load.

    We are waiting for Veeam to come back to us with any best practices; according to the TrueNAS website it is Veeam certified, so we are looking to get more information from Veeam on this and see what the ultimate config will be.

    Not sure if using NFS is preferred, as the Veeam > FreeNAS CIFS/SMB implementation may be limited to SMB1 or 2, not sure.

    If it works well we may look at adding it as a second or third tier backup repository to complement our StoreOnce repositories; since both MinIO and ZFS can replicate at different levels (bucket, dataset or pool), it may be an interesting fit for internal use cases and client-side bucket replication between datacenters.

    very keen to see what we can achieve 🙂

    Cheers
    G

