Posts by regiscarlier

Senior Storage Architect with over 20 years of experience in the storage industry as a Field Engineer, System Engineer, and Technical and Strategic Advisor in data management and on-premises, hybrid, and cloud storage infrastructure. As a NetApp expert, I am a member of the #NetAppUnited team. My goal is to share my experience and to be challenged every day. My way of life … to learn every day …

How to restore a 7-Mode NDMP backup onto an ONTAP cluster using Atempo Time Navigator. Yes, you can!

Context:

The use case seems simple. A customer had a NetApp FAS running 7-Mode Data ONTAP 8.2. This storage array was migrated to clustered ONTAP 9.1 (C-mode) and joined to an existing cluster. Before the migration, the customer backed up data through NDMP using Atempo Time Navigator 4.3.3. Now the challenge is to restore the previously backed-up data onto the new cluster.

 

Initial configuration before conversion to C-mode:

image1

Current configuration (after converting and joining the cluster):

image2

The Facts:

  • The previous configuration exists in the Time Navigator catalog database
    • 7-Mode controller systems
    • NDMP applications attached to the 7-Mode controller systems
  • The catalog database contains the backup jobs
  • Tapes still exist and the data is recoverable
  • Backup granularity is FILE
  • The SAN configuration is correct
  • NetApp cluster nodes egid-03 and egid-04 connect to the tape drives and the library
  • The Tina NDMP NetApp 7-Mode application refers to /vol/VOLUME_NAME as the ‘raw’ device

 

How-To:

I had no idea whether it was possible to restore NDMP 7-Mode data to a C-mode cluster, so I decided to test, try, and retry. The next section describes the procedure step by step.

Configure NDMP onto the Cluster

We need to configure node-scoped NDMP for nodes egid-03 and egid-04.

system services ndmp node-scope-mode on

system services ndmp start -node egid-03

system services ndmp start -node egid-04

system services ndmp modify -user-id backupndmp -node egid-03

system services ndmp modify -user-id backupndmp -node egid-04
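
Before going further, it is worth verifying that NDMP is actually up on both nodes. The following ONTAP CLI commands should do it (the exact output layout depends on the ONTAP version):

```
system services ndmp show
system services ndmp status
```

The first command lists the per-node NDMP configuration (enabled state, user ID), and the second lists any active NDMP sessions.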

 

Reconfigure the NDMP NetApp NAS systems in the Tina catalog

You need to declare the NetApp NAS filer and then create an NDMP application in charge of backup/restore. The NAS filers and NDMP applications still exist in the catalog, and you need to keep them to be able to restore previously backed-up data.

image3

You need to modify the NAS filers to reflect the converted controllers. In 7-Mode, controllers are reached through the admin network interface, whereas with C-mode node-scoped NDMP you need to connect to the node management interface. So I simply changed the IP of each NAS filer system in the catalog and mapped it to the node management IP/DNS name.

image4

That’s all for the NAS systems.

Next, you need to configure the NDMP applications:

Nothing special; just point each application to its NDMP server (the previously configured NAS filer).

image5

Now that everything seems configured, let’s test:

The problem is that in 7-Mode, the backup classes refer to volumes as /vol/VolumeName.

Since this is node-scoped NDMP, the volumes need to be hosted on the specific NDMP node. I created a vserver TEST, a volume VOL, and a volume SANBx on node egid-03. Volume VOL is mounted on /vol and SANBx is mounted on /vol/SANBx.

Then I tested a restore and, of course, it failed!

image6

The error was pretty simple to understand: Tina is looking for a vserver named vol with a volume SANBx inside it. So I simply renamed vserver TEST to vol.
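
With the same names as above, the rename itself is a single command on the cluster:

```
vserver rename -vserver TEST -newname vol
```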

Final test :

I retried after renaming the vserver to vol and … it works!

The data backed up from the /vol/SANBx volume could be restored into a SANBx volume of a vserver named vol, hosted on the NDMP node (restoring to a different location was not possible).

I could successfully restore data that had been backed up over NDMP from a 7-Mode controller onto a C-mode cluster.

image7

 

Conclusion

In fact, I spent a lot of time testing before the restore succeeded. Initially I even thought it was not possible, but if you don’t try, you don’t know. And once you know, it’s not so difficult ;).

It should work with other backup software, but I did not test it.

It should also work with SVM-scoped NDMP.
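
For reference, and strictly untested here, SVM-scoped NDMP would be enabled on the cluster side with commands along these lines (reusing the vserver name vol from my test):

```
system services ndmp node-scope-mode off
vserver services ndmp on -vserver vol
```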

I tried to restore to a location other than the original one, but that did not work with Tina.

And there are so many other behaviors and use cases to test … next time.

 

 

Basic step-by-step how-to:

 

The steps to restore NDMP 7-Mode backups to a C-mode cluster are simple:

  • Configure and start NDMP on the ONTAP cluster
  • Create a vserver named vol
  • Create volumes inside the vol vserver matching the 7-Mode volumes
    • The /vol/SANBx 7-Mode volume maps to a SANBx volume inside the vol vserver
  • Restore with NDMP using default settings
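
On the ONTAP side, the preparation in the list above boils down to a handful of commands. This is a sketch: the aggregate name aggr_egid03 and the volume sizes are placeholders for my lab values, and the aggregate must live on the node running node-scoped NDMP (egid-03 here):

```
vserver create -vserver vol -rootvolume vol_root -aggregate aggr_egid03 -rootvolume-security-style unix
volume create -vserver vol -volume VOL -aggregate aggr_egid03 -size 100GB -junction-path /vol
volume create -vserver vol -volume SANBx -aggregate aggr_egid03 -size 1TB -junction-path /vol/SANBx
```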

 

It was tricky … but fun! The conclusion is: yes, you can restore 7-Mode NDMP backup data to C-mode using Atempo Time Navigator, and I am sure it should work with other software vendors too.

 


Merging a 7-Mode system and a clustered ONTAP system using XCP: a customer case study

Context

The European Genomic Institute for Diabetes (EGID) is an international research institute focused on diabetes (types 1 and 2), obesity, and their associated risk factors. The fundamental mission of the institute is to achieve major breakthroughs in the understanding of these diseases, as well as in their diagnosis and therapeutic treatment. DNA sequencers generate a huge amount of genomics data stored on NetApp storage: millions of files and a capacity of about 500 TB, with a projected need of about 1 PB (800 TB of raw data plus about 200 TB). Data is used by several entities and is accessed by servers exclusively through the NFSv3 protocol.

There are 2 storage systems:

  • A 3270 HA pair operating in 7-Mode 8.1 with about 600 TB of data
  • A 3250 HA pair in a switched cluster running ONTAP 9.1, initially with about 200 TB of data

image1

The customer had several needs and constraints:

  • expand the capacity to store data (they wanted to add about 300 TB), but with minimal investment
  • simplify the storage infrastructure
  • simplify the data management
  • have fewer and bigger volumes for the genomics data
  • The source genomics data is essentially read-only and only rarely modified.
  • Source data is processed by many algorithms; jobs run for several days, but a small cutover window between jobs is possible if necessary.
  • It was easy for the customer to modify mount points on the servers

The target storage system

We studied several solutions to define a new architecture and, where needed, transition the data; the choice seemed obvious:

  • a single 4-node cluster
  • a single FlexGroup for the genomics data

Simple, efficient, reliable!

image2

Step by step how-to

To do this, we need to fully empty the 3270 7-Mode system before reinitializing it in ONTAP and joining it to the existing cluster. Of course, we cannot erase any data, so we need to migrate it step by step onto the cluster. Data is accessed exclusively through NFSv3 and the genomics data needs to sit in a single FlexGroup volume, so the perfect migration tool is NetApp XCP. The NetApp XCP NFS Migration Tool is a high-performance NFSv3 migration tool for fast and reliable migrations from third-party storage to NetApp, and for NetApp-to-NetApp transitions.

Because of the layout of the source aggregates, volumes, and directories, we needed to move data selectively, step by step. Each volume contains a few directories holding dozens of TB and millions of files. The granularity used for the XCP transition was directories inside volumes, to minimize job interruptions. Each compute job uses well-known directories, and once its data has been transitioned, it is easy to modify the job to run against the new FlexGroup.
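
Before each copy, xcp scan is handy for sizing a directory tree and counting files. A minimal example, with the path written in the standard server:/export/dir form (which may differ slightly from the notation used in the copy commands later in this post):

```
/home/regis/XCP/xcp-1.3/linux/xcp scan -stats 192.168.0.1:/vol/SANB7/run_215
```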

Step 1

First, we added the newly purchased shelves to the 3250 cluster, adding about 300 TB of capacity.

image3

We created a new vserver and the FlexGroup needed to host the genomics data.

Step 2

We copied about half of the data from the 3270 to the new FlexGroup using XCP.

image4

Example:

….

/home/regis/XCP/xcp-1.3/linux/xcp copy -newid copy_run_215 192.168.0.1:/vol/SANB7:run_215 192.168.0.50:/RUN:run_215

/home/regis/XCP/xcp-1.3/linux/xcp copy -newid copy_run_216 192.168.0.1:/vol/SANB7:run_216 192.168.0.50:/RUN:run_216

/home/regis/XCP/xcp-1.3/linux/xcp copy -newid copy_run_217 192.168.0.1:/vol/SANB7:run_217 192.168.0.50:/RUN:run_217

….

As you can see, with a 10 Gb/s network, XCP was very fast, at about 2 TB per hour.

….

6,874 scanned, 5,543 copied, 5,355 indexed, 15 giants, 361 GiB in (570 MiB/s), 361 GiB out (571 MiB/s), 10m44s

 6,874 scanned, 5,546 copied, 5,355 indexed, 15 giants, 364 GiB in (570 MiB/s), 364 GiB out (570 MiB/s), 10m49s

 6,900 scanned, 5,589 copied, 5,388 indexed, 15 giants, 367 GiB in (561 MiB/s), 366 GiB out (561 MiB/s), 10m54s

….
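
A quick back-of-the-envelope check that the ~570 MiB/s shown in the progress lines above really is about 2 TB per hour (plain arithmetic, nothing NetApp-specific):

```python
# Convert the throughput reported by xcp (MiB/s) into decimal TB per hour.
MIB = 1024 ** 2                      # bytes in one MiB
rate_mib_s = 570                     # MiB/s, as shown in the xcp progress lines
tb_per_hour = rate_mib_s * MIB * 3600 / 1e12
print(round(tb_per_hour, 2))         # → 2.15
```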

Step 3

After step 2, the 3250 cluster is nearly full and we still need space for the remaining data. Once all directories of a volume are migrated, we can destroy the volume, and once all volumes of an aggregate are empty, we can destroy the aggregate. Aggregates are spread across the shelf stacks, so to empty entire shelves we need to use ‘disk replace’ between the stacks.

image5

Example:

....

disk replace start -f 0a.09.17 3d.02.8

disk replace start -f 0a.09.19 3d.02.9

disk replace start -f 0a.09.18 3d.02.10

disk replace start -f 0a.09.0 3d.02.11

disk replace start -f 0a.09.16 3d.02.12

....

Then, the full stack is empty.

image6

Step 4

We disconnect stack 2 from the 3270 and connect it to the 3250. We create temporary aggregates and expand the FlexGroup.

image7

Step 5

We copied the remaining data from the 3270 to the 3250 using XCP, as in step 2.

image8

After transitions, the 3270 is empty.

image9

Step 6

The 3270 is now empty, so we can initialize it in ONTAP 9.1, add 10 Gb Ethernet cards, and join it to the cluster.

image10

Step 7

Now we just need to rebalance data and shelves between the nodes. We do it with vol move inside the cluster.
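
Each rebalancing move is a standard non-disruptive volume move plus a check on its progress; the vserver, volume, and aggregate names below are placeholders, not the customer’s actual names:

```
volume move start -vserver svm_genomics -volume data_vol1 -destination-aggregate aggr_node3
volume move show
```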

image11

First, moves are done to empty stack 2.

image12

Once done, we can reconnect stack 2 to the 3270 nodes.

image13

Then we balance the volumes between nodes and aggregates with vol move.

image14

Target storage system

image15

Conclusion

The primary goals were achieved without any problems:

  • a single 4-node cluster
  • a single FlexGroup for the genomics data
  • a simplified storage infrastructure
  • simplified data management
  • fewer and bigger volumes for the genomics data

 

XCP was really the best tool for this transition. It is very fast and reliable: we could transfer data at more than 2 TB per hour. And it can migrate data from multiple FlexVols to a single FlexGroup.

References


https://xcp.netapp.com


http://www.egid.fr


http://www.scalair.fr

 

Let’s start!


Thanks for joining me!
