May 9

A new product announced by EMC this year is ViPR. I got some hands-on time and I like it. The lab took me through self-service provisioning, monitoring and reporting, visibility from VMware, and object data services.

Join me as we go through it together…

We begin as an end user in the product, requesting block storage for a host. Carving LUNs, zoning, masking, and so on were all automated. The whole process took seconds and ended with a confirmation that our storage was ready. Next we act as a storage administrator and create a new service for the catalog. Here are some pictures of all of the possible services:

20130509-112806.jpg

20130509-112840.jpg

20130509-112846.jpg

20130509-112851.jpg

20130509-112900.jpg
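
For context, here is roughly what a self-service block request like the one above boils down to once you strip away the UI: an order placed against a catalog service, which the engine then turns into the LUN carving, zoning, and masking work. The endpoint, service name, and payload below are my own hypothetical sketch of that interaction, not ViPR's actual catalog API.

```python
# Hypothetical sketch of placing a "block storage for a host" catalog order.
# URL, service name, parameters, and credentials are all placeholders.
import requests

order = requests.post(
    "https://vipr.example.com/catalog/orders",        # assumed endpoint
    json={
        "service": "Create Block Volume for a Host",  # assumed catalog service name
        "parameters": {
            "host": "esx01.example.com",
            "virtual_pool": "high-performance",
            "size_gb": 100,
        },
    },
    auth=("provisioning-user", "password"),
    verify=False,  # lab-style shortcut; don't skip cert checks in production
)
print(order.json())  # something like an order id and a status to poll
```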

We now move on to approvals. As the end user we requested some block storage for a host; we switch over to the administrator role to approve it.

20130509-113010.jpg

The next exercises are all monitoring and reporting. I thought it was excellent and was entranced; I didn't take a single picture. I currently use ProSphere, and I see this as an upgrade.

Moving on, we are back to being an end user. We need a datastore added to our VMware environment. We are growing wildly and things have been crazy. No problem: we use self-service to add a new 5 GB high-performance file-based datastore.

20130509-113115.jpg

Let's go into some detail about what happened.

20130509-113316.jpg

20130509-113328.jpg

20130509-113332.jpg

20130509-113336.jpg

20130509-113342.jpg

20130509-113346.jpg

As you can see, everything has been automated for us; a rough sketch of the VMware piece follows the list.

    Create a file system
    Create an NFS export
    Connect VMware to the NFS export
    Create a datastore
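
The last two steps are the VMware end of the workflow. A rough pyVmomi sketch of that piece, mounting the NFS export as a datastore, might look like the following; the hostnames, credentials, export path, and datastore name are placeholders, and the file system and export themselves would already have been created by ViPR in the first two steps (I'm not reproducing that API here).

```python
# Rough sketch of mounting an NFS export as a datastore with pyVmomi.
# Host names, credentials, and paths are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skip certificate checks
si = SmartConnect(host="vcenter.example.com",
                  user="administrator", pwd="password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
esx_host = view.view[0]  # first ESXi host, good enough for the sketch

# Describe the NFS export the earlier steps created and mount it.
spec = vim.host.NasVolume.Specification(
    remoteHost="vipr-file.example.com",
    remotePath="/vipr/hp_file_export",
    localPath="HighPerf_NFS_DS",   # datastore name as it appears in vCenter
    accessMode="readWrite")
esx_host.configManager.datastoreSystem.CreateNasDatastore(spec)

Disconnect(si)
```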

The moment of truth is when we log into vCenter and add a virtual machine on our new datastore.

20130509-113441.jpg

Closing in on the end of the lab, we use vCenter Operations (vC Ops) to monitor performance and capacity. We find our datastore is full but the storage array has plenty of space.

In the end we use the Amazon S3 API and ViPR to simulate an outside marketing firm updating an image. We then access it internally over CIFS and add our own updates.
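
To make that last exercise a bit more concrete, here is a minimal sketch of the two access paths, assuming an S3-compatible object endpoint on the ViPR side and a CIFS share exposing the same bucket; the endpoint, bucket name, credentials, and share path are all placeholders.

```python
# Sketch of the object data services exercise: an external party uploads an
# image over the S3 API, and the same object is then read over CIFS.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://vipr-object.example.com",  # assumed S3-compatible endpoint
    aws_access_key_id="MARKETING_KEY",
    aws_secret_access_key="MARKETING_SECRET",
)

# "Marketing firm" pushes the updated image via S3.
s3.upload_file("banner_v2.png", "marketing-assets", "campaign/banner.png")

# Internally, the same bucket is exposed as a CIFS share (assumed mount path),
# so the object can be opened like any other file and updated in place.
with open(r"\\vipr-file.example.com\marketing-assets\campaign\banner.png", "rb") as f:
    data = f.read()
```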

Here is a summary of what the lab included.

20130509-113540.jpg

May 7

This session is presented by Adnan Sahin. I googled him when I first met him; his name is on 13 patents relating to various storage technologies. I have asked him many performance questions about how to make a VMAX scream. He currently works in Performance Engineering.

Objectives for this session:
1. Understand VMAX Cloud Edition architecture and components.
2. Understand performance considerations.
3. Understand the REST API and integration with orchestration.

Guiding principles of VMAX Cloud Edition:
Secure multi-tenancy
Tenant performance isolation and control
Management isolation and role-based management
Simplified management via abstracted entities
Manage applications instead of volumes
Use tenant language rather than storage- or array-specific language
Integration with orchestration software via REST APIs

I like the use of tenant language. Today we use all the technical terms like FA ports, meta volumes, and TDATs; moving to terms that our customers use, like disk, datastore, and drive, may be useful.

Cloud Edition is hardware and software. The Cloud Command Center is provided by EMC, and your VMAX is connected to it via a secure VPN, similar to ESRS. No data leaves the data center, just management information and configuration. You access Cloud Command via HTTPS.

The self-service portal running in the Command Center allows for management, chargeback, and reporting. The data warehouse maintains storage resource usage and the mapping between tenants and the resources they are using. It is the source for reporting and chargeback.

The coordinator determines what resources are used, based on the data warehouse. It also applies reservations and commits. It uses placement rules and policies, like tenants only being allowed to use certain data centers or arrays. It enforces approvals, creates execution plans, and limits the service levels available to each tenant.
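
As a toy illustration of the placement-rule idea (entirely my own sketch of the concept, not how the coordinator is actually implemented), a rule like "this tenant may only use these arrays" could be as simple as:

```python
# Toy placement rule: each tenant may only land on certain arrays.
# Tenant and array names are invented for the sketch.
TENANT_ALLOWED_ARRAYS = {
    "acme": {"vmax-ny-01", "vmax-ny-02"},
    "initech": {"vmax-dc-01"},
}

def place(tenant, candidate_arrays):
    """Return the first candidate array this tenant is allowed to use."""
    allowed = TENANT_ALLOWED_ARRAYS.get(tenant, set())
    for array in candidate_arrays:
        if array in allowed:
            return array
    raise ValueError("no allowed array for tenant %r" % tenant)

print(place("acme", ["vmax-dc-01", "vmax-ny-02"]))  # vmax-ny-02
```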

The automation engine uses the execution plan from the coordinator. It executes array and SAN switch commands, and it provides rollback if the plan fails. You cannot use any additional tools beyond the single Solutions Enabler instance: no ProSphere, Unisphere, or SYMCLI.

The collector gathers configuration information: capacity usage, zones, etc.

Metrics used for I/O workload definition: I/O rate and size, read/write mix, cache hit/miss, sequential/random, burst/sustained, and skew. Know these terms and you will be rewarded. I use Iometer to play with the effects of these and do some of my own performance engineering. Skew and burst/sustained are hard to get; we should all know our I/O size and I/O rate.

I/O density is I/Os per second per GB; e.g. 500 IOPS over 1000 GB is 0.5 IOPS/GB. Data warehousing is usually large I/O sizes and mostly sequential; OLTP is usually small I/O.
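
To make the arithmetic concrete, here is a trivial sketch of the density calculation; the function name is mine, and the numbers are just the example above, nothing product-specific.

```python
# I/O density = IOPS per provisioned GB.
def io_density(iops, capacity_gb):
    """Return I/O density in IOPS per GB."""
    return iops / capacity_gb

print(io_density(500, 1000))  # 0.5 IOPS/GB
```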

Each service level corresponds to a specific performance capability in terms of I/O density. The Service Level Designer will help you identify the right service level for your workload profile; it looks really nice and easy to use, with basic and advanced interfaces depending on which metrics you know.

You buy drive packs that have SATA, SAS, and EFD. As you move up in service levels the amount of EFD increases; as you move down you get more SATA. More FA ports are granted as the service level increases: Bronze gets 2 FAs, Silver 4 FAs, Gold and higher 8 FAs.

Service levels and capacity are linked: if you double the capacity you get double the performance, forcing performance and capacity to be consumed in proportion. This keeps tiny workloads from running away with the performance and leaving none for the remaining capacity. Modern short stroking.
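
A tiny sketch of how that proportionality plays out; the per-level densities below are made-up numbers purely for illustration, not EMC's published figures.

```python
# Illustration of "double the capacity, double the performance".
SERVICE_LEVEL_DENSITY = {  # IOPS per GB (assumed values)
    "bronze": 0.25,
    "silver": 0.5,
    "gold": 1.0,
}

def iops_entitlement(service_level, capacity_gb):
    """Performance entitlement scales linearly with provisioned capacity."""
    return SERVICE_LEVEL_DENSITY[service_level] * capacity_gb

print(iops_entitlement("gold", 500))   # 500.0 IOPS
print(iops_entitlement("gold", 1000))  # 1000.0 IOPS: double capacity, double IOPS
```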

Users can change service levels. This is a big deal; I think moving between service levels is a key feature requirement for cloud. The system will automatically change the FAST policy, the FA and port count, and the host I/O limits.

The REST API supports GET, PUT, DELETE, etc.
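
For a feel of what driving it from an orchestration tool might look like: the base URL, resource paths, and JSON fields below are hypothetical placeholders rather than the actual Cloud Edition API, but the GET/POST/DELETE shape is the point.

```python
# Generic sketch of driving a provisioning REST API from orchestration.
# Endpoint, paths, fields, and credentials are assumptions.
import requests

BASE = "https://cloud-command.example.com/api"  # assumed endpoint
AUTH = ("tenant-admin", "password")             # assumed basic auth

# List a tenant's existing volumes (hypothetical resource path).
volumes = requests.get(f"{BASE}/tenants/acme/volumes",
                       auth=AUTH, verify=False).json()

# Request new capacity at a given service level.
new_vol = requests.post(
    f"{BASE}/tenants/acme/volumes",
    json={"name": "app01-data", "capacity_gb": 500, "service_level": "gold"},
    auth=AUTH,
    verify=False,
).json()

# Tear it down again.
requests.delete(f"{BASE}/tenants/acme/volumes/{new_vol['id']}",
                auth=AUTH, verify=False)
```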

May 22

Hands on Lab 19 was great. ProSphere is shaping up to be a tool I’ll frequently depend on.

At first glance, the ProSphere interface uses Flash to display some really great-looking charts and graphs. Each main section has a few screen tabs to go with it. Under each screen we can create saved screens or views and display them, and we can filter on many different attributes. One of the best pieces is search, right up at the top.

The dashboard section gives some quick overviews of how capacity and performance look. Each chart or graph here can be maximized; maximizing one shows a table below with additional information and lets you filter for things like specific arrays or hosts. In the lab I was shown how to check capacity as well as look at current trends. The performance tab let me see about six top metrics from hosts, arrays, and so on. I was quickly able to find a pool that was nearing capacity and track down what was using the space.

There are some nice generated diagrams that can be drilled into, as well as tables that appear near the bottom as you click on objects in table view. The diagram views let me double-click on things like hosts and see their HBAs and ports. I was able to see an interactive view of the SANs ProSphere was managing.

Discovery seemed to support a wide range of methods to gather data: the VMware API, SMI-S, SSH, and SNMP were all available. In the lab I configured discovery credentials for Cisco MDS, VMAX, and VMware. I also set up a schedule for when to discover. There was also the ability to group objects together; I thought of scenarios where I might group all of my database servers to show capacity planning information for their storage.

Search was a big win for me. I could search for arrays, hosts, VMs, and other items, which made it really easy to find information on any piece of the SANs. I really liked this. A number of other good exercises were in the lab; I would suggest you check it out if you get the chance.
