May 8

Here are all of the EMC blueprints. These are artistic representations of the content from the keynote and general sessions. Great to view back home and share with those who couldn’t attend.

[Eight photos of the EMC blueprint sketches]


May 7

This session is presented by Adnan Sahin. I googled him when I first met him; his name is on 13 patents relating to various storage technologies. I have asked him many performance questions about how to make VMAX scream. He currently works in Performance Engineering.

Objectives for this session:
1. Understand the VMAX Cloud Edition architecture and components.
2. Understand performance considerations.
3. Understand the RESTful API and integration with orchestration.

Guiding Principles of VMAX Cloud Edition
Secure multi-tenancy
Tenant performance isolation and control
Management isolation and role-based management
Simplified management via abstracted entities
Manage applications instead of volumes
Use tenant language rather than storage-specific or array-specific language
Integration with orchestration software via REST APIs

I like the use of tenant language. Today we use all the technical terms like FA ports, meta volumes, and TDATs. Moving to terms that our customers use, like disk, datastore, and drive, may be useful.

Cloud Edition is hardware and software. The Cloud Command Center is provided by EMC, and your VMAX is connected to it via a secure VPN, similar to ESRS. No data will leave the data center, just management information and configuration. You would access Cloud Command via HTTPS.

The self-service portal running in the Command Center allows for management, chargeback, and reporting. The data warehouse maintains storage resource usage and the mapping between tenants and the resources they are using. It is the source for reporting and chargeback.

The coordinator determines what resources are used from the data warehouse. It also applies reservations and commits. It uses placement rules and policies, like tenants only being able to use certain data centers or arrays. It enforces approvals, creates execution plans, and limits the service levels available to each tenant.
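A toy sketch of the kind of placement check the coordinator might apply; the tenant and array names, and the structure itself, are my own illustration, not the product’s:

```python
# Hypothetical placement policy: which arrays each tenant may consume.
PLACEMENT = {
    "tenant-a": {"array-101", "array-102"},
    "tenant-b": {"array-103"},
}

def placement_allowed(tenant, array):
    """Coordinator-style rule: a tenant may only use its approved arrays."""
    return array in PLACEMENT.get(tenant, set())

print(placement_allowed("tenant-a", "array-101"))  # True
print(placement_allowed("tenant-a", "array-103"))  # False
```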

The automation engine uses the execution plan from the coordinator. It executes array and SAN switch commands, and it provides rollback if the plan fails. You cannot use any additional tools other than the single Solutions Enabler: no ProSphere, Unisphere, or SYMCLI.
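Conceptually the engine walks the plan step by step and unwinds completed steps on failure. A rough sketch of that pattern (mine, not the actual implementation):

```python
def execute_plan(steps):
    """Run each (do, undo) pair in order; roll back on any failure."""
    completed = []
    try:
        for do, undo in steps:
            do()
            completed.append(undo)
    except Exception:
        # Undo the completed steps in reverse order, then re-raise.
        for undo in reversed(completed):
            undo()
        raise
```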

The collector gathers configuration information: capacity usage, zones, etc.

Metrics used for I/O workload definition: I/O rate and size, read/write, cache hit/miss, sequential/random, burst/sustained, and skew. Know these terms and you will be rewarded. I use Iometer to play with the effects of these and do some of my own performance engineering. Skew and burst/sustained are hard to get; we should all at least know our I/O size and I/O rate.

I/O density is I/Os per second per GB, e.g., 500 IOPS over 1,000 GB is 0.5 IOPS/GB. Data warehousing is usually large I/O sizes and mostly sequential; OLTP is usually small I/O.
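To make the arithmetic concrete, a quick sketch of the calculation (my own, not from the session materials):

```python
def io_density(iops, capacity_gb):
    """I/O density = I/Os per second per GB of capacity."""
    return iops / capacity_gb

# The example from the session: 500 IOPS across 1,000 GB.
print(io_density(500, 1000))  # 0.5 IOPS/GB
```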

Each service level corresponds to a specific performance capability in terms of I/O density. The service level designer will help you identify the right service level for your workload profile. It looks really nice and easy to use, with basic and advanced interfaces depending on which metrics you know.

You buy drive packs that contain SATA, SAS, and EFD. As you move up in service levels the amount of EFD increases; as you move down you get more SATA. More FA ports are granted as the service level increases: bronze gets 2 FAs, silver 4 FAs, and gold and higher 8 FAs.

Service levels and capacity are linked: if you double the capacity, you double the performance, forcing the consumption of performance and capacity in proportion. This keeps tiny workloads from running away with the performance and leaving none for the remaining capacity. Modern short stroking.
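In other words, the performance granted is just the service level’s I/O density times the provisioned capacity. A hypothetical sketch, with made-up density numbers for illustration rather than EMC’s published figures:

```python
# Hypothetical service-level densities in IOPS/GB -- illustrative only.
SERVICE_LEVEL_DENSITY = {"bronze": 0.25, "silver": 0.5, "gold": 1.0}

def granted_iops(level, capacity_gb):
    """Performance scales linearly with capacity at a given service level."""
    return SERVICE_LEVEL_DENSITY[level] * capacity_gb

print(granted_iops("gold", 1000))  # 1000.0
print(granted_iops("gold", 2000))  # 2000.0 -- double capacity, double performance
```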

Users can change service levels. This is a big deal; moving between service levels is, I think, a key feature requirement for cloud. The system will automatically change the FAST policy, the FA and port counts, and the host I/O limits.
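A sketch of what that automated transition touches, using the FA counts from the session; the function names are stubs I made up, not Solutions Enabler calls:

```python
# FA port counts per service level, from the session.
FA_PORTS = {"bronze": 2, "silver": 4, "gold": 8}

def set_fast_policy(app, level):    # stub: retarget the FAST tiering policy
    print(f"{app}: FAST policy -> {level}")

def set_port_count(app, ports):     # stub: adjust the front-end FA/port count
    print(f"{app}: FA ports -> {ports}")

def set_host_io_limit(app, level):  # stub: adjust the host I/O limit
    print(f"{app}: host I/O limit -> {level} cap")

def change_service_level(app, new_level):
    """Apply the three changes the system automates on a level change."""
    set_fast_policy(app, new_level)
    set_port_count(app, FA_PORTS[new_level])
    set_host_io_limit(app, new_level)

change_service_level("oltp-db", "gold")
```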

The REST API supports GET, PUT, DELETE, etc.
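A minimal sketch of driving it from Python; the endpoint, payload, and credentials here are placeholders for illustration, not the documented API:

```python
import requests

BASE = "https://cloud-command.example.com/api"  # hypothetical endpoint
AUTH = ("tenant-admin", "secret")               # placeholder credentials

# Request storage in tenant language: an application, a service level,
# and a capacity -- no FA ports or meta volumes in sight.
payload = {"application": "oltp-db", "service_level": "gold", "capacity_gb": 2000}
resp = requests.post(f"{BASE}/storage-requests", json=payload, auth=AUTH)
resp.raise_for_status()
print(resp.json())
```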


May 7

Four trends are identified as disrupting or transforming IT: cloud, big data, mobile, and social networking. Trust does need to be spread across all of these. IDC is calling these the 3rd platform, the first and second being mainframe and client/server: billions of users and millions of applications. IBM is mentioned as being the only company that made it from platform one to platform two. Most business applications today run on platform two.

Abstract, pool, and automate are some new buzzwords from VMware and the software-defined datacenter.

Vblock has a widening portfolio, with a specialized system being a new model shown to the right of the 720. I wonder if this is where my Vblock will fit, with its odd/even FA port cabling rather than odd/even director.

EMC is reinforcing that new apps are needed, built on new platforms. Paul Maritz has been saying this for years. Pivotal will allow apps to run on any cloud.

A key vision around security: understanding intrusions as well as trying to block them. The intruders have excellent tools, and their attacks are generating big data. Many times companies spend on intrusion prevention only to have no visibility into the intrusions that do occur.

Paul takes the stage…Pivotal. Cloud independence. Big data, fast data, apps.

Looking at the Internet giants like Google, Amazon, Yahoo, and Facebook: they have built the ability to store and reason over data at massive scale. They have rapid application deployment, ingest huge numbers of events in real time, interact with legacy apps and infrastructure, and scale openly.

2nd generation apps need to be portable to 3rd generation platforms. I agree here; we cannot ask that all the world’s developers of business applications suddenly rewrite years of code. I do ask that they start to plan the cloudification of their applications.

Pivotal One is the new platform. This is Paul’s vision of a cloud- and hardware-independent operating system. Pivotal One should be available Q4 2013, with all three fabrics: data, application, and cloud. Some pieces are available now.

It is interesting that General Electric has ownership in Pivotal. GE jet engines are not being sold but delivered as a service, as in 15,000 hours of power for a fee. The statement about 30 terabytes of telemetry data from a jet engine makes sense now.

It is no surprise that Pivotal needs some maturing. It will be worth keeping an eye on them as they evolve.

