This session is presented by Adnan Sahin. I googled him when I first met him. His name is on 13 patents relating to various storage technologies. I have asked him many performance questions on how to make VMAX scream. He currently works in Performance Engineering.
Objectives for this session:
1. Understand VMAX Cloud Edition architecture and components.
2. Understand performance considerations.
3. Understand the REST API and integration with orchestration.
Guiding Principles of VMAX Cloud Edition
Secure multi-tenancy
Tenant performance isolation and control
Management isolation and role-based management
Simplified management via abstracted entities
Manage applications instead of volumes
Use tenant language rather than storage-specific or array-specific language
Integration with orchestration software via REST APIs
I like the use of tenant language; today we use all the technical terms like FA ports, meta volumes, and TDATs. Moving to terms that our customers use, like disk, datastore, and drive, may be useful.
Cloud Edition is hardware and software. The Cloud Command Center is provided by EMC. Your VMAX is connected via a secure VPN, similar to ESRS. No data leaves the data center, just management and configuration information. You access the Cloud Command Center via HTTPS.
The self-service portal running in the Command Center allows for management, chargeback, and reporting. The data warehouse maintains storage resource usage and the mapping between tenants and the resources they are using. It is the source for reporting and chargeback.
The coordinator determines what resources are used from the data warehouse. It also applies reservations and commits. It uses placement rules and policies, like tenants only being allowed to use certain data centers or arrays. It enforces approvals, creates execution plans, and limits the service levels available to each tenant.
The automation engine uses the execution plan from the coordinator. It executes array and SAN switch commands, and it provides rollback if a plan fails. You cannot use any tools other than the single Solutions Enabler: no ProSphere, Unisphere, or SYMCLI.
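The "execute a plan, roll back on failure" behavior is a classic pattern. Here's a minimal sketch of how such an engine could work; every class and step name below is my own invention for illustration, not anything from EMC's implementation:

```python
# Hypothetical sketch of an execute-with-rollback pattern like the one the
# automation engine is described as using. Names are mine, not EMC's.

class Step:
    """One array or SAN switch command, paired with its undo action."""
    def __init__(self, name, do, undo):
        self.name, self.do, self.undo = name, do, undo

def run_plan(steps):
    """Run steps in order; on failure, undo completed steps in reverse."""
    done = []
    try:
        for step in steps:
            step.do()
            done.append(step)
    except Exception:
        for step in reversed(done):
            step.undo()
        return False
    return True
```

Undoing in reverse order matters: a zone can't be removed before the volume that depends on it is unmapped, so rollback walks the completed steps backwards.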
The collector gathers configuration information: capacity usage, zones, etc.
Metrics used for I/O workload definition: I/O rate and size, read/write mix, cache hit/miss, sequential/random, burst/sustained, and skew. Know these terms and you will be rewarded. I use Iometer to play with the effects of these and do some of my own performance engineering. Skew and burst/sustained are hard to get. We should all know our I/O size and I/O rate.
I/O density is I/Os per second per GB, i.e., 500 IOPS over 1000 GB is 0.5 IOPS/GB. Data warehousing is usually large I/O sizes, mostly sequential. OLTP is usually small I/O.
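The density arithmetic above fits in a line of Python (a toy helper of my own, not anything from the product):

```python
def io_density(iops, capacity_gb):
    """I/O density = IOPS per GB of allocated capacity."""
    return iops / capacity_gb

# The example from the session: 500 IOPS over 1000 GB.
print(io_density(500, 1000))  # 0.5
```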
Each service level corresponds to a specific performance capability in terms of I/O density. The Service Level Designer will help you identify the right service level for your workload profile. It looks really nice and easy to use, with basic and advanced interfaces depending on which metrics you know.
You buy drive packs that have SATA, SAS, and EFD. As you move up in service levels the amount of EFD increases; as you move down you get more SATA. More FA ports are granted as the service level increases: bronze gets 2 FAs, silver 4 FAs, gold and higher 8 FAs.
Service levels and capacity are linked: if you double the capacity, you get double the performance, forcing the consumption of performance and capacity in proportion. This keeps tiny workloads from running away with the performance and leaving none for the remaining capacity. Modern short stroking.
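Proportional consumption means the I/O entitlement is just density times capacity. A quick sketch under some assumed numbers: the FA counts come from the session, but the per-tier densities here are invented placeholders, not real product figures.

```python
# Sketch of proportional performance entitlement. The FA counts are from
# the session notes; the per-tier densities are hypothetical placeholders.
FA_COUNT = {"bronze": 2, "silver": 4, "gold": 8}
DENSITY = {"bronze": 0.25, "silver": 0.5, "gold": 1.0}  # IOPS/GB, made up

def host_io_limit(tier, capacity_gb):
    """Doubling capacity doubles the IOPS entitlement at a given tier."""
    return DENSITY[tier] * capacity_gb

# Twice the capacity, twice the performance.
assert host_io_limit("silver", 2000) == 2 * host_io_limit("silver", 1000)
```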
Users can change service levels. This is a big deal; moving between service levels, I think, is a key feature requirement for cloud. The system will automatically change the FAST policy, the FA and port counts, and the host I/O limits.
The REST API supports GET, PUT, DELETE, etc.
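As a rough illustration of what driving that API from orchestration code might look like: the host name, paths, and payload fields below are all hypothetical examples of mine, only the verb-per-operation idea comes from the session.

```python
# Hypothetical sketch of orchestration software calling a REST API over
# HTTPS. The endpoint paths and payload fields are invented for
# illustration; they are not the actual Cloud Edition API.
import json
import urllib.request

BASE = "https://cloud-command.example.com/api"  # placeholder host

def build_request(method, path, payload=None):
    """Build (but do not send) an HTTPS request for the given verb."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(BASE + path, data=data, method=method)
    req.add_header("Content-Type", "application/json")
    return req

# One verb per operation: GET a tenant's volumes, PUT a service-level
# change, DELETE a decommissioned volume.
get_req = build_request("GET", "/tenants/acme/volumes")
put_req = build_request("PUT", "/volumes/42", {"service_level": "gold"})
del_req = build_request("DELETE", "/volumes/42")
```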