
1 3PAR Performance and Management
Tomáš Okrouhlík, Storage Presales Technical Consultant, April 5, 2011
©2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.

2 3PAR Presales Agenda
3PAR Thin Provisioning (ThP)
3PAR Performance
3PAR Management and Administration
3PAR Virtual Domains
3PAR Virtual Copy
3PAR System Reporter

3 HP-3PAR: How Data Is Stored

4 3PAR InServ Data Layout
Each physical disk is divided into “chunklets” of 256 MB each.
Chunklets are grouped into RAID sets (e.g. RAID5 3+1); each of the 4 members of such a set is placed on a separate disk chassis.
RAID sets are bound into logical disks (LDs). LDs are mapped and evenly distributed across the controller nodes.
All LDs are bound to logical volumes (LUNs). Logical volumes are presented to the servers transparently from all ports on all nodes, in active-active mode.
Each VV is automatically striped across chunklets on all disks of the same type (FC, SATA).
Controller nodes are added in pairs.
A disk magazine contains 4 disks of the same type; different disk magazines can be mixed in one disk chassis. Disk chassis are directly attached to the controllers.
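As a worked example of the layout above, the number of chunklets per physical disk is just capacity divided by the 256 MB chunklet size. The function name and the drive capacities are our own illustrative assumptions; only the 256 MB chunklet size comes from the slide.

```shell
#!/bin/sh
# Chunklets per physical disk, from the 256 MB chunklet size above.
# The drive capacities used below are illustrative assumptions.
chunklets_per_disk() {
    disk_gb=$1
    echo $(( disk_gb * 1024 / 256 ))   # 1 chunklet = 256 MB
}
chunklets_per_disk 146   # 146 GB FC drive -> 584 chunklets
chunklets_per_disk 600   # 600 GB drive    -> 2400 chunklets
```

Because every VV is striped over chunklets of all disks of the same type, even a small volume ends up spread over thousands of chunklets on a populated system.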

5 How Virtual Volumes Are Mapped to Chunklets

6 What a “Region” Is and How It Maps to a Virtual Volume
Logical disks (LDs) are mapped to virtual volumes through 128 MB regions. At the same time, LDs are mapped to physical disks through raidlets (multiple chunklets bound by RAID rules and disk type) — e.g. 2 raidlets for RAID 1, or 4 raidlets for RAID 5 (3+1).
With Adaptive Optimization (AO), a region can migrate to another LD: frequently accessed areas of a volume are mapped to regions on Tier 0 LDs (SSD), rarely accessed parts of a volume to Tier 2 LDs (SATA).
Host servers see LUNs (exported virtual volumes) of arbitrary size (256 MB to 16 TB); the physical disks are broken into 256 MB chunklets.
3PAR's InSpire architecture is designed for “five nines” system availability. A number of availability and error-isolation features, coupled with a leading service and support model, enable this and make component MTBF information of limited value. Actual system availability depends on the configuration and on the selected service/support options; please consult your systems engineer. 3PAR does not discuss individual component MTBF figures, except for disk drives, which fail the most and therefore account for most of the expected on-site service (which users may wish to budget for).
Platform
=============
Fully hardware- and software-redundant
Non-disruptive software upgradeability and remote (lights-out) software patch updating
Non-disruptive hardware upgradeability (addition and replacement)
High levels of performance after component failure
No write-through mode (>4 nodes, Q1 2010)
De-stages cache data to permanent media during an extended power failure
Write-cache mirroring (with write-through after node failure)
Preserved data space (for preventing “pinned” data)
End-to-end internal error checking
RAID support: 0, 10, and 50 (multiple mirrors available); RAID MP (60) in Q1 2010
Rapid rebuild times (with global spare chunklets)
Continuous disk scrubbing
Configurable RAID isolation: magazine or chassis level
Switched drive-chassis architecture for advanced FC error isolation
Predictive drive failure through 3PAR algorithms and SMART technology
Service and Support
=====================
SMART — getting it right: highly experienced experts only (14 years average); full system information always at hand; automated analysis and reporting; automated point-and-click service actions avoid error
FAST — rapid proactive response: issues identified and communicated by 3PAR first; no extra time to gather or analyze information on-site; no extra time to share information (integrated infrastructure for shared information access); lights-out online remote SW updates and service actions; rapid escalation — 2 hrs to the VP of Customer Service
PROVEN — demonstrated global support: successfully delivering enterprise support for 5+ years; 100% responsible for the customer service experience; ~500 customers, 50% Fortune/Global 1000; service on 5 continents, in 19+ countries and 70+ service areas; 200+ trained authorized service-provider technicians; 1,000+ system installations and upgrades/updates

7 HP-3PAR: ThP Basics

8 The 3 “Thins”
The HP 3PAR system has three thin-provisioning software features:
Thin Provisioning: the ability to create TPVVs, i.e. volumes whose presented capacity exceeds the physical disks; space is allocated and grows on demand.
Thin Conversion: the ability to recognize zeroes arriving from the server to a TPVV and “ignore” them rather than polluting the volume with them. Useful for migrations of systems, applications and file systems that write zeroes.
Thin Persistence: the ability to release capacity in a TPVV online, with no impact on the server. Thin Persistence can use either hardware-assisted zero detection or the SCSI “write same” command.

9 How Capacity Is Allocated
When a CPG is created, no capacity is allocated.
When the first TPVV bound to the CPG is created, logical disks (LDs) are created automatically: 32 GB (default) × the number of controller pairs for user data, and 8 GB (RAID 1 triple mirror) × the number of controller pairs for admin space. When the initially allocated space becomes 75% full, another 32 GB is allocated from the pool of free chunklets.
When a TPVV is created, several 128 MB regions are reserved — 512 MB × the number of controller pairs. When these regions are full, another 128 MB to 2 GB is reserved, depending on how fast the TPVV is being written.
Regions are divided into 16 KB pages, which are allocated at the moment the server writes to them.
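The LD auto-grow math above can be sketched as follows. The 32 GB default and the 75% threshold are the figures from the slide; the function name is our own.

```shell
#!/bin/sh
# Sketch of the LD auto-grow rule described above: 32 GB of user-data
# LDs per controller pair, growth triggered at 75% full.
# Prints the fill level, in GB, at which the next 32 GB allocation occurs.
next_grow_trigger_gb() {
    pairs=$1
    initial_gb=$((32 * pairs))        # 32 GB (default) x controller pairs
    echo $((initial_gb * 75 / 100))   # grow when 75% of that is consumed
}
next_grow_trigger_gb 1   # 1 controller pair  -> 24
next_grow_trigger_gb 4   # 4 controller pairs -> 96
```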

10 HP-3PAR Thin Conversion

11 Thin Conversion (TC) for Data Migration
TC is useful for data migrations because most block-level migration tools cannot distinguish real data from zeroes and will migrate both. TC works regardless of the tool used for the migration: LVM mirroring, appliance-based migration, file-based migration, SAN-based migration, and so on. TC helps ensure that only the real data gets written to disk on the destination 3PAR system.
(Diagram: before/after migration; zero detection is performed by the Gen3 ASIC.)

12 Thin Conversion (TC) for Permanent Zero Detection
TC is also useful to prevent applications, operating systems and file systems from bloating the TPVV by writing zeroes. Depending on the environment, it can be a good idea to keep zero detection enabled even after the data migration, and for new TPVVs.
Environments that benefit from inline, permanent zero detection include:
Windows 2008, in case of a full format of a disk (physical or virtual server)
VMware ESX 3.x, when doing a Storage vMotion, a clone, or a deployment from a template
VMware ESX 3.x and 4.x, if using EagerZeroedThick (EZT) VMDKs
Linux, if using LVM for mirroring disks
MS SQL, when creating a new database without the local administrator privilege

13 HP-3PAR Thin Persistence

14 Writes That Are Soon Deleted
Deleted space in a host file system does not result in deleted space on the storage array. This is true of traditional/legacy file systems, which simply change pointers to files rather than zeroing out the deleted space; this was done to improve file-system performance and reduce load on the host. If it is not addressed, a thin volume can actually grow larger than its allocated capacity.
Example: a host writes 100 GB to a 200 GB file system (100 GB allocated on the InServ), deletes 10 GB (file system now holds 90 GB, but the InServ still allocates 100 GB after the host delete), then writes another 20 GB (file system holds 110 GB, while the InServ now allocates 120 GB).
The 3PAR Thin API, used by Symantec, Oracle and VMware, has eliminated this problem in their environments.

15 Thin Persistence (TPe)
Thin Persistence is the ability to reclaim capacity that has already been allocated to a TPVV. This capacity can be reclaimed in two ways:
By writing zeroes to the pages that need to be reclaimed. This method requires the zero_detect policy to be enabled on the TPVV.
By using the SCSI “write same” command. The zero_detect policy does not need to be enabled for this to work.
Upon reclamation, 16 KB pages are automatically given back to the regions reserved for the TPVV. If a full contiguous 128 MB region is reclaimed, it is automatically given back to the CPG. Capacity given back to the CPG stays allocated to the CPG until the “compactcpg” command is run against it; this can be run manually or scheduled using the internal scheduler.

16 How Capacity Is Released
Pages are released by writing 16 KB of zeroes, using the SCSI “write same” command. If a 128 MB region has even one page in use, the whole region stays allocated to the TPVV and can be used only by that TPVV. If an entire 128 MB region is released, it is automatically returned to the CPG, where the capacity is usable by any TPVV on that CPG.
After running “compactcpg” (manually or on a schedule), the CPG returns capacity to the pool of free chunklets; it may reduce the size of the LDs or their number. Released chunklets are re-initialized (zeroed).

17 HP-3PAR: Reclaiming Capacity in the OS

18 Reclaiming Capacity on Windows
Windows without Symantec Storage Foundation: capacity can be reclaimed by writing zeroes into the free space of the file system with Sdelete (or any tool that can create big files full of zeroes). Sdelete is a Microsoft Sysinternals tool. Example:
sdelete -c <drive letter>
Windows with Symantec Storage Foundation for Windows (SFW): SFW 5.1 supports reclaiming capacity in TPVVs without writing zeroes, using the SCSI “write same” implementation.

19 Reclaiming Capacity on Unix/Linux
Capacity can be reclaimed by writing zeroes into the free space of the file system. “dd” can be used to create large files of zeroes, which can then be copied multiple times. Example of creating a 10 GB file of zeroes with dd:
dd if=/dev/zero of=/path/10GB_zerofile bs=128K count=81920
Capacity can also be reclaimed at the time a file is deleted, if the deletion is done using shred. Example of deleting a file with shred:
shred -n 0 -z -u /path/file
Symantec Storage Foundation for Unix (SF): SF 5.0 supports reclaiming capacity in TPVVs without writing zeroes, using the SCSI “write same” implementation.
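The dd approach above can be wrapped into a small reclaim sweep. This is a minimal sketch under our own assumptions (the function name and the temporary file name are ours; the optional block cap exists mainly so the sketch can be exercised safely): on a production file system, leave headroom so it never actually hits 100% full.

```shell
#!/bin/sh
# Minimal sketch of the zero-fill reclaim sweep described above:
# fill free space with zeroes, flush, then delete the fill file.
reclaim_sweep() {
    mnt=$1; max=${2:-}            # $2: optional 128K-block cap (for testing)
    zf="$mnt/zerofill.tmp"        # temporary zero-fill file (our name)
    if [ -n "$max" ]; then
        dd if=/dev/zero of="$zf" bs=128K count="$max" 2>/dev/null
    else
        dd if=/dev/zero of="$zf" bs=128K 2>/dev/null   # stops when full
    fi
    sync                          # ensure the zeroes reach the array
    rm -f "$zf"                   # free the space for reuse
}
```

With zero_detect enabled on the TPVV, the zeroes written by the sweep are detected by the array and the corresponding pages are reclaimed.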

20 Reclaiming Capacity on VMware (VMFS)
Reclaiming capacity in the datastore (VMFS): the easiest way to reclaim capacity in a datastore is to create EagerZeroedThick (EZT) VMDKs that fill the complete free space of the file system. This can be done with the “vmkfstools” command or any tool that can create EZT VMDKs. Example of creating a 100 GB EZT VMDK using vmkfstools:
vmkfstools -c 100g -d eagerzeroedthick /vmfs/volumes/<datastore>/<file>
If running ESX 4.1, it is strongly recommended to install the 3PAR VAAI plug-in before creating EZT VMDKs, as it speeds up their creation by 10x to 20x. When the VAAI plug-in is used, VMware creates the EZT VMDKs using SCSI “write same” commands, which the 3PAR array interprets as reclaim commands.
Reclaiming capacity in the virtual machines themselves: there is nothing VMware-specific here; see the previous slides for the Windows and Linux reclamation methods.

21 HP-3PAR Thin Technology Integration
Symantec Veritas Storage Foundation integration, Oracle ASM, VMware

22 3PAR Thin Persistence in Veritas Storage Foundation Environments: Thin API
Partnered with Symantec: jointly developed a Thin API — an industry first. It is a file system / array communication API, and most of its elements are now captured as part of the emerging T10 SCSI standard. 3PAR has introduced the API to other operating system vendors (VMware, Microsoft) and offered development support; Symantec is introducing the API to other storage vendors (HDS, EMC).
3PAR Thin Persistence is integrated with Symantec Veritas Storage Foundation to enable space reclamation, where the detection of deleted blocks is done via file system integration. Thin Persistence is performed on demand; online thinning uses the “write same” SCSI command. It requires Veritas Storage Foundation version 5.0 MP3, and 3PAR Thin Persistence for Symantec VxFS requires a 3PAR Thin Persistence license on the InServ.

23 Thin Reclamation for Veritas: Intelligent Re-thinning
Returns capacity from data deletions in thin volumes: the file system notifies the thin array of freed blocks via a standard SCSI command (WRITE SAME), and the array frees the space. It operates autonomically; storage and server administrators need not be involved.
Works with Veritas Storage Foundation v5+ and all InServs (E, F, S, T-Class).
(Diagram: file deleted on the volume → WRITE SAME → InServ notified → capacity freed.)
Autonomic re-thinning eliminates admin time in Symantec environments.

24 3PAR Thin Persistence in Oracle Environments: An Industry First
Oracle auto-extend allows customers to save on database capacity with Thin Provisioning, but database capacity can get stranded after writes and deletes. 3PAR Thin Persistence and the Oracle ASM Storage Reclamation Utility (ASRU) can reclaim 25% or more: after a table file is shrunk, after a tablespace or database is dropped, or after a new LUN is added to an ASM disk group.
The ASRU writes zeroes to free space using standard file system tools/scripts; 3PAR Thin Built-In™ ASIC-based zero detection then eliminates the free space — the zeroes are detected inline and mapped, not written.
From a DBA perspective this is non-disruptive and does not impact storage performance; the ASIC is a huge advantage ("increase DB miles per gallon").

25 3PAR Thin Persistence in VMware Environments
ESX 4.0: VMware VMFS supports three formats for VM disk images — Thin, and Thick as either ZeroedThick (ZT) or EagerZeroedThick (EZT). VMware recommends EZT for the highest performance. 3PAR Thin Technologies work with, and optimize, all three formats.
ESX 4.1 introduces the vStorage API for Array Integration (VAAI). Thin Technologies are enabled by the 3PAR plug-in for VAAI: Thin vMotion uses XCOPY via the plug-in, and Active Thin Reclamation uses Write Same to offload zeroing to the array.

26 HP-3PAR Performance

27 High-Level Caching Algorithm (1 of 2)
The cache is mirrored between nodes so that no data is lost if one node fails.
The algorithm adapts itself in real time to the ratio between read and write operations: under a read-intensive load, up to 100% of the cache can be dedicated to reads; under a write-intensive load, up to 50% of the cache can be dedicated to writes.
The 3PAR architecture is not “cache-centric” like the large monolithic arrays; the cache serves primarily as a buffer for moving data to and from the disks.

28 High-Level Caching Algorithm (2 of 2)
3PAR has an intelligent cache-flushing algorithm that destages data from the cache so it can be reused for new write operations. Advantages:
It coalesces multiple writes to the same cache page.
It merges small blocks and writes to the disks in larger blocks.
It combines multiple RAID 5 or RAID 6 writes so that the write is optimized to the full stripe size.
With spinning disks, 3PAR always reads from disk and fills the cache a whole 16 KB page at a time; if the server asks to read less than 16 KB, the entire 16 KB page is still read. If the server asks to write less than 16 KB, only the valid data is written, regardless of how full the page is. Multiple small writes can be combined in one dirty page so that the 16 KB page is filled efficiently.

29 Read-Ahead Algorithm (1 of 2)
Goal: detect sequential reads and prepare for future requests, i.e. load data into the cache in advance and significantly speed up future I/O.
Up to 5 streams can be detected and prefetched per virtual volume, and a stream does not have to be purely sequential: the prefetch algorithm is adaptive and can be triggered by a number of patterns, which allows, for example, prefetching every other block.

30 Read-Ahead Algorithm (2 of 2)
Specifically, read-ahead is triggered when I/O covers a region of 8× the I/O size. For example, if the server uses an 8 KB I/O size and reads a 64 KB region, the read-ahead algorithm is activated. 1 MB to 4 MB is then prefetched, depending on the I/O size:
If the server uses an I/O size of 512 KB or less, a region of iosize × 8, but at least 1 MB, is prefetched.
If the server uses an I/O size larger than 512 KB, a region of iosize × 4, but at most 4 MB, is prefetched.
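The prefetch-size rule above can be expressed as a small function (the function name is ours; the 512 KB threshold, the ×8/×4 factors, and the 1 MB floor / 4 MB cap are the figures from the slide):

```shell
#!/bin/sh
# Sketch of the read-ahead sizing rule described above. Sizes in KB.
prefetch_kb() {
    iosize_kb=$1
    if [ "$iosize_kb" -le 512 ]; then
        size=$((iosize_kb * 8))
        [ "$size" -lt 1024 ] && size=1024    # floor: at least 1 MB
    else
        size=$((iosize_kb * 4))
        [ "$size" -gt 4096 ] && size=4096    # cap: at most 4 MB
    fi
    echo "$size"
}
prefetch_kb 8      # 8 KB I/O   -> 1024 (the 1 MB floor applies)
prefetch_kb 256    # 256 KB I/O -> 2048
prefetch_kb 2048   # 2 MB I/O   -> 4096 (the 4 MB cap applies)
```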

31 Component Performance: Assumptions
All performance figures refer to the following load definitions:
IOP/s: small block size (16 KB or smaller), random access across the whole device, multiple threads per device.
MB/s (throughput): large block size (256 KB or larger), sequential access across the whole device, a single thread per device.

32 Component Performance: Disks and SSDs

Disk type/speed   IOP/s   MB/s
15K FC            200     45
10K FC            150
7.2K NL           75      30

SSD (4 KB I/O):
Workload               IOPs (aligned)   IOPs (unaligned)
100% Read              3950             3900
70% Read / 30% Write   2100             1500
50% Read / 50% Write   1800             1150
30% Read / 70% Write   1000
100% Write             1600

Notes:
These are back-end numbers; RAID overhead must be considered when calculating front-end capability.
The numbers reflect I/O access from VLUN (host) to physical disk.
IOP/s are lower with larger blocks (with 64 KB blocks, IOP/s are 67% of the above).
As seen above, SSD IOPs vary greatly with the I/O mix.
The SSD IOPs above are for 4 KB I/O; larger blocks impact overall SSD performance. SSD IOPs are much more dependent on block size than spinning disks: to a spinning disk, a 512 B and a 16 KB I/O perform about the same, but on an SSD the difference is dramatic.
SSD writes are significantly slower than reads (2 to 3 times longer to complete).
SSD sequential performance is slower than that of spinning disks.
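The first note above says the back-end figures must be adjusted for RAID overhead before quoting front-end capability. A standard way to estimate this (our illustration, not a 3PAR-published formula) is to charge each host write `penalty` back-end I/Os (2 for RAID 1, 4 for a RAID 5 read-modify-write) while reads cost 1:

```shell
#!/bin/sh
# Front-end IOPS estimate from back-end IOPS, read percentage, and
# RAID write penalty. Generic sizing arithmetic, not a 3PAR formula.
frontend_iops() {
    backend=$1; read_pct=$2; penalty=$3
    awk -v b="$backend" -v r="$read_pct" -v p="$penalty" \
        'BEGIN { printf "%d\n", b / (r/100 + (1 - r/100) * p) }'
}
# Assume 40 x 15K FC drives at 200 back-end IOP/s each = 8000 back-end IOP/s
frontend_iops 8000 100 2   # pure read: no RAID write cost -> 8000
frontend_iops 8000 70 2    # RAID 1, 70/30 read/write mix  -> 6153
frontend_iops 8000 70 4    # RAID 5, 70/30 read/write mix  -> 4210
```

The same disks therefore deliver very different front-end numbers depending on the RAID level and workload mix, which is why the table lists back-end figures only.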

33 Component Performance: FC Ports and Disk Chassis

Component      Throughput   IOP/s
4 Gb FC port   360 MB/s     30,000
Disk chassis   720 MB/s     15K FC: 8,000; 10K FC: 6,000; 7.2K NL: 3,000

34 Component Performance: One Controller Pair

RAID 1 (Host/Disk)   Read MB/s     Write MB/s   Read IOPs       Write IOPs
T-Series             1400 / 1400   600 / 1200   64k / 64k       32k / 64k
F-Series             1300 / 1300   500 / 1000   38.4k / 38.4k   19.2k / 38.4k

RAID 5 (Host/Disk)   Read MB/s     Write MB/s   Read IOPs       Write IOPs
T-Series             1400 / 1400   560 / 750    64k / 64k       16k / 64k
F-Series             1300 / 1300   470 / 625    38.4k / 38.4k   9.6k / 38.4k

RAID 6 (Host/Disk)   Read MB/s     Write MB/s   Read IOPs       Write IOPs
T-Series             1400 / 1400   560 / 750    64k / 64k       9.6k / 64k
F-Series             1300 / 1300   470 / 625    38.4k / 38.4k   5.8k / 38.4k

35 Component Performance: Notes
Notes on RAID 1:
An appropriate number of disks and HBAs is needed to reach the node maximum.
The numbers above are listed per node pair (2 nodes).
Disk IOP/s are limited by drive support (max config).
Notes on RAID 5 (in addition to the above):
Host write MB/s depends on the set size (shown for the default of 4 for R5, 3d+1p).
Back-end IOP performance is fixed per node; the front end depends on the data-to-parity overhead. In this example, of the 750 MB/s at the back end, 560 MB/s is host data and 190 MB/s is parity; with a 7+1 set, the host could push 650 MB/s.
Notes on RAID 6 (in addition to the above):
Host write MB/s depends on the set size (shown for the default of 8 for R6, 6d+2p).
Host write MB/s = [(set size − 2) / set size] × disk bandwidth.
One clarification on parity overhead: since an InServ RAID set is so small, the chance that even a small I/O is a full-stripe write is very high, so RAID 5 and RAID 6 sets have essentially no RAID parity-calculation overhead.
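Plugging the default RAID 6 set size into the host-write equation above reproduces the table's figure (the function name is ours; the equation and the 750 MB/s back-end bandwidth come from the notes):

```shell
#!/bin/sh
# RAID 6 host write bandwidth from the equation above:
# host MB/s = [(set size - 2) / set size] * disk bandwidth
r6_host_mb() {
    set_size=$1; disk_mb=$2
    echo $(( (set_size - 2) * disk_mb / set_size ))
}
r6_host_mb 8 750   # default 6d+2p at 750 MB/s back end -> 562
```

562 MB/s matches the ~560 MB/s host write figure listed for the T-Series at RAID 6 (integer arithmetic truncates 562.5).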

36 Published Benchmark Results
SPC-1 (Storage Performance Council) results:
T800 InServ (September 2008): 224,989 SPC-1 IOPS with 7.22 ms response time; 77,824 GB user size (ASU); 1280 × 146 GB 15K FC drives.
F400 InServ (April 2009): 93,050 SPC-1 IOPS with 8.85 ms response time; 27,046 GB user size (ASU); 384 × 146 GB 15K FC drives.

37 Competitive Comparison: SPC-1 Benchmark
(Chart: SPC-1 IOPS vs. response time for midrange and high-end systems — IBM DS8300 Turbo (Dec 2006), HDS USP V (Oct 2007), EMC CLARiiON CX3-40 (Jan 2008), NetApp FAS3170 (Jun 2008), IBM DS5300 (Sep 2008), 3PAR InServ T800 (Sep 2008), HDS AMS 2500 (Mar 2009), 3PAR InServ F400 (May 2009). Transaction-intensive applications typically demand response times under 10 ms.)
This slide shows 3PAR's performance leadership. The point is not to brag, or to suggest that SPC-1 perfectly represents your workload environment; it is to illustrate, using an open, audited benchmark built from real application traces, not just the high quantity of InServ performance (necessary for eliminating worries and concerns in a consolidated environment) but its high quality — the low latencies that stem from the InSpire architecture. Response-time-sensitive applications such as Oracle and other databases (under 10 ms, please) and Exchange require this; other applications and end users like it too. VMware, which makes extensive use of virtual memory pages (to disk), really loves it, and it permits more VMware instances to be consolidated on a given physical server.
A word about the HDS USP V result: it is also a very good result, but remember that in a benchmark, if you throw enough equipment at it, you can get good numbers — so it is just as important to discuss how the results were achieved.
Selection criteria: single-system arrays from highly rated vendors (Gartner MQ); SPC-1 benchmark version 1.10 or later; cache-protected results only; minimum ASU size 1 TB; scalability beyond 120 drives; duplicate results from different vendors eliminated.

38 Updated Comparison: SPC-1 for the Midrange

39 Scalability at Low Cost
(Chart: $/SPC-1 IOPS vs. SPC-1 IOPS for the same midrange, high-end and utility systems — IBM DS8300 Turbo (Dec 2006), HDS USP V (Oct 2007), EMC CLARiiON CX3-40 (Jan 2008), NetApp FAS3170 (Jun 2008), IBM DS5300 (Sep 2008), 3PAR InServ T800 (Sep 2008), HDS AMS 2500 (Mar 2009), 3PAR InServ F400 (Mar 2009).)
3PAR's industry-leading performance results are achieved cost-effectively.

40 HP-3PAR Management and Administration

41 3PAR Administration and Management
3PAR InForm CLI and 3PAR InForm Management Console:
Simple, uniform, consolidated administration
A powerful tool with fine-grained control
Scriptable, with simple and logical syntax
LDAP and IPv6 support
Multiple assignable roles:
Super — access to all operations
Edit — access to most operations
Snapshot — creating and managing snapshots and copies
Service — a limited set of operations for servicing
Browse — read-only access

42 3PAR InForm Management Console
GUI and CLI management

43 CLI Example: InServ cli% help createsched
To use the CLI you only need an address and a login: an IP address or a name valid in DNS, a user name, and a password.
Once the CLI is installed and in your operating system shell's PATH, you can start the interactive CLI shell by typing “cli”, pointing it at the InServ's DNS name or system IP address, and logging in. Once logged in, there are three levels of help — general, category, and command — and there is a search function within help. Type “help” or “clihelp” for help on the 3PAR-specific CLI commands. Because this is an interactive shell, it has familiar shell features; press the up arrow on the keyboard to re-use the last command you typed.

44 HP-3PAR Virtual Domains

45 Self-Service Storage with Virtual Domains
Up to 1,024 virtual domains per InServ; each domain groups hosts, access rights, volumes and CPG parameters.
Simple and secure: the master administrator creates and defines domains and assigns users and logical elements to them; authorized users can then manage the elements in their own domain. Fully integrated, including System Reporter, LDAP, and Remote Copy.
“Virtual private arrays” are ideal for: self-service storage; different applications and different user groups; error prevention; compliance.

46 What Are 3PAR Virtual Domains?
Multi-tenancy on traditional arrays is separate and physically secured: each admin, application, department, or customer (A, B, C) gets its own array. Multi-tenancy with 3PAR Domains is shared and logically secured: admins, applications, departments and customers A, B and C each get their own domain (Domain A, B, C) within a single array.
3PAR Virtual Domains delivers secure access and improved storage services for different applications and user groups. By providing secure administrative segregation of users and hosts within the InServ, individual user groups and applications can achieve greater storage service levels (performance, availability and functionality) than previously possible. Virtual Domains is fully integrated within the 3PAR InSpire architecture, with support for Remote Copy, System Reporter, LDAP, and multiple InServs. There are three domain types — “no”, “all”, and “specified” — clarified later.

47 When Are Virtual Domains Useful?
With centralized administration on a traditional consolidated array, all presented objects for each end user (department, customer) pass through the central administrators. With self-service administration using 3PAR Virtual Domains, the presented objects are grouped into virtual domains on the consolidated array, and end users manage their own domains while physical storage remains centrally administered.
Consolidation projects: especially VMware projects, where a management plug-in, via LDAP, enables domain-aware self-service by system and application administrators.
Service providers, whether internal or external: ideal for providers who not only understand the benefits of consolidation but must deliver secure, independent storage services to multiple administrators, applications, departments, and customers.
In many environments, the simple but secure consolidation of production with test and development storage infrastructure can represent huge savings. In other environments, simple storage admin tasks or monitoring tools can be turned over to system and application administrators in a self-service model. The efficiency gains are greatest when test-bed configurations are virtual machines on virtual storage, as opposed to physical servers with direct-attach storage (more costly and complex) or a separate SAN silo for test vs. dev (most costly and complex).

48 3PAR Domain Types and Privileges
Super user(s) can create and remove domains and users and manage the provisioning policies for those domains; edit users assigned to the “All” domain can manage provisioning policies as well.
The “No” domain is simply a container for all unassigned objects of the InServ: CPGs, hosts, users and their respective user levels, VLUNs, VVs and TPVVs, virtual/full/remote copies, chunklets and LDs.
“Specified” domains — for example Domain A (Development) and Domain B (Test) — hold the assigned objects. Domains can be grouped into domain sets (e.g. “Engineering”); privileges over a domain set mean the user can manage multiple domains.
Accessing objects on InServ servers configured to use 3PAR Virtual Domains requires privileges in the domain in which those objects reside. Because the configuration of domains can differ within an InServ Storage Server, or from one server to another (in configurations with multiple servers), a user can have differing privileges between domains in a single system, or across multiple systems.

49 HP-3PAR Virtual Copy

50 Virtual Copy: Copy-on-Write Functionality
Snapshot admin (SA) space holds the timestamped pointer table (e.g. 5/25/06 14:35) with pointers to the data. On a new write to the source volume (e.g. block 4: D → D'), the original data is first copied to snapshot data (SD) space — the place where the original, pre-change data is kept — and only then is the new data written.
When selling Virtual Copy, customers often ask how 3PAR keeps track of all the changes without the penalties other arrays suffer. Snapshot administration space, always mirrored in the array, essentially holds the pointer table between the base volume and the virtual-copy data space. Only block-level changes grow the data space; the rest of the virtual-copy pointers refer back to the base-volume allocation. As the snapshot tree expands, so does the administration space needed to track the corresponding changes. The data space grows only by the aggregate total of all snapshots in the tree; as snapshots are deleted, so are the data and the correlated administrative space, freeing those blocks back to the array. All new systems have this feature turned on by default; existing systems upgraded to the latest code level must have the setting turned on manually, from the command line only.

51 A GUI View of Virtual Copy
The GUI provides a very clear view of the existing LUNs and virtual copies. Using the InForm GUI, the snapshot tree and parent relationships are easily seen. In this illustration, a single base volume has one read-only base snapshot and a single read-write snapshot: the read-only snapshot is indicated by “RO” at the end of the volume name, with the associated RW volume attached to it. Notice that the snapshot takes no space yet still shows as a RAID volume of the full size: no changes have been made to the volume, so nothing has been written to snapshot data space. Every read-write snapshot works from a read-only base snapshot, which is created automatically when the copy is initiated from the InForm GUI.

52 Creating a Virtual Copy Using the GUI
Simply select the volume, right-click, and choose “Create Virtual Copy”.

53 HP-3PAR System Reporter

54 Web Interface
1. Quick reports from predefined templates
2. Scheduled reports
3. Custom reports: on the Custom Reports tab you can build a custom report for viewing data from the database, selecting the array, the resolution, the domain, and the report type.

55 VLUN Performance: Daily Overview
Once a report is generated, you can view the details graphically, or optionally select the blue icon at the top of the page to generate complete details and open them in an Excel spreadsheet.

56 System Comparison: VLUN
At the top of the report you can see all the parameters you set up, which were used to generate the report.

57 VLUN Performance: Hourly Comparison Over Time
An hourly bar-graph report showing VLUN performance; VLUN read and write IOPs are shown in the same column.

58 THANK YOU FOR YOUR ATTENTION!

