Wednesday, November 5, 2014

Intel Virtualization Technology

Intel Virtualization Technology (VT). Formerly known as Vanderpool, this technology enables a CPU to act as if it were several CPUs working in parallel, in order to enable several operating systems to run at the same time in the same machine. In this tutorial we will explain everything you need to know about this new technology.


You may confuse virtualization with multitasking or even with Hyper-Threading. With multitasking, there is a single operating system and several programs running in parallel. With virtualization, you can have several operating systems running in parallel, each one with several programs running. Each operating system runs inside a "virtual machine" on its own "virtual CPU". Hyper-Threading, by contrast, makes a single physical CPU appear as two logical CPUs so the operating system can balance work across them using SMP (Symmetric Multiprocessing); those two logical CPUs cannot be used separately.

Of course, if a CPU has both Hyper-Threading and Virtualization Technology, each virtual machine will appear to its operating system to have two CPUs available for symmetric multiprocessing.

If you pay close attention, Virtualization Technology uses the same idea as Virtual 8086 (V86) mode, which has been available since the 386. With V86 mode you can create several virtual 8086 machines to run DOS-based programs in parallel. With VT you can create several "complete" virtual machines to run full operating systems in parallel.

CPUs with Virtualization Technology have some new instructions to control virtualization. With them, controlling software (called VMM, Virtual Machine Monitor) can be simpler, thus improving performance compared to software-only solutions.

How It Works

Processors with Virtualization Technology have an extra instruction set called Virtual Machine Extensions, or VMX. VMX brings 10 new virtualization-specific instructions to the CPU: VMPTRLD, VMPTRST, VMCLEAR, VMREAD, VMWRITE, VMCALL, VMLAUNCH, VMRESUME, VMXOFF and VMXON.

There are two modes to run under virtualization: root operation and non-root operation. Usually only the virtualization controlling software, called Virtual Machine Monitor (VMM), runs under root operation, while operating systems running on top of the virtual machines run under non-root operation. Software running on top of virtual machines is also called “guest software”.

To enter virtualization mode, software executes the VMXON instruction and then hands control to the VMM. The VMM enters a virtual machine with the VMLAUNCH instruction on first entry and with VMRESUME on subsequent entries; a VM exit returns control from the guest back to the VMM. When the VMM wants to shut down and leave virtualization mode, it executes the VMXOFF instruction.
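
A quick way to see whether a given machine exposes these extensions at all (before worrying about the VMX instructions themselves) is to check the CPU feature flags. The sketch below is a minimal example for a Linux host; it only reads /proc/cpuinfo and assumes nothing about any particular VMM:

# Minimal sketch: check whether the CPU advertises hardware virtualization
# support by scanning the feature flags the Linux kernel reports.
# Note: a "vmx" flag means the CPU supports VT-x; firmware may still have it
# disabled, which this check cannot see.

def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags reported in /proc/cpuinfo."""
    flags = set()
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return flags

if __name__ == "__main__":
    flags = cpu_flags()
    if "vmx" in flags:
        print("Intel VT-x (VMX) is advertised by this CPU")
    elif "svm" in flags:
        print("AMD-V (SVM) is advertised by this CPU")
    else:
        print("No hardware virtualization extensions reported")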

Wednesday, August 20, 2014

Citrix announces releases of XenApp and XenDesktop 7.6

Ø  XenApp and XenDesktop 7.6 are designed to be deployed in the cloud, on-premises and in hybrid environments.
Ø  The new releases bring back features such as application pre-launch, session linger and anonymous logon.
Ø  The releases also provide new enhancements to the audio, video and graphics experience. New high-performance graphics acceleration using GPUs provides high-quality DirectX/2D rendering, an important requirement for engineers and designers.
Ø  Support for Unified Communications such as Lync Server 2013 and generic redirection of the latest USB 3.0 peripherals like webcams, headsets and Lync phones for Windows devices are perfect for contact center agents, contractors and remote workers.
Ø  Citrix is also releasing a new update to the HDX Mobile SDK, which helps re-architect apps for mobile operating systems: developers can customize their Windows applications to be mobile-aware and provide features that enable GPS location awareness, picture and video capture, and screen-rotation re-factoring.
Ø  New enhancements to Citrix Provisioning Services, a widely deployed single-image management feature of XenApp and XenDesktop, use commodity servers and RAM to drop the IOPS load on storage by 99 percent. This solves the storage throughput performance problem without deploying SSDs and high-end storage arrays.
Ø  XenApp and XenDesktop provide the only application and desktop virtualization solution available that meets both Federal Information Processing Standards (FIPS) compliance and Common Criteria evaluation requirements.
Ø  Security enhancements also have been made to enable granular policies over clipboard content filtering and directional control. This feature, inspired by customer requirements in the banking industry to help prevent credit card data hacking, gives IT control over whether end users can copy into – or out of – their virtual workspace.
Ø  The new Citrix Connector 7.5 for System Center Configuration Manager—developed in close partnership with Microsoft—enables administrators to use Configuration Manager to define and manage user access to virtual applications and desktops powered by Citrix.
Ø  Several features of the latest version seamlessly integrate with XenApp 6.5, including the new provisioning services, updates to Citrix Receiver, StoreFront and HDX, AppDNA, System Center Connector, monitoring consoles and more.

Ø  XenApp and XenDesktop 7.6 will be available in September 2014.

Monday, August 18, 2014

Sysprep for preparing Windows Image

Microsoft Sysprep comes in handy when you need to prepare a Windows OS for imaging and cloning.

We all know it drastically reduces the time and effort required to spin up new Windows servers and desktops.
But the world has changed:
·         application code no longer relies on the machine's security identifier (SID)
·         with the advent of IaaS and DaaS, faster provisioning of virtual machines is the need of the hour

Hence, image preparation time has been reduced even further with the arrival of QuickPrep.

While a Sysprep run needs several minutes to change the SID on a Windows OS (it has to touch files across the entire hard disk), QuickPrep works much faster: it only takes care of the key items needed to give a cloned VM a new personality, as shown below:

Function                                                   | QuickPrep | Sysprep
Removing local accounts                                    | No        | Yes
Changing Security Identifiers (SID)                        | No        | Yes
Removing parent from domain                                | No        | Yes
Changing computer name                                     | Yes       | Yes
Joining the new instance to the domain                     | Yes       | Yes
Generating new SID                                         | No        | Yes
Language, regional settings, date, and time customization  | No        | Yes
Number of reboots                                          | 0         | 1 (seal & mini-setup)
Requires configuration file and Sysprep                    | No        | Yes
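
For reference, when the full Sysprep route is needed, the generalize pass is normally kicked off from the command line on the reference machine before its disk is captured. The following is a minimal sketch that just wraps that documented command; the unattend.xml path is a placeholder, not a required name:

# Minimal sketch: run a Sysprep generalize-and-shutdown pass on the reference
# Windows machine before capturing or cloning its disk. Run from an elevated prompt.
import subprocess

SYSPREP = r"C:\Windows\System32\Sysprep\sysprep.exe"

def generalize(unattend=r"C:\Windows\System32\Sysprep\unattend.xml"):
    # /generalize strips machine-specific state (including the SID),
    # /oobe boots the clone into Windows Welcome on first start,
    # /shutdown powers the machine off so the image can be captured.
    cmd = [SYSPREP, "/generalize", "/oobe", "/shutdown",
           f"/unattend:{unattend}"]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    generalize()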


Limitations
·         The QuickPrep tool ships only with a limited set of products, such as VMware Horizon View and TuCloud DaaS, and works only with linked-clone desktop pools
·         A small percentage of software still relies on a unique local SID (for example, some AV or NAC products); for those cases, Sysprep will be the better deal

Horizon 6

      VMware announced that it is challenging Citrix head-on in the XenApp space by adding RDSH session support and published apps to VMware Horizon, all accessible via PCoIP.
      VMware has just entered a race that Citrix started many years ago. Yesterday VMware announced Horizon 6; a few of its features include:
o   Hybrid cloud, enabled by our acquisition of Desktone last year and now our Horizon DaaS offering, allows you to easily deploy virtual desktops on-prem, off-prem, or whatever combination suits your fancy.
o   Combining Horizon View, Horizon Mirage, and VMware Fusion, we can offer integrated and centralized desktop management across virtual, physical, and bring-your-own devices.
o   We’ve upped the ante for enterprise management with enhanced image management features in Horizon Mirage and integrations into vCenter Operations Manager, vCenter Orchestrator, and vCloud Automation Center.
o   Virtual SAN is now integrated and supported, helping to drive down storage costs while maintaining performance and SLA.
o   The biggest and most exciting part of the announcement is that Horizon View now supports application publishing. This feature has been long requested and long in the making.

      VMware has finally shipped an SBC product. SBC stands for server-based computing, aka RDSH (Remote Desktop Session Host), RDS (Remote Desktop Services) or TS (Terminal Services).
      So why should anyone care about VMware releasing a product category that has been around forever? For one, we all know VDI is not a silver-bullet technology. Right now VDI adoption is teetering around 5% of total enterprise market share, and if you are in the group of orgs using it, you tend to find a sweet spot of usage for about 20-30% of your organization. The SBC market is much bigger and VMware just entered it… but Citrix created it.
      The story goes that back in the 90s Citrix was able to license the source code for Windows NT and redesigned the OS to support multiple sessions on the same machine. This was commonplace on Unix systems but had never been done for Windows. Microsoft then licensed the multi-user technology back from Citrix and released Windows NT Terminal Services. Microsoft and Citrix spent the next decade being the best of buddies.
      What VMware has done in Horizon 6 is create a true competitive product to XenApp. The reason Horizon 6 is a true competitor to XenApp is that VMware is the first vendor to have done the work to create a 3rd Party Protocol Provider (3PPP) for RDS.
      What is a 3PPP? It's the official way to create a protocol that works with RDSH. It's how ICA works with RDSH to bring you XenApp, and how RDP itself works (though RDP isn't third-party). Until now, other vendors in this space have not done the work to create a true 3PPP interface; most products have either used virtual channels on RDP or done some transcoding of RDP.


      How this works is that Windows talks to a graphics driver, which takes all the content being created and encodes it into a remoting protocol. Microsoft uses its own protocol (RDP), Citrix uses ICA/HDX, and VMware uses PCoIP/Blast.

Friday, August 8, 2014

Windows Branch Cache

Microsoft System Center Configuration Manager (ConfigMgr or SCCM) has the option to use Microsoft’s BranchCache technology in order to enable clients to obtain content from local clients that have a current cache of the content. When a BranchCache-enabled client computer first requests content from a distribution point configured with BranchCache, the client will usually download the content and cache it in the BranchCache cache. This ConfigMgr content is then made available to other BranchCache-enabled clients on the same subnet that request it, and these clients also cache the content.

The key reasons that BranchCache is not suitable for ConfigMgr content distribution are that it:

Uses Background Intelligent Transfer Service (BITS), whose poor WAN bandwidth throttling is one of the key problems that most ConfigMgr-using organizations struggle with. BITS throttles based on worst-case limits that you pre-configure and on what it sees at the client NIC and first router; activity at higher hops is not accounted for, leading to network congestion at those higher levels of the network
 Has very limited operating system deployment (OSD) support. BranchCache cannot be used in WinPE, and it offers no solution to the need for supporting PXE boot methods or state migration servers
 Has no centralized status reporting. You cannot readily verify or demonstrate that it is working as intended, nor can you find problems in order to correct them
 Caches content only for 28 days, though much of your ConfigMgr content will be needed for long after that, such as for OSD, patching new computers, or providing software to users as they change roles
 Is not enabled by default to run on computers running on battery power. A significant number of your computers are probably laptops, and most of the time when they’re available on your corporate network they may be in meetings or other scenarios where they are running on batteries
 Has no options to control the selection of the best (and never inappropriate) computers to serve content to peers.


To elaborate on those points respectively:
1. BITS is used by BranchCache as the transport protocol when obtaining the original content from a ConfigMgr distribution point. BITS has known limitations in both configuration and its capabilities when assessing network utilization.
a. BITS determines network utilization from the Network Adapter (NIC) of the client and usually from the router it connects to. Beyond that BITS has no awareness of the network activity levels to the distribution point and therefore is likely to aggravate network congestion on higher hops to the DP. This is a common network scenario in almost every modern enterprise network.
b. The throttling capability of BITS can only be specified as fixed rate limits at the kbps level. This requires detailed knowledge of the end-to-end network link speed, estimating the worst-case traffic scenario to reduce the risk of impact on other traffic, and then using that worst case as a fixed setting for all traffic. Even when network capacity becomes available, BITS can only use the amount pre-configured for the worst case and cannot make use of the spare capacity. As circumstances in a network change, those settings must be revisited, which adds considerable operational overhead in any sizable environment. Weekends have the same limits applied as weekdays, restricting network traffic at an otherwise optimal time.
2. BranchCache has very limited support for OSD scenarios. WinPE does not include support for BranchCache, so the large WIM image files and driver packages required during the WinPE phase of an OSD build cannot be supplied using BranchCache and would have to be downloaded at build time from a distribution point.
3. BranchCache does not generate ConfigMgr status messages, or provide any other means of reporting that can be viewed centrally from the Configuration Manager console or any other reporting interface. BranchCache only writes events to the local Windows event log. Administrators therefore cannot centrally verify the effectiveness or success of BranchCache in facilitating ConfigMgr deployments.
4. The default maximum client cache age is 28 days. This setting is applied to all content in BranchCache and content will only be removed when this age is reached. This is likely to cause the premature deletion of very large content that will require downloading again to the location after it has been removed.
5. By default BranchCache does not serve content when on battery power (for example, laptops at meetings).
a. There are NETSH and PowerShell commands to override this on Windows 8 or newer (see the netsh sketch after this list).
6. There is no weighting or similar mechanism, based on either system type or configuration options, to determine the most appropriate BranchCache system to download and serve requested content.
a. As a consequence the most appropriate client will not necessarily be used to serve content. A laptop is as likely to be selected as a desktop computer or a server.
b. Organizations may want to prioritize certain computers to be more likely to serve content, and for other computers to never serve content. As there is no mechanism to specify preferences for determining the BranchCache system allocated, this level of flexibility is not possible.
7. Not every BranchCache client has, or retains, the full package that has been requested.
a. This places increased risk that the complete amount of content requested is not available at a location when being required by subsequent client systems, and therefore there is an increased risk that duplication of content download will be necessary.
b. This is especially the case when the following occurs:
i. Multiple clients originally needed the package at once. The BranchCache content metadata is often not calculated as fast as the content is downloaded, so some of the content is downloaded with BranchCache and some just with BITS.
ii. The first client that needed the package may also suffer from that problem, though possibly less so, since the DP may be able to keep up with the metadata calculation more successfully.
iii. Subnets include both Windows 7 and Windows 8 clients, which causes the metadata to be calculated in two formats.
8. The BranchCache cache is entirely independent of the ConfigMgr cache, so the content must be physically stored on disk for both; the content thereby takes up twice as much disk space on each system. Nomad instead uses hard links so that the ConfigMgr cache appears to have the content while the only copy is actually in the Nomad cache (see the hard-link sketch after this list).
9. Microsoft does not recommend using BranchCache on subnets with 100 or more clients. See the BranchCache design guide: http://www.microsoft.com/en-us/download/details.aspx?id=2559
10. BranchCache is not offered as a Windows XP feature. Though most organizations have replaced or are replacing Windows XP machines, any that remain must download all content from a distribution point and cannot make use of the BranchCache technology.
11. ConfigMgr only offers PXE support for distribution points, and they must have WDS for PXE, so that means the DPs must be running on Windows Server. The functionality of a State Migration Point also requires a server and without this user data backups with USMT cannot be integrated into the OSD build scenarios being utilized. The net result is that BranchCache doesn’t address the core OSD issues around content management.
12. BranchCache cannot support clients located across multiple subnets, so content must be downloaded once per subnet in locations where multiple subnets exist. This increases the amount of data that must be downloaded to the location and increases network utilization. In scenarios where multiple clients in multiple subnets all require the content at the same time, the limitations of BITS cause a cumulative effect because the subnets are not aware of each other, greatly increasing the risk of network congestion and of impacting other network traffic. Nomad, by contrast, offers its Single Site Download feature for this case.
13. Mixed environments of Windows 7 and Windows 8 cannot work together. This increases the amount of content to be downloaded as they are not able to share content
a. Windows 8 can be configured for compatibility mode with Windows 7, however this removes the availability of new features and improvements to the Windows 8 version.
b. This is the case as long as the DP is Windows Server 2012 or later; it can be worked around by turning off the Windows 8 features and improvements (such as hash pre-calculation and increased cache retention).
14. BranchCache does not have any multicast capabilities to minimize deployments when large content must be delivered to many clients simultaneously.
15. Does not help with files less than 64KB in size. The impact of this issue will vary by package but can possibly account for about 5% of the overall package size.
16. Cache management is limited to a percentage of total disk space. Thus it will vary depending on disk size (a small 120GB SSD could have only 6 GB, for example), and could cause a user to run out of disk space if they use a large fraction of the disk (95%). The percentage can be adjusted via group policy with Windows 8, but it’s still a percentage.
17. BranchCache elections are done for every file or file segment in the package, as opposed to for the package as a whole. This could add LAN network congestion.
18. Non-domain-joined (workgroup) clients are not supported by BranchCache.
19. HTTP must be enabled for client-to-client transfers. This might be a security concern for some organizations.

BranchCache can use BITS, SMB or HTTP for content retrieval. SMB and HTTP support seem to require the Windows Enterprise edition. BranchCache uses WSD (the WS-Discovery protocol) for peer discovery.
BranchCache does not download files smaller than 64KB; the requesting client receives these directly through a normal BITS/SMB/HTTP transfer.
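
To see what a client is actually doing, and to flip the battery-power behaviour mentioned in point 5a, the netsh branchcache context can be used. The sketch below simply shells out to those documented netsh commands from Python; run it in an elevated prompt on the Windows client:

# Minimal sketch: query and adjust local BranchCache settings via netsh.
import subprocess

def branchcache_status():
    """Show the client's BranchCache mode, cache usage and related state."""
    subprocess.run(["netsh", "branchcache", "show", "status", "all"], check=True)

def allow_serving_on_battery():
    """Distributed mode that also serves peers while on battery (see point 5a)."""
    subprocess.run(["netsh", "branchcache", "set", "service",
                    "mode=distributed", "serveonbattery=TRUE"], check=True)

if __name__ == "__main__":
    branchcache_status()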

Friday, August 1, 2014

Web Scale Infra

Ø  According to Gartner (and as you know, they're always right), by 2017 Web-Scale technology will be an architectural approach found operating in 50 percent of global enterprises, up from less than 10 percent in 2013!
Ø  The idea behind Web-Scale infrastructures is convergence: a way to combine, or integrate, multiple infrastructure components like (server) compute, storage, networking and virtualization into a single platform or appliance of some sort.
Ø  Resources are then aggregated and pooled, positively impacting performance, flexibility and overall efficiency. Management is centralized and simplified since converged infrastructures are, or should be, managed as a single entity no matter how big they get.
Ø  Web-Scale computing focuses on scale-out rather than scale-up technologies and, since its resources are aggregated directly from the underlying hardware as mentioned, workloads can be scaled out without needing to scale up individual servers or other related hardware, again offering simplicity and ease of management.
Ø  Web-Scale platforms use technologies like data deduplication, data tiering and compression; writes are replicated and physical components are redundant, multiple times over in most cases, and the list goes on. All of this combined makes Web-Scale technology extremely robust. E.g.: Nutanix.


The best practices we once knew will soon become obsolete and be known as 'just' practices, the way we once 'did' stuff. Another thing to think about is how we not only manage, but also support, upgrade and expand (scale) our current infrastructures, which isn't an easy task with the hardware-centric architectures we have today.


Source: http://www.basvankaam.com/2014/06/18/what-is-web-scale-technology-and-where-does-it-come-from/

Wednesday, July 30, 2014

What is SCCM?

SCCM  provides:
 An installation mechanism for all types of software
- Applications
- Operating System deployments
- OS and Application Updates (patching)
 Software distribution – gets the software to where the computers are
 Portals to allow users to initiate software installation
 Malware mitigation (endpoint protection)
 Asset data collection (inventory) – hardware and software details in depth, including software usage (metering)
 Software asset analysis – including some license management
 Configuration policy verification and enforcement – settings management, including power settings, firewall policies, and roaming user configuration
 Wake-on-LAN – the ability to power up computers when needed
 Network Access Protection
 Remote control
This is a lot for any system, and all of these are done on a wide diversity of devices, at almost any scale, in often complex environments. Given all that, it shouldn't surprise anyone that there are opportunities for improvement. That's why Microsoft frequently provides new releases and encourages a strong partner ecosystem. Specific ConfigMgr features that are sometimes challenging and often cause concern within organizations:
1. Content Distribution
- Competition with other uses for Wide Area Network (WAN) links can cause conflicts with other business priorities. Traditional approaches of restricting SCCM traffic to avoid that problem can cause deployments to take too long
- Organizations with many locations, as in dozens to thousands, find that the standard Distribution Point model introduces single points of failure, can be difficult to keep running reliably, as well as being costly to deploy
2. Software Asset Management
- ConfigMgr does an excellent job of collecting a wide variety of asset data but its features for turning data into practical information and actions are limited
3. Self-Service Application Portal
- SCCM 2012 embraces a user-centric model but its end-user portal provides only basic features and often does not meet the expectations of today’s sophisticated users and administrators
4. PC Power Management
- SCCM enables the deployment of power management
policies and the collecting of state data but it does little more to maximize power savings
5. Wake-on-LAN
- Waking sleeping computers is a powerful mechanism to expedite computer management and improve end-user productivity, but ConfigMgr wake-on-LAN often does not work well in production environments (a minimal magic-packet sketch appears after this list)
6. Operating System Deployment
- Operating System Deployment (OSD) takes many steps and requires a wide variety of resources, making it especially complex. This is particularly true in scenarios such as organizations with numerous remote locations, or where it can be difficult to justify deploying costly server infrastructure
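
Wake-on-LAN itself (point 5 above) is a very simple mechanism: a "magic packet" consisting of six 0xFF bytes followed by the target's MAC address repeated sixteen times, typically sent as a UDP broadcast. The sketch below builds and sends such a packet; the MAC address is a placeholder, and it assumes the broadcast can actually reach the target's subnet, which is exactly where production deployments often struggle:

# Minimal Wake-on-LAN sketch: build the magic packet (6 x 0xFF followed by
# the MAC address repeated 16 times) and broadcast it over UDP.
import socket

def wake(mac, broadcast="255.255.255.255", port=9):
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

if __name__ == "__main__":
    wake("00:11:22:33:44:55")  # placeholder MAC of the machine to wake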

Saturday, July 26, 2014

x86 Server Virtualization Infrastructure

At least 70% of x86 server workloads are virtualized, the market is mature and competitive, and enterprises have viable choices.

Citrix is focusing its energies on making XenServer an attractive hypervisor for two markets: cloud infrastructure (optimizing integration with its own CloudPlatform offering); and desktop virtualization (supporting its market-leading XenDesktop and XenApp offerings, particularly in the area of graphics processing unit [GPU] virtualization)

Oracle VM is Oracle's implementation of the Xen hypervisor, which leverages intellectual property tied to Oracle Linux and was also put together based on intellectual property acquired from Sun Microsystems and Virtual Iron, which also had Xen-based offerings. Oracle has further integrated these technologies into a more coherent and packaged solution with the Oracle VM 3.2 release in 2013 (and an update release is imminent).

Oracle VM is managed by Enterprise Manager 12c, Oracle's system management product. Enterprise Manager can monitor and manage the entire stack — from applications to infrastructure — allowing application and platform administrators to get contextual insight into their virtualization environment. Enterprise Manager 12c also acts as the service delivery platform for cloud services, such as IaaS, leveraging the infrastructure and virtualization resources provided by Oracle's VM product portfolio. 

This portfolio includes Oracle VM (an x86 architecture product, based on Xen); Oracle VM Server for SPARC (based on Sun Logical Domain [LDOM] technology); Oracle Solaris Zones (Oracle has changed the Solaris Containers' product name to Oracle Solaris Zones); Oracle Linux Containers; and potential software appliances using Oracle VM, storage and other related virtualized infrastructures.

Oracle still favors Oracle VM for software licensing and pricing — for example, with processor pinning (allowing the specification of a limited number of processors being used by a VM, which can reduce software costs when live migration is not required). This approach and flexibility do not extend to the Hyper-V certification.

Parallels now offers a virtualization suite consisting of three virtualization packages: Parallels Containers (for Windows and Linux); Parallels Cloud Server (which includes Parallels Containers, Parallels Hypervisor and Parallels Cloud Storage); and Parallels Automation for Cloud Infrastructure (including Parallels Cloud Server and service provider tools).

The Parallels Containers product allows applications to run in lightweight, separate containers, offering processor affinity and memory protection and isolation. Compared with hypervisor-based solutions, the Parallels Containers offering enables much-higher server densities and can reduce OS software and administration costs. The Parallels Containers product also offers portability and live workload migration. The whole architecture of containers enables a workload and container to spin up faster with less performance overhead than VM solutions.

Parallels Cloud Server also includes Parallels Server Bare Metal, enabling service providers to offer traditional VMs on the same physical node as containers. Parallels Cloud Server combines Parallels Containers and Parallels Hypervisor with Parallels Cloud Storage to enable a complete high-availability solution on commodity hardware by creating a cloud storage pool from existing server hard drives.

VMware released vSphere 5.5 in September 2013, including scalability improvements (for example, broader reach for the vCenter Server Appliance), an expanded vSphere Web Client for management, Virtual SAN, server-side caching (vFlash), and 62TB Virtual Machine Disks (VMDKs). Furthermore, vCenter Site Recovery Manager (SRM) now works with Storage DRS and Storage vMotion.


Thursday, July 24, 2014

Storage Concepts

Ø  Storage Tier
0 - Special Functionality
1 - Enterprise (15,000 rpm)
2 - Modular (10,000 rpm)
3 - General Purpose (7,200 rpm SATA)
Connectivity Tier :
A - Fibre Attached
B - iSCSI Attached (not yet available)
C - NAS (not yet available)


Ø  A SAN uses the SCSI (Small Computer System Interface) and FC (Fibre Channel) protocols to move data over a network and store it directly on disk drives in block format

Ø  Benefits of a SAN:
·         Removes the distance limits of SCSI-connected disks
·         Greater performance
·         Increased disk utilization
·         Higher availability to storage by use of multiple access paths
·         New disaster-recovery capabilities
·         Online recovery
·         Reduction of servers
·         Increased input/output (I/O) performance and bulk data movement
·         Nondisruptive scalability
·         Storage on demand

What Makes a SAN?

Ø  The parts: All the hardware you use to create a SAN; the switches, cables, disk arrays, and so forth
·         HBA , GBIC, Fiber-optic cables,
·         Hubs, Switches, Gateway, Router.
·         Storage arrays, Modular arrays, Monolithic arrays

Ø  The protocols: The languages that the parts use to talk to each other
·         Fibre Channel protocol, SCSI protocol

Ø  Modular arrays
·         Modular arrays come with shelves that hold the disk drives. Each shelf can hold between 10 and 16 drives. Modular arrays usually fit into industry-standard 19" racks
·         Modular arrays almost always use two controllers with separate cache memory in each controller, and then mirror the cache between the controllers to prevent data loss. Most modern modular arrays have between 16 and 32GB of cache memory
Ø  Monolithic arrays
·         Monolithic arrays have many controllers, and those controllers can share direct access to a global memory cache (up to hundreds of gigabytes) of fast memory. This method of sharing access to a large global or monolithic cache is why these arrays are also called monolithic.


Ø  Gigabit Interface Converter (GBIC)
·         The GBIC is formally known as a transceiver; it acts as both a transmitter and a receiver. It has a laser inside that converts billions of digital bits into light pulses to be transmitted over optical fiber. In older HBAs, the transmission device was called a Gigabit Link Module (GLM). There are two kinds of GBICs, defined by the wavelength of light that the laser inside generates: short-wave (up to 500 m) and long-wave (up to 10 km).


Ø  Cables
·         9μm, 50μm, and 62.5μm.
·         When 9μm cables are used to transmit data over long distances, they’re called dark fiber cables. That’s because you cannot see the laser light being transmitted with the naked eye, and if you ever did look into one of these cables, it would fry your eyeballs. Single-mode optical signals can travel much farther than multimode signals.
·         Cable connectors come in two different types. An SC connector (SC stands for Subscriber connector) is the standard optical connector for 1Gbit Fibre Channel. An LC connector (LC stands for Lucent connector) is standard for 2Gbit and 4Gbit Fibre Channel cable.

Ø  N_Ports (node ports), L_Ports (loop ports), G_Ports (global ports), F_Port (fabric port), FL_Port (fabric-to-loop port), E_Port (switch-to-switch expansion port) or a T_Port ( Trunk port), NL_port (node-to-loop port),

Ø  The disks inside a disk array are first arranged into RAID sets and then sliced up into partitions. The partitions are then assigned a LUN, and the LUN is assigned to a server in the SAN.


Ø  The WWN of the storage array is known as the World Wide Node Name or WWNN. The resulting WWN of the port on the storage array is known as the World Wide Port Name or WWPN.

Ø  No more than seven servers should be allocated per storage port (again, this is for each Gbps of bandwidth, but it is still a pretty good rule of thumb for even faster SAN components). Using this configuration allows those seven servers to share the connection and therefore the bandwidth of the storage port. This is commonly called the fan-in ratio of the storage port.

Ø  Having too many servers per port also means each port has only so many I/O operations it can support at one time (the maximum queue depth of the port). Most current storage arrays support at least 256 queues per port (some support 512). So if you want each server to be able to queue up 32 I/O operations at one time (which is a good best practice), limit the number of servers to eight per port (256/32 = 8). Most HBA vendors configure the default queue depth for their HBA drivers at 32 anyway, so this is a good default fan-in ratio for server-to-storage port.
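
That fan-in arithmetic is worth writing down explicitly. The helper below simply restates the calculation above (256 queues per port divided by a default HBA queue depth of 32 gives eight servers per port); the numbers are the examples from the text, not limits of any particular array:

# Worked example of the fan-in calculation above: how many servers can share
# one storage port without oversubscribing its command queue.

def max_servers_per_port(port_queue_depth=256, hba_queue_depth=32):
    """Largest number of servers whose combined queue depth fits the port."""
    return port_queue_depth // hba_queue_depth

if __name__ == "__main__":
    print(max_servers_per_port())         # 256 / 32 -> 8 servers per port
    print(max_servers_per_port(512, 32))  # arrays with 512 queues -> 16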

Ø  An Infiniband adapter is called an HCA, or Host Channel Adapter; an iSCSI network card is called a TOE adapter, or TCP/IP Offload Engine adapter.

Ø  Multipathing Solutions:
·         Hewlett- Packard AutoPath, SecurePath
·         Microsoft MPIO
·         Hitachi Dynamic Link Manager
·         EMC PowerPath
·         IBM RDAC, MultiPath Driver
·         Sun MPXIO
·         VERITAS Dynamic Multipathing(DMP)


Ø  Zoning is also important because it can be used to keep the storage of various servers separate from each other, keep SAN traffic localized within each zone, and separate different vendors' storage arrays in the same fabric. Zoning can also be used as a method of making the SAN more secure.
Soft zoning: Zones are identified by World Wide Name

Hard zoning: Zones are identified by physical switch port

HDD Types

SCSI
Ø  Small Computer System Interface, or SCSI (pronounced "scuzzy"), is a set of standards for physically connecting and transferring data between computers and peripheral devices. The SCSI standards define commands, protocols, and electrical and optical interfaces. SCSI is most commonly used for hard disks and tape drives, but it can connect a wide range of other devices, including scanners and CD drives. The SCSI standard defines command sets for specific peripheral device types; the presence of "unknown" as one of these types means that in theory it can be used as an interface to almost any device, but the standard is highly pragmatic and addressed toward commercial requirements.
Ø  SCSI is an intelligent, peripheral, buffered, peer-to-peer interface. It hides the complexity of the physical format. Every device attaches to the SCSI bus in a similar manner. Up to 8 or 16 devices can be attached to a single bus. There can be any number of hosts and peripheral devices, but there should be at least one host. SCSI uses handshake signals between devices; SCSI-1 and SCSI-2 have the option of parity error checking. Starting with SCSI-U160 (part of SCSI-3), all commands and data are error-checked by a CRC32 checksum. The SCSI protocol defines communication from host to host, host to peripheral device, and peripheral device to peripheral device. However, most peripheral devices are exclusively SCSI targets, incapable of acting as SCSI initiators (unable to initiate SCSI transactions themselves). Therefore peripheral-to-peripheral communications are uncommon, but possible in most SCSI applications. The Symbios Logic 53C810 chip is an example of a PCI host interface that can act as a SCSI target.


SAS
Ø  Serial Attached SCSI (SAS) is a computer bus used to move data to and from computer storage devices such as hard drives and tape drives. SAS depends on a point-to-point serial protocol that replaces the parallel SCSI bus technology that first appeared in the mid 1980s in data centers and workstations, and it uses the standard SCSI command set. SAS offers backwards-compatibility with second-generation SATA drives. SATA 3 Gbit/s drives may be connected to SAS backplanes, but SAS drives may not be connected to SATA backplanes.

SATA

Ø  Serial ATA (SATA or Serial Advanced Technology Attachment) is a computer bus interface for connecting host bus adapters to mass storage devices such as hard disk drives and optical drives. Serial ATA was designed to replace the older ATA (AT Attachment) standard (also known as EIDE). It is able to use the same low level commands, but serial ATA host-adapters and devices communicate via a high-speed serial cable over two pairs of conductors. In contrast, the parallel ATA (the redesignation for the legacy ATA specifications) used 16 data conductors each operating at a much lower speed.
Ø  SATA offers several advantages over the older parallel ATA (PATA) interface: reduced cable-bulk and cost (reduced from 80 wires to seven), faster and more efficient data transfer, and hot swapping.
Ø  The SATA host adapter is integrated into almost all modern consumer laptop computers and desktop motherboards. As of 2009, SATA has replaced parallel ATA in most shipping consumer PCs. PATA remains in industrial and embedded applications dependent on CompactFlash storage, although the new CFast storage standard will be based on SATA.

iSCSI

Ø  In computing, iSCSI (pronounced /aɪˈskʌzi/ "eye-scuzzy"), is an abbreviation of Internet Small Computer System Interface, an Internet Protocol (IP)-based storage networking standard for linking data storage facilities. By carrying SCSI commands over IP networks, iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances. iSCSI can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet and can enable location-independent data storage and retrieval. The protocol allows clients (called initiators) to send SCSI commands (CDBs) to SCSI storage devices (targets) on remote servers. It is a popular Storage Area Network (SAN) protocol, allowing organizations to consolidate storage into data center storage arrays while providing hosts (such as database and web servers) with the illusion of locally-attached disks. Unlike traditional Fibre Channel, which requires special-purpose cabling, iSCSI can be run over long distances using existing network infrastructure.
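
On a Linux host acting as an initiator, the usual workflow is to discover the targets a portal exposes and then log in to one of them, after which its LUNs show up as local block devices. The sketch below only wraps the standard open-iscsi command-line tool (iscsiadm); the portal address and target IQN are placeholders, and open-iscsi must already be installed:

# Minimal iSCSI initiator sketch for Linux, wrapping the open-iscsi CLI.
import subprocess

def discover_targets(portal="192.0.2.10:3260"):
    """SendTargets discovery: ask the portal which targets it offers."""
    subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                    "-p", portal], check=True)

def login(target_iqn, portal="192.0.2.10:3260"):
    """Log in to one target; its LUNs then appear as local disks."""
    subprocess.run(["iscsiadm", "-m", "node", "-T", target_iqn,
                    "-p", portal, "--login"], check=True)

if __name__ == "__main__":
    discover_targets()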

Saturday, July 19, 2014

Intel VT

Ø  Intel Virtualization Technology (VT). Formerly known as Vanderpool, this technology enables a CPU to act as if it were several CPUs working in parallel, in order to enable several operating systems to run at the same time in the same machine.


Ø  You may confuse virtualization with multitasking or even with Hyper-Threading. With multitasking, there is a single operating system and several programs running in parallel. With virtualization, you can have several operating systems running in parallel, each one with several programs running. Each operating system runs inside a "virtual machine" on its own "virtual CPU". Hyper-Threading, by contrast, makes a single physical CPU appear as two logical CPUs so the operating system can balance work across them using SMP (Symmetric Multiprocessing); those two logical CPUs cannot be used separately.

Ø  Of course, if a CPU has both Hyper-Threading and Virtualization Technology, each virtual machine will appear to its operating system to have two CPUs available for symmetric multiprocessing.

Ø  If you pay close attention, Virtualization Technology uses the same idea as Virtual 8086 (V86) mode, which has been available since the 386. With V86 mode you can create several virtual 8086 machines to run DOS-based programs in parallel. With VT you can create several "complete" virtual machines to run full operating systems in parallel.

Ø  CPUs with Virtualization Technology have some new instructions to control virtualization. With them, controlling software (called VMM, Virtual Machine Monitor) can be simpler, thus improving performance compared to software-only solutions.

Ø  How It Works

Ø  Processors with Virtualization Technology have an extra instruction set called Virtual Machine Extensions, or VMX. VMX brings 10 new virtualization-specific instructions to the CPU: VMPTRLD, VMPTRST, VMCLEAR, VMREAD, VMWRITE, VMCALL, VMLAUNCH, VMRESUME, VMXOFF and VMXON.

Ø  There are two modes to run under virtualization: root operation and non-root operation. Usually only the virtualization controlling software, called Virtual Machine Monitor (VMM), runs under root operation, while operating systems running on top of the virtual machines run under non-root operation. Software running on top of virtual machines is also called “guest software”.


Ø  To enter virtualization mode, software executes the VMXON instruction and then hands control to the VMM. The VMM enters a virtual machine with the VMLAUNCH instruction on first entry and with VMRESUME on subsequent entries; a VM exit returns control from the guest back to the VMM. When the VMM wants to shut down and leave virtualization mode, it executes the VMXOFF instruction.

Amazon Web Services

Ø  Amazon Machine Images (AMIs) contain pre-configured software such as an operating system, application server, and applications. You use these templates to launch your server instances.

Ø  Amazon Elastic Compute Cloud (Amazon EC2) is an Amazon Web Service (AWS) you can use to access servers, software, and storage resources across the Internet in a self-service manner.

Ø  A security group defines firewall rules for your instances. These rules specify which incoming network traffic is delivered to your instance.

Ø  An Amazon EBS volume serves as network-attached storage for your instance.

Ø  Terminating an instance effectively deletes it. This differs from stopping the instance: a stopped instance can be restarted later, and although you are no longer billed for instance hours while it is stopped, you are still charged for its attached EBS storage.

Ø  Amazon EBS volumes can persist even after your instance goes away. If you created and attached an EBS volume in the previous step, it was detached when you terminated the instance.
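
Pulling the last few points together, the sketch below uses the boto3 SDK to launch an instance from an AMI into a security group and then stop it (rather than terminate it) so that its EBS-backed volume survives. All of the IDs, the key pair name and the region are placeholders for illustration:

# Minimal boto3 sketch: launch an instance from an AMI with a security group,
# then stop (not terminate) it so the attached EBS volume persists.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # region is a placeholder

def launch(ami_id="ami-12345678", sg_id="sg-12345678", key_name="my-key"):
    resp = ec2.run_instances(
        ImageId=ami_id,               # the pre-configured template (AMI)
        InstanceType="t2.micro",
        KeyName=key_name,
        SecurityGroupIds=[sg_id],     # firewall rules for inbound traffic
        MinCount=1,
        MaxCount=1,
    )
    return resp["Instances"][0]["InstanceId"]

def stop(instance_id):
    # Stopping keeps the EBS root volume (its storage is still billed);
    # terminating deletes the instance instead.
    ec2.stop_instances(InstanceIds=[instance_id])

if __name__ == "__main__":
    instance_id = launch()
    stop(instance_id)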

Ø  Amazon Virtual Private Cloud (Amazon VPC). Amazon VPC is a web service that enables you to create a virtual network topology (including subnets and route tables) for your Amazon Web Services (AWS) resources. You can use a VPC to leverage advanced networking features such as private subnets, outbound security group filtering, network ACLs, Dedicated Instances, and VPN connections.

Ø  Connectivity from lab/development VPCs to Expedia's network is set up using secure IPsec VPN tunnels, as is production connectivity from all Amazon regions except US East. Production connectivity between Amazon's US East region and Expedia's data centers in Phoenix and Chandler is via AWS Direct Connect. Direct Connect uses dedicated 10Gb circuits between Expedia's data centers and the AWS US East region, decreasing Expedia's bandwidth costs and making for more consistent network performance. All inbound communications from AWS are subject to firewall restrictions; communications are denied by default.

Ø  EC2 Linux instances can use LDAPS to authenticate users and groups against Expedia's Active Directory domains, relieving the need to manage separate user accounts or LDAP directories.  Development EC2 instances will authenticate using SEA domain users and groups while production EC2 instances will authenticate using EXPESO domain users and groups.

Ø  The AWS Management Console Gateway (http://awsportal) enables the use of SEA domain accounts and groups for federated authentication and authorization to the AWS console, removing the need to manage users and groups in Amazon Identity and Access Management (IAM).  This portal can be used with all accounts, not just those with VPCs connected to Expedia's network.

Ø  Name resolution services are available for EC2 instances in AWS.  These DNS servers host secondary (read-only) copies of Expedia DNS zones.

Ø  Elastic IP addresses are static IP addresses designed for dynamic cloud computing. Additionally, Elastic IP addresses are associated with your account, not specific instances. Any Elastic IP addresses that you associate with your account remain associated with your account until you explicitly release them. Unlike traditional static IP addresses, however, Elastic IP addresses allow you to mask instance or Availability Zone failures by rapidly remapping your public IP addresses to any instance in your account.
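
A short boto3 sketch of that remapping idea: allocate an Elastic IP to the account, attach it to one instance, and later move the same address to a replacement instance. The instance IDs below are placeholders:

# Minimal boto3 sketch: Elastic IPs belong to the account, so the same public
# address can be re-pointed from a failed instance to a healthy one.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # region is a placeholder

def allocate_and_attach(instance_id):
    addr = ec2.allocate_address(Domain="vpc")         # stays with the account until released
    ec2.associate_address(InstanceId=instance_id,
                          AllocationId=addr["AllocationId"])
    return addr["AllocationId"]

def remap(allocation_id, new_instance_id):
    # Re-pointing the address at a replacement instance masks the failure.
    ec2.associate_address(InstanceId=new_instance_id,
                          AllocationId=allocation_id,
                          AllowReassociation=True)

if __name__ == "__main__":
    alloc_id = allocate_and_attach("i-0123456789abcdef0")  # placeholder IDs
    remap(alloc_id, "i-0fedcba9876543210")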