
Wednesday, December 1, 2010

Cluster Computer and its Architecture

• A cluster is a type of parallel or distributed processing system that consists of a collection of interconnected stand-alone computers working cooperatively as a single, integrated computing resource.

• A node
– a single- or multi-processor system with memory, I/O facilities, and an OS.

• A cluster
– generally 2 or more computers (nodes) connected together,
– in a single cabinet or physically separated and connected via a LAN,
– appears as a single system to users and applications,
– provides a cost-effective way to gain features and benefits.
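
To make the "single system" idea concrete, here is a minimal MPI sketch in C (assuming an MPI installation such as MPICH or Open MPI and a hostfile listing the cluster nodes; the file and program names are illustrative, not from any particular package). Each process simply reports which node it landed on:

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal sketch: each process reports the node it runs on,
       illustrating stand-alone machines cooperating as one resource. */
    int main(int argc, char **argv)
    {
        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(host, &len);

        printf("Process %d of %d running on node %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with something like "mpirun -np 4 -hostfile nodes ./where", the four processes may sit on four different workstations, yet to the user they form one parallel job.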


Windows of Opportunities

• Parallel Processing
– Use multiple processors to build MPP/DSM-like systems for parallel computing

• Network RAM
– Use the memory associated with each workstation as an aggregate DRAM cache.

• Software RAID (Redundant array of inexpensive disks)
– Possible to provide parallel I/O support to applications.
– Use arrays of workstation disks to provide cheap, highly available, and scalable file storage (see the striping sketch after this list).

• Multipath Communication
– Use multiple networks for parallel data transfer between nodes.
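
To make the software RAID idea above concrete, the hedged sketch below stripes a buffer RAID-0 style across several files in plain C: chunk i goes to "disk" i mod N. The file paths are purely illustrative stand-ins for the disks of different workstations; a real software RAID layer or parallel file system adds parity, recovery, and concurrent access on top of this basic idea.

    #include <stdio.h>
    #include <string.h>

    #define NUM_DISKS  3      /* number of (illustrative) workstation disks */
    #define CHUNK_SIZE 4      /* stripe unit in bytes, tiny for the demo    */

    /* Stripe `len` bytes of `data` round-robin across NUM_DISKS files.
       The paths below are hypothetical stand-ins for per-node disks. */
    static int stripe_write(const char *data, size_t len)
    {
        const char *paths[NUM_DISKS] = {
            "/tmp/disk0.stripe", "/tmp/disk1.stripe", "/tmp/disk2.stripe"
        };
        FILE *fp[NUM_DISKS];

        for (int d = 0; d < NUM_DISKS; d++) {
            fp[d] = fopen(paths[d], "wb");
            if (fp[d] == NULL)
                return -1;
        }

        for (size_t off = 0, chunk = 0; off < len; off += CHUNK_SIZE, chunk++) {
            size_t n = (len - off < CHUNK_SIZE) ? len - off : CHUNK_SIZE;
            fwrite(data + off, 1, n, fp[chunk % NUM_DISKS]);  /* chunk i -> disk i mod N */
        }

        for (int d = 0; d < NUM_DISKS; d++)
            fclose(fp[d]);
        return 0;
    }

    int main(void)
    {
        const char *msg = "striping data across several cheap workstation disks";
        return stripe_write(msg, strlen(msg)) == 0 ? 0 : 1;
    }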

Prominent Components of Cluster Computers

• High Performance Networks/Switches
– Ethernet (10 Mbps)
– Fast Ethernet (100 Mbps)
– Gigabit Ethernet (1 Gbps)
– SCI (Dolphin; 12 microsecond MPI latency)
– ATM
– Myrinet (1.2 Gbps)
– InfiniBand
– Digital Memory Channel
– FDDI
– Advanced Switching
– Quadrics…

• Cluster Middleware
– Single System Image (SSI)
– System Availability (SA) Infrastructure

• Hardware
– DEC Memory Channel, DSM (Alewife, DASH), SMP Techniques

• Operating System Kernel/Gluing Layers
– Solaris MC, Unixware, GLUnix

• Applications and Subsystems
– Applications (system management and electronic forms)
– Runtime systems (software DSM, PFS etc.)
– Resource management and scheduling software (RMS)
  • CODINE, LSF, PBS, NQS, etc.

• Parallel Programming Environments and Tools
– Threads (PCs, SMPs, NOW…); see the POSIX Threads sketch after this list
  • POSIX Threads
  • Java Threads
– MPI
  • Available on Linux, NT, and many supercomputers
– PVM
– Software DSMs (Shmem)
– Compilers
  • C/C++/Java
  • Parallel programming with C++ (MIT Press book)
– RAD (rapid application development tools)
  • GUI-based tools for PP modeling
– Debuggers
– Performance Analysis Tools
– Visualization Tools
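
As a small illustration of the Threads entry above, here is a hedged POSIX Threads sketch (the function and variable names are ours, not from any particular package) that splits a sum over an array across a few threads on one SMP node; compile with the -pthread flag:

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_THREADS 4
    #define N           1000

    static double data[N];
    static double partial[NUM_THREADS];   /* one partial sum per thread */

    /* Each thread sums its own contiguous slice of the array. */
    static void *sum_slice(void *arg)
    {
        long id = (long)arg;
        long lo = id * (N / NUM_THREADS);
        long hi = (id == NUM_THREADS - 1) ? N : lo + N / NUM_THREADS;

        double s = 0.0;
        for (long i = lo; i < hi; i++)
            s += data[i];
        partial[id] = s;
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NUM_THREADS];

        for (long i = 0; i < N; i++)
            data[i] = 1.0;                /* dummy data: the sum should be 1000 */

        for (long t = 0; t < NUM_THREADS; t++)
            pthread_create(&tid[t], NULL, sum_slice, (void *)t);

        double total = 0.0;
        for (long t = 0; t < NUM_THREADS; t++) {
            pthread_join(tid[t], NULL);
            total += partial[t];
        }

        printf("sum = %f\n", total);
        return 0;
    }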

• Applications
– Sequential
– Parallel / Distributed (cluster-aware applications)
• Grand Challenge applications
– Weather Forecasting
– Quantum Chemistry
– Molecular Biology Modeling
– Engineering Analysis (CAD/CAM)
– ……………….

• PDBs, web servers, data mining

Clusters Classification

• Application Target
– High Performance (HP) Clusters
• Grand Challenge applications
– High Availability (HA) Clusters
• Mission Critical applications

• Node Ownership
– Dedicated Clusters
– Non-dedicated clusters
• Adaptive parallel computing GRID
• Communal multiprocessing

• Node Hardware
– Clusters of PCs (CoPs)
• Piles of PCs (PoPs)
– Clusters of Workstations (COWs)
– Clusters of SMPs (CLUMPs), Constellations

• Node Operating System
– Linux Clusters (e.g., Beowulf)
– Solaris Clusters (e.g., Berkeley NOW)
– NT Clusters (e.g., HPVM)
– AIX Clusters (e.g., IBM SP2)
– SCO/Compaq Clusters (Unixware)
– Digital VMS Clusters
– HP-UX clusters
– Microsoft Wolfpack clusters

• Node Configuration
– Homogeneous Clusters
• All nodes have similar architectures and run the same OS
– Heterogeneous Clusters
• Nodes have different architectures and run different OSs

• Levels of Clustering
– Group Clusters (#nodes: 2-99)
• Nodes are connected by a SAN (System Area Network) such as Myrinet
– Departmental Clusters (#nodes: 10s to 100s)
– Organizational Clusters (#nodes: many 100s)
– National Metacomputers (WAN/Internet-based)
– International Metacomputers (Internet-based, #nodes: 1000s to many millions)
• Metacomputing
• Web-based Computing
• Agent Based Computing
– Java plays a major role in web-based and agent-based computing
