Metacomputing


Definition: Metacomputing is the use of multiple computers, linked by high-speed networks, to solve application problems. To the user they appear as a single computer, yet they deliver performance far in excess of that of any individual computer in the network.

 



Metacomputers, or computational grids, function as a networked virtual supercomputer. Metacomputing was born of the need for more processing power than was available at a single site, and of the desire to combine the power of computers with different architectures.
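As a rough sketch of this single-computer illusion (not any particular grid system's API): tasks are submitted through one interface, and a toy scheduler fans them out to networked machines whose identities the caller never sees. The node names and the stubbed-out dispatch function below are hypothetical; a real metacomputer would ship work over the network via grid middleware or RPC.

```python
# Toy metacomputing scheduler: one submit/collect interface hiding
# many machines. Node names and run_on_node are illustrative stubs.
from concurrent.futures import ThreadPoolExecutor

NODES = ["node-a.example.org", "node-b.example.org", "node-c.example.org"]

def run_on_node(node, task):
    # Stub: a real system would dispatch `task` to `node` over the
    # network (SSH, RPC, grid middleware) and return the remote result.
    return f"{task} computed on {node}"

def metacompute(tasks):
    # Round-robin tasks across nodes; the caller never learns which
    # machine ran which task, which is the essence of a metacomputer.
    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        futures = [pool.submit(run_on_node, NODES[i % len(NODES)], task)
                   for i, task in enumerate(tasks)]
        return [f.result() for f in futures]

print(metacompute(["render-frame-1", "render-frame-2", "render-frame-3"]))
```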

Metacomputing is also useful for collecting, manipulating and analyzing data from remote databases and from instruments such as microscopes, telescopes and satellite downlinks. Large-scale, data-intensive applications require high-performance computing, high-speed networking, data storage facilities and interactive software.
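A minimal sketch of that remote data-gathering pattern, assuming hypothetical instrument endpoints that publish readings over HTTP: data from several remote sources is fetched concurrently and combined locally for analysis.

```python
# Fetch observations from several remote instruments in parallel.
# The URLs are hypothetical placeholders for real instrument feeds.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

SOURCES = [
    "http://telescope.example.org/latest.csv",
    "http://microscope.example.org/latest.csv",
    "http://downlink.example.org/latest.csv",
]

def fetch(url):
    # Pull one instrument's latest readings over the network.
    with urlopen(url, timeout=30) as response:
        return response.read().decode("utf-8")

def gather():
    # Collect from all instruments concurrently, then hand the
    # combined data set to local analysis code.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(fetch, SOURCES))
```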

Beyond science and engineering, high-performance computing is now required for film production, weather forecasting and drug design, while engineering companies use system simulation for rapid prototyping and to reduce the time to market for new products.

Metacomputing provides users with the power of a supercomputer without the high cost. Small companies need high-performance computing to develop new products, but their limited requirements may not justify the installation and maintenance costs of such systems.

Metacomputing offers an economical alternative, allowing small companies to access high-performance computing capabilities as and when they are required. Users pay only for their actual usage and avoid the burden of high ownership costs.

Metacomputing gives users access to desktop supercomputing with powerful graphics capabilities and to distributed supercomputing for solving complex problems. It also lets users interact and collaborate with colleagues in different geographical locations.

When high-performance computing facilities are centralized, there may be a lack of flexibility and redundancy; a single failure can cause major disruption to the business.

Distributed heterogeneous computing uses computing installations at different locations, linked by a high-speed network. Bandwidth requirements are greatly reduced because computation can run close to where the data resides, and a failure in one component remains a local problem rather than collapsing the entire grid. Different service providers can join the network and compete for customers, driving prices down and service quality up.
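The failure isolation described above can be shown with a small failover sketch; the site names and the randomly simulated outage are assumptions for illustration, and a production grid scheduler would of course be far more elaborate.

```python
# When one site fails, reroute the task instead of failing the grid.
import random

SITES = ["site-london", "site-mumbai", "site-tokyo"]

class SiteDown(Exception):
    pass

def run_at_site(site, task):
    # Stub for dispatching `task` to a remote installation; one site
    # randomly raises SiteDown to simulate a local failure.
    if random.random() < 0.3:
        raise SiteDown(site)
    return f"{task} finished at {site}"

def run_with_failover(task):
    # A failed site is a local problem: note it and try the next site.
    for site in SITES:
        try:
            return run_at_site(site, task)
        except SiteDown as err:
            print(f"{err.args[0]} unavailable, rerouting {task}")
    raise RuntimeError(f"all sites failed for {task}")

print(run_with_failover("simulation-job-42"))
```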



