Nov. 16 — The SCinet network, SC's Supercomputing Internet, is now live! On November 14, the Austin Convention Center became home to the fastest and most innovative computer network in the world, delivering more than 1.6 terabits per second of network bandwidth to the SC conference (SC15). SCinet gives SC conference attendees a unique chance to showcase and discover the latest research in HPC. By building the fastest, most innovative operational network possible every year, SCinet provides the fast, robust infrastructure needed to run data-intensive research and live multi-gigabit demonstrations on high-performance hardware.

"This network is unrivaled with regard to its capabilities and its broad-reaching influence, both nationally and internationally, to support demonstrations and experiments that could not be done easily in any other place. It's a one-of-a-kind environment where research meets production," says Davey Wheeler, SCinet Chair and Senior Network Engineer at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UIUC). "As the SCinet Chair leading the development of this network, this year is the culmination of 17 years of experience working on SCinet. It is humbling and an honor to work alongside these colleagues and see the tremendous talent, dedication, and creativity of the volunteers."

SCinet is built by a team of expert volunteers from around the world: it takes one year to design the network, three weeks to set it up, four days to operate it, and twenty-four hours to tear it down. Over 100 engineers from industry, academia, and government institutions came together to build this network, using over $22 million in loaned equipment and over 89 miles of newly installed fiber-optic cable.

"Having been the SCinet Chair for SC07 in Reno, I am intimately familiar with the incredible amount of planning and work that goes into creating what will be the most powerful network. Over 130 SCinet volunteers from more than 15 countries have worked energetically for the past year to provide wired and wireless access to our conference attendees, and the platform for our exhibitors to showcase bandwidth-driven HPC and cloud computing applications. SCinet continues to be a crucial part of SC and I am extremely grateful for their hard work," says Jackie Kern, Director of IT Shared Services at UIUC and SC15 Conference Chair.

For SC15, SCinet has connected multiple 100-gigabit circuits, bringing an unprecedented 1.62 terabits per second of bandwidth to the Austin Convention Center. The Lonestar Education and Research Network (LEARN) leads this effort in collaboration with leading national and international research networks and commodity providers. LEARN and SCinet support the HPC community by providing multiple 100-gigabit waves and complementary capabilities throughout the SC15 conference events.

In addition to the massive external capacity SCinet brings to the convention center, the network also supports research initiatives through a half-day workshop, Innovating the Network for Data-Intensive Science (INDIS), and the Network Research Exhibition (NRE). SCinet organizes the INDIS workshop to discuss technical papers and show-floor demonstrations dedicated to high-performance networking technologies, innovations, protocols, hardware, and much more. Further, SCinet is providing wireless connectivity for the more than 11,000 expected conference attendees throughout the conference areas.
The SCinet team built the SC15 wireless network using 339 wireless access points to support more than 4,000 simultaneous users on the conference wifi. The wireless network includes support for the eduroam (education roaming) service, which allows users (researchers, teachers, students, and staff) from participating institutions to securely access the protected wireless network using their home organization's login credentials.

SCinet is the result of the hard work and significant contributions of many government, research, education, and corporate collaborators who have volunteered time, equipment, and expertise to ensure SC15's success. This year, SCinet continued the Contributors Program, and we would like to give a special thank you to all SCinet contributors and volunteers!

— Source: SC15 http://www.hpcwire.com/2015-supercomputing-conference/

In computing, a server is a piece of computer hardware or software (a computer program) that provides functionality for other programs or devices, called "clients". This architecture is called the client–server model. Servers can provide various functionalities, often called "services", such as sharing data or resources among multiple clients, or performing computation for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device.[1] Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers.[2]

Client–server systems are most often implemented by (and often identified with) the request–response model: a client sends a request to the server, which performs some action and sends a response back to the client, typically with a result or acknowledgment. Designating a computer as "server-class hardware" implies that it is specialized for running servers. This often implies that it is more powerful and reliable than standard personal computers, although large computing clusters may instead be composed of many relatively simple, replaceable server components.

The use of the word server in computing comes from queueing theory,[3] where it dates to the mid-20th century, notably in Kendall (1953) (along with "service"), the paper that introduced Kendall's notation. In earlier papers, such as Erlang (1909), more concrete terms such as "[telephone] operators" are used. In computing, "server" dates at least to RFC 5 (1969),[4] one of the earliest documents describing ARPANET (the predecessor of the Internet), where it is contrasted with "user", distinguishing two types of host: "server-host" and "user-host". The use of "serving" also dates to early documents, such as RFC 4,[5] which contrasts "serving-host" with "using-host". The Jargon File has defined "server" in the common sense of a process performing service for requests, usually remote, since at least its 1981 (1.1.0) version.
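As a brief, illustrative aside on the queueing-theory usage (standard results, not from the source): in Kendall's A/S/c notation the third symbol, c, is the number of servers, and for the simplest single-server M/M/1 queue with arrival rate λ and service rate μ:

    utilization:            ρ = λ/μ
    mean number in system:  L = ρ / (1 − ρ)    (valid for ρ < 1)

For example, a server receiving λ = 80 requests per second and able to service μ = 100 per second runs at ρ = 0.8 utilization and holds, on average, L = 0.8/0.2 = 4 requests in the system.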
[Figure: a network based on the client–server model, in which multiple individual clients request services and resources from centralized servers.]

Strictly speaking, the term server refers to a computer program or process (running program). Through metonymy, it also refers to a device used for (or dedicated to) running one or several server programs. On a network, such a device is called a host. In addition to server, the words serve and service (as verb and noun respectively) are frequently used, though servicer and servant are not.[a] The word service (noun) may refer either to the abstract form of functionality, e.g. a web service, or to a computer program that turns a computer into a server, e.g. a Windows service. Originally "servers served users" (and "users used servers"), in the sense of "obey"; today one often says that "servers serve data", in the same sense as "give". For instance, web servers "serve [up] web pages to users" or "service their requests".

The server is part of the client–server model; in this model, a server serves data for clients. The nature of communication between a client and server is request and response. This is in contrast with the peer-to-peer model, in which the relationship is on-demand reciprocation. In principle, any computerized process that can be used or called by another process (particularly remotely, particularly to share a resource) is a server, and the calling process or processes are clients. Thus any general-purpose computer connected to a network can host servers. For example, if files on a device are shared by some process, that process is a file server. Similarly, web server software can run on any capable computer, so a laptop or a personal computer can host a web server.

While request–response is the most common client–server design, there are others, such as the publish–subscribe pattern. In the publish–subscribe pattern, clients register with a pub-sub server, subscribing to specified types of messages; this initial registration may itself be done by request–response. Thereafter, the pub-sub server forwards matching messages to the clients without any further requests: the server pushes messages to the client, rather than the client pulling messages from the server as in request–response.[6] (Both patterns are sketched in the code below.) The role of a server is to share data as well as to share resources and distribute work. A server computer can serve its own computer programs as well; depending on the scenario, this could be part of a quid pro quo transaction, or simply a technical possibility.
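To make the request–response model concrete, here is a minimal sketch in Python using only the standard library. The loopback address, the line-oriented protocol, and the ACK-style "service" are illustrative choices, not details from the source.

    import socket
    import socketserver
    import threading

    class EchoHandler(socketserver.StreamRequestHandler):
        """Performs the 'service': reads one request line, sends one response."""
        def handle(self):
            request = self.rfile.readline().strip()        # the client's request
            self.wfile.write(b"ACK: " + request + b"\n")   # the server's response

    def run_client(host, port, payload):
        """A client: sends one request and waits for the response."""
        with socket.create_connection((host, port)) as sock:
            sock.sendall(payload + b"\n")
            return sock.makefile("rb").readline().strip()

    if __name__ == "__main__":
        # Port 0 lets the OS pick a free port for this demonstration.
        server = socketserver.TCPServer(("127.0.0.1", 0), EchoHandler)
        threading.Thread(target=server.serve_forever, daemon=True).start()
        host, port = server.server_address
        print(run_client(host, port, b"hello"))   # -> b'ACK: hello'
        server.shutdown()

Note how the roles match the text above: the same machine hosts both processes here, but the client could equally connect over a network to a server on a different device.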
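The publish–subscribe pattern can be sketched in the same spirit. The in-process "server" below keeps a registry of subscriber callbacks and pushes matching messages to them with no further requests; the topic name and class are invented for illustration.

    from collections import defaultdict

    class PubSubServer:
        """Illustrative in-process pub-sub 'server': clients register once,
        then the server pushes matching messages to them (no polling)."""
        def __init__(self):
            self.subscribers = defaultdict(list)   # topic -> list of callbacks

        def subscribe(self, topic, callback):
            # Initial registration; in a real system this step might itself
            # be carried over a request-response exchange.
            self.subscribers[topic].append(callback)

        def publish(self, topic, message):
            # The push: the server calls each subscriber; clients do not poll.
            for callback in self.subscribers[topic]:
                callback(message)

    server = PubSubServer()
    server.subscribe("alerts", lambda msg: print("client A got:", msg))
    server.subscribe("alerts", lambda msg: print("client B got:", msg))
    server.publish("alerts", "disk nearly full")   # pushed to both subscribers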
Almost the entire structure of the Internet is based upon the client–server model. High-level root nameservers, DNS, and routers direct the traffic on the Internet. There are millions of servers connected to the Internet, running continuously throughout the world,[9] and virtually every action taken by an ordinary Internet user requires one or more interactions with one or more servers. There are exceptions that do not use dedicated servers; for example, peer-to-peer file sharing and some implementations of telephony (e.g. pre-Microsoft Skype).

[Figure: a rack-mountable server with the top cover removed to reveal internal components.]

Hardware requirements for servers vary widely, depending on the server's purpose and its software. Servers are, more often than not, more powerful and expensive than the clients that connect to them. Since servers are usually accessed over a network, many run unattended without a monitor, input devices, audio hardware, or USB interfaces. Many servers do not have a graphical user interface (GUI); they are configured and managed remotely. Remote management can be conducted via various methods, including Microsoft Management Console (MMC), PowerShell, SSH, and browser-based out-of-band management systems such as Dell's iDRAC or HP's iLO (a small SSH-based example follows below).

Large servers
Large traditional single servers need to run for long periods without interruption. Availability has to be very high, making hardware reliability and durability extremely important. Mission-critical enterprise servers are very fault tolerant and use specialized hardware with low failure rates in order to maximize uptime. Uninterruptible power supplies may be incorporated to guard against power failure. Servers typically include hardware redundancy such as dual power supplies, RAID disk systems, and ECC memory,[10] along with extensive pre-boot memory testing and verification. Critical components might be hot swappable, allowing technicians to replace them on the running server without shutting it down, and to guard against overheating, servers might have more powerful fans or use water cooling. They can often be configured, powered up and down, or rebooted remotely, using out-of-band management, typically based on IPMI. Server casings are usually flat and wide, designed to be rack-mounted in 19-inch racks or Open Racks. These types of servers are often housed in dedicated data centers, which normally provide very stable power and Internet connectivity as well as increased security. Noise is less of a concern there, but power consumption and heat output can be a serious issue, so server rooms are equipped with air-conditioning devices.
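As a small illustration of GUI-less remote administration over SSH, one of the methods listed above, the following sketch uses the third-party Python library paramiko. The hostname, user, key path, and the uptime command are placeholders rather than details from the source.

    import paramiko

    # Hypothetical connection details -- replace with your own.
    HOST, USER, KEYFILE = "server.example.org", "admin", "/home/admin/.ssh/id_ed25519"

    client = paramiko.SSHClient()
    # Accepting unknown host keys automatically is convenient for a sketch,
    # but a real deployment should verify host keys instead.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER, key_filename=KEYFILE)

    # Run a command on the headless server and read its output.
    stdin, stdout, stderr = client.exec_command("uptime")
    print(stdout.read().decode().strip())

    client.close()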
Clusters
A server farm or server cluster is a collection of computer servers maintained by an organization to supply server functionality far beyond the capability of a single device. Modern data centers are now often built of very large clusters of much simpler servers,[11] and there is a collaborative effort, the Open Compute Project, around this concept. (A minimal sketch of distributing requests across such a cluster appears at the end of this section.)

Appliances
A class of small specialist servers called network appliances sits at the low end of the scale, often being smaller than common desktop computers.

Mobile
A mobile server has a portable form factor, e.g. a laptop.[12] In contrast to large data centers or rack servers, the mobile server is designed for on-the-road or ad hoc deployment into emergency, disaster, or temporary environments where traditional servers are not feasible due to their power requirements, size, and deployment time.[13] The main beneficiaries of so-called "server on the go" technology include network managers, software or database developers, training centers, military personnel, law enforcement, forensics, emergency relief groups, and service organizations.[14] To facilitate portability, features such as the keyboard, display, battery (an uninterruptible power supply, to provide power redundancy in case of failure), and mouse are all integrated into the chassis.

[Figure: Sun's Cobalt Qube 3, a computer server appliance (2002), running Cobalt Linux (a customized version of Red Hat Linux using the 2.2 Linux kernel), complete with the Apache web server.]

On the Internet, the dominant operating systems among servers are UNIX-like open-source distributions, such as those based on Linux and FreeBSD,[15] with Windows Server also having a significant share. Proprietary operating systems such as z/OS and macOS Server are also deployed, but in much smaller numbers. Specialist server-oriented operating systems have traditionally had features suited to unattended, remotely administered operation.
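To illustrate how a cluster of simple servers can jointly supply one service, here is a minimal round-robin dispatch sketch in Python; the backend addresses and the notion of a request ID are invented for illustration, not taken from the source.

    import itertools

    # Hypothetical backend servers making up a small 'farm'.
    BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]

    # Round-robin dispatcher: each incoming request goes to the next backend
    # in turn, so load is spread across the whole cluster.
    next_backend = itertools.cycle(BACKENDS).__next__

    def dispatch(request_id):
        backend = next_backend()
        # A real load balancer would forward the request over the network;
        # here we just report the routing decision.
        return f"request {request_id} -> {backend}"

    for i in range(5):
        print(dispatch(i))
    # request 0 -> 10.0.0.11:8080, request 1 -> 10.0.0.12:8080, ...

Because any one backend is simple and replaceable, the cluster as a whole can keep serving even if individual machines fail, which is the design point made above about modern data centers.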
In practice today, many desktop and server operating systems share similar code bases, differing mostly in configuration. In 2010, data centers (servers, cooling, and other electrical infrastructure) were responsible for 1.1–1.5% of electrical energy consumption worldwide and 1.7–2.2% in the United States.[17] One estimate is that, by increasing efficiency in the rest of the economy, information and communications technology saves more than five times its own carbon footprint.[18] Global energy consumption is increasing due to the increasing demand for data and bandwidth. The Natural Resources Defense Council (NRDC) states that data centers used 91 billion kilowatt-hours (kWh) of electrical energy in 2013, which accounts for about 3% of global electricity usage. Environmental groups have focused on the carbon emissions of data centers, which account for about 200 million metric tons of carbon dioxide per year.