Friday, September 25, 2015

HPC takes on the computing-power "black hole" of deep learning

Behind the Industry 4.0 vision promoted by the industry lies "computing" — the fusion of high-performance computing (HPC), cloud computing and big data. On September 24, the 2015 HPC User Conference, jointly organized by Inspur and the Asia Supercomputer Community, was held in Beijing, and the event centered on the "Computing+" concept that reflects current technology trends. High-performance computing can dramatically cut the cost of industrial design, development and production while raising efficiency, making it one of the most important innovation tools for Industry 4.0.

The computing-power "black hole" challenge

It is fair to say that "big computing" has become a trend, and its technology is developing in two directions:

First, cloud computing, big data and other forms of computing are gradually converging with high-performance computing.

The rapid development of Internet and remote-sensing technology, and their penetration into other industries, have brought explosive data growth and fueled the rise of new technologies such as artificial intelligence. Complex applications mean that a data center can no longer handle its workload with a single architecture; different modes of computing need to be integrated.

Second, the boundaries between infrastructure components such as compute, networking and storage are becoming increasingly blurred, and the trend is toward software definition.

At the same time, "big" also stands for large scale — the era of exascale computing. A direct consequence of explosive data growth is an even stronger demand for computing power. For deep learning, this often amounts to a computing-power black hole.

How big is this black hole?

The best-known example, Google Brain, used a parallel computing platform of 16,000 CPUs to build a learning network with a total of about 1 billion connections. The human brain, by contrast, has roughly 100 billion neurons (corresponding to the CPUs in a deep model), and each neuron has some 5,000 synapses (corresponding to the connections, or nodes, in the network). Someone once calculated that if all the synapses in one person's brain were stretched into a straight line, it would reach from the Earth to the Moon and back again. Even so, today's machines still fall far short: the brain's computing power has been estimated at roughly 2 million times that of Tianhe-2, the world's fastest supercomputer.
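A back-of-the-envelope calculation makes the gap concrete. The sketch below simply plugs in the rough figures quoted above; none of the numbers are measurements.

    # Back-of-the-envelope comparison using the article's rough figures.
    google_brain_connections = 1e9   # ~1 billion connections in Google Brain's model
    brain_neurons = 100e9            # ~100 billion neurons in a human brain
    synapses_per_neuron = 5e3        # ~5,000 synapses per neuron

    brain_synapses = brain_neurons * synapses_per_neuron      # ~5e14 synapses
    gap = brain_synapses / google_brain_connections           # ~500,000x

    print(f"Human brain synapses: {brain_synapses:.1e}")
    print(f"Scale gap vs. Google Brain: ~{gap:,.0f}x")

By this crude count, the brain's synaptic network is on the order of half a million times larger than the Google Brain model.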

Researchers in Japan and Germany have used the K computer (Japan's fastest supercomputer) to run the largest human-brain simulation in history: 82,944 processors and about 1 PB of memory were used to simulate 1.73 billion nerve cells. It took the K computer 40 minutes to simulate 1 second of brain activity; assuming completion time scales linearly with the size of the simulated neural network, simulating 1 second of the whole brain's operation would take about 2.5 days.
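A minimal sketch of that linear extrapolation, assuming the figures above and scaling only by neuron count (the article's ~2.5-day figure implies a somewhat larger scale factor, i.e. it treats the simulated network as a smaller fraction of the whole brain once synapses are counted as well):

    # Linear extrapolation from the K computer run described above (rough figures).
    sim_neurons   = 1.73e9    # neurons simulated on the K computer
    sim_minutes   = 40        # wall-clock minutes per 1 s of simulated activity
    brain_neurons = 100e9     # rough neuron count of a whole human brain

    scale_factor = brain_neurons / sim_neurons            # ~58x by neuron count alone
    whole_brain_days = sim_minutes * scale_factor / (60 * 24)

    print(f"Scale factor (neurons only): {scale_factor:.0f}x")
    print(f"Est. wall-clock time per 1 s of whole-brain activity: ~{whole_brain_days:.1f} days")
    # Neuron count alone gives ~1.6 days; the quoted ~2.5 days corresponds to a ~90x scale factor.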

With the volume of data in human society growing at the exabyte level, a new low-cost, high-efficiency computing infrastructure is needed — chiefly disruptive changes in chip and network technology — to get the data-processing work done.

Over the past year, the words "deep learning" and "artificial intelligence" have come up more and more often. Many people are also asking: I have big-data requirements — can HPC give me unified management and integration? I am running into performance problems in deep learning — can HPC technology solve them better?

In fact, different modes of computing pose different challenges. For example, many domestic Internet companies already procure by the whole rack rather than by the individual server, and it is easy to imagine the unit of their next purchase growing larger still. This reflects the trend toward convergence in enterprise architecture. In the earliest server designs, each node was tightly coupled, with its own independent processing and storage. At the rack level, the resources inside a cabinet can be pooled and the cabinet interconnected as a whole: the network topology follows a switch-free design, and CPU, storage and IO are shared across the entire rack under global management.
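As a toy illustration of the difference (all names and numbers are hypothetical), in the coupled model each node owns its own CPUs and disks, while in the rack-level model the cabinet pools them and allocates across node boundaries:

    # Toy illustration of rack-level resource pooling (hypothetical names/figures).
    from dataclasses import dataclass

    @dataclass
    class Node:
        cpus: int
        storage_tb: int

    class Rack:
        """A whole-rack unit that pools the resources of its nodes."""
        def __init__(self, nodes):
            self.cpu_pool = sum(n.cpus for n in nodes)
            self.storage_pool_tb = sum(n.storage_tb for n in nodes)

        def allocate(self, cpus, storage_tb):
            """Allocate from the shared pools, regardless of node boundaries."""
            if cpus > self.cpu_pool or storage_tb > self.storage_pool_tb:
                raise RuntimeError("rack pool exhausted")
            self.cpu_pool -= cpus
            self.storage_pool_tb -= storage_tb

    rack = Rack([Node(cpus=32, storage_tb=8) for _ in range(24)])
    rack.allocate(cpus=100, storage_tb=50)        # spans several physical nodes
    print(rack.cpu_pool, rack.storage_pool_tb)    # 668 142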

Challenges of the new era of computing

HPC (high-performance computing) refers to computing systems and environments that use many processors (as part of a single machine) or several computers organized into a cluster (operating as a single computing resource). HPC became an established concept many years ago, and the characteristics of high-performance computing appeared even earlier.

Different modes of computing are becoming coupled, and computing architectures are converging and evolving. What challenges will we face in this new era of computing?

The first is how to solve all of an enterprise's computing infrastructure problems with a single computing platform. Different applications have different characteristics, so computing resources have to be provided as a service that matches them.

The second is that the hardware architecture has to face a variety of challenges. Big data, traditional science and engineering computation, and deep learning, for example, each need a different computing architecture behind them.

The third is how to guarantee the flexibility of applications on such a mixed computing platform.

What we lack is an adaptive computing environment

"Calculate +" strategy, the key idea is to change the original server is a server, the storage is storage, network is a network of State, let them join the evolution as a whole, back through software-defined architecture as a whole.

This trend has not been uncommon in the last few years: more and more Internet companies are getting into the server business, and more and more storage companies also want to sell computing. It is a clear sign that in the future the network can be defined by software and storage can be defined by software — as long as there is a computing architecture, all of it can be defined through software.

Under such a strategy, what needs to be provided is an adaptable computing environment: on the hardware side, converged infrastructure; on the software side, software-defined HPC.


Adaptive computing environment

Different applications place completely different demands on computing, and no single environment can do everything. In this situation, what needs to be provided is a computing environment with more possibilities and more choices.

Converged infrastructure

A very obvious trend of "Computing+" is that compute, storage and networking are converging. In rack-oriented computing infrastructure, nodes with different computing functions — two-socket and four-socket compute nodes, switching nodes and storage nodes — share the same physical form factor inside one cabinet, and resource pooling and sharing can be implemented through software definition. Across the whole infrastructure, the cabinet shares common power supplies and fans, and all of the computing resources in the rack come under unified, global management.
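A sketch of what such a rack description might look like, using a purely hypothetical schema: nodes of different functions share one cabinet, power and cooling, and are enumerated under a single management view.

    # Hypothetical rack-scale description: shared power/cooling, mixed node types,
    # one unified management view over everything in the cabinet.
    RACK_LAYOUT = {
        "power_supplies": "shared",          # centralized PSUs for the whole cabinet
        "cooling": "shared fan wall",
        "nodes": [
            {"slot": i, "type": "compute-2S"} for i in range(1, 17)
        ] + [
            {"slot": 17, "type": "compute-4S"},
            {"slot": 18, "type": "switch"},
            {"slot": 19, "type": "storage"},
            {"slot": 20, "type": "storage"},
        ],
    }

    def management_view(layout):
        """One unified view of every computing resource in the cabinet."""
        by_type = {}
        for node in layout["nodes"]:
            by_type[node["type"]] = by_type.get(node["type"], 0) + 1
        return by_type

    print(management_view(RACK_LAYOUT))
    # {'compute-2S': 16, 'compute-4S': 1, 'switch': 1, 'storage': 2}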

Software-defined HPC

"Software-defined" is a very hot term. For HPC in particular, the core is to define HPC through software and to provide the software environment.

1. Software-defined data services

In data-intensive applications, roughly 70% of the time is spent on IO (input/output), and that is the problem to solve. With software-defined storage, high-performance computing, big data and cloud computing can each use their own interfaces and data formats while sharing a unified storage space. Whether the compute cluster is running HPC, big data or deep learning, the data behind it sits on one unified back-end storage system, served up in different software-defined ways.
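A minimal sketch of the idea, with a purely hypothetical API: one shared back-end store exposed through a file-style view for HPC jobs and an object-style view for big-data or cloud jobs.

    # Hypothetical API: one back-end store, several software-defined front-end views.
    class UnifiedStore:
        """Single back-end namespace shared by all workloads."""
        def __init__(self):
            self._objects = {}
        def put(self, key, data):
            self._objects[key] = data
        def get(self, key):
            return self._objects[key]

    class PosixLikeView:
        """File-style interface an HPC job might expect."""
        def __init__(self, store): self.store = store
        def write(self, path, data): self.store.put(path.lstrip("/"), data)
        def read(self, path): return self.store.get(path.lstrip("/"))

    class ObjectView:
        """Object/bucket-style interface a big-data or cloud job might expect."""
        def __init__(self, store): self.store = store
        def put_object(self, bucket, key, data): self.store.put(f"{bucket}/{key}", data)
        def get_object(self, bucket, key): return self.store.get(f"{bucket}/{key}")

    store = UnifiedStore()
    PosixLikeView(store).write("/scratch/job1/out.dat", b"results")
    print(ObjectView(store).get_object("scratch", "job1/out.dat"))  # same bytes, different interface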

2. Software-defined network services

The software-defined networking we have used so far in cloud and big-data environments does not perform so well on IO. With software-defined network services in HPC, the network topology can be brought closer to, and made suitable for, the application. A 3D structure, for example, differs from a traditional 2D architecture in that node scalability is far less restricted. On top of a 3D network that can be expanded to very large scale, different topologies can be defined in software for different application environments; through resource awareness, traffic-intensive applications are placed appropriately on the network platform, so that application network communication gets lower latency and higher bandwidth.
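As a toy illustration, assuming a simple 3D torus and a naive placement policy (both are just assumptions for the sketch): each node has six direct neighbors, and a resource-aware layer can keep a traffic-heavy job inside a compact block of nodes so hop counts stay low.

    # Toy 3D torus topology plus a naive resource-aware placement policy (assumptions only).
    def torus_neighbors(x, y, z, dim):
        """Neighbors of node (x, y, z) in a 3D torus of size dim^3: 6 links per node."""
        n = []
        for (a, b, c) in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
            for sign in (+1, -1):
                n.append(((x + sign * a) % dim, (y + sign * b) % dim, (z + sign * c) % dim))
        return n

    def place_traffic_heavy_job(free_nodes, dim):
        """Pick a compact block of nearby nodes so hop count (latency) stays low."""
        block = [n for n in free_nodes if all(c < dim // 2 for c in n)]
        return block or free_nodes

    dim = 4
    all_nodes = [(x, y, z) for x in range(dim) for y in range(dim) for z in range(dim)]
    print(len(torus_neighbors(1, 2, 3, dim)))             # 6 neighbors per node in 3D
    print(len(place_traffic_heavy_job(all_nodes, dim)))   # 8 nodes in the compact block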

3. Software-defined resource services

This enables unified distribution and allocation of resources across applications, allocation and migration of resources between physical and virtual machines, and flexible scheduling and migration of computation between local systems and the cloud.
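A minimal sketch of the idea, with a made-up scheduling policy: one scheduler hands out local or cloud capacity and can migrate a running job between pools.

    # Hypothetical resource scheduler: unified allocation plus local-to-cloud migration.
    class ResourceScheduler:
        def __init__(self, local_cpus, cloud_cpus):
            self.pools = {"local": local_cpus, "cloud": cloud_cpus}
            self.placements = {}

        def submit(self, job, cpus):
            """Place a job in the first pool with enough free capacity."""
            for pool, free in self.pools.items():
                if free >= cpus:
                    self.pools[pool] -= cpus
                    self.placements[job] = (pool, cpus)
                    return pool
            raise RuntimeError("no capacity anywhere")

        def migrate(self, job, target_pool):
            """Move a running job to another pool, e.g. burst from local to cloud."""
            pool, cpus = self.placements[job]
            self.pools[pool] += cpus
            self.pools[target_pool] -= cpus
            self.placements[job] = (target_pool, cpus)

    sched = ResourceScheduler(local_cpus=64, cloud_cpus=1024)
    print(sched.submit("hpc-job", 48))       # -> 'local'
    print(sched.submit("training-job", 32))  # local is full, -> 'cloud'
    sched.migrate("hpc-job", "cloud")        # flexible migration between local and cloud
    print(sched.pools)                       # {'local': 64, 'cloud': 944}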

Software-defined data services, software-defined network services and software-defined resource services, together with an adaptive computing environment and converged infrastructure, make up the current thinking on software-defined HPC. HPC became a concept years ago; under today's computing trends and challenges, it can become HPC for deep learning and HPC for big data.

"The writer Liu, tide high performance server product manager"

