The Internet of Things continues to heat up, and the explosion of data it brings cannot be ignored. As data volumes grow by orders of magnitude, the pressure on servers in the data center keeps mounting: the higher the demand for processing power, the more urgent server acceleration becomes.
Generally, to keep data redundant, a big data architecture keeps three copies of every piece of data, which means users must provision three times the storage space. What can be done when that is not feasible? Some argue that logical encoding can cut the hard-disk requirement from 3x to 1.4x, but CPU utilization then climbs above 99% because the CPU has to perform the encoding operations, leaving no spare CPU capacity for the big data analysis itself. How do we solve this? This is why second-generation distributed computing is being promoted: accelerate with FPGAs so that the CPU handles the general-purpose computing load while the FPGA takes on the large volume of repetitive computation, separating control from computation.
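The arithmetic behind those figures is straightforward. The sketch below compares the raw-storage cost of 3-way replication with an encoded layout; the article only says "logic," so the specific 10-data + 4-parity erasure-coding layout here is an assumption, chosen because it reproduces the 1.4x figure.

```python
# Raw-storage cost of the two redundancy strategies described above.
# The 10+4 erasure-coding layout is an assumption chosen because it
# reproduces the article's 1.4x figure; the article only says "logic".

def replication_cost(copies: int) -> float:
    """Bytes of raw storage needed per byte of user data with N full copies."""
    return float(copies)

def erasure_coding_cost(data_shards: int, parity_shards: int) -> float:
    """Bytes of raw storage per byte of user data with k data + m parity shards."""
    return (data_shards + parity_shards) / data_shards

print(replication_cost(3))          # 3.0x -- three full backups
print(erasure_coding_cost(10, 4))   # 1.4x -- matches the figure in the text
```

The saving is exactly the trade the article describes: far less disk, paid for with the extra encoding computation that the CPU (or, in the second-generation design, the FPGA) must perform.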
How do the CPU and FPGA integrate across their boundary? What roles do they play in server acceleration? How can the servers of the future do more with less, faster, better, and cheaper? With these questions in mind, reporters went looking for answers.
When semiconductor technology hits a bottleneck, how can servers accelerate?
As can be seen from the evolution of computer systems, the earliest computers performed single-task computing; as data grew, they gradually evolved toward multi-task computing, so systems came to contain multiple CPUs (such as POWER processors) that access memory simultaneously. The first problem to solve is therefore data consistency: when one processor operates on a piece of data, another processor must still read the correct value. Systems generally use hardware to guarantee this consistency, so that another thread reading the data gets the correct value. As a result, when computer systems evolved from single-CPU to multi-CPU designs, their performance-per-watt dropped considerably, and how to raise CPU performance while reducing power consumption has puzzled many users.
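Hardware cache coherence is invisible to software, but the underlying hazard is the same one programmers see with unsynchronized threads. The minimal sketch below uses Python threads to show how an unprotected read-modify-write can lose updates, and how a lock (playing the role the coherence hardware plays for caches) restores consistency; it is an analogy, not a model of the POWER coherence protocol itself.

```python
import threading

ITERATIONS = 100_000
counter = 0
lock = threading.Lock()

def unsafe_increment():
    # counter += 1 is read, add, write: two threads can interleave
    # and overwrite each other's update, losing increments.
    global counter
    for _ in range(ITERATIONS):
        counter += 1

def safe_increment():
    # The lock makes the read-modify-write indivisible, so every thread
    # always sees the latest value -- the software analogue of
    # hardware-enforced data consistency between CPUs.
    global counter
    for _ in range(ITERATIONS):
        with lock:
            counter += 1

for worker in (unsafe_increment, safe_increment):
    counter = 0
    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(worker.__name__, counter)  # the safe version always prints 400000
```

The synchronization that keeps the safe version correct is also what costs performance, which is exactly why the move from single-CPU to multi-CPU systems hurt performance-per-watt.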
Bruce Wile, an IBM Distinguished Engineer, explains: "With the growth of Internet data, we need more hardware computing power to process more data in our systems. One solution is to open up more hardware threads per CPU core and use those threads to improve processing power and I/O throughput. We have also introduced GPUs and FPGAs to help the system process data, but traditionally GPUs and FPGAs are attached to the system as I/O devices. Using these devices demands more skills from engineers: programmers need to learn hardware knowledge, and the kernel needs drivers developed for these I/O devices. And because they are I/O devices, they do not share memory with the CPU, so kernel code is needed to move data back and forth. Another problem in front of us is that semiconductor technology has reached a turning point: its price-performance is no longer improving, so we can no longer rely on advances in semiconductor technology alone to make our systems faster and stronger. We need to consider hardware, the operating system, and device applications together to find a better solution."
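The data-movement cost Wile describes is easy to see in miniature. The sketch below is a deliberately simplified, hypothetical model (the class and method names are invented for illustration, not a real driver API): an I/O-attached accelerator must stage data through explicit copies, while a coherently attached accelerator of the kind IBM advocates works on host memory directly.

```python
class IoAttachedAccelerator:
    """GPU/FPGA hanging off the I/O bus: it shares no memory with the
    CPU, so the kernel/driver must copy data in and out explicitly.
    (Hypothetical model for illustration, not a real device API.)"""

    def run(self, host_data):
        device_buffer = list(host_data)           # copy host -> device
        results = [x * x for x in device_buffer]  # compute on the device copy
        return results                            # copy device -> host

class CoherentAccelerator:
    """Coherently attached FPGA: it reads and writes the host's memory
    directly, with hardware keeping CPU caches and device accesses
    consistent, so the staging copies disappear."""

    def run(self, host_data):
        for i, x in enumerate(host_data):
            host_data[i] = x * x                  # operate on host memory in place
        return host_data

data = [1, 2, 3, 4]
print(IoAttachedAccelerator().run(list(data)))  # [1, 4, 9, 16], via two copies
print(CoherentAccelerator().run(list(data)))    # same result, no staging copies
```

Removing the staging copies is also what removes the demand for special driver knowledge on the programmer's side, which is the skills burden Wile points to.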