Interpreting the use of cloud technology in audio systems

Last year, the concept of a fifth-generation audio architecture with cloud backup was first proposed in China. Looking back at the development of sound reinforcement technology, it has undergone many changes since its birth. Setting aside the different functions of individual sound-processing devices, the architecture can be roughly divided into five generations:

1. First-generation audio architecture: pure analog. Characterized by analog processing and analog connections.

The first generation of pure analog technology covers a variety of sound-processing devices: mixers, equalizers, delays, compressors, splitters, crossovers and so on, with each device handling its own part of sound control and processing. Its disadvantages are complex cabling and the difficulty of saving setup data.

2. Second-generation audio architecture: combined digital and analog. Characterized by digital processing with analog connections.

The shortcomings of the first generation gave rise to new technologies, digital peripherals and digital mixers, which form the second-generation, combined digital and analog architecture. Their appearance resolved the first generation's problems of complex cabling and easy misoperation.

At the same time, as demands on sound reinforcement grew and digital technology matured, the many fixed-installation venues that run without a dedicated sound engineer raised further requirements: on the basis of merging the mixer and peripheral processing into one unit, add more matrix nodes and distribute signals losslessly. This led to the third-generation, all-digital architecture.

3. Third-generation audio architecture: all digital. Characterized by digital processing and digital connections.

This architecture eliminates the analog cables between the mixer, digital peripherals and digital amplifiers. Some products even do away with the three as separate devices, leaving only a full-featured digital audio matrix that integrates the mixer, peripheral processing and amplification. The biggest advantage of this architecture is that data can be shared internally in ways that were previously impossible: a matrix of 64×64 channels or more is easy to achieve (currently up to 256×256), and within the limits of the available DSP resources it can emulate virtually any number of audio processing modules. This technology gave birth to the concept of the central equipment room; many hotels and conference buildings are designed with one central room from which all equipment is managed.
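As a rough illustration of what such a digital matrix core does, the sketch below mixes a block of inputs to outputs through a gain matrix; the sizes, names and gain values are illustrative assumptions, not taken from any particular product.

```python
import numpy as np

# Illustrative sketch of a digital audio matrix: route and mix N inputs to M
# outputs through a gain matrix, the basic operation of an all-digital core.
N_IN, N_OUT = 64, 64        # a 64x64 matrix; the article notes products up to 256x256
BLOCK = 256                 # samples processed per block (assumed)

gains = np.zeros((N_OUT, N_IN))   # routing/mixing coefficients, adjustable at run time
gains[0, 0] = 1.0                 # input 0 to output 0 at unity gain
gains[1, 0] = 0.5                 # input 0 also to output 1 at half level

def process_block(inputs: np.ndarray) -> np.ndarray:
    """Mix one block: inputs has shape (N_IN, BLOCK), result (N_OUT, BLOCK)."""
    return gains @ inputs

out = process_block(np.random.randn(N_IN, BLOCK) * 0.1)
print(out.shape)  # (64, 256)
```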

4. Fourth-generation audio architecture

However, society's needs keep rising. The strengths of the third-generation technology greatly reduce the number of technicians required and improve the stability of overall operation, but its limitation is that the connections between core audio matrices can only be made inside one equipment room: everything extending outward is analog, and digital connections between devices do not follow public protocols. While using these technologies, people wanted to carry high-quality audio and system processing power to places that analog lines cannot reach. This demand spawned the fourth generation of technology: network transmission integrated on top of the third-generation technology, extending high-quality audio and management of the entire system to areas that were previously out of reach. The best-known application is CobraNet. These technologies are used in large-scale sound reinforcement projects, such as building-wide broadcast systems and sound reinforcement for large public spaces. This is the fourth-generation audio architecture: the digital network architecture, characterized by digital processing, digital connections and network expansion.

The fourth-generation technology solves the problems of transmitting high-quality audio over long distances and managing the entire system in an integrated way, freeing staff from sitting in every meeting room while maintaining quality and even reducing the overall investment in the system.

The core of the whole system is the DSP program, which can perform multiple tasks at the same time; for example, different ports working independently can serve multiple conference rooms or separate broadcast partitions. All the programs are stored in one set of audio processing hardware (often not a single standalone host), while the user's operating data, such as volume, EQ and routing, are variables that can be adjusted at any time. Together with the system program, these parameter variables form the basis for stable operation of the entire system, as sketched below.
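To make the split between the fixed program and the constantly adjusted parameters concrete, here is a minimal sketch; the partition names, processing chains and parameter values are hypothetical.

```python
from dataclasses import dataclass, field

# Minimal sketch of one partition: a fixed processing "program" plus the
# user-adjusted run-time parameters (volume, EQ, routing) that change daily.
@dataclass
class Partition:
    name: str
    program: list[str]                                        # changed only by re-uploading
    params: dict[str, float] = field(default_factory=dict)    # adjustable at any time

rooms = [
    Partition("conference_room_1", ["gate", "eq", "compressor", "matrix"],
              {"mic_gain_db": -6.0, "eq_low_shelf_db": 2.0}),
    Partition("lobby_broadcast", ["eq", "limiter", "matrix"],
              {"master_gain_db": -12.0}),
]

# A day-to-day operator tweak touches only the parameters, never the program:
rooms[0].params["mic_gain_db"] = -3.0
```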

5. Fifth-generation audio architecture

The fourth-generation technology greatly eases audio work, yet with the deployment of large networked DSP systems new demands have emerged, and the stability requirements placed on the DSP in large projects keep rising. Once the DSP host fails, the entire system stops working, and even if the originally saved program can be restored, it is difficult to bring the variable parameters that are adjusted constantly in daily work back to their latest state. Furthermore, because the fourth-generation DSP manages a large multi-partition system, whenever any one partition needs to be modified and its program re-uploaded, every partition stops working until the upload finishes. This can seriously disrupt the operation of the whole system.

In addition, a large project is often not completed in one go; capacity needs to be added continuously during construction. The DSP resources of the fourth-generation technology are fixed at the start of the design and cannot be expanded (handling different areas with multiple hosts does not count as one system). This is another dead end in the construction of large systems.

With the development of society, the IT industry has driven the adoption of cloud technologies, including cloud computing, cloud storage and cloud backup. The defining characteristic of cloud computing is that the end user does not need to pay attention to the working state of the host in the cloud and only has to consider the local requirements. These technologies have matured in applications such as e-mail, WeChat and network drives, and have been widely accepted.

Because networked audio technology can form large sound reinforcement systems and greatly relieve the workload of managers, it has been widely adopted. But the shortcomings exposed as fourth-generation audio technology developed are unacceptable to large and important users. Combining these lessons with the characteristics of today's cloud technology, the fifth-generation audio architecture was born.

Characteristics of the fifth-generation audio architecture

1. The system architecture is a cloud architecture: connections between systems are network connections, the input analog signal is converted directly into a network signal, and at the output terminal the network signal is converted directly back into an analog signal, i.e. an AN/NA connection; the central server performs all computing work, and the end user does not need to consider the working state of the cloud.

2. System resources can be dynamically allocated: each terminal partition can freely request resources, and its program can be uploaded or modified within the resources allowed by its license without affecting the operation of any other partition.

3. Cloud resources can be expanded without limit: core DSPs can be added indefinitely, either by adding DSP module hardware inside a server or by adding another networked server.

4. A stability mechanism: the core server must support online hot backup, and the hot backup must cover both the program and the user-adjusted data.

5. Low transmission delay: from the analog input, through processing at the central server, and back to local sound reinforcement, the round trip must not be perceptible to the user. At present the total delay across the roughly ten intermediate links is under 2 ms (a rough per-link budget is sketched after this list).

In addition, it requires:

6. A security mechanism: audio signals in the system can appear only at their designated terminals and must not leak anywhere else.

7. Compatibility: simultaneous compatibility with the current mainstream public protocols, such as AVB, Dante and CobraNet.
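To put the 2 ms figure into perspective, the small calculation below spreads the budget evenly over ten links and converts it to samples at 48 kHz; the even split and the sample rate are assumptions made here for scale, not figures from the article.

```python
# Back-of-the-envelope check of the latency target: if the total delay across
# about ten links must stay under 2 ms, each link gets 0.2 ms on average,
# roughly 10 samples at 48 kHz. The even split and sample rate are assumptions.
TOTAL_BUDGET_MS = 2.0
LINKS = 10
FS = 48_000  # Hz

per_link_ms = TOTAL_BUDGET_MS / LINKS
per_link_samples = per_link_ms / 1000 * FS
print(f"per link: {per_link_ms:.2f} ms ~= {per_link_samples:.1f} samples at {FS} Hz")
# per link: 0.20 ms ~= 9.6 samples at 48000 Hz
```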

Advantages of the fifth-generation audio architecture

There are two main advantages to adopting this technology: free allocation of core resources and unlimited expansion of core resources.

1. Free allocation of core resources.

Among the DSP resources, some are used heavily and some lightly, some often and some rarely. Under the old approach, must we build a complete set of DSP resources for every room? With only three venues the answer is simple: give each its own. But what happens when venues are added or dropped during actual use? Naturally, you want the system to allocate resources dynamically. If there are 20 venues but they will never all be in use at the same time, we only need to provision part of the resources and keep appropriate redundancy. But can the resources still cover demand at the moment of use? If all 20 rooms hit peak usage, then we expand the resources. A minimal sketch of such an allocator follows.
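The sketch below illustrates the idea of provisioning for typical rather than peak demand; the class, the unit of "DSP blocks" and the numbers are hypothetical.

```python
# Hypothetical sketch of dynamic DSP allocation with redundancy: provision for
# the number of venues expected to run concurrently, not for all of them.
class DspPool:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.allocations: dict[str, int] = {}

    def request(self, venue: str, blocks: int) -> bool:
        """Grant the request only if it fits within the remaining capacity."""
        used = sum(self.allocations.values())
        if used + blocks > self.capacity:
            return False   # in a fifth-generation system this would trigger expansion
        self.allocations[venue] = self.allocations.get(venue, 0) + blocks
        return True

    def release(self, venue: str) -> None:
        self.allocations.pop(venue, None)

# Provision for, say, 8 concurrent venues out of 20, each needing 16 blocks:
pool = DspPool(capacity_blocks=8 * 16)
print(pool.request("conference_room_3", 16))   # True while capacity remains
```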

2. Core resources can be expanded indefinitely.

Consider a simple audio demonstration. During a speech, the noise from the audience below the podium rises, so the level of the speaker's microphone needs to be raised. The usual method is to have an operator at the mixer push up the volume. But is there another way?

In the first case, the ambient sound rises while the microphone level stays as it is; the change in ambient sound has no effect on it. In the second case we switch to another algorithm: when the ambient sound rises, the microphone level quickly rises with it, and when the ambient sound falls, the microphone level drops back to its normal level at the same time. This shows that many things can be handled by intelligent algorithms without spending human effort on them. What we are demonstrating is a very simple function, but will it stop there? Will a conference room with only this feature be enough in the future? Of course not. We cannot imagine how far the technology will go; there will certainly be better solutions, and that is precisely why our resources must be able to expand without limit. A rough sketch of this kind of ambient-tracking gain appears below.
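The sketch below shows one plausible version of the "second case", a microphone gain that tracks the ambient level; the reference level, ratio and cap are assumptions, not the article's actual algorithm.

```python
# Illustrative ambient-tracking microphone gain: raise the mic by a fraction of
# every dB the room noise climbs above a reference, up to a cap, and return to
# unity gain when the room quiets down. Parameters are assumed, not specified.
def tracking_gain_db(ambient_db: float,
                     reference_db: float = 60.0,
                     ratio: float = 0.5,
                     max_boost_db: float = 10.0) -> float:
    boost = (ambient_db - reference_db) * ratio
    return min(max(boost, 0.0), max_boost_db)

for ambient in (55, 60, 66, 80):   # hypothetical ambient levels in dB SPL
    print(ambient, "->", tracking_gain_db(ambient), "dB added to the mic")
# 55 -> 0.0, 60 -> 0.0, 66 -> 3.0, 80 -> 10.0
```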

Manual operation can solve some of these problems, but can it cope once a project grows to 20, 50 or 80 rooms? Obviously not; only speed and algorithms can. Adding intelligent algorithms brings real convenience: first, no operating skills are required; second, far fewer people are needed; third, sustained concentration matters much less. In the past, a sound engineer who lost focus could not follow the speaker, and much of the audio processing required professional knowledge that made it hard for non-specialists to use.

Doing the math on equipment and labor costs

In today's labor market, skills develop along both the design and the application directions, and talent on the application side seems harder to find. For corporate users, hotels, government units and the like, recruiting a qualified sound operator is much more difficult than recruiting an account manager. We are not advocating doing away with technicians, but reducing their number as appropriate where conditions allow is of great benefit to cost control.

Suppose we cut one ordinary staff position with a basic salary of 5,000 yuan per month: over 10 years that adds up to roughly 750,000 yuan. If the person is good and you want to train and keep him, with a raise every three years or so, the total reaches about 1.14 million yuan. If that money were saved instead, with part of it used to train core technicians and the rest to buy intelligent equipment, would that not be a better outcome? A rough reconstruction of these figures is sketched below.
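The small calculation below tries to reconstruct the article's figures; the roughly 25% overhead on top of base salary is an assumption introduced here to explain the gap, and the raise schedule behind the 1.14 million figure is not stated, so it is not reconstructed.

```python
# Rough reconstruction of the labor-cost figures. The ~25% overhead factor is an
# assumption made here; the article only gives the 750,000 and 1.14 million totals.
base_salary_per_month = 5_000                 # yuan, from the article
months = 12 * 10

base_total = base_salary_per_month * months
print(base_total)                             # 600,000 yuan in base salary alone

quoted_total = 750_000                        # yuan, the article's 10-year figure
implied_monthly_cost = quoted_total / months
print(implied_monthly_cost)                   # 6,250 yuan/month, about 25% above base
# The 1.14 million figure additionally assumes raises roughly every three years;
# the exact schedule is not given, so it is left unreconstructed here.
```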


Stability mechanism: backup is one of the most important means

We have seen that fourth-generation systems are concentrated on the central server, which therefore has to retain two kinds of data. When we finish a project there is usually a set of commissioning data, but that is not the data the customer ultimately needs. For example, while I am speaking right now, the microphone volume in use during the speech is not necessarily the volume that was saved at handover. That is obviously not the result we want; what we want is to preserve the current, stable state of the data, and backup is one of the most important means of doing so.

We generally recommend two backup mechanisms. The first is a centralized architecture: as shown in the figure, the host is at the top and the various terminals are below it. Some people may ask: you just mentioned the network, so what happens if the network is interrupted? And what if the content is highly confidential, the data must not be sent to the cloud, and no third party should be able to touch it?

There is another way, called a distributed-centralized architecture, in which each subunit has its own computing system. Normally the work is done in the cloud by the host; when confidentiality is required, or when the information must not leave the conference room, the processing can stay inside the room's own subunit instead.
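As a minimal sketch of the hot-backup requirement mentioned above, the code below mirrors both kinds of data, the program and the user-adjusted parameters, to a standby copy after every change; the state layout and the simple deep-copy mirroring are assumptions for illustration.

```python
import copy
import json

# Minimal sketch of online hot backup of the two kinds of data the article names:
# the system program and the constantly adjusted run-time parameters.
state = {
    "program": {"conference_room_1": ["gate", "eq", "compressor", "matrix"]},
    "params":  {"conference_room_1": {"mic_gain_db": -3.0, "eq_low_shelf_db": 2.0}},
}

standby: dict = {}              # in a real system this would live on the backup server

def snapshot(live: dict) -> None:
    """Mirror the live state to the standby copy (here simply a deep copy)."""
    global standby
    standby = copy.deepcopy(live)

snapshot(state)
state["params"]["conference_room_1"]["mic_gain_db"] = 0.0   # an operator tweak
snapshot(state)                                             # standby now holds the latest values
print(json.dumps(standby["params"], indent=2))
```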

With IT giants such as Microsoft, Intel, Xilinx and Google turning their attention to the audio and video market, with the AVnu organization announcing that its time-synchronization protocol AVB is freely open to anyone who needs it, with cloud technology penetrating deeper into the audio field, with the growth of IPv6 and a new generation of high-speed networks, and with the audio workers who traditionally relied on experience gradually leaving the stage, future audio systems will inevitably shift to a quantifiable working model similar to that of the IT industry. Quantification means quantified processes, quantified results and quantified learning. The core resource investment of an audio system also changes from the past 1:1 ratio between provisioned resources and resources in use to 1:3 or even lower, greatly reducing overall cost. It can therefore be argued that fifth-generation audio technology built on a cloud architecture makes large systems safer, cheaper and more flexible to build, and will be the mainstream direction and the inevitable choice for large audio sound reinforcement systems in the future.
