
CN111190745A - A data processing method, apparatus and computer readable storage medium - Google Patents



A data processing method, apparatus and computer readable storage medium

Info

Publication number
CN111190745A
CN111190745A (Application CN201911071773.8A)
Authority
CN
China
Prior art keywords
queue
target
data processing
standby
running
Prior art date
2019-11-05
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911071773.8A
Other languages
Chinese (zh)
Other versions
CN111190745B (en)
Inventor
王亮
肖怀锋
顾栋波
高立周
赵光普
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2019-11-05
Filing date
2019-11-05
Publication date
2020-05-22
2019-11-05 Application filed by Tencent Technology Shenzhen Co Ltd
2019-11-05 Priority to CN201911071773.8A
2020-05-22 Publication of CN111190745A
2024-01-30 Application granted
2024-01-30 Publication of CN111190745B
Status: Active
2039-11-05 Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/546 Message passing systems or structures, e.g. queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/5066 Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/54 Indexing scheme relating to G06F9/54
    • G06F2209/548 Queue

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiment of the application discloses a data processing method, a data processing device and a computer readable storage medium. A standby queue resource pool is generated, and the standby queue resource pool comprises a standby queue; the working state of a running queue is acquired, and a running queue whose working state is an abnormal state is determined as a target queue; transmission configuration information associated with the target queue is determined; and the standby queue is associated with the transmission configuration information, so that the standby queue replaces the target queue to perform data processing. In this way, the standby queue is generated in advance; when a running queue enters an abnormal state, it is determined as the target queue and the standby queue is associated with the transmission configuration information of the target queue, so that the standby queue can quickly replace the target queue in the abnormal state to perform data processing. There is no need to wait for the target queue in the abnormal state to be reset, long-time communication interruption is avoided, and the data processing efficiency is greatly improved.

Description

Data processing method and device and computer readable storage medium

Technical Field

The present application relates to the field of computer communication technologies, and in particular, to a data processing method and apparatus, and a computer-readable storage medium.

Background

Due to the rapid development of cloud computing, computing work is more and more intensively completed in a data center, and more terminals only send requested tasks to the data center for computing by using a network, so that the demand of the terminals on computing capacity is reduced, but the demand of the data center on data transceiving and computing capacity is increased day by day.

In the prior art, the bottom-layer data of a data center is generally received and transmitted through queues of various hardware, such as an intelligent network card with multiple queues. The intelligent network card has a distribution mechanism based on multiple Direct Memory Access (DMA) queues and can distribute messages in a network to different queues, and different queues are operated by cores of different Central Processing Units (CPUs), so as to implement high-speed data transmission and processing.

In the research and practice process of the prior art, the inventor of the present application found that a queue in an intelligent network card is prone to hanging up abnormally, and resetting a queue that has hung up abnormally causes a long communication interruption, so the efficiency of data processing is low.

Disclosure of Invention

The embodiment of the application provides a data processing method, a data processing device and a computer readable storage medium, which can improve the data processing efficiency.

In order to solve the above technical problem, an embodiment of the present application provides the following technical solutions:

a method of data processing, comprising:

generating a standby queue resource pool, wherein the standby queue resource pool comprises a standby queue;

acquiring the working state of a running queue, and determining the running queue with the working state in an abnormal state as a target queue;

determining transmission configuration information associated with the target queue;

and associating the standby queue with the transmission configuration information, so that the standby queue replaces the target queue to perform data processing.

Correspondingly, an embodiment of the present application further provides a data processing apparatus, including:

a generating unit, used for generating a standby queue resource pool, wherein the standby queue resource pool comprises a standby queue;

an acquisition unit, used for acquiring the working state of a running queue, and determining the running queue with the working state in an abnormal state as a target queue;

a determining unit, configured to determine transmission configuration information associated with the target queue;

and the association unit is used for associating the standby queue with the transmission configuration information so that the standby queue replaces the target queue to perform data processing.

In some embodiments, the apparatus further comprises:

the reset unit is used for controlling the target queue to reset when the system is detected to be in an idle state;

and moving the reset target queue to a standby queue resource pool, and changing the target queue into a standby queue.

In some embodiments, the obtaining unit includes:

the first detection unit is used for detecting whether the statistics of the application layer received messages is continuously zero within a first preset time;

the second detection unit is used for detecting whether the packet loss statistics of the messages submitted by the running queue continuously increases within a second preset time when the statistics of the messages received by the application layer is continuously zero within the first preset time;

and the judging subunit is used for judging the working state of the running queue to be an abnormal state and determining the running queue to be a target queue when detecting that the packet loss statistics of the messages submitted by the running queue continuously increases within a second preset time.

In some embodiments, the determining subunit is configured to:

performing exception accumulation on the running queue;

and when the abnormal accumulated value reaches a preset threshold value, judging the working state of the running queue to be an abnormal state, and determining the running queue to be a target queue.

In some embodiments, the obtaining unit further includes:

the clearing unit is used for clearing the abnormal accumulated value when detecting that the statistics of the application layer received messages is not continuously zero within a first preset time; or

And when detecting that the packet loss statistics of the messages submitted by the running queue is not continuously increased within a second preset time, clearing the abnormal accumulated value.

In some embodiments, the generating unit includes:

a receiving subunit, configured to receive a resource amount of the standby queue resource pool;

and the application subunit is used for generating a standby queue resource pool according to the resource quantity and applying for the standby queues with corresponding quantity to carry out activation processing.

In some embodiments, the receiving subunit is configured to:

acquiring the total resource amount of the running queue and the current data receiving and transmitting stability evaluation value;

determining a corresponding proportional value according to the data receiving and transmitting stability evaluation value;

and generating the resource amount of the standby queue resource pool based on the total resource amount and the proportion value.

Correspondingly, the embodiment of the present invention further provides a computer-readable storage medium, where multiple instructions are stored, and the instructions are suitable for being loaded by a processor to perform the data processing method.

The method comprises the steps of generating a standby queue resource pool, wherein the standby queue resource pool comprises a standby queue; acquiring the working state of a running queue, and determining a running queue whose working state is an abnormal state as a target queue; determining transmission configuration information associated with the target queue; and associating the standby queue with the transmission configuration information, so that the standby queue replaces the target queue to perform data processing. In this way, the standby queue is generated in advance; when a running queue enters an abnormal state, it is determined as the target queue and the standby queue is associated with the transmission configuration information of the target queue, so that the standby queue can quickly replace the target queue in the abnormal state to perform data processing. There is no need to wait for the target queue in the abnormal state to be reset, long-time communication interruption is avoided, and the data processing efficiency is greatly improved.

Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.

FIG. 1 is a schematic diagram of a data processing system according to an embodiment of the present application;

FIG. 2a is a schematic flow chart of a data processing method according to an embodiment of the present disclosure;

fig. 2b is a schematic structural diagram of the receive side scaling (RSS) technique provided in an embodiment of the present application;

FIG. 3 is another schematic flow chart diagram of a data processing method according to an embodiment of the present application;

fig. 4a is a schematic view of a scene of a data processing method according to an embodiment of the present application;

FIG. 4b is a schematic structural diagram of the Flow Director technique provided in an embodiment of the present application;

fig. 4c is a schematic view of another scenario of the data processing method according to the embodiment of the present application;

fig. 4d is a schematic view of another scenario of the data processing method according to the embodiment of the present application;

FIG. 5a is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;

FIG. 5b is a schematic diagram of another structure of a data processing apparatus according to an embodiment of the present application;

FIG. 5c is a schematic diagram of another structure of a data processing apparatus according to an embodiment of the present application;

FIG. 5d is a schematic diagram of another structure of a data processing apparatus according to an embodiment of the present application;

fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

The embodiment of the application provides a data processing method, a data processing device and a computer readable storage medium.

Referring to fig. 1, fig. 1 is a schematic view of a scenario of a data processing system according to an embodiment of the present application. The scenario is applied to a cloud data computing center and includes a user space, a central processing unit and an intelligent network card, and a data packet may be processed in the user space. The cloud data computing center may include central processors with a plurality of cores; for example, the central processors may be 64-core processors. The intelligent network card may assist the central processing unit in handling network loads through a Field Programmable Gate Array (FPGA) and may provide distributed computing resources, i.e. implement network card multi-queue: the intelligent network card has a distribution mechanism based on a plurality of direct memory access queues, and the cores of different central processing units operate different queues, thereby avoiding the lock overhead caused by a plurality of threads accessing the same queue at the same time. After receiving a network message, the intelligent network card can distribute the message to different queues, for example, through receive side scaling (RSS) proposed by Microsoft, which distributes packets to a plurality of queues uniformly according to hash values, or through Flow Director proposed by Intel, which distributes packets to designated queues based on exact matching by table lookup.

Because a plurality of queues work simultaneously, a queue is liable to hang up. In the prior art, a watchdog function is implemented in the network card driver framework: if a queue times out, a reset of the network port queue is triggered, and during this reset operation the queue is in a waiting state, which causes a communication interruption of 1 to 5 seconds and seriously affects the data processing performance. Therefore, the embodiment of the application can generate a standby queue resource pool which comprises a standby queue, acquire the working state of a running queue, determine a running queue whose working state is an abnormal state as a target queue, determine the transmission configuration information associated with the target queue, and associate the standby queue with the transmission configuration information, so that the standby queue replaces the target queue to perform data processing. In this way, the failed target queue is quickly switched to an available standby queue without resource reconstruction; the whole process takes 0.1-0.5 milliseconds, which meets the requirement of efficient operation of the cloud computing center.

It should be noted that the scenario diagram of the data processing system shown in fig. 1 is only an example, and the data processing system and the scenario described in the embodiment of the present application are for more clearly illustrating the technical solution of the embodiment of the present application, and do not form a limitation on the technical solution provided in the embodiment of the present application, and as a person having ordinary skill in the art knows, with the evolution of the data processing system and the occurrence of a new service scenario, the technical solution provided in the embodiment of the present application is also applicable to similar technical problems.

The following are detailed below. The numbers in the following examples are not intended to limit the order of preference of the examples.

The first embodiment,

In the present embodiment, the data processing method will be described from the perspective of a data processing apparatus, which may be integrated into a server that has a storage unit, is provided with a microprocessor and has computing capability, and the server may be a cloud host.

A method of data processing, comprising: generating a standby queue resource pool, wherein the standby queue resource pool comprises a standby queue; acquiring the working state of a running queue, and determining the running queue with the working state in an abnormal state as a target queue; determining transmission configuration information associated with a target queue; and associating the standby queue with the transmission configuration information, so that the standby queue replaces the target queue to perform data processing.

Referring to fig. 2a, fig. 2a is a schematic flow chart of a data processing method according to an embodiment of the present disclosure. The data processing method comprises the following steps:

In step 101, a standby queue resource pool is generated.

The cloud host can be provided with a plurality of processors, such as 64-core central processing units. In order to increase the operation efficiency, the processors need to perform communication processing, and each core of the central processing units can operate a different queue. A queue connected with the central processing unit can be called a running queue. A queue is a special linear table, characterized in that deletion is only allowed at the front end (front) of the table and insertion is only allowed at the rear end (rear) of the table. The queue can be one of the network card multi-queues in the intelligent network card, and each queue in the network card multi-queue can independently serve as a data transceiving resource.

In this embodiment of the present application, when the cloud host initializes the running queues, a standby queue resource pool is generated, and a number of queues greater than the system requirement is applied for. The standby queue resource pool may be a dedicated piece of memory space, and the queues stored in the standby queue resource pool are called standby queues; that is, the standby queue resource pool includes standby queues, and the number of the standby queues is determined by the size of the storage space of the standby queue resource pool.

In some embodiments, the step of generating the reserve queue resource pool may include:

(1) receiving the resource amount of the standby queue resource pool;

(2) and generating a standby queue resource pool according to the resource amount, and applying for a corresponding amount of standby queues to carry out activation processing.

When the intelligent network card is started, the storage space of the standby queue resource pool can be set by the user, that is, the user can apply for a standby queue resource pool of a corresponding size according to the actual use condition. After receiving the resource amount of the standby queue resource pool set by the user, the cloud host generates a standby queue resource pool of the corresponding size according to the resource amount and applies for a corresponding number of standby queues to be activated, so that the user controls the number of standby queues and the resources of the system are saved.
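
For illustration only, the following is a minimal C sketch of such a standby queue resource pool; the type and function names (queue_t, standby_pool_t, standby_pool_init) and the array-backed layout are assumptions made for this example and are not part of the original disclosure.

```c
#include <stdlib.h>

/* Illustrative queue handle; a real smart NIC driver would also hold
 * DMA ring descriptors, doorbell registers and so on. */
typedef struct queue {
    int id;
    int active;              /* 1 = activated, 0 = inactive */
} queue_t;

/* Standby queue resource pool: a dedicated memory region holding
 * pre-activated queues that are not yet carrying traffic. */
typedef struct standby_pool {
    queue_t *queues;
    size_t   count;          /* standby queues currently available */
    size_t   capacity;       /* maximum number the pool can hold   */
} standby_pool_t;

/* Generate the pool according to the user-supplied resource amount and
 * activate the corresponding number of standby queues. */
static standby_pool_t standby_pool_init(size_t resource_amount)
{
    standby_pool_t pool;
    pool.capacity = resource_amount;
    pool.count    = resource_amount;
    pool.queues   = calloc(resource_amount, sizeof(queue_t));
    for (size_t i = 0; i < pool.count; i++) {
        pool.queues[i].id     = (int)i;
        pool.queues[i].active = 1;   /* pre-activated, ready to take over */
    }
    return pool;
}
```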

In some embodiments, the step of receiving the resource amount of the reserve queue resource pool may include:

(1.1) acquiring the total resource amount of the running queue and the current data receiving and transmitting stability evaluation value;

(1.2) determining a corresponding proportional value according to the data receiving and transmitting stability evaluation value;

and (1.3) generating the resource amount of the standby queue resource pool based on the total resource amount and the proportion value.

The total resource amount of the running queues connected with the central processing unit can be obtained: the more running queues there are, the larger the total resource amount is, and the fewer running queues there are, the smaller the total resource amount is. A current data transceiving stability evaluation value is also acquired; the data transceiving stability evaluation value may be an evaluation of the stability of the running environment of the running queues, such as an evaluation of the stability of the network environment and/or an evaluation of the running state of the running queues.

Further, a corresponding proportion value can be determined according to the data transceiving stability evaluation value. The proportion value is a number between 0 and 1: the higher the data transceiving stability evaluation value is, the lower the proportion value is, and the lower the evaluation value is, the higher the proportion value is. Therefore, a number of standby queues adapted to the current running environment is generated based on the product of the total resource amount of the running queues and the proportion value, which saves system resources while avoiding a shortage of standby queues.
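
As a concrete illustration only, a possible C sketch of this calculation follows; the patent does not specify how the stability evaluation value maps to the proportion value, so the linear mapping and the clamping limits below are assumptions.

```c
#include <stddef.h>

/* Assumed mapping from a stability score in [0, 1] to a pool proportion
 * in (0, 1): the more stable the environment, the smaller the standby pool. */
static double standby_proportion(double stability_score)
{
    double p = 1.0 - stability_score;
    if (p < 0.05) p = 0.05;          /* always keep a small reserve */
    if (p > 0.95) p = 0.95;
    return p;
}

/* Resource amount of the standby queue resource pool = total resource
 * amount of the running queues multiplied by the proportion value. */
static size_t standby_resource_amount(size_t total_run_queue_resources,
                                      double stability_score)
{
    return (size_t)(total_run_queue_resources *
                    standby_proportion(stability_score));
}
```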

In step 102, the working state of the run queue is obtained, and the run queue with the working state in the abnormal state is determined as the target queue.

The working state of the running queue includes a normal state and an abnormal state. When the working state of a running queue is the normal state, the running queue performs data processing normally; when the working state is the abnormal state, the running queue cannot perform normal data processing, its messages cannot be transmitted to the central processing unit, and the packet loss rate of its messages keeps increasing.

In the embodiment of the application, the working state of the running queue in the intelligent network card is detected in real time, and when it is detected that the running queue cannot transmit the message to the corresponding central processing unit and the packet loss rate of the message submitted by the running queue is continuously increased, the working state of the running queue is determined to be an abnormal state.

In some embodiments, the step of obtaining the work state of the run queue and determining the run queue with the work state in the abnormal state as the target queue may include:

(1) detecting whether the statistics of the application layer received messages is continuously zero within a first preset time;

(2) when detecting that the statistics of the messages received by the application layer is continuously zero in a first preset time, detecting whether the packet loss statistics of the messages submitted by the running queue is continuously increased in a second preset time;

(3) when packet loss statistics of the submitted messages of the running queue is continuously increased within second preset time, the working state of the running queue is judged to be an abnormal state, and the running queue is determined to be a target queue.

According to the method, two checking conditions are set. The first checking condition is whether the statistics of the messages received by the application layer remain zero within a first preset time. In practical use, when a running queue is in an abnormal state, the intelligent network card cannot transmit received messages to the corresponding central processing unit through the running queue, and therefore cannot further transmit them to the upper application layer. The first preset time is set to avoid misjudging a situation in which the running queue is only briefly blocked. Therefore, whether the statistics of the messages received by the application layer remain zero within the first preset time is detected first, and when the statistics remain zero within the first preset time, the second checking condition is triggered.

The second checking condition is to further detect whether the packet loss statistics of the messages submitted by the running queue keep increasing within a second preset time. When the running queue is in an abnormal state, besides being unable to transmit messages to the corresponding central processing unit, the running queue keeps increasing the packet loss statistics of the submitted messages because messages keep arriving. The second preset time is set to further avoid misjudging a temporary packet loss of the running queue. When it is further detected that the packet loss statistics of the messages submitted by the running queue keep increasing within the second preset time, the working state of the running queue is judged to be an abnormal state, and the running queue is determined as the target queue.

In some embodiments, the step of determining the working state of the run queue as an abnormal state may include:

(1.1) performing exception accumulation on the running queue;

and (1.2) when the abnormal accumulated value reaches a preset threshold value, judging the working state of the running queue to be an abnormal state.

In order to further increase the accuracy of detecting the abnormal condition of the running queue, when it is detected that the statistics of the messages received by the application layer remain zero within the first preset time and that the packet loss statistics of the messages submitted by the running queue keep increasing within the second preset time, the working state of the running queue is not immediately judged to be abnormal. Instead, exception accumulation is performed on the running queue, and a corresponding preset threshold is set; the preset threshold is a critical value for deciding whether the working state of the running queue is abnormal. When the abnormal accumulated value of the running queue reaches the preset threshold, that is, the abnormal state keeps occurring, the working state of the running queue is judged to be abnormal; when the abnormal accumulated value does not reach the preset threshold, the process returns to the step of detecting whether the statistics of the messages received by the application layer remain zero within the first preset time. In this way, the probability of false triggering caused by glitch signals is effectively eliminated through the preset threshold, and the robustness of the whole system is improved.

In some embodiments, when it is detected that the statistics of the messages received by the application layer are not continuously zero within the first preset time, the abnormal accumulated value is cleared; or, when it is detected that the packet loss statistics of the messages submitted by the running queue do not keep increasing within the second preset time, the abnormal accumulated value is cleared.

When the statistics of the messages received by the application layer are not continuously zero within the first preset time, it indicates that the running state of the running queue has recovered to normal, and the abnormal accumulated value is cleared.
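
As an illustration of the two checking conditions and the exception accumulation described above, a minimal C sketch follows; the structure fields, the sampling interface and the threshold value of 3 are assumptions made for this example.

```c
#include <stdbool.h>

/* Per-queue health bookkeeping used by the watchdog-style check.
 * Field names are illustrative. */
typedef struct queue_health {
    unsigned long rx_to_app;     /* messages delivered to the application layer */
    unsigned long drop_count;    /* packet loss statistics of the running queue */
    unsigned int  abnormal_acc;  /* exception accumulation value                */
} queue_health_t;

#define ABNORMAL_THRESHOLD 3u    /* assumed preset threshold */

/* Called once per sampling round; the caller evaluates condition 1 over the
 * first preset time and condition 2 over the second preset time. Returns
 * true when the queue should be treated as a target queue in an abnormal
 * state. */
static bool check_run_queue(queue_health_t *h,
                            bool rx_stayed_zero_in_window1,
                            bool drops_kept_rising_in_window2)
{
    if (!rx_stayed_zero_in_window1 || !drops_kept_rising_in_window2) {
        h->abnormal_acc = 0;     /* either condition fails: clear accumulation */
        return false;
    }
    h->abnormal_acc++;           /* both conditions hold: accumulate exception */
    return h->abnormal_acc >= ABNORMAL_THRESHOLD;
}
```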

In step 103, transmission configuration information associated with the target queue is determined.

The transmission configuration information may be a communication protocol between the target queue and the corresponding central processing unit and a rule for message distribution, and the target queue may distribute the corresponding message according to the transmission configuration information, and directionally transmit the distributed message to the target central processing unit according to the target central processing unit specified by the transmission configuration information.

The embodiment of the application determines the transmission configuration information associated with the target queue that is in the abnormal state, so that the queue can subsequently be replaced by means of the transmission configuration information.

In an embodiment, since the intelligent network card has multiple technologies for distributing messages to different queues and further transmitting them to the corresponding central processing units, for example, receive side scaling (RSS) and Flow Director, the transmission configuration information also includes multiple types. The embodiment of the application takes the receive side scaling technology as an example, specifically as follows:

in some embodiments, the step of determining the transmission configuration information associated with the target queue may include: and acquiring hash configuration information associated with the target queue expanded according to the receiving party, wherein the hash configuration information comprises an association relation between the target hash value and the target queue.

Referring to fig. 2b, Microsoft's receive side scaling technology may determine corresponding key fields according to the packet type of a data packet, and then calculate a hash value over the key fields through a hash function, where the hash function is generally the Microsoft Toeplitz-based hash or a symmetric hash. Each hash value corresponds to a different running queue, so packets are uniformly distributed to multiple running queues, and each running queue corresponds to a different central processor. In this way, different flows can be distributed to different central processors to achieve load balancing, which helps improve locality of reference and cache consistency.

Based on this, when the target queue is a queue configured by the receive side scaling technology, the receive side scaling hash configuration information associated with the target queue is acquired, and the hash configuration information includes the association relation between the target hash value and the target queue.
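
For illustration, a minimal C sketch of an RSS-style hash-to-queue association follows; a redirection table of 128 buckets and the names rss_config_t and find_buckets_of_queue are assumptions, not details from the original disclosure.

```c
#define RETA_SIZE 128            /* assumed redirection table size */

/* RSS-style hash configuration: each hash bucket is associated with a
 * running queue. The NIC hashes the packet key fields and uses
 * (hash % RETA_SIZE) to select a bucket, hence a queue. */
typedef struct rss_config {
    int reta[RETA_SIZE];         /* bucket index -> queue id */
} rss_config_t;

/* Collect the hash buckets currently associated with the target queue;
 * these buckets carry the target-hash-value-to-target-queue relation. */
static int find_buckets_of_queue(const rss_config_t *cfg, int target_queue,
                                 int *buckets, int max_buckets)
{
    int n = 0;
    for (int i = 0; i < RETA_SIZE && n < max_buckets; i++)
        if (cfg->reta[i] == target_queue)
            buckets[n++] = i;
    return n;
}
```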

In step 104, the standby queue is associated with the transmission configuration information, so that the standby queue replaces the target queue for data processing.

In an embodiment, the method of selecting the corresponding standby queue from the standby queue resource pool may be random selection, or a specific standby queue may be selected from the standby queue resource pool in advance, and the specific standby queue and the target queue are bound into a group, so that when the corresponding standby queue is selected, the standby queue in the group with the target queue may be selected.

Furthermore, after the corresponding standby queue is selected, the standby queue can be associated with the transmission configuration information associated with the target queue, so that the data flow is changed from flowing into the target queue to flowing into the corresponding standby queue. Since no resource reconstruction is involved, the whole operation can be controlled at the millisecond level, which greatly improves the data processing efficiency.
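
Continuing the pool sketch given earlier, and purely as an illustration, a standby queue might be taken from the pool as follows; picking the last element is an arbitrary choice consistent with the random or pre-selected options described above.

```c
/* Take one standby queue out of the pool. Returns 0 on success and -1
 * when the pool is exhausted. Any selection policy (random, or the queue
 * pre-bound to the failed target queue's group) could be used here. */
static int standby_pool_take(standby_pool_t *pool, queue_t *out)
{
    if (pool->count == 0)
        return -1;
    *out = pool->queues[--pool->count];
    return 0;
}
```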

In some embodiments, the step of associating the standby queue with the transmission configuration information so that the standby queue replaces the target queue for data processing may include:

(1) switching the target queue to an inactive state;

(2) and associating the standby queue with the transmission configuration information instead of the target queue so that the standby queue replaces the target queue for data processing.

Because the working state of the target queue is an abnormal state, that is, the target queue cannot perform normal data processing, the target queue may be switched to an inactive state (inactive), that is, the target queue is closed, and a corresponding standby queue is associated with transmission configuration information instead of the target queue, so that a data stream originally flowing into the target queue is changed to flow into the standby queue, and an effect of performing data processing by replacing the target queue with the standby queue is achieved.

In some embodiments, the associating the standby queue with the transmission configuration information in place of the target queue, such that the standby queue performs data processing in place of the target queue, may include:

(1.1) deleting the association relation between the target hash value and the target queue;

and (1.2) establishing an association relation between the target hash value and the standby queue, so that the standby queue replaces the target queue to work.

When the target queue is configured by the receive side scaling technology, the hash configuration information is acquired, the association relation between the target hash value and the target queue in the abnormal working state is deleted, and an association relation between the target hash value and the standby queue is established.
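
Continuing the RSS sketch above, and again only as an illustration, replacing the association could look like the following: buckets that pointed at the abnormal target queue are rewritten to point at the standby queue, so new traffic is steered to the standby queue without waiting for the target queue to be reset.

```c
/* Delete the target-hash-to-target-queue association and establish the
 * target-hash-to-standby-queue association in one pass over the table. */
static void rss_replace_queue(rss_config_t *cfg, int target_queue,
                              int standby_queue)
{
    for (int i = 0; i < RETA_SIZE; i++)
        if (cfg->reta[i] == target_queue)   /* old association deleted ...   */
            cfg->reta[i] = standby_queue;   /* ... new association installed */
}
```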

As can be seen from the above, in the embodiment of the present application, a standby queue resource pool is generated, and the standby queue resource pool includes a standby queue; the working state of a running queue is acquired, and a running queue whose working state is an abnormal state is determined as a target queue; transmission configuration information associated with the target queue is determined; and the standby queue is associated with the transmission configuration information, so that the standby queue replaces the target queue to perform data processing. In this way, the standby queue is generated in advance; when a running queue enters an abnormal state, it is determined as the target queue and the standby queue is associated with the transmission configuration information of the target queue, so that the standby queue can quickly replace the target queue in the abnormal state to perform data processing. There is no need to wait for the target queue in the abnormal state to be reset, long-time communication interruption is avoided, and the data processing efficiency is greatly improved.

The second embodiment,

The method described in the first embodiment is further illustrated by way of example.

In the present embodiment, the data processing apparatus is specifically integrated in a server, and the server is a cloud host.

Referring to fig. 3, fig. 3 is another schematic flow chart of a data processing method according to an embodiment of the present disclosure.

The method flow can comprise the following steps:

In step 201, the server obtains the total resource amount of the running queue and the current data transceiving stability evaluation value, determines a corresponding proportional value according to the data transceiving stability evaluation value, and generates the resource amount of the standby queue resource pool based on the total resource amount and the proportional value.

Referring to fig. 4a, when the server initializes the queues, all the queues are in a default state, that is, a deactivated state (inactive). The server may determine the total number of network card running queues required by the system and determine the corresponding total resource amount according to that total number, where the total number of running queues is proportional to the total resource amount. Meanwhile, the server acquires the current data transceiving stability evaluation value; the data transceiving stability evaluation value may be based on the running state of the running queues.

Further, a corresponding proportional value is determined according to the data transceiving stability evaluation value. The proportional value is a real number greater than 0 and smaller than 1; the larger the stability evaluation value is, the lower the corresponding proportional value is, and the smaller the evaluation value is, the higher the proportional value is, i.e. the two are inversely related. The product of the total resource amount and the proportional value is calculated, and the resource amount of the standby queue resource pool is generated according to the product, so that the more stable the data transceiving is, the smaller the resource amount of the standby queue resource pool is, and the worse the data transceiving is, the larger the resource amount of the standby queue resource pool is.

In step 202, the server generates a standby queue resource pool according to the resource amount, and applies for a corresponding amount of standby queues to perform activation processing.

The server generates a standby queue resource pool according to the resource amount and applies for standby queues corresponding to that resource amount: the larger the resource amount, the larger the number of standby queues, and the smaller the resource amount, the smaller the number of standby queues.

Referring to fig. 4a, the server initializes a number of network card queues greater than the system requirement, initializes the number of running queues required by the user through a queue mapping table, and generates corresponding queues in the Active state; the excess queues are initialized to an activated state and placed into the standby queue resource pool for later use.

Please refer to fig. 4b. In order for the intelligent network card to distribute messages to different queues and further transmit them to different central processing units, in the embodiment of the present application the Flow Director technology proposed by Intel Corporation may be used to implement the queue mapping table. A Flow Director table is stored in advance; the table includes a header and a target central processing unit, the header includes keyword information and a corresponding queue, and the target central processing unit indicates which processor the matched packet is delivered to. The size of the table is limited by hardware resources. The table records the keywords to be matched against packet fields and the actions to be taken after matching, and the driver is responsible for operating the table, including initializing it, adding table entries and deleting table entries. After the network card receives a data packet from the line, the Flow Director table is searched according to the keywords, and the packet is processed according to the action in the matched table entry, thereby accurately distributing packets to different running queues based on their fields.
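
As a purely illustrative sketch of such a queue mapping table in C, the following assumes an integer keyword, a bounded entry array and an exact-match lookup; none of the names (fdir_entry_t, fdir_lookup) or the table size come from the original disclosure.

```c
/* One entry of an assumed Flow Director style queue mapping table: the
 * header (keyword plus the queue it maps to) and the target CPU that
 * services that queue. */
typedef struct fdir_entry {
    int key;          /* matched keyword information, e.g. 102       */
    int queue_id;     /* running queue the packet goes to, e.g. 4001 */
    int target_cpu;   /* CPU core that operates this queue, e.g. 2   */
    int in_use;
} fdir_entry_t;

#define FDIR_TABLE_SIZE 64   /* assumed limit imposed by hardware resources */

typedef struct fdir_table {
    fdir_entry_t entries[FDIR_TABLE_SIZE];
} fdir_table_t;

/* Exact-match lookup performed after a packet is received from the line. */
static const fdir_entry_t *fdir_lookup(const fdir_table_t *t, int key)
{
    for (int i = 0; i < FDIR_TABLE_SIZE; i++)
        if (t->entries[i].in_use && t->entries[i].key == key)
            return &t->entries[i];
    return 0;        /* no match: fall back to the default distribution */
}
```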

In step 203, the server detects whether the statistics of the application layer received messages are continuously zero within a first preset time.

The intelligent network card comprises a plurality of running queues, and in actual use a running queue is prone to hanging up, i.e. entering an abnormal state. When a running queue hangs up, it becomes blocked by data, and its messages cannot be transmitted to the corresponding central processing unit, that is, they cannot be delivered up to the application layer.

Therefore, the server may detect whether the statistics of the messages received by the application layer remain zero within the first preset time. When the server detects that the statistics remain continuously zero within the first preset time, step 204 is executed; when the server detects that the statistics of the application layer received messages are not continuously zero within the first preset time, step 206 is executed.

In step 204, the server detects whether the packet loss statistics of the message submitted by the running queue continuously increases within a second preset time.

When the running queue hangs up, it is blocked, and the server detects that the statistics of messages received by the application layer remain zero within the first preset time; at this point, because the intelligent network card is still receiving messages, the packet loss statistics of the messages submitted by the running queue keep increasing.

Therefore, when detecting that the statistics of the messages received by the application layer are continuously zero within the first preset time, the server further needs to detect whether the packet loss statistics of the messages submitted by the running queue keep increasing within the second preset time. When it detects that the packet loss statistics keep increasing within the second preset time, step 205 is executed; when it detects that the packet loss statistics do not keep increasing within the second preset time, step 206 is executed.

In step 205, the server performs exception accumulation on the run queue.

When the server detects that the running queue in the intelligent network card cannot upload the received messages to the corresponding central processing unit and the packet loss number of the messages submitted by the running queue is continuously increased, the server performs abnormal accumulation on the running queue.

In step 206, the server clears the abnormal cumulative value.

When the server detects that the statistics of the messages received by the application layer is not continuously zero within the first preset time, the running state of the running queue is recovered to be normal, and the abnormal accumulated value is cleared.

In step 207, the server detects whether the abnormal cumulative value reaches a preset threshold.

The server continuously counts the accumulated value of the running queue in the abnormal state. When the server detects that the abnormal accumulated value reaches the preset threshold, step 208 is executed; when the server detects that the abnormal cumulative value does not reach the preset threshold, the server returns to step 203 to continue detecting the abnormal state.

In step 208, the server determines the operating state of the run queue as an abnormal state and determines the run queue as a target queue.

When the server detects that the abnormal accumulated value of the running queue that remains in the abnormal state reaches the preset threshold, the server determines that the working state of the running queue is an abnormal state and determines the running queue as the target queue.

In step 209, the server obtains the queue mapping table for the target queue.

When the target queue is configured by the Flow Director technology, the queue mapping table corresponding to the target queue is acquired. The queue mapping table is as follows:

Header        Target central processing unit
102.4001      2
10.4000       4

TABLE 1

As shown in Table 1, the queue mapping table includes a header and a target central processor. The header includes keyword information and a corresponding queue; for example, "102.4001" indicates a keyword of 102 and a corresponding queue of 4001, and the target central processor column indicates that the header "102.4001" communicates with target central processor 2.

In step 210, the server switches the target queue to an inactive state, modifies the queue mapping table, deletes the mapping relationship between the target keyword information in the queue mapping table and the target queue, and establishes the mapping relationship between the target keyword information and the standby queue, so that the standby queue replaces the target queue to perform work.

Referring to fig. 4a, the server switches the target queue from the active state back to the inactive state and modifies the queue mapping table: the mapping relationship between the target keyword information and the target queue is deleted, i.e. the corresponding header is deleted, and a mapping relationship between the target keyword information and the standby queue is established, i.e. a header of the target keyword information and the standby queue is created. For example, when the target queue is "4001" and the standby queue is "4002", the mapping "102.4001" between the target keyword and the target queue is deleted, and the mapping "102.4002" between the target keyword information "102" and the standby queue "4002" is established. The data flow that originally flowed into the target queue "4001" through the keyword information "102" now flows into the standby queue "4002", so that the standby queue replaces the target queue to perform data processing.
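
Continuing the illustrative Flow Director sketch above, the "delete 102.4001, establish 102.4002" example could be expressed as follows; fdir_replace_queue is a hypothetical helper, not a name from the patent or from any real driver API.

```c
/* Rewrite the queue field of the matching header so that keyword `key`
 * now maps to the standby queue instead of the abnormal target queue;
 * the keyword and the target CPU stay unchanged. */
static int fdir_replace_queue(fdir_table_t *t, int key,
                              int target_queue, int standby_queue)
{
    for (int i = 0; i < FDIR_TABLE_SIZE; i++) {
        fdir_entry_t *e = &t->entries[i];
        if (e->in_use && e->key == key && e->queue_id == target_queue) {
            e->queue_id = standby_queue;   /* e.g. 102.4001 -> 102.4002 */
            return 0;
        }
    }
    return -1;                             /* no such mapping found */
}
```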

Referring to fig. 4c, after the intelligent network card receives messages, the messages enter different running queues through the queue mapping technique. The server determines, from the running queues, a target queue whose working state is abnormal, modifies the queue mapping table, deletes the mapping relationship between the target keyword information and the target queue in the queue mapping table, and establishes a mapping relationship between the target keyword information and the standby queue, so that the standby queue replaces the target queue to perform work.

In some embodiments, referring to fig. 4d, after the intelligent network card receives messages, the messages enter different running queue groups through the queue mapping technique, where each running queue group includes two queues, that is, a running queue 0 and a corresponding running queue 1 (the standby queue). The server determines, from the running queues, a target queue whose working state is abnormal, modifies the queue mapping table, deletes the mapping relationship between the target keyword information in the queue mapping table and the target queue, and establishes a mapping relationship between the target keyword information and the standby queue in the same group, so that the standby queue replaces the target queue to perform work.

In step 211, when the server detects that the system is in an idle state, the target queue is controlled to be reset, and the reset target queue is moved to the standby queue resource pool and changed into the standby queue.

Referring to fig. 4a, when the server detects that the system is in an idle state, the server controls the target queue that is already in an inactive state to be reset, moves the reset target queue to the standby queue resource pool, and changes the target queue into the standby queue for the next use, so that the running queue and the standby queue are always in a balanced state.
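
Continuing the pool sketch, and only as an illustration, returning a reset target queue to the standby queue resource pool might look like this; hw_queue_reset stands in for the driver's actual reset routine, which the patent does not detail.

```c
/* Reset the failed target queue while the system is idle and hand it back
 * to the standby pool, keeping running and standby queues in balance.
 * Returns 0 on success, -1 if the pool is already full. */
static int reclaim_target_queue(standby_pool_t *pool, queue_t *target,
                                void (*hw_queue_reset)(queue_t *))
{
    if (pool->count >= pool->capacity)
        return -1;
    hw_queue_reset(target);              /* reset outside the data path */
    target->active = 1;                  /* usable again as a standby queue */
    pool->queues[pool->count++] = *target;
    return 0;
}
```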

As can be seen from the above, in the embodiment of the present application, the server obtains the total resource amount of the running queues and the current data transceiving stability evaluation value to generate an appropriate resource amount for the standby queue resource pool, generates the standby queue resource pool according to that resource amount, and applies for a corresponding number of standby queues for activation. When it is detected that the abnormal accumulated value of a continuously abnormal running queue reaches the preset threshold, the working state of the running queue is determined to be abnormal, the running queue is determined as the target queue, the queue mapping table of the target queue is obtained, the target queue is switched to an inactive state, the queue mapping table is modified, the mapping relationship between the target keyword information in the queue mapping table and the target queue is deleted, and a mapping relationship between the target keyword information and the standby queue is established, so that the standby queue replaces the target queue to work. When it is detected that the system is in an idle state, the target queue is controlled to be reset, and the reset target queue is moved to the standby queue resource pool and changed into a standby queue. Therefore, the target queue in the abnormal state does not need to wait for resetting, long-time communication interruption is avoided, and the data processing efficiency is greatly improved.

Furthermore, the target queue is returned to the standby queue resource pool after being reset, and the data processing efficiency is further improved.

The third embodiment,

In order to better implement the data processing method provided by the embodiment of the present application, an embodiment of the present application further provides an apparatus based on the data processing method. The terms have the same meanings as in the data processing method above, and for implementation details reference may be made to the description in the method embodiments.

Referring to fig. 5a, fig. 5a is a schematic structural diagram of a data processing apparatus according to an embodiment of the present disclosure, where the data processing apparatus may include a generating unit 301, an obtaining unit 302, a determining unit 303, an associating unit 304, and the like.

A generating unit 301, configured to generate a standby queue resource pool, where the standby queue resource pool includes a standby queue.

When the cloud host initializes the running queues, the generating unit 301 generates a standby queue resource pool and applies for a number of queues greater than the system requirement. The standby queue resource pool may be a dedicated piece of memory space; the excess queues are placed into the standby queue resource pool, and a queue stored in the standby queue resource pool is called a standby queue. That is, the standby queue resource pool includes standby queues, and the number of standby queues is determined by the size of the storage space of the standby queue resource pool: the larger the storage space of the standby queue resource pool, the larger the number of standby queues, and the smaller the storage space, the smaller the number of standby queues.

In some embodiments, as shown in fig. 5b, the generating unit 301 includes:

a receiving subunit 3011, configured to receive a resource amount of the standby queue resource pool;

an application subunit 3012, configured to generate a standby queue resource pool according to the resource amount, and apply for a corresponding number of standby queues to perform activation processing.

In some embodiments, the receiving subunit 3011 is configured to obtain a total amount of resources of the running queues and a current data transceiving stability evaluation value; determine a corresponding proportional value according to the data transceiving stability evaluation value; and generate the resource amount of the standby queue resource pool based on the total resource amount and the proportion value.

An obtaining unit 302, configured to obtain a working state of the run queue, and determine the run queue with the working state in an abnormal state as a target queue.

The working state of the running queue includes a normal state and an abnormal state. When the working state of a running queue is the normal state, the running queue performs data processing normally; when the working state is the abnormal state, the running queue cannot perform normal data processing, its messages cannot be transmitted to the central processing unit, and the packet loss rate of its messages keeps increasing.

In this embodiment of the application, the obtaining unit 302 detects the working state of the running queue in the smart network card in real time, and determines the working state of the running queue as an abnormal state when it is detected that the running queue cannot transmit the message to the corresponding central processing unit and the packet loss rate of the message submitted by the running queue is continuously increased.

In some embodiments, as shown in fig. 5c, the obtaining unit 302 includes:

the first detecting

unit

3021 is configured to detect whether the statistics of the application layer received packets is continuously zero within a first preset time.

The second detecting unit 3022 is configured to detect whether the packet loss statistics of the packet submitted by the running queue continuously increases within a second preset time when it is detected that the statistics of the packet received by the application layer continues to be zero within the first preset time.

A clearing unit 3023, configured to clear the abnormal accumulated value when it is detected that the statistics of the application layer received packets do not continue to be zero within a first preset time; or when detecting that the packet loss statistics of the messages submitted by the running queue is not continuously increased within a second preset time, clearing the abnormal accumulated value.

The determining subunit 3024 is configured to, when it is detected that the packet loss statistics of the packet submitted by the running queue continues to increase within a second preset time, determine the working state of the running queue to be an abnormal state, and determine the running queue as a target queue.

In some embodiments, the determining subunit 3024 is configured to: perform exception accumulation on the running queue; and when the abnormal accumulated value reaches a preset threshold value, judge the working state of the running queue to be an abnormal state, and determine the running queue to be a target queue.

A determining unit 303, configured to determine transmission configuration information associated with the target queue.

The transmission configuration information may be a communication protocol between the target queue and the corresponding central processing unit and a rule for message distribution, and the target queue may distribute the corresponding message according to the transmission configuration information, and directionally transmit the distributed message to the target central processing unit according to the target central processing unit specified by the transmission configuration information.

The determining unit 303 determines the transmission configuration information associated with the target queue that has entered the abnormal state, so that queue replacement can subsequently be performed using this transmission configuration information.

In an embodiment, the smart network card supports several technologies for distributing messages to different queues and then delivering them to the corresponding central processing units, for example receive-side scaling (RSS) and flow director; accordingly, the transmission configuration information also comes in several types.

In some embodiments, the determining unit 303 is configured to obtain the hash configuration information associated with the target queue under receive-side scaling, where the hash configuration information includes the association between a target hash value and the target queue.

In some embodiments, the determining unit 303 is further configured to obtain the queue mapping table of the target queue, where the queue mapping table includes the mapping between target keyword information and the target queue.
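The two kinds of transmission configuration information can be pictured as small in-memory tables: an RSS-style indirection table that associates hash values with queues, and a flow-director-style rule that maps keyword information (here a simplified tuple) to a queue. The C sketch below only illustrates these associations; real smart network cards expose them through vendor- or driver-specific interfaces that the patent does not describe.

```c
#include <stdint.h>
#include <stdio.h>

#define RSS_TABLE_SIZE 128

/* Receive-side scaling: association between target hash values and queues. */
struct rss_config {
    uint16_t queue_of_hash[RSS_TABLE_SIZE];   /* hash % table size -> queue id */
};

/* Flow-director style queue mapping: target keyword information -> queue. */
struct flow_rule {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint16_t queue;            /* target queue for matching packets */
};

int main(void)
{
    struct rss_config rss;
    for (int i = 0; i < RSS_TABLE_SIZE; i++)
        rss.queue_of_hash[i] = (uint16_t)(i % 8);   /* spread buckets over 8 queues */

    struct flow_rule rule = { 0x0a000001, 0x0a000002, 4000, 80, 5 };

    uint32_t pkt_hash = 0x1234abcd;
    printf("RSS steers hash 0x%x to queue %u\n",
           (unsigned)pkt_hash,
           (unsigned)rss.queue_of_hash[pkt_hash % RSS_TABLE_SIZE]);
    printf("Flow rule steers the matching tuple to queue %u\n",
           (unsigned)rule.queue);
    return 0;
}
```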

An associating unit 304, configured to associate the standby queue with the transmission configuration information, so that the standby queue replaces the target queue for data processing.

In one embodiment, the associating unit 304 either selects a standby queue from the standby queue resource pool at random, or a specific standby queue is selected from the pool in advance and grouped with the target queue, so that when a standby queue is needed, the one grouped with the target queue is chosen.
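The following C sketch illustrates the two selection strategies just mentioned: a random pick from the standby queue resource pool, or a pre-arranged pairing between run queues and standby queues. The pool layout, the pregroup table, and the exhaustion handling are assumptions made for the example.

```c
#include <stdio.h>
#include <stdlib.h>

#define NUM_RUN_QUEUES 8

/* Hypothetical standby pool; -1 marks an already-consumed slot. */
static int standby_pool[] = { 100, 101, 102, 103 };
static const int pool_size = 4;
static int remaining = 4;

/* Optional pre-grouping: run queue i is paired with standby_pool[pregroup[i]];
 * pregroup[i] == -1 means "no fixed partner, pick at random". */
static int pregroup[NUM_RUN_QUEUES] = { 0, 1, 2, 3, -1, -1, -1, -1 };

static int pick_standby(int target_queue)
{
    if (remaining == 0)
        return -1;                            /* pool exhausted */
    int slot = pregroup[target_queue];
    if (slot < 0)
        slot = rand() % pool_size;            /* random selection from the pool */
    while (standby_pool[slot] < 0)            /* skip slots already used */
        slot = (slot + 1) % pool_size;
    int standby = standby_pool[slot];
    standby_pool[slot] = -1;                  /* remove it from the pool */
    remaining--;
    return standby;
}

int main(void)
{
    printf("target queue 2 -> standby queue %d\n", pick_standby(2));
    printf("target queue 6 -> standby queue %d\n", pick_standby(6));
    return 0;
}
```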

Further, after selecting the corresponding standby queue, the associating unit 304 may associate the standby queue with the transmission configuration information of the target queue, so that the data flow is redirected from the target queue to the corresponding standby queue. The whole switch requires no resource reloading and can be completed at the millisecond level, which greatly improves data processing efficiency.

In some embodiments, as shown in FIG. 5d, the associating unit 304 includes:

A switching subunit 3041, configured to switch the target queue to an inactive state.

A replacing subunit 3042, configured to associate the standby queue, instead of the target queue, with the transmission configuration information, so that the standby queue replaces the target queue for data processing.

In some embodiments, the replacing subunit 3042 is configured to delete the association between the target hash value and the target queue, and establish an association between the target hash value and the standby queue, so that the standby queue works in place of the target queue.

In some embodiments, the replacing subunit 3042 is configured to modify the queue mapping table by deleting the mapping between the target keyword information and the target queue, and establish a mapping between the target keyword information and the standby queue, so that the standby queue works in place of the target queue.
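Both replacement paths amount to the same edit: remove the target queue from the association and write the standby queue in its place. Below is a C sketch under the hypothetical table layouts used in the earlier example (an RSS indirection table and a small flow mapping table); a production driver would instead program the network card through its own configuration interface.

```c
#include <stdint.h>
#include <stdio.h>

#define RSS_TABLE_SIZE 128
#define FLOW_RULES     4

struct flow_entry { uint32_t key; uint16_t queue; };

static uint16_t         rss_table[RSS_TABLE_SIZE];
static struct flow_entry flow_map[FLOW_RULES];

/* Re-point every RSS bucket that referenced the target queue to the standby
 * queue: this deletes the target-hash/target-queue association and establishes
 * the target-hash/standby-queue association in one pass. */
static void rss_replace_queue(uint16_t target, uint16_t standby)
{
    for (int i = 0; i < RSS_TABLE_SIZE; i++)
        if (rss_table[i] == target)
            rss_table[i] = standby;
}

/* Modify the queue mapping table: entries whose keyword information pointed at
 * the target queue are remapped to the standby queue. */
static void flow_replace_queue(uint16_t target, uint16_t standby)
{
    for (int i = 0; i < FLOW_RULES; i++)
        if (flow_map[i].queue == target)
            flow_map[i].queue = standby;
}

int main(void)
{
    for (int i = 0; i < RSS_TABLE_SIZE; i++)
        rss_table[i] = (uint16_t)(i % 8);     /* queues 0..7 in service */
    flow_map[0].key   = 0xdeadbeef;
    flow_map[0].queue = 5;

    /* Queue 5 went abnormal; standby queue 100 takes over its traffic. */
    rss_replace_queue(5, 100);
    flow_replace_queue(5, 100);
    printf("flow 0xdeadbeef now goes to queue %u\n", (unsigned)flow_map[0].queue);
    return 0;
}
```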

In some embodiments, as shown in FIG. 5d, the data processing apparatus may further include:

A reset unit 305, configured to control the target queue to reset when it is detected that the system is in an idle state, move the reset target queue to the standby queue resource pool, and change it into a standby queue.
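A minimal C sketch of this reset-and-recycle step: when the system is idle, the abnormal target queue is reset and appended to the standby queue resource pool as an ordinary standby queue. The queue states, the idle check, and the pool representation are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical queue states; the patent only requires that a reset target
 * queue ends up back in the standby queue resource pool. */
enum queue_state { Q_RUNNING, Q_ABNORMAL, Q_STANDBY };

struct queue {
    int              id;
    enum queue_state state;
};

static bool system_is_idle(void)
{
    return true;   /* stand-in for a real idle check (e.g. low CPU or low traffic) */
}

/* Reset the abnormal target queue during idle time and move it into the
 * standby queue resource pool, where it becomes an ordinary standby queue. */
static void recycle_target_queue(struct queue *target,
                                 struct queue *pool[], int *pool_len)
{
    if (target->state != Q_ABNORMAL || !system_is_idle())
        return;
    /* "reset": a real driver would reinitialize descriptors, doorbells, etc. */
    target->state = Q_STANDBY;
    pool[(*pool_len)++] = target;
}

int main(void)
{
    struct queue q5 = { 5, Q_ABNORMAL };
    struct queue *pool[16];
    int pool_len = 0;

    recycle_target_queue(&q5, pool, &pool_len);
    printf("pool now holds %d queue(s); queue %d state=%d\n",
           pool_len, q5.id, (int)q5.state);
    return 0;
}
```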

For the specific implementation of each unit, refer to the foregoing embodiments; details are not repeated here.

As can be seen from the above, in the embodiment of the present application, the generating unit 301 generates the standby queue resource pool, which includes the standby queue; the obtaining unit 302 obtains the working state of the running queue and determines a running queue in an abnormal state as the target queue; the determining unit 303 determines the transmission configuration information associated with the target queue; and the associating unit 304 associates the standby queue with the transmission configuration information, so that the standby queue replaces the target queue for data processing. Because the standby queue is generated in advance, when a running queue enters an abnormal state it is determined as the target queue and the standby queue is associated with the target queue's transmission configuration information; the standby queue can thus quickly take over data processing from the abnormal target queue without waiting for it to be reset. This avoids long communication interruptions and greatly improves data processing efficiency.

Example 4

An embodiment of the present application further provides a server. FIG. 6 shows a schematic structural diagram of the server according to this embodiment. Specifically:

The server may be a cloud host, and may include components such as a processor 401 with one or more processing cores, a memory 402 with one or more computer-readable storage media, a power supply 403, and an input unit 404. Those skilled in the art will appreciate that the server structure shown in FIG. 6 does not constitute a limitation: the server may include more or fewer components than shown, combine certain components, or arrange the components differently. Wherein:

The processor 401 is the control center of the server. It connects the various parts of the entire server through various interfaces and lines, and performs the various functions of the server and processes data by running or executing the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402, thereby monitoring the server as a whole. Optionally, the processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 401.

The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area: the program storage area may store the operating system and the application programs required by at least one function (such as a sound playing function and an image playing function), and the data storage area may store data created during use of the server. Further, the memory 402 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.

The server further includes a power supply 403 for supplying power to each component. Preferably, the power supply 403 may be logically connected to the processor 401 through a power management system, so that charging, discharging, and power consumption management are implemented through the power management system. The power supply 403 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other such components.

The server may also include an input unit 404, which may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.

Although not shown, the server may further include a display unit and the like, which are not described in detail here. Specifically, in this embodiment, the processor 401 in the server loads the executable files corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application programs stored in the memory 402, thereby implementing the following functions:

generating a standby queue resource pool, where the standby queue resource pool includes a standby queue; obtaining the working state of a running queue, and determining a running queue whose working state is abnormal as the target queue; determining the transmission configuration information associated with the target queue; and associating the standby queue with the transmission configuration information, so that the standby queue replaces the target queue for data processing.

Each of the above embodiments has its own emphasis; for parts not described in detail in one embodiment, refer to the detailed description of the data processing method above, which is not repeated here.

As can be seen from the above, the server according to the embodiment of the present application may generate a standby queue resource pool that includes a standby queue; obtain the working state of a running queue and determine a running queue whose working state is abnormal as the target queue; determine the transmission configuration information associated with the target queue; and associate the standby queue with the transmission configuration information, so that the standby queue replaces the target queue for data processing. Because the standby queue is generated in advance, when a running queue enters an abnormal state it is determined as the target queue and the standby queue is associated with the target queue's transmission configuration information; the standby queue can thus quickly take over data processing from the abnormal target queue without waiting for it to be reset. This avoids long communication interruptions and greatly improves data processing efficiency.

Example 5

It will be understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be performed by instructions, or by related hardware controlled by the instructions, and the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.

To this end, embodiments of the present application provide a computer-readable storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps in any data processing method provided by the embodiments of the present application. For example, the instructions may perform the steps of:

generating a standby queue resource pool, where the standby queue resource pool includes a standby queue; obtaining the working state of a running queue, and determining a running queue whose working state is abnormal as the target queue; determining the transmission configuration information associated with the target queue; and associating the standby queue with the transmission configuration information, so that the standby queue replaces the target queue for data processing.

For the specific implementation of the above operations, refer to the foregoing embodiments; details are not repeated here.

The computer-readable storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.

Since the instructions stored in the computer-readable storage medium can perform the steps of any data processing method provided in the embodiments of the present application, they can achieve the beneficial effects of any such method; see the foregoing embodiments for details, which are not repeated here.

The data processing method, apparatus, and computer-readable storage medium provided in the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and its core ideas. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope according to the ideas of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (15)

1. A data processing method, comprising: generating a standby queue resource pool, the standby queue resource pool including a standby queue; obtaining a working state of a running queue, and determining a running queue whose working state is abnormal as a target queue; determining transmission configuration information associated with the target queue; and associating the standby queue with the transmission configuration information, so that the standby queue replaces the target queue for data processing.

2. The data processing method according to claim 1, wherein the step of associating the standby queue with the transmission configuration information so that the standby queue replaces the target queue for data processing comprises: switching the target queue to an inactive state; and associating the standby queue, instead of the target queue, with the transmission configuration information, so that the standby queue replaces the target queue for data processing.

3. The data processing method according to claim 2, wherein the step of determining the transmission configuration information associated with the target queue comprises: obtaining hash configuration information associated with the target queue under receive-side scaling, the hash configuration information including an association between a target hash value and the target queue; and the step of associating the standby queue, instead of the target queue, with the transmission configuration information so that the standby queue replaces the target queue for data processing comprises: deleting the association between the target hash value and the target queue; and establishing an association between the target hash value and the standby queue, so that the standby queue works in place of the target queue.

4. The data processing method according to claim 2, wherein the step of determining the transmission configuration information associated with the target queue comprises: obtaining a queue mapping table of the target queue, the queue mapping table including a mapping between target keyword information and the target queue; and the step of associating the standby queue, instead of the target queue, with the transmission configuration information so that the standby queue replaces the target queue for data processing comprises: modifying the queue mapping table to delete the mapping between the target keyword information and the target queue; and establishing a mapping between the target keyword information and the standby queue, so that the standby queue works in place of the target queue.

5. The data processing method according to any one of claims 2 to 4, wherein, after the step of associating the standby queue, instead of the target queue, with the transmission configuration information so that the standby queue replaces the target queue for data processing, the method further comprises: controlling the target queue to reset when it is detected that the system is in an idle state; and moving the reset target queue into the standby queue resource pool and changing it into a standby queue.

6. The data processing method according to any one of claims 1 to 4, wherein the step of obtaining the working state of the running queue and determining a running queue whose working state is abnormal as the target queue comprises: detecting whether the statistics of packets received by the application layer remain zero within a first preset time; when it is detected that the statistics of packets received by the application layer remain zero within the first preset time, detecting whether the packet loss statistics of packets submitted by the running queue keep increasing within a second preset time; and when it is detected that the packet loss statistics of packets submitted by the running queue keep increasing within the second preset time, determining the working state of the running queue as abnormal and determining the running queue as the target queue.

7. The data processing method according to claim 6, wherein the step of determining the working state of the running queue as abnormal comprises: performing abnormality accumulation on the running queue; and determining the working state of the running queue as abnormal when the abnormal accumulated value reaches a preset threshold.

8. The data processing method according to claim 7, wherein, before the step of determining the working state of the running queue as abnormal, the method further comprises: clearing the abnormal accumulated value when it is detected that the statistics of packets received by the application layer do not remain zero within the first preset time; or clearing the abnormal accumulated value when it is detected that the packet loss statistics of packets submitted by the running queue do not keep increasing within the second preset time.

9. The data processing method according to any one of claims 1 to 4, wherein the step of generating the standby queue resource pool comprises: receiving a resource amount of the standby queue resource pool; and generating the standby queue resource pool according to the resource amount, and applying for a corresponding number of standby queues for activation.

10. The data processing method according to claim 9, wherein the step of receiving the resource amount of the standby queue resource pool comprises: obtaining the total amount of resources of the running queues and a current data transceiving stability evaluation value; determining a corresponding proportion value according to the data transceiving stability evaluation value; and generating the resource amount of the standby queue resource pool based on the total amount of resources and the proportion value.

11. A data processing apparatus, comprising: a generating unit, configured to generate a standby queue resource pool, the standby queue resource pool including a standby queue; an obtaining unit, configured to obtain a working state of a running queue and determine a running queue whose working state is abnormal as a target queue; a determining unit, configured to determine transmission configuration information associated with the target queue; and an associating unit, configured to associate the standby queue with the transmission configuration information, so that the standby queue replaces the target queue for data processing.

12. The data processing apparatus according to claim 11, wherein the associating unit comprises: a switching subunit, configured to switch the target queue to an inactive state; and a replacing subunit, configured to associate the standby queue, instead of the target queue, with the transmission configuration information, so that the standby queue replaces the target queue for data processing.

13. The data processing apparatus according to claim 12, wherein the determining unit is configured to obtain hash configuration information associated with the target queue under receive-side scaling, the hash configuration information including an association between a target hash value and the target queue; and the replacing subunit is configured to: delete the association between the target hash value and the target queue; and establish an association between the target hash value and the standby queue, so that the standby queue works in place of the target queue.

14. The data processing apparatus according to claim 12, wherein the determining unit is configured to obtain a queue mapping table of the target queue, the queue mapping table including a mapping between target keyword information and the target queue; and the replacing subunit is configured to: modify the queue mapping table to delete the mapping between the target keyword information and the target queue; and establish a mapping between the target keyword information and the standby queue, so that the standby queue works in place of the target queue.

15. A computer-readable storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to perform the steps of the data processing method according to any one of claims 1 to 10.

CN201911071773.8A 2019-11-05 2019-11-05 A data processing method, device and computer-readable storage medium Active CN111190745B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911071773.8A CN111190745B (en) 2019-11-05 2019-11-05 A data processing method, device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911071773.8A CN111190745B (en) 2019-11-05 2019-11-05 A data processing method, device and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN111190745A true CN111190745A (en) 2020-05-22
CN111190745B CN111190745B (en) 2024-01-30

Family

ID=70709087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911071773.8A Active CN111190745B (en) 2019-11-05 2019-11-05 A data processing method, device and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN111190745B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106789152A (en) * 2016-11-17 2017-05-31 东软集团股份有限公司 Processor extended method and device based on many queue network interface cards
WO2019061647A1 (en) * 2017-09-26 2019-04-04 平安科技(深圳)有限公司 Queue message processing method and device, terminal device and medium
CN109976919A (en) * 2017-12-28 2019-07-05 北京京东尚科信息技术有限公司 A kind of transmission method and device of message request
CN109495540A (en) * 2018-10-15 2019-03-19 深圳市金证科技股份有限公司 A kind of method, apparatus of data processing, terminal device and storage medium
CN110290217A (en) * 2019-07-01 2019-09-27 腾讯科技(深圳)有限公司 Processing method and processing device, storage medium and the electronic device of request of data
CN110266551A (en) * 2019-07-29 2019-09-20 腾讯科技(深圳)有限公司 A kind of bandwidth prediction method, apparatus, equipment and storage medium

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113920717A (en) * 2020-07-08 2022-01-11 腾讯科技(深圳)有限公司 Information processing method and device, electronic equipment and storage medium
CN113920717B (en) * 2020-07-08 2023-08-22 腾讯科技(深圳)有限公司 Information processing method, device, electronic equipment and storage medium
CN112596884B (en) * 2020-12-26 2024-06-11 中国农业银行股份有限公司 Task adjusting method and device
CN112596884A (en) * 2020-12-26 2021-04-02 中国农业银行股份有限公司 Task adjusting method and device
CN112764896A (en) * 2020-12-31 2021-05-07 广州技象科技有限公司 Task scheduling method, device and system based on standby queue and storage medium
CN113300979A (en) * 2021-02-05 2021-08-24 阿里巴巴集团控股有限公司 Network card queue creating method and device under RDMA (remote direct memory Access) network
CN113300979B (en) * 2021-02-05 2024-09-17 阿里巴巴集团控股有限公司 Network card queue creation method and device under RDMA (remote direct memory Access) network
CN113301103B (en) * 2021-02-05 2024-03-12 阿里巴巴集团控股有限公司 Data processing system, method and device
CN113301103A (en) * 2021-02-05 2021-08-24 阿里巴巴集团控股有限公司 Data processing system, method and device
CN113810228A (en) * 2021-09-13 2021-12-17 中国人民银行清算总中心 Message queue channel resetting method and device
CN113810228B (en) * 2021-09-13 2024-07-05 中国人民银行清算总中心 Message queue channel resetting method and device
CN114257492A (en) * 2021-12-09 2022-03-29 北京天融信网络安全技术有限公司 Fault processing method and device of intelligent network card, computer equipment and medium
CN114257492B (en) * 2021-12-09 2023-11-28 北京天融信网络安全技术有限公司 Fault processing method and device for intelligent network card, computer equipment and medium
CN114640574A (en) * 2022-02-28 2022-06-17 天翼安全科技有限公司 Method and device for switching main equipment and standby equipment
CN114640574B (en) * 2022-02-28 2023-11-28 天翼安全科技有限公司 Main and standby equipment switching method and device
CN115086203A (en) * 2022-06-15 2022-09-20 中国工商银行股份有限公司 Data transmission method, data transmission device, electronic equipment and computer-readable storage medium
CN115086203B (en) * 2022-06-15 2024-03-08 中国工商银行股份有限公司 Data transmission method, device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN111190745B (en) 2024-01-30


Legal Events

Date Code Title Description
2020-05-22 PB01 Publication
2020-12-08 SE01 Entry into force of request for substantive examination
2024-01-30 GR01 Patent grant
2024-11-29 TG01 Patent term adjustment