
CN119011504A - Calculation method based on parallel computing power dynamic allocation under cloud edge fusion - Google Patents


Info

Publication number
CN119011504A
Authority
CN
China
Prior art keywords
task
computing
allocation
parameters
nodes
Prior art date
2024-08-07
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411077152.1A
Other languages
Chinese (zh)
Inventor
吴巨爱
彭沛
谢东亮
郑宗强
薛峰
宋晓芳
蔡林君
张卉琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Nari Technology Co Ltd
State Grid Electric Power Research Institute
Original Assignee
Nanjing University of Posts and Telecommunications
Nari Technology Co Ltd
State Grid Electric Power Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2024-08-07
Filing date
2024-08-07
Publication date
2024-11-22
2024-08-07 Application filed by Nanjing University of Posts and Telecommunications, Nari Technology Co Ltd, and State Grid Electric Power Research Institute
2024-08-07 Priority to CN202411077152.1A
2024-11-22 Publication of CN119011504A
Status: Pending


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/70 Admission control; Resource allocation
    • H04L 47/76 Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
    • H04L 47/762 Admission control; Resource allocation using dynamic resource allocation triggered by the network
    • H04L 47/82 Miscellaneous aspects
    • H04L 47/828 Allocation of resources per group of connections, e.g. per group of users
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/10015 Access to distributed or replicated servers, e.g. using brokers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/18 Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract


The present invention belongs to the field of electronic information and discloses a computing method based on dynamic allocation of parallel computing power under cloud-edge fusion. In the pre-allocation stage, the system's computing nodes are evenly divided into two groups that use different allocation parameters; by collecting statistics over an observation period on the task inter-arrival time, the task processing time of each group of nodes, and the cloud computing processing time, the task completion time of each group under its allocation parameter is calculated and the parameters are dynamically optimized. In the steady operation stage, the optimal allocation parameter is used to advance the computation, while the task inter-arrival time, the per-group task processing time, and the cloud computing processing time continue to be monitored; if these variables change significantly, the task completion times under the current and adjacent allocation parameters are compared to decide whether to re-enter the pre-allocation stage and re-optimize the parameters. The invention dynamically allocates parallel computing power resources in a cloud-edge fusion computing system, is compatible with parallel computing algorithms, and greatly reduces the deduction time of the simulation platform.

Description

Calculation method based on parallel computing power dynamic allocation under cloud edge fusion

Technical Field

The invention belongs to the technical field of edge computing, and particularly relates to a computing method based on dynamic allocation of parallel computing power under cloud-edge fusion.

Background

An edge computing system is deployed close to device users, such as Internet of Things terminals, to provide computing services; compared with a cloud computing system, tasks in edge computing experience lower communication delay. However, an edge computing system has limited storage and computing resources and cannot serve all arriving tasks. Combining edge computing with cloud computing therefore allows more tasks to be served, improves the performance of the cloud-edge fusion computing system, and can effectively relieve the simulation bottleneck of power-system emergency-protection situation deduction involving massive scenarios.

Given the great differences between cloud computing and edge computing, simply fusing the two systems cannot fully exploit the performance of the fused system, so its computing resources must be allocated reasonably. Current resource allocation algorithms focus mainly on optimizing resource orchestration and task scheduling between the edge and the cloud; however, they do not consider the following problem: some computing nodes in the fused system face power management issues, software and hardware faults, computing-resource sharing, and similar conditions that reduce the computing performance of the system.

The parallel computing methods used in cloud computing are one way to address the above problems. A parallel computing algorithm assigns several computing nodes to each arriving task to compute simultaneously. Such an algorithm can pool scattered computing resources and, combined with techniques such as coding and replication, prevent slow stragglers from degrading the computing system, reduce task computing time, and improve the utilization of computing resources. However, introducing parallel computing into an edge computing system raises the following issues: the resource allocation algorithm must account for the objectively limited computing resources of the edge computing system; parallel computing algorithms in cloud computing mainly optimize the computing time and cost of each individual task and usually do not consider whether the introduced redundancy blocks more tasks, whereas a cloud-edge fusion computing system must be optimized as a whole; and edge computing environments often face large variations in task types, so the resource allocation algorithm must be able to adjust the allocation of computing resources dynamically.

Disclosure of Invention

To solve the above technical problems, the invention provides a computing method based on dynamic allocation of parallel computing power under cloud-edge fusion. It is applied to an edge computing system, where an algorithm allocates several computing nodes to each task for parallel computation and adaptively adjusts the resource allocation parameters.

The invention discloses a computing method based on dynamic allocation of parallel computing power under cloud-edge fusion, comprising the following steps:

Step 1, equally divide the computing nodes of the edge computing system into two groups, each group adopting a different allocation parameter;

Step 2, the edge computing system allocates computing nodes to arriving tasks, and calculates the average task arrival time, the average task processing time of each group of nodes, and the average cloud computing processing time over an observation period;

Step 3, based on the times obtained in step 2, calculate the task blocking probability of the edge computing system under each group's allocation parameter, and then the task completion time of each group under the corresponding allocation parameter;

Step 4, decide whether to end the pre-allocation stage by comparing whether the task completion time of each group decreases with the allocation parameter, and dynamically optimize the parameter configuration accordingly;

Step 5, adopting the optimal allocation parameter from the pre-allocation stage, calculate and update the average task arrival time, the average task processing time of the nodes, and the average cloud computing processing time over the observation period;

Step 6, based on the times obtained in step 5 and the historical averages of task processing time under different allocation parameters from step 3, calculate the task blocking probability of the edge computing system under the current and the adjacent allocation parameters, and the task completion time under each;

Step 7, decide whether to end the steady operation stage by comparing the task completion times under the current and the adjacent allocation parameters; if it ends, re-enter the pre-allocation stage to dynamically re-optimize the parameters; if not, the computing-resource allocation is complete.
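The seven steps form a two-phase control loop: a pre-allocation search followed by a steady-operation check. The sketch below is an illustration under stated assumptions, not the patent's implementation; `completion_time` is a hypothetical callback standing in for the measured task completion time T(m), and `m_max` is an assumed safety bound.

```python
def find_optimal_parameter(completion_time, m_max=10):
    """Pre-allocation search (steps 1-4): compare T(m1) against T(m2) with
    m2 = m1 + 1, stopping at the first m where increasing the allocation
    parameter no longer helps."""
    m1, m2 = 1, 2                     # step 1: two groups with parameters 1 and 2
    while m2 <= m_max:
        if completion_time(m1) <= completion_time(m2):
            return m1                 # step 4: stop, current optimal parameter found
        m1, m2 = m1 + 1, m2 + 1       # step 4: shift both groups' parameters up
    return m1

def steady_check(completion_time, mo):
    """Steady-operation check (steps 5-7): mo is kept while it is no worse
    than both neighbouring parameters; otherwise pre-allocation restarts."""
    return (completion_time(mo) <= completion_time(mo - 1)
            and completion_time(mo) <= completion_time(mo + 1))
```

For a completion time that is convex in m with a minimum at m = 3, the search returns 3 and the steady check then holds at that parameter.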

Further, in step 1, the computing nodes of the edge computing system are equally divided into two groups; each node in the first group processes a task on its own, i.e., the allocation parameter is m1 = 1, while the nodes of the second group process tasks in pairs, i.e., the allocation parameter is m2 = 2.

Further, in step 2, the edge computing system allocates computing nodes to arriving tasks, specifically:

computing nodes are randomly selected from the idle nodes of either group according to the group's allocation parameter; if there are no idle nodes, the task is sent to the cloud computing system for processing; if the number of idle nodes is smaller than the parameter m1 or m2, the task is likewise sent to the cloud computing system;

the observation period means that after the edge computing system processes every s tasks, the average task arrival time 1/μ, the average task processing time tj of each group of nodes, and the average cloud computing processing time tc are calculated, each as an arithmetic mean.
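The dispatch rule above can be sketched as follows; this is a minimal illustration only (the function name and the list-based pool of idle nodes are assumptions, not from the patent).

```python
import random

def assign_task(idle_nodes, m):
    """Randomly pick m idle nodes for an arriving task; return None to signal
    'send the task to the cloud' when fewer than m nodes are idle."""
    if len(idle_nodes) < m:
        return None                   # too few idle nodes: offload to the cloud
    chosen = random.sample(idle_nodes, m)
    for node in chosen:
        idle_nodes.remove(node)       # the chosen nodes become busy
    return chosen
```

This mirrors the pre-allocation rule that a task whose group has fewer idle nodes than its allocation parameter goes to the cloud (the steady-operation stage in step 5 handles that case differently).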

Further, in step 3, the computing system calculates the task blocking probability P(m) of the edge computing system under each group's allocation parameter, with the formula:

P(m) = [(μ·tj)^a / a!] / [Σ_{i=0}^{a} (μ·tj)^i / i!] (1)

wherein a = N/m is the number of tasks that the edge computing system can compute simultaneously; N is the total number of edge computing nodes; m is the generic symbol for the node-allocation parameter: m = m1 when calculating the task blocking probability of the first group and m = m2 for the second group; μ is the average task arrival rate, i.e., the inverse of the average task arrival time; i is a summation index taking integer values in [0, a];

the task completion time T(m) of each group under the corresponding allocation parameter is then calculated as:

T(m) = (1 − P(m))·tj + P(m)·tc (2).
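Read together with the definitions above (a = N/m simultaneous task slots, arrival rate μ, mean processing time tj, summation index i from 0 to a), formula (1) matches the Erlang-B loss probability with offered load μ·tj; that identification, and the weighting of the cloud term by the blocking probability, are reconstructions since the equation images are not reproduced in the text. A sketch:

```python
import math

def blocking_probability(mu, tj, N, m):
    """Formula (1), read as the Erlang-B loss formula with a = N/m slots and
    offered load rho = mu * tj (this identification is an assumption)."""
    a = N // m
    rho = mu * tj
    numerator = rho ** a / math.factorial(a)
    denominator = sum(rho ** i / math.factorial(i) for i in range(a + 1))
    return numerator / denominator

def completion_time(mu, tj, tc, N, m):
    """Formula (2): tasks served at the edge finish in tj on average; blocked
    tasks go to the cloud and take tc, so the cloud term is weighted by P(m)."""
    p = blocking_probability(mu, tj, N, m)
    return (1 - p) * tj + p * tc
```

With the Table 1 values (N = 20, μ = 7, tj = 1 s, tc = 10 s), m = 1 gives a near-zero blocking probability, so T(1) stays close to tj.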

Further, step 4 is specifically as follows:

if T(m1) ≤ T(m2), the pre-allocation stage ends and the current optimal allocation parameter is mo = m1, where T(m1) is the task completion time of the first group of computing nodes and T(m2) is that of the second group;

if T(m1) > T(m2), the pre-allocation stage continues: the allocation parameter of the first group is adjusted to m1 = m1 + 1 and that of the second group to m2 = m2 + 1.
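This comparison rule can be written out directly; the tuple-returning helper below is illustrative only.

```python
def preallocation_step(m1, m2, t1, t2):
    """Step-4 rule: stop with optimal parameter mo = m1 when T(m1) <= T(m2),
    otherwise advance both groups' allocation parameters by one."""
    if t1 <= t2:
        return ("stop", m1)
    return ("continue", (m1 + 1, m2 + 1))
```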

Further, in step 5, idle nodes are allocated to arriving tasks using the optimal allocation parameter mo obtained in step 4; if there are no idle nodes, the task is sent to the cloud computing system for processing; if the number of idle nodes is smaller than the optimal allocation parameter mo, the remaining idle nodes are all allocated to the same task, and that task is excluded from the subsequent time statistics;

in step 5, the average task arrival time 1/μ, the average task processing time tj of the nodes, and the average cloud computing processing time tc are calculated and updated by the same method as in step 2.

Further, in step 6, the historical data of the average task processing time refers to the average task processing times recorded with allocation parameters mo − 1 and mo + 1 before the pre-allocation stage ended.

Further, step 7 is specifically as follows:

if T(mo) ≤ T(mo − 1) and T(mo) ≤ T(mo + 1), the steady operation stage continues with the optimal allocation parameter mo, where T(mo) is the task completion time under the optimal allocation parameter;

otherwise, all computing nodes of the edge computing system are re-divided into two groups: if T(mo) > T(mo − 1), the first group takes parameter m1 = mo − 1 and the second group m2 = mo; if T(mo) > T(mo + 1), the first group takes m1 = mo and the second group m2 = mo + 1.
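The regrouping rule can be sketched in the same style (again an illustrative helper with assumed names, not the patent's code):

```python
def steady_step(mo, t_prev, t_cur, t_next):
    """Step-7 rule: keep mo while T(mo) is no worse than both neighbours;
    otherwise restart pre-allocation with the groups straddling the better
    side (t_prev = T(mo - 1), t_cur = T(mo), t_next = T(mo + 1))."""
    if t_cur <= t_prev and t_cur <= t_next:
        return ("steady", mo)
    if t_cur > t_prev:
        return ("preallocate", (mo - 1, mo))
    return ("preallocate", (mo, mo + 1))
```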

The beneficial effects of the invention are as follows: the method dynamically allocates parallel computing power resources in the cloud-edge fusion computing system and is compatible with parallel computing algorithms such as coded computing, accelerating the processing of each task. By dynamically adjusting computing resources, more computing tasks are executed in the edge computing system, avoiding the larger communication delay incurred by sending tasks to the cloud computing system and greatly reducing the deduction time of the simulation platform. By dynamically switching between the pre-allocation stage and the steady operation stage, the invention adapts the allocation parameters to changes in task type and task arrival frequency.

Drawings

FIG. 1 is a schematic illustration of the application of the method of the present invention;

FIG. 2 is a flow chart of the method of the present invention;

FIG. 3 is a schematic diagram of edge computation using a parallel computing algorithm;

FIG. 4 is a schematic diagram of simulation results.

Detailed Description

In order that the invention may be more readily understood, a more particular description of the invention will be rendered by reference to specific embodiments that are illustrated in the appended drawings.

With reference to FIG. 1, the computing method based on dynamic allocation of parallel computing power under cloud-edge fusion is applied to an edge computing system: several computing nodes are allocated by the algorithm to each task arriving at the edge computing system, related parallel computing techniques such as replication and coding are adopted, and if no idle node exists in the system, the task is sent to the cloud computing system for processing.

With reference to fig. 2, the computing method based on parallel computing power dynamic allocation under cloud edge fusion comprises the following steps:

Step 1, initialization: with reference to FIG. 3, the computing nodes of the edge computing system are equally divided into two groups; each node in the first group processes a task on its own, i.e., the allocation parameter is m1 = 1, while the nodes of the second group process tasks in pairs, i.e., the allocation parameter is m2 = 2 (the total number of edge computing nodes to which the algorithm applies must be greater than 3).

Step 2, computing resource allocation (pre-allocation): the edge computing system allocates nodes to arriving tasks by selecting randomly among the idle nodes of either group; if there are no idle nodes, the task is sent to the cloud computing system for processing, and likewise if the number of idle nodes is smaller than the parameter m1 or m2. Meanwhile, after every s tasks are processed, the edge computing system calculates the average task arrival time 1/μ, the average task processing time tj of each group of nodes, and the average cloud computing processing time tc, each as an arithmetic mean. In line with the simulation parameters of Table 1, tasks were set to arrive as a Poisson process with rate μ, task processing times followed an exponential distribution with rate parameter 1/tj, and the cloud computing processing time was set to the constant tc.
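The traffic model of this step (Poisson arrivals at rate μ, exponentially distributed processing times with mean tj) can be sampled with the standard library alone. The following is only a sanity check of the simulation inputs using the Table 1 values, not the patent's simulator:

```python
import random

random.seed(0)
mu, tj = 7.0, 1.0                    # Table 1 values: arrival rate 7/s, mean edge time 1 s

# Poisson arrivals <=> i.i.d. exponential inter-arrival gaps with mean 1/mu.
gaps = [random.expovariate(mu) for _ in range(10_000)]
# Processing times: exponential with rate parameter 1/tj, i.e. mean tj.
procs = [random.expovariate(1 / tj) for _ in range(10_000)]

mean_gap = sum(gaps) / len(gaps)     # should be close to 1/7 s
mean_proc = sum(procs) / len(procs)  # should be close to 1 s
```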

Step 3, task completion time calculation (pre-allocation): using the times calculated in step 2, the computing system calculates the task blocking probability P(m) of the edge computing system under each group's allocation parameter, with the formula:

P(m) = [(μ·tj)^a / a!] / [Σ_{i=0}^{a} (μ·tj)^i / i!] (1)

where a = N/m is the number of tasks the edge computing system can compute simultaneously, N is the total number of edge computing nodes, and m is the generic symbol for the node-allocation parameter (m = m1 when calculating the task blocking probability of the first group, m = m2 for the second group). The task completion time T(m) of each group under the corresponding allocation parameter is then:

T(m) = (1 − P(m))·tj + P(m)·tc (2)

Step 4, judging the end of the pre-allocation stage: whether to end the pre-allocation stage is decided by comparing whether the task completion time of each group decreases with the allocation parameter, and the parameters are dynamically optimized accordingly. If T(m1) ≤ T(m2), the pre-allocation stage ends and the current optimal allocation parameter is mo = m1; if T(m1) > T(m2), the pre-allocation stage continues, the allocation parameter of the first group is adjusted to m1 = m1 + 1, and that of the second group to m2 = m2 + 1.

Step 5, computing resource allocation (steady operation): the system merges the computing power resources into one group and allocates idle nodes to arriving tasks using the optimal allocation parameter mo from the pre-allocation stage; if there are no idle nodes, the task is sent to the cloud computing system for processing; if the number of idle nodes is smaller than the optimal allocation parameter mo, the remaining idle nodes are all allocated to the same task, and that task is excluded from the subsequent time statistics. Meanwhile, after every n tasks are processed, the system calculates the average task arrival time 1/μ, the average task processing time tj of the nodes, and the average cloud computing processing time tc, each as an arithmetic mean.

Step 6, task completion time calculation (steady operation): from the times calculated in step 5 and the most recent historical statistics of the average task processing time under allocation parameters mo − 1 and mo + 1, the task blocking probability of the edge computing system under the current and the adjacent allocation parameters (mo − 1 and mo + 1) is calculated by formulas (1) and (2), and the task completion time under each allocation parameter is obtained.

Step 7, judging the end of the steady operation stage: whether to end the steady operation stage and re-optimize the parameters is decided by comparing the task completion times under the current and the adjacent allocation parameters. If T(mo) ≤ T(mo − 1) and T(mo) ≤ T(mo + 1), the steady operation stage continues with the current allocation parameter mo; otherwise, all computing nodes of the edge computing system are re-divided into two groups: if T(mo) > T(mo − 1), the first group takes parameter m1 = mo − 1 and the second group m2 = mo; if T(mo) > T(mo + 1), the first group takes m1 = mo and the second group m2 = mo + 1.

The computing method based on dynamic allocation of parallel computing power under cloud-edge fusion is applied to an edge computing system whose simulation parameter settings are shown in Table 1. The simulation results of FIG. 4 verify that the task completion time is significantly smaller with the algorithm of the present invention than without it; in particular, when the task arrival rate is small, the algorithm improves the performance of the edge computing system. Table 2 demonstrates the effectiveness of the dynamic computing-resource allocation, where the edge computing system changes the task type or size every 500 tasks, causing the average task processing time tj to change.

Table 1 simulation example parameter settings

Parameter name | Parameter value
Total number of edge computing nodes N | 20
Average task arrival time 1/μ | 1/7 s
Average task processing time tj | 1 s
Average cloud computing processing time tc | 10 s
Number of simulated task arrivals | 500
Number of tasks s for averaging in the pre-allocation stage | 10
Number of tasks n for averaging in the steady operation stage | 20
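Plugging the Table 1 values into the blocking formula (under the Erlang-B reading of formula (1), which is an assumption since the equation image is missing) shows how candidate parameters compare: with N = 20 nodes, raising m shrinks the number of simultaneous task slots a = N/m and raises blocking.

```python
def erlang_b(a, rho):
    """Numerically stable recursion for the Erlang-B blocking probability
    with a servers and offered load rho."""
    b = 1.0
    for k in range(1, a + 1):
        b = rho * b / (k + rho * b)
    return b

N, mu, tj = 20, 7.0, 1.0             # Table 1 values
blocking = {m: erlang_b(N // m, mu * tj) for m in (1, 2, 4)}
```

Under this reading, m = 1 (a = 20 slots) blocks almost no tasks, while m = 4 (a = 5 slots) blocks a large fraction, which is why the method searches for the parameter that balances parallel speed-up against blocking.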

Table 2 algorithm dynamic allocation validity verification

Average task processing time tj (seconds) | 1 | 1.5 | 2 | 0.5 | 1.2
Task completion time with the algorithm of the present invention (seconds) | 0.341 | 0.795 | 1.717 | 0.225 | 0.563
Task completion time without the algorithm (seconds) | 1.042 | 1.596 | 2.228 | 0.516 | 1.169

In summary, the computing resource allocation algorithm can effectively reduce the task computing time of the edge computing system.

The foregoing is merely a preferred embodiment of the present invention, and is not intended to limit the present invention, and all equivalent variations using the description and drawings of the present invention are within the scope of the present invention.

Claims (8)

1. A computing method based on dynamic allocation of parallel computing power under cloud-edge fusion, characterized by comprising the following steps:

Step 1, equally dividing the computing nodes of an edge computing system into two groups, each group adopting a different allocation parameter;

Step 2, the edge computing system allocating computing nodes to arriving tasks, and calculating the average task arrival time, the average task processing time of each group of nodes, and the average cloud computing processing time over an observation period;

Step 3, based on the times obtained in step 2, calculating the task blocking probability of the edge computing system under each group's allocation parameter, and then the task completion time of each group under the corresponding allocation parameter;

Step 4, deciding whether to end the pre-allocation stage by comparing whether the task completion time of each group decreases with the allocation parameter, and dynamically optimizing the parameter configuration accordingly;

Step 5, adopting the optimal allocation parameter from the pre-allocation stage, calculating and updating the average task arrival time, the average task processing time of the nodes, and the average cloud computing processing time over the observation period;

Step 6, based on the times obtained in step 5 and the historical averages of task processing time under different allocation parameters from step 3, calculating the task blocking probability of the edge computing system under the current and the adjacent allocation parameters, and the task completion time under each;

Step 7, deciding whether to end the steady operation stage by comparing the task completion times under the current and the adjacent allocation parameters; if it ends, re-entering the pre-allocation stage to dynamically re-optimize the parameters; if not, completing the computing-resource allocation.

2. The computing method based on dynamic allocation of parallel computing power under cloud-edge fusion according to claim 1, wherein in step 1 the computing nodes of the edge computing system are equally divided into two groups; each node in the first group processes a task on its own, i.e., the allocation parameter is m1 = 1, while the nodes of the second group process tasks in pairs, i.e., the allocation parameter is m2 = 2.

3. The computing method based on dynamic allocation of parallel computing power under cloud-edge fusion according to claim 2, wherein in step 2 the edge computing system allocates computing nodes to arriving tasks, specifically:

computing nodes are randomly selected from the idle nodes of either group according to the group's allocation parameter; if there are no idle nodes, the task is sent to the cloud computing system for processing; if the number of idle nodes is smaller than the parameter m1 or m2, the task is likewise sent to the cloud computing system;

the observation period means that after the edge computing system processes every s tasks, the average task arrival time 1/μ, the average task processing time tj of each group of nodes, and the average cloud computing processing time tc are calculated, each as an arithmetic mean.

4. The computing method based on dynamic allocation of parallel computing power under cloud-edge fusion according to claim 3, wherein in step 3 the computing system calculates the task blocking probability P(m) of the edge computing system under each group's allocation parameter, with the formula:

P(m) = [(μ·tj)^a / a!] / [Σ_{i=0}^{a} (μ·tj)^i / i!] (1)

wherein a = N/m is the number of tasks that the edge computing system can compute simultaneously; N is the total number of edge computing nodes; m is the generic symbol for the node-allocation parameter: m = m1 when calculating the task blocking probability of the first group and m = m2 for the second group; μ is the average task arrival rate, i.e., the inverse of the average task arrival time; i is a summation index taking integer values in [0, a];

the task completion time T(m) of each group under the corresponding allocation parameter is then calculated as:

T(m) = (1 − P(m))·tj + P(m)·tc (2).

5. the computing method based on parallel computing power dynamic allocation under cloud edge fusion according to claim 4, wherein the step 4 is specifically:

If T (m 1)≤T(m2), ending the pre-allocation stage, wherein the current optimal allocation parameter is m o=m1; wherein T (m 1) is the task completion time of the first set of computing nodes, and T (m 2) is the task completion time of the second set of computing nodes;

If T(m_1) > T(m_2), continuing the pre-allocation stage, the allocation parameter of the first group being adjusted to m_1 = m_1 + 1 and the allocation parameter of the second group to m_2 = m_2 + 1.
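One comparison step of this pre-allocation search can be written as follows (the function name and return convention are illustrative; `T` stands for the measured task completion time under a given parameter):

```python
def pre_allocation_step(m1, m2, T):
    """One comparison step of the pre-allocation stage.

    m1, m2: current allocation parameters of the two node groups.
    T: callable mapping an allocation parameter to its task
       completion time, per formula (2).
    Returns (m_o, m1, m2): m_o is the optimal parameter when the stage
    ends, or None while it continues with both parameters incremented.
    """
    if T(m1) <= T(m2):
        # First group is no worse: stop, keep m1 as the optimum m_o.
        return m1, m1, m2
    # Second group is strictly better: shift both groups upward by one.
    return None, m1 + 1, m2 + 1
```

Repeatedly applying this step walks the parameter upward until the completion time stops improving, which is where the stage terminates.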

6. The computing method based on parallel computing power dynamic allocation under cloud edge fusion according to claim 4, wherein in step 5 idle nodes are allocated to each arriving task using the optimal allocation parameter m_o obtained in step 4; if no idle nodes exist, the task is sent to the cloud computing system for processing; if the number of idle nodes is smaller than the optimal allocation parameter m_o, the remaining idle nodes are all allocated to that one task, and the task does not participate in the statistics of the subsequent related times.
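The stable-stage dispatch differs from the pre-allocation dispatch in its partial-allocation edge case; a sketch, with illustrative names (random selection among idle nodes is carried over from claim 3 as an assumption):

```python
import random

def stable_dispatch(idle_nodes, m_o, send_to_cloud):
    """Stable-stage dispatch under the optimal parameter m_o.

    Returns (nodes, counted): the node ids allocated to the task
    (None if offloaded to the cloud) and whether the task participates
    in the subsequent timing statistics.
    """
    if not idle_nodes:
        send_to_cloud()              # no idle node: offload to the cloud
        return None, False
    if len(idle_nodes) < m_o:
        # Fewer idle nodes than m_o: give them all to this one task,
        # but exclude it from the later time statistics.
        chosen = list(idle_nodes)
        idle_nodes.clear()
        return chosen, False
    chosen = random.sample(idle_nodes, m_o)
    for node in chosen:
        idle_nodes.remove(node)
    return chosen, True
```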

7. The computing method based on parallel computing power dynamic allocation under cloud edge fusion according to claim 2, wherein in step 6 the historical data of average task processing time refers to the average task processing times recorded, before the end of the pre-allocation stage, under the allocation parameters m_o − 1 and m_o + 1 respectively.

8. The computing method based on parallel computing power dynamic allocation under cloud edge fusion according to claim 7, wherein step 7 specifically comprises:

If T(m_o) ≤ T(m_o − 1) and T(m_o) ≤ T(m_o + 1), continuing the stable operation stage with the optimal allocation parameter m_o, wherein T(m_o) is the task completion time under the optimal allocation parameter;

Otherwise, dividing all the computing nodes of the edge computing system into two groups again, wherein if T(m_o) > T(m_o − 1), the parameter of the first group is m_1 = m_o − 1 and the parameter of the second group is m_2 = m_o; and if T(m_o) > T(m_o + 1), the parameter of the first group is m_1 = m_o and the parameter of the second group is m_2 = m_o + 1.
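The periodic steady-state check and the fall-back into a new pre-allocation round can be sketched as follows (the function name and the string-tagged return value are illustrative):

```python
def steady_state_check(m_o, T):
    """Periodic check during the stable operation stage.

    T: callable mapping an allocation parameter to its task completion
    time; T(m_o - 1) and T(m_o + 1) come from the historical data kept
    from the pre-allocation stage.
    Returns ('stable', m_o) to keep running with m_o, or
    ('pre_allocation', (m1, m2)) with the new group parameters.
    """
    if T(m_o) <= T(m_o - 1) and T(m_o) <= T(m_o + 1):
        return 'stable', m_o          # m_o is still a local optimum
    if T(m_o) > T(m_o - 1):
        # A smaller parameter performed better: probe m_o - 1 vs m_o.
        return 'pre_allocation', (m_o - 1, m_o)
    # Otherwise T(m_o) > T(m_o + 1): probe m_o vs m_o + 1.
    return 'pre_allocation', (m_o, m_o + 1)
```

The check thus treats m_o as optimal only while it beats both neighbors, restarting the two-group search in whichever direction improved.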

CN202411077152.1A 2024-08-07 2024-08-07 Calculation method based on parallel computing power dynamic allocation under cloud edge fusion Pending CN119011504A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411077152.1A CN119011504A (en) 2024-08-07 2024-08-07 Calculation method based on parallel computing power dynamic allocation under cloud edge fusion


Publications (1)

Publication Number Publication Date
CN119011504A true CN119011504A (en) 2024-11-22

Family

ID=93486055

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411077152.1A Pending CN119011504A (en) 2024-08-07 2024-08-07 Calculation method based on parallel computing power dynamic allocation under cloud edge fusion

Country Status (1)

Country Link
CN (1) CN119011504A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119364434A (en) * 2024-12-26 2025-01-24 南京邮电大学 A computational method for unloading blocked tasks based on user retransmission mechanism in cloud-edge fusion



Legal Events

Date Code Title Description
2024-11-22 PB01 Publication
2024-12-10 SE01 Entry into force of request for substantive examination