
Service Offloading of Computationally Demanding Processes Based on Deep Reinforcement Learning

EasyChair Preprint no. 3056

8 pages · Date: March 27, 2020

Abstract

With the emergence of advanced vehicles equipped with Internet connectivity, computational and communication demands have increased drastically. Fog computing is a potential solution that adds offloading capacity at the edge of the network. The key offloading concerns are task granularity, the offloading decision time, and the power models that drive offloading decisions. In this paper we propose a fog-computing scheme to minimise the power consumption of vehicles and of the computational facilities in the Internet of Vehicles. In contrast to most existing work, we consider both delay-tolerant and delay-sensitive services in order to optimise service latency and revenue. Furthermore, we consider multiple priority levels to order the edge services for optimal service offloading. We formulate the proposed scheme mathematically. Based on the power model, an offloading decision model is proposed that dynamically determines whether a service invocation should be offloaded.
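The abstract does not include the formal model, but the decision it describes, offloading a service invocation only when the power model favours it and any latency requirement is still met, can be illustrated with a minimal Python sketch. All names and parameter values below (cpu_cycles, local_power_w, fog_queue_delay_s, etc.) are illustrative assumptions, not the paper's actual notation or results.

```python
# Hedged sketch of a power-model-based offloading rule: compare the energy of
# local execution against the energy of transmitting the task to a fog node,
# and, for delay-sensitive services, also check the latency requirement.

from dataclasses import dataclass

@dataclass
class ServiceInvocation:
    cpu_cycles: float        # computation demand of the task
    data_bits: float         # input data uploaded if the task is offloaded
    delay_sensitive: bool    # delay-sensitive vs. delay-tolerant service class

def local_energy(task, cpu_freq_hz=1e9, local_power_w=2.0):
    """Energy (J) to run the task on the vehicle's own processor."""
    return local_power_w * (task.cpu_cycles / cpu_freq_hz)

def offload_energy(task, uplink_bps=10e6, tx_power_w=0.5):
    """Energy (J) spent transmitting the task's input data to a fog node."""
    return tx_power_w * (task.data_bits / uplink_bps)

def should_offload(task, fog_queue_delay_s=0.02, uplink_bps=10e6,
                   cpu_freq_hz=1e9, deadline_s=0.1):
    """Offload when it saves energy and, for delay-sensitive services,
    the transmission plus fog queueing delay still meets the deadline."""
    saves_energy = offload_energy(task, uplink_bps) < local_energy(task, cpu_freq_hz)
    if not task.delay_sensitive:
        return saves_energy
    offload_latency = task.data_bits / uplink_bps + fog_queue_delay_s
    return saves_energy and offload_latency <= deadline_s

# Example: a delay-sensitive invocation with heavy computation but little data
task = ServiceInvocation(cpu_cycles=5e8, data_bits=2e5, delay_sensitive=True)
print(should_offload(task))  # True -> invoke the service on the fog node
```

In the paper's setting, a deep reinforcement learning agent would learn such a decision policy from the power and latency feedback rather than relying on a fixed threshold rule as in this sketch.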

Keyphrases: fog computing, offloading decision time, optimal service offloading

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:3056,
  author = {Prabha Bennet and Neil Sawant and Abhigyan Kirti and Ashutosh Saxena},
  title = {Service Offloading of Computationally Demanding Processes Based on Deep Reinforcement Learning},
  howpublished = {EasyChair Preprint no. 3056},
  year = {EasyChair, 2020}}