Abstract

Large-scale IoT applications based on machine learning (ML) demand both edge and cloud processing for, respectively, AI inference and ML training tasks. Context-aware applications also need self-adaptive intelligence, which makes their architecture even more complex. Estimating the cost of operating such edge-to-cloud deployments is challenging. To this purpose, we propose a reference service-oriented, event-driven system architecture for IoT/edge applications comprising a minimal set of components mapped onto available cloud services. We then propose a resource consumption model for estimating the cost of deploying and running self-adaptive AI-assisted IoT applications on selected edge-to-cloud platforms. The model is evaluated in two scenarios: Road Traffic Management and Smart Grid. We finally provide estimates showing how the expenditure breakdown varies significantly depending on the adopted platform: storage costs are dominant in Road Traffic Management for all providers, whereas either messaging or edge management costs may dominate the Smart Grid scenario, and, surprisingly, computing costs are almost negligible in all cases.
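The expenditure breakdown described above can be sketched as a simple usage-times-unit-price aggregation. The sketch below is purely illustrative: the service categories mirror those named in the abstract (storage, messaging, edge management, computing), but all figures and unit prices are hypothetical and do not come from the paper's model or any provider's actual price list.

```python
from dataclasses import dataclass


@dataclass
class ServiceUsage:
    """Consumption of one billable cloud service over a billing period."""
    name: str          # e.g. "storage", "messaging", "edge management", "compute"
    units: float       # consumed units (GB stored, messages sent, devices, ...)
    unit_price: float  # hypothetical price per unit

    @property
    def cost(self) -> float:
        return self.units * self.unit_price


def expenditure_breakdown(usages: list[ServiceUsage]) -> dict[str, float]:
    """Return each service's fractional share of the total cost."""
    total = sum(u.cost for u in usages)
    return {u.name: u.cost / total for u in usages}


# Hypothetical figures mimicking the Road Traffic Management pattern,
# where storage dominates and compute is almost negligible.
usages = [
    ServiceUsage("storage", units=5000, unit_price=0.02),
    ServiceUsage("messaging", units=1_000_000, unit_price=0.00002),
    ServiceUsage("edge management", units=100, unit_price=0.10),
    ServiceUsage("compute", units=50, unit_price=0.05),
]
breakdown = expenditure_breakdown(usages)
```

With these made-up numbers, storage accounts for roughly 75% of the total while compute is under 2%, echoing the qualitative finding that the dominant cost category depends on the workload and platform rather than on raw computation.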
