Telemedicine applications depend on uninterrupted patient access, rapid feature updates, and fault-tolerant data handling across distributed environments. Traditional monolithic deployments struggle to provide elastic scalability, real-time reliability, and operational resilience during unpredictable spikes in remote healthcare demands.
This paper proposes a cloud-native DevOps architecture that combines microservices, container workloads, and Kubernetes orchestration to ensure reliable telemedicine service delivery. The approach aligns infrastructure automation, continuous delivery pipelines, and autoscaling policies to support dynamic patient traffic, resilient service communication, and secure data flows across modular components.
Reliability is achieved by combining horizontal scaling and service mesh routing with health-checking, rollback mechanisms, and distributed monitoring that treat patient-facing services as independently deployable units. The resulting design supports dependable consultations, responsive clinical workflows, and stable telehealth performance without interrupting active patient sessions.
By embracing DevOps automation and microservice decomposition, telemedicine systems can evolve continuously while maintaining consistent service quality under demanding operational conditions.
1: Introduction
Telemedicine platforms must deliver dependable patient services regardless of network volatility, traffic fluctuations, or rapid shifts in clinical demand. Remote consultations, electronic prescriptions, virtual triage, and diagnostics all rely on continuous availability of backend systems that handle protected patient interactions.
Unlike non-critical web applications, where delays primarily affect user experience, any interruption in communication or service processing has immediate consequences for patient care. Reliability in telehealth therefore becomes a fundamental design requirement, where system responsiveness, service continuity, and predictable performance are treated as clinical safeguards.
Traditional monolithic architectures cannot meet these reliability expectations as telemedicine usage scales. A single deployable unit forces all patient workflows into a shared computing environment, generating resource competition between unrelated functionalities such as video processing, medical record retrieval, and scheduling. If a single component fails or is updated, patient consultations may freeze, authentication systems may stall, or medical records may become temporarily unavailable.
Telemedicine reliability demands separation of responsibilities into independently deployable services that isolate failure boundaries and maintain continuity even when individual components are overloaded or unhealthy. Microservices address this fragmentation by decomposing telemedicine workflows into functional building blocks with distinct reliability behavior. Services such as session management, video communication, patient records, provider matching, and payment authorization operate as autonomous deployments.
This separation allows high-traffic components to scale without affecting others, and failure in one service no longer collapses the entire platform. Microservice decomposition also supports targeted rollbacks and granular upgrades during active clinical operations, improving reliability without restricting development speed.
Kubernetes operationalizes these benefits by coordinating microservice execution dynamically across distributed nodes. The platform continuously monitors application health, replaces failed containers, and distributes workloads based on real-time demand.
Autoscaling mechanisms respond to traffic surges, particularly during peak telehealth usage, by provisioning additional replicas to absorb load without compromising ongoing patient sessions. Likewise, rolling updates and controlled deployments minimize disruption when new features, clinical workflows, or bug fixes are introduced to the platform.
Network reliability is similarly reinforced through cloud-native orchestration. Kubernetes relies on service discovery, internal load balancing, and adaptive routing to sustain communication between healthcare services regardless of node health or infrastructure changes. This prevents patient video communication from failing due to database scaling operations or clinical record retrieval from slowing when additional appointment services are deployed.
Service mesh integration strengthens communication reliability further by enforcing secure channels and retry logic at the network layer, reducing application-level failure handling. Observability completes the reliability model by providing real-time visibility into degradations and failure trends. Distributed logging, telemetry, and health metrics allow telehealth operators to detect session drops, latency spikes, container crashes, and unexpected request failures as they occur.
This continuous awareness is essential for regulated healthcare environments, where reliability issues must be corrected before they affect active patient care. Monitoring integrated with deployment pipelines also enables proactive adjustments, where reliability data influences future scaling decisions, configuration changes, and architectural evolution.
By aligning microservice decomposition with Kubernetes orchestration and continuous operational measurement, cloud-native DevOps enables telemedicine platforms to achieve high reliability at scale. Reliability becomes a programmable capability that evolves with clinical demand, system load, and operational conditions. This architecture supports patient expectations for uninterrupted virtual care and prepares telemedicine platforms for rapid growth without compromising quality of service or clinical safety.
2: Cloud-Native Architecture for Reliable Telemedicine Services
Telemedicine workflows consist of interdependent operations that must remain available regardless of platform load, node failures, or software updates. The proposed architecture begins at the patient access layer, where devices connect through an API gateway responsible for authentication, routing, throttling, and protocol negotiation. This entry point isolates external traffic from internal services and maintains a stable request boundary even if downstream components change.
Once authenticated, requests are forwarded to a dedicated set of microservices deployed within a Kubernetes cluster, each responsible for a single clinical or operational function. The microservices are decomposed to reflect reliability domains within patient workflows.
Core components include a session service for managing consultation states, a video communication service, a medical record retrieval service, a scheduling service, and a billing and prescription service.
Each service is encapsulated as a containerized workload with isolated scaling policies and individual deployment schedules. This separation ensures that demand spikes in video sessions do not slow record retrieval or disrupt appointment scheduling, and failures are isolated to the affected functionality rather than consuming the entire platform.
Kubernetes ensures execution resilience by orchestrating these services across multiple worker nodes. ReplicaSets maintain the required number of service instances, while the Horizontal Pod Autoscaler (HPA) increases or decreases replicas based on CPU, memory, or custom telemetry metrics.
Liveness and readiness probes continually verify service health, removing unresponsive instances automatically and preventing failed services from receiving traffic. Failover at this level occurs at machine speed, reducing patient disruptions that would otherwise cause frozen video, incomplete prescription submissions, or stalled clinician requests.
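As a concrete illustration, the HPA's core scaling decision can be sketched in a few lines of Python. The replica bounds and metric values below are hypothetical, but the formula mirrors the documented Kubernetes behavior of desired = ceil(currentReplicas × currentMetric / targetMetric):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float, min_replicas: int = 1,
                     max_replicas: int = 20) -> int:
    """Mirror of the HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to the configured replica bounds."""
    raw = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, raw))

# A surge in average CPU from 50% to 160% against an 80% target
# doubles the replica count for a hypothetical video service.
print(desired_replicas(4, 160.0, 80.0))  # 8
```

Because the rule is proportional, a sustained surge scales capacity in step with demand, while the clamp prevents a metrics glitch from provisioning unbounded replicas.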
Secure and reliable communication between microservices is managed through an internal service mesh. The mesh enforces mutual TLS (mTLS) encryption between services, guaranteeing secure patient data traversal within the cluster.
It also provides traffic shaping, retry policies, and circuit breaking to ensure that transient network issues do not cascade into system-wide failures. As a result, communication reliability is enforced independently of application logic, reducing the engineering overhead on individual services while increasing global resiliency.
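The circuit-breaking behavior described above can be approximated with a small state machine. This Python sketch is illustrative (the threshold and reset timing are assumptions); in practice a mesh such as Istio or Linkerd enforces the equivalent logic inside its sidecar proxies, outside application code:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `threshold` consecutive failures
    the circuit opens and calls fail fast until `reset_after` seconds
    elapse, shielding a struggling service from cascading load."""
    def __init__(self, threshold: int = 3, reset_after: float = 30.0):
        self.threshold = threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.reset_after:
            self.opened_at = None      # half-open: permit a trial call
            self.failures = 0
            return True
        return False

    def record(self, success: bool) -> None:
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
```

A caller checks `allow()` before each downstream request and reports the outcome via `record()`; once the circuit opens, requests are rejected immediately instead of queuing behind a failing dependency.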
Persistent data created by the services is stored using cloud-backed and distributed storage layers that communicate with the cluster through secure service endpoints. Clinical data, session metadata, scheduling information, and billing records remain protected throughout their lifecycle.
The architecture prevents backend scaling operations from interrupting data access, as requests are served through the service mesh routing layer rather than bound to individual storage nodes. This design supports continuous patient consultations even when storage clusters are under maintenance or expansion.

The complete architecture is shown in Fig. 1, capturing both the technical control flow and the reliability infrastructure layered around microservices. The figure illustrates how the patient workflow is mapped to independently deployable components, each supported by Kubernetes policies and continuous observability.
Rather than depending on static pre-provisioned infrastructure, the platform achieves reliability dynamically through cloud-native orchestration and real-time decision-making within the cluster. Telemedicine reliability thus becomes an emergent property of the architecture rather than a discrete feature.
It is created through coordinated orchestration, network hardening, and modular decomposition rather than static hardware provisioning or manual scaling. This allows telemedicine platforms to scale safely with patient demand, adapt to partial failures, and update continuously without risking clinical disruption.
3: CI/CD Strategies for Reliable Telemedicine Deployments
Reliability in telemedicine is not only a function of how services are architected, but also of how changes are introduced into production. Continuous integration and continuous delivery pipelines determine whether software updates, configuration changes, and infrastructure modifications reach patient-facing environments without causing disruption.
For telemedicine platforms, deployment mistakes translate directly into care interruptions, missed appointments, or delayed access to clinical information. CI/CD strategies must therefore be designed to minimize risk, provide rapid rollback paths, and support concurrent deployments while active consultations are in progress.
A cloud-native DevOps approach introduces structured deployment strategies that align release behavior with reliability requirements. Rather than pushing updates directly into production, changes can be released gradually, validated under real traffic, and rolled back automatically when failure thresholds are exceeded.
Telemedicine deployments demand this level of control because patient session continuity must be preserved even when new features are introduced, security patches are applied, or infrastructure is reconfigured to accommodate growth.
Rolling updates form a foundational strategy within Kubernetes-based telemedicine platforms. In a rolling update, subsets of service instances are replaced incrementally while the remaining replicas continue handling patient traffic. This avoids complete service outages and allows changes to be validated progressively against live workloads. For components such as session management or scheduling services, rolling updates reduce the risk that all instances are simultaneously affected by a faulty release. However, this strategy assumes that the old and new versions of a service are compatible, which may not always hold for significant clinical workflow changes.
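The capacity guarantee behind a rolling update can be made concrete with a short simulation. The batch size plays the role Kubernetes assigns to its `maxUnavailable` setting; the replica counts are illustrative:

```python
def rolling_update(replicas: int, batch: int):
    """Replace `replicas` old pods in batches of `batch`, recording
    (old_running, new_running) after each step; at least
    replicas - batch pods are serving traffic at all times."""
    old, new = replicas, 0
    history = [(old, new)]
    while old > 0:
        step = min(batch, old)
        old -= step          # drain a batch of old pods
        new += step          # replacements pass readiness probes
        history.append((old, new))
    return history

# Six session-service replicas updated two at a time:
# serving capacity never drops below four pods.
print(rolling_update(6, 2))
# [(6, 0), (4, 2), (2, 4), (0, 6)]
```

The version-compatibility caveat noted above still applies: at intermediate steps both old and new pods answer live requests, so the two versions must interoperate.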
Blue-green deployments provide an alternative strategy where two full environments exist in parallel: one active (blue) and one idle but fully provisioned (green). New releases are deployed to the green environment and validated before any traffic is shifted from blue. If issues occur, traffic can be redirected back to the stable blue environment with minimal delay.
For telemedicine video services or payment gateways, this approach provides strong isolation between current and candidate releases, though it requires additional infrastructure capacity to maintain duplicate environments.
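A minimal sketch of the blue-green traffic switch, assuming a simple router abstraction mapping roles to environments; a real platform would instead flip a load-balancer target or a Kubernetes Service selector:

```python
def blue_green_switch(router: dict, candidate_healthy: bool) -> dict:
    """Shift all traffic to the green environment only if its health
    validation passed; otherwise keep blue active, so rollback is
    instantaneous (no redeployment, just routing)."""
    if candidate_healthy:
        return {**router, "active": "green", "standby": "blue"}
    return router

router = {"active": "blue", "standby": "green"}
print(blue_green_switch(router, candidate_healthy=True)["active"])   # green
print(blue_green_switch(router, candidate_healthy=False)["active"])  # blue
```

The switch is all-or-nothing, which is what distinguishes blue-green from the gradual traffic shifting of a canary release.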
Canary releases extend these concepts by routing a small, controlled percentage of real user traffic to a new version while the majority of sessions remain on the stable version. For a telemedicine platform, this might mean exposing a new consultation workflow or triage feature to a limited subset of users or regions.
Reliability is monitored closely for this small cohort; error rates, latency, and user session stability are evaluated against thresholds. If metrics remain favorable, more traffic is gradually shifted to the new version. If not, the canary is rolled back without impacting the majority of patients.
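One plausible shape for this promotion loop, sketched in Python. The 1% error budget and 10-point traffic steps are assumptions; in practice a progressive-delivery controller (e.g., Argo Rollouts or Flagger) drives the equivalent decision from live metrics:

```python
def advance_canary(weight: int, error_rate: float,
                   error_budget: float = 0.01, step: int = 10):
    """Grow the canary's traffic share by `step` percent while its
    observed error rate stays within budget; roll back to zero the
    moment the threshold is breached."""
    if error_rate > error_budget:
        return 0, "rolled_back"
    weight = min(100, weight + step)
    return weight, "promoted" if weight == 100 else "advancing"

print(advance_canary(10, 0.002))   # (20, 'advancing')
print(advance_canary(90, 0.002))   # (100, 'promoted')
print(advance_canary(40, 0.05))    # (0, 'rolled_back')
```

Because rollback returns the weight to zero rather than merely pausing, a misbehaving release never touches more than the cohort it was already serving.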
Table I contrasts these deployment strategies with respect to telemedicine-specific reliability concerns. It illustrates that no single strategy is universally superior; rather, each is appropriate for different types of change, levels of acceptable risk, and resource constraints in healthcare environments. In practice, a telemedicine platform benefits from combining these strategies rather than selecting a single universal approach.
Routine, low-risk updates may be delivered using rolling updates; substantial workflow changes may use blue-green deployments; and high-impact features affecting critical clinical interactions may require canary rollouts with strict reliability thresholds.
CI/CD pipelines orchestrate these strategies programmatically, integrating health checks, metric evaluation, and automated rollback into repeatable workflows. This layered deployment discipline ensures that telemedicine platforms can evolve quickly while preserving the reliability expectations that patients and clinicians depend on.
4: Observability and Reliability Metrics for Telemedicine Platforms
Cloud-native telemedicine systems require continuous measurement of session stability, latency, and error behavior to maintain reliable clinical interactions. Unlike general-purpose applications, a degraded telehealth service directly affects patient outcomes when video consultations fail, medical records load slowly, or prescription services stall during a clinical decision. Observability must therefore extend beyond infrastructure health and capture patient-experience metrics that reflect end-to-end workflows.
The goal is to detect reliability degradation early, correlate failures with their underlying components, and adapt system behavior through automated remediation and scaling policies. A comprehensive observability stack combines distributed tracing, log aggregation, and time-series monitoring across all microservices. Tracing allows clinical workflows to be followed across multiple services, revealing where latency accumulates during a consultation or when a database lookup delays record retrieval.
Centralized logging allows reliability analysts to identify repeated session drops or unexpected disconnections, particularly during high demand. Metrics provide quantitative feedback that can be examined in real time or used to trigger automatic responses when service-level thresholds are violated. Service Level Objectives (SLOs) form the backbone of reliability governance within telemedicine DevOps. SLOs define acceptable performance and fault tolerance based on patient expectations and clinical requirements.
For example, a video consultation service may require an initiation latency under a specific threshold, while medical record access may require consistent availability even during background data synchronization. These objectives ensure that observability mechanisms enforce quality-of-service targets that reflect patient care rather than generic uptime measurements. Error budgets play a complementary role by quantifying how much failure is tolerable within a defined period.
If a telemedicine video service consumes its error budget too quickly due to dropped sessions or unstable bitrates, deployment pipelines may automatically restrict further updates until reliability stabilizes. By tying deployment velocity to service performance, the platform prioritizes patient service continuity above feature delivery. This dynamic tension balances innovation with clinical stability. Reliability metrics must also reflect clinical workflow continuity.
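The error-budget arithmetic is simple enough to state directly. This sketch assumes a request-based SLO; a session-based budget takes the same form:

```python
def error_budget_remaining(slo: float, total_requests: int,
                           failed_requests: int) -> float:
    """Fraction of the error budget left in the current window.
    With a 99.9% success SLO, 0.1% of requests may fail; a negative
    result signals that deployments should be frozen."""
    budget = (1.0 - slo) * total_requests
    return (budget - failed_requests) / budget

# 1,000,000 consultation requests at a 99.9% SLO allow ~1,000 failures.
print(round(error_budget_remaining(0.999, 1_000_000, 250), 3))  # 0.75
print(error_budget_remaining(0.999, 1_000_000, 1200) < 0)       # True
```

A pipeline gate can compare this value against zero (or a conservative margin) before permitting the next rollout, which is how deployment velocity is mechanically tied to observed reliability.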
Session success rates track whether consultations are completed without interruption, while workflow completion metrics assess whether actions such as prescription issuance or medical document uploads finish successfully under load. These measurements allow microservices to be evaluated not only for technical correctness, but also for their contribution to uninterrupted patient care. Observability thus evolves from simple resource monitoring to safeguarding clinical workflows.
Table II summarizes reliability metrics tailored specifically for telemedicine microservices. These metrics align runtime monitoring with business-critical functions and provide practical thresholds that DevOps pipelines can enforce automatically. By enforcing observability-driven reliability controls, telemedicine platforms transform runtime monitoring into a regulatory and operational asset.
Metrics become actionable signals that govern scaling, deployment velocity, and failure remediation. The result is a telehealth environment where patient interactions remain dependable even as systems evolve, traffic surges, or services are continuously updated through cloud-native DevOps pipelines.
5: Case Study: Scaling Telehealth Workloads with Cloud-Native DevOps
A telemedicine provider experienced unreliable service behavior during sudden increases in appointment requests, particularly during evening hours and flu season peaks. The platform relied on a monolithic video and scheduling application hosted on a small virtual server cluster, where CPU saturation caused login delays, video failures, and stalled appointment confirmations.
Scaling required full node provisioning and periodic downtime, forcing administrators to over-provision resources for average demand while still failing to address peak loads. The operational model became unsustainable as patient adoption increased and remote care became a primary engagement channel.
To address these reliability issues, the provider transitioned to a microservices architecture orchestrated by Kubernetes. Patient authentication, scheduling, streaming, and prescription workflows were separated into independently deployable services. Each service gained dedicated scaling rules that reflected its runtime behavior and patient workflow sensitivity.
For example, the video streaming service was configured to scale aggressively based on network throughput and CPU contention, while the prescription service prioritized database performance and consistency checks rather than high concurrency. This separation allowed resources to be allocated to services in proportion to patient demand rather than shared across unrelated functionalities. Reliability increased significantly after implementing horizontal pod autoscaling (HPA).
Streaming service replicas increased automatically during periods of heavy load, particularly when physicians initiated back-to-back consultations. Liveness probes ensured failing instances were removed instantly, reducing session drops. Readiness probes restricted unstable pods from receiving traffic during startup, preserving session continuity. These capabilities reduced median session failures during peak usage and eliminated the need for manual scale-ups by the operations team.
The case study also introduced a deployment strategy aligned with active clinical operations. Canary releases were configured for workflow-sensitive capabilities such as video negotiation protocols and patient triage interfaces. Only a small fraction of users accessed new features during initial rollout, and failure thresholds blocked larger deployments if error rates rose.
Rolling updates were applied to low-risk services like appointment reminders, minimizing exposure to critical workflows. The CI/CD system became a gatekeeper for clinical reliability rather than simply a release mechanism.
Visibility into performance and user behavior further strengthened reliability. Observability tools recorded latency distribution, video bitrate stability, and workflow completion metrics across services. When historical data revealed bottlenecks in medical record access during streaming sessions, the architecture was updated to route record queries through a dedicated API with separate caching and scaling behavior. This modification eliminated request starvation caused by video-driven CPU consumption.
In effect, reliability improvements emerged iteratively from tracked clinical usage patterns rather than static planning. The final architecture used a service mesh to guarantee encrypted traffic and consistent retry logic between components. Sudden request surges no longer cascaded into systemic failures because traffic shaping and circuit breaking isolated malfunctioning microservices.
For example, failures in the external pharmacy API did not disrupt active consultations; degraded prescription calls were retried asynchronously without blocking session continuity. These protections made the system resilient not only to internal failures but also to unreliable third-party dependencies.

Fig. 2 illustrates how workload scaling and reliability controls are layered into the telemedicine workflow. The diagram visualizes how traffic enters through a gateway, flows through specialized micro services, and scales independently based on real-time demand. It also shows how orchestration and mesh policies operate alongside observability for continuous reliability enforcement.
This case study demonstrates that reliability can be engineered incrementally by decomposing workflows, enforcing autonomous scaling, and deploying changes under controlled observability. The cloud-native DevOps approach shifts reliability enforcement from manual operations to automated orchestration and traffic-aware decision making, enabling telemedicine platforms to scale quickly while maintaining dependable patient services.
6: Operational Cost and Performance Trade-Offs in Cloud-Native Telemedicine
Scaling telemedicine systems with cloud-native DevOps introduces a critical engineering challenge: balancing reliability with infrastructure cost. Although elastic cloud resources enable dependable performance during peak patient demand, excessive or poorly tuned scaling may generate operational expense without improving clinical experience.
Telemedicine traffic varies significantly by region, provider schedules, and medical urgency, making cost management and performance optimization inseparable components of reliability engineering. Autoscaling enables resilient patient video consultations and responsive triage services, but aggressive scaling triggers may allocate more compute than clinically necessary.
For example, scaling video services exclusively on network throughput could allocate new pods even when short burst traffic would naturally decline. Similarly, provisioning persistent replicas after transient peak loads increases cost without improving service continuity. Scaling rules must therefore incorporate service duration, patient behavior patterns, and clinician availability rather than relying solely on infrastructure signals.
Microservice decomposition also affects the cost-to-performance ratio. While separating clinical workflows into independent services enhances reliability, excessive decomposition increases logging, networking, and storage overhead. Each microservice requires metrics pipelines, sidecar proxies, and independent deployments. Services that do not directly impact patient interaction, such as receipt generation or optional notifications, can share scaling policies or operate as batch tasks.
Practical reliability engineering must focus on services where latency or failure directly disrupts consultations or clinical outcomes. Service prioritization determines how performance investments are distributed. Telemedicine reliability hinges on the responsiveness of core services like authentication, clinical data retrieval, and video streaming.
These components require capacity headroom and tighter scaling thresholds. Meanwhile, auxiliary services such as insurance verification or payment processing can tolerate latency without disrupting care. Applying equal optimization across all microservices wastes resources and increases complexity without improving patient experience.
Observability introduces additional cost considerations. High-resolution telemetry, detailed session traces, and constant bitrate monitoring offer valuable diagnostics but require sustained compute cycles, storage space, and network transfer. Instead of logging all data equally, platforms can adopt a tiered observability approach: critical session metrics receive granular analytics, while auxiliary services receive sampled logging. This approach preserves reliability insights without incurring unnecessary operational expense.
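A tiered sampling decision can be expressed in a few lines. The tier names and rates below are purely illustrative assumptions, not recommended values:

```python
import random

# Hypothetical tiers: critical patient-facing telemetry is always kept,
# support and auxiliary telemetry is sampled to bound storage cost.
SAMPLE_RATES = {"critical": 1.0, "support": 0.25, "auxiliary": 0.05}

def should_record(tier: str, rng: random.Random) -> bool:
    """Decide whether to record a telemetry event,
    using the per-tier sampling probability."""
    return rng.random() < SAMPLE_RATES[tier]

rng = random.Random(42)
kept = sum(should_record("auxiliary", rng) for _ in range(10_000))
# Roughly 5% of auxiliary events survive sampling, while every
# critical-session event is retained at full resolution.
```

The same idea generalizes to trace sampling: head-based sampling applies a rate like this at span creation, while tail-based sampling defers the decision until the trace's outcome is known.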
A cloud-native service mesh strengthens reliability through encrypted communication, retries, and circuit breaking, but it also increases CPU usage and network overhead due to sidecar proxies. Telemedicine platforms must assess which communication pathways justify strict mesh policies. Video negotiation and clinical data exchange require full mesh enforcement, while infrequent background jobs may rely on simpler security configurations.
This selective strategy protects patient privacy and continuity without overspending on infrastructure. Fig. 3 summarizes these trade-offs using a bar chart that compares performance priority and cost impact across different categories of telemedicine workloads. Core clinical services justify both high performance focus and higher cost tolerance, while support and auxiliary services can follow more conservative scaling and observability strategies.

The trade-off between operational cost and performance reliability must mirror clinical priorities rather than abstract optimization goals. Effective strategies deploy strong scaling and observability where patient safety is directly impacted, while using optimized, lower-cost policies for auxiliary workloads. When cloud-native DevOps decisions align with clinical importance, telemedicine platforms achieve sustainable reliability without overspending on infrastructure.
7: Predictive Scaling and AI-Assisted Reliability in Telemedicine Platforms
Traditional cloud-native auto scaling in telemedicine responds to resource pressure only when demand is already rising. Although this reactive model improves service availability during sudden usage spikes, it can still lead to temporary degradation while scaling takes effect.
For remote clinical workflows, even a brief performance fluctuation may disrupt consultations or delay access to patient data. Predictive scaling aims to address this limitation by anticipating demand before it occurs, using data patterns derived from historical load, appointment schedules, and clinician behavior.
Telemedicine traffic typically follows predictable patterns based on appointment windows, working hours of physicians, and geographical time zones. For example, load remains light during early morning hours and increases rapidly during predictable consultation blocks.
Predictive scaling uses these trends to provision capacity ahead of expected demand, reducing latency risks at session initiation and decreasing the likelihood of call drops. By forecasting expected workflow intensity, the platform prepares resources in advance rather than reacting after stress metrics exceed thresholds.
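A deliberately simple forecasting sketch makes the idea concrete: average the demand seen at the same hour on previous days, add a headroom factor, and pre-provision replicas accordingly. All numbers here (session counts, 50 sessions per replica, 20% headroom) are hypothetical, and a production system would use richer seasonal models:

```python
import math

def forecast_replicas(history: dict, hour: int, sessions_per_replica: int,
                      headroom: float = 1.2, min_replicas: int = 2) -> int:
    """Predictive pre-provisioning: average the session counts observed
    at this hour on previous days, apply headroom, and convert the
    forecast into a replica count ahead of the demand."""
    samples = history[hour]
    expected = sum(samples) / len(samples)
    needed = math.ceil(expected * headroom / sessions_per_replica)
    return max(min_replicas, needed)

# Hypothetical evening consultation block: ~900 concurrent sessions
# observed at 18:00 over the last three days, 50 sessions per replica.
history = {18: [880, 900, 920], 6: [40, 55, 35]}
print(forecast_replicas(history, 18, 50))  # 22
print(forecast_replicas(history, 6, 50))   # 2
```

The output would feed a scheduled scaling action (or a minimum-replica floor on the HPA) so that capacity is already warm when the consultation block begins.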
AI-assisted reliability further enhances this approach by monitoring behavioral indicators that correlate with performance risk. Workflow completion times, packet loss across video streams, frequency of retry requests, and unusual authentication errors can serve as early signs of instability.
Machine learning models can analyze these indicators to adjust scaling policies, throttle non-critical workloads, or re-route service traffic through more resilient nodes. Instead of treating performance degradation as an inevitable event, the platform learns to respond proactively based on statistical likelihood.
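Even without a trained model, the underlying statistical intuition can be illustrated with a z-score over recent session-setup latencies; the roughly three-sigma alert threshold is a common rule of thumb, not a prescription:

```python
import statistics

def instability_score(latencies_ms: list, current_ms: float) -> float:
    """Early-warning sketch: z-score of the current session-setup
    latency against its recent history. Scores above ~3 flag likely
    degradation before hard failures appear."""
    mean = statistics.fmean(latencies_ms)
    stdev = statistics.pstdev(latencies_ms)
    return (current_ms - mean) / stdev if stdev else 0.0

baseline = [100.0, 102.0, 98.0, 101.0, 99.0]
# A jump to 110 ms stands far outside the recent distribution and
# would trigger a proactive response before sessions start failing.
```

A learned model replaces the single-metric z-score with a joint view over many indicators (retries, packet loss, authentication errors), but the proactive-response pattern is the same.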
Microservice-specific prediction enables even more targeted reliability improvements. For example, a model trained on record retrieval latency can trigger additional caching for the medical records API before peak usage begins, while streaming services may receive more aggressive scaling based on bit-rate history.
This independence prevents auxiliary services from competing with critical ones during peak hours and reduces the cost of blanket resource allocation. Predictive models align resource allocation with service criticality, ensuring reliability where it has the greatest clinical impact. Observability plays a central role in AI-driven reliability. Telemetry pipelines supply models with the data needed to identify risk conditions before they manifest.
This includes both infrastructure measurements and clinical workflow indicators such as appointment density or physician session patterns. When telemetry and model-driven policies work together, the platform evolves from reacting to discrete failures into continuously adapting to clinical demand. The system becomes not only resilient but also self-adjusting throughout its operational lifecycle.
Another advantage of predictive reliability is the ability to replace static safety margins. Conventional systems allocate extra resources across the cluster to guard against unexpected spikes, resulting in sustained cost overhead. Predictive scaling allows telemedicine platforms to minimize excess provisioning and activate resources only when demand forecasts justify it.
This approach improves financial efficiency while maintaining consistent interaction quality for patients and clinicians. Finally, predictive reliability supports safer deployment strategies by forecasting how a release may affect clinical traffic. Models trained on historical rollout impact can signal whether a planned deployment should be delayed, accelerated, or limited to a smaller user group. This form of intelligence allows CI/CD pipelines to account for usage risk rather than merely code correctness.
As predictive systems mature, telemedicine reliability becomes a coordinated interaction between clinical behavior, observability pipelines, and deployment automation. AI-assisted reliability, therefore, represents a shift from reactive scaling to proactive operational design.
By anticipating performance challenges, aligning scaling with clinical workflows, and guiding deployment decisions, predictive systems enhance resilience without relying on constant overprovisioning. The result is a telemedicine platform that responds smoothly to clinical growth while preserving service continuity under variable and unpredictable patient demand.
Conclusion
Cloud-native DevOps provides a practical and repeatable foundation for delivering reliable telemedicine services at scale. By decomposing clinical workflows into microservices, orchestrating them with Kubernetes, and enforcing reliability through automated deployment pipelines, providers can eliminate downtime caused by monolithic releases, unpredictable load spikes, and manual scaling interventions.
Reliability becomes a function of operational discipline rather than infrastructure size, allowing systems to grow and evolve without exposing patients or clinicians to unstable service behavior. For engineering teams building or modernizing telehealth platforms, operational priorities should center on modular service boundaries, independent scaling, and controlled rollout strategies that reflect clinical sensitivity.
Video communication, authentication systems, and medical record access should be treated as reliability-critical services with distinct scaling policies and deployment restrictions. Automated rollback conditions, canary validation, and observability-driven decisions must be incorporated into CI/CD workflows to protect active patient sessions during rapid system evolution.
Kubernetes orchestration, service mesh enforcement, and metric-based autoscaling together form a reliability control plane that is adaptable to new clinical workloads and variable patient demand. Rather than investing in excess infrastructure capacity or delaying software changes, organizations should adopt continuous improvement cycles guided by measured reliability performance.
This approach allows telemedicine platforms to expand services, onboard more patients, and adopt advanced features without compromising consultation quality or treatment continuity. The most practical path forward for healthcare innovators is to treat reliability as an engineering requirement that is automated, monitored, and enforced throughout the lifecycle of telemedicine systems.

