Deployments

Understanding EdgeRun deployments, lifecycle management, and deployment strategies

Deployments are the core abstraction in EdgeRun for managing containerized applications across multiple providers and regions. This guide covers deployment concepts, lifecycle management, and best practices.

What is a Deployment?

A deployment represents a running application instance that includes your container image, configuration, resources, networking, scaling policies, and health checks.

Deployment Components

  • Container: Your application container with its configuration and environment.
  • Configuration: Resource limits, environment variables, and deployment settings.
  • Resources: CPU, memory, and storage allocation for your application.
  • Networking: Ingress rules, service discovery, and load balancing.
  • Scaling: Auto-scaling policies and resource utilization targets.
  • Monitoring: Health checks, metrics collection, and alerting rules.

Deployment Configuration

A basic deployment configuration includes the essential components needed to run your application:

edgerun.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
      - name: web
        image: my-app:latest
        ports:
        - containerPort: 8080
        env:
        - name: NODE_ENV
          value: production
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: db-config
              key: url
        resources:
          requests:
            cpu: 500m
            memory: 1Gi

Getting Started

This basic configuration is perfect for most web applications. EdgeRun will handle provider selection, networking, and health checks automatically.

Deployment Strategies

Strategy         Approach                Benefits                            Best For
Rolling Update   Gradual replacement     Zero downtime, resource efficient   Default choice
Blue-Green       Two environments        Instant rollback, full testing      Critical applications
Canary           Gradual traffic shift   Risk mitigation, data-driven        New features
Recreate         Stop then start         Simple, stateful apps               Database migrations
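
For example, a rolling update can be tuned with maxSurge and maxUnavailable. These are standard Kubernetes Deployment fields; the values below are illustrative, not EdgeRun requirements.

Rolling Update Strategy
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during the update
      maxUnavailable: 0    # never remove a serving pod before its replacement is ready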

Multi-Region Deployments

Geographic Distribution

Multi-Region Configuration
# Multi-region via nodeSelector or topologySpreadConstraints
spec:
  template:
    spec:
      nodeSelector:
        edgerun.io/region: us-west-2
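
The comment above also mentions topologySpreadConstraints. A minimal sketch that spreads replicas evenly across regions, assuming nodes carry the same edgerun.io/region label:

Region Spread
spec:
  template:
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                          # allow at most one replica of imbalance between regions
        topologyKey: edgerun.io/region      # spread across the region label
        whenUnsatisfiable: DoNotSchedule    # hold scheduling rather than violate the spread
        labelSelector:
          matchLabels:
            app: my-web-app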

Benefits

  • Reduced Latency: Global
  • High Availability: 99.9%+
  • Disaster Recovery: Automatic
  • Load Distribution: Intelligent

Auto-Scaling & Health Checks

Auto-Scaling Configuration

Auto-Scaling Setup
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
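
If replica counts oscillate, the autoscaling/v2 API also supports scaling behavior tuning. A minimal sketch of a scale-down stabilization window, added under spec of the HorizontalPodAutoscaler above (standard Kubernetes fields, illustrative values):

Scale-Down Stabilization
spec:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300   # require 5 minutes of sustained low load before scaling down
      policies:
      - type: Percent
        value: 50                       # remove at most half of the current replicas per minute
        periodSeconds: 60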

Health Check Setup

Health Checks
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5

Best Practice

Always configure both liveness and readiness probes. Liveness probes restart unhealthy containers, while readiness probes control traffic routing to ensure users only reach healthy instances.

Best Practices

Resource Management

  • Always set resource requests and limits (see the sketch below)
  • Use appropriate resource sizing
  • Monitor resource utilization
  • Enable auto-scaling for production
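
The earlier example sets requests only. A minimal sketch of the same container block with both requests and limits (illustrative values):

Requests and Limits
resources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: "1"         # hard cap: CPU is throttled above one core
    memory: 2Gi      # hard cap: the container is OOM-killed above 2Gi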

Security

  • Run as non-root user
  • Use read-only root filesystem
  • Drop unnecessary capabilities
  • Implement security contexts (see the sketch below)
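
A container-level securityContext covering these points might look like the following. These are standard Kubernetes fields; the UID is illustrative.

Security Context
containers:
- name: web
  securityContext:
    runAsNonRoot: true              # refuse to start if the image would run as root
    runAsUser: 1000                 # illustrative non-root UID
    readOnlyRootFilesystem: true    # block writes to the container filesystem
    allowPrivilegeEscalation: false
    capabilities:
      drop:
      - ALL                         # drop all Linux capabilities the app does not need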

Reliability

  • Configure health checks properly
  • Use rolling update strategy
  • Plan for graceful shutdowns (see the sketch below)
  • Implement circuit breakers
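
For graceful shutdowns, a common pattern is to give the pod a termination grace period plus a short preStop delay so load balancers can drain in-flight requests. A minimal sketch with illustrative values:

Graceful Shutdown
spec:
  terminationGracePeriodSeconds: 30    # time the app has to finish in-flight work after SIGTERM
  containers:
  - name: web
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 5"]   # brief pause so endpoints are removed before shutdown begins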

Monitoring

  • Enable metrics collection (see the sketch below)
  • Set up proper logging
  • Configure alerting rules
  • Monitor business metrics
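
If your metrics pipeline follows the common Prometheus annotation convention (an assumption; EdgeRun's own collection may differ), exposing a metrics endpoint can be as simple as annotating the pod template:

Metrics Annotations
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"    # convention used by many Prometheus scrape configs
        prometheus.io/port: "8080"      # port where the app serves metrics
        prometheus.io/path: "/metrics"  # metrics endpoint path (assumed)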

Troubleshooting

Issue                  Cause                   Diagnosis                   Solution
Deployment Stuck       Resource unavailable    Check provider capacity     Verify image access
Health Check Fails     App not ready           Verify endpoint path        Increase timeout
Auto-scaling Inactive  Metrics missing         Check resource requests     Verify scaling policies
Network Issues         Connectivity problems   Review port configuration   Check load balancer

Need Help?

For more detailed troubleshooting guidance, see our comprehensive Troubleshooting Guide.

Next Steps

  • Providers: Learn about EdgeRun's provider network and how to choose the right one.
  • CLI: Use our CLI to manage deployments from your terminal.
  • Examples: See real-world examples of EdgeRun deployments.