gopaddle User Guide
  • πŸ‘‹Welcome to gopaddle
  • Overview
    • πŸ’‘Getting Started
    • Register a Cluster
      • Register Rancher Prime - RKE2
      • Register K3S
      • Register MicroK8s
      • Register Kind
      • Register minikube
      • Register Kubeadm Cluster
      • Register AWS EKS Cluster
        • EKS Cluster with public or public/private access point
        • EKS Cluster with private access endpoint and a bastion host
        • Validate Cluster Connectivity
      • Register Azure AKS Cluster
      • Register Google GKE Cluster
      • Register Huawei Cloud Container Engine
    • Register GitHub Account
    • Register Jira Account
    • Register ChatGPT Assistant
    • πŸ’»Kubernetes IDE
      • Filters
      • Editor
      • Flat vs Tree View
      • Developer Tools
    • πŸ™‹AI Assistant
      • Chat with AI
      • Raise a Jira Ticket from Chat Window
      • Create Runbook from Chat Window
    • πŸ“–Runbook Hub
      • Create Runbook Hub
      • Attach Runbook Hub to Cluster
      • Enhancing contents of Runbooks with AI
      • Detach Runbook Hub from Cluster
      • Syncing Runbook Hub with GitHub
      • Delete Runbook / Runbook Hub
    • ⏱️Installing Community Edition
      • MicroK8s Addon
        • On Ubuntu
        • On MacOS
      • Docker Desktop
      • SUSE Rancher Prime
      • Digital Ocean
      • Akamai Linode
      • Kind Cluster
      • Helm
      • Docker Compose
      • Accessing gopaddle UI
    • πŸ“ˆImproving performance of resource discovery
    • Provision new Cluster
      • Register Cloud Account
        • AWS
          • Quickstart AWS Setup
          • IAM Access Policies
          • AWS Setup Script
        • Azure
          • Create Azure Application
          • Register Azure Cloud Authenticator
          • Register Azure Account
        • Google
      • Provision Clusters on Cloud
        • AWS EKS
          • AWS EKS Reference Architecture
          • Adding an AWS IAM Role (EKS Master / Node Pool)
          • Public EKS Cluster
          • All Private EKS Cluster (beta)
          • Creating a Node Pool
        • Azure AKS
          • Creating a Node Pool
          • Enable Public IP Node Access for Azure Deployments
          • VMSS Autoscaling Rules
        • Google GKE
          • Creating a Node Pool
  • πŸ”ŽTroubleshooting
    • Cluster Resource View Issues
      • Network Error ! ServerError: Response not successful: Received status code 503
      • Network Error ! TypeError: Failed to fetch
      • Network Error ! ServerParseError: Unexpected token 'j', "json: erro"... is not valid JSON
      • Updating Labels and Annotations does not get reflected in resources list
      • Filtered resources are not fully listed
    • Runbook Issues
      • Deleting a runbook from .gp.yaml does not detach annotation in resources
      • Deleting a Code Account from gopaddle UI does not detach annotation in resources
    • Jira Issues
      • Creating, Updating or Appending a Jira issue fails with error INVALID_INPUT
    • EKS Issues
    • Deployment Issues
    • Node Issues
    • Huawei Issues
Powered by GitBook
On this page
  1. Troubleshooting

Deployment Issues


Service stuck in pending state - Reason: 0/1 nodes are available: Too many pods.

Scenario

When an application is deployed, the services are stuck in the pending state on the application view page. When a specific replica is viewed, it shows the error - 0/1 nodes are available: <nodecount> Too many pods.

Under Containers, no containers are provisioned.

Resolution

This error can occur for two reasons:

  1. The number of replicas provisioned on a specific node exceeds the maximum pod limit assigned to that node.

  2. The number of private IP addresses assigned to the replicas exceeds the maximum number of private IPs supported by the node size.

In both scenarios, the cluster does not automatically scale and add another node. To overcome this issue, you can increase the desired node count in the cluster.

Navigate to the cluster view page, click on the node pool and edit the desired node count.

Below is the list of node pool fields to edit for the different types of managed clusters.

  • AWS EKS - Desired Capacity

  • Google GKE - Initial Count

  • Azure AKS - Desired Count

exec user process caused "exec format error"

Scenario

When the application is deployed, the service moves to the pending state and the container moves to the Waiting state with the reason CrashLoopBackOff. The container logs show the error standard_init_linux.go:178: exec user process caused "exec format error"

Reason 1:

This error is encountered when trying to run a Go binary inside a Docker container whose architecture or operating system differs from the machine on which the binary was built. The Go binary must be compiled for the architecture and operating system of the Docker container in order for it to execute.

Solution

If you are onboarding a Dockerfile-based container, make sure the right target platform is specified in your build command. For instance, if you are building the binary on macOS but your runtime Docker container is based on Ubuntu, you can compile your binary as follows:

GOOS=linux go build -o myprogram

If the CPU architectures also differ (for example, building on Apple Silicon for an amd64-based container), set GOARCH as well: GOOS=linux GOARCH=amd64 go build -o myprogram

Reason 2:

This error can also appear due to a missing script header, such as #!/bin/bash or #!/bin/sh, in the container start script. The error may also occur if there is an empty line or a space before the script header.

Solution

Click on the info link for the container, and edit the start script to include the header as the first line of the start script. Save the start script.

Here is an example of a start script with a script header:

#!/bin/bash
npm start

This will automatically restart the replica after a few seconds. If you want to restart the replica immediately, click the Delete option next to the replica. This will bring down the replica and create a new replica with the updated start script in the container.
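The shebang requirement above is strict: "#!" must be the very first two bytes of the file, with no blank line, space, or BOM before it. A quick local sketch of how to write and check such a start script (file name illustrative):

```shell
# Write a start script whose shebang is the very first bytes of the file.
# printf (unlike some editors) adds no BOM or leading blank line.
printf '#!/bin/bash\nnpm start\n' > start.sh
chmod +x start.sh
# Inspect the first two bytes; a valid script prints "#!".
head -c 2 start.sh   # prints: #!
```

If `head -c 2` shows anything other than "#!", the kernel cannot pick an interpreter and the container fails with the exec format error.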

pod has unbound immediate PersistentVolumeClaims : node(s) had taints that the pod didn't tolerate.

Scenario

Deploying an application with stateful services results in the Stateful Service being stuck in the pending state for a long time. Examining the Service replica log shows the error:

Failed to provision volume with StorageClass "gp-landingpage-sc-uj2z": InvalidZone.NotFound: The zone 'us-east-1a' does not exist. pod has unbound immediate PersistentVolumeClaims 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate. no nodes available to schedule pods

This happens when the Service is scheduled on a node pool in one availability zone, whereas the Volume Provision Policy provisions the volume in a different availability zone.

Resolution

Modify the availability zone in the Volume Provision Policy to match the node pool's availability zone.
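At the Kubernetes level, the equivalent fix is a StorageClass restricted to the node pool's zone. A sketch, assuming an EBS-backed cluster; the class name and zone are illustrative and should match your node pool:

```yaml
# Illustrative StorageClass that pins provisioned volumes to one zone.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp-sc-us-east-1b        # example name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
# WaitForFirstConsumer delays provisioning until the pod is scheduled,
# so the volume is created in the pod's own zone.
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - us-east-1b          # example zone; match the node pool
```

Setting volumeBindingMode to WaitForFirstConsumer is itself a common way to avoid this class of zone-mismatch error, since the scheduler then decides the zone before the volume exists.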
