DeepID Kubernetes Documentation
A Comprehensive Guide to Deploying and Managing DeepID with Helm on Kubernetes
Overview
Welcome to the official deployment documentation for DeepID, a cutting-edge application engineered to thrive in Kubernetes environments. This guide is your one-stop resource for understanding and implementing DeepID using our meticulously crafted Helm chart, which is provided as a convenient ZIP file. Whether you’re a seasoned DevOps engineer or a developer new to Kubernetes, our goal is to equip you with detailed instructions to set up, deploy, and maintain DeepID across various cloud platforms like AWS, Azure, or even on-premises clusters. DeepID’s architecture is designed with scalability, security, and operational efficiency in mind, leveraging Kubernetes’ powerful orchestration capabilities to ensure robust performance under diverse workloads. In this document, we’ll walk you through the entire process—from preparing your environment to troubleshooting potential issues—while highlighting the flexibility and modularity that make DeepID stand out.
The DeepID system is built around several core principles: isolated namespaces for clean resource management, sophisticated resource quotas to prevent contention, and autoscaling mechanisms to adapt to real-time demands. By distributing the Helm chart as a ZIP file, we’ve made it easy for you to deploy DeepID in your preferred environment without wrestling with complex dependency setups. Let’s dive into the details and get you started!
Prerequisites
Deploying DeepID successfully hinges on having the right tools and infrastructure in place. Before you begin, take a moment to ensure your environment aligns with the following requirements. These prerequisites are designed to guarantee a smooth deployment experience, minimizing hiccups and maximizing performance. Below, we’ve outlined everything you need, from software versions to hardware specifications, along with cloud-specific considerations for AWS and Azure users.
• Kubernetes Cluster: You’ll need a cluster running version 1.22 or higher. This ensures compatibility with the latest Kubernetes features, such as improved pod scheduling and security enhancements that DeepID relies on.
• Helm: Install Helm version 3.0 or higher on your local machine. Helm is our chosen tool for packaging and deploying DeepID, offering a streamlined way to manage configurations and dependencies.
• kubectl: Configure kubectl with administrative access to your Kubernetes cluster. This command-line tool will be your primary interface for interacting with the cluster and verifying the deployment.
• Storage: A default storage class must be available to provision persistent volumes for DeepID components like databases and file storage. Without this, the system may fail to initialize properly.
• Load Balancer: If you plan to expose DeepID externally (e.g., via the API service), configure a load balancer in your cluster. This is optional but recommended for production setups.
• Cluster Size:
- Minimum Recommended: 70 CPU cores and 223 GB of memory. This includes a 20% buffer for system overhead and growth, ensuring DeepID performs reliably even under peak loads.
- Refer to the Appendix: Cluster Sizing and Best Practices for a detailed breakdown.
For those deploying on cloud platforms, additional setup may be required:
- AWS Users: DeepID integrates seamlessly with Elastic Kubernetes Service (EKS). You’ll need a Virtual Private Cloud (VPC) with isolated subnets, Relational Database Service (RDS) for PostgreSQL, and Elastic File System (EFS) for persistent storage. AWS Secrets Manager is also recommended for secure credential handling.
- Azure Users: Leverage Azure Kubernetes Service (AKS) with a Virtual Network, Azure Database for PostgreSQL Flexible Server, and Azure Files for storage. Azure Key Vault will handle secrets management securely.
If you’re unsure about your setup, don’t worry—we’ll guide you through the process step-by-step in the deployment section!
Getting Started
Downloading the Helm Chart
The DeepID Helm chart is packaged as a ZIP file named deepid-k8s.zip, making it easy to distribute and deploy. You can download it from [insert download link here, e.g., a GitHub release page or company portal]. Once downloaded, you’ll need to extract its contents to a local directory on your machine. This ZIP file contains everything you need: the Helm chart metadata (Chart.yaml), default configurations (values.yaml), and Kubernetes resource templates (templates/ directory). Here’s how to get it ready:
After running these commands, you’ll see a directory structure that encapsulates the DeepID deployment logic. The charts/ subdirectory includes environment-specific configurations (e.g., deepid-k8s-dev for development and deepid-k8s-main for production), giving you flexibility to tailor the deployment to your needs. Take a moment to explore the files—it’s a great way to familiarize yourself with what’s under the hood!
Setting Up Your Environment
Before deploying DeepID, you’ll need to configure a few environment variables and verify your cluster connectivity. This step ensures that your deployment targets the correct environment (e.g., development or production) and cloud provider. Follow these detailed instructions:
1. Set Environment Variables: These variables help customize your deployment. For example, setting ENVIRONMENT to dev will use development-specific configurations, while prod targets production. The CLOUD_PROVIDER variable ensures compatibility with your infrastructure, and REGION specifies the geographic location of your cluster. Here’s an example for an AWS deployment in the US East region:
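A sketch of the variables for that scenario (the values shown are illustrative; substitute your own environment, provider, and region):

```shell
# Deployment target: dev or prod
export ENVIRONMENT=dev
# Infrastructure: aws or azure
export CLOUD_PROVIDER=aws
# Geographic location of the cluster
export REGION=us-east-1
```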
2. Verify Cluster Access: Use kubectl to confirm that your local machine can communicate with your Kubernetes cluster. This command displays basic cluster information, such as the control plane’s address:
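The standard check is:

```shell
kubectl cluster-info
```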
If you see output like Kubernetes control plane is running at https://<cluster-ip>, you’re good to go! If not, double-check your cluster credentials or consult your cloud provider’s documentation for troubleshooting tips.
Deployment Guide
Now that your environment is ready, let’s deploy DeepID to your Kubernetes cluster. This section breaks the process into three manageable steps, each with detailed commands and explanations. By the end, you’ll have DeepID up and running, ready to handle your workloads.
Step 1: Configure Your Kubernetes Context
The first step is to ensure kubectl is pointing to the correct cluster. This is done by updating your Kubernetes context with the appropriate credentials. The exact command depends on your cloud provider:
• AWS: If you’re using EKS, run this command to fetch credentials for your cluster. Replace <cluster-name> with your EKS cluster’s name and <region> with its region:
• Azure: For AKS users, this command retrieves credentials. Replace <rg-name> with your resource group and <cluster-name> with your AKS cluster’s name:
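Using the standard provider CLIs, the two commands look like this (placeholders as described above):

```shell
# AWS (EKS): add the cluster to your kubeconfig
aws eks update-kubeconfig --name <cluster-name> --region <region>

# Azure (AKS): fetch credentials for the cluster
az aks get-credentials --resource-group <rg-name> --name <cluster-name>
```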
After running the command, verify the context switch with kubectl config current-context. You should see your cluster’s name in the output. This step ensures all subsequent kubectl commands target the right environment.
Step 2: Prepare the Namespace and Resources
DeepID runs in a dedicated namespace called deepid-system, which provides isolation and simplifies resource management. You’ll also apply initial configuration files to set the stage for the Helm deployment. Here’s how:
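The commands below sketch this step. The ConfigMap filename (configmap.yaml) is an assumption; adjust the path to match the file shipped in your extracted chart:

```shell
# Create the namespace idempotently (no error if it already exists)
kubectl create namespace deepid-system --dry-run=client -o yaml | kubectl apply -f -

# Apply the environment-specific ConfigMap (filename assumed; check environments/$ENVIRONMENT/)
kubectl apply -f environments/$ENVIRONMENT/configmap.yaml -n deepid-system
```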
- The first command creates the deepid-system namespace if it doesn’t already exist.
- The second applies a ConfigMap from the environments/ directory, which contains environment-specific settings (e.g., dev or prod). Replace $ENVIRONMENT with the value you set in Setting Up Your Environment. This ConfigMap might include variables like API endpoints or logging levels—feel free to inspect it in the extracted ZIP file for customization opportunities!
Step 3: Deploy with Helm
With the groundwork laid, it’s time to deploy DeepID using Helm. This single command installs or updates the application, leveraging the chart’s templates and your custom values. Here’s the command:
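Assuming the ZIP was extracted to your current working directory, the full invocation is:

```shell
helm upgrade --install deepid-release ./charts/deepid-k8s-$ENVIRONMENT \
  --namespace deepid-system \
  --values environments/$ENVIRONMENT/values.yaml
```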
Let’s break this down:
- helm upgrade --install: Updates an existing release or installs a new one if none exists.
- deepid-release: The name of this Helm release (you can change it if desired).
- ./charts/deepid-k8s-$ENVIRONMENT: Path to the Helm chart for your environment (e.g., deepid-k8s-dev).
- --namespace deepid-system: Ensures all resources are created in the correct namespace.
- --values environments/$ENVIRONMENT/values.yaml: Applies your custom configuration overrides.
Once the command completes, verify that DeepID is running:
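For example:

```shell
kubectl get pods -n deepid-system
```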
You should see pods with names like deepid-api-xyz and deepid-handler-abc in a Running state. If any are stuck in Pending or CrashLoopBackOff, proceed to the Troubleshooting section for guidance. Congratulations—you’ve deployed DeepID!
Configuration Options
DeepID’s Helm chart is highly configurable, allowing you to tailor the deployment to your specific needs. The values.yaml file (found in environments/$ENVIRONMENT/) is your entry point for customization. Below, we’ve highlighted some key options, along with their defaults and use cases, to help you fine-tune the system.
• Replicas: Controls how many API pods run concurrently. Default is api.replicas: 2. Increase this for higher availability or load handling (e.g., api.replicas: 5).
• Resources: Sets CPU and memory allocations:
- API: requests: { cpu: "1", memory: "2Gi" }, limits: { cpu: "2", memory: "4Gi" }. These ensure each API pod gets sufficient resources without overloading the cluster.
- Handler: Configurable per CronJob (e.g., requests: { cpu: "2", memory: "4Gi" }, limits: { cpu: "4", memory: "8Gi" }). Adjust based on workload intensity.
• Image: Specifies container images. Set api.image and handler.image to your registry paths (e.g., myregistry/deepid-api:1.0.0). Ensure these images are accessible to your cluster.
• Autoscaling: Controls Horizontal Pod Autoscaling (HPA):
- minReplicas: 2: Minimum number of API pods.
- maxReplicas: 10: Maximum number of API pods.
- metrics.resource.target.averageUtilization: 70: Scales pods when CPU usage hits 70%. Tweak this for more aggressive or conservative scaling.
To customize, open environments/$ENVIRONMENT/values.yaml in a text editor before deploying. For example, if your API needs more memory, update the file like this:
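A sketch of such an override, doubling the API memory defaults listed above (the exact key layout under api: is an assumption; mirror the structure you find in your values.yaml):

```yaml
api:
  resources:
    requests:
      cpu: "1"
      memory: "4Gi"
    limits:
      cpu: "2"
      memory: "8Gi"
```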
Save your changes and redeploy with the Helm command from Step 3. This flexibility ensures DeepID adapts to your unique requirements, whether you’re running a small test cluster or a large-scale production environment.
Monitoring and Maintenance
Once DeepID is running, keeping it healthy requires regular monitoring and occasional maintenance. This section provides practical commands and strategies to check system status, scale resources, and back up critical data. We’ve included examples to make these tasks approachable, even for Kubernetes newcomers.
Checking System Health
Monitoring DeepID starts with two essential commands:
• Pod Status: Lists all pods in the deepid-system namespace, showing their current state:
kubectl get pods -n deepid-system
Look for Running under the STATUS column. A healthy output might look like:
NAME                 READY   STATUS    RESTARTS   AGE
deepid-api-xyz       1/1     Running   0          5m
deepid-handler-abc   1/1     Running   0          5m
• Logs: View logs for a specific deployment (e.g., the API) to diagnose issues or confirm normal operation:
kubectl logs -f deployment/deepid-api -n deepid-system
The -f flag streams logs in real-time, which is great for spotting errors or monitoring activity during a test run.
Scaling the Application
DeepID supports both manual and automatic scaling. For quick adjustments, manually scale the API deployment:
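Assuming the API Deployment is named deepid-api (matching the pod names shown earlier):

```shell
kubectl scale deployment deepid-api --replicas=3 -n deepid-system
```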
This increases the API pods to 3. Check the result with kubectl get pods -n deepid-system. For dynamic scaling, DeepID’s Horizontal Pod Autoscaler (HPA) is preconfigured to maintain CPU usage at 70%, automatically adjusting between 2 and 10 replicas. You can monitor HPA status with:
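For example:

```shell
kubectl get hpa -n deepid-system
```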
Backup and Recovery
Regular backups are critical for protecting DeepID against data loss. Here’s how to back up the database and configurations:
• Database Backup: Execute the backup script inside the database pod (replace the placeholders with your pod name and the script path shipped with your deployment):
kubectl exec -n deepid-system <db-pod-name> -- <path-to-backup-script>
This creates a snapshot of your database—store it securely outside the cluster.
• Configuration Backup: Export all resources in the namespace to a YAML file:
kubectl get all -n deepid-system -o yaml > backup.yaml
To restore, apply the YAML file with kubectl apply -f backup.yaml -n deepid-system after addressing the underlying issue.
Troubleshooting
Even with careful planning, issues can arise. This section covers common problems, their likely causes, and step-by-step solutions to keep DeepID running smoothly.
• Pods Not Starting:
- Cause: Insufficient cluster resources or quota limits.
- Solution: Check quotas with:
kubectl describe quota -n deepid-system
Increase cluster capacity or adjust deepid-quota in the Helm chart if needed.
• Service Unreachable:
- Cause: Misconfigured network policies or service selectors.
- Solution: Inspect the service:
kubectl describe svc deepid-api -n deepid-system
Ensure the selector matches pod labels (e.g., app: deepid-api).
• Storage Issues:
- Cause: Missing or misconfigured storage class.
- Solution: Verify with kubectl get storageclass and set a default if necessary.
For deeper debugging, fetch pod logs:
kubectl logs -f <pod-name> -n deepid-system
This often reveals error messages or stack traces to guide your next steps.
Appendix: Cluster Sizing and Best Practices
To ensure DeepID performs optimally, your cluster must meet certain resource thresholds. Here’s a detailed breakdown:
• Base Resources:
- CPU: 58 cores (12 core services, 22 internal APIs, 24 handlers).
- Memory: 186 GB (24 GB core services, 66 GB internal APIs, 96 GB handlers).
• With 20% Buffer:
- CPU: 70 cores (58 + 12 buffer).
- Memory: 223 GB (186 + 37 buffer).
Best Practices:
- Reserve extra capacity for unexpected spikes.
- Use node pools to separate workloads (e.g., API vs. handlers).
- Enable cluster autoscaling for dynamic resource allocation.
These specs and tips ensure DeepID runs reliably in production environments.
© Deep Media AI
Author:
Deep ID team
Support:
support@deepmedia.ai