Careers
Senior DevOps Engineer
Seeking a visionary Senior Platform & DevOps Engineer to architect, build, and automate the scalable, self-service cloud platform that powers our AI-driven analytics solutions.
Location: Mumbai
Role Overview:
We are seeking a visionary Senior Platform & DevOps Engineer to architect, build, and automate the scalable, self-service cloud platform that powers our AI-driven analytics solutions. You will own our entire software delivery lifecycle, focusing on building robust CI/CD pipelines, engineering our Kubernetes-based platform, and implementing Infrastructure as Code (IaC) as a core principle. You will work in close collaboration with our Server Administration team, who manage day-to-day operations, allowing you to focus on strategic automation, developer enablement, and MLOps.
Key Responsibilities:
· Platform Architecture & Infrastructure as Code (IaC):
Architect and implement a fully automated, multi-tenant cloud infrastructure, providing the foundational blueprints for our production and testing environments.
Design and build scalable, resilient, and cost-effective architectures on AWS/Azure for our microservices, databases, and big data workloads.
Champion GitOps principles for managing infrastructure and application configurations declaratively.
· CI/CD & Developer Enablement:
Design, build, and own the CI/CD pipelines using GitHub Actions/GitLab CI, enabling fast, safe, and automated releases for our React, Python (Django/FastAPI), and AI/ML components.
Integrate automated quality and security gates into the pipeline, including static analysis (SAST), dependency scanning, and container security.
Streamline the developer experience by creating self-service tools and reducing friction in the path to production.
· Platform Engineering & Orchestration:
Architect, build, and maintain our Kubernetes platform as a robust, multi-tenant service for our engineering teams.
Implement and manage a service mesh (e.g., Istio, Linkerd), ingress controllers, and other cloud-native ecosystem tools to enhance observability, security, and traffic management.
Develop and optimize Docker images for performance, security, and size.
· Observability & Monitoring Engineering:
Engineer and maintain the central observability platform using tools such as Prometheus, Grafana, and the ELK Stack.
Focus on building the core infrastructure for metrics, logging, and tracing, empowering development and server administration teams to create their own dashboards and alerts.
· MLOps & Data Pipeline Automation:
Collaborate with data scientists to design and build automated CI/CD pipelines for machine learning models (MLOps), covering training, validation, deployment, and monitoring.
Provision and automate the infrastructure required for our big data processing stack (PySpark, Dask) and real-time data ingestion systems.
· DevSecOps & Automated Security:
Integrate and manage automated security tooling within the CI/CD pipeline (SAST, DAST, IaC scanning).
Implement and manage a centralized secrets management solution (e.g., HashiCorp Vault, AWS Secrets Manager).
Codify security policies and best practices within our Terraform modules and Kubernetes configurations.
Requirements:
5+ years of experience in a DevOps, SRE, or Platform Engineering role.
Expert-level proficiency in Infrastructure as Code.
Deep, hands-on experience building and optimizing CI/CD pipelines using GitHub Actions, GitLab CI, or similar.
Advanced knowledge of Docker and production-grade experience architecting and managing Kubernetes clusters.
Strong scripting and automation skills using Python or Go.
Experience architecting, deploying, and managing core observability tools (Prometheus, Grafana).
Solid understanding of cloud networking, security, and IAM in a major cloud provider (AWS/Azure).
Experience working in a collaborative environment with clear handoffs to operational teams (like Server Admins or DBAs).
Bachelor’s degree in Computer Science, Engineering, or a related field.
Nice-to-Have:
Experience with GitOps tools like Argo CD or Flux.
Familiarity with MLOps frameworks (Kubeflow, MLflow).
Experience with service mesh technologies (Istio, Linkerd).
Knowledge of provisioning and automating infrastructure for PostgreSQL, MongoDB, and Redis.
Contributions to open-source projects in the cloud-native space.
Certifications: Certified Kubernetes Administrator (CKA), Terraform Associate, or AWS/Azure DevOps Professional.
Why Join Us?
Be the primary architect of the platform powering a cutting-edge AI product suite.
Focus on strategic engineering and automation, leaving day-to-day maintenance to a dedicated team.
Drive innovation by implementing modern practices like GitOps, MLOps, and Platform Engineering.
Work in a highly collaborative culture where you empower developers and data scientists.
Competitive salary.
Don't see an open position but still want to join our team?
We would love to hear from you. We are always looking for talented individuals who are passionate about making a change.