We're #hiring a new Kafka Confluent Engineer in India. Apply today or share this post with your network.
Intuitive.Cloud’s Post
DevOps Engineer ♾️ | Linux 🐧 | AWS ☁️ | Docker 🐳 | Kubernetes ☸️ | CI/CD 🚀 | Terraform 🏗️ | Ansible ⚙️ | Jenkins 🧑🔧 | Shell Scripting 💠 | Grafana ⛄ | Git & GitHub 🐙 | Prometheus ♨️
Latest Opportunity🔥🔥🔥🔥🔔🔔🔔🚀 #devopsjob #devops #devopscommunity #trainwithshubham #90daysofdevops #90daysofdevopschallenge #awsdevops #azure #azurecloud #hiring #azure #devopsengineer #awscommunity
Job Description
Job Title: Automation DevOps Engineer (Ansible, Kafka)
Location: Memphis, Tennessee (Hybrid); local candidates only, DL required
Mode of Interview: Zoom
Please share your updated resume at hanshika@vyzeinc.com
***Need either ex-Apple or TCS-experienced candidates***
Hello folks, hope you are doing well. Kindly find the JD below and revert with a suitable resume.
Role: DevOps Engineer
Location: Sunnyvale, CA or Austin, TX
Visa: H1B
Experience: 8+ years

JD1 (Sunnyvale, CA):
* DevOps Engineer - Kubernetes & Cloud Infrastructure
* Core competencies required: Kubernetes, observability tools, networking, security protocols, databases, cloud-native solutions, and troubleshooting
* Core Kubernetes stack; Rubix experience
* Observability: Prometheus, Grafana
* Networking, ingresses, load balancers, certificate management
* SSL/mTLS, gRPC, REST/HTTP/JSON
* Database experience: NoSQL (Couchbase, Cassandra) and transactional databases (Oracle, PostgreSQL)
* Caching / cloud-native solutions: SNS/SQS, Elasticsearch, Solr
* General troubleshooting skills in Java applications

JD2 (Austin, TX):
* Good prior experience with infrastructure support; good experience with infrastructure monitoring using Splunk and Kubernetes
* Strong experience with Splunk and Python: using the Splunk APIs and the Python Splunk SDK, query indexing, and applying filters; achieving real-time monitoring and troubleshooting, also leveraging shared Splunk dashboards; monitoring alerts and performing analysis to identify patterns of issues; good experience with AWS and Kubernetes, mostly on the infrastructure side (see the Python sketch after this post)
* Good experience with Kubernetes: deployments and advanced Kubernetes monitoring; monitoring Kubernetes objects, their health, and their performance to troubleshoot and identify failures with containers and efficiently manage performance in Kubernetes environments; good experience optimizing containers to reduce utilization and achieve cost savings
* Monitoring Kubernetes with Splunk Infrastructure Monitoring: navigating your Kubernetes environment; end-to-end visibility for microservices on Kubernetes; collecting Kubernetes metrics with Splunk; an overall view of the entire Kubernetes architecture, including critical health metrics (clusters and workloads) across the environment from the infrastructure to the orchestrator, containers, and applications, with a view of nodes, pods, and containers
* Good experience using GitHub

Share resume to nidhi@urpantech.com
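A minimal sketch, not part of the posting, of the Splunk-from-Python workflow JD2 describes, using the splunk-sdk (splunklib) package. The host, credentials, index, sourcetype, and field names here are all assumptions for illustration only:

```python
# Hypothetical sketch: run a Splunk search from Python with splunklib.
# Host, credentials, index "k8s_metrics", sourcetype, and field names
# are placeholders, not values from the job posting.
import splunklib.client as client
import splunklib.results as results

service = client.connect(
    host="splunk.example.com",   # assumption: your Splunk search head
    port=8089,
    username="svc_monitoring",
    password="changeme",
)

# One-shot search: filter a Kubernetes metrics index and count restarts per pod.
query = (
    'search index=k8s_metrics sourcetype=kube:objects:pods '
    '| stats sum(restart_count) as restarts by pod_name '
    '| where restarts > 3'
)
stream = service.jobs.oneshot(query, output_mode="json", earliest_time="-15m")

# Iterate alert-worthy pods; a real pipeline would feed these into an alerting hook.
for result in results.JSONResultsReader(stream):
    if isinstance(result, dict):
        print(result["pod_name"], result["restarts"])
```

In practice a scheduled Splunk search with an alert action usually replaces a polling script like this; the snippet only shows the SDK mechanics the JD asks about.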
#hiring DevOps Engineer with Databricks on AWS. Full-time candidates only; no C2C. Email resumes to ekta.Khosla@infinite.com
Location: Dallas, TX or Tampa, FL

Infrastructure Automation: Designing, implementing, and maintaining automated infrastructure provisioning and configuration management for Databricks clusters on AWS using tools like Terraform, CloudFormation, or Ansible.
Continuous Integration/Continuous Deployment (CI/CD): Developing and maintaining CI/CD pipelines for Databricks workloads on AWS, ensuring automated testing, deployment, and monitoring of data pipelines and analytics solutions.
Cluster Management: Managing Databricks clusters on AWS, including provisioning, scaling, and optimization for performance, cost-efficiency, and reliability (see the sketch after this post).
Monitoring and Logging: Implementing monitoring and logging solutions for Databricks clusters and workloads on AWS using tools like CloudWatch, Prometheus, Grafana, or the ELK stack to ensure visibility into system performance and health.
Security and Compliance: Implementing security best practices for Databricks deployments on AWS, including IAM policies, encryption, network security, and compliance with data privacy regulations.
Backup and Disaster Recovery: Implementing backup and disaster recovery strategies for Databricks data and workloads on AWS to ensure data integrity and business continuity.
Cost Optimization: Optimizing Databricks usage and AWS infrastructure costs by right-sizing clusters, implementing cost allocation tags, and monitoring resource utilization.
Collaboration and Documentation: Collaborating with data engineers, data scientists, and other stakeholders to understand requirements and provide infrastructure support. Documenting infrastructure configurations, processes, and best practices.
Troubleshooting and Support: Providing troubleshooting and support for Databricks-related issues, working closely with AWS support and Databricks technical support teams as needed.
Knowledge Sharing: Sharing knowledge and best practices with the broader team through documentation, training sessions, and mentorship to promote DevOps culture and practices within the organization.
Continuous Improvement: Continuously evaluating and adopting new tools, technologies, and practices to improve the efficiency, reliability, and scalability of Databricks deployments on AWS.
Vendor Management: Managing relationships with AWS and Databricks vendors, including license management, support agreements, and staying informed about product updates and roadmap changes.

#devops #databricks #aws #hiringnow #hiringalert #ansible #terraform #devopsengineer
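A minimal sketch of the cluster-provisioning and cost-tagging duties above, driven through the Databricks REST API. This is illustrative only; the workspace URL, token, runtime version, instance type, and tags are assumptions, not details from the posting:

```python
# Hypothetical example: create a Databricks cluster on AWS via the
# clusters/create REST endpoint. All values below are placeholders.
import requests

WORKSPACE_URL = "https://example-workspace.cloud.databricks.com"  # assumption
TOKEN = "dapiXXXXXXXX"  # assumption: personal access token or service principal token

payload = {
    "cluster_name": "etl-nightly",          # hypothetical cluster
    "spark_version": "14.3.x-scala2.12",    # pick a runtime supported by your workspace
    "node_type_id": "m5.xlarge",            # AWS instance type
    "autoscale": {"min_workers": 2, "max_workers": 8},
    "autotermination_minutes": 30,          # terminate idle clusters to control cost
    "custom_tags": {"team": "data-eng", "cost-center": "1234"},  # cost allocation tags
}

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("Created cluster:", resp.json()["cluster_id"])
```

Teams that standardize on Terraform typically keep the same cluster spec in the Databricks Terraform provider instead, so it is versioned alongside the rest of the infrastructure code.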
Hiring Alert! #immediatehiring for an Enterprise Architect.
Experience required: 20+ years
Skills (should have #healthcare domain experience):
➡ Java
➡ Spring
➡ Enterprise Architecture
➡ Azure Cloud
➡ Splunk
➡ Dynatrace or any other monitoring tool
Kindly share resumes to sandhya@mericaninc.com
#enterprisearchitecture #enterprisearchitect #java #spring #javaenterpriseedition #jee #springboot #javaspring #azurecloud #microsoftazure #splunk #dynatrace #datadog #monitoring #observability #architecture #enterpriselevel
Client: #centene
Title: #SRE #sitereliabilityengineer #devopsengineer
Location: #remote
Duration: Long term
Need on prime vendor W2 (USC, GC, GC-EAD, H4-EAD & H1T)

Overview:
• 49 tier-1 applications should have full DR with demanding restoration times. There is some resiliency today, but no DR. The team does have some architectural designs.
• SRE/Software Engineer with disaster recovery experience specifically.
• The application is AWS-based.
• Work with the application engineering team to implement DR capabilities in the application.

Must Have:
• Observability/monitoring tools: Grafana, Splunk, Dynatrace (Splunk will be useful for the first 3-6 months)
• Disaster recovery implementation capabilities
• Experience with private cloud deployment
• Kafka experience
• AWS cloud tech
• Deployment skills
• Route 53, Mongo, Kafka, Lambda, Kubernetes (see the Route 53 failover sketch after this post)
• Rancher, Axway is preferred

Day to Day:
• Engineers will be involved in designing and understanding the patterns, and then implementation will be done.

#If_You_are_interested_Drop_a_Message_to_me
#sitereliabilityengineer #sitereliabilityengineering #sitereliability #devops #devopsengineer #devopsjobs #devopsengineers #cloud #cloudengineer #aws #awscloud #awsdevops #azure #azurecloud #azuredevops #gcp #w2 #w2jobs #w2requirements #w2contract #w2only #w2hiring #w2roles
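Since the role is specifically about DR on AWS with Route 53, here is a minimal boto3 sketch of the DNS-failover piece only. The hosted zone ID, hostnames, and health check ID are placeholders, and this is an illustration rather than the client's actual design:

```python
# Hedged sketch: primary/secondary failover records in Route 53 with boto3.
# Zone ID, domain names, load balancer targets, and health check ID are placeholders.
import boto3

route53 = boto3.client("route53")

def upsert_failover_record(zone_id, name, value, role, set_id, health_check_id=None):
    """UPSERT a CNAME failover record; role is 'PRIMARY' or 'SECONDARY'."""
    record = {
        "Name": name,
        "Type": "CNAME",
        "TTL": 60,
        "SetIdentifier": set_id,
        "Failover": role,
        "ResourceRecords": [{"Value": value}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": [{"Action": "UPSERT", "ResourceRecordSet": record}]},
    )

# Primary region answers while its health check passes; Route 53 fails over otherwise.
upsert_failover_record("Z123EXAMPLE", "api.example.com",
                       "primary-lb.us-east-1.example.com",
                       "PRIMARY", "api-primary", health_check_id="hc-primary-example")
upsert_failover_record("Z123EXAMPLE", "api.example.com",
                       "secondary-lb.us-west-2.example.com",
                       "SECONDARY", "api-secondary")
```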
#JobAlert #Hiring #DevOps #AWS #Terraform #Jenkins #k8s #Kafka
Join Our Team as a #DevOps Engineer!
Location: #Remote
Experience Level: 12-15 years

Key Skills, Must Have:
- Expertise in setting up CI/CD pipelines with #Jenkins, #Artifactory, #Vault, #SonarQube, #GitHub, #Terraform, #Rancher, and #Harness.
- Deploying on #Kubernetes with #Rancher or #Harness.
- Hands-on experience with #Kafka MRC and monitoring tools like #LogicMonitor and #Splunk.
- Mastery of #AWS services, especially #S3, AWS #Artifactory, and #S3 replication.
- Knowledge of #Kubernetes (#K8s) and #S3/#Cloudian for shared infrastructure.
- Specialization in cloud infrastructure modernization, virtualization, data center setup, DR & BC strategies, and DevOps.
- Experience with on-premises and AWS-hosted data center operations.
- Proficiency with migration tools.
- Skilled in #Terraform Enterprise, #Ansible, and scripting with #Python.
- Managing Helm charts and deploying into Kubernetes (#K8s).
- Proficient with monitoring tools like #Splunk, #LogicMonitor, #SignalFX, and #Prometheus.
- Experience with microservices deployment and hybrid cloud/on-prem infrastructure.
- Intermediate knowledge of #Maven, #Gradle, #Java, and message brokers like #Kafka, #RabbitMQ, #ActiveMQ, and #AmazonKinesis.

Good to Have:
- Experience setting up CI/CD pipelines for streaming Flink-based applications.

Your Mission:
- Set Up CI/CD Pipelines: Create seamless pipelines for application deployment.
- Build Failover Infrastructure: Develop new shared services for on-premises failover, including S3, Kafka, and Data Store (see the S3 replication sketch after this post).
- Ensure Connectivity: Link new failover services with existing ones like Secret, Identity, LDAP, DNS, Artifactory, Jenkins, and Splunk.
- Automate Disaster Recovery: Implement and automate data replication between cloud and failover environments.
- Drive Continuous Improvement: Innovate and enhance DevOps processes for peak performance.
- Solve Complex Challenges: Use agile software development to tackle and resolve technical hurdles.

Apply Now! Share your resume to vivek.v@promantisinc.com
P.S. Like this opportunity? Share it ♻️ with your network. Thank you!

#DevOps #Automation #CI_CD #Kubernetes #AWS #Terraform #Jenkins #CloudInfrastructure #Hiring #TechJobs #ChicagoJobs #CareerOpportunity #rancher #CICD #Ansible #scripting #Leadership #RemoteWork #DigitalTransformation #linkedin #linkedinjobs #careergrowth #onboarding #shortlisting #jobalert #c2c #w2
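For the failover work the mission mentions (S3 replication between environments), here is a minimal boto3 sketch of configuring cross-bucket replication. Bucket names and the IAM role ARN are placeholders; replication to an on-prem S3-compatible store such as Cloudian would use a different mechanism, so treat this only as the AWS-side illustration:

```python
# Illustrative sketch: apply an S3 replication rule with boto3.
# Bucket names and the role ARN are placeholders; versioning must already
# be enabled on both the source and destination buckets.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="prod-artifacts-primary",  # assumption: source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",  # placeholder role
        "Rules": [
            {
                "ID": "replicate-to-dr",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},          # replicate the whole bucket
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::prod-artifacts-dr",
                    "StorageClass": "STANDARD_IA",
                },
            }
        ],
    },
)
print("Replication rule applied")
```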
Position: DevOps Engineer
Experience: 4 to 7 years (relevant experience should be good)
Location: Remote
Notice Period: Immediate to 15 days

Kindly note:
- Please do not share any Andhra candidates.
- Please check communication skills while screening, before sharing the CV.
- Kindly check the candidate's relevant experience in the respective skills.
- Please share the address with PIN code as well.
- Companies & projects should be listed in the CV with their durations.
- Please check that the mandatory skill set is mentioned in the CV.

Mandatory Skills:
- Working experience with AWS services such as EC2, EBS, Lambda, Route 53, EKS
- Working experience with Kubernetes, Kubernetes clusters, and Jenkins pipelines
- Working experience with automation through Terraform and Python
- Cluster monitoring, alerting, and metrics
- Working experience with various AWS services such as EC2, ELB, EKS, Lambda, EKS Fargate, ECS, S3, CloudFront, EBS, EFS, MSK, CloudWatch, Route 53, Route 53 Application Recovery Controller, Service Catalog
- Strive for continuous improvement and build continuous integration, continuous development, and continuous deployment
- Awareness of critical concepts in Agile principles
- MWAA (Managed Airflow), OpenSearch, SES, SNS, SQS, AWS VPC, network interfaces, VPC endpoints, and VPC endpoint services
- Working experience with automation tools: Jenkins, Python, Terraform
- Strong understanding of the container orchestration tool Kubernetes, including deployment of vanilla Kubernetes, Kubernetes control plane components and their troubleshooting, EKS IRSA, EKS node groups, namespaces, resource quotas, Calico CNI, network policies, OpenShift

Good to Have:
- Knowledge of Kafka architecture: MSK, Confluent Kafka
- Managing day-to-day Kafka operations, monitoring cluster health, and addressing system alerts promptly (see the Kafka health-check sketch after this post)
- Experience with Schema Registry, Kafka connectors, Cluster Linking, Schema Linking, Kafka security
- Managing clients' Confluent Kafka infrastructure
- Working experience with Amazon Managed Streaming for Apache Kafka (MSK)
- Skilled in troubleshooting complex issues within Kafka clusters, applying comprehensive diagnostic skills to identify root causes and implement effective solutions

Interested candidates can share their CV at payal.avsar@gmail.com
#devops #AWS #Kubernetes
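A hedged sketch of the kind of day-to-day Kafka health check the "Good to Have" items describe, using the confluent-kafka AdminClient. The bootstrap servers are placeholders, and an MSK or Confluent Cloud cluster would additionally need authentication settings in the client config:

```python
# Hypothetical example: basic Kafka cluster health check with confluent-kafka.
# Broker addresses are placeholders; real clusters typically also need
# security.protocol / SASL settings in the config dict.
from confluent_kafka.admin import AdminClient

admin = AdminClient({"bootstrap.servers": "broker1:9092,broker2:9092"})  # placeholder

# Cluster metadata: brokers that responded and the topics they host.
md = admin.list_topics(timeout=10)
print(f"Brokers online: {len(md.brokers)}")

for topic_name, topic_md in md.topics.items():
    if topic_md.error is not None:
        print(f"Topic {topic_name}: ERROR {topic_md.error}")
        continue
    under_replicated = [
        p.id for p in topic_md.partitions.values()
        if len(p.isrs) < len(p.replicas)   # fewer in-sync replicas than assigned replicas
    ]
    if under_replicated:
        print(f"Topic {topic_name}: under-replicated partitions {under_replicated}")
```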
DevOps Engineer ♾️ | Linux 🐧 | AWS ☁️ | Docker 🐳 | Kubernetes ☸️ | CI/CD 🚀 | Terraform 🏗️ | Ansible ⚙️ | Jenkins 🧑🔧 | Shell Scripting 💠 | Grafana ⛄ | Git & GitHub 🐙 | Prometheus ♨️
Latest Opportunity 🔔🔔🔔🔔🔔🔔🔥🔥🔥🔥 #devops #devopsengineer #devopstools #devopscommunity #90daysofdevops #awsdevopsengineer #90daysofdevopschallenge #90dayschallenge #trainwithshubham #awscloud
Dear LinkedIn members, we are looking for a *DevOps Engineer*.
Location: Gurgaon
Experience: 2-4 years

Must-Have: Hands-on experience with Jenkins, Groovy scripting, Ansible, Linux administration, AWS, and CloudFormation.

Skills required:
· Strong troubleshooting and problem-solving skills.
· Excellent communication and collaboration abilities.
· Ability to work in a fast-paced and dynamic environment.
· Proficiency in programming languages such as Python, Java, or Go.
· Knowledge of database technologies (SQL, NoSQL).
· Strong hands-on experience with Kafka, Kong, and nginx.
· Familiarity with monitoring and logging tools such as Prometheus, Splunk, the ELK stack, or similar (see the Python metrics sketch after this post).
· Strong knowledge of Linux/Unix systems fundamentals.
· Experience with CI/CD pipelines and related tools (Jenkins, GitLab CI/CD).
· Experience with cloud platforms such as AWS, Azure, or Google Cloud, and relevant certifications.
· Hands-on experience with containerization technologies (Docker, Kubernetes).
· Knowledge of infrastructure-as-code tools such as Terraform or CloudFormation.

Share your resume at Anshu.k@rediansoftware.com
#devopsengineer #devops #seniordevopsengineer #hiring #immediatejoiners #gurgaonjobs
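For the Prometheus familiarity this post asks about, a minimal assumed example (not from the posting) of exposing custom metrics from a Python service for Prometheus to scrape, using the prometheus_client library; the metric names and port are illustrative:

```python
# Hypothetical sketch: expose request count and latency metrics at /metrics.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["route"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency", ["route"])

def handle_request(route: str) -> None:
    """Simulated request handler that records count and latency per route."""
    with LATENCY.labels(route=route).time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.labels(route=route).inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request("/orders")
```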
Hello connections! Deltacubes is hiring a DevOps Architect. Please find the requirements below:
No. of Positions: 1
Location: Bengaluru (Onsite Role)
Experience: 9 to 15 years
Required Skills: DevOps, Helm charts, Grafana, Argo CD, Big Data, Terraform, Databases

Description:
- In-depth knowledge of GCP services and resources to design, deploy, and manage cloud infrastructure efficiently.
- Proficiency in Java, Golang, Shell, and Python scripting.
- Develop, maintain, and optimize Infrastructure as Code scripts and templates using tools like Terraform and Ansible, ensuring resource automation and consistency.
- Strong expertise in Kubernetes, HAProxy, and containerization technologies.
- Manage and fine-tune databases, including Neo4j, MySQL, PostgreSQL, and Redis cache clusters, to ensure performance and data integrity.
- Skill in managing and optimizing Apache Kafka and RabbitMQ to facilitate efficient data processing and communication.
- Design and maintain Virtual Private Cloud (VPC) network architecture for secure and efficient data transmission.
- Implement and maintain monitoring tools such as Prometheus, Zipkin, and Grafana.
- Utilize Helm charts and Kubernetes (K8s) manifests for containerized application management (see the sketch after this post).
- Proficient with Git, Jenkins, and Argo CD to set up and enhance CI and CD pipelines.
- Utilize Google Artifact Registry and Google Container Registry for artifact and container image management.
- Familiarity with CI/CD practices, version control, and DevOps methodologies.
- Strong understanding of cloud network design, security, and best practices.

#DevOps #DevOpsArchitect #ContinuousIntegration #ContinuousDelivery #Automation #InfrastructureAsCode #CI/CD #CloudNative #Containerization #Kubernetes #Docker #Microservices #AgileInfrastructure #DevOpsCulture #SiteReliabilityEngineering #ConfigurationManagement #DeploymentAutomation #MonitoringAndLogging #DevSecOps #CollaborativeOps
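A hedged sketch related to the Kubernetes management duties above: checking that deployments in a namespace have all replicas ready, using the official kubernetes Python client. The namespace name is an assumption for illustration:

```python
# Hypothetical example: report deployment readiness per namespace.
from kubernetes import client, config

config.load_kube_config()  # inside a cluster, use config.load_incluster_config()
apps = client.AppsV1Api()

namespace = "payments"  # assumption: an example namespace
for dep in apps.list_namespaced_deployment(namespace).items:
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    status = "OK" if ready >= desired else "DEGRADED"
    print(f"{dep.metadata.name}: {ready}/{desired} ready [{status}]")
```

The same check is what tools like Argo CD surface as application health; a script like this is only useful for quick ad-hoc audits or as a building block in a monitoring job.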