VMC Soft Technologies is looking for a DevOps Engineer with SRE experience in Sunnyvale, CA

Title: DevOps Engineer with SRE
Location: Sunnyvale, CA
Contract: W2 Full-time

Job Description:
• Experience in infrastructure management, such as managing storage, compute, and network resources through automation of SRE reports
• Experience converting legacy applications to Docker/Kubernetes and deploying them in cloud environments
• Good understanding of enterprise-level vulnerability management
• Hands-on experience in monitoring, logging, and capacity planning
• Experience working in global organizations across diverse cultures, languages, and time zones
• Excellent debugging, critical-thinking, and communication skills
• Independent problem-solving and analytical skills

For more details: dhamu@vmcsofttech.com / 602-666-1741
Apply Now At: https://lnkd.in/gRApKNeJ

#devops #cloud #aws #programming #cloudcomputing #technology #developer #linux #python #coding #azure #software #iot #cybersecurity #kubernetes #it #css #javascript #java #devopsengineer #tech #ai #datascience #docker #softwaredeveloper #webdev #machinelearning #programmer #bigdata #security
VMC Soft Technologies, Inc’s Post
-
Hello connections! Hiring a #DevOpsEngineer with 3-6 years of experience. Location: Bangalore
Primary Skills: DevOps (expert), GCP (expert)
Good to have: #DevOps, #MicroServices, #GCP

Job Responsibilities:
• Bridging the gaps between the core infra, security, QA, and development teams.
• Owning the end-to-end availability, performance, and capacity of applications and their infrastructure, and creating/maintaining the corresponding observability with Prometheus/New Relic/ELK/Loki.
• Providing 24x7 infra and app support, building processes, and documenting "tribal" knowledge along the way.
• Mentoring and training L1 engineers and continually improving app and infra support processes.
• Managing application deployment and GKE platforms: automating and improving development and release processes.
• Creating, managing, and maintaining datastores and data-platform infra using IaC.
• Owning and onboarding new applications through the production readiness review process.
• Managing SLOs/error budgets/alerts and performing root cause analysis for production errors.
• Working with the Core Infra, Dev, and Product teams to define SLOs/error budgets/alerts.
• Working with the Dev team to gain an in-depth understanding of the application architecture and its bottlenecks.
• Identifying observability gaps in applications and infrastructure and working with stakeholders to fix them.
• Managing outages, doing detailed RCAs with developers, and identifying ways to avoid repeat incidents.
• Automating toil and repetitive work.

What We're Looking For:
• 3 to 6 years of experience managing high-traffic, large-scale microservices and infrastructure, with excellent troubleshooting skills.
• Experience troubleshooting, managing, and deploying containerized environments using Docker/containerd and Kubernetes is a must.
• Must be proficient with Helm, with experience in a service mesh such as Istio or Linkerd.
• Must be very hands-on in managing and troubleshooting Kubernetes environments.
• Extensive experience with Linux administration and a good understanding of the various Linux kernel subsystems (memory, storage, network, etc.).
• Extensive experience with DNS, TCP/IP, UDP, gRPC, routing, and load balancing.
• Expertise in GitOps and Infrastructure as Code tools such as Terraform, and in configuration management tools such as Chef, Puppet, SaltStack, or Ansible.
• Expertise in Google Cloud (GCP) and/or other relevant cloud infrastructure platforms such as AWS or Azure.
• Experience building CI/CD pipelines with tools such as Jenkins, GitLab, Spinnaker, or Argo.
• Experience with multiple datastores is a plus (Kafka/RabbitMQ, Redis, Elasticsearch).

Hiring Process:
Round 1 - Screening
Round 2 - Technical
Round 3 - Technical

Share your resume with sohani.sahoo@thecodekart.com
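The SLO/error-budget responsibilities above reduce to simple arithmetic: an SLO target implies a fixed budget of allowed failures, and RCA/alerting work tracks how much of it has been burned. A minimal sketch (the 99.9% target and request counts are hypothetical, not from the posting):

```python
import math

def error_budget(slo_target: float, total_requests: int, failed_requests: int):
    """Compute error-budget figures for a request-based SLO.

    slo_target: e.g. 0.999 means "99.9% of requests must succeed".
    Returns (allowed_failures, fraction_of_budget_consumed).
    """
    allowed_failures = math.floor((1 - slo_target) * total_requests)
    consumed = failed_requests / allowed_failures if allowed_failures else float("inf")
    return allowed_failures, consumed

# Hypothetical month: 10M requests at a 99.9% SLO leave a 10,000-failure budget;
# 2,500 failed requests consume a quarter of it.
allowed, consumed = error_budget(0.999, 10_000_000, 2_500)
print(allowed, consumed)  # → 10000 0.25
```

When the consumed fraction approaches 1.0, teams typically freeze risky releases until the budget window resets.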
-
#hiring
Role: DevOps Specialist
Industry: Fintech
Experience: 10-15 Years
Location: Bangalore
Primary Skills: DevOps, AWS, Terraform, Kubernetes, Helm Charts, EKS

• 10+ years of overall experience, including 5+ years managing DevOps CD platforms and enabling engineering teams to consume them.
• 3+ years of team management experience leading/managing a DevOps team.
• Must be proficient in the following tools, both keeping these platforms up and running and enabling others to build solutions on them: Jenkins, Maven, GitHub, Nexus, Artifactory, Terraform, Ansible, Python, Groovy, Bash, PowerShell, shell scripting.
• AWS compute and networking services, including but not limited to EC2, ECS, EKS, and Lambda setup.
• Kubernetes and AWS-managed K8s services.
• Good understanding of the networking knowledge needed for cloud services.
• Experience implementing multi-region replication and disaster-recovery guidelines for any cloud.
• Understanding of monitoring, security, and cost-optimization approaches for cloud services.
• OpenShift 4.3 or above preferred.
• Basic understanding of monolithic and microservices-based application architectures and best practices.
• Strong expertise in managing and supporting Redis, Stream Processing as a Service (Kafka), NoSQL as a Service (Cassandra), and the ELK stack.
• Strong working knowledge of Linux/Windows operating systems.

Apply to Monica@tezra.in or WhatsApp: 8272938119
#Devops #AWS #terraform #kubernetes #Ansible #Jenkins #Maven #GitHub #Nexus #Artifactory #Python #Groovy #Bash #PowerShell #shellscripting
-
Job Title: DevOps Engineer
Location: San Francisco, CA
Must have experience with GovCloud
Pay rate: $60-70/hr.

Job Description:
• Bachelor's degree in Computer Science or another technical degree
• Typically requires 3+ years of relevant technical or business work experience
• General familiarity with networking, security, and automated deployments of middleware in AWS GovCloud using IaC
• Maintains and improves existing build and deployment processes across all products
• Collaborates with application developers, QA engineers, product owners, and others to create deployment best practices
• Enforces best practices for security and reliability across ITS
• Designs and deploys new application components and infrastructure
• Implements and maintains a continuous integration environment
• Supports and troubleshoots product and infrastructure issues in production environments
• Writes configuration scripts for automation tools and automates recurring tasks
• Actively monitors and administers cloud-hosted applications and builds integrations
• Participates in engineering design and deployment planning

Complexity:
• Works with partners within 1-2 business functions to align technology solutions with business strategies
• Supports several moderately complex business initiatives
• Serves as a project team member

#govcloud #devops #cloud #aws #programming #cloudcomputing #technology #developer #linux #python #coding #azure #software #iot #cybersecurity #kubernetes #it #css #javascript #java #devopsengineer #tech #ai #datascience #docker #softwaredeveloper #webdev #machinelearning #programmer #bigdata #security #softwareengineer #html #agile #softwaredevelopment #microsoft #webdeveloper #code #training #github #sysadmin #data #devopstraining #ui #webdevelopment #automation #softwareengineering #cloudsecurity #coder #nodejs #jenkins #cisco #ux #business #computerscience #development #scrum #amazonwebservices #networking #devopstools #windows
-
Tech YouTuber (140K+ Subscribers) | 25k Connections | Career Coach | Interview Preparation | Mock Interviews
Do you want DevOps interview preparation and mock interviews? Then #DM me or visit https://lnkd.in/dyYi7rwr

General day-to-day activities of DevOps professionals:
• Monitor infrastructure status using Grafana and CloudWatch
• Check Jira ticket status and work on pending tasks
• Handle production release management, if any
• Set up CI/CD pipelines according to project requirements
• Follow Git branching-strategy best practices in CI/CD for deployments
• Write Dockerfiles tailored to the application
• Create/manage infra on AWS using Terraform
• Add new users or grant access in IAM as requested
• Always look for ways to automate tasks and make enhancements wherever there is an opportunity
• Attend daily standups, client meetings, and internal team meetings
• Create infrastructure-related documents in Confluence

#aws #azure #DevOps #software #hiring #jobs #CFBR #REPOST
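The "automate the toil" item in day-to-day lists like the one above usually starts small: a script that turns a manual check into a pass/fail signal. A hedged sketch (the 80% threshold and stdout alerting are illustrative; real setups would page via Slack/PagerDuty or emit a CloudWatch metric):

```python
import shutil

# Hypothetical alert threshold, in percent of filesystem capacity used.
THRESHOLD_PCT = 80

def usage_pct(total: int, used: int) -> float:
    """Return used space as a percentage of total capacity."""
    return 100 * used / total

def check_path(path: str = "/") -> bool:
    """True if the filesystem holding `path` exceeds the alert threshold."""
    du = shutil.disk_usage(path)
    return usage_pct(du.total, du.used) > THRESHOLD_PCT

if __name__ == "__main__":
    # Cron-friendly: print a one-word status a monitoring wrapper can grep.
    print("ALERT" if check_path("/") else "OK")
```

Keeping the arithmetic in a pure function (`usage_pct`) makes the check unit-testable without touching a real filesystem.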
-
Mail: praneetha.s@briskwinit.com; Hiring: SAP SF-Compensation consultants, Java Spring Batch developers, MSD Power Apps technical consultants, Siebel developers, ServiceNow developers
#hiring #azure #devops #cloudengineer #azureinfra #jobopenings #qataropenings #onsiteopenings #briskwinit

Experience: 16+ yrs
Mandatory skills: architect experience & infra experience

The Azure DevOps engineer job description includes the following skills:
• Proficiency in Azure cloud services, including virtual machines, containers, networking, and databases.
• Experience designing, implementing, and managing Continuous Integration/Continuous Deployment (CI/CD) pipelines using Azure DevOps, Jenkins, or similar tools.
• Knowledge of Infrastructure as Code tools like Terraform, ARM templates, or Azure Bicep for automating infrastructure deployment.
• Expertise in version control systems, particularly Git, for managing and tracking code changes.
• Strong YAML, PowerShell, Bash, or Python scripting skills for automating tasks and processes.
• Experience with monitoring and logging tools like Azure Monitor, Log Analytics, and Application Insights for performance and reliability management.
• Understanding of security best practices, including role-based access control (RBAC), Azure Policy, and managing secrets with tools like Azure Key Vault.
• Ability to collaborate effectively with development, operations, and security teams, with strong communication skills to drive DevOps culture.
• Knowledge of containerization technologies like Docker and orchestration platforms like Kubernetes on Azure Kubernetes Service (AKS).
• Strong problem-solving abilities to troubleshoot and resolve complex technical issues related to DevOps processes.
• Operating systems: Linux/Windows knowledge and bash/shell scripting; hands-on experience with Ansible or similar tools, including writing YAML playbooks and ad-hoc commands and managing dynamic and static inventory files.
• Code quality assessment: use code analysis tools to assess the quality of code written by developers; identify code smells, anti-patterns, and areas for improvement.
• Security scanning: conduct security scans to identify vulnerabilities (e.g., OWASP Top Ten) in the codebase.
• Collaborate with development teams to remediate security issues.
• Performance optimization: analyze code performance using profiling tools; optimize bottlenecks, memory leaks, and resource-intensive code.
• Static code analysis: configure and run static analysis tools; interpret results and provide actionable recommendations.
• Continuous integration (CI): integrate code analysis tools into CI/CD pipelines; ensure code quality checks are part of the automated build process.
• Technical debt management: identify technical debt (complex code, duplicated logic, etc.); collaborate with development teams to prioritize refactoring efforts.
• Documentation and reporting: document findings, recommendations, and improvements; generate regular reports on code quality metrics.

If interested, share your updated profile with praneetha.s@briskwinit.com with the subject "Azure Infra - Onsite".
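The Ansible dynamic-inventory skill mentioned in the requirements above follows a documented contract: Ansible invokes an executable inventory script with `--list` and expects JSON mapping groups to hosts, with per-host variables under `_meta.hostvars`. A minimal sketch (the host names, group names, and addresses are hypothetical; a real script would query a CMDB or cloud API instead of returning literals):

```python
#!/usr/bin/env python3
"""Minimal Ansible dynamic-inventory sketch (hosts/groups are hypothetical)."""
import json
import sys

def build_inventory() -> dict:
    # In practice this data would come from a cloud API or CMDB query.
    return {
        "web": {"hosts": ["web-01", "web-02"], "vars": {"http_port": 8080}},
        "db": {"hosts": ["db-01"]},
        # Supplying hostvars under _meta lets Ansible skip per-host --host calls.
        "_meta": {"hostvars": {"web-01": {"ansible_host": "10.0.0.11"}}},
    }

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory(), indent=2))
    else:
        # --host <name>: everything is already in _meta, so return an empty dict.
        print(json.dumps({}))
```

Usage would look like `ansible-playbook -i inventory.py site.yml`, with the script marked executable so Ansible treats it as dynamic rather than static.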
-
The future of careers in DevOps is promising, with growing demand for professionals who can bridge the gap between software development and IT operations: DevOps engineers, automation specialists, cloud architects, and site reliability engineers. Professionals skilled in DevOps practices, automation, cloud technologies, and continuous integration/continuous deployment (CI/CD) pipelines will remain in high demand, with ample opportunities for career growth and advancement. Continuous learning and staying current with emerging technologies will be key to thriving in the evolving DevOps career landscape. https://lnkd.in/gBQs6Svh
#devops #coding #csharp #javascript #codelife #programmer #webdeveloper #softwaredeveloper #fullstack #softwaredevelopment #programmers #cloud #sql #hiring #automation #webops #jobs #location
-
🔍 IT Talent Acquisition Specialist | 👥 Connecting Innovative Minds with Industry-Leading Companies | 🌈D&I Talent Specialist
Good Opportunity
We are looking for a Platform Engineering SRE with 10-12 years of experience in Azure, DevOps, SRE, infrastructure, and Kubernetes. The position is based in Hyderabad.

Key Qualifications:
- Expertise in Docker or containers
- Experience with orchestration technologies (Kubernetes, Docker Swarm, etc.)
- Proficiency in Git and Git pipelines (GitLab CI)
- Experience with Infrastructure as Code (IaC) tools like Terraform
- Familiarity with monitoring tools (Datadog, Prometheus, New Relic, Splunk)
- Strong scripting background

If you meet these qualifications, please share your profile with krithika@livecjobs.com.
#SRE #Azure #DevOps #Kubernetes #PlatformEngineer #Liveconnections #Livec #WePlacePeopleFirst
-
We are #hiring a Principal Kubernetes Engineer!

Responsibilities:
- Architect, design, and implement managed Kubernetes services using Cluster API (CAPI) on OpenStack, focusing on creating scalable, robust, and efficient cloud-native infrastructure.
- Design, implement, and deploy microservices using GoLang or NodeJS, creating high-performance, scalable applications tailored to enterprise needs.
- Implement virtual clusters (vclusters) for Kubernetes, enhancing security, isolation, and multi-tenancy by allowing segregated environments within the same physical cluster.
- Handle Kubernetes resource management, including setting quotas and optimizing the use of compute, memory, and storage resources to ensure efficient operations.
- Implement autoscaling for nodes and workloads using the Horizontal Pod Autoscaler (HPA), Vertical Pod Autoscaler (VPA), and Cluster Autoscaler (CA) to dynamically adjust resources based on demand.
- Lead the development of secure architectures, focusing on advanced security contexts, Pod Security Policies (PSP), and robust multi-tenancy solutions to ensure safe and compliant operations.

Required Skills and Qualifications:
- Deep understanding of Kubernetes internals, such as the API server, etcd, the scheduler, and core networking protocols
- Extensive experience scaling Kubernetes clusters, demonstrating the ability to manage high-load systems and implement effective scaling strategies
- Strong background in architecting and deploying managed Kubernetes using CAPI
- Extensive experience in microservice development using GoLang or NodeJS
- In-depth knowledge of virtual clusters, security, and isolation in Kubernetes
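The HPA autoscaling mentioned in the responsibilities above is driven by one core formula from the Kubernetes documentation: desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue). A small sketch of that arithmetic (the CPU figures below are hypothetical examples, not values from the posting):

```python
import math

def hpa_desired_replicas(current_replicas: int,
                         current_metric: float,
                         target_metric: float) -> int:
    """Core Horizontal Pod Autoscaler formula:
    desired = ceil(current_replicas * current_metric / target_metric).
    Metrics might be average CPU utilization percentages, for example.
    """
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% target scale out to 6 pods.
print(hpa_desired_replicas(4, 90, 60))  # → 6
```

The real controller layers tolerances, stabilization windows, and min/max replica bounds on top of this ratio, but the ratio itself is what the tuning work described above is reasoning about.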