Know anyone looking for a great opportunity to serve our country's mission? Here are some of our open positions!
DevOps Engineer - All levels open (Jr, Mid, Sr, Sr Lead) - https://loom.ly/GKufQAM
Cloud Security Engineer - All levels open (Jr, Mid, Sr, Sr Lead) - https://loom.ly/eB9K8h0
Cloud Systems Administrator - All levels open (Jr, Mid, Sr, Sr Lead) - https://loom.ly/lCMxmDA
Data Brokering Engineer - All levels open - https://loom.ly/-fpZmyA
Elasticsearch Developer - https://loom.ly/khgZKKo
No prank here! With platinum benefits, a 10% 401(k) with immediate vesting, plus amazing teams and programs, you'll never be fooled at EverWatch! #TechJobs #Hiring #TechRecruiting #AprilFoolsDay
EverWatch’s Post
More Relevant Posts
-
We are #hiring
Role: AWS Architect
Location: Bristol County, MA
Share resumes with sofia@alchemysolutions.us
Skills: Data Science, Artificial Intelligence, Amazon EC2
• Solid understanding of AWS services: in-depth knowledge of services such as EC2, VPC, and IAM, including their use cases, configurations, and best practices.
• Cloud architecture design: ability to design highly available, fault-tolerant, and scalable cloud architectures that align with business requirements and follow AWS best practices.
• Security and compliance: understanding of AWS security services (IAM, VPC, security groups) and compliance requirements.
• Networking expertise: knowledge of networking concepts such as VPCs, subnets, route tables, gateways, and AWS Cloud WAN for designing secure and efficient network architectures.
• Automation and scripting: proficiency with infrastructure-as-code tools such as Terraform for automating infrastructure provisioning and management. Terraform Enterprise and Harness are nice to have.
• Monitoring and logging: familiarity with AWS monitoring and logging services such as CloudWatch, CloudTrail, and AWS Config for monitoring resources, analyzing logs, and maintaining compliance.
• Migration and hybrid cloud: experience migrating private-cloud workloads to AWS and designing hybrid cloud architectures.
• Containerization and serverless: knowledge of containerization technologies such as Docker and AWS ECS/EKS, as well as serverless architectures using AWS Lambda and AWS Step Functions.
• Problem-solving and troubleshooting: strong analytical and problem-solving skills to troubleshoot and resolve complex issues in a cloud environment.
• Continuous learning: ability to stay current with the latest AWS services, features, and best practices through continuous learning and professional development.
Additionally, AWS certifications like AWS Certified Solutions Architect - Associate and AWS Certified Solutions Architect - Professional can validate your skills and knowledge in the AWS ecosystem. #awsarchitect #cloudarchitect #corptocorp #c2c #benchsale #c2crequirement
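Since the role above emphasizes IAM and security best practices alongside scripting, here is a minimal sketch in Python of assembling a least-privilege IAM policy document. The bucket name, actions, and helper name are illustrative assumptions, not part of the posting:

```python
import json


def make_readonly_s3_policy(bucket):
    """Build a least-privilege IAM policy document granting read-only
    access to a single (hypothetical) S3 bucket. The two Resource ARNs
    are needed because ListBucket applies to the bucket itself while
    GetObject applies to the objects inside it."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }
    return json.dumps(policy, indent=2)


print(make_readonly_s3_policy("example-app-logs"))
```

The JSON string could then be attached to a role or user via the console, CLI, or an IaC tool such as Terraform.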
-
Hi folks,
Role: Azure DevOps Engineer
Location: Remote
Duration: Long-term contract
Experience: 13+ years
NOTE: We work on W2 only.
MANDATORY (in order of importance):
· An expert in all things Terraform.
· An expert in all things DevOps.
· Can navigate highly political work environments.
OPTIONAL (in order of importance):
· At least intermediate, and preferably expert, at architecting parts of (or full-blown) applications. The operative word here is architect. It doesn't matter whether they're app-centric, data-centric, or infra-centric; they need to know how to piece together software, data, and infrastructure patterns that support applications and are repeatable. I considered making this mandatory, but IMO that would make this position virtually impossible to fill.
· Intermediate in these Azure resources: Cosmos DB, Redis Cache, PostgreSQL, Event Hub, Service Bus, and Azure VMs.
· Intermediate in at least one scripting language.
Email: upendra.peetla@monosage.com
#AzureDevOpsEngineer #remote #cfbr
-
Hi all,
Looking for an MLOps Engineer (Remote). Please find the JD below and share a suitable resume with user11@datacapitalinc.com
Key Responsibilities:
• Work with Walmart's AI/ML Platform Enablement team within the eCommerce Analytics team. The broader team is currently on a transformation path, and this role will be instrumental in enabling that vision.
• Work closely with data scientists to productionize models and maintain them in production.
• Deploy and configure Kubernetes components for the production cluster, including API gateway, ingress, model serving, logging, monitoring, cron jobs, etc. Improve the model deployment process for MLEs with faster builds and simplified workflows.
• Be a technical leader on projects across platforms and a hands-on contributor to the platform's overall architecture.
• Handle system administration, security compliance, and internal tech audits.
• Lead operational excellence initiatives in the AI/ML space, including efficient use of resources, identifying optimization opportunities, and forecasting capacity.
• Design and implement different flavors of architecture to deliver better system performance and resiliency.
• Develop capability requirements and a transition plan for the next generation of AI/ML enablement technology, tools, and processes to enable Walmart to efficiently improve performance at scale.
Tools/Skills (hands-on experience is a must):
• Administering Kubernetes: ability to create, maintain, scale, and debug production Kubernetes clusters, plus in-depth knowledge of Docker.
• Ability to transform designs from the ground up and lead innovation in system design.
• Deep understanding of data center architectures, networking, storage solutions, and system performance at scale.
• Experience with at least one managed Kubernetes offering (EKS/GKE/AKS) or on-prem Kubernetes (native Kubernetes, Gravity, MetalK8s).
• Programming experience in Python, Node, Golang, or Bash.
• Ability to use observability tools (Splunk, Prometheus, Grafana) to inspect logs and metrics and diagnose issues within the system.
• Experience with Seldon Core, MLflow, Istio, Jaeger, Ambassador, Triton, PyTorch, and TensorFlow/TF Serving is a plus.
• Experience with distributed computing and deep learning technologies such as Apache MXNet, CUDA, cuDNN, and TensorRT.
• Experience hardening a production-level Kubernetes environment (memory/CPU/GPU limits, node taints, annotations/labels, etc.).
• Experience with Kubernetes cluster networking and Linux host networking.
• Experience scaling infrastructure to support high-throughput, data-intensive applications.
• Background in automation and monitoring platforms, MLOps, and configuration management platforms.
Education & Experience:
• 5+ years of relevant experience in roles with responsibility for data platforms and data operations, dealing with large volumes of data in cloud-based distributed computing environments.
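The Kubernetes hardening work mentioned above (memory/CPU/GPU limits, node taints, labels) largely comes down to writing explicit resource specs. A small Python sketch that builds such a container spec as a plain dict; the default values, image name, and use of the NVIDIA device-plugin resource are illustrative assumptions:

```python
def hardened_container_spec(name, image, cpu="500m", memory="512Mi", gpu=0):
    """Sketch of a Kubernetes container spec with explicit resource
    requests and limits. Defaults are placeholders, not recommendations."""
    limits = {"cpu": cpu, "memory": memory}
    if gpu:
        # GPUs are scheduled via the NVIDIA device plugin's extended
        # resource; they cannot be overcommitted, so the limit alone
        # is what matters (requests are implied equal).
        limits["nvidia.com/gpu"] = str(gpu)
    return {
        "name": name,
        "image": image,
        "resources": {
            "requests": {"cpu": cpu, "memory": memory},
            "limits": limits,
        },
    }


# Hypothetical model-serving container requesting one GPU.
spec = hardened_container_spec("model-server", "example/serving:1.0", gpu=1)
```

In practice the same structure would be emitted as YAML in a Deployment manifest or generated by a Helm chart.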
-
#hiring Azure #DevOps Engineer (#Remote). Contact: sivak@arkhyatech.com
Key skills: #Azure, #Terraform, #Python, #PySpark
Technical Skills
1. Cloud Platform Proficiency: mastery of cloud platforms such as AWS, Azure, or Google Cloud Platform, including their architecture, services, and security features.
2. Automation and Configuration Management: skills in automating deployment, scaling, and management of infrastructure using tools like Ansible, Chef, Puppet, and Terraform.
3. Programming and Scripting: proficiency in languages such as Python, PySpark, or Bash for scripting and automation.
4. Infrastructure as Code (IaC): knowledge of IaC tools like Terraform or CloudFormation to manage and provision infrastructure through code.
5. Continuous Integration/Continuous Deployment (CI/CD): experience with CI/CD tools like Jenkins or GitLab CI to automate the software release process.
6. Containerization and Orchestration: familiarity with Docker for containerization and Kubernetes for orchestration.
7. Monitoring and Logging: skills in using monitoring tools like Prometheus, Grafana, and the ELK stack for logging and monitoring system performance.
Soft Skills
1. Collaboration and Communication: ability to work effectively with cross-functional teams, including developers, operations, and business stakeholders.
2. Problem-Solving: strong analytical and problem-solving skills to troubleshoot and resolve issues quickly.
3. Adaptability: willingness to continuously learn and adapt to new technologies and methodologies.
On-Premises Specific Skills
1. Server Management: knowledge of managing and configuring Linux and Windows servers.
2. Networking: understanding of network configurations, protocols, and security measures.
Cloud-Specific Skills
1. Cloud Security: ensuring security best practices are followed in cloud environments.
2. Cost Management: ability to manage and optimize cloud costs effectively.
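The CI/CD experience called for above boils down to one control-flow idea: later stages are gated on earlier ones succeeding. A toy Python sketch of that gating (the stage names and steps are hypothetical, and real tools like Jenkins or GitLab CI express this declaratively):

```python
def run_pipeline(stages):
    """Toy model of CI/CD stage gating: run named stages in order
    and stop at the first failure. Returns the stages that completed."""
    completed = []
    for name, step in stages:
        if not step():
            break  # a failed stage blocks everything after it
        completed.append(name)
    return completed


# Hypothetical pipeline: the failing "test" stage gates "deploy".
result = run_pipeline([
    ("build", lambda: True),
    ("test", lambda: False),
    ("deploy", lambda: True),
])
```

Here `result` contains only `"build"`: the deploy stage never runs because the test stage failed, which is exactly the safety property a CI/CD pipeline provides.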
-
Hi all, greetings!!
We are hiring a Senior Azure Deployment Engineer (6+ years of relevant experience).
Automating and streamlining development and deployment: Azure implementation engineers focus on automating and optimizing development and deployment processes within the Azure environment. They work with CI/CD pipelines to ensure efficient and reliable application delivery and contribute to the seamless integration of code changes, testing, and deployment.
Infrastructure provisioning and configuration: they automate infrastructure provisioning using Infrastructure as Code (IaC) tools, defining and managing Azure resources (such as virtual machines, storage accounts, and networking components) through code. Automating these tasks enhances scalability, consistency, and reproducibility.
Cloud computing knowledge: Azure engineers need a solid understanding of cloud computing concepts and work with cloud-based services to ensure efficient utilization of Azure resources.
Microsoft Azure platform expertise: proficiency in various Azure services is essential:
• Compute: virtual machines, containers, and serverless computing.
• Storage: storage accounts, blobs, queues, and file shares.
• Networking: virtual networks, subnets, and security groups.
• Security and identity: access controls, Azure AD, and RBAC.
• Databases: Azure SQL Database, Cosmos DB, and other database services.
• Analytics and monitoring: Azure Monitor, Log Analytics, and Application Insights.
Scripting and programming skills: proficiency in scripting languages such as PowerShell and Python is valuable; familiarity with C# aids in developing custom solutions and automating tasks.
DevOps practices: Azure engineers embrace DevOps principles:
• Continuous Integration (CI): automating code integration and testing.
• Continuous Deployment (CD): automating application deployment.
• Infrastructure as Code (IaC): defining infrastructure using code.
Security and compliance awareness: they understand Azure's security best practices and compliance standards; safeguarding cloud environments and data is a critical responsibility.
If interested, please share your updated resume with hiring@clientservertech.com
#Azuredeploymentengineer #immediatejoiner #hiringimmediately
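At its core, the RBAC work mentioned above is a mapping from roles to allowed actions. A minimal Python sketch of that idea; the role names are loosely modeled on Azure built-in roles, but the action sets here are hypothetical simplifications:

```python
# Hypothetical role -> allowed-actions mapping, loosely inspired by
# Azure RBAC built-in roles. Real Azure role definitions are far
# more granular (provider/resource/action strings).
ROLES = {
    "Reader": {"read"},
    "Contributor": {"read", "write"},
    "Owner": {"read", "write", "assign"},
}


def is_authorized(role, action):
    """Return True if the given role permits the action;
    unknown roles get no permissions (deny by default)."""
    return action in ROLES.get(role, set())
```

Deny-by-default for unknown roles mirrors how cloud access control is expected to fail closed rather than open.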
-
Hello everyone!! I am #hiring an AWS Architect.
Role: AWS Architect
Location: Charlotte, NC (Day 1 onsite)
• Understanding of cloud architecture, especially AWS.
• Managing and monitoring applications in the cloud.
• Deep-dive performance management experience using Splunk/Dynatrace.
• Some AIOps experience for the cloud platform would be helpful.
• Creating, building, and sustaining the company's cost-effective, scalable cloud systems.
• Understanding the company's business goals and developing cloud-based solutions to support them.
• Driving digital transformation by moving legacy systems to the cloud to increase organizational efficiency.
• Preserving the security of cloud environments while avoiding outages and security breaches.
• Assessing the risk posed by external platforms or frameworks.
• Finding ways to digitize routine processes and improve business operations.
• Creating, managing, and designing internal cloud applications for the company.
• Moving internal operations and data to cloud architecture.
• Minimizing data loss and downtime.
• Keeping up to date with cloud computing best practices and enhancing the organization's cloud infrastructure.
• Interacting with internal groups, including IT, operations, and sales.
• Building applications and interacting with stakeholders to satisfy project requirements.
• Recommending hardware and software to the organization according to project and organizational needs.
Please share your resume with Yukti Tulani at yukti@siraconsultinginc.com
#hiring #Onsite #Aws #architect #architecture
-
Hi all, I am hiring for a full-time, remote AWS DevOps Engineer. Please review the job description.
Responsibilities:
• Design, implement, and maintain continuous integration/continuous deployment (CI/CD) pipelines for Databricks-based data applications and workflows.
• Automate the build, test, and deployment processes using tools like Jenkins, GitLab CI, or AWS CodePipeline.
• Ensure smooth deployment of code changes to different environments (development, staging, production) with minimal downtime.
• Develop and maintain Infrastructure as Code (IaC) using tools like Terraform, AWS CloudFormation, or Ansible to manage AWS resources and Databricks clusters.
• Automate the provisioning, configuration, and scaling of infrastructure components, ensuring consistency across environments.
• Implement version control for infrastructure code and maintain modular, reusable IaC templates.
• Manage and optimize AWS resources such as EC2, S3, Lambda, RDS, and VPCs in conjunction with Databricks clusters to ensure high availability and cost-effectiveness.
• Implement best practices for security, monitoring, and logging in the cloud environment.
• Ensure that infrastructure is scalable, resilient, and aligned with business continuity and disaster recovery plans.
Contact: Kartik Parashar, kartikp@vbeyond.com
#Fulltime #remote #usjobs #unitedstates #ec2 #s3 #RDS #AWS #amazonwebservices #AWSDevops #engineer #cicd
-
DVA is not associated with this job posting.
Senior Software Engineer (SRE/DevOps) https://lnkd.in/ebGCD_EH
The Technology
We know that if we have a DevOps team, we aren't practicing DevOps 🙂 both titles are listed to make it clear that we're looking for a multi-position player who's comfortable with application engineering AND infrastructure.
• Candidates should have a strong understanding of cloud architecture, including the major cloud providers (AWS, GCP, etc.).
• Candidates should understand the underlying networking and security considerations when developing the architecture of our deployment environments.
• Candidates should have a strong understanding of relational databases (PostgreSQL) and be comfortable optimizing and advising the broader engineering team on optimization techniques to ensure the data layer of our deployed services runs smoothly.
• Candidates should have a strong understanding of authentication and authorization frameworks such as IAM, security groups, and RBAC.
• Candidates should have experience with Kubernetes and be able to point to deployments they have architected or managed.
• Candidates should have a strong understanding of Kubernetes' operating model and be able to explain the requirements for designing deployments for new applications.
#innovation #management #digitalmarketing #technology #creativity #futurism #startups #marketing #socialmedia #socialnetworking #motivation #personaldevelopment #jobinterviews #sustainability #personalbranding #education #productivity #travel #sales #socialentrepreneurship #fundraising #law #strategy #culture #fashion #business #networking #hiring #health #inspiration
-
#hiring DevOps Engineer with Databricks on AWS. Full-time candidates only; no C2C.
Email resumes to ekta.Khosla@infinite.com
Location: Dallas, TX or Tampa, FL
• Infrastructure Automation: designing, implementing, and maintaining automated infrastructure provisioning and configuration management for Databricks clusters on AWS using tools like Terraform, CloudFormation, or Ansible.
• Continuous Integration/Continuous Deployment (CI/CD): developing and maintaining CI/CD pipelines for Databricks workloads on AWS, ensuring automated testing, deployment, and monitoring of data pipelines and analytics solutions.
• Cluster Management: managing Databricks clusters on AWS, including provisioning, scaling, and optimization for performance, cost-efficiency, and reliability.
• Monitoring and Logging: implementing monitoring and logging solutions for Databricks clusters and workloads on AWS using tools like CloudWatch, Prometheus, Grafana, or the ELK stack to ensure visibility into system performance and health.
• Security and Compliance: implementing security best practices for Databricks deployments on AWS, including IAM policies, encryption, network security, and compliance with data privacy regulations.
• Backup and Disaster Recovery: implementing backup and disaster recovery strategies for Databricks data and workloads on AWS to ensure data integrity and business continuity.
• Cost Optimization: optimizing Databricks usage and AWS infrastructure costs by right-sizing clusters, implementing cost allocation tags, and monitoring resource utilization.
• Collaboration and Documentation: collaborating with data engineers, data scientists, and other stakeholders to understand requirements and provide infrastructure support; documenting infrastructure configurations, processes, and best practices.
• Troubleshooting and Support: providing troubleshooting and support for Databricks-related issues, working closely with AWS support and Databricks technical support teams as needed.
• Knowledge Sharing: sharing knowledge and best practices with the broader team through documentation, training sessions, and mentorship to promote DevOps culture and practices within the organization.
• Continuous Improvement: continuously evaluating and adopting new tools, technologies, and practices to improve the efficiency, reliability, and scalability of Databricks deployments on AWS.
• Vendor Management: managing relationships with AWS and Databricks vendors, including license management, support agreements, and staying informed about product updates and roadmap changes.
#devops #databricks #aws #hiringnow #hiringalert #ansible #terraform #devopsengineer
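The cost-allocation tagging mentioned in this posting is, in practice, a compliance check over each resource's tags. A small Python sketch; the required-tag policy here is a hypothetical example, not any real organization's standard:

```python
# Hypothetical cost-allocation tagging policy: every billable
# resource must carry these tags so spend can be attributed.
REQUIRED_TAGS = {"team", "env", "cost-center"}


def missing_cost_tags(resource_tags):
    """Return the set of required cost-allocation tags that a
    resource (given as a tag-key -> value dict) is missing."""
    return REQUIRED_TAGS - set(resource_tags)


# A resource tagged with team and env but no cost-center.
untagged = missing_cost_tags({"team": "data-eng", "env": "prod"})
```

A real implementation would iterate over resources returned by the cloud provider's tagging API and report or remediate any non-empty result.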
-
ALTA IT Services is #hiring an Architect III for #hybrid work in Reston, VA. Qualifications include: 🔍 Expertise in relational and NoSQL DBs. 🛠️ AWS Data Migration Service & Test Data management. ☁️ Cloud migration & microservices architecture. 💻 Skilled in AWS, development, and networking. 🛠️ Experience with JIRA and Confluence. Learn more and apply today: https://ow.ly/Z8bx50QuW4P #ALTAIT #ITJobs #RestonVA #DatabaseExpert #AWSMigration #CloudArchitecture #Microservices #AWSskills #JIRAExperience #ConfluenceExpert
4,547 followers