Hello Connections, Iota Platforms is hiring!

Position: DevOps Engineer
Location: USA

Roles and Responsibilities:
- Agents of change who prioritize automation, security, and quality, and who help the organization develop and adopt a DevOps culture.
- Design techniques including Application Performance Monitoring, Automated Deployment, Continuous Integration, Continuous Delivery, and Source Code Management.
- Create both short- and long-term strategic plans to guarantee that network bandwidth and computing resources satisfy current and future demands.
- Determine and manage system security and access.
- Monitor customer service, keep the network stable, gather and analyze data about memory and network usage, and install and test software updates.
- Oversee the scheduling, backup, storage, and retrieval of cloud operations.
- Create, update, and test cloud-based and location-based disaster recovery plans.

Experience: 0-3 years in DevOps.

🚀 NOTE: Only for candidates who reside in the USA with a valid work permit. Interested candidates can share their CVs at hr@iotaplatforms.com

#Urgentrequirement #InnovateWithUs #JoinTheRevolution #CareerGrowth #IOS #TABLEAU #C2CRequirements #H1b #OPT #CPT #USCITIZEN #GC #DevOps #DataEngineer #Android #iotaplatforms #iota #usa #opentowork #searching #hiring
Vinayaka M’s Post
-
Hello Connections, I’m #hiring. Know anyone who might be interested? Please share an updated resume with kavyasree.p@aplombtek.com, or reach me at 973-475-5428.

Job Title: DevOps Engineer
Contract: 6+ Months
Location: Dallas, Texas
Type: Contract (W2 only)
Work mode: Onsite

Responsibilities:
- Collaborate with software developers, system operators (SysOps), and other stakeholders to manage code releases.
- Automate and streamline operations and processes.
- Build and maintain tools for deployment, monitoring, and operations.
- Troubleshoot and resolve issues in development, test, and production environments.
- Design and implement strategies for system scalability, reliability, and performance.
- Monitor and analyze system performance metrics.
- Ensure high availability and security of systems and applications.
- Participate in the on-call rotation and respond to incidents promptly.
- Continuously evaluate and improve existing systems and processes.
- Stay updated with industry trends and best practices in DevOps and cloud technologies.

#DevOps #Engineer #Automation #ContinuousIntegration #ContinuousDelivery #CI/CD #InfrastructureAsCode #IaC #DeploymentAutomation #Monitoring #Scalability #Reliability #Performance #CloudComputing #AWS #Azure #GoogleCloud #Docker #Kubernetes #Containerization #Scripting #DevOpsCulture #Agile #SysAdmin #ITOps #SiteReliabilityEngineering #SRE #IncidentResponse #Security #DevSecOps #ConfigurationManagement #Ansible #Puppet #Chef #Git #VersionControl #Microservices #ContainerOrchestration #AgileDevelopment #SoftwareEngineering #SystemArchitecture
-
DVA is not associated with this job posting.

Senior Software Engineer (SRE/DevOps) https://lnkd.in/ebGCD_EH

The Technology: We know that if we have a DevOps team we aren’t practicing DevOps 🙂 both titles are listed to make it clear that we’re looking for a multi-position player who’s comfortable with application engineering AND infrastructure.

Candidates should:
- Have a strong understanding of cloud architecture, including the major cloud providers (AWS, GCP, etc.).
- Understand the underlying networking and security considerations when developing the architecture of our deployment environments.
- Have a strong understanding of relational databases (PostgreSQL) and be comfortable optimizing and advising the broader engineering team on optimization techniques so the data layer of our deployed services runs smoothly.
- Have a strong understanding of authentication and authorization frameworks such as IAM, Security Groups, RBAC, etc.
- Have experience with Kubernetes and be able to point to deployments they have architected or managed.
- Have a strong understanding of the operating model of Kubernetes and be able to explain the requirements for designing deployments for new applications.

#innovation #management #digitalmarketing #technology #creativity #futurism #startups #marketing #socialmedia #socialnetworking #motivation #personaldevelopment #jobinterviews #sustainability #personalbranding #education #productivity #travel #sales #socialentrepreneurship #fundraising #law #strategy #culture #fashion #business #networking #hiring #health #inspiration
-
The future of careers in DevOps is promising, with growing demand for professionals who can bridge the gap between software development and IT operations. Roles include DevOps engineers, automation specialists, cloud architects, and site reliability engineers. Professionals skilled in DevOps practices, automation, cloud technologies, and continuous integration/continuous deployment (CI/CD) pipelines will continue to be in high demand, with ample opportunities for career growth and advancement. Continuous learning and staying current with emerging technologies will be key to thriving in the evolving landscape of DevOps careers. https://lnkd.in/gBQs6Svh #devops #coding #csharp #javascript #codelife #programmer #webdeveloper #softwaredeveloper #fullstack #softwaredevelopment #programmers #cloud #sql #hiring #automation #webops #jobs #location
-
SR US IT Recruiter | Currently hiring professionals | LinkedIn Recruiter | Expert in Talent Acquisition for Tech Industry Leaders | Driving Success Through Strategic Recruitment Solutions
We are still #hiring. Know anyone who might be interested? Please share profiles with a.meshak@nityo.com

Job Title: SWE, Engineering Systems (#devops + #golang)
Location: Redmond / Seattle (onsite)
Duration: Contract

Job Description:
- Context: the candidate would work on the Automated API Testing Platform:
  - K6 is used.
  - A service, or collection of services, passes or fails against the tests.
  - That success or failure determines gating within the CI/CD pipeline.
- The preferred candidate would have a dev background (Go, Python, Java, C#) plus testing experience, preferably in web application development.
- Strong skills/experience with Kubernetes is fundamental.
- Playwright skills would also be useful.
- Skills can be cloud-agnostic.

As a member of the Engineering System and Release Management team, you will:
- Be an architect of our future infrastructure, using tools like Kubernetes, Docker, Azure, and Azure DevOps.
- Design, implement, maintain, and evolve the CI/CD and orchestration systems that run Engage.
- Design, implement, and maintain tooling focused on supporting a fast development cycle and accelerating Engage’s engineers.
- Collaborate across various teams to provide design and code review, capacity planning, failure/reliability analysis, performance analysis, and security and customer-privacy analysis.
- Contribute to our on-call rotation.

Requirements:
- Experience with application deployment using CI/CD.
- Experience with containers and container orchestration systems.
- Experience operating and evolving large-scale distributed systems in a cloud infrastructure.
- A penchant for automating the boring stuff.
- Proficiency in at least one of the following languages: Go, Python, Java.
- Ability to break down technical problems and solve them systematically.

Interests that will set you apart:
- Google Kubernetes, Apache Mesos, Docker Swarm, Docker, Rocket, LXC
- Linux, Ubuntu, CoreOS, Microsoft Azure, Amazon Web Services, Google Cloud Platform
- Azure DevOps, Spinnaker, GitHub, GitLab
- Infrastructure-as-code, Chef, Ansible, Puppet
- Test-driven development, integration testing, continuous integration and continuous delivery; and tools like Azure DevOps Pipelines, Jenkins, TeamCity, Travis CI, CircleCI

#vendors #benchsales #corptocorp #updating #recruiter #corejava #spring #requirements #c2c #list #post #recruiters #comment #distribution #sales #email #recruitment #linkedin #recruiting #salesrecruitment #sap #hotlist #vendorlist #hiring #business #shareit #id #send #preferred #daily #corp2corp #urgentrequirement #hotlists #vendor #vendorlists #resume #urgentrequirements #jobs #jobdescription #jobfair #job #usrecruitment #longterm #usjobs #itrecruiters #directclient #vendormanagement
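For readers unfamiliar with the gating idea this posting describes (test success or failure deciding whether a service is promoted in the CI/CD pipeline), here is a minimal sketch of the decision step. All function names, test names, and thresholds are illustrative, not part of the actual platform:

```python
# Hypothetical CI/CD gating step: promote a build only if the automated
# API test suite meets the required pass rate. In the real platform the
# results would come from K6; here they are a plain dict for illustration.

def gate_deployment(results, pass_threshold=1.0):
    """Return True (promote) only if the pass rate meets the threshold.

    results: dict mapping test name -> bool (True = passed).
    """
    if not results:
        return False  # no evidence at all: fail closed
    pass_rate = sum(results.values()) / len(results)
    return pass_rate >= pass_threshold

checks = {"GET /health": True, "POST /orders": True, "GET /orders/42": False}
print(gate_deployment(checks))       # strict gate: any failure blocks promotion
print(gate_deployment(checks, 0.5))  # a looser threshold would let this through
```

In a real pipeline this boolean would translate into the exit code of a pipeline stage, which is what the orchestrator uses to gate the next stage.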
-
Dear connections,

Title: DevOps Engineer to Support Azure OpenAI
Experience: 8+ years
Location: Remote
Visa: GC, GC EAD, USC & H4 EAD (passport number is a must)
Contract: W2 & C2C
Email: vtaneeru@mitresource.com

Key Tasks:
1. Collaborate with software engineers, testers, and system administrators to ensure smooth integration of OpenAI services into applications. Foster a culture of teamwork and continuous improvement.
2. Set up monitoring, logging, and alerting for OpenAI services. Optimize resource utilization, scalability, and performance.
3. Develop and maintain infrastructure patterns supporting promotion and deprecation of OpenAI and other LLM models within an API configuration.
4. Build and maintain a set of tools that enable automation for creating and supporting Azure subscriptions in a large-scale tenant.
5. Improve existing continuous integration across multiple product pipelines.
6. Oversee the day-to-day remediation of critical issues, and build processes and efficiencies to eliminate the recurrence of issues.
7. Maintain the health and security of infrastructure by monitoring and patching.
8. Deliver TIO strategy in partnership with the business platforms' needs.
9. Participate in the off-hours on-call support schedule.

Key Skills / Experience:
1. 3-5 years' experience administering Microsoft Azure subscriptions in an enterprise environment.
2. Understanding of Azure deployment processes such as Azure Blueprints, ARM templates, and PowerShell modules.
3. Proficiency with the Azure command-line interface (CLI).
4. Deep knowledge of Azure Active Directory, IAM, and roles.
5. Practiced in Azure infrastructure: Virtual Machines, Azure Storage, Azure App Service, and database offerings.
6. Understanding of Azure VNETs, ExpressRoute, internet networking, virtual private networks, and DNS.
7. Familiarity with Azure Cognitive Services and its component offerings.
8. Ability to use Terraform in an enterprise environment, including module creation and use.
9. Advanced Linux and Windows server administration experience.
10. Docker and containerization; Kubernetes a plus.
11. Git and source-control procedures.
12. An understanding of coding practices and of DRY principles.
13. Authoring scripts with Bash and Python.
14. Executing and tracking tasks in an Agile environment.
15. Continuous integration systems such as Jenkins and GitHub Actions.
16. Ability to work on several work streams with excellent time management.

#devops #systemengineer #devopsengineer #azure #openai #python #kubernetes #remoteroles #devopsjobs
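The "tools that enable automation for creating and supporting Azure subscriptions in a large-scale tenant" idea usually includes governance checks such as tag policy enforcement. A minimal sketch of that kind of check follows; the tag names and resource shapes are assumptions for illustration, not the employer's actual policy:

```python
# Hypothetical tag-governance check: verify that each resource carries a
# required set of tags. Resources are plain dicts here; real tooling would
# pull them from the Azure API and feed violations into reporting.

REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def missing_tags(resource):
    """Return the set of required tags absent from a resource's tag map."""
    tags = resource.get("tags") or {}
    return REQUIRED_TAGS - set(tags)

def non_compliant(resources):
    """List (name, sorted missing tags) for resources failing the policy."""
    report = []
    for r in resources:
        missing = missing_tags(r)
        if missing:
            report.append((r["name"], sorted(missing)))
    return report

vms = [
    {"name": "vm-web-01",
     "tags": {"owner": "app-team", "cost-center": "CC42", "environment": "prod"}},
    {"name": "vm-tmp-07", "tags": {"owner": "app-team"}},
]
print(non_compliant(vms))  # only the under-tagged VM is reported
```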
-
#hiring | Contact: sivak@arkhyatech.com

#Azure #Devops Engineer :: #Remote
Key skills: #Azure, #Terraform, #Python, #PySpark

Technical Skills:
1. Cloud Platform Proficiency: Mastery of cloud platforms like AWS, Azure, or Google Cloud Platform is essential. This includes understanding their architecture, services, and security features.
2. Automation and Configuration Management: Skills in automating deployment, scaling, and management of infrastructure using tools like Ansible, Chef, Puppet, and Terraform.
3. Programming and Scripting: Proficiency in languages such as Python, PySpark, or Bash for scripting and automation.
4. Infrastructure as Code (IaC): Knowledge of IaC tools like Terraform or CloudFormation to manage and provision infrastructure through code.
5. Continuous Integration/Continuous Deployment (CI/CD): Experience with CI/CD tools like Jenkins or GitLab CI to automate the software release process.
6. Containerization and Orchestration: Familiarity with Docker for containerization and Kubernetes for orchestration.
7. Monitoring and Logging: Skills in using monitoring tools like Prometheus, Grafana, and the ELK stack for logging and monitoring system performance.

Soft Skills:
1. Collaboration and Communication: Ability to work effectively with cross-functional teams, including developers, operations, and business stakeholders.
2. Problem-Solving: Strong analytical and problem-solving skills to troubleshoot and resolve issues quickly.
3. Adaptability: Willingness to continuously learn and adapt to new technologies and methodologies.

On-Premises Specific Skills:
1. Server Management: Knowledge of managing and configuring Linux and Windows servers.
2. Networking: Understanding of network configurations, protocols, and security measures.

Cloud-Specific Skills:
1. Cloud Security: Ensuring security best practices are followed in cloud environments.
2. Cost Management: Ability to manage and optimize cloud costs effectively.
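To make the monitoring-and-alerting skill above concrete, here is a tiny sketch of the rule-evaluation idea behind tools like Prometheus: fire an alert when the average of a metric over a sliding window crosses a threshold. The metric, window size, and threshold are illustrative assumptions:

```python
# Minimal windowed-threshold alert, the core idea behind alert rules in
# monitoring stacks. Real systems evaluate rules over time-series queries;
# here the samples are pushed in directly for illustration.

from collections import deque

class ThresholdAlert:
    def __init__(self, threshold, window=5):
        self.threshold = threshold
        self.samples = deque(maxlen=window)  # keep only the last N samples

    def observe(self, value):
        """Record a sample; return True if the windowed average breaches."""
        self.samples.append(value)
        avg = sum(self.samples) / len(self.samples)
        return avg > self.threshold

alert = ThresholdAlert(threshold=80.0, window=3)
for cpu in (50, 70, 95, 99, 97):
    firing = alert.observe(cpu)
print(firing)  # the last window (95, 99, 97) averages well above 80
```

Averaging over a window rather than alerting on single samples is what keeps one transient spike from paging the on-call engineer.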
-
Hello Folks, hope you are doing great! One more requirement:

Position: FinOps Consultant
Designation: Associate
Location: Sunnyvale, California
Rate: $60 on C2C

Job Description:

Overview of the group: the Architecture and Onboarding team in the Application Engineering group, Productivity Engineering. Responsible for Azure DevOps, monitoring, alerting, automation, and designing for capacity scale and redundancy for all enterprise applications. Operations support: monitoring, alerting, on-call support 7x7. Defines everything from transformation of services to be cloud native to CI/CD tools development and deployment.

Why the need to hire this person:
- Impact to our customers/members/employees: impacts all customers who are using these applications.
- Why it's important to LinkedIn: impacts internal and external customers.
- Cross-functional work: Application Engineering, FAST, GNS, Service Delivery Engineering, IAM, Sales Systems, ISE, SWE, CTE, TPE, FinIT.

What's required (must-haves vs. nice-to-haves):
- Must have: cloud-native architecture; Azure cloud migration; OOP and SRE-level coding, building automation and tools (PowerShell, Python, Java, SQL); data-driven monitoring and automation (analysis, profiling, capacity planning, scaling); broad experience across different technologies (Java applications, network, virtualization, platform).
- Nice to have: soft skills for working with a global workforce; data structures; relational/non-relational DBs; network stack/file systems; CI/CD.

Day-to-day responsibilities:
- Azure DevOps operational work.
- Hands-on coding responsibilities or strong technical knowledge: Azure cloud migration and transformation of applications to be cloud native.
- The one specific skill required for this role: Azure DevOps.

1. Provide FinOps cost-optimization recommendations such as rightsizing resources based on utilization, handling zombie assets, etc.
2. Enable cost visibility, reporting, and governance using policies via CSP-native or third-party FinOps tools, thus generating savings.
3. Propose reservations for compute and databases to save costs, based on native-tool recommendations.
4. Propose best practices for tagging, anomaly detection, and budgets, along with setting up alerting mechanisms to monitor performance.
5. Provide storage-optimization initiatives and assist customers in creating custom dashboards or reports for further granular visibility.

Experience: 7+ years
Skill (primary): Application Operations - Application Infrastructure - Application Server

Reach me at katyayini@teamworkitsolutions.com

#CEP #OPT #H1B #H4EAD #W2 #requirements #looking #actively #seeking #hiring #USA #CA #COBOL #Mainframes #Snowflake #J2EE #JAVA #computers #IT #Devops #Python #developer #Dataengineer #CFBR #SQL #DBA #Databricks #DE #seniorarchitect #architect
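The rightsizing and zombie-asset recommendations described above boil down to classifying resources by observed utilization. A hedged sketch of that classification follows; the utilization cutoffs and fleet names are illustrative assumptions, not any FinOps tool's actual rules:

```python
# Hypothetical FinOps rightsizing classifier: bucket instances by average
# CPU utilization. Real tools would also weigh memory, network, and cost
# data pulled from the cloud provider's metrics APIs.

def recommend(avg_cpu_pct):
    """Return a rightsizing recommendation for an instance."""
    if avg_cpu_pct < 2:
        return "zombie: consider deletion"        # effectively idle asset
    if avg_cpu_pct < 20:
        return "underused: consider downsizing"   # paying for unused headroom
    if avg_cpu_pct > 80:
        return "hot: consider upsizing"           # at risk of saturation
    return "ok"

fleet = {"api-prod": 55.0, "batch-old": 0.4, "report-vm": 12.0}
for name, cpu in fleet.items():
    print(name, "->", recommend(cpu))
```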
-
Hello all, #hiring for a #DevOpsEngineer. Looking for H1B candidates on W2 only.

Position: DevOps Engineer
Location: Remote (PST)

Minimum Qualifications:
- BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent practical experience.

Preferred Qualifications:
- Experience with algorithms, data structures, complexity analysis, and software design.
- A passion for software development and technology.
- Strong experience with monitoring applications: SignalFx, Splunk, DataDog, etc.
- Strong understanding of Jenkins and CI/CD pipelines.
- Experience with GitOps and OaC preferred.
- Demonstrable knowledge of Terraform and Artifactory a strong plus.
- Deep understanding and working knowledge of networking principles, internet fundamentals, operating systems, and application stacks.
- Comfort using the Linux command line.
- Familiarity with configuration management and automation (e.g., Puppet, Ansible).
- Using the "scientific method" to dig into problems is something you love to do.
- In addition to well-developed skills in industry-standard software development languages (e.g., Java, Node), you should be capable in at least one scripting language (e.g., Python).
- Strong debugging and analytical skills.
- You thrive in a dynamic, changing environment, with large-scale applications across the stack.
- Finally, you must be determined to have fun; otherwise it's just a job.

We provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, pregnancy, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law.

Daily Responsibilities:
- 50% operations / 50% engineering: managing drift and the sandbox environment.
- Following up on drift with the respective teams (through Jira / emails, making sure drift is managed).
- Working on platforms (engineering work, scripting, Java, Node, etc.).

Candidate Requirements:
Top 3 must-have hard skills:
1. Scripting (Python or Golang; either one is a must)
2. Knowledge of Java/Node.js/React
3. Experience with monitoring tools (Splunk, SignalFx, DataDog, etc.)

Nice to have: combined frontend and backend experience
Years of experience: 10
Degrees or certifications: bachelor's minimum

#DevOps #DevOpsEngineer #DevOpsLife #Automation #CI/CD #CloudComputing #InfrastructureAsCode #ContinuousIntegration #ContinuousDelivery #SRE #DevSecOps #CloudNative #ITAutomation #Kubernetes #Docker #Microservices #AgileDevelopment #ConfigurationManagement #SysAdmin #ITOps #DevOpsCulture
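"Managing drift" in the daily responsibilities above means detecting where the actual environment has diverged from its declared (IaC) state. A minimal sketch of that diff, with hypothetical config shapes chosen purely for illustration:

```python
# Hypothetical drift detector: compare the desired state (as declared in
# IaC) against the observed state and report every mismatched key.

def detect_drift(desired, actual):
    """Return {key: (desired_value, actual_value)} for every mismatch."""
    drift = {}
    for key in desired.keys() | actual.keys():
        d, a = desired.get(key), actual.get(key)
        if d != a:
            drift[key] = (d, a)
    return drift

desired = {"replicas": 3, "image": "app:1.4.2", "log_level": "info"}
actual = {"replicas": 5, "image": "app:1.4.2", "log_level": "debug"}
print(detect_drift(desired, actual))
```

In practice the report from a check like this is what gets raised to the owning teams (via Jira or email, as the posting says) so the environment can be reconciled back to its declared state.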
-
Job Title: DevOps Engineer
Location: San Francisco, CA
Must have experience with GovCloud
Pay rate: $60-70/hr

Job Description:
- Bachelor’s degree in Computer Science or another technical degree.
- Typically requires 3+ years of relevant technical or business work experience.
- General familiarity with networking, security, and automated deployments of middleware in AWS GovCloud using IaC.
- Maintains and improves existing build and deployment processes across all products.
- Collaborates with application developers, QA engineers, product owners, and others to create deployment best practices.
- Enforces best practices for security and reliability across ITS.
- Designs and deploys new application components and infrastructure.
- Implements and maintains a continuous integration environment.
- Supports and troubleshoots product and infrastructure issues in production environments.
- Writes configuration scripts for automation tools and automates recurring tasks.
- Actively monitors and administers cloud-hosted applications and builds integrations.
- Participates in engineering design and deployment planning.

Complexity:
- Works with partners within 1-2 business functions to align technology solutions with business strategies.
- Supports several moderately complex business initiatives.
- Serves as a project team member.

#govcloud #devops #cloud #aws #programming #cloudcomputing #technology #developer #linux #python #coding #azure #software #iot #cybersecurity #kubernetes #it #css #javascript #java #devopsengineer #tech #ai #datascience #docker #softwaredeveloper #webdev #machinelearning #programmer #bigdata #security #softwareengineer #html #agile #softwaredevelopment #microsoft #webdeveloper #code #training #github #sysadmin #data #devopstraining #ui #webdevelopment #automation #softwareengineering #cloudsecurity #coder #nodejs #jenkins #cisco #ux #business #computerscience #development #scrum #amazonwebservices #networking #devopstools #windows
-
Hi All, looking for an MLOps Engineer (Remote). Please find the JD below and share a suitable resume with user11@datacapitalinc.com

Key Responsibilities:
- Work with Walmart's AI/ML Platform Enablement team within the eCommerce Analytics team. The broader team is currently on a transformation path, and this role will be instrumental in enabling the broader team's vision.
- Work closely with data scientists to help with production models and maintain them in production.
- Deploy and configure Kubernetes components for the production cluster, including API gateway, ingress, model serving, logging, monitoring, cron jobs, etc. Improve the model deployment process for MLEs for faster builds and simplified workflows.
- Be a technical leader on various projects across platforms and a hands-on contributor to the entire platform's architecture.
- System administration, security compliance, and internal tech audits.
- Lead operational excellence initiatives in the AI/ML space, including efficient use of resources, identifying optimization opportunities, forecasting capacity, etc.
- Design and implement different flavors of architecture to deliver better system performance and resiliency.
- Develop capability requirements and a transition plan for the next generation of AI/ML enablement technology, tools, and processes to enable Walmart to efficiently improve performance with scale.

Tools/Skills (hands-on experience is a must):
- Administering Kubernetes: ability to create, maintain, scale, and debug production Kubernetes clusters as a Kubernetes administrator, plus in-depth knowledge of Docker.
- Ability to transform designs from the ground up and lead innovation in system design.
- Deep understanding of data center architectures, networking, storage solutions, and scale-system performance.
- Experience with at least one Kubernetes cloud offering (EKS/GKE/AKS) or on-prem Kubernetes (native Kubernetes, Gravity, MetalK8s).
- Programming experience in Python, Node, Golang, or Bash.
- Ability to use observability tools (Splunk, Prometheus, and Grafana) to look at logs and metrics to diagnose issues within the system.
- Experience with Seldon Core, MLflow, Istio, Jaeger, Ambassador, Triton, PyTorch, and TensorFlow/TF Serving is a plus.
- Experience with distributed computing and deep learning technologies such as Apache MXNet, CUDA, cuDNN, TensorRT.
- Experience hardening a production-level Kubernetes environment (memory/CPU/GPU limits, node taints, annotations/labels, etc.).
- Experience with Kubernetes cluster networking and Linux host networking.
- Experience scaling infrastructure to support high-throughput data-intensive applications.
- Background with automation and monitoring platforms, MLOps, and configuration-management platforms.

Education & Experience:
- 5+ years of relevant experience in roles with responsibility over data platforms and data operations, dealing with large volumes of data in cloud-based distributed computing environments.
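The Kubernetes hardening items listed (memory/CPU/GPU limits, node taints) map to specific manifest fields. As a hedged illustration, the sketch below builds a pod spec as a plain Python dict showing where those fields live; every name, image, and value here is an assumption, not Walmart's actual configuration:

```python
# Illustrative pod spec showing the hardening fields from the posting:
# resource requests/limits on the container, and a toleration matching a
# hypothetical GPU node taint so the pod may schedule onto tainted nodes.

def hardened_container(name, image):
    return {
        "name": name,
        "image": image,
        "resources": {
            # requests: what the scheduler reserves; limits: hard caps
            "requests": {"cpu": "500m", "memory": "512Mi"},
            "limits": {"cpu": "1", "memory": "1Gi", "nvidia.com/gpu": "1"},
        },
    }

pod_spec = {
    "containers": [hardened_container("trainer", "registry.example/train:latest")],
    # tolerate a hypothetical taint like `kubectl taint nodes n1 gpu=true:NoSchedule`
    "tolerations": [
        {"key": "gpu", "operator": "Equal", "value": "true", "effect": "NoSchedule"}
    ],
}
print(pod_spec["containers"][0]["resources"]["limits"])
```

Serialized to YAML, a dict like this is exactly the `spec:` section of a Pod (or the pod template inside a Deployment) that the hardening checklist above refers to.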