Hello #folks! We have an urgent requirement for an AWS Data Engineer with one of our prime clients.

Role: Sr. AWS Data Engineer
Location: Houston, TX (Onsite)
Type: Contract

Job Description
· 9+ years of strong hands-on experience in data warehousing, data engineering, and dimensional modelling.
· Able to work independently with minimal guidance, with excellent problem-solving and analytical skills.

Required Skills
· Experience building and maintaining ETL pipelines over large data sets using services such as AWS Glue, EMR, Kinesis, or Kafka (a short PySpark sketch follows this post)
· Strong Python development experience, with proficiency in Spark or PySpark and in using APIs
· Strong SQL query writing and performance tuning in AWS Redshift and other industry-leading RDBMS such as MS SQL Server and Postgres
· Proficient with AWS services such as AWS Lambda, EventBridge, Step Functions, SNS, and SQS
· Familiar with how IAM roles and policies work

Preferred Skills
· Experience with workflow management tools such as Airflow
· Familiarity with infrastructure as code such as CloudFormation
· Experience with CI/CD pipelines and agile methodologies

Do let me know if you are available and interested, or help this post reach someone who could be a great fit for this opportunity. I am available at 609-897-9670 Ext. 2216 or BishnuK@sysmind.com

#awsdataengineering #datascience #bigdata #dataengineer #dataanalytics #bigdataanalytics #data #python #pythonprogramming #dataanalysis #datavisualization #businessintelligence #datawarehouse #sql #datasciencetraining #bi #bigdataanalysis #technology #datascientist #datamanagement #programminglife #pythonlearning
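For candidates wondering what day-to-day work on such pipelines looks like, here is a minimal PySpark sketch of the kind of ETL step the role involves; the bucket paths and column names are hypothetical placeholders, not from the posting:

    # Minimal PySpark ETL sketch: read raw JSON events from S3, drop malformed
    # rows, and write date-partitioned Parquet for downstream Redshift loads.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    raw = spark.read.json("s3://example-raw-bucket/events/")       # hypothetical source
    clean = (raw
             .filter(F.col("event_id").isNotNull())                # drop malformed rows
             .withColumn("event_date", F.to_date("event_ts")))     # derive partition key

    (clean.write.mode("overwrite")
          .partitionBy("event_date")
          .parquet("s3://example-curated-bucket/events/"))         # hypothetical curated zone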
We are hiring a Data Engineer!

Key Skills:
· 8+ years of experience as a Data Engineer in a cloud environment with Azure
· 5+ years of experience with Azure services such as Data Factory, Databricks, Data Flows, Key Vault, etc.

Skills needed:
1. ADF (see the sketch after this post)
   a. Pipelines
   b. Data flows
   c. Datasets
   d. Activities
   e. Triggers
2. SQL Server & Cosmos DB
3. Azure Integration Runtime
4. Azure Blob / Data Lake Storage

Nice to have:
1. Azure Key Vault
2. Private networking
3. Data warehouse design
4. Azure Synapse
5. R/Python scripts
6. Azure DevOps
7. Bicep

Key Responsibilities:
· Design, build, and optimize systems for data collection, storage, and access at scale.
· Analyze and organize raw data.
· Build data systems and pipelines.
· Data modeling and analysis.
· Evaluate business needs and objectives.
· Interpret trends and patterns.
· Conduct complex data analysis and report on results.
· Prepare data for prescriptive and predictive modeling.
· Combine raw information from different sources.
· Explore ways to enhance data quality and reliability.
· Handle authentication and authorization in pipelines.
· Consume Key Vault secrets.
· Nice to have: C#, Web API
· Nice to have: hybrid source connections
· Nice to have: DevOps

Work Experience: 8+ years

#consign #consignspacesolutions #consignjob #consignjobs #wearehiring #hiringnow #dataengineer #azurecloud #datafactory #databricks #dataflows #keyvault #sqlserver #cosmosdb #azureintegration #blobstorage #datalake #datawarehousing #azuresynapse #rscripts #pythonscripts #azuredevops #bicep #datamodeling #dataanalysis #datapipelines #prescriptivemodeling #predictivemodeling #dataquality #datareliability #authentication #authorization #csharp #webapi #hybridconnections #devops #techjobs #jobopening #workexperience
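As a flavor of the ADF work listed above, here is a minimal Python sketch that triggers a Data Factory pipeline run through the Azure SDK; the subscription, resource group, factory, and pipeline names are all hypothetical:

    # Minimal sketch: start an Azure Data Factory pipeline run from Python.
    # Requires the azure-identity and azure-mgmt-datafactory packages.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient

    credential = DefaultAzureCredential()                  # env/CLI/managed identity
    client = DataFactoryManagementClient(credential, "<subscription-id>")

    run = client.pipelines.create_run(
        resource_group_name="rg-data",                     # hypothetical resource group
        factory_name="adf-example",                        # hypothetical factory
        pipeline_name="copy_daily_sales",                  # hypothetical pipeline
        parameters={"runDate": "2024-01-01"},              # pipeline parameters
    )
    print("Started pipeline run:", run.run_id)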
HR Executive at Softtrix Tech Solutions Pvt Ltd | Hiring || SEO || PPC || 📩 akanshahrsofttrix@gmail.com || 📞 6283467135
#urgenthiring Databricks Consultant
Experience: 5+ years | IST (Remote)

Skills:
· Bachelor's degree in Computer Science, Information Technology, or a related field.
· Apache Spark and Delta Lake expertise: hands-on experience with big data pipelines.
· Programming skills: proficiency in Python, Scala, or SQL.
· Databricks platform experience, including its features and capabilities.
· Data engineering background: solid experience in data modeling, warehousing, and ETL/ELT processes.
· Cloud computing knowledge: experience with AWS, Azure, or GCP and their integration with Databricks.
· Data security and compliance: knowledge of encryption, access controls, and data privacy regulations (e.g., GDPR, HIPAA).
· Analytical skills: ability to analyze and optimize system performance.
· Project management experience: skilled in managing timelines, resources, and stakeholder expectations.
· Communication and collaboration: ability to work effectively with cross-functional teams and explain technical concepts to non-technical stakeholders.
· Continuous learning: commitment to staying current with big data technologies, Databricks updates, and industry best practices.

Key Responsibilities:
· Databricks deployment: lead the setup and configuration of the Databricks environment within the organization's cloud infrastructure (AWS, Azure, GCP). Ensure integration of Databricks with existing data sources, data warehouses, and other analytics tools.
· Data pipeline engineering: design, develop, and maintain robust data pipelines within Databricks, leveraging Apache Spark and Delta Lake for efficient data processing and storage (see the sketch after this post). Implement ETL/ELT processes that cater to the needs of diverse analytics and machine learning projects.
· Performance optimization: optimize Spark jobs and Databricks clusters for cost efficiency and computational performance.
· Security and governance: implement and maintain security measures, including access controls and data encryption, to protect sensitive information.
· Collaboration and support: work closely with data engineers, data scientists, IT teams, and business stakeholders to understand their data needs and challenges. Provide technical guidance and support, ensuring best practices in Databricks usage are followed across the organization.
· Innovation and continuous learning: keep abreast of the latest developments in Databricks, Apache Spark, and related technologies.
· Training and enablement: develop and deliver training sessions, workshops, and documentation to upskill team members in using Databricks effectively. Foster a culture of data literacy and self-service analytics across the organization.
· Project management: lead and manage projects related to Databricks implementation and data platform enhancements.
· Quality assurance and testing: implement testing frameworks to ensure the accuracy and integrity of data within the Databricks platform.

Interested candidates, kindly share your resume at akansha@thetechgalore.com
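To make the data pipeline engineering bullet concrete, here is a minimal Delta Lake sketch of an upsert (MERGE) into a Delta table on Databricks; the table, path, and column names are hypothetical:

    # Minimal Databricks/Delta Lake sketch: upsert an incremental batch into a
    # Delta table with MERGE, keeping one current row per customer_id.
    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()               # pre-created on Databricks

    updates = spark.read.parquet("/mnt/landing/customers/")  # hypothetical batch

    target = DeltaTable.forName(spark, "analytics.customers")
    (target.alias("t")
           .merge(updates.alias("u"), "t.customer_id = u.customer_id")
           .whenMatchedUpdateAll()                           # refresh changed rows
           .whenNotMatchedInsertAll()                        # insert new rows
           .execute())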
https://lnkd.in/d4rYMX7F

Job Overview
As one of our Data Platform engineers, you'll join a rapidly growing, premier engineering team and form the foundation of our data pillar, encompassing customer-facing data products, internal analytics, and the customer-facing data warehouse. You will build the next generation of our Data Platform services, which enable internal developers to easily build multi-tenant data applications and analytical products. We're growing really quickly, and you'll be setting the bar for high-quality data and a metrics-driven culture as we scale. You'll serve as a key input and thought leader, and work closely with the product teams to deliver data-driven capabilities to our internal and external customers.

RESPONSIBILITIES
· Build and operate a high-throughput distributed messaging platform such as Kafka/Kinesis to enable data change capture and data integration (see the sketch after this post).
· Build the next-generation warehouse and compute platform with scalable data ingress/egress for internal and external customers.
· Build a DSL and schema registry for internal and external customers to build custom data models.
· Develop async data migrations over billions of records with zero downtime while maintaining our data integrity guarantees.
· Define and design data transformations and pipelines for cross-functional datasets, ensuring that data integrity and data privacy are first-class concerns addressed proactively rather than reactively.
· Define the right Service Level Objectives for the batch and streaming pipelines, and optimize their performance.
· Design and create CI/CD pipelines for platform provisioning and full lifecycle management.
· Build the platform control panel to operate the fleet of systems efficiently.
· Work closely with teams across Application and Platform to establish best practices around usage of our data platform.

QUALIFICATIONS
· 5+ years of experience or a proven track record in software engineering
· Experience with data analytics and warehouse solutions such as Snowflake, Delta Lake, AWS Redshift, etc.
· Experience with data processing technologies such as Kafka, Kinesis, Spark, Flink, or other open-source or commercial software
· Experience in schema design, SQL, and schema registries
· Strong experience with a scripting language (such as Python)
· Experience with deployment and configuration management frameworks such as Terraform, Ansible, or Chef, and container management systems such as Kubernetes or Amazon ECS
· Driven by creating positive impact for our customers and Benchling's business, and ultimately accelerating the pace of research in the Life Sciences
· Comfortable with complexity in the short term but able to build towards simplicity in the long term
· Strong communicator with both words and data – you understand what it takes to go from raw data to something a human understands
· Willing to work onsite in our SF office 3 days a week.
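As one interpretation of the messaging-platform responsibility above, here is a minimal Spark Structured Streaming sketch that consumes change events from Kafka and lands them in a staging area; the broker, topic, and paths are hypothetical, not from the posting:

    # Minimal sketch: stream change events from Kafka into Parquet staging.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("cdc-ingest").getOrCreate()

    events = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
              .option("subscribe", "cdc.orders")                 # hypothetical topic
              .load()
              .select(F.col("value").cast("string").alias("payload")))

    query = (events.writeStream.format("parquet")
             .option("path", "/data/staging/orders/")            # hypothetical sink
             .option("checkpointLocation", "/data/checkpoints/orders/")
             .start())
    query.awaitTermination()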
Sr Cloud Data Engineer | Python | SQL | AWS | PySpark | Kafka | Hadoop | Data Warehousing | Looking for contract roles
🔍 Pavithra Masilamani - Data Engineer Seeking New Opportunities

📢 Hello LinkedIn community! I hope this post finds you well. 👋 I'm Pavithra, an experienced Data Engineer actively seeking new challenges and opportunities in the data space.

🔧 Skills: ETL, SQL, Python, AWS, Big Data technologies, Data Warehousing

👨💻 Experience:
• Built a Snowflake stage over Azure Blob and mounted a Snowflake external table using the stage (sketched after this post). Proficient in table design strategy using data clustering keys.
• Migrated an existing on-premises application to AWS. Used AWS services like EC2 and S3 for processing and storing small data sets; experienced in maintaining Hadoop clusters on AWS EMR.
• Strong experience and knowledge of real-time data analytics using Spark Streaming, Kafka, and Flume.

🚀 Why Hire Me: I am passionate about leveraging data to drive insights and improvements. My hands-on experience with the technologies above has equipped me with the skills needed to tackle complex data challenges.

🌐 Open to Collaborations: I'm open to discussing opportunities in data engineering, data analytics, or any role where I can contribute my expertise. If you or someone you know is looking for a dedicated and results-driven Data Engineer, please don't hesitate to reach out or share this post. Let's connect and explore how my skills can bring value to your team!

#DataEngineer #DataEngineering #JobSeeker #DataAnalytics #TechJobs #c2cvendors #c2crequirements #contractjobs #Snowflake #platform #Hadoop #SAS #MySQL #MongoDB

Thank you for your time and support! 🙌
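For readers curious about the Snowflake-over-Azure-Blob pattern in the first experience bullet, here is a minimal sketch using the Snowflake Python connector; the account, container, credentials, and object names are hypothetical placeholders:

    # Minimal sketch: create an external stage over Azure Blob, then expose it
    # as a Snowflake external table. Requires snowflake-connector-python.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="<account>", user="<user>", password="<password>",  # hypothetical
    )
    cur = conn.cursor()
    cur.execute("""
        CREATE OR REPLACE STAGE sales_stage
        URL = 'azure://exampleacct.blob.core.windows.net/sales'
        CREDENTIALS = (AZURE_SAS_TOKEN = '<sas-token>')
        FILE_FORMAT = (TYPE = PARQUET)
    """)
    cur.execute("""
        CREATE OR REPLACE EXTERNAL TABLE sales_ext
        LOCATION = @sales_stage
        FILE_FORMAT = (TYPE = PARQUET)
    """)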
If you are a perfect #dataengineer, you might have these:
1. Well-versed basics (Hadoop, distributed systems, big data)
2. Architectures (Hadoop, Spark, Hive, databases)
3. Advanced SQL
4. Fluent PySpark
5. In-depth Apache Spark and Hive
6. Good cloud experience (Azure, AWS, GCP)
7. Experience with orchestration (Airflow)
8. A little streaming exposure (Kafka)
9. Strong system design, data modelling, and data warehousing skills; CI/CD skills

These are the basics a data engineer should have even to appear for #interviews. Do you have them all?
Artificial Intelligence and Analytics (AIA) - Data Analyst/Data Engineering @ Cognizant || EX-META || EX-TCS || Microsoft Certified: Azure Data Engineering Associate
🚀 Are you striving to become a top-notch data engineer? Master these essential skills: 1. Strong foundation in Hadoop, Distributed Systems, and Big Data concepts. 2. Expertise in designing architectures with Hadoop, Spark, Hive, and databases. 3. Proficiency in advanced SQL for efficient data manipulation. 4. Fluency in PySpark for data processing and analytics (a quick sketch follows below). 5. In-depth understanding of Apache Spark and Hive for scalable data processing. 6. Cloud experience with Azure, AWS, or GCP for deploying data solutions. 7. Use of orchestration tools like Airflow for automating data workflows. 8. Exposure to streaming tech like Kafka for real-time data processing. 9. Skills in system design, data modeling, and data warehousing. 10. Proficiency in CI/CD practices for streamlined development and deployment. These skills are crucial for excelling in interviews and thriving as a data engineer. Do you possess these competencies? Let's discuss! 💡🔥 #DataEngineering #TechSkills
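To make skills 3 and 4 concrete, here is a quick PySpark sketch of a window function that keeps each customer's most recent order; the dataset path and column names are hypothetical:

    # Minimal PySpark window-function sketch: rank orders per customer by
    # recency and keep only the latest one.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.window import Window

    spark = SparkSession.builder.getOrCreate()
    orders = spark.read.parquet("/data/orders/")           # hypothetical dataset

    w = Window.partitionBy("customer_id").orderBy(F.col("order_ts").desc())
    latest = (orders
              .withColumn("rn", F.row_number().over(w))    # 1 = most recent order
              .filter(F.col("rn") == 1)
              .drop("rn"))
    latest.show()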