Urgent Hiring for #Data_Analyst (#Permanent_Remote)
#Job_Type: Full Time
#Client: #NASDAQ_Listed_Fortune500_Organization
#Key_Skills: #Data_Analysis (#Insurance_Domain_Specific), #Expertise_in_DB2_and_MongoDB, #Advanced_SQL, #Data_Profiling, #Data_Validation, #Data_Cleansing, #Data_Visualization (Power_BI & Tableau)
#Experience_Range: 4-6 Years
#Notice_Period: 0-15 days max (Immediate Joiners Only)
Please share CV on info@humtechnologies.in or Call 7888443523
#Must_Have_Skills:
-------------------
● 4+ years of experience in data analysis, preferably within the insurance domain.
● Proficiency in DB2 and MongoDB databases.
● Strong experience in data profiling, data validation, and data cleansing.
● Advanced knowledge of SQL for querying relational databases.
● Experience with data visualization tools (e.g., Power BI, Tableau).
● Excellent problem-solving and analytical skills.
● Strong verbal and written communication skills for interacting with cross-functional teams.
#Job_Responsibilities:
----------------------
● Analyze and profile large datasets to ensure completeness, accuracy, and consistency.
● Work with structured and unstructured data from various sources, including DB2 and MongoDB.
● Collaborate with business stakeholders and SMEs to understand data needs and deliver actionable insights.
● Perform data validation and cleansing, ensuring that the data aligns with business requirements (see the sketch after this post).
● Develop, maintain, and optimize SQL queries to extract, transform, and load data.
● Create and maintain comprehensive documentation on data sources, definitions, and transformation logic.
● Design and generate data reports, dashboards, and visualizations to support data-driven decision-making.
● Identify trends, patterns, and anomalies within the data to provide insights for business improvements.
● Ensure data governance, accuracy, and adherence to internal and external regulations.
● Mentor junior analysts and assist with developing best practices for data management and analysis.
Please share CV on info@humtechnologies.in or Call 7888443523
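Illustrative only: a minimal Python/pandas sketch of the data profiling and validation work described above. The column names and business rules are hypothetical assumptions; in practice the frame would be loaded from DB2 (via SQL) or from MongoDB.

```python
# Hypothetical example: profile a small insurance dataset and flag rows that
# break simple business rules. Column names and rules are assumptions.
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Summarise completeness and cardinality for each column."""
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "non_null": df.notna().sum(),
        "null_pct": (df.isna().mean() * 100).round(2),
        "distinct": df.nunique(),
    })

def validate_policies(df: pd.DataFrame) -> pd.DataFrame:
    """Return rows that violate basic consistency checks."""
    return df[
        (df["premium"] <= 0)                      # premiums must be positive
        | (df["end_date"] < df["start_date"])     # a policy cannot end before it starts
        | df["policy_id"].duplicated(keep=False)  # policy IDs must be unique
    ]

if __name__ == "__main__":
    # In a real pipeline this frame would come from DB2 or MongoDB.
    df = pd.DataFrame({
        "policy_id": ["P1", "P2", "P2"],
        "premium": [1200.0, -50.0, 800.0],
        "start_date": pd.to_datetime(["2024-01-01", "2024-02-01", "2024-03-01"]),
        "end_date": pd.to_datetime(["2025-01-01", "2023-02-01", "2025-03-01"]),
    })
    print(profile(df))
    print(validate_policies(df))
```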
Hum Technologies
Technology, Information and Internet
Mumbai, Maharashtra 4,668 followers
Data engineering and AI solutions to accelerate your transformation towards a data-led, AI-powered enterprise
About us
Hum Technologies is a data solutions and AI services company, enabling businesses to accelerate their journey towards becoming a data-led enterprise powered by AI. We understand the unique challenges you face and are dedicated to providing tailored AI strategies and solutions to meet your specific needs. Our passion for solving complex business challenges with technology, combined with our domain knowledge and deep expertise in data engineering and custom AI development, enables us to provide impactful solutions aligned with your business goals. We firmly believe that when technology is wielded responsibly, it can be a transformative force for good in our world.
- Website
- www.humtechnologies.in
- Industry
- Technology, Information and Internet
- Company size
- 51-200 employees
- Headquarters
- Mumbai, Maharashtra
- Type
- Privately Held
- Founded
- 2023
- Specialties
- tech staffing, IT Staffing, Data Management, data science, procurement, AI/ML, and Tech Talent Management
Locations
-
Primary
Mumbai, Maharashtra 400014, IN
Updates
-
Urgent Hiring for #AWS_Data_Engineer (#Permanent_Remote) (#Role: #Data_Engineer & #Senior_Data_Engineer)
#Job_Type: Full Time (#5 Openings)
#Client: #NASDAQ_Listed_Fortune500_Organization
#Experience_Range: 3-10 Years
#Notice_Period: 0-30 days max (Short Notice Joiner Preferred)
#Key_Skills: #Technical_expertise_in_Python_and_SQL_Scripting, #Apache_Spark #Pyspark #Cloud_Formation #AWS_Step_Functions #AWS_Lake_Formation and #Glue_Data_Catalog
Please share CV on info@humtechnologies.in or Call 7888443523
#Key_Responsibilities:
------------------------
● Design, develop and maintain large-scale data pipelines using #AWS_Services such as #Glue, #Lambda and #S3.
● Architect and implement data solutions in #AWS_Redshift for optimized data storage and retrieval.
● Build, automate and monitor data workflows using #AWS_Step_Functions and #CloudFormation.
● Ensure the effective setup and management of #AWS_Lake_Formation and #Glue_Data_Catalog for data lake environments.
● Develop and optimize ETL processes using #Python and #PySpark to process and transform large datasets (see the sketch after this post).
● Collaborate with data scientists, analysts and stakeholders to understand data requirements and translate them into scalable solutions.
● Implement best practices for data governance, security and quality across AWS infrastructure.
● Troubleshoot, optimize and enhance existing data pipelines and architecture for performance and scalability.
#Key_Skills:
-------------
● Minimum 3+ years of #Relevant_Experience in #AWS_Data_Engineering with a deep focus on #AWS_Services.
● Expertise in AWS Glue, Lambda, S3, Redshift and Step Functions.
● Hands-on experience with AWS CloudFormation and AWS Lake Formation is highly beneficial.
● Advanced programming skills in Python and PySpark for building and optimizing ETL pipelines.
● Experience with performance tuning, data modeling and query optimization in AWS environments.
● Strong understanding of data security, governance and compliance in cloud environments.
#Good_to_have_skills:
--------------------
● Experience with CI/CD pipeline setup for data workflows.
● Knowledge of AWS cost optimization practices.
● Familiarity with other AWS services such as Athena, DynamoDB or RDS.
Please share CV on info@humtechnologies.in or Call 7888443523
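Illustrative only: a minimal PySpark sketch of the kind of ETL described above (read raw data from S3, clean it, write partitioned Parquet). Bucket names, paths and column names are hypothetical; in a real deployment this logic would typically run inside an AWS Glue job or on EMR.

```python
# Hypothetical example: a small PySpark batch job that cleans raw CSV data
# from S3 and writes curated, partitioned Parquet. All names are made up.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_etl").getOrCreate()

raw = spark.read.option("header", True).csv("s3://example-raw-bucket/claims/")

clean = (
    raw.dropDuplicates(["claim_id"])                           # de-duplicate on the business key
       .withColumn("claim_amount", F.col("claim_amount").cast("double"))
       .withColumn("claim_date", F.to_date("claim_date", "yyyy-MM-dd"))
       .filter(F.col("claim_amount") > 0)                      # drop obviously invalid rows
)

(clean.write
      .mode("overwrite")
      .partitionBy("claim_date")
      .parquet("s3://example-curated-bucket/claims/"))
```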
-
Urgent Hiring for #AWS_Cloud_Engineer (#Permanent_Remote)
#Job_Type: Full Time (#5 Openings)
#Client: #NASDAQ_Listed_Fortune500_Organization
#Experience_Range: 3-6 Years
#Notice_Period: 0-15 days max (Short Notice Joiner Preferred)
#Key_Skills: #Technical_expertise_in_Python_and_SQL_Scripting, #Apache_Spark #Pyspark #Cloud_Formation #AWS_Step_Functions #Snowflake_Data_Warehousing
Please share CV on info@humtechnologies.in or Call 7888443523
#Key_Responsibilities:
------------------------
● Design, develop and maintain large-scale data pipelines using #AWS_Services such as #Glue, #Lambda and #S3.
● Architect and implement data solutions in #AWS_Redshift for optimized data storage and retrieval.
● Build, automate and monitor data workflows using #AWS_Step_Functions and #CloudFormation (see the sketch after this post).
● Ensure the effective setup and management of #AWS_Lake_Formation and #Glue_Data_Catalog for data lake environments.
● Develop and optimize ETL processes using #Python and #PySpark to process and transform large datasets.
● Collaborate with data scientists, analysts and stakeholders to understand data requirements and translate them into scalable solutions.
● Implement best practices for data governance, security and quality across AWS infrastructure.
● Troubleshoot, optimize and enhance existing data pipelines and architecture for performance and scalability.
#Key_Skills:
-------------
● 3+ years of experience in data engineering with a focus on AWS.
● Expertise in AWS Glue, Lambda, S3, Redshift and Step Functions.
● Hands-on experience with AWS CloudFormation and AWS Lake Formation is highly beneficial.
● Advanced programming skills in Python and PySpark for building and optimizing ETL pipelines.
● Experience with performance tuning, data modeling and query optimization in AWS environments.
● Strong understanding of data security, governance and compliance in cloud environments.
#Good_to_have_skills:
--------------------
● Experience with CI/CD pipeline setup for data workflows.
● Knowledge of AWS cost optimization practices.
● Familiarity with other AWS services such as Athena, DynamoDB or RDS.
Please share CV on info@humtechnologies.in or Call 7888443523
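Illustrative only: a minimal boto3 sketch of triggering and monitoring a Step Functions workflow of the kind this post mentions. The state machine ARN, region and input payload are hypothetical assumptions, not details from the posting.

```python
# Hypothetical example: start a Step Functions execution that orchestrates a
# data workflow and poll it until it finishes. ARN and payload are made up.
import json
import time
import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-1")

def run_workflow(state_machine_arn: str, payload: dict) -> str:
    """Start an execution and return its final status."""
    start = sfn.start_execution(
        stateMachineArn=state_machine_arn,
        input=json.dumps(payload),
    )
    while True:
        desc = sfn.describe_execution(executionArn=start["executionArn"])
        if desc["status"] != "RUNNING":
            return desc["status"]  # SUCCEEDED, FAILED, TIMED_OUT or ABORTED
        time.sleep(10)

if __name__ == "__main__":
    status = run_workflow(
        "arn:aws:states:us-east-1:123456789012:stateMachine:example-etl",
        {"run_date": "2024-06-01"},
    )
    print(f"Workflow finished with status: {status}")
```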
-
Urgent Hiring for #AWS_Cloud_Engineer (#Permanent_Remote)
#Job_Type: Full Time (#5 Openings)
#Client: #NASDAQ_Listed_Fortune500_Organization
#Experience_Range: 3-6 Years
#Notice_Period: 0-15 days max (Short Notice Joiner Preferred)
#Key_Skills: #Technical_expertise_in_Python_and_SQL_Scripting, #Apache_Spark #Pyspark #Cloud_Formation #AWS_Step_Functions #Snowflake_Data_Warehousing
Please share CV on info@humtechnologies.in or Call 7888443523
#Key_Responsibilities:
-----------------------
● Designing, building, and maintaining efficient, reusable, and reliable code.
● Ensure the best possible performance and quality of high-scale data applications and services.
● Participate in system design discussions.
● Independently perform hands-on development and unit testing of the applications.
● Collaborate with the development team and build individual components into the enterprise data platform.
● Work in a team environment with the product, QE/QA, and cross-functional teams to deliver a project throughout the whole software development cycle.
● Responsible for identifying and resolving any performance issues.
● Keep up to date with new technology development and implementation.
● Participate in code reviews to make sure standards and best practices are met.
#Required_Skillset:
-------------------
● 3+ years as a Data Engineer, specializing in the AWS stack and Snowflake.
● Proficiency in Python, SQL, Apache Spark, and ETL processes.
● Expertise in #AWS_Services - CloudFormation, S3, Athena, Glue, EMR/Spark, RDS, Redshift, Lambda, Step Functions, Lake Formation, and CloudWatch.
● Experience in Snowflake data warehousing and management (see the sketch after this post).
● Strong aptitude, problem-solving abilities, and analytical skills.
● Ability to take ownership of tasks and projects, as appropriate.
● Quick learner, with the ability to help the team adapt to new technologies swiftly.
● Excellent communication and coordination skills.
Please share CV on info@humtechnologies.in or Call 7888443523
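Illustrative only: a minimal sketch of loading curated data into Snowflake with the snowflake-connector-python package, in line with the Snowflake data warehousing skill above. The account details, external stage and table names are hypothetical assumptions.

```python
# Hypothetical example: copy Parquet files from an existing external stage
# into a Snowflake table. Credentials, stage and table names are made up.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="CURATED",
)
try:
    cur = conn.cursor()
    # Assumes an external stage over the curated S3 prefix already exists.
    cur.execute("""
        COPY INTO claims
        FROM @curated_stage/claims/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    print(cur.fetchall())  # per-file load results
finally:
    conn.close()
```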
-
Position: AWS Data Engineer
Location: Remote
Experience: 4-7 Years
Company Name: Hum Technologies (Client: Fortune 500 Company)
Job Description:
- 4+ years of experience as a Data Engineer
- Strong technical expertise in Python and SQL
- Experience with big data tools such as Hadoop and Apache Spark (PySpark)
- Solid experience with AWS services such as CloudFormation, S3, Athena, Glue, Glue DataBrew, EMR/Spark, RDS, Redshift, DataSync, DMS, DynamoDB, Lambda, Step Functions, IAM, KMS, SM, EventBridge, EC2, SQS, SNS, Lake Formation, CloudWatch, CloudTrail
- Responsible for building test, QA & UAT environments using CloudFormation
- Build and implement CI/CD pipelines for the EDP Platform using CloudFormation and Jenkins
Key Skills:
- Implement high-velocity streaming solutions and orchestration using Amazon Kinesis, AWS Managed Airflow, and AWS Managed Kafka (preferred)
- Solid experience building solutions on an AWS data lake/data warehouse
- Analyze, design, develop, and implement data ingestion pipelines in AWS
- Knowledge of implementing ETL/ELT for data solutions end to end
- Ingest data from REST APIs to the AWS data lake (S3) and relational databases such as Amazon RDS, Aurora, and Redshift (see the sketch after this post)
- Perform peer code reviews and code quality analysis, using the associated tools end-to-end for Prudential's platforms
- Create detailed, comprehensive, and well-structured test cases that follow best practices and techniques
- Estimate, prioritize, plan & coordinate quality testing activities
- Understand requirements and data solutions (ingest, storage, integration, processing, access) on AWS
- Knowledge of implementing an RBAC strategy/solutions using AWS IAM and the Redshift RBAC model
- Knowledge of analyzing data using SQL stored procedures
- Build automated data pipelines to ingest data from relational database systems, file systems, and NAS shares to AWS relational databases such as Amazon RDS, Aurora, and Redshift
- Develop test plans and execute manual and automated test cases for the automated data pipelines
Please share your profile at info@humtechnologies.in
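Illustrative only: a minimal Python sketch of the "REST API to AWS data lake (S3)" ingestion pattern mentioned above, using requests and boto3. The API URL, bucket name and key layout are hypothetical assumptions.

```python
# Hypothetical example: pull JSON from a REST API and land it in an S3 raw
# zone, partitioned by ingestion date. URL, bucket and prefix are made up.
import datetime
import json
import boto3
import requests

API_URL = "https://api.example.com/v1/policies"  # hypothetical endpoint
BUCKET = "example-data-lake-raw"                 # hypothetical bucket

def ingest() -> str:
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    records = resp.json()

    key = f"policies/ingest_date={datetime.date.today().isoformat()}/policies.json"
    boto3.client("s3").put_object(
        Bucket=BUCKET,
        Key=key,
        Body=json.dumps(records).encode("utf-8"),
    )
    return key

if __name__ == "__main__":
    print(f"Landed raw data at s3://{BUCKET}/{ingest()}")
```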
-
We are hiring for a Sr. Java Data Engineer with strong experience in Java, Spring, RESTful APIs, Microservices, AWS/Azure, and SQL, for one of our Fortune 500 clients.
Company Name: Hum Technologies
Client: One of the Fortune 500 Companies
Location: Remote/Hybrid
Role: Sr. Java Data Engineer
Responsibilities:
- Application development using Java and APIs
- Design, implement, and maintain Java API-based applications that can be high-volume and low-latency
- Develop and test software, and identify and resolve any technical issues that arise
- Create detailed design documentation and propose changes to the current Java infrastructure
- Support continuous improvement, investigating alternatives and technologies, and presenting them for architectural review
- Experience with large-scale data applications
- Able to handle coding, debugging, performance tuning and deploying applications to production
- Experience in Java, Spring, Kafka, RESTful APIs, Microservices, AWS/Azure, and SQL
- Knowledge of ETL platforms
- Test the functionality and push the code to the next environment
- Follow compliant CI/CD processes
Good to Have:
- Experience/knowledge of Hadoop, Spark, ETL, Python
- Write ETL (Extract, Transform, Load) scripts to load facts and dimensions data using SQL (Structured Query Language) queries, and document catalog data
- Strong understanding of data warehousing, data modeling and building ETL pipelines
- Knowledgeable in (and preferably hands-on with) UNIX environments and different continuous integration tools
Please share your profile at info@humtechnologies.in
-
We are expanding our Data Engineering toolset and need an experienced GCP, Python and SQL Data Engineer. This role will be required to design and implement our new data pipelines using GCP and train other team members.
Primary Responsibilities:
· Implement and manage GCP tools and the environment.
· Using Python, build and maintain multiple source data connections to APIs, SQL databases, and GCS.
· Migrate data from AWS S3 to Google Cloud Storage (see the sketch after this post).
· Manage the IT platform and GCS environment.
· Ensure data consistency, accuracy and reliability as data and business requirements change.
Required Skills:
· 3+ years of experience working with GCP, including Pub/Sub and Cloud Functions.
· 3+ years of data engineering, data pipeline development, and ETL experience using Python and SQL in GCP.
· Proficiency in the Python scripting language, SQL, cloud databases, and ETL development processes & tools.
· Strong understanding of traditional relational databases, data and dimensional modeling principles, and data normalization techniques.
· Experience requesting, transforming, and ingesting data from REST and SOAP APIs.
Required Education/Experience:
· Bachelor’s degree in information systems, computer science, or a related technical field.
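Illustrative only: a minimal Python sketch of the "migrate data from AWS S3 to Google Cloud Storage" responsibility above, using boto3 and google-cloud-storage. Bucket names and prefix are hypothetical; for large migrations, Google's Storage Transfer Service would usually be a better fit than object-by-object copies.

```python
# Hypothetical example: copy every object under an S3 prefix into a GCS
# bucket. Bucket names and prefix are made up.
import boto3
from google.cloud import storage

S3_BUCKET = "example-aws-bucket"   # hypothetical
GCS_BUCKET = "example-gcs-bucket"  # hypothetical
PREFIX = "exports/"

def migrate_prefix() -> int:
    s3 = boto3.client("s3")
    gcs_bucket = storage.Client().bucket(GCS_BUCKET)

    copied = 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=S3_BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=S3_BUCKET, Key=obj["Key"])["Body"].read()
            gcs_bucket.blob(obj["Key"]).upload_from_string(body)
            copied += 1
    return copied

if __name__ == "__main__":
    print(f"Copied {migrate_prefix()} objects from s3://{S3_BUCKET} to gs://{GCS_BUCKET}")
```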
-
We are hiring for a GCP Migration Engineer with a flair for cloud services, migration, and automation.
Company Name: Hum Technologies
Client: One of the Fortune 500 Companies
Location: Remote/Hybrid
Skills: Cloud services, migration & automation
Responsibilities:
● 5+ years of experience in cloud computing, with a focus on GCP.
● Proven experience in migrating on-premises infrastructure to GCP.
● Strong understanding of GCP services, including Compute Engine, Cloud Storage, BigQuery, Pub/Sub, and others.
● Experience with containerization and orchestration tools such as Docker and Kubernetes.
● Proficiency in scripting and automation using tools like Terraform, Ansible, or similar.
● Familiarity with network architecture and security best practices in a cloud environment.
● Conduct a thorough assessment of the existing on-premises infrastructure.
● Develop a detailed migration plan, including timelines, resource allocation, and risk mitigation strategies.
● Identify and address potential challenges in migrating on-prem applications and data to GCP.
● Lead the execution of the migration plan, ensuring minimal disruption to business operations.
● Migrate applications, databases, and workloads from on-premises to GCP.
https://lnkd.in/dgY5kR8B
Please send us your resume at info@humtechnologies.in or DM on 7304515922
-
We are hiring a Fullstack Engineer for one of our clients, a UK-based MNC.
Skill: Fullstack Developer with React
Location: Remote
Experience: 4-8 Years
Key Skills & Experience:
● 4+ years of ReactJS experience, writing maintainable, reusable code.
● Experience with Redux.
● Experience with HTML/CSS, including concepts like cross-browser support, theming, and accessibility.
● Strong fundamentals and problem-solving skills.
● Communicate equally clearly in writing, code, and speech.
● Knowledge of and experience with source control systems like Bitbucket and GitLab.
● Knowledge of and experience with Agile practices.
● Desire to build and design complex web architectures leveraging object-oriented programming.
● Excellent interpersonal and communication skills, including excellent spoken and written English.
● Experience in Linux is required.
● Experience with CI/CD and AWS is preferred.
Working Conditions: This is a remote role with the expectation of on-site/in-person collaboration with teammates and stakeholders for moments that matter, and may require up to 15% travel.