🔥 We're Hiring! 🔥
📌 Lead Data Engineer (4-8 years)
📌 Senior Database Engineer (6+ years)
📌 Database Architect (10+ years)
📌 Senior Data Engineer (5+ years)
✉ Send us your CV and join us in shaping the future with bold ideas and innovation at Experion!
#hiring #data #engineer #database #architect #experiontechnologies #productengineering
Experion Technologies’ Post
More Relevant Posts
-
Hi, hope you are doing great! Please find the requirement below; if you are comfortable with it, please reply with your updated resume.

Role: Data Modeler
Location: Philadelphia, PA
Full-time opportunity

Technical Skills
• Proven experience as a Data Modeler
• Hands-on experience with software development and system administration
• Understanding of strategic IT solutions
• Functional domain knowledge: Telecom OSS/BSS, e-commerce, customer experience, back-office operations
• Knowledge of selected coding languages (Scala, Java, Python)
• Working knowledge of AWS and Kafka
• Familiarity with various operating systems (UNIX) and databases (relational required: Oracle, MySQL; NoSQL a plus: graph, MongoDB)
• Familiarity with big data technologies and platforms
• Experience working in an Agile team environment
• Excellent communication skills
• Problem-solving aptitude
• Organizational and leadership skills

Best Regards,
Anurag Pathak
Technical Recruiter
Phone: (848) 668-7022 x3103
Text: (470) 531-8842
Email: APathak@siriinfo.com

#informationtechnology #datamodeling #datamodel #datascience #dataanalytics #bigdataanalytics #hiring #nowhiring #jobs
-
We are hiring at Radioactive Technologies.
Full-time contract opportunity, fully remote. Please share your resume at Khalil@weradioactive.com

Data Analyst (SQL, Python, Hadoop, Spark)
Experience: 3-5 yrs

Key Responsibilities
● Design, build, and maintain efficient and reliable data pipelines to process large volumes of data.
● Implement ETL (Extract, Transform, Load) processes to integrate data from various sources.
● Manage and optimize databases and data warehouses to ensure high performance and availability.
● Implement data storage solutions that are secure, scalable, and cost-effective.
● Ensure data quality by implementing data validation, cleansing, and monitoring processes.
● Develop and enforce data governance policies to maintain data integrity and compliance.
● Work closely with data scientists, analysts, and other stakeholders to understand their data requirements and provide necessary support.
● Communicate technical concepts and solutions effectively to both technical and non-technical team members.

Tool and Technology Utilization
● Utilize data engineering tools and technologies such as SQL, Python, Hadoop, Spark, and cloud platforms (e.g., AWS, Azure, Google Cloud).
● Stay updated with the latest advancements in data engineering and big data technologies.

Qualifications
● Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
● 3-5 years of experience in data engineering or a related field.
● Proven experience with data pipeline development, database management, and data integration.
● Experience working with large datasets and big data technologies.
● Proficiency in SQL and programming languages such as Python or Java.
● Strong experience with big data technologies (e.g., Hadoop, Spark).
● Familiarity with cloud computing platforms (e.g., AWS, Azure, Google Cloud).
● Experience with data warehousing solutions (e.g., Redshift, Snowflake).
● Strong understanding of data security and privacy principles.
● Ability to adapt to a fast-paced and dynamic work environment.

Preferred Qualifications
● Experience with real-time data processing technologies (e.g., Kafka, Flink).
● Knowledge of DevOps practices and tools (e.g., Docker, Kubernetes)

#hiring #remotework #share #resume #DataAnalyst #DataEngineering #SQL #Python #Hadoop #Spark #ETL #DataPipelines #BigData #DataWarehousing #CloudComputing #AWS #Azure #GoogleCloud #DataSecurity #DataGovernance #DataIntegration #DataQuality #ETLProcesses #DataScience

Follow this link to join my WhatsApp group: https://lnkd.in/dMUrg_uR
-
Hi there, we're looking for a Data Engineer. And although things have changed a lot - for the better by a significant margin - I'll use the same pitch as last time. We have great benefits, an awesome office with all the perks, including beer, amazing colleagues, nice career progression, budget for learning including conferences, top-notch hardware, etc. But everyone else provides exactly the same, so why us? The short answer is because we offer something you won’t find elsewhere: FREEDOM. Yes, we are a huge financial institution, from which you’d usually expect tons of red tape, but this time we got you. Our team is pretty much like a start-up inside an incubator, or like working for a huge angel investor that is also a customer. Do you wanna use this or that stack? Go for it. Do you have an innovative solution in mind for a given problem? Count on us. Do you have an idea for a product that will reach a few million users? Let’s build it. Do you want to generate insights from a humongous amount of data? Let's see it. Do you know what to do and where to go? We are following just behind you. Seriously, if FREEDOM + RESOURCES + IMPACT is enticing for you (and if it’s not, please think again), let’s chat asap. #absa #absagroup #absaprague #dataengineering #mlops #etl #databricks #banking #fintechs #datascience https://lnkd.in/e-c3-G88
We are Hiring!! As a Data Engineer, you'll play a critical role in the organization's data ecosystem. Your responsibilities span from understanding the data requirements of various business units to designing and implementing robust data pipelines and infrastructure to meet those needs. Collaboration is key, as you'll work closely with Data Analysts, Data Scientists, and stakeholders to ensure alignment with business objectives. https://lnkd.in/d5zuAeMa Felipe Melo #dataengineer #etl #databricks #datagovernance #python #yourstorymatters
-
Data science is booming. Here are some of the places hiring entry-level employees.
-
Do you have a passion for Data Governance? 🏛 Does the challenge of tackling Data Governance for one of the top-tier companies in the world excite you? 🥇 Then show some initiative and take Action! 😤

Action is hiring a Data Governance Strategist. Apply within: https://action.co/careers

Governance Strategist Role Description:

*Governance*
• Data governance and dictionary management
• Align data governance with business teams
• Source-system thought leadership
• Interpret the data and formulate the cataloging documentation

*Ownership and Process Definition*
• Data ownership and approval-process definition
• Drive approval flow for process and data owners
• Define data ownership and process best practices

*Semantic Data Management*
• Establish a structured semantic layer to enable AI
• Interpret the data and structure data definitions

*Integration*
• Bridge multiple functions' siloed data into a common data foundation
• Align silos of information across the organization
• Expose and document technical APIs to global business teams
• Drive data sharing and expose data capabilities

#datagovernance #dataquality #data #action #opportunities
-
Why I Prefer Hiring Analytics Engineers Over Traditional Data Analysts!

In the fast-evolving world of data, I've found myself at a crossroads between traditional methodologies and the burgeoning field of analytics engineering. This role, a hybrid between data analysis and data engineering, is like a mix between looking at data and building the systems that handle data. Here’s why I’m all in for hiring analytics engineers:

1) They Make Data User-Friendly: Analytics engineers craft clean, well-structured datasets that empower end-users to independently find answers to their questions. This self-service approach democratizes data, making it accessible and understandable for non-technical users. It’s like they turn data into a helpful friend instead of a confusing puzzle.

2) They Keep Data Trustworthy: They embed software engineering practices like version control and continuous integration into the fabric of data analytics. Think of it like having a really good quality-check system that keeps mistakes low and confidence high.

3) They Help Everyone Help Themselves: By making data more reliable, analytics engineers let everyone in the company do their own data digging. This reduces the bottleneck traditionally seen in data teams and accelerates the pace of insight generation.

4) They Keep Things Clear: Analytics engineers are meticulous in maintaining data documentation and definitions. This means everyone understands what they’re looking at, which is super important.

5) They’re Like Swiss Army Knives: Not only can they sort out data problems, but they’re also pretty good at finding insights, almost like traditional analysts. But they bring something extra to the table with their tech skills and big-picture thinking.

The transition towards hiring analytics engineers reflects a broader trend in the data field towards efficiency, clarity, and democratization of data.
In this new era, the analytics engineer is not just a role but a strategic asset, reshaping how we approach data and decision-making within the organization. Follow me for more posts like this on data and analytics - https://lnkd.in/gAJY2wqW #hiring #dataanalytics #analytics #analyticsengineering Image Source - Google
-
Hi All, hiring for #dataengineer in Parsippany, NJ (CONTRACT TO HIRE - 3 months). #share #resume at saurabh@maintec.com

*** Please share genuine profiles with good communication skills, along with the candidate's rate, visa status, and location.

Job Title: Data Engineer
Location: Parsippany, NJ
Contract: CONTRACT TO HIRE - 3 months, then full-time; 10+ years of experience needed - H1 / GC / USC / H4 works

RESPONSIBILITIES
• Work closely with cross-functional teams, including product managers, data scientists, and engineers, to understand project requirements and objectives, ensuring alignment with overall business goals.
• Build data ingestion frameworks and data pipelines to ingest #unstructureddata and #structureddata from various data sources such as #sharepoint, #confluence, #chatbot, #jira, external sites, etc. into our existing #OneData platform.
• Design a scalable target-state architecture for data processing based on document content (data types may include, but are not limited to: #xml, #html, DOC, PDF, XLS, JPEG, TIFF, and PPT), including PII/CII handling, policy-based #hierarchy rules, and #metadata tagging.
• Design, develop, and deploy optimal data #pipelines, including an incremental data ingestion strategy, taking advantage of leading-edge technologies through experimentation and iterative refinement.
• Design and implement vector databases to efficiently store and retrieve high-dimensional vectors.
• Conduct research to stay up to date with the latest advancements in generative AI services and identify opportunities to integrate them into our products and services.
• Implement data quality and validation checks to ensure accuracy and consistency of data.
• Build automation that effectively and repeatably ensures quality, security, integrity, and maintainability of our solutions.

QUALIFICATIONS REQUIRED
• Bachelor’s degree in Engineering, Computer Science, or a related field; Master’s degree is a plus.
• 10+ years of relevant industry and functional experience in #database and #cloudbased technologies
• Experience with #machinelearning and #ai concepts related to #rag #architecture, LLMs, embeddings, and data insertion into a vector data store
• Experience building data ingestion pipelines for structured and unstructured data, both for storage and optimal retrieval
• Experience working with #clouddata stores, #nosql, graph, and vector databases
• Proficiency with languages such as #python, #sql, and #pyspark
• Experience working with #databricks and #snowflake technologies
• Experience with relevant #code #repository and project tools such as #github, #jira, and #confluence
• Working experience with #continuousintegration and #continuousdeployment, with hands-on expertise in #jenkins, #terraform, #splunk, and #dynatrace
Aspiring software engineer / Bachelor's degree in Computer Science and Engineering.
2mo: Amazing opportunity