Senior Big Data Engineer

at The Judge Group
Location Irving, TX
Date Posted February 1, 2020
Category Default
Job Type Contractor, Full-time

Description

Our client is currently seeking a Senior Big Data Engineer.

Full-time, permanent opportunity

Job Description
This Lead Consultant is an experienced professional responsible for leveraging data and analytics to automate and optimize Claims Analytics Data processes, enabling our Claims employees to focus on serving our customers and delivering the most advanced claims experience on the planet. The consultant will own the strategy for bringing complex data together into clean, useful data structures, making our valuable data more approachable.
Key Responsibilities
  • Design, prototype, and deliver software solutions within the big data ecosystem
  • Lead projects and/or serve as the analytics SME providing new or enhanced data to the business
  • Improve data governance and quality, increasing the reliability of our data
  • Influence the creation of a single, trusted source for key Claims business data that can be shared across the enterprise
  • Design and build new Big Data systems that turn data into actionable insights
  • Train and mentor junior team members on Big Data/Hadoop tools and technologies
  • Identify opportunities for improvement and present recommendations to management
  • Seek out and evaluate emerging big data technologies and open-source packages
  • Participate in strategic planning discussions with technical and non-technical partners
  • Use, teach, and support a wide variety of Big Data and analytics tools (e.g., Python, Hadoop, Hive, Scala, Impala)
  • Use, teach, and support a wide variety of programming languages in Big Data and analytics work (e.g., Java, Python, SQL, R)

Job Qualifications
  • Undergraduate degree in Computer Science, Mathematics, Engineering (or related field) or equivalent experience preferred
  • 5-7 years of experience in a data integration, ETL, and/or business intelligence/analytics function preferred
  • Ability to work with broad parameters in complex situations
  • Experience in developing, managing, and manipulating large, complex datasets
  • Expert-level coding skills in SQL, Python, and/or other scripting languages (e.g., UNIX shell) required
  • Experience with source control and build tooling (e.g., Git, GitHub, Jenkins, Artifactory) required
  • 4-5+ years of experience with big data and the Hadoop ecosystem (HDFS, Spark, Sqoop, Hive, Impala, Parquet) required
  • Experience with Agile development methodologies and tools to iterate quickly on product changes, develop user stories, and work through the backlog