Date Posted: February 11, 2020
We are seeking a Big Data Engineer to support a new contract with the DHS CIO. The program, formerly the Data Framework Program, is now called the Data Services Branch. The mission of the project is to consolidate multiple data sets into a single data lake in order to perform entity resolution and move toward a more streamlined process for gathering intelligence from different components and applications. The current data lake is built on Hadoop (Cloudera), and the team is looking to migrate to Cloud Factory, a Platform as a Service built within DHS HQ.
*Must work W2 (no C2C); must be eligible to be processed for a security clearance*
This is a great opportunity to work on an enterprise-wide implementation of bleeding-edge technical solutions and be part of a high-energy team. The Data Platform Engineer will leverage in-depth, hands-on experience and expertise across multiple big data, Cloud, and analytics solutions. The successful candidate will have experience implementing enterprise solutions leveraging Cloud and Big Data technology in a Federal environment and working closely with Government clients. In this role you will balance technical leadership with hands-on development and implementation to initiate, plan, and execute large-scale, highly technical, and cross-functional data and analytics initiatives.
Applicants must possess a demonstrated understanding of, and experience working with, a wide variety of skill sets, including but not limited to:
Design, implement, and optimize leading Big Data technologies (Hadoop, Spark, SAP HANA) across hybrid hosting platforms (AWS, Azure, on-prem)
3+ years of progressive experience architecting, developing, and operating modular, efficient, and scalable big data and analytics solutions
Bachelor's degree in computer science, information technology, or a related field
Advanced proficiency in at least one of Python (preferred), Scala, Java (preferred), or C++
Experience with distributed computing frameworks, particularly Apache Hadoop 2.0+ (YARN, MapReduce, and HDFS) and associated technologies: Sqoop, Avro, Flume, Oozie, ZooKeeper, etc.
Experience with Apache Hive and Apache Spark and its components (Streaming, SQL, MLlib)
Operating knowledge of cloud computing platforms
Experience working within a Linux computing environment and with command-line tools, including shell/Python scripting for automating common tasks
Ability to work on a team in an agile setting, with familiarity with Jira and a clear understanding of Git repositories
Understanding of cloud and distributed-systems principles (such as load balancing, networking, scaling, and in-memory vs. on-disk storage)
Experience with large-scale big data frameworks (MapReduce, Hadoop, Spark, Hive, Impala, or Storm)
The company is an equal opportunity employer and will consider all applications without regard to race, sex, age, color, religion, national origin, veteran status, disability, sexual orientation, gender identity, genetic information, or any characteristic protected by law.
If you would like to request a reasonable accommodation, such as the modification or adjustment of the job application process or interviewing process due to a disability, please call 888-472-3411 or email accommodation@teksystems.com for other accommodation options.