
ETL Hadoop Developer

Employer: Interpro Inc.
Location: Bowie, MD
Closing date: Aug 22, 2019


LONG TERM ROLE. NO THIRD PARTIES, PLEASE.

Description

Sr. Software Engineer

Responsibilities

- Create robust and scalable designs that are easily incorporated into the feature and meet customer requirements
- Develop exemplary code for multiple feature components that meets the company's best-practice standards; deliver code that is stable, secure, on schedule, and contains minimal bugs
- Develop code with the "ilities" in mind: scalability, reliability, maintainability, flexibility
- Write unit tests for each function of the feature component that effectively exercise the code, surface bugs or design issues, and integrate with or complement those used by the Test organization
- Develop feature-component code that conforms to group coding standards, is consistent with product design and architecture goals, does not repeat past mistakes, and meets schedule and quality expectations
- When modifying existing code, take the necessary steps to fix it and ensure it meets team and company coding and quality standards
- Ensure that the delivered overall user experience conforms to the feature objectives
- Design and develop data structures and ETL processes in a SQL Server environment
- Design and develop data structures and ETL processes using Hive, HBase, Pig scripts, Spark, and Java functions (see the illustrative sketch after this posting)
- Hadoop development and implementation
- Load data from disparate data sets
- Pre-process data using Hive and Pig
- Design, build, install, configure, and support Hadoop; help build Hadoop clusters
- Test prototypes and oversee handover to operational teams
- Assist in data analysis and data modeling using Hive
- Work with product teams to understand requirements and convert them into the final product
- Participate in the full software development lifecycle, including testing, implementation, and auditing, in an Agile environment
- Diagnose and address issues as necessary and work with product teams to implement correction plans

Qualifications

- 7+ years of strong software development experience
- 4+ years of experience developing in Hadoop (Hortonworks)
- 4+ years of experience developing in Spark using Java
- 4+ years of experience developing MapReduce using Java
- 4+ years of experience with at least one messaging bus in the big data stack (Kafka, Flume, etc.)
- 4+ years of experience with at least one data extraction tool in the big data stack (NiFi, Sqoop, etc.)
- 4+ years of experience with at least one orchestration tool in the big data stack (Oozie, etc.)
- 4+ years of experience with at least one non-RDBMS store in Hadoop (Jethro, Hive, HBase, MemSQL, etc.)
- 4+ years of experience with at least one RDBMS (MySQL, MS SQL, etc.)
- 2+ years of experience in Python and shell scripting
- 2+ years of experience building microservices with Spring Boot
- Writing high-performance, reliable, and maintainable code
- Good knowledge of big data structures, theories, principles, and practices
- Knowledge of workflow schedulers such as Oozie
- Experience working in an Agile environment
- .NET and JavaScript experience preferred
- Analytical and problem-solving skills applied to the big data domain
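Illustrative sketch (not part of the original posting): the Spark-with-Java and Hive items above describe typical ETL work on a Hadoop cluster. The minimal example below shows what such a job can look like; the table names (staging.orders_raw, warehouse.orders), the columns, and the cleanup steps are hypothetical, and it assumes a Spark build with Hive support and a configured Hive metastore.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class OrdersEtl {
        public static void main(String[] args) {
            // SparkSession with Hive support, assuming a configured Hive metastore
            SparkSession spark = SparkSession.builder()
                    .appName("OrdersEtl")
                    .enableHiveSupport()
                    .getOrCreate();

            // Read a hypothetical raw Hive table
            Dataset<Row> raw = spark.sql("SELECT * FROM staging.orders_raw");

            // Simple cleanup: drop rows without an order id and remove duplicates
            Dataset<Row> cleaned = raw
                    .filter("order_id IS NOT NULL")
                    .dropDuplicates("order_id");

            // Write the result back as a partitioned Hive table
            cleaned.write()
                    .mode("overwrite")
                    .partitionBy("order_date")
                    .saveAsTable("warehouse.orders");

            spark.stop();
        }
    }

A job like this would typically be packaged as a JAR, submitted with spark-submit, and scheduled through an orchestration tool such as Oozie, matching the stack listed in the qualifications.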
