Sr Hadoop Developer
Our team is looking for a Sr. Hadoop resource to support the Provider team. The team is currently working on various GWCS projects to support Provider domain data for enterprise use.
Write complex Spark code to perform ETL and load data into various target databases such as DB2, MongoDB, and Postgres.
Knowledge of scheduling tools such as Control-M.
Experience integrating with Kafka/MQ is required.
Knowledge of code migration and deployment using Jenkins and GitLab.
Experience processing bulk-volume data in batch and real time for data warehousing is required.
Experience working with an S3 data lake is preferred.
Provide guidance on projects and ensure each project is implemented within the specified timelines.
Build reusable code for complex processes.
Perform unit testing and debugging; be able to diagnose most program errors and provide solutions to resolve them.
Write detailed technical specifications, identify integration points, and ensure documentation is of sufficient quality and complies with architectural standards.
Build data flow diagrams and solution documents; be involved in the architecture design for projects and provide input on the design.
Perform code reviews with the lead, and work both individually and as part of a team.
Act as a liaison for the team and work with all users consuming our data for analytics/reporting needs.
The Hadoop resource should have the knowledge to write complex ETL.
Experience in Spark using Scala, Spark SQL, Hive, HBase, and NiFi; knowledge of performance tuning and best practices for writing Spark code; and experience with Kafka and real-time data processing are required.
If you believe that your skills and experience are a match for this position, please give us a call at 904-998-9414 to speak to a recruiter, e-mail your most current resume to firstname.lastname@example.org, or apply online at https://jobs.btginc.com