
Teradata Developer

Job ID : 24095

Job Title : Teradata Developer

Location : Seattle, WA

Company Name : Elajika Inc

Job Type : Contract

Industry : Information Technology

Salary : $55 - $60 per hour

Work Authorization : OPT

No. of Positions : 2-4

Posted on : 10-17-2019

Required Skills : AWS (S3, IAM, etc.), Spark, Python, Airflow, Teradata, Kafka, Tableau

Benefits : None

Job Description :

 

PROJECT DESCRIPTION

The BI and DW team would consume the profile to build extended datasets and subject-area-focused marts, leveraging Teradata and a big data stack including Kafka and Spark. This would replace the existing data warehouses (Merch DM, Supply Chain DM, EDW, SRA, etc.), most of which are daily batch today and carry redundant data.

 

Job Description

  • Data Architect with 8+ years of experience
  • 5+ years of experience designing and implementing complex load patterns involving Teradata and big data technologies, including Spark and Kafka
  • 5+ years of experience with complex Teradata code, including stored procedures and BTEQ
  • 5+ years of experience with Teradata load utilities
  • Exposure to visualization tools such as MicroStrategy or Tableau
  • 3+ years of experience in big data design and implementation using Kafka and Spark
  • Experience in complex problem solving with data
  • Candidate must be able to think outside the box for solutions and designs
  • 2+ years of experience in AWS, working with S3, IAM, and other components
  • 3+ years of experience with scheduling tools such as Airflow
  • Should be able to lead a team
  • Should be the go-to person for complex problems
  • Provide a constant flow of new and innovative ideas into the BI roadmap.
  • Coding proficiency in at least one modern programming language (e.g. Python, Java, Scala)
  • Exposure to Big Data Technologies (Hadoop, Hive, Presto, Pig, Spark, etc.).
  • Design and implement modernized ETL and data processing solutions using cloud-based services (S3, Redshift, etc.) and deprecate legacy on-premises solutions (Oracle, etc.)
  • Develop data integration solutions leveraging multiple disparate sources.
  • Continual performance tuning and capacity planning for future growth potential.
