Copyright OPTnation. All rights reserved.

Hadoop Developer

Job ID : 23821

Job Title : Hadoop Developer

Location : Michigan

Company Name : Logging-in

Job Type : Full-Time

Industry : Computer/Software

Salary : $65,000 - $70,000 per year

Work Authorization : OPT

No. of Positions : Ongoing need to fill this role

Posted on : 09-16-2019

Required Skills : Hadoop

Benefits : None

Job Description :

Logging-in INC is one of the fastest-growing IT consulting firms serving Fortune 500 clients. We cater to our clients' application development and support services needs and offer them technology-driven business solutions that meet their goals and corporate objectives.

WHY US?

Logging-in knows what is in demand in the IT industry. To provide the best talent, we have meticulously designed our training programs to keep pace with current market demand. Our reach is wide enough that we are the preferred choice of our prime vendors.

We are looking for a Big Data Engineer who will work on collecting, storing, processing, and analyzing large data sets. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.

Responsibilities

  • Selecting and integrating any Big Data tools and frameworks required to provide requested capabilities
  • Implementing ETL processes
  • Monitoring performance and advising on any necessary infrastructure changes
  • Defining data retention policies

Skills and Qualifications 

  • Proficient understanding of distributed computing principles
  • Management of a Hadoop cluster, including all its services
  • Ability to resolve ongoing issues with operating the cluster
  • Proficiency with Hadoop v2, MapReduce, HDFS
  • Experience building stream-processing systems using solutions such as Storm or Spark Streaming
  • Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
  • Experience with Spark
  • Experience with integration of data from multiple data sources
  • Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
  • Knowledge of various ETL techniques and frameworks, such as Flume
  • Experience with various messaging systems, such as Kafka or RabbitMQ
  • Experience with Big Data ML toolkits, such as Mahout or Spark MLlib
  • Good understanding of Lambda Architecture, along with its advantages and drawbacks
  • Experience with Cloudera/MapR/Hortonworks
