Job ID : 34816
Location : Claremont, CA
Company Name : Apollose
Job Type : Full-Time, Part-Time, Contract, Training
Industry : Information Technology
Salary : $106,200 - $107,700 per year
No. of Positions : Ongoing need to fill this role
Required Skills : Java, Python, SQL
Benefits : Medical Insurance, Dental Insurance, Vision Insurance, 401K, Life Insurance
Job Description :
Responsibilities:
- Highly experienced in developing Tableau dashboards and data models using custom SQL, data extracts, and real-time integrations with a Redshift database
- 2 years of experience in full-lifecycle implementation and support of Data Lake/BI solutions
- 4 years of experience in designing, developing, testing, deploying, and supporting data visualization and analytics solutions using SQL on Amazon Redshift, Redshift Spectrum, and Oracle DB, as well as Tableau, Kibana, Power BI, and similar tools
- Hands-on experience building regression, classification, and deep learning models and taking them to production with good accuracy rates
- Experience building unsupervised continuous learning models and working with data engineers to optimize and operationalize them.
- Should be hands-on with SageMaker, Jupyter notebooks, or similar platforms to analyze data and build models
- Self-sufficient in handling large volumes of data with excellent SQL skills
- Eager to learn / get trained on new time series data stores such as Apache Pinot
- Familiarity or experience with Apache Pinot is a plus
- Exposure to AWS Lambdas, Kubernetes, Kubeflow, Airflow for orchestration is a plus
- Design/architect frameworks to operationalize ML models through serverless architecture and support unsupervised continuous training models
- Take over and scale our data models (Tableau, DynamoDB, Kibana)
- Communicate data-backed findings to a diverse constituency of internal and external stakeholders
- Participate in technical decisions and collaborate with talented peers.
Requirements:
- 4 or more years of experience working directly with enterprise data solutions.
- Hands-on experience working in a public cloud environment and with on-prem infrastructure.
- Specialization in columnar databases such as Redshift Spectrum, time series data stores such as Apache Pinot, and AWS cloud infrastructure.
- Familiarity with in-memory, serverless, and streaming technologies and orchestration tools such as Spark, Kafka, Airflow, and Kubernetes.
- Current hands-on implementation experience required, with 3 or more years of IT platform implementation experience.
- AWS Certified Big Data - Specialty certification is desirable.
- Experience designing and implementing AWS big data and analytics solutions in large digital and retail environments is desirable.
- Advanced knowledge and experience in online transactional processing and analytical processing databases, data lakes, and schemas.
- Experience with AWS cloud data lake technologies and operational experience with Kinesis/Kafka, S3, Glue, and Athena.
- Experience with a wide variety of modern data processing technologies.
Key Skills:
- Advanced knowledge of data analytics, cleaning, and preparation.
- Proficiency with data modeling and data analysis tools.
- Expertise in programming languages such as Java, Python, R, SAS, Scala, and SQL.
- Experience with statistical and mathematical analysis.