Build and maintain tools for deployment, monitoring and operations.
Automate and streamline our processes to support the team's deliverables to our customers.
Monitor our production workloads, evaluate performance issues and solve them.
Develop custom real-time streaming data pipelines using Spark
Ensure proper data governance policies are followed by implementing or validating data lineage, quality checks, classification, etc.
Have a quality mindset, squash bugs with a passion, and work hard to prevent them in the first place through unit testing, test-driven development, version control, continuous integration and deployment.
Be passionate about solving customer problems and develop solutions that build a loyal customer and community following
Create designs and participate in design and code reviews
Contribute to the design and architecture of the project
Operate within an Agile development environment and apply its methodologies
Required Knowledge and Skills:
Knowledge of best practices and IT operations in an always-up, always-available service
Experience with automation/configuration management
Experience creating and modifying CI/CD pipelines for new deliverables
Experience with deploying AWS cloud services and infrastructure
Experience with network, application, security, and server monitoring
Demonstrated understanding of distributed computing principles
Experience developing ETL processing flows using distributed processing frameworks such as Spark and Hadoop MapReduce
Good knowledge of stream-processing systems, such as Storm or Spark Streaming
Good knowledge of various messaging systems, such as Kafka or RabbitMQ
Good knowledge of data architectures, data pipelines, real time processing, streaming, networking, and security
4+ years’ experience in software engineering
1+ years’ experience building CI/CD pipelines with Jenkins, TeamCity, or other deployment tools
1+ years’ experience with configuration management using Ansible, Octopus, or similar tools
1+ years’ experience managing Spark or Hadoop clusters
1+ years’ experience building data pipelines with Spark or Hadoop
Preferred Knowledge and Skills:
6+ years’ experience in software engineering
2+ years’ experience with configuration management using Ansible, Octopus, or similar tools
2+ years’ experience building CI/CD pipelines with Jenkins, TeamCity, or other deployment tools
Demonstrable advanced knowledge of data architectures, data pipelines, real time processing, streaming, networking, and security
1+ years’ experience with NoSQL databases, such as HBase or Cassandra