We enjoy competitive compensation and benefits packages, and reward and recognize our employees for exceptional results. A constant focus on continued learning and growth keeps our team members engaged and excited about “what’s next.” We offer flexible work options when available, and emphasize the importance of work-life balance. We know that when our people are rewarded, recognized, and rejuvenated, we win as a team.
We are looking for a detail-oriented, talented, and enthusiastic data engineer to work in a fast-paced, startup-like environment with a seasoned cross-functional team of Security Experts, Data Scientists, and Machine Learning Engineers to advance the state of the art in network defense. If you love the challenges that come with big data, then this role is for you. Duties include large-scale data routing, modeling, extraction, transformation, loading, and warehousing, as well as composing such systems together with support for monitoring and mediation logic. You will use the latest big data platforms and technologies (e.g., NiFi, Spark, Kafka, NoSQL, Docker, Kubernetes, AWS/GCP) to help federate algorithms and applications across many clouds as part of the next-generation Secureworks platform.
The ideal candidate will have a software development background with an emphasis on distributed systems and some familiarity with data science or machine learning, though personal expertise is not required. This role requires a data engineer who can take architectures from concept to reality and work well within a collaborative environment. If you’re an engineer with experience solving big data problems, you might have what it takes to become an elite member of our team and help us innovate faster than the bad guys.
Key Responsibilities
Design, build, launch, and maintain infrastructure to support large-scale data routing, streaming and historical processing, data warehousing, and container orchestration.
Coordinate database operations (Elasticsearch & Scylla), including deploying clusters to production, performing post-deployment logistics (restarts/backups), and monitoring the health of clusters.
Employ a variety of languages and tools to marry big data systems together and hunt down opportunities to acquire new data from other systems.
Recommend and implement ways to improve data reliability, efficiency, and quality.
Collaborate with data scientists and security researchers to productize new methods of extracting information from structured and unstructured data.
Work effectively on a geographically distributed team to deliver high-quality software against aggressive schedules.
Minimum Requirements
Minimum of 6 years of experience in data engineering, data modeling, ETL development, or data warehousing.
Minimum of 2 years of experience with Elasticsearch/Lucene and/or Scylla/Cassandra.
Minimum of 2 years of experience with AWS cloud services.
Minimum of 2 years of software engineering experience in Scala and in scripting languages such as Python, Ruby, Perl, or Bash.
Minimum of 3 years of experience with big data platforms (e.g., Hadoop, Spark, Kafka, HBase).
Preferred Experience
Experience working with containers and container orchestration solutions such as Docker, Kubernetes, or Mesos.
Experience with stream-processing systems, such as NiFi/StreamSets, Kafka, Storm/Heron, Spark, or Flink.
Job ID: 117761