talented data engineer with broad experience and a passion for building scalable, resilient systems to join us in combating online fraud.
You'll help catch sophisticated fraudsters in real time, at large scale, for some of the world's biggest enterprises.
Stuff you’ll be doing:
* Developing and owning distributed, high-throughput, large-scale production systems that handle billions of events per day.
* Supporting and driving the pipeline that delivers the data points needed for real-time decisioning, from research to production.
* Crafting tools and practices to analyze complex datasets (e.g., joins between multiple data sources with billions of records in each).
* Working with colleagues from various disciplines: analysts, data scientists, researchers, and fellow engineers.
* Working with production clusters of various datastores and data processing engines, including Spark, Storm, Elasticsearch, Couchbase, Redis, MySQL, Redshift, and Athena.
* Debugging complex problems across the whole stack.
* Self-managing project planning, milestones, designs, and estimations.
Stuff you’ll need:
* Strong skills with several of the following: Python, Java, Kotlin, NodeJS.
* At least 5 years of proven experience designing and building large-scale production systems.
* Deep knowledge of different data stores such as MySQL, Postgres, Elasticsearch, Couchbase, Aerospike, MongoDB, Cassandra, DynamoDB, Redis, etc.
* A track record of being the “go-to” person in your previous teams or organizations.
* Experience with AWS or other public clouds.
* Professional proficiency in English.
We’d love it if you have:
* A very strong understanding of how to explore and extract insights from data: SQL capabilities (RDS/Athena/Redshift), using Spark to process billions of data points, etc.
* Advanced experience with stream processing and other data processing tools and best practices.