Big Data Architect

A multi-disciplinary Big Data Architect to design and build a scalable, robust data platform serving Marketing, CRM, campaign management, and other back-office applications.

* Responsible for multiple projects at once, serving as the main source of technical knowledge and experience, and facilitating, building (hands-on), and mentoring R&D teams

* Responsible for cloud infrastructure, architecture, microservices, CI/CD, production environments, and new, innovative, and complex developments

* Responsible for research, analysis, and proofs of concept for new technologies, tools, and design concepts

* Design and develop core modules of our big data platform infrastructure (hosted on Google Cloud, using managed services and Kubernetes, and based on Spark Core/Streaming/Structured Streaming/SQL, Scala, Python, AngularJS, Node.js, Kafka, BigQuery, Redis, Elasticsearch, Google Cloud Machine Learning Engine, and TensorFlow)

* The platform handles huge amounts of data through complex processing in batch and real-time modes and complex data manipulation, using services, UI frameworks, and interactive notebooks (a minimal pipeline sketch follows this list)
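As a rough illustration of the stack above, here is a minimal Scala sketch of a Spark Structured Streaming job reading events from Kafka and landing them on Google Cloud Storage. The topic name campaign-events, the broker address, and the bucket gs://example-bucket are illustrative assumptions, not details from the role; a real deployment might sink to BigQuery via the spark-bigquery connector instead.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object CampaignEventStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("campaign-event-stream")
      .getOrCreate()

    // Read the raw event stream from Kafka (broker address is a placeholder).
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "campaign-events") // hypothetical topic name
      .load()

    // Kafka delivers key/value as binary; cast to strings for downstream use.
    val events = raw.select(
      col("key").cast("string").as("key"),
      col("value").cast("string").as("payload"),
      col("timestamp"))

    // Sink micro-batches to cloud storage as Parquet; a real deployment
    // might write to BigQuery via the spark-bigquery connector instead.
    events.writeStream
      .format("parquet")
      .option("path", "gs://example-bucket/events")            // hypothetical bucket
      .option("checkpointLocation", "gs://example-bucket/chk") // hypothetical path
      .start()
      .awaitTermination()
  }
}
```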

Qualifications

* 8+ years of practical experience with the Scala/Python programming languages and excellent programming skills – functional programming, design patterns, data structures, and a TDD approach – must

* 6+ years of hands-on experience building large-scale (petabytes), low-latency distributed systems using modern cloud computing technologies (GCP – preferred, AWS) – must

* 5+ years of experience working in microservices environments, building state-of-the-art data-driven solutions – must

* 5+ years of experience in efficient data modeling using SQL/NoSQL solutions (BQ, ES, MySQL, etc.) – must

* Expert SQL knowledge – you know how to write efficient, low-latency queries against modern data-warehouse solutions (BQ – preferred, Redshift, Athena, etc.) – must (see the query sketch after this list)

* Experience building large-scale (petabytes) streaming/batch ETL using modern processing engines (Spark – preferred, Beam, Flink, etc.)

* Experience working with data-streaming systems (Kafka – preferred, Pub/Sub, or Kinesis) – must

* Experience working with production-grade Data Science/ML infrastructure – big advantage

* Experience working with notebook solutions (Zeppelin, Jupyter) – big advantage
* Experience working with storage solutions (HDFS, S3, GCS – preferred) – big advantage
* Experience working with CI/CD tools and DevOps solutions – big advantage
* Experience working with Docker and Kubernetes solutions – big advantage
* Team player with excellent communication skills toward software engineers, product, and external clients – fluent English – must

* BSc in Computer Science/Data Management or equivalent practical experience – must
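As a hedged illustration of the SQL requirement above, the Scala sketch below runs a Spark SQL query that filters on a partition column so the engine can prune partitions; in BigQuery the same pattern limits the bytes scanned (and billed). The table events and its columns event_date, user_id, and revenue are hypothetical names used only for the example.

```scala
import org.apache.spark.sql.SparkSession

object PartitionPruningExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("partition-pruning-example")
      .getOrCreate()

    // Assumes a table named "events", partitioned by event_date, with
    // user_id and revenue columns; all names are hypothetical.
    val daily = spark.sql(
      """SELECT event_date,
        |       COUNT(DISTINCT user_id) AS daily_users,
        |       SUM(revenue)            AS daily_revenue
        |FROM events
        |WHERE event_date BETWEEN DATE '2024-01-01' AND DATE '2024-01-07'
        |GROUP BY event_date
        |ORDER BY event_date
        |""".stripMargin)

    // Because the WHERE clause hits the partition column directly,
    // only seven partitions are read instead of the full table.
    daily.show()
  }
}
```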

Job number: 8868

Why work hard?

Send us your CV and let the right job find you.