Job Details

Position: Data Engineer (PySpark)

4+ years of working experience in data integration and pipeline development, with a BS degree in CS, CE, or EE.

2+ years of experience with data integration on AWS Cloud using Apache Spark, EMR, Glue, Kafka, Kinesis, and Lambda within S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems.

Strong hands-on experience in Python development, especially PySpark, in an AWS Cloud environment.

Design, develop, test, deploy, maintain, and improve data integration pipelines.

Experience with Python and common Python libraries.

Strong analytical experience with databases, including writing complex queries, query optimization, debugging, user-defined functions, views, indexes, etc.

Strong experience with source control systems such as Git and Bitbucket, and with build and continuous integration tools such as Jenkins.

Databricks and Redshift experience is a plus.

Apply