Job Description:

Responsibilities:
1. Develop, optimize, and maintain large-scale data processing pipelines using Apache Spark and Python or PySpark.
2. ... and experience with Apache Spark.
4. Solid understanding of data processing concepts and ETL pipelines.
5. Experience with any cloud ...
Job Location: Noida, Uttar Pradesh, India