- 10 to 15+ years of experience in PySpark and AWS development
- 4-8 years of experience in Data Engineering or Data Platform roles
- Strong experience in Python development and the Spark framework
- Strong programming skills in PySpark (ETL logic, data validation, testing)
- Hands-on experience with AWS data services, including:
- Lake Formation, S3, Glue, EMR, Step Functions
- Lambda, Kinesis (for streaming or event-driven pipelines)
- Solid understanding of data warehousing concepts and ETL/ELT patterns
- Experience working with large-scale datasets in production environments
- Experience with CI/CD, Git, Terraform and automated testing for data pipelines
- Experience with scripting languages (Python preferred, or PowerShell), Configuration as Code principles, and API integration
Roles and responsibilities
- Experience of the full development lifecycle
- Solid communication skills, with the ability to explain solutions to technical and non-technical audiences
- Excellent attention to detail, with the ability to analyze problems and requirements.
- Strong experience in Python/PySpark development
- Strong experience with AWS
- Strong experience with DevOps & IaC tooling - Terraform, CI/CD pipelines, Git
- Commitment to staying updated with the latest terminology, concepts, and best practices.
Apply on Kit Job: kitjob.in/job/43jsb6
📌 Data Engineer (Nellore)
🏢 Tata Consultancy Services
📍 Nellore