Are you a Data Engineer passionate about building scalable data pipelines and working with modern cloud data platforms? We’re looking for talented professionals to join a high-impact data engineering team working on large-scale, cloud-native data solutions.
Key Responsibilities
- Design, develop, and maintain scalable ETL/data pipelines on Databricks
- Work with AWS/Azure data engineering services to process large-scale and real-time data
- Collaborate with cross-functional teams to integrate data from multiple sources
- Optimize data workflows for performance, scalability, and cost efficiency
- Implement data quality checks, monitoring, and validation
- Ensure stability and reliability of data pipelines
✅ Must-Have Skills
- Strong programming experience in Python / PySpark / Scala
- Hands-on experience with Databricks and Apache Spark
- Solid knowledge of AWS or Azure data engineering services
- Expertise in SQL and data warehousing concepts
- Experience building ETL pipelines using Azure Data Factory (ADF), AWS Glue, or similar tools
Nice-to-Have Skills
- Data modeling knowledge
- Experience with Git/version control
- Understanding of data governance, quality, and security
Apply on Kit Job: kitjob.in/job/44hrrx
📌 Data Engineer (Nellore)
🏢 Tata Consultancy Services
📍 Nellore