We are looking for a highly skilled Data Engineer with strong expertise in SQL and Databricks, along with solid programming experience in Python. The ideal candidate will have a strong testing/QA mindset to ensure high data quality and reliability; exposure to ERP migration projects is a significant advantage.
Key Responsibilities
- Design, build, and maintain scalable data pipelines using Databricks
- Develop and optimize complex SQL queries for data transformation and analysis
- Implement data processing solutions using Python
- Ensure data quality, integrity, and consistency through rigorous testing and validation
- Collaborate with cross-functional teams to understand data requirements
- Support ERP migration initiatives (data extraction, transformation, reconciliation)
- Optimize data workflows for performance and scalability
- Troubleshoot data issues and provide timely resolutions
- Maintain documentation for data processes and pipelines
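To illustrate the testing/QA mindset the responsibilities above call for, here is a minimal, hypothetical sketch of a data-quality check a pipeline might run before loading transformed records downstream. The function, field names, and validation rules are illustrative assumptions, not part of any actual project codebase:

```python
# Illustrative only: a minimal data-quality gate of the kind a pipeline
# might apply between transformation and load steps.

def validate_records(records, required_fields=("id", "amount")):
    """Partition input rows into (valid, rejected).

    A row is rejected if any required field is missing or None, or if
    'amount' is negative -- example rules, not real business logic.
    """
    valid, rejected = [], []
    for row in records:
        if any(row.get(field) is None for field in required_fields):
            rejected.append(row)          # missing required field
        elif row["amount"] < 0:
            rejected.append(row)          # fails the non-negative rule
        else:
            valid.append(row)
    return valid, rejected

rows = [
    {"id": 1, "amount": 100.0},
    {"id": 2, "amount": -5.0},    # negative amount -> rejected
    {"id": None, "amount": 7.5},  # missing id -> rejected
]
good, bad = validate_records(rows)
print(len(good), len(bad))  # → 1 2
```

In a Databricks context the same idea would typically be expressed as Spark DataFrame filters or Delta Lake table constraints rather than row-by-row Python, but the principle of partitioning records into accepted and quarantined sets carries over.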
Required Skills & Qualifications
- Strong hands-on experience with SQL (advanced queries, performance tuning)
- Expertise in Databricks (Spark, Delta Lake, notebooks, workflows)
- Proficiency in Python for data engineering tasks
- Experience building ETL/ELT pipelines
- Strong understanding of data validation and testing methodologies
- Knowledge of data warehousing concepts
- Excellent problem-solving and analytical skills
- Good communication and collaboration skills
Preferred / Nice-to-Have
- Experience in ERP systems (SAP, Oracle, etc.)
- Hands-on involvement in ERP migration or transformation projects
- Familiarity with cloud platforms (Azure, AWS, or GCP)
- Experience with CI/CD pipelines for data workflows
Apply on Kit Job: kitjob.in/job/45y0ol
📌 Data Pipeline Developer (Bengaluru)
🏢 Vaisesika
📍 Bengaluru