27 Mar | AMRR TechSols | Nellore
Role: Data Engineer (Mid / Senior / Lead)
Employment Type: Full-Time / Contract-to-Hire (6 months, with potential conversion to Full-Time)
Locations: Bangalore, Hyderabad, Chennai
Work Model: Hybrid
Urgency Note
This is an immediate requirement, and we are actively looking to onboard candidates as soon as possible. We are open to both full-time employment and contract-to-hire (6 months, extendable or convertible to full-time), based on candidate preference and business needs.
Role Overview
We are looking for highly skilled Data Engineers to design, build, and optimize scalable data pipelines and modern data platforms. The ideal candidate should have strong hands-on experience with Python, SQL, Spark, Databricks, and cloud platforms (AWS/Azure/GCP), along with a solid understanding of data modeling, ETL/ELT processes, and distributed computing.
This role spans multiple seniority levels, with responsibilities ranging from execution to architecture and technical leadership.
About the End Client
Our client is a global leader in marketing, advertising, and data-driven digital transformation, working with Fortune 500 brands across industries. The organization operates at massive scale, leveraging advanced data platforms to power customer intelligence, audience targeting, and analytics-driven decision-making.
The environment offers exposure to:
Large-scale, high-volume data systems
Advanced cloud-native architectures
Cutting-edge use cases in AdTech and customer analytics
Common Tech Stack
Languages: Python, SQL
Big Data: Apache Spark, Databricks
Cloud: AWS (S3, EMR, Athena), Azure (ADF, Databricks)
Orchestration: Airflow, Databricks Workflows
Modeling: DBT, Data Warehousing
Databases: PostgreSQL, MySQL, NoSQL
================================================================
Open Positions
================================================================
a) Senior Data Engineer – 2 Positions (5–7 Years)
Role Focus
End-to-end ownership of data pipelines and system optimization.
Key Responsibilities
Design and implement scalable, production-grade data pipelines
Build and manage data lake / data warehouse architectures
Optimize Spark/Databricks workloads for performance and cost
Implement advanced data modeling (SCD Type 2/4, dimensional models)
Drive cloud migration and modernization initiatives
Ensure data quality, monitoring, and observability
Mentor junior engineers and review code
Required Skills
Strong expertise in Python, SQL, Spark, Databricks
Experience with DBT and the modern data stack
Hands-on with AWS/Azure cloud services
Solid understanding of data warehousing & modeling
Experience with Airflow / orchestration tools
Familiarity with CI/CD and DevOps practices
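For context on the data modeling expectation above, the SCD Type 2 pattern can be sketched in a minimal, library-free Python form. The table layout, column names, and change-detection rule here are illustrative assumptions, not the client's actual schema; in practice this logic would typically run as a Databricks/Delta Lake merge.

```python
from datetime import date

def scd2_merge(dim_rows, incoming, key, tracked, today):
    """Apply one SCD Type 2 merge: expire changed rows, append new versions.

    dim_rows: existing dimension rows (dicts with 'valid_from', 'valid_to', 'is_current')
    incoming: latest source snapshot rows keyed by the natural key
    tracked:  attribute columns whose changes should create a new version
    """
    out, seen = [], set()
    for row in dim_rows:
        match = next((r for r in incoming if r[key] == row[key]), None)
        if row["is_current"] and match and any(row[c] != match[c] for c in tracked):
            # A tracked attribute changed: close the current version...
            out.append({**row, "valid_to": today, "is_current": False})
            # ...and open a new current version carrying the new values.
            out.append({**match, "valid_from": today, "valid_to": None, "is_current": True})
        else:
            out.append(row)
        seen.add(row[key])
    # Keys never seen before simply get a first version.
    for r in incoming:
        if r[key] not in seen:
            out.append({**r, "valid_from": today, "valid_to": None, "is_current": True})
    return out

# Hypothetical usage: one existing customer changes city, one is brand new.
dim = [{"customer_id": 1, "city": "Chennai",
        "valid_from": date(2024, 1, 1), "valid_to": None, "is_current": True}]
snapshot = [{"customer_id": 1, "city": "Bangalore"},
            {"customer_id": 2, "city": "Hyderabad"}]
result = scd2_merge(dim, snapshot, "customer_id", ["city"], date(2025, 3, 27))
```

The expired row keeps its history (old city, closed validity window) while the two current rows reflect the latest snapshot, which is the essence of the Type 2 approach.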
-------------------------------------------------------------------------------------------------------
b) Lead Data Engineer – 1 Position (7–9 Years)
Role Focus
Architecture, strategy, and technical leadership.
Key Responsibilities
Architect end-to-end data platforms and ecosystems
Define data engineering standards, frameworks, and best practices
Design high-volume, low-latency data processing systems
Lead Databricks-based platform design and governance
Drive cloud strategy, scalability, and cost optimization
Oversee data governance, security, and compliance
Lead teams and collaborate with stakeholders on data strategy
Required Skills
Expert-level proficiency in Spark, Databricks, and distributed systems
Strong experience in cloud architecture (AWS/Azure/GCP)
Proven track record in building large-scale data platforms
Deep knowledge of data modeling and pipeline optimization
Strong leadership and stakeholder management experience
Experience managing end-to-end delivery of data projects
--------------------------------------------------------------------------------------------------------
c) Mid-Level Data Engineer – 1 Position (3–5 Years)
Role Focus
Execution, development, and optimization of data pipelines under guidance.
Key Responsibilities
Build and maintain ETL/ELT pipelines
Develop data transformations using Python, SQL, PySpark
Work with Databricks and cloud data services
Support data modeling and warehousing efforts
Debug, test, and optimize existing pipelines
Collaborate with senior engineers on implementations
Required Skills
Strong in Python and SQL
Hands-on experience with Spark / Databricks
Familiarity with Airflow or similar orchestration tools
Basic understanding of data warehousing concepts
Exposure to AWS/Azure/GCP
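The kind of transformation work described above can be sketched in plain Python for brevity; in this role the same logic would more typically be written as a PySpark or SQL job. The event schema and the cleansing rule are illustrative assumptions only.

```python
# Minimal ETL-style transform: cast, cleanse, and aggregate raw events.
raw_events = [
    {"user_id": "u1", "amount": "120.50", "country": "IN"},
    {"user_id": "u2", "amount": "bad", "country": "IN"},   # malformed row
    {"user_id": "u1", "amount": "79.50", "country": "US"},
]

def transform(rows):
    """Cast amounts to float, drop malformed rows, and total spend per user."""
    totals = {}
    for row in rows:
        try:
            amount = float(row["amount"])
        except ValueError:
            # In a production pipeline this would be routed to a quarantine
            # table rather than silently skipped.
            continue
        totals[row["user_id"]] = totals.get(row["user_id"], 0.0) + amount
    return totals

user_spend = transform(raw_events)
```

The same cast-filter-aggregate shape maps directly onto a PySpark DataFrame pipeline (`withColumn`, `filter`, `groupBy().agg()`), which is why fluency in both plain Python and Spark is listed together.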
---------------------------------------------------------------------------------------------------------
====================================================================
Soft Skills
Strong analytical and problem-solving skills
Excellent communication and collaboration
Ownership mindset and accountability
Ability to work in fast-paced, agile environments
End Note
If you are passionate about building scalable data systems and working with modern cloud and big data technologies like Databricks and Spark, this is an excellent opportunity to grow your career in a high-impact, fast-paced environment.
We are prioritizing candidates who can join immediately or within short notice, and who are excited to work on large-scale, data-driven platforms with real-world business impact.
Interested candidates can share their updated resumes along with the following details:
- Current CTC
- Expected CTC
- Notice Period / Earliest Joining Availability
- Current Location
Apply on Kit Job: kitjob.in/job/43vlp1