Java Spark Big Data Developer - (K-320)

18 Mar
|
Citibank India
|
Chennai district

Job Description



The purpose of the job is to design and develop ETL solutions for the Global Liquidity Reporting System (GLRS). The candidate will be responsible for understanding requirements, and for designing, coding, and testing alongside the development team. This involves working closely with the business and SMEs to prioritize business requests, manage the batch-processing development work slate, provide effort estimates, ensure timely delivery on committed items, and project-manage all aspects of software development according to the Software Development Lifecycle (SDLC).



Job Background/context:



The Global Liquidity Reporting System (GLRS) application provides a standardized and consolidated liquidity reporting solution to global treasury users. The GLRS application consumes data from GENESIS and ADS/SDRs to produce various liquidity reports for the FRB and PRA agencies. Treasury users use GLRS reports to analyze the firm's liquidity status, both to make internal funding decisions and to report to external agencies.



Key Responsibilities:





The key responsibilities for the candidate are:



Design and develop highly efficient ETL solutions based on business requirements and aggressive delivery timelines. Experience with Hadoop/Spark/Sqoop/Hive (or willingness to learn, backed by a solid understanding of distributed systems) is essential.

Understand business and functional requirements provided by Business Analysts and convert them into technical design documents that deliver on those requirements.

Ensure best practices are followed.

Prepare detailed test plans and ensure proper testing of each module developed.

Prepare handover documents; manage SIT with oversight of UAT and production implementation.

Identify and proactively resolve issues that could affect system performance, reliability, and usability.

Ensure process compliance and manage the expectations of leadership.

Explore existing application systems and determine areas of complexity and potential risks to successful implementation.

Build and provision the big data cluster for SIT/UAT environments; design and test the end-to-end big data pipeline.

Extract and analyse sample data from operational systems and ingest it into the Hadoop platform to validate user requirements and create high-level design documents.

Willingness to work flexible hours.





Person Specification



Knowledge/Experience:



6 to 8 years of experience in software development.

Must have 3 years of experience in the architecture, design, and development of big data platforms: Hadoop, Apache Spark, Hive, Hue, Query IT, Avro, Parquet, Cloudera, Datameer, Arcadia.

5 years of experience in the big data ecosystem: Hadoop, Spark and Storm, Elasticsearch, Apache NiFi, HBase, Cassandra, and MongoDB.

Or must have 5 years of experience with Ab Initio ETL and data management solutions, including the ability to understand the internal workings of a variety of Ab Initio components.

Good hands-on experience in the Java and Scala programming languages.

Expertise in building Big Data applications using Hadoop and Spark.

Good knowledge of the Pentaho reporting tool and its integration with Hadoop ecosystem components.

Strong experience in NoSQL data store design and implementation.

Admin knowledge is a plus.

Good experience in UNIX shell scripting and process automation.

Strong experience in big data job scheduling with Oozie.



Skills:



A strong design and execution mindset.

The candidate should possess a strong work ethic, good interpersonal and communication skills, and a high energy level.

An analytical thinker and quick learner, capable of organizing and structuring information effectively.

Ability to prioritize and manage schedules under tight, fixed deadlines.

Excellent written and verbal communication skills.

Ability to build relationships at all levels.

Ability to work independently with vendors in resolving issues and developing solutions.

Strong interpersonal skills



Qualifications:



Bachelor of Science or Master's degree in Computer Science, Engineering, or a related discipline.





Competencies:



Strong work organization and prioritization capabilities.

Takes ownership and accountability for assigned work.

Ability to manage multiple activities.

Focused and determined in getting the job done right.

Ability to identify and manage key risks and issues.

Shows drive, integrity, sound judgment, adaptability, creativity, self-awareness and an ability to multitask and prioritize.

Good change management discipline.

The original job offer can be found in Kit Job:
https://www.kitjob.in/job/21654596/java-spark-big-data-developer-k-320-chennai-district/?utm_source=html
