B.Tech Graduates Vacancy at MetLife: Check Eligibility Here
Overview:
MetLife is hiring an experienced Associate Big Data Engineer at their Hyderabad location. You will work and collaborate with a nimble, autonomous, cross-functional team of makers, breakers, doers, and disruptors who love to solve real problems and meet real customer needs. You will use cutting-edge technologies and frameworks to process data, create data pipelines, and collaborate with the data science team to operationalize end-to-end machine learning and AI solutions.
The complete details of this job are as follows:-
Roles and Responsibilities:
The Ideal Candidate should be able to:
Contribute towards supporting the build/implementation of data ingestion and curation processes developed using big data tools such as Spark (Scala/Python), Hive, HDFS, Kafka, Pig, Oozie, Sqoop, Flume, ZooKeeper, Kerberos, Sentry, Impala, CDP 7.x, etc., under the guidance of Big Data Engineers (a minimal pipeline sketch appears after this list).
Support the ingestion of large data volumes from various platforms for analytics needs, and prepare high-performance, reliable, and maintainable ETL code with support and review from senior team members.
Support performance monitoring and propose any necessary infrastructure changes to senior team members for their review.
Understand the defined data security principles and policies developed using Ranger and Kerberos.
Gain a broader understanding of how to support application developers, and progressively work on efficient big data application development using cutting-edge technologies.
Collaborate with business systems analysts, technical leads, project managers, and business/operations teams to build data enablement solutions across different LOBs and use cases.
Understand & support the creation of reusable frameworks which will optimize the development effort involved.
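As a rough illustration of the ingestion-and-curation work described above, the following is a minimal PySpark sketch that reads raw files landed on HDFS, applies basic cleansing, and appends to a partitioned Hive table. All paths, table names, and columns are hypothetical placeholders, not MetLife's actual pipeline.

    # Minimal PySpark ingestion/curation sketch. Paths, table and column
    # names below are hypothetical examples only.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("ingestion-sketch")
        .enableHiveSupport()          # lets Spark read/write Hive tables
        .getOrCreate()
    )

    # Ingest raw CSV landed on HDFS by an upstream Sqoop/Flume job.
    raw = (
        spark.read
        .option("header", "true")
        .csv("hdfs:///data/raw/policies/")   # hypothetical landing path
    )

    # Curate: normalize types, drop obvious bad records, add a load date.
    curated = (
        raw.withColumn("premium", F.col("premium").cast("double"))
           .filter(F.col("policy_id").isNotNull())
           .withColumn("load_date", F.current_date())
    )

    # Persist as a partitioned Hive table for downstream analytics.
    (
        curated.write
        .mode("append")
        .partitionBy("load_date")
        .format("parquet")
        .saveAsTable("curated.policies")     # hypothetical Hive table
    )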
The Ideal Candidate should also have:
Experience in Python and in writing Azure Functions using Python/Node.js (see the Azure Function sketch after this list).
Experience using Event Hub for data integrations.
Experience supporting the implementation of analytical data stores on the Azure platform using ADLS, Azure Data Factory, Databricks, and Cosmos DB (Mongo/Graph API).
Proficiency in using tools like Git, Bamboo and other continuous integration and deployment tools.
Exposure to data governance principles such as metadata and lineage (Collibra/Atlas).
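To give a flavor of the Azure Functions and Event Hub items above, here is a minimal sketch of a Python Azure Function (v2 programming model) triggered by an Event Hub. The hub name and connection-string setting are hypothetical, not taken from this role.

    # Minimal sketch of a Python Azure Function with an Event Hub trigger.
    # "policy-events" and "EventHubConnection" are hypothetical names.
    import logging

    import azure.functions as func

    app = func.FunctionApp()

    @app.event_hub_message_trigger(
        arg_name="event",
        event_hub_name="policy-events",    # hypothetical hub name
        connection="EventHubConnection",   # app setting holding the conn string
    )
    def on_policy_event(event: func.EventHubEvent) -> None:
        # Each invocation receives one event; decode and log its payload.
        body = event.get_body().decode("utf-8")
        logging.info("Received event: %s", body)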
Eligibility:
Bachelor’s degree in Computer Science, Engineering, or related discipline
1 – 3 years of solutions development and delivery experience.
Hive database management and performance tuning (partitioning/bucketing).
Good SQL knowledge and data analysis skills for data anomaly detection and data quality assurance.
Basic experience building stream-processing systems using solutions such as Storm or Spark Streaming (a streaming sketch follows this list).
Able to support the build/design of data warehouses and data stores for analytics consumption, on-prem or in the cloud (real-time as well as batch use cases), with guidance from a Sr. Big Data Engineer/Big Data Engineer.
Experience in any model management methodologies is a plus.
Hands-on development experience with some of the key tools: HDFS, Hive, Spark, Scala, Java, Python, Databricks/Delta Lake, Flume, Kafka, etc.
Analytical skills to assess situations and arrive at optimal, efficient solutions based on requirements.
Performance tuning and problem-solving skills.
Able to support/contribute to the design of a multi-tenant, containerized Hadoop architecture for memory/CPU management and sharing across different LOBs, under the guidance of senior data engineers on the team.
Code versioning experience using Bitbucket.
Good communication skills both written and verbal.
Proficiency in project documentation preparation and other support as required.
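For the stream-processing item above, here is a minimal PySpark Structured Streaming sketch that consumes a Kafka topic and maintains running counts. The broker address and topic name are hypothetical, and a real deployment would also need the spark-sql-kafka connector package on the classpath.

    # Minimal Spark Structured Streaming sketch: consume a Kafka topic and
    # keep running counts per key. "localhost:9092" and "claims" are
    # hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", "claims")
        .load()
    )

    # Kafka rows carry binary key/value columns; cast the value to string
    # and count events per value as a stand-in for real business logic.
    counts = (
        events.select(F.col("value").cast("string").alias("claim_type"))
              .groupBy("claim_type")
              .count()
    )

    query = (
        counts.writeStream
        .outputMode("complete")    # emit the full aggregate table each trigger
        .format("console")
        .start()
    )
    query.awaitTermination()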
Join Studycafe Membership to Download File and Apply
The link to apply for this job vacancy is given in the attached PDF.