Capgemini Hiring 2022. Full details of the Capgemini notification are below. Interested and eligible candidates can apply now. Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The Group is guided every day by its purpose of unleashing human energy through technology for an inclusive and sustainable future. It is a responsible and diverse organization of over 300,000 team members in nearly 50 countries.
Vacancy details:
- Post Name: Snowflake Developer
- Qualification: Any Graduate
- Experience: 4 to 12 years
- Salary: Not Disclosed
Job Description: Greetings from Capgemini!
Important Details:
- Date of posting: 07/11/2022
- Location: Hyderabad/Secunderabad, Pune, Chennai
- Selection Process: Selection will be on the basis of an interview.
- Mode of Interview: Virtual interview
- Interview Rounds: HR
Roles and Responsibilities
3-5 years of experience
Good experience in Snowflake technology
Good at writing SQL queries using joins and subqueries
Good understanding of RDBMS and NoSQL
Good understanding of DWH concepts
Good communication and written skills
Ability to work independently
Working in Agile is a plus
Detailed JD:
Experience in data integration activities, including the architecting, designing, coding, and testing phases
Architect the data warehouse and guide the team in implementation using Snowflake, SnowSQL, and other big data technologies
Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, and big data modeling techniques using Python
Strong knowledge of the Hadoop ecosystem, with proficiency in Hive and Impala
Experience in performance tuning of Snowflake pipelines, with the ability to troubleshoot issues quickly
Extensive experience with relational as well as NoSQL data stores, methods, and approaches (star and snowflake schemas, dimensional modeling)
Understanding of data pipelines and modern ways of automating them using cloud-based implementations; ability to test and clearly document requirements to create technical and functional specs
Should be able to demonstrate the proposed solution, with excellent communication and presentation skills
Qualifications:
At least 2 years of experience in designing and implementing a fully operational solution on Snowflake Data Warehouse.
Strong experience with Hadoop tools such as Hive, Impala, and Pig.
Experience with Python and a major relational database.
Excellent understanding of Snowflake internals and the integration of Snowflake with other data processing and reporting technologies.
Ability to troubleshoot issues as and when they arise.
Work experience in optimizing the performance of Spark jobs.
Chennai Location JD
Experience in Snowflake is a must; exposure to data modelling
Snowflake skills to guide the team in architecting and changing tables to meet reporting requirements
Develop Power BI reports, including caching and dataflows
Analyze business and system requirements and define an optimal data pipeline design to fulfil them.
Motivation and ability to perform as a consultant in data engineering projects
Define data security and data access controls
Hyderabad Location JD
Responsibilities
Must have knowledge of Snowflake along with MS SQL
Interface with operational delivery teams, gathering information and delivering complete solutions.
Model data and metadata to support ad-hoc and pre-built data analysis.
Provide subject matter expertise and advice to data consumers surrounding data pipelines supporting ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.
Tune application performance related to data access.
Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for data management.
Triage many possible courses of action in a high-ambiguity environment, making use of both quantitative analysis and business judgment.
Responsible for data pipeline ETL design and collaborating with Data Architect on data modeling. Given a set of target data sources, the technical lead should be able to identify and design the processes necessary to systematically collect information across the data lifecycle.
Design data collection, migration and quality control procedures. The quality procedures must include documenting data provenance (source, date of extraction).
Lead ETL development, unit testing, and deployment, and participate in integration testing activities and UAT as needed.
Qualifications:
At least 4 years of database programming experience.
At least 3 years of experience using Python, Java, stored procedures, and advanced programming, including performance tuning, scaling, and query tuning.
At least 3 years of data ingestion and data cataloguing experience in AWS Kinesis (or Kafka), Glue, and Athena.
At least 2 years of experience with AWS RDS databases, including MongoDB and/or RDS Oracle.
Strong hands-on experience in SQL.
Experience working with different data formats: JSON, XML, Parquet.
Must have a proven track record of building enterprise-scale data warehouses/lakes using Oracle, Vertica, Talend, Kafka, Hadoop, and Spark.
Must have hands-on experience with NoSQL databases.
Must have hands-on experience with ETL tools, including Talend, Informatica, and Oracle Data Integrator.
Click here for the notification and to apply.