
System Engineer / Solution Architect Big Data (m/f)

0% utilization

Keywords:


Freelancer-Projekte-PLZ6

Details:

System engineer/solution architect with a deep understanding of Big Data solutions (Hadoop, Cloudera), in particular building data lakes in a large enterprise environment. As part of the engineering team, his/her task would be to support engineers and developers in integrating different kinds of data sources into a large, distributed data lake, with a particular focus on the data source specifications.

Roles and Responsibilities:

  • Work closely with solution designers to understand the approach, requirements and solution design of the current project

  • Collect, prepare and process data source specifications from various data source owners

  • Cover all significant aspects of data sources, such as type, volume, velocity, frequency, content description, etc.

  • Coordinate with systems engineers and other stakeholders outside the project to align requirements and design

  • Support project management and preparation routines

  • Provide clear qualitative and quantitative information and knowledge to the team

  • Take part in and lead workshops

  • Document and visualize source descriptions and other artifacts

  • Drive the source management and onboarding processes for the project

  • Support the general solution architecture and design efforts of the project

Skills

Must have:

  • Several years of proven experience as an architect/engineer in complex Big Data projects (Hadoop, Kafka, Spark, HBase, Hive, Impala, etc.) or other mass-data applications at petabyte scale

  • At least 3 years of experience with IT projects in large enterprises

  • At least 3 years of experience with (big) data integration projects

  • At least 1 year of experience with performance-critical data processing at large scale using files, data streams and databases

  • Very good documentation and visualization skills

  • Excellent oral and written communication skills in English (German nice to have)

  • Team player with experience in agile development approaches and tools (Scrum, Kanban, Jira)

  • Must be able to effectively and professionally communicate within the team

Beneficial:

  • At least 3 years of experience with large Hadoop clusters

  • Experience with batch and streamed data flows in and out of Cloudera (Spark, Kafka)

  • Experience with testing IT systems and data integration systems

  • Experience with working in internationally distributed cross-functional teams

  • Experience with the Cloudera framework


JobNr: 3688

Contact:
E-Mail: Experten@soorce.de