We’re looking for an experienced developer with a getting-things-done attitude to join our team. As data is our product, you will work at the core of our infrastructure, developing and maintaining data pipelines and components that have a direct effect on our final product. Naturally, this work involves close collaboration with our QA engineers and, at times, our DevOps team, supporting them with cluster maintenance and administration.
Plan, design and implement robust data pipelines using technologies such as Hadoop MapReduce and Spark
Deliver near real-time data to our customers using our high-availability data infrastructure
Maintain current software components, including bug fixing
You'll be involved in the development of all parts of our data infrastructure including Kafka, MapReduce, Spark, Hive and ElasticSearch
Work on monitoring and alerting systems for our pipelines, from source to sink
Work closely with our network/server administrators to maintain our on-premises Hadoop infrastructure and take part in cluster maintenance
Participate actively in our code reviewing procedure
Participate in architectural planning for our current and upcoming data challenges on a technical level
Skills and Requirements
3+ years of Java and Scala development experience
Experience with the Hadoop ecosystem (MapReduce, Hive, HDFS, ...)
Knowledge of and experience in writing Spark applications (Spark Streaming, Spark SQL)
Used to working in Linux-based environments
Strong hands-on mentality
Experience with Kafka and ElasticSearch
Experience with working in an agile development environment
Fluent in English
Our tech stack
We use a variety of technologies, frameworks and programming languages to fit our needs and ideas. Alongside classical programming frameworks, we use data and computing solutions such as the Hadoop ecosystem together with Spark and Kafka, mainly with Java and Scala. Our backend and warehousing solutions range from classical RDBMSs to in-memory databases and distributed systems such as CouchDB and ElasticSearch. Last but not least, we have a number of other technologies in production or on their way there, among them OpenShift and Docker.
What you can expect
You can look forward to an exciting, dynamic and promising industry in which it never gets boring! We live a community spirit in which everyone can develop and achieve something. It goes without saying that we strengthen our team spirit at regular employee events, work together as colleagues and encourage everyone to express his or her opinion - because appreciation counts for more with us than hierarchies! Thanks to our intensive training, you will be on the road to success right from the start. What else can you expect? Fresh fruit daily, delicious hot and cold drinks, reimbursement of the membership fee for the gym in our building, meal vouchers on top of your salary, and work-life balance through our home-office options. Would you like to enjoy the view of Alexanderplatz with us in the future? Then we would be delighted to welcome you to our bright, modern office in the heart of Berlin!