Hi, we’re optAd360 – a tech company creating awesome content monetization software. We help publishers from around the world by providing innovative tools based on programmatic solutions for every stage of their business development. As one of the fastest-growing AdTech companies in Europe, we support over 4,000 websites and mobile apps. We are constantly improving our technology and algorithms to help content creators increase their monetization profits. We like what we do, so we never stand still, and we always celebrate our successes together – both individual and team ones. So if you want to grow among experts and in an inspiring atmosphere, you’ve come to the right place!
Apply for this role and join the Bidlogic project! It’s the first optimization tool that automates mobile app monetization processes within the most popular mobile ad mediation platforms. You’ll be immersed in the latest technologies designed for mobile game monetization. What’s more, the whole team is passionate about what they do, so if you’re a game enthusiast, you’ll surely find someone here to share your passion with!
You will be responsible for:
- Designing, implementing, and optimizing data processing pipelines using Big Data and AWS tools and technologies;
- Working with Big Data tools and technologies such as Hadoop, Spark, Flink, Kafka, HBase, Hive, and Presto;
- Programming in Python to implement algorithms, data analysis, and integration with AWS services;
- Creating, optimizing, and managing data processing pipelines using AWS Glue, Kinesis, Lambda, S3, Redshift, and others;
- Collaborating with other development teams and data analysts to develop and implement data management strategies;
- Designing, developing, and optimizing ETL processes using tools such as Talend, Apache NiFi, SSIS, and Informatica PowerCenter;
- Ensuring high-quality code and writing unit and integration tests;
- Participating in the design process of new functionalities and systems related to data engineering.
You need to have:
- Minimum 2 years of experience as a Python Data Engineer;
- Knowledge of Python libraries such as Pandas, NumPy, Dask, and PySpark;
- Experience with Big Data technologies such as Hadoop, Spark, Flink, Kafka, HBase, Hive, and Presto;
- Minimum 1 year of experience maintaining infrastructure on the AWS platform, including managing services, resources, and monitoring performance;
- Familiarity with AWS services such as S3, EC2, EMR, Redshift, Glue, Kinesis, Lambda, and RDS;
- Experience working with databases (SQL and NoSQL) such as PostgreSQL, MySQL, MongoDB, Cassandra, and DynamoDB;
- Ability to work with version control tools, such as Git;
- Minimum 1 year of experience as an ETL Developer, with the ability to use tools such as Talend, Apache NiFi, SSIS, and Informatica PowerCenter;
- Good knowledge of spoken and written English;
- Good knowledge of spoken and written Polish.
We offer:
- Flexible working hours;
- Language classes and other training sessions;
- Private health care and a well-being platform;
- Team-building meetings and events;
- A non-corporate working atmosphere and a flat organizational structure;
- The possibility to work remotely or onsite in a homelike office with a charming garden, lots of good coffee, and an in-house gym 🙂