
Senior Ruby on Rails Engineer IRC274923
- Kyiv, Lviv
- Permanent job
- Full time
Our client is the #1 TV streaming platform in the U.S., Canada, and Mexico, and has set its sights on powering every television in the world. The client's mission is to be the TV streaming platform that connects the entire TV ecosystem: connecting consumers to the content they love, enabling content publishers to build and monetize large audiences, and providing advertisers unique capabilities to engage consumers.

About this role:
The client pioneered streaming to the TV. The client's streaming players and TV models are available worldwide through direct retail sales and licensing agreements with TV brands and pay-TV operators. With tens of millions of players sold across many countries, thousands of streaming channels, and billions of hours watched on the platform, building scalable, highly available, fault-tolerant big data platforms is essential to its success.

The client's advertising business operations leverage a mix of SaaS and internally built applications, all orchestrated through an Order-to-Cash workflow. These systems generate critical data needs across the lifecycle of business operations, requiring robust, scalable, and future-ready data infrastructure.

Requirements
Must have:
- A Bachelor's or Master's degree in Computer Science is preferred;
- 8+ years of experience in data engineering and infrastructure, preferably in large-scale or AdTech environments;
- Strong programming skills and hands-on coding experience in Python, Java/Scala, and distributed systems development;
- Proven expertise with modern data stacks: Python, Docker, Airflow, Trino/Presto, Postgres, distributed query engines, graph-based data systems, ETL design, and pipeline development;
- Hands-on experience with the Hadoop ecosystem: HDFS, Hive, Spark, MapReduce, Airflow;
- Knowledge of orchestration and messaging frameworks (Airflow/Dagster, Pub/Sub, Kafka, SQS), with a focus on backpressure-aware workflows;
- Experience building microservices and integrating them into data workflows;
- Cloud expertise, ideally GCP (AWS acceptable), with practical knowledge of CI/CD pipelines for data and ML workflows;
- A focus on continuous learning and improvement, both technical and professional, for yourself and your teams;
- Demonstrated resilience, with experience working in ambiguous situations;
- Strong English, with excellent influencing, communication, and documentation skills.
Nice to have:
- Exposure to vector databases, RAG pipelines, knowledge graphs, and embedding workflows;
- Interest or experience in AI-driven architectures, including model orchestration, agentic systems, and standards such as Model Context Protocol (MCP).
Job responsibilities:
- Architect the Data Lifecycle: Design end-to-end data architectures spanning integration, pipelines, transformation workflows, and unified data consolidation across multiple platforms;
- Build Scalable Systems: Develop distributed, high-performance data systems across SaaS and self-managed applications, including optimized database stack solutions;
- Develop Governed Data Services: Create structured, semantic APIs and data services to power internal operations and analytics;
- Operational Reliability: Monitor, troubleshoot, and ensure uninterrupted data flow across pipelines and platforms;
- Data Insights: Perform in-depth data analysis to generate actionable insights that guide business decisions;
- Innovate with AI: Collaborate with product and engineering teams to embed modern agentic and AI-driven patterns into both infrastructure and customer-facing solutions.