Description
ABOUT IDBS
IDBS helps BioPharma organizations unlock the potential of AI/ML to improve the lives of patients. As a trusted long-term partner to 80% of the top 20 global BioPharma companies, IDBS delivers powerful cloud software and services specifically designed to meet the evolving needs of the BioPharma sector.
IDBS, a Danaher company, leverages 35 years of scientific informatics expertise to help organizations design, execute, and orchestrate processes; manage, contextualize, and structure data; and gain valuable insights throughout the product lifecycle, from R&D through manufacturing. Known for its signature IDBS E-WorkBook software, IDBS has extended its flexible, scalable solutions to the IDBS Polar and PIMS cloud platforms to help scientists make smarter decisions with assured confidence in both GxP and non-GxP environments.
Do you want to work in a dynamic, fast-paced, high-performing, safe-to-fail and fun environment founded on trust, empowerment and autonomy? Do you enjoy solving complex customer problems as a team?
We are currently seeking a Principal Software Engineer who will be a vital cog in our team, designing and developing IDBS’s data-driven products. You will deliver data-rich software and contribute to the architectural design, technical approach and implementation mechanisms adopted by the team. You will be directly involved in the development of data-centric products from ingestion to egress via pipelines, data warehousing, cataloguing, integrations and other mechanisms.
You will be involved in all aspects of the software delivery lifecycle, including the creation and elaboration of business requirements, functional and technical design specifications, development and maintenance of our software (including prototyping), and driving innovation into our product suite. You will be responsible for ensuring that the development and maintenance of IDBS’s software platforms adhere to IDBS’s architecture vision.
What we’ll get you doing:
+ Design, develop, and maintain scalable data pipelines using Databricks and Apache Spark (PySpark) to support analytics and other data-driven initiatives (a minimal sketch of this kind of pipeline step follows this list).
+ Support the elaboration of requirements, formulation of the technical implementation plan and backlog refinement. Provide a technical perspective on product enhancements and new requirements.
+ Optimize Spark-based workflows for performance, scalability, and data integrity, ensuring alignment with GxP and other regulatory standards.
+ Research and promote new technologies, design patterns, approaches, tools and methodologies that could optimize and accelerate development.
+ Apply strong software engineering practices including version control (Git), CI/CD pipelines, unit testing, and code reviews to ensure maintainable and production-grade code.
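As a rough illustration of the pipeline work described above, here is a minimal sketch of a PySpark transformation written as a plain, unit-testable function. The function name, column names and sample data are illustrative assumptions only, not IDBS’s actual schemas or platform code; on Databricks a SparkSession is already provided, so the local session here simply keeps the sketch self-contained.

```python
# Minimal sketch of a testable PySpark transformation step (illustrative names only).
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def clean_assay_results(raw: DataFrame) -> DataFrame:
    """Drop malformed rows and standardize types so downstream consumers see one schema."""
    return (
        raw.dropna(subset=["sample_id", "measured_value"])  # discard incomplete records
           .withColumn("measured_value", F.col("measured_value").cast("double"))
           .withColumn("ingested_at", F.current_timestamp())  # audit-friendly load timestamp
    )


if __name__ == "__main__":
    # Local session for running the sketch outside Databricks.
    spark = SparkSession.builder.master("local[*]").appName("pipeline-sketch").getOrCreate()
    raw = spark.createDataFrame(
        [("S-001", "12.5"), ("S-002", None), ("S-003", "7.1")],
        ["sample_id", "measured_value"],
    )
    clean_assay_results(raw).show()
```

Keeping transformations in small pure functions like this is one common way to make them straightforward to cover with unit tests and code reviews, as the practices above call for.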
Here is what success in this role looks like:
+ Delivered reliable, scalable data pipelines that process clinical and pharmaceutical data efficiently, reducing data latency and improving time-to-insight for research and regulatory teams.
+ Enabled regulatory compliance by implementing secure, auditable, and GxP-aligned data workflows with robust access controls.
+ Improved system performance and cost-efficiency by optimizing Spark jobs and Databricks clusters, leading to measurable reductions in compute costs and processing times.
+ Fostered cross-functional collaboration by building reusable, testable, well-documented Databricks notebooks and APIs that empower data scientists, analysts, and other stakeholders to build out our product suite.
+ Contributed to a culture of engineering excellence through code reviews, CI/CD automation, and mentoring, resulting in higher code quality, faster deployments, and increased team productivity.
It would be a plus if you also possess previous experience in:
+ Deployment of Databricks functionality in a SaaS environment (via infrastructure as code), with experience of Spark, Python and a breadth of database technologies
+ Event-driven and distributed systems, using messaging systems such as Kafka and AWS SNS/SQS and languages such as Java and Python (a small messaging sketch follows this list)
+ Data-centric architectures, including experience with Data Governance / Management practices and Data Lakehouse / Data Intelligence platforms. Experience of AI software delivery and AI data preparation would also be an advantage
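Purely as an illustration of the event-driven experience mentioned above, the sketch below publishes and consumes a small JSON event over AWS SQS using boto3. The queue URL and event shape are hypothetical, and running it would require valid AWS credentials and an existing queue; Kafka or SNS would follow a similar produce/consume pattern.

```python
# Illustrative-only sketch of event-driven messaging with AWS SQS via boto3.
import json

import boto3

QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/example-ingest-queue"  # hypothetical queue

sqs = boto3.client("sqs")


def publish_event(event: dict) -> None:
    """Publish a small JSON event for downstream pipeline workers to consume asynchronously."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event))


def poll_once() -> None:
    """Long-poll for one message, process it, then delete it so it is not redelivered."""
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=10)
    for msg in resp.get("Messages", []):
        event = json.loads(msg["Body"])
        print("processing", event)  # real handling would trigger or record pipeline work
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])


if __name__ == "__main__":
    publish_event({"sample_id": "S-001", "status": "ingested"})
    poll_once()
```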
At IDBS we believe in designing a better, more sustainable workforce. We recognize the benefits of flexible working arrangements for eligible roles and are committed to providing enriching careers, no matter the work arrangement. This position is eligible for a flexible work arrangement in which you can work part-time at the Company location identified above and part-time remotely from your home. Additional information about this work arrangement will be provided by your interview team. Explore the flexibility and challenge that working for IDBS can provide.
Join our winning team today. Together, we’ll accelerate the real-life impact of tomorrow’s science and technology. We partner with customers across the globe to help them solve their most complex challenges, architecting solutions that bring the power of science to life.
For more information, visit www.danaher.com.
At Danaher, we value diversity and the existence of similarities and differences, both visible and not, found in our workforce, workplace and throughout the markets we serve. Our associates, customers and shareholders contribute unique and different perspectives as a result of these diverse attributes.