**This is ABGSC**
ABG Sundal Collier (ABGSC) is the independent Nordic investment bank, developed over 40 years, founded on an inclusive partnership culture and the ability to attract, develop, and empower top talent. Our purpose is to enable businesses and capital to grow and perform, and our vision is to be the Nordic investment bank of choice.
Investment banking is all about understanding companies, industries, and markets. Our core offering to clients is the best advice, derived from diligent analysis in a dynamic, collaborative, and high-performing environment.
Our approximately 350 partners and employees are located in the Nordic offices of Oslo, Stockholm, and Copenhagen, in addition to our international offices in London, New York, Frankfurt, Singapore, and Lucerne.
At ABGSC, teamwork and collaboration are at the heart of how we succeed. We don’t believe in rigid hierarchies or titles. Instead, we focus on results, recognising and rewarding individuals for their contributions, growth, and achievements, regardless of tenure.
Our flat organisation is defined by openness, trust, and the freedom to share ideas. Across geographies and functions, we operate as one team. We value each other’s unique skills and perspectives, and together we work towards a shared purpose, vision, and set of values.
By joining ABGSC, you will become part of a dedicated and persistent team, united in striving for excellence, where every voice is heard, and every contribution makes an impact.
**About the role**
We are looking for people to join our core Data & AI team in the Technology Development department. The team builds and runs ABGSC's data and analytics platform across the full lifecycle — from ingestion and pipeline development to transformation, modelling, infrastructure, delivery, and AI-enabled tooling. Everyone on the team works across the stack and shares ownership of what we build.
While the role involves a lot of engineering, coding, and development, understanding and caring about the data itself is at its core. Every pipeline, model, and delivery layer is shaped by the data it serves, so a strong foundation in data thinking is just as important as technical skill.
We work code-first and aim to use AI in practical, grounded ways across engineering workflows — from development and debugging to automation, documentation, and internal tooling. We are looking for people who are comfortable working in that kind of environment and who are curious about how AI can improve the way data platforms are built and operated.
This is an early-stage team, and the role will evolve over time. Responsibilities and ways of working will be shaped together as the platform and team grow.
**Tasks**
- Build and maintain data pipelines that ingest data from source systems into Snowflake
- Model and transform data end-to-end following strict data modelling techniques
- Manage data warehouse environments, CI/CD workflows, and developer tooling
- Implement tests, monitor freshness, and ensure reliability across pipelines and data products
- Deliver data to end-users through BI tools, semantic layers, or direct access
- Contribute to Python CLI tooling, pipeline frameworks, and configuration-driven architecture
- Use AI-enabled tooling and automation to improve development workflows, platform operations, and team effectiveness
**Desired qualifications**
We value breadth, curiosity, and learning ability over deep specialisation in a single tool. You should be comfortable picking up new technologies, working across different parts of the stack, and understanding data quality, structure, and meaning.
You do not need to meet every requirement listed, but we expect you to be familiar with most of them and motivated to learn quickly where you have less experience.
**What we are looking for**
- Python and SQL — our pipelines, tooling, and transformations are primarily built with Python, SQL, and Jinja
- Data modelling — understanding of modelling approaches such as Data Vault, dimensional modelling, or similar methodologies
- Cloud data platforms — experience with, or exposure to, tools such as Snowflake and dbt
- Engineering practices — familiarity with Git, testing, CI/CD, and working with production data workflows
- AI-enabled ways of working — interest in using AI tools to support coding, debugging, documentation, analysis, and delivery
**Nice to have**
- Experience with BI tools, dashboards, or data applications
- Exposure to cloud infrastructure and platform services
- Familiarity with container-based development environments
- Experience with code-first ingestion tools or local data processing tools
- Interest in financial services or capital markets
**Who you are**
- Relevant master’s degree in a technical or quantitative field
- 1–5 years of relevant experience
- You take ownership of your work, can drive things forward independently, and collaborate well in a team
- You are comfortable speaking with stakeholders and translating between business needs and technical solutions
- You thrive in a high-paced environment with quick turnarounds, and can refocus and reprioritise when things change
- You communicate clearly and directly — in code, in writing, and in conversation
- You are comfortable in a small team where everyone does a bit of everything
- You are genuinely interested in data — not just moving it, but understanding what it represents and how it can be used
**Location:**
Oslo
**Deadline for application:**
Applications are reviewed on a rolling basis
**Contact person:**
Thea Bruun Klausen
*ABG Sundal Collier cooperates with Semac to conduct background checks as part of our recruitment process. The background check is used to verify information provided in the CV and other documentation.*