sbt 2.0 and What It Means for Spark Scala Projects

sbt 2.0 is in its final release candidates with the 2.0.0 milestone fully closed. Build definitions now require Scala 3, all tasks are cached by default with Bazel-compatible remote caching, and the plugin ecosystem is being rebuilt. Here's what Spark Scala teams need to know before upgrading.

Where sbt 2.0 Stands

The sbt project has shipped through RC11 as of April 2026, with the 2.0.0 milestone showing 100% completion — 74 issues closed, zero open. A stable release is imminent.

Meanwhile, sbt 1.x continues to receive maintenance updates (1.12.9 is the latest), so there's no urgency to jump. But the gap between sbt 1.x and 2.x is widening, and understanding the changes now will save your team time when the stable release drops.

Build Definitions Move to Scala 3

The biggest conceptual shift: your build.sbt and custom plugins are now compiled with Scala 3.x (specifically 3.8.2 in RC11). sbt 1.x used Scala 2.12 for build definitions. sbt 2.x uses Scala 3.

This does not affect what your project compiles to. sbt 2.x can still build Scala 2.13 projects — which is what Spark 4.x requires. The Scala 3 requirement applies only to the build definition layer.
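Concretely, the Scala version set in build.sbt still controls what the project compiles to; only the build definition file itself is compiled with Scala 3. A minimal sketch (version numbers illustrative):

```scala
// build.sbt — this file is compiled by sbt 2.x with Scala 3,
// but the project it describes still targets Scala 2.13 for Spark 4.x
scalaVersion := "2.13.17" // what the spark-sql_2.13 artifacts require
```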

In practice, this means a few syntax changes in build.sbt:

// sbt 1.x — postfix notation works
libraryDependencies += "org.apache.spark" %% "spark-sql" % "4.1.0" % "provided" withSources()

// sbt 2.x — dot notation required
libraryDependencies += ("org.apache.spark" %% "spark-sql" % "4.1.0" % "provided").withSources()

If your build files use import FooCodec._ for typeclass instances, you'll need to switch to import FooCodec.given — the Scala 3 way of importing givens.
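A sketch of the difference, using FooCodec as the article's placeholder name with a hypothetical Codec typeclass:

```scala
trait Codec[A]

object FooCodec:
  given intCodec: Codec[Int] = new Codec[Int] {}

// Scala 2.12 build definitions: the wildcard pulled in implicits
// import FooCodec._

// Scala 3 build definitions: givens need an explicit given import
import FooCodec.given // all given instances
import FooCodec.*     // everything else (the Scala 3 wildcard)
```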

Bare Settings Now Apply to All Subprojects

This is the change most likely to bite multi-module Spark projects. In sbt 1.x, bare settings in build.sbt applied to the root project. In sbt 2.x, they apply to every subproject:

// sbt 2.x — this applies to ALL subprojects, not just root
name := "my-spark-app"
publish / skip := true

// To scope to root only:
LocalRootProject / name := "my-spark-app"
LocalRootProject / publish / skip := true

If your Spark project has submodules (common in larger codebases — an ETL module, a shared library module, a test utilities module), audit your bare settings. Settings like name, publish / skip, and assembly / mainClass that you intended for root may now cascade into subprojects where they shouldn't.
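A sketch of what that audit can produce in a multi-module build (module names are illustrative, and assembly / mainClass assumes sbt-assembly is on the build):

```scala
// build.sbt — sbt 2.x

// Root-only settings get explicit scoping:
LocalRootProject / name := "my-spark-app"
LocalRootProject / publish / skip := true

// Settings you genuinely want everywhere can stay bare:
organization := "com.example"
scalaVersion := "2.13.17" // version illustrative

// Per-module settings move into the module definition:
lazy val etl = (project in file("etl"))
  .settings(assembly / mainClass := Some("com.example.etl.Main"))

lazy val common = (project in file("common"))
```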

All Tasks Are Cached by Default

This is the headline feature. In sbt 2.x, every task result is cached to local disk automatically. When you run compile and nothing changed, sbt skips it entirely — not just incremental compilation, but the task itself.

The test task gets the same treatment. If your tests passed and no inputs changed, sbt test returns instantly. If you need a full run regardless, use testFull.

For Spark projects with slow test suites — where each test initializes a local SparkSession — this can dramatically cut iteration time during development.

Bazel-Compatible Remote Caching

Local caching is useful for individual developers. Remote caching is where teams see the real gains.

sbt 2.x ships with a gRPC client that speaks the Bazel Remote Execution API. It stores task results in a content-addressable store using SHA-256 hashes of inputs and outputs. When a teammate has already built and tested the same code, your sbt instance pulls the cached result from the remote server instead of rebuilding.

Setup is straightforward:

// build.sbt — enable remote caching
Global / remoteCache := Some(uri("grpcs://your-cache-server:443"))

// With API token authentication (BuildBuddy, NativeLink, etc.)
Global / remoteCacheHeaders += IO.read(file("credentials/cache-token")).trim

Tested backends include buchgr/bazel-remote (open-source, self-hosted), BuildBuddy, EngFlow, and NativeLink (open-source, Rust-based). Real-world numbers from production teams show fully cached builds completing in ~3.5 minutes versus ~45 minutes uncached — over 90% time savings.

For Spark teams running CI on every PR, this is significant. Spark projects tend to have heavy compilation (Spark's transitive dependency tree is massive) and slow tests (SparkSession initialization). Remote caching means your CI pipeline only rebuilds what actually changed.

Parallel Cross-Builds Are Built In

sbt 2.x includes projectMatrix natively — no more sbt-projectmatrix plugin. This enables parallel cross-building across Scala versions and platforms from a single build definition.

For most Spark projects targeting only Scala 2.13, this isn't immediately relevant. But if you maintain a shared library that supports both Scala 2.13 and Scala 3 consumers (using the forward-compatibility approach), parallel cross-builds can cut your CI time significantly.
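As a sketch, a shared library cross-built for Scala 2.13 and 3 using the built-in projectMatrix (names and versions illustrative):

```scala
// build.sbt — sbt 2.x; projectMatrix is built in, no plugin needed
lazy val core = (projectMatrix in file("core"))
  .settings(
    name := "my-shared-lib"
  )
  .jvmPlatform(scalaVersions = Seq("2.13.17", "3.3.7"))
// Generates a subproject per Scala version, built in parallel
```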

The new platform setting also replaces the %%% operator for cross-platform resolution (JVM, JS, Native), though Spark projects are JVM-only.

Plugin Compatibility: The Biggest Migration Cost

Here's the pain point. sbt 2.x plugins use a _sbt2_3 suffix instead of sbt 1.x's _2.12 suffix. Existing sbt 1.x plugins must be explicitly ported to work with sbt 2.x.

The plugins Spark developers commonly use:

  • sbt-assembly: available via cross-publish
  • sbt-native-packager: migration in progress
  • sbt-scalafmt: available
  • sbt-scoverage: migration in progress
  • sbt-release: migration in progress

The sbt2-compat library helps plugin authors cross-publish for both sbt 1.x and 2.x from a single codebase. It provides compatibility shims for the biggest API change: file handling moved from java.io.File everywhere to context-specific types (HashedVirtualFileRef for classpath entries, VirtualFile for task outputs).
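A rough sketch of what the file-type change looks like in a custom task. This is an illustration, not verified against the final API; fileConverter is the mechanism sbt 2.x exposes for mapping virtual references to on-disk paths:

```scala
// Custom task adapted to sbt 2.x's virtual file types (sketch)
val listClasspath =
  taskKey[Seq[java.nio.file.Path]]("resolve classpath entries to real paths")

listClasspath := {
  val conv = fileConverter.value // maps virtual refs <-> on-disk paths
  (Compile / fullClasspath).value.map { entry =>
    conv.toPath(entry.data) // entry.data is a HashedVirtualFileRef, not java.io.File
  }
}
```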

Before migrating, inventory every plugin in your project/plugins.sbt. If a critical plugin hasn't been ported, that's your blocker. Check each plugin's GitHub for sbt 2.x issues or PRs.

Other Breaking Changes That Affect Spark Projects

A few more changes to be aware of:

JDK 17 is the minimum. sbt 2.x requires JDK 17+. Spark 4.x already requires JDK 17, so this shouldn't be an issue for teams that have upgraded to Spark 4. Teams still on Spark 3.x with JDK 8 or 11 have a larger jump ahead.

IntegrationTest configuration is removed. If your Spark project uses IntegrationTest for slow or cluster-dependent tests, you'll need to restructure those as a separate subproject with standard Test configuration.
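A sketch of that restructuring, assuming a root project named root (adjust names to your build):

```scala
// build.sbt — replace the removed IntegrationTest config with a subproject
lazy val integration = (project in file("integration"))
  .dependsOn(root % "test->test") // reuse root's test helpers, if any
  .settings(
    publish / skip := true,
    Test / fork := true,
    Test / parallelExecution := false // e.g. one SparkSession or cluster at a time
  )
```

Slow tests then run via integration/test, and they stay out of the default test run as long as the subproject isn't aggregated by root.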

exportJars defaults to true. This change can cause NullPointerException or FileSystemNotFoundException when tests access resources via classpath. If you hit this, add exportJars := false to your build.

Target directory structure changed. The output layout is now target/out/jvm/scala-<version>/ instead of target/scala-<version>/. Scripts or CI steps that reference specific paths under target/ will need updating. If you're building fat jars with sbt-assembly, verify your deployment scripts still find the jar where they expect it.

A Practical Migration Checklist

When sbt 2.0 goes stable, here's the order to tackle migration for a Spark Scala project:

  1. Verify JDK 17+ — You likely already have this if you're on Spark 4.x
  2. Audit plugins — List every plugin in project/plugins.sbt and check sbt 2.x availability
  3. Fix bare settings — Scope root-only settings with LocalRootProject / in multi-module builds
  4. Update syntax — Replace postfix notation, update import statements to Scala 3 style
  5. Remove IntegrationTest — Restructure as a subproject if applicable
  6. Check target paths — Update CI scripts, deployment pipelines, and any hardcoded target/ references
  7. Test with provided scope — Verify Spark dependencies still resolve correctly with the new %% platform-aware operator
  8. Enable remote caching — Optional but recommended for teams; set up after the core migration is stable

An automated migration tool is available for the sbt 0.13 shell syntax changes (colon → slash notation):

// Run this scalafix rule to update old shell syntax in .sbt files
// sbt 0.13 style: test:compile
// sbt 2.x style: Test/compile
scalafix --rules=https://gist.githubusercontent.com/eed3si9n/57e83f5330592d968ce49f0d5030d4d5/raw/Sbt0_13BuildSyntax.scala *.sbt

Should You Upgrade Now?

No — wait for the stable release. RC11 is close, but RCs can still introduce breaking changes. sbt 1.12.x will continue to receive maintenance updates.

What you can do now:

  • Test your build against the RC. Drop sbt.version=2.0.0-RC11 into project/build.properties on a branch and see what breaks. The earlier you identify plugin gaps and syntax issues, the faster you'll migrate when stable lands.
  • Start scoping bare settings. This is backwards-compatible — LocalRootProject / name := "foo" works in sbt 1.x too. Fix it now and you eliminate one migration step.
  • Evaluate remote caching needs. If your Spark CI pipeline takes 15+ minutes, remote caching is worth investigating regardless of sbt version (sbt 1.x has limited caching support via plugins).

The sbt 2.0 upgrade is not a drop-in replacement. But the payoff — faster builds, modern Scala 3 tooling in your build definitions, and team-wide caching — is substantial for Spark projects where build and test times are a real bottleneck.

Article Details

Created: 2026-04-11

Last Updated: 2026-04-11 11:23:00 PM