What do you like best about Databricks?
Unity Catalog has been the single biggest value-add for our enterprise migration. We moved from a Hive Metastore architecture to Unity Catalog and gained centralized governance, lineage tracking, and fine-grained access control across all our data assets without bolting on third-party tools. For a multi-domain organization (finance, manufacturing, supply chain, procurement), having one catalog that enforces consistent naming and permissions across bronze, silver, gold, and platinum layers saved us weeks of manual policy work.
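The kind of centralized policy work this replaced can be sketched in Unity Catalog SQL. This is an illustrative sketch, not our actual grants: the catalog, schema, and group names below are hypothetical.

```sql
-- Illustrative Unity Catalog grants (catalog/schema/group names are hypothetical):
-- one catalog per environment, one schema per medallion layer.
GRANT USE CATALOG ON CATALOG prod TO `data-engineers`;

-- Analysts get read access to the curated layer only.
GRANT USE SCHEMA ON SCHEMA prod.gold TO `finance-analysts`;
GRANT SELECT ON SCHEMA prod.gold TO `finance-analysts`;

-- Domain-level access can be narrowed to individual tables.
GRANT SELECT ON TABLE prod.gold.supply_chain_orders TO `supply-chain-analysts`;
```

Because permissions live in the catalog rather than per-workspace ACLs, the same statements govern every domain's bronze-through-platinum tables consistently.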
UI/UX: The notebook experience with inline Spark SQL and PySpark, combined with the workspace file browser, makes it straightforward for our team to develop and test transformations iteratively. The SQL editor for ad-hoc queries against Unity Catalog tables is clean and responsive.
Integrations: Native Delta Lake support means we don't manage format conversions. The Azure Key Vault integration via secret scopes (dbutils.secrets.get) keeps credentials out of code. ADF integration for orchestration in our V1 environment was seamless, and Databricks Asset Bundles (DAB) for V2 deployment give us a clean CI/CD path with databricks.yml configs targeting dev/qa/prod without custom scripting.
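The dev/qa/prod targeting mentioned above is driven by the bundle config. A minimal sketch of a databricks.yml in that shape follows; the bundle name and workspace hosts are illustrative placeholders, not our real values.

```yaml
# Minimal databricks.yml sketch (bundle name and hosts are illustrative)
bundle:
  name: finance_etl

targets:
  dev:
    mode: development   # prefixes resources, pauses schedules
    default: true
    workspace:
      host: https://adb-dev-example.azuredatabricks.net
  qa:
    workspace:
      host: https://adb-qa-example.azuredatabricks.net
  prod:
    mode: production
    workspace:
      host: https://adb-prod-example.azuredatabricks.net
```

Deploying is then just `databricks bundle deploy -t prod`, with no custom scripting per environment.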
Performance: Switching from temp views to CTEs in our Gold notebooks noticeably reduced cluster memory pressure. The ability to right-size clusters per environment (one worker for dev, three for production) with Standard_D4ds_v5 nodes keeps costs predictable while maintaining performance for our batch ETL workloads.
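The refactor is simple to illustrate. Table and column names below are hypothetical, but the pattern is the one we applied: inline the intermediate step as a CTE so the optimizer plans the whole query at once, instead of materializing a temp view in session state.

```sql
-- Before (sketch): intermediate result registered as a temp view
--   CREATE OR REPLACE TEMP VIEW recent_orders AS
--     SELECT customer_id, amount FROM gold.orders
--     WHERE order_date >= current_date() - INTERVAL 30 DAYS;

-- After (sketch): same logic inlined as a CTE
WITH recent_orders AS (
  SELECT customer_id, amount
  FROM gold.orders
  WHERE order_date >= current_date() - INTERVAL 30 DAYS
)
SELECT customer_id, sum(amount) AS total_amount
FROM recent_orders
GROUP BY customer_id;
```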
Pricing/ROI: The pay-as-you-go compute model paired with single-user security mode clusters means we're not over-provisioning. Consolidating our ETL, governance, and BI serving layer into one platform eliminated licensing for separate catalog, orchestration, and data quality tools.
AI/Intelligence (Genie): Genie Spaces have been an unexpected win. Our business analysts in finance and supply chain can ask natural language questions against curated Gold/Platinum tables without writing SQL. It reduced the number of ad-hoc report requests coming to the data team by giving domain users a self-service path that still respects Unity Catalog permissions.
Support/Onboarding: The documentation is thorough, and the skills-based approach to learning (bundles, Unity Catalog, jobs, SQL) maps well to how our team actually works. Onboarding new engineers to the V2 architecture took about half the time compared to V1 because the platform conventions (medallion architecture, asset bundles, catalog naming) are well-documented and consistent. Review collected by and hosted on G2.com.
What do you dislike about Databricks?
UI/UX: The notebook editor still feels behind dedicated IDEs. No native multi-file search, limited refactoring support, and the git integration UI is clunky for teams managing dozens of notebooks across workflow bundles. We ended up doing all real development in VS Code and treating the Databricks workspace as a deployment target, which adds friction. The workspace file browser also doesn't handle folder structures well when you have 50+ notebooks organized by domain; there's no filtering, tagging, or favorites.
Integrations: Databricks Asset Bundles (DAB) are a step forward, but the documentation has gaps for complex multi-bundle deployments. We run a shared Global_Utilities bundle that other workflow bundles depend on, and getting cross-bundle references to work reliably across dev/qa/prod targets required significant trial and error. The ADF-to-Databricks integration works, but debugging failed pipeline runs means jumping between the ADF monitoring UI and Databricks job runs with no unified view. A tighter handshake between orchestration and compute monitoring would save hours of troubleshooting.
Performance: Cluster cold-start times remain a pain point for development workflows. Spinning up a single-node Standard_D4ds_v5 cluster takes 4-7 minutes, which breaks flow when you're iterating on notebook logic. Serverless compute helps but isn't available for all workload types yet, and the cost premium is hard to justify for dev/test environments.
Pricing/ROI: The DBU pricing model is opaque for capacity planning. Estimating monthly costs for a project with 30+ scheduled jobs, interactive development clusters, and SQL warehouse queries requires building custom spreadsheets because the built-in cost management tools don't give you a clear forecast by workflow or domain. We've been surprised by cost spikes from jobs that ran longer than expected with no easy way to set per-job budget alerts.
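Our partial workaround for the missing per-workflow forecast is to query the billing system tables directly. This is a sketch assuming Unity Catalog system tables are enabled on the account; it only approximates cost (DBUs, not dollars, and only for jobs that record a job_id).

```sql
-- Approximate DBU consumption per job over the last 30 days
-- (sketch; assumes system.billing.usage is enabled)
SELECT
  usage_metadata.job_id,
  sum(usage_quantity) AS dbus
FROM system.billing.usage
WHERE usage_date >= current_date() - INTERVAL 30 DAYS
GROUP BY usage_metadata.job_id
ORDER BY dbus DESC;
```

It answers "which job spiked last month," but it is retrospective; it is no substitute for real per-job budget alerts.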
Support/Onboarding: Enterprise support response times are inconsistent. Critical issues with Unity Catalog permissions during our migration took 3-5 business days for initial triage, which stalled our deployment timeline. The community forums are helpful for common patterns, but for Unity Catalog edge cases (cross-catalog lineage, complex permission inheritance), the knowledge base is thin.
AI/Intelligence: Genie is promising but still rough for production use. It struggles with joins across more than 3-4 tables, sometimes generates incorrect SQL against our Gold layer, and there's no easy way to curate or correct its responses to improve accuracy over time. Our business users got excited, tried it, hit wrong answers on moderately complex questions, and lost trust. A feedback loop where domain experts can flag and correct Genie's outputs would make it genuinely production-ready.