A great experience that combines ML runtimes, MLflow, and Spark, with the ability to use Python and SQL seamlessly in one platform. Since Databricks notebooks can be saved as Python scripts in the background, it is amazing to have both notebook and script...
Too much customization is needed to find the right mix of parameters for optimal performance. Snowflake, on the other hand, provides many of these features out of the box, so the developer doesn't have to worry about them.
The user interface is seamless, and it provides options to integrate data from various sources. Apart from that, it can also train some ML models via queries, which is a plus.
Partitioning only works with dates. Join statements don't get even trivial optimization without a WHERE clause: FROM table1 JOIN table2 ON table1.a = table2.a AND table1.a = 123 takes more resources and costs more than FROM table1 ...
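The comparison in the review is cut off, but the preceding sentence suggests it contrasts a constant filter placed in the join's ON clause with the same predicate in a WHERE clause. A minimal sketch of the two query shapes follows; the table and column names come from the review, and the second form is an assumption about what the truncated query was.

```sql
-- Form the review flags as expensive: the constant filter sits in the ON clause.
SELECT *
FROM table1
JOIN table2
  ON table1.a = table2.a
 AND table1.a = 123;

-- Presumed alternative being compared against (assumption): the same constant
-- predicate expressed in a WHERE clause, which the review says optimizes better.
SELECT *
FROM table1
JOIN table2
  ON table1.a = table2.a
WHERE table1.a = 123;
```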