Spark has become the de facto analytics tool for data stored in Scylla. In this webinar we will review common workloads that combine Spark and Scylla, such as Extract, Transform, Load (ETL); joins across tables; and summarization and reporting. We will also cover data modeling best practices for Scylla-Spark use cases, along with different deployment scenarios.
To conclude, we will share performance tuning settings to utilize both Scylla and Spark at peak performance.
Join us to learn:
- Why using Spark with Scylla is advantageous for analytics workloads
- How to build reports using Spark and Scylla
- Best practices for data modeling and performance tuning for Scylla and Spark