  • Thank you for your detailed answer. I suppose I was not specific enough about my use case. I’m totally OK with Tika being run before the Spark workflow to extract the data and detect the language and MIME type. I’m actually already running Spark as physically close to S3 as possible. Everything is currently running on my local machine in Docker (including S3, as I’m using MinIO). In prod this will all be within the same Kubernetes cluster. What I mean by the doc processing: I’m not actually performing ops on the documents yet. Just reading one from the bucket on the same server can take forever or no time at all. Commented Feb 6, 2021 at 15:37
  • I’m not using any SaaS products here, so I’m not using Spark as a service either. Everything is local. It’s sort of bizarre to me that it was advised not to reuse the same ‘SparkSession’ through an application’s lifetime, but rather to expensively re-create it every time I would go to process a file (see the sketch after these comments). Also, I’m concerned about Spark’s ability to “watch” S3 because I have dynamically allocated buckets (one bucket for each grouping of documents uploaded by the user). A lot of these concerns stem from my lack of knowledge about Spark and its capabilities. I can’t seem to find many similar use cases. Commented Feb 6, 2021 at 15:46
  • If you are running everything for your proof of concept locally, it will be slower than in prod, since you don’t really have the parallelism of a bigger cluster (well, you sort of do with multiple cores, but it still won’t be comparable to the performance of a production cluster). Commented Feb 7, 2021 at 23:17
  • Running multiple Spark contexts in the same JVM is not recommended. The multiple-context configuration was only allowed for internal testing during Spark development; if you use it in user programs, it will lead to unpredictable behaviour. So you may want to rethink your code and design so that you are not using multiple contexts. Commented Feb 7, 2021 at 23:20
  • Thank you again for all of your input. Commented Feb 9, 2021 at 0:10
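
To make the session-reuse point in the comments concrete, here is a minimal PySpark sketch of holding a single `SparkSession` for the application’s lifetime and reading from several dynamically named buckets with it. The app name, endpoint, credentials, and bucket names are hypothetical placeholders for a local MinIO setup, not values from this thread. `getOrCreate()` returns the already-running session on repeated calls, which also avoids the multiple-contexts-per-JVM problem mentioned above.

```python
# Minimal sketch: one SparkSession reused for the whole application.
# Requires the hadoop-aws package on the classpath for the s3a:// scheme.
# Endpoint, credentials, and bucket names below are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("doc-pipeline")                                  # hypothetical name
    .config("spark.hadoop.fs.s3a.endpoint", "http://localhost:9000")
    .config("spark.hadoop.fs.s3a.access.key", "minioadmin")   # placeholder creds
    .config("spark.hadoop.fs.s3a.secret.key", "minioadmin")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")  # MinIO uses path-style addressing
    .getOrCreate()                                            # returns the same session on later calls
)

# Each per-user bucket can be read with the same session; there is no
# need to rebuild the session per file or per bucket.
for bucket in ["user-docs-1", "user-docs-2"]:                 # hypothetical bucket names
    df = spark.read.format("binaryFile").load(f"s3a://{bucket}/")
    print(bucket, df.count())

spark.stop()  # only when the application itself shuts down
```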