"High Level Architect
Client
v
API Gateway
v
Object Storage
v
Message Queue
v
Worker
v
Database
The client should be able to submit documents through a web site or directly via the API.
The API Gateway should be used to upload documents and to fetch document info and processing status.
Object Storage should hold the original document and emit an event to the Message Queue to start processing.
A Message Queue is necessary because millions of documents may need to be processed at a time.
The Worker extracts text from the documents with OCR.
The Database should store the extracted text and each document's processing status."
"Data Warehouses are purpose built to derive a business insight but datalakes are build to store virtually all data generated by the organization on which meaning can be derived later."
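To make the contrast concrete, here is a small PySpark sketch (my illustration, not from the original answer). Raw events are read from a hypothetical lake path with the schema imposed only at query time, and a curated result is written to a warehouse-style table; the path, column names, and table name are assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lake-vs-warehouse").getOrCreate()

# Data lake: raw, loosely structured events are landed as-is (schema-on-read);
# meaning is only derived later, at query time.
raw_events = spark.read.json("s3://company-lake/raw/clickstream/2024/")  # hypothetical path

daily_signups = (
    raw_events
    .filter(F.col("event_type") == "signup")            # structure imposed only now
    .groupBy(F.to_date("timestamp").alias("day"))
    .count()
)

# Data warehouse: a curated, purpose-built table answering a known business
# question (schema-on-write), ready for BI and reporting.
daily_signups.write.mode("overwrite").saveAsTable("analytics.daily_signups")
```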
"Hadoop is better than PySpark when you are dealing with extremely large scale, batch oriented, non-iterative workloads where in-memory computing isn't feasible/ necessary, like log storage or ETL workflows that don't require high response times. It's also better in situations where the Hadoop ecosystem is already deeply embedded and where there is a need for resource conscious, fault tolerant computation without the overhead of Spark's memory constraints. In these such scenarios, Hadoop's disk-b"
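For illustration, a minimal Hadoop Streaming job in Python (not from the original answer) shows the kind of disk-based, non-iterative batch work described here; the log format, file names, jar location, and HDFS paths are assumptions.

```python
#!/usr/bin/env python3
"""Hadoop Streaming job: count access-log lines per HTTP status code.

Submitted roughly like this (jar location and HDFS paths are assumptions):
  hadoop jar hadoop-streaming.jar \
      -files logcount.py \
      -mapper  "logcount.py map" \
      -reducer "logcount.py reduce" \
      -input /logs/raw -output /logs/status_counts
"""
import sys

def map_phase():
    # Emit "status<TAB>1" per log line; Hadoop spills and shuffles map output
    # on disk, so no in-memory dataset is ever required.
    for line in sys.stdin:
        parts = line.split()
        if len(parts) > 8:
            print(f"{parts[8]}\t1")  # field 8 = HTTP status in common log format

def reduce_phase():
    # Input arrives grouped and sorted by key; keep one running total per key.
    current_key, total = None, 0
    for line in sys.stdin:
        key, _, value = line.rstrip("\n").partition("\t")
        if key != current_key and current_key is not None:
            print(f"{current_key}\t{total}")
            total = 0
        current_key = key
        total += int(value)
    if current_key is not None:
        print(f"{current_key}\t{total}")

if __name__ == "__main__":
    map_phase() if sys.argv[1] == "map" else reduce_phase()
```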
"There are 2 questions popping into my mind:
Does the 2nd job have to kick off at 12:30 AM?
Are there other jobs depending on the 2nd job?
If both answers are no, we can simply postpone the second job to allow sufficient time for the first one to complete. If they are yes, we could let the 2nd job retry up to a certain number of times, making sure that even hitting the maximum number of retries won't delay or fail the jobs that follow."
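A small, generic Python sketch of the bounded-retry idea (my illustration; run_second_job, the retry counts, and the 2:00 AM cutoff are hypothetical stand-ins). The point is the hard deadline: retries stop early so downstream jobs are never pushed back.

```python
import time
from datetime import datetime, timedelta

MAX_RETRIES = 5
RETRY_DELAY = timedelta(minutes=10)
# Hard cutoff so that even exhausting every retry cannot delay downstream jobs.
DEADLINE = datetime.now().replace(hour=2, minute=0, second=0, microsecond=0)

def run_second_job() -> bool:
    """Hypothetical stand-in for the real 12:30 AM job; returns True on success."""
    raise NotImplementedError

def run_with_retries() -> bool:
    for _ in range(MAX_RETRIES):
        try:
            if run_second_job():
                return True
        except Exception:
            pass                                   # treat errors like a failed attempt
        if datetime.now() + RETRY_DELAY >= DEADLINE:
            break                                  # no room for another attempt
        time.sleep(RETRY_DELAY.total_seconds())
    # Alert instead of blocking, so the jobs that follow still start on schedule.
    print("second job did not succeed within its window; alerting on-call")
    return False
```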