Isnadea soraya R. - "To sort a list of numbers efficiently, I would use a spreadsheet application, namely Excel, with formulas to make sorting the list of numbers easier and faster."
Lee F. - "Is this a shuttle service or the standard Uber service? Let's assume the latter. First, Uber should gather some data: find cities similar to Bangalore, on several measures, that Uber already serves, and also bring in any cities in India that Uber already serves. With this in hand, do a multifactorial analysis to estimate the likely demand in Bangalore. That will help the company understand how many drivers it needs at launch, so that user expectations for service are met."
Anonymous Emu - "let str = 'this is a test of programs';
let obj = {};
// Tally how many times each character (including spaces) appears in the string
for (let ch of str) {
  obj[ch] = (obj[ch] || 0) + 1;
}
console.log(JSON.stringify(obj));"
Joshua R. - "Hadoop is better than PySpark when you are dealing with extremely large-scale, batch-oriented, non-iterative workloads where in-memory computing isn't feasible or necessary, such as log storage or ETL workflows that don't require fast response times. It's also better in situations where the Hadoop ecosystem is already deeply embedded, and where there is a need for resource-conscious, fault-tolerant computation without the overhead of Spark's memory demands."
Exponent - "This is another Diagnosis problem. To answer this question, we suggest you use our framework (along with the TROPIC method) to be as thorough as possible. The framework is as follows:
Ask clarifying questions
List potential high-level reasons
Gather context (TROPIC):
    Time
    Region
    Other features / products (internal)
    Platform
    Industry / Competition
    Cannibalization
Establish a theory of probable cause
Test theories
Propose solutions
Summarize"
Mridul J. - "We have a list of documents.
We want to build an index that maps keywords to documents containing them.
Then, given a query keyword, we can efficiently retrieve all matching documents.
docs = [
"Python is great for data science",
"C++ is a powerful language",
"Python supports OOP and functional programming",
"Weather today is sunny",
"Weather forecast shows rain"
]"
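The index described above is a classic inverted index. A minimal Python sketch using the sample documents (the helper names `build_index` and `search` are illustrative, not part of the original answer):

```python
from collections import defaultdict

def build_index(docs):
    """Map each lowercased word to the set of indices of documents containing it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, keyword):
    """Return sorted indices of documents containing the keyword (case-insensitive)."""
    return sorted(index.get(keyword.lower(), set()))

docs = [
    "Python is great for data science",
    "C++ is a powerful language",
    "Python supports OOP and functional programming",
    "Weather today is sunny",
    "Weather forecast shows rain",
]

index = build_index(docs)
print(search(index, "Python"))   # [0, 2]
print(search(index, "weather"))  # [3, 4]
```

Building the index is linear in the total number of words; each query then costs a single hash lookup instead of a scan over every document.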
Arshad P. - "The schema is wrong: the id column of the product table is mapped to the id column of the transactions table; product.id should instead point to product_id in the transactions table."
Mubarak O. - "Tell a story that you don't need to sit at a laptop to be in a career-growing mood.
"You can grow your career on the go."
"Grow your career while you scroll."
We will target our messaging at every point where people go to read about job roles and descriptions."