"select employeename, employeeid, salary, department, DR
from (
select employeename, employeeid, salary, dense_rank() over (partition by department order by salary desc) DR, department from employee
)
where DR <=3
order by department, DR"
Sreeram reddy B. - "select employeename, employeeid, salary, department, DR
from (
select employeename, employeeid, salary, dense_rank() over (partition by department order by salary desc) DR, department from employee
)
where DR <=3
order by department, DR"See full answer
"How do you find consecutive days for login (MySQL, SQL, date, subquery, MySQL 5.7, development)?
1
Follow
Request
Answer
More
All related (34)
Recommended
📷
Trausti Thor Johannsson
·
Follow
Been using MySQL for more than 16 yearsDec 27
There are functions like DATEDIFF but there are also BETWE"
Hayatu H. - "How do you find consecutive days for login (MySQL, SQL, date, subquery, MySQL 5.7, development)?
1
Follow
Request
Answer
More
All related (34)
Recommended
📷
Trausti Thor Johannsson
·
Follow
Been using MySQL for more than 16 yearsDec 27
There are functions like DATEDIFF but there are also BETWE"See full answer
"SELECT
s.Sale_Date,
SUM(si.Quantity * si.SalePrice) AS TotalRevenue
FROM Sales s
JOIN SaleItems si ON s.SaleID = si.Sale_ID
GROUP BY s.Sale_Date
ORDER BY s.Sale_Date;
"
Bala G. - "SELECT
s.Sale_Date,
SUM(si.Quantity * si.SalePrice) AS TotalRevenue
FROM Sales s
JOIN SaleItems si ON s.SaleID = si.Sale_ID
GROUP BY s.Sale_Date
ORDER BY s.Sale_Date;
"See full answer
"What do all data scientists need to know about how to work with very large datasets?
37
Follow
Request
Answer
More
All related (39)
Recommended
📷
Corrin Lakeland
·
Follow
, M.S. Data Science, University of St. Thomas, St. Paul (2018)6yData Science consultant and managerUpvoted by[Tom Halloin](https://www.quora"
Hayatu H. - "What do all data scientists need to know about how to work with very large datasets?
37
Follow
Request
Answer
More
All related (39)
Recommended
📷
Corrin Lakeland
·
Follow
, M.S. Data Science, University of St. Thomas, St. Paul (2018)6yData Science consultant and managerUpvoted by[Tom Halloin](https://www.quora"See full answer
"Data lake and warehouse are both places that allow an organization to store large amounts of data.
When swimming in a lake, one would imagine that they come across all sorts of stuff - floating twigs, fish in the water, stones, chemicals and sometimes may be even a snake. Similarly, a data lake stores all forms of data that the company has without any indexing. The data is available at any time but needs to be first cleaned up and reorganized before it can be used for any type of analysis.
A"
Kshitij I. - "Data lake and warehouse are both places that allow an organization to store large amounts of data.
When swimming in a lake, one would imagine that they come across all sorts of stuff - floating twigs, fish in the water, stones, chemicals and sometimes may be even a snake. Similarly, a data lake stores all forms of data that the company has without any indexing. The data is available at any time but needs to be first cleaned up and reorganized before it can be used for any type of analysis.
A"See full answer
"Hadoop is better than PySpark when you are dealing with extremely large scale, batch oriented, non-iterative workloads where in-memory computing isn't feasible/ necessary, like log storage or ETL workflows that don't require high response times. It's also better in situations where the Hadoop ecosystem is already deeply embedded and where there is a need for resource conscious, fault tolerant computation without the overhead of Spark's memory constraints. In these such scenarios, Hadoop's disk-b"
Joshua R. - "Hadoop is better than PySpark when you are dealing with extremely large scale, batch oriented, non-iterative workloads where in-memory computing isn't feasible/ necessary, like log storage or ETL workflows that don't require high response times. It's also better in situations where the Hadoop ecosystem is already deeply embedded and where there is a need for resource conscious, fault tolerant computation without the overhead of Spark's memory constraints. In these such scenarios, Hadoop's disk-b"See full answer