"Depends on the memory available. Also if there's any structure to the data already.
If this is just pure numbers random and all over the place and we don't need to worry about space. Then radix sort would be the best.
But this all depends on what constraints the interviewer gives you."
Ted K. - "Depends on the memory available. Also if there's any structure to the data already.
If this is just pure numbers random and all over the place and we don't need to worry about space. Then radix sort would be the best.
But this all depends on what constraints the interviewer gives you."See full answer
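To make the radix sort suggestion concrete, here is a minimal LSD radix sort sketch for non-negative integers (an illustrative implementation, not part of Ted K.'s answer; the base-10 choice and function name are assumptions):

from typing import List

def radix_sort(nums: List[int], base: int = 10) -> List[int]:
    # Illustrative LSD radix sort for non-negative integers.
    if not nums:
        return nums
    max_val = max(nums)
    exp = 1  # current digit position: 1s, 10s, 100s, ...
    while max_val // exp > 0:
        # Stable bucket pass on the digit at position `exp`.
        buckets = [[] for _ in range(base)]
        for n in nums:
            buckets[(n // exp) % base].append(n)
        nums = [n for bucket in buckets for n in bucket]
        exp *= base
    return nums

# Runs in O(d * (n + base)) time for d-digit keys, but needs O(n + base) extra
# space, which is why the answer hedges on memory constraints.
print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))  # [2, 24, 45, 66, 75, 90, 170, 802]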
"from typing import List
def traprainwater(height: List[int]) -> int:
if not height:
return 0
l, r = 0, len(height) - 1
leftMax, rightMax = height[l], height[r]
res = 0
while l < r:
if leftMax < rightMax:
l += 1
leftMax = max(leftMax, height[l])
res += leftMax - height[l]
else:
r -= 1
rightMax = max(rightMax, height[r])
"
Anonymous Roadrunner - "from typing import List
def traprainwater(height: List[int]) -> int:
if not height:
return 0
l, r = 0, len(height) - 1
leftMax, rightMax = height[l], height[r]
res = 0
while l < r:
if leftMax < rightMax:
l += 1
leftMax = max(leftMax, height[l])
res += leftMax - height[l]
else:
r -= 1
rightMax = max(rightMax, height[r])
"See full answer
"I've participated in several competitions in Kaggle concerning medical images. My most recent competition deals with images of skin lesions and classifying them as either melanoma or not. I focused on fine-tuning pretrained models and ensembling them.
I also like to keep track of the latest trends of computer vision research, with a focus on making models memory-efficient through model compression and interpretability."
Xuelong A. - "I've participated in several competitions in Kaggle concerning medical images. My most recent competition deals with images of skin lesions and classifying them as either melanoma or not. I focused on fine-tuning pretrained models and ensembling them.
I also like to keep track of the latest trends of computer vision research, with a focus on making models memory-efficient through model compression and interpretability."See full answer
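For a concrete picture of the fine-tuning step Xuelong A. describes, here is a minimal sketch assuming a PyTorch/torchvision setup; the ResNet-18 backbone, learning rate, and dummy batch are illustrative assumptions, not details from the answer:

import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and replace the head with a single
# melanoma-vs-not logit.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

# Freeze the backbone initially and train only the new head.
for name, param in model.named_parameters():
    if not name.startswith("fc"):
        param.requires_grad = False

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.AdamW(model.fc.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch of 224x224 RGB images.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4, 1)).float()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()

# Ensembling, as mentioned in the answer, typically amounts to averaging the
# sigmoid outputs of several such fine-tuned models at inference time.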
"The difference between convex and nonconvex functions lies in their mathematical properties and the implications for optimization problems.
Convex Functions:A convex function has a shape where any line segment connecting two points on its graph lies entirely above or on the graph.
This property ensures that any local minimum is also a global minimum, making optimization straightforward and reliable.
Convex functions are critical in machine learning and optimization tasks because of th"
Alan T. - "The difference between convex and nonconvex functions lies in their mathematical properties and the implications for optimization problems.
Convex Functions:A convex function has a shape where any line segment connecting two points on its graph lies entirely above or on the graph.
This property ensures that any local minimum is also a global minimum, making optimization straightforward and reliable.
Convex functions are critical in machine learning and optimization tasks because of th"See full answer
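The "line segment lies above the graph" description corresponds to the standard definition (notation added here for completeness, not taken from the answer): a function f is convex if, for all x, y in its domain and all λ ∈ [0, 1],

f(λx + (1 − λ)y) ≤ λ f(x) + (1 − λ) f(y).

A nonconvex function violates this inequality for some choice of x, y, and λ, which is what allows it to have local minima that are not global.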
"Effective loss functions for computer vision models vary depending on the specific task, some commonly used loss functions for different tasks:
Classification
Cross-Entropy Loss:Used for multi-class classification tasks.
Measures the difference between the predicted probability distribution and the true distribution.
Binary Cross-Entropy Loss:Used for binary classification tasks.
Evaluates the performance of a model by comparing predicted probabilities to the true binary labe"
Shibin P. - "Effective loss functions for computer vision models vary depending on the specific task, some commonly used loss functions for different tasks:
Classification
Cross-Entropy Loss:Used for multi-class classification tasks.
Measures the difference between the predicted probability distribution and the true distribution.
Binary Cross-Entropy Loss:Used for binary classification tasks.
Evaluates the performance of a model by comparing predicted probabilities to the true binary labe"See full answer
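As a concrete illustration of the two losses named above, here is a minimal NumPy sketch (the function names and the epsilon clamp for numerical stability are assumptions for this example, not part of the answer):

import numpy as np

def cross_entropy(probs: np.ndarray, labels: np.ndarray, eps: float = 1e-12) -> float:
    # Multi-class cross-entropy: probs is (N, C) predicted probabilities,
    # labels is (N,) integer class indices.
    probs = np.clip(probs, eps, 1.0)
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels])))

def binary_cross_entropy(p: np.ndarray, y: np.ndarray, eps: float = 1e-12) -> float:
    # Binary cross-entropy: p is (N,) predicted probabilities of the positive
    # class, y is (N,) 0/1 labels.
    p = np.clip(p, eps, 1.0 - eps)
    return float(-np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))

# Tiny illustrative batches.
print(cross_entropy(np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]), np.array([0, 1])))
print(binary_cross_entropy(np.array([0.9, 0.2, 0.6]), np.array([1.0, 0.0, 1.0])))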