"The difference between convex and nonconvex functions lies in their mathematical properties and the implications for optimization problems.
Convex Functions:A convex function has a shape where any line segment connecting two points on its graph lies entirely above or on the graph.
This property ensures that any local minimum is also a global minimum, making optimization straightforward and reliable.
Convex functions are critical in machine learning and optimization tasks because of th"
Alan T. - "The difference between convex and nonconvex functions lies in their mathematical properties and the implications for optimization problems.
Convex Functions: A convex function has a shape where any line segment connecting two points on its graph lies entirely on or above the graph.
This property ensures that any local minimum is also a global minimum, making optimization straightforward and reliable.
Convex functions are critical in machine learning and optimization tasks because of th"
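A minimal numeric sketch of the chord property described above (not from the quoted answer): a convex f satisfies f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y) for every t in [0, 1]; a brute-force check finds violations for a nonconvex function such as sin but none for x^2. The helper name below is illustrative.

```python
import numpy as np

# Minimal numeric sketch of the chord property: for convex f,
# f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y) for all t in [0, 1].
def violates_convexity(f, xs, ts, tol=1e-9):
    for x in xs:
        for y in xs:
            for t in ts:
                if f(t * x + (1 - t) * y) > t * f(x) + (1 - t) * f(y) + tol:
                    return True  # some chord dips below the graph
    return False

xs = np.linspace(-3, 3, 25)
ts = np.linspace(0, 1, 11)
print(violates_convexity(lambda v: v ** 2, xs, ts))  # False: x^2 is convex
print(violates_convexity(np.sin, xs, ts))            # True: sin is nonconvex
```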
"Overfitting is the condition where your model is giving an unexpectedly higher accuracy because of its training in a small database and not getting exposed to anu different type of database while testing"
Bhavya V. - "Overfitting is the condition where your model gives an unexpectedly high accuracy because it was trained on a small dataset and was not exposed to any different type of data while testing"
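A minimal sketch of the idea, assuming nothing beyond NumPy: a high-degree polynomial fit to a small noisy training set scores almost perfectly on that set but poorly on fresh data drawn from the same process.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + rng.normal(0, 0.1, n)  # true signal plus noise
    return x, y

x_train, y_train = make_data(12)   # small training set
x_test, y_test = make_data(200)    # unseen data from the same distribution

for degree in (2, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # the degree-9 fit nearly interpolates the training points (tiny train MSE)
    # yet generalizes worse than the degree-2 fit (larger test MSE)
    print(f"degree={degree}: train MSE={train_mse:.4f}, test MSE={test_mse:.4f}")
```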
"Even more faster and vectorized version, using np.linalg.norm - to avoid loop and np.argpartition to select lowest k. We dont need to sort whole array - we need to be sure that first k elements are lower than the rest.
import numpy as np
def knn(Xtrain, ytrain, X_new, k):
distances = np.linalg.norm(Xtrain - Xnew, axis=1)
k_indices = np.argpartition(distances, k)[:k] # O(N) selection instead of O(N log N) sort
return int(np.sum(ytrain[kindices]) > k / 2.0)
`"
Dinar M. - "An even faster and vectorized version, using np.linalg.norm to avoid the loop and np.argpartition to select the lowest k. We don't need to sort the whole array - we only need the first k elements to be smaller than the rest.

import numpy as np

def knn(Xtrain, ytrain, X_new, k):
    # vectorized Euclidean distances from every training row to the query point
    distances = np.linalg.norm(Xtrain - X_new, axis=1)
    k_indices = np.argpartition(distances, k)[:k]  # O(N) selection instead of O(N log N) sort
    # majority vote over the k nearest binary (0/1) labels
    return int(np.sum(ytrain[k_indices]) > k / 2.0)
"
"As an interviewer, I have asked this question to candidates in the past. Here are the major topics I am looking for in an interview
The candidate should understand that there are ways of measuring the loss of a particular clustering. For example, we can take the average distance of each point to it's cluster center.
The candidate should understand that this loss will always decrease as the number of clusters increases. For that reason, we can't just pick the value of K that minimizes the l"
Michael F. - "As an interviewer, I have asked this question to candidates in the past. Here are the major topics I am looking for in an interview:
The candidate should understand that there are ways of measuring the loss of a particular clustering. For example, we can take the average distance of each point to its cluster center.
The candidate should understand that this loss will always decrease as the number of clusters increases. For that reason, we can't just pick the value of K that minimizes the loss"
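A minimal sketch of that behavior, assuming scikit-learn is available (inertia_ is KMeans's within-cluster sum of squared distances): the loss keeps falling as K grows, which is why heuristics such as looking for an "elbow" in the curve are used instead of taking the arg-min.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# three well-separated blobs, so the "right" K is 3
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2))
               for c in ([0, 0], [5, 5], [0, 5])])

for k in range(1, 8):
    loss = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
    print(f"K={k}: loss={loss:.1f}")  # strictly decreasing; elbow near K=3
```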
"BERT - bidirectional encoder representations from transformer.
For example:- it takes an entire sentence as input at once and understands the meaning of the words in that sentence and calculate the relations of words with each other irrespective of their positions from the original word to understand the meaning of the word using neighboring words. BERT model is a pre trained transformer model which can be fine-tuned for our purposes. It is used for tasks such sentimental analysis, question answ"
Bhavya V. - "BERT - Bidirectional Encoder Representations from Transformers.
For example: it takes an entire sentence as input at once, understands the meaning of the words in that sentence, and calculates the relations of the words with each other irrespective of their positions, using neighboring words to understand the meaning of each word. BERT is a pre-trained transformer model which can be fine-tuned for our purposes. It is used for tasks such as sentiment analysis and question answering"
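A minimal usage sketch via the Hugging Face transformers library (assumed installed; the default sentiment-analysis pipeline downloads a fine-tuned BERT-family checkpoint on first use):

```python
from transformers import pipeline

# Loads a fine-tuned BERT-family model for sentiment analysis
# (network access is needed on first run to fetch the default checkpoint).
classifier = pipeline("sentiment-analysis")
print(classifier("BERT reads the whole sentence at once."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```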
"Functional requirement's:
partial search while searching for users, products any keywords in the search.
additional keywords in the filter
Black listed words in the search.
Non functional requirements:
low latency,
search through 2 Billion records
recent search should be cached.
Design:
high reads,
we should have caching enabled over the primary db storages.
caching cluster can be added when the search load increases.
read ahead. - check in cache
(periodic cache refresh), lfu, lru
"
Sandeep Y. - "Functional requirements:
partial search while searching for users, products, or any keywords
additional keywords in the filter
blacklisted words in the search
Non-functional requirements:
low latency
search through 2 billion records
recent searches should be cached
Design:
high reads, so caching should be enabled over the primary DB storage
a caching cluster can be added when the search load increases
read-ahead: check the cache first, with periodic cache refresh; evict via LFU or LRU
"