Artificial Intelligence Interview Questions

Review this list of 72 Artificial Intelligence interview questions and answers verified by hiring managers and candidates.
  • Asked at Anthropic

    Matthew W. - "I’d start by clarifying the business outcome we’re driving and the context of the AI platform I own. Assuming I’m given no further info, I’d frame why safety is a must with GenAI: with the market saturated by new LLMs daily, trust is the main differentiator for retention and defensibility. Safety failures can trigger regulatory action (FTC, EU, etc.), reputational damage, or user churn. Then I’d identify some key buckets: kids/teen protections…"

    Product Manager
    Artificial Intelligence
    +1 more
  • Asked at Mistral AI

    Shalin G. - "Tell me about the most recent product you launched that you are very proud of?"

    Product Manager
    Artificial Intelligence
    +1 more
  • Asked at Perplexity AI

    Debajyoti B. - "As per my understanding, the success of an AI product/feature has to be measured in two aspects: the success of the problem it solves and the success of the model used. Success of the problem: start with the overall business goal (acquisition, retention, etc.) and the high-level metric (essentially the north star), then come the product/feature-level goals and metrics (conversion, engagement, clicks, etc.). Success of the model: general (precision, recall, latency); ethical (bias, safety); busine…"

    Product Manager
    Artificial Intelligence
    +1 more
  • Alok S. - "Functional requirements: the user can send an input and wait for the result; group up to 100 individual requests into a single GPU; the system should send results back to the user who requested them when done. Non-functional requirements: minimize the waiting between two batches of execution / reduce idle time; return an error message if a batch fails; scale to support multiple GPUs. Core entities: Request, Batch, Result. API design: POST /predict -> {requestid: "", response: ""} req…"

    Software Engineer
    Artificial Intelligence
    +4 more
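The batching requirement in the answer above can be sketched minimally. The `Request` shape and the `MAX_BATCH_SIZE` constant are assumptions drawn from the answer's "up to 100 requests per GPU" figure, not a definitive design:

```python
from dataclasses import dataclass, field
import uuid

MAX_BATCH_SIZE = 100  # from the answer: group up to 100 requests into one GPU call

@dataclass
class Request:
    payload: str
    request_id: str = field(default_factory=lambda: str(uuid.uuid4()))

def make_batches(pending: list[Request]) -> list[list[Request]]:
    """Split pending requests into GPU-sized batches, none larger than MAX_BATCH_SIZE."""
    return [pending[i:i + MAX_BATCH_SIZE] for i in range(0, len(pending), MAX_BATCH_SIZE)]

# 250 queued requests -> batches of 100, 100, and 50
batches = make_batches([Request(payload=str(i)) for i in range(250)])
print([len(b) for b in batches])  # [100, 100, 50]
```

A real batcher would also flush partial batches on a timeout so a lone request is not stuck waiting for 99 peers, which addresses the "reduce idle time" requirement.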
  • Asked at Meta

    Kartikeya N. - "Clarifying questions: What company are we? Uber. Why do we want to build the feature? To increase transactions or revenue. Do we have a particular problem in mind, or am I free to select one? You are free to select one. Do we have any constraints or timelines in mind? Assume a realistic budget and a timeline of 6 months. Can I assume we will have access to relevant data and AI models for the product? Yes. First, let's discuss the vision for Uber: it is to allow peop…"

    Product Manager
    Artificial Intelligence
    +1 more
  • Asked at Nvidia

    S R. - "For RAG systems, you need to evaluate both retrieval and generation. Typically you have golden-truth question-and-answer pairs as an evaluation dataset. For retrieval, check whether the retrieved contexts are relevant to the question. For generation, check the semantic similarity between the golden-truth answer and the RAG-generated answer. Apart from this, you can evaluate the output using frameworks like RAGAS, where generated answers are evaluated on completeness, faithfulness, to…"

    Product Manager
    Artificial Intelligence
    +1 more
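The generation-evaluation loop described above can be sketched with a toy similarity function. Real pipelines use embedding-based semantic similarity; the token-overlap measure, the `threshold` default, and the function names here are assumptions for illustration:

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Token-overlap stand-in for semantic similarity (real systems use embeddings)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def answer_accuracy(golden_answers: list[str],
                    generated: list[str],
                    threshold: float = 0.5) -> float:
    """Fraction of generated answers similar enough to their golden-truth counterparts."""
    hits = sum(1 for gold, out in zip(golden_answers, generated)
               if jaccard_similarity(gold, out) >= threshold)
    return hits / len(golden_answers)

print(answer_accuracy(["paris is the capital of france"],
                      ["the capital of france is paris"]))  # 1.0
```

Retrieval would be scored the same way against the golden contexts (e.g., recall@k over the retrieved chunks), keeping the two stages separately debuggable as the answer suggests.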
  • Asked at Snap

    Alex N. - "Perplexity is the measure of how surprised an LLM is when it predicts the next word. For example, given 'I love to eat', the LLM selecting 'fruits' as the next word is less surprising than selecting 'metal'. A lower perplexity score is better. Cross-entropy is the measure of how well the model matches the true labels. So if the next word is 'cat' and the LLM assigns it probability 0.5, the cross-entropy value is -log(0.5) = 0.69, and if it assigns probability 0.9 to the word 'cat', the cross-entropy value is -log(0.9…"

    Machine Learning Engineer
    Artificial Intelligence
    +2 more
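The arithmetic in the answer above checks out and extends naturally to perplexity, which is the exponential of the average per-token cross-entropy. A minimal sketch (function names are ours):

```python
import math

def cross_entropy(p_true_token: float) -> float:
    """Cross-entropy contribution of one token: -log(probability of the true token)."""
    return -math.log(p_true_token)

def perplexity(token_probs: list[float]) -> float:
    """Perplexity = exp(average per-token cross-entropy); lower is better."""
    avg_ce = sum(cross_entropy(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_ce)

# The answer's examples: P("cat") = 0.5 -> ~0.69; P("cat") = 0.9 -> ~0.11
print(round(cross_entropy(0.5), 2))  # 0.69
print(round(cross_entropy(0.9), 2))  # 0.11
print(perplexity([0.5, 0.5]))        # 2.0 -- the model is "choosing between 2 words"
```

The last line shows the intuition: a model that always assigns probability 0.5 to the true token behaves as if it were guessing uniformly between two equally likely words.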
  • Asked at Anthropic

    Tian H. - "To add: a health assistant! Remind me to do physical checks; aggregate my health info, including sleep and physical check results. …"

    Product Manager
    Artificial Intelligence
    +1 more
  • Sumit P. - "CQs: The content moderation system finds inappropriate content (profanity, violence, privacy-concerning material), misinformation (false info, false claims, fomenting wrong views), and PII. Misinformation → wrong info, twisted info, incomplete info. Goal: reliability and trust on the platform; long term, increased engagement with informational content. RAG system - what and why? A RAG system has 3 components: brain (reasoning models), tool…"

    Product Manager
    Artificial Intelligence
    +1 more
  • Asked at Anthropic

    Anonymous Partridge - "We will have to use a second, more powerful LLM to validate the answers: LLM as a judge."

    Machine Learning Engineer
    Artificial Intelligence
    +4 more
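The LLM-as-a-judge idea above reduces to building a grading prompt for the stronger model and parsing its verdict. The prompt wording and the PASS/FAIL rubric here are assumptions for illustration, not any particular vendor's API:

```python
def build_judge_prompt(question: str, answer: str) -> str:
    """Assumed grading-prompt shape sent to the stronger 'judge' model."""
    return (
        "You are grading an answer for factual correctness.\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Reply with exactly PASS or FAIL, then a one-line justification."
    )

def parse_verdict(judge_reply: str) -> bool:
    """Treat the answer as validated only if the judge's reply begins with PASS."""
    return judge_reply.strip().upper().startswith("PASS")

prompt = build_judge_prompt("What is 2 + 2?", "4")
print(parse_verdict("PASS - the arithmetic is correct."))  # True
```

In practice the verdict would come from a call to the judge model; constraining the reply format as above keeps parsing trivial and verdicts auditable.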
  • Asked at Anthropic
    Machine Learning Engineer
    Artificial Intelligence
    +2 more
  • Asked at OpenAI

    Alex N. - "Adjusting the context window size in an LLM changes the trade-off between reasoning capability, accuracy, and computation cost. It influences how the attention mechanism allocates resources across the input. A longer context window lets you input more words and gives more context for generating the proper next token. However, LLMs have a 'lost in the middle' issue: they remember the beginning and end of the text but lose information located in the middle of a long input. Another problem is attention dilution."

    Machine Learning Engineer
    Artificial Intelligence
    +4 more
  • Asked at Anthropic
    Product Manager
    Artificial Intelligence
    +2 more
  • Asked at Sierra AI
    Product Manager
    Artificial Intelligence
    +1 more
  • Asked at Anthropic
    Software Engineer
    Artificial Intelligence
    +1 more
  • Asked at OpenAI
    Product Manager
    Artificial Intelligence
    +1 more
  • Asked at Anthropic
    Product Manager
    Artificial Intelligence
    +4 more
  • Asked at Anthropic

    Hardik saurabh G. - "Hallucinations are evaluated by measuring how often generated outputs contain information that is not supported by trusted sources. What hallucination means in context: intrinsic hallucination contradicts the provided context; extrinsic hallucination introduces unsupported facts; fabrication is a confidently incorrect answer."

    Product Manager
    Artificial Intelligence
    +3 more
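The "how often" measurement in the answer above reduces to a rate over graded outputs. A minimal sketch, assuming each output has already been labeled (by human raters or a judge model) with one of the answer's three hallucination types or "ok":

```python
# Taxonomy taken from the answer; the label strings themselves are our convention.
HALLUCINATION_TYPES = {"intrinsic", "extrinsic", "fabrication"}

def hallucination_rate(labels: list[str]) -> float:
    """Fraction of graded outputs flagged with any hallucination type."""
    flagged = sum(1 for label in labels if label in HALLUCINATION_TYPES)
    return flagged / len(labels)

print(hallucination_rate(["ok", "extrinsic", "ok", "fabrication"]))  # 0.5
```

Tracking the three types separately (rather than one pooled rate) is usually worth the extra bookkeeping, since intrinsic errors point at retrieval/context problems while fabrications point at the generator itself.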
  • Software Engineer
    Artificial Intelligence
    +1 more
  • Product Manager
    Artificial Intelligence
    +5 more
Showing 1-20 of 72