DeepVerified Results

DeepVerify provides a set of API endpoints that evaluate the factual accuracy, consistency, and reliability of information by checking it against an established knowledge base and assessing answer quality and source credibility.

Endpoints

  1. Fact Check /fact-check

This endpoint assesses the factual accuracy and completeness of the provided information based on established knowledge.

Parameters:

  • input_text (required): The text or information to be fact-checked.

Evaluations:

  • Consistency with Established Knowledge: Checks if the information aligns with verified knowledge sources.

  • Fabrication: Detects if any parts of the information appear fabricated or invented.

  • Omission & Incomplete Information: Identifies missing elements that may lead to incomplete information.

Response Structure:

{
  "consistency_score": 0.95,
  "fabrication": "No",
  "omission_incomplete_info": "Some details are missing.",
  "fact_blocks": ["FactBlock_001", "FactBlock_002"],
  "explanation_paths": [
    {"path": ["FactBlock_001", "FactBlock_003"], "description": "Knowledge consistency path"}
  ]
}
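
Example Request (Python):

As a minimal sketch, a fact-check call could look like the following; the base URL, HTTPS scheme, and bearer-token authentication are assumptions, so substitute your actual DeepVerify host and credentials.

import requests

# Assumed base URL and API key; replace with your actual DeepVerify host and credentials.
BASE_URL = "https://api.deepverify.example.com"
API_KEY = "YOUR_API_KEY"

response = requests.post(
    f"{BASE_URL}/fact-check",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"input_text": "The Eiffel Tower was completed in 1889."},
)
result = response.json()

print(result["consistency_score"])   # e.g. 0.95
print(result["fabrication"])         # "Yes" or "No"
print(result["fact_blocks"])         # FactBlocks supporting the evaluation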

  2. Answer Check /answer-check

This endpoint evaluates the quality and accuracy of answers or responses based on logical inferences, context, and answer detail.

Parameters:

  • input_text (required): The answer or response to evaluate.

Evaluations:

  • False Inferences: Detects incorrect logical reasoning.

  • Parroting or Reiteration: Identifies cases where the answer merely repeats information.

  • Context Consistency: Evaluates if the answer is contextually consistent.

  • Misinterpretation of Question: Checks if the answer misinterprets the question.

  • Bias Detection: Identifies potential biases in the answer.

  • Vague or Broad Answers: Detects if the answer is too vague or overly generalized.

  • Exaggeration or Distortion: Detects exaggerated or distorted claims.

  • Overgeneralization or Simplification: Detects oversimplification or generalizations.

Response Structure:

{
  "false_inference": "Yes",
  "parroting": "No",
  "context_consistency": 0.87,
  "misinterpretation": "No",
  "bias_detection": "Detected",
  "vague_broad_answer": "Yes",
  "exaggeration": "No",
  "simplification": "Yes",
  "fact_blocks": ["FactBlock_005", "FactBlock_006"],
  "explanation_paths": [
    {"path": ["FactBlock_005", "FactBlock_006"], "description": "Reasoning path"}
  ]
}
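
Example Request (Python):

A sketch of an answer-check call, under the same assumptions about host and authentication as in the fact-check example above.

import requests

BASE_URL = "https://api.deepverify.example.com"  # assumed host
API_KEY = "YOUR_API_KEY"

result = requests.post(
    f"{BASE_URL}/answer-check",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"input_text": "All birds can fly, so penguins must be able to fly."},
).json()

# Collect the binary evaluations that were flagged.
flags = {key: value for key, value in result.items() if value in ("Yes", "Detected")}
print(flags)                          # e.g. {"false_inference": "Yes", ...}
print(result["context_consistency"])  # e.g. 0.87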


  3. Reference Check /reference-check

This endpoint evaluates the credibility and completeness of sources or citations.

Parameters:

  • input_text (required): The references or citations to be evaluated.

Evaluations:

  • Source Reliability: Calculates a reliability score based on the scores stored in the cited FactBlocks.

  • Negation or Incomplete Information: Detects any unsupported negations or incomplete assertions.

  • Unverifiable Citations: Verifies whether AI-cited sources are present, accessible, and authentic.

Response Structure:

{
  "source_reliability": 0.92,
  "negation_incomplete_info": "No",
  "unverifiable_citations": "Yes",
  "fact_blocks": ["FactBlock_007", "FactBlock_008"],
  "explanation_paths": [
    {"path": ["FactBlock_007", "FactBlock_009"], "description": "Reliability assessment path"}
  ]
}
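
Example Request (Python):

A sketch of a reference-check call, again under the assumed host and authentication; the source_reliability value in the response is the score aggregated from the cited FactBlocks as described above.

import requests

BASE_URL = "https://api.deepverify.example.com"  # assumed host
API_KEY = "YOUR_API_KEY"

result = requests.post(
    f"{BASE_URL}/reference-check",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"input_text": "Smith et al. (2021), Journal of Examples, vol. 12."},
).json()

if result["unverifiable_citations"] == "Yes":
    print("One or more cited sources could not be verified.")
print(result["source_reliability"])  # e.g. 0.92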


Result Types

The results returned by each endpoint may include the following types:

  • Score: A numerical score (e.g., 0.95) indicating the evaluated accuracy or reliability.

  • Binary Value (Yes/No): A binary indicator for simple evaluations.

  • Text: Explanation or detailed notes based on the evaluation.

  • List of FactBlocks: The FactBlocks that served as the basis for the evaluation.

  • List of FactBlocks with Paths: A list of FactBlocks, along with paths showing how they are connected to improve explainability.
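
Because each response mixes these result types, it can be handled generically. The helper below is an illustrative sketch (not part of the API) that groups the fields of a DeepVerify response by result type, following the field names shown in the examples above.

def summarize_result(result: dict) -> dict:
    """Group the fields of a DeepVerify response by result type."""
    summary = {"scores": {}, "binary": {}, "text": {}, "fact_blocks": [], "paths": []}
    for key, value in result.items():
        if key == "fact_blocks":
            summary["fact_blocks"] = value          # list of FactBlock identifiers
        elif key == "explanation_paths":
            summary["paths"] = value                # FactBlocks with connecting paths
        elif isinstance(value, (int, float)):
            summary["scores"][key] = value          # numerical scores, e.g. 0.95
        elif value in ("Yes", "No", "Detected"):
            summary["binary"][key] = value          # binary indicators
        else:
            summary["text"][key] = value            # free-text explanations
    return summary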
