variables: 953970
Data license: CC-BY
Field | Value
---|---
id | 953970
name | Top-1 accuracy
unit | %
description |
createdAt | 2024-07-25 11:21:52
updatedAt | 2024-07-25 12:17:07
code |
coverage |
timespan |
datasetId | 6629
sourceId |
shortUnit | %
display | { "unit": "%", "zeroDay": "2019-01-01", "shortUnit": "%", "yearIsDay": true, "numDecimalPlaces": 1 }
columnOrder | 0
originalMetadata |
grapherConfigAdmin |
shortName | top_1_accuracy
catalogPath | grapher/artificial_intelligence/2024-07-23/papers_with_code_imagenet/papers_with_code_imagenet#top_1_accuracy
dimensions |
schemaVersion | 2
processingLevel | major
processingLog |
titlePublic |
titleVariant |
attributionShort |
attribution |
descriptionShort |
descriptionFromProducer |
descriptionKey | [ "The top-1 accuracy measure is used to assess how frequently a model's absolute top prediction matches the correct answer from a given set of options.", "Here's an example to illustrate what this benchmark tests: Imagine an image classification model that is presented with an image of an animal. The model assigns probabilities to each potential label and generates its highest-confidence prediction. For instance, when analyzing an image, the model might predict 'Cat' as the most probable label. To evaluate the model's accuracy using the top-1 measure, researchers compare this prediction with the correct label. If the model's top prediction matches the correct label (e.g., if the actual animal in the image is indeed a cat), then the model's prediction is considered correct according to the top-1 accuracy metric. On the other hand, if the model's top prediction does not match the correct label (e.g., if the image shows a dog, but the model predicts a cat), then the model's prediction is considered incorrect based on the top-1 accuracy measure. To calculate the top-1 accuracy, researchers analyze the model's performance on a large dataset where the correct labels are known. They determine the percentage of examples in the dataset where the model's highest-confidence prediction matches the actual label.", "This measure provides a focused evaluation of the model's ability to make accurate predictions by considering only its absolute top guess." ]
descriptionProcessing |
licenses |
license |
grapherConfigETL |
type | float
sort | []
dataChecksum | f7fb48fbe151b4d27b926c8079c54471
metadataChecksum | d6770393ef1449eb38f7880d50447a49
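The descriptionKey text above describes how top-1 accuracy is computed: the percentage of examples whose highest-confidence prediction equals the true label. A minimal sketch of that calculation, assuming predictions are given as per-class score arrays (the function and variable names are illustrative, not part of this dataset):

```python
import numpy as np

def top_1_accuracy(scores: np.ndarray, labels: np.ndarray) -> float:
    """Share of examples whose highest-scoring class equals the true label.

    scores: shape (n_examples, n_classes), per-class scores or probabilities.
    labels: shape (n_examples,), integer class indices.
    Returns the accuracy as a percentage, matching this variable's unit (%).
    """
    top_predictions = scores.argmax(axis=1)       # index of the top guess per example
    correct = (top_predictions == labels).sum()   # count of exact matches
    return 100.0 * correct / len(labels)

# Example: 3 images over 2 classes ("dog"=0, "cat"=1); two top guesses are correct.
scores = np.array([[0.2, 0.8],   # predicts "cat", label is "cat" -> correct
                   [0.9, 0.1],   # predicts "dog", label is "dog" -> correct
                   [0.4, 0.6]])  # predicts "cat", label is "dog" -> incorrect
labels = np.array([1, 0, 0])
print(round(top_1_accuracy(scores, labels), 1))  # 66.7 (one decimal, per numDecimalPlaces)
```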
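The display config sets "yearIsDay": true with "zeroDay": "2019-01-01", which in OWID's grapher convention means the variable's time values are day offsets from that zero day rather than calendar years. A small sketch of decoding such an offset back to a date, assuming that convention (this helper is illustrative, not part of the grapher codebase):

```python
from datetime import date, timedelta

def day_offset_to_date(offset: int, zero_day: str = "2019-01-01") -> date:
    """Convert a day offset (the variable's 'year' value) to a calendar date,
    assuming yearIsDay=true stores days elapsed since zeroDay."""
    return date.fromisoformat(zero_day) + timedelta(days=offset)

print(day_offset_to_date(0))     # 2019-01-01
print(day_offset_to_date(2000))  # 2024-06-23
```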