sources
Data license: CC-BY
2 rows where datasetId = 6102 sorted by id descending
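This view corresponds to a filtered, sorted query against the sources table. A minimal SQL sketch of the equivalent query, assuming the SQLite schema shown at the bottom of this page:

SELECT *
FROM sources
WHERE datasetId = 6102
ORDER BY id DESC;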
id | name | description | createdAt | updatedAt | datasetId | additionalInfo | link | dataPublishedBy
---|---|---|---|---|---|---|---|---
29574 | Papers With Code (2023) | { "link": "https://paperswithcode.com/", "retrievedDate": "2023-06-14", "additionalInfo": "\nThe goal of Papers With Code website is to compile a comprehensive collection of ML papers, code implementations, datasets, methods, and evaluation tables, all made freely available.\n\nThe comparisons to human performance are very approximate and based on small samples of people \u2014 they are only meant to give a rough comparison. You can read more details in the papers that describe the benchmarks:\n\n-Hendrycks et al (2021) Measuring Massive Multitask Language Understanding (MMLU) (page 3): https://arxiv.org/pdf/2009.03300.pdf\n\n-Hendrycks et al (2021) Measuring Mathematical Problem Solving With the MATH Dataset (page 5): https://arxiv.org/pdf/2103.03874v2.pdf\n", "dataPublishedBy": "Papers With Code" } | 2023-12-14 12:06:37 | 2024-07-08 15:20:31 | Performance on Coding, Math, Language, Image Classification and Atari tasks (Papers With Code, 2023) 6102 | The goal of Papers With Code website is to compile a comprehensive collection of ML papers, code implementations, datasets, methods, and evaluation tables, all made freely available. The comparisons to human performance are very approximate and based on small samples of people — they are only meant to give a rough comparison. You can read more details in the papers that describe the benchmarks: -Hendrycks et al (2021) Measuring Massive Multitask Language Understanding (MMLU) (page 3): https://arxiv.org/pdf/2009.03300.pdf -Hendrycks et al (2021) Measuring Mathematical Problem Solving With the MATH Dataset (page 5): https://arxiv.org/pdf/2103.03874v2.pdf | https://paperswithcode.com/ | Papers With Code
28105 | Papers With Code (2023) | { "link": "https://paperswithcode.com/", "retrievedDate": "2023-06-14", "additionalInfo": "The goal of Papers With Code website is to compile a comprehensive collection of ML papers, code implementations, datasets, methods, and evaluation tables, all made freely available.\n\nThe comparisons to human performance are very approximate and based on small samples of people \u2014 they are only meant to give a rough comparison. You can read more details in the papers that describe the benchmarks:\n\n-Hendrycks et al (2021) Measuring Massive Multitask Language Understanding (MMLU) (page 3): https://arxiv.org/pdf/2009.03300.pdf\n\n-Hendrycks et al (2021) Measuring Mathematical Problem Solving With the MATH Dataset (page 5): https://arxiv.org/pdf/2103.03874v2.pdf\n", "dataPublishedBy": "Papers With Code" } | 2023-07-25 15:52:05 | 2024-07-08 15:20:30 | Performance on Coding, Math, Language, Image Classification and Atari tasks (Papers With Code, 2023) 6102 | The goal of Papers With Code website is to compile a comprehensive collection of ML papers, code implementations, datasets, methods, and evaluation tables, all made freely available. The comparisons to human performance are very approximate and based on small samples of people — they are only meant to give a rough comparison. You can read more details in the papers that describe the benchmarks: -Hendrycks et al (2021) Measuring Massive Multitask Language Understanding (MMLU) (page 3): https://arxiv.org/pdf/2009.03300.pdf -Hendrycks et al (2021) Measuring Mathematical Problem Solving With the MATH Dataset (page 5): https://arxiv.org/pdf/2103.03874v2.pdf | https://paperswithcode.com/ | Papers With Code
CREATE TABLE "sources" (
  "id" INTEGER PRIMARY KEY AUTOINCREMENT,
  "name" VARCHAR(512) NULL,
  "description" TEXT NOT NULL,
  "createdAt" DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
  "updatedAt" DATETIME NULL,
  "datasetId" INTEGER NULL,
  additionalInfo TEXT GENERATED ALWAYS AS (JSON_EXTRACT(description, '$.additionalInfo')) VIRTUAL,
  link TEXT GENERATED ALWAYS AS (JSON_EXTRACT(description, '$.link')) VIRTUAL,
  dataPublishedBy TEXT GENERATED ALWAYS AS (JSON_EXTRACT(description, '$.dataPublishedBy')) VIRTUAL,
  FOREIGN KEY("datasetId") REFERENCES "datasets" ("id") ON UPDATE RESTRICT ON DELETE RESTRICT
);
CREATE INDEX "sources_datasetId" ON "sources" ("datasetId");
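The additionalInfo, link, and dataPublishedBy columns are virtual generated columns, populated via JSON_EXTRACT from the JSON stored in the description column. A minimal sketch of pulling other keys out of the same JSON ad hoc (retrievedDate appears in the stored JSON but has no generated column; table and column names as in the schema above):

SELECT
  id,
  name,
  JSON_EXTRACT(description, '$.retrievedDate') AS retrievedDate,
  JSON_EXTRACT(description, '$.link') AS link
FROM sources
WHERE datasetId = 6102
ORDER BY id DESC;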