{"id":6963,"date":"2021-06-21T17:41:39","date_gmt":"2021-06-21T17:41:39","guid":{"rendered":"http:\/\/invbat.com\/blog2\/?p=6963"},"modified":"2021-06-21T17:41:39","modified_gmt":"2021-06-21T17:41:39","slug":"deberta-decoding-enhanced-bert-with-disentangled-attention-is-a-transformer-based-neural-language-model-pretrained-on-large-amounts-of-raw-text-corpora-using-self-supervised-learning","status":"publish","type":"post","link":"https:\/\/invbat.com\/blog2\/index.php\/2021\/06\/21\/deberta-decoding-enhanced-bert-with-disentangled-attention-is-a-transformer-based-neural-language-model-pretrained-on-large-amounts-of-raw-text-corpora-using-self-supervised-learning\/","title":{"rendered":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/microsoft-deberta-surpasses-human-performance-on-the-superglue-benchmark\/\"><font color=\"blue\">DeBERTa (Decoding-enhanced BERT with disentangled attention) is a Transformer-based neural language model pretrained on large amounts of raw text corpora using self-supervised learning.
<\/font><\/a>"},"content":{"rendered":"\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/microsoft-deberta-surpasses-human-performance-on-the-superglue-benchmark\/\">DeBERTa (Decoding-enhanced BERT with disentangled attention) is a Transformer-based neural language model pretrained on large amounts of raw text corpora using self-supervised learning.<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>DeBERTa (Decoding-enhanced BERT with disentangled attention) is a Transformer-based neural language model pretrained on large amounts of raw text corpora using self-supervised 
learning.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[395],"tags":[],"class_list":["post-6963","post","type-post","status-publish","format-standard","hentry","category-decoding-enhanced-bert-deberta"],"_links":{"self":[{"href":"https:\/\/invbat.com\/blog2\/index.php\/wp-json\/wp\/v2\/posts\/6963","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/invbat.com\/blog2\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/invbat.com\/blog2\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/invbat.com\/blog2\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/invbat.com\/blog2\/index.php\/wp-json\/wp\/v2\/comments?post=6963"}],"version-history":[{"count":1,"href":"https:\/\/invbat.com\/blog2\/index.php\/wp-json\/wp\/v2\/posts\/6963\/revisions"}],"predecessor-version":[{"id":6964,"href":"https:\/\/invbat.com\/blog2\/index.php\/wp-json\/wp\/v2\/posts\/6963\/revisions\/6964"}],"wp:attachment":[{"href":"https:\/\/invbat.com\/blog2\/index.php\/wp-json\/wp\/v2\/media?parent=6963"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/invbat.com\/blog2\/index.php\/wp-json\/wp\/v2\/categories?post=6963"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/invbat.com\/blog2\/index.php\/wp-json\/wp\/v2\/tags?post=6963"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}