Select Git revision

Branches:
  • fix-scn-tests
  • fix-tests
  • hypnopump-patch-1
  • main (default)
  • pw/add-scn-cloud-mask
  • pw/cleanup-utils
  • pw/fix-tests
  • pw/fix-tied-row-msa

Tags:
  • v0.4.32
  • 0.4.31
  • 0.4.30
  • 0.4.29
  • 0.4.27
  • 0.4.26
  • 0.4.25
  • 0.4.24
  • 0.4.23
  • 0.4.22
  • 0.4.21
  • 0.4.20
  • 0.4.19
  • 0.4.18
  • 0.4.17
  • 0.4.16
  • 0.4.15
  • 0.4.14
  • 0.4.13
  • 0.4.12
Commit history (reconstructed from the repository graph; newest first, tags and branch pointers in brackets):

  • unnecessary, pytorch native softmax is numerically stable [v0.4.32, main, upstream/main]
  • readme
  • zero init all branch output linear layers [0.4.31]
  • do not detach rotations on last iteration [0.4.30]
  • Make sure random_tokens is on correct device (#96)
  • properly mask for structure module [0.4.29]
  • make sure outermean can be masked for padding in msa rows, and also fix a bug with mask [0.4.26]
  • link to arxiv insights
  • fix quaternion update [0.4.25]
  • use stable softmax in attention [0.4.24]
  • release checkpointing for evoformer [0.4.23]
  • do checkpointing for the entire evoformer
  • bug [0.4.22]
  • remove sparse attention
  • cleanup axial attention, since we are no longer mixing row and column attention in one block
  • add stop gradient to the rotations, as done for stability, as in the paper and open sourced code [0.4.21]
  • add rel pos emb [0.4.20]
  • add recycling logic and some more cleanup [0.4.19]
  • keep fixing [0.4.18]
  • fix residual for attn [0.4.17]
  • fixes [0.4.16]
  • cleanup
  • move SSL stuff to own file
  • iterate on MLM [0.4.15]
  • fit in masked-language-modeling loss for MSAs, not yet accurate to paper [0.4.14]
  • fix gating and make axial naming less confusing [0.4.13]
  • [0.4.12]
  • bug in outer mean [0.4.11]
  • make sure extra MSAs use right type of attention, where queries are pooled [0.4.10]
  • fix tri-mult
  • take care of templates angles features [0.4.9]
  • add template pointwise attention for pooling [0.4.7]
  • add evoformer for extra msa [0.4.6]
  • cleanup
  • parameterized outer sum for pairwise rep, do pairwise attention layers for templates, use relative positional embeddings summed to pairwise rep as in paper [0.4.5]
  • triangle attention has attention biased by original pairwise representation [0.4.4]
  • more cleanup [0.4.3]

Sketches of a few techniques referenced in this history (stable softmax, rotation stop-gradient, masked outer mean, evoformer checkpointing, relative positional embeddings) follow below.
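The v0.4.32 commit ("unnecessary, pytorch native softmax is numerically stable") reverts the hand-rolled stable softmax introduced in 0.4.24. The trick such a wrapper implements is max-subtraction; a minimal sketch follows, where the function name `stable_softmax` is illustrative rather than the repository's actual code:

```python
import torch
import torch.nn.functional as F

def stable_softmax(logits: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # Softmax is shift-invariant, so subtracting the row max changes nothing
    # mathematically but keeps exp() from overflowing on large logits.
    logits = logits - logits.amax(dim=dim, keepdim=True).detach()
    return logits.softmax(dim=dim)

# PyTorch's native softmax applies the same max-subtraction internally,
# which is why the explicit wrapper became unnecessary.
x = torch.tensor([[1000.0, 1001.0, 1002.0]])
assert torch.allclose(stable_softmax(x), F.softmax(x, dim=-1))
```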
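Two commits govern gradient flow through the structure module's rotations: 0.4.21 adds a stop-gradient between refinement iterations (for stability, as in the paper and the open-sourced code), and 0.4.30 stops detaching on the final iteration. A sketch of that pattern, with hypothetical names (`refine_frames` and `update_fn` are placeholders, not the repository's API):

```python
import torch

def refine_frames(rotations, translations, update_fn, num_iters=8):
    # Iteratively refine rigid-body frames. `update_fn` stands in for one
    # structure-module iteration and is a hypothetical placeholder.
    for i in range(num_iters):
        rotations, translations = update_fn(rotations, translations)
        if i < num_iters - 1:
            # Stop-gradient on the rotations between iterations for training
            # stability (0.4.21). Per 0.4.30, the final iteration's rotations
            # keep their gradients.
            rotations = rotations.detach()
    return rotations, translations
```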
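Commit 0.4.26 ensures the outer mean (the operation that turns the MSA representation into a pairwise update) can be masked for padded MSA rows. A sketch of a padding-aware outer product mean; the shapes and names here are assumptions, not the repository's exact code, and the real architecture applies two separate linear projections before the outer product, omitted here for brevity:

```python
import torch

def masked_outer_mean(msa: torch.Tensor, mask: torch.Tensor, eps: float = 1e-5):
    # msa:  (batch, rows, length, dim)  MSA representation
    # mask: (batch, rows, length)       1 for real residues, 0 for padding
    mask = mask.float()
    msa = msa * mask.unsqueeze(-1)  # zero out padded entries
    # Outer product across MSA rows for every pair of positions (i, j).
    outer = torch.einsum('brid,brje->bijde', msa, msa)
    # Normalize by the number of rows where both i and j are unmasked,
    # so padded rows do not dilute the mean.
    norm = torch.einsum('bri,brj->bij', mask, mask)
    return outer / (norm[..., None, None] + eps)
```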
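Commits 0.4.22 and 0.4.23 checkpoint the entire evoformer, so activations are recomputed during the backward pass instead of being stored. A sketch using `torch.utils.checkpoint`; the block structure and the `(msa, pairwise)` signature are assumptions about the shape of the code, not the repository's actual classes:

```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

class CheckpointedEvoformer(nn.Module):
    # Wraps a stack of evoformer-style blocks, each mapping
    # (msa, pairwise) -> (msa, pairwise). Hypothetical structure.
    def __init__(self, blocks):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)

    def forward(self, msa, pairwise):
        for block in self.blocks:
            # Discard intermediate activations and recompute them during
            # backward, trading extra compute for much lower peak memory.
            msa, pairwise = checkpoint(block, msa, pairwise, use_reentrant=False)
        return msa, pairwise
```

The `use_reentrant=False` argument requires PyTorch 1.11+; drop it on older versions.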
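Commits 0.4.5 and 0.4.20 add relative positional embeddings summed into the pairwise representation, as in the paper: each residue pair (i, j) receives a learned embedding of the clipped offset j - i. A sketch; the module name and clipping distance are illustrative assumptions:

```python
import torch
from torch import nn

class RelPosEmb(nn.Module):
    def __init__(self, dim: int, max_rel_dist: int = 32):
        super().__init__()
        # One learned embedding per clipped relative offset
        # in [-max_rel_dist, max_rel_dist].
        self.emb = nn.Embedding(2 * max_rel_dist + 1, dim)
        self.max_rel_dist = max_rel_dist

    def forward(self, seq_len: int) -> torch.Tensor:
        pos = torch.arange(seq_len)
        rel = pos[None, :] - pos[:, None]  # offsets j - i
        rel = rel.clamp(-self.max_rel_dist, self.max_rel_dist) + self.max_rel_dist
        return self.emb(rel)  # (seq_len, seq_len, dim)

# Summed into the pairwise representation before the evoformer blocks.
pairwise = torch.zeros(1, 64, 64, 128)
pairwise = pairwise + RelPosEmb(dim=128)(64)
```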