Paper Notes - Video Generation

  • GenHFi: Generating High Fidelity Images with Subscale Pixel Networks and Multidimensional Upscaling (ICLR 2019)
  • Scaling Autoregressive Video Models (ICLR 2020)
  • Video Pixel Networks (CoRR 2016)
  • Parallel: Parallel Multiscale Autoregressive Density Estimation
  • VQGAN: Taming Transformers for High-Resolution Image Synthesis
  • TeCoGAN: Learning Temporal Coherence via Self-Supervision for GAN-based Video Generation
  • ImaGINator: Conditional Spatio-Temporal GAN for Video Generation
  • Temporal Shift GAN for Large Scale Video Generation
  • MoCoGAN: Decomposing Motion and Content for Video Generation
  • Playable Video Generation (CVPR2021)
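
A common thread in the GAN-based entries above (MoCoGAN most explicitly) is decomposing a video into a content code that stays fixed across frames and a motion code that evolves over time. Below is a minimal PyTorch sketch of that decomposition, with a toy MLP standing in for the per-frame image generator; all module names and sizes are illustrative assumptions, not any paper's actual architecture.

```python
# Minimal sketch of the content/motion decomposition used by MoCoGAN-style
# video generators: one content code per clip, one motion code per frame
# produced by an RNN over noise. Sizes and names are illustrative only.
import torch
import torch.nn as nn

class DecomposedVideoGenerator(nn.Module):
    def __init__(self, content_dim=64, motion_dim=16, num_frames=16,
                 frame_pixels=3 * 64 * 64):
        super().__init__()
        self.content_dim = content_dim
        self.motion_dim = motion_dim
        self.num_frames = num_frames
        # RNN turns per-frame noise into a temporally correlated motion code
        self.motion_rnn = nn.GRU(motion_dim, motion_dim, batch_first=True)
        # toy decoder standing in for the per-frame image generator
        self.frame_decoder = nn.Sequential(
            nn.Linear(content_dim + motion_dim, 512), nn.ReLU(),
            nn.Linear(512, frame_pixels), nn.Tanh(),
        )

    def forward(self, batch_size):
        content = torch.randn(batch_size, self.content_dim)                # shared across frames
        noise = torch.randn(batch_size, self.num_frames, self.motion_dim)  # per-frame noise
        motion, _ = self.motion_rnn(noise)                                 # (B, T, motion_dim)
        content = content.unsqueeze(1).expand(-1, self.num_frames, -1)     # (B, T, content_dim)
        frames = self.frame_decoder(torch.cat([content, motion], dim=-1))  # (B, T, frame_pixels)
        return frames

videos = DecomposedVideoGenerator()(batch_size=2)  # -> torch.Size([2, 16, 12288])
```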

Paper Notes - Discrete Latent Variable-Based Generation

  • VQ-VAE: Neural Discrete Representation Learning (NIPS 2017)
  • VQ-VAE2: Generating Diverse High-Resolution Images with VQ-VAE-2
  • DALL-E: Zero-Shot Text-to-Image Generation
  • VideoGPT: Video Generation using VQ-VAE and Transformers
  • LVT: Latent Video Transformer
  • Feature Quantization Improves GAN Training (ICML 2020)
  • DVT-NAT: Fast Decoding in Sequence Models Using Discrete Latent Variables (ICML 2018)
  • NWT: Towards Natural Audio-to-Video Generation with Representation Learning
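
Most papers in this list share the VQ-VAE bottleneck: the continuous encoder output is snapped to the nearest entry of a learned codebook, and a second-stage model (the transformers in DALL-E, VideoGPT, LVT, and VQGAN) is trained over the resulting discrete codes. Here is a minimal PyTorch sketch of that quantization step, assuming the standard codebook/commitment losses and straight-through gradient from the VQ-VAE paper; the class and argument names are my own.

```python
# Minimal sketch of the vector-quantization bottleneck shared by VQ-VAE-style
# models: nearest-codebook lookup, codebook + commitment losses, and a
# straight-through gradient estimator. Names and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=512, code_dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # commitment-loss weight

    def forward(self, z_e):
        # z_e: (..., code_dim) continuous encoder output
        flat = z_e.reshape(-1, z_e.shape[-1])                      # (N, D)
        # squared L2 distance from each latent to every codebook vector
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))              # (N, K)
        indices = dist.argmin(dim=1)                               # discrete codes
        z_q = self.codebook(indices).view_as(z_e)                  # quantized latents
        # codebook loss + commitment loss, as in the VQ-VAE paper
        loss = (F.mse_loss(z_q, z_e.detach())
                + self.beta * F.mse_loss(z_e, z_q.detach()))
        # straight-through estimator: copy gradients from z_q back to z_e
        z_q = z_e + (z_q - z_e).detach()
        return z_q, indices.view(z_e.shape[:-1]), loss
```

In the two-stage models above, the returned `indices` are the discrete sequence that the autoregressive prior is then trained to model.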

Paper Notes - Contrastive Learning in NLP

Paper list:

  • An Efficient Framework for Learning Sentence Representations
  • CLEAR: Contrastive Learning for Sentence Representation
  • DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations (arXiv)
  • SimCSE: Simple Contrastive Learning of Sentence Embeddings
  • R-Drop: Regularized Dropout for Neural Networks
  • COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
  • Learning Dense Representations of Phrases at Scale
  • An Unsupervised Sentence Embedding Method by Mutual Information Maximization
  • Representation Degeneration Problem in Training Natural Language Generation Models
  • CosReg: Improving Neural Language Generation with Spectrum Control
  • CLAPS: Contrastive Learning with Adversarial Perturbations for Conditional Text Generation
  • CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding
  • Interpretable Adversarial Perturbation in Input Embedding Space for Text
  • Contrastive Learning for Many-to-many Multilingual Neural Machine Translation
  • Unsupervised Data Augmentation for Consistency Training
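
Many of the sentence-representation papers above (SimCSE most directly, CLEAR and DeCLUTR with heavier augmentations) reduce to an InfoNCE loss over two views of the same sentence with in-batch negatives. A minimal sketch of that loss, assuming `encoder` is any module that maps token ids to sentence embeddings with dropout left active during training:

```python
# Minimal sketch of a SimCSE-style unsupervised contrastive objective:
# the same batch is encoded twice, dropout noise makes the two views differ,
# and InfoNCE pulls each pair together against in-batch negatives.
import torch
import torch.nn.functional as F

def simcse_loss(encoder, input_ids, attention_mask, temperature=0.05):
    # two forward passes with dropout active give two different "views"
    z1 = encoder(input_ids, attention_mask)   # (batch, dim)
    z2 = encoder(input_ids, attention_mask)   # (batch, dim)
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    # cosine-similarity matrix: diagonal entries are the positive pairs
    sim = z1 @ z2.t() / temperature           # (batch, batch)
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)
```

The other entries mainly swap in different ways of constructing the views (adversarial perturbations in CLAPS and CLINE, span sampling in DeCLUTR) while keeping a similar contrastive loss shape.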