- Draw deep learning network architecture diagrams
- Torch stuff
- ResNet understanding
- BCEWithLogitsLoss
- more stable than vanilla BCELoss (e.g., for the reconstruction loss in an AE/VAE)
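A minimal sketch of why the fused loss is the safer choice (the extreme logit values are just illustrative):

```python
import torch
import torch.nn as nn

# Extreme logits with the "wrong" targets: the fused loss stays finite
# because it applies the log-sum-exp trick to the sigmoid internally.
logits = torch.tensor([[100.0, -100.0]])
targets = torch.tensor([[0.0, 1.0]])

stable = nn.BCEWithLogitsLoss()(logits, targets)

# The naive two-step version saturates first: sigmoid(100) rounds to
# exactly 1.0 in float32, so log(1 - p) would be -inf (PyTorch clamps
# it) and gradients through the saturated sigmoid vanish.
probs = torch.sigmoid(logits)
naive = nn.BCELoss()(probs, targets)

print(stable.item())  # ~100.0: large but finite and well-behaved
```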
- Picking the right-optimizer-and-learning-rate
- Nice demonstration of different optimizers
- torch.optim.RMSprop
- lr_scheduler
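A small sketch tying the optimizer and scheduler links together: RMSprop plus a StepLR schedule (the toy linear model, learning rate, and schedule constants are just placeholders):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
# RMSprop divides each update by a running RMS of recent gradients;
# alpha is the smoothing constant of that running average.
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.01, alpha=0.99)
# StepLR multiplies the learning rate by gamma every step_size epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

for epoch in range(20):
    optimizer.zero_grad()
    loss = model(torch.randn(4, 10)).pow(2).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()  # once per epoch, after optimizer.step()

print(optimizer.param_groups[0]["lr"])  # 0.01 -> 0.005 -> 0.0025
```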
- pytorch-set-gpu-usage
- DataLoader shuffles data in every epoch
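A quick sketch of both of the above: pinning to a GPU when one is available, and observing that `shuffle=True` draws a fresh permutation every epoch (the tiny dataset is illustrative):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Pick a specific GPU if available, otherwise fall back to CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
x = torch.ones(2, 2).to(device)  # tensors follow the chosen device

dataset = TensorDataset(torch.arange(8).float())
# With shuffle=True the DataLoader draws a new permutation at the
# start of every epoch, so batch order differs between epochs.
loader = DataLoader(dataset, batch_size=4, shuffle=True)

torch.manual_seed(0)
epoch1 = [batch[0].tolist() for batch in loader]
epoch2 = [batch[0].tolist() for batch in loader]
print(epoch1, epoch2)  # same items, usually in different order
```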
- Multi-label-classification-in-pytorch
- MSEloss-producing-nan-on-the-second-or-third-input-each-time
- Normalization script
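A minimal normalization sketch, relevant to both links above: unnormalized, wide-range inputs are a common cause of MSELoss exploding to NaN after a few steps (the example values and `eps` are assumptions):

```python
import torch

# Per-feature standardization: subtract the mean and divide by the
# standard deviation of each column; eps guards against zero std.
def normalize(x, eps=1e-8):
    mean = x.mean(dim=0, keepdim=True)
    std = x.std(dim=0, keepdim=True)
    return (x - mean) / (std + eps)

x = torch.tensor([[1.0, 100.0], [3.0, 300.0], [5.0, 500.0]])
z = normalize(x)
print(z.mean(dim=0), z.std(dim=0))  # ~0 mean, ~1 std per column
```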
- Append-for-nn-sequential
- concatenate-torch-tensor-along-given-dimension
- torch.unsqueeze
- Different-dimensions-tensor-concatenation
- Normalize-embedding-vectors
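The five tensor-manipulation links above can be condensed into one sketch (shapes are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

a = torch.randn(2, 3)
b = torch.randn(2, 3)

# torch.cat joins tensors along an existing dimension.
rows = torch.cat([a, b], dim=0)                               # shape (4, 3)

# unsqueeze inserts a size-1 dimension; that is the usual trick for
# concatenating tensors of "different dimensions" along a new axis.
stacked = torch.cat([a.unsqueeze(0), b.unsqueeze(0)], dim=0)  # shape (2, 2, 3)

# F.normalize rescales each embedding row to unit L2 norm.
unit = F.normalize(rows, p=2, dim=1)

# Appending a layer to an existing nn.Sequential via add_module.
model = nn.Sequential(nn.Linear(3, 2))
model.add_module("act", nn.ReLU())
print(rows.shape, stacked.shape, len(model))
```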
- Hand segmentation
- Deep Learning concepts
- Bayesian-neural-network-series-post-1-need-for-bayesian-networks
- Relational reasoning with deep learning
- Cutting edge techniques in deep learning @Lazebnik-spring17
- Deep Learning@Lazebnik'18
- Deep Learning meets Structured Prediction@AlexSchwing-ICCV15
- why-should-the-data-be-shuffled-for-machine-learning-tasks
- Shuffling-data-in-the-mini-batch-training-of-neural-network
- I implemented my own shuffler while training my Siamese network, so there was no need to rely on the DataLoader's shuffle=True/False flag.
- What's-the-difference-between-a-Variational-Autoencoder-VAE-and-an-Autoencoder
- sparse-autoencoders-using-l1-regularization-with-pytorch
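A minimal sketch of the L1-sparsity idea from the link above: an L1 penalty on the code activations pushes most of them toward zero (the layer sizes and penalty weight are assumptions to tune, not values from the linked post):

```python
import torch
import torch.nn as nn

# A tiny autoencoder; the L1 term on the code encourages sparse
# hidden activations on top of the usual reconstruction loss.
encoder = nn.Sequential(nn.Linear(8, 4), nn.ReLU())
decoder = nn.Linear(4, 8)

x = torch.randn(16, 8)
code = encoder(x)
recon = decoder(code)

l1_weight = 1e-3  # sparsity strength (hyper-parameter)
loss = nn.MSELoss()(recon, x) + l1_weight * code.abs().mean()
loss.backward()
print(loss.item())
```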
- Benefits-of-using-a-Siamese-vs-CNN-for-feature-embedding
- Benefits of learning embeddings@Yoshua Bengio
- "So you should use clustering if you really have no choice or if you really know that there are a few dominant classes (and no other way to make sense of the structure in the data, otherwise you get unstable solutions). Furthermore, you should do it in the right space where data have been mapped, e.g., a space in which the clustering assumptions work well (and you can train that transformation for that purpose) - and that appropriate space is usually not the raw input space. But like many good things, it can be learned."
- Machine Learning concepts
- Why momentum works?
- An-introduction-to-high-dimensional-hyper-parameter-tuning
- Introduction-to-bayesian-networks
- Normalizations-in-neural-networks
- Whitening
- Local Contrast Normalization
- Local Response Normalization
- Batch Normalization
- Other newer normalizations (updated 10/2020)
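For the Batch Normalization entry above, a minimal sketch of what it does in training mode (input scale and batch size are illustrative):

```python
import torch
import torch.nn as nn

# BatchNorm1d normalizes each feature over the batch, then applies a
# learned affine transform (gamma, beta; initialized to 1 and 0).
bn = nn.BatchNorm1d(3)
x = torch.randn(64, 3) * 5 + 10       # shifted, scaled input
y = bn(x)                             # training mode: batch statistics
print(y.mean(dim=0), y.std(dim=0))    # ~0 mean, ~1 std per feature
```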
- Why does a VAE perform badly with single-input training data?
- Cross Entropy Loss and its variants: detailed explanation
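One relationship worth remembering from the cross-entropy link: `nn.CrossEntropyLoss` is exactly log-softmax followed by NLL, so it expects raw logits, not probabilities. A quick sketch (the random shapes are illustrative):

```python
import torch
import torch.nn as nn

logits = torch.randn(5, 3)
targets = torch.randint(0, 3, (5,))

# Fused variant: log-softmax + NLL in one call, applied to logits.
ce = nn.CrossEntropyLoss()(logits, targets)

# Equivalent two-step variant.
nll = nn.NLLLoss()(torch.log_softmax(logits, dim=1), targets)
print(ce.item(), nll.item())  # identical up to float precision
```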
- Diversity meets ensemble@CVPR16
- Causal Inference in AI@Purdue
- Computer Vision concepts
- Microelectronics topics
- Are-computer-chips-the-new-security-threat?
- Indianas-applied-research-institute-unveils-new-name-and-brand-awards-first-contract-to-purdue-university
- List_of_integrated_circuit_manufacturers
- Semantic Segmentation implementations
- Sayan98/pytorch-segnet
- meetshah1995/pytorch-semseg
- zijundeng/pytorch-semantic-segmentation
- shufanwu/SegNet-PyTorch
- pytorch-unet-segmentation
- self-link@alimurreza
- Psychology meeting stuff
- Deep Reinforcement Learning
- Geometric Deep Learning
- What-is-geometric-deep-learning
- overview-of-deep-learning-on-graph-embeddings
- graph-convolutional-networks-for-geometric-deep-learning
- Graph-convolutional-networks
- Beyond-graph-convolution-networks
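A dense-tensor sketch of the basic graph-convolution propagation rule discussed in the links above, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W) in the Kipf & Welling form (the 3-node path graph and feature sizes are made up for illustration):

```python
import torch

# Adjacency of a 3-node path graph.
A = torch.tensor([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])
A_hat = A + torch.eye(3)                    # add self-loops
d = A_hat.sum(dim=1)                        # node degrees
D_inv_sqrt = torch.diag(d.pow(-0.5))
norm_adj = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization

H = torch.randn(3, 4)                       # node features
W = torch.randn(4, 2)                       # layer weights
H_next = torch.relu(norm_adj @ H @ W)       # one GCN layer
print(H_next.shape)                         # (3, 2): new feature per node
```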
- Deep Learning for PDEs