[2206.13446] Pen and Paper Exercises in Machine Learning
"Pen & Paper Exercises in Machine Learning" -- probably the most interesting arXiv article I've seen in a while ☺️
Love it when exercises come with solutions. 👌
(Textbooks with exercises but w/o solutions are truly evil 😆). t.co/bULUF1UclK
A collection of 🖊️ and 📜 problems on topics in unsupervised #MachineLearning by Michael Gutmann @InfAtEd. Every problem comes with a detailed, step-by-step solution in typical Michael style. Highly recommended for students or anyone needing to brush up 🙂 t.co/rRuEMkXTBz
Data Science and AI for Neuroscience Summer School
Your @CaltechN summer of learning🏝️🧠🤓
✦ dynamical time series
✦ high-dimensional data
✦ autoencoders & ML
✦ single-cell seq
✦ deep learning & LFADS
✦ generative modeling & MYOW
✦ RNNs & dynamical systems
"We test language models on our forecasting task and find that performance is far below a human expert baseline. However, performance improves with increased model size and incorporation of relevant information from the news corpus."
IntervalQA includes diverse numerical questions from existing QA datasets:
SQuAD, 80K Hours Calibration, Eighth Grade Arithmetic (Cobbe et al.), TriviaQA (Joshi et al.), Jeopardy, MATH, and MMLU.
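For context, IntervalQA asks models to produce calibrated confidence intervals over numeric answers. A minimal sketch of the standard coverage check for that kind of prediction; the function and the example numbers are illustrative, not the paper's evaluation code:

```python
# Hypothetical sketch of interval-calibration scoring for numeric QA:
# how often do a model's stated confidence intervals actually contain
# the true answer? Not the paper's official metric.
from typing import List, Tuple

def interval_coverage(intervals: List[Tuple[float, float]],
                      truths: List[float]) -> float:
    """Fraction of true values falling inside the predicted intervals."""
    hits = sum(lo <= y <= hi for (lo, hi), y in zip(intervals, truths))
    return hits / len(truths)

# A model claiming 80% confidence should cover ~80% of the answers.
preds = [(10.0, 20.0), (0.0, 5.0), (100.0, 150.0)]
answers = [15.0, 7.0, 120.0]
print(interval_coverage(preds, answers))  # 0.67 here -> overconfident
```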
GitHub - facebookresearch/torchdim: Named tensors with first-class dimensions
If you like einsum (or use vmap a lot), you should probably take a look at torchdim by @Zachary_DeVito & team:
An intriguing take on named tensors by using dynamically bound objects to represent the dimensions.
Some examples below. 🙃 1/n
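A minimal sketch of the einsum-style usage, adapted from the torchdim README; the import path is an assumption (the same API later shipped inside functorch as `functorch.dim`):

```python
import torch
# Import path per the facebookresearch/torchdim repo; newer releases
# expose the same API via `from functorch.dim import dims`.
from torchdim import dims

def mm(A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    # Indexing with first-class dim objects binds names to dimensions,
    # so this reads like einsum("ik,kj->ij", A, B) without the string.
    i, j, k = dims(3)
    return (A[i, k] * B[k, j]).sum(k).order(i, j)

A, B = torch.randn(3, 4), torch.randn(4, 5)
assert torch.allclose(mm(A, B), A @ B)
```

Because `i`, `j`, `k` are ordinary Python objects rather than characters in a format string, the contraction pattern composes with vmap-style batching, which is the "dynamically bound" part.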
Benchopt: Benchmark repository for optimization — benchopt 1.2.1.dev41 documentation
📢NEW PAPER ALERT📢
Benchopt: Reproducible, efficient and collaborative optimization benchmarks 🎉
If you ever felt the pain of doing a large-scale benchmark, benchopt is here for you!! 😎
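For a sense of the workflow: a benchopt benchmark is a small Python package with Objective, Dataset, and Solver classes, and the CLI times every solver on every dataset. A minimal solver sketch below; the gradient-descent solver and the l2-regularized least-squares objective are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of a benchopt solver following the documented
# benchmark structure; problem setup is illustrative.
import numpy as np
from benchopt import BaseSolver


class Solver(BaseSolver):
    """Plain gradient descent on l2-regularized least squares."""
    name = "GD"

    def set_objective(self, X, y, lmbd):
        # benchopt hands each solver the problem data declared by the
        # benchmark's Objective class.
        self.X, self.y, self.lmbd = X, y, lmbd

    def run(self, n_iter):
        # benchopt calls run() with growing iteration budgets and times
        # each call to build the convergence curve.
        X, y, lmbd = self.X, self.y, self.lmbd
        step = 1.0 / (np.linalg.norm(X, ord=2) ** 2 + lmbd)
        w = np.zeros(X.shape[1])
        for _ in range(n_iter):
            w -= step * (X.T @ (X @ w - y) + lmbd * w)
        self.w = w

    def get_result(self):
        # The returned value is scored by the benchmark's Objective.
        return self.w
```

`benchopt run ./my_benchmark` then runs every registered solver on every dataset and produces the comparison plots.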
There’s a reproducibility crisis brewing in almost every scientific field that has adopted machine learning. On July 28, we’re hosting an online workshop featuring a slate of expert speakers to help you diagnose and fix these problems in your own research: t.co/zexYmhkttp t.co/rq3qby3F8C
@amirzait Great question! In t.co/RBS70Y20Ww we began to study memorization. We indeed looked at accuracy on modified questions, checked for MATH in the training data, and compared accuracy when removing answers similar to MATH. But this is an important direction for more follow-up!
The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning
@togelius@NeurIPSConf Here’s an example of the rebuttal for “The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning”.
I think after the initial reviews, it would have been rejected. But after the rebuttals, it became a spotlight.
"RETRO is so fast and cheap, in fact, that I cannot fathom why anyone would choose to do language modeling without retrieval."
New blog post benchmarking RETRO's database!
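For intuition about what such a benchmark measures: a RETRO-style setup embeds fixed-size text chunks with a frozen encoder and fetches nearest-neighbor chunks from a huge index on every pass. A toy sketch of that lookup; faiss, the sizes, and the random data are stand-in assumptions (the RETRO paper itself reports using SCaNN):

```python
# Toy sketch of a RETRO-style neighbor lookup; faiss and the sizes
# here are stand-ins, not the blog post's actual setup.
import numpy as np
import faiss

d = 768                                  # frozen-encoder embedding size
keys = np.random.randn(100_000, d).astype("float32")  # fake chunk keys

index = faiss.IndexFlatL2(d)             # exact search; real systems use
index.add(keys)                          # approximate indexes for speed

queries = np.random.randn(32, d).astype("float32")    # one per input chunk
dists, ids = index.search(queries, 2)    # 2 neighbor chunks per query
print(ids.shape)                         # (32, 2): ids of chunks to fetch
```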
Are large pre-trained models nothing more than stochastic parrots? Is scaling them all we need to bridge the gap between humans and machines? In this new opinion piece for @NautilusMag, I argue that the answer lies somewhere in between. 1/14
“Sufficiently advanced mimicry is virtually indistinguishable from intelligent behavior—and therein lies the difficulty.” t.co/lwROJhqn38