[2009.11243] Tasks, stability, architecture, and compute: Training more eff...
@jreuben1 beyond NAS + MetaLearning: learning how to learn ML optimizers t.co/JfnTuVFH5R Google Brain built a dataset of 1000+ diverse optimization tasks commonly found in ML and tested whether learned optimizers could help train other, new learned optimizers
Aella Credit is Making Banking More Accessible Using AWS Machine Learning -...
@vamsikvutukuru This story made my day. Very inspired to see how Aella Credit, a digital financial services company operating in sub-Saharan Africa, uses Amazon Rekognition to verify people and increase access to banking. t.co/y3BaXpRAMN
最先端NLP勉強会 (State-of-the-Art NLP Study Group) - 2020
@jqk09a Thank you to everyone who took part in the 最先端NLP勉強会! This year was our first fully online edition. There were surely shortcomings on the organizing side, but I hope it was a dense and enjoyable meeting for everyone. Thank you to all the presenters and to everyone who livened up the discussions! The presentation slides are available on the web 👉t.co/mfEmFXHjpy
GitHub - WhyR2020/abstracts
@_stakaya I'm late with the announcement, but Kosshy-san from VALUES is speaking at Why R 2020, which is being held today! Please cheer them on!!! t.co/OPwRFlrhPN
An Effectiveness Metric for Ordinal Classification: Formal Properties and E...
@sho_yokoi t.co/Ju9Ze5xtmr A paper by authors well known in the evaluation-metrics community arguing that we should stop using F1, rank correlation, or MSE to evaluate ordinal classification. Includes my personal study notes. In particular, I don't think I have found the key references on the taxonomy of prediction problems, so please comment if you have recommendations.
Construction — Machine Learning from Scratch
@ImAI_Eruel A book aimed at ML/AI beginners has been published that implements machine learning from scratch, grounded in its mathematical background and without using libraries: "Machine Learning from Scratch" t.co/liL7uttapg From-scratch ML implementation books are popular in Japanese as well, so it's wonderful to be able to read one like this on the web (if you can read English). t.co/ZVPvcDUVN4
[DSO] Machine Learning Seminar Vol.8 Chapter 9
@Mr_Sakaue [DSO] Machine Learning Seminar Vol.8 Chapter 9 t.co/OqGg97AyZk jupyter nbconvert is a nice feature in that it lets you transcribe code for a study group and produce the handout at the same time. Of course, if you leave the formatting as-is the output looks fairly plain. This time it was the Flask chapter.
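A minimal sketch of the same nbconvert workflow driven from Python instead of the CLI (equivalent to `jupyter nbconvert --to html`); the notebook filename here is a placeholder, not the seminar's actual file:

```python
import nbformat
from nbconvert import HTMLExporter

# Read the study-group notebook (placeholder filename).
nb = nbformat.read("chapter09_flask.ipynb", as_version=4)

# Convert it to a plain HTML handout; from_notebook_node returns (body, resources).
body, _resources = HTMLExporter().from_notebook_node(nb)

with open("chapter09_flask.html", "w", encoding="utf-8") as f:
    f.write(body)
```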
How Waze predicts carpools using Google Cloud AI Platform
@googlecloud What are the most important signals to Waze's ML models when ranking lists of drivers and riders? 🤔 Read our blog for more answers on how Waze predicts carpools with Google Cloud's AI Platform → t.co/m7H8ssd996
#3 The ML Test Score: A Rubric for ML Production Readiness and Technical De...
@hurutoriya #just4fm 第3回です。 機械学習システムの信頼性向上と技術的負債の減少をテーマにした論文を紹介しました t.co/Mh0yiYEbRR
Eric Jang: My Criteria for Reviewing Papers
@ericjang11 My Criteria for Reviewing Papers - would love to hear what factors academics care most about when determining whether papers should be accepted/rejected t.co/7Wpo40Gg1V
Akira's ML news #Week 39, 2020|akiraTOSEI|note
@AkiraTOSEI I've posted Akira's ML news Week 39, 2020. Every week I introduce the latest papers from the ML community and examples of machine learning being put to use. t.co/Q2rIJ6KwJV
Scientific Computing in Python: Introduction to NumPy and Matplotlib
@rasbt "Scientific Computing in Python: Intro to NumPy & Matplotlib" (blog + 10 video tutorials). Most students in my ML class are new to Python so I made a @numpy_team lecture. Just converted the notes to a blog & embedded the vids. Maybe useful to others too :) t.co/SkStNbFy9u
[2006.16537] Theory-Inspired Path-Regularized Differential Network Architec...
@hillbig In neural architecture search (NAS), DARTS searches efficiently using gradients, but it tends to prefer skip connections, which reduce the loss quickly. Based on a theoretical analysis, PR-DARTS penalizes skip connections and rewards long paths, and thereby finds better networks. t.co/F8SLpiu71l
[2006.16537] Theory-Inspired Path-Regularized Differential Network Architec...
@hillbig Differential architecture search (DARTS) tends to select NNs with skip connections because they can converge faster. PR-DARTS adds a regularization for skip connections and shallow networks, and can find much better NNs. t.co/F8SLpiu71l
Building an analytics platform in the sauna and thinking up ways to improve fishery sustainability - Lighthouse Developers Blog
@bashi0501 Whether thoughts come together more easily in the sauna room or during the rest breaks is my current research topic 👀 t.co/vqso7XBCsW
Tissue Specificity in OMICs
@1wantphd A super-handy site: type in a gene name and it displays RNA and protein levels for each tissue. Its only weakness is the inflexibility of accepting gene names in uppercase only. t.co/KQoYU9MBgN t.co/Gfj2nqKdKF
ELK Demonstrators
@Vjeux Would love if someone could play with this layout engine and excalidraw. Select a bunch of shapes and arrows, click format, and they move into a good-looking layout. Doesn't do what you want? Either move stuff around to fix it or cmd-z. t.co/3L6sIcYY5c
TFX tutorials  |  TensorFlow
@wakame1367 Went through the tutorials again for the first time in a while; quite a lot has changed. t.co/qJq23IGSoP #just4funfm
Trustworthy ML
@hima_lakkaraju Exciting news!! We are expanding our @trustworthy_ml Initiative (t.co/ZVuPpE05OK) & launching multiple new efforts. We have put together a ton of useful resources to help beginners (t.co/v1ZRXVZLKV). We are also starting a bi-weekly seminar series in Oct. t.co/lwps126yvb
Trustworthy ML
@trustworthy_ml 📢 Excited to announce our launch as the Trustworthy ML initiative (TrustML) t.co/04XPYF8VTa. Our efforts just got bigger and better! Read on 👇[1/n] #trustworthyML #MachineLearning #ArtificialIntelligence #DeepLearning t.co/16KksEFr32
Trustworthy ML
@JaydeepBorkar Super happy to be a part of this new initiative along w/ @hima_lakkaraju @sarahookr Sarah Tan @sbmisi @chhaviyadav_. For a few months now I've been maintaining @trustworthy_ml to disseminate news and research related to trustworthy ML. Check us out at t.co/qTnC5BYAj6! 1/ t.co/vxugxwMDAZ
How Can Evolution Learn?: Trends in Ecology & Evolution
@ykamit I didn't know about this. The idea is that the relationship between evolution and learning is more than an analogy (they are equivalent as algorithms). This might offer a clue toward understanding what so-called inductive bias actually consists of. t.co/G4gZdrlYKF
Notes on understanding Batch Normalization in Deep Learning, plus a look at its actual effect - Qiita
@pacifinapacific Here's a reference article. t.co/ARKIt7ag13 For textbook-style questions, there isn't much more I can say than that easy-to-understand articles exist, so please look for them… Continued in the question box #Peing #質問箱 t.co/dLFP4PMlyy
Retrieval Augmented Generation: Streamlining the creation of intelligent na...
@facebookai Our Retrieval Augmented Generation #NLP model is now available as part of the @HuggingFace transformer library. The true strength of RAG is in its flexibility. You control what it knows simply by swapping out the documents it uses for knowledge retrieval. t.co/M7PM5eWorP t.co/51ibozdVeG
Retrieval Augmented Generation: Streamlining the creation of intelligent na...
@alexvoica Today, @riedelcastro and the FAIR team at @FacebookLondon are releasing RAG, an #NLP model that will be open sourced as part of the @HuggingFace transformer library: t.co/sJ7GqpUFPf t.co/ZKjEiAyGpw
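A rough usage sketch along the lines of the transformers documentation (the facebook/rag-token-nq checkpoint, dummy retrieval index, and question are illustrative choices, not details from these tweets):

```python
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration

# Load a pretrained RAG checkpoint with a small dummy retrieval index for quick testing.
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)

# Ask a question; the retriever fetches supporting documents, the generator answers.
inputs = tokenizer("who wrote the origin of species", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```

Swapping in a different document index changes what the model "knows", which is the flexibility the announcement highlights.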
[2006.08564] Post-Hoc Methods for Debiasing Neural Networks
@crwhite_ml Proud of @yashsavani_ for his first first-author paper, to appear at #NeurIPS2020! Check out our paper, Post-Hoc Methods for Debiasing Neural Networks (with @naveensundarg ) t.co/izXd30vk7h @abacusai
Towards ML Engineering: A Brief History Of TensorFlow Extended (TFX) — The ...
@lmoroney Great post on 'Towards ML Engineering' -- a history of TFX at the TensorFlow Blog: t.co/NVvocUXGnR
[2006.00475] Improved Regret for Zeroth-Order Adversarial Bandit Convex Opt...
@SebastienBubeck Fantastic progress by Tor Lattimore on bandit convex optimization t.co/wqTIh6uNTa !!! The regret is now d^{2.5} sqrt(T) (down from d^{9.5} sqrt(T)), and the proof is short and sweet. Very close to the conjectured bound of d^{1.5} sqrt(T) . 1/2
Google Research Football with Manchester City F.C. | Kaggle
@BrianPrestidge Really excited we can share this after what feels like an age in the pipeline with the fantastic folks @GoogIeAI t.co/NBZKlMoONn
Google Research Football with Manchester City F.C. | Kaggle
@tkm2261 New Kaggle competition: a reinforcement learning competition by Google x Manchester City to train agents for a soccer game. Watching the AIs play each other also sounds fun. t.co/9D1iznm36M
Google Research Football with Manchester City F.C. | Kaggle
@JeremyAbramson So this is interesting... Man City + Google = Kaggle competition to generate football/soccer agents: t.co/41PfwrKaF4
Google Research Football with Manchester City F.C. | Kaggle
@ikuma_uchida18 t.co/t07MbpUkq5 Man City has come to Kaggle. Setting aside how little I know about it, maybe this is my cue to dive into the world of Kaggle.
Google Research Football with Manchester City F.C. | Kaggle
@karol_kurach Google Research Football is now a Kaggle competition. Great collaboration between @GoogleAI Brain team in Zurich and @ManCity ! Looking forward to seeing the AI finals 2020 :) t.co/CquDltQybX Thanks @sylvain_gelly, @obousquet, Piotr, Anton & Marcin for making it happen!
Google Research Football with Manchester City F.C. | Kaggle
@brendankent Manchester City & Google are hosting an AI soccer/football Kaggle competition ⚽ t.co/k2GXEWR6Hf
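For readers who want to poke at the environment outside Kaggle, a minimal random-agent loop with the open-source gfootball package (the scenario name and observation representation are illustrative; the Kaggle competition itself runs through the kaggle-environments harness):

```python
import gfootball.env as football_env

# Create a single-agent academy scenario with the compact 115-float observation.
env = football_env.create_environment(
    env_name="academy_empty_goal_close",
    representation="simple115",
    render=False,
)

obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()   # random policy as a placeholder agent
    obs, reward, done, info = env.step(action)
env.close()
```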
AI Platform Prediction now with better reliability & ML workflows | Google ...
@GCPcloud We've updated our AI Platform Prediction (now GA) to improve robustness, flexibility, & usability: ✅ XGBoost / scikit learn models on high-mem/high-cpu machine types ✅ Resource Metrics ✅ Regional Endpoints ✅ VPC-Service Controls (Beta) Learn more ↓ t.co/z022p7CybQ
HAMLETS 2020
@dkaushik96 Less than two weeks to submission deadline for our #NeurIPS2020 HAMLETS workshop ---> t.co/Bmu51vEzOr t.co/VwGaw88DSt
Google Research Football with Manchester City F.C. | Kaggle
@MLBear2 A simulation competition with medals has arrived. Is it a game like Winning Eleven where you control one soccer player and help the team win? (haven't looked closely) t.co/j4HJ6UAUV5
Google Research Football with Manchester City F.C. | Kaggle
@MeganRisdal Finally I may be able to convince my husband to try out Kaggle (and give up FIFA on the Switch so I can play more Zelda). t.co/HfdcCV8X7o
[2009.03294] GraphNorm: A Principled Approach to Accelerating Graph Neural ...
@hillbig GraphNorm is a normalization function for graph NNs. Unlike BN, it computes the statistics per graph, and because simply subtracting the mean would destroy graph-structure information for regular graphs, the amount of shift is made learnable. It acts as a preconditioner that makes optimization easier and also improves generalization. t.co/odwBzehZU2
[2009.03294] GraphNorm: A Principled Approach to Accelerating Graph Neural ...
@hillbig GraphNorm is normalization for graph NNs, applying normalization to each individual graph with a learnable shift, since subtracting the full mean would lose the graph's structural information for regular-like graphs. It converges faster and improves generalization. t.co/odwBzehZU2
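A minimal single-graph sketch of the idea in PyTorch (batched graphs would need segment-wise means and stds per graph); this is an illustration of the formula described above, not the authors' reference implementation:

```python
import torch
import torch.nn as nn

class GraphNormSketch(nn.Module):
    """Simplified GraphNorm: per-graph normalization with a learnable mean shift."""

    def __init__(self, dim: int, eps: float = 1e-5):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(dim))   # learnable shift scale
        self.gamma = nn.Parameter(torch.ones(dim))   # affine scale
        self.beta = nn.Parameter(torch.zeros(dim))   # affine bias
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: node features of a single graph, shape (num_nodes, dim).
        mean = x.mean(dim=0, keepdim=True)
        shifted = x - self.alpha * mean              # keep part of the mean for regular graphs
        std = shifted.std(dim=0, unbiased=False, keepdim=True)
        return self.gamma * shifted / (std + self.eps) + self.beta
```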
[2007.00970] MPLP: Learning a Message Passing Learning Protocol
@samgreydanus @hardmaru Couldn’t agree more. One additional thought: I think the recent line on learning cellular automata has great potential for uncovering good local/simple learning rules. Early work on this by @zzznah @RandazzoEttore @eyvindn: t.co/uaBx4J1uww
Building a Simple Pipeline in R – Mathew Analytics
@Maxwell_110 An article on building a simple analysis pipeline using only R (+RStudio) t.co/IVM02ZN2OA It uses cronR ( t.co/uC71hYFtuR ) for automated execution. Naming the data folder "data" seems fairly standard, but anyone who names it "input" is suspected of being a Kaggler (my own biased opinion). t.co/CbQ6Wj1cec
A Brief Survey of Time Series Classification Algorithms | by Alexandra Amid...
@ISID_AI_team A summary of classification algorithms for time-series data: ・KNN with dynamic time warping ・TimeSeriesForest ・BOSS, cBOSS ・RISE — like TimeSeriesForest but with other features ・Shapelet Transform Classifier ・HIVE-COTE All of them can be used from the sktime package. t.co/S5BnY1MiyH
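A minimal sketch of running one of the listed algorithms via sktime (the import path assumes a recent sktime release; the dataset and hyperparameters are illustrative, not from the article):

```python
from sktime.classification.interval_based import TimeSeriesForestClassifier
from sktime.datasets import load_arrow_head
from sklearn.metrics import accuracy_score

# A toy univariate time-series classification run with one of the listed algorithms.
X_train, y_train = load_arrow_head(split="train", return_X_y=True)
X_test, y_test = load_arrow_head(split="test", return_X_y=True)

clf = TimeSeriesForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))
```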
Tensorflow2 Keras – Custom loss function and metric classes for multi task ...
@keunwoochoi Masked Keras metrics were not on the web so I had to implement it. Now you don't have to :P t.co/FLeXAa0qwE
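Not the author's code from the post, but a minimal sketch of one way to write a masked metric in tf.keras, assuming missing targets in a multi-task head are encoded as -1:

```python
import tensorflow as tf

class MaskedMSE(tf.keras.metrics.Metric):
    """MSE that ignores targets equal to a mask value (e.g. missing labels in multi-task heads)."""

    def __init__(self, mask_value=-1.0, name="masked_mse", **kwargs):
        super().__init__(name=name, **kwargs)
        self.mask_value = mask_value
        self.total = self.add_weight(name="total", initializer="zeros")
        self.count = self.add_weight(name="count", initializer="zeros")

    def update_state(self, y_true, y_pred, sample_weight=None):
        # Only accumulate squared error where the target is not the mask value.
        mask = tf.cast(tf.not_equal(y_true, self.mask_value), tf.float32)
        sq_err = tf.square(tf.cast(y_true, tf.float32) - tf.cast(y_pred, tf.float32)) * mask
        self.total.assign_add(tf.reduce_sum(sq_err))
        self.count.assign_add(tf.reduce_sum(mask))

    def result(self):
        return self.total / tf.maximum(self.count, 1.0)
```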
Why the success or failure of future businesses will hinge on "Linking Data" - Cinnamon AI Blog
@mikuhirano [I wrote my first blog post!] People say Data is King, but just accumulating data is meaningless. From now on, Loop is King. Whether you can connect your data will decide the winners of the digital era. That is Linking Data. t.co/2LvFglXlDG
A new MDP for highways! An extensible state definition (Part 1) | AI-SCHOLAR | AI (Artificial Intelligence) papers and technical information media
@ai_scholar A proposal of a new MDP for highway scenarios. It defines an easily extensible MDP and obtains a policy by combining reinforcement learning and inverse reinforcement learning. The first half of the article introduces the new MDP and reinforcement/inverse reinforcement learning. t.co/LvlL3YdqNc
Introducing PyTorch Forecasting | by Jan Beitner | Towards Data Science
@fakegingerbitch There are reasons why people use classic methods for forecasting - they generally work better: t.co/1EZZJUACIz
Optimizing MobileDet for Mobile Deployments | Sayak Paul
@RisingSayak New blog post focusing on the criticalities of effectively optimizing *MobileDet object detectors for mobile deployments*. Thanks to @khanhlvg for the guidance. @TensorFlow #TensorFlowLite @GoogleDevExpert t.co/DctUHdUr01 1/2
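The generic TensorFlow Lite conversion flow the post is about, as a hedged sketch (the SavedModel path is a placeholder, and MobileDet/SSD exports typically need extra TFLite-friendly export steps described in the post itself):

```python
import tensorflow as tf

# Convert a SavedModel to TFLite with default (dynamic-range) quantization.
converter = tf.lite.TFLiteConverter.from_saved_model("mobiledet_saved_model")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("mobiledet.tflite", "wb") as f:
    f.write(tflite_model)
```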