Reading List: October 2021
Here’s a list of articles, courses, books, videos, and other things I went through this month and found personally interesting:
Productivity Methods
I stumbled across the following posts by Devi Parikh while going through this repo:
- Calendar. Not to-do lists: This post discusses some principles for time management. It makes the argument for using a calendar to manage your time and tasks, not to-do lists. According to Parikh, to-do lists are too disorganized (especially in the time dimension, which is what we’re trying to optimize). She then goes on to explain her method of using calendars, which involves making everything you do - not just work, but things like eating, sleeping, doing nothing - an entry on your calendar. She then discusses other principles - I found the calibration multiplier fascinating!
- Checking Email → Inbox Zero: This is something I’d heard about before, under other names like “touching every email only once”. This post outlines the motivation for being strict with your inbox, and discourages procrastinating on emails. Do whatever you need to do with an email the moment you open it; don’t put it off for later.
The Confessions of Marcus Hutchins, the Hacker Who Saved the Internet
I love this article. It is a very well-written story of Marcus Hutchins, the hacker who was responsible for stopping the WannaCry ransomware attack in 2017. Apart from the cool hacker element, this article also addresses the real-world implications of cybersecurity, and the serious repercussions it can have on people and property. It delves into the grey world of cybersecurity, and it made me think that there really isn’t a stark line between cybercriminals and cybersecurity experts. They’re the same kind of people who study the same things, but are on different teams, with different intentions. This 70-90 minute read is worth every second.
Dropout: A Simple Way to Prevent Neural Networks from Overfitting
This is the original paper on Dropout, a technique that is commonly used while training deep neural networks today. I knew the concept of dropout, but the way the authors present it instilled a new way of thinking about dropout and how it curbs overfitting, with the intuition tending more towards gradient-boosted machines and genetic algorithms than the simple principle of “disallowing the network to get dependent on certain pathways”.
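To make the mechanism concrete, here is a minimal sketch of dropout in NumPy. This is my own illustration, not code from the paper, and it uses the common “inverted” variant that rescales activations during training (the paper instead scales weights down at test time):

```python
import numpy as np

def dropout_forward(x, p_drop=0.5, training=True):
    """Inverted dropout: zero each unit with probability p_drop and
    rescale the survivors so the expected activation is unchanged."""
    if not training or p_drop == 0.0:
        return x  # the full network is used as-is at test time
    # A fresh random mask on every forward pass means no fixed pathway
    # of units can be relied upon - the co-adaptation the paper targets.
    mask = (np.random.rand(*x.shape) > p_drop).astype(x.dtype)
    return x * mask / (1.0 - p_drop)

# Example: a batch of 4 hidden vectors of width 8
h = np.random.randn(4, 8).astype(np.float32)
print(dropout_forward(h, p_drop=0.5))
```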
The YOLO papers
Many are aware that the creator of YOLO has written quite humorous papers for it. YOLO (You Only Look Once) transformed the field of object detection when it was published in 2016: it was much faster, used less computation, and was on par with the state of the art. Apart from the surprise of how simple the model is (sketched after the list below), the papers really entertain their readers with subtle humour placed here and there. More versions of YOLO have been released since, but the first three papers were written by the same author, Joseph Chet Redmon, and they are great fun:
- You Only Look Once: Unified, Real-Time Object Detection
- YOLO9000: Better, Faster, Stronger
- YOLOv3: An Incremental Improvement
I never thought research papers could be written like this. Brilliant!
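To show just how simple the model is, here is a rough sketch of parsing a YOLOv1-style output tensor. The settings (S=7, B=2, C=20) are from the first paper, but the random `preds` tensor and the loop are my own stand-in for real network output:

```python
import numpy as np

# YOLOv1-style settings: a 7x7 grid (S), 2 boxes per cell (B), 20 classes (C).
S, B, C = 7, 2, 20

# Random stand-in for the network's final layer: each grid cell predicts
# B boxes of (x, y, w, h, confidence) plus C class probabilities.
preds = np.random.rand(S, S, B * 5 + C)

detections = []
for row in range(S):
    for col in range(S):
        cell = preds[row, col]
        boxes = cell[: B * 5].reshape(B, 5)   # (x, y, w, h, confidence) per box
        class_probs = cell[B * 5 :]           # one class distribution per cell
        for x, y, w, h, conf in boxes:
            # Class-specific score = box confidence * class probability;
            # real pipelines threshold these and apply non-max suppression.
            cls = int(class_probs.argmax())
            detections.append((row, col, x, y, w, h, conf * class_probs[cls], cls))

print(f"{len(detections)} raw boxes from a single forward pass")  # 7*7*2 = 98
```

A single forward pass yields all 98 candidate boxes at once - no region proposals or sliding windows - which is where the speed comes from.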
How to plan and execute your ML and DL projects
This blog post on FloydHub by Sayak Paul outlines good practices and tooling for machine learning projects, with a focus on reproducibility, versioning, and low technical debt.
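As one concrete example of the reproducibility theme, a seed-pinning helper along these lines is a common practice. This is my own sketch, not code from the post (the `set_seed` name and the PyTorch focus are assumptions):

```python
import os
import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Pin the RNGs an ML project typically touches so runs repeat."""
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade some speed for determinism in cuDNN kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)
```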