Neural Networks Helpful Resources
This post is a resource repository for neural networks. It highlights the material I found useful over the six months I have spent working on deep learning and computer vision, and which I would perhaps have wanted to know when I was just starting out. It is by no means exhaustive; from time to time I may update this list as I come across anything potentially useful.
Disclaimer: My main research area is computer vision, so most of these resources are biased towards it. There are tons of NLP resources as well, but for lack of knowledge I can't write about them. Even so, this post covers some of the intersection of the two and may be useful even for those working on other applications of deep learning.
While this post is written especially for people with no prior experience in this field, those with moderate experience may still find something useful in it.
Aside: Note for Absolute Beginners
If you are a complete beginner with no prior knowledge of computer vision or deep learning, this might not be the best place to start. I would highly recommend understanding some basics of signal and image processing before diving in. That is definitely not a prerequisite, though; it is just something that will help you appreciate the field even more.
Cool Courses:
- CS231n: This is undoubtedly the best place to start. All lecture videos are available on YouTube. It assumes more or less no prior knowledge of the field except for some background in differential calculus. The most recent version available is the Spring 2017 offering, and having watched all the videos I must say it is easily the best starting point: it makes your base quite strong, and the assignments are pretty neat. The best part of the course is that it focuses on breadth rather than depth, so you get to know almost all the broad research directions in deep learning. Moreover, since these are actual class lectures, the pacing is absolutely fantastic. Expected time to complete: ~2 weeks (assuming 1 lecture a day).
- Neural Networks Coursera: This is an extremely resourceful course by one of the pioneers of the field, Prof. Geoffrey Hinton. In fact, I have seen papers that cite a lecture from this course. While it also assumes no background knowledge, I found it slightly difficult to follow: it covers more general deep learning theory, and since it is an online course rather than a class recording, I felt that some of the crucial details were not given much intuition. But if you have some basic familiarity with machine learning, this is definitely a must-watch. Not everything covered may be useful in the short term, but knowing the basic tools that might later help you is great. Expected time to complete: ~2 weeks (assuming 1 lecture a day).
- Fast.ai course: Okay, I will be honest here: I haven't gotten around to completing the whole course. But from the few lectures I have seen, it gives more weight to intuition than to the actual maths. This is quite fitting, since a lot of innovations in deep learning really do come from intuition. It also emphasizes coding over theory, which this field genuinely requires; after all, your innovations are meaningless unless you can validate them by testing your ideas. Definitely an amazing resource, and I promise I will get back to it once I have some more free time.
Update: I have now completed the fast.ai course based on PyTorch. I will perhaps write a completely new blog post on it, but I will simply state that I found it the most useful of all. It takes a top-down approach and emphasizes the coding aspect a lot more. Trust me, being able to convert ideas into code and to experiment is very high on the list of things one should know. I would highly suggest doing this course at least once. The best part is that they have a library on top of PyTorch which is absolutely stunning. Be sure to check it out.
Book:
- Deep Learning Book: This is undoubtedly an extremely good resource because it goes into a lot of mathematical detail. If you are comfortable with maths, this is definitely the best place to supplement your knowledge from the courses. Most of the content is amazingly detailed, and the best part is that it is freely available on the web. Moreover, the first 5 chapters give a brief revision of all the relevant building blocks of deep learning. Update: the Video Walkthrough of Every Chapter of the Deep Learning Book is a truly amazing companion for reading the book.
Video Lectures:
- VideoLectures.net: A big repository of video lectures from conferences, summer schools, tutorials and even interviews. The videos are segregated into categories, and more often than not you will find everything related to a particular topic. Extremely useful resource.
Reading Papers and Going through Projects:
Once you have understood the basics it is time to read some papers. But where?
- Arxiv and Arxiv Sanity: The computer science community in general uses arXiv to put up pre-prints of their papers. The tags to watch out for are cs.CV and cs.AI (for my research interests). Arxiv Sanity is a wrapper around arXiv that helps you keep up with the large number of papers published every day, offering a better display and an auto-recommendation system built on SVMs (to the best of my knowledge). There are also Android apps, like Arxiv Mobile, with a clean GUI. Another new addition is Arxiv Vanity: it renders the PDF as HTML, so instead of downloading every PDF you can read it online. It is still under development, but definitely quite useful. But let's be real: it is not possible to keep up with all the papers, and I don't think it is a good idea to even try at an early stage.
- Visionbib: Probably one of the biggest repositories of computer vision papers, segregated by topic. Definitely a great resource for gaining some understanding of a particular topic.
- Awesome Deep Vision and Awesome Deep Learning: Amazing repositories with lists of courses, books, etc. Consider these an elongated and perhaps broader version of this blog. Though neither is complete or updated regularly, they still contain a wealth of information. I highly recommend forking the repositories and perhaps maintaining your own version to make it a bit more personalized.
- UT Austin Reading Group: It is in general extremely difficult to keep up with all the papers, so a better option is to visit a reading group and see what they are reading. More often than not, some of these papers will have been overlooked while searching through arXiv or Visionbib. The UT Austin reading group discusses some very nice papers bi-weekly, and everything since 2007 is documented. Quite honestly, I gained a lot of insight for my BTech project after reading some of the papers posted here.
- UIUC Reading Group: Quite similar to the UT Austin group, except that the reading takes place weekly. It only seems to have records from 2014 onward, but it is still an extremely useful resource.
PS: There are perhaps other reading groups at different universities, but the UT Austin and UIUC groups are the ones I have personally referred to, so I can vouch for them. Please feel free to email me if you know of any other reading group.
- WAYR (What Are You Reading): A weekly thread on r/MachineLearning where people post what they are reading that particular week. Many interesting works are posted, and some of them might be relevant to your work. Past threads are also linked, so many related papers can be found. Apart from that, the subreddit itself is amazing: a lot of people post their projects (with code) and interesting blogs.
- Twitter: For some reason the deep learning community has taken to Twitter, and quite a lot of discussion of recent work can be found there: publicity or even criticism of a particular paper, interesting observations, and a lot more. Here is a great list of people you can follow: the Deep Learning Twitter Loop. Some of them even go through arXiv and write up interesting observations, and quite honestly this is a great place to stay updated on the latest developments in AI and deep learning.
Aside: Blogs
More often than not we are interested in a particular topic, but going through the original paper is extremely time-consuming. That is where blogs come in. Many people blog, with the primary intention of making the information in a paper, perhaps filled with rigorous maths, easier to understand. Many blogs also include code for the topic, which is extremely helpful. So an extremely useful habit when researching a topic is to first read a blog post about it; after getting sufficient intuition, dive into the paper if it really is relevant, and otherwise just skip it.
I have found some great blogs on r/MachineLearning. Another great resource is Andrej Karpathy, who has been an instructor for the CS231n course. I have also found some great intuitions in the blog posts of Ferenc Huszár. This is by no means an exhaustive list, just a few names I can recall off the top of my head.
Computational Resources:
One particular reason deep learning has been rising is the amazing work done in the field of computer architecture: thanks to better GPUs, heavy computation that could take hours can be done in minutes. So here are some helpful tips:
- Working on a laptop (no other computational resources): There is not much that can be done on a laptop, the primary reason being that it is difficult to do heavy work on the CPU. Most NN toolkits do come with CPU versions, but the speed is drastically reduced. If you happen to have an NVIDIA graphics card, you can exploit some GPU computation: the best you can do is install CUDA and also use cuDNN (a quick way to verify that the setup worked is sketched after this list). But if you are willing to spend a few bucks, the best option is either to buy a GPU or to use a cloud service (see below).
- Buying a GPU: If you have quite a bit to spend, or perhaps you are into gaming, a viable option might be to buy a new GPU. CNN Benchmarks has a decent comparison of some standard GPUs. In my current lab (VIP lab, IIT Bombay), we have a GTX 1080 Ti, which is usually sufficient for almost all kinds of computation.
- Cloud Services: Buying a GPU is obviously not viable for everyone. Fortunately, there is a cheaper way out: cloud services offered by Amazon, Google and Oracle. The best part is that all of them have a free starter pack. Amazon gives out $75 for free with the GitHub Student Pack, Google Cloud at one point gave out $300 in computation credits, and Oracle handed out around $5000 (yes, that's true) to students. I have only used the Amazon EC2 service once. While the first-time setup can be a bit messy, once you get the hang of it, it is quite simple. The amount charged is also not too much; to the best of my knowledge it is around $0.9 for one hour of computation on an EC2 instance (see AWS pricing). Definitely worth checking these out. There is also a great tutorial on getting started with AWS by fast.ai.
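Once CUDA and cuDNN are installed (on your laptop, a lab machine, or a cloud instance), it is worth confirming that your framework can actually see the GPU. Here is a minimal sanity check, assuming PyTorch is installed; TensorFlow and the other frameworks have equivalent calls.

```python
# Minimal sketch: check that the CUDA/cuDNN setup is visible to PyTorch.
import torch

print(torch.cuda.is_available())           # True if a usable GPU was found
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # e.g. the GTX 1080 Ti mentioned above
    x = torch.randn(3, 3).cuda()           # move a small tensor to the GPU
    print(x.device)
```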
Aside: Use screen when you ssh
When you have a GPU in your lab, or you are using a cloud service, you will be connecting to the server using ssh. Suppose you have a process running in the terminal: how do you access that same terminal after you log out? That is where screen comes in. It creates a persistent environment within the terminal. All you need to do is install screen. To start a new environment, run screen -S session_name. After starting your code, press Ctrl+a followed by Ctrl+d, which detaches the environment. To reattach, use screen -r. And voila, you can access your code again.
Software Tools:
When it comes to deep learning, there are a lot of software tools you can use. Here is a comparison of the popular tools currently in use, made by François Chollet (developer of Keras, described below), based on GitHub stars and current publications on arXiv.
I will give a brief introduction to the pros and cons of some of the libraries I have used. An amazing introduction to this topic is in Lecture 8 of CS231n.
- Tensorflow: As far as I know, this is the most widely used deep learning framework. It is maintained by Google and all of the code is open-sourced (Tensorflow). It has some potential benefits in distributed settings (I don't want to expand much on this, but Google is your friend). The best feature is perhaps TensorBoard, which includes some great visualizations to monitor your training. Everything is written in Python and there is a lot of flexibility. The only disadvantage is that the learning curve is slightly steep, and for me at least a few things are quite non-intuitive (see the sketch after this list). But once past that, it is definitely a great tool to use.
- Pytorch: A relatively new addition to the available frameworks, but the code is much cleaner than Tensorflow's. A lot of the abstractions we use in Python carry over directly to PyTorch, and in some sense it is a replacement for the popular numpy library (see the sketch after this list). I really like PyTorch because of its simplicity. It is maintained by Facebook and all the code is open-sourced as well (Pytorch). Another thing to note is that it too is under very rapid development. I cannot really think of a disadvantage, except that it is relatively new and is undergoing massive changes.
- Keras: Keras is a wrapper around a backend, which can be either Tensorflow or Theano. Unfortunately Theano has stopped development, so it is best to use the Tensorflow backend. Keras eases coding in Tensorflow and makes it amazingly intuitive (see the sketch after this list). Again, all the code is open-sourced (Keras). Coding becomes much easier than in raw Tensorflow. The only disadvantage is that it is sometimes difficult to get a particular parameter out of a model, but for most cases it makes life much easier.
- Caffe: Caffe is written in C++ and is quite efficient. The information about the layers is given in a protobuf file, and to train you just need to call a particular source file. There is also a Python wrapper called pycaffe. It comes from the University of California, Berkeley, and is also open-sourced (Caffe). There is a Model Zoo as well, a collection of trained models put up by different researchers. The biggest advantage of Caffe is perhaps deployment: all you need is a deploy.prototxt and a weights.caffemodel file, and voila, you are done. The disadvantage is that the installation is slightly painful and multiple errors creep in. Also, since it is not exactly a programming language, it can sometimes be difficult to debug.
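To make the comparison above concrete, here is roughly the same tiny computation in each of the Python frameworks. First, a minimal Tensorflow sketch in the define-then-run style that was standard when I wrote this (TF 1.x): you build a static graph first, and nothing actually runs until you push data through it in a session, which is part of what makes the learning curve steep.

```python
# Minimal TensorFlow 1.x-style sketch: define a static graph, then run it.
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])   # input declared, not supplied
w = tf.Variable(tf.random_normal([3, 1]))
y = tf.matmul(x, w)

with tf.Session() as sess:                        # nothing runs until here
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))
```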
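The same idea in PyTorch: tensors behave much like numpy arrays, the graph is built on the fly by ordinary Python code, and gradients come for free. A minimal sketch (assuming a PyTorch version with requires_grad, i.e. 0.4 or later):

```python
# Minimal PyTorch sketch: eager, numpy-like tensors with autograd.
import torch

x = torch.randn(4, 3)                       # a random 4x3 input batch
w = torch.randn(3, 1, requires_grad=True)   # ask autograd to track this tensor

loss = (x @ w).sum()                        # plain Python math builds the graph
loss.backward()                             # fills in w.grad automatically
print(w.grad)
```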
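And finally Keras, where a small model is a few declarative lines; this is what I mean by it making Tensorflow coding intuitive. A minimal sketch (the layer sizes here are purely illustrative):

```python
# Minimal Keras sketch: a small classifier declared layer by layer.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(64, activation='relu', input_shape=(100,)),  # hidden layer
    Dense(10, activation='softmax'),                   # 10-way classifier
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()   # prints the architecture and parameter counts
```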
I might add more as I continue my path as a computer vision researcher, but I guess that's it for now.
If you have any comments on my post, feel free to shoot me an email at ark.sadhu2904@gmail.com.