I'm very interested in these lectures and am looking forward to digging into them.
I was wondering if you could provide some feedback on whether deep learning would be useful for classifying images by whether or not they contain text. For example, given a set of images, I'd like to separate the ones that have text from the ones that don't. A dataset could look like this:
Yeah, for sure - these images are pretty different in their composition, so it should be fairly easy to classify them. How large is your dataset? Do you still need to collect one?
With small amounts of data, transfer learning is usually the most effective approach. There's a great tutorial on retraining Inception for your own categories in TensorFlow: https://www.tensorflow.org/how_tos/image_retraining/.
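To make the transfer-learning idea concrete, here's a minimal sketch using the tf.keras API rather than the retrain script from that tutorial: freeze a pretrained ImageNet backbone and train only a small binary head for text vs. no-text. The directory layout (`data/text/`, `data/no_text/`), image size, and hyperparameters are just assumptions for illustration.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)  # assumed input size; InceptionV3 without its top accepts sizes >= 75x75

# Hypothetical layout: data/text/*.jpg and data/no_text/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=16)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=16)

# Pretrained ImageNet backbone, frozen so only the new classifier head is trained
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet",
    input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # text vs. no text
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Because only the final dense layer is trained, a setup like this can get by with a few hundred images per class, though more data and augmentation will generally help.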
I don't have a dataset at the moment; I would have to build one. I was thinking of about 200 images in the dataset, with 100 containing text and 100 without. Would that be a big enough dataset for transfer learning? Please let me know if there is a dataset you know of that I could leverage.