
Interview with Ashish Ranjan Jha

Ashish Ranjan Jha is the author of Mastering PyTorch. We got the chance to sit down with him and find out more about his experience of writing with Packt.

Q: What is/are your specialist tech area(s)?

Ashish: Machine Learning, Deep Learning, Computer Vision, Software Engineering.

Q: How did you become an author for Packt? Tell us about your journey. What was your motivation for writing this book?

Ashish: I was contacted on LinkedIn by Devika Battike, an Acquisition Editor at Packt. I got interested because I liked the topic – it’s something I have been working on for the past few years – and this was a great opportunity to share my knowledge and experience in my own unique way.

Q: What kind of research did you do, and how long did you spend researching before beginning the book?

Ashish: Although I have been working in the field of deep learning since 2014, there were topics I needed to dig into deeply to ensure high-quality content for the book. I read lots of articles, especially from PyTorch’s official website, and explored openly available datasets I could use for the exercises in the book. Much of the research happened alongside the writing process, but I spent about a month researching before I even started to write.

Q: Did you face any challenges during the writing process? How did you overcome them?

Ashish: Having a full-time job alongside writing makes for a very tight schedule. Having a well-defined timeline before the start of the project served as an anchor throughout.

Once the process began, managing different phases of different chapters was a unique challenge. For example, I was writing the first draft of Chapter 10, addressing comments from the first review of Chapter 3, and addressing comments from a technical review of Chapter 1, all at the same time. I’d say there is no working around it; you just have to be prepared for such overlapping pieces of work that require different mindsets.

Q: What’s your take on the technologies discussed in the book? Where do you see these technologies heading in the future?

Ashish: PyTorch is at the centre of this book, and it is legitimately gaining the popularity and support to emerge as the leading deep learning library. I can only see PyTorch growing further in the future.

This book touches upon a few external libraries, such as Optuna (for hyperparameter search), Captum (for model interpretability), and ONNX (for a universal model format), that enhance the functionality provided by PyTorch. As more such libraries are created for specialised sub-fields within deep learning, the PyTorch ecosystem is becoming an increasingly sophisticated and holistic toolkit for deep learning.

Q: Why should readers choose this book over others already on the market? How would you differentiate your book from its competition?

Ashish: While there are some very well-written theoretical books on deep learning, such as the one by Ian Goodfellow et al., I have rarely found a good hands-on deep learning book. Applied learning is the need of the hour, especially in deep learning, because we have many bright minds working on deep learning in academia, but few in industry (relative to the demand). This book is an attempt to bridge that gap – to let people grasp the different concepts, elements, and applications of deep learning by working through carefully designed, guided PyTorch coding exercises.

Q: What are the key takeaways you want readers to come away from the book with?

Ashish: You should be able to structure any deep learning problem into the usual components, as elaborated repeatedly across the various exercises in the book: (i) data ingestion/loading using PyTorch’s straightforward data APIs, (ii) model architecture definition using the “nn” module, (iii) model training and testing routines, and (iv) model performance analysis/interpretation.
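As a rough sketch (not an exercise from the book), those four components might fit together like this on a synthetic regression task; the data, architecture, and hyperparameters are all illustrative:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# (i) Data ingestion/loading with PyTorch's data APIs.
# Synthetic linear data stands in for a real dataset.
X = torch.randn(256, 4)
true_w = torch.tensor([[1.0], [-2.0], [0.5], [3.0]])
y = X @ true_w + 0.1 * torch.randn(256, 1)
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

# (ii) Model architecture definition using the nn module.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))

# (iii) Model training and testing routines.
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

def train_epoch():
    model.train()
    total = 0.0
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
        total += loss.item() * len(xb)
    return total / len(loader.dataset)

first = train_epoch()
for _ in range(9):
    last = train_epoch()

# (iv) Model performance analysis: the training loss should drop.
print(f"epoch 1 loss: {first:.3f} -> epoch 10 loss: {last:.3f}")
```

The same skeleton carries over to real problems: only the dataset, the architecture, and the evaluation metric change.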

You should also gain experience with deep learning engineering in production systems, such as model serving, distributed training, and rapid prototyping. And finally, you should be able to explore and utilise the rich PyTorch ecosystem, with some amazing third-party libraries that can work wonders for your specific use case or sub-field of specialisation within deep learning.

Q: What advice would you give to readers learning tech? Do you have any top tips?

Ashish: Get your hands dirty – work the exercises out yourself. That is the best way to learn. Furthermore, do not limit yourself to what is illustrated in the exercises. Go beyond them; try to break the code and examine what really happens underneath. Spending time this way will accelerate your learning.

Be aware that technologies change and upgrade rapidly. Keep an eye on developments in the specific field of your interest: existing tools may change, and new tools may emerge.

Q: Can you share any blogs, websites and forums to help readers gain a holistic view of the tech they are learning?

Ashish:

Reddit: https://www.reddit.com/r/MachineLearning/
Deeplearning.AI: https://www.deeplearning.ai/thebatch/
Distill: https://distill.pub/
Sebastian Ruder’s blog: https://ruder.io/

Q: How would you describe your author journey with Packt? Would you recommend Packt to aspiring authors?

Ashish: Writing any book is a challenge, and the success or failure of such a venture very often depends on the support provided by the publishing team. I am delighted to say that the publishing team at Packt has been excellent throughout – consistent communication, very supportive with queries, and always polite and professional.

Q: Do you belong to any tech community groups?

Ashish: No.

Q: What are your favorite tech journals? How do you keep yourself up to date on tech?

Q: How did you organize, plan, and prioritize your work and write the book?

Ashish: I began working on this book in May 2020. As this is the first edition, I decided on the contents of the book together with Packt. With the initial two-line summaries of each chapter in place, I started writing chapter drafts, using Google Docs as my writing tool. Before writing each chapter, I always did some initial research on its contents. Different chapters demanded different amounts of time; I remember spending quite some time reading up for Chapter 9, for instance, which was all about deep reinforcement learning, where I wanted to give readers a broad overview of the field besides training a DRL agent using PyTorch.

Each chapter has at least one exercise, which means I would design the exercises and push them to the book’s GitHub repository, most of the time even before writing much of the chapter text. Besides the code, the images and figures were another important element to take care of. I mostly used draw.io to generate the model architectures and other schematic diagrams.

Each chapter went through rounds of reviews, starting from the basic language review, through technical review, all the way to a higher-level editorial review. These reviews really helped refine the quality of the content, and I am thankful to the Packt team for that.

Q: What is the one writing tip that you found most crucial and would like to share with aspiring authors?

Ashish: Attention to detail – once it is out, it is out. Be quadruply sure to get it all right: the text, the images, the code, everything.

You can find Ashish’s book, Mastering PyTorch, on Amazon.