
Interview with Serg Masis

Serg is the author of Interpretable Machine Learning with Python. We got the chance to sit down with him and find out more about his experience of writing with Packt.

Q: What are your specialty areas?

Serg: Data Scientist, Computer Scientist, (former) Web/app Developer, Internet of Things, Edge Computing, Cloud Computing, Big Data, Machine Learning, Domain-Driven Design

Q: How did you become an author for Packt? Tell us about your journey. What was your motivation for writing this book?

Serg: I was approached by a Packt Acquisition Editor who had encountered a sample of my writing on the topic. She asked if I would be interested in writing a book about it. I loved Interpretable Machine Learning (aka Explainable Artificial Intelligence, or XAI), so of all the subjects I had written about, it was serendipitous that she approached me with that one. I was motivated to write it because, at the time, there were only two books for practitioners on the subject, and I felt I had quite a bit to contribute. However, I had to wonder whether the timing was right. It was the beginning of the global COVID-19 pandemic, so I knew I would likely be stuck indoors for months, which would create an environment conducive to writing. Still, there was also a lot of uncertainty and anxiety. In the end, I thought it would keep me focused on something positive even though a lot of negative was going on around me, so the benefits outweighed the drawbacks.

Q: What kind of research did you do, and how long did you spend researching before beginning the book?

Serg: Once I agreed to write the book, I was asked to write an outline and, upon approval, to write the book according to it. It took me three weeks to deliver the outline. I did most of my general research while writing it because I had to determine each chapter’s title, description, headings, and subheadings. To this end, I looked for appropriate datasets for each chapter and checked whether the libraries I wanted to use still worked or whether something better was already on the market. I also gathered academic literature to assist with the explanations in some of the most advanced chapters.

Q: Did you face any challenges during the writing process? How did you overcome them?

Serg: I estimated from the beginning that somewhere between 30% and 50% of my book would be code and code output. So, after I had issues in the first chapter copying and pasting the code and its corresponding output into Microsoft Word, I decided to write the book in Jupyter notebooks. I then wrote a script that would convert each notebook into Markdown using nbconvert, automatically modify the Markdown to meet Packt’s styling guidelines, and finally leverage Pandoc to convert it to Word format. Once implemented, I didn’t have to worry about inconsistencies between my source code and the styled chapter contents. Another challenge had to do with bringing in external datasets. I wanted it to be seamless and not to spend a lot of time on data preparation because the book is not about those steps. Therefore, I created a library called machine-learning-datasets to take care of them. I later realized I could also use this library to store simple functions to evaluate models or plot outcomes, since having to teach how to write them detracted from the main lessons of each chapter. Instead, I could ask readers to run the functions and start discussing the interpretations.
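For readers curious what such a notebook-to-Word pipeline can look like, here is a minimal sketch. It is not Serg’s actual script: it assumes the nbformat and nbconvert Python packages plus the pandoc command-line tool are installed, it omits the Packt-specific styling tweaks, and the chapter file name is a hypothetical placeholder.

```python
# Minimal sketch of a notebook-to-Word pipeline (not the author's actual script).
# Assumptions: nbformat/nbconvert are installed and the pandoc CLI is on PATH.
# Image outputs and publisher styling rules are ignored here for brevity.
import subprocess
from pathlib import Path

import nbformat
from nbconvert import MarkdownExporter


def notebook_to_word(notebook_path: str) -> Path:
    # 1. Read the Jupyter notebook.
    nb = nbformat.read(notebook_path, as_version=4)

    # 2. Convert it to Markdown with nbconvert.
    body, _resources = MarkdownExporter().from_notebook_node(nb)

    # 3. Any styling adjustments to the Markdown would go here (omitted).

    md_path = Path(notebook_path).with_suffix(".md")
    md_path.write_text(body, encoding="utf-8")

    # 4. Hand the Markdown to pandoc to produce a Word document.
    docx_path = md_path.with_suffix(".docx")
    subprocess.run(["pandoc", str(md_path), "-o", str(docx_path)], check=True)
    return docx_path


if __name__ == "__main__":
    notebook_to_word("chapter01.ipynb")  # hypothetical chapter notebook
```

The appeal of this kind of setup, as Serg describes, is that the notebook stays the single source of truth, so the code and the chapter text never drift apart.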

Q: What’s your take on the technologies discussed in the book? Where do you see these technologies heading in the future?

Serg: The technologies discussed in the book, from the Python language to libraries like Pandas, TensorFlow, and SHAP, are constantly evolving, so chances are the versions five years from now won’t be compatible with the current ones. However, if we abstract away the technologies and look at the methodologies and theory behind them, those will very much remain relevant. In other words, data frames, neural networks, and game-theory-inspired interpretability methods will still exist in five years, and understanding how they work and when to leverage them will be an essential skill. As for Python, I think machine learning will gravitate towards no-code/low-code solutions in the medium term. Python will still be an important skill for advanced ML applications, but I believe most ML engineers will likely be working with drag-and-drop interfaces. And with coding out of the way, I think ML practitioners will have more time on their hands to focus on interpreting outcomes, debugging anomalies, mitigating bias, and concepts that seem quaint today, like validating and certifying models for fairness and robustness.

Q: Why should readers choose this book over others already on the market? How would you differentiate your book from its competition?

Serg: When I started writing this book, there were two books for practitioners on the topic, although one of them was more of a short guide. Neither was particularly hands-on or in Python, so my book was designed to be hands-on and to focus on methods implemented in Python. The competing titles also focused on understanding models through interpretable ML methods, and in particular model-agnostic methods. Before my book’s release, two more interpretable ML books came out, and they had the same gaps. My book is more comprehensive, tackling both sides of the equation: the diagnosis and the treatment of interpretability concerns. It doesn’t stop at transparency but also covers fairness and accountability, which are often ignored or underplayed by other practitioner-oriented books on the topic. Fairness, in particular, is given special attention, since uncovering and mitigating bias in machine learning models with code examples hasn’t been covered in any depth in competing titles. The book also discusses deep learning-specific interpretability methods, such as those used for convolutional and recurrent neural networks. Unlike other books, mine is mission-centric: every chapter is a different case study that takes the reader on a journey through the topics the chapter tackles. Great effort has been taken to make these case studies as realistic as possible, so “toy datasets” such as MNIST, Iris, and Titanic are not used, since they are too clean to depict real-world conditions.

Q: What are the key takeaways you want readers to come away with from the book?

Serg: Interpretation is often seen as an essential skill for descriptive analytics, but it’s also very much leveraged in predictive and prescriptive analytics. Training a good machine learning model is about more than optimizing predictive performance, since a model’s goodness can be measured in many ways, such as those encompassed by the concepts of fairness and robustness. Interpretable machine learning is more than a toolset for making complex models interpretable: practitioners can use it to learn from a model and to improve it in more ways than predictive performance alone. Interpretable machine learning is also how Ethical AI, Responsible AI, and Fair AI get implemented by ML practitioners.
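As a small illustration of measuring a model’s goodness in more than one way, the hedged sketch below computes both plain accuracy and a demographic-parity gap. It is not taken from the book’s case studies; the arrays and the group labels are hypothetical, and it assumes scikit-learn and NumPy are installed.

```python
# Hedged illustration: one model, two lenses on "goodness".
# Hypothetical toy arrays; not from the book's case studies.
import numpy as np
from sklearn.metrics import accuracy_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                  # ground-truth labels
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])                  # model predictions
group = np.array(["A", "A", "B", "B", "A", "B", "A", "B"])    # protected attribute

# Predictive performance: plain accuracy.
acc = accuracy_score(y_true, y_pred)

# One fairness lens: demographic parity gap,
# |P(pred = 1 | group A) - P(pred = 1 | group B)|.
rate_a = y_pred[group == "A"].mean()
rate_b = y_pred[group == "B"].mean()
parity_gap = abs(rate_a - rate_b)

print(f"accuracy={acc:.2f}, demographic parity gap={parity_gap:.2f}")
```

A model can score well on the first metric and poorly on the second, which is the point of evaluating it along more than one dimension.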

Q: What advice would you give to readers learning tech? Do you have any top tips?

Serg: I have three tips:

1) Learning tech often becomes solely about hard skills, like a programming language, a library, or how to use a cloud service. However, HOW you do things is only a function of WHY you need to do them. In other words, technology is a means to an end, not an end in itself. For this reason, it’s important that readers grasp the business problem they are trying to solve first and only then choose tools and methods accordingly.

2) Engineering can be tech-centric. In data science, however, tech is simply what connects the dots between our data and a solution, be it insights or a machine learning model. And even though good engineering is what makes this technology work, we mustn’t forget the “science” in data science, which is about inquiry, research, and experimentation. These are all concepts that involve ample interpretation.

3) And in business, interpretation is not only how you understand the problem but also how you holistically evaluate how well your solution solved the problem (and how it didn’t), and then tell that story to stakeholders.

Q: Do you have a blog that readers can follow?

Serg: Yes, they can find my blog at https://www.serg.ai/#blog, or follow me via LinkedIn at https://www.linkedin.com/in/smasis/ since I post pretty much the same stuff.

Q: Can you share any blogs, websites and forums to help readers gain a holistic view of the tech they are learning?

Serg: Good blogs include Machine Learning Mastery (https://machinelearningmastery.com/), Analytics Vidhya (https://www.analyticsvidhya.com/blog/), Towards AI (https://towardsai.net/p/category/artificial-intelligence), and KDnuggets (https://www.kdnuggets.com/news/index.html). I also recommend Aggregate Intellect (https://ai.science/), which has a lot of content involving XAI and provides a community for learning AI applications through discussion groups and competitions. DataScienceGO (https://www.datasciencego.com/) has many useful events throughout the year for learning about data science, as well as a wonderful community.

Q: How would you describe your author journey with Packt? Would you recommend Packt to aspiring authors?

Serg: It definitely has been an incredible journey. As a first-time author, it was critical to get support and guidance from experienced editors, which they did provide. I would highly recommend Packt to aspiring authors.

Q: Do you belong to any tech community groups?

Serg: Yes. I’m a member of Aggregate Intellect and attend several local meetups, including PyData, which has many chapters worldwide, and a local civic tech group (https://chihacknight.org/).

Q: What are your favorite tech journals? How do you keep yourself up to date on tech?

Serg: To stay up to date on the latest trends and research in AI, I subscribe to Alpha Signal (https://alphasignal.ai/) and read MIT News (https://news.mit.edu/topic/artificial-intelligence2). I also follow AI researchers on Twitter and attend many conferences.

Q: How did you organize, plan, and prioritize your work and write the book?

Serg: The book was written in 9½ months. The outline provided a good roadmap with clear descriptions of all 14 chapters. Packt allocated time to complete each chapter and provided a schedule to that end: short chapters had a week, and some longer chapters had three weeks. Since it’s a hands-on book, I devoted about 65% of the time for each chapter to writing and testing the code in Jupyter. Writing the text surrounding the code took another 30%, and the remaining 5% went to exporting to Word and formatting.

Q: What is that one writing tip that you found most crucial and would like to share with aspiring authors?

Serg: Spend as much time as you can researching and creating a detailed outline of your book.

You can find Serg’s book, Interpretable Machine Learning with Python, on Amazon.com.