New Project: Gigyasa


This post introduces my new writing project, “Gigyasa”. It is an online blog magazine where I will write and share some of the wisdom about life in general that I have collected over the years.

Previously, I had been jotting down notes here and there in my notebooks, much of which were just sporadic writings. However, I would say the theme mainly revolved around:

  • mindful living,
  • self-transformation habits,
  • timeless principles,
  • contemplative thoughts,
  • and similar matters of spirituality.

As I flipped through my notebooks last week, I realized that I had quite a few good notes (at least in my opinion (: ). It seemed a pity to let them just rot away in my basement.

So, I decided to start this blog magazine to make use of my existing writings, and also to keep it as a platform for writing about similar topics in the future.

These are just my initial thoughts and plans for Gigyasa; let’s see where it goes from here.

Please feel free to check out Gigyasa and give me your constructive feedback.


Web Scraping Quotes From Good Reads


GoodReads is a great resource for information about books, authors, and interesting quotations.

In this post, I will share a piece of code that will let you scrape quotations from the site. The code is written for Python’s Scrapy framework.

Getting Started

To get started with scraping quotes from your favorite author, first search for the author’s name in the quotes section.

Quote Search Section

Once you type in the author’s name, you can inspect the displayed results for CSS selectors and XPaths that point to the data you want to scrape.

Looking For Xpaths

Code For Spider

Now that we have data to scrape, the next step is to create a spider that will extract it from this page. A spider in Scrapy is basically a class that you can use to scrape data from a location. You can find more info on Scrapy here.

Basically, we want to loop over each “quoteDetails” section to get the author and quote text.

for sel in response.css('div.quoteDetails'):
    # All text nodes directly inside the quoteText div
    quote = sel.css('div.quoteText::text').extract()
    # The author name is the first link inside the quoteText div
    author = sel.css('div.quoteText a::text').extract_first()
    item = GoodreadsItem()
    item['author'] = author
    item['quote'] = quote
    yield item

Each quote gets extracted as a “GoodreadsItem” object.

Next, to follow the link to the next page of results, the following code can be used:

checkNextPage = response.xpath('//a[@class="next_page"]').extract_first()
nextPageLink = response.xpath('//a[@class="next_page"]/@href').extract_first()
nextPageFullUrl = response.urljoin(nextPageLink)
# Follow the next page, if one exists (requires `import scrapy`)
if checkNextPage:
    yield scrapy.Request(nextPageFullUrl, callback=self.parse)
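For context, response.urljoin resolves the (usually relative) href against the current page’s URL, just like the standard library’s urllib.parse.urljoin. A quick illustration with a made-up page URL and next-page href:

```python
from urllib.parse import urljoin

# Hypothetical current page and relative "next page" href
base = "https://www.goodreads.com/quotes/search?page=2&q=Rumi"
next_href = "/quotes/search?page=3&q=Rumi"

full_url = urljoin(base, next_href)
print(full_url)  # https://www.goodreads.com/quotes/search?page=3&q=Rumi
```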


That’s all the code needed for scraping. It’s quite easy and fun to scrape with Scrapy. Good luck!