From 01a836ef3aef6f08ea721c1e30cfdf15c7528e73 Mon Sep 17 00:00:00 2001
From: Peter Thaleikis
Date: Wed, 25 Nov 2020 15:06:54 +0400
Subject: [PATCH] Typos

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index a227339..3db85ac 100644
--- a/README.md
+++ b/README.md
@@ -31,7 +31,7 @@ The crawled data is not as *clean* as the one obtained by the APIs, but the bene
     scrapy crawl TweetScraper -a query="foo,#bar"
 
-    where `query` is a list of keywords seperated by comma and quoted by `"`. The query can be any thing (keyword, hashtag, etc.) you want to search in [Twitter Search](https://twitter.com/search-home). `TweetScraper` will crawl the search results of the query and save the tweet content and user information.
+    where `query` is a list of keywords separated by comma and quoted by `"`. The query can be anything (keyword, hashtag, etc.) you want to search in [Twitter Search](https://twitter.com/search-home). `TweetScraper` will crawl the search results of the query and save the tweet content and user information.
 
 3. The tweets will be saved to disk in `./Data/tweet/` in default settings and `./Data/user/` is for user data. The file format is JSON. Change the `SAVE_TWEET_PATH` and `SAVE_USER_PATH` in `TweetScraper/settings.py` if you want another location.