An Introduction to Search Engines and Crawler Frameworks for Python

Full-Text Search Engines

1. Sphinx

1.1. What is Sphinx

Sphinx is a full-text search engine developed by Andrew Aksyonoff of Russia. It aims to provide other applications with fast, low-footprint, high-relevance full-text search, and it integrates easily with SQL databases and scripting languages. The current release has built-in support for MySQL and PostgreSQL data sources, and can also read specially formatted XML data from standard input. By modifying the source code, users can add new data sources (for example, native support for other kinds of DBMS).

Official APIs for PHP, Python, Java, Ruby, and pure C are included in the Sphinx distribution.
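To sketch how a SQL data source is wired up, a minimal sphinx.conf might look like the following (the database credentials, table, and index path here are hypothetical):

```
source blog_src
{
    type      = mysql
    sql_host  = localhost
    sql_user  = sphinx        # hypothetical credentials
    sql_pass  = secret
    sql_db    = blog
    sql_query = SELECT id, title, body FROM posts
}

index blog_idx
{
    source = blog_src
    path   = /var/lib/sphinx/blog_idx
}
```

Running `indexer blog_idx` builds the index, and `searchd` then answers queries against it through the official APIs.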

 

1.2. Sphinx Features

  • Fast indexing (peak performance up to 10 MB/sec on modern CPUs);
  • High-performance search (average query latency under 0.1 s on 2–4 GB of text data);
  • Handles huge collections (known to index over 100 GB of text, and 100 million documents on a single-CPU system);
  • Excellent relevance ranking, combining phrase proximity with statistical (BM25) scoring;
  • Distributed search support;
  • Phrase search support;
  • Document excerpt (snippet) generation;
  • Can serve search as a MySQL storage engine;
  • Multiple match modes: boolean, phrase, word similarity, and more;
  • Up to 32 full-text fields per document;
  • Extra per-document attributes (e.g. group IDs, timestamps);
  • Word-breaking (tokenization) support;

1.3. Chinese Word Segmentation in Sphinx

Full-text search for Chinese differs from English and other Latin-script languages: the latter break words on spaces and other delimiters, whereas Chinese must be segmented by meaning. Most databases, MySQL included, do not yet support Chinese full-text search, so a number of Chinese full-text plugins for MySQL have appeared in China; one of the better ones is hightman's Chinese tokenizer. Likewise, Sphinx needs extra plugins to do full-text search on Chinese; the ones I know of are coreseek and sfc.

2. Xapian

 

Xapian is an Open Source Search Engine Library, released under the GPL. It's written in C++, with bindings to allow use from Perl, Python, PHP, Java, Tcl, C#, and Ruby (so far!).

Xapian is a highly adaptable toolkit which allows developers to easily add advanced indexing and search facilities to their own applications. It supports the Probabilistic Information Retrieval model and also supports a rich set of boolean query operators.

 

Crawlers

1. Scrapy

1.1. What is Scrapy

Scrapy is a fast high-level screen scraping and web crawling framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.

1.2. Scrapy Features

Simple

Scrapy was designed with simplicity in mind, by providing the features you need without getting in your way

Productive

Just write the rules to extract the data from web pages and let Scrapy crawl the entire web site for you

Fast

Scrapy is used in production crawlers that completely scrape more than 500 retailer sites daily, all on one server

Extensible

Scrapy was designed with extensibility in mind and so it provides several mechanisms to plug new code without having to touch the framework core

Portable

Scrapy runs on Linux, Windows, Mac and BSD

Open Source and 100% Python

Scrapy is completely written in Python, which makes it very easy to hack

Well-tested

Scrapy has an extensive test suite with very good code coverage

HTML Processing

1. Beautiful Soup

Beautiful Soup is an HTML/XML parser written in Python that copes well with malformed markup and builds a parse tree from it. It offers simple, idiomatic operations for navigating, searching, and modifying the parse tree, which can save you a great deal of programming time. For Ruby, use Rubyful Soup.
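A minimal sketch of that tolerance for bad markup: the snippet below parses HTML whose last tags are never closed, and Beautiful Soup still produces a navigable tree (the markup is made up):

```python
from bs4 import BeautifulSoup

# Malformed HTML: the second <p> and the <b> are never closed.
broken = "<html><body><p>First</p><p>Second with <b>bold"
soup = BeautifulSoup(broken, "html.parser")

# Beautiful Soup closes the dangling tags and builds a parse tree anyway.
paragraphs = [p.get_text() for p in soup.find_all("p")]
print(paragraphs)
```

The same `find_all` / `get_text` navigation works regardless of how broken the input was.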

 

Interacting with Web Sites

1. mechanize

Stateful programmatic web browsing in Python, after Andy Lester’s Perl module

 

  • mechanize.Browser and mechanize.UserAgentBase implement the interface of urllib2.OpenerDirector, so:
    • any URL can be opened, not just http:
    • mechanize.UserAgentBase offers easy dynamic configuration of user-agent features like protocol, cookie, redirection and robots.txt handling, without having to make a new OpenerDirector each time, e.g. by calling build_opener().
  • Easy HTML form filling.
  • Convenient link parsing and following.
  • Browser history (.back() and .reload() methods).
  • The Referer HTTP header is added properly (optional).
  • Automatic observance of robots.txt.
  • Automatic handling of HTTP-Equiv and Refresh.
