abc.ABCMeta
  scrapy.item._BaseItemMeta - Undocumented
    scrapy.item.ItemMeta - Metaclass of Item that handles field definitions.
argparse.ArgumentParser
  scrapy.utils.curl.CurlParser - Undocumented
AssertionError
  scrapy.exceptions.ContractFail - Error raised in case of a failing contract
collections.abc.MutableMapping
  scrapy.item.DictItem - No class docstring; 0/1 instance variable, 0/1 class variable, 1/13 methods documented
    scrapy.Item - Base class for scraped items.
  scrapy.settings._DictProxy - Undocumented
  scrapy.settings.BaseSettings - Instances of this class behave like dictionaries, but store priorities along with their (key, value) pairs, and can be frozen (i.e. marked immutable).
    scrapy.settings.Settings - This object stores Scrapy settings for the configuration of internal components, and can be used for any further customization.
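
The priority semantics of BaseSettings are easiest to see in isolation. A minimal sketch, using Scrapy's predefined priority names ('default', 'project', 'cmdline'):

    from scrapy.settings import BaseSettings

    settings = BaseSettings({"CONCURRENT_REQUESTS": 16}, priority="project")
    settings.set("CONCURRENT_REQUESTS", 32, priority="cmdline")  # higher priority replaces the value
    settings.set("CONCURRENT_REQUESTS", 8, priority="default")   # lower priority is ignored
    print(settings.getint("CONCURRENT_REQUESTS"))  # 32
    settings.freeze()  # frozen (immutable): further set() calls raise TypeError
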
collections.OrderedDict
  scrapy.utils.datatypes.LocalCache - Dictionary with a finite number of keys.
dict
  scrapy.Field - Container of field metadata
  scrapy.utils.datatypes.CaselessDict - No class docstring; 0/1 class variable, 2/12 methods, 0/1 class method documented
    scrapy.http.headers.Headers - Case insensitive http headers dictionary
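
Since Field is no more than a dict for metadata, declaring an item is plain class syntax. A brief sketch (the Product class and its fields are illustrative):

    import scrapy

    class Product(scrapy.Item):
        # Field metadata is arbitrary; keys such as "serializer" are read
        # by other components (e.g. the exporters), not by the item itself.
        name = scrapy.Field()
        price = scrapy.Field(serializer=str)

    item = Product(name="Foo", price=10)
    item["name"] = "Bar"   # dict-like access to declared fields
    # item["stock"] = 5    # KeyError: only declared fields are allowed
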
enum.Enum
  scrapy.core.http2.stream.StreamCloseReason - Undocumented
Exception
  scrapy.core.downloader.handlers.http11.TunnelError - An HTTP CONNECT tunnel could not be established by the proxy.
  scrapy.exceptions.CloseSpider - Raise this from callbacks to request the spider to be closed
  scrapy.exceptions.DontCloseSpider - Request the spider not to be closed yet
  scrapy.exceptions.DropItem - Drop item from the item pipeline
    scrapy.pipelines.images.NoimagesDrop - Product with no images exception
  scrapy.exceptions.IgnoreRequest - Indicates a decision was made not to process a request
    scrapy.spidermiddlewares.httperror.HttpError - A non-200 response was filtered
  scrapy.exceptions.NotConfigured - Indicates a missing configuration situation
  scrapy.exceptions.NotSupported - Indicates a feature or method is not supported
  scrapy.exceptions.StopDownload - Stop the download of the body for a given response. The 'fail' boolean parameter indicates whether or not the resulting partial response should be handled by the request errback. Note that 'fail' is a keyword-only argument.
  scrapy.exceptions.UsageError - To indicate a command-line usage error
  scrapy.pipelines.files.FileException - General media error exception
    scrapy.pipelines.images.ImageException - General image error exception
  scrapy.utils.signal._IgnoredException - Undocumented
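
Several of these exceptions are control-flow signals rather than errors; for example, an item pipeline discards an item by raising DropItem. A minimal sketch (the validation rule is illustrative):

    from scrapy.exceptions import DropItem

    class PriceValidationPipeline:
        def process_item(self, item, spider):
            # Raising DropItem stops further pipeline processing for this
            # item; the engine logs the drop instead of treating it as a bug.
            if not item.get("price"):
                raise DropItem(f"Missing price in {item!r}")
            return item
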
h2.exceptions.H2Error
  scrapy.core.http2.protocol.InvalidNegotiatedProtocol - Undocumented
  scrapy.core.http2.protocol.MethodNotAllowed405 - Undocumented
  scrapy.core.http2.protocol.RemoteTerminatedConnection - Undocumented
  scrapy.core.http2.stream.InvalidHostname - Undocumented
itemloaders.ItemLoader
  scrapy.loader.ItemLoader - No summary
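
scrapy.loader.ItemLoader builds on itemloaders.ItemLoader. A minimal sketch of its collect-then-build pattern, assuming the default dict item and illustrative processors:

    from scrapy import Selector
    from scrapy.loader import ItemLoader
    from itemloaders.processors import MapCompose, TakeFirst

    html = "<html><body><h1> Example product </h1></body></html>"

    loader = ItemLoader(selector=Selector(text=html))
    loader.default_output_processor = TakeFirst()  # keep the first non-empty value
    loader.add_xpath("name", "//h1/text()", MapCompose(str.strip))
    print(loader.load_item())  # {'name': 'Example product'}
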
json.JSONDecoder
  scrapy.utils.serialize.ScrapyJSONDecoder - Undocumented
json.JSONEncoder
  scrapy.utils.serialize.ScrapyJSONEncoder - Undocumented
logging.Filter
  scrapy.utils.log.TopLevelFormatter - Keep only top-level loggers' names (direct children of root) in records.
logging.Handler
  scrapy.utils.log.LogCounterHandler - Record log level counts into crawler stats
parsel.Selector
  scrapy.selector.unified.Selector - An instance of Selector is a wrapper over response to select certain parts of its content.
parsel.Selector.selectorlist_cls
  scrapy.selector.unified.SelectorList - The SelectorList class is a subclass of the builtin list class, which provides a few additional methods.
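
Both classes are normally reached through a response, but they also work standalone, which makes the API easy to demonstrate:

    from scrapy.selector import Selector

    html = "<html><body><ul><li>a</li><li>b</li></ul></body></html>"
    sel = Selector(text=html)

    # css()/xpath() return a SelectorList: a list subclass with helpers
    # such as get() (first match) and getall() (all matches).
    items = sel.css("li::text")
    print(items.getall())  # ['a', 'b']
    print(items.get())     # 'a'
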
scrapy.commands.bench._BenchServer - Undocumented
scrapy.commands.ScrapyCommand - No class docstring; 0/2 instance variables, 0/4 class variables, 6/9 methods documented
  scrapy.commands.BaseRunSpiderCommand - Common class used to share functionality between the crawl, parse and runspider commands
    scrapy.commands.crawl.Command - Undocumented
    scrapy.commands.parse.Command - Undocumented
    scrapy.commands.runspider.Command - Undocumented
  scrapy.commands.bench.Command - Undocumented
  scrapy.commands.check.Command - Undocumented
  scrapy.commands.edit.Command - Undocumented
  scrapy.commands.fetch.Command - Undocumented
    scrapy.commands.view.Command - Undocumented
  scrapy.commands.genspider.Command - No class docstring; 0/1 property, 0/1 instance variable, 0/2 class variables, 1/8 methods documented
  scrapy.commands.list.Command - Undocumented
  scrapy.commands.settings.Command - Undocumented
  scrapy.commands.shell.Command - No class docstring; 0/2 class variables, 1/7 methods documented
  scrapy.commands.startproject.Command - No class docstring; 0/1 property, 0/1 instance variable, 0/2 class variables, 1/5 methods documented
  scrapy.commands.version.Command - Undocumented
scrapy.contracts.Contract - Abstract class for contracts
  scrapy.contracts.default.CallbackKeywordArgumentsContract - Contract to set the keyword arguments for the request. The value should be a JSON-encoded dictionary, e.g.:
  scrapy.contracts.default.ReturnsContract - Contract to check the output of a callback
  scrapy.contracts.default.ScrapesContract - Contract to check presence of fields in scraped items: @scrapes page_name page_body
  scrapy.contracts.default.UrlContract - Contract to set the url of the request (mandatory): @url http://scrapy.org
scrapy.contracts.ContractsManager - No class docstring; 0/1 class variable, 1/6 methods documented
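
Contracts are declared in a callback's docstring and executed by the check command. A short sketch combining the default contracts above (spider, URL and field names are illustrative):

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes"

        def parse(self, response):
            """Each @-line below is parsed by ContractsManager.

            @url http://quotes.toscrape.com/
            @returns items 1 16
            @scrapes text author
            """
            for quote in response.css("div.quote"):
                yield {
                    "text": quote.css("span.text::text").get(),
                    "author": quote.css("small.author::text").get(),
                }
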
scrapy.core.downloader.contextfactory.AcceptableProtocolsContextFactory - Context factory used to override the acceptable protocols to set up the OpenSSL.SSL.Context for doing NPN and/or ALPN negotiation.
scrapy.core.downloader.Downloader - Undocumented
scrapy.core.downloader.handlers.datauri.DataURIDownloadHandler - Undocumented
scrapy.core.downloader.handlers.DownloadHandlers - No class docstring; 0/4 instance variables, 1/5 methods documented
scrapy.core.downloader.handlers.file.FileDownloadHandler - Undocumented
scrapy.core.downloader.handlers.ftp.FTPDownloadHandler - Undocumented
scrapy.core.downloader.handlers.http10.HTTP10DownloadHandler - No class docstring; 0/4 instance variables, 0/1 class variable, 1/3 methods, 0/1 class method documented
scrapy.core.downloader.handlers.http11._RequestBodyProducer - Undocumented
scrapy.core.downloader.handlers.http11.HTTP11DownloadHandler - No class docstring; 0/7 instance variables, 0/1 class variable, 1/3 methods, 0/1 class method documented
scrapy.core.downloader.handlers.http11.ScrapyAgent - Undocumented
scrapy.core.downloader.handlers.http2.H2DownloadHandler - Undocumented
scrapy.core.downloader.handlers.http2.ScrapyH2Agent - Undocumented
scrapy.core.downloader.handlers.s3.S3DownloadHandler - Undocumented
scrapy.core.downloader.Slot - Downloader slot
scrapy.core.engine.ExecutionEngine - No class docstring; 0/1 property, 0/14 instance variables, 8/22 methods documented
scrapy.core.engine.Slot - Undocumented
scrapy.core.http2.agent.H2Agent - No class docstring; 0/4 instance variables, 1/4 methods documented
  scrapy.core.http2.agent.ScrapyProxyH2Agent - No class docstring; 0/1 instance variable, 1/3 methods documented
scrapy.core.http2.agent.H2ConnectionPool - No class docstring; 0/4 instance variables, 1/6 methods documented
scrapy.core.http2.stream.Stream - Represents a single HTTP/2 Stream.
scrapy.core.scheduler.Scheduler - Scrapy scheduler. It enqueues requests and hands out the next request to download; it also handles duplicate filtering, via the dupefilter.
scrapy.core.scraper.Scraper - No class docstring; 0/7 instance variables, 8/15 methods documented
scrapy.core.scraper.Slot - Scraper slot (one per running spider)
scrapy.crawler.Crawler - No class docstring; 0/10 instance variables, 1/5 methods documented
scrapy.crawler.CrawlerRunner - This is a convenient helper class that keeps track of, manages and runs crawlers inside an already set up twisted.internet.reactor.
  scrapy.crawler.CrawlerProcess - A class to run multiple scrapy crawlers in a process simultaneously.
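
The split matters in practice: CrawlerProcess owns the Twisted reactor, whereas CrawlerRunner assumes one is already running. A minimal standalone script (the spider name "myspider" is illustrative):

    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings

    process = CrawlerProcess(get_project_settings())
    process.crawl("myspider")  # a spider name registered in the project
    process.start()            # starts the reactor; blocks until crawling finishes
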
scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware - Handle 'AJAX crawlable' pages marked as crawlable via meta tag. For more info see https://developers.google.com/webmasters/ajax-crawling/docs/getting-started.
scrapy.downloadermiddlewares.cookies.CookiesMiddleware - This middleware enables working with sites that need cookies
scrapy.downloadermiddlewares.decompression.DecompressionMiddleware - This middleware tries to recognise and extract the possibly compressed responses that may arrive.
scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware - Undocumented
scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware - Undocumented
scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware - Set Basic HTTP Authorization header (http_user and http_pass spider class attributes)
scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware - Undocumented
scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware - This middleware allows compressed (gzip, deflate) traffic to be sent/received from web sites
scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware - Undocumented
scrapy.downloadermiddlewares.redirect.BaseRedirectMiddleware - Undocumented
  scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware - Undocumented
  scrapy.downloadermiddlewares.redirect.RedirectMiddleware - Handle redirection of requests based on response status and meta-refresh html tag.
scrapy.downloadermiddlewares.retry.RetryMiddleware - Undocumented
scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware - Undocumented
scrapy.downloadermiddlewares.stats.DownloaderStats - Undocumented
scrapy.downloadermiddlewares.useragent.UserAgentMiddleware - This middleware allows spiders to override the user_agent
scrapy.dupefilters.BaseDupeFilter - Undocumented
  scrapy.dupefilters.RFPDupeFilter - Request Fingerprint duplicates filter
scrapy.exporters.BaseItemExporter - No class docstring; 0/5 instance variables, 2/7 methods documented
  scrapy.exporters.CsvItemExporter - Undocumented
  scrapy.exporters.JsonItemExporter - Undocumented
  scrapy.exporters.JsonLinesItemExporter - Undocumented
  scrapy.exporters.MarshalItemExporter - Exports items in a Python-specific binary format (see marshal).
  scrapy.exporters.PickleItemExporter - Undocumented
  scrapy.exporters.PprintItemExporter - Undocumented
  scrapy.exporters.PythonItemExporter - This is a base class for item exporters that extends BaseItemExporter with support for nested items.
  scrapy.exporters.XmlItemExporter - Undocumented
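
All exporters share the protocol inherited from BaseItemExporter: start_exporting, export_item, finish_exporting, writing to a binary file object. A small sketch with the JSON-lines exporter (file name illustrative):

    from scrapy.exporters import JsonLinesItemExporter

    with open("items.jl", "wb") as f:   # exporters write to binary file objects
        exporter = JsonLinesItemExporter(f)
        exporter.start_exporting()
        exporter.export_item({"name": "Foo", "price": 10})
        exporter.finish_exporting()
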
scrapy.extensions.closespider.CloseSpider - Undocumented
scrapy.extensions.corestats.CoreStats - Undocumented
scrapy.extensions.debug.Debugger - Undocumented
scrapy.extensions.debug.StackTraceDump - Undocumented
scrapy.extensions.feedexport._FeedSlot - Undocumented
scrapy.extensions.feedexport.BlockingFeedStorage - Undocumented
  scrapy.extensions.feedexport.FTPFeedStorage - Undocumented
  scrapy.extensions.feedexport.GCSFeedStorage - Undocumented
  scrapy.extensions.feedexport.S3FeedStorage - Undocumented
scrapy.extensions.feedexport.FeedExporter - No class docstring; 0/6 instance variables, 3/16 methods, 0/1 class method documented
scrapy.extensions.feedexport.FileFeedStorage - Undocumented
scrapy.extensions.feedexport.StdoutFeedStorage - Undocumented
scrapy.extensions.httpcache.DbmCacheStorage - Undocumented
scrapy.extensions.httpcache.DummyPolicy - Undocumented
scrapy.extensions.httpcache.FilesystemCacheStorage - No class docstring; 0/4 instance variables, 2/7 methods documented
scrapy.extensions.httpcache.RFC2616Policy - Undocumented
scrapy.extensions.logstats.LogStats - Log basic scraping stats periodically
scrapy.extensions.memdebug.MemoryDebugger - Undocumented
scrapy.extensions.memusage.MemoryUsage - No class docstring; 0/9 instance variables, 1/8 methods, 0/1 class method documented
scrapy.extensions.spiderstate.SpiderState - Store and load spider state during a scraping job
scrapy.extensions.statsmailer.StatsMailer - Undocumented
scrapy.extensions.throttle.AutoThrottle - No class docstring; 0/5 instance variables, 1/8 methods, 0/1 class method documented
scrapy.http.cookies._DummyLock - Undocumented
scrapy.http.cookies.CookieJar - Undocumented
scrapy.http.cookies.WrappedRequest - Wraps a scrapy Request with the methods defined by the urllib2.Request class, so it can interact with a CookieJar
scrapy.http.cookies.WrappedResponse - Undocumented
scrapy.link.Link - Link objects represent a link extracted by the LinkExtractor.
scrapy.linkextractors.FilteringLinkExtractor - Undocumented
  scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor - No class docstring; 1/2 methods documented
scrapy.linkextractors.lxmlhtml.LxmlParserLinkExtractor - No class docstring; 0/6 instance variables, 1/6 methods documented
scrapy.logformatter.LogFormatter - Class for generating log messages for different actions.
scrapy.mail.MailSender - Undocumented
scrapy.middleware.MiddlewareManager - Base class for implementing middleware managers
  scrapy.core.downloader.middleware.DownloaderMiddlewareManager - Undocumented
  scrapy.core.spidermw.SpiderMiddlewareManager - Undocumented
  scrapy.extension.ExtensionManager - Undocumented
  scrapy.pipelines.ItemPipelineManager - Undocumented
scrapy.pipelines.files.FSFilesStore - Undocumented
scrapy.pipelines.files.FTPFilesStore - Undocumented
scrapy.pipelines.files.GCSFilesStore - Undocumented
scrapy.pipelines.files.S3FilesStore - No class docstring; 0/3 instance variables, 0/8 constants, 2/5 methods documented
scrapy.pipelines.media.MediaPipeline - No class docstring; 0/5 instance variables, 0/1 constant, 9/18 methods, 0/1 class method, 0/1 class documented
  scrapy.pipelines.files.FilesPipeline - Abstract pipeline that implements file downloading
    scrapy.pipelines.images.ImagesPipeline - Abstract pipeline that implements the image thumbnail generation logic
scrapy.pipelines.media.MediaPipeline.SpiderInfo - Undocumented
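
In the common case, enabling the media pipelines is configuration rather than subclassing. A settings.py sketch (the store path is illustrative):

    # settings.py
    ITEM_PIPELINES = {
        "scrapy.pipelines.images.ImagesPipeline": 1,
    }
    IMAGES_STORE = "/path/to/image/store"  # FilesPipeline uses FILES_STORE instead
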
scrapy.pqueues.DownloaderAwarePriorityQueue - PriorityQueue which takes Downloader activity into account: domains (slots) with the least amount of active downloads are dequeued first.
scrapy.pqueues.DownloaderInterface - No class docstring; 0/1 instance variable, 1/4 methods documented
scrapy.pqueues.ScrapyPriorityQueue - A priority queue implemented using multiple internal queues (typically, FIFO queues). It uses one internal queue for each priority value. The internal queue must implement the following methods:
scrapy.resolver._CachingResolutionReceiver - Undocumented
scrapy.resolver.CachingHostnameResolver - Experimental caching resolver. Resolves IPv4 and IPv6 addresses; does not support setting a timeout value for DNS requests.
scrapy.resolver.HostResolution - Undocumented
scrapy.responsetypes.ResponseTypes - No class docstring; 0/2 instance variables, 0/1 constant, 6/8 methods documented
scrapy.robotstxt.RobotParser - No class docstring; 1/1 method, 1/1 class method documented
  scrapy.robotstxt.ProtegoRobotParser - Undocumented
  scrapy.robotstxt.PythonRobotParser - Undocumented
  scrapy.robotstxt.ReppyRobotParser - Undocumented
  scrapy.robotstxt.RerpRobotParser - Undocumented
scrapy.settings.SettingsAttribute - Class for storing data related to settings attributes.
scrapy.shell.Shell - Undocumented
scrapy.signalmanager.SignalManager - No class docstring; 0/1 instance variable, 5/6 methods documented
scrapy.spiderloader.SpiderLoader - SpiderLoader is a class which locates and loads spiders in a Scrapy project.
scrapy.spidermiddlewares.depth.DepthMiddleware - Undocumented
scrapy.spidermiddlewares.httperror.HttpErrorMiddleware - Undocumented
scrapy.spidermiddlewares.offsite.OffsiteMiddleware - No class docstring; 0/3 instance variables, 1/5 methods, 0/1 class method documented
scrapy.spidermiddlewares.referer.RefererMiddleware - No class docstring; 0/1 instance variable, 1/4 methods, 0/1 class method documented
scrapy.spidermiddlewares.referer.ReferrerPolicy - No class docstring; 0/1 class variable, 2/7 methods documented
  scrapy.spidermiddlewares.referer.NoReferrerPolicy - https://www.w3.org/TR/referrer-policy/#referrer-policy-no-referrer
  scrapy.spidermiddlewares.referer.NoReferrerWhenDowngradePolicy - https://www.w3.org/TR/referrer-policy/#referrer-policy-no-referrer-when-downgrade
    scrapy.spidermiddlewares.referer.DefaultReferrerPolicy - A variant of "no-referrer-when-downgrade", with the addition that "Referer" is not sent if the parent request was using file:// or s3:// scheme.
  scrapy.spidermiddlewares.referer.OriginPolicy - https://www.w3.org/TR/referrer-policy/#referrer-policy-origin
  scrapy.spidermiddlewares.referer.OriginWhenCrossOriginPolicy - https://www.w3.org/TR/referrer-policy/#referrer-policy-origin-when-cross-origin
  scrapy.spidermiddlewares.referer.SameOriginPolicy - https://www.w3.org/TR/referrer-policy/#referrer-policy-same-origin
  scrapy.spidermiddlewares.referer.StrictOriginPolicy - https://www.w3.org/TR/referrer-policy/#referrer-policy-strict-origin
  scrapy.spidermiddlewares.referer.StrictOriginWhenCrossOriginPolicy - https://www.w3.org/TR/referrer-policy/#referrer-policy-strict-origin-when-cross-origin
  scrapy.spidermiddlewares.referer.UnsafeUrlPolicy - https://www.w3.org/TR/referrer-policy/#referrer-policy-unsafe-url
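
These policy classes are selected through the REFERRER_POLICY setting, either by W3C policy name or by import path. A settings.py sketch:

    # settings.py
    REFERRER_POLICY = "same-origin"
    # equivalent, using the class path:
    # REFERRER_POLICY = "scrapy.spidermiddlewares.referer.SameOriginPolicy"
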
scrapy.spidermiddlewares.urllength.UrlLengthMiddleware - Undocumented
scrapy.spiders.crawl.Rule - Undocumented
scrapy.spiders.Spider
  scrapy.spiders.init.InitSpider - Base Spider with initialization facilities
scrapy.statscollectors.StatsCollector - Undocumented
  scrapy.statscollectors.DummyStatsCollector - Undocumented
  scrapy.statscollectors.MemoryStatsCollector - Undocumented
scrapy.utils.datatypes.SequenceExclude - Object to test if an item is NOT within some sequence.
scrapy.utils.iterators._StreamReader - Undocumented
scrapy.utils.log.StreamLogger - Fake file-like stream object that redirects writes to a logger instance
scrapy.utils.python.MutableChain - Thin wrapper around itertools.chain, allowing iterables to be added "in-place"
scrapy.utils.python.WeakKeyCache - Undocumented
scrapy.utils.reactor.CallLaterOnce - Schedule a function to be called in the next reactor loop, but only if it hasn't been already scheduled since the last time it ran.
scrapy.utils.sitemap.Sitemap - Class to parse Sitemap (type=urlset) and Sitemap Index (type=sitemapindex) files
scrapy.utils.testproc.ProcessTest - Undocumented
scrapy.utils.testsite.SiteTest - Undocumented
scrapy.utils.trackref.object_ref - Inherit from this class to keep a record of live instances
  scrapy.http.request.Request - No class docstring; 0/3 properties, 0/13 instance variables, 0/2 class variables, 2/8 methods, 1/1 class method documented
    scrapy.http.request.form.FormRequest - Undocumented
    scrapy.http.request.json_request.JsonRequest - No class docstring; 0/1 instance variable, 1/3 methods documented
    scrapy.http.request.rpc.XmlRpcRequest - Undocumented
  scrapy.http.response.Response - No class docstring; 1/3 properties, 0/9 instance variables, 0/2 class variables, 7/13 methods documented
    scrapy.http.response.text.TextResponse - No class docstring; 1/3 properties, 0/7 instance variables, 0/1 constant, 5/16 methods documented
      scrapy.http.response.html.HtmlResponse - Undocumented
      scrapy.http.response.xml.XmlResponse - Undocumented
  scrapy.item._BaseItem - Temporary class used internally to avoid the deprecation warning raised by isinstance checks using BaseItem.
    scrapy.item.BaseItem - Deprecated, please use scrapy.item.Item instead
      scrapy.item.DictItem - No class docstring; 0/1 instance variable, 0/1 class variable, 1/13 methods documented
        scrapy.Item - Base class for scraped items.
  scrapy.selector.unified.Selector - An instance of Selector is a wrapper over response to select certain parts of its content.
  scrapy.selector.unified.SelectorList - The SelectorList class is a subclass of the builtin list class, which provides a few additional methods.
  scrapy.Spider - Base class for scrapy spiders. All spiders must inherit from this class.
    scrapy.commands.bench._BenchSpider - A spider that follows all links
    scrapy.spiders.crawl.CrawlSpider - Undocumented
    scrapy.spiders.feed.CSVFeedSpider - Spider for parsing CSV feeds. It receives a CSV file in a response; iterates through each of its rows, and calls parse_row with a dict containing each field's data.
    scrapy.spiders.feed.XMLFeedSpider - This class intends to be the base class for spiders that scrape from XML feeds.
    scrapy.spiders.sitemap.SitemapSpider - No class docstring; 0/2 instance variables, 0/4 class variables, 2/5 methods documented
    scrapy.utils.spider.DefaultSpider - Undocumented
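
The Request/Response and Spider families above meet in every callback: a spider yields Requests, and the engine delivers the resulting (Text)Responses to the chosen callbacks. A minimal sketch against the Scrapy demo site books.toscrape.com:

    import scrapy

    class BooksSpider(scrapy.Spider):
        name = "books"
        start_urls = ["http://books.toscrape.com/"]

        def parse(self, response):
            # response is an HtmlResponse; follow() builds the next Request
            # using this page's URL as the base.
            for href in response.css("h3 a::attr(href)").getall():
                yield response.follow(href, callback=self.parse_book)

        def parse_book(self, response):
            yield {"title": response.css("h1::text").get()}
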
twisted.internet._sslverify.ClientTLSOptions
  scrapy.core.downloader.tls.ScrapyClientTLSOptions - SSL Client connection creator ignoring certificate verification errors (for genuinely invalid certificates or bugs in verification code).
twisted.internet.base.ThreadedResolver
  scrapy.resolver.CachingThreadedResolver - Default caching resolver. IPv4 only; supports setting a timeout value for DNS requests.
twisted.internet.endpoints.TCP4ClientEndpoint
twisted.internet.error.ConnectionClosed
  scrapy.core.http2.stream.InactiveStreamClosed - Connection was closed without sending the stream's request headers. This happens when a stream is waiting for other streams to close and the connection is lost.
twisted.internet.protocol.ClientFactory
  scrapy.core.downloader.webclient.ScrapyHTTPClientFactory - No class docstring; 0/20 instance variables, 0/3 class variables, 2/11 methods documented
twisted.internet.protocol.Factory
  scrapy.core.http2.protocol.H2ClientFactory - Undocumented
twisted.internet.protocol.ProcessProtocol
  scrapy.utils.testproc.TestProcessProtocol - Undocumented
twisted.internet.protocol.Protocol
  scrapy.core.downloader.handlers.ftp.ReceivedDataProtocol - Undocumented
  scrapy.core.downloader.handlers.http11._ResponseReader - Undocumented
  scrapy.core.http2.protocol.H2ClientProtocol - No class docstring; 2/2 properties, 0/7 instance variables, 0/1 constant, 12/21 methods documented
twisted.internet.protocol.ServerFactory
  scrapy.extensions.telnet.TelnetConsole - Undocumented
twisted.protocols.policies.TimeoutMixin
  scrapy.core.http2.protocol.H2ClientProtocol - No class docstring; 2/2 properties, 0/7 instance variables, 0/1 constant, 12/21 methods documented
twisted.web.client.Agent
  scrapy.core.downloader.handlers.http11.ScrapyProxyAgent - No class docstring; 0/1 instance variable, 1/2 methods documented
  scrapy.core.downloader.handlers.http11.TunnelingAgent - No summary
twisted.web.client.BrowserLikePolicyForHTTPS
  scrapy.core.downloader.contextfactory.ScrapyClientContextFactory - Non-peer-certificate verifying HTTPS context factory
    scrapy.core.downloader.contextfactory.BrowserLikeContextFactory - Twisted-recommended context factory for web clients.
twisted.web.http.HTTPClient
  scrapy.core.downloader.webclient.ScrapyHTTPPageGetter - Undocumented
twisted.web.resource.Resource
  scrapy.utils.benchserver.Root - Undocumented
twisted.web.util.Redirect
  scrapy.utils.testsite.NoMetaRefreshRedirect - Undocumented
TypeError
  scrapy.exceptions._InvalidOutput - Indicates an invalid value has been returned by a middleware's processing method. Internal and undocumented, it should not be raised or caught by user code.
unittest.TextTestResult
  scrapy.commands.check.TextTestResult - Undocumented
ValueError
  scrapy.http.response.text._InvalidSelector - Raised when a URL cannot be obtained from a Selector
Warning
  scrapy.exceptions.ScrapyDeprecationWarning - Warning category for deprecated features, since the default DeprecationWarning is silenced on Python 2.7+
  scrapy.spidermiddlewares.offsite.PortWarning - Undocumented
  scrapy.spidermiddlewares.offsite.URLWarning - Undocumented
weakref.WeakKeyDictionary
  scrapy.utils.datatypes.LocalWeakReferencedCache - A weakref.WeakKeyDictionary implementation that uses LocalCache as its underlying data structure, making it ordered and capable of being size-limited.
zope.interface.Interface
  scrapy.extensions.feedexport.IFeedStorage - Interface that all Feed Storages must implement
  scrapy.interfaces.ISpiderLoader - No class docstring; 4/4 methods documented