Elasticsearch plugin bug fix

    <Category: Diving Into ElasticSearch, 小道消息>

    Lucene 4 brings quite a few changes: http://blog.mikemccandless.com/2012/07/lucene-400-alpha-at-long-last.html
    The requirements on custom analyzers are also stricter, and several plugins I wrote earlier broke.
    The symptom: the first analysis works fine, but the second and every subsequent call returns empty results. Thanks to everyone who responded and helped test.

    So what was the cause?

    After digging into the code, I found that my reset() carried an extra parameter that the new API no longer accepts. Oops. The fix:

    -    public void reset(Reader input) throws IOException {
    +    public void reset() throws IOException {
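
    In context, a custom tokenizer's override changes roughly like this under Lucene 4 (a minimal sketch; MyTokenizer and its internal state are hypothetical, not code from the affected plugins):

    ```java
    import java.io.IOException;
    import java.io.Reader;
    import org.apache.lucene.analysis.Tokenizer;

    public final class MyTokenizer extends Tokenizer {  // hypothetical example class
        public MyTokenizer(Reader input) {
            super(input);
        }

        @Override
        public boolean incrementToken() throws IOException {
            // ... produce the next token from this.input ...
            return false;
        }

        // Lucene 3.x: reset(Reader input) received the new Reader directly.
        // Lucene 4.x: the Reader is swapped via setReader(Reader), and the
        // consumer then calls the no-arg reset(). The override must call
        // super.reset() and clear any per-stream state here; if it still
        // declares reset(Reader), it is never invoked, and the stale state
        // makes the second and later analyses come back empty.
        @Override
        public void reset() throws IOException {
            super.reset();
            // re-initialize internal state (offsets, buffers) here
        }
    }
    ```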


    Affected plugins: pinyin, string2int, stconvert

    This release was recompiled with JDK 6, which also resolves the JDK 7 issue reported last time.


    Someone else ran into the same problem: http://www.gossamer-threads.com/lists/lucene/java-user/173910

    A bit of background:

    • TokenStream enumerates the sequence of tokens, either from Fields of a Document or from query text. This is an abstract class; concrete subclasses are:

      • Tokenizer, a TokenStream whose input is a Reader; and
      • TokenFilter, a TokenStream whose input is another TokenStream.

      A new TokenStream API has been introduced with Lucene 2.9. This API has moved from being Token-based to Attribute-based. While Token still exists in 2.9 as a convenience class, the preferred way to store the information of a Token is to use AttributeImpls. TokenStream now extends AttributeSource, which provides access to all of the token Attributes for the TokenStream. Note that only one instance per AttributeImpl is created and reused for every token. This approach reduces object creation and allows local caching of references to the AttributeImpls. See incrementToken() for further details.

      The workflow of the new TokenStream API is as follows:

      1. Instantiation of TokenStream/TokenFilters which add/get attributes to/from the AttributeSource.
      2. The consumer calls reset().
      3. The consumer retrieves attributes from the stream and stores local references to all attributes it wants to access.
      4. The consumer calls incrementToken() until it returns false consuming the attributes after each call.
      5. The consumer calls end() so that any end-of-stream operations can be performed.
      6. The consumer calls close() to release any resource when finished using the TokenStream.
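
      The six steps above, written out as a consumer loop (a sketch; it assumes WhitespaceAnalyzer from the Lucene 4 common analyzers module, but any analyzer works the same way):

      ```java
      import java.io.IOException;
      import java.io.StringReader;
      import org.apache.lucene.analysis.Analyzer;
      import org.apache.lucene.analysis.TokenStream;
      import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
      import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
      import org.apache.lucene.util.Version;

      public class ConsumerExample {
          public static void main(String[] args) throws IOException {
              Analyzer analyzer = new WhitespaceAnalyzer(Version.LUCENE_40);   // step 1
              TokenStream stream = analyzer.tokenStream("field", new StringReader("hello lucene 4"));
              // step 3: keep a local reference to the attribute before iterating
              CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
              stream.reset();                      // step 2: mandatory since 4.0
              while (stream.incrementToken()) {    // step 4: consume token by token
                  System.out.println(term.toString());
              }
              stream.end();                        // step 5: end-of-stream operations
              stream.close();                      // step 6: release resources
          }
      }
      ```

      Skipping the reset() in step 2 is exactly the kind of contract violation the plugin bug above ran into: under Lucene 4 the stream's state is only valid after reset().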

      To make sure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in incrementToken(). You can find some example code for the new API in the analysis package level Javadoc.

      Sometimes it is desirable to capture a current state of a TokenStream, e.g., for buffering purposes (see CachingTokenFilter, TeeSinkTokenFilter). For this use case, AttributeSource.captureState() and AttributeSource.restoreState(org.apache.lucene.util.AttributeSource.State) can be used.

      The TokenStream-API in Lucene is based on the decorator pattern. Therefore all non-abstract subclasses must be final or have at least a final implementation of incrementToken()! This is checked when Java assertions are enabled.


    An Analyzer builds TokenStreams, which analyze text. It thus represents a policy for extracting index terms from text.

    In order to define what analysis is done, subclasses must define their TokenStreamComponents in createComponents(String, Reader). The components are then reused in each call to tokenStream(String, Reader).

    Simple example:
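
    The example code did not survive extraction here; the Lucene 4 Analyzer javadoc's version is roughly the following (FooTokenizer, FooFilter, and BarFilter are placeholder names for your own components):

    ```java
    Analyzer analyzer = new Analyzer() {
        @Override
        protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
            Tokenizer source = new FooTokenizer(reader);   // placeholder tokenizer
            TokenStream filter = new FooFilter(source);    // placeholder filters
            filter = new BarFilter(filter);
            return new TokenStreamComponents(source, filter);
        }
    };
    ```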

    For more examples, see the Analysis package documentation.

    For some concrete implementations bundled with Lucene, look in the analysis modules:

    • Common: Analyzers for indexing content in different languages and domains.
    • ICU: Exposes functionality from ICU to Apache Lucene.
    • Kuromoji: Morphological analyzer for Japanese text.
    • Morfologik: Dictionary-driven lemmatization for the Polish language.
    • Phonetic: Analysis for indexing phonetic signatures (for sounds-alike search).
    • Smart Chinese: Analyzer for Simplified Chinese, which indexes words.
    • Stempel: Algorithmic Stemmer for the Polish Language.
    • UIMA: Analysis integration with Apache UIMA.


    Source: Elasticsearch plugin bug fix

    