diff --git a/Lib/Lucene.Net.xml b/Lib/Lucene.Net.xml deleted file mode 100644 index 8bf1bc8..0000000 --- a/Lib/Lucene.Net.xml +++ /dev/null @@ -1,27293 +0,0 @@ - - - - Lucene.Net - - - - Base class for cache implementations. - - - Returns a thread-safe cache backed by the specified cache. - In order to guarantee thread-safety, all access to the backed cache must - be accomplished through the returned cache. - - - - Called by {@link #SynchronizedCache(Cache)}. This method - returns a {@link SynchronizedCache} instance that wraps - this instance by default and can be overridden to return - e.g. subclasses of {@link SynchronizedCache} or this - in case this cache is already synchronized. - - - - Puts a (key, value)-pair into the cache. - - - Returns the value for the given key. - - - Returns whether the given key is in this cache. - - - Closes the cache. - - - Simple Cache wrapper that synchronizes all - calls that access the cache. - - - - Provides support for converting byte sequences to Strings and back again. - The resulting Strings preserve the original byte sequences' sort order. - - The Strings are constructed using a Base 8000h encoding of the original - binary data - each char of an encoded String represents a 15-bit chunk - from the byte sequence. Base 8000h was chosen because it allows for all - lower 15 bits of char to be used without restriction; the surrogate range - [U+D800-U+DFFF] does not represent valid chars, and would require - complicated handling to avoid them and allow use of char's high bit. - - Although unset bits are used as padding in the final char, the original - byte sequence could contain trailing bytes with no set bits (null bytes): - padding is indistinguishable from valid information. To overcome this - problem, a char is appended, indicating the number of encoded bytes in the - final content char. - - This class's operations are defined over CharBuffers and ByteBuffers, to - allow for wrapped arrays to be reused, reducing memory allocation costs for - repeated operations. Note that this class calls array() and arrayOffset() - on the CharBuffers and ByteBuffers it uses, so only wrapped arrays may be - used. This class interprets the arrayOffset() and limit() values returned by - its input buffers as beginning and end+1 positions on the wrapped array, - respectively; similarly, on the output buffer, arrayOffset() is the first - position written to, and limit() is set to one past the final output array - position. - - - - Returns the number of chars required to encode the given byte sequence. - - - The byte sequence to be encoded. Must be backed by an array. - - The number of chars required to encode the given byte sequence - - IllegalArgumentException If the given ByteBuffer is not backed by an array - - - Returns the number of bytes required to decode the given char sequence. - - - The char sequence to be decoded. Must be backed by an array. - - The number of bytes required to decode the given char sequence - - IllegalArgumentException If the given CharBuffer is not backed by an array - - - Encodes the input byte sequence into the output char sequence. Before - calling this method, ensure that the output CharBuffer has sufficient - capacity by calling {@link #GetEncodedLength(java.nio.ByteBuffer)}. - - - The byte sequence to encode - - Where the char sequence encoding result will go. The limit - is set to one past the position of the final char.
- - IllegalArgumentException If either the input or the output buffer - is not backed by an array - - - - Decodes the input char sequence into the output byte sequence. Before - calling this method, ensure that the output ByteBuffer has sufficient - capacity by calling {@link #GetDecodedLength(java.nio.CharBuffer)}. - - - The char sequence to decode - - Where the byte sequence decoding result will go. The limit - is set to one past the position of the final char. - - IllegalArgumentException If either the input or the output buffer - is not backed by an array - - - - Decodes the given char sequence, which must have been encoded by - {@link #Encode(java.nio.ByteBuffer)} or - {@link #Encode(java.nio.ByteBuffer, java.nio.CharBuffer)}. - - - The char sequence to decode - - A byte sequence containing the decoding result. The limit - is set to one past the position of the final char. - - IllegalArgumentException If the input buffer is not backed by an - array - - - - Encodes the input byte sequence. - - - The byte sequence to encode - - A char sequence containing the encoding result. The limit is set - to one past the position of the final char. - - IllegalArgumentException If the input buffer is not backed by an - array - - - - Implements {@link LockFactory} for a single in-process instance, - meaning all locking will take place through this one instance. - Only use this {@link LockFactory} when you are certain all - IndexReaders and IndexWriters for a given index are running - against a single shared in-process Directory instance. This is - currently the default locking for RAMDirectory. - - - - - - -
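For illustration, a minimal sketch of putting this factory in place, using the Java Lucene API from which these docs were generated (the Directory setup is hypothetical):

    Directory dir = new RAMDirectory();                  // RAMDirectory already defaults to this factory
    dir.setLockFactory(new SingleInstanceLockFactory()); // explicit opt-in for any in-process Directory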

Base class for Locking implementation. {@link Directory} uses - instances of this class to implement locking.

- -

Note that there are some useful tools to verify that - your LockFactory is working correctly: {@link - VerifyingLockFactory}, {@link LockStressTest}, {@link - LockVerifyServer}.

- -

- - - - - - -
- - Set the prefix in use for all locks created in this - LockFactory. This is normally called once, when a - Directory gets this LockFactory instance. However, you - can also call this (after this instance is assigned to - a Directory) to override the prefix in use. This - is helpful if you're running Lucene on machines that - have different mount points for the same shared - directory. - - - - Get the prefix in use for all locks created in this LockFactory. - - - Return a new Lock instance identified by lockName. - name of the lock to be created. - - - - Attempt to clear (forcefully unlock and remove) the - specified lock. Only call this at a time when you are - certain this lock is no longer in use. - - name of the lock to be cleared. - - - - An interprocess mutex lock. -

Typical use might look like:

new Lock.With(directory.makeLock("my.lock")) {
  public Object doBody() {
    ... code to execute while locked ...
  }
}.run();
- - -
- $Id: Lock.java 769409 2009-04-28 14:05:43Z mikemccand $ - - - -
- - Pass this value to {@link #Obtain(long)} to try - forever to obtain the lock. - - - - How long {@link #Obtain(long)} waits, in milliseconds, - in between attempts to acquire the lock. - - - - Attempts to obtain exclusive access and immediately return - upon success or failure. - - true iff exclusive access is obtained - - - - If a lock obtain call fails, this failureReason may be set - with the "root cause" Exception as to why the lock was - not obtained. - - - - Attempts to obtain an exclusive lock within the amount of - time given. Polls once per {@link #LOCK_POLL_INTERVAL} - (currently 1000) milliseconds until lockWaitTimeout is - passed. - - length of time to wait in - milliseconds or {@link - #LOCK_OBTAIN_WAIT_FOREVER} to retry forever - - true if lock was obtained - - LockObtainFailedException if lock wait times out - IllegalArgumentException if lockWaitTimeout is - out of bounds - - IOException if obtain() throws IOException - - - Releases exclusive access. - - - Returns true if the resource is currently locked. Note that one must - still call {@link #Obtain()} before using the resource. - - - - Utility class for executing code with exclusive access. - - - Constructs an executor that will grab the named lock. - - - Code to execute with exclusive access. - - - Calls {@link #doBody} while lock is obtained. Blocks if lock - cannot be obtained immediately. Retries to obtain lock once per second - until it is obtained, or until it has tried ten times. Lock is released when - {@link #doBody} exits. - - LockObtainFailedException if lock could not - be obtained - - IOException if {@link Lock#obtain} throws IOException - - - Implements the wildcard search query. Supported wildcards are *, which - matches any character sequence (including the empty one), and ?, - which matches any single character. Note this query can be slow, as it - needs to iterate over many terms. In order to prevent extremely slow WildcardQueries, - a Wildcard term should not start with one of the wildcards * or - ?. -

This query uses the {@link - MultiTermQuery#CONSTANT_SCORE_AUTO_REWRITE_DEFAULT} - rewrite method. - -

- - -
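As a hedged illustration of the wildcard syntax described above (the field name and pattern are made up; Java API names as used in the other snippets in this file):

    Query q = new WildcardQuery(new Term("body", "te?t*"));
    // avoid a leading * or ? -- it forces enumeration of nearly every term in the field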
- - An abstract {@link Query} that matches documents - containing a subset of terms provided by a {@link - FilteredTermEnum} enumeration. - -

This query cannot be used directly; you must subclass - it and define {@link #getEnum} to provide a {@link - FilteredTermEnum} that iterates through the terms to be - matched. - -

NOTE: if {@link #setRewriteMethod} is either - {@link #CONSTANT_SCORE_BOOLEAN_QUERY_REWRITE} or {@link - #SCORING_BOOLEAN_QUERY_REWRITE}, you may encounter a - {@link BooleanQuery.TooManyClauses} exception during - searching, which happens when the number of terms to be - searched exceeds {@link - BooleanQuery#GetMaxClauseCount()}. Setting {@link - #setRewriteMethod} to {@link #CONSTANT_SCORE_FILTER_REWRITE} - prevents this. - -

The recommended rewrite method is {@link - #CONSTANT_SCORE_AUTO_REWRITE_DEFAULT}: it doesn't spend CPU - computing unhelpful scores, and it tries to pick the most - performant rewrite method given the query. - - Note that {@link QueryParser} produces - MultiTermQueries using {@link - #CONSTANT_SCORE_AUTO_REWRITE_DEFAULT} by default. -

-
- - The abstract base class for queries. -

Instantiable subclasses are: -

  • {@link TermQuery}
  • {@link MultiTermQuery}
  • {@link BooleanQuery}
  • {@link WildcardQuery}
  • {@link PhraseQuery}
  • {@link PrefixQuery}
  • {@link MultiPhraseQuery}
  • {@link FuzzyQuery}
  • {@link TermRangeQuery}
  • {@link NumericRangeQuery}
  • {@link Lucene.Net.Search.Spans.SpanQuery}
-
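A hedged sketch of combining a few of the subclasses listed above into a composite query (the field names are invented):

    BooleanQuery bq = new BooleanQuery();
    bq.add(new TermQuery(new Term("title", "lucene")), BooleanClause.Occur.MUST);
    bq.add(new PrefixQuery(new Term("body", "index")), BooleanClause.Occur.SHOULD);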

A parser for queries is contained in: -

  • {@link Lucene.Net.QueryParsers.QueryParser QueryParser}
-
-
- - Sets the boost for this query clause to b. Documents - matching this clause will (in addition to the normal weightings) have - their score multiplied by b. - - - - Gets the boost for this clause. Documents matching - this clause will (in addition to the normal weightings) have their score - multiplied by b. The boost is 1.0 by default. - - - - Prints a query to a string, with field assumed to be the - default field and omitted. -

The representation used is one that is supposed to be readable - by {@link Lucene.Net.QueryParsers.QueryParser QueryParser}. However, - there are the following limitations: -

  • If the query was created by the parser, the printed representation may not be exactly what was parsed. For example, characters that need to be escaped will be represented without the required backslash.
  • Some of the more complicated queries (e.g. span queries) don't have a representation that can be parsed by QueryParser.
-
-
- - Prints a query to a string. - - - Expert: Constructs an appropriate Weight implementation for this query. - -

- Only implemented by primitive queries, which re-write to themselves. -

-
- - Expert: Constructs and initializes a Weight for a top-level query. - - - Expert: called to re-write queries into primitive queries. For example, - a PrefixQuery will be rewritten into a BooleanQuery that consists - of TermQuerys. - - - - Expert: called when re-writing queries under MultiSearcher. - - Create a single query suitable for use by all subsearchers (in 1-1 - correspondence with queries). This is an optimization of the OR of - all queries. We handle the common optimization cases of equal - queries and overlapping clauses of boolean OR queries (as generated - by MultiTermQuery.rewrite()). - Be careful overriding this method as queries[0] determines which - method will be called and is not necessarily of the same type as - the other queries. - - - - Expert: adds all terms occurring in this query to the terms set. Only - works if this query is in its {@link #rewrite rewritten} form. - - - UnsupportedOperationException if this query is not yet rewritten - - - Expert: merges the clauses of a set of BooleanQuery's into a single - BooleanQuery. -

A utility for use by {@link #Combine(Query[])} implementations. -

-
- - Expert: Returns the Similarity implementation to be used for this query. - Subclasses may override this method to specify their own Similarity - implementation, perhaps one that delegates through that of the Searcher. - By default the Searcher's Similarity implementation is returned. - - - - Returns a clone of this query. - - - A rewrite method that first creates a private Filter, - by visiting each term in sequence and marking all docs - for that term. Matching documents are assigned a - constant score equal to the query's boost. - -

This method is faster than the BooleanQuery - rewrite methods when the number of matched terms or - matched documents is non-trivial. Also, it will never - hit an errant {@link BooleanQuery.TooManyClauses} - exception. - -

- - -
- - A rewrite method that first translates each term into - {@link BooleanClause.Occur#SHOULD} clause in a - BooleanQuery, and keeps the scores as computed by the - query. Note that typically such scores are - meaningless to the user, and require non-trivial CPU - to compute, so it's almost always better to use {@link - #CONSTANT_SCORE_AUTO_REWRITE_DEFAULT} instead. - -

NOTE: This rewrite method will hit {@link - BooleanQuery.TooManyClauses} if the number of terms - exceeds {@link BooleanQuery#getMaxClauseCount}. - -

- - -
- - Like {@link #SCORING_BOOLEAN_QUERY_REWRITE} except - scores are not computed. Instead, each matching - document receives a constant score equal to the - query's boost. - -

NOTE: This rewrite method will hit {@link - BooleanQuery.TooManyClauses} if the number of terms - exceeds {@link BooleanQuery#getMaxClauseCount}. - -

- - -
- - Read-only default instance of {@link - ConstantScoreAutoRewrite}, with {@link - ConstantScoreAutoRewrite#setTermCountCutoff} set to - {@link - ConstantScoreAutoRewrite#DEFAULT_TERM_COUNT_CUTOFF} - and {@link - ConstantScoreAutoRewrite#setDocCountPercent} set to - {@link - ConstantScoreAutoRewrite#DEFAULT_DOC_COUNT_PERCENT}. - Note that you cannot alter the configuration of this - instance; you'll need to create a private instance - instead. - - - - Constructs a query for terms matching term. - check sub class for possible term access - the Term does not - make sense for all MultiTermQuerys and will be removed. - - - - Constructs a query matching terms that cannot be represented with a single - Term. - - - - Returns the pattern term. - check sub class for possible term access - getTerm does not - make sense for all MultiTermQuerys and will be removed. - - - - Construct the enumeration to be used, expanding the pattern term. - - - Expert: Return the number of unique terms visited during execution of the query. - If there are many of them, you may consider using another query type - or optimize your total term count in index. -

This method is not thread safe; be sure to only call it when no query is running! - If you re-use the same query instance for another - search, be sure to first reset the term counter - with {@link #clearTotalNumberOfTerms}.

On optimized indexes / no MultiReaders, you get the correct number of - unique terms for the whole index. Use this number to compare different queries. - For non-optimized indexes this number can also be achieved in - non-constant-score mode. In constant-score mode you get the total number of - terms sought for all segments / sub-readers. -

- - -
- - Expert: Resets the counting of unique terms. - Do this before executing the query/filter. - - - - - - - - - - Sets the rewrite method to be used when executing the - query. You can use one of the four core methods, or - implement your own subclass of {@link RewriteMethod}. - - - - A rewrite method that tries to pick the best - constant-score rewrite method based on term and - document counts from the query. If both the number of - terms and documents is small enough, then {@link - #CONSTANT_SCORE_BOOLEAN_QUERY_REWRITE} is used. - Otherwise, {@link #CONSTANT_SCORE_FILTER_REWRITE} is - used. - - - - Abstract class that defines how the query is rewritten. - - - If the number of terms in this query is equal to or - larger than this setting then {@link - #CONSTANT_SCORE_FILTER_REWRITE} is used. - - - - - - - - If the number of documents to be visited in the - postings exceeds this specified percentage of the - maxDoc() for the index, then {@link - #CONSTANT_SCORE_FILTER_REWRITE} is used. - - 0.0 to 100.0 - - - - - - - - Returns the pattern term. - - - Prints a user-readable version of this query. - - - Represents hits returned by {@link - Searcher#search(Query,Filter,int)} and {@link - Searcher#search(Query,int)}. - - - - The total number of hits for the query. - - - - - The top hits for the query. - - - Stores the maximum score value encountered, needed for normalizing. - - - Returns the maximum score value encountered. Note that in case - scores are not tracked, this returns {@link Float#NaN}. - - - - Sets the maximum score value encountered. - - - Constructs a TopDocs with a default maxScore=Float.NaN. - - - - - - Base class for span-based queries. - - - Expert: Returns the matches for this query in an index. Used internally - to search for spans. - - - - Returns the name of the field matched by this query. - - - Returns a collection of all terms matched by this query. - use extractTerms instead - - - - - - Abstract base class providing a mechanism to restrict searches to a subset - of an index, and that also maintains and returns position information. - This is useful if you want to compare the positions from a SpanQuery with the positions of items in - a filter. For instance, if you had a SpanFilter that marked all the occurrences of the word "foo" in documents, - and then you entered a new SpanQuery containing bar, you could not only filter by the word foo, but you could - then compare position information for post processing. - - - - Abstract base class for restricting which documents may be returned during searching. -

- Note: In Lucene 3.0 {@link #Bits(IndexReader)} will be removed - and {@link #GetDocIdSet(IndexReader)} will be defined as abstract. - All implementing classes must therefore implement {@link #GetDocIdSet(IndexReader)} - in order to work with Lucene 3.0. -

-
- - - - Creates a {@link DocIdSet} enumerating the documents that should be - permitted in search results. NOTE: null can be - returned if no documents are accepted by this Filter. -

- Note: This method will be called once per segment in - the index during searching. The returned {@link DocIdSet} - must refer to document IDs for that segment, not for - the top-level reader. - - @param reader a {@link IndexReader} instance opened on the index currently - searched on. Note, it is likely that the provided reader does not - represent the whole underlying index i.e. if the index has more than - one segment the given reader only represents a single segment. - -

- a DocIdSet that provides the documents which should be permitted or - prohibited in search results. NOTE: null can be returned if - no documents will be accepted by this Filter. - - - -
- - Returns a SpanFilterResult with true for documents which should be permitted in - search results, and false for those that should not and Spans for where the true docs match. - - The {@link Lucene.Net.Index.IndexReader} to load position and DocIdSet information from - - A {@link SpanFilterResult} - - java.io.IOException if there was an issue accessing the necessary information - - - - - Expert: Scoring functionality for phrase queries. -
A document is considered matching if it contains the phrase-query terms - at "valid" positions. What "valid positions" are - depends on the type of the phrase query: for an exact phrase query terms are required - to appear in adjacent locations, while for a sloppy phrase query some distance between - the terms is allowed. The abstract method {@link #PhraseFreq()} of extending classes - is invoked for each document containing all the phrase query terms, in order to - compute the frequency of the phrase query in that document. A non-zero frequency - means a match.
-
- - Expert: Common scoring functionality for different types of queries. - -

- A Scorer iterates over documents matching a - query in increasing order of doc Id. -

-

- Document scores are computed using a given Similarity - implementation. -

- -

NOTE: The values Float.NaN, - Float.NEGATIVE_INFINITY and Float.POSITIVE_INFINITY are - not valid scores. Certain collectors (e.g. {@link - TopScoreDocCollector}) will not properly collect hits - with these scores. -

- - -
- - This abstract class defines methods to iterate over a set of non-decreasing - doc ids. Note that this class assumes it iterates on doc Ids, and therefore - {@link #NO_MORE_DOCS} is set to {@value #NO_MORE_DOCS} in order to be used as - a sentinel object. Implementations of this class are expected to consider - {@link Integer#MAX_VALUE} as an invalid value. - - - - When returned by {@link #NextDoc()}, {@link #Advance(int)} and - {@link #Doc()} it means there are no more docs in the iterator. - - - - Unsupported anymore. Call {@link #DocID()} instead. This method throws - {@link UnsupportedOperationException} if called. - - - use {@link #DocID()} instead. - - - - Returns the following: -
  • -1 or {@link #NO_MORE_DOCS} if {@link #NextDoc()} or {@link #Advance(int)} were not called yet.
  • {@link #NO_MORE_DOCS} if the iterator has been exhausted.
  • Otherwise it should return the doc ID it is currently on.
-

- NOTE: in 3.0, this method will become abstract. - -

- 2.9 - -
- - Unsupported anymore. Call {@link #NextDoc()} instead. This method throws - {@link UnsupportedOperationException} if called. - - - use {@link #NextDoc()} instead. This will be removed in 3.0 - - - - Unsupported anymore. Call {@link #Advance(int)} instead. This method throws - {@link UnsupportedOperationException} if called. - - - use {@link #Advance(int)} instead. This will be removed in 3.0 - - - - Advances to the next document in the set and returns the doc it is - currently on, or {@link #NO_MORE_DOCS} if there are no more docs in the - set.
- - NOTE: in 3.0 this method will become abstract, following the removal - of {@link #Next()}. For backward compatibility it is implemented as: - -
public int nextDoc() throws IOException {
  return next() ? doc() : NO_MORE_DOCS;
}
- - NOTE: after the iterator has exhausted you should not call this - method, as it may result in unpredicted behavior. - -
- 2.9 - -
- - Advances to the first beyond the current whose document number is greater - than or equal to target. Returns the current document number or - {@link #NO_MORE_DOCS} if there are no more docs in the set. -

- Behaves as if written: - -

int advance(int target) {
  int doc;
  while ((doc = nextDoc()) < target) {
  }
  return doc;
}
- - Some implementations are considerably more efficient than that. -

- NOTE: certain implementations may return a different value (each - time) if called several times in a row with the same target. -

- NOTE: this method may be called with {@value #NO_MORE_DOCS} for - efficiency by some Scorers. If your implementation cannot efficiently - determine that it should exhaust, it is recommended that you check for that - value in each call to this method. -

- NOTE: after the iterator has exhausted you should not call this - method, as it may result in unpredicted behavior. -

- NOTE: in 3.0 this method will become abstract, following the removal - of {@link #SkipTo(int)}. - -

- 2.9 - -
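Putting the iteration contract above together, a typical consumption loop looks roughly like this (hedged sketch; the iterator variable is hypothetical):

    int doc;
    while ((doc = iterator.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
      // doc is the current matching document id
    }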
- - Constructs a Scorer. - The Similarity implementation used by this scorer. - - - - Returns the Similarity implementation used by this scorer. - - - Scores and collects all matching documents. - The collector to which all matching documents are passed through - {@link HitCollector#Collect(int, float)}. -
When this method is used the {@link #Explain(int)} method should not be used. - - use {@link #Score(Collector)} instead. - -
- - Scores and collects all matching documents. - The collector to which all matching documents are passed. -
When this method is used the {@link #Explain(int)} method should not be used. - -
- - Expert: Collects matching documents in a range. Hook for optimization. - Note that {@link #Next()} must be called once before this method is called - for the first time. - - The collector to which all matching documents are passed through - {@link HitCollector#Collect(int, float)}. - - Do not score documents past this. - - true if more matching documents may remain. - - use {@link #Score(Collector, int, int)} instead. - - - - Expert: Collects matching documents in a range. Hook for optimization. - Note, firstDocID is added to ensure that {@link #NextDoc()} - was called before this method. - - - The collector to which all matching documents are passed. - - Do not score documents past this. - - - The first document ID (ensures {@link #NextDoc()} is called before - this method. - - true if more matching documents may remain. - - - - Returns the score of the current document matching the query. - Initially invalid, until {@link #Next()} or {@link #SkipTo(int)} - is called the first time, or when called from within - {@link Collector#collect}. - - - - Returns an explanation of the score for a document. -
When this method is used, the {@link #Next()}, {@link #SkipTo(int)} and - {@link #Score(HitCollector)} methods should not be used. -
- The document number for the explanation. - - - Please use {@link IndexSearcher#explain} - or {@link Weight#explain} instead. - -
- - use {@link #DocID()} instead. - - - - use {@link #NextDoc()} instead. - - - - use {@link #Advance(int)} instead. - - - - For a document containing all the phrase query terms, compute the - frequency of the phrase in that document. - A non zero frequency means a match. -
Note, that containing all phrase terms does not guarantee a match - they have to be found in matching locations. -
- frequency of the phrase in current doc, 0 if not found. - -
- - Score a candidate doc for all slop-valid position-combinations (matches) - encountered while traversing/hopping the PhrasePositions. -
The score contribution of a match depends on the distance: -
- highest score for distance=0 (exact match). -
- score gets lower as distance gets higher. -
Example: for query "a b"~2, a document "x a b a y" can be scored twice: - once for "a b" (distance=0), and once for "b a" (distance=2). -
Possibly not all valid combinations are encountered, because for efficiency - we always propagate the least PhrasePosition. This allows the scorer to be based on - a PriorityQueue and to move forward faster. - As a result, for example, document "a b c b a" - would score differently for queries "a b c"~4 and "c b a"~4, although - they really are equivalent. - Similarly, for doc "a b c b a f g", query "c b"~2 - would get the same score as "g f"~2, although "c b"~2 could be matched twice. - We may want to fix this in the future (currently not, for performance reasons). -
-
- - Init PhrasePositions in place. - There is a one time initialization for this scorer: -
- Put in repeats[] each pp that has another pp with same position in the doc. -
- Also mark each such pp by pp.repeats = true. -
Later, termPositionsDiffer(pp) can consult repeats[], making that check efficient. - In particular, this allows queries with no repetitions to be scored with no overhead from this computation. -
- Example 1 - query with no repetitions: "ho my"~2 -
- Example 2 - query with repetitions: "ho my my"~2 -
- Example 3 - query with repetitions: "my ho my"~2 -
Init per doc w/repeats in query, includes propagating some repeating pp's to avoid false phrase detection. -
- end (max position), or -1 if any term ran out (i.e. done) - - IOException -
- - We disallow two pp's to have the same TermPosition, thereby verifying multiple occurrences - in the query of the same word would go elsewhere in the matched doc. - - null if differ (i.e. valid) otherwise return the higher offset PhrasePositions - out of the first two PPs found to not differ. - - - - A Scorer for queries with a required part and an optional part. - Delays skipTo() on the optional part until a score() is needed. -
- This Scorer implements {@link Scorer#SkipTo(int)}. -
-
- - The scorers passed from the constructor. - These are set to null as soon as their next() or skipTo() returns false. - - - - Construct a ReqOptScorer. - The required scorer. This must match. - - The optional scorer. This is used for scoring only. - - - - use {@link #NextDoc()} instead. - - - - use {@link #Advance(int)} instead. - - - - use {@link #DocID()} instead. - - - - Returns the score of the current document matching the query. - Initially invalid, until {@link #Next()} is called the first time. - - The score of the required scorer, eventually increased by the score - of the optional scorer when it also matches the current document. - - - - Explain the score of a document. - TODO: Also show the total score. - See BooleanScorer.explain() on how to do this. - - - - Returns the maximum payload score seen, else 1 if there are no payloads on the doc. -

- Is thread safe and completely reusable. - - -

-
- - An abstract class that defines a way for Payload*Query instances - to transform the cumulative effects of payload scores for a document. - - - for more information - -

- This class and its derivations are experimental and subject to change - - - - - -
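A minimal sketch of such a transformation, assuming the Java Lucene 2.9 PayloadFunction signatures (the summing strategy here is invented for illustration):

    public class SumPayloadFunction extends PayloadFunction {
      public float currentScore(int docId, String field, int start, int end,
                                int numPayloadsSeen, float currentScore, float currentPayloadScore) {
        return currentScore + currentPayloadScore;   // accumulate each payload's score
      }
      public float docScore(int docId, String field, int numPayloadsSeen, float payloadScore) {
        return numPayloadsSeen > 0 ? payloadScore : 1; // 1 when the doc carries no payloads
      }
    }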

Calculate the score up to this point for this doc and field - The current doc - - The field - - The start position of the matching Span - - The end position of the matching Span - - The number of payloads seen so far - - The current score so far - - The score for the current payload - - The new current Score - - - - -
- - Calculate the final score for all the payloads seen so far for this doc/field - The current doc - - The current field - - The total number of payloads seen on this document - - The raw score for those payloads - - The final score for the payloads - - - - Expert: obtains float field values from the - {@link Lucene.Net.Search.FieldCache FieldCache} - using getFloats() and makes those values - available as other numeric types, casting as needed. - -

- WARNING: The status of the Search.Function package is experimental. - The APIs introduced here might change in the future and will not be - supported anymore in such a case. - -

- for requirements on the field. -

NOTE: with the switch in 2.9 to segment-based - searching, if {@link #getValues} is invoked with a - composite (multi-segment) reader, this can easily cause - double RAM usage for the values in the FieldCache. It's - best to switch your application to pass only atomic - (single segment) readers to this API. Alternatively, for - a short-term fix, you could wrap your ValueSource using - {@link MultiValueSource}, which costs more CPU per lookup - but will not consume double the FieldCache RAM.

- - - -

Expert: A base class for ValueSource implementations that retrieve values for - a single field from the {@link Lucene.Net.Search.FieldCache FieldCache}. -

- Fields used herein must be indexed (it doesn't matter whether these fields are stored or not). -

- It is assumed that each such indexed field is untokenized, or at least has a single token in a document. - For documents with multiple tokens of the same field, behavior is undefined (It is likely that current - code would use the value of one of these tokens, but this is not guaranteed). -

- Documents with no tokens in this field are assigned the Zero value. -

- WARNING: The status of the Search.Function package is experimental. - The APIs introduced here might change in the future and will not be - supported anymore in such a case. - -

NOTE: with the switch in 2.9 to segment-based - searching, if {@link #getValues} is invoked with a - composite (multi-segment) reader, this can easily cause - double RAM usage for the values in the FieldCache. It's - best to switch your application to pass only atomic - (single segment) readers to this API. Alternatively, for - a short-term fix, you could wrap your ValueSource using - {@link MultiValueSource}, which costs more CPU per lookup - but will not consume double the FieldCache RAM.

-

-
- - Expert: source of values for basic function queries. -

At its default/simplest form, values - one per doc - are used as the score of that doc. -

Values are instantiated as - {@link Lucene.Net.Search.Function.DocValues DocValues} for a particular reader. -

ValueSource implementations differ in RAM requirements: it would always be a factor - of the number of documents, but for each document the number of bytes can be 1, 2, 4, or 8. - -

- WARNING: The status of the Search.Function package is experimental. - The APIs introduced here might change in the future and will not be - supported anymore in such a case. - - -

-
- - Return the DocValues used by the function query. - the IndexReader used to read these values. - If any caching is involved, that caching would also be IndexReader based. - - IOException for any error. - - - description of field, used in explain() - - - Needed for possible caching of query results - used by {@link ValueSourceQuery#equals(Object)}. - - - - - Needed for possible caching of query results - used by {@link ValueSourceQuery#hashCode()}. - - - - - Create a cached field source for the input field. - - - Return cached DocValues for input field and reader. - FieldCache so that values of a field are loaded once per reader (RAM allowing) - - Field for which values are required. - - - - - - Check if equals to another {@link FieldCacheSource}, already knowing that cache and field are equal. - - - - - Return a hash code of a {@link FieldCacheSource}, without the hash-codes of the field - and the cache (those are taken care of elsewhere). - - - - - - Create a cached float field source with default string-to-float parser. - - - Create a cached float field source with a specific string-to-float parser. - - - Expert: represents field values as different types. - Normally created via a - {@link Lucene.Net.Search.Function.ValueSource ValueSuorce} - for a particular field and reader. - -

- WARNING: The status of the Search.Function package is experimental. - The APIs introduced here might change in the future and will not be - supported anymore in such a case. - - -

-
- - Return doc value as a float. -

Mandatory: every DocValues implementation must implement at least this method. -

- document whose float value is requested. - -
- - Return doc value as an int. -

Optional: DocValues implementation can (but don't have to) override this method. -

- document whose int value is requested. - -
- - Return doc value as a long. -

Optional: DocValues implementation can (but don't have to) override this method. -

- document whose long value is requested. - -
- - Return doc value as a double. -

Optional: DocValues implementation can (but don't have to) override this method. -

- document whose double value is requested. - -
- - Return doc value as a string. -

Optional: DocValues implementation can (but don't have to) override this method. -

- document whose string value is requested. - -
- - Return a string representation of a doc value, as required for Explanations. - - - Explain the scoring value for the input doc. - - - Expert: for test purposes only, return the inner array of values, or null if not applicable. -

- Allows tests to verify that loaded values are: -

  1. indeed cached/reused.
  2. stored in the expected size/type (byte/short/int/float).
- Note: implementations of DocValues must override this method for - these test elements to be tested; otherwise the test would not fail, just - print a warning. -
-
- - Returns the minimum of all values or Float.NaN if this - DocValues instance does not contain any value. -

- This operation is optional -

- -

- the minimum of all values or Float.NaN if this - DocValues instance does not contain any value. - -
- - Returns the maximum of all values or Float.NaN if this - DocValues instance does not contain any value. -

- This operation is optional -

- -

- the maximum of all values or Float.NaN if this - DocValues instance does not contain any value. - -
- - Returns the average of all values or Float.NaN if this - DocValues instance does not contain any value. -

- This operation is optional -

- -

- the average of all values or Float.NaN if this - DocValues instance does not contain any value - -
- - A query that scores each document as the value of the numeric input field. -

- The query matches all documents, and scores each document according to the numeric - value of that field. -

- It is assumed, and expected, that: -

  • The field used here is indexed, and has exactly one token in every scored document.
  • Best if this field is un_tokenized.
  • That token is parsable to the selected type.
-

- Combining this query in a FunctionQuery allows much freedom in affecting document scores. - Note that with this freedom comes responsibility: it is more than likely that the - default Lucene scoring is superior in quality to scoring modified as explained here. - However, in some cases, and certainly for research experiments, this capability may prove useful. -

- When constructing this query, select the appropriate type. That type should match the data stored in the - field. So in fact the "right" type should be selected before indexing. Type selection - affects RAM usage: -

  • {@link Type#BYTE} consumes 1 * maxDocs bytes.
  • {@link Type#SHORT} consumes 2 * maxDocs bytes.
  • {@link Type#INT} consumes 4 * maxDocs bytes.
  • {@link Type#FLOAT} consumes 4 * maxDocs bytes.
-

- Caching: - Values for the numeric field are loaded once and cached in memory for further use with the same IndexReader. - To take advantage of this, it is extremely important to reuse index-readers or index-searchers, - otherwise, for instance if for each query a new index reader is opened, large penalties would be - paid for loading the field values into memory over and over again! - -

- WARNING: The status of the Search.Function package is experimental. - The APIs introduced here might change in the future and will not be - supported anymore in such a case. -

-
- - Expert: A Query that sets the scores of document to the - values obtained from a {@link Lucene.Net.Search.Function.ValueSource ValueSource}. -

- This query provides a score for each and every undeleted document in the index. -

- The value source can be based on a (cached) value of an indexed field, but it - can also be based on an external source, e.g. values read from an external database. -

- Score is set as: Score(doc,query) = query.getBoost()² * valueSource(doc). -

- WARNING: The status of the Search.Function package is experimental. - The APIs introduced here might change in the future and will not be - supported anymore in such a case. -

-
- - Create a value source query; the given value source provides the values that define the function to be used for scoring. - - - - Returns true if o is equal to this. - - - Returns a hash code value for this object. - - - Expert: Calculate query weights and build query scorers. -

- The purpose of {@link Weight} is to ensure searching does not - modify a {@link Query}, so that a {@link Query} instance can be reused.
- {@link Searcher} dependent state of the query should reside in the - {@link Weight}.
- {@link IndexReader} dependent state should reside in the {@link Scorer}. -

- A Weight is used in the following way: -

  1. A Weight is constructed by a top-level query, given a Searcher ({@link Query#CreateWeight(Searcher)}).
  2. The {@link #SumOfSquaredWeights()} method is called on the Weight to compute the query normalization factor {@link Similarity#QueryNorm(float)} of the query clauses contained in the query.
  3. The query normalization factor is passed to {@link #Normalize(float)}. At this point the weighting is complete.
  4. A Scorer is constructed by {@link #Scorer(IndexReader,boolean,boolean)}.
-
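A hedged sketch of driving those steps from the outside, using the public Java entry point that bundles steps 1-3 (query, searcher and reader are hypothetical):

    Weight w = query.weight(searcher);        // constructs the Weight and applies normalization
    Scorer s = w.scorer(reader, true, false); // step 4: in-order scorer
    int doc;
    while ((doc = s.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
      float score = s.score();                // score of the current matching doc
    }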
- 2.9 - -
- - An explanation of the score computation for the named document. - - - sub-reader containing the given doc - - - - an Explanation for the score - - IOException - - - The query that this concerns. - - - The weight for this query. - - - Assigns the query normalization factor to this. - - - Returns a {@link Scorer} which scores documents in/out-of order according - to scoreDocsInOrder. -

- NOTE: even if scoreDocsInOrder is false, it is - recommended to check whether the returned Scorer indeed scores - documents out of order (i.e., call {@link #ScoresDocsOutOfOrder()}), as - some Scorer implementations will always return documents - in-order.
- NOTE: null can be returned if no documents will be scored by this - query. - -

- - the {@link IndexReader} for which to return the {@link Scorer}. - - specifies whether in-order scoring of documents is required. Note - that if set to false (i.e., out-of-order scoring is required), - this method can return whatever scoring mode it supports, as every - in-order scorer is also an out-of-order one. However, an - out-of-order scorer may not support {@link Scorer#NextDoc()} - and/or {@link Scorer#Advance(int)}, therefore it is recommended to - request an in-order scorer if use of these methods is required. - - - if true, {@link Scorer#Score(Collector)} will be called; if false, - {@link Scorer#NextDoc()} and/or {@link Scorer#Advance(int)} will - be called. - - a {@link Scorer} which scores documents in/out-of order. - - IOException -
- - The sum of squared weights of contained query clauses. - - - Returns true iff this implementation scores docs only out of order. This - method is used in conjunction with {@link Collector}'s - {@link Collector#AcceptsDocsOutOfOrder() acceptsDocsOutOfOrder} and - {@link #Scorer(Lucene.Net.Index.IndexReader, boolean, boolean)} to - create a matching {@link Scorer} instance for a given {@link Collector}, or - vice versa. -

- NOTE: the default implementation returns false, i.e. - the Scorer scores documents in-order. -

-
- - A scorer that (simply) matches all documents, and scores each document with - the value of the value source in effect. As an example, if the value source - is a (cached) field source, then the value of that field in that document will - be used (assuming the field is indexed for this doc, with a single token). - - - - use {@link #NextDoc()} instead. - - - - use {@link #DocID()} instead. - - - - use {@link #Advance(int)} instead. - - - - Create a FieldScoreQuery - a query that scores each document as the value of the numeric input field. -

- The type param tells how to parse the field string values into a numeric score value. -

- the numeric field to be used. - - the type of the field: either - {@link Type#BYTE}, {@link Type#SHORT}, {@link Type#INT}, or {@link Type#FLOAT}. - -
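For example, a hedged usage sketch (the "popularity" field is invented; searcher is assumed to exist):

    Query byPopularity = new FieldScoreQuery("popularity", FieldScoreQuery.Type.FLOAT);
    TopDocs top = searcher.search(byPopularity, 10); // every doc matches; score = field value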
- - Type of score field, indicating how field values are interpreted/parsed. -

- The type selected at search time should match the data stored in the field. - Different types have different RAM requirements: -

  • {@link #BYTE} consumes 1 * maxDocs bytes.
  • {@link #SHORT} consumes 2 * maxDocs bytes.
  • {@link #INT} consumes 4 * maxDocs bytes.
  • {@link #FLOAT} consumes 4 * maxDocs bytes.
-
-
- - field values are interpreted as numeric byte values. - - - field values are interpreted as numeric short values. - - - field values are interpreted as numeric int values. - - - field values are interpreted as numeric float values. - - - Provides a {@link FieldComparator} for custom field sorting. - - NOTE: This API is experimental and might change in - incompatible ways in the next release. - - - - - Creates a comparator for the field in the given index. - - - Name of the field to create comparator for. - - FieldComparator. - - IOException - If an error occurs reading the index. - - - - A {@link Filter} that only accepts documents whose single - term value in the specified field is contained in the - provided set of allowed terms. - -

- - This is the same functionality as TermsFilter (from - contrib/queries), except this filter requires that the - field contains only a single term for all documents. - Because of drastically different implementations, they - also have different performance characteristics, as - described below. - -

- - The first invocation of this filter on a given field will - be slower, since a {@link FieldCache.StringIndex} must be - created. Subsequent invocations using the same field - will re-use this cache. However, as with all - functionality based on {@link FieldCache}, persistent RAM - is consumed to hold the cache, and is not freed until the - {@link IndexReader} is closed. In contrast, TermsFilter - has no persistent RAM consumption. - - -

- - With each search, this filter translates the specified - set of Terms into a private {@link OpenBitSet} keyed by - term number per unique {@link IndexReader} (normally one - reader per segment). Then, during matching, the term - number for each docID is retrieved from the cache and - then checked for inclusion using the {@link OpenBitSet}. - Since all testing is done using RAM resident data - structures, performance should be very fast, most likely - fast enough to not require further caching of the - DocIdSet for each possible combination of terms. - However, because docIDs are simply scanned linearly, an - index with a great many small documents may find this - linear scan too costly. - -

- - In contrast, TermsFilter builds up an {@link OpenBitSet}, - keyed by docID, every time it's created, by enumerating - through all matching docs using {@link TermDocs} to seek - and scan through each term's docID list. While there is - no linear scan of all docIDs, besides the allocation of - the underlying array in the {@link OpenBitSet}, this - approach requires a number of "disk seeks" in proportion - to the number of terms, which can be exceptionally costly - when there are cache misses in the OS's IO cache. - -

- - Generally, this filter will be slower on the first - invocation for a given field, but subsequent invocations, - even if you change the allowed set of Terms, should be - faster than TermsFilter, especially as the number of - Terms being matched increases. If you are matching only - a very small number of terms, and those terms in turn - match a very small number of documents, TermsFilter may - perform faster. - -

- - Which filter is best is very application dependent. -

-
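A hedged usage sketch of this filter (field and terms invented; Java API names assumed):

    Filter categoryFilter = new FieldCacheTermsFilter("category", new String[] {"sports", "politics"});
    TopDocs top = searcher.search(query, categoryFilter, 10);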
- - A DocIdSet contains a set of doc ids. Implementing classes must - only implement {@link #iterator} to provide access to the set. - - - - An empty {@code DocIdSet} instance for easy use, e.g. in Filters that hit no documents. - - - Provides a {@link DocIdSetIterator} to access the set. - This implementation can return null or - {@linkplain #EMPTY_DOCIDSET}.iterator() if there - are no docs that match. - - - - This method is a hint for {@link CachingWrapperFilter}, if this DocIdSet - should be cached without copying it into a BitSet. The default is to return - false. If you have an own DocIdSet implementation - that does its iteration very effective and fast without doing disk I/O, - override this method and return true. - - - - This DocIdSet implementation is cacheable. - - - use {@link #DocID()} instead. - - - - use {@link #NextDoc()} instead. - - - - use {@link #Advance(int)} instead. - - - - An efficient implementation of JavaCC's CharStream interface.

Note that - this does not do line-number counting, but instead keeps track of the - character position of the token in the input, as required by Lucene's {@link - Lucene.Net.Analysis.Token} API. - -

-
- - This interface describes a character stream that maintains line and - column number positions of the characters. It also has the capability - to back up the stream to some extent. An implementation of this - interface is used in the TokenManager implementation generated by - JavaCCParser. - - All the methods except backup can be implemented in any fashion. backup - needs to be implemented correctly for the correct operation of the lexer. - The rest of the methods are used to get information like line number, - column number and the String that constitutes a token and are not used - by the lexer. Hence their implementation won't affect the generated lexer's - operation. - - - - Returns the next character from the selected input. The method - of selecting the input is the responsibility of the class - implementing this interface. Can throw any java.io.IOException. - - - - Returns the column position of the character last read. - - - - - - - Returns the line number of the character last read. - - - - - - - Returns the column number of the last character for current token (being - matched after the last call to BeginToken). - - - - Returns the line number of the last character for current token (being - matched after the last call to BeginToken). - - - - Returns the column number of the first character for current token (being - matched after the last call to BeginToken). - - - - Returns the line number of the first character for current token (being - matched after the last call to BeginToken). - - - - Backs up the input stream by amount steps. Lexer calls this method if it - had already read some characters, but could not use them to match a - (longer) token. So, they will be used again as the prefix of the next - token and it is the implementation's responsibility to do this right. - - - - Returns the next character that marks the beginning of the next token. - All characters must remain in the buffer between two successive calls - to this method to implement backup correctly. - - - - Returns a string made up of characters from the marked token beginning - to the current buffer position. Implementations have the choice of returning - anything that they want to. For example, for efficiency, one might decide - to just return null, which is a valid implementation. - - - - Returns an array of characters that make up the suffix of length 'len' for - the currently matched token. This is used to build up the matched string - for use in actions in the case of MORE. A simple and inefficient - implementation of this is as follows: - - { - String t = GetImage(); - return t.substring(t.length() - len, t.length()).toCharArray(); - } - - - - The lexer calls this function to indicate that it is done with the stream - and hence implementations can free any resources held by this class. - Again, the body of this function can be just empty and it will not - affect the lexer's operation. - - - - Constructs from a Reader. - - - Add a complete document specified by all its term vectors. If document has no - term vectors, add value for tvx. - - - - - IOException - - - Do a bulk copy of numDocs documents from reader to our - streams. This is used to expedite merging, if the - field numbers are congruent. - - - - Close all streams. - - - $Id: TermVectorsReader.java 687046 2008-08-19 13:01:11Z mikemccand $ - - - - Retrieve the length (in bytes) of the tvd and tvf - entries for the next numDocs starting with - startDocID. This is used for bulk copying when - merging segments, if the field numbers are - congruent.
Once this returns, the tvf & tvd streams - are seeked to the startDocID. - - - - - The number of documents in the reader - - - - Retrieve the term vector for the given document and field - The document number to retrieve the vector for - - The field within the document to retrieve - - The TermFreqVector for the document and field or null if there is no termVector for this field. - - IOException if there is an error reading the term vector files - - - Return all term vectors stored for this document or null if they could not be read in. - - - The document number to retrieve the vector for - - All term frequency vectors - - IOException if there is an error reading the term vector files - - - - The field to read in - - The pointer within the tvf file where we should start reading - - The mapper used to map the TermVector - - IOException - - - Models the existing parallel array structure - - - The TermVectorMapper can be used to map Term Vectors into your own - structure instead of the parallel array structure used by - {@link Lucene.Net.Index.IndexReader#GetTermFreqVector(int,String)}.

- It is up to the implementation to make sure it is thread-safe. - - - -

-
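A minimal sketch of such a mapper, assuming the Java Lucene 2.9 method signatures (the frequency-collecting behavior is invented for illustration):

    import java.util.HashMap;
    import java.util.Map;

    public class FreqCollectingMapper extends TermVectorMapper {
      public final Map<String, Integer> freqs = new HashMap<String, Integer>();
      public void setExpectations(String field, int numTerms,
                                  boolean storeOffsets, boolean storePositions) {
        // nothing to pre-allocate in this sketch
      }
      public void map(String term, int frequency,
                      TermVectorOffsetInfo[] offsets, int[] positions) {
        freqs.put(term, frequency); // keep only term -> frequency
      }
    }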
- - - true if this mapper should tell Lucene to ignore positions even if they are stored - similar to ignoringPositions - - - - Tell the mapper what to expect regarding field, number of terms, offset and position storage. - This method will be called once before retrieving the vector for a field. - - This method will be called before {@link #Map(String,int,TermVectorOffsetInfo[],int[])}. - - The field the vector is for - - The number of terms that need to be mapped - - true if the mapper should expect offset information - - true if the mapper should expect positions info - - - - Map the Term Vector information into your own structure - The term to add to the vector - - The frequency of the term in the document - - null if the offset is not specified, otherwise the offset into the field of the term - - null if the position is not specified, otherwise the position in the field of the term - - - - Indicate to Lucene that even if there are positions stored, this mapper is not interested in them and they - can be skipped over. Derived classes should set this to true if they want to ignore positions. The default - is false, meaning positions will be loaded if they are stored. - - false - - - - - Same principle as {@link #IsIgnoringPositions()}, but applied to offsets. false by default. - - - - Passes down the index of the document whose term vector is currently being mapped, - once for each top level call to a term vector reader.

- Default implementation IGNORES the document number. Override if your implementation needs the document number. -

- NOTE: Document numbers are internal to Lucene and subject to change depending on indexing operations. - -

- index of document currently being mapped - -
- - Construct the vector - The {@link TermFreqVector} based on the mappings. - - - - TermDocs provides an interface for enumerating <document, frequency> - pairs for a term.

The document portion names each document containing - the term. Documents are indicated by number. The frequency portion gives - the number of times the term occurred in each document.

The pairs are - ordered by document number. -

- - -
- - Sets this to the data for a term. - The enumeration is reset to the start of the data for this term. - - - - Sets this to the data for the current term in a {@link TermEnum}. - This may be optimized in some implementations. - - - - Returns the current document number.

This is invalid until {@link - #Next()} is called for the first time. -

-
- - Returns the frequency of the term within the current document.

This - is invalid until {@link #Next()} is called for the first time. -

-
- - Moves to the next pair in the enumeration.

Returns true iff there is - such a next pair in the enumeration. -

-
- - Attempts to read multiple entries from the enumeration, up to length of - docs. Document numbers are stored in docs, and term - frequencies are stored in freqs. The freqs array must be as - long as the docs array. - -

Returns the number of entries read. Zero is only returned when the - stream has been exhausted. -

-
- - Skips entries to the first beyond the current whose document number is - greater than or equal to target.

Returns true iff there is such - an entry.

Behaves as if written:

boolean skipTo(int target) {
  do {
    if (!next())
      return false;
  } while (target > doc());
  return true;
}
- Some implementations are considerably more efficient than that. -
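Putting read() and skipTo() together, a hedged bulk-consumption sketch (reader and term are hypothetical; Java API names assumed):

    TermDocs td = reader.termDocs(new Term("body", "lucene"));
    int[] docs = new int[32];
    int[] freqs = new int[32];
    int count;
    while ((count = td.read(docs, freqs)) != 0) {
      for (int i = 0; i < count; i++) {
        // docs[i] contains the term freqs[i] times
      }
    }
    td.close();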
-
- - Frees associated resources. - - - This exception is thrown when an {@link IndexReader} - tries to make changes to the index (via {@link - IndexReader#deleteDocument}, {@link - IndexReader#undeleteAll} or {@link IndexReader#setNorm}) - but changes have already been committed to the index - since this reader was instantiated. When this happens - you must open a new reader on the current index to make - the changes. - - - - This is a {@link LogMergePolicy} that measures size of a - segment as the number of documents (not taking deletions - into account). - - - -

This class implements a {@link MergePolicy} that tries - to merge segments into levels of exponentially - increasing size, where each level has fewer segments than - the value of the merge factor. Whenever extra segments - (beyond the merge factor upper bound) are encountered, - all segments within the level are merged. You can get or - set the merge factor using {@link #GetMergeFactor()} and - {@link #SetMergeFactor(int)} respectively.

- -

This class is abstract and requires a subclass to - define the {@link #size} method which specifies how a - segment's size is determined. {@link LogDocMergePolicy} - is one subclass that measures size by document count in - the segment. {@link LogByteSizeMergePolicy} is another - subclass that measures size as the total byte size of the - file(s) for the segment.

-

-
- -

Expert: a MergePolicy determines the sequence of - primitive merge operations to be used for overall merge - and optimize operations.

- -

Whenever the segments in an index have been altered by - {@link IndexWriter}, either the addition of a newly - flushed segment, addition of many segments from - addIndexes* calls, or a previous merge that may now need - to cascade, {@link IndexWriter} invokes {@link - #findMerges} to give the MergePolicy a chance to pick - merges that are now required. This method returns a - {@link MergeSpecification} instance describing the set of - merges that should be done, or null if no merges are - necessary. When IndexWriter.optimize is called, it calls - {@link #findMergesForOptimize} and the MergePolicy should - then return the necessary merges.

- -

Note that the policy can return more than one merge at - a time. In this case, if the writer is using {@link - SerialMergeScheduler}, the merges will be run - sequentially but if it is using {@link - ConcurrentMergeScheduler} they will be run concurrently.

- -

The default MergePolicy is {@link - LogByteSizeMergePolicy}.

- -

NOTE: This API is new and still experimental - (subject to change suddenly in the next release)

- -

NOTE: This class typically requires access to - package-private APIs (e.g. SegmentInfos) to do its job; - if you implement your own MergePolicy, you'll need to put - it in package Lucene.Net.Index in order to use - these APIs. -

-
- - Determine what set of merge operations are now necessary on the index. - {@link IndexWriter} calls this whenever there is a change to the segments. - This call is always synchronized on the {@link IndexWriter} instance so - only one thread at a time will call this method. - - - the total set of segments in the index - - - - Determine what set of merge operations is necessary in order to optimize - the index. {@link IndexWriter} calls this when its - {@link IndexWriter#Optimize()} method is called. This call is always - synchronized on the {@link IndexWriter} instance so only one thread at a - time will call this method. - - - the total set of segments in the index - - requested maximum number of segments in the index (currently this - is always 1) - - contains the specific SegmentInfo instances that must be merged - away. This may be a subset of all SegmentInfos. - - - - Determine what set of merge operations is necessary in order to expunge all - deletes from the index. - - - the total set of segments in the index - - - - Release all resources for the policy. - - - Returns true if a newly flushed (not from merge) - segment should use the compound file format. - - - - Returns true if the doc store files should use the - compound file format. - - - - OneMerge provides the information necessary to perform - an individual primitive merge operation, resulting in - a single new segment. The merge spec includes the - subset of segments to be merged as well as whether the - new segment should use the compound file format. - - - - Record that an exception occurred while executing - this merge - - - - Retrieve previous exception set by {@link - #setException}. - - - - Mark this merge as aborted. If this is called - before the merge is committed then the merge will - not be committed. - - - - Returns true if this merge was aborted. - - - A MergeSpecification instance provides the information - necessary to perform multiple merges. It simply - contains a list of {@link OneMerge} instances. - - - - The subset of segments to be included in the primitive merge. - - - Exception thrown if there are any problems while - executing a merge. - - - - - Use {@link #MergePolicy.MergeException(String,Directory)} instead - - - - - Use {@link #MergePolicy.MergeException(Throwable,Directory)} instead - - - - Returns the {@link Directory} of the index that hit - the exception. - - - - Defines the allowed range of log(size) for each - level. A level is computed by taking the max segment - log size, minus LEVEL_LOG_SPAN, and finding all - segments falling within that range. - - - - Default merge factor, which is how many segments are - merged at a time - - - - Default maximum segment size. A segment of this size - - - - -

Returns the number of segments that are merged at - once and also controls the total number of segments - allowed to accumulate in the index.

-

-
- - Determines how often segment indices are merged by
- addDocument(). With smaller values, less RAM is used
- while indexing, and searches on unoptimized indices are
- faster, but indexing speed is slower. With larger
- values, more RAM is used during indexing, and while
- searches on unoptimized indices are slower, indexing is
- faster. Thus larger values (> 10) are best for batch
- index creation, and smaller values (< 10) for indices
- that are interactively maintained.
- 
- Sets whether compound file format should be used for
- newly flushed and newly merged segments.
- 
- Returns true if newly flushed and newly merged segments
- 
- Sets whether compound file format should be used for
- newly flushed and newly merged doc store
- segment files (term vectors and stored fields).
- 
- Returns true if newly flushed and newly merged doc
- store segment files (term vectors and stored fields)
- 
- Sets whether the segment size should be calibrated by
- the number of deletes when choosing segments for merge.
- 
- Returns true if the segment size should be calibrated
- by the number of deletes when choosing segments for merge.
- 
- Returns true if this single info is optimized (has no
- pending norms or deletes, is in the same dir as the
- writer, and matches the current compound file setting)
- 
- Returns the merges necessary to optimize the index.
- This merge policy defines "optimized" to mean only one
- segment in the index, where that segment has no
- deletions pending nor separate norms, and it is in
- compound file format if the current useCompoundFile
- setting is true. This method returns multiple merges
- (mergeFactor at a time) so the {@link MergeScheduler}
- in use may make use of concurrency.
- 
- Finds merges necessary to expunge all deletes from the
- index. We simply merge adjacent segments that have
- deletes, up to mergeFactor at a time.
- 
- Checks if any merges are now necessary and returns a
- {@link MergePolicy.MergeSpecification} if so. A merge
- is necessary when there are more than {@link
- #setMergeFactor} segments at a given level. When
- multiple levels have too many segments, this method
- will return multiple merges, allowing the {@link
- MergeScheduler} to use concurrency.
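- As a concrete illustration, these knobs are normally set on the writer itself;
- a hedged C# sketch against the Lucene.Net 2.9-era IndexWriter API (the directory
- and analyzer are assumed to exist):
- 
-            IndexWriter writer = new IndexWriter(directory, analyzer,
-                                                 IndexWriter.MaxFieldLength.UNLIMITED);
-            writer.SetMergeFactor(30);        // batch indexing: fewer, larger merge levels
-            writer.SetMaxMergeDocs(10000);    // cap merged segment size for interactive use
-            writer.SetUseCompoundFile(true);  // fewer open files, at a small indexing cost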

Determines the largest segment (measured by - document count) that may be merged with other segments. - Small values (e.g., less than 10,000) are best for - interactive indexing, as this limits the length of - pauses while indexing to a few seconds. Larger values - are best for batched indexing and speedier - searches.

- -

The default value is {@link Integer#MAX_VALUE}.

- -

The default merge policy ({@link - LogByteSizeMergePolicy}) also allows you to set this - limit by net size (in MB) of the segment, using {@link - LogByteSizeMergePolicy#setMaxMergeMB}.

-

-
- - Returns the largest segment (measured by document - count) that may be merged with other segments. - - - - - - - - - - Sets the minimum size for the lowest level segments. - Any segments below this size are considered to be on - the same level (even if they vary drastically in size) - and will be merged whenever there are mergeFactor of - them. This effectively truncates the "long tail" of - small segments that would otherwise be created into a - single level. If you set this too large, it could - greatly increase the merging cost during indexing (if - you flush many small segments). - - - - Get the minimum size for a segment to remain - un-merged. - - - - - - Please subclass IndexCommit class instead - - - - Get the segments file (segments_N) associated - with this commit point. - - - - Returns all index files referenced by this commit point. - - - Delete this commit point. -

- Upon calling this, the writer is notified that this commit - point should be deleted. -

- The decision to delete a commit point is made by the {@link IndexDeletionPolicy} in effect,
- and therefore this method should only be called from within its {@link IndexDeletionPolicy#onInit onInit()} or
- {@link IndexDeletionPolicy#onCommit onCommit()} methods. -

-
- - NOTE: this API is experimental and will likely change
- 
- Adds a new term in this field; term ends with U+FFFF char
- 
- Called when we are done adding terms to this field
- 
- Called when DocumentsWriter decides to create a new segment
- 
- Called when DocumentsWriter decides to close the doc stores
- 
- Called when an aborting exception is hit
- 
- Add a new thread
- 
- Called when DocumentsWriter is using too much RAM.
- The consumer should free RAM, if possible, returning
- true if any RAM was in fact freed.
- 
- Class to write byte streams into slices of shared
- byte[]. This is used by DocumentsWriter to hold the
- posting list for many terms in RAM.
- 
- Set up the writer to write at address.
- 
- Write byte into byte slice stream
- 
- Abstract base class for input from a file in a {@link Directory}. A
- random-access input stream. Used for all Lucene index input operations.
- 
- Reads and returns a single byte.
- 
- Reads a specified number of bytes into an array at the specified offset.
- the array to read bytes into
- the offset in the array to start storing bytes
- the number of bytes to read
- 
- Reads a specified number of bytes into an array at the
- specified offset with control over whether the read
- should be buffered (callers who have their own buffer
- should pass in "false" for useBuffer). Currently only
- {@link BufferedIndexInput} respects this parameter.
- the array to read bytes into
- the offset in the array to start storing bytes
- the number of bytes to read
- set to false if the caller will handle buffering.
- 
- Reads four bytes and returns an int.
- 
- Reads an int stored in variable-length format. Reads between one and
- five bytes. Smaller values take fewer bytes. Negative numbers are not
- supported.
- 
- Reads eight bytes and returns a long.
- 
- Reads a long stored in variable-length format. Reads between one and
- nine bytes. Smaller values take fewer bytes. Negative numbers are not
- supported.
- 
- Call this if readString should read characters stored
- in the old modified UTF8 format (length in java chars
- and java's modified UTF8 encoding). This is used for
- indices written pre-2.4. See LUCENE-510 for details.
- 
- Reads a string.
- 
- Reads Lucene's old "modified UTF-8" encoded
- characters into an array.
- the array to read characters into
- the offset in the array to start storing characters
- the number of characters to read
- 
- -- please use readString or readBytes
- instead, and construct the string
- from those utf8 bytes
- 
- Expert
- Similar to {@link #ReadChars(char[], int, int)} but does not do any conversion operations on the bytes it is reading in. It still
- has to invoke {@link #ReadByte()} just as {@link #ReadChars(char[], int, int)} does, but it does not need a buffer to store anything
- and it does not have to do any of the bitwise operations, since we don't actually care what is in the byte except to determine
- how many more bytes to read
- The number of chars to read
- this method operates on old "modified utf8" encoded strings
- 
- Closes the stream to further operations.
- 
- Returns the current position in this file, where the next read will
- occur.
- 
- Sets current position in this file, where the next read will occur.
- 
- The number of bytes in the file.
- 
- Returns a clone of this stream.

Clones of a stream access the same data, and are positioned at the same - point as the stream they were cloned from. - -

Expert: Subclasses must ensure that clones may be positioned at - different points in the input from each other and from the stream they - were cloned from. -
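- To make the variable-length int (VInt) format described above concrete: each byte
- carries seven payload bits, and the high bit flags that another byte follows. A
- minimal C# decoding sketch, assuming only a readByte primitive is available:
- 
-            public static int ReadVInt(System.Func<byte> readByte)
-            {
-                byte b = readByte();
-                int value = b & 0x7F;              // low seven bits
-                for (int shift = 7; (b & 0x80) != 0; shift += 7)
-                {
-                    b = readByte();
-                    value |= (b & 0x7F) << shift;  // splice in the next seven bits
-                }
-                return value;
-            }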

-
- - A {@link FieldSelector} based on a Map of field names to {@link FieldSelectorResult}s
- 
- Similar to a {@link java.io.FileFilter}, the FieldSelector allows one to make decisions about
- what Fields get loaded on a {@link Document} by {@link Lucene.Net.Index.IndexReader#Document(int,Lucene.Net.Documents.FieldSelector)}
- 
- the field to accept or reject
- an instance of {@link FieldSelectorResult}
- if the {@link Field} named fieldName should be loaded.
- 
- Create a MapFieldSelector
- maps from field names (String) to {@link FieldSelectorResult}s
- 
- Create a MapFieldSelector
- fields to LOAD. List of Strings. All other fields are NO_LOAD.
- 
- Create a MapFieldSelector
- fields to LOAD. All other fields are NO_LOAD.
- 
- Load field according to its associated value in fieldSelections
- a field name
- the fieldSelections value that field maps to or NO_LOAD if none.
- 
- Provides information about what should be done with this Field
- 
- Load this {@link Field} every time the {@link Document} is loaded, reading in the data as it is encountered.
- {@link Document#GetField(String)} and {@link Document#GetFieldable(String)} should not return null.

- {@link Document#Add(Fieldable)} should be called by the Reader. -

-
- - Lazily load this {@link Field}. This means the {@link Field} is valid, but it may not actually contain its data until - invoked. {@link Document#GetField(String)} SHOULD NOT BE USED. {@link Document#GetFieldable(String)} is safe to use and should - return a valid instance of a {@link Fieldable}. -

- {@link Document#Add(Fieldable)} should be called by the Reader. -

-
- - Do not load the {@link Field}. {@link Document#GetField(String)} and {@link Document#GetFieldable(String)} should return null. - {@link Document#Add(Fieldable)} is not called. -

- {@link Document#Add(Fieldable)} should not be called by the Reader. -

-
- - Load this field as in the {@link #LOAD} case, but immediately return from {@link Field} loading for the {@link Document}. Thus, the - Document may not have its complete set of Fields. {@link Document#GetField(String)} and {@link Document#GetFieldable(String)} should - both be valid for this {@link Field} -

- {@link Document#Add(Fieldable)} should be called by the Reader. -

-
- - Behaves much like {@link #LOAD} but does not uncompress any compressed data. This is used for internal purposes. - {@link Document#GetField(String)} and {@link Document#GetFieldable(String)} should not return null. -

- {@link Document#Add(Fieldable)} should be called by - the Reader. -

- This is an internal option only, and is - no longer needed now that {@link CompressionTools} - is used for field compression. - -
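- Tying the selector results above together, a field selector is handed to the
- document loader; a hedged C# sketch against the Lucene.Net 2.9-era API (the
- reader, docId and field names are hypothetical):
- 
-            // Load only "title" eagerly; every other field is NO_LOAD.
-            FieldSelector selector = new MapFieldSelector(new string[] { "title" });
-            Document doc = reader.Document(docId, selector);
-            string title = doc.Get("title");           // loaded
-            Fieldable body = doc.GetFieldable("body"); // null: the field was skipped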
- - Expert: Load the size of this {@link Field} rather than its value. - Size is measured as number of bytes required to store the field == bytes for a binary or any compressed value, and 2*chars for a String value. - The size is stored as a binary value, represented as an int in a byte[], with the higher order byte first in [0] - - - - Expert: Like {@link #SIZE} but immediately break from the field loading loop, i.e., stop loading further fields, after the size is loaded - - - A Tokenizer is a TokenStream whose input is a Reader. -

- This is an abstract class; subclasses must override {@link #IncrementToken()} -

- NOTE: Subclasses overriding {@link #IncrementToken()} must call
- {@link AttributeSource#ClearAttributes()} before setting attributes.
- Subclasses overriding {@link #next(Token)} must call
- {@link Token#Clear()} before setting Token attributes. -

-
- - A TokenStream enumerates the sequence of tokens, either from - {@link Field}s of a {@link Document} or from query text. -

- This is an abstract class. Concrete subclasses are: -

  • {@link Tokenizer}, a TokenStream whose input is a Reader; and
  • {@link TokenFilter}, a TokenStream whose input is another TokenStream.
- A new TokenStream API has been introduced with Lucene 2.9. This API - has moved from being {@link Token} based to {@link Attribute} based. While - {@link Token} still exists in 2.9 as a convenience class, the preferred way - to store the information of a {@link Token} is to use {@link AttributeImpl}s. -

- TokenStream now extends {@link AttributeSource}, which provides - access to all of the token {@link Attribute}s for the TokenStream. - Note that only one instance per {@link AttributeImpl} is created and reused - for every token. This approach reduces object creation and allows local - caching of references to the {@link AttributeImpl}s. See - {@link #IncrementToken()} for further details. -

- The workflow of the new TokenStream API is as follows: -

  1. Instantiation of TokenStream/{@link TokenFilter}s which add/get attributes to/from the {@link AttributeSource}.
  2. The consumer calls {@link TokenStream#Reset()}.
  3. The consumer retrieves attributes from the stream and stores local references to all attributes it wants to access.
  4. The consumer calls {@link #IncrementToken()} until it returns false and consumes the attributes after each call.
  5. The consumer calls {@link #End()} so that any end-of-stream operations can be performed.
  6. The consumer calls {@link #Close()} to release any resource when finished using the TokenStream.
- To make sure that filters and consumers know which attributes are available, - the attributes must be added during instantiation. Filters and consumers are - not required to check for availability of attributes in - {@link #IncrementToken()}. -

- You can find some example code for the new API in the analysis package level - Javadoc. -

- Sometimes it is desirable to capture a current state of a TokenStream - , e. g. for buffering purposes (see {@link CachingTokenFilter}, - {@link TeeSinkTokenFilter}). For this usecase - {@link AttributeSource#CaptureState} and {@link AttributeSource#RestoreState} - can be used. -

-
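- Putting the workflow above into code, a minimal C# consumer sketch, hedged
- against the Lucene.Net 2.9-era attribute API (the analyzer, field name and
- text are assumed to exist):
- 
-            TokenStream stream = analyzer.TokenStream("content",
-                                                      new System.IO.StringReader(text));
-            TermAttribute termAtt =
-                (TermAttribute) stream.AddAttribute(typeof(TermAttribute));
-            stream.Reset();
-            while (stream.IncrementToken())        // step 4 of the workflow
-            {
-                System.Console.WriteLine(termAtt.Term());
-            }
-            stream.End();                          // step 5
-            stream.Close();                        // step 6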
- - An AttributeSource contains a list of different {@link AttributeImpl}s,
- and methods to add and get them. There can only be a single instance
- of an attribute in the same AttributeSource instance. This is ensured
- by passing in the actual type of the Attribute (Class<Attribute>) to
- the {@link #AddAttribute(Class)}, which then checks if an instance of
- that type is already present. If yes, it returns the instance, otherwise
- it creates a new instance and returns it.
- 
- An AttributeSource using the default attribute factory {@link AttributeSource.AttributeFactory#DEFAULT_ATTRIBUTE_FACTORY}.
- 
- An AttributeSource that uses the same attributes as the supplied one.
- 
- An AttributeSource using the supplied {@link AttributeFactory} for creating new {@link Attribute} instances.
- 
- returns the used AttributeFactory.
- 
- Returns a new iterator that iterates the attribute classes
- in the same order they were added in.
- Signature for Java 1.5: public Iterator<Class<? extends Attribute>> getAttributeClassesIterator()
- Note that this return value is different from Java in that it enumerates over the values
- and not the keys
- 
- Returns a new iterator that iterates all unique Attribute implementations.
- This iterator may contain fewer entries than {@link #getAttributeClassesIterator},
- if one instance implements more than one Attribute interface.
- Signature for Java 1.5: public Iterator<AttributeImpl> getAttributeImplsIterator()
- 
- a cache that stores all interfaces for known implementation classes for performance (slow reflection)
- 
- Adds a custom AttributeImpl instance with one or more Attribute interfaces.
- 
- The caller must pass in a Class<? extends Attribute> value.
- This method first checks if an instance of that class is
- already in this AttributeSource and returns it. Otherwise a
- new instance is created, added to this AttributeSource and returned.
- Signature for Java 1.5: public <T extends Attribute> T addAttribute(Class<T>)
- 
- Returns true, iff this AttributeSource has any attributes
- 
- The caller must pass in a Class<? extends Attribute> value.
- Returns true, iff this AttributeSource contains the passed-in Attribute.
- Signature for Java 1.5: public boolean hasAttribute(Class<? extends Attribute>)
- 
- The caller must pass in a Class<? extends Attribute> value.
- Returns the instance of the passed in Attribute contained in this AttributeSource
- Signature for Java 1.5: public <T extends Attribute> T getAttribute(Class<T>)
- IllegalArgumentException if this AttributeSource does not contain the
- Attribute. It is recommended to always use {@link #addAttribute} even in consumers
- of TokenStreams, because you cannot know if a specific TokenStream really uses
- a specific Attribute. {@link #addAttribute} will automatically make the attribute
- available. If you want to only use the attribute, if it is available (to optimize
- consuming), use {@link #hasAttribute}.
- 
- Resets all Attributes in this AttributeSource by calling
- {@link AttributeImpl#Clear()} on each Attribute implementation.
- 
- Captures the state of all Attributes. The return value can be passed to
- {@link #restoreState} to restore the state of this or another AttributeSource.
- 
- Restores this state by copying the values of all attribute implementations
- that this state contains into the attributes implementations of the targetStream.
- The targetStream must contain a corresponding instance for each argument
- contained in this state (e.g. it is not possible to restore the state of
- an AttributeSource containing a TermAttribute into an AttributeSource using
- a Token instance as implementation).
- 
- Note that this method does not affect attributes of the targetStream
- that are not contained in this state. In other words, if for example
- the targetStream contains an OffsetAttribute, but this state doesn't, then
- the value of the OffsetAttribute remains unchanged. It might be desirable to
- reset its value to the default, in which case the caller should first
- call {@link TokenStream#ClearAttributes()} on the targetStream.
- 
- Performs a clone of all {@link AttributeImpl} instances returned in a new
- AttributeSource instance. This method can be used to e.g. create another TokenStream
- with exactly the same attributes (using {@link #AttributeSource(AttributeSource)})
- 
- An AttributeFactory creates instances of {@link AttributeImpl}s.
- 
- returns an {@link AttributeImpl} for the supplied {@link Attribute} interface class.

Signature for Java 1.5: public AttributeImpl createAttributeInstance(Class<? extends Attribute> attClass) -

-
- - This is the default factory that creates {@link AttributeImpl}s using the - class name of the supplied {@link Attribute} interface class by appending Impl to it. - - - - This class holds the state of an AttributeSource. - - - - - - - Remove this when old API is removed! - - - - Remove this when old API is removed! - - - - Remove this when old API is removed! - - - - Remove this when old API is removed! - - - - Remove this when old API is removed! - - - - Remove this when old API is removed! - - - - A TokenStream using the default attribute factory. - - - A TokenStream that uses the same attributes as the supplied one. - - - A TokenStream using the supplied AttributeFactory for creating new {@link Attribute} instances. - - - Remove this when old API is removed! - - - - Remove this when old API is removed! - - - - For extra performance you can globally enable the new - {@link #IncrementToken} API using {@link Attribute}s. There will be a - small, but in most cases negligible performance increase by enabling this, - but it only works if all TokenStreams use the new API and - implement {@link #IncrementToken}. This setting can only be enabled - globally. -

- This setting only affects TokenStreams instantiated after this - call. All TokenStreams already created use the other setting. -

- All core {@link Analyzer}s are compatible with this setting, if you have - your own TokenStreams that are also compatible, you should enable - this. -

- When enabled, tokenization may throw {@link UnsupportedOperationException}s,
- if the whole tokenizer chain is not compatible, e.g. one of the
- TokenStreams does not implement the new TokenStream API. -

- The default is false, so there is the fallback to the old API - available. - -

- This setting will no longer be needed in Lucene 3.0 as the old - API will be removed. - -
- - Returns if only the new API is used.
- 
- This setting will no longer be needed in Lucene 3.0 as
- the old API will be removed.
- 
- Consumers (i.e., {@link IndexWriter}) use this method to advance the stream to
- the next token. Implementing classes must implement this method and update
- the appropriate {@link AttributeImpl}s with the attributes of the next
- token.
- 
- The producer must make no assumptions about the attributes after the
- method has returned: the caller may arbitrarily change it. If the
- producer needs to preserve the state for subsequent calls, it can use
- {@link #captureState} to create a copy of the current attribute state.
- 
- This method is called for every token of a document, so an efficient
- implementation is crucial for good performance. To avoid calls to
- {@link #AddAttribute(Class)} and {@link #GetAttribute(Class)} or downcasts,
- references to all {@link AttributeImpl}s that this stream uses should be
- retrieved during instantiation.
- 
- To ensure that filters and consumers know which attributes are available,
- the attributes must be added during instantiation. Filters and consumers
- are not required to check for availability of attributes in
- {@link #IncrementToken()}.
- 
- false for end of stream; true otherwise
- 
- Note that this method will be defined abstract in Lucene
- 3.0.
- 
- This method is called by the consumer after the last token has been
- consumed, after {@link #IncrementToken()} returned false
- (using the new TokenStream API). Streams implementing the old API
- should upgrade to use this feature. -

- This method can be used to perform any end-of-stream operations, such as - setting the final offset of a stream. The final offset of a stream might - differ from the offset of the last token eg in case one or more whitespaces - followed after the last token, but a {@link WhitespaceTokenizer} was used. - -

- IOException -
- - Returns the next token in the stream, or null at EOS. When possible, the - input Token should be used as the returned Token (this gives fastest - tokenization performance), but this is not required and a new Token may be - returned. Callers may re-use a single Token instance for successive calls - to this method. - - This implicitly defines a "contract" between consumers (callers of this - method) and producers (implementations of this method that are the source - for tokens): -
  • A consumer must fully consume the previously returned {@link Token} before calling this method again.
  • A producer must call {@link Token#Clear()} before setting the fields in it and returning it.
- Also, the producer must make no assumptions about a {@link Token} after it - has been returned: the caller may arbitrarily change it. If the producer - needs to hold onto the {@link Token} for subsequent calls, it must clone() - it before storing it. Note that a {@link TokenFilter} is considered a - consumer. - -
- a {@link Token} that may or may not be used to return; - this parameter should never be null (the callee is not required to - check for null before using it, but it is a good idea to assert that - it is not null.) - - next {@link Token} in the stream or null if end-of-stream was hit - - The new {@link #IncrementToken()} and {@link AttributeSource} - APIs should be used instead. - -
- - Returns the next {@link Token} in the stream, or null at EOS. - - - The returned Token is a "full private copy" (not re-used across - calls to {@link #Next()}) but will be slower than calling - {@link #Next(Token)} or using the new {@link #IncrementToken()} - method with the new {@link AttributeSource} API. - - - - Resets this stream to the beginning. This is an optional operation, so - subclasses may or may not implement this method. {@link #Reset()} is not needed for - the standard indexing process. However, if the tokens of a - TokenStream are intended to be consumed more than once, it is - necessary to implement {@link #Reset()}. Note that if your TokenStream - caches tokens and feeds them back again after a reset, it is imperative - that you clone the tokens when you store them away (on the first pass) as - well as when you return them (on future passes after {@link #Reset()}). - - - - Releases resources associated with this stream. - - - Remove this when old API is removed! - - - - Remove this when old API is removed! - - - - The text source for this Tokenizer. - - - Construct a tokenizer with null input. - - - Construct a token stream processing the given input. - - - Construct a tokenizer with null input using the given AttributeFactory. - - - Construct a token stream processing the given input using the given AttributeFactory. - - - Construct a token stream processing the given input using the given AttributeSource. - - - Construct a token stream processing the given input using the given AttributeSource. - - - By default, closes the input Reader. - - - Return the corrected offset. If {@link #input} is a {@link CharStream} subclass - this method calls {@link CharStream#CorrectOffset}, else returns currentOff. - - offset as seen in the output - - corrected offset based on the input - - - - - - Expert: Reset the tokenizer to a new reader. Typically, an - analyzer (in its reusableTokenStream method) will use - this to re-use a previously created tokenizer. - - - - The start and end character offset of a Token. - - - Base interface for attributes. - - - Returns this Token's starting offset, the position of the first character - corresponding to this token in the source text. - Note that the difference between endOffset() and startOffset() may not be - equal to termText.length(), as the term text may have been altered by a - stemmer or some other filter. - - - - Set the starting and ending offset. - See StartOffset() and EndOffset() - - - - Returns this Token's ending offset, one greater than the position of the - last character corresponding to this token in the source text. The length - of the token in the source text is (endOffset - startOffset). - - - - Filters {@link LetterTokenizer} with {@link LowerCaseFilter} and - {@link StopFilter}. - - -

- You must specify the required {@link Version} compatibility when creating - StopAnalyzer: -

  • As of 2.9, position increments are preserved
-
-
- - An Analyzer builds TokenStreams, which analyze text. It thus represents a - policy for extracting index terms from text. -

- Typical implementations first build a Tokenizer, which breaks the stream of - characters from the Reader into raw Tokens. One or more TokenFilters may - then be applied to the output of the Tokenizer. -

-
- - Creates a TokenStream which tokenizes all the text in the provided - Reader. Must be able to handle null field name for - backward compatibility. - - - - Creates a TokenStream that is allowed to be re-used - from the previous time that the same thread called - this method. Callers that do not need to use more - than one TokenStream at the same time from this - analyzer should use this method for better - performance. - - - - Used by Analyzers that implement reusableTokenStream - to retrieve previously saved TokenStreams for re-use - by the same thread. - - - - Used by Analyzers that implement reusableTokenStream - to save a TokenStream for later re-use by the same - thread. - - - - This is only present to preserve - back-compat of classes that subclass a core analyzer - and override tokenStream but not reusableTokenStream - - - - Invoked before indexing a Fieldable instance if - terms have already been added to that field. This allows custom - analyzers to place an automatic position increment gap between - Fieldable instances using the same field name. The default value - position increment gap is 0. With a 0 position increment gap and - the typical default token position increment of 1, all terms in a field, - including across Fieldable instances, are in successive positions, allowing - exact PhraseQuery matches, for instance, across Fieldable instance boundaries. - - - Fieldable name being indexed. - - position increment gap, added to the next token emitted from {@link #TokenStream(String,Reader)} - - - - Just like {@link #getPositionIncrementGap}, except for - Token offsets instead. By default this returns 1 for - tokenized fields and, as if the fields were joined - with an extra space character, and 0 for un-tokenized - fields. This method is only called if the field - produced at least one token for indexing. - - - the field just indexed - - offset gap, added to the next token emitted from {@link #TokenStream(String,Reader)} - - - - Frees persistent resources used by this Analyzer - - - An array containing some common English words that are not usually useful - for searching. - - Use {@link #ENGLISH_STOP_WORDS_SET} instead - - - - An unmodifiable set containing some common English words that are not usually useful - for searching. - - - - Builds an analyzer which removes words in - ENGLISH_STOP_WORDS. - - Use {@link #StopAnalyzer(Version)} instead - - - - Builds an analyzer which removes words in ENGLISH_STOP_WORDS. - - - Builds an analyzer which removes words in - ENGLISH_STOP_WORDS. - - - See {@link StopFilter#SetEnablePositionIncrements} - - Use {@link #StopAnalyzer(Version)} instead - - - - Builds an analyzer with the stop words from the given set. - Use {@link #StopAnalyzer(Version, Set)} instead - - - - Builds an analyzer with the stop words from the given set. - - - Builds an analyzer with the stop words from the given set. - Set of stop words - - - See {@link StopFilter#SetEnablePositionIncrements} - - Use {@link #StopAnalyzer(Version, Set)} instead - - - - Builds an analyzer which removes words in the provided array. - Use {@link #StopAnalyzer(Set, boolean)} instead - - Use {@link #StopAnalyzer(Version, Set)} instead - - - - Builds an analyzer which removes words in the provided array. - Array of stop words - - - See {@link StopFilter#SetEnablePositionIncrements} - - Use {@link #StopAnalyzer(Version, Set)} instead - - - - Builds an analyzer with the stop words from the given file. 
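- Returning to the position increment gap described above: it is usually supplied
- by overriding the hook in a custom analyzer. A hedged C# sketch, assuming the
- Lucene.Net 2.9-era API (the subclass name and gap value are hypothetical):
- 
-            public class GapAnalyzer : Lucene.Net.Analysis.Standard.StandardAnalyzer
-            {
-                public GapAnalyzer() : base(Lucene.Net.Util.Version.LUCENE_29) { }
- 
-                public override int GetPositionIncrementGap(string fieldName)
-                {
-                    // 100 positions between Fieldable instances of the same field,
-                    // so PhraseQuery matches cannot cross value boundaries.
-                    return 100;
-                }
-            }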
- - - Use {@link #StopAnalyzer(Version, File)} instead - - - - Builds an analyzer with the stop words from the given file. - - - File to load stop words from - - - See {@link StopFilter#SetEnablePositionIncrements} - - Use {@link #StopAnalyzer(Version, File)} instead - - - - Builds an analyzer with the stop words from the given file. - - - - - See
above - - File to load stop words from - - - - Builds an analyzer with the stop words from the given reader. - - - Use {@link #StopAnalyzer(Version, Reader)} instead - - - - Builds an analyzer with the stop words from the given reader. - - - Reader to load stop words from - - - See {@link StopFilter#SetEnablePositionIncrements} - - Use {@link #StopAnalyzer(Version, Reader)} instead - - - - Builds an analyzer with the stop words from the given reader. - - - See above - - Reader to load stop words from - - - - Filters LowerCaseTokenizer with StopFilter. - - - Filters LowerCaseTokenizer with StopFilter. - - - Normalizes tokens extracted with {@link StandardTokenizer}. - - - A TokenFilter is a TokenStream whose input is another TokenStream. -

- This is an abstract class; subclasses must override {@link #IncrementToken()}. - -

- - -
- - The source of tokens for this filter.
- 
- Construct a token stream filtering the given input.
- 
- Performs end-of-stream operations, if any, and then calls end() on the
- input TokenStream.

- NOTE: Be sure to call super.end() first when overriding this method. -

-
- - Close the input TokenStream. - - - Reset the filter as well as the input TokenStream. - - - Construct filtering in. - - - Returns the next token in the stream, or null at EOS. -

Removes 's from the end of words. -

Removes dots from acronyms. -

-
- - Expert: This class provides a {@link TokenStream} - for indexing numeric values that can be used by {@link - NumericRangeQuery} or {@link NumericRangeFilter}. - -

Note that for simple usage, {@link NumericField} is - recommended. {@link NumericField} disables norms and - term freqs, as they are not usually needed during - searching. If you need to change these settings, you - should use this class. - -

See {@link NumericField} for capabilities of fields - indexed numerically.

- -

Here's an example usage, for an int field: - -

-             Field field = new Field(name, new NumericTokenStream(precisionStep).setIntValue(value));
-             field.setOmitNorms(true);
-             field.setOmitTermFreqAndPositions(true);
-             document.add(field);
-            
- -

For optimal performance, re-use the TokenStream and Field instance - for more than one document: - -

-             NumericTokenStream stream = new NumericTokenStream(precisionStep);
-             Field field = new Field(name, stream);
-             field.setOmitNorms(true);
-             field.setOmitTermFreqAndPositions(true);
-             Document document = new Document();
-             document.add(field);
-            
-             for(all documents) {
-               stream.setIntValue(value)
-               writer.addDocument(document);
-             }
-            
- -
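- The same reuse pattern as a C# sketch, hedged against the Lucene.Net 2.9-era
- API (the value source and writer are assumed to exist):
- 
-             NumericTokenStream stream = new NumericTokenStream(precisionStep);
-             Field field = new Field(name, stream);
-             field.SetOmitNorms(true);
-             field.SetOmitTermFreqAndPositions(true);
-             Document document = new Document();
-             document.Add(field);
- 
-             foreach (int value in values)   // hypothetical source of values
-             {
-                 stream.SetIntValue(value);
-                 writer.AddDocument(document);
-             }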

This stream is not intended to be used in analyzers; - it's more for iterating the different precisions during - indexing a specific numeric value.

- -

NOTE: as token streams are only consumed once - the document is added to the index, if you index more - than one numeric field, use a separate NumericTokenStream - instance for each.

- -

See {@link NumericRangeQuery} for more details on the - precisionStep - parameter as well as how numeric fields work under the hood.

- -

NOTE: This API is experimental and - might change in incompatible ways in the next release. - -

- 2.9 - -
- - The full precision token gets this token type assigned.
- 
- The lower precision tokens get this token type assigned.
- 
- Creates a token stream for numeric values using the default precisionStep
- {@link NumericUtils#PRECISION_STEP_DEFAULT} (4). The stream is not yet initialized;
- before using it, set a value using the various set???Value() methods.
- 
- Creates a token stream for numeric values with the specified
- precisionStep. The stream is not yet initialized;
- before using it, set a value using the various set???Value() methods.
- 
- Expert: Creates a token stream for numeric values with the specified
- precisionStep using the given {@link AttributeSource}.
- The stream is not yet initialized;
- before using it, set a value using the various set???Value() methods.
- 
- Expert: Creates a token stream for numeric values with the specified
- precisionStep using the given
- {@link org.apache.lucene.util.AttributeSource.AttributeFactory}.
- The stream is not yet initialized;
- before using it, set a value using the various set???Value() methods.
- 
- Initializes the token stream with the supplied long value.
- the value, for which this TokenStream should enumerate tokens.
- this instance, because of this you can use it the following way:
- new Field(name, new NumericTokenStream(precisionStep).SetLongValue(value))
- 
- Initializes the token stream with the supplied int value.
- the value, for which this TokenStream should enumerate tokens.
- this instance, because of this you can use it the following way:
- new Field(name, new NumericTokenStream(precisionStep).SetIntValue(value))
- 
- Initializes the token stream with the supplied double value.
- the value, for which this TokenStream should enumerate tokens.
- this instance, because of this you can use it the following way:
- new Field(name, new NumericTokenStream(precisionStep).SetDoubleValue(value))
- 
- Initializes the token stream with the supplied float value.
- the value, for which this TokenStream should enumerate tokens.
- this instance, because of this you can use it the following way:
- new Field(name, new NumericTokenStream(precisionStep).SetFloatValue(value))
- 
- Holds a map of String input to String output, to be used
- with {@link MappingCharFilter}.
- 
- Records a replacement to be applied to the input
- stream. Whenever singleMatch occurs in
- the input, it will be replaced with
- replacement.
- input String to be replaced
- output String
- 
- An abstract base class for simple, character-oriented tokenizers.
- 
- Returns true iff a character should be included in a token. This
- tokenizer generates as tokens adjacent sequences of characters which
- satisfy this predicate. Characters for which this is false are used to
- define token boundaries and are not included in tokens.
- 
- Called on each token character to normalize it before it is added to the
- token. The default implementation does nothing. Subclasses may use this
- to, e.g., lowercase tokens.
- 
- Will be removed in Lucene 3.0. This method is final, as it should
- not be overridden. Delegates to the backwards compatibility layer.
- 
- Will be removed in Lucene 3.0. This method is final, as it should
- not be overridden. Delegates to the backwards compatibility layer.
- 
- An iterator to iterate over set bits in an OpenBitSet.
- This is faster than nextSetBit() for iterating over the complete set of bits,
- especially when the density of the bits set is high.
- - - $Id$
- 
- The Python code that generated the bit list:
- 
-            def bits2int(val):
-                arr = 0
-                for shift in range(8, 0, -1):
-                    if val & 0x80:
-                        arr = (arr << 4) | shift
-                    val = val << 1
-                return arr
- 
-            def int_table():
-                tbl = [hex(bits2int(val)).strip('L') for val in range(256)]
-                return ','.join(tbl)
- 
- use {@link #NextDoc()} instead.
- 
- use {@link #Advance(int)} instead.
- 
- use {@link #DocID()} instead.
- 
- Expert: allocate a new buffer.
- Subclasses can allocate differently.
- size of allocated buffer.
- allocated buffer.
- 
- The interface for search implementations.

- Searchable is the abstract network protocol for searching. Implementations - provide search over a single index, over multiple indices, and over indices - on remote servers. - -

- Queries, filters and sort criteria are designed to be compact so that they - may be efficiently passed to a remote index, with only the top-scoring hits - being returned, rather than every matching hit. - - NOTE: this interface is kept public for convenience. Since it is not - expected to be implemented directly, it may be changed unexpectedly between - releases. -

-
- - Lower-level search API. - -

{@link HitCollector#Collect(int,float)} is called for every non-zero - scoring document. -
HitCollector-based access to remote indexes is discouraged. - -

Applications should only use this if they need all of the - matching documents. The high-level search API ({@link - Searcher#Search(Query)}) is usually more efficient, as it skips - non-high-scoring hits. - -

- to match documents - - if non-null, used to permit documents to be collected. - - to receive hits - - BooleanQuery.TooManyClauses - use {@link #Search(Weight, Filter, Collector)} instead. - -
- - Lower-level search API. - -

- {@link Collector#Collect(int)} is called for every document.
- Collector-based access to remote indexes is discouraged. - -

- Applications should only use this if they need all of the matching - documents. The high-level search API ({@link Searcher#Search(Query)}) is - usually more efficient, as it skips non-high-scoring hits. - -

- to match documents - - if non-null, used to permit documents to be collected. - - to receive hits - - BooleanQuery.TooManyClauses -
- - Frees resources associated with this Searcher. - Be careful not to call this method while you are still using objects - like {@link Hits}. - - - - Expert: Returns the number of documents containing term. - Called by search code to compute term weights. - - - - - - Expert: For each term in the terms array, calculates the number of - documents containing term. Returns an array with these - document frequencies. Used to minimize number of remote calls. - - - - Expert: Returns one greater than the largest possible document number. - Called by search code to compute term weights. - - - - - - Expert: Low-level search implementation. Finds the top n - hits for query, applying filter if non-null. - -

Called by {@link Hits}. - -

Applications should usually call {@link Searcher#Search(Query)} or - {@link Searcher#Search(Query,Filter)} instead. -

- BooleanQuery.TooManyClauses -
- - Expert: Returns the stored fields of document i. - Called by {@link HitCollector} implementations. - - - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - - Get the {@link Lucene.Net.Documents.Document} at the nth position. The {@link Lucene.Net.Documents.FieldSelector} - may be used to determine what {@link Lucene.Net.Documents.Field}s to load and how they should be loaded. - - NOTE: If the underlying Reader (more specifically, the underlying FieldsReader) is closed before the lazy {@link Lucene.Net.Documents.Field} is - loaded an exception may be thrown. If you want the value of a lazy {@link Lucene.Net.Documents.Field} to be available after closing you must - explicitly load it or fetch the Document again with a new loader. - - - - Get the document at the nth position - - The {@link Lucene.Net.Documents.FieldSelector} to use to determine what Fields should be loaded on the Document. May be null, in which case all Fields will be loaded. - - The stored fields of the {@link Lucene.Net.Documents.Document} at the nth position - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - - - - - - - - - - - - - - Expert: called to re-write queries into primitive queries. - BooleanQuery.TooManyClauses - - - Expert: low-level implementation method - Returns an Explanation that describes how doc scored against - weight. - -

This is intended to be used in developing Similarity implementations, - and, for good performance, should not be displayed with every hit. - Computing an explanation is as expensive as executing the query over the - entire index. -

Applications should call {@link Searcher#Explain(Query, int)}. -

- BooleanQuery.TooManyClauses -
- - Expert: Low-level search implementation with arbitrary sorting. Finds - the top n hits for query, applying - filter if non-null, and sorting the hits by the criteria in - sort. - -

Applications should usually call - {@link Searcher#Search(Query,Filter,int,Sort)} instead. - -

- BooleanQuery.TooManyClauses -
- - A Filter that restricts search results to a range of values in a given - field. - -

This filter matches the documents looking for terms that fall into the - supplied range according to {@link String#compareTo(String)}. It is not intended - for numerical ranges, use {@link NumericRangeFilter} instead. - -

If you construct a large number of range filters with different ranges but on the - same field, {@link FieldCacheRangeFilter} may have significantly better performance. - -

- Use {@link TermRangeFilter} for term ranges or - {@link NumericRangeFilter} for numeric ranges instead. - This class will be removed in Lucene 3.0. - -
- - A wrapper for {@link MultiTermQuery}, that exposes its - functionality as a {@link Filter}. -

- MultiTermQueryWrapperFilter is not designed to - be used by itself. Normally you subclass it to provide a Filter - counterpart for a {@link MultiTermQuery} subclass. -

- For example, {@link TermRangeFilter} and {@link PrefixFilter} extend - MultiTermQueryWrapperFilter. - This class also provides the functionality behind - {@link MultiTermQuery#CONSTANT_SCORE_FILTER_REWRITE}; - this is why it is not abstract. -

-
- - Wrap a {@link MultiTermQuery} as a Filter. - - - Expert: Return the number of unique terms visited during execution of the filter. - If there are many of them, you may consider using another filter type - or optimize your total term count in index. -

This method is not thread safe, be sure to only call it when no filter is running! - If you re-use the same filter instance for another - search, be sure to first reset the term counter - with {@link #clearTotalNumberOfTerms}. -

- - -
- - Expert: Resets the counting of unique terms. - Do this before executing the filter. - - - - - - Returns a BitSet with true for documents which should be - permitted in search results, and false for those that should - not. - - Use {@link #GetDocIdSet(IndexReader)} instead. - - - - Returns a DocIdSet with documents that should be - permitted in search results. - - - - The field this range applies to - - The lower bound on this range - - The upper bound on this range - - Does this range include the lower bound? - - Does this range include the upper bound? - - IllegalArgumentException if both terms are null or if - lowerTerm is null and includeLower is true (similar for upperTerm - and includeUpper) - - - - WARNING: Using this constructor and supplying a non-null - value in the collator parameter will cause every single - index Term in the Field referenced by lowerTerm and/or upperTerm to be - examined. Depending on the number of index Terms in this Field, the - operation could be very slow. - - - The lower bound on this range - - The upper bound on this range - - Does this range include the lower bound? - - Does this range include the upper bound? - - The collator to use when determining range inclusion; set - to null to use Unicode code point ordering instead of collation. - - IllegalArgumentException if both terms are null or if - lowerTerm is null and includeLower is true (similar for upperTerm - and includeUpper) - - - - Constructs a filter for field fieldName matching - less than or equal to upperTerm. - - - - Constructs a filter for field fieldName matching - greater than or equal to lowerTerm. - - - - Implements the fuzzy search query. The similarity measurement - is based on the Levenshtein (edit distance) algorithm. - - Warning: this query is not very scalable with its default prefix - length of 0 - in this case, *every* term will be enumerated and - cause an edit score calculation. - - - - - Create a new FuzzyQuery that will match terms with a similarity - of at least minimumSimilarity to term. - If a prefixLength > 0 is specified, a common prefix - of that length is also required. - - - the term to search for - - a value between 0 and 1 to set the required similarity - between the query term and the matching terms. For example, for a - minimumSimilarity of 0.5 a term of the same length - as the query term is considered similar to the query term if the edit distance - between both terms is less than length(term)*0.5 - - length of common (non-fuzzy) prefix - - IllegalArgumentException if minimumSimilarity is >= 1 or < 0 - or if prefixLength < 0 - - - - Calls {@link #FuzzyQuery(Term, float) FuzzyQuery(term, minimumSimilarity, 0)}. - - - Calls {@link #FuzzyQuery(Term, float) FuzzyQuery(term, 0.5f, 0)}. - - - Returns the minimum similarity that is required for this query to match. - float value between 0.0 and 1.0 - - - - Returns the non-fuzzy prefix length. This is the number of characters at the start - of a term that must be identical (not fuzzy) to the query term if the query - is to match that term. - - - - Returns the pattern term. - - - A PriorityQueue maintains a partial ordering of its elements such that the - least element can always be found in constant time. Put()'s and pop()'s - require log(size) time. - -

NOTE: This class pre-allocates a full array of - length maxSize+1, in {@link #initialize}. - -

-
- - Determines the ordering of objects in this priority queue. Subclasses - must define this one method. - - - - This method can be overridden by extending classes to return a sentinel - object which will be used by {@link #Initialize(int)} to fill the queue, so - that the code which uses that queue can always assume it's full and only - change the top without attempting to insert any new object.
- - Those sentinel values should always compare worse than any non-sentinel - value (i.e., {@link #LessThan(Object, Object)} should always favor the - non-sentinel values).
- - By default, this method returns null, which means the queue will not be
- filled with sentinel values. Otherwise, the value returned will be used to
- pre-populate the queue. Adds sentinel values to the queue.
- - If this method is extended to return a non-null value, then the following - usage pattern is recommended: - -
-            // extends getSentinelObject() to return a non-null value.
-            PriorityQueue pq = new MyQueue(numHits);
-            // save the 'top' element, which is guaranteed to not be null.
-            MyObject pqTop = (MyObject) pq.top();
-            <...>
-            // now in order to add a new element, which is 'better' than top (after 
-            // you've verified it is better), it is as simple as:
-            pqTop.change();
-            pqTop = pq.updateTop();
-            
- - NOTE: if this method returns a non-null value, it will be called by - {@link #Initialize(int)} {@link #Size()} times, relying on a new object to - be returned and will not check if it's null again. Therefore you should - ensure any call to this method creates a new instance and behaves - consistently, e.g., it cannot return null if it previously returned - non-null. - -
- the sentinel object to use to pre-populate the queue, or null if - sentinel objects are not supported. - -
- - Subclass constructors must call this. - - - Adds an Object to a PriorityQueue in log(size) time. If one tries to add - more objects than maxSize from initialize a RuntimeException - (ArrayIndexOutOfBound) is thrown. - - - use {@link #Add(Object)} which returns the new top object, - saving an additional call to {@link #Top()}. - - - - Adds an Object to a PriorityQueue in log(size) time. If one tries to add - more objects than maxSize from initialize an - {@link ArrayIndexOutOfBoundsException} is thrown. - - - the new 'top' element in the queue. - - - - Adds element to the PriorityQueue in log(size) time if either the - PriorityQueue is not full, or not lessThan(element, top()). - - - - - true if element is added, false otherwise. - - use {@link #InsertWithOverflow(Object)} instead, which - encourages objects reuse. - - - - insertWithOverflow() is the same as insert() except its - return value: it returns the object (if any) that was - dropped off the heap because it was full. This can be - the given parameter (in case it is smaller than the - full heap's minimum, and couldn't be added), or another - object that was previously the smallest value in the - heap and now has been replaced by a larger one, or null - if the queue wasn't yet full with maxSize elements. - - - - Returns the least element of the PriorityQueue in constant time. - - - Removes and returns the least element of the PriorityQueue in log(size) - time. - - - - Should be called when the Object at top changes values. Still log(n) worst - case, but it's at least twice as fast to - -
-            pq.top().change();
-            pq.adjustTop();
-            
- - instead of - -
-            o = pq.pop();
-            o.change();
-            pq.push(o);
-            
- -
- use {@link #UpdateTop()} which returns the new top element and - saves an additional call to {@link #Top()}. - -
- - Should be called when the Object at top changes values. Still log(n) worst - case, but it's at least twice as fast to - -
-            pq.top().change();
-            pq.updateTop();
-            
- - instead of - -
-            o = pq.pop();
-            o.change();
-            pq.push(o);
-            
- -
- the new 'top' element. - -
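- 
- For instance, the insertWithOverflow() method described above keeps the
- ten "best" objects during a scan (a hedged sketch; MyQueue, MyObject
- and the candidates collection are hypothetical):
- 
-            PriorityQueue pq = new MyQueue(10);
-            for (Iterator it = candidates.iterator(); it.hasNext();) {
-              // null until the queue is full; afterwards either the rejected
-              // candidate or the displaced former-smallest entry, free for reuse
-              MyObject overflow = (MyObject) pq.insertWithOverflow(it.next());
-            }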
- - Returns the number of elements currently stored in the PriorityQueue. - - - Removes all entries from the PriorityQueue. - - - Expert: obtains int field values from the - {@link Lucene.Net.Search.FieldCache FieldCache} - using getInts() and makes those values - available as other numeric types, casting as needed. - -

- WARNING: The status of the Search.Function package is experimental. - The APIs introduced here might change in the future and will not be - supported anymore in such a case. - -

- for requirements - on the field. - -

NOTE: with the switch in 2.9 to segment-based - searching, if {@link #getValues} is invoked with a - composite (multi-segment) reader, this can easily cause - double RAM usage for the values in the FieldCache. It's - best to switch your application to pass only atomic - (single segment) readers to this API. Alternatively, for - a short-term fix, you could wrap your ValueSource using - {@link MultiValueSource}, which costs more CPU per lookup - but will not consume double the FieldCache RAM.

- - - -

Create a cached int field source with default string-to-int parser. -
- - Create a cached int field source with a specific string-to-int parser.
- 
- 
- Writes norms. Each thread X field accumulates the norms
- for the doc/fields it saw, then the flush method below
- merges all of these together into a single _X.nrm file.
- 
- 
- Produce _X.nrm if any document had a field with norms
- not disabled
- 
- 
- Remaps docIDs after a merge has completed, where the
- merged segments had at least one deletion. This is used
- to renumber the buffered deletes in IndexWriter when a
- merge of segments with deletions commits.
- 
- 
- Filename filter that accepts only filenames and extensions created by Lucene.
- 
- 
- Returns true if this is a file that would be contained
- in a CFS file. This function should only be called on
- files that pass the above "accept" (ie, are already
- known to be a Lucene index file).
- 
- 
- This is a DocFieldConsumer that inverts each field,
- separately, from a Document, and accepts an
- InvertedTermsConsumer to process those terms.
- 
- 
- Processes all occurrences of a single field
- 
- 
- Implements the skip list writer for the default posting list format
- that stores positions and payloads.
- 
- 
- This abstract class writes skip lists with multiple levels.
- 
- Example for skipInterval = 3:
-                                                     c            (skip level 2)
-                 c                 c                 c            (skip level 1)
-     x     x     x     x     x     x     x     x     x     x      (skip level 0)
- d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d  (posting list)
-     3     6     9     12    15    18    21    24    27    30     (df)
- 
- d - document
- x - skip data
- c - skip data with child pointer
- 
- Skip level i contains every skipInterval-th entry from skip level i-1.
- Therefore the number of entries on level i is: floor(df / (skipInterval ^ (i + 1))).
- 
- Each skip entry on a level i>0 contains a pointer to the corresponding skip entry in list i-1.
- This guarantees a logarithmic amount of skips to find the target document.
- 
- While this class takes care of writing the different skip levels,
- subclasses must define the actual format of the skip data.
- 
- 
- Subclasses must implement the actual skip data encoding in this method.
- 
- the level the skip data shall be written for
- 
- the skip buffer to write to
- 
- 
- Writes the current skip data to the buffers. The current document frequency
- determines the max level the skip data is to be written to.
- 
- the current document frequency
- 
- IOException
- 
- 
- Writes the buffered skip lists to the given output.
- 
- the IndexOutput the skip lists shall be written to
- 
- the pointer where the skip list starts
- 
- 
- Sets the values for the current skip data.
- 
- 
- Implements the skip list reader for the default posting list format
- that stores positions and payloads.
- 
- 
- This abstract class reads skip lists with multiple levels.
- 
- See {@link MultiLevelSkipListWriter} for the information about the encoding
- of the multi level skip lists.
- 
- Subclasses must implement the abstract method {@link #ReadSkipData(int, IndexInput)}
- which defines the actual format of the skip data.
- 
- 
- Returns the id of the doc to which the last call of {@link #SkipTo(int)}
- has skipped.
- 
- 
- Skips entries to the first beyond the current whose document number is
- greater than or equal to target. Returns the current doc count.
- 
- 
- Seeks the skip entry on the given level
- 
- 
- initializes the reader
- 
- 
- Loads the skip levels
- 
- 
- Subclasses must implement the actual skip data decoding in this method.
- - - the level skip data shall be read from - - the skip stream to read from - - - - Copies the values of the last read skip entry on this level - - - used to buffer the top skip levels - - - Returns the freq pointer of the doc to which the last call of - {@link MultiLevelSkipListReader#SkipTo(int)} has skipped. - - - - Returns the prox pointer of the doc to which the last call of - {@link MultiLevelSkipListReader#SkipTo(int)} has skipped. - - - - Returns the payload length of the payload stored just before - the doc to which the last call of {@link MultiLevelSkipListReader#SkipTo(int)} - has skipped. - - - - This exception is thrown when Lucene detects - an inconsistency in the index. - - - - This class wraps a Token and supplies a single attribute instance - where the delegate token can be replaced. - - Will be removed, when old TokenStream API is removed. - - - - Base class for Attributes that can be added to a - {@link Lucene.Net.Util.AttributeSource}. -

- Attributes are used to add data in a dynamic, yet type-safe way to a source - of usually streamed objects, e.g. a {@link Lucene.Net.Analysis.TokenStream}. -&#13;

-
- - Clears the values in this AttributeImpl and resets it to its - default value. If this implementation implements more than one Attribute interface - it clears all. - - - - The default implementation of this method accesses all declared - fields of this object and prints the values in the following syntax: - -
-            public String toString() {
-            return "start=" + startOffset + ",end=" + endOffset;
-            }
-            
- - This method may be overridden by subclasses. -
-
- - Subclasses must implement this method and should compute - a hashCode similar to this: -
-            public int hashCode() {
-              int code = startOffset;
-              code = code * 31 + endOffset;
-              return code;
-            }
-            
- - see also {@link #equals(Object)} -
-
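- 
- A matching equals for the hashCode sketch above might look like this
- (the attribute class and its startOffset/endOffset fields are
- hypothetical, as in the preceding examples):
- 
-            public boolean equals(Object other) {
-              if (other == this) return true;
-              if (!(other instanceof MyOffsetAttribute)) return false;
-              MyOffsetAttribute o = (MyOffsetAttribute) other;
-              return o.startOffset == startOffset && o.endOffset == endOffset;
-            }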
- - All values used for computation of {@link #hashCode()} - should be checked here for equality. - - see also {@link Object#equals(Object)} - - - - Copies the values from this Attribute into the passed-in - target attribute. The target implementation must support all the - Attributes this implementation supports. - - - - Shallow clone. Subclasses must override this if they - need to clone any members deeply, - - - - The term text of a Token. - - - Returns the Token's term text. - - This method has a performance penalty - because the text is stored internally in a char[]. If - possible, use {@link #TermBuffer()} and {@link - #TermLength()} directly instead. If you really need a - String, use this method, which is nothing more than - a convenience call to new String(token.termBuffer(), 0, token.termLength()) - - - - Copies the contents of buffer, starting at offset for - length characters, into the termBuffer array. - - the buffer to copy - - the index in the buffer of the first character to copy - - the number of characters to copy - - - - Copies the contents of buffer into the termBuffer array. - the buffer to copy - - - - Copies the contents of buffer, starting at offset and continuing - for length characters, into the termBuffer array. - - the buffer to copy - - the index in the buffer of the first character to copy - - the number of characters to copy - - - - Returns the internal termBuffer character array which - you can then directly alter. If the array is too - small for your token, use {@link - #ResizeTermBuffer(int)} to increase it. After - altering the buffer be sure to call {@link - #setTermLength} to record the number of valid - characters that were placed into the termBuffer. - - - - Grows the termBuffer to at least size newSize, preserving the - existing content. Note: If the next operation is to change - the contents of the term buffer use - {@link #SetTermBuffer(char[], int, int)}, - {@link #SetTermBuffer(String)}, or - {@link #SetTermBuffer(String, int, int)} - to optimally combine the resize with the setting of the termBuffer. - - minimum size of the new termBuffer - - newly created termBuffer with length >= newSize - - - - Return number of valid characters (length of the term) - in the termBuffer array. - - - - Set number of valid characters (length of the term) in - the termBuffer array. Use this to truncate the termBuffer - or to synchronize with external manipulation of the termBuffer. - Note: to grow the size of the array, - use {@link #ResizeTermBuffer(int)} first. - - the truncated length - - - - A Token's lexical type. The Default value is "word". - - - Returns this Token's lexical type. Defaults to "word". - - - Set the lexical type. - - - - - The positionIncrement determines the position of this token - relative to the previous Token in a TokenStream, used in phrase - searching. - -

The default value is one. - -

Some common uses for this are:

    - -
  • Set it to zero to put multiple terms in the same position. This is - useful if, e.g., a word has multiple stems. Searches for phrases - including either stem will match. In this case, all but the first stem's - increment should be set to zero: the increment of the first instance - should be one. Repeating a token with an increment of zero can also be - used to boost the scores of matches on that token.
  • Set it to values greater than one to inhibit exact phrase matches. - If, for example, one does not want phrases to match across removed stop - words, then one could build a stop word filter that removes stop words and - also sets the increment to the number of stop words removed before each - non-stop word (see the sketch following this list). Then exact phrase queries will only match when the terms - occur with no intervening stop words.
- -
- - -
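- 
- The stop word case above can be sketched as follows (a minimal,
- hypothetical filter, not the actual StopFilter; it assumes the new
- TokenStream API and a Set of stop word Strings):
- 
-            // removes stop words and folds the skipped count into the
-            // next surviving token's position increment
-            final class StopSkippingFilter extends TokenFilter {
-              private final TermAttribute termAtt;
-              private final PositionIncrementAttribute posIncrAtt;
-              private final Set stopWords; // Set of String
-            
-              StopSkippingFilter(TokenStream in, Set stopWords) {
-                super(in);
-                this.stopWords = stopWords;
-                termAtt = (TermAttribute) addAttribute(TermAttribute.class);
-                posIncrAtt = (PositionIncrementAttribute) addAttribute(PositionIncrementAttribute.class);
-              }
-            
-              public boolean incrementToken() throws IOException {
-                int skipped = 0;
-                while (input.incrementToken()) {
-                  if (!stopWords.contains(termAtt.term())) {
-                    posIncrAtt.setPositionIncrement(posIncrAtt.getPositionIncrement() + skipped);
-                    return true;
-                  }
-                  skipped += posIncrAtt.getPositionIncrement();
-                }
-                return false;
-              }
-            }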
- - Set the position increment. The default value is one. - - - the distance from the prior term - - - - Returns the position increment of this Token. - - - - - This attribute can be used to pass different flags down the {@link Tokenizer} chain, - eg from one TokenFilter to another one. - - - - EXPERIMENTAL: While we think this is here to stay, we may want to change it to be a long. -

- - Get the bitset for any bits that have been set. This is completely distinct from {@link TypeAttribute#Type()}, although they do share similar purposes. - The flags can be used to encode information about the token for use by other {@link Lucene.Net.Analysis.TokenFilter}s. - - -

- The bits - -
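- 
- A hedged sketch of the intended use inside a custom filter (the flag
- constant and its meaning are application-defined, i.e. hypothetical):
- 
-            static final int KEYWORD_FLAG = 1; // application-defined bit
-            FlagsAttribute flagsAtt = (FlagsAttribute) addAttribute(FlagsAttribute.class);
-            flagsAtt.setFlags(flagsAtt.getFlags() | KEYWORD_FLAG);         // mark the token
-            boolean isKeyword = (flagsAtt.getFlags() & KEYWORD_FLAG) != 0; // test it downstream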
- - - - - - The payload of a Token. See also {@link Payload}. - - - Returns this Token's payload. - - - Sets this Token's payload. - - - A Token is an occurrence of a term from the text of a field. It consists of - a term's text, the start and end offset of the term in the text of the field, - and a type string. -

- The start and end offsets permit applications to re-associate a token with - its source text, e.g., to display highlighted query terms in a document - browser, or to show matching text fragments in a KWIC display, etc. -

- The type is a string, assigned by a lexical analyzer - (a.k.a. tokenizer), naming the lexical or syntactic class that the token - belongs to. For example an end of sentence marker token might be implemented - with type "eos". The default token type is "word". -

- A Token can optionally have metadata (a.k.a. Payload) in the form of a variable - length byte array. Use {@link TermPositions#GetPayloadLength()} and - {@link TermPositions#GetPayload(byte[], int)} to retrieve the payloads from the index. -

-

-
-

NOTE: As of 2.9, Token implements all {@link Attribute} interfaces - that are part of core Lucene and can be found in the {@code tokenattributes} subpackage. - Even though it is not necessary to use Token anymore, with the new TokenStream API it can - be used as convenience class that implements all {@link Attribute}s, which is especially useful - to easily switch from the old to the new TokenStream API. -

-

-

NOTE: As of 2.3, Token stores the term text - internally as a malleable char[] termBuffer instead of - String termText. The indexing code and core tokenizers - have been changed to re-use a single Token instance, changing - its buffer and other fields in-place as the Token is - processed. This provides substantially better indexing - performance as it saves the GC cost of new'ing a Token and - String for every term. The APIs that accept String - termText are still available but a warning about the - associated performance cost has been added (below). The - {@link #TermText()} method has been deprecated.

-

-

Tokenizers and TokenFilters should try to re-use a Token instance when - possible for best performance, by implementing the - {@link TokenStream#IncrementToken()} API. - Failing that, to create a new Token you should first use - one of the constructors that starts with null text. To load - the token from a char[] use {@link #SetTermBuffer(char[], int, int)}. - To load from a String use {@link #SetTermBuffer(String)} or {@link #SetTermBuffer(String, int, int)}. - Alternatively you can get the Token's termBuffer by calling either {@link #TermBuffer()}, - if you know that your text is shorter than the capacity of the termBuffer - or {@link #ResizeTermBuffer(int)}, if there is any possibility - that you may need to grow the buffer. Fill in the characters of your term into this - buffer, with {@link String#getChars(int, int, char[], int)} if loading from a string, - or with {@link System#arraycopy(Object, int, Object, int, int)}, and finally call {@link #SetTermLength(int)} to - set the length of the term text. See LUCENE-969 - for details.

-

Typical Token reuse patterns: -

    -
  • Copying text from a string (type is reset to {@link #DEFAULT_TYPE} if not - specified):
    -
    -            return reusableToken.reinit(string, startOffset, endOffset[, type]);
    -            
    -
  • Copying some text from a string (type is reset to {@link #DEFAULT_TYPE} - if not specified):
    -
    -            return reusableToken.reinit(string, 0, string.length(), startOffset, endOffset[, type]);
    -            
    -
  • Copying text from char[] buffer (type is reset to {@link #DEFAULT_TYPE} - if not specified):
    -
    -            return reusableToken.reinit(buffer, 0, buffer.length, startOffset, endOffset[, type]);
    -            
    -
  • Copying some text from a char[] buffer (type is reset to - {@link #DEFAULT_TYPE} if not specified):
    -
    -            return reusableToken.reinit(buffer, start, end - start, startOffset, endOffset[, type]);
    -            
    -
  • Copying from one Token to another (type is reset to - {@link #DEFAULT_TYPE} if not specified):
    -
    -            return reusableToken.reinit(source.termBuffer(), 0, source.termLength(), source.startOffset(), source.endOffset()[, source.type()]);
    -            
    -
- A few things to note: -
  • clear() initializes all of the fields to default values. This was changed in contrast to Lucene 2.4, but should affect no one.
  • Because TokenStreams can be chained, one cannot assume that the Token's current type is correct.
  • The startOffset and endOffset represent the start and end offset in the - source text, so be careful in adjusting them.
  • When caching a reusable token, clone it. When injecting a cached token into a stream that can be reset, clone it again.
-

-

- - -
- 
- We will remove this when we remove the
- deprecated APIs
- 
- 
- Characters for the term text.
- This will be made private. Instead, use:
- {@link #TermBuffer()},
- {@link #SetTermBuffer(char[], int, int)},
- {@link #SetTermBuffer(String)}, or
- {@link #SetTermBuffer(String, int, int)}
- 
- 
- Length of term text in the buffer.
- This will be made private. Instead, use:
- {@link #TermLength()}, or {@link #SetTermLength(int)}.
- 
- 
- Start in source text.
- This will be made private. Instead, use:
- {@link #StartOffset()}, or {@link #SetStartOffset(int)}.
- 
- 
- End in source text.
- This will be made private. Instead, use:
- {@link #EndOffset()}, or {@link #SetEndOffset(int)}.
- 
- 
- The lexical type of the token.
- This will be made private. Instead, use:
- {@link #Type()}, or {@link #SetType(String)}.
- 
- 
- This will be made private. Instead, use:
- {@link #GetPayload()}, or {@link #SetPayload(Payload)}.
- 
- 
- This will be made private. Instead, use:
- {@link #GetPositionIncrement()}, or {@link #SetPositionIncrement(int)}.
- 
- 
- Constructs a Token with null text.
- 
- 
- Constructs a Token with null text and start & end
- offsets.
- 
- start offset in the source text
- 
- end offset in the source text
- 
- 
- Constructs a Token with null text and start & end
- offsets plus the Token type.
- 
- start offset in the source text
- 
- end offset in the source text
- 
- the lexical type of this Token
- 
- 
- Constructs a Token with null text and start & end
- offsets plus flags. NOTE: flags is EXPERIMENTAL.
- 
- start offset in the source text
- 
- end offset in the source text
- 
- The bits to set for this token
- 
- 
- Constructs a Token with the given term text, and start
- & end offsets. The type defaults to "word."
- NOTE: for better indexing speed you should
- instead use the char[] termBuffer methods to set the
- term text.
- 
- term text
- 
- start offset
- 
- end offset
- 
- 
- Constructs a Token with the given text, start and end
- offsets, & type. NOTE: for better indexing
- speed you should instead use the char[] termBuffer
- methods to set the term text.
- 
- term text
- 
- start offset
- 
- end offset
- 
- token type
- 
- 
- Constructs a Token with the given text, start and end
- offsets, & type. NOTE: for better indexing
- speed you should instead use the char[] termBuffer
- methods to set the term text.
- 
- token type bits
- 
- 
- Constructs a Token with the given term buffer (offset
- & length), start and end
- offsets
- 
- 
- Set the position increment. This determines the position of this token
- relative to the previous Token in a {@link TokenStream}, used in phrase
- searching.
- 

The default value is one. - -

Some common uses for this are:

    - -
  • Set it to zero to put multiple terms in the same position. This is - useful if, e.g., a word has multiple stems. Searches for phrases - including either stem will match. In this case, all but the first stem's - increment should be set to zero: the increment of the first instance - should be one. Repeating a token with an increment of zero can also be - used to boost the scores of matches on that token.
  • Set it to values greater than one to inhibit exact phrase matches. - If, for example, one does not want phrases to match across removed stop - words, then one could build a stop word filter that removes stop words and - also sets the increment to the number of stop words removed before each - non-stop word. Then exact phrase queries will only match when the terms - occur with no intervening stop words.
-
- the distance from the prior term - - - -
- - Returns the position increment of this Token. - - - - - Sets the Token's term text. NOTE: for better - indexing speed you should instead use the char[] - termBuffer methods to set the term text. - - use {@link #SetTermBuffer(char[], int, int)} or - {@link #SetTermBuffer(String)} or - {@link #SetTermBuffer(String, int, int)}. - - - - Returns the Token's term text. - - - This method now has a performance penalty - because the text is stored internally in a char[]. If - possible, use {@link #TermBuffer()} and {@link - #TermLength()} directly instead. If you really need a - String, use {@link #Term()} - - - - Returns the Token's term text. - - This method has a performance penalty - because the text is stored internally in a char[]. If - possible, use {@link #TermBuffer()} and {@link - #TermLength()} directly instead. If you really need a - String, use this method, which is nothing more than - a convenience call to new String(token.termBuffer(), 0, token.termLength()) - - - - Copies the contents of buffer, starting at offset for - length characters, into the termBuffer array. - - the buffer to copy - - the index in the buffer of the first character to copy - - the number of characters to copy - - - - Copies the contents of buffer into the termBuffer array. - the buffer to copy - - - - Copies the contents of buffer, starting at offset and continuing - for length characters, into the termBuffer array. - - the buffer to copy - - the index in the buffer of the first character to copy - - the number of characters to copy - - - - Returns the internal termBuffer character array which - you can then directly alter. If the array is too - small for your token, use {@link - #ResizeTermBuffer(int)} to increase it. After - altering the buffer be sure to call {@link - #setTermLength} to record the number of valid - characters that were placed into the termBuffer. - - - - Grows the termBuffer to at least size newSize, preserving the - existing content. Note: If the next operation is to change - the contents of the term buffer use - {@link #SetTermBuffer(char[], int, int)}, - {@link #SetTermBuffer(String)}, or - {@link #SetTermBuffer(String, int, int)} - to optimally combine the resize with the setting of the termBuffer. - - minimum size of the new termBuffer - - newly created termBuffer with length >= newSize - - - - Allocates a buffer char[] of at least newSize, without preserving the existing content. - its always used in places that set the content - - minimum size of the buffer - - - - Return number of valid characters (length of the term) - in the termBuffer array. - - - - Set number of valid characters (length of the term) in - the termBuffer array. Use this to truncate the termBuffer - or to synchronize with external manipulation of the termBuffer. - Note: to grow the size of the array, - use {@link #ResizeTermBuffer(int)} first. - - the truncated length - - - - Returns this Token's starting offset, the position of the first character - corresponding to this token in the source text. - Note that the difference between endOffset() and startOffset() may not be - equal to termText.length(), as the term text may have been altered by a - stemmer or some other filter. - - - - Set the starting offset. - - - - - Returns this Token's ending offset, one greater than the position of the - last character corresponding to this token in the source text. The length - of the token in the source text is (endOffset - startOffset). - - - - Set the ending offset. - - - - - Set the starting and ending offset. 
- See StartOffset() and EndOffset() - - - - Returns this Token's lexical type. Defaults to "word". - - - Set the lexical type. - - - - - EXPERIMENTAL: While we think this is here to stay, we may want to change it to be a long. -

- - Get the bitset for any bits that have been set. This is completely distinct from {@link #Type()}, although they do share similar purposes. - The flags can be used to encode information about the token for use by other {@link Lucene.Net.Analysis.TokenFilter}s. - - -

- The bits - -
- - - - - - Returns this Token's payload. - - - Sets this Token's payload. - - - Resets the term text, payload, flags, and positionIncrement, - startOffset, endOffset and token type to default. - - - - Makes a clone, but replaces the term buffer & - start/end offset in the process. This is more - efficient than doing a full clone (and then calling - setTermBuffer) because it saves a wasted copy of the old - termBuffer. - - - - Shorthand for calling {@link #clear}, - {@link #SetTermBuffer(char[], int, int)}, - {@link #setStartOffset}, - {@link #setEndOffset}, - {@link #setType} - - this Token instance - - - - Shorthand for calling {@link #clear}, - {@link #SetTermBuffer(char[], int, int)}, - {@link #setStartOffset}, - {@link #setEndOffset} - {@link #setType} on Token.DEFAULT_TYPE - - this Token instance - - - - Shorthand for calling {@link #clear}, - {@link #SetTermBuffer(String)}, - {@link #setStartOffset}, - {@link #setEndOffset} - {@link #setType} - - this Token instance - - - - Shorthand for calling {@link #clear}, - {@link #SetTermBuffer(String, int, int)}, - {@link #setStartOffset}, - {@link #setEndOffset} - {@link #setType} - - this Token instance - - - - Shorthand for calling {@link #clear}, - {@link #SetTermBuffer(String)}, - {@link #setStartOffset}, - {@link #setEndOffset} - {@link #setType} on Token.DEFAULT_TYPE - - this Token instance - - - - Shorthand for calling {@link #clear}, - {@link #SetTermBuffer(String, int, int)}, - {@link #setStartOffset}, - {@link #setEndOffset} - {@link #setType} on Token.DEFAULT_TYPE - - this Token instance - - - - Copy the prototype token's fields into this one. Note: Payloads are shared. - - - - - Copy the prototype token's fields into this one, with a different term. Note: Payloads are shared. - - - - - - - Copy the prototype token's fields into this one, with a different term. Note: Payloads are shared. - - - - - - - - - - - Use this {@link LockFactory} to disable locking entirely. - This LockFactory is used when you call {@link FSDirectory#setDisableLocks}. - Only one instance of this lock is created. You should call {@link - #GetNoLockFactory()} to get the instance. - - - - - - - - Base class for Directory implementations that store index - files in the file system. There are currently three core - subclasses: - - - - Unfortunately, because of system peculiarities, there is - no single overall best implementation. Therefore, we've - added the {@link #open} method, to allow Lucene to choose - the best FSDirectory implementation given your - environment, and the known limitations of each - implementation. For users who have no reason to prefer a - specific implementation, it's best to simply use {@link - #open}. For all others, you should instantiate the - desired implementation directly. - -

The locking implementation is by default {@link - NativeFSLockFactory}, but can be changed by - passing in a custom {@link LockFactory} instance. - The deprecated getDirectory methods default to use - {@link SimpleFSLockFactory} for backwards compatibility. - The system properties - org.apache.lucene.store.FSDirectoryLockFactoryClass - and org.apache.lucene.FSDirectory.class - are deprecated and only used by the deprecated - getDirectory methods. - The system property - org.apache.lucene.lockDir is ignored completely. - If you really want to store locks - elsewhere, you can create your own {@link - SimpleFSLockFactory} (or {@link NativeFSLockFactory}, - etc.) passing in your preferred lock directory. - -&#13;

In 3.0 this class will become abstract. - -
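- 
- For example (a hedged sketch; the index path is hypothetical), opening
- a directory with an explicit lock factory:
- 
-            Directory dir = FSDirectory.open(new File("/path/to/index"),
-                                             new SimpleFSLockFactory());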

- - -
- - A Directory is a flat list of files. Files may be written once, when they - are created. Once a file is created it may only be opened for read, or - deleted. Random access is permitted both when reading and writing. - -

Java's i/o APIs are not used directly, but rather all i/o is - through this API. This permits things such as:&#13;

    -
  • implementation of RAM-based indices;
  • implementation of indices stored in a database, via JDBC;
  • implementation of an index as a single file;
- - Directory locking is implemented by an instance of {@link - LockFactory}, and can be changed for each Directory - instance using {@link #setLockFactory}. - -
-
- 
- Holds the LockFactory instance (implements locking for
- this Directory instance).
- 
- 
- For some Directory implementations ({@link
- FSDirectory}, and its subclasses), this method
- silently filters its results to include only index
- files. Please use {@link #listAll} instead, which
- does no filtering.
- 
- 
- Returns an array of strings, one for each file in the
- directory. Unlike {@link #list} this method does no
- filtering of the contents in a directory, and it will
- never return null (throws IOException instead).
- 
- Currently this method simply falls back to {@link
- #list} for Directory impls outside of Lucene's core &
- contrib, but in 3.0 that method will be removed and
- this method will become abstract.
- 
- 
- Returns true iff a file with the given name exists.
- 
- 
- Returns the time the named file was last modified.
- 
- 
- Set the modified time of an existing file to now.
- 
- 
- Removes an existing file in the directory.
- 
- 
- Renames an existing file in the directory.
- If a file already exists with the new name, then it is replaced.
- This replacement is not guaranteed to be atomic.
- 
- 
- Returns the length of a file in the directory.
- 
- 
- Creates a new, empty file in the directory with the given name.
- Returns a stream writing this file.
- 
- 
- Ensure that any writes to this file are moved to
- stable storage. Lucene uses this to properly commit
- changes to the index, to prevent a machine/OS crash
- from corrupting the index.
- 
- 
- Returns a stream reading an existing file.
- 
- 
- Returns a stream reading an existing file, with the
- specified read buffer size. The particular Directory
- implementation may ignore the buffer size. Currently
- the only Directory implementations that respect this
- parameter are {@link FSDirectory} and {@link
- Lucene.Net.Index.CompoundFileReader}.
- 
- 
- Construct a {@link Lock}.
- 
- the name of the lock file
- 
- 
- Attempt to clear (forcefully unlock and remove) the
- specified lock. Only call this at a time when you are
- certain this lock is no longer in use.
- 
- name of the lock to be cleared.
- 
- 
- Closes the store.
- 
- 
- Set the LockFactory that this Directory instance should
- use for its locking implementation. Each instance of
- LockFactory should only be used for one directory (ie,
- do not share a single instance across multiple
- Directories).
- 
- instance of {@link LockFactory}.
- 
- 
- Get the LockFactory that this Directory instance is
- using for its locking implementation. Note that this
- may be null for Directory implementations that provide
- their own locking implementation.
- 
- 
- Return a string identifier that uniquely differentiates
- this Directory instance from other Directory instances.
- This ID should be the same if two Directory instances
- (even in different JVMs and/or on different machines)
- are considered "the same index". This is how locking
- "scopes" to the right index.
- 
- 
- Copy contents of a directory src to a directory dest.
- If a file in src already exists in dest then the
- one in dest will be blindly overwritten.
- 

NOTE: the source directory cannot change - while this method is running. Otherwise the results - are undefined and you could easily hit a - FileNotFoundException. - -

NOTE: this method only copies files that look - like index files (ie, have extensions matching the - known extensions of index files). - -

- source directory - - destination directory - - if true, call {@link #Close()} method on source directory - - IOException -
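- 
- A short usage sketch (the source path is hypothetical):
- 
-            Directory src = FSDirectory.open(new File("/path/to/src"));
-            Directory dest = new RAMDirectory();
-            Directory.copy(src, dest, true); // also closes src when done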
- - AlreadyClosedException if this Directory is closed - - - This cache of directories ensures that there is a unique Directory - instance per path, so that synchronization on the Directory can be used to - synchronize access between readers and writers. We use - refcounts to ensure when the last use of an FSDirectory - instance for a given canonical path is closed, we remove the - instance from the cache. See LUCENE-776 - for some relevant discussion. - - Not used by any non-deprecated methods anymore - - - - Set whether Lucene's use of lock files is disabled. By default, - lock files are enabled. They should only be disabled if the index - is on a read-only medium like a CD-ROM. - - Use a {@link #open(File, LockFactory)} or a constructor - that takes a {@link LockFactory} and supply - {@link NoLockFactory#getNoLockFactory}. This setting does not work - with {@link #open(File)} only the deprecated getDirectory - respect this setting. - - - - Returns whether Lucene's use of lock files is disabled. - true if locks are disabled, false if locks are enabled. - - - - Use a constructor that takes a {@link LockFactory} and - supply {@link NoLockFactory#getNoLockFactory}. - - - - The default class which implements filesystem-based directories. - - - A buffer optionally used in renameTo method - - - Returns the directory instance for the named location. - - - Use {@link #Open(File)} - - - the path to the directory. - - the FSDirectory for the named file. - - - - Returns the directory instance for the named location. - - - Use {@link #Open(File, LockFactory)} - - - the path to the directory. - - instance of {@link LockFactory} providing the - locking implementation. - - the FSDirectory for the named file. - - - - Returns the directory instance for the named location. - - - Use {@link #Open(File)} - - - the path to the directory. - - the FSDirectory for the named file. - - - - Returns the directory instance for the named location. - - - Use {@link #Open(File)} - - - the path to the directory. - - the FSDirectory for the named file. - - - - Returns the directory instance for the named location. - - - Use {@link #Open(File, LockFactory)} - - - the path to the directory. - - instance of {@link LockFactory} providing the - locking implementation. - - the FSDirectory for the named file. - - - - Returns the directory instance for the named location. - - - Use {@link #Open(File, LockFactory)} - - - the path to the directory. - - instance of {@link LockFactory} providing the - locking implementation. - - the FSDirectory for the named file. - - - - Returns the directory instance for the named location. - - - Use IndexWriter's create flag, instead, to - create a new index. - - - the path to the directory. - - if true, create, or erase any existing contents. - - the FSDirectory for the named file. - - - - Returns the directory instance for the named location. - - - Use IndexWriter's create flag, instead, to - create a new index. - - - the path to the directory. - - if true, create, or erase any existing contents. - - the FSDirectory for the named file. - - - - Returns the directory instance for the named location. - - - Use IndexWriter's create flag, instead, to - create a new index. - - - the path to the directory. - - if true, create, or erase any existing contents. - - the FSDirectory for the named file. - - - - - - - - Initializes the directory to create a new file with the given name. - This method should be used in {@link #createOutput}. 
- - - - The underlying filesystem directory - - - - - - - - - - - Create a new FSDirectory for the named location (ctor for subclasses). - the path of the directory - - the lock factory to use, or null for the default - ({@link NativeFSLockFactory}); - - IOException - - - Creates an FSDirectory instance, trying to pick the - best implementation given the current environment. - The directory returned uses the {@link NativeFSLockFactory}. - -

Currently this returns {@link SimpleFSDirectory} as - NIOFSDirectory is currently not supported. - -


NOTE: this method may suddenly change which - implementation is returned from release to release, in - the event that higher performance defaults become - possible; if the precise implementation is important to - your application, please instantiate it directly, - instead. On 64 bit systems, it may also be good to - return {@link MMapDirectory}, but this is disabled - because of officially missing unmap support in Java. - For optimal performance you should consider using - this implementation on 64 bit JVMs. - -&#13;

See above -

-
- - Creates an FSDirectory instance, trying to pick the - best implementation given the current environment. - The directory returned uses the {@link NativeFSLockFactory}. - -

Currently this returns {@link SimpleFSDirectory} as - NIOFSDirectory is currently not supported. - -

NOTE: this method may suddenly change which - implementation is returned from release to release, in - the event that higher performance defaults become - possible; if the precise implementation is important to - your application, please instantiate it directly, - instead. On 64 bit systems, it may also be good to - return {@link MMapDirectory}, but this is disabled - because of officially missing unmap support in Java. - For optimal performance you should consider using - this implementation on 64 bit JVMs. - -&#13;

See above -

-
- - Just like {@link #Open(File)}, but allows you to - also specify a custom {@link LockFactory}. - - - - Lists all files (not subdirectories) in the - directory. This method never returns null (throws - {@link IOException} instead). - - - NoSuchDirectoryException if the directory - does not exist, or does exist but is not a - directory. - - IOException if list() returns null - - - Lists all files (not subdirectories) in the - directory. This method never returns null (throws - {@link IOException} instead). - - - NoSuchDirectoryException if the directory - does not exist, or does exist but is not a - directory. - - IOException if list() returns null - - - Lists all files (not subdirectories) in the - directory. - - - - - - Returns true iff a file with the given name exists. - - - Returns the time the named file was last modified. - - - Returns the time the named file was last modified. - - - Set the modified time of an existing file to now. - - - Returns the length in bytes of a file in the directory. - - - Removes an existing file in the directory. - - - Renames an existing file in the directory. - Warning: This is not atomic. - - - - - - Creates an IndexOutput for the file with the given name. - In 3.0 this method will become abstract. - - - - Creates an IndexInput for the file with the given name. - In 3.0 this method will become abstract. - - - - So we can do some byte-to-hexchar conversion below - - - Closes the store to future operations. - - - For debug output. - - - Default read chunk size. This is a conditional - default: on 32bit JVMs, it defaults to 100 MB. On - 64bit JVMs, it's Integer.MAX_VALUE. - - - - - - Sets the maximum number of bytes read at once from the - underlying file during {@link IndexInput#readBytes}. - The default value is {@link #DEFAULT_READ_CHUNK_SIZE}; - -

This was introduced due to Sun - JVM Bug 6478546, which throws an incorrect - OutOfMemoryError when attempting to read too many bytes - at once. It only happens on 32bit JVMs with a large - maximum heap size.

- -

Changes to this value will not impact any - already-opened {@link IndexInput}s. You should call - this before attempting to open an index on the - directory.

- -

NOTE: This value should be as large as - possible to reduce any possible performance impact. If - you still encounter an incorrect OutOfMemoryError, - try lowering the chunk size.&#13;

-

-
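- 
- For example (a hedged sketch), lowering the chunk size before any
- IndexInput is opened on a 32 bit JVM:
- 
-            FSDirectory dir = FSDirectory.open(new File("/path/to/index"));
-            dir.setReadChunkSize(16 * 1024 * 1024); // 16 MB reads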
- - The maximum number of bytes to read at once from the - underlying file during {@link IndexInput#readBytes}. - - - - - - Use SimpleFSDirectory.SimpleFSIndexInput instead - - - - Base implementation class for buffered {@link IndexInput}. - - - Default buffer size - - - Inits BufferedIndexInput with a specific bufferSize - - - Change the buffer size used by this IndexInput - - - - - - - Expert: implements buffer refill. Reads bytes from the current position - in the input. - - the array to read bytes into - - the offset in the array to start storing bytes - - the number of bytes to read - - - - Expert: implements seek. Sets current position in this file, where the - next {@link #ReadInternal(byte[],int,int)} will occur. - - - - - - A straightforward implementation of {@link FSDirectory} - using java.io.RandomAccessFile. However, this class has - poor concurrent performance (multiple threads will - bottleneck) as it synchronizes when multiple threads - read from the same file. It's usually better to use - {@link NIOFSDirectory} or {@link MMapDirectory} instead. - - - - Create a new SimpleFSDirectory for the named location. - - - the path of the directory - - the lock factory to use, or null for the default. - - IOException - - - Create a new SimpleFSDirectory for the named location. - - - the path of the directory - - the lock factory to use, or null for the default. - - IOException - - - Create a new SimpleFSDirectory for the named location and the default lock factory. - - - the path of the directory - - IOException - - - - - - - Create a new SimpleFSDirectory for the named location and the default lock factory. - - - the path of the directory - - IOException - - - Creates an IndexOutput for the file with the given name. - - - Creates an IndexInput for the file with the given name. - - - Please use ctor taking chunkSize - - - - Please use ctor taking chunkSize - - - - IndexInput methods - - - Method used for testing. Returns true if the underlying - file descriptor is valid. - - - - Base implementation class for buffered {@link IndexOutput}. - - - Abstract base class for output to a file in a Directory. A random-access - output stream. Used for all Lucene index output operations. - - - - - - - - Writes a single byte. - - - - - Writes an array of bytes. - the bytes to write - - the number of bytes to write - - - - - - Writes an array of bytes. - the bytes to write - - the offset in the byte array - - the number of bytes to write - - - - - - Writes an int as four bytes. - - - - - Writes an int in a variable-length format. Writes between one and - five bytes. Smaller values take fewer bytes. Negative numbers are not - supported. - - - - - - Writes a long as eight bytes. - - - - - Writes an long in a variable-length format. Writes between one and five - bytes. Smaller values take fewer bytes. Negative numbers are not - supported. - - - - - - Writes a string. - - - - - Writes a sub sequence of characters from s as the old - format (modified UTF-8 encoded bytes). - - the source of the characters - - the first character in the sequence - - the number of characters in the sequence - - -- please pre-convert to utf8 bytes - instead or use {@link #writeString} - - - - Writes a sub sequence of characters from char[] as - the old format (modified UTF-8 encoded bytes). 
- - the source of the characters - - the first character in the sequence - - the number of characters in the sequence - - -- please pre-convert to utf8 bytes instead or use {@link #writeString} - - - - Copy numBytes bytes from input to ourself. - - - Forces any buffered output to be written. - - - Closes this stream to further operations. - - - Returns the current position in this file, where the next write will - occur. - - - - - - Sets current position in this file, where the next write will occur. - - - - - The number of bytes in the file. - - - Set the file length. By default, this method does - nothing (it's optional for a Directory to implement - it). But, certain Directory implementations (for - - can use this to inform the - underlying IO system to pre-allocate the file to the - specified size. If the length is longer than the - current file length, the bytes added to the file are - undefined. Otherwise the file is truncated. - - file length - - - - Writes a single byte. - - - - - Writes an array of bytes. - the bytes to write - - the number of bytes to write - - - - - - Forces any buffered output to be written. - - - Expert: implements buffer write. Writes bytes at the current position in - the output. - - the bytes to write - - the number of bytes to write - - - - Expert: implements buffer write. Writes bytes at the current position in - the output. - - the bytes to write - - the offset in the byte array - - the number of bytes to write - - - - Closes this stream to further operations. - - - Returns the current position in this file, where the next write will - occur. - - - - - - Sets current position in this file, where the next write will occur. - - - - - The number of bytes in the file. - - - output methods: - - - Random-access methods - - - - - - - - - - - - - - - - - - - Use SimpleFSDirectory.SimpleFSIndexOutput instead - - - - - - - - This exception is thrown when there is an attempt to - access something that has already been closed. - - - - The {@link TimeLimitingCollector} is used to timeout search requests that - take longer than the maximum allowed search time limit. After this time is - exceeded, the search thread is stopped by throwing a - {@link TimeExceededException}. - - - -

Expert: Collectors are primarily meant to be used to - gather raw results from a search, and implement sorting - or custom result filtering, collation, etc.

- -

As of 2.9, this class replaces the deprecated - HitCollector, and offers an API for efficient collection - of hits across sequential {@link IndexReader}s. {@link - IndexSearcher} advances the collector through each of the - sub readers, in an arbitrary order. This results in a - higher performance means of collection.

- -

Lucene's core collectors are derived from Collector. - Likely your application can use one of these classes, or - subclass {@link TopDocsCollector}, instead of - implementing Collector directly: - -

    - -
  • {@link TopDocsCollector} is an abstract base class - that assumes you will retrieve the top N docs, - according to some criteria, after collection is - done.
  • {@link TopScoreDocCollector} is a concrete subclass of - {@link TopDocsCollector} and sorts according to score + - docID. This is used internally by the {@link - IndexSearcher} search methods that do not take an - explicit {@link Sort}. It is likely the most frequently - used collector.
  • {@link TopFieldCollector} subclasses {@link - TopDocsCollector} and sorts according to a specified - {@link Sort} object (sort by field). This is used - internally by the {@link IndexSearcher} search methods - that take an explicit {@link Sort}.
  • {@link TimeLimitingCollector}, which wraps any other - Collector and aborts the search if it's taken too much - time, will subclass Collector in 3.0 (presently it - subclasses the deprecated HitCollector).
  • {@link PositiveScoresOnlyCollector} wraps any other - Collector and prevents collection of hits whose score - is <= 0.0
- -

Collector decouples the score from the collected doc: - the score computation is skipped entirely if it's not - needed. Collectors that do need the score should - implement the {@link #setScorer} method, to hold onto the - passed {@link Scorer} instance, and call {@link - Scorer#Score()} within the collect method to compute the - current hit's score. If your collector may request the - score for a single hit multiple times, you should use - {@link ScoreCachingWrappingScorer}.

- -

NOTE: The doc that is passed to the collect - method is relative to the current reader. If your - collector needs to resolve this to the docID space of the - Multi*Reader, you must re-base it by recording the - docBase from the most recent setNextReader call. Here's - a simple example showing how to collect docIDs into a - BitSet:

- -

-            Searcher searcher = new IndexSearcher(indexReader);
-            final BitSet bits = new BitSet(indexReader.maxDoc());
-            searcher.search(query, new Collector() {
-              private int docBase;
-            
-              // ignore scorer
-              public void setScorer(Scorer scorer) {
-              }
-            
-              // accept docs out of order (for a BitSet it doesn't matter)
-              public boolean acceptsDocsOutOfOrder() {
-                return true;
-              }
-            
-              public void collect(int doc) {
-                bits.set(doc + docBase);
-              }
-            
-              public void setNextReader(IndexReader reader, int docBase) {
-                this.docBase = docBase;
-              }
-            });
-            
- -

Not all collectors will need to rebase the docID. For - example, a collector that simply counts the total number - of hits would skip it.

- -

NOTE: Prior to 2.9, Lucene silently filtered - out hits with score <= 0. As of 2.9, the core Collectors - no longer do that. It's very unusual to have such hits - (a negative query boost, or function query returning - negative custom scores, could cause it to happen). If - you need that behavior, use {@link - PositiveScoresOnlyCollector}.

- -

NOTE: This API is experimental and might change - in incompatible ways in the next release.

- -

- 2.9 - -
- - Called before successive calls to {@link #Collect(int)}. Implementations - that need the score of the current document (passed-in to - {@link #Collect(int)}), should save the passed-in Scorer and call - scorer.score() when needed. - - - - Called once for every document matching a query, with the unbased document - number. - -

- Note: This is called in an inner search loop. For good search performance, - implementations of this method should not call {@link Searcher#Doc(int)} or - {@link Lucene.Net.Index.IndexReader#Document(int)} on every hit. - Doing so can slow searches by an order of magnitude or more. -

-
- - Called before collecting from each IndexReader. All doc ids in - {@link #Collect(int)} will correspond to reader. - - Add docBase to the current IndexReader's internal document id to re-base ids - in {@link #Collect(int)}. - - - next IndexReader - - - - - - - Return true if this collector does not - require the matching docIDs to be delivered in int sort - order (smallest to largest) to {@link #collect}.&#13;

Most Lucene Query implementations will visit - matching docIDs in order. However, some queries - (currently limited to certain cases of {@link - BooleanQuery}) can achieve faster searching if the - Collector allows them to deliver the - docIDs out of order. - -&#13;

Many collectors don't mind getting docIDs out of - order, so it's important to return true - here. - -&#13;

- -
- - Default timer resolution. - - - - - Default for {@link #IsGreedy()}. - - - - - Create a TimeLimitingCollector wrapper over another {@link Collector} with a specified timeout. - the wrapped {@link Collector} - - max time allowed for collecting hits after which {@link TimeExceededException} is thrown - - - - Return the timer resolution. - - - - - Set the timer resolution. - The default timer resolution is 20 milliseconds. - This means that a search required to take no longer than - 800 milliseconds may be stopped after 780 to 820 milliseconds. -&#13;
Note that: -
    -
  • Finer (smaller) resolution is more accurate but less efficient.
  • Setting resolution to less than 5 milliseconds will be silently modified to 5 milliseconds.
  • Setting resolution smaller than current resolution might take effect only after current - resolution. (Assume current resolution of 20 milliseconds is modified to 5 milliseconds, - then it can take up to 20 milliseconds for the change to take effect.)
-
-
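- 
- A usage sketch (searcher and query are assumed to already exist):
- 
-            TopScoreDocCollector topDocs = TopScoreDocCollector.create(10, true);
-            Collector c = new TimeLimitingCollector(topDocs, 1000L); // ~1 second budget
-            try {
-              searcher.search(query, c);
-            } catch (TimeLimitingCollector.TimeExceededException x) {
-              // hits collected before the timeout remain available in topDocs
-            }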
- - Checks if this time limited collector is greedy in collecting the last hit. - A non greedy collector, upon a timeout, would throw a {@link TimeExceededException} - without allowing the wrapped collector to collect current doc. A greedy one would - first allow the wrapped hit collector to collect current doc and only then - throw a {@link TimeExceededException}. - - - - - - Sets whether this time limited collector is greedy. - true to make this time limited greedy - - - - - - Calls {@link Collector#Collect(int)} on the decorated {@link Collector} - unless the allowed time has passed, in which case it throws an exception. - - - TimeExceededException - if the time allowed has exceeded. - - - - - Support class used to handle threads - - - - - This interface should be implemented by any class whose instances are intended - to be executed by a thread. - - - - - This method has to be implemented in order that starting of the thread causes the object's - run method to be called in that separately executing thread. - - - - - Contains conversion support elements such as classes, interfaces and static methods. - - - - - Copies an array of chars obtained from a String into a specified array of chars - - The String to get the chars from - Position of the String to start getting the chars - Position of the String to end getting the chars - Array to return the chars - Position of the destination array of chars to start storing the chars - An array of chars - - - - Support class used to handle threads - - - - - The instance of System.Threading.Thread - - - - - Initializes a new instance of the ThreadClass class - - - - - Initializes a new instance of the Thread class. - - The name of the thread - - - - Initializes a new instance of the Thread class. - - A ThreadStart delegate that references the methods to be invoked when this thread begins executing - - - - Initializes a new instance of the Thread class. - - A ThreadStart delegate that references the methods to be invoked when this thread begins executing - The name of the thread - - - - This method has no functionality unless the method is overridden - - - - - Causes the operating system to change the state of the current thread instance to ThreadState.Running - - - - - Interrupts a thread that is in the WaitSleepJoin thread state - - - - - Blocks the calling thread until a thread terminates - - - - - Blocks the calling thread until a thread terminates or the specified time elapses - - Time of wait in milliseconds - - - - Blocks the calling thread until a thread terminates or the specified time elapses - - Time of wait in milliseconds - Time of wait in nanoseconds - - - - Resumes a thread that has been suspended - - - - - Raises a ThreadAbortException in the thread on which it is invoked, - to begin the process of terminating the thread. Calling this method - usually terminates the thread - - - - - Raises a ThreadAbortException in the thread on which it is invoked, - to begin the process of terminating the thread while also providing - exception information about the thread termination. - Calling this method usually terminates the thread. 
- - An object that contains application-specific information, such as state, which can be used by the thread being aborted - - - - Suspends the thread, if the thread is already suspended it has no effect - - - - - Obtain a String that represents the current object - - A String that represents the current object - - - - Gets the currently running thread - - The currently running thread - - - - Gets the current thread instance - - - - - Gets or sets the name of the thread - - - - - Gets or sets a value indicating the scheduling priority of a thread - - - - - Gets a value indicating the execution status of the current thread - - - - - Gets or sets a value indicating whether or not a thread is a background thread. - - - - - Represents the methods to support some operations over files. - - - - - Returns an array of abstract pathnames representing the files and directories of the specified path. - - The abstract pathname to list it childs. - An array of abstract pathnames childs of the path specified or null if the path is not a directory - - - - Returns a list of files in a give directory. - - The full path name to the directory. - - An array containing the files. - - - - Flushes the specified file stream. Ensures that all buffered - data is actually written to the file system. - - The file stream. - - - - A simple class for number conversions. - - - - - Min radix value. - - - - - Max radix value. - - - - - Converts a number to System.String. - - - - - - - Converts a number to System.String. - - - - - - - Converts a number to System.String in the specified radix. - - A number to be converted. - A radix. - A System.String representation of the number in the specified redix. - - - - Parses a number in the specified radix. - - An input System.String. - A radix. - The parsed number in the specified radix. - - - - Performs an unsigned bitwise right shift with the specified number - - Number to operate on - Ammount of bits to shift - The resulting number from the shift operation - - - - Performs an unsigned bitwise right shift with the specified number - - Number to operate on - Ammount of bits to shift - The resulting number from the shift operation - - - - Returns the index of the first bit that is set to true that occurs - on or after the specified starting index. If no such bit exists - then -1 is returned. - - The BitArray object. - The index to start checking from (inclusive). - The index of the next set bit. - - - - Converts a System.String number to long. - - - - - - - Mimics Java's Character class. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - This class provides supporting methods of java.util.BitSet - that are not present in System.Collections.BitArray. - - - - - Returns the next set bit at or after index, or -1 if no such bit exists. - - - the index of bit array at which to start checking - the next set bit or -1 - - - - Returns the next un-set bit at or after index, or -1 if no such bit exists. - - - the index of bit array at which to start checking - the next set bit or -1 - - - - Returns the number of bits set to true in this BitSet. - - The BitArray object. - The number of bits set to true in this BitSet. - - - - Summary description for TestSupportClass. - - - - - Compares two Term arrays for equality. 
- 
- First Term array to compare
- Second Term array to compare
- true if the Terms are equal in both arrays, false otherwise
- 
- A Hashtable which holds weak references to its keys so they
- can be collected during GC.
- 
- Serves as a simple "GC Monitor" that indicates whether cleanup is needed.
- If collectableObject.IsAlive is false, GC has occurred and we should perform cleanup
- 
- Customize the hashtable lookup process by overriding KeyEquals. KeyEquals
- will compare both WeakKey to WeakKey and WeakKey to real keys
- 
- Perform cleanup if GC occurred
- 
- Iterate over all keys and remove keys that were collected
- 
- Wrap each key with a WeakKey and add it to the hashtable
- 
- Create a temporary copy of the real keys and return that
- 
- A weak reference wrapper for the hashtable keys. Whenever a key/value pair
- is added to the hashtable, the key is wrapped using a WeakKey. WeakKey saves the
- value of the original object hashcode for fast comparison.
- 
- A Dictionary enumerator which wraps the original hashtable enumerator
- and performs 2 tasks: extract the real key from a WeakKey and skip keys
- that were already collected.
- 
- Support class used to handle Hashtable addition, which does a check
- first to make sure the added item is unique in the hash.
- 
- Converts the specified collection to its string representation.
- The collection to convert to string.
- A string representation of the specified collection.
- 
- Compares two string arrays for equality.
- First string array to compare
- Second string array to compare
- true if the strings are equal in both arrays, false otherwise
- 
- Sorts an IList collection.
- The System.Collections.IList instance that will be sorted
- The Comparator criteria, null to use natural comparator.
- 
- Fills the array with a specific value from a specific index to a specific index.
- The array to be filled.
- The first index to be filled.
- The last index to be filled.
- The value to fill the array with.
- 
- Fills the array with a specific value.
- The array to be filled.
- The value to fill the array with.
- 
- Compares all members of one array with those of another.
- The array to be compared.
- The array to be compared with.
- Returns true if the two specified arrays of Objects are equal
- to one another. The two arrays are considered equal if both arrays
- contain the same number of elements, and all corresponding pairs of
- elements in the two arrays are equal. Two objects e1 and e2 are
- considered equal if (e1==null ? e2==null : e1.equals(e2)). In other
- words, the two arrays are equal if they contain the same elements in
- the same order. Also, two array references are considered equal if
- both are null.
- 
- A collection of items which can be
- looked up by instances of a key type.
- The type of the items contained in this
- collection.
- The type of the keys that can be used to look
- up the items.
- 
- Creates a new instance of the
- class.
- The converter delegate which will convert
- items to keys
- when the override of GetKeyForItem is called.
- 
- The converter delegate which will convert
- items to keys
- when the override of GetKeyForItem is called.
- 
- Converts an item that is added to the collection to
- a key.
- The item
- to convert into a key.
- The key
- for this item.
- 
- Determines if a key for an item exists in this
- collection.
- The key
- to see if it exists in this collection.
True if the key exists in the collection, false otherwise.
- 
- Represents a strongly typed list of objects that can be accessed by index.
- Provides methods to search, sort, and manipulate lists. Also provides functionality
- to compare lists against each other through an implementation of IEquatable.
- The type of elements in the list.
- 
- Initializes a new instance of the
- class that is empty and has the
- default initial capacity.
- 
- Initializes a new instance of the
- class that contains elements copied from the specified collection and has
- sufficient capacity to accommodate the number of elements copied.
- The collection whose elements are copied to the new list.
- 
- Initializes a new instance of the
- class that is empty and has the specified initial capacity.
- The number of elements that the new list can initially store.
- 
- Adds a range of objects represented by the specified
- collection implementation.
- The collection
- implementation to add to this list.
- 
- Compares the counts of two sequence
- implementations.
- This uses a trick in LINQ, sniffing types for implementations
- of interfaces that might supply shortcuts when trying to make comparisons.
- In this case, that is the generic and non-generic ICollection
- interfaces, either of which can provide a count
- which can be used in determining the equality of sequences (if they don't have
- the same count, then they can't be equal).
- The sequence from the left hand side of the
- comparison to check the count of.
- The sequence from the right hand side of the
- comparison to check the count of.
- Null if the result is indeterminate. This occurs when either
- sequence doesn't implement one of the interfaces above.
- Otherwise, it will get the count from each and return true if they are equal, false otherwise.
- 
- Compares the contents of a sequence
- implementation to another one to determine equality.
- Thinking of the sequence implementation as
- a string with any number of characters, the algorithm checks
- each item in each list. If any item of the list is not equal (or
- one list contains all the elements of another list), then that list
- element is compared to the other list element to see which
- list is greater.
- The sequence implementation
- that is considered the left hand side.
- The sequence implementation
- that is considered the right hand side.
- True if the items are equal, false otherwise.
- 
- Compares this sequence to another sequence
- implementation, returning true if they are equal, false otherwise.
- The other sequence implementation
- to compare against.
- True if the other sequence
- is the same as this one.
- 
- Compares this object for equality against other.
- The other object to compare this object against.
- True if this object and the other object are equal, false
- otherwise.
- 
- Gets the hash code for the list.
- The hash code value.
- 
- Gets the hash code for the list.
- The sequence
- implementation which will have all the contents hashed.
- The hash code value.
- 
- Clones the list.
- This is a shallow clone.
- A new shallow clone of this
- list.
- 
- A simple wrapper to allow for the use of the GeneralKeyedCollection. The
- wrapper is required as there can be several keys for an object depending
- on how many interfaces it implements.
- 
- Provides platform information.
- 
- Whether we run under a Unix platform.
- 
- Whether we run under a supported Windows platform.
- 
- TimerThread provides a pseudo-clock service to all searching
- threads, so that they can count elapsed time with less overhead
- than repeatedly calling System.currentTimeMillis. A single
- thread should be created to be used for all searches.
- 
- Get the timer value in milliseconds.
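- 
- A minimal C# sketch of the pseudo-clock idea described above (the class
- name, member names, and resolution value here are illustrative, not the
- actual Lucene.Net implementation):
- 
-     // A single background thread advances a shared counter so that many
-     // searching threads can read elapsed time without querying the
-     // system clock themselves.
-     public class PseudoClock
-     {
-         private volatile uint milliseconds;   // advanced in fixed ticks
-         private readonly int resolution;      // tick length in ms
- 
-         public PseudoClock(int resolutionMs)
-         {
-             resolution = resolutionMs;
-             var thread = new System.Threading.Thread(Run);
-             thread.IsBackground = true;       // don't keep the process alive
-             thread.Start();
-         }
- 
-         public uint Milliseconds { get { return milliseconds; } }
- 
-         private void Run()
-         {
-             while (true)
-             {
-                 milliseconds += (uint)resolution;
-                 System.Threading.Thread.Sleep(resolution);
-             }
-         }
-     }
- 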
- - - Thrown when elapsed search time exceeds allowed search time. - - - Returns allowed time (milliseconds). - - - Returns elapsed time (milliseconds). - - - Returns last doc that was collected when the search time exceeded. - - - Stores information about how to sort documents by terms in an individual - field. Fields must be indexed in order to sort by them. - -

Created: Feb 11, 2004 1:25:29 PM - -

- lucene 1.4 - - $Id: SortField.java 801344 2009-08-05 18:05:06Z yonik $ - - - -
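- 
- For example, a search sorted by an integer field might look like the
- following C# sketch (the field name "price" and the query and searcher
- variables are illustrative assumptions):
- 
-     // Sort by an indexed int field, descending, then by relevance.
-     Sort sort = new Sort(new SortField[] {
-         new SortField("price", SortField.INT, true),
-         SortField.FIELD_SCORE
-     });
-     TopDocs top = searcher.Search(query, null, 10, sort);
- 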
- - Sort by document score (relevancy). Sort values are Float and higher - values are at the front. - - - - Sort by document number (index order). Sort values are Integer and lower - values are at the front. - - - - Guess type of sort based on field contents. A regular expression is used - to look at the first term indexed for the field and determine if it - represents an integer number, a floating point number, or just arbitrary - string characters. - - Please specify the exact type, instead. - Especially, guessing does not work with the new - {@link NumericField} type. - - - - Sort using term values as Strings. Sort values are String and lower - values are at the front. - - - - Sort using term values as encoded Integers. Sort values are Integer and - lower values are at the front. - - - - Sort using term values as encoded Floats. Sort values are Float and - lower values are at the front. - - - - Sort using term values as encoded Longs. Sort values are Long and - lower values are at the front. - - - - Sort using term values as encoded Doubles. Sort values are Double and - lower values are at the front. - - - - Sort using term values as encoded Shorts. Sort values are Short and - lower values are at the front. - - - - Sort using a custom Comparator. Sort values are any Comparable and - sorting is done according to natural order. - - - - Sort using term values as encoded Bytes. Sort values are Byte and - lower values are at the front. - - - - Sort using term values as Strings, but comparing by - value (using String.compareTo) for all comparisons. - This is typically slower than {@link #STRING}, which - uses ordinals to do the sorting. - - - - Represents sorting by document score (relevancy). - - - Represents sorting by document number (index order). - - - Creates a sort by terms in the given field where the type of term value - is determined dynamically ({@link #AUTO AUTO}). - - Name of field to sort by, cannot be - null. - - Please specify the exact type instead. - - - - Creates a sort, possibly in reverse, by terms in the given field where - the type of term value is determined dynamically ({@link #AUTO AUTO}). - - Name of field to sort by, cannot be null. - - True if natural order should be reversed. - - Please specify the exact type instead. - - - - Creates a sort by terms in the given field with the type of term - values explicitly given. - - Name of field to sort by. Can be null if - type is SCORE or DOC. - - Type of values in the terms. - - - - Creates a sort, possibly in reverse, by terms in the given field with the - type of term values explicitly given. - - Name of field to sort by. Can be null if - type is SCORE or DOC. - - Type of values in the terms. - - True if natural order should be reversed. - - - - Creates a sort by terms in the given field, parsed - to numeric values using a custom {@link FieldCache.Parser}. - - Name of field to sort by. Must not be null. - - Instance of a {@link FieldCache.Parser}, - which must subclass one of the existing numeric - parsers from {@link FieldCache}. Sort type is inferred - by testing which numeric parser the parser subclasses. - - IllegalArgumentException if the parser fails to - subclass an existing numeric parser, or field is null - - - - Creates a sort, possibly in reverse, by terms in the given field, parsed - to numeric values using a custom {@link FieldCache.Parser}. - - Name of field to sort by. Must not be null. 
- 
- Instance of a {@link FieldCache.Parser},
- which must subclass one of the existing numeric
- parsers from {@link FieldCache}. Sort type is inferred
- by testing which numeric parser the parser subclasses.
- True if natural order should be reversed.
- IllegalArgumentException if the parser fails to
- subclass an existing numeric parser, or field is null
- 
- Creates a sort by terms in the given field sorted
- according to the given locale.
- Name of field to sort by, cannot be null.
- Locale of values in the field.
- 
- Creates a sort, possibly in reverse, by terms in the given field sorted
- according to the given locale.
- Name of field to sort by, cannot be null.
- Locale of values in the field.
- 
- Creates a sort with a custom comparison function.
- Name of field to sort by; cannot be null.
- Returns a comparator for sorting hits.
- use SortField (String field, FieldComparatorSource comparator)
- 
- Creates a sort with a custom comparison function.
- Name of field to sort by; cannot be null.
- Returns a comparator for sorting hits.
- 
- Creates a sort, possibly in reverse, with a custom comparison function.
- Name of field to sort by; cannot be null.
- Returns a comparator for sorting hits.
- True if natural order should be reversed.
- use SortField (String field, FieldComparatorSource comparator, boolean reverse)
- 
- Creates a sort, possibly in reverse, with a custom comparison function.
- Name of field to sort by; cannot be null.
- Returns a comparator for sorting hits.
- True if natural order should be reversed.
- 
- Returns the name of the field. Could return null
- if the sort is by SCORE or DOC.
- Name of field, possibly null.
- 
- Returns the type of contents in the field.
- One of the constants SCORE, DOC, AUTO, STRING, INT or FLOAT.
- 
- Returns the Locale by which term values are interpreted.
- May return null if no Locale was specified.
- Locale, or null.
- 
- Returns the instance of a {@link FieldCache} parser that fits the given sort type.
- May return null if no parser was specified; sorting then uses the default parser.
- An instance of a {@link FieldCache} parser, or null.
- 
- Returns whether the sort should be reversed.
- True if natural order should be reversed.
- 
- use {@link #GetComparatorSource()}
- 
- Use legacy IndexSearch implementation: search with a DirectoryReader rather
- than passing a single hit collector to multiple SegmentReaders.
- true for legacy behavior
- will be removed in Lucene 3.0.
- 
- if true, IndexSearch will use legacy sorting search implementation,
- e.g. multiple priority queues.
- will be removed in Lucene 3.0.
- 
- Returns true if o is equal to this. If a
- {@link SortComparatorSource} (deprecated) or {@link
- FieldCache.Parser} was provided, it must properly
- implement equals (unless a singleton is always used).
- 
- Returns a hash code value for this object. If a
- {@link SortComparatorSource} (deprecated) or {@link
- FieldCache.Parser} was provided, it must properly
- implement hashCode (unless a singleton is always
- used).
- 
- Lucene.Net specific. Needed for Serialization
- 
- Lucene.Net specific. Needed for deserialization
- 
- Returns the {@link FieldComparator} to use for
- sorting.
- 
- NOTE: This API is experimental and might change in
- incompatible ways in the next release.
- 
- number of top hits the queue will store
- 
- position of this SortField within {@link
- Sort}.
The comparator is primary if sortPos==0, - secondary if sortPos==1, etc. Some comparators can - optimize themselves when they are the primary sort. - - {@link FieldComparator} to use when sorting - - - - Attempts to detect the given field type for an IndexReader. - - - - - The BoostingTermQuery is very similar to the {@link Lucene.Net.Search.Spans.SpanTermQuery} except - that it factors in the value of the payload located at each of the positions where the - {@link Lucene.Net.Index.Term} occurs. -

- In order to take advantage of this, you must override {@link Lucene.Net.Search.Similarity#ScorePayload(String, byte[],int,int)} - which returns 1 by default. -

- Payload scores are averaged across term occurrences in the document. - -

- - - - See {@link Lucene.Net.Search.Payloads.PayloadTermQuery} - -
- - This class is very similar to - {@link Lucene.Net.Search.Spans.SpanTermQuery} except that it factors - in the value of the payload located at each of the positions where the - {@link Lucene.Net.Index.Term} occurs. -

- In order to take advantage of this, you must override - {@link Lucene.Net.Search.Similarity#ScorePayload(String, byte[],int,int)} - which returns 1 by default. -

- Payload scores are aggregated using a pluggable {@link PayloadFunction}. - -

-
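- 
- A minimal C# sketch of such an override (the boost scheme is an
- illustrative assumption; it presumes a payload byte was written at
- indexing time, and uses the deprecated four-argument form referenced
- above):
- 
-     // Score a term occurrence by its first payload byte instead of
-     // the default constant 1.0f.
-     public class PayloadBoostSimilarity : DefaultSimilarity
-     {
-         public override float ScorePayload(string fieldName, byte[] payload,
-                                            int offset, int length)
-         {
-             if (payload == null || length == 0)
-                 return 1.0f;            // no payload stored: neutral score
-             return payload[offset];     // boost byte written at index time
-         }
-     }
- 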
- 
- Matches spans containing a term.
- 
- Construct a SpanTermQuery matching the named term's spans.
- 
- Return the term whose spans are matched.
- 
- Returns a collection of all terms matched by this query.
- use extractTerms instead
- 
- Expert-only. Public for use by other weight implementations
- 
- Public for extension only.
- 
- not needed anymore
- 
- use {@link #NextDoc()} instead.
- 
- use {@link #Advance(int)} instead.
- 
- use {@link #DocID()} instead.
- 
- {@link #GetSpanScore()} * {@link #GetPayloadScore()}
- IOException
- 
- Returns the SpanScorer score only.
- 
- Should not be overridden without good cause!
- 
- the score for just the Span part w/o the payload
- IOException
- 
- 
- The score for the payload
- 
- The score, as calculated by
- {@link PayloadFunction#DocScore(int, String, int, float)}
- 
- Wraps another SpanFilter's result and caches it. The purpose is to allow
- filters to simply filter, and then wrap with this class to add caching.
- 
- A transient Filter cache.
- 
- Filter to cache results of
- 
- Use {@link #GetDocIdSet(IndexReader)} instead.
- 
- An IndexReader which reads multiple indexes, appending their content.
- 
- $Id: MultiReader.java 782406 2009-06-07 16:31:18Z mikemccand $
- 
- IndexReader is an abstract class, providing an interface for accessing an
- index. Search of an index is done entirely through this abstract interface,
- so that any subclass which implements it is searchable.

Concrete subclasses of IndexReader are usually constructed with a call to - one of the static open() methods, e.g. {@link - #Open(String, boolean)}. -

For efficiency, in this API documents are often referred to via - document numbers, non-negative integers which each name a unique - document in the index. These document numbers are ephemeral--they may change - as documents are added to and deleted from an index. Clients should thus not - rely on a given document having the same number between sessions. -

An IndexReader can be opened on a directory for which an IndexWriter is - opened already, but it cannot be used to delete documents from the index then. -

- NOTE: for backwards API compatibility, several methods are not listed - as abstract, but have no useful implementations in this base class and - instead always throw UnsupportedOperationException. Subclasses are - strongly encouraged to override these methods, but in many cases may not - need to. -

-

- NOTE: as of 2.4, it's possible to open a read-only - IndexReader using one of the static open methods that - accepts the boolean readOnly parameter. Such a reader has - better concurrency as it's not necessary to synchronize on - the isDeleted method. Currently the default for readOnly - is false, meaning if not specified you will get a - read/write IndexReader. But in 3.0 this default will - change to true, meaning you must explicitly specify false - if you want to make changes with the resulting IndexReader. -

-

NOTE: {@link - IndexReader} instances are completely thread - safe, meaning multiple threads can call any of its methods, - concurrently. If your application requires external - synchronization, you should not synchronize on the - IndexReader instance; use your own - (non-Lucene) objects instead. -

- $Id: IndexReader.java 826049 2009-10-16 19:28:55Z mikemccand $ - -
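- 
- A typical read-only usage pattern in C# (a minimal sketch; the index
- path is illustrative):
- 
-     Directory dir = FSDirectory.Open(new System.IO.DirectoryInfo("/path/to/index"));
-     IndexReader reader = IndexReader.Open(dir, true);  // readOnly for better concurrency
-     try
-     {
-         int liveDocs = reader.NumDocs();   // documents not marked deleted
-         // ... enumerate terms or fetch stored documents here ...
-     }
-     finally
-     {
-         reader.Close();                    // closes the index files
-     }
- 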
- - Expert: returns the current refCount for this reader - - - Expert: increments the refCount of this IndexReader - instance. RefCounts are used to determine when a - reader can be closed safely, i.e. as soon as there are - no more references. Be sure to always call a - corresponding {@link #decRef}, in a finally clause; - otherwise the reader may never be closed. Note that - {@link #close} simply calls decRef(), which means that - the IndexReader will not really be closed until {@link - #decRef} has been called for all outstanding - references. - - - - - - - Expert: decreases the refCount of this IndexReader - instance. If the refCount drops to 0, then pending - changes (if any) are committed to the index and this - reader is closed. - - - IOException in case an IOException occurs in commit() or doClose() - - - - - - - will be deleted when IndexReader(Directory) is deleted - - - - - - Legacy Constructor for backwards compatibility. - -

- This constructor should not be used; it exists for backwards
- compatibility only, to support legacy subclasses that did not "own"
- a specific directory but needed to specify something to be returned
- by the directory() method. Future subclasses should delegate to the
- no-arg constructor and implement the directory() method as appropriate.

- Directory to be returned by the directory() method - - - - - use IndexReader() - -
- 
- AlreadyClosedException if this IndexReader is closed
- 
- Returns a read/write IndexReader reading the index in an FSDirectory in the named
- path.
- CorruptIndexException if the index is corrupt
- IOException if there is a low-level IO error
- Use {@link #Open(Directory, boolean)} instead.
- This method will be removed in the 3.0 release.
- 
- the path to the index directory
- 
- Returns an IndexReader reading the index in an
- FSDirectory in the named path. You should pass
- readOnly=true, since it gives much better concurrent
- performance, unless you intend to do write operations
- (delete documents or change norms) with the reader.
- CorruptIndexException if the index is corrupt
- IOException if there is a low-level IO error
- the path to the index directory
- true if this should be a readOnly
- reader
- Use {@link #Open(Directory, boolean)} instead.
- This method will be removed in the 3.0 release.
- 
- Returns a read/write IndexReader reading the index in an FSDirectory in the named
- path.
- the path to the index directory
- CorruptIndexException if the index is corrupt
- IOException if there is a low-level IO error
- Use {@link #Open(Directory, boolean)} instead.
- This method will be removed in the 3.0 release.
- 
- Returns an IndexReader reading the index in an
- FSDirectory in the named path. You should pass
- readOnly=true, since it gives much better concurrent
- performance, unless you intend to do write operations
- (delete documents or change norms) with the reader.
- CorruptIndexException if the index is corrupt
- IOException if there is a low-level IO error
- the path to the index directory
- true if this should be a readOnly
- reader
- Use {@link #Open(Directory, boolean)} instead.
- This method will be removed in the 3.0 release.
- 
- Returns a read/write IndexReader reading the index in
- the given Directory.
- the index directory
- CorruptIndexException if the index is corrupt
- IOException if there is a low-level IO error
- Use {@link #Open(Directory, boolean)} instead.
- This method will be removed in the 3.0 release.
- 
- Returns an IndexReader reading the index in the given
- Directory. You should pass readOnly=true, since it
- gives much better concurrent performance, unless you
- intend to do write operations (delete documents or
- change norms) with the reader.
- the index directory
- true if no changes (deletions, norms) will be made with this IndexReader
- CorruptIndexException if the index is corrupt
- IOException if there is a low-level IO error
- 
- Expert: returns a read/write IndexReader reading the index in the given
- {@link IndexCommit}.
- the commit point to open
- CorruptIndexException if the index is corrupt
- Use {@link #Open(IndexCommit, boolean)} instead.
- This method will be removed in the 3.0 release.
- 
- IOException if there is a low-level IO error
- 
- Expert: returns an IndexReader reading the index in the given
- {@link IndexCommit}. You should pass readOnly=true, since it
- gives much better concurrent performance, unless you
- intend to do write operations (delete documents or
- change norms) with the reader.
- the commit point to open
- true if no changes (deletions, norms) will be made with this IndexReader
- CorruptIndexException if the index is corrupt
- IOException if there is a low-level IO error
- 
- Expert: returns a read/write IndexReader reading the index in the given
- Directory, with a custom {@link IndexDeletionPolicy}.
- 
- the index directory
- 
- a custom deletion policy (only used
- if you use this reader to perform deletes or to set
- norms); see {@link IndexWriter} for details.
- Use {@link #Open(Directory, IndexDeletionPolicy, boolean)} instead.
- This method will be removed in the 3.0 release.
- 
- CorruptIndexException if the index is corrupt
- IOException if there is a low-level IO error
- 
- Expert: returns an IndexReader reading the index in
- the given Directory, with a custom {@link
- IndexDeletionPolicy}. You should pass readOnly=true,
- since it gives much better concurrent performance,
- unless you intend to do write operations (delete
- documents or change norms) with the reader.
- the index directory
- a custom deletion policy (only used
- if you use this reader to perform deletes or to set
- norms); see {@link IndexWriter} for details.
- true if no changes (deletions, norms) will be made with this IndexReader
- CorruptIndexException if the index is corrupt
- IOException if there is a low-level IO error
- 
- Expert: returns an IndexReader reading the index in
- the given Directory, with a custom {@link
- IndexDeletionPolicy}. You should pass readOnly=true,
- since it gives much better concurrent performance,
- unless you intend to do write operations (delete
- documents or change norms) with the reader.
- the index directory
- a custom deletion policy (only used
- if you use this reader to perform deletes or to set
- norms); see {@link IndexWriter} for details.
- true if no changes (deletions, norms) will be made with this IndexReader
- Subsamples which indexed
- terms are loaded into RAM. This has the same effect as {@link
- IndexWriter#setTermIndexInterval} except that setting
- must be done at indexing time while this setting can be
- set per reader. When set to N, then one in every
- N*termIndexInterval terms in the index is loaded into
- memory. By setting this to a value > 1 you can reduce
- memory usage, at the expense of higher latency when
- loading a TermInfo. The default value is 1. Set this
- to -1 to skip loading the terms index entirely.
- CorruptIndexException if the index is corrupt
- IOException if there is a low-level IO error
- 
- Expert: returns a read/write IndexReader reading the index in the given
- Directory, using a specific commit and with a custom
- {@link IndexDeletionPolicy}.
- the specific {@link IndexCommit} to open;
- see {@link IndexReader#listCommits} to list all commits
- in a directory
- a custom deletion policy (only used
- if you use this reader to perform deletes or to set
- norms); see {@link IndexWriter} for details.
- Use {@link #Open(IndexCommit, IndexDeletionPolicy, boolean)} instead.
- This method will be removed in the 3.0 release.
- 
- CorruptIndexException if the index is corrupt
- IOException if there is a low-level IO error
- 
- Expert: returns an IndexReader reading the index in
- the given Directory, using a specific commit and with
- a custom {@link IndexDeletionPolicy}. You should pass
- readOnly=true, since it gives much better concurrent
- performance, unless you intend to do write operations
- (delete documents or change norms) with the reader.
- the specific {@link IndexCommit} to open;
- see {@link IndexReader#listCommits} to list all commits
- in a directory
- a custom deletion policy (only used
- if you use this reader to perform deletes or to set
- norms); see {@link IndexWriter} for details.
- 
- true if no changes (deletions, norms) will be made with this IndexReader
- 
- CorruptIndexException if the index is corrupt
- IOException if there is a low-level IO error
- 
- Expert: returns an IndexReader reading the index in
- the given Directory, using a specific commit and with
- a custom {@link IndexDeletionPolicy}. You should pass
- readOnly=true, since it gives much better concurrent
- performance, unless you intend to do write operations
- (delete documents or change norms) with the reader.
- the specific {@link IndexCommit} to open;
- see {@link IndexReader#listCommits} to list all commits
- in a directory
- a custom deletion policy (only used
- if you use this reader to perform deletes or to set
- norms); see {@link IndexWriter} for details.
- true if no changes (deletions, norms) will be made with this IndexReader
- Subsamples which indexed
- terms are loaded into RAM. This has the same effect as {@link
- IndexWriter#setTermIndexInterval} except that setting
- must be done at indexing time while this setting can be
- set per reader. When set to N, then one in every
- N*termIndexInterval terms in the index is loaded into
- memory. By setting this to a value > 1 you can reduce
- memory usage, at the expense of higher latency when
- loading a TermInfo. The default value is 1. Set this
- to -1 to skip loading the terms index entirely.
- CorruptIndexException if the index is corrupt
- IOException if there is a low-level IO error
- 
- Refreshes an IndexReader if the index has changed since this instance
- was (re)opened.

- Opening an IndexReader is an expensive operation. This method can be used - to refresh an existing IndexReader to reduce these costs. This method - tries to only load segments that have changed or were created after the - IndexReader was (re)opened. -

- If the index has not changed since this instance was (re)opened, then this
- call is a NOOP and returns this instance. Otherwise, a new instance is
- returned. The old instance is not closed and remains usable.
-

- If the reader is reopened, even though they share - resources internally, it's safe to make changes - (deletions, norms) with the new reader. All shared - mutable state obeys "copy on write" semantics to ensure - the changes are not seen by other readers. -

- You can determine whether a reader was actually reopened by comparing the - old instance with the instance returned by this method: -

-            IndexReader reader = ...
-            ...
-            IndexReader newReader = reader.reopen();
-            if (newReader != reader) {
-            ...     // reader was reopened
-            reader.close();
-            }
-            reader = newReader;
-            ...
-            
- - Be sure to synchronize that code so that other threads, - if present, can never use reader after it has been - closed and before it's switched to newReader. - -

NOTE: If this reader is a near real-time
- reader (obtained from {@link IndexWriter#GetReader()}),
- reopen() will simply call writer.getReader() again for
- you, though this may change in the future.

- CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error -
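- 
- The same reopen pattern, expressed in C# against this port (a minimal
- sketch):
- 
-     IndexReader newReader = reader.Reopen();
-     if (newReader != reader)
-     {
-         // reader was reopened; close the old instance once no other
-         // thread can still be using it
-         reader.Close();
-     }
-     reader = newReader;
- 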
- - Just like {@link #Reopen()}, except you can change the - readOnly of the original reader. If the index is - unchanged but readOnly is different then a new reader - will be returned. - - - - Expert: reopen this reader on a specific commit point. - This always returns a readOnly reader. If the - specified commit point matches what this reader is - already on, and this reader is already readOnly, then - this same instance is returned; if it is not already - readOnly, a readOnly clone is returned. - - - - Efficiently clones the IndexReader (sharing most - internal state). -

- On cloning a reader with pending changes (deletions, - norms), the original reader transfers its write lock to - the cloned reader. This means only the cloned reader - may make further changes to the index, and commit the - changes to the index on close, but the old reader still - reflects all changes made up until it was cloned. -

- Like {@link #Reopen()}, it's safe to make changes to - either the original or the cloned reader: all shared - mutable state obeys "copy on write" semantics to ensure - the changes are not seen by other readers. -

-

- CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error -
- - Clones the IndexReader and optionally changes readOnly. A readOnly - reader cannot open a writeable reader. - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - - Returns the directory associated with this index. The Default - implementation returns the directory specified by subclasses when - delegating to the IndexReader(Directory) constructor, or throws an - UnsupportedOperationException if one was not specified. - - UnsupportedOperationException if no directory - - - Returns the time the index in the named directory was last modified. - Do not use this to check whether the reader is still up-to-date, use - {@link #IsCurrent()} instead. - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - Use {@link #LastModified(Directory)} instead. - This method will be removed in the 3.0 release. - - - - Returns the time the index in the named directory was last modified. - Do not use this to check whether the reader is still up-to-date, use - {@link #IsCurrent()} instead. - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - Use {@link #LastModified(Directory)} instead. - This method will be removed in the 3.0 release. - - - - - Returns the time the index in the named directory was last modified. - Do not use this to check whether the reader is still up-to-date, use - {@link #IsCurrent()} instead. - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - - Reads version number from segments files. The version number is - initialized with a timestamp and then increased by one for each change of - the index. - - - where the index resides. - - version number. - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - Use {@link #GetCurrentVersion(Directory)} instead. - This method will be removed in the 3.0 release. - - - - Reads version number from segments files. The version number is - initialized with a timestamp and then increased by one for each change of - the index. - - - where the index resides. - - version number. - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - Use {@link #GetCurrentVersion(Directory)} instead. - This method will be removed in the 3.0 release. - - - - Reads version number from segments files. The version number is - initialized with a timestamp and then increased by one for each change of - the index. - - - where the index resides. - - version number. - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - - Reads commitUserData, previously passed to {@link - IndexWriter#Commit(Map)}, from current index - segments file. This will return null if {@link - IndexWriter#Commit(Map)} has never been called for - this index. - - - where the index resides. - - commit userData. - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - - - - - - Version number when this IndexReader was opened. Not implemented in the - IndexReader base class. - -

- If this reader is based on a Directory (ie, was created by calling - {@link #Open}, or {@link #Reopen} on a reader based on a Directory), then - this method returns the version recorded in the commit that the reader - opened. This version is advanced every time {@link IndexWriter#Commit} is - called. -

- -

- If instead this reader is a near real-time reader (ie, obtained by a call
- to {@link IndexWriter#GetReader}, or by calling {@link #Reopen} on a near
- real-time reader), then this method returns the version of the last
- commit done by the writer. Note that even as further changes are made
- with the writer, the version will not change until a commit is
- completed. Thus, you should not rely on this method to determine when a
- near real-time reader should be opened. Use {@link #IsCurrent} instead.
- 

- -

- UnsupportedOperationException - unless overridden in subclass - -
- - Retrieve the String userData optionally passed to - IndexWriter#commit. This will return null if {@link - IndexWriter#Commit(Map)} has never been called for - this index. - - - - - - -

For IndexReader implementations that use
- TermInfosReader to read terms, this sets the
- indexDivisor to subsample the number of indexed terms
- loaded into memory. This has the same effect as {@link
- IndexWriter#setTermIndexInterval} except that setting
- must be done at indexing time while this setting can be
- set per reader. When set to N, then one in every
- N*termIndexInterval terms in the index is loaded into
- memory. By setting this to a value > 1 you can reduce
- memory usage, at the expense of higher latency when
- loading a TermInfo. The default value is 1.

- 
- NOTE: you must call this before the term
- index is loaded. If the index is already loaded,
- an IllegalStateException is thrown.
- 

- IllegalStateException if the term index has already been loaded into memory - Please use {@link IndexReader#Open(Directory, IndexDeletionPolicy, boolean, int)} to specify the required TermInfos index divisor instead. - -
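- 
- For example (a minimal sketch; the divisor value 4 is illustrative),
- the divisor can instead be supplied when the reader is opened:
- 
-     // Load only every 4th indexed term into RAM: lower memory use,
-     // higher latency when seeking to a term.
-     IndexReader reader = IndexReader.Open(dir, null, true, 4);
- 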
- -

For IndexReader implementations that use - TermInfosReader to read terms, this returns the - current indexDivisor as specified when the reader was - opened. -

-
- - Check whether any new changes have occurred to the index since this - reader was opened. - -

- If this reader is based on a Directory (ie, was created by calling - {@link #open}, or {@link #reopen} on a reader based on a Directory), then - this method checks if any further commits (see {@link IndexWriter#commit} - have occurred in that directory). -

- -

- If instead this reader is a near real-time reader (ie, obtained by a call
- to {@link IndexWriter#getReader}, or by calling {@link #reopen} on a near
- real-time reader), then this method checks if either a new commit has
- occurred, or any new uncommitted changes have taken place via the writer.
- Note that even if the writer has only performed merging, this method will
- still return false.
- 

- -

- In any event, if this returns false, you should call {@link #reopen} to - get a new reader that sees the changes. -

- -

- CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - UnsupportedOperationException unless overridden in subclass -
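- 
- A refresh check built on this method might look like the following C#
- sketch (illustrative only):
- 
-     if (!reader.IsCurrent())
-     {
-         IndexReader refreshed = reader.Reopen();
-         if (refreshed != reader)
-         {
-             reader.Close();
-             reader = refreshed;   // now sees the latest commit
-         }
-     }
- 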
- 
- Checks if the index is optimized (if it has a single segment and
- no deletions). Not implemented in the IndexReader base class.
- true if the index is optimized; false otherwise
- UnsupportedOperationException unless overridden in subclass
- 
- Return an array of term frequency vectors for the specified document.
- The array contains a vector for each vectorized field in the document.
- Each vector contains terms and frequencies for all terms in a given vectorized field.
- If no such fields existed, the method returns null. The term vectors that are
- returned may either be of type {@link TermFreqVector}
- or of type {@link TermPositionVector} if
- positions or offsets have been stored.
- 
- document for which term frequency vectors are returned
- array of term frequency vectors. May be null if no term vectors have been
- stored for the specified document.
- IOException if index cannot be accessed
- 
- Return a term frequency vector for the specified document and field. The
- returned vector contains terms and frequencies for the terms in
- the specified field of this document, if the field had the storeTermVector
- flag set. If termvectors had been stored with positions or offsets, a
- {@link TermPositionVector} is returned.
- 
- document for which the term frequency vector is returned
- field for which the term frequency vector is returned.
- term frequency vector. May be null if the field does not exist in the specified
- document or the term vector was not stored.
- IOException if index cannot be accessed
- 
- Load the Term Vector into a user-defined data structure instead of relying on the parallel arrays of
- the {@link TermFreqVector}.
- The number of the document to load the vector for
- The name of the field to load
- The {@link TermVectorMapper} to process the vector. Must not be null
- IOException if term vectors cannot be accessed or if they do not exist on the field and doc specified.
- 
- Map all the term vectors for all fields in a Document
- The number of the document to load the vector for
- The {@link TermVectorMapper} to process the vector. Must not be null
- IOException if term vectors cannot be accessed or if they do not exist on the field and doc specified.
- 
- Returns true if an index exists at the specified directory.
- If the directory does not exist or if there is no index in it,
- false is returned.
- the directory to check for an index
- true if an index exists; false otherwise
- Use {@link #IndexExists(Directory)} instead.
- This method will be removed in the 3.0 release.
- 
- Returns true if an index exists at the specified directory.
- If the directory does not exist or if there is no index in it,
- false is returned.
- the directory to check for an index
- true if an index exists; false otherwise
- Use {@link #IndexExists(Directory)} instead.
- This method will be removed in the 3.0 release.
- 
- Returns true if an index exists at the specified directory.
- If the directory does not exist or if there is no index in it,
- false is returned.
- the directory to check for an index
- true if an index exists; false otherwise
- IOException if there is a problem with accessing the index
- 
- Returns the number of documents in this index.
- 
- Returns one greater than the largest possible document number.
- This may be used to, e.g., determine how big to allocate an array which
- will have an element for every document number in an index.
- 
- Returns the number of deleted documents.
- - - Returns the stored fields of the nth - Document in this index. -

- NOTE: for performance reasons, this method does not check if the - requested document is deleted, and therefore asking for a deleted document - may yield unspecified results. Usually this is not required, however you - can call {@link #IsDeleted(int)} with the requested document ID to verify - the document is not deleted. - -

- CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error -
- 
- Get the {@link Lucene.Net.Documents.Document} at the nth
- position. The {@link FieldSelector} may be used to determine
- what {@link Lucene.Net.Documents.Field}s to load and how they should
- be loaded. NOTE: If this Reader (more specifically, the underlying
- FieldsReader) is closed before the lazy
- {@link Lucene.Net.Documents.Field} is loaded, an exception may be
- thrown. If you want the value of a lazy
- {@link Lucene.Net.Documents.Field} to be available after closing, you
- must explicitly load it or fetch the Document again with a new loader.

- NOTE: for performance reasons, this method does not check if the - requested document is deleted, and therefore asking for a deleted document - may yield unspecified results. Usually this is not required, however you - can call {@link #IsDeleted(int)} with the requested document ID to verify - the document is not deleted. - -

- Get the document at the nth position - - The {@link FieldSelector} to use to determine what - Fields should be loaded on the Document. May be null, in which case - all Fields will be loaded. - - The stored fields of the - {@link Lucene.Net.Documents.Document} at the nth position - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - - - - - - - -
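- 
- For example, retrieving stored fields in C# (a minimal sketch; the
- field name "title" is illustrative):
- 
-     for (int i = 0; i < reader.MaxDoc(); i++)
-     {
-         if (reader.IsDeleted(i)) continue;   // see the note above
-         Document doc = reader.Document(i);
-         string title = doc.Get("title");     // null if not stored
-     }
- 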
- 
- Returns true if document n has been deleted
- 
- Returns true if any documents have been deleted
- 
- Returns true if there are norms stored for this field.
- 
- Returns the byte-encoded normalization factor for the named field of
- every document. This is used by the search code to score documents.
- 
- Reads the byte-encoded normalization factor for the named field of every
- document. This is used by the search code to score documents.
- 
- Expert: Resets the normalization factor for the named field of the named
- document. The norm represents the product of the field's {@link
- Lucene.Net.Documents.Fieldable#SetBoost(float) boost} and its {@link Similarity#LengthNorm(String,
- int) length normalization}. Thus, to preserve the length normalization
- values when resetting this, one should base the new value upon the old.
- 
- NOTE: If this field does not store norms, then
- this method call will silently do nothing.
- 
- StaleReaderException if the index has changed
- since this reader was opened
- CorruptIndexException if the index is corrupt
- LockObtainFailedException if another writer
- has this index open (write.lock could not
- be obtained)
- IOException if there is a low-level IO error
- 
- Implements setNorm in subclass.
- 
- Expert: Resets the normalization factor for the named field of the named
- document.
- 
- StaleReaderException if the index has changed
- since this reader was opened
- CorruptIndexException if the index is corrupt
- LockObtainFailedException if another writer
- has this index open (write.lock could not
- be obtained)
- IOException if there is a low-level IO error
- 
- Returns an enumeration of all the terms in the index. The
- enumeration is ordered by Term.compareTo(). Each term is greater
- than all that precede it in the enumeration. Note that after
- calling terms(), {@link TermEnum#Next()} must be called
- on the resulting enumeration before calling other methods such as
- {@link TermEnum#Term()}.
- IOException if there is a low-level IO error
- 
- Returns an enumeration of all terms starting at a given term. If
- the given term does not exist, the enumeration is positioned at the
- first term greater than the supplied term. The enumeration is
- ordered by Term.compareTo(). Each term is greater than all that
- precede it in the enumeration.
- IOException if there is a low-level IO error
- 
- Returns the number of documents containing the term t.
- IOException if there is a low-level IO error
- 
- Returns an enumeration of all the documents which contain
- term. For each document, the document number and the frequency of
- the term in that document are provided, for use in
- search scoring. If term is null, then all non-deleted
- docs are returned with freq=1.
- Thus, this method implements the mapping:

    - Term    =>    <docNum, freq>* -
-

The enumeration is ordered by document number. Each document number - is greater than all that precede it in the enumeration. -

- IOException if there is a low-level IO error -
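- 
- Iterating this mapping in C# (a minimal sketch; the field and term
- text are illustrative):
- 
-     TermDocs termDocs = reader.TermDocs(new Term("contents", "lucene"));
-     try
-     {
-         while (termDocs.Next())
-         {
-             int docNum = termDocs.Doc();   // document containing the term
-             int freq = termDocs.Freq();    // occurrences in that document
-         }
-     }
-     finally
-     {
-         termDocs.Close();
-     }
- 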
- - Returns an unpositioned {@link TermDocs} enumerator. - IOException if there is a low-level IO error - - - Returns an enumeration of all the documents which contain - term. For each document, in addition to the document number - and frequency of the term in that document, a list of all of the ordinal - positions of the term in the document is available. Thus, this method - implements the mapping: - -

    - Term    =>    <docNum, freq, <pos_1, pos_2, ... pos_(freq-1)>>*
-

This positional information facilitates phrase and proximity searching. -

The enumeration is ordered by document number. Each document number is - greater than all that precede it in the enumeration. -

- IOException if there is a low-level IO error -
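- 
- The positional variant in C# (a minimal sketch):
- 
-     TermPositions positions = reader.TermPositions(new Term("contents", "lucene"));
-     try
-     {
-         while (positions.Next())
-         {
-             for (int i = 0; i < positions.Freq(); i++)
-             {
-                 int pos = positions.NextPosition();   // ordinal position in the doc
-             }
-         }
-     }
-     finally
-     {
-         positions.Close();
-     }
- 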
- 
- Returns an unpositioned {@link TermPositions} enumerator.
- IOException if there is a low-level IO error
- 
- Deletes the document numbered docNum. Once a document is
- deleted it will not appear in TermDocs or TermPositions enumerations.
- Attempts to read its fields with the {@link #document}
- method will result in an error. The presence of this document may still be
- reflected in the {@link #docFreq} statistic, though
- this will be corrected eventually as the index is further modified.
- 
- StaleReaderException if the index has changed
- since this reader was opened
- CorruptIndexException if the index is corrupt
- LockObtainFailedException if another writer
- has this index open (write.lock could not
- be obtained)
- IOException if there is a low-level IO error
- 
- Implements deletion of the document numbered docNum.
- Applications should call {@link #DeleteDocument(int)} or {@link #DeleteDocuments(Term)}.
- 
- Deletes all documents that have a given term indexed.
- This is useful if one uses a document field to hold a unique ID string for
- the document. Then to delete such a document, one merely constructs a
- term with the appropriate field and the unique ID string as its text and
- passes it to this method.
- See {@link #DeleteDocument(int)} for information about when this deletion will
- become effective.
- the number of documents deleted
- StaleReaderException if the index has changed
- since this reader was opened
- CorruptIndexException if the index is corrupt
- LockObtainFailedException if another writer
- has this index open (write.lock could not
- be obtained)
- IOException if there is a low-level IO error
- 
- Undeletes all documents currently marked as deleted in this index.
- StaleReaderException if the index has changed
- since this reader was opened
- LockObtainFailedException if another writer
- has this index open (write.lock could not
- be obtained)
- CorruptIndexException if the index is corrupt
- IOException if there is a low-level IO error
- 
- Implements actual undeleteAll() in subclass.
- 
- Does nothing by default. Subclasses that require a write lock for
- index modifications must implement this method.
- IOException
- 
- Opaque Map (String -> String)
- that's recorded into the segments file in the index,
- and retrievable by {@link
- IndexReader#getCommitUserData}.
- IOException
- 
- Commit changes resulting from delete, undeleteAll, or
- setNorm operations.
- If an exception is hit, then either no changes or all
- changes will have been committed to the index
- (transactional semantics).
- IOException if there is a low-level IO error
- 
- Commit changes resulting from delete, undeleteAll, or
- setNorm operations.
- If an exception is hit, then either no changes or all
- changes will have been committed to the index
- (transactional semantics).
- IOException if there is a low-level IO error
- 
- Implements commit.
- Please implement {@link #DoCommit(Map)} instead.
- 
- Implements commit. NOTE: subclasses should override
- this. In 3.0 this will become an abstract method.
- 
- Closes files associated with this index.
- Also saves any new deletions to disk.
- No other methods should be called after this has been called.
- IOException if there is a low-level IO error
- 
- Implements close.
- 
- Get a list of unique field names that exist in this index and have the specified
- field option information.
- - specifies which field option should be available for the returned fields - - Collection of Strings indicating the names of the fields. - - - - - - Returns true iff the index in the named directory is - currently locked. - - the directory to check for a lock - - IOException if there is a low-level IO error - Please use {@link IndexWriter#IsLocked(Directory)} instead. - This method will be removed in the 3.0 release. - - - - - Returns true iff the index in the named directory is - currently locked. - - the directory to check for a lock - - IOException if there is a low-level IO error - Use {@link #IsLocked(Directory)} instead. - This method will be removed in the 3.0 release. - - - - - Forcibly unlocks the index in the named directory. -

- Caution: this should only be used by failure recovery code, - when it is known that no other process nor thread is in fact - currently accessing this index. -

- Please use {@link IndexWriter#Unlock(Directory)} instead. - This method will be removed in the 3.0 release. - - -
- - Expert: return the IndexCommit that this reader has - opened. This method is only implemented by those - readers that correspond to a Directory with its own - segments_N file. - -

WARNING: this API is new and experimental and
- may suddenly change.

-

-
- 
- Prints the filename and size of each file within a given compound file.
- Add the -extract flag to extract files to the current working directory.
- In order to make the extracted version of the index work, you have to copy
- the segments file from the compound index into the directory where the extracted files are stored.
- Usage: Lucene.Net.Index.IndexReader [-extract] <cfsfile>
- 
- Returns all commit points that exist in the Directory.
- Normally, because the default is {@link
- KeepOnlyLastCommitDeletionPolicy}, there would be only
- one commit point. But if you're using a custom {@link
- IndexDeletionPolicy} then there could be many commits.
- Once you have a given commit, you can open a reader on
- it by calling {@link IndexReader#Open(IndexCommit)}.
- There must be at least one commit in
- the Directory, else this method throws {@link
- java.io.IOException}. Note that if a commit is in
- progress while this method is running, that commit
- may or may not be returned in the array.
- 
- Expert: returns the sequential sub readers that this
- reader is logically composed of. For example,
- IndexSearcher uses this API to drive searching by one
- sub reader at a time. If this reader is not composed
- of sequential child readers, it should return null.
- If this method returns an empty array, that means this
- reader is a null reader (for example a MultiReader
- that has no sub readers).

- NOTE: You should not try using sub-readers returned by
- this method to make any changes (setNorm, deleteDocument,
- etc.). While this might succeed for one composite reader
- (like MultiReader), it will most likely lead to index
- corruption for other readers (like DirectoryReader obtained
- through {@link #open}). Use the parent reader directly.

-
- 
- Expert
- 
- Returns the number of unique terms (across all fields)
- in this reader.
- 
- This method returns long, even though internally
- Lucene cannot handle more than 2^31 unique terms, for
- a possible future when this limitation is removed.
- 
- UnsupportedOperationException if this count
- cannot be easily determined (e.g. Multi*Readers).
- Instead, you should call {@link
- #getSequentialSubReaders} and ask each sub reader for
- its unique term count.
- 
- Expert: Return the state of the flag that disables fake norms in favor of representing the absence of field norms with null.
- true if fake norms are disabled
- 
- This currently defaults to false (to remain
- back-compatible), but in 3.0 it will be hardwired to
- true, meaning the norms() methods will return null for
- fields that had disabled norms.
- 
- Expert: Set the state of the flag that disables fake norms in favor of representing the absence of field norms with null.
- true to disable fake norms, false to preserve the legacy behavior
- 
- This currently defaults to false (to remain
- back-compatible), but in 3.0 it will be hardwired to
- true, meaning the norms() methods will return null for
- fields that had disabled norms.
- 
- Utility class for executing code that needs to do
- something with the current segments file. This is
- necessary with lock-less commits because from the time
- you locate the current segments file name, until you
- actually open it, read its contents, or check modified
- time, etc., it could have been deleted due to a writer
- commit finishing.
- 
- A collection of segmentInfo objects with methods for operating on
- those segments in relation to the file system.
- 

NOTE: This API is new and still experimental
- (subject to change suddenly in the next release)

-

-
- - The file format version, a negative number. - - - This format adds details used for lockless commits. It differs - slightly from the previous format in that file names - are never re-used (write once). Instead, each file is - written to the next generation. For example, - segments_1, segments_2, etc. This allows us to not use - a commit lock. See file - formats for details. - - - - This format adds a "hasSingleNormFile" flag into each segment info. - See LUCENE-756 - for details. - - - - This format allows multiple segments to share a single - vectors and stored fields file. - - - - This format adds a checksum at the end of the file to - ensure all bytes were successfully written. - - - - This format adds the deletion count for each segment. - This way IndexWriter can efficiently report numDocs(). - - - - This format adds the boolean hasProx to record if any - fields in the segment store prox information (ie, have - omitTermFreqAndPositions==false) - - - - This format adds optional commit userData (String) storage. - - - This format adds optional per-segment String - dianostics storage, and switches userData to Map - - - - counts how often the index has been changed by adding or deleting docs. - starting with the current time in milliseconds forces to create unique version numbers. - - - - If non-null, information about loading segments_N files - - - - - Get the generation (N) of the current segments_N file - from a list of files. - - - -- array of file names to check - - - - Get the generation (N) of the current segments_N file - in the directory. - - - -- directory to search for the latest segments_N file - - - - Get the filename of the current segments_N file - from a list of files. - - - -- array of file names to check - - - - Get the filename of the current segments_N file - in the directory. - - - -- directory to search for the latest segments_N file - - - - Get the segments_N filename in use by this segment infos. - - - Parse the generation off the segments file name and - return it. - - - - Get the next segments_N filename that will be written. - - - Read a particular segmentFileName. Note that this may - throw an IOException if a commit is in process. - - - -- directory containing the segments file - - -- segment file to load - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - - This version of read uses the retry logic (for lock-less - commits) to find the right segments file to load. - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - - Returns a copy of this instance, also copying each - SegmentInfo. - - - - version number when this SegmentInfos was generated. - - - Current version number from segments file. - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - - Returns userData from latest segments file - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - - If non-null, information about retries when loading - the segments file will be printed to this. - - - - Advanced: set how many times to try loading the - segments.gen file contents to determine current segment - generation. This file is only referenced when the - primary method (listing the directory) fails. - - - - - - - - Advanced: set how many milliseconds to pause in between - attempts to load the segments.gen file. - - - - - - - - Advanced: set how many times to try incrementing the - gen when loading the segments file. 
This only runs if - the primary (listing directory) and secondary (opening - segments.gen file) methods fail to find the segments - file. - - - - - - - - - - - - Returns a new SegmentInfos containing the SegmentInfo - instances in the specified range first (inclusive) to - last (exclusive), so total number of segments returned - is last-first. - - - - Call this to start a commit. This writes the new - segments file, but writes an invalid checksum at the - end, so that it is not visible to readers. Once this - is called you must call {@link #finishCommit} to complete - the commit or {@link #rollbackCommit} to abort it. - - - - Returns all file names referenced by SegmentInfo - instances matching the provided Directory (i.e. files - associated with any "external" segments are skipped). - The returned collection is recomputed on each - invocation. - - - - Writes & syncs to the Directory dir, taking care to - remove the segments file on exception - - - - Replaces all segments in this instance, but keeps - generation, version, counter so that future commits - remain write once. - - - - - Simple brute force implementation. - If size is equal, compare items one by one. - - SegmentInfos object to check equality for - true if lists are equal, false otherwise - - - - Calculate hash code of SegmentInfos - - hash code as in java version of ArrayList - - - Utility class for executing code that needs to do - something with the current segments file. This is - necessary with lock-less commits because from the time - you locate the current segments file name, until you - actually open it, read its contents, or check modified - time, etc., it could have been deleted due to a writer - commit finishing. - - - - Subclass must implement this. The assumption is an - IOException will be thrown if something goes wrong - during the processing that could have been caused by - a writer committing. - - - - Constants describing field properties, for example used for - {@link IndexReader#GetFieldNames(FieldOption)}. - - - - All fields - - - All indexed fields - - - All fields that store payloads - - - All fields that omit tf - - - Renamed to {@link #OMIT_TERM_FREQ_AND_POSITIONS} - - - - All fields which are not indexed - - - All fields which are indexed with termvectors enabled - - - All fields which are indexed but don't have termvectors enabled - - - All fields with termvectors enabled. Please note that only standard termvector fields are returned - - - All fields with termvectors with position values enabled - - - All fields with termvectors with offset values enabled - - - All fields with termvectors with offset values and position values enabled - -
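As a rough illustration of the generation handling described above, the following Java sketch recovers N from a segments_N file name. The base-36 radix (Character.MAX_RADIX) is an assumption about the on-disk naming scheme for this sketch, not something this documentation guarantees:

    // Hedged sketch: recover the generation N from a segments file name.
    // Assumes "segments" alone means generation 0 and that N is encoded
    // in base 36 (Character.MAX_RADIX) after the underscore.
    static long generationFromSegmentsFileName(String fileName) {
        if (fileName.equals("segments")) {
            return 0;
        } else if (fileName.startsWith("segments_")) {
            return Long.parseLong(fileName.substring("segments_".length()),
                                  Character.MAX_RADIX);
        }
        throw new IllegalArgumentException("not a segments file: " + fileName);
    }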

Construct a MultiReader aggregating the named set of (sub)readers. - Directory locking for delete, undeleteAll, and setNorm operations is - left to the subreaders.

-

Note that all subreaders are closed if this MultiReader is closed.

-

- set of (sub)readers - - IOException -
- -

Construct a MultiReader aggregating the named set of (sub)readers. - Directory locking for delete, undeleteAll, and setNorm operations is - left to the subreaders.

-

- indicates whether the subreaders should be closed - when this MultiReader is closed - - set of (sub)readers - - IOException -
- - Tries to reopen the subreaders. -
- If one or more subreaders could be re-opened (i.e. subReader.reopen() - returned a new instance != subReader), then a new MultiReader instance - is returned; otherwise this instance is returned. -

- A re-opened instance might share one or more subreaders with the old - instance. Index modification operations result in undefined behavior - when performed before the old instance is closed. - (see {@link IndexReader#Reopen()}). -

- If subreaders are shared, then the reference count of those - readers is increased to ensure that the subreaders remain open - until the last referring reader is closed. - -

- CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error -
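A sketch of the reopen contract described above; the directories and the cast are illustrative, and the old instance must still be closed by the caller when a new one is returned:

    IndexReader r1 = IndexReader.open(dir1, true);   // read-only subreaders
    IndexReader r2 = IndexReader.open(dir2, true);
    MultiReader multi = new MultiReader(new IndexReader[] { r1, r2 }, true);

    // Later, after the underlying indexes may have changed:
    IndexReader reopened = multi.reopen();
    if (reopened != multi) {
        multi.close();                  // release the old instance's references
        multi = (MultiReader) reopened;
    }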
- - Clones the subreaders. - (see {@link IndexReader#clone()}). -
-

- If subreaders are shared, then the reference count of those - readers is increased to ensure that the subreaders remain open - until the last referring reader is closed. -

-
- - If clone is true then we clone each of the subreaders - - - New IndexReader, or same one (this) if - reopen/clone is not necessary - - CorruptIndexException - IOException - - - - - - - Checks recursively if all subreaders are up to date. - - - Not implemented. - UnsupportedOperationException - - - Change to true to see details of reference counts when - infoStream != null - - - - Initialize the deleter: find all previous commits in - the Directory, incref the files they reference, call - the policy to let it delete commits. This will remove - any files not referenced by any of the commits. - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - - Remove the CommitPoints in the commitsToDelete List by - DecRef'ing all files from each SegmentInfos. - - - - Writer calls this when it has hit an error and had to - roll back, to tell us that there may now be - unreferenced files in the filesystem. So we re-list - the filesystem and delete such files. If segmentName - is non-null, we will only delete files corresponding to - that segment. - - - - For definition of "check point" see IndexWriter comments: - "Clarification: Check Points (and commits)". - - Writer calls this when it has made a "consistent - change" to the index, meaning new files are written to - the index and the in-memory SegmentInfos have been - modified to point to those files. - - This may or may not be a commit (segments_N may or may - not have been written). - - We simply incref the files referenced by the new - SegmentInfos and decref the files we had previously - seen (if any). - - If this is a commit, we also call the policy to give it - a chance to remove other commits. If any commits are - removed, we decref their files as well. - - - - Deletes the specified files, but only if they are new - (have not yet been incref'd). - - - - Tracks the reference count for a single index file: - - - Holds details for each commit point. This class is - also passed to the deletion policy. Note: this class - has a natural ordering that is inconsistent with - equals. - - - -

Expert: represents a single commit into an index as seen by the - {@link IndexDeletionPolicy} or {@link IndexReader}.

- -

Changes to the content of an index are made visible - only after the writer who made that change commits by - writing a new segments file - (segments_N). The point in time at which the - writing of a new segments file to the directory - is completed is an index commit.

- -

Each index commit point has a unique segments file - associated with it. The segments file associated with a - later index commit point would have a larger N.

- -

WARNING: This API is new and experimental and - may suddenly change.

-

-
- - Get the segments file (segments_N) associated - with this commit point. - - - - Returns all index files referenced by this commit point. - - - Returns the {@link Directory} for the index. - - - Delete this commit point. This only applies when using - the commit point in the context of IndexWriter's - IndexDeletionPolicy. -

- Upon calling this, the writer is notified that this commit - point should be deleted. -

- The decision that a commit point should be deleted is made by the {@link IndexDeletionPolicy} in effect, - and therefore this should only be called by its {@link IndexDeletionPolicy#onInit onInit()} or - {@link IndexDeletionPolicy#onCommit onCommit()} methods. -

-
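As a sketch of how a deletion policy might use delete(), the class below keeps only the newest commit point. It assumes the commits list is ordered oldest to newest; Lucene ships a comparable KeepOnlyLastCommitDeletionPolicy, so treat this only as an illustration:

    import java.util.List;

    // Illustrative policy: drop every commit point except the most recent.
    public class KeepNewestOnlyPolicy implements IndexDeletionPolicy {
        public void onInit(List commits) { trim(commits); }
        public void onCommit(List commits) { trim(commits); }
        private void trim(List commits) {
            for (int i = 0; i < commits.size() - 1; i++) {
                ((IndexCommit) commits.get(i)).delete();
            }
        }
    }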
- - Returns true if this commit is an optimized index. - - Two IndexCommits are equal if both their Directory and versions are equal. - - Returns the version for this IndexCommit. This is the - same value that {@link IndexReader#getVersion} would - return if it were opened on this commit. - - - - Returns the generation (the _N in segments_N) for this - IndexCommit - - - - Convenience method that returns the last modified time - of the segments_N file corresponding to this index - commit, equivalent to - getDirectory().fileModified(getSegmentsFileName()). - - - - Returns userData, previously passed to {@link - IndexWriter#Commit(Map)} for this commit. Map is - String -> String. - - - - Called only by the deletion policy, to remove this - commit point from the index. - - - - NOTE: this API is experimental and will likely change - - - Adds a new doc in this term. If this returns null - then we just skip consuming positions/payloads. - - - - Called when we are done adding docs to this term - - - Adds a new doc in this term. If this returns null - then we just skip consuming positions/payloads. - - - - Called when we are done adding docs to this term - - - This is a DocFieldConsumer that inverts each field, - separately, from a Document, and accepts an - InvertedTermsConsumer to process those terms. - - - - Combines multiple files into a single compound file. - The file format:
  • VInt fileCount
  • {Directory} - fileCount entries with the following structure:
      • long dataOffset
      • String fileName
  • {File Data} - fileCount entries with the raw data of the corresponding file
- - The fileCount integer indicates how many files are contained in this compound - file. The {directory} that follows has that many entries. Each directory entry - contains a long pointer to the start of this file's data section, and a String - with that file's name. - -
- $Id: CompoundFileWriter.java 690539 2008-08-30 17:33:06Z mikemccand $ - -
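Expressed with IndexOutput primitives, the layout above can be written with placeholder offsets that are patched once the data sections are in place. This is a simplified sketch under the assumption that the output supports seek(); it is not the exact CompoundFileWriter code:

    // Requires org.apache.lucene.store.IndexOutput and java.io.IOException.
    void writeCompound(IndexOutput out, String[] names, byte[][] data)
            throws IOException {
        out.writeVInt(names.length);              // VInt fileCount
        long[] slots = new long[names.length];
        for (int i = 0; i < names.length; i++) {  // {Directory} entries
            slots[i] = out.getFilePointer();
            out.writeLong(0L);                    // dataOffset placeholder
            out.writeString(names[i]);
        }
        long[] offsets = new long[names.length];
        for (int i = 0; i < names.length; i++) {  // {File Data} sections
            offsets[i] = out.getFilePointer();
            out.writeBytes(data[i], data[i].length);
        }
        for (int i = 0; i < names.length; i++) {  // patch in the real offsets
            out.seek(slots[i]);
            out.writeLong(offsets[i]);
        }
    }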
- - Create the compound stream in the specified file. The file name is the - entire name (no extensions are added). - - NullPointerException if dir or name is null - - - Returns the directory of the compound file. - - - Returns the name of the compound file. - - - Add a source stream. file is the string by which the - sub-stream will be known in the compound stream. - - - IllegalStateException if this writer is closed - NullPointerException if file is null - IllegalArgumentException if a file with the same name - has been added already - - - - Merge files with the extensions added up to now. - All files with these extensions are combined sequentially into the - compound stream. After successful merge, the source files - are deleted. - - IllegalStateException if close() had been called before or - if no file has been added to this object - - - - Copy the contents of the file with specified extension into the - provided output stream. Use the provided buffer for moving data - to reduce memory allocation. - - - - source file - - - temporary holder for the start of directory entry for this file - - - temporary holder for the start of this file's data section - - - Class for accessing a compound stream. - This class implements a directory, but is limited to only read operations. - Directory methods that would normally modify data throw an exception. - - - - $Id: CompoundFileReader.java 673371 2008-07-02 11:57:27Z mikemccand $ - - - - Returns an array of strings, one for each file in the directory. - - - Returns true iff a file with the given name exists. - - - Returns the time the compound file was last modified. - - - Set the modified time of the compound file to now. - - - Not implemented - UnsupportedOperationException - - - Not implemented - UnsupportedOperationException - - - Returns the length of a file in the directory. - IOException if the file does not exist - - - Not implemented - UnsupportedOperationException - - - Not implemented - UnsupportedOperationException - - - Implementation of an IndexInput that reads from a portion of the - compound file. The visibility is left as "package" *only* because - this helps with testing since JUnit test cases in a different class - can then access package fields of this class. - - - - Expert: implements buffer refill. Reads bytes from the current - position in the input. - - the array to read bytes into - - the offset in the array to start storing bytes - - the number of bytes to read - - - - Expert: implements seek. Sets current position in this file, where - the next {@link #ReadInternal(byte[],int,int)} will occur. - - - - - - Closes the stream to further operations. - - - Synonymous with {@link Field}. - -

WARNING: This interface may change within minor versions, despite Lucene's backward compatibility requirements. - This means new methods may be added from version to version. This change only affects the Fieldable API; other backwards - compatibility promises remain intact. For example, Lucene can still - read and write indices created within the same major version. -

- - -

-
- - Sets the boost factor for hits on this field. This value will be - multiplied into the score of all hits on this field of this - document. -

The boost is multiplied by {@link Lucene.Net.Documents.Document#GetBoost()} of the document - containing this field. If a document has multiple fields with the same - name, all such values are multiplied together. This product is then - used to compute the norm factor for the field. By - default, in the {@link - Lucene.Net.Search.Similarity#ComputeNorm(String, - FieldInvertState)} method, the boost value is multiplied - by the {@link - Lucene.Net.Search.Similarity#LengthNorm(String, - int)} and then rounded by {@link Lucene.Net.Search.Similarity#EncodeNorm(float)} before it is stored in the - index. One should attempt to ensure that this product does not overflow - the range of that encoding. - -

- - - - - - -
- - Returns the boost factor for hits for this field. - -

The default value is 1.0. - -

Note: this value is not stored directly with the document in the index. - Documents returned from {@link Lucene.Net.Index.IndexReader#Document(int)} and - {@link Lucene.Net.Search.Hits#Doc(int)} may thus not have the same value present as when - this field was indexed. - -

- - -
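A short example of the boost contract above (field name and text are placeholders):

    Document doc = new Document();
    Field title = new Field("title", "Lucene in Action",
                            Field.Store.YES, Field.Index.ANALYZED);
    title.setBoost(2.0f);  // folded into this field's norm at index time
    doc.add(title);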
- - Returns the name of the field as an interned string. - For example "date", "title", "body", ... - - - - The value of the field as a String, or null. -

- For indexing, if isStored()==true, the stringValue() will be used as the stored field value - unless isBinary()==true, in which case binaryValue() will be used. - - If isIndexed()==true and isTokenized()==false, this String value will be indexed as a single token. - If isIndexed()==true and isTokenized()==true, then tokenStreamValue() will be used to generate indexed tokens if not null, - else readerValue() will be used to generate indexed tokens if not null, else stringValue() will be used to generate tokens. -

-
- - The value of the field as a Reader, which can be used at index time to generate indexed tokens. - - - - - The value of the field in Binary, or null. - - - - - The TokenStream for this field to be used when indexing, or null. - - - - - True if the value of the field is to be stored in the index for return - with search hits. - - - - True if the value of the field is to be indexed, so that it may be - searched on. - - - - True if the value of the field should be tokenized as text prior to - indexing. Un-tokenized fields are indexed as a single word and may not be - Reader-valued. - - - - True if the value of the field is stored and compressed within the index - - - True if the term or terms used to index this field are stored as a term - vector, available from {@link Lucene.Net.Index.IndexReader#GetTermFreqVector(int,String)}. - These methods do not provide access to the original content of the field, - only to terms used to index it. If the original content must be - preserved, use the stored attribute instead. - - - - - - - True if terms are stored as term vector together with their offsets - (start and end position in source text). - - - - True if terms are stored as term vector together with their token positions. - - - True if the value of the field is stored as binary - - - True if norms are omitted for this indexed field - - - Expert: - - If set, omit normalization factors associated with this indexed field. - This effectively disables indexing boosts and length normalization for this field. - - - - Renamed to {@link AbstractField#setOmitTermFreqAndPositions} - - - - Renamed to {@link AbstractField#getOmitTermFreqAndPositions} - - - - Indicates whether a Field is Lazy or not. The semantics of Lazy loading are such that if a Field is lazily loaded, retrieving - its values via {@link #StringValue()} or {@link #BinaryValue()} is only valid as long as the {@link Lucene.Net.Index.IndexReader} that - retrieved the {@link Document} is still open. - - true if this field can be loaded lazily - - - Returns offset into byte[] segment that is used as value, if Field is not binary - returned value is undefined - - index of the first character in byte[] segment that represents this Field value - - - - Returns length of byte[] segment that is used as value, if Field is not binary - returned value is undefined - - length of byte[] segment that represents this Field value - - - - Return the raw byte[] for the binary field. Note that - you must also call {@link #getBinaryLength} and {@link - #getBinaryOffset} to know which range of bytes in this - returned array belong to the field. - - reference to the Field value as byte[]. - - - - Return the raw byte[] for the binary field. Note that - you must also call {@link #getBinaryLength} and {@link - #getBinaryOffset} to know which range of bytes in this - returned array belong to the field.

- About reuse: if you pass in the result byte[] and it is - used, likely the underlying implementation will hold - onto this byte[] and return it in future calls to - {@link #BinaryValue()} or {@link #GetBinaryValue()}. - So if you subsequently re-use the same byte[] elsewhere - it will alter this Fieldable's value. -

- User defined buffer that will be used if - possible. If this is null or not large enough, a new - buffer is allocated - - reference to the Field value as byte[]. - -
- - Transforms the token stream as per the Porter stemming algorithm. - Note: the input to the stemming filter must already be in lower case, - so you will need to use LowerCaseFilter or LowerCaseTokenizer farther - down the Tokenizer chain in order for this to work properly! -

- To use this filter with other analyzers, you'll want to write an - Analyzer class that sets up the TokenStream chain as you want it. - To use this with LowerCaseTokenizer, for example, you'd write an - analyzer like this: -

-

-            class MyAnalyzer extends Analyzer {
-            public final TokenStream tokenStream(String fieldName, Reader reader) {
-            return new PorterStemFilter(new LowerCaseTokenizer(reader));
-            }
-            }
-            
-
-
- - Simplistic {@link CharFilter} that applies the mappings - contained in a {@link NormalizeCharMap} to the character - stream, and corrects the resulting changes to the - offsets. - - - - Base utility class for implementing a {@link CharFilter}. - You subclass this, and then record mappings by calling - {@link #addOffCorrectMap}, and then invoke the correct - method to correct an offset. -

NOTE: This class is not particularly efficient. - For example, a new class instance is created for every - call to {@link #addOffCorrectMap}, which is then appended - to a private list. -

-
- - Subclasses of CharFilter can be chained to filter CharStream. - They can be used as {@link java.io.Reader} with additional offset - correction. {@link Tokenizer}s will automatically use {@link #CorrectOffset} - if a CharFilter/CharStream subclass is used. - - $Id$ - - - - CharStream adds {@link #CorrectOffset} - functionality over {@link Reader}. All Tokenizers accept a - CharStream instead of {@link Reader} as input, which enables - arbitrary character based filtering before tokenization. - The {@link #CorrectOffset} method fixes offsets to account for - removal or insertion of characters, so that the offsets - reported in the tokens match the character offsets of the - original Reader. - - - - Called by CharFilter(s) and Tokenizer to correct token offset. - - - offset as seen in the output - - corrected offset based on the input - - - - Subclass may want to override to correct the current offset. - - - current offset - - corrected offset - - - - Chains the corrected offset through the input - CharFilter. - - - - Retrieve the corrected offset. Note that this method - is slow if you correct positions far before the most - recently added position, as it's a simple linear - search backwards through all offset corrections added - by {@link #addOffCorrectMap}. - - - - Default constructor that takes a {@link CharStream}. - - - Easy-use constructor that takes a {@link Reader}. - - - This class can be used if the token attributes of a TokenStream - are intended to be consumed more than once. It caches - all token attribute states locally in a List. -

CachingTokenFilter implements the optional method - {@link TokenStream#Reset()}, which repositions the - stream to the first Token. -

-
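A sketch of consuming the same stream twice through the cache (the analyzer, field name, and text are placeholders); the first full pass fills the cache, and Reset() rewinds to the first token:

    TokenStream input = analyzer.tokenStream("body", new StringReader(text));
    CachingTokenFilter cached = new CachingTokenFilter(input);

    while (cached.incrementToken()) { /* first consumer sees each token */ }
    cached.reset();                   // reposition to the first cached token
    while (cached.incrementToken()) { /* second consumer replays the stream */ }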
- - Will be removed in Lucene 3.0. This method is final, as it should - not be overridden. Delegates to the backwards compatibility layer. - - - Will be removed in Lucene 3.0. This method is final, as it should - not be overridden. Delegates to the backwards compatibility layer. - - - Java's builtin ThreadLocal has a serious flaw: - it can take an arbitrarily long amount of time to - dereference the things you had stored in it, even once the - ThreadLocal instance itself is no longer referenced. - This is because there is a single, master map stored for - each thread, which all ThreadLocals share, and that - master map only periodically purges "stale" entries. - - While not technically a memory leak, because eventually - the memory will be reclaimed, it can take a long time - and you can easily hit OutOfMemoryError because from the - GC's standpoint the stale entries are not reclaimable. - - This class works around that, by only enrolling - WeakReference values into the ThreadLocal, and - separately holding a hard reference to each stored - value. When you call {@link #close}, these hard - references are cleared and then GC is freely able to - reclaim the space used by objects stored in it. - - - A variety of highly efficient bit twiddling routines. - - $Id$ - - - Returns the number of bits set in the long - - Returns the number of set bits in an array of longs. - - Returns the popcount or cardinality of the two sets after an intersection. - Neither array is modified. - - - Returns the popcount or cardinality of the union of two sets. - Neither array is modified. - - - Returns the popcount or cardinality of A & ~B - Neither array is modified. - - - table of number of trailing zeros in a byte - - Returns number of trailing zeros in a 64 bit long value. - - Returns number of trailing zeros in a 32 bit int value. - - returns 0 based index of first set bit - (only works for x!=0) -
This is an alternate implementation of ntz() -
-
- - returns 0 based index of first set bit -
This is an alternate implementation of ntz() -
-
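For illustration, the classic forms of the ntz and power-of-two tricks documented here look like the sketch below; this is the textbook version, not necessarily the exact implementation used in this class:

    // True for 0, 1, 2, 4, 8, ...: clearing the lowest set bit leaves nothing.
    static boolean isPowerOfTwo(long v) {
        return (v & (v - 1)) == 0;
    }

    // 0-based index of the lowest set bit; only valid when x != 0.
    static int ntz(long x) {
        int n = 0;
        while ((x & 1) == 0) {   // shift right until the lowest bit is set
            x >>>= 1;
            n++;
        }
        return n;
    }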
- - returns true if v is a power of two or zero - - returns true if v is a power of two or zero - - returns the next highest power of two, or the current value if it's already a power of two or zero - - returns the next highest power of two, or the current value if it's already a power of two or zero - - Simple standalone server that must be running when you - use {@link VerifyingLockFactory}. This server simply - verifies at most one process holds the lock at a time. - Run without any args to see usage. - - - - - - - - - Base class for file system based locking implementation. - - - Directory for the lock files. - - - Set the lock directory. This method can only be called - once to initialize the lock directory. It is used by {@link FSDirectory} - to set the lock directory to itself. - Subclasses can also use this method to set the directory - in the constructor. - - - - Retrieve the lock directory. - - - Writes bytes through to a primary IndexOutput, computing - checksum. Note that you cannot use seek(). - - - - Starts but does not complete the commit of this file (= - writing of the final checksum at the end). After this - is called you must call {@link #finishCommit} and then - {@link #close} to complete the commit. - - - - See {@link #prepareCommit} - - - Expert: returns a comparator for sorting ScoreDocs. -

- Created: Apr 21, 2004 3:49:28 PM - - This class will be used as part of a key to a FieldCache value. You must - implement hashCode and equals to avoid an explosion in RAM usage if you use - instances that are not the same instance. If you are searching using the - Remote contrib, the same instance of this class on the client will be a new - instance on every call to the server, so hashCode/equals is very important in - that situation. - -

- $Id: SortComparatorSource.java 747019 2009-02-23 13:59:50Z - mikemccand $ - - 1.4 - - Please use {@link FieldComparatorSource} instead. - -
- - Creates a comparator for the field in the given index. - Index to create comparator for. - - Name of the field to create comparator for. - - Comparator of ScoreDoc objects. - - IOException If an error occurs reading the index. - - - A Query that matches documents containing a particular sequence of terms. - A PhraseQuery is built by QueryParser for input like "new york". - -

This query may be combined with other terms or queries with a {@link BooleanQuery}. -

-
- - Constructs an empty phrase query. - - - Sets the number of other words permitted between words in query phrase. - If zero, then this is an exact phrase search. For larger values this works - like a WITHIN or NEAR operator. -

The slop is in fact an edit-distance, where the units correspond to - moves of terms in the query phrase out of position. For example, to switch - the order of two words requires two moves (the first move places the words - atop one another), so to permit re-orderings of phrases, the slop must be - at least two. -

More exact matches are scored higher than sloppier matches, thus search - results are sorted by exactness. -

The slop is zero by default, requiring exact matches. -

-
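Following the edit-distance description above, a slop of two is enough to also match the two words in reversed order; the field and terms here are illustrative:

    PhraseQuery pq = new PhraseQuery();
    pq.add(new Term("body", "new"));
    pq.add(new Term("body", "york"));
    pq.setSlop(2);   // 0 = exact phrase; 2 also permits "york new"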
- - Returns the slop. See setSlop(). - - - Adds a term to the end of the query phrase. - The relative position of the term is the one immediately after the last term added. - - - - Adds a term to the end of the query phrase. - The relative position of the term within the phrase is specified explicitly. - This allows e.g. phrases with more than one term at the same position - or phrases with gaps (e.g. in connection with stopwords). - - - - - - - - - Returns the set of terms in this phrase. - - - Returns the relative positions of terms in this phrase. - - - - - - - Prints a user-readable version of this query. - - - Returns true iff o is equal to this. - - - Returns a hash code value for this object. - - - Implements search over a single IndexReader. - -

Applications usually need only call the inherited {@link #Search(Query)} - or {@link #Search(Query,Filter)} methods. For performance reasons it is - recommended to open only one IndexSearcher and use it for all of your searches. - -

Note that you can only access Hits from an IndexSearcher as long as it is - not yet closed, otherwise an IOException will be thrown. - -

NOTE: {@link - IndexSearcher} instances are completely - thread safe, meaning multiple threads can call any of its - methods, concurrently. If your application requires - external synchronization, you should not - synchronize on the IndexSearcher instance; - use your own (non-Lucene) objects instead.

-

-
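In line with the thread-safety note above, the usual pattern is one shared read-only searcher (the index path is a placeholder):

    Directory dir = FSDirectory.open(new File("/path/to/index"));
    IndexSearcher searcher = new IndexSearcher(dir, true);  // readOnly = true
    // Share this one instance across threads; close it once when done.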
- - An abstract base class for search implementations. Implements the main search - methods. - -

- Note that you can only access hits from a Searcher as long as it is not yet - closed, otherwise an IOException will be thrown. -

-
- - Returns the documents matching query. - BooleanQuery.TooManyClauses - Hits will be removed in Lucene 3.0. Use - {@link #Search(Query, Filter, int)} instead. - - - - Returns the documents matching query and - filter. - - BooleanQuery.TooManyClauses - Hits will be removed in Lucene 3.0. Use - {@link #Search(Query, Filter, int)} instead. - - - - Returns documents matching query sorted by - sort. - - BooleanQuery.TooManyClauses - Hits will be removed in Lucene 3.0. Use - {@link #Search(Query, Filter, int, Sort)} instead. - - - - Returns documents matching query and filter, - sorted by sort. - - BooleanQuery.TooManyClauses - Hits will be removed in Lucene 3.0. Use - {@link #Search(Query, Filter, int, Sort)} instead. - - - - Search implementation with arbitrary sorting. Finds - the top n hits for query, applying - filter if non-null, and sorting the hits by the criteria in - sort. - -

NOTE: this does not compute scores by default; use - {@link IndexSearcher#setDefaultFieldSortScoring} to enable scoring. - -

- BooleanQuery.TooManyClauses -
- - Lower-level search API. - -

{@link HitCollector#Collect(int,float)} is called for every matching - document. - -

Applications should only use this if they need all of the - matching documents. The high-level search API ({@link - Searcher#Search(Query)}) is usually more efficient, as it skips - non-high-scoring hits. -

Note: The score passed to this method is a raw score. - In other words, the score will not necessarily be a float whose value is - between 0 and 1. -

- BooleanQuery.TooManyClauses - use {@link #Search(Query, Collector)} instead. - -
- - Lower-level search API. - -

{@link Collector#Collect(int)} is called for every matching document. - -

Applications should only use this if they need all of the matching - documents. The high-level search API ({@link Searcher#Search(Query, int)} - ) is usually more efficient, as it skips non-high-scoring hits. -

Note: The score passed to this method is a raw score. - In other words, the score will not necessarily be a float whose value is - between 0 and 1. -

- BooleanQuery.TooManyClauses -
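A minimal Collector for the API above might look like this sketch; with segment-based searching the docID passed to Collect is relative to the current reader, so the docBase has to be added back:

    searcher.search(query, new Collector() {
        private int docBase;
        public void setScorer(Scorer scorer) { /* raw score unused here */ }
        public void collect(int doc) {
            System.out.println("match: " + (docBase + doc)); // absolute docID
        }
        public void setNextReader(IndexReader reader, int docBase) {
            this.docBase = docBase;
        }
        public boolean acceptsDocsOutOfOrder() { return true; }
    });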
- - Lower-level search API. - -

{@link HitCollector#Collect(int,float)} is called for every matching - document. -
HitCollector-based access to remote indexes is discouraged. - -

Applications should only use this if they need all of the - matching documents. The high-level search API ({@link - Searcher#Search(Query, Filter, int)}) is usually more efficient, as it skips - non-high-scoring hits. - -

- to match documents - - if non-null, used to permit documents to be collected. - - to receive hits - - BooleanQuery.TooManyClauses - use {@link #Search(Query, Filter, Collector)} instead. - -
- - Lower-level search API. - -

{@link Collector#Collect(int)} is called for every matching - document. -
Collector-based access to remote indexes is discouraged. - -

Applications should only use this if they need all of the - matching documents. The high-level search API ({@link - Searcher#Search(Query, Filter, int)}) is usually more efficient, as it skips - non-high-scoring hits. - -

- to match documents - - if non-null, used to permit documents to be collected. - - to receive hits - - BooleanQuery.TooManyClauses -
- - Finds the top n - hits for query, applying filter if non-null. - - - BooleanQuery.TooManyClauses - - - Finds the top n - hits for query. - - - BooleanQuery.TooManyClauses - - - Returns an Explanation that describes how doc scored against - query. - -

This is intended to be used in developing Similarity implementations, - and, for good performance, should not be displayed with every hit. - Computing an explanation is as expensive as executing the query over the - entire index. -

-
- - The Similarity implementation used by this searcher. - - - Expert: Set the Similarity implementation used by this Searcher. - - - - - - - Expert: Return the Similarity implementation used by this Searcher. - -

This defaults to the current value of {@link Similarity#GetDefault()}. -

-
- - creates a weight for query - new weight - - - - use {@link #Search(Weight, Filter, Collector)} instead. - - - - Creates a searcher searching the index in the named directory. - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - Use {@link #IndexSearcher(Directory, boolean)} instead - - - - Creates a searcher searching the index in the named - directory. You should pass readOnly=true, since it - gives much better concurrent performance, unless you - intend to do write operations (delete documents or - change norms) with the underlying IndexReader. - - directory where IndexReader will be opened - - if true, the underlying IndexReader - will be opened readOnly - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - Use {@link #IndexSearcher(Directory, boolean)} instead - - - - Creates a searcher searching the index in the provided directory. - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - Use {@link #IndexSearcher(Directory, boolean)} instead - - - - Creates a searcher searching the index in the named - directory. You should pass readOnly=true, since it - gives much better concurrent performance, unless you - intend to do write operations (delete documents or - change norms) with the underlying IndexReader. - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - directory where IndexReader will be opened - - if true, the underlying IndexReader - will be opened readOnly - - - - Creates a searcher searching the provided index. - - - Return the {@link IndexReader} this searches. - - - Note that the underlying IndexReader is not closed, if - IndexSearcher was constructed with IndexSearcher(IndexReader r). - If the IndexReader was supplied implicitly by specifying a directory, then - the IndexReader gets closed. - - - - Just like {@link #Search(Weight, Filter, int, Sort)}, but you choose - whether or not the fields in the returned {@link FieldDoc} instances - should be set by specifying fillFields.
- -

- NOTE: this does not compute scores by default. If you need scores, create - a {@link TopFieldCollector} instance by calling - {@link TopFieldCollector#create} and then pass that to - {@link #Search(Weight, Filter, Collector)}. -

-

-
- - By default, no scores are computed when sorting by field (using - {@link #Search(Query,Filter,int,Sort)}). You can change that, per - IndexSearcher instance, by calling this method. Note that this will incur - a CPU cost. - - - If true, then scores are returned for every matching document - in {@link TopFieldDocs}. - - - If true, then the max score for all matching docs is computed. - - - - Expert: obtains the ordinal of the field value from the default Lucene - {@link Lucene.Net.Search.FieldCache Fieldcache} using getStringIndex(). -

- The native lucene index order is used to assign an ordinal value for each field value. -

- Field values (terms) are lexicographically ordered by unicode value, and numbered starting at 1. -

- Example: -
If there were only three field values: "apple","banana","pear" -
then ord("apple")=1, ord("banana")=2, ord("pear")=3 -

- WARNING: - ord() depends on the position in an index and can thus change - when other documents are inserted or deleted, - or if a MultiSearcher is used. - -

- WARNING: The status of the Search.Function package is experimental. - The APIs introduced here might change in the future and will not be - supported anymore in such a case. - -

NOTE: with the switch in 2.9 to segment-based - searching, if {@link #getValues} is invoked with a - composite (multi-segment) reader, this can easily cause - double RAM usage for the values in the FieldCache. It's - best to switch your application to pass only atomic - (single segment) readers to this API. Alternatively, for - a short-term fix, you could wrap your ValueSource using - {@link MultiValueSource}, which costs more CPU per lookup - but will not consume double the FieldCache RAM.

-

-
- - Constructor for a certain field. - field whose values order is used. - - - - A range query that returns a constant score equal to its boost for - all documents in the exclusive range of terms. - -

It does not have an upper bound on the number of clauses covered in the range. - -

This query matches the documents looking for terms that fall into the - supplied range according to {@link String#compareTo(String)}. It is not intended - for numerical ranges, use {@link NumericRangeQuery} instead. - -

This query is hardwired to {@link MultiTermQuery#CONSTANT_SCORE_AUTO_REWRITE_DEFAULT}. - If you want to change this, use {@link TermRangeQuery} instead. - -

- Use {@link TermRangeQuery} for term ranges or - {@link NumericRangeQuery} for numeric ranges instead. - This class will be removed in Lucene 3.0. - - $Id: ConstantScoreRangeQuery.java 797694 2009-07-25 00:03:33Z mikemccand $ - -
- - A Query that matches documents within an exclusive range of terms. - -

This query matches the documents looking for terms that fall into the - supplied range according to {@link String#compareTo(String)}. It is not intended - for numerical ranges, use {@link NumericRangeQuery} instead. - -

This query uses the {@link - MultiTermQuery#CONSTANT_SCORE_AUTO_REWRITE_DEFAULT} - rewrite method. -

- 2.9 - -
- - Constructs a query selecting all terms greater/equal than lowerTerm - but less/equal than upperTerm. - -

- If an endpoint is null, it is said - to be "open". Either or both endpoints may be open. Open endpoints may not - be exclusive (you can't select all but the first or last term without - explicitly specifying the term to exclude.) - -

- The field that holds both lower and upper terms. - - The term text at the lower end of the range - - The term text at the upper end of the range - - If true, the lowerTerm is - included in the range. - - If true, the upperTerm is - included in the range. - -
- - Constructs a query selecting all terms greater/equal than - lowerTerm but less/equal than upperTerm. -

- If an endpoint is null, it is said - to be "open". Either or both endpoints may be open. Open endpoints may not - be exclusive (you can't select all but the first or last term without - explicitly specifying the term to exclude.) -

- If collator is not null, it will be used to decide whether - index terms are within the given range, rather than using the Unicode code - point order in which index terms are stored. -

- WARNING: Using this constructor and supplying a non-null - value in the collator parameter will cause every single - index Term in the Field referenced by lowerTerm and/or upperTerm to be - examined. Depending on the number of index Terms in this Field, the - operation could be very slow. - -

- The Term text at the lower end of the range - - The Term text at the upper end of the range - - If true, the lowerTerm is - included in the range. - - If true, the upperTerm is - included in the range. - - The collator to use to collate index Terms, to determine - their membership in the range bounded by lowerTerm and - upperTerm. - -
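For example, an inclusive date-style range over an untokenized field (field name and bounds are illustrative):

    // Matches terms t with "20020101" <= t <= "20030101" in field "date".
    TermRangeQuery q = new TermRangeQuery("date", "20020101", "20030101",
                                          true, true);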
- - Returns the field name for this query - - - Returns the lower value of this range query - - - Returns the upper value of this range query - - - Returns true if the lower endpoint is inclusive - - - Returns true if the upper endpoint is inclusive - - - Returns the collator used to determine range inclusion, if any. - - - Prints a user-readable version of this query. - - - Changes of mode are not supported by this class (fixed to constant score rewrite mode) - - - A clause in a BooleanQuery. - - - The query whose matching documents are combined by the boolean query. - - - Constructs a BooleanClause. - - - Returns true if o is equal to this. - - - Returns a hash code value for this object. - - - Specifies how clauses are to occur in matching documents. - - - A serializable Enum class. - - - Resolves the deserialized instance to the local reference for accurate - equals() and == comparisons. - - - a reference to Parameter as resolved in the local VM - - ObjectStreamException - - - Use this operator for clauses that must appear in the matching documents. - - - Use this operator for clauses that should appear in the - matching documents. For a BooleanQuery with no MUST - clauses one or more SHOULD clauses must match a document - for the BooleanQuery to match. - - - - - - Use this operator for clauses that must not appear in the matching documents. - Note that it is not possible to search for queries that only consist - of a MUST_NOT clause. - - - This exception is thrown when parse errors are encountered. - You can explicitly create objects of this exception type by - calling the method generateParseException in the generated - parser. - - You can modify this class to customize your error reporting - mechanisms so long as you retain the public fields. - - - - This constructor is used by the method "generateParseException" - in the generated parser. Calling this constructor generates - a new object of this type with the fields "currentToken", - "expectedTokenSequences", and "tokenImage" set. The boolean - flag "specialConstructor" is also set to true to indicate that - this constructor was used to create this object. - This constructor calls its super class with the empty string - to force the "toString" method of parent class "Throwable" to - print the error message in the form: - ParseException: <result of getMessage> - - - - The following constructors are for use by you for whatever - purpose you can think of. Constructing the exception in this - manner makes the exception behave in the normal way - i.e., as - documented in the class "Throwable". The fields "errorToken", - "expectedTokenSequences", and "tokenImage" do not contain - relevant information. The JavaCC generated code does not use - these constructors. - - - - Constructor with message. - - - Constructor with message. - - - This variable determines which constructor was used to create - this object and thereby affects the semantics of the - "getMessage" method (see below). - - - - This is the last token that has been consumed successfully. If - this object has been created due to a parse error, the token - following this token will (therefore) be the first error token. - - - - Each entry in this array is an array of integers. Each array - of integers represents a sequence of tokens (by their ordinal - values) that is expected at this point of the parse. - - - - This is a reference to the "tokenImage" array of the generated - parser within which the parse error occurred. 
This array is - defined in the generated ...Constants interface. - - - - The end of line string for this machine. - - - Used to convert raw characters to their escaped version - when these raw version cannot be used as part of an ASCII - string literal. - - - - This method has the standard behavior when this object has been - created using the standard constructors. Otherwise, it uses - "currentToken" and "expectedTokenSequences" to generate a parse - error message and returns it. If this object has been created - due to a parse error, and you do not catch it (it gets thrown - from the parser), then this method is called during the printing - of the final stack trace, and hence the correct error message - gets displayed. - - - - Call this if the IndexInput passed to {@link #read} - stores terms in the "modified UTF8" (pre LUCENE-510) - format. - - - - Allows you to iterate over the {@link TermPositions} for multiple {@link Term}s as - a single {@link TermPositions}. - - - - - TermPositions provides an interface for enumerating the <document, - frequency, <position>* > tuples for a term.

The document and - frequency are the same as for a TermDocs. The positions portion lists the ordinal - positions of each occurrence of a term in a document. - -

- - -
- - Returns next position in the current document. It is an error to call - this more than {@link #Freq()} times - without calling {@link #Next()}

This is - invalid until {@link #Next()} is called for - the first time. -

-
- - Returns the length of the payload at the current term position. - This is invalid until {@link #NextPosition()} is called for - the first time.
-
- length of the current payload in number of bytes - -
- - Returns the payload data at the current term position. - This is invalid until {@link #NextPosition()} is called for - the first time. - This method must not be called more than once after each call - of {@link #NextPosition()}. However, payloads are loaded lazily, - so if the payload data for the current position is not needed, - this method may not be called at all for performance reasons.
- -
- the array into which the data of this payload is to be - stored, if it is big enough; otherwise, a new byte[] array - is allocated for this purpose. - - the offset in the array into which the data of this payload - is to be stored. - - a byte[] array containing the data of this payload - - IOException -
- - Checks if a payload can be loaded at this position. -

- Payloads can only be loaded once per call to - {@link #NextPosition()}. - -

- true if there is a payload available at this position that can be loaded - -
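Putting the contract above together, a sketch of walking the positions and payloads for a single term (the field and term text are placeholders):

    TermPositions tp = reader.termPositions(new Term("body", "lucene"));
    while (tp.next()) {
        int freq = tp.freq();
        for (int i = 0; i < freq; i++) {      // at most Freq() calls per doc
            int pos = tp.nextPosition();
            if (tp.isPayloadAvailable()) {    // load each payload at most once
                byte[] payload =
                    tp.getPayload(new byte[tp.getPayloadLength()], 0);
            }
        }
    }
    tp.close();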
- - Creates a new MultipleTermPositions instance. - - - - - - Not implemented. - UnsupportedOperationException - - - Not implemented. - UnsupportedOperationException - - - Not implemented. - UnsupportedOperationException - - - Not implemented. - UnsupportedOperationException - - - Not implemented. - UnsupportedOperationException - - - - false - - - - Add a new thread - - - Abort (called after hitting AbortException) - - - Flush a new segment - - - Close doc stores - - - Attempt to free RAM, returning true if any RAM was - freed - - - - Used by DocumentsWriter to maintain per-thread state. - We keep a separate Posting hash and other state for each - thread and then merge postings hashes from all threads - when writing the segment. - - - - This class converts alphabetic, numeric, and symbolic Unicode characters - which are not in the first 127 ASCII characters (the "Basic Latin" Unicode - block) into their ASCII equivalents, if one exists. - - Characters from the following Unicode blocks are converted; however, only - those characters with reasonable ASCII alternatives are converted: - - - - See: http://en.wikipedia.org/wiki/Latin_characters_in_Unicode - - The set of character conversions supported by this class is a superset of - those supported by Lucene's {@link ISOLatin1AccentFilter} which strips - accents from Latin1 characters. For example, 'À' will be replaced by - 'a'. - - - - Converts characters above ASCII to their ASCII equivalents. For example, - accents are removed from accented characters. - - The string to fold - - The number of characters in the input string - - - - Borrowed from Cglib. Allows custom swap so that two arrays can be sorted - at the same time. - - - - Simple lockless and memory barrier free String intern cache that is guaranteed - to return the same String instance as String.intern() does. - - - - Subclasses of StringInterner are required to - return the same single String object for all equal strings. - Depending on the implementation, this may not be - the same object returned as String.intern(). - - This StringInterner base class simply delegates to String.intern(). - - - - Returns a single object instance for each equal string. - - - Returns a single object instance for each equal string. - - - Size of the hash table, should be a power of two. - - Maximum length of each bucket, after which the oldest item inserted is dropped. - - - - Encapsulates sort criteria for returned hits. - -

The fields used to determine sort order must be carefully chosen. - Documents must contain a single term in such a field, - and the value of the term should indicate the document's relative position in - a given sort order. The field must be indexed, but should not be tokenized, - and does not need to be stored (unless you happen to want it back with the - rest of your document data). In other words: - -

document.add (new Field ("byNumber", Integer.toString(x), Field.Store.NO, Field.Index.NOT_ANALYZED));

- - -

Valid Types of Values

- -

There are four possible kinds of term values which may be put into - sorting fields: Integers, Longs, Floats, or Strings. Unless - {@link SortField SortField} objects are specified, the type of value - in the field is determined by parsing the first term in the field. - -

Integer term values should contain only digits and an optional - preceding negative sign. Values must be base 10 and in the range - Integer.MIN_VALUE and Integer.MAX_VALUE inclusive. - Documents which should appear first in the sort - should have low value integers, later documents high values - (i.e. the documents should be numbered 1..n where - 1 is the first and n the last). - -

Long term values should contain only digits and an optional - preceding negative sign. Values must be base 10 and in the range - Long.MIN_VALUE and Long.MAX_VALUE inclusive. - Documents which should appear first in the sort - should have low value integers, later documents high values. - -

Float term values should conform to values accepted by - {@link Float Float.valueOf(String)} (except that NaN - and Infinity are not supported). - Documents which should appear first in the sort - should have low values, later documents high values. - -

String term values can contain any valid String, but should - not be tokenized. The values are sorted according to their - {@link Comparable natural order}. Note that using this type - of term value has higher memory requirements than the other - types. -

Object Reuse

- -

One of these objects can be - used multiple times and the sort order changed between usages. - -

This class is thread safe. - -

Memory Usage

- -

Sorting uses caches of term values maintained by the - internal HitQueue(s). The cache is static and contains an integer - or float array of length IndexReader.maxDoc() for each field - name for which a sort is performed. In other words, the size of the - cache in bytes is: -

4 * IndexReader.maxDoc() * (# of different fields actually used to sort) - -

For String fields, the cache is larger: in addition to the - above array, the value of every term in the field is kept in memory. - If there are many unique terms in the field, this could - be quite large. - -

Note that the size of the cache is not affected by how many - fields are in the index and might be used to sort - only by - the ones actually used to sort a result set. - -

Created: Feb 12, 2004 10:53:57 AM - -

- lucene 1.4 - - $Id: Sort.java 795179 2009-07-17 18:23:30Z mikemccand $ - -
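Tying this together with the "byNumber" snippet above, a hedged usage example:

    Sort sort = new Sort(new SortField("byNumber", SortField.INT));
    TopDocs top = searcher.search(query, null, 10, sort);  // top 10, sorted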
- - Represents sorting by computed relevance. Using this sort criteria returns - the same results as calling - {@link Searcher#Search(Query) Searcher#search()}without a sort criteria, - only with slightly more overhead. - - - - Represents sorting by index order. - - - Sorts by computed relevance. This is the same sort criteria as calling - {@link Searcher#Search(Query) Searcher#search()}without a sort criteria, - only with slightly more overhead. - - - - Sorts by the terms in field then by index order (document - number). The type of value in field is determined - automatically. - - - - - Please specify the type explicitly by - first creating a {@link SortField} and then use {@link - #Sort(SortField)} - - - - Sorts possibly in reverse by the terms in field then by - index order (document number). The type of value in field is - determined automatically. - - - - - Please specify the type explicitly by - first creating a {@link SortField} and then use {@link - #Sort(SortField)} - - - - Sorts in succession by the terms in each field. The type of value in - field is determined automatically. - - - - - Please specify the type explicitly by - first creating {@link SortField}s and then use {@link - #Sort(SortField[])} - - - - Sorts by the criteria in the given SortField. - - - Sorts in succession by the criteria in each SortField. - - - Sets the sort to the terms in field then by index order - (document number). - - Please specify the type explicitly by - first creating a {@link SortField} and then use {@link - #SetSort(SortField)} - - - - Sets the sort to the terms in field possibly in reverse, - then by index order (document number). - - Please specify the type explicitly by - first creating a {@link SortField} and then use {@link - #SetSort(SortField)} - - - - Sets the sort to the terms in each field in succession. - Please specify the type explicitly by - first creating {@link SortField}s and then use {@link - #SetSort(SortField[])} - - - - Sets the sort to the given criteria. - - - Sets the sort to the given criteria in succession. - - - Representation of the sort criteria. - Array of SortField objects used in this sort criteria - - - - Returns true if o is equal to this. - - - Returns a hash code value for this object. - - - Calculate the final score as the average score of all payloads seen. -

- Is thread safe and completely reusable. - - -

-
- - Expert: obtains single byte field values from the - {@link Lucene.Net.Search.FieldCache FieldCache} - using getBytes() and makes those values - available as other numeric types, casting as needed. - -

- WARNING: The status of the Search.Function package is experimental. - The APIs introduced here might change in the future and will not be - supported anymore in such a case. - -

- See FieldCacheSource for requirements - on the field. -

NOTE: with the switch in 2.9 to segment-based - searching, if {@link #getValues} is invoked with a - composite (multi-segment) reader, this can easily cause - double RAM usage for the values in the FieldCache. It's - best to switch your application to pass only atomic - (single segment) readers to this API. Alternatively, for - a short-term fix, you could wrap your ValueSource using - {@link MultiValueSource}, which costs more CPU per lookup - but will not consume double the FieldCache RAM.

- - - -

Create a cached byte field source with default string-to-byte parser. -
- - Create a cached byte field source with a specific string-to-byte parser. - - - Filter caching singleton. It can be used - to save filters locally for reuse. - This class makes it possible to cache Filters even when using RMI, as it - keeps the cache on the searcher side of the RMI connection. - - It can also be used as a persistent storage for any filter as long as the - filter provides a proper hashCode(), as that is used as the key in the cache. - - The cache is periodically cleaned up from a separate thread to ensure the - cache doesn't exceed the maximum size. - - - The default maximum number of Filters in the cache - - - The default frequency of cache cleanup - - - The cache itself - - - Maximum allowed cache size - - - Cache cleaning frequency - - - Cache cleaner that runs in a separate thread - - - Sets up the FilterManager singleton. - - - Sets the max size that cache should reach before it is cleaned up - maximum allowed cache size - - - - Sets the cache cleaning frequency in milliseconds. - cleaning frequency in milliseconds - - - - Returns the cached version of the filter. Allows the caller to pass up - a small filter but this will keep a persistent version around and allow - the caching filter to do its job. - - The input filter - - The cached version of the filter - - - - Holds the filter and the last time the filter was used, to make LRU-based - cache cleaning possible. - TODO: Clean this up when we switch to Java 1.5 - - - - Keeps the cache from getting too big. - If we were using Java 1.5, we could use LinkedHashMap and we would not need this thread - to clean out the cache. - - The SortedSet sortedFilterItems is used only to sort the items from the cache, - so when it's time to clean up we have the TreeSet sort the FilterItems by - timestamp. - - Removes 1.5 * the number of items to make the cache smaller. - For example: - If cache clean size is 10, and the cache is at 15, we would remove (15 - 10) * 1.5 = 7.5, rounded up to 8. - This way we clean the cache a bit more, and avoid having the cache cleaner having to do it frequently. - - - - Describes the input token stream. - - - An integer that describes the kind of this token. This numbering - system is determined by JavaCCParser, and a table of these numbers is - stored in the file ...Constants.java. - - - - The line number of the first character of this Token. - - - The column number of the first character of this Token. - - - The line number of the last character of this Token. - - - The column number of the last character of this Token. - - - The string image of the token. - - - A reference to the next regular (non-special) token from the input - stream. If this is the last token from the input stream, or if the - token manager has not read tokens beyond this one, this field is - set to null. This is true only if this token is also a regular - token. Otherwise, see below for a description of the contents of - this field. - - - - This field is used to access special tokens that occur prior to this - token, but after the immediately preceding regular (non-special) token. - If there are no such special tokens, this field is set to null. - When there is more than one such special token, this field refers - to the last of these special tokens, which in turn refers to the next - previous special token through its specialToken field, and so on - until the first special token (whose specialToken field is null). 
- The next fields of special tokens refer to other special tokens that - immediately follow them (without an intervening regular token). If there - is no such token, this field is null. - - - - An optional attribute value of the Token. - Tokens which are not used as syntactic sugar will often contain - meaningful values that will be used later on by the compiler or - interpreter. This attribute value is often different from the image. - Any subclass of Token that actually wants to return a non-null value can - override this method as appropriate. - - - - No-argument constructor - - - Constructs a new token for the specified Image. - - - Constructs a new token for the specified Image and Kind. - - - Returns the image. - - - Returns a new Token object, by default. However, if you want, you - can create and return subclass objects based on the value of ofKind. - Simply add the cases to the switch for all those special cases. - For example, if you have a subclass of Token called IDToken that - you want to create if ofKind is ID, simply add something like: - - case MyParserConstants.ID : return new IDToken(ofKind, image); - - to the following switch statement. Then you can cast the matchedToken - variable to the appropriate type and use it in your lexical actions. - - - - Token Manager. - - - Token literal values and constants. - Generated by org.javacc.parser.OtherFilesGen#start() - - - - End of File. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - RegularExpression Id. - - - Lexical state. - - - Lexical state. - - - Lexical state. - - - Lexical state. - - - Literal token values. - - - Debug output. - - - Set debug output. - - - Token literal values. - - - Lexer state names. - - - Lex State array. - - - Constructor. - - - Constructor. - - - Reinitialise parser. - - - Reinitialise parser. - - - Switch to specified lex state. - - - Get the next Token. - - - MessageBundle classes extend this class to implement a bundle. - - For Native Language Support (NLS), a system for software internationalization. - - This interface is similar to the NLS class in org.eclipse.osgi.util.NLS. - - The initializeMessages() method resets the values of all static strings; it should - only be called by classes that extend from NLS (see TestMessages.java for - reference). It performs validation of all messages in a bundle at class load - time, and per-message validation at runtime - see NLSTest.java for - usage reference. - - MessageBundle classes may subclass this type. - - - - Initialize a given class with the message bundle keys. Should be called from - a class that extends NLS in a static block at class load time.
- - - Property file that contains the message bundle - - where constants will reside - - - - - - - - - - - - - - - Message Key - - - - - Performs the privileged action. - A value that may represent the result of the action. - - A Term represents a word from text. This is the unit of search. It is - composed of two elements, the text of the word, as a string, and the name of - the field that the text occurred in, an interned string. - Note that terms may represent more than words from text fields, but also - things like dates, email addresses, urls, etc. - - Constructs a Term with the given field and text. -

Note that a null field or null text value results in undefined - behavior for most Lucene APIs that accept a Term parameter. -

-
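- For instance, a minimal sketch (field and text values are illustrative):
-            Term t = new Term("contents", "lucene");
-            // Matches documents whose "contents" field contains the term "lucene".
-            Query q = new TermQuery(t);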
- - Constructs a Term with the given field and empty text. - This serves two purposes: 1) reuse of a Term with the same field. - 2) pattern for a query. - - - - - - - Returns the field of this term, an interned string. The field indicates - the part of a document which this term came from. - - - Returns the text of this term. In the case of words, this is simply the - text of the word. In the case of dates and other types, this is an - encoding of the object as a string. - - - Optimized construction of new Terms by reusing the same field as this Term - - avoids field.intern() overhead - - The text of the new term (field is implicitly same as this Term instance) - - A new Term - - - - Compares two terms, returning a negative integer if this - term belongs before the argument, zero if this term is equal to the - argument, and a positive integer if this term belongs after the argument. - The ordering of terms is first by field, then by text. - - - - Resets the field and text of a Term. - - - Optimized implementation. - - - Overridden by SegmentTermPositions to skip in prox stream. - - - Optimized implementation. - - - Called by super.skipTo(). - - - This is a DocConsumer that gathers all fields under the - same name, and calls per-field consumers to process field - by field. This class doesn't do any "real" work - of its own: it just forwards the fields to a - DocFieldConsumer. - - - - Load the first field and break. -

- See {@link FieldSelectorResult#LOAD_AND_BREAK} -

-
- - Replacement for Java 1.5 Character.valueOf() - Move to Character.valueOf() in 3.0 - - - - Returns a Character instance representing the given char value - - - a char value - - a Character representation of the given char value. - - - - Optimized implementation of a vector of bits. This is more-or-less like - java.util.BitSet, but also includes the following: -
    -
  • a count() method, which efficiently computes the number of one bits;
  • optimized read from and write to disk;
  • inlinable get() method;
  • store and load, as bit set or d-gaps, depending on sparseness;
-
- $Id: BitVector.java 765649 2009-04-16 14:29:26Z mikemccand $ - -
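- A short sketch of the operations listed above (BitVector is an internal
- class; member names follow these docs, and the file name is illustrative):
-            BitVector bits = new BitVector(reader.MaxDoc()); // one bit per doc
-            bits.Set(42);                  // set bit 42 to one
-            int ones = bits.Count();       // efficiently computed and cached
-            bits.Write(dir, "docs.bits");  // saved as bit set or d-gaps, by sparseness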
- - Constructs a vector capable of holding n bits. - - - Sets the value of bit to one. - - - Sets the value of bit to true, and - returns true if bit was already set - - - - Sets the value of bit to zero. - - - Returns true if bit is one and - false if it is zero. - - - - Returns the number of bits in this vector. This is also one greater than - the number of the largest valid bit number. - - - - Returns the total number of one bits in this vector. This is efficiently - computed and cached, so that, if the vector is not changed, no - recomputation is done for repeated calls. - - - - Writes this vector to the file name in Directory - d, in a format that can be read by the constructor {@link - #BitVector(Directory, String)}. - - - - Write as a bit set - - - Write as a d-gaps list - - - Indicates if the bit vector is sparse and should be saved as a d-gaps list, or dense, and should be saved as a bit set. - - - Constructs a bit vector from the file name in Directory - d, as written by the {@link #write} method. - - - - Read as a bit set - - - read as a d-gaps list - - - Retrieve a subset of this BitVector. - - - starting index, inclusive - - ending index, exclusive - - subset - - - - - Represents hits returned by {@link Searcher#search(Query,Filter,int,Sort)}. - - - - The fields which were used to sort results by. - - - Creates one of these objects. - Total number of hits for the query. - - The top hits for the query. - - The sort criteria used to find the top hits. - - The maximum score encountered. - - - - A {@link Collector} that sorts by {@link SortField} using - {@link FieldComparator}s. -

- See the {@link #create(Lucene.Net.Search.Sort, int, boolean, boolean, boolean, boolean)} method - for instantiating a TopFieldCollector. - -

NOTE: This API is experimental and might change in - incompatible ways in the next release.

-

-
- - A base class for all collectors that return a {@link TopDocs} output. This - collector allows easy extension by providing a single constructor which - accepts a {@link PriorityQueue} as well as protected members for that - priority queue and a counter of the number of total hits.
- Extending classes can override {@link #TopDocs(int, int)} and - {@link #GetTotalHits()} in order to provide their own implementation. -
-
- - The priority queue which holds the top documents. Note that different - implementations of PriorityQueue give different meaning to 'top documents'. - HitQueue for example aggregates the top scoring documents, while other PQ - implementations may hold documents sorted by other criteria. - - - The total number of documents that the collector encountered. - - - Populates the results array with the ScoreDoc instances. This can be - overridden in case a different ScoreDoc type should be returned. - - - - Returns a {@link TopDocs} instance containing the given results. If - results is null it means there are no results to return, - either because there were 0 calls to collect() or because the arguments to - topDocs were invalid. - - - - The total number of documents that matched this query. - - - Returns the top docs that were collected by this collector. - - - Returns the documents in the range [start .. pq.size()) that were collected - by this collector. Note that if start >= pq.size(), an empty TopDocs is - returned.
- This method is convenient to call if the application always asks for the - last results, starting from the last 'page'.
- NOTE: you cannot call this method more than once for each search - execution. If you need to call it more than once, passing each time a - different start, you should call {@link #TopDocs()} and work - with the returned {@link TopDocs} object, which will contain all the - results this search execution collected. -
-
- - Returns the documents in the range [start .. start+howMany) that were - collected by this collector. Note that if start >= pq.size(), an empty - TopDocs is returned, and if pq.size() - start < howMany, then only the - available documents in [start .. pq.size()) are returned.
- This method is useful to call in case pagination of search results is - allowed by the search application; it also attempts to optimize the - memory used by allocating only as much as requested by howMany.
- NOTE: you cannot call this method more than once for each search - execution. If you need to call it more than once, passing each time a - different range, you should call {@link #TopDocs()} and work with the - returned {@link TopDocs} object, which will contain all the results this - search execution collected. -
-
- - Creates a new {@link TopFieldCollector} from the given - arguments. - -

NOTE: The instances returned by this method - pre-allocate a full array of length - numHits. - -

- the sort criteria (SortFields). - - the number of results to collect. - - specifies whether the actual field values should be returned on - the results (FieldDoc). - - specifies whether document scores should be tracked and set on the - results. Note that if set to false, then the results' scores will - be set to Float.NaN. Setting this to true affects performance, as - it incurs the score computation on each competitive result. - Therefore if document scores are not required by the application, - it is recommended to set it to false. - - specifies whether the query's maxScore should be tracked and set - on the resulting {@link TopDocs}. Note that if set to false, - {@link TopDocs#GetMaxScore()} returns Float.NaN. Setting this to - true affects performance as it incurs the score computation on - each result. Also, setting this true automatically sets - trackDocScores to true as well. - - specifies whether documents are scored in doc Id order or not by - the given {@link Scorer} in {@link #SetScorer(Scorer)}. - - a {@link TopFieldCollector} instance which will sort the results by - the sort criteria. - - IOException -
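- A hedged usage sketch of this factory (argument values are illustrative;
- the method name follows the {@link #create} reference above):
-            Sort sort = new Sort(new SortField("price", SortField.INT));
-            TopFieldCollector collector = TopFieldCollector.create(
-                sort, 10,     // top 10 hits
-                true,         // fillFields: return field values on results
-                false, false, // don't track doc scores or max score
-                true);        // docs are scored in docID order
-            searcher.Search(query, collector);
-            TopDocs results = collector.TopDocs();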
- - Abstract base class for sorting hits returned by a Query. - -

- This class should only be used if the other SortField types (SCORE, DOC, - STRING, INT, FLOAT) do not provide an adequate sorting. It maintains an - internal cache of values which could be quite large. The cache is an array of - Comparable, one for each document in the index. There is a distinct - Comparable for each unique term in the field - if some documents have the - same term in the field, the cache array will have entries which reference the - same Comparable. - - This class will be used as part of a key to a FieldCache value. You must - implement hashCode and equals to avoid an explosion in RAM usage if you use - instances that are not the same instance. If you are searching using the - Remote contrib, the same instance of this class on the client will be a new - instance on every call to the server, so hashCode/equals is very important in - that situation. - -

- Created: Apr 21, 2004 5:08:38 PM - - -

- $Id: SortComparator.java 800119 2009-08-02 17:59:21Z markrmiller $ - - 1.4 - - Please use {@link FieldComparatorSource} instead. - -
- - Returns an object which, when sorted according to natural order, - will order the Term values in the correct order. -

For example, if the Terms contained integer values, this method - would return new Integer(termtext). Note that this - might not always be the most efficient implementation - for this - particular example, a better implementation might be to make a - ScoreDocLookupComparator that uses an internal lookup table of int. -

- The textual value of the term. - - An object representing termtext that sorts according to the natural order of termtext. - - - - - -
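- For example, a sketch of overriding this hook (the exact signature of the
- deprecated SortComparator override point is assumed here):
-            class NumericSortComparator : SortComparator {
-                // Order terms by their integer value instead of lexicographically.
-                protected override System.IComparable GetComparable(string termtext) {
-                    return System.Int32.Parse(termtext);
-                }
-            }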
- - Compares two ScoreDoc objects and returns a result indicating their - sort order. - - First ScoreDoc - - Second ScoreDoc - - a negative integer if i should come before j
- a positive integer if i should come after j
- 0 if they are equal -
- - -
- - Returns the value used to sort the given document. The - object returned must implement the java.io.Serializable - interface. This is used by multisearchers to determine how - to collate results from their searchers. - - - - Document - - Serializable object - - - - Returns the type of sort. Should return SortField.SCORE, - SortField.DOC, SortField.STRING, - SortField.INTEGER, SortField.FLOAT or - SortField.CUSTOM. It is not valid to return - SortField.AUTO. - This is used by multisearchers to determine how to collate results - from their searchers. - - One of the constants in SortField. - - - - - - A {@link Collector} implementation which wraps another - {@link Collector} and makes sure only documents with - scores > 0 are collected. - - - - A query that matches all documents. - - - - - Field used for normalization factor (document boost). Null if nothing. - - - - use {@link #DocID()} instead. - - - - use {@link #NextDoc()} instead. - - - - use {@link #Advance(int)} instead. - - - - A ranked list of documents, used to hold search results. -

- Caution: Iterate only over the hits needed. Iterating over all hits is - generally not desirable and may be the source of performance issues. If you - need to iterate over many or all hits, consider using the search method that - takes a {@link HitCollector}. -

-

- Note: Deleting matching documents concurrently with traversing the - hits might, when deleting hits that were not yet retrieved, decrease - {@link #Length()}. In such a case, - {@link java.util.ConcurrentModificationException - ConcurrentModificationException} is thrown when accessing hit n - > current_{@link #Length()} (but n < {@link #Length()} - _at_start). -

- see {@link Searcher#Search(Query, int)}, - {@link Searcher#Search(Query, Filter, int)} and - {@link Searcher#Search(Query, Filter, int, Sort)}:
- -
-            TopDocs topDocs = searcher.Search(query, numHits);
-            ScoreDoc[] hits = topDocs.scoreDocs;
-            for (int i = 0; i < hits.Length; i++) {
-                int docId = hits[i].doc;
-                Document d = searcher.Doc(docId);
-                // do something with current hit
-                ...
-            }
-            
-
- - Tries to add new documents to hitDocs. - Ensures that the hit numbered min has been retrieved. - - - - Returns the total number of hits available in this set. - - - Returns the stored fields of the nth document in this set. -

Documents are cached, so that repeated requests for the same element may - return the same Document object. -

- CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error -
- - Returns the score for the nth document in this set. - - - Returns the id for the nth document in this set. - Note that ids may change when the index changes, so you cannot - rely on the id to be stable. - - - - Returns a {@link HitIterator} to navigate the Hits. Each item returned - from {@link Iterator#next()} is a {@link Hit}. -

- Caution: Iterate only over the hits needed. Iterating over all - hits is generally not desirable and may be the source of - performance issues. If you need to iterate over many or all hits, consider - using a search method that takes a {@link HitCollector}. -

-

-
- - An iterator over {@link Hits} that provides lazy fetching of each document. - {@link Hits#Iterator()} returns an instance of this class. Calls to {@link #next()} - return a {@link Hit} instance. - - Use {@link TopScoreDocCollector} and {@link TopDocs} instead. Hits will be removed in Lucene 3.0. - - - Constructed from {@link Hits#Iterator()}. - - - true if current hit is less than the total number of {@link Hits}. - - - Unsupported operation. - - UnsupportedOperationException - - Returns the total number of hits. - - Returns a {@link Hit} instance representing the next hit in {@link Hits}. - - Next {@link Hit}. - - - Wrapper for {@link HitCollector} implementations, which simply re-bases the - incoming docID before calling {@link HitCollector#collect}. - - Please migrate custom HitCollectors to the new {@link Collector} - class. This class will be removed when {@link HitCollector} is - removed. - - - Wrapper used by {@link HitIterator} to provide a lazily loaded hit - from {@link Hits}. - - Use {@link TopScoreDocCollector} and {@link TopDocs} instead. Hits will be removed in Lucene 3.0. - - - Constructed from {@link HitIterator} - Hits returned from a search - - Hit index in Hits - - - Returns document for this hit. - - - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - Returns score for this hit. - - - - - Returns id for this hit. - - - - - Returns the boost factor for this hit on any field of the underlying document. - - - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - Returns the string value of the field with the given name if any exists in - this document, or null. If multiple fields exist with this name, this - method returns the first value added. If only binary fields with this name - exist, returns null. - - - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - Prints the parameters to be used to discover the promised result. - - Expert: A hit queue for sorting hits by terms in more than one field. - Uses FieldCache.DEFAULT for maintaining internal term lookup tables. - -

Created: Dec 8, 2003 12:56:03 PM - -

- lucene 1.4 - - $Id: FieldSortedHitQueue.java 803676 2009-08-12 19:31:38Z hossman $ - - - - - - see {@link FieldValueHitQueue} - -
- - Creates a hit queue sorted by the given list of fields. - Index to use. - - Fieldable names, in priority order (highest priority first). Cannot be null or empty. - - The number of hits to retain. Must be greater than zero. - - IOException - - - Stores a comparator corresponding to each field being sorted by - - - Stores the sort criteria being used. - - - Stores the maximum score value encountered, needed for normalizing. - - - returns the maximum score encountered by elements inserted via insert() - - - Returns whether a is less relevant than b. - ScoreDoc - - ScoreDoc - - true if document a should be sorted after document b. - - - - Given a FieldDoc object, stores the values used - to sort the given document. These values are not the raw - values out of the index, but the internal representation - of them. This is so the given search hit can be collated - by a MultiSearcher with other search hits. - The FieldDoc to store sort values into. - - The same FieldDoc passed in. - - - - - - Returns the SortFields being used by this hit queue. - - - Internal cache of comparators. Similar to FieldCache, only - caches comparators instead of term values. - - - - Returns a comparator for sorting hits according to a field containing bytes. - Index to use. - - Fieldable containing byte values. - - Comparator for sorting hits. - - IOException If an error occurs reading the index. - - - Returns a comparator for sorting hits according to a field containing shorts. - Index to use. - - Fieldable containing short values. - - Comparator for sorting hits. - - IOException If an error occurs reading the index. - - - Returns a comparator for sorting hits according to a field containing integers. - Index to use. - - Fieldable containing integer values. - - Comparator for sorting hits. - - IOException If an error occurs reading the index. - - - Returns a comparator for sorting hits according to a field containing longs. - Index to use. - - Fieldable containing long values. - - Comparator for sorting hits. - - IOException If an error occurs reading the index. - - - Returns a comparator for sorting hits according to a field containing floats. - Index to use. - - Fieldable containing float values. - - Comparator for sorting hits. - - IOException If an error occurs reading the index. - - - Returns a comparator for sorting hits according to a field containing doubles. - Index to use. - - Fieldable containing double values. - - Comparator for sorting hits. - - IOException If an error occurs reading the index. - - - Returns a comparator for sorting hits according to a field containing strings. - Index to use. - - Fieldable containing string values. - - Comparator for sorting hits. - - IOException If an error occurs reading the index. - - - Returns a comparator for sorting hits according to a field containing strings. - Index to use. - - Fieldable containing string values. - - Comparator for sorting hits. - - IOException If an error occurs reading the index. - - - Returns a comparator for sorting hits according to values in the given field. - The terms in the field are looked at to determine whether they contain integers, - floats or strings. Once the type is determined, one of the other static methods - in this class is called to get the comparator. - - Index to use. - - Fieldable containing values. - - Comparator for sorting hits. - - IOException If an error occurs reading the index. - - - Expert: Internal cache. - - - Expert: The default cache implementation, storing all values in memory. 
- A WeakHashMap is used for storage. - -

Created: May 19, 2004 4:40:36 PM - -

- lucene 1.4 - - $Id: FieldCacheImpl.java 807572 2009-08-25 11:44:45Z mikemccand $ - -
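- A brief sketch of the lookup pattern the methods below share
- (FieldCache_Fields.DEFAULT is assumed to be the Lucene.Net spelling of
- FieldCache.DEFAULT; field names are illustrative):
-            // One entry per document; repeated calls hit the internal cache.
-            int[] years = FieldCache_Fields.DEFAULT.GetInts(reader, "year");
-            System.String[] titles = FieldCache_Fields.DEFAULT.GetStrings(reader, "title");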
- - Checks the internal cache for an appropriate entry, and if none is - found, reads the terms in field as a single byte and returns an array - of size reader.maxDoc() of the value each document - has in the given field. - - Used to get field values. - - Which field contains the single byte values. - - The values in the given field for each document. - - IOException If any error occurs. - - - Checks the internal cache for an appropriate entry, and if none is found, - reads the terms in field as bytes and returns an array of - size reader.maxDoc() of the value each document has in the - given field. - - Used to get field values. - - Which field contains the bytes. - - Computes byte for string values. - - The values in the given field for each document. - - IOException If any error occurs. - - - Checks the internal cache for an appropriate entry, and if none is - found, reads the terms in field as shorts and returns an array - of size reader.maxDoc() of the value each document - has in the given field. - - Used to get field values. - - Which field contains the shorts. - - The values in the given field for each document. - - IOException If any error occurs. - - - Checks the internal cache for an appropriate entry, and if none is found, - reads the terms in field as shorts and returns an array of - size reader.maxDoc() of the value each document has in the - given field. - - Used to get field values. - - Which field contains the shorts. - - Computes short for string values. - - The values in the given field for each document. - - IOException If any error occurs. - - - Checks the internal cache for an appropriate entry, and if none is - found, reads the terms in field as integers and returns an array - of size reader.maxDoc() of the value each document - has in the given field. - - Used to get field values. - - Which field contains the integers. - - The values in the given field for each document. - - IOException If any error occurs. - - - Checks the internal cache for an appropriate entry, and if none is found, - reads the terms in field as integers and returns an array of - size reader.maxDoc() of the value each document has in the - given field. - - Used to get field values. - - Which field contains the integers. - - Computes integer for string values. - - The values in the given field for each document. - - IOException If any error occurs. - - - Checks the internal cache for an appropriate entry, and if - none is found, reads the terms in field as floats and returns an array - of size reader.maxDoc() of the value each document - has in the given field. - - Used to get field values. - - Which field contains the floats. - - The values in the given field for each document. - - IOException If any error occurs. - - - Checks the internal cache for an appropriate entry, and if - none is found, reads the terms in field as floats and returns an array - of size reader.maxDoc() of the value each document - has in the given field. - - Used to get field values. - - Which field contains the floats. - - Computes float for string values. - - The values in the given field for each document. - - IOException If any error occurs. - - - Checks the internal cache for an appropriate entry, and if none is - found, reads the terms in field as longs and returns an array - of size reader.maxDoc() of the value each document - has in the given field. - - - Used to get field values. - - Which field contains the longs. - - The values in the given field for each document. - - java.io.IOException If any error occurs. 
- - - Checks the internal cache for an appropriate entry, and if none is found, - reads the terms in field as longs and returns an array of - size reader.maxDoc() of the value each document has in the - given field. - - - Used to get field values. - - Which field contains the longs. - - Computes integer for string values. - - The values in the given field for each document. - - IOException If any error occurs. - - - Checks the internal cache for an appropriate entry, and if none is - found, reads the terms in field as integers and returns an array - of size reader.maxDoc() of the value each document - has in the given field. - - - Used to get field values. - - Which field contains the doubles. - - The values in the given field for each document. - - IOException If any error occurs. - - - Checks the internal cache for an appropriate entry, and if none is found, - reads the terms in field as doubles and returns an array of - size reader.maxDoc() of the value each document has in the - given field. - - - Used to get field values. - - Which field contains the doubles. - - Computes integer for string values. - - The values in the given field for each document. - - IOException If any error occurs. - - - Checks the internal cache for an appropriate entry, and if none - is found, reads the term values in field and returns an array - of size reader.maxDoc() containing the value each document - has in the given field. - - Used to get field values. - - Which field contains the strings. - - The values in the given field for each document. - - IOException If any error occurs. - - - Checks the internal cache for an appropriate entry, and if none - is found reads the term values in field and returns - an array of them in natural order, along with an array telling - which element in the term array each document uses. - - Used to get field values. - - Which field contains the strings. - - Array of terms and index into the array for each document. - - IOException If any error occurs. - - - Checks the internal cache for an appropriate entry, and if - none is found reads field to see if it contains integers, longs, floats - or strings, and then calls one of the other methods in this class to get the - values. For string values, a StringIndex is returned. After - calling this method, there is an entry in the cache for both - type AUTO and the actual found type. - - Used to get field values. - - Which field contains the values. - - int[], long[], float[] or StringIndex. - - IOException If any error occurs. - Please specify the exact type, instead. - Especially, guessing does not work with the new - {@link NumericField} type. - - - - Checks the internal cache for an appropriate entry, and if none - is found reads the terms out of field and calls the given SortComparator - to get the sort values. A hit in the cache will happen if reader, - field, and comparator are the same (using equals()) - as a previous call to this method. - - Used to get field values. - - Which field contains the values. - - Used to convert terms into something to sort by. - - Array of sort objects, one for each document. - - IOException If any error occurs. - Please implement {@link - FieldComparatorSource} directly, instead. - - - - EXPERT: Generates an array of CacheEntry objects representing all items - currently in the FieldCache. -

- NOTE: These CacheEntry objects maintain a strong reference to the - cached Values. Maintaining references to a CacheEntry after the IndexReader - associated with it has been garbage collected will prevent the Value itself - from being garbage collected when the Cache drops the WeakReference. -

-

- EXPERIMENTAL API: This API is considered extremely advanced - and experimental. It may be removed or altered w/o warning in future - releases - of Lucene. -

-

-
- -

- EXPERT: Instructs the FieldCache to forcibly expunge all entries - from the underlying caches. This is intended only to be used for - test methods as a way to ensure a known base state of the Cache - (without needing to rely on GC to free WeakReferences). - It should not be relied on for "Cache maintenance" in general - application code. -

-

- EXPERIMENTAL API: This API is considered extremely advanced - and experimental. It may be removed or altered w/o warning in future - releases - of Lucene. -

-

-
- - If non-null, FieldCacheImpl will warn whenever - entries are created that are not sane according to - {@link Lucene.Net.Util.FieldCacheSanityChecker}. - - - - counterpart of {@link #SetInfoStream(PrintStream)} - - - Will be removed in 3.0, this is for binary compatibility only - - - - Will be removed in 3.0, this is for binary compatibility only - - - - Will be removed in 3.0, this is for binary compatibility only - - - - Will be removed in 3.0, this is for binary compatibility only - - - - The pattern used to detect float values in a field - removed for java 1.3 compatibility - protected static final Object pFloats = Pattern.compile ("[0-9+\\-\\.eEfFdD]+"); - - - - - - - - EXPERT: A unique Identifier/Description for each item in the FieldCache. - Can be useful for logging/debugging. -

- EXPERIMENTAL API: This API is considered extremely advanced - and experimental. It may be removed or altered w/o warning in future - releases - of Lucene. -

-

-
- - - - - - Computes (and stores) the estimated size of the cache Value - - - - The most recently estimated size of the value, null unless - estimateSize has been called. - - - Only needed because of Entry (ab)use by - FieldSortedHitQueue, remove when FieldSortedHitQueue - is removed - - - Only needed because of Entry (ab)use by - FieldSortedHitQueue, remove when FieldSortedHitQueue - is removed - - - Adds warning to super.toString if Locale or sortFieldType were specified - Only needed because of Entry (ab)use by - FieldSortedHitQueue, remove when FieldSortedHitQueue - is removed - - - Hack: When thrown from a Parser (NUMERIC_UTILS_* ones), this stops - processing terms and returns the current FieldCache - array. - - - Expert: Internal cache. - - - Expert: Every composite-key in the internal cache is of this type. - - - Only (ab)used by FieldSortedHitQueue, - remove when FieldSortedHitQueue is removed - - - Only (ab)used by FieldSortedHitQueue, - remove when FieldSortedHitQueue is removed - - - Only (ab)used by FieldSortedHitQueue, - remove when FieldSortedHitQueue is removed - - - Creates one of these objects for a custom comparator/parser. - - - Only (ab)used by FieldSortedHitQueue, - remove when FieldSortedHitQueue is removed - - - Two of these are equal iff they reference the same field and type. - - - Composes a hashcode based on the field and type. - - - Please specify the exact type, instead. - Especially, guessing does not work with the new - {@link NumericField} type. - - - - - - - - Expert: a FieldComparator compares hits so as to determine their - sort order when collecting the top results with {@link - TopFieldCollector}. The concrete public FieldComparator - classes here correspond to the SortField types. - -

This API is designed to achieve high performance - sorting, by exposing a tight interaction with {@link - FieldValueHitQueue} as it visits hits. Whenever a hit is - competitive, it's enrolled into a virtual slot, which is - an int ranging from 0 to numHits-1. The {@link - FieldComparator} is made aware of segment transitions - during searching in case any internal state it's tracking - needs to be recomputed during these transitions.

- -

A comparator must define these functions:

- -

    -
  • {@link #compare} Compare a hit at 'slot a' - with hit 'slot b'.
  • {@link #setBottom} This method is called by - {@link FieldValueHitQueue} to notify the - FieldComparator of the current weakest ("bottom") - slot. Note that this slot may not hold the weakest - value according to your comparator, in cases where - your comparator is not the primary one (ie, is only - used to break ties from the comparators before it).
  • {@link #compareBottom} Compare a new hit (docID) - against the "weakest" (bottom) entry in the queue.
  • {@link #copy} Installs a new hit into the - priority queue. The {@link FieldValueHitQueue} - calls this method when a new hit is competitive.
  • {@link #setNextReader} Invoked - when the search is switching to the next segment. - You may need to update internal state of the - comparator, for example retrieving new values from - the {@link FieldCache}.
  • {@link #value} Return the sort value stored in - the specified slot. This is only called at the end - of the search, in order to populate {@link - FieldDoc#fields} when returning the top results.
- - NOTE: This API is experimental and might change in - incompatible ways in the next release. -
-
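- The sketch below strings these callbacks together for a single-valued int
- field (a hedged illustration, not the library's own comparator; method
- shapes follow the list above, and FieldCache_Fields.DEFAULT is assumed):
-            class SimpleIntComparator : FieldComparator {
-                private readonly int[] values;       // one value per slot
-                private int[] currentReaderValues;   // values for the current segment
-                private int bottom;                  // value of the weakest slot
-                private readonly string field;
-                public SimpleIntComparator(int numHits, string field) {
-                    values = new int[numHits];
-                    this.field = field;
-                }
-                public override int Compare(int slot1, int slot2) {
-                    return values[slot1].CompareTo(values[slot2]);
-                }
-                public override void SetBottom(int slot) { bottom = values[slot]; }
-                public override int CompareBottom(int doc) {
-                    return bottom.CompareTo(currentReaderValues[doc]);
-                }
-                public override void Copy(int slot, int doc) {
-                    values[slot] = currentReaderValues[doc];
-                }
-                public override void SetNextReader(IndexReader reader, int docBase) {
-                    // Refresh per-segment state from the FieldCache.
-                    currentReaderValues = FieldCache_Fields.DEFAULT.GetInts(reader, field);
-                }
-                public override System.IComparable Value(int slot) { return values[slot]; }
-            }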
- - Compare hit at slot1 with hit at slot2. - - - first slot to compare - - second slot to compare - - any N < 0 if slot2's value is sorted after - slot1, any N > 0 if the slot2's value is sorted before - slot1 and 0 if they are equal - - - - Set the bottom slot, i.e., the "weakest" (sorted last) - entry in the queue. When {@link #compareBottom} is - called, you should compare against this slot. This - will always be called before {@link #compareBottom}. - - - the currently weakest (sorted last) slot in the queue - - - - Compare the bottom of the queue with doc. This will - only be invoked after setBottom has been called. This - should return the same result as {@link - #Compare(int,int)} as if bottom were slot1 and the new - document were slot 2. -

For a search that hits many results, this method - will be the hotspot (invoked by far the most - frequently).

- -

- that was hit - - any N < 0 if the doc's value is sorted after - the bottom entry (not competitive), any N > 0 if the - doc's value is sorted before the bottom entry and 0 if - they are equal. - -
- - This method is called when a new hit is competitive. - You should copy any state associated with this document - that will be required for future comparisons, into the - specified slot. - - - which slot to copy the hit to - - docID relative to current reader - - - - Set a new Reader. All docs correspond to the current Reader. - - - current reader - - docBase of this reader - - IOException - IOException - - Sets the Scorer to use in case a document's score is - needed. - - - Scorer instance that you should use to - obtain the current hit's score, if necessary. - - - Return the actual value in the slot. - - - the value - - value in this slot upgraded to Comparable - - - - Parses field's values as byte (using {@link - FieldCache#getBytes}) and sorts by ascending value - - - - Sorts by ascending docID - - - Parses field's values as double (using {@link - FieldCache#getDoubles}) and sorts by ascending value - - - - Parses field's values as float (using {@link - FieldCache#getFloats}) and sorts by ascending value - - - - Parses field's values as int (using {@link - FieldCache#getInts}) and sorts by ascending value - - - - Parses field's values as long (using {@link - FieldCache#getLongs}) and sorts by ascending value - - - - Sorts by descending relevance. NOTE: if you are - sorting only by descending relevance and then - secondarily by ascending docID, performance is faster - using {@link TopScoreDocCollector} directly (which {@link - IndexSearcher#search} uses when no {@link Sort} is - specified). - - - - Parses field's values as short (using {@link - FieldCache#getShorts}) and sorts by ascending value - - - - Sorts by a field's value using the Collator for a - given Locale. - - - - Sorts by field's natural String sort order, using - ordinals. This is functionally equivalent to {@link - StringValComparator}, but it first resolves the strings - to their relative ordinal positions (using the index - returned by {@link FieldCache#getStringIndex}), and - does most comparisons using the ordinals. For medium - to large results, this comparator will be much faster - than {@link StringValComparator}. For very small - result sets it may be slower. - - - - Sorts by field's natural String sort order. All - comparisons are done using String.compareTo, which is - slow for medium to large result sets but possibly - very fast for very small result sets. - -

FieldCacheRangeFilter builds a single cache for the field the first time it is used. - Each subsequent FieldCacheRangeFilter on the same field then reuses this cache, - even if the range itself changes. - -

This means that FieldCacheRangeFilter is much faster (sometimes more than 100x as fast) - than building a {@link TermRangeFilter} (or {@link ConstantScoreRangeQuery} on a {@link TermRangeFilter}) - for each query, if using a {@link #newStringRange}. However, if the range never changes it - is slower (around 2x as slow) than building a CachingWrapperFilter on top of a single TermRangeFilter. - - For numeric data types, this filter may be significantly faster than {@link NumericRangeFilter}. - Furthermore, it does not need the numeric values encoded by {@link NumericField}. But - it has the problem that it only works with exactly one value per document (see below). -

As with all {@link FieldCache} based functionality, FieldCacheRangeFilter is only valid for - fields which contain exactly one term for each document (except for {@link #newStringRange} - where 0 terms are also allowed). Due to a restriction of {@link FieldCache}, for numeric ranges - all terms that do not have a numeric value are assumed to be 0. -

Thus it works on dates, prices and other single value fields but will not work on - regular text fields. It is preferable to use a NOT_ANALYZED field to ensure that - there is only a single term. - -

This class does not have a constructor; use one of the static factory methods available, - which create a correct instance for different data types supported by {@link FieldCache}. -

-
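- For instance, a hedged sketch of a half-open numeric range (the NewIntRange
- spelling is assumed for the Lucene.Net port of newIntRange; field and bounds
- are illustrative):
-            // Matches docs whose single-valued "year" field is >= 1990.
-            Filter filter = FieldCacheRangeFilter.NewIntRange("year", 1990, null, true, false);
-            TopDocs hits = searcher.Search(query, filter, 10);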
- - This method is implemented for each data type - - - Creates a string range filter using {@link FieldCache#getStringIndex}. This works with all - fields containing zero or one term in the field. The range can be half-open by setting one - of the values to null. - - - Creates a numeric range filter using {@link FieldCache#GetBytes(IndexReader,String)}. This works with all - byte fields containing exactly one numeric term in the field. The range can be half-open by setting one - of the values to null. - - - Creates a numeric range filter using {@link FieldCache#GetBytes(IndexReader,String,FieldCache.ByteParser)}. This works with all - byte fields containing exactly one numeric term in the field. The range can be half-open by setting one - of the values to null. - - - Creates a numeric range filter using {@link FieldCache#GetShorts(IndexReader,String)}. This works with all - short fields containing exactly one numeric term in the field. The range can be half-open by setting one - of the values to null. - - - Creates a numeric range filter using {@link FieldCache#GetShorts(IndexReader,String,FieldCache.ShortParser)}. This works with all - short fields containing exactly one numeric term in the field. The range can be half-open by setting one - of the values to null. - - - Creates a numeric range filter using {@link FieldCache#GetInts(IndexReader,String)}. This works with all - int fields containing exactly one numeric term in the field. The range can be half-open by setting one - of the values to null. - - - Creates a numeric range filter using {@link FieldCache#GetInts(IndexReader,String,FieldCache.IntParser)}. This works with all - int fields containing exactly one numeric term in the field. The range can be half-open by setting one - of the values to null. - - - Creates a numeric range filter using {@link FieldCache#GetLongs(IndexReader,String)}. This works with all - long fields containing exactly one numeric term in the field. The range can be half-open by setting one - of the values to null. - - - Creates a numeric range filter using {@link FieldCache#GetLongs(IndexReader,String,FieldCache.LongParser)}. This works with all - long fields containing exactly one numeric term in the field. The range can be half-open by setting one - of the values to null. - - - Creates a numeric range filter using {@link FieldCache#GetFloats(IndexReader,String)}. This works with all - float fields containing exactly one numeric term in the field. The range can be half-open by setting one - of the values to null. - - - Creates a numeric range filter using {@link FieldCache#GetFloats(IndexReader,String,FieldCache.FloatParser)}. This works with all - float fields containing exactly one numeric term in the field. The range can be half-open by setting one - of the values to null. - - - Creates a numeric range filter using {@link FieldCache#GetDoubles(IndexReader,String)}. This works with all - double fields containing exactly one numeric term in the field. The range can be half-open by setting one - of the values to null. - - - Creates a numeric range filter using {@link FieldCache#GetDoubles(IndexReader,String,FieldCache.DoubleParser)}. This works with all - double fields containing exactly one numeric term in the field. The range can be half-open by setting one - of the values to null. 
- - - - this method checks if a doc is a hit; it should throw an ArrayIndexOutOfBoundsException when the position is invalid - - this DocIdSet is cacheable if it works solely with FieldCache and no TermDocs - - - @deprecated use {@link #NextDoc()} instead. - - - use {@link #Advance(int)} instead. - - - - use {@link #DocID()} instead. - - - - use {@link #NextDoc()} instead. - - - - use {@link #Advance(int)} instead. - - - - use {@link #DocID()} instead. - - - - Expert: Describes the score computation for document and query. - - Indicates whether or not this Explanation models a good match. -

- By default, an Explanation represents a "match" if the value is positive. -

-

- - -
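- For example (standard Searcher API; docId is illustrative):
-            Explanation expl = searcher.Explain(query, docId);
-            System.Console.WriteLine(expl.ToString()); // rendered as indented text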
- - The value assigned to this explanation node. - - - Sets the value assigned to this explanation node. - - - A description of this explanation node. - - - Sets the description of this explanation node. - - - A short one line summary which should contain all high level - information about this Explanation, without the "Details" - - - - The sub-nodes of this explanation node. - - - Adds a sub-node to this explanation node. - - - Render an explanation as text. - - - Render an explanation as HTML. - - - Small Util class used to pass both an idf factor as well as an - explanation for that factor. - - This class will likely be held on a {@link Weight}, so be aware - before storing any large or un-serializable fields. - - - - - the idf factor - - - - This should be calculated lazily if possible. - - - the explanation for the idf factor. - - - - An alternative to BooleanScorer that also allows a minimum number - of optional scorers that should match. -
Implements skipTo(), and has no limitations on the numbers of added scorers. -
Uses ConjunctionScorer, DisjunctionScorer, ReqOptScorer and ReqExclScorer. -
-
- - The scorer to which all scoring will be delegated, - except for computing and using the coordination factor. - - - The number of optionalScorers that need to match (if there are any) - - - Creates a {@link Scorer} with the given similarity and lists of required, - prohibited and optional scorers. If no required scorers are added, at least - one of the optional scorers will have to match during the search. - - - The similarity to be used. - - The minimum number of optional added scorers that should match - during the search. In case no required scorers are added, at least - one of the optional scorers will have to match during the search. - - the list of required scorers. - - the list of prohibited scorers. - - the list of optional scorers. - - - - Returns the scorer to be used for match counting and score summing. - Uses requiredScorers, optionalScorers and prohibitedScorers. - - - - Returns the scorer to be used for match counting and score summing. - Uses the given required scorer and the prohibitedScorers. - - A required scorer already built. - - - - Scores and collects all matching documents. - The collector to which all matching documents are passed through - {@link HitCollector#Collect(int, float)}.
When this method is used the {@link #Explain(int)} method should not be used. - - use {@link #Score(Collector)} instead. - -
- - Scores and collects all matching documents. - The collector to which all matching documents are passed through. -
When this method is used the {@link #Explain(int)} method should not be used. - -
- - Expert: Collects matching documents in a range. -
Note that {@link #Next()} must be called once before this method is - called for the first time. -
- The collector to which all matching documents are passed through - {@link HitCollector#Collect(int, float)}. - - Do not score documents past this. - - true if more matching documents may remain. - - use {@link #Score(Collector, int, int)} instead. - -
- - use {@link #DocID()} instead. - - - - use {@link #NextDoc()} instead. - - - - use {@link #Advance(int)} instead. - - - - Throws an UnsupportedOperationException. - TODO: Implement an explanation of the coordination factor. - - The document number for the explanation. - - UnsupportedOperationException - - - A Scorer for OR like queries, counterpart of ConjunctionScorer. - This Scorer implements {@link Scorer#SkipTo(int)} and uses skipTo() on the given Scorers. - TODO: Implement score(HitCollector, int). - - - - The number of subscorers. - - - The subscorers. - - - The minimum number of scorers that should match. - - - The scorerDocQueue contains all subscorers ordered by their current doc(), - with the minimum at the top. -
The scorerDocQueue is initialized the first time next() or skipTo() is called. -
An exhausted scorer is immediately removed from the scorerDocQueue. -
If fewer than minimumNrMatchers scorers - remain in the scorerDocQueue, next() and skipTo() return false. -

- After each call to next() or skipTo() - currentSumScore is the total score of the current matching doc, - nrMatchers is the number of matching scorers, - and all scorers are after the matching doc, or are exhausted. -

-
- - The document number of the current match. - - - The number of subscorers that provide the current match. - - - Construct a DisjunctionScorer. - A collection of at least two subscorers. - - The positive minimum number of subscorers that should - match to match this query. -
When minimumNrMatchers is bigger than - the number of subScorers, - no matches will be produced. -
- When minimumNrMatchers equals the number of subScorers, - it is more efficient to use ConjunctionScorer. -
- - Construct a DisjunctionScorer, using one as the minimum number - of matching subscorers. - - - - Called the first time next() or skipTo() is called to - initialize scorerDocQueue. - - - - Scores and collects all matching documents. - The collector to which all matching documents are passed through - {@link HitCollector#Collect(int, float)}. -
When this method is used the {@link #Explain(int)} method should not be used. - - use {@link #Score(Collector)} instead. - -
- - Scores and collects all matching documents. - The collector to which all matching documents are passed through. -
When this method is used the {@link #Explain(int)} method should not be used. - -
- - Expert: Collects matching documents in a range. Hook for optimization. - Note that {@link #Next()} must be called once before this method is called - for the first time. - - The collector to which all matching documents are passed through - {@link HitCollector#Collect(int, float)}. - - Do not score documents past this. - - true if more matching documents may remain. - - use {@link #Score(Collector, int, int)} instead. - - - - Expert: Collects matching documents in a range. Hook for optimization. - Note that {@link #Next()} must be called once before this method is called - for the first time. - - The collector to which all matching documents are passed through. - - Do not score documents past this. - - true if more matching documents may remain. - - - - use {@link #NextDoc()} instead. - - - - Advance all subscorers after the current document determined by the - top of the scorerDocQueue. - Repeat until at least the minimum number of subscorers match on the same - document and all subscorers are after that document or are exhausted. -
On entry the scorerDocQueue has at least minimumNrMatchers - available. At least the scorer with the minimum document number will be advanced. -
- true iff there is a match. -
In case there is a match, currentDoc, currentSumScore, - and nrMatchers describe the match. - - TODO: Investigate whether it is possible to use skipTo() when - the minimum number of matchers is bigger than one, ie. try and use the - character of ConjunctionScorer for the minimum number of matchers. - Also delay calling score() on the sub scorers until the minimum number of - matchers is reached. -
For this, a Scorer array with minimumNrMatchers elements might - hold Scorers at currentDoc that are temporarily popped from scorerQueue. -
-
- - Returns the score of the current document matching the query. - Initially invalid, until {@link #Next()} is called the first time. - - - - use {@link #DocID()} instead. - - - - Returns the number of subscorers matching the current document. - Initially invalid, until {@link #Next()} is called the first time. - - - - Skips to the first match beyond the current whose document number is - greater than or equal to a given target.
- When this method is used the {@link #Explain(int)} method should not be - used.
- The implementation uses the skipTo() method on the subscorers. - -
- The target document number. - - true iff there is such a match. - - use {@link #Advance(int)} instead. - -
- - Advances to the first match beyond the current whose document number is - greater than or equal to a given target.
- When this method is used the {@link #Explain(int)} method should not be - used.
- The implementation uses the skipTo() method on the subscorers. - -
- The target document number. - - the document whose number is greater than or equal to the given - target, or -1 if none exist. - -
- - An explanation for the score of a given document. - - - - Scorer for conjunctions, sets of queries, all of which are required. - - - use {@link #DocID()} instead. - - - - use {@link #NextDoc()} instead. - - - - use {@link #Advance(int)} instead. - - - - Count a scorer as a single match. - - - use {@link #DocID()} instead. - - - - use {@link #NextDoc()} instead. - - - - use {@link #Advance(int)} instead. - - - - A QueryParser which constructs queries to search multiple fields. - - - $Revision: 829134 $ - - - - This class is generated by JavaCC. The most important method is - {@link #Parse(String)}. - - The syntax for query strings is as follows: - A Query is a series of clauses. - A clause may be prefixed by: -
    -
  • a plus (+) or a minus (-) sign, indicating - that the clause is required or prohibited respectively; or
  • a term followed by a colon, indicating the field to be searched. - This enables one to construct queries which search multiple fields.
-
    -
  • a term, indicating all the documents that contain this term; or
  • a nested query, enclosed in parentheses. Note that this may be used - with a +/- prefix to require any of a set of - terms.
-
-            Query  ::= ( Clause )*
-            Clause ::= ["+", "-"] [<TERM> ":"] ( <TERM> | "(" Query ")" )
-            
- -

- Examples of appropriately formatted queries can be found in the query syntax - documentation. -

- -

- In {@link TermRangeQuery}s, QueryParser tries to detect date values, e.g. - date:[6/1/2005 TO 6/4/2005] produces a range query that searches - for "date" fields between 2005-06-01 and 2005-06-04. Note that the format - of the accepted input depends on {@link #SetLocale(Locale) the locale}. - By default a date is converted into a search term using the deprecated - {@link DateField} for compatibility reasons. - To use the new {@link DateTools} to convert dates, a - {@link Lucene.Net.Documents.DateTools.Resolution} has to be set. -

-

- The date resolution that shall be used for RangeQueries can be set - using {@link #SetDateResolution(DateTools.Resolution)} - or {@link #SetDateResolution(String, DateTools.Resolution)}. The former - sets the default date resolution for all fields, whereas the latter can - be used to set field specific date resolutions. Field specific date - resolutions take, if set, precedence over the default date resolution. -

-

- If you use neither {@link DateField} nor {@link DateTools} in your - index, you can create your own - query parser that inherits QueryParser and overrides - {@link #GetRangeQuery(String, String, String, boolean)} to - use a different method for date conversion. -

- -

Note that QueryParser is not thread-safe.

- -

NOTE: there is a new QueryParser in contrib, which matches - the same syntax as this class, but is more modular, - enabling substantial customization to how a query is created. - -

NOTE: You must specify the required {@link Version} compatibility when - creating QueryParser: - 

    -
  • As of 2.9, {@link #SetEnablePositionIncrements} is true by default.
-
-
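- A short usage sketch tying the above together (the Version value and the
- field/analyzer choices are illustrative):
-            QueryParser parser = new QueryParser(Version.LUCENE_29, "contents",
-                                                 new StandardAnalyzer(Version.LUCENE_29));
-            parser.SetDefaultOperator(QueryParser.AND_OPERATOR);
-            Query query = parser.Parse("+apache +(lucene OR solr)");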
- - Alternative form of QueryParser.Operator.AND - - - Alternative form of QueryParser.Operator.OR - - - The actual operator that the parser uses to combine query terms - - - Constructs a query parser. - the default field for query terms. - - used to find terms in the query text. - - Use {@link #QueryParser(Version, String, Analyzer)} instead - - - - Constructs a query parser. - - - Lucene version to match. See above. - - the default field for query terms. - - used to find terms in the query text. - - - - Parses a query string, returning a {@link Lucene.Net.Search.Query}. - the query string to be parsed. - - ParseException if the parsing fails - - - Returns the analyzer. - - - - Returns the field. - - - - Get the minimum similarity for fuzzy queries. - - - Set the minimum similarity for fuzzy queries. - Default is 0.5f. - - - - Get the prefix length for fuzzy queries. - Returns the fuzzyPrefixLength. - - - - Set the prefix length for fuzzy queries. Default is 0. - The fuzzyPrefixLength to set. - - - - Sets the default slop for phrases. If zero, then exact phrase matches - are required. Default value is zero. - - - - Gets the default slop for phrases. - - - Set to true to allow leading wildcard characters. -

- When set, * or ? are allowed as - the first character of a PrefixQuery and WildcardQuery. - Note that this can produce very slow - queries on big indexes. -

- Default: false. -

-
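- A sketch of the effect (the parser variable is assumed to be an existing QueryParser):
-            <code>
-            parser.setAllowLeadingWildcard(true);
-            Query q = parser.parse("*ildcard");  // legal now; would otherwise throw ParseException
-            </code>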
- - - - - - Set to true to enable position increments in result query. -

- When set, result phrase and multi-phrase queries will - be aware of position increments. - Useful when e.g. a StopFilter increases the position increment of - the token that follows an omitted token. -

- Default: false. -

-
- - - - - - Sets the boolean operator of the QueryParser. - In default mode (OR_OPERATOR) terms without any modifiers - are considered optional: for example capital of Hungary is equal to - capital OR of OR Hungary.
- In AND_OPERATOR mode terms are considered to be in conjunction: the - above mentioned query is parsed as capital AND of AND Hungary -
-
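- For example (a sketch; parser is an existing QueryParser instance):
-            <code>
-            parser.setDefaultOperator(QueryParser.AND_OPERATOR);
-            Query q = parser.parse("capital of Hungary");
-            // now parsed as: capital AND of AND Hungary
-            </code>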
- - Gets implicit operator setting, which will be either AND_OPERATOR - or OR_OPERATOR. - - - - Whether terms of wildcard, prefix, fuzzy and range queries are to be automatically - lower-cased or not. Default is true. - - - - - - - - Please use {@link #setMultiTermRewriteMethod} instead. - - - - Please use {@link #getMultiTermRewriteMethod} instead. - - - - By default QueryParser uses {@link MultiTermQuery#CONSTANT_SCORE_AUTO_REWRITE_DEFAULT} - when creating a PrefixQuery, WildcardQuery or RangeQuery. This implementation is generally preferable because it - a) Runs faster b) Does not have the scarcity of terms unduly influence score - c) avoids any "TooManyBooleanClauses" exception. - However, if your application really needs to use the - old-fashioned BooleanQuery expansion rewriting and the above - points are not relevant then use this to change - the rewrite method. - - - - - - - - Set locale used by date range parsing. - - - Returns current locale, allowing access by subclasses. - - - Sets the default date resolution used by RangeQueries for fields for which no - specific date resolutions has been set. Field specific resolutions can be set - with {@link #SetDateResolution(String, DateTools.Resolution)}. - - - the default date resolution to set - - - - Sets the date resolution used by RangeQueries for a specific field. - - - field for which the date resolution is to be set - - date resolution to set - - - - Returns the date resolution that is used by RangeQueries for the given field. - Returns null, if no default or field specific date resolution has been set - for the given field. - - - - - Sets the collator used to determine index term inclusion in ranges - for RangeQuerys. -

- WARNING: Setting the rangeCollator to a non-null - collator using this method will cause every single index Term in the - Field referenced by lowerTerm and/or upperTerm to be examined. - Depending on the number of index Terms in this Field, the operation could - be very slow. - -

- the collator to use when constructing RangeQuerys - -
- - the collator used to determine index term inclusion in ranges - for RangeQuerys. - - - - use {@link #AddClause(List, int, int, Query)} instead. - - - - throw in overridden method to disallow - - - - Base implementation delegates to {@link #GetFieldQuery(String,String)}. - This method may be overridden, for example, to return - a SpanNearQuery instead of a PhraseQuery. - - - throw in overridden method to disallow - - - - throw in overridden method to disallow - - - - Builds a new BooleanQuery instance - disable coord - - new BooleanQuery instance - - - - Builds a new BooleanClause instance - sub query - - how this clause should occur when matching documents - - new BooleanClause instance - - - - Builds a new TermQuery instance - term - - new TermQuery instance - - - - Builds a new PhraseQuery instance - new PhraseQuery instance - - - - Builds a new MultiPhraseQuery instance - new MultiPhraseQuery instance - - - - Builds a new PrefixQuery instance - Prefix term - - new PrefixQuery instance - - - - Builds a new FuzzyQuery instance - Term - - minimum similarity - - prefix length - - new FuzzyQuery Instance - - - - Builds a new TermRangeQuery instance - Field - - min - - max - - true if range is inclusive - - new TermRangeQuery instance - - - - Builds a new MatchAllDocsQuery instance - new MatchAllDocsQuery instance - - - - Builds a new WildcardQuery instance - wildcard term - - new WildcardQuery instance - - - - Factory method for generating query, given a set of clauses. - By default creates a boolean query composed of clauses passed in. - - Can be overridden by extending classes, to modify query being - returned. - - - List that contains {@link BooleanClause} instances - to join. - - - Resulting {@link Query} object. - - throw in overridden method to disallow - - use {@link #GetBooleanQuery(List)} instead - - - - Factory method for generating query, given a set of clauses. - By default creates a boolean query composed of clauses passed in. - - Can be overridden by extending classes, to modify query being - returned. - - - List that contains {@link BooleanClause} instances - to join. - - - Resulting {@link Query} object. - - throw in overridden method to disallow - - - - Factory method for generating query, given a set of clauses. - By default creates a boolean query composed of clauses passed in. - - Can be overridden by extending classes, to modify query being - returned. - - - List that contains {@link BooleanClause} instances - to join. - - true if coord scoring should be disabled. - - - Resulting {@link Query} object. - - throw in overridden method to disallow - - use {@link #GetBooleanQuery(List, boolean)} instead - - - - Factory method for generating query, given a set of clauses. - By default creates a boolean query composed of clauses passed in. - - Can be overridden by extending classes, to modify query being - returned. - - - List that contains {@link BooleanClause} instances - to join. - - true if coord scoring should be disabled. - - - Resulting {@link Query} object. - - throw in overridden method to disallow - - - - Factory method for generating a query. Called when parser - parses an input term token that contains one or more wildcard - characters (? and *), but is not a prefix term token (one - that has just a single * character at the end) -

- Depending on settings, prefix term may be lower-cased - automatically. It will not go through the default Analyzer, - however, since normal Analyzers are unlikely to work properly - with wildcard templates. -

- Can be overridden by extending classes, to provide custom handling for - wildcard queries, which may be necessary due to missing analyzer calls. - -

- Name of the field query will use. - - Term token that contains one or more wild card - characters (? or *), but is not simple prefix term - - - Resulting {@link Query} built for the term - - throw in overridden method to disallow - -
- - Factory method for generating a query (similar to - {@link #getWildcardQuery}). Called when parser parses an input term - token that uses prefix notation; that is, contains a single '*' wildcard - character as its last character. Since this is a special case - of generic wildcard term, and such a query can be optimized easily, - this usually results in a different query object. -

- Depending on settings, a prefix term may be lower-cased - automatically. It will not go through the default Analyzer, - however, since normal Analyzers are unlikely to work properly - with wildcard templates. -

- Can be overridden by extending classes, to provide custom handling for - wild card queries, which may be necessary due to missing analyzer calls. - -

- Name of the field query will use. - - Term token to use for building term for the query - (without trailing '*' character!) - - - Resulting {@link Query} built for the term - - throw in overridden method to disallow - -
- - Factory method for generating a query (similar to - {@link #getWildcardQuery}). Called when parser parses - an input term token that has the fuzzy suffix (~) appended. - - - Name of the field query will use. - - Term token to use for building term for the query - - - Resulting {@link Query} built for the term - - throw in overridden method to disallow - - - - Returns a String where the escape char has been - removed, or kept only once if there was a double escape. - - Supports escaped unicode characters, e. g. translates - \\u0041 to A. - - - - - Returns the numeric value of the hexadecimal character - - - Returns a String where those characters that QueryParser - expects to be escaped are escaped by a preceding \. - - - - Command line tool to test QueryParser, using {@link Lucene.Net.Analysis.SimpleAnalyzer}. - Usage:
- java Lucene.Net.QueryParsers.QueryParser <input> -
-
- - Generated Token Manager. - - - Current token. - - - Next token. - - - Constructor with user supplied CharStream. - - - Reinitialise. - - - Constructor with generated Token Manager. - - - Reinitialise. - - - Get the next Token. - - - Get the specific Token. - - - Generate ParseException. - - - Enable tracing. - - - Disable tracing. - - - The default operator for parsing queries. - Use {@link QueryParser#setDefaultOperator} to change it. - - - - Creates a MultiFieldQueryParser. Allows passing of a map with term to - Boost, and the boost to apply to each term. - -

- It will, when parse(String query) is called, construct a query like this - (assuming the query consists of two terms and you specify the two fields - title and body): -

- - - (title:term1 body:term1) (title:term2 body:term2) - - -

- When setDefaultOperator(AND_OPERATOR) is set, the result will be: -

- - - +(title:term1 body:term1) +(title:term2 body:term2) - - -

- When you pass a boost (title=>5 body=>10) you can get -

- - - +(title:term1^5.0 body:term1^10.0) +(title:term2^5.0 body:term2^10.0) - - -

- In other words, all the query's terms must appear, but it doesn't matter - in what fields they appear. -

- -

- Please use - {@link #MultiFieldQueryParser(Version, String[], Analyzer, Map)} - instead - -
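- The Version-based variant referenced above can be used like this (a sketch; the analyzer variable is an assumption):
-            <code>
-            Map boosts = new HashMap();
-            boosts.put("title", new Float(5));
-            boosts.put("body", new Float(10));
-            MultiFieldQueryParser parser = new MultiFieldQueryParser(Version.LUCENE_29,
-            new String[] {"title", "body"}, analyzer, boosts);
-            Query q = parser.parse("term1 term2");
-            </code>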
- - Creates a MultiFieldQueryParser. Allows passing of a map with term to - Boost, and the boost to apply to each term. - -

- It will, when parse(String query) is called, construct a query like this - (assuming the query consists of two terms and you specify the two fields - title and body): -

- - - (title:term1 body:term1) (title:term2 body:term2) - - -

- When setDefaultOperator(AND_OPERATOR) is set, the result will be: -

- - - +(title:term1 body:term1) +(title:term2 body:term2) - - -

- When you pass a boost (title=>5 body=>10) you can get -

- - - +(title:term1^5.0 body:term1^10.0) +(title:term2^5.0 body:term2^10.0) - - -

- In other words, all the query's terms must appear, but it doesn't matter - in what fields they appear. -

-

-
- - Creates a MultiFieldQueryParser. - -

- It will, when parse(String query) is called, construct a query like this - (assuming the query consists of two terms and you specify the two fields - title and body): -

- - - (title:term1 body:term1) (title:term2 body:term2) - - -

- When setDefaultOperator(AND_OPERATOR) is set, the result will be: -

- - - +(title:term1 body:term1) +(title:term2 body:term2) - - -

- In other words, all the query's terms must appear, but it doesn't matter - in what fields they appear. -

- -

- Please use - {@link #MultiFieldQueryParser(Version, String[], Analyzer)} - instead - -
- - Creates a MultiFieldQueryParser. - -

- It will, when parse(String query) is called, construct a query like this - (assuming the query consists of two terms and you specify the two fields - title and body): -

- - - (title:term1 body:term1) (title:term2 body:term2) - - -

- When setDefaultOperator(AND_OPERATOR) is set, the result will be: -

- - - +(title:term1 body:term1) +(title:term2 body:term2) - - -

- In other words, all the query's terms must appear, but it doesn't matter - in what fields they appear. -

-

-
- - Parses a query which searches on the fields specified. -

- If x fields are specified, this effectively constructs: - -

-            <code>
-            (field1:query1) (field2:query2) (field3:query3)...(fieldx:queryx)
-            </code>
-            
- -
- Query strings to parse - - Fields to search on - - Analyzer to use - - ParseException - if query parsing fails - - IllegalArgumentException - if the length of the queries array differs from the length of - the fields array - - Use {@link #Parse(Version,String[],String[],Analyzer)} - instead - -
- - Parses a query which searches on the fields specified. -

- If x fields are specified, this effectively constructs: - -

-            <code>
-            (field1:query1) (field2:query2) (field3:query3)...(fieldx:queryx)
-            </code>
-            
- -
- Lucene version to match; this is passed through to - QueryParser. - - Query strings to parse - - Fields to search on - - Analyzer to use - - ParseException - if query parsing fails - - IllegalArgumentException - if the length of the queries array differs from the length of - the fields array - -
- - Parses a query, searching on the fields specified. - Use this if you need to specify certain fields as required, - and others as prohibited. -

-            Usage:
-            
-            String[] fields = {"filename", "contents", "description"};
-            BooleanClause.Occur[] flags = {BooleanClause.Occur.SHOULD,
-            BooleanClause.Occur.MUST,
-            BooleanClause.Occur.MUST_NOT};
-            MultiFieldQueryParser.parse("query", fields, flags, analyzer);
-            
-            
-

- The code above would construct a query: -

-            
-            (filename:query) +(contents:query) -(description:query)
-            
-            
- -
- Query string to parse - - Fields to search on - - Flags describing the fields - - Analyzer to use - - ParseException if query parsing fails - IllegalArgumentException if the length of the fields array differs - from the length of the flags array - - Use - {@link #Parse(Version, String, String[], BooleanClause.Occur[], Analyzer)} - instead - -
- - Parses a query, searching on the fields specified. Use this if you need - to specify certain fields as required, and others as prohibited. -

- -

-            Usage:
-            <code>
-            String[] fields = {"filename", "contents", "description"};
-            BooleanClause.Occur[] flags = {BooleanClause.Occur.SHOULD,
-            BooleanClause.Occur.MUST,
-            BooleanClause.Occur.MUST_NOT};
-            MultiFieldQueryParser.parse("query", fields, flags, analyzer);
-            </code>
-            
-

- The code above would construct a query: - -

-            <code>
-            (filename:query) +(contents:query) -(description:query)
-            </code>
-            
- -
- Lucene version to match; this is passed through to - QueryParser. - - Query string to parse - - Fields to search on - - Flags describing the fields - - Analyzer to use - - ParseException - if query parsing fails - - IllegalArgumentException - if the length of the fields array differs from the length of - the flags array - -
- - Parses a query, searching on the fields specified. - Use this if you need to specify certain fields as required, - and others as prohibited. -

-            Usage:
-            
-            String[] query = {"query1", "query2", "query3"};
-            String[] fields = {"filename", "contents", "description"};
-            BooleanClause.Occur[] flags = {BooleanClause.Occur.SHOULD,
-            BooleanClause.Occur.MUST,
-            BooleanClause.Occur.MUST_NOT};
-            MultiFieldQueryParser.parse(query, fields, flags, analyzer);
-            
-            
-

- The code above would construct a query: -

-            
-            (filename:query1) +(contents:query2) -(description:query3)
-            
-            
- -
- Query strings to parse - - Fields to search on - - Flags describing the fields - - Analyzer to use - - ParseException if query parsing fails - IllegalArgumentException if the length of the queries, fields, - and flags array differ - - Use - {@link #Parse(Version, String[], String[], BooleanClause.Occur[], Analyzer)} - instead - -
- - Parses a query, searching on the fields specified. Use this if you need - to specify certain fields as required, and others as prohibited. -

- -

-            Usage:
-            <code>
-            String[] query = {"query1", "query2", "query3"};
-            String[] fields = {"filename", "contents", "description"};
-            BooleanClause.Occur[] flags = {BooleanClause.Occur.SHOULD,
-            BooleanClause.Occur.MUST,
-            BooleanClause.Occur.MUST_NOT};
-            MultiFieldQueryParser.parse(query, fields, flags, analyzer);
-            </code>
-            
-

- The code above would construct a query: - -

-            <code>
-            (filename:query1) +(contents:query2) -(description:query3)
-            </code>
-            
- -
- Lucene version to match; this is passed through to - QueryParser. - - Query strings to parse - - Fields to search on - - Flags describing the fields - - Analyzer to use - - ParseException - if query parsing fails - - IllegalArgumentException - if the length of the queries, fields, and flags array differ - -
- -

Expert: policy for deletion of stale {@link IndexCommit index commits}. - -

Implement this interface, and pass it to one - of the {@link IndexWriter} or {@link IndexReader} - constructors, to customize when older - {@link IndexCommit point-in-time commits} - are deleted from the index directory. The default deletion policy - is {@link KeepOnlyLastCommitDeletionPolicy}, which always - removes old commits as soon as a new commit is done (this - matches the behavior before 2.2).

- -

One expected use case for this (and the reason why it - was first created) is to work around problems with an - index directory accessed via filesystems like NFS because - NFS does not provide the "delete on last close" semantics - that Lucene's "point in time" search normally relies on. - By implementing a custom deletion policy, such as "a - commit is only removed once it has been stale for more - than X minutes", you can give your readers time to - refresh to the new commit before {@link IndexWriter} - removes the old commits. Note that doing so will - increase the storage requirements of the index. See LUCENE-710 - for details.

-

-
- -

This is called once when a writer is first - instantiated to give the policy a chance to remove old - commit points.

- -

The writer locates all index commits present in the - index directory and calls this method. The policy may - choose to delete some of the commit points, doing so by - calling method {@link IndexCommit#delete delete()} - of {@link IndexCommit}.

- -

Note: the last CommitPoint is the most recent one, - i.e. the "front index state". Be careful not to delete it, - unless you know for sure what you are doing, and unless - you can afford to lose the index content while doing that. - -

- List of current - {@link IndexCommit point-in-time commits}, - sorted by age (the 0th one is the oldest commit). - -
- -

This is called each time the writer completed a commit. - This gives the policy a chance to remove old commit points - with each commit.

- -

The policy may now choose to delete old commit points - by calling method {@link IndexCommit#delete delete()} - of {@link IndexCommit}.

- -

If writer has autoCommit = true then - this method will in general be called many times during - one instance of {@link IndexWriter}. If - autoCommit = false then this method is - only called once when {@link IndexWriter#close} is - called, or not at all if the {@link IndexWriter#abort} - is called. - -

Note: the last CommitPoint is the most recent one, - i.e. the "front index state". Be careful not to delete it, - unless you know for sure what you are doing, and unless - you can afford to lose the index content while doing that. - -

- List of {@link IndexCommit}, - sorted by age (the 0th one is the oldest commit). - -
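- As an illustration (a sketch only; the class name is made up), a policy equivalent to {@link KeepOnlyLastCommitDeletionPolicy} could look like:
-            <code>
-            class KeepNewestOnly implements IndexDeletionPolicy {
-            public void onInit(List commits) { onCommit(commits); }
-            public void onCommit(List commits) {
-            // delete every commit except the most recent one
-            for (int i = 0; i &lt; commits.size() - 1; i++)
-            ((IndexCommit) commits.get(i)).delete();
-            }
-            }
-            </code>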
- - Gathers all Fieldables for a document under the same - name, updates FieldInfos, and calls per-field consumers - to process field by field. - - Currently, only a single thread visits the fields, - sequentially, for processing. - - - - Process the document. If there is - something for this document to be done in docID order, - you should encapsulate that as a - DocumentsWriter.DocWriter and return it. - DocumentsWriter then calls finish() on this object - when it's its turn. - - - - If there are fields we've seen but did not see again - in the last run, then free them up. - - - - Consumer returns this on each doc. This holds any - state that must be flushed synchronized "in docID - order". We gather these and flush them in order. - - - - This class accepts multiple added documents and directly - writes a single segment file. It does this more - efficiently than creating a single segment per document - (with DocumentWriter) and doing standard merges on those - segments. - - Each added document is passed to the {@link DocConsumer}, - which in turn processes the document and interacts with - other consumers in the indexing chain. Certain - consumers, like {@link StoredFieldsWriter} and {@link - TermVectorsTermsWriter}, digest a document and - immediately write bytes to the "doc store" files (ie, - they do not consume RAM per document, except while they - are processing the document). - - Other consumers, eg {@link FreqProxTermsWriter} and - {@link NormsWriter}, buffer bytes in RAM and flush only - when a new segment is produced. - Once we have used our allowed RAM buffer, or the number - of added docs is large enough (in the case we are - flushing by doc count instead of RAM usage), we create a - real segment and flush it to the Directory. - - Threads: - - Multiple threads are allowed into addDocument at once. - There is an initial synchronized call to getThreadState - which allocates a ThreadState for this thread. The same - thread will get the same ThreadState over time (thread - affinity) so that if there are consistent patterns (for - example each thread is indexing a different content - source) then we make better use of RAM. Then - processDocument is called on that ThreadState without - synchronization (most of the "heavy lifting" is in this - call). Finally the synchronized "finishDocument" is - called to flush changes to the directory. - - When flush is called by IndexWriter, or, we flush - internally when autoCommit=false, we forcefully idle all - threads and flush only once they are all idle. This - means you can call flush with a given thread even while - other threads are actively adding/deleting documents. - - - Exceptions: - - Because this class directly updates in-memory posting - lists, and flushes stored fields and term vectors - directly to files in the directory, there are certain - limited times when an exception can corrupt this state. - For example, a disk full while flushing stored fields - leaves this file in a corrupt state. Or, an OOM - exception while appending to the in-memory posting lists - can corrupt that posting list. We call such exceptions - "aborting exceptions". In these cases we must call - abort() to discard all docs added since the last flush. - - All other exceptions ("non-aborting exceptions") can - still partially update the index structures. These - updates are consistent, but, they represent only a part - of the document seen up until the exception was hit. 
- When this happens, we immediately mark the document as - deleted so that the document is always atomically ("all - or none") added to the index. - - - - Returns true if any of the fields in the current - buffered docs have omitTermFreqAndPositions==false - - - - If non-null, various details of indexing are printed - here. - - - - Set how much RAM we can use before flushing. - - - Set max buffered docs, which means we will flush by - doc count instead of by RAM usage. - - - - Get current segment name we are writing. - - - Returns how many docs are currently buffered in RAM. - - - Returns the current doc store segment we are writing - to. This will be the same as segment when autoCommit - is true. - - - - Returns the doc offset into the shared doc store for - the current buffered docs. - - - - Closes the current open doc stores and returns the doc - store segment name. This returns null if there are - no buffered documents. - - - - Called if we hit an exception at a bad time (when - updating the index files) and must discard all - currently buffered docs. This resets our state, - discarding any docs added since last flush. - - - - Reset after a flush - - - Flush all pending docs to a new segment - - - Build compound file for the segment we just flushed - - - Set flushPending if it is not already set and returns - whether it was set. This is used by IndexWriter to - trigger a single flush even when multiple threads are - trying to do so. - - - - Returns a free (idle) ThreadState that may be used for - indexing this one document. This call also pauses if a - flush is pending. If delTerm is non-null then we - buffer this deleted term after the thread state has - been acquired. - - - - Returns true if the caller (IndexWriter) should now - flush. - - - - Called whenever a merge has completed and the merged segments had deletions - - - Does the synchronized work to finish/flush the - inverted document. - - - - The IndexingChain must define the {@link #GetChain(DocumentsWriter)} method - which returns the DocConsumer that the DocumentsWriter calls to process the - documents. - - - - Consumer returns this on each doc. This holds any - state that must be flushed synchronized "in docID - order". We gather these and flush them in order. - - - - - Base class for enumerating all but deleted docs. - -

NOTE: this class is meant only to be used internally - by Lucene; it's only public so it can be shared across - packages. This means the API is freely subject to - change, and the class could be removed entirely, in any - Lucene release. Use directly at your own risk! -

-
- - A filter that replaces accented characters in the ISO Latin 1 character set - (ISO-8859-1) by their unaccented equivalent. The case will not be altered. -

- For instance, 'à' will be replaced by 'a'. -

- -

- in favor of {@link ASCIIFoldingFilter} which covers a superset - of Latin 1. This class will be removed in Lucene 3.0. - -
- - Will be removed in Lucene 3.0. This method is final, as it should - not be overridden. Delegates to the backwards compatibility layer. - - - - Will be removed in Lucene 3.0. This method is final, as it should - not be overridden. Delegates to the backwards compatibility layer. - - - - To replace accented characters in a String by unaccented equivalents. - - - Some useful constants. - - - - $Id: Constants.java 828327 2009-10-22 06:47:40Z uschindler $ - - - - - The value of System.getProperty("java.version"). * - - - True iff this is Java version 1.1. - - - True iff this is Java version 1.2. - - - True iff this is Java version 1.3. - - - The value of System.getProperty("os.name"). * - - - True iff running on Linux. - - - True iff running on Windows. - - - True iff running on SunOS. - - - File-based {@link Directory} implementation that uses - mmap for reading, and {@link - SimpleFSDirectory.SimpleFSIndexOutput} for writing. - -

NOTE: memory mapping uses up a portion of the - virtual memory address space in your process equal to the - size of the file being mapped. Before using this class, - be sure you have plenty of virtual address space, e.g. by - using a 64 bit JRE, or a 32 bit JRE with indexes that are - guaranteed to fit within the address space. - On 32 bit platforms also consult {@link #setMaxChunkSize} - if you have problems with mmap failing because of fragmented - address space. If you get an OutOfMemoryException, it is recommended - to reduce the chunk size until it works. -

Due to - this bug in Sun's JRE, MMapDirectory's {@link IndexInput#close} - is unable to close the underlying OS file handle. Only when GC - finally collects the underlying objects, which could be quite - some time later, will the file handle be closed. - -

This will consume additional transient disk usage: on Windows, - attempts to delete or overwrite the files will result in an - exception; on other platforms, which typically have a "delete on - last close" semantics, while such operations will succeed, the bytes - are still consuming space on disk. For many applications this - limitation is not a problem (e.g. if you have plenty of disk space, - and you don't rely on overwriting files on Windows) but it's still - an important limitation to be aware of. - -

This class supplies the workaround mentioned in the bug report - (disabled by default, see {@link #setUseUnmap}), which may fail on - non-Sun JVMs. It forcefully unmaps the buffer on close by using - an undocumented internal cleanup functionality. - {@link #UNMAP_SUPPORTED} is true, if the workaround - can be enabled (with no guarantees). -

-
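- Typical usage is a sketch like the following (the index path is illustrative):
-            <code>
-            MMapDirectory dir = new MMapDirectory(new File("/path/to/index"), null);
-            dir.setMaxChunkSize(256 * 1024 * 1024); // smaller chunks help on fragmented 32 bit address spaces
-            IndexSearcher searcher = new IndexSearcher(dir, true);
-            </code>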
- - Create a new MMapDirectory for the named location. - - - the path of the directory - - the lock factory to use, or null for the default. - - IOException - - - Create a new MMapDirectory for the named location. - - - the path of the directory - - the lock factory to use, or null for the default. - - IOException - - - Create a new MMapDirectory for the named location and the default lock factory. - - - the path of the directory - - IOException - - - Create a new MMapDirectory for the named location and the default lock factory. - - - the path of the directory - - IOException - - - - - - - true, if this platform supports unmapping mmaped files. - - - This method enables the workaround for unmapping the buffers - from address space after closing {@link IndexInput}, that is - mentioned in the bug report. This hack may fail on non-Sun JVMs. - It forcefully unmaps the buffer on close by using - an undocumented internal cleanup functionality. -

NOTE: Enabling this is completely unsupported - by Java and may lead to JVM crashes if IndexInput - is closed while another thread is still accessing it (SIGSEGV). -

- IllegalArgumentException if {@link #UNMAP_SUPPORTED} - is false and the workaround cannot be enabled. - -
- - Returns true if the unmap workaround is enabled. - - - - - Try to unmap the buffer; this method silently fails if the JVM - does not support it. On Windows, this means - that mmapped files cannot be modified or deleted. - - - - Sets the maximum chunk size (default is {@link Integer#MAX_VALUE} for - 64 bit JVMs and 256 MiBytes for 32 bit JVMs) used for memory mapping. - Especially on 32 bit platforms, the address space can be very fragmented, - so large index files cannot be mapped. - Using a lower chunk size makes the directory implementation a little - bit slower (as the correct chunk must be resolved on each seek) - but the chance is higher that mmap does not fail. On 64 bit - Java platforms, this parameter should always be {@link Integer#MAX_VALUE}, - as the address space is big enough. - - - - Returns the current mmap chunk size. - - - - - Creates an IndexInput for the file with the given name. - - - Creates an IndexOutput for the file with the given name. - -

- The TimeLimitedCollector is used to timeout search requests that take longer - than the maximum allowed search time limit. After this time is exceeded, the - search thread is stopped by throwing a TimeExceeded Exception. -

- -

- Use {@link TimeLimitingCollector} instead, which extends the new - {@link Collector}. This class will be removed in 3.0. - -
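- A usage sketch (the searcher, query and reader variables are assumed to exist):
-            <code>
-            final BitSet bits = new BitSet(reader.maxDoc());
-            HitCollector collector = new TimeLimitedCollector(new HitCollector() {
-            public void collect(int doc, float score) { bits.set(doc); }
-            }, 1000); // allow at most 1000 ms
-            try {
-            searcher.search(query, collector);
-            } catch (TimeLimitedCollector.TimeExceededException e) {
-            // hits collected before the timeout are still in bits
-            }
-            </code>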
- - Lower-level search API.
- HitCollectors are primarily meant to be used to implement queries, sorting - and filtering. See {@link Collector} for a lower level and higher performance - (on a multi-segment index) API. - -
- - - $Id: HitCollector.java 764551 2009-04-13 18:33:56Z mikemccand $ - - Please use {@link Collector} instead. - -
- - Called once for every document matching a query, with the document - number and its raw score. - -

If, for example, an application wished to collect all of the hits for a - query in a BitSet, then it might:

-            Searcher searcher = new IndexSearcher(indexReader);
-            final BitSet bits = new BitSet(indexReader.maxDoc());
-            searcher.search(query, new HitCollector() {
-            public void collect(int doc, float score) {
-            bits.set(doc);
-            }
-            });
-            
- -

Note: This is called in an inner search loop. For good search - performance, implementations of this method should not call - {@link Searcher#Doc(int)} or - {@link Lucene.Net.Index.IndexReader#Document(int)} on every - document number encountered. Doing so can slow searches by an order - of magnitude or more. -

Note: The score passed to this method is a raw score. - In other words, the score will not necessarily be a float whose value is - between 0 and 1. -

-
- - Default timer resolution. - - - - - Default for {@link #IsGreedy()}. - - - - - Create a TimeLimitedCollector wrapper over another HitCollector with a specified timeout. - the wrapped HitCollector - - max time allowed for collecting hits after which {@link TimeExceededException} is thrown - - - - Calls collect() on the decorated HitCollector. - - - TimeExceededException if the time allowed has been exceeded. - - - Return the timer resolution. - - - - - Set the timer resolution. - The default timer resolution is 20 milliseconds. - This means that a search required to take no longer than - 800 milliseconds may be stopped after 780 to 820 milliseconds. -
Note that: -
  • A finer (smaller) resolution is more accurate but less efficient.
  • Setting the resolution to less than 5 milliseconds will be silently modified to 5 milliseconds.
  • Setting the resolution smaller than the current resolution might take effect only after the current resolution has elapsed. (Assume the current resolution of 20 milliseconds is modified to 5 milliseconds; it can then take up to 20 milliseconds for the change to take effect.)
-
-
- - Checks if this time limited collector is greedy in collecting the last hit. - A non greedy collector, upon a timeout, would throw a {@link TimeExceededException} - without allowing the wrapped collector to collect current doc. A greedy one would - first allow the wrapped hit collector to collect current doc and only then - throw a {@link TimeExceededException}. - - - - - - Sets whether this time limited collector is greedy. - true to make this time limited greedy - - - - - - TimerThread provides a pseudo-clock service to all searching - threads, so that they can count elapsed time with less overhead - than repeatedly calling System.currentTimeMillis. A single - thread should be created to be used for all searches. - - - - Get the timer value in milliseconds. - - - Thrown when elapsed search time exceeds allowed search time. - - - Returns allowed time (milliseconds). - - - Returns elapsed time (milliseconds). - - - Returns last doc that was collected when the search time exceeded. - - -

Wrapper to allow {@link SpanQuery} objects to participate in composite - single-field SpanQueries by 'lying' about their search field. That is, - the masked SpanQuery will function as normal, - but {@link SpanQuery#GetField()} simply hands back the value supplied - in this class's constructor.

- -

This can be used to support Queries like {@link SpanNearQuery} or - {@link SpanOrQuery} across different fields, which is not ordinarily - permitted.

- -

This can be useful for denormalized relational data: for example, when - indexing a document with conceptually many 'children':

- -

-            teacherid: 1
-            studentfirstname: james
-            studentsurname: jones
-            
-            teacherid: 2
-            studentfirstname: james
-            studentsurname: smith
-            studentfirstname: sally
-            studentsurname: jones
-            
- -

a SpanNearQuery with a slop of 0 can be applied across two - {@link SpanTermQuery} objects as follows: -

-            SpanQuery q1  = new SpanTermQuery(new Term("studentfirstname", "james"));
-            SpanQuery q2  = new SpanTermQuery(new Term("studentsurname", "jones"));
-            SpanQuery q2m = new FieldMaskingSpanQuery(q2, "studentfirstname");
-            Query q = new SpanNearQuery(new SpanQuery[]{q1, q2m}, -1, false);
-            
- to search for 'studentfirstname:james studentsurname:jones' and find - teacherid 1 without matching teacherid 2 (which has a 'james' in position 0 - and 'jones' in position 1).

- -

Note: as {@link #GetField()} returns the masked field, scoring will be - done using the norms of the field name supplied. This may lead to unexpected - scoring behaviour.

-

-
- - use {@link #ExtractTerms(Set)} instead. - - - - Creates a new instance with size elements. If - prePopulate is set to true, the queue will pre-populate itself - with sentinel objects and set its {@link #Size()} to size. In - that case, you should not rely on {@link #Size()} to get the number of - actual elements that were added to the queue, but keep track yourself.
- NOTE: in case prePopulate is true, you should pop - elements from the queue using the following code example: - -
-            PriorityQueue pq = new HitQueue(10, true); // pre-populate.
-            ScoreDoc top = pq.top();
-            
-            // Add/Update one element.
-            top.score = 1.0f;
-            top.doc = 0;
-            top = (ScoreDoc) pq.updateTop();
-            int totalHits = 1;
-            
-            // Now pop only the elements that were *truly* inserted.
-            // First, pop all the sentinel elements (there are pq.size() - totalHits).
-            for (int i = pq.size() - totalHits; i > 0; i--) pq.pop();
-            
-            // Now pop the truly added elements.
-            ScoreDoc[] results = new ScoreDoc[totalHits];
-            for (int i = totalHits - 1; i >= 0; i--) {
-            results[i] = (ScoreDoc) pq.pop();
-            }
-            
- -

NOTE: This class pre-allocates a full array of - length size. - -

- the requested size of this queue. - - specifies whether to pre-populate the queue with sentinel values. - - - -
- - Interface that exceptions should implement to support lazy loading of messages. - - For Native Language Support (NLS), system of software internationalization. - - This Interface should be implemented by all exceptions that require - translation - - - - - a instance of a class that implements the Message interface - - - - A {@link IndexDeletionPolicy} that wraps around any other - {@link IndexDeletionPolicy} and adds the ability to hold and - later release a single "snapshot" of an index. While - the snapshot is held, the {@link IndexWriter} will not - remove any files associated with it even if the index is - otherwise being actively, arbitrarily changed. Because - we wrap another arbitrary {@link IndexDeletionPolicy}, this - gives you the freedom to continue using whatever {@link - IndexDeletionPolicy} you would normally want to use with your - index. Note that you can re-use a single instance of - SnapshotDeletionPolicy across multiple writers as long - as they are against the same index Directory. Any - snapshot held when a writer is closed will "survive" - when the next writer is opened. - -

WARNING: This API is new and experimental and - may suddenly change.

-

-
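- A backup sketch (the dir and analyzer variables are assumptions):
-            <code>
-            SnapshotDeletionPolicy dp =
-            new SnapshotDeletionPolicy(new KeepOnlyLastCommitDeletionPolicy());
-            IndexWriter writer = new IndexWriter(dir, analyzer, dp,
-            IndexWriter.MaxFieldLength.UNLIMITED);
-            try {
-            IndexCommit commit = dp.snapshot();
-            // copy the files named by commit.getFileNames() to backup storage
-            } finally {
-            dp.release();
-            }
-            </code>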
- - Take a snapshot of the most recent commit to the - index. You must call release() to free this snapshot. - Note that while the snapshot is held, the files it - references will not be deleted, which will consume - additional disk space in your index. If you take a - snapshot at a particularly bad time (say just before - you call optimize()) then in the worst case this could - consume an extra 1X of your total index size, until - you release the snapshot. - - - - Release the currently held snapshot. - - - A Payload is metadata that can be stored together with each occurrence - of a term. This metadata is stored inline in the posting list of the - specific term. -

- To store payloads in the index a {@link TokenStream} has to be used that - produces payload data. -

- Use {@link TermPositions#GetPayloadLength()} and {@link TermPositions#GetPayload(byte[], int)} - to retrieve the payloads from the index.
- -

-
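- A sketch of a payload-producing filter using the attribute API (the class name and payload contents are illustrative):
-            <code>
-            class ConstantPayloadFilter extends TokenFilter {
-            private final PayloadAttribute payloadAtt;
-            ConstantPayloadFilter(TokenStream input) {
-            super(input);
-            payloadAtt = (PayloadAttribute) addAttribute(PayloadAttribute.class);
-            }
-            public boolean incrementToken() throws IOException {
-            if (!input.incrementToken()) return false;
-            payloadAtt.setPayload(new Payload(new byte[] { 42 })); // same payload for every token
-            return true;
-            }
-            }
-            </code>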
- - the byte array containing the payload data - - - the offset within the byte array - - - the length of the payload data - - - Creates an empty payload and does not allocate a byte array. - - - Creates a new payload with the given array as data. - A reference to the passed-in array is held, i. e. no - copy is made. - - - the data of this payload - - - - Creates a new payload with the given array as data. - A reference to the passed-in array is held, i. e. no - copy is made. - - - the data of this payload - - the offset in the data byte array - - the length of the data - - - - Sets this payload's data. - A reference to the passed-in array is held, i. e. no - copy is made. - - - - Sets this payload's data. - A reference to the passed-in array is held, i. e. no - copy is made. - - - - Returns a reference to the underlying byte array - that holds this payload's data. - - - - Returns the offset in the underlying byte array - - - Returns the length of the payload data. - - - Returns the byte at the given index. - - - Allocates a new byte array, copies the payload data into it and returns it. - - - Copies the payload data to a byte array. - - - the target byte array - - the offset in the target byte array - - - - Clones this payload by creating a copy of the underlying - byte array. - - - - Add a new position & payload. If payloadLength > 0 - you must read those bytes from the IndexInput. - - - - Called when we are done adding positions & payloads - - - Add a new position & payload - - - Called when we are done adding positions & payloads - - - This class tracks the number and position / offset parameters of terms - being added to the index. The information collected in this class is - also used to calculate the normalization factor for a field. -

WARNING: This API is new and experimental, and may suddenly - change.

-

-
- - Re-initialize the state, using this boost value. - boost value to use. - - - - Get the last processed term position. - the position - - - - Get total number of terms in this field. - the length - - - - Get the number of terms with positionIncrement == 0. - the numOverlap - - - - Get end offset of the last processed term. - the offset - - - - Get boost value. This is the cumulative product of - document boost and field boost for all field instances - sharing the same field name. - - the boost - - - - Provides support for converting longs to Strings, and back again. The strings - are structured so that lexicographic sorting order is preserved. - -

- That is, if l1 is less than l2 for any two longs l1 and l2, then - NumberTools.longToString(l1) is lexicographically less than - NumberTools.longToString(l2). (Similarly for "greater than" and "equals".) - -

- This class handles all long values (unlike - {@link Lucene.Net.Documents.DateField}). - -

- For new indexes use {@link NumericUtils} instead, which - provides a sortable binary representation (prefix encoded) of numeric - values. - To index and efficiently query numeric values use {@link NumericField} - and {@link NumericRangeQuery}. - This class is included for use with existing - indices and will be removed in a future release. - -
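- A round trip sketch for existing indices:
-            <code>
-            String a = NumberTools.longToString(12L);
-            String b = NumberTools.longToString(345L);
-            boolean ordered = a.compareTo(b) &lt; 0; // true: string order matches numeric order
-            long back = NumberTools.stringToLong(a); // 12L again
-            </code>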
- - Equivalent to longToString(Long.MIN_VALUE) - - - Equivalent to longToString(Long.MAX_VALUE) - - - The length of (all) strings returned by {@link #longToString} - - - Converts a long to a String suitable for indexing. - - - Converts a String that was returned by {@link #longToString} back to a - long. - - - IllegalArgumentException - if the input is null - - NumberFormatException - if the input does not parse (it was not a String returned by - longToString()). - - - - This attribute can be used to pass different flags down the tokenizer chain, - eg from one TokenFilter to another one. - - - - EXPERIMENTAL: While we think this is here to stay, we may want to change it to be a long. -

- - Get the bitset for any bits that have been set. This is completely distinct from {@link TypeAttribute#Type()}, although they do share similar purposes. - The flags can be used to encode information about the token for use by other {@link Lucene.Net.Analysis.TokenFilter}s. - - -

- The bits - -
- - - - - - This TokenFilter provides the ability to set aside attribute states - that have already been analyzed. This is useful in situations where multiple fields share - many common analysis steps and then go their separate ways. -

- It is also useful for doing things like entity extraction or proper noun analysis as - part of the analysis workflow and saving off those tokens for use in another field. - -

-            TeeSinkTokenFilter source1 = new TeeSinkTokenFilter(new WhitespaceTokenizer(reader1));
-            TeeSinkTokenFilter.SinkTokenStream sink1 = source1.newSinkTokenStream();
-            TeeSinkTokenFilter.SinkTokenStream sink2 = source1.newSinkTokenStream();
-            TeeSinkTokenFilter source2 = new TeeSinkTokenFilter(new WhitespaceTokenizer(reader2));
-            source2.addSinkTokenStream(sink1);
-            source2.addSinkTokenStream(sink2);
-            TokenStream final1 = new LowerCaseFilter(source1);
-            TokenStream final2 = source2;
-            TokenStream final3 = new EntityDetect(sink1);
-            TokenStream final4 = new URLDetect(sink2);
-            d.add(new Field("f1", final1));
-            d.add(new Field("f2", final2));
-            d.add(new Field("f3", final3));
-            d.add(new Field("f4", final4));
-            
- In this example, sink1 and sink2 will both get tokens from both - reader1 and reader2 after the whitespace tokenizer, - and we can further wrap any of these in extra analysis; more "sources" can be inserted if desired. - It is important that tees are consumed before sinks (in the above example, the tee field names must - sort before the sink field names). If you are not sure which stream is consumed first, you can simply - add another sink and then pass all tokens to the sinks at once using {@link #consumeAllTokens}. - This TokenFilter is exhausted after that. To do so, change - the example above to:
-            ...
-            TokenStream final1 = new LowerCaseFilter(source1.newSinkTokenStream());
-            TokenStream final2 = source2.newSinkTokenStream();
-            sink1.consumeAllTokens();
-            sink2.consumeAllTokens();
-            ...
-            
- In this case, the fields can be added in any order, because the sources are not used anymore and all sinks are ready. -

Note, the EntityDetect and URLDetect TokenStreams are for the example and do not currently exist in Lucene. -

-
- - Instantiates a new TeeSinkTokenFilter. - - - Returns a new {@link SinkTokenStream} that receives all tokens consumed by this stream. - - - Returns a new {@link SinkTokenStream} that receives all tokens consumed by this stream - that pass the supplied filter. - - - - - - Adds a {@link SinkTokenStream} created by another TeeSinkTokenFilter - to this one. The supplied stream will also receive all consumed tokens. - This method can be used to pass tokens from two different tees to one sink. - - - - TeeSinkTokenFilter passes all tokens to the added sinks - when itself is consumed. To be sure, that all tokens from the input - stream are passed to the sinks, you can call this methods. - This instance is exhausted after this, but all sinks are instant available. - - - - A filter that decides which {@link AttributeSource} states to store in the sink. - - - Returns true, iff the current state of the passed-in {@link AttributeSource} shall be stored - in the sink. - - - - Called by {@link SinkTokenStream#Reset()}. This method does nothing by default - and can optionally be overridden. - - - - A grammar-based tokenizer constructed with JFlex - -

This should be a good tokenizer for most European-language documents: - -

  • Splits words at punctuation characters, removing punctuation. However, a dot that's not followed by whitespace is considered part of a token.
  • Splits words at hyphens, unless there's a number in the token, in which case the whole token is interpreted as a product number and is not split.
  • Recognizes email addresses and internet hostnames as one token.
- -

Many applications have specific tokenizer needs. If this tokenizer does - not suit your application, please consider copying this source code - directory to your project and maintaining your own grammar-based tokenizer. - - -

- You must specify the required {@link Version} compatibility when creating - StandardTokenizer: -

-
-
- - this solves a bug where HOSTs that end with '.' are identified - as ACRONYMs. It is deprecated and will be removed in the next - release. - - - - A private instance of the JFlex-constructed scanner - - - String token types that correspond to token type int constants - - - Please use {@link #TOKEN_TYPES} instead - - - - Specifies whether deprecated acronyms should be replaced with HOST type. - This is false by default to support backward compatibility. -

- See http://issues.apache.org/jira/browse/LUCENE-1068 - -

- this should be removed in the next release (3.0). - -
- - Set the max allowed token length. Any token longer - than this is skipped. - - - - - - - Creates a new instance of the {@link StandardTokenizer}. Attaches the - input to a newly created JFlex scanner. - - Use {@link #StandardTokenizer(Version, Reader)} instead - - - - Creates a new instance of the {@link Lucene.Net.Analysis.Standard.StandardTokenizer}. Attaches - the input to the newly created JFlex scanner. - - - The input reader - - Set to true to replace mischaracterized acronyms with HOST. - - See http://issues.apache.org/jira/browse/LUCENE-1068 - - Use {@link #StandardTokenizer(Version, Reader)} instead - - - - Creates a new instance of the - {@link org.apache.lucene.analysis.standard.StandardTokenizer}. Attaches - the input to the newly created JFlex scanner. - - - The input reader - - See http://issues.apache.org/jira/browse/LUCENE-1068 - - - - Creates a new StandardTokenizer with a given {@link AttributeSource}. - Use - {@link #StandardTokenizer(Version, AttributeSource, Reader)} - instead - - - - Creates a new StandardTokenizer with a given {@link AttributeSource}. - - - Creates a new StandardTokenizer with a given {@link Lucene.Net.Util.AttributeSource.AttributeFactory} - Use - {@link #StandardTokenizer(Version, org.apache.lucene.util.AttributeSource.AttributeFactory, Reader)} - instead - - - - Creates a new StandardTokenizer with a given - {@link org.apache.lucene.util.AttributeSource.AttributeFactory} - - - - Will be removed in Lucene 3.0. This method is final, as it should - not be overridden. Delegates to the backwards compatibility layer. - - - - Will be removed in Lucene 3.0. This method is final, as it should - not be overridden. Delegates to the backwards compatibility layer. - - - - Prior to https://issues.apache.org/jira/browse/LUCENE-1068, StandardTokenizer mischaracterized tokens like www.abc.com as acronyms - when they should have been labeled as hosts instead. - - true if StandardTokenizer now returns these tokens as Hosts, otherwise false - - - Remove in 3.X and make true the only valid value - - - - - Set to true to replace mischaracterized acronyms as HOST. - - Remove in 3.X and make true the only valid value - - See https://issues.apache.org/jira/browse/LUCENE-1068 - - - - Helper methods to ease implementing {@link Object#toString()}. - - - for printing boost only if not 1.0 - - - Simple cache implementation that uses a HashMap to store (key, value) pairs. - This cache is not synchronized; use {@link Cache#SynchronizedCache(Cache)} - if needed. - - - - Returns a Set containing all keys in this cache. - - - Helper class for keeping Lists of Objects associated with keys. WARNING: THIS CLASS IS NOT THREAD SAFE - - - the backing store for this object - - - - direct access to the map backing this object. - - - - Adds val to the Set associated with key in the Map. If key is not - already in the map, a new Set will first be created. - - the size of the Set associated with key once val is added to it. - - - - Adds multiple vals to the Set associated with key in the Map. - If key is not - already in the map, a new Set will first be created. - - the size of the Set associated with key once val is added to it. - - - - A memory-resident {@link Directory} implementation. Locking - implementation is by default the {@link SingleInstanceLockFactory} - but can be changed with {@link #setLockFactory}. - - - $Id: RAMDirectory.java 781333 2009-06-03 10:38:57Z mikemccand $ - - -
- - - Creates a new RAMDirectory instance from a different - Directory implementation. This can be used to load - a disk-based index into memory. -

- This should be used only with indices that can fit into memory. -

- Note that the resulting RAMDirectory instance is fully - independent from the original Directory (it is a - complete copy). Any subsequent changes to the - original Directory will not be visible in the - RAMDirectory instance. - -

- a Directory value - - if an error occurs - -
- - Creates a new RAMDirectory instance from the {@link FSDirectory}. - - - a File specifying the index directory - - - - - Use {@link #RAMDirectory(Directory)} instead - - - - Creates a new RAMDirectory instance from the {@link FSDirectory}. - - - a String specifying the full index directory path - - - - - Use {@link #RAMDirectory(Directory)} instead - - - - Returns true iff the named file exists in this directory. - - - Returns the time the named file was last modified. - IOException if the file does not exist - - - Set the modified time of an existing file to now. - IOException if the file does not exist - - - Returns the length in bytes of a file in the directory. - IOException if the file does not exist - - - Return total size in bytes of all files in this - directory. This is currently quantized to - RAMOutputStream.BUFFER_SIZE. - - - - Removes an existing file in the directory. - IOException if the file does not exist - - - Renames an existing file in the directory. - FileNotFoundException if from does not exist - - - - - Creates a new, empty file in the directory with the given name. Returns a stream writing this file. - - - Returns a stream reading an existing file. - - - Closes the store to future operations, releasing associated memory. - - -
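- A sketch of loading a disk-based index into memory (the path is illustrative):
-            <code>
-            Directory fsDir = FSDirectory.open(new File("/path/to/index"));
-            Directory ramDir = new RAMDirectory(fsDir); // fully independent copy
-            IndexSearcher searcher = new IndexSearcher(ramDir, true);
-            </code>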

Implements {@link LockFactory} using native OS file - locks. Note that because this LockFactory relies on - java.nio.* APIs for locking, any problems with those APIs - will cause locking to fail. Specifically, on certain NFS - environments the java.nio.* locks will fail (the lock can - incorrectly be double acquired) whereas {@link - SimpleFSLockFactory} worked perfectly in those same - environments. For NFS based access to an index, it's - recommended that you try {@link SimpleFSLockFactory} - first and work around the one limitation that a lock file - could be left when the JVM exits abnormally.

- -

The primary benefit of {@link NativeFSLockFactory} is - that lock files will be properly removed (by the OS) if - the JVM has an abnormal exit.

- -

Note that, unlike {@link SimpleFSLockFactory}, the existence of - leftover lock files in the filesystem on exiting the JVM - is fine because the OS will free the locks held against - these files even though the files still remain.

- -

If you suspect that this or any other LockFactory is - not working properly in your environment, you can easily - test it by using {@link VerifyingLockFactory}, {@link - LockVerifyServer} and {@link LockStressTest}.

- -

- - -
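- A sketch of wiring this factory to a directory (the path is illustrative):
-            <code>
-            NativeFSLockFactory lf = new NativeFSLockFactory(); // lock dir defaults to the index dir
-            Directory dir = FSDirectory.open(new File("/path/to/index"), lf);
-            </code>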
- - Create a NativeFSLockFactory instance, with null (unset) - lock directory. When you pass this factory to a {@link FSDirectory} - subclass, the lock directory is automatically set to the - directory itself. Be sure to create one instance for each directory - you create! - - - - Create a NativeFSLockFactory instance, storing lock - files into the specified lockDirName: - - - where lock files are created. - - - - Create a NativeFSLockFactory instance, storing lock - files into the specified lockDir: - - - where lock files are created. - - - - Create a NativeFSLockFactory instance, storing lock - files into the specified lockDir: - - - where lock files are created. - - - - Writes bytes through to a primary IndexOutput, computing - checksum as it goes. Note that you cannot use seek(). - - - - Similar to {@link NearSpansOrdered}, but for the unordered case. - - Expert: - Only public for subclassing. Most implementations should not need this class - - - - Expert: an enumeration of span matches. Used to implement span searching. - Each span represents a range of term positions within a document. Matches - are enumerated in order, by increasing document number, within that by - increasing start position and finally by increasing end position. - - - - Move to the next match, returning true iff any such exists. - - - Skips to the first match beyond the current, whose document number is - greater than or equal to target.

Returns true iff there is such - a match.

Behaves as if written:

-            boolean skipTo(int target) {
-            do {
-            if (!next())
-            return false;
-            } while (target > doc());
-            return true;
-            }
-            
- Most implementations are considerably more efficient than that. -
-
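- For example, a typical enumeration loop (a minimal sketch; query is
- assumed to be an arbitrary SpanQuery, reader an open IndexReader):
- 
-            Spans spans = query.getSpans(reader);
-            while (spans.next()) {
-              System.out.println(spans.doc() + ": [" + spans.start()
-                  + ", " + spans.end() + ")");
-            }
- 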
- - Returns the document number of the current match. Initially invalid. - - - Returns the start position of the current match. Initially invalid. - - - Returns the end position of the current match. Initially invalid. - - - Returns the payload data for the current span. - This is invalid until {@link #Next()} is called for - the first time. - This method must not be called more than once after each call - of {@link #Next()}. However, most payloads are loaded lazily, - so if the payload data for the current position is not needed, - this method may not be called at all for performance reasons. An ordered - SpanQuery does not lazy load, so if you have payloads in your index and - you do not want ordered SpanNearQuerys to collect payloads, you can - disable collection with a constructor option.
- - Note that the return type is a collection, thus the ordering should not be relied upon. -
-

- WARNING: The status of the Payloads feature is experimental. - The APIs introduced here might change in the future and will not be - supported anymore in such a case.

- -

- a List of byte arrays containing the data of this payload, otherwise null if isPayloadAvailable is false - - java.io.IOException -
- - Checks if a payload can be loaded at this position. -

- Payloads can only be loaded once per call to - {@link #Next()}. - -

- true if there is a payload available at this position that can be loaded - -
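- For example, a minimal sketch of collecting payloads while stepping
- through matches (spans is assumed to be an already-opened Spans):
- 
-            while (spans.next()) {
-              if (spans.isPayloadAvailable()) {
-                Collection payloads = spans.getPayload();
-                // each element is a byte[] for the current match
-              }
-            }
- 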
- - WARNING: The List is not necessarily in order of the positions
- Collection of byte[] payloads
-
- IOException
-
-
- Wraps a Spans, and can be used to form a linked list.
-
-
- A {@link Filter} that only accepts numeric values within
- a specified range. To use this, you must first index the
- numeric values using {@link NumericField} (expert: {@link
- NumericTokenStream}).
-
-

You create a new NumericRangeFilter with the static - factory methods, eg: - -

-            Filter f = NumericRangeFilter.newFloatRange("weight",
-            new Float(0.10f), new Float(0.30f),
-            true, true);
-            
- - accepts all documents whose float valued "weight" field
- ranges from 0.10 to 0.30, inclusive.
- See {@link NumericRangeQuery} for details on how Lucene
- indexes and searches numeric valued fields.
-
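- A half-open range is expressed by passing null for one bound; for
- example, a minimal sketch that accepts all values strictly below 0.30
- (the field name is illustrative):
- 
-            Filter halfOpen = NumericRangeFilter.newFloatRange("weight",
-            null, new Float(0.30f), false, false);
- 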

NOTE: This API is experimental and - might change in incompatible ways in the next - release. - -

- 2.9 - - -
- - Factory that creates a NumericRangeFilter, that filters a long
- range using the given precisionStep.
- You can have half-open ranges (which are in fact </&le; or >/&ge; queries)
- by setting the min or max value to null. By setting inclusive to false, it will
- match all documents excluding the bounds; with inclusive on, the boundaries are hits, too.
-
-
- Factory that creates a NumericRangeFilter, that queries a long
- range using the default precisionStep {@link NumericUtils#PRECISION_STEP_DEFAULT} (4).
- You can have half-open ranges (which are in fact </&le; or >/&ge; queries)
- by setting the min or max value to null. By setting inclusive to false, it will
- match all documents excluding the bounds; with inclusive on, the boundaries are hits, too.
-
-
- Factory that creates a NumericRangeFilter, that filters an int
- range using the given precisionStep.
- You can have half-open ranges (which are in fact </&le; or >/&ge; queries)
- by setting the min or max value to null. By setting inclusive to false, it will
- match all documents excluding the bounds; with inclusive on, the boundaries are hits, too.
-
-
- Factory that creates a NumericRangeFilter, that queries an int
- range using the default precisionStep {@link NumericUtils#PRECISION_STEP_DEFAULT} (4).
- You can have half-open ranges (which are in fact </&le; or >/&ge; queries)
- by setting the min or max value to null. By setting inclusive to false, it will
- match all documents excluding the bounds; with inclusive on, the boundaries are hits, too.
-
-
- Factory that creates a NumericRangeFilter, that filters a double
- range using the given precisionStep.
- You can have half-open ranges (which are in fact </&le; or >/&ge; queries)
- by setting the min or max value to null. By setting inclusive to false, it will
- match all documents excluding the bounds; with inclusive on, the boundaries are hits, too.
-
-
- Factory that creates a NumericRangeFilter, that queries a double
- range using the default precisionStep {@link NumericUtils#PRECISION_STEP_DEFAULT} (4).
- You can have half-open ranges (which are in fact </&le; or >/&ge; queries)
- by setting the min or max value to null. By setting inclusive to false, it will
- match all documents excluding the bounds; with inclusive on, the boundaries are hits, too.
-
-
- Factory that creates a NumericRangeFilter, that filters a float
- range using the given precisionStep.
- You can have half-open ranges (which are in fact </&le; or >/&ge; queries)
- by setting the min or max value to null. By setting inclusive to false, it will
- match all documents excluding the bounds; with inclusive on, the boundaries are hits, too.
-
-
- Factory that creates a NumericRangeFilter, that queries a float
- range using the default precisionStep {@link NumericUtils#PRECISION_STEP_DEFAULT} (4).
- You can have half-open ranges (which are in fact </&le; or >/&ge; queries)
- by setting the min or max value to null. By setting inclusive to false, it will
- match all documents excluding the bounds; with inclusive on, the boundaries are hits, too.
-
-
- Returns the field name for this filter
-
- Returns true if the lower endpoint is inclusive
-
- Returns true if the upper endpoint is inclusive
-
- Returns the lower value of this range filter
-
- Returns the upper value of this range filter
-
- Abstract decorator class of a DocIdSetIterator
- implementation that provides on-demand filter/validation
- mechanism on an underlying DocIdSetIterator. See {@link
- FilteredDocIdSet}.
-
-
- Constructor.
- Underlying DocIdSetIterator. <br/>
- - - - Validation method to determine whether a docid should be in the result set. - docid to be tested - - true if input docid should be in the result set, false otherwise. - - - - - - use {@link #DocID()} instead. - - - - use {@link #NextDoc()} instead. - - - - use {@link #Advance(int)} instead. - - - - Expert: A ScoreDoc which also contains information about - how to sort the referenced document. In addition to the - document number and score, this object contains an array - of values for the document from the field(s) used to sort. - For example, if the sort criteria was to sort by fields - "a", "b" then "c", the fields object array - will have three elements, corresponding respectively to - the term values for the document in fields "a", "b" and "c". - The class of each element in the array will be either - Integer, Float or String depending on the type of values - in the terms of each field. - -

Created: Feb 11, 2004 1:23:38 PM - -

- lucene 1.4 - - $Id: FieldDoc.java 773194 2009-05-09 10:36:41Z mikemccand $ - - - - - -
- - Expert: Returned by low-level search implementations.
-
-
- Expert: The score of this document for the query.
-
- Expert: A hit document's number.
-
-
- Expert: Constructs a ScoreDoc.
-
- Expert: The values which are used to sort the referenced document.
- The order of these will match the original sort criteria given by a
- Sort object. Each Object will be either an Integer, Float or String,
- depending on the type of values in the terms of the original field.
-
-
-
- Expert: Creates one of these objects with empty sort information.
-
- Expert: Creates one of these objects with the given sort information.
-
- Message interface for lazy loading.
- Used for Native Language Support (NLS), a system for software internationalization.
-
-
- A {@link MergeScheduler} that simply does each merge
- sequentially, using the current thread.
-
-

Expert: {@link IndexWriter} uses an instance - implementing this interface to execute the merges - selected by a {@link MergePolicy}. The default - MergeScheduler is {@link ConcurrentMergeScheduler}.

- -

NOTE: This API is new and still experimental - (subject to change suddenly in the next release)

- -

NOTE: This class typically requires access to
- package-private APIs (eg, SegmentInfos) to do its job;
- if you implement your own MergeScheduler, you'll need to put
- it in package Lucene.Net.Index in order to use
- these APIs.
-

-
- - Run the merges provided by {@link IndexWriter#GetNextMerge()}.
-
- Close this MergeScheduler.
-
- Just do the merges in sequence. We do this
- "synchronized" so that even if the application is using
- multiple threads, only one merge may run at a time.
-
-
- Used by DocumentsWriter to implement a StringReader
- that can be reset to a new string; we use this when
- tokenizing the string value from a Field.
-
-
- $Id
-

NOTE: This API is new and still experimental - (subject to change suddenly in the next release)

-

-
- - The class which implements SegmentReader. - - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - - - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - - - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - - Clones the norm bytes. May be overridden by subclasses. New and experimental. - Byte array to clone - - New BitVector - - - - Clones the deleteDocs BitVector. May be overridden by subclasses. New and experimental. - BitVector to clone - - New BitVector - - - - - - - - - - - - Read norms into a pre-allocated array. - - - Create a clone from the initial TermVectorsReader and store it in the ThreadLocal. - TermVectorsReader - - - - Return a term frequency vector for the specified document and field. The - vector returned contains term numbers and frequencies for all terms in - the specified field of this document, if the field had storeTermVector - flag set. If the flag was not set, the method returns null. - - IOException - - - Return an array of term frequency vectors for the specified document. - The array contains a vector for each vectorized field in the document. - Each vector vector contains term numbers and frequencies for all terms - in a given vectorized field. - If no such fields existed, the method returns null. - - IOException - - - Return the name of the segment this reader is reading. - - - Return the SegmentInfo of the segment this reader is reading. - - - Returns the directory this index resides in. - - - Lotsa tests did hacks like:
- SegmentReader reader = (SegmentReader) IndexReader.open(dir);
- They broke. This method serves as a hack to keep hacks working -
-
- - Sets the initial value - - - Byte[] referencing is used because a new norm object needs - to be created for each clone, and the byte array is all - that is needed for sharing between cloned readers. The - current norm referencing is for sharing between readers - whereas the byte[] referencing is for copy on write which - is independent of reader references (i.e. incRef, decRef). - - - - Abstract API that consumes terms, doc, freq, prox and - payloads postings. Concrete implementations of this - actually do "something" with the postings (write it into - the index in a specific format). - - NOTE: this API is experimental and will likely change - - - - Add a new field - - - Called when we are done adding everything. - - - Add a new field - - - Called when we are done adding everything. - - - Bulk write a contiguous series of documents. The - lengths array is the length (in bytes) of each raw - document. The stream IndexInput is the - fieldsStream from which we should bulk-copy all - bytes. - - - - Class responsible for access to stored document fields. -

- It uses <segment>.fdt and <segment>.fdx; files. - -

- $Id: FieldsReader.java 801344 2009-08-05 18:05:06Z yonik $ - -
- - Returns a cloned FieldsReader that shares open
- IndexInputs with the original one. It is the caller's
- job not to close the original FieldsReader until all
- clones are called (eg, currently SegmentReader manages
- this logic).
-
-
- AlreadyClosedException if this FieldsReader is closed
-
-
- Closes the underlying {@link Lucene.Net.Store.IndexInput} streams, including any ones associated with a
- lazy implementation of a Field. This means that the Fields values will not be accessible.
-
-
- IOException
-
-
- Returns the length in bytes of each raw document in a
- contiguous range of length numDocs starting with
- startDocID. Returns the IndexInput (the fieldStream),
- already seeked to the starting point for startDocID.
-
-
- Skip the field. We still have to read some of the information about the field, but can skip past the actual content.
- This will have the most payoff on large fields.
-
-
- A lazy implementation of Fieldable that defers loading of fields until asked for, instead of when the Document is
- loaded.
-
-
-
-
- Sets the boost factor for hits on this field. This value will be
- multiplied into the score of all hits on this field of this
- document.
-

- - The boost is multiplied by {@link Lucene.Net.Documents.Document#GetBoost()} of the document
- containing this field. If a document has multiple fields with the same
- name, all such values are multiplied together. This product is then
- used to compute the norm factor for the field. By
- default, in the {@link
- Lucene.Net.Search.Similarity#ComputeNorm(String,
- FieldInvertState)} method, the boost value is multiplied
- by the {@link
- Lucene.Net.Search.Similarity#LengthNorm(String,
- int)} and then
- rounded by {@link Lucene.Net.Search.Similarity#EncodeNorm(float)} before it is stored in the
- index. One should attempt to ensure that this product does not overflow
- the range of that encoding.
-

- - - - - - -
- - Returns the boost factor for hits for this field. - -

The default value is 1.0. - -

Note: this value is not stored directly with the document in the index. - Documents returned from {@link Lucene.Net.Index.IndexReader#Document(int)} and - {@link Lucene.Net.Search.Hits#Doc(int)} may thus not have the same value present as when - this field was indexed. - -

- - -
- - Returns the name of the field as an interned string.
- For example "date", "title", "body", ...
-
-
- True iff the value of the field is to be stored in the index for return
- with search hits. It is an error for this to be true if a field is
- Reader-valued.
-
-
- True iff the value of the field is to be indexed, so that it may be
- searched on.
-
-
- True iff the value of the field should be tokenized as text prior to
- indexing. Un-tokenized fields are indexed as a single word and may not be
- Reader-valued.
-
-
- True if the value of the field is stored and compressed within the index
-
- True iff the term or terms used to index this field are stored as a term
- vector, available from {@link Lucene.Net.Index.IndexReader#GetTermFreqVector(int,String)}.
- These methods do not provide access to the original content of the field,
- only to terms used to index it. If the original content must be
- preserved, use the stored attribute instead.
-
-
-
- True iff terms are stored as term vector together with their offsets
- (start and end position in source text).
-
-
- True iff terms are stored as term vector together with their token positions.
-
- True iff the value of the field is stored as binary
-
- Return the raw byte[] for the binary field. Note that
- you must also call {@link #getBinaryLength} and {@link
- #getBinaryOffset} to know which range of bytes in this
- returned array belong to the field.
-
- reference to the Field value as byte[].
-
-
- Returns the length of the byte[] segment that is used as the value; if the Field is not binary,
- the returned value is undefined.
-
- length of byte[] segment that represents this Field value
-
-
- Returns the offset into the byte[] segment that is used as the value; if the Field is not binary,
- the returned value is undefined.
-
- index of the first character in byte[] segment that represents this Field value
-
-
- True if norms are omitted for this indexed field
-
- Renamed to {@link #getOmitTermFreqAndPositions}
-
-
-
- Expert:
-
- If set, omit normalization factors associated with this indexed field.
- This effectively disables indexing boosts and length normalization for this field.
-
-
- Renamed to {@link #setOmitTermFreqAndPositions}
-
-
- Expert:
-
- If set, omit term freq, positions and payloads from
- postings for this field.
-

NOTE: While this option reduces storage space - required in the index, it also means any query - requiring positional information, such as {@link - PhraseQuery} or {@link SpanQuery} subclasses will - silently fail to find results. -

-
- - Prints a Field for human consumption. - - - The value of the field in Binary, or null. If null, the Reader value, - String value, or TokenStream value is used. Exactly one of stringValue(), - readerValue(), binaryValue(), and tokenStreamValue() must be set. - - - - The value of the field as a Reader, or null. If null, the String value, - binary value, or TokenStream value is used. Exactly one of stringValue(), - readerValue(), binaryValue(), and tokenStreamValue() must be set. - - - - The value of the field as a TokenStream, or null. If null, the Reader value, - String value, or binary value is used. Exactly one of stringValue(), - readerValue(), binaryValue(), and tokenStreamValue() must be set. - - - - The value of the field as a String, or null. If null, the Reader value, - binary value, or TokenStream value is used. Exactly one of stringValue(), - readerValue(), binaryValue(), and tokenStreamValue() must be set. - - - - This is just a "splitter" class: it lets you wrap two - DocFieldConsumer instances as a single consumer. - - - - A Token's lexical type. The Default value is "word". - - - Returns this Token's lexical type. Defaults to "word". - - - Set the lexical type. - - - - - An {@link Analyzer} that filters {@link LetterTokenizer} - with {@link LowerCaseFilter} - - - - The results of a SpanQueryFilter. Wraps the BitSet and the position information from the SpanQuery - -

- NOTE: This API is still experimental and subject to change. - - -

-
- - - - - - - The bits for the Filter
-
- A List of {@link Lucene.Net.Search.SpanFilterResult.PositionInfo} objects
-
- Use {@link #SpanFilterResult(DocIdSet, List)} instead
-
-
- The DocIdSet for the Filter
-
- A List of {@link Lucene.Net.Search.SpanFilterResult.PositionInfo} objects
-
-
- The first entry in the array corresponds to the first "on" bit.
- Entries increase in document order
-
- A List of PositionInfo objects
-
-
- Use {@link #GetDocIdSet()}
-
-
- Returns the docIdSet
-
-
- A List of {@link Lucene.Net.Search.SpanFilterResult.StartEnd} objects
-
-
- The end position of this match
-
-
- The start position
- The start position of this match
-
-
- Expert: Compares two ScoreDoc objects for sorting.
-

Created: Feb 3, 2004 9:00:16 AM - -

- lucene 1.4 - - $Id: ScoreDocComparator.java 738219 2009-01-27 20:15:21Z mikemccand $ - - use {@link FieldComparator} - -
- - Special comparator for sorting hits according to computed relevance (document score). - - - Special comparator for sorting hits according to index order (document number). - - - Constrains search results to only match those which also match a provided - query. Results are cached, so that searches after the first on the same - index using this filter are much faster. - - - $Id: QueryFilter.java 528298 2007-04-13 00:59:28Z hossman $ - - use a CachingWrapperFilter with QueryWrapperFilter - - - - Wraps another filter's result and caches it. The purpose is to allow - filters to simply filter, and then wrap with this class to add caching. - - - - A transient Filter cache. - - - Filter to cache results of - - - - Use {@link #GetDocIdSet(IndexReader)} instead. - - - - Provide the DocIdSet to be cached, using the DocIdSet provided - by the wrapped Filter. - This implementation returns the given DocIdSet. - - - - Constructs a filter which only matches documents matching - query. - - - - A Query that matches documents containing terms with a specified prefix. A PrefixQuery - is built by QueryParser for input like app*. - -

This query uses the {@link - MultiTermQuery#CONSTANT_SCORE_AUTO_REWRITE_DEFAULT} - rewrite method. -

-
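- For example, a minimal sketch (the field and prefix are illustrative):
- 
-            Query q = new PrefixQuery(new Term("title", "app"));
- 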
- - Constructs a query for terms starting with prefix.
-
- Returns the prefix of this query.
-
- Prints a user-readable version of this query.
-
- Expert: A hit queue for sorting hits by terms in more than one field.
- Uses FieldCache.DEFAULT for maintaining
- internal term lookup tables.
-
- This class will not resolve SortField.AUTO types, and expects the type
- of all SortFields used for construction to already have been resolved.
- {@link SortField#DetectFieldType(IndexReader, String)} is a utility method which
- may be used for field type detection.
-
- NOTE: This API is experimental and might change in
- incompatible ways in the next release.
-
-
- 2.9
-
- $Id:
-
-
-
- Creates a hit queue sorted by the given list of fields.
-

NOTE: The instances returned by this method - pre-allocate a full array of length numHits. - -

- SortField array we are sorting by in priority order (highest - priority first); cannot be null or empty - - The number of hits to retain. Must be greater than zero. - - IOException -
- - Stores the sort criteria being used.
-
- Given a queue Entry, creates a corresponding FieldDoc
- that contains the values used to sort the given document.
- These values are not the raw values out of the index, but the internal
- representation of them. This is so the given search hit can be collated by
- a MultiSearcher with other search hits.
-
-
- The Entry used to create a FieldDoc
-
- The newly created FieldDoc
-
-
- Returns the SortFields being used by this hit queue.
-
- An implementation of {@link FieldValueHitQueue} which is optimized in case
- there is just one comparator.
-
-
- Returns whether a is less relevant than b.
- ScoreDoc
-
- ScoreDoc
-
- true if document a should be sorted after document b.
-
-
- An implementation of {@link FieldValueHitQueue} which is optimized in case
- there is more than one comparator.
-
-
- The TermVectorOffsetInfo class holds information pertaining to a Term in a {@link Lucene.Net.Index.TermPositionVector}'s
- offset information. This offset information is the character offset as set during the Analysis phase (and thus may not be the actual offset in the
- original content).
-
-
- Convenience declaration when creating a {@link Lucene.Net.Index.TermPositionVector} that stores only position information.
-
- The accessor for the ending offset for the term
- The offset
-
-
- The accessor for the starting offset of the term.
-
- The offset
-
-
- Two TermVectorOffsetInfos are equal if both the start and end offsets are the same
- The comparison Object
-
- true if both {@link #GetStartOffset()} and {@link #GetEndOffset()} are the same for both objects.
-
-
- This is a DocFieldConsumer that writes stored fields.
-
- Fills in any hole in the docIDs
-
- The SegmentMerger class combines two or more Segments, represented by an IndexReader ({@link #add}),
- into a single Segment. After adding the appropriate readers, call the merge method to combine the
- segments.

- If the compoundFile flag is set, then the segments will be merged into a compound file. - - -

- - - - -
- - Maximum number of contiguous documents to bulk-copy
- when merging stored fields
-
-
- norms header placeholder
-
- This ctor is used only by test code.
-
-
- The Directory to merge the other segments into
-
- The name of the new segment
-
-
- Add an IndexReader to the collection of readers that are to be merged
-
-
-
- The index of the reader to return
-
- The ith reader to be merged
-
-
- Merges the readers specified by the {@link #add} method into the directory passed to the constructor
- The number of documents that were merged
-
- CorruptIndexException if the index is corrupt
- IOException if there is a low-level IO error
-
-
- Merges the readers specified by the {@link #add} method
- into the directory passed to the constructor.
-
- if false, we will not merge the
- stored fields nor vectors files
-
- The number of documents that were merged
-
- CorruptIndexException if the index is corrupt
- IOException if there is a low-level IO error
-
-
- Close all IndexReaders that have been added.
- Should not be called before merge().
-
- IOException
-
-
- The number of documents in all of the readers
-
- CorruptIndexException if the index is corrupt
- IOException if there is a low-level IO error
-
-
- Merge the TermVectors from each of the segments into the new one.
- IOException
-
-
- Process postings from multiple segments all positioned on the
- same term. Writes out merged entries into freqOutput and
- the proxOutput streams.
-
-
- array of segments
-
- number of cells in the array actually occupied
-
- number of documents across all segments where this term was found
-
- CorruptIndexException if the index is corrupt
- IOException if there is a low-level IO error
-
-
- Records the fact that roughly units amount of work
- have been done since this method was last called.
- When adding time-consuming code into SegmentMerger,
- you should test different values for units to ensure
- that the time in between calls to merge.checkAborted
- is up to ~ 1 second.
-
-
- Useful constants representing filenames and extensions used by Lucene
-
- $rcs = ' $Id: Exp $ ' ;
-
-
- Name of the index segment file
-
- Name of the generation reference file name
-
- Name of the index deletable file (only used in
- pre-lockless indices)
-
-
- Extension of norms file
-
- Extension of freq postings file
-
- Extension of prox postings file
-
- Extension of terms file
-
- Extension of terms index file
-
- Extension of stored fields index file
-
- Extension of stored fields file
-
- Extension of vectors fields file
-
- Extension of vectors documents file
-
- Extension of vectors index file
-
- Extension of compound file
-
- Extension of compound file for doc store files
-
- Extension of deletes
-
- Extension of field infos
-
- Extension of plain norms
-
- Extension of separate norms
-
- Extension of gen file
-
- This array contains all filename extensions used by
- Lucene's index files, with two exceptions, namely the
- extension made up from .f + a number and
- from .s + a number. Also note that
- Lucene's segments_N files do not have any
- filename extension.
-
-
- File extensions that are added to a compound file
- (same as above, minus "del", "gen", "cfs").
-
-
- File extensions of old-style index files
-
- File extensions for term vector support
-
- Computes the full file name from base, extension and
- generation. If the generation is -1, the file name is
- null. <br/>
If it's 0, the file name is &lt;base&gt;.&lt;extension&gt;.
- If it's > 0, the file name is &lt;base&gt;_&lt;generation&gt;.&lt;extension&gt;.
-
-
- -- main part of the file name
-
- -- extension of the filename (including .)
-
- -- generation
-
-
- Returns true if the provided filename is one of the doc
- store files (ends with an extension in
- STORE_INDEX_EXTENSIONS).
-
-
- This is the base class for an in-memory posting list,
- keyed by a Token. {@link TermsHash} maintains a hash
- table holding one instance of this per unique Token.
- Consumers of TermsHash ({@link TermsHashConsumer}) must
- subclass this class with its own concrete class.
- FreqProxTermsWriter.PostingList is a private inner class used
- for the freq/prox postings, and
- TermVectorsTermsWriter.PostingList is a private inner class
- used to hold TermVectors postings.
-
-
- A FilterIndexReader contains another IndexReader, which it
- uses as its basic source of data, possibly transforming the data along the
- way or providing additional functionality. The class
- FilterIndexReader itself simply implements all abstract methods
- of IndexReader with versions that pass all requests to the
- contained index reader. Subclasses of FilterIndexReader may
- further override some of these methods and may also provide additional
- methods and fields.
-

Construct a FilterIndexReader based on the specified base reader. - Directory locking for delete, undeleteAll, and setNorm operations is - left to the base reader.

-

Note that base reader is closed if this FilterIndexReader is closed.

-

- specified base reader. - -
- - - - - - Base class for filtering {@link TermDocs} implementations. - - - Base class for filtering {@link TermPositions} implementations. - - - Base class for filtering {@link TermEnum} implementations. - - - Abstract class for enumerating terms. -
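- For example, a minimal sketch of a pass-through subclass:
- 
-            public class MyFilterReader extends FilterIndexReader {
-              public MyFilterReader(IndexReader in) {
-                super(in);
-              }
-              // override individual methods to transform data as it is read
-            }
- 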

Term enumerations are always ordered by Term.compareTo(). Each term in - the enumeration is greater than all that precede it. -

-
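- For example, a minimal sketch that walks all terms in one field
- (reader is an open IndexReader; the field name is illustrative):
- 
-            TermEnum terms = reader.terms(new Term("body", ""));
-            try {
-              do {
-                Term t = terms.term();
-                if (t == null || !t.field().equals("body")) break;
-                // use t and terms.docFreq() here
-              } while (terms.next());
-            } finally {
-              terms.close();
-            }
- 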
- - Increments the enumeration to the next element. True if one exists. - - - Returns the current Term in the enumeration. - - - Returns the docFreq of the current Term in the enumeration. - - - Closes the enumeration to further activity, freeing resources. - - - Skips terms to the first beyond the current whose value is - greater or equal to target.

Returns true iff there is such - an entry.

Behaves as if written:

-            public boolean skipTo(Term target) {
-            do {
-            if (!next())
-            return false;
-            } while (target > term());
-            return true;
-            }
-            
- Some implementations *could* be considerably more efficient than a linear scan. - Check the implementation to be sure. -
- This method is not performant and will be removed in Lucene 3.0. - Use {@link IndexReader#Terms(Term)} to create a new TermEnum positioned at a - given term. - -
- - A {@link MergeScheduler} that runs each merge using a
- separate thread, up until a maximum number of threads
- ({@link #setMaxThreadCount}), at which point, when a merge is
- needed, the thread(s) that are updating the index will
- pause until one or more merges complete. This is a
- simple way to use concurrency in the indexing process
- without having to create and manage application level
- threads.
-
-
- Sets the max # simultaneous threads that may be
- running. If a merge is necessary yet we already have
- this many threads running, the incoming thread (that
- is calling add/updateDocument) will block until
- a merge thread has completed.
-
-
- Get the max # simultaneous threads that may be running.
-
-
- Return the priority that merge threads run at. By
- default the priority is 1 plus the priority of (ie,
- slightly higher priority than) the first thread that
- calls merge.
-
-
- Return the priority that merge threads run at.
-
- Does the actual merge, by calling {@link IndexWriter#merge}
-
- Create and return a new MergeThread
-
- Called when an exception is hit in a background merge
- thread
-
-
- Used for testing
-
- Used for testing
-
- Used for testing
-
- Used for testing
-
- Used for testing
-
- Holds buffered deletes, by docID, term or query. We
- hold two instances of this class: one for the deletes
- prior to the last flush, the other for deletes after
- the last flush. This is so if we need to abort
- (discard all buffered docs) we can also discard the
- buffered deletes yet keep the deletes done during
- previously flushed segments.
-
-
- An Analyzer that uses {@link WhitespaceTokenizer}.
-
- The payload of a Token. See also {@link Payload}.
-
- Initialize this attribute with no payload.
-
- Initialize this attribute with the given payload.
-
- Returns this Token's payload.
-
- Sets this Token's payload.
-
- A SinkTokenizer can be used to cache Tokens for use in an Analyzer

- WARNING: {@link TeeTokenFilter} and {@link SinkTokenizer} only work with the old TokenStream API. - If you switch to the new API, you need to use {@link TeeSinkTokenFilter} instead, which offers - the same functionality. -

- - - Use {@link TeeSinkTokenFilter} instead - - - -
- - Get the tokens in the internal List. -

- WARNING: Adding tokens to this list requires the {@link #Reset()} method to be called in order for them - to be made available. Also, this Tokenizer does nothing to protect against {@link java.util.ConcurrentModificationException}s - in the case of adds happening while {@link #Next(Lucene.Net.Analysis.Token)} is being called. -

- WARNING: Since this SinkTokenizer can be reset and the cached tokens made available again, do not modify them. Modify clones instead. - -

- A List of {@link Lucene.Net.Analysis.Token}s - -
- - Returns the next token out of the list of cached tokens - The next {@link Lucene.Net.Analysis.Token} in the Sink. - - IOException - - - Override this method to cache only certain tokens, or new tokens based - on the old tokens. - - - The {@link Lucene.Net.Analysis.Token} to add to the sink - - - - Reset the internal data structures to the start at the front of the list of tokens. Should be called - if tokens were added to the list after an invocation of {@link #Next(Token)} - - IOException - - - This analyzer is used to facilitate scenarios where different - fields require different analysis techniques. Use {@link #addAnalyzer} - to add a non-default analyzer on a field name basis. - -

Example usage: - -

-            PerFieldAnalyzerWrapper aWrapper =
-            new PerFieldAnalyzerWrapper(new StandardAnalyzer());
-            aWrapper.addAnalyzer("firstname", new KeywordAnalyzer());
-            aWrapper.addAnalyzer("lastname", new KeywordAnalyzer());
-            
- -

In this example, StandardAnalyzer will be used for all fields except "firstname" - and "lastname", for which KeywordAnalyzer will be used. - -

A PerFieldAnalyzerWrapper can be used like any other analyzer, for both indexing - and query parsing. -

-
- - Constructs with default analyzer.
-
-
- Any fields not specifically
- defined to use a different analyzer will use the one provided here.
-
-
- Constructs with default analyzer and a map of analyzers to use for
- specific fields.
-
-
- Any fields not specifically
- defined to use a different analyzer will use the one provided here.
-
- a Map (String field name to the Analyzer) to be
- used for those fields
-
-
- Defines an analyzer to use for the specified field.
-
-
- field name requiring a non-default analyzer
-
- non-default analyzer to use for field
-
-
- Return the positionIncrementGap from the analyzer assigned to fieldName
-
- Emits the entire input as a single token.
-
- Will be removed in Lucene 3.0. This method is final, as it should
- not be overridden. Delegates to the backwards compatibility layer.
-
-
- Will be removed in Lucene 3.0. This method is final, as it should
- not be overridden. Delegates to the backwards compatibility layer.
-
-
- "Tokenizes" the entire stream as a single token. This is useful
- for data like zip codes, ids, and some product names.
-
-
- Common util methods for dealing with {@link IndexReader}s.
-
-
- Gathers sub-readers from reader into a List.
-
-
-
- Returns the sub IndexReader that contains the given document id.
-
-
- id of document
-
- parent reader
-
- sub reader of parent which contains the specified doc id
-
-
- Returns sub-reader subIndex from reader.
-
-
- parent reader
-
- index of desired sub reader
-
- the sub-reader at subIndex
-
-
- Returns the index of the searcher/reader for document n in the
- array used to construct this searcher/reader.
-
-
- Expert: Delegating scoring implementation. Useful in {@link
- Query#GetSimilarity(Searcher)} implementations, to override only certain
- methods of a Searcher's Similarity implementation.
-

Subclasses implement search scoring. - -

The score of query q for document d correlates to the - cosine-distance or dot-product between document and query vectors in a - - Vector Space Model (VSM) of Information Retrieval. - A document whose vector is closer to the query vector in that model is scored higher. - - The score is computed as follows: - -

- 
-            score(q,d)  =  coord(q,d) · queryNorm(q) ·
-                           SUM over t in q of ( tf(t in d) · idf(t)² · t.getBoost() · norm(t,d) )
- 
- where
- 
-   1. tf(t in d) correlates to the term's frequency,
-      defined as the number of times term t appears in the currently scored document d.
-      Documents that have more occurrences of a given term receive a higher score.
-      The default computation for tf(t in d) in
-      {@link Lucene.Net.Search.DefaultSimilarity#Tf(float) DefaultSimilarity} is:
- 
-            tf(t in d)  =  frequency½
- 
-   2. idf(t) stands for Inverse Document Frequency. This value
-      correlates to the inverse of docFreq
-      (the number of documents in which the term t appears).
-      This means rarer terms give higher contribution to the total score.
-      The default computation for idf(t) in
-      {@link Lucene.Net.Search.DefaultSimilarity#Idf(int, int) DefaultSimilarity} is:
- 
-            idf(t)  =  1 + log( numDocs / (docFreq + 1) )
- 
-   3. coord(q,d)
-      is a score factor based on how many of the query terms are found in the specified document.
-      Typically, a document that contains more of the query's terms will receive a higher score
-      than another document with fewer query terms.
-      This is a search time factor computed in
-      {@link #Coord(int, int) coord(q,d)}
-      by the Similarity in effect at search time.
- 
-   4. queryNorm(q)
-      is a normalizing factor used to make scores between queries comparable.
-      This factor does not affect document ranking (since all ranked documents are multiplied by the same factor),
-      but rather just attempts to make scores from different queries (or even different indexes) comparable.
-      This is a search time factor computed by the Similarity in effect at search time.
-      The default computation in
-      {@link Lucene.Net.Search.DefaultSimilarity#QueryNorm(float) DefaultSimilarity} is:
- 
-            queryNorm(q)  =  queryNorm(sumOfSquaredWeights)  =  1 / sumOfSquaredWeights½
- 
-      The sum of squared weights (of the query terms) is
-      computed by the query {@link Lucene.Net.Search.Weight} object.
-      For example, a {@link Lucene.Net.Search.BooleanQuery boolean query}
-      computes this value as:
- 
-            sumOfSquaredWeights  =  q.getBoost()² · SUM over t in q of ( idf(t) · t.getBoost() )²
- 
-   5. t.getBoost()
-      is a search time boost of term t in the query q as
-      specified in the query text
-      (see query syntax),
-      or as set by application calls to
-      {@link Lucene.Net.Search.Query#SetBoost(float) setBoost()}.
-      Notice that there is really no direct API for accessing a boost of one term in a multi term query,
-      but rather multi terms are represented in a query as multi
-      {@link Lucene.Net.Search.TermQuery TermQuery} objects,
-      and so the boost of a term in the query is accessible by calling the sub-query
-      {@link Lucene.Net.Search.Query#GetBoost() getBoost()}.
- 
-   6. norm(t,d) encapsulates a few (indexing time) boost and length factors:
- 
-      • Document boost - set by calling
-        {@link Lucene.Net.Documents.Document#SetBoost(float) doc.setBoost()}
-        before adding the document to the index.
-      • Field boost - set by calling
-        {@link Lucene.Net.Documents.Fieldable#SetBoost(float) field.setBoost()}
-        before adding the field to a document.
-      • {@link #LengthNorm(String, int) lengthNorm(field)} - computed
-        when the document is added to the index in accordance with the number of tokens
-        of this field in the document, so that shorter fields contribute more to the score.
-        LengthNorm is computed by the Similarity class in effect at indexing.
- 
-      When a document is added to the index, all the above factors are multiplied.
-      If the document has multiple fields with the same name, all their boosts are multiplied together:
- 
-            norm(t,d)  =  doc.getBoost() · lengthNorm(field) ·
-                          PRODUCT over field f in d named as t of f.getBoost()
- 
-      However the resulting norm value is {@link #EncodeNorm(float) encoded} as a single byte
-      before being stored.
-      At search time, the norm byte value is read from the index
-      {@link Lucene.Net.Store.Directory directory} and
-      {@link #DecodeNorm(byte) decoded} back to a float norm value.
-      This encoding/decoding, while reducing index size, comes with the price of
-      precision loss - it is not guaranteed that decode(encode(x)) = x.
-      For instance, decode(encode(0.89)) = 0.75.
-      Also notice that search time is too late to modify this norm part of scoring, e.g. by
-      using a different {@link Similarity} for search.
- 
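- As a worked illustration of the default factors (not part of the API):
- a term occurring 4 times in a document gives tf = 4½ = 2, and a term
- found in 9 of 1,000 documents gives idf = 1 + log(1000/10) ≈ 5.61, so
- that term contributes roughly 2 · 5.61² ≈ 62.9 to the sum above, before
- boosts and norms are applied.
- 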
- - - - - - -
- - Set the default Similarity implementation used by indexing and search - code. - - - - - - - - - Return the default Similarity implementation used by indexing and search - code. - -

This is initially an instance of {@link DefaultSimilarity}. - -

- - - - -
- - Cache of decoded bytes. - - - Decodes a normalization factor stored in an index. - - - - - Returns a table for decoding normalization bytes. - - - - - Compute the normalization value for a field, given the accumulated - state of term processing for this field (see {@link FieldInvertState}). - -

Implementations should calculate a float value based on the field - state and then return that value. - -

For backward compatibility this method by default calls - {@link #LengthNorm(String, int)} passing - {@link FieldInvertState#GetLength()} as the second argument, and - then multiplies this value by {@link FieldInvertState#GetBoost()}.

- -

WARNING: This API is new and experimental and may - suddenly change.

- -

- field name - - current processing state for this field - - the calculated float norm - -
- - Computes the normalization value for a field given the total number of
- terms contained in a field. These values, together with field boosts, are
- stored in an index and multiplied into scores for hits on each field by the
- search code.
-

Matches in longer fields are less precise, so implementations of this - method usually return smaller values when numTokens is large, - and larger values when numTokens is small. - -

Note that the return values are computed under - {@link Lucene.Net.Index.IndexWriter#AddDocument(Lucene.Net.Documents.Document)} - and then stored using - {@link #EncodeNorm(float)}. - Thus they have limited precision, and documents - must be re-indexed if this method is altered. - -

- the name of the field - - the total number of tokens contained in fields named - fieldName of doc. - - a normalization factor for hits on this field of this document - - - - -
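- For example, the default {@link Lucene.Net.Search.DefaultSimilarity}
- implementation returns 1/numTokens½, so a field with four tokens gets a
- length norm of 0.5.
- 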
- - Computes the normalization value for a query given the sum of the squared
- weights of each of the query terms. This value is then multiplied into the
- weight of each query term.
-

This does not affect ranking, but rather just attempts to make scores - from different queries comparable. - -

- the sum of the squares of query term weights - - a normalization factor for query weights - -
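- For example, the default {@link Lucene.Net.Search.DefaultSimilarity}
- implementation returns 1/sumOfSquaredWeights½, so a sum of 4.0 yields a
- query norm of 0.5.
- 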
- - Encodes a normalization factor for storage in an index. - -

The encoding uses a three-bit mantissa, a five-bit exponent, and - the zero-exponent point at 15, thus - representing values from around 7x10^9 to 2x10^-9 with about one - significant decimal digit of accuracy. Zero is also represented. - Negative numbers are rounded up to zero. Values too large to represent - are rounded down to the largest representable value. Positive values too - small to represent are rounded up to the smallest positive representable - value. - -

- - - - -
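- A minimal sketch of the round trip (the 0.75 result is the documented
- example from the class overview above):
- 
-            byte b = Similarity.EncodeNorm(0.89f);
-            float f = Similarity.DecodeNorm(b);  // 0.75f - precision is lost
- 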
- - Computes a score factor based on a term or phrase's frequency in a - document. This value is multiplied by the {@link #Idf(Term, Searcher)} - factor for each term in the query and these products are then summed to - form the initial score for a document. - -

Terms and phrases repeated in a document indicate the topic of the - document, so implementations of this method usually return larger values - when freq is large, and smaller values when freq - is small. - -

The default implementation calls {@link #Tf(float)}. - -

- the frequency of a term within a document - - a score factor based on a term's within-document frequency - -
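- For example, the default {@link Lucene.Net.Search.DefaultSimilarity}
- implementation returns freq½, so a within-document frequency of 4 yields
- tf = 2.
- 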
- - Computes the amount of a sloppy phrase match, based on an edit distance. - This value is summed for each sloppy phrase match in a document to form - the frequency that is passed to {@link #Tf(float)}. - -

A phrase match with a small edit distance to a document passage more - closely matches the document, so implementations of this method usually - return larger values when the edit distance is small and smaller values - when it is large. - -

- - - the edit distance of this sloppy phrase match - - the frequency increment for this match - -
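- For example, the default {@link Lucene.Net.Search.DefaultSimilarity}
- implementation returns 1/(distance + 1), so an exact match (distance 0)
- increments the frequency by 1, and a match with edit distance 3 by 0.25.
- 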
- - Computes a score factor based on a term or phrase's frequency in a - document. This value is multiplied by the {@link #Idf(Term, Searcher)} - factor for each term in the query and these products are then summed to - form the initial score for a document. - -

Terms and phrases repeated in a document indicate the topic of the - document, so implementations of this method usually return larger values - when freq is large, and smaller values when freq - is small. - -

- the frequency of a term within a document - - a score factor based on a term's within-document frequency - -
- - Computes a score factor for a simple term. - -

The default implementation is:

-            return idf(searcher.docFreq(term), searcher.maxDoc());
-            
- - Note that {@link Searcher#MaxDoc()} is used instead of - {@link Lucene.Net.Index.IndexReader#NumDocs()} because it is proportional to - {@link Searcher#DocFreq(Term)} , i.e., when one is inaccurate, - so is the other, and in the same direction. - -
- the term in question - - the document collection being searched - - a score factor for the term - - see {@link #IdfExplain(Term, Searcher)} - -
- - Computes a score factor for a simple term and returns an explanation - for that score factor. - -

- The default implementation uses: - -

-            idf(searcher.docFreq(term), searcher.maxDoc());
-            
- - Note that {@link Searcher#MaxDoc()} is used instead of - {@link Lucene.Net.Index.IndexReader#NumDocs()} because it is - proportional to {@link Searcher#DocFreq(Term)} , i.e., when one is - inaccurate, so is the other, and in the same direction. - -
- the term in question - - the document collection being searched - - an IDFExplain object that includes both an idf score factor - and an explanation for the term. - - IOException -
- - Computes a score factor for a phrase. - -

The default implementation sums the {@link #Idf(Term,Searcher)} factor - for each term in the phrase. - -

- the terms in the phrase - - the document collection being searched - - idf score factor - - see {@link #idfExplain(Collection, Searcher)} - -
- - Computes a score factor for a phrase. - -

- The default implementation sums the idf factor for - each term in the phrase. - -

- the terms in the phrase - - the document collection being searched - - an IDFExplain object that includes both an idf - score factor for the phrase and an explanation - for each term. - - IOException -
- - Computes a score factor based on a term's document frequency (the number - of documents which contain the term). This value is multiplied by the - {@link #Tf(int)} factor for each term in the query and these products are - then summed to form the initial score for a document. - -

Terms that occur in fewer documents are better indicators of topic, so - implementations of this method usually return larger values for rare terms, - and smaller values for common terms. - -

- the number of documents which contain the term - - the total number of documents in the collection - - a score factor based on the term's document frequency - -
- - Computes a score factor based on the fraction of all query terms that a - document contains. This value is multiplied into scores. - -

The presence of a large portion of the query terms indicates a better - match with the query, so implementations of this method usually return - larger values when the ratio between these parameters is large and smaller - values when the ratio between them is small. - -

- the number of query terms matched in the document - - the total number of terms in the query - - a score factor based on term overlap with the query - -
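- For example, the default {@link Lucene.Net.Search.DefaultSimilarity}
- implementation returns overlap / maxOverlap, so a document matching 3 of
- 4 query terms gets a coord factor of 0.75.
- 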
- - Calculate a scoring factor based on the data in the payload. Overriding implementations - are responsible for interpreting what is in the payload. Lucene makes no assumptions about - what is in the byte array. -

- The default implementation returns 1. - -

- The fieldName of the term this payload belongs to - - The payload byte array to be scored - - The offset into the payload array - - The length in the array - - An implementation dependent float to be used as a scoring factor - - - See {@link #ScorePayload(int, String, int, int, byte[], int, int)} - -
- - Calculate a scoring factor based on the data in the payload. Overriding implementations - are responsible for interpreting what is in the payload. Lucene makes no assumptions about - what is in the byte array. -

- The default implementation returns 1. - -

- The docId currently being scored. If this value is {@link #NO_DOC_ID_PROVIDED}, then it should be assumed that the PayloadQuery implementation does not provide document information - - The fieldName of the term this payload belongs to - - The start position of the payload - - The end position of the payload - - The payload byte array to be scored - - The offset into the payload array - - The length in the array - - An implementation dependent float to be used as a scoring factor - - -
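- A minimal sketch of an override that boosts by the first payload byte
- (the mapping is illustrative, not a library default):
- 
-            public class PayloadBoostSimilarity extends DefaultSimilarity {
-              public float scorePayload(int docId, String fieldName,
-                  int start, int end, byte[] payload, int offset, int length) {
-                if (payload == null || length == 0)
-                  return 1.0f;
-                // scale the unsigned first byte into [1.0, 2.0]
-                return 1.0f + (payload[offset] & 0xFF) / 255.0f;
-              }
-            }
- 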
- - Remove this when old API is removed! - - - - Remove this when old API is removed! - - - - Remove this when old API is removed! - - - - The Similarity implementation used by default. - TODO: move back to top when old API is removed! - - - - - Remove this when old API is removed! - - - - Remove this when old API is removed! - - - - Remove this when old API is removed! - - - - Construct a {@link Similarity} that delegates all methods to another. - - - the Similarity implementation to delegate to - - - - A Scorer for queries with a required subscorer - and an excluding (prohibited) sub DocIdSetIterator. -
- This Scorer implements {@link Scorer#SkipTo(int)}, - and it uses the skipTo() on the given scorers. -
-
- - Construct a ReqExclScorer. - The scorer that must match, except where - - indicates exclusion. - - - - use {@link #NextDoc()} instead. - - - - Advance to non excluded doc. -
On entry: -
    -
  • reqScorer != null,
  • -
  • exclScorer != null,
  • -
  • reqScorer was advanced once via next() or skipTo() - and reqScorer.doc() may still be excluded.
  • -
- Advances reqScorer a non excluded required doc, if any. -
- true iff there is a non excluded required doc. - -
- - use {@link #DocID()} instead.
-
-
- Returns the score of the current document matching the query.
- Initially invalid, until {@link #Next()} is called the first time.
-
- The score of the required scorer.
-
-
- use {@link #Advance(int)} instead.
-
-
- A Filter that restricts search results to values that have a matching prefix in a given
- field.
-
-
- Prints a user-readable version of this query.
-
- Position of a term in a document that takes into account the term offset within the phrase.
-
- Go to the next location of this term in the current document, and set
- position as location - offset, so that a
- matching exact phrase is easily identified when all PhrasePositions
- have exactly the same position.
-
- Abstract class for enumerating a subset of all terms. <br/>

Term enumerations are always ordered by Term.compareTo(). Each term in - the enumeration is greater than all that precede it. -

-
- - the current term - - - the delegate enum - to set this member use {@link #setEnum} - - - Equality compare on the term - - - Equality measure on the term - - - Indicates the end of the enumeration has been reached - - - use this method to set the actual TermEnum (e.g. in ctor), - it will be automatically positioned on the first matching term. - - - - Returns the docFreq of the current Term in the enumeration. - Returns -1 if no Term matches or all terms have been enumerated. - - - - Increments the enumeration to the next element. True if one exists. - - - Returns the current Term in the enumeration. - Returns null if no Term matches or all terms have been enumerated. - - - - Closes the enumeration to further activity, freeing resources. - - - Expert: Maintains caches of term values. - -

Created: May 19, 2004 11:13:14 AM - -

- lucene 1.4 - - $Id: FieldCache.java 807841 2009-08-25 22:27:31Z markrmiller $ - - - -
- - Expert: Stores term text values and document ordering data.
-
- All the term values, in natural order.
-
- For each document, an index into the lookup array.
-
- Creates one of these objects
-
- Indicator for StringIndex values in the cache.
-
- Expert: The cache used internally by sorting and range query classes.
-
- The default parser for byte values, which are encoded by {@link Byte#toString(byte)}
-
- The default parser for short values, which are encoded by {@link Short#toString(short)}
-
- The default parser for int values, which are encoded by {@link Integer#toString(int)}
-
- The default parser for float values, which are encoded by {@link Float#toString(float)}
-
- The default parser for long values, which are encoded by {@link Long#toString(long)}
-
- The default parser for double values, which are encoded by {@link Double#toString(double)}
-
- A parser instance for int values encoded by {@link NumericUtils#IntToPrefixCoded(int)}, e.g. when indexed
- via {@link NumericField}/{@link NumericTokenStream}.
-
-
- A parser instance for float values encoded with {@link NumericUtils}, e.g. when indexed
- via {@link NumericField}/{@link NumericTokenStream}.
-
-
- A parser instance for long values encoded by {@link NumericUtils#LongToPrefixCoded(long)}, e.g. when indexed
- via {@link NumericField}/{@link NumericTokenStream}.
-
-
- A parser instance for double values encoded with {@link NumericUtils}, e.g. when indexed
- via {@link NumericField}/{@link NumericTokenStream}.
-
-
- Interface to parse bytes from document fields.
-
-
- Marker interface as super-interface to all parsers. It
- is used to specify a custom parser to {@link
- SortField#SortField(String, FieldCache.Parser)}.
-
-
- Return a single Byte representation of this field's value.
-
- Interface to parse shorts from document fields.
-
-
- Return a short representation of this field's value.
-
- Interface to parse ints from document fields.
-
-
- Return an integer representation of this field's value.
-
- Interface to parse floats from document fields.
-
-
- Return a float representation of this field's value.
-
- Interface to parse longs from document fields.
-
- Use {@link FieldCache.LongParser}, this will be removed in Lucene 3.0
-
-
- Return a long representation of this field's value.
-
- Interface to parse doubles from document fields.
-
- Use {@link FieldCache.DoubleParser}, this will be removed in Lucene 3.0
-
-
- Return a double representation of this field's value.
-
- A query that wraps a filter and simply returns a constant score equal to the
- query boost for every document in the filter.
-
-
- $Id: ConstantScoreQuery.java 807180 2009-08-24 12:26:43Z markrmiller $
-
-
- Returns the encapsulated filter
-
- Prints a user-readable version of this query.
-
- Returns true if o is equal to this.
-
- Returns a hash code value for this object.
-
- use {@link #NextDoc()} instead.
-
-
- use {@link #DocID()} instead.
-
-
- use {@link #Advance(int)} instead.
-
-
- Compares {@link Lucene.Net.Index.TermVectorEntry}s first by frequency and then by
- the term (case-sensitive)
-
-
- Holds all per thread, per field state.
-
- Declare what fields to load normally and what fields to load lazily
-
-
-
- Pass in the Set of {@link Field} names to load and the Set of {@link Field} names to load lazily. If both are null, the
- Document will not have any {@link Field} on it.
-
- A Set of {@link String} field names to load. <br/>
May be empty, but not null - - A Set of {@link String} field names to load lazily. May be empty, but not null - - - - Indicate whether to load the field with the given name or not. If the {@link Field#Name()} is not in either of the - initializing Sets, then {@link Lucene.Net.Documents.FieldSelectorResult#NO_LOAD} is returned. If a Field name - is in both fieldsToLoad and lazyFieldsToLoad, lazy has precedence. - - - The {@link Field} name to check - - The {@link FieldSelectorResult} - - - - A field is a section of a Document. Each field has two parts, a name and a - value. Values may be free text, provided as a String or as a Reader, or they - may be atomic keywords, which are not further processed. Such keywords may - be used to represent dates, URLs, etc. Fields are optionally stored in the - index, so that they may be returned with hits on the document. - - - - The value of the field as a String, or null. If null, the Reader value or - binary value is used. Exactly one of stringValue(), - readerValue(), and getBinaryValue() must be set. - - - - The value of the field as a Reader, or null. If null, the String value or - binary value is used. Exactly one of stringValue(), - readerValue(), and getBinaryValue() must be set. - - - - The value of the field in Binary, or null. If null, the Reader value, - or String value is used. Exactly one of stringValue(), - readerValue(), and getBinaryValue() must be set. - - This method must allocate a new byte[] if - the {@link AbstractField#GetBinaryOffset()} is non-zero - or {@link AbstractField#GetBinaryLength()} is not the - full length of the byte[]. Please use {@link - AbstractField#GetBinaryValue()} instead, which simply - returns the byte[]. - - - - The TokenStream for this field to be used when indexing, or null. If null, the Reader value - or String value is analyzed to produce the indexed tokens. - - - -

Expert: change the value of this field. This can
- be used during indexing to re-use a single Field
- instance to improve indexing speed by avoiding GC cost
- of new'ing and reclaiming Field instances. Typically
- a single {@link Document} instance is re-used as
- well. This helps most on small documents.
-
- Each Field instance should only be used once
- within a single {@link Document} instance. See ImproveIndexingSpeed
- for details.
-
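-
- As an illustrative sketch (not part of the original javadoc; the record
- type, field names, and writer variable are hypothetical, written against
- the Java Lucene 2.9 API these docs derive from), the re-use pattern is:
-
-            Document doc = new Document();
-            Field idField = new Field("id", "", Field.Store.YES, Field.Index.NOT_ANALYZED);
-            Field bodyField = new Field("body", "", Field.Store.NO, Field.Index.ANALYZED);
-            doc.add(idField);
-            doc.add(bodyField);
-            for (SourceRecord rec : records) {    // hypothetical input records
-              idField.setValue(rec.id);           // re-use the same Field instances
-              bodyField.setValue(rec.body);       // instead of new'ing them per document
-              writer.addDocument(doc);            // writer: an IndexWriter opened elsewhere
-            }
-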
- - Expert: change the value of this field. See setValue(String). - - - Expert: change the value of this field. See setValue(String). - - - Expert: change the value of this field. See setValue(String). - - - Expert: change the value of this field. See setValue(String). - use {@link #setTokenStream} - - - - Expert: sets the token stream to be used for indexing and causes isIndexed() and isTokenized() to return true. - May be combined with stored values from stringValue() or binaryValue() - - - - Create a field by specifying its name, value and how it will - be saved in the index. Term vectors will not be stored in the index. - - - The name of the field - - The string to process - - Whether value should be stored in the index - - Whether the field should be indexed, and if so, if it should - be tokenized before indexing - - NullPointerException if name or value is null - IllegalArgumentException if the field is neither stored nor indexed - - - Create a field by specifying its name, value and how it will - be saved in the index. - - - The name of the field - - The string to process - - Whether value should be stored in the index - - Whether the field should be indexed, and if so, if it should - be tokenized before indexing - - Whether term vector should be stored - - NullPointerException if name or value is null - IllegalArgumentException in any of the following situations: -
-   • the field is neither stored nor indexed
-   • the field is not indexed but termVector is TermVector.YES
-
- - Create a field by specifying its name, value and how it will - be saved in the index. - - - The name of the field - - Whether to .intern() name or not - - The string to process - - Whether value should be stored in the index - - Whether the field should be indexed, and if so, if it should - be tokenized before indexing - - Whether term vector should be stored - - NullPointerException if name or value is null - IllegalArgumentException in any of the following situations: -
-   • the field is neither stored nor indexed
-   • the field is not indexed but termVector is TermVector.YES
-
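-
- A short sketch of how these rules play out (illustrative only, not from the
- original javadoc; the field names and the text variable are hypothetical):
-
-            new Field("sku", "A-100", Field.Store.YES, Field.Index.NO);   // ok: stored only
-            new Field("body", text, Field.Store.NO, Field.Index.ANALYZED,
-                      Field.TermVector.WITH_POSITIONS_OFFSETS);           // ok: indexed, with vectors
-            new Field("junk", "x", Field.Store.NO, Field.Index.NO);       // IllegalArgumentException
-            new Field("tag", "x", Field.Store.YES, Field.Index.NO,
-                      Field.TermVector.YES);                              // IllegalArgumentException
-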
- - Create a tokenized and indexed field that is not stored. Term vectors will - not be stored. The Reader is read only when the Document is added to the index, - i.e. you may not close the Reader until {@link IndexWriter#AddDocument(Document)} - has been called. - - - The name of the field - - The reader with the content - - NullPointerException if name or reader is null - - - Create a tokenized and indexed field that is not stored, optionally with - storing term vectors. The Reader is read only when the Document is added to the index, - i.e. you may not close the Reader until {@link IndexWriter#AddDocument(Document)} - has been called. - - - The name of the field - - The reader with the content - - Whether term vector should be stored - - NullPointerException if name or reader is null - - - Create a tokenized and indexed field that is not stored. Term vectors will - not be stored. This is useful for pre-analyzed fields. - The TokenStream is read only when the Document is added to the index, - i.e. you may not close the TokenStream until {@link IndexWriter#AddDocument(Document)} - has been called. - - - The name of the field - - The TokenStream with the content - - NullPointerException if name or tokenStream is null - - - Create a tokenized and indexed field that is not stored, optionally with - storing term vectors. This is useful for pre-analyzed fields. - The TokenStream is read only when the Document is added to the index, - i.e. you may not close the TokenStream until {@link IndexWriter#AddDocument(Document)} - has been called. - - - The name of the field - - The TokenStream with the content - - Whether term vector should be stored - - NullPointerException if name or tokenStream is null - - - Create a stored field with binary value. Optionally the value may be compressed. - - - The name of the field - - The binary value - - How value should be stored (compressed or not) - - IllegalArgumentException if store is Store.NO - - - Create a stored field with binary value. Optionally the value may be compressed. - - - The name of the field - - The binary value - - Starting offset in value where this Field's bytes are - - Number of bytes to use for this Field, starting at offset - - How value should be stored (compressed or not) - - IllegalArgumentException if store is Store.NO - - - Specifies whether and how a field should be stored. - - - Store the original field value in the index in a compressed form. This is - useful for long documents and for binary valued fields. - - Please use {@link CompressionTools} instead. - For string fields that were previously indexed and stored using compression, - the new way to achieve this is: First add the field indexed-only (no store) - and additionally using the same field name as a binary, stored field - with {@link CompressionTools#compressString}. - - - - Store the original field value in the index. This is useful for short texts - like a document's title which should be displayed with the results. The - value is stored in its original form, i.e. no analyzer is used before it is - stored. - - - - Do not store the field value in the index. - - - Specifies whether and how a field should be indexed. - - - Do not index the field value. This field can thus not be searched, - but one can still access its contents provided it is - {@link Field.Store stored}. - - - - Index the tokens produced by running the field's - value through an Analyzer. This is useful for - common text. 
- - - - this has been renamed to {@link #ANALYZED} - - - - Index the field's value without using an Analyzer, so it can be searched. - As no analyzer is used the value will be stored as a single term. This is - useful for unique Ids like product numbers. - - - - This has been renamed to {@link #NOT_ANALYZED} - - - - Expert: Index the field's value without an Analyzer, - and also disable the storing of norms. Note that you - can also separately enable/disable norms by calling - {@link Field#setOmitNorms}. No norms means that - index-time field and document boosting and field - length normalization are disabled. The benefit is - less memory usage as norms take up one byte of RAM - per indexed field for every document in the index, - during searching. Note that once you index a given - field with norms enabled, disabling norms will - have no effect. In other words, for this to have the - above described effect on a field, all instances of - that field must be indexed with NOT_ANALYZED_NO_NORMS - from the beginning. - - - - This has been renamed to - {@link #NOT_ANALYZED_NO_NORMS} - - - - Expert: Index the tokens produced by running the - field's value through an Analyzer, and also - separately disable the storing of norms. See - {@link #NOT_ANALYZED_NO_NORMS} for what norms are - and why you may want to disable them. - - - - Specifies whether and how a field should have term vectors. - - - Do not store term vectors. - - - Store the term vectors of each document. A term vector is a list - of the document's terms and their number of occurrences in that document. - - - - Store the term vector + token position information - - - - - - - Store the term vector + Token offset information - - - - - - - Store the term vector + Token position and offset information - - - - - - - - - - - Works in conjunction with the SinkTokenizer to provide the ability to set aside tokens - that have already been analyzed. This is useful in situations where multiple fields share - many common analysis steps and then go their separate ways. -

- It is also useful for doing things like entity extraction or proper noun analysis as - part of the analysis workflow and saving off those tokens for use in another field. - -

-            SinkTokenizer sink1 = new SinkTokenizer();
-            SinkTokenizer sink2 = new SinkTokenizer();
-            TokenStream source1 = new TeeTokenFilter(new TeeTokenFilter(new WhitespaceTokenizer(reader1), sink1), sink2);
-            TokenStream source2 = new TeeTokenFilter(new TeeTokenFilter(new WhitespaceTokenizer(reader2), sink1), sink2);
-            TokenStream final1 = new LowerCaseFilter(source1);
-            TokenStream final2 = source2;
-            TokenStream final3 = new EntityDetect(sink1);
-            TokenStream final4 = new URLDetect(sink2);
-            d.add(new Field("f1", final1));
-            d.add(new Field("f2", final2));
-            d.add(new Field("f3", final3));
-            d.add(new Field("f4", final4));
-            
- In this example, sink1 and sink2 will both get tokens from both - reader1 and reader2 after whitespace tokenization, - and we can further wrap any of these in extra analysis; more "sources" can be inserted if desired. - It is important that tees are consumed before sinks (in the above example, the tee field names must be - less than the sink field names). - Note that the EntityDetect and URLDetect TokenStreams are for the example only and do not currently exist in Lucene -

- - See LUCENE-1058. -

- WARNING: {@link TeeTokenFilter} and {@link SinkTokenizer} only work with the old TokenStream API. - If you switch to the new API, you need to use {@link TeeSinkTokenFilter} instead, which offers - the same functionality. -

- - - Use {@link TeeSinkTokenFilter} instead - - -
- - Removes stop words from a token stream. - - - Construct a token stream filtering the given input. - Use {@link #StopFilter(boolean, TokenStream, String[])} instead - - - - Construct a token stream filtering the given input. - true if token positions should record the removed stop words - - input TokenStream - - array of stop words - - Use {@link #StopFilter(boolean, TokenStream, Set)} instead. - - - - Constructs a filter which removes words from the input - TokenStream that are named in the array of words. - - Use {@link #StopFilter(boolean, TokenStream, String[], boolean)} instead - - - - Constructs a filter which removes words from the input - TokenStream that are named in the array of words. - - true if token positions should record the removed stop words - - input TokenStream - - array of stop words - - true if case is ignored - - Use {@link #StopFilter(boolean, TokenStream, Set, boolean)} instead. - - - - Construct a token stream filtering the given input. - If stopWords is an instance of {@link CharArraySet} (true if - makeStopSet() was used to construct the set) it will be directly used - and ignoreCase will be ignored since CharArraySet - directly controls case sensitivity. -

- If stopWords is not an instance of {@link CharArraySet}, - a new CharArraySet will be constructed and ignoreCase will be - used to specify the case sensitivity of that set. - -

- - - The set of Stop Words. - - Ignore case when stopping. - - Use {@link #StopFilter(boolean, TokenStream, Set, boolean)} instead -
- - Construct a token stream filtering the given input. - If stopWords is an instance of {@link CharArraySet} (true if - makeStopSet() was used to construct the set) it will be directly used - and ignoreCase will be ignored since CharArraySet - directly controls case sensitivity. -

- If stopWords is not an instance of {@link CharArraySet}, - a new CharArraySet will be constructed and ignoreCase will be - used to specify the case sensitivity of that set. - -

- true if token positions should record the removed stop words - - Input TokenStream - - The set of Stop Words. - - Ignore case when stopping. -
- - Constructs a filter which removes words from the input - TokenStream that are named in the Set. - - - - - Use {@link #StopFilter(boolean, TokenStream, Set)} instead - - - - Constructs a filter which removes words from the input - TokenStream that are named in the Set. - - - true if token positions should record the removed stop words - - Input stream - - The set of Stop Words. - - - - - - Builds a Set from an array of stop words, - appropriate for passing into the StopFilter constructor. - This permits this stopWords construction to be cached once when - an Analyzer is constructed. - - - passing false to ignoreCase - - - - Builds a Set from an array of stop words, - appropriate for passing into the StopFilter constructor. - This permits this stopWords construction to be cached once when - an Analyzer is constructed. - - - passing false to ignoreCase - - - - - An array of stopwords - - If true, all words are lower cased first. - - a Set containing the words - - - - - A List of Strings representing the stopwords - - if true, all words are lower cased first - - A Set containing the words - - - - Returns the next input Token whose term() is not a stop word. - - - - - Please specify this when you create the StopFilter - - - - Returns version-dependent default for enablePositionIncrements. Analyzers - that embed StopFilter use this method when creating the StopFilter. Prior - to 2.9, this returns {@link #getEnablePositionIncrementsDefault}. On 2.9 - or later, it returns true. - - - - Set the default position increments behavior of every StopFilter created - from now on. -

- Note: behavior of a single StopFilter instance can be modified with - {@link #SetEnablePositionIncrements(boolean)}. This static method allows - control over behavior of classes using StopFilters internally, for - example {@link Lucene.Net.Analysis.Standard.StandardAnalyzer - StandardAnalyzer} if used with the no-arg ctor. -

- Default: false. -

- - - Please specify this when you create the StopFilter - -
- - - - - - If true, this StopFilter will preserve - positions of the incoming tokens (ie, accumulate and - set position increments of the removed stop tokens). - Generally, true is best as it does not - lose information (positions of the original tokens) - during indexing. - -

When set, the position increment of the token following a stopped - (omitted) token is incremented. -

NOTE: be sure to also - set {@link QueryParser#setEnablePositionIncrements} if - you use QueryParser to create queries. -

-
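-
- A minimal usage sketch of the non-deprecated overloads described above
- (illustrative only; reader is assumed to be an open java.io.Reader):
-
-            Set stopSet = StopFilter.makeStopSet(new String[] {"the", "a", "an"}, true);
-            TokenStream ts = new WhitespaceTokenizer(reader);
-            // enablePositionIncrements = true; the trailing ignoreCase flag is
-            // ignored here because makeStopSet already built a CharArraySet
-            ts = new StopFilter(true, ts, stopSet, true);
-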
- - Normalizes token text to lower case. - - - $Id: LowerCaseFilter.java 797665 2009-07-24 21:45:48Z buschmi $ - - - - CharReader is a Reader wrapper. It reads chars from - Reader and outputs {@link CharStream}, defining an - identity {@link #CorrectOffset} method that - simply returns the provided offset. - - - - A simple class that stores Strings as char[]'s in a - hash table. Note that this is not a general purpose - class. For example, it cannot remove items from the - set, nor does it resize its hash table to be smaller, - etc. It is designed to be quick to test if a char[] - is in the set without the necessity of converting it - to a String first. - - - - Create set with enough capacity to hold startSize - terms - - - - Create set from a Collection of char[] or String - - - Create set from entries - - - true if the len chars of text starting at off - are in the set - - - - true if the System.String is in the set - - - Returns true if the String is in the set - - - Add this String into the set - - - Add this char[] directly to the set. - If ignoreCase is true for this Set, the text array will be directly modified. - The user should never modify this text array after calling this method. - - - - Returns an unmodifiable {@link CharArraySet}. This makes it possible to provide - unmodifiable views of internal sets for "read-only" use. - - - a set for which the unmodifiable set is returned. - - a new unmodifiable {@link CharArraySet}. - - NullPointerException - if the given set is null. - - - - Adds all of the elements in the specified collection to this collection - - - Removes all elements from the set - - - Removes from this set all of its elements that are contained in the specified collection - - - Retains only the elements in this set that are contained in the specified collection - - - The Iterator<String> for this set. Strings are constructed on the fly, so - use nextCharArray for more efficient access. - - - - do not modify the returned char[] - - - Returns the next String, as a Set<String> would... - use nextCharArray() for better efficiency. - - - - Efficient unmodifiable {@link CharArraySet}. This implementation does not - delegate calls to a given {@link CharArraySet} like - {@link Collections#UnmodifiableSet(java.util.Set)} does. Instead it passes - the internal representation of a {@link CharArraySet} to a super - constructor and overrides all mutators. - - - - Stores and iterates over sorted integers in compressed form in RAM.
- The code for compressing the differences between ascending integers was - borrowed from {@link Lucene.Net.Store.IndexInput} and - {@link Lucene.Net.Store.IndexOutput}. -

- NOTE: this class assumes the stored integers are doc Ids (hence why it - extends {@link DocIdSet}). Therefore its {@link #Iterator()} assumes {@link - DocIdSetIterator#NO_MORE_DOCS} can be used as a sentinel. If you intend to use - this value, then make sure it's not used during search flow. -

-
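-
- An illustrative sketch (not part of the original javadoc) of building a
- list from already-sorted doc ids and iterating it:
-
-            SortedVIntList list = new SortedVIntList(new int[] {3, 5, 7, 42});
-            DocIdSetIterator it = list.iterator();
-            while (it.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
-              System.out.println(it.docID());     // prints 3, 5, 7, 42
-            }
-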
- - When a BitSet has fewer than 1 in BITS2VINTLIST_SIZE bits set, - a SortedVIntList representing the index numbers of the set bits - will be smaller than that BitSet. - - - - Create a SortedVIntList from all elements of an array of integers. - - - A sorted array of non negative integers. - - - - Create a SortedVIntList from an array of integers. - An array of sorted non negative integers. - - The number of integers to be used from the array. - - - - Create a SortedVIntList from a BitSet. - A bit set representing a set of integers. - - - - Create a SortedVIntList from an OpenBitSet. - A bit set representing a set of integers. - - - - Create a SortedVIntList. - An iterator providing document numbers as a set of integers. - This DocIdSetIterator is iterated completely when this constructor - is called and it must provide the integers in non - decreasing order. - - - - The total number of sorted integers. - - - - The size of the byte array storing the compressed sorted integers. - - - - This DocIdSet implementation is cacheable. - - - An iterator over the sorted integers. - - - - use {@link #DocID()} instead. - - - - use {@link #NextDoc()} instead. - - - - use {@link #Advance(int)} instead. - - - - A ScorerDocQueue maintains a partial ordering of its Scorers such that the - least Scorer can always be found in constant time. Put()'s and pop()'s - require log(size) time. The ordering is by Scorer.doc(). - - - - Create a ScorerDocQueue with a maximum size. - - - Adds a Scorer to a ScorerDocQueue in log(size) time. - If one tries to add more Scorers than maxSize - a RuntimeException (ArrayIndexOutOfBound) is thrown. - - - - Adds a Scorer to the ScorerDocQueue in log(size) time if either - the ScorerDocQueue is not full, or not lessThan(scorer, top()). - - - - true if scorer is added, false otherwise. - - - - Returns the least Scorer of the ScorerDocQueue in constant time. - Should not be used when the queue is empty. - - - - Returns document number of the least Scorer of the ScorerDocQueue - in constant time. - Should not be used when the queue is empty. - - - - Removes and returns the least scorer of the ScorerDocQueue in log(size) - time. - Should not be used when the queue is empty. - - - - Removes the least scorer of the ScorerDocQueue in log(size) time. - Should not be used when the queue is empty. - - - - Should be called when the scorer at top changes doc() value. - Still log(n) worst case, but it's at least twice as fast to
-            { pq.top().change(); pq.adjustTop(); }
-            
instead of
-            { o = pq.pop(); o.change(); pq.push(o); }
-            
-
-
- - Returns the number of scorers currently stored in the ScorerDocQueue. - - - Removes all entries from the ScorerDocQueue. - - - An "open" BitSet implementation that allows direct access to the array of words - storing the bits. -

- Unlike java.util.BitSet, the fact that bits are packed into an array of longs - is part of the interface. This allows efficient implementation of other algorithms - by someone other than the author. It also allows one to efficiently implement - alternate serialization or interchange formats. -

- OpenBitSet is faster than java.util.BitSet in most operations - and *much* faster at calculating cardinality of sets and results of set operations. - It can also handle sets of larger cardinality (up to 64 * 2**32-1) -

- The goals of OpenBitSet are the fastest implementation possible, and - maximum code reuse. Extra safety and encapsulation - may always be built on top, but if that's built in, the cost can never be removed (and - hence people re-implement their own version in order to get better performance). - If you want a "safe", totally encapsulated (and slower and limited) BitSet - class, use java.util.BitSet. -

-

Performance Results
-
- Test system: Pentium 4, Sun Java 1.5_06 -server -Xbatch -Xmx64M
- BitSet size = 1,000,000
- Results are java.util.BitSet time divided by OpenBitSet time.
-
-             cardinality  intersect_count  union  nextSetBit  get   iterator
-   50% full  3.36         3.96             1.44   1.46        1.99  1.58
-    1% full  3.31         3.90                    1.04              0.99
-
- Test system: AMD Opteron, 64 bit linux, Sun Java 1.5_06 -server -Xbatch -Xmx64M
- BitSet size = 1,000,000
- Results are java.util.BitSet time divided by OpenBitSet time.
-
-             cardinality  intersect_count  union  nextSetBit  get   iterator
-   50% full  2.50         3.50             1.00   1.03        1.12  1.25
-    1% full  2.51         3.49                    1.00              1.02
-
- $Id$
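-
- An illustrative usage sketch (not part of the original javadoc):
-
-            OpenBitSet bits = new OpenBitSet(1000000);
-            bits.set(3);                          // grows the set if necessary
-            bits.fastSet(999);                    // caller guarantees index < capacity
-            long card = bits.cardinality();       // number of set bits, here 2
-            int first = bits.nextSetBit(0);       // 3; returns -1 when no more set bits
-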
- - Constructs an OpenBitSet large enough to hold numBits. - - - - - - - Constructs an OpenBitSet from an existing long[]. -
- The first 64 bits are in long[0], - with bit index 0 at the least significant bit, and bit index 63 at the most significant. - Given a bit index, - the word containing it is long[index/64], and it is at bit number index%64 within that word. -

- numWords are the number of elements in the array that contain - set bits (non-zero longs). - numWords should be <= bits.length, and - any existing words in the array at position >= numWords should be zero. - -

-
- - This DocIdSet implementation is cacheable. - - - Returns the current capacity in bits (1 greater than the index of the last bit) - - - Returns the current capacity of this set. Included for - compatibility. This is *not* equal to {@link #cardinality} - - - - Returns true if there are no set bits - - - Expert: returns the long[] storing the bits - - - Expert: sets a new long[] to use as the bit storage - - - Expert: gets the number of longs in the array that are in use - - - Expert: sets the number of longs in the array that are in use - - - Returns true or false for the specified bit index. - - - Returns true or false for the specified bit index. - The index should be less than the OpenBitSet size - - - - Returns true or false for the specified bit index - - - Returns true or false for the specified bit index. - The index should be less than the OpenBitSet size. - - - - returns 1 if the bit is set, 0 if not. - The index should be less than the OpenBitSet size - - - - sets a bit, expanding the set size if necessary - - - Sets the bit at the specified index. - The index should be less than the OpenBitSet size. - - - - Sets the bit at the specified index. - The index should be less than the OpenBitSet size. - - - - Sets a range of bits, expanding the set size if necessary - - - lower index - - one-past the last bit to set - - - - clears a bit. - The index should be less than the OpenBitSet size. - - - - clears a bit. - The index should be less than the OpenBitSet size. - - - - clears a bit, allowing access beyond the current set size without changing the size. - - - Clears a range of bits. Clearing past the end does not change the size of the set. - - - lower index - - one-past the last bit to clear - - - - Clears a range of bits. Clearing past the end does not change the size of the set. - - - lower index - - one-past the last bit to clear - - - - Sets a bit and returns the previous value. - The index should be less than the OpenBitSet size. - - - - Sets a bit and returns the previous value. - The index should be less than the OpenBitSet size. - - - - flips a bit. - The index should be less than the OpenBitSet size. - - - - flips a bit. - The index should be less than the OpenBitSet size. - - - - flips a bit, expanding the set size if necessary - - - flips a bit and returns the resulting bit value. - The index should be less than the OpenBitSet size. - - - - flips a bit and returns the resulting bit value. - The index should be less than the OpenBitSet size. - - - - Flips a range of bits, expanding the set size if necessary - - - lower index - - one-past the last bit to flip - - - - the number of set bits - - - - Returns the popcount or cardinality of the intersection of the two sets. - Neither set is modified. - - - - Returns the popcount or cardinality of the union of the two sets. - Neither set is modified. - - - - Returns the popcount or cardinality of "a and not b" - or "intersection(a, not(b))". - Neither set is modified. - - - - Returns the popcount or cardinality of the exclusive-or of the two sets. - Neither set is modified. - - - - Returns the index of the first set bit starting at the index specified. - -1 is returned if there are no more set bits. - - - - Returns the index of the first set bit starting at the index specified. - -1 is returned if there are no more set bits. - - - - this = this AND other - - - this = this OR other - - - Remove all elements set in other. 
this = this AND_NOT other - - - this = this XOR other - - - returns true if the sets have any elements in common - - - Expand the long[] with the size given as a number of words (64 bit longs). - getNumWords() is unchanged by this call. - - - - Ensure that the long[] is big enough to hold numBits, expanding it if necessary. - getNumWords() is unchanged by this call. - - - - Lowers numWords, the number of words in use, - by checking for trailing zero words. - - - - returns the number of 64 bit words it would take to hold numBits - - - returns true if both sets have the same bits set - - - Construct an OpenBitSetDISI with its bits set - from the doc ids of the given DocIdSetIterator. - Also give a maximum size one larger than the largest doc id for which a - bit may ever be set on this OpenBitSetDISI. - - - - Construct an OpenBitSetDISI with no bits set, and a given maximum size - one larger than the largest doc id for which a bit may ever be set - on this OpenBitSetDISI. - - - - Perform an inplace OR with the doc ids from a given DocIdSetIterator, - setting the bit for each such doc id. - These doc ids should be smaller than the maximum size passed to the - constructor. - - - - Perform an inplace AND with the doc ids from a given DocIdSetIterator, - leaving only the bits set for which the doc ids are in common. - These doc ids should be smaller than the maximum size passed to the - constructor. - - - - Perform an inplace NOT with the doc ids from a given DocIdSetIterator, - clearing all the bits for each such doc id. - These doc ids should be smaller than the maximum size passed to the - constructor. - - - - Perform an inplace XOR with the doc ids from a given DocIdSetIterator, - flipping all the bits for each such doc id. - These doc ids should be smaller than the maximum size passed to the - constructor. - - - - Provides methods for sanity checking that entries in the FieldCache - are not wasteful or inconsistent. -

-

- Lucene 2.9 introduced numerous enhancements into how the FieldCache - is used by the low levels of Lucene searching (for Sorting and - ValueSourceQueries) to improve both the speed for Sorting, as well - as reopening of IndexReaders. But these changes have shifted the - usage of FieldCache from "top level" IndexReaders (frequently a - MultiReader or DirectoryReader) down to the leaf level SegmentReaders. - As a result, existing applications that directly access the FieldCache - may find RAM usage increase significantly when upgrading to 2.9 or - later. This class provides an API for these applications (or their - unit tests) to check at run time if the FieldCache contains "insane" - usages of the FieldCache. -

-

- EXPERIMENTAL API: This API is considered extremely advanced and - experimental. It may be removed or altered w/o warning in future releases - of Lucene. -

-

- - - - - - -
- - If set, will be used to estimate size for all CacheEntry objects - dealt with. - - - - Quick and dirty convenience method - - - - - Quick and dirty convenience method that instantiates an instance with - "good defaults" and uses it to test the CacheEntry[] - - - - - - Tests a CacheEntry[] for indication of "insane" cache usage. -

- NOTE: FieldCache CreationPlaceholder objects are ignored. - (:TODO: is this a bad idea? are we masking a real problem?) -
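-
- For example, a test might invoke the convenience method above like this
- (an illustrative sketch; the error-stream printing is arbitrary):
-
-            FieldCacheSanityChecker.Insanity[] problems =
-                FieldCacheSanityChecker.checkSanity(FieldCache.DEFAULT);
-            for (int i = 0; i < problems.length; i++) {
-              System.err.println(problems[i].toString()); // multi-line: type, msg, entries
-            }
-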

-

-
- - Internal helper method used by check that iterates over - valMismatchKeys and generates a Collection of Insanity - instances accordingly. The MapOfSets are used to populate - the Insanity objects. - - - - - - Internal helper method used by check that iterates over - the keys of readerFieldToValIds and generates a Collection - of Insanity instances whenever two (or more) ReaderField instances are - found that have an ancestry relationship. - - - - - - - Checks if the seed is an IndexReader, and if so will walk - the hierarchy of subReaders building up a list of the objects - returned by obj.getFieldCacheKey() - - - - Simple pair object for using "readerKey + fieldName" as a Map key - - - Simple container for a collection of related CacheEntry objects that - in conjunction with each other represent some "insane" usage of the - FieldCache. - - - - Type of insane behavior this object represents - - - Description of the insane behavior - - - CacheEntry objects which suggest a problem - - - Multi-line representation of this Insanity object, starting with - the Type and Msg, followed by each CacheEntry.toString() on its - own line prefaced by a tab character - - - - An enumeration of the different types of "insane" behavior that - may be detected in a FieldCache. - - - - - - - - - - - Indicates an overlap in cache usage on a given field - in sub/super readers. - - - -

- Indicates entries have the same reader+fieldname but - different cached values. This can happen if different datatypes, - or parsers are used -- and while it's not necessarily a bug - it's typically an indication of a possible problem. -

-

- NOTE: Only the reader, fieldname, and cached value are actually - tested -- if two cache entries have different parsers or datatypes but - the cached values are the same Object (== not just equal()) this method - does not consider that a red flag. This allows for subtle variations - in the way a Parser is specified (null vs DEFAULT_LONG_PARSER, etc...) -

-

-
- - Indicates an expected bit of "insanity". This may be useful for - clients that wish to preserve/log information about insane usage - but indicate that it was expected. - - - - Subclass of FilteredTermEnum for enumerating all terms that match the - specified range parameters. -

- Term enumerations are always ordered by Term.compareTo(). Each term in - the enumeration is greater than all that precede it. -

- 2.9 - -
- - Enumerates all terms greater than or equal to lowerTerm - but less than or equal to upperTerm. - - If an endpoint is null, it is said to be "open". Either or both - endpoints may be open. Open endpoints may not be exclusive - (you can't select all but the first or last term without - explicitly specifying the term to exclude.) - - - - - An interned field that holds both lower and upper terms. - - The term text at the lower end of the range - - The term text at the upper end of the range - - If true, the lowerTerm is included in the range. - - If true, the upperTerm is included in the range. - - The collator to use to collate index Terms, to determine their - membership in the range bounded by lowerTerm and - upperTerm. - - - IOException - - - Matches the union of its clauses. - - - Construct a SpanOrQuery merging the provided clauses. - - - Return the clauses whose spans are matched. - - - Returns a collection of all terms matched by this query. - use extractTerms instead - - - - - -

A {@link Query} that matches numeric values within a - specified range. To use this, you must first index the - numeric values using {@link NumericField} (expert: {@link - NumericTokenStream}). If your terms are instead textual, - you should use {@link TermRangeQuery}. {@link - NumericRangeFilter} is the filter equivalent of this - query.

- -

You create a new NumericRangeQuery with the static - factory methods, eg: - -

-            Query q = NumericRangeQuery.newFloatRange("weight",
-            new Float(0.03f), new Float(0.10f),
-            true, true);
-            
- - matches all documents whose float valued "weight" field - ranges from 0.3 to 0.10, inclusive. - -

The performance of NumericRangeQuery is much better - than the corresponding {@link TermRangeQuery} because the - number of terms that must be searched is usually far - fewer, thanks to trie indexing, described below.

- -

You can optionally specify a precisionStep - when creating this query. This is necessary if you've - changed this configuration from its default (4) during - indexing. Lower values consume more disk space but speed - up searching. Suitable values are between 1 and - 8. A good starting point to test is 4, - which is the default value for all Numeric* - classes. See below for - details. - -

This query defaults to {@linkplain - MultiTermQuery#CONSTANT_SCORE_AUTO_REWRITE_DEFAULT} for - 32 bit (int/float) ranges with precisionStep <8 and 64 - bit (long/double) ranges with precisionStep <6. - Otherwise it uses {@linkplain - MultiTermQuery#CONSTANT_SCORE_FILTER_REWRITE} as the - number of terms is likely to be high. With precision - steps of <4, this query can be run with one of the - BooleanQuery rewrite methods without changing - BooleanQuery's default max clause count. - -

NOTE: This API is experimental and - might change in incompatible ways in the next release. - -

How it works

- -

See the publication about panFMP, - where this algorithm was described (referred to as TrieRangeQuery): - -

Schindler, U, Diepenbroek, M, 2008. - Generic XML-based Framework for Metadata Portals. - Computers & Geosciences 34 (12), 1947-1955. - doi:10.1016/j.cageo.2008.02.023
- -

A quote from this paper: Because Apache Lucene is a full-text - search engine and not a conventional database, it cannot handle numerical ranges - (e.g., field value is inside user defined bounds, even dates are numerical values). - We have developed an extension to Apache Lucene that stores - the numerical values in a special string-encoded format with variable precision - (all numerical values like doubles, longs, floats, and ints are converted to - lexicographic sortable string representations and stored with different precisions - (for a more detailed description of how the values are stored, - see {@link NumericUtils}). A range is then divided recursively into multiple intervals for searching: - The center of the range is searched only with the lowest possible precision in the trie, - while the boundaries are matched more exactly. This reduces the number of terms dramatically.

- -

For the variant that stores long values in 8 different precisions (each reduced by 8 bits) that - uses a lowest precision of 1 byte, the index contains only a maximum of 256 distinct values in the - lowest precision. Overall, a range could consist of a theoretical maximum of - 7*255*2 + 255 = 3825 distinct terms (when there is a term for every distinct value of an - 8-byte-number in the index and the range covers almost all of them; a maximum of 255 distinct values is used - because it would always be possible to reduce the full 256 values to one term with degraded precision). - In practice, we have seen up to 300 terms in most cases (index with 500,000 metadata records - and a uniform value distribution).

- -

Precision Step

-

You can choose any precisionStep when encoding values. - Lower step values mean more precision and so more terms in the index (and the index gets larger). - On the other hand, the maximum number of terms to match is reduced, which optimizes query speed. - The formula to calculate the maximum term count is: -

-            n = [ (bitsPerValue/precisionStep - 1) * (2^precisionStep - 1 ) * 2 ] + (2^precisionStep - 1 )
-            
-

(this formula is only correct when bitsPerValue/precisionStep is an integer; - in other cases, the value must be rounded up and the last summand must contain the modulo of the division as - precision step). - For longs stored using a precision step of 4, n = 15*15*2 + 15 = 465, and for a precision - step of 2, n = 31*3*2 + 3 = 189. But the faster search speed is reduced by more seeking - in the term enum of the index. Because of this, the ideal precisionStep value can only - be found out by testing. Important: You can index with a lower precision step value and test search speed - using a multiple of the original step value.

- -

Good values for precisionStep are depending on usage and data type: -

-   • The default for all data types is 4, which is used when no precisionStep is given.
-   • Ideal value in most cases for 64 bit data types (long, double) is 6 or 8.
-   • Ideal value in most cases for 32 bit data types (int, float) is 4.
-   • Steps >64 for long/double and >32 for int/float produce one token
-     per value in the index and querying is as slow as a conventional {@link TermRangeQuery}. But it can be used
-     to produce fields that are solely used for sorting (in this case simply use {@link Integer#MAX_VALUE} as
-     precisionStep). Using {@link NumericField NumericFields} for sorting
-     is ideal, because building the field cache is much faster than with text-only numbers.
-     Sorting is also possible with range query optimized fields using one of the above precisionSteps.
-

Comparisons of the different types of RangeQueries on an index with about 500,000 docs showed - that {@link TermRangeQuery} in boolean rewrite mode (with raised {@link BooleanQuery} clause count) - took about 30-40 secs to complete, {@link TermRangeQuery} in constant score filter rewrite mode took 5 secs - and executing this class took <100ms to complete (on an Opteron64 machine, Java 1.5, 8 bit - precision step). This query type was developed for a geographic portal, where the performance for - e.g. bounding boxes or exact date/time stamps is important.

- -

- 2.9 - - -
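-
- The factory methods documented below accept null bounds for half-open
- ranges; an illustrative sketch (the field names are hypothetical):
-
-            // timestamp >= 100, unbounded above, default precisionStep (4)
-            Query newer = NumericRangeQuery.newLongRange("timestamp", Long.valueOf(100L), null, true, false);
-            // size in [10, 500], explicit precisionStep of 6
-            Query sized = NumericRangeQuery.newIntRange("size", 6, Integer.valueOf(10), Integer.valueOf(500), true, true);
-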
- - Factory that creates a NumericRangeQuery that queries a long - range using the given precisionStep. - You can have half-open ranges (which are in fact </≤ or >/≥ queries) - by setting the min or max value to null. By setting inclusive to false, it will - match all documents excluding the bounds; with inclusive on, the boundaries are hits, too. - - - - Factory that creates a NumericRangeQuery that queries a long - range using the default precisionStep {@link NumericUtils#PRECISION_STEP_DEFAULT} (4). - You can have half-open ranges (which are in fact </≤ or >/≥ queries) - by setting the min or max value to null. By setting inclusive to false, it will - match all documents excluding the bounds; with inclusive on, the boundaries are hits, too. - - - - Factory that creates a NumericRangeQuery that queries an int - range using the given precisionStep. - You can have half-open ranges (which are in fact </≤ or >/≥ queries) - by setting the min or max value to null. By setting inclusive to false, it will - match all documents excluding the bounds; with inclusive on, the boundaries are hits, too. - - - - Factory that creates a NumericRangeQuery that queries an int - range using the default precisionStep {@link NumericUtils#PRECISION_STEP_DEFAULT} (4). - You can have half-open ranges (which are in fact </≤ or >/≥ queries) - by setting the min or max value to null. By setting inclusive to false, it will - match all documents excluding the bounds; with inclusive on, the boundaries are hits, too. - - - - Factory that creates a NumericRangeQuery that queries a double - range using the given precisionStep. - You can have half-open ranges (which are in fact </≤ or >/≥ queries) - by setting the min or max value to null. By setting inclusive to false, it will - match all documents excluding the bounds; with inclusive on, the boundaries are hits, too. - - - - Factory that creates a NumericRangeQuery that queries a double - range using the default precisionStep {@link NumericUtils#PRECISION_STEP_DEFAULT} (4). - You can have half-open ranges (which are in fact </≤ or >/≥ queries) - by setting the min or max value to null. By setting inclusive to false, it will - match all documents excluding the bounds; with inclusive on, the boundaries are hits, too. - - - - Factory that creates a NumericRangeQuery that queries a float - range using the given precisionStep. - You can have half-open ranges (which are in fact </≤ or >/≥ queries) - by setting the min or max value to null. By setting inclusive to false, it will - match all documents excluding the bounds; with inclusive on, the boundaries are hits, too. - - - - Factory that creates a NumericRangeQuery that queries a float - range using the default precisionStep {@link NumericUtils#PRECISION_STEP_DEFAULT} (4). - You can have half-open ranges (which are in fact </≤ or >/≥ queries) - by setting the min or max value to null. By setting inclusive to false, it will - match all documents excluding the bounds; with inclusive on, the boundaries are hits, too. - - - - Returns the field name for this query - - - Returns true if the lower endpoint is inclusive - - - Returns true if the upper endpoint is inclusive - - - Returns the lower value of this range query - - - Returns the upper value of this range query - - - - Lucene.Net specific. Needed for Serialization - - - - - - - Lucene.Net specific. Needed for deserialization - - - - - - Subclass of FilteredTermEnum for enumerating all terms that match the - sub-ranges for trie range queries. -

- WARNING: This term enumeration is not guaranteed to be always ordered by - {@link Term#compareTo}. - The ordering depends on how {@link NumericUtils#splitLongRange} and - {@link NumericUtils#splitIntRange} generates the sub-ranges. For - {@link MultiTermQuery} ordering is not relevant. -

-
- - this is a dummy, it is not used by this class. - - Checks whether the current upper bound is reached; - this also updates the term count for statistics. - In contrast to {@link FilteredTermEnum}, a return value - of false ends iterating the current enum - and forwards to the next sub-range. - - - Increments the enumeration to the next element. True if one exists. - - - Closes the enumeration to further activity, freeing resources. - - - Expert: Callback for {@link #splitLongRange}. - You need to override only one of the methods. -

NOTE: This is a very low-level interface, - the method signatures may change in later versions. -

-
- - This is a helper class to generate prefix-encoded representations for numerical values - and supplies converters to represent float/double values as sortable integers/longs. - -

To quickly execute range queries in Apache Lucene, a range is divided recursively - into multiple intervals for searching: The center of the range is searched only with - the lowest possible precision in the trie, while the boundaries are matched - more exactly. This reduces the number of terms dramatically. - -

This class generates terms to achieve this: First the numerical integer values need to - be converted to strings. For that, integer values (32 bit or 64 bit) are made unsigned - and the bits are converted to ASCII chars, 7 bits at a time. The resulting string is - sortable like the original integer value. Each value is also prefixed - (in the first char) by the shift value (number of bits removed) used - during encoding. -

To also index floating point numbers, this class supplies two methods to convert them - to integer values by changing their bit layout: {@link #doubleToSortableLong}, - {@link #floatToSortableInt}. You will have no precision loss by - converting floating point numbers to integers and back (only that the integer form - is not usable). Other data types like dates can easily be converted to longs or ints (e.g. - date to long: {@link java.util.Date#getTime}). -

For easy usage, the trie algorithm is implemented for indexing inside - {@link NumericTokenStream} that can index int, long, - float, and double. For querying, - {@link NumericRangeQuery} and {@link NumericRangeFilter} implement the query part - for the same data types. - -

This class can also be used to generate lexicographically sortable (according to - {@link String#compareTo(String)}) representations of numeric data types for other - usages (e.g. sorting). -

NOTE: This API is experimental and - might change in incompatible ways in the next release. - -

- 2.9 - -
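-
- An illustrative round-trip sketch (not part of the original javadoc):
-
-            long sortable = NumericUtils.doubleToSortableLong(3.14d);
-            String coded = NumericUtils.longToPrefixCoded(sortable);   // full precision, shift 0
-            double back = NumericUtils.sortableLongToDouble(NumericUtils.prefixCodedToLong(coded));
-            // back == 3.14d exactly; the round trip loses no precision
-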
- - The default precision step used by {@link NumericField}, {@link NumericTokenStream}, - {@link NumericRangeQuery}, and {@link NumericRangeFilter} - - - - Expert: The maximum term length (used for char[] buffer size) - for encoding long values. - - - - - - Expert: The maximum term length (used for char[] buffer size) - for encoding int values. - - - - - - Expert: Longs are stored at lower precision by shifting off lower bits. The shift count is - stored as SHIFT_START_LONG+shift in the first character - - - - Expert: Integers are stored at lower precision by shifting off lower bits. The shift count is - stored as SHIFT_START_INT+shift in the first character - - - - Expert: Returns prefix coded bits after reducing the precision by shift bits. - This method is used by {@link NumericTokenStream}. - the numeric value - - how many bits to strip from the right - - that will contain the encoded chars, must be at least of {@link #BUF_SIZE_LONG} - length - - number of chars written to buffer - - - - Expert: Returns prefix coded bits after reducing the precision by shift bits. - This method is used by {@link LongRangeBuilder}. - the numeric value - - how many bits to strip from the right - - - - This is a convenience method that returns prefix coded bits of a long without - reducing the precision. It can be used to store the full precision value as a - stored field in the index. -

To decode, use {@link #prefixCodedToLong}. -

-
- - Expert: Returns prefix coded bits after reducing the precision by shift bits. - This method is used by {@link NumericTokenStream}. - the numeric value - - how many bits to strip from the right - - that will contain the encoded chars, must be at least of {@link #BUF_SIZE_INT} - length - - number of chars written to buffer - - - - Expert: Returns prefix coded bits after reducing the precision by shift bits. - This method is used by {@link IntRangeBuilder}. - the numeric value - - how many bits to strip from the right - - - - This is a convenience method that returns prefix coded bits of an int without - reducing the precision. It can be used to store the full precision value as a - stored field in the index. -

To decode, use {@link #prefixCodedToInt}. -

-
- - Returns a long from prefixCoded characters. - Rightmost bits will be zero for lower precision codes. - This method can be used to decode e.g. a stored field. - NumberFormatException if the supplied string is - not correctly prefix encoded. - - - - - - Returns an int from prefixCoded characters. - Rightmost bits will be zero for lower precision codes. - This method can be used to decode e.g. a stored field. - NumberFormatException if the supplied string is - not correctly prefix encoded. - - - - - - Converts a double value to a sortable signed long. - The value is converted by getting its IEEE 754 floating-point "double format" - bit layout and then some bits are swapped, to be able to compare the result as long. - By this the precision is not reduced, but the value can easily be used as a long. - - - - - - Convenience method: this just returns: - longToPrefixCoded(doubleToSortableLong(val)) - - - - Converts a sortable long back to a double. - - - - - Convenience method: this just returns: - sortableLongToDouble(prefixCodedToLong(val)) - - - - Converts a float value to a sortable signed int. - The value is converted by getting its IEEE 754 floating-point "float format" - bit layout and then some bits are swapped, to be able to compare the result as int. - By this the precision is not reduced, but the value can easily be used as an int. - - - - - - Convenience method: this just returns: - intToPrefixCoded(floatToSortableInt(val)) - - - - Converts a sortable int back to a float. - - - - - Convenience method: this just returns: - sortableIntToFloat(prefixCodedToInt(val)) - - - - Expert: Splits a long range recursively. - You may implement a builder that adds clauses to a - {@link Lucene.Net.Search.BooleanQuery} for each call to its - {@link LongRangeBuilder#AddRange(String,String)} - method. -

This method is used by {@link NumericRangeQuery}. -

-
- - Expert: Splits an int range recursively. - You may implement a builder that adds clauses to a - {@link Lucene.Net.Search.BooleanQuery} for each call to its - {@link IntRangeBuilder#AddRange(String,String)} - method. -

This method is used by {@link NumericRangeQuery}. -

-
- - This helper does the splitting for both 32 and 64 bit. - - Helper that delegates to the correct range builder - - Expert: Callback for {@link #splitLongRange}. - You need to override only one of the methods. -

NOTE: This is a very low-level interface, - the method signatures may change in later versions. -

-
- - Override this method if you want to receive the already prefix encoded range bounds. - You can directly build classical (inclusive) range queries from them. - - - - Override this method if you want to receive the raw long range bounds. - You can use this for e.g. debugging purposes (print out range bounds). - - - - Expert: Callback for {@link #splitIntRange}. - You need to override only one of the methods. -

NOTE: This is a very low-level interface, - the method signatures may change in later versions. -

-
- - Override this method if you want to receive the already prefix encoded range bounds. - You can directly build classical range (inclusive) queries from them. - - - - Override this method if you want to receive the raw int range bounds. - You can use this for e.g. debugging purposes (print out range bounds). - - - - Subclass of FilteredTermEnum for enumerating all terms that are similar - to the specified filter term. - -

Term enumerations are always ordered by Term.compareTo(). Each term in - the enumeration is greater than all that precede it. -

-
- - Creates a FuzzyTermEnum with an empty prefix and a minSimilarity of 0.5f. -

- After calling the constructor the enumeration is already pointing to the first - valid term if such a term exists. - -

- - - - - IOException - - -
- - Creates a FuzzyTermEnum with an empty prefix. -

- After calling the constructor the enumeration is already pointing to the first - valid term if such a term exists. - -

- - - - - - - IOException - - -
- - Constructor for enumeration of all terms from specified reader which share a prefix of - length prefixLength with term and which have a fuzzy similarity > - minSimilarity. -

- After calling the constructor the enumeration is already pointing to the first - valid term if such a term exists. - -

- Delivers terms. - - Pattern term. - - Minimum required similarity for terms from the reader. Default value is 0.5f. - - Length of required common prefix. Default value is 0. - - IOException -
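-
- An illustrative sketch of driving the enumeration by hand (the field and
- term are hypothetical; reader is an open IndexReader):
-
-            FuzzyTermEnum e = new FuzzyTermEnum(reader, new Term("body", "lucene"), 0.6f, 2);
-            try {
-              do {
-                Term t = e.term();
-                if (t == null) break;                           // enumeration exhausted
-                System.out.println(t.text() + " " + e.difference());
-              } while (e.next());
-            } finally {
-              e.close();
-            }
-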
- - The termCompare method in FuzzyTermEnum uses Levenshtein distance to - calculate the distance between the given term and the comparing term. - - - - Finds and returns the smallest of three integers - - -

Similarity returns a number that is 1.0f or less (including negative numbers) - based on how similar the Term is to the target term. It returns - exactly 0.0f when 

-            editDistance > maximumEditDistance
- Otherwise it returns: -
-            1 - (editDistance / length)
- where length is the length of the shortest term (text or target), including the - prefix that is identical, and editDistance is the Levenshtein distance for - the two words.

- -

Embedded within this algorithm is a fail-fast Levenshtein distance - algorithm. The fail-fast algorithm differs from the standard Levenshtein - distance algorithm in that it is aborted if it is discovered that the - minimum distance between the words is greater than some threshold. - 

To calculate the maximum distance threshold we use the following formula: -

-            (1 - minimumSimilarity) * length
- where length is the shortest term including any prefix that is not part of the - similarity comparison. This formula was derived by solving for what maximum value - of distance returns false for the following statements: -
-            similarity = 1 - ((float)distance / (float) (prefixLength + Math.min(textlen, targetlen)));
-            return (similarity > minimumSimilarity);
- where distance is the Levenshtein distance for the two words. -

-

Levenshtein distance (also known as edit distance) is a measure of similarity - between two strings where the distance is measured as the number of character - deletions, insertions or substitutions required to transform one string to - the other string. - 

- the target word or phrase - - the similarity, 0.0 or less indicates that it matches less than the required - threshold and 1.0 indicates that the text and target are identical - -
- - Grow the second dimension of the array, so that we can calculate the - Levenshtein difference. - - The max Distance is the maximum Levenshtein distance for the text - compared to some other value that results in a score that is - better than the minimum similarity. - - the length of the "other value" - - the maximum Levenshtein distance that we care about - - - - Abstract decorator class for a DocIdSet implementation - that provides an on-demand filtering/validation - mechanism on a given DocIdSet. - 

- - Technically, this same functionality could be achieved - with ChainedFilter (under contrib/misc), however the - benefit of this class is that it never materializes the full - bitset for the filter. Instead, the {@link #match} - method is invoked on-demand, per docID visited during - searching. If you know few docIDs will be visited, and - the logic behind {@link #match} is relatively costly, - this may be a better way to filter than ChainedFilter. - 

- - -
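- A minimal sketch of such a decorator (C#; this assumes the 2.9-era FilteredDocIdSet base class with Match exposed as a public override in the .NET port, and the predicate shown is purely a placeholder):
-            public class EvenDocsDocIdSet : FilteredDocIdSet {
-                public EvenDocsDocIdSet(DocIdSet innerSet) : base(innerSet) {
-                }
-                // Invoked on demand, once per docID visited during searching;
-                // the full bitset for the filter is never materialized.
-                public override bool Match(int docid) {
-                    return docid % 2 == 0; // placeholder validation logic
-                }
-            }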
- - Constructor. - Underlying DocIdSet - - - - This DocIdSet implementation is cacheable if the inner set is cacheable. - - - Validation method to determine whether a docid should be in the result set. - docid to be tested - - true if input docid should be in the result set, false otherwise. - - - - Implementation of the contract to build a DocIdSetIterator. - - - - - - - This interface is obsolete, use {@link FieldCache} instead. - - - Use {@link FieldCache}, this will be removed in Lucene 3.0 - - - - - Use {@link FieldCache#DEFAULT}; this will be removed in Lucene 3.0 - - - - Use {@link FieldCache.LongParser}, this will be removed in Lucene 3.0 - - - - Use {@link FieldCache.DoubleParser}, this will be removed in Lucene 3.0 - - - - Provides access to the stored term vector of - a document field. The vector consists of the name of the field, an array of the terms that occur in the field of the - {@link Lucene.Net.Documents.Document} and a parallel array of frequencies. Thus, getTermFrequencies()[5] corresponds with the - frequency of getTerms()[5], assuming there are at least 5 terms in the Document. - - - - The {@link Lucene.Net.Documents.Fieldable} name. - The name of the field this vector is associated with. - - - - - The number of terms in the term vector. - - - - An array of term texts in ascending order. - - - - Array of term frequencies. Locations of the array correspond one to one - to the terms in the array obtained from getTerms - method. Each location in the array contains the number of times this - term occurs in the document or the document field. - - - - Return an index in the term numbers array returned from - getTerms at which the term with the specified - term appears. If this term does not appear in the array, - return -1. - - - - Just like indexOf(int) but searches for a number of terms - at the same time. Returns an array that has the same size as the number - of terms searched for, each slot containing the result of searching for - that term number. - - - array containing terms to look for - - index in the array where the list of terms starts - - the number of terms in the list - - - - - The number of the field this vector is associated with - - - - Extends TermFreqVector to provide additional information about - positions in which each of the terms is found. A TermPositionVector does not necessarily - contain both positions and offsets, but at least one of these arrays exists. - - - - Returns an array of positions in which the term is found. - Terms are identified by the index at which each term's number appears in the - term String array obtained from the indexOf method. - May return null if positions have not been stored. - - - - Returns an array of TermVectorOffsetInfo in which the term is found. - May return null if offsets have not been stored. - - - - - - The position in the array to get the offsets from - - An array of TermVectorOffsetInfo objects or the empty list - - - - Returns an array of TermVectorOffsetInfo in which the term is found. - - - The position in the array to get the offsets from - - An array of TermVectorOffsetInfo objects or the empty list - - - - - - Returns an array of positions in which the term is found. - Terms are identified by the index at which each term's number appears in the - term String array obtained from the indexOf method. - - - - An IndexReader which reads indexes with multiple segments. - - - Construct reading the named set of readers. - - - This constructor is only used for {@link #Reopen()} - - - Version number when this IndexReader was opened. &#13;
- - - Checks if the index is optimized (if it has a single segment and no deletions) - true if the index is optimized; false otherwise - - - - Tries to acquire the WriteLock on this directory. This method is only valid if this IndexReader is the directory - owner. - - - StaleReaderException if the index has changed since this reader was opened - CorruptIndexException if the index is corrupt - Lucene.Net.Store.LockObtainFailedException - if another writer has this index open (write.lock could not be - obtained) - - IOException if there is a low-level IO error - - - - - - - Commit changes resulting from delete, undeleteAll, or setNorm operations -&#13;

- If an exception is hit, then either no changes or all changes will have been committed to the index (transactional - semantics). - -

- IOException if there is a low-level IO error -
- - Returns the directory this index resides in. - - - Expert: return the IndexCommit that this reader has opened. -

-

WARNING: this API is new and experimental and may suddenly change.

-

-
- - - - - - Optimized implementation. - - - An IndexReader which reads multiple, parallel indexes. Each index added - must have the same number of documents, but typically each contains - different fields. Each document contains the union of the fields of all - documents with the same document number. When searching, matches for a - query term are from the first index added that has the field. - -

This is useful, e.g., with collections that have large fields which - change rarely and small fields that change more frequently. The smaller - fields may be re-indexed in a new index and both indexes may be searched - together. - -

Warning: It is up to you to make sure all indexes - are created and modified the same way. For example, if you add - documents to one index, you need to add the same documents in the - same order to the other indexes. Failure to do so will result in - undefined behavior. -

-
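- For instance, a sketch of searching two parallel indexes as one (C#; dirLarge and dirSmall are placeholders for two indexes built with identical document numbering):
-            ParallelReader parallelReader = new ParallelReader();
-            parallelReader.Add(IndexReader.Open(dirLarge)); // large, rarely re-indexed fields
-            parallelReader.Add(IndexReader.Open(dirSmall)); // small, frequently re-indexed fields
-            IndexSearcher searcher = new IndexSearcher(parallelReader);
-            // ... searches see the union of the fields of both indexes ...
-            searcher.Close();
-            parallelReader.Close(); // by default this also closes the subreaders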
- - Construct a ParallelReader. -

Note that all subreaders are closed if this ParallelReader is closed.

-

-
- - Construct a ParallelReader. - indicates whether the subreaders should be closed - when this ParallelReader is closed - - - - Add an IndexReader. - IOException if there is a low-level IO error - - - Add an IndexReader whose stored fields will not be returned. This can - accelerate search when stored fields are only needed from a subset of - the IndexReaders. - - - IllegalArgumentException if not all indexes contain the same number - of documents - - IllegalArgumentException if not all indexes have the same value - of {@link IndexReader#MaxDoc()} - - IOException if there is a low-level IO error - - - Tries to reopen the subreaders. -&#13;
- If one or more subreaders could be re-opened (i.e., subReader.reopen() - returned a new instance != subReader), then a new ParallelReader instance - is returned; otherwise this instance is returned. -&#13;

- A re-opened instance might share one or more subreaders with the old - instance. Index modification operations result in undefined behavior - when performed before the old instance is closed. - (see {@link IndexReader#Reopen()}). -

- If subreaders are shared, then the reference count of those - readers is increased to ensure that the subreaders remain open - until the last referring reader is closed. - -

- CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error -
- - Checks recursively if all subreaders are up to date. - - - Checks recursively if all subindexes are optimized - - - Not implemented. - UnsupportedOperationException - - - - - - - An IndexWriter creates and maintains an index. -

The create argument to the {@link - #IndexWriter(Directory, Analyzer, boolean) constructor} determines - whether a new index is created, or whether an existing index is - opened. Note that you can open an index with create=true - even while readers are using the index. The old readers will - continue to search the "point in time" snapshot they had opened, - and won't see the newly created index until they re-open. There are - also {@link #IndexWriter(Directory, Analyzer) constructors} - with no create argument which will create a new index - if there is not already an index at the provided path and otherwise - open the existing index.

-

In either case, documents are added with {@link #AddDocument(Document) - addDocument} and removed with {@link #DeleteDocuments(Term)} or {@link - #DeleteDocuments(Query)}. A document can be updated with {@link - #UpdateDocument(Term, Document) updateDocument} (which just deletes - and then adds the entire document). When finished adding, deleting - and updating documents, {@link #Close() close} should be called.

- -
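- A minimal add/update/delete lifecycle might look like this (C# sketch; the directory, analyzer choice and field values are placeholder assumptions):
-            IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), true,
-                                                 IndexWriter.MaxFieldLength.LIMITED);
-            Document doc = new Document();
-            doc.Add(new Field("id", "42", Field.Store.YES, Field.Index.NOT_ANALYZED));
-            doc.Add(new Field("body", "hello lucene", Field.Store.NO, Field.Index.ANALYZED));
-            writer.AddDocument(doc);                          // buffered in RAM until a flush
-            writer.UpdateDocument(new Term("id", "42"), doc); // atomic delete-then-add
-            writer.DeleteDocuments(new Term("id", "42"));
-            writer.Commit();                                  // make changes visible to new readers
-            writer.Close();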

These changes are buffered in memory and periodically - flushed to the {@link Directory} (during the above method - calls). A flush is triggered when there are enough - buffered deletes (see {@link #setMaxBufferedDeleteTerms}) - or enough added documents since the last flush, whichever - is sooner. For the added documents, flushing is triggered - either by RAM usage of the documents (see {@link - #setRAMBufferSizeMB}) or the number of added documents. - The default is to flush when RAM usage hits 16 MB. For - best indexing speed you should flush by RAM usage with a - large RAM buffer. Note that flushing just moves the - internal buffered state in IndexWriter into the index, but - these changes are not visible to IndexReader until either - {@link #Commit()} or {@link #close} is called. A flush may - also trigger one or more segment merges which by default - run with a background thread so as not to block the - addDocument calls (see below - for changing the {@link MergeScheduler}).

- -

The optional autoCommit argument to the {@link - #IndexWriter(Directory, boolean, Analyzer) constructors} - controls visibility of the changes to {@link IndexReader} - instances reading the same index. When this is - false, changes are not visible until {@link - #Close()} or {@link #Commit()} is called. Note that changes will still be - flushed to the {@link Directory} as new files, but are - not committed (no new segments_N file is written - referencing the new files, nor are the files sync'd to stable storage) - until {@link #Close()} or {@link #Commit()} is called. If something - goes terribly wrong (for example the JVM crashes), then - the index will reflect none of the changes made since the - last commit, or the starting state if commit was not called. - You can also call {@link #Rollback()}, which closes the writer - without committing any changes, and removes any index - files that had been flushed but are now unreferenced. - This mode is useful for preventing readers from refreshing - at a bad time (for example after you've done all your - deletes but before you've done your adds). It can also be - used to implement simple single-writer transactional - semantics ("all or none"). You can do a two-phase commit - by calling {@link #PrepareCommit()} - followed by {@link #Commit()}. This is necessary when - Lucene is working with an external resource (for example, - a database) and both must either commit or rollback the - transaction.

-

When autoCommit is true then - the writer will periodically commit on its own. [Deprecated: Note that in 3.0, IndexWriter will - no longer accept autoCommit=true (it will be hardwired to - false). You can always call {@link #Commit()} yourself - when needed]. There is - no guarantee when exactly an auto commit will occur (it - used to be after every flush, but it is now after every - completed merge, as of 2.4). If you want to force a - commit, call {@link #Commit()}, or, close the writer. Once - a commit has finished, newly opened {@link IndexReader} instances will - see the changes to the index as of that commit. When - running in this mode, be careful not to refresh your - readers while optimize or segment merges are taking place - as this can tie up substantial disk space.

-

-

Regardless of autoCommit, an {@link - IndexReader} or {@link Lucene.Net.Search.IndexSearcher} will only see the - index as of the "point in time" that it was opened. Any - changes committed to the index after the reader was opened - are not visible until the reader is re-opened.

-

If an index will not have more documents added for a while and optimal search - performance is desired, then either the full {@link #Optimize() optimize} - method or partial {@link #Optimize(int)} method should be - called before the index is closed.

-

Opening an IndexWriter creates a lock file for the directory in use. Trying to open - another IndexWriter on the same directory will lead to a - {@link LockObtainFailedException}. The {@link LockObtainFailedException} - is also thrown if an IndexReader on the same directory is used to delete documents - from the index.

-

- -

Expert: IndexWriter allows an optional - {@link IndexDeletionPolicy} implementation to be - specified. You can use this to control when prior commits - are deleted from the index. The default policy is {@link - KeepOnlyLastCommitDeletionPolicy} which removes all prior - commits as soon as a new commit is done (this matches - behavior before 2.2). Creating your own policy can allow - you to explicitly keep previous "point in time" commits - alive in the index for some time, to allow readers to - refresh to the new commit without having the old commit - deleted out from under them. This is necessary on - filesystems like NFS that do not support "delete on last - close" semantics, which Lucene's "point in time" search - normally relies on.

-

Expert: - IndexWriter allows you to separately change - the {@link MergePolicy} and the {@link MergeScheduler}. - The {@link MergePolicy} is invoked whenever there are - changes to the segments in the index. Its role is to - select which merges to do, if any, and return a {@link - MergePolicy.MergeSpecification} describing the merges. It - also selects merges to do for optimize(). (The default is - {@link LogByteSizeMergePolicy}.) Then, the {@link - MergeScheduler} is invoked with the requested merges and - it decides when and how to run the merges. The default is - {@link ConcurrentMergeScheduler}.&#13;

-

NOTE: if you hit an - OutOfMemoryError then IndexWriter will quietly record this - fact and block all future segment commits. This is a - defensive measure in case any internal state (buffered - documents and deletions) were corrupted. Any subsequent - calls to {@link #Commit()} will throw an - IllegalStateException. The only course of action is to - call {@link #Close()}, which internally will call {@link - #Rollback()}, to undo any changes to the index since the - last commit. If you opened the writer with autoCommit - false you can also just call {@link #Rollback()} - directly.

-

NOTE: {@link - IndexWriter} instances are completely thread - safe, meaning multiple threads can call any of its - methods, concurrently. If your application requires - external synchronization, you should not - synchronize on the IndexWriter instance as - this may cause deadlock; use your own (non-Lucene) objects - instead.

-

-
- - Name of the write lock in the index. - - - Value to denote a flush trigger is disabled - - - Default value is 16 MB (which means flush when buffered - docs consume 16 MB RAM). Change using {@link #setRAMBufferSizeMB}. - - - - Default value is 10,000. Change using {@link #SetMaxFieldLength(int)}. - - - Default value is 128. Change using {@link #SetTermIndexInterval(int)}. - - - Default value for the write lock timeout (1,000). - - - - - - - - - - - Disabled by default (because IndexWriter flushes by RAM usage - by default). Change using {@link #SetMaxBufferedDocs(int)}. - - - - Disabled by default (because IndexWriter flushes by RAM usage - by default). Change using {@link #SetMaxBufferedDeleteTerms(int)}. - - - - - - - - - - Absolute hard maximum length for a term. If a term - arrives from the analyzer longer than this length, it - is skipped and a message is printed to infoStream, if - set (see {@link #setInfoStream}). - - - - Default for {@link #getMaxSyncPauseSeconds}. On - Windows this defaults to 10.0 seconds; elsewhere it's - 0. - - - - Expert: returns a readonly reader, covering all committed as well as - un-committed changes to the index. This provides "near real-time" - searching, in that changes made during an IndexWriter session can be - quickly made available for searching without closing the writer nor - calling {@link #commit}. - -

- Note that this is functionally equivalent to calling {@link #Commit()} and then - using {@link IndexReader#open} to open a new reader. But the turnaround - time of this method should be faster since it avoids the potentially - costly {@link #commit}. -&#13;

- - You must close the {@link IndexReader} returned by this method once you are done using it. - -

- It's near real-time because there is no hard - guarantee on how quickly you can get a new reader after - making changes with IndexWriter. You'll have to - experiment in your situation to determine if it's - fast enough. As this is a new and experimental - feature, please report back on your findings so we can - learn, improve and iterate.&#13;

- -

The resulting reader supports {@link - IndexReader#reopen}, but that call will simply forward - back to this method (though this may change in the - future).&#13;

- -

The very first time this method is called, this - writer instance will make every effort to pool the - readers that it opens for doing merges, applying - deletes, etc. This means additional resources (RAM, - file descriptors, CPU time) will be consumed.

- -

- For lower latency on reopening a reader, you should call - {@link #setMergedSegmentWarmer} to - pre-warm a newly merged segment before it's committed - to the index. This is important for minimizing index-to-search - delay after a large merge. - -&#13;

If an addIndexes* call is running in another thread, - then this reader will only search those segments from - the foreign index that have been successfully copied - over so far.&#13;
&#13;
- -&#13;

NOTE: Once the writer is closed, any - outstanding readers may continue to be used. However, - if you attempt to reopen any of those readers, you'll - hit an {@link AlreadyClosedException}.

- -

NOTE: This API is experimental and might - change in incompatible ways in the next release.

- -

- IndexReader that covers entire index plus all - changes made so far by this IndexWriter instance - - - IOException -
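- A sketch of the near real-time pattern (C#; writer and doc are assumed to already exist, on the 2.9-era API):
-            writer.AddDocument(doc);                    // buffered change, not yet committed
-            IndexReader nrtReader = writer.GetReader(); // sees the uncommitted change
-            try {
-                IndexSearcher searcher = new IndexSearcher(nrtReader);
-                // ... run "near real-time" searches here ...
-                searcher.Close();
-            } finally {
-                nrtReader.Close();                      // the returned reader must be closed
-            }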
- - Expert: like {@link #getReader}, except you can - specify which termInfosIndexDivisor should be used for - any newly opened readers. - - Subsamples which indexed - terms are loaded into RAM. This has the same effect as {@link - IndexWriter#setTermIndexInterval} except that setting - must be done at indexing time while this setting can be - set per reader. When set to N, then one in every - N*termIndexInterval terms in the index is loaded into - memory. By setting this to a value > 1 you can reduce - memory usage, at the expense of higher latency when - loading a TermInfo. The default value is 1. Set this - to -1 to skip loading the terms index entirely. - - - - Obtain the number of deleted docs for a pooled reader. - If the reader isn't being pooled, the segmentInfo's - delCount is returned. - - - - Used internally to throw an {@link - AlreadyClosedException} if this IndexWriter has been - closed. - - AlreadyClosedException if this IndexWriter is closed - - - Prints a message to the infoStream (if non-null), - prefixed with the identifying information for this - writer and the thread that's calling it. - - - - Casts the current mergePolicy to LogMergePolicy, and throws - an exception if the mergePolicy is not a LogMergePolicy. - - - -&#13;

Get the current setting of whether newly flushed - segments will use the compound file format. Note that - this just returns the value previously set with - setUseCompoundFile(boolean), or the default value - (true). You cannot use this to query the status of - previously flushed segments.

- -

Note that this method is a convenience method: it - just calls mergePolicy.getUseCompoundFile as long as - mergePolicy is an instance of {@link LogMergePolicy}. - Otherwise an IllegalArgumentException is thrown.

- -

- - -
- -

Setting to turn on usage of a compound file. When on, - multiple files for each segment are merged into a - single file when a new segment is flushed.

- -

Note that this method is a convenience method: it - just calls mergePolicy.setUseCompoundFile as long as - mergePolicy is an instance of {@link LogMergePolicy}. - Otherwise an IllegalArgumentException is thrown.

-

-
- - Expert: Set the Similarity implementation used by this IndexWriter. - - - - - - - Expert: Return the Similarity implementation used by this IndexWriter. - -

This defaults to the current value of {@link Similarity#GetDefault()}. -

-
- - Expert: Set the interval between indexed terms. Large values cause less - memory to be used by IndexReader, but slow random-access to terms. Small - values cause more memory to be used by an IndexReader, and speed - random-access to terms. - - This parameter determines the amount of computation required per query - term, regardless of the number of documents that contain that term. In - particular, it is the maximum number of other terms that must be - scanned before a term is located and its frequency and position information - may be processed. In a large index with user-entered query terms, query - processing time is likely to be dominated not by term lookup but rather - by the processing of frequency and positional data. In a small index - or when many uncommon query terms are generated (e.g., by wildcard - queries) term lookup may become a dominant cost. - - In particular, numUniqueTerms/interval terms are read into - memory by an IndexReader, and, on average, interval/2 terms - must be scanned for each random term access. - - - - - - - Expert: Return the interval between indexed terms. - - - - - - - Constructs an IndexWriter for the index in path. - Text will be analyzed with a. If create - is true, then a new, empty index will be created in - path, replacing the index already there, - if any. - -

NOTE: autoCommit (see above) is set to false with this - constructor. - -

- the path to the index directory - - the analyzer to use - - true to create the index or overwrite - the existing one; false to append to the existing - index - - Maximum field length in number of tokens/terms: LIMITED, UNLIMITED, or user-specified - via the MaxFieldLength constructor. - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be read/written to, or - if it does not exist and create is - false or if there is any other low-level - IO error - - Use {@link #IndexWriter(Directory, Analyzer, - boolean, MaxFieldLength)} - -
- - Constructs an IndexWriter for the index in path. - Text will be analyzed with a. If create - is true, then a new, empty index will be created in - path, replacing the index already there, if any. - - - the path to the index directory - - the analyzer to use - - true to create the index or overwrite - the existing one; false to append to the existing - index - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be read/written to, or - if it does not exist and create is - false or if there is any other low-level - IO error - - This constructor will be removed in the 3.0 release. - Use {@link - #IndexWriter(Directory,Analyzer,boolean,MaxFieldLength)} - instead, and call {@link #Commit()} when needed. - - - - Constructs an IndexWriter for the index in path. - Text will be analyzed with a. If create - is true, then a new, empty index will be created in - path, replacing the index already there, if any. - -

NOTE: autoCommit (see above) is set to false with this - constructor. - -

- the path to the index directory - - the analyzer to use - - true to create the index or overwrite - the existing one; false to append to the existing - index - - Maximum field length in number of terms/tokens: LIMITED, UNLIMITED, or user-specified - via the MaxFieldLength constructor. - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be read/written to, or - if it does not exist and create is - false or if there is any other low-level - IO error - - Use {@link #IndexWriter(Directory, - Analyzer, boolean, MaxFieldLength)} - -
- - Constructs an IndexWriter for the index in path. - Text will be analyzed with a. If create - is true, then a new, empty index will be created in - path, replacing the index already there, if any. - - - the path to the index directory - - the analyzer to use - - true to create the index or overwrite - the existing one; false to append to the existing - index - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be read/written to, or - if it does not exist and create is - false or if there is any other low-level - IO error - - This constructor will be removed in the 3.0 release. - Use {@link - #IndexWriter(Directory,Analyzer,boolean,MaxFieldLength)} - instead, and call {@link #Commit()} when needed. - - - - Constructs an IndexWriter for the index in d. - Text will be analyzed with a. If create - is true, then a new, empty index will be created in - d, replacing the index already there, if any. - -

NOTE: autoCommit (see above) is set to false with this - constructor. - -

- the index directory - - the analyzer to use - - true to create the index or overwrite - the existing one; false to append to the existing - index - - Maximum field length in number of terms/tokens: LIMITED, UNLIMITED, or user-specified - via the MaxFieldLength constructor. - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be read/written to, or - if it does not exist and create is - false or if there is any other low-level - IO error - -
- - Constructs an IndexWriter for the index in d. - Text will be analyzed with a. If create - is true, then a new, empty index will be created in - d, replacing the index already there, if any. - - - the index directory - - the analyzer to use - - true to create the index or overwrite - the existing one; false to append to the existing - index - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be read/written to, or - if it does not exist and create is - false or if there is any other low-level - IO error - - This constructor will be removed in the 3.0 - release. Use {@link #IndexWriter(Directory,Analyzer,boolean,MaxFieldLength)} instead, - and call {@link #Commit()} when needed. - - - Constructs an IndexWriter for the index in - path, first creating it if it does not - already exist. Text will be analyzed with - a. -&#13;

NOTE: autoCommit (see above) is set to false with this - constructor. - -

- the path to the index directory - - the analyzer to use - - Maximum field length in number of terms/tokens: LIMITED, UNLIMITED, or user-specified - via the MaxFieldLength constructor. - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be - read/written to or if there is any other low-level - IO error - - Use {@link #IndexWriter(Directory, Analyzer, MaxFieldLength)} - -
- - Constructs an IndexWriter for the index in - path, first creating it if it does not - already exist. Text will be analyzed with - a. - - - the path to the index directory - - the analyzer to use - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be - read/written to or if there is any other low-level - IO error - - This constructor will be removed in the 3.0 - release. Use {@link #IndexWriter(Directory,Analyzer,MaxFieldLength)} instead, - and call {@link #Commit()} when needed. - - - Constructs an IndexWriter for the index in - path, first creating it if it does not - already exist. Text will be analyzed with - a. -&#13;

NOTE: autoCommit (see above) is set to false with this - constructor. - -

- the path to the index directory - - the analyzer to use - - Maximum field length in number of terms/tokens: LIMITED, UNLIMITED, or user-specified - via the MaxFieldLength constructor. - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be - read/written to or if there is any other low-level - IO error - - Use {@link #IndexWriter(Directory, - Analyzer, MaxFieldLength)} - -
- - Constructs an IndexWriter for the index in - path, first creating it if it does not - already exist. Text will be analyzed with - a. - - - the path to the index directory - - the analyzer to use - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be - read/written to or if there is any other low-level - IO error - - This constructor will be removed in the 3.0 release. - Use {@link #IndexWriter(Directory,Analyzer,MaxFieldLength)} - instead, and call {@link #Commit()} when needed. - - - - Constructs an IndexWriter for the index in - d, first creating it if it does not - already exist. Text will be analyzed with - a. - -

NOTE: autoCommit (see above) is set to false with this - constructor. - -

- the index directory - - the analyzer to use - - Maximum field length in number of terms/tokens: LIMITED, UNLIMITED, or user-specified - via the MaxFieldLength constructor. - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be - read/written to or if there is any other low-level - IO error - -
- - Constructs an IndexWriter for the index in - d, first creating it if it does not - already exist. Text will be analyzed with - a. - - - the index directory - - the analyzer to use - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be - read/written to or if there is any other low-level - IO error - - This constructor will be removed in the 3.0 release. - Use {@link - #IndexWriter(Directory,Analyzer,MaxFieldLength)} - instead, and call {@link #Commit()} when needed. - - - - Constructs an IndexWriter for the index in - d, first creating it if it does not - already exist. Text will be analyzed with - a. - - - the index directory - - see above - - the analyzer to use - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be - read/written to or if there is any other low-level - IO error - - This constructor will be removed in the 3.0 release. - Use {@link - #IndexWriter(Directory,Analyzer,MaxFieldLength)} - instead, and call {@link #Commit()} when needed. - - - - Constructs an IndexWriter for the index in d. - Text will be analyzed with a. If create - is true, then a new, empty index will be created in - d, replacing the index already there, if any. - - - the index directory - - see above - - the analyzer to use - - true to create the index or overwrite - the existing one; false to append to the existing - index - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be read/written to, or - if it does not exist and create is - false or if there is any other low-level - IO error - - This constructor will be removed in the 3.0 release. - Use {@link - #IndexWriter(Directory,Analyzer,boolean,MaxFieldLength)} - instead, and call {@link #Commit()} when needed. - - - - Expert: constructs an IndexWriter with a custom {@link - IndexDeletionPolicy}, for the index in d, - first creating it if it does not already exist. Text - will be analyzed with a. - -

NOTE: autoCommit (see above) is set to false with this - constructor. - -

- the index directory - - the analyzer to use - - see above - - whether or not to limit field lengths - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be - read/written to or if there is any other low-level - IO error - -
- - Expert: constructs an IndexWriter with a custom {@link - IndexDeletionPolicy}, for the index in d, - first creating it if it does not already exist. Text - will be analyzed with a. - - - the index directory - - see above - - the analyzer to use - - see above - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be - read/written to or if there is any other low-level - IO error - - This constructor will be removed in the 3.0 release. - Use {@link - #IndexWriter(Directory,Analyzer,IndexDeletionPolicy,MaxFieldLength)} - instead, and call {@link #Commit()} when needed. - - - - Expert: constructs an IndexWriter with a custom {@link - IndexDeletionPolicy}, for the index in d. - Text will be analyzed with a. If - create is true, then a new, empty index - will be created in d, replacing the index - already there, if any. - -

NOTE: autoCommit (see above) is set to false with this - constructor. - -

- the index directory - - the analyzer to use - - true to create the index or overwrite - the existing one; false to append to the existing - index - - see above - - {@link Lucene.Net.Index.IndexWriter.MaxFieldLength}, whether or not to limit field lengths. Value is in number of terms/tokens - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be read/written to, or - if it does not exist and create is - false or if there is any other low-level - IO error - -
- - Expert: constructs an IndexWriter with a custom {@link - IndexDeletionPolicy} and {@link IndexingChain}, - for the index in d. - Text will be analyzed with a. If - create is true, then a new, empty index - will be created in d, replacing the index - already there, if any. - -

NOTE: autoCommit (see above) is set to false with this - constructor. - -

- the index directory - - the analyzer to use - - true to create the index or overwrite - the existing one; false to append to the existing - index - - see above - - whether or not to limit field lengths, value is in number of terms/tokens. See {@link Lucene.Net.Index.IndexWriter.MaxFieldLength}. - - the {@link DocConsumer} chain to be used to - process documents - - which commit to open - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be read/written to, or - if it does not exist and create is - false or if there is any other low-level - IO error - -
- - Expert: constructs an IndexWriter with a custom {@link - IndexDeletionPolicy}, for the index in d. - Text will be analyzed with a. If - create is true, then a new, empty index - will be created in d, replacing the index - already there, if any. - - - the index directory - - see above - - the analyzer to use - - true to create the index or overwrite - the existing one; false to append to the existing - index - - see above - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be read/written to, or - if it does not exist and create is - false or if there is any other low-level - IO error - - This constructor will be removed in the 3.0 release. - Use {@link - #IndexWriter(Directory,Analyzer,boolean,IndexDeletionPolicy,MaxFieldLength)} - instead, and call {@link #Commit()} when needed. - - - - Expert: constructs an IndexWriter on specific commit - point, with a custom {@link IndexDeletionPolicy}, for - the index in d. Text will be analyzed - with a. - -

This is only meaningful if you've used a {@link - IndexDeletionPolicy} in that past that keeps more than - just the last commit. - -

This operation is similar to {@link #Rollback()}, - except that method can only rollback what's been done - with the current instance of IndexWriter since its last - commit, whereas this method can rollback to an - arbitrary commit point from the past, assuming the - {@link IndexDeletionPolicy} has preserved past - commits. - -

NOTE: autoCommit (see above) is set to false with this - constructor. - -

- the index directory - - the analyzer to use - - see above - - whether or not to limit field lengths, value is in number of terms/tokens. See {@link Lucene.Net.Index.IndexWriter.MaxFieldLength}. - - which commit to open - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if the directory cannot be read/written to, or - if it does not exist and create is - false or if there is any other low-level - IO error - -
- - Expert: set the merge policy used by this writer. - - - Expert: returns the current MergePolicy in use by this writer. - - - - - Expert: set the merge scheduler used by this writer. - - - Expert: returns the current MergePolicy in use by this - writer. - - - - - -

Determines the largest segment (measured by - document count) that may be merged with other segments. - Small values (e.g., less than 10,000) are best for - interactive indexing, as this limits the length of - pauses while indexing to a few seconds. Larger values - are best for batched indexing and speedier - searches.

- -

The default value is {@link Integer#MAX_VALUE}.

- -

Note that this method is a convenience method: it - just calls mergePolicy.setMaxMergeDocs as long as - mergePolicy is an instance of {@link LogMergePolicy}. - Otherwise an IllegalArgumentException is thrown.

- -

The default merge policy ({@link - LogByteSizeMergePolicy}) also allows you to set this - limit by net size (in MB) of the segment, using {@link - LogByteSizeMergePolicy#setMaxMergeMB}.

-

-
- -

Returns the largest segment (measured by document - count) that may be merged with other segments.

- -

Note that this method is a convenience method: it - just calls mergePolicy.getMaxMergeDocs as long as - mergePolicy is an instance of {@link LogMergePolicy}. - Otherwise an IllegalArgumentException is thrown.

- -

- - -
- - The maximum number of terms that will be indexed for a single field in a - document. This limits the amount of memory required for indexing, so that - collections with very large files will not crash the indexing process by - running out of memory. This setting refers to the number of running terms, - not to the number of different terms.

- Note: this silently truncates large documents, excluding from the - index all terms that occur further in the document. If you know your source - documents are large, be sure to set this value high enough to accommodate - the expected size. If you set it to Integer.MAX_VALUE, then the only limit - is your memory, but you should anticipate an OutOfMemoryError.&#13;

- By default, no more than {@link #DEFAULT_MAX_FIELD_LENGTH} terms - will be indexed for a field. -

-
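- For example (C#; 50,000 is an arbitrary illustrative limit):
-            writer.SetMaxFieldLength(50000);        // index at most 50,000 terms per field
-            int limit = writer.GetMaxFieldLength(); // read the current limit back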
- - Returns the maximum number of terms that will be - indexed for a single field in a document. - - - - - - Determines the minimal number of documents required - before the buffered in-memory documents are flushed as - a new Segment. Large values generally give faster - indexing. - -

When this is set, the writer will flush every - maxBufferedDocs added documents. Pass in {@link - #DISABLE_AUTO_FLUSH} to prevent triggering a flush due - to number of buffered documents. Note that if flushing - by RAM usage is also enabled, then the flush will be - triggered by whichever comes first.

- -

Disabled by default (writer flushes by RAM usage).

- -

- IllegalArgumentException if maxBufferedDocs is - enabled but smaller than 2, or it disables maxBufferedDocs - when ramBufferSize is already disabled - - - -
- - If we are flushing by doc count (not by RAM usage), and - using LogDocMergePolicy then push maxBufferedDocs down - as its minMergeDocs, to keep backwards compatibility. - - - - Returns the number of buffered added documents that will - trigger a flush if enabled. - - - - - - Determines the amount of RAM that may be used for - buffering added documents and deletions before they are - flushed to the Directory. Generally for faster - indexing performance it's best to flush by RAM usage - instead of document count and use as large a RAM buffer - as you can. - -

When this is set, the writer will flush whenever - buffered documents and deletions use this much RAM. - Pass in {@link #DISABLE_AUTO_FLUSH} to prevent - triggering a flush due to RAM usage. Note that if - flushing by document count is also enabled, then the - flush will be triggered by whichever comes first.

- -

NOTE: the accounting of RAM usage for pending - deletions is only approximate. Specifically, if you - delete by Query, Lucene currently has no way to measure - the RAM usage of individual Queries, so the accounting - will under-estimate and you should compensate by either - calling commit() periodically yourself, or by using - {@link #setMaxBufferedDeleteTerms} to flush by count - instead of RAM usage (each buffered delete Query counts - as one). - -&#13;

- NOTE: because IndexWriter uses ints when managing its - internal storage, the absolute maximum value for this setting is somewhat - less than 2048 MB. The precise limit depends on various factors, such as - how large your documents are, how many fields have norms, etc., so it's - best to set this value comfortably under 2048. -

- -

The default value is {@link #DEFAULT_RAM_BUFFER_SIZE_MB}.

- -

- IllegalArgumentException if ramBufferSize is - enabled but non-positive, or it disables ramBufferSize - when maxBufferedDocs is already disabled - -
- - Returns the value set by {@link #setRAMBufferSizeMB} if enabled. - - -
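- For example, to flush purely by RAM usage with a generous buffer (C#; 48 MB is an arbitrary illustrative value):
-            writer.SetRAMBufferSizeMB(48.0);                           // flush near 48 MB of buffered state
-            writer.SetMaxBufferedDocs(IndexWriter.DISABLE_AUTO_FLUSH); // don't also flush by doc count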

Determines the minimal number of delete terms required before the buffered - in-memory delete terms are applied and flushed. If there are documents - buffered in memory at the time, they are merged and a new segment is - created.

-

Disabled by default (writer flushes by RAM usage).

- -

- IllegalArgumentException if maxBufferedDeleteTerms - is enabled but smaller than 1 - - - -
- - Returns the number of buffered deleted terms that will - trigger a flush if enabled. - - - - - - Determines how often segment indices are merged by addDocument(). With - smaller values, less RAM is used while indexing, and searches on - unoptimized indices are faster, but indexing speed is slower. With larger - values, more RAM is used during indexing, and while searches on unoptimized - indices are slower, indexing is faster. Thus larger values (> 10) are best - for batch index creation, and smaller values (< 10) for indices that are - interactively maintained. - -

Note that this method is a convenience method: it - just calls mergePolicy.setMergeFactor as long as - mergePolicy is an instance of {@link LogMergePolicy}. - Otherwise an IllegalArgumentException is thrown.

- -

This must never be less than 2. The default value is 10. -

-
- -

Returns the number of segments that are merged at - once and also controls the total number of segments - allowed to accumulate in the index.

- -

Note that this method is a convenience method: it - just calls mergePolicy.getMergeFactor as long as - mergePolicy is an instance of {@link LogMergePolicy}. - Otherwise an IllegalArgumentException is thrown.

- -

- - -
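- For example, a batch-indexing configuration (C#; the values are illustrative, and both calls require the merge policy to be a LogMergePolicy, as noted above):
-            writer.SetMergeFactor(20);      // larger factor: faster batch indexing, more segments
-            writer.SetMaxMergeDocs(100000); // don't merge segments beyond 100,000 docs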
- - Expert: returns max delay inserted before syncing a - commit point. On Windows, at least, pausing before - syncing can increase net indexing throughput. The - delay is variable based on size of the segment's files, - and is only inserted when using - ConcurrentMergeScheduler for merges. - - This will be removed in 3.0, when - autoCommit=true is removed from IndexWriter. - - - - Expert: sets the max delay before syncing a commit - point. - - - - This will be removed in 3.0, when - autoCommit=true is removed from IndexWriter. - - - - If non-null, this will be the default infoStream used - by a newly instantiated IndexWriter. - - - - - - Returns the current default infoStream for newly - instantiated IndexWriters. - - - - - - If non-null, information about merges, deletes and a - message when maxFieldLength is reached will be printed - to this. - - - - Returns the current infoStream in use by this writer. - - - - - Returns true if verbosing is enabled (i.e., infoStream != null). - - - to change the default value for all instances of IndexWriter. - - - - Returns allowed timeout when acquiring the write lock. - - - - - Sets the default (for any instance of IndexWriter) maximum time to wait for a write lock (in - milliseconds). - - - - Returns default write lock timeout for newly - instantiated IndexWriters. - - - - - - Commits all changes to an index and closes all - associated files. Note that this may be a costly - operation, so, try to re-use a single writer instead of - closing and opening a new one. See {@link #Commit()} for - caveats about write caching done by some IO devices. - -

- If an Exception is hit during close, e.g. due to disk - full or some other reason, then both the on-disk index - and the internal state of the IndexWriter instance will - be consistent. However, the close will not be complete - even though part of it (flushing buffered documents) - may have succeeded, so the write lock will still be - held.&#13;

- -

If you can correct the underlying cause (eg free up - some disk space) then you can call close() again. - Failing that, if you want to force the write lock to be - released (dangerous, because you may then lose buffered - docs in the IndexWriter instance) then you can do - something like this:

- -

-            try {
-            writer.close();
-            } finally {
-            if (IndexWriter.isLocked(directory)) {
-            IndexWriter.unlock(directory);
-            }
-            }
-            
- - after which, you must be certain not to use the writer - instance anymore.

- -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer, again. See above for details.

- -

- CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error -
- - Closes the index with or without waiting for currently - running merges to finish. This is only meaningful when - using a MergeScheduler that runs merges in background - threads. - -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer, again. See above for details.

- -

NOTE: it is dangerous to always call - close(false), especially when IndexWriter is not open - for very long, because this can result in "merge - starvation" whereby long merges will never have a - chance to finish. This will cause too many segments in - your index over time.

- -

- if true, this call will block - until all merges complete; else, it will ask all - running merges to abort, wait until those merges have - finished (which should be at most a few seconds), and - then return. - -
- - Tells the docWriter to close its currently open shared - doc stores (stored fields & vectors files). - Return value specifies whether new doc store files are compound or not. - - - - Returns the Directory used by this index. - - - Returns the analyzer used by this index. - - - Returns the number of documents currently in this - index, not counting deletions. - - Please use {@link #MaxDoc()} (same as this - method) or {@link #NumDocs()} (also takes deletions - into account), instead. - - - - Returns total number of docs in this index, including - docs not yet flushed (still in the RAM buffer), - not counting deletions. - - - - - - Returns total number of docs in this index, including - docs not yet flushed (still in the RAM buffer), and - including deletions. NOTE: buffered deletions - are not counted. If you really need these to be - counted you should call {@link #Commit()} first. - - - - - - The maximum number of terms that will be indexed for a single field in a - document. This limits the amount of memory required for indexing, so that - collections with very large files will not crash the indexing process by - running out of memory.&#13;

- Note that this effectively truncates large documents, excluding from the - index terms that occur further in the document. If you know your source - documents are large, be sure to set this value high enough to accommodate - the expected size. If you set it to Integer.MAX_VALUE, then the only limit - is your memory, but you should anticipate an OutOfMemoryError.&#13;

- By default, no more than 10,000 terms will be indexed for a field. - -

- - -
- - Adds a document to this index. If the document contains more than - {@link #SetMaxFieldLength(int)} terms for a given field, the remainder are - discarded. - -

Note that if an Exception is hit (for example disk full) - then the index will be consistent, but this document - may not have been added. Furthermore, it's possible - the index will have one segment in non-compound format - even when using compound files (when a merge has - partially succeeded).

- -

This method periodically flushes pending documents - to the Directory (see above), and - also periodically triggers segment merges in the index - according to the {@link MergePolicy} in use.

- -

Merges temporarily consume space in the - directory. The amount of space required is up to 1X the - size of all segments being merged, when no - readers/searchers are open against the index, and up to - 2X the size of all segments being merged when - readers/searchers are open against the index (see - {@link #Optimize()} for details). The sequence of - primitive merge operations performed is governed by the - merge policy. - -

Note that each term in the document can be no longer - than 16383 characters, otherwise an - IllegalArgumentException will be thrown.

- -

Note that it's possible to create an invalid Unicode - string in java if a UTF16 surrogate pair is malformed. - In this case, the invalid characters are silently - replaced with the Unicode replacement character - U+FFFD.

- -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

- -

- CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error -
- - Adds a document to this index, using the provided analyzer instead of the - value of {@link #GetAnalyzer()}. If the document contains more than - {@link #SetMaxFieldLength(int)} terms for a given field, the remainder are - discarded. - -

See {@link #AddDocument(Document)} for details on - index and IndexWriter state after an Exception, and - flushing/merging temporary free space requirements.

- -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

- -

- CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error -
- - Deletes the document(s) containing term. - -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

- -

- the term to identify the documents to be deleted - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error -
- - Deletes the document(s) containing any of the - terms. All deletes are flushed at the same time. - -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

- -

- array of terms to identify the documents - to be deleted - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error -
- - Deletes the document(s) matching the provided query. - -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

- -

- the query to identify the documents to be deleted - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error -
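- For instance (C#; the query is a placeholder):
-            // Deletes every document matching the query; buffered until the next flush/commit.
-            writer.DeleteDocuments(new TermQuery(new Term("status", "expired")));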
- - Deletes the document(s) matching any of the provided queries. - All deletes are flushed at the same time. - -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

- -

- array of queries to identify the documents - to be deleted - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error -
- - Updates a document by first deleting the document(s) - containing term and then adding the new - document. The delete and then add are atomic as seen - by a reader on the same index (flush may happen only after - the add). - -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

- -

- the term to identify the document(s) to be - deleted - - the document to be added - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error -
- - Updates a document by first deleting the document(s) - containing term and then adding the new - document. The delete and then add are atomic as seen - by a reader on the same index (flush may happen only after - the add). - -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

- -

- the term to identify the document(s) to be - deleted - - the document to be added - - the analyzer to use when analyzing the document - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error -
- - If non-null, information about merges will be printed to this. - - - Requests an "optimize" operation on an index, priming the index - for the fastest available search. Traditionally this has meant - merging all segments into a single segment as is done in the - default merge policy, but individual merge policies may implement - optimize in different ways. -&#13;

It is recommended that this method be called upon completion of indexing. In - environments with frequent updates, optimize is best done during low volume times, if at all. - -

-

See http://www.gossamer-threads.com/lists/lucene/java-dev/47895 for more discussion.

- -

Note that optimize requires 2X the index size free - space in your Directory. For example, if your index - size is 10 MB then you need 20 MB free for optimize to - complete.

- -

If some but not all readers re-open while an - optimize is underway, this will cause > 2X temporary - space to be consumed as those new readers will then - hold open the partially optimized segments at that - time. It is best not to re-open readers while optimize - is running.

- -

The actual temporary usage could be much less than - these figures (it depends on many factors).

- -

In general, once the optimize completes, the total size of the - index will be less than the size of the starting index. - It could be quite a bit smaller (if there were many - pending deletes) or just slightly smaller.

- -

If an Exception is hit during optimize(), for example - due to disk full, the index will not be corrupt and no - documents will have been lost. However, it may have - been partially optimized (some segments were merged but - not all), and it's possible that one of the segments in - the index will be in non-compound format even when - using compound file format. This will occur when the - Exception is hit during conversion of the segment into - compound format.

- -

This call will optimize those segments present in - the index when the call started. If other threads are - still adding documents and flushing segments, those - newly created segments will not be optimized unless you - call optimize again.

- -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

- -

- CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - -
- - Optimize the index down to <= maxNumSegments. If - maxNumSegments==1 then this is the same as {@link - #Optimize()}. - -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

- -

- maximum number of segments left - in the index after optimization finishes - -
- - Just like {@link #Optimize()}, except you can specify - whether the call should block until the optimize - completes. This is only meaningful with a - {@link MergeScheduler} that is able to run merges in - background threads. - -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

-

-
- - Just like {@link #Optimize(int)}, except you can - specify whether the call should block until the - optimize completes. This is only meaningful with a - {@link MergeScheduler} that is able to run merges in - background threads. - -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

-

-
- - Returns true if any merges in pendingMerges or - runningMerges are optimization merges. - - - - Just like {@link #ExpungeDeletes()}, except you can - specify whether the call should block until the - operation completes. This is only meaningful with a - {@link MergeScheduler} that is able to run merges in - background threads. - -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

-

-
- - Expunges all deletes from the index. When an index - has many document deletions (or updates to existing - documents), it's best to either call optimize or - expungeDeletes to remove all unused data in the index - associated with the deleted documents. To see how - many deletions you have pending in your index, call - {@link IndexReader#numDeletedDocs} - This saves disk space and memory usage while - searching. expungeDeletes should be somewhat faster - than optimize since it does not insist on reducing the - index to a single segment (though, this depends on the - {@link MergePolicy}; see {@link - MergePolicy#findMergesToExpungeDeletes}.). Note that - this call does not first commit any buffered - documents, so you must do so yourself if necessary. - See also {@link #ExpungeDeletes(boolean)} - -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

-

-
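- For example (a sketch; the "state"/"expired" term is hypothetical, and commit is
- called first because this method does not commit buffered documents itself):
-            writer.deleteDocuments(new Term("state", "expired"));
-            writer.commit();           // make the deletes part of the index
-            writer.expungeDeletes();   // reclaim the space held by deleted docs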
- - Expert: asks the mergePolicy whether any merges are - necessary now and if so, runs the requested merges and - then iterates (testing again whether merges are needed) until no - more merges are returned by the mergePolicy. - - Explicit calls to maybeMerge() are usually not - necessary. The most common case is when merge policy - parameters have changed. - -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

-

-
- - Expert: the {@link MergeScheduler} calls this method - to retrieve the next merge requested by the - MergePolicy - - - - Like getNextMerge() except only returns a merge if it's - external. - - - - Please use {@link #rollback} instead. - - - - Close the IndexWriter without committing - any changes that have occurred since the last commit - (or since it was opened, if commit hasn't been called). - This removes any temporary files that had been created, - after which the state of the index will be the same as - it was when commit() was last called or when this - writer was first opened. This can only be called when - this IndexWriter was opened with - autoCommit=false. This also clears a - previous call to {@link #prepareCommit}. - - IllegalStateException if this is called when - the writer was opened with autoCommit=true. - - IOException if there is a low-level IO error - - - Delete all documents in the index. - -

This method will drop all buffered documents and will - remove all segments from the index. This change will not be - visible until a {@link #Commit()} has been called. This method - can be rolled back using {@link #Rollback()}.

- -

NOTE: this method is much faster than using deleteDocuments( new MatchAllDocsQuery() ).

- -

NOTE: this method will forcefully abort all merges - in progress. If other threads are running {@link - #Optimize()} or any of the addIndexes methods, they - will receive {@link MergePolicy.MergeAbortedException}s. -

-
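- A sketch of the rebuild-from-scratch pattern these semantics enable:
-            writer.deleteAll();       // drop every document; invisible until commit
-            try {
-                // ... re-add all documents ...
-                writer.commit();      // publish the rebuilt index
-            } catch (Exception e) {
-                writer.rollback();    // revert to the last commit (also closes the writer)
-            }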
- - Wait for any currently outstanding merges to finish. - -

It is guaranteed that any merges started prior to calling this method - will have completed once this method completes.

-

-
- - Merges all segments from an array of indexes into this index. - -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

- -

- Use {@link #addIndexesNoOptimize} instead, - then separately call {@link #optimize} afterwards if - you need to. - - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error -
- - Merges all segments from an array of indexes into this - index. - -

This may be used to parallelize batch indexing. A large document - collection can be broken into sub-collections. Each sub-collection can be - indexed in parallel, on a different thread, process or machine. The - complete index can then be created by merging sub-collection indexes - with this method. - -

NOTE: the index in each Directory must not be - changed (opened by a writer) while this method is - running. This method does not acquire a write lock in - each input Directory, so it is up to the caller to - enforce this. - -

NOTE: while this is running, any attempts to - add or delete documents (with another thread) will be - paused until this method completes. - -

This method is transactional in how Exceptions are - handled: it does not commit a new segments_N file until - all indexes are added. This means if an Exception - occurs (for example disk full), then either no indexes - will have been added or they all will have been.

- -

Note that this requires temporary free space in the - Directory up to 2X the sum of all input indexes - (including the starting index). If readers/searchers - are open against the starting index, then temporary - free space required will be higher by the size of the - starting index (see {@link #Optimize()} for details). -

- -

Once this completes, the final size of the index - will be less than the sum of all input index sizes - (including the starting index). It could be quite a - bit smaller (if there were many pending deletes) or - just slightly smaller.

- -

- This requires this index not be among those to be added. - -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

- -

- CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error -
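- A parallel-indexing sketch (directory paths are placeholders; each sub-index is
- assumed to have been built and closed by its own writer beforehand):
-            Directory[] parts = new Directory[] {
-                FSDirectory.open(new File("/tmp/part1")),
-                FSDirectory.open(new File("/tmp/part2"))
-            };
-            writer.addIndexesNoOptimize(parts);
-            writer.optimize();   // optional, only if a fully merged index is wanted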
- - Merges the provided indexes into this index. -

After this completes, the index is optimized.

-

The provided IndexReaders are not closed.

- -

NOTE: while this is running, any attempts to - add or delete documents (with another thread) will be - paused until this method completes. - -

See {@link #AddIndexesNoOptimize(Directory[])} for - details on transactional semantics, temporary free - space required in the Directory, and non-CFS segments - on an Exception.

- -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

- -

- CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error -
- - Flush all in-memory buffered updates (adds and deletes) - to the Directory. -

Note: while this will force buffered docs to be - pushed into the index, it will not make these docs - visible to a reader. Use {@link #Commit()} instead - -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

- -

- please call {@link #Commit()} instead - - - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error -
- - Expert: prepare for commit. - -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

- -

- - -
- -

Expert: prepare for commit, specifying - commitUserData Map (String -> String). This does the - first phase of 2-phase commit. You can only call this - when autoCommit is false. This method does all steps - necessary to commit changes since this writer was - opened: flushes pending added and deleted docs, syncs - the index files, and writes most of the next segments_N file. - After calling this you must call either {@link - #Commit()} to finish the commit, or {@link - #Rollback()} to revert the commit and undo all changes - done since the writer was opened.

- - You can also just call {@link #Commit(Map)} directly - without prepareCommit first in which case that method - will internally call prepareCommit. - -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

- -

- Opaque Map (String->String) - that's recorded into the segments file in the index, - and retrievable by {@link - IndexReader#getCommitUserData}. Note that when - IndexWriter commits itself, for example if open with - autoCommit=true, or, during {@link #close}, the - commitUserData is unchanged (just carried over from - the prior commit). If this is null then the previous - commitUserData is kept. Also, the commitUserData will - only "stick" if there are actually changes in the - index to commit. Therefore it's best to use this - feature only when autoCommit is false. - -
- -

Commits all pending changes (added & deleted - documents, optimizations, segment merges, added - indexes, etc.) to the index, and syncs all referenced - index files, such that a reader will see the changes - and the index updates will survive an OS or machine - crash or power loss. Note that this does not wait for - any running background merges to finish. This may be a - costly operation, so you should test the cost in your - application and do it only when really necessary.

- -

Note that this operation calls Directory.sync on - the index files. That call should not return until the - file contents & metadata are on stable storage. For - FSDirectory, this calls the OS's fsync. But, beware: - some hardware devices may in fact cache writes even - during fsync, and return before the bits are actually - on stable storage, to give the appearance of faster - performance. If you have such a device, and it does - not have a battery backup (for example) then on power - loss it may still lose data. Lucene cannot guarantee - consistency on such devices.

- -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

- -

- - - - -
- - Commits all changes to the index, specifying a - commitUserData Map (String -> String). This just - calls {@link #PrepareCommit(Map)} (if you didn't - already call it) and then {@link #finishCommit}. - -

NOTE: if this method hits an OutOfMemoryError - you should immediately close the writer. See above for details.

-

-
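- A two-phase commit sketch (the user-data key and value are arbitrary examples):
-            Map commitUserData = new HashMap();
-            commitUserData.put("sequence-id", "12345");
-            writer.prepareCommit(commitUserData);   // phase 1: flush, sync, write most of segments_N
-            try {
-                // ... commit any other resources participating in the transaction ...
-                writer.commit();                    // phase 2: publish the prepared commit
-            } catch (Exception e) {
-                writer.rollback();                  // revert everything since the writer was opened
-            }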
- - Flush all in-memory buffered updates (adds and deletes) - to the Directory. - - if true, we may merge segments (if - deletes or docs were flushed) if necessary - - if false we are allowed to keep - doc stores open to share with the next segment - - whether pending deletes should also - be flushed - - - - Expert: Return the total size of all index files currently cached in memory. - Useful for size management with flushRamDocs() - - - - Expert: Return the number of documents currently - buffered in RAM. - - - - Carefully merges deletes for the segments we just - merged. This is tricky because, although merging will - clear all deletes (compacts the documents), new - deletes may have been flushed to the segments since - the merge was started. This method "carries over" - such new deletes onto the newly merged segment, and - saves the resulting deletes file (incrementing the - delete generation for merge.info). If no deletes were - flushed, no new deletes file is saved. - - - - Merges the indicated segments, replacing them in the stack with a - single segment. - - - - Hook that's called when the specified merge is complete. - - - Checks whether this merge involves any segments - already participating in a merge. If not, this merge - is "registered", meaning we record that its segments - are now participating in a merge, and true is - returned. Else (the merge conflicts) false is - returned. - - - - Does initial setup for a merge, which is fast but holds - the synchronized lock on the IndexWriter instance. - - - - This is called after merging a segment and before - building its CFS. Return true if the files should be - sync'd. If you return false, then the source segment - files that were merged cannot be deleted until the CFS - file is built & sync'd. So, returning false consumes - more transient disk space, but saves the cost of - syncing files which will shortly be deleted - anyway. - - -- this will be removed in 3.0 when - autoCommit is hardwired to false - - - - Does the finishing work for a merge, which is fast but holds - the synchronized lock on the IndexWriter instance. - - - - Does the actual (time-consuming) work of the merge, - but without holding the synchronized lock on the IndexWriter - instance - - - - Blocks until all files in syncing are sync'd - - - Pauses before syncing. On Windows, at least, it's - best (performance-wise) to pause in order to let the OS - flush writes to disk on its own, before forcing a - sync. - - -- this will be removed in 3.0 when - autoCommit is hardwired to false - - - - Walks through all files referenced by the current - segmentInfos and asks the Directory to sync each file, - if it wasn't already. If that succeeds, then we - prepare a new segments_N file but do not fully commit - it. - - - - Returns true iff the index in the named directory is - currently locked. - - the directory to check for a lock - - IOException if there is a low-level IO error - - - Returns true iff the index in the named directory is - currently locked. - - the directory to check for a lock - - IOException if there is a low-level IO error - Use {@link #IsLocked(Directory)} - - - - Forcibly unlocks the index in the named directory. -

- Caution: this should only be used by failure recovery code, - when it is known that no other process nor thread is in fact - currently accessing this index. -

-
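- A recovery-time sketch (only safe when no other process or thread can be using the index):
-            if (IndexWriter.isLocked(directory)) {
-                IndexWriter.unlock(directory);   // removes the stale write.lock
-            }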
- - Set the merged segment warmer. See {@link - IndexReaderWarmer}. - - - - Returns the current merged segment warmer. See {@link - IndexReaderWarmer}. - - - - Deprecated: emulates IndexWriter's buggy behavior when - first token(s) have positionIncrement==0 (ie, prior to - fixing LUCENE-1542) - - - - Holds shared SegmentReader instances. IndexWriter uses - SegmentReaders for 1) applying deletes, 2) doing - merges, 3) handing out a real-time reader. This pool - reuses instances of the SegmentReaders in all these - places if it is in "near real-time mode" (getReader() - has been called on this instance). - - - - Forcefully clear changes for the specified segments, - and remove them from the pool. This is called on a successful merge. - - - - Release the segment reader (i.e., decRef it and close it if there - are no more references). - - - - IOException - - - Release the segment reader (i.e., decRef it and close it if there - are no more references). - - - - IOException - - - Remove all our references to readers, and commit - any pending changes. - - - - Commit all segment readers in the pool. - IOException - - - Returns a ref to a clone. NOTE: this clone is not - enrolled in the pool, so you should simply close() - it when you're done (ie, do not call release()). - - - - Obtain a SegmentReader from the readerPool. The reader - must be returned by calling {@link #Release(SegmentReader)} - - - - - - - - IOException - - - Obtain a SegmentReader from the readerPool. The reader - must be returned by calling {@link #Release(SegmentReader)} - - - - - - - - - - - - - IOException - - - Specifies the maximum field length (in number of tokens/terms) in {@link IndexWriter} constructors. - {@link #SetMaxFieldLength(int)} overrides the value set by - the constructor. - - - - Private type-safe-enum-pattern constructor. - - - instance name - - maximum field length - - - - Public constructor to allow users to specify the maximum field size limit. - - - The maximum field length - - - - Sets the maximum field length to {@link Integer#MAX_VALUE}. - - - Sets the maximum field length to - {@link #DEFAULT_MAX_FIELD_LENGTH} - - - - - If {@link #getReader} has been called (ie, this writer - is in near real-time mode), then after a merge - completes, this class can be invoked to warm the - reader on the newly merged segment, before the merge - commits. This is not required for near real-time - search, but will reduce search latency on opening a - new near real-time reader after a merge completes. -

NOTE: This API is experimental and might - change in incompatible ways in the next release.

- -

NOTE: warm is called before any deletes have - been carried over to the merged segment. -

-
- - Basic tool and API to check the health of an index and - write a new segments file that removes reference to - problematic segments. - -

As this tool checks every byte in the index, on a large - index it can take quite a long time to run. - -

WARNING: this tool and API are new and - experimental and are subject to sudden change in the - next release. Please make a complete backup of your - index before using this to fix your index! -

-
- - Default PrintStream for all CheckIndex instances. - Use {@link #setInfoStream} per instance, - instead. - - - - Create a new CheckIndex on the directory. - - - Set infoStream where messages should go. If null, no - messages are printed - - - - Returns true if index is clean, else false. - Please instantiate a CheckIndex and then use {@link #CheckIndex()} instead - - - - Returns true if index is clean, else false. - Please instantiate a CheckIndex and then use {@link #CheckIndex(List)} instead - - - - Returns a {@link Status} instance detailing - the state of the index. - -

As this method checks every byte in the index, on a large - index it can take quite a long time to run. - -

WARNING: make sure - you only call this when the index is not opened by any - writer. -

-
- - Returns a {@link Status} instance detailing - the state of the index. - - - list of specific segment names to check - -

As this method checks every byte in the specified - segments, on a large index it can take quite a long - time to run. - -

WARNING: make sure - you only call this when the index is not opened by any - writer. - - - -

Test field norms. -
- - Test the term index. - - - Test stored fields for a segment. - - - Test term vectors for a segment. - - - Repairs the index using the previously returned result - from {@link #checkIndex}. Note that this does not - remove any of the unreferenced files after it's done; - you must separately open an {@link IndexWriter}, which - deletes unreferenced files when it's created. - -

WARNING: this writes a - new segments file into the index, effectively removing - all documents in broken segments from the index. - BE CAREFUL. - -

WARNING: Make sure you only call this when the - index is not opened by any writer. -

-
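- A sketch of the programmatic check-then-fix flow described above (method casing
- follows the Java original; fixIndex permanently drops all documents in broken
- segments, so back up the index first):
-            CheckIndex checker = new CheckIndex(directory);
-            checker.setInfoStream(System.out);
-            CheckIndex.Status status = checker.checkIndex();
-            if (!status.clean) {
-                checker.fixIndex(status);
-            }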
- - Command-line interface to check and fix an index. -

- Run it like this: -

-            java -ea:Lucene.Net... Lucene.Net.Index.CheckIndex pathToIndex [-fix] [-segment X] [-segment Y]
-            
-
  • -fix: actually write a new segments_N file, removing any problematic segments
  • -segment X: only check the specified segment(s). This can be specified multiple times, to check more than one segment, eg -segment _2 -segment _a. You can't use this with the -fix option.
-

WARNING: -fix should only be used on an emergency basis as it will cause - documents (perhaps many) to be permanently removed from the index. Always make - a backup copy of your index before running this! Do not run this tool on an index - that is actively being written to. You have been warned! -

Run without -fix, this tool will open the index, report version information - and report any exceptions it hits and what action it would take if -fix were - specified. With -fix, this tool will remove any segments that have issues and - write a new segments_N file. This means all documents contained in the affected - segments will be removed. -

- This tool exits with exit code 1 if the index cannot be opened or has any - corruption, else 0. -

-
- - Returned from {@link #CheckIndex()} detailing the health and status of the index. - -

WARNING: this API is new and experimental and is - subject to sudden change in the next release. - -

-
- - True if no problems were found with the index. - - - True if we were unable to locate and load the segments_N file. - - - True if we were unable to open the segments_N file. - - - True if we were unable to read the version number from segments_N file. - - - Name of latest segments_N file in the index. - - - Number of segments in the index. - - - String description of the version of the index. - - - Empty unless you passed a specific segments list to check as the optional 3rd argument. - - - - - True if the index was created with a newer version of Lucene than the CheckIndex tool. - - - List of {@link SegmentInfoStatus} instances, detailing status of each segment. - - - Directory index is in. - - - SegmentInfos instance containing only segments that - had no problems (this is used with the {@link CheckIndex#fixIndex} - method to repair the index). - - - - How many documents will be lost to bad segments. - - - How many bad segments were found. - - - True if we checked only specific segments ({@link - #CheckIndex(List)} was called with a non-null - argument). - - - - Holds the userData of the last commit in the index - - - Holds the status of each segment in the index. - See {@link #segmentInfos}. -

WARNING: this API is new and experimental and is - subject to sudden change in the next release. -

-
- - Name of the segment. - - Document count (does not take deletions into account). - - True if segment is compound file format. - - Number of files referenced by this segment. - - Net size (MB) of the files referenced by this - segment. - - - - Doc store offset, if this segment shares the doc - store files (stored fields and term vectors) with - other segments. This is -1 if it does not share. - - - - Name of the shared doc store segment, or null if - this segment does not share the doc store files. - - - - True if the shared doc store files are compound file - format. - - - - True if this segment has pending deletions. - - - Name of the current deletions file. - - - Number of deleted documents. - - - True if we were able to open a SegmentReader on this - segment. - - - - Number of fields in this segment. - - - True if at least one of the fields in this segment - does not omitTermFreqAndPositions. - - - - - - Map<String, String> that includes certain - debugging details that IndexWriter records into - each segment it creates - - - - Status for testing of field norms (null if field norms could not be tested). - - - Status for testing of indexed terms (null if indexed terms could not be tested). - - - Status for testing of stored fields (null if stored fields could not be tested). - - - Status for testing of term vectors (null if term vectors could not be tested). - - - Status from testing field norms. - - - Number of fields successfully tested - - - Exception thrown during field norm test (null on success) - - - Status from testing term index. - - - Total term count - - - Total frequency across all terms. - - - Total number of positions. - - - Exception thrown during term index test (null on success) - - - Status from testing stored fields. - - - Number of documents tested. - - - Total number of stored fields tested. - - - Exception thrown during stored fields test (null on success) - - - Status from testing term vectors. - - - Number of documents tested. - - - Total number of term vectors tested. - - - Exception thrown during term vector test (null on success) - -

This class provides a {@link Field} that enables indexing - of numeric values for efficient range filtering and - sorting. Here's an example usage, adding an int value: -

-            document.add(new NumericField(name).setIntValue(value));
-            
- - For optimal performance, re-use the - NumericField and {@link Document} instance for more than - one document: - -
-            NumericField field = new NumericField(name);
-            Document document = new Document();
-            document.add(field);
-            
-            for(all documents) {
-            ...
-            field.setIntValue(value);
-            writer.addDocument(document);
-            ...
-            }
-            
- -

The java native types int, long, - float and double are - directly supported. However, any value that can be - converted into these native types can also be indexed. - For example, date/time values represented by a - {@link java.util.Date} can be translated into a long - value using the {@link java.util.Date#getTime} method. If you - don't need millisecond precision, you can quantize the - value, either by dividing the result of - {@link java.util.Date#getTime} or using the separate getters - (for year, month, etc.) to construct an int or - long value.

- -
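- For example, a day-resolution sketch (the field name and quantization are up to the application):
-            long days = date.getTime() / 86400000L;   // milliseconds per day = 24*60*60*1000
-            document.add(new NumericField("date").setLongValue(days));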

To perform range querying or filtering against a - NumericField, use {@link NumericRangeQuery} or {@link - NumericRangeFilter}. To sort according to a - NumericField, use the normal numeric sort types, eg - {@link SortField#INT} (note that {@link SortField#AUTO} - will not work with these fields). NumericField values - can also be loaded directly from {@link FieldCache}.

- -

By default, a NumericField's value is not stored but - is indexed for range filtering and sorting. You can use - the {@link #NumericField(String,Field.Store,boolean)} - constructor if you need to change these defaults.

- -

You may add the same field name as a NumericField to - the same document more than once. Range querying and - filtering will be the logical OR of all values; so a range query - will hit all documents that have at least one value in - the range. However sort behavior is not defined. If you need to sort, - you should separately index a single-valued NumericField.

- -

A NumericField will consume somewhat more disk space - in the index than an ordinary single-valued field. - However, for a typical index that includes substantial - textual content per document, this increase will likely - be in the noise.

- -

Within Lucene, each numeric value is indexed as a - trie structure, where each term is logically - assigned to larger and larger pre-defined brackets (which - are simply lower-precision representations of the value). - The step size between each successive bracket is called the - precisionStep, measured in bits. Smaller - precisionStep values result in larger number - of brackets, which consumes more disk space in the index - but may result in faster range search performance. The - default value, 4, was selected for a reasonable tradeoff - of disk space consumption versus performance. You can - use the expert constructor {@link - #NumericField(String,int,Field.Store,boolean)} if you'd - like to change the value. Note that you must also - specify a congruent value when creating {@link - NumericRangeQuery} or {@link NumericRangeFilter}. - For low cardinality fields larger precision steps are good. - If the cardinality is < 100, it is fair - to use {@link Integer#MAX_VALUE}, which produces one - term per value. - -

For more information on the internals of numeric trie - indexing, including the precisionStep - configuration, see {@link NumericRangeQuery}. The format of - indexed values is described in {@link NumericUtils}. - -

If you only need to sort by numeric value, and never - run range querying/filtering, you can index using a - precisionStep of {@link Integer#MAX_VALUE}. - This will minimize disk space consumed.

- -

More advanced users can instead use {@link - NumericTokenStream} directly, when indexing numbers. This - class is a wrapper around this token stream type for - easier, more intuitive usage.

- -

NOTE: This class is only used during - indexing. When retrieving the stored field value from a - {@link Document} instance after search, you will get a - conventional {@link Fieldable} instance where the numeric - values are returned as {@link String}s (according to - toString(value) of the used data type). - -

NOTE: This API is - experimental and might change in incompatible ways in the - next release. - -

- 2.9 - -
- - Creates a field for numeric values using the default precisionStep - {@link NumericUtils#PRECISION_STEP_DEFAULT} (4). The instance is not yet initialized with - a numeric value; before indexing a document containing this field, - set a value using the various set???Value() methods. - This constructor creates an indexed, but not stored field. - - the field name - - - - Creates a field for numeric values using the default precisionStep - {@link NumericUtils#PRECISION_STEP_DEFAULT} (4). The instance is not yet initialized with - a numeric value; before indexing a document containing this field, - set a value using the various set???Value() methods. - - the field name - - if the field should be stored in plain text form - (according to toString(value) of the used data type) - - if the field should be indexed using {@link NumericTokenStream} - - - - Creates a field for numeric values with the specified - precisionStep. The instance is not yet initialized with - a numeric value; before indexing a document containing this field, - set a value using the various set???Value() methods. - This constructor creates an indexed, but not stored field. - - the field name - - the used precision step - - - - Creates a field for numeric values with the specified - precisionStep. The instance is not yet initialized with - a numeric value; before indexing a document containing this field, - set a value using the various set???Value() methods. - - the field name - - the used precision step - - if the field should be stored in plain text form - (according to toString(value) of the used data type) - - if the field should be indexed using {@link NumericTokenStream} - - - - Returns a {@link NumericTokenStream} for indexing the numeric value. - - - Always returns null for numeric fields - - - Always returns null for numeric fields - - - Always returns null for numeric fields - - - Returns the numeric value as a string (how it is stored, when {@link Field.Store#YES} is chosen). - - - Returns the current numeric value as a subclass of {@link Number}, or null if not yet initialized. - - - Initializes the field with the supplied long value. - the numeric value - - this instance, so you can use it in the following way: - document.add(new NumericField(name, precisionStep).setLongValue(value)) - - - - Initializes the field with the supplied int value. - the numeric value - - this instance, so you can use it in the following way: - document.add(new NumericField(name, precisionStep).setIntValue(value)) - - - - Initializes the field with the supplied double value. - the numeric value - - this instance, so you can use it in the following way: - document.add(new NumericField(name, precisionStep).setDoubleValue(value)) - - - - Initializes the field with the supplied float value. - the numeric value - - this instance, so you can use it in the following way: - document.add(new NumericField(name, precisionStep).setFloatValue(value)) - - - - Methods for manipulating arrays. - - - Parses the string argument as if it were an int value and returns the - result. Throws NumberFormatException if the string does not represent an - int quantity. - - - a string representation of an int quantity. - - int the value represented by the argument - - NumberFormatException if the argument could not be parsed as an int quantity. - - - Parses a char array into an int. 
- the character array - - The offset into the array - - The length - - the int - - NumberFormatException if it can't parse - - - Parses the string argument as if it was an int value and returns the - result. Throws NumberFormatException if the string does not represent an - int quantity. The second argument specifies the radix to use when parsing - the value. - - - a string representation of an int quantity. - - the base to use for conversion. - - int the value represented by the argument - - NumberFormatException if the argument could not be parsed as an int quantity. - - - Returns hash of chars in range start (inclusive) to - end (inclusive) - - - - Returns hash of chars in range start (inclusive) to - end (inclusive) - - - - A memory-resident {@link IndexOutput} implementation. - - - $Id: RAMOutputStream.java 691694 2008-09-03 17:34:29Z mikemccand $ - - - - Construct an empty output buffer. - - - Copy the current contents of this buffer to the named output. - - - Resets this to an empty buffer. - - - Returns byte usage of all buffers. - - - This exception is thrown when the write.lock - could not be released. - - - - - - A Query that matches documents containing a term. - This may be combined with other terms with a {@link BooleanQuery}. - - - - Constructs a query for the term t. - - - Returns the term of this query. - - - Prints a user-readable version of this query. - - - Returns true iff o is equal to this. - - - Returns a hash code value for this object. - - - Experimental class to get set of payloads for most standard Lucene queries. - Operates like Highlighter - IndexReader should only contain doc of interest, - best to use MemoryIndex. - -

- - WARNING: The status of the Payloads feature is experimental. - The APIs introduced here might change in the future and will not be - supported anymore in such a case. - -

-
- - that contains doc with payloads to extract - - - - Query should be rewritten for wild/fuzzy support. - - - - - payloads Collection - - IOException - - - Implements search over a set of Searchables. - -

Applications usually need only call the inherited {@link #Search(Query)} - or {@link #Search(Query,Filter)} methods. -

-
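- For example (a sketch; the two directories are assumed to hold independently built indexes):
-            Searchable[] searchables = new Searchable[] {
-                new IndexSearcher(dir1), new IndexSearcher(dir2)
-            };
-            Searcher searcher = new MultiSearcher(searchables);
-            // searches issued on searcher now span both indexes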
- - Creates a searcher which searches searchers. - - - Return the array of {@link Searchable}s this searches. - - - Returns index of the searcher for document n in the array - used to construct this searcher. - - - - Returns the document number of document n within its - sub-index. - - - - Create weight in multiple index scenario. - - Distributed query processing is done in the following steps: - 1. rewrite query - 2. extract necessary terms - 3. collect dfs for these terms from the Searchables - 4. create query weight using aggregate dfs. - 5. distribute that weight to Searchables - 6. merge results - - Steps 1-4 are done here, 5+6 in the search() methods - - - rewritten queries - - - - Document Frequency cache acting as a Dummy-Searcher. This class is no - full-fledged Searcher, but only supports the methods necessary to - initialize Weights. - - - - A query that applies a filter to the results of another query. - -

Note: the bits are retrieved from the filter each time this - query is used in a search - use a CachingWrapperFilter to avoid - regenerating the bits every time. - -

Created: Apr 20, 2004 8:58:29 AM - -

- 1.4 - - $Id: FilteredQuery.java 807821 2009-08-25 21:55:49Z mikemccand $ - - - -
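- For example (a sketch; the filter shown is an arbitrary illustration, wrapped in a
- CachingWrapperFilter per the note above):
-            Query q = new TermQuery(new Term("body", "lucene"));
-            Filter f = new CachingWrapperFilter(
-                new QueryWrapperFilter(new TermQuery(new Term("type", "article"))));
-            Query filtered = new FilteredQuery(q, f);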
- - Constructs a new query which applies a filter to the results of the original query. - Filter.getDocIdSet() will be called every time this query is used in a search. - - Query to be filtered, cannot be null. - - Filter to apply to query results, cannot be null. - - - - Returns a Weight that applies the filter to the enclosed query's Weight. - This is accomplished by overriding the Scorer returned by the Weight. - - - - Rewrites the wrapped query. - - - Prints a user-readable version of this query. - - - Returns true iff o is equal to this. - - - Returns a hash code value for this object. - - - use {@link #NextDoc()} instead. - - - - use {@link #DocID()} instead. - - - - use {@link #Advance(int)} instead. - - - - Token Manager Error. - - - Lexical error occurred. - - - An attempt was made to create a second instance of a static token manager. - - - Tried to change to an invalid lexical state. - - - Detected (and bailed out of) an infinite loop in the token manager. - - - Indicates the reason why the exception is thrown. It will have - one of the above 4 values. - - - - Replaces unprintable characters by their escaped (or unicode escaped) - equivalents in the given string - - - - Returns a detailed message for the Error when it is thrown by the - token manager to indicate a lexical error. - Parameters : - EOFSeen : indicates if EOF caused the lexical error - curLexState : lexical state in which this error occurred - errorLine : line number when the error occurred - errorColumn : column number when the error occurred - errorAfter : prefix that was seen before this error occurred - curchar : the offending character - Note: You can customize the lexical error message by modifying this method. - - - - No arg constructor. - - - Constructor with message and reason. - - - Full Constructor. - - - You can also modify the body of this method to customize your error messages. - For example, cases like LOOP_DETECTED and INVALID_LEXICAL_STATE are not - of end-users concern, so you can return something like : - - "Internal Error : Please file a bug report .... " - - from this method for such cases in the release version of your parser. - - - - Default implementation of Message interface. - For Native Language Support (NLS), system of software internationalization. - - - -

[Note that as of 2.1, all but one of the - methods in this class are available via {@link - IndexWriter}. The one method that is not available is - {@link #DeleteDocument(int)}.]

- - A class to modify an index, i.e. to delete and add documents. This - class hides {@link IndexReader} and {@link IndexWriter} so that you - do not need to care about implementation details such as that adding - documents is done via IndexWriter and deletion is done via IndexReader. - -

Note that you cannot create more than one IndexModifier object - on the same directory at the same time. - -

Example usage: - - - - - -

- - - - - - -
- -     Analyzer analyzer = new StandardAnalyzer();
-     // create an index in /tmp/index, overwriting an existing one:
-     IndexModifier indexModifier = new IndexModifier("/tmp/index", analyzer, true);
-     Document doc = new Document();
-     doc.add(new Field("id", "1", Field.Store.YES, Field.Index.NOT_ANALYZED));
-     doc.add(new Field("body", "a simple test", Field.Store.YES, Field.Index.ANALYZED));
-     indexModifier.addDocument(doc);
-     int deleted = indexModifier.delete(new Term("id", "1"));
-     System.out.println("Deleted " + deleted + " document");
-     indexModifier.flush();
-     System.out.println(indexModifier.docCount() + " docs in index");
-     indexModifier.close();
-
-
- - - -

Not all methods of IndexReader and IndexWriter are offered by this - class. If you need access to additional methods, either use those classes - directly or implement your own class that extends IndexModifier. - -

Although an instance of this class can be used from more than one - thread, you will not get the best performance. You might want to use - IndexReader and IndexWriter directly for that (but you will need to - care about synchronization yourself then). - -

While you can freely mix calls to add() and delete() using this class, - you should batch your calls for best performance. For example, if you - want to update 20 documents, you should first delete all those documents, - then add all the new documents. - -

- Please use {@link IndexWriter} instead. - -
- - Open an index with write access. - - - the index directory - - the analyzer to use for adding new documents - - true to create the index or overwrite the existing one; - false to append to the existing index - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if there is a low-level IO error - - - Open an index with write access. - - - the index directory - - the analyzer to use for adding new documents - - true to create the index or overwrite the existing one; - false to append to the existing index - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if there is a low-level IO error - - - Open an index with write access. - - - the index directory - - the analyzer to use for adding new documents - - true to create the index or overwrite the existing one; - false to append to the existing index - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if there is a low-level IO error - - - Initialize an IndexWriter. - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if there is a low-level IO error - - - Throw an IllegalStateException if the index is closed. - IllegalStateException - - - Close the IndexReader and open an IndexWriter. - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if there is a low-level IO error - - - Close the IndexWriter and open an IndexReader. - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - - Make sure all changes are written to disk. - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if there is a low-level IO error - - - Adds a document to this index, using the provided analyzer instead of the - one specific in the constructor. If the document contains more than - {@link #SetMaxFieldLength(int)} terms for a given field, the remainder are - discarded. - - - - IllegalStateException if the index is closed - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if there is a low-level IO error - - - Adds a document to this index. If the document contains more than - {@link #SetMaxFieldLength(int)} terms for a given field, the remainder are - discarded. - - - - IllegalStateException if the index is closed - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if there is a low-level IO error - - - Deletes all documents containing term. - This is useful if one uses a document field to hold a unique ID string for - the document. Then to delete such a document, one merely constructs a - term with the appropriate field and the unique ID string as its text and - passes it to this method. Returns the number of documents deleted. 
- - the number of documents deleted - - - - IllegalStateException if the index is closed - StaleReaderException if the index has changed - since this reader was opened - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if there is a low-level IO error - - - Deletes the document numbered docNum. - - - StaleReaderException if the index has changed - since this reader was opened - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IllegalStateException if the index is closed - - - Returns the number of documents currently in this - index. If the writer is currently open, this returns - {@link IndexWriter#DocCount()}, else {@link - IndexReader#NumDocs()}. But, note that {@link - IndexWriter#DocCount()} does not take deletions into - account, unlike {@link IndexReader#numDocs}. - - IllegalStateException if the index is closed - - - Merges all segments together into a single segment, optimizing an index - for search. - - - - IllegalStateException if the index is closed - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if there is a low-level IO error - - - If non-null, information about merges and a message when - {@link #GetMaxFieldLength()} is reached will be printed to this. -

Example: index.setInfoStream(System.err); -

- - - IllegalStateException if the index is closed -
- - - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if there is a low-level IO error - - - Setting to turn on usage of a compound file. When on, multiple files - for each segment are merged into a single file once the segment creation - is finished. This is done regardless of what directory is in use. - - - - IllegalStateException if the index is closed - - - - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if there is a low-level IO error - - - The maximum number of terms that will be indexed for a single field in a - document. This limits the amount of memory required for indexing, so that - collections with very large files will not crash the indexing process by - running out of memory.

- Note that this effectively truncates large documents, excluding from the - index terms that occur further in the document. If you know your source - documents are large, be sure to set this value high enough to accommodate - the expected size. If you set it to Integer.MAX_VALUE, then the only limit - is your memory, but you should anticipate an OutOfMemoryError.

- By default, no more than 10,000 terms will be indexed for a field. -

- - - IllegalStateException if the index is closed -
- - - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if there is a low-level IO error - - - Determines the minimal number of documents required before the buffered - in-memory documents are merged and a new Segment is created. - Since Documents are merged in a {@link Lucene.Net.Store.RAMDirectory}, - a large value gives faster indexing. At the same time, mergeFactor limits - the number of files open in a FSDirectory. - -

The default value is 10. - -

- - - IllegalStateException if the index is closed - IllegalArgumentException if maxBufferedDocs is smaller than 2 -
- - - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if there is a low-level IO error - - - Determines how often segment indices are merged by addDocument(). With - smaller values, less RAM is used while indexing, and searches on - unoptimized indices are faster, but indexing speed is slower. With larger - values, more RAM is used during indexing, and while searches on unoptimized - indices are slower, indexing is faster. Thus larger values (> 10) are best - for batch index creation, and smaller values (< 10) for indices that are - interactively maintained. -

This must never be less than 2. The default value is 10. - -

- - - IllegalStateException if the index is closed -
- - - - CorruptIndexException if the index is corrupt - LockObtainFailedException if another writer - has this index open (write.lock could not - be obtained) - - IOException if there is a low-level IO error - - - Close this index, writing all pending changes to disk. - - - IllegalStateException if the index has been closed before already - CorruptIndexException if the index is corrupt - IOException if there is a low-level IO error - - - This class keeps track of closing the underlying directory. It is used to wrap - DirectoryReaders, that are created using a String/File parameter - in IndexReader.open() with FSDirectory.getDirectory(). - - This helper class is removed with all String/File - IndexReader.open() methods in Lucene 3.0 - - - - This member contains the ref counter, that is passed to each instance after cloning/reopening, - and is global to all DirectoryOwningReader derived from the original one. - This reuses the class {@link SegmentReader.Ref} - - - - Provides support for converting dates to strings and vice-versa. - The strings are structured so that lexicographic sorting orders - them by date, which makes them suitable for use as field values - and search terms. - -

This class also helps you to limit the resolution of your dates. Do not - save dates with a finer resolution than you really need, as then - RangeQuery and PrefixQuery will require more memory and become slower. - -

Compared to {@link DateField} the strings generated by the methods - in this class take slightly more space, unless your selected resolution - is set to Resolution.DAY or lower. - -

- Another approach is {@link NumericUtils}, which provides - a sortable binary representation (prefix encoded) of numeric values, which - date/time are. - For indexing a {@link Date} or {@link Calendar}, just get the unix timestamp as - long using {@link Date#getTime} or {@link Calendar#getTimeInMillis} and - index this as a numeric value with {@link NumericField} - and use {@link NumericRangeQuery} to query it. -

-
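- A round-trip sketch (Resolution.DAY is just one possible granularity):
-            String s = DateTools.dateToString(date, DateTools.Resolution.DAY);   // e.g. "20040921"
-            Date back = DateTools.stringToDate(s);   // 2004-09-21 00:00:00 GMT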
- - Converts a Date to a string suitable for indexing. - - - the date to be converted - - the desired resolution, see - {@link #Round(Date, DateTools.Resolution)} - - a string in format yyyyMMddHHmmssSSS or shorter, - depending on resolution; using GMT as timezone - - - - Converts a millisecond time to a string suitable for indexing. - - - the date expressed as milliseconds since January 1, 1970, 00:00:00 GMT - - the desired resolution, see - {@link #Round(long, DateTools.Resolution)} - - a string in format yyyyMMddHHmmssSSS or shorter, - depending on resolution; using GMT as timezone - - - - Converts a string produced by timeToString or - DateToString back to a time, represented as the - number of milliseconds since January 1, 1970, 00:00:00 GMT. - - - the date string to be converted - - the number of milliseconds since January 1, 1970, 00:00:00 GMT - - ParseException if dateString is not in the - expected format - - - - Converts a string produced by timeToString or - DateToString back to a time, represented as a - Date object. - - - the date string to be converted - - the parsed time as a Date object - - ParseException if dateString is not in the - expected format - - - - Limit a date's resolution. For example, the date 2004-09-21 13:50:11 - will be changed to 2004-09-01 00:00:00 when using - Resolution.MONTH. - - - The desired resolution of the date to be returned - - the date with all values more precise than resolution - set to 0 or 1 - - - - Limit a date's resolution. For example, the date 1095767411000 - (which represents 2004-09-21 13:50:11) will be changed to - 1093989600000 (2004-09-01 00:00:00) when using - Resolution.MONTH. - - - The time in milliseconds (not ticks). - The desired resolution of the date to be returned - - the date with all values more precise than resolution - set to 0 or 1, expressed as milliseconds since January 1, 1970, 00:00:00 GMT - - - - Specifies the time granularity. - - - Loader for text files that represent a list of stopwords. - - - - $Id: WordlistLoader.java 706342 2008-10-20 17:19:29Z gsingers $ - - - - Loads a text file and adds every line as an entry to a HashSet (omitting - leading and trailing whitespace). Every line of the file should contain only - one word. The words need to be in lowercase if you make use of an - Analyzer which uses LowerCaseFilter (like StandardAnalyzer). - - - File containing the wordlist - - A HashSet with the file's words - - - - Loads a text file and adds every non-comment line as an entry to a HashSet (omitting - leading and trailing whitespace). Every line of the file should contain only - one word. The words need to be in lowercase if you make use of an - Analyzer which uses LowerCaseFilter (like StandardAnalyzer). - - - File containing the wordlist - - The comment string to ignore - - A HashSet with the file's words - - - - Reads lines from a Reader and adds every line as an entry to a HashSet (omitting - leading and trailing whitespace). Every line of the Reader should contain only - one word. The words need to be in lowercase if you make use of an - Analyzer which uses LowerCaseFilter (like StandardAnalyzer). - - - Reader containing the wordlist - - A HashSet with the reader's words - - - - Reads lines from a Reader and adds every non-comment line as an entry to a HashSet (omitting - leading and trailing whitespace). Every line of the Reader should contain only - one word. The words need to be in lowercase if you make use of an - Analyzer which uses LowerCaseFilter (like StandardAnalyzer). 
- - - Reader containing the wordlist - - The string representing a comment. - - A HashSet with the reader's words - - - - Reads a stem dictionary. Each line contains: -
word\tstem
- (i.e. two tab-separated words) - -
- stem dictionary that overrules the stemming algorithm - - IOException -
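- For example (a sketch; the file names are placeholders):
-            HashSet stopWords = WordlistLoader.getWordSet(new File("stopwords.txt"), "#");
-            HashMap stems = WordlistLoader.getStemDict(new File("stemdict.txt"));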
- - This class is a scanner generated by - JFlex 1.4.1 - on 9/4/08 6:49 PM from the specification file - /tango/mike/src/lucene.standarddigit/src/java/org/apache/lucene/analysis/standard/StandardTokenizerImpl.jflex - - - - This character denotes the end of file - - - initial size of the lookahead buffer - - - lexical states - - - Translates characters to character classes - - - Translates characters to character classes - - - Translates DFA states to action switch labels. - - - Translates a state to a row index in the transition table - - - The transition table of the DFA - - - ZZ_ATTRIBUTE[aState] contains the attributes of state aState - - - the input device - - - the current state of the DFA - - - the current lexical state - - - this buffer contains the current text to be matched and is - the source of the yytext() string - - - - the text position at the last accepting state - - - the text position at the last state to be included in yytext - - - the current text position in the buffer - - - startRead marks the beginning of the yytext() string in the buffer - - - endRead marks the last character in the buffer that has been read - from input - - - - number of newlines encountered up to the start of the matched text - - - the number of characters up to the start of the matched text - - - the number of characters from the last newline up to the start of the - matched text - - - - zzAtBOL == true <=> the scanner is currently at the beginning of a line - - - zzAtEOF == true <=> the scanner is at the EOF - - - this solves a bug where HOSTs that end with '.' are identified - as ACRONYMs. It is deprecated and will be removed in the next - release. - - - - Fills the Lucene token with the current token text. - - - Fills the TermAttribute with the current token text. - - - Creates a new scanner. - There is also a java.io.InputStream version of this constructor. - - - the java.io.Reader to read input from. - - - - Creates a new scanner. - There is also a java.io.Reader version of this constructor. - - - the java.io.InputStream to read input from. - - - - Unpacks the compressed character translation table. - - - the packed character translation table - - the unpacked character translation table - - - - Refills the input buffer. - - - false, iff there was new input. - - - if any I/O-Error occurs - - - - Closes the input stream. - - - Resets the scanner to read from a new input stream. - Does not close the old reader. - - All internal variables are reset, the old input stream - cannot be reused (internal buffer is discarded and lost). - Lexical state is set to ZZ_INITIAL. - - - the new input stream - - - - Returns the current lexical state. - - - Enters a new lexical state - - - the new lexical state - - - - Returns the text matched by the current regular expression. - - - Returns the character at position pos from the - matched text. - - It is equivalent to yytext().charAt(pos), but faster - - - the position of the character to fetch. - A value from 0 to yylength()-1. - - - the character at position pos - - - - Returns the length of the matched text region. - - - Reports an error that occurred while scanning. - - In a well-formed scanner (no or only correct usage of - yypushback(int) and a match-all fallback rule) this method - will only be called with things that "Can't Possibly Happen". - If this method is called, something is seriously wrong - (e.g. a JFlex bug producing a faulty scanner etc.). - - Usual syntax/scanner level error handling should be done - in error fallback rules. 
- - - the code of the error message to display - - - - Pushes the specified number of characters back into the input stream. - - They will be read again by the next call of the scanning method. - - - the number of characters to be read again. - This number must not be greater than yylength()! - - - - Resumes scanning until the next regular expression is matched, - the end of input is encountered, or an I/O error occurs. - - - the next token - - if any I/O error occurs - - - - Floating point numbers smaller than 32 bits. - - - $Id$ - - - - Converts a 32 bit float to an 8 bit float. -
Values less than zero are all mapped to zero. -
Values are truncated (rounded down) to the nearest 8 bit value. -
Values between zero and the smallest representable value - are rounded up. - -
- the 32 bit float to be converted to an 8 bit float (byte) - - the number of mantissa bits to use in the byte, with the remainder to be used in the exponent - - the zero-point in the range of exponent values - - the 8 bit float representation - -
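- 
- A minimal C# sketch of the conversion pair (mirroring the bit layout described above; helper names are illustrative, not the shipped Lucene.Net API; BitConverter is from System):
- 
-     static byte FloatToByte(float f, int numMantissaBits, int zeroExp)
-     {
-         // Offset that maps the float's IEEE 754 exponent range onto the byte's range.
-         int fzero = (63 - zeroExp) << numMantissaBits;
-         int bits = BitConverter.ToInt32(BitConverter.GetBytes(f), 0);
-         int smallfloat = bits >> (24 - numMantissaBits);
-         if (smallfloat < fzero)
-             return (bits <= 0) ? (byte)0 : (byte)1;  // <= 0 maps to zero; tiny positives round up
-         if (smallfloat >= fzero + 0x100)
-             return (byte)255;                        // clamp to the largest representable value
-         return (byte)(smallfloat - fzero);
-     }
- 
-     static float ByteToFloat(byte b, int numMantissaBits, int zeroExp)
-     {
-         if (b == 0) return 0.0f;
-         int bits = (b & 0xff) << (24 - numMantissaBits);
-         bits += (63 - zeroExp) << 24;
-         return BitConverter.ToSingle(BitConverter.GetBytes(bits), 0);
-     }
- 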
- - Converts an 8 bit float to a 32 bit float. - - - floatToByte(b, mantissaBits=3, zeroExponent=15) -
smallest non-zero value = 5.820766E-10 -
largest value = 7.5161928E9 -
epsilon = 0.125 -
-
- - byteToFloat(b, mantissaBits=3, zeroExponent=15) - - - floatToByte(b, mantissaBits=5, zeroExponent=2) -
smallest nonzero value = 0.033203125 -
largest value = 1984.0 -
epsilon = 0.03125 -
-
- - byteToFloat(b, mantissaBits=5, zeroExponent=2) - - - Estimates the size of a given Object using a given MemoryModel for primitive - size information. - - Resource Usage: - - Internally uses a Map to temporarily hold a reference to every - object seen. - - If checkInterned, all Strings checked will be interned, but those - that were not already interned will be released for GC when the - estimate is complete. - - - - Constructs this object with an AverageGuessMemoryModel and - checkInterned = true. - - - - check if Strings are interned and don't add to size - if they are. Defaults to true but if you know the objects you are checking - won't likely contain many interned Strings, it will be faster to turn off - intern checking. - - - - MemoryModel to use for primitive object sizes. - - - - MemoryModel to use for primitive object sizes. - - check if Strings are interned and don't add to size - if they are. Defaults to true but if you know the objects you are checking - won't likely contain many interned Strings, it will be faster to turn off - intern checking. - - - - Return good default units based on byte size. - - - - The maximum number of items to cache. - - - - - The list to efficiently maintain the LRU state. - - - - - The dictionary to hash into any location in the list. - - - - - The node instance to use/re-use when adding an item to the cache. - - - - - Container to hold the key and value to aid in removal from - the dictionary when an item is removed from cache. - - - - An average, best guess, MemoryModel that should work okay on most systems. - - - - - Returns primitive memory sizes for estimating RAM usage. - - - - - size of array beyond contents - - - - Class size overhead - - - - a primitive Class - bool, byte, char, short, long, float, - double, int - - the size in bytes of given primitive Class - - - - size of reference - - - - A {@link LockFactory} that wraps another {@link - LockFactory} and verifies that each lock obtain/release - is "correct" (never results in two processes holding the - lock at the same time). It does this by contacting an - external server ({@link LockVerifyServer}) to assert that - at most one process holds the lock at a time. To use - this, you should also run {@link LockVerifyServer} on the - host & port matching what you pass to the constructor. - - - - - - - - - should be a unique id across all clients - - the LockFactory that we are testing - - host or IP where {@link LockVerifyServer} - is running - - the port {@link LockVerifyServer} is - listening on - - - - A memory-resident {@link IndexInput} implementation. - - - $Id: RAMInputStream.java 632120 2008-02-28 21:13:59Z mikemccand $ - - - - Subclass of FilteredTermEnum for enumerating all terms that match the - specified wildcard filter term.

- Term enumerations are always ordered by Term.compareTo(). Each term in - the enumeration is greater than all that precede it. - -

- $Id: WildcardTermEnum.java 783371 2009-06-10 14:39:56Z mikemccand $ - -
- - String equality with support for wildcards - - - - Creates a new WildcardTermEnum.

- After calling the constructor the enumeration is already pointing to the first - valid term if such a term exists. -

-
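- 
- For context, WildcardTermEnum is the support class behind {@link WildcardQuery}, so application code normally just writes (field and pattern illustrative):
- 
-     // '*' matches any character sequence, '?' matches a single character.
-     Query q = new WildcardQuery(new Term("body", "app*"));
- 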
- - Determines if a word matches a wildcard pattern. - Work released by Granta Design Ltd after originally being done on - company time. - - - - A {@link Collector} implementation that collects the top-scoring hits, - returning them as a {@link TopDocs}. This is used by {@link IndexSearcher} to - implement {@link TopDocs}-based search. Hits are sorted by score descending - and then (when the scores are tied) docID ascending. When you create an - instance of this collector you should know in advance whether documents are - going to be collected in doc Id order or not. - -

NOTE: The values {@link Float#NaN} and - {@link Float#NEGATIVE_INFINITY} are not valid scores. This - collector will not properly collect hits with such - scores.

-
- - Creates a new {@link TopScoreDocCollector} given the number of hits to - collect and whether documents are scored in order by the input - {@link Scorer} to {@link #SetScorer(Scorer)}. - -

NOTE: The instances returned by this method - pre-allocate a full array of length - numHits, and fill the array with sentinel - objects. -

-
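- 
- A usage sketch, assuming the 2.9-era Create(numHits, docsScoredInOrder) factory and TopDocs().ScoreDocs member names (exact casing varies across Lucene.Net releases):
- 
-     // Collect the top 10 hits; 'true' asserts the Scorer delivers docs in increasing docID order.
-     TopScoreDocCollector collector = TopScoreDocCollector.Create(10, true);
-     searcher.Search(query, collector);
-     ScoreDoc[] hits = collector.TopDocs().ScoreDocs;
- 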
- - A Filter that restricts search results to a range of values in a given - field. - -

This filter matches the documents looking for terms that fall into the - supplied range according to {@link String#compareTo(String)}. It is not intended - for numerical ranges, use {@link NumericRangeFilter} instead. - -

If you construct a large number of range filters with different ranges but on the - same field, {@link FieldCacheRangeFilter} may have significantly better performance. -

- 2.9 - -
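- 
- A construction sketch (field name and bounds are illustrative):
- 
-     // Match documents whose "date" terms fall in ["20020101", "20021231"], both bounds included.
-     Filter dateFilter = new TermRangeFilter("date", "20020101", "20021231", true, true);
- 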
- - The field this range applies to - - The lower bound on this range - - The upper bound on this range - - Does this range include the lower bound? - - Does this range include the upper bound? - - IllegalArgumentException if both terms are null or if - lowerTerm is null and includeLower is true (similar for upperTerm - and includeUpper) - - - - WARNING: Using this constructor and supplying a non-null - value in the collator parameter will cause every single - index Term in the Field referenced by lowerTerm and/or upperTerm to be - examined. Depending on the number of index Terms in this Field, the - operation could be very slow. - - - The lower bound on this range - - The upper bound on this range - - Does this range include the lower bound? - - Does this range include the upper bound? - - The collator to use when determining range inclusion; set - to null to use Unicode code point ordering instead of collation. - - IllegalArgumentException if both terms are null or if - lowerTerm is null and includeLower is true (similar for upperTerm - and includeUpper) - - - - Constructs a filter for field fieldName matching - less than or equal to upperTerm. - - - - Constructs a filter for field fieldName matching - greater than or equal to lowerTerm. - - - - Returns the field name for this filter - - - Returns the lower value of this range filter - - - Returns the upper value of this range filter - - - Returns true if the lower endpoint is inclusive - - - Returns true if the upper endpoint is inclusive - - - Returns the collator used to determine range inclusion, if any. - - - Removes matches which overlap with another SpanQuery. - - - Construct a SpanNotQuery matching spans from include which - have no overlap with spans from exclude. - - - - Return the SpanQuery whose matches are filtered. - - - Return the SpanQuery whose matches must not overlap those returned. - - - Returns a collection of all terms matched by this query. - use extractTerms instead - - - - - - Returns true iff o is equal to this. - - - Matches spans which are near one another. One can specify slop, the - maximum number of intervening unmatched positions, as well as whether - matches are required to be in-order. - - - - Construct a SpanNearQuery. Matches spans matching a span from each - clause, with up to slop total unmatched positions between - them. When inOrder is true, the spans from each clause - must be ordered as in clauses. - - - - Return the clauses whose spans are matched. - - - Return the maximum number of intervening unmatched positions permitted. - - - Return true if matches are required to be in-order. - - - Returns a collection of all terms matched by this query. - use extractTerms instead - - - - - - Returns true iff o is equal to this. - - - Calculates the minimum payload seen - - - - - - Expert: obtains short field values from the - {@link Lucene.Net.Search.FieldCache FieldCache} - using getShorts() and makes those values - available as other numeric types, casting as needed. -

- WARNING: The status of the Search.Function package is experimental. - The APIs introduced here might change in the future and will not be - supported anymore in such a case. - -

- for requirements - on the field. - -

NOTE: with the switch in 2.9 to segment-based - searching, if {@link #getValues} is invoked with a - composite (multi-segment) reader, this can easily cause - double RAM usage for the values in the FieldCache. It's - best to switch your application to pass only atomic - (single segment) readers to this API. Alternatively, for - a short-term fix, you could wrap your ValueSource using - {@link MultiValueSource}, which costs more CPU per lookup - but will not consume double the FieldCache RAM.

- - - -

Create a cached short field source with default string-to-short parser. -
- - Create a cached short field source with a specific string-to-short parser. - - - Query that sets document score as a programmatic function of several (sub) scores: -
-
- 1. the score of its subQuery (any query)
- 2. (optional) the score of its ValueSourceQuery (or queries). For most simple/convenient use cases this query is likely to be a {@link Lucene.Net.Search.Function.FieldScoreQuery FieldScoreQuery}
-
- Subclasses can modify the computation by overriding {@link #getCustomScoreProvider}. -

- WARNING: The status of the Search.Function package is experimental. - The APIs introduced here might change in the future and will not be - supported anymore in such a case. -

-
- - Create a CustomScoreQuery over input subQuery. - the sub query whose score is being customized. Must not be null. - - - Create a CustomScoreQuery over input subQuery and a {@link ValueSourceQuery}. - the sub query whose score is being customized. Must not be null. - - a value source query whose scores are used in the custom score - computation. For most simple/convenient use cases this would be a - {@link Lucene.Net.Search.Function.FieldScoreQuery FieldScoreQuery}. - This parameter is optional - it can be null or even an empty array. - - - Create a CustomScoreQuery over input subQuery and a {@link ValueSourceQuery}. - the sub query whose score is being customized. Must not be null. - - value source queries whose scores are used in the custom score - computation. For most simple/convenient use cases these would be - {@link Lucene.Net.Search.Function.FieldScoreQuery FieldScoreQueries}. - This parameter is optional - it can be null or even an empty array. - - - Returns true if o is equal to this. - - - Returns a hash code value for this object. - - - Returns a {@link CustomScoreProvider} that calculates the custom scores - for the given {@link IndexReader}. The default implementation returns a default - implementation as specified in the docs of {@link CustomScoreProvider}. - @since 2.9.2 - - - - Compute a custom score by the subQuery score and a number of - ValueSourceQuery scores. - - The doc is relative to the current reader, which is - unknown to CustomScoreQuery when using per-segment search (since Lucene 2.9). - Please override {@link #getCustomScoreProvider} and return a subclass - of {@link CustomScoreProvider} for the given {@link IndexReader}. - see CustomScoreProvider#customScore(int,float,float[]) - - - - Compute a custom score by the subQuery score and the ValueSourceQuery score. - - The doc is relative to the current reader, which is - unknown to CustomScoreQuery when using per-segment search (since Lucene 2.9). - Please override {@link #getCustomScoreProvider} and return a subclass - of {@link CustomScoreProvider} for the given {@link IndexReader}. - @see CustomScoreProvider#customScore(int,float,float) - - - - Explain the custom score. - - The doc is relative to the current reader, which is - unknown to CustomScoreQuery when using per-segment search (since Lucene 2.9). - Please override {@link #getCustomScoreProvider} and return a subclass - of {@link CustomScoreProvider} for the given {@link IndexReader}. - - - - Explain the custom score. - The doc is relative to the current reader, which is - unknown to CustomScoreQuery when using per-segment search (since Lucene 2.9). - Please override {@link #getCustomScoreProvider} and return a subclass - of {@link CustomScoreProvider} for the given {@link IndexReader}. - - - - Checks if this is strict custom scoring. - In strict custom scoring, the ValueSource part does not participate in weight normalization. - This may be useful when one wants full control over how scores are modified, and does - not care about normalizing by the ValueSource part. - One particular case where this is useful is for testing this query. -

- Note: only has effect when the ValueSource part is not null. -

-
- - Set the strict mode of this query. - The strict mode to set. - - - - - A short name of this query, used in {@link #ToString(String)}. - - - - Creates a new instance of the provider class for the given IndexReader. - - - - Compute a custom score by the subQuery score and a number of - ValueSourceQuery scores.

- Subclasses can override this method to modify the custom score. -

- If your custom scoring is different than the default herein you - should override at least one of the two customScore() methods. - If the number of ValueSourceQueries is always < 2 it is - sufficient to override the other - {@link #customScore(int, float, float) customScore()} - method, which is simpler. -

- The default computation herein is a multiplication of given scores: -

-                ModifiedScore = subQueryScore * valSrcScores[0] * valSrcScores[1] * ...
-            
-
- id of scored doc - score of that doc by the subQuery - scores of that doc by the ValueSourceQuery - custom score -
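- 
- A sketch of overriding the default multiplication (the class name is hypothetical; assumes the CustomScoreProvider API introduced in 2.9.2):
- 
-     public class SumScoreProvider : CustomScoreProvider
-     {
-         public SumScoreProvider(IndexReader reader) : base(reader) { }
- 
-         // Add the value-source score to the sub-query score instead of multiplying.
-         public override float CustomScore(int doc, float subQueryScore, float valSrcScore)
-         {
-             return subQueryScore + valSrcScore;
-         }
-     }
- 
- A CustomScoreQuery subclass would then return this provider from its getCustomScoreProvider override.
- 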
- - - - Explain the custom score. - Whenever overriding {@link #customScore(int, float, float[])}, - this method should also be overridden to provide the correct explanation - for the part of the custom scoring. - - doc being explained - explanation for the sub-query part - explanation for the value source part - an explanation for the custom score - - - - Explain the custom score. - Whenever overriding {@link #customScore(int, float, float)}, - this method should also be overridden to provide the correct explanation - for the part of the custom scoring. - - - doc being explained - explanation for the sub-query part - explanation for the value source part - an explanation for the custom score - - - A scorer that applies a (callback) function on scores of the subQuery. - - - use {@link #NextDoc()} instead. - - - - use {@link #DocID()} instead. - - - - use {@link #Advance(int)} instead. - - - - A TermInfo is the record of information stored for a term. - - - The number of documents which contain the term. - - - Store a sorted collection of {@link Lucene.Net.Index.TermVectorEntry}s. Collects all term information - into a single, SortedSet. -
- NOTE: This Mapper ignores all Field information for the Document. This means that if you are using offset/positions you will not - know what Fields they correlate with. -
- This is not thread-safe -
-
- - Stand-in name for the field in {@link TermVectorEntry}. - - - - A Comparator for sorting {@link TermVectorEntry}s - - - - - The term to map - - The frequency of the term - - Offset information, may be null - - Position information, may be null - - - - The TermVectorEntrySet. A SortedSet of {@link TermVectorEntry} objects. Sort is by the comparator passed into the constructor. -
- This set will be empty until after the mapping process takes place. - -
- The SortedSet of {@link TermVectorEntry}. - -
- - For each Field, store a sorted collection of {@link TermVectorEntry}s -

- This is not thread-safe. -

-
- - - A Comparator for sorting {@link TermVectorEntry}s - - - - Get the mapping between fields and terms, sorted by the comparator - - - A map between field names and {@link java.util.SortedSet}s per field. SortedSet entries are {@link TermVectorEntry} - - - - Access to the Fieldable Info file that describes document fields and whether or - not they are indexed. Each segment has a separate Fieldable Info file. Objects - of this class are thread-safe for multiple readers, but only one thread can - be adding documents at a time, with no other reader or writer threads - accessing this object. - - - - Construct a FieldInfos object using the directory and the name of the file - IndexInput - - The directory to open the IndexInput from - - The name of the file to open the IndexInput from in the Directory - - IOException - - - Returns a deep clone of this FieldInfos instance. - - - Adds field info for a Document. - - - Returns true if any fields do not omitTermFreqAndPositions - - - Add fields that are indexed. Whether they have termvectors has to be specified. - - - The names of the fields - - Whether the fields store term vectors or not - - true if positions should be stored. - - true if offsets should be stored - - - - Assumes the fields are not storing term vectors. - - - The names of the fields - - Whether the fields are indexed or not - - - - - - - Calls 5 parameter add with false for all TermVector parameters. - - - The name of the Fieldable - - true if the field is indexed - - - - - - Calls 5 parameter add with false for term vector positions and offsets. - - - The name of the field - - true if the field is indexed - - true if the term vector should be stored - - - - If the field is not yet known, adds it. If it is known, checks to make - sure that the isIndexed flag is the same as was given previously for this - field. If not - marks it as being indexed. Same goes for the TermVector - parameters. - - - The name of the field - - true if the field is indexed - - true if the term vector should be stored - - true if the term vector with positions should be stored - - true if the term vector with offsets should be stored - - - - If the field is not yet known, adds it. If it is known, checks to make - sure that the isIndexed flag is the same as was given previously for this - field. If not - marks it as being indexed. Same goes for the TermVector - parameters. - - - The name of the field - - true if the field is indexed - - true if the term vector should be stored - - true if the term vector with positions should be stored - - true if the term vector with offsets should be stored - - true if the norms for the indexed field should be omitted - - - - If the field is not yet known, adds it. If it is known, checks to make - sure that the isIndexed flag is the same as was given previously for this - field. If not - marks it as being indexed. Same goes for the TermVector - parameters. - - - The name of the field - - true if the field is indexed - - true if the term vector should be stored - - true if the term vector with positions should be stored - - true if the term vector with offsets should be stored - - true if the norms for the indexed field should be omitted - - true if payloads should be stored for this field - - true if term freqs should be omitted for this field - - - - Return the fieldName identified by its number. - - - - - the fieldName or an empty string when the field - with the given number doesn't exist. - - - - Return the fieldinfo object referenced by the fieldNumber. 
- - - the FieldInfo object or null when the given fieldNumber - doesn't exist. - - - - Stemmer, implementing the Porter Stemming Algorithm - - The Stemmer class transforms a word into its root form. The input - word can be provided a character at a time (by calling add()), or at once - by calling one of the various stem(something) methods. - - - - reset() resets the stemmer so it can stem another word. If you invoke - the stemmer by calling add(char) and then stem(), you must call reset() - before starting another word. - - - - Add a character to the word being stemmed. When you are finished - adding characters, you can call stem(void) to process the word. - - - - After a word has been stemmed, it can be retrieved by toString(), - or a reference to the internal buffer can be retrieved by getResultBuffer - and getResultLength (which is generally more efficient). - - - - Returns the length of the word resulting from the stemming process. - - - Returns a reference to a character buffer containing the results of - the stemming process. You also need to consult getResultLength() - to determine the length of the result. - - - - Stem a word provided as a String. Returns the result as a String. - - - Stem a word contained in a char[]. Returns true if the stemming process - resulted in a word different from the input. You can retrieve the - result with getResultLength()/getResultBuffer() or toString(). - - - - Stem a word contained in a portion of a char[] array. Returns - true if the stemming process resulted in a word different from - the input. You can retrieve the result with - getResultLength()/getResultBuffer() or toString(). - - - - Stem a word contained in a leading portion of a char[] array. - Returns true if the stemming process resulted in a word different - from the input. You can retrieve the result with - getResultLength()/getResultBuffer() or toString(). - - - - Stem the word placed into the Stemmer buffer through calls to add(). - Returns true if the stemming process resulted in a word different - from the input. You can retrieve the result with - getResultLength()/getResultBuffer() or toString(). - - - - Test program for demonstrating the Stemmer. It reads a file and - stems each word, writing the result to standard out. - Usage: Stemmer file-name - - - - Methods for manipulating strings. - - $Id: StringHelper.java 801344 2009-08-05 18:05:06Z yonik $ - - - - Expert: - The StringInterner implementation used by Lucene. - This shouldn't be changed to an incompatible implementation after other Lucene APIs have been used. - - - - Return the same string object for all equal strings - - - Compares two byte[] arrays, element by element, and returns the - number of elements common to both arrays. - - - The first byte[] to compare - - The second byte[] to compare - - The number of common elements. - - - - Compares two strings, character by character, and returns the - first position where the two strings differ from one another. - - - The first string to compare - - The second string to compare - - The first position where the two strings differ. - -

Implements {@link LockFactory} using {@link - File#createNewFile()}.

- -

NOTE: the javadocs - for File.createNewFile contain a vague - yet spooky warning about not using the API for file - locking. This warning was added due to this - bug, and in fact the only known problem with using - this API for locking is that the Lucene write lock may - not be released when the JVM exits abnormally.

-

When this happens, a {@link LockObtainFailedException} - is hit when trying to create a writer, in which case you - need to explicitly clear the lock file first. You can - either manually remove the file, or use the {@link - org.apache.lucene.index.IndexReader#unlock(Directory)} - API. But, first be certain that no writer is in fact - writing to the index otherwise you can easily corrupt - your index.

- -

If you suspect that this or any other LockFactory is - not working properly in your environment, you can easily - test it by using {@link VerifyingLockFactory}, {@link - LockVerifyServer} and {@link LockStressTest}.

- -

- - -
- - Create a SimpleFSLockFactory instance, with null (unset) - lock directory. When you pass this factory to a {@link FSDirectory} - subclass, the lock directory is automatically set to the - directory itself. Be sure to create one instance for each directory - you create! - - Instantiate using the provided directory (as a File instance). - where lock files should be created. - - - - Instantiate using the provided directory (as a File instance). - where lock files should be created. - - - - Instantiate using the provided directory name (String). - where lock files should be created. - - - - A Spans that is formed from the ordered subspans of a SpanNearQuery - where the subspans do not overlap and have a maximum slop between them. -

- The formed spans only contain minimum slop matches.
- The matching slop is computed from the distance(s) between - the non-overlapping matching Spans.
- Successive matches are always formed from the successive Spans - of the SpanNearQuery. -

- The formed spans may contain overlaps when the slop is at least 1. - For example, when querying using -

t1 t2 t3
- with slop at least 1, the fragment: -
t1 t2 t1 t3 t2 t3
- matches twice: -
t1 t2 .. t3      
-
      t1 .. t2 t3
- - - Expert: - Only public for subclassing. Most implementations should not need this class -
-
- - The spans in the same order as the SpanNearQuery - - - Indicates that all subSpans have same doc() - - - Advances the subSpans to just after an ordered match with a minimum slop - that is smaller than the slop allowed by the SpanNearQuery. - - true iff there is such a match. - - - - Advance the subSpans to the same document - - - Check whether two Spans in the same document are ordered. - - - - - true iff spans1 starts before spans2 - or the spans start at the same position, - and spans1 ends before spans2. - - - - Like {@link #DocSpansOrdered(Spans,Spans)}, but use the spans - starts and ends as parameters. - - - - Order the subSpans within the same document by advancing all later spans - after the previous one. - - - - The subSpans are ordered in the same doc, so there is a possible match. - Compute the slop while making the match as short as possible by advancing - all subSpans except the last one in reverse order. - - - - - - - - - - - The original list of terms from the query, can contain duplicates - - - - This class is very similar to - {@link Lucene.Net.Search.Spans.SpanNearQuery} except that it factors - in the value of the payloads located at each of the positions where the - {@link Lucene.Net.Search.Spans.TermSpans} occurs. -

- In order to take advantage of this, you must override - {@link Lucene.Net.Search.Similarity#ScorePayload(String, byte[],int,int)} - which returns 1 by default. -

- Payload scores are aggregated using a pluggable {@link PayloadFunction}. - -

- - -
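- 
- A construction sketch (terms, field, slop, and the AveragePayloadFunction choice are all illustrative assumptions from the 2.9-era payloads API):
- 
-     SpanQuery[] clauses = new SpanQuery[] {
-         new SpanTermQuery(new Term("body", "quick")),
-         new SpanTermQuery(new Term("body", "fox"))
-     };
-     // Slop of 2, matches required in order; payloads aggregated by the supplied function.
-     Query q = new PayloadNearQuery(clauses, 2, true, new AveragePayloadFunction());
- 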
- - By default, uses the {@link PayloadFunction} to score the payloads, but - can be overridden to do other things. - - - The payloads - - The start position of the span being scored - - The end position of the span being scored - - - - - - - This class wraps another ValueSource, but protects - against accidental double RAM usage in FieldCache when - a composite reader is passed to {@link #getValues}. - -

NOTE: this class adds a CPU penalty to every - lookup, as it must resolve the incoming document to the - right sub-reader using a binary search.

- -

- This class is temporary, to ease the - migration to segment-based searching. Please change your - code to not pass composite readers to these APIs. - -
- - A query that generates the union of documents produced by its subqueries, and that scores each document with the maximum - score for that document as produced by any subquery, plus a tie breaking increment for any additional matching subqueries. - This is useful when searching for a word in multiple fields with different boost factors (so that the fields cannot be - combined equivalently into a single search field). We want the primary score to be the one associated with the highest boost, - not the sum of the field scores (as BooleanQuery would give). - If the query is "albino elephant" this ensures that "albino" matching one field and "elephant" matching - another gets a higher score than "albino" matching both fields. - To get this result, use both BooleanQuery and DisjunctionMaxQuery: for each term a DisjunctionMaxQuery searches for it in - each field, while the set of these DisjunctionMaxQuery's is combined into a BooleanQuery. - The tie breaker capability allows results that include the same term in multiple fields to be judged better than results that - include this term in only the best of those multiple fields, without confusing this with the better case of two different terms - in the multiple fields. - - - - Creates a new empty DisjunctionMaxQuery. Use add() to add the subqueries. - the score of each non-maximum disjunct for a document is multiplied by this weight - and added into the final score. If non-zero, the value should be small, on the order of 0.1, which says that - 10 occurrences of word in a lower-scored field that is also in a higher scored field is just as good as a unique - word in the lower scored field (i.e., one that is not in any higher scored field. - - - - Creates a new DisjunctionMaxQuery - a Collection<Query> of all the disjuncts to add - - the weight to give to each matching non-maximum disjunct - - - - Add a subquery to this disjunction - the disjunct added - - - - Add a collection of disjuncts to this disjunction - via Iterable - - - - An Iterator<Query> over the disjuncts - - - Optimize our representation and our subqueries representations - the IndexReader we query - - an optimized copy of us (which may not be a copy if there is nothing to optimize) - - - - Create a shallow copy of us -- used in rewriting if necessary - a copy of us (but reuse, don't copy, our subqueries) - - - - Prettyprint us. - the field to which we are applied - - a string that shows what we do, of the form "(disjunct1 | disjunct2 | ... | disjunctn)^boost" - - - - Return true iff we represent the same query as o - another object - - true iff o is a DisjunctionMaxQuery with the same boost and the same subqueries, in the same order, as us - - - - Compute a hash code for hashing us - the hash code - - - - Expert: the Weight for DisjunctionMaxQuery, used to - normalize, score and explain these queries. - -

NOTE: this API and implementation is subject to - change suddenly in the next release.

-

-
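- 
- A sketch of the "albino elephant" pattern described above (field names illustrative):
- 
-     BooleanQuery query = new BooleanQuery();
-     foreach (string word in new string[] { "albino", "elephant" })
-     {
-         // Per term: score by the best field, plus 0.1 of the other fields as a tie-breaker.
-         DisjunctionMaxQuery dmq = new DisjunctionMaxQuery(0.1f);
-         dmq.Add(new TermQuery(new Term("title", word)));
-         dmq.Add(new TermQuery(new Term("body", word)));
-         query.Add(dmq, BooleanClause.Occur.MUST);
-     }
- 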
- - The Similarity implementation. - - - The Weights for our subqueries, in 1-1 correspondence with disjuncts - - - Expert: Default scoring implementation. - - - Implemented as - state.getBoost()*lengthNorm(numTerms), where - numTerms is {@link FieldInvertState#GetLength()} if {@link - #setDiscountOverlaps} is false, else it's {@link - FieldInvertState#GetLength()} - {@link - FieldInvertState#GetNumOverlap()}. - -

WARNING: This API is new and experimental, and may suddenly - change.

-

-
- - Implemented as 1/sqrt(numTerms). - - - Implemented as 1/sqrt(sumOfSquaredWeights). - - - Implemented as sqrt(freq). - - - Implemented as 1 / (distance + 1). - - - Implemented as log(numDocs/(docFreq+1)) + 1. - - - Implemented as overlap / maxOverlap. - - - Determines whether overlap tokens (Tokens with - 0 position increment) are ignored when computing - norm. By default this is false, meaning overlap - tokens are counted just like non-overlap tokens. - -

WARNING: This API is new and experimental, and may suddenly - change.

- -

- - -
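- 
- A sketch of adjusting a single factor while inheriting the rest (the Pascal-cased method name is assumed from the Lucene.Net port):
- 
-     public class FlatLengthSimilarity : DefaultSimilarity
-     {
-         // Ignore document length entirely; the default returns 1/sqrt(numTerms).
-         public override float LengthNorm(string fieldName, int numTerms)
-         {
-             return 1.0f;
-         }
-     }
- 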
- - - - - - Called once per field per document if term vectors - are enabled, to write the vectors to - RAMOutputStream, which is then quickly flushed to - the real term vectors files in the Directory. - - - - Convenience class for holding TermVector information. - - - Collapse the hash table & sort in-place. - - - Compares term text for two Posting instances and - returns -1 if p1 < p2; 1 if p1 > p2; else 0. - - - - Test whether the text for current RawPostingList p equals - current tokenText. - - - - Called when postings hash is too small (> 50% - occupied) or too large (< 20% occupied). - - - - This is a {@link LogMergePolicy} that measures size of a - segment as the total byte size of the segment's files. - - - - - - Default maximum segment size. A segment of this size - or larger will never be merged. - - - - -

Determines the largest segment (measured by total - byte size of the segment's files, in MB) that may be - merged with other segments. Small values (e.g., less - than 50 MB) are best for interactive indexing, as this - limits the length of pauses while indexing to a few - seconds. Larger values are best for batched indexing - and speedier searches.

- -

Note that {@link #setMaxMergeDocs} is also - used to check whether a segment is too large for - merging (it's either or).
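- 
- A configuration sketch (the constructor taking the writer and the thresholds shown are assumptions from the 2.9-era API):
- 
-     LogByteSizeMergePolicy policy = new LogByteSizeMergePolicy(writer);
-     policy.SetMaxMergeMB(50.0);   // segments above ~50 MB are never merged further
-     policy.SetMinMergeMB(1.6);    // anything smaller is treated as one level
-     writer.SetMergePolicy(policy);
- 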

-

-
- - Returns the largest segment (measured by total byte - size of the segment's files, in MB) that may be merged - with other segments. - - - - Sets the minimum size for the lowest level segments. - Any segments below this size are considered to be on - the same level (even if they vary drastically in size) - and will be merged whenever there are mergeFactor of - them. This effectively truncates the "long tail" of - small segments that would otherwise be created into a - single level. If you set this too large, it could - greatly increase the merging cost during indexing (if - you flush many small segments). - - - - Get the minimum size for a segment to remain - un-merged. - - - - - - Adds a new term in this field - - Called when we are done adding terms to this field - - Removes words that are too long or too short from the stream. - - - - $Id: LengthFilter.java 807201 2009-08-24 13:22:34Z markrmiller $ - - - - Build a filter that removes words that are too long or too - short from the text. - - - - Returns the next input Token whose term() is the right length - - - Simple DocIdSet and DocIdSetIterator backed by a BitSet - - - This DocIdSet implementation is cacheable. - - - Returns the underlying BitSet. - - - use {@link #DocID()} instead. - - - - use {@link #NextDoc()} instead. - - - - use {@link #Advance(int)} instead. - - - - This exception is thrown when you try to list a - non-existent directory. - - - - A {@link HitCollector} implementation that collects the top-sorting - documents, returning them as a {@link TopFieldDocs}. This is used by {@link - IndexSearcher} to implement {@link TopFieldDocs}-based search. -

This may be extended, overriding the collect method to, e.g., - conditionally invoke super() in order to filter which - documents are collected. - -

- Please use {@link TopFieldCollector} instead. - -
- - A {@link HitCollector} implementation that collects the top-scoring - documents, returning them as a {@link TopDocs}. This is used by {@link - IndexSearcher} to implement {@link TopDocs}-based search. - -

This may be extended, overriding the collect method to, e.g., - conditionally invoke super() in order to filter which - documents are collected. - -

- Please use {@link TopScoreDocCollector} - instead, which has better performance. - - -
- - The total number of hits the collector encountered. - - - The priority queue which holds the top-scoring documents. - - - Construct to collect a given number of hits. - the maximum number of hits to collect - - - - use TopDocCollector(hq) instead. numHits is not used by this - constructor. It will be removed in a future release. - - - - Constructor to collect the top-scoring documents by using the given PQ. - the PQ to use by this instance. - - - - The total number of documents that matched this query. - - - The top-scoring hits. - - - Construct to collect a given number of hits. - the index to be searched - - the sort criteria - - the maximum number of hits to collect - - - - Expert: A Scorer for documents matching a Term. - - - Construct a TermScorer. - - - The weight of the Term in the query. - - An iterator over the documents matching the Term. - - The Similarity implementation to be used for score - computations. - - The field norms of the document fields for the Term. - - - - use {@link #Score(Collector)} instead. - - - - use {@link #Score(Collector, int, int)} instead. - - - - use {@link #DocID()} instead. - - - - Advances to the next document matching the query.
- The iterator over the matching documents is buffered using - {@link TermDocs#Read(int[],int[])}. - -
- true iff there is another document matching the query. - - use {@link #NextDoc()} instead. - -
- - Advances to the next document matching the query.
- The iterator over the matching documents is buffered using - {@link TermDocs#Read(int[],int[])}. - -
- the document matching the query or -1 if there are no more documents. - -
- - Skips to the first match beyond the current whose document number is - greater than or equal to a given target.
- The implementation uses {@link TermDocs#SkipTo(int)}. - -
- The target document number. - - true iff there is such a match. - - use {@link #Advance(int)} instead. - -
- - Advances to the first match beyond the current whose document number is - greater than or equal to a given target.
- The implementation uses {@link TermDocs#SkipTo(int)}. - -
- The target document number. - - the matching document or -1 if none exist. - -
- - Returns an explanation of the score for a document. -
When this method is used, the {@link #Next()} method - and the {@link #Score(HitCollector)} method should not be used. -
- The document number for the explanation. - -
- - Returns a string representation of this TermScorer. - - - Matches spans near the beginning of a field. - - - Construct a SpanFirstQuery matching spans in match whose end - position is less than or equal to end. - - - - Return the SpanQuery whose matches are filtered. - - - Return the maximum end position permitted in a match. - - - Returns a collection of all terms matched by this query. - use extractTerms instead - - - - - - Constrains search results to only match those which also match a provided - query. Also provides position information about where each document matches - at the cost of extra space compared with the QueryWrapperFilter. - There is an added cost to this above what is stored in a {@link QueryWrapperFilter}. Namely, - the position information for each matching document is stored. -

- This filter does not cache. See the {@link Lucene.Net.Search.CachingSpanFilter} for a wrapper that - caches. - - -

- $Id:$ - -
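- 
- A usage sketch (term and field illustrative):
- 
-     SpanQuery span = new SpanTermQuery(new Term("body", "lucene"));
-     Filter filter = new SpanQueryFilter(span);  // retains position info; does not cache
- 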
- - Constructs a filter which only matches documents matching - query. - - The {@link Lucene.Net.Search.Spans.SpanQuery} to use as the basis for the Filter. - - - - A {@link Scorer} which wraps another scorer and caches the score of the - current document. Successive calls to {@link #Score()} will return the same - result and will not invoke the wrapped Scorer's score() method, unless the - current document has changed.
- This class might be useful due to the changes done to the {@link Collector} - interface, in which the score is not computed for a document by default, only - if the collector requests it. Some collectors may need to use the score in - several places, however all they have in hand is a {@link Scorer} object, and - might end up computing the score of a document more than once. -
-
- - Creates a new instance by wrapping the given scorer. - - - use {@link #DocID()} instead. - - - - use {@link #NextDoc()} instead. - - - - use {@link #Advance(int)} instead. - - - - A Query that matches documents within an exclusive range of terms. - -

This query matches the documents looking for terms that fall into the - supplied range according to {@link Term#CompareTo(Term)}. It is not intended - for numerical ranges, use {@link NumericRangeQuery} instead. - -

This query uses {@linkplain - MultiTermQuery#SCORING_BOOLEAN_QUERY_REWRITE}. If you - want to change this, use the new {@link TermRangeQuery} - instead. - -

- Use {@link TermRangeQuery} for term ranges or - {@link NumericRangeQuery} for numeric ranges instead. - This class will be removed in Lucene 3.0. - -
- - Constructs a query selecting all terms greater than - lowerTerm but less than upperTerm. - There must be at least one term and either term may be null, - in which case there is no bound on that side, but if there are - two terms, both terms must be for the same field. - - - The Term at the lower end of the range - - The Term at the upper end of the range - - If true, both lowerTerm and - upperTerm will themselves be included in the range. - - - - Constructs a query selecting all terms greater than - lowerTerm but less than upperTerm. - There must be at least one term and either term may be null, - in which case there is no bound on that side, but if there are - two terms, both terms must be for the same field. -

- If collator is not null, it will be used to decide whether - index terms are within the given range, rather than using the Unicode code - point order in which index terms are stored. -

- WARNING: Using this constructor and supplying a non-null - value in the collator parameter will cause every single - index Term in the Field referenced by lowerTerm and/or upperTerm to be - examined. Depending on the number of index Terms in this Field, the - operation could be very slow. - -

- The Term at the lower end of the range - - The Term at the upper end of the range - - If true, both lowerTerm and - upperTerm will themselves be included in the range. - - The collator to use to collate index Terms, to determine - their membership in the range bounded by lowerTerm and - upperTerm. - -
- - Returns the field name for this query - - - Returns the lower term of this range query. - - - Returns the upper term of this range query. - - - Returns true if the range query is inclusive - - - Returns the collator used to determine range inclusion, if any. - - - Prints a user-readable version of this query. - - - Returns true iff o is equal to this. - - - Returns a hash code value for this object. - - - Expert: Collects sorted results from Searchable's and collates them. - The elements put into this queue must be of type FieldDoc. - -

Created: Feb 11, 2004 2:04:21 PM - -

- lucene 1.4 - - $Id: FieldDocSortedHitQueue.java 695514 2008-09-15 15:42:11Z otis $ - -
- - Creates a hit queue sorted by the given list of fields. - Fieldable names, in priority order (highest priority first). - - The number of hits to retain. Must be greater than zero. - - - - Allows redefinition of sort fields if they are null. - This is to handle the case using ParallelMultiSearcher where the - original list contains AUTO and we don't know the actual sort - type until the values come back. The fields can only be set once. - This method is thread safe. - - - - - - Returns the fields being used to sort. - - - Returns an array of collators, possibly null. The collators - correspond to any SortFields which were given a specific locale. - - Array of sort fields. - - Array, possibly null. - - - - Returns whether a is less relevant than b. - ScoreDoc - - ScoreDoc - - true if document a should be sorted after document b. - - - - Expert: Describes the score computation for document and query, and - can distinguish a match independent of a positive value. - - - - The match status of this explanation node. - May be null if match status is unknown - - - - Sets the match status assigned to this explanation node. - May be null if match status is unknown - - - - Indicates whether or not this Explanation models a good match. - -

- If the match status is explicitly set (i.e.: not null) this method - uses it; otherwise it defers to the superclass. -

-

- - -
- - use {@link #Score(Collector, int, int)} instead. - - - - use {@link #DocID()} instead. - - - - use {@link #NextDoc()} instead. - - - - use {@link #Score(Collector)} instead. - - - - use {@link #Advance(int)} instead. - - - - use {@link #DocID()} instead. - - - - use {@link #NextDoc()} instead. - - - - use {@link #Advance(int)} instead. - - - - A simple hash table of document scores within a range. - - - This class implements {@link InvertedDocConsumer}, which - is passed each token produced by the analyzer on each - field. It stores these tokens in a hash table, and - allocates separate byte streams per token. Consumers of - this class, eg {@link FreqProxTermsWriter} and {@link - TermVectorsTermsWriter}, write their own byte streams - under each term. - - - - Increments the enumeration to the next element. True if one exists. - - - Optimized scan, without allocating new terms. - Return number of invocations to next(). - - - - Returns the current Term in the enumeration. - Initially invalid, valid after next() called for the first time. - - - - Returns the previous Term enumerated. Initially null. - - - Returns the current TermInfo in the enumeration. - Initially invalid, valid after next() called for the first time. - - - - Sets the argument to the current TermInfo in the enumeration. - Initially invalid, valid after next() called for the first time. - - - - Returns the docFreq from the current TermInfo in the enumeration. - Initially invalid, valid after next() called for the first time. - - - - Closes the enumeration to further activity, freeing resources. - - - For each Field, store position by position information. It ignores frequency information -

- This is not thread-safe. -

-
- - A Map of Integer and TVPositionInfo - - - - - - - - Never ignores positions. This mapper doesn't make much sense unless there are positions - false - - - - Callback for the TermVectorReader. - - - - - - - - - - - Callback mechanism used by the TermVectorReader - The field being read - - The number of terms in the vector - - Whether offsets are available - - Whether positions are available - - - - Get the mapping between fields and terms, sorted by the comparator - - - A map between field names and a Map. The sub-Map key is the position as the integer, the value is {@link Lucene.Net.Index.PositionBasedTermVectorMapper.TVPositionInfo}. - - - - Container for a term at a position - - - - The position of the term - - - - Note, there may be multiple terms at the same position - A List of Strings - - - - Parallel list (to {@link #getTerms()}) of TermVectorOffsetInfo objects. There may be multiple entries since there may be multiple terms at a position - A List of TermVectorOffsetInfo objects, if offsets are store. - - - - Holds state for inverting all occurrences of a single - field in the document. This class doesn't do anything - itself; instead, it forwards the tokens produced by - analysis to its own consumer - (InvertedDocConsumerPerField). It also interacts with an - endConsumer (InvertedDocEndConsumerPerField). - - - - LowerCaseTokenizer performs the function of LetterTokenizer - and LowerCaseFilter together. It divides text at non-letters and converts - them to lower case. While it is functionally equivalent to the combination - of LetterTokenizer and LowerCaseFilter, there is a performance advantage - to doing the two tasks at once, hence this (redundant) implementation. -

- Note: this does a decent job for most European languages, but does a terrible - job for some Asian languages, where words are not separated by spaces. -

-
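- 
- A consumption sketch using the 2.9 attribute API (the AddAttribute(typeof(...)) cast style is assumed from that era; later releases use a generic overload):
- 
-     TokenStream ts = new LowerCaseTokenizer(new System.IO.StringReader("The Quick BROWN Fox"));
-     TermAttribute term = (TermAttribute)ts.AddAttribute(typeof(TermAttribute));
-     while (ts.IncrementToken())
-     {
-         System.Console.WriteLine(term.Term());  // prints: the, quick, brown, fox
-     }
- 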
- - A LetterTokenizer is a tokenizer that divides text at non-letters. That's - to say, it defines tokens as maximal strings of adjacent letters, as defined - by java.lang.Character.isLetter() predicate. - Note: this does a decent job for most European languages, but does a terrible - job for some Asian languages, where words are not separated by spaces. - - - - Construct a new LetterTokenizer. - - - Construct a new LetterTokenizer using a given {@link AttributeSource}. - - - Construct a new LetterTokenizer using a given {@link Lucene.Net.Util.AttributeSource.AttributeFactory}. - - - Collects only characters which satisfy - {@link Character#isLetter(char)}. - - - - Construct a new LowerCaseTokenizer. - - - Construct a new LowerCaseTokenizer using a given {@link AttributeSource}. - - - Construct a new LowerCaseTokenizer using a given {@link Lucene.Net.Util.AttributeSource.AttributeFactory}. - - - Converts char to lower case - {@link Character#toLowerCase(char)}. - - - - - Not implemented. Waiting for volunteers. - - - - - Not implemented. Waiting for volunteers. - - - - Simple standalone tool that forever acquires & releases a - lock using a specific LockFactory. Run without any args - to see usage. - - - - - - - - - Expert: - Public for extension only - - - - Subclass of FilteredTermEnum for enumerating all terms that match the - specified prefix filter term. -

- Term enumerations are always ordered by Term.compareTo(). Each term in - the enumeration is greater than all that precede it. - -

-
- - Implements parallel search over a set of Searchables. - -

Applications usually need only call the inherited {@link #Search(Query)} - or {@link #Search(Query,Filter)} methods. -

-
- - Creates a searchable which searches searchables. - - TODO: parallelize this one too - - A search implementation which spawns a new thread for each - Searchable, waits for each search to complete and merges - the results back together. - - - - A search implementation allowing sorting which spawns a new thread for each - Searchable, waits for each search to complete and merges - the results back together. - - - - Lower-level search API. -

{@link Collector#Collect(int)} is called for every matching document. - -

Applications should only use this if they need all of the - matching documents. The high-level search API ({@link - Searcher#Search(Query)}) is usually more efficient, as it skips - non-high-scoring hits. - -

- to match documents - - if non-null, a bitset used to eliminate some documents - - to receive hits - - TODO: parallelize this one too - -
- - A thread subclass for searching a single searchable - - MultiPhraseQuery is a generalized version of PhraseQuery, with an added - method {@link #Add(Term[])}. - To use this class to search for the phrase "Microsoft app*", first use - add(Term) on the term "Microsoft", then find all terms that have "app" as - prefix using IndexReader.terms(Term), and use MultiPhraseQuery.add(Term[] - terms) to add them to the query. - - - 1.0 - - - - Sets the phrase slop for this query. - - - - - Sets the phrase slop for this query. - - - - - Add a single term at the next position in the phrase. - - - - - Add multiple terms at the next position in the phrase. Any of the terms - may match. - - - - - - - Allows specifying the relative position of terms within the phrase. - - - - - - - - - - - Returns a List<Term[]> of the terms in the multiphrase. - Do not modify the List or its contents. - - - - Returns the relative positions of terms in this phrase. - - - Prints a user-readable version of this query. - - - Returns true if o is equal to this. - - - Returns a hash code value for this object. - - - Lucene's package information, including version. - - - This stores a monotonically increasing set of <Term, TermInfo> pairs in a - Directory. A TermInfos can be written once, in order. - - - - The file format version, a negative number. - - - Expert: The fraction of terms in the "dictionary" which should be stored - in RAM. Smaller values use more memory, but make searching slightly - faster, while larger values use less memory and make searching slightly - slower. Searching is typically not dominated by dictionary lookup, so - tweaking this is rarely useful. - - - - Expert: The fraction of {@link TermDocs} entries stored in skip tables, - used to accelerate {@link TermDocs#SkipTo(int)}. Larger values result in - smaller indexes, greater acceleration, but fewer accelerable cases, while - smaller values result in bigger indexes, less acceleration and more - accelerable cases. More detailed experiments would be useful here. - - - - Expert: The maximum number of skip levels. Smaller values result in - slightly smaller indexes, but slower skipping in big posting lists. - - - - Adds a new <<fieldNumber, termBytes>, TermInfo> pair to the set. - Term must be lexicographically greater than all previous Terms added. - TermInfo pointers must be positive and greater than all previous. - - - - Called to complete TermInfos creation. - - - This stores a monotonically increasing set of <Term, TermInfo> pairs in a - Directory. Pairs are accessed either by Term or by ordinal position in the - set. - - - - Returns the number of term/value pairs in the set. - - - Returns the offset of the greatest index entry which is less than or equal to term. - - - Returns the TermInfo for a Term in the set, or null. - - - Returns the TermInfo for a Term in the set, or null. - - - Returns the position of a Term in the set or -1. - - - Returns an enumeration of all the Terms and TermInfos in the set. - - - Returns an enumeration of terms starting at or after the named term. - - - Per-thread resources managed by ThreadLocal - - - Information about a segment such as its name, directory, and files related - to the segment. -

NOTE: This API is new and still experimental - (subject to change suddenly in the next release)

-

-
- - Copy everything from src SegmentInfo into our instance. - - - Construct a new SegmentInfo instance by reading a - previously saved SegmentInfo from input. - - - directory to load from - - format of the segments info file - - input handle to read segment info from - - - - Returns total size in bytes of all of files used by - this segment. - - - - Returns true if this field for this segment has saved a separate norms file (_<segment>_N.sX). - - - the field index to check - - - - Returns true if any fields in this segment have separate norms. - - - Increment the generation count for the norms file for - this field. - - - field whose norm file will be rewritten - - - - Get the file name for the norms file for this field. - - - field index - - - - Mark whether this segment is stored as a compound file. - - - true if this is a compound file; - else, false - - - - Returns true if this segment is stored as a compound - file; else, false. - - - - Save this segment's info. - - - Used for debugging - - - We consider another SegmentInfo instance equal if it - has the same dir and same name. - - - - This {@link IndexDeletionPolicy} implementation that - keeps only the most recent commit and immediately removes - all prior commits after a new commit is done. This is - the default deletion policy. - - - - Deletes all commits except the most recent one. - - - Deletes all commits except the most recent one. - - - Used by DocumentsWriter to merge the postings from - multiple ThreadStates when creating a segment - - - - Provides support for converting dates to strings and vice-versa. - The strings are structured so that lexicographic sorting orders by date, - which makes them suitable for use as field values and search terms. - -

Note that this class saves dates with millisecond granularity, - which is bad for {@link TermRangeQuery} and {@link PrefixQuery}, as those - queries are expanded to a BooleanQuery with a potentially large number - of terms when searching. Thus you might want to use - {@link DateTools} instead. - -

- Note: dates before 1970 cannot be used, and therefore cannot be - indexed when using this class. See {@link DateTools} for an - alternative without such a limitation. - -

- Another approach is {@link NumericUtils}, which provides - a sortable binary representation (prefix encoded) of numeric values, which - date/time are. - For indexing a {@link Date} or {@link Calendar}, just get the unix timestamp as - long using {@link Date#getTime} or {@link Calendar#getTimeInMillis} and - index this as a numeric value with {@link NumericField} - and use {@link NumericRangeQuery} to query it. - -

- If you build a new index, use {@link DateTools} or - {@link NumericField} instead. - This class is included for use with existing - indices and will be removed in a future release. - -
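- 
- A sketch of the recommended replacement (field name and the choice of ticks as the long representation are illustrative; any consistent long encoding works):
- 
-     Document doc = new Document();
-     doc.Add(new NumericField("created").SetLongValue(dateTime.Ticks));
-     // ...and query the same field as a numeric range later:
-     Query q = NumericRangeQuery.NewLongRange("created", loTicks, hiTicks, true, true);
- 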
- - Converts a Date to a string suitable for indexing. - RuntimeException if the date specified in the - method argument is before 1970 - - - - Converts a millisecond time to a string suitable for indexing. - RuntimeException if the time specified in the - method argument is negative, that is, before 1970 - - - - Converts a string-encoded date into a millisecond time. - - - Converts a string-encoded date into a Date object. - - - A WhitespaceTokenizer is a tokenizer that divides text at whitespace. - Adjacent sequences of non-Whitespace characters form tokens. - - - - Construct a new WhitespaceTokenizer. - - - Construct a new WhitespaceTokenizer using a given {@link AttributeSource}. - - - Construct a new WhitespaceTokenizer using a given {@link Lucene.Net.Util.AttributeSource.AttributeFactory}. - - - Collects only characters which do not satisfy - {@link Character#isWhitespace(char)}. - - - - The start and end character offset of a Token. - - - Returns this Token's starting offset, the position of the first character - corresponding to this token in the source text. - Note that the difference between endOffset() and startOffset() may not be - equal to termText.length(), as the term text may have been altered by a - stemmer or some other filter. - - - - Set the starting and ending offset. - See StartOffset() and EndOffset() - - - - Returns this Token's ending offset, one greater than the position of the - last character corresponding to this token in the source text. The length - of the token in the source text is (endOffset - startOffset). - - - - Filters {@link StandardTokenizer} with {@link StandardFilter}, - {@link LowerCaseFilter} and {@link StopFilter}, using a list of English stop - words. - - -

- You must specify the required {@link Version} compatibility when creating - StandardAnalyzer: -

- -
- $Id: StandardAnalyzer.java 829134 2009-10-23 17:18:53Z mikemccand $ - -
- - Default maximum allowed token length - - - Specifies whether deprecated acronyms should be replaced with HOST type. - This is false by default to support backward compatibility. - - - this should be removed in the next release (3.0). - - See https://issues.apache.org/jira/browse/LUCENE-1068 - - - - - true if new instances of StandardTokenizer will - replace mischaracterized acronyms - - See https://issues.apache.org/jira/browse/LUCENE-1068 - - This will be removed (hardwired to true) in 3.0 - - - - - Set to true to have new - instances of StandardTokenizer replace mischaracterized - acronyms by default. Set to false to preserve the - previous (before 2.4) buggy behavior. Alternatively, - set the system property - Lucene.Net.Analysis.Standard.StandardAnalyzer.replaceInvalidAcronym - to false. - - See https://issues.apache.org/jira/browse/LUCENE-1068 - - This will be removed (hardwired to true) in 3.0 - - - - An array containing some common English words that are usually not - useful for searching. - - Use {@link #STOP_WORDS_SET} instead - - - - An unmodifiable set containing some common English words that are usually not - useful for searching. - - - - Builds an analyzer with the default stop words ({@link - #STOP_WORDS_SET}). - - Use {@link #StandardAnalyzer(Version)} instead. - - - - Builds an analyzer with the default stop words ({@link - #STOP_WORDS}). - - Lucene version to match See {@link - above} - - - - Builds an analyzer with the given stop words. - Use {@link #StandardAnalyzer(Version, Set)} - instead - - - - Builds an analyzer with the given stop words. - Lucene version to match See {@link - above} - - stop words - - - - Builds an analyzer with the given stop words. - Use {@link #StandardAnalyzer(Version, Set)} instead - - - - Builds an analyzer with the stop words from the given file. - - - Use {@link #StandardAnalyzer(Version, File)} - instead - - - - Builds an analyzer with the stop words from the given file. - - - Lucene version to match See {@link - above} - - File to read stop words from - - - - Builds an analyzer with the stop words from the given reader. - - - Use {@link #StandardAnalyzer(Version, Reader)} - instead - - - - Builds an analyzer with the stop words from the given reader. 
- - - Lucene version to match See {@link - above} - - Reader to read stop words from - - - - - Set to true if this analyzer should replace mischaracterized acronyms in the StandardTokenizer - - See https://issues.apache.org/jira/browse/LUCENE-1068 - - - Remove in 3.X and make true the only valid value - - - - The stopwords to use - - Set to true if this analyzer should replace mischaracterized acronyms in the StandardTokenizer - - See https://issues.apache.org/jira/browse/LUCENE-1068 - - - Remove in 3.X and make true the only valid value - - - - The stopwords to use - - Set to true if this analyzer should replace mischaracterized acronyms in the StandardTokenizer - - See https://issues.apache.org/jira/browse/LUCENE-1068 - - - Remove in 3.X and make true the only valid value - - - - - The stopwords to use - - Set to true if this analyzer should replace mischaracterized acronyms in the StandardTokenizer - - See https://issues.apache.org/jira/browse/LUCENE-1068 - - - Remove in 3.X and make true the only valid value - - - - The stopwords to use - - Set to true if this analyzer should replace mischaracterized acronyms in the StandardTokenizer - - See https://issues.apache.org/jira/browse/LUCENE-1068 - - - Remove in 3.X and make true the only valid value - - - - Constructs a {@link StandardTokenizer} filtered by a {@link - StandardFilter}, a {@link LowerCaseFilter} and a {@link StopFilter}. - - - - Set maximum allowed token length. If a token is seen - that exceeds this length then it is discarded. This - setting only takes effect the next time tokenStream or - reusableTokenStream is called. - - - - - - - - Use {@link #tokenStream} instead - - - - - true if this Analyzer is replacing mischaracterized acronyms in the StandardTokenizer - - See https://issues.apache.org/jira/browse/LUCENE-1068 - - This will be removed (hardwired to true) in 3.0 - - - - - Set to true if this Analyzer is replacing mischaracterized acronyms in the StandardTokenizer - - See https://issues.apache.org/jira/browse/LUCENE-1068 - - This will be removed (hardwired to true) in 3.0 - - - - Use by certain classes to match version compatibility - across releases of Lucene. -

- WARNING: When changing the version parameter - that you supply to components in Lucene, do not simply - change the version at search-time, but instead also adjust - your indexing code to match, and re-index. -

-
- - -
- WARNING: if you use this setting, and then upgrade to a newer release of Lucene, sizable changes may happen. If precise back compatibility is important then you should instead explicitly specify an actual version. If you use this constant then you may need to re-index all of your documents when upgrading Lucene, as the way text is indexed may have changed. Additionally, you may need to re-test your entire application to ensure it behaves as expected, as some defaults may have changed and may break functionality in your application.
-
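A hedged illustration of pinning the compatibility version when constructing the analyzer; the LUCENE_29 member is assumed from the 2.9-era enum and should be matched to the release you actually index with.

    // Sketch only: pin an explicit Version rather than a moving "current" constant.
    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Util;

    public static class AnalyzerVersionSketch
    {
        public static StandardAnalyzer Create()
        {
            // Use the same version at index and search time; re-index if you change it.
            return new StandardAnalyzer(Version.LUCENE_29);
        }
    }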
- - Match settings and bugs in Lucene's 2.0 release.
- Match settings and bugs in Lucene's 2.1 release.
- Match settings and bugs in Lucene's 2.2 release.
- Match settings and bugs in Lucene's 2.3 release.
- Match settings and bugs in Lucene's 2.4 release.
- Class to encode java's UTF16 char[] into UTF8 byte[] without always allocating a new byte[] as String.getBytes("UTF-8") does.
- WARNING: This API is new and experimental and may change suddenly.
-

-
- - Encode characters from a char[] source, starting at - offset and stopping when the character 0xffff is seen. - Returns the number of bytes written to bytesOut. - - - - Encode characters from a char[] source, starting at - offset for length chars. Returns the number of bytes - written to bytesOut. - - - - Encode characters from this String, starting at offset - for length characters. Returns the number of bytes - written to bytesOut. - - - - Convert UTF8 bytes into UTF16 characters. If offset - is non-zero, conversion starts at that starting point - in utf8, re-using the results from the previous call - up until offset. - - - - This exception is thrown when the write.lock - could not be acquired. This - happens when a writer tries to open an index - that another writer already has open. - - - - - - Expert: A Directory instance that switches files between - two other Directory instances. -

Files with the specified extensions are placed in the - primary directory; others are placed in the secondary - directory. The provided Set must not change once passed - to this class, and must allow multiple threads to call - contains at once.

- -

NOTE: this API is new and experimental and is - subject to suddenly change in the next release. -

-
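A hedged wiring sketch for this directory: the constructor shape follows the Java original (extension set, primary, secondary, close flag), but the exact collection and FSDirectory overloads differ between ports and versions, so treat the names below as assumptions.

    // Sketch only: small "hot" files in RAM, everything else on disk.
    using System.Collections.Generic;
    using Lucene.Net.Store;

    public static class FileSwitchSketch
    {
        public static Directory Open(string indexPath)
        {
            Directory primary = new RAMDirectory();   // files with the listed extensions
            Directory secondary = FSDirectory.Open(new System.IO.DirectoryInfo(indexPath));

            // Must not change after construction; must tolerate concurrent Contains calls.
            var primaryExtensions = new HashSet<string> { "frq", "prx" };
            return new FileSwitchDirectory(primaryExtensions, primary, secondary, true);
        }
    }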
- - Return the primary directory
- Return the secondary directory
- Utility method to return a file's extension.
- Constrains search results to only match those which also match a provided query.
- This could be used, for example, with a {@link TermRangeQuery} on a suitably formatted date field to implement date filtering. One could re-use a single QueryFilter that matches, e.g., only documents modified within the last week. The QueryFilter and TermRangeQuery would only need to be reconstructed once per day.
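A hedged sketch of that pattern, assuming day-resolution date strings (e.g. produced by DateTools) so that term order matches date order; the field name "modified" is illustrative.

    // Sketch only: one reusable filter per day for "modified in the last week".
    using Lucene.Net.Search;

    public static class DateFilterSketch
    {
        public static Filter LastWeekFilter(string lowerDay, string upperDay)
        {
            // Terms must sort lexicographically in date order, e.g. "20091016".
            var range = new TermRangeQuery("modified", lowerDay, upperDay, true, true);
            return new QueryFilter(range);   // rebuild once per day, then reuse
        }
    }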
- - Constructs a filter which only matches documents matching query.
- Use {@link #GetDocIdSet(IndexReader)} instead.
- Expert: obtains the ordinal of the field value from the default Lucene {@link Lucene.Net.Search.FieldCache FieldCache} using getStringIndex() and reverses the order.
- The native lucene index order is used to assign an ordinal value for each field value.
- Field values (terms) are lexicographically ordered by unicode value, and numbered starting at 1.
- Example of reverse ordinal (rord): if there were only three field values, "apple", "banana" and "pear", then rord("apple")=3, rord("banana")=2, rord("pear")=1.
- WARNING: rord() depends on the position in an index and can thus change when other documents are inserted or deleted, or if a MultiSearcher is used.
- WARNING: The status of the Search.Function package is experimental. The APIs introduced here might change in the future and will not be supported anymore in such a case.
- NOTE: with the switch in 2.9 to segment-based searching, if {@link #getValues} is invoked with a composite (multi-segment) reader, this can easily cause double RAM usage for the values in the FieldCache. It's best to switch your application to pass only atomic (single segment) readers to this API. Alternatively, for a short-term fix, you could wrap your ValueSource using {@link MultiValueSource}, which costs more CPU per lookup but will not consume double the FieldCache RAM.
-
-
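A hedged usage sketch, assuming the Search.Function types of the era (ValueSourceQuery wrapping the value source); as the warnings above say, this package is experimental.

    // Sketch only: score documents by the reverse ordinal of a field's value.
    using Lucene.Net.Search;
    using Lucene.Net.Search.Function;

    public static class RordSketch
    {
        public static Query ByReverseOrder(string field)
        {
            // Lexicographically early terms receive the highest scores.
            return new ValueSourceQuery(new ReverseOrdFieldSource(field));
        }
    }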
- - Constructor for a certain field.
- field whose values' reverse order is used.
- The Scorer for DisjunctionMaxQuery. The union of all documents generated by the subquery scorers is generated in document number order. The score for each document is the maximum of the scores computed by the subquery scorers that generate that document, plus tieBreakerMultiplier times the sum of the scores for the other subqueries that generate the document.
- Creates a new instance of DisjunctionMaxScorer
- Multiplier applied to non-maximum-scoring subqueries for a document as they are summed into the result.
- -- not used since our definition involves neither coord nor terms directly
- The sub scorers this Scorer should iterate on
- The actual number of scorers to iterate on. Note that the array's length may be larger than the actual number of scorers.
- Generate the next document matching our associated DisjunctionMaxQuery.
- true iff there is a next document
- use {@link #NextDoc()} instead.
- use {@link #DocID()} instead.
- Determine the current document score. Initially invalid, until {@link #Next()} is called the first time.
- the score of the current generated document
- Advance to the first document beyond the current whose number is greater than or equal to target.
- the minimum number of the next desired document
- true iff there is a document to be generated whose number is at least target
- use {@link #Advance(int)} instead.
- Explain a score that we computed. UNSUPPORTED -- see explanation capability in DisjunctionMaxQuery.
- the number of a document we scored
- the Explanation for our score
- A Query that matches documents matching boolean combinations of other queries, e.g. {@link TermQuery}s, {@link PhraseQuery}s or other BooleanQuerys.
- Return the maximum number of clauses permitted, 1024 by default. Attempts to add more than the permitted number of clauses cause {@link TooManyClauses} to be thrown.
- Set the maximum number of clauses permitted per BooleanQuery. Default value is 1024.
- Constructs an empty boolean query.
- Constructs an empty boolean query. {@link Similarity#Coord(int,int)} may be disabled in scoring, as appropriate. For example, this score factor does not make sense for most automatically generated queries, like {@link WildcardQuery} and {@link FuzzyQuery}.
- disables {@link Similarity#Coord(int,int)} in scoring.
- Returns true iff {@link Similarity#Coord(int,int)} is disabled in scoring for this query instance.
- Specifies a minimum number of the optional BooleanClauses which must be satisfied.
- By default no optional clauses are necessary for a match (unless there are no required clauses). If this method is used, then the specified number of clauses is required.
- Use of this method is totally independent of specifying that any specific clauses are required (or prohibited). This number will only be compared against the number of matching optional clauses.
- EXPERT NOTE: Using this method may force collecting docs in order, regardless of whether setAllowDocsOutOfOrder(true) has been called.
- the number of optional clauses that must match - - - -
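A hedged example of the knob described above: three optional clauses with a minimum of two required to match (class and method names per the 2.9-era Lucene.Net API).

    // Sketch only: at least two of the three SHOULD clauses must match.
    using Lucene.Net.Index;
    using Lucene.Net.Search;

    public static class MinShouldMatchSketch
    {
        public static BooleanQuery Build()
        {
            var q = new BooleanQuery();
            q.Add(new TermQuery(new Term("body", "apple")), BooleanClause.Occur.SHOULD);
            q.Add(new TermQuery(new Term("body", "banana")), BooleanClause.Occur.SHOULD);
            q.Add(new TermQuery(new Term("body", "pear")), BooleanClause.Occur.SHOULD);
            q.SetMinimumNumberShouldMatch(2);   // optional-clause threshold
            return q;
        }
    }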
- - Gets the minimum number of the optional BooleanClauses which must be satisfied.
- Adds a clause to a boolean query.
- TooManyClauses if the new number of clauses exceeds the maximum clause number
- Adds a clause to a boolean query.
- TooManyClauses if the new number of clauses exceeds the maximum clause number
- Returns the set of clauses in this query.
- Returns the list of clauses in this query.
- Whether hit docs may be collected out of docid order.
- this will not be needed anymore, as {@link Weight#ScoresDocsOutOfOrder()} is used.
- Expert: Indicates whether hit docs may be collected out of docid order.
- Background: although the contract of the Scorer class requires that documents be iterated in order of doc id, this was not true in early versions of Lucene. Many pieces of functionality in the current Lucene code base have undefined behavior if this contract is not upheld, but in some specific simple cases it may be faster. (For example: disjunction queries with fewer than 32 prohibited clauses; this setting has no effect for other queries.)
- Specifics: By setting this option to true, docid N might be scored for a single segment before docid N-1. Across multiple segments, docs may be scored out of order regardless of this setting - it only applies to scoring a single segment. Being static, this setting is system wide.
- this is not needed anymore, as {@link Weight#ScoresDocsOutOfOrder()} is used.
- - Whether hit docs may be collected out of docid order.
- this is not needed anymore, as {@link Weight#ScoresDocsOutOfOrder()} is used.
- Use {@link #SetAllowDocsOutOfOrder(boolean)} instead.
- Use {@link #GetAllowDocsOutOfOrder()} instead.
- Prints a user-readable version of this query.
- Returns true iff o is equal to this.
- Returns a hash code value for this object.
- Thrown when an attempt is made to add more than {@link #GetMaxClauseCount()} clauses. This typically happens if a PrefixQuery, FuzzyQuery, WildcardQuery, or TermRangeQuery is expanded to many terms during search.
- Expert: the Weight for BooleanQuery, used to normalize, score and explain these queries.
- NOTE: this API and implementation are subject to sudden change in the next release.
- - The Similarity implementation. - - - Fills in no-term-vectors for all docs we haven't seen - since the last doc that had term vectors. - - - - Taps into DocInverter, as an InvertedDocEndConsumer, - which is called at the end of inverting each field. We - just look at the length for the field (docState.length) - and record the norm. - - - - - - - - - - Constructs a new runtime exception with null as its - detail message. The cause is not initialized, and may subsequently be - initialized by a call to {@link #innerException}. - - - - Constructs a new runtime exception with the specified cause and a - detail message of (cause==null ? null : cause.toString()) - (which typically contains the class and detail message of - cause). -

- This constructor is useful for runtime exceptions - that are little more than wrappers for other throwables. - -

- the cause (which is saved for later retrieval by the - {@link #InnerException()} method). (A null value is - permitted, and indicates that the cause is nonexistent or - unknown.) - - 1.4 - -
- - Constructs a new runtime exception with the specified detail message. - The cause is not initialized, and may subsequently be initialized by a - call to {@link #innerException}. - - - the detail message. The detail message is saved for - later retrieval by the {@link #getMessage()} method. - - - - Constructs a new runtime exception with the specified detail message and - cause.

Note that the detail message associated with - cause is not automatically incorporated in - this runtime exception's detail message. - -

- the detail message (which is saved for later retrieval - by the {@link #getMessage()} method). - - the cause (which is saved for later retrieval by the - {@link #InnerException()} method). (A null value is - permitted, and indicates that the cause is nonexistent or - unknown.) - - 1.4 - -
- - Documents are the unit of indexing and search. - - A Document is a set of fields. Each field has a name and a textual value. - A field may be {@link Fieldable#IsStored() stored} with the document, in which - case it is returned with search hits on the document. Thus each document - should typically contain one or more stored fields which uniquely identify - it. - -

Note that fields which are not {@link Fieldable#IsStored() stored} are - not available in documents retrieved from the index, e.g. with {@link - ScoreDoc#doc}, {@link Searcher#Doc(int)} or {@link - IndexReader#Document(int)}. -

-
- - Constructs a new document with no fields.
- Sets a boost factor for hits on any field of this document. This value will be multiplied into the score of all hits on this document.
- The default value is 1.0.
- Values are multiplied into the value of {@link Fieldable#GetBoost()} of each field in this document. Thus, this method in effect sets a default boost for the fields of this document.
- - -
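A hedged one-liner showing the index-time boost just described; remember that the value is folded into field norms rather than stored verbatim, as the retrieval note below explains.

    // Sketch only: boost every field of this document at indexing time.
    using Lucene.Net.Documents;

    public static class BoostSketch
    {
        public static Document Boosted(Document doc)
        {
            doc.SetBoost(1.5f);   // combined into each field's norm, not retrievable later
            return doc;
        }
    }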
- - Returns, at indexing time, the boost factor as set by {@link #SetBoost(float)}.
- Note that once a document is indexed this value is no longer available from the index. At search time, for retrieved documents, this method always returns 1. This however does not mean that the boost value set at indexing time was ignored - it was just combined with other indexing time factors and stored elsewhere, for better indexing and search performance. (For more information see the "norm(t,d)" part of the scoring formula in {@link Lucene.Net.Search.Similarity Similarity}.)
Adds a field to a document. Several fields may be added with the same name. In this case, if the fields are indexed, their text is treated as though appended for the purposes of search.
- Note that add, like the removeField(s) methods, only makes sense prior to adding a document to an index. These methods cannot be used to change the content of an existing index! In order to achieve this, a document has to be deleted from an index and a new changed version of that document has to be added.
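A hedged construction sketch matching the contract above: a stored identifier plus an analyzed body (field names and flags are illustrative).

    // Sketch only: build a document before handing it to an IndexWriter.
    using Lucene.Net.Documents;

    public static class DocumentSketch
    {
        public static Document Build(string id, string body)
        {
            var doc = new Document();
            doc.Add(new Field("id", id, Field.Store.YES, Field.Index.NOT_ANALYZED));
            doc.Add(new Field("body", body, Field.Store.NO, Field.Index.ANALYZED));
            return doc;   // to "change" it later, delete and re-add a new version
        }
    }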
Removes field with the specified name from the document. If multiple fields exist with this name, this method removes the first field that has been added. If there is no field with the specified name, the document remains unchanged.
- Note that the removeField(s) methods, like the add method, only make sense prior to adding a document to an index. These methods cannot be used to change the content of an existing index! In order to achieve this, a document has to be deleted from an index and a new changed version of that document has to be added.
- Removes all fields with the given name from the document. If there is no field with the specified name, the document remains unchanged.
- Note that the removeField(s) methods, like the add method, only make sense prior to adding a document to an index. These methods cannot be used to change the content of an existing index! In order to achieve this, a document has to be deleted from an index and a new changed version of that document has to be added.
- - Returns a field with the given name if any exist in this document, or null. If multiple fields exist with this name, this method returns the first value added. Do not use this method with lazy loaded fields.
- Returns a field with the given name if any exist in this document, or null. If multiple fields exist with this name, this method returns the first value added.
- Returns the string value of the field with the given name if any exist in this document, or null. If multiple fields exist with this name, this method returns the first value added. If only binary fields with this name exist, returns null.
- Returns an Enumeration of all the fields in a document.
- use {@link #GetFields()} instead
- Returns a List of all the fields in a document.
- Note that fields which are not {@link Fieldable#IsStored() stored} are not available in documents retrieved from the index, e.g. {@link Searcher#Doc(int)} or {@link IndexReader#Document(int)}.
-
- - Returns an array of {@link Field}s with the given name. - Do not use with lazy loaded fields. - This method returns an empty array when there are no - matching fields. It never returns null. - - - the name of the field - - a Field[] array - - - - Returns an array of {@link Fieldable}s with the given name. - This method returns an empty array when there are no - matching fields. It never returns null. - - - the name of the field - - a Fieldable[] array - - - - Returns an array of values of the field specified as the method parameter. - This method returns an empty array when there are no - matching fields. It never returns null. - - the name of the field - - a String[] of field values - - - - Returns an array of byte arrays for of the fields that have the name specified - as the method parameter. This method returns an empty - array when there are no matching fields. It never - returns null. - - - the name of the field - - a byte[][] of binary field values - - - - Returns an array of bytes for the first (or only) field that has the name - specified as the method parameter. This method will return null - if no binary fields with the specified name are available. - There may be non-binary fields with the same name. - - - the name of the field. - - a byte[] containing the binary field value or null - - - - Prints the fields of a document for human consumption. - - - Simple utility class providing static methods to - compress and decompress binary data for stored fields. - This class uses java.util.zip.Deflater and Inflater - classes to compress and decompress, which is the same - format previously used by the now deprecated - Field.Store.COMPRESS. - - - - Compresses the specified byte range using the - specified compressionLevel (constants are defined in - java.util.zip.Deflater). - - - - Compresses the specified byte range, with default BEST_COMPRESSION level - - - Compresses all bytes in the array, with default BEST_COMPRESSION level - - - Compresses the String value, with default BEST_COMPRESSION level - - - Compresses the String value using the specified - compressionLevel (constants are defined in - java.util.zip.Deflater). - - - - Decompress the byte array previously returned by - compress - - - - Decompress the byte array previously returned by - compressString back into a String - - - - The term text of a Token. - - - Returns the Token's term text. - - This method has a performance penalty - because the text is stored internally in a char[]. If - possible, use {@link #TermBuffer()} and {@link - #TermLength()} directly instead. If you really need a - String, use this method, which is nothing more than - a convenience call to new String(token.termBuffer(), 0, token.termLength()) - - - - Copies the contents of buffer, starting at offset for - length characters, into the termBuffer array. - - the buffer to copy - - the index in the buffer of the first character to copy - - the number of characters to copy - - - - Copies the contents of buffer into the termBuffer array. - the buffer to copy - - - - Copies the contents of buffer, starting at offset and continuing - for length characters, into the termBuffer array. - - the buffer to copy - - the index in the buffer of the first character to copy - - the number of characters to copy - - - - Returns the internal termBuffer character array which - you can then directly alter. If the array is too - small for your token, use {@link - #ResizeTermBuffer(int)} to increase it. 
After altering the buffer be sure to call {@link #setTermLength} to record the number of valid characters that were placed into the termBuffer.
- Grows the termBuffer to at least size newSize, preserving the existing content. Note: If the next operation is to change the contents of the term buffer use {@link #SetTermBuffer(char[], int, int)}, {@link #SetTermBuffer(String)}, or {@link #SetTermBuffer(String, int, int)} to optimally combine the resize with the setting of the termBuffer.
- minimum size of the new termBuffer
- newly created termBuffer with length >= newSize
- Allocates a buffer char[] of at least newSize, without preserving the existing content. It is always used in places that set the content.
- minimum size of the buffer
- Returns the number of valid characters (length of the term) in the termBuffer array.
- Set the number of valid characters (length of the term) in the termBuffer array. Use this to truncate the termBuffer or to synchronize with external manipulation of the termBuffer. Note: to grow the size of the array, use {@link #ResizeTermBuffer(int)} first.
- the truncated length
- The positionIncrement determines the position of this token relative to the previous Token in a {@link TokenStream}, used in phrase searching.
The default value is one.
- Some common uses for this are:
- • Set it to zero to put multiple terms in the same position. This is useful if, e.g., a word has multiple stems. Searches for phrases including either stem will match. In this case, all but the first stem's increment should be set to zero: the increment of the first instance should be one. Repeating a token with an increment of zero can also be used to boost the scores of matches on that token.
- • Set it to values greater than one to inhibit exact phrase matches. If, for example, one does not want phrases to match across removed stop words, then one could build a stop word filter that removes stop words and also sets the increment to the number of stop words removed before each non-stop word. Then exact phrase queries will only match when the terms occur with no intervening stop words.
-
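A hedged filter sketch for the first use above: inject a synonym at the same position by giving it a position increment of zero. This uses the old (2.9-era) Token-based TokenStream API, and the single hard-coded synonym is purely illustrative.

    // Sketch only: emit "fast" at the same position as "quick".
    using Lucene.Net.Analysis;

    public class OneSynonymFilter : TokenFilter
    {
        private Token pending;

        public OneSynonymFilter(TokenStream input) : base(input) { }

        public override Token Next(Token reusableToken)
        {
            if (pending != null)
            {
                Token t = pending;
                pending = null;
                return t;       // the queued synonym, increment already zeroed
            }
            Token token = input.Next(reusableToken);
            if (token != null && token.Term() == "quick")
            {
                pending = new Token("fast", token.StartOffset(), token.EndOffset());
                pending.SetPositionIncrement(0);   // same position as the original
            }
            return token;
        }
    }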
- - Set the position increment. The default value is one. - - - the distance from the prior term - - - - Returns the position increment of this Token. - - - -
-
diff --git a/Lib/log4net.xml b/Lib/log4net.xml deleted file mode 100644 index 6d1f134..0000000 --- a/Lib/log4net.xml +++ /dev/null @@ -1,27658 +0,0 @@ - - - - log4net - - - - - Appender that logs to a database. - - - - appends logging events to a table within a - database. The appender can be configured to specify the connection - string by setting the property. - The connection type (provider) can be specified by setting the - property. For more information on database connection strings for - your specific database see http://www.connectionstrings.com/. - - - Records are written into the database either using a prepared - statement or a stored procedure. The property - is set to (System.Data.CommandType.Text) to specify a prepared statement - or to (System.Data.CommandType.StoredProcedure) to specify a stored - procedure. - - - The prepared statement text or the name of the stored procedure - must be set in the property. - - - The prepared statement or stored procedure can take a number - of parameters. Parameters are added using the - method. This adds a single to the - ordered list of parameters. The - type may be subclassed if required to provide database specific - functionality. The specifies - the parameter name, database type, size, and how the value should - be generated using a . - - - - An example of a SQL Server table that could be logged to: - - CREATE TABLE [dbo].[Log] ( - [ID] [int] IDENTITY (1, 1) NOT NULL , - [Date] [datetime] NOT NULL , - [Thread] [varchar] (255) NOT NULL , - [Level] [varchar] (20) NOT NULL , - [Logger] [varchar] (255) NOT NULL , - [Message] [varchar] (4000) NOT NULL - ) ON [PRIMARY] - - - - An example configuration to log to the above table: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Julian Biddle - Nicko Cadell - Gert Driesen - Lance Nehring - - - - Abstract base class implementation of that - buffers events in a fixed size buffer. - - - - This base class should be used by appenders that need to buffer a - number of events before logging them. For example the - buffers events and then submits the entire contents of the buffer to - the underlying database in one go. - - - Subclasses should override the - method to deliver the buffered events. - - The BufferingAppenderSkeleton maintains a fixed size cyclic - buffer of events. The size of the buffer is set using - the property. - - A is used to inspect - each event as it arrives in the appender. If the - triggers, then the current buffer is sent immediately - (see ). Otherwise the event - is stored in the buffer. For example, an evaluator can be used to - deliver the events immediately when an ERROR event arrives. - - - The buffering appender can be configured in a mode. - By default the appender is NOT lossy. When the buffer is full all - the buffered events are sent with . - If the property is set to true then the - buffer will not be sent when it is full, and new events arriving - in the appender will overwrite the oldest event in the buffer. - In lossy mode the buffer will only be sent when the - triggers. This can be useful behavior when you need to know about - ERROR events but not about events with a lower level, configure an - evaluator that will trigger when an ERROR event arrives, the whole - buffer will be sent which gives a history of events leading up to - the ERROR event. - - - Nicko Cadell - Gert Driesen - - - - Abstract base class implementation of . 
- - - - This class provides the code for common functionality, such - as support for threshold filtering and support for general filters. - - - Appenders can also implement the interface. Therefore - they would require that the method - be called after the appenders properties have been configured. - - - Nicko Cadell - Gert Driesen - - - - Implement this interface for your own strategies for printing log statements. - - - - Implementors should consider extending the - class which provides a default implementation of this interface. - - - Appenders can also implement the interface. Therefore - they would require that the method - be called after the appenders properties have been configured. - - - Nicko Cadell - Gert Driesen - - - - Closes the appender and releases resources. - - - - Releases any resources allocated within the appender such as file handles, - network connections, etc. - - - It is a programming error to append to a closed appender. - - - - - - Log the logging event in Appender specific way. - - The event to log - - - This method is called to log a message into this appender. - - - - - - Gets or sets the name of this appender. - - The name of the appender. - - The name uniquely identifies the appender. - - - - - Interface for appenders that support bulk logging. - - - - This interface extends the interface to - support bulk logging of objects. Appenders - should only implement this interface if they can bulk log efficiently. - - - Nicko Cadell - - - - Log the array of logging events in Appender specific way. - - The events to log - - - This method is called to log an array of events into this appender. - - - - - - Interface used to delay activate a configured object. - - - - This allows an object to defer activation of its options until all - options have been set. This is required for components which have - related options that remain ambiguous until all are set. - - - If a component implements this interface then the method - must be called by the container after its all the configured properties have been set - and before the component can be used. - - - Nicko Cadell - - - - Activate the options that were previously set with calls to properties. - - - - This allows an object to defer activation of its options until all - options have been set. This is required for components which have - related options that remain ambiguous until all are set. - - - If a component implements this interface then this method must be called - after its properties have been set before the component can be used. - - - - - - Initial buffer size - - - - - Maximum buffer size before it is recycled - - - - - Default constructor - - - Empty default constructor - - - - - Finalizes this appender by calling the implementation's - method. - - - - If this appender has not been closed then the Finalize method - will call . - - - - - - Initialize the appender based on the options set - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - - - - Closes the appender and release resources. - - - - Release any resources allocated within the appender such as file handles, - network connections, etc. - - - It is a programming error to append to a closed appender. - - - This method cannot be overridden by subclasses. 
This method - delegates the closing of the appender to the - method which must be overridden in the subclass. - - - - - - Performs threshold checks and invokes filters before - delegating actual logging to the subclasses specific - method. - - The event to log. - - - This method cannot be overridden by derived classes. A - derived class should override the method - which is called by this method. - - - The implementation of this method is as follows: - - - - - - Checks that the severity of the - is greater than or equal to the of this - appender. - - - - Checks that the chain accepts the - . - - - - - Calls and checks that - it returns true. - - - - - If all of the above steps succeed then the - will be passed to the abstract method. - - - - - - Performs threshold checks and invokes filters before - delegating actual logging to the subclasses specific - method. - - The array of events to log. - - - This method cannot be overridden by derived classes. A - derived class should override the method - which is called by this method. - - - The implementation of this method is as follows: - - - - - - Checks that the severity of the - is greater than or equal to the of this - appender. - - - - Checks that the chain accepts the - . - - - - - Calls and checks that - it returns true. - - - - - If all of the above steps succeed then the - will be passed to the method. - - - - - - Test if the logging event should we output by this appender - - the event to test - true if the event should be output, false if the event should be ignored - - - This method checks the logging event against the threshold level set - on this appender and also against the filters specified on this - appender. - - - The implementation of this method is as follows: - - - - - - Checks that the severity of the - is greater than or equal to the of this - appender. - - - - Checks that the chain accepts the - . - - - - - - - - - Adds a filter to the end of the filter chain. - - the filter to add to this appender - - - The Filters are organized in a linked list. - - - Setting this property causes the new filter to be pushed onto the - back of the filter chain. - - - - - - Clears the filter list for this appender. - - - - Clears the filter list for this appender. - - - - - - Checks if the message level is below this appender's threshold. - - to test against. - - - If there is no threshold set, then the return value is always true. - - - - true if the meets the - requirements of this appender. - - - - - Is called when the appender is closed. Derived classes should override - this method if resources need to be released. - - - - Releases any resources allocated within the appender such as file handles, - network connections, etc. - - - It is a programming error to append to a closed appender. - - - - - - Subclasses of should implement this method - to perform actual logging. - - The event to append. - - - A subclass must implement this method to perform - logging of the . - - This method will be called by - if all the conditions listed for that method are met. - - - To restrict the logging of events in the appender - override the method. - - - - - - Append a bulk array of logging events. - - the array of logging events - - - This base class implementation calls the - method for each element in the bulk array. - - - A sub class that can better process a bulk array of events should - override this method in addition to . - - - - - - Called before as a precondition. - - - - This method is called by - before the call to the abstract method. 
- - - This method can be overridden in a subclass to extend the checks - made before the event is passed to the method. - - - A subclass should ensure that they delegate this call to - this base class if it is overridden. - - - true if the call to should proceed. - - - - Renders the to a string. - - The event to render. - The event rendered as a string. - - - Helper method to render a to - a string. This appender must have a - set to render the to - a string. - - If there is exception data in the logging event and - the layout does not process the exception, this method - will append the exception text to the rendered string. - - - Where possible use the alternative version of this method - . - That method streams the rendering onto an existing Writer - which can give better performance if the caller already has - a open and ready for writing. - - - - - - Renders the to a string. - - The event to render. - The TextWriter to write the formatted event to - - - Helper method to render a to - a string. This appender must have a - set to render the to - a string. - - If there is exception data in the logging event and - the layout does not process the exception, this method - will append the exception text to the rendered string. - - - Use this method in preference to - where possible. If, however, the caller needs to render the event - to a string then does - provide an efficient mechanism for doing so. - - - - - - The layout of this appender. - - - See for more information. - - - - - The name of this appender. - - - See for more information. - - - - - The level threshold of this appender. - - - - There is no level threshold filtering by default. - - - See for more information. - - - - - - It is assumed and enforced that errorHandler is never null. - - - - It is assumed and enforced that errorHandler is never null. - - - See for more information. - - - - - - The first filter in the filter chain. - - - - Set to null initially. - - - See for more information. - - - - - - The last filter in the filter chain. - - - See for more information. - - - - - Flag indicating if this appender is closed. - - - See for more information. - - - - - The guard prevents an appender from repeatedly calling its own DoAppend method - - - - - StringWriter used to render events - - - - - Gets or sets the threshold of this appender. - - - The threshold of the appender. - - - - All log events with lower level than the threshold level are ignored - by the appender. - - - In configuration files this option is specified by setting the - value of the option to a level - string, such as "DEBUG", "INFO" and so on. - - - - - - Gets or sets the for this appender. - - The of the appender - - - The provides a default - implementation for the property. - - - - - - The filter chain. - - The head of the filter chain filter chain. - - - Returns the head Filter. The Filters are organized in a linked list - and so all Filters on this Appender are available through the result. - - - - - - Gets or sets the for this appender. - - The layout of the appender. - - - See for more information. - - - - - - - Gets or sets the name of this appender. - - The name of the appender. - - - The name uniquely identifies the appender. - - - - - - Tests if this appender requires a to be set. - - - - In the rather exceptional case, where the appender - implementation admits a layout but can also work without it, - then the appender should return true. - - - This default implementation always returns true. 
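A hedged minimal subclass of the skeleton described above: DoAppend performs the threshold and filter checks, then hands the event to this Append override (the in-memory sink is illustrative).

    // Sketch only: a custom appender built on AppenderSkeleton.
    using System.Collections.Generic;
    using log4net.Appender;
    using log4net.Core;

    public class InMemoryAppender : AppenderSkeleton
    {
        private readonly List<string> lines = new List<string>();

        protected override void Append(LoggingEvent loggingEvent)
        {
            // RenderLoggingEvent applies the configured Layout (and exception text).
            lines.Add(RenderLoggingEvent(loggingEvent));
        }

        protected override bool RequiresLayout
        {
            get { return true; }   // matches the default described above
        }
    }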
- - - - true if the appender requires a layout object, otherwise false. - - - - - The default buffer size. - - - The default size of the cyclic buffer used to store events. - This is set to 512 by default. - - - - - Initializes a new instance of the class. - - - - Protected default constructor to allow subclassing. - - - - - - Initializes a new instance of the class. - - the events passed through this appender must be - fixed by the time that they arrive in the derived class' SendBuffer method. - - - Protected constructor to allow subclassing. - - - The should be set if the subclass - expects the events delivered to be fixed even if the - is set to zero, i.e. when no buffering occurs. - - - - - - Flush the currently buffered events - - - - Flushes any events that have been buffered. - - - If the appender is buffering in mode then the contents - of the buffer will NOT be flushed to the appender. - - - - - - Flush the currently buffered events - - set to true to flush the buffer of lossy events - - - Flushes events that have been buffered. If is - false then events will only be flushed if this buffer is non-lossy mode. - - - If the appender is buffering in mode then the contents - of the buffer will only be flushed if is true. - In this case the contents of the buffer will be tested against the - and if triggering will be output. All other buffered - events will be discarded. - - - If is true then the buffer will always - be emptied by calling this method. - - - - - - Initialize the appender based on the options set - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - - - - Close this appender instance. - - - - Close this appender instance. If this appender is marked - as not then the remaining events in - the buffer must be sent when the appender is closed. - - - - - - This method is called by the method. - - the event to log - - - Stores the in the cyclic buffer. - - - The buffer will be sent (i.e. passed to the - method) if one of the following conditions is met: - - - - The cyclic buffer is full and this appender is - marked as not lossy (see ) - - - An is set and - it is triggered for the - specified. - - - - Before the event is stored in the buffer it is fixed - (see ) to ensure that - any data referenced by the event will be valid when the buffer - is processed. - - - - - - Sends the contents of the buffer. - - The first logging event. - The buffer containing the events that need to be send. - - - The subclass must override . - - - - - - Sends the events. - - The events that need to be send. - - - The subclass must override this method to process the buffered events. - - - - - - The size of the cyclic buffer used to hold the logging events. - - - Set to by default. - - - - - The cyclic buffer used to store the logging events. - - - - - The triggering event evaluator that causes the buffer to be sent immediately. - - - The object that is used to determine if an event causes the entire - buffer to be sent immediately. This field can be null, which - indicates that event triggering is not to be done. The evaluator - can be set using the property. If this appender - has the ( property) set to - true then an must be set. 
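A hedged configuration sketch for the buffering behavior just described: a lossy cyclic buffer that flushes its whole history when an ERROR-level event arrives (property and type names per log4net 1.2).

    // Sketch only: buffer 128 events, flush them all on the first ERROR.
    using log4net.Appender;
    using log4net.Core;

    public static class BufferingSketch
    {
        public static void Configure(BufferingAppenderSkeleton appender)
        {
            appender.BufferSize = 128;                            // cyclic buffer size
            appender.Lossy = true;                                // overwrite oldest when full
            appender.Evaluator = new LevelEvaluator(Level.Error); // required in lossy mode
            appender.ActivateOptions();                           // apply the settings
        }
    }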
- - - - - Indicates if the appender should overwrite events in the cyclic buffer - when it becomes full, or if the buffer should be flushed when the - buffer is full. - - - If this field is set to true then an must - be set. - - - - - The triggering event evaluator filters discarded events. - - - The object that is used to determine if an event that is discarded should - really be discarded or if it should be sent to the appenders. - This field can be null, which indicates that all discarded events will - be discarded. - - - - - Value indicating which fields in the event should be fixed - - - By default all fields are fixed - - - - - The events delivered to the subclass must be fixed. - - - - - Gets or sets a value that indicates whether the appender is lossy. - - - true if the appender is lossy, otherwise false. The default is false. - - - - This appender uses a buffer to store logging events before - delivering them. A triggering event causes the whole buffer - to be send to the remote sink. If the buffer overruns before - a triggering event then logging events could be lost. Set - to false to prevent logging events - from being lost. - - If is set to true then an - must be specified. - - - - - Gets or sets the size of the cyclic buffer used to hold the - logging events. - - - The size of the cyclic buffer used to hold the logging events. - - - - The option takes a positive integer - representing the maximum number of logging events to collect in - a cyclic buffer. When the is reached, - oldest events are deleted as new events are added to the - buffer. By default the size of the cyclic buffer is 512 events. - - - If the is set to a value less than - or equal to 1 then no buffering will occur. The logging event - will be delivered synchronously (depending on the - and properties). Otherwise the event will - be buffered. - - - - - - Gets or sets the that causes the - buffer to be sent immediately. - - - The that causes the buffer to be - sent immediately. - - - - The evaluator will be called for each event that is appended to this - appender. If the evaluator triggers then the current buffer will - immediately be sent (see ). - - If is set to true then an - must be specified. - - - - - Gets or sets the value of the to use. - - - The value of the to use. - - - - The evaluator will be called for each event that is discarded from this - appender. If the evaluator triggers then the current buffer will immediately - be sent (see ). - - - - - - Gets or sets a value indicating if only part of the logging event data - should be fixed. - - - true if the appender should only fix part of the logging event - data, otherwise false. The default is false. - - - - Setting this property to true will cause only part of the - event data to be fixed and serialized. This will improve performance. - - - See for more information. - - - - - - Gets or sets a the fields that will be fixed in the event - - - The event fields that will be fixed before the event is buffered - - - - The logging event needs to have certain thread specific values - captured before it can be buffered. See - for details. - - - - - - - Initializes a new instance of the class. - - - Public default constructor to initialize a new instance of this class. - - - - - Initialize the appender based on the options set - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. 
Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - - - - Override the parent method to close the database - - - - Closes the database command and database connection. - - - - - - Inserts the events into the database. - - The events to insert into the database. - - - Insert all the events specified in the - array into the database. - - - - - - Adds a parameter to the command. - - The parameter to add to the command. - - - Adds a parameter to the ordered list of command parameters. - - - - - - Writes the events to the database using the transaction specified. - - The transaction that the events will be executed under. - The array of events to insert into the database. - - - The transaction argument can be null if the appender has been - configured not to use transactions. See - property for more information. - - - - - - Formats the log message into database statement text. - - The event being logged. - - This method can be overridden by subclasses to provide - more control over the format of the database statement. - - - Text that can be passed to a . - - - - - Connects to the database. - - - - - Retrieves the class type of the ADO.NET provider. - - - - Gets the Type of the ADO.NET provider to use to connect to the - database. This method resolves the type specified in the - property. - - - Subclasses can override this method to return a different type - if necessary. - - - The of the ADO.NET provider - - - - Prepares the database command and initialize the parameters. - - - - - Flag to indicate if we are using a command object - - - - Set to true when the appender is to use a prepared - statement or stored procedure to insert into the database. - - - - - - The list of objects. - - - - The list of objects. - - - - - - The security context to use for privileged calls - - - - - The that will be used - to insert logging events into a database. - - - - - The database command. - - - - - Database connection string. - - - - - String type name of the type name. - - - - - The text of the command. - - - - - The command type. - - - - - Indicates whether to use transactions when writing to the database. - - - - - Indicates whether to use transactions when writing to the database. - - - - - Gets or sets the database connection string that is used to connect to - the database. - - - The database connection string used to connect to the database. - - - - The connections string is specific to the connection type. - See for more information. - - - Connection string for MS Access via ODBC: - "DSN=MS Access Database;UID=admin;PWD=;SystemDB=C:\data\System.mdw;SafeTransactions = 0;FIL=MS Access;DriverID = 25;DBQ=C:\data\train33.mdb" - - Another connection string for MS Access via ODBC: - "Driver={Microsoft Access Driver (*.mdb)};DBQ=C:\Work\cvs_root\log4net-1.2\access.mdb;UID=;PWD=;" - - Connection string for MS Access via OLE DB: - "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Work\cvs_root\log4net-1.2\access.mdb;User Id=;Password=;" - - - - - Gets or sets the type name of the connection - that should be created. - - - The type name of the connection. - - - - The type name of the ADO.NET provider to use. - - - The default is to use the OLE DB provider. - - - Use the OLE DB Provider. This is the default value. - System.Data.OleDb.OleDbConnection, System.Data, Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 - - Use the MS SQL Server Provider. 
- System.Data.SqlClient.SqlConnection, System.Data, Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 - - Use the ODBC Provider. - Microsoft.Data.Odbc.OdbcConnection,Microsoft.Data.Odbc,version=1.0.3300.0,publicKeyToken=b77a5c561934e089,culture=neutral - This is an optional package that you can download from - http://msdn.microsoft.com/downloads - search for ODBC .NET Data Provider. - - Use the Oracle Provider. - System.Data.OracleClient.OracleConnection, System.Data.OracleClient, Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 - This is an optional package that you can download from - http://msdn.microsoft.com/downloads - search for .NET Managed Provider for Oracle. - - - - - Gets or sets the command text that is used to insert logging events - into the database. - - - The command text used to insert logging events into the database. - - - - Either the text of the prepared statement or the - name of the stored procedure to execute to write into - the database. - - - The property determines if - this text is a prepared statement or a stored procedure. - - - - - - Gets or sets the command type to execute. - - - The command type to execute. - - - - This value may be either (System.Data.CommandType.Text) to specify - that the is a prepared statement to execute, - or (System.Data.CommandType.StoredProcedure) to specify that the - property is the name of a stored procedure - to execute. - - - The default value is (System.Data.CommandType.Text). - - - - - - Should transactions be used to insert logging events in the database. - - - true if transactions should be used to insert logging events in - the database, otherwise false. The default value is true. - - - - Gets or sets a value that indicates whether transactions should be used - to insert logging events in the database. - - - When set a single transaction will be used to insert the buffered events - into the database. Otherwise each event will be inserted without using - an explicit transaction. - - - - - - Gets or sets the used to call the NetSend method. - - - The used to call the NetSend method. - - - - Unless a specified here for this appender - the is queried for the - security context to use. The default behavior is to use the security context - of the current thread. - - - - - - Should this appender try to reconnect to the database on error. - - - true if the appender should try to reconnect to the database after an - error has occurred, otherwise false. The default value is false, - i.e. not to try to reconnect. - - - - The default behaviour is for the appender not to try to reconnect to the - database if an error occurs. Subsequent logging events are discarded. - - - To force the appender to attempt to reconnect to the database set this - property to true. - - - When the appender attempts to connect to the database there may be a - delay of up to the connection timeout specified in the connection string. - This delay will block the calling application's thread. - Until the connection can be reestablished this potential delay may occur multiple times. - - - - - - Gets or sets the underlying . - - - The underlying . - - - creates a to insert - logging events into a database. Classes deriving from - can use this property to get or set this . Use the - underlying returned from if - you require access beyond that which provides. - - - - - Parameter type used by the . - - - - This class provides the basic database parameter properties - as defined by the interface. 
- - This type can be subclassed to provide database specific - functionality. The two methods that are called externally are - and . - - - - - - Initializes a new instance of the class. - - - Default constructor for the AdoNetAppenderParameter class. - - - - - Prepare the specified database command object. - - The command to prepare. - - - Prepares the database command object by adding - this parameter to its collection of parameters. - - - - - - Renders the logging event and set the parameter value in the command. - - The command containing the parameter. - The event to be rendered. - - - Renders the logging event using this parameters layout - object. Sets the value of the parameter on the command object. - - - - - - The name of this parameter. - - - - - The database type for this parameter. - - - - - Flag to infer type rather than use the DbType - - - - - The precision for this parameter. - - - - - The scale for this parameter. - - - - - The size for this parameter. - - - - - The to use to render the - logging event into an object for this parameter. - - - - - Gets or sets the name of this parameter. - - - The name of this parameter. - - - - The name of this parameter. The parameter name - must match up to a named parameter to the SQL stored procedure - or prepared statement. - - - - - - Gets or sets the database type for this parameter. - - - The database type for this parameter. - - - - The database type for this parameter. This property should - be set to the database type from the - enumeration. See . - - - This property is optional. If not specified the ADO.NET provider - will attempt to infer the type from the value. - - - - - - - Gets or sets the precision for this parameter. - - - The precision for this parameter. - - - - The maximum number of digits used to represent the Value. - - - This property is optional. If not specified the ADO.NET provider - will attempt to infer the precision from the value. - - - - - - - Gets or sets the scale for this parameter. - - - The scale for this parameter. - - - - The number of decimal places to which Value is resolved. - - - This property is optional. If not specified the ADO.NET provider - will attempt to infer the scale from the value. - - - - - - - Gets or sets the size for this parameter. - - - The size for this parameter. - - - - The maximum size, in bytes, of the data within the column. - - - This property is optional. If not specified the ADO.NET provider - will attempt to infer the size from the value. - - - - - - - Gets or sets the to use to - render the logging event into an object for this - parameter. - - - The used to render the - logging event into an object for this parameter. - - - - The that renders the value for this - parameter. - - - The can be used to adapt - any into a - for use in the property. - - - - - - Appends logging events to the terminal using ANSI color escape sequences. - - - - AnsiColorTerminalAppender appends log events to the standard output stream - or the error output stream using a layout specified by the - user. It also allows the color of a specific level of message to be set. - - - This appender expects the terminal to understand the VT100 control set - in order to interpret the color codes. If the terminal or console does not - understand the control codes the behavior is not defined. - - - By default, all output is written to the console's standard output stream. - The property can be set to direct the output to the - error stream. 
- - - NOTE: This appender writes each message to the System.Console.Out or - System.Console.Error that is set at the time the event is appended. - Therefore it is possible to programmatically redirect the output of this appender - (for example NUnit does this to capture program output). While this is the desired - behavior of this appender, it may have security implications in your application. - - - When configuring the ANSI colored terminal appender, a mapping should be - specified to map a logging level to a color. For example: - - - - - - - - - - - - - - - The Level is the standard log4net logging level and ForeColor and BackColor can be any - of the following values: - - Blue - Green - Red - White - Yellow - Purple - Cyan - - These color values cannot be combined together to make new colors. - - - The attributes can be any combination of the following: - - Bright: foreground is brighter - Dim: foreground is dimmer - Underscore: message is underlined - Blink: foreground is blinking (does not work on all terminals) - Reverse: foreground and background are reversed - Hidden: output is hidden - Strikethrough: message has a line through it - - While any of these attributes may be combined together, not all combinations - work well together; for example, setting both the Bright and Dim attributes makes - no sense. - - - Patrick Wagstrom - Nicko Cadell - - - - The to use when writing to the Console - standard output stream. - - - - The to use when writing to the Console - standard output stream. - - - - - - The to use when writing to the Console - standard error output stream. - - - - The to use when writing to the Console - standard error output stream. - - - - - - ANSI code to reset the terminal - - - - - Initializes a new instance of the class. - - - The instance of the class is set up to write - to the standard output stream. - - - - - Add a mapping of level to color - - The mapping to add - - - Add a mapping to this appender. - Each mapping defines the foreground and background colors - for a level. - - - - - - This method is called by the method. - - The event to log. - - - Writes the event to the console. - - - The format of the output will depend on the appender's layout. - - - - - - Initialize the options for this appender - - - - Initialize the level to color mappings set on this appender. - - - - - - Flag to write output to the error stream rather than the standard output stream - - - - - Mapping from level object to color value - - - - - Target is the value of the console output stream. - - - Target is the value of the console output stream. - This is either "Console.Out" or "Console.Error". - - - - Target is the value of the console output stream. - This is either "Console.Out" or "Console.Error". - - - - - - This appender requires a to be set. - - true - - - This appender requires a to be set. - - - - - - The enum of possible display attributes - - - - The following flags can be combined together to - form the ANSI color attributes. - - - - - - - text is bright - - - - - text is dim - - - - - text is underlined - - - - - text is blinking - - - Not all terminals support this attribute - - - - - text and background colors are reversed - - - - - text is hidden - - - - - text is displayed with a strikethrough - - - - - The enum of possible foreground or background color values for - use with the color mapping method - - - - The output can be in one of the following ANSI colors.
- - - - - - color is black - - - - - color is red - - - - - color is green - - - - - color is yellow - - - - - color is blue - - - - - color is magenta - - - - - color is cyan - - - - - color is white - - - - - A class to act as a mapping between the level that a logging call is made at and - the color it should be displayed as. - - - - Defines the mapping between a level and the color it should be displayed in. - - - - - - An entry in the - - - - This is an abstract base class for types that are stored in the - object. - - - Nicko Cadell - - - - Default protected constructor - - - - Default protected constructor - - - - - - Initialize any options defined on this entry - - - - Should be overridden by any classes that need to initialize based on their options - - - - - - The level that is the key for this mapping - - - The that is the key for this mapping - - - - Get or set the that is the key for this - mapping subclass. - - - - - - Initialize the options for the object - - - - Combine the and together - and append the attributes. - - - - - - The mapped foreground color for the specified level - - - - Required property. - The mapped foreground color for the specified level - - - - - - The mapped background color for the specified level - - - - Required property. - The mapped background color for the specified level - - - - - - The color attributes for the specified level - - - - Required property. - The color attributes for the specified level - - - - - - The combined , and - suitable for setting the ANSI terminal color. - - - - - A strongly-typed collection of objects. - - Nicko Cadell - - - - Creates a read-only wrapper for an AppenderCollection instance. - - list to create a read-only wrapper around - - An AppenderCollection wrapper that is read-only. - - - - - An empty read-only static AppenderCollection - - - - - Initializes a new instance of the AppenderCollection class - that is empty and has the default initial capacity. - - - - - Initializes a new instance of the AppenderCollection class - that has the specified initial capacity. - - - The number of elements that the new AppenderCollection is initially capable of storing. - - - - - Initializes a new instance of the AppenderCollection class - that contains elements copied from the specified AppenderCollection. - - The AppenderCollection whose elements are copied to the new collection. - - - - Initializes a new instance of the AppenderCollection class - that contains elements copied from the specified array. - - The array whose elements are copied to the new list. - - - - Initializes a new instance of the AppenderCollection class - that contains elements copied from the specified collection. - - The collection whose elements are copied to the new list. - - - - Allow subclasses to avoid our default constructors - - - - - - - Copies the entire AppenderCollection to a one-dimensional - array. - - The one-dimensional array to copy to. - - - - Copies the entire AppenderCollection to a one-dimensional - array, starting at the specified index of the target array. - - The one-dimensional array to copy to. - The zero-based index in at which copying begins. - - - - Adds a to the end of the AppenderCollection. - - The to be added to the end of the AppenderCollection. - The index at which the value has been added. - - - - Removes all elements from the AppenderCollection. - - - - - Creates a shallow copy of the . - - A new with a shallow copy of the collection data. - - - - Determines whether a given is in the AppenderCollection.
- - The to check for. - true if is found in the AppenderCollection; otherwise, false. - - - - Returns the zero-based index of the first occurrence of a - in the AppenderCollection. - - The to locate in the AppenderCollection. - - The zero-based index of the first occurrence of - in the entire AppenderCollection, if found; otherwise, -1. - - - - - Inserts an element into the AppenderCollection at the specified index. - - The zero-based index at which should be inserted. - The to insert. - - is less than zero - -or- - is equal to or greater than . - - - - - Removes the first occurrence of a specific from the AppenderCollection. - - The to remove from the AppenderCollection. - - The specified was not found in the AppenderCollection. - - - - - Removes the element at the specified index of the AppenderCollection. - - The zero-based index of the element to remove. - - is less than zero - -or- - is equal to or greater than . - - - - - Returns an enumerator that can iterate through the AppenderCollection. - - An for the entire AppenderCollection. - - - - Adds the elements of another AppenderCollection to the current AppenderCollection. - - The AppenderCollection whose elements should be added to the end of the current AppenderCollection. - The new of the AppenderCollection. - - - - Adds the elements of a array to the current AppenderCollection. - - The array whose elements should be added to the end of the AppenderCollection. - The new of the AppenderCollection. - - - - Adds the elements of a collection to the current AppenderCollection. - - The collection whose elements should be added to the end of the AppenderCollection. - The new of the AppenderCollection. - - - - Sets the capacity to the actual number of elements. - - - - - Return the collection elements as an array - - the array - - - - is less than zero - -or- - is equal to or greater than . - - - - - is less than zero - -or- - is equal to or greater than . - - - - - Gets the number of elements actually contained in the AppenderCollection. - - - - - Gets a value indicating whether access to the collection is synchronized (thread-safe). - - true if access to the ICollection is synchronized (thread-safe); otherwise, false. - - - - Gets an object that can be used to synchronize access to the collection. - - - - - Gets or sets the at the specified index. - - The zero-based index of the element to get or set. - - is less than zero - -or- - is equal to or greater than . - - - - - Gets a value indicating whether the collection has a fixed size. - - true if the collection has a fixed size; otherwise, false. The default is false - - - - Gets a value indicating whether the IList is read-only. - - true if the collection is read-only; otherwise, false. The default is false - - - - Gets or sets the number of elements the AppenderCollection can contain. - - - - - Supports type-safe iteration over a . - - - - - - Advances the enumerator to the next element in the collection. - - - true if the enumerator was successfully advanced to the next element; - false if the enumerator has passed the end of the collection. - - - The collection was modified after the enumerator was created. - - - - - Sets the enumerator to its initial position, before the first element in the collection. - - - - - Gets the current element in the collection. - - - - - Type visible only to our subclasses - Used to access protected constructor - - - - - - A value - - - - - Supports simple iteration over a . - - - - - - Initializes a new instance of the Enumerator class. 
- - - - - - Advances the enumerator to the next element in the collection. - - - true if the enumerator was successfully advanced to the next element; - false if the enumerator has passed the end of the collection. - - - The collection was modified after the enumerator was created. - - - - - Sets the enumerator to its initial position, before the first element in the collection. - - - - - Gets the current element in the collection. - - - - - - - - - Appends log events to the ASP.NET system. - - - - - Diagnostic information and tracing messages that you specify are appended to the output - of the page that is sent to the requesting browser. Optionally, you can view this information - from a separate trace viewer (Trace.axd) that displays trace information for every page in a - given application. - - - Trace statements are processed and displayed only when tracing is enabled. You can control - whether tracing is displayed to a page, to the trace viewer, or both. - - - The logging event is passed to the or - method depending on the level of the logging event. - - - Nicko Cadell - Gert Driesen - - - - Initializes a new instance of the class. - - - - Default constructor. - - - - - - Write the logging event to the ASP.NET trace - - the event to log - - - Write the logging event to the ASP.NET trace - HttpContext.Current.Trace - (). - - - - - - This appender requires a to be set. - - true - - - This appender requires a to be set. - - - - - - Buffers events and then forwards them to attached appenders. - - - - The events are buffered in this appender until conditions are - met to allow the appender to deliver the events to the attached - appenders. See for the - conditions that cause the buffer to be sent. - - The forwarding appender can be used to specify different - thresholds and filters for the same appender at different locations - within the hierarchy. - - - Nicko Cadell - Gert Driesen - - - - Interface for attaching appenders to objects. - - - - Interface for attaching, removing and retrieving appenders. - - - Nicko Cadell - Gert Driesen - - - - Attaches an appender. - - The appender to add. - - - Add the specified appender. The implementation may - choose to allow or deny duplicate appenders. - - - - - - Gets an attached appender with the specified name. - - The name of the appender to get. - - The appender with the name specified, or null if no appender with the - specified name is found. - - - - Returns an attached appender with the specified. - If no appender with the specified name is found null will be - returned. - - - - - - Removes all attached appenders. - - - - Removes and closes all attached appenders - - - - - - Removes the specified appender from the list of attached appenders. - - The appender to remove. - The appender removed from the list - - - The appender removed is not closed. - If you are discarding the appender you must call - on the appender removed. - - - - - - Removes the appender with the specified name from the list of appenders. - - The name of the appender to remove. - The appender removed from the list - - - The appender removed is not closed. - If you are discarding the appender you must call - on the appender removed. - - - - - - Gets all attached appenders. - - - A collection of attached appenders. - - - - Gets a collection of attached appenders. - If there are no attached appenders the - implementation should return an empty - collection rather than null. - - - - - - Initializes a new instance of the class. - - - - Default constructor. 
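A minimal sketch of the buffering appender described above, assuming the log4net 1.2 API; the buffer size is arbitrary and the attached ConsoleAppender stands in for any appender:

    using log4net.Appender;
    using log4net.Config;
    using log4net.Layout;

    // Sketch: buffer events and forward them to an attached appender in batches.
    var console = new ConsoleAppender();
    console.Layout = new PatternLayout("%-5level %logger - %message%newline");
    console.ActivateOptions();

    var buffering = new BufferingForwardingAppender();
    buffering.BufferSize = 128;      // deliver once 128 events have been buffered
    buffering.AddAppender(console);  // events are forwarded to attached appenders
    buffering.ActivateOptions();

    BasicConfigurator.Configure(buffering);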
- - - - - - Closes the appender and releases resources. - - - - Releases any resources allocated within the appender such as file handles, - network connections, etc. - - - It is a programming error to append to a closed appender. - - - - - - Send the events. - - The events that need to be sent. - - - Forwards the events to the attached appenders. - - - - - - Adds an to the list of appenders of this - instance. - - The to add to this appender. - - - If the specified is already in the list of - appenders, then it won't be added again. - - - - - - Looks for the appender with the specified name. - - The name of the appender to look up. - - The appender with the specified name, or null. - - - - Get the named appender attached to this buffering appender. - - - - - - Removes all previously added appenders from this appender. - - - - This is useful when re-reading configuration information. - - - - - - Removes the specified appender from the list of appenders. - - The appender to remove. - The appender removed from the list - - The appender removed is not closed. - If you are discarding the appender you must call - on the appender removed. - - - - - Removes the appender with the specified name from the list of appenders. - - The name of the appender to remove. - The appender removed from the list - - The appender removed is not closed. - If you are discarding the appender you must call - on the appender removed. - - - - - Implementation of the interface - - - - - Gets the appenders contained in this appender as an - . - - - If no appenders can be found, then an - is returned. - - - A collection of the appenders in this appender. - - - - - Appends logging events to the console. - - - - ConsoleAppender appends log events to the standard output stream - or the error output stream using a layout specified by the - user. - - - By default, all output is written to the console's standard output stream. - The property can be set to direct the output to the - error stream. - - - NOTE: This appender writes each message to the System.Console.Out or - System.Console.Error that is set at the time the event is appended. - Therefore it is possible to programmatically redirect the output of this appender - (for example NUnit does this to capture program output). While this is the desired - behavior of this appender, it may have security implications in your application. - - - Nicko Cadell - Gert Driesen - - - - The to use when writing to the Console - standard output stream. - - - - The to use when writing to the Console - standard output stream. - - - - - - The to use when writing to the Console - standard error output stream. - - - - The to use when writing to the Console - standard error output stream. - - - - - - Initializes a new instance of the class. - - - The instance of the class is set up to write - to the standard output stream. - - - - - Initializes a new instance of the class - with the specified layout. - - the layout to use for this appender - - The instance of the class is set up to write - to the standard output stream. - - - - - Initializes a new instance of the class - with the specified layout. - - the layout to use for this appender - flag set to true to write to the console error stream - - When is set to true, output is written to - the standard error output stream. Otherwise, output is written to the standard - output stream. - - - - - This method is called by the method. - - The event to log. - - - Writes the event to the console.
- - - The format of the output will depend on the appender's layout. - - - - - - Target is the value of the console output stream. - This is either "Console.Out" or "Console.Error". - - - Target is the value of the console output stream. - This is either "Console.Out" or "Console.Error". - - - - Target is the value of the console output stream. - This is either "Console.Out" or "Console.Error". - - - - - - This appender requires a to be set. - - true - - - This appender requires a to be set. - - - - - - Appends log events to the system. - - - - The application configuration file can be used to control what listeners - are actually used. See the MSDN documentation for the - class for details on configuring the - debug system. - - - Events are written using the - method. The event's logger name is passed as the value for the category name to the Write method. - - - Nicko Cadell - - - - Initializes a new instance of the . - - - - Default constructor. - - - - - - Initializes a new instance of the - with a specified layout. - - The layout to use with this appender. - - - Obsolete constructor. - - - - - - Writes the logging event to the system. - - The event to log. - - - Writes the logging event to the system. - If is true then the - is called. - - - - - - Immediate flush means that the underlying writer or output stream - will be flushed at the end of each append operation. - - - - Immediate flush is slower but ensures that each append request is - actually written. If is set to - false, then there is a good chance that the last few - log events are not actually written to persistent media if and - when the application crashes. - - - The default value is true. - - - - - Gets or sets a value that indicates whether the appender will - flush at the end of each write. - - - The default behavior is to flush at the end of each - write. If the option is set to false, then the underlying - stream can defer writing to the physical medium to a later time. - - - Avoiding the flush operation at the end of each append results - in a performance gain of 10 to 20 percent. However, there is a safety - trade-off involved in skipping flushing. Indeed, when flushing is - skipped, then it is likely that the last few log events will not - be recorded on disk when the application exits. This is a high - price to pay even for a 20% performance gain. - - - - - - This appender requires a to be set. - - true - - - This appender requires a to be set. - - - - - - Writes events to the system event log. - - - - The EventID of the event log entry can be - set using the EventLogEventID property () - on the . - - - There is a limit of 32K characters for an event log message. - - - When configuring the EventLogAppender a mapping can be - specified to map a logging level to an event log entry type. For example: - - - <mapping> - <level value="ERROR" /> - <eventLogEntryType value="Error" /> - </mapping> - <mapping> - <level value="DEBUG" /> - <eventLogEntryType value="Information" /> - </mapping> - - - The Level is the standard log4net logging level and eventLogEntryType can be any value - from the enum, i.e.: - - Error: an error event - Warning: a warning event - Information: an informational event - - - - Aspi Havewala - Douglas de la Torre - Nicko Cadell - Gert Driesen - Thomas Voss - - - - Initializes a new instance of the class. - - - - Default constructor. - - - - - - Initializes a new instance of the class - with the specified . - - The to use with this appender. - - - Obsolete constructor.
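The programmatic equivalent of the <mapping> configuration shown above might look like the following C# sketch. It assumes the log4net 1.2 EventLogAppender API; the application name is hypothetical, and note that creating a new event source normally requires administrative rights:

    using System.Diagnostics;
    using log4net.Appender;
    using log4net.Config;
    using log4net.Core;
    using log4net.Layout;

    // Sketch: write ERROR events to the Application log as Error entries.
    var appender = new EventLogAppender();
    appender.ApplicationName = "MyApp";  // hypothetical event source name
    appender.Layout = new PatternLayout("%message");

    var errorMapping = new EventLogAppender.Level2EventLogEntryType();
    errorMapping.Level = Level.Error;
    errorMapping.EventLogEntryType = EventLogEntryType.Error;
    appender.AddMapping(errorMapping);

    appender.ActivateOptions();  // prepares the event source
    BasicConfigurator.Configure(appender);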
- - - - - - Add a mapping of level to - done by the config file - - The mapping to add - - - Add a mapping to this appender. - Each mapping defines the event log entry type for a level. - - - - - - Initialize the appender based on the options set - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - - - - Create an event log source - - - Uses different API calls under NET_2_0 - - - - - This method is called by the - method. - - the event to log - - Writes the event to the system event log using the - . - - If the event has an EventID property (see ) - set then this integer will be used as the event log event id. - - - There is a limit of 32K characters for an event log message - - - - - - Get the equivalent for a - - the Level to convert to an EventLogEntryType - The equivalent for a - - Because there are fewer applicable - values to use in logging levels than there are in the - this is a one way mapping. There is - a loss of information during the conversion. - - - - - The log name is the section in the event logs where the messages - are stored. - - - - - Name of the application to use when logging. This appears in the - application column of the event log named by . - - - - - The name of the machine which holds the event log. This is - currently only allowed to be '.' i.e. the current machine. - - - - - Mapping from level object to EventLogEntryType - - - - - The security context to use for privileged calls - - - - - The name of the log where messages will be stored. - - - The string name of the log where messages will be stored. - - - This is the name of the log as it appears in the Event Viewer - tree. The default value is to log into the Application - log, this is where most applications write their events. However - if you need a separate log for your application (or applications) - then you should set the appropriately. - This should not be used to distinguish your event log messages - from those of other applications, the - property should be used to distinguish events. This property should be - used to group together events into a single log. - - - - - - Property used to set the Application name. This appears in the - event logs when logging. - - - The string used to distinguish events from different sources. - - - Sets the event log source property. - - - - - This property is used to return the name of the computer to use - when accessing the event logs. Currently, this is the current - computer, denoted by a dot "." - - - The string name of the machine holding the event log that - will be logged into. - - - This property cannot be changed. It is currently set to '.' - i.e. the local machine. This may be changed in future. - - - - - Gets or sets the used to write to the EventLog. - - - The used to write to the EventLog. - - - - The system security context used to write to the EventLog. - - - Unless a specified here for this appender - the is queried for the - security context to use. The default behavior is to use the security context - of the current thread. - - - - - - This appender requires a to be set. - - true - - - This appender requires a to be set. - - - - - - A class to act as a mapping between the level that a logging call is made at and - the color it should be displayed as. 
- - - - Defines the mapping between a level and its event log entry type. - - - - - - The for this entry - - - - Required property. - The for this entry - - - - - - Appends logging events to a file. - - - - Logging events are sent to the file specified by - the property. - - - The file can be opened in either append or overwrite mode - by specifying the property. - If the file path is relative it is taken as relative from - the application base directory. The file encoding can be - specified by setting the property. - - - The layout's and - values will be written each time the file is opened and closed - respectively. If the property is - then the file may contain multiple copies of the header and footer. - - - This appender will first try to open the file for writing when - is called. This will typically be during configuration. - If the file cannot be opened for writing the appender will attempt - to open the file again each time a message is logged to the appender. - If the file cannot be opened for writing when a message is logged then - the message will be discarded by this appender. - - - The supports pluggable file locking models via - the property. - The default behavior, implemented by - is to obtain an exclusive write lock on the file until this appender is closed. - The alternative model, , only holds a - write lock while the appender is writing a logging event. - - - Nicko Cadell - Gert Driesen - Rodrigo B. de Oliveira - Douglas de la Torre - Niall Daley - - - - Sends logging events to a . - - - - An Appender that writes to a . - - - This appender may be used stand alone if initialized with an appropriate - writer, however it is typically used as a base class for an appender that - can open a to write to. - - - Nicko Cadell - Gert Driesen - Douglas de la Torre - - - - Initializes a new instance of the class. - - - - Default constructor. - - - - - - Initializes a new instance of the class and - sets the output destination to a new initialized - with the specified . - - The layout to use with this appender. - The to output to. - - - Obsolete constructor. - - - - - - Initializes a new instance of the class and sets - the output destination to the specified . - - The layout to use with this appender - The to output to - - The must have been previously opened. - - - - Obsolete constructor. - - - - - - This method determines if there is a sense in attempting to append. - - - - This method checked if an output target has been set and if a - layout has been set. - - - false if any of the preconditions fail. - - - - This method is called by the - method. - - The event to log. - - - Writes a log statement to the output stream if the output stream exists - and is writable. - - - The format of the output will depend on the appender's layout. - - - - - - This method is called by the - method. - - The array of events to log. - - - This method writes all the bulk logged events to the output writer - before flushing the stream. - - - - - - Close this appender instance. The underlying stream or writer is also closed. - - - Closed appenders cannot be reused. - - - - - Writes the footer and closes the underlying . - - - - Writes the footer and closes the underlying . - - - - - - Closes the underlying . - - - - Closes the underlying . - - - - - - Clears internal references to the underlying - and other variables. - - - - Subclasses can override this method for an alternate closing behavior. - - - - - - Writes a footer as produced by the embedded layout's property. 
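Because the TextWriterAppender just introduced can be used stand-alone when given an already-opened writer, a short C# sketch may help; the StringWriter target and the pattern are illustrative, assuming the log4net 1.2 API:

    using System.IO;
    using log4net;
    using log4net.Appender;
    using log4net.Config;
    using log4net.Layout;

    // Sketch: capture formatted log output in a StringWriter.
    var output = new StringWriter();
    var appender = new TextWriterAppender();
    appender.Layout = new PatternLayout("%-5level - %message%newline");
    appender.Writer = output;  // must be open and writable
    appender.ActivateOptions();
    BasicConfigurator.Configure(appender);

    LogManager.GetLogger("demo").Info("hello");
    System.Console.WriteLine(output.ToString());  // e.g. "INFO  - hello"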
- - - - Writes a footer as produced by the embedded layout's property. - - - - - - Writes a header produced by the embedded layout's property. - - - - Writes a header produced by the embedded layout's property. - - - - - - Called to allow a subclass to lazily initialize the writer - - - - This method is called when an event is logged and the or - have not been set. This allows a subclass to - attempt to initialize the writer multiple times. - - - - - - This is the where logging events - will be written to. - - - - - Immediate flush means that the underlying - or output stream will be flushed at the end of each append operation. - - - - Immediate flush is slower but ensures that each append request is - actually written. If is set to - false, then there is a good chance that the last few - logging events are not actually persisted if and when the application - crashes. - - - The default value is true. - - - - - - Gets or set whether the appender will flush at the end - of each append operation. - - - - The default behavior is to flush at the end of each - append operation. - - - If this option is set to false, then the underlying - stream can defer persisting the logging event to a later - time. - - - - Avoiding the flush operation at the end of each append results in - a performance gain of 10 to 20 percent. However, there is safety - trade-off involved in skipping flushing. Indeed, when flushing is - skipped, then it is likely that the last few log events will not - be recorded on disk when the application exits. This is a high - price to pay even for a 20% performance gain. - - - - - Sets the where the log output will go. - - - - The specified must be open and writable. - - - The will be closed when the appender - instance is closed. - - - Note: Logging to an unopened will fail. - - - - - - Gets or set the and the underlying - , if any, for this appender. - - - The for this appender. - - - - - This appender requires a to be set. - - true - - - This appender requires a to be set. - - - - - - Gets or sets the where logging events - will be written to. - - - The where logging events are written. - - - - This is the where logging events - will be written to. - - - - - - Default constructor - - - - Default constructor - - - - - - Construct a new appender using the layout, file and append mode. - - the layout to use with this appender - the full path to the file to write to - flag to indicate if the file should be appended to - - - Obsolete constructor. - - - - - - Construct a new appender using the layout and file specified. - The file will be appended to. - - the layout to use with this appender - the full path to the file to write to - - - Obsolete constructor. - - - - - - Activate the options on the file appender. - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - This will cause the file to be opened. - - - - - - Closes any previously opened file and calls the parent's . - - - - Resets the filename and the file stream. - - - - - - Called to initialize the file writer - - - - Will be called for each logged message until the file is - successfully opened. - - - - - - This method is called by the - method. - - The event to log. 
- - - Writes a log statement to the output stream if the output stream exists - and is writable. - - - The format of the output will depend on the appender's layout. - - - - - - This method is called by the - method. - - The array of events to log. - - - Acquires the output file locks once before writing all the events to - the stream. - - - - - - Writes a footer as produced by the embedded layout's property. - - - - Writes a footer as produced by the embedded layout's property. - - - - - - Writes a header produced by the embedded layout's property. - - - - Writes a header produced by the embedded layout's property. - - - - - - Closes the underlying . - - - - Closes the underlying . - - - - - - Closes the previously opened file. - - - - Writes the to the file and then - closes the file. - - - - - - Sets and opens the file where the log output will go. The specified file must be writable. - - The path to the log file. Must be a fully qualified path. - If true will append to fileName. Otherwise will truncate fileName - - - Calls but guarantees not to throw an exception. - Errors are passed to the . - - - - - - Sets and opens the file where the log output will go. The specified file must be writable. - - The path to the log file. Must be a fully qualified path. - If true will append to fileName. Otherwise will truncate fileName - - - If there was already an opened file, then the previous file - is closed first. - - - This method will ensure that the directory structure - for the specified exists. - - - - - - Sets the quiet writer used for file output - - the file stream that has been opened for writing - - - This implementation of creates a - over the and passes it to the - method. - - - This method can be overridden by sub classes that want to wrap the - in some way, for example to encrypt the output - data using a System.Security.Cryptography.CryptoStream. - - - - - - Sets the quiet writer being used. - - the writer over the file stream that has been opened for writing - - - This method can be overridden by sub classes that want to - wrap the in some way. - - - - - - Convert a path into a fully qualified path. - - The path to convert. - The fully qualified path. - - - Converts the path specified to a fully - qualified path. If the path is relative it is - taken as relative from the application base - directory. - - - - - - Flag to indicate if we should append to the file - or overwrite the file. The default is to append. - - - - - The name of the log file. - - - - - The encoding to use for the file stream. - - - - - The security context to use for privileged calls - - - - - The stream to log to. Has added locking semantics - - - - - The locking model to use - - - - - Gets or sets the path to the file that logging will be written to. - - - The path to the file that logging will be written to. - - - - If the path is relative it is taken as relative from - the application base directory. - - - - - - Gets or sets a flag that indicates whether the file should be - appended to or overwritten. - - - Indicates whether the file should be appended to or overwritten. - - - - If the value is set to false then the file will be overwritten, if - it is set to true then the file will be appended to. - - The default value is true. - - - - - Gets or sets used to write to the file. - - - The used to write to the file. - - - - The default encoding set is - which is the encoding for the system's current ANSI code page. - - - - - - Gets or sets the used to write to the file. - - - The used to write to the file. 
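Pulling the file-related properties together, here is a minimal C# sketch, assuming the log4net 1.2 FileAppender API; the path is illustrative, and MinimalLock (described below) is chosen only to demonstrate the LockingModel property:

    using log4net.Appender;
    using log4net.Config;
    using log4net.Layout;

    // Sketch: append to a file, holding the lock only while writing each event.
    var appender = new FileAppender();
    appender.File = "logs/app.log";  // relative to the application base directory
    appender.AppendToFile = true;
    appender.LockingModel = new FileAppender.MinimalLock();
    appender.Layout = new PatternLayout("%date %-5level %message%newline");
    appender.ActivateOptions();      // opens the file for writing
    BasicConfigurator.Configure(appender);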
- - - - Unless a specified here for this appender - the is queried for the - security context to use. The default behavior is to use the security context - of the current thread. - - - - - - Gets or sets the used to handle locking of the file. - - - The used to lock the file. - - - - Gets or sets the used to handle locking of the file. - - - There are two built-in locking models, and . - The former locks the file from the start of logging to the end and the - latter locks the file only for the minimal amount of time when logging each message. - - - The default locking model is the . - - - - - - Write-only that uses the - to manage access to an underlying resource. - - - - - True asynchronous writes are not supported; the implementation forces a synchronous write. - - - - - Exception base type for log4net. - - - - This type extends . It - does not add any new functionality but does differentiate the - type of exception being thrown. - - - Nicko Cadell - Gert Driesen - - - - Constructor - - - - Initializes a new instance of the class. - - - - - - Constructor - - A message to include with the exception. - - - Initializes a new instance of the class with - the specified message. - - - - - - Constructor - - A message to include with the exception. - A nested exception to include. - - - Initializes a new instance of the class - with the specified message and inner exception. - - - - - - Serialization constructor - - The that holds the serialized object data about the exception being thrown. - The that contains contextual information about the source or destination. - - - Initializes a new instance of the class - with serialized data. - - - - - - Locking model base class - - - - Base class for the locking models available to the derived loggers. - - - - - - Open the output file - - The filename to use - Whether to append to the file, or overwrite - The encoding to use - - - Open the file specified and prepare for logging. - No writes will be made until is called. - Must be called before any calls to , - and . - - - - - - Close the file - - - - Close the file. No further writes will be made. - - - - - - Acquire the lock on the file - - A stream that is ready to be written to. - - - Acquire the lock on the file in preparation for writing to it. - Return a stream pointing to the file. - must be called to release the lock on the output file. - - - - - - Release the lock on the file - - - - Release the lock on the file. No further writes will be made to the - stream until is called again. - - - - - - Gets or sets the for this LockingModel - - - The for this LockingModel - - - - The file appender this locking model is attached to and working on - behalf of. - - - The file appender is used to locate the security context and the error handler to use. - - - The value of this property will be set before is - called. - - - - - - Hold an exclusive lock on the output file - - - - Open the file once for writing and hold it open until is called. - Maintains an exclusive lock on the file during this time. - - - - - - Open the file specified and prepare for logging. - - The filename to use - Whether to append to the file, or overwrite - The encoding to use - - - Open the file specified and prepare for logging. - No writes will be made until is called. - Must be called before any calls to , - and . - - - - - - Close the file - - - - Close the file. No further writes will be made. - - - - - - Acquire the lock on the file - - A stream that is ready to be written to. - - - Does nothing.
The lock is already taken - - - - - - Release the lock on the file - - - - Does nothing. The lock will be released when the file is closed. - - - - - - Acquires the file lock for each write - - - - Opens the file once for each / cycle, - thus holding the lock for the minimal amount of time. This method of locking - is considerably slower than but allows - other processes to move/delete the log file whilst logging continues. - - - - - - Prepares to open the file when the first message is logged. - - The filename to use - Whether to append to the file, or overwrite - The encoding to use - - - Open the file specified and prepare for logging. - No writes will be made until is called. - Must be called before any calls to , - and . - - - - - - Close the file - - - - Close the file. No further writes will be made. - - - - - - Acquire the lock on the file - - A stream that is ready to be written to. - - - Acquire the lock on the file in preparation for writing to it. - Return a stream pointing to the file. - must be called to release the lock on the output file. - - - - - - Release the lock on the file - - - - Release the lock on the file. No further writes will be made to the - stream until is called again. - - - - - - This appender forwards logging events to attached appenders. - - - - The forwarding appender can be used to specify different thresholds - and filters for the same appender at different locations within the hierarchy. - - - Nicko Cadell - Gert Driesen - - - - Initializes a new instance of the class. - - - - Default constructor. - - - - - - Closes the appender and releases resources. - - - - Releases any resources allocated within the appender such as file handles, - network connections, etc. - - - It is a programming error to append to a closed appender. - - - - - - Forward the logging event to the attached appenders - - The event to log. - - - Delivers the logging event to all the attached appenders. - - - - - - Forward the logging events to the attached appenders - - The array of events to log. - - - Delivers the logging events to all the attached appenders. - - - - - - Adds an to the list of appenders of this - instance. - - The to add to this appender. - - - If the specified is already in the list of - appenders, then it won't be added again. - - - - - - Looks for the appender with the specified name. - - The name of the appender to lookup. - - The appender with the specified name, or null. - - - - Get the named appender attached to this appender. - - - - - - Removes all previously added appenders from this appender. - - - - This is useful when re-reading configuration information. - - - - - - Removes the specified appender from the list of appenders. - - The appender to remove. - The appender removed from the list - - The appender removed is not closed. - If you are discarding the appender you must call - on the appender removed. - - - - - Removes the appender with the specified name from the list of appenders. - - The name of the appender to remove. - The appender removed from the list - - The appender removed is not closed. - If you are discarding the appender you must call - on the appender removed. - - - - - Implementation of the interface - - - - - Gets the appenders contained in this appender as an - . - - - If no appenders can be found, then an - is returned. - - - A collection of the appenders in this appender. - - - - - Logs events to a local syslog service. - - - - This appender uses the POSIX libc library functions openlog, syslog, and closelog. 
- If these functions are not available on the local system, then this appender will not work! - - - The functions openlog, syslog, and closelog are specified in the SUSv2 and - POSIX 1003.1-2001 standards. These are used to log messages to the local syslog service. - - - This appender talks to a local syslog service. If you need to log to a remote syslog - daemon and you cannot configure your local syslog service to do this, you may be - able to use the to log via UDP. - - - Syslog messages must have a facility and a severity. The severity - is derived from the Level of the logging event. - The facility must be chosen from the set of defined syslog - values. The facilities list is predefined - and cannot be extended. - - - An identifier is specified with each log message. This can be specified - by setting the property. The identity (also known - as the tag) must not contain white space. The default value for the - identity is the application name (from ). - - - Rob Lyon - Nicko Cadell - - - - Initializes a new instance of the class. - - - This instance of the class is set up to write - to a local syslog service. - - - - - Add a mapping of level to severity - - The mapping to add - - - Adds a to this appender. - - - - - - Initialize the appender based on the options set. - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - - - - This method is called by the method. - - The event to log. - - - Writes the event to the local syslog service. - - - The format of the output will depend on the appender's layout. - - - - - - Close the syslog when the appender is closed - - - - Close the syslog when the appender is closed - - - - - - Translates a log4net level to a syslog severity. - - A log4net level. - A syslog severity. - - - Translates a log4net level to a syslog severity. - - - - - - Generate a syslog priority. - - The syslog facility. - The syslog severity. - A syslog priority. - - - - The facility. The default facility is . - - - - - The message identity - - - - - Marshaled handle to the identity string. We have to hold on to the - string as the openlog and syslog APIs just hold the - pointer to the ident and dereference it for each log message. - - - - - Mapping from level object to syslog severity - - - - - Open connection to system logger. - - - - - Generate a log message. - - - - The libc syslog method takes a format string and a variable argument list similar - to the classic printf function. As this type of vararg list is not supported - by C# we need to specify the arguments explicitly. Here we have specified the - format string with a single message argument. The caller must set the format - string to "%s". - - - - - - Close descriptor used to write to system logger. - - - - - Message identity - - - - An identifier is specified with each log message. This can be specified - by setting the property. The identity (also known - as the tag) must not contain white space. The default value for the - identity is the application name (from ). - - - - - - Syslog facility - - - Set to one of the values. The list of - facilities is predefined and cannot be extended. The default value - is . - - - - - This appender requires a to be set. - - true - - - This appender requires a to be set.
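A short C# sketch of the appender just described, assuming the log4net 1.2 LocalSyslogAppender API on a POSIX system with libc available; the identity is hypothetical and the facility member name (Local0) is an assumption:

    using log4net.Appender;
    using log4net.Config;
    using log4net.Layout;

    // Sketch: log to the local syslog service via openlog/syslog/closelog.
    var appender = new LocalSyslogAppender();
    appender.Identity = "myapp";  // the syslog tag; must not contain white space
    appender.Facility = LocalSyslogAppender.SyslogFacility.Local0;  // assumed member name
    appender.Layout = new PatternLayout("%level - %message");
    appender.ActivateOptions();   // opens the connection to the system logger
    BasicConfigurator.Configure(appender);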
- - - - - - syslog severities - - - - The log4net Level maps to a syslog severity using the - method and the - class. The severity is set on . - - - - - - system is unusable - - - - - action must be taken immediately - - - - - critical conditions - - - - - error conditions - - - - - warning conditions - - - - - normal but significant condition - - - - - informational - - - - - debug-level messages - - - - - syslog facilities - - - - The syslog facility defines which subsystem the logging comes from. - This is set on the property. - - - - - - kernel messages - - - - - random user-level messages - - - - - mail system - - - - - system daemons - - - - - security/authorization messages - - - - - messages generated internally by syslogd - - - - - line printer subsystem - - - - - network news subsystem - - - - - UUCP subsystem - - - - - clock (cron/at) daemon - - - - - security/authorization messages (private) - - - - - ftp daemon - - - - - NTP subsystem - - - - - log audit - - - - - log alert - - - - - clock daemon - - - - - reserved for local use - - - - - reserved for local use - - - - - reserved for local use - - - - - reserved for local use - - - - - reserved for local use - - - - - reserved for local use - - - - - reserved for local use - - - - - reserved for local use - - - - - A class to act as a mapping between the level that a logging call is made at and - the syslog severity that it should be logged at. - - - - A class to act as a mapping between the level that a logging call is made at and - the syslog severity that it should be logged at. - - - - - - The mapped syslog severity for the specified level - - - - Required property. - The mapped syslog severity for the specified level - - - - - - Stores logging events in an array. - - - - The memory appender stores all the logging events - that are appended in an in-memory array. - - - Use the method to get - the current list of events that have been appended. - - - Use the method to clear the - current list of events. - - - Julian Biddle - Nicko Cadell - Gert Driesen - - - - Initializes a new instance of the class. - - - - Default constructor. - - - - - - Gets the events that have been logged. - - The events that have been logged - - - Gets the events that have been logged. - - - - - - This method is called by the method. - - the event to log - - Stores the in the events list. - - - - - Clear the list of events - - - Clear the list of events - - - - - The list of events that have been appended. - - - - - Value indicating which fields in the event should be fixed - - - By default, all fields are fixed - - - - - Gets or sets a value indicating whether only part of the logging event - data should be fixed. - - - true if the appender should only fix part of the logging event - data, otherwise false. The default is false. - - - - Setting this property to true will cause only part of the event - data to be fixed and stored in the appender, thereby improving performance. - - - See for more information. - - - - - - Gets or sets the fields that will be fixed in the event - - - - The logging event needs to have certain thread-specific values - captured before it can be buffered. See - for details. - - - - - - Logs events to a remote syslog daemon. - - - - The BSD syslog protocol is used to remotely log to - a syslog daemon. The syslogd listens for messages - on UDP port 514. - - - The syslog UDP protocol is not authenticated. Most syslog daemons - do not accept remote log messages because of the security implications.
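Before continuing with the remote syslog details below, here is a quick sketch of the MemoryAppender described above; this usage (common in unit tests) assumes the log4net 1.2 API:

    using log4net;
    using log4net.Appender;
    using log4net.Config;

    // Sketch: capture events in memory and inspect them afterwards.
    var memory = new MemoryAppender();  // needs no layout
    BasicConfigurator.Configure(memory);

    LogManager.GetLogger("demo").Warn("something odd happened");

    foreach (var loggingEvent in memory.GetEvents())  // snapshot of appended events
        System.Console.WriteLine(loggingEvent.RenderedMessage);
    memory.Clear();  // reset, e.g. between tests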
- Returning to the RemoteSyslogAppender: you may be able to use the LocalSyslogAppender to talk to a local - syslog service. - - - There is an RFC 3164 that claims to document the BSD Syslog Protocol. - This RFC can be seen here: http://www.faqs.org/rfcs/rfc3164.html. - This appender generates what the RFC calls an "Original Device Message", - i.e. does not include the TIMESTAMP or HOSTNAME fields. By observation - this format of message will be accepted by all current syslog daemon - implementations. The daemon will attach the current time and the source - hostname or IP address to any messages received. - - - Syslog messages must have a facility and a severity. The severity - is derived from the Level of the logging event. - The facility must be chosen from the set of defined syslog - values. The facilities list is predefined - and cannot be extended. - - - An identifier is specified with each log message. This can be specified - by setting the property. The identity (also known - as the tag) must not contain white space. The default value for the - identity is the application name (from ). - - - Rob Lyon - Nicko Cadell - - - - Sends logging events as connectionless UDP datagrams to a remote host or a - multicast group using an . - - - - UDP guarantees neither that messages arrive, nor that they arrive in the correct order. - - - To view the logging results, a custom application can be developed that listens for logging - events. - - - When decoding events sent via this appender, remember to use the same encoding - to decode the events as was used to send the events. See the - property to specify the encoding to use. - - - - This example shows how to receive logging events that are sent - on IP address 224.0.0.1 and port 8080 to the console. The event is - encoded in the packet as a Unicode string and it is decoded as such. - - IPEndPoint remoteEndPoint = new IPEndPoint(IPAddress.Any, 0); - UdpClient udpClient; - byte[] buffer; - string loggingEvent; - - try - { - udpClient = new UdpClient(8080); - - while(true) - { - buffer = udpClient.Receive(ref remoteEndPoint); - loggingEvent = System.Text.Encoding.Unicode.GetString(buffer); - Console.WriteLine(loggingEvent); - } - } - catch(Exception e) - { - Console.WriteLine(e.ToString()); - } - - - Dim remoteEndPoint as IPEndPoint - Dim udpClient as UdpClient - Dim buffer as Byte() - Dim loggingEvent as String - - Try - remoteEndPoint = new IPEndPoint(IPAddress.Any, 0) - udpClient = new UdpClient(8080) - - While True - buffer = udpClient.Receive(remoteEndPoint) - loggingEvent = System.Text.Encoding.Unicode.GetString(buffer) - Console.WriteLine(loggingEvent) - Wend - Catch e As Exception - Console.WriteLine(e.ToString()) - End Try - - - An example configuration section to log information using this appender to the - IP 224.0.0.1 on port 8080: - - - - - - - - - - Gert Driesen - Nicko Cadell - - - - Initializes a new instance of the class. - - - The default constructor initializes all fields to their default values. - - - - - Initialize the appender based on the options set. - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - The appender will be ignored if no was specified or - an invalid remote or local UDP port number was specified. - - - The required property was not specified.
- The UDP port number assigned to or is less than or greater than . - - - - This method is called by the method. - - The event to log. - - - Sends the event using a UDP datagram. - - - Exceptions are passed to the . - - - - - - Closes the UDP connection and releases all resources associated with - this instance. - - - - Disables the underlying and releases all managed - and unmanaged resources associated with the . - - - - - - Initializes the underlying connection. - - - - The underlying is initialized and binds to the - port number from which you intend to communicate. - - - Exceptions are passed to the . - - - - - - The IP address of the remote host or multicast group to which - the logging event will be sent. - - - - - The UDP port number of the remote host or multicast group to - which the logging event will be sent. - - - - - The cached remote endpoint to which the logging events will be sent. - - - - - The UDP port number from which the will communicate. - - - - - The instance that will be used for sending the - logging events. - - - - - The encoding to use for the packet. - - - - - Gets or sets the IP address of the remote host or multicast group to which - the underlying should send the logging event. - - - The IP address of the remote host or multicast group to which the logging event - will be sent. - - - - Multicast addresses are identified by IP class D addresses (in the range 224.0.0.0 to - 239.255.255.255). Multicast packets can pass across different networks through routers, so - it is possible to use multicasts in an Internet scenario as long as your network provider - supports multicasting. - - - Hosts that want to receive particular multicast messages must register their interest by joining - the multicast group. Multicast messages are not sent to networks where no host has joined - the multicast group. Class D IP addresses are used for multicast groups, to differentiate - them from normal host addresses, allowing nodes to easily detect if a message is of interest. - - - Static multicast addresses that are needed globally are assigned by IANA. A few examples are listed below: - - - 224.0.0.1: sends a message to all systems on the subnet. - - 224.0.0.2: sends a message to all routers on the subnet. - - 224.0.0.12: the DHCP server answers messages on the IP address 224.0.0.12, but only on a subnet. - - - - A complete list of actually reserved multicast addresses and their owners in the ranges - defined by RFC 3171 can be found at the IANA web site. - - - The address range 239.0.0.0 to 239.255.255.255 is reserved for administrative scope-relative - addresses. These addresses can be reused with other local groups. Routers are typically - configured with filters to prevent multicast traffic in this range from flowing outside - of the local network. - - - - - - Gets or sets the UDP port number of the remote host or multicast group to which - the underlying should send the logging event. - - - An integer value in the range to - indicating the UDP port number of the remote host or multicast group to which the logging event - will be sent. - - - The underlying will send messages to this UDP port number - on the remote host or multicast group. - - The value specified is less than or greater than . - - - - Gets or sets the UDP port number from which the underlying will communicate. - - - An integer value in the range to - indicating the UDP port number from which the underlying will communicate.
- - - - The underlying will bind to this port for sending messages. - - - Setting the value to 0 (the default) will cause the UDP client not to bind to - a local port. - - - The value specified is less than or greater than . - - - - Gets or sets the used to write the packets. - - - The used to write the packets. - - - - The used to write the packets. - - - - - - Gets or sets the underlying . - - - The underlying . - - - creates a to send logging events - over a network. Classes deriving from can use this - property to get or set this . Use the underlying - returned from if you require access beyond that which - provides. - - - - - Gets or sets the cached remote endpoint to which the logging events should be sent. - - - The cached remote endpoint to which the logging events will be sent. - - - The method will initialize the remote endpoint - with the values of the and - properties. - - - - - This appender requires a to be set. - - true - - - This appender requires a to be set. - - - - - - Syslog port 514 - - - - - Initializes a new instance of the class. - - - This instance of the class is set up to write - to a remote syslog daemon. - - - - - Add a mapping of level to severity - - The mapping to add - - - Add a mapping to this appender. - - - - - - This method is called by the method. - - The event to log. - - - Writes the event to a remote syslog daemon. - - - The format of the output will depend on the appender's layout. - - - - - - Initialize the options for this appender - - - - Initialize the level to syslog severity mappings set on this appender. - - - - - - Translates a log4net level to a syslog severity. - - A log4net level. - A syslog severity. - - - Translates a log4net level to a syslog severity. - - - - - - Generate a syslog priority. - - The syslog facility. - The syslog severity. - A syslog priority. - - - Generate a syslog priority. - - - - - - The facility. The default facility is . - - - - - The message identity - - - - - Mapping from level object to syslog severity - - - - - Message identity - - - - An identifier is specified with each log message. This can be specified - by setting the property. The identity (also known - as the tag) must not contain white space. The default value for the - identity is the application name (from ). - - - - - - Syslog facility - - - Set to one of the values. The list of - facilities is predefined and cannot be extended. The default value - is . - - - - - syslog severities - - - - The syslog severities.
- - - - - - system is unusable - - - - - action must be taken immediately - - - - - critical conditions - - - - - error conditions - - - - - warning conditions - - - - - normal but significant condition - - - - - informational - - - - - debug-level messages - - - - - syslog facilities - - - - The syslog facilities - - - - - - kernel messages - - - - - random user-level messages - - - - - mail system - - - - - system daemons - - - - - security/authorization messages - - - - - messages generated internally by syslogd - - - - - line printer subsystem - - - - - network news subsystem - - - - - UUCP subsystem - - - - - clock (cron/at) daemon - - - - - security/authorization messages (private) - - - - - ftp daemon - - - - - NTP subsystem - - - - - log audit - - - - - log alert - - - - - clock daemon - - - - - reserved for local use - - - - - reserved for local use - - - - - reserved for local use - - - - - reserved for local use - - - - - reserved for local use - - - - - reserved for local use - - - - - reserved for local use - - - - - reserved for local use - - - - - A class to act as a mapping between the level that a logging call is made at and - the syslog severity that is should be logged at. - - - - A class to act as a mapping between the level that a logging call is made at and - the syslog severity that is should be logged at. - - - - - - The mapped syslog severity for the specified level - - - - Required property. - The mapped syslog severity for the specified level - - - - - - Delivers logging events to a remote logging sink. - - - - This Appender is designed to deliver events to a remote sink. - That is any object that implements the - interface. It delivers the events using .NET remoting. The - object to deliver events to is specified by setting the - appenders property. - - The RemotingAppender buffers events before sending them. This allows it to - make more efficient use of the remoting infrastructure. - - Once the buffer is full the events are still not sent immediately. - They are scheduled to be sent using a pool thread. The effect is that - the send occurs asynchronously. This is very important for a - number of non obvious reasons. The remoting infrastructure will - flow thread local variables (stored in the ), - if they are marked as , across the - remoting boundary. If the server is not contactable then - the remoting infrastructure will clear the - objects from the . To prevent a logging failure from - having side effects on the calling application the remoting call must be made - from a separate thread to the one used by the application. A - thread is used for this. If no thread is available then - the events will block in the thread pool manager until a thread is available. - - Because the events are sent asynchronously using pool threads it is possible to close - this appender before all the queued events have been sent. - When closing the appender attempts to wait until all the queued events have been sent, but - this will timeout after 30 seconds regardless. - - If this appender is being closed because the - event has fired it may not be possible to send all the queued events. During process - exit the runtime limits the time that a - event handler is allowed to run for. If the runtime terminates the threads before - the queued events have been sent then they will be lost. To ensure that all events - are sent the appender must be closed before the application exits. See - for details on how to shutdown - log4net programmatically. 
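- (For instance, a minimal sketch of shutting log4net down programmatically so a RemotingAppender can drain its queued work items, assuming the standard LogManager API; the class name is illustrative.)
-
- using log4net;
-
- public static class Program
- {
-     public static void Main()
-     {
-         ILog log = LogManager.GetLogger(typeof(Program));
-         try
-         {
-             log.Info("Application starting");
-             // ... application work ...
-         }
-         finally
-         {
-             // Flushes and closes all appenders, waiting (up to the
-             // 30 second timeout described above) for queued events.
-             LogManager.Shutdown();
-         }
-     }
- }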
- - - Nicko Cadell - Gert Driesen - Daniel Cazzulino - - - - Initializes a new instance of the class. - - - - Default constructor. - - - - - - Initialize the appender based on the options set - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - - - - Send the contents of the buffer to the remote sink. - - - The events are not sent immediately. They are scheduled to be sent - using a pool thread. The effect is that the send occurs asynchronously. - This is very important for a number of non obvious reasons. The remoting - infrastructure will flow thread local variables (stored in the ), - if they are marked as , across the - remoting boundary. If the server is not contactable then - the remoting infrastructure will clear the - objects from the . To prevent a logging failure from - having side effects on the calling application the remoting call must be made - from a separate thread to the one used by the application. A - thread is used for this. If no thread is available then - the events will block in the thread pool manager until a thread is available. - - The events to send. - - - - Override base class close. - - - - This method waits while there are queued work items. The events are - sent asynchronously using work items. These items - will be sent once a thread pool thread is available to send them, therefore - it is possible to close the appender before all the queued events have been - sent. - - This method attempts to wait until all the queued events have been sent, but this - method will timeout after 30 seconds regardless. - - If the appender is being closed because the - event has fired it may not be possible to send all the queued events. During process - exit the runtime limits the time that a - event handler is allowed to run for. - - - - - A work item is being queued into the thread pool - - - - - A work item from the thread pool has completed - - - - - Send the contents of the buffer to the remote sink. - - - This method is designed to be used with the . - This method expects to be passed an array of - objects in the state param. - - the logging events to send - - - - The URL of the remote sink. - - - - - The local proxy (.NET remoting) for the remote logging sink. - - - - - The number of queued callbacks currently waiting or executing - - - - - Event used to signal when there are no queued work items - - - This event is set when there are no queued work items. In this - state it is safe to close the appender. - - - - - Gets or sets the URL of the well-known object that will accept - the logging events. - - - The well-known URL of the remote sink. - - - - The URL of the remoting sink that will accept logging events. - The sink must implement the - interface. - - - - - - Interface used to deliver objects to a remote sink. - - - This interface must be implemented by a remoting sink - if the is to be used - to deliver logging events to the sink. - - - - - Delivers logging events to the remote sink - - Array of events to log. - - - Delivers logging events to the remote sink - - - - - - Appender that rolls log files based on size or date or both. - - - - RollingFileAppender can roll log files based on size or date or both - depending on the setting of the property. 
When set to the log file will be rolled - once its size exceeds the . - When set to the log file will be rolled - once the date boundary specified in the property - is crossed. - When set to the log file will be - rolled once the date boundary specified in the property - is crossed, but within a date boundary the file will also be rolled - once its size exceeds the . - When set to the log file will be rolled when - the appender is configured. This effectively means that the log file can be - rolled once per program execution. - - A few additional optional features have been added: - - Attach date pattern for current log file - Backup number increments for newer files - Infinite number of backups by file size - - - - - - For large or infinite numbers of backup files a - greater than zero is highly recommended, otherwise all the backup files need - to be renamed each time a new backup is created. - - - When Date/Time based rolling is used setting - to will reduce the number of file renamings to few or none. - - - - - - Changing or without clearing - the log file directory of backup files will cause unexpected and unwanted side effects. - - - - - If Date/Time based rolling is enabled this appender will attempt to roll existing files - in the directory without a Date/Time tag based on the last write date of the base log file. - The appender only rolls the log file when a message is logged. If Date/Time based rolling - is enabled then the appender will not roll the log file at the Date/Time boundary but - at the point when the next message is logged after the boundary has been crossed. - - - - The extends the and - has the same behavior when opening the log file. - The appender will first try to open the file for writing when - is called. This will typically be during configuration. - If the file cannot be opened for writing the appender will attempt - to open the file again each time a message is logged to the appender. - If the file cannot be opened for writing when a message is logged then - the message will be discarded by this appender. - - - When rolling a backup file necessitates deleting an older backup file the - file to be deleted is moved to a temporary name before being deleted. - - - - - A maximum number of backup files when rolling on date/time boundaries is not supported. - - - - Nicko Cadell - Gert Driesen - Aspi Havewala - Douglas de la Torre - Edward Smit - - - - Initializes a new instance of the class. - - - - Default constructor. - - - - - - Sets the quiet writer being used. - - - This method can be overridden by subclasses. - - the writer to set - - - - Write out a logging event. - - the event to write to file. - - - Handles append time behavior for RollingFileAppender. This checks - if a roll over either by date (checked first) or time (checked second) - is needed and then appends to the file last. - - - - - - Write out an array of logging events. - - the events to write to file. - - - Handles append time behavior for RollingFileAppender. This checks - if a roll over either by date (checked first) or time (checked second) - is needed and then appends to the file last. - - - - - - Performs any required rolling before outputting the next event - - - - Handles append time behavior for RollingFileAppender. This checks - if a roll over either by date (checked first) or time (checked second) - is needed and then appends to the file last. - - - - - - Creates and opens the file for logging. If - is false then the fully qualified name is determined and used.
- - the name of the file to open - true to append to existing file - - This method will ensure that the directory structure - for the specified exists. - - - - - Get the current output file name - - the base file name - the output file name - - The output file name is based on the base fileName specified. - If is set then the output - file name is the same as the base file passed in. Otherwise - the output file depends on the date pattern, on the count - direction or both. - - - - - Determines curSizeRollBackups (only within the current roll point) - - - - - Generates a wildcard pattern that can be used to find all files - that are similar to the base file name. - - - - - - - Builds a list of filenames for all files matching the base filename plus a file - pattern. - - - - - - - Initiates a roll over if needed for crossing a date boundary since the last run. - - - - - Initializes based on existing conditions at time of . - - - - Initializes based on existing conditions at time of . - The following is done - - determine curSizeRollBackups (only within the current roll point) - initiates a roll over if needed for crossing a date boundary since the last run. - - - - - - - Does the work of bumping the 'current' file counter higher - to the highest count when an incremental file name is seen. - The highest count is either the first file (when count direction - is greater than 0) or the last file (when count direction less than 0). - In either case, we want to know the highest count that is present. - - - - - - - Takes a list of files and a base file name, and looks for - 'incremented' versions of the base file. Bumps the max - count up to the highest count seen. - - - - - - - Calculates the RollPoint for the datePattern supplied. - - the date pattern to calculate the check period for - The RollPoint that is most accurate for the date pattern supplied - - Essentially the date pattern is examined to determine what the - most suitable roll point is. The roll point chosen is the roll point - with the smallest period that can be detected using the date pattern - supplied. i.e. if the date pattern only outputs the year, month, day - and hour then the smallest roll point that can be detected would be - and hourly roll point as minutes could not be detected. - - - - - Initialize the appender based on the options set - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - Sets initial conditions including date/time roll over information, first check, - scheduledFilename, and calls to initialize - the current number of backups. - - - - - - Rollover the file(s) to date/time tagged file(s). - - set to true if the file to be rolled is currently open - - - Rollover the file(s) to date/time tagged file(s). - Resets curSizeRollBackups. - If fileIsOpen is set then the new file is opened (through SafeOpenFile). - - - - - - Renames file to file . - - Name of existing file to roll. - New name for file. - - - Renames file to file . It - also checks for existence of target file and deletes if it does. - - - - - - Test if a file exists at a specified path - - the path to the file - true if the file exists - - - Test if a file exists at a specified path - - - - - - Deletes the specified file if it exists. - - The file to delete. 
- - - Delete a file if it exists. - The file is first moved to a new filename then deleted. - This allows the file to be removed even when it cannot - be deleted, but it still can be moved. - - - - - - Implements file roll based on file size. - - - - If the maximum number of size based backups is reached - (curSizeRollBackups == maxSizeRollBackups) then the oldest - file is deleted -- its index determined by the sign of countDirection. - If countDirection < 0, then files - {File.1, ..., File.curSizeRollBackups -1} - are renamed to {File.2, ..., - File.curSizeRollBackups}. Moreover, File is - renamed File.1 and closed. - - - A new file is created to receive further log output. - - - If maxSizeRollBackups is equal to zero, then the - File is truncated with no backup files created. - - - If maxSizeRollBackups < 0, then File is - renamed if needed and no files are deleted. - - - - - - Implements file roll. - - the base name to rename - - - If the maximum number of size based backups is reached - (curSizeRollBackups == maxSizeRollBackups) then the oldest - file is deleted -- its index determined by the sign of countDirection. - If countDirection < 0, then files - {File.1, ..., File.curSizeRollBackups -1} - are renamed to {File.2, ..., - File.curSizeRollBackups}. - - - If maxSizeRollBackups is equal to zero, then the - File is truncated with no backup files created. - - - If maxSizeRollBackups < 0, then File is - renamed if needed and no files are deleted. - - - This is called by to rename the files. - - - - - - Get the start time of the next window for the current rollpoint - - the current date - the type of roll point we are working with - the start time for the next roll point, an interval after the currentDateTime date - - - Returns the date of the next roll point after the currentDateTime date passed to the method. - - - The basic strategy is to subtract the time parts that are less significant - than the rollpoint from the current time. This should roll the time back to - the start of the time window for the current rollpoint. Then we add one window's - worth of time and get the start time of the next window for the rollpoint. - - - - - - This object supplies the current date/time. Allows test code to plug in - a method to control this class when testing date/time based rolling. - - - - - The date pattern. By default, the pattern is set to ".yyyy-MM-dd" - meaning daily rollover. - - - - - The actual formatted filename that is currently being written to - or will be the file transferred to on roll over - (based on staticLogFileName). - - - - - The timestamp when we shall next recompute the filename. - - - - - Holds the date of the last roll over - - - - - The type of rolling done - - - - - The default maximum file size is 10MB - - - - - There are zero backup files by default - - - - - How many size based backups have been made so far - - - - - The rolling file count direction. - - - - - The rolling mode used in this appender. - - - - - Cache flag set if we are rolling by date. - - - - - Cache flag set if we are rolling by size. - - - - - Value indicating whether to always log to the same file. - - - - - FileName provided in configuration. Used for rolling properly - - - - - The 1st of January 1970 in UTC - - - - - Gets or sets the date pattern to be used for generating file names - when rolling over on date. - - - The date pattern to be used for generating file names when rolling - over on date. - - - - Takes a string in the same format as expected by - .
- - - This property determines the rollover schedule when rolling over - on date. - - - - - - Gets or sets the maximum number of backup files that are kept before - the oldest is erased. - - - The maximum number of backup files that are kept before the oldest is - erased. - - - - If set to zero, then there will be no backup files and the log file - will be truncated when it reaches . - - - If a negative number is supplied then no deletions will be made. Note - that this could result in very slow performance as a large number of - files are rolled over unless is used. - - - The maximum applies to each time based group of files and - not the total. - - - - - - Gets or sets the maximum size that the output file is allowed to reach - before being rolled over to backup files. - - - The maximum size in bytes that the output file is allowed to reach before being - rolled over to backup files. - - - - This property is equivalent to except - that it is required for differentiating the setter taking a - argument from the setter taking a - argument. - - - The default maximum file size is 10MB (10*1024*1024). - - - - - - Gets or sets the maximum size that the output file is allowed to reach - before being rolled over to backup files. - - - The maximum size that the output file is allowed to reach before being - rolled over to backup files. - - - - This property allows you to specify the maximum size with the - suffixes "KB", "MB" or "GB" so that the size is interpreted being - expressed respectively in kilobytes, megabytes or gigabytes. - - - For example, the value "10KB" will be interpreted as 10240 bytes. - - - The default maximum file size is 10MB. - - - If you have the option to set the maximum file size programmatically - consider using the property instead as this - allows you to set the size in bytes as a . - - - - - - Gets or sets the rolling file count direction. - - - The rolling file count direction. - - - - Indicates if the current file is the lowest numbered file or the - highest numbered file. - - - By default newer files have lower numbers ( < 0), - i.e. log.1 is most recent, log.5 is the 5th backup, etc... - - - >= 0 does the opposite i.e. - log.1 is the first backup made, log.5 is the 5th backup made, etc. - For infinite backups use >= 0 to reduce - rollover costs. - - The default file count direction is -1. - - - - - Gets or sets the rolling style. - - The rolling style. - - - The default rolling style is . - - - When set to this appender's - property is set to false, otherwise - the appender would append to a single file rather than rolling - the file each time it is opened. - - - - - - Gets or sets a value indicating whether to always log to - the same file. - - - true if always should be logged to the same file, otherwise false. - - - - By default file.log is always the current file. Optionally - file.log.yyyy-mm-dd for current formatted datePattern can by the currently - logging file (or file.log.curSizeRollBackup or even - file.log.yyyy-mm-dd.curSizeRollBackup). - - - This will make time based rollovers with a large number of backups - much faster as the appender it won't have to rename all the backups! - - - - - - Style of rolling to use - - - - Style of rolling to use - - - - - - Roll files once per program execution - - - - Roll files once per program execution. - Well really once each time this appender is - configured. - - - Setting this option also sets AppendToFile to - false on the RollingFileAppender, otherwise - this appender would just be a normal file appender. 
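- (A hedged sketch tying the rolling properties described above together; the file path, size and backup count are illustrative.)
-
- using log4net.Appender;
- using log4net.Layout;
- using log4net.Config;
-
- RollingFileAppender roller = new RollingFileAppender();
- roller.File = "logs/app.log";                 // illustrative path
- roller.AppendToFile = true;
- roller.RollingStyle = RollingFileAppender.RollingMode.Composite;
- roller.DatePattern = ".yyyy-MM-dd";           // daily date boundary
- roller.MaximumFileSize = "10MB";              // size boundary within each day
- roller.MaxSizeRollBackups = 5;
- roller.StaticLogFileName = true;
- roller.CountDirection = 1;                    // cheaper rollovers for many backups
- roller.Layout = new PatternLayout("%date [%thread] %-5level %logger - %message%newline");
- roller.ActivateOptions();
- BasicConfigurator.Configure(roller);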
- - - - - - Roll files based only on the size of the file - - - - - Roll files based only on the date - - - - - Roll files based on both the size and date of the file - - - - - The code assumes that the following 'time' constants are in a increasing sequence. - - - - The code assumes that the following 'time' constants are in a increasing sequence. - - - - - - Roll the log not based on the date - - - - - Roll the log for each minute - - - - - Roll the log for each hour - - - - - Roll the log twice a day (midday and midnight) - - - - - Roll the log each day (midnight) - - - - - Roll the log each week - - - - - Roll the log each month - - - - - This interface is used to supply Date/Time information to the . - - - This interface is used to supply Date/Time information to the . - Used primarily to allow test classes to plug themselves in so they can - supply test date/times. - - - - - Gets the current time. - - The current time. - - - Gets the current time. - - - - - - Default implementation of that returns the current time. - - - - - Gets the current time. - - The current time. - - - Gets the current time. - - - - - - Send an e-mail when a specific logging event occurs, typically on errors - or fatal errors. - - - - The number of logging events delivered in this e-mail depend on - the value of option. The - keeps only the last - logging events in its - cyclic buffer. This keeps memory requirements at a reasonable level while - still delivering useful application context. - - - Authentication and setting the server Port are only available on the MS .NET 1.1 runtime. - For these features to be enabled you need to ensure that you are using a version of - the log4net assembly that is built against the MS .NET 1.1 framework and that you are - running the your application on the MS .NET 1.1 runtime. On all other platforms only sending - unauthenticated messages to a server listening on port 25 (the default) is supported. - - - Authentication is supported by setting the property to - either or . - If using authentication then the - and properties must also be set. - - - To set the SMTP server port use the property. The default port is 25. - - - Nicko Cadell - Gert Driesen - - - - Default constructor - - - - Default constructor - - - - - - Sends the contents of the cyclic buffer as an e-mail message. - - The logging events to send. - - - - Send the email message - - the body text to include in the mail - - - - Gets or sets a semicolon-delimited list of recipient e-mail addresses. - - - A semicolon-delimited list of e-mail addresses. - - - - A semicolon-delimited list of recipient e-mail addresses. - - - - - - Gets or sets the e-mail address of the sender. - - - The e-mail address of the sender. - - - - The e-mail address of the sender. - - - - - - Gets or sets the subject line of the e-mail message. - - - The subject line of the e-mail message. - - - - The subject line of the e-mail message. - - - - - - Gets or sets the name of the SMTP relay mail server to use to send - the e-mail messages. - - - The name of the e-mail relay server. If SmtpServer is not set, the - name of the local SMTP server is used. - - - - The name of the e-mail relay server. If SmtpServer is not set, the - name of the local SMTP server is used. - - - - - - Obsolete - - - Use the BufferingAppenderSkeleton Fix methods instead - - - - Obsolete property. - - - - - - The mode to use to authentication with the SMTP server - - - Authentication is only available on the MS .NET 1.1 runtime. 
- - Valid Authentication mode values are: , - , and . - The default value is . When using - you must specify the - and to use to authenticate. - When using the Windows credentials for the current - thread, if impersonating, or the process will be used to authenticate. - - - - - - The username to use to authenticate with the SMTP server - - - Authentication is only available on the MS .NET 1.1 runtime. - - A and must be specified when - is set to , - otherwise the username will be ignored. - - - - - - The password to use to authenticate with the SMTP server - - - Authentication is only available on the MS .NET 1.1 runtime. - - A and must be specified when - is set to , - otherwise the password will be ignored. - - - - - - The port on which the SMTP server is listening - - - Server Port is only available on the MS .NET 1.1 runtime. - - The port on which the SMTP server is listening. The default - port is 25. The Port can only be changed when running on - the MS .NET 1.1 runtime. - - - - - - Gets or sets the priority of the e-mail message - - - One of the values. - - - - Sets the priority of the e-mails generated by this - appender. The default priority is . - - - If you are using this appender to report errors then - you may want to set the priority to . - - - - - - This appender requires a to be set. - - true - - - This appender requires a to be set. - - - - - - Values for the property. - - - - SMTP authentication modes. - - - - - - No authentication - - - - - Basic authentication. - - - Requires a username and password to be supplied - - - - - Integrated authentication - - - Uses the Windows credentials from the current thread or process to authenticate. - - - - - Send an email when a specific logging event occurs, typically on errors - or fatal errors. Rather than sending via smtp it writes a file into the - directory specified by . This allows services such - as the IIS SMTP agent to manage sending the messages. - - - - The configuration for this appender is identical to that of the SMTPAppender, - except that instead of specifying the SMTPAppender.SMTPHost you specify - . - - - The number of logging events delivered in this e-mail depend on - the value of option. The - keeps only the last - logging events in its - cyclic buffer. This keeps memory requirements at a reasonable level while - still delivering useful application context. - - - Niall Daley - Nicko Cadell - - - - Default constructor - - - - Default constructor - - - - - - Sends the contents of the cyclic buffer as an e-mail message. - - The logging events to send. - - - Sends the contents of the cyclic buffer as an e-mail message. - - - - - - Activate the options on this appender. - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - - - - Convert a path into a fully qualified path. - - The path to convert. - The fully qualified path. - - - Converts the path specified to a fully - qualified path. If the path is relative it is - taken as relative from the application base - directory. - - - - - - The security context to use for privileged calls - - - - - Gets or sets a semicolon-delimited list of recipient e-mail addresses. - - - A semicolon-delimited list of e-mail addresses. - - - - A semicolon-delimited list of e-mail addresses. 
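- (Looking back at the SmtpAppender settings described above, a hedged configuration sketch; the host, credentials and addresses are illustrative. On platforms other than MS .NET 1.1, leave Authentication and Port at their defaults.)
-
- using log4net.Appender;
- using log4net.Layout;
- using log4net.Config;
-
- SmtpAppender smtp = new SmtpAppender();
- smtp.To = "ops@example.com;oncall@example.com";   // semicolon-delimited recipients
- smtp.From = "app@example.com";
- smtp.Subject = "Application errors";
- smtp.SmtpHost = "smtp.example.com";
- smtp.Port = 25;                                   // default SMTP port
- smtp.Authentication = SmtpAppender.SmtpAuthentication.Basic;
- smtp.Username = "mailuser";                       // required for Basic authentication
- smtp.Password = "secret";
- smtp.BufferSize = 512;                            // events kept in the cyclic buffer
- smtp.Layout = new PatternLayout("%date %-5level %logger - %message%newline");
- smtp.ActivateOptions();
- BasicConfigurator.Configure(smtp);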
- - - - - - Gets or sets the e-mail address of the sender. - - - The e-mail address of the sender. - - - - The e-mail address of the sender. - - - - - - Gets or sets the subject line of the e-mail message. - - - The subject line of the e-mail message. - - - - The subject line of the e-mail message. - - - - - - Gets or sets the path to write the messages to. - - - - Gets or sets the path to write the messages to. This should be the same - as that used by the agent sending the messages. - - - - - - Gets or sets the used to write to the pickup directory. - - - The used to write to the pickup directory. - - - - Unless a specified here for this appender - the is queried for the - security context to use. The default behavior is to use the security context - of the current thread. - - - - - - This appender requires a to be set. - - true - - - This appender requires a to be set. - - - - - - Appender that allows clients to connect via Telnet to receive log messages - - - - The TelnetAppender accepts socket connections and streams logging messages - back to the client. - The output is provided in a telnet-friendly way so that a log can be monitored - over a TCP/IP socket. - This allows simple remote monitoring of application logging. - - - The default is 23 (the telnet port). - - - Keith Long - Nicko Cadell - - - - Default constructor - - - - Default constructor - - - - - - Overrides the parent method to close the socket handler - - - - Closes all the outstanding connections. - - - - - - Initialize the appender based on the options set. - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - Create the socket handler and wait for connections - - - - - - Writes the logging event to each connected client. - - The event to log. - - - Writes the logging event to each connected client. - - - - - - Gets or sets the TCP port number on which this will listen for connections. - - - An integer value in the range to - indicating the TCP port number on which this will listen for connections. - - - - The default value is 23 (the telnet port). - - - The value specified is less than - or greater than . - - - - This appender requires a to be set. - - true - - - This appender requires a to be set. - - - - - - Helper class to manage connected clients - - - - The SocketHandler class is used to accept connections from - clients. It is threaded so that clients can connect/disconnect - asynchronously. - - - - - - Opens a new server port on - - the local port to listen on for connections - - - Creates a socket handler on the specified local server port. 
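- (A hedged usage sketch for the TelnetAppender described above; the port shown is the documented default.)
-
- using log4net.Appender;
- using log4net.Layout;
- using log4net.Config;
-
- TelnetAppender telnet = new TelnetAppender();
- telnet.Port = 23;                                 // default telnet port
- telnet.Layout = new PatternLayout("%date %-5level %logger - %message%newline");
- telnet.ActivateOptions();                         // creates the socket handler and listens
- BasicConfigurator.Configure(telnet);
- // A client can then monitor the log with: telnet <host> 23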
- - - - - Sends a string message to each of the connected clients - the text to send - - Sends a string message to each of the connected clients - - - - - Add a client to the internal clients list - client to add - - - - Remove a client from the internal clients list - client to remove - - - - Callback used to accept a connection on the server socket - The result of the asynchronous operation - - On connection adds to the list of connections - if there are too many open connections you will be disconnected - - - - - Close all network connections - - - - Make sure we close all network connections - - - - - - Test if this handler has active connections - - true if this handler has active connections - - - - This property will be true while this handler has - active connections, that is at least one connection that - the handler will attempt to send a message to. - - - - - - Class that represents a client connected to this handler - - - - Class that represents a client connected to this handler - - - - - - Create this for the specified - - the client's socket - - - Opens a stream writer on the socket. - - - - - - Write a string to the client - - string to send - - - Write a string to the client - - - - - - Clean up the client's connection - - - - Close the socket connection. - - - - - - Appends log events to the system. - - - - The application configuration file can be used to control what listeners - are actually used. See the MSDN documentation for the - class for details on configuring the - trace system. - - - Events are written using the System.Diagnostics.Trace.Write(string,string) - method. The event's logger name is passed as the value for the category name to the Write method. - - - Compact Framework
- The Compact Framework does not support the - class for any operation except Assert. When using the Compact Framework this - appender will write to the system rather than - the Trace system. This appender will therefore behave like the . -
-
- Douglas de la Torre - Nicko Cadell - Gert Driesen -
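- (A hedged sketch of the TraceAppender described above; the layout pattern is illustrative.)
-
- using log4net.Appender;
- using log4net.Layout;
- using log4net.Config;
-
- TraceAppender trace = new TraceAppender();
- trace.Layout = new PatternLayout("%date %-5level %logger - %message%newline");
- trace.ImmediateFlush = true;                      // flush the trace listeners per event
- trace.ActivateOptions();
- BasicConfigurator.Configure(trace);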
- - - Initializes a new instance of the . - - - - Default constructor. - - - - - - Initializes a new instance of the - with a specified layout. - - The layout to use with this appender. - - - Obsolete constructor. - - - - - - Writes the logging event to the system. - - The event to log. - - - Writes the logging event to the system. - - - - - - Immediate flush means that the underlying writer or output stream - will be flushed at the end of each append operation. - - - - Immediate flush is slower but ensures that each append request is - actually written. If is set to - false, then there is a good chance that the last few - logs events are not actually written to persistent media if and - when the application crashes. - - - The default value is true. - - - - - Gets or sets a value that indicates whether the appender will - flush at the end of each write. - - - The default behavior is to flush at the end of each - write. If the option is set tofalse, then the underlying - stream can defer writing to physical medium to a later time. - - - Avoiding the flush operation at the end of each append results - in a performance gain of 10 to 20 percent. However, there is safety - trade-off involved in skipping flushing. Indeed, when flushing is - skipped, then it is likely that the last few log events will not - be recorded on disk when the application exits. This is a high - price to pay even for a 20% performance gain. - - - - - - This appender requires a to be set. - - true - - - This appender requires a to be set. - - - - - - Assembly level attribute that specifies a domain to alias to this assembly's repository. - - - - AliasDomainAttribute is obsolete. Use AliasRepositoryAttribute instead of AliasDomainAttribute. - - - An assembly's logger repository is defined by its , - however this can be overridden by an assembly loaded before the target assembly. - - - An assembly can alias another assembly's domain to its repository by - specifying this attribute with the name of the target domain. - - - This attribute can only be specified on the assembly and may be used - as many times as necessary to alias all the required domains. - - - Nicko Cadell - Gert Driesen - - - - Assembly level attribute that specifies a repository to alias to this assembly's repository. - - - - An assembly's logger repository is defined by its , - however this can be overridden by an assembly loaded before the target assembly. - - - An assembly can alias another assembly's repository to its repository by - specifying this attribute with the name of the target repository. - - - This attribute can only be specified on the assembly and may be used - as many times as necessary to alias all the required repositories. - - - Nicko Cadell - Gert Driesen - - - - Initializes a new instance of the class with - the specified repository to alias to this assembly's repository. - - The repository to alias to this assemby's repository. - - - Initializes a new instance of the class with - the specified repository to alias to this assembly's repository. - - - - - - Gets or sets the repository to alias to this assemby's repository. - - - The repository to alias to this assemby's repository. - - - - The name of the repository to alias to this assemby's repository. - - - - - - Initializes a new instance of the class with - the specified domain to alias to this assembly's repository. - - The domain to alias to this assemby's repository. - - - Obsolete. Use instead of . - - - - - - Use this class to quickly configure a . 
- - - - Allows very simple programmatic configuration of log4net. - - - Only one appender can be configured using this configurator. - The appender is set at the root of the hierarchy and all logging - events will be delivered to that appender. - - - Appenders can also implement the interface. Therefore - they would require that the method - be called after the appenders properties have been configured. - - - Nicko Cadell - Gert Driesen - - - - Initializes a new instance of the class. - - - - Uses a private access modifier to prevent instantiation of this class. - - - - - - Initializes the log4net system with a default configuration. - - - - Initializes the log4net logging system using a - that will write to Console.Out. The log messages are - formatted using the layout object - with the - layout style. - - - - - - Initializes the log4net system using the specified appender. - - The appender to use to log all logging events. - - - Initializes the log4net system using the specified appender. - - - - - - Initializes the with a default configuration. - - The repository to configure. - - - Initializes the specified repository using a - that will write to Console.Out. The log messages are - formatted using the layout object - with the - layout style. - - - - - - Initializes the using the specified appender. - - The repository to configure. - The appender to use to log all logging events. - - - Initializes the using the specified appender. - - - - - - Base class for all log4net configuration attributes. - - - This is an abstract class that must be extended by - specific configurators. This attribute allows the - configurator to be parameterized by an assembly level - attribute. - - Nicko Cadell - Gert Driesen - - - - Constructor used by subclasses. - - the ordering priority for this configurator - - - The is used to order the configurator - attributes before they are invoked. Higher priority configurators are executed - before lower priority ones. - - - - - - Configures the for the specified assembly. - - The assembly that this attribute was defined on. - The repository to configure. - - - Abstract method implemented by a subclass. When this method is called - the subclass should configure the . - - - - - - Compare this instance to another ConfiguratorAttribute - - the object to compare to - see - - - Compares the priorities of the two instances. - Sorts by priority in descending order. Objects with the same priority are - randomly ordered. - - - - - - Assembly level attribute that specifies the logging domain for the assembly. - - - - DomainAttribute is obsolete. Use RepositoryAttribute instead of DomainAttribute. - - - Assemblies are mapped to logging domains. Each domain has its own - logging repository. This attribute specified on the assembly controls - the configuration of the domain. The property specifies the name - of the domain that this assembly is a part of. The - specifies the type of the repository objects to create for the domain. If - this attribute is not specified and a is not specified - then the assembly will be part of the default shared logging domain. - - - This attribute can only be specified on the assembly and may only be used - once per assembly. - - - Nicko Cadell - Gert Driesen - - - - Assembly level attribute that specifies the logging repository for the assembly. - - - - Assemblies are mapped to logging repository. This attribute specified - on the assembly controls - the configuration of the repository. 
The property specifies the name - of the repository that this assembly is a part of. The - specifies the type of the object - to create for the assembly. If this attribute is not specified or a - is not specified then the assembly will be part of the default shared logging repository. - - - This attribute can only be specified on the assembly and may only be used - once per assembly. - - - Nicko Cadell - Gert Driesen - - - - Initializes a new instance of the class. - - - - Default constructor. - - - - - - Initialize a new instance of the class - with the name of the repository. - - The name of the repository. - - - Initialize the attribute with the name for the assembly's repository. - - - - - - Gets or sets the name of the logging repository. - - - The string name to use as the name of the repository associated with this - assembly. - - - - This value does not have to be unique. Several assemblies can share the - same repository. They will share the logging configuration of the repository. - - - - - - Gets or sets the type of repository to create for this assembly. - - - The type of repository to create for this assembly. - - - - The type of the repository to create for the assembly. - The type must implement the - interface. - - - This will be the type of repository created when - the repository is created. If multiple assemblies reference the - same repository then the repository is only created once using the - of the first assembly to call into the - repository. - - - - - - Initializes a new instance of the class. - - - - Obsolete. Use RepositoryAttribute instead of DomainAttribute. - - - - - - Initialize a new instance of the class - with the name of the domain. - - The name of the domain. - - - Obsolete. Use RepositoryAttribute instead of DomainAttribute. - - - - - - Use this class to initialize the log4net environment using an Xml tree. - - - - DOMConfigurator is obsolete. Use XmlConfigurator instead of DOMConfigurator. - - - Configures a using an Xml tree. - - - Nicko Cadell - Gert Driesen - - - - Private constructor - - - - - Automatically configures the log4net system based on the - application's configuration settings. - - - - DOMConfigurator is obsolete. Use XmlConfigurator instead of DOMConfigurator. - - Each application has a configuration file. This has the - same name as the application with '.config' appended. - This file is XML and calling this function prompts the - configurator to look in that file for a section called - log4net that contains the configuration data. - - - - - Automatically configures the using settings - stored in the application's configuration file. - - - - DOMConfigurator is obsolete. Use XmlConfigurator instead of DOMConfigurator. - - Each application has a configuration file. This has the - same name as the application with '.config' appended. - This file is XML and calling this function prompts the - configurator to look in that file for a section called - log4net that contains the configuration data. - - The repository to configure. - - - - Configures log4net using a log4net element - - - - DOMConfigurator is obsolete. Use XmlConfigurator instead of DOMConfigurator. - - Loads the log4net configuration from the XML element - supplied as . - - The element to parse. - - - - Configures the using the specified XML - element. - - - - DOMConfigurator is obsolete. Use XmlConfigurator instead of DOMConfigurator. - - Loads the log4net configuration from the XML element - supplied as . - - The repository to configure. - The element to parse. 
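- (A hedged usage sketch for the assembly-level RepositoryAttribute described above; the repository name is illustrative. Several assemblies may declare the same name to share one repository and its configuration.)
-
- // e.g. in AssemblyInfo.cs
- [assembly: log4net.Config.Repository("MyCompany.MyApp")]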
- - - - Configures log4net using the specified configuration file. - - The XML file to load the configuration from. - - - DOMConfigurator is obsolete. Use XmlConfigurator instead of DOMConfigurator. - - - The configuration file must be valid XML. It must contain - at least one element called log4net that holds - the log4net configuration data. - - - The log4net configuration file can possible be specified in the application's - configuration file (either MyAppName.exe.config for a - normal application on Web.config for an ASP.NET application). - - - The following example configures log4net using a configuration file, of which the - location is stored in the application's configuration file : - - - using log4net.Config; - using System.IO; - using System.Configuration; - - ... - - DOMConfigurator.Configure(new FileInfo(ConfigurationSettings.AppSettings["log4net-config-file"])); - - - In the .config file, the path to the log4net can be specified like this : - - - - - - - - - - - - - Configures log4net using the specified configuration file. - - A stream to load the XML configuration from. - - - DOMConfigurator is obsolete. Use XmlConfigurator instead of DOMConfigurator. - - - The configuration data must be valid XML. It must contain - at least one element called log4net that holds - the log4net configuration data. - - - Note that this method will NOT close the stream parameter. - - - - - - Configures the using the specified configuration - file. - - The repository to configure. - The XML file to load the configuration from. - - - DOMConfigurator is obsolete. Use XmlConfigurator instead of DOMConfigurator. - - - The configuration file must be valid XML. It must contain - at least one element called log4net that holds - the configuration data. - - - The log4net configuration file can possible be specified in the application's - configuration file (either MyAppName.exe.config for a - normal application on Web.config for an ASP.NET application). - - - The following example configures log4net using a configuration file, of which the - location is stored in the application's configuration file : - - - using log4net.Config; - using System.IO; - using System.Configuration; - - ... - - DOMConfigurator.Configure(new FileInfo(ConfigurationSettings.AppSettings["log4net-config-file"])); - - - In the .config file, the path to the log4net can be specified like this : - - - - - - - - - - - - - Configures the using the specified configuration - file. - - The repository to configure. - The stream to load the XML configuration from. - - - DOMConfigurator is obsolete. Use XmlConfigurator instead of DOMConfigurator. - - - The configuration data must be valid XML. It must contain - at least one element called log4net that holds - the configuration data. - - - Note that this method will NOT close the stream parameter. - - - - - - Configures log4net using the file specified, monitors the file for changes - and reloads the configuration if a change is detected. - - The XML file to load the configuration from. - - - DOMConfigurator is obsolete. Use XmlConfigurator instead of DOMConfigurator. - - - The configuration file must be valid XML. It must contain - at least one element called log4net that holds - the configuration data. - - - The configuration file will be monitored using a - and depends on the behavior of that class. - - - For more information on how to configure log4net using - a separate configuration file, see . 
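- (Since DOMConfigurator is obsolete, the XmlConfigurator equivalent is a drop-in replacement; the file name here is illustrative.)
-
- using System.IO;
- using log4net.Config;
-
- XmlConfigurator.ConfigureAndWatch(new FileInfo("log4net.config"));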
- - - - - - - Configures the using the file specified, - monitors the file for changes and reloads the configuration if a change - is detected. - - The repository to configure. - The XML file to load the configuration from. - - - DOMConfigurator is obsolete. Use XmlConfigurator instead of DOMConfigurator. - - - The configuration file must be valid XML. It must contain - at least one element called log4net that holds - the configuration data. - - - The configuration file will be monitored using a - and depends on the behavior of that class. - - - For more information on how to configure log4net using - a separate configuration file, see . - - - - - - - Assembly level attribute to configure the . - - - - AliasDomainAttribute is obsolete. Use AliasRepositoryAttribute instead of AliasDomainAttribute. - - - This attribute may only be used at the assembly scope and can only - be used once per assembly. - - - Use this attribute to configure the - without calling one of the - methods. - - - Nicko Cadell - Gert Driesen - - - - Assembly level attribute to configure the . - - - - This attribute may only be used at the assembly scope and can only - be used once per assembly. - - - Use this attribute to configure the - without calling one of the - methods. - - - If neither of the or - properties are set the configuration is loaded from the application's .config file. - If set the property takes priority over the - property. The property - specifies a path to a file to load the config from. The path is relative to the - application's base directory; . - The property is used as a postfix to the assembly file name. - The config file must be located in the application's base directory; . - For example in a console application setting the to - config has the same effect as not specifying the or - properties. - - - The property can be set to cause the - to watch the configuration file for changes. - - - - Log4net will only look for assembly level configuration attributes once. - When using the log4net assembly level attributes to control the configuration - of log4net you must ensure that the first call to any of the - methods is made from the assembly with the configuration - attributes. - - - If you cannot guarantee the order in which log4net calls will be made from - different assemblies you must use programmatic configuration instead, i.e. - call the method directly. - - - - Nicko Cadell - Gert Driesen - - - - Default constructor - - - - Default constructor - - - - - - Configures the for the specified assembly. - - The assembly that this attribute was defined on. - The repository to configure. - - - Configure the repository using the . - The specified must extend the - class otherwise the will not be able to - configure it. - - - The does not extend . - - - - Attempt to load configuration from the local file system - - The assembly that this attribute was defined on. - The repository to configure. - - - - Configure the specified repository using a - - The repository to configure. - the FileInfo pointing to the config file - - - - Attempt to load configuration from a URI - - The assembly that this attribute was defined on. - The repository to configure. - - - - Gets or sets the filename of the configuration file. - - - The filename of the configuration file. - - - - If specified, this is the name of the configuration file to use with - the . This file path is relative to the - application base directory (). - - - The takes priority over the . 
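- (A hedged sketch of the attribute usage described above; the file name is illustrative, and the Watch flag is documented below.)
-
- // e.g. in AssemblyInfo.cs
- [assembly: log4net.Config.XmlConfigurator(ConfigFile = "log4net.config", Watch = true)]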
- - - - - - Gets or sets the extension of the configuration file. - - - The extension of the configuration file. - - - - If specified this is the extension for the configuration file. - The path to the config file is built by using the application - base directory (), - the assembly file name and the config file extension. - - - If the is set to MyExt then - possible config file names would be: MyConsoleApp.exe.MyExt or - MyClassLibrary.dll.MyExt. - - - The takes priority over the . - - - - - - Gets or sets a value indicating whether to watch the configuration file. - - - true if the configuration should be watched, false otherwise. - - - - If this flag is specified and set to true then the framework - will watch the configuration file and will reload the config each time - the file is modified. - - - The config file can only be watched if it is loaded from local disk. - In a No-Touch (Smart Client) deployment where the application is downloaded - from a web server the config file may not reside on the local disk - and therefore it may not be able to watch it. - - - Watching configuration is not supported on the SSCLI. - - - - - - Class to register for the log4net section of the configuration file - - - The log4net section of the configuration file needs to have a section - handler registered. This is the section handler used. It simply returns - the XML element that is the root of the section. - - - Example of registering the log4net section handler : - - - -
- - - log4net configuration XML goes here - - - - - Nicko Cadell - Gert Driesen - - - - Initializes a new instance of the class. - - - - Default constructor. - - - - - - Parses the configuration section. - - The configuration settings in a corresponding parent configuration section. - The configuration context when called from the ASP.NET configuration system. Otherwise, this parameter is reserved and is a null reference. - The for the log4net section. - The for the log4net section. - - - Returns the containing the configuration data, - - - - - - Assembly level attribute that specifies a plugin to attach to - the repository. - - - - Specifies the type of a plugin to create and attach to the - assembly's repository. The plugin type must implement the - interface. - - - Nicko Cadell - Gert Driesen - - - - Interface used to create plugins. - - - - Interface used to create a plugin. - - - Nicko Cadell - Gert Driesen - - - - Creates the plugin object. - - the new plugin instance - - - Create and return a new plugin instance. - - - - - - Initializes a new instance of the class - with the specified type. - - The type name of plugin to create. - - - Create the attribute with the plugin type specified. - - - Where possible use the constructor that takes a . - - - - - - Initializes a new instance of the class - with the specified type. - - The type of plugin to create. - - - Create the attribute with the plugin type specified. - - - - - - Creates the plugin object defined by this attribute. - - - - Creates the instance of the object as - specified by this attribute. - - - The plugin object. - - - - Returns a representation of the properties of this object. - - - - Overrides base class method to - return a representation of the properties of this object. - - - A representation of the properties of this object - - - - Gets or sets the type for the plugin. - - - The type for the plugin. - - - - The type for the plugin. - - - - - - Gets or sets the type name for the plugin. - - - The type name for the plugin. - - - - The type name for the plugin. - - - Where possible use the property instead. - - - - - - Assembly level attribute to configure the . - - - - This attribute may only be used at the assembly scope and can only - be used once per assembly. - - - Use this attribute to configure the - without calling one of the - methods. - - - Nicko Cadell - - - - Construct provider attribute with type specified - - the type of the provider to use - - - The provider specified must subclass the - class. - - - - - - Configures the SecurityContextProvider - - The assembly that this attribute was defined on. - The repository to configure. - - - Creates a provider instance from the specified. - Sets this as the default security context provider . - - - - - - Gets or sets the type of the provider to use. - - - the type of the provider to use. - - - - The provider specified must subclass the - class. - - - - - - Use this class to initialize the log4net environment using an Xml tree. - - - - Configures a using an Xml tree. - - - Nicko Cadell - Gert Driesen - - - - Private constructor - - - - - Automatically configures the log4net system based on the - application's configuration settings. - - - - Each application has a configuration file. This has the - same name as the application with '.config' appended. - This file is XML and calling this function prompts the - configurator to look in that file for a section called - log4net that contains the configuration data. 
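- (A hedged reconstruction of the section-handler registration that the stripped example above refers to; the assembly name "log4net" in the type attribute is assumed.)
-
- <configuration>
-   <configSections>
-     <section name="log4net"
-              type="log4net.Config.Log4NetConfigurationSectionHandler,log4net" />
-   </configSections>
-   <log4net>
-     <!-- log4net configuration XML goes here -->
-   </log4net>
- </configuration>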
- - - To use this method to configure log4net you must specify - the section - handler for the log4net configuration section. See the - for an example. - - - - - - - Automatically configures the using settings - stored in the application's configuration file. - - - - Each application has a configuration file. This has the - same name as the application with '.config' appended. - This file is XML and calling this function prompts the - configurator to look in that file for a section called - log4net that contains the configuration data. - - - To use this method to configure log4net you must specify - the section - handler for the log4net configuration section. See the - for an example. - - - The repository to configure. - - - - Configures log4net using a log4net element - - - - Loads the log4net configuration from the XML element - supplied as . - - - The element to parse. - - - - Configures the using the specified XML - element. - - - Loads the log4net configuration from the XML element - supplied as . - - The repository to configure. - The element to parse. - - - - Configures log4net using the specified configuration file. - - The XML file to load the configuration from. - - - The configuration file must be valid XML. It must contain - at least one element called log4net that holds - the log4net configuration data. - - - The log4net configuration file can possible be specified in the application's - configuration file (either MyAppName.exe.config for a - normal application on Web.config for an ASP.NET application). - - - The first element matching <configuration> will be read as the - configuration. If this file is also a .NET .config file then you must specify - a configuration section for the log4net element otherwise .NET will - complain. Set the type for the section handler to , for example: - - -
- - - - - The following example configures log4net using a configuration file, the location of which is stored in the application's configuration file:
- 
- using log4net.Config;
- using System.IO;
- using System.Configuration;
- 
- ...
- 
- XmlConfigurator.Configure(new FileInfo(ConfigurationSettings.AppSettings["log4net-config-file"]));
- 
- In the .config file, the path to the log4net configuration file can be specified like this (the file name is illustrative):
- 
- <configuration>
-   <appSettings>
-     <add key="log4net-config-file" value="log.config" />
-   </appSettings>
- </configuration>
- 
- Configures log4net using the specified configuration URI.
- 
- A URI to load the XML configuration from.
- 
- The configuration data must be valid XML. It must contain at least one element called log4net that holds the log4net configuration data.
- 
- The System.Net.WebRequest class must support the URI scheme specified.
- 
- Configures log4net using the specified configuration data stream.
- 
- A stream to load the XML configuration from.
- 
- The configuration data must be valid XML. It must contain at least one element called log4net that holds the log4net configuration data.
- 
- Note that this method will NOT close the stream parameter.
- 
- Configures the repository using the specified configuration file.
- 
- The repository to configure.
- The XML file to load the configuration from.
- 
- The configuration file must be valid XML. It must contain at least one element called log4net that holds the configuration data.
- 
- The log4net configuration file can be specified in the application's configuration file (either MyAppName.exe.config for a normal application or Web.config for an ASP.NET application).
- 
- The first element matching <configuration> will be read as the configuration. If this file is also a .NET .config file then you must specify a configuration section for the log4net element, otherwise .NET will complain. Set the type for the section handler to log4net.Config.Log4NetConfigurationSectionHandler, as shown in the registration example above.
- - - - - The following example configures log4net using a configuration file, the location of which is stored in the application's configuration file:
- 
- using log4net.Config;
- using System.IO;
- using System.Configuration;
- 
- ...
- 
- XmlConfigurator.Configure(new FileInfo(ConfigurationSettings.AppSettings["log4net-config-file"]));
- 
- In the .config file, the path to the log4net configuration file can be specified as shown in the earlier example.
- 
- Configures the repository using the specified configuration URI.
- 
- The repository to configure.
- A URI to load the XML configuration from.
- 
- The configuration data must be valid XML. It must contain at least one element called log4net that holds the configuration data.
- 
- The System.Net.WebRequest class must support the URI scheme specified.
- 
- Configures the repository using the specified configuration data stream.
- 
- The repository to configure.
- The stream to load the XML configuration from.
- 
- The configuration data must be valid XML. It must contain at least one element called log4net that holds the configuration data.
- 
- Note that this method will NOT close the stream parameter.
- 
- Configures log4net using the file specified, monitors the file for changes and reloads the configuration if a change is detected.
- 
- The XML file to load the configuration from.
- 
- The configuration file must be valid XML. It must contain at least one element called log4net that holds the configuration data.
- 
- The configuration file will be monitored using a FileSystemWatcher and depends on the behavior of that class.
- 
- For more information on how to configure log4net using a separate configuration file, see the Configure overloads above.
- 
- Configures the repository using the file specified, monitors the file for changes and reloads the configuration if a change is detected.
- 
- The repository to configure.
- The XML file to load the configuration from.
- 
- The configuration file must be valid XML. It must contain at least one element called log4net that holds the configuration data.
- 
- The configuration file will be monitored using a FileSystemWatcher and depends on the behavior of that class.
- 
- For more information on how to configure log4net using a separate configuration file, see the Configure overloads above.
- 
- Configures the specified repository using a log4net element.
- 
- The hierarchy to configure.
- The element to parse.
- 
- Loads the log4net configuration from the XML element supplied as the element parameter.
- 
- This method is ultimately called by one of the Configure methods to load the configuration from an XML element.
- 
- Class used to watch config files.
- 
- Uses a FileSystemWatcher to monitor changes to a specified file. Because multiple change notifications may be raised when the file is modified, a timer is used to compress the notifications into a single event. The timer waits for a settle period before delivering the event notification. If any further change notifications arrive while the timer is waiting, it is reset and waits again for that period to elapse.
- 
- The default amount of time to wait after receiving notification before reloading the config file.
- 
- Watch a specified config file used to configure a repository.
- 
- The repository to configure.
- The configuration file to watch.
- 
- Holds the FileInfo used to configure the XmlConfigurator.
- 
- Holds the repository being configured.
- 
- The timer used to compress the notification events.
- 
- Initializes a new instance of the class.
- - The repository to configure. - The configuration file to watch. - - - Initializes a new instance of the class. - - - - - - Event handler used by . - - The firing the event. - The argument indicates the file that caused the event to be fired. - - - This handler reloads the configuration from the file when the event is fired. - - - - - - Event handler used by . - - The firing the event. - The argument indicates the file that caused the event to be fired. - - - This handler reloads the configuration from the file when the event is fired. - - - - - - Called by the timer when the configuration has been updated. - - null - - - - The implementation of the interface suitable - for use with the compact framework - - - - This implementation is a simple - mapping between repository name and - object. - - - The .NET Compact Framework 1.0 does not support retrieving assembly - level attributes therefore unlike the DefaultRepositorySelector - this selector does not examine the calling assembly for attributes. - - - Nicko Cadell - - - - Interface used by the to select the . - - - - The uses a - to specify the policy for selecting the correct - to return to the caller. - - - Nicko Cadell - Gert Driesen - - - - Gets the for the specified assembly. - - The assembly to use to lookup to the - The for the assembly. - - - Gets the for the specified assembly. - - - How the association between and - is made is not defined. The implementation may choose any method for - this association. The results of this method must be repeatable, i.e. - when called again with the same arguments the result must be the - save value. - - - - - - Gets the named . - - The name to use to lookup to the . - The named - - Lookup a named . This is the repository created by - calling . - - - - - Creates a new repository for the assembly specified. - - The assembly to use to create the domain to associate with the . - The type of repository to create, must implement . - The repository created. - - - The created will be associated with the domain - specified such that a call to with the - same assembly specified will return the same repository instance. - - - How the association between and - is made is not defined. The implementation may choose any method for - this association. - - - - - - Creates a new repository with the name specified. - - The name to associate with the . - The type of repository to create, must implement . - The repository created. - - - The created will be associated with the name - specified such that a call to with the - same name will return the same repository instance. - - - - - - Test if a named repository exists - - the named repository to check - true if the repository exists - - - Test if a named repository exists. Use - to create a new repository and to retrieve - a repository. - - - - - - Gets an array of all currently defined repositories. - - - An array of the instances created by - this . - - - Gets an array of all of the repositories created by this selector. - - - - - - Event to notify that a logger repository has been created. - - - Event to notify that a logger repository has been created. - - - - Event raised when a new repository is created. - The event source will be this selector. The event args will - be a which - holds the newly created . - - - - - - Create a new repository selector - - the type of the repositories to create, must implement - - - Create an new compact repository selector. - The default type for repositories must be specified, - an appropriate value would be . 
- - - throw if is null - throw if does not implement - - - - Get the for the specified assembly - - not used - The default - - - The argument is not used. This selector does not create a - separate repository for each assembly. - - - As a named repository is not specified the default repository is - returned. The default repository is named log4net-default-repository. - - - - - - Get the named - - the name of the repository to lookup - The named - - - Get the named . The default - repository is log4net-default-repository. Other repositories - must be created using the . - If the named repository does not exist an exception is thrown. - - - throw if is null - throw if the does not exist - - - - Create a new repository for the assembly specified - - not used - the type of repository to create, must implement - the repository created - - - The argument is not used. This selector does not create a - separate repository for each assembly. - - - If the is null then the - default repository type specified to the constructor is used. - - - As a named repository is not specified the default repository is - returned. The default repository is named log4net-default-repository. - - - - - - Create a new repository for the repository specified - - the repository to associate with the - the type of repository to create, must implement . - If this param is null then the default repository type is used. - the repository created - - - The created will be associated with the repository - specified such that a call to with the - same repository specified will return the same repository instance. - - - If the named repository already exists an exception will be thrown. - - - If is null then the default - repository type specified to the constructor is used. - - - throw if is null - throw if the already exists - - - - Test if a named repository exists - - the named repository to check - true if the repository exists - - - Test if a named repository exists. Use - to create a new repository and to retrieve - a repository. - - - - - - Gets a list of objects - - an array of all known objects - - - Gets an array of all of the repositories created by this selector. - - - - - - Notify the registered listeners that the repository has been created - - The repository that has been created - - - Raises the LoggerRepositoryCreatedEvent - event. - - - - - - Event to notify that a logger repository has been created. - - - Event to notify that a logger repository has been created. - - - - Event raised when a new repository is created. - The event source will be this selector. The event args will - be a which - holds the newly created . - - - - - - The default implementation of the interface. - - - - Uses attributes defined on the calling assembly to determine how to - configure the hierarchy for the repository. - - - Nicko Cadell - Gert Driesen - - - - Creates a new repository selector. - - The type of the repositories to create, must implement - - - Create an new repository selector. - The default type for repositories must be specified, - an appropriate value would be . - - - is . - does not implement . - - - - Gets the for the specified assembly. - - The assembly use to lookup the . - - - The type of the created and the repository - to create can be overridden by specifying the - attribute on the . - - - The default values are to use the - implementation of the interface and to use the - as the name of the repository. - - - The created will be automatically configured using - any attributes defined on - the . 
- - - The for the assembly - is . - - - - Gets the for the specified repository. - - The repository to use to lookup the . - The for the specified repository. - - - Returns the named repository. If is null - a is thrown. If the repository - does not exist a is thrown. - - - Use to create a repository. - - - is . - does not exist. - - - - Create a new repository for the assembly specified - - the assembly to use to create the repository to associate with the . - The type of repository to create, must implement . - The repository created. - - - The created will be associated with the repository - specified such that a call to with the - same assembly specified will return the same repository instance. - - - The type of the created and - the repository to create can be overridden by specifying the - attribute on the - . The default values are to use the - implementation of the - interface and to use the - as the name of the repository. - - - The created will be automatically - configured using any - attributes defined on the . - - - If a repository for the already exists - that repository will be returned. An error will not be raised and that - repository may be of a different type to that specified in . - Also the attribute on the - assembly may be used to override the repository type specified in - . - - - is . - - - - Creates a new repository for the assembly specified. - - the assembly to use to create the repository to associate with the . - The type of repository to create, must implement . - The name to assign to the created repository - Set to true to read and apply the assembly attributes - The repository created. - - - The created will be associated with the repository - specified such that a call to with the - same assembly specified will return the same repository instance. - - - The type of the created and - the repository to create can be overridden by specifying the - attribute on the - . The default values are to use the - implementation of the - interface and to use the - as the name of the repository. - - - The created will be automatically - configured using any - attributes defined on the . - - - If a repository for the already exists - that repository will be returned. An error will not be raised and that - repository may be of a different type to that specified in . - Also the attribute on the - assembly may be used to override the repository type specified in - . - - - is . - - - - Creates a new repository for the specified repository. - - The repository to associate with the . - The type of repository to create, must implement . - If this param is then the default repository type is used. - The new repository. - - - The created will be associated with the repository - specified such that a call to with the - same repository specified will return the same repository instance. - - - is . - already exists. - - - - Test if a named repository exists - - the named repository to check - true if the repository exists - - - Test if a named repository exists. Use - to create a new repository and to retrieve - a repository. - - - - - - Gets a list of objects - - an array of all known objects - - - Gets an array of all of the repositories created by this selector. - - - - - - Aliases a repository to an existing repository. - - The repository to alias. - The repository that the repository is aliased to. - - - The repository specified will be aliased to the repository when created. - The repository must not already exist. 
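- 
- Taken together, the selector methods documented above compose as in the
- following sketch ("Reports" is a hypothetical repository name):
- 
- IRepositorySelector selector = LoggerManager.RepositorySelector;
- 
- // Reuse the repository if it exists, otherwise create it.
- ILoggerRepository repo = selector.ExistsRepository("Reports")
-     ? selector.GetRepository("Reports")
-     : selector.CreateRepository("Reports", typeof(log4net.Repository.Hierarchy.Hierarchy));
- 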
- - - When the repository is created it must utilize the same repository type as - the repository it is aliased to, otherwise the aliasing will fail. - - - - is . - -or- - is . - - - - - Notifies the registered listeners that the repository has been created. - - The repository that has been created. - - - Raises the event. - - - - - - Gets the repository name and repository type for the specified assembly. - - The assembly that has a . - in/out param to hold the repository name to use for the assembly, caller should set this to the default value before calling. - in/out param to hold the type of the repository to create for the assembly, caller should set this to the default value before calling. - is . - - - - Configures the repository using information from the assembly. - - The assembly containing - attributes which define the configuration for the repository. - The repository to configure. - - is . - -or- - is . - - - - - Loads the attribute defined plugins on the assembly. - - The assembly that contains the attributes. - The repository to add the plugins to. - - is . - -or- - is . - - - - - Loads the attribute defined aliases on the assembly. - - The assembly that contains the attributes. - The repository to alias to. - - is . - -or- - is . - - - - - Event to notify that a logger repository has been created. - - - Event to notify that a logger repository has been created. - - - - Event raised when a new repository is created. - The event source will be this selector. The event args will - be a which - holds the newly created . - - - - - - Defined error codes that can be passed to the method. - - - - Values passed to the method. - - - Nicko Cadell - - - - A general error - - - - - Error while writing output - - - - - Failed to flush file - - - - - Failed to close file - - - - - Unable to open output file - - - - - No layout specified - - - - - Failed to parse address - - - - - Appenders may delegate their error handling to an . - - - - Error handling is a particularly tedious to get right because by - definition errors are hard to predict and to reproduce. - - - Nicko Cadell - Gert Driesen - - - - Handles the error and information about the error condition is passed as - a parameter. - - The message associated with the error. - The that was thrown when the error occurred. - The error code associated with the error. - - - Handles the error and information about the error condition is passed as - a parameter. - - - - - - Prints the error message passed as a parameter. - - The message associated with the error. - The that was thrown when the error occurred. - - - See . - - - - - - Prints the error message passed as a parameter. - - The message associated with the error. - - - See . - - - - - - Interface for objects that require fixing. - - - - Interface that indicates that the object requires fixing before it - can be taken outside the context of the appender's - method. - - - When objects that implement this interface are stored - in the context properties maps - and - are fixed - (see ) the - method will be called. - - - Nicko Cadell - - - - Get a portable version of this object - - the portable instance of this object - - - Get a portable instance object that represents the current - state of this object. The portable object can be stored - and logged from any thread with identical results. - - - - - - Interface that all loggers implement - - - - This interface supports logging events and testing if a level - is enabled for logging. - - - These methods will not throw exceptions. 
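- 
- This no-throw guarantee is typically achieved by catching failures
- internally and delegating them to the IErrorHandler contract described
- above, as in this sketch (the writer and ErrorHandler members are
- hypothetical appender state):
- 
- try
- {
-     writer.Write(renderedMessage);
- }
- catch (Exception ex)
- {
-     // Report the failure instead of letting it escape the logging call.
-     ErrorHandler.Error("Failed to write logging event", ex, ErrorCode.WriteFailure);
- }
- 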
Note to implementor, ensure - that the implementation of these methods cannot allow an exception - to be thrown to the caller. - - - Nicko Cadell - Gert Driesen - - - - This generic form is intended to be used by wrappers. - - The declaring type of the method that is - the stack boundary into the logging system for this call. - The level of the message to be logged. - The message object to log. - the exception to log, including its stack trace. Pass null to not log an exception. - - - Generates a logging event for the specified using - the and . - - - - - - This is the most generic printing method that is intended to be used - by wrappers. - - The event being logged. - - - Logs the specified logging event through this logger. - - - - - - Checks if this logger is enabled for a given passed as parameter. - - The level to check. - - true if this logger is enabled for level, otherwise false. - - - - Test if this logger is going to log events of the specified . - - - - - - Gets the name of the logger. - - - The name of the logger. - - - - The name of this logger - - - - - - Gets the where this - Logger instance is attached to. - - - The that this logger belongs to. - - - - Gets the where this - Logger instance is attached to. - - - - - - Base interface for all wrappers - - - - Base interface for all wrappers. - - - All wrappers must implement this interface. - - - Nicko Cadell - - - - Get the implementation behind this wrapper object. - - - The object that in implementing this object. - - - - The object that in implementing this - object. The Logger object may not - be the same object as this object because of logger decorators. - This gets the actual underlying objects that is used to process - the log events. - - - - - - Delegate used to handle logger repository creation event notifications - - The which created the repository. - The event args - that holds the instance that has been created. - - - Delegate used to handle logger repository creation event notifications. - - - - - - Provides data for the event. - - - - A - event is raised every time a is created. - - - - - - The created - - - - - Construct instance using specified - - the that has been created - - - Construct instance using specified - - - - - - The that has been created - - - The that has been created - - - - The that has been created - - - - - - Test if an triggers an action - - - - Implementations of this interface allow certain appenders to decide - when to perform an appender specific action. - - - The action or behavior triggered is defined by the implementation. - - - Nicko Cadell - - - - Test if this event triggers the action - - The event to check - true if this event triggers the action, otherwise false - - - Return true if this event triggers the action - - - - - - Defines the default set of levels recognized by the system. - - - - Each has an associated . - - - Levels have a numeric that defines the relative - ordering between levels. Two Levels with the same - are deemed to be equivalent. - - - The levels that are recognized by log4net are set for each - and each repository can have different levels defined. The levels are stored - in the on the repository. Levels are - looked up by name from the . - - - When logging at level INFO the actual level used is not but - the value of LoggerRepository.LevelMap["INFO"]. The default value for this is - , but this can be changed by reconfiguring the level map. - - - Each level has a in addition to its . The - is the string that is written into the output log. 
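- 
- Levels come into play when events are logged through the ILogger
- interface documented earlier; a sketch (MyService is a hypothetical
- caller type and logger an ILogger instance):
- 
- if (logger.IsEnabledFor(Level.Info))
- {
-     // The declaring type marks the stack boundary for location information.
-     logger.Log(typeof(MyService), Level.Info, "service started", null);
- }
- 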
By default - the display name is the same as the level name, but this can be used to alias levels - or to localize the log output. - - - Some of the predefined levels recognized by the system are: - - - - . - - - . - - - . - - - . - - - . - - - . - - - . - - - - Nicko Cadell - Gert Driesen - - - - Constructor - - Integer value for this level, higher values represent more severe levels. - The string name of this level. - The display name for this level. This may be localized or otherwise different from the name - - - Initializes a new instance of the class with - the specified level name and value. - - - - - - Constructor - - Integer value for this level, higher values represent more severe levels. - The string name of this level. - - - Initializes a new instance of the class with - the specified level name and value. - - - - - - Returns the representation of the current - . - - - A representation of the current . - - - - Returns the level . - - - - - - Compares levels. - - The object to compare against. - true if the objects are equal. - - - Compares the levels of instances, and - defers to base class if the target object is not a - instance. - - - - - - Returns a hash code - - A hash code for the current . - - - Returns a hash code suitable for use in hashing algorithms and data - structures like a hash table. - - - Returns the hash code of the level . - - - - - - Compares this instance to a specified object and returns an - indication of their relative values. - - A instance or to compare with this instance. - - A 32-bit signed integer that indicates the relative order of the - values compared. The return value has these meanings: - - - Value - Meaning - - - Less than zero - This instance is less than . - - - Zero - This instance is equal to . - - - Greater than zero - - This instance is greater than . - -or- - is . - - - - - - - must be an instance of - or ; otherwise, an exception is thrown. - - - is not a . - - - - Returns a value indicating whether a specified - is greater than another specified . - - A - A - - true if is greater than - ; otherwise, false. - - - - Compares two levels. - - - - - - Returns a value indicating whether a specified - is less than another specified . - - A - A - - true if is less than - ; otherwise, false. - - - - Compares two levels. - - - - - - Returns a value indicating whether a specified - is greater than or equal to another specified . - - A - A - - true if is greater than or equal to - ; otherwise, false. - - - - Compares two levels. - - - - - - Returns a value indicating whether a specified - is less than or equal to another specified . - - A - A - - true if is less than or equal to - ; otherwise, false. - - - - Compares two levels. - - - - - - Returns a value indicating whether two specified - objects have the same value. - - A or . - A or . - - true if the value of is the same as the - value of ; otherwise, false. - - - - Compares two levels. - - - - - - Returns a value indicating whether two specified - objects have different values. - - A or . - A or . - - true if the value of is different from - the value of ; otherwise, false. - - - - Compares two levels. - - - - - - Compares two specified instances. - - The first to compare. - The second to compare. - - A 32-bit signed integer that indicates the relative order of the - two values compared. The return value has these meanings: - - - Value - Meaning - - - Less than zero - is less than . - - - Zero - is equal to . - - - Greater than zero - is greater than . - - - - - - Compares two levels. 
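- 
- Because ordering is defined by the numeric value, threshold checks reduce
- to plain comparisons; for example:
- 
- bool significant = Level.Error >= Level.Warn;        // true
- int order = Level.Compare(Level.Debug, Level.Info);  // negative: Debug sorts below Info
- 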
- - - - - - The level designates a higher level than all the rest. - - - - - The level designates very severe error events. - System unusable, emergencies. - - - - - The level designates very severe error events - that will presumably lead the application to abort. - - - - - The level designates very severe error events. - Take immediate action, alerts. - - - - - The level designates very severe error events. - Critical condition, critical. - - - - - The level designates very severe error events. - - - - - The level designates error events that might - still allow the application to continue running. - - - - - The level designates potentially harmful - situations. - - - - - The level designates informational messages - that highlight the progress of the application at the highest level. - - - - - The level designates informational messages that - highlight the progress of the application at coarse-grained level. - - - - - The level designates fine-grained informational - events that are most useful to debug an application. - - - - - The level designates fine-grained informational - events that are most useful to debug an application. - - - - - The level designates fine-grained informational - events that are most useful to debug an application. - - - - - The level designates fine-grained informational - events that are most useful to debug an application. - - - - - The level designates fine-grained informational - events that are most useful to debug an application. - - - - - The level designates fine-grained informational - events that are most useful to debug an application. - - - - - The level designates the lowest level possible. - - - - - Gets the name of this level. - - - The name of this level. - - - - Gets the name of this level. - - - - - - Gets the value of this level. - - - The value of this level. - - - - Gets the value of this level. - - - - - - Gets the display name of this level. - - - The display name of this level. - - - - Gets the display name of this level. - - - - - - A strongly-typed collection of objects. - - Nicko Cadell - - - - Creates a read-only wrapper for a LevelCollection instance. - - list to create a readonly wrapper arround - - A LevelCollection wrapper that is read-only. - - - - - Initializes a new instance of the LevelCollection class - that is empty and has the default initial capacity. - - - - - Initializes a new instance of the LevelCollection class - that has the specified initial capacity. - - - The number of elements that the new LevelCollection is initially capable of storing. - - - - - Initializes a new instance of the LevelCollection class - that contains elements copied from the specified LevelCollection. - - The LevelCollection whose elements are copied to the new collection. - - - - Initializes a new instance of the LevelCollection class - that contains elements copied from the specified array. - - The array whose elements are copied to the new list. - - - - Initializes a new instance of the LevelCollection class - that contains elements copied from the specified collection. - - The collection whose elements are copied to the new list. - - - - Allow subclasses to avoid our default constructors - - - - - - Copies the entire LevelCollection to a one-dimensional - array. - - The one-dimensional array to copy to. - - - - Copies the entire LevelCollection to a one-dimensional - array, starting at the specified index of the target array. - - The one-dimensional array to copy to. - The zero-based index in at which copying begins. 
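- 
- A short usage sketch of the collection operations documented here:
- 
- LevelCollection levels = new LevelCollection();
- levels.Add(Level.Debug);
- levels.Add(Level.Error);
- bool hasDebug = levels.Contains(Level.Debug);   // true
- 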
- - - - Adds a to the end of the LevelCollection. - - The to be added to the end of the LevelCollection. - The index at which the value has been added. - - - - Removes all elements from the LevelCollection. - - - - - Creates a shallow copy of the . - - A new with a shallow copy of the collection data. - - - - Determines whether a given is in the LevelCollection. - - The to check for. - true if is found in the LevelCollection; otherwise, false. - - - - Returns the zero-based index of the first occurrence of a - in the LevelCollection. - - The to locate in the LevelCollection. - - The zero-based index of the first occurrence of - in the entire LevelCollection, if found; otherwise, -1. - - - - - Inserts an element into the LevelCollection at the specified index. - - The zero-based index at which should be inserted. - The to insert. - - is less than zero - -or- - is equal to or greater than . - - - - - Removes the first occurrence of a specific from the LevelCollection. - - The to remove from the LevelCollection. - - The specified was not found in the LevelCollection. - - - - - Removes the element at the specified index of the LevelCollection. - - The zero-based index of the element to remove. - - is less than zero - -or- - is equal to or greater than . - - - - - Returns an enumerator that can iterate through the LevelCollection. - - An for the entire LevelCollection. - - - - Adds the elements of another LevelCollection to the current LevelCollection. - - The LevelCollection whose elements should be added to the end of the current LevelCollection. - The new of the LevelCollection. - - - - Adds the elements of a array to the current LevelCollection. - - The array whose elements should be added to the end of the LevelCollection. - The new of the LevelCollection. - - - - Adds the elements of a collection to the current LevelCollection. - - The collection whose elements should be added to the end of the LevelCollection. - The new of the LevelCollection. - - - - Sets the capacity to the actual number of elements. - - - - - is less than zero - -or- - is equal to or greater than . - - - - - is less than zero - -or- - is equal to or greater than . - - - - - Gets the number of elements actually contained in the LevelCollection. - - - - - Gets a value indicating whether access to the collection is synchronized (thread-safe). - - true if access to the ICollection is synchronized (thread-safe); otherwise, false. - - - - Gets an object that can be used to synchronize access to the collection. - - - - - Gets or sets the at the specified index. - - The zero-based index of the element to get or set. - - is less than zero - -or- - is equal to or greater than . - - - - - Gets a value indicating whether the collection has a fixed size. - - true if the collection has a fixed size; otherwise, false. The default is false - - - - Gets a value indicating whether the IList is read-only. - - true if the collection is read-only; otherwise, false. The default is false - - - - Gets or sets the number of elements the LevelCollection can contain. - - - - - Supports type-safe iteration over a . - - - - - Advances the enumerator to the next element in the collection. - - - true if the enumerator was successfully advanced to the next element; - false if the enumerator has passed the end of the collection. - - - The collection was modified after the enumerator was created. - - - - - Sets the enumerator to its initial position, before the first element in the collection. - - - - - Gets the current element in the collection. 
- - - - - Type visible only to our subclasses - Used to access protected constructor - - - - - A value - - - - - Supports simple iteration over a . - - - - - Initializes a new instance of the Enumerator class. - - - - - - Advances the enumerator to the next element in the collection. - - - true if the enumerator was successfully advanced to the next element; - false if the enumerator has passed the end of the collection. - - - The collection was modified after the enumerator was created. - - - - - Sets the enumerator to its initial position, before the first element in the collection. - - - - - Gets the current element in the collection. - - - - - An evaluator that triggers at a threshold level - - - - This evaluator will trigger if the level of the event - passed to - is equal to or greater than the - level. - - - Nicko Cadell - - - - The threshold for triggering - - - - - Create a new evaluator using the threshold. - - - - Create a new evaluator using the threshold. - - - This evaluator will trigger if the level of the event - passed to - is equal to or greater than the - level. - - - - - - Create a new evaluator using the specified threshold. - - the threshold to trigger at - - - Create a new evaluator using the specified threshold. - - - This evaluator will trigger if the level of the event - passed to - is equal to or greater than the - level. - - - - - - Is this the triggering event? - - The event to check - This method returns true, if the event level - is equal or higher than the . - Otherwise it returns false - - - This evaluator will trigger if the level of the event - passed to - is equal to or greater than the - level. - - - - - - the threshold to trigger at - - - The that will cause this evaluator to trigger - - - - This evaluator will trigger if the level of the event - passed to - is equal to or greater than the - level. - - - - - - Mapping between string name and Level object - - - - Mapping between string name and object. - This mapping is held separately for each . - The level name is case insensitive. - - - Nicko Cadell - - - - Mapping from level name to Level object. The - level name is case insensitive - - - - - Construct the level map - - - - Construct the level map. - - - - - - Clear the internal maps of all levels - - - - Clear the internal maps of all levels - - - - - - Create a new Level and add it to the map - - the string to display for the Level - the level value to give to the Level - - - Create a new Level and add it to the map - - - - - - - Create a new Level and add it to the map - - the string to display for the Level - the level value to give to the Level - the display name to give to the Level - - - Create a new Level and add it to the map - - - - - - Add a Level to the map - - the Level to add - - - Add a Level to the map - - - - - - Lookup a named level from the map - - the name of the level to lookup is taken from this level. - If the level is not set on the map then this level is added - the level in the map with the name specified - - - Lookup a named level from the map. The name of the level to lookup is taken - from the property of the - argument. - - - If no level with the specified name is found then the - argument is added to the level map - and returned. - - - - - - Lookup a by name - - The name of the Level to lookup - a Level from the map with the name specified - - - Returns the from the - map with the name specified. If the no level is - found then null is returned. - - - - - - Return all possible levels as a list of Level objects. 
- - all possible levels as a list of Level objects - - - Return all possible levels as a list of Level objects. - - - - - - The internal representation of caller location information. - - - - This class uses the System.Diagnostics.StackTrace class to generate - a call stack. The caller's information is then extracted from this stack. - - - The System.Diagnostics.StackTrace class is not supported on the - .NET Compact Framework 1.0 therefore caller location information is not - available on that framework. - - - The System.Diagnostics.StackTrace class has this to say about Release builds: - - - "StackTrace information will be most informative with Debug build configurations. - By default, Debug builds include debug symbols, while Release builds do not. The - debug symbols contain most of the file, method name, line number, and column - information used in constructing StackFrame and StackTrace objects. StackTrace - might not report as many method calls as expected, due to code transformations - that occur during optimization." - - - This means that in a Release build the caller information may be incomplete or may - not exist at all! Therefore caller location information cannot be relied upon in a Release build. - - - Nicko Cadell - Gert Driesen - - - - When location information is not available the constant - NA is returned. Current value of this string - constant is ?. - - - - - Constructor - - The declaring type of the method that is - the stack boundary into the logging system for this call. - - - Initializes a new instance of the - class based on the current thread. - - - - - - Constructor - - The fully qualified class name. - The method name. - The file name. - The line number of the method within the file. - - - Initializes a new instance of the - class with the specified data. - - - - - - Gets the fully qualified class name of the caller making the logging - request. - - - The fully qualified class name of the caller making the logging - request. - - - - Gets the fully qualified class name of the caller making the logging - request. - - - - - - Gets the file name of the caller. - - - The file name of the caller. - - - - Gets the file name of the caller. - - - - - - Gets the line number of the caller. - - - The line number of the caller. - - - - Gets the line number of the caller. - - - - - - Gets the method name of the caller. - - - The method name of the caller. - - - - Gets the method name of the caller. - - - - - - Gets all available caller information - - - All available caller information, in the format - fully.qualified.classname.of.caller.methodName(Filename:line) - - - - Gets all available caller information, in the format - fully.qualified.classname.of.caller.methodName(Filename:line) - - - - - - Static manager that controls the creation of repositories - - - - Static manager that controls the creation of repositories - - - This class is used by the wrapper managers (e.g. ) - to provide access to the objects. - - - This manager also holds the that is used to - lookup and create repositories. The selector can be set either programmatically using - the property, or by setting the log4net.RepositorySelector - AppSetting in the applications config file to the fully qualified type name of the - selector to use. - - - Nicko Cadell - Gert Driesen - - - - Private constructor to prevent instances. Only static methods should be used. - - - - Private constructor to prevent instances. Only static methods should be used. 
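- 
- Before moving on, a sketch of the LevelMap lookup described earlier
- (repository is an ILoggerRepository instance; NOTICE is a hypothetical
- custom level):
- 
- LevelMap map = repository.LevelMap;
- 
- // Register a custom level and look it up again; names are case-insensitive.
- map.Add("NOTICE", 35000);
- Level notice = map["notice"];   // returns the NOTICE level, or null if not found
- 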
- - - - - - Hook the shutdown event - - - - On the full .NET runtime, the static constructor hooks up the - AppDomain.ProcessExit and AppDomain.DomainUnload> events. - These are used to shutdown the log4net system as the application exits. - - - - - - Register for ProcessExit and DomainUnload events on the AppDomain - - - - This needs to be in a separate method because the events make - a LinkDemand for the ControlAppDomain SecurityPermission. Because - this is a LinkDemand it is demanded at JIT time. Therefore we cannot - catch the exception in the method itself, we have to catch it in the - caller. - - - - - - Return the default instance. - - the repository to lookup in - Return the default instance - - - Gets the for the repository specified - by the argument. - - - - - - Returns the default instance. - - The assembly to use to lookup the repository. - The default instance. - - - - Return the default instance. - - the repository to lookup in - Return the default instance - - - Gets the for the repository specified - by the argument. - - - - - - Returns the default instance. - - The assembly to use to lookup the repository. - The default instance. - - - Returns the default instance. - - - - - - Returns the named logger if it exists. - - The repository to lookup in. - The fully qualified logger name to look for. - - The logger found, or null if the named logger does not exist in the - specified repository. - - - - If the named logger exists (in the specified repository) then it - returns a reference to the logger, otherwise it returns - null. - - - - - - Returns the named logger if it exists. - - The assembly to use to lookup the repository. - The fully qualified logger name to look for. - - The logger found, or null if the named logger does not exist in the - specified assembly's repository. - - - - If the named logger exists (in the specified assembly's repository) then it - returns a reference to the logger, otherwise it returns - null. - - - - - - Returns all the currently defined loggers in the specified repository. - - The repository to lookup in. - All the defined loggers. - - - The root logger is not included in the returned array. - - - - - - Returns all the currently defined loggers in the specified assembly's repository. - - The assembly to use to lookup the repository. - All the defined loggers. - - - The root logger is not included in the returned array. - - - - - - Retrieves or creates a named logger. - - The repository to lookup in. - The name of the logger to retrieve. - The logger with the name specified. - - - Retrieves a logger named as the - parameter. If the named logger already exists, then the - existing instance will be returned. Otherwise, a new instance is - created. - - - By default, loggers do not have a set level but inherit - it from the hierarchy. This is one of the central features of - log4net. - - - - - - Retrieves or creates a named logger. - - The assembly to use to lookup the repository. - The name of the logger to retrieve. - The logger with the name specified. - - - Retrieves a logger named as the - parameter. If the named logger already exists, then the - existing instance will be returned. Otherwise, a new instance is - created. - - - By default, loggers do not have a set level but inherit - it from the hierarchy. This is one of the central features of - log4net. - - - - - - Shorthand for . - - The repository to lookup in. - The of which the fullname will be used as the name of the logger to retrieve. - The logger with the name specified. 
- - - Gets the logger for the fully qualified name of the type specified.
- 
- Shorthand for the GetLogger method.
- 
- the assembly to use to lookup the repository
- The Type of which the fullname will be used as the name of the logger to retrieve.
- The logger with the name specified.
- 
- Gets the logger for the fully qualified name of the type specified.
- 
- Shuts down the log4net system.
- 
- Calling this method will safely close and remove all appenders in all the loggers including root contained in all the default repositories.
- 
- Some appenders need to be closed before the application exits. Otherwise, pending logging events might be lost.
- 
- The shutdown method is careful to close nested appenders before closing regular appenders. This allows configurations where a regular appender is attached to a logger and again to a nested appender.
- 
- Shuts down the repository for the repository specified.
- 
- The repository to shutdown.
- 
- Calling this method will safely close and remove all appenders in all the loggers including root contained in the repository specified.
- 
- Some appenders need to be closed before the application exits. Otherwise, pending logging events might be lost.
- 
- The shutdown method is careful to close nested appenders before closing regular appenders. This allows configurations where a regular appender is attached to a logger and again to a nested appender.
- 
- Shuts down the repository for the repository specified.
- 
- The assembly to use to lookup the repository.
- 
- Calling this method will safely close and remove all appenders in all the loggers including root contained in the repository. The repository is looked up using the assembly specified.
- 
- Some appenders need to be closed before the application exits. Otherwise, pending logging events might be lost.
- 
- The shutdown method is careful to close nested appenders before closing regular appenders. This allows configurations where a regular appender is attached to a logger and again to a nested appender.
- 
- Resets all values contained in this repository instance to their defaults.
- 
- The repository to reset.
- 
- Resets all values contained in the repository instance to their defaults. This removes all appenders from all loggers, sets the level of all non-root loggers to null, sets their additivity flag to true and sets the level of the root logger to its default value. Moreover, message disabling is set to its default "off" value.
- 
- Resets all values contained in this repository instance to their defaults.
- 
- The assembly to use to lookup the repository to reset.
- 
- Resets all values contained in the repository instance to their defaults. This removes all appenders from all loggers, sets the level of all non-root loggers to null, sets their additivity flag to true and sets the level of the root logger to its default value. Moreover, message disabling is set to its default "off" value.
- 
- Creates a repository with the specified name.
- 
- The name of the repository, this must be unique amongst repositories.
- The created for the repository.
- 
- CreateDomain is obsolete. Use CreateRepository instead of CreateDomain.
- 
- Creates the default type of repository, which is a log4net.Repository.Hierarchy.Hierarchy object.
- 
- The name must be unique. Repositories cannot be redefined. An exception will be thrown if the repository already exists.
- 
- The specified repository already exists.
- 
- Creates a repository with the specified name.
- - The name of the repository, this must be unique amongst repositories. - The created for the repository. - - - Creates the default type of which is a - object. - - - The name must be unique. Repositories cannot be redefined. - An will be thrown if the repository already exists. - - - The specified repository already exists. - - - - Creates a repository with the specified name and repository type. - - The name of the repository, this must be unique to the repository. - A that implements - and has a no arg constructor. An instance of this type will be created to act - as the for the repository specified. - The created for the repository. - - - CreateDomain is obsolete. Use CreateRepository instead of CreateDomain. - - - The name must be unique. Repositories cannot be redefined. - An Exception will be thrown if the repository already exists. - - - The specified repository already exists. - - - - Creates a repository with the specified name and repository type. - - The name of the repository, this must be unique to the repository. - A that implements - and has a no arg constructor. An instance of this type will be created to act - as the for the repository specified. - The created for the repository. - - - The name must be unique. Repositories cannot be redefined. - An Exception will be thrown if the repository already exists. - - - The specified repository already exists. - - - - Creates a repository for the specified assembly and repository type. - - The assembly to use to get the name of the repository. - A that implements - and has a no arg constructor. An instance of this type will be created to act - as the for the repository specified. - The created for the repository. - - - CreateDomain is obsolete. Use CreateRepository instead of CreateDomain. - - - The created will be associated with the repository - specified such that a call to with the - same assembly specified will return the same repository instance. - - - - - - Creates a repository for the specified assembly and repository type. - - The assembly to use to get the name of the repository. - A that implements - and has a no arg constructor. An instance of this type will be created to act - as the for the repository specified. - The created for the repository. - - - The created will be associated with the repository - specified such that a call to with the - same assembly specified will return the same repository instance. - - - - - - Gets an array of all currently defined repositories. - - An array of all the known objects. - - - Gets an array of all currently defined repositories. - - - - - - Internal method to get pertinent version info. - - A string of version info. - - - - Called when the event fires - - the that is exiting - null - - - Called when the event fires. - - - When the event is triggered the log4net system is . - - - - - - Called when the event fires - - the that is exiting - null - - - Called when the event fires. - - - When the event is triggered the log4net system is . - - - - - - Initialize the default repository selector - - - - - Gets or sets the repository selector used by the . - - - The repository selector used by the . - - - - The repository selector () is used by - the to create and select repositories - (). - - - The caller to supplies either a string name - or an assembly (if not supplied the assembly is inferred using - ). - - - This context is used by the selector to lookup a specific repository. 
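- 
- In practice the manager is driven as in this sketch ("Reports" is a
- hypothetical repository name):
- 
- // Create a named repository and resolve a logger from it.
- ILoggerRepository repo = LoggerManager.CreateRepository("Reports");
- ILogger logger = LoggerManager.GetLogger("Reports", "Reports.Generator");
- 
- // Close all appenders cleanly before the process exits.
- LoggerManager.Shutdown();
- 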
- - - For the full .NET Framework, the default repository is DefaultRepositorySelector; - for the .NET Compact Framework CompactRepositorySelector is the default - repository. - - - - - - Implementation of the interface. - - - - This class should be used as the base for all wrapper implementations. - - - Nicko Cadell - Gert Driesen - - - - Constructs a new wrapper for the specified logger. - - The logger to wrap. - - - Constructs a new wrapper for the specified logger. - - - - - - The logger that this object is wrapping - - - - - Gets the implementation behind this wrapper object. - - - The object that this object is implementing. - - - - The Logger object may not be the same object as this object - because of logger decorators. - - - This gets the actual underlying objects that is used to process - the log events. - - - - - - Portable data structure used by - - - - Portable data structure used by - - - Nicko Cadell - - - - The logger name. - - - - The logger name. - - - - - - Level of logging event. - - - - Level of logging event. Level cannot be Serializable - because it is a flyweight. Due to its special serialization it - cannot be declared final either. - - - - - - The application supplied message. - - - - The application supplied message of logging event. - - - - - - The name of thread - - - - The name of thread in which this logging event was generated - - - - - - The time the event was logged - - - - The TimeStamp is stored in the local time zone for this computer. - - - - - - Location information for the caller. - - - - Location information for the caller. - - - - - - String representation of the user - - - - String representation of the user's windows name, - like DOMAIN\username - - - - - - String representation of the identity. - - - - String representation of the current thread's principal identity. - - - - - - The string representation of the exception - - - - The string representation of the exception - - - - - - String representation of the AppDomain. - - - - String representation of the AppDomain. - - - - - - Additional event specific properties - - - - A logger or an appender may attach additional - properties to specific events. These properties - have a string key and an object value. - - - - - - Flags passed to the property - - - - Flags passed to the property - - - Nicko Cadell - - - - Fix the MDC - - - - - Fix the NDC - - - - - Fix the rendered message - - - - - Fix the thread name - - - - - Fix the callers location information - - - CAUTION: Very slow to generate - - - - - Fix the callers windows user name - - - CAUTION: Slow to generate - - - - - Fix the domain friendly name - - - - - Fix the callers principal name - - - CAUTION: May be slow to generate - - - - - Fix the exception text - - - - - Fix the event properties - - - - - No fields fixed - - - - - All fields fixed - - - - - Partial fields fixed - - - - This set of partial fields gives good performance. The following fields are fixed: - - - - - - - - - - - - - The internal representation of logging events. - - - - When an affirmative decision is made to log then a - instance is created. This instance - is passed around to the different log4net components. - - - This class is of concern to those wishing to extend log4net. - - - Some of the values in instances of - are considered volatile, that is the values are correct at the - time the event is delivered to appenders, but will not be consistent - at any time afterwards. 
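- 
- The FixFlags enumeration above controls which of these volatile fields
- are captured; a sketch (loggingEvent is a LoggingEvent instance):
- 
- // Fix only the cheap fields; location and identity are slow to capture.
- loggingEvent.Fix = FixFlags.Message | FixFlags.ThreadName | FixFlags.Exception;
- 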
If an event is to be stored and then processed - at a later time these volatile values must be fixed by calling - . There is a performance penalty - for incurred by calling but it - is essential to maintaining data consistency. - - - Nicko Cadell - Gert Driesen - Douglas de la Torre - Daniel Cazzulino - - - - The key into the Properties map for the host name value. - - - - - The key into the Properties map for the thread identity value. - - - - - The key into the Properties map for the user name value. - - - - - Initializes a new instance of the class - from the supplied parameters. - - The declaring type of the method that is - the stack boundary into the logging system for this call. - The repository this event is logged in. - The name of the logger of this event. - The level of this event. - The message of this event. - The exception for this event. - - - Except , and , - all fields of LoggingEvent are filled when actually needed. Call - to cache all data locally - to prevent inconsistencies. - - This method is called by the log4net framework - to create a logging event. - - - - - - Initializes a new instance of the class - using specific data. - - The declaring type of the method that is - the stack boundary into the logging system for this call. - The repository this event is logged in. - Data used to initialize the logging event. - The fields in the struct that have already been fixed. - - - This constructor is provided to allow a - to be created independently of the log4net framework. This can - be useful if you require a custom serialization scheme. - - - Use the method to obtain an - instance of the class. - - - The parameter should be used to specify which fields in the - struct have been preset. Fields not specified in the - will be captured from the environment if requested or fixed. - - - - - - Initializes a new instance of the class - using specific data. - - The declaring type of the method that is - the stack boundary into the logging system for this call. - The repository this event is logged in. - Data used to initialize the logging event. - - - This constructor is provided to allow a - to be created independently of the log4net framework. This can - be useful if you require a custom serialization scheme. - - - Use the method to obtain an - instance of the class. - - - This constructor sets this objects flags to , - this assumes that all the data relating to this event is passed in via the - parameter and no other data should be captured from the environment. - - - - - - Initializes a new instance of the class - using specific data. - - Data used to initialize the logging event. - - - This constructor is provided to allow a - to be created independently of the log4net framework. This can - be useful if you require a custom serialization scheme. - - - Use the method to obtain an - instance of the class. - - - This constructor sets this objects flags to , - this assumes that all the data relating to this event is passed in via the - parameter and no other data should be captured from the environment. - - - - - - Serialization constructor - - The that holds the serialized object data. - The that contains contextual information about the source or destination. - - - Initializes a new instance of the class - with serialized data. - - - - - - Ensure that the repository is set. 
- - the value for the repository - - - - Write the rendered message to a TextWriter - - the writer to write the message to - - - Unlike the property this method - does store the message data in the internal cache. Therefore - if called only once this method should be faster than the - property, however if the message is - to be accessed multiple times then the property will be more efficient. - - - - - - Serializes this object into the provided. - - The to populate with data. - The destination for this serialization. - - - The data in this event must be fixed before it can be serialized. - - - The method must be called during the - method call if this event - is to be used outside that method. - - - - - - Gets the portable data for this . - - The for this event. - - - A new can be constructed using a - instance. - - - Does a fix of the data - in the logging event before returning the event data. - - - - - - Gets the portable data for this . - - The set of data to ensure is fixed in the LoggingEventData - The for this event. - - - A new can be constructed using a - instance. - - - - - - Returns this event's exception's rendered using the - . - - - This event's exception's rendered using the . - - - - Obsolete. Use instead. - - - - - - Returns this event's exception's rendered using the - . - - - This event's exception's rendered using the . - - - - Returns this event's exception's rendered using the - . - - - - - - Fix instance fields that hold volatile data. - - - - Some of the values in instances of - are considered volatile, that is the values are correct at the - time the event is delivered to appenders, but will not be consistent - at any time afterwards. If an event is to be stored and then processed - at a later time these volatile values must be fixed by calling - . There is a performance penalty - incurred by calling but it - is essential to maintaining data consistency. - - - Calling is equivalent to - calling passing the parameter - false. - - - See for more - information. - - - - - - Fixes instance fields that hold volatile data. - - Set to true to not fix data that takes a long time to fix. - - - Some of the values in instances of - are considered volatile, that is the values are correct at the - time the event is delivered to appenders, but will not be consistent - at any time afterwards. If an event is to be stored and then processed - at a later time these volatile values must be fixed by calling - . There is a performance penalty - for incurred by calling but it - is essential to maintaining data consistency. - - - The param controls the data that - is fixed. Some of the data that can be fixed takes a long time to - generate, therefore if you do not require those settings to be fixed - they can be ignored by setting the param - to true. This setting will ignore the - and settings. - - - Set to false to ensure that all - settings are fixed. - - - - - - Fix the fields specified by the parameter - - the fields to fix - - - Only fields specified in the will be fixed. - Fields will not be fixed if they have previously been fixed. - It is not possible to 'unfix' a field. - - - - - - Lookup a composite property in this event - - the key for the property to lookup - the value for the property - - - This event has composite properties that combine together properties from - several different contexts in the following order: - - - this events properties - - This event has that can be set. These - properties are specific to this event only. 
- - - - the thread properties - - The ThreadContext properties that are set on the current - thread. These properties are shared by all events logged on this thread. - - - - the global properties - - The GlobalContext properties that are set globally. These - properties are shared by all the threads in the AppDomain. - - - - - - - - - Get all the composite properties in this event - - the PropertiesDictionary containing all the properties - - - See LookupProperty for details of the composite properties - stored by the event. - - - This method returns a single PropertiesDictionary containing all the - properties defined for this event. - - - - - - The internal logging event data. - - - - - The internal logging event data. - - - - - The internal logging event data. - - - - - The fully qualified Type of the calling - logger class in the stack frame (i.e. the declaring type of the method). - - - - - The application supplied message of the logging event. - - - - - The exception that was thrown. - - - This is not serialized. The string representation - is serialized instead. - - - - - The repository that generated the logging event - - - This is not serialized. - - - - - The fix state for this event - - - These flags indicate which fields have been fixed. - Not serialized. - - - - - Indicates that the internal cache is updatable (i.e. not fixed) - - - This is a separate flag from m_fixFlags as it allows incremental fixing and simpler - changes in the caching strategy. - - - - - Gets the time when the current process started. - - - This is the time when this process started. - - - - The TimeStamp is stored in the local time zone for this computer. - - - Tries to get the start time for the current process. - Failing that it returns the time of the first call to - this property. - - - Note that AppDomains may be loaded and unloaded within the - same process without the process terminating and therefore - without the process start time being reset. - - - - - - Gets the Level of the logging event. - - - The Level of the logging event. - - - - Gets the Level of the logging event. - - - - - - Gets the time of the logging event. - - - The time of the logging event. - - - - The TimeStamp is stored in the local time zone for this computer. - - - - - - Gets the name of the logger that logged the event. - - - The name of the logger that logged the event. - - - - Gets the name of the logger that logged the event. - - - - - - Gets the location information for this logging event. - - - The location information for this logging event. - - - - The collected information is cached for future use. - - - See the class for more information on - supported frameworks and the different behavior in Debug and - Release builds. - - - - - - Gets the message object used to initialize this event. - - - The message object used to initialize this event. - - - - Gets the message object used to initialize this event. - Note that this event may not have a valid message object. - If the event is serialized the message object will not - be transferred. To get the text of the message the - RenderedMessage property must be used - not this property. - - - If there is no defined message object for this event then - null will be returned. - - - - - - Gets the exception object used to initialize this event. - - - The exception object used to initialize this event. - - - - Gets the exception object used to initialize this event. - Note that this event may not have a valid exception object. - If the event is serialized the exception object will not - be transferred. To get the text of the exception the - GetExceptionString method must be used - not this property.
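A short sketch tying together the Fix mechanism and the composite property lookup described above. It assumes an existing LoggingEvent named loggingEvent; the pendingEvents queue is hypothetical, while GlobalContext, ThreadContext, LookupProperty, Fix and GetLoggingEventData are log4net's own members:

    using System;
    using log4net;
    using log4net.Core;

    // Composite lookup resolves event properties first, then thread, then global.
    GlobalContext.Properties["host"] = Environment.MachineName;  // all threads
    ThreadContext.Properties["requestId"] = "42";                // this thread only
    object requestId = loggingEvent.LookupProperty("requestId"); // resolves to "42"

    // Before storing the event for later processing, fix the volatile fields.
    // FixFlags.Partial skips the slow fields (location information, user name).
    loggingEvent.Fix = FixFlags.Partial;
    pendingEvents.Enqueue(loggingEvent.GetLoggingEventData(FixFlags.Partial)); // hypothetical queue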
- - - If there is no defined exception object for this event then - null will be returned. - - - - - - The that this event was created in. - - - - The that this event was created in. - - - - - - Gets the message, rendered through the . - - - The message rendered through the . - - - - The collected information is cached for future use. - - - - - - Gets the name of the current thread. - - - The name of the current thread, or the thread ID when - the name is not available. - - - - The collected information is cached for future use. - - - - - - Gets the name of the current user. - - - The name of the current user, or NOT AVAILABLE when the - underlying runtime has no support for retrieving the name of the - current user. - - - - Calls WindowsIdentity.GetCurrent().Name to get the name of - the current windows user. - - - To improve performance, we could cache the string representation of - the name, and reuse that as long as the identity stayed constant. - Once the identity changed, we would need to re-assign and re-render - the string. - - - However, the WindowsIdentity.GetCurrent() call seems to - return different objects every time, so the current implementation - doesn't do this type of caching. - - - Timing for these operations: - - - - Method - Results - - - WindowsIdentity.GetCurrent() - 10000 loops, 00:00:00.2031250 seconds - - - WindowsIdentity.GetCurrent().Name - 10000 loops, 00:00:08.0468750 seconds - - - - This means we could speed things up almost 40 times by caching the - value of the WindowsIdentity.GetCurrent().Name property, since - this takes (8.04-0.20) = 7.84375 seconds. - - - - - - Gets the identity of the current thread principal. - - - The string name of the identity of the current thread principal. - - - - Calls System.Threading.Thread.CurrentPrincipal.Identity.Name to get - the name of the current thread principal. - - - - - - Gets the AppDomain friendly name. - - - The AppDomain friendly name. - - - - Gets the AppDomain friendly name. - - - - - - Additional event specific properties. - - - Additional event specific properties. - - - - A logger or an appender may attach additional - properties to specific events. These properties - have a string key and an object value. - - - This property is for events that have been added directly to - this event. The aggregate properties (which include these - event properties) can be retrieved using - and . - - - Once the properties have been fixed this property - returns the combined cached properties. This ensures that updates to - this property are always reflected in the underlying storage. When - returning the combined properties there may be more keys in the - Dictionary than expected. - - - - - - The fixed fields in this event - - - The set of fields that are fixed in this event - - - - Fields will not be fixed if they have previously been fixed. - It is not possible to 'unfix' a field. - - - - - - Implementation of wrapper interface. - - - - This implementation of the interface - forwards to the held by the base class. - - - This logger has methods to allow the caller to log at the following - levels: - - - - DEBUG - - The and methods log messages - at the DEBUG level. That is the level with that name defined in the - repositories . The default value - for this level is . The - property tests if this level is enabled for logging. - - - - INFO - - The and methods log messages - at the INFO level. That is the level with that name defined in the - repositories . The default value - for this level is . 
The - property tests if this level is enabled for logging. - - - - WARN - - The and methods log messages - at the WARN level. That is the level with that name defined in the - repositories . The default value - for this level is . The - property tests if this level is enabled for logging. - - - - ERROR - - The and methods log messages - at the ERROR level. That is the level with that name defined in the - repositories . The default value - for this level is . The - property tests if this level is enabled for logging. - - - - FATAL - - The and methods log messages - at the FATAL level. That is the level with that name defined in the - repositories . The default value - for this level is . The - property tests if this level is enabled for logging. - - - - - The values for these levels and their semantic meanings can be changed by - configuring the for the repository. - - - Nicko Cadell - Gert Driesen - - - - The ILog interface is use by application to log messages into - the log4net framework. - - - - Use the to obtain logger instances - that implement this interface. The - static method is used to get logger instances. - - - This class contains methods for logging at different levels and also - has properties for determining if those logging levels are - enabled in the current configuration. - - - This interface can be implemented in different ways. This documentation - specifies reasonable behavior that a caller can expect from the actual - implementation, however different implementations reserve the right to - do things differently. - - - Simple example of logging messages - - ILog log = LogManager.GetLogger("application-log"); - - log.Info("Application Start"); - log.Debug("This is a debug message"); - - if (log.IsDebugEnabled) - { - log.Debug("This is another debug message"); - } - - - - - Nicko Cadell - Gert Driesen - - - Log a message object with the level. - - Log a message object with the level. - - The message object to log. - - - This method first checks if this logger is DEBUG - enabled by comparing the level of this logger with the - level. If this logger is - DEBUG enabled, then it converts the message object - (passed as parameter) to a string by invoking the appropriate - . It then - proceeds to call all the registered appenders in this logger - and also higher in the hierarchy depending on the value of - the additivity flag. - - WARNING Note that passing an - to this method will print the name of the - but no stack trace. To print a stack trace use the - form instead. - - - - - - - - Log a message object with the level including - the stack trace of the passed - as a parameter. - - The message object to log. - The exception to log, including its stack trace. - - - See the form for more detailed information. - - - - - - - Log a formatted string with the level. - - Logs a formatted message string with the level. - - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. - - A String containing zero or more format items - An Object to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. 
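The WARNING above deserves a concrete illustration: the two-argument overload is the one that preserves the stack trace. A minimal sketch, with an illustrative operation and message (the same applies at every level):

    try
    {
        ProcessOrder();   // illustrative operation
    }
    catch (Exception ex)
    {
        log.Debug(ex);                             // renders the exception name, but no stack trace
        log.Debug("Failed to process order", ex);  // logs the message and the full stack trace
    }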
- - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. - - A String containing zero or more format items - An Object to format - An Object to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. - - A String containing zero or more format items - An Object to format - An Object to format - An Object to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. - - An that supplies culture-specific formatting information - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - Log a message object with the level. - - Logs a message object with the level. - - - - This method first checks if this logger is INFO - enabled by comparing the level of this logger with the - level. If this logger is - INFO enabled, then it converts the message object - (passed as parameter) to a string by invoking the appropriate - . It then - proceeds to call all the registered appenders in this logger - and also higher in the hierarchy depending on the value of the - additivity flag. - - WARNING Note that passing an - to this method will print the name of the - but no stack trace. To print a stack trace use the - form instead. - - - The message object to log. - - - - - - Logs a message object with the INFO level including - the stack trace of the passed - as a parameter. - - The message object to log. - The exception to log, including its stack trace. - - - See the form for more detailed information. - - - - - - - Log a formatted message string with the level. - - Logs a formatted message string with the level. - - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. - - A String containing zero or more format items - An Object to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. 
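As a usage sketch of the format-style overloads just described (assuming an ILog named log; the variables are illustrative, and note that in log4net's implementation the provider-less overloads format with the invariant culture):

    // Formatting is deferred until the event is known to be logged.
    log.DebugFormat("Processed {0} records in {1} ms", count, elapsedMs);

    // Pass an IFormatProvider to control culture-specific rendering.
    log.InfoFormat(System.Globalization.CultureInfo.GetCultureInfo("de-DE"),
                   "Durchsatz: {0:N1}", 1234.5);   // renders "1.234,5"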
- - A String containing zero or more format items - An Object to format - An Object to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. - - A String containing zero or more format items - An Object to format - An Object to format - An Object to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. - - An that supplies culture-specific formatting information - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - Log a message object with the level. - - Log a message object with the level. - - - - This method first checks if this logger is WARN - enabled by comparing the level of this logger with the - level. If this logger is - WARN enabled, then it converts the message object - (passed as parameter) to a string by invoking the appropriate - . It then - proceeds to call all the registered appenders in this logger - and also higher in the hierarchy depending on the value of the - additivity flag. - - WARNING Note that passing an - to this method will print the name of the - but no stack trace. To print a stack trace use the - form instead. - - - The message object to log. - - - - - - Log a message object with the level including - the stack trace of the passed - as a parameter. - - The message object to log. - The exception to log, including its stack trace. - - - See the form for more detailed information. - - - - - - - Log a formatted message string with the level. - - Logs a formatted message string with the level. - - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. - - A String containing zero or more format items - An Object to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. - - A String containing zero or more format items - An Object to format - An Object to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. 
To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. - - A String containing zero or more format items - An Object to format - An Object to format - An Object to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. - - An that supplies culture-specific formatting information - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - Log a message object with the level. - - Logs a message object with the level. - - The message object to log. - - - This method first checks if this logger is ERROR - enabled by comparing the level of this logger with the - level. If this logger is - ERROR enabled, then it converts the message object - (passed as parameter) to a string by invoking the appropriate - . It then - proceeds to call all the registered appenders in this logger - and also higher in the hierarchy depending on the value of the - additivity flag. - - WARNING Note that passing an - to this method will print the name of the - but no stack trace. To print a stack trace use the - form instead. - - - - - - - - Log a message object with the level including - the stack trace of the passed - as a parameter. - - The message object to log. - The exception to log, including its stack trace. - - - See the form for more detailed information. - - - - - - - Log a formatted message string with the level. - - Logs a formatted message string with the level. - - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. - - A String containing zero or more format items - An Object to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. - - A String containing zero or more format items - An Object to format - An Object to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. - - A String containing zero or more format items - An Object to format - An Object to format - An Object to format - - - The message is formatted using the String.Format method. 
See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. - - An that supplies culture-specific formatting information - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - Log a message object with the level. - - Log a message object with the level. - - - - This method first checks if this logger is FATAL - enabled by comparing the level of this logger with the - level. If this logger is - FATAL enabled, then it converts the message object - (passed as parameter) to a string by invoking the appropriate - . It then - proceeds to call all the registered appenders in this logger - and also higher in the hierarchy depending on the value of the - additivity flag. - - WARNING Note that passing an - to this method will print the name of the - but no stack trace. To print a stack trace use the - form instead. - - - The message object to log. - - - - - - Log a message object with the level including - the stack trace of the passed - as a parameter. - - The message object to log. - The exception to log, including its stack trace. - - - See the form for more detailed information. - - - - - - - Log a formatted message string with the level. - - Logs a formatted message string with the level. - - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. - - A String containing zero or more format items - An Object to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. - - A String containing zero or more format items - An Object to format - An Object to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. - - A String containing zero or more format items - An Object to format - An Object to format - An Object to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Logs a formatted message string with the level. 
- - An that supplies culture-specific formatting information - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the String.Format method. See - for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - - - Checks if this logger is enabled for the level. - - - true if this logger is enabled for events, false otherwise. - - - - This function is intended to lessen the computational cost of - disabled log debug statements. - - For some ILog interface log, when you write: - - log.Debug("This is entry number: " + i ); - - - You incur the cost constructing the message, string construction and concatenation in - this case, regardless of whether the message is logged or not. - - - If you are worried about speed (who isn't), then you should write: - - - if (log.IsDebugEnabled) - { - log.Debug("This is entry number: " + i ); - } - - - This way you will not incur the cost of parameter - construction if debugging is disabled for log. On - the other hand, if the log is debug enabled, you - will incur the cost of evaluating whether the logger is debug - enabled twice. Once in and once in - the . This is an insignificant overhead - since evaluating a logger takes about 1% of the time it - takes to actually log. This is the preferred style of logging. - - Alternatively if your logger is available statically then the is debug - enabled state can be stored in a static variable like this: - - - private static readonly bool isDebugEnabled = log.IsDebugEnabled; - - - Then when you come to log you can write: - - - if (isDebugEnabled) - { - log.Debug("This is entry number: " + i ); - } - - - This way the debug enabled state is only queried once - when the class is loaded. Using a private static readonly - variable is the most efficient because it is a run time constant - and can be heavily optimized by the JIT compiler. - - - Of course if you use a static readonly variable to - hold the enabled state of the logger then you cannot - change the enabled state at runtime to vary the logging - that is produced. You have to decide if you need absolute - speed or runtime flexibility. - - - - - - - - Checks if this logger is enabled for the level. - - - true if this logger is enabled for events, false otherwise. - - - For more information see . - - - - - - - - Checks if this logger is enabled for the level. - - - true if this logger is enabled for events, false otherwise. - - - For more information see . - - - - - - - - Checks if this logger is enabled for the level. - - - true if this logger is enabled for events, false otherwise. - - - For more information see . - - - - - - - - Checks if this logger is enabled for the level. - - - true if this logger is enabled for events, false otherwise. - - - For more information see . - - - - - - - - Construct a new wrapper for the specified logger. - - The logger to wrap. - - - Construct a new wrapper for the specified logger. - - - - - - Virtual method called when the configuration of the repository changes - - the repository holding the levels - - - Virtual method called when the configuration of the repository changes - - - - - - Logs a message object with the DEBUG level. - - The message object to log. - - - This method first checks if this logger is DEBUG - enabled by comparing the level of this logger with the - DEBUG level. 
If this logger is - DEBUG enabled, then it converts the message object - (passed as parameter) to a string by invoking the appropriate - . It then - proceeds to call all the registered appenders in this logger - and also higher in the hierarchy depending on the value of the - additivity flag. - - - WARNING Note that passing an - to this method will print the name of the - but no stack trace. To print a stack trace use the - form instead. - - - - - - Logs a message object with the DEBUG level - - The message object to log. - The exception to log, including its stack trace. - - - Logs a message object with the DEBUG level including - the stack trace of the passed - as a parameter. - - - See the form for more detailed information. - - - - - - - Logs a formatted message string with the DEBUG level. - - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the DEBUG level. - - A String containing zero or more format items - An Object to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the DEBUG level. - - A String containing zero or more format items - An Object to format - An Object to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the DEBUG level. - - A String containing zero or more format items - An Object to format - An Object to format - An Object to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the DEBUG level. - - An that supplies culture-specific formatting information - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a message object with the INFO level. - - The message object to log. 
- - - This method first checks if this logger is INFO - enabled by comparing the level of this logger with the - INFO level. If this logger is - INFO enabled, then it converts the message object - (passed as parameter) to a string by invoking the appropriate - . It then - proceeds to call all the registered appenders in this logger - and also higher in the hierarchy depending on the value of - the additivity flag. - - - WARNING Note that passing an - to this method will print the name of the - but no stack trace. To print a stack trace use the - form instead. - - - - - - Logs a message object with the INFO level. - - The message object to log. - The exception to log, including its stack trace. - - - Logs a message object with the INFO level including - the stack trace of the - passed as a parameter. - - - See the form for more detailed information. - - - - - - - Logs a formatted message string with the INFO level. - - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the INFO level. - - A String containing zero or more format items - An Object to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the INFO level. - - A String containing zero or more format items - An Object to format - An Object to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the INFO level. - - A String containing zero or more format items - An Object to format - An Object to format - An Object to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the INFO level. - - An that supplies culture-specific formatting information - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. 
- - - - - - Logs a message object with the WARN level. - - the message object to log - - - This method first checks if this logger is WARN - enabled by comparing the level of this logger with the - WARN level. If this logger is - WARN enabled, then it converts the message object - (passed as parameter) to a string by invoking the appropriate - . It then - proceeds to call all the registered appenders in this logger and - also higher in the hierarchy depending on the value of the - additivity flag. - - - WARNING Note that passing an to this - method will print the name of the but no - stack trace. To print a stack trace use the - form instead. - - - - - - Logs a message object with the WARN level - - The message object to log. - The exception to log, including its stack trace. - - - Logs a message object with the WARN level including - the stack trace of the - passed as a parameter. - - - See the form for more detailed information. - - - - - - - Logs a formatted message string with the WARN level. - - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the WARN level. - - A String containing zero or more format items - An Object to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the WARN level. - - A String containing zero or more format items - An Object to format - An Object to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the WARN level. - - A String containing zero or more format items - An Object to format - An Object to format - An Object to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the WARN level. - - An that supplies culture-specific formatting information - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. 
- - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a message object with the ERROR level. - - The message object to log. - - - This method first checks if this logger is ERROR - enabled by comparing the level of this logger with the - ERROR level. If this logger is - ERROR enabled, then it converts the message object - (passed as parameter) to a string by invoking the appropriate - . It then - proceeds to call all the registered appenders in this logger and - also higher in the hierarchy depending on the value of the - additivity flag. - - - WARNING Note that passing an to this - method will print the name of the but no - stack trace. To print a stack trace use the - form instead. - - - - - - Logs a message object with the ERROR level - - The message object to log. - The exception to log, including its stack trace. - - - Logs a message object with the ERROR level including - the stack trace of the - passed as a parameter. - - - See the form for more detailed information. - - - - - - - Logs a formatted message string with the ERROR level. - - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the ERROR level. - - A String containing zero or more format items - An Object to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the ERROR level. - - A String containing zero or more format items - An Object to format - An Object to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the ERROR level. - - A String containing zero or more format items - An Object to format - An Object to format - An Object to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the ERROR level. - - An that supplies culture-specific formatting information - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the method. 
See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a message object with the FATAL level. - - The message object to log. - - - This method first checks if this logger is FATAL - enabled by comparing the level of this logger with the - FATAL level. If this logger is - FATAL enabled, then it converts the message object - (passed as parameter) to a string by invoking the appropriate - . It then - proceeds to call all the registered appenders in this logger and - also higher in the hierarchy depending on the value of the - additivity flag. - - - WARNING Note that passing an to this - method will print the name of the but no - stack trace. To print a stack trace use the - form instead. - - - - - - Logs a message object with the FATAL level - - The message object to log. - The exception to log, including its stack trace. - - - Logs a message object with the FATAL level including - the stack trace of the - passed as a parameter. - - - See the form for more detailed information. - - - - - - - Logs a formatted message string with the FATAL level. - - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the FATAL level. - - A String containing zero or more format items - An Object to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the FATAL level. - - A String containing zero or more format items - An Object to format - An Object to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the FATAL level. - - A String containing zero or more format items - An Object to format - An Object to format - An Object to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - The string is formatted using the - format provider. To specify a localized provider use the - method. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Logs a formatted message string with the FATAL level. 
- - An that supplies culture-specific formatting information - A String containing zero or more format items - An Object array containing zero or more objects to format - - - The message is formatted using the method. See - String.Format for details of the syntax of the format string and the behavior - of the formatting. - - - This method does not take an object to include in the - log event. To pass an use one of the - methods instead. - - - - - - Event handler for the event - - the repository - Empty - - - - The fully qualified name of this declaring type not the type of any subclass. - - - - - Checks if this logger is enabled for the DEBUG - level. - - - true if this logger is enabled for DEBUG events, - false otherwise. - - - - This function is intended to lessen the computational cost of - disabled log debug statements. - - - For some log Logger object, when you write: - - - log.Debug("This is entry number: " + i ); - - - You incur the cost constructing the message, concatenation in - this case, regardless of whether the message is logged or not. - - - If you are worried about speed, then you should write: - - - if (log.IsDebugEnabled()) - { - log.Debug("This is entry number: " + i ); - } - - - This way you will not incur the cost of parameter - construction if debugging is disabled for log. On - the other hand, if the log is debug enabled, you - will incur the cost of evaluating whether the logger is debug - enabled twice. Once in IsDebugEnabled and once in - the Debug. This is an insignificant overhead - since evaluating a logger takes about 1% of the time it - takes to actually log. - - - - - - Checks if this logger is enabled for the INFO level. - - - true if this logger is enabled for INFO events, - false otherwise. - - - - See for more information and examples - of using this method. - - - - - - - Checks if this logger is enabled for the WARN level. - - - true if this logger is enabled for WARN events, - false otherwise. - - - - See for more information and examples - of using this method. - - - - - - - Checks if this logger is enabled for the ERROR level. - - - true if this logger is enabled for ERROR events, - false otherwise. - - - - See for more information and examples of using this method. - - - - - - - Checks if this logger is enabled for the FATAL level. - - - true if this logger is enabled for FATAL events, - false otherwise. - - - - See for more information and examples of using this method. - - - - - - - A SecurityContext used by log4net when interacting with protected resources - - - - A SecurityContext used by log4net when interacting with protected resources - for example with operating system services. This can be used to impersonate - a principal that has been granted privileges on the system resources. - - - Nicko Cadell - - - - Impersonate this SecurityContext - - State supplied by the caller - An instance that will - revoke the impersonation of this SecurityContext, or null - - - Impersonate this security context. Further calls on the current - thread should now be made in the security context provided - by this object. When the result - method is called the security - context of the thread should be reverted to the state it was in - before was called. - - - - - - The providers default instances. - - - - A configured component that interacts with potentially protected system - resources uses a to provide the elevated - privileges required. If the object has - been not been explicitly provided to the component then the component - will request one from this . 
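A sketch of how a consumer typically uses the SecurityContext just described; securityContext stands in for whatever context the configured provider supplied:

    // Impersonate for the duration of the protected operation; disposing the
    // returned object reverts the thread to its previous security context.
    using (IDisposable revert = securityContext.Impersonate(this))
    {
        // access the protected resource (file, event log, ...) here
    }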
- - - By default the provider is - an instance of SecurityContextProvider which returns only - NullSecurityContext objects. This is a reasonable default - where the privileges required are not known by the system. - - This default behavior can be overridden by subclassing the SecurityContextProvider - and overriding the CreateSecurityContext method to return - the desired SecurityContext objects; a sketch follows this section. The default provider - can be replaced by programmatically setting the value of the - DefaultProvider property. - - An alternative is to use the log4net.Config.SecurityContextProviderAttribute. - This attribute can be applied to an assembly in the same way as the - log4net.Config.XmlConfiguratorAttribute. The attribute takes - the type to use as the SecurityContextProvider as an argument. - - Nicko Cadell - - - - The default provider - - - - - Protected default constructor to allow subclassing - - - - Protected default constructor to allow subclassing - - - - - - Create a SecurityContext for a consumer - - The consumer requesting the SecurityContext - An impersonation context - - - The default implementation is to return a NullSecurityContext. - - - Subclasses should override this method to provide their own - behavior. - - - - - - Gets or sets the default SecurityContextProvider - - - The default SecurityContextProvider - - - - The default provider is used by configured components that - require a SecurityContext and have not had one - given to them. - - - By default this is an instance of SecurityContextProvider - that returns NullSecurityContext objects. - - - The default provider can be set programmatically by setting - the value of this property to a subclass of SecurityContextProvider - that has the desired behavior. - - - - - - Delegate used to handle creation of new wrappers. - - The logger to wrap in a wrapper. - - - Delegate used to handle creation of new wrappers. This delegate - is called from the CreateNewWrapperObject - method to construct the wrapper for the specified logger. - - - The delegate to use is supplied to the WrapperMap - constructor. - - - - - - Maps between logger objects and wrapper objects. - - - - This class maintains a mapping between ILogger objects and - ILoggerWrapper objects. Use the GetWrapper method to - look up the ILoggerWrapper for the specified ILogger. - - - New wrapper instances are created by the CreateNewWrapperObject - method. The default behavior is for this method to delegate construction - of the wrapper to the WrapperCreationHandler delegate supplied - to the constructor. This allows specialization of the behavior without - requiring subclassing of this type. - - - Nicko Cadell - Gert Driesen - - - - Initializes a new instance of the WrapperMap - - The handler to use to create the wrapper objects. - - - Initializes a new instance of the WrapperMap class with - the specified handler to create the wrapper objects. - - - - - - Gets the wrapper object for the specified logger. - - The wrapper object for the specified logger - - - If the logger is null then the corresponding wrapper is null. - - - Looks up the wrapper if it has previously been requested and - returns it. If the wrapper has never been requested before then - the virtual CreateNewWrapperObject method is - called. - - - - - - Creates the wrapper object for the specified logger. - - The logger to wrap in a wrapper. - The wrapper object for the logger. - - - This implementation uses the WrapperCreationHandler - passed to the constructor to create the wrapper. This method - can be overridden in a subclass. - - - - - - Called when a monitored repository shutdown event is received. - - The ILoggerRepository that is shutting down - - - This method is called when an ILoggerRepository that this - WrapperMap is holding loggers for has signaled its shutdown - event. The default - behavior of this method is to release the references to the loggers - and their wrappers generated for this repository. - - - - - - Event handler for repository shutdown event. - - The sender of the event.
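Picking up the subclassing route mentioned above, a minimal sketch; MySecurityContext is a hypothetical SecurityContext subclass, while CreateSecurityContext and DefaultProvider are log4net's own members:

    using log4net.Core;

    public class MySecurityContextProvider : SecurityContextProvider
    {
        // Hand out a context carrying whatever privileges the consumer needs.
        public override SecurityContext CreateSecurityContext(object consumer)
        {
            return new MySecurityContext();   // hypothetical SecurityContext subclass
        }
    }

    // Install it as the provider used by components not given a context explicitly.
    SecurityContextProvider.DefaultProvider = new MySecurityContextProvider();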
- The event args. - - - - Map of logger repositories to hashtables of ILogger to ILoggerWrapper mappings - - - - - The handler to use to create the extension wrapper objects. - - - - - Internal reference to the delegate used to register for repository shutdown events. - - - - - Gets the map of logger repositories. - - - Map of logger repositories. - - - - Gets the hashtable that is keyed on . The - values are hashtables keyed on with the - value being the corresponding . - - - - - - Formats a as "HH:mm:ss,fff". - - - - Formats a in the format "HH:mm:ss,fff" for example, "15:49:37,459". - - - Nicko Cadell - Gert Driesen - - - - Render a as a string. - - - - Interface to abstract the rendering of a - instance into a string. - - - The method is used to render the - date to a text writer. - - - Nicko Cadell - Gert Driesen - - - - Formats the specified date as a string. - - The date to format. - The writer to write to. - - - Format the as a string and write it - to the provided. - - - - - - String constant used to specify AbsoluteTimeDateFormat in layouts. Current value is ABSOLUTE. - - - - - String constant used to specify DateTimeDateFormat in layouts. Current value is DATE. - - - - - String constant used to specify ISO8601DateFormat in layouts. Current value is ISO8601. - - - - - Renders the date into a string. Format is "HH:mm:ss". - - The date to render into a string. - The string builder to write to. - - - Subclasses should override this method to render the date - into a string using a precision up to the second. This method - will be called at most once per second and the result will be - reused if it is needed again during the same second. - - - - - - Renders the date into a string. Format is "HH:mm:ss,fff". - - The date to render into a string. - The writer to write to. - - - Uses the method to generate the - time string up to the seconds and then appends the current - milliseconds. The results from are - cached and is called at most once - per second. - - - Sub classes should override - rather than . - - - - - - Last stored time with precision up to the second. - - - - - Last stored time with precision up to the second, formatted - as a string. - - - - - Last stored time with precision up to the second, formatted - as a string. - - - - - Formats a as "dd MMM yyyy HH:mm:ss,fff" - - - - Formats a in the format - "dd MMM yyyy HH:mm:ss,fff" for example, - "06 Nov 1994 15:49:37,459". - - - Nicko Cadell - Gert Driesen - Angelika Schnagl - - - - Default constructor. - - - - Initializes a new instance of the class. - - - - - - Formats the date without the milliseconds part - - The date to format. - The string builder to write to. - - - Formats a DateTime in the format "dd MMM yyyy HH:mm:ss" - for example, "06 Nov 1994 15:49:37". - - - The base class will append the ",fff" milliseconds section. - This method will only be called at most once per second. - - - - - - The format info for the invariant culture. - - - - - Formats the as "yyyy-MM-dd HH:mm:ss,fff". - - - - Formats the specified as a string: "yyyy-MM-dd HH:mm:ss,fff". - - - Nicko Cadell - Gert Driesen - - - - Default constructor - - - - Initializes a new instance of the class. - - - - - - Formats the date without the milliseconds part - - The date to format. - The string builder to write to. - - - Formats the date specified as a string: "yyyy-MM-dd HH:mm:ss". - - - The base class will append the ",fff" milliseconds section. - This method will only be called at most once per second. - - - - - - Formats the using the method. 
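To ground the formatter classes above, a small sketch writing a timestamp with two of them (the output values are illustrative; the types live in log4net.DateFormatter):

    using System;
    using log4net.DateFormatter;

    IDateFormatter iso = new Iso8601DateFormatter();
    iso.FormatDate(DateTime.Now, Console.Out);      // e.g. 2004-11-06 15:49:37,459

    // SimpleDateFormatter defers to DateTime.ToString with the supplied pattern.
    IDateFormatter simple = new SimpleDateFormatter("HH:mm:ss");
    simple.FormatDate(DateTime.Now, Console.Out);   // e.g. 15:49:37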
- - - - Formats the using the method. - - - Nicko Cadell - Gert Driesen - - - - Constructor - - The format string. - - - Initializes a new instance of the class - with the specified format string. - - - The format string must be compatible with the options - that can be supplied to . - - - - - - Formats the date using . - - The date to convert to a string. - The writer to write to. - - - Uses the date format string supplied to the constructor to call - the method to format the date. - - - - - - The format string used to format the . - - - - The format string must be compatible with the options - that can be supplied to . - - - - - - This filter drops all . - - - - You can add this filter to the end of a filter chain to - switch from the default "accept all unless instructed otherwise" - filtering behavior to a "deny all unless instructed otherwise" - behavior. - - - Nicko Cadell - Gert Driesen - - - - Subclass this type to implement customized logging event filtering - - - - Users should extend this class to implement customized logging - event filtering. Note that and - , the parent class of all standard - appenders, have built-in filtering rules. It is suggested that you - first use and understand the built-in rules before rushing to write - your own custom filters. - - - This abstract class assumes and also imposes that filters be - organized in a linear chain. The - method of each filter is called sequentially, in the order of their - addition to the chain. - - - The method must return one - of the integer constants , - or . - - - If the value is returned, then the log event is dropped - immediately without consulting with the remaining filters. - - - If the value is returned, then the next filter - in the chain is consulted. If there are no more filters in the - chain, then the log event is logged. Thus, in the presence of no - filters, the default behavior is to log all logging events. - - - If the value is returned, then the log - event is logged without consulting the remaining filters. - - - The philosophy of log4net filters is largely inspired from the - Linux ipchains. - - - Nicko Cadell - Gert Driesen - - - - Implement this interface to provide customized logging event filtering - - - - Users should implement this interface to implement customized logging - event filtering. Note that and - , the parent class of all standard - appenders, have built-in filtering rules. It is suggested that you - first use and understand the built-in rules before rushing to write - your own custom filters. - - - This abstract class assumes and also imposes that filters be - organized in a linear chain. The - method of each filter is called sequentially, in the order of their - addition to the chain. - - - The method must return one - of the integer constants , - or . - - - If the value is returned, then the log event is dropped - immediately without consulting with the remaining filters. - - - If the value is returned, then the next filter - in the chain is consulted. If there are no more filters in the - chain, then the log event is logged. Thus, in the presence of no - filters, the default behavior is to log all logging events. - - - If the value is returned, then the log - event is logged without consulting the remaining filters. - - - The philosophy of log4net filters is largely inspired from the - Linux ipchains. - - - Nicko Cadell - Gert Driesen - - - - Decide if the logging event should be logged through an appender. 
- - The LoggingEvent to decide upon - The decision of the filter - - - If the decision is , then the event will be - dropped. If the decision is , then the next - filter, if any, will be invoked. If the decision is then - the event will be logged without consulting with other filters in - the chain. - - - - - - Property to get and set the next filter - - - The next filter in the chain - - - - Filters are typically composed into chains. This property allows the next filter in - the chain to be accessed. - - - - - - Points to the next filter in the filter chain. - - - - See for more information. - - - - - - Initialize the filter with the options set - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - Typically filter's options become active immediately on set, - however this method must still be called. - - - - - - Decide if the should be logged through an appender. - - The to decide upon - The decision of the filter - - - If the decision is , then the event will be - dropped. If the decision is , then the next - filter, if any, will be invoked. If the decision is then - the event will be logged without consulting with other filters in - the chain. - - - This method is marked abstract and must be implemented - in a subclass. - - - - - - Property to get and set the next filter - - - The next filter in the chain - - - - Filters are typically composed into chains. This property allows the next filter in - the chain to be accessed. - - - - - - Default constructor - - - - - Always returns the integer constant - - the LoggingEvent to filter - Always returns - - - Ignores the event being logged and just returns - . This can be used to change the default filter - chain behavior from to . This filter - should only be used as the last filter in the chain - as any further filters will be ignored! - - - - - - The return result from - - - - The return result from - - - - - - The log event must be dropped immediately without - consulting with the remaining filters, if any, in the chain. - - - - - This filter is neutral with respect to the log event. - The remaining filters, if any, should be consulted for a final decision. - - - - - The log event must be logged immediately without - consulting with the remaining filters, if any, in the chain. - - - - - This is a very simple filter based on matching. - - - - The filter admits two options and - . If there is an exact match between the value - of the option and the of the - , then the method returns in - case the option value is set - to true, if it is false then - is returned. If the does not match then - the result will be . - - - Nicko Cadell - Gert Driesen - - - - flag to indicate if the filter should on a match - - - - - the to match against - - - - - Default constructor - - - - - Tests if the of the logging event matches that of the filter - - the event to filter - see remarks - - - If the of the event matches the level of the - filter then the result of the function depends on the - value of . If it is true then - the function will return , it it is false then it - will return . If the does not match then - the result will be . - - - - - - when matching - - - - The property is a flag that determines - the behavior when a matching is found. 
If the - flag is set to true then the filter will the - logging event, otherwise it will the event. - - - The default is true i.e. to the event. - - - - - - The that the filter will match - - - - The level that this filter will attempt to match against the - level. If a match is found then - the result depends on the value of . - - - - - - This is a simple filter based on matching. - - - - The filter admits three options and - that determine the range of priorities that are matched, and - . If there is a match between the range - of priorities and the of the , then the - method returns in case the - option value is set to true, if it is false - then is returned. If there is no match, is returned. - - - Nicko Cadell - Gert Driesen - - - - Flag to indicate the behavior when matching a - - - - - the minimum value to match - - - - - the maximum value to match - - - - - Default constructor - - - - - Check if the event should be logged. - - the logging event to check - see remarks - - - If the of the logging event is outside the range - matched by this filter then - is returned. If the is matched then the value of - is checked. If it is true then - is returned, otherwise - is returned. - - - - - - when matching and - - - - The property is a flag that determines - the behavior when a matching is found. If the - flag is set to true then the filter will the - logging event, otherwise it will the event. - - - The default is true i.e. to the event. - - - - - - Set the minimum matched - - - - The minimum level that this filter will attempt to match against the - level. If a match is found then - the result depends on the value of . - - - - - - Sets the maximum matched - - - - The maximum level that this filter will attempt to match against the - level. If a match is found then - the result depends on the value of . - - - - - - Simple filter to match a string in the event's logger name. - - - - The works very similar to the . It admits two - options and . If the - of the starts - with the value of the option, then the - method returns in - case the option value is set to true, - if it is false then is returned. - - - Daniel Cazzulino - - - - Flag to indicate the behavior when we have a match - - - - - The logger name string to substring match against the event - - - - - Default constructor - - - - - Check if this filter should allow the event to be logged - - the event being logged - see remarks - - - The rendered message is matched against the . - If the equals the beginning of - the incoming () - then a match will have occurred. If no match occurs - this function will return - allowing other filters to check the event. If a match occurs then - the value of is checked. If it is - true then is returned otherwise - is returned. - - - - - - when matching - - - - The property is a flag that determines - the behavior when a matching is found. If the - flag is set to true then the filter will the - logging event, otherwise it will the event. - - - The default is true i.e. to the event. - - - - - - The that the filter will match - - - - This filter will attempt to match this value against logger name in - the following way. The match will be done against the beginning of the - logger name (using ). The match is - case sensitive. If a match is found then - the result depends on the value of . - - - - - - Simple filter to match a keyed string in the - - - - Simple filter to match a keyed string in the - - - As the MDC has been replaced with layered properties the - should be used instead. 
- - - Nicko Cadell - Gert Driesen - - - - Simple filter to match a string an event property - - - - Simple filter to match a string in the value for a - specific event property - - - Nicko Cadell - - - - Simple filter to match a string in the rendered message - - - - Simple filter to match a string in the rendered message - - - Nicko Cadell - Gert Driesen - - - - Flag to indicate the behavior when we have a match - - - - - The string to substring match against the message - - - - - A string regex to match - - - - - A regex object to match (generated from m_stringRegexToMatch) - - - - - Default constructor - - - - - Initialize and precompile the Regex if required - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - - - - Check if this filter should allow the event to be logged - - the event being logged - see remarks - - - The rendered message is matched against the . - If the occurs as a substring within - the message then a match will have occurred. If no match occurs - this function will return - allowing other filters to check the event. If a match occurs then - the value of is checked. If it is - true then is returned otherwise - is returned. - - - - - - when matching or - - - - The property is a flag that determines - the behavior when a matching is found. If the - flag is set to true then the filter will the - logging event, otherwise it will the event. - - - The default is true i.e. to the event. - - - - - - Sets the static string to match - - - - The string that will be substring matched against - the rendered message. If the message contains this - string then the filter will match. If a match is found then - the result depends on the value of . - - - One of or - must be specified. - - - - - - Sets the regular expression to match - - - - The regular expression pattern that will be matched against - the rendered message. If the message matches this - pattern then the filter will match. If a match is found then - the result depends on the value of . - - - One of or - must be specified. - - - - - - The key to use to lookup the string from the event properties - - - - - Default constructor - - - - - Check if this filter should allow the event to be logged - - the event being logged - see remarks - - - The event property for the is matched against - the . - If the occurs as a substring within - the property value then a match will have occurred. If no match occurs - this function will return - allowing other filters to check the event. If a match occurs then - the value of is checked. If it is - true then is returned otherwise - is returned. - - - - - - The key to lookup in the event properties and then match against. - - - - The key name to use to lookup in the properties map of the - . The match will be performed against - the value of this property if it exists. - - - - - - Simple filter to match a string in the - - - - Simple filter to match a string in the - - - As the MDC has been replaced with named stacks stored in the - properties collections the should - be used instead. - - - Nicko Cadell - Gert Driesen - - - - Default constructor - - - - Sets the to "NDC". - - - - - - Write the event appdomain name to the output - - - - Writes the to the output writer. 
- - - Daniel Cazzulino - Nicko Cadell - - - - Abstract class that provides the formatting functionality that - derived classes need. - - - Conversion specifiers in a conversion patterns are parsed to - individual PatternConverters. Each of which is responsible for - converting a logging event in a converter specific manner. - - Nicko Cadell - - - - Abstract class that provides the formatting functionality that - derived classes need. - - - - Conversion specifiers in a conversion patterns are parsed to - individual PatternConverters. Each of which is responsible for - converting a logging event in a converter specific manner. - - - Nicko Cadell - Gert Driesen - - - - Initial buffer size - - - - - Maximum buffer size before it is recycled - - - - - Protected constructor - - - - Initializes a new instance of the class. - - - - - - Evaluate this pattern converter and write the output to a writer. - - that will receive the formatted result. - The state object on which the pattern converter should be executed. - - - Derived pattern converters must override this method in order to - convert conversion specifiers in the appropriate way. - - - - - - Set the next pattern converter in the chains - - the pattern converter that should follow this converter in the chain - the next converter - - - The PatternConverter can merge with its neighbor during this method (or a sub class). - Therefore the return value may or may not be the value of the argument passed in. - - - - - - Write the pattern converter to the writer with appropriate formatting - - that will receive the formatted result. - The state object on which the pattern converter should be executed. - - - This method calls to allow the subclass to perform - appropriate conversion of the pattern converter. If formatting options have - been specified via the then this method will - apply those formattings before writing the output. - - - - - - Fast space padding method. - - to which the spaces will be appended. - The number of spaces to be padded. - - - Fast space padding method. - - - - - - The option string to the converter - - - - - Write an dictionary to a - - the writer to write to - a to use for object conversion - the value to write to the writer - - - Writes the to a writer in the form: - - - {key1=value1, key2=value2, key3=value3} - - - If the specified - is not null then it is used to render the key and value to text, otherwise - the object's ToString method is called. - - - - - - Write an object to a - - the writer to write to - a to use for object conversion - the value to write to the writer - - - Writes the Object to a writer. If the specified - is not null then it is used to render the object to text, otherwise - the object's ToString method is called. - - - - - - Get the next pattern converter in the chain - - - the next pattern converter in the chain - - - - Get the next pattern converter in the chain - - - - - - Gets or sets the formatting info for this converter - - - The formatting info for this converter - - - - Gets or sets the formatting info for this converter - - - - - - Gets or sets the option value for this converter - - - The option for this converter - - - - Gets or sets the option value for this converter - - - - - - Initializes a new instance of the class. - - - - - Derived pattern converters must override this method in order to - convert conversion specifiers in the correct way. - - that will receive the formatted result. - The on which the pattern converter should be executed. 
- - - - Derived pattern converters must override this method in order to - convert conversion specifiers in the correct way. - - that will receive the formatted result. - The state object on which the pattern converter should be executed. - - - - Flag indicating if this converter handles exceptions - - - false if this converter handles exceptions - - - - - Flag indicating if this converter handles the logging event exception - - false if this converter handles the logging event exception - - - If this converter handles the exception object contained within - , then this property should be set to - false. Otherwise, if the layout ignores the exception - object, then the property should be set to true. - - - Set this value to override a this default setting. The default - value is true, this converter does not handle the exception. - - - - - - Write the event appdomain name to the output - - that will receive the formatted result. - the event being logged - - - Writes the to the output . - - - - - - Date pattern converter, uses a to format - the date of a . - - - - Render the to the writer as a string. - - - The value of the determines - the formatting of the date. The following values are allowed: - - - Option value - Output - - - ISO8601 - - Uses the formatter. - Formats using the "yyyy-MM-dd HH:mm:ss,fff" pattern. - - - - DATE - - Uses the formatter. - Formats using the "dd MMM yyyy HH:mm:ss,fff" for example, "06 Nov 1994 15:49:37,459". - - - - ABSOLUTE - - Uses the formatter. - Formats using the "HH:mm:ss,yyyy" for example, "15:49:37,459". - - - - other - - Any other pattern string uses the formatter. - This formatter passes the pattern string to the - method. - For details on valid patterns see - DateTimeFormatInfo Class. - - - - - - The is in the local time zone and is rendered in that zone. - To output the time in Universal time see . - - - Nicko Cadell - - - - The used to render the date to a string - - - - The used to render the date to a string - - - - - - Initialize the converter pattern based on the property. - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - - - - Convert the pattern into the rendered message - - that will receive the formatted result. - the event being logged - - - Pass the to the - for it to render it to the writer. - - - The passed is in the local time zone. - - - - - - Write the exception text to the output - - - - If an exception object is stored in the logging event - it will be rendered into the pattern output with a - trailing newline. - - - If there is no exception then nothing will be output - and no trailing newline will be appended. - It is typical to put a newline before the exception - and to have the exception as the last data in the pattern. - - - Nicko Cadell - - - - Default constructor - - - - - Write the exception text to the output - - that will receive the formatted result. - the event being logged - - - If an exception object is stored in the logging event - it will be rendered into the pattern output with a - trailing newline. - - - If there is no exception then nothing will be output - and no trailing newline will be appended. - It is typical to put a newline before the exception - and to have the exception as the last data in the pattern. 
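- 
- For illustration, a minimal sketch (assuming the standard log4net
- PatternLayout API described later in this file) that places the exception
- converter last in the pattern, so its trailing newline cleanly ends the
- output when an exception is present:
- 
-   PatternLayout layout = new PatternLayout();
-   layout.ConversionPattern = "%date [%thread] %-5level %logger - %message%newline%exception";
-   layout.ActivateOptions();
- 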
- - - - - - Writes the caller location file name to the output - - - - Writes the value of the for - the event to the output writer. - - - Nicko Cadell - - - - Write the caller location file name to the output - - that will receive the formatted result. - the event being logged - - - Writes the value of the for - the to the output . - - - - - - Write the caller location info to the output - - - - Writes the to the output writer. - - - Nicko Cadell - - - - Write the caller location info to the output - - that will receive the formatted result. - the event being logged - - - Writes the to the output writer. - - - - - - Writes the event identity to the output - - - - Writes the value of the to - the output writer. - - - Daniel Cazzulino - Nicko Cadell - - - - Writes the event identity to the output - - that will receive the formatted result. - the event being logged - - - Writes the value of the - to - the output . - - - - - - Write the event level to the output - - - - Writes the display name of the event - to the writer. - - - Nicko Cadell - - - - Write the event level to the output - - that will receive the formatted result. - the event being logged - - - Writes the of the - to the . - - - - - - Write the caller location line number to the output - - - - Writes the value of the for - the event to the output writer. - - - Nicko Cadell - - - - Write the caller location line number to the output - - that will receive the formatted result. - the event being logged - - - Writes the value of the for - the to the output . - - - - - - Converter for logger name - - - - Outputs the of the event. - - - Nicko Cadell - - - - Converter to output and truncate '.' separated strings - - - - This abstract class supports truncating a '.' separated string - to show a specified number of elements from the right hand side. - This is used to truncate class names that are fully qualified. - - - Subclasses should override the method to - return the fully qualified string. - - - Nicko Cadell - - - - Initialize the converter - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - - - - Get the fully qualified string data - - the event being logged - the fully qualified name - - - Overridden by subclasses to get the fully qualified name before the - precision is applied to it. - - - Return the fully qualified '.' (dot/period) separated string. - - - - - - Convert the pattern to the rendered message - - that will receive the formatted result. - the event being logged - - Render the to the precision - specified by the property. - - - - - Gets the fully qualified name of the logger - - the event being logged - The fully qualified logger name - - - Returns the of the . - - - - - - Writes the event message to the output - - - - Uses the method - to write out the event message. - - - Nicko Cadell - - - - Writes the event message to the output - - that will receive the formatted result. - the event being logged - - - Uses the method - to write out the event message. - - - - - - Write the method name to the output - - - - Writes the caller location to - the output. - - - Nicko Cadell - - - - Write the method name to the output - - that will receive the formatted result. - the event being logged - - - Writes the caller location to - the output. 
- - - - - - Converter to include event NDC - - - - Outputs the value of the event property named NDC. - - - The should be used instead. - - - Nicko Cadell - - - - Write the event NDC to the output - - that will receive the formatted result. - the event being logged - - - As the thread context stacks are now stored in named event properties - this converter simply looks up the value of the NDC property. - - - The should be used instead. - - - - - - Property pattern converter - - - - Writes out the value of a named property. The property name - should be set in the - property. - - - If the is set to null - then all the properties are written as key value pairs. - - - Nicko Cadell - - - - Write the property value to the output - - that will receive the formatted result. - the event being logged - - - Writes out the value of a named property. The property name - should be set in the - property. - - - If the is set to null - then all the properties are written as key value pairs. - - - - - - Converter to output the relative time of the event - - - - Converter to output the time of the event relative to the start of the program. - - - Nicko Cadell - - - - Write the relative time to the output - - that will receive the formatted result. - the event being logged - - - Writes out the relative time of the event in milliseconds. - That is the number of milliseconds between the event - and the . - - - - - - Helper method to get the time difference between two DateTime objects - - start time (in the current local time zone) - end time (in the current local time zone) - the time difference in milliseconds - - - - Converter to include event thread name - - - - Writes the to the output. - - - Nicko Cadell - - - - Write the ThreadName to the output - - that will receive the formatted result. - the event being logged - - - Writes the to the . - - - - - - Pattern converter for the class name - - - - Outputs the of the event. - - - Nicko Cadell - - - - Gets the fully qualified name of the class - - the event being logged - The fully qualified type name for the caller location - - - Returns the of the . - - - - - - Converter to include event user name - - Douglas de la Torre - Nicko Cadell - - - - Convert the pattern to the rendered message - - that will receive the formatted result. - the event being logged - - - - Write the TimeStamp to the output - - - - Date pattern converter, uses a to format - the date of a . - - - Uses a to format the - in Universal time. - - - See the for details on the date pattern syntax. - - - - Nicko Cadell - - - - Write the TimeStamp to the output - - that will receive the formatted result. - the event being logged - - - Pass the to the - for it to render it to the writer. - - - The passed is in the local time zone, this is converted - to Universal time before it is rendered. - - - - - - - A Layout that renders only the Exception text from the logging event - - - - A Layout that renders only the Exception text from the logging event. - - - This Layout should only be used with appenders that utilize multiple - layouts (e.g. ). - - - Nicko Cadell - Gert Driesen - - - - Extend this abstract class to create your own log layout format. - - - - This is the base implementation of the - interface. Most layout objects should extend this class. - - - - - - Subclasses must implement the - method. - - - Subclasses should set the in their default - constructor. - - - - Nicko Cadell - Gert Driesen - - - - Interface implemented by layout objects - - - - An object is used to format a - as text. 
The method is called by an - appender to transform the into a string. - - - The layout can also supply and - text that is appender before any events and after all the events respectively. - - - Nicko Cadell - Gert Driesen - - - - Implement this method to create your own layout format. - - The TextWriter to write the formatted event to - The event to format - - - This method is called by an appender to format - the as text and output to a writer. - - - If the caller does not have a and prefers the - event to be formatted as a then the following - code can be used to format the event into a . - - - StringWriter writer = new StringWriter(); - Layout.Format(writer, loggingEvent); - string formattedEvent = writer.ToString(); - - - - - - The content type output by this layout. - - The content type - - - The content type output by this layout. - - - This is a MIME type e.g. "text/plain". - - - - - - The header for the layout format. - - the layout header - - - The Header text will be appended before any logging events - are formatted and appended. - - - - - - The footer for the layout format. - - the layout footer - - - The Footer text will be appended after all the logging events - have been formatted and appended. - - - - - - Flag indicating if this layout handle exceptions - - false if this layout handles exceptions - - - If this layout handles the exception object contained within - , then the layout should return - false. Otherwise, if the layout ignores the exception - object, then the layout should return true. - - - - - - The header text - - - - See for more information. - - - - - - The footer text - - - - See for more information. - - - - - - Flag indicating if this layout handles exceptions - - - - false if this layout handles exceptions - - - - - - Empty default constructor - - - - Empty default constructor - - - - - - Activate component options - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - This method must be implemented by the subclass. - - - - - - Implement this method to create your own layout format. - - The TextWriter to write the formatted event to - The event to format - - - This method is called by an appender to format - the as text. - - - - - - The content type output by this layout. - - The content type is "text/plain" - - - The content type output by this layout. - - - This base class uses the value "text/plain". - To change this value a subclass must override this - property. - - - - - - The header for the layout format. - - the layout header - - - The Header text will be appended before any logging events - are formatted and appended. - - - - - - The footer for the layout format. - - the layout footer - - - The Footer text will be appended after all the logging events - have been formatted and appended. - - - - - - Flag indicating if this layout handles exceptions - - false if this layout handles exceptions - - - If this layout handles the exception object contained within - , then the layout should return - false. Otherwise, if the layout ignores the exception - object, then the layout should return true. - - - Set this value to override a this default setting. The default - value is true, this layout does not handle the exception. 
- - - - - - Default constructor - - - - Constructs a ExceptionLayout - - - - - - Activate component options - - - - Part of the component activation - framework. - - - This method does nothing as options become effective immediately. - - - - - - Gets the exception text from the logging event - - The TextWriter to write the formatted event to - the event being logged - - - Write the exception string to the . - The exception string is retrieved from . - - - - - - Interface for raw layout objects - - - - Interface used to format a - to an object. - - - This interface should not be confused with the - interface. This interface is used in - only certain specialized situations where a raw object is - required rather than a formatted string. The - is not generally useful than this interface. - - - Nicko Cadell - Gert Driesen - - - - Implement this method to create your own layout format. - - The event to format - returns the formatted event - - - Implement this method to create your own layout format. - - - - - - Adapts any to a - - - - Where an is required this adapter - allows a to be specified. - - - Nicko Cadell - Gert Driesen - - - - The layout to adapt - - - - - Construct a new adapter - - the layout to adapt - - - Create the adapter for the specified . - - - - - - Format the logging event as an object. - - The event to format - returns the formatted event - - - Format the logging event as an object. - - - Uses the object supplied to - the constructor to perform the formatting. - - - - - - A flexible layout configurable with pattern string. - - - - The goal of this class is to a - as a string. The results - depend on the conversion pattern. - - - The conversion pattern is closely related to the conversion - pattern of the printf function in C. A conversion pattern is - composed of literal text and format control expressions called - conversion specifiers. - - - You are free to insert any literal text within the conversion - pattern. - - - Each conversion specifier starts with a percent sign (%) and is - followed by optional format modifiers and a conversion - pattern name. The conversion pattern name specifies the type of - data, e.g. logger, level, date, thread name. The format - modifiers control such things as field width, padding, left and - right justification. The following is a simple example. - - - Let the conversion pattern be "%-5level [%thread]: %message%newline" and assume - that the log4net environment was set to use a PatternLayout. Then the - statements - - - ILog log = LogManager.GetLogger(typeof(TestApp)); - log.Debug("Message 1"); - log.Warn("Message 2"); - - would yield the output - - DEBUG [main]: Message 1 - WARN [main]: Message 2 - - - Note that there is no explicit separator between text and - conversion specifiers. The pattern parser knows when it has reached - the end of a conversion specifier when it reads a conversion - character. In the example above the conversion specifier - %-5level means the level of the logging event should be left - justified to a width of five characters. - - - The recognized conversion pattern names are: - - - - Conversion Pattern Name - Effect - - - a - Equivalent to appdomain - - - appdomain - - Used to output the friendly name of the AppDomain where the - logging event was generated. - - - - c - Equivalent to logger - - - C - Equivalent to type - - - class - Equivalent to type - - - d - Equivalent to date - - - date - - - Used to output the date of the logging event in the local time zone. 
- To output the date in universal time use the %utcdate pattern. - The date conversion - specifier may be followed by a date format specifier enclosed - between braces. For example, %date{HH:mm:ss,fff} or - %date{dd MMM yyyy HH:mm:ss,fff}. If no date format specifier is - given then ISO8601 format is - assumed (). - - - The date format specifier admits the same syntax as the - time pattern string of the . - - - For better results it is recommended to use the log4net date - formatters. These can be specified using one of the strings - "ABSOLUTE", "DATE" and "ISO8601" for specifying - , - and respectively - . For example, - %date{ISO8601} or %date{ABSOLUTE}. - - - These dedicated date formatters perform significantly - better than . - - - - - exception - - - Used to output the exception passed in with the log message. - - - If an exception object is stored in the logging event - it will be rendered into the pattern output with a - trailing newline. - If there is no exception then nothing will be output - and no trailing newline will be appended. - It is typical to put a newline before the exception - and to have the exception as the last data in the pattern. - - - - - F - Equivalent to file - - - file - - - Used to output the file name where the logging request was - issued. - - - WARNING Generating caller location information is - extremely slow. Its use should be avoided unless execution speed - is not an issue. - - - See the note below on the availability of caller location information. - - - - - identity - - - Used to output the user name for the currently active user - (Principal.Identity.Name). - - - WARNING Generating caller information is - extremely slow. Its use should be avoided unless execution speed - is not an issue. - - - - - l - Equivalent to location - - - L - Equivalent to line - - - location - - - Used to output location information of the caller which generated - the logging event. - - - The location information depends on the CLI implementation but - usually consists of the fully qualified name of the calling - method followed by the callers source the file name and line - number between parentheses. - - - The location information can be very useful. However, its - generation is extremely slow. Its use should be avoided - unless execution speed is not an issue. - - - See the note below on the availability of caller location information. - - - - - level - - - Used to output the level of the logging event. - - - - - line - - - Used to output the line number from where the logging request - was issued. - - - WARNING Generating caller location information is - extremely slow. Its use should be avoided unless execution speed - is not an issue. - - - See the note below on the availability of caller location information. - - - - - logger - - - Used to output the logger of the logging event. The - logger conversion specifier can be optionally followed by - precision specifier, that is a decimal constant in - brackets. - - - If a precision specifier is given, then only the corresponding - number of right most components of the logger name will be - printed. By default the logger name is printed in full. - - - For example, for the logger name "a.b.c" the pattern - %logger{2} will output "b.c". - - - - - m - Equivalent to message - - - M - Equivalent to method - - - message - - - Used to output the application supplied message associated with - the logging event. - - - - - mdc - - - The MDC (old name for the ThreadContext.Properties) is now part of the - combined event properties. 
This pattern is supported for compatibility - but is equivalent to property. - - - - - method - - - Used to output the method name where the logging request was - issued. - - - WARNING Generating caller location information is - extremely slow. Its use should be avoided unless execution speed - is not an issue. - - - See the note below on the availability of caller location information. - - - - - n - Equivalent to newline - - - newline - - - Outputs the platform dependent line separator character or - characters. - - - This conversion pattern offers the same performance as using - non-portable line separator strings such as "\n", or "\r\n". - Thus, it is the preferred way of specifying a line separator. - - - - - ndc - - - Used to output the NDC (nested diagnostic context) associated - with the thread that generated the logging event. - - - - - p - Equivalent to level - - - P - Equivalent to property - - - properties - Equivalent to property - - - property - - - Used to output the an event specific property. The key to - lookup must be specified within braces and directly following the - pattern specifier, e.g. %property{user} would include the value - from the property that is keyed by the string 'user'. Each property value - that is to be included in the log must be specified separately. - Properties are added to events by loggers or appenders. By default - the log4net:HostName property is set to the name of machine on - which the event was originally logged. - - - If no key is specified, e.g. %property then all the keys and their - values are printed in a comma separated list. - - - The properties of an event are combined from a number of different - contexts. These are listed below in the order in which they are searched. - - - - the event properties - - The event has that can be set. These - properties are specific to this event only. - - - - the thread properties - - The that are set on the current - thread. These properties are shared by all events logged on this thread. - - - - the global properties - - The that are set globally. These - properties are shared by all the threads in the AppDomain. - - - - - - - - r - Equivalent to timestamp - - - t - Equivalent to thread - - - timestamp - - - Used to output the number of milliseconds elapsed since the start - of the application until the creation of the logging event. - - - - - thread - - - Used to output the name of the thread that generated the - logging event. Uses the thread number if no name is available. - - - - - type - - - Used to output the fully qualified type name of the caller - issuing the logging request. This conversion specifier - can be optionally followed by precision specifier, that - is a decimal constant in brackets. - - - If a precision specifier is given, then only the corresponding - number of right most components of the class name will be - printed. By default the class name is output in fully qualified form. - - - For example, for the class name "log4net.Layout.PatternLayout", the - pattern %type{1} will output "PatternLayout". - - - WARNING Generating the caller class information is - slow. Thus, its use should be avoided unless execution speed is - not an issue. - - - See the note below on the availability of caller location information. - - - - - u - Equivalent to identity - - - username - - - Used to output the WindowsIdentity for the currently - active user. - - - WARNING Generating caller WindowsIdentity information is - extremely slow. 
Its use should be avoided unless execution speed - is not an issue. - - - - - utcdate - - - Used to output the date of the logging event in universal time. - The date conversion - specifier may be followed by a date format specifier enclosed - between braces. For example, %utcdate{HH:mm:ss,fff} or - %utcdate{dd MMM yyyy HH:mm:ss,fff}. If no date format specifier is - given then ISO8601 format is - assumed (). - - - The date format specifier admits the same syntax as the - time pattern string of the . - - - For better results it is recommended to use the log4net date - formatters. These can be specified using one of the strings - "ABSOLUTE", "DATE" and "ISO8601" for specifying - , - and respectively - . For example, - %utcdate{ISO8601} or %utcdate{ABSOLUTE}. - - - These dedicated date formatters perform significantly - better than . - - - - - w - Equivalent to username - - - x - Equivalent to ndc - - - X - Equivalent to mdc - - - % - - - The sequence %% outputs a single percent sign. - - - - - - The single letter patterns are deprecated in favor of the - longer more descriptive pattern names. - - - By default the relevant information is output as is. However, - with the aid of format modifiers it is possible to change the - minimum field width, the maximum field width and justification. - - - The optional format modifier is placed between the percent sign - and the conversion pattern name. - - - The first optional format modifier is the left justification - flag which is just the minus (-) character. Then comes the - optional minimum field width modifier. This is a decimal - constant that represents the minimum number of characters to - output. If the data item requires fewer characters, it is padded on - either the left or the right until the minimum width is - reached. The default is to pad on the left (right justify) but you - can specify right padding with the left justification flag. The - padding character is space. If the data item is larger than the - minimum field width, the field is expanded to accommodate the - data. The value is never truncated. - - - This behavior can be changed using the maximum field - width modifier which is designated by a period followed by a - decimal constant. If the data item is longer than the maximum - field, then the extra characters are removed from the - beginning of the data item and not from the end. For - example, it the maximum field width is eight and the data item is - ten characters long, then the first two characters of the data item - are dropped. This behavior deviates from the printf function in C - where truncation is done from the end. - - - Below are various format modifier examples for the logger - conversion specifier. - -
- 
- Format modifier   left justify   minimum width   maximum width   comment
- 
- %20logger         false          20              none            Left pad with spaces if the logger name
-                                                                  is less than 20 characters long.
- 
- %-20logger        true           20              none            Right pad with spaces if the logger name
-                                                                  is less than 20 characters long.
- 
- %.30logger        NA             none            30              Truncate from the beginning if the logger
-                                                                  name is longer than 30 characters.
- 
- %20.30logger      false          20              30              Left pad with spaces if the logger name is
-                                                                  shorter than 20 characters. However, if the
-                                                                  logger name is longer than 30 characters,
-                                                                  then truncate from the beginning.
- 
- %-20.30logger     true           20              30              Right pad with spaces if the logger name is
-                                                                  shorter than 20 characters. However, if the
-                                                                  logger name is longer than 30 characters,
-                                                                  then truncate from the beginning.
- 
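- 
- A minimal sketch of the modifiers above in use (assuming the standard
- log4net PatternLayout API); the comment spells out the expected padding
- and truncation behavior:
- 
-   // "%-20.30logger" right pads the logger name with spaces up to a
-   // minimum width of 20 characters and, if the name is longer than
-   // 30 characters, truncates it from the beginning.
-   PatternLayout layout = new PatternLayout("%-20.30logger %message%newline");
-   layout.ActivateOptions();
- 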
- - Note about caller location information.
- The following patterns %type %file %line %method %location %class %C %F %L %l %M - all generate caller location information. - Location information uses the System.Diagnostics.StackTrace class to generate - a call stack. The caller's information is then extracted from this stack. -
- 
- 
- The System.Diagnostics.StackTrace class is not supported on the
- .NET Compact Framework 1.0; therefore caller location information is not
- available on that framework.
- 
- 
- The System.Diagnostics.StackTrace class has this to say about Release builds:
- 
- "StackTrace information will be most informative with Debug build configurations.
- By default, Debug builds include debug symbols, while Release builds do not. The
- debug symbols contain most of the file, method name, line number, and column
- information used in constructing StackFrame and StackTrace objects. StackTrace
- might not report as many method calls as expected, due to code transformations
- that occur during optimization."
- 
- This means that in a Release build the caller information may be incomplete or may
- not exist at all. Caller location information therefore cannot be relied upon in a Release build.
- 
- 
- Additional pattern converters may be registered with a specific
- PatternLayout instance using the AddConverter method, as sketched below.
- 
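- 
- A hedged sketch of such a registration; the converter name lowerlevel and
- the class LowerLevelPatternConverter are illustrative only, not part of
- log4net:
- 
-   using System.IO;
-   using log4net.Core;
-   using log4net.Layout;
-   using log4net.Layout.Pattern;
- 
-   // Hypothetical converter: writes the event level in lower case.
-   public class LowerLevelPatternConverter : PatternLayoutConverter
-   {
-       protected override void Convert(TextWriter writer, LoggingEvent loggingEvent)
-       {
-           writer.Write(loggingEvent.Level.DisplayName.ToLowerInvariant());
-       }
-   }
- 
-   // Register the converter before ActivateOptions is called so that the
-   // pattern parser can resolve the %lowerlevel specifier.
-   PatternLayout layout = new PatternLayout();
-   layout.ConversionPattern = "%lowerlevel [%thread] %message%newline";
-   layout.AddConverter("lowerlevel", typeof(LowerLevelPatternConverter));
-   layout.ActivateOptions();
- 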
- 
- This is a more detailed pattern.
- %timestamp [%thread] %level %logger %ndc - %message%newline
- 
- A similar pattern except that the relative time is
- right padded if less than 6 digits, the thread name is left padded if
- shorter than 15 characters and truncated if longer, and the logger
- name is left padded if shorter than 30 characters and truncated if
- longer.
- %-6timestamp [%15.15thread] %-5level %30.30logger %ndc - %message%newline
- 
- Nicko Cadell
- Gert Driesen
- Douglas de la Torre
- Daniel Cazzulino
- 
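- 
- To make the patterns above concrete, a minimal sketch wiring a
- PatternLayout into a console appender programmatically (TestApp is the
- placeholder type used in the earlier example; ConsoleAppender and
- BasicConfigurator are assumed from log4net.Appender and log4net.Config):
- 
-   PatternLayout layout = new PatternLayout("%-5level [%thread]: %message%newline");
-   layout.ActivateOptions();
- 
-   ConsoleAppender appender = new ConsoleAppender();
-   appender.Layout = layout;
-   appender.ActivateOptions();
-   BasicConfigurator.Configure(appender);
- 
-   ILog log = LogManager.GetLogger(typeof(TestApp));
-   log.Debug("Message 1");   // -> DEBUG [main]: Message 1
-   log.Warn("Message 2");    // -> WARN  [main]: Message 2
- 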
- - - Default pattern string for log output. - - - - Default pattern string for log output. - Currently set to the string "%message%newline" - which just prints the application supplied message. - - - - - - A detailed conversion pattern - - - - A conversion pattern which includes Time, Thread, Logger, and Nested Context. - Current value is %timestamp [%thread] %level %logger %ndc - %message%newline. - - - - - - Internal map of converter identifiers to converter types. - - - - This static map is overridden by the m_converterRegistry instance map - - - - - - the pattern - - - - - the head of the pattern converter chain - - - - - patterns defined on this PatternLayout only - - - - - Initialize the global registry - - - - Defines the builtin global rules. - - - - - - Constructs a PatternLayout using the DefaultConversionPattern - - - - The default pattern just produces the application supplied message. - - - Note to Inheritors: This constructor calls the virtual method - . If you override this method be - aware that it will be called before your is called constructor. - - - As per the contract the - method must be called after the properties on this object have been - configured. - - - - - - Constructs a PatternLayout using the supplied conversion pattern - - the pattern to use - - - Note to Inheritors: This constructor calls the virtual method - . If you override this method be - aware that it will be called before your is called constructor. - - - When using this constructor the method - need not be called. This may not be the case when using a subclass. - - - - - - Create the pattern parser instance - - the pattern to parse - The that will format the event - - - Creates the used to parse the conversion string. Sets the - global and instance rules on the . - - - - - - Initialize layout options - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - - - - Produces a formatted string as specified by the conversion pattern. - - the event being logged - The TextWriter to write the formatted event to - - - Parse the using the patter format - specified in the property. - - - - - - Add a converter to this PatternLayout - - the converter info - - - This version of the method is used by the configurator. - Programmatic users should use the alternative method. - - - - - - Add a converter to this PatternLayout - - the name of the conversion pattern for this converter - the type of the converter - - - Add a named pattern converter to this instance. This - converter will be used in the formatting of the event. - This method must be called before . - - - The specified must extend the - type. - - - - - - The pattern formatting string - - - - The ConversionPattern option. This is the string which - controls formatting and consists of a mix of literal content and - conversion specifiers. - - - - - - Wrapper class used to map converter names to converter types - - - - Pattern converter info class used during configuration to - pass to the - method. - - - - - - default constructor - - - - - Gets or sets the name of the conversion pattern - - - - The name of the pattern in the format string - - - - - - Gets or sets the type of the converter - - - - The value specified must extend the - type. 
- - - - - - Type converter for the interface - - - - Used to convert objects to the interface. - Supports converting from the interface to - the interface using the . - - - Nicko Cadell - Gert Driesen - - - - Interface supported by type converters - - - - This interface supports conversion from arbitrary types - to a single target type. See . - - - Nicko Cadell - Gert Driesen - - - - Can the source type be converted to the type supported by this object - - the type to convert - true if the conversion is possible - - - Test if the can be converted to the - type supported by this converter. - - - - - - Convert the source object to the type supported by this object - - the object to convert - the converted object - - - Converts the to the type supported - by this converter. - - - - - - Can the sourceType be converted to an - - the source to be to be converted - true if the source type can be converted to - - - Test if the can be converted to a - . Only is supported - as the . - - - - - - Convert the value to a object - - the value to convert - the object - - - Convert the object to a - object. If the object - is a then the - is used to adapt between the two interfaces, otherwise an - exception is thrown. - - - - - - Extract the value of a property from the - - - - Extract the value of a property from the - - - Nicko Cadell - - - - Constructs a RawPropertyLayout - - - - - Lookup the property for - - The event to format - returns property value - - - Looks up and returns the object value of the property - named . If there is no property defined - with than name then null will be returned. - - - - - - The name of the value to lookup in the LoggingEvent Properties collection. - - - Value to lookup in the LoggingEvent Properties collection - - - - String name of the property to lookup in the . - - - - - - Extract the date from the - - - - Extract the date from the - - - Nicko Cadell - Gert Driesen - - - - Constructs a RawTimeStampLayout - - - - - Gets the as a . - - The event to format - returns the time stamp - - - Gets the as a . - - - The time stamp is in local time. To format the time stamp - in universal time use . - - - - - - Extract the date from the - - - - Extract the date from the - - - Nicko Cadell - Gert Driesen - - - - Constructs a RawUtcTimeStampLayout - - - - - Gets the as a . - - The event to format - returns the time stamp - - - Gets the as a . - - - The time stamp is in universal time. To format the time stamp - in local time use . - - - - - - A very simple layout - - - - SimpleLayout consists of the level of the log statement, - followed by " - " and then the log message itself. For example, - - DEBUG - Hello world - - - - Nicko Cadell - Gert Driesen - - - - Constructs a SimpleLayout - - - - - Initialize layout options - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - - - - Produces a simple formatted output. - - the event being logged - The TextWriter to write the formatted event to - - - Formats the event as the level of the even, - followed by " - " and then the log message itself. The - output is terminated by a newline. - - - - - - Layout that formats the log events as XML elements. - - - - The output of the consists of a series of - log4net:event elements. 
It does not output a complete well-formed XML - file. The output is designed to be included as an external entity - in a separate file to form a correct XML file. - - - For example, if abc is the name of the file where - the output goes, then a well-formed XML file would - be: - - - <?xml version="1.0" ?> - - <!DOCTYPE log4net:events SYSTEM "log4net-events.dtd" [<!ENTITY data SYSTEM "abc">]> - - <log4net:events version="1.2" xmlns:log4net="http://logging.apache.org/log4net/schemas/log4net-events-1.2> - &data; - </log4net:events> - - - This approach enforces the independence of the - and the appender where it is embedded. - - - The version attribute helps components to correctly - interpret output generated by . The value of - this attribute should be "1.2" for release 1.2 and later. - - - Alternatively the Header and Footer properties can be - configured to output the correct XML header, open tag and close tag. - When setting the Header and Footer properties it is essential - that the underlying data store not be appendable otherwise the data - will become invalid XML. - - - Nicko Cadell - Gert Driesen - - - - Layout that formats the log events as XML elements. - - - - This is an abstract class that must be subclassed by an implementation - to conform to a specific schema. - - - Deriving classes must implement the method. - - - Nicko Cadell - Gert Driesen - - - - Protected constructor to support subclasses - - - - Initializes a new instance of the class - with no location info. - - - - - - Protected constructor to support subclasses - - - - The parameter determines whether - location information will be output by the layout. If - is set to true, then the - file name and line number of the statement at the origin of the log - statement will be output. - - - If you are embedding this layout within an SMTPAppender - then make sure to set the LocationInfo option of that - appender as well. - - - - - - Initialize layout options - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - - - - Produces a formatted string. - - The event being logged. - The TextWriter to write the formatted event to - - - Format the and write it to the . - - - This method creates an that writes to the - . The is passed - to the method. Subclasses should override the - method rather than this method. - - - - - - Does the actual writing of the XML. - - The writer to use to output the event to. - The event to write. - - - Subclasses should override this method to format - the as XML. - - - - - - Flag to indicate if location information should be included in - the XML events. - - - - - Writer adapter that ignores Close - - - - - The string to replace invalid chars with - - - - - Gets a value indicating whether to include location information in - the XML events. - - - true if location information should be included in the XML - events; otherwise, false. - - - - If is set to true, then the file - name and line number of the statement at the origin of the log - statement will be output. - - - If you are embedding this layout within an SMTPAppender - then make sure to set the LocationInfo option of that - appender as well. - - - - - - The string to replace characters that can not be expressed in XML with. - - - Not all characters may be expressed in XML. 
This property contains the - string to replace those that can not with. This defaults to a ?. Set it - to the empty string to simply remove offending characters. For more - details on the allowed character ranges see http://www.w3.org/TR/REC-xml/#charsets - Character replacement will occur in the log message, the property names - and the property values. - - - - - - - Gets the content type output by this layout. - - - As this is the XML layout, the value is always "text/xml". - - - - As this is the XML layout, the value is always "text/xml". - - - - - - Constructs an XmlLayout - - - - - Constructs an XmlLayout. - - - - The LocationInfo option takes a boolean value. By - default, it is set to false which means there will be no location - information output by this layout. If the the option is set to - true, then the file name and line number of the statement - at the origin of the log statement will be output. - - - If you are embedding this layout within an SmtpAppender - then make sure to set the LocationInfo option of that - appender as well. - - - - - - Initialize layout options - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - Builds a cache of the element names - - - - - - Does the actual writing of the XML. - - The writer to use to output the event to. - The event to write. - - - Override the base class method - to write the to the . - - - - - - The prefix to use for all generated element names - - - - - The prefix to use for all element names - - - - The default prefix is log4net. Set this property - to change the prefix. If the prefix is set to an empty string - then no prefix will be written. - - - - - - Set whether or not to base64 encode the message. - - - - By default the log message will be written as text to the xml - output. This can cause problems when the message contains binary - data. By setting this to true the contents of the message will be - base64 encoded. If this is set then invalid character replacement - (see ) will not be performed - on the log message. - - - - - - Set whether or not to base64 encode the property values. - - - - By default the properties will be written as text to the xml - output. This can cause problems when one or more properties contain - binary data. By setting this to true the values of the properties - will be base64 encoded. If this is set then invalid character replacement - (see ) will not be performed - on the property values. - - - - - - Layout that formats the log events as XML elements compatible with the log4j schema - - - - Formats the log events according to the http://logging.apache.org/log4j schema. - - - Nicko Cadell - - - - The 1st of January 1970 in UTC - - - - - Constructs an XMLLayoutSchemaLog4j - - - - - Constructs an XMLLayoutSchemaLog4j. - - - - The LocationInfo option takes a boolean value. By - default, it is set to false which means there will be no location - information output by this layout. If the the option is set to - true, then the file name and line number of the statement - at the origin of the log statement will be output. - - - If you are embedding this layout within an SMTPAppender - then make sure to set the LocationInfo option of that - appender as well. 
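For readers wiring one of these XML layouts up programmatically rather than through a configuration file, a minimal sketch follows. It assumes the standard public log4net API (XmlLayout, ConsoleAppender, BasicConfigurator.Configure(IAppender)); the class and logger names are illustrative, and details may vary between log4net versions.

using log4net;
using log4net.Appender;
using log4net.Config;
using log4net.Layout;

public static class XmlLayoutDemo
{
    public static void Main()
    {
        // Build the layout; ActivateOptions must be called after the
        // configuration properties have been set (see above).
        XmlLayout layout = new XmlLayout();
        layout.LocationInfo = true;   // emit file name and line number
        layout.ActivateOptions();

        ConsoleAppender appender = new ConsoleAppender();
        appender.Layout = layout;
        appender.ActivateOptions();

        BasicConfigurator.Configure(appender);

        LogManager.GetLogger(typeof(XmlLayoutDemo)).Info("Hello world");
    }
}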
- - - - - - Actually do the writing of the xml - - the writer to use - the event to write - - - Generate XML that is compatible with the log4j schema. - - - - - - The version of the log4j schema to use. - - - - Only version 1.2 of the log4j schema is supported. - - - - - - The default object Renderer. - - - - The default renderer supports rendering objects and collections to strings. - - - See the method for details of the output. - - - Nicko Cadell - Gert Driesen - - - - Implement this interface in order to render objects as strings - - - - Certain types require special case conversion to - string form. This conversion is done by an object renderer. - Object renderers implement the - interface. - - - Nicko Cadell - Gert Driesen - - - - Render the object to a string - - The map used to lookup renderers - The object to render - The writer to render to - - - Render the object to a - string. - - - The parameter is - provided to lookup and render other objects. This is - very useful where contains - nested objects of unknown type. The - method can be used to render these objects. - - - - - - Default constructor - - - - Default constructor - - - - - - Render the object to a string - - The map used to lookup renderers - The object to render - The writer to render to - - - Render the object to a string. - - - The parameter is - provided to lookup and render other objects. This is - very useful where contains - nested objects of unknown type. The - method can be used to render these objects. - - - The default renderer supports rendering objects to strings as follows: - - - - Value - Rendered String - - - null - - "(null)" - - - - - - - For a one dimensional array this is the - array type name, an open brace, followed by a comma - separated list of the elements (using the appropriate - renderer), followed by a close brace. - - - For example: int[] {1, 2, 3}. - - - If the array is not one dimensional the - Array.ToString() is returned. - - - - - , & - - - Rendered as an open brace, followed by a comma - separated list of the elements (using the appropriate - renderer), followed by a close brace. - - - For example: {a, b, c}. - - - All collection classes that implement its subclasses, - or generic equivalents all implement the interface. - - - - - - - - Rendered as the key, an equals sign ('='), and the value (using the appropriate - renderer). - - - For example: key=value. - - - - - other - - Object.ToString() - - - - - - - - Render the array argument into a string - - The map used to lookup renderers - the array to render - The writer to render to - - - For a one dimensional array this is the - array type name, an open brace, followed by a comma - separated list of the elements (using the appropriate - renderer), followed by a close brace. For example: - int[] {1, 2, 3}. - - - If the array is not one dimensional the - Array.ToString() is returned. - - - - - - Render the enumerator argument into a string - - The map used to lookup renderers - the enumerator to render - The writer to render to - - - Rendered as an open brace, followed by a comma - separated list of the elements (using the appropriate - renderer), followed by a close brace. For example: - {a, b, c}. - - - - - - Render the DictionaryEntry argument into a string - - The map used to lookup renderers - the DictionaryEntry to render - The writer to render to - - - Render the key, an equals sign ('='), and the value (using the appropriate - renderer). For example: key=value. - - - - - - Map class objects to an . 
- - - - Maintains a mapping between types that require special - rendering and the that - is used to render them. - - - The method is used to render an - object using the appropriate renderers defined in this map. - - - Nicko Cadell - Gert Driesen - - - - Default Constructor - - - - Default constructor. - - - - - - Render using the appropriate renderer. - - the object to render to a string - the object rendered as a string - - - This is a convenience method used to render an object to a string. - The alternative method - should be used when streaming output to a . - - - - - - Render using the appropriate renderer. - - the object to render to a string - The writer to render to - - - Find the appropriate renderer for the type of the - parameter. This is accomplished by calling the - method. Once a renderer is found, it is - applied on the object and the result is returned - as a . - - - - - - Gets the renderer for the specified object type - - the object to lookup the renderer for - the renderer for - - - Gets the renderer for the specified object type. - - - Syntactic sugar method that calls - with the type of the object parameter. - - - - - - Gets the renderer for the specified type - - the type to lookup the renderer for - the renderer for the specified type - - - Returns the renderer for the specified type. - If no specific renderer has been defined the - will be returned. - - - - - - Internal function to recursively search interfaces - - the type to lookup the renderer for - the renderer for the specified type - - - - Clear the map of renderers - - - - Clear the custom renderers defined by using - . The - cannot be removed. - - - - - - Register an for . - - the type that will be rendered by - the renderer for - - - Register an object renderer for a specific source type. - This renderer will be returned from a call to - specifying the same as an argument. - - - - - - Get the default renderer instance - - the default renderer - - - Get the default renderer - - - - - - Interface implemented by logger repository plugins. - - - - Plugins define additional behavior that can be associated - with a . - The held by the - property is used to store the plugins for a repository. - - - The log4net.Config.PluginAttribute can be used to - attach plugins to repositories created using configuration - attributes. - - - Nicko Cadell - Gert Driesen - - - - Attaches the plugin to the specified . - - The that this plugin should be attached to. - - - A plugin may only be attached to a single repository. - - - This method is called when the plugin is attached to the repository. - - - - - - Is called when the plugin is to shutdown. - - - - This method is called to notify the plugin that - it should stop operating and should detach from - the repository. - - - - - - Gets the name of the plugin. - - - The name of the plugin. - - - - Plugins are stored in the - keyed by name. Each plugin instance attached to a - repository must have a unique name. - - - - - - A strongly-typed collection of objects. - - Nicko Cadell - - - - Creates a read-only wrapper for a PluginCollection instance. - - list to create a readonly wrapper around - - A PluginCollection wrapper that is read-only. - - - - - Initializes a new instance of the PluginCollection class - that is empty and has the default initial capacity. - - - - - Initializes a new instance of the PluginCollection class - that has the specified initial capacity. - - - The number of elements that the new PluginCollection is initially capable of storing.
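As a concrete illustration of the renderer map described above, here is a hedged sketch of a custom renderer and its registration. The TimeSpanRenderer type and its formatting are invented for the example; the three-argument RenderObject signature and the RendererMap.Put registration follow the documentation in this section.

using System;
using System.IO;
using log4net.ObjectRenderer;

// Hypothetical renderer: formats TimeSpan message objects as milliseconds.
public class TimeSpanRenderer : IObjectRenderer
{
    public void RenderObject(RendererMap rendererMap, object obj, TextWriter writer)
    {
        writer.Write(((TimeSpan)obj).TotalMilliseconds);
        writer.Write(" ms");
    }
}

// Registration, typically once at startup:
//   log4net.LogManager.GetRepository().RendererMap.Put(
//       typeof(TimeSpan), new TimeSpanRenderer());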
- - - - - Initializes a new instance of the PluginCollection class - that contains elements copied from the specified PluginCollection. - - The PluginCollection whose elements are copied to the new collection. - - - - Initializes a new instance of the PluginCollection class - that contains elements copied from the specified array. - - The array whose elements are copied to the new list. - - - - Initializes a new instance of the PluginCollection class - that contains elements copied from the specified collection. - - The collection whose elements are copied to the new list. - - - - Allow subclasses to avoid our default constructors - - - - - - - Copies the entire PluginCollection to a one-dimensional - array. - - The one-dimensional array to copy to. - - - - Copies the entire PluginCollection to a one-dimensional - array, starting at the specified index of the target array. - - The one-dimensional array to copy to. - The zero-based index in at which copying begins. - - - - Adds a to the end of the PluginCollection. - - The to be added to the end of the PluginCollection. - The index at which the value has been added. - - - - Removes all elements from the PluginCollection. - - - - - Creates a shallow copy of the . - - A new with a shallow copy of the collection data. - - - - Determines whether a given is in the PluginCollection. - - The to check for. - true if is found in the PluginCollection; otherwise, false. - - - - Returns the zero-based index of the first occurrence of a - in the PluginCollection. - - The to locate in the PluginCollection. - - The zero-based index of the first occurrence of - in the entire PluginCollection, if found; otherwise, -1. - - - - - Inserts an element into the PluginCollection at the specified index. - - The zero-based index at which should be inserted. - The to insert. - - is less than zero - -or- - is equal to or greater than . - - - - - Removes the first occurrence of a specific from the PluginCollection. - - The to remove from the PluginCollection. - - The specified was not found in the PluginCollection. - - - - - Removes the element at the specified index of the PluginCollection. - - The zero-based index of the element to remove. - - is less than zero. - -or- - is equal to or greater than . - - - - - Returns an enumerator that can iterate through the PluginCollection. - - An for the entire PluginCollection. - - - - Adds the elements of another PluginCollection to the current PluginCollection. - - The PluginCollection whose elements should be added to the end of the current PluginCollection. - The new of the PluginCollection. - - - - Adds the elements of a array to the current PluginCollection. - - The array whose elements should be added to the end of the PluginCollection. - The new of the PluginCollection. - - - - Adds the elements of a collection to the current PluginCollection. - - The collection whose elements should be added to the end of the PluginCollection. - The new of the PluginCollection. - - - - Sets the capacity to the actual number of elements. - - - - - is less than zero. - -or- - is equal to or greater than . - - - - - is less than zero. - -or- - is equal to or greater than . - - - - - Gets the number of elements actually contained in the PluginCollection. - - - - - Gets a value indicating whether access to the collection is synchronized (thread-safe). - - true if access to the ICollection is synchronized (thread-safe); otherwise, false. - - - - Gets an object that can be used to synchronize access to the collection. 
- - - An object that can be used to synchronize access to the collection. - - - - - Gets or sets the at the specified index. - - - The at the specified index. - - The zero-based index of the element to get or set. - - is less than zero. - -or- - is equal to or greater than . - - - - - Gets a value indicating whether the collection has a fixed size. - - true if the collection has a fixed size; otherwise, false. The default is false. - - - - Gets a value indicating whether the IList is read-only. - - true if the collection is read-only; otherwise, false. The default is false. - - - - Gets or sets the number of elements the PluginCollection can contain. - - - The number of elements the PluginCollection can contain. - - - - - Supports type-safe iteration over a . - - - - - - Advances the enumerator to the next element in the collection. - - - true if the enumerator was successfully advanced to the next element; - false if the enumerator has passed the end of the collection. - - - The collection was modified after the enumerator was created. - - - - - Sets the enumerator to its initial position, before the first element in the collection. - - - - - Gets the current element in the collection. - - - - - Type visible only to our subclasses - Used to access protected constructor - - - - - - A value - - - - - Supports simple iteration over a . - - - - - - Initializes a new instance of the Enumerator class. - - - - - - Advances the enumerator to the next element in the collection. - - - true if the enumerator was successfully advanced to the next element; - false if the enumerator has passed the end of the collection. - - - The collection was modified after the enumerator was created. - - - - - Sets the enumerator to its initial position, before the first element in the collection. - - - - - Gets the current element in the collection. - - - The current element in the collection. - - - - - - - - Map of repository plugins. - - - - This class is a name keyed map of the plugins that are - attached to a repository. - - - Nicko Cadell - Gert Driesen - - - - Constructor - - The repository that the plugins should be attached to. - - - Initialize a new instance of the class with a - repository that the plugins should be attached to. - - - - - - Adds a to the map. - - The to add to the map. - - - The will be attached to the repository when added. - - - If there already exists a plugin with the same name - attached to the repository then the old plugin will - be and replaced with - the new plugin. - - - - - - Removes a from the map. - - The to remove from the map. - - - Remove a specific plugin from this map. - - - - - - Gets a by name. - - The name of the to lookup. - - The from the map with the name specified, or - null if no plugin is found. - - - - Lookup a plugin by name. If the plugin is not found null - will be returned. - - - - - - Gets all possible plugins as a list of objects. - - All possible plugins as a list of objects. - - - Get a collection of all the plugins defined in this map. - - - - - - Base implementation of - - - - Default abstract implementation of the - interface. This base class can be used by implementors - of the interface. - - - Nicko Cadell - Gert Driesen - - - - Constructor - - the name of the plugin - - Initializes a new Plugin with the specified name. - - - - - Attaches this plugin to a . - - The that this plugin should be attached to. - - - A plugin may only be attached to a single repository. - - - This method is called when the plugin is attached to the repository. 
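A hedged sketch of a concrete plugin built on this skeleton follows. The plugin name and behavior are invented for illustration, and the ShutdownEvent member name is assumed from the repository documentation later in this section; Attach, Shutdown and the PluginMap.Add registration follow the plugin contract documented above.

using log4net.Plugin;
using log4net.Repository;

// Hypothetical plugin that prints a notice when its repository shuts down.
public class ShutdownNoticePlugin : PluginSkeleton
{
    private ILoggerRepository m_repository;

    public ShutdownNoticePlugin() : base("ShutdownNoticePlugin") { }

    public override void Attach(ILoggerRepository repository)
    {
        base.Attach(repository);
        m_repository = repository;
        m_repository.ShutdownEvent += OnShutdown;  // assumed event name
    }

    public override void Shutdown()
    {
        // Stop operating and detach, as the contract above requires.
        if (m_repository != null)
        {
            m_repository.ShutdownEvent -= OnShutdown;
            m_repository = null;
        }
    }

    private void OnShutdown(object sender, System.EventArgs e)
    {
        System.Console.WriteLine("repository shut down");
    }
}

// Attach to the default repository:
//   log4net.LogManager.GetRepository().PluginMap.Add(new ShutdownNoticePlugin());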
- - - - - - Is called when the plugin is to shutdown. - - - - This method is called to notify the plugin that - it should stop operating and should detach from - the repository. - - - - - - The name of this plugin. - - - - - The repository this plugin is attached to. - - - - - Gets or sets the name of the plugin. - - - The name of the plugin. - - - - Plugins are stored in the - keyed by name. Each plugin instance attached to a - repository must be a unique name. - - - The name of the plugin must not change one the - plugin has been attached to a repository. - - - - - - The repository for this plugin - - - The that this plugin is attached to. - - - - Gets or sets the that this plugin is - attached to. - - - - - - Plugin that listens for events from the - - - - This plugin publishes an instance of - on a specified . This listens for logging events delivered from - a remote . - - - When an event is received it is relogged within the attached repository - as if it had been raised locally. - - - Nicko Cadell - Gert Driesen - - - - Default constructor - - - - Initializes a new instance of the class. - - - The property must be set. - - - - - - Construct with sink Uri. - - The name to publish the sink under in the remoting infrastructure. - See for more details. - - - Initializes a new instance of the class - with specified name. - - - - - - Attaches this plugin to a . - - The that this plugin should be attached to. - - - A plugin may only be attached to a single repository. - - - This method is called when the plugin is attached to the repository. - - - - - - Is called when the plugin is to shutdown. - - - - When the plugin is shutdown the remote logging - sink is disconnected. - - - - - - Gets or sets the URI of this sink. - - - The URI of this sink. - - - - This is the name under which the object is marshaled. - - - - - - - Delivers objects to a remote sink. - - - - Internal class used to listen for logging events - and deliver them to the local repository. - - - - - - Constructor - - The repository to log to. - - - Initializes a new instance of the for the - specified . - - - - - - Logs the events to the repository. - - The events to log. - - - The events passed are logged to the - - - - - - Obtains a lifetime service object to control the lifetime - policy for this instance. - - null to indicate that this instance should live forever. - - - Obtains a lifetime service object to control the lifetime - policy for this instance. This object should live forever - therefore this implementation returns null. - - - - - - The underlying that events should - be logged to. - - - - - Default implementation of - - - - This default implementation of the - interface is used to create the default subclass - of the object. - - - Nicko Cadell - Gert Driesen - - - - Interface abstracts creation of instances - - - - This interface is used by the to - create new objects. - - - The method is called - to create a named . - - - Implement this interface to create new subclasses of . - - - Nicko Cadell - Gert Driesen - - - - Create a new instance - - The name of the . - The instance for the specified name. - - - Create a new instance with the - specified name. - - - Called by the to create - new named instances. - - - If the is null then the root logger - must be returned. - - - - - - Default constructor - - - - Initializes a new instance of the class. - - - - - - Create a new instance - - The name of the . - The instance for the specified name. - - - Create a new instance with the - specified name. 
- - - Called by the to create - new named instances. - - - If the is null then the root logger - must be returned. - - - - - - Default internal subclass of - - - - This subclass has no additional behavior over the - class but does allow instances - to be created. - - - - - - Implementation of used by - - - - Internal class used to provide implementation of - interface. Applications should use to get - logger instances. - - - This is one of the central classes in the log4net implementation. One of the - distinctive features of log4net is hierarchical loggers and their - evaluation. The organizes the - instances into a rooted tree hierarchy. - - - The class is abstract. Only concrete subclasses of - can be created. The - is used to create instances of this type for the . - - - Nicko Cadell - Gert Driesen - Aspi Havewala - Douglas de la Torre - - - - This constructor creates a new instance and - sets its name. - - The name of the . - - - This constructor is protected and designed to be used by - a subclass that is not abstract. - - - Loggers are constructed by - objects. See for the default - logger creator. - - - - - - Add to the list of appenders of this - Logger instance. - - An appender to add to this logger - - - Add to the list of appenders of this - Logger instance. - - - If is already in the list of - appenders, then it won't be added again. - - - - - - Look for the appender named as name - - The name of the appender to lookup - The appender with the name specified, or null. - - - Returns the named appender, or null if the appender is not found. - - - - - - Remove all previously added appenders from this Logger instance. - - - - Remove all previously added appenders from this Logger instance. - - - This is useful when re-reading configuration information. - - - - - - Remove the appender passed as parameter from the list of appenders. - - The appender to remove - The appender removed from the list - - - Remove the appender passed as parameter from the list of appenders. - The appender removed is not closed. - If you are discarding the appender you must call - on the appender removed. - - - - - - Remove the appender passed as parameter from the list of appenders. - - The name of the appender to remove - The appender removed from the list - - - Remove the named appender passed as parameter from the list of appenders. - The appender removed is not closed. - If you are discarding the appender you must call - on the appender removed. - - - - - - This generic form is intended to be used by wrappers. - - The declaring type of the method that is - the stack boundary into the logging system for this call. - The level of the message to be logged. - The message object to log. - The exception to log, including its stack trace. - - - Generate a logging event for the specified using - the and . - - - This method must not throw any exception to the caller. - - - - - - This is the most generic printing method that is intended to be used - by wrappers. - - The event being logged. - - - Logs the specified logging event through this logger. - - - This method must not throw any exception to the caller. - - - - - - Checks if this logger is enabled for a given passed as parameter. - - The level to check. - - true if this logger is enabled for level, otherwise false. - - - - Test if this logger is going to log events of the specified . - - - This method must not throw any exception to the caller. - - - - - - Deliver the to the attached appenders. - - The event to log.
- - - Call the appenders in the hierarchy starting at - this. If no appenders could be found, emit a - warning. - - - This method calls all the appenders inherited from the - hierarchy circumventing any evaluation of whether to log or not - to log the particular log request. - - - - - - Closes all attached appenders implementing the interface. - - - - Used to ensure that the appenders are correctly shutdown. - - - - - - This is the most generic printing method. This generic form is intended to be used by wrappers - - The level of the message to be logged. - The message object to log. - The exception to log, including its stack trace. - - - Generate a logging event for the specified using - the . - - - - - - Creates a new logging event and logs the event without further checks. - - The declaring type of the method that is - the stack boundary into the logging system for this call. - The level of the message to be logged. - The message object to log. - The exception to log, including its stack trace. - - - Generates a logging event and delivers it to the attached - appenders. - - - - - - Creates a new logging event and logs the event without further checks. - - The event being logged. - - - Delivers the logging event to the attached appenders. - - - - - - The fully qualified type of the Logger class. - - - - - The name of this logger. - - - - - The assigned level of this logger. - - - - The level variable need not be - assigned a value in which case it is inherited - form the hierarchy. - - - - - - The parent of this logger. - - - - The parent of this logger. - All loggers have at least one ancestor which is the root logger. - - - - - - Loggers need to know what Hierarchy they are in. - - - - Loggers need to know what Hierarchy they are in. - The hierarchy that this logger is a member of is stored - here. - - - - - - Helper implementation of the interface - - - - - Flag indicating if child loggers inherit their parents appenders - - - - Additivity is set to true by default, that is children inherit - the appenders of their ancestors by default. If this variable is - set to false then the appenders found in the - ancestors of this logger are not used. However, the children - of this logger will inherit its appenders, unless the children - have their additivity flag set to false too. See - the user manual for more details. - - - - - - Lock to protect AppenderAttachedImpl variable m_appenderAttachedImpl - - - - - Gets or sets the parent logger in the hierarchy. - - - The parent logger in the hierarchy. - - - - Part of the Composite pattern that makes the hierarchy. - The hierarchy is parent linked rather than child linked. - - - - - - Gets or sets a value indicating if child loggers inherit their parent's appenders. - - - true if child loggers inherit their parent's appenders. - - - - Additivity is set to true by default, that is children inherit - the appenders of their ancestors by default. If this variable is - set to false then the appenders found in the - ancestors of this logger are not used. However, the children - of this logger will inherit its appenders, unless the children - have their additivity flag set to false too. See - the user manual for more details. - - - - - - Gets the effective level for this logger. - - The nearest level in the logger hierarchy. - - - Starting from this logger, searches the logger hierarchy for a - non-null level and returns it. Otherwise, returns the level of the - root logger. 
- - The Logger class is designed so that this method executes as - quickly as possible. - - - - - Gets or sets the where this - Logger instance is attached to. - - The hierarchy that this logger belongs to. - - - This logger must be attached to a single . - - - - - - Gets or sets the assigned , if any, for this Logger. - - - The of this logger. - - - - The assigned can be null. - - - - - - Get the appenders contained in this logger as an - . - - A collection of the appenders in this logger - - - Get the appenders contained in this logger as an - . If no appenders - can be found, then a is returned. - - - - - - Gets the logger name. - - - The name of the logger. - - - - The name of this logger - - - - - - Gets the where this - Logger instance is attached to. - - - The that this logger belongs to. - - - - Gets the where this - Logger instance is attached to. - - - - - - Construct a new Logger - - the name of the logger - - - Initializes a new instance of the class - with the specified name. - - - - - - Delegate used to handle logger creation event notifications. - - The in which the has been created. - The event args that hold the instance that has been created. - - - Delegate used to handle logger creation event notifications. - - - - - - Provides data for the event. - - - - A event is raised every time a - is created. - - - - - - The created - - - - - Constructor - - The that has been created. - - - Initializes a new instance of the event argument - class,with the specified . - - - - - - Gets the that has been created. - - - The that has been created. - - - - The that has been created. - - - - - - Hierarchical organization of loggers - - - - The casual user should not have to deal with this class - directly. - - - This class is specialized in retrieving loggers by name and - also maintaining the logger hierarchy. Implements the - interface. - - - The structure of the logger hierarchy is maintained by the - method. The hierarchy is such that children - link to their parent but parents do not have any references to their - children. Moreover, loggers can be instantiated in any order, in - particular descendant before ancestor. - - - In case a descendant is created before a particular ancestor, - then it creates a provision node for the ancestor and adds itself - to the provision node. Other descendants of the same ancestor add - themselves to the previously created provision node. - - - Nicko Cadell - Gert Driesen - - - - Base implementation of - - - - Default abstract implementation of the interface. - - - Skeleton implementation of the interface. - All types can extend this type. - - - Nicko Cadell - Gert Driesen - - - - Interface implemented by logger repositories. - - - - This interface is implemented by logger repositories. e.g. - . - - - This interface is used by the - to obtain interfaces. - - - Nicko Cadell - Gert Driesen - - - - Check if the named logger exists in the repository. If so return - its reference, otherwise returns null. - - The name of the logger to lookup - The Logger object with the name specified - - - If the names logger exists it is returned, otherwise - null is returned. - - - - - - Returns all the currently defined loggers as an Array. - - All the defined loggers - - - Returns all the currently defined loggers as an Array. - - - - - - Returns a named logger instance - - The name of the logger to retrieve - The logger object with the name specified - - - Returns a named logger instance. - - - If a logger of that name already exists, then it will be - returned. 
Otherwise, a new logger will be instantiated and - then linked with its existing ancestors as well as children. - - - - - Shutdown the repository - - - Shutting down a repository will safely close and remove - all appenders in all loggers including the root logger. - - - Some appenders need to be closed before the - application exits. Otherwise, pending logging events might be - lost. - - - The method is careful to close nested - appenders before closing regular appenders. This allows - configurations where a regular appender is attached to a logger - and again to a nested appender. - - - - - - Reset the repository's configuration to a default state - - - - Reset all values contained in this instance to their - default state. - - - Existing loggers are not removed. They are just reset. - - - This method should be used sparingly and with care as it will - block all logging until it is completed. - - - - - - Log the through this repository. - - the event to log - - - This method should not normally be used to log. - The interface should be used - for routine logging. This interface can be obtained - using the method. - - - The logEvent is delivered to the appropriate logger and - that logger is then responsible for logging the event. - - - - - - Returns all the Appenders that are configured as an Array. - - All the Appenders - - - Returns all the Appenders that are configured as an Array. - - - - - - The name of the repository - - - The name of the repository - - - - The name of the repository. - - - - - - RendererMap accesses the object renderer map for this repository. - - - RendererMap accesses the object renderer map for this repository. - - - - RendererMap accesses the object renderer map for this repository. - - - The RendererMap holds a mapping between types and - objects. - - - - - - The plugin map for this repository. - - - The plugin map for this repository. - - - - The plugin map holds the instances - that have been attached to this repository. - - - - - - Get the level map for the Repository. - - - - Get the level map for the Repository. - - - The level map defines the mappings between - level names and objects in - this repository. - - - - - - The threshold for all events in this repository - - - The threshold for all events in this repository - - - - The threshold for all events in this repository. - - - - - - Flag indicates if this repository has been configured. - - - Flag indicates if this repository has been configured. - - - - Flag indicates if this repository has been configured. - - - - - - Event to notify that the repository has been shutdown. - - - Event to notify that the repository has been shutdown. - - - - Event raised when the repository has been shutdown. - - - - - - Event to notify that the repository has had its configuration reset. - - - Event to notify that the repository has had its configuration reset. - - - - Event raised when the repository's configuration has been - reset to default. - - - - - - Event to notify that the repository has had its configuration changed. - - - Event to notify that the repository has had its configuration changed. - - - - Event raised when the repository's configuration has been changed. - - - - - - Repository specific properties - - - Repository specific properties - - - - These properties can be specified on a repository-specific basis. - - - - - - Default Constructor - - - - Initializes the repository with default (empty) properties.
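A short, hedged sketch of driving this repository API from application code follows; the ShutdownEvent and Threshold member names are assumed to match the members documented above, and the output text is illustrative.

using System;
using log4net;
using log4net.Core;
using log4net.Repository;

public static class RepositoryDemo
{
    public static void Main()
    {
        ILoggerRepository rep = LogManager.GetRepository();

        // Observe the shutdown notification documented above.
        rep.ShutdownEvent += (sender, e) => Console.WriteLine("repository shut down");

        // Repository-wide threshold: events below WARN are discarded.
        rep.Threshold = Level.Warn;

        // Safely close and remove all appenders (see Shutdown above).
        rep.Shutdown();
    }
}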
- - - - - - Construct the repository using specific properties - - the properties to set for this repository - - - Initializes the repository with specified properties. - - - - - - Test if logger exists - - The name of the logger to lookup - The Logger object with the name specified - - - Check if the named logger exists in the repository. If so return - its reference, otherwise returns null. - - - - - - Returns all the currently defined loggers in the repository - - All the defined loggers - - - Returns all the currently defined loggers in the repository as an Array. - - - - - - Return a new logger instance - - The name of the logger to retrieve - The logger object with the name specified - - - Return a new logger instance. - - - If a logger of that name already exists, then it will be - returned. Otherwise, a new logger will be instantiated and - then linked with its existing ancestors as well as children. - - - - - - Shutdown the repository - - - - Shutdown the repository. Can be overridden in a subclass. - This base class implementation notifies the - listeners and all attached plugins of the shutdown event. - - - - - - Reset the repositories configuration to a default state - - - - Reset all values contained in this instance to their - default state. - - - Existing loggers are not removed. They are just reset. - - - This method should be used sparingly and with care as it will - block all logging until it is completed. - - - - - - Log the logEvent through this repository. - - the event to log - - - This method should not normally be used to log. - The interface should be used - for routine logging. This interface can be obtained - using the method. - - - The logEvent is delivered to the appropriate logger and - that logger is then responsible for logging the event. - - - - - - Returns all the Appenders that are configured as an Array. - - All the Appenders - - - Returns all the Appenders that are configured as an Array. - - - - - - Adds an object renderer for a specific class. - - The type that will be rendered by the renderer supplied. - The object renderer used to render the object. - - - Adds an object renderer for a specific class. - - - - - - Notify the registered listeners that the repository is shutting down - - Empty EventArgs - - - Notify any listeners that this repository is shutting down. - - - - - - Notify the registered listeners that the repository has had its configuration reset - - Empty EventArgs - - - Notify any listeners that this repository's configuration has been reset. - - - - - - Notify the registered listeners that the repository has had its configuration changed - - Empty EventArgs - - - Notify any listeners that this repository's configuration has changed. - - - - - - Raise a configuration changed event on this repository - - EventArgs.Empty - - - Applications that programmatically change the configuration of the repository should - raise this event notification to notify listeners. - - - - - - The name of the repository - - - The string name of the repository - - - - The name of this repository. The name is - used to store and lookup the repositories - stored by the . - - - - - - The threshold for all events in this repository - - - The threshold for all events in this repository - - - - The threshold for all events in this repository - - - - - - RendererMap accesses the object renderer map for this repository. - - - RendererMap accesses the object renderer map for this repository. - - - - RendererMap accesses the object renderer map for this repository. 
- - - The RendererMap holds a mapping between types and - objects. - - - - - - The plugin map for this repository. - - - The plugin map for this repository. - - - - The plugin map holds the instances - that have been attached to this repository. - - - - - - Get the level map for the Repository. - - - - Get the level map for the Repository. - - - The level map defines the mappings between - level names and objects in - this repository. - - - - - - Flag indicates if this repository has been configured. - - - Flag indicates if this repository has been configured. - - - - Flag indicates if this repository has been configured. - - - - - - Event to notify that the repository has been shutdown. - - - Event to notify that the repository has been shutdown. - - - - Event raised when the repository has been shutdown. - - - - - - Event to notify that the repository has had its configuration reset. - - - Event to notify that the repository has had its configuration reset. - - - - Event raised when the repository's configuration has been - reset to default. - - - - - - Event to notify that the repository has had its configuration changed. - - - Event to notify that the repository has had its configuration changed. - - - - Event raised when the repository's configuration has been changed. - - - - - - Repository specific properties - - - Repository specific properties - - - These properties can be specified on a repository specific basis - - - - - Basic Configurator interface for repositories - - - - Interface used by basic configurator to configure a - with a default . - - - A should implement this interface to support - configuration by the . - - - Nicko Cadell - Gert Driesen - - - - Initialize the repository using the specified appender - - the appender to use to log all logging events - - - Configure the repository to route all logging events to the - specified appender. - - - - - - Configure repository using XML - - - - Interface used by Xml configurator to configure a . - - - A should implement this interface to support - configuration by the . - - - Nicko Cadell - Gert Driesen - - - - Initialize the repository using the specified config - - the element containing the root of the config - - - The schema for the XML configuration data is defined by - the implementation. - - - - - - Default constructor - - - - Initializes a new instance of the class. - - - - - - Construct with properties - - The properties to pass to this repository. - - - Initializes a new instance of the class. - - - - - - Construct with a logger factory - - The factory to use to create new logger instances. - - - Initializes a new instance of the class with - the specified . - - - - - - Construct with properties and a logger factory - - The properties to pass to this repository. - The factory to use to create new logger instances. - - - Initializes a new instance of the class with - the specified . - - - - - - Test if a logger exists - - The name of the logger to lookup - The Logger object with the name specified - - - Check if the named logger exists in the hierarchy. If so return - its reference, otherwise returns null. - - - - - - Returns all the currently defined loggers in the hierarchy as an Array - - All the defined loggers - - - Returns all the currently defined loggers in the hierarchy as an Array. - The root logger is not included in the returned - enumeration. - - - - - - Return a new logger instance named as the first parameter using - the default factory. 
- - - - Return a new logger instance named as the first parameter using - the default factory. - - - If a logger of that name already exists, then it will be - returned. Otherwise, a new logger will be instantiated and - then linked with its existing ancestors as well as children. - - - The name of the logger to retrieve - The logger object with the name specified - - - - Shutting down a hierarchy will safely close and remove - all appenders in all loggers including the root logger. - - - - Shutting down a hierarchy will safely close and remove - all appenders in all loggers including the root logger. - - - Some appenders need to be closed before the - application exits. Otherwise, pending logging events might be - lost. - - - The Shutdown method is careful to close nested - appenders before closing regular appenders. This allows - configurations where a regular appender is attached to a logger - and again to a nested appender. - - - - - - Reset all values contained in this hierarchy instance to their default. - - - - Reset all values contained in this hierarchy instance to their - default. This removes all appenders from all loggers, sets - the level of all non-root loggers to null, - sets their additivity flag to true and sets the level - of the root logger to . Moreover, - message disabling is set to its default "off" value. - - - Existing loggers are not removed. They are just reset. - - - This method should be used sparingly and with care as it will - block all logging until it is completed. - - - - - - Log the logEvent through this hierarchy. - - the event to log - - - This method should not normally be used to log. - The interface should be used - for routine logging. This interface can be obtained - using the method. - - - The logEvent is delivered to the appropriate logger and - that logger is then responsible for logging the event. - - - - - - Returns all the Appenders that are currently configured - - An array containing all the currently configured appenders - - - Returns all the instances that are currently configured. - All the loggers are searched for appenders. The appenders may also be containers - for appenders and these are also searched for additional appenders. - - - The list returned is unordered but does not contain duplicates. - - - - - - Collect the appenders from an . - The appender may also be a container. - - - - - - - Collect the appenders from an container - - - - - - - Initialize the log4net system using the specified appender - - the appender to use to log all logging events - - - - Initialize the log4net system using the specified appender - - the appender to use to log all logging events - - - This method provides the same functionality as the - method implemented - on this object, but it is protected and therefore can be called by subclasses. - - - - - - Initialize the log4net system using the specified config - - the element containing the root of the config - - - - Initialize the log4net system using the specified config - - the element containing the root of the config - - - This method provides the same functionality as the - method implemented - on this object, but it is protected and therefore can be called by subclasses. - - - - - - Test if this hierarchy is disabled for the specified . - - The level to check against. - - true if the repository is disabled for the level argument, false otherwise. - - - - If this hierarchy has not been configured then this method will - always return true.
- - - This method will return true if this repository is - disabled for the level object passed as parameter and - false otherwise. - - - See also the property. - - - - - - Clear all logger definitions from the internal hashtable - - - - This call will clear all logger definitions from the internal - hashtable. Invoking this method will irrevocably mess up the - logger hierarchy. - - - You should really know what you are doing before - invoking this method. - - - - - - Return a new logger instance named as the first parameter using - . - - The name of the logger to retrieve - The factory that will make the new logger instance - The logger object with the name specified - - - If a logger of that name already exists, then it will be - returned. Otherwise, a new logger will be instantiated by the - parameter and linked with its existing - ancestors as well as children. - - - - - - Sends a logger creation event to all registered listeners - - The newly created logger - - Raises the logger creation event. - - - - - Updates all the parents of the specified logger - - The logger to update the parents for - - - This method loops through all the potential parents of - . There are 3 possible cases: - - - - No entry for the potential parent of exists - - We create a ProvisionNode for this potential - parent and insert in that provision node. - - - - The entry is of type Logger for the potential parent. - - The entry is 's nearest existing parent. We - update 's parent field with this entry. We also break from - the loop because updating our parent's parent is our parent's - responsibility. - - - - The entry is of type ProvisionNode for this potential parent. - - We add to the list of children for this - potential parent. - - - - - - - - Replace a with a in the hierarchy. - - - - - - We update the links for all the children that placed themselves - in the provision node 'pn'. The second argument 'log' is a - reference for the newly created Logger, parent of all the - children in 'pn'. - - - We loop on all the children 'c' in 'pn'. - - - If the child 'c' has already been linked to a child of - 'log' then there is no need to update 'c'. - - - Otherwise, we set log's parent field to c's parent and set - c's parent field to log. - - - - - - Define or redefine a Level using the values in the argument - - the level values - - - Define or redefine a Level using the values in the argument - - - Supports setting levels via the configuration file. - - - - - - Set a Property using the values in the argument - - the property value - - - Set a Property using the values in the argument. - - - Supports setting property values via the configuration file. - - - - - - Event used to notify that a logger has been created. - - - - Event raised when a logger is created. - - - - - - Has no appender warning been emitted - - - - Flag to indicate if we have already issued a warning - about not having an appender. - - - - - - Get the root of this hierarchy - - - - Get the root of this hierarchy. - - - - - - Gets or sets the default instance. - - The default - - - The logger factory is used to create logger instances. - - - - - - A class to hold the value, name and display name for a level - - - - A class to hold the value, name and display name for a level - - - - - - Override Object.ToString to return sensible debug info - - string info about this object - - - - Value of the level - - - - If the value is not set (defaults to -1) the value will be looked - up for the current level with the same name.
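To make the level machinery above concrete, here is a hedged sketch that registers a custom level with the repository's level map and logs through the generic ILogger.Log form described earlier. The NOTICE level, its numeric value and the class names are illustrative.

using log4net;
using log4net.Core;

public static class CustomLevelDemo
{
    // Illustrative custom level; the value orders it among the built-in levels.
    private static readonly Level Notice = new Level(35000, "NOTICE");

    public static void Main()
    {
        // Make the level known to the repository's level map.
        LogManager.GetRepository().LevelMap.Add(Notice);

        ILog log = LogManager.GetLogger(typeof(CustomLevelDemo));

        // Generic logging form: declaring type, level, message, exception.
        log.Logger.Log(typeof(CustomLevelDemo), Notice, "a NOTICE message", null);
    }
}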
- - - - - - Name of the level - - - The name of the level - - - - The name of the level. - - - - - - Display name for the level - - - The display name of the level - - - - The display name of the level. - - - - - - A class to hold the key and data for a property set in the config file - - - - A class to hold the key and data for a property set in the config file - - - - - - Override Object.ToString to return sensible debug info - - string info about this object - - - - Property Key - - - Property Key - - - - Property Key. - - - - - - Property Value - - - Property Value - - - - Property Value. - - - - - - Used internally to accelerate hash table searches. - - - - Internal class used to improve performance of - string keyed hashtables. - - - The hashcode of the string is cached for reuse. - The string is stored as an interned value. - When comparing two objects for equality - the reference equality of the interned strings is compared. - - - Nicko Cadell - Gert Driesen - - - - Construct key with string name - - - - Initializes a new instance of the class - with the specified name. - - - Stores the hashcode of the string and interns - the string key to optimize comparisons. - - - The Compact Framework 1.0 the - method does not work. On the Compact Framework - the string keys are not interned nor are they - compared by reference. - - - The name of the logger. - - - - Returns a hash code for the current instance. - - A hash code for the current instance. - - - Returns the cached hashcode. - - - - - - Determines whether two instances - are equal. - - The to compare with the current . - - true if the specified is equal to the current ; otherwise, false. - - - - Compares the references of the interned strings. - - - - - - Provision nodes are used where no logger instance has been specified - - - - instances are used in the - when there is no specified - for that node. - - - A provision node holds a list of child loggers on behalf of - a logger that does not exist. - - - Nicko Cadell - Gert Driesen - - - - Create a new provision node with child node - - A child logger to add to this node. - - - Initializes a new instance of the class - with the specified child logger. - - - - - - The sits at the root of the logger hierarchy tree. - - - - The is a regular except - that it provides several guarantees. - - - First, it cannot be assigned a null - level. Second, since the root logger cannot have a parent, the - property always returns the value of the - level field without walking the hierarchy. - - - Nicko Cadell - Gert Driesen - - - - Construct a - - The level to assign to the root logger. - - - Initializes a new instance of the class with - the specified logging level. - - - The root logger names itself as "root". However, the root - logger cannot be retrieved by name. - - - - - - Gets the assigned level value without walking the logger hierarchy. - - The assigned level value without walking the logger hierarchy. - - - Because the root logger cannot have a parent and its level - must not be null this property just returns the - value of . - - - - - - Gets or sets the assigned for the root logger. - - - The of the root logger. - - - - Setting the level of the root logger to a null reference - may have catastrophic results. We prevent this here. - - - - - - Initializes the log4net environment using an XML DOM. - - - - Configures a using an XML DOM. - - - Nicko Cadell - Gert Driesen - - - - Construct the configurator for a hierarchy - - The hierarchy to build. 
- - - Initializes a new instance of the class - with the specified . - - - - - - Configure the hierarchy by parsing a DOM tree of XML elements. - - The root element to parse. - - - Configure the hierarchy by parsing a DOM tree of XML elements. - - - - - - Parse appenders by IDREF. - - The appender ref element. - The instance of the appender that the ref refers to. - - - Parse an XML element that represents an appender and return - the appender. - - - - - - Parses an appender element. - - The appender element. - The appender instance or null when parsing failed. - - - Parse an XML element that represents an appender and return - the appender instance. - - - - - - Parses a logger element. - - The logger element. - - - Parse an XML element that represents a logger. - - - - - - Parses the root logger element. - - The root element. - - - Parse an XML element that represents the root logger. - - - - - - Parses the children of a logger element. - - The category element. - The logger instance. - Flag to indicate if the logger is the root logger. - - - Parse the child elements of a <logger> element. - - - - - - Parses an object renderer. - - The renderer element. - - - Parse an XML element that represents a renderer. - - - - - - Parses a level element. - - The level element. - The logger object to set the level on. - Flag to indicate if the logger is the root logger. - - - Parse an XML element that represents a level. - - - - - - Sets a parameter on an object. - - The parameter element. - The object to set the parameter on. - - The parameter name must correspond to a writable property - on the object. The value of the parameter is a string, - therefore this function will attempt to set a string - property first. If unable to set a string property it - will inspect the property and its argument type. It will - attempt to call a static method called Parse on the - type of the property. This method will take a single - string argument and return a value that can be used to - set the property. - - - - - Test if an element has no attributes or child elements - - the element to inspect - true if the element has any attributes or child elements, false otherwise - - - - Test if a is constructible with Activator.CreateInstance. - - the type to inspect - true if the type is creatable using a default constructor, false otherwise - - - - Look for a method on the that matches the supplied - - the type that has the method - the name of the method - the method info found - - - The method must be a public instance method on the . - The method must be named or "Add" followed by . - The method must take a single parameter. - - - - - - Converts a string value to a target type. - - The type of object to convert the string to. - The string value to use as the value of the object. - - - An object of type with value or - null when the conversion could not be performed. - - - - - - Creates an object as specified in XML. - - The XML element that contains the definition of the object. - The object type to use if not explicitly specified. - The type that the returned object must be or must inherit from. - The object or null - - - Parse an XML element and create an object instance based on the configuration - data. - - - The type of the instance may be specified in the XML. If not - specified then the is used - as the type. However the type is specified it must support the - type. - - - - - - key: appenderName, value: appender. - - - - - The Hierarchy being configured. 
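The following hedged sketch shows the DOM-driven configuration path this configurator implements: an in-memory log4net element is handed to XmlConfigurator, which parses the appender, root and level elements as described above. The element and type names follow standard log4net configuration syntax; the appender name is illustrative.

using System.Xml;
using log4net.Config;

public static class DomConfigDemo
{
    public static void Main()
    {
        // Build the configuration DOM in memory rather than reading a file.
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(
            "<log4net>" +
            "  <appender name='Console' type='log4net.Appender.ConsoleAppender'>" +
            "    <layout type='log4net.Layout.SimpleLayout' />" +
            "  </appender>" +
            "  <root>" +
            "    <level value='DEBUG' />" +
            "    <appender-ref ref='Console' />" +
            "  </root>" +
            "</log4net>");

        // Configure the default repository from the DOM element.
        XmlConfigurator.Configure(doc.DocumentElement);
    }
}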
- - - - - Delegate used to handle logger repository shutdown event notifications - - The that is shutting down. - Empty event args - - - Delegate used to handle logger repository shutdown event notifications. - - - - - - Delegate used to handle logger repository configuration reset event notifications - - The that has had its configuration reset. - Empty event args - - - Delegate used to handle logger repository configuration reset event notifications. - - - - - - Delegate used to handle event notifications for logger repository configuration changes. - - The that has had its configuration changed. - Empty event arguments. - - - Delegate used to handle event notifications for logger repository configuration changes. - - - - - - Write the name of the current AppDomain to the output - - - - Write the name of the current AppDomain to the output writer - - - Nicko Cadell - - - - Write the name of the current AppDomain to the output - - the writer to write to - null, state is not set - - - Writes name of the current AppDomain to the output . - - - - - - Write the current date to the output - - - - Date pattern converter, uses a to format - the current date and time to the writer as a string. - - - The value of the determines - the formatting of the date. The following values are allowed: - - - Option value - Output - - - ISO8601 - - Uses the formatter. - Formats using the "yyyy-MM-dd HH:mm:ss,fff" pattern. - - - - DATE - - Uses the formatter. - Formats using the "dd MMM yyyy HH:mm:ss,fff" for example, "06 Nov 1994 15:49:37,459". - - - - ABSOLUTE - - Uses the formatter. - Formats using the "HH:mm:ss,fff" for example, "15:49:37,459". - - - - other - - Any other pattern string uses the formatter. - This formatter passes the pattern string to the - method. - For details on valid patterns see - DateTimeFormatInfo Class. - - - - - - The date and time is in the local time zone and is rendered in that zone. - To output the time in Universal time see . - - - Nicko Cadell - - - - The used to render the date to a string - - - - The used to render the date to a string - - - - - - Initialize the converter options - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - - - - Write the current date to the output - - that will receive the formatted result. - null, state is not set - - - Pass the current date and time to the - for it to render it to the writer. - - - The date and time passed is in the local time zone. - - - - - - Write an environment variable to the output - - - - Write an environment variable to the output writer. - The value of the determines - the name of the variable to output. - - - Nicko Cadell - - - - Write an environment variable to the output - - the writer to write to - null, state is not set - - - Writes the environment variable to the output . - The name of the environment variable to output must be set - using the - property. - - - - - - Write the current thread identity to the output - - - - Write the current thread identity to the output writer - - - Nicko Cadell - - - - Write the current thread identity to the output - - the writer to write to - null, state is not set - - - Writes the current thread identity to the output . 
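A hedged sketch tying several of the converters described here into one conversion pattern follows; the pattern string and the 'user' property name are illustrative, and the converter names assume the standard PatternLayout set.

using log4net.Layout;

public static class PatternDemo
{
    public static PatternLayout Build()
    {
        PatternLayout layout = new PatternLayout();
        // ABSOLUTE date format ("HH:mm:ss,fff"), the current AppDomain name,
        // a named property, the message, then a platform-dependent newline.
        layout.ConversionPattern =
            "%date{ABSOLUTE} [%appdomain] %property{user} - %message%newline";
        layout.ActivateOptions();   // required after setting properties
        return layout;
    }
}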
- - - - - - Pattern converter for literal string instances in the pattern - - - - Writes the literal string value specified in the - property to - the output. - - - Nicko Cadell - - - - Set the next converter in the chain - - The next pattern converter in the chain - The next pattern converter - - - Special case the building of the pattern converter chain - for instances. Two adjacent - literals in the pattern can be represented by a single combined - pattern converter. This implementation detects when a - is added to the chain - after this converter and combines its value with this converter's - literal value. - - - - - - Write the literal to the output - - the writer to write to - null, not set - - - Override the formatting behavior to ignore the FormattingInfo - because we have a literal instead. - - - Writes the value of - to the output . - - - - - - Convert this pattern into the rendered message - - that will receive the formatted result. - null, not set - - - This method is not used. - - - - - - Writes a newline to the output - - - - Writes the system dependent line terminator to the output. - This behavior can be overridden by setting the : - - - - Option Value - Output - - - DOS - DOS or Windows line terminator "\r\n" - - - UNIX - UNIX line terminator "\n" - - - - Nicko Cadell - - - - Initialize the converter - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - - - - Write the current process ID to the output - - - - Write the current process ID to the output writer - - - Nicko Cadell - - - - Write the current process ID to the output - - the writer to write to - null, state is not set - - - Write the current process ID to the output . - - - - - - Property pattern converter - - - - This pattern converter reads the thread and global properties. - The thread properties take priority over global properties. - See for details of the - thread properties. See for - details of the global properties. - - - If the is specified then that will be used to - lookup a single property. If no is specified - then all properties will be dumped as a list of key value pairs. - - - Nicko Cadell - - - - Write the property value to the output - - that will receive the formatted result. - null, state is not set - - - Writes out the value of a named property. The property name - should be set in the - property. - - - If the is set to null - then all the properties are written as key value pairs. - - - - - - A Pattern converter that generates a string of random characters - - - - The converter generates a string of random characters. By default - the string is length 4. This can be changed by setting the - to the string value of the length required. - - - The random characters in the string are limited to uppercase letters - and numbers only. - - - The random number generator used by this class is not cryptographically secure. - - - Nicko Cadell - - - - Shared random number generator - - - - - Length of random string to generate. Default length 4. - - - - - Initialize the converter options - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. 
- - - If any of the configuration properties are modified then - must be called again. - - - - - - Write a random string to the output - - the writer to write to - null, state is not set - - - Write a random string to the output . - - - - - - Write the current thread's username to the output - - - - Write the current thread's username to the output writer - - - Nicko Cadell - - - - Write the current thread's username to the output - - the writer to write to - null, state is not set - - - Write the current thread's username to the output . - - - - - - Write the UTC date time to the output - - - - Date pattern converter, uses a to format - the current date and time in Universal time. - - - See the for details on the date pattern syntax. - - - - Nicko Cadell - - - - Write the current date and time to the output - - that will receive the formatted result. - null, state is not set - - - Pass the current date and time to the - for it to render it to the writer. - - - The date is in Universal time when it is rendered. - - - - - - - Type converter for Boolean. - - - - Supports conversion from string to bool type. - - - - - - Nicko Cadell - Gert Driesen - - - - Can the source type be converted to the type supported by this object - - the type to convert - true if the conversion is possible - - - Returns true if the is - the type. - - - - - - Convert the source object to the type supported by this object - - the object to convert - the converted object - - - Uses the method to convert the - argument to a . - - - - The object cannot be converted to the - target type. To check for this condition use the - method. - - - - - Exception base type for conversion errors. - - - - This type extends . It - does not add any new functionality but does differentiate the - type of exception being thrown. - - - Nicko Cadell - Gert Driesen - - - - Constructor - - - - Initializes a new instance of the class. - - - - - - Constructor - - A message to include with the exception. - - - Initializes a new instance of the class - with the specified message. - - - - - - Constructor - - A message to include with the exception. - A nested exception to include. - - - Initializes a new instance of the class - with the specified message and inner exception. - - - - - - Serialization constructor - - The that holds the serialized object data about the exception being thrown. - The that contains contextual information about the source or destination. - - - Initializes a new instance of the class - with serialized data. - - - - - - Creates a new instance of the class. - - The conversion destination type. - The value to convert. - An instance of the . - - - Creates a new instance of the class. - - - - - - Creates a new instance of the class. - - The conversion destination type. - The value to convert. - A nested exception to include. - An instance of the . - - - Creates a new instance of the class. - - - - - - Register of type converters for specific types. - - - - Maintains a registry of type converters used to convert between - types. - - - Use the and - methods to register new converters. - The and methods - look up appropriate converters to use. - - - - - Nicko Cadell - Gert Driesen - - - - Private constructor - - - Initializes a new instance of the class. - - - - - Static constructor. - - - - This constructor defines the intrinsic type converters. - - - - - - Adds a converter for a specific type. - - The type being converted to. - The type converter to use to convert to the destination type.
- - - Adds a converter instance for a specific type. - - - - - - Adds a converter for a specific type. - - The type being converted to. - The type of the type converter to use to convert to the destination type. - - - Adds a converter for a specific type. - - - - - - Gets the type converter to use to convert values to the destination type. - - The type being converted from. - The type being converted to. - - The type converter instance to use for type conversions or null - if no type converter is found. - - - - Gets the type converter to use to convert values to the destination type. - - - - - - Gets the type converter to use to convert values to the destination type. - - The type being converted to. - - The type converter instance to use for type conversions or null - if no type converter is found. - - - - Gets the type converter to use to convert values to the destination type. - - - - - - Looks up the type converter to use as specified by the attributes on the - destination type. - - The type being converted to. - - The type converter instance to use for type conversions or null - if no type converter is found. - - - - - Creates the instance of the type converter. - - The type of the type converter. - - The type converter instance to use for type conversions or null - if no type converter is found. - - - - The type specified for the type converter must implement - the or interfaces - and must have a public default (no argument) constructor. - - - - - - Mapping from to type converter. - - - - - Supports conversion from string to type. - - - - Supports conversion from string to type. - - - - - - Nicko Cadell - Gert Driesen - - - - Can the source type be converted to the type supported by this object - - the type to convert - true if the conversion is possible - - - Returns true if the is - the type. - - - - - - Overrides the ConvertFrom method of IConvertFrom. - - the object to convert to an encoding - the encoding - - - Uses the method to - convert the argument to an . - - - - The object cannot be converted to the - target type. To check for this condition use the - method. - - - - - Interface supported by type converters - - - - This interface supports conversion from a single type to arbitrary types. - See . - - - Nicko Cadell - - - - Returns whether this converter can convert the object to the specified type - - A Type that represents the type you want to convert to - true if the conversion is possible - - - Test if the type supported by this converter can be converted to the - . - - - - - - Converts the given value object to the specified type, using the arguments - - the object to convert - The Type to convert the value parameter to - the converted object - - - Converts the (which must be of the type supported - by this converter) to the specified type. - - - - - - Supports conversion from string to type. - - - - Supports conversion from string to type. - - - - - Nicko Cadell - - - - Can the source type be converted to the type supported by this object - - the type to convert - true if the conversion is possible - - - Returns true if the is - the type. - - - - - - Overrides the ConvertFrom method of IConvertFrom. - - the object to convert to an IPAddress - the IPAddress - - - Uses the method to convert the - argument to an . - If that fails then the string is resolved as a DNS hostname. - - - - The object cannot be converted to the - target type. To check for this condition use the - method. - - - - - Valid characters in an IPv4 or IPv6 address string.
(Does not support subnets) - - - - - Supports conversion from string to type. - - - - Supports conversion from string to type. - - - The string is used as the - of the . - - - - - - Nicko Cadell - - - - Can the source type be converted to the type supported by this object - - the type to convert - true if the conversion is possible - - - Returns true if the is - the type. - - - - - - Overrides the ConvertFrom method of IConvertFrom. - - the object to convert to a PatternLayout - the PatternLayout - - - Creates and returns a new using - the as the - . - - - - The object cannot be converted to the - target type. To check for this condition use the - method. - - - - - Convert between string and - - - - Supports conversion from string to type, - and from a type to a string. - - - The string is used as the - of the . - - - - - - Nicko Cadell - - - - Can the target type be converted to the type supported by this object - - A that represents the type you want to convert to - true if the conversion is possible - - - Returns true if the is - assignable from a type. - - - - - - Converts the given value object to the specified type, using the arguments - - the object to convert - The Type to convert the value parameter to - the converted object - - - Uses the method to convert the - argument to a . - - - - The object cannot be converted to the - . To check for this condition use the - method. - - - - - Can the source type be converted to the type supported by this object - - the type to convert - true if the conversion is possible - - - Returns true if the is - the type. - - - - - - Overrides the ConvertFrom method of IConvertFrom. - - the object to convert to a PatternString - the PatternString - - - Creates and returns a new using - the as the - . - - - - The object cannot be converted to the - target type. To check for this condition use the - method. - - - - - Supports conversion from string to type. - - - - Supports conversion from string to type. - - - - - - Nicko Cadell - - - - Can the source type be converted to the type supported by this object - - the type to convert - true if the conversion is possible - - - Returns true if the is - the type. - - - - - - Overrides the ConvertFrom method of IConvertFrom. - - the object to convert to a Type - the Type - - - Uses the method to convert the - argument to a . - Additional effort is made to locate partially specified types - by searching the loaded assemblies. - - - - The object cannot be converted to the - target type. To check for this condition use the - method. - - - - - Attribute used to associate a type converter - - - - Class and Interface level attribute that specifies a type converter - to use with the associated type. - - - To associate a type converter with a target type apply a - TypeConverterAttribute to the target type. Specify the - type of the type converter on the attribute. - - - Nicko Cadell - Gert Driesen - - - - The string type name of the type converter - - - - - Default constructor - - - - Default constructor - - - - - - Create a new type converter attribute for the specified type name - - The string type name of the type converter - - - The type specified must implement the - or the interfaces. - - - - - - Create a new type converter attribute for the specified type - - The type of the type converter - - - The type specified must implement the - or the interfaces. 
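A short sketch of how the converter machinery described above fits together: the registry hands back an IConvertFrom for a destination type, which can then be queried and invoked. This assumes the intrinsic Boolean converter has been registered by the static constructor, as documented:

    using System;
    using log4net.Util.TypeConverters;

    static class ConverterDemo
    {
        static void Main()
        {
            // Look up the converter registered for System.Boolean.
            IConvertFrom converter = ConverterRegistry.GetConvertFrom(typeof(bool));
            if (converter != null && converter.CanConvertFrom(typeof(string)))
            {
                // The Boolean converter parses the string representation.
                bool flag = (bool)converter.ConvertFrom("true");
                Console.WriteLine(flag);
            }
        }
    }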
- - - - - - The string type name of the type converter - - - The string type name of the type converter - - - - The type specified must implement the - or the interfaces. - - - - - - A straightforward implementation of the interface. - - - - This is the default implementation of the - interface. Implementors of the interface - should aggregate an instance of this type. - - - Nicko Cadell - Gert Driesen - - - - Constructor - - - - Initializes a new instance of the class. - - - - - - Append on all attached appenders. - - The event being logged. - The number of appenders called. - - - Calls the method on all - attached appenders. - - - - - - Append on all attached appenders. - - The array of events being logged. - The number of appenders called. - - - Calls the method on all - attached appenders. - - - - - - Calls the DoAppend method on the with - the objects supplied. - - The appender - The events - - - If the supports the - interface then the will be passed - through using that interface. Otherwise the - objects in the array will be passed one at a time. - - - - - - Attaches an appender. - - The appender to add. - - - If the appender is already in the list it won't be added again. - - - - - - Gets an attached appender with the specified name. - - The name of the appender to get. - - The appender with the name specified, or null if no appender with the - specified name is found. - - - - Look up an attached appender by name. - - - - - - Removes all attached appenders. - - - - Removes and closes all attached appenders - - - - - - Removes the specified appender from the list of attached appenders. - - The appender to remove. - The appender removed from the list - - - The appender removed is not closed. - If you are discarding the appender you must call - on the appender removed. - - - - - - Removes the appender with the specified name from the list of appenders. - - The name of the appender to remove. - The appender removed from the list - - - The appender removed is not closed. - If you are discarding the appender you must call - on the appender removed. - - - - - - List of appenders - - - - - Array of appenders, used to cache the m_appenderList - - - - - Gets all attached appenders. - - - A collection of attached appenders, or null if there - are no attached appenders. - - - - The read only collection of all currently attached appenders. - - - - - - This class aggregates several PropertiesDictionary collections together. - - - - Provides a dictionary style lookup over an ordered list of - collections. - - - Nicko Cadell - - - - Constructor - - - - Initializes a new instance of the class. - - - - - - Add a Properties Dictionary to this composite collection - - the properties to add - - - Properties dictionaries added first take precedence over dictionaries added - later. - - - - - - Flatten this composite collection into a single properties dictionary - - the flattened dictionary - - - Reduces the collection of ordered dictionaries to a single dictionary - containing the resultant values for the keys. - - - - - - Gets the value of a property - - - The value for the property with the specified key - - - - Looks up the value for the specified. - The collections are searched - in the order in which they were added to this collection. The value - returned is the value held by the first collection that contains - the specified key. - - - If none of the collections contain the specified key then - null is returned.
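The attachment semantics spelled out above (duplicate adds are ignored, removal does not close the appender) can be demonstrated in a few lines:

    using System;
    using log4net.Appender;
    using log4net.Util;

    static class AttachDemo
    {
        static void Main()
        {
            AppenderAttachedImpl attachable = new AppenderAttachedImpl();
            ConsoleAppender console = new ConsoleAppender { Name = "console" };

            attachable.AddAppender(console);
            attachable.AddAppender(console);          // already attached: ignored

            IAppender byName = attachable.GetAppender("console");
            IAppender removed = attachable.RemoveAppender(console);
            removed.Close();                          // removal does not close it
            Console.WriteLine(byName == removed);     // True
        }
    }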
- - - - - - Base class for Context Properties implementations - - - - This class defines a basic property get set accessor - - - Nicko Cadell - - - - Gets or sets the value of a property - - - The value for the property with the specified key - - - - Gets or sets the value of a property - - - - - - Subclass of that maintains a count of - the number of bytes written. - - - - This writer counts the number of bytes written. - - - Nicko Cadell - Gert Driesen - - - - that does not leak exceptions - - - - does not throw exceptions when things go wrong. - Instead, it delegates error handling to its . - - - Nicko Cadell - Gert Driesen - - - - Adapter that extends and forwards all - messages to an instance of . - - - - Adapter that extends and forwards all - messages to an instance of . - - - Nicko Cadell - - - - The writer to forward messages to - - - - - Create an instance of that forwards all - messages to a . - - The to forward to - - - Create an instance of that forwards all - messages to a . - - - - - - Closes the writer and releases any system resources associated with the writer - - - - - - - - - Dispose this writer - - flag indicating if we are being disposed - - - Dispose this writer - - - - - - Flushes any buffered output - - - - Clears all buffers for the writer and causes any buffered data to be written - to the underlying device - - - - - - Writes a character to the wrapped TextWriter - - the value to write to the TextWriter - - - Writes a character to the wrapped TextWriter - - - - - - Writes a character buffer to the wrapped TextWriter - - the data buffer - the start index - the number of characters to write - - - Writes a character buffer to the wrapped TextWriter - - - - - - Writes a string to the wrapped TextWriter - - the value to write to the TextWriter - - - Writes a string to the wrapped TextWriter - - - - - - Gets or sets the underlying . - - - The underlying . - - - - Gets or sets the underlying . - - - - - - The Encoding in which the output is written - - - The - - - - The Encoding in which the output is written - - - - - - Gets an object that controls formatting - - - The format provider - - - - Gets an object that controls formatting - - - - - - Gets or sets the line terminator string used by the TextWriter - - - The line terminator to use - - - - Gets or sets the line terminator string used by the TextWriter - - - - - - Constructor - - the writer to actually write to - the error handler to report error to - - - Create a new QuietTextWriter using a writer and error handler - - - - - - Writes a character to the underlying writer - - the char to write - - - Writes a character to the underlying writer - - - - - - Writes a buffer to the underlying writer - - the buffer to write - the start index to write from - the number of characters to write - - - Writes a buffer to the underlying writer - - - - - - Writes a string to the output. - - The string data to write to the output. - - - Writes a string to the output. - - - - - - Closes the underlying output writer. - - - - Closes the underlying output writer. - - - - - - The error handler instance to pass all errors to - - - - - Flag to indicate if this writer is closed - - - - - Gets or sets the error handler that all errors are passed to. - - - The error handler that all errors are passed to. - - - - Gets or sets the error handler that all errors are passed to. - - - - - - Gets a value indicating whether this writer is closed. - - - true if this writer is closed, otherwise false. 
- - - - Gets a value indicating whether this writer is closed. - - - - - - Constructor - - The to actually write to. - The to report errors to. - - - Creates a new instance of the class - with the specified and . - - - - - - Writes a character to the underlying writer and counts the number of bytes written. - - the char to write - - - Overrides implementation of . Counts - the number of bytes written. - - - - - - Writes a buffer to the underlying writer and counts the number of bytes written. - - the buffer to write - the start index to write from - the number of characters to write - - - Overrides implementation of . Counts - the number of bytes written. - - - - - - Writes a string to the output and counts the number of bytes written. - - The string data to write to the output. - - - Overrides implementation of . Counts - the number of bytes written. - - - - - - Total number of bytes written. - - - - - Gets or sets the total number of bytes written. - - - The total number of bytes written. - - - - Gets or sets the total number of bytes written. - - - - - - A fixed size rolling buffer of logging events. - - - - An array backed fixed size leaky bucket. - - - Nicko Cadell - Gert Driesen - - - - Constructor - - The maximum number of logging events in the buffer. - - - Initializes a new instance of the class with - the specified maximum number of buffered logging events. - - - The argument is not a positive integer. - - - - Appends a to the buffer. - - The event to append to the buffer. - The event discarded from the buffer, if the buffer is full, otherwise null. - - - Append an event to the buffer. If the buffer still contains free space then - null is returned. If the buffer is full then an event will be dropped - to make space for the new event, the event dropped is returned. - - - - - - Get and remove the oldest event in the buffer. - - The oldest logging event in the buffer - - - Gets the oldest (first) logging event in the buffer and removes it - from the buffer. - - - - - - Pops all the logging events from the buffer into an array. - - An array of all the logging events in the buffer. - - - Get all the events in the buffer and clear the buffer. - - - - - - Clear the buffer - - - - Clear the buffer of all events. The events in the buffer are lost. - - - - - - Gets the th oldest event currently in the buffer. - - The th oldest event currently in the buffer. - - - If is outside the range 0 to the number of events - currently in the buffer, then null is returned. - - - - - - Gets the maximum size of the buffer. - - The maximum size of the buffer. - - - Gets the maximum size of the buffer - - - - - - Gets the number of logging events in the buffer. - - The number of logging events in the buffer. - - - This number is guaranteed to be in the range 0 to - (inclusive). - - - - - - An always empty . - - - - A singleton implementation of the - interface that always represents an empty collection. - - - Nicko Cadell - Gert Driesen - - - - Initializes a new instance of the class. - - - - Uses a private access modifier to enforce the singleton pattern. - - - - - - Copies the elements of the to an - , starting at a particular Array index. - - The one-dimensional - that is the destination of the elements copied from - . The Array must have zero-based - indexing. - The zero-based index in array at which - copying begins. - - - As the collection is empty no values are copied into the array. - - - - - - Returns an enumerator that can iterate through a collection. 
- - - An that can be used to - iterate through the collection. - - - - As the collection is empty a is returned. - - - - - - The singleton instance of the empty collection. - - - - - Gets the singleton instance of the empty collection. - - The singleton instance of the empty collection. - - - Gets the singleton instance of the empty collection. - - - - - - Gets a value indicating if access to the is synchronized (thread-safe). - - - true if access to the is synchronized (thread-safe); otherwise, false. - - - - For the this property is always true. - - - - - - Gets the number of elements contained in the . - - - The number of elements contained in the . - - - - As the collection is empty the is always 0. - - - - - - Gets an object that can be used to synchronize access to the . - - - An object that can be used to synchronize access to the . - - - - As the collection is empty and thread safe and synchronized this instance is also - the object. - - - - - - An always empty . - - - - A singleton implementation of the - interface that always represents an empty collection. - - - Nicko Cadell - Gert Driesen - - - - Initializes a new instance of the class. - - - - Uses a private access modifier to enforce the singleton pattern. - - - - - - Copies the elements of the to an - , starting at a particular Array index. - - The one-dimensional - that is the destination of the elements copied from - . The Array must have zero-based - indexing. - The zero-based index in array at which - copying begins. - - - As the collection is empty no values are copied into the array. - - - - - - Returns an enumerator that can iterate through a collection. - - - An that can be used to - iterate through the collection. - - - - As the collection is empty a is returned. - - - - - - Adds an element with the provided key and value to the - . - - The to use as the key of the element to add. - The to use as the value of the element to add. - - - As the collection is empty no new values can be added. A - is thrown if this method is called. - - - This dictionary is always empty and cannot be modified. - - - - Removes all elements from the . - - - - As the collection is empty no values can be removed. A - is thrown if this method is called. - - - This dictionary is always empty and cannot be modified. - - - - Determines whether the contains an element - with the specified key. - - The key to locate in the . - false - - - As the collection is empty the method always returns false. - - - - - - Returns an enumerator that can iterate through a collection. - - - An that can be used to - iterate through the collection. - - - - As the collection is empty a is returned. - - - - - - Removes the element with the specified key from the . - - The key of the element to remove. - - - As the collection is empty no values can be removed. A - is thrown if this method is called. - - - This dictionary is always empty and cannot be modified. - - - - The singleton instance of the empty dictionary. - - - - - Gets the singleton instance of the . - - The singleton instance of the . - - - Gets the singleton instance of the . - - - - - - Gets a value indicating if access to the is synchronized (thread-safe). - - - true if access to the is synchronized (thread-safe); otherwise, false. - - - - For the this property is always true. - - - - - - Gets the number of elements contained in the - - - The number of elements contained in the . - - - - As the collection is empty the is always 0. 
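Both of these always-empty singletons are obtained through their Instance properties; a brief sketch:

    using System;
    using System.Collections;
    using log4net.Util;

    static class EmptyDemo
    {
        static void Main()
        {
            ICollection noItems = EmptyCollection.Instance;    // Count is always 0
            IDictionary noEntries = EmptyDictionary.Instance;  // read-only, fixed size

            Console.WriteLine(noItems.Count);                  // 0
            Console.WriteLine(noEntries.Contains("any key"));  // False
        }
    }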
- - - - - - Gets an object that can be used to synchronize access to the . - - - An object that can be used to synchronize access to the . - - - - As the collection is empty and thread safe and synchronized this instance is also - the object. - - - - - - Gets a value indicating whether the has a fixed size. - - true - - - As the collection is empty always returns true. - - - - - - Gets a value indicating whether the is read-only. - - true - - - As the collection is empty always returns true. - - - - - - Gets an containing the keys of the . - - An containing the keys of the . - - - As the collection is empty a is returned. - - - - - - Gets an containing the values of the . - - An containing the values of the . - - - As the collection is empty a is returned. - - - - - - Gets or sets the element with the specified key. - - The key of the element to get or set. - null - - - As the collection is empty no values can be looked up or stored. - If the index getter is called then null is returned. - A is thrown if the setter is called. - - - This dictionary is always empty and cannot be modified. - - - - Contains the information obtained when parsing formatting modifiers - in conversion modifiers. - - - - Holds the formatting information extracted from the format string by - the . This is used by the - objects when rendering the output. - - - Nicko Cadell - Gert Driesen - - - - Default Constructor - - - - Initializes a new instance of the class. - - - - - - Constructor - - - - Initializes a new instance of the class - with the specified parameters. - - - - - - Gets or sets the minimum value. - - - The minimum value. - - - - Gets or sets the minimum value. - - - - - - Gets or sets the maximum value. - - - The maximum value. - - - - Gets or sets the maximum value. - - - - - - Gets or sets a flag indicating whether left align is enabled - or not. - - - A flag indicating whether left align is enabled or not. - - - - Gets or sets a flag indicating whether left align is enabled or not. - - - - - - Implementation of Properties collection for the - - - - This class implements a properties collection that is thread safe and supports both - storing properties and capturing a read only copy of the current properties. - - - This class is optimized for the scenario where the properties are read frequently - and are modified infrequently. - - - Nicko Cadell - - - - The read only copy of the properties. - - - - This variable is declared volatile to prevent the compiler and JIT from - reordering reads and writes of this thread performed on different threads. - - - - - - Lock object used to synchronize updates within this instance - - - - - Constructor - - - - Initializes a new instance of the class. - - - - - - Remove a property from the global context - - the key for the entry to remove - - - Removing an entry from the global context properties is relatively expensive compared - with reading a value. - - - - - - Clear the global context properties - - - - - Get a readonly immutable copy of the properties - - the current global context properties - - - This implementation is fast because the GlobalContextProperties class - stores a readonly copy of the properties. - - - - - - Gets or sets the value of a property - - - The value for the property with the specified key - - - - Reading the value for a key is faster than setting the value. - When the value is written a new read only copy of - the properties is created.
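In user code the GlobalContextProperties collection is reached through the GlobalContext facade. A sketch showing why reads are cheap and writes are comparatively expensive; the property key is illustrative:

    using System;
    using log4net;

    static class ContextDemo
    {
        static void Main()
        {
            // Writing replaces the read-only snapshot (relatively expensive).
            GlobalContext.Properties["host-role"] = "worker";

            // Reading goes against the immutable snapshot (cheap, lock-free).
            Console.WriteLine(GlobalContext.Properties["host-role"]);
        }
    }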
- - - - - - Manages a mapping from levels to - - - - Manages an ordered mapping from instances - to subclasses. - - - Nicko Cadell - - - - Default constructor - - - - Initialise a new instance of . - - - - - - Add a to this mapping - - the entry to add - - - If a has previously been added - for the same then that entry will be - overwritten. - - - - - - Look up the mapping for the specified level - - the level to look up - the for the level or null if no mapping found - - - Look up the value for the specified level. Finds the nearest - mapping value for the level that is equal to or less than the - specified. - - - If no mapping could be found then null is returned. - - - - - - Initialize options - - - - Caches the sorted list of in an array - - - - - - Implementation of Properties collection for the - - - - Class implements a collection of properties that is specific to each thread. - The class is not synchronized as each thread has its own . - - - Nicko Cadell - - - - Constructor - - - - Initializes a new instance of the class. - - - - - - Remove a property - - the key for the entry to remove - - - Remove the value for the specified from the context. - - - - - - Clear all the context properties - - - - Clear all the context properties - - - - - - Get the PropertiesDictionary stored in the LocalDataStoreSlot for this thread. - - create the dictionary if it does not exist, otherwise return null if it does not exist - the properties for this thread - - - The collection returned is only to be used on the calling thread. If the - caller needs to share the collection between different threads then the - caller must clone the collection before doing so. - - - - - - Gets or sets the value of a property - - - The value for the property with the specified key - - - - Get or set the property value for the specified. - - - - - - Outputs log statements from within the log4net assembly. - - - - Log4net components cannot make log4net logging calls. However, it is - sometimes useful for the user to learn about what log4net is - doing. - - - All log4net internal debug calls go to the standard output stream - whereas internal error messages are sent to the standard error output - stream. - - - Nicko Cadell - Gert Driesen - - - - Initializes a new instance of the class. - - - - Uses a private access modifier to prevent instantiation of this class. - - - - - - Static constructor that initializes logging by reading - settings from the application configuration file. - - - - The log4net.Internal.Debug application setting - controls internal debugging. This setting should be set - to true to enable debugging. - - - The log4net.Internal.Quiet application setting - suppresses all internal logging including error messages. - This setting should be set to true to enable message - suppression. - - - - - - Writes log4net internal debug messages to the - standard output stream. - - The message to log. - - - All internal debug messages are prepended with - the string "log4net: ". - - - - - - Writes log4net internal debug messages to the - standard output stream. - - The message to log. - An exception to log. - - - All internal debug messages are prepended with - the string "log4net: ". - - - - - - Writes log4net internal warning messages to the - standard error stream. - - The message to log. - - - All internal warning messages are prepended with - the string "log4net:WARN ". - - - - - - Writes log4net internal warning messages to the - standard error stream. - - The message to log. - An exception to log.
- - - All internal warning messages are prepended with - the string "log4net:WARN ". - - - - - - Writes log4net internal error messages to the - standard error stream. - - The message to log. - - - All internal error messages are prepended with - the string "log4net:ERROR ". - - - - - - Writes log4net internal error messages to the - standard error stream. - - The message to log. - An exception to log. - - - All internal error messages are prepended with - the string "log4net:ERROR ". - - - - - - Writes output to the standard output stream. - - The message to log. - - - Writes to both Console.Out and System.Diagnostics.Trace. - Note that the System.Diagnostics.Trace is not supported - on the Compact Framework. - - - If the AppDomain is not configured with a config file then - the call to System.Diagnostics.Trace may fail. This is only - an issue if you are programmatically creating your own AppDomains. - - - - - - Writes output to the standard error stream. - - The message to log. - - - Writes to both Console.Error and System.Diagnostics.Trace. - Note that the System.Diagnostics.Trace is not supported - on the Compact Framework. - - - If the AppDomain is not configured with a config file then - the call to System.Diagnostics.Trace may fail. This is only - an issue if you are programmatically creating your own AppDomains. - - - - - - Default debug level - - - - - In quietMode not even errors generate any output. - - - - - Gets or sets a value indicating whether log4net internal logging - is enabled or disabled. - - - true if log4net internal logging is enabled, otherwise - false. - - - - When set to true, internal debug level logging will be - displayed. - - - This value can be set by setting the application setting - log4net.Internal.Debug in the application configuration - file. - - - The default value is false, i.e. debugging is - disabled. - - - - - The following example enables internal debugging using the - application configuration file : - - - - - - - - - - - - - Gets or sets a value indicating whether log4net should generate no output - from internal logging, not even for errors. - - - true if log4net should generate no output at all from internal - logging, otherwise false. - - - - When set to true this will cause internal logging at all levels to be - suppressed. This means that no warning or error reports will be logged. - This option overrides the setting and - disables all debug output as well. - - This value can be set by setting the application setting - log4net.Internal.Quiet in the application configuration file. - - - The default value is false, i.e. internal logging is not - disabled. - - - - The following example disables internal logging using the - application configuration file : - - - - - - - - - - - - Test if LogLog.Debug is enabled for output. - - - true if Debug is enabled - - - - Test if LogLog.Debug is enabled for output. - - - - - - Test if LogLog.Warn is enabled for output. - - - true if Warn is enabled - - - - Test if LogLog.Warn is enabled for output. - - - - - - Test if LogLog.Error is enabled for output. - - - true if Error is enabled - - - - Test if LogLog.Error is enabled for output. - - - - - - An always empty . - - - - A singleton implementation of the over a collection - that is empty and not modifiable. - - - Nicko Cadell - Gert Driesen - - - - Initializes a new instance of the class. - - - - Uses a private access modifier to enforce the singleton pattern. - - - - - - Test if the enumerator can advance, if so advance. - - false as the cannot advance.
- - - As the enumerator is over an empty collection its - value cannot be moved over a valid position, therefore - will always return false. - - - - - - Resets the enumerator back to the start. - - - - As the enumerator is over an empty collection does nothing. - - - - - - The singleton instance of the . - - - - - Gets the singleton instance of the . - - The singleton instance of the . - - - Gets the singleton instance of the . - - - - - - Gets the current object from the enumerator. - - - Throws an because the - never has a current value. - - - - As the enumerator is over an empty collection its - value cannot be moved over a valid position, therefore - will throw an . - - - The collection is empty and - cannot be positioned over a valid location. - - - - Gets the current key from the enumerator. - - - Throws an exception because the - never has a current value. - - - - As the enumerator is over an empty collection its - value cannot be moved over a valid position, therefore - will throw an . - - - The collection is empty and - cannot be positioned over a valid location. - - - - Gets the current value from the enumerator. - - The current value from the enumerator. - - Throws an because the - never has a current value. - - - - As the enumerator is over an empty collection its - value cannot be moved over a valid position, therefore - will throw an . - - - The collection is empty and - cannot be positioned over a valid location. - - - - Gets the current entry from the enumerator. - - - Throws an because the - never has a current entry. - - - - As the enumerator is over an empty collection its - value cannot be moved over a valid position, therefore - will throw an . - - - The collection is empty and - cannot be positioned over a valid location. - - - - An always empty . - - - - A singleton implementation of the over a collection - that is empty and not modifiable. - - - Nicko Cadell - Gert Driesen - - - - Initializes a new instance of the class. - - - - Uses a private access modifier to enforce the singleton pattern. - - - - - - Test if the enumerator can advance, if so advance - - false as the cannot advance. - - - As the enumerator is over an empty collection its - value cannot be moved over a valid position, therefore - will always return false. - - - - - - Resets the enumerator back to the start. - - - - As the enumerator is over an empty collection does nothing. - - - - - - The singleton instance of the . - - - - - Get the singleton instance of the . - - The singleton instance of the . - - - Gets the singleton instance of the . - - - - - - Gets the current object from the enumerator. - - - Throws an because the - never has a current value. - - - - As the enumerator is over an empty collection its - value cannot be moved over a valid position, therefore - will throw an . - - - The collection is empty and - cannot be positioned over a valid location. - - - - A SecurityContext used when a SecurityContext is not required - - - - The is a no-op implementation of the - base class. It is used where a - is required but one has not been provided. - - - Nicko Cadell - - - - Singleton instance of - - - - Singleton instance of - - - - - - Private constructor - - - - Private constructor for singleton pattern. - - - - - - Impersonate this SecurityContext - - State supplied by the caller - null - - - No impersonation is done and null is always returned. 
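A sketch of how a component might use the null context when no SecurityContext has been supplied; the using statement tolerates the null returned by Impersonate:

    using System;
    using log4net.Core;

    static class SecurityDemo
    {
        static void Main()
        {
            SecurityContext context = NullSecurityContext.Instance;

            // Impersonate is a no-op and returns null here; the using
            // statement handles a null resource safely.
            using (context.Impersonate(null))
            {
                Console.WriteLine("running unimpersonated");
            }
        }
    }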
- - - - - - Implements log4net's default error handling policy which consists - of emitting a message for the first error in an appender and - ignoring all subsequent errors. - - - - The error message is printed on the standard error output stream. - - - This policy aims at protecting an otherwise working application - from being flooded with error messages when logging fails. - - - Nicko Cadell - Gert Driesen - - - - Default Constructor - - - - Initializes a new instance of the class. - - - - - - Constructor - - The prefix to use for each message. - - - Initializes a new instance of the class - with the specified prefix. - - - - - - Log an Error - - The error message. - The exception. - The internal error code. - - - Prints the message and the stack trace of the exception on the standard - error output stream. - - - - - - Log an Error - - The error message. - The exception. - - - Prints the message and the stack trace of the exception on the standard - error output stream. - - - - - - Log an error - - The error message. - - - Prints the error message passed as a parameter on the standard - error output stream. - - - - - - Flag to indicate if it is the first error - - - - - String to prefix each message with - - - - - Is error logging enabled - - - - Is error logging enabled. Logging is only enabled for the - first error delivered to the . - - - - - - A convenience class to convert property values to specific types. - - - - Utility functions for converting types and parsing values. - - - Nicko Cadell - Gert Driesen - - - - Initializes a new instance of the class. - - - - Uses a private access modifier to prevent instantiation of this class. - - - - - - Converts a string to a value. - - String to convert. - The default value. - The value of . - - - If is "true", then true is returned. - If is "false", then false is returned. - Otherwise, is returned. - - - - - - Parses a file size into a number. - - String to parse. - The default value. - The value of . - - - Parses a file size of the form: number[KB|MB|GB] into a - long value. It is scaled with the appropriate multiplier. - - - is returned when - cannot be converted to a value. - - - - - - Converts a string to an object. - - The target type to convert to. - The string to convert to an object. - - The object converted from a string or null when the - conversion failed. - - - - Converts a string to an object. Uses the converter registry to try - to convert the string value into the specified target type. - - - - - - Checks if there is an appropriate type conversion from the source type to the target type. - - The type to convert from. - The type to convert to. - true if there is a conversion from the source type to the target type. - - Checks if there is an appropriate type conversion from the source type to the target type. - - - - - - - Converts an object to the target type. - - The object to convert to the target type. - The type to convert to. - The converted object. - - - Converts an object to the target type. - - - - - - Instantiates an object given a class name. - - The fully qualified class name of the object to instantiate. - The class to which the new object should belong. - The object to return in case of non-fulfillment. - - An instance of the or - if the object could not be instantiated. - - - - Checks that the is a subclass of - . If that test fails or the object could - not be instantiated, then is returned. - - - - - - Performs variable substitution in string from the - values of keys found in .
- - The string on which variable substitution is performed. - The dictionary to use to lookup variables. - The result of the substitutions. - - - The variable substitution delimiters are ${ and }. - - - For example, if props contains key=value, then the call - - - - string s = OptionConverter.SubstituteVariables("Value of key is ${key}."); - - - - will set the variable s to "Value of key is value.". - - - If no value could be found for the specified key, then substitution - defaults to an empty string. - - - For example, if system properties contains no value for the key - "nonExistentKey", then the call - - - - string s = OptionConverter.SubstituteVariables("Value of nonExistentKey is [${nonExistentKey}]"); - - - - will set s to "Value of nonExistentKey is []". - - - An Exception is thrown if contains a start - delimiter "${" which is not balanced by a stop delimiter "}". - - - - - - Converts the string representation of the name or numeric value of one or - more enumerated constants to an equivalent enumerated object. - - The type to convert to. - The enum string value. - If true, ignore case; otherwise, regard case. - An object of type whose value is represented by . - - - - Most of the work of the class - is delegated to the PatternParser class. - - - - The PatternParser processes a pattern string and - returns a chain of objects. - - - Nicko Cadell - Gert Driesen - - - - Constructor - - The pattern to parse. - - - Initializes a new instance of the class - with the specified pattern string. - - - - - - Parses the pattern into a chain of pattern converters. - - The head of a chain of pattern converters. - - - Parses the pattern into a chain of pattern converters. - - - - - - Build the unified cache of converters from the static and instance maps - - the list of all the converter names - - - Build the unified cache of converters from the static and instance maps - - - - - - Internal method to parse the specified pattern to find specified matches - - the pattern to parse - the converter names to match in the pattern - - - The matches param must be sorted such that longer strings come before shorter ones. - - - - - - Process a parsed literal - - the literal text - - - - Process a parsed converter pattern - - the name of the converter - the optional option for the converter - the formatting info for the converter - - - - Resets the internal state of the parser and adds the specified pattern converter - to the chain. - - The pattern converter to add. - - - - The first pattern converter in the chain - - - - - the last pattern converter in the chain - - - - - The pattern - - - - - Internal map of converter identifiers to converter types - - - - This map overrides the static s_globalRulesRegistry map. - - - - - - Get the converter registry used by this parser - - - The converter registry used by this parser - - - - Get the converter registry used by this parser - - - - - - Sort strings by length - - - - that orders strings by string length. - The longest strings are placed first - - - - - - This class implements a patterned string. - - - - This string has embedded patterns that are resolved and expanded - when the string is formatted. - - - This class functions similarly to the - in that it accepts a pattern and renders it to a string. Unlike the - however the PatternString - does not render the properties of a specific but - of the process in general. 
- - - The recognized conversion pattern names are: - - - - Conversion Pattern Name - Effect - - - appdomain - - - Used to output the friendly name of the current AppDomain. - - - - - date - - - Used to output the date of the logging event in the local time zone. - To output the date in universal time use the %utcdate pattern. - The date conversion - specifier may be followed by a date format specifier enclosed - between braces. For example, %date{HH:mm:ss,fff} or - %date{dd MMM yyyy HH:mm:ss,fff}. If no date format specifier is - given then ISO8601 format is - assumed (). - - - The date format specifier admits the same syntax as the - time pattern string of the . - - - For better results it is recommended to use the log4net date - formatters. These can be specified using one of the strings - "ABSOLUTE", "DATE" and "ISO8601" for specifying - , - and respectively - . For example, - %date{ISO8601} or %date{ABSOLUTE}. - - - These dedicated date formatters perform significantly - better than . - - - - - env - - - Used to output a specific environment variable. The key to - lookup must be specified within braces and directly following the - pattern specifier, e.g. %env{COMPUTERNAME} would include the value - of the COMPUTERNAME environment variable. - - - The env pattern is not supported on the .NET Compact Framework. - - - - - identity - - - Used to output the user name for the currently active user - (Principal.Identity.Name). - - - - - newline - - - Outputs the platform dependent line separator character or - characters. - - - This conversion pattern name offers the same performance as using - non-portable line separator strings such as "\n", or "\r\n". - Thus, it is the preferred way of specifying a line separator. - - - - - processid - - - Used to output the system process ID for the current process. - - - - - property - - - Used to output a specific context property. The key to - lookup must be specified within braces and directly following the - pattern specifier, e.g. %property{user} would include the value - from the property that is keyed by the string 'user'. Each property value - that is to be included in the log must be specified separately. - Properties are stored in logging contexts. By default - the log4net:HostName property is set to the name of the machine on - which the event was originally logged. - - - If no key is specified, e.g. %property, then all the keys and their - values are printed in a comma separated list. - - - The properties of an event are combined from a number of different - contexts. These are listed below in the order in which they are searched. - - - - the thread properties - - The that are set on the current - thread. These properties are shared by all events logged on this thread. - - - - the global properties - - The that are set globally. These - properties are shared by all the threads in the AppDomain. - - - - - - - random - - - Used to output a random string of characters. The string is made up of - uppercase letters and numbers. By default the string is 4 characters long. - The length of the string can be specified within braces directly following the - pattern specifier, e.g. %random{8} would output an 8 character string. - - - - - username - - - Used to output the WindowsIdentity for the currently - active user. - - - - - utcdate - - - Used to output the date of the logging event in universal time. - The date conversion - specifier may be followed by a date format specifier enclosed - between braces.
For example, %utcdate{HH:mm:ss,fff} or - %utcdate{dd MMM yyyy HH:mm:ss,fff}. If no date format specifier is - given then ISO8601 format is - assumed (). - - - The date format specifier admits the same syntax as the - time pattern string of the . - - - For better results it is recommended to use the log4net date - formatters. These can be specified using one of the strings - "ABSOLUTE", "DATE" and "ISO8601" for specifying - , - and respectively - . For example, - %utcdate{ISO8601} or %utcdate{ABSOLUTE}. - - - These dedicated date formatters perform significantly - better than . - - - - - % - - - The sequence %% outputs a single percent sign. - - - - - - Additional pattern converters may be registered with a specific - instance using or - . - - - See the for details on the - format modifiers supported by the patterns. - - - Nicko Cadell - - - - Internal map of converter identifiers to converter types. - - - - - the pattern - - - - - the head of the pattern converter chain - - - - - patterns defined on this PatternString only - - - - - Initialize the global registry - - - - - Default constructor - - - - Initialize a new instance of - - - - - - Constructs a PatternString - - The pattern to use with this PatternString - - - Initialize a new instance of with the pattern specified. - - - - - - Initialize object options - - - - This is part of the delayed object - activation scheme. The method must - be called on this object after the configuration properties have - been set. Until is called this - object is in an undefined state and must not be used. - - - If any of the configuration properties are modified then - must be called again. - - - - - - Create the used to parse the pattern - - the pattern to parse - The - - - Returns PatternParser used to parse the conversion string. Subclasses - may override this to return a subclass of PatternParser which recognize - custom conversion pattern name. - - - - - - Produces a formatted string as specified by the conversion pattern. - - The TextWriter to write the formatted event to - - - Format the pattern to the . - - - - - - Format the pattern as a string - - the pattern formatted as a string - - - Format the pattern to a string. - - - - - - Add a converter to this PatternString - - the converter info - - - This version of the method is used by the configurator. - Programmatic users should use the alternative method. - - - - - - Add a converter to this PatternString - - the name of the conversion pattern for this converter - the type of the converter - - - Add a converter to this PatternString - - - - - - Gets or sets the pattern formatting string - - - The pattern formatting string - - - - The ConversionPattern option. This is the string which - controls formatting and consists of a mix of literal content and - conversion specifiers. - - - - - - Wrapper class used to map converter names to converter types - - - - Wrapper class used to map converter names to converter types - - - - - - default constructor - - - - - Gets or sets the name of the conversion pattern - - - The name of the conversion pattern - - - - Gets or sets the name of the conversion pattern - - - - - - Gets or sets the type of the converter - - - The type of the converter - - - - Gets or sets the type of the converter - - - - - - String keyed object map. - - - - While this collection is serializable only member - objects that are serializable will - be serialized along with this collection. - - - Nicko Cadell - Gert Driesen - - - - String keyed object map that is read only. 
- - - - This collection is readonly and cannot be modified. - - - While this collection is serializable only member - objects that are serializable will - be serialized along with this collection. - - - Nicko Cadell - Gert Driesen - - - - The Hashtable used to store the properties data - - - - - Constructor - - - - Initializes a new instance of the class. - - - - - - Copy Constructor - - properties to copy - - - Initializes a new instance of the class. - - - - - - Deserialization constructor - - The that holds the serialized object data. - The that contains contextual information about the source or destination. - - - Initializes a new instance of the class - with serialized data. - - - - - - Gets the key names. - - An array of all the keys. - - - Gets the key names. - - - - - - Test if the dictionary contains a specified key - - the key to look for - true if the dictionary contains the specified key - - - Test if the dictionary contains a specified key - - - - - - Serializes this object into the provided. - - The to populate with data. - The destination for this serialization. - - - Serializes this object into the provided. - - - - - - See - - - - - See - - - - - - See - - - - - - - Remove all properties from the properties collection - - - - - See - - - - - - - See - - - - - - - See - - - - - Gets or sets the value of the property with the specified key. - - - The value of the property with the specified key. - - The key of the property to get or set. - - - The property value will only be serialized if it is serializable. - If it cannot be serialized it will be silently ignored if - a serialization operation is performed. - - - - - - The hashtable used to store the properties - - - The internal collection used to store the properties - - - - The hashtable used to store the properties - - - - - - See - - - - - See - - - - - See - - - - - See - - - - - See - - - - - See - - - - - The number of properties in this collection - - - - - See - - - - - Constructor - - - - Initializes a new instance of the class. - - - - - - Constructor - - properties to copy - - - Initializes a new instance of the class. - - - - - - Initializes a new instance of the class - with serialized data. - - The that holds the serialized object data. - The that contains contextual information about the source or destination. - - - Because this class is sealed the serialization constructor is private. - - - - - - Remove the entry with the specified key from this dictionary - - the key for the entry to remove - - - Remove the entry with the specified key from this dictionary - - - - - - See - - an enumerator - - - Returns a over the contents of this collection. - - - - - - See - - the key to remove - - - Remove the entry with the specified key from this dictionary - - - - - - See - - the key to lookup in the collection - true if the collection contains the specified key - - - Test if this collection contains a specified key. - - - - - - Remove all properties from the properties collection - - - - Remove all properties from the properties collection - - - - - - See - - the key - the value to store for the key - - - Store a value for the specified . - - - Thrown if the is not a string - - - - See - - - - - - - See - - - - - Gets or sets the value of the property with the specified key. - - - The value of the property with the specified key. - - The key of the property to get or set. - - - The property value will only be serialized if it is serializable.
- If it cannot be serialized it will be silently ignored if - a serialization operation is performed. - - - - - - See - - - false - - - - This collection is modifiable. This property always - returns false. - - - - - - See - - - The value for the key specified. - - - - Get or set a value for the specified . - - - Thrown if the is not a string - - - - See - - - - - See - - - - - See - - - - - See - - - - - See - - - - - A that ignores the message - - - - This writer is used in special cases where it is necessary - to protect a writer from being closed by a client. - - - Nicko Cadell - - - - Constructor - - the writer to actually write to - - - Create a new ProtectCloseTextWriter using a writer - - - - - - Attach this instance to a different underlying - - the writer to attach to - - - Attach this instance to a different underlying - - - - - - Does not close the underlying output writer. - - - - Does not close the underlying output writer. - This method does nothing. - - - - - - Defines a lock that supports single writers and multiple readers - - - - ReaderWriterLock is used to synchronize access to a resource. - At any given time, it allows either concurrent read access for - multiple threads, or write access for a single thread. In a - situation where a resource is changed infrequently, a - ReaderWriterLock provides better throughput than a simple - one-at-a-time lock, such as . - - - If a platform does not support a System.Threading.ReaderWriterLock - implementation then all readers and writers are serialized. Therefore - the caller must not rely on multiple simultaneous readers. - - - Nicko Cadell - - - - Constructor - - - - Initializes a new instance of the class. - - - - - - Acquires a reader lock - - - - blocks if a different thread has the writer - lock, or if at least one thread is waiting for the writer lock. - - - - - - Decrements the lock count - - - - decrements the lock count. When the count - reaches zero, the lock is released. - - - - - - Acquires the writer lock - - - - This method blocks if another thread has a reader lock or writer lock. - - - - - - Decrements the lock count on the writer lock - - - - ReleaseWriterLock decrements the writer lock count. - When the count reaches zero, the writer lock is released. - - - - - - A that can be and reused - - - - A that can be and reused. - This uses a single buffer for string operations. - - - Nicko Cadell - - - - Create an instance of - - the format provider to use - - - Create an instance of - - - - - - Override Dispose to prevent closing of writer - - flag - - - Override Dispose to prevent closing of writer - - - - - - Reset this string writer so that it can be reused. - - the maximum buffer capacity before it is trimmed - the default size to make the buffer - - - Reset this string writer so that it can be reused. - The internal buffers are cleared and reset. - - - - - - Utility class for system specific information. - - - - Utility class of static methods for system specific information. - - - Nicko Cadell - Gert Driesen - Alexey Solofnenko - - - - Private constructor to prevent instances. - - - - Only static methods are exposed from this type. - - - - - - Initialize default values for private static fields. - - - - Only static methods are exposed from this type. - - - - - - Gets the assembly location path for the specified assembly. - - The assembly to get the location for. - The location of the assembly. - - - This method does not guarantee to return the correct path - to the assembly. 
It only tries to give an indication as to - where the assembly was loaded from. - - - - - - Gets the fully qualified name of the , including - the name of the assembly from which the was - loaded. - The to get the fully qualified name for. - The fully qualified name for the . - - - This is equivalent to the Type.AssemblyQualifiedName property, - but this method works on the .NET Compact Framework 1.0 as well as - the full .NET runtime. - - - - - - Gets the short name of the . - - The to get the name for. - The short name of the . - - - The short name of the assembly is the - without the version, culture, or public key. i.e. it is just the - assembly's file name without the extension. - - - Use this rather than Assembly.GetName().Name because that - is not available on the Compact Framework. - - - Because of a FileIOPermission security demand we cannot do - the obvious Assembly.GetName().Name. We are allowed to get - the of the assembly so we - start from there and strip out just the assembly name. - - - - - - Gets the file name portion of the , including the extension. - - The to get the file name for. - The file name of the assembly. - - - Gets the file name portion of the , including the extension. - - - - - - Loads the type specified in the type string. - - A sibling type to use to load the type. - The name of the type to load. - Flag set to true to throw an exception if the type cannot be loaded. - true to ignore the case of the type name; otherwise, false - The type loaded or null if it could not be loaded. - - - If the type name is fully qualified, i.e. if it contains an assembly name in - the type name, the type will be loaded from the system using - . - - - If the type name is not fully qualified, it will be loaded from the assembly - containing the specified relative type. If the type is not found in the assembly - then all the loaded assemblies will be searched for the type. - - - - - - Loads the type specified in the type string. - - The name of the type to load. - Flag set to true to throw an exception if the type cannot be loaded. - true to ignore the case of the type name; otherwise, false - The type loaded or null if it could not be loaded. - - - If the type name is fully qualified, i.e. if it contains an assembly name in - the type name, the type will be loaded from the system using - . - - - If the type name is not fully qualified it will be loaded from the - assembly that is directly calling this method. If the type is not found - in the assembly then all the loaded assemblies will be searched for the type. - - - - - - Loads the type specified in the type string. - - An assembly to load the type from. - The name of the type to load. - Flag set to true to throw an exception if the type cannot be loaded. - true to ignore the case of the type name; otherwise, false - The type loaded or null if it could not be loaded. - - - If the type name is fully qualified, i.e. if it contains an assembly name in - the type name, the type will be loaded from the system using - . - - - If the type name is not fully qualified it will be loaded from the specified - assembly. If the type is not found in the assembly then all the loaded assemblies - will be searched for the type.
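- As a sketch only, not the actual implementation, the lookup order described above can be expressed with standard reflection calls; the helper name FindType is hypothetical:
-
- // Requires System and System.Reflection.
- static Type FindType(Assembly relativeAssembly, string typeName, bool ignoreCase)
- {
-     // A comma means the name is assembly qualified: let the system resolve it.
-     if (typeName.IndexOf(',') >= 0)
-     {
-         return Type.GetType(typeName, false, ignoreCase);
-     }
-     // Otherwise try the sibling assembly first...
-     Type type = relativeAssembly.GetType(typeName, false, ignoreCase);
-     if (type != null) return type;
-     // ...then every assembly already loaded into the AppDomain.
-     foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies())
-     {
-         type = assembly.GetType(typeName, false, ignoreCase);
-         if (type != null) return type;
-     }
-     return null; // the caller decides whether to throw
- }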
- - - - - - Generate a new guid - - A new Guid - - - Generate a new guid - - - - - - Create an - - The name of the parameter that caused the exception - The value of the argument that causes this exception - The message that describes the error - the ArgumentOutOfRangeException object - - - Create a new instance of the class - with a specified error message, the parameter name, and the value - of the argument. - - - The Compact Framework does not support the 3 parameter constructor for the - type. This method provides an - implementation that works for all platforms. - - - - - - Parse a string into an value - - the string to parse - out param where the parsed value is placed - true if the string was able to be parsed into an integer - - - Attempts to parse the string into an integer. If the string cannot - be parsed then this method returns false. The method does not throw an exception. - - - - - - Parse a string into an value - - the string to parse - out param where the parsed value is placed - true if the string was able to be parsed into an integer - - - Attempts to parse the string into an integer. If the string cannot - be parsed then this method returns false. The method does not throw an exception. - - - - - - Lookup an application setting - - the application settings key to lookup - the value for the key, or null - - - Configuration APIs are not supported under the Compact Framework - - - - - - Convert a path into a fully qualified local file path. - - The path to convert. - The fully qualified path. - - - Converts the path specified to a fully - qualified path. If the path is relative it is - taken as relative from the application base - directory. - - - The path specified must be a local file path, a URI is not supported. - - - - - - Creates a new case-insensitive instance of the class with the default initial capacity. - - A new case-insensitive instance of the class with the default initial capacity - - - The new Hashtable instance uses the default load factor, the CaseInsensitiveHashCodeProvider, and the CaseInsensitiveComparer. - - - - - - Gets an empty array of types. - - - - The Type.EmptyTypes field is not available on - the .NET Compact Framework 1.0. - - - - - - Cache the host name for the current machine - - - - - Cache the application friendly name - - - - - Text to output when a null is encountered. - - - - - Text to output when an unsupported feature is requested. - - - - - Start time for the current process. - - - - - Gets the system dependent line terminator. - - - The system dependent line terminator. - - - - Gets the system dependent line terminator. - - - - - - Gets the base directory for this . - - The base directory path for the current . - - - Gets the base directory for this . - - - The value returned may be either a local file path or a URI. - - - - - - Gets the path to the configuration file for the current . - - The path to the configuration file for the current . - - - The .NET Compact Framework 1.0 does not have a concept of a configuration - file. For this runtime, we use the entry assembly location as the root for - the configuration file name. - - - The value returned may be either a local file path or a URI. - - - - - - Gets the path to the file that first executed in the current . - - The path to the entry assembly. - - - Gets the path to the file that first executed in the current . - - - - - - Gets the ID of the current thread. - - The ID of the current thread. 
- - - On the .NET framework, the AppDomain.GetCurrentThreadId method - is used to obtain the thread ID for the current thread. This is the - operating system ID for the thread. - - - On the .NET Compact Framework 1.0 it is not possible to get the - operating system thread ID for the current thread. The native method - GetCurrentThreadId is implemented inline in a header file - and cannot be called. - - - On the .NET Framework 2.0 the Thread.ManagedThreadId is used as this - gives a stable id unrelated to the operating system thread ID which may - change if the runtime is using fibers. - - - - - - Get the host name or machine name for the current machine - - - The hostname or machine name - - - - Get the host name or machine name for the current machine - - - The host name () or - the machine name (Environment.MachineName) for - the current machine, or if neither of these are available - then NOT AVAILABLE is returned. - - - - - - Get this application's friendly name - - - The friendly name of this application as a string - - - - If available the name of the application is retrieved from - the AppDomain using AppDomain.CurrentDomain.FriendlyName. - - - Otherwise the file name of the entry assembly is used. - - - - - - Get the start time for the current process. - - - - This is the time at which the log4net library was loaded into the - AppDomain. Due to reports of a hang in the call to System.Diagnostics.Process.StartTime - this is not the start time for the current process. - - - The log4net library should be loaded by an application early during its - startup, therefore this start time should be a good approximation for - the actual start time. - - - Note that AppDomains may be loaded and unloaded within the - same process without the process terminating, however this start time - will be set per AppDomain. - - - - - - Text to output when a null is encountered. - - - - Use this value to indicate a null has been encountered while - outputting a string representation of an item. - - - The default value is (null). This value can be overridden by specifying - a value for the log4net.NullText appSetting in the application's - .config file. - - - - - - Text to output when an unsupported feature is requested. - - - - Use this value when an unsupported feature is requested. - - - The default value is NOT AVAILABLE. This value can be overridden by specifying - a value for the log4net.NotAvailableText appSetting in the application's - .config file. - - - - - - Utility class that represents a format string. - - - - Utility class that represents a format string. - - - Nicko Cadell - - - - Initialise the - - An that supplies culture-specific formatting information. - A containing zero or more format items. - An array containing zero or more objects to format. - - - - Format the string and arguments - - the formatted string - - - - Replaces the format item in a specified with the text equivalent - of the value of a corresponding instance in a specified array. - A specified parameter supplies culture-specific formatting information. - - An that supplies culture-specific formatting information. - A containing zero or more format items. - An array containing zero or more objects to format. - - A copy of format in which the format items have been replaced by the - equivalent of the corresponding instances of in args. - - - - This method does not throw exceptions. If an exception thrown while formatting the result the - exception and arguments are returned in the result string. 
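- The never-throws contract described above can be sketched as follows; this illustrates the behaviour only and is not the actual SystemStringFormat code (the error-text shape is invented for the example):
-
- static string SafeFormat(IFormatProvider provider, string format, params object[] args)
- {
-     try
-     {
-         return string.Format(provider, format, args);
-     }
-     catch (Exception ex)
-     {
-         // Fold the exception and the raw format into the result instead of
-         // propagating, so that formatting a log message can never throw here.
-         return "<log4net.Error>" + ex.GetType().Name + ": " + ex.Message
-             + " format='" + format + "'</log4net.Error>";
-     }
- }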
- - - - - - Process an error during StringFormat - - - - - Dump the contents of an array into a string builder - - - - - Dump an object to a string - - - - - Implementation of Properties collection for the - - - - Class implements a collection of properties that is specific to each thread. - The class is not synchronized as each thread has its own . - - - Nicko Cadell - - - - The thread local data slot to use to store a PropertiesDictionary. - - - - - Internal constructor - - - - Initializes a new instance of the class. - - - - - - Remove a property - - the key for the entry to remove - - - Remove a property - - - - - - Clear all properties - - - - Clear all properties - - - - - - Get the PropertiesDictionary for this thread. - - create the dictionary if it does not exist, otherwise return null if it does not exist - the properties for this thread - - - The collection returned is only to be used on the calling thread. If the - caller needs to share the collection between different threads then the - caller must clone the collection before doing so. - - - - - - Gets or sets the value of a property - - - The value for the property with the specified key - - - - Gets or sets the value of a property - - - - - - Implementation of Stack for the - - - - Implementation of Stack for the - - - Nicko Cadell - - - - The stack store. - - - - - Internal constructor - - - - Initializes a new instance of the class. - - - - - - Clears all the contextual information held in this stack. - - - - Clears all the contextual information held in this stack. - Only call this if you think that this thread is being reused after - a previous call execution which may not have completed correctly. - You do not need to use this method if you always guarantee to call - the method of the - returned from even in exceptional circumstances, - for example by using the using(log4net.ThreadContext.Stacks["NDC"].Push("Stack_Message")) - syntax. - - - - - - Removes the top context from this stack. - - The message in the context that was removed from the top of this stack. - - - Remove the top context from this stack, and return - it to the caller. If this stack is empty then an - empty string (not ) is returned. - - - - - - Pushes a new context message into this stack. - - The new context message. - - An that can be used to clean up the context stack. - - - - Pushes a new context onto this stack. An - is returned that can be used to clean up this stack. This - can be easily combined with the using keyword to scope the - context. - - - Simple example of using the Push method with the using keyword. - - using(log4net.ThreadContext.Stacks["NDC"].Push("Stack_Message")) - { - log.Warn("This should have a ThreadContext Stack message"); - } - - - - - - Gets the current context information for this stack. - - The current context information. - - - - Gets the current context information for this stack. - - Gets the current context information - - - Gets the current context information for this stack. - - - - - - Get a portable version of this object - - the portable instance of this object - - - Get a cross thread portable version of this object - - - - - - The number of messages in the stack - - - The current number of messages in the stack - - - - The current number of messages in the stack. That is - the number of times has been called - minus the number of times has been called.
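- A short sketch of how this count tracks pushes and pops (the stack name is the conventional "NDC"; the messages are illustrative):
-
- ThreadContextStack stack = ThreadContext.Stacks["NDC"];
- stack.Push("outer");      // Count == 1
- stack.Push("inner");      // Count == 2
- string top = stack.Pop(); // returns "inner", Count == 1
- stack.Pop();              // returns "outer", Count == 0
- stack.Pop();              // stack empty: returns "" (not null)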
- - - - - - Gets and sets the internal stack used by this - The internal storage stack - - - This property is provided only to support backward compatibility - of the . Typically the internal stack should not - be modified. - - - - - - Inner class used to represent a single context frame in the stack. - - - - Inner class used to represent a single context frame in the stack. - - - - - - Constructor - - The message for this context. - The parent context in the chain. - - - Initializes a new instance of the class - with the specified message and parent context. - - - - - - Get the message. - - The message. - - - Get the message. - - - - - - Gets the full text of the context down to the root level. - - - The full text of the context down to the root level. - - - - Gets the full text of the context down to the root level. - - - - - - Struct returned from the method. - - - - This struct implements the and is designed to be used - with the pattern to remove the stack frame at the end of the scope. - - - - - - The ThreadContextStack internal stack - - - - - The depth to trim the stack to when this instance is disposed - - - - - Constructor - - The internal stack used by the ThreadContextStack. - The depth to return the stack to when this object is disposed. - - - Initializes a new instance of the class with - the specified stack and return depth. - - - - - - Returns the stack to the correct depth. - - - - Returns the stack to the correct depth. - - - - - - Implementation of Stacks collection for the - - - - Implementation of Stacks collection for the - - - Nicko Cadell - - - - Internal constructor - - - - Initializes a new instance of the class. - - - - - - Gets the named thread context stack - - - The named stack - - - - Gets the named thread context stack - - - - - - Utility class for transforming strings. - - - - Utility class for transforming strings. - - - Nicko Cadell - Gert Driesen - - - - Initializes a new instance of the class. - - - - Uses a private access modifier to prevent instantiation of this class. - - - - - - Write a string to an - - the writer to write to - the string to write - The string to replace non XML compliant chars with - - - The text is escaped either using XML escape entities - or using CDATA sections. - - - - - - Replace invalid XML characters in text string - - the XML text input string - the string to use in place of invalid characters - A string that does not contain invalid XML characters. - - - Certain Unicode code points are not allowed in the XML InfoSet, for - details see: http://www.w3.org/TR/REC-xml/#charsets. - - - This method replaces any illegal characters in the input string - with the mask string specified. - - - - - - Count the number of times that the substring occurs in the text - - the text to search - the substring to find - the number of times the substring occurs in the text - - - The substring is assumed to be non repeating within itself. - - - - - - The log4net Global Context. - - - - The GlobalContext provides a location for global debugging - information to be stored. - - - The global context has a properties map and these properties can - be included in the output of log messages. The - supports selecting and outputting these properties. - - - By default the log4net:HostName property is set to the name of - the current machine. - - - - - GlobalContext.Properties["hostname"] = Environment.MachineName; - - - - Nicko Cadell - - - - Private Constructor. - - - Uses a private access modifier to prevent instantiation of this class.
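- For example, a property set once at startup is visible to every thread and can be emitted by any layout that references it; %property is the standard pattern converter, while the property name and value below are illustrative:
-
- GlobalContext.Properties["site"] = "staging-01"; // shared by all threads
- // A PatternLayout conversion pattern such as
- //   "%date [%property{site}] %-5level %message%newline"
- // will then include the value in every formatted event.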
- - - - - The global context properties instance - - - - - The global properties map. - - - The global properties map. - - - - The global properties map. - - - - - - The log4net Logical Thread Context. - - - - The LogicalThreadContext provides a location for specific debugging - information to be stored. - The LogicalThreadContext properties override any or - properties with the same name. - - - The Logical Thread Context has a properties map and a stack. - The properties and stack can - be included in the output of log messages. The - supports selecting and outputting these properties. - - - The Logical Thread Context provides a diagnostic context for the current call context. - This is an instrument for distinguishing interleaved log - output from different sources. Log output is typically interleaved - when a server handles multiple clients near-simultaneously. - - - The Logical Thread Context is managed on a per basis. - - - Example of using the thread context properties to store a username. - - LogicalThreadContext.Properties["user"] = userName; - log.Info("This log message has a LogicalThreadContext Property called 'user'"); - - - Example of how to push a message into the context stack - - using(LogicalThreadContext.Stacks["LDC"].Push("my context message")) - { - log.Info("This log message has a LogicalThreadContext Stack message that includes 'my context message'"); - - } // at the end of the using block the message is automatically popped - - - - Nicko Cadell - - - - Private Constructor. - - - - Uses a private access modifier to prevent instantiation of this class. - - - - - - The thread context properties instance - - - - - The thread context stacks instance - - - - - The thread properties map - - - The thread properties map - - - - The LogicalThreadContext properties override any - or properties with the same name. - - - - - - The thread stacks - - - stack map - - - - The logical thread stacks. - - - - - - This class is used by client applications to request logger instances. - - - - This class has static methods that are used by a client to request - a logger instance. The method is - used to retrieve a logger. - - - See the interface for more details. - - - Simple example of logging messages - - ILog log = LogManager.GetLogger("application-log"); - - log.Info("Application Start"); - log.Debug("This is a debug message"); - - if (log.IsDebugEnabled) - { - log.Debug("This is another debug message"); - } - - - - - Nicko Cadell - Gert Driesen - - - - Initializes a new instance of the class. - - - Uses a private access modifier to prevent instantiation of this class. - - - - Returns the named logger if it exists. - - Returns the named logger if it exists. - - - - If the named logger exists (in the default repository) then it - returns a reference to the logger, otherwise it returns null. - - - The fully qualified logger name to look for. - The logger found, or null if no logger could be found. - - - - Returns the named logger if it exists. - - - - If the named logger exists (in the specified repository) then it - returns a reference to the logger, otherwise it returns - null. - - - The repository to lookup in. - The fully qualified logger name to look for. - - The logger found, or null if the logger doesn't exist in the specified - repository. - - - - - Returns the named logger if it exists. - - - - If the named logger exists (in the repository for the specified assembly) then it - returns a reference to the logger, otherwise it returns - null. 
- - - The assembly to use to lookup the repository. - The fully qualified logger name to look for. - - The logger, or null if the logger doesn't exist in the specified - assembly's repository. - - - - Get the currently defined loggers. - - Returns all the currently defined loggers in the default repository. - - - The root logger is not included in the returned array. - - All the defined loggers. - - - - Returns all the currently defined loggers in the specified repository. - - The repository to lookup in. - - The root logger is not included in the returned array. - - All the defined loggers. - - - - Returns all the currently defined loggers in the specified assembly's repository. - - The assembly to use to lookup the repository. - - The root logger is not included in the returned array. - - All the defined loggers. - - - Get or create a logger. - - Retrieves or creates a named logger. - - - - Retrieves a logger named as the - parameter. If the named logger already exists, then the - existing instance will be returned. Otherwise, a new instance is - created. - - By default, loggers do not have a set level but inherit - it from the hierarchy. This is one of the central features of - log4net. - - - The name of the logger to retrieve. - The logger with the name specified. - - - - Retrieves or creates a named logger. - - - - Retrieve a logger named as the - parameter. If the named logger already exists, then the - existing instance will be returned. Otherwise, a new instance is - created. - - - By default, loggers do not have a set level but inherit - it from the hierarchy. This is one of the central features of - log4net. - - - The repository to lookup in. - The name of the logger to retrieve. - The logger with the name specified. - - - - Retrieves or creates a named logger. - - - - Retrieve a logger named as the - parameter. If the named logger already exists, then the - existing instance will be returned. Otherwise, a new instance is - created. - - - By default, loggers do not have a set level but inherit - it from the hierarchy. This is one of the central features of - log4net. - - - The assembly to use to lookup the repository. - The name of the logger to retrieve. - The logger with the name specified. - - - - Shorthand for . - - - Get the logger for the fully qualified name of the type specified. - - The full name of will be used as the name of the logger to retrieve. - The logger with the name specified. - - - - Shorthand for . - - - Gets the logger for the fully qualified name of the type specified. - - The repository to lookup in. - The full name of will be used as the name of the logger to retrieve. - The logger with the name specified. - - - - Shorthand for . - - - Gets the logger for the fully qualified name of the type specified. - - The assembly to use to lookup the repository. - The full name of will be used as the name of the logger to retrieve. - The logger with the name specified. - - - - Shuts down the log4net system. - - - - Calling this method will safely close and remove all - appenders in all the loggers including root contained in all the - default repositories. - - - Some appenders need to be closed before the application exits. - Otherwise, pending logging events might be lost. - - The shutdown method is careful to close nested - appenders before closing regular appenders. This allows - configurations where a regular appender is attached to a logger - and again to a nested appender. - - - - - Shutdown a logger repository. - - Shuts down the default repository.
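- In practice the shutdown described above is performed once, just before the process exits; a minimal sketch (the class name Program and the messages are placeholders):
-
- static void Main()
- {
-     ILog log = LogManager.GetLogger(typeof(Program));
-     try
-     {
-         log.Info("Application start");
-         // ... application work ...
-     }
-     finally
-     {
-         // Close appenders (nested first) so pending events are not lost.
-         LogManager.Shutdown();
-     }
- }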
- - - - Calling this method will safely close and remove all - appenders in all the loggers including root contained in the - default repository. - - Some appenders need to be closed before the application exits. - Otherwise, pending logging events might be lost. - - The shutdown method is careful to close nested - appenders before closing regular appenders. This allows - configurations where a regular appender is attached to a logger - and again to a nested appender. - - - - - - Shuts down the repository for the repository specified. - - - - Calling this method will safely close and remove all - appenders in all the loggers including root contained in the - specified. - - - Some appenders need to be closed before the application exits. - Otherwise, pending logging events might be lost. - - The shutdown method is careful to close nested - appenders before closing regular appenders. This allows - configurations where a regular appender is attached to a logger - and again to a nested appender. - - - The repository to shut down. - - - - Shuts down the repository specified. - - - - Calling this method will safely close and remove all - appenders in all the loggers including root contained in the - repository. The repository is looked up using - the specified. - - - Some appenders need to be closed before the application exits. - Otherwise, pending logging events might be lost. - - - The shutdown method is careful to close nested - appenders before closing regular appenders. This allows - configurations where a regular appender is attached to a logger - and again to a nested appender. - - - The assembly to use to lookup the repository. - - - Reset the configuration of a repository - - Resets all values contained in this repository instance to their defaults. - - - - Resets all values contained in the repository instance to their - defaults. This removes all appenders from all loggers, sets - the level of all non-root loggers to null, - sets their additivity flag to true and sets the level - of the root logger to . Moreover, - message disabling is set to its default "off" value. - - - - - - Resets all values contained in this repository instance to their defaults. - - - - Reset all values contained in the repository instance to their - defaults. This removes all appenders from all loggers, sets - the level of all non-root loggers to null, - sets their additivity flag to true and sets the level - of the root logger to . Moreover, - message disabling is set to its default "off" value. - - - The repository to reset. - - - - Resets all values contained in this repository instance to their defaults. - - - - Reset all values contained in the repository instance to their - defaults. This removes all appenders from all loggers, sets - the level of all non-root loggers to null, - sets their additivity flag to true and sets the level - of the root logger to . Moreover, - message disabling is set to its default "off" value. - - - The assembly to use to lookup the repository to reset. - - - Get the logger repository. - - Returns the default instance. - - - - Gets the for the repository specified - by the callers assembly (). - - - The instance for the default repository. - - - - Returns the default instance. - - The default instance. - - - Gets the for the repository specified - by the argument.
- - - The assembly to use to lookup the repository. - - - Get a logger repository. - - Returns the default instance. - - - - Gets the for the repository specified - by the callers assembly (). - - - The instance for the default repository. - - - - Returns the default instance. - - The default instance. - - - Gets the for the repository specified - by the argument. - - - The repository to lookup in. - - - - Returns the default instance. - - The default instance. - - - Gets the for the repository specified - by the argument. - - - The assembly to use to lookup the repository. - - - Create a domain - - Creates a repository with the specified repository type. - - - - CreateDomain is obsolete. Use CreateRepository instead of CreateDomain. - - - The created will be associated with the repository - specified such that a call to will return - the same repository instance. - - - A that implements - and has a no arg constructor. An instance of this type will be created to act - as the for the repository specified. - The created for the repository. - - - Create a logger repository. - - Creates a repository with the specified repository type. - - A that implements - and has a no arg constructor. An instance of this type will be created to act - as the for the repository specified. - The created for the repository. - - - The created will be associated with the repository - specified such that a call to will return - the same repository instance. - - - - - - Creates a repository with the specified name. - - - - CreateDomain is obsolete. Use CreateRepository instead of CreateDomain. - - - Creates the default type of which is a - object. - - - The name must be unique. Repositories cannot be redefined. - An will be thrown if the repository already exists. - - - The name of the repository, this must be unique amongst repositories. - The created for the repository. - The specified repository already exists. - - - - Creates a repository with the specified name. - - - - Creates the default type of which is a - object. - - - The name must be unique. Repositories cannot be redefined. - An will be thrown if the repository already exists. - - - The name of the repository, this must be unique amongst repositories. - The created for the repository. - The specified repository already exists. - - - - Creates a repository with the specified name and repository type. - - - - CreateDomain is obsolete. Use CreateRepository instead of CreateDomain. - - - The name must be unique. Repositories cannot be redefined. - An will be thrown if the repository already exists. - - - The name of the repository, this must be unique to the repository. - A that implements - and has a no arg constructor. An instance of this type will be created to act - as the for the repository specified. - The created for the repository. - The specified repository already exists. - - - - Creates a repository with the specified name and repository type. - - - - The name must be unique. Repositories cannot be redefined. - An will be thrown if the repository already exists. - - - The name of the repository, this must be unique to the repository. - A that implements - and has a no arg constructor. An instance of this type will be created to act - as the for the repository specified. - The created for the repository. - The specified repository already exists. - - - - Creates a repository for the specified assembly and repository type. - - - - CreateDomain is obsolete. Use CreateRepository instead of CreateDomain. 
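- A sketch of the recommended replacement: create a named repository once, then resolve loggers against it (the repository and logger names are illustrative):
-
- ILoggerRepository repository = LogManager.CreateRepository("MyComponent");
- ILog log = LogManager.GetLogger("MyComponent", "MyComponent.Worker");
- log.Info("logged via the MyComponent repository");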
- - - The created will be associated with the repository - specified such that a call to with the - same assembly specified will return the same repository instance. - - - The assembly to use to get the name of the repository. - A that implements - and has a no arg constructor. An instance of this type will be created to act - as the for the repository specified. - The created for the repository. - - - - Creates a repository for the specified assembly and repository type. - - - - The created will be associated with the repository - specified such that a call to with the - same assembly specified will return the same repository instance. - - - The assembly to use to get the name of the repository. - A that implements - and has a no arg constructor. An instance of this type will be created to act - as the for the repository specified. - The created for the repository. - - - - Gets the list of currently defined repositories. - - - - Get an array of all the objects that have been created. - - - An array of all the known objects. - - - - Looks up the wrapper object for the logger specified. - - The logger to get the wrapper for. - The wrapper for the logger specified. - - - - Looks up the wrapper objects for the loggers specified. - - The loggers to get the wrappers for. - The wrapper objects for the loggers specified. - - - - Create the objects used by - this manager. - - The logger to wrap. - The wrapper for the logger specified. - - - - The wrapper map to use to hold the objects. - - - - - Implementation of Mapped Diagnostic Contexts. - - - - - The MDC is deprecated and has been replaced by the . - The current MDC implementation forwards to the ThreadContext.Properties. - - - - The MDC class is similar to the class except that it is - based on a map instead of a stack. It provides mapped - diagnostic contexts. A Mapped Diagnostic Context, or - MDC in short, is an instrument for distinguishing interleaved log - output from different sources. Log output is typically interleaved - when a server handles multiple clients near-simultaneously. - - - The MDC is managed on a per thread basis. - - - - Nicko Cadell - Gert Driesen - - - - Initializes a new instance of the class. - - - Uses a private access modifier to prevent instantiation of this class. - - - - - Gets the context value identified by the parameter. - - The key to lookup in the MDC. - The string value held for the key, or a null reference if no corresponding value is found. - - - - The MDC is deprecated and has been replaced by the . - The current MDC implementation forwards to the ThreadContext.Properties. - - - - If the parameter does not look up to a - previously defined context then null will be returned. - - - - - - Add an entry to the MDC - - The key to store the value under. - The value to store. - - - - The MDC is deprecated and has been replaced by the . - The current MDC implementation forwards to the ThreadContext.Properties. - - - - Puts a context value (the parameter) as identified - with the parameter into the current thread's - context map. - - - If a value is already defined for the - specified then the value will be replaced. If the - is specified as null then the key value mapping will be removed. - - - - - - Removes the key value mapping for the key specified. - - The key to remove. - - - - The MDC is deprecated and has been replaced by the . - The current MDC implementation forwards to the ThreadContext.Properties. 
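- Under the forwarding just described, the deprecated calls and their replacements behave identically; for example:
-
- MDC.Set("user", userName);                   // deprecated form
- ThreadContext.Properties["user"] = userName; // equivalent replacement
- string value = MDC.Get("user");              // reads the same per-thread value
- MDC.Remove("user");                          // removes the same property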
- - - - Remove the specified entry from this thread's MDC - - - - - - Clear all entries in the MDC - - - - - The MDC is deprecated and has been replaced by the . - The current MDC implementation forwards to the ThreadContext.Properties. - - - - Remove all the entries from this thread's MDC - - - - - - Implementation of Nested Diagnostic Contexts. - - - - - The NDC is deprecated and has been replaced by the . - The current NDC implementation forwards to the ThreadContext.Stacks["NDC"]. - - - - A Nested Diagnostic Context, or NDC in short, is an instrument - to distinguish interleaved log output from different sources. Log - output is typically interleaved when a server handles multiple - clients near-simultaneously. - - - Interleaved log output can still be meaningful if each log entry - from different contexts had a distinctive stamp. This is where NDCs - come into play. - - - Note that NDCs are managed on a per thread basis. The NDC class - is made up of static methods that operate on the context of the - calling thread. - - - How to push a message into the context - - using(NDC.Push("my context message")) - { - ... all log calls will have 'my context message' included ... - - } // at the end of the using block the message is automatically removed - - - - Nicko Cadell - Gert Driesen - - - - Initializes a new instance of the class. - - - Uses a private access modifier to prevent instantiation of this class. - - - - - Clears all the contextual information held on the current thread. - - - - - The NDC is deprecated and has been replaced by the . - The current NDC implementation forwards to the ThreadContext.Stacks["NDC"]. - - - - Clears the stack of NDC data held on the current thread. - - - - - - Creates a clone of the stack of context information. - - A clone of the context info for this thread. - - - - The NDC is deprecated and has been replaced by the . - The current NDC implementation forwards to the ThreadContext.Stacks["NDC"]. - - - - The results of this method can be passed to the - method to allow child threads to inherit the context of their - parent thread. - - - - - - Inherits the contextual information from another thread. - - The context stack to inherit. - - - - The NDC is deprecated and has been replaced by the . - The current NDC implementation forwards to the ThreadContext.Stacks["NDC"]. - - - - This thread will use the context information from the stack - supplied. This can be used to initialize child threads with - the same contextual information as their parent threads. These - contexts will NOT be shared. Any further contexts that - are pushed onto the stack will not be visible to the other. - Call to obtain a stack to pass to - this method. - - - - - - Removes the top context from the stack. - - - The message in the context that was removed from the top - of the stack. - - - - - The NDC is deprecated and has been replaced by the . - The current NDC implementation forwards to the ThreadContext.Stacks["NDC"]. - - - - Remove the top context from the stack, and return - it to the caller. If the stack is empty then an - empty string (not null) is returned. - - - - - - Pushes a new context message. - - The new context message. - - An that can be used to clean up - the context stack. - - - - - The NDC is deprecated and has been replaced by the . - The current NDC implementation forwards to the ThreadContext.Stacks["NDC"]. - - - - Pushes a new context onto the context stack. An - is returned that can be used to clean up the context stack. 
This - can be easily combined with the using keyword to scope the - context. - - - Simple example of using the Push method with the using keyword. - - using(log4net.NDC.Push("NDC_Message")) - { - log.Warn("This should have an NDC message"); - } - - - - - - Removes the context information for this thread. It is - not required to call this method. - - - - - The NDC is deprecated and has been replaced by the . - The current NDC implementation forwards to the ThreadContext.Stacks["NDC"]. - - - - This method is not implemented. - - - - - - Forces the stack depth to be at most . - - The maximum depth of the stack - - - - The NDC is deprecated and has been replaced by the . - The current NDC implementation forwards to the ThreadContext.Stacks["NDC"]. - - - - Forces the stack depth to be at most . - This may truncate the head of the stack. This only affects the - stack in the current thread. Also it does not prevent it from - growing, it only sets the maximum depth at the time of the - call. This can be used to return to a known context depth. - - - - - - Gets the current context depth. - - The current context depth. - - - - The NDC is deprecated and has been replaced by the . - The current NDC implementation forwards to the ThreadContext.Stacks["NDC"]. - - - - The number of context values pushed onto the context stack. - - - Used to record the current depth of the context. This can then - be restored using the method. - - - - - - - The log4net Thread Context. - - - - The ThreadContext provides a location for thread specific debugging - information to be stored. - The ThreadContext properties override any - properties with the same name. - - - The thread context has a properties map and a stack. - The properties and stack can - be included in the output of log messages. The - supports selecting and outputting these properties. - - - The Thread Context provides a diagnostic context for the current thread. - This is an instrument for distinguishing interleaved log - output from different sources. Log output is typically interleaved - when a server handles multiple clients near-simultaneously. - - - The Thread Context is managed on a per thread basis. - - - Example of using the thread context properties to store a username. - - ThreadContext.Properties["user"] = userName; - log.Info("This log message has a ThreadContext Property called 'user'"); - - - Example of how to push a message into the context stack - - using(ThreadContext.Stacks["NDC"].Push("my context message")) - { - log.Info("This log message has a ThreadContext Stack message that includes 'my context message'"); - - } // at the end of the using block the message is automatically popped - - - - Nicko Cadell - - - - Private Constructor. - - - - Uses a private access modifier to prevent instantiation of this class. - - - - - - The thread context properties instance - - - - - The thread context stacks instance - - - - - The thread properties map - - - The thread properties map - - - - The ThreadContext properties override any - properties with the same name. - - - - - - The thread stacks - - - stack map - - - - The thread local stacks. - - - - -
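- Putting the per-thread properties and stacks together, a request-scoped pattern might look like this; the property and stack names are the conventional ones, while the variables userName, requestId and log are illustrative:
-
- ThreadContext.Properties["user"] = userName;
- using (ThreadContext.Stacks["NDC"].Push("request " + requestId))
- {
-     log.Info("handling request"); // carries both the property and the stack message
- } // the stack message is popped automatically; the property remains until changed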