Bala's Blog

JOY OF PROGRAMMING

Month: November, 2011

Restore Panels In Ubuntu Back To Their Default Settings

Messed up your panels in Gnome? Maybe you're new to Ubuntu and accidentally deleted items or the panel itself, and now you can't figure out how to get it back.

Sure, you can add a new panel and rebuild it by adding the items back on the panel.

Instead of going through that trouble, there is an easy fix that will quickly restore your panels to their default settings.

Open a Terminal window by clicking Applications \ Accessories \ Terminal. Or, if you deleted the top panel and cannot access the menus, press ALT+F2 and, in the run dialog box, type gnome-terminal, then click Run.

You can also browse for applications from the Run dialog by clicking the arrow icon next to "Show list of known applications" and selecting Terminal.


Once the Terminal window opens, enter the following command at the prompt:

gconftool-2 --shutdown

EDIT: Reader nickrud has suggested a better method than shutting down gconfd. Use the following command instead (thanks nickrud!):

gconftool-2 --recursive-unset /apps/panel

Then enter the next command:

rm -rf ~/.gconf/apps/panel

And enter one more command:

pkill gnome-panel

That’s it!

Both top and bottom panels will appear (if missing) with their default settings. Now you can customize them to your preference and get on with using Ubuntu.
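
If you end up doing this often, the commands can be collected into a small script. A minimal sketch, with the file name reset-panels.sh chosen just for illustration:

#!/bin/bash
# reset-panels.sh - restore GNOME panels to their default settings

# Remove all stored panel settings from gconf
gconftool-2 --recursive-unset /apps/panel

# Delete the cached panel configuration files
rm -rf ~/.gconf/apps/panel

# Restart gnome-panel so it regenerates the default panels
pkill gnome-panel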


Shell Script for Reading a Java-style Properties File

sed '/^\#/d' property_file_name | grep 'property_name' | tail -n 1 | cut -d "=" -f2- | sed 's/^[[:space:]]*//;s/[[:space:]]*$//'

This pipeline prints the value of property_name from the properties file: comment lines are dropped, the last matching line wins, everything after the = is taken, and surrounding whitespace is trimmed.
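
For example, given a made-up file app.properties (the name and contents are just for illustration):

$ cat app.properties
# database settings
db.host = localhost
db.port=5432

$ sed '/^\#/d' app.properties | grep 'db.host' | tail -n 1 | cut -d "=" -f2- | sed 's/^[[:space:]]*//;s/[[:space:]]*$//'
localhost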

# gres.sh
pattern=$1
replacement=$2
# Current value of the property: skip comments, last match wins, trim whitespace
propvalue=`sed '/^\#/d' "$3" | grep "$pattern" | tail -n 1 | sed 's/^.*=//;s/^[[:space:]]*//;s/[[:space:]]*$//'`
# Use a control character (octal 001) as the sed delimiter so that slashes
# in the pattern or value do not break the substitution
A="`echo | tr '\012' '\001'`"
sed -i -e "s$A$pattern=$propvalue$A$pattern=$replacement$A" "$3"
# end script

This replaces a property's value within a given property file. Usage:

./gres.sh property_name new_value property_file_name
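
Continuing the made-up app.properties example, a quick before-and-after check might look like this. Note that the script reassembles the match as pattern=value, so it only rewrites entries with no spaces around the =:

$ grep 'db.port' app.properties
db.port=5432

$ ./gres.sh db.port 6543 app.properties

$ grep 'db.port' app.properties
db.port=6543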

Spell Check Configuration in Solr

Spell check is one of the essential features an application needs for spelling correction. In Solr, this is done by first defining a spell check component in the solrconfig.xml file.

Below is the configuration of the spell check component:

<searchComponent name="keyspellcheck" class="solr.SpellCheckComponent">

  <str name="queryAnalyzerFieldType">textSpell</str>

  <!-- Multiple "Spell Checkers" can be declared and used by this component -->

  <!-- a spellchecker built from a field of the main index, and written to disk -->
  <!--
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">keyword</str>
    <str name="spellcheckIndexDir">spellchecker</str>
  -->
  <!-- uncomment this to require terms to occur in 1% of the documents in order to be included in the dictionary
    <float name="thresholdTokenFrequency">.01</float>
  -->
  <!-- </lst> -->

  <lst name="spellchecker">
    <!--
      Optional, it is required when more than one spellchecker is configured.
      Select a non-default name with spellcheck.dictionary in the request handler.
    -->
    <str name="name">default</str>
    <!-- The classname is optional, defaults to IndexBasedSpellChecker -->
    <str name="classname">solr.IndexBasedSpellChecker</str>
    <!--
      Load tokens from the following field for spell checking;
      the analyzer for the field's type as defined in schema.xml is used
    -->
    <str name="field">keyword</str>
    <!-- Optional, by default use in-memory index (RAMDirectory) -->
    <str name="spellcheckIndexDir">./spellchecker</str>
    <!-- Set the accuracy (float) to be used for the suggestions. Default is 0.5 -->
    <str name="accuracy">0.4</str>
    <!-- Require terms to occur in 1/100th of 1% of documents in order to be included in the dictionary -->
    <!-- <float name="thresholdTokenFrequency">.0001</float> -->
  </lst>

  <!-- Example of using a different distance measure -->
  <lst name="spellchecker">
    <str name="name">jarowinkler</str>
    <str name="field">lowerfilt</str>
    <!-- Use a different Distance Measure -->
    <str name="distanceMeasure">org.apache.lucene.search.spell.JaroWinklerDistance</str>
    <str name="spellcheckIndexDir">./spellchecker</str>
  </lst>
</searchComponent>

Here the field used for spell checking (keyword above) must be defined in the schema file with the necessary analyzers and tokenizers.
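
As a rough sketch, the field type and field referenced above might be declared in schema.xml along the following lines. The analyzer chain is an assumption for illustration, not a known-good configuration; the snippet is written to a scratch file here and would be pasted into schema.xml by hand:

# Sketch of the schema.xml additions assumed by the component above.
# The textSpell analyzer chain is an assumption; adjust it to your needs.
cat > textSpell-snippet.xml <<'EOF'
<fieldType name="textSpell" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
<field name="keyword" type="textSpell" indexed="true" stored="true"/>
EOF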

Then the spell check component has to be registered with the "search" request handler so that spelling suggestions appear in the Solr response:

<requestHandler name="search" class="solr.SearchHandler" default="true">
  <!-- default values for query parameters can be specified; these
       will be overridden by parameters in the request -->
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <int name="rows">10</int>
    <str name="spellcheck.onlyMorePopular">true</str>
    <str name="spellcheck.count">3</str>
    <str name="spellcheck">true</str>
    <str name="spellcheck.collate">true</str>
    <str name="spellcheck.extendedResults">true</str>
  </lst>
  <!-- attach the spell check component defined above -->
  <arr name="last-components">
    <str>keyspellcheck</str>
  </arr>
</requestHandler>

Then the data must be reindexed, after which suggestions for misspelled words can be obtained.
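
A quick way to try it from the command line. The host, port, and query term are assumptions; spellcheck.build=true builds the spellchecker index on first use:

# Build the spellchecker index once
curl 'http://localhost:8983/solr/select?q=*:*&spellcheck=true&spellcheck.build=true'

# Ask for suggestions for a misspelled term
curl 'http://localhost:8983/solr/select?q=keyword:whatevr&spellcheck=true&spellcheck.collate=true'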

 

JKB


KeywordTokenizerFactory vs StandardTokenizerFactory in Solr

KeywordTokenizer does no actual tokenizing: the entire input string is preserved as a single token.

StandardTokenizerFactory :-
It tokenizes on whitespace as well as punctuation, stripping the delimiter characters.

Documentation :-
Splits words at punctuation characters, removing the punctuation. However, a dot that's not followed by whitespace is considered part of a token.
Splits words at hyphens, unless there’s a number in the token. In that case, the whole token is interpreted as a product number and is not split.
Recognizes email addresses and Internet hostnames as one token.

You would use this for fields where you want to search on the field data.

e.g. –

http://example.com/I-am+example?Text=-Hello

would generate 7 tokens (comma-separated):

http,example.com,I,am,example,Text,Hello

KeywordTokenizerFactory :-

Keyword Tokenizer does not split the input at all. No processing is performed on the string; the whole string is treated as a single entity and returned as one term.

Mainly used for sorting or faceting requirements, where you want to match the exact facet value when filtering on multi-word values, and for sorting, since sorting does not work on tokenized fields.

e.g.

http://example.com/I-am+example?Text=-Hello

would generate a single token:

http://example.com/I-am+example?Text=-Hello
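
To see the difference for yourself, Solr's field analysis request handler can show the tokens each field type produces. The handler path and the field type names text and lowercase are assumptions based on the stock example config:

# Tokens from a StandardTokenizerFactory-based field type ("text" here)
curl -G 'http://localhost:8983/solr/analysis/field' \
     --data-urlencode 'analysis.fieldtype=text' \
     --data-urlencode 'analysis.fieldvalue=http://example.com/I-am+example?Text=-Hello'

# Tokens from a KeywordTokenizerFactory-based field type ("lowercase" here)
curl -G 'http://localhost:8983/solr/analysis/field' \
     --data-urlencode 'analysis.fieldtype=lowercase' \
     --data-urlencode 'analysis.fieldvalue=http://example.com/I-am+example?Text=-Hello'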