Tuesday, 20 December 2011

ATG Search Indexing --> behind-the-scenes steps explained

Read more about the search indexing @ http://tips4ufromsony.blogspot.com/2011/11/atg-search-architectural-flow-search.html

ATG search indexing involves creating the index files, deploying them, and copying them to the search engine's box. The process can be divided into four stages: Initial stage, Preparing Content, Indexing, and Deploying. Please find below a detailed analysis of each step.



1. Initial stage:
       a. Check whether the deployShare folder is configured correctly @ LaunchingService.deployShare ( \atg\search\routing\LaunchingService.deployShare ). Let's assume it is configured to \Search2007.1\SearchEngine\i686-win32-vc71\buildedIndexFiles.
       b. Let's assume that the index file folder ( \Search2007.1\SearchEngine\i686-win32-vc71\indexFiles ) currently has the following segments (folders):
                    66900009 @ index engine box
                    66900010 @ search engine box
       c. Let's assume that the SearchEngineService component has the following "Local Content Path" for the search and indexing environments:
                    Search environment   --> ../indexFiles/66900010
                    Indexing environment --> ../indexFiles/66900009

2. Preparing Content
     a. start an indexing engine @ step "Load latest pre-index customizations"
     b. delete the folder "\indexFiles\66900009" and create a new folder "\indexFiles\66900011" @ location "SearchEngine\i686-win32-vc71\indexFiles" @ index engine box
     c. copy the files initial.index ( from SearchEngine\i686-win32-vc71\data ) and LUIStore.stg to "indexFiles\66900011" --> done by the indexing engine

3. Indexing
     a. create the new index and stg files @ "SearchEngine\i686-win32-vc71\buildedIndexFiles". The index file is created first, then the stg file.
     b. copy the new index and stg files to the folder indexFiles\66900011 --> done by the indexing engine
     c. update the SearchEngineService.Local Content Path of the indexing environment to ../indexFiles/66900011
     d. kill the indexing engine

4. Deploying
    a. start a new answer engine
    b. create the folder "indexFiles\66900012" @ answer engine box --> done by the new engine
    c. copy the new index and stg files to the folder "indexFiles\66900012" --> done by the new engine
    d. update the SearchEngineService.Local Content Path of the search environment to ../indexFiles/66900012
    e. delete the folder "indexFiles\66900010"
    f. shut down the previously running engine
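The segment-folder rotation in steps 2-4 can be sketched in plain Java. This is purely illustrative (the real work is done by the ATG engines, and the class and method names below are invented); it only mirrors the mechanics: create the next numbered segment folder, copy the freshly built index and stg files into it, and remove the previous segment folder.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class SegmentRotationSketch {

    public static Path rotate(Path indexFilesDir, long currentSegment,
                              Path newIndex, Path newStg) throws IOException {
        long nextSegment = currentSegment + 1;   // e.g. 66900010 --> 66900011
        Path newSegmentDir = indexFilesDir.resolve(Long.toString(nextSegment));
        Files.createDirectories(newSegmentDir);  // steps 2b / 4b: new segment folder

        // steps 3b / 4c: copy the freshly built index and stg files in
        Files.copy(newIndex, newSegmentDir.resolve(newIndex.getFileName()));
        Files.copy(newStg, newSegmentDir.resolve(newStg.getFileName()));

        // steps 3c / 4d would switch SearchEngineService's Local Content Path here

        // steps 2b / 4e: delete the previous segment folder and its contents
        Path oldSegmentDir = indexFilesDir.resolve(Long.toString(currentSegment));
        if (Files.exists(oldSegmentDir)) {
            try (DirectoryStream<Path> ds = Files.newDirectoryStream(oldSegmentDir)) {
                for (Path p : ds) {
                    Files.delete(p);
                }
            }
            Files.delete(oldSegmentDir);
        }
        return newSegmentDir;  // the folder the engine would now point at
    }
}
```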

Tuesday, 29 November 2011

JSP and CSS size limits that web developers need to be aware of

Here I am listing some error cases that can occur during web development due to size restrictions.

JSP file size limit :

You might get run-time exceptions saying that the JSP file size limit has been exceeded. The reason is as follows :

In the JVM, the size of a single Java method is limited to 64 KB. When the JSP file is converted to a servlet, this exception occurs if the generated _jspService method's size exceeds the 64 KB limit. Keep in mind that this exception depends on the implementation of the JSP translator: the same JSP code may give an exception in Tomcat and run successfully in WebLogic, due to differences in the logic used to build the servlet methods from the JSP.


The best way to avoid this issue is to use a dynamic include. For example, if you are using
                 <%@ include file="sample.jsp" %> (static include),
 replace it with
                <jsp:include page="sample.jsp" />   (dynamic include).

Static includes increase page size, whereas dynamic includes add processing overhead.

Read more @ http://docs.oracle.com/cd/B32110_01/web.1013/b28961/workjsp.htm


CSS size limit in IE :

If, for performance reasons, you try to combine all your CSS files and the combined CSS exceeds 288 KB, beware: IE has a size limit for loading CSS content. IE will ignore any CSS beyond 288 KB, and even gzipping the content doesn't help. This limit appears to be "per file", so you can split the CSS into two files and it will work fine.

Sunday, 27 November 2011

ATG Search architectural flow : Search and Index



I would like to explain the high-level ATG Search implementation architecture ( for an online store ) through the above diagram. In this diagram, 1.x denotes the search functionality and 2.x denotes the indexing functionality. I have used JBoss as the application server.

Physical Boxes and Application Servers in the diagram ( as recommended by ATG )  :
  1. Estore ( Commerce ) Box --> The box with the estore/site ear (with the site JSPs and Java code).
  2. Search Engine Box --> The box with the search engine application running.
  3. Indexing Engine Box --> The box with the indexing engine application running.
  4. CA (Content Administration) Box --> The box with the ATG CA ear ( where we can go to CA --> BCC --> Search Administration and configure the search projects ).
  5. Search Indexer Box --> The box with the ATG Search Index ear ( to fetch the index data from repository). Note that the engine performing indexing will need access to the data it is indexing, which for production is the production repository. It will typically access the data via a commerce instance.  For best performance, and for large repositories, that commerce instance should be dedicated for search indexing, and should be a fast machine.
1. Search functionality flow details :

     1.1   Estore server will find the search engine box's host and search engine application running port details from the Search repository
     1.2   Estore server will call the Search engine application via a SOAP request using this host and port
     1.3   Search engine will find the search results using the index file
     1.4   Search engine will send the search results back to the Estore server

2. Index functionality flow details :

     2.1   CA server will start the indexing and will call the Search Index server to fetch the data to be indexed from the repository
     2.2   Search Index server will fetch the data from the catalog repository
     2.3   CA server will call the Index engine application to create the index files
     2.4   Index engine application will create the index files and keep them in a shared folder so that all the search engine applications can read them
     2.5   During the index deploy phase, all search engines will copy the index files to a local folder for fast access
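The search call flow 1.1-1.4 can be sketched as plain Java. Everything here is hypothetical: the interface and method names are invented stand-ins for the real ATG components, and the remote SOAP call is simulated by a direct method call.

```java
public class SearchFlowSketch {

    // Invented interfaces standing in for the real ATG components
    public interface SearchRepository { String hostAndPort(); }  // looked up in 1.1
    public interface SearchEngine { String query(String q); }    // answers in 1.3

    public static String search(SearchRepository repo, SearchEngine engine, String q) {
        String target = repo.hostAndPort();  // 1.1: read engine host and port from the repository
        System.out.println("sending SOAP request to " + target);  // 1.2 (simulated)
        return engine.query(q);              // 1.3 + 1.4: engine answers from its index
    }
}
```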

If you need more details, please comment so that I can answer your specific questions  :-)


Friday, 25 November 2011

Is your e-banking secure with the extensions/add-ons ?

If your browser has a large number of extensions/add-ons, how can you ensure secure e-banking ?

A Chrome incognito window, or IE / Mozilla private browsing, is the best option you have for e-banking.

Google Chrome does not control how extensions handle your personal data, but all extensions are disabled in incognito windows. (You can re-enable them individually in the extensions manager.)



The simplest way to start Chrome in Incognito @ Windows 7 is to right-click on its taskbar icon .

Also think about other uses of Chrome incognito, such as using it for guest account log-ins ...

Monday, 21 November 2011

Good features of Eclipse 3.6 (Eclipse Helios) JDT



Read the Eclipse Galileo features @ http://tips4ufromsony.blogspot.com/2011/10/good-features-of-eclipse35-eclipse.html


New options in Open Resource dialog :
The Open Resource dialog supports three new features:
• Path patterns: If the pattern contains a /, the part before the last / is used to match a path in the workspace:


• Relative paths: For example, "./T" matches all files starting with T in the folder of the active editor or selection:



• Closer items on top: If the pattern matches many files with the same name, the files that are closer to the currently edited or selected resource are shown on top of the matching items list.



MarketPlace : 
Searching for and adding new plugins to Eclipse has always been a challenge. The Eclipse Marketplace makes this much easier: it not only lets you search a central location of all Eclipse plugins, but also helps you find the most recent and the most popular plugins.





Fix multiple problems via problem hover:
    The problem hover now shows quick fix links that fix multiple instances of a problem in a file. The new links behave the same as pressing Ctrl+Enter in the Quick Fix proposal list (Ctrl+1) :



Dynamic path variables:
Linked resources can define their locations relative to user-defined path variables. Now, a set of predefined path variables are available:
PROJECT_LOC - pointing to the project location
WORKSPACE_LOC - pointing to the workspace location
When these variables are used, they are dynamically resolved based on the context of a linked resource. Those predefined variables may be also used to build user-defined variables.



Progress shown in platform task bar:  
Progress for long running operations is now shown in the platform task bar on platforms that support this feature. Progress is shown for long running tasks such as workbench startup, install, update, and repository synchronization.
 


Quick Access now shows keybindings for commands:
Quick Access (Ctrl+3) now shows keybindings for commands so you can save yourself from all that typing and just use the keyboard shortcut the next time you need to run a command.


 
Local History pruning can be disabled:
The local history size constraints can now be disabled. Users that never want to discard history no longer need to wait on shutdown for history cleanup to occur. To disable history cleaning, go to  Preferences > General > Workspace > Local History and disable Limit history size. Note that when this option is chosen, disk usage for the workspace local history will continue to grow indefinitely.



New 'Java Code Style Preferences' category when importing or exporting preferences:
When importing or exporting preferences, a new category is available that allows you to control whether Java code style preferences are imported or exported:



Call hierarchy view:
Helios lets you go through each caller one by one, and provides a way to remove unnecessary calls from the Call Hierarchy view so that you can focus on the ones that concern you. Related to this, extracting type hierarchy information previously could cause Eclipse to freeze in large projects. The recent version extracts type hierarchies in the background, leaving you free to continue your work.



Control the formatting in code sections:
This preference allows you to define one tag to disable and one tag to enable the formatter (see the Off/On Tags tab in your formatter profile): Here is an example of formatted code which is using code sections with the tags defined as shown above:





Report missing @Override for method implementations in 1.6 mode:
The compiler now reports about missing @Override annotation in the case where a method implements a method from an interface:



@SuppressWarnings for optional errors:
The @SuppressWarnings annotation can now also suppress optional compile errors. In the below example, "value of local variable is not used" has been set to Error:



Compiler detects unused object allocation:
The Java compiler can now detect unused object allocations. This detection is disabled by default and can be enabled on the Java > Compiler > Errors/Warnings preference page, at the end of the Potential programming problems section:



Package name abbreviations:
Package names in Java views can now be abbreviated with custom rules. The abbreviation rules can be configured on the Java > Appearance preference page. For example, the following rules produce the rendering shown below:
org.apache.lucene={AL}
org.apache.lucene.analysis={ALA}



Type Hierarchy computed in background:
The Type Hierarchy is now computed in an operation that can be sent to the background (or always runs in the background, depending on your settings). Your workbench is no longer blocked while a big hierarchy is computed:



Debug Variable Instance counts:
The Variables view provides a new column displaying the number of instances corresponding to the concrete type of each variable. To display the column, select Layout > Select Columns... from the view's menu, and then select Instance Count from the Select Columns dialog. Note that instance counts are only shown when debugging on JavaSE-1.6 (or newer) and are not applicable to primitive types.


 
Java breakpoint detail:
    The Java breakpoint detail panes now display all properties in a single pane, so property editing can be done from the same dialog box:



Edit test method in JUnit launch configuration:
In JUnit launch configurations, you can now edit the test method.





Saturday, 19 November 2011

Google Chrome shortcut keys


If you are a Google Chrome user, please find below a list of shortcut keys for some of the most used features  :-)


Find more shortcut keys @ http://www.google.com/support/chrome/bin/static.py?page=guide.cs&guide=25799&topic=28650

Tuesday, 15 November 2011

MPC way of subtitle download

How can we easily download the subtitles for a video ?

If you are using Media Player Classic, there is an easy option to download subtitles (if you are connected to the Internet).

Go to File --> Subtitle Database --> Download.

A list of subtitles will be shown, including the language; you can choose one and can also replace the existing one (if any). You have the option to save this subtitle ( @ File --> Save Subtitle ).

Please find below some snaps:




Thursday, 10 November 2011

Intimation u/s 143(1) of the Income Tax Act

Have you got your Income Tax filing e-receipt ?

After a successful assessment of tax returns, the Income Tax Department issues an Intimation u/s 143(1). Normally these intimations are received by email, at the address provided when filing income tax returns online.

If the "NET AMOUNT REFUNDABLE / NET AMOUNT DEMAND" is less than Rs 100, you can treat this Intimation u/s 143(1) as completion of the income tax return assessment under the Income Tax Act. It can be useful as proof of income / completion of the income tax return assessment.

In case of a demand, you need to pay the entire demand within 30 days of receipt of this intimation. The payment can be made using the printed challan enclosed in the mail, or you can go for online tax payment. The tax payment challan is enclosed if the tax payable exceeds Rs 100.

If you go for online tax payment, follow the instructions listed @  http://tips4ufromsony.blogspot.com/2011/03/online-income-tax-payment-using.html and select the "Type of Payment" as  " (400) Tax on regular assessment ".

Also, read more about tax filing @ http://tips4ufromsony.blogspot.com/2011/07/income-tax-process-and-e-filing.html .

Monday, 7 November 2011

Check your PF balance

You can check your PF balance from the site :  http://www.epfkerala.in/

Go to the menu :  “My Epf Balance” .  This will lead to the url : http://www.epfindia.com/MembBal.html

Select the EPFO office where your account is maintained and furnish your PF account number. You will be asked to enter your name and mobile number; the given mobile number will be recorded along with the PF account number. On successful submission of the above information, the details will be sent through SMS to the given mobile number.

Please find below some screen shots:








Tuesday, 1 November 2011

Lucene, sample JAVA code to Search an indexed file folder


Please find below sample Lucene Java code to search the files inside a folder. This code searches the indexed folder for a search query against an indexed field.

This Java code expects the index path ( where the index files were created ), the field to be searched, and the query to be searched as program arguments, like "java SearchFiles [-index dir] [-field f] [-query string]".


import java.io.File;
import java.util.ArrayList;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class SearchFiles {

    public static void main(String[] args) {
        try {
            String usage = "Usage: SearchFiles [-index dir] [-field f] [-query string] \n\n";
            String index = "index";
            String field = "contents";
            String queryString = null;
            int hitsPerPage = 100;
            for (int i = 0; i < args.length; i++) {
                if ("-index".equals(args[i])) {
                    index = args[i + 1];
                    i++;
                } else if ("-field".equals(args[i])) {
                    field = args[i + 1];
                    i++;
                } else if ("-query".equals(args[i])) {
                    queryString = args[i + 1];
                    i++;
                }
            }
            new SearchFiles().searchFiles(index, field, queryString, hitsPerPage);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public ArrayList<String> searchFiles(String index, String field,
            String queryString, int hitsPerPage) throws Exception {

        ArrayList<String> returnStringList = new ArrayList<String>();

        IndexSearcher searcher = new IndexSearcher(FSDirectory.open(new File(index)));
        Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_31);
        QueryParser parser = new QueryParser(Version.LUCENE_31, field, analyzer);
        Query query = parser.parse(queryString);
        int numberOfPages = 5;

        // fetch up to numberOfPages * hitsPerPage results in one call
        TopDocs results = searcher.search(query, numberOfPages * hitsPerPage);

        ScoreDoc[] hits = results.scoreDocs;
        int numTotalHits = results.totalHits;
        System.out.println(numTotalHits + " total matching documents");
        returnStringList.add(numTotalHits + " total matching documents");

        int start = 0;
        int end = Math.min(numTotalHits, hitsPerPage);
        for (int i = start; i < end; i++) {
            Document doc = searcher.doc(hits[i].doc);
            String path = doc.get("path");
            if (path != null) {
                System.out.println((i + 1) + ". " + path);
                returnStringList.add((i + 1) + ". " + path);
                String title = doc.get("title");
                if (title != null) {
                    System.out.println("   Title: " + title);
                    returnStringList.add("   Title: " + title);
                }
            }
        }
        searcher.close();
        return returnStringList;
    }
}

Monday, 31 October 2011

Lucene, sample JAVA code to Index a file folder


Please find below sample Lucene code to index the files inside a folder. This code will index ( or create fields for ) the file path, file title, modified date, and contents of each file.

This Java code expects the index path ( where the index files will be created ) and the file folder path as program arguments, like "java IndexFiles [-index INDEX_PATH] [-docs DOCS_PATH]".

The logic of the code is to iterate through each file in the folder and call the method indexDoc(), where the fields mentioned above are created and added to a Document object. This means that for each file there will be a Document object, and these Document objects are added to the IndexWriter.

Please find below the screen shot of the indexed file folder :



import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Date;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.IndexWriterConfig.OpenMode;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class IndexFiles {
 public static void main(String[] args) {
  String usage = "java IndexFiles  [-index INDEX_PATH] [-docs DOCS_PATH] \n\n"
   + "This indexes the documents in DOCS_PATH, creating a Lucene index in "
   + "INDEX_PATH that can be searched with SearchFiles";
  String indexPath = "index";
  String docsPath = null;
  for (int i = 0; i < args.length; i++) {
   if ("-index".equals(args[i])) {
    indexPath = args[i + 1];
    i++;
   } else if ("-docs".equals(args[i])) {
    docsPath = args[i + 1];
    i++;
   }
  }
  if (docsPath == null) {
   System.err.println("Usage: " + usage);
   System.exit(1);
  }
  final File docDir = new File(docsPath);
  if (!docDir.exists() || !docDir.canRead()) {
   System.out.println("Document directory "
   + docDir.getAbsolutePath()
   + " does not exist or is not readable, please check the path");
   System.exit(1);
  }
  Date start = new Date();
  try {
   System.out.println("Indexing to directory '" + indexPath + "'...");
   Directory dir = FSDirectory.open(new File(indexPath));

   Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_31);
   IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_31,analyzer);
   iwc.setOpenMode(OpenMode.CREATE);
   IndexWriter writer = new IndexWriter(dir, iwc);
   findFilesAndIndex(writer, docDir);

   writer.close();
   Date end = new Date();
   System.out.println(end.getTime() - start.getTime()+ " total milliseconds");
  } catch (IOException e) {
   System.out.println(" caught a " + e.getClass()+ "\n with message: " + e.getMessage());
  }
 }

 static void findFilesAndIndex(IndexWriter writer, File file) throws IOException {
  FileInputStream fis = null;
  try{
  if (file.canRead()) {
   if (file.isDirectory()) {
   String[] files = file.list();
   if (files != null) {
    for (int i = 0; i < files.length; i++) {
    findFilesAndIndex(writer, new File(file, files[i]));
    }
   }
   } else {
    fis = new FileInputStream(file);
    indexDoc(writer, file,fis);
   }
  }
  }catch (IOException e) {
   System.out.println(" caught a " + e.getClass()+ "\n with message: " + e.getMessage());
  }finally {
   if(fis != null){
    fis.close();
   }
  }
 }

 static void indexDoc(IndexWriter writer, File file,FileInputStream fis) throws IOException {
  Document doc = new Document();
  Field pathField = new Field("path", file.getPath(),Field.Store.YES, Field.Index.NOT_ANALYZED_NO_NORMS);
  pathField.setOmitTermFreqAndPositions(true);
  doc.add(pathField);

  Field titleField = new Field("title", file.getName(),Field.Store.YES, Field.Index.NOT_ANALYZED_NO_NORMS);
  titleField.setOmitTermFreqAndPositions(true);
  doc.add(titleField);

  NumericField modifiedField = new NumericField("modified");
  modifiedField.setLongValue(file.lastModified());
  doc.add(modifiedField);

  doc.add(new Field("contents", new BufferedReader(new InputStreamReader(fis, "UTF-8"))));

  System.out.println("adding " + file);
  writer.addDocument(doc);
 }
}

Friday, 14 October 2011

Good features of Eclipse 3.5 (Eclipse Galileo) JDT


This blog lists the new features of the Eclipse Galileo JDT. I will write another blog about the features of Eclipse Helios and Eclipse Indigo.

Read about Eclipse Helios features @ http://tips4ufromsony.blogspot.com/2011/11/good-features-of-eclipse-36-eclipse.html

==========================================================
1. Toggle Breadcrumb --> Lists the name of the file and the method name with respect to your cursor position, at the top of the Eclipse IDE. From here you can go to other methods, other classes in the same package, ...

Screen shot of Toggle Breadcrumb:



==========================================================
2. From a method call, you can go either to the declaration or to the implementation

Screen shot of implementation call:



==========================================================
3. Advanced Open Type --> You can restrict Open Type to a selected working set only.

Screen shot of Advanced Open Type:



==========================================================
4. Embedded Telnet connection window --> You can have a telnet connection as a window in Eclipse

Screen shot of Telnet connection:



==========================================================
5. Embedded SQL Developer --> You can view database tables, run queries, and see the history of executed queries and their results ...

Screen shot of Sql Developer:



==========================================================
6. Enhanced Local History --> You can view all the changes you made to a file across every save. Just like ClearCase / SVN / CVS, you can compare with previous versions of the file to see line-by-line changes

Screen shot of Enhanced Local History:



==========================================================
7. A new property window to view the properties of the selected file

Screen shot of property window:



==========================================================

8. Exclude selected packages or files from the build path

Screen shot of Exclusion of build path:



==========================================================
9. Ctrl+3 --> Advanced quick access to the available screens by typing the first letters

Screen shot of Ctrl +3:



==========================================================

10. XML files can be opened in a Design View

Screen shot of XML Design View:



==========================================================

11. Quick search in Window --> Preferences

Screen shot of Quick search in Window --> Preferences:



Monday, 10 October 2011

Apache Lucene quick links






Thursday, 6 October 2011

Apache Lucene Search Engine’s Features


Apache Lucene is a high-performance, full-featured text search engine library written entirely in Java, and is part of the Apache Jakarta Project. While suitable for any application that requires full-text indexing and searching capability, Lucene has been widely recognized for its utility in the implementation of Internet search engines and local, single-site searching. Lucene was originally written by Doug Cutting; it is his wife's middle name!

Features

1. Scalable, High-Performance Indexing

  • Over 95GB/hour on modern hardware
  • Small RAM requirements — only 1MB heap
  • Incremental indexing as fast as batch indexing
  • Index size roughly 20-30% the size of text indexed


2. Powerful, Accurate and Efficient Search Algorithms

  • Ranked searching — best results returned first
  • Sorting by any field
  • Multiple-index searching with merged results
  • Allows simultaneous update and searching


3. Flexible Queries

  • Phrase queries --> like "star wars" --> search for the exact phrase "star wars"
  • Wildcard queries --> like star* or sta? --> search with multi-character or single-character replacements in the search words
  • Fuzzy queries --> like star~0.8 --> search for similar words, with an optional weight
  • Proximity queries --> like "star wars"~10 --> search for "star" and "wars" within 10 words of each other in a document
  • Range queries --> like {star TO stun} --> search for documents with values between star and stun. Exclusive ranges are denoted by curly brackets
  • Fielded searching --> fields like title, author, contents
  • Date-range searching --> like [2006 TO 2007] --> search for documents with a field value between 2006 and 2007. Inclusive ranges are denoted by square brackets
  • Boolean operators --> like star AND wars. The OR operator is the default conjunction operator.
  • Boosting a term --> like star^4 wars --> make documents containing the term star more relevant
  • + operator --> like +star wars --> search for documents that must contain "star" and may contain "wars"
  • - operator --> like star -wars --> search for documents that contain "star" but do not contain "wars"
  • Grouping --> like (star AND wars) OR website --> use parentheses to group clauses into sub-queries
  • Escaping special characters --> the current list of special characters is + - && || ! ( ) { } [ ] ^ " ~ * ? : \ . To escape these characters, use a \ before the character.
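The escaping rule in the last bullet can be implemented in plain Java. This is a stand-alone sketch of the same idea as Lucene's QueryParser.escape(), not the Lucene API itself; as a simplification, the two-character operators && and || are handled by escaping each character.

```java
public class QueryEscaper {

    // Lucene's query special characters, per the list above
    private static final String SPECIALS = "+-&|!(){}[]^\"~*?:\\";

    public static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (SPECIALS.indexOf(c) >= 0) {
                sb.append('\\');  // prefix each special character with a backslash
            }
            sb.append(c);
        }
        return sb.toString();
    }
}
```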


4. Cross-Platform Solution

  • Available as Open Source software under the Apache License which lets you use Lucene in both commercial and Open Source programs
  • 100%-pure Java
  • Implementations in other programming languages available that are index-compatible


At the core of Lucene's logical architecture is the idea of a document containing fields of text. This flexibility allows Lucene's API to be independent of the file format. Text from PDFs, HTML, Microsoft Word, and OpenDocument documents, as well as many others (except images), can all be indexed as long as their textual information can be extracted.
Index  --> sequence of documents ( Directory)
Document  -->  sequence of fields
Field  --> named sequence of terms
Term  --> a text string (e.g., a word)
Terms:
A search query is broken up into terms and operators. There are two types of terms: Single Terms and Phrases. A Single Term is a single word such as "test" or "hello". A Phrase is a group of words surrounded by double quotes such as "hello dolly". Multiple terms can be combined together with Boolean operators to form a more complex query.

Fields:
When performing a search you can either specify a field, or use the default field. You can search any field by typing the field name followed by a colon ":" and then the term you are looking for.
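A tiny sketch of composing a fielded query clause as described above: field name, a colon, then the term, with phrases wrapped in double quotes. The helper name is invented for illustration; in real code you would hand the resulting string to Lucene's QueryParser.

```java
public class FieldedQuery {

    // Build one fielded clause: field name, a colon, then the term;
    // phrases (terms containing whitespace) are wrapped in double quotes.
    public static String clause(String field, String term) {
        boolean phrase = term.contains(" ");
        return field + ":" + (phrase ? "\"" + term + "\"" : term);
    }
}
```

For example, clause("title", "star wars") yields title:"star wars".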

Wednesday, 28 September 2011

ATG License Files and Oracle Software Delivery Cloud


Oracle no longer generates license keys that are specific to your IP address(es). Oracle now provides generic license files that enable you to fully utilize all of the features for which you are licensed.

Please find the ATG License files for different ATG versions @ http://www.oracle.com/us/support/licensecodes/atg/index.



@ Oracle Software Delivery Cloud , you can find downloads for all licensable Oracle products –> https://edelivery.oracle.com/

Please find below a screen shot for ATG products download :




Wednesday, 21 September 2011

ATG Search and how to generate XHTMLs from STG file


ATG Search indexing will give you the idx and stg files. When I analysed the stg files with text editors like TextPad or UltraEdit, I found some <html> and </html> tags, and the content inside these tags seemed to be the same as the content of the temporary XHTML files that are generated during search indexing for each indexed item. So I decided to take the content between the <html> and </html> tags and save it as an XHTML file, and it worked for almost all indexed items. As you might know, these XHTML files' <head> tag contains all the meta properties ( refine properties ) and the <body> tag has the text properties ( searchable properties ) for each indexed item.

Please note that the above steps are not an ATG-recommended method to generate the XHTML files. I came across this simple method of forming the XHTML files, and I am not 100% sure that it will give all the XHTML files of a search index. But I found it very useful for debugging ATG search related issues.

Please find below the Java code written to generate the XHTML files from an stg file. The main method expects the name of the stg file as a program argument. It will create a folder named "XHTML_Files" in the current directory and save the XHTML files inside this folder.

Please find the below screen shot of XHTML files generated using this JAVA code :




Please find the below screen shot of a sample XHTML file generated using this JAVA code :





import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;

public class ATGSearchStgXHTMLGenerator {

    public static void main(String args[]) {
        try {
            // check the argument count before reading args[0]
            if (args.length == 0) {
                System.out.println("Give the stg file name as input");
                return;
            }
            String fileName = args[0];
            String xhtmlfilePath = ".\\XHTML_Files";
            File stgfile = new File(fileName);
            BufferedReader stgfileBr = new BufferedReader(new FileReader(stgfile));
            String outToWrite = null;
            String outFileName = null;
            File outFile = new File(xhtmlfilePath);
            if (!outFile.exists()) {
                outFile.mkdir();
            }
            FileWriter outFileWriter = null;
            int lineNumber = 1;
            int xhtmlFileCount = 1;
            boolean canWriteXhtmlfile = false;
            String readLine;
            // a while loop avoids the NullPointerException the old do/while
            // would throw on an empty file
            while ((readLine = stgfileBr.readLine()) != null) {
                if (readLine.contains("<html>") && readLine.contains("</html>")) {
                    outToWrite = readLine.substring(readLine.indexOf("<html>"),
                            readLine.indexOf("</html>") + 7);
                    outFileName = "\\XHTMLFile_";
                    canWriteXhtmlfile = true;
                } else if (readLine.contains("<html>") && !readLine.contains("</html>")) {
                    System.out.println("In the STG file at lineNumber:" + lineNumber
                            + " ERROR: html tag found, but no end tag for it");
                    outToWrite = readLine.substring(readLine.indexOf("<html>"), readLine.length());
                    outFileName = "\\XHTMLFile_Error_";
                    canWriteXhtmlfile = true;
                }
                if (canWriteXhtmlfile) {
                    outFile = new File(xhtmlfilePath + outFileName + (xhtmlFileCount++) + ".xhtml");
                    outFileWriter = new FileWriter(outFile);
                    outFileWriter.write(outToWrite);
                    outFileWriter.close();
                }
                lineNumber++;
                canWriteXhtmlfile = false;
            }
            stgfileBr.close();
            System.out.println("The STG file is processed fully till lineNumber " + lineNumber);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Monday, 19 September 2011

GC Log Analyzer from IBM


This blog is about the GC ( garbage collection ) log analyzer from IBM. If you have a GC log and want to analyze it, this IBM tool will help you with graphical analysis and some recommendations. You can download it from the following URL :

http://www.alphaworks.ibm.com/tech/pmat

Please find below some screen shots and details that might help you.

1. Screen shot 1:



2. Screen shot 2 :



3. Screen shot 3 :



4. Screen shot 4: