Many times we see a very odd error because an old JAR is loaded first while our code expects the newer APIs, and it is not always easy to find which JARs are loaded first. How do we find out?

A simple solution: use the “ps -efww | grep java” command, or, if you want to know from within the program, add the following lines to the code, which will print all the JARs in use.

 

import java.net.URLClassLoader;
import java.util.Arrays;

URLClassLoader classLoader = (URLClassLoader) RedisMainApp.class.getClassLoader();
System.out.println(Arrays.toString(classLoader.getURLs())); // prints every classpath JAR/directory URL, in load order
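Note that the URLClassLoader cast only works up to Java 8; from Java 9 on, the application class loader is no longer a URLClassLoader, so the cast throws a ClassCastException. A more portable sketch (my addition, not from the original snippet) reads the java.class.path system property instead:

import java.io.File;

public class ClasspathPrinter {
    public static void main(String[] args) {
        // java.class.path lists classpath entries in load order,
        // so the first matching JAR wins when classes collide.
        String classpath = System.getProperty("java.class.path");
        for (String entry : classpath.split(File.pathSeparator)) {
            System.out.println(entry);
        }
    }
}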

 


How to create a Queue using Apex?

Posted: September 10, 2015 by Narendra Shah in Uncategorized

To create a Queue using Apex:

  1. First, a group of type ‘Queue’ needs to be created, as below:

Group qGroup = new Group(Name='My Queue', Type='Queue'); // Name is a placeholder

insert qGroup;

2. And finally, the queue sObject needs to be created, pointing at the group created in the first step:

QueueSobject q = new QueueSobject(SobjectType='Case', QueueId=qGroup.Id); // 'Case', 'Lead', etc.

insert q;

Apex Caching using a Salesforce class

Posted: June 30, 2015 by Narendra Shah in Uncategorized

An untold fact: a static variable can be created and used as a cache across your transaction. In recursive triggers, we see a boolean flag used to prevent re-entry; that is the problem-solving approach.
But to reduce SOQL queries, the best option is to cache the result and use it across classes, triggers, and controllers in the same transaction.
That way, we can stay away from hitting governor limits.

Sample without Cache:

trigger SoqlTrigger on Account (before insert, after insert, before update, after update) {
    List<Account> acctsWithOpps = [SELECT Id, (SELECT Id, Name, CloseDate FROM Opportunities) FROM Account WHERE Id IN :Trigger.new];
    if (Trigger.isInsert) {
        if (Trigger.isBefore) {
            // access acctsWithOpps and process
        } else if (Trigger.isAfter) {
            // access acctsWithOpps and process
        }
    } else {
        if (Trigger.isBefore) {
            // access acctsWithOpps and process
        } else if (Trigger.isAfter) {
            // access acctsWithOpps and process
        }
    }
}

The problem with the above code is that the acctsWithOpps query runs on every trigger invocation, up to 4 times during a transaction. To save this time, we can use the following approach, where the same query is executed only once.

Sample with cache:

Account cache class:

public class AccountOppsCache {
    // The static initializer runs only once per transaction, on first reference.
    public static List<Account> acctsWithOpps = [SELECT Id, (SELECT Id, Name, CloseDate FROM Opportunities) FROM Account WHERE Id IN :Trigger.new];
}

SoqlTrigger trigger:

trigger SoqlTrigger on Account (before insert, after insert, before update, after update) {
    // No local query here: the cached list is populated by a single query per transaction.
    if (Trigger.isInsert) {
        if (Trigger.isBefore) {
            // access AccountOppsCache.acctsWithOpps and process
        } else if (Trigger.isAfter) {
            // access AccountOppsCache.acctsWithOpps and process
        }
    } else {
        if (Trigger.isBefore) {
            // access AccountOppsCache.acctsWithOpps and process
        } else if (Trigger.isAfter) {
            // access AccountOppsCache.acctsWithOpps and process
        }
    }
}

In Salesforce, I think people easily run into governor limits. In our scenario, one trigger was updating the same record, so the after trigger fired again; because of this, the same trigger was called 5 times, which caused 72 SOQL queries as part of one transaction and raised the limit warning. The solution to this is a static flag.

As a core Java developer, static confused me at first, but a static in Java lives for the lifetime of the class loader, whereas in Salesforce it lives only for the transaction, and that is exactly what resolves the problem.

Create one static boolean flag variable in a class and flip it when the trigger is called the first time, so from then on the flag is always false and the trigger will not run again in the same transaction.
Alternative 1:

Class code :
public class CaseTriggerFlag {
    private static boolean caseTriggerRunFlag = true;
    public static boolean runOnce() {
        if (caseTriggerRunFlag) {
            caseTriggerRunFlag = false;
            return true;
        } else {
            return caseTriggerRunFlag; // already false for the rest of the transaction
        }
    }
}

Trigger code :
trigger CaseTrigger on Case (after update) {
    if (CaseTriggerFlag.runOnce()) {
        // YOUR CODE COMES HERE
    }
}

Alternative 2:

Class code :
public class CaseTriggerFlag {
    public static boolean caseTriggerRunFlag = true;
}

Trigger code :
trigger CaseTrigger on Case (after update) {
    if (CaseTriggerFlag.caseTriggerRunFlag) {
        CaseTriggerFlag.caseTriggerRunFlag = false;
        // YOUR CODE COMES HERE
    }
}

Liferay 7 m4 setup

Posted: May 15, 2015 by Narendra Shah in Liferay, Uncategorized

I was curious to see OSGi in Liferay 7, and that curiosity brought me to install Liferay 7.

How easy is it to set up Liferay?
1. Download the Liferay 7 M4 Tomcat zip from http://sourceforge.net/projects/lportal/files/Liferay%20Portal/7.0.0%20M4/
2. Unzip the file.
3. If you already have JDK 7/8 set up with the JAVA_HOME environment variable, you are ready to start.
4. The intention here is not to use an external database, so I tried HSQL itself, Liferay's default database.
5. Navigate to the directory liferay-portal-7.0-ce-m4\tomcat-7.0.42\bin (inside the extracted zip folder).
6. Run startup.bat/startup.sh.
7. After Liferay started, I saw a basic configuration page asking for name, email address, etc.
8. I filled in the basic details and pressed the Finish button; it took a while and then it was done. The Terms & Conditions came up to agree to, and my portal was ready. [Screenshots of the first look omitted.]

Whoo, it works, and it is much faster than the old versions. The UI is very clean, everything is properly categorized, and it is really well managed, though the basic concepts (site, layout, control panel, role, web content, etc.) are as they were.

The Control Panel is properly categorized into Users, Sites, Apps, and Configuration; Apps holds the portlet configuration, where portlets can be activated/deactivated, etc.

With working experience of Liferay from 6.0.0.6 EE to 6.1.2.0, I found the following problems and issues during upgrades.

Before getting into how to upgrade, it is helpful to understand how a Liferay upgrade works.

Liferay upgrades its database one version at a time. For example, if you are using 6.0.0.6 and upgrading to 6.1.2.0, the flow will look like this:

  1. 6.0.6 to 6.0.11
  2. 6.0.11 to 6.0.12
  3. 6.0.12 to 6.1.0
  4. 6.1.0 to 6.1.1
  5. And finally 6.1.1 to 6.1.2

This means that if the upgrade fails in between, your database is left partially upgraded. Say it throws an error or exception (and yes, there can be exceptions): your database is half upgraded, and you might need a Liferay patch, or there is some other problem in your database that you need to take care of. You then restore the backed-up database to your upgrade instance, start the same process again, and try to fix the error.

How does Liferay perform this upgrade? Liferay ships JARs that carry version information plus a version executor, so once you start the upgrade it automatically detects the existing version and upgrades it to the latest before starting. These JARs contain an individual upgrader class for each area, like an Image Gallery upgrader or a Document Library upgrader, with or without database SQL scripts for each individual version. And that's it. So if you see an exception, you can decompile the JAR and see what is causing the problem.

So here we go with the upgrade steps.

  1. Code migration: here you need to upgrade four things.
    1. Get the latest version portal zip.
    2. Custom portlets/hooks: the code needs to be upgraded for compatibility with the latest version. This can be done from the command line with the ant upgrade option, or by manually updating portlet.xml and other config files.
    3. Ext: the full Ext code needs to be reviewed and validated against any kernel API changes. If you can achieve the same functionality with hooks, redesign with hooks.
    4. Service: if you have a service generated for any portlet/hook/ext, you need to regenerate the service code, updating your old service XML file with the minor new service modifications (XSD and any field updates). You might not be able to use the old service as is.
  2. Data migration
    1. Liferay made this process pretty easy. Liferay provides two kinds of bundle zips for the latest version; you might need some interim version upgrade if you don't find a direct upgrade path between your version and the new version:
      1. Liferay upgrade zip, or
      2. Liferay portal zip
    2. Get the new version's license file.
    3. Now you just need to unzip the file, update portal-ext.properties with your database instance, Jackrabbit, and anything else, and start the upgrade.

Production infrastructure:

  1. 5 Apache servers with a load balancer and a SiteMinder agent
  2. 5 Liferay application servers
  3. 1 Solr server + 1 Coveo server for search integration
  4. 1 database server: MS SQL Server 2008 R2
  5. Configuration
    1. Jackrabbit data store integration with MS SQL Server
    2. SiteMinder SSO

Pre-deployment steps

If you have already read the steps above, then this is a pretty easy job for you.

  1. Download the latest version portal/upgrade zip.
  2. Get the latest version license.
  3. Unpack/unzip your portal/upgrade zip.
  4. Create portal-ext.properties in portal[version]/tomcat[version]/ROOT/WEB-INF/classes, adding the database information and enabling the Jackrabbit settings if you are using a Jackrabbit repository (a minimal sample follows this list).
  5. Add repository.xml in portal[version]/data/jackrabbit/.
  6. Add the license.xml file to the portal[version]/deploy folder.
  7. Install any fix pack/patch required for the upgrade. (This will be needed if Liferay has released patches, or if you hit an issue and found a patch.) If you don't know how to install patches, Liferay has a good amount of documentation.
  8. Clean up your Solr and Lucene instance data folders, if any.
  9. Make sure your lportal database and lportal_doc (Jackrabbit) database are backed up properly; you might need to rerun this same upgrade process.
  10. Tune your JVM for the best performance based on your server architecture.
  11. Things to watch out for:
    1. Do not deploy your custom portlets/hooks/ext before the upgrade, except the image gallery/document library hook.
    2. Do not copy any other marketplace portlets for now.
    3. Make sure no one is accessing/using your database during the deployment steps.
    4. Make sure all Liferay servers are stopped and no one is accessing your database; you can confirm this with your DBA or by looking at the database server.
    5. Let your customers know that there will be an outage for a particular time period (based on your mock-run time).
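Referenced from step 4 above: a minimal portal-ext.properties sketch, assuming a MySQL database and the Jackrabbit (JCR) document store. Every value here is a placeholder to adjust for your environment.

# Database connection (placeholder values)
jdbc.default.driverClassName=com.mysql.jdbc.Driver
jdbc.default.url=jdbc:mysql://localhost/lportal?useUnicode=true&characterEncoding=UTF-8
jdbc.default.username=lportal_user
jdbc.default.password=changeme

# Store document library files in the JCR (Jackrabbit) repository
dl.store.impl=com.liferay.portlet.documentlibrary.store.JCRStore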

Deployment Steps

1. Here we go: just run the portal server (portal/tomcat/bin/startup.[bat|sh]) or the upgrade script.

2. You can monitor the catalina.out log in the portal/tomcat/logs folder; it shows which version, and which entity/portlet, Liferay is currently upgrading.

3. Once the server has started successfully (you will see a message), you can start validating basic things.

Validation

1. Under portal settings, you can validate your configuration.

2. Validate that your data has been successfully migrated and is also visible.

Post-deployment steps

1. Deploy your custom portlets, marketplace portlets, hooks, and ext, if any.

2. Configure Solr and run a reindex from portal settings.

You might need to follow these steps multiple times. Ideally, Liferay says it is a seamless upgrade, but it can happen that you need a patch to run the upgrade successfully. With our upgrade, we needed 2 to 3 patches to finish, and finally the data was successfully upgraded.

My Developer tools

Posted: May 8, 2015 by Narendra Shah in Reserch And Development

These are the basic tools any developer should have; they are my tools, which I always keep on my system.

  1. Editor:
    1. Eclipse
    2. Editplus
    3. Notepad++
  2. Java software: any Java developer will have the following:
    1. JDK
    2. Tomcat
  3. Remote connection
    1. Remote Desktop Connection Manager: the default Remote Desktop program is very poor at handling multiple remote desktops; this tool saves all connection information in a categorized way, can store user/password, and connects to a remote machine in one click.
    2. Putty: connect over SSH.
    3. WinSCP: for SFTP/SCP file transfer.
  4. Database
    1. Any one database (Postgres/MySQL/Mongo): it is good to have one database set up on your machine. I started with MySQL, then PostgreSQL, then went back to MySQL, but now my love is Mongo.
    2. HeidiSQL/SQuirreL: database clients.
  5. Browser: everyone knows what browsers are.
    1. Chrome
      1. Advanced Rest Client
      2. YSlow
      3. Instant Dictionary
    2. Internet Explorer
  6. Compare/Merge
    1. WinMerge: compare and merge two files; a must-have tool for any developer.
  7. Zip/Archiving
    1. WinRAR: I don't like WinZip for some reason; this tool is much faster at opening, viewing, and creating zips, and can sometimes even open broken zips.
  8. System debug
    1. TCPView: view the current connections from your machine to the outside world. The beauty of this tool is that it shows each connection, which program made it, and the resolved host, and you can end established connections, etc.
    2. Process Explorer: this tool really helps me understand in-depth information about running processes: what is running on the machine, which process forked/created it, and what threads, DLLs, or files the process is using. Many times it has been my lifesaver during debugging.
    3. Cygwin: I don't use it much, but for Linux lovers this is the tool that enables a Linux shell on a Windows machine. With the arrival of PowerShell, it is becoming less popular.
  9. Utility
    1. Adobe Flash: Flash viewing in the browser.
    2. Adobe Acrobat Reader: reading PDFs.
    3. VLC Player: media player for any video or audio format.
    4. K-Lite Codec: media player codecs for any video/audio format.
    5. Lightshot: for taking easy screenshots; print screen with on-the-fly editing.

The following are design principles that should be taken care of when designing/implementing an object-oriented system. They are also the foundation that design patterns build on.

1. SRP(Single Responsibility Principle)

According to this principle, a class should handle a single piece of functionality, and there should never be more than one reason for a developer to change a class. If you pack multiple functionalities into one class in Java, you will face problems later because those responsibilities become coupled.
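For illustration, a small hypothetical Java sketch: a report class that both formats and saves itself would have two reasons to change, so the two responsibilities are split into separate classes:

// Each class now has exactly one reason to change.
class ReportFormatter {
    String toHtml(String content) {
        return "<html><body>" + content + "</body></html>";
    }
}

class ReportRepository {
    void save(String formattedReport) {
        // persistence only; changing the storage never touches formatting
    }
}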

2. Open Close Principle

The open/closed design principle means that your classes, functions, or methods should be open for extension (inheritance/overriding), so that new functionality can be added easily, but closed for modification. Following this principle makes it possible to avoid changes to already tested and working code.
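A small hypothetical Java sketch: a new discount rule is added by writing a new class, while the tested calculator is never modified:

interface DiscountRule {
    double apply(double price);
}

class SeasonalDiscount implements DiscountRule {
    public double apply(double price) { return price * 0.9; }
}

// Open for extension (new DiscountRule classes), closed for modification.
class PriceCalculator {
    double finalPrice(double base, DiscountRule rule) {
        return rule.apply(base);
    }
}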

3. Liskov Substitution Principle (LSP)

This principle states that you must be able to substitute subtypes for their supertypes: any method or function that works with a superclass type should work equally well with any of its subclasses. It is closely related to SRP and the Interface Segregation Principle.
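The classic textbook illustration (hypothetical, not from the original post) is Square extending Rectangle: the subtype silently changes the setters' behaviour, so code written against Rectangle breaks:

class Rectangle {
    protected int width, height;
    void setWidth(int w)  { width = w; }
    void setHeight(int h) { height = h; }
    int area() { return width * height; }
}

class Square extends Rectangle {
    // Keeping the square invariant changes the inherited contract:
    @Override void setWidth(int w)  { width = w; height = w; }
    @Override void setHeight(int h) { width = h; height = h; }
}

// A caller that sets width=5 and height=2 expects area 10,
// but a Square returns 4, so Square cannot substitute for Rectangle here.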

4. Interface Segregation principle (ISP)

This principle tells you to ensure that clients are not forced to implement an interface they don't use. In Java, an interface has the design disadvantage that a class must implement all of its methods before it can be used, so keep interfaces small and client-specific.
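A small hypothetical sketch: splitting one fat interface means a print-only client no longer has to stub out scan():

// Fat interface: forces every implementor to provide both methods.
interface Machine { void print(); void scan(); }

// Segregated interfaces: clients implement only what they actually use.
interface Printer { void print(); }
interface Scanner { void scan(); }

class SimplePrinter implements Printer {
    public void print() { /* printing only; no empty scan() stub */ }
}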

5. Delegate

Delegate the tasks of your program to specific classes. An example of this is the equals() and hashCode() methods: collections delegate equality and hashing decisions to the objects themselves.

Principles 1 to 4 above, together with the dependency principle below, are what is usually called the SOLID OO principles.

6. DRY

DRY stands for Don't Repeat Yourself, which is pretty self-explanatory: you shouldn't duplicate code in a program. If a code fragment appears in two places, you would do well to turn it into a method instead of writing the fragment again. Lack of duplication makes the code easy to maintain.

7. Encapsulation

Whether your project gets bought or you keep working on it yourself, your code will always change. So it is good to encapsulate the parts of the code that you think will change; this makes your code easier to maintain and test.

8. Dependency Injection

The framework you use (Spring, Guice, etc.) already provides your dependencies, which means that looking them up yourself is a waste of time. You can use bytecode instrumentation as well; some Aspect-Oriented Programming (AOP) frameworks already do this. Lastly, you can also make use of proxies.
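A minimal constructor-injection sketch (hypothetical names): the class declares what it needs, and a framework such as Spring or Guice, or a unit test, hands the dependency in:

interface MailSender {
    void send(String to, String body);
}

class WelcomeService {
    private final MailSender sender;

    // The dependency is injected, not constructed or looked up inside the class.
    WelcomeService(MailSender sender) {
        this.sender = sender;
    }

    void welcome(String userEmail) {
        sender.send(userEmail, "Welcome!");
    }
}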

9. Composition over Inheritance

According to many programmers, composition should be preferred over inheritance: focus on composing the behaviour of your program rather than building a deep inheritance structure. This is because composition allows you to change the behaviour of a class at runtime, while an interface still gives you polymorphism and the flexibility to replace something with a better implementation.
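A small hypothetical sketch: because the movement behaviour is composed rather than inherited, it can be swapped while the program runs:

interface MoveBehavior { void move(); }

class Walk implements MoveBehavior { public void move() { System.out.println("walking"); } }
class Fly implements MoveBehavior { public void move() { System.out.println("flying"); } }

class Robot {
    private MoveBehavior behavior; // composed, not inherited

    Robot(MoveBehavior behavior) { this.behavior = behavior; }

    void setBehavior(MoveBehavior behavior) { this.behavior = behavior; } // swap at runtime

    void move() { behavior.move(); }
}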

10. Programming for Interface

A programmer should always program against the interface, not the implementation; that gives flexibility in the code. Use interface types for variables and for the return types of methods; argument types should use interfaces as well.
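For example, the common Java idiom: declare against List and Map so the concrete collection can be swapped without touching any caller:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class Inventory {
    // Interface types on fields, arguments, and return types:
    private List<String> items = new ArrayList<>();   // could become a LinkedList later
    private Map<String, Integer> counts = new HashMap<>();

    List<String> itemNames() {           // callers depend on List, not ArrayList
        return items;
    }

    void addAll(List<String> newItems) { // argument typed to the interface too
        items.addAll(newItems);
    }
}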

What is Gamification?

In simple terms, gamification of a website or application means letting users feel like they are playing a game while they work with your website/application. Users who perform actions get rewards based on those actions, complete or progress through missions, and, in a more complex system with multiple communities or product sites, their track may be managed per site.

Gartner Definition: Gamification is the use of game mechanics to drive engagement in non-game business scenarios and to change behaviors in a target audience to achieve business outcomes. Many types of games include game mechanics such as points, challenges, leaderboards, rules and incentives that make game-play enjoyable. Gamification applies these to motivate the audience to higher and more meaningful levels of engagement. Humans are “hard-wired” to enjoy games and have a natural tendency to interact more deeply in activities that are framed in a game construct.

What is the benefit of gamifying your site or application?

If you wish to have more engagement from your users, gamification is one choice. There is a Gartner article on exactly this: Gamification: Engagement Strategies for Business and IT.

What is the future of Gamification?

By the end of 2014, most of the Fortune 500 companies will have started using gamification somewhere in their applications. By 2016, gamification will be the hot cake that every application needs to have.

Entities involved in Gamification:

1. An action/activity that is considered for a reward.

2. A reward, in terms of a level or an achievement.

3. Missions.

4. Tracks/Chapters/Groups/Communities.

I will publish more details in the next blog.

I was looking for a log analyzer for the logs from our system, and I found the following tools interesting.

1. Scribe – Real-time log aggregation used at Facebook
Scribe is a server for aggregating log data that’s streamed in real time from clients. It is designed to be scalable and reliable. It is developed and maintained by Facebook. It is designed to scale to a very large number of nodes and be robust to network and node failures. There is a scribe server running on every node in the system, configured to aggregate messages and send them to a central scribe server (or servers) in larger groups.

https://github.com/facebook/scribe

2. Logstash – Centralized log storage, indexing, and searching

Logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use. Logstash comes with a web interface for searching and drilling into all of your logs.

http://logstash.net/

3. Octopussy – Perl/XML Logs Analyzer, Alerter & Reporter
Octopussy is a log analyzer tool. It analyzes the logs, generates reports and alerts the admin. It has LDAP support to maintain the user list, exports reports by email, FTP & SCP, can generate scheduled reports, and uses RRDtool to generate graphs.

http://sourceforge.net/projects/syslog-analyzer/

4. Awstats – Advanced web, streaming, ftp and mail server statistics
AWStats is a powerful tool that generates advanced web, streaming, FTP or mail server statistics graphically. It can analyze log files from all major server tools like Apache log files, WebStar, IIS and a lot of other web, proxy, WAP, streaming, mail and some FTP servers. This log analyzer works as a CGI or from the command line and shows you all the information your log contains, in a few graphical web pages.

http://awstats.sourceforge.net/

5. nxlog – Multi platform Log management
nxlog is a modular, multi-threaded, high-performance log management solution with multi-platform support. In concept it is similar to syslog-ng or rsyslog, but it is not limited to Unix/syslog only. It can collect logs from files in various formats, and receive logs from the network remotely over UDP, TCP or TLS/SSL. It supports platform-specific sources such as the Windows Event Log, Linux kernel logs, Android logs, local syslog, etc.

http://nxlog.org/

6. Graylog2 – Open Source Log Management
Graylog2 is an open source log management solution that stores your logs in ElasticSearch. It consists of a server written in Java that accepts your syslog messages via TCP, UDP or AMQP and stores them in the database, and a web interface that allows you to manage the log messages from your web browser.

http://graylog2.org/

7. Fluentd – Data collector, Log Everything in JSON
Fluentd is an event collector system. It is a generalized version of syslogd which handles JSON objects as its log messages. It collects logs from various data sources and writes them to files, databases or other types of storage.

http://fluentd.org/

8. Meniscus – The Python Event Logging Service

Meniscus is a Python-based system for event collection, transit and processing at scale. Its primary use case is large-scale cloud logging, but it can be used in many other scenarios, including usage reporting and API tracing. Its components include Collection, Transport, Storage, Event Processing & Enhancement, Complex Event Processing, and Analytics.

https://github.com/ProjectMeniscus/meniscus

9. lucene-log4j – Log4j file rolling appender which indexes log with Lucene
lucene-log4j solves a recurring problem that production support teams face whenever a live incident happens: filtering production log statements to match a session/transaction/user ID. It works by extending Log4j's RollingFileAppender with Lucene indexing routines. Then, with a LuceneLogSearchServlet, you get access to your logs through a web front end.

https://code.google.com/p/lucene-log4j/

10. Chainsaw – log viewer and analysis tool
Chainsaw is a companion application to Log4j written by members of the Log4j development community. Chainsaw can read log files formatted in Log4j's XMLLayout, receive events from remote locations, read events from a DB, and it can even work with JDK 1.4 logging events.

http://logging.apache.org/chainsaw/

11. Logsandra – log management using Cassandra
Logsandra is a log management application written in Python, using Cassandra as the back-end. It was written as a demo for Cassandra, but it is worth a look. It provides support for creating your own parser.

https://github.com/jbohman/logsandra

12. Clarity – Web interface for the grep
Clarity is a Splunk-like web interface for your server log files. It supports searching (using grep) as well as tailing log files in real time. It is written using an event-based architecture on top of EventMachine, and so allows real-time search of very large log files.

https://github.com/tobi/clarity

13. Webalizer – fast web server log file analysis
The Webalizer is a fast web server log file analysis program. It produces highly detailed, easily configurable usage reports in HTML format, for viewing with a standard web browser. It handles standard Common Logfile Format (CLF) server logs, several variations of the NCSA Combined logfile format, wu-ftpd/proftpd xferlog (FTP) format logs, Squid proxy server native format, and W3C Extended log formats.

http://www.webalizer.org/

14. Zenoss – Open Source IT Management
Zenoss Core is an open source IT monitoring product that delivers the functionality to effectively manage the configuration, health and performance of networks, servers and applications through a single, integrated software package.

http://sourceforge.net/projects/zenoss/?source=directory

15. OtrosLogViewer – Log parser and Viewer
OtrosLogViewer can read log files formatted in Log4j (pattern and XMLLayout) and java.util.logging. The source of events can be a local or remote file (FTP, SFTP, Samba, HTTP) or sockets. It has many powerful features like filtering, marking, formatting, adding notes, etc. It can also format SOAP messages in logs.

https://code.google.com/p/otroslogviewer/wiki/LogParser

16. Kafka – A high-throughput distributed messaging system
Kafka provides a publish-subscribe solution that can handle all the activity stream data and processing of a consumer-scale web site. This kind of activity (page views, searches, and other user actions) is a key ingredient in many of the social features on the modern web. Due to the throughput requirements, this data is typically handled by "logging" and ad hoc log aggregation solutions; Kafka offers a viable alternative for feeding such logging data into Hadoop.

https://kafka.apache.org/

17. Kibana – Web Interface for Logstash and ElasticSearch
Kibana is a highly scalable interface for Logstash and ElasticSearch that allows you to efficiently search, graph, analyze and otherwise make sense of a mountain of logs. Kibana will load-balance against your Elasticsearch cluster. Logstash's daily rolling indices let you scale to huge datasets, while Kibana's sequential querying gets you the most relevant data quickly, with more as it becomes available.

https://github.com/rashidkpc/Kibana

18. Pylogdb

pylogdb is a Python-powered, column-oriented database suitable for web log analysis.

http://code.ohloh.net/project?pid=&ipid=129010

19. Epylog – a Syslog parser
Epylog is a syslog parser which runs periodically, looks at your logs, processes some of the entries to present them in a more comprehensible format, and then mails you the output. It is written specifically for large network clusters where a lot of machines (around 50 and upwards) log to the same loghost using syslog or syslog-ng.

https://fedorahosted.org/epylog/

20. Indihiang – IIS and Apache log analyzing tool
The Indihiang Project is a web log analyzing tool. It analyzes IIS and Apache web logs and generates real-time reports. It has a web log viewer and analyzer and is capable of analyzing trends from the logs. The tool also integrates with Windows Explorer, so you can open a log file in Indihiang via the context menu.

http://help.eazyworks.com/index.php?option=com_content&view=article&id=749:indihiang-web-log-analyzer&catid=233&Itemid=150