
Sunday, June 6, 2010

JBoss and RichFaces: jsf libs and errors

Definitely: just following the manual is not enough. And so we waste our working days on stuff like this...

Recently I started working on the JBoss/RichFaces platform for a new application. I was following the instructions from the RichFaces website and added the jars they said were necessary. It worked well on Tomcat, but it didn't go that well on JBoss. It resulted in exceptions like this:

ConfigurationException: CONFIGURATION FAILED! null
    at com.sun.faces.config.ConfigManager.initialize(ConfigManager.java:213)...


with a long stack trace, of course... The stack trace didn't point to any configuration error, just this null pointer exception. So, after some Googling, I saw that it is a common problem: some are lucky enough to get a few pointers from the stack trace, but some just get this null and are stuck. Fortunately, a few guys pointed out that the problem might be with the JSF libraries.

So, what really happens is that the libraries I put in the WEB-INF/lib folder conflict with the JBoss JSF libraries already present on the server. The libraries in question are the jsf-api, jsf-impl and jsf-facelets jars. Problems arise when the versions of the user-provided and server-provided libraries differ, or when the libraries in the resulting mix simply don't work together.

So, my first step was to replace those jars with the ones present in the JBoss server libs (a simple search through the jars in JBoss with TotalCommander gives you the location of the libs, packed in JBoss wars or similar archives). It didn't quite work. Obviously, if I deploy the application war with the same libraries, the problem remains.

So, I then removed the JSF libs from WEB-INF/lib. That worked up to the point where I started using JSF API calls in my code (like FacesContext.getCurrentInstance()), where I needed the JSF jars in the project. But I don't want those jars in my war. So, what to do?

The solution is quite simple. Instead of placing the jars yourself, you just add the JBoss Server Library to your project's build path in Eclipse (Properties -> Configure Build Path...). Your project has all the dependencies required for development, while the deployed app uses the same environment as the server. You can also go through the libraries to see if there are any other unnecessary duplicates.





Thursday, February 25, 2010

Eclipselink: table sequence generator and concurrency - Part Three

Thanks to eager readers, I finally got to finish this :D

So far, the last two posts have shown that there is a problem with sequencing in EclipseLink regarding concurrent access, and that it is possible to implement a custom sequence generator. We decided to do our own implementation of a table-generated sequence.

So, as we said, we have our own sequence generator class which is, interestingly, called CustomSequenceGenerator:

As you can see below, the getGeneratedValue method is used to get a concrete value; there, an instance of the CustomSequenceManager class is used. The customize method, of course, registers our sequence with the session, just like in the previous post.

import java.util.Vector;

import org.apache.log4j.Logger; // or whichever logging API you use

import org.eclipse.persistence.config.SessionCustomizer;
import org.eclipse.persistence.internal.databaseaccess.Accessor;
import org.eclipse.persistence.internal.sessions.AbstractSession;
import org.eclipse.persistence.sequencing.Sequence;
import org.eclipse.persistence.sessions.Session;

public class CustomSequenceGenerator extends Sequence implements SessionCustomizer {

    private Logger logger = Logger.getLogger(CustomSequenceGenerator.class);

    private CustomSequenceManager customSequenceManager;

    public CustomSequenceGenerator() {
        super();
        logger.debug("CustomSequenceGenerator()");
        customSequenceManager = new CustomSequenceManager();
        logger.debug("new CustomSequenceManager()");
    }

    public CustomSequenceGenerator(String name) {
        super(name);
        logger.debug("CustomSequenceGenerator(String name)");
        customSequenceManager = new CustomSequenceManager();
        logger.debug("new CustomSequenceManager()");
    }

    // Called by EclipseLink whenever it needs a new ID value.
    @Override
    public Object getGeneratedValue(Accessor accessor,
            AbstractSession writeSession, String seqName) {
        logger.debug("getGeneratedValue()");
        Long nextValue = customSequenceManager.getNextValue(seqName);
        logger.debug("nextValue = " + nextValue);
        return nextValue;
    }

    // Not used, because we do not let EclipseLink preallocate values itself.
    @Override
    public Vector getGeneratedVector(Accessor accessor,
            AbstractSession writeSession, String seqName, int size) {
        return null;
    }

    @Override
    protected void onConnect() {
    }

    @Override
    protected void onDisconnect() {
    }

    @Override
    public boolean shouldAcquireValueAfterInsert() {
        return false;
    }

    @Override
    public boolean shouldOverrideExistingValue(String seqName,
            Object existingValue) {
        return existingValue == null;
    }

    @Override
    public boolean shouldUseTransaction() {
        return false;
    }

    @Override
    public boolean shouldUsePreallocation() {
        return false;
    }

    // SessionCustomizer callback: registers our sequence under the name
    // the entities reference.
    public void customize(Session session) throws Exception {
        logger.debug("customize(Session session) ");
        CustomSequenceGenerator sequence = new CustomSequenceGenerator("BASE_ENTITY_SEQ");
        logger.debug("new CustomSequenceGenerator('BASE_ENTITY_SEQ')");
        session.getLogin().addSequence(sequence);
        logger.debug("session.getLogin().addSequence(sequence)");
    }
}


Now, let's see the CustomSequenceManager class. What we do is simply simulate sequencing as we know it from Oracle.

import java.util.HashMap;

import org.apache.log4j.Logger; // or whichever logging API you use

public class CustomSequenceManager {

    private static Logger logger = Logger.getLogger(CustomSequenceManager.class);

    // Current value per sequence name
    protected HashMap<String, Long> sequenceMap = new HashMap<String, Long>();

    // Upper limits of the allocated frames, per sequence name
    protected HashMap<String, Long> sequenceFrame = new HashMap<String, Long>();

    // Frame (allocation) size, read from application configuration
    private Long allocationSize = MyParameters.getSequenceFrameSize();

    public synchronized Long getNextValue(String seqName) {
        SequenceService sequenceService = new SequenceService();

        Long seqCount = sequenceMap.get(seqName);
        Long seqLimit = sequenceFrame.get(seqName);

        // No frame yet, or the current frame is exhausted: fetch a new one from the DB
        if (seqCount == null || seqCount == seqLimit - 1) {
            seqCount = sequenceService.getCustomSequenceNextValue(seqName);
            if (seqCount == null) {
                seqCount = 0L;
            }
            seqLimit = seqCount + allocationSize;

            sequenceFrame.put(seqName, seqLimit);
            sequenceMap.put(seqName, seqCount);
        } else if (seqCount <= seqLimit - 1) {
            // Still inside the frame: just increment in memory
            seqCount++;
            sequenceMap.put(seqName, seqCount);
        }

        return seqCount;
    }
}

I guess it's easy to follow. We allocate a frame of values and hand them out one by one; when we exhaust the frame, we get the next one, and so on. We use hash maps for storing the current values, and we get new frame start values from SequenceService.


import java.util.List;

import javax.ejb.EJBException;
import javax.naming.Context;
import javax.naming.InitialContext;

public class SequenceService {

    SequencePersist sequencePersist;

    public SequenceService() {
        try {
            // Look up the EJB that does the actual database work
            Context ctx = new InitialContext();
            sequencePersist = (SequencePersist) ctx.lookup("SequencePersistBean");
        } catch (javax.naming.NamingException ne) {
            throw new EJBException(ne);
        }
    }

    public UserSequence create(UserSequence s) {
        return sequencePersist.create(s);
    }

    public UserSequence modify(UserSequence s) {
        return sequencePersist.modify(s);
    }

    public UserSequence find(String seqName) {
        Object seqObj = sequencePersist.find(seqName);
        if (seqObj == null) {
            return null;
        }
        return (UserSequence) seqObj;
    }

    public List findQuery(String queryString) {
        return sequencePersist.findQuery(queryString);
    }

    public Long getCustomSequenceNextValue(String seqName) {
        return sequencePersist.getCustomSequenceNextVal(seqName);
    }
}

Nothing special here, because all the work is done behind the SequencePersist interface, sketched just below. Here we used an EJB implementation, but you could also use Spring to obtain the bean. What is really important is the SequencePersistBean implementation that follows, and that is the database access to the sequence table.
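Just for completeness, the interface is nothing more than the operations used above; something like this (in our case the SequencePersistLocal and SequencePersistRemote views simply extend it):

import java.util.List;

public interface SequencePersist {

    UserSequence create(UserSequence s);

    UserSequence modify(UserSequence s);

    UserSequence find(String seqName);

    List findQuery(String queryString);

    Long getCustomSequenceNextVal(String seqName);
}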

import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.apache.log4j.Logger; // or whichever logging API you use

import org.eclipse.persistence.jpa.JpaHelper;
import org.eclipse.persistence.queries.StoredProcedureCall;
import org.eclipse.persistence.queries.ValueReadQuery;
import org.eclipse.persistence.sessions.Session;

@Stateless(name = "SequencePersistBean")
public class SequencePersistBean implements SequencePersistLocal, SequencePersistRemote {

    @PersistenceContext(name = "my_sequence/EntityManager", unitName = "my_sequence")
    EntityManager em;

    Logger logger = Logger.getLogger(SequencePersistBean.class);

    public SequencePersistBean() {
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public UserSequence create(UserSequence s) {
        em.persist(s);
        return (UserSequence) em.find(s.getClass(), s.getSeqName());
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public UserSequence modify(UserSequence s) {
        return (UserSequence) em.merge(s);
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public UserSequence find(String seqName) {
        Object seqObj = em.find(UserSequence.class, seqName);
        if (seqObj == null) {
            return null;
        }
        return (UserSequence) seqObj;
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public List findQuery(String queryString) {
        return (List) em.createQuery(queryString).getResultList();
    }

    // Calls the stored procedure that hands out the next frame start value.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public Long getCustomSequenceNextVal(String seqName) {
        Long sequenceCount = null;
        try {
            StoredProcedureCall spcall = new StoredProcedureCall();
            spcall.setProcedureName("CC1NextSequenceVal");
            spcall.addNamedArgument("SequenceName");
            spcall.addNamedOutputArgument(
                    "NextValue",      // procedure parameter name
                    "NextValue",      // out argument field name
                    BigDecimal.class  // Java type corresponding to the type returned by the procedure
            );

            ValueReadQuery query = new ValueReadQuery();
            query.setCall(spcall);
            query.addArgument("SequenceName"); // input

            List args = new ArrayList();
            args.add(seqName);

            // Drop down from JPA to the native EclipseLink session to execute the call
            Session session = JpaHelper.getEntityManager(em).getActiveSession();
            logger.debug("session " + session);
            BigDecimal intSeq = (BigDecimal) session.executeQuery(query, args);
            sequenceCount = intSeq.longValue();
        } catch (RuntimeException e) {
            logger.fatal("getCustomSequenceNextVal", e);
        }
        logger.debug("sequenceCount " + sequenceCount);
        return sequenceCount;
    }
}


As you can see, the methods are annotated with @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW) to guarantee that each call runs in its own transaction.

I won't go into too much detail about the sequence table, because it is just an imitation of the table EclipseLink would generate. We use the bean's methods to modify the values in it.
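If it helps, the UserSequence entity behind it can look something like this; only getSeqName() is actually referenced by the code above, so the remaining column and field names are just an illustration:

import java.io.Serializable;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "USER_SEQUENCE")
public class UserSequence implements Serializable {

    @Id
    @Column(name = "SEQ_NAME")
    private String seqName;   // e.g. "BASE_ENTITY_SEQ"

    @Column(name = "SEQ_COUNT")
    private Long seqCount;    // last allocated frame start value

    @Column(name = "FRAME_SIZE")
    private Long frameSize;   // how much the stored procedure increments by

    public String getSeqName() {
        return seqName;
    }

    public void setSeqName(String seqName) {
        this.seqName = seqName;
    }

    // getters and setters for the remaining fields omitted
}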


The getCustomSequenceNextVal method is interesting, because we use a stored procedure to ensure complete transaction isolation at the database level. This Java code should be universal across all databases that have stored procedures and are supported by EclipseLink. The final touch is writing the stored procedure itself, which is a DB thing. We used Oracle, so the procedure is written as autonomous; on another database you should build something equivalent.

So, the essence of the matter is that you take a frame of N values from the DB table and store it in a Java hash map, where you read and increment values. When you exhaust the frame, you go back to the DB for new values. Reading through a stored procedure that is autonomous ensures that every concurrent reader gets a different frame value. Basically, the procedure looks in the table for the row with the specific SEQ_NAME, reads the value and the frame size from that row, and returns the value. Then it increments the value by the frame size, stores it, and that is it. Everything an EclipseLink table-generated sequence should do, but for some reason doesn't.
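The wiring on the JPA side is the same as for the UUID example in Part Two: register CustomSequenceGenerator as the eclipselink.session.customizer property in persistence.xml, and reference the sequence name we registered in customize on the entity's ID field. The column name and ID type below are just an illustration:

@Id
@GeneratedValue(generator = "BASE_ENTITY_SEQ")
@Column(name = "ID")
private Long id;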

Monday, December 14, 2009

Eclipselink: table sequence generator and concurrency - Part Two, Custom Sequence generator

In my previous post I explained how EclipseLink table-generated primary keys have a problem with concurrency. I'm not sure why, or whether other people have had the same problem, but what can you do? Well, we had to do something.

First, I considered the possibility of configuring the primary key generation strategy differently for different platforms, using a master entity superclass and XML descriptors (actually, combining XML descriptors and annotations).

Then I tried to find a solution that would let us somehow make EclipseLink use an alternative method of primary key generation. Fortunately, it did not take long.

On the EclipseLink wiki there is an example of custom sequencing using UUIDs. Now, UUID generation is something we came across while using Hibernate, which has some issues with HashMaps and object identity, so generating a UUID for use in the objects' equality methods is highly recommended. Using a UUID as a primary key sounded interesting, but, frankly, I did not feel like modifying the database schema...

So I wondered: why not make my own, concurrency-safe version of table sequencing? After all, it is a very nice solution; it takes my mind off whether I'm using MS SQL, Oracle or whatever DB, and I don't have to worry whether identity columns will do just as good a job as Oracle sequences...

Let's look into customizing sequencing.

You can extend the EclipseLink classes Sequence and StandardSequence. You also have to implement the SessionCustomizer interface.

public class UUIDSequence extends Sequence implements SessionCustomizer{

Now, look into the class. The methods important for us are:

public Object getGeneratedValue(Accessor accessor,
        AbstractSession writeSession, String seqName) {
    return UUID.randomUUID().toString().toUpperCase();
}
and

public void customize(Session session) throws Exception {
    UUIDSequence sequence = new UUIDSequence("system-uuid");

    session.getLogin().addSequence(sequence);
}

The getGeneratedValue method, well, returns the generated sequence value in whatever way we want. In this example UUIDs are used, so a randomly generated UUID value is simply returned from the method.

But how do we make EclipseLink use our sequence generator? The customize method is used for this. As you can see, a new instance of our sequence class is created with our specific name passed to the constructor, and the sequence is added to the session. To tell EclipseLink that we are using our custom generator, we add a property to the persistence unit:

 name="eclipselink.session.customizer" 
value
="eclipselink.example.UUIDSequence"/>
The value of the property is the fully qualified class name of our sequencing class.

And that is it. I'm not really sure of the internals, but the persistence unit sees that it has a customizer class, the customize method is called and our sequence is registered. When you want to use the sequence in your JPA entities, you simply specify the name you passed to the constructor:

@Id
@GeneratedValue(generator="system-uuid")
@Column(name="PROJ_ID")
private String id;

In the next post we will make a similar class which will use our own table-generated sequence. And yes, it is quite simple :)

Wow, not a bad post after 10 days of flu...

Wednesday, November 25, 2009

Eclipselink: table sequence generator and concurrency - Part One

Why do we like O/R mapping? OK, there are tons of advertising written about that, so take your pick.
For your boss, the most interesting thing would most likely be universal database connectivity. You have customers with various databases, so it is much easier to sell your software. You don't have to buy a licence for a new database; relax, we've got them all covered... Actually, that was our case.

OK, that is cool. We had our little piece of software up and running on EclipseLink (I had quite an adventure migrating from Hibernate, but that is another story). Since we wanted to be database vendor independent, we chose table-generated IDs for our primary keys.

A little intro, just to shed some light on it, though I'm sure you gurus out there don't need it ;) Here is a good place to start; there you can find details on ID generation strategies.

When you specify an entity in JPA, you have to mark one of the class fields as the ID, using the @Id annotation.

So the next step is to see how to assign those IDs. You could set the values yourself before persisting an object, but that is not very useful in practice. There are also options for using identity columns or sequences, but these are not available on all databases. For instance, identity columns are available in SQL Server, but Oracle uses sequences. If you want a universal solution, portable across all databases, you need table-generated primary keys. It is similar to Oracle sequences, only the values are stored in a user table that EclipseLink accesses. If you use schema generation, EclipseLink will generate that table according to your annotations. Usually the table has columns for the sequence name and value, and an allocation size (default 50). If you look into the documentation on the @TableGenerator annotation, you will see that you can configure the name of this table, its columns, the allocation size and so on.
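For illustration, a table-generated ID on a (completely hypothetical) entity looks roughly like this; the generator, table and column names are placeholders you would adapt to your own schema:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.TableGenerator;

@Entity
public class Customer {

    @Id
    @TableGenerator(name = "CUSTOMER_GEN",        // logical generator name
            table = "SEQUENCE",                   // table holding the values
            pkColumnName = "SEQ_NAME",            // column with the sequence name
            valueColumnName = "SEQ_COUNT",        // column with the current value
            pkColumnValue = "CUSTOMER_SEQ",       // row used by this entity
            allocationSize = 50)
    @GeneratedValue(strategy = GenerationType.TABLE, generator = "CUSTOMER_GEN")
    @Column(name = "ID")
    private Long id;

    @Column(name = "NAME")
    private String name;

    public Long getId() {
        return id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}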

So it all sounded nice. And on Hibernate it worked nicely. But life just isn't that simple. Life is fun.

While doing some stress testing on some application modules, I became aware of an unpleasant fact: an unusual number of unique constraint violations was occurring.

What I did was deploy the application module and set up some 5 clients on other computers to hit it with a large number of consecutive requests, which typically did some inserting and updating in the database. It turned out that I got a relatively large number of unique constraint errors - on primary keys!

So, 5 clients target the EJB modules, and the services in those modules access the database using EclipseLink. New objects are created and mapped into tables. When persisting an object, EclipseLink takes the table-generated value and assigns it as the object's ID, then inserts... And the same thing happens when initiated from several clients. One would expect EclipseLink to give a different primary key value each time a new object is persisted. It should handle concurrency. But it looks like it does not.

Of course, we checked our test cases and they were OK. It works just fine in a single-client situation. When we added another client, only 4-5 unique constraint violations occurred over some 10000 inserts. Adding more clients increased the number of exceptions exponentially.

I wrote a few words to Oracle, on the official forum as well as on Metalink (now Oracle Support). Well, the guys told me that it shouldn't happen. But it does. It is nothing exotic - stateless session beans and JPA persistence using the EclipseLink provider on a Weblogic server. You look up a bean, a method creates and persists an object. It is not a clustered environment.
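Nothing exotic indeed: the bean is roughly this shape (class and entity names are hypothetical, reusing the Customer sketch from above):

import javax.ejb.Remote;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Remote
interface CustomerService {
    Customer create(String name);
}

@Stateless
public class CustomerServiceBean implements CustomerService {

    @PersistenceContext(unitName = "my_unit")
    private EntityManager em;

    public Customer create(String name) {
        Customer c = new Customer();
        c.setName(name);
        em.persist(c); // EclipseLink assigns the table-generated ID here
        return c;
    }
}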

I would really like to know if anyone else has had a similar problem, so if you hear anything... :)

Then the search for alternative methods of generating IDs started. After a while, it turned out that EclipseLink is flexible enough to give us the opportunity to use our own ID generation mechanism.

The next post will be a really interesting one, when I present our cool solution :D

Sunday, February 1, 2009

Basic EclipseLink persistence.xml configuration

Right, let's start with the basics.

Weblogic has inherited from its previous versions an O/R mapper called KODO. I don't have much to say about it, but it is the default ORM for Weblogic, just as Toplink was for OC4J or Hibernate is for JBoss. Of course, JPA is implemented on all of these platforms, and the implementation is provided by these persistence mappers. You should try to use plain JPA as much as possible, and use Hibernate/EclipseLink specifics only where you don't have any other choice. A simple rule, but as always, the problem is judging when to make that call. We will say more about it in some other post.

Let's get back to persistence.xml. Since the default ORM for Weblogic is KODO, what happens if you try to use the persistence.xml you used on OC4J with Toplink?
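It probably looked something along these lines (the unit name, data source and property values here are just placeholders):

<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <persistence-unit name="my_unit" transaction-type="JTA">
    <jta-data-source>jdbc/MyDS</jta-data-source>
    <properties>
      <property name="toplink.ddl-generation" value="drop-and-create-tables"/>
      <property name="toplink.target-database" value="Oracle"/>
      <property name="toplink.logging.level" value="FINE"/>
    </properties>
  </persistence-unit>
</persistence>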

Here we used additional properties for generating tables based on our JPA entities. That can be useful in development; of course not in production, where you don't want to drop all the tables when you deploy the application. Note that on Toplink we had to explicitly state the type of database for this to work. We also specified the logging level for Toplink.

Deploy that as-is, and Weblogic will assume it should use its default ORM, KODO. Then you will see some KODO-specific messages, and things will work... to a certain point.

As it turned out, regardless of the JPA standard, implementations are not perfect. The last thing you need is to change ORM in the middle of a project (believe me, Hibernate, Toplink... I know!). If you have developed with Toplink so far, you should stick to what you know. So we want to stick to Toplink, which has meanwhile become EclipseLink. OK, you get it with the new Weblogic. Now we just have to put a couple more things in persistence.xml:
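Roughly like this (again, the unit name and data source are just placeholders):

<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <persistence-unit name="my_unit" transaction-type="JTA">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <jta-data-source>jdbc/MyDS</jta-data-source>
    <properties>
      <property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
      <property name="eclipselink.target-database" value="Oracle"/>
      <property name="eclipselink.logging.level" value="FINE"/>
    </properties>
  </persistence-unit>
</persistence>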






Note a couple of things. First, we had to tell Weblogic which persistence provider we want to use:

org.eclipse.persistence.jpa.PersistenceProvider

Make sure you put it right after the opening persistence-unit tag.

Second, the names of the eclipselink properties are equivalent to the toplink ones: where the prefix was once toplink, it is now eclipselink. The same goes for most of the classes, package names and so on. All of this makes the transition from Toplink to EclipseLink on Weblogic quite simple. Until we get to classloading and structuring our EAR for the new platform, that is.

That is another story.

Hope this helps someone!