Showing posts for tag "java"

Developing an Open/WebSphere Liberty UserRegistry with Tycho

Aug 16, 2019 3:08 PM

In my last post, I put something of a stick in the ground and announced a multi-blog-post project to discuss the process of making an XPages app portable for the future. In true season-cliffhanger fashion, though, I'm not going to start that immediately, but instead have a one-off entry about something almost entirely unrelated.

Specifically, I'm going to talk about developing a custom UserRegistry and TrustAssociationInterceptor for Open Liberty/WebSphere Liberty. IBM provides documentation for this process, and it's serviceable enough, but there were some specific things I had to learn coming at it from a Domino perspective.

What These Services Are

Before I get into the specifics, it's worth discussing what these services actually are, especially TrustAssociationInterceptor with its ominous-sounding name.

A UserRegistry class is a mechanism to provide a Liberty server with authentication and user info services. Liberty has a couple of these built-in, and the prototypical ones are the basic and LDAP registries. Essentially, these do the job of the Directory and Directory Assistance on Domino.
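
To make that concrete, here's a heavily-trimmed sketch of what such a class looks like, assuming the com.ibm.websphere.security.UserRegistry interface from the Liberty API. The real interface has a pile of additional user- and group-lookup methods that I'm eliding here, and the password check is hardcoded purely for illustration:

import java.rmi.RemoteException;
import java.util.Properties;

import com.ibm.websphere.security.CustomRegistryException;
import com.ibm.websphere.security.PasswordCheckFailedException;
import com.ibm.websphere.security.UserRegistry;

public class DominoUserRegistry implements UserRegistry {
	@Override
	public void initialize(Properties props) throws CustomRegistryException, RemoteException {
		// Read any configuration needed to reach the backing directory
	}

	@Override
	public String checkPassword(String userSecurityName, String password)
			throws PasswordCheckFailedException, CustomRegistryException, RemoteException {
		// A real implementation would consult the backing directory here
		if(!"secret".equals(password)) {
			throw new PasswordCheckFailedException(userSecurityName);
		}
		return userSecurityName;
	}

	@Override
	public boolean isValidUser(String userSecurityName) throws CustomRegistryException, RemoteException {
		// A real implementation would look the name up in the directory
		return true;
	}

	// ...plus getRealm(), getUsers(), getGroups(), getUniqueUserId(), and
	// the rest of the interface's lookup methods
}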

A TrustAssociationInterceptor class is related. What it does is take an incoming HTTP request and look for any credentials it understands. If present, it tells Liberty that the request can be considered authenticated for a given user name. The classic mechanisms for this are HTTP Basic and form-cookie authentication, but this can also cover mechanisms like OAuth. In Domino, this maps to the built-in authentication mechanisms and, more particularly, to DSAPI filters.
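
Similarly, here's a sketch of a TAI, assuming the com.ibm.wsspi.security.tai.TrustAssociationInterceptor interface. The cookie name matches Domino's single-server session cookie, but the actual user resolution is stubbed out:

import java.util.Properties;

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.ibm.websphere.security.WebTrustAssociationException;
import com.ibm.websphere.security.WebTrustAssociationFailedException;
import com.ibm.wsspi.security.tai.TAIResult;
import com.ibm.wsspi.security.tai.TrustAssociationInterceptor;

public class DominoTAI implements TrustAssociationInterceptor {
	@Override
	public boolean isTargetInterceptor(HttpServletRequest req) throws WebTrustAssociationException {
		// Claim only requests that carry a Domino session-auth cookie
		Cookie[] cookies = req.getCookies();
		if(cookies != null) {
			for(Cookie cookie : cookies) {
				if("DomAuthSessId".equals(cookie.getName())) {
					return true;
				}
			}
		}
		return false;
	}

	@Override
	public TAIResult negotiateValidateandEstablishTrust(HttpServletRequest req, HttpServletResponse resp)
			throws WebTrustAssociationFailedException {
		// resolveUserName stands in for the call to the Domino-side servlet
		String userName = resolveUserName(req);
		if(userName == null) {
			return TAIResult.create(HttpServletResponse.SC_UNAUTHORIZED);
		}
		return TAIResult.create(HttpServletResponse.SC_OK, userName);
	}

	private String resolveUserName(HttpServletRequest req) {
		// Placeholder: ask the server that issued the cookie who it belongs to
		return null;
	}

	@Override
	public int initialize(Properties props) throws WebTrustAssociationFailedException {
		return 0;
	}

	@Override
	public String getVersion() {
		return "1.0";
	}

	@Override
	public String getType() {
		return getClass().getName();
	}

	@Override
	public void cleanup() {
		// Nothing to clean up in this sketch
	}
}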

How I Used Them

My desire to implement these developed when I was working on the Domino Open Liberty Runtime. I wanted to allow Liberty to use the containing Domino server as a user registry without having to enable LDAP and, as a stretch goal, I wanted to have some sort of implicit SSO without having to configure LTPA.

So I ended up devising something of an ad-hoc directory API exposed as a servlet on Domino, which Liberty could use to make the needed queries. To pair with that, I wrote a TrustAssociationInterceptor implementation that looks for Domino auth cookies in incoming requests, makes a call to a small servlet with that cookie, and grabs the associated username. That provides only one-way SSO, but that's good enough for now.

The Easy Part

The good news was that my assumption going in - that my comfort with Tycho would help - was generally correct. Since the final output I wanted was a bundle, I was able to just add it to my project structure like any other and work with it in Eclipse's PDE normally. Tycho and PDE didn't necessarily help much - I still had to track down the Liberty API plugins and make a local update site out of them, but that was old hat by this point.

What Made Development Weird

I went into the project in high spirits: the interfaces required weren't bad, and Liberty uses OSGi internally. I figured that, with my years of OSGi experience, this would be a piece of cake.

And, admittedly, it kind of was. The core concepts are the same: building with Tycho, bundle activators, MANIFEST.MF, and all that. However, Liberty's use of OSGi is, I believe, much more modern than Domino's, and certainly much less focused on Equinox specifically.

For one, though Liberty is indeed OSGi-based, it doesn't use Maven+Tycho for its build process. Instead, it uses Gradle and the often-friendlier bnd tooling to handle its OSGi composition. That's not too huge of a difference, and the build process doesn't really affect the final built feature. The full differences are a whole big topic on their own, but the way they shake out for this purpose is essentially a difference in philosophy, and the different build mechanism was something of a herald of the downstream distinctions.

One big way this shows is in service registration. While Equinox-based apps, coming from an Eclipse heritage, tend to use "plugin.xml" to register services, Liberty (and most others, I assume) favors programmatic registration of services inside the bundle activator. Though that style does indeed work on Equinox (including on Domino), this was the first time I'd encountered it, and it took some getting used to.
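
For reference, that programmatic registration looks roughly like this - a minimal sketch, reusing the hypothetical DominoUserRegistry from earlier:

import java.util.Hashtable;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

import com.ibm.websphere.security.UserRegistry;

public class Activator implements BundleActivator {
	private ServiceRegistration<UserRegistry> registration;

	@Override
	public void start(BundleContext context) throws Exception {
		// Publish the registry as an OSGi service so the runtime can discover it
		registration = context.registerService(UserRegistry.class, new DominoUserRegistry(), new Hashtable<>());
	}

	@Override
	public void stop(BundleContext context) throws Exception {
		if(registration != null) {
			registration.unregister();
		}
	}
}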

The other oddity was how you encapsulate your bundle as a feature in Liberty parlance. Liberty uses the term "feature" to refer to individual components that make up the server, and which you can configure in the "server.xml" file. These are declared using files similar to MANIFEST.MF with specialized headers to declare the name of the feature, the bundles that make it up, and any APIs it provides to the server and apps. In my case, I wrote a generic mechanism to deploy these features when a server is established, which writes the manifest files to the server's feature directory. Once they're deployed, they become available to the server as a feature with the "usr" prefix, like "usr:dominoUserRegistry-1.0" for my case.
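
To give an idea of the format, a feature manifest along these lines might look like the following; the header names are Liberty's, but the specific values here are illustrative:

Subsystem-ManifestVersion: 1
Subsystem-SymbolicName: dominoUserRegistry-1.0; visibility:=public
Subsystem-Version: 1.0.0
Subsystem-Type: osgi.subsystem.feature
Subsystem-Content: org.example.dominoUserRegistry; version="[1.0.0,1.1.0)"
IBM-Feature-Version: 2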

In The Future

I have some ideas for additional features I'd like to develop - providing implicit APIs for Darwino and Jakarta NoSQL/JNoSQL would be handy, for example. This approach went pretty smoothly, but I'll probably develop non-Domino ones using either Gradle or Maven with the maven-bundle-plugin. Either way, it ended up fairly pleasant once I discarded my old assumptions, and it's another good entry in the "pros" column for Liberty.

Java Grab Bag 2

May 3, 2019 3:37 PM

Tags: java
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment
  19. Java Hiccups
  20. Bitwise Operators
  21. Java Grab Bag 2

Following in the vein of "Java Hiccups", I've had a couple things floating around my head lately that I think collectively make for a good post for Java developers, particularly those working in the Domino arena.

Without further ado:

Map#computeIfAbsent

This is a method that was added in Java 8 and, while it's not as big a deal as the addition of streams, it's one of my favorite additions and something I use very frequently. To give a point of reference, consider this common idiom from an imagined XPages app:

Map<String, Object> applicationScope = ExtLibUtil.getApplicationScope();
if(!applicationScope.containsKey("someVal")) {
  applicationScope.put("someVal", someExpensiveOperation());
}
String someVal = (String)applicationScope.get("someVal");

Essentially, using a Map as a cache for a complex computed value. Java 8 added the #computeIfAbsent method (alongside several similar ones) to do this in one go:

String someVal = (String)ExtLibUtil.getApplicationScope().computeIfAbsent("someVal", key -> someExpensiveOperation());

The second parameter is (usually) a lambda, like the ones used in streams, that takes the provided key as an argument and is only executed if the value does not already exist. Due to the way this was added, most implementations do pretty much the same thing as the first block of code, but you don't have to care about that. Your code gets a bit smaller, the intent is much clearer, and it's less prone to small bugs like changing the key and forgetting to change it in all three places.

Arrays Are Weird

Java's built-in array type is loosely based on C's, and that's reflected in the syntax:

int[] foo = new int[4];
foo[0] = 1;
foo[1] = 2;
foo[2] = 3;
foo[3] = 4;

Like C, they are zero-based, declared with the capacity and not the max index, and cannot be resized. Unlike C, arrays aren't just syntactical sugar on top of pointers, and this manifests immediately in bounds checking. Take this line:

foo[4] = 10;

In C, this will (famously) just write an integer 10 value into whatever memory happens to be just beyond the bounds of your array. In Java, you'll get an ArrayIndexOutOfBoundsException, saving you from an insidious bug. But, since Java arrays are (probably) implemented internally much like C's - likely as contiguous blocks of memory sized to the type - they're still extremely efficient, and so they show up in a lot of speed-critical code.

As speedy and safe as they are, though, they're still pretty unfriendly. For starters, they can't be resized. When you do new int[4] (or use literal syntax like new int[] { 1, 2, 3, 4 }), you carve out that much memory and can't shrink or expand it in-place. You can change the values inside an array, just not its size. To "resize" efficiently, you have to make a new array and then use System.arraycopy to populate the new array with the contents of the old.
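
For example (Arrays.copyOf is a convenience wrapper around the same dance):

int[] foo = new int[] { 1, 2, 3, 4 };
int[] expanded = new int[8];
System.arraycopy(foo, 0, expanded, 0, foo.length); // copy the old contents into the new array
foo = expanded; // the original four-slot array is now unreferenced

// Equivalently, using the utility method:
int[] alsoExpanded = java.util.Arrays.copyOf(foo, 16);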

This is all why the List interface (with its predecessor class Vector) exists: they serve the same function of "ordered collection of stuff", but allow for dynamic resizing. Because these objects are usually "efficient enough" (ArrayList uses "true" arrays under the covers) while having numerous additional benefits, you should use them as your first go-to and only use arrays if you have a reason.

That's in part because the weirdness of arrays doesn't end with their inconvenience. Array types are actually implicitly-created classes, even when they contain primitive types. So:

int foo = 3; // primitive value
foo = null; // syntax error!
int[] bar = new int[] { 3 }; // Object
bar = null; // legal!

List<int> fooList = new ArrayList<>(); // syntax error - primitives can't be in generics
List<int[]> barList = new ArrayList<>(); // legal!

Object.class == Object[].class; // false
int[].class.isArray(); // not only legal, but true

Java provides two main utility classes for working with arrays: java.util.Arrays (extremely useful for Arrays.asList) and java.lang.reflect.Array (usually only useful in edge cases).

Java Has No Library Versioning System

If you've worked with Java in Designer or Eclipse, you've likely run across this preferences pane or its per-project version:

Eclipse Java compiler settings

These settings affect two things:

  • The syntax allowed in your source (e.g. new ArrayList<>() requires 1.7 or higher)
  • The class file format version (you can think of this like an NSF ODS). You've likely seen the latter in play by receiving an UnsupportedClassVersionError trying to run Java 7 or 8 code on a pre-9.0.1FP8 Domino server.

Conspicuously absent from this short list is anything to do with classes or methods added to the runtime in newer versions. For example, the String class gained a static method String.join to conveniently concatenate strings with a given delimiter. If you're targeting an older Java version but using a newer Java library (as Designer 9.0.1FP10+ does with a default target of 1.5 and JVM of 1.8), you can write a line of code using that method without issue - the syntax doesn't require anything above 1.5, so all is clear as far as the compiler is concerned. But if you then try to run that code on an older JVM (such as an older Domino server), you'll get an exception at runtime, since the method doesn't exist.
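
For example, this line compiles cleanly with a source level of 1.5, since the syntax is ancient, but it will throw NoSuchMethodError when run on a pre-8 JVM:

// Legal 1.5-era syntax, but String.join only exists in the Java 8+ class library
String path = String.join("/", "foo", "bar", "baz");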

Unfortunately, the only true answer to this is to tell your IDE about a JRE for each specific Java version you're targeting, something that doesn't happen by default, and which will gradually get more difficult as Java 6 becomes harder and harder to come across.

This is one of the things that OSGi aims to fix - you could, for example, have many versions of Guava installed, and you could declare that your plugin works specifically with version 18. Then, when loading, the runtime will either bind to a matching version or give you an error that no version could be resolved. No mystery involved. Unfortunately, OSGi is a niche thing losing ground, and the module system introduced in Java 9 consciously does not address this.
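
In OSGi manifest terms, that kind of declaration looks like this (the version range here is just for illustration):

Require-Bundle: com.google.guava;bundle-version="[18.0.0,19.0.0)"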

In a pinch, you can use the file tool on most Unix systems to check the version of an individual class file:

$ file Foo.class
Foo.class: compiled Java class data, version 52.0 (Java 1.8)

Licensing

A little while ago, Oracle raised a bit of a stink by declaring that, as of this year, commercial use of their Java runtimes would require paid licensing. Historically, you could get support for Java for money, and certain additional components had their own licensing requirements, but it was pretty normal otherwise to install Java from java.sun.com and not give it a second thought.

This naturally caused a few questions when it comes to Domino and other Java-incorporating IBM products, and IBM released a statement that basically amounted to "you don't have to worry about it". IBM has maintained their own variant of the JVM (called J9 for Smalltalk-related reasons, not to be confused with Java 9) and anyway has always had arrangements with Sun/Oracle such that IBM's customers don't have to worry about dealing with Oracle directly.

But what about using Java outside of a licensed product, such as if you just want to run Tomcat on some server? The short answer there is that you're still fine, but you just have to know a little about the difference between "a JDK" and "Oracle's JDK". Java and the surrounding JDK have been progressively open-sourced in fits and spurts over the years, and are now at a point where the project called OpenJDK is basically the real Java environment, and then Oracle's JDK is just one implementation of it. It's similar to Linux: the core parts are open-source, and many distributions are entirely free, but there also exist commercial variants for pay.

So, if you want to run a Java stack, you can do so without putting forth a single cent by using an OpenJDK build. Oracle, IBM, and others will still be happy to take your money if you want a commercially-supported Java environment, of course.

For a longer explanation, this blog post from @javachampions is pretty much the definitive word.

 

Java With Domino After XPages

Mar 14, 2019 3:10 PM

IBM and HCL held a webcast today to detail some plans for Notes/Domino V11. There were some interesting tidbits elaborating on things like the pub/sub support, and it'll be worth tracking down a recording of the event when it's available.

What's important for this series, though, is that this event served as the long-promised "roadmap" announcement for XPages. The roadmap is, in effect, option three: HCL plans to look into ways to reuse some existing XPages code, but in general you should be aiming to write your UIs in something else, either consuming REST services from an XPages container or accessing Domino data via another route (like the domino-db Node.js module and hypothetical Java gRPC client).

So we know the end of the path: not XPages. However, it's not like we're all just going to throw away our existing apps, so there's work to do determining how we're going to get there. The options remain pretty much what they were after CollabSphere last year, albeit now with the doubt removed. The first two options - returning to LotusScript or going to Node - have their advantages and disadvantages, and you could make a reasonable case for either. Personally, I'm not interested in going down those roads, though, and I think it's better for any app of reasonable complexity to dive into Java. Other members of the community and I have developed tools over the years to make it easier, and now's the time to take some of these steps if you haven't already.

Do Not Use Server JavaScript

Server JavaScript was always something of a trap for app architecture. There's nothing inherently wrong with having a scripting language on your UI pages, and it certainly helped bridge some gaps, but the way it and Designer intertwined encouraged developers to create non-portable messes. If you're still writing SSJS, stop immediately.

Learn Proper Java

Java has been around for a long time, and the way to write "good" Java code has changed over time and varies greatly by your environment. Some aspects, though, apply generally, and it's useful to stay up-to-date on current practices. I don't know a better resource for this than Effective Java, which has been updated for Java 7-9 since I last read it.

Speaking of which, you should learn about Java 8 streams and lambdas - they're great. Julian Robichaux did a presentation on this topic back at Connect 2017, and the slide deck is very elucidating.

Adopt Standard Java Technologies

Last year, I created a project to bring some modern JEE technologies to XPages. These are some of the same technologies I've been talking about in my "XPages to Java EE" series and, while that project can't bring the full JEE development experience to XPages, using those tools will help you write code that, in some cases, could be directly dropped into a Java EE app with no modifications at all. There's a big asterisk when it comes to actually accessing Domino data, but that's a solvable problem as well (with some more development).

In particular, you should start writing JAX-RS services. Not only is JAX-RS an excellent and very-capable spec, but REST services are portable to absolutely any front end.
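
As a taste of how little ceremony that involves, here's a minimal sketch of a JAX-RS resource; the Person class and PersonRepository bean are hypothetical stand-ins:

import java.util.List;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/people")
public class PeopleResource {
	@Inject
	PersonRepository people; // hypothetical CDI-managed data-access bean

	@GET
	@Produces(MediaType.APPLICATION_JSON)
	public List<Person> list() {
		return people.findAll();
	}
}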

Adopt Automated Builds

Maven has been something of a bugaboo for XPages developers for a while, but doesn't have to be. Node development (server- or client-side) revolves around npm and various build plugins, and Maven is much the same thing. One of the biggest improvements I've made lately to all of my active XPages apps is to wrap the on-disk project for them inside a Maven artifact, using the NSF ODP Tooling. That project allows you to automatically build your NSFs alongside other parts of the project (such as OSGi plugins) without having Designer involved.
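
As a rough sketch of what that wrapper looks like - check the project's documentation for the current plugin coordinates and packaging type, which I'm reciting from memory here:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0">
	<modelVersion>4.0.0</modelVersion>
	<groupId>com.example</groupId>
	<artifactId>my-app-nsf</artifactId>
	<version>1.0.0-SNAPSHOT</version>
	<!-- The "domino-nsf" packaging comes from the NSF ODP Tooling -->
	<packaging>domino-nsf</packaging>

	<build>
		<plugins>
			<plugin>
				<groupId>org.openntf.maven</groupId>
				<artifactId>nsfodp-maven-plugin</artifactId>
				<version>1.0.0</version>
				<extensions>true</extensions>
			</plugin>
		</plugins>
	</build>
</project>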

Check the example project in that repo, and stay tuned for a 2.0 release (probably) imminently.

Learn Other Toolkits

If you're just starting the process of figuring out what to do after XPages, it doesn't particularly matter which other toolkit you learn, as long as it's reasonably modern. If you take some time to learn how to make, say, a React app but end up going with something else down the line, the lessons you learn will apply very closely. A particularly-comfortable option could be to learn JSF, which has a common ancestry with XPages but has up-to-date capabilities.

Whatever it is, though, just learn some other toolkit.

Follow Channels and Accounts for Other Tech

Over the last couple months, I've started following a lot of Jakarta-related blogs and Twitter luminaries. This applies elsewhere - even if you're not using other toolkits yet, it's very helpful to start immersing yourself in the news and culture.

Don't Stay Still

The primary thing to take to heart is the importance of doing something. Unless you're planning to change careers or retire in the short term, you'll have to make one decision or another. XPages is not going to get meaningfully better, and even existing apps will get worse with time as browsers and technology change.

Other environments, though, are already leagues ahead and are constantly improving. Dive in; the water's fine!

Java Hiccups

Nov 7, 2018 2:00 PM

Tags: java
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment
  19. Java Hiccups
  20. Bitwise Operators
  21. Java Grab Bag 2

To take a break from the doom-and-gloom of my last post, I figured it'd be good to dust off a post idea I've had in my drafts for a while: common hiccups that Java developers - particularly those coming from a Domino background - run into. This is sort of a grab bag of non-obvious concepts that are easy to assume incorrectly about, whether because of the way other languages work or the behavior of the lotus.domino API specifically.

So, roughly in order of complexity:

import Is Just For Cleanliness

In many languages, in particular C/C++/Objective-C, the natural equivalent of Java's import statement has a massive effect, physically grafting files into your source. In Java, though, import is really just for developer convenience. At runtime, there's no difference between having written this:

import java.util.ArrayList;
import java.util.List;

/* snip */
List<String> foo = new ArrayList<>();

...or this:

java.util.List<java.lang.String> foo = new java.util.ArrayList<>();

If you import a class but never use it in a file, it won't have any effect on the runtime behavior of the class. It's just used by the compiler to clarify what you mean when you refer to a class by its base name alone.

Incidentally, as seen here, classes within the java.lang package (but not subpackages) are auto-imported, so it's as if each Java file has an invisible import java.lang.* at the top.

Compilation Doesn't Bake In Libraries

This is related, and is also an area where Java differs from some other environments. With C et al, you have the option to statically link referenced external libraries - which is to say, grab their contents at build time and put them into your compiled result such that they may as well be part of your program. Java doesn't do this: every time you reference a class or method, it's really just storing the equivalent of the string name of that class, which is then resolved at runtime.

This is why it's very easy to run into a ClassNotFoundException: you can compile code with some classes present on the class path, but then run it in a system where they're not present. The Java runtime doesn't pre-check whether all the required classes are available when it starts running, so you only find out when it hits that line of code.

Different Java-based environments deal with this in different ways. Standard (non-Domino) web apps deal with this by including dependency jars inside the WEB-INF/lib folder during the packaging phase of building. OSGi, XPages's framework of choice, has a whole dependency mechanism where you can specify bundles or packages with version ranges, in the hope of bringing some order to the chaos, with mixed success.

Primitives Are A Thing

Though the term "primitive" generally means any built-in data type, here I'm specifically talking about things like int and double. For historical and performance reasons, Java has a conceptual and practical distinction between the objects that you deal with most of the time and the primitive types used mostly for number storage. Namely: byte, char, short, int, long, float, double, and boolean. Unlike object references, these have no concept of a "null" value that would cause a NullPointerException. If the code compiles, a variable of one of these types always contains some value, even if it's just the 0/false default for an uninitialized object property.

Each of these types has a corresponding "boxed" object version, generally with the un-abbreviated name capitalized, such as Byte and Integer.

The distinction used to be harsher than it is today, thanks to autoboxing. Autoboxing is a compile-time behavior that will automatically convert between the primitive types and their object holders as necessary, allowing this type of code, which would otherwise be illegal:

Object i = 3;
int j = new Integer(4);

Autoboxing is mildly inefficient, so it's good to know that it exists, but you don't normally need to lose sleep over it.

All Object Variables Are Pointers

In a language like C++, there is a distinction between a variable that "is" an object vs. one that is a pointer to an object somewhere in memory. In Java, however, the former doesn't exist: an object variable is only ever a pointer. This has a couple implications. For one, this code deals with only one object:

SomeClass foo = new SomeClass();
SomeClass bar = foo;
bar.setName("hi");
foo.setName("hello");
bar.getName(); // Will be "hello"

This is also why Java is picky about not referencing object variables until they've been initialized to at least something, so this generates a compile-time error:

Object foo;
foo.getClass();

Unfortunately, unlike some languages, Java has no language-level support for enforcing the distinction between a null object reference and a non-null one, which is why NullPointerExceptions are so prevalent.

Another implication of this leads into its own hiccup common to LotusScript programmers:

Strictly Speaking, All Method Arguments Are "By Value", But...

All method parameters in Java are "by value" in the LotusScript sense, but the fact that all object variables are pointers means that the "value" you're passing to the method for an object parameter is always a reference. Java has no mechanism to pass a reference to a primitive type, nor does it have a mechanism to implicitly duplicate an object when passing it to a method.

Not only is this a bit conceptually confusing at first, but it's also a potential trap for bad programming practices. It's very easy to write a method that performs modifications on objects passed in as parameters, and this is often the right thing to do. However, since the language doesn't have any syntax mechanism for broadcasting this behavior, it's up to you as the programmer to either write the method name in such a way that it's obvious what's going to happen or clearly state it in the documentation if it's something that's going to be used outside the current file.
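
A quick demonstration of the difference:

static void addGreeting(List<String> messages) {
	// Mutates the object the caller's reference points to - the caller sees this
	messages.add("hello");
}

static void reassign(List<String> messages) {
	// Only repoints the local copy of the reference - the caller never sees this
	messages = new ArrayList<>();
}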

Casting Objects Doesn't Do Anything

By "casting", I'm referring to something like this:

RichTextItem body = (RichTextItem)doc.getFirstItem("SomeItem");

The (RichTextItem) is a cast, and it's yet another area that diverges from some other languages. What casting an object in Java means is that you're going to refer to an object by a different class or interface than the one it's been previously referred to as. It has some runtime implications, but the thing to keep in mind is that it's about choosing the name for something that exists as opposed to changing an object into a different class.

So, for example, the RichTextItem idiom above exists because the Document#getFirstItem method is declared as returning an Item but, when the item in the document is rich text, it will actually return a RichTextItem object. RichTextItem is a subclass of Item, and so it's legal to refer to such an object as RichTextItem, Item, Base (the common interface for all Notes objects), or Object (the common superclass of all objects). In a situation like this, you have to cast the object because you're going to refer to it as a more-specific type of object than the one the method says it returns.

If you do this and the object is not actually of the type you're trying to cast it to (in this case, if it's a plain text item or MIME, most commonly), you'll end up with a ClassCastException, because the cast is enforced at runtime. But, succeed or fail, the cast will not actually affect the object itself in any way - it will continue on being whatever it was already, regardless of name.

Casting Primitives Does Do Something

For better or for worse, performing a cast on a primitive type does have the possibility of creating a new value. For example:

int foo = Integer.MAX_VALUE;
short bar = (short)foo;

Because int can hold more data than a short, this case creates a new value based on chopping off the highest-value bits of the internal binary representation of foo. (As a side note, because of the fun way computers deal with numbers, foo is 2147483647, while bar is -1.)

Normally, this behavior doesn't matter too much, since, if you have a method that takes, say, an int and you have a long, you can safely cast it down since it'll likely be a tiny value anyway. It's important to know that it can happen, though, and this behavior becomes very important when dealing with, for example, native C libraries that use unsigned values, which do not exist as such in Java.
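
The usual trick for those unsigned values is to mask back up to a wider type:

byte fromNative = (byte)0xFF;     // a C-side "unsigned char" 255 arrives as -1
int unsigned = fromNative & 0xFF; // masking recovers the intended 255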

Java Has Only A Limited Concept of Immutability

"Immutability" refers to the inability to change the value of an entity once it's created. It's come to the fore as a concept recently because working with immutable objects sidesteps a lot of issues with asynchronous programming. Java, unfortunately, doesn't really have any language-level support for immutable objects in the sense that, for example, Swift does.

Java throws a bit of a curve ball in this area with the final keyword, which means that a variable can't be reassigned after being first initialized. This means that you can't do things like this:

final int foo = 3;
foo = 4; // compiler error

final SomeClass bar = new SomeClass();
bar = new SomeClass(); // compiler error

This, on the other hand, is entirely legal:

final SomeClass foo = new SomeClass();
foo.setName("hi");
foo.setName("hello");

This is because the only thing blocked from changing here is the value of foo-the-reference, but the object it's referencing can be changed at will.

An object can be made effectively immutable, though, by means of making its outward-facing methods not change any of the internal state. This is used commonly for "value" classes, such as the aforementioned Integer. Though the language doesn't do anything to guarantee that the Integer class doesn't allow mutation, the class is written in such a way that it has no inlet for it.

Because of the value of immutable objects, they're used commonly in the core Java classes and in third-party libraries, particularly newer ones. However, since the language can't tell you if an object is immutable, you have to be on the lookout for whether a given method modifies the existing object in-place or returns a new object reflecting the change. This comes up frequently with Strings, which are immutable in Java. This is something I've seen commonly:

String foo = " hello ";
foo.trim();
System.out.println(foo);

That code will print " hello ", with the leading and trailing spaces (though without the quotes). This is because the String#trim method, like all "changing" methods on String, leaves the original value intact but returns a new String object reflecting the expected value.
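
The fix is simply to capture the returned value:

String foo = " hello ";
foo = foo.trim(); // reassign the variable to the new, trimmed String
System.out.println(foo); // prints "hello"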

This is just something you have to be on the lookout for, especially since this pattern isn't even consistently applied within the core Java classes. The Date class, for example, is infamously bad in a lot of ways, and one of those ways is that it has mutation methods.

Generics In Java Are Weird

A "generic" refers in this case to a class that is declared as being associated with one or more other types that can be defined after the fact. The prototypical example of this is a collection class, like List<String> foo. In this case, the List interface is generic and lets you specify the type of object you expect to find within it, in this case String.

Generics, unfortunately, were added after Java's initial release, and they bear the marks of it. Unlike languages like C++, Java generics are largely syntactic sugar, meant to replace things like:

String someString = (String)aListIKnowHasStrings.get(0);

...with this:

String someString = aListDeclaredWithStrings.get(0);

However, under the covers, a List only ever really knows it contains Objects, and the second form just transparently shims in a (String) cast at runtime. That's why you can do something like this:

((List<Object>)(List<?>)aListDeclaredWithStrings).add(new NotAStringObject());

That line will not only compile, but it will execute without issue at runtime. It's only later, when you try to extract the value to a String variable, that you'll hit a ClassCastException.

Some generic information is retained at runtime, depending on how it's used, but for the most part it's best to think of it as just a syntax nicety. This behavior is endless trouble, but something we have to live with.

Garbage Collection Is Automatic, But Resource Management Isn't Necessarily

This is one of the main things that bites Domino developers as they learn about Java. One of the early things they learn when switching from LotusScript to Java is that now you have to worry about the .recycle() method on your objects, or else you'll have trouble. This leads to two misapprehensions: that Java in general requires "recycling" for every object, and that recycling with Domino objects is about memory in the same way that a Java OutOfMemoryError is.

Unfortunately, the reason that recycle() exists at all requires delving into some nitty-gritty aspects of the Java environment, but I first want to reinforce that Java uses automatic garbage collection at all times to watch for and delete objects that are no longer used. That "no longer used" bit glosses over some details, but take this as an example:

public String foo() {
  String a = "hello";
  String b = " there";
  return a + b;
}
public void bar() {
  String message = foo();
  System.out.println(message);
}

There are three objects in action here, but, by the time the code reaches the System.out.println line, a and b are no longer used and will be slated for automatic garbage collection. You as a programmer do not need to worry about them.

The lotus.domino objects, though, are trouble. I think it's best to not think of recycle() in terms of "memory" but instead think of the objects as "open resources", in the same way that you might open a network connection or a stream to a file on the filesystem. Unlike object memory, network resources are not necessarily automatically closed by Java - there are some affordances with syntax and the concept of "finalizers", but, in general, the responsibility for closing a resource lies with the programmer.

There are a few grab-bag notes to do with these objects:

  • Different lotus.domino objects refer to different kinds of backing resources, which is why problems will sometimes manifest as complaints about memory (when they refer primarily to C-side structures in Domino's native memory, separate from Java) and sometimes as complaints about handles (database and document references, generally)
  • Recycling a "parent" object recycles all of its children, but the relationship is not always clear. Importantly, DateTime objects are children of the ancestor Session, even when you retrieve them from a Document, and so they can linger for a long time
  • Agents and XPages both mitigate and conceal the need for recycling by automatically closing the auto-generated Session(s) at the end of the agent execution or page request. In practice, you only really need to worry about recycling if you're, for example, looping over a large view, as in the sketch below
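
Assuming an existing View object named view (and with NotesException handling elided), a recycle-as-you-go loop looks something like this:

ViewEntryCollection entries = view.getAllEntries();
ViewEntry entry = entries.getFirstEntry();
while(entry != null) {
	// ...do work with the entry...
	ViewEntry next = entries.getNextEntry(entry);
	entry.recycle(); // release the backing handle before moving on
	entry = next;
}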

I may make another one of these posts in the future, and hopefully this goes a little way to clearing up some common misconceptions.

AbstractCompiledPage, Missing Plugins, and MANIFEST.MF in FP10 and V10

Oct 19, 2018 11:48 AM

Since 9.0.1 FP 10, and including V10 because it's largely identical for this purpose, I've encountered and seen others encountering a couple of strange problems with compiling XPages projects. This is a perfect opportunity for me to spin a tale about the undergirding frameworks, but I'll start out with the immediate symptoms and their fixes.

The Symptoms

There are three broad categories of problems I've seen:

  • "AbstractCompiledPage cannot be resolved to a type"
  • Missing third-party XPages libraries, such as ODA, resulting in messages like "The import org.openntf cannot be resolved"
  • Complaints about MANIFEST.MF, like "MANIFEST.MF has no main section" and others

The first two are usually directly related and share a fix, though the second can also have other causes, while the last is entirely distinct.

Fix #1: The Target Platform

The first two problems stem from trouble in the active Target Platform, namely one or both of the standard platform components going missing. The upshot is that you want your Target Platform preferences to look something like this:

Working Target Platform

There should be a selected platform (the name doesn't matter, but "Running Platform" is the default name) with entries at least for ${eclipse_home} and for a directory inside your Notes data dir, here C:\Notes\Data\workspace\applications\eclipse. If they're missing, modify an existing platform or create a new one and add an "Installation"-type entry for ${eclipse_home} and a "Directory"-type one for the eclipse directory within your data dir.

Fix #2: Broken Plugins, Particularly ODA

Though V10 didn't change much when it comes to XPages, there are a few small differences. One in particular bit ODA: we had a dependency on the com.ibm.domino.commons plugin, which was in the standard Notes environment previously but is not as of V10 (though it's still present on the server). We fixed that one in the V10 release, and so you should update your ODA version if you hit this trouble. I don't think I've seen other plugins with this issue in the V10 transition, but it's a possibility if Fix #1 doesn't do it.

Fix #3: MANIFEST.MF

This one barely qualifies as a "fix", but it worked for me: if you see Designer complaining about MANIFEST.MF, you can usually beat it into submission by cleaning/rebuilding the project in question. The trouble is that Designer is, for some reason, skipping a step of the XPages compilation process, and cleaning usually kicks it into gear.

I've also seen others have success by deleting the error entry in the Errors view (which is actually a thing that you can do) and never seeing it again. I suspect that the real fix here is the same as above: during the next build, Designer creates the file properly and it goes away on its own.

The Causes

So what are the sources of these problems? The root reason is that Designer is a sprawling mountain of code, built on ancient frameworks and maintained by a diminished development team, but the immediate causes have to do with OSGi.

The first type of trouble - the target platform - most likely has to do with a change in the way Eclipse manages target platforms (look at the same prefs screen in 9.0.1 stock and you'll see it's quite different), and I suspect that there's a bug in the code that migrates between the two formats, possibly due to the dramatic age difference in the underlying Eclipse versions.

The second type of trouble - the MANIFEST.MF - is due to a behind-the-scenes switch in how Designer (and maybe the server) handles dependencies in XPages projects.

Target Platforms

The mechanism that OSGi projects - such as XPages applications - use for determining their dependencies at build time is the notion of a "Target Platform". The "target" refers to the notion that this is the platform that is expected to be available at runtime for what you're building - loosely equivalent to a basic Java classpath. An OSGi project is checked against this Target Platform to determine which classes are available based on their bundle names and versions.

This is distinct from the related concept of a "Running Platform". Designer, being based on Eclipse, is itself built on and runs using OSGi. Internally, it uses the same mechanisms that an XPages application does to determine what plugins it knows about and what services those plugins provide.

This distinction has historically been hidden from XPages developers due to the way the default Target Platform is set up, pointing at the same Running Platform it's using. So Designer itself has the core XPages plugins running, and it also exposes them to XPages applications as the Target. Similarly, the way we install XPages Libraries like ODA is to install them outright into the Designer Running Platform. This allows Designer to know about the library service provided, which it uses to populate the list of available plugins in the Xsp Properties editors.

However, as our trouble demonstrates, they're not inherently the same thing. In standalone OSGi development in Eclipse, it's often useful to have a Target Platform distinct from the Running Platform - such as the XPages environment for plugins - to ensure that you only depend on plugins that will be available at runtime. But when the two diverge in Designer, you end up with situations like this, where Designer-the-application knows about the XPages runtime and plugins and constructs an XPages project and translates XSP to Java using them, but then the compilation process with its empty Target Platform has no idea how to actually compile the generated code.

MANIFEST.MF

I've mentioned that an OSGi project "determines its dependencies" out of the Target Platform, but didn't mention the way it does that. The specific mechanism has changed over time (which is the source of our trouble), but the idea is that, in addition to the Java classes and resources, an OSGi bundle (or plugin) has a file that declares the names of the plugins it needs, including potentially a version range. So, for example, a plugin might say "I need org.apache.httpcomponents.httpclient at least version 4.5, but not 5.0 or higher". The compiler uses the Target Platform to find a matching plugin to compile the code, and the runtime environment (Domino in our case) does the same with its Running Platform when loading.

(Side note: you can also specify Java packages to include from any plugin instead of specific plugin names, but Designer does not do that, so it's not important for this purpose.)

(Other side note: this distinction comes, I believe, from Eclipse's switch from its own mechanism to OSGi in its 3.0 release, but I use "OSGi" to cover the general concept here.)

The old way to do this was in a file called "plugin.xml". If you look inside any XPages application in Package Explorer, you'll see this file and the contents will look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.0"?>
<plugin class="plugin.Activator"
  id="Galatea2dVCC_2fIKSG_dev5csyncagent_nsf" name="Domino Designer"
  provider="TODO" version="1.0.0">
  <requires>
    <!--AUTOGEN-START-BUILDER: Automatically generated by null. Do not modify.-->
    <import plugin="org.eclipse.ui"/>
    <import plugin="org.eclipse.core.runtime"/>
    <import optional="true" plugin="com.ibm.commons"/>
    <import optional="true" plugin="com.ibm.commons.xml"/>
    <import optional="true" plugin="com.ibm.commons.vfs"/>
    <import optional="true" plugin="com.ibm.jscript"/>
    <import optional="true" plugin="com.ibm.designer.runtime.directory"/>
    <import optional="true" plugin="com.ibm.designer.runtime"/>
    <import optional="true" plugin="com.ibm.xsp.core"/>
    <import optional="true" plugin="com.ibm.xsp.extsn"/>
    <import optional="true" plugin="com.ibm.xsp.designer"/>
    <import optional="true" plugin="com.ibm.xsp.domino"/>
    <import optional="true" plugin="com.ibm.notes.java.api"/>
    <import optional="true" plugin="com.ibm.xsp.rcp"/>
    <import optional="true" plugin="org.openntf.domino.xsp"/>
    <!--AUTOGEN-END-BUILDER: End of automatically generated section-->
  </requires>
</plugin>

You can see it here declaring a name for the pseudo-plugin that "is" the XPages application (oddly, "Domino Designer"), a couple other metadata bits, and, most importantly, the list of required plugins. This is the list that Designer historically (and maybe still; it's not clear) uses to populate the "Plug-in Dependencies" section in the Package Explorer view. It trawls through the Target Platform, finds a matching version of each of the named plugins (the latest version, since these have no specified ranges), adds it to the list, and recursively does the same for any re-exported dependencies of those plugins. "Re-exported" isn't exposed here as a concept, but it is a distinction in normal OSGi plugins.

Designer derives its starting points here from implicit required libraries in XPages applications (all those "org.eclipse" and "com.ibm" ones above) as well as through the special mechanism of XspLibrary extension contributions from plugins installed in the Running Platform. This is why a plugin like ODA has to be installed in Designer itself: in the runtime, it asks its plugins if they have any XspLibrary classes and uses those to determine the third-party plugin to load. Here, ODA declares that its library needs org.openntf.domino.xsp, so Designer adds that and its re-exported dependencies to the Plug-in Dependencies group.

With its switch to OSGi in the 3.x series circa 2004, most of the functionality of plugin.xml moved to a file called "META-INF/MANIFEST.MF". This starkly-named file is a standard part of Java, and OSGi extends it to include bundle/plugin metadata and dependency declarations. As of 9.0.1 FP10, Designer also generates one of these (or is supposed to) when assembling the XPages project. For the same project, it looks like this:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Domino Designer
Bundle-SymbolicName: Galatea2dVCC_2fIKSG_dev5csyncagent_nsf;singleton:=true
Bundle-Version: 1.0.0
Bundle-Vendor: TODO
Require-Bundle: org.eclipse.ui,
  org.eclipse.core.runtime,
  com.ibm.commons,
  com.ibm.commons.xml,
  com.ibm.commons.vfs,
  com.ibm.jscript,
  com.ibm.designer.runtime.directory,
  com.ibm.designer.runtime,
  com.ibm.xsp.core,
  com.ibm.xsp.extsn,
  com.ibm.xsp.designer,
  com.ibm.xsp.domino,
  com.ibm.notes.java.api,
  com.ibm.xsp.rcp,
  org.openntf.domino.xsp
Eclipse-LazyStart: false

You can see much of the same information (though oddly not the Activator class) here, switched to the new format. This matches what you'll work with in normal OSGi plugins. For Eclipse/Equinox-targeted plugins, like XPages libraries, plugin.xml still exists, but it's reduced to just declaring extension points and contributions, and no longer includes dependency or name information.

Eclipse had moved to full OSGi by the time of Designer's pre-FP10 basis (2008's 3.4 Ganymede), but XPages's history goes back further, so I guess that the old-style Eclipse plugin.xml route is a relic of that. For a good while, Eclipse worked with the older-style plugins without batting an eye. FP10 brought a move to 2016's Eclipse 4.6 Neon, though, and I'm guessing that Eclipse dropped the backwards compatibility somewhere in the intervening eight years, so the XPages build process had to be adapted to generate both the older plugin.xml files for backwards compatibility as well as the newer MANIFEST.MF files.

I can't tell what the cause is, but, sometimes, Designer fails to populate the contents of this file. It might have something to do with the order of the builders in the internal Eclipse project file or some inner exception that manifests as an incomplete build. Regardless, doing a project clean usually jogs Designer into doing its job.

Conclusion

The mix of layering a virtual Eclipse project over an NSF, the intricacies of OSGi, and IBM's general desire to insulate XPages developers from the black magic behind the scenes leads to any number of opportunities for bugs like these to crop up. Honestly, it's impressive that the whole thing holds together as well as it does. Even though it doesn't seem like it to look at the user-visible changes, the framework changes in FP10 were massive, and it's not at all surprising that things like this would crop up. It's just a little unfortunate that the fixes are in no way obvious unless you've been stewing in this stuff for years.

App Dev After CollabSphere 2018

Jul 29, 2018 10:48 AM

In recent years, MWLUG/CollabSphere has tended to be a good time to get a lay of the land for what IBM - and now HCL - intends for their app dev strategy. Recent Connects weren’t too heavy on announcements of major import for Domino developers, and any news to come out tends to do so in the months leading up to summer.

This year, we’ve had time to digest the implications of the HCL transfer, get a feel for how they intend to handle the product, and generally get a good bead on their app-dev vision. What they’ve said so far this year is clear: LotusScript for old apps on mobile platforms and Node.js for new development (or new developers). As far as XPages, I believe that the most time that it got at the conference was in my session, which was about what to do after XPages.

LotusScript

Though I’ve certainly not hidden how painful the prospect of enhancements to LotusScript is to me, I have to admit that adding a few capabilities for REST data service access makes strategic sense for the platform. Though XPages made a significant mark on Domino app dev, it never pushed aside the classic style, and every move that IBM made for app modernization since then seemed to exist exclusively in the span of the sentence announcing it.

So HCL announced early this year that they planned to port the classic Notes client first to iOS and then later to Android and WebGL+WebAssembly. Adding any kind of Java to this plan - XPages, LS2J, etc. - would present some technical hurdles, and so it makes workload sense to focus on the languages that have runtimes in the C core.

Apps run this way won’t be good, but there’s some logic to the tack of targeting customers for whom “modernization” only really means “we want our same old apps to run offline on new OSes”. Their plan to run on phones also necessitates some more-dramatic changes to the tooling, so it’s possible that they have larger changes in mind - or at least we’ll see a return of the “hide on mobile” checkboxes in Designer.

Node.js

The big HCL push for Node.js seems to me to be a way to get a lot of bang for the buck: by positioning it as the new way to write apps, they’re both (potentially) making Domino more appealing to those not already on the platform and guiding existing developers to a platform for which IBM and HCL are not responsible. Though the domino-db driver is no small technical feat - and it looks like they’ve done a good job making it both fast and native-feeling in Node - it’s a much, much smaller footprint than XPages, which put IBM on the hook for maintaining an entire app-dev stack and UI toolkit with limited outside assistance.

I do think that it’s smart to write a Node.js DB driver - even if it doesn’t bring in an influx of new blood, it provides a legitimate app-dev story and Node is a top-notch platform. The gRPC stack also provides an entryway for future hooks and development without the assumptions of NRPC.

Java

Java development on Domino is in a weird place. Domino 10 doesn’t have anything directly for XPages/OSGi developers, though we’ll get access to DGQF via the Database class. I’ve heard whispers that they’re starting to plan more for Domino 11, but that’s largely conjecture at this point. Certainly, HCL has made it clear that their heart isn’t in it, and honestly I get why. Since XPages has been in essentially maintenance mode since 9.0.1 or earlier, it’s aged itself out of contention for modern app dev. It wouldn’t be impossible to drag it forward to something respectable, but then they’d still have another development environment exclusive to Domino to maintain.

I’m not sure what the best thing to do with the stack is. Though XPages didn’t bring all Domino developers to it, it did bring a significant chunk, and a lot of people have spent upwards of a decade of their life with the toolkit. For my part, I think it makes a lot of sense to move to “normal” Java/Jakarta EE development, which provides the possibility of salvaging Java-side code, though it leaves XSP and SSJS in the lurch. It’s hard to make a good financial case for either significantly upgrading the platform or at least undoing the tight coupling with the Domino server that it accrued over the years, though I’ll admit it’s sort of fun to think about.

Reforming the Blog in Darwino, Part 4

Jul 20, 2018 6:59 PM

Tags: java darwino
  1. Reforming the Blog in Darwino, Part 1
  2. Cramming Rails Into A Maven Tree
  3. Reforming the Blog in Darwino, Part 2
  4. Reforming the Blog in Darwino, Part 3
  5. Reforming the Blog in Darwino, Part 4

Last time, I went over my switch in tack for how I'm making the new version of my blog, and my overall focus on picking an interesting stack of JEE technologies. In this post, I'm going to start diving into the implementation of the UI, though I think that it will make sense to dedicate two posts to it.

The biggest decision I made with the UI side of this app is that I didn't want to make a client-side JS app. There's a reason they're so ascendant, and I find development with React or Stencil pretty enjoyable, but I wanted to go a different route here for a few reasons:

  • For a blog, a CSJS app is wildly overkill, and, in fact, would require extra work to fulfill one of the basic requirements of a blog, which is being web-crawler friendly.
  • I want to see how svelte I can make the client payload.
  • Skipping a JS framework (and a CSS one) is a great way to brush up on what plain HTML and CSS are capable of nowadays.
  • Unlike a typical Darwino app, my only target is a full-on Java web server, so I'm not held back on the Java side by the capabilities, say, of Dalvik on Android 4.
  • Part of me misses the simplicity of my early PHP days, albeit not the language.

The Java Side

I decided to go with the MVC 1.0 draft spec because it lets me write extremely focused code. Here is the controller for the home page:

package controller;

import javax.inject.Inject;
import javax.mvc.Models;
import javax.mvc.annotation.Controller;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import model.PostRepository;

@Path("/")
@Controller
public class HomeController {
	@Inject
	Models models;
	
	@Inject
	PostRepository posts;
	
	@GET
	public String get() {
		models.put("posts", posts.homeList());
		
		return "home.jsp";
	}
}

Naturally, there's a lot of magic going on behind the scenes - there's tons of heavy lifting going on here by JAX-RS, MVC, CDI, JNoSQL, and Darwino - but that's the point. All the other components are off doing their jobs in their areas, while the code that provides the UI doesn't have to care about the specifics.

Things can get more complicated on the pages that actually have some functionality to them, but the code remains pleasantly focused. Take the handler for deleting posts:

@DELETE
@Path("{postId}")
@RolesAllowed("admin")
public String delete(@PathParam("postId") String postId) {
	Post post = posts.findByPostId(postId).orElseThrow(() -> new IllegalArgumentException("Unable to find post matching ID " + postId));
	posts.deleteById(post.getId());
	return "redirect:posts";
}

This adds another level of magic in the form of javax.annotation.security.RolesAllowed, but it's more of the good kind: even with no knowledge of the underlying frameworks, it's pretty clear what every bit of code is doing here. It's a refreshing bit of that Rails simplicity, just type-safe at compile time and much uglier.

Even beyond the minimal code is the cleanliness that this brings to the structure of the application: other than the img, css, and js paths, all of the routing within the application is done care of JAX-RS and MVC. It's not beholden to the folder structure in the project or to a Domino-style implicit app router.

JSP

JSP has been the prototypical Java HTML language for about 20 years, and it's had a rough upbringing. The early versions committed the PHP/XPages sin of encouraging you to put business logic right on the page, and it even still has the typical Java problem that it's tricky to find advice about using it that uses technologies added since 2005.

Still, when used properly, it can be a nice, clean templating language. Again from the main home page:

<%@page contentType="text/html" pageEncoding="UTF-8"%>
<%@taglib prefix="t" tagdir="/WEB-INF/tags" %>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<t:layout>
	<c:forEach items="${posts}" var="post">
		<t:post value="${post}"/>
	</c:forEach>
</t:layout>

For an XPages developer, this is extremely comfortable. It's also very refreshingly elemental: there's no server-side persistence of the page, so everything is "load-time bound" and, with just HTML tags and core JSTL tags, nothing ends up on the page that you don't explicitly put there.

Ozark, the MVC implementation, also supports using JSF "Facelets" for the view portion, but JSP suits the task just fine.

HTML + CSS

It'd been far too long since I last really sat down and looked at what baseline HTML and CSS are like - in particular, I'd watched the rise of CSS Flexbox and Grid from afar, and never gave them a shot. Using components that generate their own HTML and pre-existing CSS frameworks to target with class names is all well and good, but it does leave you a bit disconnected from the fundamentals.

So, for this iteration, I tossed aside the very-nice Bootstrap framework I've been using, dusted off one of my old hand-built ones, and got to translating it into CSS Grid. This cut down on the page size enormously: I had already eschewed Dojo by not using XPages, but this now also meant that I could ditch the core bootstrap.css, jQuery, and any jQuery plugins.

Beyond CSS Grid, have you seen how nice HTML forms are nowadays? Just looking at this post reveals how much is built in by way of validation and different input types, even before you write a line of JavaScript.

Turbolinks

Having such a trimmed-down UI means that pages already load extremely quickly, but I figured this was also a perfect chance to try out a bit of clever tech from the team at Basecamp: Turbolinks. Turbolinks is a JS file that you bring into your app which then transparently takes over your in-app links to minimize the amount of rendering you have to do. Since the surrounding boilerplate of the app usually doesn't change between requests, it can figure out the "diff" between old and new and just replace the body. It's essentially like partial refreshes without the server knowing anything about it.

It's still technically inefficient to have the server render and transfer surrounding page elements that are just going to be discarded anyway. But, on the other hand, skipping that means that I don't have to write JavaScript handlers myself, use a full CSJS app framework, or keep state on the server side. The server just keeps doing what it does with a fully context-less request and the browser sorts it out. Basecamp's programmers are masters at the targeted deployment of kludges for maximum benefit.


In the next (final?) post in the series, I'll finish up with my "low-JS" experience and other lessons learned from this project.

Reforming the Blog in Darwino, Part 3

Jul 18, 2018 10:30 AM

Tags: java
  1. Reforming the Blog in Darwino, Part 1
  2. Cramming Rails Into A Maven Tree
  3. Reforming the Blog in Darwino, Part 2
  4. Reforming the Blog in Darwino, Part 3
  5. Reforming the Blog in Darwino, Part 4

A good while back, I created a project structure for reforming my blog here in Darwino, but, as happens with low-priority side projects, it withered on the vine, untouched since then. Beyond just the "cobbler's children" aspect to it, I also lost steam due to a couple technology paths I initially headed down.

The first was basing the UI on Angular, which I've never really enjoyed working with. I'm sure I could have ended up with a decent result with it, but Angular always rubbed me the wrong way. And not just Angular: for a dead-simple UI like this, a full JS UI is just weird overkill.

The second was off in the other direction: I initially tried cramming a Rails app into the tree, which could be made to work, but it introduced so many weird edge cases outside of the problem at hand. That alone isn't the end of the world, but not much of what I'd have to solve to make that work would be transferable elsewhere any time soon, so it'd end up a real time sink.

So, taking what I've learned since and the projects that I've been working on, I've decided to take another swing at it. Before I get into the implementation side, it will be useful to go over the technologies I did choose for the new form.

Java/Jakarta EE

I've recently become kind of enamored with the modern form of the Jakarta EE stack, and so I decided to use this as an opportunity to really dive in to what a blue-ocean small by-the-books Java app looks like nowadays.

JEE got a well-deserved bad rap over the years for its configuration complexity and general impenetrability, but I've been very pleased to find that those tides have largely receded. It's all still there if you want it, but a fresh new app primarily consists of decorating a handful of Java classes with declarative annotations.

JEE consists of a series of individual specs, and building an app involves choosing which ones you want to use, plus (depending on which you choose) picking your app server target.

Tomcat

I originally gave a shot to adding enough OSGi metadata and bundles to target Domino, but decided quickly that it was just not worth it. The HTTP/servlet stack in Domino is just so old that, even if I got everything bound together, I'd still be fighting the platform every step of the way.

The better route was to put it aside and just run a modern Java app server. I went down the list of GlassFish, Payara, WebSphere Liberty (the nearest miss), TomEE, and WildFly, but each one ended up having some problem with either the dependencies I wanted or with their Eclipse integration. I ended up settling on good ol' reliable Tomcat. Tomcat itself isn't actually a JEE server, but it's kind of like a Raspberry Pi: it gives you the baseline for a Java servlet engine, and then you can cobble together your own EE stack on top of it by explicitly bringing in implementations. Though the final .war file is far less svelte this way, I found that this build-your-own method results in the lowest chance of being held back by the platform currently.

As an aside, Sven Hasselbach has been writing a very interesting series on running Jetty on top of the Domino JVM to achieve a similar end, albeit with Spring.

Darwino

For all the same reasons as when I set out on this journey originally, I'm using Darwino for the baseline. This lets me replicate in my existing blog data smoothly while getting the advantages of a superior backing database. I'm not making use of mobile clients or most Darwino services with this, but the baseline is nonetheless a step up, and fits in with a JEE app like a glove.

JNoSQL

I brought in the JNoSQL Darwino driver I wrote a little while ago to handle the model layer. JNoSQL is essentially JPA but reformed for NoSQL access - no cruft, no relational/NoSQL impedance mismatch, and designed to fit with current JEE technologies.
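
To give a rough feel for what the model layer looks like, here's a sketch of a JNoSQL entity. The annotation names are from the 0.0.x-era Artemis API as I recall it, and the class and fields are hypothetical stand-ins, so treat the specifics as assumptions:

import org.jnosql.artemis.Column;
import org.jnosql.artemis.Entity;
import org.jnosql.artemis.Id;

@Entity("Post")
public class Post {
	// The backing document's ID
	@Id
	private String id;

	// Hypothetical mapped fields; anything not annotated is ignored
	@Column
	private String title;

	@Column
	private String bodyMarkdown;

	// getters and setters omitted
}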

CDI

CDI is one such technology, and it's a very interesting one to work with. The whole "dependency injection" realm is a little fraught and, if my Eclipse UI error reporter is any indication, prone to bizarre errors, but the core concept is good and very useful. I've gotten it into the swing of using it both as the "managed bean" provider for the front end as well as the general service provider glue for the app. It still takes some getting used to, and the learning curve falls prey to a similar problem as when I was learning Maven: something about learning how it works makes you forget what it was like to not know, and so a lot of the answers online assume way more knowledge than a neophyte has.
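
As a concrete (if tiny) sketch of both of those roles - the class names here are hypothetical, but the annotations are the standard CDI ones:

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

// A hypothetical service bean: discovered by CDI with no XML registration needed
@ApplicationScoped
public class MarkdownRenderer {
	public String render(String markdown) {
		return markdown; // ... real conversion elided ...
	}
}

// ...which any other CDI-managed class can then receive without glue code:
public class PostView {
	@Inject
	MarkdownRenderer renderer;
}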

Bean Validation

I've long been a fan of the Java bean validation API, and it's a clean fit here too: JNoSQL picks up on the presence of Hibernate Validator without configuration beyond the dependency and it just works. No muss, no fuss.
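
For illustration, constraints on a hypothetical model class look like this - with Hibernate Validator on the classpath, JNoSQL checks them when saving:

import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

public class Comment { // hypothetical model class
	@NotNull
	@Size(min = 1, max = 256)
	private String authorName;

	@NotNull
	private String body;

	// getters and setters omitted
}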

JAX-RS + MVC Spec

JAX-RS is at this point familiar territory for a lot of Domino developers, but I decided to use it as the underpinnings of the whole UI, in tandem with a draft framework called MVC 1.0. The latter's generic name doesn't really give much detail, but it's essentially a spec that enhances JAX-RS entities with knowledge of HTML templating frameworks, allowing you to write a very clear app structure. It's not a server-state-based framework like JSF, but rather a bit "closer to the metal", where you deal directly with the HTTP method cycle.

As I'll go more into in the "UI" post, it's been surprisingly refreshing to get back to basics in this way - JSF/XPages is often a bit conceptually easier to work with (at first) and client-side JS frameworks have some REST+JSON purity to them, but just "this server-rendered HTML page with no server state is everything you need" feels really good sometimes.

Admittedly, the MVC spec itself is in a weird place. It was originally a candidate for inclusion in Java EE 8, but was dropped in the final runup. It's possible that this will prove to be a kiss of death, but the spec is so small but functional that I don't feel bad about taking the risk of building an app on it.


That about covers the technology stack. When I get around to writing the next post, I'll go into some of the specifics about how I decided to set up the UI, which has been a fun experiment of its own. In the meantime, the active repository is up at:

https://github.com/jesse-gallagher/frostillic.us-Blog/tree/develop/frostillicus-blog

A (Java-Centric) Domino Wish List

Jul 12, 2018 12:04 PM

Tags: domino java

Seeing the information come out of this week's HCL "Golden Ticket" event has got me thinking about some of my wish-list items for Domino development, mostly in the form of enhancements for existing capabilities and entirely around Java (since that's what I do).

Quality of Life

Javadoc

For some reason, the lotus.domino classes ship without Javadoc or even variable-name information, leading to this trainwreck:

Designer has its built-in help, which is also on the web, but that's quite a few steps down. This is table stakes for a Java API and always has been.

Updated p2 Repository

Back in 2014, the XPages team uploaded a clean p2 repository of the XPages artifacts to OpenNTF, corresponding with the 9.0.1 release. This repository saves a ton of hassle when building Tycho-based projects or just setting up an Eclipse workspace. However, it's quite long in the tooth, as there have been several Notes.jar additions not included in there, and, in FP10, a significant upgrade to the undergirding OSGi framework.

I ended up writing a script to generate an updated version, but I don't have the legal ability to publish the results anywhere for easy consumption, meaning it has to be done manually and configured for each build environment. It would be a great convenience if there were an official package (ideally including the Designer plugins as well) and, even better, hosted on OpenNTF so that we could reference it by URL as we do for Eclipse releases (and require users to accept a license first).

Mavenized Repository

The p2 repository is good for Tycho-based projects, but, especially when targeting Domino is only one part of a project, it can be much more convenient to use "normal" Maven projects with maven-bundle-plugin. However, those projects can't use p2 repositories as such. For Darwino's needs, I ended up writing a tool in the (available-for-free) Darwino Studio plugins to Maven-ize a Domino p2 repository, but that hits the same snag as above of requiring manual setup in each instance.

This is another case where my preference would be on the OpenNTF Maven repository (plus Javadoc Jars, naturally).

Extension Library Source

The latest Extension Library release on OpenNTF is from FP9, while the latest on GitHub is from the FP7 era. FP10 shipped with a newer version of indeterminate nature. It'd be good to have this on both of those sites and, like with Javadoc, have source bundles shipped with the product in a way that is picked up automatically by Designer and Eclipse.

Source Bundles for Third-Party Components

The source for the undergirding Equinox stack is available, but it would be best to have, as an adjunct to the updated p2 repositories, the source bundles for the actual versions used so that we don't have to cobble together a platform from Eclipse's repositories.

Open-Source the Rest of the Stack

Having XPages, the Expeditor husk, and the other miscellaneous doodads that make up the proprietary layer as open source with an Apache-compatible license would cover a lot of the above and also be of tremendous use for XPages and non-XPages apps alike that run on or with Domino. I have a hard time imagining that it would lead to a lot of community-driven improvement, but it may do some (I'd have a few words to share with the file-download control, for example), and even just as a static release would be a significant boon.

Domino Connectivity in Eclipse

An idea I've been toying with lately is to make an Eclipse plugin that allows you to add Domino servers to the "Servers" view and control them to some extent. The basics would be to start/stop/restart HTTP, but the stretch goals would be to open a console view, get a list of running modules, integrate with the existing "load bundles from PDE" support, and, ideally, an outright "Run on Server" command for OSGi bundles and NSFs. However, I have so much on my plate that I'm not sure that I'll get to this any time soon unless I get a real itch some weekend.

Longevity

Refreshed JVMs

Feature Pack 8 brought Java 8, a vital step forward. However, since then, Oracle moved to a faster release cycle for Java and the JRE is now at version 10. Domino uses IBM's JVM variant, J9, which they recently moved to the Eclipse foundation as OpenJ9, where it has... sort of been keeping up, I think?

In any event, this increased pace of change has meant that the Java 8 honeymoon is over, and Domino development again requires special consideration when using current tools. I have no idea how complex the integration between Domino's tasks and the underlying JVM is, but my ideal would be to have constant or near-constant parity.

Servlet API 4.x

After the JRE version, the most important foundational element of a Java web app is the servlet API release. The current version is 4.0.1, while Domino supports 2.5 (or 2.4, maybe?). The good news is that the Java/Jakarta EE world seems to be used to lagging versions here, and 2.5 is a minimum version for a lot of current tech in much the same way that Java 6 was until somewhat recently, but there has been quite a bit added in recent years.
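
To pick one example of those recent additions: Servlet 4 exposes HTTP/2 server push via PushBuilder. A minimal sketch, which naturally requires a 4.0-level container, nothing like Domino's:

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.PushBuilder;

@WebServlet("/home")
public class HomeServlet extends HttpServlet {
	@Override
	protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
		// Proactively push the stylesheet when the connection supports it
		PushBuilder push = req.newPushBuilder();
		if (push != null) { // null when push is unavailable
			push.path("css/style.css").push();
		}
		resp.setContentType("text/html");
		resp.getWriter().println("<html><body>Hello</body></html>");
	}
}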

Presumably, a reason for the lag is the implied requirements of newer versions, such as WebSockets and HTTP/2 support, that would require heavy modifications to the core Domino HTTP code. Honestly, the more practical route is almost definitely to just use a different JEE server paired with CrossWorlds, some Java wrapper for the GRPC stuff HCL has been talking about, or (best of all) a Darwino app replicating with Domino, but still. WebSphere Liberty is actually really nice, by the way.

Refreshed Equinox

Like with the underlying JVM, the Feature Pack 10 update to a Neon-based OSGi/Equinox framework was a critical shot in the arm for the platform, but it is now also two major versions behind. This is a little less critical, since Equinox brings a bit less to the table for our needs and Neon is "new enough" for now, particularly on the server side, but it'd still be proper to keep pace.

Odds and Ends

Non-OSGi JEE Support

The Equinox framework that Domino uses is quite capable, but there's no pretending that OSGi-targeted development has its share of headaches. Most Java apps just target plain-old .war files and don't impose any particular requirements on the build process. Java development is a much more pleasant experience when you can just toss in any Maven dependency and not have to think about building a target platform for Eclipse or jumping through bundle-resolution hoops. I really like OSGi in theory, but I can't pretend that non-OSGi development isn't much smoother.

Domino technically supports "regular" servlets currently, but, uh, here's a snippet from the current documentation on that:

Hrm.

Full Extension Manager Support for Java

JAVADDIN/DOTS added a lot of EM hooks, but it doesn't cover the full suite of capabilities that a C addin can provide, such as authentication handling. Having this be fully accessible from Java would be useful even when treating Domino just as a data store and not as an app server.


I'm sure I could come up with more, but that's probably good for now. All easy, right?

Another Project: XPages Jakarta EE Support

Jun 3, 2018 4:40 PM

In my dealings with JNoSQL recently, I’ve been delving more into the world of modern Jakarta EE/Java EE/J2EE development, particularly the magic land of CDI.

The JEE stack tends to be organized as a collection of specs and implementations, many of which are really independent of each other and the underlying platform, making them pretty portable onto any reasonably-recent JVM. Now that Domino is actually on a reasonably-recent JVM, that makes it a workable target! So I decided to create a side project to bring some of JEE to XPages.

XPages has always been “sort of Java EE” - you don’t really have the full stack, and it’s far behind on the components that it does have, but a lot of the concepts are there. Of particular interest are managed beans and expression language.

CDI and Managed Beans

The XPages stack contains what amounts to a primordial version of CDI. Since the release of XPages, JSF improved on the original faces-config.xml declaration method to add annotation-based declarations, and then CDI is something of a codification and expansion of that into the full Java world.

My project uses the Weld reference implementation of CDI to create a CDI context for each XPages app that opts in, allowing it to use annotations on classes to declare beans and properties:

@ApplicationScoped
@Named // or @Named("applicationGuy")
public class ApplicationGuy {
    public String getFoo() {
        return "hello";
    }
}

These can then be used like normal managed beans in an XPage:

<xp:text value="#{applicationGuy.foo}"/>

The project’s README contains some further examples.

I went with the Java SE implementation of Weld instead of the pre-built servlet or OSGi packages since those are a little too smart for this use: they pick up on the fact that they’re in a JSF environment, but expect newer versions of the servlet spec and JSF.

Expression Language

Since its original release, EL went through a similar standardization process as CDI and is now at version 3.0, distinct from both JSP and JSF. As anyone who has tried to call a method on a bean in EL has found out, the XPages EL implementation lags pretty far behind, at the JSF 1.0/1.1 level. Since that time, it has sprouted parameters and "projection" and is essentially a tiny scripting language now.
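
To show how far it's come, EL 3.0 can even be driven standalone from Java via its ELProcessor class (given an EL 3.0 implementation on the classpath) - note the lambda and stream operations happening inside the expression itself:

import javax.el.ELProcessor;

public class ElDemo {
	public static void main(String[] args) {
		ELProcessor el = new ELProcessor();
		// Collection construction, a lambda, and stream operations, all in EL
		System.out.println(el.eval("[1, 2, 3, 4].stream().filter(x -> x > 2).sum()")); // 7
	}
}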

My project uses GlassFish’s EL implementation to outright replace the stock EL interpreter for apps making use of it. I added some affordances to IBM’s customized data support, so it’s intended as a drop-in replacement:

<xp:text value="${dataObjectExample.calculateFoo('some arg')}"/>

<xp:text value="#{el:requestGuy.hello()}"/> 

Note the “el:” prefix in the runtime-bound expression: that’s to get around Designer’s validation of runtime EL expressions.

So… Why?

That’s a good question! The first two reasons are “because it’s fun” and “to learn more about JEE”, but there’s also practical value for this sort of thing.

XPages is moribund, and that leaves Domino developers with a few options:

  • Go back to LotusScript. The iPad Notes client makes this a terrifyingly-practical option, but it’s soul death.
  • Go to JavaScript (or another platform). This is another route HCL is pushing, and it’s entirely valid: Node is a great platform with excellent support and momentum.
  • Go to modern Java.

For anyone who has invested a lot of time and brainpower in XPages over the years, that last one is particularly appealing, and projects like this can help you get there. If you have a large XPages code base, as I do with one of my clients, it makes a lot more sense to work on that in such a way that it gradually becomes less XPage-dependent while avoiding the trap of a full rewrite in another language.

Many of us have already done something of this sort: JAX-RS is another JEE standard, and the Wink implementation in the Extension Library, though also aging, accomplishes this same sort of task. Especially if your services don’t reference Wink explicitly and write just to the spec, they are very portable.

That portability - of code and skillset - is critical. Say you have a class like this:

import java.util.stream.Collectors;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/issues")
public class IssuesResource {
    @Inject IssueRepository issueRepository;

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response get(@QueryParam("category") String category) {
        return Response.ok(
                issueRepository.find(category).stream()
                    .map(this::doSomething)
                    .skip(3)
                    .collect(Collectors.toList())
            ).build();
    }

    // ...
}

Which Java platform is that targeting? What’s the data storage mechanism? Who cares? This class certainly doesn’t. That could just as easily be Domino reading from an NSF or (as is actually the case in the example’s source) Tomcat with Darwino.

What’s Next?

Truthfully, maybe not much. Though JEE contains a whole raft of technologies, these two were the ones that scratch my immediate itch. We’ll see, though - the skill portability of erstwhile XPages developers is critically important, and I think that this is another one of the paths that can get us where we need to go.

Compiling and Testing XPages Plugins With Java 9+

Apr 13, 2018 3:38 PM

Tags: java xpages

Thanks to 9.0.1 FP8, we've been able to use Java 8 on Domino for a while, and FP10 makes that support a bit more official at the OSGi level. However, Java 8 is no longer the latest Java runtime, and so anyone writing XPages plugins and compiling/testing them via Maven will likely run into a situation where the compiling JRE is 9 or above. There are a couple changes in these runtimes that add some wrinkles to the process, so I took up the task of working around the problems I hit, and I created a Git repository to contain the code and tips I used to do so:

https://github.com/OpenNTF/org.openntf.domino.java9compat

The README in the repo contains a couple bits of Maven configuration that can be used to successfully compile XPages plugins in Java 9 or 10, and the included project contains a patch fragment for the com.ibm.notes.java.api plugin that serves up the Notes.jar to work around the fact that CORBA has been removed from the standard JRE.

For my needs, those changes made me able to compile a reasonably-complex XPages application, but there may be other edge cases I haven't hit yet. As I do, I'll add information and code there.

Next Project: ODP Compiler

Mar 5, 2018 5:32 PM

Tags: xpages java
  1. Next Project: ODP Compiler
  2. NSF ODP Tooling 1.0
  3. NSF ODP Tooling Example Project
  4. NSF ODP Tooling 1.2
  5. How the ODP Compiler Works, Part 1
  6. How the ODP Compiler Works, Part 2
  7. How the ODP Compiler Works, Part 3
  8. How the ODP Compiler Works, Part 4
  9. How the ODP Compiler Works, Part 5
  10. How the ODP Compiler Works, Part 6
  11. How the ODP Compiler Works, Part 7

One of the larger thorns in my side with my Domino development lately has been trying to automate builds of on-disk projects into NSFs via Jenkins. In theory, the process is pretty straightforward. It even works sometimes! However, particularly once you add in the necessity to deploy OSGi plugins to Designer first and want to run it from Jenkins, things get extraordinarily flaky: Designer may not launch properly from a behind-the-scenes Jenkins runner, the plugin installation may mysteriously fail, and so forth - and the error reporting is difficult at best.

So it's been on my mind for a good while to find a way to get from an ODP to an NSF without involving Designer, and I decided over the last couple days to really take a swing at it. It's not a small task, though; the process involves a number of difficult steps:

  • Install and activate provided OSGi plugins
  • Create an XPages registry that knows about the XPages libraries installed on the server, including those just contributed
  • Translate XPages and Custom Controls into Java source, with intra-file knowledge of the just-added CCs
  • Create a Java classpath that matches the plug-in dependencies that Designer derives from the dependent XPages Libraries
  • Compile the resultant Java source and any Java classes in the NSF into bytecode
  • Recompose the file data for these elements and many other file resources into Domino's composite-data format and merge it into their DXL ".metadata" files for import
  • Create an NSF and import all of this
  • Compile any LotusScript in source-based libraries from the ODP
  • Uninstall any hot-loaded OSGi plugins

Those steps even leave out some fiddly details, like components defined via .xsp-config files in the NSF or XSP-associated .properties files, not to mention any steps I haven't encountered yet. It's a lot of work!

My first hope was to be able to hook into the process that Designer uses, perhaps grabbing a couple pertinent OSGi plugins and going from there. However, from what I can tell, all the involved plugins are intricately tied into many layers of Designer-the-IDE and so are no small matter to use on their own without also including the entire stack. So that left me to cobble together an equivalent process out of parts.

Fortunately, a couple projects have already provided a solid foundation for this. First and foremost is the XPages Bazaar. This is a project that Philippe Riand created a number of years ago, meant to be a workshop for really experimental components in a form less constrained than the ExtLib became. Since he left IBM, it's sat unmaintained, but I figured it'd be a perfect incubator for this project, so I tossed it up on GitHub, recomposed its Maven structure, and cleaned it up a bit for FP10 use. The reason why it makes such a perfect shell is a pair of its features: an XSP interpreter and an on-the-fly Java compiler. The former hooks into the mysterious guts of the XSP runtime to allow for translation of XSP to Java and the latter wraps the official Java compiler API with some OSGi knowledge to compile that along into bytecode.
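
That compiler wrapping, incidentally, is built on the standard javax.tools API that ships with any JDK. Stripped of the Bazaar's OSGi awareness, the core of it looks something like this sketch:

import java.net.URI;
import java.util.Arrays;

import javax.tools.JavaCompiler;
import javax.tools.JavaFileObject;
import javax.tools.SimpleJavaFileObject;
import javax.tools.ToolProvider;

public class CompilerDemo {
	public static void main(String[] args) {
		// Null on a bare JRE; requires a full JDK
		JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();

		// An in-memory "file" containing the source to compile
		JavaFileObject source = new SimpleJavaFileObject(URI.create("string:///Hello.java"), JavaFileObject.Kind.SOURCE) {
			@Override
			public CharSequence getCharContent(boolean ignoreEncodingErrors) {
				return "public class Hello { }";
			}
		};

		// With no file manager or options specified, Hello.class lands in the working directory
		boolean success = compiler.getTask(null, null, null, null, null, Arrays.asList(source)).call();
		System.out.println(success); // true
	}
}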

Even starting with this, the first couple steps still required a lot of digging around. I learned how to install and activate OSGi bundles, how XPages Registries work internally, and made some tweaks to work around problems I encountered. I also encountered the joy of a bizarre javac bug to do with annotations in enum constructors, which my target project used.

Once I had the XPages-side components compiled, the next step was to start composing the NSF. The ODP format for XPages elements and other "file resource"-type entities is to put the code in its "normal" form in a file and then a subset of the DXL in a ".metadata" file next to it. The trouble here is that, even for entities where the file data is stored in the note unprocessed, the storage format isn't a strict binary blob of the file data: it's a composite data stream of file header and segment structures. I thought of two main ways I could go about getting these file resources into the NSF: via IBM's NAPI and by building the structures into the DXL files before import. The NAPI has a convenient FileAccess class for this purpose (presumably used by Designer), but my attempts to use it met primarily with server crashes. I'm sure it's possible to go this route, but I'd already "solved" the DXL problem years ago, for ODA's Design API. So, at least for now, I took the tack of writing out the binary structure manually, pouring it into the DXL as Base64, and importing that. It's a little inefficient, but it works.

Overall, I've made a lot of progress so far, but there's still a lot to be done: not all file types have their data put into the right places, LotusScript isn't properly compiled, Agents don't do anything at all yet, and XPages+CCs aren't actually imported into the NSF. Still, it's in a spot where I'm confident that it can one day work, which is more than I could have said a week ago. If you'd like, browse around the code and pitch in if it's an itch you'd like to scratch as well.

New Small Project: generate-domino-update-site

Jan 31, 2018 9:28 PM

Tags: java xpages

For a good while now, the Domino Update Site for Build Management has been an essential tool for anyone setting up a local OSGi/Tycho development environment. However, it's really withered on the vine - the latest official release matches stock Domino 9.0.1, while recent fix packs have brought a number of improvements and new classes/methods. Since the binary code is entirely IBM's to distribute or not at their leisure, we can't update the release itself. Today's release of FP10, though, pushed me over the edge to the point where I at least wrote a tool to help the local creation of updated repositories.

The result of my work is generate-domino-update-site, a small CLI and programmatic script that you can point to a Domino installation, a destination directory, and an Eclipse program root to have it create (effectively) an updated version of the update site. The result contains a bit more than the official one did, but that should hopefully not hurt anything. Beyond just copying files into a directory, the script does the dirty work of re-packing unpacked bundles and features, vivifying the Notes.jar wrapper bundles, and creating p2 metadata (hence the Eclipse dependency).

To use the tool, you can clone the repository and run the jar as described in the readme. I may also get around to uploading it as a compiled result to a proper OpenNTF project page.

Lessons From Writing a JNoSQL Driver

Dec 30, 2017 11:12 AM

The other day, I decided to start up a side project to write an app for my Stars Without Number game in Darwino. Like back when I wrote a forum/raiding app for my WoW guild, I like to use this kind of opportunity to try new technologies and flesh out my skills in existing ones.

One such tech I've had my eye on for a bit is JNoSQL, which is a framework for integrating with NoSQL databases in Java. It's along the lines of Hibernate OGM, but intended to avoid the pitfalls of the relational/NoSQL impedance mismatch that came with trying to adapt JPA directly to NoSQL databases. JNoSQL promised to be much easier to implement for a new database, so I decided to give it a shot.

JNoSQL

JNoSQL is split into two paired components, cleverly named Diana (the driver side) and Artemis (the model/integration side). The task of writing a driver for a new database is pretty well-contained: pick the database type(s) you want to implement (out of key/value, column, document, and graph) and implement about half a dozen interfaces. This is in stark contrast to when I took a swing at writing a Hibernate OGM driver, where the task was significantly more daunting. The final result is only ten Java files, with a chunk of them being utility classes for code organization.

It's a young project - young enough that the best version to run right now is 0.0.4-SNAPSHOT - but it functions well and it's been taken under the wing of the Eclipse foundation, which builds some confidence.

Implementation

Though the task was small, there were still a couple initial hurdles to getting going.

To begin with, I decided to start with the Couchbase driver - this certainly made the overall task easier, since Couchbase's semantics are very similar to Darwino's, but it also meant that I had to be wary of which parts of the codebase were really about implementing a Diana driver and which were Couchbase-isms. Fortunately, this was much easier than the equivalent work when I adapted the CouchDB Hibernate OGM driver, which was a sprawling codebase by comparison.

More significantly, though, it's always tough coming in to modify a codebase written by a single person or small team and learning as you go. The structure of the code is clean, but not quite my normal style (in part because Domino kept me from diving into Java 8 streams for so long), and I also had to ramp up quickly on the internal concepts of Diana. Fortunately, this was mostly easy, since the document-DB driver scaffolding is purpose-built, the hooks are straightforward and the query semantics were extremely easy to adapt. The largest impediment was getting used to the use of the term "Document", which internally refers to a key/value pair, while "DocumentEntity" is closer to the expected meaning.
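
To illustrate that naming split (package and method names per the 0.0.x-era Diana API, as best I recall, so consider them assumptions):

import org.jnosql.diana.api.document.Document;
import org.jnosql.diana.api.document.DocumentEntity;

// A "Document" is just one key/value pair...
Document title = Document.of("title", "Hello World");

// ...while a "DocumentEntity" is the whole named collection of them -
// much closer to what a Domino developer would call a document
DocumentEntity entity = DocumentEntity.of("Post");
entity.add(title);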

Like the core implementation, the test suite I adapted from Couchbase was also pleasantly svelte, covering the bases without being an onerous nightmare to convert. Indeed, most of the code I added to it was the Darwino app scaffolding just for the test runtime.

Putting It Into Practice

Once the driver was written, I was hit by a bit of a personal curveball when I went to implement some actual data models. The model side, Artemis, is heavily wrapped together with CDI, which is a Java EE thing that, as I gather, handles managed beans, separation of implementation, and variable injection. This is a pretty normal thing for Java EE developers, but XPages's "don't call it Java EE" environment didn't introduce me to this aspect. As such, the fact that the documentation just kind of casually tossed around CDI terms and annotations threw me for a bit of a loop trying to determine what was required and what was just an idiom.

I eventually determined that I could use the reference implementation, Weld, without necessarily going whole-hog on Java-EE-everything. I'm a bit wary of what this bodes for whether I'll be able to use JNoSQL in Darwino on mobile devices, but I'll cross that bridge when I come to it. Once I got a bit of a handle on what Weld is and how to use it in unit tests (hint: make sure you have beans.xml files!), I was able to start writing my model objects and testing them.
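
For reference, the bootstrap in a unit test ends up as something like this minimal sketch - this assumes the Weld 3-era SE API, and CharacterRepository is a hypothetical model class:

import org.jboss.weld.environment.se.Weld;
import org.jboss.weld.environment.se.WeldContainer;

public class ModelTest {
	public void testCharacters() {
		// Bean discovery requires a META-INF/beans.xml on the classpath
		try (WeldContainer container = new Weld().initialize()) {
			CharacterRepository characters = container.select(CharacterRepository.class).get();
			// ... exercise the model objects ...
		}
	}
}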

Doing It Again

The fact that the bulk of my implementation work ended up being on the app side with CDI goes to show that the Diana driver model really shines. It got me thinking about how difficult it would be in the future, say, to write a driver for Domino. There'd be some hurdles - Domino's lack of nested objects and antiquated querying mechanisms would need replacing - but the core task wouldn't be too bad. I don't know if I'd have a need for it, but it's nice to keep in mind as a potential future small project.

All in all, I'm optimistic about the use of this. I'd love for Darwino to integrate as smoothly as possible into whatever standard environments it can, and this is one more step in that direction. I'll know as my side app takes shape how much this ingrains itself into my actual work.

First Steps to Code Coverage Analysis in Domino Plugins

Nov 9, 2017 8:53 AM

Tags: maven domino java

I'm always interested in getting the computer to tell me how to tell it what to do more successfully, and, to further that pursuit, I've started taking an interest in code coverage.

If you're not familiar with the term, "code coverage" refers to reporting on which lines of code were actually executed during runtime, most commonly in association with unit tests. Eclipse (and presumably other IDEs) has support for this, and I've decided to give it a shot.

Since I'm starting this out in the context of Domino plugins, there are more wrinkles than in most tutorials. Namely, the test suites I've written run exclusively through Maven instead of the Eclipse UI due to all the Notes environment setup, so I can't just use the normal UI tools to gather the data. Fortunately, Eclipse's EclEmma will work just fine with the output from a Maven project, as long as you configure it properly. I looked around for a while to find the right combination of tools to use, but it ended up being fairly simple to configure basic output that can be consumed in Eclipse's Coverage view.

There are two main additions. First, add the jacoco-maven-plugin to your root project's project.build.plugins block:

<plugin>
	<groupId>org.jacoco</groupId>
	<artifactId>jacoco-maven-plugin</artifactId>
	<version>0.7.8</version>
	<executions>
		<execution>
			<goals>
				<goal>prepare-agent</goal>
			</goals>
		</execution>
	</executions>
</plugin>

In normal cases, that would suffice. However, since the test configuration I have for Notes overrides the argLine property of the Tycho test runner, there's another step - add the tycho.testArgLine property manually into those blocks, such as in the Windows profile:

<profile>
	<activation>
		<os>
			<family>Windows</family>
		</os>
		<property>
			<name>notes-program</name>
		</property>
	</activation>

	<build>
		<plugins>
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>tycho-surefire-plugin</artifactId>
				<version>${tycho-version}</version>
 
				<configuration>
					<skip>false</skip>
 
					<argLine>${tycho.testArgLine} -Dfile.encoding=UTF-8 -Djava.library.path="${notes-program}"</argLine>
					<environmentVariables>
						<PATH>${notes-program}${path.separator}${env.PATH}</PATH>
					</environmentVariables>
				</configuration>
			</plugin>
		</plugins>
	</build>
</profile>

Once that's configured, running the test suite via Maven will create a new file in the target folder of the test plugin: jacoco.exec. This file can then be consumed in Eclipse by opening the "Coverage" view:

Eclipse's Show View window

In that view, right click and choose "Import Session..." and point to the data file. Click "Next" and check the projects+source folders from your workspace you're interested in analyzing. When you click "Finish", it'll do two things. First, it'll fill the Coverage view with statistics from your run:

Code Coverage stats

(We have a lot of work to do fleshing out our test suites for this one)

Secondly, it'll start highlighting your code to show you what code is executed, which branches are only partially covered, and which lines are skipped entirely. For example (ignore the sickly color scheme - I need to work on that):

Code Coverage example

This shows how several of the if branches are only tested in one direction, while the "Faces" block is skipped entirely. That also shows some of the trouble with testing XPages-run code: the Tycho environment can't reproduce the XPages environment fully, so some branches aren't testable in that way. I haven't looked into the possibility of gathering similar data from JUnit for XPages, so perhaps that's possible.

For now, though, this will have to do. And, like with these other "code improvement" techniques I've integrated lately, there's a lot of potential tedium - juggling when to write a test to cover some code that will obviously always work just to improve the highlighting vs. just focusing on the low-hanging fruit - but I expect that it will be a nice addition to my workflow over time.

That Java Thing, Part 17: My Current XPages Plug-in Dev Environment

Feb 26, 2017 11:23 AM

Tags: java xpages
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment
  19. Java Hiccups
  20. Bitwise Operators
  21. Java Grab Bag 2

It's been a while since I started this series on Java development, but I've been meaning for a bit now to crack it back open to discuss my current development setup for plug-ins, since it's changed a bit.

The biggest change is that, thanks to Serdar's work on the latest XPages SDK release, I now have Domino running plug-ins from my OS X Eclipse workspace. Previously, I switched between either running on the Mac and doing manual builds or slumming it in Eclipse in Windows. Having just the main Eclipse environment on the Mac is a surprising boost in developer happiness.

The other main change I've made is to rationalize my target platform configuration a bit. In the early parts of this series, I talked about adding the Update Site for Build Management to the active Target Platform and going from there. I still basically do this, but I'm a little more deliberate about it now. Instead of adding to the running platform, I now tend to create another platform just to avoid the temptation to use plug-ins that are from the surrounding modern Eclipse environment (this only really applies in my workspaces where I don't also have actual-Eclipse plug-in projects).

The fullest form of this occurs in one of my projects that has a private-only repo, which allows me to stash the artifacts I can't distribute publicly. In that case, I have a number of library dependencies beyond just the core XPages site, and I took the approach of writing a target platform definition file and storing it in the root project, with relative references to the packaged dependencies. With this route, I or another developer can just open the platform file and set it as the target platform - that will tell Eclipse about everything it needs. To do this, I right-clicked on the project, chose "New" → "Other..." and then "Target Definition" under "Plug-in Development":

Target Definition

Within that file, I used Eclipse variable references to point to the packaged dependencies. In this repo, there is a folder named "osgi-deps" next to the root Maven project, so I wanted to tell Eclipse to start at the root project, go up one level, and then delve down into there for each folder. I added "directory" type entries for each one:

Target Definition Entries

The reference syntax is ${workspace_loc:some-project-name}../osgi-deps/Whatever. workspace_loc resolves the absolute filesystem path of the named project within the workspace - since I don't know where the workspace will be, but I DO know the name of the project, this gets me a useful starting point. Each of those entries points to the root of a p2-format update site for the project. This setup will tell Eclipse everything it needs.

Unfortunately, this is a spot where Maven (or, more specifically, Tycho) adds a couple caveats: not only does Tycho not allow the use of "directory" type entries in a target platform file like this (meaning it can't be simply re-used), but it also expects repositories it points to to have p2 metadata and not just "plugins" and "features" folders or even a site.xml. So there's a bit of conversion involved. The good news is that Eclipse comes with a tool that will upgrade old-style update sites to p2 in-place; the bad news is that it's completely non-obvious. I have a script that I run to convert each new release of the Extension Library to this format, and I adapt it for each dependency I add:

java -jar
	/Applications/Eclipse/Eclipse.app/Contents/Eclipse/plugins/org.eclipse.equinox.launcher_1.3.100.v20150511-1540.jar
	-application org.eclipse.equinox.p2.publisher.UpdateSitePublisher
	-metadataRepository file:///full/path/to/osgi-deps/ExtLib
	-artifactRepository file:///full/path/to/osgi-deps/ExtLib
	-source /full/path/to/osgi-deps/ExtLib/
	-compress -publishArtifacts

Running this for each directory will create the artifacts.jar and content.jar files Tycho needs to read the directories as repositories. The next step is to add these repositories to the root project pom so they can be resolved at build time. To start with, I create a <properties> entry in the pom to contain the base path for each folder:

<osgi-deps-path>${project.baseUri}../../../osgi-deps</osgi-deps-path>

There may be a better way to do this, but the extra "../.." in there is because this property is re-resolved for each project, and so "project.baseUri" becomes relative to each plugin, not the root project. Following the sort of best practice approach to Tycho layouts, the sub-modules in this project are in "bundles", "features", "releng", and "tests" folders, so the path needs to hop up an extra layer. With that, I add <repositories> entries for each in the same root pom:

<repositories>
	<repository>
		<id>notes</id>
		<layout>p2</layout>
		<url>${osgi-deps-path}/XPages</url>
	</repository>
	<repository>
		<id>oda</id>
		<layout>p2</layout>
		<url>${osgi-deps-path}/ODA</url>
	</repository>
	<repository>
		<id>extlib</id>
		<layout>p2</layout>
		<url>${osgi-deps-path}/ExtLib</url>
	</repository>
	<repository>
		<id>junit-xsp</id>
		<layout>p2</layout>
		<url>${osgi-deps-path}/org.openntf.junit.xsp.updatesite</url>
	</repository>
	<repository>
		<id>bazaar</id>
		<layout>p2</layout>
		<url>${osgi-deps-path}/XPagesBazaar</url>
	</repository>
	<repository>
		<id>eclipse-platform</id>
		<url>http://download.eclipse.org/releases/neon/</url>
		<layout>p2</layout>
	</repository>
</repositories>

The last entry is only needed if you have extra build-time dependencies to resolve - I use it to resolve JUnit 4.x, which for Eclipse I just tossed unstructured into a "plugins" folder in the "Misc" folder, without p2 metadata.

Though parts of this are annoyingly fiddly, it falls under the category of "worth it in the end" - after some initial trial and error, my target platform is more consistent and easier to share among multiple developers and automated build servers.

Code Safety and Pedantry

Jun 3, 2016 10:23 AM

Tags: java

Lately, I've been musing a lot on the topic of code "correctness" - that is, beyond the normal case of wanting code to do what I intended, and further into the realm of sweating even extremely-miniscule details. A lot of this is due to my continued watching of the evolution of Apple's Swift language (I highly recommend following Erica Sadun's blog for this). Swift is very much in the camp of "make sure all your 'i's are dotted and 't's crossed" languages, as opposed to more fast-and-loose languages like JavaScript or Ruby.

I've gone back and forth on the overarching concepts from time to time. I've long been a big Ruby fan, and a lot of that is because of a general feeling that, if you let go of a lot of the "strict old aunt of a compiler" restrictions, you gain a tremendous amount of expressiveness and productivity with few real-world problems. On the other hand, being immersed in Java all the time has shifted my brain to appreciating the benefits of stronger compile-time checks (at least on paper). Overall, I'm more on the latter side than the former now, double-edged sword though it is. This is why I've been diving into things like aggressive null analysis in my code. Even when it seems like it's being a pedant, there are certain classes of bugs that it finds that I wouldn't even normally think of on the fly. For example, the null checker flags this as being a potential NPE:

if(this.foo != null) {
	this.foo.doSomething();
}

My first reaction upon seeing that was along the lines of "you're full of crap, Eclipse", but then I noticed the small path a bug could take to creep in: multithreading. If I'm in a situation where the object containing foo is used across threads, there's a possibility where Thread A would evaluate this.foo != null to true and start to step into the block. Then, Thread B would get its turn on the processor and assign this.foo = null in another method. Thread A would then pick up and try to call doSomething() on the newly-minted null. So I've swallowed my pride and started writing safer code like:

SomeObject localFoo = this.foo;
if(localFoo != null) {
	localFoo.doSomething();
}

What I've always admired about Swift is that it takes these sorts of lessons to heart and adapts the syntax to suit. My interest in code-correctness pedantry in Java has led me to write out verbose abominations like this:

private final @NotNull String foo;

Three of the conceptual tokens there are purely to say things that are best practices to start with: I don't want this property accessible outside the class, I want to make sure it's assigned during construction and not change thereafter, and I want to ensure it's not null. The Swift variant is:

let foo: String

Same thing, half the typing. And, as a bonus, since nullability checking is built in to the language and not a by-convention thing like the null annotations in Java, I can be sure that the rules will be applied. That sort of thing is the dream! But, since Java is the best language for the work I'm doing for now, the important thing is that it at least suits, verbose or not. In a lot of languages, it gets much more difficult to have this sort of assurance.

So, hassle as it is, I suggest that other Domino developers, on their paths through Java, consider picking up the same habits. For every time Java frustrates you by, say, refusing to convert a List<String> into a List<Object>, diving fully into null checks and immutability will save you a late-night crash report and an angry user. As you develop more in Java, give it a try.

The Cleansing Flame of Null Analysis

May 21, 2016 10:18 AM

Tags: java maven

Though most of my work lately has been on sprawling, platform-level stuff or other large existing codebases, part of it has involved a new small app. I decided to take this opportunity to dive more aggressively than previously into automated null analysis and other potential-bugs tools.

What I mean by "null analysis" is letting the IDE or compiler try to help you avoid NullPointerExceptions. Though there are plenty of other programming mistakes you could still make, these are among the most common, and so a little extra work up front to avoid them should pay dividends. Eclipse has some handy options in its Java → Compiler → Errors/Warnings preferences to assist with this:

The first option will pick up on some pretty basic instances, such as:

Object foo = null;
System.out.println(foo.hashCode());

Since this is clearly going to always cause an NPE, Eclipse is able to point this out as an error. The next level gets a little more nebulous: "potential" null pointer access. This crops up when Eclipse can't reliably determine whether a value will be null, either because there is no way to know at compile time (say, database access) or because the compiler's tooling is too limited. Here's a contrived example:

Object foo = Math.random() > 0.5 ? new Object() : null;
System.out.println(foo.hashCode());

This situation is clearly untenable, but there are other situations where you as a programmer can be very confident that the value will not be null (say, if you swap out the > 0.5 for >= 0.0), but the compiler doesn't know that. That's why it often makes sense to leave that as a warning instead of an error.

That's all stuff I've done before, but now I've decided to dive into annotation-based null analysis as well. Unfortunately, in stock Java, this is something of a hot mess (that list even leaves out Eclipse's home-grown version). Since Java didn't grow up with this sort of capability, it's been shoehorned in by various parties over the years. There are other tools to assist you in Java 8, but, unfortunately, I can only target 7 as the highest. For now, I've settled on the "sort-of standard" javax.validation.constraints package. It wasn't really intended for this specific purpose, but it's flexible enough to suit and can be used in Eclipse and FindBugs (though I have my reservations about the choice).

In Eclipse, this type of analysis can be enabled by checking "Enable annotation-based null analysis" below the other options and, unless you're using Eclipse's known annotations, adjusting the "Configure" options next to "Use default annotations for null specifications":

In any event, regardless of the choice of tooling, the "this shouldn't be null" annotations work the same way: you use them to decorate things that you either require not be null when provided to you (method parameters) or you promise to not be null when providing to others (method return values). For example:

public @NotNull Object doSomething(@NotNull Object otherObject) {
	return otherObject.toString();
}

This highlights three things, two good and one bad:

  • Good: The @NotNull in the method parameter means that, as long as the calling code is also checked for null use, the method can be confident that there won't be a NullPointerException when calling a method on otherObject.
  • Good: The @NotNull on the return value means that other code calling this method can be confident that they will not get a null value from it, and so can skip extra null checks.
  • Bad: Eclipse flags otherObject.toString() as a potential problem because it doesn't know for sure that Object#toString doesn't return null, because it has no nullability annotations. As programmers (or as a compiled-code analysis tool), we can be fairly confident that it will be non-null because any object returning null for that is essentially broken on its own.

That last one is a common problem when adopting annotation-based null analysis, at least in Eclipse (I hear it may be better in IntelliJ): its logic doesn't go very deep. If everything is gussied up with these annotations, you're clear - but as soon as you step outside of the project you're working on, you have to add in likely-unnecessary checks. Fortunately, these checks don't realistically hurt (a null check at runtime in a normal app is negligible performance-wise), but they can grate to have to add in.
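
In practice, that means adding guards like this at the library boundary (someLibraryCall standing in for any hypothetical un-annotated API) - once the check is made, the analyzer is satisfied for the rest of the flow:

String value = someLibraryCall(); // no nullability annotations here
if (value == null) {
	throw new IllegalStateException("someLibraryCall unexpectedly returned null");
}
// From here on, the analysis knows value is non-null
System.out.println(value.length());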

Glutton for punishment that I am, I decided to go a step further and enable FindBugs processing as an integral step of my build. Though FindBugs can be very picky about the types of things it complains about, it is blessedly more thorough in its analysis than Eclipse, so you generally end up conceding that it is correct when it yells at you. Since the project is Maven-based, I added the check in the project's pom file:

<plugin>
	<groupId>org.codehaus.mojo</groupId>
	<artifactId>findbugs-maven-plugin</artifactId>
	<version>3.0.3</version>
	<configuration>
		<includeTests>true</includeTests>
	</configuration>
	<executions>
		<execution>
			<phase>compile</phase>
			<goals>
				<goal>check</goal>
			</goals>
		</execution>
		<execution>
			<id>findbugs-test-compile</id>
			<phase>test-compile</phase>
			<goals>
				<goal>check</goal>
			</goals>
		</execution>
	</executions>
</plugin>

For most uses, that's all that's required. Now, when the project is compiled, FindBugs will give it a once-over and halt the build if it finds anything it doesn't like. This can be tweaked a great deal - for example, changing the checks to run or the severity of the problem needed to fail the build - but the defaults will likely suit.

Adding these extra checks involves a lot of plusses and minuses. The big minus is that you may end up spending a lot of time "fixing" bugs that don't really exist, time that you could instead spend actually writing your application (and writing new bugs that the tools won't find anyway). There's really nothing to be gained by carefully explaining to Eclipse for the hundredth time that toString always returns non-null.

Still, particularly when tested out in a small, low-surface-area app, this can be a good practice to learn and refine. Eventually, a move to Java 8 will help this more, and it certainly doesn't hurt to add in nullability annotations in the meantime. Overall, I think having the tooling help you avoid a whole suite of common "brain fart" bugs like this is worthwhile.

That Java Thing, Part 16: Maven Fallout

Feb 23, 2016 2:33 PM

Tags: java maven
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment
  19. Java Hiccups
  20. Bitwise Operators
  21. Java Grab Bag 2

So, after the last post's large task of converting to Maven, this step is mostly about picking up the pieces and expanding on some of the concepts. We'll start with M2Eclipse, usually rendered as just "m2e".

m2e

m2e is the set of plugins that acts as Eclipse's interface to Maven. It more-or-less replaces the earlier maven-eclipse-plugin, though you will likely still see references to that around. Eclipse doesn't have any inherent knowledge of how Maven works, so m2e has the complicated task of reading your projects' pom.xml files and adapting them to Eclipse's internal configuration. So, for example, in our projects it saw the presence of Tycho and determined that they should be imported as OSGi projects. In other cases, m2e may pick up the presence of things like Android plugins to trigger the use of the Android development tools.

Though it tries mightily, m2e is the source of a lot of the consternation that can come with a switch to Maven-based development. Because most Maven plugins don't have any inherent allowances for working in an Eclipse environment, adapters have to be written for each one in order for them to work with m2e - this is what yesterday's dialog about installing the Tycho adapters was about. In some cases, these don't exist and you have to tell m2e to ignore the plugin; in other cases, the adapters DO exist, but are flawed in some way. Most of the time, things go alright, but there are enough edge cases that it can be irritating.
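For the "ignore" case, the mechanism is a lifecycle-mapping block in the POM's pluginManagement section - verbose, like most things Maven. Here's a sketch with placeholder coordinates for the hypothetical plugin being ignored:

<pluginManagement>
	<plugins>
		<plugin>
			<!-- This "plugin" never runs in the real build; it exists solely
				to carry configuration that m2e reads inside Eclipse -->
			<groupId>org.eclipse.m2e</groupId>
			<artifactId>lifecycle-mapping</artifactId>
			<version>1.0.0</version>
			<configuration>
				<lifecycleMappingMetadata>
					<pluginExecutions>
						<pluginExecution>
							<pluginExecutionFilter>
								<groupId>com.example</groupId>
								<artifactId>some-troublesome-plugin</artifactId>
								<versionRange>[1.0,)</versionRange>
								<goals>
									<goal>some-goal</goal>
								</goals>
							</pluginExecutionFilter>
							<action>
								<ignore />
							</action>
						</pluginExecution>
					</pluginExecutions>
				</lifecycleMappingMetadata>
			</configuration>
		</plugin>
	</plugins>
</pluginManagement>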

For this kind of task, m2e is pretty unobtrusive, but it's important to know it's there.

Updating the .gitignore

One side effect of m2e's behavior is that it's a good idea to remove Eclipse's project configuration files from the Git repository. This is not strictly required, but it can avoid a number of annoying problems when dealing with multi-person Maven projects. To start with, open the .gitignore file from the root of your local Git repository (you can get to this easily using Eclipse's Git Repositories view, in the "Working Directory" part of the repo). Add some lines at the end to ignore .project and .classpath, so your whole file should now look like:

._*
Thumbs.db
.DS_Store

*.class

# Mobile Tools for Java (J2ME)
.mtj.tmp/

# Package Files #
#*.jar
*.war
*.ear

# virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml
hs_err_pid*

# Eclipse project files
.project
.classpath

Depending on how your (hypothetical) team wants to work, it may also make sense to ignore the .settings/ directory, which stores some additional Eclipse project information. However, some of that information may be useful to share - for example, on-save code-cleaning behavior that isn't readily expressed in Maven.
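If your (equally hypothetical) team does decide to ignore it, that's just one more line in the same file:

# Eclipse per-project settings
.settings/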

Due to the way Git works, just adding the files to the .gitignore won't remove them from the repository: instead, they'll just no longer show up in the list for new changes. In order to also remove them from the repository without deleting them from the filesystem, go to the "Navigator" pane in Eclipse (if it doesn't show up currently, you can add it via Window → Show View → Navigator), find each .project and .classpath file in the four projects (some will only have the former), right-click, and choose Team → Advanced → Untrack:

Now, commit the changes - though the files remain on the filesystem, they should show up as deleted in the commit dialog:

The target Folder

This is one we've already prepped for a bit. Whereas most Eclipse projects store their binary output (Java class files, Jars, etc.) in the bin folder or elsewhere, the standard Maven behavior is to use target. For most of the projects, this doesn't matter - we had already configured the plugin to use a subfolder here for its classes, and the temporary files for other aspects don't matter. However, it's still important to know about this; when you're looking for the compiled or packaged output of a Maven project, this is the place to look, and we'll run into this when building the update site.

Building the Update Site

There's an important change now in how the update site is built: it no longer involves opening the site.xml and clicking "Build All". Instead, it involves right-clicking the root project ("parent-xsp") and choosing Run As → Maven Install:

There are two logical followup questions when seeing this change: "what?" and "why?". They're both bound up in the nature of a Maven project and, significantly, the way Eclipse interacts with one. Maven is primarily a command-line tool - granted, it's a set of Java classes, but the canonical way to interact with it is via the command line. m2e does a lot of work to interpret the projects in the same way as the "mvn" command-line tool, but it's just a secondary interpretation due to the way Maven and Eclipse work.

The way to fully build a Maven-ized project in a way that fully uses the configuration is to run the command-line tool. Fortunately, m2e comes with its own embedded version and doesn't require you to use a terminal, but the abstraction is very leaky - and this is why you use "Run As" instead of any of the normal "Build"-related commands. The "Run As" commands construct a CLI-type environment and execute the embedded Maven, which is what then does the real work.

Since "parent-xsp" is the root of our projects, it's the starting point to execute a Maven build. When you run this, you'll see a lot of chatter in Eclipse's Console view, particularly the first run: Maven will seek out the plugins needed to build the projects and install them to the local Maven repository (stored in ".m2/repository" in your home folder). After that, it will build and package each of your projects. There's a whole phase system going on here (similar in concept to the XPages lifecycle), as well as many configuration options, but the important part here is that the "install" command (called a "goal" in Maven parlance) is the last phase that we will worry about, and it will cover everything we need here.

Upon completion, the Console text should end with something like this (incidentally, "Reactor" is Maven's term for the entire blob of modules being processed):

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] parent-xsp ......................................... SUCCESS [  0.617 s]
[INFO] com.example.xsp.plugin ............................. SUCCESS [  1.647 s]
[INFO] com.example.xsp.feature ............................ SUCCESS [  0.411 s]
[INFO] com.example.xsp.update ............................. SUCCESS [  4.143 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 30.310 s
[INFO] Finished at: 2016-02-23T13:55:20-05:00
[INFO] Final Memory: 80M/191M
[INFO] ------------------------------------------------------------------------

Part of this process is the creation of the update site, which, due to how we configured it, will be represented twice in the "target" folder in "com.example.xsp.update": as a tree of files inside the "site" folder and also zipped up into the "site_assembly.zip" file. There's also a file named "site.zip", but that contains just the site.xml, which is not important. It's these files that you should now target with Designer and the NSF Update Site when updating the plugin. In fact, it'd be a good idea to delete the "features" and "plugins" folders from outside the "target" folder now - they won't be used any more.

As for the "Build All" button in site.xml, it's best to pretend it doesn't exist. It will still work, but it will break your Maven build, because it overwrites the "qualifier" in the version numbers. This is, admittedly, a drag: it's convenient having a clear, logical button to build the site, and it's very inconvenient that Eclipse doesn't tell you not to use it any more. However, the Maven process, besides being now required, has a nice advantage: now building the update site will no longer cause Git to want to check in the change. That's something that can get annoying very quickly when working with another developer on a non-Mavenized OSGi project.

Adding Back The Source Plugin

We'll finish the day on an easy one: adding back in the source plugin. Unlike the original setup, which used a separate feature to house the source plugin, we'll now include it in the same feature. You could also continue to have a separate source feature, which would be useful for very large projects where it would actually be a big burden to deploy the source to servers, but, for XPages libraries, it's generally not worth the cognitive hassle.

Since we already configured the Tycho source plugin earlier, this is just a matter of adding a reference to the (implied) source plugin to the feature.xml:

<?xml version="1.0" encoding="UTF-8"?>
<feature
      id="com.example.xsp.feature"
      label="Example XSP Library Feature"
      version="1.0.0.qualifier">

   <description url="http://www.example.com/description">
      [Enter Feature Description here.]
   </description>

   <copyright url="http://www.example.com/copyright">
      [Enter Copyright Description here.]
   </copyright>

   <license url="http://www.example.com/license">
      [Enter License Description here.]
   </license>

   <plugin
         id="com.example.xsp.plugin"
         download-size="0"
         install-size="0"
         version="0.0.0"
         unpack="false"/>

   <plugin
         id="com.example.xsp.plugin.source"
         download-size="0"
         install-size="0"
         version="0.0.0"/>

</feature>

Now, the source will be included in the output and bundled with the main feature when it's installed. Commit this change:


At this point, we're basically back to where we were previously, OSGi-wise, but in a much better position to scale the project further and take advantage of supporting systems. The next couple posts will cover some of those potential systems, as well as remaining large conceptual topics. There's a great deal to know when it comes to Maven, but it's all helpful.

That Java Thing, Part 15: Converting the Projects

Feb 22, 2016 10:26 AM

Tags: maven java
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment
  19. Java Hiccups
  20. Bitwise Operators
  21. Java Grab Bag 2

Prelude: there was a typo in the previous entry. Originally, the file URL read "file://C:/IBM/UpdateSite", but, on Windows, there should be another slash in there: "file:///C:/IBM/UpdateSite". I've corrected the original post now, but you should make sure to fix your own settings.xml file if needed. Otherwise, Maven will complain down the line about the URI having "an authority component".

The time has come to do the dirty work of converting our existing plugin projects to Maven. There will be some filesystem-side reorganizing and not every project will make it (looking at you, source project), but overall it's mostly a job of pasting a bunch of XML into new files.

For the first leg of this, I recommend removing the projects from your Eclipse workspace by selecting them, right-clicking, and choosing Delete:

On the confirmation dialog, do not select "Delete project contents on disk" - we don't actually want to get rid of the files.

Next, find the projects on your filesystem, create a new folder alongside them named "com.example.xsp", and move the projects inside it. In Maven parlance, we're creating a "multi-module project", and this new folder is the top level in our module hierarchy. This can contain arbitrary levels and can be very helpful in project organization, but this will be a pretty simple parent-and-children case. Next, dive into the folder and delete the "com.example.xsp.source.feature" - we'll be able to generate this through Maven now, and so we can trim down our project count slightly.

Now, create a file named pom.xml in the "com.example.xsp" folder alongside the subfolders, and fill its contents with this:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<groupId>com.example</groupId>
	<artifactId>parent-xsp</artifactId>
	<version>1.0.0-SNAPSHOT</version>
	
	<packaging>pom</packaging>

	<modules>
		<module>com.example.xsp.plugin</module>
		<module>com.example.xsp.feature</module>
		<module>com.example.xsp.update</module>
	</modules>

	<properties>
		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
		<tycho-version>0.24.0</tycho-version>
		<compiler>1.6</compiler>
	</properties>

	<repositories>
		<repository>
			<id>Luna</id>
			<layout>p2</layout>
			<url>http://download.eclipse.org/releases/luna/</url>
		</repository>
		<repository>
			<id>notes</id>
			<layout>p2</layout>
			<url>${notes-platform}</url>
		</repository>
	</repositories>

	<build>
		<plugins>
			<!--
				Maven compiler options
			-->
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-compiler-plugin</artifactId>
				<version>3.1</version>
				<configuration>
					<source>${compiler}</source>
					<target>${compiler}</target>
					<compilerArgument>-err:-forbidden,discouraged,deprecation</compilerArgument>
				</configuration>
			</plugin>
			
			<!--
				Tycho plugins
			-->
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>tycho-maven-plugin</artifactId>
				<version>${tycho-version}</version>
				<extensions>true</extensions>
			</plugin>
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>tycho-packaging-plugin</artifactId>
				<version>${tycho-version}</version>
				<configuration>
					<strictVersions>false</strictVersions>
				</configuration>
			</plugin>
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>tycho-compiler-plugin</artifactId>
				<version>${tycho-version}</version>
				<configuration>
					<source>${compiler}</source>
					<target>${compiler}</target>
					<compilerArgument>-err:-forbidden,discouraged,deprecation</compilerArgument>
				</configuration>
			</plugin>
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>tycho-source-plugin</artifactId>
				<version>${tycho-version}</version>
				<executions>
					<execution>
						<id>plugin-source</id>
						<goals>
							<goal>plugin-source</goal>
						</goals>
					</execution>
				</executions>
			</plugin>
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>target-platform-configuration</artifactId>
				<version>${tycho-version}</version>
				<configuration>

					<pomDependencies>consider</pomDependencies>
					<dependency-resolution>
						<extraRequirements>
							<requirement>
								<type>eclipse-plugin</type>
								<id>com.ibm.notes.java.api.win32.linux</id>
								<versionRange>[9.0.1,9.0.2)</versionRange>
							</requirement>
						</extraRequirements>
						<optionalDependencies>ignore</optionalDependencies>
					</dependency-resolution>

					<filters>
						<!-- work around Equinox bug 348045 -->
						<filter>
							<type>p2-installable-unit</type>
							<id>org.eclipse.equinox.servletbridge.extensionbundle</id>
							<removeAll />
						</filter>
					</filters>

					<environments>
						<environment>
							<os>linux</os>
							<ws>gtk</ws>
							<arch>x86</arch>
						</environment>
						<environment>
							<os>linux</os>
							<ws>gtk</ws>
							<arch>x86_64</arch>
						</environment>
						<environment>
							<os>win32</os>
							<ws>win32</ws>
							<arch>x86</arch>
						</environment>
						<environment>
							<os>win32</os>
							<ws>win32</ws>
							<arch>x86_64</arch>
						</environment>
						<environment>
							<os>macosx</os>
							<ws>cocoa</ws>
							<arch>x86_64</arch>
						</environment>
					</environments>
					<resolver>p2</resolver>
				</configuration>
			</plugin>
		</plugins>
	</build>
</project>

So... yeah, there's a lot going on here. This is the biggest of the "XML dumps" we're going to have and contains by far the greatest number of bizarre "you just have to know about it" parts. "POM" stands for "Project Object Model" - it's the language Maven uses to describe the project. Let's tackle the file from near the top (ignoring the XML header):

project and modelVersion

These elements are effectively just boilerplate: the project element is the root of our project descriptor and it contains some definitions to let XML editors parse the file format. In turn, the modelVersion describes to Maven the specific version we're working with, which has been "4.0.0" for as long as I've been doing this.

groupId, artifactId, and version

These elements are obligatory in one form or another in every project, but are less copy-and-paste-able: they define the name and version of your project. These are Maven's equivalents to OSGi's Bundle-SymbolicName and Bundle-Version, though Maven makes an explicit distinction between the overall grouping of the plugin and its specific name. These are essentially arbitrary, but the convention is to use the standard reverse-DNS version of your domain name for the group ID, and then keep this group ID consistent across different projects (...mostly). The artifact ID is less consistent, but it's good to pick a pattern like "projectPrefix-submodule". Here, we actually reverse that a bit to call it "parent-xsp" in order to emphasize that this project's purpose is entirely to be a parent to the submodules and not an interesting artifact to consume itself. We'll break this convention again for the submodules due to our use of Tycho/OSGi.

packaging

A project's packaging describes the sort of output. By default, if this is left un-specified, it's jar - a normal, run-of-the-mill Jar file. There are a few other common ones you may run into - such as war for J2EE web apps or bundle for non-Tycho OSGi bundles - and the one we're using here is pom. This is actually kind of the "none of the above" option: the "pom" is just the file we're editing now, and is included with every project type. Having a packaging type of pom generally means that either the project has no real outputs of its own (as is the case here) or it's an "ad hoc" project that doesn't fit an existing type.

modules

This block is the hallmark of a parent project: it lists the relative folder paths that contain the submodules. In this case, the names line up with the names we'll use for the submodules, but this could potentially vary depending on the folder names and locations. Parent-child module relationships don't have to be physically hierarchical on the filesystem, but it's a good convention when you don't have a specific reason to break it.

properties

This block is the project-level equivalent to the user-level property we defined in .m2/settings.xml. There are two types of properties that can go here, with no obvious distinction between them: known configuration properties used by Maven itself and arbitrary named variables used by the person writing the pom.

The first property - project.build.sourceEncoding - is an example of the former. During execution, Maven will reference this property defined in the project (or one of its parents) when determining the text file encoding to use. This could be set to something else if you're working with non-Unicode files, but it's important to set it here so that file interpretation will not be platform-dependent. These property names can be read like an equivalent of EL for the project XML: project.build.sourceEncoding refers to the sourceEncoding property of the build node within project, but more concisely (more or less).

The other two are variables for use later. The names of these have only very loose conventions, but there seem to be a couple common types: ALL_CAPS, camelCase, and hyphen-delimited. You can also specify variables as dot.delimited and they will work the same way, but that makes them more difficult to distinguish from the system-level properties.

repositories

The repositories block is the start of our OSGi-related weirdness. The block itself isn't OSGi-specific - it has its role in other projects that want to make use of dependencies outside of the core public Maven repositories - but the contents are. We're setting two repositories here: one to point to the main Eclipse repository (the Luna version here, but that could just as well be Mars or Kepler) and one to point to the XPages Update Site. This is where the property we set before comes into play, allowing different developers to keep the update site in different locations without changing the project's config.

build

The build section is often the largest part of a POM file - it contains definitions and configuration for various additional Maven plugins used during compilation. We have two tasks to accomplish here: make sure we compile at the Java 6 level at the root (so the output doesn't target Java 7 or 8 and end up incompatible with Domino) and enable a whole slew of Tycho plugins.

The maven-compiler-plugin block specifies the version of the Java compiler plugin to use (3.1, which is actually kind of old, but the differences aren't important) and then provides it with some configuration to set the Java version level and to not choke on forbidden references. Like the Java version, the latter is a nod to Domino: depending on your JVM configuration, you may run into forbidden-reference errors relating to the lotus.domino classes.

The next slew of blocks all relate to loading up various Tycho components. Tycho's job is essentially to construct an entire OSGi environment during the Maven build, and it consists of a number of moving parts, many of which are basically the Tycho version of normal Maven facilities. The tycho-maven-plugin is the core, and its extensions rule is what allows it to worm itself into various phases of the build process. The tycho-packaging-plugin controls the process of bundling the projects as their various types: the plugin, the feature, and the update site (in our case). The tycho-compiler-plugin is the Tycho variant of the Maven one we configured earlier. The tycho-source-plugin is what allowed us to kick the standalone source feature to the curb - it's the equivalent of the Eclipse-specific feature we had been hooking into before.

The target-platform-configuration plugin is the scariest of the bunch. This plugin's job is to establish the OSGi Target Platform we're working with, in conjunction with the repositories specified above. Not all of this configuration is necessary for our immediate needs, but may come in handy later. The pomDependencies rule is useful when using mixed-type dependencies in more-complicated projects, while the extraRequirements block forces the inclusion of the plugin fragment that contains Notes.jar. The optionalDependencies rule comes in handy from time to time with XPages projects: there are sometimes cases where there's a dependency that Eclipse has and which the server will have, but which will be awkward to get to in Maven, usually relating to dependencies-of-dependencies not related to compilation. The filters block is... I don't know; just keep it in there. The environments block is a way to describe the platforms on which your code can execute - I believe this is primarily used when testing. The resolver is like filters in that it's an "I just copy it around" thing; presumably, it refers to a specific code path for resolving plugin dependencies.

Whew!

Okay, so... that's the first one down! For the most part, you can carry around the whole bottom section pretty much as-is for your XPages Maven projects (as I do), and then gradually become comfortable with the specifics over time. Now, there's some good news and some bad news:

  • The bad news is that there are three more POM files to write.
  • The good news is that they are much, much simpler.

In recent versions of Tycho, they've added the ability to reduce the number of POMs involved, but there are limits to that, particularly to do with Jenkins, so we'll stick to the traditional way for now.

com.example.xsp.plugin

Go into the "com.example.xsp.plugin" folder and create a new pom.xml containing this:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>com.example</groupId>
		<artifactId>parent-xsp</artifactId>
		<version>1.0.0-SNAPSHOT</version>
	</parent>
	<artifactId>com.example.xsp.plugin</artifactId>
	<packaging>eclipse-plugin</packaging>
</project>

Like I promised: much simpler. Because the parent POM already brought in all the Tycho plugins and configuration, all we need to do here is the basics. One slightly-unusual aspect here is the packaging type. eclipse-plugin isn't a packaging type known to Maven inherently; instead, it's provided by Tycho, but can be used in the same way.

The artifact ID here is a concession to Tycho: Maven artifact IDs don't usually follow the same full-reverse-DNS conversion as OSGi, but Tycho wants the artifact ID to match the OSGi bundle name.

com.example.xsp.feature

Next up is the pom.xml file in the "com.example.xsp.feature" folder:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>com.example</groupId>
		<artifactId>parent-xsp</artifactId>
		<version>1.0.0-SNAPSHOT</version>
	</parent>
	<artifactId>com.example.xsp.feature</artifactId>
	<packaging>eclipse-feature</packaging>
</project>

This is very similar to the last, with the only real differences being the artifact ID and the packaging type.

com.example.xsp.update

Now, the pom.xml in "com.example.xsp.update":

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>com.example</groupId>
		<artifactId>parent-xsp</artifactId>
		<version>1.0.0-SNAPSHOT</version>
	</parent>
	<artifactId>com.example.xsp.update</artifactId>
	<packaging>eclipse-update-site</packaging>

	<build>
		<plugins>
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>tycho-packaging-plugin</artifactId>
				<version>${tycho-version}</version>
				<configuration>
					<archiveSite>true</archiveSite>
				</configuration>
			</plugin>
		</plugins>
	</build>
</project>

This one's slightly longer, but not by too much. Beyond the different artifact ID and packaging, we also provide some additional configuration to the tycho-packaging-plugin. This is among the plugins that were established in the root POM, but it's re-defined here in order to enable the archiveSite configuration option. This will give us a nice ZIP file of the Update Site at the end.

There's one other thing to note here: Tycho considers the eclipse-update-site packaging type to be deprecated, and it may be removed in the future. In most examples you'll see outside of Domino, people use eclipse-repository instead. This gets back to the difference between the old-style ("site.xml") Eclipse Update Sites and the new-style p2 repositories (configured via "category.xml"). For now, we use the old-style variant because it works better with Notes and Domino.
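For reference, if you do eventually migrate, the change would be to set the packaging to eclipse-repository and to replace the site.xml with a category.xml along these lines - a sketch based on our feature, not something to do now:

<site>
   <feature id="com.example.xsp.feature" version="0.0.0">
      <category name="Example"/>
   </feature>
   <category-def name="Example" label="Example"/>
</site>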

In addition to this POM file, we also have two changes to make in the site.xml: remove the source-feature reference (we'll add this back elsewhere later) and clean up the versions:

<?xml version="1.0" encoding="UTF-8"?>
<site>
   <feature url="features/com.example.xsp.feature_1.0.0.qualifier.jar" id="com.example.xsp.feature" version="1.0.0.qualifier">
      <category name="Example"/>
   </feature>
   <category-def name="Example" label="Example"/>
</site>

The version change is to replace the Eclipse-generated timestamps at the end of the versions with the generic "qualifier". The reason for this is that, for Tycho, the site.xml acts as a pure configuration file, and will no longer be the site index itself. So Tycho wants the qualifier to be generic, and then will fill it in during compilation. Like the source feature, this will be covered more later.

Last Steps

With our POM files defined, the last step for now is to import the projects back into Eclipse. In Eclipse, go to File → Import, expand the "Maven" category, and choose "Existing Maven Projects":

On the next screen, browse to the "com.example.xsp" directory created earlier. If all goes well, this should find the four projects in their hierarchy:

Everything on this can be left as the defaults, though you may want to specify a more-descriptive working set name - that doesn't affect the project behavior.

When you click "Finish", Eclipse with churn for a bit and then, if you're running Mars and haven't done this before, it will present a dialog about "Maven plugin connectors":

The specifics of what is going on here are a large topic of their own, but the short of it is that Eclipse needs specialized plugins to deal with each Maven plugin, and in this case it's looked for (and found) connectors for Tycho. Click "Finish", "Next", and "OK", accept the license terms, and restart Eclipse as it tells you to.

When Eclipse restarts, it should go through some churning while it updates the Maven projects and should finally settle on no remaining errors.

Closing Out

There will be some things to discuss with the fallout from this conversion, but this will do it for today. Commit your changes, stand up and stretch, and grab a cup of relaxing tea:

That Java Thing, Part 14: Maven Environment Setup

Feb 21, 2016 5:51 PM

Tags: java maven
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment
  19. Java Hiccups
  20. Bitwise Operators
  21. Java Grab Bag 2

Before diving into the task of converting our plugin projects to Maven, there's a bit of setup we need to do. In a basic case, Maven doesn't require much setup beyond the project file itself - it's a "convention over configuration" type of thing that tries to make doing things the default way smooth. However, since it's also a "Java" thing, that means that anything out of the ordinary requires a bunch of XML.

Our big "out of the ordinary" aspect is OSGi. Maven and OSGi are often at loggerheads, but the conflict won't be too great in our situation. Still, it does mean there will be a few hoops to jump through, and one of those hoops is dealing with our dependency on the XPages runtime plugins. Since these plugins are not packaged as fully-Mavenized artifacts (yet (hopefully)), we'll need to configure Tycho to read the p2 (Eclipse) style.

In part 3, we downloaded the Build Management Update Site from OpenNTF, and we'll reuse that here. What we need to do is create a global Maven settings file that, for now, will just contain a definition of a variable to point to this update site. It would also be possible to specify this inside the project itself, but it's better form to use a consistent variable name (the most common convention is notes-platform) in the project files and then have your local settings point to it on your machine.

The global Maven settings file is called settings.xml and is stored in a folder named .m2 in your user's home directory (e.g. C:\Users\someuser\.m2 or /Users/someuser/.m2). Creating a folder named with a leading dot can be a pain in Explorer and the Finder, so it may be necessary to drop into a command line or other tool to do it. One way or another, create this file and set its contents similar to this:

<?xml version="1.0"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
	<profiles>
		<profile>
			<id>main</id>
			<properties>
				<notes-platform>file:///C:/IBM/UpdateSite</notes-platform>
			</properties>
		</profile>
	</profiles>
	<activeProfiles>
		<activeProfile>main</activeProfile>
	</activeProfiles>
</settings>

Adjust the file:// URL as necessary to point to the location on your computer. It has to be a file URL and not a normal path, presumably because repositories are usually expected to be remote HTTP sites.

This is the only configuration we need before getting to the project, but it's a good preview of the sort of "try pasting this big block of XML somewhere" advice you're in for when it comes to Maven use. Over time, the structure of the XML and how it relates to Maven's behavior begins to crystallize, but it's definitely cumbersome to start with, and it will get more opaque before it gets less so.

Depending on your proclivities, this may be a good opportunity to install standalone Maven as well. Eclipse has its own embedded version, so this is not required, but it can be handy sometimes to be able to run Maven from the command line. On your average Linux distribution or OS X with Homebrew, Maven should be installable with a package manager. Otherwise, Maven can be downloaded from maven.apache.org - it doesn't have an installer as such, as it's essentially some scripts around Java classes, but they have tips for adding it to your path.
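If you do install it, a quick sanity check from the terminal will confirm that it's on your path (and show which Java installation it will use):

mvn -v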

Next, we'll get to the real meat of this process: actually converting the projects to Maven.

That Java Thing, Part 12: Expanding the Plugin - JAX-RS

Dec 3, 2015 3:21 PM

Tags: java xpages
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment
  19. Java Hiccups
  20. Bitwise Operators
  21. Java Grab Bag 2

A couple of months back, Toby Samples wrote a series on using Wink in Domino. I'm not going to cover all of that ground here, but it's still worth dipping into the topic a bit, as writing REST services in an OSGi plugin is a great way to not only add capabilities to your XPages apps, but to also start making your data (and programming skills) more accessible from other platforms.

There are two main terms to know here: "JAX-RS" and "Wink".

JAX-RS stands for "Java API for RESTful Web Services". If you're observant, you may notice that there's no "X" in that phrase. This is because the "JAX" part is sort of a legacy hanger-on from the common "JAX" prefix used in JAX-WS and the other Java XML APIs... even though JAX-RS doesn't necessarily involve any XML. In any event, JAX-RS is a specification for a way to write RESTful web services in Java with (mostly) inline configuration using annotations.

Apache Wink is a specific implementation of this standard. For the most part, with these Java standards, the actual implementation shouldn't matter too much, but there are always edge cases. In any event, Wink has historically been the preferred method of doing this on Domino by virtue of the fact that the ExtLib's DAS is built on it and so it ships with Domino, along with some IBM support code.

Setting up JAX-RS services is pretty similar to the addition of the XSP Library and theme provider: we'll need to create the Java classes and add some configuration to the plugin.xml. In this case, we'll do it in reverse, starting with the configuration first before diving into the servlet class.

There are actually two ways to proceed here. In the past, I've added servlets directly to the plugin.xml file. This works fine, but Toby's series took a different and more-interesting route: instead of having the plugin provide a servlet alone, it provides a web application which in turn contains a servlet. Doing it this way brings the configuration more in line with what you'd see in a non-OSGi Java web application.

To start with, we'll need to add dependencies on four more plugins. Open the META-INF/MANIFEST.MF file, go to the "Dependencies" tab, and add "org.apache.wink", "com.ibm.domino.osgi.core", "org.eclipse.equinox.http.registry", and "com.ibm.wink":

One word of caution: when adding "org.eclipse.equinox.http.registry", make sure to add version "1.0.100". Eclipse will likely have a newer version available as well, but that won't be there on Domino. Setting the version requirement too high will cause the plugin to fail to load.

While on this screen, add "lotus.domino" in the "Imported Packages" section of the page. OSGi has two mechanisms for specifying dependencies - until this point, we've used exclusively the "Required Plug-ins" route, which attaches a plugin by name as a dependency. The "Imported Packages" route is a little different: it says "I want this Java package available, but I don't care what plug-in provides it". Outside of the Domino world, the Packages route is very common, but for our needs we usually use Plug-ins to avoid some common nitty-gritty issues with the XPages stack.
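For reference, the net result of both edits in the raw MANIFEST.MF should look something like this sketch (abbreviated - any dependencies added in earlier parts will also be in the Require-Bundle list):

Require-Bundle: org.apache.wink,
 com.ibm.domino.osgi.core,
 org.eclipse.equinox.http.registry;bundle-version="1.0.100",
 com.ibm.wink
Import-Package: lotus.domino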

The plugin.xml addition is pretty similar to the services we added before, but with some different tags. We're going to add another extension point to the list, along with a couple lines of configuration. With a minor cleanup to the previous lines, the result should look like this:

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.4"?>
<plugin>
	<extension point="com.ibm.commons.Extension">
		<service class="com.example.xsp.ExampleLibrary" type="com.ibm.xsp.Library"/>
	</extension>
	<extension point="com.ibm.commons.Extension">
		<service class="com.example.xsp.theme.ExampleStyleKitFactory" type="com.ibm.xsp.stylekit.StyleKitFactory"/>
	</extension>
	<extension point="org.eclipse.equinox.http.registry.servlets">
		<servlet alias="/examplerest" class="com.example.xsp.servlet.ExampleServlet">
			<init-param name="applicationConfigLocation" value="/WEB-INF/exampleapplication"/>
			<init-param name="propertiesLocation" value="/WEB-INF/exampleservlet.properties"/>
			<init-param name="DisableHttpMethodCheck" value="true"/>
		</servlet>
	</extension>
</plugin>

The code inside of the extension point is sort of a compressed version of the kind of servlet configuration you see in JEE's web.xml file.
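For comparison, the same idea expressed in a traditional web.xml would be something like this sketch (purely illustrative - the plugin route doesn't need this file):

<servlet>
	<servlet-name>ExampleServlet</servlet-name>
	<servlet-class>com.example.xsp.servlet.ExampleServlet</servlet-class>
	<init-param>
		<param-name>applicationConfigLocation</param-name>
		<param-value>/WEB-INF/exampleapplication</param-value>
	</init-param>
</servlet>
<servlet-mapping>
	<servlet-name>ExampleServlet</servlet-name>
	<url-pattern>/examplerest/*</url-pattern>
</servlet-mapping>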

This is actually an area where Toby's series and these instructions diverge. In his series, he used a different extension point - com.ibm.pvc.webcontainer.application - to provide his servlet as part of a full web application. That is a very interesting route as well, and opens the door to broader Java application development. Either one will get the job done, but the "just the servlet" route saves a bit of cognitive overhead for now.

Now, to create the files mentioned in the configuration. In the src/main/resources folder, create a folder named WEB-INF and add the exampleapplication and exampleservlet.properties files, both plain text:

The exampleservlet.properties file can be left blank for now. It's there to provide Wink-specific initialization properties that are useful as you get further into it, but are not needed now.

The exampleapplication file lists the classes that will provide the individual REST resources, one per line. For now, we'll just have one:

com.example.xsp.servlet.HelloWorldResource

With the configuration set up, it's time to create two Java classes. First, create a new class named "ExampleServlet" in the "com.example.xsp.servlet" package and have it extend "com.ibm.domino.services.AbstractRestServlet":
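The generated class is trivial - something like:

package com.example.xsp.servlet;

import com.ibm.domino.services.AbstractRestServlet;

public class ExampleServlet extends AbstractRestServlet {
	// The superclass provides the REST-servlet wiring; this subclass mainly
	// exists so plugin.xml has a concrete class to point to.
}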

This class doesn't actually need to do anything other than exist for our purposes, but it can be a useful point for hooking in behavior that occurs for each servlet request. The actual code will go in the "HelloWorldResource" class mentioned in the application file. So create that class in the "com.example.xsp.servlet" package. Unlike the previous class, this one doesn't need to extend anything:

This one contains the core of our business logic (such as it is):

package com.example.xsp.servlet;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

import lotus.domino.NotesException;
import lotus.domino.Session;
import lotus.domino.Database;

import com.ibm.commons.util.StringUtil;
import com.ibm.commons.util.io.json.JsonJavaObject;
import com.ibm.domino.osgi.core.context.ContextInfo;

@Path("/helloworld/{id}")
public class HelloWorldResource {
	
	@GET
	@Produces(MediaType.APPLICATION_JSON)
    public Response get(
    		@Context UriInfo context, @PathParam("id") String id, @QueryParam("echo") String echo
    		) throws NotesException {
    	JsonJavaObject json = new JsonJavaObject();
    	
    	json.put("payload", "Hello world");
    	Session session = ContextInfo.getUserSession();
    	if (session != null) {
    		json.put("userName", session.getEffectiveUserName());
    	}
    	Database database = ContextInfo.getUserDatabase();
    	if(database != null) {
    		json.put("databaseFilePath", database.getFilePath());
    	}
    	if(StringUtil.isNotEmpty(echo)) {
    		json.put("echo", echo);
    	}
    	if(StringUtil.isNotEmpty(id)) {
    		json.put("id", id);
    	}
    	
    	return Response.ok(json.toString()).build();
    }
}

There are quite a few things going on here! Let's start with the JAX-RS annotations. JAX-RS, like other "new-era" Java APIs, uses annotations heavily in order to put configuration inline and cut down on the number and size of the previous giant blobs of XML. They look a bit unusual if you haven't encountered Java annotations before, but their intent here is hopefully clear.

At the top, the @Path("/helloworld/{id}") annotation indicates that this class is hooked into "/helloworld" within our servlet, and moreover also expects a path parameter that it will refer to as "id". Then, the @GET annotation indicates that our get method will respond to that HTTP method (the name of the method is arbitrary - it could be "foobar" and still work). The @Produces(MediaType.APPLICATION_JSON) annotation further indicates that the method will supply JSON. There are other ways to accomplish this if the method may return different content types, but we know it's always JSON here.
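As a sketch of one such alternative (hypothetical logic, not part of our example - the imports are already present in the class above): a method can skip @Produces and set the content type on the Response directly when it varies at runtime:

@GET
public Response get(@QueryParam("format") String format) {
	// Pick the content type based on a query parameter instead of
	// declaring it statically with @Produces
	if ("text".equals(format)) {
		return Response.ok("Hello world", MediaType.TEXT_PLAIN_TYPE).build();
	}
	return Response.ok("{\"payload\":\"Hello world\"}", MediaType.APPLICATION_JSON_TYPE).build();
}

Back in our actual class, though, the static annotation is all we need.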

Inside the method signature, there's a bit more magic going on. We're never going to call this method in any code ourselves, but instead Wink is going to inspect it and its parameter list in order to provide what it requests. By annotating the first parameter with @Context, we're telling Wink that we want to have that populated with the context information for the servlet - this is similar to ExternalContext from XSP development. This object isn't actually used in the example (and could be removed), but I've found that it's handy to have it consistently there. The other two parameters use @PathParam("id") and @QueryParam("echo") to extract the "id" parameter we specified earlier as well as look for "echo=..." in the query string.

The method itself uses IBM's JSON library to produce a simple JSON object containing the information we passed in, as well as some Domino-specific stuff, using the class ContextInfo. This class is sort of like the equivalent of ExtLibUtil for Equinox servlets, at least when it comes to getting access to the current session and database. ExtLibUtil itself, though available, doesn't function properly in a servlet, and so we use this instead. The current session is straightforward enough and works like you'd expect, but the current database is interesting. When you register a servlet via this method, it becomes available not only via "yourserver.com/examplerest", but also within each NSF, like "yourserver.com/foo.nsf/examplerest". By default, that will just provide the same results, but, if your code uses ContextInfo.getUserDatabase(), you can get the database the URL is requesting. This is how DAS does its thing, and it allows you to write a servlet that can operate in a contextually-appropriate way in each database.

So phew! Now that this is all written, you can build your update site and deploy it to the server. After restarting HTTP, you should be able to go to a URL like "http://yourserver.com/foo.nsf/examplerest/helloworld/someid?echo=hey", and see a result like this:

{
	"echo": "Hey",
	"userName": "CN=Jesse Gallagher/O=IKSG",
	"id": "someid",
	"databaseFilePath": "foo.nsf",
	"payload": "Hello world"
}

To finish, it's time to commit the changes.

Next up, it will be time to start taking the plunge into Maven. Don't worry; it'll be fun!

That Java Thing, Part 11: Diagnostics

Dec 1, 2015 8:43 AM

Tags: java xpages
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment
  19. Java Hiccups
  20. Bitwise Operators
  21. Java Grab Bag 2

Though my surprisingly-packed schedule the last few weeks caused a hiatus, it's time to return to this series with a quick description of some of the diagnostic tools available to you when doing plugin development (outside of connecting the debugger, which I may do eventually).

The primary tool in your "what the heck is going on?" toolbox should be the XPages Log File Reader. This app does a wonderful job providing a web UI for the important diagnostic files you'll likely need to see during plugin development (or normal XPages development as well). Even if you have control over the server and could see the files on the filesystem, having them in one UI is invaluable. By looking through the available tabs and pages, you can usually track down some error message related to your problem. So, if you don't have this installed currently, make a point of adding it.

Your other best friend will be the oddly-named XPages Portable Command Guide. Its purpose isn't as immediately clear as Mastering XPages, but it contains valuable nuggets of wisdom. In particular, for the purposes of developing plugins, it describes how to use the OSGi console commands.

By running tell http osgi followed by an appropriate command on the Domino server console, you can find out a bunch of important diagnostic state information about installed plugins. Things are a little obtuse in there, but eventually you learn enough to glean what you'll need to know. There are three primary commands I use: ss, diag, and bundle.

The ss command allows you to see a status summary of plugins matching a given name prefix. So, for example, if you run it for "com.example", you should get a listing like this:

[1140:00D0-15DC] 12/01/2015 07:44:05 AM  Remote console command issued by Jesse Gallagher/IKSG: tell http osgi ss com.example
[0FA8:0002-0C54] 12/01/2015 07:44:05 AM  Framework is launched.
[0FA8:0002-0C54] 12/01/2015 07:44:05 AM  id State       Bundle
[0FA8:0002-0C54] 12/01/2015 07:44:05 AM  79 RESOLVED    com.example.xsp.plugin.source_1.0.0.201511121147
[0FA8:0002-0C54] 12/01/2015 07:44:05 AM  83 ACTIVE      com.example.xsp.plugin_1.0.0.201511121147

The "State" column's values are states from the OSGi lifecycle, and it's worthwhile to at least read the list on that page. The gist of it is that "INSTALLED" is bad (though one step better than not being listed at all), while "RESOLVED" means "working but not yet activated", "<<LAZY>> means it is waiting to be used (like an XSP library), and "ACTIVE" means all is well (probably). This can be a very useful tool for seeing whether or not a plugin is properly on the server at all, and then whether there is an issue.

The diag command is the next level to drill down into when you want to get information about a misbehaving plugin. If you pass the full name of a plugin (not just the prefix), it will tell you if there are any missing dependencies. Running it on the example plugin, you should get something like this:

[1140:00D0-0D3C] 12/01/2015 08:18:41 AM  Remote console command issued by Jesse Gallagher/IKSG: tell http osgi diag com.example.xsp.plugin
[0FA8:0002-0C54] 12/01/2015 08:18:41 AM  initial@osginsf:osgi-dev.nsf/D3A3F506C30EFAB585257EFB005C40D1/com.example.xsp.plugin_1.0.0.201511121147.jar [83]
[0FA8:0002-0C54] 12/01/2015 08:18:41 AM    No unresolved constraints.

If, however, the plugin has a dependency that is unfulfilled (like this arbitrary one I added for this purpose), you'll get a different message:

[1140:00D0-140C] 12/01/2015 08:26:36 AM  Remote console command issued by Jesse Gallagher/IKSG: tell http osgi diag com.example.xsp.plugin
[0D50:0002-112C] 12/01/2015 08:26:36 AM  initial@osginsf:osgi-dev.nsf/EE0B673EB07C7AC985257F0E0049B171/com.example.xsp.plugin_1.0.0.201512010823.jar [12]
[0D50:0002-112C] 12/01/2015 08:26:36 AM    Direct constraints which are unresolved:
[0D50:0002-112C] 12/01/2015 08:26:36 AM      Missing required bundle org.eclipse.egit.mylyn.ui_4.0.3.

A slight variant of that situation is when there is an optional bundle that is missing, and this is usually fine. For example, it's common to see XPages plugins that have dependencies on "com.ibm.notes.java.api", which is the nicely-packaged version of the lotus.domino classes, but which is not actually on the server (since it uses Notes.jar directly). In that situation, the message is similar, but not a problem:

[1140:00D0-0B70] 12/01/2015 08:29:57 AM  Remote console command issued by Jesse Gallagher/IKSG: tell http osgi diag com.example.xsp.plugin
[0E18:0002-15A4] 12/01/2015 08:29:57 AM  initial@osginsf:osgi-dev.nsf/FB1CC2377927985185257F0E0049F6E8/com.example.xsp.plugin_1.0.0.201512010827.jar [12]
[0E18:0002-15A4] 12/01/2015 08:29:57 AM    Direct constraints which are unresolved:
[0E18:0002-15A4] 12/01/2015 08:29:57 AM      Missing optionally required bundle com.ibm.notes.java.api_9.0.1.

The final command is much more verbose and usually the least useful: bundle. What this does is to spit out a feed of pertinent information about the plugin. The first section of it looks like this:

[1140:00D0-0AF0] 12/01/2015 08:32:10 AM  Remote console command issued by Jesse Gallagher/IKSG: tell http osgi bundle com.example.xsp.plugin
[0E18:0002-15A4] 12/01/2015 08:32:11 AM  initial@osginsf:osgi-dev.nsf/FB1CC2377927985185257F0E0049F6E8/com.example.xsp.plugin_1.0.0.201512010827.jar [12]
[0E18:0002-15A4] 12/01/2015 08:32:11 AM    Id=12, Status=<<LAZY>>    Data Root=C:\Program Files\IBM\Domino\data\domino\workspace\.config\org.eclipse.osgi\bundles\12\data
[0E18:0002-15A4] 12/01/2015 08:32:11 AM    No registered services.
[0E18:0002-15A4] 12/01/2015 08:32:11 AM    No services in use.
[0E18:0002-15A4] 12/01/2015 08:32:11 AM    Exported packages
[0E18:0002-15A4] 12/01/2015 08:32:11 AM      com.example.xsp.beans; version="0.0.0"[exported]
[0E18:0002-15A4] 12/01/2015 08:32:11 AM      org.apache.commons.lang3; version="0.0.0"[exported]
[0E18:0002-15A4] 12/01/2015 08:32:11 AM      org.apache.commons.lang3.builder; version="0.0.0"[exported]
[0E18:0002-15A4] 12/01/2015 08:32:11 AM      org.apache.commons.lang3.concurrent; version="0.0.0"[exported]
[0E18:0002-15A4] 12/01/2015 08:32:11 AM      org.apache.commons.lang3.event; version="0.0.0"[exported]
[0E18:0002-15A4] 12/01/2015 08:32:11 AM      org.apache.commons.lang3.exception; version="0.0.0"[exported]
[0E18:0002-15A4] 12/01/2015 08:32:11 AM      org.apache.commons.lang3.math; version="0.0.0"[exported]
[0E18:0002-15A4] 12/01/2015 08:32:11 AM      org.apache.commons.lang3.mutable; version="0.0.0"[exported]
[0E18:0002-15A4] 12/01/2015 08:32:11 AM      org.apache.commons.lang3.reflect; version="0.0.0"[exported]
[0E18:0002-15A4] 12/01/2015 08:32:11 AM      org.apache.commons.lang3.text; version="0.0.0"[exported]
[0E18:0002-15A4] 12/01/2015 08:32:11 AM      org.apache.commons.lang3.text.translate; version="0.0.0"[exported]
[0E18:0002-15A4] 12/01/2015 08:32:11 AM      org.apache.commons.lang3.time; version="0.0.0"[exported]
[0E18:0002-15A4] 12/01/2015 08:32:11 AM      org.apache.commons.lang3.tuple; version="0.0.0"[exported]
[0E18:0002-15A4] 12/01/2015 08:32:11 AM    Imported packages
[0E18:0002-15A4] 12/01/2015 08:32:11 AM      org.osgi.framework; version="1.4.0"<System Bundle [0]>
[0E18:0002-15A4] 12/01/2015 08:32:11 AM      com.ibm.designer.runtime; version="0.0.0"<update@../../shared/eclipse/plugins/com.ibm.xsp.core_9.0.1.20150605-1000/ [256]>

The "Exported packages" part is usually the most useful: it lets you ensure that the packages you think are being exported are indeed being exported. This won't tell you for sure that all of the classes in those packages are present and working, but it's a start.

The "Imported packages" section, which can run for hundreds of lines, is very verbose, but is potentially useful if you want to track down why code in your plugin that interacts with the outside is misbehaving. Perhaps it hasn't found the right dependent package at runtime, or perhaps the version of the other plugin it's using is not what you expect.

One word of warning about the bundle command: it can very easily produce more output than the Administrator remote console will show you, so it's useful to look at the server console directly or at the logs to see everything. Fortunately, the last couple of lines relate to permission setup, which is not usually relevant for this sort of diagnostic work.


These commands so far have applied to the Domino server, but they also apply to Notes/Designer. To see the OSGi console, though, you need to launch Notes specially, with "-RPARAMS -console" in the command line. I keep a second shortcut on my Start menu for this purpose. When you launch it, there will be quite a bit of noise in there, since Eclipse is a very chatty thing, but it can be invaluable in a pinch. When using this console, since it's the OSGi console directly and not routed through a server task, you can drop the tell http osgi prefix from the commands and just do things like ss com.example directly.
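
For example, assuming the same hypothetical plugin name as earlier, a quick client-side session at the console prompt might look like this (output omitted):

osgi> ss com.example
osgi> diag com.example.xsp.plugin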

The client also has useful diagnostic logs, though there's no handy XPages application to go along with it (unless it happens to run in the local web preview, I guess). Instead, if you go to Help → Support in Designer, you have a couple options to view log and trace information. If you're working on a plugin and don't see it in the Xsp Properties page list, this is the place to check.


In the next couple of posts, we'll add a little more code to the plugin, diving into using JAX-RS to serve web services, before marching towards Maven-ization.

That Java Thing, Interlude: Effective Java

Nov 16, 2015 8:15 AM

Tags: java
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment
  19. Java Hiccups
  20. Bitwise Operators
  21. Java Grab Bag 2

While taking a short breather in my continuing Java series, I think that now is a good time to reiterate my advice for all Domino developers to read Effective Java. It's probably not the best way to learn Java from scratch, but it's an invaluable tour through tons of important Java concepts. Even if you don't use most of the knowledge immediately, reading every section will help immerse you in the language and give you a better appreciation for its texture, which is one of the most important aspects of being a better programmer.

There is one caveat, though, when it comes to serialization. The serialization chapter in the book, though characteristically thorough and accurate, paints a much more dire and complicated picture of serialization than we as XPages developers usually have to worry about. It focuses on long-term storage of serialized objects - say, on the filesystem as a data format - whereas most of the serialization going on in an XPages app exists to make sure that your managed beans and data contexts don't throw exceptions when you do a partial refresh. Though we do run into the thornier side a bit when storing Java objects in Domino documents with MIMEBean or ODA, a managed bean class can usually get away with just "implements Serializable" and not a second thought.
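
To give a sense of the difference in ceremony, here is a hypothetical sketch of the level of effort that usually suffices for an XPages managed bean - no custom serialized form, no writeObject/readObject pairs:

package com.example.xsp.beans;

import java.io.Serializable;

public class PrefsBean implements Serializable {
	// A default ID is fine here: the bean only needs to survive partial
	// refreshes and scope persistence, not long-term storage as a data format
	private static final long serialVersionUID = 1L;

	private String favoriteColor = "blue";

	public String getFavoriteColor() { return favoriteColor; }
	public void setFavoriteColor(String favoriteColor) { this.favoriteColor = favoriteColor; }
}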

Now go, make haste to Amazon and purchase the book!

That Java Thing, Part 10: Expanding the Plugin - Serving Resources

Nov 12, 2015 12:02 PM

Tags: java xpages
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment
  19. Java Hiccups
  20. Bitwise Operators
  21. Java Grab Bag 2

After sharing code, one of the handiest uses of a plugin is sharing web resources - stylesheets, JavaScript files, and so forth. This process is similar to the last couple steps in that, though it is not very complicated on the whole, it's pretty non-obvious how to go about doing it.

To start with, we'll create some resources to serve up. Expand the src/main/resources folder in your project (it will be slightly more useful to use the "normal" folder version and not the source folder with the brown package icon, due to Eclipse's UI) and go to New → Other, then pick "Folder" within "General". Name the folder "web":

Then create within it a folder named "example", and within that, folders named "css" and "js":

Then, populate it with some files - I created a basic stylesheet named "style.css" and a JavaScript file named "script.js" and placed them within the css and js folders, respectively. It doesn't matter what they contain, as long as it's something you can test later; you could also drag in any files you have from the filesystem.

As a side note, as I did this, creating folders within the hierarchy, Eclipse got a bit wonky about refreshing the list, presumably because of the combined source/normal folder nature of src/main/resources. This inconvenience is a concession to the way m2e, the Maven integrator for Eclipse, will act later - it makes this folder a source folder automatically, so we may as well get used to it.

Next, create a theme file within src/main/resources directly (which is to say, not in the web folder) and name it "example.theme". When you create it, Eclipse will likely have trouble opening up an editor: it will try to use one from the OS, which will almost definitely be invalid. Instead, right-click the file and choose Open With → Text Editor:

Set its contents to:

<theme extends="webstandard" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="platform:/plugin/com.ibm.designer.domino.stylekits/schema/stylekit.xsd" >
	<resources>
		<styleSheet href="/.ibmxspres/.extlib/example/css/style.css"/>
		<script clientSide="true" src="/.ibmxspres/.extlib/example/js/script.js"/>
	</resources>
</theme>

Those paths look so weird because we'll be piggybacking on the ExtLib's resource-provider system.

Now, back to Java to create the classes that will provide these resources to the platform. Because it's less terrifying, we'll start with the StyleKitFactory, which provides the theme to the server. Right-click on src/main/java and create a new Java class:

  • Set its package to "com.example.xsp.theme"
  • Set its name to "ExampleStyleKitFactory"
  • Add two interfaces: com.ibm.xsp.stylekit.StyleKitFactory and com.ibm.xsp.stylekit.StyleKitListFactory. The former will provide the theme to the server, while the latter will tell Designer about the name it can use

Set the class contents to this:

package com.example.xsp.theme;

import java.io.InputStream;
import java.util.Arrays;
import java.util.List;

import com.ibm.xsp.stylekit.StyleKitFactory;
import com.ibm.xsp.stylekit.StyleKitListFactory;

public class ExampleStyleKitFactory implements StyleKitFactory, StyleKitListFactory {

	private static final String[] THEMES = {
		"example" //$NON-NLS-1$
	};
	private static final List<String> THEMES_LIST = Arrays.asList(THEMES);
	
	@Override
	public String[] getThemeIds() {
		return THEMES;
	}

	@Override
	public InputStream getThemeAsStream(String themeId, int scope) {
		if (scope == StyleKitFactory.STYLEKIT_GLOBAL) {
			if (THEMES_LIST.contains(themeId)) {
				return getThemeFromBundle(themeId + ".theme"); //$NON-NLS-1$
			}
		}
		return null;
	}

	@Override
	public InputStream getThemeFragmentAsStream(String themeId, int scope) {
		return null;
	}
	
	private InputStream getThemeFromBundle(final String fileName) {
		ClassLoader cl = getClass().getClassLoader();
		return cl.getResourceAsStream(fileName);
	}
}

There's quite a bit going on here! Some of it is due to the nature of the task and some of it comes from my own built-up habits that come in handy as the class grows. Towards the top of the class, THEMES contains an array of theme names known by the plugin - in this case, just the one, but it's good to have this standardized. THEMES_LIST contains a List wrapper around that array for programmatic convenience later.

The getThemeIds method is from StyleKitListFactory and is used by Designer to generate its list of available themes in the GUI. Implementing this interface isn't required, but it's a cross-the-Ts sort of thing.

The remaining methods implement StyleKitFactory, which provides the server with the theme data itself. getThemeAsStream is called by the runtime when it's searching for a theme requested by the app, so it contains a couple checks to make sure that the request is indeed intended for a plugin-based (global) theme with a name that this plugin knows about. It then uses a small utility method, getThemeFromBundle, to get the theme as an InputStream from the plugin's internal filesystem. Technically, this could be anything - it could construct the theme on the fly, fetch it from a URL, or get it from anywhere else, as long as it returns an InputStream, but this version is the most common.

getThemeFragmentAsStream is an interesting beast. The Extension Library uses it to hook in extra theme info for its Bootstrap themes on the fly. This is presumably useful for ad-hoc theme hierarchies and avoiding the theme-inheritance cap, but we don't have any need for it here.

Finally, the "$NON-NLS-1$" comment business is because I've developed a habit of enabling Eclipse's translated-strings warnings - the comments denote to the IDE that the hard-coded strings on those lines are not intended to be translated. You can ignore those if you so desire.

Moving on, now it's time to set up our resource provider, which will serve up the web resources. Before that, we'll take a minor detour back to the Activator class to fix up a method signature. Add "public" to the getContext method:

public static BundleContext getContext() {
	return context;
}

Now, we're going to implement the resource-loading class, modeled after the one from Bootstrap4XPages. Create a new class:

  • Set its package to "com.example.xsp.minifier"
  • Set its name to "ExampleLoader"
  • Set its superclass to com.ibm.xsp.extlib.minifier.ExtLibLoaderExtension

This class will have three methods of importance to us: getOSGiBundle (which is obligatory), loadCSSShortcuts, and getResourceURL:

package com.example.xsp.minifier;

import java.net.URL;

import javax.servlet.http.HttpServletRequest;

import org.osgi.framework.Bundle;

import com.example.xsp.Activator;
import com.ibm.commons.util.DoubleMap;
import com.ibm.xsp.extlib.minifier.ExtLibLoaderExtension;
import com.ibm.xsp.extlib.util.ExtLibUtil;

public class ExampleLoader extends ExtLibLoaderExtension {
	
	private static final String[] LIBRARY_RESOURCE_NAMESPACES = {
		"example" //$NON-NLS-1$
	};

	@Override
	public Bundle getOSGiBundle() {
		return Activator.getContext().getBundle();
	}

	@Override
	public void loadCSSShortcuts(DoubleMap<String, String> aliases, DoubleMap<String, String> prefixes) {
		if(prefixes != null) {
			for(int i = 0; i < LIBRARY_RESOURCE_NAMESPACES.length; i++) {
				String namespace = LIBRARY_RESOURCE_NAMESPACES[i];
				prefixes.put("9T0a" + i, "/.ibmxspres/.extlib/" + namespace); //$NON-NLS-1$ //$NON-NLS-2$
			}
		}
	}
	
	@Override
	public URL getResourceURL(HttpServletRequest request, String name) {
		for(String namespace : LIBRARY_RESOURCE_NAMESPACES) {
			if(name.startsWith(namespace)) {
				String path = "/web/" + name; //$NON-NLS-1$
				return ExtLibUtil.getResourceURL(getOSGiBundle(), path);
			}
		}
		return null;
	}
}

Once again, there's a lot going on, but you can see some general similarities to the theme provider. The getOSGiBundle method makes use of the Activator.getContext method we just modified, while the other two methods use the same sort of "internal array of Strings" pattern as before to make it easy to add more entries to the list down the line.

Things are a little strange in the loadCSSShortcuts method. The goal of this method is to provide the shorthand codes used in the minified URLs for resources, and so each plugin should make sure to provide unique ones. However, I don't know of any coordinating enforcer class that does this "making sure" for us, so you just have to make up a string of characters that's likely to be unique. Here, I picked "9T0a" as the base, just because these prefixes usually look like that.

The getResourceURL method is a little simpler, and is the equivalent of the earlier getThemeAsStream - given the incoming request for a resource name, it checks to see if it falls within its bailiwick and, if so, uses a method to provide a URL to the server environment.

These methods, like the ones in the theme provider, can generally be used as a "just drop them in the plugin" sort of thing without modification. There are other potential tweaks you can make, but it's probably best to keep it simple.

There are two more steps to getting these classes working: the theme provider needs to be registered in the plugin.xml, and the resource loader needs to be added in the Activator. First, for the theme provider, add another extension entry to the plugin.xml:

Your plugin.xml source should look something like:

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.4"?>
<plugin>
   <extension
         point="com.ibm.commons.Extension">
      <service
            class="com.example.xsp.ExampleLibrary"
            type="com.ibm.xsp.Library">
      </service>
   </extension>
   <extension
         point="com.ibm.commons.Extension">
      <service
            class="com.example.xsp.theme.ExampleStyleKitFactory"
            type="com.ibm.xsp.stylekit.StyleKitFactory">
      </service>
   </extension>

</plugin>

For the loader, open the Activator class and add a new line to the start method to add the resource loader to the ExtLib's bank. The class should now look like:

package com.example.xsp;

import java.util.logging.Level;
import java.util.logging.Logger;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

import com.example.xsp.minifier.ExampleLoader;
import com.ibm.xsp.extlib.minifier.ExtLibLoaderExtension;

public class Activator implements BundleActivator {

	private static BundleContext context;
	
	public static final Logger log = Logger.getLogger(Activator.class.getPackage().getName());
	static {
		log.setLevel(Level.FINEST);
	}

	public static BundleContext getContext() {
		return context;
	}

	/*
	 * (non-Javadoc)
	 * @see org.osgi.framework.BundleActivator#start(org.osgi.framework.BundleContext)
	 */
	public void start(BundleContext bundleContext) throws Exception {
		Activator.context = bundleContext;
		
		if(log.isLoggable(Level.INFO)) {
			log.info("Starting Example XPages Library");
		}
		
		ExtLibLoaderExtension.getExtensions().add(new ExampleLoader());
	}

	/*
	 * (non-Javadoc)
	 * @see org.osgi.framework.BundleActivator#stop(org.osgi.framework.BundleContext)
	 */
	public void stop(BundleContext bundleContext) throws Exception {
		Activator.context = null;
	}
}

Now to see if all this work has paid off: build the update site and install it in Designer and Domino. When you relaunch Designer and open the "Xsp Properties" page of the NSF you're working with, you should see the "example" theme in the list of options (as long as you have a recent ExtLib release installed in Designer - the capability was added post-9.0.1):

Additionally, when you open the app on the web, you should see your CSS and JS files included. One final note about these resources: though the URLs in the theme start with "/.ibmxspres/.extlib/example", URL references from non-XSP files (say, referencing font files from within CSS) should start instead with "/xsp/.ibmxspres/.extlib/example". The XPages runtime adds that "/xsp" when generating URLs for pages, but it doesn't do any such processing for static references.
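
For example, a hypothetical @font-face rule inside style.css would need the prefixed form, since the browser requests that URL directly rather than via a generated XPage (the font path here is made up for illustration):

@font-face {
	font-family: "ExampleFont";
	/* Resolved by the browser, not the XSP renderer, so the /xsp prefix is needed */
	src: url("/xsp/.ibmxspres/.extlib/example/fonts/example.woff");
}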

So... that was more complicated than it seemed at the start! Still, once it's in place, it's all pretty much "set it and forget it" - in the future, you can add files and themes more easily, just adjusting the static arrays in the appropriate classes as necessary.

Finally, commit the changes and take a relaxing breath.

That Java Thing, Part 9: Expanding the Plugin - Jars

Nov 11, 2015 7:50 AM

Tags: java xpages
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment
  19. Java Hiccups
  20. Bitwise Operators
  21. Java Grab Bag 2

So it appears that I once again forgot to commit my changes. Well, consider this a cautionary tale, but we can still salvage the situation a bit by committing the previous changes before embarking on an unrelated modification - it's that mixing of different changes that can cause trouble in a source control repository.

For today's post, we'll add a third-party Jar to our plugin in order to use it internally and provide it to external applications (and we'll cover why those are distinct things).

The Jar we'll use is Apache Commons Lang, because it's a nice simple case without complicated dependencies. Embedding it inside a plugin is actually not the ideal deployment strategy for this, since it already contains OSGi metadata, but this setup is often the most expedient.

To start, download the latest binary release from their site and extract the Zip. Now, in Eclipse, right-click the "com.example.xsp.plugin" project, choose New → Folder, and name it "lib". Then, right-click that new folder and choose "Import...":

Then, choose "File System" from within "General":

On the next pane, browse to the extracted folder and, within Eclipse's file-picker list, choose the binary Jar:

Next, we have to make sure that this Jar file gets included in the final build and that its classes are available to the code in the plugin. To do that, open the META-INF/MANIFEST.MF file and go to the "Runtime" tab. In the "Classpath" section, click "Add...", and find the newly-added file:

This does two things: it adds it to the plugin's internal classpath (specified in the MANIFEST.MF file) and, as a convenience in Eclipse, adds the file to build.properties (which controls what is included in the build). If you go to the "build.properties" tab, it should look something like this:

source.. = src/main/java/,\
           src/main/resources/
output.. = target/classes/
bin.includes = META-INF/,\
               .,\
               plugin.xml,\
               lib/commons-lang3-3.4.jar
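
On the Manifest side, that "Classpath" addition shows up as a Bundle-ClassPath header; if you peek at the "MANIFEST.MF" tab, you should see something along these lines:

Bundle-ClassPath: .,
 lib/commons-lang3-3.4.jar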

You should also see the Jar within your project list - being at the top level with this "jar of bits" icon means that Eclipse knows that it is included in the project's classpath:

At this point, the Jar's classes are available for use inside the plugin itself, but not exposed to the outside world. So we could add a method like this to the ExampleBean class and then call it from an XPage:

/**
 * @return the Java home directory path according to Apache Commons Lang
 */
public String getJavaHome() {
	return org.apache.commons.lang3.SystemUtils.getJavaHome().getAbsolutePath();
}
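
On an XPage, that new property could then surface like any other bean property - a minimal test page:

<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core">
	<xp:text value="#{exampleBean.javaHome}"/>
</xp:view>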

As an aside, sometimes Designer doesn't quite obey the normal rules of OSGi visibility while Domino does, so you can fall into a trap with packages that aren't marked as visible: they may appear to work in Designer but then fail on the server.

Sometimes we do want to make these classes visible to downstream users, such as when the point of including the Jar is to distribute a set of foundational libraries used across apps. To do this, go back to the MANIFEST.MF file's "Runtime" tab. Now, click the "Add..." in the "Exported Packages" section. You should see all of the Apache commons packages - select them and click OK:

Now, when you build and install this plugin, the classes will be directly available in your XPages applications. This can be a good way to take a handful of Jars you may have sitting around in your jvm/lib/ext folder and distribute them in a better way. It's not the best way, since it misses some OSGi advantages like source and Javadoc for the embedded classes, but it can be much simpler than doing it "right".
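
As with the classpath change, this GUI action is reflected in the Manifest source: the selected packages land in an Export-Package header, which should look roughly like this (truncated here - the remaining lang3 subpackages follow the same pattern):

Export-Package: com.example.xsp.beans,
 org.apache.commons.lang3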

Now, commit the changes for today:

In the next post, we'll add some web resources to the plugin and make them available in a theme and able to participate in resource aggregation.

That Java Thing, Part 8: Source Bundles

Nov 10, 2015 7:46 AM

Tags: java xpages
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment
  19. Java Hiccups
  20. Bitwise Operators
  21. Java Grab Bag 2

Before anything else today, Eric McCormick reminded me that yesterday's post missed the final step: committing the changes. So, let's take care of that right now. On my side, since my Windows VM is also being accessed from the Mac side, it's worthwhile to add a .gitignore file to the root of the repo to keep out all the .DS_Store nonsense. GitHub's Java gitignore is a good start, though we'll skip its rule about ignoring Jar files. In your text editor of choice, create a new file named ".gitignore" in the root directory of the Git repository, with this content:

._*
Thumbs.db
.DS_Store

*.class

# Mobile Tools for Java (J2ME)
.mtj.tmp/

# Package Files #
#*.jar
*.war
*.ear

# virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml
hs_err_pid*

Then commit the files:

Now, on to today's topic: adding a source bundle. This is a small topic, but will be very useful as you work down the line. As it stands right now, even though you have access to the source code of your plugin, Designer doesn't, and that means that you don't have nice parameter names, inline Javadoc, or viewable source when working with your classes in an actual XPages application. Since XSP libraries are almost always open-source or internal-use-only, including a source bundle is the fastest way to achieve all of these.

The way to do this is pretty non-obvious, but not complex once you know what to do. The work will be done with a new feature project, so go to File → New → Other and then "Feature Project" within "Plug-in Development". This should be set up very similarly to the original feature:

  • Set the project name to "com.example.xsp.source.feature"
  • Override the location with a folder inside the Git repository
  • Pick a variant on your original feature name, so "Example XSP Library Source Feature"

Unlike before, however, we don't want to select any plugins on the next pane - instead, just hit "Finish". This feature is going to use some Eclipse trickery to automatically generate a "source" version of the existing feature. Once the project is created, open its feature.xml file and go to the "feature.xml" tab. On there, add a bit of XML to manually specify the non-existent source plugin (I've left out the "description", "copyright", and "license" blocks, which are required even if just stubs):

<?xml version="1.0" encoding="UTF-8"?>
<feature
      id="com.example.xsp.source.feature"
      label="Example XSP Library Source Feature"
      version="1.0.0.qualifier">

   <!-- *snip* -->

	<includes
         id="com.example.xsp.plugin.source"
         version="0.0.0"/>
</feature>

Then, go to the "build.properties" tab and add a second line to tell Eclipse's builder to generate the source plugin:

bin.includes = feature.xml
generate.feature@com.example.xsp.plugin.source = com.example.xsp.feature

Next, open the site.xml file in the update site project and add the new feature to the "Managing the Site" section:

Now, save and "Build All". Then, go back to Designer and install the latest version from the update site, which should now have a second feature listed:

When you install that and relaunch Notes/Designer, you'll be able to access the source of your plugin. To see this in action, open your application in Designer and then go to the "Package Explorer" view (you may have to add it via Window → Show Eclipse Views → Package Explorer). Find your NSF's project and expand the "Plug-in Dependencies" category. Towards the bottom, you should see the example plugin Jar file - expand that and double-click on one of the classes:

If all went well, you should see the source to the original class, instead of Eclipse's incomprehensible bytecode dump. Additionally, now, when you access these classes from Java code inside your NSF, you'll get improved inline help, with correct parameter names and Javadoc (if you actually write the Javadoc).

The next step will be another small but useful topic: adding some third-party Jars to your plugin, either to use internally or to expose to your XPages applications.

That Java Thing, Part 7: Adding a Managed Bean to the Plugin

Nov 9, 2015 6:37 AM

Tags: java xpages
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment
  19. Java Hiccups
  20. Bitwise Operators
  21. Java Grab Bag 2

For today's step, we'll deal more with the meat of the task: putting your own code in the plugin. There are a lot of options for what a plugin can provide, ranging from just storing classes to be accessed from elsewhere to being full-on OSGi web applications. Adding a managed bean certainly falls on the simpler side of this spectrum, but it's also one of the most common uses. These steps should also be very familiar if you've created a managed bean in an XPages NSF before.

Before we get started, it will be useful later to access Extension Library code from this plugin, so there's a quick detour. In order to allow the plugin to "see" ExtLib code, the appropriate ExtLib plugin needs to be added as a dependency. Go to the MANIFEST.MF file's "Dependencies" tab and add com.ibm.xsp.extlib to the list:

Now, create a new class with an appropriate name by right-clicking on the src/main/java folder and selecting New → Class as before. Set its package to "com.example.xsp.beans" and its name to "ExampleBean". You don't have to name your bean classes with "Bean", but sometimes it makes sense. Make sure to add the java.io.Serializable interface via the "Add..." button in that section.

When you create the class, Eclipse will put a squiggly underline beneath the class name, indicating that it has a warning for you. If you hover your cursor over it (or hit Ctrl-1 while the cursor is on the line), you'll get more information and an option to fix it:

What this message is talking about has to do with serialization - the process of storing the Java object outside of memory and retrieving it later - and is something that would come up more in applications that intentionally store data long-term. For a thorough explanation of this and many other things, I can't recommend Effective Java enough. For our purposes as XPages developers, though, it's fine to pick the "default" option - we make far more classes Serializable than typical Java apps do, but we care less about their long-term storage.

Next, add a basic "getter" method that returns a String we can see later, as well as a static method to retrieve the bean from the XPages environment from Java:

package com.example.xsp.beans;

import java.io.Serializable;

import javax.faces.context.FacesContext;

import com.ibm.xsp.extlib.util.ExtLibUtil;

public class ExampleBean implements Serializable {
	private static final long serialVersionUID = 1L;
	
	public static ExampleBean get() {
		return (ExampleBean)ExtLibUtil.resolveVariable(FacesContext.getCurrentInstance(), "exampleBean");
	}

	public String getFoo() {
		return "hello from " + getClass().getSimpleName();
	}
}
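
As a taste of where that static method pays off: once the bean is registered in the steps below, any Java code running during an XPages request - a hypothetical utility class, say - can get at the bean without going through EL:

// In any Java code running during an XPages request:
String message = ExampleBean.get().getFoo();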

Now, in order to make this a managed bean, we'll have to add a local faces-config file. These files have the same format as the one in the NSF. There's another nod towards Maven here: we'll put the XML file in a separate "resources" folder in the project, which is where Maven expects to find this sort of thing. Right-click the project and choose New → Source Folder:

Name it "src/main/resources":

Right-click on this newly-created source folder and choose New → Other, and within the dialog choose "File" from the "General" section:

On the next screen, name the file "beans.xml" and click Finish:

Most likely, Eclipse will open the file on the "Design" tab, which is pretty useless for this - it's better to switch to the "Source" tab. In this file, fill in the basic faces-config structure and an entry for the managed bean:

<?xml version="1.0" encoding="UTF-8"?>
<faces-config>
	<managed-bean>
		<managed-bean-name>exampleBean</managed-bean-name>
		<managed-bean-class>com.example.xsp.beans.ExampleBean</managed-bean-class>
		<managed-bean-scope>view</managed-bean-scope>
	</managed-bean>
</faces-config>

Now that we have this file, we have to tell the XPages runtime to actually use it as a faces-config file. To do that, go back to the ExampleLibrary class and implement the getFacesConfigFiles method, which lists the XML files from the plugin to contribute to the environment:

@Override
public String[] getFacesConfigFiles() {
	return new String[] {
			"beans.xml"
	};
}

The "@Override" bit above the method is indicating to the compiler that you are intentionally overriding a method declared in the superclass or interface - this is to help you ensure that you keep the method names correct for any code calling them.

Now, there are two minor remaining items to take care of before we deploy: exposing the class to Java code elsewhere and making sure the newly-created source folder is included in the build.

To do the former, open up the MANIFEST.MF file and go to the "Runtime" tab. Here, we can list the Java packages that should be available to code within the application (or other plugins). Since we'll want to have code that accesses the bean directly, add the "com.example.xsp.beans" package to the "Exported Packages" list:

This is important to do for any new package that is going to be accessed externally.

Next, go to the "build.properties" tab of the MANIFEST.MF editor (or open the build.properties file directly) and add the src/main/resources folder to the list of source folders:

source.. = src/main/java/,\
           src/main/resources/
output.. = target/classes
bin.includes = META-INF/,\
               .,\
               plugin.xml

The syntax of this file takes a little getting used to - the backslashes on many lines indicate that the next line should also be part of the same definition, but is split up for legibility.

With that, we're all set on this side. Head over to the site.xml file in the update site project and click "Build All" to build a new version of the plugin. Once that's done, go to the Update Site NSF for the server and import the new build, and then restart task http on the server console. Similarly, use the File → Application → Install routine in Designer to install the new version of the plugin there, and relaunch Notes.

Now that the bean is installed, you can test it out to make sure it's working by creating a new XPage in an app on the server with this content on the "Source" tab:

<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core">
	<xp:text value="#{exampleBean.foo}"/>
</xp:view>

When you launch the page in a browser, it should say "hello from ExampleBean". To try out the static get() method we created, change the XPage source to:

<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core">
	<xp:text><xp:this.value><![CDATA[#{javascript:
		"the bean says: " + com.example.xsp.beans.ExampleBean.get().getFoo()
	}]]></xp:this.value></xp:text>
</xp:view>

When you view that, you should see basically the same thing, prefixed with "the bean says: ". Whenever I make a change to test something that should have the same visual result, I like to toss in a small variation to make sure that my changes did, in fact, take effect.

For a lot of cases, this step may be all you need - many XPage libraries serve primarily as storehouses for Java classes and a few managed beans. The next couple steps are going to deal with some variations on this, plus some nice-to-haves.

That Java Thing, Part 6: Creating the Feature and Update Site

Nov 8, 2015 10:45 AM

Tags: java xpages
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment
  19. Java Hiccups
  20. Bitwise Operators
  21. Java Grab Bag 2

The last post covered turning our nascent plugin into a proper XPages library, so now it's time to fill in the remaining pieces to get it installed.

To do that, we'll need a feature and an update site. The feature will point to our plugin - in a more-complicated setup, this could point to several related plugins, but we'll just need the one. The update site will do similarly, referencing the feature in a way that Eclipse-type platforms know about.

Go to File → New and choose "Feature Project" within "Plug-in Development":

Hit "Next" and fill out the feature details:

  • Set the project name to "com.example.xsp.feature"
  • Override the location with a folder inside the Git repository (remembering to include the feature name in the path, to avoid placing it in the top-level folder)
  • Pick a feature name, such as "Example XSP Library Feature"

Click "Next", find and check "com.example.xsp.plugin" in the list of plugins to include in the feature, and click "Finish":

That's about it for the feature. Next up: the update site. As before, go to File → New and choose "Update Site Project" within "Plug-in Development":

There's not much configuration to do for this one: just name it "com.example.xsp.update" and override its location to be within your Git repository:

Once the update site is created, click "Add Feature.." in the site.xml editor and add the new "com.example.xsp.feature" project:

One last optional step is to set up a category in the update site, which will help organization on the server. To do that, click "New Category" and give it a name and ID. Then, drag the feature in the list onto the category to add it to the group:

Now the update site definition is set up, and the next step is to click "Build All" to have Eclipse package your plugin inside the update site. That will create a number of files within your update site project, which are the actual artifacts we need:

Next is to install the library into Designer. Launch Notes, then Designer (the sequence is important to avoid a bug with NSF update sites, so I stick to this habit), and open Preferences. In there, go to the "Domino Designer" section and check "Enable Eclipse plug-in install":

Hit "OK" to close the preferences and then go to File → Application → Install. In the dialog that pops up (sometimes there's a lengthy delay before it does for some reason), select "Search for new features to install" and click "Next". On the next pane, choose "Add Folder Location..." and browse to the update site project:

The next dialog will prompt for a name, and for now the default will do. Click "OK" in this dialog and "Finish" in the main one. If all goes well, that should bring up another dialog after a couple of seconds, which will allow you to select the feature:

Click "Next" and then "Finish". As it installs, you'll get a prompt to install the unsigned plugin, so select "Install this plug-in" and click "OK". Designer will do its thing and then display a small toast window in the bottom right asking if you want to restart now. It at least used to be the case that clicking this didn't do a proper restart (presumably due to the Notes stuff), so I still avoid clicking it. Instead, close Notes and Designer, and then relaunch them (Notes first again).

Now, to see that everything is working, open an existing application in Designer (or make a new one), go to Application Configuration → Xsp Properties → Page Generation tab, and look for "com.example.xsp.library" in the "XPage Libraries" section:

If it's not there, something went wrong. Unfortunately, debugging this sort of thing can get hairy, so it'd probably be best to ask me or someone else knowledgable about plugins for assistance for now.

Assuming it did work, though: great! Next up is the installation on the server. This is a whole tutorial on its own, and fortunately IBM has done the job for me. The upshot of those instructions is:

  • Create a database on the server using the "Eclipse Update Site" template (updatesite.ntf) and clean up the ACL to something appropriate. To see the template, make sure to click "Show advanced templates".
  • Import com.example.xsp.update by using the "Import Local Update Site" option and pointing to its site.xml file
  • Set the notes.ini property "OSGI_HTTP_DYNAMIC_BUNDLES" on the server to point to the file name of that update site (see the example after this list)
  • Restart HTTP or the entire server
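
For that third step, the notes.ini line would look something like this, assuming the update-site database was created as "updatesite.nsf" in the data directory (the file name here is hypothetical - use whatever you created):

OSGI_HTTP_DYNAMIC_BUNDLES=updatesite.nsf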

Once HTTP or the server is restarted, you can test to see if it's working by creating an application on the server that has the library checked in the Xsp Properties page as above, and then visiting an XPage in it with a browser. If it's working, it will load normally; otherwise, it will complain about the application relying on a missing library.

So that covers the build and installation cycle! There's one last change to make in the projects before we commit them to Git. The update site project produces tons of Jar files - two for every "Build All" click - and there's no need to check them into the repository. This is a job for a .gitignore file, which we'll put in the update site project because we don't want to ignore intentional Jars later - .gitignore files cascade hierarchically. Right-click on the update site project in Eclipse and choose New → File:

Name the file ".gitignore" (with the starting dot) and click "Finish". The file contents should be a single line:

*.jar

Now, go to commit the changes to the repository, and you should be able to see the projects and Git ignore file we created, but not the Jars:

Now that we have the library built and deployed to the server, the next step will be to actually make it useful, by adding some Java code and accessing it from the XPages application.

That Java Thing, Part 5: Expanding the Plugin

Nov 6, 2015 8:48 AM

Tags: java xpages
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment
  19. Java Hiccups
  20. Bitwise Operators
  21. Java Grab Bag 2

In the last post, we created an empty plugin project for our XPages library. Today, we'll flesh that plugin out a bit and add the actual XSP hooks.

The first step is to start filling in some details in the plugin's Manifest file, which is the core description of what the plugin does and what it requires in its environment. To begin with, open the MANIFEST.MF file in the META-INF folder of your project and check the "This plug-in is a singleton" checkbox:

During one of the steps later, Eclipse would have yelled at us to do that anyway. This checkbox means that the plugin expects to be the only one of its kind active on the server. This is because it will contribute to the XSP Library extension point, and it should prevent duplicate contributions of the same library from different versions.

Next, go to the Dependencies tab of the Manifest editor, click "Add..." in the "Required Plug-ins" section on the left, and add "com.ibm.xsp.core":

Do the same with "com.ibm.commons":

Now, save and close the Manifest. The next step is to fix up a minor problem I forgot about in the last post: the base package name. By default, the base package is named after the plugin project, which means it now contains a redundant "plugin" part. That's not a problem per se, but it's not needed, and this will be a good introduction to Eclipse refactoring.

Expand the "src/main/java" (or "src" if you left it as the default) package folder within the project, then right-click on "com.example.xsp.plugin" and choose "Refactor" → "Rename...". Change the name to "com.example.xsp", make sure "Update references" is checked, check "Update fully qualified names in non-Java text files", and click "Preview". This should list a couple changes - Eclipse looks for references to the class in both Java and non-Java files to try to cover all of the bases. In a larger project, there may still be lingering references elsewhere, but in this case it does everything for us. Click "OK".

It's possible that there may be a small detour at this point. On my machine, I noticed a strange problem in Eclipse after making this change: it set the output folder of the project to a nonsense directory within the source. To make sure this isn't happening, double-click on build.properties and go to the "build.properties" tab. Make sure the "output.." line reads output.. = target/classes (or output.. = bin if you left the defaults). This is the file that controls many of the compilation settings for the plugin, and this specifically specifies the location for the compiled Java classes.

Now, time to expand the Activator a bit. Expand "com.example.xsp" and open the "Activator.java" file. The Activator class is an optional but there-by-default class that provides a couple hooks during the lifecycle of the plugin. For XSP libraries, there's usually not too much going on here, but it can serve as a useful coordinating and debugging point.

One such potential use is a centralized logger configuration. At least for now, we'll just use the basic JVM logging classes, since "proper" OSGi logging is a whole thing. With the addition of a logging property and initialization message, the Activator looks like this:

package com.example.xsp;

import java.util.logging.Level;
import java.util.logging.Logger;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {

	private static BundleContext context;
	
	public static final Logger log = Logger.getLogger(Activator.class.getPackage().getName());
	static {
		log.setLevel(Level.FINEST);
	}

	static BundleContext getContext() {
		return context;
	}

	/*
	 * (non-Javadoc)
	 * @see org.osgi.framework.BundleActivator#start(org.osgi.framework.BundleContext)
	 */
	public void start(BundleContext bundleContext) throws Exception {
		Activator.context = bundleContext;
		
		if(log.isLoggable(Level.INFO)) {
			log.info("Starting Example XPages Library");
		}
	}

	/*
	 * (non-Javadoc)
	 * @see org.osgi.framework.BundleActivator#stop(org.osgi.framework.BundleContext)
	 */
	public void stop(BundleContext bundleContext) throws Exception {
		Activator.context = null;
	}

}

The static block that sets the log level is a good trick to know about: that block executes once, when the class is first loaded. It's distinct from a constructor, which runs each time a new object of the class is created. It may help to think of the static portions of a class (the properties, methods, and this block) as a sort of special singleton version of the class created by the runtime. This sort of thing is non-obvious to new Java developers, but it starts to make sense after you do it for a while.
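
As a purely illustrative (and hypothetical) example of that distinction:

public class StaticDemo {
	static {
		// Runs once, when the JVM first loads the class
		System.out.println("StaticDemo class loaded");
	}

	public StaticDemo() {
		// Runs each time a new instance is created
		System.out.println("StaticDemo instance created");
	}
}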

With the Activator in place, the next step is the actual XPages library class, which provides the specific details about the plugin's XSP interactions. Right-click on the "com.example.xsp" package and choose "New" → "Class" (if "Class" doesn't appear in that list, choose "Other" and then find "Class" within the "Java" folder). In the "New Java Class" dialog, click "Browse..." in the "Superclass" line around the middle, and look for the class AbstractXspLibrary:

Set the class name to "ExampleLibrary" and leave everything else as the defaults:

When creating the class, it will fill in a single required method: getLibraryId(). This method's job is to return a string that should be unique across XPages libraries, and which will show up as the library's identifier in Designer. By convention, this is related to the Java reverse-DNS name you're using, plus a suffix like ".library":

@Override
public String getLibraryId() {
	return Activator.class.getPackage().getName() + ".library";
}

There are a number of other methods, though, that are worth overriding in a normal plugin, related to the plugin setup, the other XSP libraries it depends on, and its versioning. This is a reasonable baseline for a modern XPages library that will use the Extension Library:

package com.example.xsp;

import java.util.logging.Level;
import java.util.logging.Logger;

import com.ibm.xsp.library.AbstractXspLibrary;

public class ExampleLibrary extends AbstractXspLibrary {
	
	private static final Logger log = Activator.log;
	
	static {
		if(log.isLoggable(Level.FINE)) {
			log.fine(ExampleLibrary.class.getName() + " loaded");
		}
	}

	@Override
	public String getLibraryId() {
		return Activator.class.getPackage().getName() + ".library";
	}

	@Override
	public String getPluginId() {
		return Activator.getContext().getBundle().getSymbolicName();
	}
	
	@Override
	public String getTagVersion() {
		return "1.0.0";
	}
	
	@Override
	public String[] getDependencies() {
		return new String[] {
				"com.ibm.xsp.core.library",
				"com.ibm.xsp.extsn.library",
				"com.ibm.xsp.domino.library",
				"com.ibm.xsp.designer.library",
				"com.ibm.xsp.extlib.library"
		};
	}
	
	@Override
	public boolean isGlobalScope() {
		return false;
	}
}

With that class in place, there's one final step for making this plugin a proper XSP library: the extension point. Extension points are how the XPages runtime knows which plugins provide libraries like this. Open the MANIFEST.MF file again and click on "Extensions" on the right side of the first page:

Eclipse will ask you if you want to show the hidden Extensions panel, which you do. On that tab, click "Add..." and choose "com.ibm.commons.Extension":

Once added, there will be two fields on the right side. In "type", enter "com.ibm.xsp.Library". In "class", click "Browse" and search for the name of the library created earlier, "com.example.xsp.ExampleLibrary". Once this is added, you should be able to click on the "plugin.xml" tab in the editor and see something like this:

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.4"?>
<plugin>
   <extension
         point="com.ibm.commons.Extension">
      <service
            class="com.example.xsp.ExampleLibrary"
            type="com.ibm.xsp.Library">
      </service>
   </extension>

</plugin>

There will also be a file named "plugin.xml" in your project.

After all that, we have a functional basis for our plugin. There are definitely a lot of things to remember in this process, but they start to make a sort of sense the more you do it.

With that set, commit your changes, where you can see the moved Activator file and the addition of the Library stuff:

In the next post, we'll create the feature and update-site projects needed to actually install this plugin in Designer and Domino.

That Java Thing, Part 4: Creating the Plugin

Nov 5, 2015 9:38 AM

Tags: java xpages
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment
  19. Java Hiccups
  20. Bitwise Operators
  21. Java Grab Bag 2

To make a basic XPages library, we'll need to create the trio of OSGi projects: the plugin, the feature, and the update site. For a long time, the XSP Starter Kit has been a great go-to starting point for this sort of thing. It definitely covers almost all of the potential ground, but it can be a bit overkill when you just want to put some classes in a shared place. So, for this exercise, we'll start from scratch.

But before we do that, we should create a local Git repository first. This isn't required, but it's a good idea, and Git repositories are so "cheap", technically, that there's no reason not to. There are a number of ways to do it - you can create and manage them via the command line, via a dedicated tool like SourceTree, or via the embedded Git client in Eclipse. We'll do the last one here.

To work with Git repositories, first add the Git Repositories view (Eclipse's "view" refers to the panes you see in the window) to your Eclipse UI by going to Window → Show View → Other...:

Once you add it, you can click on "Create a new local Git repository" in the pane - I suggest creating a folder beneath the "git" folder in your home directory, for organizational purposes:

Now, on to creating an actual project. To do that, go to File → New → Project... and find "Plug-in Project" inside "Plug-in Development":

In the form that shows up after you hit Next, fill in some project details:

  • Set the project name to "com.example.xsp.plugin"
  • Override the location with a folder inside the Git repository you created earlier (make sure to include the plugin name in the path, rather than using the top level of the Git repo)
  • Change the source folder to "src/main/java" and output folder to "target/classes". These are nods towards Maven that aren't strictly required, but are a "may as well" thing.
  • Set the target platform to "an OSGi framework" → "Equinox"

On the next page, the only change needed is to set the Execution Environment to "JavaSE-1.6" (at least until Domino gets a newer JVM):

Then, click "Finish". It will ask you if you want to switch to the "Plug-in Development" perspective - "perspectives" are an Eclipse term for groupings+layouts of views for different purposes. You can choose either Yes or No, since the other perspective is very similar to the default J2EE perspective. If you choose "Yes", you'll have to re-add the Git Repositories view as above.

Once you've created the project, it's time to check it in to Git. Right-click on the Git repository in the Git Repositories view and choose "Commit...". The first time you do this, it will prompt you for a name and email address, which are pretty arbitrary, but it's good to keep them consistent across your Git presences. On the commit page, provide a useful message and click the "Select All" button (the middle one in the top right of the "Files" section) to include all of the newly-created files, and hit "Commit":

Now that the plugin's skeleton is in place, the next post will cover some details about it and how to turn it into an XSP Library.

That Java Thing, Part 3: Eclipse Prep

Nov 4, 2015 4:26 AM

Tags: java xpages
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment
  19. Java Hiccups
  20. Bitwise Operators
  21. Java Grab Bag 2

Before you dive into OSGi development, you'll need a proper development environment, and that usually means Eclipse. Domino Designer, being Eclipse-based, can technically do the job and has the advantage of some of the setup being done, but it's so out-of-date at this point that it's not worth it (believe me, I tried originally). Newer Eclipse versions have improved noticeably, so it's best to grab a recent version. Currently, that means Mars, or Eclipse 4.5.1.

Before Eclipse, you'll need to install Java if you haven't already. To do that, go to Oracle's site and download the latest release for your platform. Once you install it and find out how many devices run Java, you can move on to installing Eclipse.

Head to the Eclipse download page and download the "Eclipse IDE for Java EE Developers" for your platform. They've recently added an Installer at the top of the page, and that works as well, but adds some new wrinkles that haven't made it into common knowledge yet. For now, you may want to stick with the "...or download an Eclipse Package" section. The "bitness" (32 or 64) doesn't matter too much - you should generally match the bitness of your OS.

Once you have Eclipse installed and running ("installation" generally involves dragging the extracted ZIP file content somewhere), the next step is to teach Eclipse about the XPages artifacts. If you're running on Windows and have a local Domino server, the XPages SDK may be your friend, though it hasn't been fully tested for Mars. If you are running on a non-Windows OS or otherwise don't want to go the full SDK route, there's a quick route that lacks the automation and debugger integration.

Setting up the environment used to be something of a hassle, but it has been much simpler since IBM released the Update Site for Build Management. Primarily made for Maven-development use (sort of), this Update Site also serves as a quick stop to pick up all of the dependencies you need for XPages development in one place.

To make use of the Update Site, download the file from OpenNTF, extract it, and then extract the UpdateSite.zip file contained therein. You should end up with a folder like this:

In Eclipse, go to the preferences ("Window" → "Preferences" on Windows or "Eclipse" → "Preferences" on a Mac), then go to "Plug-in Development" → "Target Platform":

Edit the existing platform and, in the "Locations" tab of the ensuing dialog, click "Add..." → "Directory" → "Next", browse to that extracted UpdateSite folder, and click "Next" to verify that it finds a bunch of plugins, and then "Finish". Now, you'll be able to reference XPages platform plugins in your projects.

At this point, Eclipse is set up for the essentials, but it may still be a good idea to set up an appropriate JVM for Domino development to make sure you have a consistent baseline and to avoid problems where newer Java releases added classes and methods not available in Java 6. Fortunately, on Windows (and Linux?), a Notes installation comes with a JVM that does the job nicely. To add it, go back to preferences, then "Java" → "Installed JREs" → "Add...". Choose "Standard VM" → "Next", browse to the "jvm" folder in your Notes (or Domino) installation directory, name it something appropriate, and then click Finish:

On the Mac, you can get a similar Java release from Apple, which nowadays will end up in /Library/Java/JavaVirtualMachines (older installers put it in /System/Library/Java/JavaVirtualMachines).

Now that Eclipse is set up, the next step will be to break ground on making an actual XPages plugin.

That Java Thing, Part 2: Intro to OSGi

Nov 3, 2015 6:44 AM

Tags: java xpages
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment
  19. Java Hiccups
  20. Bitwise Operators
  21. Java Grab Bag 2

OSGi once stood for "Open Services Gateway initiative", but that name slid from "impossibly vague" to "entirely obsolete" rather quickly. For our needs, OSGi is a mechanism for bringing sanity to the "big pile of Jars" that you might otherwise have in a large Java system. It provides a standardized way to describe the name of a library, its version, its dependencies, its capabilities, and its interactions with other libraries. In this way, rather than just "commons-io-2.1.jar", you can conceptually deal with "I require org.apache.commons.io, version 2.0 or above" and the environment can suss out what that means based on what is installed.

As Domino developers, we care about a subset of the OSGi concepts, primarily those related to creating and loading plugins. There is a handful of vital terms involved, some of which I've already been tossing around:

  • JAR: a Java Archive file is a ZIP file aimed at Java, generally containing classes and other resources (text files, images, and so forth). There are variants like WAR (Web Application Archive) and EAR (Enterprise Archive), but they're all still ZIP files.
  • Bundle: A "bundle" is, in effect, OSGi's word for a JAR. It expands on a contained file called META-INF/MANIFEST.MF (so named because Java likes to be difficult sometimes) to define the description of the bundle: its name, version, and so forth.
  • Plug-in: This is basically the same thing as a bundle. Presumably, it technically refers to a specialized kind of bundle, but we can use the terms interchangeably. It also technically has that hyphen in it, but I often write it "plugin" anyway.
  • Feature: This excessively-vague term comes from the Eclipse ecosystem rather than core OSGi, and it's the way plugins are grouped together into a single conceptual installable unit. For example, the main feature included in the Extension Library references ten distinct plugins.
  • Update Site: Update sites are to features what features are to plugins: a way to organize these features into a collection, either to install them all at once or to provide a pool of available software to install. Eclipse-the-IDE uses them extensively for the latter case. As Domino developers, we primarily use them to distribute libraries and install them into the server's Update Site NSF.
  • Target Platform: A target platform refers to the set of plugins available in a given environment, which is used to determine whether a given plugin has the dependencies it will need to run. In our case, the target platform is usually the XPages runtime environment, containing OSGi base plugins, Java servlet support plugins, and the XPages runtime plugins themselves (among others). Plugin development environments use these heavily.
  • Extension Point: An extension point is a way for a plugin to declare that it can either provide or consume a given service. For example, you might have a plugin that says "I can make use of any plugin that provides a color" and then another that says "I can provide the color blue". By using extension points, the first plugin can find the other, even though it may have been developed entirely independently.
  • Java EE: Java Enterprise Edition (sometimes written J2EE thanks to Java's weird relationship with versions) is basically Java's standard for "web server stuff" (though it covers other aspects as well). XPages is sort of like a Java EE environment, but the experience is distinct. Notably, a Java EE server is a general platform on which other web-dev toolkits - JSP, JSF, Vaadin, JAX-RS, and so forth - can run. Java EE isn't inherently related to OSGi, but the two eventually bleed into each other with Domino and WebSphere.
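
To make the "bundle" term concrete, here's a minimal sketch of what a MANIFEST.MF might contain - the names and version here are hypothetical:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.xsp.plugin
Bundle-Name: Example XSP Plug-in
Bundle-Version: 1.0.0.qualifier
Bundle-RequiredExecutionEnvironment: JavaSE-1.6
Require-Bundle: com.ibm.xsp.core

Bundle-SymbolicName and Bundle-Version are what other bundles reference, while Require-Bundle declares a dependency for the environment to resolve at load time.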

OSGi in general goes beyond that, and some other aspects may be useful down the line (such as interacting with the OSGi console), but for now it's enough to think of it as a set of decorations for your projects. With some of the basic terms in line, next I will move on to setting up an Eclipse environment to start development.

That Java Thing, Part 1: The Java Problem in the Community

Nov 2, 2015 1:20 PM

Tags: java xpages
  1. That Java Thing, Part 1: The Java Problem in the Community
  2. That Java Thing, Part 2: Intro to OSGi
  3. That Java Thing, Part 3: Eclipse Prep
  4. That Java Thing, Part 4: Creating the Plugin
  5. That Java Thing, Part 5: Expanding the Plugin
  6. That Java Thing, Part 6: Creating the Feature and Update Site
  7. That Java Thing, Part 7: Adding a Managed Bean to the Plugin
  8. That Java Thing, Part 8: Source Bundles
  9. That Java Thing, Part 9: Expanding the Plugin - Jars
  10. That Java Thing, Part 10: Expanding the Plugin - Serving Resources
  11. That Java Thing, Interlude: Effective Java
  12. That Java Thing, Part 11: Diagnostics
  13. That Java Thing, Part 12: Expanding the Plugin - JAX-RS
  14. That Java Thing, Part 13: Introduction to Maven
  15. That Java Thing, Part 14: Maven Environment Setup
  16. That Java Thing, Part 15: Converting the Projects
  17. That Java Thing, Part 16: Maven Fallout
  18. That Java Thing, Part 17: My Current XPages Plug-in Dev Environment
  19. Java Hiccups
  20. Bitwise Operators
  21. Java Grab Bag 2

Java has been a bugbear for the Domino community for a long time. Though Notes and Domino have presented a top-to-bottom (mostly) Java environment for years, the monumental inertia of the corporate development community, the platform's tortured history of insufficiently-fleshed-out Java hooks, and IBM's "pay no attention to that man behind the curtain" pitch with regard to the language created an environment where Java is often something for "other developers".

XPages represented a tremendous leap forward for development on the platform, ditching the completely-unique bizarre forms-based web-dev experience in favor of something much closer to modern web development. At the time, the unstructured, SSJS-heavy façade presented by Designer made a sort of sense: the switch to XPages was already a huge change for Domino developers, and making XPage development look sort of like classic Notes/Domino development was a spoonful of sugar with the medicine.

But the critical flaw of XPages development has never been fixed: there's no smooth path from "hello, world" to a well-structured complex application. SSJS, as implemented in XPages, is unsuitable to the task, while Designer's presentation of the Java layer ranges from "non-obvious" to "hostile". Still, IBM did a good job of presenting the next major tier for XPages developers: writing Java first in the NSF and then in OSGi plugins. The problem has always been in figuring out how to get there.

This hurdle started out as an inconvenience just for those who wanted to figure out how the machine worked, but has grown into a significant problem for all Domino developers. While the XPages stack has remained relatively stagnant, the rest of the development world has raced forward, with new technologies giving rise to new frameworks and transforming the old. This is no discredit to the XPages team: they have consistently put in tremendous work across the board, but it's quite an industry to keep up with.

So we're in a tough spot now. The XPages platform isn't going away any time soon, but it's important for us as developers to progress. One option is to abandon ship: pack up and move to some unrelated platform, leaving Domino behind entirely. But most of us, out of personal affection, professional acumen, or (primarily) business requirements, don't want to do that. Unfortunately, the path to improvement with Domino has only gotten more complicated with time. It began with learning about Java, OSGi, and Eclipse proper, but has since expanded to include a whole rogues' gallery of other technologies: Maven, Tycho, m2e, Jenkins, Git, Wink/JAX-RS, jQuery, Bootstrap, Angular, and on and on.

There's no time like the present, though. I've walked this path, and I want to help everyone else walk it too. To kick that off, this series is going to cover the process of creating a basic XPages plugin, making it a little more complex, converting it to Maven, and building it with Jenkins.

First up, the next post will provide an introduction to the concept of OSGi and the parts of it that we need to know for Domino development. Things may seem weird along the way, but trust me: it's worthwhile.

XPages Devs: Enable "Refresh entire application when design changes"

Sep 14, 2015 11:48 AM

Tags: xpages java

When developing an XPages application of beyond-minimal complexity, you're likely to run into a problem where your app starts saying that a class is incompatible with itself in one way or another. The exception usually traces down to something like "foo.SomeClass is incompatible with foo.SomeClass" or "cannot assign instance of foo.SomeClass to field X..." where the field is that same class. This has cropped up since time immemorial.

It's actually, though, something that IBM sort-of fixed in 8.5.3 by adding an xsp.properties option of xsp.application.forcefullrefresh=true and then, in 9.0, a GUI option in Xsp Properties:

Basically, this checkbox amounts to "don't break my app periodically". From what I gather, the default behavior is in the interest of being clever with classloaders, but can lead to creeping problems in complicated apps, to the point where changing seemingly-innocuous things like the ACL breaks the app until you restart HTTP or "kick" the app by modifying certain design elements (namely, Java classes or faces-config.xml). Since that behavior is never desired, there is no reason to not check this box, and I enable it on every new NSF I create.
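
For reference, the checkbox corresponds to this line in the NSF's xsp.properties file, if you prefer to set it there directly:

xsp.application.forcefullrefresh=true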

Dealing with OSGi Fragments in Tycho and Designer

Sep 11, 2015 7:30 PM

Tags: java tycho osgi

This post is partly to spread information publicly and partly a useful note to my future self for the next time I run into this trouble.

In OSGi, the primary type of entity you're dealing with is a "Bundle" or "Plug-in" (the two terms are effectively the same for our needs). However, there's a sort of specialized type that you may run into called a "Fragment". They're similar to a plug-in in that they're a contained unit of Java code and resources, but they have the special property that they're attached to another plug-in and automatically come along for the ride when the main plug-in is used. This is useful in a couple situations, such as code organization, serving platform-specialized native libraries, after-the-fact additions, or providing library dependencies.

In the basic case, the only requirements are to specify in the fragment what the "parent" plug-in is (Eclipse provides a field for this in its editor) and then to include the fragment in the installable feature alongside the plug-in. However, there are a few situations where a bit more work is required if you want to access the classes in the fragment: when used as part of a Tycho build and when used as an XSP Library in Designer (which may also apply to Eclipse dependency use generally).
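
In the fragment's MANIFEST.MF, that "parent" relationship boils down to a single header - a minimal sketch, using the example bundle names from the next section:

Fragment-Host: some.main.plugin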

Tycho

When doing a full Tycho build, even if both the plug-in and its fragment(s) are part of the current build, another project won't automatically include the fragment when doing the compilation. This can lead to a situation where the projects will compile cleanly in Eclipse (which handles the fragment attachment) but fail in Tycho. The trick, though small, is non-obvious: you have to tell the project that is using the fragment code about the fragment in its build.properties.

So say you have three projects: the main plug-in (some.main.plugin), a fragment attached to it (some.main.plugin.fragment), and the project consuming them (some.dependent.plugin). The normal first step is to include the main plug-in in the dependent plug-in's MANIFEST.MF as usual:

Require-Bundle: some.main.plugin

In Eclipse, this will suffice: both the main plug-in and its fragment will show up in the "Plug-in Dependencies" library. For Tycho, though, you have to tip it off using a line like this in build.properties:

extra.. = platform:/fragment/some.main.plugin.fragment

Think of this as saying "hey, dummy, don't forget about the fragment". Once you have that line, the Tycho-enabled Maven build should be able to resolve the fragment's classes and all will be well.

Designer

When using the plug-in and its fragment in an XSP Library in Designer, there's a similar-seeming problem: though Designer will include any direct dependencies of your Library plug-in in the class path, it won't pick up on any fragments by default (though it seems that Domino does). The trick here is that the primary plug-in has to tell Designer that it accepts fragments, which is done by setting Eclipse-ExtensibleAPI in the MANIFEST.MF file for some.main.plugin, like so:

Eclipse-ExtensibleAPI: true

Once that's in place, the fragment should start showing up in your NSF's classpath when the library is enabled.

Adding Components to an XPage Programmatically

Jul 19, 2015 9:16 AM

Tags: xpages java

One of my favorite aspects of working with apps using my framework is the component binding capability. This lets me just write the main structure of the page and let the controller do the grunt work of creating fields with validators and converters. There's a lot of magic behind the scenes to make it happen, but the core concept of dynamic component creation is relatively straightforward.

An XPage is a tree of components, and those components are all Java objects on the back end, which can be manipulated and added or removed programmatically. To demonstrate, I'll start with this basic XPage:

<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core" beforePageLoad="#{controller.beforePageLoad}" afterPageLoad="#{controller.afterPageLoad}">
	<xp:div id="container">
	</xp:div>
</xp:view>

Now, I'll add a basic form table using the afterPageLoad method in the controller class:

package controller;

import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;

import com.ibm.xsp.component.UIViewRootEx2;
import com.ibm.xsp.component.xp.XspInputText;
import com.ibm.xsp.extlib.component.data.UIFormLayoutRow;
import com.ibm.xsp.extlib.component.data.UIFormTable;
import com.ibm.xsp.extlib.util.ExtLibUtil;
import com.ibm.xsp.util.FacesUtil;

import frostillicus.xsp.controller.BasicXPageController;

public class home extends BasicXPageController {
	private static final long serialVersionUID = 1L;

	@SuppressWarnings("unchecked")
	@Override
	public void afterPageLoad() throws Exception {
		super.afterPageLoad();

		UIViewRootEx2 view = (UIViewRootEx2)ExtLibUtil.resolveVariable(FacesContext.getCurrentInstance(), "view");
		UIComponent container = FacesUtil.findChildComponent(view, "container");

		UIFormTable formTable = new UIFormTable();
		formTable.setFormTitle("Some Form");
		formTable.setStyle("margin: 2em; width: 20em");
		container.getChildren().add(formTable);
		formTable.setParent(container);

		UIFormLayoutRow formRow = new UIFormLayoutRow();
		formRow.setLabel("Name");
		formTable.getChildren().add(formRow);
		formRow.setParent(formTable);

		XspInputText inputText = new XspInputText();
		formRow.getChildren().add(inputText);
		inputText.setParent(formRow);
	}
}

There are a few concepts to get a handle on here, but fortunately they're not as esoteric as other aspects of back-end XPages development.

To start out with, there's the question of how you're supposed to know what classes the components are. The best way to find this out is to create a basic XPage containing the control with an ID and then go to the Package Explorer view, then the "Local" source folder, and find the class file for your page in the "xsp" package. In there, you can see the code that actually generates the XPage (which is doing basically the same thing as we're doing here). Look for the ID you gave the control on the page and you can find the class behind it.

Next is the job of finding the parent control you're going to attach the new components to. In this case, I used the FacesUtil class to search for the component by its ID, because I know it's the only one on the page with that base ID. This tack will usually do the trick for you, but there are other ways to find it, such as binding or XspQuery.

Finally, there's the need to both add the new child to the parent's children list and to set the parent in the child itself. This is a bit of housekeeping boilerplate, but it has to be done.

Once you have this code running, you get a basic form (using the Bootstrap 3.2.0 theme in this case):

Some Form

This can get much more in-depth: anything you can do declaratively in the XPage XML, you can do programmatically in Java. You can also manipulate existing components: change their properties, rearrange them, or remove them from the tree entirely. I've found knowledge of this to be both useful as a part of the toolbox as well as a way to really clear up the mental model of what's going on on the page.
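
As a small sketch of that last point, removal is the mirror image of the addition code above, using the same formTable and container objects:

// Detach the generated form table from the page's component tree
container.getChildren().remove(formTable);
formTable.setParent(null);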

Parsing JSON in XPages Applications

May 21, 2015 12:47 PM

Tags: json java

David Leedy pointed out to me that a post I made last year about generating JSON in XPages left out a crucial bit of followup: reading that JSON back in. This topic is a bit simpler to start with, since there's really just one entrypoint: com.ibm.commons.util.io.json.JsonParser.fromJson(...).

There are a few variants of this method to provide either a callback to call during parsing or a List to fill with the result, but most of the time you're going to use the variants that take a String or Reader of JSON and convert it into a set of Java objects. As with generating JSON, the first parameter here is a JsonJavaFactory, and which static instance property you choose matters a bit. Contrary to the first-Google-result documentation, there are three types, and they differ slightly in the types of objects they output:

  • instance: This uses java.util.HashMap for JSON objects/maps and java.util.ArrayList for JSON arrays.
  • instanceEx: This is like instance, but uses JsonJavaObject for JSON objects/maps.
  • instanceEx2: Like instanceEx, this uses JsonJavaObject for objects/maps but also uses JsonArray for JSON arrays.

Since JsonJavaObject and JsonArray implement the normal Map<String, Object> and List<Object> interfaces you'd expect (and, indeed, subclass HashMap and ArrayList), you can treat them interchangeably if you're just using the interfaces like you should, but it may matter if you're doing something where you expect certain traits of one of the concrete classes or want to use the explicit getString, etc. methods on the JSON-specific ones.*

Anyway, with that out of the way, the actual code to use these is pretty straightforward:

String json = "{ \"foo\": 1, \"bar\": 2}";
Map<String, Object> result = (Map<String, Object>)JsonParser.fromJson(JsonJavaFactory.instance, json);

In this case, I'm immediately casting the result from Object to Map because I'm sure of the contents of the JSON. If you're less confident, you should surround it with instanceof tests before doing something like that. In any event, that's pretty much all there is to it. As with generating JSON, SSJS wraps this functionality in a fromJson method (which may or may not produce the same objects; I haven't checked).
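
For example, a more defensive version of the same call might look like this, with the same imports as above:

Object parsed = JsonParser.fromJson(JsonJavaFactory.instance, json);
if(parsed instanceof Map) {
	@SuppressWarnings("unchecked")
	Map<String, Object> result = (Map<String, Object>)parsed;
	// ... work with the object's properties
} else if(parsed instanceof List) {
	// ... the incoming JSON was an array at the top level
}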


* You could also subclass the standard factory, as the stock instances themselves do, if you have specific needs or desires, like using a LinkedHashMap instead of HashMap to preserve the order of the object's keys.

Quick PSA: LS2J Problems in 9.0.1 FP3

May 19, 2015 2:25 PM

Tags: java

While I'm at it, I realized that it may be useful to further spread information about the problem that led to me fiddling with the latest IFs and JVM patches in the first place: the LS2J problems in 9.0.1 FP3. Specifically, it's borked. The main way most people have encountered this is via an exception dialog when attempting to add new plugins to an Update Site NSF, since that uses LS2J to accomplish its task - it displays several layers of stack trace, and the upshot is that there's a java.lang.InternalError about calling a constructor.

The fix is to install the JVM patch from here; Interim Fix 3, while also worth an install, doesn't cover this.

There's another caveat about that, though, for those who have both Notes and Domino installed on the same machine. Since the installer for the JVM patch is the same for both, it will pick one of the two and not give you a choice to choose the other. In the case of the 64-bit patch (I don't know why it picks the client in that case), Ulrich Krause posted workaround steps. In my case, both were 32-bit, it found Domino first and not Notes, and the same commands didn't work. My ugly workaround was to fire up regedit, browse to (if I recall correctly) HKEY_LOCAL_MACHINE\SOFTWARE\IBM, rename the "Domino" folder/key to something else, run the installer, and then rename the folder/key back.

How I Use JAX-RS in the frostillic.us Framework

May 1, 2015 5:59 PM

Tags: java rest

Inspired by Toby Samples's new blog series on JAX-RS in Domino, I'd like to share a description of how I made use of it to write the REST services in the frostillic.us Framework. This is not intended to be a from-scratch introduction - Toby is handling that well so far - but instead assumes a certain familiarity with OSGi development and with why you would want to do this in the first place.

The goal of my REST services is to provide an automatic REST/JSON API for any Framework model objects used in a database without having to include any servlet code in the database itself. It's a business-logic-friendly analogue to the Domino Access Services and "borrows" heavily from that code base. It doesn't use the DAS extension point, though, in large part because I didn't know that existed until recently. As far as I can tell, using that extension point saves you some bootstrapping work and makes it possible to enable/disable the service in the server config, but otherwise the work will likely be fairly similar.

Initial Setup

To get started, this will all have to take place in a plugin, unless it turns out there's a way to do it in-NSF. In this case, this made sense anyway, since I wanted the servlets to be available for everything. The first step was to make a stub class to act as the base of the servlet, even though it doesn't really do anything:

package frostillicus.xsp.model.servlet;

import javax.servlet.ServletException;
import com.ibm.domino.services.AbstractRestServlet;

public class ModelServlet extends AbstractRestServlet {
	private static final long serialVersionUID = 1L;

	public static ModelServlet instance;

	public ModelServlet() {
		instance = this;
	}

	@Override
	protected void doInit() throws ServletException {
		super.doInit();
	}
}

Once that class existed, I registered it in the plugin.xml as a servlet extension:

<extension id="frostillicus.xsp.model.Servlet" name="fmodelservlet" point="org.eclipse.equinox.http.registry.servlets">
	<servlet alias="/fmodel" class="frostillicus.xsp.model.servlet.ModelServlet">
		<init-param name="applicationConfigLocation" value="/WEB-INF/fmodelapplication"/>
		<init-param name="propertiesLocation" value="/WEB-INF/fmodelservlet.properties"/>
		<init-param name="DisableHttpMethodCheck" value="true"/>
	</servlet>
</extension>

In addition to that servlet class, it also references two text files to provide configuration for Wink, the JAX-RS implementation packaged with the Extension Library. The first is a list of resource classes to use in the servlet:

frostillicus.xsp.model.servlet.resources.ApiRootResource
frostillicus.xsp.model.servlet.resources.ManagersResource
frostillicus.xsp.model.servlet.resources.ManagerResource
frostillicus.xsp.model.servlet.resources.ModelResource

The second is a properties file with configuration options (I don't remember why this option is important):

wink.defaultUrisRelative=false

Implementing a Resource

The classes listed above are what receive a REST request (funneled through Wink/JAX-RS) and provide a response. As an example, here's the ManagerResource class:

package frostillicus.xsp.model.servlet.resources;

import java.io.IOException;
import java.net.URI;
import java.util.Map;
import java.util.HashMap;

import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.ResponseBuilder;
import javax.ws.rs.core.UriInfo;

import org.openntf.domino.Database;

import com.ibm.commons.util.io.json.JsonException;
import com.ibm.commons.util.io.json.JsonGenerator;
import com.ibm.commons.util.io.json.JsonJavaFactory;
import com.ibm.domino.commons.util.UriHelper;
import com.ibm.domino.das.utils.ErrorHelper;

import frostillicus.xsp.model.ModelManager;
import frostillicus.xsp.model.ModelObject;
import frostillicus.xsp.model.ModelUtils;
import frostillicus.xsp.util.FrameworkUtils;

@SuppressWarnings("unused")
@Path("{managerName}")
public class ManagerResource {

	@GET
	@Produces(MediaType.APPLICATION_JSON)
	public Response getManager(@Context final UriInfo uriInfo, @PathParam("managerName") final String managerName) {
		try {
			Map<String, Object> result = new HashMap<String, Object>();
			Database database = FrameworkUtils.getDatabase();
			if(database == null) {
				result.put("status", "error");
				result.put("message", "Must be run in the context of a database.");
			} else {
				Class<? extends ModelManager<?>> managerClass = ModelUtils.findModelManager(database, managerName);
				if(managerClass == null) {
					result.put("status", "failure");
					result.put("message", "No manager found for name '" + managerName + "'");
				} else {
					result.put("status", "success");
					result.put("managerClass", managerClass.getName());
				}
			}

			return ResourceUtils.createJSONResponse(result, false);
		} catch (Throwable e) {
			return ResourceUtils.createErrorResponse(e);
		}
	}

	@POST
	@Consumes(MediaType.APPLICATION_JSON)
	public Response createModel(final String requestEntity, @Context final UriInfo uriInfo, @PathParam("managerName") final String managerName) {

		Database database = FrameworkUtils.getDatabase();
		Class<? extends ModelManager<?>> managerClass = ModelUtils.findModelManager(database, managerName);
		if(managerClass == null) {
			return ErrorHelper.createErrorResponse("Manager '" + managerName + "' not found.", Response.Status.NOT_FOUND);
		}

		URI location;
		try {
			ModelManager<? extends ModelObject> manager = managerClass.newInstance();
			ModelObject model = manager.create();
			ResourceUtils.updateModelObject(requestEntity, model, false);

			location = UriHelper.appendPathSegment(uriInfo.getAbsolutePath(), model.getId());
		} catch(Throwable t) {
			return ResourceUtils.createErrorResponse(t);
		}

		ResponseBuilder builder = Response.created(location);
		Response response = builder.build();
		return response;
	}
}

There are quite a few concepts at work here, as well as tons of logic wrapped up in the referenced ResourceUtils and ModelUtils classes. The term "manager" in this class has no special meaning for JAX-RS or servlets - it's the term the Framework uses for the objects that provide access to models, like the "Posts" manager that maps requests for "all" to a back-end view named "Posts\All" and returning "Post" objects.

This is where things get hairy and diverge from basic servlet creation and head into Domino/XPages-specific eccentricities.

A Secret Double Life

The job of the model servlet is a bit strange, in that it doesn't want to just read document data from an NSF, but also should read and process Java classes. Framework model classes are defined in the NSF, not in plugins, and so the servlet has no real knowledge of what's inside the NSF, other than that it should look for classes that implement the appropriate interfaces.

When code is executing in an OSGi servlet context like this, it sits in a strange grey area. It's a better spot than agents - which have no knowledge of OSGi plugins or the XSP runtime - but it's not quite an XPages context, either, and there's no FacesContext available. Instead, a class called ContextInfo provides access to the session (running as the currently-authenticated Domino user) and the current database, if applicable. That "if applicable" comes in because an OSGi servlet can be accessed either as "http://foo.com/servletname" or as "http://foo.com/bar.nsf/servletname". I modified my utility class to paper over the difference between these two environments. The manager resource calls ModelUtils.findModelManager with this database context to try to find the requested manager. For example, if the request comes in as "/fmodel/Posts", it will search the database for a class or managed bean named "Posts".
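
As a minimal sketch of that in practice (ContextInfo here is com.ibm.domino.osgi.core.context.ContextInfo, and the null check covers the database-less URL form):

lotus.domino.Session session = ContextInfo.getUserSession();
lotus.domino.Database database = ContextInfo.getUserDatabase();
if(database == null) {
	// The servlet was accessed as http://foo.com/servletname, with no NSF context
}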

This is where the ability to treat an NSF as a "bag of classes" comes in handy. It may be possible to do this another way, but I'm using the DatabaseClassLoader provided by ODA to perform searches on the Java classes contained in an NSF. For this purpose, the actual names and structure of the classes are irrelevant, only that they implement ModelManager. If such a class is found, then the servlet knows enough about it, thanks to the interface, to fetch collections and individual model objects as necessary.

Bits and Bobs

In addition to the trickery required to pull manager and model classes out of an NSF, there are also a number of other techniques and components, mostly lifted from the ExtLib, used to make this work.

The code makes heavy use of JsonWriter to produce JSON in a lightweight manner, rather than building up and then spitting out large blobs of JSON, which is particularly important with large amounts of data.

The AbstractDominoModel class needed a lot of reworking to exist in the not-quite-XPages environment. In an XSP context, it makes use of an inner DominoDocument object in order to be able to deal with file attachments more easily, but that doesn't work without the full context. Accordingly, it uses a holder class to paper over the difference between DominoDocument and "manually" accessing the ODA Document. Of particular note is the handling of rich text for export into the REST display. It uses the HTML converter class included in DominoUtils, which I assume uses the HTMLConvertItem C function underneath the hood. It then uses the converter object to output an HTML version of the rich-text item as well as URLs for any attachments.

There is a certain amount of number fiddling and off-by-one-error-prone work done to determine the first and last entries in a model collection to show. As with DAS, it supports both the "Range" header and "start" and "count" GET parameters.
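
In practice, with a hypothetical database and manager name, a request for the first twenty entries could take either form (the items= syntax mirroring the DAS convention):

GET /blog.nsf/fmodel/Posts?start=0&count=20

GET /blog.nsf/fmodel/Posts
Range: items=0-19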

I nabbed wholesale IBM's shim implementation of the PATCH method, which isn't included in the version of Wink shipped with the ExtLib (at least not when I wrote this). Ideally, PATCH would mean that the JSON provided to the server would only be used to update fields in-place (leaving any fields in the model not included in the JSON untouched), while PUT would replace the model object entirely (removing any model fields not present in the JSON). In reality, they both act as PATCH.

As with XPages access to model objects, the REST APIs work with the JPA annotations used in model objects for validation. The model objects check their context - in an XPages context, failed validations result in FacesMessages, while otherwise it throws a ConstraintViolationException. When this exception occurs in the servlet, the error-response method picks up on that and generates specialized JSON to provide an explanation of the failed constraints to the user.

So Yeah

If you haven't tried out JAX-RS servlets yet, don't let this list of caveats and complicated code daunt you. This specific case of working with model objects in a very generic way naturally leads to complicated code, and the annotation-based coding system of JAX-RS/Wink reduces the amount of code dramatically. None of my code has to deal with fetching HTTP requests or parsing query parameters into useful objects - the API does that for me. There's no doubt a good deal more I could have it do for me as well. This is a pretty clean way to write servlets and is absolutely the best way to write them when the code makes sense to exist in a plugin.

Quick Tip: Wrapping a lotus.domino.Document in a DominoDocument

Mar 3, 2015 3:13 PM

Tags: xpages java

In the XPages environment, there are two kinds of documents: the standard Document class you get from normal Domino API operations and DominoDocument (styled NotesXspDocument in SSJS). Many times, these can be treated roughly equivalently: DominoDocument has some (but not all) of the same methods that you're used to from Document, such as getItemValueString and replaceItemValue, and you can always get the inner Document object with getDocument(boolean applyChanges).

What's less common, though, is going the other direction: taking a Document object from some source (say, database.getDocumentByUNID) and wrapping it in a DominoDocument. This can be very useful, though, as the API on the latter is generally nicer, containing improved support for dealing with attachments, rich text and MIME, and even little-used stuff like getting JSON from an item directly. Take a gander at the API: com.ibm.xsp.model.domino.wrapped.DominoDocument.

It's non-obvious, though, how to make use of it from the Java side. For that, there are a couple wrap static methods at the bottom of the list. The methods are overflowing with parameters, but they align with many of the attributes used when specifying an <xp:dominoDocument/> data source and are mostly fine to leave as blanks. So if you have a given Document, you can wrap it like so:

Document notesDoc = database.getDocumentByUNID(someUNID);
DominoDocument domDoc = DominoDocument.wrap(
	database.getFilePath(), // databaseName
	notesDoc,               // Document
	null,                   // computeWithForm
	null,                   // concurrencyMode
	false,                  // allowDeletedDocs
	null,                   // saveLinksAs
	null                    // webQuerySaveAgent
);

Once you do that, you can make use of the improved API. You can also serialize these objects, though I believe you have to call restoreWrappedDocument on it after deserialization to use it again. I do this relatively frequently when I'm in a non-ODA project and want some nicer methods and better data-type support.
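
From there, the wrapped document behaves much like the ones provided by a data source - a tiny sketch with hypothetical item names:

String title = domDoc.getItemValueString("Title");
domDoc.replaceItemValue("Status", "Published");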

Factories in XPages

Nov 12, 2014 6:27 PM

Tags: java xpages

In my last post, I intimated that I wanted to write a post covering the concept of factories in XPages, or at least the specific kind in question. This is that post.

The term "factory" covers a number of meanings, and objects named this way crop up all over the place (ODA has one, for example). The kind I care about today are those defined in an XspContributor object in an XPages plugin. These factories are generally (exclusively, perhaps) used for the purpose of generating adapters: assistant objects that allow the framework to perform operations on object types that may have no knowledge of the framework at all.

The way this generally takes form is this: when the framework needs to perform a specialized task, it asks the application (that is to say, the Application object that controls the XPages app) for a list of factories it knows about that conform to a given interface:

FacesContext facesContext = FacesContext.getCurrentInstance();
ApplicationEx app = (ApplicationEx)facesContext.getApplication();
List<ComponentMapAdapterFactory> factories = app.findServices(COMPONENT_MAP_SERVICE_NAME);

Once it has this list, it loops through all of them and passes the object it's trying to adapt. If the factory is written to understand that type of object, it will return an adapter object; if not, it will return null:

ComponentMapAdapter adapter = null;
for(ComponentMapAdapterFactory fac : factories) {
	adapter = fac.createAdapter(object_);
	if(adapter != null) {
		break;
	}
}

The framework can then use that adapter to perform the action on the object, without either the framework or the object knowing anything about the other:

for(String propertyName : adapter.getPropertyNames()) {
	// ...
}

XSP internally uses this for a great many things. One use that I've run into (and butted heads with) is to create adapter objects for dealing with file attachments. The DominoDocument class has a bit of information about attachments, particularly in its AttachmentValueHolder inner class, but doesn't on its own handle the full job of dealing with file upload and download controls. During the processes of getting a list of files for a given field from the document and handling the "delete document" method, the XPages framework looks up appropriate factories to handle the data type.

The reason for this indirection is so that this sort of operation can work with arbitrary data: in an ideal world, the XPages framework doesn't care at all that anything it does is related to Domino, and similarly the Domino data model objects don't care at all that they're embedded in the XSP framework. By allowing the model-object author (IBM, in this case) to say "hey, when you want to get a list of file attachments for an object of this type, I can help", it decouples the operation cleanly (it's broken in this case, but the theory applies). When the framework needs to process an object, it loops through the adapter factory classes, asks each one if it can handle the object in question, and takes the adapter from the first one that returns non-null.

At first blush, this setup seems overly contrived. After all, isn't this what interfaces are for? Well, in many cases, yes, interfaces would do the job. However, sometimes it makes sense to add this extra layer of indirection. Say, for example, that you're adapting between two Java classes you don't control: you can't modify the framework to support a class you want, and you can't modify the class to conform to an interface the framework understands. In that case, an adapter factory is a perfect shim.

But in fact, I've even found it useful to adopt this structure when I control all of the code. When I was designing the model/component adapter in the frostillic.us Framework, I made the conscious decision to not tie the two sides (the controller and the model) together tightly. Instead, I wrote a pair of interfaces in the controller package: ComponentMapAdapterFactory and ComponentMapAdapter. This way, when the controller gets the order to create an input field for an object's "firstName" property, it loops through the list of ComponentMapAdapterFactorys to find one that fits. Over in the model package, I have an appropriate factory and adapter to handle my model framework.
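
For illustration, that pair of interfaces can be as small as this - a sketch of the general shape, not the Framework's exact code:

public interface ComponentMapAdapterFactory {
	// Returns an adapter for the object, or null if this factory doesn't apply to it
	ComponentMapAdapter createAdapter(Object obj);
}

public interface ComponentMapAdapter {
	// The property names the controller should generate components for
	Iterable<String> getPropertyNames();
}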

I could have combined these two more tightly, but I enjoy the cleanliness this brings. I may not stick with my same model framework forever, and similarly the expectations of the controller class may change; because of this separation, it's clear where tweaks will need to be made. It also gives me the freedom to use the same component-building code with model objects I don't control, such as the adapter I wrote that pulls its configuration from Notes forms.


These types of factories are not something you're likely to run into during normal XPages development, but it may be useful to bear their existence in mind. The XPages framework is a great morass of moving parts, and so being able to chart the inner workings in your mental map one bit at a time can go a long way to mastering the platform.

Property Resolution in XPages EL

Nov 11, 2014 11:24 AM

Tags: xpages java

Reading Devin Olson's recent series on EL processing put me in the mood to refresh and fill out my knowledge of how that stuff actually happens.

A while back, I made a small foray of my own into explaining how property resolution in XPages EL works, one which I followed up with a mea culpa explaining that I had left out a few additional supported types. As happens frequently, that still didn't cover the full story. Before getting to what I mean by that, I'll step back to an overview of XPages EL in general.

Components of EL Processing

To my knowledge, there are three main conceptual pieces to the EL-resolution process. I'll use the EL #{foo.bar.baz[1]} as a common example.

  • The EL parser itself. This is what reads the EL above and determines that you want index 1 of the baz property of the bar property of the foo object. I don't know if there's a realistic way to override or extend this stock-EL behavior.
    • This does, though, contain an extensible side path: BindingFactory. This lets you create your own processors for value and method bindings based on the EL prefix, in the same vein as #{javascript: ... }.
  • The VariableResolver. This is a relatively-common bit of XPages extensibility and for good reason: they're quite useful. The variable resolver is what is used by EL (and SSJS, and others) to determine what object is referenced by foo in the example.
  • The PropertyResolver. This is the companion to the VariableResolver and is what handles the rest of the dereferencing in the example. EL asks the app's property resolver to find the bar property of foo, then the baz property of that, and then index 1 of that. This is the main topic of conversation today.

Setting An Application's PropertyResolver

There are two main ways I know of to make use of property resolvers, and the first is analogous to the VariableResolver: you write a single object and specify it in your faces-config file, like so:

<application>
	<property-resolver>config.PropResolver</property-resolver>
</application>

Then the skeletal implementation of such an object looks like this:

package config;

import javax.faces.el.EvaluationException;
import javax.faces.el.PropertyNotFoundException;
import javax.faces.el.PropertyResolver;

public class PropResolver extends PropertyResolver {
	private final PropertyResolver delegate_;

	public PropResolver(PropertyResolver delegate) {
		delegate_ = delegate;
	}

	@Override
	public Class<?> getType(Object obj, Object property) throws EvaluationException, PropertyNotFoundException {
		return delegate_.getType(obj, property);
	}

	@Override
	public Class<?> getType(Object obj, int index) throws EvaluationException, PropertyNotFoundException {
		return delegate_.getType(obj, index);
	}

	@Override
	public Object getValue(Object obj, Object property) throws EvaluationException, PropertyNotFoundException {
		return delegate_.getValue(obj, property);
	}

	@Override
	public Object getValue(Object obj, int index) throws EvaluationException, PropertyNotFoundException {
		return delegate_.getValue(obj, index);
	}

	@Override
	public boolean isReadOnly(Object obj, Object property) throws EvaluationException, PropertyNotFoundException {
		return delegate_.isReadOnly(obj, property);
	}

	@Override
	public boolean isReadOnly(Object obj, int index) throws EvaluationException, PropertyNotFoundException {
		return delegate_.isReadOnly(obj, index);
	}

	@Override
	public void setValue(Object obj, Object property, Object value) throws EvaluationException, PropertyNotFoundException {
		delegate_.setValue(obj, property, value);
	}

	@Override
	public void setValue(Object obj, int index, Object value) throws EvaluationException, PropertyNotFoundException {
		delegate_.setValue(obj, index, value);
	}
}

If you've done much with DataObjects, you'll likely immediately recognize those methods: I imagine that DataObject was created as a simplest-possible implementation of a PropertyResolver-friendly interface.

So how might you use this? Well, most of the time, you probably shouldn't - in my experience, the standard property resolver is sufficient and using DataObject in custom objects is easy enough that it's the best path. Still, you could use this to patch the behavior that is driving Devin to madness or to paint over other persistent annoyances. For example, DominoViewEntry contains hard-coded properties for accessing the entry's Note ID, but not its cluster-friendly Universal ID. To fix this, you could override the non-indexed getValue method like so:

public Object getValue(Object obj, Object property) throws EvaluationException, PropertyNotFoundException {
	if (obj instanceof DominoViewEntry && "documentId".equals(property)) {
		return ((DominoViewEntry) obj).getUniversalID();
	}
	return delegate_.getValue(obj, property);
}

Now, anywhere where you have a DominoViewEntry, you can use #{viewEntry.documentId} to get the UNID. You could do the same for DominoDocument as well, if you were so inclined. You'll just have to plan to never have a column or field named "documentId" (much like you currently have to avoid "openPageURL", "columnIndentLevel", "childCount", "noteID", "selected", "responseLevel", "responseCount", and "id").

Property Resolver Factories

The other way to use PropertyResolvers is to register and use a PropertyResolverFactory. Unlike the faces-config approach, these do not override (all of) the default behavior, but are instead looked up by IBM's PropertyResolver implementation at a point during its attempts at property resolution. Specifically, that point is after support for ResourceBundle, ViewRowData, and DataObject and before delegation to Sun's stock resolver (which handles Maps, Lists, arrays, and generic POJOs).

If you pull up a type hierarchy of these classes in Eclipse, you can see that IBM uses this route for lotus.domino.Document, com.ibm.commons.util.io.json.JsonObject, and com.ibm.jscript.types.FBSObject (SSJS object) support. So the idea of this route is that you'd have your own custom object type which doesn't implement any of the aforementioned interfaces and for which you want to provide EL support beyond the normal getter/setter support. Normally, this is not something worth doing, but I could see it being useful if you have a third-party class you want to work in, such as a non-Domino/JDBC data source.

The method for actually using one of these is... counterintuitive, but is something you may have run into in plugin development. The first step is simple enough: implement the factory:

package config;

import javax.faces.el.PropertyResolver;
import com.ibm.xsp.el.PropertyResolverFactory;

public class PropResolverFactory implements PropertyResolverFactory {
	public PropertyResolver getPropertyResolver(Object obj) {
		return null;
	}
}

That getPropertyResolver method's job is to check to see if the object is one of the types it supports and, if it is, return a PropertyResolver object (the same kind as above) that will allow the primary resolver to get the property.
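
Fleshed out slightly, that check-and-return pattern looks like this, where MyDataObject and MyDataPropertyResolver are hypothetical stand-ins for your own classes:

public PropertyResolver getPropertyResolver(Object obj) {
	if(obj instanceof MyDataObject) {
		return new MyDataPropertyResolver();
	}
	return null;
}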

Actually registering the factory is weirder. It must be done via a plugin (or by code that manually registers it in the application when needed), and the best way to see what I mean is to take a look at an example: the OpenntfDominoXspContributor class used by the OpenNTF Domino API. The contributor is registered in the plugin.xml and returns an array of arrays (because Java doesn't have tuples or map literals) representing a unique name for your factory plus the implementing class.

This concept of factories actually probably warrants its own blog post down the line. For the time being, the upshot is that this approach is appropriate if you're adding your own data type via a plugin (and which wouldn't be better-suited to implement DataObject), so it's a rare use case indeed.

Generating JSON in XPages Applications

Sep 18, 2014 5:55 PM

Tags: json java

This topic is fairly well-trodden ground, but there's no harm in trodding it some more: methods of producing JSON in the XPages environment. Specifically, this will be primarily about the IBM Commons JSON classes, found in com.ibm.commons.util.io.json. The reason for that choice is just that they ship with Domino - other tools (like Gson) are great too, and in some ways better.

Before I go further, I'd like to reiterate a point I made before:

Never, ever, ever generate code without proper escaping.

This goes for any executable, markup, or data language like this. It's tempting to generate XML or JSON in Domino views, but formula language lacks proper escape functions and so, unless you are prepared to study the specs for all edge cases (or escape every character), don't do it.

So anyway, back to the JSON generation. To my knowledge, there are three main ways to generate JSON via the Commons libraries: making a single call to process an existing Java object, building a JsonJavaObject directly, and "streaming" the code bit by bit. I've found the first and last methods to be useful, but there's nothing inherently wrong about the middle one.

Processing an existing object

With this route, you use a class called JsonGenerator to process an existing JSON-compatible object (usually a List or Map) into a String after building the object via some other mechanism. In a simple example, it looks like this:

Map<String, Object> foo = new HashMap<String, Object>();
foo.put("bar", "baz");
foo.put("ness", 1);
return JsonGenerator.toJson(JsonJavaFactory.instance, foo);

Overall, it's fairly straightforward: create your Map the normal way (or get it from some library), and then pass it to the JsonGenerator. Because it's Java, it forces you to also pass in the type of generator you want to use, and that's JsonJavaFactory's role. There are several instance objects that seem to vary primarily in how much they rely on the other Commons JSON classes in their implementation, and I have no idea what the performance or other characteristics are like. instance is fine.

Building a JsonJavaObject

An alternate route is to use the JsonJavaObject directly and then toString it at the end. This is very similar in structure to the previous example (because JsonJavaObject inherits from HashMap<String, Object> directly, for some reason):

JsonJavaObject foo = new JsonJavaObject();
foo.put("bar", "baz");
foo.put("ness", 1);
return foo.toString();

The result is effectively the same. So why do I avoid this method? Mostly for flexibility reasons. Conceptually, I don't like assuming that the data is going to end up as a JSON string right from the start unless there's a compelling reason to do so, and so it's strange to use JsonJavaObject right out of the gate. If your data starts coming from another source that returns a Map, you'll need to adjust your code to account for it, whereas that is less of a concern with the first approach.

Still, it's no big problem if you use this. Presumably, it will be slightly faster than the first method, and it's often functionally identical anyway (considering it is a Map<String, Object>).

Streaming

This one is the weird one, but it will be familiar if you've ever written a renderer (stay tuned to NotesIn9 for an example). Rather than constructing the entire object, you push out bits of the JSON code as you go. There are a few reasons you might do this: when you want to keep memory/processor use low when dealing with large objects, when you want to make your algorithm recursive without, again, using up too much memory, or when you're actually streaming the result to the client. It's the ugliest method, but it's potentially the fastest and most efficient. This is what you want to use if you're writing out a very large collection (say, filtered view data) in an XAgent or other servlet.

I'll leave out an example of using it in a streaming context for now, but if you're curious you can find examples in the DAS code in the Extension Library or in the REST API code for the frostillic.us model objects (which is derived wholesale from DAS).

The key object here is JsonWriter, which is found in the com.ibm.commons.util.io.json.util sub-package. Much like with other Writers in Java, you hook this up to a further destination - in this example, I'll use a StringWriter, which is a basic way to write into a String in memory and return that. In other situations, this would likely be the ServletOutputStream. Here's an example of it in action:

StringWriter out = new StringWriter();
JsonWriter writer = new JsonWriter(out, false);

writer.startObject();

// Simple literal properties, one start/out/end trio each
writer.startProperty("bar");
writer.outStringLiteral("baz");
writer.endProperty();

writer.startProperty("ness");
writer.outIntLiteral(1);
writer.endProperty();

// An existing Map can still be poured out wholesale as a nested object
Map<String, Object> foo = new HashMap<String, Object>();
foo.put("bar", "baz");
foo.put("ness", 1);
writer.startProperty("foo");
writer.outObject(foo);
writer.endProperty();

writer.endObject();

writer.flush();
return out.toString();

As you can tell, the LOC count ballooned fast. But you can also tell that it makes a kind of sense: you're doing the same thing, but "manually" starting and ending each element (there are also constructs for arrays, booleans, etc.). This is very similar to writing out HTML/XML using equivalent libraries. And it's good to know that there's always a fallback to output a full object in the style of the first example when it's appropriate. For example, you might be outputting data from a large view - many entries, but each entry is fairly simple. In that case, you'd use this writer to handle the array structure, but build a Map for each entry and add that to keep the code simpler and more obvious.
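
To make that concrete, here's a minimal sketch of that hybrid, under a few assumptions: "nav" is a ViewNavigator you've already built, the column positions are placeholders, and the array methods (startArray, startArrayItem, and friends) are the array-side counterparts of the property methods above - if your version of the class names them differently, the shape is the same:

StringWriter out = new StringWriter();
JsonWriter writer = new JsonWriter(out, false);

writer.startArray();

ViewEntry entry = nav.getFirst();
while(entry != null) {
	// Each entry is small, so build a plain Map and let outObject do the work
	List<?> columnValues = entry.getColumnValues();
	Map<String, Object> entryData = new HashMap<String, Object>();
	entryData.put("title", columnValues.get(0));
	entryData.put("created", String.valueOf(columnValues.get(1)));

	writer.startArrayItem();
	writer.outObject(entryData);
	writer.endArrayItem();

	ViewEntry next = nav.getNext(entry);
	entry.recycle();
	entry = next;
}

writer.endArray();
writer.flush();
return out.toString();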


So none of these are right in all cases (and sometimes you'll just do toJson(...) in SSJS), but they're all good to know. Most of the time, the choice will be between the first and the last: the easy-to-use, just-do-what-I-mean one and the cumbersome, really-crank-out-performance one.

A Centralized Bean for Translation

Sep 1, 2014 4:54 PM

Tags: java

The normal method for doing translation in XPages is to use the built-in Designer tooling, which creates properties files for each language for each XPage in your app. This is okay, though it requires using Designer to update the properties (since apparently the translation happens as part of the build process). For the blog refresh I'm working on, I'm taking a different tack, inspired by the way a client does it and similar to how I do it in the Framework.

Specifically, I have a centralized bean named "translation" that accepts strings and returns a best-match translation for the current session's locale. This "best match" comes from the same routine that the XSP framework uses for loading resource bundles. So, taking my browser as an example, requesting the bundle named "translation" will cause it to search the classpath for these files until it finds one:

  1. translation_en_US.properties
  2. translation_en.properties
  3. translation.properties
  4. translation_fr.properties

(I'm not sure why it uses translation.properties before translation_fr.properties - maybe it assumes that it's the English strings when there's no locale code on the file).

So I take advantage of this by creating a DataObject bean to do my translation:

package config;

import java.io.IOException;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import java.util.MissingResourceException;
import java.util.ResourceBundle;

import javax.faces.context.FacesContext;

import com.ibm.xsp.application.ApplicationEx;
import com.ibm.xsp.designer.context.XSPContext;
import com.ibm.xsp.model.DataObject;

import frostillicus.xsp.bean.SessionScoped;
import frostillicus.xsp.bean.ManagedBean;
import frostillicus.xsp.util.FrameworkUtils;

@ManagedBean(name="translation")
@SessionScoped
public class Translation implements Serializable, DataObject {
	private static final long serialVersionUID = 1L;

	public static Translation get() {
		Translation existing = (Translation)FrameworkUtils.resolveVariable(Translation.class.getAnnotation(ManagedBean.class).name());
		return existing == null ? new Translation() : existing;
	}

	private transient ResourceBundle bundle_;
	private Map<Object, String> cache_ = new HashMap<Object, String>();

	public Class<String> getType(final Object key) {
		return String.class;
	}

	public String getValue(final Object key) {
		if(!cache_.containsKey(key)) {
			try {
				ResourceBundle bundle = getTranslationBundle();
				cache_.put(key, bundle.getString(String.valueOf(key)));
			} catch(IOException ioe) {
				throw new RuntimeException(ioe);
			} catch(MissingResourceException mre) {
				cache_.put(key, "[Untranslated " + key + "]");
			}
		}
		return cache_.get(key);
	}

	public boolean isReadOnly(final Object key) {
		return true;
	}

	public void setValue(final Object key, final Object value) {
		throw new UnsupportedOperationException();
	}


	private ResourceBundle getTranslationBundle() throws IOException {
		if(bundle_ == null) {
			FacesContext facesContext = FacesContext.getCurrentInstance();
			ApplicationEx app = (ApplicationEx)facesContext.getApplication();
			bundle_ = app.getResourceBundle("translation", XSPContext.getXSPContext(facesContext).getLocale());
		}
		return bundle_;
	}
}

This uses the Framework's annotation-based managed-bean declaration, but it'd work just fine in faces-config. This allows you to use EL to request a translation, such as #{translation.home}, #{translation['home']}, or #{translation[someVar.prop]}, or to call translation.getValue(...) in Java or JavaScript.

I've found this approach to be much easier to work with. There's only one central file to manage (you could split it up into multiple beans), you don't have to re-generate files for new languages, and the keys can be natural and human-friendly, instead of XPaths.
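
For illustration, a hypothetical translation_en.properties on the classpath could be as simple as this, with each key then directly usable in EL:

home=Home
newNote=New Note
confirmDelete=Are you sure you want to delete this note?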

In addition, you could easily change this bean to get its translation information elsewhere without modifying the XPages that use it - it could look up, for example, against Domino documents to allow non-developers to change the translation values without any special rights (though you'd likely want to tweak the cache in that case).

How I Learned to Stop Worrying and Love C Structs

Aug 4, 2014 10:38 PM

Tags: c java

Since a large amount of on-site client time has put my framework work on hold for a bit, I figured I'd continue my dalliance in the world of raw item data.

Specifically, what I've been doing is filling out the collection of structs with Java wrappers, particularly in the "cd" subpackage. For someone like me who has only done bits and pieces of C over the years, this is a fruitful experience, particularly since I'm dealing with just the core concept of the Notes API data structures without the messy business of actually worrying about memory or crashing programs.

If you're not familiar with structs, they're effectively just the data portion of a class: a blueprint of related pieces of data - ints, doubles, other structs, etc. - used both for creating new elements and as a contract for the type of data received from an API. The latter is what I'm dealing with: since the raw item data in DXL is presented as just a series of bytes, the struct documentation is vital in figuring out what to expect in which spot. Here is a representative example of a struct declaration from the Notes API, one which includes bonus concepts:

typedef struct{
	LSIG	Header;		/* Signature and Length */
	WORD 	FileExtLen;		/* Length of file extension */
	DWORD	FileDataSize;	/* Size (in bytes) of the file data */
	DWORD	SegCount;		/* Number of CDFILESEGMENT records expected to follow */
	DWORD	Flags;		/* Flags (currently unused) */
	DWORD	Reserved;		/* Reserved for future use */
	/*	Variable length string follows (not null terminated).
		This string is the file extension for the file. */
} CDFILEHEADER;

So... that's a thing, isn't it? It represents the start of a File Resource (or CSS file, Java class, XPage, etc.). I'll start from the top:

  1. LSIG Header: An "LSIG" is actually another struct, but a smaller one: its job is to let a processor (like my code) identify the following record, as well as providing the total size of the record. When processing the byte stream, my code looks to the first two bytes of each record to determine what to expect for the next block.
  2. WORD FileExtLen: To start with, "WORD" here is just an alias for an "unsigned short" - a 16-bit integer ranging from 0 to 65535 (64K). The name comes from the computer-architecture notion of a "word" of memory. This field specifically denotes the number of bytes to expect after the record for the file extension of the file resource being defined.
    • The "unsigned" above refers to the way the numbers are stored in memory, which is to say a series of binary digits. By default, the highest-value bit is reserved for the "sign" - a 0 for positive and a 1 for negative. Normal "signed" numbers can store positive values only half the size of their unsigned counterparts, because going from "0111 1111" (+127) wraps around to "1000 0000" (-127). This is why the "half" values - 32K, 2G - are seen as commonly as their "full" counterparts. As to why negative numbers are represented with all those zeros, you'll either have to read up independently or just trust me for now.
  3. DWORD FileDataSize: A "DWORD" is double the size of a "WORD" (hence the "D", presumably): it's an unsigned 32-bit number, ranging from 0 to 4,294,967,295 (4G). This number represents the total size of the file attachment, and so also presumably represents the maximum size of a file resource in Domino.
  4. DWORD SegCount: This indicates the number of "segment" records expected to follow. Segment records are similar to this header we're looking at, but contain the actual file data, split across multiple chunks and across multiple items. That "multiple items" bit is due to the overall structure of rich-text items, and is why you'll often see rich text item names (like "Body" or "$FileData") repeated many times within a note.
  5. DWORD Flags: As the comment in the API indicates, this field is currently unused. However, it's representative of something that is used very commonly in other structures: a set of bits that isn't useful as a number, but instead has values set to correspond to traits of the entity in question. This is referred to as a bit field and is an efficient way to store a fixed number of flags. So if you had a four-bit flags field for a text item, the bits may indicate whether it's summary data, a names field, a readers field, and/or an authors field - for example "1100" for a field that's summary and names, but not readers or authors. That is, incidentally, basically how those flags are implemented, albeit with a larger bit field.
    • There is a secondary type of "flag" in Notes: the "$Flags" and "$FlagsExt" fields in design elements. Those flags are less efficient - 8 bits per flag instead of 1 - but are generally more extensible and somewhat conceptually easier.
    • These bit fields are what those oddball bitwise operators are for, by the way.
  6. DWORD Reserved: Many (most?) C API data structures contain at least one block of bits like this that is reserved for future use. These are so that IBM can add features without breaking all existing code: API users are expected to leave anything there intact and to create new structures with all zeros. You can see this in action in a couple places, such as the addition of rudimentary theme support to legacy design elements, which used up a byte out of the previously-reserved block.
  7. Variable data: This is the fun part. Many structures contain "variable" data following the "fixed" portion. Whereas the previous parts are a predictable size - a DWORD will always be four bytes - the parts following the block can be anywhere from 0 bytes to whatever is the max value of their referring entity. Fortunately, the "SIG" at the beginning of the record tells the API user the length ahead of time, so it's not required to read it all in when dealing with an API entity. Still, the code to read this can be complex and bug-prone, particularly when there are multiple variable parts packed together. In this case, though, it's relatively simple: we get the value of "FileExtLen", read that many bytes into an array, and convert that from LMBCS to a respectable Unicode string - there's a sketch of what this looks like in Java just after this list.
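
And here's that promised sketch: a minimal Java rendition of reading a CDFILEHEADER's worth of bytes. It assumes the LSIG layout from the API headers (a WORD signature followed by a DWORD length) and cheats by decoding the extension with the default charset rather than doing a real LMBCS conversion:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class CDFileHeaderReader {
	public static void dumpHeader(final byte[] recordData) {
		// Notes structures are little-endian regardless of platform
		ByteBuffer buf = ByteBuffer.wrap(recordData).order(ByteOrder.LITTLE_ENDIAN);

		// LSIG Header: WORD signature + DWORD length of the whole record
		int signature = buf.getShort() & 0xFFFF;
		long recordLength = buf.getInt() & 0xFFFFFFFFL;

		// Fixed portion - the bit masks compensate for Java's lack of unsigned types
		int fileExtLen = buf.getShort() & 0xFFFF;        // WORD FileExtLen
		long fileDataSize = buf.getInt() & 0xFFFFFFFFL;  // DWORD FileDataSize
		long segCount = buf.getInt() & 0xFFFFFFFFL;      // DWORD SegCount
		buf.getInt();                                    // DWORD Flags (unused)
		buf.getInt();                                    // DWORD Reserved

		// Variable portion: FileExtLen bytes of non-null-terminated string data
		byte[] extData = new byte[fileExtLen];
		buf.get(extData);
		String fileExt = new String(extData); // real code would decode LMBCS here

		System.out.println("Record " + signature + " (" + recordLength + " bytes): ."
			+ fileExt + ", " + fileDataSize + " data bytes in " + segCount + " segments");
	}
}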

Dealing with a stream of structs is... awkward at first, particularly when you're used to Java amenities like being able to just pour out and read back in serialized objects without even thinking twice about it. After a while, though, it gets easier, and you get an appreciation for a lower-level type of programming. And while you still may not be happy about Domino's 32K summary-data limit, you at least get an understanding for why it's there.

So if you're in a position to dive into this sort of thing once in a while, I recommend you do so. Though the value of what I'm writing specifically is mixed - I doubt the world needs, say, a Java wrapper for a CD record reflecting a DECS field association - the benefit to my brain is immense. Dealing with web and Java programming can cause you to become very disconnected from the fundamentals of programming, and something like this can bring you back to solid ground. Give it a shot!

Building an App with the frostillic.us Framework, Part 6

Jul 23, 2014 2:46 PM

Tags: java
  1. Building an App with the frostillic.us Framework, Part 1
  2. Building an App with the frostillic.us Framework, Part 2
  3. Building an App with the frostillic.us Framework, Part 3
  4. Building an App with the frostillic.us Framework, Part 4
  5. Building an App with the frostillic.us Framework, Part 5
  6. Building an App with the frostillic.us Framework, Part 6
  7. Building an App with the frostillic.us Framework, Part 7
  1. Define the data model
  2. Create the view and add it to an XPage
  3. Create the editing page
  4. Add validation and translation to the model
  5. Add notification to the model
  6. Add sorting to the view
  7. Basic servlet
  8. REST with Angular.js

Add sorting to the view

This step in the framework example is very similar to the second in that there's nothing particularly Framework-y about it.

Technically, the work in the title of this step is already done: I had already enabled click-to-sort and set sortable="true" in the XSP source. That on its own is enough to, as with a normal view, enable both available sort directions in the view panel:

So... that was easy! While we're at it, we'll go back in and add a categorized column. This, too, has no gotchas:

In my case, Designer gave this category a programmatic name of "$4", so we'll use that in the modified XSP for the view panel:

<xp:viewPanel value="#{Notes.all}" pageName="/note.xsp">
	<xp:this.facets>
		<xp:pager xp:key="headerPager" id="pager1" partialRefresh="true" layout="Previous Group Next" />
	</xp:this.facets>
	
	<xp:viewColumn columnName="$4"/>
	<xp:viewColumn columnName="Title" displayAs="link">
		<xp:viewColumnHeader value="Title" sortable="true"/>
	</xp:viewColumn>
	<xp:viewColumn columnName="Body">
		<xp:viewColumnHeader value="Body"/>
	</xp:viewColumn>
</xp:viewPanel>

This gives us the categorized column we expect, including the ability to collapse the categories like normal:

And, well, that's it too. Because of my extensive use of view features on the back end, Framework collections inherit the same features and (most of) the performance of the existing view data sources. That's a lot of the point: taking these model collections and using them in the ways you're used to using views in XPages apps is meant to be easy and not something you have to think too much about.

Generating Toaster, dGrowl, etc. Notifications From Server Code

Jul 22, 2014 12:30 PM

In yesterday's Framework-series post, I offhandedly mentioned that the "flashMessage" routine I use could easily be paired with or replaced by "toaster"-style notifications. This turned out to be very coincidental, as just today a NotesIn9 episode went up featuring Brad Balassaitis talking about using dGrowl for this purpose, along with an accompanying blog post.

Whenever I've used something like this, I've run into situations where I want to generate the message from the server, say as part of a partial refresh event to save an object. One option in that situation is to attach JavaScript to the "onComplete" and "onError" properties of the event handler (the latter of which is something I should do more regularly), but that doesn't give you access to all the information on the server that you may want to use in the message.

What I really want to do is send customized JavaScript code from the server back to the client, just for that event. Fortunately, there's an out-of-the-way method in the XPages stack to allow you to do this: UIViewRootEx#postScript. Though the documentation on that page is copious and detailed, it may not quite give you an indication of what that method does. What it does is to take a string of (client) JavaScript code that is then wrapped up and included in the currently-being-generated response to the browser, and which will be executed at the end of the current event (the partial refresh or, probably, initial page load). You can get to the view root object by resolving the "view" variable and casting it to an appropriate class:

UIViewRootEx2 view = (UIViewRootEx2)ExtLibUtil.resolveVariable(FacesContext.getCurrentInstance(), "view");

So I've used this in a couple situations, and here's an example of a method I have in my utils class for a current project. The project uses a WrapBootstrap theme that came with Gritter, which is a jQuery plugin for a similar effect, but it could be adapted for either Dojo module.

public static void toaster(final String summary, final String detail, final boolean sticky, final String styleClass) {
	StringBuilder result = new StringBuilder();
	result.append("jQuery.gritter.add({\n");
	if(StringUtil.isNotEmpty(summary)) {
		result.append("\ttitle: ");
		JSUtil.addString(result, summary);
		result.append(",\n");
	}
	if(StringUtil.isNotEmpty(detail)) {
		result.append("\ttext: ");
		JSUtil.addString(result, detail);
		result.append(",\n");
	}
	result.append("\tsticky: " + sticky + ",\n");
	
	String effectiveStyleClass = (styleClass == null ? "gritter-info" : styleClass);
	result.append("\tclass_name: ");
	JSUtil.addString(result, effectiveStyleClass);
	result.append("\n");
	
	result.append("})");

	FrameworkUtils.getViewRoot().postScript(result.toString());
}

So, first off: yikes. This is an example of why Java is awful at string handling and generating other languages. However, this ugliness is in service of a vital lesson that goes beyond this task:

Never, ever, ever generate code without proper escaping.

This is why it's so problematic to generate JSON or XML via view columns: formula language lacks escaping functions for those languages, and most code I've seen neglects to include its own. If you don't escape strings you're adding into another language, you're asking for parsing bugs at best and code-injection attacks at worst.

But back to the code at hand. I'm using the com.ibm.xsp.util.JSUtil class that comes with the XPages framework to assist in writing out JavaScript code. The result is that you can write something like this in Java:

toaster("Some alert!", "These are \"alert\" details", false, "gritter-error");

...and end up with code like this sent to the browser:

jQuery.gritter.add({
	title: "Some alert!",
	text: "These are \"alert\" details",
	sticky: false,
	class_name: "gritter-error"
})

In my example from yesterday, if I were using a partial refresh instead of a page redirection, I could use this method to alert the user of a successful save.

Because my model framework publishes FacesMessages for event conditions in a Faces environment, I've also written some companion methods to translate them into toaster events and to drain the queue, useful during a partial refresh:

public static void toastMessage(final FacesMessage message) {
	String styleClass = null;
	if(FacesMessage.SEVERITY_ERROR.equals(message.getSeverity())) {
		styleClass = "gritter-error";
	} else if(FacesMessage.SEVERITY_FATAL.equals(message.getSeverity())) {
		styleClass = "gritter-error";
	} else if(FacesMessage.SEVERITY_INFO.equals(message.getSeverity())) {
		styleClass = "gritter-info";
	} else if(FacesMessage.SEVERITY_WARN.equals(message.getSeverity())) {
		styleClass = "gritter-warning";
	} else {
		styleClass = "gritter-success";
	}
	toaster(message.getSummary(), message.getDetail(), false, styleClass);
}
@SuppressWarnings("unchecked")
public static void toastMessages() {
	Iterator<FacesMessage> messages = FacesContext.getCurrentInstance().getMessages();
	while(messages.hasNext()) {
		FacesMessage message = messages.next();
		toastMessage(message);
	}
}

This sort of thing can be a useful way to bridge the client and server sides of the app without giving up client UI niceties.

Building an App with the frostillic.us Framework, Part 5

Jul 21, 2014 3:46 PM

Tags: java
  1. Building an App with the frostillic.us Framework, Part 1
  2. Building an App with the frostillic.us Framework, Part 2
  3. Building an App with the frostillic.us Framework, Part 3
  4. Building an App with the frostillic.us Framework, Part 4
  5. Building an App with the frostillic.us Framework, Part 5
  6. Building an App with the frostillic.us Framework, Part 6
  7. Building an App with the frostillic.us Framework, Part 7
  1. Define the data model
  2. Create the view and add it to an XPage
  3. Create the editing page
  4. Add validation and translation to the model
  5. Add notification to the model
  6. Add sorting to the view
  7. Basic servlet
  8. REST with Angular.js

Add notification to the model

The next step in building our Framework app is a simple one: adding some basic visual notification when a user saves a note. We'll leave the navigation rule to send the user to the home page in place and use flashScope (included in the Framework) and a "messages" custom control included in the scaffolding DB to add a message.

There are a few places where I could add this code, primarily either the model or the controller. Since it makes sense in this context, I'll add it to the model, via this additional code in the model.Note class:

@Override
protected void postSave() {
	super.postSave();

	FrameworkUtils.flashMessage("confirmation", "Note saved successfully.");
}

The flashMessage method in FrameworkUtils is a convenience method for exactly this sort of thing. It adds the message string to a List stored in flashScope under a key composed of the provided first parameter plus "Messages". So, in this case, #{flashScope.confirmationMessages} ends up containing ["Note saved successfully."].
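
In spirit, the helper amounts to something like this sketch (the real FrameworkUtils method may differ in its particulars):

@SuppressWarnings("unchecked")
public static void flashMessage(final String category, final String message) {
	Map<String, Object> flashScope = (Map<String, Object>)resolveVariable("flashScope");
	String key = category + "Messages"; // e.g. "confirmationMessages"
	if(!flashScope.containsKey(key)) {
		flashScope.put(key, new ArrayList<String>());
	}
	((List<String>)flashScope.get(key)).add(message);
}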

These values are read by the xc:messages control included in the standard xc:layout control in the scaffolding DB. It outputs both the normal xp:messages notifications and these flash-scoped ones - currently, the classes for the messages are hard-coded in the control and should be tweaked for use with OneUI, Bootstrap, or another UI framework, but eventually I'll shift those out to themes. Since I'm using OneUIv3 for this demo, I set the classes in the control to the appropriate values. The result is a small message that appears on the first load of a subsequent page and disappears thereafter:

Fancy? No, but it doesn't need to be. This same postSave hook could be used to do much more complex things whenever a note is saved in any context, such as sending a notification email, writing a log entry, or generating a toaster notification (forgive the missing images on that link).

How I Deal With Collections in the Framework

Jul 20, 2014 10:30 AM

Tags: java

A post on Christian Güdemann's blog and a followup on Nathan Freeman's made me figure it could be useful to discuss how I deal with the job of working with document collections in the frostillic.us Framework.

First of all: I am not solving the same original problem Christian had. His post discussed strategies for actually opening and processing every document and secondarily building a collection using inadvisable selection formulas for views (namely, @Today). I don't generally do that stuff, so our paths diverge.

The collection code in the Framework is very focused on dancing on top of existing view indexes: using them for collecting documents, for sorting/categorization, and for accessing summary data. In many ways, the main Framework collections can be thought of as a re-implementation of the Domino view data source in XPages, except without the same cheating they do and with some side benefits.

Core

The core essence of a Framework collection - a DominoModelList - is that it stores information about how to access the underlying view, which category filter to use, and which model class to use to create objects. This can be seen in its constructor; it grabs important metadata information about the view and that's about it. It's not until it's required that the list does the dirty work of actually fetching a ViewNavigator (or re-using one in the current request). As much as possible, I aim to use ViewNavigators - which is to say, whenever there's not an FT search involved. Navigators are (for Domino) highly efficient, particularly compared to the shockingly-bad performance characteristics of ViewEntryCollection.

The primary consumer of the navigator is the get(int) method (among other things, collections implement List). If you'll kindly ignore some workaround code for a bug I haven't reliably been able to pin on either my code or the OpenNTF API yet, you can see the navigator use in action: unless there's a search in place (in which case it falls back to VEC), the method fetches an active navigator, skips to the appropriate requested entry, and uses getCurrent() to retrieve it. Though the skip/retrieve mix is odd when the average case will be iterating over successive entries, the performance is speedy enough that I haven't felt the need to try to cover both random and sequential access differently.
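
Stripped of the search fallback and the workaround code, the heart of that method conceptually looks like this sketch, where getNavigator() and createFromEntry(...) stand in for the real helper methods:

public E get(final int index) {
	ViewNavigator nav = getNavigator(); // re-used within the request where possible
	nav.gotoFirst();
	if(index > 0) {
		nav.skip(index); // far faster than walking forward entry by entry
	}
	ViewEntry entry = nav.getCurrent();
	return createFromEntry(entry); // wraps the entry in a model object
}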

Sorting

Since 8.5.0, we have the ability to use click-to-sort columns in the back-end API, and I make use of this to expose sortable columns through TabularDataModel's sorting methods. Once a sorted column is chosen, I pass that along to the underlying view when fetching it. If there's an FT search query specified, I use the surfaced-in-8.5.3 FTSearchSorted method to retain the sorting.

Collapsible Categories

In 9.0.1, IBM added the ability to collapse categories in navigators via the setAutoExpandGuidance method paired with the view's setEnableNoteIDsForCategories. Using TabularDataModel's expand/collapse methods, I maintain a Set of the faux category note IDs generated when setEnableNoteIDsForCategories is enabled and pass them to the navigator when appropriate. This allows my code to deal with arbitrarily-collapsed categories without containing any other special code - considering how uselessly buggy my implementation of this was before 9.0.1, I'm quite happy those methods are there.

The combination of the sorting and categorization capabilities means that my collections are able to support the same xp:viewPanel UI features that standard xp:dominoView data sources are.

Deferred Data Access

The final key concept in the framework is the deferral of actually accessing a document until it's necessary. Each model object can be constructed in one of three ways: as a new document in a database, as a wrapper around an existing document, and as a wrapper around a view entry. In the view entry case, the model object doesn't touch the underlying document. Instead, it makes a note of the database path and the UNID (if a non-category) in case it DOES need to access the document later, and then stores the entry's column values in a map. While the model objects don't make a user-side distinction between view entries and documents (you can request any item value whether or not it's in the view), they DO use these cached column values first. So if your code only requests values that are present in the view, the underlying document is never accessed at all. This leads to exceedingly-efficient (comparatively) data access without making the user of the objects worry about manually accessing the document if the value isn't in the view.


The result of all this is that the Framework collections share all the advantages and pitfalls of the underlying views. Some things are easy and fast (categorization, sorting, multi-entry documents, summary data) while some are still impractical or slow (Rich Text, MIMEBeans, arbitrary queries). But you go to production with the database indexer you have, and so far this method has been serving me well.

Dabbling in Reflection

Jul 7, 2014 3:00 PM

Tags: java

In a similar vein to my post from the other day, I'd like to discuss one of the other bugaboos that crops up from time to time in the edges of XPages development, usually spoken of in hushed tones: reflection.

If you're not familiar with the term, reflection is an aspect of metaprogramming; specifically, it's the ability to inspect and operate on objects and classes at runtime. Reflection in Java isn't as smooth as it is in other languages, but it makes sense as you work with it.

The most basic form is the getClass() that every non-null object has: you can use it to get an object representing the class of the object in question, which is your main starting point. This code will print out "java.lang.String":

Class<?> stringClass = "".getClass();
System.out.println(stringClass.getName());

From here, you can start to get fancier. For example, this code...

Class<?> stringClass = "".getClass();

for(java.lang.reflect.Method method : stringClass.getMethods()) {
	System.out.println(method);
}

... will print out the signatures for all public instance and static methods, with output like:

public boolean java.lang.String.equals(java.lang.Object)
public int java.lang.String.hashCode()
public java.lang.String java.lang.String.toString()
public char java.lang.String.charAt(int)
public int java.lang.String.compareTo(java.lang.String)
...
public int java.lang.String.compareTo(java.lang.Object)
public final native java.lang.Class java.lang.Object.getClass()
public final native void java.lang.Object.notify()
public final native void java.lang.Object.notifyAll()

There is also a variant, getDeclaredMethods(), that shows just the methods from that specific class, including private/protected ones. But the java.lang.reflect.Method objects you get there are useful for more than just printing out signatures: they're proxies for actually executing the method against an appropriate object. To demonstrate, say you have this class:

public class ExampleClass {
	private String name;

	public ExampleClass(final String name) { this.name = name; }

	public void printName() { System.out.println(name); }
}

You can then use reflection to call the method (this also demonstrates an alternate way of getting a class object if you know the class you want):

ExampleClass someObject = new ExampleClass("my name is someObject");
Method printNameMethod = ExampleClass.class.getMethod("printName");
printNameMethod.invoke(someObject);

So, okay, I found a way to call a method that involves way more typing for no gain - so what? Well, in a situation where reflection is actually useful, you won't actually know what methods are available. And this happens constantly in XPages, but you don't have to worry about it: reflection is how EL and SSJS do their work. If you've ever looked through one of the monstrous stack traces emitted by an XPage with an exception, you've probably seen this very call:

java.lang.Exception
    controller.home.getOutput(home.java:18)
    sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
    java.lang.reflect.Method.invoke(Method.java:611)
    com.sun.faces.el.PropertyResolverImpl.getValue(PropertyResolverImpl.java:96)
	...

That's reflection in action. The EL resolver found the object and then used reflection to determine how it should get the "output" property (most likely the instanceof operator) and then, determining that it was talking to a plain bean, constructed the name "getOutput", asked the object's class for that method, and invoked it.

Like most things in Java, reflection gets more complicated and powerful from here; there are facilities for dealing with generic types, creating new instances, reading annotations, making private properties/methods accessible, and so forth. But the basics are simple and worth knowing: it's all about asking an object what it is and what it is capable of. Once you're comfortable with that, the rest of the pieces will fall into place as you encounter them.
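
As a quick taste of one of those - making private members accessible - here's how you could read the private "name" field of the ExampleClass from earlier:

ExampleClass someObject = new ExampleClass("my name is someObject");
java.lang.reflect.Field nameField = ExampleClass.class.getDeclaredField("name");
nameField.setAccessible(true); // lifts the "private" restriction for this Field object
String name = (String)nameField.get(someObject);
System.out.println(name); // prints "my name is someObject"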

XPages Data Caching and a Dive Into Java

Jul 2, 2014 4:39 PM

Tags: java

One of the most common idioms I run into when writing an app is the desire to cache some expensive-to-compute data in the view or application scope on first fetch, and then use the cached version from then on. It usually takes a form like this:

public Date getCachedTime() {
	Map<String, Object> applicationScope = ExtLibUtil.getApplicationScope();
	if(!applicationScope.containsKey("cachedTime")) {
		// Some actual expensive operation goes here
		applicationScope.put("cachedTime", new Date());
	}
	return (Date)applicationScope.get("cachedTime");
}

That works well, but it bothered me that I had to write the same boilerplate code every time, and I wondered if I could come up with a better way. The short answer is "no, not really". But the longer answer is "sort of!". And though the solution I came up with is kinda pathological and is generally unsuitable for normal humans and not worth the thin advantages, it does provide a good example of a couple of more-advanced Java concepts that you're likely to run across the more you get into the language. So here's the cache function in question, plus a "shorthand" version and an example:

@SuppressWarnings("unchecked")
public static <T> T cache(final String cacheKey, final String scope, final Callable<T> callable) throws Exception {
	Map<String, Object> cacheScope = (Map<String, Object>) ExtLibUtil.resolveVariable(FacesContext.getCurrentInstance(), scope + "Scope");
	if (!cacheScope.containsKey("cacheMap")) {
		cacheScope.put("cacheMap", new HashMap());
	}
	Map<String, Object> cacheMap = (Map<String, Object>) cacheScope.get("cacheMap");
	if (!cacheMap.containsKey(cacheKey)) {
		cacheMap.put(cacheKey, callable.call());
	}
	return (T) cacheMap.get(cacheKey);
}

public static <T> T cache(final String cacheKey, final Callable<T> callable) throws Exception {
	return cache(cacheKey, "application", callable);
}

public Date getFetchedTime() throws Exception {
	return cache("fetchedTime", new Callable<Date>() {
		public Date call() throws Exception {
			return new Date();
		}
	});
}

This code raises a question: what in the name of all that is good and holy is going on? There are a number of Java concepts at work here, some of which you've likely already seen:

  • Generics. Generics are the class names in angle brackets, like Map<String, Object> and Callable<Date>. They're Java's way of taking a container or producer class and making it usable with any class types without having to cast everything from Object.
  • Overloading methods. This is a small one: you can declare the same method name with different parameter types, which is often useful for "shorthand" versions of methods that allow for default values after a fashion. In this case, the short version of "cache" defaults to using the application scope by name.
  • The Callable interface. This is an interface in the java.util.concurrent package and is meant as a utility for multithreading purposes, but it also serves our needs here. Basically, it's an interface that means "this contains code that can be executed by using the call method with no arguments".
  • Anonymous classes. This is that business with "new Callable...". Java calls those anonymous classes and what they are are classes declared and instantiated inline with the code, without having a "proper" class declaration somewhere else. They're called "anonymous" because they have no name of their own - just the interface or class they implement/extend. In this case, I'm saying "make a new object that implements Callable and is used for this purpose only". I could make a standalone class, but there's no need. If you're familiar with closures from good languages, anonymous classes can be used like a really crappy version of those.
  • Generic return value declarations. The "<T> T" and "Callable<T>" bits build on normal generic use for when declaring a method. The first one, in brackets, basically says "okay Java, rather than returning a known object type, I'm going to let the user use anything, and for this purpose we're going to call it T". So from then on, an occurrence of T in that method body refers to whatever the programmer using the method intends. You can see in the getFetchedTime that I'm using a Callable<Date>; Java picks up on that "Date" name and conceptually substitutes it for all the Ts in the method. I'm essentially saying "I'm going to use the cache method, and I want it to accept and return Dates". Another call to the method could be to cache a List<String> or a SomeUserClass, while the method itself would remain the same.

So is all this worth it for the task at hand? Probably not. There are SOME advantages, in that it's easy to change the default caching scope, and in practice I also used a map that auto-expires its entries over time, but for normal use the first idiom is fine. But hey, it sure provided a whole torrent of Java concepts, so that's worth something.

Coaxing Enums Into XPage Combo Boxes

May 28, 2014 7:07 PM

Tags: java xpages

I've been reworking Forms 'n' Views over the last couple days to use it as a development companion for the OpenNTF Design API, and one of the minor hurdles I ran across was presenting a getter/setter pair that expects a Java Enum to an XPages control (a combo box in this case, but it would be the same with other similar controls). I had assumed at first that it would simply not work - that the runtime would entirely balk at the conversion. However, it turned out that XPages does half the job for you: for displaying existing values, it will properly coax between the string form and the Enum constant. However, it breaks when actually going to set the value.

This whetted my appetite enough that I decided to try to see if I could write a small converter to finish the job in a properly generic way, and it turns out I could. To cut to the chase, I wrote this class:

package converter;

import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;
import javax.faces.convert.Converter;
import javax.faces.el.ValueBinding;

public class EnumBindingConverter implements Converter {

	@SuppressWarnings("unchecked")
	public Object getAsObject(final FacesContext facesContext, final UIComponent component, final String value) {
		ValueBinding binding = component.getValueBinding("value");
		Class<? extends Enum> enumType = binding.getType(facesContext);

		return Enum.valueOf(enumType, value);
	}

	public String getAsString(final FacesContext facesContext, final UIComponent component, final Object value) {
		return String.valueOf(value);
	}

}

The getAsString method is your standard "I don't care about this" near-passthrough, but the getAsObject portion does the work necessary. Because I'm binding to a Java bean with getters/setters expecting the appropriate Enum type (this would presumably also work with a DataObject with a thoroughly-implemented getType method), the ValueBinding class is able to look up and provide back the expected Enum class. I then use the standard Enum.valueOf method to let Java do the work of actually finding the appropriate Enum constant.

Using this, I'm able to write a control like this:

<xp:comboBox value="#{columnNode.sortOrder}">
	<xp:this.converter><xp:converter converterId="enumBindingConverter"/></xp:this.converter>
	<xp:selectItem itemLabel="None" itemValue="${javascript:org.openntf.domino.design.DesignColumn.SortOrder.NONE}" />
	<xp:selectItem itemLabel="Ascending" itemValue="${javascript:org.openntf.domino.design.DesignColumn.SortOrder.ASCENDING}"/>
	<xp:selectItem itemLabel="Descending" itemValue="${javascript:org.openntf.domino.design.DesignColumn.SortOrder.DESCENDING}"/>
</xp:comboBox>

And it works! That allows you to bind to a getter/setter pair that expects an Enum without having to write in a shim to convert it to a String value beyond the generic converter.

You probably noticed the same thing I did, though: that's ugly as sin. Not only that, but it's a lot of redundant code when what you usually want to do is provide a selection of all available constants for the given Enum. To save myself a bit of future hassle and prettify my code a bit (at the expense of UI, but who cares about that? (I kid)), I wrote this DataObject bean to accept an Enum class name and return a list suitable for use in a control:

package bean;

import java.io.Serializable;
import java.util.*;

import javax.faces.context.FacesContext;
import javax.faces.model.SelectItem;

import org.openntf.domino.utils.Strings;

import com.ibm.xsp.model.DataObject;

import frostillicus.bean.*;

@ManagedBean(name="enumItems")
@ApplicationScoped
public class EnumSelectItems implements Serializable, DataObject {
	private static final long serialVersionUID = 1L;

	public Class<?> getType(final Object key) {
		return List.class;
	}

	@SuppressWarnings("unchecked")
	public List<SelectItem> getValue(final Object key) {
		if(key == null) {
			throw new NullPointerException("key cannot be null.");
		}

		String className = String.valueOf(key);

		try {
			Class<? extends Enum> enumClass = (Class<? extends Enum>)FacesContext.getCurrentInstance().getContextClassLoader().loadClass(className);

			Enum[] vals = enumClass.getEnumConstants();
			List<SelectItem> result = new ArrayList<SelectItem>(vals.length);
			for(Enum val : vals) {
				SelectItem item = new SelectItem();
				item.setLabel(Strings.toProperCase(val.name()));
				item.setValue(val);
				result.add(item);
			}

			return result;
		} catch(Throwable t) {
			throw new RuntimeException(t);
		}
	}

	public boolean isReadOnly(final Object key) {
		return true;
	}

	public void setValue(final Object key, final Object value) { }
}

It uses the @ManagedBean implementation that I cobbled together a while back, but you could just as easily use the normal faces-config route. The way you use this is to pass in the class name of the Enum you want in a selectItems component:

<xp:comboBox value="#{columnNode.sortOrder}" onchange="fnv_mark_dirty('#{id:viewPane}')">
	<xp:this.converter><xp:converter converterId="enumBindingConverter"/></xp:this.converter>
	<xp:selectItems value="${enumItems['org.openntf.domino.design.DesignColumn$SortOrder']}"/>
</xp:comboBox>

Though it only saves a few lines in this case, it'd pay off more with larger Enums. It has the downside of being pretty naive about its conversions: it just does a basic first-character proper-casing of the Enum constant name, but there's no reason you couldn't add better human-friendly-ifying to it.

I'm not sure if I'll be able to satisfy all of my Java preferences when it comes to multi-value Enums, though. Java's generic type erasure means that runtime code wouldn't have a way to inspect the getters/setters to figure out the appropriate Enum type. Perhaps I can inspect the select items attached to the control to find the appropriate class.

Expanding your use of EL: Mea Culpa

May 2, 2014 9:27 PM

A while back, I wrote a couple of posts about using Expression Language (EL) to clean up your XPages code, and in the first of them I gave what I labeled an exhaustive list of the interfaces that XPages EL provides special support for (as opposed to just looking for getters/setters). It turns out I was sorely mistaken: there are at least two other types that EL specially supports.

The first is the class used for JavaScript objects, such as allowing #{foo.bar} to bind like you'd expect to a value set to the JavaScript literal { bar: 'baz' }. I'm not sure where the specific support comes in - presumably based on one of the classes in the hierarchy and not the IValue interface it implements - but it's not terribly important.

The second is an interface that I've used frequently, but didn't realize had this property: ViewRowData. ViewRowData is essentially a specialized variant of DataObject intended for view entries - so much so, in fact, that I just assumed that view entries supported DataObject simultaneously, as my classes that implement it do.

I don't expect I'll have a lot of situations where I'll want to implement ViewRowData but not DataObject, but it's good to know one additional tidbit about the inner workings of XPages. One day, maybe I'll try to find a truly exhaustive list of what gets special treatment.

How I Added File-Download Support To My Model Framework

Apr 12, 2014 1:33 PM

Tags: xpages java

Unlike the last few posts, this one isn't as much a tip or tutorial as it is an attempt to share my suffering by detailing the steps I took recently to add file-download and -upload support to my current model framework.

The goal of my framework overall is to retain the benefits of normal Domino document data sources (arbitrary field access, ease of use in simple cases, and compatibility with standard XPage idioms) while adding on useful new abilities (getters/setters, relationships with other objects, abstraction between the XPage and the actual location of the data). Most of this work is easy - by implementing DataObject, my objects worked well with EL bindings and most of the work was focused on hooking up Domino Document objects in a similar way to IBM's DominoDocument class.

However, some of the more esoteric controls are not so easy, and the file controls were chief among them. As I tinkered, I found that you can't directly point an xp:fileDownload control at a normal data source and get it to work well. You can get it to list a single file, but I wasn't able to get it to recognize more than one through traditional means.

In my tinkering, I discovered that there are two main paths for the control: the traditional one and a more-complicated one, which makes a lot of assumptions about the nature and availability of the referenced data source. Importantly, unlike normal bindings (which take your expression, say #{doc.body}, have the EL processor resolve it, and deal with the final resolved entity), the control actually treats the EL expression as a String, splits it in two on the "." character, and looks up the variable on the left side plus ".DATASOURCE". This "doc.DATASOURCE" idiom matches the behavior of data-source controls on an XPage, which make themselves or an associated object available by that name. If the control finds an object by that name and if it implements DataSource (a common interface used by these controls), it assumes that the source contains a DataContainer object that also implements DocumentDataContainer specifically.

Phew, okay, so far so good. It makes some inconvenient sense that the control would expect its referenced value to conform to the norms of the built-in data sources, so I wrote controls to do just that (previously, I got by using normal EL references from managed beans). However, I still ended up getting ClassCastExceptions at runtime: it turned out that the control also uses a nebulous factory object to attempt to convert the received value into a data type it expects, specifically DominoDocument. Unfortunately, unlike the interfaces I implemented already, subclassing DominoDocument was not something I was willing to do. So I was back to digging.

What I found was that this attempted conversion was happening due to an object implementing com.ibm.xsp.model.DataModelFactory, which has the job of taking an intermediate value used in the control (a com.ibm.xsp.model.FileDownloadValue, which contains a single-entry HashMap containing the object and requested field name) and returning the attachment data as an instance of DataModel. In addition to writing the code to do this, there's another wrinkle: the way the factory is found. The way I found to access this is thoroughly awkward: if I use the deprecated method getFactoryLookup() on the application instance, I can then retrieve the default factory by a key ("com.ibm.xsp.DOMINO_DATAMODEL_FACTORY"), wrap it in a new object that knows about my model objects, and replace it in the lookup.
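
In code form, that swap amounts to something like the sketch below. ModelDataModelFactory is my hypothetical wrapper class and the FactoryLookup accessor names are illustrative - the key string is the important part:

ApplicationEx app = (ApplicationEx)FacesContext.getCurrentInstance().getApplication();
FactoryLookup factories = app.getFactoryLookup(); // deprecated, but functional
Object delegate = factories.getFactory("com.ibm.xsp.DOMINO_DATAMODEL_FACTORY");
factories.setFactory("com.ibm.xsp.DOMINO_DATAMODEL_FACTORY",
	new ModelDataModelFactory((DataModelFactory)delegate)); // wraps the default factory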

Once I did all that, I was off to the races! Sort of! There remained another problem: the "delete attachment" icon on the control. I discovered that the way this icon works is that it composes an action that tracks down an adapter that knows how to communicate with the associated model object and sends it a deleteAttachments(...) call. As before, the way it "tracks down" the adapter is to look up another factory ("com.ibm.xsp.DESIGNER_DOMINO_ADAPTER_FACTORY") and ask it to provide an appropriate adapter. So again I wrote my own variant of this to intercept calls dealing with model objects and carefully placed it, Indiana-Jones-style, in place of the default factory.

The net result is that it works! I can point a file-download control at a model object (as long as that object was specifically instantiated in a data source control and is referenced directly in "object.field" format) and have it list all of the attachments in the field, along with a functioning delete action if desired. The file-upload control was, fortunately, much easier: when I realized I could swap out most of the guts of my model object with a back-end DominoDocument (now that the OpenNTF API takes care of data conversions for me), I was able to just proxy the upload calls to it and let it do the heavy lifting.

Now, there's only one nagging problem remaining: inline image uploads in rich text controls. But still, even if I never get that part working, the file manipulation will be enough. And, importantly, I've paved the way for me to be able to do use the same controls with model objects that aren't associated with Domino at all, should I desire to do so down the line.

The Collections Framework as Education

Apr 6, 2014 7:15 PM

While I've been formulating the followup to my last post, I've also been thinking about how wonderful the Java Collections framework is. I've been singing its praises for a long time (as have others), but what's stood out to me lately is how well it encapsulates many of the most important foundational concepts of Java.

One of the reasons Java appears so intimidating and foreign to Notes developers is that the way many are introduced to it - as an auxiliary for XPages - involves learning a torrent of unrelated concepts simultaneously: strict variable typing, object equality and comparison, class casting, managed beans, recycling (though this is just as important in SSJS), confusing recycling with memory management, static methods, inheritance, interfaces, exceptions, abstract classes, database access, serialization, concurrency, and so forth. If you focus on the Collections framework specifically, though, you can learn many of the important basics in a clear way. And as a bonus, these are critical concepts shared with other, better OO languages.

Interface vs. Implementation

One of my favorite lecture topics is the value of separating interface and implementation. Unfortunately, most of the time, the values are unclear - if you're only going to make one class, is it valuable to have an interface? - and it seems like pedantry. The Collection classes, however, bring this value into sharp focus. The core interfaces correspond to elemental data structures of Computer Science and tell you everything you need to know to use them, e.g.

  • List: a collection of objects in the order stored by the user, accessed by index.
  • Map: a collection of objects accessed by a key of any type specified by the user.
  • Set: a collection of unique objects in either arbitrary or (for SortedSets) automatically-sorted order.

These interfaces describe everything you need to know about the object and do so so well that there's rarely any need to specify the actual class at all outside of creation. The differences between the actual classes are, quite literally, implementation details: some focus on single-thread performance (like ArrayList), some serve specialized cases (like EnumMap or IdentityHashMap), some implement multiple useful interfaces (like LinkedList), and some are geared towards multi-threaded use (like the java.util.concurrent objects).
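
The practical habit that falls out of this is declaring against the interface and naming the implementation only at construction time:

// The left side states what the code needs; the right side is swappable
List<String> names = new ArrayList<String>();
Set<String> unique = new TreeSet<String>();
Map<String, Integer> counts = new HashMap<String, Integer>();

// Switching to, say, a concurrent implementation later touches only one line
Map<String, Integer> sharedCounts = new ConcurrentHashMap<String, Integer>();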

Job-Based Polymorphism

Not only does the Collection framework provide an example of a cleanly-designed interface/implementation separation, but it also provides examples of why this pays off. One of the handiest tricks is the way most of the objects (other than Map) participate in mutual-translation schemes. By this I mean that you can take one Collection and pass it to another (either in the constructor or via addAll) and build a second collection of the same objects, but obeying the rules of the new collection.

For example, say you have a List of usernames that you retrieved from a call to view.getColumnValues(...) and now you want to sort and unique-ify them (basically like @Sort(@Unique(...))). Well, there's a collection type for that: the SortedSet, of which TreeSet is the common implementation. Rather than manually building a unique, sorted List (even using helper methods), you can accomplish this with TreeSet's constructor. You can then add in names from other sources and maintain the same unique-and-sorted guarantee without any additional code:

SortedSet<String> names = new TreeSet<String>(view.getColumnValues(1));
names.addAll(doc.getItemValue("SomeNamesField"));
names.addAll(someNameMap.values());

Because those methods only care that the incoming values are in a Collection, the fact that some may be in Lists, in Sets, or in some other entirely-unknown type that implements Collection is completely irrelevant. As it should be.*

Reusability

This benefit is so inherent to the framework that I almost forgot to include it. Largely by the nature of the task being performed but also by virtue of proper coding, the collection objects are so reusable that they feel like part of the language, even though they're just objects like any you write. They're built to be generic and used in ways far outside of the original ken of their creators. Both the implementations and interfaces can be spread far and wide, such as with my database model collection class that also acts as a List.

Takeaway

Not all of these benefits are going to be useful in all of your code, and it will be very infrequent that you have a reason to write your own framework of this type. By learning the contours of this framework, though, you can advance your knowledge of Java and object-oriented programming generally by leaps and bounds. It will help you write cleaner code and to identify similar concepts used elsewhere.

* For example, did you know that viewScope is not a HashMap but actually a class called "javax.faces.component.UIViewRoot$ViewMap"? That's how much the implementation doesn't matter.

A Quick, Dirty, and Inefficient @ManagedBean Implementation in XPages

Dec 17, 2013 9:24 AM

Tags: xpages java

By now, most of us are pretty familiar with the process for adding managed beans to an XPages app: go to faces-config.xml and add a <managed-bean>...</managed-bean> block for each bean. However, JSF 2 added another method for declaring managed beans: inline annotations in the Java class. This wasn't one of the features backported to the XPages runtime, but it turns out it's not bad to add something along these lines, and I decided to give it a shot while working on the OpenNTF API.

The first thing that's required is a version of the annotations that ship with JSF 2. Fortunately, annotations in Java, though ugly to define, are very simple, and each one involves only a few lines of code and can be added to your NSF directly:

javax.faces.bean.ManagedBean

package javax.faces.bean;

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(value=RetentionPolicy.RUNTIME)
public @interface ManagedBean {
	String name() default "";
}

javax.faces.bean.ApplicationScoped

package javax.faces.bean;

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(value=RetentionPolicy.RUNTIME)
public @interface ApplicationScoped { }

javax.faces.bean.SessionScoped

package javax.faces.bean;

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(value=RetentionPolicy.RUNTIME)
public @interface SessionScoped { }

javax.faces.bean.ViewScoped

package javax.faces.bean;

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(value=RetentionPolicy.RUNTIME)
public @interface ViewScoped { }

javax.faces.bean.RequestScoped

package javax.faces.bean;

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(value=RetentionPolicy.RUNTIME)
public @interface RequestScoped { }

javax.faces.bean.NoneScoped

package javax.faces.bean;

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(value=RetentionPolicy.RUNTIME)
public @interface NoneScoped { }

The more-complicated code comes into play when it's time to actually make use of these annotations. There are a couple ways this could be handled, and the route I took was via a variable resolver. Note that this code requires my current branch of the OpenNTF API, though a mild variant would work with the latest M release:

resolver.AnnotatedBeanResolver

package resolver;

import java.io.Serializable;
import java.util.*;
import javax.faces.context.FacesContext;
import javax.faces.el.EvaluationException;
import javax.faces.el.VariableResolver;
import javax.faces.bean.*;

import org.openntf.domino.*;
import org.openntf.domino.design.*;

import com.ibm.commons.util.StringUtil;

public class AnnotatedBeanResolver extends VariableResolver {

	private final VariableResolver delegate_;

	public AnnotatedBeanResolver(final VariableResolver resolver) {
		delegate_ = resolver;
	}

	@SuppressWarnings("unchecked")
	@Override
	public Object resolveVariable(final FacesContext facesContext, final String name) throws EvaluationException {
		Object existing = delegate_.resolveVariable(facesContext, name);
		if(existing != null) {
			return existing;
		}

		try {
			// If the main resolver couldn't find it, check our annotated managed beans
			Map<String, Object> applicationScope = (Map<String, Object>)delegate_.resolveVariable(facesContext, "applicationScope");
			if(!applicationScope.containsKey("$$annotatedManagedBeanMap")) {
				Map<String, BeanInfo> beanMap = new HashMap<String, BeanInfo>();

				Database database = (Database)delegate_.resolveVariable(facesContext, "database");
				DatabaseDesign design = database.getDesign();
				for(String className : design.getJavaResourceClassNames()) {
					Class<?> loadedClass = Class.forName(className);
					ManagedBean beanAnnotation = loadedClass.getAnnotation(ManagedBean.class);
					if(beanAnnotation != null) {
						BeanInfo info = new BeanInfo();
						info.className = loadedClass.getCanonicalName();
						if(loadedClass.isAnnotationPresent(ApplicationScoped.class)) {
							info.scope = "application";
						} else if(loadedClass.isAnnotationPresent(SessionScoped.class)) {
							info.scope = "session";
						} else if(loadedClass.isAnnotationPresent(ViewScoped.class)) {
							info.scope = "view";
						} else if(loadedClass.isAnnotationPresent(RequestScoped.class)) {
							info.scope = "request";
						} else {
							info.scope = "none";
						}

						if(!StringUtil.isEmpty(beanAnnotation.name())) {
							beanMap.put(beanAnnotation.name(), info);
						} else {
							beanMap.put(loadedClass.getSimpleName(), info);
						}
					}
				}

				applicationScope.put("$$annotatedManagedBeanMap", beanMap);
			}

			// Now that we know we have a built map, look for the requested name
			Map<String, BeanInfo> beanMap = (Map<String, BeanInfo>)applicationScope.get("$$annotatedManagedBeanMap");
			if(beanMap.containsKey(name)) {
				BeanInfo info = beanMap.get(name);
				Class<?> loadedClass = Class.forName(info.className);
				// Check its scope
				if("none".equals(info.scope)) {
					return loadedClass.newInstance();
				} else {
					Map<String, Object> scope = (Map<String, Object>)delegate_.resolveVariable(facesContext, info.scope + "Scope");
					if(!scope.containsKey(name)) {
						scope.put(name, loadedClass.newInstance());
					}
					return scope.get(name);
				}
			}
		} catch(Exception e) {
			e.printStackTrace();
			throw new RuntimeException(e);
		}

		return null;
	}

	private static class BeanInfo implements Serializable {
		private static final long serialVersionUID = 1L;

		String className;
		String scope;
	}
}

faces-config.xml

<faces-config>
	...
	<application>
		<variable-resolver>resolver.AnnotatedBeanResolver</variable-resolver>
	</application>
</faces-config>

This is not efficient! Whenever a variable is requested that the main resolver can't find (or resolves to null), the code goes through each Java class defined in the NSF (not in plugins, the filesystem, or attached Jars) and checks whether it carries the @ManagedBean annotation. Fortunately, the worst of this is only incurred the first time (for the lifetime of the applicationScope), but even that one-time scan gets linearly slower as the number of Java resources in your app increases.

Now, there are still other features of the actual JSF 2 implementation that this doesn't cover, particularly @ManagedProperty, but it can still be useful on its own. In practice, it lets you ditch the faces-config declaration and write your beans like this:

package bean;

import javax.faces.bean.*;
import java.io.Serializable;

@ManagedBean(name="someAnnotatedBean")
@ApplicationScoped
public class AnnotatedBean implements Serializable {
	private static final long serialVersionUID = 1L;

	// ...
}

I plan to give this a shot in the next thing I write to see how it works out. And regardless of future use, it's a nice example of the kind of thing that gets easier with the OpenNTF API.

Building My App On Custom Renderers

Nov 24, 2013 8:41 PM

Tags: xpages java
  1. I Am Terribly Excited About Custom Renderers
  2. Building My App On Custom Renderers

A little while ago, I mentioned how I have become smitten with custom renderers - Java classes that take XPage components like application layouts and widgets and render them in custom HTML. Though I've had a lot of other things to work on in between then and today, my ardor for the subject has not dimmed. So today, I mostly completed the process for my task-tracking app.

At a high level, the app is a fairly typical web app built using the popular-for-good-reason Ace - Responsive Admin Template theme from WrapBootstrap. I'm a huge fan of using these pre-built themes - in addition to handling the visual design, they tend to come packaged with a set of specialized widgets and jQuery plugins that are made to work well as a cohesive whole. However, the downside is that you have to write your code specifically targeting the framework, even more so than with stock Bootstrap (and for this reason, I couldn't directly use Bootstrap4XPages). This makes it a perfect candidate for renderer-ification, as most of the components I'm using - the layout, widget boxes, view panels/data tables, form layouts - have direct analogs in either the stock XPage runtime or in the OneUI-based components in the Extension Library.

So what I've done is take the basic structure of the Bootstrap4XPages project (and more than a smidge of its code) and use it to build custom renderers for the components I need, packaged into an OSGi plugin:

Project Explorer

It's something of a mess of code, and parts of it have been a pain to deal with, but for the most part it's been a process of looking at the original renderer source in the ExtLib and Bootstrap4XPages and then either copying and tweaking, subclassing and overriding, or implementing from scratch. I've run into some bothersome limitations in the stock objects as well, particularly when it comes to areas where I need to apply CSS classes to elements not represented in the components, like the FontAwesome-based icon classes for the nav bar or specific classes for widget title bars and the body area. Since these components also lack convenient "attr" properties, I've resorted to ugly hacks: I've co-opted the "imageAlt" property of the tree nodes to trigger creation of a Bootstrap-style "i" tag with a class for the navigator, and for the widgets I just made the styleClass apply only to the header and added an ugly exception to look for "no-padding" classes and apply them to the body instead. It's not a pretty process sometimes.

But the results are pretty interesting. By using standard components, I have an app that can swap readily between very different themes (with some rough edges for areas where code is still Ace-specific):

Ace

OneUI v2.1

OneUI v3.0.2

All of this does, of course, raise an important question. Namely: why bother? The original way I implemented it - by writing the Bootstrap layout HTML by hand, tweaking individual classes here and there, and forgoing the OneUI-based components - worked just fine, and it took less work. And really, I'm not terribly likely to suddenly decide that I want to switch this app from Ace to OneUI or stock Bootstrap. But there are several important benefits to this approach, and they apply both to me personally and to others who may consider doing the same thing:

  • Practice. This seems odd to put first, but it's vitally important. Writing these renderers has improved my knowledge of the platform a great deal. It's given me a reason to delve further into the Extension Library source, to improve my OSGi-plugin-writing knowledge, and to familiarize myself with an easily-overlooked part of the JSF stack. Like many things with the plumbing of XPages, it's more of a PITA than it absolutely needs to be, but, if you do it, you'll come out a much better developer.
  • Focus. Now that I have almost all of the code that's specific to Bootstrap or Ace out of my XPage app, the remaining code has a sharpened focus on the task at hand. Now I use standard widget, pager, and view controls just like in any run-of-the-mill app and hook them up the same way, without having to worry about how it's going to look. Time spent on the problem at hand is time spent well.
  • Maintainability. During my transition from hardcoded HTML in the XSP source to custom renderers, I also jumped from Ace 1.0 to 1.2, which included an upgrade to Bootstrap 3. For this task, that meant learning about a number of the differences between Bootstrap 2 and 3 and tweaking lots of little CSS classes and HTML structures across every page of the app. Next time, though, when Ace version 2 running with Bootstrap 4 comes out, I'll only need to change the theme, entirely outside of the app, and everything else will come along for the ride (mostly).
  • Less Code in the NSF. This is a corollary to the last bullet point, but is important as its own concept. By moving the theme and supporting files (like the CSS and JS assets themselves) out to a plugin, there's less cruft to carry around in the NSF itself and, more importantly, less stuff to keep up-to-date when used in common across multiple apps. This is not unique to this technique (you can move the resource files out without doing renderers, and you can bundle common custom controls that way too), but it's a valuable side effect.
  • Interoperability. This doesn't matter so much to me currently, since I'm the only one working on this app, but it will matter later, and it would matter immediately to any larger company. If each of your apps, regardless of the final appearance, is built using the standard XPage and ExtLib controls, that means you don't need to familiarize every developer working on it with the CSS framework of your choice, whether it be Bootstrap, your company's home-grown HTML, or something else entirely. You can have one person writing the themes and have the rest of the team go about building their apps by the book (or, rather, books).
  • Flexibility. As demonstrated by my screenshots above, this technique brings tremendous flexibility. In this specific case, I knew what theme I wanted to use when I started writing the app, but in the future I won't need to. Whenever I'm struck by inspiration for a new app, I can start writing code immediately (dressing it up in Ace or OneUI) and then decide on and buy another theme later without the hassle of going back into my functional XPages to replace and tweak a bunch of CSS classes.

It's a pity that there's such overhead in the initial work of writing custom renderers, but it will get gradually easier over time, now that Bootstrap4XPages is available to reference. And still, hassle and all, I believe this to be the best way forward for organized XPage development and one of the platform's distinct advantages over others (well, the others that aren't JSF). I strongly encourage everyone to look at least a little into the idea. And I also, as always, encourage you to contact me at will, either to pick my brain or to commission my company to help renderer-ify or otherwise improve your apps.

I Am Terribly Excited About Custom Renderers

Nov 12, 2013 7:56 PM

Tags: xpages java
  1. I Am Terribly Excited About Custom Renderers
  2. Building My App On Custom Renderers

It's much to my chagrin that it took me until this past weekend to properly dive into custom renderers. I had seen them before, primarily when working with mypic, but always avoided building my own, mostly because of what a hassle they are to write. However, I'm now convinced that the hassle is worth it, at least for some primary uses.

If you're not familiar with custom renderers, they're sort of an odd beast, but the concept eventually "clicks". In XPages, a component (such as a link, a panel, or an application layout) consists of two main aspects: the structural code that defines the properties of the component and the renderer that handles actually outputting the HTML for the browser. Adding your own custom renderers lets you continue to use the same XSP markup to define the control, yet get very different results.
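
To make the division concrete, here's a minimal sketch of a renderer (all names here are invented): the component carries the properties, and a class like this decides what HTML they become. In a real project, the class would be registered for a component family and renderer type in the plugin's faces-config.

package renderer;

import java.io.IOException;
import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;
import javax.faces.context.ResponseWriter;
import javax.faces.render.Renderer;

public class WidgetBoxRenderer extends Renderer {
	@Override
	public void encodeBegin(final FacesContext context, final UIComponent component) throws IOException {
		ResponseWriter w = context.getResponseWriter();
		w.startElement("div", component);
		// This CSS class is where a Bootstrap, OneUI, or Ace renderer would differ
		w.writeAttribute("class", "widget-box", null);
	}

	@Override
	public void encodeEnd(final FacesContext context, final UIComponent component) throws IOException {
		context.getResponseWriter().endElement("div");
	}
}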

The best example of this (and the source of most of my education) is the recent Bootstrap4XPages project. It takes a standard OneUI applicationLayout-based app and styles it using Bootstrap markup instead of OneUI, even though the XSP markup is the same. In the normal case, this just means that you can reuse some of the basics when building a Bootstrap-targeted app. And that's good enough on its own: it saves you tons of work.

But the more important aspect is that it lets you focus much more on solving the task at hand and much less on writing to the layout framework.

This is a huge advantage for XPages (and any framework with this separation). As low-key as Bootstrap is (and other modern CSS frameworks are), you still have to structure your markup and sprinkle class names around to match its expectations. With a really fleshed-out set of custom renderers, though, you could potentially not care at all about the resultant UI framework when writing your app and instead focus on using stock or ExtLib controls to solve the problem directly, letting the theme+renderers do the work.

In reality, there'll be a bit of leaking (for example, using Bootstrap-friendly FontAwesome classes instead of images doesn't really fit with standard IBM controls), but I aim to minimize that as much as possible in my future projects. My goal is to use applicationLayout and standard controls for as much as I can, writing instead a set of custom renderers for each primary theme I want to apply.

I'll have more to write about this later, but for now, the TL;DR version is: custom renderers have the potential to dramatically improve the way XPages apps are written, as long as you're willing to do the initial Java legwork.

Progress on the org.openntf.domino Design API

Sep 6, 2013 6:57 PM

Tags: java openntf

The primary focus of my recent work with the OpenNTF Domino API has been the implementation of a fleshed-out design API. I've done a lot of work in my Domino-programming career manipulating design elements, usually via DXL (such as with my Forms 'n' Views app), and my aim is to codify all of that into the API itself, eventually growing it to cover almost every design element that Designer does (we'll see about Navigators, Composite Apps, and the like).

One of the big hurdles in recent years when manipulating design elements has been dealing with XPages-related elements - XPages themselves, Java design elements, Jars, faces-config.xml, etc. Though XPages by their nature have a very nice XML representation, that doesn't carry over to DXL - all of those elements are dumped out in the "raw" data format that DXL uses when it doesn't have a better idea of how to handle what you're trying to export. And more specifically, though the XSP markup and the compiled Java code are present in the result, attempts to actually manipulate that were stymied by a "checksum" block of bits at the start of the data - you couldn't just drop the Java bytecode into the doc and expect it to work.

Fortunately, it turns out that those bits aren't a weird "checksum" at all: the data is just stored as Composite Data, the internal format of rich text in Domino. Specifically, all of those elements share the same internal representation as File Resources and Style Sheets, being composed of a CDFILEHEADER and list of CDFILESEGMENTs. When DXL uses the "raw" format, it exports that data as an array of Base64-encoded bytes directly. After discovering this, I wrote the start of a CD implementation in the API and got it to work: the API is now able to read/write the attached data of any one of those file types. Now, there are a few steps that would have to be implemented between this and "write a new XPage with a few lines of code", but this is a step in that direction (and you could theoretically modify your faces-config.xml, xsp.properties, etc. on the fly now, though I haven't tested much).
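
To give a rough idea of what that looks like in practice, here's a hedged sketch of extracting the file data from the CD stream, assuming the item's raw little-endian bytes are already in hand. The layouts (an LSIG is a WORD signature plus a DWORD record length) and the signature constant come from my reading of the Notes C API headers - verify them against editods.h before trusting this:

import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class CDFileDataReader {
	// Assumed value from the C API headers - verify before use
	private static final int SIG_CD_FILESEGMENT = 96;

	public static byte[] extractFileData(final byte[] cdData) {
		ByteBuffer buf = ByteBuffer.wrap(cdData).order(ByteOrder.LITTLE_ENDIAN);
		ByteArrayOutputStream result = new ByteArrayOutputStream();
		while(buf.remaining() >= 6) {
			int start = buf.position();
			int signature = buf.getShort() & 0xFFFF;  // LSIG.Signature
			long length = buf.getInt() & 0xFFFFFFFFL; // LSIG.Length: the whole record
			if((signature & 0xFF) == SIG_CD_FILESEGMENT) {
				int dataSize = buf.getShort() & 0xFFFF; // bytes of actual file data
				buf.getShort();                         // SegSize (padded size)
				buf.getInt();                           // Flags
				buf.getInt();                           // Reserved
				byte[] data = new byte[dataSize];
				buf.get(data);
				result.write(data, 0, data.length);
			}
			// Records are contiguous and word-aligned; skip to the next one
			buf.position(start + (int)(length % 2 == 0 ? length : length + 1));
		}
		return result.toByteArray();
	}
}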

More immediately, it's also a huge step in the direction of interesting read-focused uses. One of these uses in particular has caught my fancy: loading and using XSP-related Java classes in other contexts. To go along with my work, I wrote a DatabaseClassLoader that gives you access to all of those class types from anywhere using the API (including Jar design elements). The possibilities for this have made me giddy: re-using data-model objects in external scheduled tasks, sharing classes with Java agents (...with the caveat that you have to use reflection or a scripting language), "loading" other XPage apps from inside another, treating NSFs as replicating, access-controlled Jars, and so on. I have my own notions for some practical uses I may want to explore, and I'm very interested in seeing what kind of things this lets others do as well.
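
As a taste of that last part, usage would look something like the below - the method name is approximated from the description above, so treat it as a sketch rather than the API's final shape:

// Hypothetical usage; the exact signature may differ in the real API
DatabaseDesign design = database.getDesign();
ClassLoader loader = design.getDatabaseClassLoader(Thread.currentThread().getContextClassLoader());
Class<?> modelClass = loader.loadClass("model.TaskDocument"); // an invented class stored in the NSF
Object model = modelClass.newInstance();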

Expanding your use of EL (Part 2)

Jul 24, 2013 12:37 PM

Tags: el java
  1. Expanding your use of EL (Part 1)
  2. Expanding your use of EL (Part 2)

Last time, I said that my next post would delve into operations and comparisons in EL, but I figured it would be good to first have a real-use example of what I covered in Part 1.

In several of my apps lately, I've taken to using properties files to store app configuration, primarily the server and file paths for the databases that store the actual data. The standard way of doing this is to use the platform's built-in bundle-handling abilities. However, that'd leave me stuck with either putting the bundle reference in the XSP markup somewhere or in a theme - the former would muddy things up and the latter is not available at page-load time. So instead, I wrote a quick class to use as an application-scoped bean, making it available everywhere via EL and Java:

package config;

import java.io.IOException;
import java.io.Serializable;
import java.util.Locale;
import java.util.ResourceBundle;
import javax.faces.context.FacesContext;
import com.ibm.xsp.application.ApplicationEx;
import com.ibm.xsp.model.DataObject;

public class AppConfig implements Serializable, DataObject {
	private static final long serialVersionUID = -5125445506949601097L;

	private transient ResourceBundle config;

	public Class<?> getType(final Object keyObject) {
		return String.class;
	}

	public Object getValue(final Object keyObject) {
		if(!(keyObject instanceof String)) { throw new IllegalArgumentException(); }
		return getConfig().getString((String)keyObject);
	}

	public boolean isReadOnly(final Object keyObject) { return true; }

	public void setValue(final Object keyObject, final Object value) { }

	private ResourceBundle getConfig() {
		if (config == null) {
			try {
				ApplicationEx app = (ApplicationEx) FacesContext.getCurrentInstance().getApplication();
				config = app.getResourceBundle("config", Locale.getDefault());
			} catch (IOException ioe) {
				ioe.printStackTrace();
			}
		}
		return config;
	}
}

This is an example of a simple read-only DataObject implementation. There are only four methods to implement: getType, which returns the class of the value for the key; getValue, which returns the actual value; isReadOnly, which determines whether a given key is read-only (always true here); and setValue, which would set the value for the key if I weren't making it read-only.

It's pretty straightforward to take this idea and apply it to any of your Map-like objects to ease their use in EL without having to implement or stub-out the dozen or so methods demanded by the actual Map interface.

Expanding your use of EL (Part 1)

Jul 23, 2013 1:07 PM

Tags: el java
  1. Expanding your use of EL (Part 1)
  2. Expanding your use of EL (Part 2)

One of the main aspects to cleaning up the code of an XPage is the heavy use of Expression Language (EL), which is XPages/JSF's shorthand language for binding properties to values and methods. The most common kind you'll run across is the kind you get for "free" when working with document data sources:

<xp:inputText value="#{doc.FirstName}"/>

This is the essence of EL in a nutshell: the actual code behind the scenes can be pretty complex, but EL handles that for you. It goes far beyond this, though, in two important ways: accessing custom and non-Domino data sources and writing complex operations.

Accessing Custom Data Sources

In addition to accessing "built-in" data types like Domino documents and the scope variables, EL is generic enough to deal with several kinds of data access (technically, it ONLY does these, and the standard bindings are examples of it):

  • Lists and arrays. EL can use bracketed-subscript notation to reference individual elements of a Java List or array, e.g. #{names[3]}. I've found this to be the rarest use, but it could be useful if repeating over two parallel lists.
  • Maps. This is how EL accesses the scope variables (requestScope, viewScope, etc.). It translates code like #{sessionScope.cart} to sessionScope.get("cart") and sessionScope.put("cart", ...).
  • DataObjects. This is basically a "Map-lite" interface from IBM that is meant for making generic key/value objects easily-accessible via EL - this is what Domino document objects implement. It translates code like #{doc.FirstName} to doc.getValue("FirstName") and doc.setValue("FirstName", ...).
  • Generic bean-style methods. If (and only if) your object doesn't implement any of the above interfaces, EL falls back to seeking out bean-style getters and setters. For example, if you have a class that implements no special interfaces, EL will attempt to match #{myObject.foo} to myObject.getFoo() (or myObject.isFoo()) and myObject.setFoo(...).

The first important thing to remember is that EL applies these rules to your own and third-party objects just as much as it does to IBM's stock objects. If your code is just #{doc.foo}, you can switch "doc" from a Domino document to a HashMap, a custom DataObject class, a bean with getFoo, or an entirely-different third-party data source that implements one of those and it will continue to work smoothly. If you instead wrote it as #{javascript:doc.getItemValueString('foo')}, your experience would be... a bit rougher.
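
For that last case, here's about the smallest class that qualifies (the names are invented): with a managed bean or dataContext named "myObject", #{myObject.foo} calls getFoo() and, in an editable binding, setFoo():

public class MyObject implements java.io.Serializable {
	private static final long serialVersionUID = 1L;

	private String foo = "bar";

	public String getFoo() { return this.foo; }
	public void setFoo(final String foo) { this.foo = foo; }
}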

The second important thing to remember is EL's equivalence between dot and bracket notations. What I mean by this is that these three are all equivalent (assuming "barHolder" is a variable on the page containing the string "bar"):

  • #{foo.bar}
  • #{foo['bar']} or #{foo["bar"]}
  • #{foo[barHolder]}

Note that those are equivalent for every supported object type, including generic beans (okay, but not Lists, because #{foo.1} is illegal). This equivalence allows you to use EL to reference keys that would not be legal in dot notation (e.g. #{someProperties['us.iksg.setting']}) and to use other EL expressions to determine the field name. Say you're keeping some user-specific info in the application scope; you could end up with a chain like this to access it via EL:

<xp:text value="#{applicationScope.userCache[context.user.name].ticketCount}"/>

The more you understand the different ways of accessing your objects via EL, the more you can remove SSJS from your XPage markup and make your apps cleaner, (probably) faster, and more portable.

In my next post, I will cover the second major topic: complex operations and comparisons in EL.

The "final" Keyword in Java

Jun 16, 2013 2:45 PM

Yesterday, I suggested that everyone enable finalizing parameters in their code cleanups and, better, to do the same in their Java save actions; later, I set Eclipse loose on the org.openntf.domino API to implement it there. Adding final keywords to method parameters (and elsewhere where possible) is one of the habits I picked up from reading Effective Java a bit ago, but it's kind of a subtle thing and the benefits aren't immediately obvious.

If you're not at all familiar with the final keyword, it has a couple meanings in Java - in the context of variable and parameter declarations, it means that you can't reassign the value of the variable after it's been assigned the first time. So, for example, this is illegal and will generate a compile error:

final int i = 1;
i = 2;

In that example, it seems like a "why the heck would I do that?" sort of thing, but it's not too much of a stretch to picture a block of code involving something like this:

void doSomething(Document doc) {
	System.out.println("doc is " + doc.getUniversalID());
	
	// a bunch of code
	doc = database.getDocumentByID("FFFF0002");
	// a bunch more code
	
	System.out.println("doc is " + doc.getUniversalID());
}

That will WORK and it won't damage the original doc that was passed in, but it'll get confusing, particularly the more lines of code or eyeballs involved. Functioning or not, it's poor code style, and changing the method signature to doSomething(final Document doc) means that the compiler will enforce good style.

In addition to being good style, final has some technical benefits. First off, it can theoretically enable minor compiler optimizations, though any speed difference is rarely noticeable. Secondly, if you ever use anonymous classes, final variables are a necessity: only final local variables from the surrounding block are accessible inside them. If you get into the habit of finalizing your variables, you'll already be ready by the time you need anonymous classes.
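
Here's a quick illustration of that second point, under the pre-Java-8 rules that apply on Domino's JVM:

public void greetLater(final String name) {
	// "name" must be final to be readable inside the anonymous class
	Runnable greeter = new Runnable() {
		public void run() {
			System.out.println("Hello, " + name);
		}
	};
	new Thread(greeter).start();
}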

final is a component of a larger and more important topic: immutability. I should probably write a lengthy post on immutability in the future, but for now there's an important thing to remember: final does not make your objects immutable. It only prevents assignment - it doesn't prevent calling methods on an object that change the object's value (in fact, Java lacks any language-level construct for marking an object immutable). So even if your Document variable is declared final, you (or any code you pass it to) can still call doc.replaceItemValue("Form", "Ha ha"); with impunity.
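
A three-line demonstration of the difference:

final List<String> names = new ArrayList<String>();
names.add("anyone");                // legal: final doesn't prevent mutation
// names = new ArrayList<String>(); // illegal: final prevents reassignment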

My Latest Programming-Technique Improvements

Apr 23, 2013 11:34 AM

Tags: java

Over the past couple weeks, I've been trying out some new things to make my code a bit cleaner and I've been having a good time of it, so I figured I'd write up a quick summary.

Composed Method

First and foremost, I've been trying out the Composed Method of programming. This is not a new idea - indeed, it's a pretty direct application of general refactoring - but the slight perspective change from "factor out reused code" to starting out with clean, small blocks has been a pleasant switch. The gist of the idea is that, rather than writing a giant, LotusScript-agent-style block of code first and then breaking out individual components after the fact, you think in terms of small, discrete operations that you can name. The result is that your primary "control" methods end up very descriptive and easy to follow, consisting primarily of ordered phrases specifically saying what's going on. I encourage you to search around and familiarize yourself with the technique.
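
As a sketch of what that ends up looking like (the method names are invented, and assume a lotus.domino.Document), the top-level method reads like a table of contents, with each step factored into its own small, named method:

public void submitRequest(final Document request) throws Exception {
	validateRequiredFields(request);
	applyDefaultValues(request);
	calculateApprovalChain(request);
	notifyFirstApprover(request);
	request.save();
}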

import static

Java's import static statement is a method-level analog to the normal import statement: it lets you make static methods of classes available for easy calling without having to specify the surrounding class name. Most of the examples you'll see (like those on the linked docs) focus on the Math class, which makes sense, but it applies extremely well to XPages programming. Pretty much every app should have a JSFUtil class floating around, and most should also use the ExtLibUtil class for its convenience methods like getCurrentSession:

import static com.ibm.xsp.extlib.util.ExtLibUtil.getSessionScope;
import static com.ibm.xsp.extlib.util.ExtLibUtil.getCurrentSession;

// ...

getSessionScope().put("currentUser", getCurrentSession().getEffectiveUserName());

Night and day? Not really, but it's a little more convenient and easier to read.

Now, as the linked docs above indicate, import static comes with a warning to use it cautiously. Part of this is Java's cultural aversion to concision, but it's also a good idea to not go too crazy. When the methods you're importing are clear in intent and unlikely to overlap with local ones (like those in ExtLibUtil), it makes sense, particularly because Designer/Eclipse allows you to hover over the method or hit F3 and see where it's defined.

createDocument

Since creating a document in the current database with a known set of fields is such a common action, I've taken a page from one of our extended methods in org.openntf.domino and have started using a createDocument method that takes arbitrary pairs of keys and values and runs them through replaceItemValue, along with a helper method to allow for storing additional data types (since I'm still using the legacy API for these projects):

// resolveVariable here is assumed to be a local helper that wraps
// ExtLibUtil.resolveVariable(FacesContext.getCurrentInstance(), name)
protected static Document createDocument(final Object... params) throws Exception {
	Document doc = ((Database)resolveVariable("database")).createDocument();

	for(int i = 0; i < params.length; i += 2) {
		if(params[i] != null) {
			String key = params[i].toString();
			if(!key.isEmpty()) {
				doc.replaceItemValue(key, toDominoFriendly(params[i+1]));
			}
		}
	}

	return doc;
}
protected static Object toDominoFriendly(final Object value) throws Exception {
	if(value == null) {
		return "";
	} else if(value instanceof List) {
		Vector<Object> result = new Vector<Object>(((List<?>)value).size());
		for(Object listObj : (List<?>)value) {
			result.add(toDominoFriendly(listObj));
		}
		return result;
	} else if(value instanceof Date) {
		return ((Session)resolveVariable("session")).createDateTime((Date)value);
	} else if(value instanceof Calendar) {
		return ((Session)resolveVariable("session")).createDateTime((Calendar)value);
	}
	return value;
}

In use, it looks like this:

Document request = createDocument(
		"Form", "Request",
		"Principal", getCurrentSession().getEffectiveUserName(),
		"StartDate", getNewRequestStartDate(),
		"EndDate", getNewRequestEndDate(),
		"Type", getNewRequestType()
);


The result of these changes is that I'm writing less code and the intent of every line is much more clear. It also happens to be more fun, which never hurts.

Fun With Old XML Features

Apr 3, 2013 1:35 AM

Tags: java xml

One of the side effects of working on the OpenNTF Domino API is that I saw every method in the interfaces, including ones that were either new to me or that I had forgotten about a long time ago. One of these is the "parseXML" method found on Items, RichTextItems, and EmbeddedObjects. This was added back in 5.0.3, I assume for some reason related to the mail template, like everything else added back then. Basically, it takes either the contents of a text item, the text of a rich text item, or the contents of an attached XML document and converts it to an org.w3c.dom.Document object (the usual Java standard for dealing with XML docs).

That's actually kind of cool on its own in some edge cases (along with the accompanying transformXML method), but you can also combine it with ANOTHER little-utilized feature: XPath support in XPages. So say you have an XML document like this attached to a Notes doc (or stored in an Item):

<stuff>
	<thing>
		<foo>bar</foo>
	</thing>
	<thing>
		<foo>baz</foo>
	</thing>
</stuff>

Once you have that, you can write code like this*:

<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core">
	<xp:this.data>
		<xp:dominoDocument var="doc" formName="Doc" action="openDocument">
			<xp:this.postOpenDocument><![CDATA[#{javascript:
				requestScope.put("xmlDoc", doc.getDocument().getFirstItem("Attachment").getEmbeddedObjects()[0].parseXML(false))
			}]]></xp:this.postOpenDocument>
		</xp:dominoDocument>
	</xp:this.data>
	
	<xp:repeat value="#{xpath:xmlDoc:stuff/thing}" var="thing">
		<p><xp:text value="#{xpath:thing:foo}"/></p>
	</xp:repeat>

</xp:view>

I don't really have any immediate use to do this kind of thing, but sometimes it's just fun to explore what the platform lets you do. It's one more thing I'll keep in the back of my mind as a potential technique if the right situation arises.

* Never write code like this.

We Have The Technology: The org.openntf.domino API

Mar 21, 2013 9:16 PM

As Nathan and Tim posted earlier today, a number of us have been working recently on a pretty exciting new project: the org.openntf.domino API. This is a drop-in replacement/extension for the existing lotus.domino Java API that improves its stability and feature set in numerous ways. The way I see it, there are a couple main areas of improvement:

Plain Old Bug Fixes

doc.hasItem(null) crashes a Domino server. That's no good! We've fixed that, and we're going through and fixing other (usually less severe) bugs in the existing API.

Modern Java

Java has progressed a long way since Domino 4.x, but the Java API hasn't kept pace. Our API is chock full of Iterators, proper documentation, generic type definitions, a preference for interfaces over concrete classes, and other trappings of a clean, modern API. Additionally, Nathan put together some voodoo programming to take care of the recycle thing. You read that right: recycle = handled.

New Features

In addition to cleaning up the existing functionality, we're adding features that will be useful every day. My favorite of those (since I'm implementing it myself) is baked-in MIMEBean-and-more support in get/replaceItemValue:

Set<String> stringSet = new HashSet<String>();
stringSet.add("foo");
stringSet.add("bar");
doc.replaceItemValue("Set", stringSet);

doc.replaceItemValue("Docs", database.getAllDocuments());

NoteCollection notes = database.createNoteCollection(true);
notes.buildCollection();
doc.replaceItemValue("Notes", notes);

doc.replaceItemValue("ExternalizableObject", new MimeType("text", "plain"));

List<String> notAVector = new ArrayList<String>();
notAVector.add("hey");
doc.replaceItemValue("TextList", notAVector);

doc.replaceItemValue("LongArray", new long[] { 1L, 2L, Integer.MAX_VALUE + 1L });

That's all legal now! We have more on the docket for either the initial release or future versions, too: TinkerPop Blueprints support, easy conversion of applicable entities to XML and JSON, fetching remote documents from DXL or JSON files, proper Logging support, and more.


As we've been developing the new API, we've already found it painful to go back to the old one - it doesn't take long to get spoiled. And fortunately, the transition process will be smooth: you can start by just replacing one entry point for your code (say, changing "JavaAgent" to "org.openntf.domino.JavaAgent" in an agent), then advance to swapping out your "import lotus.domino.*" line for an "import org.openntf.domino.*" line, and then start using the new methods. All the while, your existing code will continue to work as well as or better than before.

We hope to have a solid release available around the beginning of next month, and in the mean time I highly encourage you to check out the code. Download it, give it a shot, let us know how it works for you and if you have anything you'd like us to add.

The Bean-Backed Table Design Pattern

Jan 22, 2013 5:54 PM

First off, I don't like the name of this that I came up with, but it'll have to do.

One of the design problems that comes up all the time in Notes/Domino development is the "arbitrary table" idea. In classic Notes, you could solve this with an embedded view, generated HTML (if you didn't want it to be good), or a fixed-size table with a bunch of hide-whens. With XPages, everything is much more flexible, but there's still the question of the actual implementation.

The route I've been taking lately involves xp:dataTables, Lists of Maps, MIMEBean, and controller classes. In a recent game-related database, I wanted to store a list of components that go into a recipe. There can be an arbitrary number of them (well, technically, 4 in the game, but "arbitrary" to the code) and each has just two fields: an item ID and a count. The simplified data table code looks like this:

<xp:dataTable value="#{pageController.componentInfo}" var="component" indexVar="componentIndex">
	<xp:column styleClass="addRemove">
		<xp:this.facets>
			<xp:div xp:key="footer">
				<xp:button value="+" id="addComponent">
					<xp:eventHandler event="onclick" submit="true" refreshMode="partial" refreshId="componentInfo"
						action="#{pageController.addComponent}"/>
				</xp:button>
			</xp:div>
		</xp:this.facets>
						
		<xp:button value="-" id="removeComponent">
			<xp:eventHandler event="onclick" submit="true" refreshMode="partial" refreshId="componentInfo"
				action="#{pageController.removeComponent}"/>
		</xp:button>
	</xp:column>
					
	<xp:column styleClass="itemName">
		<xp:this.facets><xp:text xp:key="header" value="Item"/></xp:this.facets>
						
		<xp:inputText value="#{component.ItemID}"/>
	</xp:column>
					
	<xp:column styleClass="count">
		<xp:this.facets><xp:text xp:key="header" value="Count"/></xp:this.facets>
		<xp:inputText id="inputText1" value="#{component.Count}" defaultValue="1">
			<xp:this.converter><xp:convertNumber type="number" integerOnly="true"/></xp:this.converter>
		</xp:inputText>
	</xp:column>
</xp:dataTable>

The two data columns are pretty normal - they could easily point to an array/List in a data context or a view/collection of documents. The specific implementation is that it's an ArrayList stored in the view scope - either created new or deserialized from the document as appropriate:

public void postNewDocument() {
	Map<String, Object> viewScope = ExtLibUtil.getViewScope();
	viewScope.put("ComponentInfo", new ArrayList<Map<String, String>>());
}

public void postOpenDocument() throws Exception {
	Map<String, Object> viewScope = ExtLibUtil.getViewScope();
	Document doc = this.getDoc().getDocument();
	viewScope.put("ComponentInfo", JSFUtil.restoreState(doc, "ComponentInfo"));
}

public List<Map<String, Serializable>> getComponentInfo() {
	return (List<Map<String, Serializable>>)ExtLibUtil.getViewScope().get("ComponentInfo");
}

Those two "addComponent" and "removeComponent" actions are very simple as well: they just fetch the list from the view scope and manipulate it:

public void addComponent() {
	this.getComponentInfo().add(new HashMap<String, Serializable>());
}
public void removeComponent() {
	int index = (Integer)ExtLibUtil.resolveVariable(FacesContext.getCurrentInstance(), "componentIndex");
	this.getComponentInfo().remove(index);
}

One key part to notice is that the "componentIndex" variable is indeed the value you'd want. In the context of the clicked button (say, in the second row of the table), the componentIndex variable is set to 1, and the resolver and FacesContext make that work.

On save, I just grab the modified Document object, serialize the object back into it, and save the data source (that's what super.save() from my superclass does):

public String save() throws Exception {
	List<Map<String, Serializable>> componentInfo = this.getComponentInfo();

	Document doc = this.getDoc().getDocument(true);
	JSFUtil.saveState((Serializable)componentInfo, doc, "ComponentInfo");

	return super.save();
}

The end result is that I have a user-expandable/collapsible table that can hold arbitrary fields (since it's a Map - it could just as easily be a list of any other serializable objects).

If I wanted to access the data from non-Java contexts, I could replace the MIMEBean bits with another method, like a text-list-based format or response documents, but the final visual result (and the XSP code) would be unchanged.

More On "Controller" Classes

Jan 7, 2013 7:39 PM

Tags: xpages mvc java
  1. "Controller" Classes Have Been Helping Me Greatly
  2. More On "Controller" Classes

Since my last post on the matter, I've been using this "controller" class organization method in a couple other projects (including a refresh of the back-end of this blog), and it's proven to be a pretty great way to go about XPages development.

As I mentioned before, the "controller" term comes from the rough equivalent in Rails. Rails is thoroughly MVC based, so a lot of your programming involves creating the UI of a page in HTML with a small sprinkling of Ruby (the "view"), backed by a class that is conceptually tied to it and stores all of the real business logic associated with that specific page (the "controller"). XPages don't have quite this same assumption built in, but the notion of pairing an XPage with a single backing Java class is a solid one. Alternatively, you can think of the "controller" class as being like a stylesheet or client JavaScript file: the HTML page handles the design, and only contains references to functionality defined elsewhere.

What this means in practice is that I've been pushing to eliminate all traces of non-EL bindings in my XSP markup in favor of writing the code in the associated Java class - this includes not only standard page events like beforeRenderResponse, but also anywhere else that Server JavaScript (or Ruby) would normally appear, like value and method bindings. Here's a simple example, from an XPage named Test and its backing class:

Test.xsp

<p><xp:button id="clickMe" value="Click Me">
	<xp:eventHandler event="onclick" submit="true" refreshMode="partial" refreshId="output"
		action="#{pageController.refreshOutput}"/>
</xp:button></p>
	
<p><xp:text id="output" value="#{pageController.output}"/></p>
controller/Test.java
package controller;

import frostillicus.controller.BasicXPageController;
import java.util.Date;

public class Test extends BasicXPageController {
	private static final long serialVersionUID = 1L;

	private String output = "default";
	public String getOutput() { return this.output; }

	public void refreshOutput() {
		this.output = new Date().toString();
	}
}

In a simple case like this, it doesn't buy you much, but imagine a more complicated case, say code to handle post-save document processing and notifications, or complicated code to generate the value for a container control. The separation between the two pays significant dividends: you stop having to scroll through reams of code in the XSP markup when you're working on the page layout, while a known location for the Java code associated with a page makes it easier to track down functionality. Plus, Server JavaScript is only mildly more expressive than Java, so even the business logic itself stays just about as clean (moving from Ruby to Java, on the other hand, imposes a grotesque LOC penalty).

It sounds strange, but the most important benefit I've derived from this hasn't been from performance or cleanliness (though those are great), but instead the discipline it provides. Now, there's no question where business logic associated with a page should go: in a class that implements XPageController with the same name as the page and put in the "controller" package. If I want pages to share functionality, that's handled either via a common method stored in another class or, as appropriate, via class inheritance. No inline code, no Server JavaScript script libraries. If something is going to be calculated, it's done in Java.

There is one area that's given me a bit of trouble: custom controls. For example, the linksbar on this blog requires computation to generate, but it's not associated with any specific page. For now, I put that in the top-level controller class, but that feels a bit wrong. The same solution also doesn't apply to the code that handles rendering each individual post in the list, which may be on the Home page, the Month page, or individually on the Post page, in which case it also has edit/save/delete actions associated with it. For now, I created a "helper" class that I instantiate (yes, with JavaScript, which is depressing) for each instance. I think the "right" way to do it with the architecture is to create my controls entirely in Java, with renderers and all. That would allow me to handle the properties passed in cleanly, but the code involved with creating those things is ugly as sin. Still, I'll give that a shot later to see if it's worth the tradeoff.

"Controller" Classes Have Been Helping Me Greatly

Dec 26, 2012 4:54 PM

Tags: xpages mvc java
  1. "Controller" Classes Have Been Helping Me Greatly
  2. More On "Controller" Classes

I mentioned a while ago that I've been using "controller"-type classes paired with specific XPages to make my code cleaner. They're not really controllers since they don't actually handle any server direction or page loading, but they do still hook into page events in a way somewhat similar to Rails controller classes. The basic idea is that each XPage gets an object to "back" it - I tie page events like beforePageLoad and afterRenderResponse to methods on the class that implements a standard interface:

package frostillicus.controller;

import java.io.Serializable;
import javax.faces.event.PhaseEvent;

public interface XPageController extends Serializable {
	public void beforePageLoad() throws Exception;
	public void afterPageLoad() throws Exception;

	public void afterRestoreView(PhaseEvent event) throws Exception;

	public void beforeRenderResponse(PhaseEvent event) throws Exception;
	public void afterRenderResponse(PhaseEvent event) throws Exception;
}

I have a basic stub class to implement that as well as an abstract class for "document-based" pages:

package frostillicus.controller;

import iksg.JSFUtil;

import javax.faces.context.FacesContext;

import com.ibm.xsp.extlib.util.ExtLibUtil;
import com.ibm.xsp.model.domino.wrapped.DominoDocument;

public class BasicDocumentController extends BasicXPageController implements DocumentController {
	private static final long serialVersionUID = 1L;

	public void queryNewDocument() throws Exception { }
	public void postNewDocument() throws Exception { }
	public void queryOpenDocument() throws Exception { }
	public void postOpenDocument() throws Exception { }
	public void querySaveDocument() throws Exception { }
	public void postSaveDocument() throws Exception { }

	public String save() throws Exception {
		DominoDocument doc = this.getDoc();
		boolean isNewNote = doc.isNewNote();
		if(doc.save()) {
			JSFUtil.addMessage("confirmation", doc.getValue("Form") + " " + (isNewNote ? "created" : "updated") + " successfully.");
			return "xsp-success";
		} else {
			JSFUtil.addMessage("error", "Save failed");
			return "xsp-failure";
		}
	}
	public String cancel() throws Exception {
		return "xsp-cancel";
	}
	public String delete() throws Exception {
		DominoDocument doc = this.getDoc();
		String formName = (String)doc.getValue("Form");
		doc.getDocument(true).remove(true);
		JSFUtil.addMessage("confirmation", formName + " deleted.");
		return "xsp-success";
	}

	public String getDocumentId() {
		try {
			return this.getDoc().getDocument().getUniversalID();
		} catch(Exception e) { return ""; }
	}

	public boolean isEditable() { return this.getDoc().isEditable(); }

	protected DominoDocument getDoc() {
		return (DominoDocument)ExtLibUtil.resolveVariable(FacesContext.getCurrentInstance(), "doc");
	}
}

I took it all one step further in the direction of "convention over configuration" as well: I created a ViewHandler that looks for a class in the "controller" package with the same name as the current page's Java class (e.g. "/Some_Page.xsp" → "controller.Some_Page") - if it finds one, it instantiates it; otherwise, it uses the basic stub implementation. Once it has the class created, it plunks it into the viewScope and creates some MethodBindings to tie beforeRenderResponse, afterRenderResponse, and afterRestoreView to the object without having to have that code in the XPage (it proved necessary to still include code in the XPage for the before/after page-load and document-related events).
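
The lookup itself amounts to only a few lines - this is a hedged sketch of the convention, not the actual ViewHandler code:

protected XPageController findController(final String pageName) {
	// e.g. "/Some_Page.xsp" -> "controller.Some_Page"
	String baseName = pageName.substring(pageName.lastIndexOf('/') + 1).replace(".xsp", "");
	try {
		return (XPageController)Class.forName("controller." + baseName).newInstance();
	} catch(Exception e) {
		return new BasicXPageController(); // the basic stub implementation
	}
}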

So on its own, the setup I have above doesn't necessarily buy you much. You save a bit of repetitive code for standard CRUD pages when using the document-controller class, but that's about it. The real value for me so far has been a clarification of what goes where. Previously, if I wanted to run some code on page load, or attached to a button, or to set a viewScope value, or so forth, it could go anywhere: in a page event, in a button action, in a dataContext, in a this.value property for a xp:repeat, or any number of other places. Now, if I want to evaluate something, it's going to happen in one place: the controller class. So if I have a bit of information that needs recalculating (say, the total cost of a shopping cart), I make a getTotalCost() method on the controller and set the value in the XPage to pageController.totalCost. Similarly, if I need to set some special values on a document on load or save, I have a clear, standard way to do it:

package controller;

import com.ibm.xsp.model.domino.wrapped.DominoDocument;
import frostillicus.controller.BasicDocumentController;

public class Projects_Contact extends BasicDocumentController {
	private static final long serialVersionUID = 1L;

	@Override
	public String save() throws Exception {
		DominoDocument doc = this.getDoc();
		doc.setValue("FullName", ("CN=" + doc.getValue("FirstName") + " " + doc.getValue("LastName")).trim() + "/O=IKSGClients");

		return super.save();
	}

	@Override
	public void postNewDocument() throws Exception {
		super.postNewDocument();

		DominoDocument doc = this.getDoc();
		doc.setValue("Type", "Person");
		doc.setValue("MailSystem", "5");
	}
}

It sounds like a small thing - after all, who cares if you have some SSJS in the postNewDocument event on an XPage? And, besides, isn't that where it's supposed to go? Well, sure, when you only have a small amount of code, doing it inline works fine. However, as I've been running into for a while, the flexibility of XPages makes them particularly vulnerable to becoming tangled blobs of repeated and messy code. By creating a clean, strict system from the start, I've made it so that the separation is never muddied: the XPage is about how things appear and the controller class is about how those things get to the XPage in the first place.

We'll see how it holds up as I use it more, but so far this "controller"-class method strikes a good balance: code cleanliness without getting too crazy on the backing framework (as opposed to some of the "model" systems I've tried making).

A Couple Updates to My Domino-One-Offs Repository

Sep 4, 2012 6:25 PM

Tags: xpages java

I realized earlier that I let a couple of the classes in Domino-One-Offs stagnate relative to the versions I currently use. Since originally posting my convenience XML library and, more usefully, the DynamicViewCustomizer, I've tweaked both in my various projects to add more features I ended up needing.

The DynamicViewCustomizer has gone further down the path of Notes-client fidelity at the expense of a bit more overhead, doing things like setting column widths (except where the column is set to extend to the window width, either specifically or by virtue of being the last column, as appropriate). It's still not perfect, as I ran into a bit of trouble with, I think, HTML in category columns (and you can see commented-out remnants of various other changes), but it's potentially useful.

For my "man, I hate dealing with all these stupid orthogonal Java XML libraries" XML library, I've added a number of functions to support my Forms 'n' Views app (coming Soon™), namely better attribute handling, a simple NodeList wrapper (that actually acts like a List), and a basic getXml() method for when you just want a string version.

Finally, I tossed in the current version of my SortableMapView class, meant to gobble up Views or ViewEntryCollections and make them sortable by all columns. That one uses Lombok, but I think you could just remove the bits about making it List-compatible and still be fine.

An Extended Document Data Source To Support MIMEBean

Aug 22, 2012 8:02 PM

Tags: mime xpages java

Ever since I personally came up with the idea of storing serialized Java objects in Notes documents via MIME, I've been trying to use the technique all over the place, since it's basically the best thing since sliced bread.

My latest notion was that the pattern would be all the cooler if the serialized Java objects could be referenced directly in EL, so you could do something like #{doc.LineItems[3].cost}, pulling directly from a more-or-less normal Domino document. I decided to give implementing this a shot, since it would also have the side benefit of introducing me to writing JSF components/complex types.

The path I took sure feels like the long route (lots of manual doubling up of property definitions and proxying methods), but it seems to work: I have a "mimeDominoDocument" data source that acts just like a normal document source except it transparently serializes and deserializes Java objects. It's not exactly a perfect implementation (the setValue() method has a rather naive way of guessing which objects should be serialized and which shouldn't, and I didn't handle the case of setting a value to an object and then using, say, getItemValueString()), but it should be fine for demo and basic purposes.
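
The storage half of the idea is plain Java serialization; here's a minimal sketch of the write side, leaving out the MIME-entity bookkeeping:

import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public static byte[] serializeForMime(final Serializable value) throws Exception {
	ByteArrayOutputStream bytes = new ByteArrayOutputStream();
	ObjectOutputStream out = new ObjectOutputStream(bytes);
	out.writeObject(value);
	out.close();
	// These bytes would then be stored in a MIME entity of type
	// application/x-java-serialized-object
	return bytes.toByteArray();
}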

I set up an example database with pages for the appropriate code here: /tests/mime.nsf.

Starting With Java Classes First

Aug 20, 2012 9:06 PM

Tags: xpages java

Every time I make a new XPages app, I start by using the normal xp:dominoView and xp:dominoDocument data sources, on the theory that building with the standard set of components will keep things clean while I build on top of that. However, with each successive app, the time before I get annoyed at how messy the code has to be and start writing Java wrapper and management classes gets shorter and shorter.

For example, in my Forms 'n' Views mini-Designer app, I have the sidebar list of databases, which is just pointing to a view and showing those entries. That works pretty well, since the values I need are in the view itself. However, now I want to show the database icons, but I also want to check that the DBs are on the current server, that the current user can access them, and that they're accessible via HTTP (I'm just pointing the image to /whatever.nsf/$Icon). Doing that with just the view entries will get messy, requiring a bunch of Server JavaScript to be crammed into my xp:image object. It'd work, but it'd only be a matter of time before I want to add more smarts, such as if I make it so that you can grant enhanced access to certain design elements to a certain user. The best way to handle this will be to make a "DatabaseEntry" class and give it methods to determine the icon and design-element visibility, along the lines of the sketch below.
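
To sketch what I mean (all the names here are hypothetical, not the actual app code):

import java.io.Serializable;
import lotus.domino.*;

// Hypothetical wrapper class consolidating the icon and visibility logic
// that would otherwise be crammed into SSJS on the controls
public class DatabaseEntry implements Serializable {
	private static final long serialVersionUID = 1L;
	private final String server;
	private final String filePath;

	public DatabaseEntry(String server, String filePath) {
		this.server = server;
		this.filePath = filePath;
	}

	// Point at the classic $Icon resource over HTTP
	public String getIconUrl() {
		return "/" + filePath.replace('\\', '/') + "/$Icon";
	}

	// e.g. only show databases on the current server that the user can open
	public boolean isVisible(Session session) throws NotesException {
		Database database = session.getDatabase(server, filePath, false);
		return database != null && database.isOpen();
	}
}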

I don't think I really saved myself any time by delaying this - it's pretty rare that I DON'T have this sort of calculation eventually, so I'm just adding a bit of time doing things the "dirty" way first before cleaning them up. I'm not really familiar with "vanilla" JSF, but I get the impression that it doesn't really let you put as much business logic into the view portion as XPages do (I think JSF tends to stick to EL normally). Certainly, when you write a Rails app, you start with the models and controllers cleanly before putting the data on the page. The XPage architecture doesn't make that as easy, with its lack of reasonably-modifiable controllers, but adding Java classes and using collection classes in xp:dataContexts or as managed beans work great.

I've toyed with the idea of expanding the classes I use in my forums app to work smoothly with any view/document, picking up the columns by name and storing them in a Map representing the entry/document. The problem with that is that the data sources and needs tend to be just different enough per app that it's not generic enough (for example, in this one, I'm getting databases for the "add DB" picker via DbDirectory, not a view). In the meantime, I think I'll just start with the access classes first and see what I do most commonly.

This View Indexer Thing Might Be Worth Some Time

Aug 5, 2012 4:51 PM

Tags: java views crazy
  1. I Went Crazy and Made a Small View Indexer
  2. This View Indexer Thing Might Be Worth Some Time

I've put a little more time into my small view indexer from the other day, and it seems even more promising now. The editor I made for it might give you some idea of the potential:

Fancy View Builder

Since the view building is all being done in Java, I decided to toss in the list of JSR-223-compliant scripting languages currently available. The scripting context is given a "doc" variable - an instance of a DocumentWrapper class I wrote that implements Map to make its use a bit easier. For added fun, I baked in knowledge of serialized Java objects stored in the document's items:

Fancy View Objects

Normally, the wrapper just returns the value of getItemValue(...), but when it encounters a MIME entity of type "application/x-java-serialized-object", it deserializes it and returns that instead, allowing for real structured data access.
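
The deserialization side of that is about what you'd expect - a sketch (hedged; the real code wants more guard rails around missing items and recycling):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import lotus.domino.*;

// Sketch: pull the serialized bytes back out of the MIME entity
// and reconstitute the original Java object
public static Object restoreState(Document doc, String itemName) throws Exception {
	Session session = doc.getParentDatabase().getParent();
	boolean convertMime = session.isConvertMime();
	session.setConvertMime(false);

	MIMEEntity entity = doc.getMIMEEntity(itemName);
	Stream mimeStream = session.createStream();
	entity.getContentAsBytes(mimeStream);
	ByteArrayOutputStream byteOut = new ByteArrayOutputStream();
	mimeStream.getContents(byteOut);

	ObjectInputStream objIn = new ObjectInputStream(new ByteArrayInputStream(byteOut.toByteArray()));
	Object result = objIn.readObject();
	objIn.close();

	session.setConvertMime(convertMime);
	return result;
}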

I also realized that I don't have any reason to stick to only objects available in the stock JDK for index storage. Since using this will require extra Java code anyway, I may as well make my life easier and write my own "view entry" class. Once I get that sorted out, I can work on keeping the indexes updated with a server task, dealing with reader fields, and physical storage.

Hmm, maybe I should make myself an editor for normal views that looks like this one. It'd probably be a lot less hassle to deal with than the legacy one.

I Went Crazy and Made a Small View Indexer

Aug 4, 2012 11:45 AM

Tags: java views crazy
  1. I Went Crazy and Made a Small View Indexer
  2. This View Indexer Thing Might Be Worth Some Time

Warning: this post is almost entirely pie-in-the-sky nonsense, untethered from reality. Probably.

So yesterday, for some reason, I got to thinking about Domino's view indexing and whether or not it can be readily done better. The current view indexer does its intended task with aplomb (most of the time), but it has its problems. For one, iterating over view indexes in Java (read: everything you should do in Domino today) is dog slow. More theoretically, since it deals only with summary data for performance reasons, its ability to deal with structured data is limited - you can't deal with rich text or, for example, store any sort of Map in a Domino document and deal with it in a view (realistically).

I had the notion that it may be faster, at least in some cases, to do your own indexing: make a Java collection of your "view" data and serialize the result into a note. I decided to try a little test: take a view that consists only of a sorted column of 100k documents' UNIDs and compare it to a TreeSet of the same. My initial results were promising: iterating over the documents (via getNextDocument() and a loop of getDocumentByUNID(), respectively) was about twice as fast with the Java version, and the data was stored in about 17% of the space, presumably thanks to document compression.
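
The Java side of that test boiled down to something like this (reconstructed from memory, so consider it a sketch):

import java.util.TreeSet;
import lotus.domino.*;

// Build a sorted set of the view's document UNIDs; the TreeSet can then
// be serialized into a note for later retrieval
private TreeSet<String> buildUnidIndex(View view) throws NotesException {
	TreeSet<String> index = new TreeSet<String>();
	Document doc = view.getFirstDocument();
	while(doc != null) {
		index.add(doc.getUniversalID());
		Document next = view.getNextDocument(doc);
		doc.recycle();
		doc = next;
	}
	return index;
}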

To be fair, that was really stacking the deck in favor of Java: I didn't care about reader fields or any entry metadata like position, and fetching every document in a large view is possibly the most expensive way to use it. Presumably, fetching just the first entry would make the view version much faster. Still, it shows that it might be useful in some cases, so I decided to try something a bit more complicated.

I made another view with 15k documents (task requests) and two columns: the date of the request, sorted, and the summary line. I matched that up with a TreeSet containing Lists of Maps and iterated over both to fetch and print the entry data. Again, the Java version ended up way faster, taking about 25% of the time.

Now it has me wondering about whether it'd be worth investigating further. It's fair to assume that Java can be treated as the new "native" language of Domino, so serialization is potentially a legitimate mechanism. Furthermore, a real implementation of an alternative view index wouldn't have to be as hopelessly naive as the one I put together: the basic java.util.* collections are almost definitely not the best storage mechanisms for a database index, and, better still, this is well-trodden territory. Since normal views would still exist in this magic future world, these fancy views could eschew a lot of the UI and build-performance requirements of the normal ones in favor of fancy tricks like native knowledge of serialized data structures. Imagine storing structured data like a table of line items in a MIME entity but then being able to query that in a "view". Reader fields could be handled by storing the applicable names in the "entry" and then comparing that to the user's names list at runtime, presumably like standard views do. A server task could handle monitoring document changes and making incremental changes (like happens now, really).

So it's theoretically possible. Would it be worth it? I'm not sure - maybe. It's fun to think about, in any event.

A Prototype In-App Messaging System for XPages

Jul 9, 2012 2:35 PM

Tags: xpages java

Sven Hasselbach's post about ApplicationListeners the other day and the notion of post-MVC methods of app architecture got me thinking about the notion of generic messaging/events inside an XPages app. The first example that comes to mind is a project-tracking database where there may be events like "new task created" or "delivery ready for review" - business logic stuff where the fact that it's a document being modified is an implementation detail. In my actual project-tracking database at work, I implemented this by creating "Notification Stub" documents with information about the event and having a scheduled agent process them. That's a fine Notes-y way to do it (and has its own advantages, like having another cluster member send the emails), but I want a way for the running XPages app to notice and react to these things without the immediate code knowing about every single concern.

I did a quick search and didn't find any such capability in basic JSF or XPages, but I realized it wouldn't be terribly difficult to implement it myself: have an application-scoped bean (maybe one per scope, if it's useful) with methods to dispatch events and declare listeners. Then, other beans could attach themselves as listeners and the normal code would only be responsible for sending out its messages, to be received by zero or more unknown objects:

public interface XPagesEventDispatcher {
	public void dispatchEvent(XPagesEvent event);
	public void addListener(XPagesEventListener listener);
}

The actual event objects would be similarly simple: just a name and an arbitrary array of objects:

public interface XPagesEvent {
	public String getEventName();
	public Object[] getEventPayload();
}

Finally, listeners are the simplest of all, being responsible only for providing a method to handle an XPagesEvent (EventListener is the standard "tagging" interface for this kind of thing):

public interface XPagesEventListener extends EventListener {
	public void receiveEvent(XPagesEvent event);
}

However, after creating my gloriously clean set of interfaces, I ran into the ugly marsh of reality: application-scoped beans only exist after first use. That works fine for the dispatcher itself, but not for listener beans, the whole point of which is to not be referenced in code elsewhere. After a bit of searching, I came up with a solution that may work. When defining a managed bean, you can set properties, and those properties can be Lists. So, rather than creating two beans, I define just the dispatcher bean, but provide it a list of class names of listener beans to create at startup:

<managed-bean>
	<managed-bean-name>applicationDispatcher</managed-bean-name>
	<managed-bean-class>frostillicus.event.SimpleEventDispatcher</managed-bean-class>
	<managed-bean-scope>application</managed-bean-scope>
	<managed-property>
		<property-name>listenerClasses</property-name>
		<list-entries>
			<value-class>java.lang.String</value-class>
			<value>frostillicus.Messenger</value>
		</list-entries>
	</managed-property>
</managed-bean>

I gave my dispatcher implementation class convenience methods to take a string and zero or more objects and dispatch the event, so it could be used in code like:

applicationDispatcher.dispatch("home page loaded")
applicationDispatcher.dispatch("delivery submitted", [delivery.getUniversalID()])

So far, this setup is working pretty well. I plan to toy with it a bit and then try it out in proper use. Provided it works well (and it doesn't turn out that this capability already exists and I just missed it), I'll turn it into a project on GitHub/OpenNTF. As long as it keeps working, I'm excited about the prospects, especially when combined with XPages Threads and Jobs for non-blocking event handling.

Using Lombok to Write Cleaner Java in Designer

Jul 6, 2012 6:48 PM

Tags: java

Java, as a language, has a number of admirable qualities, but succinctness is not among them. Even a simple "Person" class with just "firstName" and "lastName" properties can boil over with at least a dozen lines of boilerplate to add constructors, getters, setters, and equivalency testing. Fortunately, Project Lombok can help. Their main page contains a good video of the basics, while the Feature Overview page covers the rest. If you've written a lot of Java classes, I suspect that the video will have you hooked.
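
To give a taste: with Lombok, that whole "Person" class collapses to something like this, using the @Data annotation, which bundles getters, setters, equals()/hashCode(), and toString():

import lombok.Data;

// Lombok generates the accessors and equality methods at compile time
@Data
public class Person {
	private String firstName;
	private String lastName;
}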

And there's good news: Lombok can work in Designer! It's not as simple as it is for vanilla Eclipse and it took me a good amount of trial-and-error to get it functional, but I figured out what you have to do:

  1. Copy lombok.jar into your Notes program directory (e.g. C:\Program Files\IBM\Lotus\Notes). It has to go there so that the initial program launch sees it.
  2. Additionally, copy the same lombok.jar to your jvm\lib\ext directory inside your Notes program directory. It has to go there so that it's on your project build path. Yes, normally you "shouldn't" put your Java libraries there, but I figure it's alright since this takes local, non-update-site fiddling anyway.
  3. Now for the weird part. Normally, the Lombok installer modifies eclipse.ini to launch some support stuff. That won't work for us, however. What we need is the file "jvm.properties" in the framework\rcp\deploy directory. The syntax isn't the same as for eclipse.ini and I'm not 100% sure my method doesn't have any horrible side effects, but adding these two lines (the two under "Lombok stuff", with "vmarg.javaagent" and "vmarg.Xbootclasspath/a") worked for me:
    vmarg.javaagent=-javaagent:lombok.jar
vmarg.Xbootclasspath/a=-Xbootclasspath/a:lombok.jar

Now, I've only had this installed for a couple hours and have only done a couple basic tests, so I can't rule out the notion that this will interfere with some other Designer thing (the "Xbootclasspath/a" line may override some Designer default) - use with care. On the plus side, Lombok's magic works during compilation, not runtime, so you shouldn't need lombok.jar on the destination server. You'd still need it on the Designer client of any other programmers working on the code, though.

Update: I think you can avoid doubling up your copies of lombok.jar (and avoid a problem where launching the normal Notes client doesn't work) by just putting lombok.jar in jvm/lib/ext and using these lines in jvm.properties instead:

vmarg.javaagent=-javaagent:${rcp.home}/../jvm/lib/ext/lombok.jar
vmarg.Xbootclasspath/a=-Xbootclasspath/a:${rcp.home}/../jvm/lib/ext/lombok.jar

Use Interfaces All The Time

Jun 27, 2012 9:10 AM

Tags: java

In addition to being useful concepts generally, Java interfaces can be used literally in code as a way of keeping your code as clean and generic as possible. While you can't instantiate an interface itself, you CAN use interface names as the declared types of variables and parameters. For example, this is a legal way to create a Vector of Objects:

List<Object> someList = new Vector<Object>();

"That's great and all," you say, "but why bother doing that?" And indeed, in a simple case like a mostly-procedural Java agent, you won't get much benefit from doing that. But imagine a method like this:

public List<String> retrieveValuesFromDatabase();

Since all that method guarantees is that it's going to return some sort of List, it has a lot of discretion as to the form that list will take. Maybe the programmer starts out with Vector but then decides that ArrayList is better for the purpose - they're free to change it without troubling the user of the method one bit. Maybe they want to get a little crazier and make their own custom class that implements List and provides a live view of a database - again, that can happen with no change to the user's code.

While the primary benefit comes when large libraries use interfaces in their APIs, even moderately-sized structured programs written by one programmer can benefit. Take a (contrived and unsafe) class like this:

public class ExampleClass {
	private List<String> cache;

	public ExampleClass() {
		this.cache = new ArrayList<String>();
	}
	public List<String> getCache() {
		return this.cache;
	}
	public void setCache(List<String> cache) {
		this.cache = cache;
	}
}

The same benefits mentioned above apply here: because you're referring to your instance variable just as a List, you're free to change the actual implementing class by changing only one line. The benefits are only compounded when you add more and more-complicated methods to your class.

Additionally, using interfaces when they're not strictly necessary can help avoid bad habits and traps. Our go-to example, the Vector class, pre-dates the Java Collections Framework and was only retrofitted to the List interface when the framework came out in Java 1.2. It shows its age in a couple ways, not the least of which is the inclusion of a couple of methods that are not part of List, such as elementAt(...) and addElement(...). These are best replaced entirely with get(...) and add(...), which are common to all Lists... and are easier on the eyes, to boot.

It's easy to run into problems with APIs that don't use interfaces extensively, like, say, the Domino API*. Presumably due to the fact that the "new" lotus.domino classes were implemented in a pre-1.2 JRE, they use Vector extensively. Most of the time, this is fine - anything expecting a List will happily accept a Vector, but the reverse doesn't hold. You don't have to go far to create a problem. Multi-value controls in XPages, when bound to a Java object, prefer the use of ArrayList (but work great when pointed to an existing List of any stripe). Consequently, this will throw an exception:

<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core">
	<xp:this.dataContexts>
		<xp:dataContext var="newRegData" value="${javascript: new java.util.HashMap()}"/>
	</xp:this.dataContexts>
	
	<xp:checkBoxGroup value="#{newRegData.multi}">
		<xp:selectItems value="#{javascript:['one', 'two']}"/>
	</xp:checkBoxGroup>
	
	<xp:button id="createDoc" value="Create Doc">
		<xp:eventHandler event="onclick" submit="true"><xp:this.action><![CDATA[#{javascript:
			var doc = database.createDocument()
			doc.replaceItemValue("MultiValue", newRegData["multi"])
			doc.save()
		}]]></xp:this.action></xp:this.eventHandler>
	</xp:button>
</xp:view>

The problem is that, while ArrayList is still a List, replaceItemValue(...) doesn't give two hoots about interfaces, forcing you to do something like this:

var vec = new java.util.Vector()
vec.addAll(newRegData["multi"])
doc.replaceItemValue("MultiValue", vec)

Not pretty, and that's one that could be fixed without changing the outward API.

The upshot of all this is that you can derive a lot of benefit from using interfaces extensively, even when they're not strictly necessary. I even have a habit of writing lines like "List<Object> values = (List<Object>)doc.getItemValue("SomeItem");", because I am a madman. You may not go as far as I do, but it's still usually a good idea to follow the policy of using interfaces whenever the methods you want are available that way.

* Technically, the lotus.domino.* "classes" are all interfaces, and the implementing classes vary based on the type of connection you're using (e.g. local vs. CORBA). As usual, Domino provides both good and bad examples simultaneously.

Putting java.util to Use

Jun 22, 2012 11:46 AM

Tags: java

I'm in the process of figuring out a good way to combine data from several sources into a single activity stream, which means that they should be categorized by date and then sorted by time. While that's a piece of cake with a single view, it gets hairy when you have several views, or perhaps several different types of sources entirely. Fortunately, abstract data types are here to help.

You're already using Lists and Maps, right? For this, I decided to use Maps and one of my personal favorites, the Set. If you're not familiar with them, Sets are like Lists, but only contain one of each element and don't (normally) guarantee any specific order - DocumentCollections are a type of Set (albeit not actually implementing the interface).

I created the categories by using a Map with the Date as the key and Sets of entries as the value. That would work well enough using HashMaps and HashSets, but they would require manual sorting in the XPage to display them in the right order. Fortunately, Java includes some more-specific types for this purpose: SortedMap and SortedSet. These are used the same way as normal Maps and Sets, but automatically maintain an order (either via your own Comparator or the "natural" ordering defined by the objects' compareTo(...) methods). Better still, the specific TreeMap and TreeSet implementation classes have methods to get at the keys and values, respectively, in descending order.

Once I had my collection objects picked out, all I had to do was start filling them in. I used stock Date objects for the Map's keys and wrote a compareTo(...) method for the entries I'm keeping in the Set. Then, on the containing activity stream class, I just had to write a "serializer" method to write out the current state of the objects into a List for access.
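
In sketch form, the assembly looks something like this (with "StreamEntry" and the helper methods standing in for the real classes):

// Categorize entries by day; TreeMap keeps the date keys sorted for free
private TreeMap<Date, SortedSet<StreamEntry>> categorize(List<StreamEntry> entries) {
	TreeMap<Date, SortedSet<StreamEntry>> categories = new TreeMap<Date, SortedSet<StreamEntry>>();
	for(StreamEntry entry : entries) {
		Date day = truncateToDay(entry.getTime()); // hypothetical helper to strip the time component
		SortedSet<StreamEntry> bucket = categories.get(day);
		if(bucket == null) {
			bucket = new TreeSet<StreamEntry>(); // ordered by StreamEntry.compareTo(...)
			categories.put(day, bucket);
		}
		bucket.add(entry);
	}
	// categories.descendingMap() then provides newest-first days for the XPage
	return categories;
}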

While I may change around the way I do this (I may end up putting the category headers inline so I just use one SortedSet for performance), it provides a pretty good example of when you can use some of Java's built-in classes to do some of the grunt work for you.

Reverse-Engineering File Formats for Fun

Jun 11, 2012 9:00 AM

Tags: java

Well, okay, it wasn't for fun; it was for work. About a year and a half ago, I had occasion to parse the contents of shared object files from a Flash Media Server ("*.fso" files), and I figured it might be interesting to go back over the kind of things I had to do to accomplish that.

The first thing I did was to search around to see if anyone else had solved the same problem. However, while there are plenty of parsers for "Flash shared objects", not the least of which is in Flash itself, it seemed that, unless I was doing it wrong, that format is different from the one used by servers, so I couldn't use any of the ones I found. So I was left with a folder full of binary files and a task to accomplish. Sounds like a job for programming!

Fortunately, I knew the shape of the data inside the files: they were essentially arrays of maps. I could see the keys and some of the values in there, so the files were binary but not compressed, which gave me hope. However, since the data was variable-length, I couldn't just find the record delimiter and use offsets - I had to go through and find the various byte-value delimiters for each aspect and write a proper parser.

I opened up a couple of the files in vi sessions to compare them and, using the handy ga command, checked the byte values of the non-ASCII characters. I found that the file always starts with 0 then 3, and there appeared to be a "general" delimiter of 0, 0, 0 to break up header sections of the file and each record. Then I saw a number that was different between each file - ah-hah, the record count!

After another delimiter, I found a number followed by the name of the file (for some reason). It turned out that the number corresponded to the length of the file name, indicating that the format uses Pascal-style length-prefixed Strings. Good to know! After the name came a second instance of the record count for some reason, leaving the rest of the file to the actual records.
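
Once you know they're there, reading those strings is simple - something like this sketch, assuming a one-byte length prefix and ASCII content:

import java.io.DataInputStream;
import java.io.IOException;

// Read a Pascal-style length-prefixed string; readUnsignedByte() also
// sidesteps Java's signed-byte problem mentioned in the PS below
private static String readPascalString(DataInputStream in) throws IOException {
	int length = in.readUnsignedByte();
	byte[] chars = new byte[length];
	in.readFully(chars);
	return new String(chars, "US-ASCII");
}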

Conceptually, the records are easy: there should be about $record_count instances of some sort of record delimiter with the data packed in there in, essentially, key-value pairs. That's ALMOST how it is, but it got a bit weird: all records ended with the same "general" delimiter used in the header, but they ALSO had their own two-byte record delimiter. That is, except when they used a three-byte variant for some reason. And there are a couple of unknown values in there - they were different in each record and I'm sure they serve some purpose, but I couldn't figure out what it was. Once I figured out the hairy bits, though, the data was pretty much as I expected: there was a record index (for some reason), then the expected key-value pairs. Most of them were strings, but not all - there was an "urgent" key in some entries that had no corresponding value (though now that I look back at the code, I suspect that it was a boolean value of \1 to indicate "true").

The end result of all this was by far the most C-like Java I've ever written:

FSOParser.java

I felt pretty good after finishing that. High-level object trees are fun and all, but it's good to, from time to time, get your hands dirty with lower-level stuff like reading files as integer arrays.

PS: I don't know why my FSORecord class didn't just extend HashMap. I blame the brain haze from staring at binary files for too long. Conversely, I think I did have a reason to use arrays of int instead of byte - I think it had to do with Java byte being always signed but the data in the file being unsigned.

Turns out Partial Execute is handy

Aug 22, 2010 10:53 PM

My main "spare time" project involves working on a lineup editor for raids in WoW. Basically, each player can have one or more characters, each of which can perform one or two roles (out of "Tank", "Healer", or "DPS"), and the UI should allow the person setting it up to pick which character that player will bring and which role they will perform. The setup is a table, with each row looking like:

The checkbox indicates whether or not the player is going on the run, then there's the player name, the currently-selected character (the icon denotes the character's current specialization), then how they originally signed up and their desired role, then three checkboxes for the role, and then their skill/gear rank.

Originally, I had the lineup set up as a Server-Side JavaScript array of ad-hoc objects with fields for each column. This worked out alright at first, but it not only stepped all over MVC separation, since the controls had to "know" about the nature of the data they were working with, but also made the necessary event-handler code really tangled.

What clinched the deal was the side effects of switching a character or role - not every character can fill every role, so switching a character or role could easily switch the other. Case in point - if I switch my lineup line there to my hunter Elegabal, the role has to change as well:

With a JavaScript array and event handlers, every change meant code attached to the control itself that looked up the current character in the database and the roles they can fill, checked whether they're eligible for the selected role, and then determined either what roles they CAN fill (if a character was picked) or what character is available to fill the selected role. Not impossible, but VERY hairy.

So I switched over to Java objects on the back-end. While the Java code itself is less expressive than JavaScript, the organization is significantly cleaner and more logical. The Lineup is a class that descends from Vector to provide the overall table, and each element is a LineupLine object with getters and setters for each important element. That way, rather than each role radio button having to have code to sanity-check the character, they're just bound to "#{line.role}" and the LineupLine.setRole(String) method handles all the particulars.
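
The shape of it is something like this sketch (hypothetical names; the real class has more properties and smarter lookups):

import java.io.Serializable;
import java.util.Arrays;
import java.util.List;

public class LineupLine implements Serializable {
	private static final long serialVersionUID = 1L;

	private String character;
	private String role;

	public String getRole() { return this.role; }

	// Changing the role sanity-checks the character in one place, rather
	// than in event handlers scattered across the XPage
	public void setRole(String role) {
		this.role = role;
		if(!rolesForCharacter(this.character).contains(role)) {
			this.character = findCharacterForRole(role);
		}
	}

	public String getCharacter() { return this.character; }

	public void setCharacter(String character) {
		this.character = character;
		List<String> roles = rolesForCharacter(character);
		if(!roles.contains(this.role)) {
			this.role = roles.get(0);
		}
	}

	// Stand-ins: the real versions query the character database
	private List<String> rolesForCharacter(String character) {
		return Arrays.asList("Tank", "Healer", "DPS");
	}
	private String findCharacterForRole(String role) {
		return this.character;
	}
}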

Early on, though, I started running into some oddities - I'd pick a different role, the table would refresh, and then my role selection would go back to its previous state. I added in some System.out.print() lines in the Java objects to make sure setRole() was being called and, sure enough, it was - I could see that it was called with the right new value, yet it kept switching back to the old one. Upon further investigation, I found the problem: while the new role was being set, the server was also getting a setCharacter() call with the existing character value. So the role would be switched, then the character would be set, and, since that character couldn't do the new role, it'd naturally switch the character back.

The solution I found was to enable Partial Execution on the input controls. I can only assume that this means that, rather than sending the server the entire contents of the row as the new data to process, it sends only the control's (i.e. the drop-down's or the radio button's) data, which limits the method calls to only the one that actually changed. Presumably, this has the side effect of being slightly faster, since less data is sent to the server to be processed.
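
In markup terms, that corresponds to the execMode/execId pair on the event handler - something like this (the IDs here are hypothetical):

<xp:comboBox id="character1" value="#{line.character}">
	<xp:eventHandler event="onchange" submit="true"
		refreshMode="partial" refreshId="lineupRow"
		execMode="partial" execId="character1"/>
</xp:comboBox>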