Seeing the Familiar in SwiftUI and Combine

  • Jun 11, 2019

Apple announced quite a bit at WWDC last week, and some of the stealth favorites for programmers have turned out to be SwiftUI and Combine. My macOS and iOS development is incidental at most, but I like to pay attention to this stuff, and I think that these frameworks in particular are notable for how they reflect attributes of other languages and frameworks that I do use.

SwiftUI

Before beginning, I should point out that the best source of information about SwiftUI at the moment is Apple's WWDC video archive, in particular "Introducing SwiftUI" and "SwiftUI Essentials".

SwiftUI is generally summed up as a "declarative UI framework", with "declarative" here primarily setting it apart from imperatively-defined UI. Truth be told, a mix of the two has been common in Cocoa development and elsewhere for a very long time - the definition of UI layout in .nib files comes from the old NeXT days, for example. What sets SwiftUI apart is a set of technical improvements and a shift of focus: instead of your program creating and managing a UI, it defines what it wants the UI to be and lets the framework handle the rest.

Declaring the UI

The best way to think of how this works is that, instead of actively calling methods and setting properties on a window, some buttons, etc. on the screen, your code in SwiftUI creates a work order for what it wants the current state of the UI to be and hands that to the framework to figure out what needs to happen to get there. That "what needs to happen" can range from initially creating the layout to diffing the old and new states, removing old elements, adding new ones, and animating the transitions between them.

The most immediate analogue for this in common use today is React, which has essentially the same model. In (presumably) both SwiftUI and React, your job as a programmer is to have a method on a component object that is called frequently to emit what it thinks its contents should be right now. So if you have, for example, an array of objects that's displayed as an unordered list, you'll have a render() method that looks like:

render() {
  return (
    <ul>
      {this.props.items.map(item => (
        <li key={item.id}>{item.text}</li>
      ))}
    </ul>
  );
}

In SwiftUI, a cut-down version would look like:

var body: some View {
  List(model.items) { item in
    Text(item.text)
  }
}

And, as I mentioned, the core concepts aren't new. In core XPages, we'd write something very similar:

<xp:repeat value="#{items}" var="item">
  <xp:this.facets>
    <xp:text xp:key="header" contentType="HTML" value="&lt;ul&gt;"/>
    <xp:text xp:key="footer" contentType="HTML" value="&lt;/ul&gt;"/>
  </xp:this.facets>

  <li><xp:text value="#{item.text}"/></li>
</xp:repeat>

However, the last one differs in a number of ways, not the least of which is that the UI-generation part doesn't retain a concept of shifting state. You, the programmer, may know that the items list may grow or shrink, but, in the XPages model, the browser just starts with one block of HTML and replaces it with another on a refresh. Internally, on both server and browser, I'm sure there's some retained memory for efficiency's sake, but a removed list item won't, for example, know to gracefully fade out and let the remaining ones shift into its place.

SwiftUI drives this distinction home by preferring UI component definitions to be "structs". Java doesn't currently have an analogue to structs, but they come from C and they're effectively a "pure data" type. The distinction between classes and structs gets a little murky in Swift, but the core idea is that structs are meant to represent pure data in a given state and are copied when passed around. That matters here because your SwiftUI component's job is not to be the UI, but rather to emit in-memory blueprints for what it wants given the current state of the data. When data changes, the framework asks the component for a new representation, compares the old and new ones in memory, and then makes changes to the UI representation as appropriate.

It's kind of an odd distinction to write out, but I found that there was a point when I was learning React when it "clicked" in my head.

Separation of UI Definition and Output

Beyond the simplicity of declaring the UI, one of the big things that the SwiftUI presentations drive home is how one UI definition can be used across all of Apple's platforms. A list is a list is a list, and the fact that it looks one way on a watch and another way on a TV isn't inherently important.

Seeing this made me feel pretty good, since it reminded me of a post I wrote years ago that dealt with the component/renderer split in XPages at a conceptual level. Though renderers in XPages don't go so far as spitting out an AppKit application at runtime, the concept of an abstractly-defined UI that is then rendered differently based on the target is very important.

What remains to be seen with SwiftUI is how clean this distinction can remain. In my XPages example, I used some of the ExtLib's semantic form components, but real XPages applications are almost always mucked up with inline CSS classes and styles, client and server JavaScript, meaningless HTML tags like <div>, and so forth. It's one thing to say "oh, this just means a checkbox on macOS and a toggle on iOS", but another to handle a complex, crafted user interface that may be radically different in different contexts.

DSLs

SwiftUI is written in the full Swift language, but it's best described as a domain-specific language written in Swift. The term "DSL" originally meant a whole language dedicated to a small task, but nowadays usually refers instead to a style of writing an API that, when paired with a general-purpose language, acts like it's a language dedicated to the task. This is usually a feature of "scripting"-type languages, where the syntax is clean and flexible enough to not interfere.

In the Java world, the go-to language for this has long been Groovy. Darwino uses Groovy for the database adapter DSL, and SmartNSF does as well for defining services. The most common use nowadays is probably Gradle, the Maven-competing build system. It uses Groovy's closures and parentheses-less method calls to make a build script that looks more like a configuration file than a script:

plugins {
    id 'java'
    id 'application'
}

repositories {
    jcenter() 
}

dependencies {
    implementation 'com.google.guava:guava:26.0-jre' 

    testImplementation 'junit:junit:4.12' 
}

mainClassName = 'demo.App' 

The key reason why these DSLs are useful (other than not having to write a new parser) is that you have the full abilities of the underlying language at your disposal. In SwiftUI, you can do your "hide-when" logic by using the normal old if structure, rather than having a specialized "when should this show up?" property. That also means that you can bring in whatever other logic you have, third-party libraries, and so forth, without having to have specialized support in the API.
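
To translate the same point into Java terms, here's a minimal sketch - the Panel class is entirely made up for illustration, not from any real framework - showing conditional display as plain control flow rather than a specialized framework feature:

import java.util.ArrayList;
import java.util.List;

public class UiSketch {
    // Hypothetical UI component, for illustration only
    static class Panel {
        private final List<String> children = new ArrayList<>();
        Panel text(String value) { children.add(value); return this; }
        @Override public String toString() { return children.toString(); }
    }

    public static void main(String[] args) {
        boolean showDetails = args.length > 0;
        Panel panel = new Panel().text("Hello");
        // No specialized "hide-when" property - just a normal if
        if(showDetails) {
            panel.text("Details");
        }
        System.out.println(panel); // [Hello] or [Hello, Details]
    }
}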

Combine

Combine, in addition to having an ominous name, is a less-flashy addition than SwiftUI, but it's nonetheless important and also shows the integration of growing themes in programming elsewhere, specifically reactive programming. It's one of those topics where you can very easily fall off a conceptual cliff, and I found that even Apple's introductory session made it sound more daunting than it is by focusing on the specific Swift protocols in practice.

I've found that the best way to learn something like this is to set aside the "asynchronous" aspect at first to focus on the "data flow" part. For this, we're in luck, since this is what Java 8 streams are. Streams are all about starting with some source of data - often just a List implementation, but it's intentionally arbitrary - and changing, filtering, sorting, and otherwise manipulating it to get a result. So a prototypical example from Java can be:

Stream.of(10, 1, -3, 5)
	.filter(i -> i > 0)                  // [10, 1, 5]
	.map(i -> i * 2)                     // [20, 2, 10]
	.sorted()                            // [2, 10, 20]
	.map(String::valueOf)                // ["2", "10", "20"]
	.collect(Collectors.joining(", "));  // "2, 10, 20"

Most of the time in practice, the actual implementation will pretty much match the sequential reading of this path, and so the result will be roughly similar to what you'd get if you wrote it out with for loops, if statements, and so forth. However, because your code is describing what you want done rather than how to do it, there's room here for short-circuits, multithreading, and optimizations for different collection types.

If you look at a snippet of an example from Combine, you can see near-identical syntax in Swift, with the same idiomatic indentation:

let p4 = p1
	.merge(with: p2)
	.append(5) // add 5 to the end of the sequence
	.allSatisfy { $0 >= 1 } // check if all values are bigger than 0
	.count() // how many values: 1

Importantly, the same style of syntax applies when the incoming data is asynchronous and when the final output is similarly out-of-band. This is what all of Apple's Combine examples start with, which is why I think it can seem a bit daunting. But I think the only real switch to make in your head is to slot in the term "Publisher" for the starting provider of your data (originally an array here, but it could be a keyboard or network resource) and "Subscriber" for the code that deals with it.

Admittedly, things can get more complicated there when you need to add in error handling, value binding, and other practical considerations, but the simplicity of the core concept remains.
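
For Java developers, the closest built-in analogue to that Publisher/Subscriber vocabulary is the java.util.concurrent.Flow API added in Java 9, which defines the same roles (though without Combine's rich operator chains). A minimal sketch:

import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowExample {
    public static void main(String[] args) throws InterruptedException {
        try(SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                public void onSubscribe(Flow.Subscription subscription) {
                    subscription.request(Long.MAX_VALUE); // ask for every value
                }
                public void onNext(Integer item) {
                    System.out.println("Received " + item);
                }
                public void onError(Throwable t) { t.printStackTrace(); }
                public void onComplete() { System.out.println("Done"); }
            });
            publisher.submit(1);
            publisher.submit(2);
        } // close() signals onComplete
        Thread.sleep(500); // delivery is asynchronous; give it a moment
    }
}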

Overall

For Domino needs specifically and web development generally, SwiftUI and Combine don't themselves matter much. Still, I think it's useful to take times like this to see larger trends and, when you can, bask in the satisfaction of having seen the concepts elsewhere.

My Slides From Engage 2019 - De04. Java With Domino After XPages

  • May 20, 2019

Engage 2019 has come and gone, and I had an excellent time. I also quite enjoyed presenting my "group therapy" session on some options that XPages developers have for the future. In a lot of ways, it was similar to my presentation at CollabSphere last year, mixed with the various new developments I've talked about on here since then.

Engage 2019

  • May 6, 2019

Engage is just around the corner in Brussels, and I'll be joining everyone there, presentation in hand. Specifically:

Dev04. Java With Domino After XPages
Tuesday, May 14 at 16:00 in room E. Mahy

XPages guided Domino web development out of a world of archaic proprietary hacks and into the realm of Java server development and something approximating Java EE. Now that the XPages framework is moribund, the question is: what's next? If you've built up Java skills over the years, you have a direct path to use them in the new modern world, whether via OSGi plugins and new XPages extensions on Domino or standalone Java web projects. This session will discuss the lessons we learned from XPages, how they correspond to newer technologies, and how to bring your existing apps and plugins forward.

If you attended my session at CollabSphere last year (by the way, you should go to CollabSphere this year too), you may note that the title is pretty similar to that one, albeit changing "In Domino" to "With Domino". This massive change in preposition reflects how things have only gotten more awkward for us in the intervening year.

So, if you're attending Engage, join me as we work through our sorrows together!


Java Grab Bag 2

  • May 3, 2019

Following in the vein of "Java Hiccups", I've had a couple things floating around my head lately that I think collectively make for a good post for Java developers, particularly those working in the Domino arena.

Without further ado:

Map#computeIfAbsent

This is a method that was added in Java 8 and, while it's not as big a deal as the addition of streams, it's one of my favorite additions and something I use very frequently. To give a point of reference, consider this common idiom from an imagined XPages app:

Map<String, Object> applicationScope = ExtLibUtil.getApplicationScope();
if(!applicationScope.containsKey("someVal")) {
  applicationScope.put("someVal", someExpensiveOperation());
}
String someVal = (String)applicationScope.get("someVal");

Essentially, using a Map as a cache for a complex computed value. Java 8 added the #computeIfAbsent method (alongside several similar ones) to do this in one go:

String someVal = (String)ExtLibUtil.getApplicationScope().computeIfAbsent("someVal", key -> someExpensiveOperation());

The second parameter is (usually) a lambda, like the ones used in streams, that takes the provided key as an argument and is only executed if the value does not already exist. Due to the way this was added, most implementations do pretty much the same thing as the first block of code, but you don't have to care about that. Your code gets a bit smaller, the intent is much clearer, and it's less prone to small bugs like changing the key and forgetting to change it in all three places.
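
It also shines for the closely-related idiom of building up a Map of collections, where the "create it if it's missing" step is otherwise pure boilerplate:

Map<String, List<String>> docsByForm = new HashMap<>();

// Creates the List the first time each form name is seen, then reuses it
docsByForm.computeIfAbsent("Contact", key -> new ArrayList<>()).add("CN=Foo");
docsByForm.computeIfAbsent("Contact", key -> new ArrayList<>()).add("CN=Bar");

// docsByForm is now {Contact=[CN=Foo, CN=Bar]}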

Arrays Are Weird

Java's built-in array type is loosely based on C's, and that's reflected in the syntax:

int[] foo = new int[4];
foo[0] = 1;
foo[1] = 2;
foo[2] = 3;
foo[3] = 4;

Like C, they are zero-based, declared with the capacity and not the max index, and cannot be resized. Unlike C, arrays aren't just syntactical sugar on top of pointers, and this manifests immediately in bounds checking. Take this line:

foo[4] = 10;

In C, this will (famously) just write an integer 10 value into whatever memory happens to be just beyond the bounds of your array. In Java, you'll get an ArrayIndexOutOfBoundsException, saving you from the insidious bug. But, since Java arrays are (probably) implemented internally very similar to in C - they're likely contiguous blocks of memory sized to the type - they're still extremely efficient, and so they show up in a lot of speed-critical code.

As speedy and safe as they are, though, they're still pretty unfriendly. For starters, they can't be resized. When you do new int[4] (or use literal syntax like new int[] { 1, 2, 3, 4 }), you carve out that much memory and can't shrink or expand it in-place. You can change the values inside an array, just not its size. To "resize" efficiently, you have to make a new array and then use System.arraycopy to populate the new array with the contents of the old.
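
To make that concrete:

int[] foo = new int[] { 1, 2, 3, 4 };

// "Resizing" means allocating a new array and copying the old contents in
int[] bigger = new int[8];
System.arraycopy(foo, 0, bigger, 0, foo.length);
foo = bigger;

// java.util.Arrays.copyOf wraps up the same dance for you
foo = java.util.Arrays.copyOf(foo, 16);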

This is all why the List interface (with its predecessor class Vector) exists: they serve the same function of "ordered collection of stuff", but allow for dynamic resizing. Because these objects are usually "efficient enough" (ArrayList uses "true" arrays under the covers) while having numerous additional benefits, you should use them as your first go-to and only use arrays if you have a reason.

That's in part because the weirdness of arrays doesn't end with their inconvenience. Array types are actually implicitly-created classes, even when they contain primitive types. So:

int foo = 3; // primitive value
foo = null; // syntax error!
int[] bar = new int[] { 3 }; // Object
bar = null; // legal!

List<int> fooList = new ArrayList<>(); // syntax error - primitives can't be in generics
List<int[]> barList = new ArrayList<>(); // legal!

Object.class == Object[].class; // false
int[].class.isArray(); // not only legal, but true

Java provides two main utility classes for working with arrays: java.util.Arrays (extremely useful for Arrays.asList) and java.lang.reflect.Array (usually only useful in edge cases).
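
As a quick taste of the former - and of a classic gotcha - Arrays.asList wraps an array in a List, but the result is still backed by the fixed-size array and so inherits the no-resizing rule:

List<String> names = Arrays.asList("foo", "bar", "baz");
names.set(0, "qux");  // fine - writes through to the backing array
names.add("corge");   // throws UnsupportedOperationException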

Java Has No Library Versioning System

If you've worked with Java in Designer or Eclipse, you've likely run across this preferences pane or its per-project version:

Eclipse Java compiler settings

These settings affect two things:

  • The syntax allowed in your source (e.g. new ArrayList<>() requires 1.7 or higher)
  • The class file format version (you can think of this like an NSF ODS). You've likely seen the latter in play by receiving an UnsupportedClassVersionError trying to run Java 7 or 8 code on a pre-9.0.1FP8 Domino server.

Conspicuously absent from this short list is anything to do with classes or methods added to the runtime in newer versions. For example, the String class gained a static String.join method in Java 8 to conveniently concatenate strings with a given delimiter. If you're targeting an older Java version but compiling against a newer runtime library (as Designer 9.0.1FP10+ does with a default target of 1.5 and a JVM of 1.8), you can write a line of code using that method without issue - the syntax doesn't require anything above 1.5, so all is clear as far as the compiler is concerned. But if you then try to run that code on an older JVM (such as an older Domino server), you'll get an exception at runtime, since the method doesn't exist.
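
For example, this innocuous-looking line is a ticking time bomb in that situation:

// Legal 1.5-era syntax, so the compiler is happy - but this throws
// NoSuchMethodError on any pre-8 JVM, where String.join doesn't exist
String hosts = String.join(", ", "server1", "server2");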

Unfortunately, the only true answer to this is to tell your IDE about a JRE for each specific Java version you're targeting, something that doesn't happen by default, and which will gradually get more difficult as Java 6 becomes harder and harder to come across.

This is one of the things that OSGi aims to fix - you could, for example, have many versions of Guava installed, and you could declare that your plugin works specifically with version 18. Then, when loading, the runtime will either bind to a matching version or give you an error that no version could be resolved. No mystery involved. Unfortunately, OSGi is a niche thing losing ground, and the module system introduced in Java 9 consciously does not address this.

In a pinch, you can use the file tool on most Unix systems to check the version of an individual class file:

$ file Foo.class
Foo.class: compiled Java class data, version 52.0 (Java 1.8)

Licensing

A little while ago, Oracle raised a bit of a stink by declaring that, as of this year, commercial use of their Java runtimes would require paid licensing. Historically, you could get support for Java for money, and certain additional components had their own licensing requirements, but it was pretty normal otherwise to install Java from java.sun.com and not give it a second thought.

This naturally caused a few questions when it comes to Domino and other Java-incorporating IBM products, and IBM released a statement that basically amounted to "you don't have to worry about it". IBM has maintained their own variant of the JVM (called J9 for Smalltalk-related reasons, not to be confused with Java 9) and anyway has always had arrangements with Sun/Oracle such that IBM's customers don't have to worry about dealing with Oracle directly.

But what about using Java outside of a licensed product, such as if you just want to run Tomcat on some server? The short answer there is that you're still fine, but you just have to know a little about the difference between "a JDK" and "Oracle's JDK". Java and the surrounding JDK have been progressively open-sourced in fits and starts over the years, and are now at a point where the project called OpenJDK is basically the real Java environment, and Oracle's JDK is just one implementation of it. It's similar to Linux: the core parts are open-source, and many distributions are entirely free, but there also exist commercial variants for pay.

So, if you want to run a Java stack, you can do so without putting forth a single cent by using an OpenJDK build. Oracle, IBM, and others will still be happy to take your money if you want a commercially-supported Java environment, of course.

For a longer explanation, this blog post from @javachampions is pretty much the definitive word.


Bitwise Operators

  • Apr 26, 2019

In the "Java Hiccups" post, I briefly mentioned this little tidbit:

(short)Integer.MAX_VALUE == -1   // true

This fact betrays a lot about how numbers are stored internally in most languages. Some of those details, like the fact that the specific method is called two's complement, are fully in the range of "computer science" stuff - but it can be handy to know about a couple ways you can put this sort of thing to use.

For our purposes, let's work with the short data type in Java, because it's painfully important to Notes. "Short" is called such because it's smaller than a standard integer type in a given language. In some languages, like C, the length of core data types like this is sometimes tied to the bitness of the processor, but in Java they're consistent across everything. Specifically, short in Java is 16 bits long. Let's take the number 21,038, which is represented in binary as:

0101 0010 0010 1110

That's two bytes, and each byte is broken up into two groups of four bits each because it turns out to be helpful visually and because each "nibble" there can be matched to a single hexadecimal character (522E in this case). This is the most common way you'll see binary written when you start getting into this sort of thing.

So I mentioned casting in the previous post, and what casting does with primitive integer types like this is to chop off the bits that don't fit. If you cast the above value into a byte, it'll chop off all but the ending 8 bits, giving you:

0010 1110

Which is 46. If you cast it to a larger type, such as int (32 bits in Java), it just adds some zeros to the front:

0000 0000 0000 0000 0101 0010 0010 1110

Which is still 21,038, but now it takes up twice as much memory.

For our uses, this stuff matters immediately in two ways: it represents the limits of some data storage and it allows us to manipulate numbers with bitwise operators.

Storage Limits

If you max out the bits in a 16-bit integer, you get 1111 1111 1111 1111, and this value is either -1 or 65,535 (64k) depending on whether or not the value is considered "signed" by the code using it. When it's "unsigned", a 16-bit integer can represent between 0 and 65,535; when it's "signed", its range is -32,768 to 32,767 (32k). Java only has "signed" types, which is why Short.MAX_VALUE is 32k and not 64k. C, on the other hand, lets you choose in your code which type you want to use.
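
You can see both readings of the same bits from Java, at least as of Java 8's unsigned-conversion helpers:

short allBitsSet = (short)0xFFFF;                     // 1111 1111 1111 1111
System.out.println(allBitsSet);                       // -1: Java shorts are signed
System.out.println(Short.toUnsignedInt(allBitsSet));  // 65535: the unsigned reading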

The Notes C API uses a couple type names to represent consistent sizes across different platforms, and the one that's important here is WORD: a 16-bit unsigned integer. This is used in, for example, the function used to set the value of a text item in a note:

NSFItemSetText(
  NOTEHANDLE  hNote,
  const char far *ItemName,
  const char far *ItemText,
  WORD  TextLength)

That WORD at the end there is where you tell the API how much text you're providing, and is thus capped at 64k - and is why you can store 60k of text in a non-summary text field but not 70k. As to why summary data is limited to 32k and not 64k, I'm guessing that either there's a signed value in there somewhere or that last bit was needed for something else.

If you look over the list of Notes limits, you can see this reflected all over the place, along with a few 8-bit (0-255) limits for good measure.

Bitwise Operators (for real this time)

The term "bitwise" refers to a handful of operators that deal directly on the bits of a number. Java takes a cue from C here and provides &, |, ^, ~, <<, >>, and >>>. These all have their uses, primarily for really-low-level stuff and efficiency, but we mainly care about & and |. These two may have bitten you in the past because of their similarity to && and || - and in particular because Formula Language uses the single-character versions to mean what Java means by the double-character ones. They also kind of work the same way, but aren't really the same.

To see how they work, let's take our original starting number, 21,038, and pair it up with another value, 5,000:

0101 0010 0010 1110
0001 0011 1000 1000

Visualizing the numbers in binary and stacked like this comes in extremely handy when it comes to dealing with bitwise operators. We'll start with &, or "bitwise AND". What this operator does is take two numbers and return a number where the matching slots in each one both contain a 1. So, in this case, it results in this internal math:

0101 0010 0010 1110
0001 0011 1000 1000
===================
0001 0010 0000 1000

Which corresponds to 4,616.

The | operator, or "bitwise OR", will provide a result where either of the original numbers has a 1 in the slot. In this case:

0101 0010 0010 1110
0001 0011 1000 1000
===================
0101 0011 1010 1110

Or 21,422.
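
You can double-check that math from Java directly:

int a = 21038; // 0101 0010 0010 1110
int b = 5000;  // 0001 0011 1000 1000

System.out.println(a & b); // 4616
System.out.println(a | b); // 21422
System.out.println(Integer.toBinaryString(a & b)); // 1001000001000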

It's pretty rare that you want these operators for the actual numerical values, though. What they're pretty much always used for are "bit fields", an extremely-efficient way to store a set of related flags. Going back to the Notes API, the NSFItemAppend function (a generic way to add an item value) has a WORD Flags parameter that matches up to a number of properties that you can assign to an item. In the C source, they're written as hexadecimal values, but I've added in the binary versions here:

#define	ITEM_SIGN          0x0001   // 0000 0000 0000 0001
#define	ITEM_SEAL          0x0002   // 0000 0000 0000 0010
#define	ITEM_SUMMARY       0x0004   // 0000 0000 0000 0100
#define	ITEM_READWRITERS   0x0020   // 0000 0000 0010 0000
#define	ITEM_NAMES         0x0040   // 0000 0000 0100 0000
#define	ITEM_PLACEHOLDER   0x0100   // 0000 0001 0000 0000
#define	ITEM_PROTECTED     0x0200   // 0000 0010 0000 0000
#define	ITEM_READERS       0x0400   // 0000 0100 0000 0000
#define ITEM_UNCHANGED     0x1000   // 0001 0000 0000 0000

So you can see that each flag there has a distinct slot filled in that each other doesn't (you can also see, like missing teeth, the bits that are probably obsolete or undocumented features). If you want to create an item that is a signed, sealed, summary, readers item, you'd do this:

WORD flags = ITEM_SIGN | ITEM_SEAL | ITEM_SUMMARY | ITEM_READERS;

…which will result in this bit of internal math:

0000 0000 0000 0001
0000 0000 0000 0010
0000 0000 0000 0100
0000 0100 0000 0000
===================
0000 0100 0000 0111

This is extremely efficient, both because you can store a bunch of information in just 16 bits and because working with these bit fields is baked into processors at the lowest level. There's not much you can do on a computer that's faster, in fact.

When you have a value like this, you can query it for whether or not a given value is set by using the AND operator:

flags & ITEM_SUMMARY

This will result in:

0000 0000 0000 0100

…which is the same as the value of ITEM_SUMMARY itself, since that one slot is the only bit in common. Conversely, if you check:

flags & ITEM_PROTECTED

…you'll end up with zero, since the flag doesn't match.

Use in Java

Most of the time, you don't have to think too much about the bits that make up numbers in Java, but they creep up from time to time. Generally, the main situations where they arise are when you're dealing with a low-level API or with code written by someone who spent a lot of time with C and deeply internalized its efficiencies.

For an example of the former, take a look at the com.ibm.designer.domino.napi.NotesNoteItem class in IBM's NAPI. It contains a method, int getFlags(), which returns exactly the value we were talking about above. If you get your hands on an item this way, you can do this to tell if it's a summary item:

(item.getFlags() & 0x0004) != 0

Note the extra != 0. While the operation involved is the same as in C, C lets you treat any non-zero integer value as a boolean true, while Java forces you to perform an actual boolean operation.

For an example of the latter, turn your eyes to the DefaultColumnDef class used in xe:dynamicViewPanel generation and customization, a snippet of which is:

public static class DefaultColumnDef implements ColumnDef {
    public static final int FLAG_HIDDEN  = 0x000001;
    public static final int FLAG_LINK    = 0x000002;
    public static final int FLAG_ONCLICK = 0x000004;

    public int flags;

    public boolean isHidden() {
        return (flags&FLAG_HIDDEN)!=0;
    }
    public boolean isLink() {
        return (flags&FLAG_LINK)!=0;
    }
    public boolean isOnClick() {
        return (flags&FLAG_ONCLICK)!=0;
    }
}

This could also be represented as a bunch of boolean properties on the object or, most idiomatically, an EnumSet, but using a bitmask is slightly more space- and CPU-cycle-efficient. Slightly. The difference isn't enough to even think about normally, but can become worth it in cases where the code is called so often that tiny efficiencies can make a difference.
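
For comparison, here's a sketch of what an EnumSet version might look like (amusingly, EnumSet is itself backed by a bit vector internally, so the practical gap is smaller than it seems):

enum ColumnFlag { HIDDEN, LINK, ONCLICK }

Set<ColumnFlag> flags = EnumSet.of(ColumnFlag.HIDDEN, ColumnFlag.ONCLICK);

boolean hidden = flags.contains(ColumnFlag.HIDDEN);   // true
boolean onClick = flags.contains(ColumnFlag.ONCLICK); // true
boolean link = flags.contains(ColumnFlag.LINK);       // false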

This whole topic is the sort of thing that is normally so many layers of abstraction below where we work that it's basically invisible, but it's definitely good to at least know the basics for the times when it does come up.

XPages on Android

  • Apr 15, 2019

Around the start of the year, I had a bit of a dalliance with the idea of running XPages outside Domino. The upshot of that project is that it is indeed possible to do so, but there'd be some work to do to make it practical.

This month, we revisited the idea with a healthy dose of Darwino to provide some undergirding technology, with the goal of being able to run a plain old XPages app on mobile devices backed by Darwino DBs and replicating back to Domino. There was a lot of fiddling involved, but it works:

Android is the natural first target, but iOS is about 70% of the way there: the scaffolding loads up to the point where it renders a page, but it currently has some trouble resolving data classes and executing renderers.

What Specifically Is Going On?

We set out a few required parameters to call it a successful proof-of-concept:

  • It has to use the actual XPages framework - that is to say, the jar files shipped with Notes/Domino
  • The XPages themselves have to be shared among Domino and the Darwino app, in the form of their Java "intermediate" source (compiling from .xsp source is possible but is a whole other thing)
  • It has to use xp:dominoView and xp:dominoDocument data sources
  • It has to load the Extension Library
  • It has to use ancillary elements from the app: managed beans, themes, CSS, images, SSJS libraries

This checks all of those boxes, and it's pretty satisfying to see in action.

The Stumbling Blocks

As I discussed in my post about the original project, there are aspects of XPages that are meant to make this sort of thing possible, with the core parts abstracting out different platforms and runtime environments. Over the years, though, assumptions about Domino and OSGi crept in, with newer additions and the Extension Library taking a bit less care to be environment-neutral.

Moreover, XPages is not open source and I don't have any particular special access to it, making this whole thing essentially, in the video-game sense, hard mode. There are a handful of classes that needed to be outright swapped out, like the annoyingly-final NotesContext class, but there was much less of that than I'd thought.

After that, getting the data to point to Darwino was pleasantly straightforward. Other than a handful of areas where the NAPI comes in, the Domino data sources largely adhere to the rules of the lotus.domino interfaces and don't make assumptions about the implementing classes (this is essentially how ODA shims in as well).

What This Could Be Useful For

With the right fleshing out, this could be a real way to run existing XPages apps on mobile devices. That could be pretty useful on its own, but I don't think that carrying forward XPages apps as-is is the right idea. The side effect of this, though, is that you have a functioning XPages app in a normal old .war file project, structured with Maven or Gradle, and ready to be molded into newer frameworks using whatever tooling you'd like. No Designer, no OSGi, no Servlet 2.4/2.5, just a clean basis running your existing logic and ready to be improved.

If the mountain of existing XPages code is going to have a future, I think it should be something like this.

Implementing Cluster Replication From Domino to Darwino

  • Mar 19, 2019

Since its inception, Darwino has had two-way replication between it and Domino, and it's evolved over the years in fidelity and configurability. Recently, I was able to check an item off the to-do list that I've wanted for a while: "cluster-style" replication from Domino, where a document change immediately kicks off replication to Darwino.

Component #1: Extension Manager

Fortunately, the implementation is straightforward in concept: the Extension Manager has been in there since 4.0 and provides exactly the hooks one would need to implement this. The trouble with the Extension Manager, though, is that you can only subscribe to events from within an ExtMgr addin, and that means native code. You can't just hook into it from, say, an OSGi plugin.

My first thought was to use DOTS, which created this exact sort of bridge years ago. However, it's critically limited: the EM ferrying was never really fleshed out that much, nor will it ever be, since it's not a supported project. It was kind-of-sort-of supported in the 9.x era for "social" purposes, but those days are behind us. Moreover, its separate OSGi environment wouldn't suit Darwino's needs particularly well.

The DOTS dynamic library, though, could still be potentially useful. Nathan Freeman came across this a couple of years ago: due to the way the DOTS dylib ferries the events from the ExtMgr world to DOTS, the channel is actually consumable by anything, not just DOTS specifically. My initial implementation did exactly this: it fed from the fountain of messages produced by the DOTS dylib for its own ends.

However, the core trouble still remains that DOTS isn't supported as such, and it has too many moving parts for us to want to take on as a dependency. Moreover, the "siphoning" only works if there's only one subscriber listening - on a server that also runs ODA (which Darwino does not use), you have the two consumers contending for messages, which is a recipe for missed events.

So I decided to write a custom-made ExtMgr addin, which would have the advantages of being much smaller and easier to maintain, feeding a different queue, and being a fun learning opportunity for me. The last part isn't as important to Darwino-the-product per se, but I always like when it lines up like that.

Component #2: Message Queues

The way DOTS and this new addin do their things is to use Message Queues, another technology presumably-not-coincidentally added to Domino in R4. The way these work is that you create a named queue (DOTS's, for example, is named MQ$DOTS) and then feed it strings, which are then consumed by anything running in the same Notes/Domino environment by requesting the queue of the same name and waiting for messages. It's pretty simple both in theory and in execution, with the minor problem that the API isn't officially available from Java.

Fortunately for ODA's (and presumably DOTS's) use, though it's not part of the official API, there is a lotus.notes.internal.MessageQueue class that is a (shockingly-thin) wrapper around the MQ* functions in the C API. It's functional, and the thinness of the wrapper means that the method parameters, though unnamed in the bytecode as usual, are clear matches for the equivalent C parameters.

I initially started using this, but ended up writing a nicer wrapper in Darwino's NAPI that implements BlockingQueue, making consumption in Java much clearer.

Component #3: The Listener

The final piece was the most comfortable, since it's entirely back in the warm embrace of Java: I wrote a class to listen for events coming through this queue, extract the database name, check for replicators configured for that NSF, and immediately kick any applicable ones off.
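
In rough sketch form - with the helper names here invented for illustration, not the actual Darwino classes - the listener loop amounts to:

// "queue" is the BlockingQueue<String> wrapper around the message queue
// that the ExtMgr addin feeds; extractDatabasePath and findReplicatorsFor
// are hypothetical stand-ins for the real logic
try {
    while(true) {
        String message = queue.take(); // blocks until the addin publishes an event
        String dbPath = extractDatabasePath(message);
        findReplicatorsFor(dbPath).forEach(replicator -> replicator.replicate());
    }
} catch(InterruptedException e) {
    // Server shutdown - exit the listener thread
}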

The End Result

Since the overhead of the work in a small replication is so minor, the end result is effectively the same as cluster replication between Domino servers: the change/creation/deletion is propagated over within a couple milliseconds (for small documents, due to, you know, physics). It's pretty satisfying to see in action.

Pub/Sub

If you've been following HCL's announcements lately and feel like this sounds very similar to the pub/sub support they've slated for V11, you're right. My guess is that they wanted to get Elastic Search working with low latency and (kindly and wisely) decided to turn the work required into a nicer interface for the same EM events we're using. It's a good feature and, assuming it's consumable from local Java, would have made my work here easier, but I didn't want to wait, and we target a couple Domino releases back anyway.

Anatomy of a Clean Open-Source Project

  • Mar 18, 2019

Over the years, initially thanks to Peter Tanner's diligent work as the OpenNTF IP Manager and now my own occupation of that seat, I've learned to really appreciate the virtues of dotting your "i"s and crossing your "t"s when it comes to making an open-source project legally clean and clear.

It's definitely something I underrated early on, though - caring about the specific differences between licenses and, in particular, maintaining things like per-file license/copyright headers felt like annoying busywork. For a project that only you will ever use, it technically is, but the hope of open source is that you'll get other people using your work and, ideally, contributing back in turn, and that's when it's important to make sure you have everything sorted out.

Why Bother?

Well, for one, you or your users could theoretically be sued or otherwise legally entangled if you don't keep track of this stuff. Admittedly, it's fairly unlikely, but the consequence of, for example, unknowingly including GPL software in your proprietary product is potentially significant.

It's for that sort of reason that it's important to make sure everything is clean before some large corporations will risk even looking at your project. IBM is particularly good about this because they were significantly burned in the past and came out of it with an extremely strict view, and that rubbed off on OpenNTF both culturally and with their gracious technical assistance along the way.

And, since large consumers require this sort of vetting, it's also important to know how to do it if you want to contribute code to an open-source organization like OpenNTF, Eclipse, or Apache.

It's also surprisingly satisfying once you get into the swing of it, I've found.

The Example Project

Since I've been spending a lot of time recently with the NSF ODP Tooling project, we'll look at its GitHub repository.

Common Files

There are a couple common features that tend to show up, and which both people and tooling (like GitHub's license identifier) look for:

  • The LICENSE file, which is the most critical. This contains the text of the license you're using, as well as one of the declarations of the copyright year (though, admittedly, it's easy to forget to include that part). This is what declares the effective license for the code in the repository that you own the copyright to, and should be included right from the start if possible.

  • The NOTICE file, which is vital if you're including any code from any sources not covered under the main copyright. This file should list all of the third-party code you have included in the repository, its license type, and, if possible, where to acquire it. If your project's distributable form includes additional third-party code not included in the repository (such as Maven or npm dependencies), these should be enumerated here as well.

    • Writing this file has an important side effect in that it forces you to account for the licenses of your dependencies. More than once, I've run into a situation where I found that a common dependency had an incompatible license (such as the pure GPL). In some cases, this has meant abandoning the dependency outright, while in others it has meant finding a better-licensed alternative. Eclipse Orbit exists in large part for this purpose.
  • A legal directory containing any additional license/redistribution information not covered by the NOTICE. This can also sometimes take the form of files like NOTICE-Weld in the root of a project, and is useful for mass-including copyright/notice information from third parties in their original form.

In addition to including these files in the project repository, you should also make sure to include them in any binary distributions you make. In my projects, this takes the shape of inclusions in a Maven Assembly Plugin packaging file.

File Headers

I originally chafed against the idea of per-file copyright/license headers. They're not strictly necessary when the files are included in the original repository, they're redundant, you end up with massive commits touching hundreds of files just to change a year, and they can dwarf the size of the actual code they're copyrighting.

However, I've really come around to the practice of including them, and the main reason is that it makes the files easier for others to copy and use legally. It's one thing when someone finds their way to the root of your repository or distribution package, but it's another when they find an individual class by doing a web search or hitting F3 in Eclipse. In those cases, they can find their way up to the license (assuming your source package includes it), but it's much easier if it's just declared right up front.

It's also easier to clearly distinguish the third-party code you're including. When each file has its copyright information clearly noted, you can easily tell the difference between a sui-generis project file and an included third-party file without having to parse through the NOTICE every time.

And, fortunately, it doesn't have to be a huge hassle to maintain. In each of my Maven projects, I include a license plugin configuration to declare copyright information, any special data types, and which files to not include. Then, whenever I add new files or make a change after the turn of a year, I can run mvn license:format and it'll keep everything tidy for me.

pom.xml Configuration

Maven (and it's not alone in this) provides a lot of pom.xml-level elements to declare all sorts of metadata about your project, like its SCM repository, issue tracker, and, critically, license and developers. I like to declare the inception year, the license, and the <developers> block:

	<inceptionYear>2018</inceptionYear>

	<licenses>
		<license>
			<name>The Apache Software License, Version 2.0</name>
			<url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
		</license>
	</licenses>

	<developers>
		<developer>
			<name>Jesse Gallagher</name>
			<email>jesse@frostillic.us</email>
		</developer>
	</developers>

I use that <inceptionYear/> value as part of the license-header file to keep track of the copyright range, at least when there's contiguous multi-year development.

OSGi Stuff

Since most of my projects are still OSGi, I've been aiming to improve my licensing setup there too. The main place where this comes into play is in feature.xml files, which have required elements to specify the copyright and license. It's not terribly unusual for these to end up as their "[Enter Copyright Description here.]" defaults, but it's important to fill these in. They're included in the "accept the licenses" dialog when installing into Eclipse/Designer, and are available in the "Installed software" descriptions in the UI.

But What License to Use to Begin With?

I'll finish off this post with what is actually the most important part of the process, but which can usually be answered simply. There are a lot of open-source licenses out there, and you could theoretically make up your own, but for our purposes the choice tends to follow some basic rules:

  • If you're contributing to an established OS organization, use theirs - for example, if you're contributing to Eclipse, use the EPL.

  • If you want your code to be mixed with other OS projects and (potentially) proprietary ones, pick Apache or something like it.

    • At OpenNTF, we have a preference for Apache over other similar licenses, because it's well-established and makes copyright handling clearer than the equivalents, something that is critical for large companies. Let past lawyers do your legwork on this one.
  • If you want to require that users of your code keep the code open source, consider the GPL.

    • Be extremely wary of this, however: the GPL is intentionally "infectious" and limits how the code can be used. Various projects carve out little exceptions to the GPL to allow use in otherwise-non-GPL products, but it's still something of a minefield.
    • The GPL is one of the approved licenses for OpenNTF, but we kind of discourage it except in cases where a project is GPL because it's derived from previously-GPL'd code.
  • If you don't want to be bothered too much by copyright and just want the code out there, consider Public Domain. In practice, it's usually best for you to retain copyright, but explicitly declaring Public Domain is certainly an effective way of allowing any use.

For projects in our community, the quick answer is "use Apache". It's permissive, covers copyright, and is known and trusted by pretty much everyone.

More Work Than It's Worth?

Both the earlier parts of this post and Betteridge's Law contribute to making it clear that my answer is "no, it's not more work than it's worth", but I can certainly see why it'd feel that way. The first couple times I submitted projects to OpenNTF and got a "here's some stuff to fix" email from Peter Tanner, part of me definitely chafed at the whole thing. That can be particularly the case for Notes-based contributions - sometimes, you just want to plunk an NTF on the project page and be done with it, and Notes certainly doesn't have a "wrap this NSF copy in a ZIP with LICENSE and NOTICE files" checkbox.

However, as I learned more about the legal importance of having licenses correct and got more practice at doing this stuff from the start, I started to appreciate the whole process. It also turned out to be really helpful to sort this stuff out on smaller projects before working on larger ones, especially ones with established teams and procedures.

In all, it's worth it: it allows you to contribute to larger projects and, regardless of project size, it benefits anyone consuming your code.

Java With Domino After XPages

  • Mar 14, 2019

IBM and HCL held a webcast today to detail some plans for Notes/Domino V11. There were some interesting tidbits elaborating on things like the pub/sub support, and it'll be worth tracking down a recording of the event when it's available.

What's important for this series, though, is that this event served as the long-promised "roadmap" announcement for XPages. The roadmap is, in effect, option three: HCL plans to look into ways to reuse some existing XPages code, but in general you should be aiming to write your UIs in something else, either consuming REST services from an XPages container or accessing Domino data via another route (like the domino-db Node.js module and hypothetical Java gRPC client).

So we know the end of the path: not XPages. However, it's not like we're all just going to throw away our existing apps, so there's work to do determining how we're going to get there. The options remain pretty much what they were after CollabSphere last year, albeit now with the doubt removed. The first two options - returning to LotusScript or going to Node - have their advantages and disadvantages, and you could make a reasonable case for either. Personally, I'm not interested in going down those roads, though, and I think it's better for any app of reasonable complexity to dive into Java. Other members of the community and I have developed tools over the years to make it easier, and now's the time to take some of these steps if you haven't already.

Do Not Use Server JavaScript

Server JavaScript was always something of a trap for app architecture. There's nothing inherently wrong with having a scripting language on your UI pages, and it certainly helped bridge some gaps, but the way it and Designer intertwined encouraged developers to create non-portable messes. If you're still writing SSJS, stop immediately.

Learn Proper Java

Java has been around for a long time, and the way to write "good" Java code has changed over time and varies greatly by your environment. Some aspects, though, apply generally, and it's useful to stay up-to-date on current practices. I don't know a better resource for this than Effective Java, which has been updated for Java 7-9 since I last read it.

Speaking of which, you should learn about Java 8 streams and lambdas - they're great. Julian Robichaux did a presentation on this topic back at Connect 2017, and the slide deck is very elucidating.

Adopt Standard Java Technologies

Last year, I created a project to bring some modern JEE technologies to XPages. These are some of the same technologies I've been talking about in my "XPages to Java EE" series and, while that project can't bring the full JEE development experience to XPages, using those tools will help you write code that, in some cases, could be directly dropped into a Java EE app with no modifications at all. There's a big asterisk when it comes to actually accessing Domino data, but that's a solvable problem as well (with some more development).

In particular, you should start writing JAX-RS services. Not only is JAX-RS an excellent and very-capable spec, but REST services are portable to absolutely any front end.
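
To give a taste of how concise that can be, here's a minimal hypothetical resource class (the path and data are stand-ins, not from a real project):

import java.util.Arrays;
import java.util.List;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/contacts")
public class ContactResource {
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<String> getContacts() {
        // Stand-in data; a real app would call into its data layer here
        return Arrays.asList("Foo Fooson", "Bar Barton");
    }
}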

Adopt Automated Builds

Maven has been something of a bugaboo for XPages developers for a while, but doesn't have to be. Node development (server- or client-side) revolves around npm and various build plugins, and Maven is much the same thing. One of the biggest improvements I've made lately to all of my active XPages apps is to wrap the on-disk project for them inside a Maven artifact, using the NSF ODP Tooling. That project allows you to automatically build your NSFs alongside other parts of the project (such as OSGi plugins) without having Designer involved.

Check the example project in that repo, and stay tuned for a 2.0 release (probably) imminently.

Learn Other Toolkits

If you're just starting the process of figuring out what to do after XPages, it doesn't particularly matter which other toolkit you learn, as long as it's reasonably modern. If you take some time to learn how to make, say, a React app but end up going with something else down the line, the lessons you learn will apply very closely. A particularly-comfortable option could be to learn JSF, which has a common ancestry with XPages but has up-to-date capabilities.

Whatever it is, though, just learn some other toolkit.

Follow Channels and Accounts for Other Tech

Over the last couple months, I've started following a lot of Jakarta-related blogs and Twitter luminaries. This applies elsewhere - even if you're not using other toolkits yet, it's very helpful to start immersing yourself in the news and culture.

Don't Stay Still

The primary thing to take to heart is the importance of doing something. Unless you're planning to change careers or retire in the short term, you'll have to make one decision or another. XPages is not going to get meaningfully better, and even existing apps will get worse with time as browsers and technology change.

Other environments, though, are already leagues ahead and are constantly improving. Dive in; the water's fine!

XPages to Java EE, Part 13: Why Do This, Anyway?

  • Mar 1, 2019

In the introductory post to this series, I started by saying that I've come around to the idea that Java EE (and its kindred technologies) is the future for Java development with Domino, and I think it's worth taking some time to make the case that you should think so too.

A lot of it starts with a nagging question:

Is XPages Dead?

Speaking very strictly, no. XPages remains a component of a commercial product and so bug reports receive attention and fixes as they occur, and presumably all of the many problems the platform is bound to run into as the world evolves will also be addressed.

However, I think the more important question is:

Does It Even Matter If XPages Is Technically Not Dead?

At this point, XPages has, for almost its entire existence inside Domino, been "living" the peculiar kind of undeath that afflicts enterprise software. This is the sort of undeath you could see in, for example, IRIX, which had its last major release in 1998 but was "supported" with maintenance releases for another eight years. XPages has similarly had a few new features since the initial ExtLib release in the 8.5.2 era - such as the Bootstrap renderkit and some quality-of-life bits - but it's clearly been in maintenance mode for a while. It's been a reactive type of maintenance, too: since XPages doesn't have the Notes-client-app advantage of existing in a controlled environment, XPages development has been increasingly a story of something breaking in some browser and harassing IBM or HCL until there's a fix.

"But wait," you might say, "HCL is going to be in charge now, and look how they're revitalizing the core of Domino! Maybe they'll do the same for XPages!" And, well, maybe they will. I doubt it, though - the core team they brought over isn't involved with XPages, and in general IBM and HCL seem to think so little of it that they don't capitalize it correctly in slides. HCL's development strategy seems to (defensibly) focus on porting the Notes client to other platforms and encouraging developers to either crawl back to LotusScript or to use development stacks that HCL isn't on the hook for, like Node.

And, even if they did staff up to enhance XPages, would it be wise to depend on that? It would still be a single-vendor stack using technologies (OSGi) and idioms (server-side page state) that work but aren't the way the wind is blowing. I would expect that any renewed push would be short-lived.

Okay, Grumpy Gus, Why Is Java EE Any Better?

There was a period in the recent past where Java EE was hitting some similar trouble: Oracle largely lost interest in Java EE (and Java in general, to a lesser extent) and development slowed. However, because so much was open source and (critically) the development pool was so much larger, things kept moving along, with the insurgent group of MicroProfile building new technologies. The transfer of Java EE to Eclipse has proven to be a blessing as well, with the community taking up the mantle splendidly.

As with anything, there's no absolute guarantee of a future path, but JEE has the critical mass that XPages doesn't, with community members and companies emotionally and financially invested in its future. There's no one company whose loss of interest would doom the whole thing, even the Java rump state at Oracle.

And, in the meantime, the technology is just better. You can use current versions of Java, new web technologies are adopted immediately, the IDEs are significantly more stable and featureful, the whole stack has source and Javadoc readily available, the surrounding tooling is better, the modern APIs are simpler and more descriptive, and so forth. Even though XPages' cousin, JSF, has lost some interest, it has evolved significantly past the point where XPages forked away - and is also just one of many good options for app development.

This Is The Bridge We Were Promised

When XPages was ported into Domino, one of the ancillary features for developers was that it would provide a path to the "real" development world, out of our abusive cell of Domino's legacy web stack. It meant being able to consume common Java libraries more practically than in Java agents, extending the capabilities of the platform ourselves, and using development practices that are closer to the rest of the world.

And, to a large extent, these promises came true. We've had the ability to work with the Servlet spec directly, we've been bringing in libraries like Apache POI with only medium-sized hurdles, and we've been able to (kind of) break the mental binding between the UI and data storage.

Now it's time to truly cross the bridge. There's still a case to be made for writing new XPages applications for now, but every line of XSP markup should be acknowledged as new technical debt. At this point, you should either start working towards Java EE or have a clear plan for how you're going to get there. It doesn't have to be Java EE: there's a lot to be said for taking up HCL on their suggestion to write Node apps, and there are other Java and non-Java frameworks out there that are actively advancing. Regardless of your choice, if you don't choose something, your development will remain stunted and you'll be at a near-guaranteed risk of some critical security problem showing up in an old Dojo version or some other part of the stack and having to clamor for an emergency fix that would otherwise be a one-line pom.xml or package.json tweak (if needed at all).

It's also just very pleasant - give it a try.

XPages to Java EE, Part 12: Container Authentication

  • Feb 20, 2019

Coming from Domino, we're spoiled by the way it handles users and authentication. The server comes with an implicit directory far removed from the app, and so we just use that. Moreover, the HTTP "container" handles authentication for us: we configure whether we want HTTP Basic auth, simple session auth, LTPA, or SAML (I guess) authentication at the server/web site level, also far from the app. Then, in the application, we just set up an ACL in the DB or XPage and reader/author fields on notes and let the server take care of it.

We're also pretty constrained by this, as anyone who has wanted to set up a custom in-app login page, in-app user registry, or even server-level specialized user repository knows. The convenience comes with a cost.

Java EE's Routes

Authentication in Java EE has a complicated history, but the general idea is that there are currently three main ways to handle user registries and authentication:

  1. Container authentication, which is roughly equivalent to Domino. In this case, you configure your server with knowledge about the user registry (say, a static set of users or an LDAP server) and how those users map to "roles" in the JEE sense (more on this in a bit), and then your app just tells the server what parts need login.
  2. Per-app custom authentication. This is roughly equivalent to doing a special in-app session cookie in a Domino app for authentication, but the hooks provided by Java EE make it easier to manage. You could do this entirely homebrew or use a project like Apache Shiro to do the heavy lifting for you.
  3. The Java EE Security API 1.0. This is a new standard, introduced in Java EE 8, meant to give apps the fine-grained custom control of the second option while offloading as many particulars as desired to the servlet container - essentially, a "best of both worlds" approach. In practice, I've found it a bit under-documented and fiddly in its current incarnation, but I'll aim to cover it in a future post.
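
For a taste of that third option, here's a minimal sketch of a custom user registry using the Security API's IdentityStore interface. This is illustrative only - the class name and the hard-coded credentials are hypothetical, and a real store would consult a database or directory:

package security;

import java.util.Collections;

import javax.enterprise.context.ApplicationScoped;
import javax.security.enterprise.credential.Credential;
import javax.security.enterprise.credential.UsernamePasswordCredential;
import javax.security.enterprise.identitystore.CredentialValidationResult;
import javax.security.enterprise.identitystore.IdentityStore;

@ApplicationScoped
public class ExampleIdentityStore implements IdentityStore {
	@Override
	public CredentialValidationResult validate(Credential credential) {
		if(credential instanceof UsernamePasswordCredential) {
			UsernamePasswordCredential login = (UsernamePasswordCredential)credential;
			// Hypothetical check; a real store would look the user up somewhere
			if("admin".equals(login.getCaller()) && login.getPassword().compareTo("secret")) { //$NON-NLS-1$ //$NON-NLS-2$
				return new CredentialValidationResult("admin", Collections.singleton("admin")); //$NON-NLS-1$ //$NON-NLS-2$
			}
		}
		return CredentialValidationResult.INVALID_RESULT;
	}
}

Paired with an authentication mechanism such as @BasicAuthenticationMechanismDefinition on a bean, the container would then route logins through this store.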

Container Authentication

For this post, I'll cover setting up container authentication, since it's the easiest, is well-documented, and can even serve the "per-app" role if you follow the "one app per server" model that's in vogue for microservice/"cloud native" applications.

For testing and development cases, web servers generally provide a mechanism for writing out a simple user/password list. Open Liberty does this via server.xml and Tomcat/TomEE does it via tomcat-users.xml. However, since we're Domino people, that means we have access to an LDAP server we're already comfortable with running, so we can jump right into a "real" setup.
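
If you do want to go the test-registry route on Liberty, it's just a few lines in the server.xml file - the names and passwords here are placeholders:

<basicRegistry id="basic" realm="BasicRealm">
    <user name="testuser" password="testpassword"/>
    <group name="testgroup">
        <member name="testuser"/>
    </group>
</basicRegistry>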

Configuring Domino for LDAP

This topic is a bit out of the bailiwick of this series, but usually it just means creating an LDAP Internet Site and running load ldap. Domino's pretty convenient sometimes. I like to test a freshly-configured LDAP server with Apache Directory Studio.

Configuring Liberty

Once you have LDAP running and working, open Liberty's "server.xml" file (dubbed "Server Configuration" in Eclipse's Servers view) and add the ldapRegistry-3.0 feature and an ldapRegistry element within the top-level server element, customized for your setup:

<featureManager>
    <feature>javaee-8.0</feature>
    <feature>localConnector-1.0</feature>
    <feature>mpOpenAPI-1.0</feature>
    <feature>ldapRegistry-3.0</feature>
</featureManager>

<ldapRegistry host="some.domino.server" port="389" sslEnabled="false"
    bindDN="someuser" bindPassword="somepassword"
    ldapType="IBM Lotus Domino" baseDN=""/>

There are ways to prevent putting an unencrypted password in the "server.xml" file, but this will do for now.
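
For example, Liberty ships with a securityUtility command in its bin directory that can encode the value - by default with light XOR obfuscation, which at least keeps the password from being readable at a glance:

$ /path/to/wlp/bin/securityUtility encode somepassword

Paste the {xor}-prefixed output into the bindPassword attribute in place of the plain-text value.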

As a nice bonus, Liberty's IBM provenance means we get built-in knowledge of Domino and so don't have to jump through the common hoops of setting up query filters and whatnot.

The other bit of configuration in this file is to map users and groups to roles. This is an area where Domino and JEE diverge a bit. "Roles" in JEE mean more or less the same thing as in Domino, but they're configured outside of the application. They can be configured per-app within the server configuration, accomplishing the same end result, but there's a different level of indirection. For our users, we'll configure them server-wide. So, add another element as a peer to the ldapRegistry:

<authorization-roles>
  <security-role name="admin">
    <group name="LocalDomainAdmins"/>
  </security-role>
</authorization-roles>

You can feel free to customize this at will - there's also a user element, as shown below, and the concept lines up pretty much with what you'd expect.
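
For example, a hypothetical variant that grants the role to a single directory user alongside the group would look like:

<authorization-roles>
  <security-role name="admin">
    <user name="cn=Some Admin,o=SomeOrg"/>
    <group name="LocalDomainAdmins"/>
  </security-role>
</authorization-roles>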

Configuring the App

The server configuration on its own doesn't yet change anything - we can still visit any page in the app without a login prompt. What we should do now is clamp down on certain aspects. To demonstrate this, we'll add a UI operation for deleting people and restrict it to our freshly-minted "admin" role. Go to the PersonController class and add the method:

@POST
@Path("{id}/delete")
@RolesAllowed("admin")
@Controller
@Operation(hidden=true)
public String deletePerson(@PathParam("id") String id) {
  personRepository.deleteById(id);
  return "redirect:people"; //$NON-NLS-1$
}

Note that we're using POST for this instead of DELETE as a nod to web browsers' historical limitations. There is a way to intercept and remap incoming requests so that we can use @DELETE annotations that I may cover later, but for now this should work fine.
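
As a sketch of the sort of remapping I mean, a pre-matching JAX-RS filter can rewrite the method based on a conventional override header - the header name is just a common convention, not anything blessed by the spec:

package com.example;

import java.io.IOException;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.ext.Provider;

@Provider
@PreMatching
public class MethodOverrideFilter implements ContainerRequestFilter {
	@Override
	public void filter(ContainerRequestContext requestContext) throws IOException {
		// Rewrites POSTs bearing an X-HTTP-Method-Override header before resource matching
		String override = requestContext.getHeaderString("X-HTTP-Method-Override"); //$NON-NLS-1$
		if("POST".equals(requestContext.getMethod()) && override != null) { //$NON-NLS-1$
			requestContext.setMethod(override);
		}
	}
}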

Modify the "personList.tag" file within "WEB-INF/tags" in "Deployed Resources" to include a button to point to this action:

<%@tag description="Displays List&lt;model.Person&gt;" pageEncoding="UTF-8" trimDirectiveWhitespaces="true" %>
<%@attribute name="value" required="true" type="java.util.List" %>
<%@taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>

<h1>People</h1>
<table>
	<thead>
		<tr>
			<th>ID</th>
			<th>Name</th>
			<th>Email Address</th>
			<th></th>
		</tr>
	</thead>
	<tbody>
		<c:forEach items="${pageScope.value}" var="person">
		    <tr>
		    	<td><c:out value="${person.id}"/></td>
		    	<td><c:out value="${person.name}"/></td>
		    	<td><c:out value="${person.emailAddress}"/></td>
		    	<td>
		    		<form action="${pageContext.request.contextPath}/resources/people/${person.id}/delete" method="POST">
		    			<input type="submit" value="X"/>
		    		</form>
		    	</td>
		    </tr>
		</c:forEach>
	</tbody>
</table>

If you visit http://localhost:9091/javaeetutorial/resources/people and click one of the deletion buttons, you should be greeted with an HTTP Basic authentication prompt that blocks you until you enter valid credentials for an admin.
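
You can also exercise it from the command line - substitute a real document ID and whatever admin credentials exist in your directory for the placeholders here:

$ curl -i -X POST -u "someadmin:somepassword" http://localhost:9091/javaeetutorial/resources/people/some-id/delete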

Next Steps

At this point, we've really covered the main aspects that you'd need to know when making the move to Java EE. After this point, I have some ideas for various miscellaneous posts - other authentication options, app fit and finish, JSF, app configuration, deployment, and so forth - but, if you've been following along, I suggest you take the opportunity now to explore for yourself.

It would also be worth your time to look up some other existing educational resources. I hear that Java Brains is quite good on a lot of these topics, and Adam Bien is one of the leading sources for Java EE examples (and is the source of the template we used at the very start). When looking around, be wary of the age of the content: though pretty much anything related to Java EE will still work, there was a big move away from the Bad Old Days in recent versions, particularly EE 8, and so older examples may have you jump through uglier hoops than necessary. There are also whole technologies that are in common use that are not of immediate interest to us, such as Enterprise JavaBeans, so you can learn or not learn about those at will.

XPages to Java EE, Part 11: Mixing MVC and an API

  • Feb 16, 2019

When we set up our MVC controller classes, we put the @Controller annotation at the class level, which tells the environment that the entire class is dedicated to running the UI. However, we don't necessarily always want to do that - JAX-RS is the way to build REST APIs, after all, and so we should also add JSON versions of our Person methods.

Person Model Modification

Before we get to the meat of the code, go back to the Person class and modify it to remove the parameter to the @Id annotation and switch it to a String type:

package model;

import javax.validation.constraints.Email;
import javax.validation.constraints.NotBlank;
import javax.ws.rs.FormParam;

import org.jnosql.artemis.Column;
import org.jnosql.artemis.Entity;
import org.jnosql.artemis.Id;

@Entity
public class Person {
	@Id
	private String id;
	@Column @FormParam("name") @NotBlank
	private String name;
	@Column @FormParam("emailAddress") @NotBlank @Email
	private String emailAddress;
	
	public String getId() { return id; }
	public void setId(String id) { this.id = id; }
	public String getName() { return name; }
	public void setName(String name) { this.name = name; }
	public String getEmailAddress() { return emailAddress; }
	public void setEmailAddress(String emailAddress) { this.emailAddress = emailAddress; }
}

The reason for the change is that my original example was based on code from an older version of JNoSQL, and long IDs end up causing trouble when updating existing documents.

Also go to PersonRepository and modify it to use a String for the key:

package model;

import java.util.List;

import org.jnosql.artemis.Repository;

public interface PersonRepository extends Repository<Person, String> {
	List<Person> findAll();
}


Tweaking the Controller

The first step to adding API methods is to move the @Controller annotation down to just the methods that emit JSP responses (and adjust for the changed ID field while we're here):

package com.example;

import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.mvc.Controller;
import javax.mvc.Models;
import javax.validation.Valid;
import javax.ws.rs.BeanParam;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;

import org.bson.types.ObjectId;
import org.jnosql.artemis.Database;
import org.jnosql.artemis.DatabaseType;

import model.Person;
import model.PersonRepository;

@Path("/people")
@RequestScoped
public class PersonController {
	@Inject
	Models models;
	
	@Inject
	@Database(DatabaseType.DOCUMENT)
	PersonRepository personRepository;
	
	@GET
	@Controller
	public String home() {
		models.put("people", personRepository.findAll()); //$NON-NLS-1$
		return "person-new.jsp"; //$NON-NLS-1$
	}
	
	@POST
	@Controller
	public String createPerson(@BeanParam @Valid Person person) {
		if(person.getId() == null || person.getId().isEmpty()) {
			person.setId(new ObjectId().toHexString());
		}
		
		models.put("person", personRepository.save(person)); //$NON-NLS-1$
		models.put("people", personRepository.findAll()); //$NON-NLS-1$
		return "person-created.jsp"; //$NON-NLS-1$
	}
}

Doing this shouldn't change the behavior of the app, and that's what we want.

Add Some API Methods

Now, to be a proper REST API, we'll want a suite of Create-Read-Update-Delete methods using standard HTTP verbs. Add these methods to the class - you'll also need imports for java.util.List, javax.json.Json, javax.json.JsonObject, javax.ws.rs.Consumes, javax.ws.rs.DELETE, javax.ws.rs.PathParam, javax.ws.rs.Produces, javax.ws.rs.PUT, and javax.ws.rs.core.MediaType:

@GET
@Produces(MediaType.APPLICATION_JSON)
public List<Person> list() {
  return personRepository.findAll();
}

@GET
@Path("{id}")
@Produces(MediaType.APPLICATION_JSON)
public Person getPerson(@PathParam("id") String id) {
  return personRepository.findById(id).orElseThrow(() -> new javax.ws.rs.NotFoundException("Could not find person for ID " + id)); //$NON-NLS-1$
}

@POST
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Person createPersonApi(@Valid Person person) {
  if(person.getId() == null) {
    person.setId(new ObjectId().toHexString());
  }
  return personRepository.save(person);
}

@DELETE
@Path("{id}")
@Produces(MediaType.APPLICATION_JSON)
public JsonObject deletePersonApi(@PathParam("id") String id) {
  personRepository.findById(id).orElseThrow(() -> new javax.ws.rs.NotFoundException("Could not find person for ID " + id)); //$NON-NLS-1$
  personRepository.deleteById(id);
  return Json.createObjectBuilder()
      .add("success", true) //$NON-NLS-1$
      .build();
}

@PUT
@Path("{id}")
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public Person updatePersonApi(@PathParam("id") String id, @Valid Person person) {
  person.setId(id);
  return personRepository.save(person);
}

Here, we're taking advantage of JAX-RS's MIME-type-based routing: because our @Controller methods deal with HTML but these new methods declare that they're working with JSON, JAX-RS will route incoming browser visits to the controller and incoming API requests to the others.

Testing It Out

We can see this in action by trying out the API from the command line (or with a REST client app like Postman):

$ curl -i http://localhost:9091/javaeetutorial/resources/people
HTTP/1.1 200 OK
X-Powered-By: Servlet/4.0
Content-Type: application/json
Date: Sat, 16 Feb 2019 17:05:16 GMT
Content-Language: en-US
Content-Length: 246

[{"emailAddress":"api@test.com","id":"5c683fd40048ce11b5f6aee8","name":"API Test"},{"emailAddress":"foo@foo.com","id":"5c6841520048cea7c9f6c2d5","name":"Foo Fooson"},{"emailAddress":"foo@foo.com","id":"5c6841690048cea7c9f6c2d7","name":"API mod"}]

$ curl -i -X POST -H"Content-Type: application/json" http://localhost:9091/javaeetutorial/resources/people -d "{\"emailAddress\":\"foo@foo.com\",\"name\":\"Created with cURL\"}"
HTTP/1.1 200 OK
X-Powered-By: Servlet/4.0
Content-Type: application/json
Date: Sat, 16 Feb 2019 17:11:55 GMT
Content-Language: en-US
Content-Length: 89

{"emailAddress":"foo@foo.com","id":"5c68445b0048cea7c9402b85","name":"Created with cURL"}

$ curl -i -X PUT -H"Content-Type: application/json" http://localhost:9091/javaeetutorial/resources/people/5c68445b0048cea7c9402b85 -d "{\"emailAddress\":\"foo_mod@foo.com\",\"name\":\"Modified with cURL\"}"
HTTP/1.1 200 OK
X-Powered-By: Servlet/4.0
Content-Type: application/json
Date: Sat, 16 Feb 2019 17:12:30 GMT
Content-Language: en-US
Content-Length: 94

{"emailAddress":"foo_mod@foo.com","id":"5c68445b0048cea7c9402b85","name":"Modified with cURL"}

$ curl -i -X DELETE http://localhost:9091/javaeetutorial/resources/people/5c68445b0048cea7c9402b85
HTTP/1.1 200 OK
X-Powered-By: Servlet/4.0
Content-Type: application/json
Date: Sat, 16 Feb 2019 17:12:54 GMT
Content-Language: en-US
Content-Length: 16

{"success":true}

OpenAPI Documentation

OpenAPI is the boring-ified name of the standardized version of Swagger, a mechanism for documenting REST APIs - kind of like what WSDL is for SOAP web services. This spec has become an important part of MicroProfile, the set of Java server technologies geared towards writing microservices that shares a lot of core technologies with Java EE. One of the niceties that MicroProfile includes is an automatic OpenAPI generator for JAX-RS services. There are a few things to add to our environment to enable this, but not too much.

To begin with, we'll have to enable the OpenAPI generator feature in Open Liberty (TomEE may have something like this; I don't know). To do that, open up the "server.xml" file (labeled "Server Configuration" in Eclipse's Servers view) and add "mpOpenAPI-1.0" to the feature list:

<featureManager>
    <feature>javaee-8.0</feature>
    <feature>localConnector-1.0</feature>
    <feature>mpOpenAPI-1.0</feature>
</featureManager>

Doing that alone will enable the API documentation, available at http://localhost:9091/openapi. However, if you look closely at the output, you'll see it's not exactly what we'd want: the GET operation for /resources/people points to our MVC home method, which it considers an unstructured string. It also lists the "helloworld" and "markdown" resources, and you can feel free to delete those classes outright - we won't be returning to them.

To fix this, first go to the project's "pom.xml" and add a dependency on the MicroProfile OpenAPI spec:

<dependency>
    <groupId>org.eclipse.microprofile.openapi</groupId>
    <artifactId>microprofile-openapi-api</artifactId>
    <version>1.1</version>
    <scope>provided</scope>
</dependency>

This is another one we can mark as "provided" since the implementation is included with the server.

Now, go back to the PersonController class, add an import line for org.eclipse.microprofile.openapi.annotations.Operation, and annotate the two MVC methods to mark them hidden from OpenAPI:

@GET
@Controller
@Operation(hidden=true)
public String home() {
  models.put("people", personRepository.findAll()); //$NON-NLS-1$
  return "person-new.jsp"; //$NON-NLS-1$
}

@POST
@Controller
@Operation(hidden=true)
public String createPerson(@BeanParam @Valid Person person) {
  if(person.getId() == null) {
    person.setId(new ObjectId().toHexString());
  }
  personRepository.save(person);

  models.put("person", person); //$NON-NLS-1$
  models.put("people", personRepository.findAll()); //$NON-NLS-1$
  return "person-created.jsp"; //$NON-NLS-1$
}

Now, if you refresh the /openapi output, you can see that the GET entry switched to the list method, knows that it outputs JSON, and includes a reference to the Person object structure at the bottom of the file.

There's a good deal more you can do with these annotations to customize the output, but it's nice to know that you can get an immediately-useful file that could be used to generate structured client libraries "for free".
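
For example, annotating the list method along these lines (the summary wording is made up for illustration) attaches a human-readable description to the generated operation:

@GET
@Produces(MediaType.APPLICATION_JSON)
@Operation(summary = "Retrieves all Person entries in the data store")
public List<Person> list() {
  return personRepository.findAll();
}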

Next Steps

Next, I think we'll dive into the world of Java EE authentication, which will be a very-different experience from what we're used to with Domino, for better and for worse.

XPages to Java EE, Part 10: Data Storage

  • Feb 15, 2019

How you store your data in an application is a potentially-huge topic, which is one of the reasons I've pushed it off for a while.

Designer's Curse

This is particularly the case because of the habits we learned over the years as Domino developers. One of the most grievous wounds Domino inflicted on us was an encouragement to always write directly to the data-storage implementation objects - forms and views for Notes client design or the lsxbe/lotus.domino classes for LotusScript and Java. They work, sure - fetching a document, setting fields, and storing it will get the job done - but it's an extremely-bad habit to work without a model framework and some level of indirection. Various people, including me, have made valiant efforts to add a model/DAO layer into XPages development in particular, but they've met with little uptake outside the individual developers who wrote them.

Fortunately, Java EE does not suffer from this specific brain poison, and it has a long history of abstracted data access, primarily via the Java Persistence API, traditionally backed by a JDBC driver for a SQL database. The point of an API like that is to let you write your model objects with just some annotations to explain to JPA bits about how it should be stored, and JPA will take care of the dirty work of actually mapping data types, writing queries, fetching data, and so forth.

JNoSQL

We won't be using JPA for this example, though. Instead, we'll be adding our second incubating spec: JNoSQL. JNoSQL is intended to be essentially "JPA for NoSQL", a largely-rethought API that won't crash into the hackiness of Hibernate's valiant attempt at re-using JPA directly. JNoSQL is currently slated for standardization as part of Jakarta EE and is under active development, but reached a point a while ago where it's good for use.

However, while there's technically a Domino JNoSQL driver that I put together last year, it's more of a POC than a real thing, and we'll skip it for this. For my uses, I've been using Darwino, which does have a splendid JNoSQL driver, but this series isn't the place to go through getting set up with that. For simplicity's sake, we'll be using MongoDB, which is quick to set up and is probably the furthest-developed driver in core JNoSQL.

MongoDB

So, to start out with, install MongoDB somewhere locally. This differs system-by-system - on Linux and macOS, I think it's available with package managers, or for any OS you can download an installer from their site.

Once it's installed, create a database named "exampledb" and a collection within it named "Person", as seen here with Compass, the standard admin app.

MongoDB collection configuration
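
If you'd rather use the mongo shell than Compass, the equivalent should be something like:

> use exampledb
> db.createCollection("Person")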

Add the Driver

In your project's "pom.xml", add the JNoSQL document DB packages and MongoDB driver to your dependencies block:

		<!-- JNoSQL -->
		<dependency>
			<groupId>org.jnosql.artemis</groupId>
			<artifactId>artemis-core</artifactId>
			<version>0.0.7</version>
		</dependency>
		<dependency>
			<groupId>org.jnosql.artemis</groupId>
			<artifactId>artemis-document</artifactId>
			<version>0.0.7</version>
		</dependency>
		<dependency>
			<groupId>org.jnosql.artemis</groupId>
			<artifactId>artemis-validation</artifactId>
			<version>0.0.7</version>
		</dependency>
		<dependency>
			<groupId>org.jnosql.diana</groupId>
			<artifactId>mongodb-driver</artifactId>
			<version>0.0.7</version>
		</dependency>

For reference, "artemis" in JNoSQL terms refers to the mapping API - the annotations we're going to use - while "diana" refers to the driver portion.

Create the Configuration Class

Create a new class in the model package called DocumentCollectionManagerProducer:

package model;

import java.util.Collections;
import java.util.Map;

import javax.annotation.PostConstruct;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;

import org.jnosql.diana.api.Settings;
import org.jnosql.diana.api.document.DocumentCollectionManager;
import org.jnosql.diana.api.document.DocumentCollectionManagerFactory;
import org.jnosql.diana.api.document.DocumentConfiguration;
import org.jnosql.diana.mongodb.document.MongoDBDocumentCollectionManager;
import org.jnosql.diana.mongodb.document.MongoDBDocumentCollectionManagerFactory;
import org.jnosql.diana.mongodb.document.MongoDBDocumentConfiguration;

@ApplicationScoped
public class DocumentCollectionManagerProducer {
	private DocumentConfiguration<MongoDBDocumentCollectionManagerFactory> configuration;
	private DocumentCollectionManagerFactory<MongoDBDocumentCollectionManager> managerFactory;
	
	@PostConstruct
	public void init() {
		configuration = new MongoDBDocumentConfiguration();
		// Modify this if MongoDB is not on localhost
		Map<String, Object> settings = Collections.singletonMap("mongodb-server-host-1", "localhost:27017"); //$NON-NLS-1$ //$NON-NLS-2$
		managerFactory = configuration.get(Settings.of(settings));
	}
	
	@Produces
	public DocumentCollectionManager getManager() {
		return managerFactory.get("exampledb"); //$NON-NLS-1$
	}
}

There's a lot there, but fortunately some of it builds on the CDI producer/scope functionality we encountered earlier. What we're doing here is setting up an application-wide bean that produces a configuration object for JNoSQL to use - specifically, using MongoDB. In a real situation, you'd want to externalize the settings in some way, but putting it into the code will do for now. The getManager() method will be used behind the scenes when JNoSQL asks the environment for a document-database manager.
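
As a small step in that direction, you could let an environment variable override the default host inside init() - the variable name here is invented for illustration:

// Hypothetical tweak: allow overriding the MongoDB host via an environment variable
String host = System.getenv().getOrDefault("MONGODB_HOST", "localhost:27017"); //$NON-NLS-1$ //$NON-NLS-2$
Map<String, Object> settings = Collections.singletonMap("mongodb-server-host-1", host); //$NON-NLS-1$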

Create the Model

Create another new class in the model package, this time named Person:

package model;

import javax.validation.constraints.Email;
import javax.validation.constraints.NotBlank;
import javax.ws.rs.FormParam;

import org.jnosql.artemis.Column;
import org.jnosql.artemis.Entity;
import org.jnosql.artemis.Id;

@Entity
public class Person {
	@Id("id")
	private long id;
	@Column @FormParam("name") @NotBlank
	private String name;
	@Column @FormParam("emailAddress") @NotBlank @Email
	private String emailAddress;
	
	public long getId() { return id; }
	public void setId(long id) { this.id = id; }
	public String getName() { return name; }
	public void setName(String name) { this.name = name; }
	public String getEmailAddress() { return emailAddress; }
	public void setEmailAddress(String emailAddress) { this.emailAddress = emailAddress; }
}

This class uses JNoSQL's annotations to define an object that can be stored (@Entity), its unique ID field (@Id), and the fields contained in it (@Column, a name that matches JPA's SQL-based view of the world). You can also see that it includes the JAX-RS and validation annotations from the class we set up when learning about MVC. With the artemis-validation dependency we included, JNoSQL will, like JAX-RS, automatically enforce bean property constraints like this when saving, meaning we don't have to spend so much time dealing with validation logic ourselves.

Whether or not it's a good idea to mix the JAX-RS and persistence annotations like this is something I'm not entirely sure about, but it'll work for our purposes.

Create the Repository

Create a new interface (not a class) in the model package named PersonRepository:

package model;

import java.util.List;

import org.jnosql.artemis.Repository;

public interface PersonRepository extends Repository<Person, Long> {
	List<Person> findAll();
}

You may be thinking at this point, as I originally did, that the next step will be to create an implementation class to do the work for this. Nope! This is where some real CDI voodoo comes into play: inside JNoSQL is a bean that produces "proxy" classes on the fly for Repository interfaces and figures out implementations of the methods based on their names, return types, and parameters. It's not magic - there are limits - but in cases like this it'll do what we'd otherwise expect to have to do ourselves.
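
For example, adding a hypothetical derived-query method like this to the interface should get an implementation for free, matching on the entity's "name" column - hedged, since I haven't exercised every naming variant:

// Derived from the method name; no implementation class needed
List<Person> findByName(String name);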

Back to the PersonController

Return to the PersonController class we created before and rework it to use our newly-minted JNoSQL objects:

package com.example;

import java.util.Random;

import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.mvc.Controller;
import javax.mvc.Models;
import javax.validation.Valid;
import javax.ws.rs.BeanParam;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;

import org.jnosql.artemis.Database;
import org.jnosql.artemis.DatabaseType;

import model.Person;
import model.PersonRepository;

@Path("/people")
@Controller
@RequestScoped
public class PersonController {
	@Inject
	Models models;
	
	@Inject
	@Database(DatabaseType.DOCUMENT)
	PersonRepository personRepository;
	
	@GET
	public String home() {
		models.put("people", personRepository.findAll()); //$NON-NLS-1$
		return "person-new.jsp"; //$NON-NLS-1$
	}
	
	@POST
	public String createPerson(@BeanParam @Valid Person person) {
		if(person.getId() == 0) {
			person.setId(new Random().nextLong());
		}
		personRepository.save(person);
		
		models.put("person", person); //$NON-NLS-1$
		models.put("people", personRepository.findAll()); //$NON-NLS-1$
		return "person-created.jsp"; //$NON-NLS-1$
	}
}

Add a Person List Tag

Back in the "Deployed Resources" section of the project, create a new directory beneath "WEB-INF" called "tags", and within that create a new file named "personList.tag":

WEB-INF/tags directory

Set the file contents to:

<%@tag description="Displays List&lt;model.Person&gt;" pageEncoding="UTF-8" trimDirectiveWhitespaces="true" %>
<%@attribute name="value" required="true" type="java.util.List" %>
<%@taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>

<h1>People</h1>
<table>
	<thead>
		<tr>
			<th>ID</th>
			<th>Name</th>
			<th>Email Address</th>
		</tr>
	</thead>
	<tbody>
		<c:forEach items="${pageScope.value}" var="person">
		    <tr>
		    	<td><c:out value="${person.id}"/></td>
		    	<td><c:out value="${person.name}"/></td>
		    	<td><c:out value="${person.emailAddress}"/></td>
		    </tr>
		</c:forEach>
	</tbody>
</table>

This is our JSP equivalent of an XPages custom control, though all of the configuration is done inline instead of via XPages's auto-maintained .xsp-config side file.

Update the Person Views

Modify "person-new.jsp":

<%@page contentType="text/html; charset=UTF-8" pageEncoding="UTF-8" trimDirectiveWhitespaces="true"%>
<%@taglib prefix="t" tagdir="/WEB-INF/tags" %>
<!DOCTYPE html>
<html lang="${translation._lang}">
	<head>
		<title>${translation.appTitle}</title>
	</head>
	<body>
		<h1>Create Person</h1>
		<form method="post" action="people">
			<dl>
				<dt>Name</dt>
				<dd><input name="name" required/></dd>
			</dl>
			<dl>
				<dt>Email Address</dt>
				<dd><input type="email" name="emailAddress" required/></dd>
			</dl>
			<input type="submit" value="Submit"/>
		</form>
		
		<t:personList value="${people}"/>
	</body>
</html>

Do similarly to "person-created.jsp":

<%@page contentType="text/html; charset=UTF-8" pageEncoding="UTF-8" trimDirectiveWhitespaces="true"%>
<%@taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<%@taglib prefix="t" tagdir="/WEB-INF/tags" %>
<!DOCTYPE html>
<html lang="${translation._lang}">
	<head>
		<title>${translation.appTitle}</title>
	</head>
	<body>
		<h1>Created Person</h1>
		<dl>
			<dt>Name</dt>
			<dd><c:out value="${person.name}"/></dd>
		</dl>
		<dl>
			<dt>Email Address</dt>
			<dd><c:out value="${person.emailAddress}"/></dd>
		</dl>
		
		<t:personList value="${people}"/>
	</body>
</html>

Take It For a Spin

Launch the Liberty server and visit http://localhost:9091/javaeetutorial/resources/people. You should be able to add new entries, with the browser taking care of the client-side validation for you and JNoSQL and JAX-RS handling it on the server side. Best of all, the data should persist!

Updated UI with person listing

If you look at the database in Compass, you'll see entries there as well. JNoSQL mapped the Person class name to the "Person" collection in the database:

Data stored in MongoDB

Next Steps

In the next post, I plan to touch a bit on mixing MVC controller methods with JSON-based REST APIs, to bring these parts together into something that starts to approach a real application.

Update: Troubleshooting Note

One thing I encountered in my fiddling was an intermittent case where the server wouldn't load the app, instead complaining about an unsatisfied provider for the PersonRepository class. If you run into this, make sure you have a "beans.xml" file inside your "webapp/WEB-INF" directory in "Deployed Resources", and set its contents to:

<?xml version="1.0" encoding="UTF-8"?>
<beans bean-discovery-mode="all"
  xmlns="http://xmlns.jcp.org/xml/ns/javaee"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_2_0.xsd"/>

This is the CDI configuration file. Though it's mostly empty, the critical part is bean-discovery-mode="all", which causes it to check all available providers in the classpath.

XPages to Java EE, Part 9: IDE Features Grab Bag

  • Feb 13, 2019

In today's post, I'm going to go over a handful of features that IDEs, particularly Eclipse, bring to the table to improve the development experience. Some of these aren't unique to EE development, but our use of Maven and standardized technology makes them better than their equivalents in XPages development.

Open Declaration

In Eclipse, if you right-click on most anything in a Java file and choose "Open Declaration" (or press F3 with the text cursor inside it), you can go to the source of whatever it is, if available:

Open Declaration menu item

This also works in Designer, but it's often much more useful here: because Designer ships without source for its JVM or really any of the classes that make up the XPage stack, you tend to be greeted by the white "here's a bunch of bytecode" screen, which is rarely particularly useful.

Since we're working with open-source components and we told Eclipse to download source and Javadoc for Maven artifacts, though, almost everything will have available source, letting you explore what's happening much more easily.

This will also work for your own code, making F3 an extremely-useful method of navigation. While you're there, try out the "Open Type Hierarchy" and "Open Call Hierarchy" options too.

Open Implementation

CDI's technique of using @Inject to inject implementations of interfaces automatically is a great way to abstract away the business of doing a new SomethingImpl() or finding the managed bean. However, sometimes it's good to find out where the objects you're getting are actually coming from, and Eclipse lets you do this by holding Command (probably Control on Windows, if I had to guess) and hovering over an interface name:

Open Implementation menu option

In this menu, "Open Declaration" will take you to the Models interface class file, while "Open Implementation" will take you to the ModelsImpl implementation class.

Note, though, that Eclipse isn't really following CDI logic here - instead, it's just trying to find implementing classes and either opening a single implementation if there's only one or showing a menu to select from multiple. In practice, that's usually effectively the same thing, but it's an important distinction.

IntelliJ is a bit smarter on this point: it will try to figure out all the CDI logic and show a little bean icon on a line with an injection to take you to the object or method providing it:

IntelliJ bean detection

JBoss Tools

To be fair to Eclipse, there is a project to provide a great deal of improved Java EE behavior: JBoss Tools. It comes with a bunch of addins, editors, and configurators meant to make Eclipse much more aware of things like CDI's real logic. I personally have found it pretty janky in practice, but it's been a while since I gave it a proper shot. Your mileage may vary.

JAX-RS Resources

Since JAX-RS is such a critical part of modern Java development, it comes in handy that Eclipse has some specialized knowledge of it. If you expand the "Services" folder in your project, you'll have a "REST" folder that contains all of your declared JAX-RS endpoint classes and their methods:

Eclipse JAX-RS resource listing

You can use this as a general overview of your app's external API (and, when using MVC, its full URL layout) and you can also double-click any of the entries to go to the declaration.

If you get into a situation where you're writing a paired set of a server module and a client module, you can also let Eclipse generate a resource client class for you.

Deployment Descriptor

Within the "Deployment Descriptor" node in your project, Eclipse lists a bunch of EE/Servlet stuff:

Deployment Descriptor node

Historically, this referred specifically to the contents of the web.xml, but it also shows annotated classes as applicable - such as the Listener coming from the Ozark MVC implementation. Depending on how you're constructing your app, this may be more or less useful, but it's definitely good to know it's there.

Up Next

Next time, maybe I'll finally get to talking about users and data. Or maybe I won't! We'll see.


XPages to Java EE, Part 8: IDE Server Integration

  • Feb 12, 2019

I said that the next post was going to be about authentication or databases, but I'm turning that into a filthy lie. Instead, we're going to take a bit of a detour to talk about a couple ways to let the IDE help you out in development. I'll be talking about Eclipse specifically, but I know that at least the paid version of IntelliJ IDEA has similar features. The first feature on the docket is having the IDE manage an app server for you.

Servers

Up until this point, we've been using the TomEE Maven plugin to create and launch a server for us, which is convenient and portable across whatever environment you're working with. However, once you have a couple applications or want to make persistent server configurations outside of the individual app's pom.xml, it makes sense to run a server and deploy the app to it.

This is where Eclipse's "Servers" view comes into play. If the panel isn't currently in your workspace, go to Window -> Show View -> Other... and choose "Servers" under the "Server" category. By default, it'll be stacked with some other tabs, but I like to position this pane in the bottom-right of my IDE:

Eclipse Servers view

The way this view tends to work is that it points either at a local server installation or at a running remote server. We'll want the former, but, sadly, this is where we part ways with TomEE. Though the server is more than capable for our needs, the Eclipse integration (based on the base Tomcat support) is not, which hinders its use here. Instead, we'll be switching to my current favorite: Open Liberty. Download the latest build from that page (19.0.0.1 as of this writing) and extract the archive to some location on your system (I like to keep a directory of various app servers).

Next, we'll need to install the Liberty support plugin in Eclipse, which is a fortunately-simple matter. Visit the Liberty in Eclipse download page and drag the "Install" button onto your Eclipse toolbar. Once it loads, click Next and Finish a couple times until it's done, and then let it restart Eclipse.

Once Eclipse restarts, click that "Click this link to create a new server..." link. In the resultant dialog, choose "Liberty Server" under the "IBM" category and give it a descriptive name:

Eclipse's New Server dialog

Click Next > and enter the path where you extracted Open Liberty in the "Path" field:

Server path

Click Next >, leave everything at its default, and click Finish.

Click the twistie next to the newly-created server and double-click the "Server Configuration [server.xml] new server" entry to open the server config editor. If it opens on the "Design" tab, click the "Source" tab to see the XML configuration. Set it to:

<server description="new server">
    <!-- Enable features -->
    <featureManager>
        <feature>javaee-8.0</feature>
        <feature>localConnector-1.0</feature>
    </featureManager>

    <!-- To access this server from a remote client add a host attribute to the following element, e.g. host="*" -->
    <httpEndpoint httpPort="9091" httpsPort="9443" id="defaultHttpEndpoint"/>

    <!-- Automatically expand WAR files and EAR files -->
    <applicationManager autoExpand="true"/>
  
    <keyStore id="defaultKeyStore" password="testKeystore"/>
</server>


The first difference here from the default is to swap out the jsp-2.3 feature for javaee-8.0. Liberty organizes its capabilities by "features" in order to let you really trim down the runtime if you don't need specific capabilities. For our sake, we want the whole EE shebang available, though. localConnector-1.0 remains, and is what allows Eclipse to control the server.

We also changed the httpPort of the httpEndpoint element to 9091 to match what we've been using via the Maven config, and added a basic not-exactly-secure keyStore configuration, which would be filled in more if you enable SSL.

Deploying the App

There are several ways to actually get our app onto this server, but the one I've built up a habit of using is to right-click the project, and then select Run As -> Run On Server:

Run on Server

The newly-created server should be selected by default in the resultant dialog, so just click Finish immediately.

The server will churn for a bit and eventually output [AUDIT ] CWWKT0016I: Web application available (default_host): http://localhost:9091/javaeetutorial/. It will also, I note with chagrin, output this:

[ERROR   ] SRVE0283E: Exception caught while initializing context: java.lang.IllegalArgumentException: The controller com.example.HomeController is not a managed CDI bean. Maybe the controller class is missing a scope annotation (e.g. @RequestScoped).
	at org.mvcspec.ozark.servlet.OzarkServletContextListener.failIfNoCdiBean(OzarkServletContextListener.java:76)
	at org.mvcspec.ozark.servlet.OzarkServletContextListener.contextInitialized(OzarkServletContextListener.java:60)
	at com.ibm.ws.webcontainer.webapp.WebApp.notifyServletContextCreated(WebApp.java:2377)
	at [internal classes]

That's because I forgot a step in the setup of MVC controllers yesterday, which is that you're supposed to add one of those annotations to the class too. Fortunately, the examples still work, and we'll fix that omission next time we edit the code.

Most likely, Eclipse will automatically open the app's default page in whatever browser it's configured to use - by default, a browser embedded in Eclipse itself.

Updating the App

In future posts, rather than specifically saying to run the clean install tomee:run Maven goal, I'll just say to run the app - either way should still work. By default, Eclipse will re-deploy the application to Liberty after changes, so you probably won't need to do anything other than refresh the page to see future changes.

Next Up

Next, I think I'm going to cover a grab bag of other features Eclipse has for working with Java EE apps before returning to the dirty business of writing code.