Old App Idea: App Manager

  • May 24, 2016

Years back, I took a small whack at an idea that had been percolating in my head: an "app manager" application that would assist with the XPages-specific portions of running a Domino server. It would cover a lot of ground that Administrator really doesn't touch, such as inspecting NSFs to see which contain XPages artifacts, highlighting potential problems with them, and assisting with app-design backup and deployment.

It never really got too far, since there are always a hundred other things to work on, but I always liked the idea. I decided to toss what code I had laid down up on GitHub:

https://github.com/jesse-gallagher/app-manager

I don't imagine I'll really have time to build on that unless I'm suddenly really struck with inspiration. But either way, the code is there for anyone interested in taking a look.

The Cleansing Flame of Null Analysis

  • May 21, 2016

Though most of my work lately has been on sprawling, platform-level stuff or other large existing codebases, part of it has involved a new small app. I decided to take this opportunity to dive more aggressively than previously into automated null analysis and other potential-bugs tools.

What I mean by "null analysis" is letting the IDE or compiler try to help you avoid NullPointerExceptions. Though there are plenty of other programming mistakes you could still make, these are among the most common, and so a little extra work up front to avoid them should pay dividends. Eclipse has some handy options in its Java → Compiler → Errors/Warnings preferences to assist with this:

The first option will pick up on some pretty basic instances, such as:

Object foo = null;
System.out.println(foo.hashCode());

Since this is clearly going to always cause an NPE, Eclipse is able to point this out as an error. The next level gets a little more nebulous: "potential" null pointer access. This crops up when Eclipse can't reliably determine whether a value will be null, either because there is no way to know at compile time (say, database access) or because the compiler's tooling is too limited. Here's a contrived example:

Object foo = Math.random() > 0.5 ? new Object() : null;
System.out.println(foo.hashCode());

This situation is clearly untenable, but there are other situations where you as a programmer can be very confident that the value will not be null (say, if you swap out the > 0.5 for >= 0.0), but the compiler doesn't know that. That's why it often makes sense to leave that as a warning instead of an error.
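
For instance, making that swap still draws a "potential" warning, even though Math.random() never returns a value below 0.0 and so the null branch is unreachable:

Object foo = Math.random() >= 0.0 ? new Object() : null;
System.out.println(foo.hashCode());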

That's all stuff I've done before, but now I've decided to dive into annotation-based null analysis as well. Unfortunately, in stock Java, this is something of a hot mess (that list even leaves out Eclipse's home-grown version). Since Java didn't grow up with this sort of capability, it's been shoehorned in by various parties over the years. There are other tools to assist you in Java 8, but, unfortunately, I can only target 7 as the highest. For now, I've settled on the "sort-of standard" javax.validation.constraints package. It wasn't really intended for this specific purpose, but it's flexible enough to suit and can be used in Eclipse and FindBugs (though I have my reservations about the choice).

In Eclipse, this type of analysis can be enabled by checking "Enable annotation-based null analysis" below the other options and, unless you're using Eclipse's known annotations, adjusting the "Configure" options next to "Use default annotations for null specifications":

In any event, regardless of the choice of tooling, the "this shouldn't be null" annotations work the same way: you use them to decorate things that you either require not be null when provided to you (method parameters) or promise will not be null when you provide them to others (method return values). For example:

public @NotNull Object doSomething(@NotNull Object otherObject) {
	return otherObject.toString();
}

This highlights three things, two good and one bad:

  • Good: The @NotNull in the method parameter means that, as long as the calling code is also checked for null use, the method can be confident that there won't be a NullPointerException when calling a method on otherObject.
  • Good: The @NotNull on the return value means that other code calling this method can be confident that they will not get a null value from it, and so can skip extra null checks.
  • Bad: Eclipse flags otherObject.toString() as a potential problem because Object#toString carries no nullability annotations, so it can't know for sure that it won't return null. As programmers (or as a compiled-code analysis tool), we can be fairly confident that it will be non-null, because any object returning null for toString is essentially broken on its own.

That last one is a common problem when adopting annotation-based null analysis, at least in Eclipse (I hear it may be better in IntelliJ): its logic doesn't go very deep. If everything is gussied up with these annotations, you're clear - but as soon as you step outside of the project you're working on, you have to add in likely-unnecessary checks. Fortunately, these checks don't realistically hurt (a null check at runtime in a normal app is negligible performance-wise), but having to add them can grate.
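
As a hedged illustration (the wrapper method here is hypothetical), a check like this is enough to satisfy the analysis:

public @NotNull String stringify(@NotNull Object otherObject) {
	String result = otherObject.toString();
	if (result == null) {
		// Realistically unreachable, but it proves to the analysis that
		// result is non-null past this point
		throw new IllegalStateException("toString() returned null");
	}
	return result;
}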

Glutton for punishment that I am, I decided to go a step further and enable FindBugs processing as an integral step of my build. Though FindBugs can be very picky about the types of things it complains about, it is blessedly more thorough in its analysis than Eclipse, so you generally end up conceding that it is correct when it yells at you. Since the project is Maven-based, I added the check in the project's pom file:

<plugin>
	<groupId>org.codehaus.mojo</groupId>
	<artifactId>findbugs-maven-plugin</artifactId>
	<version>3.0.3</version>
	<configuration>
		<includeTests>true</includeTests>
	</configuration>
	<executions>
		<execution>
			<phase>compile</phase>
			<goals>
				<goal>check</goal>
			</goals>
		</execution>
		<execution>
			<id>findbugs-test-compile</id>
			<phase>test-compile</phase>
			<goals>
				<goal>check</goal>
			</goals>
		</execution>
	</executions>
</plugin>

For most uses, that's all that's required. Now, when the project is compiled, FindBugs will give it a once-over and halt the build if it finds anything it doesn't like. This can be tweaked a great deal - for example, changing the checks to run or the severity of the problem needed to fail the build - but the defaults will likely suit.

Adding these extra checks involves a lot of pluses and minuses. The big minus is that you may end up spending a lot of time "fixing" bugs that don't really exist, time that you could instead spend actually writing your application (and writing new bugs that the tools won't find anyway). There's really nothing to be gained by carefully explaining to Eclipse for the hundredth time that toString always returns non-null.

Still, particularly when tested out in a small, low-surface-area app, this can be a good practice to learn and refine. Eventually, a move to Java 8 will help this more, and it certainly doesn't hurt to add in nullability annotations in the meantime. Overall, I think having the tooling help you avoid a whole suite of common "brain fart" bugs like this is worthwhile.

Quick, Short-Notice Announcement: Delaware Valley Soc-Biz UG Meetup

  • May 17, 2016

Granted, this is short notice, but for anyone in the southeast-PA area, there's a meetup at IBM's offices this Thursday (the 19th) from 11 to 12:30. I'll be there, giving a presentation about OpenNTF and some of the ways that I've used the projects we work on there on my customer projects. Better still, there's lunch! The signup form is over on Greenhouse here:

https://greenhouse.lotus.com/forms/landing/org/app/80846239-6f7a-4483-8ace-9e5e02b0a661/launch/index.html?form=F_Form1

If you're able to make it, great! Otherwise, I imagine there should be more of this sort of thing in the future.

Darwino for Domino: Domino-side Configuration

  • May 16, 2016

In my last post, I mentioned that a big part of the job of the Darwino-Domino replicator is converting the data one way or another to better suit the likely programming model Darwino-side or to clean up old data. The way this is done is via a configuration database on the Domino side (an XPages application), which allows you to specify Database Adapters that configure the translation. While it is possible to write these in Java, the primary way is to use a Groovy-based DSL script.

The simplest form of this script may be one line just defining where the NSF is:

nsfName "foo.nsf"

With that, the replicator will open foo.nsf and attempt to replicate all documents with "best fit" translations of each field it comes across. Things can get a little more complex, though:

form("SomeForm") {
	excludeField "foo"
	excludeFields "IgnoreMe"
	
	field "foo_(.*)", nameRegex: true
	arrayField "bar_(.*)", delimiter: "\$", zeroBased: true, initialIndexed: false, prefix: false, compact: true,
		userDataFormatName: "ODAChunk", nameRegex: true
	field ~"impliedfoo_(.*)"
}

form("ArrayTest") {
	arrayField "FirstName", type:TEXT, delimiter: "_"
	arrayField "LastName", type:TEXT, delimiter: "\$", zeroBased: true, initialIndexed: false, prefix: true, compact: true
	
	// Similar to the ODA style
	arrayField "UserData", type:USERDATA, delimiter: "\$", zeroBased: true, initialIndexed: false, prefix: false, compact: true,
		userDataFormatName: "ODAChunk",
		toDarwino: { value ->
			// The value will be an array of byte arrays, which should be strings
			println "testrep: called toDarwino!"
			value.collect { bytes ->
				new String((byte[])bytes, "UTF-8");
			}
		},
		toDomino: { value ->
			// We should provide an array of byte arrays
			println "testrep: called toDomino!"
			println "value is ${value}"
			def result = value.collect { string ->
				string.getBytes("UTF-8")
			}
			println "result is ${result}"
			return result
		}
}

form("RestrictedFields") {
	restrictToDefinedFields true
	field "KnownField"
	field "KnownField2"
}

The specifics of what's going on in this example (pulled from unit test data) aren't too important, but it demonstrates the customizability that an in-language DSL brings. Since the code is executed in Groovy, it has access to the full Java runtime (including, if you deliver it in a plugin, your own custom classes and dependencies), as well as Groovy's nice abilities like closures. In fact, a great many properties, like the converters above, can be specified as closures and executed in the context of each document as it's replicating, allowing for pretty fine-grained translation.

And personally, adapting Groovy into the process was an interesting exercise. Since Groovy was designed explicitly as a "scripting" variant of Java, the process of working it in to an existing Java code base is very smooth, and there aren't too many gotchas. I wrote some Java classes that provide the context for the root and individual "form" blocks, wired up the interpreter, and then they call each other seamlessly. Other languages could probably suit the job well too - JRuby, Rhino, etc. - but Groovy is mature, purpose-built, and largely syntax-compatible with Java itself, making it a very comfortable fit.
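
For the curious, the general Java-side pattern for embedding a Groovy DSL like this is to route the script's unqualified method calls to a delegate object. This is just a sketch of that pattern - not Darwino's actual wiring - and AdapterContext is a hypothetical stand-in for the real context classes:

import org.codehaus.groovy.control.CompilerConfiguration;

import groovy.lang.GroovyShell;
import groovy.util.DelegatingScript;

public class AdapterScriptRunner {
	// Hypothetical context class exposing the DSL's root-level methods
	public static class AdapterContext {
		public void nsfName(String name) { /* record the target NSF */ }
		// form(...) and friends would live here as well
	}

	public static AdapterContext evaluate(String scriptSource) {
		CompilerConfiguration config = new CompilerConfiguration();
		// Unqualified method calls in the script will be routed to the delegate
		config.setScriptBaseClass(DelegatingScript.class.getName());
		GroovyShell shell = new GroovyShell(config);
		DelegatingScript script = (DelegatingScript) shell.parse(scriptSource);
		AdapterContext context = new AdapterContext();
		script.setDelegate(context);
		script.run();
		return context;
	}
}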

Darwino for Domino: Replication and Data Format

  • May 11, 2016

One of the key points of interest in Darwino for Domino developers is its two-way replication. Darwino's replication system was built in such a way that, in addition to its own internal needs, you can also write a replicator to connect to an entirely-unrelated system, as long as that replicator translates the foreign data to and from JSON documents. Domino is a perfect case for this, since the data model is already very similar, and its replicator ships with Darwino and has been a focus of my attention for a while.

Behind the scenes, this replication takes advantage of a lot of the same kinds of things that Domino's replication always has: UNIDs, sequence IDs, original-vs.-in-file modification/creation, and deletion stubs. These details are transparent to the developer: the Darwino adapter knows how to fetch the appropriate data from the NSF and to convince Domino that the Darwino DB is (almost) like a remote Domino server, at least as far as the stored data is concerned.

The primary differences show up in the way data is formatted for storage. Darwino uses JSON for its document format, which has a couple key advantages and disadvantages compared to NSF's "bag of items" approach. The best approach is probably to provide an example and highlight the pertinent differences. Say you have a Domino document that (conceptually) looks like this:

FirstName
	Type: TEXT
	Value: Foo
LastName
	Type: TEXT
	Value: Fooson
Username
	Type: TEXT
	Flags: NAMES READWRITERS
	Value: CN=Joe Schmoe/O=SomeOrg
Birthday
	Type: TIME
	Value: 1970/2/1
Vacations
	Type: TIME_RANGE
	Value: 2016/1/1-2016/1/5, 2016/3/3-2016/3/5
IsAdmin
	Type: TEXT
	Value: Y

On the Darwino side, that would potentially look like this:

{
	"_writers": {
		"username": ["cn=Joe Schmoe,o=SomeOrg"]
	},
	"firstname": "Foo",
	"lastname": "Fooson",
	"birthday": "1970-02-01",
	"vacations": [
		"2016-01-01/2016-01-05",
		"2016-03-03/2016-03-05"
	],
	"admin": true
}

There are certainly a few things to take note of here. First and foremost is the structure of the writers field. Because JSON doesn't have field metadata, the way Darwino does its readers/writers security is by using specially-named and -structured properties within the JSON, and so the converter moves all readers and writers fields into that structure. This has internal-implementation reasons, but I think it's also conceptually preferable to, say, having a multi-level object within each field to declare its flags separate from the value. The name also happens to be stored in LDAP style, because Darwino is more at home with standard LDAP conventions for that sort of thing.

Another thing to note is the format of the date fields. Since JSON doesn't have a real date/time type of its own, these values are converted according to ISO 8601 and stored as strings. That means that your Darwino application will need to know that those string values represent dates, but they're reasonable to deal with.
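
As a small illustrative sketch (not a Darwino API), Java 7-compatible code reading one of those string values back might look like:

DateFormat fmt = new SimpleDateFormat("yyyy-MM-dd"); // java.text
Date birthday = fmt.parse("1970-02-01"); // java.util.Date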

The multi-value date field leads to another important aspect: arrays. In Domino, most items are conceptually presented as arrays, regardless of whether they contain single or multiple values, leading to code that requires either explicitly asking for the first element or jumping through hoops to deal with single or multiple values. Since that's a drag to worry about with JSON, the default behavior for single-value items transferred to Darwino is to store them as "bare" values. When configuring the translation (which will be fodder for a future post), you are able to specify that you want a field to be always stored as an array, which will allow the Darwino-side code to be simpler.
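
For comparison, fetching a single-value item via the standard lotus.domino API involves exactly that sort of unwrapping (doc here is an in-scope lotus.domino.Document; exception handling omitted):

Vector<?> values = doc.getItemValue("FirstName");
String firstName = values.isEmpty() ? null : String.valueOf(values.get(0));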

The last field shows off an outright advantage of the JSON format: boolean storage. When defining the conversion, you can specify a field as boolean and provide what the true/false values will be, and they will be sent over to JSON as true and false explicitly. That's not a night-and-day change, but it is a nice help.
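
As a rough sketch of what that conversion amounts to on the Domino side (again using the standard lotus.domino API, not Darwino's actual code):

// A "Y"/"N" text item on the Domino side becomes a true/false JSON value
boolean admin = "Y".equals(doc.getItemValueString("IsAdmin"));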

Finally, there's the matter of rich text, which is unsurprisingly a nontrivial problem. This is handled in an XPage-alike way: MIME rich text is transferred with only minor adjustments, while Composite Data is converted to HTML and cleaned up a bit before transfer. Darwino supports the concept of attachments natively, and so Domino attachments are brought over with a naming prefix to match the field they're attached to, plus a delimiter to indicate whether they're normal attachments or embedded images. The way it is presented on the user end is dependent on the application, but Darwino has some routines to translate storage-safe inline image refs to app-relative URLs.

Later, I will go into the process of how the Domino-Darwino adapters are set up. The short of it is that you create scripts that can run the gamut from just telling the server where to find the NSF to customizing the translation of each field encountered. This allows you to either transfer the data back and forth in the default "best approximation" approach or use the opportunity to enforce a bit of a schema on old data.

An Overview of Darwino for Domino Types

  • Apr 14, 2016

So, Darwino! I've mentioned it quite a few times on Twitter and, particularly, in person, but I think it's high time I write some proper blog posts about it.

To start with, I'll cover what Darwino is. The short version is it's a Java-based development framework with a replicating document database. The interesting aspects go beyond that, though:

  • In addition to Java web servers, it targets mobile devices, both Android and, through RoboVM, iOS. Those devices store their own replicas of the databases for offline work in the same conceptual way as Notes, but with native (or hybrid web, if you're so inclined) mobile user interfaces.
  • The document database sits on top of SQL servers. Many modern SQL servers have native support for JSON data, and Darwino takes advantage of this to get document-DB flexibility with SQL features.
  • Business logic is shared between platforms. Because Java acts as a common language between each platform, and the document DB works the same way locally and remotely, the core business logic of the app can be identical across each targeted platform, with only the UI changing between them.
  • Along those lines, Darwino isn't prescriptive with the UI: it's not a front-end framework itself, instead providing the basis for using other front-end tools, such as Ionic, JSF, and Vaadin for web/hybrid UIs and the native OS toolkits on mobile.
  • The Darwino syncing protocol is designed to be adaptable to other services. This is immediately notable for Domino developers, but can also be (and has been, in some cases) adapted for arbitrary other back ends, like Connections social data or other databases.

How does this relate to Domino/XPages development? That depends on your desires, really.

In some ways, it doesn't. Darwino is its own platform, running on Java web servers like WebSphere and Tomcat, using independent SQL servers like DB2 and PostgreSQL. Darwino's replication between the server and mobile devices is similar to Domino's, but is its own thing. Similarly, the document model, though conceptually similar to Domino (including enhanced reader and author fields), is not NSF.

However, there are a number of reasons why it's of interest to a Domino developer, and the most immediate of those is its ability to do two-way replication with Domino databases. I'm a little biased on this point because of how much time I've spent working on it, but this replication is capable of some nifty tricks to make it capable and adaptable, including transformation of the data, two-way maintenance of document time stamps, and so forth. With this syncing, it makes it very practical to extend your existing Domino app - be it a classic-style Notes/web app or an XPages one - with a Darwino-side UI that uses the same data, synced down to mobile devices for offline access. And this doesn't require migration; since the changes replicate back, the app can remain chugging away unchanged on the Domino side if desired. This also can be tremendously useful for reporting, by syncing the data over to a full SQL database that can be viewed and queried by normal tools.

And, really, it's also of interest to Domino developers personally by virtue of being a platform that has learned a lot of valuable lessons from Domino and extended them in new ways. Those years of accumulated document-database knowledge will carry over nicely, with some extra benefits if you're SQL-familiar too. Any Java knowledge will come in handy immediately, as Darwino is thoroughly Java-based on all target platforms. And, thanks to its pedigree, a lot of the platform support concepts are similar to aspects of XPages (the good parts). In general, the more XPages development you've done, the more it will benefit you (especially if you want to use JSF for the UI!). You could also, with some servlet-implementation limitations, run Darwino apps on Domino via OSGi, and I've been putting some side work into accessing Darwino databases from XPages directly - Darwino could make a solid basis for Domino-run apps.

So this turned into a bit of a sales pitch, but there's no getting around it - I find this thoroughly compelling and exciting. As I have time, I plan to expand on Darwino's various capabilities (along with the other various blog series I still plan to get to). For now, you can register for and download the Community Edition, read the documentation there, and/or track me down with any questions.

A Bit of Code Archaeology

  • Mar 10, 2016

Yesterday, I decided to toss the source of my first real XPages app up on GitHub:

https://github.com/jesse-gallagher/Raidomatic

It's my WoW guild's web site, which had some forums as well as a raid-management tool and loot tracker. I'm guessing that those tools won't be particularly useful for your average XPages app, but they were interesting things to build, and were a great exercise in figuring out the platform. Since it is quite old, there are also plenty of terrible decisions in there, such as my mass recycler from before I knew that Domino objects are already all recycled at the end of the request, but hey, that's growth.

In any event, it's there for the curious, or in case any of the code ends up being useful for future Googlers.

Maven Native Chronicles: Running Automated Notes-based Tests

  • Feb 27, 2016

This post isn't really in my ongoing Java thread, though it's related in that this is the sort of thing that may come up in fairly-advanced cases. This post will assume a functional knowledge of Maven, Tycho, and JUnit.

For Darwino, I ran into the need to run unit tests on Domino-adapter code during the Maven build process. Since the Domino project tree uses Tycho, this ended up differing slightly from standard Maven testing. Rather than using the src/test/java directory in the same project to house the associated tests, Tycho prefers the very-OSGi-native method of having a separate project, but declaring it a "fragment" plugin attached to the primary one. In OSGi terms, a fragment is a special type of plugin that, when loaded by the runtime, gets glommed on to a specified host plugin and runs in its same classpath. In other contexts, fragments may be used to provide platform-specific additions, add locale resources, or serve other purposes.

So I created a new fragment project, which is structurally much like a normal plugin, but with an extra line in the MANIFEST.MF:

Fragment-Host: com.example.some.parent.plugin

This line tips off the OSGi environment to its nature. In the pom.xml, there are a number of important differences related both to how Tycho handles test fragments and the necessity of loading the Notes native libraries:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>com.example</groupId>
		<artifactId>some-parent</artifactId>
		<version>1.0.0-SNAPSHOT</version>
	</parent>
	<artifactId>com.example.some.parent.plugin.test</artifactId>
	<packaging>eclipse-test-plugin</packaging>

	<build>
		<plugins>
			<!--
				By default, Tycho doesn't include the other fragment plugins when running the test.
				So here, we manually include the appropriate features. 
			 -->
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>target-platform-configuration</artifactId>
				<version>${tycho-version}</version>
				
				<configuration>
					<dependency-resolution>
						<extraRequirements>
						
							<requirement>
								<type>eclipse-plugin</type>
								<id>com.ibm.notes.java.api.win32.linux</id>
								<versionRange>[9.0.1,9.0.2)</versionRange>
							</requirement>
							
							<requirement>
								<type>eclipse-feature</type>
								<id>com.example.some.native.feature</id>
								<versionRange>0.0.0</versionRange>
							</requirement>
							
						</extraRequirements>
					</dependency-resolution>
				</configuration>
			</plugin>
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>tycho-surefire-plugin</artifactId>
				
				<configuration>
					<testSuite>${project.artifactId}</testSuite>
					<testClass>com.example.some.parent.plugin.test.AllTests</testClass>
				</configuration>
			</plugin>
		</plugins>
	</build>
	
</project>

The preamble is the same as usual for Maven, but the packaging is slightly different. Instead of eclipse-plugin, this should be packaged as eclipse-test-plugin. Tycho's packaging doesn't particularly care about whether or not it's a fragment, but it does care about its test nature.

Things get a little interesting in the target-platform-configuration block. These two entries have similar purposes: to cause Tycho to load up other, native-artifact fragments required to run the tests. The first one showed up in the Java series: it contains Notes.jar, but, because it is itself a fragment (and can't be directly depended upon by the test project), Tycho won't automatically load it unless directed to. The second one serves a similar purpose, but loads a feature instead. This feature contains references to a number of distinct platform-dependent native-artifact fragments, and specifying this dependency causes Tycho to consider each one without having to specifically enumerate them in the POM.

The final block is a little simpler, and it just tells Tycho where to start when it goes to run the fragment as a test suite. The AllTests class is a test suite in the JUnit 4 convention, with @RunWith and @Suite.SuiteClasses annotations.
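
For reference, a minimal version of such a suite class looks like this (the listed test classes are, of course, project-specific placeholders):

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({ AdapterReplicationTest.class, FieldConversionTest.class })
public class AllTests { }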


There's another catch to this, though: Notes has some specific demands on its environment, and in particular must be run with knowledge of a Notes program directory, a data directory, a notes.ini, and an ID file (unless you're doing DIIOP (which you probably shouldn't)). The specifics of what the libraries expect in their runtime environment and how they should be loaded in their API calls vary a little from platform to platform, and I ended up with a pile of "just keep trying stuff until it works" code. The result, though, is that I have automated tests running on Windows, Linux, and OS X. First, there's the large platform-specific section of my root POM, which defines platform-activated profiles that set up environment variables:

<!-- These profiles add support for specific platforms for tests -->
<profiles>
	<profile>
		<activation>
			<os>
				<family>Windows</family>
			</os>
			<property>
				<name>notes-program</name>
			</property>
		</activation>
	
		<build>
			<plugins>
				<plugin>
					<groupId>org.eclipse.tycho</groupId>
					<artifactId>tycho-surefire-plugin</artifactId>
					<version>${tycho-version}</version>
					
					<configuration>
						<skip>false</skip>
						
						<argLine>-Dfile.encoding=UTF-8 -Djava.library.path="${notes-program}"</argLine>
						<environmentVariables>
							<PATH>${notes-program}${path.separator}${env.PATH}</PATH>
						</environmentVariables>
					</configuration>
				</plugin>
			</plugins>
		</build>
	</profile>
	<profile>
		<id>mac</id>
		<activation>
			<os>
				<family>mac</family>
			</os>
			<property>
				<name>notes-program</name>
			</property>
		</activation>
	
		<build>
			<plugins>
				<plugin>
					<groupId>org.eclipse.tycho</groupId>
					<artifactId>tycho-surefire-plugin</artifactId>
					
					<configuration>
						<skip>false</skip>
						
						<argLine>-Dfile.encoding=UTF-8 -Djava.library.path="${notes-program}"</argLine>
						<environmentVariables>
							<PATH>${notes-program}${path.separator}${env.PATH}</PATH>
							<LD_LIBRARY_PATH>${notes-program}${path.separator}${env.LD_LIBRARY_PATH}</LD_LIBRARY_PATH>
							<DYLD_LIBRARY_PATH>${notes-program}${path.separator}${env.DYLD_LIBRARY_PATH}</DYLD_LIBRARY_PATH>
							<Notes_ExecDirectory>${notes-program}</Notes_ExecDirectory>
						</environmentVariables>
					</configuration>
				</plugin>
			</plugins>
		</build>
	</profile>
	<profile>
		<id>linux</id>
		<activation>
			<os>
				<family>unix</family>
				<name>linux</name>
			</os>
			<property>
				<name>notes-program</name>
			</property>
		</activation>
	
		<build>
			<plugins>
				<plugin>
					<groupId>org.eclipse.tycho</groupId>
					<artifactId>tycho-surefire-plugin</artifactId>
					<version>${tycho-version}</version>
					
					<configuration>
						<skip>false</skip>
						
						<argLine>-Dfile.encoding=UTF-8 -Djava.library.path="${notes-program}"</argLine>
						<environmentVariables>
							<!-- The res/C path entry is important for loading formula language properly -->
							<PATH>${notes-program}${path.separator}${notes-program}/res/C${path.separator}${notes-data}${path.separator}${env.PATH}</PATH>
							<LD_LIBRARY_PATH>${notes-program}${path.separator}${env.LD_LIBRARY_PATH}</LD_LIBRARY_PATH>
							
							<!-- Notes-standard environment variable to specify the program directory -->
							<Notes_ExecDirectory>${notes-program}</Notes_ExecDirectory>
							<Directory>${notes-data}</Directory>
							
							<!-- Linux generally requires that the notes.ini path be specified manually, since it's difficult to determine automatically -->
							<!-- This variable is a convention used in the test classes, not Notes-standard -->
							<NotesINI>${notes-ini}</NotesINI>
						</environmentVariables>
					</configuration>
				</plugin>
			</plugins>
		</build>
	</profile>
</profiles>

Each block is kicked off both by a specific OS combination, using Maven's OS names (you can also target specific architectures within them), as well as the presence of a notes-program property. This is a convention I've adopted to go alongside the notes-platform property that points to the XSP plugins; this one instead points to the root Notes or Domino install to use for execution.

Windows is the easiest since Notes still feels most at home on there. There, it's just a matter of adding the Notes program root to the Java library path and the environment's PATH. From there, the Notes libraries automatically picked up the data directory and notes.ini, presumably from the registry.

The Mac is mildly more complex: in addition to the two settings from Windows, I also ended up adding the program path to LD_LIBRARY_PATH and DYLD_LIBRARY_PATH. I'm not entirely sure both are needed, but hey, it works this way. In addition, I had to specify Notes_ExecDirectory. After that, the tests found the location of the data dir and Notes Preferences, presumably due to Mac OS conventions.

Linux needed the most hand-holding, which shouldn't be too surprising for those who have installed Domino on Linux - it doesn't seem to respect any platform conventions there. In addition to specifying the notes-program property and using it in the same places as on the Mac, I also added two more properties to my Maven config: notes-data, to point to the data directory, and notes-ini, to point to notes.ini. I used the notes-data property to specify the Directory environment variable that the Notes libraries look for, and then I also specified NotesINI. That's not something that the Notes libs look for, but instead it's a way to shuttle the configuration to the Java code that actually executes the tests.

That leads to the final hurdle: initializing the Notes environment in the JUnit test classes. To do that, I specified a @BeforeClass method that checks for the presence of the Notes_ExecDirectory and NotesINI environment variables. If they're present (i.e. it's Linux), it calls NotesInitExtended with the value of Notes_ExecDirectory as the first argument and = plus the value of NotesINI as the second. Afterwards, whether or not that was called, it calls NotesThread.sinitThread(), and from then on NotesFactory.createSession() will generate proper native sessions.

There's also an @AfterClass method that is the mirror of that: it calls NotesThread.stermThread() and then, on Linux, NotesTerm.
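
Put together, the skeleton of those two methods looks something like this sketch. The notesInitExtended and notesTerm calls are stand-ins for whatever native binding is actually in use - the exact entry point varies by environment:

import org.junit.AfterClass;
import org.junit.BeforeClass;

import lotus.domino.NotesThread;

public class NotesEnvironmentTestBase {
	@BeforeClass
	public static void initNotesEnvironment() throws Exception {
		String execDir = System.getenv("Notes_ExecDirectory");
		String notesIni = System.getenv("NotesINI");
		if (execDir != null && notesIni != null) {
			// Linux-style init: program directory, then "=" plus the notes.ini path
			notesInitExtended(execDir, "=" + notesIni);
		}
		NotesThread.sinitThread();
	}

	@AfterClass
	public static void termNotesEnvironment() {
		NotesThread.stermThread();
		if (System.getenv("Notes_ExecDirectory") != null && System.getenv("NotesINI") != null) {
			notesTerm();
		}
	}

	// Stand-ins for the real native calls, which vary by binding
	private static void notesInitExtended(String execDirectory, String iniArg) { }
	private static void notesTerm() { }
}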


So yeah, there are a lot of hoops to hop through! Hopefully, this post will be helpful for someone attempting to do the same thing I did, and it'll cut down on a lot of searching around and trying to piece together a working environment.

That Java Thing, Part 16: Maven Fallout

  • Feb 23, 2016

So, after the last post's large task of converting to Maven, this step is mostly about picking up the pieces and expanding on some of the concepts. We'll start with M2Eclipse, usually rendered as just "m2e".

m2e

m2e is the set of plugins that acts as Eclipse's interface to Maven. It more-or-less replaces the earlier maven-eclipse-plugin, though you will likely still see references to that around. Eclipse doesn't have any inherent knowledge of how Maven works, so m2e has the complicated task of reading your projects' pom.xml files and adapting them to Eclipse's internal configuration. So, for example, in our projects it saw the presence of Tycho and determined that they should be imported as OSGi projects. In other cases, m2e may pick up the presence of things like Android plugins to trigger the use of the Android development tools.

Though it tries mightily, m2e is the source of a lot of the consternation that can come with a switch to Maven-based development. Because most Maven plugins don't have any inherent allowances for working in an Eclipse environment, adapters have to be written for each one in order for them to work with m2e - this is what the dialog about installing the Tycho adapters in yesterday's post was for. In some cases, these don't exist and you have to tell m2e to ignore the plugin; in other cases, the adapters DO exist, but are flawed in some way. Most of the time, things go alright, but there are enough edge cases that it can be irritating.

For this kind of task, m2e is pretty unobtrusive, but it's important to know it's there.

Updating the .gitignore

One side effect of m2e's behavior is that it's a good idea to remove Eclipse's project configuration files from the Git repository. This isn't strictly required, but it avoids a number of annoying problems when dealing with multi-person Maven projects. To start with, open the .gitignore file from the root of your local Git repository (you can get to this easily using Eclipse's Git Repositories view, in the "Working Directory" part of the repo). Add some lines at the end to ignore .project and .classpath, so your whole file should now look like:

._*
Thumbs.db
.DS_Store

*.class

# Mobile Tools for Java (J2ME)
.mtj.tmp/

# Package Files #
#*.jar
*.war
*.ear

# virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml
hs_err_pid*

# Eclipse project files
.project
.classpath

Depending on how your (hypothetical) team wants to work, it may also make sense to ignore the .settings/ directory, which stores some additional Eclipse project information. However, some of that information may be useful to share - for example, on-save code-cleaning behavior that isn't readily expressed in Maven.

Due to the way Git works, just adding the files to the .gitignore won't remove them from the repository: instead, they'll just no longer show up in the list for new changes. In order to also remove them from the repository without deleting them from the filesystem, go to the "Navigator" pane in Eclipse (if it doesn't show up currently, you can add it via Window → Show View → Navigator), find each .project and .classpath file in the four projects (some will only have the former), right-click, and choose Team → Advanced → Untrack:

Now, commit the changes - though the files remain on the filesystem, they should show up as deleted in the commit dialog:

The target Folder

This is one we've already prepped a bit for. Whereas most Eclipse projects store their binary output (Java class files, Jars, etc.) in the bin folder or elsewhere, the standard Maven behavior is to use target. For most of the projects, this doesn't matter - we had already configured the plugin to use a subfolder here for its classes, and the temporary files for other aspects don't matter. However, it's still important to know about this; when you're looking for the compiled or packaged output of a Maven project, this is the place to look, and we'll run into this when building the update site.

Building the Update Site

There's an important change involved now with how the update site is built: it does not involve opening the site.xml and clicking "Build All" anymore. Instead, it involves right-clicking the root project ("parent-xsp") and choosing Run As → Maven Install:

There are two logical followup questions when seeing this change: "what?" and "why?". Both are bound up in the nature of a Maven project and, significantly, the way Eclipse interacts with one. Maven is primarily a command-line tool - granted, it's a set of Java classes, but the primary way to interact with it is via the command line. m2e does a lot of work to interpret the projects in the same way as the `mvn` command-line tool, but it's just a secondary interpretation due to the way Maven and Eclipse work.

The way to build a Maven-ized project so that it fully uses the configuration is to run the command-line tool. Fortunately, m2e comes with its own embedded version and doesn't require you to use a terminal, but the abstraction is very leaky - and this is why you use "Run As" instead of any of the normal "Build"-related commands. The "Run As" commands construct a CLI-type environment and execute the embedded Maven, which is what then does the real work.

Since "parent-xsp" is the root of our projects, it's the starting point to execute a Maven build. When you run this, you'll see a lot of chatter in Eclipse's Console view, particularly the first run: Maven will seek out the plugins needed to build the projects and install them to the local Maven repository (stored in ".m2/repository" in your home folder). After that, it will build and package each of your projects. There's a whole phase system going on here (similar in concept to the XPages lifecycle), as well as many configuration options, but the important part here is that the "install" command (called a "goal" in Maven parlance) is the last phase that we will worry about, and it will cover everything we need here.

Upon completion, the Console text should end with something like this (incidentally, "Reactor" is Maven's term for the entire blob of modules being processed):

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] parent-xsp ......................................... SUCCESS [  0.617 s]
[INFO] com.example.xsp.plugin ............................. SUCCESS [  1.647 s]
[INFO] com.example.xsp.feature ............................ SUCCESS [  0.411 s]
[INFO] com.example.xsp.update ............................. SUCCESS [  4.143 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 30.310 s
[INFO] Finished at: 2016-02-23T13:55:20-05:00
[INFO] Final Memory: 80M/191M
[INFO] ------------------------------------------------------------------------

Part of this process is the creation of the update site, which, due to how we configured it, will be represented twice in the "target" folder in "com.example.xsp.update": as a tree of files inside the "site" folder and also zipped up into the "site_assembly.zip" file. There's also a file named "site.zip", but that contains just the site.xml, which is not important. It's these files that you should now target with Designer and the NSF Update Site when updating the plugin. In fact, it'd be a good idea to delete the "features" and "plugins" folders from outside the "target" folder now - they won't be used any more.

As for the "Build All" button in site.xml, it's best to pretend it doesn't exist. It will still work, but it will break your Maven build, because it overwrites the "qualifier" in the version numbers. This is, admittedly, a drag: it's convenient having a clear, logical button to build the site, and it's very inconvenient that Eclipse doesn't tell you not to use it any more. However, the Maven process, besides being now required, has a nice advantage: now building the update site will no longer cause Git to want to check in the change. That's something that can get annoying very quickly when working with another developer on a non-Mavenized OSGi project.

Adding Back The Source Plugin

We'll finish the day on an easy one: adding back in the source plugin. Unlike the original setup, which used a separate feature to house the source plugin, we'll now include it in the same feature. You could also continue to have a separate source feature, which would be useful for very large projects where it would actually be a big burden to deploy the source to servers, but, for XPages libraries, it's generally not worth the cognitive hassle.

Since we already configured the Tycho source plugin earlier, this is just a matter of adding a reference to the (implied) source plugin to the feature.xml:

<?xml version="1.0" encoding="UTF-8"?>
<feature
      id="com.example.xsp.feature"
      label="Example XSP Library Feature"
      version="1.0.0.qualifier">

   <description url="http://www.example.com/description">
      [Enter Feature Description here.]
   </description>

   <copyright url="http://www.example.com/copyright">
      [Enter Copyright Description here.]
   </copyright>

   <license url="http://www.example.com/license">
      [Enter License Description here.]
   </license>

   <plugin
         id="com.example.xsp.plugin"
         download-size="0"
         install-size="0"
         version="0.0.0"
         unpack="false"/>

   <plugin
         id="com.example.xsp.plugin.source"
         download-size="0"
         install-size="0"
         version="0.0.0"/>

</feature>

Now, the source will be included in the output and bundled with the main feature when it's installed. Commit this change:


At this point, we're basically back to where we were previously, OSGi-wise, but in a much better position to scale the project further and take advantage of supporting systems. The next couple posts will cover some of those potential systems, as well as remaining large conceptual topics. There's a great deal to know when it comes to Maven, but it's all helpful.

That Java Thing, Part 15: Converting the Projects

  • Feb 22, 2016

Prelude: there was a typo in the previous entry. Originally, the file URL read "file://C:/IBM/UpdateSite", but, on Windows, there should be another slash in there: "file:///C:/IBM/UpdateSite". I've corrected the original post now, but you should make sure to fix your own settings.xml file if needed. Otherwise, Maven will complain down the line about the URI having "an authority component".

The time has come to do the dirty work of converting our existing plugin projects to Maven. There will be some filesystem-side reorganizing and not every project will make it (looking at you, source project), but overall it's mostly a job of pasting a bunch of XML into new files.

For the first leg of this, I recommend removing the projects from your Eclipse workspace by selecting them, right-clicking, and choosing Delete:

On the confirmation dialog, do not select "Delete project contents on disk" - we don't actually want to get rid of the files.

Next, find the projects on your filesystem, create a new folder alongside them named "com.example.xsp", and move the projects inside it. In Maven parlance, we're creating a "multi-module project", and this new folder is the top level in our module hierarchy. This can contain arbitrary levels and can be very helpful in project organization, but this will be a pretty simple parent-and-children case. Next, dive into the folder and delete the "com.example.xsp.source.feature" folder - we'll be able to generate this through Maven now, and so we can trim down our project count slightly.

Now, create a file named pom.xml in the "com.example.xsp" folder alongside the subfolders, and fill its contents with this:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<groupId>com.example</groupId>
	<artifactId>parent-xsp</artifactId>
	<version>1.0.0-SNAPSHOT</version>
	
	<packaging>pom</packaging>

	<modules>
		<module>com.example.xsp.plugin</module>
		<module>com.example.xsp.feature</module>
		<module>com.example.xsp.update</module>
	</modules>

	<properties>
		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
		<tycho-version>0.24.0</tycho-version>
		<compiler>1.6</compiler>
	</properties>

	<repositories>
		<repository>
			<id>Luna</id>
			<layout>p2</layout>
			<url>http://download.eclipse.org/releases/luna/</url>
		</repository>
		<repository>
			<id>notes</id>
			<layout>p2</layout>
			<url>${notes-platform}</url>
		</repository>
	</repositories>

	<build>
		<plugins>
			<!--
				Maven compiler options
			-->
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-compiler-plugin</artifactId>
				<version>3.1</version>
				<configuration>
					<source>${compiler}</source>
					<target>${compiler}</target>
					<compilerArgument>-err:-forbidden,discouraged,deprecation</compilerArgument>
				</configuration>
			</plugin>
			
			<!--
				Tycho plugins
			-->
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>tycho-maven-plugin</artifactId>
				<version>${tycho-version}</version>
				<extensions>true</extensions>
			</plugin>
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>tycho-packaging-plugin</artifactId>
				<version>${tycho-version}</version>
				<configuration>
					<strictVersions>false</strictVersions>
				</configuration>
			</plugin>
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>tycho-compiler-plugin</artifactId>
				<version>${tycho-version}</version>
				<configuration>
					<source>${compiler}</source>
					<target>${compiler}</target>
					<compilerArgument>-err:-forbidden,discouraged,deprecation</compilerArgument>
				</configuration>
			</plugin>
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>tycho-source-plugin</artifactId>
				<version>${tycho-version}</version>
				<executions>
					<execution>
						<id>plugin-source</id>
						<goals>
							<goal>plugin-source</goal>
						</goals>
					</execution>
				</executions>
			</plugin>
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>target-platform-configuration</artifactId>
				<version>${tycho-version}</version>
				<configuration>

					<pomDependencies>consider</pomDependencies>
					<dependency-resolution>
						<extraRequirements>
							<requirement>
								<type>eclipse-plugin</type>
								<id>com.ibm.notes.java.api.win32.linux</id>
								<versionRange>[9.0.1,9.0.2)</versionRange>
							</requirement>
						</extraRequirements>
						<optionalDependencies>ignore</optionalDependencies>
					</dependency-resolution>

					<filters>
						<!-- work around Equinox bug 348045 -->
						<filter>
							<type>p2-installable-unit</type>
							<id>org.eclipse.equinox.servletbridge.extensionbundle</id>
							<removeAll />
						</filter>
					</filters>

					<environments>
						<environment>
							<os>linux</os>
							<ws>gtk</ws>
							<arch>x86</arch>
						</environment>
						<environment>
							<os>linux</os>
							<ws>gtk</ws>
							<arch>x86_64</arch>
						</environment>
						<environment>
							<os>win32</os>
							<ws>win32</ws>
							<arch>x86</arch>
						</environment>
						<environment>
							<os>win32</os>
							<ws>win32</ws>
							<arch>x86_64</arch>
						</environment>
						<environment>
							<os>macosx</os>
							<ws>cocoa</ws>
							<arch>x86_64</arch>
						</environment>
					</environments>
					<resolver>p2</resolver>
				</configuration>
			</plugin>
		</plugins>
	</build>
</project>

So... yeah, there's a lot going on here. This is the biggest of the "XML dumps" we're going to have and contains by far the greatest number of bizarre "you just have to know about it" parts. "POM" stands for "Project Object Model" - it's the language Maven uses to describe the project. Let's tackle the file from near the top (ignoring the XML header):

project and modelVersion

These elements are effectively just boilerplate: the project element is the root of our project descriptor and it contains some definitions to let XML editors parse the file format. In turn, the modelVersion describes to Maven the specific version we're working with, which has been "4.0.0" for as long as I've been doing this.

groupId, artifactId, and version

These elements are obligatory in one form or another in every project, but are less copy-and-paste-able: they define the name and version of your project. These are Maven's equivalents to OSGi's Bundle-SymbolicName and Bundle-Version, though Maven makes an explicit distinction between the overall grouping of the plugin and its specific name. These are essentially arbitrary, but the convention is to use the standard reverse-DNS version of your domain name for the group ID, and then keep this group ID consistent across different projects (...mostly). The artifact ID is less consistent, but it's good to pick a pattern like "projectPrefix-submodule". Here, we actually reverse that a bit to call it "parent-xsp" in order to emphasize that this project's purpose is entirely to be a parent to the submodules and not an interesting artifact to consume itself. We'll break this convention again for the submodules due to our use of Tycho/OSGi.

packaging

A project's packaging describes the sort of output. By default, if this is left un-specified, it's jar - a normal, run-of-the-mill Jar file. There are a few other common ones you may run into - such as war for J2EE web apps or bundle for non-Tycho OSGi bundles - and the one we're using here is pom. This is actually kind of the "none of the above" option: the "pom" is just the file we're editing now, and is included with every project type. Having a packaging type of pom generally means that either the project has no real outputs of its own (as is the case here) or it's an "ad hoc" project that doesn't fit an existing type.

modules

This block is the hallmark of a parent project: it lists the relative folder paths that contain the submodules. In this case, the names line up with the names we'll use for the submodules, but this could potentially vary depending on the folder names and locations. Parent-child module relationships don't have to be physically hierarchical on the filesystem, but it's a good convention when you don't have a specific reason to break it.

properties

This block is the project-level equivalent to the user-level property we defined in .m2/settings.xml. There are two types of properties that can go here, with no obvious distinction between them: known configuration properties used by Maven itself and arbitrary named variables used by the person writing the pom.

The first property - project.build.sourceEncoding - is an example of the former. During execution, Maven will reference this property defined in the project (or one of its parents) when determining the text file encoding to use. This could be set to something else if you're working with non-Unicode files, but it's important to set it here so that file interpretation will not be platform-dependent. These properties can be read like an equivalent of EL for the project XML: this one sets a sourceEncoding property within the build node in project, but more concisely (more or less).

The other two are variables for use later. The names of these have only very loose conventions, but there seem to be a couple common types: ALL_CAPS, camelCase, and hyphen-delimited. You can also specify variables as dot.delimited and they will work the same way, but that makes them more difficult to distinguish from the system-level properties.

repositories

The repositories block is the start of our OSGi-related weirdness. The block itself isn't OSGi-specific - it has its role in other projects that want to make use of dependencies outside of the core public Maven repositories - but the contents are. We're setting two repositories here: one to point to the main Eclipse repository (the Luna version here, but that could just as well be Mars or Kepler) and one to point to the XPages Update Site. This is where the property we set before comes into play, allowing different developers to keep the update site in different locations without changing the project's config.

build

The build section is often the largest part of a POM file - it contains definitions and configuration for various additional Maven plugins used during compilation. We have two tasks to accomplish here: ensure that we use Java 6 for compilation at the root level (to ensure the build doesn't execute as Java 7 or 8 and be incompatible with Domino) and enable a whole slew of Tycho plugins.

The maven-compiler-plugin block specifies the version of the Java compiler plugin to use (3.1, which is actually kind of old, but the differences aren't important) and then provides it with some configuration to set the Java version level and to not choke on forbidden references. Like the Java version, the latter is a nod to Domino: depending on your JVM configuration, you may run into forbidden-reference errors relating to the lotus.domino classes.

The next slew of blocks all relate to loading up various Tycho components. Tycho's job is essentially to construct an entire OSGi environment during the Maven build, and it consists of a number of moving parts, many of which are basically the Tycho version of normal Maven facilities. The tycho-maven-plugin is the core, and its extensions rule is what allows it to worm itself into various phases of the build process. The tycho-packaging-plugin controls the process of bundling the projects as their various types: the plugin, the feature, and the update site (in our case). The tycho-compiler-plugin is the Tycho variant of the Maven one we configured earlier. The tycho-source-plugin is what allowed us to kick the standalone source feature to the curb - it's the equivalent of the Eclipse-specific feature we had been hooking into before.

The target-platform-configuration plugin is the scariest of the bunch. This plugin's job is to establish the OSGi Target Platform we're working with, in conjunction with the repositories specified above. Not all of this configuration is necessary for our immediate needs, but may come in handy later. The pomDependencies rule is useful when using mixed-type dependencies in more-complicated projects, while the extraRequirements block forces the inclusion of the plugin fragment that contains Notes.jar. The optionalDependencies rule comes in handy from time to time with XPages projects: there are sometimes cases where there's a dependency that Eclipse has and which the server will have, but which will be awkward to get to in Maven, usually relating to dependencies-of-dependencies not related to compilation. The filters block is... I don't know; just keep it in there. The environments block is a way to describe the platforms on which your code can execute - I believe this is primarily used when testing. The resolver is like filters in that it's a "I just copy it around" thing; presumably, it refers to a specific code path for resolving plugin dependencies.

Whew!

Okay, so... that's the first one down! For the most part, you can carry around the whole bottom section pretty much as-is for your XPages Maven projects (as I do), and then gradually become comfortable with the specifics over time. Now, there's some good news and some bad news:

  • The bad news is that there are three more POM files to write.
  • The good news is that they are much, much simpler.

In recent versions of Tycho, they've added the ability to reduce the number of POMs involved, but there are limits to that, particularly to do with Jenkins, so we'll stick to the traditional way for now.

com.example.xsp.plugin

Go into the "com.example.xsp.plugin" folder and create a new pom.xml containing this:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>com.example</groupId>
		<artifactId>parent-xsp</artifactId>
		<version>1.0.0-SNAPSHOT</version>
	</parent>
	<artifactId>com.example.xsp.plugin</artifactId>
	<packaging>eclipse-plugin</packaging>
</project>

Like I promised: much simpler. Because the parent POM already brought in all the Tycho plugins and configuration, all we need to do here is the basics. One slightly-unusual aspect here is the packaging type. eclipse-plugin isn't a packaging type known to Maven inherently; instead, it's provided by Tycho, but can be used in the same way.

The artifact ID here is a concession to Tycho: Maven artifact IDs don't usually follow the same full-reverse-DNS conversion as OSGi, but Tycho wants the artifact ID to match the OSGi bundle name.

com.example.xsp.feature

Next up is the pom.xml file in the "com.example.xsp.feature" folder:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>com.example</groupId>
		<artifactId>parent-xsp</artifactId>
		<version>1.0.0-SNAPSHOT</version>
	</parent>
	<artifactId>com.example.xsp.feature</artifactId>
	<packaging>eclipse-feature</packaging>
</project>

This is very similar to the last, with the only real differences being the artifact ID and the packaging type.

com.example.xsp.update

Now, the pom.xml in "com.example.xsp.update":

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>com.example</groupId>
		<artifactId>parent-xsp</artifactId>
		<version>1.0.0-SNAPSHOT</version>
	</parent>
	<artifactId>com.example.xsp.update</artifactId>
	<packaging>eclipse-update-site</packaging>

	<build>
		<plugins>
			<plugin>
				<groupId>org.eclipse.tycho</groupId>
				<artifactId>tycho-packaging-plugin</artifactId>
				<version>${tycho-version}</version>
				<configuration>
					<archiveSite>true</archiveSite>
				</configuration>
			</plugin>
		</plugins>
	</build>
</project>

This one's slightly longer, but not by too much. Beyond the different artifact ID and packaging, we also provide some additional configuration to the tycho-packaging-plugin. This is among the plugins that were established in the root POM, but it's re-defined here in order to enable the archiveSite configuration option. This will give us a nice ZIP file of the Update Site at the end.

There's one other thing to note here: Tycho considers the eclipse-update-site packaging type deprecated, and it may be removed in the future. In most examples you'll see outside of Domino, people use eclipse-repository instead. This gets back to the difference between the old-style Eclipse Update Sites (defined by "site.xml") and the new-style p2 repositories (defined by "category.xml"). For now, we use the old-style variant because it works better with Notes and Domino.

In addition to this POM file, we also have two changes to make in the site.xml: remove the source-feature reference (we'll add this back elsewhere later) and clean up the versions:

<?xml version="1.0" encoding="UTF-8"?>
<site>
   <feature url="features/com.example.xsp.feature_1.0.0.qualifier.jar" id="com.example.xsp.feature" version="1.0.0.qualifier">
      <category name="Example"/>
   </feature>
   <category-def name="Example" label="Example"/>
</site>

The version change is to replace the Eclipse-generated timestamps at the end of the versions with the generic token "qualifier". The reason for this is that, for Tycho, the site.xml acts as a pure configuration file and will no longer be the site index itself. Tycho wants the qualifier to be generic and fills it in during compilation - so "1.0.0.qualifier" here might become something like "1.0.0.201602271200" in the built site. Like the source feature, this will be covered more later.

Last Steps

With our POM files defined, the last step for now is to import the projects back into Eclipse. In Eclipse, go to File → Import, expand the "Maven" category, and choose "Existing Maven Projects".

On the next screen, browse to the "com.example.xsp" directory created earlier. If all goes well, this should find the four projects in their hierarchy.

Everything on this screen can be left at the defaults, though you may want to specify a more-descriptive working set name - that doesn't affect the project behavior.

When you click "Finish", Eclipse with churn for a bit and then, if you're running Mars and haven't done this before, it will present a dialog about "Maven plugin connectors":

The specifics of what is going on here are a large topic of their own, but the short of it is that Eclipse needs specialized plugins to deal with each Maven plugin, and in this case it's looked for (and found) connectors for Tycho. Click "Finish", "Next", and "OK", accept the license terms, and restart Eclipse as it tells you to.

When Eclipse restarts, it should go through some churning while it updates the Maven projects and should finally settle on no remaining errors.

Closing Out

There will be some things to discuss with the fallout from this conversion, but this will do it for today. Commit your changes, stand up and stretch, and grab a cup of relaxing tea.

That Java Thing, Part 14: Maven Environment Setup

  • Feb 21, 2016

Before diving into the task of converting our plugin projects to Maven, there's a bit of setup we need to do. In a basic case, Maven doesn't require much setup beyond the project file itself - it's a "convention over configuration" type of thing that tries to make doing things the default way smooth. However, since it's also a "Java" thing, that means that anything out of the ordinary requires a bunch of XML.

Our big "out of the ordinary" aspect is OSGi. Maven and OSGi are often at loggerheads, but the conflict won't be too great in our situation. Still, it does mean there will be a few hoops to jump through, and one of those hoops is dealing with our dependency on the XPages runtime plugins. Since these plugins are not packaged as fully-Mavenized artifacts (yet (hopefully)), we'll need to configure Tycho to read the p2 (Eclipse) style.

In part 3, we downloaded the Build Management Update Site from OpenNTF, and we'll reuse that here. What we need to do is create a global Maven settings file that, for now, will just contain a definition of a variable to point to this update site. It would also be possible to specify this inside the project itself, but it's better form to use a consistent variable name (the most common convention is notes-platform) in the project files and then have your local settings point to it on your machine.
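
As a preview of how that variable gets used, the repository definition in the project's parent POM will look something like this sketch (the id value is arbitrary):

<repositories>
	<repository>
		<id>notes</id>
		<!-- The "p2" layout is provided by Tycho and points to an Eclipse-style update site -->
		<layout>p2</layout>
		<url>${notes-platform}</url>
	</repository>
</repositories>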

The global Maven settings file is called settings.xml and is stored in a folder named .m2 in your user's home directory (e.g. C:\Users\someuser\.m2 or /Users/someuser/.m2). Creating a folder named with a leading dot can be a pain in Explorer and the Finder, so it may be necessary to drop into a command line or other tool to do it (on OS X or Linux, for example, mkdir ~/.m2 will do the trick). One way or another, create this file and set its contents similar to this:

<?xml version="1.0"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
	<profiles>
		<profile>
			<id>main</id>
			<properties>
				<notes-platform>file:///C:/IBM/UpdateSite</notes-platform>
			</properties>
		</profile>
	</profiles>
	<activeProfiles>
		<activeProfile>main</activeProfile>
	</activeProfiles>
</settings>

Adjust the file:// URL as necessary to point to the location on your computer (for example, file:///Users/someuser/UpdateSite on a Mac). It has to be a file URL and not a normal path, presumably because repositories are usually expected to be remote HTTP sites.

This is the only configuration we need before getting to the project, but it's a good preview of the sort of "try pasting this big block of XML somewhere" advice you're in for when it comes to Maven use. Over time, the structure of the XML and how it relates to Maven's behavior begins to crystallize, but it's definitely cumbersome to start with, and it will get more opaque before it gets less so.

Depending on your proclivities, this may be a good opportunity to install standalone Maven as well. Eclipse has its own embedded version, so this is not required, but it can be handy sometimes to be able to run Maven from the command line. On your average Linux distribution or OS X with Homebrew, Maven should be installable with a package manager. Otherwise, Maven can be downloaded from maven.apache.org - it doesn't have an installer as such, as it's essentially some scripts around Java classes, but they have tips for adding it to your path. Once it's set up, running mvn -v in a terminal should print out the version information.

Next, we'll get to the real meat of this process: actually converting the projects to Maven.

That Java Thing, Part 13: Introduction to Maven

  • Feb 19, 2016

I've been warning that this was coming and you've seen me grouse about it for over a year, but now the time has come to really dive into Maven for Domino developers.

To lead into it, there are two main topics to cover: what Maven is and why you should bother.

What Maven Is

Maven is a build automation tool, primarily for Java applications but able to work with a number of other languages and environments.

The concept of a "build automation tool" is a strange one when you're coming from a Notes/Domino perspective, and it's the source of a lot of consternation when moving to it. In classic Notes, there conceptually was no build phase for an application: certain things would be compiled on save, but there was rarely any need to think about this. Designer was the way to write applications and it took care of it. With XPages came a bit of Eclipse-ism with the notion of "Build" being a separate, not-necessarily-automatic stage, but there still wasn't much user-facing configuration going on: other than maybe adding a source folder, the IDE just kind of took care of it.

Even for OSGi plugin developers, the need seems a little arcane. Eclipse does have project and build configurations, and it runs through build scripts internally when you export Jars or build an update site. Again, though, this is all largely hidden and the user doesn't normally have to think much about it.

Where this comes in, though, is when you want to start expanding your projects in ways beyond the "single bag of code" stage: automatically including pre-packaged dependencies, making the project cleanly available to others downstream, sharing configuration across projects, and, particularly, automating building, testing, and deployment with an environment like Jenkins. Maven (and alternatives like Gradle) provide important structure and meta-information to do these things and scale them to ever-larger tasks.

Why You Should Bother

I've found the pro-Maven pitch to be kind of a weird one, since it's sort of like unit/integration testing in that, before you do it, it doesn't seem worth the hassle, but then, when you've switched over, it seems crazy to not do it. Like a cult, I guess, but a somewhat better idea.

I'll start with an important reason: it's good for your career. Unless you want to remain on legacy-maintenance duty forever (which, granted, can be a stable gig), it's important to keep improving your skills, and build automation is a big concept that pays dividends in knowledge in Domino programming and beyond. Once you're familiar with a system like Maven, you start recognizing the same patterns elsewhere: in some of OSGi's capabilities, in older systems like Make, and, crucially, in whatever modern JavaScript toolchains are doing lately. Learning something like this opens up doors.

It also makes testing that much more natural. Building and running automated tests is certainly possible in Eclipse, but Maven's structure strongly encourages it (newly-created projects start with JUnit and a tests directory, to nudge you in that direction), and having test running be a phase in building makes it much more foolproof. The virtue of writing tests creeps up on you: even if you don't go whole-hog TDD, getting into the habit of starting each bug fix with a failing-then-successful test case means you now have a bug you'll never have to see again. Granted, as with a lot of other aspects, the nature of Domino development creates some hurdles, but it's still worth it.

And finally: features, features, features. Once you get comfortable with Maven, it becomes much easier to spin up accompanying source and Javadoc packages for your projects, add in other languages, filter content during builds, create alternate build profiles for different situations, bring in remote resources easily, deploy to targets automatically, manage versioning cleanly, and so forth. These are all things that are possible in the absence of a system like Maven, but Maven brings them all together in a way that is understandable both by your IDE of choice and by faceless servers.

What's Next

The next step will move from high-flown concepts to some brass tacks: preparing a Maven environment for Domino development and getting a look at what Maven's configuration files look like.

Connect 2016 and Darwino 1.0

  • Feb 8, 2016

Last week was Connect 2016 and, while I don't have a full review of it, I felt that it was a pretty successful conference. The new venue was much less weird and more purpose-fitting than expected. Moreover, while the conference content wasn't bursting with announcements and in-depth technical dives the way something like WWDC is, it did feel a bit more grounded and less marketing-hollow than the last two. So I'll call it a win.

On the OpenNTF front, the conference saw a bit more in the slow rollout of our improved processes and server infrastructure. Prominic has been graciously providing us with servers to run the Atlassian stack of development apps, and we're gradually putting these to use in building and tracking projects run through OpenNTF. Christian Güdemann talked about his company's move to more-structured development using tools like this, as well as an overview of where OpenNTF in particular is heading. Before too long, the OpenNTF Domino API will fall in line with this, switching to a Gitflow-style branch structure and a clean workflow between Stash, Bamboo, and Jira.

The conference also saw the 1.0 release of Darwino. If you're not familiar with it, Darwino is a platform for Java-based development that provides a document database (with improved takes on classic Domino features like hierarchies and reader/author security) and services that will run the same business logic on Java app servers, iOS, and Android. Of particular interest for Connect was the Domino connector, which does two-way replication with Domino databases. The free-for-non-commercial-use Community Edition is a good place to dive in, and we'll be putting out tutorial videos and posts in the weeks to come.

All in all, last week set the stage for the year to come, and I think it should be a very interesting year indeed.

Connect 2016 Lead-Up

  • Jan 26, 2016

Phew, well, my plugin series continues its hiatus due to how thoroughly swamped I've been with work the last couple months. It will return in time, ready to dive into the fruitful and terrifying topic of Maven-ization. In the meantime, we're very close indeed now to this year's Connect, and I'm looking forward to it.

There are a number of sessions that I'm rather looking forward to, but I'd like to mention two due to my indirect and direct association with them, respectively.

First, on Monday at 11:30 is OpenNTF – From Donation to Contribution, presented by Christian Güdemann. Though OpenNTF has been comparatively quiet externally this past year, we've been building up a suite of tools and integrations (such as the Slack channel, which you can join with the link on the right of the OpenNTF page). This will relate to the Maven posts I'll be writing: Domino-related development is necessarily getting more structured, and that leads to new techniques and tools to help manage the additional complexity and reap the benefits of it.

Secondly, on Tuesday at 4:30 is Don't give up on Domino! Introducing Darwino: A New Lifeline for Domino Developers and Customers, presented by Philippe Riand and myself. We'll be talking about Darwino's interaction with Domino, particularly how you can use it to create web and mobile apps that bi-directionally replicate with Domino and work offline.

Beyond that session, you should track us down to talk about Darwino. Though I am, granted, biased on the matter, I think there's quite a bit there to interest Domino developers, beyond just the prospect of data replication.

That Java Thing, Part 12: Expanding the Plugin - JAX-RS

  • Dec 3, 2015

A couple of months back, Toby Samples wrote a series on using Wink in Domino. I'm not going to cover all of that ground here, but it's still worth dipping into the topic a bit, as writing REST services in an OSGi plugin is a great way not only to add capabilities to your XPages apps, but also to start making your data (and programming skills) more accessible from other platforms.

There are two main terms to know here: "JAX-RS" and "Wink".

JAX-RS stands for "Java API for RESTful Web Services". If you're observant, you may notice that there's no "X" in that phrase. This is because the "JAX" part is sort of a legacy hanger-on from the common "JAX" prefix used in JAX-WS and the other Java XML APIs... even though JAX-RS doesn't necessarily involve any XML. In any event, JAX-RS is a specification for a way to write RESTful web services in Java with (mostly) inline configuration using annotations.

Apache Wink is a specific implementation of this standard. For the most part, with these Java standards, the actual implementation shouldn't matter too much, but there are always edge cases. In any event, Wink has historically been the preferred method of doing this on Domino by virtue of the fact that the ExtLib's DAS is built on it and so it ships with Domino, along with some IBM support code.

Setting up JAX-RS services is pretty similar to the addition of the XSP Library and theme provider: we'll need to create the Java classes and add some configuration to the plugin.xml. In this case, we'll do it in reverse, starting with the configuration first before diving into the servlet class.

There are actually two ways to proceed here. In the past, I've added servlets directly to the plugin.xml file. This works fine, but Toby's series took a different and more-interesting route: instead of having the plugin provide a servlet alone, it provides a web application which in turn contains a servlet. Doing it this way brings the configuration more in line with what you'd see in a non-OSGi Java web application.

To start with, we'll need to add dependencies on four more plugins. Open the META-INF/MANIFEST.MF file, go to the "Dependencies" tab, and add "org.apache.wink", "com.ibm.domino.osgi.core", "org.eclipse.equinox.http.registry", and "com.ibm.wink".

One word of caution: when adding "org.eclipse.equinox.http.registry", make sure to add version "1.0.100". Eclipse will likely have a newer version available as well, but that won't be there on Domino. Setting the version requirement too high will cause the plugin to fail to load.
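
If you prefer to edit the manifest source directly, the Require-Bundle header on the MANIFEST.MF tab should end up looking roughly like this (any entries your plugin already has stay as they are; the bundle-version attribute is the important part):

Require-Bundle: org.apache.wink,
 com.ibm.domino.osgi.core,
 org.eclipse.equinox.http.registry;bundle-version="1.0.100",
 com.ibm.wink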

The plugin.xml addition is pretty similar to the services we added before, but with some different tags. We're going to add another extension point to the list, along with a couple lines of configuration. With a minor cleanup to the previous lines, the result should look like this:

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.4"?>
<plugin>
	<extension point="com.ibm.commons.Extension">
		<service class="com.example.xsp.ExampleLibrary" type="com.ibm.xsp.Library"/>
	</extension>
	<extension point="com.ibm.commons.Extension">
		<service class="com.example.xsp.theme.ExampleStyleKitFactory" type="com.ibm.xsp.stylekit.StyleKitFactory"/>
	</extension>
	<extension point="org.eclipse.equinox.http.registry.servlets">
		<servlet alias="/examplerest" class="com.example.xsp.servlet.ExampleServlet">
			<init-param name="applicationConfigLocation" value="/WEB-INF/exampleapplication"/>
			<init-param name="propertiesLocation" value="/WEB-INF/exampleservlet.properties"/>
			<init-param name="DisableHttpMethodCheck" value="true"/>
		</servlet>
	</extension>
</plugin>

The code inside of the extension point is sort of a compressed version of the kind of servlet configuration you see in JEE's web.xml file.
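
For comparison, here's a sketch of the rough equivalent in a traditional web.xml - purely for illustration; we don't need this file here:

<servlet>
	<servlet-name>ExampleServlet</servlet-name>
	<servlet-class>com.example.xsp.servlet.ExampleServlet</servlet-class>
	<init-param>
		<param-name>applicationConfigLocation</param-name>
		<param-value>/WEB-INF/exampleapplication</param-value>
	</init-param>
</servlet>
<servlet-mapping>
	<servlet-name>ExampleServlet</servlet-name>
	<url-pattern>/examplerest/*</url-pattern>
</servlet-mapping>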

This is actually an area where Toby's series and these instructions diverge. In his series, he used a different extension point - com.ibm.pvc.webcontainer.application - to provide his servlet as part of a full web application. That is a very interesting route as well, and opens the door to broader Java application development. Either one will get the job done, but the "just the servlet" route saves a bit of cognitive overhead for now.
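
For the curious, that alternative looks roughly like this in plugin.xml - a sketch based on the web container extension point, where the context root and content location values are assumptions:

<extension point="com.ibm.pvc.webcontainer.application">
	<contextRoot>/exampleapp</contextRoot>
	<contentLocation>WebContent</contentLocation>
</extension>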

Now, to create the files mentioned in the configuration. In the src/main/resources folder, create a folder named WEB-INF and add the exampleapplication and exampleservlet.properties files, both plain text.

The exampleservlet.properties file can be left blank for now. It's there to provide Wink-specific initialization properties that are useful as you get further into it, but are not needed now.

The exampleapplication file lists the classes that will provide the individual REST resources, one per line. For now, we'll just have one:

com.example.xsp.servlet.HelloWorldResource

With the configuration set up, it's time to create two Java classes. First, create a new class named "ExampleServlet" in the "com.example.xsp.servlet" package and have it extend "com.ibm.domino.services.AbstractRestServlet":
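
package com.example.xsp.servlet;

import com.ibm.domino.services.AbstractRestServlet;

public class ExampleServlet extends AbstractRestServlet {
	// Nothing needed in the body: the class exists to anchor the servlet registration
}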

This class doesn't actually need to do anything other than exist for our purposes, but it can be a useful point for hooking in behavior that occurs for each servlet request. The actual code will go in the "HelloWorldResource" class mentioned in the application file. So create that class in the "com.example.xsp.servlet" package. Unlike the previous class, this one doesn't need to extend anything.

This one contains the core of our business logic (such as it is):

package com.example.xsp.servlet;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

import lotus.domino.NotesException;
import lotus.domino.Session;
import lotus.domino.Database;

import com.ibm.commons.util.StringUtil;
import com.ibm.commons.util.io.json.JsonJavaObject;
import com.ibm.domino.osgi.core.context.ContextInfo;

@Path("/helloworld/{id}")
public class HelloWorldResource {
	
	@GET
	@Produces(MediaType.APPLICATION_JSON)
	public Response get(
			@Context UriInfo context, @PathParam("id") String id, @QueryParam("echo") String echo
			) throws NotesException {
		JsonJavaObject json = new JsonJavaObject();

		json.put("payload", "Hello world");
		Session session = ContextInfo.getUserSession();
		if (session != null) {
			json.put("userName", session.getEffectiveUserName());
		}
		Database database = ContextInfo.getUserDatabase();
		if (database != null) {
			json.put("databaseFilePath", database.getFilePath());
		}
		if (StringUtil.isNotEmpty(echo)) {
			json.put("echo", echo);
		}
		if (StringUtil.isNotEmpty(id)) {
			json.put("id", id);
		}

		return Response.ok(json.toString()).build();
	}
}

There are quite a few things going on here! Let's start with the JAX-RS annotations. JAX-RS, like other "new-era" Java APIs, uses annotations heavily in order to put configuration inline and cut down on the number and size of the giant blobs of XML required previously. They look a bit unusual if you haven't encountered Java annotations before, but their intent here is hopefully clear.

At the top, the @Path("/helloworld/{id}") annotation indicates that this class is hooked into "/helloworld" within our servlet, and moreover also expects a path parameter that it will refer to as "id". Then, the @GET annotation indicates that our get method will respond to that HTTP method (the name of the method is arbitrary - it could be "foobar" and still work). The @Produces(MediaType.APPLICATION_JSON) annotation further indicates that the method will supply JSON. There are other ways to accomplish this if the method may return different content types, but we know it's always JSON here.
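
If the content type did vary, one alternative - a sketch, not something we need here - would be to skip @Produces and set the type on the Response directly:

return Response.ok(json.toString(), MediaType.APPLICATION_JSON_TYPE).build();

That variant uses the same javax.ws.rs.core classes already imported in the class above.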

Inside the method signature, there's a bit more magic going on. We're never going to call this method in any code ourselves, but instead Wink is going to inspect it and its parameter list in order to provide what it requests. By annotating the first parameter with @Context, we're telling Wink that we want to have that populated with the context information for the servlet - this is similar to ExternalContext from XSP development. This object isn't actually used in the example (and could be removed), but I've found that it's handy to have it consistently there. The other two parameters use @PathParam("id") and @QueryParam("echo") to extract the "id" parameter we specified earlier as well as look for "echo=..." in the query string.

The method itself uses IBM's JSON library to produce a simple JSON object containing the information we passed in, as well as some Domino-specific stuff, using the class ContextInfo. This class is sort of like the equivalent of ExtLibUtil for Equinox servlets, at least when it comes to getting access to the current session and database. ExtLibUtil itself, though available, doesn't function properly in a servlet, and so we use this instead. The current session is straightforward enough and works like you'd expect, but the current database is interesting. When you register a servlet via this method, it becomes available not only via "yourserver.com/examplerest", but also within each NSF, like "yourserver.com/foo.nsf/examplerest". By default, that will just provide the same results, but, if your code uses ContextInfo.getUserDatabase() as above, you can get the database the URL is requesting. This is how DAS does its thing, and it allows you to write a servlet that can operate in a contextually-appropriate way in each database.

So phew! Now that this is all written, you can build your update site and deploy it to the server. After restarting HTTP, you should be able to go to a URL like "http://yourserver.com/foo.nsf/examplerest/helloworld/someid?echo=hey", and see a result like this:

{
	"echo": "Hey",
	"userName": "CN=Jesse Gallagher/O=IKSG",
	"id": "someid",
	"databaseFilePath": "foo.nsf",
	"payload": "Hello world"
}

To finish, it's time to commit the changes.

Next up, it will be time to start taking the plunge into Maven. Don't worry; it'll be fun!