This topic is fairly well-trodden ground, but there's no harm in trodding it some more: methods of producing JSON in the XPages environment. Specifically, this will be primarily about the IBM Commons JSON classes, found in
com.ibm.commons.util.io.json. The reason for that choice is just that they ship with Domino - other tools (like Gson) are great too, and in some ways better.
Before I go further, I'd like to reiterate a point I made before:
Never, ever, ever generate code without proper escaping.
This goes for any executable, markup, or data language. It's tempting to generate XML or JSON in Domino views, but formula language lacks proper escape functions, so unless you are prepared to study the specs for all edge cases (or escape every character), don't do it.
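To make the point concrete, here is a minimal sketch (in plain Java, with a hypothetical helper name) of what correctly escaping a single JSON string literal involves; the Commons classes and Gson do all of this for you, which is exactly why you should let a library handle it:

```java
public class JsonEscapeDemo {
	// Minimal sketch of JSON string-literal escaping. "escapeJson" is a
	// hypothetical name for illustration; real libraries (Commons JSON,
	// Gson) perform this escaping internally.
	static String escapeJson(String s) {
		StringBuilder b = new StringBuilder(s.length());
		for (int i = 0; i < s.length(); i++) {
			char c = s.charAt(i);
			switch (c) {
				case '"':  b.append("\\\""); break;
				case '\\': b.append("\\\\"); break;
				case '\n': b.append("\\n");  break;
				case '\r': b.append("\\r");  break;
				case '\t': b.append("\\t");  break;
				default:
					if (c < 0x20) {
						// All remaining control characters must be \u-escaped
						b.append(String.format("\\u%04x", (int) c));
					} else {
						b.append(c);
					}
			}
		}
		return b.toString();
	}

	public static void main(String[] args) {
		String userValue = "She said \"hi\"\nand left";
		// Naive concatenation of userValue here would emit invalid JSON
		String json = "{\"comment\":\"" + escapeJson(userValue) + "\"}";
		System.out.println(json);
	}
}
```

Note that even this sketch ignores some subtleties (such as the recommendation to escape U+2028/U+2029 for JavaScript consumers), which is the point: the edge cases are easy to miss by hand.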
So anyway, back to the JSON generation. To my knowledge, there are three main ways to generate JSON via the Commons libraries: a single call to process an existing Java object, by building a
JsonJavaObject directly, and by "streaming" the code bit by bit. I've found the first and last methods to be useful, but there's nothing inherently wrong with the middle one.
Processing an existing object
With this route, you use a class called
JsonGenerator to process an existing JSON-compatible object (usually a
Map) into a String after building the object via some other mechanism. In a simple example, it looks like this:
```java
Map<String, Object> foo = new HashMap<String, Object>();
foo.put("bar", "baz");
foo.put("ness", 1);
return JsonGenerator.toJson(JsonJavaFactory.instance, foo);
```
Overall, it's fairly straightforward: create your
Map the normal way (or get it from some library), and then pass it to the
JsonGenerator. Because it's Java, it forces you to also pass in the type of generator you want to use, and that's
JsonJavaFactory's role. There are several
instance objects that seem to vary primarily in how much they rely on the other Commons JSON classes in their implementation, and I have no idea what the performance or other characteristics are like.
instance is fine.
Building a JsonJavaObject directly
An alternate route is to use the
JsonJavaObject directly and then
toString it at the end. This is very similar in structure to the previous example (because
JsonJavaObject inherits from
HashMap<String, Object> directly, for some reason):
```java
JsonJavaObject foo = new JsonJavaObject();
foo.put("bar", "baz");
foo.put("ness", 1);
return foo.toString();
```
The result is effectively the same. So why do I avoid this method? Mostly for flexibility reasons. Conceptually, I don't like assuming that the data is going to end up as a JSON string right from the start unless there's a compelling reason to do so, and so it's strange to use
JsonJavaObject right out of the gate. If your data starts coming from another source that returns a
Map, you'll need to adjust your code to account for it, whereas the first approach handles any Map you hand it unchanged. Still, it's no big problem if you use this. Presumably, it will be slightly faster than the first method, and it's often functionally identical anyway (considering it is a Map itself).
Streaming bit by bit
This one is the weird one, but it will be familiar if you've ever written a renderer (stay tuned to NotesIn9 for an example). Rather than constructing the entire object, you push out bits of the JSON code as you go. There are a few reasons you might do this: when you want to keep memory/processor use low when dealing with large objects, when you want to make your algorithm recursive without, again, using up too much memory, or when you're actually streaming the result to the client. It's the ugliest method, but it's potentially the fastest and most efficient. This is what you want to use if you're writing out a very large collection (say, filtered view data) in an XAgent or other servlet.
I'll leave out an example of using it in a streaming context for now, but if you're curious you can find examples in the DAS code in the Extension Library or in the REST API code for the frostillic.us model objects (which is derived wholesale from DAS).
The key object here is
JsonWriter, which is found in the
com.ibm.commons.util.io.json.util sub-package. Much like with other
Writers in Java, you hook this up to a further destination - in this example, I'll use a
StringWriter, which is a basic way to write into a String in memory and return that. In other situations, this would likely be the
ServletOutputStream. Here's an example of it in action:
```java
StringWriter out = new StringWriter();
JsonWriter writer = new JsonWriter(out, false);
writer.startObject();

writer.startProperty("bar");
writer.outStringLiteral("baz");
writer.endProperty();

writer.startProperty("ness");
writer.outIntLiteral(1);
writer.endProperty();

Map<String, Object> foo = new HashMap<String, Object>();
foo.put("bar", "baz");
foo.put("ness", 1);
writer.startProperty("foo");
writer.outObject(foo);
writer.endProperty();

writer.endObject();
writer.flush();
return out.toString();
```
As you can tell, the LOC count ballooned fast. But you can also tell that it makes a kind of sense: you're doing the same thing, but "manually" starting and ending each element (there are also constructs for arrays, booleans, etc.). This is very similar to writing out HTML/XML using equivalent libraries. And it's good to know that there's always a fallback to output a full object in the style of the first example when it's appropriate. For example, you might be outputting data from a large view - many entries, but each entry is fairly simple. In that case, you'd use this writer to handle the array structure, but build a
Map for each entry and add that to keep the code simpler and more obvious.
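Since the Commons classes need the Domino jars on the classpath, here is a plain-Java stand-in that shows only the shape of that hybrid pattern: the outer level streams the array structure one entry at a time, while each entry is built as an ordinary Map. The class and method names (HybridStreamDemo, entryToJson, writeEntries) are hypothetical, and string escaping is omitted for brevity; with JsonWriter you would hand each Map to outObject() inside the writer's array constructs and get escaping for free.

```java
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Plain-Java sketch of the hybrid pattern: the "writer" level handles only
// the array structure, while each entry is built as a normal Map.
public class HybridStreamDemo {
	// Serialize one flat entry. Restricted to numbers and escape-free
	// strings for brevity - real code must escape string values.
	static String entryToJson(Map<String, Object> entry) {
		StringBuilder b = new StringBuilder("{");
		boolean first = true;
		for (Map.Entry<String, Object> e : entry.entrySet()) {
			if (!first) { b.append(','); }
			first = false;
			b.append('"').append(e.getKey()).append("\":");
			Object v = e.getValue();
			if (v instanceof Number) {
				b.append(v);
			} else {
				b.append('"').append(v).append('"');
			}
		}
		return b.append('}').toString();
	}

	// Stream the array one entry at a time instead of building it whole,
	// so memory use stays flat no matter how many entries there are
	static void writeEntries(Writer out, Iterable<Map<String, Object>> entries) throws IOException {
		out.write('[');
		boolean first = true;
		for (Map<String, Object> entry : entries) {
			if (!first) { out.write(','); }
			first = false;
			out.write(entryToJson(entry));
		}
		out.write(']');
	}

	public static void main(String[] args) throws IOException {
		Map<String, Object> entry = new LinkedHashMap<String, Object>();
		entry.put("bar", "baz");
		entry.put("ness", 1);
		// In an XAgent this would be the response's writer rather than a StringWriter
		StringWriter out = new StringWriter();
		writeEntries(out, List.of(entry));
		System.out.println(out); // prints [{"bar":"baz","ness":1}]
	}
}
```

The design point carries over directly: the per-entry code stays as readable as the first method, while the surrounding loop keeps only one entry in memory at a time.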
So none of these are right in all cases (and sometimes you'll just do
toJson(...) in SSJS), but they're all good to know. Most of the time, the choice will be between the first and the last: the easy-to-use, just-do-what-I-mean one and the cumbersome, really-crank-out-performance one.