Javachannel's Interesting Links podcast, episode 9

Welcome to the ninth ##java podcast. If we were using octal, we’d be on the eleventh podcast by now. It’s hosted by Joseph Ottinger (dreamreal) and Andrew Lombardi (kinabalu on IRC) from Mystic Coders, with Tracy Snell (waz) from the IRC channel joining as a guest, and it’s Friday, 2017 December 29.
As always, this podcast is basically interesting content pulled from various sources, and funneled through the ##java IRC channel on freenode. You can find the show notes at the channel’s website, at javachannel.org; you can find all of the podcasts using the tag (or even “category”) “podcast”, and each podcast is tagged with its own identifier, too, so you can find this one by searching for the tag “podcast-9”. The podcast can also be found on iTunes.
You can submit to the podcast, too, by using the channel bot: ~submit, followed by an interesting URL, along with maybe a comment that tells us what you think is interesting about it. Yes, you have to have a URL; this isn’t necessarily entirely sourced material, but we definitely prefer sourced material over random thoughts.

  1. Up first we have Eugen Paraschiv, from Baeldung, with “How Memory Leaks Happen in a Java Application“. Most Java coders don’t really think about memory leaks, because the JVM is so good at, well, not leaking memory – but it’s still doable, especially when you have long-lasting data structures sticking around, like caches, that can retain references to object trees long past their usefulness. Eugen points out the obvious suspects – the use of static, for example, which retains references for as long as the class itself is loaded – and also points out String.intern() as a culprit for some. That’s more of a problem on older JVMs, since Java 7 moved the interned-string pool out of PermGen and into the heap (and Java 8 removed PermGen entirely). He points out unclosed resources as an obvious problem – unreleased filehandles, et cetera – and points out badly defined classes being used in collections (as Object’s implementations of equals() and hashCode() make retrieval difficult, so those objects will last as long as the Set does). He follows it all up with some ways to detect memory leaks. Well done article, although some of the approaches that lead to memory leaks have a lot less impact than he implies, especially with current releases of the JVM. (This does not mean that it’s okay to be ignorant.) A minimal sketch of two of these leak patterns follows this list.
  2. “Some notes on null-intolerant Java,” from Dominic Fox, says that “Null-intolerant Java is Java in which there are no internal null checks”. There are a few variants of this: there are libraries that guarantee no methods will return null, others that use Optional, or that just throw exceptions if you use a null when you’re not supposed to, or that defensively check for null on library entry so that you fail immediately if you try to pass in a null where you shouldn’t. He also mentions Kotlin’s null-checking – using the word “nugatory” for the first time in my experience. (It means “of no value,” just in case you’re wondering or having difficulty inferring it from context.) His criticism of Kotlin here is that the null-checks are there whether you want them or not, and there’s a performance cost to them… but I’m not sure how well or how deeply he’s measured that. It’s actually a pretty decently written article; he mentions Optional – as he should, really – but also describes it sanely, meaning that he doesn’t say something stupid like “use Optional everywhere for nullable values.” A sketch of the defensive-check and Optional styles also appears after this list.

  3. The Java subreddit r/java has a comment thread called “What are features that make Java stand apart from other languages?” Lots of interesting comments; the JVM features highly, the language’s simplicity features, the ecosystem, even the JCP gets a positive mention or two. It’s an interesting DISCUSSION. What do you think makes Java stand out? One of the things I’ve liked in Java that’s changing, though, is the preference for a single idiom; when you look at Scala, you can achieve a given task in eight wildly incompatible ways, but Java’s historically had ONE way to do most things, and that’s changing. I’m not suggesting that that’s a BAD thing, but it’s something that’s changing. Essential conservatism in motion, folks!

  4. RxTest is a Kotlin library meant to help test RxJava flows. (Is that what RxJava calls their pipeline processing mechanisms? I don’t know.) It’s easy to test the simpler parts of event handling mechanisms – you pass in some input, see if you get the right output – but testing flows tends to be a little more involved, because most of it’s asynchronous and it’s hard to test entire streams of processes. Good idea, expressed cleanly, although if you don’t know Kotlin it might be a good time to learn some of it. (This does not mean there aren’t ways to test RxJava event handling in pure Java – see the sketch after this list – it’s just neat to see DSLs for stuff like this.) Also mentioned in this section: Awaitility, a library whose name Joe apparently cannot pronounce.

  5. Along the same topic: DZone also has an article on designing a DSL in Kotlin, entitled “Kotlin DSL: From Theory to Practice.” Oddly enough, it also creates a testing DSL, for a learning environment. You’d think that people would use other problem domains for DSLs more often – but it seems like most examples are build tools or testing libraries. It’s still interesting stuff. Have you ever used or implemented a DSL? How broadly used was it, how successful do you think it was, and why?

  6. Dan Luu wrote up an article called “Computer latency: 1977-2017” (therefore: “-40,” duh), using a keypress and the time it takes to display that key as a measure of latency. He goes into OS mechanics (simpler OSes display faster, regardless of processor speed) and display refresh rates… really fascinating stuff, even if it’s not Java-related at all. (I kinda wish he’d measured a JVM on some of these platforms, although that would expand his testing matrix considerably.) (Also mentioned in this section: Rastan, an old video game. And it’s “Rastan,” not “Rathstan,” which is what Joe kept misremembering it as. Memory is what it is, apparently.)

  7. DZone is back with “Alan Kay Was Wrong (About Him Being Wrong),” an article talking about one of the primary ways to describe how objects interact: “messaging.” (Alan Kay – a person, with a name – is considered one of the fathers of object-oriented programming, BTW.) It used to be that you’d describe an object as “communicating” with another object by way of method calls, so you’d say something like “the model is ‘sent to’ the renderer, with the print method.” Nowadays, if you’re not working with Objective-C, that sounds a little quaint, I think, but it’s still a way of thinking about data flow. The article is suggesting that Kay – who questioned the analogy – was absolutely right when you think about the objects in question being representative of modules rather than finely-grained objects. With Java 9 supporting module granularity, who knows – this analogy may be back in vogue when people really start using Java 9 modules in practice.
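To make that first item concrete, here’s a minimal sketch – hypothetical class names, not code from Eugen’s article – of two of the leak patterns it describes: a static collection that retains everything ever put into it, and a key class that inherits Object’s identity-based equals()/hashCode().

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class LeakExamples {
    // Pattern 1: a static collection lives as long as the class does,
    // so everything ever added to it stays reachable forever.
    private static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        CACHE.add(new byte[1024 * 1024]); // never evicted: a slow leak
    }

    // Pattern 2: a key class relying on Object's identity-based
    // equals()/hashCode() makes Set entries effectively unfindable.
    static class Key {
        final String id;
        Key(String id) { this.id = id; }
        // no equals()/hashCode() override!
    }

    public static void main(String[] args) {
        Set<Key> seen = new HashSet<>();
        seen.add(new Key("a"));
        // A "logically equal" key is a different object, so this prints
        // false – and seen.remove(new Key("a")) would remove nothing,
        // so entries pile up for as long as the Set lives:
        System.out.println(seen.contains(new Key("a")));
    }
}
```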
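And for the second item, a sketch of what the two null-intolerant styles might look like in plain Java 8 – the repository class here is hypothetical, but Objects.requireNonNull() and Optional are standard library.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;

public class UserRepository {
    private final Map<String, String> users = new HashMap<>();

    // Style 1: defensively reject null on entry, so a bad caller
    // fails immediately instead of somewhere deep inside the library.
    public void add(String id, String name) {
        users.put(Objects.requireNonNull(id, "id must not be null"),
                  Objects.requireNonNull(name, "name must not be null"));
    }

    // Style 2: never return null; make "absent" explicit in the type.
    public Optional<String> findName(String id) {
        return Optional.ofNullable(users.get(id));
    }
}
```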
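For the fourth item: RxTest itself is Kotlin, but RxJava 2 already ships with test support usable from pure Java. A minimal sketch, assuming RxJava 2.x on the classpath:

```java
import io.reactivex.Observable;

public class FlowTest {
    public static void main(String[] args) {
        Observable.just(1, 2, 3)
                  .map(i -> i * 10)
                  .test()                  // returns a TestObserver
                  .assertValues(10, 20, 30) // throws AssertionError on mismatch
                  .assertNoErrors()
                  .assertComplete();
        System.out.println("flow behaved as expected");
    }
}
```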

Well, it’s the end of the calendar year – may the new year bring you health, wealth, happiness, and comfort as you attempt to spread the same. We wish you all the best in everything you do.

Javachannel's Interesting Links podcast, episode 8

Welcome to the eighth ##java podcast. I’m Joseph Ottinger, dreamreal on the IRC channel, and it’s Thursday, 2017 December 20. Andrew Lombardi from Mystic Coders is with me again.
Please don’t forget: this is your podcast, with your content too. You can contribute by using a carrier pigeon and sending us notes encoded with rot13 – twice if you want to be really secure – or by using javabot on the IRC channel, with ~submit and an HTTP link, or you can also write content for the channel blog at javachannel.org, or you can even just tell us that something’s interesting… we’ll pick it up from there.

  1. “Non-Blocking vs. blocking I/O: Go with blocking.” is an article by ##java’s surial. In it, he’s talking about asynchronous code, especially with respect to I/O… and his assertion is that you really don’t want to do it. If you decide you do (and there are reasons to), then you should at least rely on some of the libraries that already exist to make it easier… but he mostly points out that it isn’t worth it for most programmers. Interesting read, especially when you consider that Python and Node.js live and die on this programming model.

  2. “To self-doubting developers: are you good enough?” is an article meant to make us mediocre programmers feel better about ourselves. It talks about the processes and exercises that we all more or less had to go through to achieve competence. It’s not a long post, but it has some good points; programming is practice and art, just like athletics, really – sometimes you lose, sometimes you plateau, sometimes you have to put in time that someone else might not have to. Sometimes the other guy is a natural at some things, and effort is required to give you the edge… but the good news is that you can put in the effort.

  3. Jason Whaley posted a link called “Incident review: API and Dashboard outage on 10 October 2017” that goes into a Postgres multinode deployment failure. It’s a payments company, so the outage is a pretty big deal for them; the short form is that they had a series of failures at the wrong time, and the Postgres installation failed. That’s something we don’t hear about very often – either because people are ashamed of it, or hiding it, or for some other, more nefarious reason. More reasons why I’m a developer and not in DevOps? There’s some pretty in-depth analysis of multi-master Postgres and the unintended consequences of the architecture. It appears that, aside from the Postgres-specific things mentioned here, it’s probably a good idea to regularly introduce faults into your infrastructure, to see where the problems are that you didn’t anticipate. And remember that automation erodes knowledge: the more the machinery does for you, the less anyone remembers how to do it by hand.

  4. “Want to Become the Best at What You Do? Read this.” goes over five steps to being all that you can be, including quoting “Eye of the Tiger” for added insult. Several of the ideas in here are valid, though, focusing on self-improvement and putting yourself out there in a vulnerable way. All of the items have a “The Secret” kind of vibe around them, though, which is a bit of a turn-off. Still: love the process, better yourself, make a positive impact on the world… sounds pretty good.

  5. A user on ##java posted a reference to zerocell – a simple open source library to read Excel spreadsheets into Java POJOs. Apache POI is the go-to for this, but POI is a little long in the tooth; it’s always nice to see people creating new solutions. (A sketch of the POI approach it competes with follows this list.) I don’t have any Excel spreadsheets handy that I need converted into POJOs – and I don’t think I’ve EVER had them… except maybe once.

  6. “Understanding and Overcoming Coder’s Block” is YET ANOTHER lifestyle article for this podcast; it addresses those times when someone who might otherwise be a good coder – or writer, or anything – encounters the inability to produce anything worthwhile. It’s focused on code, but it’s pretty general even so: reasons include a lack of clarity about what you’re trying to achieve, a lack of decisiveness about how to solve a problem, a problem that just seems too big to solve, or maybe even that you’re just not all that jazzed about the project you’re working on. It also addresses external factors – you know, real life – that might be getting in the way. Lastly, it includes some tips for each of those problems, to perhaps point the way forward.

  7. “The Myth of the Interchangeable Developer” is yet another lifestyle article, one that points out what we all know but recruiters and managers seem ignorant of: we all have specific skillsets. If I’m a good services developer, it doesn’t necessarily follow that I’m a good UI developer, for example… that doesn’t mean I can’t learn, but it certainly implies there’s an extra cost in time or aptitude for me to actually design a UI.

  8. “Understanding Monads: a guide for the perplexed” is an article trying to explain monads yet again. Maybe it’s me, but I’m thinking that monads might be one of those formal terms that’s useful but not useful enough, because they’ve been around forever and people still don’t get them. Maybe there’s a giant set of programmers who shouldn’t be allowed to program… but my feeling is that “monad” is mostly jargon. To me it’s a stateless bit of code that defers state elsewhere, so it’s “functionally pure.” Lots of languages that rely on asynchronous programming have a similar concept, but they don’t necessarily call them “monads” (and they can store state elsewhere, too, so maybe they’re cheating). It’s a decent article, but if you don’t understand monads, it may not… actually change anything for you. But maybe it will. (A tiny Java sketch of the idea, using Optional, follows this list.)

  9. Ah, Project Valhalla. DZone has an article – pretty old now, actually, a month or two – that talks about Valhalla. No Valkyries, unfortunately, but value types instead: objects that can be handled like primitives, without the identity and heap indirection of ordinary object references. This means that Java might get some form of reification… but it’s hard to say. The main thing I wanted to see from the article was clearer example code; there’s one example that boxes an integer in a generic class, without specifying the integer type, but I’m not actually seeing where the real benefit in code is yet.

  10. One of my favorite subjects is up next: “Why Senior Devs Write Dumb Code and How to Spot a Junior From A Mile Away.” Want to find a junior developer? Find someone who spends four hours tuning a bit of code that will run… once every four hours. Overexerting yourself trying to write the perfect bit of code every time… that’s a junior developer. Of course, we all know some senior developers who do the same sort of thing… and we tolerate them, but it’s just tolerance. I don’t write supercomplicated code if I can help it, and I’d rather provide simple code to get something simple done, even if that means I’m wasting a few hundred K of RAM or a few dozen milliseconds. I mean, sure, if we need those milliseconds or that RAM, we can tune for that… but we do that when necessary and not otherwise. A summary might be: hesitate to wax loquacious when your innate desire is to extrude tendrils for others to admire your skill; alternatively, allow their senses to inhale your greatness despite their inability to immediately perceive how impressive your capabilities are, especially in comparison to their own.
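Regarding item five: I can’t vouch for zerocell’s exact API, so here instead is a sketch of the classic Apache POI approach it’s competing with – assuming poi-ooxml on the classpath, and a hypothetical people.xlsx whose first sheet has a header row followed by name and age columns.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

public class SheetToPojos {
    // The POJO we're mapping rows into.
    static class Person {
        final String name;
        final int age;
        Person(String name, int age) { this.name = name; this.age = age; }
        @Override public String toString() { return name + " (" + age + ")"; }
    }

    public static void main(String[] args) throws Exception {
        List<Person> people = new ArrayList<>();
        try (Workbook wb = WorkbookFactory.create(new File("people.xlsx"))) {
            Sheet sheet = wb.getSheetAt(0);
            for (Row row : sheet) {
                if (row.getRowNum() == 0) continue; // skip the header row
                people.add(new Person(
                        row.getCell(0).getStringCellValue(),
                        (int) row.getCell(1).getNumericCellValue()));
            }
        }
        people.forEach(System.out::println);
    }
}
```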
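And for item eight, a tiny Java illustration of the monad idea using Optional, whose flatMap() is essentially the monadic “bind” operation – chaining computations that might produce nothing, with no null checks in between. The parsing helper is ours, not from the article.

```java
import java.util.Optional;

public class MonadishOptional {
    // Wrap a computation that can fail in the "maybe" type.
    static Optional<Integer> parse(String s) {
        try {
            return Optional.of(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        Optional<Integer> result =
                parse("21")                      // Optional[21]
                .flatMap(n -> parse("2")         // chain another "maybe"...
                        .map(m -> n * m));       // ...and combine if both present
        System.out.println(result.orElse(-1));   // 42; -1 if either parse failed
    }
}
```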

Non-Blocking vs. blocking I/O: Go with blocking.

You might have heard or read various things on the Internet extolling the virtues of writing your applications to be ‘non-blocking’. Java offers various libraries that help with this endeavour; Undertow, for example, is a Java web server that lets you write your servlet handlers in a non-blocking way, and XNIO is a more general I/O library that lets you write non-blocking code in a simpler fashion.
If you take nothing else away from this article, at the very least use one of those libraries to do your non-blocking work.
But the real point is: don’t write non-blocking code.
That’s right: Don’t write non-blocking code. It’s not worth it. You’re using Java; it ships with a garbage collector. Going non-blocking is like manually cleaning up your garbage: in theory it’s faster, in practice it’s more annoying, more error prone, and performance-wise usually either irrelevant or actually slower.
Said differently: Programming for non-blocking is a lot harder than you think it is, and the performance benefits are far less than you think they are.

What does ‘non-blocking’ mean?

Let’s say you wish to read from a file, so you create a FileInputStream and proceed to call the read() method on it. What happens now? Well, the CPU needs to send a signal that travels to various associated chips on the motherboard, to eventually end up at the disk controller, which will then query some cells in the solid state array and return them to you. Heaven forbid you still have a spinning disk, in which case we might have to wait for something as archaic as a motor to spin some metal around, and a pickup-needle-like thing to swing in there to read some bits! Even with very fast hardware this process takes ages – at least, as far as the CPU is concerned.
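From Your Editor: for the code-minded, this is all the blocking model asks of you – a minimal sketch, with a hypothetical data.bin standing in for whatever file you’re reading:

```java
import java.io.FileInputStream;
import java.io.IOException;

public class BlockingRead {
    public static void main(String[] args) throws IOException {
        byte[] buffer = new byte[4096];
        try (FileInputStream in = new FileInputStream("data.bin")) {
            int n = in.read(buffer); // the thread blocks here until bytes arrive
            System.out.println("read " + n + " bytes");
        }
    }
}
```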

From Your Editor: Want to know more about how your computer actually works and why? A good book to read is Charles Petzold’s Code.

So what happens? Well, the CPU will just… wait. In the parlance, the thread executing the read() call ‘blocks’; that is, it stops executing while the disk system fetches the requested bytes, and the CPU goes off to do some other stuff (perhaps fill the audio buffer with the next bit of the music you’re playing in the background, do some compilation, et cetera). If there’s nothing else to do, it’ll idle for a bit and save power.
Once the disk system has fetched the bytes and returned them to the CPU, the CPU will pick back up where it left off and your read() call returns.
That’s how ‘blocking’ works. Blocking is relevant for lots of things, but it’s almost always going to come up when talking to other systems: networks, disks, databases, et cetera: input/output, or I/O.
There’s another way to do it, though: ‘non-blocking’ code. (Yes, the quotes are intentional.) In non-blocking code, things work a little differently. Instead of the thread freezing when there is no data yet, a non-blocking read() returns whatever bytes are available, and execution continues. If you didn’t get it all, well, call it some more, or call it again later. In practice, there are two separate strategies for making this work correctly (a sketch of the second follows the list):

  1. The functional/closures/lambda way: You ask for some data, and you provide some code that is to be executed once the data is fetched, and you leave the job of gathering this data (waiting for the disk, for example), and calling this code, to whatever framework you’re using, and it might use non-blocking I/O under the hood.
  2. The multiplexing way: You are a thing that responds to many requests (imagine a web server, designed to handle thousands of simultaneously incoming requests), and if there isn’t yet enough data to respond to some incoming request, you simply… go work on some other request. You round-robin your way through all the requests, handling whatever data there is.
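From Your Editor: here’s a minimal sketch of the multiplexing way using raw java.nio – a toy echo server, with a hypothetical port and buffer size, and exactly the kind of fiddly code that libraries like XNIO exist to wrap for you:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class MultiplexingEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);             // never block on accept()
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                       // sleep until something is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {                 // round-robin the ready connections
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false); // reads must not block either
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int n = client.read(buffer);     // returns what's available *now*
                    if (n == -1) { client.close(); continue; }
                    buffer.flip();
                    client.write(buffer);            // echo back (may be partial!)
                }
            }
        }
    }
}
```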

It’s not as fast as you think it’ll be

‘Non-blocking is faster!’ is a usually-incorrect oversimplification based on the idea that switching threads is very slow. Let’s compare a webserver handling 100 simultaneous requests written with blocking I/O in mind, versus a webserver that is written non-blocking style. The blocking one obviously needs 100 threads executing simultaneously; most of them will be asleep, waiting for data to either arrive from or be sent out to the clients they are handling.
The blocking webserver will be switching active threads a lot. Contrast this with the non-blocking webserver, which might well run on only a single core: the one thread this webserver has will sleep as long as all 100 connections are waiting for I/O; if even a single one has something to do, the thread wakes up, does whatever job is needed for every connection that has data available, and only goes back to sleep when all 100 connections are awaiting I/O.

From Your Editor: The nonblocking approach just described is what Python and Node use nearly exclusively, by the way.

Thus, the theory goes: switching threads is slow, especially compared to… simply… not doing that.
But this is an oversimplified view of affairs: in actual practice, modern kernels are really good at the bookkeeping that threads require. You can create many thousands of threads in Java and it’ll be fine. Furthermore, your one non-blocking thread is also switching contexts: every time it jumps to handling another connection, it needs to look at a completely different chunk of memory. This will usually blow away your CPU’s caches, and loading a new page into cache costs on the order of 600 CPU operations, during which the CPU just sits there doing nothing. This cost is similar to (and takes a similar duration as) a thread switch. You gained nothing.
For more on why you’re not going to see a meaningful performance boost (in fact, why you’ll probably get less performance), check out Paul Tyma’s presentation, “Thousands of Threads and Blocking I/O: The Old Way to Write Java Servers Is New Again (and Way Better)”. Tyma is the operator of mailinator.com; he knows a thing or two about handling a lot of simultaneous traffic.

It’s really, really complicated

The problem with non-blocking is two-fold. First of all, modern CPUs definitely have more than one core, so a web server with a single thread handling all connections in a non-blocking fashion is actually much slower: all cores but one are idling. You can easily solve this by having as many ‘handler threads’ doing non-blocking operations as you have cores. But this means you get none of the benefits of having a single thread: you do not get to ignore synchronization issues – bugs that occur because of the order in which threads execute, so-called ‘race conditions’, which are notoriously hard to find and test for – unless you’re willing to pay a huge performance penalty.
Secondly: the point of non-blocking is that you do not block. Whenever you are executing in a non-blocking context, it is a bug to call any method or do anything that DOES block. You can NOT talk to a database in a non-blocking handler, because that will, or might, block. You can’t ping a server. Something as simple and innocuous as writing a log line might block. The problem is, if you do something that blocks, you won’t notice until much later, when your server seems to fall over even at a fairly light load. You won’t get a log message or an exception if you mess up and block in a non-blocking context; your server just stops being able to handle more than a handful of requests (equal to the number of cores you have) in a timely fashion. Whoops! Your big server designed to handle millions of users can’t handle more than 8 people connecting at once, because it’s waiting for a log line to be flushed to disk!

From Your Editor: Of course, you could write all of those operations in a nonblocking fashion as well, which is what Python and Node.js have to do – and there’s a good reason why such things lead to a condition known as “callback hell.” It can be done, obviously; they do it. It’s also incredibly ugly and error-prone; it’s a good example of the “cure” being worse than the “disease.”

This really is a big problem: As a rule, most Java libraries simply do not mention whether or not they block – you can’t rely on documentation and you can’t rely on exceptions either.
Trying to program in this ‘you cannot block!’ world is incredibly complicated.
For a deeper dive into the nuttiness of that programming model, read “What color is your function?” by Bob Nystrom, a language designer on the Dart team.

Soo… completely useless, huh? Why does it exist, then?

Well, the non-blocking hype probably exists because, like the idea of the tongue map, it’s just a very widespread myth that non-blocking automatically means everything runs significantly faster.
It’s not completely useless, though. The biggest benefit of writing your code non-blocking style is that you gain full control of buffer sizes. Normally, in the blocking I/O model, most of the state of handling your I/O is stuck in a stack someplace: if you ask an InputStream to read a JSON block, for example, and only half of the data is readily available, half of that data is processed by your thread into the data structures you’re reading that JSON into, and then the thread freezes. That data is now stuck partly in heap memory, with the rest in this thread’s stack memory. Contrast this with a non-blocking model, where you’ll have made a ByteBuffer explicitly, and most of the data is in there.
CPU stack, ByteBuffer: Potato, potato: It’s all RAM. The difference is: stacks are at least 1MB, and the same size for each and every thread in the VM. ByteBuffers are fully under your control. You can make them smaller than 1MB, you can dynamically update the size, and you can have different buffer sizes for different connections, for example.
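From Your Editor: a tiny illustration of that control – the sizes here are made up, but the point is that they’re yours to choose, per connection:

```java
import java.nio.ByteBuffer;

public class BufferSizing {
    public static void main(String[] args) {
        // A thread stack is typically 1MB whether a connection needs it or not;
        // a buffer can be whatever this particular connection actually warrants.
        ByteBuffer small = ByteBuffer.allocate(4 * 1024);          // idle keep-alive client
        ByteBuffer large = ByteBuffer.allocateDirect(256 * 1024);  // bulk upload in flight
        System.out.println(small.capacity() + " vs " + large.capacity());
    }
}
```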
Non-blocking code tends to have a smaller memory footprint because of this – if the coder is aware of it and takes advantage of the possibility – and that’s the one saving grace it does offer.
It is up to you to figure out whether that benefit is worth the hardship and performance hit of going with the non-blocking model. Generally, RAM is very cheap; I’d place my bets on the blocking model. After all, we don’t write all our software in hand-tuned machine code either!