
Transmission Pipeline Integrity Management Program on NetBeans

04.13.2011

I am Joseph Haller, managing member of Universalis Consulting Services LLC in Las Vegas, Nevada, providing engineering data analysis, statistical analysis, and data integration services for the natural gas utility industry. I have a Ph.D. in astronomy from the University of Arizona, where I worked in Galactic Center research involving infrared observations and stellar system modeling of the stellar population and the supermassive black hole that reside there.

I also taught undergraduate physics and introductory astronomy. I fell out of academia when I moved to Las Vegas in the late 1990s. I worked as a corporate web developer at Southwest Gas Corporation (where I picked up Java), then had a brief stint at a destined-to-be-defunct dot-com as an in-house application developer.

I ended up back at Southwest Gas working in the pipeline engineering field. Among other things, I participated in the implementation of a transmission pipeline integrity management program (TRIMP). Of the many aspects of TRIMP, my efforts are directed toward the integration of data needed for risk analysis. I became a contractor a few years ago, in part so I could pursue other activities.

What's Southwest Gas doing with NetBeans?

The data integration work for TRIMP has been developed using the NetBeans IDE since day one. I have been an unwavering user since about 2000. Initially I had to perform calculations pertaining to linear reference models of pipelines. Linear reference systems are used to uniquely identify and label all the parts of a linear feature network such as a road system, pipeline system, river estuary, etc. No functionality was available for this in the mapping system at hand. Moreover, there is no clear way to fully automate the generation of such models; it requires human input. I built a small Swing application called “PickUpSticks” that would take extracts of pipeline segment data from the mapping system and assemble them into continuous pieces making up the linear feature network. The application allowed the user to edit the pieces through merging, splitting, changing direction, naming, etc. The linear reference models were foundational for the data integration needed to perform risk analysis with a third-party application. All risk analysis data ends up with a linear reference, either as point data or range data placed upon these linear features.
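
For readers new to linear referencing, here is a minimal sketch of what point and range data placed on a named linear feature look like. The class and field names are purely illustrative; this is not the TRIMP Suite data model.

```java
// Illustrative sketch only - the names are hypothetical, not the TRIMP Suite API.
// A linear feature is identified by name; data is located on it by stationing,
// i.e. the distance measured along the feature from a chosen fixed end.
public final class LinearReferenceSketch {

    /** Point data: a single station on a named linear feature. */
    record PointEvent(String featureName, double station, String attribute) { }

    /** Range data: a begin/end station interval on a named linear feature. */
    record RangeEvent(String featureName, double beginStation, double endStation, String attribute) { }

    public static void main(String[] args) {
        // A valve located 1,250 ft along feature "L-101" (feature name is made up).
        PointEvent valve = new PointEvent("L-101", 1250.0, "valve");
        // A coating condition recorded between stations 800 and 2,300 on the same feature.
        RangeEvent coating = new RangeEvent("L-101", 800.0, 2300.0, "coating: FBE");
        System.out.println(valve + "\n" + coating);
    }
}
```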

A growing set of ancillary tasks was also written, and these amounted to various exercises in computational geometry. Not realizing it at the time, I had basically stumbled into the GIS world. I would build these tasks in the NetBeans IDE and configure them to run as Ant targets in the project build.xml file. The NetBeans IDE was my user interface. In those first few years I had to quickly beat a direct path toward the computational objectives without a lot of time for architectural considerations. Inevitably, the combination of development efforts and strict data analysis activities was proving too much for one person. Another employee was eventually brought in to perform the analyses using the tools I developed. But executing the tasks still required a fair bit of detailed knowledge of what was going on under the covers. We needed to further encapsulate them within an application targeting a non-omniscient user.

Mike Mann was brought in four years ago as an independent contractor precisely to assist in this effort as a Swing developer. He and I worked toward building a framework that could incorporate PickUpSticks and the various analysis tasks into a single application. Today Southwest Gas has what is called the “TRIMP Suite” application, built on the NetBeans Platform, to support the data integration and analysis activities of TRIMP.

How was the decision made to use the NetBeans Platform?

Mike Mann initially used the JIDE tool set to build the first desktop version of the TRIMP Suite that ported the prior versions of PickUpSticks and the analysis tools. However, a lot of the existing legacy code had not really been designed to work in a desktop environment. We were getting into the classic “spaghetti code” situation; a lot of stitching together of things across boundaries. It was not very clean.

Mike had worked previously with NetBeans Dream Team member Tom Wheeler and had seen what the NetBeans Platform could do as a foundation for a desktop application. Mike advocated migrating to the NetBeans Platform because of the out-of-the-box windowing system it offered, plus the module system.

It made sense to move in this direction since it was simply an extension of our development environment. The documentation and the available books were improving. It was not a hard choice to make. I have been pleased ever since. Moreover, we have been encouraged by the support Oracle has given to the NetBeans Platform since the Sun acquisition.

What have been the main gains in using it?

  • One of the first gains we saw right away was simply the use of the module system. It enabled us to resolve a lot of the dependency issues we had been encountering in the prior legacy code. It also enforced discipline in how we wrote and organized our code.
  • Second, the windowing system has been of great value. It has allowed us to roll out a sophisticated application exposing the functionality of our preexisting code. PickUpSticks is now embedded as a JPanel inside a TopComponent along with a few other navigational panels that were disparate in the original Swing version (a minimal sketch of this arrangement appears after this list). They are all now an integrated whole. Linear reference coordinates can be converted to 2-D coordinate positions and vice versa. Linear features are listed by name and can be quickly found on the map view by selection.

  • Mike has of late been using more of the Nodes, Explorer Views, and Visual Lib APIs in building user interfaces for our most recent additions.
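
As mentioned in the second bullet above, wrapping an existing Swing panel in a platform window takes only a few lines. The sketch below uses a placeholder JPanel standing in for PickUpSticks and omits the window registration (layer.xml or the @TopComponent.Registration annotation) a real module would need.

```java
import java.awt.BorderLayout;
import javax.swing.JPanel;
import org.openide.windows.TopComponent;

// Hedged sketch: wrapping a pre-existing Swing panel in a NetBeans Platform window.
// The JPanel below is a stand-in for the real PickUpSticks panel.
public class PickUpSticksTopComponent extends TopComponent {

    public PickUpSticksTopComponent() {
        setName("PickUpSticks");
        setLayout(new BorderLayout());
        JPanel pickUpSticksPanel = new JPanel(); // placeholder for the legacy Swing panel
        // The platform supplies docking, window persistence, and the surrounding
        // window system; the legacy panel itself is reused unchanged.
        add(pickUpSticksPanel, BorderLayout.CENTER);
    }
}
```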

What are some of the highlights of the application?



Figure 1: TRIMP Suite PickUpSticks Application

A principal tool in the application is PickUpSticks (Figure 1). It builds linear reference models from pipeline segment data obtained from GIS extracts. The linear reference model decomposes the interconnected set of pipes into discrete line segment features. Each linear feature is assigned a unique name and a generated “stationing”, the distance along the line from a chosen fixed end. The PickUpSticks models store the geometry of each linear feature: if you tell me the linear feature and station, I can tell you its x-y coordinates, and vice versa. The linear features are selectable, and when one is selected a “station slider” tool lets you scroll along the length of the line while the map view updates. Conversely, you can select a station on the tool and it will place you there on the map. Other standard map behaviors include zooming, panning, etc. Generally, the pipeline attributes change as you move along the line. A pipe design model is also hooked in so that pipe attributes display at each point as you slide along. Mike Mann has put an enormous amount of effort into incorporating a whole host of convenience features that make the interface informative and easy to use.
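
The station-to-coordinate behavior described above is essentially interpolation along the stored feature geometry. Here is a minimal sketch assuming a simple polyline of vertices; nothing in it is the actual PickUpSticks code.

```java
// Hedged sketch: convert a station (distance along the line) to an x-y point
// by walking the feature's polyline geometry. Names and structure are illustrative.
public final class StationingSketch {

    /** Vertices of one linear feature, in order, as {x, y} pairs. */
    static double[][] geometry = { {0, 0}, {100, 0}, {100, 50} };

    /** Returns the {x, y} point at the given station, measured from the first vertex. */
    static double[] locate(double station) {
        double walked = 0;
        for (int i = 1; i < geometry.length; i++) {
            double dx = geometry[i][0] - geometry[i - 1][0];
            double dy = geometry[i][1] - geometry[i - 1][1];
            double len = Math.hypot(dx, dy);
            if (walked + len >= station) {
                double t = (station - walked) / len;       // fraction along this segment
                return new double[] { geometry[i - 1][0] + t * dx,
                                      geometry[i - 1][1] + t * dy };
            }
            walked += len;
        }
        return geometry[geometry.length - 1];              // past the end: clamp to last vertex
    }

    public static void main(String[] args) {
        double[] p = locate(120.0);                        // 120 units along a 150-unit feature
        System.out.printf("x=%.1f y=%.1f%n", p[0], p[1]);  // expected: x=100.0 y=20.0
    }
}
```

The reverse direction (x-y to station) works the same way in reverse: project the point onto the nearest segment and add up the lengths walked so far.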

Figure 2: A TRIMP Suite Utility Task (on the right)

We have over two dozen utility tasks that have also been incorporated (an example is shown toward the right in Figure 2). These tasks involve some kind of analysis pertaining to the linear reference models such as generating pipe design models, exporting linear reference model files, boundary-stationing analysis, coordinate transformations, dynamic segmentation, model migration, etc.

Because of the growth Southwest Gas has experienced over the years, the pipeline system changes all the time. Thus, a series of snapshot models has been accumulating and is available for selection. A very thorny issue with the linear reference models arises from their time-dependent nature: all of the data that is referenced to a particular linear reference model needs to be updated if you change the linear reference system.

It took several years, but eventually we built a solution to tackle this problem. The first part is a module called “Compare” which uses two “hot rod” instances of PickUpSticks (Figure 3a). Two sequential models are loaded. The application automatically identifies segments having the same physical locations and attributes and then matches them together, along with their linear references, in a table (Figure 3b). The user then matches remaining unmatched segments and flags old or new segments having no matches. They can see and compare on the dual PickUpSticks panels the old and new versions of the model. Synchronized navigation exists between the map panels, slider panels, and match tables. A dual monitor screen is needed to run the Compare application because there is so much information. In the end, you have a mapping file that can translate linear references from an old model to linear references in a new model. Then we have a series of tasks (called “MatchSticks”) that leverage the mapping file to perform the linear reference transformations of old data sets to updated data sets for both point and range data. We currently work with ASCII text and Excel data files.
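
Conceptually, the MatchSticks translation boils down to locating the matched old-model range that contains a station and re-expressing it against the corresponding new-model range. The following sketch is only an assumption about how such a mapping could be applied; the record names and structure are invented for illustration and are not the actual TRIMP Suite classes.

```java
import java.util.List;

// Hedged sketch: translating old linear references to new ones using a
// segment-match table like the one Compare produces. All names are illustrative.
public final class MatchSticksSketch {

    /** One row of the mapping: an old-model station range matched to a new-model range. */
    record Match(String oldFeature, double oldBegin, double oldEnd,
                 String newFeature, double newBegin, double newEnd) { }

    /** A point reference expressed in the new model. */
    record NewRef(String feature, double station) { }

    /** Re-express an old (feature, station) point against the new model, or null if unmatched. */
    static NewRef translate(List<Match> matches, String oldFeature, double oldStation) {
        for (Match m : matches) {
            if (m.oldFeature().equals(oldFeature)
                    && oldStation >= m.oldBegin() && oldStation <= m.oldEnd()) {
                double t = (oldStation - m.oldBegin()) / (m.oldEnd() - m.oldBegin());
                return new NewRef(m.newFeature(), m.newBegin() + t * (m.newEnd() - m.newBegin()));
            }
        }
        return null; // no match: flag for manual review, as Compare does for unmatched segments
    }

    public static void main(String[] args) {
        List<Match> mapping = List.of(new Match("L-101", 0, 500, "L-101A", 100, 600));
        System.out.println(translate(mapping, "L-101", 250)); // NewRef[feature=L-101A, station=350.0]
    }
}
```

Range data would be handled the same way by translating both endpoints of each range.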

Figure 3a: TRIMP Suite Compare Application (dual PickUpSticks panels)

Figure 3b: TRIMP Suite Compare Application (segment match tables)

The most important boundary features along pipelines are known as “High Consequence Areas” or HCAs. Managing HCAs is central to the transmission integrity management process. The HCAs are defined in part by the human activity within buildings or areas in proximity to the pipeline, which obviously can change. Meanwhile, they have to be tracked in a linear reference model, which itself changes from one model to the next. Thus, HCAs are very dynamic over time, both in the physical world and in the data.

Figure 4a: TRIMP Suite HCA Directory using table interface

Figure 4b: TRIMP Suite HCA Directory using Visual Lib interface

In late 2010, Mike Mann worked with the TRIMP team data specialist to build a module to track the history of HCAs (Figures 4a and 4b). This tool was very much needed and they did a great job. Mike went to town incorporating much of the recent NetBeans Platform training we had during September 2010 in San Francisco to make a very useful and professional application. (Think Dr. McCoy repairing Spock’s cranium after learning from the “Controller” that brain transplant techniques are “child’s play” in the aptly named episode “Spock’s Brain”.)

Do you have some tip & tricks or other interesting things that you think other NetBeans Platform users should know about?

  • Conceptually, it helped to see that the relationship between the NetBeans Platform and the application modules it hosts is somewhat analogous to the relationship between an operating system and the programs it hosts. The operating system does a lot of things behind the scenes that you learn to rely on and eventually forget about (routing of GUI and keyboard events, for instance). The NetBeans Platform APIs have several aspects of this when you are writing code for your application. At first it is bewildering to understand the different levels of context sensitivity, why you configure certain things in the file system, why some event code written in certain ways somehow works, or why you always extend certain classes in certain ways. What makes these things work together is the platform behind the scenes. As you read the books and documentation you should take care to note what the platform is doing for you and why you need to write things the way you do. (A small example of context sensitivity via the Lookup API appears after this list.)

  • The data processing of PickUpSticks is very memory intensive. We encountered issues when we moved to the NetBeans Platform. Take note of how to configure the JVM switches in the platform configuration file (the application's .conf file, e.g. a -J-Xmx setting in default_options) to expand the available memory.

  • Mike has recently split the application from one suite into about a dozen or so. This has greatly helped the modularity issues. We can deploy different versions to different end users with different pieces of functionality. It also helps with the compile times.

  • Desktop integration has been important for us. We rely on Excel and a few other external DLLs. We have been successful in using TeamDev’s JNIWrapper and JExcel products to expose spreadsheets in our application (as well as Apache POI for processing large spreadsheet files). Interfacing with native code in the NetBeans Platform required some patience, but it can work. Once, a strange problem with Excel appeared: the default “double-click to open” mouse action for Excel files ceased to work on some of the desktop computers. It was really annoying. It turned out we were the cause: it happened when a user had opened Excel in TRIMP Suite but then closed the Suite without first closing Excel. Mike was able to write a callback method to prevent Excel instances opened by us from being closed on their own; they must be closed with a button in our application. For insurance, I wrote a TRIMP Suite utility task that does nothing more than open and close Excel behind the scenes, thus re-enabling the double-click action.
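
On the first tip above, about levels of context sensitivity: the Lookup API is one of the places where the platform quietly does the routing for you. A minimal sketch follows, in which the StationFormatter service interface is hypothetical.

```java
import org.openide.util.Lookup;
import org.openide.util.Utilities;

// Hedged sketch of two "levels" of context the platform maintains for you.
// StationFormatter is a hypothetical application service, not a platform class.
public class ContextSensitivityExample {

    public interface StationFormatter { String format(double station); }

    public static void demo() {
        // Global level: find a service implementation registered anywhere in the
        // application (e.g. via @ServiceProvider in another module).
        // May be null if no provider is registered.
        StationFormatter formatter = Lookup.getDefault().lookup(StationFormatter.class);

        // Selection level: whatever objects the currently active TopComponent has
        // placed in its lookup; the platform routes this behind the scenes.
        Lookup selection = Utilities.actionsGlobalContext();
        Object selected = selection.lookup(Object.class);

        System.out.println("service: " + formatter + ", selection: " + selected);
    }
}
```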

What are the future features you're planning to add to the application?

Out of those hectic first days of development, we ended up with a file-centric application. This worked well for a single user environment. However, as other TRIMP team personnel become dependent on the information, data integrity issues arise. Not surprisingly, our next major efforts will involve moving all of the linear reference and pipe design models and data analysis results to an Oracle database, relying on Oracle Spatial to help in the storage and the processing wherever it makes sense. We anticipate this will take at least a year to accomplish.

There is considerable interest in expanding this application to other areas of the integrity management process; we still only support a small part of the overall effort. We have offered a proposal that outlines a path on how to get there. A decision to proceed will probably not be made until we get further along with the database migration.

TRIMP Suite is not a general mapping solution. We extensively use a third-party tool outside of TRIMP Suite called Global Mapper to display spatial data, re-project coordinates, convert file formats, etc. We have obtained the Global Mapper SDK, which is written in C. My initial efforts with the SDK use it with JNIWrapper to hook in coordinate transformation functions. However, it can do so much more. I am currently researching how we can better leverage its additional capabilities.

Do you have other plans in terms of things you'd like to do with the NetBeans Platform?

  • Forking the JVM – the TRIMP Data Analyst now has some models that are very large, and processing them breaks the desktop environment. I can run the tasks independently of the NetBeans Platform, but inside TRIMP Suite we encounter the “GC overhead limit exceeded” OutOfMemoryError. As we speak I am writing a routine that uses Apache Ant to fork a new JVM instance to run the task independently of TRIMP Suite (a rough sketch of the idea appears after this list). I have a working test program using a simple Java application to launch the fork, but inside the NetBeans Platform the original JVM class path does not completely clone into the forked JVM instance, I presume because of the module class loader architecture. I am researching this so that we can get to the point where no special class path configuration is needed in the forked instance.

  • Scala – Scala has interested me for quite some time. I have seen enough of it to know that it would be of great help in a lot of our data processing routines. How to get it in our application though? I saw some blogs on the web on how to make a NetBeans project mixing Java and Scala source code and another on creating a Scala NetBeans module. I was able to get them to work. Next, I took it a little further and was able to make a NetBeans Module project using both Java and Scala. I wrote a module installer invoking both Java and Scala code. A little bit of configuration file hand stitching was needed but it is possible. I will be pursuing this further during the coming year.

  • JINI / JavaSpaces – Mike Mann is always on the lookout for interesting technologies to incorporate into the application. As we move from a single- to a multi-user environment it makes sense that we will need a general environment for information to be commonly shared. Mike has been trying out Fly Object Space which has both Java and Scala bindings. Some of our larger computations and future workflow efforts could benefit from it.

  • JavaFX – We have had our eye on JavaFX for some time but using it in the NetBeans Platform was not a viable option. Nor did it make sense to abandon the NetBeans Platform in favor of JavaFX. We are very interested to see what becomes of it in the Java 7 release when JavaFX is exposed as a Java API. It has been a long time coming and we eagerly await it.
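
On the JVM-forking item above, one way to drive the fork is Ant's java task used programmatically. This is only a sketch of the idea, under the assumption that the Ant libraries are available at runtime; the task class, memory size, and classpath handling are placeholders, not the routine described in the interview.

```java
import org.apache.tools.ant.Project;
import org.apache.tools.ant.taskdefs.Java;

// Hedged sketch: launching a heavy task in a separate JVM via Ant's <java> task.
// "com.example.BigModelTask" and the memory setting are placeholders.
public class ForkedTaskLauncher {

    public static void runForked(String classpath) {
        Project project = new Project();
        project.init();

        Java javaTask = new Java();
        javaTask.setProject(project);
        javaTask.setTaskName("forked-task");
        javaTask.setFork(true);                        // run in a fresh JVM, not the platform's
        javaTask.setClassname("com.example.BigModelTask");
        javaTask.setMaxmemory("4g");                   // sized for the large models
        javaTask.setFailonerror(true);
        // The forked JVM sees only this classpath; it must be assembled explicitly,
        // since the platform's module class loaders do not carry over to the child process.
        javaTask.createClasspath().setPath(classpath);

        javaTask.execute();
    }
}
```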

Published at DZone with permission of its author, Joseph Haller.
