[Guest Blogger] 40 Years in the Field (Part 11)


Graham Jones (Surrey, British Columbia, IT Pro)

Designing, constructing and commissioning a process plant bears some similarity to software development. When we develop software we always start from a perfect set of requirements that never change, right? In your dreams. Building a process plant, especially when the process is the fruit of your own research, is not dissimilar. If we waited for the perfect starting point we would never begin, and we would never make the schedule that the client demands. The fundamental problem in plant engineering is that multiple technical disciplines (up to 10) and management disciplines are involved. All of them work in series and in parallel at different times throughout the project. Nobody can really start without the process design, but it changes, perhaps because of new data from research or new information from the client (clients never change their minds, right?). In a perfect world it would be a “straight-through” process. However, just to make it even more interesting, downstream changes may generate the need to recycle new information back upstream. In other words, designing and building a (safe and operable) process plant is very complex and challenging.

In the late 1980s/early 1990s, market competition really upped the ante when it came to project schedules. Negotiation of a typical contract might go along the lines of “company X can supply a similar technology but in 2 months less”. By the time the final price had been negotiated, and the plant had been promised in 3 months less (not 2) with the end date unmoved (thank you sales and marketing – perhaps engineering could be consulted next time), we were in trouble before we even started! Does this all sound very familiar? The upshot was that projects which had previously been making money were now losing money. Something had to change! The market wasn’t going to change, and in fact as time went on schedules got even more “lunatic” because of the clients’ time-to-market pressures. The way we “executed” had to change. This required a fundamental re-examination of processes and data flow. A return to profitability could only be achieved by applying traditional “data processing” techniques, leading to “Concurrent Engineering”. A massive effort was launched by our parent company to understand and document our processes, and to document every single data item involved, from both technical and Project Management standpoints. The development project eventually took some 4 years (the original estimate was 2 years), cost approximately $15 million in today’s money, and took several more years to become the routine daily working environment. In an earlier article I suggested that paradigm shifts in work processes take 7-10 years to fully and successfully implement when you consider the re-training/re-orientation/re-indoctrination of people. When you consider that the typical maximum useful life span of a major software product used to be 5-7 years (it has been getting shorter all the time, creating major challenges), it is easy to see why we were in a constant state of flux.

It was very difficult to execute “error free” projects using a manual system because checking the design was a time-consuming, very tedious, and thus error-prone exercise which was often compromised when time became short (not dissimilar to compromising on software testing). This was the main reason why the “we will catch it in the field (i.e. construction)” expression was so often heard. The situation became even more acute with shorter schedules. As I explained in an earlier part, this was very expensive. You are at the mercy of the construction contractor, who has typically bid a fixed price. They bid a low price to get the contract and then ding you on the “extras”. Everybody knew this and yet the system perpetuated itself. Pleading with people (which I did frequently) to spend extra time and effort in the office usually fell on deaf ears. Time pressures and the client breathing down your neck invariably pressed people into “panicky” decision making. Having spent some time doing PM, I understand perfectly the tough spot you are in when things aren’t going as well as you would like. However, the most successful PMs resist the temptation to make expedient choices and stick to making the decisions which are in the best long-term interest of the project. Invariably that will entail making some very difficult and sometimes unpopular choices affecting the team, the client or company management, or possibly all three. Everyone will thank you for it in the end even though it doesn’t feel good at the time! If PM were easy, many more people would be good at it.

PDMS (the Plant Design Management System) eventually went a long way toward mitigating problems of physical fit, which is where a lot of the manual checking effort would go. In fact a few contractors “lost their shirt” when PDMS, and similar products, came into more common use. However, it couldn’t fix things that were fundamentally flawed because of poor design resulting in operational problems. Those could lead to very punitive contract penalties if the plant failed to meet the process guarantees (typically production output, product quality and cost of services, e.g. steam, electricity, etc.). For example, suppose that the client had informed us in good time (i.e. as per the contract) that the steam pressure for a heater was now lower than originally specified, but that information had not been properly propagated through the design, resulting in an operational problem. Penalties could literally be $50K per day for every day that the plant failed to meet the guarantees. It didn’t take much to wipe out your profit!

Perhaps now it is clear why such a huge effort and sum of money was invested. Effectively, the marketplace forced a paradigm shift in the process plant design business, which ultimately was a good thing because it made the industry more efficient. Doing all of the upfront analysis was all well and good, but we needed solutions. While all of this was going on, database technology was advancing apace and it was clear that some form of DBMS had to be involved. The product of choice was Oracle. At that time nobody had attempted an undertaking of this kind in plant engineering, and inevitably that presented some challenges. The biggest initial mistake was “going for broke”. By that I mean trying to build an “all singing, all dancing” system, i.e. the “grand” design including all processes and data elements. The data flow analysis amounted to many hundreds of data flow diagrams (DFDs) and the data dictionary had thousands of entries. This initial approach was a huge mistake and a painful lesson which didn’t do a lot for the confidence of the management team putting up the money. I haven’t yet indicated where I fit into all of this. My role was to be part of a multi-company team (Canada, US, UK & Europe) producing DFDs for the process discipline and reviewing the DFDs from all of the other disciplines. By this point I was the Engineering Manager responsible for all of the disciplines in my company. Each company was to review everything, since the business environment and client base varied from company to company. As you might expect, that resulted in multiple iterations of the DFDs and requirements specifications, sometimes ad nauseam.

It was soon recognized that the project had to be split into multiple applications which “spoke to each other” under the control of “human beings”, rather than having everything done automatically via software. The initial “grand design” was at the insistence of the IT people doing the development work. In theory it could and should be done! It was never wanted by the people actually doing the work, including me. One of the biggest difficulties for people was getting their heads around all of this data, in various states of completion, “flying around” inside this huge DB. In other words, “if I change this, what will actually happen and where?”. It was simply too much for people to comprehend or feel confident about. A very important issue in engineering design is the control and logging of the release of “information packages” as data moves from discipline to discipline. Breaking the project into discipline-based apps paralleled what happened in the manual system, so people could easily identify with it. The fact that it now happened electronically, based upon access privileges, didn’t matter. After we got past the “I told you so” stuff, things started to come together. Obviously it wasn’t possible to release all of the apps at once, and a mixed electronic and manual system is often less efficient than either, which was grist to the mill for the ever-present naysayers.
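To make the “information package” idea a little more concrete, here is a minimal sketch of my own (in Python, purely illustrative; the real system was a set of Oracle-based applications, and the discipline names and data items below are invented) showing how a release log with discipline-based access privileges might be modelled.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative discipline names only; the real list was longer.
DISCIPLINES = {"process", "mechanical", "piping", "electrical", "instrumentation"}

@dataclass
class InformationPackage:
    """A bundle of design data released from one discipline to another."""
    package_id: str
    issued_by: str   # discipline releasing the data
    issued_to: str   # discipline receiving the data
    revision: int
    contents: dict   # e.g. {"heater_steam_pressure_barg": 10.0} -- invented item
    released_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ReleaseLog:
    """Controls and records package releases, much as the manual sign-off did."""

    def __init__(self) -> None:
        self._releases: list[InformationPackage] = []

    def release(self, pkg: InformationPackage, user_discipline: str) -> None:
        # Access privilege check: only the issuing discipline may release its own data.
        if user_discipline != pkg.issued_by:
            raise PermissionError(f"{user_discipline} cannot release {pkg.issued_by} data")
        if pkg.issued_to not in DISCIPLINES:
            raise ValueError(f"unknown receiving discipline: {pkg.issued_to}")
        self._releases.append(pkg)

    def latest(self, issued_by: str, issued_to: str) -> InformationPackage | None:
        """Most recent package on a given discipline-to-discipline interface."""
        candidates = [p for p in self._releases
                      if p.issued_by == issued_by and p.issued_to == issued_to]
        return max(candidates, key=lambda p: p.revision, default=None)
```

The point of the split was exactly this: each discipline’s app owns its own data, and anything that crosses a discipline boundary goes through an explicit, logged release rather than silently changing inside one monolithic database.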

Big pluses with the electronic system, apart from improved accuracy due to a reduction in transcription errors, were the ability to store the “design” at different stages and easily go back to previous package releases if necessary, the ability to easily re-use data from one project to the next, and the preparation of more accurate proposals for the same process technologies. Eventually this saved a huge amount of effort and improved accuracy by not “re-inventing the already proven wheel”. Give 5 engineers the same specs and you will get 5 different designs, some much better than others. This parallels the re-use of “quality” program code, which only makes sense. Computer-based data also assisted in project reviews, both in time and accuracy, because a detailed “history” (who changed it, when and for what reason) of every single piece of data was available. People couldn’t so easily “hide” anymore and the poorer performers were more readily weeded out. Today my expectation would be that there is more digital integration within the design process and that digital data is routinely sent to vendors and manufacturers. Why send a drawing for someone to figure out how to program a CNC machine to make a part when you can send the instructions electronically?
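Again, a sketch of my own rather than the system we actually built: the essential property is that every data item carries its full change history (who changed it, when and why), which is what made project reviews faster and more accurate. The item name, values and reasons below are invented, echoing the steam pressure example earlier.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ChangeRecord:
    """One entry in a data item's audit trail."""
    value: object
    changed_by: str
    changed_at: datetime
    reason: str

class DataItem:
    """A design value whose complete history is retained, so nothing can 'hide'."""

    def __init__(self, name: str, value, changed_by: str, reason: str = "initial entry"):
        self.name = name
        self._history: list[ChangeRecord] = []
        self.set(value, changed_by, reason)

    def set(self, value, changed_by: str, reason: str) -> None:
        self._history.append(
            ChangeRecord(value, changed_by, datetime.now(timezone.utc), reason))

    @property
    def current(self):
        return self._history[-1].value

    def history(self) -> list[ChangeRecord]:
        """Who changed it, when, and why -- the raw material of a project review."""
        return list(self._history)

# Invented example echoing the steam pressure story above.
steam = DataItem("heater_steam_pressure_barg", 12.0, "process", "original client spec")
steam.set(10.0, "process", "client advised lower steam pressure")
assert steam.current == 10.0
```

Going back to a previous package release then becomes a matter of reading an earlier point in the history instead of digging through paper files.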

Cheers

Graham J.