Process unit representations in planning models need to include capacity controls so that the optimization takes into account limits on how much material can be processed. The most obvious, and easiest, one to set up is a count of the feed.
Process vectors are usually written per unit of feed, so if you put 1 as a coefficient on the capacity control and give the maximum (and perhaps also minimum) flow, the unit has a size. If your vector consumes something other than 1 unit of feed, use that same factor to load the limit. To control the outputs, use the yield factor, as below from our WT Demo model.
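The arithmetic of such a row can be sketched as follows (illustrative Python, not GRTMPS input syntax; the vector names, factors, and flows are invented for the example):

```python
# A capacity row sums loading factor x vector activity; the total is
# then held between the unit's minimum and maximum throughput.

def capacity_use(activities, loading_factors):
    """Total capacity consumed by a set of process vectors."""
    return sum(activity * loading_factors.get(vector, 0.0)
               for vector, activity in activities.items())

# A vector written per 1 unit of feed gets a factor of 1; one that
# consumes 0.5 units of feed gets 0.5 on the same row.
factors = {"FEED_A": 1.0, "HALF_FEED_B": 0.5}
use = capacity_use({"FEED_A": 800.0, "HALF_FEED_B": 400.0}, factors)
# use = 800*1.0 + 400*0.5 = 1000.0, to be compared with the max flow
```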

Such limits will be in the same units as the process vectors – so m3/day (or bbl/day) in a volume-balanced model and tons/day in a weight-balanced model. Historically many weight models were set up with weight-based limits. But as it is almost certain that the physical constraints on the actual plant are volumetric flows, these are clearly an approximation of the capacity based on some reference density. The modeller has to assume that the feed density is sufficiently stable for this not to have a significant impact on the usefulness of the results. (If you have a volume model, you can probably stop reading here – unless you happen to have a unit where the capacity is limited by weight, in which case you have something like the inverse issue – but not as messy, since density blends by volume.)

Weight models can quite easily have volume capacity limits, though, if there is density information for the streams flowing in and out. Count the m3 per ton of feed by using 1/Density – the specific volume – as the loading factor. If the relevant density is a constant in the model, then you can just calculate the value and use it. If you are generating crude library data or have cutpoint optimization, you can look this information up in the assay.
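For a constant feed density the arithmetic is just this (a sketch; the 0.85 t/m3 density is an invented example value):

```python
# In a weight model, the volume loading factor is the specific
# volume: m3 consumed per ton of feed = 1 / density (t/m3).

def volume_loading_factor(density_t_per_m3):
    return 1.0 / density_t_per_m3

factor = volume_loading_factor(0.85)   # example feed density, t/m3
m3_per_day = 1000.0 * factor           # 1000 t/day -> ~1176.5 m3/day
```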

Here we see the crude library tags expanded for one crude, so you can see that CDU Capacity is now counted in volume and the LGO/HGO draw factors no longer match the yield fractions, as they have been divided by the density of the cut.

Variable Density
If the density is not known in advance, because the feed is a blend or a rundown pool from another unit, then you will need a recursed element to count the volume. The key to doing this efficiently is the internal property .VL, which is equal to 1/density. (Volume models have a .WT, which is equal to density.) You can then either use the Adherent Recursion system to define a PUP for the factor, or link your capacity control row to .VL.

Using a PUP to count m3 per ton of feed is simple: multiply the .VL property of the feed pool by 1. If you want to count barrels instead, multiply it by 6.29.
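In code terms, the PUP arithmetic is just this (a sketch; the recursed .VL value is represented as a plain number):

```python
# The recursed feed-pool property .VL (= 1/density) becomes the
# loading factor directly; a constant converts m3 to barrels.

BBL_PER_M3 = 6.29

def pup_m3_per_ton(vl):
    return vl * 1.0          # count m3 per ton of feed

def pup_bbl_per_ton(vl):
    return vl * BBL_PER_M3   # count barrels per ton of feed
```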

Although you could calculate the same value from 1/DEN, that would create a more complex recursion problem because density blends by volume. When volume-blending properties are used as inputs to AR calculations in weight models (or weight-blending properties in volume models), an additional matrix factor has to be approximated and recursed to correct for density change. Using .VL is simpler and more likely to be stable.
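A quick numerical illustration of why (with invented component data): specific volume blends linearly on a weight basis, while density itself does not.

```python
# Two components blended by weight; densities in t/m3.
weights   = [600.0, 400.0]
densities = [0.75, 0.95]

# True mixture volume is the sum of weight * specific volume, so the
# blended .VL is a simple weight-average - linear, recursion-friendly.
volume = sum(w / d for w, d in zip(weights, densities))
vl_mix = volume / sum(weights)

# A weight-average of density itself gets the wrong answer:
rho_naive = sum(w * d for w, d in zip(weights, densities)) / sum(weights)
rho_true  = 1.0 / vl_mix
# rho_naive = 0.830, rho_true ~ 0.819 - density blends by volume.
```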

If you want to control the volume of an output stream, you can of course also use a PUP. Calculate it as Yield × Specific Volume, or Yield ÷ Density, either directly from the simulator or from the PUPs that are returned to track these as Yield Qualities, such as PUP VLDREF below.
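The factor arithmetic for an output stream is then (a sketch with an invented yield and cut density):

```python
# Volume counted per ton of unit feed for one output stream:
# yield (weight fraction) * specific volume = yield / density.

def output_volume_factor(yield_wt, density_t_per_m3):
    return yield_wt / density_t_per_m3

lgo_factor = output_volume_factor(0.20, 0.84)  # ~0.238 m3 per ton feed
```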

In this case there is less impact from using volume or density outputs from the simulator to calculate the factor. What matters for the recursion is which inputs are used by the simulator. That is, a rundown density predicted from feed volume will be just as stable as one calculated from a volume factor, while calculating output density and/or volume from feed density would be less so. Note that even if the yield on the vector is 1, you should not use the same PUP on both the Operations and Yield Quality panels, as there is likely to be scaling applied to the quality PUP (depending on the "Quality Transfer PUPs" setting on the unit AR panel – or the \$REALBASE option set in Table 200.0) that is not needed for a PUP on a limit row.

Select the PUPs as the factors on the capacity rows.

If you prefer a more traditional approach and still use static delta-base unit representations, you can pick up feed density information by linking the capacity control to the volume counter.
On the Loading Factor panel, indicate that the capacity is related to the .VL property. (For spreadsheet input, this goes in the \$QUAL column of the unit's 2xx.1 table)

On the Operations panel this property code will be displayed on the limit row.

Any factor that is on a vector that has a pool feed will be replaced with the assumed specific volume. The tons of feed could have been counted on either the Base mode (98A) or the Feed mode (P81), as they always have the same activity. However, anything we want to recurse has to be on the Feed mode. (If you put a number on the Base mode it will be a constant.) With this structure the volume factor will be replaced with the current assumed value – along with the N2A property that is driving the yield shift. These numbers will be updated every pass, and there will be a connection to the pool property error balance so that the optimization of each linear approximation is influenced by any changes in feed quality. You can check that the count is correct by comparing the feed volume on the unit mass balance report to the reported capacity – and if it is a dedicated feed pool, there should also be a match to the reported blended volume (which in turn should be consistent with the reported density).
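The pass-to-pass update amounts to this (a sketch; the vector name P81 is from the text above, the numbers are invented):

```python
# Each recursion pass, the loading factor on the Feed-mode vector is
# overwritten with the pool's currently assumed specific volume, so
# each linear approximation sees the latest feed quality.

def update_capacity_row(row, assumed_vl, feed_vector="P81"):
    updated = dict(row)
    updated[feed_vector] = assumed_vl
    return updated

row = {"P81": 1.176}                    # factor from the previous pass
row = update_capacity_row(row, 1.190)   # new assumed .VL for the pool

# Consistency check mirroring the report comparison described above:
feed_tons = 1000.0
capacity_count = feed_tons * row["P81"] # should equal the feed volume
```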

If you want to constrain in barrels instead of m3, you can do this by transferring the volume count information to another operation and adding the additional conversion.

Constrain REV to a fix of zero. This will drive vector V81 to be equal to the feed m3. Loading 6.29 bbl/m3 onto REB then gives you a barrel control. (Such volume vectors are also useful for setting up the quality driver rows that control deltas on volume-blending inputs – and in a volume model, one driven to count feed weight is a better place to put a base sulphur number – although you will get a better linearization for both weight and volume models if you use AR.)
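Numerically, the transfer structure works like this (a sketch using the row/vector names from the text, with an invented feed volume):

```python
# Row REV is fixed at zero, which forces transfer vector V81 to carry
# exactly the feed m3; loading 6.29 on REB then counts barrels.

BBL_PER_M3 = 6.29
feed_m3 = 1190.0

v81 = feed_m3              # forced by REV: feed_m3 - V81 = 0
rev = feed_m3 - v81        # the balance row, fixed at zero
reb_bbl = BBL_PER_M3 * v81 # the barrel capacity count, ~7485 bbl
```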

Quantity vs. Quality:

So, if your feed and rundown flow constraints are physically volumetric, you have options that will allow you to represent that, even in a weight-balanced model. Should you use them? The client who raised the question was as much concerned about the impact on the recursion behaviour as about how to model it. With a volumetric control, the model now has the possibility of trading density against quantity when selecting feed stocks. Light feeds will hit volumetric limits sooner – giving fewer tons to turn into products, or requiring more processing time to achieve the same amount. Heavy feeds give more tons, but less volume of material (or perhaps require more cracking-type processes to achieve the same volumes). Could this cause some oscillation between recursion passes? Maybe. It will depend on how much of the pricing for inputs and outputs is by weight or by volume and, as always, on how clear the incentives are. Mixed sales – for example, gasoline by volume for the local market but by weight for export cargoes – will definitely provide incentives for fine-tuning density. Such opportunities to play games with moving the quality error from one destination to another would need to be managed by having some constraints on amounts or other distinct specifications. However, as the limits are volumetric, this choice is a real-world one, and not just a modelling artefact. The advantages of taking density into account in the evaluations seem to outweigh the potential for problems. If the model can’t accurately determine how much of each crude it can run on the towers, how can you compare their values?

From right next to the fan on Kathy's Desk.

22nd July 2021