
Tactical workload exceptions are in place to prevent tactical queries from consuming unreasonable amounts of resources. This protection is important because the super-priority and almost unlimited access to resources given to work running in the Tactical tier with SLES 11 are easy to abuse.
Components that Make up the Tactical Workload Exception
A tactical workload exception is actually two distinct exceptions bundled into one: one exception is on CPU usage and the other is on I/O usage. Once one or both of these exceptions is detected, the query running in the tactical tier will be demoted to a non-tactical workload.
Each of these exceptions comes with two threshold variants: a per node threshold and a sum over all nodes threshold.
Each node has its own instance of the operating system and the priority scheduler, which only knows about the activities on that one node. Each node’s priority scheduler instance monitors its own per node exception thresholds for CPU and I/O. The sum over all nodes exceptions, on the other hand, are monitored and detected by workload management (TASM or TIWM).
Here is how the TASM orange book explains these two tactical workload threshold variants, and how they will appear in Viewpoint Workload Designer:
Tactical CPU time threshold values (default values are defined by the Workload Designer):
- CPU (sum over all nodes) – Default value is the number of nodes multiplied by the CPU per node value.
- CPU per node – Default value is 2 seconds.
Tactical I/O physical byte threshold values (default values are defined by the Workload Designer):
- I/O (sum over all nodes) – Default value is the number of nodes multiplied by the I/O per node value.
- I/O per node – Default value is 200 MB. (Note: the I/O per node value is stored in the TDWM table in KB and must be converted to MB or GB to create the tactical exception in the Priority Scheduler. There is a small loss of precision when this conversion occurs.)
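To make these default formulas concrete, here is a small worked example in SQL; the 4-node system size and the 250,000 KB figure are hypothetical, and the SELECT simply restates the arithmetic above rather than reading anything from TDWM.

    SELECT
        4 * 2         AS cpu_sum_over_all_nodes_sec   /* 4 nodes x 2 CPU seconds per node */
      , 4 * 200       AS io_sum_over_all_nodes_mb     /* 4 nodes x 200 MB of I/O per node */
      , 250000 / 1024 AS io_per_node_mb_from_kb       /* hypothetical KB value converted to MB;
                                                         integer division returns 244 rather than
                                                         244.14, the small loss of precision
                                                         mentioned in the note above */
    ;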
The screenshot below shows what the Tactical Exception screen looks like in Viewpoint Workload Designer for a one-node system.
Per Node Thresholds
As stated previously, a request is moved to a different workload on a node when either the CPU per node or the I/O per node threshold is reached. This means that it is possible for a request to be running on a different workload on one node (having been demoted), while all other nodes are running the request in the original workload.
This could occur if there is heavy skew on one node, so that one node exceeds the per node threshold for either CPU or I/O. When the CPU (sum over all nodes) or the I/O (sum over all nodes) threshold is reached, the request is moved on all nodes at the same time, and any notification actions are performed.
Database Query Log
The easiest way to see if a tactical exception across all nodes has taken place is to look at the FinalWDID field in DBQLogTbl. That field captures the workload ID of the workload where the query completed. If it is different from the starting workload ID (WDID), then a workload exception to demote took place.
If you are on Teradata Database 15.10 and DBQLogTbl logging is enabled, the number of nodes that reached the CPU per node threshold value is logged in the TacticalCPUException field, and the number of nodes that reached the I/O per node threshold value is logged in the TacticalIOException field.
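As a rough sketch of that check (it assumes DBQL query logging is enabled and a release with the 15.10 exception-count fields; the seven-day window is an arbitrary choice, and your site may expose the log through the DBC.QryLogV view rather than the base table), a query along these lines lists requests that completed in a different workload than the one they started in:

    SELECT  QueryID
          , UserName
          , StartTime
          , WDID                   /* workload the request started in */
          , FinalWDID              /* workload the request completed in */
          , TacticalCPUException   /* nodes that reached the CPU per node threshold */
          , TacticalIOException    /* nodes that reached the I/O per node threshold */
    FROM    DBC.DBQLogTbl
    WHERE   FinalWDID <> WDID      /* the request was demoted out of its starting workload */
      AND   StartTime >= CURRENT_TIMESTAMP - INTERVAL '7' DAY
      /* typically you would also filter on the WDID of your tactical workload(s) */
    ORDER BY StartTime;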
After a System Expansion
If you make changes to these CPU and I/O thresholds and the rule set is saved, the new threshold values become fixed, static values until changed again by the administrator. If there is a system expansion and the number of nodes in the configuration changes compared to when the rule change was made, the CPU and I/O sum over all nodes settings are not automatically adjusted.
After a reconfig, always recheck these exception threshold values to make sure they are still appropriate for the new configuration.
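As a simple illustration of the gap that can open up (the node counts are hypothetical), suppose the rule set was saved on an 8-node system and the system is later expanded to 12 nodes. The saved sum over all nodes values stay at the 8-node figures until an administrator changes them:

    SELECT
        8  * 2    AS cpu_sum_saved_on_8_nodes_sec       /* frozen in the rule set at save time     */
      , 12 * 2    AS cpu_sum_expected_on_12_nodes_sec   /* what the default formula would now give */
      , 8  * 200  AS io_sum_saved_on_8_nodes_mb
      , 12 * 200  AS io_sum_expected_on_12_nodes_mb
    ;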
Guidelines When Changing Tactical Workload Exceptions
Tactical workload exceptions are intended to be a permanent part of each tactical workload and cannot be deleted. However, their thresholds can be modified as needed. It is important to monitor the frequency of tactical exceptions, and if you change the exception thresholds, do so cautiously. It is recommended that thresholds for tactical events remain low, at the default settings or below.
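One way to keep an eye on that frequency is a daily count of demoted requests per starting workload, sketched below; it assumes DBQL query logging is enabled and reuses the WDID and FinalWDID fields described earlier.

    SELECT  CAST(StartTime AS DATE)  AS run_date
          , WDID                     AS starting_workload_id
          , COUNT(*)                 AS demoted_request_count
    FROM    DBC.DBQLogTbl
    WHERE   FinalWDID <> WDID
      AND   StartTime >= CURRENT_TIMESTAMP - INTERVAL '30' DAY
      /* restrict to the WDID values of your tactical workloads to count tactical demotions only */
    GROUP BY 1, 2
    ORDER BY 1, 2;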
One motivation for creating multiple workloads with the Tactical workload management method is the need to have different tactical exception threshold values. You may have a set of applications that should always submit very low CPU usage queries, and you want to quickly detect any longer running ones and move them to a lower priority workload. For these you would create a tactical workload with a tactical exception with lower tactical CPU time threshold values. On the other hand, you may have some very critical tactical applications that should be allowed to consume larger levels of CPU and I/O. For these applications, you could create a tactical workload with a tactical exception that utilizes, for example, the default threshold values.
Tactical queries are expedited, which means they will use reserved AMP worker tasks if such reserves have been defined on the platform. At the time of the demotion, tactical requests that are being moved will take their reserved AWTs with them to the non-tactical workload until the current step completes. When the next query step is dispatched, it will use standard AMP worker tasks, just as all non-tactical queries do.
If there are large numbers of demotions out of tactical, this could contribute to misuse of reserved AMP worker tasks, and could deplete their number sooner than you might like. For that reason, if tactical queries are hitting the exception thresholds often, that should be a tipoff that either the tactical application requires tuning, or that the workload belongs in a non-tactical tier.
One approach used at some customer sites is to keep the two threshold variants (per node and sum over all nodes) for both the CPU and the I/O exceptions as the same number. In other words, if you want to demote all tactical queries that exceed 1 CPU second, set both the per node and the across all nodes settings at 1 second. The same with I/O. That ensures that if a query is demoted on one AMP, it will be demoted on all AMPs, and you will have eliminated the possibility of one node running the query on one workload, and another node running it on a different workload. One reason to do this has to do with what DBQL captures. Unless the sum over all nodes value is reached, a FinalWDID reflecting the demoted-to workload will not be displayed in the DBQLogTbl for the query.
Managing CPU Per-Node Exception Threshold Values
It is generally recommended that you not reduce the tactical exception threshold on CPU below 1 CPU second for either CPU per node or for sum over all nodes. The lower the threshold value is set, the more CPU overhead is involved when the tactical requests approach or exceed the threshold.
At very low thresholds, the effort involved in monitoring and detecting non-compliant tactical requests can add considerable overhead, with the potential to impact other tactical and non-tactical work that is active on the system. The more queries that are active in the workload, and the more of them that reach or come close to reaching the per-node exception threshold, the more CPU overhead this approach incurs, particularly when the threshold is set low.
However, in cases where it has been established that hardly any tactical queries in the workload ever reach or come close to reaching the CPU per node threshold, CPU seconds per node exceptions below 1 second are an option. This is especially true if the concurrency within the tactical workload is low.
CPU exception enforcement overhead is not expected to be problematic if the workload exception acts as a rarely-used safety net, where only a truly unexpected event would cause the exception to be triggered. Before considering any threshold below 1 second, make sure that the desired CPU per node threshold is well above the CPU usage of the vast majority of requests running in that workload.