Quick Hit Tips – CPUSPEED, RESTRICTIVE, and DB2_WORKLOAD

13 Responses

  1. Chris Aldrich says:

    RESTRICTIVE is cool. But there are some caveats with it. You may find that some tasks you used to be able to do, you can no longer do, because PUBLIC had underlying privileges you didn’t even know about. For example, you can’t just grant CONNECT and SELECT to give SELECT access. You also need to grant USAGE on SYSDEFAULTUSERWORKLOAD, because everything (other than admin requests) routes through that workload. PUBLIC has access to that workload by default; make the database RESTRICTIVE and now no one has access to it, so it must be granted explicitly.

    Other things you may need to grant access to are the DB2SECURITYLABEL and SQLSTATE functions. You may also need to hunt and peck for the CLP and CLI packages and grant EXECUTE on them (remember, those change every release and possibly every fix pack…).
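
    A minimal sketch of the kind of grants Chris is describing, run by an ID with SECADM or ACCESSCTRL authority; the user, table, and package names below are only placeholders:

        GRANT CONNECT ON DATABASE TO USER jsmith;
        GRANT USAGE ON WORKLOAD SYSDEFAULTUSERWORKLOAD TO USER jsmith;
        GRANT SELECT ON TABLE appschema.orders TO USER jsmith;
        -- CLP/CLI packages also lose their implicit PUBLIC EXECUTE in a
        -- RESTRICTIVE database; the package name here is illustrative and
        -- will vary by release and fix pack.
        GRANT EXECUTE ON PACKAGE NULLID.SQLC2J26 TO USER jsmith;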

    As you can see, this gets sticky really quickly. The easiest way to handle it is to use Role Based Access Control (RBAC). Now you can easily dole out those privileges to the people who need them, while locking out everyone else. (I’ll add another infomercial here about using Trusted Contexts to lock down IDs to specific IPs or hosts. Very handy.)
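
    As a rough illustration of that role-based approach (the role, user, and table names are made up), the grants collapse to something like:

        CREATE ROLE app_reader;
        GRANT CONNECT ON DATABASE TO ROLE app_reader;
        GRANT USAGE ON WORKLOAD SYSDEFAULTUSERWORKLOAD TO ROLE app_reader;
        GRANT SELECT ON TABLE appschema.orders TO ROLE app_reader;
        -- membership then becomes a single grant or revoke per user
        GRANT ROLE app_reader TO USER jsmith;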

    Just my two cents. 🙂

    • mkrafick says:

      Chris,

      I’ve been using more Role Based Access Control over the past few months, as opposed to granting access to AIX groups. I find it gets around all the “gotchas” of AIX groups.

      Do you find the same thing? Do you prefer role based groups?

      -Mike

      • Chris Aldrich says:

        Group access is horrid. I am realizing that now, since that is how we have set things up. It just allows too many oopses in security. I am planning on cutting over to RBAC to have tighter control over who can see and do what. Plus, the benefit of moving to roles is that I can start to use Trusted Contexts (as I noted above) to lock down IDs to specific systems. This is especially good for service accounts. Set up your Trusted Context for a particular ID and give it the roles you want *ONLY* within that Trusted Context. If that ID tries to connect outside of that IP or host, it can’t connect (also assuming you have either revoked CONNECT from PUBLIC or have a RESTRICTIVE database). *Very handy*. I like this a LOT and am trying to move that way. Plus it also makes it easier to start using Row and Column Access Control (RCAC).
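
        A rough sketch of what such a Trusted Context definition looks like; the context name, service ID, IP address, and role below are placeholders:

            CREATE TRUSTED CONTEXT ctx_batch_svc
              BASED UPON CONNECTION USING SYSTEM AUTHID svc_batch
              ATTRIBUTES (ADDRESS '192.0.2.10')
              DEFAULT ROLE app_reader
              ENABLE;
            -- svc_batch only picks up app_reader when it connects from
            -- 192.0.2.10; combined with a RESTRICTIVE database (or CONNECT
            -- revoked from PUBLIC), it cannot connect from anywhere else.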

  2. Ken Warren says:

    Hi Mark.

    I’ve noticed the calculated value depends on how busy the server is. I’m guessing that’s something to do with our platform… higher values are returned when the LPAR is busier.

    We run on IBM 795 frames with LPARs using CPU virtualization. My guess is the busier the LPAR, the more CPU resource it’s allowed. Whatever the calculation does, it must ‘see’ more CPUs.

    That begs the question: if your workload varies during the day, affecting the CPU resources required, is it better to recalculate at the busiest time, the quietest time, or somewhere in the middle?

    Regards
    Ken.

    • Ken Warren says:

      Michael, not Mark! Sorry!

    • mkrafick says:

      Ken,

      You discovered something I didn’t, and that makes a lot of sense. I couldn’t understand why, when I ran the command to re-evaluate CPUSPEED 10 times, I could get 10 different (sometimes greatly different) answers. How busy the system is would affect the calculated speed.

      As for when the best time to do this is, good question. A second e-mail discussion has spun off about this, and I may be going to the labs to ask a clarifying question. I will bring this to their attention as well for a “rule of thumb”. I’ll let you know what they say.

      Thanks for the feedback.

      • Ken Warren says:

        Hi Michael.

        I had to use ‘db2 update dbm cfg using CPUSPEED -1’ to change the value.
        The article suggests ‘db2set CPUSPEED -1’ but I had no joy with that.
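
        For reference, a minimal sketch of that recalculation round trip from an AIX/Linux shell prompt:

            db2 get dbm cfg | grep -i cpuspeed     # note the current value first
            db2 update dbm cfg using CPUSPEED -1   # have DB2 re-measure it
            db2 get dbm cfg | grep -i cpuspeed     # see what it came up with
            # Depending on version and settings, the new value may not be
            # picked up until the instance is recycled (db2stop / db2start).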

        Regards
        Ken.

        • mkrafick says:

          Yep, embarrassingly enough, I had put in the wrong command. I edited the article to correct it. Thanks for the update.

  3. tsardina says:

    Good info on some params that can easily be missed!

    I’ve found that the best way to set CPUSPEED is by having a good benchmark test. Remember, it’s not telling the CPU how fast to run an instruction; it’s telling the optimizer how fast it thinks an instruction will run. I don’t necessarily need the most accurate setting to match the actual CPU speed; I really want the best set of plans to get my workload done the fastest, balanced against the least amount of resources.

    I set CPUSPEED in a similar way to AVG_APPLS. I’ve rarely found I need to set AVG_APPLS greater than 2, even though I’ve had workloads with many more concurrent apps than that. For CPUSPEED, sometimes I use the default value by setting it to -1, and sometimes I set it manually. It depends on the results of my testing.
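
    For what it’s worth, both of the parameters mentioned above are one-line changes from a Unix-style shell; the specific values and the database name MYDB below are placeholders that would come out of your own testing:

        db2 update dbm cfg using CPUSPEED -1            # default: let DB2 measure it
        db2 update dbm cfg using CPUSPEED 4.0e-07       # or pin it to a value your benchmark preferred
        db2 update db cfg for MYDB using AVG_APPLS 2    # AVG_APPLS is a database-level parameter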

    • mkrafick says:

      Excellent advice. So my question would be: what are you looking at when you benchmark? CPU strain, number of transactions processed, etc.? I think we all have our favorites to look at, but what are the one or two you focus on for this tuning specifically?

      I’m always looking for new benchmarks.

      • tsardina says:

        For me this is usually a once-and-done kind of parameter, except for the scenarios listed above in this blog post. I normally don’t try tuning this one. But when I do, I’m looking at query runtimes and queries per second, combined with changes in explain plans; pretty straightforward. I’m looking for the setting that gives the best throughput without causing any issues.

        Not very exciting I know, but that’s the key to this one for me.
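
        A rough sketch of what comparing explain plans from a shell can look like (MYDB and the query are placeholders, and the explain tables are assumed to exist already, e.g. created from sqllib/misc/EXPLAIN.DDL):

            db2 connect to MYDB
            db2 set current explain mode explain     # capture plans without executing
            db2 "SELECT COUNT(*) FROM appschema.orders WHERE order_date > CURRENT DATE - 30 DAYS"
            db2 set current explain mode no
            db2exfmt -d MYDB -1 -o plan_before.txt   # format the most recent explain
            # change CPUSPEED, flush the package cache / rebind as needed,
            # repeat into plan_after.txt, and then diff the two plan files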

  4. Scott says:

    I had some wicked problems with memory config such that 9.7 was crashing on Windows, so I ran a db2 update dbm cfg reset, but now my cpuspeed is quite different.

    What does that reset do to CPUSPEED? Should I set it to -1?

    • mkrafick says:

      Interesting. From the Knowledge Center, we know that resetting the dbm cfg will “set documented default configuration values”. For CPUSPEED, the default is -1, which means “reassess and set”. So you essentially set CPUSPEED to -1.

      So my question is:
      Did you have a baseline of how queries ran BEFORE the reset and how are they running now?

      I started to hyperfocus on CPUSPEED after I wrote that article and found I could run it 10 times in 10 minutes and get a different result each time. It seemed too fickle and a little too “willy nilly” (per my manager’s comments) for me to put in place and truly test. (I worked for a fairly conservative company.)

      My recommendation is this:
      1. If queries are faster, keep it.
      2. If you don’t know, or are nervous, set the value back to its original setting. But leaving it where it is should only help.

      Sorry for the ambiguous answer, but it really depends on testing against a baseline. However, I have only heard good things from issuing a -1.
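
      If it helps, a small sketch of the “keep a way back” idea from a Windows DB2 Command Window (the saved value shown is made up):

          REM record the current value before any reset
          db2 get dbm cfg | findstr /i CPUSPEED
          REM the reset sets documented defaults, so CPUSPEED goes back to -1
          db2 reset dbm cfg
          REM only if you decide to restore the previously recorded value
          db2 update dbm cfg using CPUSPEED 2.282997e-07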
