Latest updates:
For the injectors it is straightforward, but it did need a change to a couple of files.
For the controller, I have made changes external to the solution, in the controlling script in Jenkins.
In this case, I redirect the working directory on the controller to /mnt. So before calling the project
file in Jenkins I do this (with some debug steps included):
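The redirect step might be sketched roughly as below. This is a minimal illustration, not the actual controlling script; the function name and directory layout are my own assumptions:

```shell
#!/bin/bash
# Sketch: copy the project onto a larger volume (e.g. /mnt) and work from there.
# redirect_workdir and the "project" directory name are hypothetical.
redirect_workdir() {
  local src="$1" dest="$2"
  mkdir -p "$dest"
  # Debug step: show available space on the destination volume
  df -h "$dest"
  cp -r "$src" "$dest/"
  cd "$dest/$(basename "$src")"
}
```

In a Jenkins build step this might be invoked as `redirect_workdir "$WORKSPACE/project" /mnt/jmeter-work` before calling the project file.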
At the end, I restore the project files (discarding the working directory):
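The restore step could look something like this sketch; again the function name and the "results" directory are assumptions, not the real script:

```shell
#!/bin/bash
# Sketch: copy results back into the Jenkins workspace, then throw the
# scratch working directory away. restore_results is a hypothetical name.
restore_results() {
  local workdir="$1" ws="$2"
  cp -r "$workdir/results" "$ws/"
  # discard the working directory on /mnt
  rm -rf "$workdir"
}
```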
One more thing I have added to the solution is to stop collating the general results summary report if I
am not using it. This has saved me several gigabytes of space in Jenkins for large test runs. You'll see
in jmeter-ec2.sh extra lines where I decide not to perform various steps:
if [ -z "$REPORT_SUMMARY" ] ; then
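The guard pattern around the collation steps might look like this. The function body here is only an illustration of the idea; collate_summary and its behaviour are my assumptions, not the actual jmeter-ec2.sh code:

```shell
#!/bin/bash
# Sketch: skip summary collation unless REPORT_SUMMARY is set.
# collate_summary is a hypothetical wrapper around the real collation steps.
collate_summary() {
  if [ -z "$REPORT_SUMMARY" ] ; then
    echo "skipping summary collation"
    return 0
  fi
  # the real script would concatenate/process the per-injector results here
  cat "$@" > summary.out
}
```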
With all these changes in place, I have been able to run much larger projects, and I have used much
larger AWS instances to run more threads in my scripts (all the code and assertions require high CPU
usage).
${__javaScript(parseInt(${__P(injectorNo)}) * 20000)}
This code gives a 20-second delay between each injector startup: 8 injectors × 20 seconds = 160
seconds to ramp them all up.
24 Sept 2013
Added a disc space monitor to keep an eye on controller disc space as jobs build up on the Jenkins
slave. You then get a graph in Jenkins, with errors flagged up if you go above the limit you set (an
optional command line setting):
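A disc space check of this kind might be sketched as below. The function name and the percentage-limit interface are assumptions (it also relies on GNU df's --output option), not the monitor actually added to the solution:

```shell
#!/bin/bash
# Sketch: fail if disc usage on a mount point exceeds a percentage limit.
# check_disc_space is hypothetical; GNU df is assumed for --output=pcent.
check_disc_space() {
  local mount="$1" limit="$2" used
  used=$(df --output=pcent "$mount" | tail -1 | tr -dc '0-9')
  echo "disc usage on $mount: ${used}% (limit ${limit}%)"
  [ "$used" -le "$limit" ]
}
```

A Jenkins job could run this each build and let a non-zero exit status flag the error on the graph.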
13th Sept 2013
Whilst running full-scale tests I have found that more load average information is useful, for the
injectors in particular. So I now print out the 1-minute, 5-minute and 15-minute metrics rather than
just the 1-minute value.
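On Linux injectors all three values are available from /proc/loadavg, so the printout could be as simple as this sketch (the function name and output format are my own, not the script's):

```shell
#!/bin/bash
# Sketch: print 1, 5 and 15 minute load averages from /proc/loadavg,
# whose first three fields are exactly those values.
print_load_averages() {
  read -r one five fifteen _ < /proc/loadavg
  echo "load average: 1m=$one 5m=$five 15m=$fifteen"
}
```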
I had a Jenkins build that didn't catch the JMeter stop signal for some reason, so I've added a
failsafe STOP command line argument to jmeter-ec2.sh:
usage:
echo "[STOP] - optional, stop after this time in seconds. This can override the script setting and can be used as a failsafe."
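One way such a failsafe can be implemented is a background watchdog that kills the run after the timeout; this is only a sketch of the general technique, not the code in jmeter-ec2.sh:

```shell
#!/bin/bash
# Sketch: spawn a watchdog that kills the given PID after $timeout seconds.
# start_stop_watchdog is hypothetical; it echoes the watchdog's own PID so a
# normally-finishing run can cancel it.
start_stop_watchdog() {
  local timeout="$1" pid="$2"
  ( sleep "$timeout" && kill "$pid" 2>/dev/null ) >/dev/null 2>&1 &
  echo $!
}
```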
Small change to results-to-workspace.sh: I changed 'rm *' to 'rm -f *'. I had a Jenkins project that
cleared the workspace itself, so this was throwing an unnecessary error.
Earlier changes:
I'm not really aiming to keep versions on here; I'll just provide the latest one at the moment. But I
do have a list of things to be looked at since I first went live. I put the solution up a bit early,
but points 2 and 3 below have now been addressed, so for me it is fully operational. Point 1, errors,
may need looking at for individual projects.
There are a few things that need doing before this becomes an enterprise solution:
1. Check for errors in the logs, don't just rely on perf limits. I had some errors on one run but
still got good timings. I need to look for 'Exception' in the JMeter log (at least). This could be a
separate test (separate jtl file) from the 95th percentile test.
It turns out this is not so simple. My JMeter log file contains, for example, an exception entry which
is acceptable because I am using the CSV option to stop the thread on end of file. This is convenient
for throwing data files at the test, but of course I can't now fail the test just on finding
'Exception'.
For the next point, I am working on one solution now. I'll leave this note here just to highlight that
there are different ways of achieving this depending on your needs. I have used another solution; see
the JM Assertions page for details.
2. The script needs to check for text on the pages and report failures, and these need picking up by
Jenkins as pass/fail, perhaps with an acceptable percentage. One way to do this may be to use an If
Controller to get data lines in the output file for both pass and fail (to find the text) and count
occurrences of both. Again, this could be a separate test (separate jtl file) from the 95th percentile
test.
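Counting the pass and fail occurrences could then be done with a simple grep over the CSV jtl file, along these lines. This is a sketch only: the function name is mine, and it assumes the default CSV output where the success column holds a literal true or false:

```shell
#!/bin/bash
# Sketch: count pass/fail samples in a CSV-format .jtl file by matching the
# success column. count_results is hypothetical; field layout depends on your
# jmeter.save.saveservice settings.
count_results() {
  local jtl="$1" pass fail
  pass=$(grep -c ',true,'  "$jtl") || true
  fail=$(grep -c ',false,' "$jtl") || true
  echo "pass=$pass fail=$fail"
}
```

Jenkins could then parse this line and apply an acceptable-percentage threshold for the build result.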
After running a bit more, I find I need more specific runtime data on screen. I want specific rates
and response times rather than the general summary results:
3. Output specific transaction rates and response times to screen during run time.
The general summary results may not be needed.
5. Are there any better JMeter / Jenkins graphs out there? I did look into this a few
years ago and didn't come up with anything.
6. The 95th percentile at runtime includes pass and fail timings. This should probably
filter on pass results only. I don't see this as urgent as pass and fail counts are also
shown so the user can see how significant this is. A bit more work is needed to do the
filtering and I'm trying to keep that to a minimum for the runtime analysis.
7. The 95th percentile post-run analysis could be more efficient if it used the bespoke assertions
files, and it could offer more options - pass/fail percentiles and not just the 95th. - DONE. See JM
95th v2.
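Filtering the percentile calculation to passed results only (point 6) can be done with a small awk pass over the jtl file, along these lines. This sketch assumes a CSV jtl with elapsed time in field 2 and success in field 8, and uses the simple nearest-index percentile definition; none of this is the solution's actual code:

```shell
#!/bin/bash
# Sketch: 95th percentile of elapsed times, passed samples only.
# pct95_passed is hypothetical; field positions depend on your jtl settings.
pct95_passed() {
  awk -F',' '$8 == "true" { t[n++] = $2 }
             END {
               if (n == 0) { print "no passed samples"; exit 1 }
               # sort the collected times numerically (insertion sort)
               for (i = 1; i < n; i++)
                 for (j = i; j > 0 && t[j-1] > t[j]+0; j--)
                   { tmp = t[j]; t[j] = t[j-1]; t[j-1] = tmp }
               # simple nearest-index percentile definition
               idx = int(0.95 * (n - 1))
               print t[idx]
             }' "$1"
}
```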
I haven't yet run this solution under heavy load or over extensive test periods. No doubt other
issues will surface during use.
I hope this is all of some use out there. If you do use it, an acknowledgement would be good.