All Posts



You could do the same thing for MaxExecutionTime by adding two more lines to the search (note that the second eval must append colorCode2, not colorCode):

... | eval colorCode2 = if(MaxExecutionTime > Treshold, "#D94E17", "#55C169") ... | eval MaxExecutionTime = mvappend(MaxExecutionTime,colorCode2) ...

And another format section:

<format type="color" field="MaxExecutionTime">
  <colorPalette type="expression">mvindex(value,1)</colorPalette>
</format>

So now it reads:

<row>
  <panel>
    <html depends="$hidecsspanel$">
      <style>
        #ColoredTable table tbody td div.multivalue-subcell[data-mv-index="1"]{ display: none; }
      </style>
    </html>
    <title>TEST XRT Execution Dashboard</title>
    <table id="ColoredTable">
      <search>
        <query>index="aws_app_corp-it_xrt" sourcetype="xrt_log" "OK/INFO - 1012550 - Total Calc Elapsed Time"
| rex field=source "(?&lt;Datetime&gt;\d{8}_\d{6})_usr@(?&lt;Username&gt;[\w\.]+)_ses@\d+_\d+_MAXL#(?&lt;TemplateName&gt;\d+)_apd@(?&lt;ScriptName&gt;[\w]+)_obj#(?&lt;ObjectID&gt;[^.]+)\.msh\.log"
| rex "Total Calc Elapsed Time\s*:\s*\[(?&lt;calc_time&gt;\d+\.\d+)\]\s*seconds"
| stats avg(calc_time) as AverageExecutionTime max(calc_time) as MaxExecutionTime by ScriptName, ObjectID, TemplateName
| eval AverageExecutionTime = round(AverageExecutionTime, 0)
| lookup script_tresholds ObjectID ScriptName MaxLTemplate as "TemplateName" OUTPUT Threshold AS "Treshold"
| eval colorCode = if(AverageExecutionTime > Treshold, "#D94E17", "#55C169")
| eval colorCode2 = if(MaxExecutionTime > Treshold, "#D94E17", "#55C169")
| table ScriptName, AverageExecutionTime, MaxExecutionTime, Treshold, ObjectID, TemplateName, colorCode, colorCode2
| search $ScriptName$ $ObjectID$
| sort - AverageExecutionTime
| eval AverageExecutionTime = mvappend(AverageExecutionTime,colorCode)
| eval MaxExecutionTime = mvappend(MaxExecutionTime,colorCode2)
| fields - colorCode colorCode2</query>
        <earliest>$earliest$</earliest>
        <latest>$latest$</latest>
      </search>
      <option name="refresh.display">progressbar</option>
      <format type="color" field="AverageExecutionTime">
        <colorPalette type="expression">mvindex(value,1)</colorPalette>
      </format>
      <format type="color" field="MaxExecutionTime">
        <colorPalette type="expression">mvindex(value,1)</colorPalette>
      </format>
    </table>
  </panel>
</row>
hello @Knust : try this query:

| rest /services/server/info
| eval new_version = "9.4.0" ```replace it with the version you're upgrading to```
| eval current_version = version
| eval old_version = if(new_version > current_version, "yes", "no")
| table current_version new_version old_version

In the table: if the old_version column says "yes", you need to upgrade; if it says "no", no need. You can also rename the columns however you prefer using the | rename command. Please let me know if this helps.
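One caveat with the query above: comparing version strings with `>` is lexicographic, so for example "10.0.0" sorts lower than "9.4.0". A sketch that compares the major/minor/patch components numerically instead (it assumes three-part versions; field names are illustrative):

```
| rest /services/server/info
| eval new_version = "9.4.0"
| eval cur = split(version, "."), new = split(new_version, ".")
| eval needs_upgrade = case(
    tonumber(mvindex(new,0)) > tonumber(mvindex(cur,0)), "yes",
    tonumber(mvindex(new,0)) < tonumber(mvindex(cur,0)), "no",
    tonumber(mvindex(new,1)) > tonumber(mvindex(cur,1)), "yes",
    tonumber(mvindex(new,1)) < tonumber(mvindex(cur,1)), "no",
    tonumber(mvindex(new,2)) > tonumber(mvindex(cur,2)), "yes",
    true(), "no")
| table version new_version needs_upgrade
```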
Maybe this old post helps you? https://community.splunk.com/t5/Splunk-Search/dashboard-time-token-with-multiple-ealiest-latest-search/m-p/710873
If you have a test/dev environment, you could easily add some roles with different search filters. Assign those to your test users and use the Job Inspector to see how Splunk creates the final SPL for those queries.
Thanks @ITWhisperer. Can you please let me know how to set the field "info_min_time"? I've used the Time input as below:

<input type="time" token="field1">
  <label>TIME</label>
  <default>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </default>
  <change>
    <eval token="token_time.earliest_epoch">if(isnum('earliest'),'earliest',relative_time(now(),'earliest'))</eval>
    <eval token="token_time.latest_epoch">if(isnum('latest'),'latest',relative_time(now(),'latest'))</eval>
  </change>
</input>
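For reference, info_min_time is not a field you set yourself; it is produced by the `addinfo` command from the search's effective time range. A minimal sketch (the index and formatting are illustrative):

```
index=_internal earliest=-24h@h latest=now
| addinfo
| eval range_start = strftime(info_min_time, "%F %T")
| eval range_end   = strftime(info_max_time, "%F %T")
| table range_start range_end
```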
When we have roleA and roleB with these srcFilters:

roleA: source=A
roleB: source=B

then Splunk adds those to every SPL query these users run, like:

input: index=a foobar
real SPL: index=a source=A foobar

Then if a user has both roles assigned to him/her:

input: index=a foobar
real SPL: index=a source=A AND source=B AND foobar

I suppose this example shows how these srcFilters work and where they lead.
@isoutamo "If possible use some template with automation tool to generate those indexes.conf, auth*.conf files and use apps or eg. Terraform or ansible to manage your splunk environment. In that way your management overhead isn't so big and usually you will get better quality too as side effects." ---> I don't have any coding knowledge to date and am not aware of this setup. Can you please explain more about it? If that isn't possible soon, what would be the alternative solution? Creating indexes for each config ID (I thought that is also not a good idea)? And what if a single user has only a single role, or a user has different roles (different config IDs) within the same index? Would he still get index=A source=123456 AND source=56789?
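To make the template idea above concrete, here is a minimal, hypothetical sketch of what generating indexes.conf from a list could look like with an Ansible Jinja2 template (every name and variable here is made up for illustration):

```
# templates/indexes.conf.j2 -- hypothetical template rendered by Ansible
{% for idx in splunk_indexes %}
[{{ idx.name }}]
homePath   = $SPLUNK_DB/{{ idx.name }}/db
coldPath   = $SPLUNK_DB/{{ idx.name }}/colddb
thawedPath = $SPLUNK_DB/{{ idx.name }}/thaweddb
frozenTimePeriodInSecs = {{ idx.retention | default(7776000) }}
{% endfor %}
```

You would keep the splunk_indexes list in a vars file and deploy the rendered file as part of an app; the same pattern works for generating authorize.conf roles, so adding a new config ID becomes a one-line change instead of hand-editing .conf files.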
@richgalloway "IOW, if you add "Source=123456" to the search filter with the intention of restricting results from index=A, it also will restrict the results from index=B and all other indexes." ---> I didn't get this point. I will be creating a role A, selecting index A, and then restricting it with a search filter (source=123456). So ideally user A will be assigned role A and will have access to index A and source 123456. These sources are unique and there is only one role per source. This role may be assigned to multiple users. But how would index B be involved here? Please clarify.
Back up not only …/etc: the UF's internal bookkeeping is under …/var, so you must back that up as well.
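For a default Linux UF install, a backup along these lines covers both trees (paths assume /opt/splunkforwarder; adjust to your environment):

```
# Stop the forwarder first so the state under var/ is consistent,
# then archive both the configuration and the internal bookkeeping.
/opt/splunkforwarder/bin/splunk stop
tar -czf uf-backup-$(date +%Y%m%d).tar.gz \
    /opt/splunkforwarder/etc \
    /opt/splunkforwarder/var
/opt/splunkforwarder/bin/splunk start
```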
hello @rukshar, the stanza looks good. Make sure it is placed in the right location, either:

/opt/splunkforwarder/etc/system/local
or: /opt/splunkforwarder/etc/apps/<yourapp>/local

Also double-check that you have an outputs.conf, and a quick restart won't hurt.
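To verify the stanza is actually being picked up (and which copy of the file wins in precedence), btool is handy; paths assume a default Linux install:

```
# Show the effective monitor stanzas and the file each setting comes from
/opt/splunkforwarder/bin/splunk btool inputs list monitor --debug

# Confirm an output destination is configured
/opt/splunkforwarder/bin/splunk btool outputs list --debug
```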
Yes, you can use a service user to own KOs and e.g. run those on schedules. Actually I prefer this way if possible. Probably the most important point is that service users usually don't leave the company. When a normal user leaves and he/she has scheduled searches, alerts etc., those stop working after the user or his/her roles are removed. Another advantage is that you can give the service user capabilities you don't want to give to normal users, or allow it more resources. This also means that alerts etc. don't consume a normal user's quota. A negative is that when no one personally owns those KOs, there can be a lot of unnecessary and unused KOs, and even scheduled alerts, reports etc. running for a long time and consuming resources. Anyhow, I see more advantages in using them than in avoiding them. Usually I prefer several service users, one per integrated source system, each with different roles assigned.
I believe it's doable; just make sure to go over a few prerequisites first:

1. Check compatibility: refer to the Splunk documentation for more info about compatibility and upgrade paths.
2. Check system requirements: ensure that your system meets the requirements for Splunk 9.4 or whatever version you're upgrading to.
3. Most importantly, DON'T forget to take a backup of your /etc in case something goes wrong.
Personally I try to avoid search filters as much as possible, even if it means creating more indexes etc. If you have an extremely simple environment with only one role per user and one use case, they could work without issues. But as @richgalloway already said, when there are different search filters for each role and a person has more than one role, those filters are combined with the AND operator (not OR)! In real life this usually leads to a situation where you can't find anything, or at least not what you are looking for. If possible, use some template with an automation tool to generate those indexes.conf and auth*.conf files, and use apps or e.g. Terraform or Ansible to manage your Splunk environment. That way your management overhead isn't so big, and you usually get better quality as a side effect.
try this:

| rest /services/data/indexes
| table title searchFactor replicationFactor

and to check if data is searchable:

| metadata type=sources
| search source IN (your_datasource)
I already tried those after fixing the delimiters etc., but SPL expects a _time field which isn't present in this example file.
If you add a search filter for a role then the filter applies to *all* searches made by users with that role. IOW, if you add "Source=123456" to the search filter with the intention of restricting results from index=A, it also will restrict the results from index=B and all other indexes.
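For reference, the setting being discussed lives in authorize.conf, where the actual option name is srchFilter; a minimal sketch with made-up role and filter values:

```
# authorize.conf -- sketch; role name, index, and filter are illustrative
[role_rolea]
srchIndexesAllowed = index_a
srchFilter = source=123456
```

Because the filter is a property of the role rather than of any index, it is appended to every search the role's members run, regardless of which index they query, which is exactly the pitfall described above.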
Sample events have been provided below, but, unfortunately, they don't match the supplied lookup and are not in a good format (fields and headers have different delimiters and are consequently not aligned well!)
You can put the restriction on the role (and assign the role to the relevant user) and this will work for source (amongst other fields).
We're also seeing similar results in our organization. Got flagged for the same binary yesterday. There is no mention of the binaries or their usage in the AME documentation, but they are used for license validation in the product. You can see the Python script where they are referenced and license validation occurs: alert_manager_enterprise\lib\ame\utilities\LicenseValidatorUtility.py. I'm not entirely sure where else the binaries are referenced at this time, but without access to the source code of the binaries (vsl & vsw) we are choosing to treat them as potentially malicious and acting accordingly. I uploaded vsl to VirusTotal as well, but it appears to be coming back clean, for now. We are working to determine whether we want to remove only vsw.exe from our app deployment or remove the app entirely. I have reached out to the developers via the contact information on their website and will report back what they have to say about it. This is disheartening because I'm a long-time fan of the Alert Manager, and now Alert Manager Enterprise, application. I'll continue to monitor this thread for suggested recommendations as the situation evolves.
Hi @isoutamo, Yes, I'm fully aware of this solution and would also use it if I had physical access to the box, but I don't. I do have REST access, which is why I'm looking for a REST solution. PS. The deprecated REST package app still works; there is just no link to get the SPL file anymore.
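If the goal is to pull a dashboard's source over REST, the views endpoint returns its XML (host, credentials, app, and dashboard name below are placeholders):

```
# Fetch a dashboard's definition; the XML is in the eai:data field of the response
curl -k -u admin:changeme \
  "https://splunk.example.com:8089/servicesNS/-/search/data/ui/views/my_dashboard"
```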