If I understand your question correctly, you want to get the timestamp of the action associated with the maximum duration? Try using eventstats:
<your base search> | eventstats max(duration) AS max_duration by Action | where max_duration=duration | table _time, duration, Action
You might get duplicate rows for an Action if multiple events share the same maximum duration. You can add | dedup Action after the where clause to remove those, as shown below.
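For example, the full search with the dedup appended would look like this:
<your base search> | eventstats max(duration) AS max_duration by Action | where max_duration=duration | dedup Action | table _time, duration, Action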
... View more
Regarding storage being the culprit: what type of CPU load do you see when looking at your process list in 'top' or 'vmstat'? Is it mostly USER time or SYSTEM time? If a lot of CPU time is spent on the SYSTEM side, you could be IO bound. If it is all tied up in USER land, you are CPU bound. For IO-bound workloads you will need to increase your available storage IOPS/bandwidth; for CPU-bound workloads, add processors/cores.
How many of your ES data models are you really using, and how many do you have accelerated? Try disabling acceleration on data models that you are not using. You might also want to limit the number of data model acceleration jobs that can run concurrently, as well as the acceleration backfill window, in your datamodels.conf:
[default]
acceleration.max_concurrent = 1
acceleration.backfill_time = -1h
... View more
Could you do something like this?
index=index_1
| fields bib, height, age, weight
| append [search index=index_2 | fields bib, heartrate, pace, bodytemp]
| stats avg(heartrate), values(bib) by age
... View more
I believe the serverclasses are refreshed when the deployment server is restarted. One way you could avoid the agent restart is to check your serverclass.conf and remove or comment out any instances of restartSplunkd = true, as in the sketch below.
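A minimal serverclass.conf sketch on the deployment server, assuming hypothetical serverclass and app names:
# hypothetical serverclass/app names - adjust to your deployment
[serverClass:my_serverclass:app:my_deployment_app]
restartSplunkd = false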
... View more
I think you identified the right way to handle this:
"Does that mean we should have one licensing master for the company, and all our separate team setups should connect to it as license slaves?"
This will also help your organization keep track of who is using the unlimited license and justify the cost of keeping it. Also, if I had to guess, there is probably some legalese somewhere stating that you should not load the same license key on multiple separate systems, though I could not quickly find a reference to that in my Software License Agreement.
... View more
If I understand the question, you want to ignore events with MAC addresses that had activity between 0000-0600. You could do this with a subsearch that finds MAC addresses with events during that window and then excludes them from your main search:
eventtype=mac_activity NOT [search eventtype=mac_activity date_hour>=0 date_hour<6 | dedup mac | fields mac]
... View more
The license_usage.log provides data in bytes. To get the base-2 (GiB) representation, you would indeed divide the byte count by 1024 three times, like below:
index=_internal host=license_manager source=*license_usage.log type="Usage"
| stats sum(b) as b by h
| eval gb=round(b/1024/1024/1024, 3)
... View more
Kormot, try this for the SEDCMD in your props:
SEDCMD-win_dns = s/\(\d+\)/./g
Note the escaped parentheses and the backslash before the d as well; these look to be missing from wbfoxii's props.conf due to formatting.
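As a hypothetical illustration of what that sed expression does, a raw name such as
(3)www(7)example(3)com(0)
would be rewritten to
.www.example.com.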
... View more
If you are unable to remember the password for the admin user, you could try moving the passwd file out of the way and restarting the Splunk service to have it reset back to the default.
It should be under:
$SPLUNK_HOME/etc/passwd
... View more
To see how many events each host is sending via syslog, count them by host:
sourcetype=syslog | stats count by host
Or, you could use something like this to see how much data each host is sending:
sourcetype=syslog | eval length=len(_raw) | stats sum(length) by host
... View more
The get-vm CSV is probably the best way. You can use that as an inputlookup together with a | metadata subsearch to find systems that have not recently sent you data, like so:
| inputlookup append=t vmware_hosts
| fields nt_host
| search NOT [| metadata index=vmware type=hosts earliest=-1d@d latest=now
| where lastTime > relative_time(now(), "-1d@d")
| rex field=host "(?<nt_host>[^\.]+)"
| fields nt_host]
| sort nt_host
edit: added "probably the best way".
... View more
Assuming you have a Splunk Enterprise license, one of the capabilities associated with a user's role is the "change_own_password" capability. If a user's role does not have that capability, they should not be able to change their own password.
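A minimal authorize.conf sketch, assuming a hypothetical custom role named role_consultant; the capability has to be granted here (or inherited via importRoles) for the user to be able to change their own password:
# role_consultant is a hypothetical role name
[role_consultant]
change_own_password = enabled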
... View more
martin_mueller is correct. Just configure your inputs.conf and outputs.conf on your intermediate Universal forwarder appropriately for your environment:
inputs.conf:
[splunktcp://9998]
disabled = 0
compressed = false
outputs.conf:
[tcpout]
useACK = true
indexAndForward = false
forwardedindex.filter.disable = true
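The outputs.conf above still needs a target group pointing at your indexers. A minimal sketch, assuming hypothetical indexer hostnames and the common receiving port 9997:
# hypothetical indexer names and ports - adjust for your environment
[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997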
... View more
Use the table command to reduce your output to just the fields you care about.
index=yma source="/apt/local/logs/error_log" AND "vax" AND "cmd" AND "opcode" | table cmd, vax, opcode, error*
You might have to use the full JSON path for your fields, depending on how you extracted or aliased them, for example:
header.result.command.cmd
header.result.command.vax
header.result.command.opcode
header.result.command.error*
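In that case the table command at the end of the search above would become:
| table header.result.command.cmd, header.result.command.vax, header.result.command.opcode, header.result.command.error*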
... View more
Use the bucket command to group events into 5-minute buckets and break your stats down by _time.
| bucket _time span=5m
| stats count(eval(Login_Status)) AS Total, count(eval(Login_Status=302 AND Recruiter_Status=200 AND QuickSearch_Status=200)) AS Success by _time
| eval Division=Success/Total
| eval Percent=round(Division*100, 2)
| eval Final=Percent . "%"
| table _time, Final
... View more
You could convert your IP to IP decimal format and check if it is between the address begin/end range.
source="A" ip="192.168.0.23"
| rex field=ip "(?<oct1>\d+)\.(?<oct2>\d+)\.(?<oct3>\d+)\.(?<oct4>\d+)"
| eval ip_decimal=(oct1 * 256 * 256 * 256) + (oct2 * 256 * 256) + (oct3 * 256) + (oct4)
Then use ip_decimal to filter your results against the begin/end values from your database.
| where (ip_decimal >= ip_address_begin) AND (ip_decimal <= ip_address_end)
edit: missed a quotation mark.
... View more
I would make sure that the 'consultant' user has permission to view whatever app context the eventtype was created in. To explain in more detail: the 'admin' user probably has read/write permissions for that Splunk app, but 'consultant' does not, so when they use 'eventtype=x' they don't have access to that knowledge object and the search returns no results.
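If you manage app permissions on disk rather than through the UI, a minimal sketch of that app's metadata/local.meta that shares a hypothetical eventtype named 'x' with all roles might look like:
# 'x' is a placeholder eventtype name
[eventtypes/x]
access = read : [ * ], write : [ admin ]
export = system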
... View more
Try replacing the 'where' with an eval in your stats command:
index=test
| stats count(eval(Value>=95)) AS Events by Host
That should result in either a count of the events where Value >= 95, or 0 if no events meet that criterion.
... View more