Hi, it seems that this feature is documented in the Splunk Enterprise REST API User Manual only; I cannot find that manual for Splunk Cloud. I suppose that feature is not available to anyone other than users who have the admin role? In SCP that role is restricted to the Splunk Cloud Ops team only, not to any customers. If needed, you can create a support ticket and ask whether this is a valid assumption. r. Ismo
Hi @PickleRick, I tried the "by" condition, so it is showing counts by correlationId, but I want to show the count of the lastRunTime field. If I use lastRunTime, the chart shows all the counts, but I need to club all the values into one. Below is the query; I need to show the value as LastRunTimeCount - 79 in the pie chart.
content.lastRunTime="*" content.lastRunTime!="NA"
[ search index="Test" applicationName="scheduler" content.lastRunTime="*" content.lastRunTime!="NA"
| stats latest(correlationId) as correlationId
| table correlationId
| format ]
| rename content.lastRunTime as LastRunTimeCount
| stats count(LastRunTimeCount) as total by correlationId
Hi, this is quite a common question and you can find a lot of answers to it via Google/Bing or whatever you want to use. Here are some links for you. There are a lot of options for finding hosts or sources that stop submitting events:
Meta Woot! https://splunkbase.splunk.com/app/2949/
TrackMe https://splunkbase.splunk.com/app/4621/
Broken Hosts App for Splunk https://splunkbase.splunk.com/app/3247/
Alerts for Splunk Admins ("ForwarderLevel" alerts) https://splunkbase.splunk.com/app/3796/
Monitoring Console https://docs.splunk.com/Documentation/Splunk/latest/DMC/Configureforwardermonitoring
Deployment Server https://docs.splunk.com/Documentation/DepMon/latest/DeployDepMon/Troubleshootyourdeployment#Forwarder_warnings
Some helpful posts:
https://lantern.splunk.com/hc/en-us/articles/360048503294-Hosts-logging-data-in-a-certain-timeframe
https://www.duanewaddle.com/proving-a-negative/
r. Ismo
Thanks for your help! Combining the data sets using "| stats values(*) as * by Account_Name", I was able to get what I'm looking for:
(index="wineventlog" AND sourcetype="wineventlog" AND EventCode=4740) OR
(index="activedirectory" AND sourcetype="ActiveDirectory" AND sAMAccountName=* AND OU="Test Users")
| eval Account_Name = lower( coalesce( Account_Name, sAMAccountName))
| search Account_Name=*
| stats values(*) as * by Account_Name
| where EventCode=4740 AND OU="Test Users"
| fields Account_Name EventCode OU
I'm working on a Splunk data feed outage alert: "The following data feed has been detected down: Index=a sourcetype=splunkd host=b." Can someone point me in the right direction for troubleshooting this issue? Thanks a lot.
If you haven't implemented reading and queuing HEC acks, then it cannot work. You will definitely lose some events without that implementation. Also, even if you have implemented it with an LB deployed, you will probably get some duplicate events, as it's not 100% certain that you will check the ack from the same individual HF/HEC node where you sent the original event. I'm not sure if HEC ack also brings HF-level ack into use; personally I would enable it manually. As I said, if I use HEC ack, I also enable ack in inputs.conf on the whole path from the HEC node to all indexers. If your HF crashes before the HEC client has read that ack, then your client should send those events again and you will get duplicates. The same applies if you have many HFs behind an LB and sticky sessions didn't work, or any HF crashes/stops serving. You should implement your HEC client so that there is some timeout to prevent it from waiting forever; once the timeout is reached, it sends the event again. There will be some situations when you will never get the ack for an individual event!
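The client-side logic above (poll for the ack, re-send on timeout, accept possible duplicates) can be sketched roughly as follows. This is a minimal illustration, not a real HEC client: `send` and `check_ack` are hypothetical stand-ins for HTTP calls to /services/collector and /services/collector/ack, and all parameter names are illustrative.

```python
import time

def send_with_ack(send, check_ack, event,
                  ack_timeout=10.0, poll_interval=1.0, max_attempts=3):
    """Send an event and re-send it if its HEC ack is not observed in time.

    `send(event)` returns an ackId; `check_ack(ack_id)` returns True once the
    indexer has acknowledged the event.  Both are illustrative stand-ins for
    real HTTP calls.  Note: re-sending after a timeout can produce duplicate
    events, exactly as described above.
    """
    for _attempt in range(max_attempts):
        ack_id = send(event)
        deadline = time.monotonic() + ack_timeout
        while time.monotonic() < deadline:
            if check_ack(ack_id):
                return True  # indexer confirmed the event
            time.sleep(poll_interval)
        # Timeout: maybe the HF crashed, or the LB routed the ack query to a
        # different HF.  Fall through and send the event again.
    return False  # give up after max_attempts; caller must handle this
```

The timeout is what prevents the client from waiting forever when an ack never arrives.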
Okay, good point, I must have left my brain somewhere far away... Indeed, max(bytes) is 47KB and avg is 2KB, less than 1MB! Thank you all for your responsiveness.
Agent-based: use the Splunk OpenTelemetry Collector ( link ) or the Splunk Universal Forwarder ( link ).
Agent-less: use the Splunk Add-on for AWS ( link ), which calls the AWS REST API.
CIM-compliance is different and has nothing to do with whether a field can be used in the tstats command. CIM-compliance means a field has a name and value described in the CIM manual (https://docs.splunk.com/Documentation/CIM/5.3.2/User/Howtousethesereferencetables). The only fields that can be used in tstats are those created at index time or those in an accelerated data model.
While sending a REST API request to change the owner of a knowledge object, I am getting the following error: "You do not have permission to share objects at the system level", even though the user has the "sc_admin" role. Is there a specific capability needed for this that is missing?
Rounding errors? When you're doing stats sum(eval(round(bytes/(1024*1024),2))) as MB you lose part of each value, since you're "cutting off" everything after two decimal digits before summing. So the error is expected.
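The effect is easy to reproduce outside Splunk. A quick plain-Python sketch (the byte counts are made up for illustration):

```python
# Hypothetical per-event sizes in bytes -- all well under 1 MB.
byte_counts = [1536, 2721, 999, 4100, 777]

# What  stats sum(eval(round(bytes/(1024*1024),2)))  effectively does:
# round each value to 2 decimals FIRST, then sum the rounded values.
mb_round_then_sum = sum(round(b / (1024 * 1024), 2) for b in byte_counts)

# Summing the raw bytes and rounding ONCE at the end keeps the total.
mb_sum_then_round = round(sum(byte_counts) / (1024 * 1024), 2)

print(mb_round_then_sum)  # 0.0  -- every small value rounds down to zero
print(mb_sum_then_round)  # 0.01 -- the total survives
```

The same reasoning explains why summing values already rounded to two decimals inside the search drifts away from the true total: do the division and rounding after the sum, not before.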
OK. Start by cutting the search down to the initial search and see if the results are what you expect. In other words, check if search earliest=-24h host="AAA" index="BBB" sourcetype="CCC" returns any results at all. If not, it means you have a problem on the ingestion end: you have no events at all to search (or maybe you're looking for the wrong data). Then add one step after another until the results stop being in line with what you expect. That will be the step that is wrong.
Splunk on its own doesn't have a z/OS component, so your data has to be going through some external stages before it reaches Splunk, and we don't know what your ingestion process looks like. If the events are written by some solution to an intermediate file picked up later by a forwarder, check the file contents and see if those \xXX codes are there. If the events are pushed by syslog, sniff the traffic with tcpdump and see if they are there. Most probably the answer to one of those questions (or a similar one for your particular transport channel) will be affirmative, and that will mean the issue is external to Splunk: you're ingesting badly formatted data.
Sounds like a data problem. You need to do some further analysis of the commonalities among the failing messages and the differences from the successful messages, not just in the text, but in how the messages are produced, where they are produced, how they are stored, when they are produced, etc.
In Splunk you need to configure alert actions; as you can see, many come out of the box. For your use case, you have a few options to explore:
1. Use this add-on (it may help with some config/testing, so it needs to be installed): https://splunkbase.splunk.com/app/5520
2. Develop your own action: https://dev.splunk.com/enterprise/docs/devtools/customalertactions/
It shows out of memory in the log. This could be caused by large volumes of data coming in from O365 events. You might consider changing the interval in the inputs for the collection. (I don't know if this will fix it, but it may help with the different inputs you have; it sounds like it's bottlenecked somewhere.) Check the memory usage on the host where this add-on is running (normally a HF); perhaps you need to increase it if it's very low. Have a look at the troubleshooting guide, as there may be items there to help you investigate further: https://docs.splunk.com/Documentation/AddOns/released/MSO365/Troubleshooting
Thank you so much. I just found out that it is all about search: whenever you receive and index data, to create an alert you should run a search on that index for the specific condition you want to detect inside it. Example: Redshift consecutive login failures.