All Posts

Thanks for your help. Combining the data sets using "| stats values(*) as * by Account_Name", I was able to get what I'm looking for: (index="wineventlog" AND sourcetype="wineventlog" AND EventCode=4740) OR (index="activedirectory" AND sourcetype="ActiveDirectory" AND sAMAccountName=* AND OU="Test Users") | eval Account_Name = lower( coalesce( Account_Name, sAMAccountName)) | search Account_Name=* | stats values(*) as * by Account_Name | where EventCode=4740 AND OU="Test Users" | fields Account_Name EventCode OU
I'm working on a Splunk data feed outage alert: "The following data feed has been detected down: Index=a sourcetype=splunkd host=b". Can someone point me in the right direction for troubleshooting this issue? Thanks a lot.
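As a first check (a minimal sketch reusing the index/sourcetype/host placeholders from the alert text), you could confirm when the last event from that feed actually arrived:

| tstats latest(_time) as last_event where index=a sourcetype=splunkd host=b
| eval last_event=strftime(last_event, "%Y-%m-%d %H:%M:%S")

If last_event is recent, the feed is fine and the alert logic needs attention; if it is old or missing, the outage is real and the next place to look is the forwarder on that host.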
If you haven't implemented reading and queuing HEC acks then it cannot work. Without that implementation you will definitely lose some events. Also, even if you have implemented it with an LB deployed, you will probably still get some duplicate events, since it's not guaranteed that you will check the ack from the same individual HF/HEC node you sent the original event to. I'm not sure whether HEC ack also brings HF-level ack into use; personally I enable it manually. As I said, if I use HEC ack I also enable ack in inputs.conf along the whole path from the HEC node to all indexers. If your HF crashes before the HEC client has read the ack, your client should send those events again and you will get duplicates. The same applies if you have many HFs behind an LB and sticky sessions don't work, or any HF crashes/stops serving. You should implement your HEC client so that there is a timeout to prevent it from waiting forever; once the timeout is reached it should send the event again. There will be situations where you never get the ack for an individual event!
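For reference, a minimal sketch of what enabling acknowledgment along the path can look like (token name, stanza name, and server list are placeholders, not from the original post). On the HEC-receiving HF, inputs.conf:

[http://my_hec_input]
token = <your-token-guid>
useACK = true

and outputs.conf on the same HF, so events are also acknowledged by the indexers before the HF considers them delivered:

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true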
Okay, good point, I must have left my brain somewhere far away... Indeed, max(bytes) is 47KB and avg is 2KB, less than 1MB! Thank you all for your responsiveness.
Agent-based: Use the Splunk OpenTelemetry Collector ( link ) or the Splunk Universal Forwarder ( link ). Agent-less: Use the Splunk Add-on for AWS ( link ), which calls the AWS REST API.
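For the agent-based case, a minimal inputs.conf sketch for a Universal Forwarder monitoring a log file (path, index, and sourcetype are hypothetical examples):

[monitor:///var/log/myapp/app.log]
index = myapp
sourcetype = myapp:log
disabled = false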
I believe you have to use the full name of the field ("entry.name", for example).
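For example (the original thread isn't shown here, so treat "entry.name" as a placeholder), a dotted field name can be referenced in full by quoting it, e.g. with single quotes in eval:

... | eval name='entry.name' | stats count by name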
CIM-compliance is different and has nothing to do with whether a field can be used in the tstats command. CIM-compliance means a field has a name and value described in the CIM manual (https://docs.splunk.com/Documentation/CIM/5.3.2/User/Howtousethesereferencetables). The only fields that can be used in tstats are those created at index time or those in an accelerated datamodel.
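For example (assuming the CIM Authentication datamodel is accelerated in your environment), both of these are valid tstats searches, the first over index-time fields and the second over a datamodel:

| tstats count where index=wineventlog by sourcetype host
| tstats count from datamodel=Authentication where Authentication.action=failure by Authentication.user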
Please give me examples of agentless and agent-based onboarding in Splunk.
While sending a REST API request to change the owner of a knowledge object, I am getting the following error: "You do not have permission to share objects at the system level", even though the user has the "sc_admin" role. Is there any specific capability that is missing that is needed for this?
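For context, a minimal sketch of the kind of request involved (owner, app, object name, and credentials are placeholders): ownership is changed by POSTing to the object's acl endpoint, and the call must include both owner and sharing:

curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/search/saved/searches/My%20Report/acl \
  -d owner=newowner -d sharing=app

If sharing=global is requested, the calling user needs permission to write objects at the system level, which is typically where this error message comes from.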
Rounding errors? When you're doing stats sum(eval(round(bytes/(1024*1024),2))) as MB you lose part of each value, since you're "cutting off" everything after two decimal digits before summing. So the discrepancy is expected.
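A sketch of the alternative (field names as in the post): sum the raw bytes first, then round once at the end, so nothing is truncated per event:

... | stats sum(bytes) as total_bytes | eval MB=round(total_bytes/(1024*1024), 2)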
OK. Start by cutting the search down to the initial search and see if the results are what you expect them to be. In other words, check if search earliest=-24h host="AAA" index="BBB" sourcetype="CCC" returns any results at all. If not, it means you have a problem on the ingestion end: you have no events at all to search (or maybe you're looking for the wrong data). Then add one step after another until the results stop being in line with what you expect. That will be the step that is wrong.
Splunk on its own doesn't have a zOS component, so your data has to be going through some external stages before it reaches Splunk. We don't know what your ingestion process looks like. If the events are written by some solution to an intermediate file picked up later by a forwarder, check the file contents and see if those \xXX codes are there. If the events are pushed by syslog, sniff the traffic with tcpdump and see if they are there. Most probably the answer to one of those questions (or a similar one regarding your particular transport channel) will be affirmative, and that will mean that the issue is external to Splunk: you're ingesting badly formatted data.
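A quick sketch of those two checks outside Splunk (file path, interface, and port are placeholders):

grep -P '\\x[0-9A-Fa-f]{2}' /path/to/intermediate/file.log | head
tcpdump -A -i eth0 port 514 | grep -a '\\x'

If literal \xXX sequences show up there, the data is already malformed before Splunk ever sees it.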
Sounds like a data problem - you need to do some further analysis on the commonalities amongst the failing messages and the differences to the successful messages, not just in the text, but how the messages are produced, where they are produced, how they are stored, when they are produced, etc.
In Splunk you need to configure alert actions; as you can see, many come out of the box. For your use case you have a few options that you can explore.   1. Use this add-on - it may help with some config/testing, so it needs to be installed - https://splunkbase.splunk.com/app/5520  2. Develop your own action - https://dev.splunk.com/enterprise/docs/devtools/customalertactions/
I'm not sure, but only a tiny fraction of a percent of messages seem to be affected. Our Splunk team hasn't been able to help.
It shows out of memory in the log - this could be caused by large volumes of data coming in from O365 events. You might consider changing the interval in the inputs for the collection. (I don't know if this will fix it, but it may help with the different inputs you may have; it sounds like it's bottlenecked somewhere.) Check the memory usage on the host where this add-on is running (normally a HF) - perhaps you need to increase it if it's very low. Have a look at the troubleshooting guide; there may be items there to help further investigate. https://docs.splunk.com/Documentation/AddOns/released/MSO365/Troubleshooting
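A hedged sketch of what adjusting the interval can look like in inputs.conf for the add-on (the stanza name is inferred from the traceback in the post below, and exact parameter names may differ by add-on version, so verify against your installed version first):

[splunk_ta_o365_management_activity://xoar_Management_Exchange]
interval = 300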
Thank you so much. I just found out that it is all about the search: any time you receive an index and need to create an alert, you should build a search on that index for the specific condition the user wants you to detect in it. Example: redshift / consecutive failed logins.
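As an illustration of that last example (index, sourcetype, message text, and threshold are all hypothetical placeholders), a search like this could back an alert for repeated failed logins:

index=redshift sourcetype=redshift:auth "login failed"
| stats count as failures by user
| where failures >= 3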
Hi, we have stopped getting O365 logs; when I looked for errors I see the one below. Does it mean the client secret is expired?

level=ERROR pid=22156 tid=MainThread logger=splunk_ta_o365.modinputs.management_activity pos=utils.py:wrapper:72 | datainput=b'xoar_Management_Exchange' start_time=1715152233 | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/splunksdc/utils.py", line 70, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 135, in run
    executor.run(adapter)
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/splunksdc/batch.py", line 54, in run
    for jobs in delegate.discover():
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 225, in discover
    self._clear_expired_markers()
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 294, in _clear_expired_markers
    checkpoint.sweep()
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/splunksdc/checkpoint.py", line 86, in sweep
    return self._store.sweep()
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/splunksdc/checkpoint.py", line 258, in sweep
    indexes = self.build_indexes(fp)
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/splunksdc/checkpoint.py", line 189, in build_indexes
    indexes[key] = pos
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/sortedcontainers/sorteddict.py", line 300, in __setitem__
    dict.__setitem__(self, key, value)
MemoryError
@gcusello Thank you so much
In what way are they inconsistent? (The totals are most likely different due to the rounding)