All Posts



Hello! I have a Splunk Enterprise 9.0.7 deployment. I have a local user with the "power" role. When connecting to the Search & Reporting app, I can only see the Search option. Isn't the "power" role able to access the other app features? My expectation is to see what users with the "admin" role see. What have I done wrong? Thank you and best regards, Andrew
Hi, I think you could use something like this instead: https://community.splunk.com/t5/Splunk-Search/Removing-all-null-columns-from-stats-table/m-p/566579   ------------ If this was helpful, some karma would be appreciated.
Try something like this:

| lookup <your lookup> Value AS sFaultInverter1 OUTPUT ErrorCode
| table "nice_date", sFaultInverter1, ErrorCode
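If it helps to see what that lookup does outside Splunk, here is a minimal Python sketch of the same mapping: an (Attribut, Value) pair keyed table joined onto each event. The table rows and event values are taken from the question; everything else is illustrative.

```python
# Lookup table: (Attribut, Value) -> ErrorCode, as in the posted CSV (excerpt).
lookup = {
    ("sFaultInverter1", "-1"): "NoCommunication",
    ("sFaultInverter1", "0"): "noError",
    ("sFaultInverter1", "1"): "CompressorCurrentSensorFault",
}

# Input events (excerpt from the question).
events = [
    {"nice_date": "05.12.2023 10:46:53", "sFaultInverter1": "0"},
    {"nice_date": "05.12.2023 10:43:27", "sFaultInverter1": "-1"},
]

# The `lookup ... OUTPUT ErrorCode` step: enrich each event in place.
for ev in events:
    ev["ErrorCode"] = lookup.get(("sFaultInverter1", ev["sFaultInverter1"]))
```

After the loop, each event carries the ErrorCode column the asker wanted in the output table.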
Thanks, I just tried: | where last_backup_t < relative_time(now(), "-1d@d-4h") or is_offline="true" So I didn't need the "search"; sometimes the resolution is easier than you think...
Hi @Vantine, yes, it's correct. You're speaking of Windows logs, so you could simplify (and speed up) your search this way:

index=wineventlog sourcetype=wineventlog EventCode=4771 OR EventCode=4776
| timechart span=30m count by user
| where count>500

Ciao. Giuseppe
Trying to set up an alert to show any login that has had 500 logon failures in under 30 minutes. Here is what I currently have (with non-relevant data changed):

index=* sourcetype=* action=failure EventCode=4771 OR EventCode=4776
| bucket _time span=30m
| stats count by user
| where count>500

I want to make sure this is correct. Thanks!
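The bucket-then-count logic of that search can be sketched outside Splunk to check the intent. This is an illustrative Python model, not the SPL itself: events are (epoch_time, user) pairs, times are truncated to a 30-minute bucket, and users with more than 500 failures in any one bucket are flagged. The sample event counts are made up.

```python
from collections import Counter

SPAN = 30 * 60  # bucket span: 30 minutes, in seconds (SPL: bucket _time span=30m)

# Sample failed-logon events as (epoch_time, user): "alice" fails 600 times
# within ten minutes, "bob" fails once.
events = [(1700000000 + i, "alice") for i in range(600)] + [(1700000000, "bob")]

# stats count by (time bucket, user)
counts = Counter(((t // SPAN) * SPAN, user) for t, user in events)

# where count>500: users exceeding the threshold in any single 30-minute bucket
offenders = {user for (bucket, user), c in counts.items() if c > 500}
```

Only "alice" crosses the threshold here, which is the behavior the alert is after.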
Hi. I use the indexer-side metrics.log a lot to debug bottlenecks and/or stress inside the infrastructure. There is a field I can't really understand at all:

INFO Metrics - group=tcpin_connections x.x.x.x:50496:9997 connectionType=cookedSSL sourcePort=50496 sourceHost=x.x.x.x sourceIp=x.x.x.x destPort=9997 kb=15.458984375 _tcp_avg_thruput=7.262044477222557 _tcp_Kprocessed=589.84765625 [...]

It's the "_tcp_Kprocessed" field, especially in relation to the "kb" field, which in my opinion is the most important one. What is "_tcp_Kprocessed" in practice, considering that its values are often very inconsistent and not proportionate to kb? Thanks.
Hi, can anybody help with this task?

Inputs:
"nice_date",sFaultInverter1,sFaultInverter2,sFaultInverter3,sFaultPFC,"sFaultSR-Plaus",sFaultSR,sFaultSpeed
"05.12.2023 10:46:53",0,0,1,0,"-1",0,0
"05.12.2023 10:43:27","-1","-1","-1","-1","-1","-1","-1"
"05.12.2023 10:41:17",0,320,0,0,"-1",0,0
"05.12.2023 10:30:32",0,0,1,0,"-1",0,0
"05.12.2023 10:28:51",0,0,1,0,"-1",0,0
"05.12.2023 10:28:10","-1","-1","-1","-1","-1","-1","-1"

Lookup:
Attribut,Value,ErrorCode
sFaultInverter1,-1,NoCommunication
sFaultInverter1,0,noError
sFaultInverter1,1,CompressorCurrentSensorFault
sFaultInverter1,2,FactorySettings
sFaultInverter1,4,
sFaultInverter1,8,
sFaultInverter1,16,InverterBridgeTemperatureSensorFault
sFaultInverter1,32,DLTSensorFault
sFaultInverter1,64,ICLFailure
sFaultInverter1,128,EEPROMFault
sFaultInverter1,256,UpdateProcess
sFaultInverter1,512,
sFaultInverter1,1024,
sFaultInverter1,2048,
sFaultInverter1,4096,
sFaultInverter1,8129,
sFaultInverter1,16384,
sFaultInverter1,32768,
sFaultInverter2,-1,NoCommunication
sFaultInverter2,0,noError
sFaultInverter2,1,CommunicationLos
sFaultInverter2,2,DcLinkRipple
sFaultInverter2,4,
sFaultInverter2,8,AcGridOverVtg
sFaultInverter2,16,AcGridUnderVtg
sFaultInverter2,32,DcLinkOverVtgSW
sFaultInverter2,64,DcLinkUnderVtg
sFaultInverter2,128,SpeedFault
sFaultInverter2,256,AcGridPhaseLostFault
sFaultInverter2,512,InverterBridgeOverTemperature
sFaultInverter2,1024,
sFaultInverter2,2048,

I would like to have a table with e.g. 3 columns:
"nice_date",sFaultInverter1,ErrorCode
"05.12.2023 10:46:53",0,noError
"05.12.2023 10:43:27","-1",NoCommunication
"05.12.2023 10:41:17",0,noError
"05.12.2023 10:30:32",0,noError
"05.12.2023 10:28:51",0,noError
"05.12.2023 10:28:10","-1",NoCommunication

For each value of sFaultInverter1, the ErrorCode from the lookup table. Any help?
I've found an interesting specific case where there are two callRecords with the same id, both with version=1, but one is a peerToPeer call and the other is a groupCall. I think there are multiple callRecords because the initial peerToPeer call had a third participant added, escalating it to a groupCall. This could also explain some of the apparent duplication.
OR is usually placed between predicates in a logical evaluation, e.g. as part of a where command. Splunk works on a pipeline of events, and you can't compare between events (without bringing them together in a correlated event). Alerts can be triggered based on expressions, for example the number of events left in the pipeline, so perhaps you need to fashion a search which returns the events you are interested in and trigger on the presence of those events?
Hi, I have a Windows event for a specific application that has its payload in the Windows Event Log. When using Splunk_TA_windows to extract data, I get a field with multiple "Data" elements:

<Data>process_name</Data><Data>signature_name</Data><Data>binary_description</Data>

How can I extract them automatically into field/value pairs:
process_name = process_name
signature = signature_name
binary = binary_description

Is there any way without using a "big" regex? Just capture $1:$2:$3... and then assign names to $1, $2, $3, like for CSV. Something like:
REGEX = (?ms)<Data>(.*?)<\/Data>
This would create one multivalue field, and then I'd assign the field names.
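Outside of props/transforms, the "repeat one small capture, then zip with names" idea the asker describes can be sketched in a few lines of Python. The raw payload and the field-name order here are assumptions taken from the question, purely for illustration.

```python
import re

# Sample payload and assumed positional field names (from the question).
raw = "<Data>proc.exe</Data><Data>SigX</Data><Data>desc of binary</Data>"
names = ["process_name", "signature", "binary"]

# One small non-greedy regex, applied repeatedly -- no "big" regex needed.
values = re.findall(r"(?ms)<Data>(.*?)</Data>", raw)

# Zip positional captures with their names, CSV-style.
fields = dict(zip(names, values))
```

This is the same effect as extracting a multivalue field and then naming each value by position.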
Hi guys, I started today with Splunk and have one question. I want an OR condition: if either the second or the third row's condition matches, the trigger should fire. Any ideas how to do it?

| eval last_backup_t=strptime(last_backup, "%Y-%m-%d %H:%M:%S.%N%z")
| where last_backup_t < relative_time(now(), "-2d@d")
| search is_offline=true

Thanks
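The logic being asked for (old backup OR offline) maps to a single boolean expression. Here is a hedged Python sketch of that condition; it simplifies the SPL by dropping fractional seconds, the timezone, and the "@d" day-snapping of relative_time, and the field names mirror the question.

```python
from datetime import datetime, timedelta

def should_alert(last_backup, is_offline, now):
    # Parse the timestamp, like the eval/strptime step (simplified format).
    t = datetime.strptime(last_backup, "%Y-%m-%d %H:%M:%S")
    # Fire if the backup is older than two days OR the host is offline --
    # the single `where ... OR ...` the asker was reaching for.
    return t < now - timedelta(days=2) or is_offline

now = datetime(2023, 12, 10, 12, 0, 0)
old_backup = should_alert("2023-12-01 08:00:00", False, now)   # stale backup
offline = should_alert("2023-12-10 08:00:00", True, now)       # offline host
healthy = should_alert("2023-12-10 08:00:00", False, now)      # neither
```

In SPL the equivalent is one where clause, e.g. | where last_backup_t < relative_time(now(), "-2d@d") OR is_offline="true", rather than a separate | search step.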
Looking at the webhook events in more detail reveals my first wrong assumption: a single call can produce multiple webhook events, with one of two changeTypes: 'created' or 'updated'. The longer the call goes on, the more changeType:updated events are pushed to the webhook. However, looking at callRecord events with a matching id, it gets stranger. I can see 15 webhook events (one 'created' and 14 'updated') with the same id today, with Splunk _time values between 10:15 and 12:15. But there are (only) 8 matching callRecord events, all with the same Splunk _time value of 07:30, a startDateTime of 07:30 and an endDateTime of 09:53, each with a different 'version' of 1, 2, 3, 4, 5, 8, 12 or 15, and an incrementing lastDateTimeModified value (between 10:14 and 12:12). I thought the _time value in a Splunk event showed when it was created. How can these callRecord events all have been created at 07:30, for a call that was in place between 07:30 and 09:53, yet have webhook events between 10:15 and 12:15?
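A common way to handle versioned records like these is to keep only the highest 'version' per id (in SPL, something like | dedup id sortby -version). Here is an illustrative Python sketch of that collapse, using the version numbers from the post; the id is made up.

```python
# callRecord events sharing one id, with the versions observed in the post.
records = [{"id": "abc", "version": v} for v in (1, 2, 3, 4, 5, 8, 12, 15)]

# Keep only the highest-version record per id.
latest = {}
for rec in records:
    if rec["id"] not in latest or rec["version"] > latest[rec["id"]]["version"]:
        latest[rec["id"]] = rec
```

After this pass, one record per id survives: the final snapshot of the call.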
Hi @aguilard, it's a very strange behavior: open a case to Splunk Support. Ciao. Giuseppe
Hi, we are ingesting Couchbase JSON documents into Splunk Cloud using Kafka. When I open the same document twice (the first as ingested in Splunk, _raw, and the second as the Couchbase JSON) and compare them in Visual Studio Code, I can see differences as shown below. The Splunk syntax-highlighted data for this record is identical to the original Couchbase JSON. Can you please help me understand why _raw shows this data differently, and also whether there is any way to get the _raw data in the same format as the original JSON? Thank you.
Hi @parthiban, the problem is the starting data: viewing your data without any transformation, it seems that you don't have the data. So, reducing the search by removing | where name="YYYY":

index="XXXX" "Genesys system is available"
| rename "response_details.response_payload.entities{}.onlineStatus" as status

do you have a status? If not, you have to redesign your search, because it isn't congruent. Ciao. Giuseppe
The solution is to add your trusted cert to Splunk's system cert under $SPLUNK_HOME/etc/auth.
Brilliant, it worked. Thank you!
Hi @gcusello, I've shared an example Splunk payload. In it, we have the 'onlineStatus' field under 'response_details', 'response_payload', and 'entities'. First, we need to extract the 'onlineStatus' and the serial number (to identify the device) before applying the condition for the alert, right?
Yes, from 25th Nov we are able to see the logs for the sourcetype, so please guide us on where and how to check.