All Posts

I think I was able to get the table in the output you are wanting. I was able to achieve this using this SPL.

| makeresults
| eval json_object="{ \"KeyA\": [ { \"path\": \"/attibuteA\", \"op\": \"replace\", \"value\": \"hello\" }, { \"path\": \"/attibuteB\", \"op\": \"replace\", \"value\": \"hi\" } ], \"KeyB\": [ { \"path\": \"/attibuteA\", \"op\": \"replace\", \"value\": \"\" }, { \"path\": \"/attibuteC\", \"op\": \"replace\", \"value\": \"hey\" }, { \"path\": \"/attibuteD\", \"op\": \"replace\", \"value\": \"hello\" } ], \"KeyC\": [ { \"path\": \"/attibuteE\", \"op\": \"replace\", \"value\": \"\" } ] }"
| fields - _time
| fromjson json_object
| fields - json_object
``` Assuming that your fieldset at this point is only the list of fields needed for your final output ```
``` If there are other fields present that are not a part of the json object you are trying to parse, then you should set up a naming convention for the fields you want to loop through ```
| foreach *
    [ | eval all_objects=mvappend(
        'all_objects',
        case(
            isnull('<<FIELD>>'), null(),
            mvcount('<<FIELD>>')==1, json_set('<<FIELD>>', "Key", "<<FIELD>>"),
            mvcount('<<FIELD>>')>1, mvmap('<<FIELD>>', json_set('<<FIELD>>', "Key", "<<FIELD>>"))
            )
        ) ]
| fields + all_objects
| mvexpand all_objects
| spath input=all_objects
| fields - all_objects
| fields + Key, path, op, value
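For readers outside Splunk, the same flattening — one output row per JSON object, tagged with its parent key, which is what the json_set/mvexpand/spath pipeline above produces — can be sketched in Python. The sample data is a trimmed version of the json_object in the SPL.

```python
# Trimmed sample from the post: each key maps to a list of patch-style objects.
json_object = {
    "KeyA": [
        {"path": "/attibuteA", "op": "replace", "value": "hello"},
        {"path": "/attibuteB", "op": "replace", "value": "hi"},
    ],
    "KeyB": [
        {"path": "/attibuteA", "op": "replace", "value": ""},
    ],
}

# Flatten to one row per object, tagging each row with its parent key.
rows = [
    {"Key": key, **obj}
    for key, objects in json_object.items()
    for obj in objects
]

for row in rows:
    print(row)
```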
I think you should be able to do this by using the mvmap() function. Here is the eval to return the resulting table above.

``` Eval to perform set operation against Splunk multivalue fields ```
| eval new_remove_old_set_operation=case(
    isnull(new_groups), null(),
    mvcount(new_groups)==1, if(NOT 'new_groups'=='old_groups', 'new_groups', null()),
    mvcount(new_groups)>1, mvmap(new_groups, if(NOT 'new_groups'=='old_groups', 'new_groups', null()))
    )

Full SPL snippet used to replicate your scenario:

| makeresults
| eval user="user1",
    old_groups=mvappend("group1", "group2", "group3"),
    new_groups=mvappend("group1", "group2", "group3", "group4", "group5")
| append
    [ | makeresults
    | eval user="user2",
        old_groups=mvappend("group3", "group4", "group5"),
        new_groups=mvappend("group4", "group5", "group6", "group7", "group8") ]
| fields - _time
| fields + user, old_groups, new_groups
``` Eval to perform set operation against Splunk multivalue fields ```
| eval new_remove_old_set_operation=case(
    isnull(new_groups), null(),
    mvcount(new_groups)==1, if(NOT 'new_groups'=='old_groups', 'new_groups', null()),
    mvcount(new_groups)>1, mvmap(new_groups, if(NOT 'new_groups'=='old_groups', 'new_groups', null()))
    )
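The mvmap() approach amounts to a set difference: keep every group in new_groups that is not in old_groups. A Python sketch of the same semantics, with multivalue fields modeled as lists and sample data taken from the post's user1 scenario:

```python
# Multivalue fields modeled as lists; the SPL mvmap+if filter keeps each
# value of new_groups that does not appear anywhere in old_groups.
old_groups = ["group1", "group2", "group3"]
new_groups = ["group1", "group2", "group3", "group4", "group5"]

new_remove_old = [g for g in new_groups if g not in old_groups]
print(new_remove_old)  # ['group4', 'group5']
```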
Hi dtburrows3, thank you very much. It works. I like that solution with the temporary field.
Thank you for your help. You are right. With your guidance, I have successfully solved my problem!
So simply hitting the lookup with the input field "Value" as "sFaultInverter1" will pull back ErrorCodes for multiple rows from the lookup, since there are multiple "Attribut" values with the same "Value". Since you are asking to pull back ErrorCodes for the "sFaultInverter1" field from the raw data, I assume you just want the ErrorCodes that have the Attribut "sFaultInverter1". This can be done a few different ways. You can scope down the lookup inline to only pull back Attribut="sFaultInverter1" and then do a join against Value from the lookup. That would look something like this.

| join type=left sFaultInverter1
    [ | inputlookup <lookup> where Attribut="sFaultInverter1"
    | fields - Attribut
    | rename Value as sFaultInverter1 ]
| fields + "nice_date", sFaultInverter1, ErrorCode

Or create a temporary field and include it as an additional input field to the lookup, like this.

| eval scoped_lookup_attribute="sFaultInverter1"
| lookup <lookup> Attribut as scoped_lookup_attribute, Value as sFaultInverter1 OUTPUT ErrorCode
| fields + "nice_date", sFaultInverter1, ErrorCode

Both of these methods return the same results below. You can see that if you use "sFaultInverter1" as the only input field to the lookup, it will actually pull back multiple results, since there are multiple Attribut in the lookup with the same Value. They just happen to have the same ErrorCodes in your example, but if they were different, that would become problematic. And it would just take more SPL to account for the multivalue field when I don't think it is really needed here.
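The second method — passing Attribut as an additional input field — is effectively a composite-key lookup. A hedged Python sketch of that idea, with lookup rows and events trimmed from the thread's sample data:

```python
# Lookup rows keyed by (Attribut, Value) -- mirrors giving the lookup two
# input fields, so only the row for the intended Attribut can match.
lookup = {
    ("sFaultInverter1", "-1"): "NoCommunication",
    ("sFaultInverter1", "0"): "noError",
    ("sFaultInverter2", "-1"): "NoCommunication",  # same Value, different Attribut
}

events = [
    {"nice_date": "05.12.2023 10:46:53", "sFaultInverter1": "0"},
    {"nice_date": "05.12.2023 10:43:27", "sFaultInverter1": "-1"},
]

# Enrich each event; the fixed first key component scopes the lookup.
for event in events:
    event["ErrorCode"] = lookup.get(("sFaultInverter1", event["sFaultInverter1"]))

for event in events:
    print(event)
```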
| eval groups=mvappend(old, old, new)
| stats count by user groups

Where count=3, the group exists in both old and new.
Where count=2, the group exists just in old.
Where count=1, the group exists just in new.
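The counting trick works because each group contributes 2 to the count from the doubled old list and 1 from the new list. A Python sketch of the same logic (the sample groups are invented for illustration):

```python
from collections import Counter

old = ["group1", "group2"]
new = ["group2", "group3"]

# Append old twice and new once, then count occurrences per group:
# 3 = present in both, 2 = old only, 1 = new only.
counts = Counter(old + old + new)
status = {
    group: {3: "both", 2: "old only", 1: "new only"}[c]
    for group, c in counts.items()
}
print(status)  # {'group1': 'old only', 'group2': 'both', 'group3': 'new only'}
```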
Hello! I have a Splunk Enterprise 9.0.7 deployment. I have a local user with the "power" role. When connecting to the Search & Reporting app, I can only see the Search option. Isn't the "power" role able to access other app features? The expectation is to see what users with the "admin" role see. What have I done wrong? Thank you and best regards, Andrew
Hi, I think you could use something like this instead: https://community.splunk.com/t5/Splunk-Search/Removing-all-null-columns-from-stats-table/m-p/566579   ------------ If this was helpful, some karma would be appreciated.
Try something like this

| lookup <your lookup> Value AS sFaultInverter1 OUTPUT ErrorCode
| table "nice_date", sFaultInverter1, ErrorCode
Thanks, I just tried:

| where last_backup_t < relative_time(now(), "-1d@d-4h") or is_offline="true"

So I didn't need the "search" command; sometimes the resolution is easier than you think...
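For anyone reading the relative_time modifier, here is a hedged Python sketch (the "now" timestamp is invented) of how "-1d@d-4h" is evaluated: back one day, snap to midnight (@d), then back another four hours:

```python
from datetime import datetime, timedelta

# Made-up "now" for illustration: 2023-12-05 10:30.
now = datetime(2023, 12, 5, 10, 30)

# "-1d": go back one day, then "@d": snap down to midnight of that day.
snapped = (now - timedelta(days=1)).replace(hour=0, minute=0, second=0, microsecond=0)

# "-4h": back another four hours from the snapped boundary.
threshold = snapped - timedelta(hours=4)

print(threshold)  # 2023-12-03 20:00:00
```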
Hi @Vantine, yes, it's correct. You're speaking of Windows logs, so you could simplify (and speed up) your search this way:

index=wineventlog sourcetype=wineventlog EventCode=4771 OR EventCode=4776
| timechart span=30m count by user
| where count>500

Ciao. Giuseppe
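The thresholding both searches perform — snap each event to a 30-minute bucket, count failures per user per bucket, then flag counts over a limit — can be sketched in Python. The timestamps, users, and toy threshold are invented; the real alert would use 500:

```python
from collections import Counter

# (epoch_seconds, user) failure events -- invented sample data.
events = [(1000, "alice"), (1500, "alice"), (2200, "alice"), (2000, "bob")]

SPAN = 1800  # 30 minutes in seconds

# Snap each timestamp down to its 30-minute bucket, then count per (bucket, user).
counts = Counter(((t // SPAN) * SPAN, user) for t, user in events)
print(counts)

# Flag any (bucket, user) over the threshold (toy threshold of 1 here; 500 in the alert).
offenders = [key for key, c in counts.items() if c > 1]
print(offenders)  # [(0, 'alice')]
```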
Trying to set up an alert to show any account that has had 500 logon failures in under 30 minutes. Here is what I currently have (with non-relevant data changed):

index=* sourcetype=* action=failure EventCode=4771 OR EventCode=4776
| bucket _time span=30m
| stats count by user
| where count>500

I want to make sure this is correct. Thanks!
Hi. I use metrics.log a lot on the indexer side, to debug bottlenecks and/or stress inside the infrastructure. There is a field I can't really understand at all:

INFO Metrics - group=tcpin_connections x.x.x.x:50496:9997 connectionType=cookedSSL sourcePort=50496 sourceHost=x.x.x.x sourceIp=x.x.x.x destPort=9997 kb=15.458984375 _tcp_avg_thruput=7.262044477222557 _tcp_Kprocessed=589.84765625 [...]

It's the "tcp_Kprocessed" field, especially in relation to the "kb" field, which is the most important one in my opinion. What is "tcp_Kprocessed" in practice, considering that its values are often very inconsistent and not proportionate to kb? Thanks.
Hi, can anybody help with this task?

Inputs:
"nice_date",sFaultInverter1,sFaultInverter2,sFaultInverter3,sFaultPFC,"sFaultSR-Plaus",sFaultSR,sFaultSpeed
"05.12.2023 10:46:53",0,0,1,0,"-1",0,0
"05.12.2023 10:43:27","-1","-1","-1","-1","-1","-1","-1"
"05.12.2023 10:41:17",0,320,0,0,"-1",0,0
"05.12.2023 10:30:32",0,0,1,0,"-1",0,0
"05.12.2023 10:28:51",0,0,1,0,"-1",0,0
"05.12.2023 10:28:10","-1","-1","-1","-1","-1","-1","-1"

Lookup:
Attribut,Value,ErrorCode
sFaultInverter1,-1,NoCommunication
sFaultInverter1,0,noError
sFaultInverter1,1,CompressorCurrentSensorFault
sFaultInverter1,2,FactorySettings
sFaultInverter1,4,
sFaultInverter1,8,
sFaultInverter1,16,InverterBridgeTemperatureSensorFault
sFaultInverter1,32,DLTSensorFault
sFaultInverter1,64,ICLFailure
sFaultInverter1,128,EEPROMFault
sFaultInverter1,256,UpdateProcess
sFaultInverter1,512,
sFaultInverter1,1024,
sFaultInverter1,2048,
sFaultInverter1,4096,
sFaultInverter1,8129,
sFaultInverter1,16384,
sFaultInverter1,32768,
sFaultInverter2,-1,NoCommunication
sFaultInverter2,0,noError
sFaultInverter2,1,CommunicationLos
sFaultInverter2,2,DcLinkRipple
sFaultInverter2,4,
sFaultInverter2,8,AcGridOverVtg
sFaultInverter2,16,AcGridUnderVtg
sFaultInverter2,32,DcLinkOverVtgSW
sFaultInverter2,64,DcLinkUnderVtg
sFaultInverter2,128,SpeedFault
sFaultInverter2,256,AcGridPhaseLostFault
sFaultInverter2,512,InverterBridgeOverTemperature
sFaultInverter2,1024,
sFaultInverter2,2048,

I would like to have a table with, e.g., 3 columns:
"nice_date",sFaultInverter1,ErrorCode
"05.12.2023 10:46:53",0,noError
"05.12.2023 10:43:27","-1",NoCommunication
"05.12.2023 10:41:17",0,noError
"05.12.2023 10:30:32",0,noError
"05.12.2023 10:28:51",0,noError
"05.12.2023 10:28:10","-1",NoCommunication

That is, for each value of sFaultInverter1, an ErrorCode from the lookup table. Any help?
I've found an interesting specific case where there are two callRecord with the same id, both with version=1, but one is a peerToPeer call and the other is a groupCall. I think there are multiple callRecords because the initial peerToPeer call had a third participant added, escalating it to a groupCall. This could also explain some apparent duplication.
OR is usually placed between predicates in a logical evaluation, e.g. as part of a where command. Splunk works on a pipeline of events and you can't compare between events (without bringing them together in a correlated event). Alerts can be triggered based on expressions, for example, number of events left in the pipeline, so perhaps you need to fashion a search which returns the events you are interested in and trigger on the presence of these events?
Hi, I have a Windows event for a specific application that carries its payload in the Windows event log. When using Splunk_TA_windows to extract the data, I get a field with multiple "Data" elements:

<Data>process_name</Data><Data>signature_name</Data><Data>binary_description</Data>

How can I extract it automatically into fields/values:
process_name = process_name
signature = signature_name
binary = binary_description

Is there any way to do this without a "big" regex? Just capture $1:$2:$3... and then assign names to $1, $2, $3, like for CSV. Something like:

REGEX = (?ms)<Data>(.*?)<\/Data>

This would create maybe one multivalue field, and then I could assign field names.
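One way to see why a single capture group can be enough: grab every <Data> payload in order and zip the values with a list of field names. A Python sketch of that idea — the field-name list is an assumption based on the desired output above, and the payload values are invented samples:

```python
import re

# Invented sample payload in the <Data>...</Data> shape from the post.
raw = "<Data>chrome.exe</Data><Data>Trojan.Gen</Data><Data>Some binary</Data>"

# Assumed field names, in the order the Data elements appear.
field_names = ["process_name", "signature", "binary"]

# One non-greedy capture group pulls every payload in document order.
values = re.findall(r"<Data>(.*?)</Data>", raw, re.S)

# Zip positional captures with their names, CSV-header style.
fields = dict(zip(field_names, values))
print(fields)
```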
Hi guys, I started today with Splunk and have one question. I want to use an OR condition so that the alert triggers if either the second or the third condition is true. Any ideas how to do it?

| eval last_backup_t=strptime(last_backup, "%Y-%m-%d %H:%M:%S.%N%z")
| where last_backup_t < relative_time(now(), "-2d@d")
| search is_offline= true

Thanks
Looking at the webhook events in more detail reveals my first wrong assumption: a single call can produce multiple webhook events, with one of two changeTypes: 'created' or 'updated'. The longer the call goes on, the more changeType:updated events are pushed to the webhook. However, looking at callRecord events with a matching id, it gets stranger. I can see 15 webhook events (one 'created' and 14 'updated') with the same id today, with Splunk _time values between 10:15 and 12:15. But there are (only) 8 matching callRecord events, all with the same Splunk _time value of 07:30, a startDateTime of 07:30 and an endDateTime of 09:53, each with a different 'version' of 1, 2, 3, 4, 5, 8, 12 or 15, and an incrementing lastDateTimeModified value (between 10:14 and 12:12). I thought the _time value in a Splunk event showed when it was created. How can these callRecord events all have been created at 07:30, for a call that was in place between 07:30 and 09:53, and have webhook events between 10:15 and 12:15?
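If the goal is to work with only the most complete snapshot per call, one approach is to keep just the highest 'version' per id. A hedged Python sketch of that dedupe logic — the records are invented, and the field names follow the post:

```python
# Invented callRecord snapshots: same id, increasing version numbers.
records = [
    {"id": "abc", "version": 1, "lastDateTimeModified": "10:14"},
    {"id": "abc", "version": 15, "lastDateTimeModified": "12:12"},
    {"id": "abc", "version": 8, "lastDateTimeModified": "11:30"},
]

# Keep only the highest-version record per id (the latest snapshot of the call).
latest = {}
for rec in records:
    if rec["id"] not in latest or rec["version"] > latest[rec["id"]]["version"]:
        latest[rec["id"]] = rec

print(latest["abc"]["version"])  # 15
```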
Hi @aguilard, it's a very strange behavior: open a case to Splunk Support. Ciao. Giuseppe