All Posts

Hi @ravida, when you created the Correlation Search, did you associate the "Add Notable" Adaptive Response Action? Then, when you configured the Add Notable Adaptive Response Action, did you create the Drilldown Search? Ciao. Giuseppe
Hi @loganramirez, Splunk usually displays dates in the timezone defined for the user. To pass a timestamp in a different timezone, use eval and pass the transformed value instead of the original one. Ciao. Giuseppe
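A minimal sketch of the eval approach (the field name and the fixed +2-hour offset are illustrative assumptions; adjust the offset to your target timezone):

```spl
... your base search ...
| eval display_time = strftime(_time + 7200, "%Y-%m-%d %H:%M:%S")
| table _time display_time
```

The original _time is left untouched; only the displayed string is shifted.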
Hi @siddharthad, in the drilldown link you have to pass all the fields (among the ones you have in your results) that are useful to identify the events to display in the drilldown dashboard. Note that you can pass only the fields present in your main search, e.g. if you have | table _time, Name, host, the only fields you can pass are _time, Name and host. If you need to pass other fields that you don't want to display in the main dashboard, you can add them to the search and list the fields to display in the <fields></fields> tag. Ciao. Giuseppe
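A Simple XML sketch of this pattern (the dashboard name and field names are hypothetical): extra_field is part of the results and available as a drilldown token, but only _time, Name and host are displayed because of the <fields> tag:

```xml
<table>
  <search>
    <query>index=main | table _time, Name, host, extra_field</query>
  </search>
  <fields>_time, Name, host</fields>
  <drilldown>
    <link target="_blank">/app/search/drilldown_dashboard?form.host=$row.host$&amp;form.extra=$row.extra_field$</link>
  </drilldown>
</table>
```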
Hi @nateloepker, your data seems to be in JSON format. Did you try using INDEXED_EXTRACTIONS = json in your sourcetype definition, or the spath command (https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Spath)? Ciao. Giuseppe
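A sketch of the sourcetype-based option (the sourcetype name is an assumption):

```ini
# props.conf -- sourcetype name is hypothetical
[my_json_sourcetype]
INDEXED_EXTRACTIONS = json
# avoid extracting the same fields a second time at search time
KV_MODE = none
```

Alternatively, purely at search time: ... | spath input=_raw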
Hi @Yashvik, this probably depends on the data you're using. Anyway, try to group your data by a common key using stats instead of the table command, something like this:

index=splunk_idx source=some_source
| rex field=log "level=(?<level>.*?),"
| rex field=log "\[CID:(?<cid>.*?)\]"
| rex field=log "message=(?<msg>.*?),"
| rex field=log "elapsed_time_ms=\"(?<elap>.*?)\""
| search msg="\"search pattern\""
| stats values(msg) AS msg values(elap) AS elap BY cid

Ciao. Giuseppe
Hi @rkaufman, don't append your request to an existing thread, even if it's on the same topic, because this way it will get less attention than a new one. Anyway, my hint is the same: open a case with Splunk Support, sending them a diag. What if you try installing not the latest version but the previous one? Ciao. Giuseppe
Hi @ash2, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
Hi All, when we run a Splunk search in our application (sh_app1), we notice some fields are duplicated / doubled up (refer: sample_logs.png). If we do the same search in another application (sh_welcome_app_ui), we do not see any duplication for the same fields.

cid Perf-May06-9-151xxx
level INFO
node_name aks-application-xxx

SPL being used:

index=splunk_idx source=some_source
| rex field=log "level=(?<level>.*?),"
| rex field=log "\[CID:(?<cid>.*?)\]"
| rex field=log "message=(?<msg>.*?),"
| rex field=log "elapsed_time_ms=\"(?<elap>.*?)\""
| search msg="\"search pattern\""
| table cid, msg, elap

The event count remains the same whether we search inside that app or any other app; only some fields are duplicated. We couldn't figure out where the actual issue is. Can someone help?
Thanks for your response! I got the query:

index
| timechart span=1d sum(abc) as total by xyz
| eval day=lower(strftime(_time,"%A"))
| where day=="monday"
| fields - day
Hi @gowthammahes, are you trying to index this log file on the indexer/search head directly, or are you trying to read this file through a Universal Forwarder?
Has anyone successfully used the Splunk API call /services/saved/searches/SEARCH_NAME (https://docs.splunk.com/Documentation/Splunk/9.2.1/RESTREF/RESTsearch#saved.2Fsearches.2F.7Bname.7D) to add a webhook to an existing Splunk report? I set action.webhook=1, action.webhook.param.url=https://1234.com, and actions=pagerduty,webhook successfully through the API, but the Splunk UI does not show the webhook (please see screenshot). Does anyone have an idea what the problem might be?

curl \
--data-urlencode 'action.webhook.param.url=https://1234.com' \
--data-urlencode 'action.webhook=1' \
--data-urlencode 'actions=pagerduty,webhook' \
--data-urlencode 'output_mode=json' \
--header "Authorization: Splunk A_TOKEN_HERE" \
--insecure \
--request 'POST' \
--retry '12' \
--retry-delay '5' \
--silent \
"https://localhost:8089/services/saved/searches/test-12345"
I found the solution.  | eval firstNewValue = mvindex(newValue,0)  
EXTRACT props do not invoke a transform.  Use REPORT, instead.
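As a sketch of the difference (the sourcetype stanza and transform names here are hypothetical): EXTRACT takes an inline regex with named capture groups, while a named transforms.conf stanza must be wired in with REPORT:

```ini
# props.conf
[my_sourcetype]
# EXTRACT accepts only an inline regex, not a transform name:
EXTRACT-level = level=(?<level>\w+)
# to invoke a transforms.conf stanza, use REPORT:
REPORT-claims = MyTransform

# transforms.conf
[MyTransform]
REGEX = level=(?<level>\w+)
```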
What message lurks beneath the yellow triangles? There are a few concerns:
1) The event timestamps may be too old to extract properly
2) MAX_TIMESTAMP_LOOKAHEAD of 15 is too short for times after 9:59
3) The sourcetype name is "Test9" in props.conf, but "test9" is selected in the wizard. Sourcetypes are case-sensitive by default.
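A props.conf sketch addressing points 1) and 2) — the time format, lookahead, and age limit here are assumptions to adapt to the actual data:

```ini
[Test9]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
# 15 characters can truncate the timestamp after 9:59; allow the full length
MAX_TIMESTAMP_LOOKAHEAD = 25
# raise if events are older than the default limit
MAX_DAYS_AGO = 3650
```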
The reason for setting up the example data in that way is based on my understanding of your description of the problem. Generally, the easiest way to get advice is for you to post an example of the data from both types and demonstrate what you want to achieve with the output. No, you don't need to append - the whole makeresults/append section is just about setting up an example data set to show how you go about joining the two. If you can post an example of the two data sources, it would be easier to show how it should be done.
An alternative to regex is to use coalesce. For example,

| foreach RU3NDS_* [eval RU3NDS = coalesce(RU3NDS, <<FIELD>>)]

As @gcusello mentioned, if you intend to use the join command, consider stats or another method instead. For example,

| foreach RU3NDS_* [eval RU3NDS = coalesce(RU3NDS, <<FIELD>>)]
| fields - RU3NDS_*
| stats values(*) as * dc(*) as dc_* by RU3NDS

Here is a complete emulation to illustrate how to correlate without using the join command:

| makeresults format=csv data="RU3NDS, left_data_var
foo1, leftbar1
foo2, leftbar1
foo1, leftbar2
foo3, leftbar3"
| append [makeresults format=csv data="RU3NDS_abcd, right_data_var
foo1, rightbar1
foo2, rightbar3
foo1, rightbar2
foo3, rightbar1"]
| append [makeresults format=csv data="RU3NDS_efgh, right_data_var
foo1, rightbar3
foo2, rightbar1
foo1, rightbar3
foo3, rightbar2"]
``` data emulation above ```
| foreach RU3NDS_* [eval RU3NDS = coalesce(RU3NDS, <<FIELD>>)]
| fields - RU3NDS_*
| stats values(*) as * dc(*) as dc_* by RU3NDS

RU3NDS  dc_left_data_var  dc_right_data_var  left_data_var      right_data_var
foo1    2                 3                  leftbar1 leftbar2  rightbar1 rightbar2 rightbar3
foo2    1                 2                  leftbar1           rightbar1 rightbar3
foo3    1                 2                  leftbar3           rightbar1 rightbar2
After installation of Alert Manager Enterprise 3.0.6 in Splunk Cloud, the Start screen never appears and the error "JSON replay had no payload value" is shown 10 times. Q: Has anyone run into this error?
I am a little confused by the SPL. Did you try this?

| makeresults
| eval src_ip="10.0.0.0 166.226.118.0 136.226.158.0 185.46.212.0 2a03:eec0:1411::"
| makemv delim=" " src_ip
| mvexpand src_ip
| lookup zscalerip.csv CIDR AS src_ip OUTPUT CIDR as CIDR_match
| eval Is_managed_device=if(isnull(CIDR_match), "false", "true")
| table src_ip Is_managed_device
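Note that for the lookup to match subnets rather than literal strings, a lookup definition with a CIDR match type is needed - a sketch, where the stanza name and filename are assumptions:

```ini
# transforms.conf
[zscalerip]
filename = zscalerip.csv
match_type = CIDR(CIDR)
```

The search would then reference the definition (| lookup zscalerip ...) instead of the .csv file directly.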
You can make volunteers' lives easier by listing sample lookup content in table format, and constructing mock/sample SQL values according to the illustrated lookup table, or vice versa. Anyway, there are often different ways to solve the same problem depending on actual data characteristics and nuances in requirements. If I understand you correctly, you want to catalogue events into some lk_wlc_app_name based on fragments of SQL that may match lk_wlc_app_short. You mentioned that SQL has no structure (regarding the key strings you are trying to match); your illustrated data suggest that your intended matches do not fall on "natural" word boundaries. This puts any strategy at risk of being so aggressive as to give false positives. Because of the constraints, one very aggressive strategy is to use wildcard matches. You need to set "Match type" of lk_wlc_app_short to WILDCARD in "Advanced Options", and your table should contain wildcards before and after the short string, like

lk_wlc_app_short    lk_wlc_app_name
*ART*               Attendance Roster Tool
*Building_Mailer*   Building Mailer
*SCBT*              Service Center Billing Tool

Once this is set up, all you need is lookup, like

| lookup lookup_weblogic_app lk_wlc_app_short as SQL

Again, this is perhaps not an optimal solution because look-backward matching is expensive.
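Equivalently in configuration, the "Advanced Options" setting corresponds to match_type on the lookup definition - a sketch, where the filename is an assumption:

```ini
# transforms.conf
[lookup_weblogic_app]
filename = lookup_weblogic_app.csv
match_type = WILDCARD(lk_wlc_app_short)
```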
Hello, I'm trying to dynamically set some extractions to save myself the time and effort of writing hundreds of extractions. In my org's IdAM solution, we have hundreds of various user claims, i.e.:

Data={"Claims":{"http://wso2.org/claims/user":"username","http://wso2.org/claims/role":"user_role",...etc}

I would like to set up a single extraction that will extract all of these claims. My idea was the following:

props.conf
EXTRACT-nrl_test = MatchAllClaims

transforms.conf
[MatchAllClaims]
FORMAT = user_$1::$2
REGEX = \"http:\/\/wso2.org\/claims\/(\w+)\":\"([^\"]+)
MV_ADD = true

I was hoping this would extract the fields dynamically, but it did not work. Is there a way to accomplish this with one extraction? Thank you