All Topics


Hi there, I clicked on 'Start Free Trial' and waited for an email with login credentials for 'My instance'. It's been an hour. I've also tried logging in from two different browsers with my Splunk account login, cleared my cache, and restarted my computer. I've gone as far as creating another login to see if my account was the issue, but I hit the same problems there. Can the credentials I need be sent to me so I can log in and use the Cloud Trial?
Hi Splunkers. Whenever I run the Splunk rebuild command, I get the error below.
I see this:

/opt/splunk/etc/apps/splunk_essentials_8_2/appserver/static/exampleInfo.json differs
/opt/splunk/etc/apps/splunk_essentials_8_2/default/app.conf differs
/opt/splunk/etc/apps/splunk_essentials_8_2/default/data/ui/views/Dashboard_Studio.xml differs
/opt/splunk/etc/apps/splunk_essentials_8_2/default/data/ui/views/Durable_Scheduled_Searches.xml differs
/opt/splunk/etc/apps/splunk_essentials_8_2/default/data/ui/views/Federated_Search_Support_for_On-prem_to_On-prem_Environments.xml differs
/opt/splunk/etc/apps/splunk_essentials_8_2/default/data/ui/views/Health_Report_Improvements_and_Search_Head_Cluster_Health_Report_Capabilities.xml differs
/opt/splunk/etc/apps/splunk_essentials_8_2/default/data/ui/views/New_JSON_Commands.xml differs
/opt/splunk/etc/apps/splunk_essentials_8_2/default/data/ui/views/Search_Restriction_by_Data_Age.xml differs
/opt/splunk/etc/apps/splunk_essentials_8_2/default/data/ui/views/Splunk_Operator_for_Kubernetes.xml differs
/opt/splunk/etc/apps/splunk_essentials_8_2/splunk_essentials_8_2_new.tgz missing
I have a Splunk search that calls a custom search command I created. The command (a Python script) takes in events (Windows event log events) and finds event sequences. It returns only the events that are part of a found sequence and removes all the other events from the results, so hundreds, thousands, or more events go into the command and typically zero or just a few events come back. The command also adds a few fields to each remaining event to denote which sequence it is part of, the step in the sequence, and the time elapsed into the sequence.

This search with the custom command works fine from a Splunk search page; I can see the search results on the "Statistics" tab. However, if I change the search mode from Verbose to either Fast or Smart, I no longer get my results and instead get "No results found". The results come back when I switch back to Verbose mode. When I save my search as a dashboard, the dashboard shows "No results found". If I edit the dashboard, change the visualization from "statistics table" to "events" and then back to "statistics table", and save, I see the search results on the dashboard. But when I reload the dashboard, it shows "No results found" again. Below is the search that I am using on the search page and on the dashboard:

source="2021-06-*" index="wineventlog" (sourcetype="WinEventLog:Security" OR sourcetype="WinEventLog:Microsoft-Windows-Sysmon/Operational") ((EventCode=11) OR (EventCode=4688 AND New_Process_Name="*schtasks.exe*")) NOT splunk
| eval TargetFilename = mvindex(TargetFilename, 0)
| evidseq F 2h 11 "(ComputerName>'name',TargetFilename>'exe')" 4688 "(New_Process_Name='*schtasks.exe*',ComputerName!'%name%',Process_Command_Line='*%exe%*')" 11 "(ComputerName='%name%')"
| table _time EventCode timeSinceStart sequenceId sequenceStep ComputerName TargetFilename New_Process_Name Process_Command_Line
| sort sequenceId sequenceStep

Also, I found that if I add the following to the bottom of my search on the dashboard, the correct results actually do appear consistently, even when I reload the dashboard, but I do not want to display my results with the transaction command:

| transaction sequenceId
| sort sequenceId

So, how can I (reliably) get the results to show up on the dashboard? Any help anyone can provide would be greatly appreciated!
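One possibility I am wondering about, stated as an assumption rather than a confirmed cause: in Fast and Smart modes Splunk only extracts fields that the search explicitly references, so my custom command may be receiving events without fields like ComputerName or TargetFilename. A minimal sketch of that workaround would insert an explicit fields call right after the base search, before the eval:

| fields _time EventCode ComputerName TargetFilename New_Process_Name Process_Command_Line

I have not verified this is the root cause, but it would explain why Verbose mode (which extracts everything) works while Fast and Smart do not.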
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined   From ? Here is how those fields break down: %h Remote hostname (i.e., who's asking?) %l Remote logname (if de... See more...
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined   From ? Here is how those fields break down: %h Remote hostname (i.e., who's asking?) %l Remote logname (if defined) %u Remote user (if authenticated) %t Time request was received %r first line of the request %>s Final status (200, 404, etc) %b Size of the response in bytes %{Referer} How did this user get here? %{User-Agent} What browser is the user using? so that we can I want to be able to filter on the first item (remote hostname) so I can weed out known scanners, but every time I try to do something with that field it gets deleted.   Can I translate the Apache log format into something splunk can handle? Thank u a million in advance.
Hello, I have some issues with defining fields from split raw data within an event. A sample event, the SPL used to split the raw event, the output of the split data, and my issues are given below.

Raw event:

DAS7CNB_L_0__20210630-23574912_5827.html@^@^2021/06/30@^@^23:57:49@^@^DAS7CNB@^@^select "tin","payer_tin","min"(case when "state" in( 'AA','AE','AP','AS','FM','GU','MH','MP','PR','PW','VI' ) then 1 else 0 end) as "f1065k1nonus","max"(case when "state" in( 'WA','OR','CA','AK','HI','MT','ID','WY','NV','UT','CO','AZ','NM' ) then 1 when "state" in( 'ND','MN','SD','IA','NE','KS','MO','WI','IL','IN','MI','OH' ) then 2 when "state" in( 'NY','PA','NJ','NH','VT','ME','MA','RI','CT' ) then 3 when "state" in( 'TX','OK','AR','LA','KY','TN','MS','AL','WV','DE','MD','VA','NC','DC','SC','GA','FL' ) then 4 when "state" in( 'AA','AE','AP','AS','FM','GU','MH','MP','PR','PW','VI' ) then 5 else 0 end) as "f1065k1maxdistoff","max"("interest") as "interest_f1065_k1","max"("guarpaymt") as "guarpaymt_f1065_k1","max"("ord_inc") as "ord_inc_f1065_k1","max"("othrental") as "othrental_f1065_k1","max"("realestate") as "realestate_f1065_k1","max"("royalties") as "royalties_f1065_k1","max"("section179") as "section179_f1065_k1" into #TEMP9A from "irmf_f1065_k1" where "tax_yr" = 2016 and "tin" > 0 and "tin_typ" in( 0,1,2,3 ) group by "tin","payer_tin"@^@^|DAS7CNB.#TEMP9A|cdwsa.IRMF_F1065_K1@^@^

My SPL command:

eval SQLField=split(_raw,"@^@^") | table SQLField

Output of the split data (one value per line):

DAS7CNB_L_0__20210630-23574912_5827.html
2021/06/30
23:57:49
DAS7CNB
select "a"."basetin","w2nonus","w2maxdistoff","ssanonus","ssamaxdistoff","f1099rnonus","f1099rmaxdistoff","f1099miscnonus","f1099miscmaxdistoff","f1099gnonus","f1099gmaxdistoff","f1099intnonus","f1099intmaxdistoff","f1099oidnonus","f1099oidmaxdistoff","f1041k1nonus","f1041k1maxdistoff","f1065k1nonus","f1065k1maxdistoff","wages_w2","allocated_tips_w2","medicare_wages_w2","taxable_fica_tips_w2","WITHHLDG_w2","pens_annties_f1099_ssa_rrb","withhldg_f1099_ssa_rrb","gross_distrib_f1099r","taxable_amt_f1099r","WITHHLDG_f1099r","non_emp_compensation_f1099misc","othincome_f1099misc","rents_f1099misc","royalties_f1099misc","crop_insurance_f1099misc","WITHHLDG_f1099misc","taxbl_grant_f1099g","UNEMP_COMP_f1099g","prior_refnd_f1099g","agr_subsds_f1099g","atta_pymnt_f1099g","WITHHLDG_f1099g","interest_f1099int","savings_bonds_f1099int","WITHHLDG_f1099int","interest_f1099oid","withhldg_f1099oid","interest_f1041_k1","bus_inc_f1041_k1","net_rental_f1041_k1","oth_prtflo_f1041_k1","oth_rental_f1041_k1","interest_f1065_k1","guarpaymt_f1065_k1","ord_inc_f1065_k1","othrental_f1065_k1","realestate_f1065_k1","royalties_f1065_k1","section179_f1065_k1" into #TEMP9 from(select "basetin","w2nonus","w2maxdistoff","ssanonus","ssamaxdistoff","f1099rnonus","f1099rmaxdistoff","f1099miscnonus","f1099miscmaxdistoff","f1099gnonus","f1099gmaxdistoff","f1099intnonus","f1099intmaxdistoff","f1099oidnonus","f1099oidmaxdistoff","f1041k1nonus","f1041k1maxdistoff","wages_w2","allocated_tips_w2","medicare_wages_w2","taxable_fica_tips_w2","WITHHLDG_w2","pens_annties_f1099_ssa_rrb","withhldg_f1099_ssa_rrb","gross_distrib_f1099r","taxable_amt_f1099r","WITHHLDG_f1099r","non_emp_compensation_f1099misc","othincome_f1099misc","rents_f1099misc","royalties_f1099misc","crop_insurance_f1099misc","WITHHLDG_f1099misc","taxbl_grant_f1099g","UNEMP_COMP_f1099g","prior_refnd_f1099g","agr_subsds_f1099g","atta_pymnt_f1099g","WITHHLDG_f1099g","interest_f1099int","savings_bonds_f1099int","WITHHLDG_f1099int","interest_f1099oid","withhldg_f1099oid","interest_f1041_k1","bus_inc_f1041_k1","net_rental_f1041_k1","oth_prtflo_f1041_k1","oth_rental_f1041_k1" from #TEMP8) as "A" left outer join(select "tin","min"(case when "f1065k1nonus" = 1 then 1 else 0 end) as "f1065k1nonus","max"(case when "f1065k1maxdistoff" = 1 then 1 when "f1065k1maxdistoff" = 2 then 2 when "f1065k1maxdistoff" = 3 then 3 when "f1065k1maxdistoff" = 4 then 4 when "f1065k1maxdistoff" = 5 then 5 else 0 end) as "f1065k1maxdistoff","sum"("interest_f1065_k1") as "interest_f1065_k1","sum"("guarpaymt_f1065_k1") as "guarpaymt_f1065_k1","sum"("ord_inc_f1065_k1") as "ord_inc_f1065_k1","sum"("othrental_f1065_k1") as "othrental_f1065_k1","sum"("realestate_f1065_k1") as "realestate_f1065_k1","sum"("royalties_f1065_k1") as "royalties_f1065_k1","sum"("section179_f1065_k1") as "section179_f1065_k1" from #TEMP9a group by "tin") as "B" on "a"."basetin" = "b"."tin"
DAS7CNB.#TEMP9
DAS7CNB.#TEMP9A|cdsawsa.IRMF_F1065_K1

My issues: the split worked as expected, but now I need to define the two values in the last line of the output above as fields: DAS7CNB.#TEMP9A as ID_DataFile and cdsawsa.IRMF_F1065_K1 as ID_DataTempFile. Thank you, any help will be highly appreciated.
I want to use Splunk to send an alert when the power goes out in our office. The current idea is to set up a machine (probably Windows or Linux) plugged into an outlet and configured as a Universal Forwarder sending a constant stream of data to the Enterprise instance (what this data would be I'm not sure yet; probably a script that loops constantly). The Enterprise instance (on AWS, so it will still be online if the power goes out) would then monitor for when the forwarder machine stops sending information and alert me. So when the power goes out, the machine in the office will power down, and the Enterprise instance will recognize this and alert me. If anyone has other ideas for monitoring power loss (or can help outline how I should set up my current idea), please let me know. Thanks!

Edit: Can't figure out how to change the forum category of this post from feedback to something else.
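For the alert side, a rough sketch of what I am picturing, assuming the office box reports as host=office-canary (a placeholder name) and that its internal logs reach the indexers; the forwarder's own _internal traffic could serve as the heartbeat, so no extra looping script may even be needed:

| tstats latest(_time) as last_seen where index=_internal host=office-canary by host
| eval minutes_silent = round((now() - last_seen) / 60)
| where minutes_silent > 10

Scheduled every few minutes, an alert on any result would mean the box has been silent for over ten minutes.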
We recently updated our Splunk infrastructure to 8.1, and before we upgraded, the 'Enable TLS' option was checked in the mail server settings. The alert_actions.conf has not changed at all. Now, for emails being sent, we receive the following error in python.log:

sendemail:456 - [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:742) while sending mail to: <EMAIL_ADDRESS>

If I manually add use_tls = 1 to that conf file, there are no errors, but if TLS is enabled in the web UI, it errors with the above. I am not sure what else to check here, as nothing has changed except the Splunk version on the servers. Has anyone else experienced this?
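For reference, a sketch of the manual workaround that currently works for us, assuming the stanza lives on the search head (the mailserver value is a placeholder):

# $SPLUNK_HOME/etc/system/local/alert_actions.conf
[email]
use_tls = 1
mailserver = mail.example.com:587

The open question is why the same setting applied through the web UI behaves differently.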
Hi all, I am trying to create an Ansible playbook to automate Splunk setup. I just noticed that the -user flag is not working (my admin ends up named admin instead of splunk). Am I doing something incorrect with this command? (Note: {{ splunk_user_password }} is an Ansible variable that I am passing in to the bash command.)

/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes -user splunk --seed-passwd {{ splunk_user_password }}

Do I need to use a user-seed.conf file instead? Does the -user flag not work (I've tried both -- and - prefixes)?

Thanks, jack
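In case user-seed.conf is the way to go, here is a sketch of what I understand the file should look like, written out by the playbook before the first start (the username is just what I am aiming for; I have not verified this end to end):

# /opt/splunkforwarder/etc/system/local/user-seed.conf
[user_info]
USERNAME = splunk
PASSWORD = {{ splunk_user_password }}

My understanding is that this file is consumed on first startup to create the initial admin account and is removed afterwards.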
I have the following rex substitution in a query to aggregate various log messages (those containing the strings Liveness and Readiness):

index=k8s ("event.go") AND (kind="Pod") AND (type="Warning" OR type="Error") source="*kubelet.log"
| rex mode=sed "s/(object=\"[^\s]+\")(.*)Liveness(.*)/\1 message=\"Liveness Error\"/g"
| rex mode=sed "s/(object=\"[^\s]+\")(.*)Readiness(.*)/\1 message=\"Readiness Error\"/g"
| dedup object message

The above appears to work correctly and provide the desired result. For example, it transforms events like this:

I0903 17:12:49.308289 2024433 event.go:211] "Event occurred" object="namespace1/podfoo" message="Readiness probe failed: + cd /sandbox\\n++ curl --output /dev/null --max-time 28 --silent --write-out '%{http_code}' http://0.0.0.0:20012/heartbeat\\n+ ret=000\\n+ for expected_status in 200\\n+ [[ 000 == 200 ]]\\n+ [[ '' == \\\\t\\\\r\\\\u\\\\e ]]\\n+ false\\n"

nicely into the following:

I0903 17:12:49.308289 2024433 event.go:211] "Event occurred" object="namespace1/podfoo" message="Readiness Error"

However, when I try to stream the above query into stats ("stats count by message"), the transformed events generated by the rex substitution disappear for some reason, and stats seems to act on the original event messages (as if the rex sed had no effect):

index=k8s ("event.go") AND (kind="Pod") AND (type="Warning" OR type="Error") source="*kubelet.log"
| rex mode=sed "s/(object=\"[^\s]+\")(.*)Liveness(.*)/\1 message=\"Liveness Error\"/g"
| rex mode=sed "s/(object=\"[^\s]+\")(.*)Readiness(.*)/\1 message=\"Readiness Error\"/g"
| dedup object message
| stats count by message

With the above, stats appears to aggregate on the original message contents of the events rather than the output of the rex substitution. For example, I see:

message    count
Readiness probe errored: rpc error: code = Unknown ...    1059
Readiness probe failed: HTTP probe failed with statuscode: 503    2003

rather than the substituted message fields aggregated to something along the lines of:

message    count
Readiness Error    3062

How can I get the output of the rex sed (as in the example above) to pass the substituted message fields in the events to stats?
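For comparison, a hedged sketch of an alternative I have been weighing. My understanding (unconfirmed) is that rex mode=sed rewrites _raw, while the message field that stats groups on is still extracted from the original event, so building a separate category field with eval would sidestep the mismatch entirely (message_category is my own name):

index=k8s ("event.go") AND (kind="Pod") AND (type="Warning" OR type="Error") source="*kubelet.log"
| eval message_category=case(match(message, "Liveness"), "Liveness Error", match(message, "Readiness"), "Readiness Error", true(), "Other")
| dedup object message_category
| stats count by message_category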
Greetings, I am very new to Splunk and I am sure my question may have been asked multiple times. I went through multiple articles but was unable to get the answer; it may be very simple for experts. I have two files, both *.json, and need to frame a query that joins one log file with the other.

File1.json
"lvl": "DEBUG"
"msg": "JobID 123456789012345678901234567890123456789012345678901234567890 completed with state: Failed"
"ts": "2021-09-07T16:50:21.901Z"

File2.json
"JobName":"Lambda Handler"
"Ruuid": "123456789012345678901234567890123456789012345678901234567890"

My requirement is to parse File1.json and extract the JobID number alone (in this case 1234....0) and join this derived field with Ruuid in File2.json to form an end result like this:

JobName,JobID,msg
Lambda Handler,123456789012345678901234567890123456789012345678901234567890,JobID 123456789012345678901234567890123456789012345678901234567890 completed with state: Failed

I used substr to extract the JobID from File1 with this, but I am not sure how to use this derived field Ruuid to join with File2:

index=* source="File1.json" msg="*completed with state:*" | eval Ruuid = substr(msg,6,62) | table msg Ruuid

Any inputs would be really helpful to me. Thanks.
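A hedged sketch of the join I am after, assuming the JobID is always a run of digits after "JobID " (I used rex instead of substr so the length does not matter) and that Ruuid is auto-extracted from the File2 JSON:

index=* source="File1.json" msg="*completed with state:*"
| rex field=msg "JobID (?<Ruuid>\d+) completed"
| join type=inner Ruuid
    [ search index=* source="File2.json" | fields JobName Ruuid ]
| rename Ruuid as JobID
| table JobName JobID msg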
Hello, I have the following sample event:

Q17CNB_L_0__20210630-235755_5828.html@^@^2021/06/30@^@^23:57:55@^@^ Q17CNB @^@^

I have the following rex command to extract ID and DateTime fields from it:

rex "(?<ID>.{6}).*?@\^@\^(?<DateTime>\d\d\d\d\/\d\d\/\d\d@\^@\^\d\d:\d\d:\d\d)"

ID looks as expected, but the DateTime field comes out as "2021/06/30@^@^23:57:55". Is there any way to get a DateTime field like "2021/06/30 23:57:55", without the @^@^, from this event? Thank you so much, I appreciate your support in these efforts.
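One idea I have been toying with, as a sketch: capture the date and time separately and rejoin them with a space (Date and Time are just scratch field names):

rex "(?<ID>.{6}).*?@\^@\^(?<Date>\d{4}/\d{2}/\d{2})@\^@\^(?<Time>\d{2}:\d{2}:\d{2})"
| eval DateTime=Date." ".Time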
I have a custom-developed modular input that was built with an older version of the Add-on Builder app. The custom code itself has always been compatible with Python 3, but I'm trying to get the app fully updated to be compatible with the most recent and upcoming versions of Splunk Enterprise and Cloud. To do that, I'm trying to get past Splunk App certification. When I attempt the App Pre-certification validation process in Add-on Builder 4.0.0, it gives me the following error:

Error App Precertification
Check that all the modular inputs defined in inputs.conf.spec are explicitly set the python.version to python3. Modular input "example" is defined in README/inputs.conf.spec, python.version should be explicitly set to python3 under each stanza.
File: README/inputs.conf.spec
Line Number: 3

However, I do have "python.version = python3" specified in the app's inputs.conf.spec file. No matter what I've tried, it keeps giving me the same error, and I can't find any more info on what might be wrong. Any insight or suggestion would be appreciated.
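For reference, a sketch of the layout the validator's wording seems to expect: python.version declared inside the modular input's own stanza rather than at the top of the file or in a default stanza ("example" and the interval line are placeholders based on the error text):

# README/inputs.conf.spec
[example://<name>]
python.version = python3
interval = <value; polling interval in seconds>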
Howdy, I have searched through the settings and can't seem to find the parameter needed to disable the little circles in the new line chart. What am I missing? The circles are quite jarring compared to what my dashboards used to look like. I can't imagine that they aren't configurable?

New chart:

Old chart:
I have a table where the first four columns include an icon. I want word wrap disabled, but when I disable word wrap my icons disappear from the table. I can't seem to figure out what's going on. I tried expanding the table rows (width and height) to see if the icons were hiding, but that does not seem to be the case.

Screenshot of icons with word wrap enabled:

Screenshot of icons disappearing with word wrap disabled (preferred configuration):

Here is the code I am using:

icons.js

require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function(_, $, mvc, TableView) {
    var CustomRangeRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            /* return cell.field; */
            // Only take over rendering for the four icon columns
            return _(['Column1', 'Column2', 'Column3', 'Column4']).contains(cell.field);
        },
        render: function($td, cell) {
            // Replace the cell's text value with a div carrying the icon CSS class
            var value = cell.value;
            if (value == "col1data") {
                $td.html("<div class='col1data'> </div>");
            } else if (value == "col2data") {
                $td.html("<div class='col2data'> </div>");
            } else if (value == "col3data") {
                $td.html("<div class='col3data'> </div>");
            } else if (value == "col4data") {
                $td.html("<div class='col4data'> </div>");
            }
        }
    });

    var sh = mvc.Components.get("table1");
    if (typeof(sh) != "undefined") {
        sh.getVisualization(function(tableView) {
            // Add custom cell renderer and force re-render
            tableView.table.addCellRenderer(new CustomRangeRenderer());
            tableView.table.render();
        });
    }
});

icons.css

/* Note: !important must come before the semicolon; a stray ";" ahead of it
   invalidates the declaration, so those rules have been corrected below. */
#table1 .col1data {
    background-image: url('/static/app/testapp/images/col1.png') !important;
    background-repeat: no-repeat !important;
    background-size: 20px 20px !important;
    background-position: center !important;
    /* background-color: coral !important; */
}
#table1 .col2data {
    background-image: url('/static/app/testapp/images/col2.png') !important;
    background-repeat: no-repeat !important;
    background-size: 20px 20px !important;
    background-position: center !important;
}
#table1 .col3data {
    background-image: url('/static/app/testapp/images/col3.png') !important;
    background-repeat: no-repeat !important;
    background-size: 20px 20px !important;
    background-position: center !important;
}
#table1 .col4data {
    background-image: url('/static/app/testapp/images/col4.png') !important;
    background-repeat: no-repeat !important;
    background-size: 20px 20px !important;
    background-position: center !important;
}
Hi, I want to add a row with total=0 to the table below. Can somebody suggest how? The query used is:

index="monthend" source="Period End Tracker_6Sep'21.csv"
| rename "Activity Name" as Activity
| dedup Activity
| eval status_1=if(Activity=="Month End Closing for AP,AR & GL","Delay","On Track")
| stats count by status_1
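A hedged sketch of one way this might work with appendpipe, assuming the extra row should always be labelled "Total" with a fixed count of 0:

index="monthend" source="Period End Tracker_6Sep'21.csv"
| rename "Activity Name" as Activity
| dedup Activity
| eval status_1=if(Activity=="Month End Closing for AP,AR & GL","Delay","On Track")
| stats count by status_1
| appendpipe [ stats count | eval status_1="Total", count=0 | table status_1 count ]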
Hi, I am using the query below to find all correlation IDs matching a search string and then fetch the SOAPResponse for each with a map search, but it returns only partial results. Does my query look right?

index=pivotal sourcetype=ApplicationTest "SearchString" CorrelationId="*"
| table CorrelationId
| map search="search index=pivotal sourcetype=ApplicationTest $CorrelationId$ SOAPResponse"

Thanks, Bhuvan.
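One possibility worth ruling out: map runs at most 10 subsearches by default (its maxsearches option) and silently drops the rest, which would look exactly like partial results. A sketch raising the cap (1000 is an arbitrary ceiling, and matching the token against the field explicitly is probably also safer than a bare token):

index=pivotal sourcetype=ApplicationTest "SearchString" CorrelationId="*"
| table CorrelationId
| map maxsearches=1000 search="search index=pivotal sourcetype=ApplicationTest CorrelationId=$CorrelationId$ SOAPResponse"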
I have a Splunk query that results in a table. When creating an alert, it sends only the first row of the results, so we are missing the remaining rows. To address this, I want to combine the results into one row, or into a single message to be sent.

QUERY:

| inputlookup gtsnet.csv
| fields "dataset_name"
| search NOT [search index = asvdataintegration source=piedpiper sts_asvdataintegration_symphony_lambda_clewriter_events
    | search event.proc_stat_cd = "SCSS" AND event.evt_dtl.EventDesc = "workflow_found" AND event.module_response.requester = "_SUCCESS" AND event.s3_location = "*"s3://cof-data-*/"*"/lake/gtsnet*"*" AND "event.module_name"=LAMBDA
    | rename event.regrd_dataset_nm as dataset_name
    | table dataset_name
    | format]

Current Format:

Expected Format:
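A hedged sketch of collapsing all rows into one comma-separated value, appended to the end of the query above (assuming a single combined cell is acceptable for the alert):

| stats values(dataset_name) as dataset_name
| eval dataset_name=mvjoin(dataset_name, ", ")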
Hi Team, I want to transpose a few fields as below:

(index=abc OR index=def) category= * OR NOT blocked =0 AND NOT blocked =2
| rex field=index "(?<Local_Market>[^cita]\w.*?)_"
| stats count(Local_Market) as Blocked by Local_Market
| addcoltotals col=t labelfield=Local_Market label="Total"
| append [search (index=abc OR index=def) blocked =0
    | rex field=index "(?<Local_Market>\w.*?)_"
    | stats count as Detected by Local_Market
    | addcoltotals col=t labelfield=Local_Market label="Total"]
| stats values(*) as * by Local_Market
| transpose 0 header_field=Local_Market column_name=Local_Market

Here I want to add one date column (eval Time=strftime(_time,"%m/%d/%y")) which should not be transposed:

date            Local_Market    Total    a    b    c
05-09-2021      INDIA           3        1    1    1
05-09-2021      UK              5        3    2    0
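A sketch of one way this might work, assuming the date is the same for every row of the report: add it after the transpose so it never takes part in the pivot (using the report time via now() rather than _time, since _time does not survive the stats and transpose above), appended to the end of the query:

| eval date=strftime(now(), "%m/%d/%y")
| table date Local_Market *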
Hi, I have a saved search linked to an action that sends an email for each result. The saved search runs every 5 minutes. If I run the search manually I get 5 results, but surprisingly I don't get 5 emails. Instead I get a random number of emails each time, never 5. Looking at the logs with the query:

index=_internal source="C:\\Program Files\\Splunk\\var\\log\\splunk\\python.log" sendemail

I see many errors like:

ERROR sendemail:522 - (421, b'4.3.2 Service not active', 'XXXXXXXX') while sending mail to: XXX@yyy

I searched Google without success for hints to solve this issue. However, when I manually connect to each node of the Exchange cluster using PuTTY, I manage to send emails without any issue. Any idea what I could check? Thanks!