All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I deleted a scheduled report/alert, but empty report emails are still being sent, even though the report no longer appears in the system. How can I stop these emails from deleted scheduled reports? Thanks
I have a query like this:

sourcetype=tseltdw tags{}="request"
| fillnull data.service, data.service1, api_revamp, data.status, tags{}, keyword, keyword_api, data.timeTaken
| eval keyword_api=if(keyword LIKE "user/628%" OR keyword LIKE "user/08%", "user/msisdn", keyword)
| eval data.service1=if(len('data.service')>200, "null", 'data.service')
| eval datex=strftime(_time, "%Y-%m-%d")
| eval datetime=strftime(_time, "%Y-%m-%d %H:00:00")
| eval hourx=strftime(_time, "%H")
| eval data.uri3=if(len('data.uri2')>100, "null", 'data.uri2')
| stats count as trx by datex, hourx, datetime, data.service1, data.status, tags{}, data._id, keyword_api, api_revamp, data.timeTaken
| sort data.timeTaken asc

Can anyone help me return a single value, the p90 percentile of data.timeTaken? Any help is much appreciated, thank you.
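A minimal sketch, assuming the field names above: the perc90() aggregation in stats collapses all matching events into one value, so dropping the by clause yields a single p90 figure.

```spl
sourcetype=tseltdw tags{}="request"
| stats perc90(data.timeTaken) as p90_timeTaken
```

If a breakdown is still wanted, add a clause such as "by data.service1" and each row will carry its own p90.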
Hi, I have multiple hosts and would like to find out the approximate daily log size of each host. Please help me resolve this.
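One common approach (a sketch, assuming you can read the license master's _internal index): the license usage log records bytes indexed per host in the b field, with the host in h.

```spl
index=_internal source=*license_usage.log type=Usage
| eval MB=b/1024/1024
| timechart span=1d sum(MB) as daily_MB by h
```

Note that license_usage.log may squash the per-host breakdown when there are very many hosts, so treat the result as approximate.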
Hello Splunkers. I'm working on some use cases in ES, and one of the requests I've received from upper management is to consolidate all the use cases and their notables and send them a single email every day. Is there any way I can do this from the Splunk UI? Please help. Thanks.
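One way to do this from the UI (a sketch; index=notable and the field name search_name are assumptions that may differ in your environment): schedule a daily report like the following and enable the "Send email" alert action with inline table results.

```spl
index=notable earliest=-24h
| stats count as notables by search_name
| sort - notables
```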
I have a TimeField whose values are formatted like "4 Days 14 Hours 40 Minutes" and sometimes "7 Hours 40 Minutes":

TimeField
4 Days 14 Hours 40 Minutes
7 Hours 40 Minutes
40 Minutes

I want to convert these field values into seconds so that I can sort my data by time. Thanks!
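A hedged sketch, assuming the literal "Days/Hours/Minutes" wording shown above: extract each unit with rex, default missing units to 0, and compute seconds.

```spl
| rex field=TimeField "((?<d>\d+)\s+Days?)?\s*((?<h>\d+)\s+Hours?)?\s*((?<m>\d+)\s+Minutes?)?"
| fillnull value=0 d h m
| eval seconds=d*86400 + h*3600 + m*60
| sort seconds
```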
I have to display data for a specific date. There are two ways to pick the date: one is the system date from the DB, the other is the date taken from an event. Please help me do this in a dashboard.
Hi, I have a single value visualization with trellis. The dashboard was created to show the backlog count plus how long the backlog has been in the system. In a list view, the result is:

sp_type values(sp_tot)
Post-o.splexcbj-HAPQ 1484 (0min)
Post-o.splexcbj-HUEVQ 32 (0min)

I'm using:

eval sp_type=sp_qtype."-".sp_qname
stats sum(sp_msgnum) as "total" by sp_type sp_msgbcklog
eval sp_tot=total." (".sp_msgbcklog."min)"
sort total
streamstats count as "AA"
eval sp_type = printf("%*s", len(sp_type) + AA, sp_type)
where total!="0"
stats values(sp_tot) by sp_type

Then I use the single value visualization. The problem is that I need the result to be in RED, and I need to make sure the results do not overlap each other like in the screenshot. (I added streamstats to sort and ensure that the entry with the highest value is returned first.) Any idea how to achieve that?
Below are my 2 log lines:

1. Successfully received message RECEIVED, payload={\"reference_id\":\"ABCD\"...}
2. Successfully published COMPLETED, payload=(referenceId=ABCD,...

For the given referenceId ABCD, I want to check whether a "COMPLETED" message was published. I am trying a nested search but not getting the right result:

index=xyz "Successfully *" "COMPLETED"
| rex "referenceId=(?<referenceId>[^,]*).*"
| join reference_id in [search index=xyz "Successfully * message" AND ("RECEIVED") | rex "reference_id\\\\\":\\\\\"(?<reference_id>[^\\\\]*).*" | dedup reference_id | fields reference_id]
| stats count by referenceId
| where count < 1

I am expecting output like: ABCD 0
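A join-free alternative to consider (a sketch; the rex escaping for the JSON-escaped quotes may need tuning against your raw events): normalize the two id fields into one, then count COMPLETED events per id.

```spl
index=xyz "Successfully" ("RECEIVED" OR "COMPLETED")
| rex "referenceId=(?<ref>[^,)]+)"
| rex "reference_id\\\":\\\"(?<ref2>[^\\\"]+)"
| eval ref=coalesce(ref, ref2)
| stats sum(eval(if(searchmatch("COMPLETED"), 1, 0))) as completed by ref
| where completed=0
```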
Hello all, I'm having trouble getting the correct time difference when subtracting from now(). Any help would be appreciated. Here is my sample query, where my start timestamp looks like 2005-07-05T04:28:34.453494Z:

index=main
| where status_1="open"
| eval start=strptime(create_time, "%Y-%m-%dT%H:%M:%S.%6QZ")
| eval current_time=now()
| eval diff=current_time-start
| fieldformat diff=tostring(diff, "duration")
| table _time, id_box, diff, start, end
Our event log has a request and a response. The request and response body can be either a JSON object or a JSON array. I need to extract request.body and response.body to construct a field "httpdetails", which is a string. How can I achieve this using a single spath function? Examples of log events:

{
  "message": {
    "request": { "body": {} },
    "response": {
      "body": [
        {
          "id": "85118db6-2d5c-6bb0-ff67-5bc9ef5d4a1f",
          "createdon": "2021-07-08T00:37:02.512Z"
        }
      ]
    }
  }
}

{
  "message": {
    "request": {
      "body": { "$limitafter": "2021-07-08T20:08:29.983Z" }
    },
    "response": {
      "statuscode": 200,
      "body": { "count": "22" }
    }
  }
}

Splunk query:

| spath output=response_data message.response.body
| spath output=request_data message.request.body
| eval request_data=if(isnull(request_data), NULL, request_data)
| eval response_data=if(isnull(response_data), NULL, response_data)
| eval httpdetails="\n"+request_data+"\n-----------------Response---------------\n"+response_data, httpdetails=split(httpdetails, "\n")
| eval details=if(isnotnull(httpdetails), httpdetails, details)

After running this query, "httpdetails" is as shown below. Here response_data for the first log event comes out as NULL instead of the object array. How can I fix this?
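A possible workaround (a sketch, assuming the event structures above): spath with a path ending in {} extracts array elements, while a plain path extracts an object, so pull both shapes and coalesce them.

```spl
| spath output=req_obj path=message.request.body
| spath output=resp_obj path=message.response.body
| spath output=resp_arr path=message.response.body{}
| eval response_data=coalesce(resp_obj, mvjoin(resp_arr, ","))
| eval httpdetails=req_obj."\n-----------------Response---------------\n".response_data
```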
We see on the UF -       /opt/splunkforwarder/etc/apps $ \ls -tlr total 48 drwxr-xr-x 4 splunk splunk 4 Apr 15 2019 SplunkUniversalForwarder drwxr-xr-x 4 splunk splunk 4 Apr 15 2019 introspection_generator_addon drwxr-xr-x 4 splunk splunk 4 Apr 15 2019 search drwxr-xr-x 3 splunk splunk 3 Apr 15 2019 splunk_httpinput drwxr-xr-x 5 splunk splunk 5 Apr 15 2019 learned         These apps are not on the deployment server and they interfere with the configurations on the forwarder. Why are they there and what can be done to remove them?
I have a query used to send an alert, and it has 2 conflicting conditions:

| where alarm=1 — generate some summary information, only when an alarm happens
| where alarm=0 — do something to clear the alarm
| table *

But I can only do one of them. If I put where alarm=1 first, I can only generate the alarm; otherwise, I can only clear it. If I put where alarm=1 OR alarm=0, it cannot generate the summary information for the alarm data, for example with | eventstats list(x) etc. Any suggestions? Thanks in advance.
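One pattern that may help (a sketch, assuming x is the field being summarized): keep both alarm states in the result set and restrict the aggregation with an eval inside eventstats, so the summary only covers alarm=1 events.

```spl
... | where alarm=0 OR alarm=1
| eventstats list(eval(if(alarm=1, x, null()))) as alarm_x
| eval action=if(alarm=1, "raise", "clear")
| table *
```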
Is it possible to Backup / Restore Splunk / ES critical .conf files for DR using the GUI / Web interface?
This is my field with the values inside:

Data
Passed 3rd July
Passed 8th July
Failed 3rd July
Failed 8th July
Total 3rd July
Total 8th July

The desired order is:

Data
Total 3rd July
Passed 3rd July
Failed 3rd July
Total 8th July
Passed 8th July
Failed 8th July

Please help me out; it would be appreciated.
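A sketch, assuming the field is literally named Data: build two numeric sort keys with case() and match(), sort on them, then drop them.

```spl
| eval day_order=case(match(Data, "3rd"), 1, match(Data, "8th"), 2)
| eval type_order=case(match(Data, "^Total"), 1, match(Data, "^Passed"), 2, match(Data, "^Failed"), 3)
| sort day_order type_order
| fields - day_order type_order
```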
Hi All, may I know how many index masters we can add to a search head? We currently have 6 and are trying to add a 7th, but it isn't working out. If someone could point me to documentation or has experience with the same kind of setup, that would be helpful. Thanks, Rahul
I have located and listed the built-in Apps & Add-ons. Where do I find the new versions of the Apps & Add-ons that come with Splunk Enterprise / ES, please? Thank you in advance.
Hi All, I'm new to Splunk and need some help. We are using the Splunk Add-on for ServiceNow in our Splunk instance and sending events to ServiceNow for ticketing/alerting. If ServiceNow is unavailable for several minutes (say 5-10 minutes), will the alerts generated during that time be held in a queue and sent once the nodes are back online, or are they just lost for that time frame? My understanding is that they are lost. Please help me create a search query for skipped/dropped events that did not reach ServiceNow, so that we can send those alerts/events again once ServiceNow is back up.
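A starting point for finding failed deliveries (a sketch; the exact component and message text vary by add-on version, so treat the filter terms as assumptions): modular alert action failures are generally logged to _internal.

```spl
index=_internal sourcetype=splunkd component=sendmodalert log_level=ERROR snow
| table _time, host, action, search_name, message
```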
I have the following event:

2021-07-12T18:40:56 host_abc MAIN 1 19 1.0.12.34 user_abc "ABCDEF GHIJ KLMN"................

From the above, I am trying to extract the string between the double quotes that comes right after the username field, where user_abc is a value of the username field.
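A sketch, assuming the fields always appear in the fixed positions shown: skip the first six space-delimited tokens, capture the username, then capture the first double-quoted string after it.

```spl
| rex "^\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+(?<username>\S+)\s+\"(?<quoted>[^\"]+)\""
```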
Our application developers were looking to poll the service states of their IIS Application Pools, much like the Windows service states (Started/Stopped/Disabled). One of them wrote a PowerShell script that checks whether the Windows host has IIS installed and, if so, reports the state of all Application Pools. This was tested in a test environment without any issues. However, it is now consuming up to 7 GB of memory.

My question is about custom PowerShell scripts. I'm not completely versed in PowerShell, so I can't say whether this code is the most optimal way to achieve the result. Is there a better way? For such a small task, why would it consume so much memory?

#### SysInternals Process Info #####
Command Line: powershell.exe -command "& {get-content "C:\Windows\TEMP\\inputffdc149fdcf785fb.tmp" | "C:\Program Files\SplunkUniversalFowarder\bin\splunk-powershell.ps1" "C:\Program Files\SplunkUniversalForwarder" ffdc1449fdcf785fb}"

The powershell.log file only has two lines when it runs:

07-09-2021 11:11:58.8862744-5 INFO start splunk-powershell.ps1
07-09-2021 11:12:00.5243172-5 INFO launched disposer

The temp file contains info about the app-pool.ps1 stanza:

SplunkServerUri:https://127.0.0.1:8089
SplunkSessionKey:<redacted>
stanzas
stanza:App-Pool-State
event_group:-1,1
index:appdevadmin_servers
script:. "$SplunkHome\etc\apps\loves_ta_windows_appdev\bin\powershell\app-pool.ps1"
source:powershell://App-Pool-State
sourcetype:Windows:AppPool

###### Inputs.conf ######
[powershell://App-Pool-State]
script = . "$SplunkHome\etc\apps\loves_ta_windows_appdev\bin\powershell\app-pool.ps1"
schedule = */5 * * * *
disabled = 0
sourcetype = Windows:AppPool
index = appdevadmin_servers

###### app-pool.ps1 #####
If (Get-WmiObject -Class Win32_ServerFeature -ComputerName $env:computername | Where-Object {$_.name -like "Web Server (IIS)"}){
    Import-Module WebAdministration
    $ApplicationPools = Get-ChildItem IIS:\AppPools;
    Foreach ($ApplicationPool in $ApplicationPools){
        $ApplicationPoolName = $ApplicationPool.Name;
        $ApplicationPoolState = $ApplicationPool.State;
        If ($ApplicationPool.processModel.identityType -eq 'SpecificUser'){
            $UserIdentity = $ApplicationPool.processModel.UserName;
        } else {
            $UserIdentity = $ApplicationPool.processModel.identityType;
        }
        Write-Output "ApplicationPoolName=`"$ApplicationPoolName`" ApplicationPoolState=`"$ApplicationPoolState`" UserIdentity=`"$UserIdentity`"";
    }
}
Hi All, I am quite new to Phantom. I have written a few playbooks that work perfectly as intended when run from the debugger. However, when the playbooks are called via automation, they start executing but stop partway through before completing. Errors/warnings are seen in the container. How is it that a playbook runs fine when called manually from the debugger but not when called by automation? Any leads would be appreciated. Thanks, Shaquib