All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, I was wondering if anyone could help me with this simple problem: I'm trying to graph the total number of good calls and bad calls, along with their fail-rate percentages, on a chart. So far I've been able to chart the sums of good calls and bad calls by the respective channel they were on, but the Fail_Rate percentage field that I've tried to define doesn't seem to be working out.

I've tried a few different methods of plotting the Fail_Rate, but at this point I'm questioning whether I've defined the field correctly:

source="C:\\Call_logs" termcodeID=1 OR termcodeID=34 OR termcodeID=7 OR termcodeID=9 OR termcodeID=21 OR termcodeID=27 OR termcodeID=30 OR termcodeID=32 OR termcodeID=34 ChanID!=0
| eval Good=if(termcodeID=1,"Good", "Bad")
| eventstats count(termcodeID) as totalcalls
| eval Fail_Rate=sum((Bad/totalcalls)*100,1)
| chart count over ChanID by Good
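A minimal sketch of one way to get there, assuming termcodeID 1 means a good call and everything else a bad one (field and source names are taken from the question): the query above fails because sum() is not an eval function, and Bad is a value of the Good field rather than a field itself. Letting chart produce the Good and Bad columns first, then deriving the percentage from them, avoids both problems:

source="C:\\Call_logs" termcodeID IN (1, 7, 9, 21, 27, 30, 32, 34) ChanID!=0
| eval CallStatus=if(termcodeID==1, "Good", "Bad")
| chart count over ChanID by CallStatus
| fillnull value=0 Good Bad
| eval Fail_Rate=round(Bad / (Good + Bad) * 100, 1)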
Hello, I wonder if someone could help me out with a query. I'm trying to compare a value against different points in time, for multiple sources. For some reason, appendcols seems to be adding columns without matching on any field, so if the values come out in a different order (not all data sources report at the same times), the report mixes results from different data sources and they don't match. Basically, I need this query to join/group on the actual_data_source field:

index=bi sourcetype=dbx_bi source=automation earliest=-1h@h latest=@h
| bin _time span=1h
| stats sum(error_percentage) as last_hour_percentage by _time,actual_data_source
| appendcols [search index=bi sourcetype=dbx_bi source=automation earliest=-169h@h latest=-168h@h
    | bin _time as last_week span=1h
    | stats sum(error_percentage) as last_week_percentage by last_week,actual_data_source ]
| appendcols [search index=bi sourcetype=dbx_bi source=automation earliest=-673h@h latest=-672h@h
    | bin _time as last_month span=1h
    | stats sum(error_percentage) as last_month_percentage by last_month,actual_data_source ]
| eval last_hour=strftime(_time,"%Y-%m-%d %H:%M (%:::z %Z)"), last_week=strftime(last_week,"%Y-%m-%d %H:%M (%:::z %Z)"), last_month=strftime(last_month,"%Y-%m-%d %H:%M (%:::z %Z)"), change=(last_hour_percentage-last_week_percentage)
| search last_hour_percentage>10
| table actual_data_source, last_hour, last_hour_percentage, last_week, last_week_percentage, last_month, last_month_percentage

Thanks!
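One possible rewrite, sketched on the assumption that actual_data_source is the only key the rows need to line up on: appendcols pastes columns together purely by row position, whereas join matches rows by field value (the usual subsearch limits apply):

index=bi sourcetype=dbx_bi source=automation earliest=-1h@h latest=@h
| bin _time span=1h
| stats sum(error_percentage) as last_hour_percentage by _time, actual_data_source
| join type=left actual_data_source [search index=bi sourcetype=dbx_bi source=automation earliest=-169h@h latest=-168h@h
    | bin _time as last_week span=1h
    | stats sum(error_percentage) as last_week_percentage by last_week, actual_data_source]
| join type=left actual_data_source [search index=bi sourcetype=dbx_bi source=automation earliest=-673h@h latest=-672h@h
    | bin _time as last_month span=1h
    | stats sum(error_percentage) as last_month_percentage by last_month, actual_data_source]

The rest of the original query (the strftime evals, the filter, and the table) can stay as it is.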
Hi, Got a message from Splunk that our universal forwarder certificate package will be expiring soon. When trying to update the package following their instructions for installing the credentials package (which work on a new/clean install), it returns that we need to use the update argument:

App "100_XXXX_splunkcloud" already exists; use the "update" argument to install anyway

This is the syntax used (following Splunk documentation) that returns the message:

.\splunk install app ../etc/apps/splunkclouduf.spl -auth xxx:xxxxxxx

What is the syntax we should use to force the update? I have tried every way I can think of and nothing works. Thanks!
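A sketch of what usually resolves this, assuming the standard Splunk CLI flag set: the install app command accepts an -update flag, so appending -update 1 should replace the existing app in place:

.\splunk install app ../etc/apps/splunkclouduf.spl -update 1 -auth xxx:xxxxxxx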
Hello, I am trying to come up with something that will automatically enrich events with country information based on the src_ip field in the events. I understand that the iplocation command can do this at search time. Is there any way to get this done automatically using props.conf? I am expecting there to be a lookup file we could leverage to achieve this, but I cannot find any. Cheers.
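iplocation itself cannot be invoked from props.conf, but an automatic lookup can be, provided you maintain your own IP-to-country table. A sketch under that assumption, with hypothetical names (ip_country.csv would need cidr_range and country columns, and your_sourcetype stands in for the real sourcetype):

# transforms.conf (hypothetical lookup name and CSV)
[ip_to_country]
filename = ip_country.csv
match_type = CIDR(cidr_range)

# props.conf
[your_sourcetype]
LOOKUP-ip_to_country = ip_to_country cidr_range AS src_ip OUTPUT country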
Hi, I have the following event as an example.

Properties: {
    Path: /v1.0/locations/branches
    QueryString: ?branchNumbers=5318&
    RequestPath: /v1.0/locations/branches
    StatusCode: 404
    TraceId: 3f39adaf-ae24-44f4-b5cb-f8c49be023a0
}

I am trying to query this using the below search:

index=myIndex "Properties.RequestPath"!="*/v1*/events/*" "Properties.RequestPath"!="*_status*" "Properties.StatusCode">399 "Properties.TraceId"!=""
| dedup "Properties.TraceId"
| table "Properties.RequestPath" "Properties.TraceId "Properties.StatusCode" "Properties.QueryString"

The above query returns the RequestPath and the TraceId just fine, but StatusCode and QueryString are all blank, and when I check the tab it says NULL.

Can anyone please help?
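One thing that stands out in the search as posted: the quote after "Properties.TraceId is never closed, so the table command treats "Properties.TraceId "Properties.StatusCode" as a single field name, which would leave exactly those columns empty. A sketch with balanced quotes:

index=myIndex "Properties.RequestPath"!="*/v1*/events/*" "Properties.RequestPath"!="*_status*" "Properties.StatusCode">399 "Properties.TraceId"!=""
| dedup "Properties.TraceId"
| table "Properties.RequestPath" "Properties.TraceId" "Properties.StatusCode" "Properties.QueryString"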
Hello Friends,

Basesearch | Table workflowname runid count status

When it's searched, the results are as below:

workflowname   runid  count  status
Workflowname1  123    5      Completed
Workflowname2  456    7      Paused
Workflowname1  789    8      Completed
Workflowname3  1011   4      Running
Workflowname1  1013   4      Running
Workflowname2  432    8      Completed

I have configured an alert to trigger when the number of results is greater than 0, which means all of the above results are part of the email alert notification. When I use the suppress option with workflowname as the field name, only one result is received per email alert notification.

Example of how the email is received now:

Email received for Workflowname1:
workflowname   runid  count  status
Workflowname1  123    5      Completed

Email received for Workflowname2:
workflowname   runid  count  status
Workflowname2  456    7      Paused

Can someone help out here? A separate email alert should be triggered with all the results for each unique workflowname.

Expected: one mail for Workflowname1:
workflowname   runid  count  status
Workflowname1  123    5      Completed
Workflowname1  789    8      Completed
Workflowname1  1013   4      Running

Another email for Workflowname2:
workflowname   runid  count  status
Workflowname2  456    7      Paused
Workflowname2  432    8      Completed

A separate email for Workflowname3:
workflowname   runid  count  status
Workflowname3  1011   4      Running

Looking forward to hearing how to achieve the above result. Thanks for the support.
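A sketch of one common approach, assuming the alert can be set to trigger "For each result" and throttled on workflowname: collapse the rows to one result per workflowname first, so each email carries every run for that workflow.

BASE_SEARCH
| stats list(runid) as runid, list(count) as count, list(status) as status by workflowname

With one row per workflowname, the per-result trigger sends one email per workflow, and the list() fields keep all of its runs together in that email.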
The Health Check from my monitoring console keeps "loading". I pressed F12 and it shows a few errors. If I click one of the 404 errors, I get an XML document. Any idea what is happening? The Health Check worked before without any issue.
Hello All, I have been searching for "how to" but have not had much luck. I have this search, which I run in real time and test with a fixed time range (like 15 min, etc.):

sourcetype=linux_secure eventtype="ssh_open" OR eventtype="ssh_close"
| eval Date=strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval UserAction=case(eventtype="ssh_open","On",eventtype="ssh_close","Off",1==1,UserAction)
| stats last(UserAction) by Date,host,user,UserAction
| sort - Date

This search gives me a user, a host, and an "On" if the user logs on and an "Off" if the user logs off. I would like to not show the "Off" condition when the user logs off, i.e. make the "On" line in the search result go away (disappear).

How might I do this? Thanks for a great source of info, eholz1
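A minimal sketch, assuming the goal is to show only users whose most recent event is a logon and to drop a user's rows once they log off: keep one latest row per host/user pair, then filter on it.

sourcetype=linux_secure eventtype="ssh_open" OR eventtype="ssh_close"
| eval UserAction=case(eventtype="ssh_open","On", eventtype="ssh_close","Off")
| stats latest(UserAction) as UserAction latest(_time) as _time by host, user
| where UserAction="On"
| eval Date=strftime(_time, "%Y-%m-%d %H:%M:%S")
| sort - Date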
In Splunk, each user role is allocated a threshold memory limit. Once we exceed the limit (in the form of running many/large search queries), we probably end up with the error "Waiting for queued job to start". Is there a way to check the memory usage of my user profile (and/or another specific user) in Splunk? I would like to check the usage details, as that would help me optimise my searches and avoid the error. I tried to find it in the logs of the `_internal` index, but was unable to find the exact information. Could anyone please help with this?
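One place to look, sketched from the resource-usage introspection logs rather than _internal (the field names below are as they appear in recent Splunk versions; verify against your environment): per-search memory is logged in _introspection with the owning user attached.

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*
| rename data.search_props.user as user, data.search_props.sid as sid, data.mem_used as mem_used_mb
| stats max(mem_used_mb) as peak_mem_mb by user, sid
| sort - peak_mem_mb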
Hello, I'm trying to retrieve all the host-sourcetype combinations that are not captured by any data model. I have a perimeter with all the assets to verify, and I need to check whether they fit some DM or not. I can't wrap my head around it, unfortunately. Does anyone have any ideas?

Thank you.
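A sketch of one way to frame it, assuming a known list of data models: list everything, then subtract what each model captures. Network_Traffic is just a placeholder here; tstats cannot wildcard data model names, so the NOT subsearch has to be repeated for each model in scope, and the usual subsearch result limits apply.

| tstats count where index=* by host, sourcetype
| search NOT [| tstats count from datamodel=Network_Traffic by host, sourcetype | fields host, sourcetype]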
I want to search the below events in the base search. However, they are not displayed when I use the where command. They are only shown when I use the spath and search commands as below. Any idea why the where command is not working even though the filter criteria are matched? It's working for other events with the same filter criteria below. The only difference I can see is the multiline stack_trace field that is missing in the other events. Could that be the issue?

Query that does not give any results even though qualifying events are there:

BASE_SEARCH | where like(MOP,"MC")

Query that yields the results:

BASE_SEARCH | spath MOP | search MOP=MC

Event:

{
    LEVEL: ERROR
    MESSAGE: Failed to process
    stack_trace: Exception trace..
        at blah
        at blah
    MOP: MC
}
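That is a plausible explanation: automatic field extraction often gives up on events like this (the multiline stack_trace breaks it), so MOP simply doesn't exist yet when where runs, while spath re-parses the event explicitly. A sketch that keeps where but extracts first; note also that like() without a % wildcard behaves as an exact match:

BASE_SEARCH
| spath MOP
| where like(MOP, "MC%")

For an exact match, | where MOP=="MC" after the spath would do the same job.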
We are configuring the Salesforce-Splunk integration in our Salesforce sandbox. We followed the documentation provided by Splunk, added the Salesforce add-on, and were able to authenticate successfully. We have configured the inputs with OAuth authentication and validated successfully. However, we do not see any logs captured in our Splunk, and additionally we see the following in the logs.

Splunk log:

2022-10-10 20:54:56,778 INFO pid=28611 tid=MainThread file=input_module_sfdc_event_log.py:collect_events:333 | [stanza_name=Test_Event_Logs] Collecting events started.
2022-10-10 20:54:56,779 WARNING pid=28611 tid=MainThread file=sfdc_common.py:key_configured:223 | [stanza_name=Test_Event_Logs] Salesforce refresh_token is not configured for account "Sandbox_POC". Add-on is going to exit.

Any advice is greatly appreciated.
I need to split the below log files into an Excel-like table.

My log file is:

2022-05-25 13:00:02 100.200.190.70 - test [12345]dele /TestingFile+-+END+-+GOD+WEL+SOONER+-+SFTP.txt - 220- 105 - 443
2022-06-30 12:05:08 200.231.150.150 - welcome [98765]created /TestingFileFromSource+-+COME+-+THE+END+Server+-+FileName.csv - 226 - 19 - 22

Expected result (I tried some regular expressions but no luck):

Field1               Field2           Field3   Field4  Field5   Field6                                                     Field7  Field8  Field9
2022/05/25 13:00:02  100.200.190.70   test     12345   dele     TestingFile END GOD WEL sooner SFTP.txt                    220     105     443
2022/06/30 12:05:08  200.231.150.150  welcome  98765   created  TestingFileFromSource COME THE END Sending FileName.csv   226     19      22
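A sketch, assuming the two sample lines are representative of the format (the field names below are placeholders for Field1-Field9, and the regex tolerates the occasional missing space before a dash, as in "220-"):

... | rex "^(?<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?<src_ip>\S+) - (?<user>\S+) \[(?<id>\d+)\](?<action>\w+) /(?<file>\S+)\s*-\s*(?<code1>\d+)\s*-\s*(?<code2>\d+)\s*-\s*(?<code3>\d+)"
| eval file=replace(replace(file, "[+]-[+]", " "), "[+]", " ")
| table timestamp, src_ip, user, id, action, file, code1, code2, code3

The nested replace first turns the "+-+" separators into single spaces, then converts the remaining + signs to spaces, which matches the expected Field6 values.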
Hi everyone, In my search, I set bucket span=2h on _time. It returns only the hours which have data. There are some hours where no data is returned, so they are not shown in the result. I want to find them, so I use makecontinuous.

Raw data:

_time             id  count
10/10/2022 16:00  1   12
10/10/2022 18:00  1   14
11/10/2022 08:00  1   15
11/10/2022 10:00  1   54
10/10/2022 16:00  2   78
10/10/2022 18:00  2   45
10/10/2022 20:00  2   5
11/10/2022 00:00  2   6

Expectation:

_time             id  count
10/10/2022 16:00  1   12
10/10/2022 18:00  1   14
10/10/2022 20:00
10/10/2022 22:00
11/10/2022 00:00
10/10/2022 20:00
10/10/2022 22:00
11/10/2022 00:00
11/10/2022 08:00  1   15
11/10/2022 10:00  1   54
10/10/2022 16:00  2   78
10/10/2022 18:00  2   45
10/10/2022 20:00  2   5
10/10/2022 22:00
11/10/2022 00:00  2   6

After that, I want to fill the null ids with the previous id, and the null counts with 0. I can do it for a single id, but makecontinuous doesn't work like that for multiple ids (in the example I have 2, but in reality I have more). Do you have any ideas, please?
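A sketch of one workaround, assuming id has a manageable number of distinct values: pivot the ids into columns so makecontinuous only has to fill a single _time axis, then unpivot back.

... | xyseries _time id count
| makecontinuous _time span=2h
| fillnull value=0
| untable _time id count
| sort id, _time

Filling the id down is then unnecessary, because each id travels as a column name through the pivot, and fillnull before the untable takes care of the count=0 requirement.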
Hi to all, after upgrading from version 3.8 to version 3.10.0 we had to rename every input name containing a . (dot) or a - (dash); otherwise they were not working. Does anyone know if this is normal behavior? It took a long time to understand the issue, especially because nothing was written in the dbx log. Please check this issue and, if confirmed, update the documentation in Splunk Docs.
I need to set the date range to month-to-date in Splunk.

<earliest>now</earliest>
<latest>mon</latest>
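A sketch of the usual month-to-date tokens, assuming this is a Simple XML time range: the snippet above has the two values swapped, and the snap-to unit needs the @ prefix.

<earliest>@mon</earliest>
<latest>now</latest>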
The visualization is visible in edit mode but not in the non-edit/normal panel. The panel is empty in the UI, but the graph is visible in edit mode. I checked the query; it is working and everything is fine. An answer to this would be highly appreciated. Thanks
Hi, is it possible to hide the description of a dashboard? I found the option to hide the title including the description, but I just want to hide the description.
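One sketch, using the common hidden-row CSS trick in Simple XML; the .dashboard-description selector is an assumption about your Splunk version, so confirm the actual class with the browser dev tools first:

<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        .dashboard-description { display: none; }
      </style>
    </html>
  </panel>
</row>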
The deployment server and the UF both run on Linux. On the deployment server the app is owned by splunk:splunk, but when I push the app to the UF, the ownership changes to root:root and the permissions change as well. What configuration do I need to change so that the owner and permissions of the app do not change? The Splunk service runs as the splunk user.
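Deployed apps are written by whatever user the forwarder's splunkd process actually runs as, so root:root ownership usually means the UF itself is running as root, regardless of what the service definition says. A quick check (plain shell, nothing Splunk-specific):

ps -eo user,comm | grep splunkd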
I wrote an external command in Python, and the only way I can get it to work is to put a | makeresults before it in the search:

| makeresults | mycustomcommand

My command just pulls back an array of data through a REST call. I am not passing it any arguments. I have tried setting streaming to both "true" and "false". I have also tried setting generating to both "true" and "false" in commands.conf. Can someone tell me the correct settings so I can just run:

| mycustomcommand

Currently, if I run it like that, I do not get any results (and no errors either). Any help would be appreciated. Thanks, -Bob
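A sketch of the generating-command pattern with the Splunk Python SDK (splunklib), which handles the protocol details that make a bare script fall over when there are no incoming events. The REST endpoint is a placeholder for your real call, and the commands.conf stanza assumes the chunked (v2) protocol:

[mycustomcommand]
filename = mycustomcommand.py
chunked = true

mycustomcommand.py:

import sys
import time

import requests  # assumption: available to the app's Python environment
from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration

@Configuration()
class MyCustomCommand(GeneratingCommand):
    def generate(self):
        # Hypothetical endpoint; substitute your real REST call here.
        response = requests.get("https://example.com/api/items")
        for item in response.json():
            # Each yielded dict becomes one event in the search results.
            yield {'_time': time.time(), '_raw': str(item)}

dispatch(MyCustomCommand, sys.argv, sys.stdin, sys.stdout, __name__)

With GeneratingCommand, the SDK declares the command as generating for you, so | mycustomcommand should run on its own without the leading | makeresults.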