All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.
I am consuming data via an API and want to calculate the average time it took for all my customers. After each ingestion (data consumed for a particular customer), I log a time metric for that customer, which I chart with:

timechart span=24h avg(total_time)

To calculate the average I cannot simply extract the time field and take avg(total_time): if customerA completes ingestion in 1 hour while customerB takes 24 hours, customerA will be logged 24 times and customerB only once, skewing the average downward. How do I build a filter so that, over a period of, say, 7 days, I keep only the log line with the maximum total_time for each customer, i.e. one log line per customer holding that customer's maximum total_time over the 7-day window?
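A sketch of one approach, assuming the events live in an index named my_index and carry a customer field (both names are placeholders for your environment): keep only each customer's maximum total_time in the window, then average those maxima.

```
index=my_index earliest=-7d@d latest=now
| stats max(total_time) as total_time by customer
| stats avg(total_time) as avg_time
```

The first stats collapses each customer to a single row (the maximum total_time), so the second stats averages one value per customer instead of one value per log line.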
Hi,

The splunk service on our search head stops frequently. Previously it would come back up after a restart, but now the service does not start at all. I see the errors below:

ERROR ScriptRunner - Error setting up output pipe.
ERROR AdminManagerExternal - External handler failed with code '-1' and output: ''. See splunkd.log for stderr output.

Any suggestions on how to fix this?
I want to search for endpoints of the form /api/work/12345678, i.e. /api/work/ followed by an 8-digit number. My query below returns all three endpoints in the logs, but I only want the ones matching /api/work/12345678 exactly.

Search query: cf_app_name="preval" cf_space_name="prod" msg="*/api/jobs/*"

My logs contain:

msg: abc - [2021-08-06T06:49:11.529+0000] "GET /api/work/12345678/data HTTP/1.1" 200 0 407 "-" "Java/1.8.0_222"
msg: abc - [2021-08-06T06:49:11.529+0000] "GET /api/work/12345678 HTTP/1.1" 200 0 407 "-" "Java/1.8.0_222"
msg: abc - [2021-08-06T06:49:11.529+0000] "GET /api/work/12345678/photo HTTP/1.1" 200 0 407 "-" "Java/1.8.0_222"

Thanks
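One way to sketch this is with the regex command, keeping only events where the path ends immediately after the 8 digits (the original query filters on /api/jobs/ while the sample logs show /api/work/; adjust the literal to whichever path applies):

```
cf_app_name="preval" cf_space_name="prod" msg="*/api/work/*"
| regex msg="GET /api/work/\d{8} HTTP"
```

Requiring a space and "HTTP" right after the digits excludes the /data and /photo variants.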
All my log statements have the format below:

{
  "source": "stdout",
  "tag": "practice/myapplication:4444a76b917",
  "labels": {
    "pod-template-hash": "343242344",
    "version": "9216a76b917b8258a1ee6de7d3bbf9a78ca59f1f",
    "app_docker_io/instance": "my-application"
  },
  "time": "1628235185.043",
  "line": "2021-08-06T07:33:05.043Z LCS traceId=a83a082592cf2275, spanId=a83a082592cf2275 LCE [qtp310090733-278] ERROR c.p.p.c.a.ErrorHandlerAdvice.logErrorDesc(34) - ERROR RESPONSE SENT",
  "attrs": {
    "image": "practice/myapplication:4444a76b917",
    "env": "dev",
    "region": "local",
    "az": "us-west"
  }
}

I want to extract the timestamp from the beginning of each "line" field and sort my results by that timestamp. I have no experience with Splunk search queries. Can someone help?
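A sketch, assuming the JSON is auto-extracted so a field named line is available (index and sourcetype are placeholders; the rex and strptime patterns match the sample above, and the %3N subsecond code may need adjusting for your Splunk version):

```
index=my_index sourcetype=my_sourcetype
| rex field=line "^(?<line_time>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z)"
| eval line_epoch=strptime(line_time, "%Y-%m-%dT%H:%M:%S.%3NZ")
| sort 0 line_epoch
```

sort 0 removes the default 10,000-row sort limit so all results are ordered.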
Hello guys,

I am creating a dashboard that shows some statistics about the UFs in our environment. While looking for a good way to count the events delivered per index, I noticed something I can't explain at the moment. Hopefully you can shed some light on it.

As I understand it, this gives the number of indexed events on the indexer for the forwarder itself:

| tstats count as eventcount where index=* OR index=_* host=APP01 earliest=-60m@m latest=now by index, sourcetype
| stats sum(eventcount) as eventcount by index

index      eventcount
_internal  11608
win        1337

And this gives the number of events forwarded by the forwarder:

index=_internal component=Metrics host=APP01 series=* NOT series IN (main) group=per_index_thruput
| stats sum(ev) AS eventcount by series

series     eventcount
_internal  1243
win        2876

But the two deliver different values for the same time range (60 min). Does anyone have an idea why this is happening?

Thanks.
BR, Tom
We created new STG Splunk Alerts and enabled them starting July 27. The strange thing is that they cannot send emails to prj-sens-test@mail.rakuten.com or to the MS Teams address 581e7bfc.OFFICERAKUTEN.onmicrosoft.com@apac.teams.ms for any new alert that fires.

Since we migrated to a new system, we cloned our old STG Splunk Alerts and then updated the name and the sourcetypes for the new STG Splunk Alerts. Everything else (schedule, email recipient, subject, and email message) is the same. We have since deleted the old STG Splunk Alerts. Our last email from an STG Splunk Alert was on July 28, which came from one of the old alerts.

We are wondering why it suddenly stopped sending emails. Do you have any ideas? This is only an issue in STG Splunk; alerts in PRD Splunk are unaffected.

Our new alerts are here: https://stg-asplunksrch101z.stg.jp.local/en-US/app/sens/alerts

This is for STG Splunk with the following details:
User name: user_sens
Splunk host: https://stg-asplunksrch101z.stg.jp.local/
Group name: Ichiba Business Expansion Group
App team name: ibe
Service ID: 1013
We are planning to use Infrastructure-as-Code (IaC) for our Splunk cluster implementation. Can anyone advise whether there is an API to bootstrap a search head cluster (SHC)?
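For reference, the documented way to elect the first captain is the bootstrap CLI command, which an IaC tool can invoke on one member after all members have been initialized (host names and credentials below are placeholders):

```
splunk bootstrap shcluster-captain \
    -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089" \
    -auth admin:changeme
```

Whether a pure REST equivalent exists for your version is worth confirming against the REST API reference; the CLI itself is scriptable from any provisioning tool either way.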
Please help with a search query that reports when connections to different IPs are successful. Field list: ip -> src_ip, access -> success (you can change the field names to whatever is convenient).
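A sketch under the assumption that each event carries a src_ip field and an access field whose value is success (the index name and any grouping field are placeholders):

```
index=my_index access=success
| stats count dc(src_ip) as distinct_ips values(src_ip) as src_ips
```

Add a by clause (for example by user or by dest) if the successful connections should be grouped per account or per target.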
I need assistance with a side project of mine: is there a way to extract data from only a certain location to create a dashboard? I need a dashboard that gives me a consolidated output and then segregates the results individually by count and other details. I searched but only found docs about webhooks that push data from Splunk to other tools, whereas I need a way to pull data from a tool into Splunk.
I am trying to get an alert when an Exception error happens, but there are many hosts and services. In Splunk the services and hosts aren't arranged, so I manually added the service names and hosts to a CSV file. Is there a way, or a similar condition, to get log events saying "this service is getting an error on this host", along with the message?
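One hedged sketch using a lookup, assuming the CSV is uploaded as a lookup file named services_hosts.csv with host and service_name columns (the index and both column names are placeholders):

```
index=app_logs "Exception"
| lookup services_hosts.csv host OUTPUT service_name
| where isnotnull(service_name)
| stats count latest(_raw) as sample_message by host, service_name
```

This enriches each error event with the service name from the CSV and summarizes one row per host/service pair, which can feed an alert condition such as count > 0.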
I'm using the Alert Manager app together with the email alert action. Sometimes only the email receives the alert notification. I checked the _internal index and found these log entries:

8/6/21 8:10:02.402 AM
08-06-2021 08:10:02.402 +0800 ERROR sendmodalert - action=alert_manager STDERR - UnicodeEncodeError: 'latin-1' codec can't encode characters in position 171-177: Body ('文件完整性告警') is not valid Latin-1. Use body.encode('utf-8') if you want to send it encoded in UTF-8.
host = bj-vm-sec-searchhead-splunk-188, index = _internal, sourcetype = splunkd, splunk_server = bj-vm-sec-searchhead-splunk-188

8/6/21 8:10:02.319 AM
2021-08-06 08:10:02,319 INFO pid="86180" logger="alert_manager_suppression_helper" message="Checking for matching suppression rules for alert=/etc/passwd文件完整性告警" (SuppressionHelper.py:66)
host = bj-vm-sec-searchhead-splunk-188, index = _internal, sourcetype = alert_manager_suppression_helper-too_small, splunk_server = bj-vm-sec-searchhead-splunk-188

8/6/21 8:10:02.248 AM
2021-08-06 08:10:02,248 INFO pid="86180" logger="alert_manager" message="Found job for alert '/etc/passwd文件完整性告警' with title 'HIDS passwd file monitorning'. Context is 'HIDS_all' with 1 results." (alert_manager.py:566)
host = bj-vm-sec-searchhead-splunk-188, index = _internal, sourcetype = alert_manager-too_small, splunk_server = bj-vm-sec-searchhead-splunk-188

8/6/21 8:10:01.733 AM
08-06-2021 08:10:01.733 +0800 INFO sendmodalert - Invoking modular alert action=alert_manager for search="/etc/passwd文件完整性告警" sid="scheduler__splunk_SElEU19hbGw__RMD5bbb47a07bc26a359_at_1628208600_360" in app="HIDS_all" owner="splunk" type="saved"

So it seems that Alert Manager does not support Chinese characters.
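The traceback points at the root cause: somewhere in the alert action's Python code the alert name is encoded as Latin-1, which cannot represent CJK characters. A minimal reproduction in plain Python (the alert name is taken from the logs above):

```python
# The alert name from the logs above, containing Chinese characters.
body = "/etc/passwd文件完整性告警"

# Encoding as Latin-1 fails, which is exactly the UnicodeEncodeError
# logged by sendmodalert.
try:
    body.encode("latin-1")
    latin1_ok = True
except UnicodeEncodeError:
    latin1_ok = False

# Encoding as UTF-8 (the fix the traceback itself suggests) succeeds
# and round-trips the original string.
utf8_bytes = body.encode("utf-8")

print(latin1_ok)                           # False
print(utf8_bytes.decode("utf-8") == body)  # True
```

So a patch to the app's Python (or an upstream fix) switching the offending encode call to utf-8, as the error message suggests, is the direction to investigate.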
What is the purpose of the alerts index on the indexers? I installed the Alert Manager app on the search head and run the alert searches there; I also see some alert log data in the search head's alerts index, but no logs on the indexers. So what is the alerts index on the indexers used for? And if I don't have an alerts index on the indexers, will that affect Alert Manager?
I currently have Splunk running a Python script every 1 minute with the following output:

{"DEMO": 2700, "TEST": 0, "TEST-3": 5}

How can I visualize this data in the Visualization tab? The pie charts etc. seem to support only a single field, whereas I would like all fields returned by the script to be added automatically, preferably in a pie chart or graph where I can sort by value.
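One sketch: take the latest event, extract the JSON keys as fields, and transpose columns into rows so each key becomes a slice. The index and sourcetype are placeholders, and the fields list below (which strips default metadata so only the script's keys survive) may need adjusting for your environment:

```
index=my_index sourcetype=my_script
| head 1
| spath
| fields - _*, host, source, sourcetype, index, linecount, punct, splunk_server
| transpose column_name=metric
| rename "row 1" as count
| sort - count
```

New keys emitted by the script appear automatically, and the resulting two-column metric/count table is what the pie chart visualization expects.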
I have data in JSON format, and the method below does not produce the results I want. When metricValue is itself a nested dictionary, how should I change the extraction?

index="huawei_fc" sourcetype="BW_HWFC:metric:host"
| rename value{}.* as *
| eval t = mvzip(metricId, metricValue)
| mvexpand t
| eval mId=mvindex(split(t,","),0), mValue=mvindex(split(t,","),1)
| stats values(mValue) as mValue by _time, urn, mId

With the SPL above I was unable to extract the data in metricValue.
Good day,

As mentioned, I want to flatten a series of multivalue fields into single-row entries, so that the type becomes "String" rather than "Multivalue". To be clearer, here is my base search:

| makeresults
| eval a="this,is"
| eval b="an,example"
| eval c="group1,group2"
| makemv delim="," a
| makemv delim="," b
| makemv delim="," c
| stats values(a) as a, values(b) as b by c
| eval type_a=typeof(a)
| eval type_b=typeof(b)

This produces multivalue rows, but what I want is a result like this:

c       a     b        type_a  type_b
group1  is    an       String  String
group1  this  example  String  String
group2  is    an       String  String
group2  this  example  String  String

When I add this to the base search:

| mvexpand a
| mvexpand b
| eval type_c=typeof(a)
| eval type_d=typeof(b)

the entries do come out as "String". However, it creates unnecessary combinations (the cross product of "a" and "b") compared with my expected output, since both are multivalue fields. I am not sure I am stating this correctly, but what I want is to remove the "grouping" nature while still displaying each pairing as a single line/row entry, like in a CSV file. One option is to output the results to a CSV or JSON file and do the processing away from Splunk, but doing everything inside Splunk is part of my requirement.

Thanks a lot in advance, and as always, any ideas are greatly appreciated.
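One way to sketch the pairing (rather than the cross product) is to zip the two multivalue fields element by element before expanding, so position 1 of a stays with position 1 of b:

```
| makeresults
| eval a="this,is", b="an,example", c="group1,group2"
| makemv delim="," a
| makemv delim="," b
| makemv delim="," c
| stats values(a) as a, values(b) as b by c
| eval pair=mvzip(a, b)
| mvexpand pair
| eval a=mvindex(split(pair, ","), 0), b=mvindex(split(pair, ","), 1)
| eval type_a=typeof(a), type_b=typeof(b)
| fields - pair
```

mvzip pairs the nth element of a with the nth element of b, so a single mvexpand yields one row per pair instead of one per combination. Note that values() sorts and dedups its input, so this assumes the two fields stay aligned; if exact ordering matters, list(a) / list(b) may be the safer aggregation.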
I am currently using a Python API call to retrieve data from Splunk. I am only getting approximately 1 day of data even when the argument passed asks for 30 days or more, i.e. far less than what the Splunk console shows for the same search. Can someone help?
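A hedged sketch of where the time range is usually lost: the search endpoints take explicit earliest_time/latest_time parameters, and if they are omitted the server-side defaults apply regardless of what the query string says. Host, credentials, and index below are placeholders:

```
curl -k -u admin:changeme https://splunk.example.com:8089/services/search/jobs/export \
    -d search='search index=main' \
    -d earliest_time='-30d@d' \
    -d latest_time='now' \
    -d output_mode=json
```

If you are using the Python SDK instead, the same keys are passed as keyword arguments when creating the job; it is also worth checking that your results loop pages through everything (count=0) rather than stopping at the first batch.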
Hello Splunk Community,

I'm working on an SPL query to give the _time difference between a list of eventTypes according to a specific algorithm. Currently I'm using the query below:

index=apple source=datapipe AccountNumber=*
    (eventType=newyork OR eventType=california OR eventType=boston OR eventType=houston
     OR eventType=dallas OR eventType=austin OR eventType=Irvine OR eventType=Washington
     OR eventType=Atlanta OR eventType="San Antonio" OR eventType=Brazil OR eventType=Mumbai
     OR eventType=Delhi)
| fieldformat _time=strftime(_time, "%m/%d/%Y %I:%M:%S %p")
| sort AccountNumber, _time
| streamstats range(_time) as diff window=2
| eval DifferenceInTimeByEventtime=strftime(diff, "%M:%S")
| table AccountNumber eventType _time DifferenceInTimeByEventtime

The query works; however, I need the time difference computed according to the algorithm below.
The difference is not taken only from the immediately preceding event. The algorithm is as follows (each letter is an event, and the listed difference is what I need for it):

A  eventType=newyork
B  eventType=california   B-A
C  eventType=boston       C-B
D  eventType=houston      D-C
E  eventType=dallas       E-D
F  eventType=dallas       F-D
G  eventType=Irvine       G-E
H  eventType=Irvine       H-F
I  eventType=Atlanta      I-H
J  eventType=San Antonio  J-I
K  eventType=San Antonio  K-I
L  eventType=Mumbai       L-I
M  eventType=Delhi        M-I

I'm looking for the _time difference according to the algorithm above, plus Avg, Max, and Min columns added to the search. I would also appreciate any query optimization. Thanks in advance.
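The irregular pairings (e.g. G-E rather than G-F) are hard to express generically, but the recurring pattern "difference from the latest earlier event of a specific reference type" can be sketched with streamstats carrying each reference type's timestamp forward. A hedged sketch showing only a subset of the mapping (extend the if()/case() pairs for the remaining types; the first-occurrence vs second-occurrence distinction for dallas/Irvine is not captured here):

```
... base search ...
| sort 0 AccountNumber, _time
| eval t_newyork=if(eventType=="newyork", _time, null()),
       t_california=if(eventType=="california", _time, null()),
       t_boston=if(eventType=="boston", _time, null()),
       t_irvine=if(eventType=="Irvine", _time, null())
| streamstats latest(t_newyork) as t_newyork, latest(t_california) as t_california,
              latest(t_boston) as t_boston, latest(t_irvine) as t_irvine
              by AccountNumber
| eval diff=case(eventType=="california", _time - t_newyork,
                 eventType=="boston",     _time - t_california,
                 eventType=="houston",    _time - t_boston,
                 eventType=="Atlanta",    _time - t_irvine)
| eval DifferenceInTimeByEventtime=strftime(diff, "%M:%S")
| eventstats avg(diff) as Avg, max(diff) as Max, min(diff) as Min by AccountNumber
```

streamstats latest(field) carries the last non-null timestamp per account down to each subsequent event, so each row can be diffed against the reference eventType the algorithm prescribes; eventstats appends Avg/Max/Min without collapsing the rows.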
Hi guys,

I have created a simple query with a stats command and I'm able to see the required results. If the same search is run by another user, he does not see any results, but if that user removes the commands from the search query he can see events. I checked that user's permissions and he has the same roles that I have, so I believe it's not a permission issue.
How would I write the props.conf file for the following events? Any help would be highly appreciated, thank you!

Thu, 01 Jul 2021 00:20:04 -0400|system|flush_vulns|INFO|-1|Removing old data in Repository
Thu, 01 Jul 2021 00:20:04 -0400|system|flush_vulns|INFO|-1|Successful removal of old data in Repository
Thu, 01 Jul 2021 00:20:05 -0400|system|flush_vulns|INFO|-1|Removing old data in Repository
Thu, 01 Jul 2021 00:20:05 -0400|system|flush_vulns|INFO|-1|Successful removal of old data in Repository
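A sketch of a props.conf stanza for these events, assuming single-line events and a sourcetype name of your choosing (my_pipe_logs is a placeholder); the TIME_FORMAT matches the leading "Thu, 01 Jul 2021 00:20:04 -0400" prefix:

```
[my_pipe_logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %a, %d %b %Y %H:%M:%S %z
MAX_TIMESTAMP_LOOKAHEAD = 31
```

Search-time extraction of the pipe-delimited columns could then be added separately, for example with DELIMS = "|" and a FIELDS list in transforms.conf.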