All Topics


I am trying to use the same token twice within Dashboard Studio. I have a dropdown that assigns a token to filter data, and I also want to set that same token on drilldown of a chart.

On a dropdown input I set the token tk_index. On a table or chart I then try to set the same token tk_index. When I use the "static" option it complains that the "token name is duplicate". When I use the "predefined" option, it simply refuses to set it (setting to row.title.value).

Any ideas?
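A possible workaround (a sketch, not verified against every Dashboard Studio version): edit the chart's drilldown directly in the dashboard's JSON source as an eventHandlers entry, which sidesteps the editor's duplicate-name check. The token name and key mirror the question; the surrounding schema is an assumption:

```
"eventHandlers": [
  {
    "type": "drilldown.setToken",
    "options": {
      "tokens": [
        { "token": "tk_index", "key": "row.title.value" }
      ]
    }
  }
]
```

This goes inside the visualization's definition in the dashboard source view, so the same tk_index the dropdown sets is simply overwritten on click.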
I would like to know how to forward data received from a Universal Forwarder through a Heavy Forwarder on to Splunk Cloud, i.e. a UF -> HF -> Splunk Cloud configuration. The data transfer between the UF and the HF is not the issue; what I don't know is how to forward that data from the HF to Splunk Cloud.
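The usual pattern for the HF -> Splunk Cloud leg is to install the Splunk Cloud universal forwarder credentials package (downloadable from the cloud stack) on the HF; it ships an outputs.conf pointing at the stack. A hand-written sketch of roughly what that configuration contains (the host name and SSL settings are placeholders that the credentials app normally supplies):

```
# outputs.conf on the heavy forwarder (illustrative values)
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs.<your-stack>.splunkcloud.com:9997
useSSL = true
```

With this in place, anything the HF receives from the UF on its splunktcp input is forwarded to the cloud stack like any other output.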
Hi,
We have logs of images created in a series, like below. They are identified by a unique series id; the number of events per series is variable.

time_1 image_number:1 series_id:99999
time_2 image_number:2 series_id:99999
time_3 image_number:3 series_id:99999
...
time_n image_number:n series_id:99999

I need to calculate the average time per image created, i.e. (time_n - time_1)/n for each series. We have thousands of series every day. Any tips on how I can achieve this?
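A sketch, assuming series_id is extracted as a field and the index name is a placeholder: stats can compute the first-to-last span and the event count per series in one pass, even across thousands of series:

```
index=images series_id=*
| stats range(_time) AS total_seconds, count AS n BY series_id
| eval avg_seconds_per_image = total_seconds / n
```

range(_time) is latest minus earliest _time within each group, i.e. time_n - time_1, so avg_seconds_per_image matches the (time_n - time_1)/n formula in the question.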
Hi Team,
I am looking for help monitoring a directory on a heavy forwarder that contains a CSV file. Can you please advise which configuration files I need to update, and with what settings?

For example: server1 is the HF, and /opt/abc/file.csv is the path of the file on the HF.

Please advise. Thank you.
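A sketch of the monitor input, with the index and sourcetype as assumptions; it would live in an app's local directory on server1 (e.g. $SPLUNK_HOME/etc/apps/<app>/local/inputs.conf):

```
[monitor:///opt/abc/file.csv]
index = my_index
sourcetype = csv
disabled = 0
```

To monitor the whole directory rather than the single file, the stanza can point at /opt/abc instead. If the CSV header should become fields at index time, the usual companion is a props.conf stanza for that sourcetype with INDEXED_EXTRACTIONS = csv.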
Is there any programmatic way to retrieve the last 15 minutes of traces?
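One programmatic route is Splunk's REST search endpoint; a hedged shell sketch (the host, credentials, and the traces index are all assumptions):

```
curl -k -u admin:changeme \
  "https://splunk-host:8089/services/search/jobs/export" \
  --data-urlencode search="search index=traces earliest=-15m" \
  -d output_mode=json
```

The export endpoint streams results as they are found; the same search can also be run through the Splunk SDKs if a scripted client is preferred.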
Index: generated (10 million new records every day), with the fields username, secret, key.
Lookup file: secrets.csv with the field secret (128-bit strings, 1 million static records).

I am creating a report to check whether any secret from the index is found in the secrets.csv list, and flag it:

index=generated [| inputlookup secrets.csv | fields secret ]
| table username, secret, key

How does the subsearch validate, within the search string, that a secret exists in both the generated index and the inputlookup?
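What happens under the hood (a sketch of the mechanics): the subsearch runs first, and each of its rows is rewritten into a secret="value" term, OR-ed together and appended to the outer search, roughly (values illustrative):

```
index=generated ( secret="value1" OR secret="value2" OR secret="value3" )
| table username, secret, key
```

With 1 million lookup rows this is likely to hit the default subsearch result limits, silently truncating the filter. A lookup-based comparison avoids the subsearch entirely and may scale better:

```
index=generated
| lookup secrets.csv secret OUTPUT secret AS matched
| where isnotnull(matched)
| table username, secret, key
```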
I have two indexes:
Index accounts: [user, payroll]
Index employees: [user, emp_details, emp_information]

I am trying to take all 1 million users in the accounts index and look up the corresponding details for each user in the employees index, which contains 20 million records. I tried something like:

index=accounts user=*
| join type=left user [ search index=employees | fields user, emp_details, emp_information ]
| table user, emp_details, emp_information

But it is not searching all the users and joining all of them.
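At these volumes join tends to hit its subsearch row and time limits, which matches the "not all users" symptom. A join-free sketch that searches both indexes at once and groups by user (field names as in the question):

```
(index=accounts user=*) OR (index=employees)
| stats values(payroll) AS payroll,
        values(emp_details) AS emp_details,
        values(emp_information) AS emp_information
        BY user
| table user, emp_details, emp_information
```

stats scales to large event counts far better than join because no subsearch is involved; users with no employees record simply come out with empty detail fields.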
Any ideas on how to pull a random sample for the logging application that spans the full month and does not specify sources or source types? We're trying to make this generic enough that it can be applied to any system that starts logging, to scan samples of whatever raw data it has logged.

The query that has been used historically only pulls the first 25 of the most recently logged items:

index=co_lob co_id=app1 co_env=prod
| head 25
| stats latest(_time) as latestinput, latest(source) as source, latest(_raw) as latestraw, count by host, index, co_id, sourcetype, co_env
| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(latestinput) AS latestinput
| eval application="app1"
| table application, count, host, index, latestinput, latestraw, source, sourcetype, co_id, co_env

I found the information on random() and tried:

index=co_lob co_id=app1 co_env=prod
| eval rand=random() % 50
| head 50

and was going to go from there to extract into the right table format for the scanning, but even just running for the week to date it times out. I'm trying to get a random 50 or 100 from across an entire month. Event Sampling doesn't work either: even at a ratio of 1 : 100,000,000, for applications logging millions of transactions an hour it causes performance issues and returns too much for review.

Thank you in advance for any guidance.
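A sketch that avoids sorting or ranking millions of events: apply a modulo test to random() so only roughly 1-in-N events survives the filter, then cap the result. The divisor is a guess to be tuned to each application's volume:

```
index=co_lob co_id=app1 co_env=prod earliest=-1mon@mon latest=@mon
| where random() % 100000 = 0
| head 50
```

Because the filter is evaluated per event as the search streams, no large result set ever has to be buffered, unlike `eval rand=... | sort rand`. The surviving events can then be piped into the existing stats/convert/table formatting.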
My coldToFrozen script has stopped working. It might be related to Python 3, but I'm not 100% sure. I've done some tweaking to coldToFrozen.py (#! /opt/splunk/bin python) and I've checked other settings, but all seem to be okay. Are there any commands or tools I can run to help troubleshoot? Where would the errors be logged? Thanks
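Some hedged starting points: bucket freezing is driven by the BucketMover component, so its errors (including the script's stderr) generally land in splunkd.log, and the script can be run by hand with Splunk's bundled Python. Paths below are placeholders:

```
# Search splunkd.log for freeze errors (run in Splunk search):
index=_internal source=*splunkd.log* component=BucketMover

# Run the script manually with Splunk's own Python against one cold bucket:
/opt/splunk/bin/splunk cmd python3 /path/to/coldToFrozen.py \
    /opt/splunk/var/lib/splunk/defaultdb/colddb/<bucket_dir>
```

Running it manually surfaces Python 2 vs 3 syntax errors immediately; the shebang in the question (`/opt/splunk/bin python` rather than a full interpreter path) may also be worth a second look.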
As a result of an inputlookup, I have the following table 1 (a dish that a chef can prepare, and the chef's name):

dish  chef
a     gordon ramsay
b     gordon ramsay
c     Guy Fieri
d     Guy Fieri
f     Jamie Oliver
g     gordon ramsay
h     gordon ramsay
h     Rachael Ray

and I have the following table from another lookup (the restaurant where a chef works, and the chef's name):

restaurant  chef
1           gordon ramsay
2           Guy Fieri
3           Guy Fieri
4           Jaime Oliver
5           Michael Caines

I want to combine the 2 tables into this:

restaurant  dish  chef
1           a     gordon ramsay
1           b     gordon ramsay
2           c     Guy Fieri
2           d     Guy Fieri
3           c     Guy Fieri
3           d     Guy Fieri
4           f     Jamie Oliver
1           g     gordon ramsay
1           h     gordon ramsay
5           null  Michael Caines
null        h     Rachael Ray

Basically, based on tables 1 & 2, how do I get a table telling me the restaurant where a chef works, the dishes that he/she would prepare, and the chef's name? In the things I've tried, I'm able to combine tables 1 & 2 with the join command, but a lot of results end up getting filtered out (e.g. I might end up with one result per chef but not get all the dishes, or one result per dish but not get all the restaurants).
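A likely cause of the filtered results: join keeps only one matching subsearch row per key by default (max=1). A sketch with max=0 (unlimited matches) and type=outer, assuming the two lookup files are named dishes.csv and restaurants.csv:

```
| inputlookup dishes.csv
| join type=outer max=0 chef
    [| inputlookup restaurants.csv ]
| table restaurant, dish, chef
```

One caveat: with dishes.csv on the left, a chef who appears only in restaurants.csv (Michael Caines in the example) still drops out; appending the second lookup and regrouping with stats/mvexpand is the usual fix if a full outer union is required.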
I simply need to timechart the numeric values from a field that is being returned. For example, index=proxy | timechart count by resp_time gives me a separate series per distinct value. I need one line that charts the values themselves; instead it splits them up by how many times it has seen each value.
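If the goal is a single line of response times over time rather than one series per distinct value, an aggregation is the usual shape. A sketch, assuming resp_time is numeric (swap avg for max or perc95 as needed):

```
index=proxy
| timechart span=5m avg(resp_time) AS avg_resp_time
```

`count by resp_time` counts occurrences of each value, which is why the chart splits; `avg(resp_time)` collapses each time bucket into one number.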
Hello, I have an issue with web and syslog indexes not being logged properly. I believe I will need to change the settings of the Splunk forwarders, and I need help modifying the UF configs so that the data is logged correctly. We have a deployment server set up, and I think that is probably the route to go. What does the process look like for doing this?
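A sketch of the deployment-server route, with all names assumed: place the corrected inputs in an app under $SPLUNK_HOME/etc/deployment-apps/ on the deployment server, then map that app to the right forwarders in serverclass.conf:

```
# serverclass.conf on the deployment server (names are placeholders)
[serverClass:web_syslog_uf]
whitelist.0 = web*.example.com

[serverClass:web_syslog_uf:app:fix_web_inputs]
restartSplunkd = true
```

The matching forwarders pull the app on their next phone-home and restart splunkd with the new inputs, so no manual edits on each UF are needed.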
Hello, I am a beginner with Splunk. I made a query and my search result looks like:

text1 text2 text3 response: {
  "status":"UP",
  "object1":{ "field1":"name1", "status":"UP" },
  "object2":{ "field2":"name2", "status":"UP" },
  "object3":{
    "object4":{ "field4":"name4", "status":"UP" },
    "object5":{ "field5":"name5", "status":"UP" },
    "status":"UP"
  },
  "object6":{ "field6":"name6", "status":"UP" }
}

I want to obtain the value of object3.status for a column of a table. How do I do this? With rex field=_raw or with spath? Thank you in advance.
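Since the JSON is preceded by non-JSON text, spath on _raw alone may not parse it; a common pattern is rex to isolate the JSON, then spath on the extracted field. A sketch (field names are illustrative):

```
| rex field=_raw "(?s)response: (?<response_json>\{.+\})"
| spath input=response_json path=object3.status output=object3_status
| table object3_status
```

The (?s) flag lets the capture span line breaks; spath then walks the object3.status path without any hand-written regex for the nested structure.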
Hi. I am new to Splunk and testing it in a lab right now, to see if it will work for us. Some of the docs are a little confusing, so I want to make sure I am understanding things right.

What I need to monitor: events from all servers, AD changes/lockouts, and maybe Fortinet logs as well. We're hoping to use Splunk for AD monitoring and stop paying for Netwrix.

I'm trying to determine how much data would be needed for ingestion. Is ingestion only for importing data, or is it also used in the processing that Splunk does? I currently have the Windows add-on installed in my instance.

My first round of testing:
1. To get logs from servers, do I still need to configure the inputs file in the default directory, enabling ADMON?
2. For AD change monitoring, same as above: do I need to enable it there, or do I just set it up in the AD tab under Settings -> Inputs?

TIA for your help! -Will
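For questions 1 and 2, a sketch of the relevant stanzas from the Splunk Add-on for Windows; values are illustrative, and the add-on's local directory is generally preferred over default so upgrades don't overwrite the settings:

```
# inputs.conf in Splunk_TA_windows/local/ on the monitored hosts
[WinEventLog://Security]
disabled = 0

[admon://default]
disabled = 0
monitorSubtree = 1
```

Enabling these in a conf file and configuring them in the UI under Settings -> Data inputs end up in the same place; the conf-file route is what a deployment server would push to many servers at once.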
I have a requirement to process and correlate data as soon as it comes in. The data has some triggering events which can be identified and used. Is it possible in Splunk to run something based on the incoming data?
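One common shape for this in Splunk is a real-time alert: a search over a rolling real-time window whose alert actions fire whenever a triggering event appears. A hedged savedsearches.conf sketch (the index, trigger string, and stanza name are placeholders; the same alert can be built in the UI via Save As > Alert):

```
[trigger_event_alert]
search = index=main "TRIGGER_EVENT"
dispatch.earliest_time = rt-5m
dispatch.latest_time = rt
enableSched = 1
alert_type = number of events
alert_comparator = greater than
alert_threshold = 0
action.email = 1
```

Alert actions can also run a custom script or webhook, which is where the downstream correlation or processing would hook in.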
Hi All, I am new to Splunk and joined this community seeking help. Please help me clear up the following doubts:

1. If my Splunk deployment is down for an hour and I then get an ad hoc request for data from that hour: once Splunk is back up, do we need to do something (restart the Splunk forwarder?) to restore the data, will the data be re-sent by itself, or will it be lost?
2. What should I check at the instance level when I am unable to see the latest log files/data in Splunk?
3. What should I do if log files are missing from the Splunk forwarder after patching? How do I re-add the files, and what is the correct approach?
I have a series of panels in a dashboard that drill down to the next panel. I discovered that the data I want to drill down on populates in different sections of the event, so I used the field extraction tool in Splunk to create two fields, then used eval and coalesce to create one field:

index="someIndex" sourcetype="FooSource"
| rename Field1 as Foo1 Field2 as Foo2
| eval TotalFoo = coalesce(foo1,foo2)
| chart dc(field3) by "TotalFoo" Field4

The panel I want to populate based on the TotalFoo field won't work. I believe this is because the sub-search runs before the main search, so the TotalFoo field does not exist:

index="someIndex" sourcetype="FooSource"
| rename Field1 as Foo1 Field2 as Foo2
| eval TotalFoo = coalesce(foo1,foo2)
| search TotalFoo="$onClick$"

I'm wondering how to get around this limitation, or whether that is possible?
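One thing to check before the subsearch theory: SPL field names are case-sensitive, so coalesce(foo1,foo2) never sees the renamed Foo1/Foo2 and TotalFoo comes out null. There is also no subsearch in this pipeline; eval runs in order, so with matching case the token filter should see the field. A corrected sketch:

```
index="someIndex" sourcetype="FooSource"
| rename Field1 AS Foo1, Field2 AS Foo2
| eval TotalFoo = coalesce(Foo1, Foo2)
| search TotalFoo="$onClick$"
```

The same case fix applies to the chart query so both panels use an identical TotalFoo.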
Hello, a question about AWS Systems Manager | Splunkbase: is there any nice way to teach Splunk (and SOAR) to trigger incidents defined in AWS SSM Incident Manager? The root cause is that Incident Manager can place calls, and in our case this is almost the only option to make Splunk informative and able to react to alerts outside business hours.
Hey all, I'm attempting to create a query that will compare a specified time frame to the same time frame in each of the four weeks prior, as a line graph. Thanks in advance for any help!
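The timewrap command is built for exactly this comparison: chart the metric across the last five weeks, then wrap the series week-on-week so each prior week becomes its own line. A sketch with an assumed index and metric:

```
index=my_index earliest=-5w@w latest=now
| timechart span=1h count
| timewrap 1w
```

The result is one line per week over a shared one-week x-axis, which a line chart renders as this-week-vs-prior-weeks directly.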
I am trying to pass multiple values using a dropdown input. How can I add multiple values to each choice in a dropdown input? As the user clicks any choice, all the values associated with that choice should be passed and populate a panel based on those values.

<form>
  <label>Demo</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="Variety_token" searchWhenChanged="true">
      <label>Fruit List</label>
      <choice value="111,222,333,444">Mango</choice>
      <choice value="123,456,112">Apple</choice>
      <choice value="555,666,777,888,999">Banana</choice>
      <choice value="753,482">Grapes</choice>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Fruit List</title>
        <search>
          <query>index=* sourcetype=source Fruitid=$Variety_token$ | stats count by Fruitname, Fruitvariety, Fruitid......</query>
          <earliest>-1y@y</earliest>
          <latest>@y</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
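Since each choice's value is already a comma-separated list, one hedged option is to interpolate the token inside an IN() clause, which expands to valid SPL such as Fruitid IN (111,222,333,444). A sketch of the panel query only:

```
index=* sourcetype=source Fruitid IN ($Variety_token$)
| stats count by Fruitname, Fruitvariety, Fruitid
```

Fruitid=$Variety_token$ fails because it renders as Fruitid=111,222,333,444, a single literal; IN() treats the same comma list as multiple values.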