All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I'm trying to export, dump, or download a large quantity of data from Splunk. So far I have tried the dump command and the Splunk CLI search command:
- When I ran the search in the UI followed by the dump command, once the search finished I was unable to locate the file. The place I looked was /opt/splunk/var/run/splunk/dispatch, but I may be looking on the wrong system... is the file located on my indexers or on my search heads?
- Using the CLI search command created some memory issues or login failures.
Other options? Note: I am the Splunk admin; 6 indexers, 6 search heads.
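One avenue that may be worth trying for bulk export (a sketch only; hostname, credentials, and the search are placeholders): the REST export endpoint streams results back to the client instead of writing a dump file into a dispatch directory, which sidesteps the question of where the file lands.

```shell
# Stream search results as CSV from the search head's management port (8089 assumed)
curl -k -u admin:changeme \
  https://searchhead.example.com:8089/services/search/jobs/export \
  -d search="search index=main earliest=-24h" \
  -d output_mode=csv > export.csv
```

Because the export endpoint streams as it goes, it tends to behave better for large result sets than building a full job and fetching it afterwards.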
I need to build a server dashboard that displays the servers for each application, with a graphical representation of each server's status showing whether it is up or down. Please see the attached screenshot and suggest how to achieve this.
I have a simple dashboard with a drop-down input field:

<fieldset autoRun="false" submitButton="true">
  <input type="time" searchWhenChanged="false">
    <label>Default (5m realtime)</label>
    <default>
      <earliest>rt-5m</earliest>
      <latest>rtnow</latest>
    </default>
  </input>
  <input type="dropdown" token="instance" searchWhenChanged="true">
    <label>Instance</label>
    <choice value="prod">US</choice>
    <choice value="test">EU</choice>
    <default>prod</default>
  </input>
</fieldset>

I would like to set the img src in the HTML panel below to a specific URL, based on the value of the selected token above:

<panel>
  <html>
    <img src="https://specific-url-based-on-the-token-value-above-html"></img>
  </html>
</panel>

For example, if the "US" option is chosen, the img src URL should be: https://www.US-option.foo.bar.org
Or, if the "EU" option is chosen, the img src URL should be: https://www.EU-option.foo.bar.edu
I have tried "condition match", etc., and cannot seem to figure it out. Any guidance is greatly appreciated.
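One approach that may work here (a sketch, not tested against this exact dashboard): have the dropdown's <change> handler set a dedicated URL token per choice, then reference that token directly in the HTML panel, since $token$ substitution works inside <html> panels:

```xml
<input type="dropdown" token="instance" searchWhenChanged="true">
  <label>Instance</label>
  <choice value="prod">US</choice>
  <choice value="test">EU</choice>
  <default>prod</default>
  <change>
    <!-- conditions match on the selected choice value -->
    <condition value="prod">
      <set token="img_url">https://www.US-option.foo.bar.org</set>
    </condition>
    <condition value="test">
      <set token="img_url">https://www.EU-option.foo.bar.edu</set>
    </condition>
  </change>
</input>

<panel>
  <html>
    <img src="$img_url$"/>
  </html>
</panel>
```

Setting a <default> on the dropdown ensures the img_url token is populated on first load.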
Please, I need help with ingesting data for the Splunk Fundamentals 2 lab exercises. The problem is that I have all the PDF documents for the lab exercises, but not the PDF that lists the files I need to download to complete all 14 labs. I did the training over 2 years ago and want to go through the lab exercises again without purchasing the material from Splunk. Please assist with all the files needed for the 14 lab exercises.
In my testing, I am very impressed by the TrackMe app. It is very full-featured and very mature. Thank you for your efforts in delivering it to the Splunk community! One need we have in our environment is monitoring the contents of lookups. I would love to be able to do this within TrackMe as well... is there a trick to maybe getting this to work today? I played around with 'inputlookup' with the 'raw' and 'from' search types in elastic data sources, but I am not really seeing a way to implement it. Does anyone have any ideas on how/if TrackMe might be able to monitor lookups as well? Thanks, REID
Hello - I would like to change the default value of "Select..." to "Filter..." - how might I go about this? I do not want it to be a choice option, as I am using:

<input type="dropdown" token="user" searchWhenChanged="false">
  <change>
    <condition match="len($value$) &gt; 0">
      <set token="user">user="$value$"</set>
    </condition>
    <condition>
      <set token="user"></set>
    </condition>
  </change>
  <label>Identity</label>
  <allowCustomValues>true</allowCustomValues>
  <fieldForLabel>test</fieldForLabel>
  <fieldForValue>test</fieldForValue>
  <search>
    <query>|makeresults |eval test=""</query>
  </search>
</input>

Thanks in advance.
We see the following:

/opt/apps/splunk/etc/apps/FireEye_iSIGHT_Splunk_App/bin
[splunk@qcsplapps bin]$ python tail_f.py
Traceback (most recent call last):
  File "tail_f.py", line 2, in <module>
    from common import ISIGHT_ADDON_LOGS, get_curr_epoc_time, SPLUNK_LOG_DIR, LOCAL_CONFIG, DEFAULT_CONFIG
  File "/opt/apps/splunk/etc/apps/FireEye_iSIGHT_Splunk_App/bin/common.py", line 23, in <module>
    import splunk.entity as entity
ImportError: No module named splunk.entity

What do we need to do in order to import the splunk.entity module?
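The splunk.entity module lives in Splunk's bundled Python libraries, so the usual cause of this ImportError is invoking the script with the system python. Running it through Splunk's own interpreter, which sets up those library paths, may resolve it (paths as shown in the post; $SPLUNK_HOME here is /opt/apps/splunk):

```shell
cd /opt/apps/splunk/etc/apps/FireEye_iSIGHT_Splunk_App/bin
/opt/apps/splunk/bin/splunk cmd python tail_f.py
```

`splunk cmd` runs the given command with Splunk's environment (PYTHONPATH, library paths) applied, which is what makes `import splunk.entity` resolvable.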
Is there a way to pass a variable value, say a background color value, to a prebuilt panel? I want these panels to be flexible, so that when they are used in various dashboards the color schemes can be consistent within each dashboard.
Hi,

We are using version 1.2.4 on Splunk 7.3.7, and we noticed our interval setting (interval=600, i.e. 10 mins) is not being honoured. When it does make a successful connection and pull the logs, we see "HTTP connection pooling" at the start of the connection in the logs. However, what we see subsequently are continuous connections every minute or so. We tried a Splunk restart to see if this made a difference, but it hasn't changed its behaviour; now we see connections anywhere from every 30-40 mins or longer. Below is an example of the logs:

2020-11-23 15:42:12,824 INFO pid=32193 tid=MainThread file=splunk_rest_client.py:_request_handler:105 | Use HTTP connection pooling
2020-11-23 15:42:12,824 DEBUG pid=32193 tid=MainThread file=binding.py:get:677 | GET request to https://127.0.0.1:9001/servicesNS/nobody/TA-MS_O365_Reporting/storage/collections/config/TA_MS_O365_Reporting_checkpointer (body: {})
2020-11-23 15:42:12,826 DEBUG pid=32193 tid=MainThread file=connectionpool.py:_new_conn:959 | Starting new HTTPS connection (1): 127.0.0.1:9001
2020-11-23 15:42:12,834 DEBUG pid=32193 tid=MainThread file=connectionpool.py:_make_request:437 | https://127.0.0.1:9001 "GET /servicesNS/nobody/TA-MS_O365_Reporting/storage/collections/config/TA_MS_O365_Reporting_checkpointer HTTP/1.1" 200 5509
2020-11-23 15:42:12,835 DEBUG pid=32193 tid=MainThread file=binding.py:new_f:73 | Operation took 0:00:00.010786
2020-11-23 15:42:12,836 DEBUG pid=32193 tid=MainThread file=binding.py:get:677 | GET request to https://127.0.0.1:9001/servicesNS/nobody/TA-MS_O365_Reporting/storage/collections/config/ (body: {'count': -1, 'search': 'TA_MS_O365_Reporting_checkpointer', 'offset': 0})
2020-11-23 15:42:12,842 DEBUG pid=32193 tid=MainThread file=connectionpool.py:_make_request:437 | https://127.0.0.1:9001 "GET /servicesNS/nobody/TA-MS_O365_Reporting/storage/collections/config/?count=-1&search=TA_MS_O365_Reporting_checkpointer&offset=0 HTTP/1.1" 200 7403
2020-11-23 15:42:12,844 DEBUG pid=32193 tid=MainThread file=binding.py:new_f:73 | Operation took 0:00:00.007860
2020-11-23 15:42:12,852 DEBUG pid=32193 tid=MainThread file=binding.py:get:677 | GET request to https://127.0.0.1:9001/servicesNS/nobody/TA-MS_O365_Reporting/storage/collections/data/TA_MS_O365_Reporting_checkpointer/index_continuously_obj_checkpoint (body: {})
2020-11-23 15:42:12,857 DEBUG pid=32193 tid=MainThread file=connectionpool.py:_make_request:437 | https://127.0.0.1:9001 "GET /servicesNS/nobody/TA-MS_O365_Reporting/storage/collections/data/TA_MS_O365_Reporting_checkpointer/index_continuously_obj_checkpoint HTTP/1.1" 200 128
2020-11-23 15:42:12,857 DEBUG pid=32193 tid=MainThread file=binding.py:new_f:73 | Operation took 0:00:00.005903
2020-11-23 15:42:12,858 DEBUG pid=32193 tid=MainThread file=base_modinput.py:log_debug:288 | _Splunk_ Start date: 2020-11-23 14:21:59.057785, End date: 2020-11-23 14:31:59.057785
2020-11-23 15:42:12,858 DEBUG pid=32193 tid=MainThread file=base_modinput.py:log_debug:288 | Endpoint URL: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?\$filter=StartDate eq datetime'2020-11-23T14:21:59.057785Z' and EndDate eq datetime'2020-11-23T14:31:59.057785Z'
2020-11-23 15:42:12,863 DEBUG pid=32193 tid=MainThread file=connectionpool.py:_new_conn:959 | Starting new HTTPS connection (1): reports.office365.com:443
2020-11-23 15:42:16,000 DEBUG pid=32193 tid=MainThread file=connectionpool.py:_make_request:437 | https://reports.office365.com:443 "GET /ecp/reportingwebservice/reporting.svc/MessageTrace?%5C$filter=StartDate%20eq%20datetime'2020-11-23T14:21:59.057785Z'%20and%20EndDate%20eq%20datetime'2020-11-23T14:31:59.057785Z' HTTP/1.1" 200 None
2020-11-23 15:42:16,073 DEBUG pid=32193 tid=MainThread file=base_modinput.py:log_debug:288 | _Splunk_ max date before getting message: 2020-11-23 14:21:59.057785
2020-11-23 15:42:16,893 DEBUG pid=32193 tid=MainThread file=base_modinput.py:log_debug:288 | _Splunk_ max date after getting messages: 2020-11-23 15:41:50.582546
2020-11-23 15:42:16,894 DEBUG pid=32193 tid=MainThread file=binding.py:post:750 | POST request to https://127.0.0.1:9001/servicesNS/nobody/TA-MS_O365_Reporting/storage/collections/data/TA_MS_O365_Reporting_checkpointer/batch_save (body: {'body': '[{"state": "{\\"max_date\\": \\"2020-11-23 15:41:50.582546\\"}", "_key": "index_continuously_obj_checkpoint"}]'})
2020-11-23 15:42:16,926 DEBUG pid=32193 tid=MainThread file=connectionpool.py:_make_request:437 | https://127.0.0.1:9001 "POST /servicesNS/nobody/TA-MS_O365_Reporting/storage/collections/data/TA_MS_O365_Reporting_checkpointer/batch_save HTTP/1.1" 200 39
2020-11-23 15:42:16,928 DEBUG pid=32193 tid=MainThread file=binding.py:new_f:73 | Operation took 0:00:00.033994
2020-11-23 15:42:16,928 DEBUG pid=32193 tid=MainThread file=base_modinput.py:log_debug:288 | _Splunk_ nextLink URL (@odata.nextLink): https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$skiptoken=1999
2020-11-23 15:42:16,928 DEBUG pid=32193 tid=MainThread file=base_modinput.py:log_debug:288 | Endpoint URL: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$skiptoken=1999
2020-11-23 15:42:16,932 DEBUG pid=32193 tid=MainThread file=connectionpool.py:_new_conn:959 | Starting new HTTPS connection (1): reports.office365.com:443
2020-11-23 15:42:18,938 DEBUG pid=32193 tid=MainThread file=connectionpool.py:_make_request:437 | https://reports.office365.com:443 "GET /ecp/reportingwebservice/reporting.svc/MessageTrace?$skiptoken=1999 HTTP/1.1" 200 None
2020-11-23 15:42:19,777 DEBUG pid=32193 tid=MainThread file=base_modinput.py:log_debug:288 | _Splunk_ max date after getting messages: 2020-11-23 15:41:50.582546
2020-11-23 15:42:19,778 DEBUG pid=32193 tid=MainThread file=binding.py:post:750 | POST request to https://127.0.0.1:9001/servicesNS/nobody/TA-MS_O365_Reporting/storage/collections/data/TA_MS_O365_Reporting_checkpointer/batch_save (body: {'body': '[{"state": "{\\"max_date\\": \\"2020-11-23 15:41:50.582546\\"}", "_key": "index_continuously_obj_checkpoint"}]'})
2020-11-23 15:42:19,827 DEBUG pid=32193 tid=MainThread file=connectionpool.py:_make_request:437 | https://127.0.0.1:9001 "POST /servicesNS/nobody/TA-MS_O365_Reporting/storage/collections/data/TA_MS_O365_Reporting_checkpointer/batch_save HTTP/1.1" 200 39
2020-11-23 15:42:19,829 DEBUG pid=32193 tid=MainThread file=binding.py:new_f:73 | Operation took 0:00:00.051431
2020-11-23 15:42:19,830 DEBUG pid=32193 tid=MainThread file=base_modinput.py:log_debug:288 | _Splunk_ nextLink URL (@odata.nextLink): https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$skiptoken=3999
2020-11-23 15:42:19,830 DEBUG pid=32193 tid=MainThread file=base_modinput.py:log_debug:288 | Endpoint URL: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$skiptoken=3999
2020-11-23 15:42:19,834 DEBUG pid=32193 tid=MainThread file=connectionpool.py:_new_conn:959 | Starting new HTTPS connection (1): reports.office365.com:443
2020-11-23 15:42:21,872 DEBUG pid=32193 tid=MainThread file=connectionpool.py:_make_request:437 | https://reports.office365.com:443 "GET /ecp/reportingwebservice/reporting.svc/MessageTrace?$skiptoken=3999

thanks
I have tried the "charting.fieldColors" option, but the color is not being applied to the bar graph. Please help with applying color to the bars.

<row>
  <panel>
    <chart>
      <title>Result Count By</title>
      <search>
        <query>|search</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <option name="charting.axisTitleX.visibility">collapsed</option>
      <option name="charting.axisTitleY.visibility">collapsed</option>
      <option name="charting.axisY.scale">log</option>
      <option name="charting.chart">bar</option>
      <option name="charting.chart.showDataLabels">all</option>
      <option name="charting.chart.stackMode">default</option>
      <option name="charting.drilldown">all</option>
      <option name="charting.fieldColors">{"Data_Entry_Time": 0x66cc66, "Bad": 0xcccc66, "Ugly": 0xcc6666}</option>
      <option name="charting.layout.splitSeries">0</option>
      <option name="charting.legend.placement">none</option>
      <option name="height">332</option>
      <option name="link.exportResults.visible">0</option>
      <option name="link.inspectSearch.visible">0</option>
      <option name="link.openPivot.visible">0</option>
      <option name="refresh.display">progressbar</option>
      <option name="refresh.link.visible">0</option>
      <drilldown>
        <set token="SELECTED">$row.Rule$</set>
        <set token="NAME">$row.Name$</set>
      </drilldown>
    </chart>
  </panel>
</row>
I've just finished adding new physical indexers to our existing multi-site indexer cluster, and I'm trying to figure out the safest method for removing the old virtual indexers. We started off with a 2-site cluster, each site having 3 members, and the following config:

available_sites = site1,site2
multisite = true
replication_factor = 2
site_replication_factor = origin:1,total:2
site_search_factor = origin:1,total:2

When I added the new indexers I created 2 new sites (3 and 4) and amended the config as follows:

available_sites = site1,site2,site3,site4
multisite = true
replication_factor = 4
site_replication_factor = origin:1,total:4
site_search_factor = origin:1,total:4

Now that the upgraded cluster has equalized, I'm trying to figure out the safest method for removing sites 1 and 2. I think it should be:

1. splunk offline --enforce-counts (while watching the indexer clustering dashboard on the CM, waiting for all data to be searchable before offlining the next).
2. Put the cluster into maintenance mode and update server.conf on the CM as follows:

available_sites = site3,site4
multisite = true
replication_factor = 2
site_replication_factor = origin:1,total:2
site_search_factor = origin:1,total:2

3. Disable maintenance mode.

Any and all thoughts/past experiences appreciated.
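As a command-level sketch of the steps described above (assuming the standard indexer-cluster CLI; worth verifying against the clustering docs for your Splunk version before running anything):

```shell
# Step 1: on each site1/site2 indexer, one at a time, waiting for the CM
# dashboard to show all data searchable before offlining the next peer:
splunk offline --enforce-counts

# Step 2: on the cluster manager, before touching server.conf:
splunk enable maintenance-mode
# ... edit available_sites / replication and search factors in server.conf,
# then restart the cluster manager so the new config takes effect ...

# Step 3: once the CM is back up and the cluster looks healthy:
splunk disable maintenance-mode
```

Maintenance mode suppresses bucket fix-up activity while the site list and factors change, which is why it brackets the config edit in step 2.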
Hi, how do I add both a form stylesheet and a form script, please? I tried this but it's wrong:

<form stylesheet="format.css", script="tokenlinks.js">

Could you help me please?
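For what it's worth, the root <form> element takes plain space-separated XML attributes, so the comma between them is what breaks the parse. This form should be accepted, assuming both files sit under the app's appserver/static directory:

```xml
<form stylesheet="format.css" script="tokenlinks.js">
```

Multiple files can also be given as a comma-separated list inside a single attribute value (e.g. script="a.js,b.js"), but never between attributes.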
KV Store initialization failed. Please contact your system administrator.

Unable to initialize modular input "microsoft_graph_security" defined in the app "TA-microsoft-graph-security-add-on-for-splunk": Unable to locate suitable script for introspection.
I downloaded the SplunkUF Credentials Package file by following the instructions here, but I exposed it to the public internet. I realize the package file contains an RSA public and private key pair, which I presume is used to establish the HTTPS connection between the Universal Forwarder and the Splunk Cloud instance. I would like to assume that this key pair is compromised and move on. Is there a way to force Splunk Cloud to generate another Credentials Package file with a new pair of keys? Thanks for your help in advance, cheers.
Hi. How can I check the IP addresses of the indexers from the search head?
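One approach that may work (treat this as a sketch; the available fields can vary by Splunk version): query the search head's own list of distributed search peers over REST, where the peer title is typically host:port:

```spl
| rest splunk_server=local /services/search/distributed/peers
| table title peerName status
```

This only requires the rest command capability on the search head and shows the peers exactly as distributed search sees them.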
Hi, after I try to format a datetime field, it shows empty:

index=_audit action=alert_fired ss_app=omega_core_audit
| convert ctime(trigger_time)
| eval Criticality = case(severity=1,"Info", severity=2,"Low", severity=3,"Medium", severity=4,"High", severity=5,"Critical", 1=1, severity)
| stats earliest(trigger_time) as min_time, latest(trigger_time) as max_time, count by ss_name Criticality
| eval min_time = strftime(min_time,"%Y-%m-%d %H:%M:%S")

The field min_time returns NULL after I try to set the format (max_time is OK, but without the format). Please advise on how to correctly output the datetime fields with the desired format.
Regards, Altin
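A likely explanation, offered as a guess from the search above: convert ctime(trigger_time) rewrites trigger_time as a display string before the stats, while strftime expects an epoch number, so it returns null on the already-formatted value. Keeping the field numeric until the final eval may fix it:

```spl
index=_audit action=alert_fired ss_app=omega_core_audit
| eval Criticality = case(severity=1,"Info", severity=2,"Low", severity=3,"Medium", severity=4,"High", severity=5,"Critical", 1=1, severity)
| stats earliest(trigger_time) as min_time, latest(trigger_time) as max_time, count by ss_name Criticality
| eval min_time = strftime(min_time, "%Y-%m-%d %H:%M:%S")
| eval max_time = strftime(max_time, "%Y-%m-%d %H:%M:%S")
```

Dropping the convert step entirely lets earliest()/latest() operate on epoch values, and strftime then formats both results.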
Hi Everyone! Recently, I was browsing the AppDynamics docs on auto-remediation scripts. I was amazed at how this works and decided to read more about custom actions. Then a question came to mind. In our organization, there is a server dedicated to the Ansible components. Ansible is a really neat tool for automating tasks such as rolling restarts, deletion, and provisioning, all in one playbook. I was wondering if there is a way to trigger an Ansible playbook rather than a single remediation script. I think this would be possible IF Ansible were installed on the monitored server; however, the Ansible components live on a separate server. A workaround, I think, would be to trigger an auto-remediation script on the monitored server, which then runs the playbook command. This method, however, is more tedious, as more hops are involved to trigger a single playbook. Are there any out-of-the-box ways / techniques / workarounds to execute a playbook from the Controller's policy config?
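As a sketch of the workaround described (hostname, user, and playbook path are all placeholders): the custom action script could simply ssh to the Ansible control node and kick off the playbook there, so nothing beyond an ssh client and a key is needed on the monitored side:

```shell
#!/bin/bash
# Hypothetical custom action: trigger a playbook on the remote Ansible control node.
# ansible-control.example.com and the playbook path are assumptions, not real values.
ssh ansible@ansible-control.example.com \
  "ansible-playbook /opt/ansible/playbooks/remediate.yml --limit $HOSTNAME"
```

Limiting the play to the affected host (--limit) keeps a broad remediation playbook scoped to the server that raised the policy event.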
Hi, I need to build a server dashboard that displays the servers for each application, with a graphical representation of each server's status showing whether it is up or down. Please see the screenshot and suggest how to achieve this.
Hello Splunk community,

I would like to use the DLTK app on my free single-instance Splunk deployment. I am using Windows 10. I downloaded the DLTK app and installed Docker Desktop for Windows. I allowed Docker to listen on port 2375, as suggested in many posts. But when I try to complete the DLTK setup, with docker host tcp://localhost:2375 and both endpoint and external URLs set to localhost, I always encounter the following error:

File "C:\Program Files\Splunk\Python-3.7\lib\socket.py", line 716, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it

Does anybody know what I could try to make it work? Thank you in advance for your help!
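Before retrying the DLTK setup, it may help to confirm, outside of Splunk, that anything is listening on 2375 at all; a connection-refused from this check too would point at the Docker Desktop "Expose daemon on tcp://localhost:2375" setting (or a needed Docker Desktop restart) rather than at DLTK:

```shell
curl http://localhost:2375/version
```

If Docker is listening, this returns a JSON blob with the engine version; if not, it fails with the same connection-refused behaviour seen in the Splunk error.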
Hello,

I was wondering: will an event generated 1 month ago but indexed today (with the correct _time, meaning "a month ago") be accelerated by the summarization process? Or is there a way to change the earliest time of the summarization search for the data model acceleration?

Thanks
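On the second part of the question, the summary range for data model acceleration is configurable in datamodels.conf; a sketch with a hypothetical stanza name (settings worth checking against the datamodels.conf spec for your version):

```ini
[my_datamodel]
acceleration = 1
# How far back the acceleration summaries should cover:
acceleration.earliest_time = -3mon
# Optionally limit how far back summaries are (re)built:
acceleration.backfill_time = -3mon
```

As a general rule (hedged, not verified for every version), late-arriving events whose _time falls inside the configured summary range are picked up by subsequent summarization runs over the buckets they landed in.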