All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I have a Docker host (no Kubernetes). Which agent should I install? Thanks.
Hi, I have configured my Windows forwarder to use a custom CA and server certificate. Below is the configuration, and the forwarder is able to connect to the indexer fine.

File: C:\Program Files\SplunkUniversalForwarder\etc\system\local\outputs.conf

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = XXX:9998
clientCert = C:\Program Files\SplunkUniversalForwarder\etc\auth\mycerts\testCertificate.pem
sslPassword = XXX
useClientSSLCompression = true
sslRootCAPath = C:\Program Files\SplunkUniversalForwarder\etc\auth\mycerts\myCAcertificate.pem

[tcpout-server://XXX:9998]

But I am still seeing the message below in splunkd.log:

X509Verify [14596 HTTPDispatch] - X509 certificate (O=SplunkUser,CN=SplunkServerDefaultCert) should not be used, as it is issued by Splunk's own default Certificate Authority (CA). This puts your Splunk instance at very high-risk of the MITM attack. Either commercial-CA-signed or self-CA-signed certificates must be used; see: http://docs.splunk.com/Documentation/Splunk/latest/Security/Howtoself-signcertificates

Any idea if I am missing any configs here?
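One possibility, offered purely as an assumption: the warning is logged by HTTPDispatch, which suggests it refers to the forwarder's own management (splunkd) endpoint rather than the tcpout connection that outputs.conf configures. A minimal server.conf sketch for that endpoint, reusing the certificate paths from the post (the password is a placeholder):

# C:\Program Files\SplunkUniversalForwarder\etc\system\local\server.conf
# Sketch only -- paths reuse the ones from the post; adjust to your environment.
[sslConfig]
serverCert = C:\Program Files\SplunkUniversalForwarder\etc\auth\mycerts\testCertificate.pem
sslRootCAPath = C:\Program Files\SplunkUniversalForwarder\etc\auth\mycerts\myCAcertificate.pem
sslPassword = <certificate key password>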
Hello, how would I confirm that my Splunk configuration is set up for IPv6 traffic in addition to IPv4? Any help would be highly appreciated. Thank you so much.
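A sketch of one way to check, assuming a Linux host and that $SPLUNK_HOME points at the installation directory: look at the effective listenOnIPv6 setting in server.conf and at which address families the Splunk ports are actually bound on.

# Show the effective IPv6 setting from server.conf ([general] listenOnIPv6 = yes|no|only)
$SPLUNK_HOME/bin/splunk btool server list general | grep -i ipv6

# Confirm the management and receiving ports are listening on IPv6 as well as IPv4
netstat -an | grep -E '8089|9997'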
Hello, I have installed the GitHub Add-on for Splunk but I am not currently seeing any data. I think I may have entered incorrect values in the input fields, but I'm not sure where in GitHub I can find the correct ones. Has anyone set this up before who could show me where to find the input field values in GitHub?

Thanks, Sophie
I cannot use any of the fields extracted by spath inside an eval. The result is always null.

Input (formatted for easy reading):

{
  "meta": { "emit_interval_s": 600 },
  "operations": {
    "kv": {
      "Get": {
        "total_count": 4,
        "percentiles_us": { "75": 17747.0, "95": 18706.0, "98": 18706.0, "99": 18706.0, "100": 18706.0 }
      },
      "GetClusterConfig": {
        "total_count": 708,
        "percentiles_us": { "75": 13723.0, "95": 14339.550000000001, "98": 14567.56, "99": 18207.0, "100": 18207.0 }
      },
      "GetMeta": {
        "total_count": 4,
        "percentiles_us": { "75": 15776.75, "95": 16761.0, "98": 16761.0, "99": 16761.0, "100": 16761.0 }
      }
    }
  }
}

And this is the query:

| spath input=json_field
| eval a=operations.kv.Get.percentiles_us.100
| table json_field operations.kv.Get.percentiles_us.100 a

In the output, a is always null, but operations.kv.Get.percentiles_us.100 always displays the correct value. What's happening here?
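For reference: in eval, the dot is the string-concatenation operator, so an unquoted operations.kv.Get.percentiles_us.100 is parsed as a concatenation of separate (mostly nonexistent) fields rather than as one field name. A minimal sketch with the field name wrapped in single quotes, using the same names as the post:

| spath input=json_field
| eval a='operations.kv.Get.percentiles_us.100'
| table json_field operations.kv.Get.percentiles_us.100 a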
I created an add-on with Add-on Builder and a modular input using Python code, but the Inputs page is not available: it returns a 404 error with the message "Failed to load Inputs Page". The inputs.conf file was created just in the local folder and looks like this:

[khoros_api_python]
index = khoros
start_by_shell = false
python.version = python3
sourcetype = khoros_api_python
interval = 86400

What could be wrong?
Hello! I'm struggling with the time ranges within my query. I have two indexes (anonymized):

index=documentation contains the information about which element is mounted in a device.
index=eor contains events for devices.

Now I'm trying to search only for events in index=eor for devices that contain the element COB, over the last xx time range. So I tried to set the time range for the subsearch like this:

index=eor name IN (*) status IN (*)
    [ search index=documentation earliest=1 latest=now()
      | search element=COB
      | table devices ]
| table a, b, c, d

But I'm getting no results.

If I set the time picker to a time range that covers the last events in the documentation index, I do get results...

Greetings Chris
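For what it's worth, a sketch of how this is often structured: the earliest/latest inside the subsearch override the time picker for the subsearch only, and the subsearch has to return a field whose name matches a field in the index=eor events. The field names below (devices, name) are assumptions based on the post, not confirmed:

index=eor status IN (*)
    [ search index=documentation earliest=0 latest=now element=COB
      | fields devices
      | rename devices AS name ]
| table a, b, c, d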
I'm researching a solution for sending Windows event logs to a third-party service that requires them to be in "Snare over Syslog" format, not the RFC-3164-compliant format that Splunk puts them in when using syslog output. Has anyone accomplished this?

We do have a heavy forwarder in our environment that is set up to receive these logs from our universal forwarders, and I know you can use things like SEDCMD to modify data within the logs as they come in, but I haven't found a way to completely reformat them into this new format and send them out. If anyone has done this or has any tips, I'd appreciate it!

This is what the format looks like: Appendix A - Event Output Format - Snare SCWX Windows Agent v5 Documentation - Confluence (atlassian.net)
Hello, I am trying to solve the following problem: HEC on a HF is used for receiving data. In splunkd.log on the Heavy Forwarder I found this error:

ERROR HttpInputDataHandler - Failed processing http input, token name=linux_rh, channel=n/a, source_IP=10.177.155.14, reply=9, events_processed=18, http_input_body_size=8405, parsing_err="Server is busy"

There were 7 messages of this kind during a 10-minute interval. I found that "reply=9" means "server is busy" - this is a message telling the log source to stop sending data because the HF is overloaded (and the log source really did stop sending data). At the same time:

- the parsing, aggregation, typing, httpinput and splunktcpin queues had a 100% fill ratio, while the index queue had a 0% fill ratio;
- the VMware host on which the HF is running was probably overloaded - the CPU frequency on this host is usually about 1 GHz, but it briefly rose to 4 GHz during this period (probably not caused by the Splunk HF);
- there were no ERROR messages in splunkd.log on the IDX cluster, which receives data from the HF in question.

Based on this information, I came to the following conclusion: because the index queue on the HF was not full and there were no ERRORs on the IDX cluster, there was no problem on the IDX cluster or on the network between the HF and the IDX cluster. Due to the VMware host overload, the HF did not have sufficient resources to process messages, so the parsing, aggregation, and typing queues became full. As a result:

- the httpinput and splunktcpin queues filled up,
- the error "HttpInputDataHandler - Failed processing http input" was generated,
- receiving data from the log source stopped.

As soon as the VMware host overload ended (after about 10 minutes), data reception resumed and no data was lost.

Could you please review my conclusion and tell me if I am right? Or is there something more to investigate? And what can I do to avoid this problem in the future? Re-configure the queue settings (set a higher max_size_kb)? Add resources to the VMware host? Or something else?

Thank you very much in advance for any input. Best regards, Lukas Mecir
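For reference, a sketch of a metrics.log search often used to confirm this kind of queue saturation on the HF over the affected window (the host value is a placeholder):

index=_internal source=*metrics.log* group=queue host=<your_hf_host>
| eval fill_pct=round(current_size_kb / max_size_kb * 100, 1)
| timechart span=1m max(fill_pct) by name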
For instance, I want to filter for HTTP from 192.168.0.100. The closest I can get:

remoteAddress = 192.168.0.100
protocol = tcp

Is there no way to include the port? https://docs.splunk.com/Documentation/Splunk/8.2.4/Admin/Inputsconf#Windows_Host_Monitoring
Hi all, I'm trying to do a field extraction of the database name (let's call the field "DBname") from logs that come in 2 formats:

Jan 19 15:58:06 192.168.1.2 Jan 19 15:58:06 Message forwarded from Database1: Oracle Audit blablablabla
Jan 20 06:36:17 192.168.1.3 Jan 20 06:36:17 Database2 journal: Oracle Audit blablablablabla
Jan 21 06:36:17 192.168.1.4 Jan 21 06:36:17 Database_10 journal: Oracle Audit blablablablabla
Jan 22 15:58:06 192.168.1.5 Jan 22 15:58:06 Message forwarded from Database4: Oracle Audit blablablabla
Jan 23 15:58:06 192.168.1.6 Jan 23 15:58:06 Message forwarded from prmds1: Oracle Audit blablablabla
Jan 24 15:58:06 192.168.1.7 Jan 24 15:58:06 Message forwarded from Database_15: Oracle Audit blablablabla
Jan 26 15:58:06 192.168.1.9 Jan 26 15:58:06 Message forwarded from prmds2: Oracle Audit blablablabla
Jan 27 15:58:06 192.168.1.8 Jan 27 15:58:06 fafa32 journal: Oracle Audit blablablablabla

So, the "DBname" field value comes either after "Message forwarded from" or before "journal". Splunk fails with the regex and unfortunately so do I; I suspect the problem is that the two formats are so similar. My question is whether I am missing something with the regex or whether I should approach this in a completely different manner. Thank you for the help!
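For what it's worth, a sketch of one rex pattern that fits the samples above: it anchors on the second timestamp, optionally skips "Message forwarded from", and then captures the token that is followed either by a colon or by " journal:". It may need adjustment for other variants of these logs:

| rex "\d{2}:\d{2}:\d{2}\s+(?:Message forwarded from\s+)?(?<DBname>\S+?)(?::|\s+journal:)"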
Hi All, I would like to know which applications are ingesting the most data and causing license violations. I tried the queries below, but I am not sure they give correct results.

index=_internal source=*license_usage.log type="Usage" splunk_server=*
| eval Date=strftime(_time, "%Y/%m/%d")
| streamstats sum(b) as volume
| eval MB=round(volume/1024/1024,5)
| timechart span=1w avg(MB) by idx

index=_internal source=*license_usage.log type=Usage
| stats sum(b) as bytes by h
| eval MB = round(bytes/1024/1024,1)
| fields h MB
| rename h as host
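For comparison, a minimal sketch of a per-index daily usage search against _internal on the license manager; idx can be swapped for st, h, or s to split by sourcetype, host, or source instead:

index=_internal source=*license_usage.log* type=Usage
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) by idx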
Hello, I have a Windows client and Splunk Enterprise on another Windows machine, connected via a MikroTik router in GNS3. I want to send my browser history to Splunk and view it there. How do I do that? My browser is Google Chrome. In Mozilla Firefox I did it by adding a monitor on the profile directory. Thanks.
Can we populate the raw events from one index into a summary index? If yes, how can I do that? Can you please help me?
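For reference, a minimal sketch of the usual collect-based approach (the index and sourcetype names are placeholders, and the summary index must already exist and be writable):

index=my_source_index sourcetype=my_sourcetype
| collect index=my_summary_index

By default collect writes the events with sourcetype=stash, so they are not counted against the license a second time.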
I just installed the Knowledge Object Overview App for Splunk (SplunkWorks - Contributor: Jason New) and it seems it's missing macros, and most panels don't update. Any suggestions on contacts for support?
Hello, I am trying to collect data from a PowerMax array with the Dell EMC add-on. During my tests in the dev environment (standalone mode) everything worked perfectly. Since I turned on my prod environment, no data is coming in. It seems like my heavy forwarder receives data but doesn't send it on to the indexers. I don't see any errors in the ta_dellemc_vmax_inputs.log file.

Here is some information about my environment.

My dev environment, in which everything works well:
- 1 search head with Linux RHEL 7.9 and Splunk 8.2.3

My prod environment:
- Heavy forwarder: Linux 3.10.0, RHEL 7.9 and Splunk 8.1.6
- Indexers: Linux 3.10.0, RHEL 7.9 and Splunk 8.2.3
- Search heads: Linux 3.10.0 and Splunk 8.2.3

Some logs from ta_dellemc_vmax_inputs.log:

2022-02-07 19:01:55,649 INFO pid=4607 tid=MainThread file=base_modinput.py:log_info:295 | Input: data_input_vmax_xxxx | Array: xxxxxxxxxxxxx | Passed performance timestamp recency check: 1644258000000.
2022-02-07 19:01:55,650 INFO pid=4607 tid=MainThread file=base_modinput.py:log_info:295 | Input: data_input_vmax_xxxx | Array: xxxxxxxxxxxxx | Starting metrics collection run.
2022-02-07 19:01:56,003 INFO pid=4607 tid=MainThread file=base_modinput.py:log_info:295 | Input: data_input_vmax_xxxx | Array: xxxxxxxxxxxxx | Array collection complete.
2022-02-07 19:01:56,085 INFO pid=4607 tid=MainThread file=base_modinput.py:log_info:295 | Input: data_input_vmax_xxxx | Array: xxxxxxxxxxxxx | SRP collection complete.
2022-02-07 19:01:59,258 INFO pid=4607 tid=MainThread file=base_modinput.py:log_info:295 | Input: data_input_vmax_xxxx | Array: xxxxxxxxxxxxx | Storage Group collection complete.
2022-02-07 19:02:00,386 INFO pid=4607 tid=MainThread file=base_modinput.py:log_info:295 | Input: data_input_vmax_xxxx | Array: xxxxxxxxxxxxx | Director collection complete.
2022-02-07 19:02:00,387 INFO pid=4607 tid=MainThread file=base_modinput.py:log_info:295 | Input: data_input_vmax_xxxx | Array: xxxxxxxxxxxxx | Finished collection run.
2022-02-07 19:02:00,388 INFO pid=4607 tid=MainThread file=base_modinput.py:log_info:295 | Input: data_input_vmax_xxxx | Array: xxxxxxxxxxxxx | Completed metrics collection run in 6 seconds.

Please could you help me resolve this issue or give some advice about the configuration? Thanks.
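For reference, a sketch of the outputs.conf stanzas usually checked first on a heavy forwarder that receives data but does not forward it (host names, group name and port are placeholders):

# $SPLUNK_HOME/etc/system/local/outputs.conf on the heavy forwarder
[tcpout]
defaultGroup = primary_indexers
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

A quick sanity check is to search index=_internal for the heavy forwarder's host name from a search head: if its own internal logs are not arriving on the indexers, the forwarding layer itself is the problem rather than the add-on.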
Is it possible to prevent a system admin from adding inputs at a forwarder? I only want sanctioned inputs to be used, i.e. I want our Splunk admins to approve all forwarders and inputs. I thought the deployment server might solve this, but it does not cover all local inputs per se.
I need to add a line that shows the number of results I get in the timeframe of the dashboard search, using the job.earliestTime and job.latestTime tokens, but if I use the replace function it doesn't work. This is the code of the dashboard I used:

<html>
  <div class="custom-result-value">Results: $result$ in the time frame ($stime$ to $ltime$)</div>
</html>
<table id="test_table">
  <search>
    <query>| metadata type=sources | eval lastTime=strftime(lastTime, "%Y-%m-%d %H:%M:%S.%Q"), firstTime=strftime(firstTime, "%Y-%m-%d %H:%M:%S.%Q"), recentTime=strftime(recentTime, "%Y-%m-%d %H:%M:%S.%Q")</query>
    <earliest>-365d@d</earliest>
    <latest>now</latest>
    <sampleRatio>1</sampleRatio>
    <progress>
      <eval token="result">tonumber('job.resultCount')</eval>
      <eval token="ltime">tostring('job.latestTime')</eval>
      <eval token="stime">tostring('job.earliestTime')</eval>
      <eval token="stime">replace('$stime$',"\+\d+:00","")</eval>
      <eval token="stime">replace('$ltime$',"\+\d+:00","")</eval>
    </progress>
  </search>
  <option name="count">10</option>
  <option name="dataOverlayMode">none</option>
  <option name="drilldown">none</option>
  <option name="percentagesRow">false</option>
  <option name="rowNumbers">true</option>
  <option name="totalsRow">false</option>
  <option name="wrap">true</option>
</table>

I get the same result whether or not I use the replace command. The replace command works if used in a regular search.
We encountered an error after we upgraded to a new version of Splunk. This Splunk instance is part of a distributed environment and is one of the indexers within a cluster. Please see the log below after we ran ./splunk status:

Exception: <class 'PermissionError'>, Value: [Errno 13] Permission denied: '/opt/splunk/etc/system/local/migration.conf'
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 1359, in <module>
    sys.exit(main(sys.argv))
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 1212, in main
    parseAndRun(argsList)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 1067, in parseAndRun
    retVal = cList.getCmd(command, subCmd).call(argList, fromCLI = True)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 293, in call
    return self.func(args, fromCLI)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/control_api.py", line 35, in wrapperFunc
    return func(dictCopy, fromCLI)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/_internal.py", line 189, in firstTimeRun
    migration.autoMigrate(args[ARG_LOGFILE], isDryRun)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/migration.py", line 3166, in autoMigrate
    checkTimezones(CONF_PROPS, dryRun)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/migration.py", line 411, in checkTimezones
    migSettings = comm.readConfFile(PATH_MIGRATION_CONF)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli_common.py", line 172, in readConfFile
    f = open(path, 'rb')
PermissionError: [Errno 13] Permission denied: '/opt/splunk/etc/system/local/migration.conf'
Please file a case online at http://www.splunk.com/page/submit_issue

We also tried the "chown" command but still no luck.
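For what it's worth, a sketch of the usual ownership reset, assuming Splunk is intended to run as a dedicated "splunk" user and is installed under /opt/splunk (the user name and path are assumptions):

# stop splunkd, reset ownership on the whole installation, then start as the service user
/opt/splunk/bin/splunk stop
chown -R splunk:splunk /opt/splunk
su - splunk -c "/opt/splunk/bin/splunk start"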
I have the error below showing on the search head, and I've been looking for the cause of this error with no luck.

Unable to initialize modular input "itsi_suite_enforcer" defined in the app "SA-ITOA": Introspecting scheme=itsi_suite_enforcer: script running failed (exited with code 1)

Has anyone ever encountered a similar error?