Hi, I am trying to implement a dynamic dropdown input driven by a query in Dashboard Studio. The code I am using is as follows:

    "input_H05frgOO": {
        "options": {
            "items": [],
            "token": "host_token",
            "defaultValue": ""
        },
        "title": "HOST",
        "type": "input.dropdown",
        "dataSources": {
            "primary": "ds_ljNWYr7J"
        }
    }

And below is the data source:

    "ds_ljNWYr7J": {
        "type": "ds.search",
        "options": {
            "query": "| mstats avg(\"mx.process.cpu.utilization\") as X WHERE index=\"murex_metrics\" span=10s BY \"mx.env\" | dedup mx.env | table mx.env"
        },
        "name": "search_6"
    }

The input dropdown says "waiting for input". Could you please help me with the issue?

Regards,
Pravin
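In Dashboard Studio, an empty "items": [] combined with a search data source is not enough on its own; the dropdown also needs to be told which field from the search results supplies the labels and values. The snippet below is a sketch based on the documented dynamic-options pattern, using a "context" section to map the mx.env column to label/value pairs; the exact selector syntax can vary by Splunk version, so treat it as a starting point rather than a drop-in fix:

```json
"input_H05frgOO": {
    "options": {
        "items": ">frame(label, value) | prepend(formattedStatics) | objects()",
        "token": "host_token",
        "defaultValue": "*"
    },
    "context": {
        "formattedConfig": {"number": {"prefix": ""}},
        "formattedStatics": ">statics | formatByType(formattedConfig)",
        "statics": [["All"], ["*"]],
        "label": ">primary | seriesByName(\"mx.env\") | renameSeries(\"label\") | formatByType(formattedConfig)",
        "value": ">primary | seriesByName(\"mx.env\") | renameSeries(\"value\") | formatByType(formattedConfig)"
    },
    "title": "HOST",
    "type": "input.dropdown",
    "dataSources": {
        "primary": "ds_ljNWYr7J"
    }
}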
Hi, I'm configuring SSL in a test environment on version 8.2.6 of Splunk Enterprise before upgrading to Splunk 9.0.0. I have managed to encrypt traffic between my Splunk servers; however, I am now unable to forward data to my Indexers as they are refusing connections from my Forwarders. Do I have to have a certificate on all of my Forwarders to make use of SSL/TLS? I'm trying to avoid the overhead of managing certificates on all of the servers I have in the Production environment. Thanks. Mike.
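Whether every forwarder needs its own certificate depends on whether the indexers' splunktcp-ssl input demands one: if requireClientCert is enabled in inputs.conf on the indexers, each forwarder must present a certificate; if not, the forwarders only need to be configured to speak TLS and (optionally) verify the server side. A sketch of the forwarder-side outputs.conf under that second, lower-overhead assumption (idx1.example.com is a placeholder; setting names should be checked against the 8.2 outputs.conf spec):

```
[tcpout:primary_indexers]
server = idx1.example.com:9997
useSSL = true
```

One common pattern for avoiding per-forwarder certificate management is exactly this: leave requireClientCert off on the indexers so forwarders validate the indexer certificate without presenting their own.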
Hi Team, how do I get the OCI logging option under Data Inputs?
I scheduled a search to run at 0 2,8,14,20 * * *. The timezone of the search head is UTC, therefore I expect the next run time to be 2am UTC, yet Splunk says the next run time would be 6am UTC. How could this be, and where is this configured? I suspect there is a setting somewhere which is making the cron expressions be interpreted in US Eastern Time. Since we are observing Daylight Saving Time, Eastern Daylight Time would be UTC-4. The documentation (Use cron expressions for alert scheduling - Splunk Documentation) says "The Splunk cron analyzer defaults to the timezone where the search head is configured. This can be verified or changed by going to Settings > Searches, reports, and alerts > Scheduled time." I find no "Scheduled time" under Settings > Searches, reports, and alerts. I did post this to the feedback on that documentation page in case it is actually inaccurate. Where can I check and verify? Thanks!
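The 4-hour offset is exactly consistent with the cron expression being evaluated in US Eastern time rather than UTC, and in Splunk the timezone applied to a scheduled search typically follows the timezone of the user who owns the search (Settings > Users), so that user's profile is a reasonable first place to check. A quick sanity check of the arithmetic, assuming the America/New_York zone:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# If the scheduler interprets the "2" in "0 2,8,14,20 * * *" as 02:00
# US Eastern time, the run lands at 06:00 UTC while daylight saving
# time is in effect (EDT = UTC-4).
eastern_2am = datetime(2022, 7, 7, 2, 0, tzinfo=ZoneInfo("America/New_York"))
print(eastern_2am.astimezone(timezone.utc).strftime("%H:%M UTC"))  # 06:00 UTC
```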
Hi Splunkers, this may be easy, but I'm not able to solve it, if anyone can help. I want to set a lower threshold to 15 standard deviations below the mean, and the upper threshold to 15 standard deviations above the mean, but I'm not sure how to implement that. Thanks! So this is what I have:

    index=X sourcetype=Y source=metrics.kv_log appln_name IN ("FEED_FILE_ROUTE", "FEED_INGEST_ROUTE") this_hour="*"
    | bin span=1h _time
    | stats latest(this_hour) AS Volume BY appln_name, _time
    | eval day_of_week=strftime(_time,"%A"), hour=strftime(_time,"%H")
    | lookup mt_expected_processed_volume.csv name as appln_name, day_of_week, hour outputnew avg_volume, stdev_volume
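Assuming the lookup returns avg_volume and stdev_volume on each row, the thresholds and a flag can be appended with eval. This sketch reuses the field names from the search above, with the 15x multiplier stated in the question:

```
| eval lower_threshold = avg_volume - 15 * stdev_volume
| eval upper_threshold = avg_volume + 15 * stdev_volume
| eval status = if(Volume < lower_threshold OR Volume > upper_threshold, "outside threshold", "within threshold")
```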
Received the below error after answering "y" to: Perform migration and upgrade without previewing configuration changes? [y/n] y

    -- Migration information is being logged to '/opt/splunk/var/log/splunk/migration.log.2022-07-06.23-37-40'
    -- Migrating to:
    VERSION=9.0.0
    BUILD=6818ac46f2ec
    PRODUCT=splunk
    PLATFORM=Linux-x86_64
    Copying '/opt/splunk/etc/myinstall/splunkd.xml' to '/opt/splunk/etc/myinstall/splunkd.xml-migrate.bak'.
    An unforeseen error occurred:
    Exception: <class 'PermissionError'>, Value: [Errno 13] Permission denied: '/opt/splunk/etc/myinstall/splunkd.xml-migrate.bak'
    Traceback (most recent call last):
      File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 1359, in <module>
        sys.exit(main(sys.argv))
      File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 1212, in main
        parseAndRun(argsList)
      File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 1067, in parseAndRun
        retVal = cList.getCmd(command, subCmd).call(argList, fromCLI = True)
      File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 293, in call
        return self.func(args, fromCLI)
      File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/control_api.py", line 35, in wrapperFunc
        return func(dictCopy, fromCLI)
      File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/_internal.py", line 189, in firstTimeRun
        migration.autoMigrate(args[ARG_LOGFILE], isDryRun)
      File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/migration.py", line 3158, in autoMigrate
        comm.copyItem(PATH_SPLUNKD_XML, PATH_SPLUNKD_XML_BAK, dryRun)
      File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli_common.py", line 1086, in copyItem
        shutil.copy(src, dst)
      File "/opt/splunk/lib/python3.7/shutil.py", line 248, in copy
        copyfile(src, dst, follow_symlinks=follow_symlinks)
      File "/opt/splunk/lib/python3.7/shutil.py", line 121, in copyfile
        with open(dst, 'wb') as fdst:
    PermissionError: [Errno 13] Permission denied: '/opt/splunk/etc/myinstall/splunkd.xml-migrate.bak'
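A PermissionError on a file under /opt/splunk/etc during migration usually means the upgrade was started as a user that does not own the Splunk installation (for example, running the migration as root over an install owned by a splunk account, or the reverse). A sketch of one common fix, assuming the installation should be owned by a splunk user and group; adjust the account names to your environment:

```shell
# Re-align ownership of the whole installation with the service account,
# then rerun the start/migration as that account.
sudo chown -R splunk:splunk /opt/splunk
sudo -u splunk /opt/splunk/bin/splunk start
```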
Can Splunk DB Connect use the SQL WITH statement?

    WITH TABLE_BASE AS (
        -- this section is the base query and matches the Smart reporting logic
        SELECT DISTINCT

The WITH command is not highlighted in red as the other commands are.
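DB Connect generally passes the SQL through to the JDBC driver, so support for WITH (common table expressions) depends on the target database rather than on what the DB Connect editor highlights. A sketch of running a CTE through the dbxquery command, assuming a connection named my_conn and illustrative table/column names:

```
| dbxquery connection=my_conn query="WITH table_base AS (SELECT DISTINCT account_id FROM orders) SELECT * FROM table_base"
```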
I need help with loading CSV files into Splunk with the event time recorded as seconds past midnight instead of HH:MM:SS time. Below is a sample of the data I need to load. How do I specify that the time column is the number of seconds past midnight when defining the Timestamp for the Source Type?

    PickStartDate,BTVersion,TripNumber,Sequence,PassingTime,ArrivalTime,DepartureTime,FlagStop,ByPass,EarlyDeparture,event_line_number
    2021-04-25,S1000216,1020,1,54900,54900.0,54900.0,0,0,,1
    2021-04-25,S1000216,1020,2,54955,,,0,0,,2
    2021-04-25,S1000216,1020,3,54999,,,0,0,,3
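Splunk's TIME_FORMAT uses strptime patterns, which have no notion of "seconds past midnight", so one approach is to timestamp each event from the date column and then shift _time by the seconds value at index time with INGEST_EVAL. The sketch below assumes a sourcetype name of seconds_csv and that indexed CSV extraction makes PassingTime available at parse time; this interaction is worth validating on a test index before relying on it:

```
# props.conf
[seconds_csv]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = PickStartDate
TIME_FORMAT = %Y-%m-%d
TRANSFORMS-shift_time = add_seconds_past_midnight

# transforms.conf
[add_seconds_past_midnight]
INGEST_EVAL = _time=_time + tonumber(PassingTime)
```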
So I need to move a deployer to a dedicated host. I have a 3-member SHC on version 8.1.3, all healthy. I have read a number of posts that give similar answers (to the answer I received for my original post), such as:

"copy over the /opt/splunk/etc/shcluster to the new deployer"

"configure the new deployer (to use the cluster's secret key and to set the SHC label), move the configuration bundle from the old deployer to the new deployer, and then point the cluster members to the new deployer"

"migrate the shcluster folder structure and any shclustering stanza configurations you have on the deployer to the new deployer"

"also break the SHC and rebuild with new deployer info"

While all these answers make sense, I don't know exactly what to reconfigure/change, so I read:

https://docs.splunk.com/Documentation/Splunk/8.1.3/DistSearch/BackuprestoreSHC

And that takes you down the rabbit hole of backups and restores that don't entirely seem necessary, so I am wondering if anyone can verify the minimum changes that need to be made, or whether I should follow the above link's instructions.

As I understand it, I just need to do the following:

1) Build the new deployer (new IP, new FQDN), install Splunk...

2) Configure the [shclustering] stanza in /opt/splunk/etc/system/local/server.conf:

    [shclustering]
    pass4SymmKey = <secret>
    shcluster_label = <name>

3) On each SHC member, edit the [shclustering] stanza:

    [shclustering]
    conf_deploy_fetch_url = https://<newIP>:8089

and make sure pass4SymmKey and shcluster_label are the same as on the new deployer.

4) Copy over /opt/splunk/etc/shcluster to the new deployer.

5) Restart everything.

Does that seem right? I don't have the luxury of a dev environment to test... Do I need to put SHC members in detention or stop Splunk on everything before I make the changes? Any advice is appreciated. Thank you.
Good day. I have a Splunk 8.2.3 on-premise instance; apparently my system does not want to start. When I execute a manual start it gives me the following:

    [root@srvsplunk ~]# systemctl status -l splunk
    ● splunk.service - SYSV: Splunk indexer service
       Loaded: loaded (/etc/rc.d/init.d/splunk; bad; vendor preset: disabled)
       Active: failed (Result: exit-code) since Wed 2022-07-06 16:23:25 -05; 12min ago
         Docs: man:systemd-sysv-generator(8)
      Process: 1495 ExecStart=/etc/rc.d/init.d/splunk start (code=exited, status=127)

    Jul 06 16:23:25 srvsplunk splunk[1495]: Checking kvstore port [8191]: /opt/splunk/bin/splunkd: error while loading shared libraries: libjemalloc.so.2: cannot open shared object file: No such file or directory
    Jul 06 16:23:25 srvsplunk splunk[1495]: /opt/splunk/bin/splunkd: error while loading shared libraries: libjemalloc.so.2: cannot open shared object file: No such file or directory
    Jul 06 16:23:25 srvsplunk splunk[1495]: open
    Jul 06 16:23:25 srvsplunk splunk[1495]: /opt/splunk/bin/splunkd: error while loading shared libraries: libjemalloc.so.2: cannot open shared object file: No such file or directory
    Jul 06 16:23:25 srvsplunk splunk[1495]: /opt/splunk/bin/splunkd: error while loading shared libraries: libjemalloc.so.2: cannot open shared object file: No such file or directory
    Jul 06 16:23:25 srvsplunk splunk[1495]: SSL certificate generation failed.
    Jul 06 16:23:25 srvsplunk systemd[1]: splunk.service: control process exited, code=exited status=127
    Jul 06 16:23:25 srvsplunk systemd[1]: Failed to start SYSV: Splunk indexer service.
    Jul 06 16:23:25 srvsplunk systemd[1]: Unit splunk.service entered failed state.
    Jul 06 16:23:25 srvsplunk systemd[1]: splunk.service failed.

How can I fix my Splunk system? Thanks
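libjemalloc.so.2 ships inside the Splunk installation itself (under $SPLUNK_HOME/lib), so "cannot open shared object file" usually points to a damaged or incompletely extracted install rather than a missing OS package. A sketch of a first diagnostic pass, assuming the default /opt/splunk install path:

```shell
# Check whether the bundled library is present and whether splunkd can
# resolve its shared-library dependencies.
ls -l /opt/splunk/lib/libjemalloc.so.2
ldd /opt/splunk/bin/splunkd | grep jemalloc
```

If the file is missing, re-extracting the matching 8.2.3 tarball over /opt/splunk (with Splunk stopped, and after backing up etc/) is a common way to restore the bundled libraries.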
All, this is another license utilization report mismatch. I have a request to generate a license utilization report per day and save it for historical data. I am using the 30 Days License Usage report as a base for my daily report:

    index=_internal host=licensemaster source=*license_usage.log* type="RolloverSummary" earliest=-1d@d latest=-0d@d
    | bin _time span=1d
    | stats sum(b) as sumb last(stacksz) as laststacksz by _time component
    | eval sumgb=round(sumb/1024/1024/1024, 3)
    | eval laststackszgb=round(laststacksz/1024/1024/1024, 3)

And it is giving me the result as expected. I want to go further and try to get the license utilization per hour, so I changed the search to:

    index=_internal host=licensemaster source=*license_usage.log* type=Usage earliest=-1d@d latest=-0d@d
    | stats sum(b) as sumb last(poolsz) as lastpoolsz by _time
    | eval sumgb=round(sumb/1024/1024/1024, 3)
    | eval lastpoolszg=round(lastpoolsz/1024/1024/1024, 3)
    | addcoltotals sumb

But the result is lower than the daily one: 967069668524 bytes is 900.656 GB. What am I doing wrong? I am running Splunk Enterprise 8.2.6. Thank you, Gerson Garcia
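Two things are worth checking. First, type=Usage events are written frequently throughout the day, so stats by raw _time produces many tiny groups; for an hourly view the events are usually binned first. A sketch keeping the field names from the searches above:

```
index=_internal host=licensemaster source=*license_usage.log* type=Usage earliest=-1d@d latest=-0d@d
| bin _time span=1h
| stats sum(b) as sumb by _time
| eval sumgb=round(sumb/1024/1024/1024, 3)
| addcoltotals sumb
```

Second, RolloverSummary is written once per day at the license manager's local midnight, so a Usage total over a search-time -1d@d window can legitimately differ from the RolloverSummary figure when the license manager's timezone does not match the search head's; comparing the two over the same wall-clock day boundary is a good sanity check.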
New to Splunk and banging my head against the wall with this problem for over a day now. Please help... I need to compare two different fields from two different events to determine whether the values of those fields match. I ran a search that returns events. All events have an ACCOUNT_NUM field. Depending on the event, it will have either a DATE_TYPE1 field or a DATE_TYPE2 field. The report should display each distinct ACCOUNT_NUM that has one of each date type: a column for ACCOUNT_NUM, a column for DATE_TYPE1, a column for DATE_TYPE2, and a column for DATE_STATUS ("Match" or "No Match") to indicate whether the two dates match. So far, I have:

    | stats values(DATE_TYPE1) AS "Date One" values(DATE_TYPE2) AS "Date Two" count by ACCOUNT_NUM
    | where count > 1

This groups the distinct ACCOUNT_NUMs and shows me the two date types, but how do I indicate whether the two dates match? I tried adding:

    | eval DATE_STATUS=if(DATE_TYPE1=DATE_TYPE2, "Match", "No Match")

but this returns "No Match" for all of the events. My understanding is this is because eval is evaluating each event individually; since no event has both date types, it is not finding a match. How can I get it to compare the date types of each distinct account number as grouped together by my stats command?
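Because stats has already collapsed the events, the comparison has to run on the aggregated fields it produced rather than on the raw per-event fields. A sketch reusing the stats from the question (single quotes in eval refer to field names, which is needed because "Date One" and "Date Two" contain spaces):

```
| stats values(DATE_TYPE1) AS "Date One" values(DATE_TYPE2) AS "Date Two" count by ACCOUNT_NUM
| where count > 1
| eval DATE_STATUS=if('Date One' == 'Date Two', "Match", "No Match")
```

If either date field can be multivalued for an account, deduplicating or joining the values (mvdedup/mvjoin) before the comparison may be necessary.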
Hello everyone, I am trying to ingest data into Splunk. The data is in some .tgz files, but within those files are a lot of different folders and levels of directories. The thing is that I want to read just one type of file that is in those directories, and it is not at an absolute path but a relative one; the path can change and the file can be in any directory. So inputs.conf was set up with something like this:

    [monitor:///dir1/dir2/Spk/Test/*.tgz]
    whitelist=my.log

But this is not working because of this:

"When you configure wildcards in a file input path, Splunk Enterprise creates an implicit allow list for that stanza. The longest wildcard-free path becomes the monitor stanza, and Splunk Enterprise translates the wildcards into regular expressions."

https://docs.splunk.com/Documentation/Splunk/latest/Data/Specifyinputpathswithwildcards

So I am looking for the way to filter those logs using whitelisting; should I use regular expressions to filter the logs? Thank you in advance.
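whitelist is itself a regular expression matched against the full path of each discovered file, so the usual pattern is a wildcard-free monitor path combined with a regex whitelist. A sketch, assuming the goal is to pick up files named my.log at any depth under the test directory; note that files packed inside .tgz archives are only reachable once the archives are extracted (or the archive path itself is what you monitor), so the whitelist applies to paths on disk, not to entries inside an archive:

```
[monitor:///dir1/dir2/Spk/Test]
whitelist = my\.log$
recursive = true
```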
Data is events with a date, username, company, and score. I want to calculate an NPS score by company:

    detractors = scores 1-6
    passive = scores 7-8
    promoters = scores 9-10
    NPS = %promoters - %detractors

Help would be highly appreciated.
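The detractor/promoter percentages can be computed in a single stats pass using count(eval(...)). A sketch, assuming the fields are literally named score and company in the events and that index=survey_data stands in for the real search:

```
index=survey_data
| stats count(eval(score<=6)) as detractors count(eval(score>=9)) as promoters count as total by company
| eval NPS = round(100 * (promoters - detractors) / total, 1)
| table company NPS
```

Scores of 7-8 (passives) fall out of the formula automatically, since they are counted in total but in neither numerator term.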
If I download from the free trial link, may I use that for the upgrade without problems?
Hi, I am trying to update an incident that was created by an alert action from Splunk ITSI, but every time the alert gets triggered, a new incident is created instead of the existing incident being updated. I tried everything mentioned in the link below:

https://docs.splunk.com/Documentation/AddOns/released/ServiceNow/Commandsandscripts#Update_behavior_for_incidents

Please advise as to what needs to be done to update a previously created incident. Should I get the status of the incident from ServiceNow and use that in the search query when I try to update the incident? It would be great if you could point me to any documentation or a video reference that could help me with updating an incident that was already created. Thanks!
Currently, I have HTML within my XML dashboard that will only appear when a certain token is set. However, whenever I go and click "Edit", the HTML is always visible. Is there any way around this?
Hi Team, we are reviewing the use cases in our Splunk Enterprise Security. We have set throttling to 1 day for a use case, but we want to check how many alerts are being suppressed by the throttling action. Is there any search query or other way to check that? Is there any way we can show proof that throttling is working correctly? Thanks in advance for your help.
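One place to look is the scheduler's own logs in _internal. The sketch below is only a starting point: savedsearch_name and status are real scheduler.log fields, but exactly how a throttled (suppressed) firing is recorded can vary by version, so compare the resulting counts against the alert's actual trigger history before treating them as proof:

```
index=_internal sourcetype=scheduler savedsearch_name="<your use case name>"
| stats count by status
```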
Hello team, I would like to create a dashboard for logs pushed to Splunk on a regular basis. How do I get a real-time dashboard for both logs and alerts for applications running on Azure/AWS? I should be able to see these alerts and take remedial action on them. Best regards, Mercy
Hello All, this is my first time posting to Splunk Community. I've found a lot of value here and hope you all are doing well. I have an add-on built with the Splunk Add-on Builder (I believe version 4.1.0) that contains an alert action that packages up search results and sends them to a HEC input. I am utilizing George Starcher's Python class for sending events to HEC inputs (https://github.com/georgestarcher/Splunk-Class-httpevent). The alert action works perfectly except when I enable the proxy; then I am hit with this error message:

    Traceback (most recent call last):
      File "/opt/splunk/etc/apps/<appname>/bin/<appname>/splunk_http_event_collector.py", line 287, in _batchThread
        response = self.requests_retry_session().post(self.server_uri, data=payload, headers=headers, verify=self.SSL_verify, proxies=proxies)
      File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/requests/sessions.py", line 635, in post
        return self.request("POST", url, data=data, json=json, **kwargs)
      File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/requests/sessions.py", line 587, in request
        resp = self.send(prep, **send_kwargs)
      File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/requests/sessions.py", line 701, in send
        r = adapter.send(request, **kwargs)
      File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/requests/adapters.py", line 499, in send
        timeout=timeout,
      File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/urllib3/connectionpool.py", line 696, in urlopen
        self._prepare_proxy(conn)
      File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/urllib3/connectionpool.py", line 964, in _prepare_proxy
        conn.connect()
      File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/urllib3/connection.py", line 359, in connect
        conn = self._connect_tls_proxy(hostname, conn)
      File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/urllib3/connection.py", line 506, in _connect_tls_proxy
        ssl_context=ssl_context,
      File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/urllib3/util/ssl_.py", line 453, in ssl_wrap_socket
        ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls)
      File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/urllib3/util/ssl_.py", line 495, in _ssl_wrap_socket_impl
        return ssl_context.wrap_socket(sock)
      File "/opt/splunk/lib/python3.7/ssl.py", line 423, in wrap_socket
        session=session
      File "/opt/splunk/lib/python3.7/ssl.py", line 827, in _create
        raise ValueError("check_hostname requires server_hostname")
    ValueError: check_hostname requires server_hostname

Has anyone come across similar behavior? I am trying a variety of different things but this has quickly gone over my head. Any help or direction would be greatly appreciated. Please let me know what information I can provide. Thank you.
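This ValueError is a known symptom of urllib3 1.26.x attempting TLS-in-TLS when the proxy URL is declared with an https:// scheme. A common workaround, assuming the proxy accepts a plain-HTTP CONNECT tunnel (proxy.example.com is a placeholder), is to register the proxy with an http:// scheme for both protocols:

```python
# Declaring the proxy itself over http:// avoids wrapping the CONNECT
# tunnel to the proxy in TLS; traffic to the HEC endpoint is still HTTPS.
proxies = {
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",  # http:// scheme on purpose
}
print(proxies["https"])  # http://proxy.example.com:8080
```

This dict is what ends up passed through to requests (e.g. `session.post(url, proxies=proxies, ...)`), which is exactly what the traceback above shows the HEC class already doing.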