All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, there should be more information in the _internal log. Just query it, like: index=_internal LM* OR expired That should show you more information about your issue. r. Ismo
This works on my laptop (macOS + Splunk 9.2.1). See details here: https://marketplace.visualstudio.com/items?itemName=Splunk.splunk I have set the following values in settings.json: Splunk Rest Url -- https://localhost:8089 Token auth is enabled and I have generated my own token for this. Then just create a file, e.g. Splunk-SPL-test.splnb, containing: index=_internal | stats count by component Run it and you will see events, and you can also select visualisations etc. 
Probably the wrong board; choices were limited. In our dev environment we have a 3-node SH cluster, a 3-node IDX cluster, an ES SH, and a few other ancillary machines (DS, deployer, UFs, HFs, LM, CM, etc). All instances use the one LM. On the SHC we are unable to search, getting the subject-line message, yet on the ES SH we can search fine with no error message. The nodes of the SHC are "phoning home" to the LM. Licensing settings (indexer name, manager server URI) have been verified as correct. All nodes show having connected to the LM within the last minute or so. Not sure where to look from here.
Hi All, how do I count field values? The field is extracted and shows 55 distinct values. When I use the query below:

| stats count by content.scheduleDetails.lastRunTime

it gives all the values with their counts, and

| stats dc(content.scheduleDetails.lastRunTime) AS lastRunTime

shows 55. My output is:

content.scheduleDetails.lastRunTime     Count
02/FEB/2024 08:22:19 AM                 9
02/FEB/2024 08:21:19 AM                 63
03/FEB/2024 08:22:19 AM                 7

Expected output is only the total count of the field: 79
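One way to get just the total, sketched in SPL (field name taken from the post above; summing the per-value counts would give 9 + 63 + 7 = 79 for the rows shown):

```
| stats count by content.scheduleDetails.lastRunTime
| stats sum(count) AS total
```

Equivalently, `| stats count(content.scheduleDetails.lastRunTime) AS total` counts every event where the field is present, without the intermediate breakdown.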
Hello. I am completely new to Splunk. I've recently taken on a role where I'll be working with Splunk quite a lot. I have a question about SC4S (Splunk Connect for Syslog). I successfully installed SC4S (podman + systemd) using this guide: https://splunk.github.io/splunk-connect-for-syslog/main/gettingstarted/podman-systemd-general/ SC4S is installed on a CentOS 7 VM (in vSphere). The HEC is configured successfully on a heavy forwarder and I can see that SC4S is properly communicating with Splunk. After that, I used the Kiwi Syslog Message Generator from my Windows 10 machine to send a syslog TCP message to the CentOS 7 VM. Successful output (TCP). However, if I send a syslog UDP message, the message is not successfully sent. As shown in the screenshot, the messages-sent counter stayed at zero after I pressed send. Unsuccessful output (UDP): no new messages were shown in Splunk Web. Ports 514 TCP and UDP are enabled in the firewall on CentOS 7. I would like to request assistance with this issue. Thank you.
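A few generic host-side checks can narrow down a UDP drop like this on CentOS 7; these are standard Linux tools, not SC4S-specific, and assume root access on the VM:

```
# Is anything actually listening on UDP 514? (the sc4s/podman process should show up)
ss -ulnp | grep 514

# Does firewalld really pass 514/udp in the active zone?
firewall-cmd --list-ports
firewall-cmd --list-services

# Watch whether the test packets even arrive while sending from the Windows box
tcpdump -i any -n udp port 514
```

If tcpdump shows the packets arriving but nothing is listening, the problem is the container's port mapping rather than the firewall.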
I have Splunk logs containing a keyword like <ref>BTB-Abcd1234<ref>, which is the primary key for a trade reference. I extracted it using the delimiter <> and gave the field the name "my_Ref". Now if I search for BTB it shows me all the matching references, as my dashboard search string is like <ref>BTB-*<ref>. The problem is that along with the references, an additional line is also getting picked up, and when I look at the event detail my extracted field shows that value. Output from the search query:

index=in_my "<ref>*$Ref$*<ref>" | table my_ref | dedup my_ref

1. BTB-Abcd1
2. BTB-Abvd2
3. ]...)Application]true
4. BTB-Acdg3
5. BTB-Shfhfj4

Now I want to ignore the "]...)Application]true" value and don't know how. Can someone please help with this?
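If the junk value never matches the reference pattern, one hedged option is a post-filter on the extracted field (field and index names taken from the post; the regex assumes all valid references start with "BTB-"):

```
index=in_my "<ref>*$Ref$*<ref>"
| regex my_ref="^BTB-"
| dedup my_ref
| table my_ref
```

The `regex` command drops any event whose my_ref does not begin with the expected prefix before the dedup/table steps.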
I still don't see a solution for this, even with the MSSQL 4.1 driver and the latest DBX with the latest Splunk JDBC MSSQL add-on. Also, the original poster mentioned they are on CentOS. We are on Linux as well, with Oracle JRE 17. The point being that the DLL wouldn't do much good here.
I just realized why I got more values: there are nested objects below with the same fields, but I only want the first one that shows.
Hi, recently we had an issue with the LUN drive where data is stored, and after fixing it a new problem came up. The Splunk service starts normally but web access no longer works. The output of the splunk start command is the following:

\bin>splunk.exe start

Splunk> Map. Reduce. Recycle.

Checking prerequisites...
	Checking mgmt port [8089]: open
	Checking configuration... Done.
	Checking critical directories...	Done
	Checking indexes...
		(skipping validation of index paths because not running as LocalSystem)
	Validated: _configtracker _introspection _metrics _metrics_rollup _thefishbucket anomaly_detection autek azure cim_modactions cisco citrix email eusc_apps firedalerts ftp hyper-v infraops itsi_grouped_alerts itsi_im_meta itsi_im_metrics itsi_import_objects itsi_notable_archive itsi_notable_audit itsi_summary itsi_summary_metrics itsi_tracked_alerts kubernetes metrics_sc4s msad msexchange netauth netfw netops netproxy os osnix pan_logs perfmon rancher_k8sca rancher_k8scc rancher_k8scs rancherprod sample snmptrapd sns symantec sysmon test thor windefender windows wineventlog winevents
	Done
Bypassing local license checks since this instance is configured with a remote license master.
	Checking filesystem compatibility... Done
	Checking conf files for problems...
	Bad regex value: '(::)?...', of param: props.conf / [(::)?...]; why: this regex is likely to apply to all data and may break summary indexing, among other Splunk features.
	One or more regexes in your configuration are not valid. For details, please see btool.log or directly above.
	Done
	Checking default conf files for edits...
	Validating installed files against hashes from 'C:\Program Files\Splunk\splunk-9.0.8-4fb5067d40d2-windows-64-manifest'
	All installed files intact.
	Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...
Splunkd: Starting (pid 38432)
Done

Extract of btool.log:

05-06-2024 11:07:35.039 WARN ConfMetrics - single_action=BASE_INITIALIZE took wallclock_ms=1014
05-06-2024 11:17:25.445 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 11:17:25.445 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 13:00:58.310 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 13:00:58.310 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 13:00:58.373 WARN btool-support - Bad regex value: '(::)?...', of param: props.conf / [(::)?...]; why: this regex is likely to apply to all data and may break summary indexing, among other Splunk features.
05-06-2024 13:19:36.176 WARN ConfMetrics - single_action=BASE_INITIALIZE took wallclock_ms=1234
05-06-2024 14:44:42.912 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 14:44:42.912 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 14:44:42.975 WARN btool-support - Bad regex value: '(::)?...', of param: props.conf / [(::)?...]; why: this regex is likely to apply to all data and may break summary indexing, among other Splunk features.
05-06-2024 14:44:51.022 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 14:44:51.022 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 14:44:51.084 WARN btool-support - Bad regex value: '(::)?...', of param: props.conf / [(::)?...]; why: this regex is likely to apply to all data and may break summary indexing, among other Splunk features.
05-06-2024 16:36:21.051 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 16:36:21.051 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 16:36:21.114 WARN btool-support - Bad regex value: '(::)?...', of param: props.conf / [(::)?...]; why: this regex is likely to apply to all data and may break summary indexing, among other Splunk features.
05-06-2024 16:36:29.661 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 16:36:29.661 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 16:36:29.723 WARN btool-support - Bad regex value: '(::)?...', of param: props.conf / [(::)?...]; why: this regex is likely to apply to all data and may break summary indexing, among other Splunk features.

I already checked the /etc/system/local/web.conf and everything seems fine.
[settings]
enableSplunkWebSSL = 1
httpport = 443

system/default/web.conf:

[default]

[settings]
# enable/disable the appserver
startwebserver = 1
# First party apps:
splunk_dashboard_app_name = splunk-dashboard-studio
# enable/disable splunk dashboard app feature
enable_splunk_dashboard_app_feature = true
# port number tag is missing or 0 the server will NOT start an http listener
# this is the port used for both SSL and non-SSL (we only have 1 port now).
httpport = 8000
# this determines whether to start SplunkWeb in http or https.
enableSplunkWebSSL = false
# location of splunkd; don't include http[s]:// in this anymore.
mgmtHostPort = 127.0.0.1:8089
# list of ports to start python application servers on (although usually
# one port is enough)
#
# In the past a special value of "0" could be passed here to disable
# the modern UI appserver infrastructure, but that is no longer supported.
appServerPorts = 8065

Any suggestion? Many thanks. Jose
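To track down which file defines the stanza behind the "Bad regex value: '(::)?...'" warning, btool can print every props.conf setting together with the file it comes from (standard Splunk CLI, run from the Splunk bin directory on Windows):

```
splunk.exe btool props list --debug | findstr "(::)"
```

The left-hand column of the `--debug` output is the path of the .conf file contributing each line, which should point at the app containing the broken [(::)?...] stanza.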
I find the description in the docs a bit confusing. The summary range is the logical equivalent of the retention period for the indexes: it tells you for how long (approximately - see the remark on buckets) the data model acceleration summaries (DAS) will be stored. The backfill range is how far back the data will be searched, i.e. which range the summarization search will update with each run. So a backfill range of 15 minutes means that each summarization search will be launched with "earliest=-15m".

Those parameters are not directly related to system load, but they can affect system load, and system load can affect summarization searches. Since there is a limit on concurrent summarization searches, and summarization searches have the lowest priority of all searches, the summarization search parameters can determine whether the searches successfully run at all.

For example, if you have a fairly active index storing network flows and you set a backfill range of a year, tell Splunk to spawn a summarization search every minute, and additionally limit concurrent summarization searches to 1, there is no way in heaven or hell that this configuration ever runs without skipping searches. And if you set the maximum summarization search run-time to 5 minutes, your acceleration will probably never build the summaries, because each run will be spawned and killed at 5-minute intervals (not every single minute, as you'd be hitting the maximum concurrent searches limit).
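For reference, the knobs discussed above live in datamodels.conf; an illustrative sketch using the (deliberately pathological) values from the example, not a recommendation:

```
[My_DataModel]
acceleration = true
# summary range: how long summaries are built/retained
acceleration.earliest_time = -1y
# backfill range: each summarization run searches back this far
acceleration.backfill_time = -15m
# spawn a summarization search every minute
acceleration.cron_schedule = */1 * * * *
# cap on concurrent summarization searches
acceleration.max_concurrent = 1
# maximum run-time per summarization search, in seconds (5 minutes)
acceleration.max_time = 300
```

With these numbers the every-minute schedule, single-search concurrency cap, and 5-minute kill timer interact exactly as described: runs pile up, get skipped, or get killed before completing.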
Hi @munang , retention in indexes is a different (and unrelated) thing from the DM acceleration period: they can be (and usually are) different. Data are stored in indexes for the retention period, whereas data in DM summaries are kept for the summary range, typically chosen to cover the time span of most searches. Ciao. Giuseppe
Hi @Amadou, search in the AWS logs or documentation for how to recognize a login failure in AWS (e.g., in Windows a login failure is EventCode=4625) and modify my search accordingly. Ciao. Giuseppe
I want to get the values from the path field, but I can't extract this alone, as data.initial_state.path would output extra values.
Interesting, Splunk support hasn't had any luck with my case yet. We've been attempting different things, but no luck. I may throw in the towel and downgrade to 2019.
@gcusello For instance, you receive a ticket that says you have to create an alert to detect multiple failed IAM root user login attempts in the aws index.
That is the main problem. The current panel/section won't let me make it bigger so I can add the other sections to it. So I have 6 tables, but only 3 of them show, because the other 3 fall below the section boundary, and I haven't been able to figure out how to make the section bigger so I can add the other 3.
@gcusello Thank you very much for your reply. Then can I understand that the Summary Range used when defining data model acceleration is not the period for which data is stored, but the period over which the summaries are kept up to date, and that the underlying data is preserved according to the retention period set on the index? Looking through old posts, it seems a bit confusing. https://community.splunk.com/t5/Reporting/How-far-back-in-time-will-datamodel-data-be-kept/m-p/137257
It only displays users with their IP addresses, but the problem is that I still get many more lines than with this command: index="index2" tag=1 | table srcip, _time (8000 lines versus 1000). So I think either it's not filtering enough, or it's adding users who aren't supposed to be there. How can I handle this?
Yes, the Splunk admin can then add it to the correct app context and apply permissions.
Hi @m92 , if you want only IPs present in both indexes, you could use this search:

(index="index1" Users=* IP=*) OR (index="index2" tag=1)
| regex Users!="^AAA-[0-9]{5}\$"
| eval IP=if(match(IP, "^::ffff:"), replace(IP, "^::ffff:(\d+\.\d+\.\d+\.\d+)$", "\1"), IP)
| eval ip=coalesce(IP,srcip)
| stats dc(index) AS index_count values(Users) AS Users earliest(_time) AS earliest latest(_time) AS latest BY ip
| where index_count>1
| eval earliest=strftime(earliest,"%Y-%m-%d %H:%M:%S"), latest=strftime(latest,"%Y-%m-%d %H:%M:%S")
| table Users, ip, earliest latest

Ciao. Giuseppe