All Posts

@shiba KV Store issues usually occur when Splunk's Key-Value Store is not functioning properly, which can impact searches that depend on KV Store collections. If you are getting this error on indexers, you can ignore it. As for "Security risk warning: Found an empty value for 'allowedDomainList'": the allowedDomainList parameter in alert_actions.conf is not configured, leaving it empty. This parameter specifies the domains to which alert emails may be sent. If this server is being used as both an indexer and a search head, please confirm.
Can you explain the regular expression you used? 
Yes, they're UFs. I already set  [thruput] maxKBps = 0 in limits.conf in the app.
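For reference, the stanza as it sits in the app's limits.conf looks like this (the app directory name below is just a placeholder):

# etc/apps/<your_forwarding_app>/local/limits.conf on the UF
[thruput]
# 0 removes the default 256 KBps forwarding cap on a universal forwarder
maxKBps = 0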
Hello, thanks for your answer. I understand about the indexer. Is there any problem with the following messages?

Failed to start KV Store process. See mongod.log and splunkd.log for details. 2024/12/25 11:26:57
Security risk warning: Found an empty value for 'allowedDomainList' in the alert_actions.conf configuration file. If you do not configure this setting, then users can send email alerts with search results to any domain. You can add values for 'allowedDomainList' either in the alert_actions.conf file or in Server Settings > Email Settings > Email Domains in Splunk Web. 2024/12/25 11:26:57
KV Store changed status to failed. KVStore process terminated.. 2024/12/25 11:26:56
KV Store process terminated abnormally (exit code 14, status PID 2757 exited with code 14). See mongod.log and splunkd.log for details. 2024/12/25 11:26:56
According to regex101.com, your regular expression works. This one, however, is more efficient:

EXTRACT-Rule = (")?Rule:(?P<Rule>.*?)(?(1)\1|,)

It looks for an optional leading quotation mark and, if one was captured, uses it as the terminating character; otherwise it terminates at a comma (that is what the conditional (?(1)\1|,) does).
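In props.conf that extraction would sit under the relevant stanza on the search head; a minimal sketch, assuming the sourcetype is named symantec:sep (the name is only a placeholder):

# props.conf (search-time extraction, no transforms.conf needed)
[symantec:sep]
EXTRACT-Rule = (")?Rule:(?P<Rule>.*?)(?(1)\1|,)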
I need to extract the Rule field using a regex in props.conf, without using transforms.conf. The regex I used was

Rule\:(?P<Rule>\s.*?(?=\")|((\s\w+)+)\-\w+\s\w+|\s.*?(?=\,))

Please let me know if you have any idea of a regular expression that satisfies all of the cases below to extract the Rule field from the original data.

Test strings:

Dec 5 17:22:59 10.2.1.166 Dec 5 17:13:45 ICxxx SymantecServer: Nxxx,10.150.35.108,Continue,Application and Device Control is ready,System,Begin: 2022-12-05 17:13:18,End Time: 2022-12-05 17:13:18,Rule: Built-in rule,0,SysPlant,0,SysPlant,None,User Name: None,Domain Name: None,Action Type: ,File size (bytes): 0,Device ID:
Dec 5 17:22:59 10.2.1.166 Dec 5 17:12:45 ICxxx SymantecServer,10.10.232.76,Blocked,[AC7-2.1] 스크립트 차단 - Caller,End Time: 2024-12-05 16:41:09,Rule: 모든 응용 프로그램 | [AC7-2.1] 파일 및 폴더 액세스 시도,9056,C:/Windows/System32/svchost.exe,0,No Module Name,C:/Windows/System32/GroupPolicy/DataStore/0/SysVol/celltrion.com/Policies/{08716B68-6FB2-4C06-99B3-2685F9035E2E}/Machine/Scripts/Startup/start_dot3svc.bat,User Name: xxx,Domain Name: xxx,Action Type: ,File size (bytes): xx,Device ID: xxx\xx&Ven_NVMe&Prod_Skhynix_BC501_NV\5&974&0&000
Dec 5 17:22:59 10.2.1.166 Dec 5 17:13:06 IC01 SymantecServer: N1404002,10.50.248.13,Blocked,이 규칙은 모든 응용 프로그램이 시스템에 드라이브 문자를 추가하는 모든 USB 장치에 파일을 쓸 수 없도록 차단합니다. - File,Begin: 2024-12-05 16:33:53,End Time: 2024-12-05 16:33:53,"Rule: USB 드라이브에 읽기 허용,쓰기 차단 | [AC4-1.1] USB 드라이브에 읽기 허용,쓰기 차단",4032,C:/Program Files/Microsoft Office/xxx/Office16/EXCEL.EXE,0,No Module Name,D:/1. NBD/1. ADC cytotoxicity/2024-4Q/~$20241203-05 CT-P70 Drug release.xlsx,User Name: 1404002,Domain Name:xxx,Action Type: ,File size (bytes): 0,xx

Strings to extract:

Rule: Built-in rule
Rule: 모든 응용 프로그램 | [AC7-2.1] 파일 및 폴더 액세스 시도
Rule: USB 드라이브에 읽기 허용,쓰기 차단 | [AC4-1.1] USB 드라이브에 읽기 허용,쓰기 차단
Try something like this (I am not sure if you need click.value2, value, name, or name2, though; you would need to experiment):

<drilldown>
  <condition match="$click.value2$=&quot;AAA&quot;">
    <set token="ModuleA">true</set>
    <unset token="ModuleOther"></unset>
  </condition>
  <condition>
    <unset token="ModuleA"></unset>
    <set token="ModuleOther">$trellis.value$</set>
  </condition>
</drilldown>
...
<!-- panel A is opened -->
<panel depends="$ModuleA$">
...
</panel>
<!-- panel Other is opened -->
<panel depends="$ModuleOther$">
...
</panel>
Several pointers about asking questions:

When sharing structured data, please click "Raw text" before copying from the event window. Splunk's formatted display creates a hurdle that volunteers have to reverse.

If you expect people to help with some SPL you illustrate, your illustrated data should include the relevant details used in your code. For example, your illustration gives no indication of message.ssoType, message.incomingRequest.partner, etc. (In the following, I will assume that they are flat paths that require no special treatment.)

The key to solving your problem is to note that the JSON node message.backendCalls is an array. In SPL, the flattened JSON array is denoted with a pair of curly brackets, i.e., message.backendCalls{}. In addition, if the raw events have a structure similar to your illustration, message.incomingRequest.partner, message.backendCalls{}.*, etc., should already have been extracted by Splunk at search time. There is no need for spath. Furthermore, placing filters in the index search is more efficient than putting them downstream. Combining these pointers, you should consider

index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Outbound" message.incomingRequest.partner = *
| rename message.incomingRequest.partner as "SSO_Partner"
| stats distinct_count("UUID") as Count by SSO_Partner, Membership_LOB, message.backendCalls{}.responseCode

Your sample data would result in

SSO_Partner   Membership_LOB   message.backendCalls{}.responseCode   Count
FBI           CIA              200                                   1

Here is a reverse-engineered emulation for you to play with and compare with real data:

| makeresults
| eval _raw = "{ \"@timestamp\": \"2024-12-25T08:10:57.764Z\", \"Membership_Category\": \"*******\", \"Membership_LOB\": \"CIA\", \"UUID\": \"********\", \"adminId\":\"*************\", \"adminLevel\": \"*************\", \"correlation-id\": \"*************\", \"dd.env\":\"*************\", \"dd.service\":\"*************\", \"dd.span_id\":\"*************\", \"dd.trace_id\":\"*************\", \"dd.version\":\"*************\", \"logger\":\"*************\", \"message\": { \"incomingRequest\": { \"partner\": \"FBI\" }, \"ssoType\": \"Outbound\", \"backendCalls\": [ { \"elapsedTime\": \"****\", \"endPoint\":\"*************\", \"requestObject\": { }, \"responseCode\": 200, \"responseObject\": { } } ] } }"
| spath
``` the above emulates
index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Outbound" message.incomingRequest.partner = * ```
I am using these dox: https://docs.splunk.com/Documentation/ES/8.0.1/Admin/AddThreatIntelSources#Add_threat_intelligence_with_a_custom_lookup_file

It is pretty straightforward, but I suspect that my configuration is not working. Where are the "master lookups" that Splunk's Threat Framework uses? I assume that there is one "master lookup" each for IPv4 addresses, domains, URLs, hashes, etc., or perhaps they are all combined into one. There are about 100 lookups in this client's ES, and I have checked the ones that look promising but didn't find my new data, so I cannot conclude anything.
All of those messages can be ignored. Server-A is an indexer, and indexers do not use the KV Store; in fact, the KV Store can be disabled on indexers.

The "Found an empty value for 'allowedDomainList'" message can be ignored if you choose to, especially on an indexer. If you're concerned about security and about sending emails outside certain domains, then follow the instructions in the message.
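If you do choose to restrict alert email domains, the setting goes under the [email] stanza of alert_actions.conf; a minimal sketch (example.com and the system/local path are placeholders):

# $SPLUNK_HOME/etc/system/local/alert_actions.conf
[email]
# comma-separated list of domains that alert emails may be sent to
allowedDomainList = example.com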
Hello dear Splunkers! I have been struggling with this issue for days and just can't resolve it (ChatGPT is clueless). I have a panel that displays a trellis pie chart (split by a field called MODULE), and the panel has a drilldown that creates the token $module$. I need to open another panel only if the value inside $module$ equals "AAA"; otherwise, I need to open a different panel. For example:

- If I click on the pie chart on the value MODULE="AAA" -> a panel called "ModuleA" is opened.
- If I click on the pie chart on the value MODULE="BBB" (or any other value) -> a panel called "ModuleOther" is opened.

Right now I have tried everything I could find on the internet/from my brain/from ChatGPT, but nothing works. Here's my code for now:

<drilldown>
  <condition>
    <case match="AAA">
      <set token="ModuleA">true</set>
      <unset token="ModuleOther"></unset>
    </case>
    <default>
      <unset token="ModuleA"></unset>
      <set token="ModuleOther">$trellis.value$</set>
    </default>
  </condition>
</drilldown>
...
<!-- panel A is opened -->
<panel depends="$ModuleA$">
...
</panel>
<!-- panel Other is opened -->
<panel depends="$ModuleOther$">
...
</panel>

It doesn't work, since the tokens don't even contain a value when I click on the pie. I even added a text input for each token in order to check whether they contain any value on click, and the answer is no. It's important to emphasize that when I made it a simple drilldown that opens one panel when any pie slice is clicked, it worked just fine. The problem is with the conditional tokening.

Thank you in advance!!
Are these UFs? Did you change the default thruput limit?
Gotta find which stanza defines this input (might be a single file, might be a whole directory) and add the _meta setting there. In the case of my home lab UF, it's

s:\Program Files\SplunkUniversalForwarder\bin>splunk btool inputs list --debug
[...]
s:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\default\inputs.conf  [monitor://s:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log]
s:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\default\inputs.conf  _TCP_ROUTING = *
s:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  _rcvbuf = 1572864
s:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dc_name =
s:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dns_name =
s:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_resolve_ad_obj = 0
s:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  host = $decideOnStartup

So in my case I'd need to either add a custom app or at least a system\local\inputs.conf (but it's best to avoid putting settings into system/local) with

[monitor://s:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log]
_meta = my_field::something
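A related sketch, reusing the hypothetical my_field from above: because _meta produces an indexed field, it is usually also declared in fields.conf on the search head so searches treat it as an index-time field:

# fields.conf on the search head(s)
[my_field]
INDEXED = true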
It's hard to tell you how to fix your setup when we don't know the details of your configuration and your certs. Just one important thing: if you want to enable TLS, get yourself a CA and issue proper certificates. Using self-signed certificates everywhere will not help you much security-wise, and you'll run into trouble when trying to validate them properly (which might be your case).
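As a rough sketch of what "get yourself a CA" can look like with plain openssl (all file names and subject values below are placeholders, and a real deployment should follow your own key-handling policy):

# create a private CA key and a self-signed CA certificate
openssl genrsa -out myCA.key 2048
openssl req -x509 -new -key myCA.key -sha256 -days 3650 -subj "/CN=My Splunk Lab CA" -out myCA.pem

# create a server key and a certificate signing request for the Splunk instance
openssl genrsa -out splunk-server.key 2048
openssl req -new -key splunk-server.key -subj "/CN=splunk.example.local" -out splunk-server.csr

# sign the server certificate with the private CA
openssl x509 -req -in splunk-server.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -sha256 -days 825 -out splunk-server.pem

The CA certificate (myCA.pem) is then what the other side points at for validation, instead of blindly trusting a self-signed cert.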
When I try to fetch values using the query below, it shows no results in statistics. The reason is that I also want message.backendCalls.responseCode in my statistics, but nothing is returned when I add that field at the end of the query.

Query:

index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Outbound"
| spath "message.incomingRequest.partner"
| rename message.incomingRequest.partner as "SSO_Partner"
| search "SSO_Partner"=*
| stats distinct_count("UUID") as Count by SSO_Partner, Membership_LOB, message.backendCalls.responseCode

When I do not add that field, it does show results. Below is the whole JSON from which I am trying to fetch the response code:

{
  "@timestamp": "2024-12-25T08:10:57.764Z",
  "Membership_Category": "*******",
  "Membership_LOB": "****",
  "UUID": "********",
  "adminId": "*************",
  "adminLevel": "*************",
  "correlation-id": "*************",
  "dd.env": "*************",
  "dd.service": "*************",
  "dd.span_id": "*************",
  "dd.trace_id": "*************",
  "dd.version": "*************",
  "logger": "*************",
  "message": {
    "backendCalls": [
      {
        "elapsedTime": "****",
        "endPoint": "*************",
        "requestObject": { },
        "responseCode": 200,
        "responseObject": { }
      }
    ]
  }
}
Hi @mouniprathi,

probably the issue is in the format of the time token (it must be in epoch time!) that you are passing to the second panel. Could you share the code of your dashboard, with attention to the two searches you're using in the two panels and the earliest and latest tags in the second panel?

Anyway, you have to apply the time window inside the search itself, not in the panel's earliest and latest tags, something like this:

<your_search>
[ | makeresults
  | eval earliest=$excep_time$, latest=relative_time($excep_time$,"+120s")
  | fields earliest latest ]
| ...

Ciao.
Giuseppe
After some investigation, the answers are:

1) The OS is Red Hat Linux 8, Splunk UF version 9.1.1. We have two Splunk deployments, Splunk Enterprise and Splunk Security. On my end (Splunk Enterprise) there are only 2 inputs, but on the Security end there are a lot, with 2 apps, HG_TA_Splunk_Nix and TA_nmon (roughly 40 inputs each), over 4 hosts.

2.1) There are some errors, but nothing noteworthy. The errors are:

+700 ERROR TcpoutputQ [11073 TcpOutEloop] - Unexpected event id=<eventid>  -> benign ERROR as per Splunk dev
+700 ERROR ExecProcessor [32056 ExecProcessor] - message from "$SPLUNKHOME/HG_TA_Splunk_Nix/bin/update.sh" https://repo.napas.local/centos/7/updates/x84_64/repodata/repomd.xml: [Errorno14] curl#7 - "Failed to connect to repo.napas.local:80; No route to host"

2.2) HealthReporter shows:

+700 INFO PeriodHealthReporter - feature="Ingestion latency" color=red/yellow indicator="ingestion_latency_gap_multiplier" due_to_threshold_value=1 measured_value=26684 reason=Events from tracker.log have not been seen for the last 26684 seconds, which is more than the red threshold ( 210 seconds ). This typically occurs when indexing or forwarding are falling behind or are blocked." node_type=indicator node_path=splunkd.file_monitor_input.ingestion_latency.ingestion_latency_gap_multiplier

2.3) _internal logs with | stats count by destIP show:

idx1: 14248
idx2: 8014
idx3: 7963
idx4: 7809

which is more concerning than I thought it would be.

2.4) Another find: the log is now lagging 1 hour behind but is still being pulled/ingested. The internal log, however, has stopped; the time now is 9:08, but the last internal log entry is from 8:19, with no error, the last entry being

+700 Metrics - group=thruput, name=uncooked_output, instantaneous_kbps=0.000, instantaneous_eps=0.000, average_kbps=0.000, total_k_processed=0.000, kb=0.000, ev=0, interval_sec=60
After upgrading Splunk Enterprise from 9.2.2 to 9.2.4, the following error messages started appearing in Splunk Web. Log collection and searching are still possible. A-Server acts as an indexer; one search head and one indexer are in use.

Search peer A-Server has the following message: Failed to start KV Store process. See mongod.log and splunkd.log for details. 2024/12/25 11:34:12
Search peer A-Server has the following message: KV Store changed status to failed. KVStore process terminated.. 2024/12/25 11:34:11
Search peer A-Server has the following message: KV Store process terminated abnormally (exit code 14, status PID 29873 exited with code 14). See mongod.log and splunkd.log for details. 2024/12/25 11:34:11
Search peer A-Server has the following message: Security risk warning: Found an empty value for 'allowedDomainList' in the alert_actions.conf configuration file. If you do not configure this setting, then users can send email alerts with search results to any domain. You can add values for 'allowedDomainList' either in the alert_actions.conf file or in Server Settings > Email Settings > Email Domains in Splunk Web. 2024/12/25 11:34:11
Failed to start KV Store process. See mongod.log and splunkd.log for details. 2024/12/25 11:26:57
Security risk warning: Found an empty value for 'allowedDomainList' in the alert_actions.conf configuration file. If you do not configure this setting, then users can send email alerts with search results to any domain. You can add values for 'allowedDomainList' either in the alert_actions.conf file or in Server Settings > Email Settings > Email Domains in Splunk Web. 2024/12/25 11:26:57
KV Store changed status to failed. KVStore process terminated.. 2024/12/25 11:26:56
KV Store process terminated abnormally (exit code 14, status PID 2757 exited with code 14). See mongod.log and splunkd.log for details. 2024/12/25 11:26:56
Hi All, I have a dashboard which shows a table of exceptions that happened within the last 24 hours. The table has columns App_Name and time of exception. I tokenized these into $app_name$ and $excep_time$ so that I can pass them to another panel in the same dashboard. Once I click the first panel, the second panel should take $app_name$ and $excep_time$ as inputs and show logs for that particular app, with a time window based on $excep_time$. I have no problem adding +120 seconds, but when I use earliest and latest it throws an "invalid input" message, and when I use _time>= or _time<= it still takes the UI time picker range rather than the passed time. How do I pass the time from one row to another panel and search in that passed time window?
You don't have to make up your own process for reading historical data - Splunk has documentation for that.  See https://docs.splunk.com/Documentation/Splunk/9.4.0/Indexer/Restorearchiveddata
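Roughly, the documented flow is to copy the frozen bucket back into the index's thaweddb directory, rebuild it, and restart; a hedged sketch (the index name myindex and the bucket directory name are placeholders):

# copy the archived bucket into the thawed path of the target index
cp -r /path/to/frozen/db_1735084800_1735081200_42 $SPLUNK_HOME/var/lib/splunk/myindex/thaweddb/

# rebuild the bucket's index and metadata files
$SPLUNK_HOME/bin/splunk rebuild $SPLUNK_HOME/var/lib/splunk/myindex/thaweddb/db_1735084800_1735081200_42

# restart so the thawed bucket becomes searchable
$SPLUNK_HOME/bin/splunk restart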