All Topics



Hello, I am looking to add a UK map in Dashboard Studio to show the number of open issues (ITSM data) and RAG status for flagship stores in different cities such as London, York, Bristol, Liverpool, etc. My search output looks like this:

StoreID, City, OpenIssues, Status
Store 1, London, 3, Critical/Red
Store 2, York, 2, Warning/Amber
Store 3, Bristol, 0, Dormant/Green
Store 4, Liverpool, 1, Warning/Amber

Can someone please suggest if/how this can be done? Thank you.
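A minimal SPL sketch of one possible approach, assuming a hypothetical lookup file uk_store_locations.csv that maps City to lat/lon; once each row carries coordinates, the results can drive a marker layer on a map in Dashboard Studio (the lookup name and coordinate field names are illustrative, not from the original post):

```
<your base search>
| stats sum(OpenIssues) AS OpenIssues, latest(Status) AS Status by StoreID, City
| lookup uk_store_locations.csv City OUTPUT lat, lon
| table StoreID, City, OpenIssues, Status, lat, lon
```

The Status field could then be mapped to marker colour in the visualization configuration.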
I am trying to onboard the %SystemRoot%\System32\Winevt\Logs\Microsoft-AzureADPasswordProtection-DCAgent%4Admin.evtx logs. These logs are available in Event Viewer under Event Viewer -> Application and Services Logs -> Microsoft -> AzureADPasswordProtection -> DCAgent -> Admin.

I have added the following inputs.conf stanzas to the Windows TA add-on:

[WinEventLog://Microsoft-AzureADPasswordProtection-DCAgent/Admin]
disabled = false
index = wineventlog_itd
renderXml = false

and

[WinEventLog:Microsoft-AzureADPasswordProtection-DCAgent/Admin*]
disabled = false
index = wineventlog_itd
renderXml = false

Neither is working. Any thoughts?
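For reference, a minimal inputs.conf sketch of the usual single-stanza form, assuming the channel name exactly matches the "Full Name" shown in the log's properties in Event Viewer, and that the target index already exists on the indexers:

```
[WinEventLog://Microsoft-AzureADPasswordProtection-DCAgent/Admin]
disabled = 0
index = wineventlog_itd
renderXml = false
```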
Since I migrated Splunk to version 9.2.4, I've been getting a lot of error messages from all Splunk servers:

WARN UserManagerPro [16791 SchedulerThread] - Unable to get roles for user=nobody because: Failed to get LDAP user="nobody" from any configured servers
ERROR UserManagerPro [16791 SchedulerThread] - user="nobody" had no roles

I think these are all scheduled searches that are executed without an owner and are therefore run as user nobody. These messages did not appear with version 9.1. What's the best way to turn off these messages? The annoying thing is that some of the searches come from Splunk apps (console monitoring, Splunk archiver, etc.).
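One possible way to quiet the messages, sketched under the assumption that the component name shown in splunkd.log is UserManagerPro: raise that logging category's threshold, either in Settings > Server settings > Server logging or via $SPLUNK_HOME/etc/log-local.cfg. Setting it to ERROR hides the WARN lines; FATAL would also hide the ERROR lines. This only suppresses the logging, it does not address why LDAP is being queried for user=nobody.

```
# $SPLUNK_HOME/etc/log-local.cfg (overrides log.cfg; restart splunkd after editing)
category.UserManagerPro=FATAL
```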
Hello all, I want to ask about the mechanics of rolling buckets from hot to cold. In our indexes.conf we don't set up a warm path, just hot and cold, controlled by maxDataSizeMB. The system team gave me 1 TB of SSD and 3 TB of SAS to work with, so naturally I put the hot path on the SSD and the cold path on the SAS. Now we are encountering a problem where the indexingQueue always fills up to 100% whenever that indexer ingests data. My questions:

1. Does the process of rolling buckets from hot to cold affect IOPS and the writes behind the indexingQueue?

2. My understanding is that the data flows like this: forwarder -> indexer hot -> indexer cold, and this is a continuous process. When hot is maxed out, it rolls to cold, but cold is on SAS, so its write speed is lower than the SSD's. For example, if hot ingests 2,000 events per second but can only push out 500 events per second to cold, and hot is already full, the effective ingest rate of hot drops to 500 (since it can only take in as much as it can push out). Is this correct?

3. If my understanding is correct, how should I approach optimizing it? I'm considering two options (see the indexes.conf sketch after this list):
a) Switch our retention policy from size-based to time-based: set hot retention to 1 day and keep cold on size-based retention. Since we ingest 600-800 GB per day, this should ensure the hot partition always has a buffer for a smooth transition. My question here is when the rolling happens: at the end of the day, or whenever an event is one day old (which would not change anything)?
b) Create a warm path as a buffer (hot -> warm -> cold). The warm tier would have 1 TB and a retention of 1 day, so with 600-800 GB ingested per day the warm path would always have space for hot to roll over into.

Is there anything else I can do?
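A minimal indexes.conf sketch of the buffering idea in option (b), assuming the SSD is mounted at /ssd and the SAS array at /sas (paths and sizes are illustrative). Hot and warm buckets share homePath, so "warm on SSD" simply means letting buckets roll hot -> warm on the SSD and deferring the warm -> cold move until the SSD budget is reached:

```
[volume:ssd]
path = /ssd/splunk
maxVolumeDataSizeMB = 900000

[volume:sas]
path = /sas/splunk
maxVolumeDataSizeMB = 2800000

[my_index]
homePath   = volume:ssd/my_index/db
coldPath   = volume:sas/my_index/colddb
thawedPath = /sas/splunk/my_index/thaweddb
# roll hot buckets to warm after roughly a day; keep hot+warm capped below the SSD volume size
maxHotSpanSecs = 86400
homePath.maxDataSizeMB = 800000
```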
I need to extract the Rule field using a regex in props.conf without using transforms.conf. The regex I used was:

Rule\:(?P<Rule>\s.*?(?=\")|((\s\w+)+)\-\w+\s\w+|\s.*?(?=\,))

Please let me know if you have any idea of a regular expression that satisfies all of the cases below, based on the original data, to extract the Rule field.

Test strings:

Dec 5 17:22:59 10.2.1.166 Dec 5 17:13:45 ICxxx SymantecServer: Nxxx,10.150.35.108,Continue,Application and Device Control is ready,System,Begin: 2022-12-05 17:13:18,End Time: 2022-12-05 17:13:18,Rule: Built-in rule,0,SysPlant,0,SysPlant,None,User Name: None,Domain Name: None,Action Type: ,File size (bytes): 0,Device ID:
Dec 5 17:22:59 10.2.1.166 Dec 5 17:12:45 ICxxx SymantecServer,10.10.232.76,Blocked,[AC7-2.1] 스크립트 차단 - Caller,End Time: 2024-12-05 16:41:09,Rule: 모든 응용 프로그램 | [AC7-2.1] 파일 및 폴더 액세스 시도,9056,C:/Windows/System32/svchost.exe,0,No Module Name,C:/Windows/System32/GroupPolicy/DataStore/0/SysVol/celltrion.com/Policies/{08716B68-6FB2-4C06-99B3-2685F9035E2E}/Machine/Scripts/Startup/start_dot3svc.bat,User Name: xxx,Domain Name: xxx,Action Type: ,File size (bytes): xx,Device ID: xxx\xx&Ven_NVMe&Prod_Skhynix_BC501_NV\5&974&0&000
Dec 5 17:22:59 10.2.1.166 Dec 5 17:13:06 IC01 SymantecServer: N1404002,10.50.248.13,Blocked,이 규칙은 모든 응용 프로그램이 시스템에 드라이브 문자를 추가하는 모든 USB 장치에 파일을 쓸 수 없도록 차단합니다. - File,Begin: 2024-12-05 16:33:53,End Time: 2024-12-05 16:33:53,"Rule: USB 드라이브에 읽기 허용,쓰기 차단 | [AC4-1.1] USB 드라이브에 읽기 허용,쓰기 차단",4032,C:/Program Files/Microsoft Office/xxx/Office16/EXCEL.EXE,0,No Module Name,D:/1. NBD/1. ADC cytotoxicity/2024-4Q/~$20241203-05 CT-P70 Drug release.xlsx,User Name: 1404002,Domain Name:xxx,Action Type: ,File size (bytes): 0,xx

Strings to extract:

Rule: Built-in rule
Rule: 모든 응용 프로그램 | [AC7-2.1] 파일 및 폴더 액세스 시도
Rule: USB 드라이브에 읽기 허용,쓰기 차단 | [AC4-1.1] USB 드라이브에 읽기 허용,쓰기 차단
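A hedged props.conf sketch (search-time extraction, no transforms.conf needed). It assumes the Rule value always ends either at a comma followed by a digit (the numeric field that follows it) or at a closing double quote, which holds for the three samples above but should be verified against more data; the stanza name is a placeholder for your actual sourcetype:

```
# props.conf on the search head
[your_symantec_sourcetype]
EXTRACT-Rule = Rule:\s*(?<Rule>(?:[^",]|,(?!\d))+)
```

Against the three samples this yields "Built-in rule", "모든 응용 프로그램 | [AC7-2.1] 파일 및 폴더 액세스 시도", and "USB 드라이브에 읽기 허용,쓰기 차단 | [AC4-1.1] USB 드라이브에 읽기 허용,쓰기 차단".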
I am using these docs: https://docs.splunk.com/Documentation/ES/8.0.1/Admin/AddThreatIntelSources#Add_threat_intelligence_with_a_custom_lookup_file

It is pretty straightforward, but I suspect that my configuration is not working. Where are the "master lookups" that Splunk's threat intelligence framework uses? I assume there is one "master lookup" each for IPv4 addresses, domains, URLs, hashes, etc., or perhaps they are all combined into one. There are about 100 lookups in this client's ES, and I have checked the ones that looked promising but didn't find my new data, so I cannot conclude anything.
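A hedged way to spot-check whether the custom source landed in the threat intelligence KV store collections, assuming the usual ES collection names (ip_intel, http_intel, file_intel, email_intel, etc.; the exact names and fields can vary by ES version, so check Settings > Lookups for the definitions that exist on your system):

```
| inputlookup ip_intel
| head 20
```

If the custom indicators were ingested, they should appear in the matching *_intel collection, typically tagged with a field identifying the source.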
Hello dear Splunkers! I have been struggling with this issue for days and just can't resolve it (ChatGPT is clueless). I have a panel that displays a trellis pie chart (split by a field called MODULE), and the panel has a drilldown that creates the token $module$. I need to open one panel only if the value inside $module$ equals "AAA"; otherwise, I need to open a different panel. For example:

- If I click on the pie chart on the value MODULE="AAA" -> a panel called "ModuleA" is opened.
- If I click on the pie chart on the value MODULE="BBB" (or any other value) -> a panel called "ModuleOther" is opened.

I have tried everything I could find on the internet/from my own head/from ChatGPT, but nothing works. Here's my code for now:

<drilldown>
  <condition>
    <case match="AAA">
      <set token="ModuleA">true</set>
      <unset token="ModuleOther"></unset>
    </case>
    <default>
      <unset token="ModuleA"></unset>
      <set token="ModuleOther">$trellis.value$</set>
    </default>
  </condition>
</drilldown>
...
<!-- panel A is opened -->
<panel depends="$ModuleA$">
...
</panel>
<!-- panel Other is opened -->
<panel depends="$ModuleOther$">
...
</panel>

It doesn't work: the tokens don't even get a value when I click on the pie. I even added a text input for each token to check whether they contain any value on click, and the answer is no. It's important to emphasize that when I made it a simple drilldown that opens one panel whenever any slice is clicked, it worked just fine. The problem is with the conditional tokens.

Thank you in advance!!
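A hedged Simple XML sketch of one way to express the branching: in Simple XML drilldowns the branches are usually separate <condition> elements directly under <drilldown> (an eval-style match attribute plus a bare catch-all), rather than <case>/<default> children of a single <condition>. It assumes a trellis click populates $trellis.value$ with the MODULE value, which is worth verifying on your Splunk version:

```
<drilldown>
  <condition match="$trellis.value$ == &quot;AAA&quot;">
    <set token="ModuleA">true</set>
    <unset token="ModuleOther"></unset>
  </condition>
  <condition>
    <unset token="ModuleA"></unset>
    <set token="ModuleOther">$trellis.value$</set>
  </condition>
</drilldown>
```

The final bare <condition> is intended as the catch-all for every other clicked value.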
When I try to fetch values using the query below, it shows no results in the statistics tab. I want to include message.backendCalls.responseCode in my statistics, but when I add that field at the end of the query, nothing is returned.

Query:

index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Outbound"
| spath "message.incomingRequest.partner"
| rename message.incomingRequest.partner as "SSO_Partner"
| search "SSO_Partner"=*
| stats distinct_count("UUID") as Count by SSO_Partner, Membership_LOB, message.backendCalls.responseCode

When I leave that field out, results are shown. Below is the JSON from which I am trying to fetch the response code:

{
  @timestamp: 2024-12-25T08:10:57.764Z
  Membership_Category: *******
  Membership_LOB: ****
  UUID: ********
  adminId: *************
  adminLevel: *************
  correlation-id: *************
  dd.env: *************
  dd.service: *************
  dd.span_id: *************
  dd.trace_id: *************
  dd.version: *************
  logger: *************
  message: {
    backendCalls: [
      {
        elapsedTime: ****
        endPoint: *************
        requestObject: {...}
        responseCode: 200
        responseObject: {...}
      }
    ]
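A hedged sketch of one way to get at the array element: since backendCalls is a JSON array, the auto-extracted field is typically named message.backendCalls{}.responseCode, and explicitly extracting and expanding it keeps the stats grouping intact (field names assume the JSON shown above):

```
index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso"
    logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Outbound"
| spath "message.incomingRequest.partner" output=SSO_Partner
| search SSO_Partner=*
| spath path="message.backendCalls{}.responseCode" output=responseCode
| mvexpand responseCode
| stats dc(UUID) AS Count by SSO_Partner, Membership_LOB, responseCode
```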
After upgrading Splunk Enterprise from 9.2.2 to 9.2.4, the following error messages started appearing in Splunk Web. Log collection and searching still work. A-Server acts as the indexer; one search head and one indexer are in use.

Search peer A-Server has the following message: Failed to start KV Store process. See mongod.log and splunkd.log for details. 2024/12/25 11:34:12
Search peer A-Server has the following message: KV Store changed status to failed. KVStore process terminated. 2024/12/25 11:34:11
Search peer A-Server has the following message: KV Store process terminated abnormally (exit code 14, status PID 29873 exited with code 14). See mongod.log and splunkd.log for details. 2024/12/25 11:34:11
Search peer A-Server has the following message: Security risk warning: Found an empty value for 'allowedDomainList' in the alert_actions.conf configuration file. If you do not configure this setting, then users can send email alerts with search results to any domain. You can add values for 'allowedDomainList' either in the alert_actions.conf file or in Server Settings > Email Settings > Email Domains in Splunk Web. 2024/12/25 11:34:11
Failed to start KV Store process. See mongod.log and splunkd.log for details. 2024/12/25 11:26:57
Security risk warning: Found an empty value for 'allowedDomainList' in the alert_actions.conf configuration file. If you do not configure this setting, then users can send email alerts with search results to any domain. You can add values for 'allowedDomainList' either in the alert_actions.conf file or in Server Settings > Email Settings > Email Domains in Splunk Web. 2024/12/25 11:26:57
KV Store changed status to failed. KVStore process terminated. 2024/12/25 11:26:56
KV Store process terminated abnormally (exit code 14, status PID 2757 exited with code 14). See mongod.log and splunkd.log for details. 2024/12/25 11:26:56
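A hedged first step for triage, assuming a standard Linux install under /opt/splunk: check KV store status from the CLI and read the tail of mongod.log, which should state the actual reason behind exit code 14 (mongod.log is authoritative; common post-upgrade causes include file ownership/permission problems on the KV store directory):

```bash
# run on A-Server as the user that owns the Splunk installation
/opt/splunk/bin/splunk show kvstore-status
tail -n 100 /opt/splunk/var/log/splunk/mongod.log
ls -l /opt/splunk/var/lib/splunk/kvstore/mongo   # check ownership/permissions of the KV store files
```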
Hi All, I have a dashboard that shows a table of exceptions that happened within the last 24 hours. The table has columns App_Name and the time of the exception. I tokenized these as $app_name$ and $excep_time$ so that I can pass them to another panel in the same dashboard. Once I click a row in the first panel, the second panel should take $app_name$ and $excep_time$ as inputs and show logs for that particular app, with a time range based on $excep_time$. I have no problem adding +120 seconds, but when I use earliest and latest it throws an "invalid input" message, and when I use _time>= or _time<= it still uses the UI time picker rather than the passed time. How do I pass the time from one row to another panel and search within that passed time window?
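A hedged Simple XML sketch of one way to do this: convert the clicked row's time to epoch seconds in the drilldown with <eval> tokens, then feed those epoch values into the second panel's <earliest>/<latest> elements (which accept epoch time). The strptime format string and the index name are assumptions and must match how _time is rendered in your table:

```
<!-- in the first panel's table -->
<drilldown>
  <set token="app_name">$row.App_Name$</set>
  <eval token="excep_earliest">strptime("$row._time$", "%Y-%m-%d %H:%M:%S")</eval>
  <eval token="excep_latest">strptime("$row._time$", "%Y-%m-%d %H:%M:%S") + 120</eval>
</drilldown>

<!-- in the second panel -->
<search>
  <query>index=app_logs App_Name="$app_name$"</query>
  <earliest>$excep_earliest$</earliest>
  <latest>$excep_latest$</latest>
</search>
```

An alternative that avoids strptime entirely is to add an epoch column in the first panel's search (e.g. | eval excep_epoch=_time) and pass $row.excep_epoch$ instead.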
Hello everyone. I currently have a cluster of 2 indexers and 1 search head running on Linux, and everything is going well with it. What I need now is to restore data indexed 1 year ago, which I have on a disk mounted on the server, and to view that data from my search head, but I can't get it to work. I have tried the following:

- I created a new index called mydb2, so as not to alter my original index (mydb), and copied several of the bucket directories with names like "db_1711654894_1711568541_1281_6C91679A-EBBC-4F09-A710-1CC8C8CA8FDC" to the $SPLUNK_DB/mydb2/db/ directory. This was not successful.
- I restarted the 2 indexers in the cluster, which didn't work either, but after 2 days some data began to appear — only the data corresponding to 4 days, even though the bucket directories I copied to $SPLUNK_DB/mydb2/db/ cover about 5 months. More days have passed, I have restarted again, and no more data has appeared.

Does anyone in the community have experience with this, i.e. how to view historical data restored from a backup?
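A hedged sketch of the usual restore path for backed-up buckets, assuming the copies are standalone (non-replicated) buckets and $SPLUNK_DB points at your index volume: restored buckets normally go into the index's thaweddb directory and are rebuilt so their index files are recognized. Bucket naming and placement can need extra care on clustered indexes, so verify against the docs for your version:

```bash
# on one indexer, repeat for each restored bucket
cp -r /mnt/backup/db_1711654894_1711568541_1281_6C91679A-EBBC-4F09-A710-1CC8C8CA8FDC \
      $SPLUNK_DB/mydb2/thaweddb/
$SPLUNK_HOME/bin/splunk rebuild \
      $SPLUNK_DB/mydb2/thaweddb/db_1711654894_1711568541_1281_6C91679A-EBBC-4F09-A710-1CC8C8CA8FDC
$SPLUNK_HOME/bin/splunk restart
```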
Hello Splunk Community, I am working on the configuration of a distributed Splunk deployment, and I need clarification regarding the KV Store. Could you please confirm where the KV Store should be configured in a distributed environment? Should it be enabled on the Search Heads, Indexers, or another component of the deployment? Any guidance on best practices would be greatly appreciated. Thank you for your help! Best regards,
Hello, I have a case where the logs from 4 hosts are lagging behind inconsistently: the lag varies from 5 to 30 minutes, and sometimes there is no lag at all. When the logs don't show up for 30 minutes or more, I go to forwarder management, disable/enable the apps, and restart Splunkd, and then the logs continue with only 1-2 seconds of lag. The other hosts also lag behind at peak hours, but only by 1 or 2 minutes (at most 5 minutes for sources with a large volume of logs). I admit that our indexer cluster is not up to par with the IOPS requirements, but for 4 particular hosts to visibly underperform is quite concerning. Can someone show me the steps to debug and solve this problem?
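A hedged starting point for quantifying the lag: compare _indextime to _time per host and source over a peak window, then correlate the worst offenders with the forwarders' own metrics.log and splunkd.log (blocked queues, maxKBps throttling). Host names and the index scope below are illustrative:

```
index=* host IN (host1, host2, host3, host4)
| eval lag_sec = _indextime - _time
| bin _time span=5m
| stats avg(lag_sec) AS avg_lag, p95(lag_sec) AS p95_lag by _time, host, source
| sort - p95_lag
```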
I am trying to use the Splunk Add-on for Tomcat for the first time. When I try Add Account, it results in the error message below. I think the add-on expects Java to be available somewhere. Java is installed on my all-in-one Splunk server, where the add-on is installed. How do I make Java available to this add-on?

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/lib/splunktaucclib/rest_handler/handler.py", line 142, in wrapper
    for name, data, acl in meth(self, *args, **kwargs):
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/lib/splunktaucclib/rest_handler/handler.py", line 107, in wrapper
    self.endpoint.validate(
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/lib/splunktaucclib/rest_handler/endpoint/__init__.py", line 85, in validate
    self._loop_fields("validate", name, data, existing=existing)
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/lib/splunktaucclib/rest_handler/endpoint/__init__.py", line 82, in _loop_fields
    return [getattr(f, meth)(data, *args, **kwargs) for f in model.fields]
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/lib/splunktaucclib/rest_handler/endpoint/__init__.py", line 82, in <listcomp>
    return [getattr(f, meth)(data, *args, **kwargs) for f in model.fields]
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/lib/splunktaucclib/rest_handler/endpoint/field.py", line 56, in validate
    res = self.validator.validate(value, data)
  File "/opt/splunk/etc/apps/Splunk_TA_tomcat/bin/Splunk_TA_tomcat_account_validator.py", line 85, in validate
    self._process = subprocess.Popen(  # nosemgrep false-positive : The value java_args is
  File "/opt/splunk/lib/python3.9/subprocess.py", line 951, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/opt/splunk/lib/python3.9/subprocess.py", line 1837, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'java'
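The traceback shows splunkd's Python spawning a "java" subprocess that is not found on splunkd's PATH. A hedged workaround sketch, assuming Java lives somewhere like /usr/lib/jvm and that a standard location such as /usr/bin or /usr/local/bin is on splunkd's PATH; an alternative is exporting JAVA_HOME/PATH in the environment that starts Splunk (for example in the systemd unit) and restarting:

```bash
# locate the java binary and make it resolvable from a standard PATH location
JAVA_BIN="$(readlink -f "$(command -v java)")"
sudo ln -s "$JAVA_BIN" /usr/local/bin/java   # or /usr/bin/java, depending on splunkd's PATH
sudo -u splunk /opt/splunk/bin/splunk restart   # assumes Splunk runs as the "splunk" user under /opt/splunk
```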
Hi, I'm using the journald input in the universal forwarder to collect logs from journald: https://docs.splunk.com/Documentation/Splunk/9.3.2/Data/CollecteventsfromJournalD. When the data comes in, I set the sourcetype dynamically based on the value of the journald TRANSPORT field. This works fine. After that, I would like to apply other transforms to the logs with certain sourcetypes, e.g. drop a log if it contains a certain phrase. Unfortunately, for some reason the second transform is not working. Here are the props and transforms that I'm using.

Here is my transforms.conf:

[set_new_sourcetype]
SOURCE_KEY = field:TRANSPORT
REGEX = ([^\s]+)
FORMAT = sourcetype::$1
DEST_KEY = MetaData:Sourcetype

[setnull_syslog_test]
REGEX = (?i)test
DEST_KEY = queue
FORMAT = nullQueue

Here is my props.conf:

[source::journald:///var/log/journal]
TRANSFORMS-change_sourcetype = set_new_sourcetype

[sourcetype::syslog]
TRANSFORMS-setnull = setnull_syslog_test

Any idea why the setnull_syslog_test transform is not working?
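A hedged observation: props.conf stanzas for sourcetypes are written as just the sourcetype name, without a "sourcetype::" prefix (the source::/host:: prefixes exist, but sourcetype stanzas are bare). Also, because the sourcetype is rewritten during parsing on the same instance, props keyed on the new sourcetype may not be re-applied in that same pass, so it is worth testing the null-queue rule keyed on the original source as well. A sketch of the bare-stanza form:

```
# props.conf (sketch; the stanza name is the sourcetype itself)
[syslog]
TRANSFORMS-setnull = setnull_syslog_test
```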
I cannot get auth to work for the HTTP input (HEC) in the Splunk trial.

curl -H "Authorization: Splunk <HEC_token>" -k https://http-inputs-<stack_url>.splunkcloud.com:8088/services/collector/event -d '{"sourcetype": "my_sample_data", "event": "http auth ftw!"}'

My Splunk URL is https://<stack_url>.splunkcloud.com. I've scoured the forums and the web trying a number of combinations. The HTTP input is in the Enabled state on the Splunk console. Any help is appreciated. Thank you.
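A hedged way to narrow down whether this is an endpoint/port problem or a token problem, assuming the stack exposes the standard HEC health endpoint: the health check needs no token, so if it fails the URL or port is likely wrong (Splunk Cloud offerings differ in whether HEC listens on 8088 or 443), and if it succeeds but the event POST is rejected, the token or its enablement is the issue. The -v flag shows the exact HTTP status returned:

```bash
# 1) does the HEC endpoint answer at all? (no token required)
curl -k "https://http-inputs-<stack_url>.splunkcloud.com:8088/services/collector/health"

# 2) resend the event verbosely to see the exact status code (401/403 = auth, 400 = payload)
curl -v -k -H "Authorization: Splunk <HEC_token>" \
  "https://http-inputs-<stack_url>.splunkcloud.com:8088/services/collector/event" \
  -d '{"sourcetype": "my_sample_data", "event": "http auth ftw!"}'
```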
I want to set up a Splunk alert that has two thresholds:

1. If the time is between 8 AM and 5 PM, alert if AvgDuration is greater than 1000 ms.
2. If the time is between 5 PM and 8 AM the next day, alert if AvgDuration is greater than 500 ms.

How do I implement this? The query I am working on:

<mySearch>
| bin _time span=1m
| stats avg(msg.DurationMs) AS AvgDuration by _time, msg.Service
| where AvgDuration > 1000
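A hedged sketch of one way to fold both thresholds into the same alert search: derive the hour of each time bucket and pick the threshold with an eval, then let the where clause keep only rows that breach it (the hour boundaries assume the search head's local time zone):

```
<mySearch>
| bin _time span=1m
| stats avg(msg.DurationMs) AS AvgDuration by _time, msg.Service
| eval hour = tonumber(strftime(_time, "%H"))
| eval threshold = if(hour >= 8 AND hour < 17, 1000, 500)
| where AvgDuration > threshold
```

The alert can then simply trigger when the number of results is greater than zero.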
Can someone please help me with a dashboard search query that will look at all alerts configured in Splunk and list only those alerts whose search contains index=*?
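A hedged sketch using the saved-searches REST endpoint; it assumes "alert" means a scheduled saved search with an alert condition, and matches a literal index=* in the search string (adjust the filters to your own definition of an alert):

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search is_scheduled=1 alert_type!="always"
| where like('search', "%index=*%")
| table title, eai:acl.app, eai:acl.owner, cron_schedule, search
```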
Hi, I am using the Splunk OTel Collector to send logs to Splunk Enterprise. For different sourcetypes, I want to do different things, like adding fields or removing fields. Can you guide me? Thanks a lot.

The following works:

```
      transform/istio-proxy:
        error_mode: ignore
        log_statements:
        - context: log
          statements:
          - set(attributes["johnaddkey"], "johnaddvalue")
```

The following does not work:

```
      transform/istio-proxy:
        error_mode: ignore
        log_statements:
        - context: log
          statements:
          - set(attributes["johntestwhere"], "johnvaluewhere") where attributes["sourcetype"] == "kube:container:istio-proxy"
```

This does not work either:

```
      transform/istio-proxy:
        error_mode: ignore
        log_statements:
        - context: log
          conditions:
          - attributes["sourcetype"] == "kube:container:istio-proxy"
          statements:
          - set(attributes["johnaddkeyc"], "johnaddvaluec")
```
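A hedged variant worth trying: in the Splunk OTel Collector the sourcetype is usually carried as the resource attribute com.splunk.sourcetype rather than as a log record attribute, so the OTTL condition may need to look at resource.attributes instead (the attribute name is an assumption; inspect one exported record, e.g. with the debug exporter, to confirm where the sourcetype actually lives):

```yaml
transform/istio-proxy:
  error_mode: ignore
  log_statements:
    - context: log
      conditions:
        - resource.attributes["com.splunk.sourcetype"] == "kube:container:istio-proxy"
      statements:
        - set(attributes["johnaddkeyc"], "johnaddvaluec")
```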
Hello everyone, I have found posts going back 10 years about a specific error/bug(?): the src and dest IP addresses are swapped for the Cisco ASA event with ID 302013. If you look in the app, it even points out that these two fields are knowingly swapped. However, for the following teardown event (302014) of the same connection, the IPs are not swapped. I am trying to figure out why this is the case. Since postings about this topic have been around for 10 years now, and the app says "# direction is outbound - source and destination fields are swapped", it can't simply be an error, but I can't explain it. Can anyone comment on this?

Example:

<166>Dec 23 2024 10:36:04: %ASA-6-302013: Built outbound TCP connection 224811914 for dmz-sample-uidoc_172.27.252.0/27_604:172.27.252.1/8200 (172.27.252.1/8200) to fwr_sample_172.20.25.0/26:172.27.13.131/62388 (172.27.13.131/62388)
Result: src=172.27.13.131 || dest=172.27.252.1

<166>Dec 23 2024 10:36:04: %ASA-6-302014: Teardown TCP connection 224811914 for dmz-sample-uidoc_172.27.252.0/27_604:172.27.252.1/8200 to fwr_sample_172.20.25.0/26:172.27.13.131/62388 duration 0:00:00 bytes 0 TCP FINs from fwr_sample_172.20.25.0/26
Result: src=172.27.252.1 || dest=172.27.13.131

Thanks and best regards
Jan