Because of the screenshot results above!
What makes you think it may be the wrong algorithm for your use case?
Splunkers, I thought I had a search to detect and alert when a sourcetype doesn't send logs, but I found out that I may have the wrong algorithm:

| metadata type=sourcetypes | search sourcetype=something* | eval "LastSeen"=now()-recentTime | rename lastTime as "LastEvent" | fieldformat "LastEvent"=strftime(LastEvent, "%c") | eval DaysBehind=round((LastSeen/86400)) | table sourcetype LastEvent LastSeen recentTime DaysBehind
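One thing worth noting about that search's arithmetic: round() with no digits rounds to the nearest whole day, so a sourcetype that has been silent for up to about 12 hours still reports DaysBehind=0. A minimal Python sketch of the same computation shows this (days_behind is a hypothetical helper name; recentTime is epoch seconds, as | metadata returns it):

```python
import time

# Sketch of the SPL arithmetic, assuming recentTime is epoch seconds:
#   eval LastSeen   = now() - recentTime
#   eval DaysBehind = round(LastSeen / 86400)
def days_behind(recent_time, now=None):
    """Whole days since the sourcetype last had data indexed."""
    if now is None:
        now = time.time()
    last_seen = now - recent_time        # seconds of silence
    return round(last_seen / 86400)      # rounds to the NEAREST day

# A sourcetype silent for 11 hours still shows 0 days behind:
print(days_behind(0, 11 * 3600))   # 0
print(days_behind(0, 2 * 86400))   # 2
```

Comparing LastSeen (raw seconds) against a threshold, or using floor() instead of round(), avoids alerting half a day late.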
You can't use wildcards. So assuming you have a single log and your name is OK (I don't use Azure stuff so can't verify the actual name of the channel, but it looks reasonable), the first syntax should be fine. You can use splunk list inputstatus to see how your inputs are doing. Check splunkd.log on the forwarder as well. Does your UF have enough permissions to read that channel?
Yes, search for "_intel" in Lookup Definitions and you will see all the threat intel lookups along with their definitions. All lookups from a given category get combined/merged and used for threat matching. For example, everything related to IP addresses falls under the ip_intel lookup. Please hit Karma if this helps!
I assume that this accepted answer is correct: https://community.splunk.com/t5/Splunk-Enterprise-Security/How-to-use-the-threat-feed-I-added-using-threat-intelligence/m-p/234794 So like this: | `service_intel` | `process_intel` | `file_intel` | `registry_intel` | `user_intel` | `email_intel` | `certificate_intel` | `ip_intel`
@shiba wrote:

Security risk warning: Found an empty value for 'allowedDomainList' in the alert_actions.conf configuration file. If you do not configure this setting, then users can send email alerts with search results to any domain. You can add values for 'allowedDomainList' either in the alert_actions.conf file or in Server Settings > Email Settings > Email Domains in Splunk Web.

As already explained, this warning matters only if you care about where alert emails can be sent.

Failed to start KV Store process. See mongod.log and splunkd.log for details. 2024/12/25 11:26:57
KV Store changed status to failed. KVStore process terminated. 2024/12/25 11:26:56
KV Store process terminated abnormally (exit code 14, status PID 2757 exited with code 14). See mongod.log and splunkd.log for details. 2024/12/25 11:26:56

These messages definitely are a problem on a search head, but not on an indexer. Consult mongod.log for details about the problem and fix what is reported. For indexers, turn off KVStore by adding the following to server.conf:

[kvstore]
disabled = true
Hi All, We initially received a requirement to configure and ingest logs from Azure Storage Blob. To address this, we installed the Splunk Add-On for Microsoft Cloud Services on our Heavy Forwarder servers and configured it to pull logs from Azure Storage Blob using the Azure Storage Account. Currently, there's a new requirement to ingest Databricks logs from Azure Storage Blob. We completed the necessary configurations and set the default sourcetype to mscs:storage:blob for data parsing. While the events are visible in Splunk after the configuration, we noticed that the data parsing is not functioning as expected for these events. As a troubleshooting step, I changed the sourcetype to mscs:storage:blob:json, but the issue still persists. Could you please assist me in resolving this issue? Your guidance would be greatly appreciated.  
Hello, I am looking to add a UK map in Dashboard Studio to show the number of open issues (ITSM data) and a RAG status for flagship stores in different cities like London, York, Bristol, Liverpool etc. My search output looks like:

StoreID, City, OpenIssues, Status
Store 1, London, 3, Critical/Red
Store 2, York, 2, Warning/Amber
Store 3, Bristol, 0, Dormant/Green
Store 4, Liverpool, 1, Warning/Amber

Can someone please suggest if/how this can be done? Thank you.
I am trying to onboard the %SystemRoot%\System32\Winevt\Logs\Microsoft-AzureADPasswordProtection-DCAgent%4Admin.evtx logs. This log is available in Event Viewer under Event Viewer -> Application and Services Logs -> Microsoft -> AzureADPasswordProtection -> DCAgent -> Admin. I have added the below inputs.conf stanzas in the Windows TA add-on:

[WinEventLog://Microsoft-AzureADPasswordProtection-DCAgent/Admin]
disabled = false
index = wineventlog_itd
renderXml = false

and

[WinEventLog:Microsoft-AzureADPasswordProtection-DCAgent/Admin*]
disabled = false
index = wineventlog_itd
renderXml = false

Both are not working. Any thoughts?
Since I migrated Splunk to version 9.2.4, I've been getting a lot of error messages from all Splunk servers:

WARN UserManagerPro [16791 SchedulerThread] - Unable to get roles for user=nobody because: Failed to get LDAP user="nobody" from any configured servers
ERROR UserManagerPro [16791 SchedulerThread] - user="nobody" had no roles

I think these are all scheduled searches that are executed without an owner and therefore run as user nobody. These messages didn't appear with version 9.1. What's the best way to turn off these messages? The annoying thing is that some searches come from Splunk apps (console monitoring, Splunk archiver, etc.).
Try this expression ("|)Rule:\s*(?P<Rule>.*?)\1,\d
Hello all, I want to ask about the mechanics of rolling buckets from hot to cold. In our indexes.conf we don't set up a warm path, just hot and cold controlled by maxDataSizeMB. The system team gave me 1 TB of SSD and 3 TB of SAS to work with, so naturally I put the hot path on the SSD and the cold path on the SAS. Now we are encountering a problem where the indexingQueue always fills up to 100% whenever that indexer ingests data. So my questions are:

1. Does the process of rolling buckets from hot to cold affect IOPS and the writing in the indexingQueue?

2. My understanding is that the data flows like this: forwarder -> indexer hot -> indexer cold, as a continuous process. If hot is maxed out, it rolls to cold, but cold is SAS, so the write speed is lower than the SSD's. For example, if hot ingests 2000 events per second but can only push out 500 events per second to cold, and hot is already full, then the effective ingest rate of hot drops to 500 (since it can only take in as much as it can push out). Is this correct?

3. If my understanding is correct, how should I approach optimizing it? I'm thinking of two options:
a) Switch our retention policy from size-based to day-based, setting hot retention to 1 day while cold keeps size-based retention. Since we ingest 600-800 GB per day, the hot partition would always have a buffer for a smooth transition. My question here is when the rolling happens: at the end of the day, or whenever an event is one day old (in which case nothing changes)?
b) Create a warm path as a buffer, hot -> warm -> cold. The warm tier would have 1 TB and a retention of 1 day, so with 600-800 GB ingested per day, the warm path will always have space for hot to roll over.

Is there anything else I can do?
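On the mechanics behind question 3a: buckets roll individually, when a bucket hits its size limit or the index hits its volume caps (or on restart), never "at the end of the day", so option (a) would still roll continuously. A hypothetical indexes.conf sketch of the layout being discussed (paths and sizes are placeholders, not recommendations):

```
[my_index]
homePath   = /ssd/splunk/my_index/db         # hot + warm buckets live on the SSD
coldPath   = /sas/splunk/my_index/colddb     # cold buckets on the SAS array
thawedPath = /sas/splunk/my_index/thaweddb
maxDataSize = auto_high_volume               # larger hot buckets for high-volume indexes
homePath.maxDataSizeMB = 900000              # oldest warm buckets roll to cold past ~900 GB
maxTotalDataSizeMB = 3800000                 # overall size-based retention (hot+warm+cold)
```

Keeping homePath.maxDataSizeMB comfortably below the 1 TB SSD capacity gives much the same buffer effect as a dedicated warm tier, without a third path.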
@shiba KV Store issues usually occur when Splunk's Key-Value Store is not functioning properly, which can impact searches that depend on KV Store collections. But if you are getting this ERROR on indexers, you can ignore it. As for "Security risk warning: Found an empty value for 'allowedDomainList'": the allowedDomainList parameter in alert_actions.conf has been left empty. This parameter specifies the domains allowed as recipients of email alerts. If this server is being used as both an indexer and a search head, please confirm.
Can you explain the regular expression you used? 
Yes, they're UFs. I already set  [thruput] maxKBps = 0 in limits.conf in the app.
Hello, thanks for your answer. I understand about the indexer. Is there any problem with the following messages?

Failed to start KV Store process. See mongod.log and splunkd.log for details. 2024/12/25 11:26:57
Security risk warning: Found an empty value for 'allowedDomainList' in the alert_actions.conf configuration file. If you do not configure this setting, then users can send email alerts with search results to any domain. You can add values for 'allowedDomainList' either in the alert_actions.conf file or in Server Settings > Email Settings > Email Domains in Splunk Web. 2024/12/25 11:26:57
KV Store changed status to failed. KVStore process terminated. 2024/12/25 11:26:56
KV Store process terminated abnormally (exit code 14, status PID 2757 exited with code 14). See mongod.log and splunkd.log for details. 2024/12/25 11:26:56
According to regex101.com, your regular expression works. This one, however, is more efficient:

EXTRACT-Rule = (")?Rule:(?P<Rule>.*?)(?(1)\1|,)

It looks for an optional leading quotation mark; if one was found, the conditional (?(1)\1|,) terminates the match at the closing quote, otherwise at the next comma.
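As a quick check, the same pattern can be exercised in Python's re module, which (like PCRE) supports the (?(1)...) conditional. The sample strings here are shortened stand-ins for the events in the question (the quoted one paraphrases the Korean rule name in English):

```python
import re

# The EXTRACT-Rule pattern: optional leading quote; if one matched, the
# field ends at the closing quote, otherwise at the next comma.
pattern = re.compile(r'(")?Rule:(?P<Rule>.*?)(?(1)\1|,)')

unquoted = 'End Time: 2022-12-05 17:13:18,Rule: Built-in rule,0,SysPlant'
quoted   = '16:33:53,"Rule: USB read allowed,write blocked | [AC4-1.1]",4032'

print(pattern.search(unquoted).group('Rule').strip())  # Built-in rule
print(pattern.search(quoted).group('Rule').strip())    # USB read allowed,write blocked | [AC4-1.1]
```

Note that the quoted case keeps its embedded commas, since the conditional makes the closing quote, not the comma, the terminator.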
I need to extract the Rule field using a regex in props.conf, without using transforms.conf. The regex I used was:

Rule\:(?P<Rule>\s.*?(?=\")|((\s\w+)+)\-\w+\s\w+|\s.*?(?=\,))

Please let me know if you have any idea for a regular expression that satisfies all the cases below, extracting the Rule field from the original data.

Test strings:

Dec 5 17:22:59 10.2.1.166 Dec 5 17:13:45 ICxxx SymantecServer: Nxxx,10.150.35.108,Continue,Application and Device Control is ready,System,Begin: 2022-12-05 17:13:18,End Time: 2022-12-05 17:13:18,Rule: Built-in rule,0,SysPlant,0,SysPlant,None,User Name: None,Domain Name: None,Action Type: ,File size (bytes): 0,Device ID:

Dec 5 17:22:59 10.2.1.166 Dec 5 17:12:45 ICxxx SymantecServer,10.10.232.76,Blocked,[AC7-2.1] 스크립트 차단 - Caller,End Time: 2024-12-05 16:41:09,Rule: 모든 응용 프로그램 | [AC7-2.1] 파일 및 폴더 액세스 시도,9056,C:/Windows/System32/svchost.exe,0,No Module Name,C:/Windows/System32/GroupPolicy/DataStore/0/SysVol/celltrion.com/Policies/{08716B68-6FB2-4C06-99B3-2685F9035E2E}/Machine/Scripts/Startup/start_dot3svc.bat,User Name: xxx,Domain Name: xxx,Action Type: ,File size (bytes): xx,Device ID: xxx\xx&Ven_NVMe&Prod_Skhynix_BC501_NV\5&974&0&000

Dec 5 17:22:59 10.2.1.166 Dec 5 17:13:06 IC01 SymantecServer: N1404002,10.50.248.13,Blocked,이 규칙은 모든 응용 프로그램이 시스템에 드라이브 문자를 추가하는 모든 USB 장치에 파일을 쓸 수 없도록 차단합니다. - File,Begin: 2024-12-05 16:33:53,End Time: 2024-12-05 16:33:53,"Rule: USB 드라이브에 읽기 허용,쓰기 차단 | [AC4-1.1] USB 드라이브에 읽기 허용,쓰기 차단",4032,C:/Program Files/Microsoft Office/xxx/Office16/EXCEL.EXE,0,No Module Name,D:/1. NBD/1. ADC cytotoxicity/2024-4Q/~$20241203-05 CT-P70 Drug release.xlsx,User Name: 1404002,Domain Name:xxx,Action Type: ,File size (bytes): 0,xx

Strings to extract:

Rule: Built-in rule
Rule: 모든 응용 프로그램 | [AC7-2.1] 파일 및 폴더 액세스 시도
Rule: USB 드라이브에 읽기 허용,쓰기 차단 | [AC4-1.1] USB 드라이브에 읽기 허용,쓰기 차단
Try something like this (I am not sure if you need click.value2, value, name or name2 though, you would need to experiment):

<drilldown>
  <condition match="$click.value2$=&quot;AAA&quot;">
    <set token="ModuleA">true</set>
    <unset token="ModuleOther"></unset>
  </condition>
  <condition>
    <unset token="ModuleA"></unset>
    <set token="ModuleOther">$trellis.value$</set>
  </condition>
</drilldown>
...
<!-- panel A is opened -->
<panel depends="$ModuleA$">
...
</panel>
<!-- panel Other is opened -->
<panel depends="$ModuleOther$">
...
</panel>