All Posts


Being completely new to this: our SMTP servers gathered data completely before we used the SMTP Add-on. My domain admin now wants me to start ingesting D:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\. So I have deployed TA-Exchange-Mailbox from the TA-Exchange app downloaded from Splunkbase, and I also deployed TA-Exchange-SMTP.

The TA-exchange-smtp local/inputs.conf file looks like this - I only made a couple of changes to the paths:

[monitor://D:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\Edge\ProtocolLog\...\*]
index = smtp
sourcetype = exchange:smtp

I added this one after install:

[monitor://D:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking\*]
index = smtp
sourcetype = MSExch2019:Tracking

So I am not 100% sure this is correct. For TA-Exchange-Mailbox I have 3 stanzas based on previous messages from this forum:

[monitor://D:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking]
whitelist=\.log$|\.LOG$
time_before_close = 0
sourcetype=MSExchange:2019:MessageTracking
queue=parsingQueue
index=smtp
disabled=0

[monitor://D:\Exchange Server\TransportRoles\Logs\*\ProtocolLog\SmtpReceive]
whitelist=\.log$|\.LOG$
time_before_close = 0
sourcetype=MSExchange:2019:SmtpReceive
queue=parsingQueue
index=smtp
disabled=false

[monitor://D:\Exchange Server\TransportRoles\Logs\*\ProtocolLog\SmtpSend]
whitelist=\.log$|\.LOG$
time_before_close = 0
sourcetype=MSExchange:2019:SmtpSend
queue=parsingQueue
index=smtp
disabled=false

Again, I know nothing about this level of data gathering, so I'm hoping one of you who does will be able to guide me in the right direction so that I can begin ingesting.
Hello teachers, I have run into the limitations of the map command in an SPL search. At the moment the statistical results are inaccurate because data is being lost. Could you please advise whether there is anything in SPL that can replace map here? The SPL is as follows:

index=edwapp sourcetype=ygttest is_cont_sens_acct="是"
| stats earliest(_time) as earliest_time latest(_time) as latest_time
| addinfo
| table info_min_time info_max_time earliest_time latest_time
| eval earliest_time=strftime(earliest_time,"%F 00:00:00")
| eval earliest_time=strptime(earliest_time,"%F %T")
| eval earliest_time=round(earliest_time)
| eval searchEarliestTime2=if(info_min_time == "0.000", earliest_time, info_min_time)
| eval searchLatestTime2=if(info_max_time="+Infinity", relative_time(latest_time,"+1d"), info_max_time)
| eval start=mvrange(searchEarliestTime2,searchLatestTime2, "1d")
| mvexpand start
| eval end=relative_time(start,"+7d")
| where end <=searchLatestTime2
| eval end=round(end)
| eval a=strftime(start, "%F")
| eval b=strftime(end, "%F")
| fields start a end b
| eval a=strftime(start, "%F")
| eval b=strftime(end, "%F")
| map search="search earliest=\"$start$\" latest=\"$end$\" index=edwapp sourcetype=ygttest is_cont_sens_acct="是" | dedup day oprt_user_name blng_dept_name oprt_user_acct | stats count as "fwcishu" by day oprt_user_name blng_dept_name oprt_user_acct | eval a=$a$ | eval b=$b$ | stats count as "day_count",values(day) as "qdate",max(day) as "alert_date" by a b oprt_user_name,oprt_user_acct " maxsearches=500000
| where day_count > 2
| eval alert_date=strptime(alert_date,"%F")
| eval alert_date=relative_time(alert_date,"+1d")
| eval alert_date=strftime(alert_date, "%F")
| table a b oprt_user_name oprt_user_acct day_count qdate alert_date

I want to run a statistical analysis of the data from 2019 to the present, where multiple visits by a user on the same day count as one visit, in order to count, for each 7-day interval since 2019, how many days each user visited.
I'm trying to understand the differences between event indexes and metric indexes in terms of how they handle storage and indexing. I have a general understanding of how event indexes work based on this document, but the documentation on metrics seems limited. Specifically, I'm curious about:

- How storage and indexing differ for event indexes vs. metric indexes under the hood.
- Why high cardinality is a bigger concern for metric indexes compared to event indexes.

I understand from this glossary entry that metric time series (MTS) are central to how metrics work, but I'd appreciate a more in-depth explanation of the inner workings and trade-offs involved. Additionally, if I have a dimension with a unique ID, would it be better to use an event index instead of a metric index? If anyone could shed light on this or point me toward relevant resources, that would be great!
@marnall thanks for the suggestion, it worked!

| metadata type=sourcetypes
| search sourcetype=something*
| eval "LastSeen"=now()-lastTime
| rename lastTime as "LastEvent"
| fieldformat "LastEvent"=strftime(LastEvent, "%c")
| eval DaysBehind=round((LastSeen/86400))
| table sourcetype LastEvent DaysBehind
Yeah, good point, let me try something around your suggestion. I'm not a Splunk admin, so I don't know much about audit/admin techniques; any suggestion or advice is highly respected and appreciated in advance!
Note that the lastTime field contains the timestamp of the latest event seen, while the recentTime field contains the indextime of the latest event. If the log was indexed soon after being generated, then these times will be close together. If a log was generated on Dec 11 but indexed today, then the lastTime and recentTime values will be different. Ref: https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Metadata You should think about how you want your search to handle historical data. Perhaps you want this search to filter to sourcetypes where you expect data to come in every day, and then you can filter out sourcetypes which contain data indexed recently but are timestamped a long time ago. The highlighted sourcetype only has 3 events which makes me think that it does not have fresh data every day.
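One way to put that together (a rough sketch - the sourcetype filter and the 1-day threshold are placeholders to adjust; it flags sourcetypes where nothing has been indexed in the last day and shows both time gaps so backfilled historical data is easy to spot):

| metadata type=sourcetypes
| search sourcetype=something*
| eval daysSinceIndexed=round((now()-recentTime)/86400)
| eval daysSinceEvent=round((now()-lastTime)/86400)
| where daysSinceIndexed >= 1
| table sourcetype totalCount daysSinceEvent daysSinceIndexed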
I did attach the query I tried and screenshots of how my results and the JSON files look. Basically, I would like to compare today's 95th percentile with the previous day's (or some other day's) 95th percentile to check for deviation. Also, this JSON file was generated by JMeter from a JTL file. Please let me know if you know any way to generate the report in Splunk using a JTL file.

index=jenkins_artifact source="job/V8_JMeter_Load_Test_STAGE_Pipeline/*/src/TestResults/*/JMeter/RUN2/statistics.json"
| spath
| eval date = strftime(_time, "%m-%d %k:%M")
| eval "Transaction Name"=mvindex(split(transaction,"."),0)
| eval pct2ResTime = round(pct2ResTime)
| untable date "Transaction Name" pct2ResTime
| xyseries "Transaction Name" date pct2ResTime
Because of the results in the screenshot above!
What makes you think it may be the wrong algorithm for your use case?
Splunkers, I thought I had a search to detect and alert when a sourcetype doesn't send logs, but I found out that I may have the wrong algorithm:

| metadata type=sourcetypes
| search sourcetype=something*
| eval "LastSeen"=now()-recentTime
| rename lastTime as "LastEvent"
| fieldformat "LastEvent"=strftime(LastEvent, "%c")
| eval DaysBehind=round((LastSeen/86400))
| table sourcetype LastEvent LastSeen recentTime DaysBehind
You can't use wildcards. So assuming you have a single log and your name is OK (I don't use Azure stuff so I can't verify the actual name of the channel, but it looks reasonable), the first syntax should be fine. You can use splunk list inputstatus to see how your inputs are doing. Check the splunkd.log on the forwarder as well. Does your UF have enough permissions to read that channel?
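For example, on the forwarder (the install path below assumes a default Windows UF location - adjust to wherever your UF actually lives):

cd "C:\Program Files\SplunkUniversalForwarder\bin"
splunk list inputstatus

splunkd.log is under var\log\splunk in the UF install directory.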
Yes, search for "_intel" under Lookup Definitions and you will see all the threat intel lookups along with their definitions. All lookups from a given category get combined/merged and used for threat matching. For example, everything related to IP addresses falls under the ip_intel lookup. Please hit Karma if this helps!
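As a quick sanity check (a minimal sketch - it assumes the ip_intel lookup definition is shared so it is visible from the app you search in):

| inputlookup ip_intel
| head 20

If the entries from the feed you added show up there, the merge into the IP category worked.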
I assume that this accepted answer is correct: https://community.splunk.com/t5/Splunk-Enterprise-Security/How-to-use-the-threat-feed-I-added-using-threat-intelligence/m-p/234794 So like this: | `service_intel` | `process_intel` | `file_intel` | `registry_intel` | `user_intel` | `email_intel` | `certificate_intel` | `ip_intel`
@shiba wrote:

Security risk warning: Found an empty value for 'allowedDomainList' in the alert_actions.conf configuration file. If you do not configure this setting, then users can send email alerts with search results to any domain. You can add values for 'allowedDomainList' either in the alert_actions.conf file or in Server Settings > Email Settings > Email Domains in Splunk Web.

As already explained, this warning matters only if you care about where alert emails can be sent.

Failed to start KV Store process. See mongod.log and splunkd.log for details. 2024/12/25 11:26:57
KV Store changed status to failed. KVStore process terminated.. 2024/12/25 11:26:56
KV Store process terminated abnormally (exit code 14, status PID 2757 exited with code 14). See mongod.log and splunkd.log for details. 2024/12/25 11:26:56

These messages definitely are a problem on a search head, but not on an indexer. Consult mongod.log for details about the problem and fix what is reported. For indexers, turn off KVStore by adding the following to server.conf:

[kvstore]
disabled=true
Hi All, We initially received a requirement to configure and ingest logs from Azure Storage Blob. To address this, we installed the Splunk Add-On for Microsoft Cloud Services on our Heavy Forwarder servers and configured it to pull logs from Azure Storage Blob using the Azure Storage Account. Currently, there's a new requirement to ingest Databricks logs from Azure Storage Blob. We completed the necessary configurations and set the default sourcetype to mscs:storage:blob for data parsing. While the events are visible in Splunk after the configuration, we noticed that the data parsing is not functioning as expected for these events. As a troubleshooting step, I changed the sourcetype to mscs:storage:blob:json, but the issue still persists. Could you please assist me in resolving this issue? Your guidance would be greatly appreciated.  
Hello, I am looking to add a UK map in Dashboard Studio to show the number of open issues (ITSM data) and RAG status for flagship stores in different cities like London, York, Bristol, Liverpool, etc. My search output looks like:

StoreID, City, OpenIssues, Status
Store 1, London, 3, Critical/Red
Store 2, York, 2, Warning/Amber
Store 3, Bristol, 0, Dormant/Green
Store 4, Liverpool, 1, Warning/Amber

Can someone please suggest if/how this can be done? Thank you.
I am trying to onboard the %SystemRoot%\System32\Winevt\Logs\Microsoft-AzureADPasswordProtection-DCAgent%4Admin.evtx logs. This log is available in Event Viewer under Event Viewer -> Application and Services Logs -> Microsoft -> AzureADPasswordProtection -> DCAgent -> Admin. I have added the below inputs.conf stanzas to the Windows TA add-on:

[WinEventLog://Microsoft-AzureADPasswordProtection-DCAgent/Admin]
disabled = false
index = wineventlog_itd
renderXml=false

and

[WinEventLog:Microsoft-AzureADPasswordProtection-DCAgent/Admin*]
disabled = false
index = wineventlog_itd
renderXml=false

Neither is working. Any thoughts??
Since I migrated Splunk to version 9.2.4, I've been getting a lot of error messages from all Splunk servers:

WARN UserManagerPro [16791 SchedulerThread] - Unable to get roles for user=nobody because: Failed to get LDAP user="nobody" from any configured servers
ERROR UserManagerPro [16791 SchedulerThread] - user="nobody" had no roles

I think these are all scheduled searches that are executed without an owner and therefore executed as user nobody. These messages didn't appear with version 9.1. What's the best way to turn off these messages? The annoying thing is that some searches come from Splunk apps (console monitoring, splunk archiver, etc.)
Try this expression ("|)Rule:\s*(?P<Rule>.*?)\1,\d
Hello all, I want to ask about the mechanics of rolling buckets from hot to cold. In our indexes.conf we don't set up a warm path, just hot and cold controlled by maxDataSizeMB. The system team gave me 1TB of SSD and 3TB of SAS to work with, so naturally I put the hot path on the SSD and the cold path on the SAS. Now we are encountering the problem that the indexingQueue always fills up to 100% whenever that indexer ingests data. So my questions are:

1. Does the process of rolling buckets from hot to cold affect the IOPS and the writing in the indexingQueue?

2. My understanding is that the data flow goes like this: forwarder -> indexer hot -> indexer cold, and this is a continuous process. When hot is maxed out it rolls to cold, but cold is on SAS, so the write speed is lower than on SSD. For example, hot ingests 2000 events per second but only pushes out 500 events per second to cold; since hot is full and can only take in the amount it can push out, the effective ingest speed of hot drops to 500. Is this correct?

3. If my understanding is correct, how should I approach optimizing it? I'm thinking of two options:

a) Switch our retention policy from size-based to day-based, setting hot retention to 1 day while cold keeps its size-based retention. Since we ingest 600-800GB per day, the hot partition would always have a buffer to ensure a smooth transition. My question for this option is when the rolling happens - at the end of the day, or whenever an event is one day old (which would not change anything)?

b) Create a warm path as a buffer (hot -> warm -> cold). The warm tier would get 1TB and a retention of 1 day, so with 600-800GB ingested per day the warm path would always have space for hot buckets to roll over (a rough indexes.conf sketch is below).

Is there anything else I can do?
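For illustration, a minimal indexes.conf sketch of the kind of layout option (b) points at - the volume names, paths, sizes, and the main index below are made-up assumptions. Note that hot and warm buckets always share homePath, so the "warm buffer" in practice means sizing homePath on the SSD so roughly a day of data fits before buckets roll to the SAS coldPath:

[volume:ssd]
path = /data/ssd
maxVolumeDataSizeMB = 900000

[volume:sas]
path = /data/sas
maxVolumeDataSizeMB = 2800000

[main]
homePath = volume:ssd/main/db
coldPath = volume:sas/main/colddb
thawedPath = $SPLUNK_DB/main/thaweddb
# cap hot+warm so roughly one day of ingest fits on the SSD before rolling to cold (illustrative number)
homePath.maxDataSizeMB = 800000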