All Posts

What happens if a Splunk SOAR license expires? I cannot find any documentation that explains it.
Thank you, I forgot about the code format because I rarely use it.
There is indeed no problem with the SPL I provided in the production environment. I want to produce statistics for the interval from 2019 to the present, where multiple visits by a user in a single day count as one visit, and to calculate the number of consecutive user visits in each 7-day interval since 2019.
In addition to @ITWhisperer's comments, there is an alternative way to set/unset pairs of tokens using the <eval> token mechanism, i.e.

<drilldown>
  <eval token="ModuleA">if($click.value2$="AAA", "true", null())</eval>
  <eval token="ModuleOther">if($click.value2$="AAA", null(), $trellis.value$)</eval>
</drilldown>

I prefer this mechanism over <condition>, where null() is equivalent to unsetting a token. It avoids the &quot; usage and keeps the number of lines down. Again, click.value2 may not be the right one.
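To show where such set/unset pairs typically pay off: a minimal sketch of a panel gated on one of the tokens via depends (the title, search, and index below are placeholders, not from the original dashboard):

<row>
  <panel depends="$ModuleA$">
    <title>Module A detail</title>
    <table>
      <search>
        <query>index=main module="AAA" | stats count by host</query>
      </search>
    </table>
  </panel>
</row>

When the <eval> sets ModuleA to null(), the token is unset and the panel disappears; when it evaluates to "true", the panel renders.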
Can you summarise what you are trying to do? Your SPL contains some errors, e.g. using info_min_time in your mvrange() eval statement, which does not exist, and the fact that you have maxsearches set to half a million indicates you're going about this the wrong way. Describe the problem you are trying to solve, your inputs, and your expected outputs.
Can you edit and format your SPL as a code block using the </> symbol in the Body menu? It makes long SPL far easier to digest.
Being completely new to this: our SMTP servers gathered data completely before using the SMTP Add-on. My domain admin now wants me to start ingesting D:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\. So I have deployed TA-Exchange-Mailbox from the TA-Exchange app download on Splunkbase. I also deployed TA-Exchange-SMTP.

The TA-Exchange-SMTP local/inputs.conf file looks like this - I only made a couple of changes in the path:

[monitor://D:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\Edge\ProtocolLog\...\*]
index = smtp
sourcetype = exchange:smtp

I added this one after install:

[monitor://D:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking\*]
index = smtp
sourcetype = MSExch2019:Tracking

So I am not 100% sure this is correct. For TA-Exchange-Mailbox, I have three stanzas based on previous messages in this forum:

[monitor://D:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking]
whitelist = \.log$|\.LOG$
time_before_close = 0
sourcetype = MSExchange:2019:MessageTracking
queue = parsingQueue
index = smtp
disabled = 0

[monitor://D:\Exchange Server\TransportRoles\Logs\*\ProtocolLog\SmtpReceive]
whitelist = \.log$|\.LOG$
time_before_close = 0
sourcetype = MSExchange:2019:SmtpReceive
queue = parsingQueue
index = smtp
disabled = false

[monitor://D:\Exchange Server\TransportRoles\Logs\*\ProtocolLog\SmtpSend]
whitelist = \.log$|\.LOG$
time_before_close = 0
sourcetype = MSExchange:2019:SmtpSend
queue = parsingQueue
index = smtp
disabled = false

Again - I know nothing about this level of data gathering, so I'm hoping one of you who does will be able to guide me in the right direction so that I can begin ingesting.
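One way to sanity-check stanzas like these on the forwarder itself (run from the Splunk bin directory; findstr assumes a Windows host, which the D:\ paths suggest): btool shows which monitor stanzas Splunk actually merged, and list inputstatus shows whether each file is being read.

splunk btool inputs list --debug | findstr /i exchange
splunk list inputstatus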
Hello experts, I have encountered an SPL statement that runs into the restrictions on the map command. Currently the statistical results are inaccurate because data is lost. Could you please advise whether any SPL functions can replace map to achieve this? The SPL is as follows:

index=edwapp sourcetype=ygttest is_cont_sens_acct="是"
| stats earliest(_time) as earliest_time latest(_time) as latest_time
| addinfo
| table info_min_time info_max_time earliest_time latest_time
| eval earliest_time=strftime(earliest_time, "%F 00:00:00")
| eval earliest_time=strptime(earliest_time, "%F %T")
| eval earliest_time=round(earliest_time)
| eval searchEarliestTime2=if(info_min_time == "0.000", earliest_time, info_min_time)
| eval searchLatestTime2=if(info_max_time == "+Infinity", relative_time(latest_time, "+1d"), info_max_time)
| eval start=mvrange(searchEarliestTime2, searchLatestTime2, "1d")
| mvexpand start
| eval end=relative_time(start, "+7d")
| where end <= searchLatestTime2
| eval end=round(end)
| eval a=strftime(start, "%F")
| eval b=strftime(end, "%F")
| fields start a end b
| map search="search earliest=\"$start$\" latest=\"$end$\" index=edwapp sourcetype=ygttest is_cont_sens_acct=\"是\"
    | dedup day oprt_user_name blng_dept_name oprt_user_acct
    | stats count as \"fwcishu\" by day oprt_user_name blng_dept_name oprt_user_acct
    | eval a=\"$a$\"
    | eval b=\"$b$\"
    | stats count as \"day_count\" values(day) as \"qdate\" max(day) as \"alert_date\" by a b oprt_user_name oprt_user_acct" maxsearches=500000
| where day_count > 2
| eval alert_date=strptime(alert_date, "%F")
| eval alert_date=relative_time(alert_date, "+1d")
| eval alert_date=strftime(alert_date, "%F")
| table a b oprt_user_name oprt_user_acct day_count qdate alert_date

I want to implement statistical analysis of data from 2019 to the present, where multiple visits by a user in a single day count as one visit, to calculate the number of consecutive visits per user in each 7-day interval since 2019.
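For what it's worth, a map-free sketch of the same idea, assuming one row per active user-day is what you want to window over (field names are taken from your SPL; this uses a trailing 7-day window per user rather than fixed intervals, so treat it as a starting point only, not your exact logic):

index=edwapp sourcetype=ygttest is_cont_sens_acct="是"
| bin _time span=1d
| stats count by _time oprt_user_name oprt_user_acct blng_dept_name
| sort 0 - _time
| streamstats time_window=7d count as day_count by oprt_user_name oprt_user_acct
| where day_count > 2

The stats by day collapses multiple visits in a day to one row, and streamstats then counts active days inside each trailing 7-day window without spawning a subsearch per interval.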
I'm trying to understand the differences between event indexes and metric indexes in terms of how they handle storage and indexing. I have a general understanding of how event indexes work based on this document, but the documentation on metrics seems limited. Specifically, I'm curious about:

- How storage and indexing differ for event indexes vs. metric indexes under the hood.
- Why high cardinality is a bigger concern for metric indexes compared to event indexes.

I understand from this glossary entry that metric time series (MTS) are central to how metrics work, but I'd appreciate a more in-depth explanation of the inner workings and trade-offs involved. Additionally, if I have a dimension with a unique ID, would it be better to use an event index instead of a metric index? If anyone could shed light on this or point me toward relevant resources, that would be great!
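To make the cardinality concern concrete: each distinct combination of dimension values defines its own MTS, so a unique-ID dimension mints a new series for nearly every datapoint. A rough way to eyeball this on a metrics index (the index and dimension names here are hypothetical):

| mcatalog values(metric_name) WHERE index=my_metrics BY unique_id
| stats count AS approx_mts

The BY clause fans out into one row per unique_id value, so the final count approximates how many series the catalog has to track.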
@marnall thanks for the suggestion, it worked!

| metadata type=sourcetypes
| search sourcetype=something*
| eval "LastSeen"=now()-lastTime
| rename lastTime as "LastEvent"
| fieldformat "LastEvent"=strftime(LastEvent, "%c")
| eval DaysBehind=round((LastSeen/86400))
| table sourcetype LastEvent DaysBehind
Yeah, good point - let me try something around your suggestion. I'm not a Splunk admin, so I don't know much about audit/admin techniques; any suggestion and advice is highly respected and appreciated in advance!
Note that the lastTime field contains the timestamp of the latest event seen, while the recentTime field contains the indextime of the latest event. If the log was indexed soon after being generated, then these times will be close together. If a log was generated on Dec 11 but indexed today, then the lastTime and recentTime values will be different. Ref: https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Metadata You should think about how you want your search to handle historical data. Perhaps you want this search to filter to sourcetypes where you expect data to come in every day, and then you can filter out sourcetypes which contain data indexed recently but are timestamped a long time ago. The highlighted sourcetype only has 3 events which makes me think that it does not have fresh data every day.
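Building on that, one sketch of such a filter (the thresholds are examples only: keep sourcetypes that indexed something in the last day but whose newest event timestamp is over a week old, i.e. likely historical backfill):

| metadata type=sourcetypes
| where recentTime >= relative_time(now(), "-1d") AND lastTime <= relative_time(now(), "-7d")
| fieldformat lastTime=strftime(lastTime, "%c")
| fieldformat recentTime=strftime(recentTime, "%c")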
I did attach the query I tried and screenshots of how I used makeresults and how the JSON files look. Basically, I would like to compare today's 95th percentile with the previous day's (or some other day's) 95th percentile to check for deviation. Also, this JSON file was generated by JMeter from a JTL file. Please let me know if you know any way to generate the report in Splunk using a JTL file.

index=jenkins_artifact source="job/V8_JMeter_Load_Test_STAGE_Pipeline/*/src/TestResults/*/JMeter/RUN2/statistics.json"
| spath
| eval date = strftime(_time, "%m-%d %k:%M")
| eval "Transaction Name"=mvindex(split(transaction,"."),0)
| eval pct2ResTime = round(pct2ResTime)
| untable date "Transaction Name" pct2ResTime
| xyseries "Transaction Name" date pct2ResTime
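In case it helps, a day-over-day deviation sketch, under the assumption that pct2ResTime is the 95th-percentile field in your statistics.json (verify against your JMeter report configuration):

index=jenkins_artifact source="job/V8_JMeter_Load_Test_STAGE_Pipeline/*/src/TestResults/*/JMeter/RUN2/statistics.json"
| spath
| eval transaction_name=mvindex(split(transaction, "."), 0)
| bin _time span=1d
| stats max(pct2ResTime) as p95 by _time transaction_name
| sort 0 transaction_name _time
| streamstats current=f window=1 last(p95) as prev_p95 by transaction_name
| eval deviation_pct=round((p95 - prev_p95) / prev_p95 * 100, 1)

The streamstats call carries each transaction's previous day's p95 forward, so deviation_pct shows the day-over-day change per transaction.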
Because of the results in the screenshot above!
What makes you think it may be the wrong algorithm for your use case?
Splunkers, I thought I had a search to detect and alert when a sourcetype doesn't send logs, but I found out that I may have the wrong algorithm:

| metadata type=sourcetypes
| search sourcetype=something*
| eval "LastSeen"=now()-recentTime
| rename lastTime as "LastEvent"
| fieldformat "LastEvent"=strftime(LastEvent, "%c")
| eval DaysBehind=round((LastSeen/86400))
| table sourcetype LastEvent LastSeen recentTime DaysBehind
You can't use wildcards. So, assuming you have a single log and your name is OK (I don't use Azure stuff so I can't verify the actual name of the channel, but it looks reasonable), the first syntax should be fine. You can use splunk list inputstatus to see how your inputs are doing. Check splunkd.log on the forwarder as well. Does your UF have enough permissions to read that channel?
Yes - search for "_intel" under Lookup Definitions and you will see all the threat intel lookups along with their definitions. All lookups from a given category get combined/merged and used for threat matching. For example, everything related to IP will fall under the ip_intel lookup. Please hit Karma if this helps!
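If you want to see what actually landed in one of the merged collections, a quick peek (assuming your role can read the ES threat intel collections):

| inputlookup ip_intel
| head 10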
I assume that this accepted answer is correct: https://community.splunk.com/t5/Splunk-Enterprise-Security/How-to-use-the-threat-feed-I-added-using-threat-intelligence/m-p/234794

So like this:

| `service_intel`
| `process_intel`
| `file_intel`
| `registry_intel`
| `user_intel`
| `email_intel`
| `certificate_intel`
| `ip_intel`
@shiba wrote:

    Security risk warning: Found an empty value for 'allowedDomainList' in the alert_actions.conf configuration file. If you do not configure this setting, then users can send email alerts with search results to any domain. You can add values for 'allowedDomainList' either in the alert_actions.conf file or in Server Settings > Email Settings > Email Domains in Splunk Web.

As already explained, this warning matters only if you care about where alert emails can be sent.

    Failed to start KV Store process. See mongod.log and splunkd.log for details. 2024/12/25 11:26:57
    KV Store changed status to failed. KVStore process terminated. 2024/12/25 11:26:56
    KV Store process terminated abnormally (exit code 14, status PID 2757 exited with code 14). See mongod.log and splunkd.log for details. 2024/12/25 11:26:56

These messages definitely are a problem on a search head, but not on an indexer. Consult mongod.log for details about the problem and fix what is reported. For indexers, turn off KVStore by adding the following to server.conf:

[kvstore]
disabled = true
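After fixing whatever mongod.log reports on the search head, you can confirm the KV store recovered with the CLI:

splunk show kvstore-status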