All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


It appears you have multiple stats for the same transaction in the event. Try using mvdedup:

| spath
| eval date=strftime(_time,"%m-%d %k:%M")
| table date *.pct2ResTime
| foreach *.pct2ResTime
    [| eval <<FIELD>> = mvdedup('<<FIELD>>')]
| untable date transaction pct2ResTime
| eval "Transaction Name"=mvindex(split(transaction,"."),0)
| xyseries "Transaction Name" date pct2ResTime
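For anyone unfamiliar with mvdedup, its order-preserving behavior can be sketched outside Splunk. A minimal Python analogue (the function name and sample values are mine, not a Splunk API):

```python
def mvdedup(values):
    """Order-preserving dedup of a multivalue field, analogous to SPL's mvdedup()."""
    seen = set()
    out = []
    for v in values:
        if v not in seen:
            seen.add(v)
            out.append(v)
    return out

# repeated stats for the same transaction collapse to one value each
deduped = mvdedup(["120", "120", "340", "120"])  # -> ['120', '340']
```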
This seems to be different from your previous description. Counting is one thing, listing sessions is another. Furthermore, we don't know your data.
Thank you, but the client wants to obtain these dimensions every 7 days, with approximately 1200 result sets. The output needs to include: start time, end time, username, department, number of days visited, multivalue query time, and alarm time.
You're overcomplicating your search. If you want to calculate how many days during a week your users connected to a service, there are probably several ways to go about it. The easiest and most straightforward would be to give all visits during the same day the same timestamp (the alternative would be to use strftime):

| bin _time span=1d

Now you need to count the distinct days in each week for each user. Note that stats does not accept a span option, so bin the week into its own field first:

| bin _time span=1d
| bin _time as week span=1w
| stats dc(_time) by user week

(the alternative is the timechart command).
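The bin-then-distinct-count idea can be sketched in Python; the visit timestamps below are invented sample data, assuming epoch seconds for a single user:

```python
from datetime import datetime, timezone

# hypothetical visit timestamps (epoch seconds) for a single user
visits = [1700000000, 1700003600, 1700086400, 1700700000]

# "bin span=1d": collapse every visit to its calendar day
days = {datetime.fromtimestamp(t, tz=timezone.utc).date() for t in visits}

# distinct visit-days per ISO week, mirroring dc(_time) by user per week
per_week = {}
for d in sorted(days):
    ic = d.isocalendar()
    week = (ic[0], ic[1])  # (ISO year, ISO week number)
    per_week[week] = per_week.get(week, 0) + 1
```

Two visits on the same day count once, so the first two timestamps above contribute a single day to their week's count.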
More words please. What do you want to achieve and why?
Hi, can you try the following regex:

Rule:\s(?P<Rule>(.*?)(?=,\d+))

It uses a positive lookahead (?=) and captures everything until it finds a "," followed by a digit. If the end of the rule is always followed by a digit, this will work. Keep in mind that if a word appears instead of a digit at the end of the rule, this will not work. Please try it, and if it works an upvote is appreciated.
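As a quick sanity check, the lookahead behaves like this in Python (the sample log line is invented for illustration):

```python
import re

# hypothetical line: rule name, then a comma and a numeric field
line = "Rule: Block outbound SMTP,40123,allow"

# capture lazily until a comma followed by a digit is seen (without consuming it)
m = re.search(r"Rule:\s(?P<Rule>(.*?)(?=,\d+))", line)
rule = m.group("Rule")  # -> "Block outbound SMTP"
```

If the comma is followed by a word rather than a digit, the lookahead never succeeds and the match fails, which is the caveat noted above.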
What happens if a Splunk SOAR license expires? I cannot find a document that explains it.
Thank you, I forgot the code format since I use it less often.
The SPL I provided does run without problems in the production environment. I want to produce statistics for the interval from 2019 to the present, where multiple visits by a user in one day count as a single visit, and calculate the number of consecutive days each user visited within every 7-day interval since 2019.
In addition to @ITWhisperer's comments, there is an alternative way to set/unset pairs of tokens using the <eval> token mechanism, i.e.

<drilldown>
  <eval token="ModuleA">if($click.value2$="AAA", "true", null())</eval>
  <eval token="ModuleOther">if($click.value2$="AAA", null(), $trellis.value$)</eval>
</drilldown>

I prefer this mechanism over <condition>, where null() is equivalent to unsetting a token. It avoids the &quot; usage and keeps the number of lines down. Again, click.value2 may not be the right one.
Can you summarise what you are trying to do? Your SPL contains some errors, e.g. using info_min_time in your mvrange() eval statement, where it does not exist, and the fact that you have maxsearches set to half a million indicates you're going about this the wrong way. Describe the problem you are trying to solve, your inputs, and your expected outputs.
Can you edit and format your SPL as a code block using the </> symbol in the Body menu? It makes long SPL far easier to digest.
Being completely new to this: our SMTP servers gathered data completely before using the SMTP Add-on. My Domain admin now wants me to start ingesting D:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\.

So I have deployed TA-Exchange-Mailbox from the TA-Exchange App download from Splunkbase. I also deployed TA-exchange-SMTP. The TA-exchange-smtp local/inputs.conf file looks like this; I only made a couple of changes in the path:

[monitor://D:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\Edge\ProtocolLog\...\*]
index = smtp
sourcetype = exchange:smtp

I added this one after install:

[monitor://D:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking\*]
index = smtp
sourcetype = MSExch2019:Tracking

So I am not 100% sure this is correct. For the TA-Exchange-Mailbox, I have 3 stanzas based upon info from previous messages on this forum:

[monitor://D:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\MessageTracking]
whitelist = \.log$|\.LOG$
time_before_close = 0
sourcetype = MSExchange:2019:MessageTracking
queue = parsingQueue
index = smtp
disabled = 0

[monitor://D:\Exchange Server\TransportRoles\Logs\*\ProtocolLog\SmtpReceive]
whitelist = \.log$|\.LOG$
time_before_close = 0
sourcetype = MSExchange:2019:SmtpReceive
queue = parsingQueue
index = smtp
disabled = false

[monitor://D:\Exchange Server\TransportRoles\Logs\*\ProtocolLog\SmtpSend]
whitelist = \.log$|\.LOG$
time_before_close = 0
sourcetype = MSExchange:2019:SmtpSend
queue = parsingQueue
index = smtp
disabled = false

Again, I know nothing about this level of data gathering, so I'm hoping one of you who does will be able to guide me in the right direction so that I can begin ingesting.
Hello teachers, I have encountered an SPL statement that runs into the limitations of the map command. Currently the statistical results are inaccurate because of data loss. Could you please advise whether any SPL functions can replace map to achieve this? The SPL is as follows:

index=edwapp sourcetype=ygttest is_cont_sens_acct="是"
| stats earliest(_time) as earliest_time latest(_time) as latest_time
| addinfo
| table info_min_time info_max_time earliest_time latest_time
| eval earliest_time=strftime(earliest_time,"%F 00:00:00")
| eval earliest_time=strptime(earliest_time,"%F %T")
| eval earliest_time=round(earliest_time)
| eval searchEarliestTime2=if(info_min_time == "0.000", earliest_time, info_min_time)
| eval searchLatestTime2=if(info_max_time="+Infinity", relative_time(latest_time,"+1d"), info_max_time)
| eval start=mvrange(searchEarliestTime2, searchLatestTime2, "1d")
| mvexpand start
| eval end=relative_time(start,"+7d")
| where end <= searchLatestTime2
| eval end=round(end)
| eval a=strftime(start, "%F")
| eval b=strftime(end, "%F")
| fields start a end b
| map search="search earliest=\"$start$\" latest=\"$end$\" index=edwapp sourcetype=ygttest is_cont_sens_acct=\"是\"
    | dedup day oprt_user_name blng_dept_name oprt_user_acct
    | stats count as \"fwcishu\" by day oprt_user_name blng_dept_name oprt_user_acct
    | eval a=$a$
    | eval b=$b$
    | stats count as \"day_count\", values(day) as \"qdate\", max(day) as \"alert_date\" by a b oprt_user_name, oprt_user_acct" maxsearches=500000
| where day_count > 2
| eval alert_date=strptime(alert_date,"%F")
| eval alert_date=relative_time(alert_date,"+1d")
| eval alert_date=strftime(alert_date, "%F")
| table a b oprt_user_name oprt_user_acct day_count qdate alert_date

I want to implement statistical analysis of data from 2019 to the present, where a user visiting multiple times in a day counts as one visit, to calculate the number of consecutive days users visited within each 7-day interval since 2019.
I'm trying to understand the differences between event indexes and metric indexes in terms of how they handle storage and indexing. I have a general understanding of how event indexes work based on this document, but the documentation on metrics seems limited. Specifically, I'm curious about: How storage and indexing differ for event indexes vs. metric indexes under the hood. Why high cardinality is a bigger concern for metric indexes compared to event indexes. I understand from this glossary entry that metric time series (MTS) are central to how metrics work, but I'd appreciate a more in-depth explanation on the inner workings and trade-offs involved. Additionally, if I have a dimension with a unique ID, would it be better to use an event index instead of a metric index? If anyone could shed light on this or point me toward relevant resources, that would be great!
@marnall thanks for the suggestion, it worked!

| metadata type=sourcetypes
| search sourcetype=something*
| eval "LastSeen"=now()-lastTime
| rename lastTime as "LastEvent"
| fieldformat "LastEvent"=strftime(LastEvent, "%c")
| eval DaysBehind=round(LastSeen/86400)
| table sourcetype LastEvent DaysBehind
Yeah, good point, let me try something around your suggestion. I'm not a Splunk admin so I don't know much about audit/admin techniques; any suggestion and advice is highly respected and appreciated in advance!
Note that the lastTime field contains the timestamp of the latest event seen, while the recentTime field contains the indextime of the latest event. If the log was indexed soon after being generated, then these times will be close together. If a log was generated on Dec 11 but indexed today, then the lastTime and recentTime values will be different. Ref: https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Metadata You should think about how you want your search to handle historical data. Perhaps you want this search to filter to sourcetypes where you expect data to come in every day, and then you can filter out sourcetypes which contain data indexed recently but are timestamped a long time ago. The highlighted sourcetype only has 3 events which makes me think that it does not have fresh data every day.
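That filtering idea can be sketched in Python over hypothetical metadata rows; the field names mirror the metadata command's output, but the sample rows and the one-day threshold are my assumptions:

```python
import time

now = time.time()

# hypothetical rows from | metadata type=sourcetypes
rows = [
    # fresh data: newest event timestamp is recent
    {"sourcetype": "fresh_st", "lastTime": now - 3600, "recentTime": now - 60},
    # indexed recently, but the events themselves are months old
    {"sourcetype": "stale_st", "lastTime": now - 90 * 86400, "recentTime": now - 60},
]

ONE_DAY = 86400
# keep only sourcetypes whose newest event is timestamped within the last day
fresh = [r["sourcetype"] for r in rows if now - r["lastTime"] <= ONE_DAY]
```

Both rows have a recent recentTime (both were indexed moments ago), so filtering on lastTime is what separates genuinely fresh data from a backfill of old logs.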
I did attach the query I tried and screenshots of how I used makeresults and what the JSON files look like. Basically, I would like to compare today's 95th percentile with the previous day's (or some other day's) 95th percentile to check for deviation. Also, this JSON file has been generated by JMeter from a jtl file. Please let me know if you know any way to generate the report in Splunk using a jtl file.

index=jenkins_artifact source="job/V8_JMeter_Load_Test_STAGE_Pipeline/*/src/TestResults/*/JMeter/RUN2/statistics.json"
| spath
| eval date = strftime(_time, "%m-%d %k:%M")
| eval "Transaction Name"=mvindex(split(transaction,"."),0)
| eval pct2ResTime = round(pct2ResTime)
| untable date "Transaction Name" pct2ResTime
| xyseries "Transaction Name" date pct2ResTime
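Independent of how the statistics.json is ingested, the deviation check itself reduces to comparing two 95th-percentile values. A Python sketch with invented response-time samples:

```python
import statistics

# hypothetical response times (ms) for one transaction, two different days
today = [120, 135, 150, 180, 200, 450, 500]
yesterday = [110, 125, 140, 160, 190, 300, 320]

def p95(samples):
    # quantiles(n=20, method="inclusive"): the 19th cut point is the 95th percentile
    return statistics.quantiles(samples, n=20, method="inclusive")[18]

# positive means today is slower than yesterday at the 95th percentile
deviation_pct = (p95(today) - p95(yesterday)) / p95(yesterday) * 100
```

An alert could then fire when deviation_pct exceeds some tolerance, which is the same shape as comparing two pct2ResTime columns in Splunk.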