All Posts

Hi @kozanic_FF, did you solve the problem?
Thanks for that, but I need it as a rex command, not just a regex.
Hi @ITWhisperer, this is the search I have used:

index="xxx" source="*yyy"
| eval id=mvindex(split(source,"/"),5)
| reverse
| table id _raw
| rex field=_raw "(?<timestamp>[^|]+)\|(?<PID>[^|]+)"
| table id timestamp PID
| eval _time=strptime(timestamp,"%Y-%m-%d %H:%M:%S.%4N")
| table id _time PID
| sort 0 id _time
| streamstats count as s_no by id
| table id _time s_no PID
Thanks for the response. @Ryan.Paredez 
https://regex101.com/ https://www.regexbuddy.com/  
Hi @camellia, you need to configure these on the forwarder, not on the indexer servers. Also, KV_MODE = json is a search-time configuration, not an index-time configuration. Set INDEXED_EXTRACTIONS = JSON for your sourcetype in props.conf, and deploy props.conf and transforms.conf to your forwarder:

[itsd]
DATETIME_CONFIG = CURRENT
INDEXED_EXTRACTIONS = JSON
LINE_BREAKER = ([\r\n]+)
category = Structured
disabled = false
pulldown_type = true
TRANSFORMS-null1 = replace_null
TRANSFORMS-null2 = replace_null1
Hi @lucky, try this:

| rex "\-(PUT|GET|POST|DELETE)(?<url>[\/A-z]+).*Responsecode=(?<ResponseCode>\d+)"

Sample query:

| makeresults
| eval _raw="message: INFO [nio-8443-exce-8] b. b. b.filter.loggingvontextfilter c.c.c.c.l.cc.f.loggingcintextfil=ter.post process(Loggingcintextfilter.java\"201)-PUT/actatarr/halt/liveness||||||||||||METRIC|--|Responsecode=400|Response Time=0"
| rex "\-(PUT|GET|POST|DELETE)(?<url>[\/A-z]+).*Responsecode=(?<ResponseCode>\d+)"
How do I download the rex debugging command cheat sheet? Please provide a link.
Thank you. I randomly ran it for a few buckets and observed the following message:

Moving bucket='rb_1681312487_1677890027_1731_FBA51F26-2043-4798-B18D-2D637A7347B9', initiating warm_to_cold: from='/Data/splunkdb/o365/db' to='/Data/splunkdb/o365/colddb', caller='chillIfNeeded', reason='maximum number of warm buckets exceeded'.

I'm not sure whether this could be affecting the data retention period. Initially I had "maxHotBuckets = 10" defined, but it's no longer defined and I've left it at the default value.

[test]
coldPath = volume:primary/test/colddb
homePath = volume:primary/test/db
thawedPath = $SPLUNK_DB/test/thaweddb
maxTotalDataSizeMB = 512000
frozenTimePeriodInSecs = 39420043
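For reference, the chillIfNeeded message refers to the warm bucket count limit, which is controlled by maxWarmDBCount in indexes.conf (default 300). A minimal sketch of how that could be raised for this index, assuming you actually want buckets to stay warm longer (the value 600 below is purely illustrative):

[test]
coldPath = volume:primary/test/colddb
homePath = volume:primary/test/db
thawedPath = $SPLUNK_DB/test/thaweddb
maxTotalDataSizeMB = 512000
frozenTimePeriodInSecs = 39420043
# keep up to 600 warm buckets before rolling to cold (default is 300; illustrative value)
maxWarmDBCount = 600

Note that rolling a bucket from warm to cold does not by itself delete data; retention is still governed by frozenTimePeriodInSecs and maxTotalDataSizeMB.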
You can use the following on the search head:

index=_internal source=*license_usage.log type=Usage pool=*
| eval _time=strftime(_time,"%m-%d-%y")
| stats sum(b) as ub by _time
| eval ub=round(ub/1024/1024/1024,3)
| eval _time=strptime(_time,"%m-%d-%y")
| sort _time
| eval _time=strftime(_time,"%m-%d-%y")
| rename _time as Date ub as "Daily License Quota Used"

You can define the "Date Range" to get daily usage.
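A slightly simpler sketch of the same idea, still reading from the _internal license_usage.log source, that bins events by day instead of converting _time back and forth with strftime/strptime (the output field name is illustrative):

index=_internal source=*license_usage.log type=Usage pool=*
``` group events into one-day buckets ```
| bin _time span=1d
``` sum the bytes (b) per day and convert to GB ```
| stats sum(b) as bytes by _time
| eval "Daily License Quota Used"=round(bytes/1024/1024/1024,3)
| fields _time "Daily License Quota Used"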
Hi team, I need to extract the following new fields using rex from the raw data below:
1. ResponseCode
2. url

message: INFO [nio-8443-exce-8] b. b. b.filter.loggingvontextfilter c.c.c.c.l.cc.f.loggingcintextfil=ter.post process(Loggingcintextfilter.java"201)-PUT/actatarr/halt/liveness||||||||||||METRIC|--|Responsecode=400|Response Time=0
Hi, I am new to Splunk. I set up a single-site cluster to parse a JSON-formatted log. I placed the props.conf and transforms.conf configuration files on the cluster manager under /opt/splunk/etc/manager-apps/_cluster/local, and they were pushed to the indexers under /opt/splunk/etc/peer-apps/_cluster/local. However, when I search on the search head, I do not get the desired result.

props.conf

[itsd]
DATETIME_CONFIG = CURRENT
KV_MODE = json
LINE_BREAKER = ([\r\n]+)
category = Structured
disabled = false
pulldown_type = true
TRANSFORMS-null1 = replace_null
TRANSFORMS-null2 = replace_null1

transforms.conf

[replace_null]
REGEX = ^\[
DEST_KEY = queue
FORMAT = nullQueue

[replace_null1]
REGEX = (.*)(\}\s?\})
DEST_KEY = _raw
FORMAT = $1$2
Transaction may be your friend.

index=ee_rpa_uipath_platform_* AND OrganizationUnitID IN ($folder$)
```| sort OrganizationUnitID, RobotName, _time, Message```
| eval robotmessage = OrganizationUnitID . ":" . RobotName . ":" . Message
| transaction robotmessage maxevents=3
| where closed_txn=true AND eventcount > 2

About the commented-out sort: because your end goal will always be some kind of table grouped by OrganizationUnitID and RobotName, there is no point in sorting by these two fields early. If your events come in "naturally", you most likely do not need to sort by _time either.
There is something wrong with the later part of the SPL:

... | eval time=now()
| sort 0 - time
| fields date, desc, sli, slo, burnrate, timestamp, averageDuration
| outputlookup lkp_wms_print_slislo1.csv append=true override_if_empty=true
| where time > relative_time(now(), "-2d@d") OR isnull(time)

You are saying the new field time is the current time of the search (now()), so all events get a field 'time' with the same value, and then you sort all events on it, which produces no sort. Then you discard that new field with the fields statement, and then you use the time field that no longer exists to check whether it's more than 2 days old. It never will be, as you just set it to now() above and then threw it away, so the where clause 'isnull(time)' will ALWAYS be true. You have date and timestamp in your data, which is also 'now()'. If you want to discard the old entries here, you could do something like

| fields date, desc, sli, slo, burnrate, timestamp, averageDuration
| inputlookup lkp_wms_print_slislo1.csv append=true
| where timestamp > relative_time(now(), "-28d@d")
| outputlookup lkp_wms_print_slislo1.csv

so instead of appending to the existing lookup, you load the entire lookup, filter out timestamps older than 28 days, and then write the entire dataset back. Note that this does not handle duplicates if you run the search more than once, so you'd have to handle that if you need to.
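If duplicates do become an issue, a minimal sketch of one way to handle them, assuming that date plus timestamp together uniquely identify a row (adjust the dedup field list to whatever combination is actually unique in your data):

| fields date, desc, sli, slo, burnrate, timestamp, averageDuration
| inputlookup lkp_wms_print_slislo1.csv append=true
``` keep only one row per date/timestamp pair; assumes that pair identifies a unique entry ```
| dedup date timestamp
| where timestamp > relative_time(now(), "-28d@d")
| outputlookup lkp_wms_print_slislo1.csv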
So, if your filter criteria also include org and robot name, you can add those to the "BY" clause in the streamstats. You may not need sort if you are also splitting by org+robot, as reset_on_change will reset only when org+robot+message changes.
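A minimal sketch of that streamstats approach, reusing the index, token, and field names from the transaction example earlier in the thread (the threshold of 3 consecutive events is an assumption carried over from maxevents=3):

index=ee_rpa_uipath_platform_* AND OrganizationUnitID IN ($folder$)
``` count consecutive events that share the same org, robot, and message; the count resets whenever any of them changes ```
| streamstats count as consecutive reset_on_change=true by OrganizationUnitID RobotName Message
``` keep runs of at least 3 identical messages ```
| where consecutive >= 3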
@bestSplunker You can see a working example with your data by copying/pasting this to your search window.

| makeresults
| eval data=split(replace("_time=2022-12-01T10:00:01.000Z, account_id=1, query user infomation.
_time=2022-12-01T10:00:02.000Z, account_id=2, query user infomation.
_time=2022-12-01T10:00:03.000Z, account_id=1, query user infomation.
_time=2022-12-01T10:00:07.000Z, account_id=2, query user infomation.
_time=2022-12-01T10:00:09.000Z, account_id=1, query user infomation.
_time=2022-12-01T10:00:11.000Z, account_id=2, query user infomation.
_time=2022-12-01T10:00:12.000Z, account_id=2, query user infomation.
_time=2022-12-01T10:00:13.000Z, account_id=2, query user infomation.
_time=2022-12-01T10:00:14.000Z, account_id=2, query user infomation.
_time=2022-12-01T10:00:22.000Z, account_id=2, query user infomation.
_time=2022-12-01T10:01:27.000Z, account_id=3, query user infomation.
_time=2022-12-01T10:00:27.000Z, account_id=2, query user infomation.
_time=2022-12-01T10:00:30.000Z, account_id=2, query user infomation.
_time=2022-12-01T10:00:33.000Z, account_id=2, query user infomation.
_time=2022-12-01T10:00:34.000Z, account_id=2, query user infomation.
_time=2022-12-01T10:00:36.000Z, account_id=2, query user infomation.
_time=2022-12-01T10:01:37.000Z, account_id=3, query user infomation.
_time=2022-12-01T10:01:39.000Z, account_id=1, query user infomation.
_time=2022-12-01T10:01:45.000Z, account_id=3, query user infomation.
_time=2022-12-01T10:01:47.000Z, account_id=3, query user infomation.
_time=2022-12-01T10:01:55.000Z, account_id=3, query user infomation.
_time=2022-12-01T10:01:59.000Z, account_id=3, query user infomation.", "\n", "###"), "###")
| mvexpand data
| rex field=data "_time=(?<t>\d+-\d+-\d+T\d+:\d+:\d+\.\d+Z), account_id=(?<account_id>\d+),"
| eval _time=strptime(t, "%FT%T.%QZ")
| table _time account_id
``` Above is just your example data setup ```
``` Use streamstats to calculate the event count and gap for each account ```
| streamstats c window=2 global=f range(_time) as gap by account_id
``` Remove the first event, so it doesn't get used in the gap average calculation ```
| where c=2
``` Now calculate the average and total span and gap count ```
| stats sum(gap) as span count as gap_count avg(gap) as avg_frequency by account_id
| where avg_frequency<5
NB: There are a couple of mistakes in your interval calculations, e.g. account 1 is 2 + 6 + 90, not 30. Anyway, you can simply use streamstats to get the gap and then average that out, i.e.

... your_search ...
| streamstats c window=2 global=f range(_time) as gap by account_id
| where c=2
| stats avg(gap) as avg_frequency by account_id
| where avg_frequency<5

streamstats will count the number (c) of events seen in its gap calculation and then take the range of the time values to create a field 'gap' by the account id.

where c=2 removes the first event for each account id, as its gap will always be 0; we want to use a gap count rather than an event count to calculate the average.

Then just use stats/where to filter out those less than 5.

You could of course use stdev and calculate outliers from the norm rather than using a fixed 5 second gap, as that would be more flexible if traffic changes; a sketch of that follows below.

Please avoid using transaction - it is not intended for this purpose and has memory limitations that can cause it to simply ignore certain events when you detect transactions over a long period of time.
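As a rough sketch of that stdev idea, you could flag accounts whose average gap is unusually small compared to the other accounts instead of hard-coding 5 seconds (the 2-sigma threshold below is arbitrary and just for illustration):

... your_search ...
| streamstats window=2 global=f count as c range(_time) as gap by account_id
| where c=2
| stats avg(gap) as avg_frequency by account_id
``` compare each account's average gap against the distribution across all accounts ```
| eventstats avg(avg_frequency) as overall_avg stdev(avg_frequency) as overall_sd
| where avg_frequency < overall_avg - 2 * overall_sd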
It seems like this has been a problem for some time, e.g. https://community.splunk.com/t5/Knowledge-Management/Why-are-collected-events-in-a-summary-index-losing-milliseconds/m-p/163098

I generally avoid using the summary indexing option in the scheduled search, but instead use the collect statement directly in the SPL and format the _raw field I want, as _time is also a bit strange with the collect command. You need to have a _raw with the _time value set in there to make it work well, e.g.

``` Your search ... ```
| fields _time field1 field2...
| eval _raw="_time="._time
| foreach "*"
    [| eval _raw=_raw.case(isnull('<<FIELD>>'),"", true(), ", <<FIELD>>=\"".'<<FIELD>>'."\"")
    | fields - "<<FIELD>>" ]
| collect index=your_summary_index addtime=f
I would like a search query that would display a graph with the number of closed notables divided by urgency in the last 12 hours, but the notables need to be retrieved based on the time they were closed. I'm using this search:

| inputlookup append=T incident_review_lookup
| rename user as reviewer
| `get_realname(owner)`
| `get_realname(reviewer)`
| eval nullstatus=if(isnull(status),"true","false")
| `get_reviewstatuses`
| eval status=if((isnull(status) OR isnull(status_label)) AND nullstatus=="false",0,status)
| eval status_label=if(isnull(status_label) AND nullstatus=="false","Unassigned",status_label)
| eval status_description=if(isnull(status_description) AND nullstatus=="false","unknown",status_description)
| eval _time=time
| `uitime(time)`
| fields - nullstatus

What's wrong?
Hi, I'm facing the same problem. Do you have any idea or workaround to solve it? Regards, Ruli