Yes, eventstats can indeed sometimes be used when you need to retain the original events, but remember that eventstats is a "heavier" command than plain stats: it has to keep all the events and append the summarized data to each of them, so it potentially needs far more resources than a simple stats, and it also doesn't distribute as well.
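As a minimal illustration of the difference (the index, sourcetype, and field names here are made up for the example):

```
index=web sourcetype=access_combined
| eventstats avg(bytes) as avg_bytes by host
| where bytes > 2 * avg_bytes
```

A plain stats would collapse everything down to one row per host; eventstats keeps every original event and just adds avg_bytes to each one, which is exactly why it costs more.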
Hi,

I have multiple events with the following JSON object.

{
  "timeStamp": "2024-02-29T10:00:00.673Z",
  "collectionIntervalInMinutes": "1",
  "node": "plgiasrtfing001",
  "inboundErrorSummary": [
    { "name": "400BadRequestMalformedHeader", "value": 1 },
    { "name": "501NotImplementedMethod", "value": 2 },
    { "name": "otherErrorResponses", "value": 1 }
  ]
}

I am trying to extract the name/value pairs from the inboundErrorSummary array and display the sum of the values for each name, plotted by time. So the output should be something like:

Date                 400BadRequestMalformedHeader  501NotImplementedMethod  otherErrorResponses
2024-02-29T10:00:00  1                             2                        1
2024-02-29T11:00:00  10                            40                       50

Even a total count of each name field would also work. I am quite new to Splunk queries, so I hope someone can help and also explain the steps of how it's done. Thanks in advance.
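One way to approach this, assuming the events are valid JSON as shown (index and sourcetype are placeholders you would replace with your own), is to expand the inboundErrorSummary array with spath and mvexpand, then timechart the values:

```
index=your_index sourcetype=your_sourcetype
| spath path=inboundErrorSummary{} output=errors
| mvexpand errors
| spath input=errors path=name output=error_name
| spath input=errors path=value output=error_value
| eval error_value = tonumber(error_value)
| timechart span=1h sum(error_value) by error_name
```

For just a total count per name, replace the timechart line with | stats sum(error_value) by error_name. This is a sketch against the sample event, not a tested answer for your environment.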
At least in the past I have had issues using [default]. The end result was that I had to move those settings into the actual sourcetype definition, otherwise they didn't take effect as I was hoping. Also, CLONE_SOURCETYPE has some caveats when you want to manipulate it. I think @PickleRick had a case last autumn about this, where we tried to solve the same kind of situation?
Hi @isoutamo, yes I have some CLONE_SOURCETYPE, but I applied the transformation in props.conf in the default stanza:

[default]
TRANSFORMS-abc = fieldname

and this should be applied to all the sourcetypes. Maybe I could try to apply it to the source:

[source::/var/log/remote/*]
TRANSFORMS-abc = fieldname

Ciao. Giuseppe
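If the goal is an indexed field taken from the source path, a source-based props.conf stanza plus a matching transform might look like the sketch below. The transform name follows the post; SOURCE_KEY, FORMAT, and WRITE_META are the standard transforms.conf keys for index-time extractions from metadata, but verify against your own setup:

```
# props.conf
[source::/var/log/remote/*]
TRANSFORMS-abc = fieldname

# transforms.conf
[fieldname]
SOURCE_KEY = MetaData:Source
REGEX = /var/log/remote/([^/]+)/
FORMAT = relay_hostname::$1
WRITE_META = true
```

Note that WRITE_META = true is what makes the result an indexed field rather than being discarded.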
Based on your example data etc., this works:

| makeresults
| eval source="/var/log/remote/abc/def.xyx"
| eval relay_hostname = replace(source, "/var/log/remote/([^/]+)/.*","\1")

So it should work in props.conf as well! Are you absolutely sure that those sourcetype names are correct in your props.conf, and that there is no CLONE_SOURCETYPE etc. which could lead to the wrong path? You should also check that there are no host or source definitions which override that sourcetype definition.
Hi @isoutamo, I tried your solution:

[relay_hostname]
INGEST_EVAL = relay_hostname = replace(source, "(/var/log/remote/)([^/]+)(/.*)","\2")

with no luck. As I said, I suspect that I am trying to extract the new field from the source field before it has been extracted! I also tried a transformation at search time, with the same result. Thank you and ciao. Giuseppe
Post the output of:

| inputlookup testip

If it's too long, post the part with IP 10.10.10.x
Hi, as @jotne has used %z as the time zone information in props.conf, you should also add this to your regex, and then if/when needed use the strptime and strftime functions to convert that field as required. At ingestion time that happens automatically with correct TIME* definitions. r. Ismo
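For instance, assuming the TIME_FORMAT suggested elsewhere in this thread and an extracted field called date (both illustrative), the search-time conversion could look like:

```
| eval t = strptime(date, "%z, %T %a %e%b%y")
| eval date_iso = strftime(t, "%Y-%m-%d %H:%M:%S")
```

strptime turns the string into an epoch value you can do arithmetic on; strftime renders it back in whatever display format you need.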
As I have used replace in those examples, you can use it the same way. In those cases I took some part of source (e.g. yyyymmdd from the file path) and used it as a field value. Basically, replace is one way to use regex in Splunk.
Thank you so much for your great help
I guess the first 9 days of every month have just one digit. This should do:

,\s(\d\d\.\d\d\.\d\d\s\w+\s+\d+\w+\d\d)\s

I added a + behind the space since there may be more than one space.

TIME_FORMAT = %z, %T %a %e%b%y

I changed %d to %e (%e matches a day of month without a leading zero).
Hello @jotne, this is the regex:

| rex field=_raw "(?<date>\s(\d\d\.\d\d\.\d\d\s\w+\s\d+\w+\d\d)\s)"
Really struggling with this one, so looking for a hero to come along with a solution!

I have an index of flight data. Each departing flight has a timestamp for when the pilot calls the control tower to request push back; this field is called ASRT (Actual Start Request Time). Each flight also has a time when it uses the runway; this is called ATOT_ALDT (Actual Take Off Time/Actual Landing Time).

What I need to calculate, for each departing flight, is how many other flights used the runway (had an ATOT_ALDT) between when that flight called up (ASRT) and when it used the runway itself (ATOT_ALDT). This is to work out what the runway queue was like for each departing aircraft.

I have tried using the concurrency command; however, it doesn't return the desired results, as it only counts the flights that started before, not the ones that started after. We may have a situation where an aircraft calls up after another but then departs before it, and concurrency doesn't capture that.

So I found an approach that in theory should work: I ran an eventstats that lists the take off/landing time of every flight, so I can mvexpand that and run an eval across each line. However, the resulting multivalue field is capped at 100 values, and there can be up to 275 flights in the time period I need to check.

Can anyone else think of a way of achieving this? My code is below.

REC_UPD_TM = the time the record was updated (this index uses the flight's scheduled departure time as _time, so we need to find the latest record for each flight)
displayed_flyt_no = the flight number, e.g. EZY1234
DepOrArr = was the flight a departure or an arrival
index=flights
| eval _time = strptime(REC_UPD_TM."Z","%Y-%m-%d %H:%M:%S%Z")
| dedup AODBUniqueField sortby - _time
| fields AODBUniqueField DepOrArr displayed_flyt_no ASRT ATOT_ALDT
| sort ATOT_ALDT
| where isnotnull(ATOT_ALDT)
| eval asrt_epoch = strptime(ASRT,"%Y-%m-%d %H:%M:%S"), runway_epoch = strptime(ATOT_ALDT,"%Y-%m-%d %H:%M:%S")
| table DepOrArr displayed_flyt_no ASRT asrt_epoch ATOT_ALDT runway_epoch
| eventstats list(runway_epoch) as runway_usage
| search DepOrArr="D"
| mvexpand runway_usage
| eval queue = if(runway_usage>asrt_epoch AND runway_usage<runway_epoch,1,0)
| stats sum(queue) as queue by displayed_flyt_no
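One way to side-step the 100-value list() limit is to avoid collecting the runway times into a multivalue field at all: turn each flight into one or two "marker" rows and count runway usages with streamstats. A sketch building on the field names in the query above (untested against real data, so treat it as a starting point):

```
index=flights
| eval _time = strptime(REC_UPD_TM."Z","%Y-%m-%d %H:%M:%S%Z")
| dedup AODBUniqueField sortby - _time
| where isnotnull(ATOT_ALDT)
| eval asrt_epoch = strptime(ASRT,"%Y-%m-%d %H:%M:%S"), runway_epoch = strptime(ATOT_ALDT,"%Y-%m-%d %H:%M:%S")
| eval marks = if(DepOrArr="D", mvappend("asrt:".asrt_epoch, "rwy:".runway_epoch), "rwy:".runway_epoch)
| mvexpand marks
| eval mark_type = mvindex(split(marks,":"),0), mark_time = tonumber(mvindex(split(marks,":"),1))
| sort 0 mark_time
| streamstats count(eval(mark_type="rwy")) as rwy_seen
| stats range(rwy_seen) as spread, values(DepOrArr) as DepOrArr by displayed_flyt_no
| where DepOrArr="D"
| eval queue = spread - 1
```

Each departure contributes two marker rows (its ASRT and its runway time), and every flight contributes its runway row. After sorting all markers chronologically, rwy_seen at a departure's own runway row minus rwy_seen at its ASRT row is the number of runway movements in between, including its own, hence the final - 1. The mvexpand here only ever expands two values per event, so the 100-value limit never comes into play.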
Hi @isoutamo, thank you for your hint, but with INGEST_EVAL I can use an eval function, whereas I need to use a regex to extract a field from another field. The correct way is the first one I used, but there's something wrong and I don't understand what. Maybe the source field isn't extracted yet when I try to extract part of the path with a regex. Ciao. Giuseppe
Hi, it worked fine until February, but for some reason the date is not getting extracted for March. Could you please help here? I want the date extracted for all the months, as the days go by.
Actually, I am looking for a query for a scenario where a few instances on my hosts went down. There were no logs for 2 hours, but after 2 hours the logs were captured again. So if we find no logs coming from a server for the past 30 minutes, it should trigger an alert.
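A common pattern for this kind of "no data in the last N minutes" alert is a tstats search over host metadata; a sketch (the index name is a placeholder), which you would schedule, e.g. every 5 minutes, and set to trigger when it returns results:

```
| tstats latest(_time) as last_seen where index=your_index by host
| eval minutes_since = round((now() - last_seen) / 60)
| where minutes_since > 30
```

Any host listed in the output has been silent for more than 30 minutes. Note that hosts that have never sent data at all won't appear here; covering those usually needs a lookup of expected hosts.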
How about INGEST_EVAL? Here are some examples https://community.splunk.com/t5/Getting-Data-In/How-to-get-props-and-transforms-to-extract-time-from-source/m-p/644598/highlight/true#M109720 https://community.splunk.com/t5/Getting-Data-In/How-to-apply-source-file-date-using-INGEST-as-Time/m-p/596865
How can we colour the text green when status is "running" and red when it is "stopped" for a single value visualization in Dashboard Studio? My code is below:

"ds_B6p8HEE0": {
    "type": "ds.chain",
    "options": {
        "enableSmartSources": true,
        "extend": "ds_JRxFx0K2",
        "query": "| eval status = if(OPEN_MODE=\"READ WRITE\",\"running\",\"stopped\") | stats latest(status)"
    },
    "name": "oracle status"
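For the colouring itself, Dashboard Studio can map a single value's colour from its string value via a matchValue formatter on the visualization (not the data source). A sketch of what the visualization stanza might look like, pointing at the data source above; the viz id and the hex colours are illustrative, and the exact DSL should be checked against the Dashboard Studio docs for your Splunk version:

```
"viz_oracle_status": {
    "type": "splunk.singlevalue",
    "dataSources": { "primary": "ds_B6p8HEE0" },
    "options": {
        "majorColor": "> majorValue | matchValue(statusColors)"
    },
    "context": {
        "statusColors": [
            { "match": "running", "value": "#118832" },
            { "match": "stopped", "value": "#D41F1F" }
        ]
    }
}
```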
Hi @PickleRick and @isoutamo, I also tried to solve the issue at search time, but there are many sourcetypes to associate this field with, so I tried to create a field extraction associated with source=/var/log/remote/*. It still doesn't run, probably because I cannot use the wildcard character in a source for field extractions. Ciao. Giuseppe
@vk2 You can check the document below. The Splunk universal forwarder is compatible with Linux systems running kernel 4.x or higher. If you have kernel 3.x, Splunk supports the platform and architecture but might remove support in a future release.
https://docs.splunk.com/Documentation/Splunk/latest/Installation/Systemrequirements#Confirm_support_for_your_computing_platform