All Posts

You have not shown how Sender_ID has been extracted. Having said that, you may need to re-extract it with a rex command, such as this: | rex "Sender_ID=(?<Sender_ID>.+)\s Receiver_ID"
Before I invest too much time working on a regex, please can you share your events in a code block </>. Also, do your events really have "Part" in them? (Regex matches in patterns and unless the patterns are accurate, the match will not be found.)
@Roy1 Can I question your first "(15 seconds of activity)", where there are only 2 events

12:00:30 x=1600 y=850 z=60 equipmentID=1
12:00:35 x=1600 y=850 z=60 equipmentID=1

and x, y and z are the same, so I am not clear on how you get 15 seconds active. It seems like 5 or 10 seconds. The 135 also seems like it should be 145.

Here is an example using your data that gives you a timechart. NB: it's only done with a single equipmentID, so it needs to be tested with more than one ID.

| makeresults
| fields - _time
| eval d=split(replace("12:00:10 x=1000 y=500 z=300 equipmentID=1 12:00:15 x=1000 y=500 z=300 equipmentID=1 12:00:20 x=1025 y=525 z=275 equipmentID=1 12:00:25 x=1000 y=500 z=300 equipmentID=1 (20 seconds of inactivity) 12:00:30 x=1600 y=850 z=60 equipmentID=1 12:00:35 x=1600 y=850 z=60 equipmentID=1 (15 seconds of activity) 12:03:00 x=1650 y=950 z=300 equipmentID=1 (135 seconds of inactivity) 12:03:05 x=1850 y=500 z=650 equipmentID=1 12:03:10 x=2500 y=950 z=800 equipmentID=1 12:03:15 x=2500 y=950 z=400 equipmentID=1 12:03:20 x=2500 y=950 z=150 equipmentID=1 (15 seconds of activity)", " ","##"),"##")
| mvexpand d
| rename d as _raw
| extract
| rex "(?<time>\d+:\d+:\d+)"
| eval _time=strptime(time, "%H:%M:%S")
| eval description=if(isnull(_time), _raw, null())
| fields - _raw time
| where isnotnull(_time)
| sort _time
``` The above is just data setup ```
| streamstats time_window=10s count range(x) as r_x range(y) as r_y range(z) as r_z by equipmentID
| eval isIdleByCoord=if(r_x<=50 AND r_y<=50 AND r_z<=50, 1, 0)
| streamstats window=2 global=f range(_time) as r by equipmentID
| timechart fixedrange=f sum(eval(if(isIdleByCoord=1, r, null()))) as InActive by equipmentID

isIdleByCoord is calculated in a 10 second window. If there is no activity for a piece of equipment then it will not have a data point, so the 'r' calculation will show the range of the previous point. Let me know if this helps.
Can someone please help with a regex that can be used to view the below event in tabular format?

Event:

INFO > 2024-02-02 16:12:12,222 - [application logs message]: ==============================================
Part 1.    session start is completed
Part 2.    Before app message row count : 9000000
           Before app consolidation row count : 8888800
Part 3.    append message completed
Part 4.    After app message flush row count : 0
           After app message flush row count : 1000000
=================================================

How can we use regex to get the fields from the above event and show them in a table like the below?

parts     message                         count
Part 1    session start is completed
Part 2    Before app message row count    9000000
                                          8888800
Part 3    append message completed
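One possible starting point is a sketch like the following. It assumes each section begins with the literal text "Part <n>." and that counts follow a colon; it has not been validated against real events, so the patterns will likely need adjusting once sample data is shared in a code block.

```spl
| rex max_match=0 "(?<part>Part\s+\d+\.\s+[^=]+?)(?=Part\s+\d|=)"
``` split the event into one multivalue entry per "Part n." section ```
| mvexpand part
| rex field=part "Part\s+(?<parts>\d+)\.\s+(?<message>[^:]+?)\s*(?::|$)"
``` the part number and the text before the first colon (if any) ```
| rex field=part max_match=0 ":\s*(?<count>\d+)"
``` every ": <number>" in the section becomes a multivalue count ```
| table parts message count
```

Sections with several count lines (like Part 2) end up with a multivalue count field, which roughly matches the requested table; whether the row counts and their labels need to stay paired is worth confirming before refining the regex further.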
Using join is not a Splunk way of doing things; generally you would use stats. I'm not entirely clear on what fields exist in which indexes in your example. Does InstanceId exist in the index=main data? That is what you are joining on. From your description it sounds like all you want are those InstanceIds that come from the subsearch, so maybe I'm missing something. If you are looking to find only those ResourceId where ResourceId=InstanceId from your current subsearch, but are also looking for other information, then

index=main ResourceId=* OR (index=other type=instance earliest=-2h)
| eval InstanceId=coalesce(ResourceId, InstanceId)
| stats values(*) as * values(index) as indexes count by InstanceId
| where mvcount(indexes)=2
Need more information - the question doesn't make sense
Use the aligntime parameter to timechart, i.e. | timechart span=30m aligntime=@m...
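For example, to align 30-minute buckets to a start time of 11:34 AM, something like this sketch should work. The `@d+11h+34m` modifier is an assumption based on aligntime's snap-to syntax (it accepts a relative time modifier or an epoch value), so verify it against your data:

```spl
index=your_index
| timechart span=30m aligntime=@d+11h+34m count
``` buckets fall at 11:34, 12:04, 12:34, ... instead of 11:30, 12:00, 12:30 ```
```

An epoch value works too, e.g. `aligntime=1707000000`, if the start time is computed elsewhere (index name and epoch here are placeholders, not from the thread).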
Please share some sample (anonymised) events
Hi, I was facing the same issue. I changed the following under transforms.conf:

[kv_cp_log_format]
REGEX = ([a-zA-Z0-9_-]+)[:=]+([^|]+)

[kv_cp_syslog_log_format]
REGEX = ([a-zA-Z0-9_-]+)[:=]+"((?:[^"\\]|\\.)+)"
Hi, my requirement is to get results in 30-minute buckets using timechart span=30m, starting from a start time that I specify. The start time can be, for example, 11:34 AM, 11:38 AM, or 11:42 AM. Instead, I am getting results in 30-minute intervals aligned to the clock, e.g. 11:30 AM, 12:00 PM, 12:30 PM, 1:00 PM, etc.
To simplify things, I will just follow your initial clue and assume that ID and Name are also part of event.ResourceAttributes.

index=test field1=* field2=*
| spath input=field3
| foreach "event.ResourceAttributes.Name", "event.ResourceAttributes.Resource Name", "event.ResourceAttributes.ID"
    [ | eval type=mvappend(type, if(isnotnull('<<FIELD>>'), '<<FIELD>>', null())) ]
| stats values(type) as "Additional Details" by event.AccountId event.CloudPlatform event.CloudService

If they are in some other nodes, just rewrite the foreach list. Here is a fuller emulation that I made up based on your singular mock data point.

| makeresults
| eval field3 = mvappend(
    "{\"event\": { \"AccountId\": \"xxxxxxxxxx2\", \"CloudPlatform\": \"CloudProvider\", \"CloudService\": \"Service\", \"ResourceAttributes\": {\"key1\": \"value1\", \"ID\": \"value2\", \"key3\": \"value3\", \"key4\": [{\"key\": \"value\", \"key\": \"value\"}], \"Resource Name\": \"name-resource-121sg6fe\", \"etc\": \"etc\"} } }" ``` has ID, Resource Name, no Name ```,
    "{\"event\": { \"AccountId\": \"xxxxxxxxxx1\", \"CloudPlatform\": \"CloudProvider\", \"CloudService\": \"Service\", \"ResourceAttributes\": {\"key1\": \"value1\", \"key2\": \"value2\", \"key3\": \"value3\", \"key4\": [{\"key\": \"value\", \"key\": \"value\"}], \"Resource Name\": \"name-resource-121sg6fe\", \"etc\": \"etc\"} } }" ``` has Resource Name, no others ```,
    "{\"event\": { \"AccountId\": \"xxxxxxxxxx2\", \"CloudPlatform\": \"CloudProvider\", \"CloudService\": \"Service\", \"ResourceAttributes\": {\"Name\": \"value1\", \"key2\": \"value2\", \"ID\": \"value3\", \"key4\": [{\"key\": \"value\", \"key\": \"value\"}], \"etc\": \"etc\"} } }" ``` has ID, Name, no Resource Name ```,
    "{\"event\": { \"AccountId\": \"xxxxxxxxxx1\", \"CloudPlatform\": \"CloudProvider\", \"CloudService\": \"Service\", \"ResourceAttributes\": {\"key1\": \"value1\", \"key2\": \"value2\", \"key3\": \"value3\", \"key4\": [{\"key\": \"value\", \"key\": \"value\"}], \"etc\": \"etc\"} } }" ``` has none of the three ```)
| mvexpand field3
``` the above sort of emulates index=test field1=* field2=* ```
| eval type = json_object()
| spath input=field3
| foreach "event.ResourceAttributes.Name", "event.ResourceAttributes.Resource Name", "event.ResourceAttributes.ID"
    [ | eval type=mvappend(type, if(isnotnull('<<FIELD>>'), '<<FIELD>>', null())) ]
| stats values(type) as "Additional Details" by event.AccountId event.CloudPlatform event.CloudService

What this does is add variations to which of "Name", "Resource Name", and "ID" do or do not appear in each event. You can play with it and compare with real data. The output is

event.AccountId | event.CloudPlatform | event.CloudService | Additional Details
xxxxxxxxxx1 | CloudProvider | Service | name-resource-121sg6fe {}
xxxxxxxxxx2 | CloudProvider | Service | name-resource-121sg6fe value1 value2 value3 {}

One more suggestion: @bowesmana's idea is just to use foreach. The above format does not group the present or missing attributes in a very distinguishable manner. An alternative to using mvappend inside the foreach subsearch is to also carry the input keys in addition to values in "Additional Details". Using a JSON structure is one such method.

index=test field1=* field2=*
| eval type = json_object()
| spath input=field3
| foreach "event.ResourceAttributes.Name", "event.ResourceAttributes.Resource Name", "event.ResourceAttributes.ID"
    [ | eval type=json_set(type, replace("<<FIELD>>", "event.ResourceAttributes.", ""), '<<FIELD>>') ]
| stats values(type) as "Additional Details" by event.AccountId event.CloudPlatform event.CloudService

This is a full emulation:

| makeresults
| eval field3 = mvappend(
    "{\"event\": { \"AccountId\": \"xxxxxxxxxx2\", \"CloudPlatform\": \"CloudProvider\", \"CloudService\": \"Service\", \"ResourceAttributes\": {\"key1\": \"value1\", \"ID\": \"value2\", \"key3\": \"value3\", \"key4\": [{\"key\": \"value\", \"key\": \"value\"}], \"Resource Name\": \"name-resource-121sg6fe\", \"etc\": \"etc\"} } }" ``` has ID, Resource Name, no Name ```,
    "{\"event\": { \"AccountId\": \"xxxxxxxxxx1\", \"CloudPlatform\": \"CloudProvider\", \"CloudService\": \"Service\", \"ResourceAttributes\": {\"key1\": \"value1\", \"key2\": \"value2\", \"key3\": \"value3\", \"key4\": [{\"key\": \"value\", \"key\": \"value\"}], \"Resource Name\": \"name-resource-121sg6fe\", \"etc\": \"etc\"} } }" ``` has Resource Name, no others ```,
    "{\"event\": { \"AccountId\": \"xxxxxxxxxx2\", \"CloudPlatform\": \"CloudProvider\", \"CloudService\": \"Service\", \"ResourceAttributes\": {\"Name\": \"value1\", \"key2\": \"value2\", \"ID\": \"value3\", \"key4\": [{\"key\": \"value\", \"key\": \"value\"}], \"etc\": \"etc\"} } }" ``` has ID, Name, no Resource Name ```,
    "{\"event\": { \"AccountId\": \"xxxxxxxxxx1\", \"CloudPlatform\": \"CloudProvider\", \"CloudService\": \"Service\", \"ResourceAttributes\": {\"key1\": \"value1\", \"key2\": \"value2\", \"key3\": \"value3\", \"key4\": [{\"key\": \"value\", \"key\": \"value\"}], \"etc\": \"etc\"} } }" ``` has none of the three ```)
| mvexpand field3
``` the above sort of emulates index=test field1=* field2=* ```
| eval type = json_object()
| spath input=field3
| foreach "event.ResourceAttributes.Name", "event.ResourceAttributes.Resource Name", "event.ResourceAttributes.ID"
    [ | eval type=json_set(type, replace("<<FIELD>>", "event.ResourceAttributes.", ""), '<<FIELD>>') ]
| stats values(type) as "Additional Details" by event.AccountId event.CloudPlatform event.CloudService

And the output from this emulation:

event.AccountId | event.CloudPlatform | event.CloudService | Additional Details
xxxxxxxxxx1 | CloudProvider | Service | {"Name":null,"Resource Name":"name-resource-121sg6fe","ID":null} {"Name":null,"Resource Name":null,"ID":null}
xxxxxxxxxx2 | CloudProvider | Service | {"Name":"value1","Resource Name":null,"ID":"value3"} {"Name":null,"Resource Name":"name-resource-121sg6fe","ID":"value2"}
Hi @att35, good for you, see you next time! Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Hi @BTB, let me understand: you want to check, for each sourcetype, whether the number of logs received in the last three days dropped by 30% or more compared with the previous three days, is that correct? If this is your requirement, you could try something like this (note: the counts need to be grouped by day as well as sourcetype so each day can be assigned to a period):

| tstats count WHERE index=* earliest=-6d BY sourcetype _time span=1d
| eval period=if(_time>now()-86400*3,"Last","Previous")
| stats sum(eval(if(period="Last",count,0))) AS Last sum(eval(if(period="Previous",count,0))) AS Previous dc(period) AS period_count values(period) AS period BY sourcetype
| eval diff_perc=(Last-Previous)/Previous*100
| where diff_perc<=-30

If you want a different algorithm you can implement it using my approach. Ciao. Giuseppe
Hello Community, I am encountering an issue where logs are not being received in two regions but are successfully received in another region. Upon further investigation, we didn't observe any errors in splunkd.log, the inputs and outputs configs are in place, and there are no disk space issues either. What could be the possible reason for this? All our indexers and SHs are hosted in Splunk Cloud.
Is anyone still facing this issue? Is there a fix for it?
I'm trying to build an alert that looks at the number of logs from the past three days and then compares it to the number of logs from the previous three days before that. I want to set off an alert if the log numbers drop by 30% or more in any period. I've seen this done with search and with _index, but I'm unsure which way is best. I don't want to build almost 100 searches for 100 different source types, and I'd much rather do it by the twenty-something indexes. I'm not sure if ML is the right way to do this but I've seen times when logs stop flowing and it isn't noticed for days and I want to prevent that from happening. Any help is appreciated. 
I am using Splunk Enterprise Version 9.1.0.1. My search query is:

index="webmethods_prd" source="/apps/webmethods/integrationserver/instances/default/logs/USCustomerEDI.log" InterfaceName=USCUSTOMERPO Status=Success OR Status=Failure
| eval timestamp=strftime(_time, "%F")
| chart limit=30 dc(TxID) over Sender_ID by timestamp

In the result I am getting an incomplete Sender_ID - Splunk dropped everything after the space in Sender_ID, but it should be the full name. How can I preserve the full Sender_ID here?

Avik
You can do it, but it's a little fiddly - it involves making the response value a multivalue field with the response value as the first element and an indicator of whether it's over threshold saved as the second value. Using CSS you stop the second value from being displayed, then use the standard colorPalette settings to set the colour. Here's an example panel that demonstrates how:

<panel>
  <html depends="$hidden$">
    <style>
      #coloured_cell table tbody td div.multivalue-subcell[data-mv-index="1"]{
        display: none;
      }
    </style>
  </html>
  <table id="coloured_cell">
    <title>Colouring a table cell based on its relative comparison to another cell</title>
    <search>
      <query>
| makeresults count=10
| fields - _time
| eval Threshold=random() % 100, Value=random() % 100
| eval Comparison=case(Value &lt; Threshold,-1, Value &gt; Threshold, 1, true(), 0)
| eval Value=mvappend(Value, Comparison)
| table Threshold Value
      </query>
      <earliest>-15m</earliest>
      <latest>now</latest>
      <sampleRatio>1</sampleRatio>
    </search>
    <option name="count">100</option>
    <format type="color" field="Value">
      <colorPalette type="expression">case(mvindex(value, 1) == "1", "#ff0000", mvindex(value, 1) == "-1", "#000000", true(), "#00ff00")</colorPalette>
    </format>
  </table>
</panel>

It will make a field called Comparison which is -1 if Value < Threshold, 1 if Value > Threshold, otherwise 0. This is appended to the real Value field. Then the <colorPalette> statement tests mvindex(value, 1), which gives the Comparison number, and the expression defines the colour.
What's your search that's returning the data - is this in a dashboard? If the data is coming from a KV store then you will have to do the time bounding yourself: Splunk will not limit your result set based on time, because there is no equivalent time concept in the KV store.
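A sketch of what that manual time bounding might look like, assuming the collection is read with inputlookup and stores an epoch timestamp in a field - the collection name and the field name "time" are placeholders, not from the thread:

```spl
| inputlookup my_collection
``` keep only the last 24 hours, snapped to the hour ```
| where time >= relative_time(now(), "-24h@h") AND time <= now()
```

If the collection stores a human-readable timestamp instead, convert it with strptime() first so the where clause can compare epoch values.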