
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi, could anyone please guide me on how we can detect an attacker moving laterally in the environment? It can be a challenge. How can we write the correlation search, and are there any prerequisites that need to be followed? Thanks in advance.
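One common starting point for such a correlation search, purely as a rough sketch (the index, sourcetype, field names, and threshold below are assumptions, not a finished detection), is to flag a single account authenticating over the network to many distinct hosts in a short window, for example Windows Security EventCode 4624 with Logon_Type=3. The main prerequisite is that authentication logs (Windows Security, VPN, etc.) are already onboarded, and in Enterprise Security you would save something like this as a scheduled correlation search.

index=wineventlog sourcetype="WinEventLog:Security" EventCode=4624 Logon_Type=3
| bin _time span=1h
| stats dc(dest) AS distinct_hosts values(dest) AS hosts BY user _time
| where distinct_hosts > 5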
Sender_ID is present in the logging, for example:

2024-02-16 09:55:41:829 EST| INFO |InterfaceName=USCUSTOMERPO POCanonical_JSONHttpDataProcess=END JSON data successfully processed to Order Processor application for TxID=20240216095535623-0EEu Sender_ID=hC Bioscience Inc Receiver_ID=ThermoFisher Scientific TxnType=USCustomer_PO Format=cXML Direction=Inbound PO_Num=2550 Status=Success

Please help me form the query. I tried this, but the issue still persists: it takes only the first word from the log line.
For anyone wanting to push the idea of FreeBSD 14 support, this is where it can be done: https://ideas.splunk.com/ideas/SFXIMMID-I-583 Feel free to spend up to 10 votes! Thanks a lot for your support and for spreading the word.
Try something like this:

index="idx" | where [| inputlookup name.csv | table id name ]
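For comparison, a minimal sketch using the sample values from the question: comparisons in where/eval are case-sensitive (unlike the base search), so an explicit condition like this keeps the 1a2 row and drops 1A2:

index="idx"
| where id="1a2" AND name="aaa"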
Hi, I have this situation:

index="idx" [| inputlookup name.csv | table id name ]

Data in idx:
id   name
1a2  aaa
1A2  aaa
12a  bbb

Lookup:
id   name
1a2  aaa

The result is that it extracts the first 2 lines. How do I extract just the first line? Thank you, Simone
There is only one! I deleted the others. Can someone help me?
You have not shown how Sender_ID has been extracted. Having said that, you may need to re-extract it with a rex command, such as this:

| rex "Sender_ID=(?<Sender_ID>.+?)\s+Receiver_ID"
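A minimal end-to-end sketch (the index and sourcetype below are placeholders, and the stats is just for illustration):

index=your_index sourcetype=your_sourcetype "Sender_ID="
| rex "Sender_ID=(?<Sender_ID>.+?)\s+Receiver_ID"
| stats count by Sender_ID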
Before I invest too much time working on a regex, can you please share your events in a code block </>? Also, do your events really have "Part" in them? (Regex matching works on patterns, and unless the patterns are accurate, the match will not be found.)
@Roy1 Can I question your first (15 seconds of activity) where there are only 2 events

12:00:30 x=1600 y=850 z=60 equipmentID=1
12:00:35 x=1600 y=850 z=60 equipmentID=1

where x, y and z are the same, so I am not clear on how you get 15 seconds active. It seems like 5 or 10 seconds. The 135 also seems like it should be 145.

Here is an example using your data that gives you a timechart. NB: it is only done with a single equipmentID, so it needs to be tested with more than one Id.

| makeresults
| fields - _time
| eval d=split(replace("12:00:10 x=1000 y=500 z=300 equipmentID=1 12:00:15 x=1000 y=500 z=300 equipmentID=1 12:00:20 x=1025 y=525 z=275 equipmentID=1 12:00:25 x=1000 y=500 z=300 equipmentID=1 (20 seconds of inactivity) 12:00:30 x=1600 y=850 z=60 equipmentID=1 12:00:35 x=1600 y=850 z=60 equipmentID=1 (15 seconds of activity) 12:03:00 x=1650 y=950 z=300 equipmentID=1 (135 seconds of inactivity) 12:03:05 x=1850 y=500 z=650 equipmentID=1 12:03:10 x=2500 y=950 z=800 equipmentID=1 12:03:15 x=2500 y=950 z=400 equipmentID=1 12:03:20 x=2500 y=950 z=150 equipmentID=1 (15 seconds of activity)", " ","##"),"##")
| mvexpand d
| rename d as _raw
| extract
| rex "(?<time>\d+:\d+:\d+)"
| eval _time=strptime(time, "%H:%M:%S")
| eval description=if(isnull(_time), _raw, null())
| fields - _raw time
| where isnotnull(_time)
| sort _time
``` The above is just data setup ```
| streamstats time_window=10s count range(x) as r_x range(y) as r_y range(z) as r_z by equipmentID
| eval isIdleByCoord=if(r_x<=50 AND r_y<=50 AND r_z<=50, 1, 0)
| streamstats window=2 global=f range(_time) as r by equipmentID
| timechart fixedrange=f sum(eval(if(isIdleByCoord=1, r, null()))) as InActive by equipmentID

isIdleByCoord is calculated in a 10 second window. If there is no activity for a piece of equipment then it will not have a data point, so the 'r' calculation will show the range of the previous point. Let me know if this helps.
Can someone please help with the regex that can be used to view the below event in tabular format?

Event:
INFO > 2024-02-02 16:12:12,222 - [application logs message]: ==============================================
Part 1.    session start is completed
Part 2.    Before app message row count    : 9000000
           Before app consolidation row count    :8888800
Part 3.    append message completed
Part 4.    After app message flush row count : 0
           After app message flush row count     :1000000
=================================================

How can we use regex to get the fields from the above event and show them in a table like the one below?

parts     message                          count
Part 1    session start is completed
Part 2    Before app message row count     9000000
                                           8888800
Part 3    append message completed
Using join is not a Splunk way of doing things; generally you would use stats. I'm not entirely clear on what fields exist in which indexes in your example. Does InstanceId exist in the index=main data? That is what you are joining on.

From your description it sounds like all you want are those InstanceIds that come from the subsearch, so maybe I'm missing something.

If you are looking to find only those ResourceId where ResourceId=InstanceId from your current subsearch, but are also looking for other information, then

index=main ResourceId=* OR (index=other type=instance earliest=-2h)
| eval InstanceId=coalesce(ResourceId, InstanceId)
| stats values(*) as * values(index) as indexes count by InstanceId
| where mvcount(indexes)=2
Need more information - the question doesn't make sense
Use the aligntime parameter to timechart, i.e. | timechart span=30m aligntime=@m...
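For example, a sketch assuming the search time range starts at the desired minute and that aligntime accepts the earliest keyword here as it does for the bin command (an explicit time specifier or epoch value also works):

index=your_index
| timechart span=30m aligntime=earliest count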
Please share some sample (anonymised) events
Hi, I was facing the same issue. I changed the following under transforms.conf:

[kv_cp_log_format]
REGEX = ([a-zA-Z0-9_-]+)[:=]+([^|]+)

[kv_cp_syslog_log_format]
REGEX = ([a-zA-Z0-9_-]+)[:=]+"((?:[^"\\]|\\.)+)"
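For reference, these transform stanzas take effect through a REPORT- setting in props.conf; a minimal sketch (the sourcetype stanza name below is a placeholder, and in the Check Point add-on the references already exist):

[your_checkpoint_sourcetype]
REPORT-kv_extractions = kv_cp_log_format, kv_cp_syslog_log_format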
Hi, my requirement is to get 30-minute results using timechart span=30m, starting from the start time that I have specified. The start time can be, for example, 11:34 AM, 11:38 AM, or 11:42 AM. But instead I am getting results in 30-minute intervals such as 11:30 AM, 12:00 PM, 12:30 PM, 1:00 PM, etc.
To simplify things, I will just follow your initial clue and assume that ID and Name are also part of event.ResourceAttributes.

index=test field1=* field2=*
| spath input=field3
| foreach "event.ResourceAttributes.Name", "event.ResourceAttributes.Resource Name", "event.ResourceAttributes.ID"
    [ | eval type=mvappend(type, if(isnotnull('<<FIELD>>'), '<<FIELD>>', null())) ]
| stats values(type) as "Additional Details" by event.AccountId event.CloudPlatform event.CloudService

If they are in some other nodes, just rewrite the foreach list. Here is a fuller emulation that I made up based on your singular mock data point.

| makeresults
| eval field3 = mvappend("{\"event\": { \"AccountId\": \"xxxxxxxxxx2\", \"CloudPlatform\": \"CloudProvider\", \"CloudService\": \"Service\", \"ResourceAttributes\": {\"key1\": \"value1\", \"ID\": \"value2\", \"key3\": \"value3\", \"key4\": [{\"key\": \"value\", \"key\": \"value\"}], \"Resource Name\": \"name-resource-121sg6fe\", \"etc\": \"etc\"} } }" ``` has ID, Resource Name, no Name ```, "{\"event\": { \"AccountId\": \"xxxxxxxxxx1\", \"CloudPlatform\": \"CloudProvider\", \"CloudService\": \"Service\", \"ResourceAttributes\": {\"key1\": \"value1\", \"key2\": \"value2\", \"key3\": \"value3\", \"key4\": [{\"key\": \"value\", \"key\": \"value\"}], \"Resource Name\": \"name-resource-121sg6fe\", \"etc\": \"etc\"} } }" ``` has Resource Name, no others ```, "{\"event\": { \"AccountId\": \"xxxxxxxxxx2\", \"CloudPlatform\": \"CloudProvider\", \"CloudService\": \"Service\", \"ResourceAttributes\": {\"Name\": \"value1\", \"key2\": \"value2\", \"ID\": \"value3\", \"key4\": [{\"key\": \"value\", \"key\": \"value\"}], \"etc\": \"etc\"} } }" ``` has ID, Name, no Resource Name ```, "{\"event\": { \"AccountId\": \"xxxxxxxxxx1\", \"CloudPlatform\": \"CloudProvider\", \"CloudService\": \"Service\", \"ResourceAttributes\": {\"key1\": \"value1\", \"key2\": \"value2\", \"key3\": \"value3\", \"key4\": [{\"key\": \"value\", \"key\": \"value\"}], \"etc\": \"etc\"} } }" ``` has none of the three ```)
| mvexpand field3
``` the above sort of emulates index=test field1=* field2=* ```
| eval type = json_object()
| spath input=field3
| foreach "event.ResourceAttributes.Name", "event.ResourceAttributes.Resource Name", "event.ResourceAttributes.ID"
    [ | eval type=mvappend(type, if(isnotnull('<<FIELD>>'), '<<FIELD>>', null())) ]
| stats values(type) as "Additional Details" by event.AccountId event.CloudPlatform event.CloudService

What this does is to add variations to which of "Name", "Resource Name", and "ID" do or do not appear in each event. You can play with it and compare with real data. The output is

event.AccountId   event.CloudPlatform   event.CloudService   Additional Details
xxxxxxxxxx1       CloudProvider         Service              name-resource-121sg6fe, {}
xxxxxxxxxx2       CloudProvider         Service              name-resource-121sg6fe, value1, value2, value3, {}

One more suggestion: @bowesmana's idea is just to use foreach. The above format does not group the present or missing attributes in a very distinguishable manner. An alternative to using mvappend inside the foreach subsearch is to also carry the input keys in addition to values in "Additional Details". Using a JSON structure is one such method.
index=test field1=* field2=*
| eval type = json_object()
| spath input=field3
| foreach "event.ResourceAttributes.Name", "event.ResourceAttributes.Resource Name", "event.ResourceAttributes.ID"
    [ | eval type=json_set(type, replace("<<FIELD>>", "event.ResourceAttributes.", ""), '<<FIELD>>') ]
| stats values(type) as "Additional Details" by event.AccountId event.CloudPlatform event.CloudService

This is a full emulation:

| makeresults
| eval field3 = mvappend("{\"event\": { \"AccountId\": \"xxxxxxxxxx2\", \"CloudPlatform\": \"CloudProvider\", \"CloudService\": \"Service\", \"ResourceAttributes\": {\"key1\": \"value1\", \"ID\": \"value2\", \"key3\": \"value3\", \"key4\": [{\"key\": \"value\", \"key\": \"value\"}], \"Resource Name\": \"name-resource-121sg6fe\", \"etc\": \"etc\"} } }" ``` has ID, Resource Name, no Name ```, "{\"event\": { \"AccountId\": \"xxxxxxxxxx1\", \"CloudPlatform\": \"CloudProvider\", \"CloudService\": \"Service\", \"ResourceAttributes\": {\"key1\": \"value1\", \"key2\": \"value2\", \"key3\": \"value3\", \"key4\": [{\"key\": \"value\", \"key\": \"value\"}], \"Resource Name\": \"name-resource-121sg6fe\", \"etc\": \"etc\"} } }" ``` has Resource Name, no others ```, "{\"event\": { \"AccountId\": \"xxxxxxxxxx2\", \"CloudPlatform\": \"CloudProvider\", \"CloudService\": \"Service\", \"ResourceAttributes\": {\"Name\": \"value1\", \"key2\": \"value2\", \"ID\": \"value3\", \"key4\": [{\"key\": \"value\", \"key\": \"value\"}], \"etc\": \"etc\"} } }" ``` has ID, Name, no Resource Name ```, "{\"event\": { \"AccountId\": \"xxxxxxxxxx1\", \"CloudPlatform\": \"CloudProvider\", \"CloudService\": \"Service\", \"ResourceAttributes\": {\"key1\": \"value1\", \"key2\": \"value2\", \"key3\": \"value3\", \"key4\": [{\"key\": \"value\", \"key\": \"value\"}], \"etc\": \"etc\"} } }" ``` has none of the three ```)
| mvexpand field3
``` the above sort of emulates index=test field1=* field2=* ```
| eval type = json_object()
| spath input=field3
| foreach "event.ResourceAttributes.Name", "event.ResourceAttributes.Resource Name", "event.ResourceAttributes.ID"
    [ | eval type=json_set(type, replace("<<FIELD>>", "event.ResourceAttributes.", ""), '<<FIELD>>') ]
| stats values(type) as "Additional Details" by event.AccountId event.CloudPlatform event.CloudService

And the output from this emulation:

event.AccountId   event.CloudPlatform   event.CloudService   Additional Details
xxxxxxxxxx1       CloudProvider         Service              {"Name":null,"Resource Name":"name-resource-121sg6fe","ID":null}
                                                             {"Name":null,"Resource Name":null,"ID":null}
xxxxxxxxxx2       CloudProvider         Service              {"Name":"value1","Resource Name":null,"ID":"value3"}
                                                             {"Name":null,"Resource Name":"name-resource-121sg6fe","ID":"value2"}
Hi @att35, good for you, see you next time! Ciao and happy splunking. Giuseppe
P.S.: Karma Points are appreciated by all the contributors.
Hi @BTB, let me understand: you want to check if, for each sourcetype, you received in the last three days less than 30% of the logs that you received in the previous three days, is that correct?

If this is your requirement, you could try something like this:

| tstats count latest(_time) AS _time WHERE index=* earliest=-6d BY sourcetype
| eval period=if(_time>now()-86400*3,"Last","Previous")
| stats sum(eval(if(period="Last",count,0))) AS Last sum(eval(if(period="Previous",count,0))) AS Previous dc(period) AS period_count values(period) AS period BY sourcetype
| eval diff_perc=(Last-Previous)/Previous*100
| where diff_perc<30

If you want a different algorithm, you can implement it using my approach.

Ciao.
Giuseppe