All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Thanks for your swift response. I need to calculate the duration between the first "fail" and the first "success" for every Item. Unfortunately the result is incorrect:

Item StartTime EndTime  Duration
B    02:40:05  02:45:05 00:05:00
B    02:20:05  02:25:05 00:05:00
B    02:15:05  02:30:05 00:15:00   ==> should be "B 02:15:05 02:25:05 00:10:00"
A    02:10:00  02:15:00 00:05:00
A    02:05:00  02:20:00 00:15:00   ==> should be "A 02:05:00 02:15:00 00:10:00"

I had tried this method before; however, consecutive "Result=fail" events cause overlapping results.
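One way to avoid the overlap caused by consecutive fails is to keep only the state transitions before pairing each fail with its following success. Here is a streamstats-based sketch; it assumes Item and Result are already extracted as fields (names taken from the sample data in this thread) and has only been checked against that sample:

```
<your_search>
| sort 0 Item _time
| streamstats current=f window=1 last(Result) as prev_Result by Item
| where (Result="fail" AND (isnull(prev_Result) OR prev_Result="success"))
    OR (Result="success" AND prev_Result="fail")
| streamstats current=f window=1 last(_time) as fail_time by Item
| where Result="success" AND isnotnull(fail_time)
| eval StartTime=strftime(fail_time, "%H:%M:%S"),
       EndTime=strftime(_time, "%H:%M:%S"),
       Duration=tostring(_time - fail_time, "duration")
| table Item StartTime EndTime Duration
```

The first where keeps only the first fail of each run and the first success after a fail run, so the second streamstats always pairs a success with the first fail that preceded it. On the sample data this yields A 02:05:00-02:15:00 (00:10:00), B 02:15:05-02:25:05 (00:10:00) and B 02:40:05-02:45:05 (00:05:00).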
Hi All, does the current version of Splunk Cloud allow integration with Google Authenticator for Multi-Factor Authentication?
Hi @WK, what's the condition for grouping? How can I recognize StartTime and EndTime? This is one of the few situations where to use the transaction command. If you want to trace when there's a fail followed by a success, you could try something like this:

<your_search>
| transaction Item startswith="Result=fail" endswith="Result=success"
| eval StartTime=strftime(_time, "%H:%M:%S"), EndTime=strftime(_time+duration, "%H:%M:%S"), Duration=tostring(duration, "duration")
| table Item StartTime EndTime Duration

Ciao. Giuseppe
Hi All,

2023-10-25 10:56:46,709 WARN pool-1-thread-1 com.veeva.bpr.batchrecordprint.scheduledTasks - BOM Field Name: BOM_PPMDS_1, value is out of

The above WARN message has been replaced by an ERROR message; please find the ERROR message below.

2023-11-06 15:30:48,941 ERROR pool-1-thread-1 com.veeva.brp.batchrecordprint.ScheduledTasks - Unknown error: {errorType=GENERAL,

How do I write the props.conf and transforms.conf configuration files? Please help me.

Regards,
Vijay K.
Hi @gayathrc, I suppose that you already have your Splunk infrastructure; if not, you have to engage a Splunk architect to design it. Anyway, are you speaking of packet capture or network switch logs?

In the first case, you have to configure the Splunk App for Stream; for more details see:
https://splunkbase.splunk.com/app/1809
https://splunkbase.splunk.com/app/5234
https://splunkbase.splunk.com/app/5238

If instead you have to use switch logs, you have to configure one of the components of your Splunk infrastructure (usually a Heavy Forwarder) as a receiver of network inputs (for more info, see https://docs.splunk.com/Documentation/Splunk/9.1.1/Data/Monitornetworkports), then install the add-on related to your network technology (e.g. the Cisco Networks Add-on, https://splunkbase.splunk.com/app/1467) and search the extracted fields.

If you don't have basic knowledge of Splunk searching, see the Splunk Search Tutorial (https://docs.splunk.com/Documentation/Splunk/latest/SearchTutorial/WelcometotheSearchTutorial).

Ciao. Giuseppe
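As a concrete illustration of the network-input step, a minimal inputs.conf stanza on the Heavy Forwarder for syslog from switches might look like the sketch below. The port, index, and sourcetype here are only examples; use the values your environment and your add-on's documentation prescribe:

```
# inputs.conf on the Heavy Forwarder (values are illustrative)
[udp://514]
sourcetype = cisco:ios
index = network
# record the sending device's IP as the host field
connection_host = ip
```

The target index must already exist on the indexers, and your firewall must allow the chosen port.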
I have the following data:

02:00:00 Item=A Result=success
02:00:05 Item=B Result=success
02:05:00 Item=A Result=fail
02:05:05 Item=B Result=success
02:10:00 Item=A Result=fail
02:10:05 Item=B Result=success
02:15:00 Item=A Result=success
02:15:05 Item=B Result=fail
02:20:00 Item=A Result=success
02:20:05 Item=B Result=fail
02:25:00 Item=A Result=success
02:25:05 Item=B Result=success
02:30:00 Item=A Result=success
02:30:05 Item=B Result=success
02:35:00 Item=A Result=success
02:35:05 Item=B Result=success
02:40:00 Item=A Result=success
02:40:05 Item=B Result=fail
02:45:00 Item=A Result=success
02:45:05 Item=B Result=success
02:50:00 Item=A Result=success
02:50:05 Item=B Result=success
02:55:00 Item=A Result=success
02:55:05 Item=B Result=success

My desired results:

Item StartTime EndTime  Duration
A    02:05:00  02:15:00 00:10:00
B    02:15:05  02:25:05 00:10:00
B    02:40:05  02:45:05 00:05:00

I had tried transaction and streamstats but got wrong results. Can anybody here help me solve this problem? Thank you.
Hi Cansel, it's a SaaS Controller, so we cannot run those commands on the Controller. AppD Support also informed me that this is not a controller-side issue, but rather a general network issue.
Great @tscroggins! With that it worked perfectly for me. I didn't know the functions for processing JSON; with this I will be able to learn them. Thank you so much!
Please help me with the following.

Requirement: once all 3 events are met, the next event should be published immediately; if that event is not published within 5 minutes, we need an alert.

Example: we have one customer number. For that customer number, I have to search whether the logs for all 3 events are available in Splunk, e.g.:

index=1 sourcetype="abc" "s1 event received" AND "s2 event received" AND "s3 event received"

When I run the above query, I get results in this format:

S1 received for 12345 customer, name=abz
S2 received for 12345 customer, name=abz
S3 received for 12345 customer, name=abz

If all 3 events are met for one customer, next I want to search for a "created" message in Splunk for the same customer (12345). The "created" message has a different index and sourcetype.

If the "created" message is not available for customer 12345 within 5 minutes after all 3 events are met, I need an alert with the customer number. If the "created" message arrives after more than 5 minutes, I also need to capture the customer number.

FYI: if we receive a "created" message in the log, a sample log (JSON format) would look like:

created: {"customer no": "12345", "name": "kanunam"}

Please help me with the search query.
Hi! This is a very basic question. First time working with Splunk Enterprise Platform. How do you actually go about switching on the feature to log network traffic coming into an internal network ... See more...
Hi! This is a very basic question. First time working with the Splunk Enterprise platform. How do you actually go about switching on the feature to log network traffic coming into an internal network with a specific IP range? I essentially want Splunk Enterprise to act as a logger for all traffic that enters the internal network on a certain port, for example. How do I go about it? FYI, I do not want to use the Forwarder or the upload-log-files function.
Empty and null are different things. If the field is "" then it is not null, so I use:

| where len(company)>0
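A quick demonstration of the difference, which you can paste into the search bar (the field name company is just an example):

```
| makeresults
| eval company=""
| eval is_null=if(isnull(company), "yes", "no"), length=len(company)
```

This returns is_null="no" and length=0: the empty string is a present value, so isnull() is false, and only a length (or emptiness) test catches it.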
Thank you for your help! It works!!
Without actual sample events, this may not match your use case exactly, but it's a starting point. I've used the following events to test:

2023-11-12 00:00:00 id=1 name=a hello how where
2023-11-12 00:01:00 id=2 name=b hello how where
2023-11-12 00:03:00 id=1 name=a completed
2023-11-12 00:10:00 id=3 name=c hello how where
2023-11-12 00:10:00 id=4 name=d hello how where
2023-11-12 00:14:00 id=3 name=c completed
2023-11-12 00:16:00 id=4 name=d completed

Save the following as an alert, and schedule it to run every minute:

((index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2)) earliest=-6m@m latest=@m
| addinfo
| where _time<relative_time(info_max_time, "-5m@m") OR match(_raw, "completed")
| transaction keepevicted=t id name startswith="hello AND how AND where" endswith="completed"
| where (closed_txn==0 AND NOT match(_raw, "completed")) OR duration>300
| table _time id name

You can test the search using hard-coded, 6-minute timespans instead of earliest=-6m@m latest=@m. There will be a >1 minute delay before alerts are triggered, but the 6-minute time range allows us to cover the start time +/- 30 seconds of a sliding 5-minute window.

Using the sample data, alerts would be triggered at the following times:

Execution time: ~2023-11-12 00:07:00 - no completed event =>
_time               id name
2023-11-12 00:01:00 2  b

Execution time: ~2023-11-12 00:16:00 - completed event late (>5 minutes) =>
_time               id name
2023-11-12 00:10:00 4  d
I found a solution for this:

index=testindex (vuln=* AND score=* AND company=*) OR (vuln=*) OR NOT (company="")

(vuln=* AND score=* AND company=*)  ==> condition for vuln, score, and company all existing
(vuln=*)                            ==> condition for only vuln existing
NOT (company="")                    ==> condition for only company existing

company=* "is equivalent with" NOT (company="") "is equivalent with" isnull(company)

Any idea why company=* or isnull(company) does not work? Thank you
I had to re-think the way I was showing my job info in the UI, as an orphaned transaction could mean two different things, i.e. a 'running' start with no end message, or multiple starts, along with modifying my initial SPL to look for an aborted message when a failure occurred. It was quite a learning experience as I went through all the different permutations of keeporphans=true/false and keepevicted=true/false and then looked at the resulting transaction data.

I did notice that closed_txn defaults to null and not 0, and had to write some code to make it default to 0:

| eval closed_txn = if(isnull(closed_txn), 0, closed_txn)
| search closed_txn=0

to find those transactions that only have a START log event.

Thanks for all your input, as it gave me a different perspective from the way I was looking at it. I was also able to rewrite some of my existing SPL to have fewer lines.
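For what it's worth, a slightly shorter way to default a null closed_txn to 0 (assuming the same field produced by transaction) is the coalesce() eval function, which returns its first non-null argument:

```
| eval closed_txn = coalesce(closed_txn, 0)
| search closed_txn=0
```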
Your search and your data don't match, in that you are parsing time in your SPL, but your data shows that field as already in epoch time. You need to use max=0 in the join statement. Note that using join is not good practice, as it has a number of limitations and is slow. stats is generally the way to go, but in this case you're effectively using the subsearch as a lookup table for a range search, so join can work here; just be aware of its limitations.
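For reference, the usual stats alternative to join combines both data sets in one base search and merges rows on the shared key. This is only a generic sketch; the index, sourcetype, and field names are placeholders, and it does not cover the range-search case mentioned above:

```
(index=idx1 sourcetype=st1) OR (index=idx2 sourcetype=st2)
| stats values(field_a) as field_a values(field_b) as field_b by join_key
| where isnotnull(field_a) AND isnotnull(field_b)
```

The final where keeps only keys that appeared in both sources, mimicking an inner join without join's subsearch limits.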
As @bowesmana noted, this is the way. The timestamp is time zone-aware, though, so be mindful of the offset. If you prefer, you can include a time zone in your conversion, e.g. as a shortcut for UTC:

| eval _time=strptime(Date."Z", "%Y-%m-%d%Z")
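Expanding that snippet into a self-contained search you can paste into the search bar (the Date value is just an example):

```
| makeresults
| eval Date="2023-11-13"
| eval _time=strptime(Date."Z", "%Y-%m-%d%Z")
```

Appending "Z" to the string and matching it with %Z pins the parsed time to midnight UTC, independent of the search head's time zone, whereas the plain "%Y-%m-%d" form uses the local time zone.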
Date has YYYY-MM-DD format. I managed to change the '_time' field using the command:

| eval _time=strptime(Date, "%Y-%m-%d")

Now the Time column in the events list shows the date in dd/mm/yyyy format, with an actual time of 00:00:00.000.
As @tscroggins says, it's always important to get your ingest dates correctly extracted from the data in the first place. However, to extract a time from a field in the data you use the strptime() function, e.g.:

| eval _time=strptime(date_field, "format_string")

which will overwrite the existing _time field with the time converted from your data field called date_field according to the format string you specify. Time format variables are documented here: https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/Commontimeformatvariables

E.g. this example, which you can paste into your search bar, will convert the string in my_date_field to _time:

| makeresults
| eval my_date_field="2023-11-13 08:01:02.123"
| eval _time=strptime(my_date_field, "%F %T.%Q")

Note that times are converted to epoch times, but the _time field is special in that it will show you the formatted date rather than the epoch.