All Posts

Our specific requirement is to have links to share with others, say while troubleshooting an issue, which can then be used even weeks later to come back to the exact same search result. So creating a report doesn't make sense in this case.

To have something available weeks later is exactly what a report is for. You don't want to force people to create their own bookmarks for such purposes, and there is absolutely no need to instruct future users to enter time ranges manually. (In fact, the best report is one where you disallow the time picker.) The point is, you CAN reproduce search results years later if your saved search contains the same time window as your original search. Have you read the document I linked?

Say I want people to run the following

index=_internal
| timechart span=2h count by sourcetype ``` data emulation 2 ```
| addtotals
| delta "Total" as _delta
| foreach * Total [eval <<FIELD>> = if(-_delta > Total, null(), '<<FIELD>>')]

for the past 2 days, where "past 2 days" is merely a reference to my search time. You probably recognize that you don't need any precision in this time period. (I'll demonstrate more precise requirements later.) So, say I am searching at 1015 Zulu time on 2023-09-13. It is perhaps sufficient to pass 1000 Zulu time to future users. (Or 1100, as the case may be.) I can save the search as

index=_internal earliest=09/11/2023:10:00:00 latest=09/13/2023:10:00:00
| timechart span=2h count by sourcetype ``` data emulation 2 ```
| addtotals
| delta "Total" as _delta
| foreach * Total [eval <<FIELD>> = if(-_delta > Total, null(), '<<FIELD>>')]

If you want to be more precise, you can always specify time with more precision. You can do this by looking at your watch, or you can get it from Splunk. For example, I want

index=_internal
| stats count by sourcetype

for a certain period that I am searching. I can do

index=_internal
| stats count by sourcetype
| addinfo
| fields - info_s*

This gives me

sourcetype              count  info_max_time   info_min_time
dbx_health_metrics       8220  1694583382.000  1694579760.000
dbx_server                  2  1694583382.000  1694579760.000
splunk_python              76  1694583382.000  1694579760.000
splunk_search_messages      2  1694583382.000  1694579760.000
splunk_web_access           5  1694583382.000  1694579760.000
splunk_web_service         15  1694583382.000  1694579760.000
splunkd                 32275  1694583382.000  1694579760.000
splunkd_access            824  1694583382.000  1694579760.000
splunkd_ui_access         619  1694583382.000  1694579760.000

I just put info_min_time and info_max_time back:

index=_internal earliest=1694579760.000 latest=1694583382.000
| stats count by sourcetype

(They happen to be the past 4 hours.) As I said, if I want to know what happened in the past four hours tonight, this search will always give me the same output whether I run it tomorrow or a year from now. And I never have to write a memo to myself about when I ran this search, nor do I need to use the time picker again.
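If you'd rather not copy the numbers by hand, the same addinfo output can be strung together into a ready-to-paste time clause. This is just a sketch; the pinned field name is made up for illustration:

index=_internal
| stats count by sourcetype
| addinfo
| eval pinned = "earliest=" . info_min_time . " latest=" . info_max_time ``` hypothetical helper field: copy its value into the saved search string ```
| fields - info_*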
You could use the methods mentioned earlier to contact Splunk and let them know you are willing to contribute your enhancements to TA-aws if they are interested. I suppose the PM team would at least be interested in discussing what you have and whether they can use it.
The add-on docs describe how an individual IIS instance should be configured so that it logs the proper data. How to deploy that configuration across your environment is something to work out with your admins and check against your local policies. We can't tell you whether GPO is the appropriate solution in your case. It might be (I'm not sure whether these settings can be configured via GPO), but there may be other ways to do it; for example, if you use a third-party automation solution, you could use that instead of deploying the settings via GPO. The requirements for the add-on regarding IIS configuration are described here - https://docs.splunk.com/Documentation/AddOns/released/MSIIS/Hardwareandsoftwarerequirements#Microsoft_IIS_setup_requirements - but how to apply them properly is up to you and your infrastructure team.
Hi All,

Any luck on this issue? I am facing a similar issue with the process monitoring extension. Error log:

[Monitor-Task-Thread1] 13 Sep. 2023 10:57:56,299 DEBUG WindowsParser-Process Monitor - Unable to retrieve process info for pid 5248
org.hyperic.sigar.SigarPermissionDeniedException: Access is denied.

-Pavan
I think there is a way to increase the default expiry times, but it comes at a cost: the saved jobs occupy space in the user's disk quota. So it's probably not a good idea to increase these to huge values, from both a user and a machine-resource perspective.
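For reference, the relevant knobs should live in limits.conf; a sketch with the stock defaults (verify the exact setting names and values against the docs for your Splunk version):

# limits.conf, [search] stanza
[search]
# how long an ad-hoc search artifact is kept, in seconds (default 600 = 10 minutes)
ttl = 600
# how long a job is kept once a user saves or shares it (default 604800 = 7 days)
default_save_ttl = 604800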
Our specific requirement is to have links to share with others, say while troubleshooting an issue, which can then be used even weeks later to come back to the exact same search result. So creating a report doesn't make sense in this case.

Also, after running searches in the UI, it's hard to add 'earliest' and 'latest' manually, with the correct time formatting, every time you want to share one. Most users don't know about this or wouldn't find it easy, I think.
The if (event=from_source1... is a test you will have to build using whatever fields you have that indicate the data is a source1 event (source/sourcetype/index?). If you need to take only the source2 events that fall inside the source1 window, or if that window can span more than one day, you'll have to do it a bit differently.
If you paste this into your search window, you can see it being done with your example dataset

| makeresults
| eval _raw="DATE,Start_Time,End_Time
Day_3,2023-09-12 01:12:12.123,2023-09-13 01:13:13.123
Day_2,2023-09-11 01:11:11.123,2023-09-12 01:12:12.123
Day_1,2023-09-10 01:10:10.123,2023-09-11 01:11:11.123"
| multikv forceheader=1
| table DATE Start_Time End_Time
| eval _time = relative_time(strptime(Start_Time, "%F %T.%Q"), "@d")
| append [
  | makeresults
  | eval _raw="Event type,Time,Others
EventID2,2023-09-11 01:20:20.123,
EventID1,2023-09-11 01:11:11.123,
EventID9,2023-09-10 01:20:30.123,
EventID3,2023-09-10 01:20:10.123,
EventID5,2023-09-10 01:10:20.123,
EventID1,2023-09-10 01:10:10.123,"
  | multikv forceheader=1
  | table Event_type Time
  | eval _time = strptime(Time, "%F %T.%Q")
  | fields - Time ]
| bin _time span=1d
| stats list(*) as * count by _time

but the way you should do this is

search source1 OR search source2
| eval _time = if(event=from_source_1, relative_time(strptime(Start_Time, "%F %T.%Q"), "@d"), strptime(Time, "%F %T.%Q"))
| bin _time span=1d
| stats list(*) as * count by _time

so this will create a _time field for the source1 events that is the start of the day, create a _time field based on the source2 event times, and then use bin to create a 1-day grouping and stats list to collect them together. Count will always be one more than the number of source2 events.

Note that this
- assumes each source1 event only occurs once on a day
- assumes that source2 events will not occur outside the time range of the source1 events
You can do

... search...
| eval c=actionelementtype.":".actionelementname
| chart sum(Total_Transactions) over _time by c

and then you will get it over time, and you can stack it with the chart format options. Or how did you imagine visualising these two dimensions over _time?
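If you would rather let Splunk handle the daily bucketing and cap the number of series, a timechart variant along these lines should behave similarly (the limit and useother values here are illustrative):

... search...
| eval c=actionelementtype.":".actionelementname
| timechart span=1d limit=10 useother=false sum(Total_Transactions) by c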
Now there's an odd thing with a field called "_time". If that field in your lookup file really is called that WITH the underscore, it very much depends on what that data really is in the lookup, because Splunk will always render the _time field as a string, not as an epoch. So if your lookup contains

"_time",client,noclient
"1694268000.000000",iphone,airpord
"1694354400.000000",samsung,earbud

then when you do inputlookup yourfile.csv it will LOOK like

2023-09-10  iphone   airpord
2023-09-11  samsung  earbud

so in that case the field is already in EPOCH time and you would have to go

| inputlookup times.csv
| where _time>=strptime("2023-09-10", "%F")

and you will get your results back. I suspect this is YOUR case, because ...

HOWEVER, if your lookup contains

"_time",client,noclient
2023-09-10,iphone,airpord
2023-09-11,samsung,earbud

then that where clause will not work and you must first fix up _time. That said, this WILL work if your data is actually strings like the above

| inputlookup abc.csv
| search _time>="2023-09-10"

as you can do string comparisons IFF _time is also a string and you are using the ISO 8601 date format YYYY-MM-DD.
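If you aren't sure which of the two cases you have, a defensive variant can normalise both. This is just a sketch; it assumes the string form is YYYY-MM-DD and the helper field t is made up for illustration:

| inputlookup abc.csv
| eval t = if(match(_time, "^\d+(\.\d+)?$"), tonumber(_time), strptime(_time, "%F")) ``` epoch stays epoch, date strings get parsed ```
| where t >= strptime("2023-09-10", "%F")
| fields - t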
These are the machine-agent logs I get after installing the Machine Agent on the EC2 Linux server.
Splunk cannot compare timestamps as strings.  They must be converted to epoch (integer) form first.

| inputlookup abc.csv
| eval _time = strptime(_time, "%Y-%m-%d")
| search _time >= strptime("2023-09-10", "%Y-%m-%d")
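If you want the converted column to still display as a date afterwards, fieldformat can render it for display only while keeping the epoch value for comparison (a sketch):

| inputlookup abc.csv
| eval _time = strptime(_time, "%Y-%m-%d")
| search _time >= strptime("2023-09-10", "%Y-%m-%d")
| fieldformat _time = strftime(_time, "%Y-%m-%d") ``` display only; the underlying value stays epoch ```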
I have configured a Splunk alert with the alert condition set to Trigger for each result, but every time I only get the alert for one of those results. Any idea why?

Below is a screenshot of the alert:

And below is a sample result from the alert query:
Hello Splunk Family,

I am looking for help on making a graph in Splunk. I am trying to monitor the number of transactions by different method names with different objects, and separate that by date. Here is an example of the data I have

Date   Object Type  Object Name      Total Transactions
Aug 1  LibPush      Root             15
Aug 1  LibPush      ProcessQueue     12
Aug 1  LibPush      Failed           2
Aug 1  Company      ChangeConfigSet  34
Aug 1  Company      CleanUpMsg       15
Aug 1  Company      GetMsg           32
Aug 1  Company      SendMSG          13
Aug 2  LibPush      Root             15
Aug 2  LibPush      ProcessQueue     12
Aug 2  LibPush      Failed           2
Aug 2  Company      ChangeConfigSet  34
Aug 2  Company      CleanUpMsg       15
Aug 2  Company      GetMsg           32
Aug 2  Company      SendMSG          45
Aug 3  LibPush      Root             15
Aug 3  LibPush      ProcessQueue     12
Aug 3  LibPush      Failed           2
Aug 3  Company      ChangeConfigSet  34
Aug 3  Company      CleanUpMsg       15
Aug 3  Company      GetMsg           32
Aug 3  Company      SendMSG          45

The only thing is that there are a lot of Object Types and Object Names, so maybe the top 10 object types per day. Here is a lame attempt at a drawing of what I want. Here is the code I got so far

[mycode]
| bin _time span=1d
| chart count(indexid) over actionelementname by actionelementtype

but it is missing the date and it is not stacked.

Any help would be deeply appreciated!
I have a csv file which has data like this, and I am using

| inputlookup abc.csv
| search _time >= "2023-09-10"

but it is not showing any data

_time       client   noclient
2023-09-10  iphone   airpord
2023-09-11  samsung  earbud

How do I get the data only for the selected date from the above query?
The network team confirms that the traffic couldn't return to the source due to a routing issue. The traffic from the source to the destination on port 9997 ends at the first SYN; the ACK never makes it back.
Indexed fields cannot span major segments.  A space " " is a major breaker that splits a value into multiple major segments, so the value to be indexed must not contain major breakers such as spaces.
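One place to see this in practice is the TERM() directive, which only matches a single major segment (the index and value here are hypothetical):

index=main TERM(10.0.0.1) ``` matches: dots are minor breakers, so the value is still one major segment ```
``` a value like "alice smith" could not be matched with TERM(): the space is a major breaker ```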
Would changing each server's IIS logging settings using a GPO be the recommended option for solving this issue?
Thanks again for your response @yuanliu. This certainly clarifies why the OR would not work for me (these datasets are really timeless, and so the OR was resulting in an empty set). It also gives me some ideas on how to use the stats method (possibly in combination with append or similar) to try to get what I need, and your simulation comes in very handy for experimentation. Your particular solution does not really produce the results I'm looking for (note that the resulting dataset in my example is ALL that I want as a result of the merge - nothing more), but using 'list' instead of 'values' appears to do so (just like @bowesmana suggested). It turns out that my needs have changed since I posted and I happen to not need this 'merge' query anymore, but I appreciate the help, and I think the suggestions were a good lesson in how to use some of the Splunk commands. Thanks again!
All searches expire.  The default is 10 minutes, but shared searches are automatically extended to 7 days.  I'm not aware of a way to extend search results past that.