All Posts
The `if(event=from_source_1, ...)` is a test you will have to make using whatever fields you have that indicate the data is a source1 event (source/sourcetype/index?). If you need to take only the source2 events that fall inside the source1 window, or if the window can span more than one day, you'll have to do it a bit differently.
If you paste this into your search window, you can see it being done with your example dataset:

```
| makeresults
| eval _raw="DATE,Start_Time,End_Time
Day_3,2023-09-12 01:12:12.123,2023-09-13 01:13:13.123
Day_2,2023-09-11 01:11:11.123,2023-09-12 01:12:12.123
Day_1,2023-09-10 01:10:10.123,2023-09-11 01:11:11.123"
| multikv forceheader=1
| table DATE Start_Time End_Time
| eval _time = relative_time(strptime(Start_Time, "%F %T.%Q"), "@d")
| append [
  | makeresults
  | eval _raw="Event type,Time,Others
EventID2,2023-09-11 01:20:20.123,
EventID1,2023-09-11 01:11:11.123,
EventID9,2023-09-10 01:20:30.123,
EventID3,2023-09-10 01:20:10.123,
EventID5,2023-09-10 01:10:20.123,
EventID1,2023-09-10 01:10:10.123,"
  | multikv forceheader=1
  | table Event_type Time
  | eval _time = strptime(Time, "%F %T.%Q")
  | fields - Time ]
| bin _time span=1d
| stats list(*) as * count by _time
```

But the way you should do this is:

```
search source1 OR search source2
| eval _time = if(event=from_source_1, relative_time(strptime(Start_Time, "%F %T.%Q"), "@d"), strptime(Time, "%F %T.%Q"))
| bin _time span=1d
| stats list(*) as * count by _time
```

This creates a _time field for the source1 events that is the start of the day, creates a _time field based on the source2 event times, then uses bin to create a one-day grouping and stats list to collect them together. Count will always be one more than the number of source2 events.

Note that this assumes:
- each source1 event occurs only once per day
- source2 events will not occur outside the time range of the source1 window
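The day-bucketing step above can be sketched outside of SPL. This is a plain-Python illustration (not Splunk code) of what `bin _time span=1d | stats list(Event_type) by _time` does with the source2 sample events, using the event IDs from the question:

```python
from datetime import datetime
from collections import defaultdict

def day_bucket(ts: str) -> str:
    """Floor a timestamp string to the start of its day (the @d snap)."""
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f")
    return dt.strftime("%Y-%m-%d")

events = [
    ("EventID2", "2023-09-11 01:20:20.123"),
    ("EventID1", "2023-09-11 01:11:11.123"),
    ("EventID9", "2023-09-10 01:20:30.123"),
    ("EventID3", "2023-09-10 01:20:10.123"),
    ("EventID5", "2023-09-10 01:10:20.123"),
    ("EventID1", "2023-09-10 01:10:10.123"),
]

# Group event IDs per day, like | bin _time span=1d | stats list(*) by _time
by_day = defaultdict(list)
for event_id, ts in events:
    by_day[day_bucket(ts)].append(event_id)

print(dict(by_day))
```

Each day's list holds every event whose timestamp falls inside that one-day bucket, which is exactly the grouping the stats list produces.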
You can do

```
... search ...
| eval c=actionelementtype.":".actionelementname
| chart sum(Total_Transactions) over _time by c
```

and then you will get it over time and you can stack it with the chart format options. Or how did you imagine visualising these two dimensions over _time?
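The composite-key trick above (concatenate the two dimensions into one series name, then sum per time bucket) can be sketched in plain Python (not Splunk code), using a few of the sample rows from the question:

```python
from collections import defaultdict

# (date, object type, object name, total transactions) -- rows from the question's sample
rows = [
    ("Aug 1", "LibPush", "Root", 15),
    ("Aug 1", "Company", "GetMsg", 32),
    ("Aug 2", "LibPush", "Root", 15),
    ("Aug 2", "Company", "GetMsg", 32),
]

# eval c = actionelementtype . ":" . actionelementname,
# then chart sum(Total_Transactions) over _time by c
table = defaultdict(lambda: defaultdict(int))
for date, etype, name, total in rows:
    c = f"{etype}:{name}"
    table[date][c] += total

print({d: dict(cols) for d, cols in table.items()})
```

Each row of the resulting table is one time bucket, and each `type:name` column becomes one stackable series in the chart.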
Now there's an odd thing with a field called "_time". If that field in your lookup file really is called that WITH the underscore, it very much depends on what that data really is in the lookup, because Splunk will always render the _time field as a string, not as an epoch. So if your lookup contains

```
"_time",client,noclient
"1694268000.000000",iphone,airpord
"1694354400.000000",samsung,earbud
```

then when you do `| inputlookup yourfile.csv` it will LOOK like

```
2023-09-10 iphone airpord
2023-09-11 samsung earbud
```

so in that case the field is already in EPOCH time and you would have to go

```
| inputlookup times.csv
| where _time>=strptime("2023-09-10", "%F")
```

and you will get your results back. I suspect this is YOUR case, because ... HOWEVER, if your lookup contains

```
"_time",client,noclient
2023-09-10,iphone,airpord
2023-09-11,samsung,earbud
```

then that where clause will not work and you must first fix up _time. That said, this WILL work if your data is actually strings like the above:

```
| inputlookup abc.csv
| search _time>="2023-09-10"
```

as you can do string comparisons IFF _time is also a string and you are using ISO8601 date format YYYY-MM-DD.
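The closing point (lexicographic comparison of ISO8601 `YYYY-MM-DD` strings orders the same way as comparing the underlying epoch values) is easy to check outside Splunk; this is a plain-Python sketch, not Splunk code:

```python
from datetime import datetime

dates = ["2023-09-10", "2023-09-11"]

# Epoch values for the same dates
epochs = [datetime.strptime(d, "%Y-%m-%d").timestamp() for d in dates]

# Both comparisons agree, which is why `search _time >= "2023-09-10"`
# works when _time is an ISO8601 string in YYYY-MM-DD format.
string_order = dates[0] < dates[1]
epoch_order = epochs[0] < epochs[1]

# A string filter keeps the same rows an epoch filter would
filtered = [d for d in dates if d >= "2023-09-10"]
print(string_order, epoch_order, filtered)
```

This only holds because ISO8601 pads every component to fixed width; a format like `9/10/2023` would sort incorrectly as a string.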
These are the machine-agent logs which I get after installing the Machine Agent on the EC2 Linux server.
Splunk cannot compare timestamps as strings. They must be converted to epoch (integer) form first:

```
| inputlookup abc.csv
| eval _time = strptime(_time, "%Y-%m-%d")
| search _time >= strptime("2023-09-10", "%Y-%m-%d")
```
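What the strptime conversion above does can be mirrored in plain Python (not Splunk code), using the rows from the question's lookup:

```python
from datetime import datetime

def to_epoch(s: str) -> float:
    """Mirror of SPL strptime(_time, "%Y-%m-%d"): date string -> epoch seconds."""
    return datetime.strptime(s, "%Y-%m-%d").timestamp()

rows = [("2023-09-10", "iphone"), ("2023-09-11", "samsung")]
cutoff = to_epoch("2023-09-10")

# Keep rows whose date is on or after the cutoff, as the search filter does
kept = [r for r in rows if to_epoch(r[0]) >= cutoff]
print(kept)
```

After the conversion both sides of the `>=` are numbers, so the comparison is unambiguous regardless of the original string format.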
I have configured a Splunk alert with the alert condition set to "Trigger for each result", but every time I get the alert for only one of those results. Any idea why? Below is a screenshot of the alert, and below that is a sample result from the alert query.
Hello Splunk Family, I am looking for help on making a graph in Splunk. I am trying to monitor the number of transactions by different method names with different objects, and separate that by date. Here is an example of the data I have:

| Date | Object Type | Object Name | Total Transactions |
|------|-------------|-----------------|----|
| Aug 1 | LibPush | Root | 15 |
| Aug 1 | LibPush | ProcessQueue | 12 |
| Aug 1 | LibPush | Failed | 2 |
| Aug 1 | Company | ChangeConfigSet | 34 |
| Aug 1 | Company | CleanUpMsg | 15 |
| Aug 1 | Company | GetMsg | 32 |
| Aug 1 | Company | SendMSG | 13 |
| Aug 2 | LibPush | Root | 15 |
| Aug 2 | LibPush | ProcessQueue | 12 |
| Aug 2 | LibPush | Failed | 2 |
| Aug 2 | Company | ChangeConfigSet | 34 |
| Aug 2 | Company | CleanUpMsg | 15 |
| Aug 2 | Company | GetMsg | 32 |
| Aug 2 | Company | SendMSG | 45 |
| Aug 3 | LibPush | Root | 15 |
| Aug 3 | LibPush | ProcessQueue | 12 |
| Aug 3 | LibPush | Failed | 2 |
| Aug 3 | Company | ChangeConfigSet | 34 |
| Aug 3 | Company | CleanUpMsg | 15 |
| Aug 3 | Company | GetMsg | 32 |
| Aug 3 | Company | SendMSG | 45 |

The only thing is that there are a lot of Object Types and Object Names, so maybe the top 10 object types per day. Here is a lame attempt at a drawing of what I want. Here is the code I got so far:

```
[mycode]
| bin _time span=1d
| chart count(indexid) over actionelementname by actionelementtype
```

but it is missing the date and it is not stacked. Any help would be deeply appreciated!
I have a csv file which has data like this and I am using

```
| inputlookup abc.csv | search _time >= "2023-09-10"
```

but it is not showing any data.

| _time | client | noclient |
|-------|--------|----------|
| 2023-09-10 | iphone | airpord |
| 2023-09-11 | samsung | earbud |

How do I get the data only for the selected date from the above query?
The network team confirms that the traffic couldn't return to the source due to a routing issue. The traffic from the src to dest via port 9997 ends at the first SYN only, and the ACK couldn't get back.
Indexed fields cannot span major segments. A space " " breaks the value into multiple major segments, so a value to be indexed must not contain major index breakers like the space character.
Would changing each server's iis logging settings using a GPO be the recommended option for solving this issue?
Thanks again for your response @yuanliu . This certainly clarifies why the OR would not work for me (these datasets are really timeless, and so the OR was resulting in an empty set). It also gives me some ideas of how to use the stats method (possibly in combination with append or similar) to try to get what I need, and your simulation comes in very handy for experimentation. Your particular solution does not really produce the results that I'm looking for (note that the resulting dataset in my example is ALL that I want as a result of the merge - nothing more), but using 'list' instead of 'values' appears to do so (just like @bowesmana suggested). Turns out that my needs changed since I posted and I happen to not need to produce this 'merge' query anymore, but I appreciate the help here and I think the suggestions were a good lesson on how to use some of the Splunk commands. Thanks again!
All searches expire.  The default is 10 minutes, but shared searches are automatically extended to 7 days.  I'm not aware of a way to extend search results past that.
You can always specify the time window in the search itself (earliest, latest, etc.). See Time modifiers. As to "share job", a saved search, aka "Report", might be a viable alternative. After your search launches, you can "Save as" and select Report to give it a name.
Nice! Love these little Splunk quirks aka tricks. (For anyone who stumbles upon the same need in the future: using _ would be perfect if Total doesn't have to be nullified. I need to null Total, so the amount of work will be similar to reordering with foreach.)
If the dashboards somehow found their way into the /default folder then you will not be able to delete them using the UI.  Otherwise, check the permissions on the dashboards to make sure you have write access to them.  Failing that, the CLI may be your best answer.
Now that we are thinking in Splunk terms, note that the "......" part in your illustration can make a difference in how best to construct a "solution". So, I assume that Dataset A and B are NOT from the same sources; e.g., fields b and d must come from different sources, different sourcetypes, different periods of time, even different indices. Without such information, volunteers have to make assumptions that may or may not be helpful. Where this has the biggest impact is when the two datasets come from different indices and/or time periods.

For simplicity, I will assume a common scenario where both datasets come from the same index and the same time period. Further assume that the only differentiating factor is sourcetype, A and B. An effective OR would be between these two:

```
index=common_index ((sourcetype = A) OR (sourcetype = B))
```

Now, the above is often expressed as

```
index=common_index sourcetype IN (A, B)
```

Meanwhile, you may often have additional, differing search terms for A and B, so you may want to keep those parentheses. For example, you may want to restrict events to only those with fully populated fields of interest:

```
index=common_index ((sourcetype = A a=* b=* c=*) OR (sourcetype = B a=* d=* e=* f=*))
```

Anyway, my previous post only demonstrated how to leverage any key as "primary key", but did not include the final step of an outer join.
Here it is for your scenario:

```
| stats values(*) as * by a
| fields a b c d e f
| foreach * [mvexpand <<FIELD>>]
```

Using your sample datasets, the output is

```
a  b  c  d  e  f
a1 b1 c1 d1 e1 f1
a1 b1 c1 d1 e1 f2
a1 b1 c1 d1 e1 f3
a1 b1 c1 d1 e2 f1
a1 b1 c1 d1 e2 f2
a1 b1 c1 d1 e2 f3
a1 b1 c1 d1 e3 f1
a1 b1 c1 d1 e3 f2
a1 b1 c1 d1 e3 f3
a1 b1 c1 d2 e1 f1
a1 b1 c1 d2 e1 f2
a1 b1 c1 d2 e1 f3
a1 b1 c1 d2 e2 f1
a1 b1 c1 d2 e2 f2
a1 b1 c1 d2 e2 f3
a1 b1 c1 d2 e3 f1
a1 b1 c1 d2 e3 f2
a1 b1 c1 d2 e3 f3
a1 b1 c1 d3 e1 f1
a1 b1 c1 d3 e1 f2
```

Here is an emulation that you can play with and compare with real data (the search below is just data emulation):

```
| makeresults
| eval _raw = "a,b,c
a1,b1,c1
a2,b2,c2"
| multikv forceheader=1
| fields - _* linecount
| eval sourcetype = "A"
| append [makeresults
  | eval _raw = "a,d,e,f
a1,d1,e1,f1
a1,d2,e2,f2
a1,d3,e3,f3
a2,d4,e4,f4
a2,d5,e5,f5"
  | multikv forceheader=1
  | fields - _* linecount
  | eval sourcetype = "B"]
```
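The `foreach * [mvexpand <<FIELD>>]` pattern expands every multivalue field in turn, which amounts to taking the Cartesian product of the values. A plain-Python sketch (not Splunk code) of that expansion for the key `a1` from the sample data:

```python
from itertools import product

# Multivalue fields after | stats values(*) as * by a, for key a1
row = {
    "a": ["a1"],
    "b": ["b1"],
    "c": ["c1"],
    "d": ["d1", "d2", "d3"],
    "e": ["e1", "e2", "e3"],
    "f": ["f1", "f2", "f3"],
}

fields = ["a", "b", "c", "d", "e", "f"]

# Successive mvexpand over every field == Cartesian product of the values
expanded = [dict(zip(fields, combo))
            for combo in product(*(row[f] for f in fields))]

print(len(expanded))  # 1*1*1*3*3*3 = 27 rows for key a1
```

This also shows why the technique can blow up quickly: the row count is the product of the multivalue cardinalities, so it should only be used when those lists are small.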
By "use case" I presume you mean "search". If so, you can get all saved searches with this query:

```
| rest /servicesNS/-/-/saved/searches
```
Hi @Abhiram.Sahoo, at this time, I was told it's best to reach out to AppD Support for more help. See: How do I submit a Support ticket? An FAQ