All Posts


Please share the raw text of your events, anonymised appropriately, in a code block, not a picture, to assist volunteers designing a solution to meet your requirements.
So this is the reason why you are missing summary data. There could be a number of reasons for this difference. It could be that there is a delay in your infrastructure such that it takes a long time between the event being written to the log and it being ingested. It could be that the application is writing events with an event time which is many hours prior to the time it is written to the log. You should investigate this. If this is not something that can be fixed, then you could look at your summary index population searches to take these delays into account, e.g. running "backfill" searches that populate your summary index with these "delayed" events. You would need to be careful about "double-counting" events which have already been included in earlier populations of the summary index.
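To illustrate the backfill idea, here is a minimal sketch of a one-off population search, assuming a hypothetical base search over index=my_source and a summary index named my_summary (both names are placeholders, not from this thread). It re-runs the hourly aggregation over an older time window so that late-arriving events get summarised, and uses the collect marker option to tag the backfilled rows:

index=my_source earliest=-24h@h latest=-23h@h
| stats count by host
| collect index=my_summary marker="backfill=true"

The window boundaries (and, if necessary, a filter on _indextime) are what protect you from double-counting periods that have already been summarised.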
@ITWhisperer I ran the query on the source data which fills the summary index and below are the results.
@renjith_nair Your search is missing a by clause:

| stats delim="," list(INTEL) as INTEL, list(WEIGHT) as WEIGHT by ID
| nomv INTEL
| nomv WEIGHT
The Time column shown is the local time for the UTC time in the event, which appears to be 4 hours different. This does not show you the index time of the event, merely how the time field has been interpreted from the event at ingestion time. You need to do the same calculation you did for the summary index, i.e. _indextime - _time, to find out the lag between the event time and the index time and to see if this is the "source" of your "delay" - note this is not really the true source of the delay. If the lag is significant, e.g. over 1 hour 45 minutes, this could be the reason why you are not getting the events into your summary index. For example, if you have an event with a time of 01:15am, it would have to have been indexed by 02:45am in order for it to appear in the report which is populating the summary index for 01:00am to 02:00am.
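As a sketch of that check, assuming your source data lives in a hypothetical index=my_source, you can compute the ingestion lag per event and summarise it:

index=my_source
| eval lag_seconds = _indextime - _time
| stats min(lag_seconds) as min_lag avg(lag_seconds) as avg_lag perc95(lag_seconds) as p95_lag max(lag_seconds) as max_lag

Any events whose lag is greater than about 6300 seconds (1 hour 45 minutes) would miss the hourly summary run described above.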
@PickleRick @ITWhisperer I can see there is a huge delay, in hours, in the source data which fills the summary index - it is around 8.67 hours. Green arrows: to showcase the index and event time. Below are the attributes I am using in props:

DATETIME_CONFIG =
KV_MODE = xml
NO_BINARY_CHECK = true
CHARSET = UTF-8
LINE_BREAKER = <\/eqtext:EquipmentEvent>()
crcSalt = <SOURCE>
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
MAX_TIMESTAMP_LOOKAHEAD = 754
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3QZ
TIME_PREFIX = \<\/State\>\<eqtext\:EventTime\>
SEDCMD-first = s/^.*<eqtext:EquipmentEvent/<eqtext:EquipmentEvent/g
category = Custom
pulldown_type = true
TZ = UTC

=====================================
Sample logs I am attaching below.

<eqtext:EquipmentEvent xmlns:eqtext="http://Asas.com/FM/EqtEvent/EqtEventExtTypes/V1/1/5" xmlns:sbt="http://Asas.com/FM/Common/Services/ServicesBaseTypes/V1/8/4" xmlns:eqtexo="http://Asas.com/FM/EqtEvent/EqtEventExtOut/V1/1/5"><eqtext:ID><eqtext:Location><eqtext:PhysicalLocation><AreaID>7073</AreaID><ZoneID>33</ZoneID><EquipmentID>81</EquipmentID><ElementID>0</ElementID></eqtext:PhysicalLocation></eqtext:Location><eqtext:Description> Applicator tamper is jammed</eqtext:Description><eqtext:MIS_Address>0.1</eqtext:MIS_Address></eqtext:ID><eqtext:Detail><State>WENT_OUT</State><eqtext:EventTime>2024-08-16T12:14:24.843Z</eqtext:EventTime><eqtext:MsgNr>6232609270406364028</eqtext:MsgNr><Severity>LOW</Severity><eqtext:OperatorID>WALVAU-SCADA-1</eqtext:OperatorID><ErrorType>TECHNICAL</ErrorType></eqtext:Detail></eqtext:EquipmentEvent>

<eqtext:EquipmentEvent xmlns:eqtext="http://Asas.com/FM/EqtEvent/EqtEventExtTypes/V1/1/5" xmlns:sbt="http://Asas.com/FM/Common/Services/ServicesBaseTypes/V1/8/4" xmlns:eqtexo="http://Asas.com/FM/EqtEvent/EqtEventExtOut/V1/1/5"><eqtext:ID><eqtext:Location><eqtext:PhysicalLocation><AreaID>7073</AreaID><ZoneID>33</ZoneID><EquipmentID>81</EquipmentID><ElementID>0</ElementID></eqtext:PhysicalLocation></eqtext:Location><eqtext:Description> Applicator tamper is jammed</eqtext:Description><eqtext:MIS_Address>0.1</eqtext:MIS_Address></eqtext:ID><eqtext:Detail><State>ACK_BY_SYSTEM</State><eqtext:EventTime>2024-08-16T12:14:24.843Z</eqtext:EventTime><eqtext:MsgNr>6232609270406364028</eqtext:MsgNr><Severity>LOW</Severity><eqtext:OperatorID>WALVAU-SCADA-1</eqtext:OperatorID><ErrorType>TECHNICAL</ErrorType></eqtext:Detail></eqtext:EquipmentEvent>

Please help me understand what I can do to fix it.
You can put your summary indexes in different apps and only allow certain roles access to those apps, or you could restrict access to the indexes by role. How are you populating the summary index? And what do you mean by "original fields"?
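For the role-based restriction, a minimal authorize.conf sketch might look like this (the role and index names are made up for illustration; adjust to your environment):

[role_summary_readers]
importRoles = user
srchIndexesAllowed = my_summary
srchIndexesDefault = my_summary

A user holding only this role would then be able to search the my_summary index but not your other summary indexes.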
What delays do you get for your source data?
There's nothing wrong with the index itself. Leave it alone. Depending on your data, your search and your collect command syntax, that can actually be an OK result. Impossible to say without knowing your use case and those details.
@PickleRick As per the below screenshot, I can see huge delays in the indexing. Is this the cause of the data not being visible on time? What actions do I need to perform for the summary index?
This discussion has been very informative.
1. There is no such thing as a "subindex". Indexes are separate entities and do not form any kind of hierarchy.
2. Unless you have a Very Good Reason (tm), there's not much sense in splitting data into multiple indexes - you use search-time filters to return just a subset of your events when needed.
3. Summary indexing is usually used for - as the name says - storing pre-aggregated summaries of your data so you can later use those aggregates to speed up your searches (see the sketch below). Using collect to simply copy events from one index to another _usually_ doesn't make much sense (see also 2.).
So, what's the use case?
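To make point 3 concrete, here is a sketch of the usual pattern, with hypothetical index, sourcetype and field names: a scheduled search aggregates the last hour of raw events and writes only the rollup to the summary index, instead of copying the raw events themselves.

index=web sourcetype=access_combined earliest=-1h@h latest=@h
| stats count as request_count by host status
| collect index=web_summary

Later reports then run against index=web_summary and scan a handful of rollup rows per hour rather than every raw event.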
Apart from the technicalities which @yuanliu already tackled, there is also a logical flaw in your approach. Even if you aggregate your second search output into a single count, you have two relatively unrelated values. Subtracting cardinalities makes sense only if one set is a subset of the other. In your case those sets may overlap, but one doesn't have to be included in the other.
Again - where do the users see this message with a "Return to Splunk" button? I don't recall anything with this functionality in a core Splunk installation.
Do you mean something like this:

| rex "^\S+\s+\((?<transaction_id>[^\)]+)"
| transaction transaction_id startswith="Starting execution for request" endswith="Successfully completed execution"

Here is an emulation of your mock sample data you can play with and compare with real data:

| makeresults format=csv data="_raw
2024-08-12T10:04:16.962-04:00 (434-abc-345789-de456ght) Extended Request Id: cmtf1111111111111111=
2024-08-12T10:04:16.963-04:00 (434-abc-345789-de456ght) Verifying Usage Plan for request: AAAAAAAAAAAAAAAAAAAAAAAA
2024-08-12T10:04:16.964-04:00 (434-abc-345789-de456ght) BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
2024-08-12T10:04:16.964-04:00 (434-abc-345789-de456ght) AAAAAAAAAABBBBBBBBBBBBBCCCCCCCCCCCCCCCCCC
2024-08-12T10:04:16.964-04:00 (434-abc-345789-de456ght) Starting execution for request: 8hhhhh-cdcd-434444-8bbb-dedr44444
2024-08-16T10:04:16.964-04:00 (434-abc-345789-de456ght) HTTP Method: POST, Resource Path: /ddd/Verifyffghhjj/ddddddd
2024-08-16T10:04:25.969-04:00 (434-abc-345789-de456ght) Successfully completed execution
2024-08-16T10:04:25.969-04:00 (434-abc-345789-de456ght) Method completed with status: 200
2024-08-16T10:04:25.969-04:00 (434-abc-345789-de456ght) AAAAAA Integration Endpoint RequestId: 11111111111111111111"
| rex "^(?<_time>\S+)"
| eval _time = strptime(_time, "%FT%T.%3N")
| sort - _time
``` data emulation above ```
You realize that the first search results in one single row, and the second gives a series of rows, right? Without illustrating or describing what your desired output looks like, you are asking volunteers to read your mind. This is generally a bad idea on a forum like this.

If your requirement is to subtract the singular Count in the first search from "Bookmark Status" in every row in the second search, you can do something as simple as

| rest /services/saved/searches
| search alert_type!="always" AND action.email.to="production@email.com" AND title!="*test*"
| stats count(action.email.to) AS "Count"
| appendcols
    [sseanalytics 'bookmark'
    | where bookmark_status="successfullyImplemented"
    | stats count(bookmark_status_display) AS "Bookmark Status" by bookmark_status_display]
| eventstats values(Count) as Count
| eval diff = 'Bookmark Status' - Count

Here I am using appendcols instead of the usual approach using append because one of the searches only gives out one single row. This is not the most semantic approach but sometimes I like code economy. In fact, this method applies to any two searches as long as one of them yields a single row. Here is an emulation as proof of concept:

| tstats count AS Count where index=_internal
``` the above emulates
| rest /services/saved/searches
| search alert_type!="always" AND action.email.to="production@email.com" AND title!="*test*"
| stats count(action.email.to) AS "Count" ```
| appendcols
    [tstats count AS "Bookmark Status" where index=_introspection by sourcetype
    | rename sourcetype AS bookmark_status_display
    ``` this subsearch emulates
    | sseanalytics 'bookmark'
    | where bookmark_status="successfullyImplemented"
    | stats count(bookmark_status_display) AS "Bookmark Status" by bookmark_status_display ```
    ]
| eventstats values(Count) as Count
| eval diff = 'Bookmark Status' - Count

You will get something like

Count    Bookmark Status    bookmark_status_display         diff
151857   201                http_event_collector_metrics    -151656
151857   2365               kvstore                          -149492
151857   57                 search_telemetry                 -151800
151857   462                splunk_disk_objects              -151395
151857   303                splunk_telemetry                 -151554
Try https://github.com/whackyhack/Splunk-org-chart.  (Play with the dashboard to find who's the big boss:-)
Not totally clear what the eventstats is doing here. It would help if you could illustrate the desired results from mock data. Do you mean to produce two tables like these?

1. superhero

archetype   id         strengths
superhero   superman   super strength, flight, and heat vision
superhero   batman     exceptional martial arts skills, detective abilities, and psychic abilities

2. villain

archetype   id      strengths
villain     joker   cunning and unpredictable personality

To do these, you can use

index=characters
| spath path={}
| mvexpand {}
| spath input={}
| fields id, strengths, archetype
| where archetype="superhero"
| stats values(*) as * by id

for superhero; for villain, use

index=characters
| spath path={}
| mvexpand {}
| spath input={}
| fields id, strengths, archetype
| where archetype="villain"
| stats values(*) as * by id

Here is an emulation for you to play with and compare with real data:

| makeresults
| eval _raw="[ { \"id\": \"superman\", \"strengths\": \"super strength, flight, and heat vision\", \"archetype\": \"superhero\" }, { \"id\": \"batman\", \"strengths\": \"exceptional martial arts skills, detective abilities, and psychic abilities\", \"archetype\": \"superhero\" }, { \"id\": \"joker\", \"strengths\": \"cunning and unpredictable personality\", \"archetype\": \"villain\" } ]"
| spath
``` the above emulates index=characters ```
@PickleRick  “Whenever I click on ‘Return to Splunk,’ it redirects to the Splunk login page. Instead, I want it to redirect to a custom URL. When users face login issues, a message will pop up, and when they click ‘Return to Splunk,’ they will be redirected to the custom URL.” How can I do this?
If you have an identifier for each transaction, such as a transaction id, use stats to get the earliest and latest times, e.g.

your search
| stats earliest(_time) as starttime, latest(_time) as endtime by transactionID
| eval duration=endtime-starttime
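Here is a small emulation of that approach you can run and adapt; the timestamps and transaction IDs are made up:

| makeresults format=csv data="time,transactionID
2024-08-16 10:00:00,abc-123
2024-08-16 10:00:07,abc-123
2024-08-16 10:01:00,def-456
2024-08-16 10:01:02,def-456"
| eval _time=strptime(time,"%Y-%m-%d %H:%M:%S")
| stats earliest(_time) as starttime latest(_time) as endtime by transactionID
| eval duration=endtime-starttime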