All Posts

This looks like JSON-format data. If so, you should be extracting it as JSON and using the JSON functions to manipulate it. Please share your full event in raw format in a code block, anonymised as appropriate. This will enable volunteers to better guide you on a way forward.
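For instance, a minimal sketch, assuming the events are valid JSON and the field is literally named errorDetail (adjust the path to your actual structure):

| spath input=_raw path=errorDetail output=errorDetail
| stats count by errorDetail

spath parses the JSON at search time; the stats then lists every distinct errorDetail value together with its count.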
Hello. I have a lot of events. Each event contains a similar string: "errorDetail":"possible_value". How can I create a new field "errorDetail" and stats all its possible values? (There are more than 50 kinds of errorDetail.) For example:
"errorDetail":"acctNumber"
"errorDetail":"Message Version higher"
"errorDetail":"email"
Thank you.
Hi @jaibalaraman, Here's a static example that uses separate elements to display a Sankey-like bar:

{
  "visualizations": {
    "viz_GGlMQrhz": {
      "type": "splunk.rectangle",
      "options": { "fillColor": "#5a4575", "strokeColor": "#5a4575" }
    },
    "viz_sdLspBWZ": {
      "type": "splunk.rectangle",
      "options": { "fillColor": "#5a4575", "strokeColor": "#5a4575", "fillOpacity": 0.5, "strokeOpacity": 0.5 }
    },
    "viz_G2e5COXh": {
      "type": "splunk.rectangle",
      "options": { "fillColor": "#0877a6", "strokeColor": "#0877a6" }
    },
    "viz_izmTEXa4": {
      "type": "splunk.singlevalue",
      "options": { "backgroundColor": "transparent", "majorFontSize": 20 },
      "dataSources": { "primary": "ds_zydmsUyG" }
    },
    "viz_OBDGe1i4": {
      "type": "splunk.markdown",
      "options": { "markdown": "****Account Temporarily Locked Out (403120)****", "fontSize": "custom", "customFontSize": 20 }
    }
  },
  "dataSources": {
    "ds_zydmsUyG": {
      "type": "ds.search",
      "options": {
        "query": "| stats count\n| eval count=123",
        "queryParameters": { "earliest": "0", "latest": "" }
      },
      "name": "Search_1"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": { "token": "global_time", "defaultValue": "-24h@h,now" },
      "title": "Global Time Range"
    }
  },
  "layout": {
    "type": "absolute",
    "options": { "width": 1440, "height": 960, "display": "auto" },
    "structure": [
      { "item": "viz_GGlMQrhz", "type": "block", "position": { "x": 0, "y": 0, "w": 20, "h": 70 } },
      { "item": "viz_sdLspBWZ", "type": "block", "position": { "x": 20, "y": 0, "w": 510, "h": 70 } },
      { "item": "viz_G2e5COXh", "type": "block", "position": { "x": 530, "y": 0, "w": 20, "h": 70 } },
      { "item": "viz_izmTEXa4", "type": "block", "position": { "x": 430, "y": 0, "w": 100, "h": 70 } },
      { "item": "viz_OBDGe1i4", "type": "block", "position": { "x": 30, "y": 20, "w": 400, "h": 30 } }
    ],
    "globalInputs": [ "input_global_trp" ]
  },
  "description": "",
  "title": "Sankey-like"
}
Both status values have the same cause, but different behaviors.  A deferred search will be skipped if it cannot run within the schedule window.  A continued search will run at the next opportunity.
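The relevant knob is the search's schedule window in savedsearches.conf; a minimal sketch, assuming a search named my_scheduled_search (the stanza name is hypothetical):

[my_scheduled_search]
# Allow the scheduler to start this search up to 30 minutes late
schedule_window = 30

With a window like this, a busy scheduler can start the search late within the window instead of exactly on schedule; only if it cannot fit inside the window does the deferred/continued behavior described above kick in.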
Where do the users see this message with a "Return to Splunk" button? Whenever the user does not have access to the Splunk login. Is it possible to configure this in SAML? @PickleRick
The time format actually seems to match your event. But the question is whether the event itself contains the right information. You'd have to check the source system's configuration for that.
Here I am taking the TIME_FORMAT in props.conf from the eventtime field present in the raw data (Toronto's time zone is EST, UTC-5:00). Are there any changes I need to make here?
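If the timestamps in the raw data are written in Toronto local time rather than UTC, a hedged props.conf sketch (the sourcetype name, TIME_PREFIX, and TIME_FORMAT here are hypothetical; adjust them to your actual data):

[my_sourcetype]
TIME_PREFIX = eventtime=
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TZ = America/Toronto

Using TZ = America/Toronto rather than a fixed EST offset lets Splunk handle the EST/EDT daylight-saving change automatically.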
Hi @richgalloway. Thank you for your response. If a continued search could not be scheduled/started, then how is it "continued"? Also, a deferred search could also not be scheduled, so aren't the two the same?
The source itself might simply be misconfigured and using the wrong timezone. If it's using time sync of some kind, this shouldn't happen when the time is reported in UTC; but if the time was manually set using the wrong timezone, it will be reported as a wrong timestamp.
Rather than a picture, please provide your sample data as raw text in a code block (anonymised/obfuscated as necessary) to aid volunteers in designing a solution for you. Also, I assume from your description that this is all ingested as a single event. Does the event have a timestamp or does it take the ingestion time? Having extracted the fields, how do you want them displayed/reported on, e.g. all still in a single event, or separate events for each location (and _time)?
You could use match and a regex:
| eval rule_type=if(match(_raw,"MMT\d+_(?:[^_]+(?<!Aws))_"),"onprem","cloud")
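To sanity-check it, a quick sketch against made-up values (the sample strings are hypothetical):

| makeresults
| eval _raw="MMT12_Onsite_x"
| append [| makeresults | eval _raw="MMT34_ServiceAws_x"]
| eval rule_type=if(match(_raw,"MMT\d+_(?:[^_]+(?<!Aws))_"),"onprem","cloud")
| table _raw rule_type

The first row should come out as onprem and the second as cloud, because the negative lookbehind (?<!Aws) rejects any segment between the underscores that ends in "Aws".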
Please share the raw text of your events, anonymised appropriately, in a code block, not a picture, to assist volunteers designing a solution to meet your requirements.
So this is the reason why you are missing summary data. There could be a number of reasons for this difference:
- There may be a delay in your infrastructure, such that it takes a long time between the event occurring and it being written to the log which is being ingested.
- The application may be writing events with an event time which is many hours prior to the time they are written to the log.
You should investigate this. If it is not something that can be fixed, then you could look at adapting your summary index population searches to take these delays into account, e.g. running a "backfill" search (sketched below) that populates your summary index with these "delayed" events. You would need to be careful about "double-counting" events which have already been included in earlier populations of the summary index.
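A hedged sketch of such a backfill search, using index-time modifiers so that only late-indexed events are summarised (the index, summary, and stats clauses are hypothetical placeholders for your own):

index=my_source earliest=-7d@d latest=@d _index_earliest=-1d@d _index_latest=@d
| stats count by host
| collect index=my_summary marker="backfill=true"

Restricting on _index_earliest/_index_latest picks up only events indexed yesterday, whatever their event time, which is one way to reduce the risk of double-counting events the scheduled population search has already summarised; the marker field also lets you tell backfilled summary rows apart later.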
@ITWhisperer  I ran the query on the source data which fills the summary index and below are the results.
@renjith_nair's search is missing a by clause:
| stats delim="," list(INTEL) as INTEL, list(WEIGHT) as WEIGHT by ID
| nomv INTEL
| nomv WEIGHT
The Time column shown is the local time for the UTC time in the event, which appears to be 4 hours different. This does not show you the index time of the event, merely how the time field was interpreted from the event at ingestion time. You need to do the same calculation you did for the summary index, i.e. _indextime - _time, to find the lag between the event time and the index time and see if this is the "source" of your "delay". Note that this is not really the true source of the delay; but if it is significant, e.g. over 1hr 45 minutes, it could be the reason why you are not getting the events into your summary index. For example, if you have an event with a time of 01:15am, it would have to have been indexed by 02:45am in order for it to appear in the report which is populating the summary index for 01:00am to 02:00am.
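A sketch of that lag check over the source data (index and sourcetype names are hypothetical):

index=my_source sourcetype=my_sourcetype
| eval lag_hours=round((_indextime-_time)/3600,2)
| stats min(lag_hours) avg(lag_hours) max(lag_hours)

Anything consistently above roughly 1.75 hours would miss the hourly summary-population window described above.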
@PickleRick @ITWhisperer  I can see there is a huge delay, in hours, in the source data which fills the summary index: around 8.67 hours. Green arrows: to showcase the index and event time. Below are the attributes I am using in props.conf:

DATETIME_CONFIG =
KV_MODE = xml
NO_BINARY_CHECK = true
CHARSET = UTF-8
LINE_BREAKER = <\/eqtext:EquipmentEvent>()
crcSalt = <SOURCE>
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
MAX_TIMESTAMP_LOOKAHEAD = 754
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3QZ
TIME_PREFIX = \<\/State\>\<eqtext\:EventTime\>
SEDCMD-first = s/^.*<eqtext:EquipmentEvent/<eqtext:EquipmentEvent/g
category = Custom
pulldown_type = true
TZ = UTC

=====================================

Sample logs are attached below.

<eqtext:EquipmentEvent xmlns:eqtext="http://Asas.com/FM/EqtEvent/EqtEventExtTypes/V1/1/5" xmlns:sbt="http://Asas.com/FM/Common/Services/ServicesBaseTypes/V1/8/4" xmlns:eqtexo="http://Asas.com/FM/EqtEvent/EqtEventExtOut/V1/1/5"><eqtext:ID><eqtext:Location><eqtext:PhysicalLocation><AreaID>7073</AreaID><ZoneID>33</ZoneID><EquipmentID>81</EquipmentID><ElementID>0</ElementID></eqtext:PhysicalLocation></eqtext:Location><eqtext:Description> Applicator tamper is jammed</eqtext:Description><eqtext:MIS_Address>0.1</eqtext:MIS_Address></eqtext:ID><eqtext:Detail><State>WENT_OUT</State><eqtext:EventTime>2024-08-16T12:14:24.843Z</eqtext:EventTime><eqtext:MsgNr>6232609270406364028</eqtext:MsgNr><Severity>LOW</Severity><eqtext:OperatorID>WALVAU-SCADA-1</eqtext:OperatorID><ErrorType>TECHNICAL</ErrorType></eqtext:Detail></eqtext:EquipmentEvent>

<eqtext:EquipmentEvent xmlns:eqtext="http://Asas.com/FM/EqtEvent/EqtEventExtTypes/V1/1/5" xmlns:sbt="http://Asas.com/FM/Common/Services/ServicesBaseTypes/V1/8/4" xmlns:eqtexo="http://Asas.com/FM/EqtEvent/EqtEventExtOut/V1/1/5"><eqtext:ID><eqtext:Location><eqtext:PhysicalLocation><AreaID>7073</AreaID><ZoneID>33</ZoneID><EquipmentID>81</EquipmentID><ElementID>0</ElementID></eqtext:PhysicalLocation></eqtext:Location><eqtext:Description> Applicator tamper is jammed</eqtext:Description><eqtext:MIS_Address>0.1</eqtext:MIS_Address></eqtext:ID><eqtext:Detail><State>ACK_BY_SYSTEM</State><eqtext:EventTime>2024-08-16T12:14:24.843Z</eqtext:EventTime><eqtext:MsgNr>6232609270406364028</eqtext:MsgNr><Severity>LOW</Severity><eqtext:OperatorID>WALVAU-SCADA-1</eqtext:OperatorID><ErrorType>TECHNICAL</ErrorType></eqtext:Detail></eqtext:EquipmentEvent>

Please help me understand what I can do to fix it.
You can put your summary indexes in different apps and only allow certain roles access to the different apps, or you could restrict access to the indexes by role. As for populating the summary index, how are you doing this? And what do you mean by "original fields"?
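For the role-based route, a minimal authorize.conf sketch (role and index names are hypothetical):

[role_team_a]
srchIndexesAllowed = summary_team_a
srchIndexesDefault = summary_team_a

Members of this role can then only search the summary index you allow them.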
What delays do you get for your source data?
There's nothing wrong with the index itself. Leave it alone. Depending on your data, your search, and your collect command syntax, that can actually be an OK result. It's impossible to say without knowing your use case and those details.