All Posts

Hi @EFonua  It seems that something must have changed in either the field extractions for your users or in the source data. Have you updated any apps recently, or made any changes to field extractions? Without the actual search you are running it is hard for us to determine the issue here, but I would start by running the search manually to see what user values you get, then work back from there to determine why the correct value isn't appearing.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
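A minimal sketch of that manual check; the index and sourcetype names are placeholders, since the original search isn't shown in the thread:

index=your_index sourcetype=your_sourcetype
| stats count by user
| sort - count

If an unexpected value dominates, compare a few raw events against the extraction that produces the user field to see which side changed.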
Hi @ws  For an all-in-one instance you are good, and yes, once ready to deploy you would put this on your HF. One thing I've just noticed, which I missed before, is that you are changing the sourcetype. The second set of props probably isn't applying to the new sourcetype name (you can't have two bites of the same cherry...), so try applying the event-breaking props to the original sourcetype in the [preprocess_case] stanza, as sketched below.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
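A hedged props.conf sketch of that idea; the LINE_BREAKER pattern is borrowed from elsewhere in this thread, and the rename transform name is hypothetical:

# props.conf -- event breaking must be bound to the sourcetype the data
# arrives with, because line breaking runs before any sourcetype rewrite
# takes effect
[preprocess_case]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\s*{(?=\s*"attribute":\s*{)
# hypothetical rename transform defined in transforms.conf
TRANSFORMS-rename_st = rewrite_sourcetype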
Hi @sabollam  I think you first need to address the issue of multiple JSON events appearing in a single event, as per your screenshot. I suspect that the reason you are getting the "none" value is that the json_extract to get the timestamp value is failing, because the JSON is not valid / there are multiple events in it. If you can get the event breaking working properly then I think the INGEST_EVAL should work. As others have said, it's worth making sure you are doing this consciously, based on a valid decision; there may be other ways to achieve this.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
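A quick way to confirm that suspicion at search time, sketched with json_valid (assumes a Splunk version that has json_valid, 8.1 or later; index and sourcetype are placeholders):

index=your_index sourcetype=your_sourcetype
| eval is_valid=json_valid(_raw)
| stats count by is_valid

If most events come back invalid, fix the line breaking first and the INGEST_EVAL should start behaving.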
Hi @livehybrid, For testing purposes, my architecture is an all-in-one setup. For my actual deployment, to my understanding the props.conf and transforms.conf will be on my HF, right? Since the pulled JSON file lands on my HF local server.
Hi @ws  Can you confirm where you applied those props/transforms and what your architecture looks like? They need to be applied on either the HF or the indexers, depending on where the data first lands. You can verify what is actually in effect on a given host with btool, as sketched below.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
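A hedged way to check which props a host will actually apply, run on that host (the sourcetype name is a placeholder):

$SPLUNK_HOME/bin/splunk btool props list yourSourcetype --debug

The --debug flag shows which file each effective setting comes from, which makes it easy to spot props sitting on the wrong tier.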
Hi @kiran_panchavat, I noticed that the sample provided aligns with what I'm trying to achieve. However, after applying the same settings for testing, I'm still not getting the same results as you. I've attached screenshots for your reference; please help point out any mistakes or adjustments that may be needed. I don't believe the issue lies with the transforms.conf configuration.

JSON file:

[
  {
    "attribute": { "type": "case" },
    "Id": "I0000005",
    "name": "ws",
    "email": "ws@gmail.com",
    "case type__c": "Service Case",
    "date": "17/4/2025",
    "time": "16:15",
    "account": {
      "attribute": { "type": "account" },
      "Id": "I0000005"
    }
  },
  {
    "attribute": { "type": "case" },
    "Id": "I0000006",
    "name": "thomas",
    "email": "thomas@gmail.com",
    "case type__c": "Transaction Case",
    "date": "17/4/2025",
    "time": "16:15",
    "account": {
      "attribute": { "type": "account" },
      "Id": "I0000006"
    }
  }
]

(Screenshots attached: Search Head, props.conf, transforms.conf)
Thanks @livehybrid, preliminary testing shows this seems to be working - great! Will of course mark as solution after some further testing. To answer your initial question... it was allowing all 4662 events.
1. Ahem, your "dev team" cannot handle an epoch timestamp? That is... surprising, to say the least.
2. Who produces those logs? Another app written by another "dev team"?
The reason is, our dev team requires the timestamp, which is in epoch, to be formatted as "%m-%d-%Y %H:%M:%S.%3N". I have already created a calculated field to convert it to the format we require, but they still need this to be done at the indexing stage.

props.conf

[resource_timestamp]
SHOULD_LINEMERGE = false
INDEXED_EXTRACTIONS = json
KV_MODE = none
TIME_PREFIX = \"timestamp\"\:
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 13
TRANSFORMS-updateTimestamp = updateTimestamp
TRANSFORMS-overrideTimeStamp = overrideTimeStamp

transforms.conf

[overrideTimeStamp]
INGEST_EVAL = _raw=json_set(_raw, "timestamp", strftime(json_extract(_raw,"timestamp")/1000,"%m-%d-%Y %H:%M:%S.%3N"))

[updateTimestamp]
#INGEST_EVAL = timestamp=json_extract(_raw, "timestamp")
INGEST_EVAL = timestamp=strftime(json_extract(_raw, "timestamp") / 1000, "%m-%d-%Y %H:%M:%S.%3N")

I was able to format the timestamp in _raw, but the timestamp field under interesting fields is still showing up as epoch. How can I transform the value of the timestamp field the same way as _raw?
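One hedged idea, not verified against this setup: with INDEXED_EXTRACTIONS = json, the timestamp field may be extracted from the original _raw before the INGEST_EVAL transforms rewrite it, leaving an epoch copy alongside. Combining both evals into a single transform makes the ordering explicit, with the field re-read from the already-rewritten _raw (comma-separated INGEST_EVAL expressions run left to right):

# transforms.conf -- sketch: rewrite _raw first, then derive the field
# from the rewritten _raw
[rewriteTimestamps]
INGEST_EVAL = _raw=json_set(_raw, "timestamp", strftime(json_extract(_raw,"timestamp")/1000,"%m-%d-%Y %H:%M:%S.%3N")), timestamp=json_extract(_raw, "timestamp")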
Hi @Praz_123  To access the HFs via REST you need to make sure they are set up in the MC, and also that you can reach their REST endpoints. If you just want to see health by host, you can try the following, which reports hosts with red health checks:

index=_internal host=* source="*/var/log/splunk/health.log"
| stats latest(color) as color by feature, node_path, node_type, host
| stats values(node_path) by color host node_type
| where color="red"

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
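If the REST route does work, a hedged alternative is the splunkd health endpoint. The endpoint path is from the Splunk REST reference, but whether your MC can reach each HF's management port is an assumption about your environment:

| rest /services/server/health/splunkd
| table splunk_server health

This should return the overall splunkd health per reachable search peer.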
Ok, there have been many ideas here, but no one asked the main question: why do you want to do it?
https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Xyseries
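That doc page covers xyseries; for the table in this thread's question, a minimal sketch would be:

| xyseries Host drive_Name utilization

xyseries takes the row-key field, the field whose values become column names, and the field supplying the cell values, in that order.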
Hi @RSS_STT  Use a chart command like this:

| chart values(utilization) over Host by drive_Name

Here is a runnable example with sample data:

| makeresults count=3
| streamstats count
| eval Host=case(count=1 OR count=3, "aaa", count=2, "bbb"), drive_Name=case(count=1 OR count=2, "D:", count=3, "E:"), utilization=case(count=1, 20, count=2, 30, count=3, 60)
| fields - count _time
| chart values(utilization) over Host by drive_Name

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Trying to fiddle with structured data by means of simple regexes is doomed to cause problems sooner or later. You have a single JSON array. If you want to split it into separate items, you should use an external tool (or force your source to log separate events).
Hi @ws  You need to set up the line breaker to distinguish between different events starting with the "attribute" key.

== props.conf ==

[yourSourcetype]
SHOULD_LINEMERGE = false
TRUNCATE = 100000
LINE_BREAKER = ([\r\n]+)\s*{(?=\s*"attribute":\s+{)

Note: TRUNCATE can be a high number but should ideally NOT be 0!

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
@ws  Hey, you can try these settings:

[<SOURCETYPE NAME>]
CHARSET = UTF-8
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\s*{(?=\s*"attribute":\s*{)
TRUNCATE = 0
INDEXED_EXTRACTIONS = JSON
TIME_PREFIX = "date":\s*"

NOTE: When 'INDEXED_EXTRACTIONS = JSON' is set for a particular source type, do not also set 'KV_MODE = json' for that source type. That causes the Splunk software to extract the JSON fields twice: once at index time, and again at search time.
I want to transpose the below rows to columns.

Host    drive_Name    utilization
aaa     D             20
bbb     D             30
aaa     E             60

I want to convert the above table result to the layout below.

Host    D     E
aaa     20    60
bbb     30
Hi @MsF-2000  You may be able to use $job.latestTime$ in your subject; however, I believe this is a unix timestamp, so it may be hard for the receiver to know what it really means. Instead, you could use addinfo in your search to get the search time and then use $result.search_time$:

index=_internal
| stats count
| addinfo
| eval search_time=strftime(info_search_time,"%c")
| fields - info_*

This is a simple example to help you get started.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I'm looking for a way to split a JSON array into multiple events, but it keeps getting indexed as a single event. I've tried using various parameters in props.conf, but none of them seem to work. Does anyone know how to split the array into separate events based on my condition? I want it to appear as two sets of events.

JSON string: (see screenshot)

Splunk Search Head: (see screenshot)
You can use the splunk_server_group argument for the rest command to dispatch it to a defined group of servers. See https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/Distributedsearchgroups But the user running the search must have the dispatch_rest_to_indexers capability.
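A short sketch of that usage; the group name below follows the default Monitoring Console naming convention, so treat it as an assumption about your environment:

| rest /services/server/info splunk_server_group=dmc_group_indexer
| table splunk_server version server_roles

This dispatches the REST call only to the servers in that group instead of every configured search peer.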