All Posts

The reason is that our dev team requires the epoch timestamp to be formatted as "%d-%m-%d %H:%M:%S.%3N". I have already created a calculated field to convert it to the format we require, but they still need this done at the indexing stage.

props.conf

[resource_timestamp]
SHOULD_LINEMERGE = false
INDEXED_EXTRACTIONS = json
KV_MODE = none
TIME_PREFIX = \"timestamp\"\:
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 13
TRANSFORMS-updateTimestamp = updateTimestamp
TRANSFORMS-overrideTimeStamp = overrideTimeStamp

transforms.conf

[overrideTimeStamp]
INGEST_EVAL = _raw=json_set(_raw, "timestamp", strftime(json_extract(_raw, "timestamp")/1000, "%m-%d-%Y %H:%M:%S.%3N"))

[updateTimestamp]
#INGEST_EVAL = timestamp=json_extract(_raw, "timestamp")
INGEST_EVAL = timestamp=strftime(json_extract(_raw, "timestamp") / 1000, "%m-%d-%Y %H:%M:%S.%3N")

I was able to format the timestamp in _raw, but the timestamp field under interesting fields is still showing up as epoch. How can I transform the value of the timestamp field the same way as _raw?
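One hedged option (a sketch, not a verified fix): transforms.conf documents a ':=' operator for INGEST_EVAL that replaces the existing values of an indexed field, whereas '=' adds a value alongside whatever INDEXED_EXTRACTIONS already wrote. So the indexed timestamp field might be overwritten with:

[updateTimestamp]
INGEST_EVAL = timestamp:=strftime(json_extract(_raw, "timestamp") / 1000, "%m-%d-%Y %H:%M:%S.%3N")

Whether the structured-extraction fields are visible at this point depends on where the parsing happens (UF vs HF/indexer), so test this on a sample sourcetype first.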
Hi @Praz_123

To access the HFs via REST you need to make sure they are set up in MC, but you also need to be able to reach their REST endpoints. If you just want to see the health by host, then you can try the following, which will report hosts with red health checks:

index=_internal host=* source="*/var/log/splunk/health.log"
| stats latest(color) as color by feature, node_path, node_type, host
| stats values(node_path) by color host node_type
| where color="red"

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Ok, there have been many ideas here, but no one asked the main question: why do you want to do it?
https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Xyseries
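For the Host/drive_Name/utilization transpose question in this thread, a minimal sketch of the xyseries approach the link describes:

| xyseries Host drive_Name utilization

xyseries takes the row field, the column field, and the value field, so this should produce the Host-by-drive layout asked for.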
Hi @RSS_STT

Use a chart command like this:

| chart values(utilization) over Host by drive_Name

Here is a runnable example that builds the sample data and then charts it:

| makeresults count=3
| streamstats count
| eval Host=case(count=1 OR count=3, "aaa", count=2, "bbb"), drive_Name=case(count=1 OR count=2, "D:", count=3, "E:"), utilization=case(count=1, 20, count=2, 30, count=3, 60)
| fields - count _time
| chart values(utilization) over Host by drive_Name

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
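Following up on the chart example above: if you want the empty cells (for example, bbb has no E: value in the sample) to show 0 instead of blank, a hedged addition is fillnull:

| chart values(utilization) over Host by drive_Name
| fillnull value=0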
Trying to fiddle with structured data by means of simple regexes is doomed to cause problems sooner or later. You have a single JSON array. If you want to split it into separate items, you should use an external tool (or get your source to log separate events).
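If reshaping the source isn't an option, a search-time workaround (a sketch - the data is still indexed as one event, it is only split at search time) is to expand the array with spath and mvexpand:

| spath path={} output=items
| mvexpand items
| eval _raw=items
| spath input=items

Each array element becomes its own result row, with its fields re-extracted from the element.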
Hi @ws

You need to set up the line breaker to distinguish between different events starting with the attributes key.

== props.conf ==

[yourSourcetype]
SHOULD_LINEMERGE = false
TRUNCATE = 100000
LINE_BREAKER = ([\r\n]+)\s*{(?=\s*"attribute":\s+{)

Note: TRUNCATE can be a high number but should ideally NOT be 0!

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
@ws Hey, you can try these settings:

[<SOURCETYPE NAME>]
CHARSET = UTF-8
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\s*{(?=\s*"attribute":\s*{)
TRUNCATE = 0
INDEXED_EXTRACTIONS = JSON
TIME_PREFIX = "date":\s*"

NOTE: When 'INDEXED_EXTRACTIONS = JSON' is set for a particular source type, do not also set 'KV_MODE = json' for that source type. This causes the Splunk software to extract the JSON fields twice: once at index time, and again at search time.
I want to transpose the below rows to columns.

Host   drive_Name   utilization
aaa    D            20
bbb    D            30
aaa    E            60

I want to convert the above table result as below.

Host   D    E
aaa    20   60
bbb    30
Hi @MsF-2000

You may be able to use $job.latestTime$ in your subject - however, I believe this is a unix timestamp, so it may be hard for the receiver to know what it really means. Instead, you could add addinfo to your search to get the search time and use $result.search_time$:

index=_internal
| stats count
| addinfo
| eval search_time=strftime(info_search_time,"%c")
| fields - info_*

This is a simple example to help you get started.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
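Following up on the example above: to get the value into the subject line, reference it with a result token (this reads from the first result row), e.g. a subject of:

Scheduled report - $result.search_time$

And to match the dd-mm-yyyy format asked for in this thread, use strftime(info_search_time, "%d-%m-%Y %H:%M:%S") instead of %c.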
I'm looking for a way to split a JSON array into multiple events, but it keeps getting indexed as a single event. I've tried using various parameters in props.conf, but none of them seem to work. Does anyone know how to split the array into separate events based on my condition? I want it to appear as two sets of events.

JSON string:
You can use the splunk_server_group argument of the rest command to dispatch it to a defined group of servers. See https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/Distributedsearchgroups But the user running the search must have the dispatch_rest_to_indexers (or however it is called) capability.
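For the HF health-status question in this thread, a sketch combining the two (the group name is a placeholder - check distsearch.conf for the groups defined in your environment; the MC typically creates dmc_group_* groups):

| rest /services/server/health/splunkd splunk_server_group=<your_hf_group>
| table splunk_server health

This assumes the HFs are registered as search peers in a distributed search group and that the health endpoint returns a health field; verify both against your version's REST API reference.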
Hi @Prajwal_Kasar

1. Include the WinRM library (and its dependencies) in your app bundle before installing it on Splunk Cloud. Note the PyPI package that provides the winrm module is pywinrm:

# Within your app
mkdir lib
pip install --target=lib pywinrm

2. Prepend lib to sys.path in your alert script:

# bin/alert_winrm.py - for example
import os, sys
vendor_dir = os.path.join(os.path.dirname(__file__), "../lib")
sys.path.insert(0, vendor_dir)

import winrm

def clean_old_files(TargetServer, FolderPath, FileThresholdInMinutes, UserName, Password):
    session = winrm.Session(TargetServer, auth=(UserName, Password), transport='ntlm')
    # ... your cleanup logic ...

if __name__ == "__main__":
    pass  # parse args and call clean_old_files()

3. Package & deploy as you would normally.

4. Note: pywinrm may require additional deps (requests, xmltodict, six), but I think pip should install these. Ensure Splunk Cloud can reach your Windows host on port 5985/5986 - this can be managed with ACS.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hello Folks,

I'm encountering an issue with Splunk Cloud where it indicates that the winrm module is not found. I'm attempting to install and run a custom alert action, packaged as a Python application, that uses winrm to establish a remote connection to a target server for cleanup processes. However, after installation and testing, I discovered that winrm is not installed in the Splunk Cloud environment used by our organization. Is there any workaround to achieve this and proceed further?

Issue: ModuleNotFoundError: No module named 'winrm'

Script block that uses winrm:

import winrm
import sys
import argparse
import os

def clean_old_files(TargetServer, FolderPath, FileThresholdInMinutes, UserName, Password):
    # Initialize return values
    deleted_files = []
    deleted_count = 0
    #print(f"Connecting to server: {TargetServer}...")
    # remove above print statement in next deployment.
    try:
        # Establish a WinRM session
        session = winrm.Session(TargetServer, auth=(UserName, Password), transport='ntlm')
Thank you for replying - I've tried both, but each throws a different error. I was told that splunk-soar version 6.4.0.92 only takes a dict { }. I've attached the error message for the array error.

Error message for array [ ]
Can you explain in more detail what you have in this Splunk instance? For example: its role, whether there are modular inputs or custom SPL commands, the number of users and queries, DMA or other accelerations, daily data size, etc.
Hi All

We got a requirement to print the timestamp in the mail subject for a scheduled report. The timestamp should indicate the time it got sent. For example, the report runs twice a day, so if it runs at 6 am and 6 pm, the mail subject should indicate dd-mm-yyyy 06:00:00 or 18:00:00.

Please help.
@livehybrid , @PickleRick , @isoutamo

I need the health status for the HFs while running the query. There are more than 5 HFs, and when I run the query for each HF individually, I get the results. However, I can't create a single alert that covers all HFs - doing it per host would result in more than 5 separate alerts, one for each HF. If I can run the same query in LM and see the status of all components in one go, can't the same be possible for the HF and IHF?
I think changing tag permissions will give me carpal tunnel.  If anyone knows where I can submit my claim let me know.
Through the top command, we found that the splunkd process is using 100% of the swap space. However, we cannot determine the root cause, because there is no way to check exactly what kind of operation is using the swap space. Does anyone know of a case where the problem of 100% swap usage was solved? Thank you.
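One way to narrow this down from inside Splunk (a sketch, assuming resource-usage introspection is enabled, which it is by default on full Splunk Enterprise instances) is to chart per-process memory from the _introspection index and watch which splunkd process grows:

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.process=splunkd host=<your_host>
| eval mem_used_mb='data.mem_used'
| timechart max(mem_used_mb) by data.args

This does not show swap directly, but a steadily growing memory footprint for one process (main splunkd, a search process, etc.) is usually what pushes the box into swap.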