All Posts



@ws Hey, you can try these settings:

[<SOURCETYPE NAME>]
CHARSET = UTF-8
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\s*{(?=\s*"attribute":\s*{)
TRUNCATE = 0
INDEXED_EXTRACTIONS = JSON
TIME_PREFIX = "date":\s*"

NOTE: When 'INDEXED_EXTRACTIONS = JSON' is set for a particular source type, do not also set 'KV_MODE = json' for that source type. Doing so causes the Splunk software to extract the JSON fields twice: once at index time, and again at search time.
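To sanity-check a LINE_BREAKER pattern like this outside Splunk, you can exercise the same regex in Python. The sample events and the "attribute" key below are hypothetical stand-ins for your data:

```python
import re

# Two hypothetical JSON events concatenated in one stream, each beginning
# with the "attribute" object that the LINE_BREAKER above keys on.
raw = '{"attribute": {"id": 1}, "date": "2024-01-01"}\n{"attribute": {"id": 2}, "date": "2024-01-02"}'

# Break on newlines only when the following text starts a new "attribute"
# event, mirroring the lookahead in the LINE_BREAKER setting.
breaker = re.compile(r'[\r\n]+\s*(?=\{\s*"attribute":\s*\{)')

events = breaker.split(raw)
for e in events:
    print(e)
```

If the split produces one string per event here, the lookahead part of the pattern should behave the same way in props.conf (Splunk additionally discards the text matched by the first capture group).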
I want to transpose the rows below into columns.

Host  drive_Name  utilization
aaa   D           20
bbb   D           30
aaa   E           60

I want to convert the above table into this result:

Host  D   E
aaa   20  60
bbb   30
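For reference, in SPL this kind of pivot is usually done with something like | chart values(utilization) over Host by drive_Name. The row-to-column logic itself can be sketched in Python; the field names below are taken from the question:

```python
from collections import defaultdict

# Rows as given in the question
rows = [
    {"Host": "aaa", "drive_Name": "D", "utilization": 20},
    {"Host": "bbb", "drive_Name": "D", "utilization": 30},
    {"Host": "aaa", "drive_Name": "E", "utilization": 60},
]

# Pivot: one output row per Host, one column per drive_Name
pivot = defaultdict(dict)
for r in rows:
    pivot[r["Host"]][r["drive_Name"]] = r["utilization"]

for host in sorted(pivot):
    print(host, pivot[host])
```

Hosts without a value for a given drive (like bbb/E above) simply have no entry for that column, which matches the blank cell in the desired output.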
Hi @MsF-2000  You may be able to use $job.latestTime$ in your subject. However, I believe this is a unix timestamp, so it may be hard for the recipient to know what it really means. Instead, you could add addinfo to your search to get the search time and use $result.search_time$:

index=_internal
| stats count
| addinfo
| eval search_time=strftime(info_search_time,"%c")
| fields - info_*

This is a simple example to help you get started.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
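For what it's worth, the conversion that eval strftime performs can be checked in plain Python; the epoch value below is made up, and UTC is assumed:

```python
from datetime import datetime, timezone

# Hypothetical epoch seconds, like the info_search_time field addinfo adds
info_search_time = 1700000000

# Same idea as SPL's strftime(info_search_time, "%d-%m-%Y %H:%M:%S"),
# which also matches the dd-mm-yyyy subject format asked about in the thread
subject_time = datetime.fromtimestamp(info_search_time, tz=timezone.utc).strftime("%d-%m-%Y %H:%M:%S")
print(subject_time)  # 14-11-2023 22:13:20
```

Swapping "%c" for an explicit format string like the one above gives you full control over how the subject line reads.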
I'm looking for a way to split a JSON array into multiple events, but it keeps getting indexed as a single event. I've tried various parameters in props.conf, but none of them seem to work. Does anyone know how to split the array into separate events based on my condition? I want it to appear as two sets of events. JSON string: Splunk Search Head:
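Since the JSON sample didn't come through, here is a generic sketch of the splitting itself. Splitting an array at ingest is often done either by pointing LINE_BREAKER at the boundaries between elements or by preprocessing the payload before it reaches Splunk; this shows the preprocessing approach. The "records" key is a made-up placeholder:

```python
import json

# Hypothetical payload: a single event wrapping a JSON array
raw_event = '{"records": [{"id": 1, "msg": "a"}, {"id": 2, "msg": "b"}]}'

# Emit one standalone, self-describing event per array element
parsed = json.loads(raw_event)
events = [json.dumps(rec) for rec in parsed["records"]]
for e in events:
    print(e)
```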
You can use the splunk_server_group argument of the rest command to dispatch it to a defined group of servers. See https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/Distributedsearchgroups But note that the user running the search must have the appropriate dispatch capability (dispatch_rest_to_indexers, if I recall the name correctly).
Hi @Prajwal_Kasar

1. Include the WinRM library (and its dependencies) in your app bundle before installing it on Splunk Cloud:

# Within your app
mkdir lib
pip install --target=lib pywinrm

2. Prepend lib to sys.path in your alert script:

# bin/alert_winrm.py - for example
import os, sys

vendor_dir = os.path.join(os.path.dirname(__file__), "..", "lib")
sys.path.insert(0, vendor_dir)
import winrm

def clean_old_files(TargetServer, FolderPath, FileThresholdInMinutes, UserName, Password):
    session = winrm.Session(TargetServer, auth=(UserName, Password), transport='ntlm')
    # ... your cleanup logic ...

if __name__ == "__main__":
    # parse args and call clean_old_files()
    pass

3. Package and deploy as you would normally.

4. Note: WinRM may require additional dependencies (requests, xmltodict, six), but pip should pull these in automatically. Ensure Splunk Cloud can reach your Windows host on port 5985/5986 - this can be managed with ACS.
Hello Folks, I'm encountering an issue with Splunk Cloud where it reports that the winrm module is not found. I'm attempting to install and run a custom alert action (a packaged Python application) that uses winrm to establish a remote connection to a target server for cleanup processes. However, after installation and testing, I discovered that winrm is not installed in the Splunk Cloud environment used by our organization. Is there any workaround to achieve this and proceed further?

Issue: ModuleNotFoundError: No module named 'winrm'

Script block that uses winrm:

import winrm
import sys
import argparse
import os

def clean_old_files(TargetServer, FolderPath, FileThresholdInMinutes, UserName, Password):
    # Initialize return values
    deleted_files = []
    deleted_count = 0
    # print(f"Connecting to server: {TargetServer}...")  # remove this print in the next deployment
    try:
        # Establish a WinRM session
        session = winrm.Session(TargetServer, auth=(UserName, Password), transport='ntlm')
Thank you for replying - I've tried both, but each throws a different error. I was told that splunk-soar version 6.4.0.92 only accepts a dict { }. I've attached the error message for the array error... Error message for array [ ]
Can you explain in more detail what you have in this Splunk instance? For example: its role, whether there are modular inputs or custom SPL commands, the number of users, queries, DMAs and other accelerations, daily data size, etc.
Hi All, We have a requirement to print a timestamp in the mail subject for a scheduled report. The timestamp should indicate the time the report was sent. For example, the report runs twice a day; if it runs at 6 am and 6 pm, the mail subject should show dd-mm-yyyy 06:00:00 or 18:00:00. Please help.
@livehybrid, @PickleRick, @isoutamo  I need the health status of the HFs while running the query. There are more than 5 HFs, and when I run the query for each HF individually, I get the results. However, I can't create a single alert that covers all HFs - doing so would result in more than 5 separate alerts, one per HF. If I can run the same query on the LM and see the status of all components in one go, can't the same be possible for the HF and IHF?
I think changing tag permissions will give me carpal tunnel.  If anyone knows where I can submit my claim let me know.
Using the top command, we found that the splunkd process is using 100% of the swap space. However, we cannot determine the root cause because there is no way to check exactly what kind of operation is consuming the swap. Do you know of a case where a problem of 100% swap usage was diagnosed and solved? Thank you.
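Not a known fix, but on Linux you can at least see which processes hold the swap by reading the VmSwap line from /proc/<pid>/status. The parsing is shown against a sample string so the logic is clear; on a real host you would iterate over /proc/*/status:

```python
def vmswap_kb(status_text):
    """Return the VmSwap value (kB) from a /proc/<pid>/status blob, or 0 if absent."""
    for line in status_text.splitlines():
        if line.startswith("VmSwap:"):
            return int(line.split()[1])
    return 0

# Hypothetical /proc/<pid>/status excerpt for a splunkd process
sample = "Name:\tsplunkd\nVmRSS:\t  524288 kB\nVmSwap:\t  131072 kB\n"
print(vmswap_kb(sample))  # 131072
```

Summing VmSwap across all pids (and sorting descending) shows whether splunkd itself, or some helper process, is the main consumer, which narrows down where to look next.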
We have an alert showing users that are authenticating after working hours, for security reasons - I'm sure y'all are familiar with it - but at the same time, we know who leaves their workstations on during the night. However, we have recently received alerts with "unknown" users reported. After checking the host's Event Viewer (Security log) and comparing against the timestamps in the alert, the event logs do show the users. Any idea how we can edit our search string, or what may have caused it to return the unknown value?
I have this in props and transforms:

[resource_timestamp]
SHOULD_LINEMERGE = false
INDEXED_EXTRACTIONS = json
KV_MODE = none
TIME_PREFIX = "timestamp":
TIME_FORMAT = %s%3N
DATETIME_CONFIG = NONE
TRANSFORMS-overrideTimeStamp = overrideTimeStamp

[overrideTimeStamp]
INGEST_EVAL = _raw=json_set(_raw, "timestamp", strftime(json_extract(_raw,"timestamp")/1000,"%m-%d-%Y %H:%M:%S.%3N"))
#INGEST_EVAL = _raw=strftime(json_extract(_raw, "timestamp")/1000, "%m-%d-%Y %H:%M:%S.%3N")

I can now see the intended time format being applied to the timestamp field, but I also see the value of timestamp twice, once as "none" and once in epoch format. How do I eliminate the "none" value?
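As a sanity check, the rewrite that the INGEST_EVAL performs can be reproduced in plain Python (UTC assumed; the sample event is made up). This at least confirms the format string does what you expect, independent of how json_extract/json_set resolve the path:

```python
import json
import time

# Hypothetical raw event with an epoch-milliseconds timestamp
_raw = '{"timestamp": 1700000000123, "msg": "hello"}'

event = json.loads(_raw)
epoch_ms = event["timestamp"]

# Mirror of strftime(json_extract(_raw,"timestamp")/1000, "%m-%d-%Y %H:%M:%S.%3N")
formatted = time.strftime("%m-%d-%Y %H:%M:%S", time.gmtime(epoch_ms / 1000))
event["timestamp"] = "%s.%03d" % (formatted, epoch_ms % 1000)
print(json.dumps(event))  # {"timestamp": "11-14-2023 22:13:20.123", "msg": "hello"}
```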
Hi @livehybrid, I tried to apply the props and transforms as you mentioned earlier, but I don't see the events breaking, and the value of the timestamp is still showing the epoch value, not the time format I need. It's also showing a "none" value in the results, which is not expected. How can I eliminate the "none" from the results?
I've read through some of the Splunk documentation, and one of my colleagues had already configured the "Windows server health" content pack, but when I check "OS:Performance.WIN.Memory" I only see a handful of metrics and cannot compute the overall % memory utilization because I do not have the total amount to begin with. These are the only metrics I have:

Available_MBytes
Cache_Bytes
Page_Reads/sec

Install and configure the Content Pack for Monitoring Microsoft Windows - Splunk Documentation
Hi @Praz_123  As described by @PickleRick and @isoutamo, it can sometimes be possible to add these to the MC, but it's not always practical, and a bit hacky! If you want a high-level view of a forwarder, you can use the health.log with the following SPL:

index=_internal host=yourForwarderHost source="*/var/log/splunk/health.log"
| stats latest(color) as color by feature, node_path, node_type, host

If you have a number of forwarders to monitor, you could adapt this to score the colours and show the worst.
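The "score the colours" idea could look something like this sketch. The colour names follow Splunk's health report; the hosts, features, and severity scores are made up for illustration:

```python
# Map health colours to severities so the worst state per host wins
SEVERITY = {"green": 0, "yellow": 1, "red": 2}

# Hypothetical rows, shaped like the stats output of the SPL above
results = [
    {"host": "hf1", "feature": "TailReader", "color": "green"},
    {"host": "hf1", "feature": "TcpOutAutoLB", "color": "red"},
    {"host": "hf2", "feature": "TailReader", "color": "yellow"},
]

# Keep only the most severe row per host
worst = {}
for r in results:
    current = worst.get(r["host"])
    if current is None or SEVERITY.get(r["color"], -1) > SEVERITY.get(current["color"], -1):
        worst[r["host"]] = r

for host in sorted(worst):
    print(host, worst[host]["color"], worst[host]["feature"])
```

In SPL you could achieve the same with an eval that maps colours to numbers followed by a stats max per host.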
Hi @Kenny_splunk  Really the only way to "clean" an index is for the data to age out. Running "| delete" on an index will stop the data appearing in searches, but it will still be present on disk, just with markers that stop it being returned, so it won't actually give you any space back if that is what you are looking for. The best thing you can do is control the data arriving in the platform and reduce it as necessary; over time the older/larger/waste data will age out and free up space. What is the retention on this index (or indexes)? If it's something like 90 days, you won't have long to wait, but if it's 6 years, you might be stuck with that old data for some time!
Hi @manideepa  Are you referring to service indicators in the glass tables versus notables generated in a table? Could you share screenshots or sample data so that we can be sure we're giving you the best answer?