All Posts


I use 'SEDCMD-rm<fieldname>'. Why is my SEDCMD not working?
SEDCMD-rm-appname = s/app_name\=.*/\s//
SEDCMD-rm_appsaas = s/app_saas\=\w+\s//
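A note on why the first expression is likely being ignored (my reading of the two lines above, not a confirmed diagnosis): sed syntax is s/regex/replacement/flags, so s/app_name\=.*/\s// contains a stray fourth delimiter, and \s in the replacement position is a literal, not whitespace; the greedy .* would also strip everything from app_name= to the end of the event. Remember too that SEDCMD runs at index time, so it only affects newly indexed events. A corrected sketch (class names are just examples):

SEDCMD-rm_appname = s/app_name=\S*\s?//
SEDCMD-rm_appsaas = s/app_saas=\S*\s?//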
@silverKi Try the config below to remove the listed fields from the _raw event. Since they are no longer in the raw event, Splunk won't auto-extract them at search time.

props.conf
[secui:fw]
TRANSFORMS-removefields = remove_unwanted_fields

transforms.conf
[remove_unwanted_fields]
REGEX = \s?(fw_rule_name|app_saas|nat_rule_name|is_ssl|user_id|is_sslvpn|app_name|host|app_protocol|src_country|app_category|dst_country)=[^ ]*
FORMAT =
DEST_KEY = _raw

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
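If the TRANSFORMS route does not behave as expected (note that with DEST_KEY = _raw, the FORMAT value becomes the new event text when the REGEX matches), an alternative worth sketching is a single SEDCMD with the global flag, which strips every listed key=value pair in one pass. This is an assumption-laden sketch, reusing the sourcetype and field list from the question; index-time settings like this must live on the indexer or heavy forwarder:

[secui:fw]
SEDCMD-remove_unwanted = s/\s?(fw_rule_name|app_saas|nat_rule_name|is_ssl|user_id|is_sslvpn|app_name|host|app_protocol|src_country|app_category|dst_country)=\S*//g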
To be specific: I want to disable these two buttons below that appear at the top of my dashboard, and the export button under every stats display.
I am trying to exclude unnecessary fields from the firewall log collection. I am trying to delete the fields by excluding them, but the changes are not being reflected, so I am curious about the related exclusion process.

_raw data
[fw4_deny] [ip-address] start_time="1998-07-07 11:21:09" end_time="1998-07-07 11:21:09" machine_name=test_chall_1 fw_rule_id=11290 fw_rule_name=auto_ruleId_1290 nat_rule_id=0 nat_rule_name= src_ip=1xx.1xx.0.x user_id=- src_port=63185 dst_ip=192.168.0.2 dst_port=16992 protocol=6 app_name=- app_protocol=- app_category=- app_saas=no input_interface=eth212 bytes_forward=70 bytes_backward=0 packets_total=1 bytes_total=70 flag_record=S terminate_reason=Denied by Deny Rule is_ssl=no is_sslvpn=no host=- src_country=X2 dst_country=X2
[resource_cnt] [10.10.10.10] time="1998-07-07 11:24:50" machine_name=test_boby_1 cpu_usage=7.0 mem_usage=19.8 disk_usage=5.6 cpu_count=32, cpu_per_usage=3.0-2.9-2.0-2.0-2.0-2.0-0.0-0.0-23.0-7.9-7.0-6.9-19.4-19.0-8.0-7.0-1.0-1.0-16.0-1.0-2.0-2.0-1.0-2.0-24.8-9.0-16.2-8.0-9.0-9.9-5.0-8.1

my props.conf
[secui:fw]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SEDCMD-duration = s/duration=\d+\s//
SEDCMD-fragment_info = s/fragment_info=\S*\s//
SEDCMD-ingres_if = s/ingres_if=\S*\s//
SEDCMD-input = s/input\sinterface/interface/
SEDCMD-packets_backward = s/packets_backward=\S*\s//
SEDCMD-packets_forward = s/packets_forward=\S*\s//
SEDCMD-pre = s/^[^\[]+//
SEDCMD-terminate_reason = s/\sterminate_reason=-//
SEDCMD-user_auth = s/user_auth=\S*\s//
SEDCMD-userid = s/user_id=\S*\s//
TRANSFORMS-secui_nullq = secui_nullq
TRANSFORMS-stchg7 = secui_resource
TRANSFORMS-stchg8 = secui_session
category = Custom
description = test
disabled = false
pulldown_type = true

Fields I want to exclude:
fw_rule_name, app_saas
nat_rule_name, is_ssl
user_id, is_sslvpn
app_name, host
app_protocol, src_country
app_category, dst_country

I want to prevent these fields from being extracted by removing them at index time. Currently, the fields I want to exclude are still automatically extracted at search time. Is there a way to do this?
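One way to verify an index-time change like this, as a sketch (the index name is a placeholder; only the sourcetype is taken from the question): search events indexed after the restart and check whether the unwanted pairs still appear in _raw. Keep in mind that SEDCMD and TRANSFORMS never rewrite events that were already indexed.

index=your_index sourcetype=secui:fw earliest=-15m
| regex _raw="(fw_rule_name|app_saas|user_id|app_name)="
| head 5

If this returns nothing while new events are arriving, the stripping is working.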
Suppose you have a lookup called myhosts.csv; it has a field called host. You use this as the primary input, then find which hosts have zero count compared with the index search.

| inputlookup myhosts.csv
| append [search index=sw tag=MemberServers sourcetype="windows PFirewall Log" | stats count by sourcetype, host]
| stats values(sourcetype) as not_missing by host
| where isnull(not_missing)
Thank you for the reply. I've used lookup tables a little before and can probably figure out that piece of it. Once I have that comparison list working, how would I say where events for that sourcetype are zero? I've tried something like this without success:

... | stats count by sourcetype,host | where sourcetype="windows PFirewall Log" | where "count">="1"
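Two things are going on in that attempt. In where, double-quoted values are string literals, so "count">="1" compares the text "count" against the text "1" rather than using the field; unquoted, it would be | where count>=1. The bigger catch, as the reply below explains, is that hosts with zero events never appear in the stats output at all, so no where clause on count can surface them; the lookup has to drive the search. A sketch under that assumption (myhosts.csv with a host field, as in the accepted answer):

| inputlookup myhosts.csv
| join type=left host [ search index=sw tag=MemberServers sourcetype="windows PFirewall Log" | stats count by host ]
| fillnull value=0 count
| where count=0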
Splunk is not good at reporting on things that don't exist. To get around this, you need to provide a list (of the hosts you are interested in), compare that to the number of events you have for each host, and then keep only those where the number of events is less than 1. This is often done using a lookup file (if the hosts are "new"), for example, or some historic data (if the hosts are "old").
Hello, I have Database Connect set up and it's working fine, but I can't wrap my head around how the Alert Action works. The alert action "Output results to databases" has no parameters - what am I missing? I have a DB table "test_table" with columns col1 and col2, and I want to set up

| makeresults | eval col1 = "test", col2 = "result"

as an alert that pushes the results into "test_table". I would expect the alert action to at least need to know which DB output to use? Any help appreciated. Kind regards, Andre
Hi @Cleffa
Looking at the limited docs, it doesn't look like you can inject a UUID into the filename; however, it does support timestamp variables, so you could perhaps add microseconds (%f) to your filename to make it more unique. Check out the timestamp docs at https://docs.python.org/3.7/library/datetime.html#strftime-strptime-behavior:~:text=Microsecond%20as%20a%20decimal%20number%2C%20zero%2Dpadded%20on%20the%20left.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
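For example (a sketch; only %f is added to the template from the question), the filename could become:

/results_%H%M%S%f.json

With %f appended, two exports within the same second get distinct names down to the microsecond.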
Hi, sometimes there are three new results and I need separate JSON files, but they get overwritten. I can find no way to add a UUID to the file name /results_%H%M%S.json
Hello Splunk Community. I'd like to use a query to find a host which is a member of a tag group and has 0 events for a specific sourcetype. Here's the search that gets me most of the way there:

index=sw tag=MemberServers sourcetype="windows PFirewall Log" | stats count by sourcetype,host

But I'd like to return only hosts which have 0 events (i.e. are missing firewall data). How can I do this?
I have taken the file and deleted and repopulated it. I have used a new file created in Notepad++ and another file created in Excel. Still no luck. I am beyond frustrated, because I know it is something simple somewhere; I just cannot figure out where.
The data is a simple CSV file, so the props just need to specify that.

[sap:systemlog]
INDEXED_EXTRACTIONS = csv
DATETIME_CONFIG = CURRENT

No need for REPORT or EXTRACT.
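One caveat worth adding: INDEXED_EXTRACTIONS is applied where the structured file is first read, so if a universal forwarder monitors the CSV, this stanza belongs in props.conf on that forwarder, not only on the indexer. A minimal sketch, assuming a UF reads the file:

# props.conf on the forwarder monitoring the CSV
[sap:systemlog]
INDEXED_EXTRACTIONS = csv
DATETIME_CONFIG = CURRENT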
Hi @Akhanda
That is fine; just create the file and then make sure the permissions allow access for whichever user Splunk runs as. By default the local directory is empty (the defaults are in the "default" directory alongside the local directory), so I wouldn't necessarily expect it to exist already.
Regards, Will
Dear Splunk community,

After successfully implementing the input from @afx, "How to Splunk the SAP Security Audit Log", I was encouraged to implement the SAP system log (SM21) on my own. So far, I have managed to send the log to Splunk, but given the log's encoding system, I am unable to process it correctly in Splunk. Most likely, my error lies in the transforms.conf or props.conf.

props.conf
[sap:systemlog]
category = Custom
REPORT-SYS = REPORT-SYS
EXTRACT-fields = ^(?<Prefix>.{3})(?<Date>.{8})(?<Time>.{6})(?<Code>\w\w)(?<Field1>.{5})(?<Field2>.{2})(?<Field3>.{3})(?<Field4>.)(?<Field5>.)(?<Field6>.{8})(?<Field7>.{12})(?<Field8>.{20})(?<Field9>.{40})(?<Field10>.{3})(?<Field11>.)(?<Field12>.{64})(?<Field13>.{20})
LOOKUP-auto_sm21 = sm21 message_id AS message_id OUTPUTNEW area AS area subid AS subid ps_posid AS ps_posid

transforms.conf
[REPORT-SYS]
DELIMS = "|"
FIELDS = "message_id","date","time","term1","os_process_id","term2","work_process_number","type_process","term3","term4","user","term5","program","client","session","variable","term6","term7","term8","term9","id_tran","id_cont","id_cone"

[sm21]
batch_index_query = 0
case_sensitive_match = 1
filename = sm21.csv

Has anyone experienced a similar issue? Best regards.
Hi, there is no limits.conf file in the $SPLUNK_HOME/etc/system/local/ directory!
Hi, I have created a playbook to capture inputs from the user (short description, description, and priority) and create a SIR in ServiceNow. But when I try to capture the State field, it does not show up. Can someone help me understand where I am going wrong? I need immediate help.
Hi @Akhanda
To change the search_startup_config_timeout_ms setting, edit the limits.conf file (e.g. in $SPLUNK_HOME/etc/system/local/limits.conf) and add or modify this setting under the [search] stanza. For example, to increase the timeout to 30000 ms (30 seconds):

[search]
search_startup_config_timeout_ms = 30000

After saving the changes, restart Splunk for the new setting to take effect. This setting controls how long (in milliseconds) Splunk waits for configuration initialization before timing out when starting a search. Only adjust this setting if you are sure storage performance is the cause; it sounds like this is the case here, and I assume this is a local/dev instance with underlying performance issues.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
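To confirm the effective value after the restart, btool can show which file the setting comes from; a quick sketch (the path assumes a default install location):

$SPLUNK_HOME/bin/splunk btool limits list search --debug | grep search_startup_config_timeout_ms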
Hi @ASGrover
Are you able to confirm that the indexes have been updated correctly on the indexers? One way to check this is with btool:

$SPLUNK_HOME/bin/splunk cmd btool indexes list --debug

Also, are your peers (indexers) showing up in the Peers tab on the Indexer Clustering page on your cluster manager?

Lastly, just double check you are on the cluster manager! I have found myself looking at other hosts before, wondering where on earth my hosts have gone!

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
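If the CLI is easier than the UI, the cluster manager can also report peer status directly; a sketch, run on the cluster manager itself:

$SPLUNK_HOME/bin/splunk show cluster-status

This lists each peer and its status, which should match what the Indexer Clustering page shows.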