All Posts

Hi Team, I'm trying to trigger an AutoSys job based on an alert we received in Splunk. Any idea how to achieve it?
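One common pattern (a sketch, not a turnkey solution) is a script-based alert action that shells out to the AutoSys CLI. The alert name, script name, and job name below are assumptions, and the AutoSys client (sendevent) must be installed on the search head; note that custom alert actions are the more modern alternative to action.script.

    # savedsearches.conf
    [AutoSys trigger alert]
    action.script = 1
    action.script.filename = trigger_autosys.sh

    # $SPLUNK_HOME/bin/scripts/trigger_autosys.sh
    #!/bin/sh
    # Force-start a named AutoSys job whenever the Splunk alert fires
    sendevent -E FORCE_STARTJOB -J my_autosys_job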
I can't really see anything wrong, but I dislike the following in /opt/splunk/etc/system/local/props.conf:

    KV_MODE = json

Since I see it in several of the various splunkd* stanzas, it makes me think it was set in local under a default stanza. I would personally look to remove it, but keep in mind that if this fixes the internal log extraction it will break something else that needs the JSON configuration. I've always tried to create custom apps and place any default overrides in the custom app rather than allow anything to fall into ./splunk/etc/system/local/*.conf.
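For example, the same override could live in a small custom app instead of system/local (the app name and sourcetype here are made up):

    # $SPLUNK_HOME/etc/apps/org_all_overrides/local/props.conf
    [my_json_sourcetype]
    KV_MODE = json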
The owner field is the current owner of a knowledge object and is used for enforcing permissions and capabilities. Unless the index is created via the GUI, the value is likely to default to 'system' or a similar generic term. Even if it is created via the GUI, once the user departs the organization the user name should be disabled/deleted, which risks leaving the object unavailable, so the object should be migrated to a generic ID or a different user. I don't see any automated method of pulling the information you want from a REST call, given that the owner can change and the creation date is likely just listed as the earliest event in the index, which is not reliable. Previously I would have an app just to define indexes, pushed to the IDX tier from the CM. After the index stanza you can record the information you want in a comment, but you wouldn't be able to view that from a REST call.
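The closest you can get from REST is the current owner, e.g. (a sketch):

    | rest /services/data/indexes
    | table title eai:acl.owner eai:acl.app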
Please try removing the " (double quotes) from the TIME_FORMAT:

    TIME_FORMAT = %d/%m/%Y %H:%M:%S

If this isn't working, check btool on this source/host/sourcetype for any DATETIME_CONFIG setting in your props.conf. Hope this helps.
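A quick way to run that check (the sourcetype name is a placeholder):

    $SPLUNK_HOME/bin/splunk btool props list your_sourcetype --debug | grep -Ei 'TIME_|DATETIME_CONFIG'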
It's not clear whether you are speaking about patching Splunk application servers or just other servers in your environment. Any server hosting a Splunk function will report into the DMC, and that should be your source of truth about how the Splunk application is functioning after a server patch. Other servers in your environment should be monitored based on your own requirements and notion of critical functions; that really lies outside the topics of this community answer board.
For the past two days I've been trying to figure something out. I'll try to be as clear as possible, and hopefully someone can guide me or explain why this is working like this.

I'm trying to index a CSV file stored in S3, but unfortunately the sourcetype aws:s3:csv is not indexing the file "properly" (meaning it is not extracting any fields - see the left screenshot in the attached file). I've modified the sourcetype aws:s3:csv (under the Splunk Add-on for AWS application) and configured it exactly like the default csv sourcetype (under system/default/props.conf). After doing this, if I index a file manually via "Settings > Add data" it is indexed properly (fields are extracted), but if the very same file is indexed by the Splunk Add-on for AWS, again configured with the same sourcetype, there are no extracted fields. See the attached screenshot for reference.

I've also tried adding other configurations to the unmodified aws:s3:csv sourcetype, such as INDEXED_EXTRACTIONS = CSV, HEADER_FIELD_LINE_NUMBER = 1, FIELD_NAMES = field1,field2,field3, and various others in props.conf (under the Splunk Add-on for AWS), but without success. The only "workaround" is to use REPORT-extract_fields in props.conf for that sourcetype and configure it in transforms.conf, but this is not ideal. Additionally, I've set the sourcetype to csv (the default Splunk sourcetype) in inputs.conf, but this also does not seem to work.

Splunk 9.2.1
Splunk Add-on for AWS 7.7.0

Similar questions without a proper answer:
https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-Add-on-for-Amazon-Web-Services-How-to-get-a-CSV-file/m-p/131725
https://community.splunk.com/t5/Splunk-Enterprise/Splunk-Add-on-for-AWS-Ingesting-csv-files-and-he-fields-are-not/td-p/656923
https://community.splunk.com/t5/All-Apps-and-Add-ons/S3-bucket-with-CSV-files-not-extracting-fields-at-index-time/m-p/458671
https://community.splunk.com/t5/Getting-Data-In/No-fields-or-timestamps-extracted-when-indexing-TSV-from-S3/m-p/660436
https://community.splunk.com/t5/Getting-Data-In/Why-is-CSV-data-not-getting-parsed-while-being-monitored-on/td-p/275515
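For reference, the REPORT-based workaround mentioned above looks roughly like this on the search head (the transform name and field names are assumptions):

    # props.conf
    [aws:s3:csv]
    REPORT-extract_fields = aws_s3_csv_fields

    # transforms.conf
    [aws_s3_csv_fields]
    DELIMS = ","
    FIELDS = field1,field2,field3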
That's what I'm finding as well. I'm curious if there's a roundabout way to do this. Maybe using that string as a token in a dashboard?
Hi, I modified the props.conf as recommended and no change; the time is still being taken as ingest time:

    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    TIME_PREFIX = time\=\"
    TIME_FORMAT = "%d/%m/%Y %H:%M:%S"
    MAX_TIMESTAMP_LOOKAHEAD = 27
    CHARSET = UTF-8
    KV_MODE = none
    DISABLED = false

Any other ideas?
TIME_PREFIX is a regex match and they can get touchy sometimes. I would force the = and the " to be escaped, so: TIME_PREFIX = time\=\". Then I would take advantage of MAX_TIMESTAMP_LOOKAHEAD; although it should be inherited from the default, I always like to put it in my app when I have multiple timestamps in the raw data.
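Putting both suggestions together, the stanza would look something like this (stanza name is a placeholder; note the unquoted TIME_FORMAT, which is what made the difference in the thread above):

    [your_sourcetype]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    TIME_PREFIX = time\=\"
    TIME_FORMAT = %d/%m/%Y %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 27
    CHARSET = UTF-8
    KV_MODE = none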
Thank you for your response. I am uploading the btool output for splunkd.
I am, admittedly, quite new to Splunk... but the requirement as posted is just a picture. Are you asking whether Splunk is capable of providing dashboards for each of the categories and what those search queries would look like? Or do you already have that, and you just want to use DS to create that clickable picture to display the associated dashboards? Just wondering...
Thanks for confirming my suspicion. SED'ed a lot!
I've tried searching _internal in several different apps and it all extracted the fields. The field extractions are clearly marked out in props.conf under the Splunk app default directory. I really can't see how that would have been subverted, but a btool output of props.conf for the splunkd stanza would be good.
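Something along these lines should show the effective splunkd stanza and which file each setting comes from:

    $SPLUNK_HOME/bin/splunk btool props list splunkd --debug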
Try something like this (note the rename of assigned_ip to source_ip so the subsearch terms match the firewall field):

    index=firewall
        [ search index=vpn computer_name=Desktop_1
          | table assigned_ip login_time logout_time
          | rename assigned_ip as source_ip
          | rename login_time as earliest
          | rename logout_time as latest ]
    | stats count by destination_ip
Hello! Wanted to ask if anyone has experience with receiving SNMPv2 trap alerts in Splunk 8.2.5 (Win 2019)?

Background: we have an environment monitor device that sends high/low temperature alerts to the local SNMP Trap service; from there they are picked up by a generic WMI SNMP provider, from which Splunk pulls the data.

wmi.conf:

    [WMI:SNMP]
    namespace = \\.\root\snmp
    server = localhost
    interval = 10
    wql = SELECT * FROM SnmpNotification
    disabled = 0
    index = snmpindex
    current_only = 1

The problem we're running into is that when the data is ingested, Splunk has an issue translating the "VarBindList" object it gets from WMI, which contains the SNMP variable binding ("varbind") info that describes the SNMP trap alert from the device (ticks, OID, text message of which alert was tripped).

Sample Splunk search result from "snmpindex" (see VarBindList=<unknown variant result type 8205> below):

    20241007120551.314854
    AgentAddress=10.2.13.19
    AgentTransportAddress=10.2.13.19
    AgentTransportProtocol=IP
    Community=alispub
    Identification=1.3.6.1.4.1.20916.1.13.2.1
    SECURITY_DESCRIPTOR=NULL
    TIME_CREATED=133727763449700336
    TimeStamp=1894
    VarBindList=<unknown variant result type 8205>
    wmi_type=SNMP
    host=MS
    source=WMI:SNMP
    sourcetype=WMI:SNMP

We've been trying various Splunk configs/transforms, XML, etc., but all of them are basically contingent on getting good data into "_raw", and the "_raw" column just has that message. The RoomAlert 3S device we need to upgrade to only sends SNMPv2 or v3. Everything seems to work fine when the trap is v1 (from past behavior/our test utility).
I'm still learning Splunk and would like to learn how to combine some searches.

Goal: Use the VPN search results to perform firewall searches according to how many VPN records are found.

Example:

1. Search the vpn index to get a table of assigned_ip and the login/logout times:

    index=vpn computer_name=Desktop_1
    | table assigned_ip login_time logout_time

    assigned_ip     login_time   logout_time
    10.255.111.112  1728409500   1728459000
    10.255.119.199  1728392083   1728401383

2. Use the result above to do a firewall search. (I'd like to use the results from step 1 instead of the hardcoded values. I also want to append the separate rows found in step 1 to find firewall records during the different IP assignments.)

    index=firewall source_ip=10.255.111.112 earliest=1728409500 latest=1728459000
    | append
        [ search index=firewall source_ip=10.255.119.199 earliest=1728392083 latest=1728401383 ]
    | stats count by destination_ip

The closest I've gotten so far is using separate subsearch returns, which takes longer to run and doesn't seem to return more than one value:

    index=firewall
        source_ip=[ search index=vpn computer_name=Desktop_1 | return $assigned_ip ]
        latest=[ search index=vpn computer_name=Desktop_1 | return $logout_time ]
        earliest=[ search index=vpn computer_name=Desktop_1 | return $login_time ]
    | stats count by destination_ip

Is there a way to do this? I also tried to use tojson(), but it returns each table row as its own JSON object that I can't use together for the firewall search. Thank you so much in advance.
I tried to run the Indexing Performance: Instance dashboard but was not getting any data. On exploring the search, I found out that index=_internal is not doing the field extractions for this data in the log:

    group=per_host_thruput, ingest_pipe=1, series="splunkserver.local", kbps=8.451, eps=32.903, kb=261.974, ev=1020, avg_age=2.716, max_age=3

If I manually extract the fields using rex I can view them in the search, but the dashboard still doesn't show the results. Is there a way to extract these fields for the internal index? Thanks!
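For reference, a manual rex along these lines works in search (a sketch only; the dashboard itself still relies on the built-in automatic extractions, so this does not fix the dashboard):

    index=_internal sourcetype=splunkd group=per_host_thruput
    | rex "series=\"(?<series>[^\"]+)\",\s+kbps=(?<kbps>[\d.]+),\s+eps=(?<eps>[\d.]+)"
    | timechart avg(kbps) by series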
Just want to say I love your extension and use it everywhere I can.
We have some events coming into Splunk that look like the following:

    time="09/10/2024 11:41:15" URL="[Redacted String]" Name="[Redacted String]" Issuer="[Redacted String]" Issued="27/10/2023 13:27:22" Expires="26/10/2025 12:27:22"

Splunk is using ingest time instead of the time field. In props.conf for this sourcetype I have the following:

    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    TIME_PREFIX = time=
    TIME_FORMAT = "%d/%m/%Y %H:%M:%S"
    CHARSET = UTF-8
    KV_MODE = none
    DISABLED = false

However, the time isn't being extracted properly. What do I need to change or add? Thanks.
Sorry, my notebook ran out of battery. To test the dashboard, you only have to enter the IP (range) with either the prefix = or != to blacklist or whitelist the IP (range). The entered value in the text box will be passed to the multiselect field. For the multiselect input, you only have to change the prefix from "clientip" to the field you want to filter on. The search in the search panel can be replaced by your own search. That should be enough to verify whether it is a proper solution for your problem.
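As a rough illustration of that wiring, a Simple XML skeleton could look like this. It is not the original dashboard: the token names, index, and field are assumptions, and the text-box-to-multiselect hand-off is collapsed into a single change handler here.

    <form>
      <fieldset submitButton="false">
        <!-- Type e.g. =192.168.1.0/24 or !=10.0.0.5; the handler builds the filter term -->
        <input type="text" token="ip_text">
          <label>IP or range (prefix with = or !=)</label>
          <change>
            <!-- Swap "clientip" for whatever field you want to filter -->
            <set token="ip_filter">clientip$value$</set>
          </change>
        </input>
      </fieldset>
      <row>
        <panel>
          <table>
            <search>
              <!-- Replace with your own search; $ip_filter$ expands to e.g. clientip=192.168.1.0/24 -->
              <query>index=web $ip_filter$ | stats count by clientip</query>
            </search>
          </table>
        </panel>
      </row>
    </form>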