All Topics



Still new to Splunk, seeking some help. I have an index=account_Information with account_number, cell_number, etc. I want to list each account_number and its associated cell_number. I have a list of hundreds of account numbers in a CSV file. I uploaded the CSV file, but how do I use it? My current search (how do I replace the ORs?):

  index=account_Information account_Number_1 OR account_Number_2 OR account_number_3 ... | table account_number cell_number

Thanks a lot.
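A minimal sketch of one way to do this, assuming the uploaded lookup file is named account_numbers.csv and its column header is account_number (both names are assumptions): a subsearch over inputlookup expands into the equivalent OR list automatically.

  index=account_Information [ | inputlookup account_numbers.csv | fields account_number ]
  | table account_number cell_number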
Hi, I've got the latest Splunk Stream app installed and configured to accept NetFlow v9 events from my router, and this part actually works fine. However, when I start digging into the more "useful" fields, it seems I'm missing a few. I would expect the Stream app to cope with everything "standard" within v9/IPFIX packets/templates. Going to the Stream app -> Configuration -> Configure Stream -> Netflow -> "Edit" gives a nice list of about 156 fields, all of which I enabled.

Now I've taken a Wireshark capture of the v9 data arriving at my Splunk server, and the template contains these fields:

  Field (20/23): postNATSourceIPv4Address
    Type: postNATSourceIPv4Address (225)
    Length: 4
  Field (21/23): postNATDestinationIPv4Address
    Type: postNATDestinationIPv4Address (226)
    Length: 4
  Field (22/23): postNAPTSourceTransportPort
    Type: postNAPTSourceTransportPort (227)
    Length: 2
  Field (23/23): postNAPTDestinationTransportPort
    Type: postNAPTDestinationTransportPort (228)
    Length: 2

A typical populated capture looks like this:

  Cisco NetFlow/IPFIX
    Version: 9
    Count: 7
    SysUptime: 873590.040000000 seconds
    Timestamp: Jun 22, 2020 10:03:08.000000000 CEST
    CurrentSecs: 1592812988
    FlowSequence: 22
    SourceId: 0
    FlowSet 1 [id=256] (7 flows)
      FlowSet Id: (Data) (256)
      FlowSet Length: 532
      [Template Frame: 1]
      Flow 1
        [Duration: 0.000000000 seconds (switched)]
        StartTime: 873528.130000000 seconds
        EndTime: 873528.130000000 seconds
        Packets: 1
        Octets: 86
        InputInt: 15
        OutputInt: 14
        SrcAddr: IP.OF.INTERNAL.PC
        DstAddr: SOME.PUBLIC.ISP.DNSADDRESS
        Protocol: UDP (17)
        IP ToS: 0x00
        SrcPort: 51020 (51020)
        DstPort: 53 (53)
        NextHop: SOME.PUBLIC.ISP.DNSADDRESS
        DstMask: 0
        SrcMask: 0
        TCP Flags: 0x00
        Destination Mac Address: Router12_12:12:c6 (61:3c:61:31:11:b1)
        Source Mac Address: ASRockIn_84:01:36 (d0:50:99:84:01:36)
        Post Source Mac Address: 00:00:00_00:00:00 (00:00:00:00:00:00)
        Post NAT Source IPv4 Address: MY.PUBLIC.ISP.ADDRESS
        Post NAT Destination IPv4 Address: SOME.PUBLIC.ISP.DNSADDRESS
        Post NAPT Source Transport Port: 51020
        Post NAPT Destination Transport Port: 53

Looking again at my Splunk Enterprise installation, there is a vocabulary directory under /opt/splunk/etc/apps/Splunk_TA_stream/default/vocabularies. It contains the netflow.xml file with (what I think is) every field that can be interpreted/decoded as the NetFlow packets arrive:

  <Term id="netflow.postNATSourceIPAddress">
  <Term id="netflow.postNATDestinationIPAddress">
  <Term id="netflow.postNAPTSourceTransportPort">
  <Term id="netflow.postNAPTDestinationTransportPort">

So these four fields seem to already be part of the default vocabulary, yet they never show up as accessible fields in Splunk? In a moment of madness, I edited the file below and made some additions (router = Mikrotik = IANA Vendor ID 14988):

  user@splunky:/opt/splunk/etc/apps/Splunk_TA_stream/default# more streamfwd.conf
  [streamfwd]
  port = 8889
  ipAddr = 127.0.0.1
  netflowReceiver.0.ip = IP.OF.MY.SPLUNK
  netflowReceiver.0.port = 9995
  netflowReceiver.0.decoder = netflow
  netflowElement.0.enterpriseid = 14988
  netflowElement.0.id = 225
  netflowElement.0.termid = netflow.postNATSourceIPAddress
  netflowElement.1.enterpriseid = 14988
  netflowElement.1.id = 226
  netflowElement.1.termid = netflow.postNATDestinationIPAddress
  netflowElement.2.enterpriseid = 14988
  netflowElement.2.id = 227
  netflowElement.2.termid = netflow.postNAPTSourceTransportPort
  netflowElement.3.enterpriseid = 14988
  netflowElement.3.id = 228
  netflowElement.3.termid = netflow.postNAPTDestinationTransportPort

...then stopped/started Splunk, but these fields don't show up among the 156 possible fields under the "stream config" tab (see earlier).

To cut a long story short: Where are these fields? Why are they not showing up, since they are hitting the Splunk Stream app and seem to be "known"?

Thanks!
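One thing worth checking, offered as a hedged suggestion rather than a confirmed fix: Splunk overlays an app's local/ directory on top of default/, and default/ files are replaced on app upgrade, so custom netflowElement entries are normally added in a local/streamfwd.conf instead of edited into the default file. Something like (path assumed; element values copied from the question):

  # /opt/splunk/etc/apps/Splunk_TA_stream/local/streamfwd.conf
  [streamfwd]
  netflowElement.0.enterpriseid = 14988
  netflowElement.0.id = 225
  netflowElement.0.termid = netflow.postNATSourceIPAddress
  # ...elements 1-3 follow the same pattern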
  _time                     SubjectUserName   TargetOutboundUserName   host     IpAddress
  Sun Jun 21 08:37:39 2020  bcharlie          bcharliex                by-100   ::1
  Sun Jun 21 08:37:03 2020  bcharlie          bcharliex                by-100   ::1

I need to exclude search results where TargetOutboundUserName is just SubjectUserName with an "x" appended -- TargetOutboundUserName will always be SubjectUserName + "x". How would I write that out?
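A minimal sketch of one way to express that in SPL, where the dot is eval's string-concatenation operator (field names taken from the question):

  ... | where TargetOutboundUserName != SubjectUserName . "x"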
Hello there - I didn't get a response to an email to Splunk support with the following, so I thought I'd try here:

One of our mutual customers reported that Splunk's app to integrate with Pivotal Tracker used to live here: https://splunkbase.splunk.com/app/1584/ but now it's gone. Any plans to bring that back?

If not, is this where we should direct customers? https://docs.splunk.com/Documentation/MINTMgmtConsole/1.0/UserGuide/Integratewithdevelopertools#Pivotal_Tracker

Thanks!
Joanne - from the Pivotal Tracker team
I have updated the RWI Executive dashboard to the latest version. When I launch the app, it states that I need to continue configuring the app. When I press the "continue to app setup page" button, it goes to the next page; however, the page is blank. Has anyone else experienced this issue?
Years back, the outputlookup command would create a CSV lookup file in the user's app folder, making it private and owned by the user who ran the command. Now the lookup gets created with "No owner" and inherits the permissions of the app. Allowing users to run the outputlookup command this way can lead to a massive number of public lookup files without owners. Additionally, anyone with write permissions to the app can then overwrite another user's lookup file. What conf settings can be modified so that the outputlookup command creates the lookup file as a privately owned object (the owner being the user who ran the command)?
Within Splunk Cloud, is there a way I can view a list of the heavy or universal forwarders sending data to it?
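A hedged sketch of one common approach, using the forwarder connection metrics that indexers write to _internal (the tcpin_connections group and its hostname/fwdType/version fields are standard metrics.log content):

  index=_internal source=*metrics.log* group=tcpin_connections
  | stats latest(fwdType) as forwarder_type latest(version) as version by hostname, sourceIp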
Observation: Suddenly the search head stopped cleaning the jobs in the dispatch directory (/opt/splunk/var/run/splunk/dispatch). We found jobs older than one year. Because of that, the /opt partition filled up with huge jobs, nearly 97,021 of them in the dispatch directory; the available free space on /opt dropped below 5 GB; the dashboard CSV lookups failed; and we were not able to perform read operations on the /opt partition.

Action taken: We manually cleaned the jobs older than one day. The search head then continued to run normally: (i) we are able to perform CSV lookups, and (ii) it started cleaning jobs older than one hour automatically -- we have observed this for the last 10 days. The dispatch directory now always contains about 201 jobs.

Questions:
(1) We are not able to identify the root cause of the sudden spike to 97,021 jobs in the dispatch directory. Can you please help us understand how this could occur?
(2) Searching the Splunk forums, we found a few solutions and partially automated them with the script below. (a) The logic for cleaning failed jobs seems fine. (b) We are not sure about the right procedure for blindly cleaning the remaining jobs on demand. Is there a mechanism to inspect the files in a job directory, identify the job as no longer needed, and clean it?
(3) Is there any other way to clean them automatically?

Partially automated cleaning script:

  #!/usr/bin/env bash
  SPLUNK_HOME=/opt/splunk
  SPLUNK_DISPATCH_DIR=$SPLUNK_HOME/var/run/splunk/dispatch
  CLEAN_UP_OLD_JOBS_IN_DAYS=1

  # Clean the failed jobs.
  # If a subdirectory of $SPLUNK_DISPATCH_DIR is beyond 24 hours of its last
  # modtime and does not contain both info.csv and status.csv, that job is
  # considered a failed search job.
  find "$SPLUNK_DISPATCH_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +$CLEAN_UP_OLD_JOBS_IN_DAYS | while read -r job; do
      if [ ! -e "$job/info.csv" ] && [ ! -e "$job/status.csv" ]; then
          rm -rfv "$job"
      fi
  done

  # Clean the Splunk home partition on demand.
  # Remove jobs whose last-modified time is greater than 24 hours when the
  # splunk-home disk space is less than 5 GB and there are more than 500 jobs.
  BLOCK_SIZE_IN_BYTES=1024
  FIVE_GB_IN_KILOBYTES=5242880
  SPLUNK_JOBS_MAX_COUNT_LIMIT=500
  SPLUNK_HOME_FS_MOUNT_AVAILABLE_SIZE=$(df -P --block-size=$BLOCK_SIZE_IN_BYTES /opt/splunk/ | tail -1 | awk '{print $4}')
  SPLUNK_JOBS_COUNT=$(ls "$SPLUNK_DISPATCH_DIR" | wc -l)
  if [ "$SPLUNK_HOME_FS_MOUNT_AVAILABLE_SIZE" -lt $FIVE_GB_IN_KILOBYTES ] && [ "$SPLUNK_JOBS_COUNT" -gt $SPLUNK_JOBS_MAX_COUNT_LIMIT ]; then
      find "$SPLUNK_DISPATCH_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +$CLEAN_UP_OLD_JOBS_IN_DAYS | while read -r job; do
          rm -rfv "$job"
      done
  fi
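To investigate the root cause, one hedged starting point is to check which scheduled searches run most often, since each run leaves a dispatch artifact until its TTL expires (sourcetype and field names per the standard _internal scheduler log):

  index=_internal sourcetype=scheduler status=success
  | stats count by savedsearch_name, app, user
  | sort - count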
Given the following indexes.conf configuration, are my calculations and thoughts correct?

  [default]
  # set TSIDX settings for Enterprise Security Data Models (this overrides the default of $SPLUNK_DB)
  # tstatsHomePath = volume:primary/$_index_name/datamodel_summary
  # Default for each index. Can be overridden per index based upon the volume of data received by that index.
  tstatsHomePath = volume:_splunk_summaries/$_index_name/datamodel_summary
  maxTotalDataSizeMB = 300000000
  # 365 days
  frozenTimePeriodInSecs = 31536000
  maxWarmDBCount = 4294967295
  lastChanceIndex = orphan_events

  [volume:primary]
  path = /splunkdata
  maxVolumeDataSizeMB = 17408000

  [volume:cold]
  path = /splunkdata_cold
  maxVolumeDataSizeMB = 31457280

  # Tstats Home Path
  [volume:_splunk_summaries]
  path = /splunkdata

  ####custom####
  [main]
  repFactor = auto
  maxDataSize = auto_high_volume
  maxHotBuckets = 10
  #maxMemMB = 20
  coldPath = volume:cold/main/colddb
  homePath = volume:primary/main/db
  thawedPath = $SPLUNK_DB/main/thaweddb

In the worst case, with only full-size buckets, the bucket-count limits alone would allow the hot+warm portion of the main index to reach (maxWarmDBCount + maxHotBuckets) * maxDataSize = (4,294,967,295 + 10) * 10 GB, an absurdly large number, so the effective cap is maxVolumeDataSizeMB = 17408000 (about 17 TB) on volume:primary.

If data isn't rolling to cold soon enough, is it most likely because these thresholds haven't been met yet?
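To sanity-check those numbers against what is actually on disk, a hedged sketch using the standard dbinspect generating command to total bucket sizes by state:

  | dbinspect index=main
  | stats sum(sizeOnDiskMB) as totalMB by state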
Hi,

Splunk_TA_microsoft-cloudservices/bin/splunktamscs/ca_certs_locater.py

  TEMP_CERT_FILE_NAME = 'httplib2_merged_certificates_{}.crt'

Whenever the inputs run, this script creates the cert file under the /tmp/ directory; it never gets deleted, and it is filling up the /tmp/ directory. Is this a bug in the app, or is it required to create a cert file while running inputs?

Regards, Arun Sunny
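Until the root cause is confirmed, a hedged workaround is a periodic cleanup of the leftover files, e.g. from cron (the filename pattern is taken from the script above; the one-day age threshold is an assumption):

  # remove merged-certificate temp files older than one day
  find /tmp -maxdepth 1 -name 'httplib2_merged_certificates_*.crt' -mtime +1 -delete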
Observation: A Nessus scan detected a few certificate errors on the Splunk ports 8089 (management port), 8000 (Web UI), and 8191 (MONGOD). The certificate errors are:
(1) SSL Self-Signed Certificate
(2) SSL Certificate Cannot Be Trusted
(3) SSL Certificate Signed Using Weak Hashing Algorithm
Errors (1) and (2) occur because the certificate is self-signed; error (3) occurs because it is signed with the SHA-1 algorithm.

Action taken: For the Web UI port 8000, we followed the procedure in the link below and resolved "SSL Certificate Signed Using Weak Hashing Algorithm": https://docs.splunk.com/Documentation/Splunk/8.0.4/Security/Self-signcertificatesforSplunkWeb

Issue: Ports 8089 and 8191 seem to use the default certificates and keys present in the directory /opt/splunk/etc/auth/. For a fresh Splunk installation, the default certificates and keys are generated with sha256WithRSAEncryption, which looks good. But on an instance where the same Splunk version was installed a few years back, they are signed with SHA-1. We removed /opt/splunk/etc/auth/server.pem and restarted splunkd; the new server.pem was generated with SHA-256.

Questions:
(1) Besides server.pem, the remaining default certificates in /opt/splunk/etc/auth/ are still signed with SHA-1. How can these be converted to SHA-256? Can you please share the procedure?
(2) Can you please clarify which certificates and keys are used for 8089 and 8191?
(3) We are a licensed Splunk customer. Does Splunk provide a way to sign the certificates and make them trusted?
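A hedged way to check which hash algorithm each file uses is standard OpenSSL inspection (ca.pem is just one example file from that directory):

  openssl x509 -in /opt/splunk/etc/auth/ca.pem -noout -text | grep 'Signature Algorithm'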
My users are reporting that their lookup edits aren't being saved since the upgrade to Splunk Cloud 8.0 a few weeks ago. The error is "You do not have permission to perform this operation" even though they do have access. (They can edit the lookups via inputlookup/outputlookup.)

I saw the wiki troubleshooting doc: https://lukemurphey.net/projects/splunk-lookup-editor/wiki/Troubleshooting

When I execute the recommended query:

  index=_internal (sourcetype="lookup_editor_controller" OR sourcetype=lookup_editor_rest_handler OR sourcetype=lookup_backups_rest_handler)

I see no ERROR events at all -- just INFO events. And I don't see any events (at all) since 6/8, which was three days after the upgrade to 8.x. Does that mean the REST function hasn't been working as of 6/8?
I am using this search in Splunk:

  index=voice sourcetype=voice_cvp source="*ActivityLog*" host="omatelstgcvp4" ",ForbExt_Accept," | table _raw

which results in the following:

  10.217.108.151.1592834757078.388.F,06/22/2020 09:06:22.240,set_COVIDForbExtAccept,custom,ForbExt_accept,978362,4024754759,

I would like it to display only "ForbExt_accept,978362,4024754759," so I can send an alert with this data in a CSV file.
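A minimal sketch of one way to extract just that tail with rex (the field name "activity" is mine, and the pattern assumes the event always ends with "ForbExt_accept" followed by two comma-separated numbers):

  index=voice sourcetype=voice_cvp source="*ActivityLog*" host="omatelstgcvp4" ",ForbExt_Accept,"
  | rex field=_raw "(?i)(?<activity>ForbExt_accept,\d+,\d+,)"
  | table activity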
Hello,

I have an indexer cluster and would like to send ALL data to a non-Splunk third-party endpoint. Is this possible? Below are the props.conf, transforms.conf, and outputs.conf I have; for an indexer cluster, where should I configure these files?

props.conf:

  [syslog]
  TRANSFORMS-routing = routeAll

transforms.conf:

  [routeAll]
  REGEX = (.)
  DEST_KEY = _TCP_ROUTING
  FORMAT = Everything

outputs.conf:

  [tcpout]
  defaultGroup = nothing

  [tcpout:Everything]
  disabled = false
  sendCookedData = false
  server = 10.1.12.1:10514

My concern is that I actually just want to forward all of my data. Is there a particular configuration needed for this, or any ideas?

Many thanks
Willsy
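A hedged sketch of a simpler alternative: if literally all data should go to the third-party endpoint, the _TCP_ROUTING transform (which, as written, only applies to the syslog sourcetype anyway) can be dropped and the output group made the default. On an indexer cluster, the file would normally be distributed via the configuration bundle pushed from the cluster master:

  # outputs.conf (sketch; the IP and port are copied from the question)
  [tcpout]
  defaultGroup = Everything

  [tcpout:Everything]
  disabled = false
  sendCookedData = false
  server = 10.1.12.1:10514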
Hello everyone on the Splunk forum,

I have a problem getting a domain controller (DC) to send logs into my Splunk setup. This DC machine should first send its logs to the intermediate forwarder (IF) tier, which then places the events on the indexers. I have checked the internal logs for this particular machine at the "ERROR" log level. The interesting thing I found is a problem with 'TcpOutputFd'. There are the following messages:

  Connection to host=10.200.80.11:9997 failed. sock_error = 10054. SSL Error = No error
  Connection to host=10.200.80.12:9997 failed. sock_error = 10054. SSL Error = No error
  Connection to host=10.200.80.13:9997 failed

I am not very familiar with managing a distributed Splunk setup -- I am still learning new things. Could you please tell me how I can resolve this problem?

Thanks
BR Dawid
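For reference, Winsock error 10054 is "connection reset by peer". A hedged first check is whether the intermediate forwarders are actually listening on port 9997, e.g. on one of the IF hosts:

  # is anything listening on the splunktcp port?
  netstat -an | grep 9997

  # is a splunktcp input configured on this instance?
  $SPLUNK_HOME/bin/splunk btool inputs list splunktcp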
I have a search query for dest_port=4402. I want to include 4404. What would the syntax for dest_port look like?
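Two equivalent ways to match either port; the IN operator needs a reasonably recent Splunk version, while the OR form works everywhere:

  dest_port=4402 OR dest_port=4404

  dest_port IN (4402, 4404)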
Hello all,

I'm trying to use an eval if/like with JSON-type data (KV_MODE = json), but it seems it's not respecting the command when used on this type of data. I'm searching Nessus data and we are using Splunk_TA_nessus. I'm trying to do something like:

  index=nessusdata sourcetype="tenable:sc:vuln" scan_result_info.name="my scan*"
  | eval newfield=if(like(scan_result_info.name, "my scan%"), "it's working", "it's not working")

All results return "it's not working", meaning the if/like eval isn't working. I've also tried:

  eval a=if(scan_result_info.name like "my scan%", "working", "not working")

Neither works with the Nessus-type data, but everything works when I use the same commands on IIS-type data, so I know that I'm typing the commands correctly. Could someone explain how to get this to work with data where KV_MODE = json? Is there another way to go about this, or am I out of luck with eval if/like against JSON-type data?
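The likely culprit, offered as a hedged pointer: in eval expressions, a field name containing dots must be wrapped in single quotes, or eval parses the dot as its string-concatenation operator. Quoting the field should make the original search behave:

  index=nessusdata sourcetype="tenable:sc:vuln" scan_result_info.name="my scan*"
  | eval newfield=if(like('scan_result_info.name', "my scan%"), "it's working", "it's not working")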
Working on tuning indexers, volumes, and storage. We are seeing indexers fill beyond what we anticipated, so I wanted to reduce maxVolumeDataSizeMB. The docs' wording on this isn't entirely clear. What does "If the size is exceeded, Splunk will remove buckets with the oldest value of latest time (for a given bucket) across all indexes in the volume, until the volume is below the maximum size." mean in reference to maxVolumeDataSizeMB? Does it delete (freeze) the data, or does it roll it to cold?
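One hedged way to see what Splunk actually does when the volume fills is to watch the bucket-management logging on the indexers (BucketMover is a standard splunkd.log component; treating it as the one that logs these moves is an assumption worth verifying):

  index=_internal sourcetype=splunkd component=BucketMover
  | stats count by host, log_level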
Hello! I am building an alert to detect potential password spraying (it looks for 10 or more failed logons within the last 15 minutes, where the username is correct but the password is wrong). It works well; however, there is one issue: if the same user fails to log in many times, it will trigger the alert. I only want a failure to count if the usernames are different. For example, if one user fails to log in 10 times it will NOT alert, but if 10 different users fail to log in once each, it would alert. Below is my syntax:

  index=*-windows-logs EventCode=4625 signature="User name is correct but the password is wrong" Account_Name!=*$
  | stats count by src_ip
  | where count > 10
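A minimal sketch of one way to count distinct usernames per source instead of raw failures (dc() is the standard distinct-count aggregator; field names and the threshold are kept from the question):

  index=*-windows-logs EventCode=4625 signature="User name is correct but the password is wrong" Account_Name!=*$
  | stats dc(Account_Name) as unique_users values(Account_Name) as users by src_ip
  | where unique_users >= 10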
A search head crashed, and the last message in the log file is:

  06-22-2020 11:13:39.341 -0400 WARN  PeriodicReapingTimeout - Spent 20995ms reaping search artifacts in /opt/apps/splunk/var/run/splunk/dispatch

What can it be?