Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, in Splunk Enterprise Security, in order to embed field values in a title we need to use "$fieldname$", but in the ITSI documentation I can see that it is "%fieldname%". Does the way of representing field values in the title differ between the two products? Thanks, BK
Hey all, I recently installed/configured the Microsoft Teams Add-on in an attempt to ingest call logs and meeting info from Microsoft Teams. I have run into an issue I was hoping someone could help with or shed some light on.

Add-On Version: 1.02
Splunk Version: 7.3.4
App is installed on a HF.

I have followed the setup instructions and have the Subscription, User Reports, Call Reports, and Webhook all set up in the inputs section of the app. It appears, though, that the only thing working is the User Reports. I have granted all of the required permissions in Teams\Azure per the documentation. The _internal logs don't give a whole lot of information indicating what the issue might be, even with DEBUG logging enabled for the app. The only things I am seeing in the logs indicating an issue were this:

127.0.0.1 - splunk-system-user [30/Jun/2020:09:05:36.213 -0500] "GET /servicesNS/nobody/TA_MS_Teams/properties/TA_MS_Teams HTTP/1.1" 404 144 - - - 0ms

And this:

2020-06-30 09:25:43,189 ERROR pid=107176 tid=MainThread file=base_modinput.py:log_error:309 | Could not create subscription: 400 Client Error: Bad Request for url: https://graph.microsoft.com/beta/subscriptions

The documentation also mentions a webhook, and I am a little confused as to where that webhook resides. Is it in Teams itself, or where the app is installed? It seems like the webhook is in the app on the HF, based on how the documentation reads? Any help would be greatly appreciated. Thanks, Andrew
Hello all, I have a search head cluster where everything is working well. We have one member of the cluster where the mgmt_uri needs to be changed. When I change the mgmt_uri to the new address in server.conf and restart that search head, it comes back up, but the cluster shows the old URI in use and that member as being down. I can switch it back to the old URI and it shows as being back up. What is the best way to change the mgmt_uri of this search head? Thanks!
Hi Team, I have created a connection for an Oracle DB in the DB Connect app. When I run the SQL query in DB Connect, I can see the result. But when I try to search for the same result on the search head using the index and source, I get no results.
Following the best practices for removing an LDAP user, I am at the stage where I want to remove the $HOME/splunk/etc/users/$userid folder. I have tried deleting it from the cluster captain, but this does not seem to be distributed across the cluster. Do I need to delete it from each member individually? Thanks.
I am looking for a solution for my current environment:

- Data resides on AWS S3. This data is from various sources, and we collect it into AWS S3 buckets.
- We are planning to install a HF under the same AWS account where the data is available on S3. This data should be pulled from S3 into the Heavy Forwarder (HF), and then from the HF it should get ingested into the indexer cluster.
- Since we are getting the data from various different sources, do we need to install individual Splunk apps or add-ons for these data types on the HF? The data may be Cylance, FireEye, etc. Since a couple of these apps require data ingestion directly from the source device, it seems we cannot use them for our purpose.

My question is: should we pull data directly from S3 into the HF and then from the HF into the indexer cluster? Here is a flow to show the end-to-end picture:

AWS S3 (data from sources) ->> AWS SQS ->> HF (with Splunk App for AWS to pull data from AWS SQS) ->> Indexer cluster

Thanks.
Hi,
Splunk server: 7.3.5
snow_ta version: 6.0.0

I'm trying to collect data from the snow cmdb input with the TA, but the following error is shown:

'sys_updated_on' field is not found in the data collected for 'cmdb_ci_productive' input

These are the specific events before the error:

2020-06-26 23:03:46,897 INFO pid=105839 tid=Thread-1 file=snow_data_loader.py:_do_collect:198 | Initiating request to https://xxx.service-now.com/api/now/table/cmdb_ci?sysparm_display_value=all&sysparm_limit=1000&sysparm_exclude_reference_link=true&sysparm_query=sys_updated_on>=2000-01-01+00:00:00^ORDERBYsys_updated_on
2020-06-26 23:03:56,373 INFO pid=105839 tid=Thread-1 file=snow_data_loader.py:_do_collect:251 | Ending request to https://xxx.service-now.com/api/now/table/cmdb_ci?sysparm_display_value=all&sysparm_limit=1000&sysparm_exclude_reference_link=true&sysparm_query=sys_updated_on>=2000-01-01+00:00:00^ORDERBYsys_updated_on
2020-06-26 23:03:56,400 ERROR pid=105839 tid=Thread-1 file=snow_data_loader.py:_write_checkpoint:360 | 'sys_updated_on' field is not found in the data collected for 'cmdb_ci_productive' input. In order to resolve the issue, provide valid value in 'Time field of the table' on Inputs page, or edit 'timefield' parameter for the affected input in inputs.conf file.

I tried the troubleshooting query (from the Splunk add-on documentation) against the query shown in the logs, and I can see the data (the sys_updated_on field) in the web browser:

https://xxx.service-now.com/cmdb_ci.do?JSONv2&sysparm_query=sys_created_on>=2016-01-01+00:00:00^ORDERBYsys_created_on&sysparm_record_count=50

It seems like the configured user has permission to read the data, but for some reason it is not working with the TA. Is there a known issue about this? Did I miss something? Regards,
Hello all, I am hoping for help creating a comma-separated list. I have tried multiple different things, and all have resulted in lists, but never quite what I am needing.

I have a list of email addresses that I need to be listed out, comma separated, so that I can automate a currently manual process of updating a DLP policy. The input data appears as follows:

Email
email1@email.com
email2@email.com
email3@email.com
email4@email.com
email5@email.com
...
...
email1124@email.com
email1125@email.com

The output list that I need, comma separated, needs to be displayed as follows:

EmailAddress
email1@email.com,
email2@email.com,
email3@email.com,
email4@email.com,
email5@email.com,
...
...
email1124@email.com,
email1125@email.com

Note that the list is comma separated, but the final entry does not get a comma. This is because Symantec DLP recognizes the comma separator as signaling an expected new entry; if there is no comma, the final entry is treated as the last entry.

I have tried stats list (this worked); however, it limits the output to 100 values (I have around 1500 email addresses). I know that I could have the limit increased from 100 in limits.conf, but I would like to avoid this just to accomplish this one task.

I have also tried appending the comma with string concatenation; however, it places a comma after the final value in the list:

| eval EmailAddress=Email+","
| table EmailAddress

I have also tried mvjoin, which just creates a giant multivalue field. That would be OK, except some of the email addresses have a - in them, which then line-breaks, resulting in the .csv file being sent out broken, with emails not formatted correctly.

I have also tried delim with a dc and values; however, it also just creates a giant multivalue list with commas at the end of all values, including the final value:

| stats delim="," dc(Email) as EmailAddressCount, values(Email) as EmailAddress
| nomv EmailAddress
| table EmailAddress

Is there a way to create the comma-separated list as requested?
Or is there an easier way to remove the trailing comma from the LAST value? Thank you.
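One sketch that may avoid the trailing comma entirely (assuming the field is named Email as above; note that stats values() deduplicates and sorts, so list() with a raised limit would be needed if duplicates or original order matter): mvjoin only inserts the delimiter between values, never after the last one.

```
| stats values(Email) as EmailAddress
| eval EmailAddress=mvjoin(EmailAddress, ",")
| table EmailAddress
```

Alternatively, a trailing comma produced by concatenation can usually be stripped from a finished string with rtrim(EmailAddress, ",").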
Hi, I am having trouble getting this user input for a drop-down to work, and I am unable to find any errors within my code. Can somebody help me with this error? This is the error I am getting. This is my search query:

(sourcetype="windows event logs" OR sourcetype="General-linux-sql.log" OR sourcetype="csv")
| eval spec_IP=case ([|search sourcetype="General-linux-sql.log"], [| rex field=_raw "\[(?<IP_addr>\d+.\d+.\d+.\d+)\]"], [| search sourcetype="csv"], [| rex field=_raw ",(?<src_ip>\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3}),\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3},,,"], [| search sourcetype="windows event logs"], [| search *"Account Locked"* | rex field=_raw "\[(?<acc_ip>\d+.\d+.\d+.\d+)\]"])
| stats count by Specific_IP
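For what it's worth, case() takes eval expressions, not subsearches, which is likely why the query above errors (and the stats groups by Specific_IP while the eval creates spec_IP). A sketch of one alternative, assuming the rex patterns above are correct for the data: rex leaves its field null when the pattern does not match, so the extractions can run unconditionally and then be merged with coalesce.

```
(sourcetype="windows event logs" OR sourcetype="General-linux-sql.log" OR sourcetype="csv")
| rex field=_raw "\[(?<IP_addr>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\]"
| rex field=_raw ",(?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}),\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3},,,"
| eval spec_IP=coalesce(IP_addr, src_ip)
| stats count by spec_IP
```

The "Account Locked" filter for the Windows events would still need to be applied separately, e.g. in the base search.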
For example, these are some parts of my logs:

sender=xyz(receiver=a, receiver=b)
sender=abc(receiver=a, receiver=d)
sender=xyz(receiver=a)
...more entries

And the result should be something like:

sender=xyz receiver=a
sender=xyz receiver=b
sender=abc receiver=a
sender=abc receiver=d

I am using a radio button as input, so whenever I give an input of receiver=a, it should give me a table like:

sender=abc      1
sender=xyz      2

Need help to write the query!
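A sketch of one way to do this (assuming the raw events look exactly like the samples above; the field names are illustrative): extract all receivers per event into a multivalue field with max_match=0, expand to one row per sender/receiver pair, filter on the chosen receiver, then count by sender.

```
| rex field=_raw "sender\s*=\s*(?<sender>\w+)"
| rex field=_raw max_match=0 "receiver\s*=\s*(?<receiver>\w+)"
| mvexpand receiver
| search receiver=a
| stats count by sender
```

The hard-coded receiver=a would be replaced by the dashboard input's token.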
Good morning all, I have been beating my head against this issue for a week or more. Let me give you the details.

We have one indexer and multiple Universal Forwarders in the field. On one of these forwarders, I am running a scripted input to gather directory data for a file monitoring solution.

inputs.conf:

###### Scripted input to monitor jpeg files
[script://.\bin\dircontents.bat]
disabled = 0
## Run once per minute
interval = 60
sourcetype = Script:dir_files
index = filewatch

dircontents.bat:

@echo off
D:
cd /seed
dir /b

The forwarder gathers this data from the script:

24Aug2017.txt
24Jan2018.txt
28Jul2016.txt
28Jul2016.txt~
29Jan2018.txt
INCHARGE-AM-PM-AL.seedfile
INCHARGE-AM-PM-AZ.seedfile
INCHARGE-AM-PM-GA-FL.seedfile
INCHARGE-AM-PM.seedfile
MitchDRSite.list
rcp.list
TSM-seed.list

This data comes in as one event with multiple lines. I want to break on the line feeds. That sounds simple enough.

props.conf:

[Script:dir_files]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
MAX_EVENTS = 10000
TRUNCATE = 0

After I deploy the configs to the UF, the data still comes in as a single event with multiple lines. Very frustrating! I have tried many things and changed my regex around, and I just cannot find the solution. Any help would be appreciated at this time. Let me know what you think. Rcp
Hello, I currently get CSV results from a daily import into Splunk. The first field is a serial number, in a field called "Serial Number", formatted like "xxx-xxx-xxx"; it is the first field in the raw data results.

I have a lookup called SerialNumber that has a series of serial numbers in the same format that I want to check for in the daily report. I have tested the lookup alone in Splunk and it works fine. It has about 20 serial numbers that I want to check for in the daily results. If there is a match, just return the serial number (or true):

index="blah" sourcetype="blah:csv" [ | inputlookup SerialNumber ] fields

Thanks for your help.
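A hedged sketch of how this is often done (assuming the lookup column is also named "Serial Number"; the field names emitted by the subsearch must match the field names extracted from the events for the generated filter to match anything):

```
index="blah" sourcetype="blah:csv"
    [ | inputlookup SerialNumber | fields "Serial Number" ]
| table "Serial Number"
```

If the lookup column has a different name, rename it inside the subsearch, e.g. | rename serial AS "Serial Number" (serial being a placeholder).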
Hi everyone, could you please help me find the issue with my Splunk instance? I am not getting email from Splunk for a scheduled report/alert.

Splunk version: 6.6.2
The SMTP server is configured on that local server, and port 25 is open.

1. For testing, I triggered an email from PowerShell; in this case I do get the email:

$MailparamsDisable = @{
    'To' = 'himanshu.b.shekhar@xyz.com'
    'Subject' = 'xyz account maintenance'
    'From' = 'IAM.Admin@xyz.com'
    'SmtpServer' = 'xyz.abc.com'
    'Body' = "Hi Splunk"
}
Send-MailMessage @MailparamsDisable

2. But from Splunk I am not getting email:

index=_internal | head 5 | sendemail to="himanshu.b.shekhar@xyz.com" server="xyz.abc.com:25" subject="Here is an email notification" message="This is an example message" sendresults=true inline=true format=raw sendpdf=true

Search Factory: Unknown search command 'sendemail'.

What could be the issue? Please help me with this. Thanks in advance.

Thank you,
Himanshu
I've been noticing some bundle distribution errors recently on one of my search heads. This search head is part of a 3-server search head cluster connecting to an indexer cluster. The message I am seeing is this:

06-30-2020 08:04:57.184 -0400 ERROR DistributedBundleReplicationManager - HTTP response code 400 (HTTP/1.1 400 Bad Request). Error applying delta=/opt/splunk/var/run/searchpeers/CD224AB5-4AD5-4680-BA13-01234645B8B6-1593518595-1593518680.delta, searchHead=CD224AB5-4AD5-4680-BA13-01234645B8B6, prevTime=1593518595, prevChksum=730910445532009996, curTime=1593518680, curChksum=3452252417076391070: Error copying /opt/splunk/var/run/searchpeers/CD224AB5-4AD5-4680-BA13-01234645B8B6-1593518595/indexing_tokens to /opt/splunk/var/run/searchpeers/CD224AB5-4AD5-4680-BA13-01234645B8B6-1593518680.2346fcb2e8de1493.tmp/indexing_tokens. 1 errors occurred. Description for first 1: [{operation:"stat'ing source directory", error:"No suc... (message truncated, search splunkd.log for "distribution_error" to see it in full).

I haven't had much luck finding anything on this on Answers or just searching for the issue. I've gone to splunkd.log to search for this message, but it actually shows as truncated at the source too, so there is no "seeing it in full" there; I may need to turn up the logging level for it. Just wanted to see if anyone has any thoughts or has run into this before.
Hi, I'm trying to set up a DNS lookup following the instructions here:

https://docs.splunk.com/Documentation/Splunk/8.0.4/Knowledge/Configureexternallookups#External_lookup_example

But there is no external_lookup.py in $SPLUNK_HOME/etc/system/bin/. Is there a chance to get external_lookup.py anywhere else? I'm running Splunk Enterprise 8.0.4 on SLES 12.
Why did I get a warning on my real-time rolling-window alert? This is my alert configuration:
I have to write a query for extracting the values from a multivalue field. Example field:

Region=America, Africa
Region=Asia
Region=America, Asia

I want a table like this:

Region     Count
America    2
Asia       2
Africa     1

I have used the split command:

eval temp=split(Region, ",")

Now what is happening is that it is only giving me a count of Asia=1. Need a little help :)
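The count likely collapses because, after split, the values still live in one multivalue field per event. A sketch (assuming the field is named Region and looks like the samples above; trim handles the space after the comma):

```
| eval Region=split(Region, ",")
| mvexpand Region
| eval Region=trim(Region)
| stats count by Region
```

mvexpand turns each multivalue event into one row per value, so every region is counted individually.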
Hi Splunk folks,

We have Splunk physical servers with 8GB of disk space for the /opt folder, which frequently reaches 90% of the disk space threshold (7.2GB). Since we cannot easily upgrade the disk space because these are physical servers, we are looking for files that we can remove or migrate.

We found this "/opt/splunk/var/lib/splunk/fishbucket/splunk_private_db/save" folder (1GB in size) that seems to contain the same files (btree_index.dat, btree_records.dat and snapshot) as its parent folder (/opt/splunk/var/lib/splunk/fishbucket/splunk_private_db).

Our questions are: what do these Splunk files do, and is it safe to delete them or move them to another folder to free some disk space on /opt?

Here are the commands we used to check which files consume a large volume of disk space:

-bash-4.2$ df -h /opt/splunk
Filesystem           Size Used Avail Use% Mounted on
/dev/mapper/vg00-opt 8.0G 6.5G 1.6G  82%  /opt

-bash-4.2$ du -h --max-depth=1 /opt/splunk/var/lib/splunk/fishbucket/splunk_private_db
1001M /opt/splunk/var/lib/splunk/fishbucket/splunk_private_db/save (has consumed the most disk space)
335M  /opt/splunk/var/lib/splunk/fishbucket/splunk_private_db/snapshot
1.7G  /opt/splunk/var/lib/splunk/fishbucket/splunk_private_db/ (total)

If we look inside the "save" folder under /opt/splunk/var/lib/splunk/fishbucket/splunk_private_db, we can see it has the same files (btree_index.dat, btree_records.dat and snapshot). Thus it might just be a backup of splunk_private_db:

-bash-4.2$ ls -l /opt/splunk/var/lib/splunk/fishbucket/splunk_private_db
-rw-------. 1 splunk splunk 104865400 Jun 24 04:52 btree_index.dat
-rw-------. 1 splunk splunk 246211800 Jun 24 04:56 btree_records.dat
drwx------. 3 splunk splunk 79 Jun 24 04:49 save
drwx------. 2 splunk splunk 70 Jun 24 04:49 snapshot

-bash-4.2$ ls -l /opt/splunk/var/lib/splunk/fishbucket/splunk_private_db/save
-rw-------. 1 splunk splunk 152715440 Nov 22 2019 btree_index.dat
-rw-------. 1 splunk splunk 371572840 Nov 22 2019 btree_records.dat
drwx------. 2 splunk splunk 70 Nov 22 2019 snapshot

Regards,
John Kevin Aracan
I have been working for the past 2 weeks on getting PowerShell scripts and cron jobs stable, but I find that when I run 6 scripts from inputs.conf, it causes issues on some of the scripts randomly. One of the errors is "Collection was modified; enumeration operation may not execute". The thing is, each scripted input runs the same PS1 file with almost identical parameters, and when I restart the service the same error can come from one of the other scripts rather than the one that just failed.

At the moment I have been able to fix the issue by adding a sleep in front of every script, and this seems to work, but it is not the ideal solution. Below is the debug log with the sleep in place. Basically it is a script using dbatools that runs some queries; I do an import of dbatools at the beginning, and the output also looks good in Splunk when the script runs successfully.

I tried to reproduce it in ISE, where I added all the same script lines as in inputs.conf, but I am unable to reproduce it no matter how much I want it to fail. I also tried some different approaches: collecting all the script output into a variable and writing it at the end, dot-sourcing the script, and running it as an isolated function with a PowerShell command. The only thing that makes it stable is the sleep.
06-30-2020 10:45:00.0225972+2 INFO Start executing script=sleep 3; ."C:\Program Files\SplunkUniversalForwarder\etc\apps\xxx\Bin\Get-NCSQLDiagsplunk.ps1" -QueryIDs 35 -SqlInstance localhost -SplunkEnable for stanza=NCSQLDiag35
06-30-2020 10:45:00.0235980+2 INFO Start executing script=sleep 4; ."C:\Program Files\SplunkUniversalForwarder\etc\apps\xxx\Bin\Get-NCSQLDiagsplunk.ps1" -QueryIDs 46 -SqlInstance localhost -SplunkEnable for stanza=NCSQLDiag46
06-30-2020 10:45:01.3045026+2 INFO End of executing script=sleep 1; ."C:\Program Files\SplunkUniversalForwarder\etc\apps\xxx\Bin\Get-NCSQLDiagsplunk.ps1" -QueryIDs 27 -SqlInstance localhost -SplunkEnable for stanza=NCSQLDiag27, execution_time=1.2849084 seconds
06-30-2020 10:45:02.2756997+2 INFO End of executing script=sleep 2; ."C:\Program Files\SplunkUniversalForwarder\etc\apps\xxx\Bin\Get-NCSQLDiagsplunk.ps1" -QueryIDs 34 -SqlInstance localhost -SplunkEnable for stanza=NCSQLDiag34, execution_time=2.2541032 seconds
06-30-2020 10:45:03.3188309+2 INFO End of executing script=sleep 3; ."C:\Program Files\SplunkUniversalForwarder\etc\apps\xxx\Bin\Get-NCSQLDiagsplunk.ps1" -QueryIDs 35 -SqlInstance localhost -SplunkEnable for stanza=NCSQLDiag35, execution_time=3.295233 seconds
06-30-2020 10:45:04.1506740+2 INFO End of executing script=sleep 4; ."C:\Program Files\SplunkUniversalForwarder\etc\apps\xxx\Bin\Get-NCSQLDiagsplunk.ps1" -QueryIDs 46 -SqlInstance localhost -SplunkEnable for stanza=NCSQLDiag46, execution_time=4.127076 seconds
06-30-2020 10:45:07.3460828+2 INFO End of executing script=sleep 6; ."C:\Program Files\SplunkUniversalForwarder\etc\apps\xxx\Bin\Get-NCSQLDiagsplunk.ps1" -QueryIDs 102 -SqlInstance localhost -SplunkEnable for stanza=NCSQLDiag102, execution_time=7.3264873 seconds
Hi experts, I have a search query that gives me a result table like below:

Employee  Salary
A         1000
B         2000
C         0

How can we trigger an alert when one of our employees' salary equals zero or a specific number?
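A sketch of the usual pattern (assuming your existing search produces the Employee/Salary table above): filter the results down to the offending rows and let the alert fire on any result.

```
<your existing search>
| where Salary=0
```

Save this as an alert with the trigger condition "Number of results is greater than 0". To alert on a specific number instead, extend the filter, e.g. | where Salary=0 OR Salary=12345 (12345 being a placeholder).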