All Topics

Hello, I've created a simple app, let's call it IT_Users_App, linked to a role called it_user. In the app, a user with that role can see hundreds of OOTB dashboards by default. I would like to hide those OOTB dashboards from the app/role in a bulk action; doing so one by one will not be fun. Is there a way to accomplish that? Thanks in advance.
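
One bulk-capable approach worth considering is scripting the permission change through Splunk's REST API, where each view's ACL is writable at data/ui/views/<name>/acl. The sketch below is a minimal, untested outline: the host, credentials, app name, and the choice to leave only the admin role in perms.read are all assumptions to adapt, and it is worth trying it against a single test view first.

import requests

BASE = "https://localhost:8089"          # assumption: local management port
AUTH = ("admin", "changeme")             # assumption: replace with real credentials
APP = "IT_Users_App"

# List every view visible in the app's context.
resp = requests.get(
    f"{BASE}/servicesNS/-/{APP}/data/ui/views",
    params={"output_mode": "json", "count": 0},
    auth=AUTH, verify=False,
)
for entry in resp.json()["entry"]:
    name = entry["name"]
    # Rewrite the view's ACL; perms.read is a comma-separated role list,
    # so dropping it_user from that list hides the dashboard from the role.
    requests.post(
        f"{BASE}/servicesNS/nobody/{APP}/data/ui/views/{name}/acl",
        data={"sharing": "app", "owner": "nobody", "perms.read": "admin"},
        auth=AUTH, verify=False,
    )
    print(f"updated ACL for {name}")

Note that the ACL endpoint requires sharing and owner to be posted along with the permissions, which is why they appear above even though only perms.read changes.
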
Hi All, I have a Blue Coat proxy log source for which I am using the official Splunk add-on. However, I noticed that the timestamp is not being parsed from the logs; the index time is being used instead. To remedy this, I added custom props in ../etc/apps/Splunk_TA_bluecoat-proxysg/local with the following stanza:

[bluecoat:proxysg:access:syslog]
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TIME_PREFIX = ^

The rest of the configuration is the same as in the base app (Splunk_TA_bluecoat-proxysg).

During testing, when I upload logs through Add Data, the timestamp is parsed properly. However, when I start using SplunkTCP to ingest the data, the timestamp extraction stops working. Note that in both scenarios the rest of the parsing configuration (field extraction and mapping) works just fine.

Troubleshooting so far:
1. Checked props with btool; I can see the custom stanza I added.
2. Tried putting the props in ../etc/system/local.
3. Restarted Splunk multiple times.

Any ideas I can try to get this to work, or suggestions on where to look?

Sample log:
2024-12-03 07:30:06 9 172.24.126.56 - - - - "None" - policy_denied DENIED "Suspicious" - 200 TCP_ACCELERATED CONNECT - tcp beyondwords-h0e8gjgjaqe0egb7.a03.azurefd.net 443 / - - "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36 Edg/131.0.0.0" 172.29.184.14 39 294 - - - - - "none" "none" "none" 7 - - 631d69b45739e3b6-00000000df56e125-00000000674eb37e - -

Splunk Search (streaming data): [screenshot]
Splunk Search (uploaded data): [screenshot]
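Two things worth checking, both assumptions about the topology rather than a confirmed diagnosis. First, timestamp extraction happens on the first full Splunk instance that touches the raw data; if a heavy forwarder sits in front of this one, events arrive over SplunkTCP already cooked and the local props are ignored, so the stanza has to live on that upstream instance. Second, if the network path prepends a syslog header, TIME_PREFIX = ^ no longer points at the Blue Coat timestamp. A quick comparison:

# Run on each instance in the path; the stanza must be effective on the one
# that first parses the raw stream.
$SPLUNK_HOME/bin/splunk btool props list bluecoat:proxysg:access:syslog --debug

# If a syslog header precedes the event, anchor past it instead of "^".
# The pattern below is purely hypothetical -- adjust it to the actual header:
# TIME_PREFIX = ^(?:[^ ]+ ){3}
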
Hello there, I'm having issues with a multiselect input dropdown:

<input type="multiselect" token="siteid" searchWhenChanged="true">
  <label>Site</label>
  <choice value="*">All</choice>
  <choice value="03">No Site Selected</choice>
  <fieldForLabel>displayname</fieldForLabel>
  <fieldForValue>prefix</fieldForValue>
  <search>
    <query>| inputlookup site_ids.csv | search displayname != "ABN8" AND displayname != "ABR8" AND displayname != "ABRA7" AND displayname != "ABMAN2"</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
  <delimiter>_fc7 OR index=</delimiter>
  <suffix>_fc7</suffix>
  <default>03</default>
  <initialValue>03</initialValue>
  <change>
    <eval token="form.siteid">case(mvcount('form.siteid') == 2 AND mvindex('form.siteid', 0) == "03", mvindex('form.siteid', 1), mvfind('form.siteid', "\\*") == mvcount('form.siteid') - 1, "03", true(), 'form.siteid')</eval>
  </change>
</input>

<input type="multiselect" token="system_number" searchWhenChanged="true">
  <label>Node</label>
  <choice value="*">All</choice>
  <default>*</default>
  <initialValue>*</initialValue>
  <fieldForLabel>Node</fieldForLabel>
  <fieldForValue>sys_number</fieldForValue>
  <change>
    <eval token="form.system_number">case(mvcount('form.system_number') == 2 AND mvindex('form.system_number', 0) == "*", mvindex('form.system_number', 1), mvfind('form.system_number', "\\*") == mvcount('form.system_number') - 1, "*", true(), 'form.system_number')</eval>
  </change>
  <search>
    <query>| inputlookup node.csv | fields site prefix Node sys_number | eval token_value = "$siteid$" | eval site_val = if(match(token_value, "OR\s*index="), split(replace(token_value, "\s*OR\s*index=\s*", ","), ","), token_value) | where prefix=site_val | dedup Node | table Node sys_number</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <prefix>"</prefix>
  <suffix>"</suffix>
  <valueSuffix>","</valueSuffix>
  <delimiter> </delimiter>
</input>

(Note: the original had a stray comma in 'form.system,_number' inside the second change eval, corrected above.) The problem: the Node input's fieldForLabel is Node, but when I select a value in siteid, then select a value in Node, and then select a second value in siteid, the Node selection changes to display its sys_number instead of its Node label. This only happens after selecting values in Node; if I only change siteid values, Node behaves fine. Otherwise it works. Thanks!
Hello Community, I am trying to create a connection so that I can send metrics arriving on UDP port 8125 on Splunk Enterprise (running locally) to Splunk Cloud (prd-p-7mh2z.splunkcloud.com), but I am getting the error below. Because I need to receive UDP data on port 8125, I am using a heavy forwarder instead of a universal forwarder, and I have configured the heavy forwarder to point to "prd-p-7mh2z.splunkcloud.com:9997".

Error on the dashboard:

The TCP output processor has paused the data flow. Forwarding to host_dest=prd-p-7mh2z.splunkcloud.com inside output group default-autolb-group from host_src=rahusri2s-MacBook-Pro.local has been blocked for blocked_seconds=10. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

# cat /Applications/splunk/etc/system/local/outputs.conf
[tcpout]
defaultGroup = default-autolb-group
indexAndForward = 1

[tcpout:default-autolb-group]
server = prd-p-7mh2z.splunkcloud.com:9997

[tcpout-server://prd-p-7mh2z.splunkcloud.com:9997]

# cat /Applications/splunk/etc/apps/search/local/inputs.conf
[splunktcp://9997]
connection_host = ip

[udp://8125]
connection_host = dns
host = rahusri2s-MacBook-Pro.local
index = 4_dec_8125_udp
sourcetype = statsd

Thanks in advance. #splunk
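
A common cause of this symptom, offered here as an assumption rather than a confirmed diagnosis: most Splunk Cloud stacks only accept forwarded data over TLS using the credentials app downloaded from the Cloud instance (usually named splunkclouduf.spl, and valid on heavy forwarders as well). A hand-written outputs.conf pointing at port 9997 without those certificates is typically rejected, which looks exactly like a blocked TCP output. The documented install step, sketched for this host:

/Applications/splunk/bin/splunk install app /path/to/splunkclouduf.spl -auth admin:changeme
/Applications/splunk/bin/splunk restart

# The credentials app ships its own [tcpout] settings, so the manual
# outputs.conf stanzas above should be removed to avoid conflicts.
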
Hi, we have Splunk Enterprise installed on a Windows computer which does not have direct access to the internet. To access the internet on that computer, we usually open a browser like Chrome or Edge, enter the required website (example: https://www.yahoomail.com), and press Enter. A popup then appears in the browser asking for credentials. This popup shows our internet proxy server URL with its port number, https://myinternetserver01.mydomain.com:4443, and an option to enter a username and password, as attached in the screenshot. Once we enter the credentials, we can browse any website on that computer until we log out. Due to this restriction, we are unable to use some of the Splunk add-ons that require an internet connection. We tried many options using proxy settings, but none of them are working. Can someone please guide us on where to input this internet proxy URL, port, and credentials so that Splunk has a connection to the internet and we can use all Splunk add-ons that need it?
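
For splunkd's own outbound connections (app browsing, update checks), Splunk reads a [proxyConfig] stanza in server.conf; note that many add-ons make their own HTTP calls and have separate proxy fields in their setup pages, which this stanza does not cover. A minimal sketch, assuming the proxy accepts credentials inline in the URL:

# %SPLUNK_HOME%\etc\system\local\server.conf -- restart Splunk afterwards
[proxyConfig]
http_proxy = http://username:password@myinternetserver01.mydomain.com:4443
https_proxy = http://username:password@myinternetserver01.mydomain.com:4443
no_proxy = localhost, 127.0.0.1
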
Thank you in advance for your help, community. I performed the integration of Cisco DNA to Splunk:

1. Created my "cisco_dna" index on my heavy forwarder.
2. Installed the Cisco DNA Center Add-on on my heavy forwarder (https://splunkbase.splunk.com/app/6668).
3. Added the account in the add-on (username, password, host).
4. Activated all the inputs: cisco:dnac:clienthealth, cisco:dnac:devicehealth, cisco:dnac:compliance, cisco:dnac:issue, cisco:dnac:networkhealth, cisco:dnac:securityadvisory.
5. Also created the "cisco_dna" index on my Splunk Cloud instance.
6. Installed the Cisco DNA Center App (https://splunkbase.splunk.com/app/6669).

Done; I started receiving logs in Splunk from Cisco DNA. But when validating the dashboards in the app and reviewing the search results, I noticed that the values of the fields are duplicated. Even if I apply a dedup to any of the fields, the result is "only one duplicate value". This affects me when I have to take a value to perform an operation or build a chart. Does anyone know what causes this problem and how I could solve it?

Cisco DNA Center Add-on [screenshot]
Cisco DNA Center App [screenshot]
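One frequent cause of doubled field values, offered as a hypothesis to verify rather than a confirmed diagnosis: JSON data extracted once at index time by the add-on (INDEXED_EXTRACTIONS = json) and again at search time (the default KV_MODE), yielding every field twice. If the add-on's default/props.conf shows index-time JSON extraction, a search-tier override like the sketch below usually removes the duplicates (the stanza name is one example sourcetype; repeat per sourcetype, and on Splunk Cloud deploy it via an app or Support):

[cisco:dnac:networkhealth]
KV_MODE = none
AUTO_KV_JSON = false
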
Register here. This thread is for the Community Office Hours session on Splunk Application Performance Monitoring on Tue, January 14, 2025 at 1pm PT / 4pm ET.

What can I ask in this AMA?
- How can I send traces to APM?
- How do I track service performance with dashboards?
- What are some tips for setting up deployment environments?
- What are AutoDetect detectors and how can I use them?
- What are best practices for high-value features like Tag Spotlight and Service-Centric views?
- How do I set up business workflows?
- Anything else you'd like to learn!

Please submit your questions at registration. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants. Look forward to connecting!
Hi, we installed the #AbuseIPDB app in our Splunk Cloud instance. I created a workflow action called jodi_abuse_ipdb using the documentation provided in the app:

Label: Check $ip$ with AbuseIPDB
Apply only to: ip
Search string: |makeresults|abuseipdbcheck ip=$ip$

I'd like to be able to use this in a report, but I haven't figured out how to trigger this workflow action to provide results. I've done Google searches and tried a number of things. I am hoping someone in the community might be able to help. Thank you! Jodi
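
Workflow actions only surface in the event-detail menu of the Search app, so a report cannot trigger one directly; for report-style output the app's command has to run in the search itself. A sketch under those assumptions, using map to call abuseipdbcheck once per IP (the index name and src_ip field are placeholders, and the ip=$...$ syntax mirrors the workflow action above):

index=firewall action=blocked
| stats count by src_ip
| map maxsearches=50 search="| makeresults | abuseipdbcheck ip=$src_ip$"

Since map runs one subsearch per row, it is worth capping the row count (hence maxsearches) or deduplicating first.
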
I've read the documentation on these commands, executed both in a dev environment, and observed the behavior. My interpretation is that they are the same. Does someone want to take a stab at explaining them from your own perspective? Please don't point me to Splunk docs; I've read them already and still can't see the best use case for each. I want to read your opinion! What is the main difference between these two commands?

splunk enable maintenance-mode
splunk upgrade-init cluster-peers

Here is the scene: I will be upgrading an indexer cluster (the cluster manager and its peers). I don't want to initiate a bucket fixup on each indexer (10 peers * 10 TB per peer). Which one best fits my use case?
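
For context, a sketch of the documented rolling-upgrade sequence these commands belong to, written from memory and worth confirming against the upgrade docs for your version:

# on the cluster manager, before touching any peer:
splunk upgrade-init cluster-peers
# then, for each peer in turn:
splunk offline
# ...upgrade the peer's binaries, then:
splunk start
# finally, back on the cluster manager:
splunk upgrade-finalize cluster-peers
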
I am new to Splunk but have spent a long time with UniFi kit. I am on the latest version of the UniFi controller, with a config for SIEM integration with Splunk. I have installed Splunk on a Proxmox VM using Ubuntu 24.04. Is there a step-by-step guide on how to ingest my syslog data from UniFi into Splunk, please? Regards, BOOMEL
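
Not a full step-by-step guide, but the minimal Splunk side is a network input matching whatever host and port the UniFi controller's remote-syslog (SIEM) setting sends to. A sketch with a placeholder port and index (create the index first; for production, a syslog server or Splunk Connect for Syslog in front of Splunk is generally preferred over a direct UDP input):

# $SPLUNK_HOME/etc/system/local/inputs.conf -- restart Splunk afterwards
[udp://5514]
connection_host = ip
index = unifi
sourcetype = syslog
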
Standard data ingestion with a default setup, sending data via HEC token; the data is being ingested in a non-human-readable format. I tried creating a new token and sourcetype, but still no luck. Please advise what we should do differently to get the proper format.

All events below are timestamped 12/3/24 9:21:58.000 AM with sourcetype = aws:cloudwatchlogs:vpcflow; one also shows host = http-inputs-elosusbaws.splunkcloud.com and source = http:aws_vpc_use1_logging. Sample payloads:

P}\x00\x00\x8B\x00\x00\x00\x00\x00\x00\xFFE\x90\xDDn\x9B@\x84_eun\xF6\xA2v}\xF6\xD8;lo$W\xDEM\xD5
\xB9\xB7\xE6\xA0sV\xBA\xA0\x85\xFF~H\xA4[\xB31D\xE7aI\xA8\xFDe\xD7˄~\xB5MM\xE6>\xDCAIh_\xF5ç\xE0\xCCa\x97f\xC9V\xE7XJ o]\xE2\xEE\xED{3N\xC0e\xBA\xD6y\K\xA3P\xC8&\x97\xB16\xDDg\x93Ħ\xA0䱌C\xC5\xE3\x80~\x82\xDD\xED\xAD\xD39%\xA1\xEDu\xCE\x9F35\xC7y\xF0IN\xD6냱\xF6?\xF8\xE3\xE0\xEC~\xB7\x9Cv\x9D\x92 \x91\xC2k\xF9\xFANO
Y7'BaRsԈd\xBA\x88|\xC1i.\xFC\xD6dwG4\xA1<iᓕK\xF7ѹ* ]\xED\xB3̬-\xFC\xF4\xF7eb
.e #r.\xA4P\x9C\xB1(\x8A# \xA98\x86(e\xAC\x82\xB8B\x94\xA1`(ac{i\x86\xB1\xBA\A3%\xD3r\x888\xFB\xF73\xD0\xE0n
"
3néo\xAFc\xDB\xF9o\xEDyl\xFAto\xED\xF3\xB1\x9B\xFFn}3\xB4\x94o$\xF3\xA7\xF1\xE3dx\x81\xB6
\x98`_\xAB[
&9"!b\xA3
\xD5Ӱ\xE8\xEBa\xD1\xFAa\xAC\xFC\xA9Yt}u:7\xF5â\xBA\xD5\xED\xF8\xEE\xB6c\xDFT\xD0\xF0\xF3`6κc\xD7WG19r\xC98
\xAA\x80+\x84\xC8b\x98\xC1\xB9{\xDC\xF4\xDD\xED
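
The symptoms match one well-known pattern, offered here as an assumption to verify: CloudWatch Logs subscription payloads are base64-encoded and gzip-compressed JSON, and if the delivery pipeline posts them to HEC without decompressing, the indexed events are raw gzip bytes like the ones above. A sketch of the decompression step, e.g. inside a Lambda or Firehose transformation function:

import base64, gzip, json

def decode_cwl_record(data_b64: str) -> dict:
    """Decode one CloudWatch Logs subscription record into its JSON payload."""
    raw = base64.b64decode(data_b64)   # records arrive base64-encoded...
    text = gzip.decompress(raw)        # ...and gzip-compressed
    return json.loads(text)            # {"logGroup": ..., "logEvents": [...]}
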
I have installed and configured my universal forwarder; however, while it starts, it remains inactive:

Active forwards: None
Configured but inactive forwards: 10.###.##.##:9997

I have validated that I am using the correct IP address, that I can ping the indexer from the forwarder, and that port 9997 is not blocked. At this point I'm just not sure how to resolve this. Any assistance would be appreciated. Thanks!
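
Two quick checks, sketched with placeholder values. "Port not blocked" is not the same as "something listening", so first confirm the indexer actually has a receiver enabled, then read the forwarder's own error messages:

# on the indexer:
$SPLUNK_HOME/bin/splunk enable listen 9997     # no-op if already enabled
netstat -an | grep 9997                        # should show a LISTEN socket

# on the forwarder:
telnet <indexer-ip> 9997                       # does the TCP path open at all?
grep -i "TcpOutputProc\|Connect" $SPLUNK_HOME/var/log/splunk/splunkd.log | tail
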
How do I limit the amount of data coming over from [monitor://path/to/file] in my Splunk forwarder's inputs.conf file? I did see the whitelist and blacklist directives. Are there any other ways to limit the log files, for example to keep WinFIM from exceeding my data quota?
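
Beyond whitelist/blacklist, a couple of other knobs, sketched with placeholder values (confirm each against inputs.conf.spec for your version). Note that dropping individual events requires props/transforms on an indexer or heavy forwarder, not on a universal forwarder:

# inputs.conf -- skip files that have not been modified recently
[monitor://path/to/file]
ignoreOlderThan = 7d

# props.conf -- hypothetical sourcetype name
[winfim]
TRANSFORMS-drop_noise = drop_winfim_noise

# transforms.conf -- discard matching events before indexing
[drop_winfim_noise]
REGEX = DEBUG|heartbeat
DEST_KEY = queue
FORMAT = nullQueue
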
I want to schedule data learning for a source and be alerted more accurately when the data volume gets close to zero and this behavior is not normal. I am currently using forecast time series with a learning window of 150 days back, but it generates false alerts. Any suggestions for adapting my model?
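
One alternative worth testing, sketched under the assumption that the Machine Learning Toolkit is installed: instead of forecasting the series, model the hourly event count with DensityFunction and alert on low-count outliers, which tends to tolerate normally quiet periods better. The index and source names are placeholders:

| tstats count where index=my_index source=my_source by _time span=1h
| eval hour=strftime(_time, "%H")
| fit DensityFunction count by hour into my_volume_model

A scheduled search would then apply the model over the last hour and alert when the count is both an outlier and near zero, e.g. | apply my_volume_model | where 'IsOutlier(count)'=1 AND count < 10 (the floor of 10 is an arbitrary example).
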
Hi folks, I'm having a hard time picking the right architecture for a solution that gives high availability for my syslog inputs. My current setup is:

- 4 UFs
- 2 HFs
- Splunk Cloud

Syslog is currently ingested on one of the HFs as a network input. I saw that to solve my issue I could ingest the syslog logs on a UF and forward them to my HFs, taking advantage of the built-in load balancing toward the intermediate forwarders (the HFs), which would simplify the deployment a lot. The other solution I've seen is to put a load-balancer machine in front of the HFs to ingest the syslog data and balance the load manually. Which solution is best suited for a Splunk deployment? IMO the first is much more straightforward, but I need to validate that it is a correct approach. Thanks in advance!
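
For the first option, the UF's built-in load balancing is just an outputs.conf with multiple targets, sketched here with placeholder hostnames:

[tcpout:hf_group]
server = hf1.example.com:9997, hf2.example.com:9997
autoLBFrequency = 30

Note this balances only the UF-to-HF leg; it does not make the syslog listener itself highly available, since devices still send to a single UF. For listener-level HA, the usual patterns are a dedicated syslog layer (e.g. Splunk Connect for Syslog) or the external load balancer you mentioned in front of the receivers.
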
Hi team, can you please help me extract data from an external website into a Splunk dashboard? Is it possible? Example: I have to fetch the status below from the website https://www.ecb.europa.eu/. Output in the Splunk dashboard: "T2S is operating normally."
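
One common pattern, sketched rather than prescribed: a scripted (or modular) input that polls the page on a schedule and prints the status line to stdout for Splunk to index, with a dashboard panel searching that data. The string matching below is a placeholder; the real page's HTML structure has to be inspected to extract the T2S status reliably:

import requests

def fetch_status() -> str:
    resp = requests.get("https://www.ecb.europa.eu/", timeout=30)
    resp.raise_for_status()
    # Placeholder extraction -- replace with parsing of the actual status element.
    if "operating normally" in resp.text:
        return "T2S is operating normally."
    return "status unknown"

if __name__ == "__main__":
    # Splunk indexes whatever a scripted input writes to stdout.
    print(fetch_status())
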
Hello guys, I am trying to add a time range to my search so the user can pick any time range and see data for the selected period (e.g. last 24 hours, last 30 days, previous year, etc.). I created a time range input and a token for this purpose, called TimeRange. But when I run my query, I get the error below:

Invalid value "$TimeRange$" for time term 'earliest'

Here is my query:

base query earliest=$TimeRange$ latest=now()
| other query
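
A time range input populates two sub-tokens rather than a single value, so referencing the bare token produces exactly this error. Assuming the input token is named TimeRange, the usual fix is:

base query earliest=$TimeRange.earliest$ latest=$TimeRange.latest$
| other query

Alternatively, in Simple XML the search can take its time range from the input directly, leaving the query clean:

<search>
  <query>base query | other query</query>
  <earliest>$TimeRange.earliest$</earliest>
  <latest>$TimeRange.latest$</latest>
</search>
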
Hello experts, I am getting an error while importing splunk-enterprise-security_732.spl. The Splunk version used here is Splunk Enterprise 9.3.2. Here is the error description:

<response>
  <messages>
    <msg type="ERROR">Content-Length of 920287904 too large (maximum is 524288000)</msg>
  </messages>
</response>

Need help on this.

#SplunkError #ContentLengthExceeded #EnterpriseSecurity #UploadIssue #LargeAppFileError
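
The 524288000-byte ceiling is 500 MB, which matches the default upload limit in web.conf, so raising max_upload_size is the usual workaround, sketched below (on Splunk Cloud this requires Support, and Enterprise Security there is normally installed by Splunk anyway):

# $SPLUNK_HOME/etc/system/local/web.conf -- restart Splunk Web afterwards
[settings]
max_upload_size = 1024        # in MB; must exceed the .spl file size
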
Hi, I have a Python script that takes a hostname as input and then runs an Ansible job via AWX. Is there a way to integrate this cleanly via a dashboard or a menu in ES? I actually just want to enter the hostname and use it to start the script. Regards, David
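
One clean pattern, sketched under the assumption that the Splunk Python SDK is bundled in your app: wrap the script as a custom generating command, so a dashboard text input can feed it, e.g. | runawx hostname="$host_tok$". In ES specifically, an adaptive response action is the more idiomatic hook for the same script. The launch_awx_job import below stands in for your existing script's entry point:

import sys
import time
from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration, Option

from my_awx_module import launch_awx_job   # hypothetical: your existing AWX call

@Configuration()
class RunAwxCommand(GeneratingCommand):
    hostname = Option(require=True)

    def generate(self):
        # Kick off the Ansible job via AWX and emit one status event.
        launch_awx_job(self.hostname)
        yield {"_time": time.time(), "hostname": self.hostname, "status": "launched"}

dispatch(RunAwxCommand, sys.argv, sys.stdin, sys.stdout, __name__)
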
Hello experts, I am getting an error while importing splunk-it-service-intelligence_4191.spl. The Splunk version used here is Splunk Enterprise 9.3.2. Here is the error description:

"There was an error processing the upload. Invalid app contents: archive contains more than one immediate subdirectory: and DA-ITSI-DATABASE"

Please help with this.

#SplunkError #InvalidAppContents #AppUploadIssue #SplunkDebugging #ITSIError
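
The ITSI package intentionally bundles several apps (ITSI itself plus the DA-ITSI-* modules), which is why the single-app web uploader rejects it. The documented install path is extracting the package from the command line on the search head, sketched below (check the ITSI installation docs for your version; on Splunk Cloud this is handled by Support):

tar -xvzf splunk-it-service-intelligence_4191.spl -C $SPLUNK_HOME/etc/apps
$SPLUNK_HOME/bin/splunk restart
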