All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi,

We have Splunk Enterprise installed on a Windows computer that does not have direct access to the internet. To reach the internet from that computer, we normally open a browser such as Chrome or Edge and enter a website (for example: https://www.yahoomail.com). The browser then shows a popup asking for credentials. The popup shows our internet proxy server URL with its port number, https://myinternetserver01.mydomain.com:4443, and fields for a username and password, as attached in the screenshot. Once we enter the credentials, we can browse any website on that computer until we log out.

Because of this restriction, we are unable to use some of the Splunk add-ons that require an internet connection. We have tried many proxy settings, but none of them work.

Can someone please guide us on where to enter this proxy server URL, port, and credentials so that Splunk can reach the internet and we can use all the Splunk add-ons that need it?
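One place to start, if it fits your setup: splunkd can be pointed at an HTTP proxy through a [proxyConfig] stanza in server.conf. This is a minimal sketch assuming the proxy host and port from the post; the username and password are placeholders, and note that some add-ons ignore these settings and expect proxy details on their own configuration page instead.

    # $SPLUNK_HOME\etc\system\local\server.conf
    [proxyConfig]
    http_proxy = http://username:password@myinternetserver01.mydomain.com:4443
    https_proxy = http://username:password@myinternetserver01.mydomain.com:4443
    no_proxy = localhost, 127.0.0.1

Restart Splunk after the change so splunkd picks up the stanza.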
Thank you in advance for your help, community. I performed the integration of Cisco DNA to Splunk:

- Created my "cisco_dna" index on my Heavy Forwarder
- Installed the Cisco DNA Center Add-on on my Heavy Forwarder (https://splunkbase.splunk.com/app/6668)
- Added the account in the add-on (username, password, host)
- Activated all the inputs: cisco:dnac:clienthealth, cisco:dnac:devicehealth, cisco:dnac:compliance, cisco:dnac:issue, cisco:dnac:networkhealth, cisco:dnac:securityadvisory
- Created my "cisco_dna" index on my Splunk Cloud instance
- Installed the Cisco DNA Center App (https://splunkbase.splunk.com/app/6669)

Once done, I started receiving logs in Splunk from Cisco DNA. But when validating the dashboards in the app and reviewing the search results, I noticed that the values of the fields are duplicated. Even if I apply a dedup to any of the fields, the result is only one duplicated value. This affects me when I have to take a value to perform an operation or make a graph. Does anyone know what causes this problem and how I could solve it?
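A common cause of doubled field values, if the payloads are JSON, is that the fields are extracted both at index time (INDEXED_EXTRACTIONS) and again at search time, producing multivalue fields holding the same value twice. A hedged sketch of the usual fix, assuming that is what is happening here: disable the search-time JSON extraction for the affected sourcetypes in props.conf on the search tier.

    [cisco:dnac:clienthealth]
    KV_MODE = none
    AUTO_KV_JSON = false

Repeat for the other cisco:dnac:* sourcetypes. Verify the diagnosis first by checking whether the duplicated fields are multivalue, e.g. with | eval n=mvcount(field).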
Hi,

We installed the #AbuseIPDB app in our Splunk Cloud instance. I created a workflow action called jodi_abuse_ipdb using the documentation provided in the app:

Label: Check $ip$ with AbuseIPDB
Apply only to: ip
Search string: |makeresults|abuseipdbcheck ip=$ip$

I'd like to be able to use this for a report, but I haven't figured out how to trigger this workflow action to provide results. I've done Google searches and I've tried a number of things. I am hoping someone in the community might be able to help. Thank you! Jodi
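One thing worth noting: workflow actions only fire from the field menu of an event in search results; they cannot be called from a scheduled report. For a report, you would run the underlying search directly. A hedged sketch, assuming the abuseipdbcheck command accepts a literal IP the same way the workflow action passes $ip$:

    | makeresults
    | abuseipdbcheck ip=8.8.8.8

If you need to check many IPs from your data, check whether the app ships a streaming variant of the command or a lookup, since a generating search like this one handles one IP per invocation.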
I've read the documentation on these commands, executed both in a dev environment, and observed the behavior. My interpretation is that they are the same. Does someone want to take a stab at explaining them from your own perspective? Please don't point me to any Splunk docs; I've read them already and still can't see the best use case for each. I want to read your opinion!

What is the main difference between these two commands?

splunk enable maintenance-mode
splunk upgrade-init cluster-peers

Here is the scene: I will be upgrading a Splunk indexer cluster, manager and peers:

- Cluster manager
- Indexers

I don't want to initiate a bucket fixup on each indexer (10 peers * 10TB on each peer). Which one best fits/serves my use case above?
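For what it's worth, the short version as I understand it (hedged, based on the rolling-upgrade workflow): enable maintenance-mode only suppresses bucket-fixup activity, while upgrade-init additionally puts the cluster into a rolling-upgrade state so peers can be taken offline and upgraded one at a time in a controlled sequence. A sketch of the two workflows, both run on the cluster manager:

    # rolling upgrade
    splunk upgrade-init cluster-peers
    # per peer: splunk offline, upgrade the binary, restart
    splunk upgrade-finalize cluster-peers

    # plain maintenance window
    splunk enable maintenance-mode
    # upgrade peers
    splunk disable maintenance-mode

Both avoid the mass fixup; the upgrade-init/upgrade-finalize pair is the one built specifically for peer upgrades.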
I am new to Splunk but have spent a long time with UniFi kit. I am on the latest version of the UniFi controller with a config for SIEM integration with Splunk. I have installed Splunk on a Proxmox VM using Ubuntu 24.04.

Is there a step-by-step guide on how to ingest my syslog data from UniFi into Splunk, please?

Regards,

BOOMEL
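While waiting for a full guide, a minimal sketch of the usual pattern: open a network input on Splunk and point the UniFi SIEM/remote-syslog integration at it. In inputs.conf (the index name and port are assumptions; create the index first):

    [udp://5140]
    sourcetype = syslog
    index = unifi
    connection_host = ip

Then configure the UniFi controller to send syslog to <splunk-host>:5140. For anything beyond a lab, a syslog server (e.g. rsyslog or syslog-ng) writing to files that Splunk monitors is generally preferred over a direct UDP input.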
We are sending data via an HEC token with a standard, default setup, but the data is being ingested in a non-human-readable format. We tried creating a new token and sourcetype, but still no luck. Please advise what we should do differently to get the proper format.

12/3/24 9:21:58.000 AM
P}\x00\x00\x8B\x00\x00\x00\x00\x00\x00\xFFE\x90\xDDn\x9B@\x84_eun\xF6\xA2v}\xF6\xD8;lo$W\xDEM\xD5
sourcetype = aws:cloudwatchlogs:vpcflow

12/3/24 9:21:58.000 AM
\xB9\xB7\xE6\xA0sV\xBA\xA0\x85\xFF~H\xA4[\xB31D\xE7aI\xA8\xFDe\xD7˄~\xB5MM\xE6>\xDCAIh_\xF5ç\xE0\xCCa\x97f\xC9V\xE7XJ o]\xE2\xEE\xED{3N\xC0e\xBA\xD6y\K\xA3P\xC8&\x97\xB16\xDDg\x93Ħ\xA0䱌C\xC5\xE3\x80~\x82\xDD\xED\xAD\xD39%\xA1\xEDu\xCE\x9F35\xC7y\xF0IN\xD6냱\xF6?\xF8\xE3\xE0\xEC~\xB7\x9Cv\x9D\x92 \x91\xC2k\xF9\xFANO
sourcetype = aws:cloudwatchlogs:vpcflow

12/3/24 9:21:58.000 AM
Y7'BaRsԈd\xBA\x88|\xC1i.\xFC\xD6dwG4\xA1<iᓕK\xF7ѹ* ]\xED\xB3̬-\xFC\xF4\xF7eb
sourcetype = aws:cloudwatchlogs:vpcflow

12/3/24 9:21:58.000 AM
.e #r.\xA4P\x9C\xB1(\x8A# \xA98\x86(e\xAC\x82\xB8B\x94\xA1`(ac{i\x86\xB1\xBA\A3%\xD3r\x888\xFB\xF73\xD0\xE0n
sourcetype = aws:cloudwatchlogs:vpcflow

12/3/24 9:21:58.000 AM
"
sourcetype = aws:cloudwatchlogs:vpcflow

12/3/24 9:21:58.000 AM
3néo\xAFc\xDB\xF9o\xEDyl\xFAto\xED\xF3\xB1\x9B\xFFn}3\xB4\x94o$\xF3\xA7\xF1\xE3dx\x81\xB6
sourcetype = aws:cloudwatchlogs:vpcflow

12/3/24 9:21:58.000 AM
\x98`_\xAB[
sourcetype = aws:cloudwatchlogs:vpcflow

12/3/24 9:21:58.000 AM
&9"!b\xA3
host = http-inputs-elosusbaws.splunkcloud.com source = http:aws_vpc_use1_logging sourcetype = aws:cloudwatchlogs:vpcflow

12/3/24 9:21:58.000 AM
\xD5Ӱ\xE8\xEBa\xD1\xFAa\xAC\xFC\xA9Yt}u:7\xF5â\xBA\xD5\xED\xF8\xEE\xB6c\xDFT\xD0\xF0\xF3`6κc\xD7WG19r\xC98
sourcetype = aws:cloudwatchlogs:vpcflow

12/3/24 9:21:58.000 AM
\xAA\x80+\x84\xC8b\x98\xC1\xB9{\xDC\xF4\xDD\xED
sourcetype = aws:cloudwatchlogs:vpcflow
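A hedged observation: the binary-looking payloads strongly suggest the events are arriving gzip-compressed. CloudWatch Logs subscription data delivered through Kinesis Data Firehose is gzip-compressed, and if Firehose sends it to HEC without a decompression step, Splunk indexes the raw compressed bytes exactly like this. If you can capture one raw payload, a quick check (sample_payload.bin is a placeholder name for that capture):

    import gzip

    with open("sample_payload.bin", "rb") as f:
        data = f.read()

    # gzip streams begin with the magic bytes 0x1f 0x8b
    print("looks like gzip:", data[:2] == b"\x1f\x8b")
    print(gzip.decompress(data)[:200])

If that confirms gzip, the fix belongs on the AWS side: enable decompression/record transformation in the Firehose delivery stream before the data reaches HEC.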
How do I limit the amount of data coming over from [monitor://path/to/file] in my Splunk forwarder inputs.conf file? I did see the whitelist and blacklist settings. Are there any other ways to limit the log files, for example to keep WinFIM from exceeding our data limits?
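Beyond whitelist/blacklist, a couple of other inputs.conf knobs can cut volume. A hedged sketch (the path and patterns are placeholders for your WinFIM layout):

    [monitor://C:\Tools\WinFIM\logs]
    whitelist = \.log$
    blacklist = (debug|archive)
    ignoreOlderThan = 7d
    index = winfim

ignoreOlderThan skips files whose modification time is older than the window. If the problem is noisy events rather than noisy files, filtering at parse time with props/transforms (routing unwanted events to the nullQueue) is the usual next step.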
I want to schedule data learning for a source and be alerted more accurately when the data gets close to zero and that behavior is not normal. I am currently using a forecast time series with a learning window of 150 days back, but it generates false alerts. Any suggestions to adapt my model?
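One hedged idea: instead of alerting on the point forecast, alert only when the observed value falls below the forecast's lower confidence bound. A sketch with the built-in predict command (the index, span, and algorithm are assumptions to adapt):

    index=my_source earliest=-150d
    | timechart span=1d count
    | predict count as pred algorithm=LLP5 upper95=upper95 lower95=lower95
    | where count < 'lower95(pred)'

Seasonal algorithms such as LLP/LLP5 tend to cut false alerts on data with weekly cycles; also consider whether 150 days is too long a window if the source's baseline has shifted recently.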
Hi folks,

I'm having a hard time picking the right architecture for a solution that gives high availability for my syslog inputs. My current setup is:

- 4 UFs
- 2 HFs
- Splunk Cloud

Syslog is currently being ingested on one of the HFs as a network input. I saw that, to solve my issue, I could ingest my syslog logs on a UF and forward them to my HFs, taking advantage of the built-in load balancing toward the intermediate forwarders (aka the HFs), which would simplify the deployment a lot. The other solution I have seen is manually putting a load-balancer machine in front of the HFs to ingest the syslog data and balance the load.

Which solution is best suited for a Splunk deployment? IMO the first one is much more straightforward, but I need to validate that it is a correct approach.

Thanks in advance!
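For reference, the forwarder-side load balancing you mention is just the standard outputs.conf behavior; a minimal sketch on the UF that receives the syslog (the hostnames are placeholders):

    [tcpout:hf_group]
    server = hf1.example.com:9997, hf2.example.com:9997
    autoLBFrequency = 30

Bear in mind this balances the UF-to-HF leg, not the syslog-to-UF leg: if that single UF goes down, the senders still lose their target. That is why the commonly recommended pattern for syslog HA is a dedicated syslog layer (e.g. syslog-ng or Splunk Connect for Syslog) behind a network load balancer, with Splunk reading from there.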
Hi Team,

Can you please help me extract data from an external website into a Splunk dashboard? Is it possible?

Example: I have to fetch the status below from the website https://www.ecb.europa.eu/

Output in the Splunk dashboard: "T2S is operating normally."
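It is possible, but Splunk needs something to ingest first; a common approach is a small scripted input that fetches the page on a schedule and prints the status line, which Splunk then indexes and the dashboard searches. A hedged Python sketch (the way the status text is located in the HTML is an assumption you would adapt to the real page structure):

    #!/usr/bin/env python3
    # Scripted input: fetch the ECB page and emit any sentence mentioning T2S status.
    import re
    import urllib.request

    URL = "https://www.ecb.europa.eu/"

    with urllib.request.urlopen(URL, timeout=30) as resp:
        html = resp.read().decode("utf-8", errors="replace")

    # Crudely strip tags, then look for the status sentence (placeholder pattern).
    text = re.sub(r"<[^>]+>", " ", html)
    match = re.search(r"T2S is [^.<]+\.", text)
    if match:
        print(match.group(0))

Save it under an app's bin directory, wire it up with a [script://...] stanza in inputs.conf with an interval, then point a dashboard panel's search at the resulting index.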
Hello guys, I am trying to add a time range to my search so the user can pick any time range and see data for the selected time (e.g. last 24 hours, last 30 days, previous year, etc.). I created a time range control and a token for this purpose, called TimeRange. But when I run my query, I get the error below:

Invalid value "$TimeRange$" for time term 'earliest'

Here is my query:

base query earliest=$TimeRange$, latest=now() | other query
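The likely cause: a time-range input does not produce a single token; it produces two sub-tokens, $TimeRange.earliest$ and $TimeRange.latest$. A minimal sketch of the corrected query (note there should also be no comma between the time terms):

    base query earliest=$TimeRange.earliest$ latest=$TimeRange.latest$ | other query

Alternatively, if the panel's search omits earliest/latest entirely and the time input is set as the search's time range, the dashboard applies it automatically.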
Hello Experts,

I am getting an error while importing splunk-enterprise-security_732.spl. The Splunk version used here is Splunk Enterprise 9.3.2. Here is the error description:

This XML file does not appear to have any style information associated with it. The document tree is shown below.
<response>
<messages>
<msg type="ERROR">Content-Length of 920287904 too large (maximum is 524288000)</msg>
</messages>
</response>

I need help with this.

#SplunkError #ContentLengthExceeded #EnterpriseSecurity #UploadIssue #LargeAppFileError
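The 524288000 in the message is the default 500 MB upload cap in Splunk Web. A hedged sketch of the usual workaround: raise the limit in web.conf (the value is in MB, sized here just above your ~920 MB file) and restart:

    # $SPLUNK_HOME/etc/system/local/web.conf
    [settings]
    max_upload_size = 1024

Alternatively, skip the web upload entirely and install from the command line on the search head:

    $SPLUNK_HOME/bin/splunk install app /path/to/splunk-enterprise-security_732.spl

Recent ES versions also have their own documented installation flow, so check the ES 7.3.x install docs before choosing a method.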
Hi,

I have a Python script that requires a hostname as input and then runs an Ansible job via AWX. Is there a way to integrate this cleanly via a dashboard or a menu in ES? I actually just want to enter the hostname and use it to start the script.

Regards, David
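One hedged pattern that keeps this inside Splunk: wrap the script as a custom search command, then drive it from a dashboard with a text input. A commands.conf sketch in your app (runawx and run_awx.py are hypothetical names for a wrapper around your existing script):

    [runawx]
    filename = run_awx.py
    chunked = true

The dashboard's text input sets a token, and a panel search like | runawx host=$host_tok$ kicks off the AWX job. An alternative with more guardrails is an ES adaptive response action, which gives you an audit trail of who triggered what.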
Hello Experts,

I am getting an error while importing splunk-it-service-intelligence_4191.spl. The Splunk version used here is Splunk Enterprise 9.3.2. Here is the error description:

"There was an error processing the upload. Invalid app contents: archive contains more than one immediate subdirectory: and DA-ITSI-DATABASE"

Please help with this.

#SplunkError #InvalidAppContents #AppUploadIssue #SplunkDebugging #ITSIError
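That error is expected from the Manage Apps upload page: the ITSI package legitimately contains many apps (ITSI itself plus the DA-ITSI-* modules), and the web uploader only accepts archives with a single top-level app directory. A hedged sketch of the usual install, extracting the package straight into the apps directory instead:

    cd /path/to/download
    tar -xvf splunk-it-service-intelligence_4191.spl -C $SPLUNK_HOME/etc/apps
    $SPLUNK_HOME/bin/splunk restart

Check the ITSI installation docs for your exact version first, since there are usually additional pre- and post-install steps.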
Hi all,

I have 2 scenarios:

1. We ingest logs (Windows, Linux) using the Splunk agent.
2. We ingest logs from flat files using the Splunk agent.

I've been asked to check whether the Splunk agent has any log-integrity checking feature. Does the Splunk agent (or any other component in Splunk ES) check that the logs have not been tampered with in transit?

Thanks, J
I'm trying to create a role for a developer in our organization where the developer is only allowed to view dashboards created by the admin or by a person whose role has the edit_own_objects capability. I created a developer role with the below capabilities:

capabilities = [
  "search",
  "list_all_objects",
  "rest_properties_get",
  "embed_report"
]

Now when I log in as the developer, the dashboards are visible and read-only, but the developer can also create new dashboards, which shouldn't be allowed. How can I restrict the developer from creating a new dashboard?

Also, the below capabilities get added to the role automatically, along with the ones I specified above:

run_collect
run_mcollect
schedule_rtsearch
edit_own_objects

I've also given read-only access in the specific dashboard's permission settings for the developer role.
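A hedged pointer: the ability to create dashboards is governed less by capabilities than by write permission on the app the user is working in. One sketch of a restriction, assuming you want to lock dashboard creation down per app, is to restrict write on views in that app's metadata/local.meta:

    [views]
    access = read : [ * ], write : [ admin, power ]

With no write access to views in any app the developer can reach, the save/create actions should fail. The auto-added capabilities you listed typically come from an inherited role (often user), so also check what the developer role inherits from.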
Relanto@DESKTOP-FRSRLVP MINGW64 ~
$ curl -k -u admin:adminadmin https://localhost:8089/servicesNS/admin/search/data/ui/panels -d "name=user_login_panel&eai:data=<panel><label>User Login Stats</label></panel>"
<?xml version="1.0" encoding="UTF-8"?>
<!--This is to override browser formatting; see server.conf[httpServer] to disable.-->
<?xml-stylesheet type="text/xml" href="/static/atom.xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/">
  <title>panels</title>
  <id>https://localhost:8089/servicesNS/admin/search/data/ui/panels</id>
  <updated>2024-12-03T12:27:38+05:30</updated>
  <generator build="0b8d769cb912" version="9.3.1"/>
  <author>
    <name>Splunk</name>
  </author>
  <link href="/servicesNS/admin/search/data/ui/panels/_new" rel="create"/>
  <link href="/servicesNS/admin/search/data/ui/panels/_reload" rel="_reload"/>
  <link href="/servicesNS/admin/search/data/ui/panels/_acl" rel="_acl"/>
  <opensearch:totalResults>1</opensearch:totalResults>
  <opensearch:itemsPerPage>30</opensearch:itemsPerPage>
  <opensearch:startIndex>0</opensearch:startIndex>
  <s:messages/>
  <entry>
    <title>user_login_panel</title>
    <id>https://localhost:8089/servicesNS/admin/search/data/ui/panels/user_login_panel</id>
    <updated>2024-12-03T12:27:38+05:30</updated>
    <link href="/servicesNS/admin/search/data/ui/panels/user_login_panel" rel="alternate"/>
    <author>
      <name>admin</name>
    </author>
    <link href="/servicesNS/admin/search/data/ui/panels/user_login_panel" rel="list"/>
    <link href="/servicesNS/admin/search/data/ui/panels/user_login_panel/_reload" rel="_reload"/>
    <link href="/servicesNS/admin/search/data/ui/panels/user_login_panel" rel="edit"/>
    <link href="/servicesNS/admin/search/data/ui/panels/user_login_panel" rel="remove"/>
    <link href="/servicesNS/admin/search/data/ui/panels/user_login_panel/move" rel="move"/>
    <content type="text/xml">
      <s:dict>
        <s:key name="disabled">0</s:key>
        <s:key name="eai:acl">
          <s:dict>
            <s:key name="app">search</s:key>
            <s:key name="can_change_perms">1</s:key>
            <s:key name="can_list">1</s:key>
            <s:key name="can_share_app">1</s:key>
            <s:key name="can_share_global">1</s:key>
            <s:key name="can_share_user">1</s:key>
            <s:key name="can_write">1</s:key>
            <s:key name="modifiable">1</s:key>
            <s:key name="owner">admin</s:key>
            <s:key name="perms"/>
            <s:key name="removable">1</s:key>
            <s:key name="sharing">user</s:key>
          </s:dict>
        </s:key>
        <s:key name="eai:appName">search</s:key>
        <s:key name="eai:data"><![CDATA[<panel><label>User Login Stats</label></panel>]]></s:key>
        <s:key name="eai:digest">6ad60f5607b5d1dd50044816b18d139b</s:key>
        <s:key name="eai:userName">admin</s:key>
        <s:key name="label">User Login Stats</s:key>
        <s:key name="panel.title">user_login_panel</s:key>
        <s:key name="rootNode">panel</s:key>
      </s:dict>
    </content>
  </entry>
</feed>

I created the panel following the Splunk REST API documentation (https://docs.splunk.com/Documentation/Splunk/7.2.0/RESTREF/RESTknowledge#data.2Fui.2Fpanels). After creating the panel, it is not showing in my Splunk Enterprise UI. What is the actual use of this?
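For context, a hedged explanation of what you created: objects under data/ui/panels are prebuilt panels, not dashboards. They never appear in the Dashboards list; they show up when editing a dashboard (Add Panel > Add Prebuilt Panel) or when referenced from a dashboard's Simple XML by name:

    <dashboard>
      <label>Example dashboard</label>
      <row>
        <panel ref="user_login_panel"></panel>
      </row>
    </dashboard>

Also note the response shows sharing=user, so the panel is private to admin within the search app; share it at app level if other users or dashboards need to reference it.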
Hi, from Splunk, how can I check which logs are being forwarded out to another SIEM? outputs.conf is configured to forward syslog; what does the syslog contain?
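Two hedged places to look. First, dump the effective output configuration to see the syslog stanzas and which file each setting comes from:

    $SPLUNK_HOME/bin/splunk btool outputs list --debug

Second, syslog output usually only carries events that are explicitly routed to it: check props.conf/transforms.conf for transforms that set _SYSLOG_ROUTING to your syslog output group. The inputs or sourcetypes matched by those transforms are what the downstream SIEM receives.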
I have a table that looks like this:

Day         Percent
2024-11-01  100
2024-11-02  99.6
2024-11-03  94.2
...         ...
2024-12-01  22.1
2024-12-02  19.0

From this table I am calculating three fields, REMEDIATION_50, _80, and _100, using the following:

| eval REMEDIATION_50 = if(PERCENTAGE <= 50, "x", "")

From this eval statement, I am going to have multiple rows where the _50 and _80 fields are marked, and some where both fields are marked. I'm interested in isolating the DAY of the first time each of these milestones is hit. I've yet to craft the right combination of stats, where, and evals that gets me what I want. In the end, I'd like to get something like this:

Start       50%         80%         100%
2024-11-01  2024-11-23  2024-12-02  -

Any help would be appreciated, thanks!
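One way to get there without the intermediate flag fields is a conditional min inside stats; since the dates are in yyyy-mm-dd form, min() on the string gives the earliest day. A hedged sketch, assuming the actual field names are DAY and PERCENTAGE as in your eval (the 80% and 100% thresholds are assumptions mirroring your _50 condition; adjust them to whatever REMEDIATION_80/_100 use):

    | stats min(DAY) as Start,
            min(eval(if(PERCENTAGE <= 50, DAY, null()))) as "50%",
            min(eval(if(PERCENTAGE <= 20, DAY, null()))) as "80%",
            min(eval(if(PERCENTAGE <= 0,  DAY, null()))) as "100%"

A milestone never reached simply comes back null, which you can fillnull to "-" for display.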
I have created a lookup table in Splunk that contains a column with various regex patterns intended to match file paths. My goal is to use this lookup table within a search query to identify events where the path field matches any of the regex patterns specified in the Regex_Path column.

lookup file:

Here is the challenge I'm facing: when using the match() function in my search query, it only successfully matches if the Regex_Path pattern completely matches the path field in the event. However, I expected match() to perform partial matches based on the regex pattern, which does not seem to be the case. Interestingly, if I manually replace Regex_Path in the where match() clause with the actual regex pattern, it successfully performs the match as expected. Here is an example of my search query:

index=teleport event="sftp" path!=""
| eval path_lower=lower(path)
| lookup Sensitive_File_Path.csv Regex_Path AS path_lower OUTPUT Regex_Path, Note
| where match(path_lower, Regex_Path)
| table path_lower, Regex_Path, Note

I would like to understand why the match() function isn't working as anticipated when using the lookup table, and whether there is a better method to achieve the desired regex matching. Any insights or suggestions on how to resolve this issue would be greatly appreciated.
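A hedged explanation of the behavior: the lookup command does exact string equality, so Regex_Path is only populated on events whose path_lower is literally identical to the pattern text; on every other event Regex_Path is null and match() has nothing to test. One workaround sketch is to cross-join all patterns onto every event and then let match() do the regex work (joiner is a hypothetical helper field; watch the subsearch limits if the lookup is large):

    index=teleport event="sftp" path!=""
    | eval path_lower=lower(path), joiner=1
    | join type=inner max=0 joiner
        [ | inputlookup Sensitive_File_Path.csv
          | eval joiner=1 ]
    | where match(path_lower, Regex_Path)
    | table path_lower, Regex_Path, Note

Note that transforms.conf lookup definitions support match_type WILDCARD and CIDR, but not regex, which is why this has to happen in the search pipeline.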