All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello Splunk Admins, what solutions do you use to get notified on mobile about internal Splunk issues outside of office hours? I mean when, for example, splunkd goes down on indexers, data is not indexed anymore for any reason, etc. We need something free of charge. There is no other team except us that needs to be notified about the issue. I have heard about the Splunk On-Call solution, but it seems to be a bit complex. Does anyone have any experience with it? Hoping to get some inspiration. Many greetings, Justyna
Hi Splunkers. I'm trying to integrate Bitdefender GravityZone (Cloud) with Splunk on-premises. I have used the official documentation from the Bitdefender website: https://www.bitdefender.com/business/support/en/77211-171475-splunk.html but I'm stuck at the "Enable the Splunk integration" step.

First, I tried the "Enable the Splunk integration manually" method. I put everything in place and ran the command from the documentation, but ended up with an error stating "The web server with this URL must support TLS 1.2, at least", as shown in the screenshot below.

I reviewed the documentation again at this link: https://www.bitdefender.com/business/support/en/77209-135319-setpusheventsettings.html Under the "Important" note it says: "Event Push Service requires the HTTP collector running on the third-party platforms to support SSL with TLS 1.2 or higher, to send events successfully."

But here is the thing: I think that HEC already supports TLSv1.2 by default, given sslVersions=*

$ cat /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf
[http]
disabled=1
port=8088
enableSSL=1
dedicatedIoThreads=2
maxThreads = 0
maxSockets = 0
useDeploymentServer=0
# ssl settings are similar to mgmt server
sslVersions=*,-ssl2
allowSslCompression=true
allowSslRenegotiation=true
ackIdleCleanup=true

I tried sslVersions=tls1.2, but nothing happened; it still shows the same issue. Can someone please help me figure out how to solve this TLS issue?

Afterward, I tried the "Enable the Splunk integration by running a script" method. Again, I put everything in place and ran the script, but ended up with this error:

FAIL - server response: <html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>

as shown in the screenshot below. Any idea why this happens? Many thanks.
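Not a confirmed diagnosis, but one thing worth checking: edits to default/inputs.conf can be overridden by other configuration layers and are overwritten on upgrade, so HEC TLS overrides normally belong in a local directory. A minimal sketch, assuming a standard on-prem install:

```
# $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf
# (local overrides default; restart splunkd after editing)
[http]
disabled = 0
enableSSL = 1
sslVersions = tls1.2
```

You can then verify from another host what the endpoint actually negotiates, e.g. with `openssl s_client -connect <splunk-host>:8088 -tls1_2`.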
Hi, we are planning to decommission Splunk Enterprise in our environment and need to stop sending data to Splunk. How should we proceed, and where should we start? Is there an SOP for this decommissioning process? Note that we want to retain the indexed data for more than 365 days. This is a new task we are handling for the first time, so any proper guidance will be much appreciated. Thanks in advance.
I'm seeing the message "Splunk must be restarted for changes to take effect. Contact Splunk Cloud Support to complete the restart." But I do not have permission to raise a support ticket because the account is still in the trial stage. Thanks.
Hello Team, we have a licensed Splunk Cloud server. How do we make REST API calls to Splunk Cloud from Postman? The management port is already enabled on Splunk, but I am still getting a timeout error.
Hi all, this is the change condition in 3 inputs:

<change>
  <condition label="Any">
    <set token="flag_1">0</set>
  </condition>
  <condition>
    <set token="flag_1">1</set>
    <set token="showDetails">true</set>
  </condition>
</change>
<change>
  <condition label="Any">
    <set token="flag_2">0</set>
  </condition>
  <condition>
    <set token="flag_2">1</set>
    <set token="showDetails">true</set>
  </condition>
</change>
<change>
  <condition label="Any">
    <set token="flag_3">0</set>
  </condition>
  <condition>
    <set token="flag_3">1</set>
    <set token="showDetails">true</set>
  </condition>
</change>

This is the drilldown that sets "showDetails" to "true" to display another table:

<drilldown>
  <condition field="RuleID">
    <set token="form.ruleID_tok">$click.value2$</set>
    <set token="flag_1">1</set>
    <set token="showDetails">true</set>
  </condition>
  <condition field="RuleDescription">
    <set token="form.ruleDescription_tok">$click.value2$</set>
    <set token="flag_2">1</set>
    <set token="showDetails">true</set>
  </condition>
  <condition field="RuleLevel">
    <set token="form.ruleLevel_tok">$click.value2$</set>
    <set token="flag_3">1</set>
    <set token="showDetails">true</set>
  </condition>
</drilldown>

Now I want to unset showDetails when flag_1, flag_2, and flag_3 are all 0, so that the table whose visibility depends on the showDetails token is hidden.
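One possible approach, assuming your Simple XML version supports match expressions in &lt;change&gt; conditions (treat this as a sketch, not a verified answer): add a leading condition that fires when the input returns to "Any" while the other two flags are already 0, and unset showDetails there.

```xml
<!-- Sketch for the flag_1 input; repeat the pattern for flag_2/flag_3.
     The $label$ token and match-expression support inside <change>
     are assumptions; verify against your Splunk version. -->
<change>
  <!-- if the other two flags are already 0, returning to "Any" means
       all three are 0, so hide the details table -->
  <condition match="$label$ = &quot;Any&quot; AND $flag_2$ = 0 AND $flag_3$ = 0">
    <set token="flag_1">0</set>
    <unset token="showDetails"></unset>
  </condition>
  <condition label="Any">
    <set token="flag_1">0</set>
  </condition>
  <condition>
    <set token="flag_1">1</set>
    <set token="showDetails">true</set>
  </condition>
</change>
```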
I need to exclude events from a timechart only if they fulfill 2 conditions: the field returns 0 for ALL events in the entire day (24 hours), AND the day is a weekend (Saturday or Sunday). I have tried:

| eval date_wkend = strftime(_time,"%A")
| search NOT (date_wkend = "Saturday" AND varA = 0)
| search NOT (date_wkend = "Sunday" AND varA = 0)

However, this also excludes events from a weekend that has some non-zero results for varA, and since I have to do further calculations based on a full-day span, my calculations are inaccurate.
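One way to express the "zero for the entire day" condition (a sketch, assuming varA is numeric and non-negative) is to compute a per-day maximum with eventstats and filter on that, instead of filtering individual events:

```
| eval day=strftime(_time, "%Y-%m-%d")
| eval dow=strftime(_time, "%w")
| eventstats max(varA) as day_max by day
| where NOT ((dow="0" OR dow="6") AND day_max=0)
| timechart span=1h avg(varA)
```

Here "%w" yields "0" for Sunday and "6" for Saturday, and day_max=0 only holds when every event that day has varA=0, so weekend days with any non-zero value survive the filter.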
I'm a huge fan of the Splunk Docker container. I noticed the 'latest' tag hasn't been updated in a few months and is still Splunk Enterprise 8.2.5 even though Splunk Enterprise 8.2.6 has been released. Then I noticed that even though 'latest' hasn't updated, the image for Splunk Enterprise 8.2.6 has been added to the Docker images list. See splunk/splunk tags. I'm no Docker expert so I'm guessing I am just missing some obvious thing.... Why is the splunk/splunk:latest not pointing to the latest release of splunk/splunk:8.2.6?  
Is there a way to speed up this process? I have an assignment due, but I can't download the OVA of the free community edition of Phantom because my account is in review.
Hi all, I'm trying to access the API from Postman, but I'm getting a 401 error. My question: should the user/pass be the account I use to log in to the URL, or do I have to use an API client? Thanks.
I'm getting a bit annoyed at per-result throttling ("throttle for each result"): although it works, it has a habit of resetting itself whenever I need to tweak the SPL or the cron schedule. I'm almost tempted to populate a KV store and take control myself. Anyone else? Does editing savedsearches.conf directly, or using the advanced edit option, let you get around what I perceive as annoying behavior?
Hello, how would I specify the time frame in a search to give me events between 7am and 5pm on weekdays, plus all results for weekends, within the same search?
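A sketch of one way to phrase that filter (this assumes "7am to 5pm" means hours 7 up to but not including 17, and derives the weekday from _time rather than relying on the date_* fields, which are not always present):

```
| eval dow=strftime(_time, "%w"), hour=tonumber(strftime(_time, "%H"))
| where dow="0" OR dow="6" OR (hour >= 7 AND hour < 17)
```

With "%w", "0" is Sunday and "6" is Saturday, so weekend events pass through unconditionally while weekday events must fall in the 07:00-16:59 window.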
Can you please help me understand whether the Google Workspace Add-on is the equivalent update for the G Suite for Splunk add-on? We used G Suite earlier; after seeing that the app had been updated, we installed and configured Google Workspace. But the sourcetypes and the way events are parsed are not similar to G Suite. Thanks in advance.
Pretty much the title. I tried messing with the user interface navigation settings, and the closest I can get is making the glass table lister the default page. But this also alters the user interface, as shown in the screenshot. That would be a separate issue, though; my main concern is making a specific glass table the default page when opening ITSI. Any help would be greatly appreciated.
Hello, below is the existing stanza in inputs.conf:

[monitor:///var/log]
whitelist=(\.log|log$|messages|secure|auth|mesg$|cron$|acpid$|\.out)
blacklist=(lastlog|anaconda\.syslog)
disabled = 1

I also want to blacklist the folder /var/log/dynatrace and any logs within that folder and its subfolders. Can you please explain how this can be done? Is the syntax below correct?

blacklist=(lastlog|anaconda\.syslog)|(dynatrace)

I appreciate your experience and help.
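For what it's worth, whitelist/blacklist values are regexes matched against the full file path, so a bare "dynatrace" would also exclude any file that merely contains that string anywhere in its path or name; anchoring on the directory is safer. An untested sketch:

```
[monitor:///var/log]
whitelist = (\.log|log$|messages|secure|auth|mesg$|cron$|acpid$|\.out)
blacklist = (lastlog|anaconda\.syslog|/var/log/dynatrace/)
disabled = 1
```

The trailing slash after "dynatrace" limits the match to files under that directory (and its subdirectories), rather than to any path containing the word.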
The data I have is:

816851-567-7554080981706881
50A720 -123-8150015922249983
816851-567-1135131573613120
816851-567-0065137870504409
50A720 -123-1135131573613120
816851-567-0065137870504409
50A720 -123-1135131573613120
50A720 -123-0065137870504409

I want to extract 50A720 or 816851 using:

| rex field=name mode=sed "s/816851/"

but I'm getting this error: Error in 'rex' command: Failed to initialize sed. Failed to parse the replacement string.
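The sed expression above is missing its replacement section: sed mode needs the full s/regex/replacement/flags form, which is why sed fails to parse it. Also, if the goal is to extract the leading code rather than delete it, a capture group may be a better fit. Two sketches (the field name `code` is illustrative): the first removes the literal 816851, the second extracts the leading 6-character code (816851 or 50A720) into a new field.

```
| rex field=name mode=sed "s/816851//g"

| rex field=name "^(?<code>\S{6})"
```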
I'm interested in suggestions on how to tackle this. I know how I would implement it in Python, but I'm not really sure of the best practice for SOAR. Let's say I have an action called "Lookup Host". If it runs successfully, it returns a dict with some data: [{"hostname": "test1", "device_id": "abc123"}] But we might not actually have data on this host, in which case it returns empty: [] I need to ensure that we have data, because otherwise later playbook actions won't complete. Would we use a Decision here, like "If result != []: continue, else: exit playbook"? Here is loosely what I want to do, in Python:

result = LookupHost(hostname="test1")
if result:
    # Have a result, so can continue
    run_second_action()
else:
    # No data found, exit
    exit(0)
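The emptiness check itself is simple either way; a minimal sketch of the branch logic (function and return-value names are hypothetical, not SOAR APIs — in a playbook this would typically map to a Decision block with one path continuing to the next action and the other ending the playbook):

```python
def host_was_found(action_results):
    """Return True when the lookup returned at least one record."""
    # An empty list (no data on this host) is falsy in Python.
    return bool(action_results)

def decide_next_step(action_results):
    # Mirrors a SOAR Decision block: continue on data, exit otherwise.
    if host_was_found(action_results):
        return "run_second_action"
    return "exit_playbook"
```

For example, `decide_next_step([{"hostname": "test1", "device_id": "abc123"}])` continues, while `decide_next_step([])` exits.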
I'm trying to centralize our app information on our HFs. Each HF has the following scheduled search set up: | rest /services/apps/local | search disabled=0 | table splunk_server label title version update.version check_for_updates | collect index="meta_apps" The index "meta_apps" exists on the HF and on Splunk Cloud. However, I don't see these results in Splunk Cloud. What am I missing?
Hi, my team and I are currently developing a website which needs to pull data from Splunk and insert it into visualizations on the site's home page. As the title suggests, we are currently using React and NodeJS, and due to our absolute lack of Splunk experience we are a bit bogged down, so please forgive me if this is a potentially dumb question.

We are trying to use the Splunk JavaScript SDK in Node to establish a connection and pull data from Splunk. We have tried absolutely everything at this point but cannot establish a connection and perform a simple service.login through the SDK. We have tried this with Postman and it appears to be working just fine on that side of things.

For example, we have tried using the code from Server Side JavaScript, but when running it, it throws the following error:

throw err;
^
{
  response: { headers: {}, statusCode: 600 },
  status: 600,
  data: undefined,
  error: Error: connect ECONNREFUSED ::1:8089
      at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1195:16) {
    errno: -4078,
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '::1',
    port: 8089
  }
}
Node.js v17.9.0

or this error:

throw err;
^
{
  response: { headers: {}, statusCode: 600 },
  status: 600,
  data: undefined,
  error: Error: write EPROTO 04490000:error:0A00010B:SSL routines:ssl3_get_record:wrong version number:c:\ws\deps\openssl\openssl\ssl\record\ssl3_record.c:355:
      at WriteWrap.onWriteComplete [as oncomplete] (node:internal/stream_base_commons:94:16) {
    errno: -4046,
    code: 'EPROTO',
    syscall: 'write'
  }
}
Node.js v17.9.0

Can anyone please help? Any help would be greatly appreciated.
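A minimal sketch of SDK connection options worth trying (host, port, and credentials here are placeholders, not known-good values). Two things that commonly produce exactly these errors: ECONNREFUSED ::1:8089 suggests Node resolved "localhost" to the IPv6 loopback, which an explicit IPv4 address avoids; and EPROTO "wrong version number" usually means the client spoke the wrong protocol to a TLS port, so the scheme for the management port (8089) should be "https".

```javascript
// Placeholder connection options for the Splunk JavaScript SDK.
var connectionOptions = {
    scheme: "https",    // management port speaks TLS, not plain HTTP
    host: "127.0.0.1",  // explicit IPv4 avoids ::1 (IPv6) resolution
    port: 8089,
    username: "admin",     // placeholder credentials -- replace
    password: "changeme"
};

// Passed to the SDK roughly like this:
//   var splunkjs = require("splunk-sdk");
//   var service = new splunkjs.Service(connectionOptions);
//   service.login(function (err, success) { /* ... */ });
```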
I feel I'm getting lost in the sauce. I'm working on creating a props.conf for some syslog data at ingest (not search time), where the syslog message has its standard syslog content, and then my message starts with a statement followed by colon-delimited fields on new lines. Like the message below. NOTE: the bold "normal" text changes depending on the message type, so this part is dynamic.

<priority>timestamp data1 data2 this is a normal message:
key:val
key1:val1
key2:val2
---
key_n:val_n

So I want to parse the first line and pull different values from the syslog message, and then after that just use a delimiter so I don't have to specify each field (because there are a lot of fields, up to 50 different key:value lines).

First, I'm not sure how to tell Splunk: parse line 1 one way, then use a delimiter on every other line. I'm sure there is a way? I've looked into the attributes for structured data. I want to treat the first line almost like a header (different from the rest of the log), but not like the FIELD_HEADER properties, as this isn't a delimited header I'm attempting to extract (like CSV data). How can I parse just the first line of my syslog (probably with some regex to grab everything appropriately), and then use the delimiter for the rest of the content? Maybe I could use FIELD_DELIMITER=: ? Additionally, I'm thinking I might have to use the transforms DELIMS property, something like: DELIMS = "\r\n", ":"
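One possible shape for an index-time split (a sketch only: the stanza and transform class names are illustrative, and the regexes are untested guesses at this format). Note that DELIMS is a search-time transform option and FIELD_DELIMITER belongs to INDEXED_EXTRACTIONS for structured files, so for index-time extraction on this kind of event the usual tools are REGEX/FORMAT with WRITE_META, plus REPEAT_MATCH for the repeating key:value lines:

```
# props.conf
[my:syslog:sourcetype]
TRANSFORMS-00_firstline = extract_first_line
TRANSFORMS-01_kvlines   = extract_kv_lines

# transforms.conf
[extract_first_line]
# pull fixed fields from the leading syslog line only
REGEX = ^<\d+>(\S+)\s+(\S+)\s+(\S+)
FORMAT = ts::$1 data1::$2 data2::$3
WRITE_META = true

[extract_kv_lines]
# every subsequent "key:value" line; REPEAT_MATCH reapplies the
# regex through the rest of the event
REGEX = (?m)^([^:\r\n]+):(.+)$
FORMAT = $1::$2
WRITE_META = true
REPEAT_MATCH = true
```

Whether dynamic key names via $1::$2 behave identically at index time and at search time is worth verifying against the transforms.conf spec for your version before relying on this.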