All Topics


Hello friends, last week we updated a few apps based on feedback from the Upgrade Readiness App. Admittedly, the apps had been forgotten about for about a year. Yet we are still receiving e-mail notifications about deprecated jQuery or Python versions for those apps. Since all the apps are "Built by Splunk Inc.", I have a feeling this should not be the case. Is it safe to [Dismiss App Alert] in the Upgrade Readiness App? Will the alert re-appear after the next scan?
How can SOAR be set up to receive data from Splunk ES, process it, send an action to the endpoints, and update the event status in Splunk ES?
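Not an authoritative answer, but the usual pattern is: forward notable events from ES to SOAR (via the Splunk App for SOAR Export or a polling Splunk asset on the SOAR side), run a playbook that takes the endpoint action, then call the Splunk asset's "update event" action to write the status back to the notable. A minimal playbook sketch; the asset names ("splunk_es", "edr"), the endpoint action name, and the container field access are all assumptions that vary by environment:

```python
import phantom.rules as phantom

def on_start(container):
    # Act on the endpoint named in the ingested ES notable
    # ("quarantine device" and the "dest" field are placeholders)
    phantom.act("quarantine device",
                parameters=[{"hostname": container.get("data", {}).get("dest", "")}],
                assets=["edr"],
                callback=update_es)

def update_es(action, success, container, results, handle):
    # Push the new status back to the originating notable event in ES;
    # the Splunk asset's "update event" action keys off the notable's event ID
    phantom.act("update event",
                parameters=[{"event_ids": container.get("source_data_identifier", ""),
                             "status": "closed"}],
                assets=["splunk_es"])
```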
Based on documentation and posts (Who do saved scheduled searches run as? and Question about "run as" (Owner or User) for saved searches), a saved search configured to "run as" owner should run with the permissions of the search's owner. However, I have two saved searches that do not work that way. Specifically, the searches use indexes that I (the owner) have access to but other user roles do not. The only difference I can think of is that my searches are in a Splunk Cloud instance, and my users authenticate using SAML against an on-premises IdP. Any insights would be much appreciated!
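For reference, the setting behind "run as" is dispatchAs in savedsearches.conf. A minimal sketch (the stanza name is illustrative):

```
# savedsearches.conf -- stanza name is illustrative
[my_scheduled_search]
dispatchAs = owner    # run with the owner's roles and index permissions
# dispatchAs = user   # run with the invoking user's permissions instead
```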
Hello everyone, I have a Sangfor firewall, and there is no add-on on Splunk for it. What is the method to get the firewall logs into Splunk? Thanks.
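Without a dedicated add-on, a common approach is to have the firewall send syslog to a Splunk instance (or to a syslog server that writes files Splunk monitors). A minimal inputs.conf sketch; the port, index, and sourcetype are assumptions to match to your environment:

```
# inputs.conf on a heavy forwarder or indexer
# (port, index, and sourcetype below are assumptions)
[udp://514]
sourcetype = sangfor:firewall
index = network
connection_host = ip
```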
Hi there, we are using the JIRA Service Desk add-on to open JSM tickets from Splunk ES correlation search alerts. I found the docs on how to set up the add-on via the REST API ( https://ta-jira-service-desk-simple-addon.readthedocs.io/en/latest/configuration.html#configuring-via-rest-api ). My question is: is it possible to use the REST API to configure the response action itself for every correlation search?
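In principle yes: correlation searches are saved searches, so their alert actions can be set through the saved/searches REST endpoint. A hedged sketch; the search name is a placeholder, and the action name (jira_service_desk) and parameter key follow the add-on's documented naming pattern but should be verified against your installed version:

```
# Enable the JIRA action on an existing correlation search via REST
# (search name, action name, and parameter keys are assumptions)
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/searches/My%20Correlation%20Search \
  -d actions=jira_service_desk \
  -d action.jira_service_desk.param.jira_project=SEC
```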
Hi All, I've been working on a dashboard in Splunk and I'm noticing that it takes a considerable amount of time to load. How can I optimize its performance?
1. I have created most of the queries as base searches.
2. How can I make panels into reports? If panels are made into reports, will the dashboard be more efficient?
I am also using dynamic searches in my dashboard. Could you please provide some tips or examples to improve the speed and performance of my Splunk dashboard. Thanks, Karthi
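On point 1, the pattern that usually helps most is one base search that narrows the fields, plus post-process searches that only do cheap statistics. A minimal Simple XML sketch; the index, sourcetype, and field names are placeholders:

```xml
<dashboard>
  <!-- Base search: runs once, keeps only the fields the panels need -->
  <search id="base">
    <query>index=web sourcetype=access_combined | fields status, uri, _time</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <chart>
        <!-- Post-process: cheap stats over the base results, no re-scan -->
        <search base="base">
          <query>| stats count by status</query>
        </search>
      </chart>
    </panel>
  </row>
</dashboard>
```

On point 2, saving a panel as a report mainly helps when the report is scheduled or accelerated, because the dashboard can then load cached results instead of re-running the search on every load.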
So we have an internal load balancer that distributes HEC requests between 2 heavy forwarders. HEC is working fine, but a small fraction of the requests are not making it to the heavy forwarders. The sender of the events gets the 503 error below: upstream connect error or disconnect/reset before headers. reset reason: connection termination while the internal load balancer gets this error: backend_connection_closed_before_data_sent_to_client What really baffles me is that I couldn't find any error logs in Splunk that might be connected to this issue. There's also no indication that our heavy forwarders are hitting their queue limits. I even tried increasing the max queue size of certain queues, including that of the HEC input in question, but even that didn't help at all. Is there anything else I can check to help me pinpoint the cause of this problem?
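One place worth checking on the heavy forwarders is the internal index: HEC-level errors and queue blocking both show up there even when nothing appears in the UI. Two stock searches against standard internal sources:

```
index=_internal sourcetype=splunkd component=HttpInputDataHandler log_level=ERROR

index=_internal source=*metrics.log* group=queue blocked=true
| timechart span=1m count by name
```

If those come back empty, it is also worth comparing the load balancer's idle/keep-alive timeout against the HEC server's, since "connection closed before data sent" errors often point to a keep-alive timeout mismatch rather than Splunk dropping data.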
We noticed this morning that all the certificates for our Splunk servers have been expired for a week (discovered whilst investigating why the KV store stopped this weekend). Following a recommendation from another community post, I renamed server.pem to server.pem.old and restarted the Splunk service to create a new one. It correctly creates a new server.pem with a valid expiration date; however, the browser still displays the old certificate. I already checked with btool, and it seems fine (pointing to server.pem). I also already checked web.conf and tried to manually indicate the file path, but it's still not working... Am I missing something?
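For reference, the management port (8089) and Splunk Web use separate certificate settings, which is a common reason the browser keeps showing an old certificate after server.pem is regenerated. Two btool checks to confirm which file each one points at:

```
# Certificate used by splunkd (management port 8089)
$SPLUNK_HOME/bin/splunk btool server list sslConfig --debug | grep -i serverCert

# Certificate used by Splunk Web (web.conf [settings])
$SPLUNK_HOME/bin/splunk btool web list settings --debug | grep -iE "serverCert|privKeyPath"
```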
These dashboards are part of an app I made to visualize Nessus security scans in Splunk. The idea is somewhat inspired by the existing Tenable App for Splunk from Tenable, but I wanted to take the visualizations to the next level and make the data easier to understand and navigate.

The first dashboard is an overview dashboard. The picture below does not show the whole dashboard, but you get the point. It shows data from all vulnerability scans, with color coding differentiating the level of vulnerability severity. It's an easy way of seeing which environments and hosts have the most vulnerabilities, and which types of vulnerabilities are most widespread. Also note that it shows what period there is scan data from (which might not be the same as the time picker) and how many networks have been scanned (out of the total number of networks).

The next picture shows one of many drilldown dashboards in the app. It allows for a more detailed view of vulnerabilities per host, plus the possibility to get more information about a specific host by clicking on the top table. This table uses the same color coding as the overview dashboard. The bottom table links directly to the Tenable website, with more information about the specific vulnerability ID clicked on.

All dashboards allow for various types of filtering, for example only showing vulnerabilities with a minimum severity, e.g. at least medium. The dashboards also utilize the Splunk ES asset list to get more information about the hosts, so that it's possible to sort on vulnerabilities per business group or environment, among other things. Also, there is a lookup of "ignored vulnerabilities", to which users can add vulnerabilities to ignore them in the dashboards, e.g. by editing it in the Splunk App for Lookup File Editing.

Summary of functionality used in the dashboards:
- Color coding of vulnerability severity
- Drilldowns to other dashboards with more detailed information
- Drilldowns to external URLs with information on severity IDs
- Various filtering options on the dashboards
- Host enrichment from the Splunk ES asset list
- Dynamic whitelisting of vulnerabilities through a lookup file
- Correlation with other sources to show meta-information about vulnerability scans
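For anyone curious how the severity color coding can be done, a minimal Simple XML table-format sketch; the index, field name, and palette colors are illustrative, not the app's actual source:

```xml
<table>
  <search>
    <query>index=nessus | stats count by host, severity</query>
  </search>
  <!-- Map severity values to colors (field name and colors are examples) -->
  <format type="color" field="severity">
    <colorPalette type="map">{"Critical":#D41F1F,"High":#D94E17,"Medium":#CBA700,"Low":#1C6B0A}</colorPalette>
  </format>
</table>
```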
What transfer protocol does Splunk use, like FTP or SFTP? That is, what transfer method is used to send data over TCP from a universal forwarder to a Splunk Enterprise indexer cluster?
I need to create the dashboard below. This will be the main dashboard, and from here I can navigate to any of the other mentioned dashboards. AAA, BBB, and CCC are separate dashboards, and all of them should be accessible from this main dashboard.
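A minimal way to build such a launcher is an HTML panel with links to the other dashboards. A Simple XML sketch; the app name ("my_app") and dashboard IDs are placeholders:

```xml
<dashboard>
  <label>Main Dashboard</label>
  <row>
    <panel>
      <html>
        <!-- Replace "my_app" and the dashboard IDs with your own -->
        <ul>
          <li><a href="/app/my_app/aaa">AAA</a></li>
          <li><a href="/app/my_app/bbb">BBB</a></li>
          <li><a href="/app/my_app/ccc">CCC</a></li>
        </ul>
      </html>
    </panel>
  </row>
</dashboard>
```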
Hello, I'm a Splunk newbie. While looking into how to improve Splunk's search performance I found some posts, and I'm asking here because I'm a little confused. I referred to the two posts below:
https://splunk.illinois.edu/splunk-at-illinois/using-splunk/searching-splunk/how-to-optimize-your-searches/
https://idelta.co.uk/3-easy-ways-to-speed-up-your-splunk-searches-and-why-they-help/

Question 1) If I search
index=firewall_data 127.0.0.1
or
index=firewall_data "127.0.0.1"
is it right that, because of the internal segmentation process, the IP is broken into the three segments 127, 0, and 1 and searched that way? And because of this, if I use index=firewall_data TERM(127.1.1.24), is it correct that the breakers are not used and it shows better performance?

Question 2) If the assumption in question 1 is correct, index=firewall_data "127.0.0.1" should use more resources and index=firewall_data TERM(127.1.1.24) should perform better, but when I tested them, they performed the same. The job details say the data searched and the resources (time) used are all the same. Why?
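One way to see the difference empirically is the Job Inspector (compare scanCount and the command.search timings between the two jobs), or a lexicon-only count with tstats, which accepts TERM() in its where clause:

```
| tstats count where index=firewall_data TERM(127.0.0.1)
```

If both searches report the same scanCount, the quoted search was already resolving to the same indexed terms, which would explain identical runtimes; TERM() only wins when the value survives indexing as a single term rather than being split by major breakers.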
Hey all, I'm building a new dashboard that contains 2 multiselect inputs: Site: USA, Romania, Turkey... (only countries) Campus: USA1, USA2, Romania1, Romania2... (contains the country's name and a number). I want that when I select a country (or countries) in the Site multiselect, only the relevant campuses appear as options in the Campus multiselect. How can I create an inherited rule so that Campus inherits from the Site value? Thanks.
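The usual pattern is to let the Site input set a token and have the Campus input's populating search filter on that token. A Simple XML sketch, assuming a lookup file (sites_campuses.csv, with fields site and campus) that maps countries to campuses; the lookup and field names are placeholders:

```xml
<fieldset>
  <input type="multiselect" token="site_tok" searchWhenChanged="true">
    <label>Site</label>
    <!-- Quote and comma-join selections so IN($site_tok$) parses -->
    <valuePrefix>"</valuePrefix>
    <valueSuffix>"</valueSuffix>
    <delimiter>,</delimiter>
    <fieldForLabel>site</fieldForLabel>
    <fieldForValue>site</fieldForValue>
    <search>
      <query>| inputlookup sites_campuses.csv | stats count by site</query>
    </search>
  </input>
  <input type="multiselect" token="campus_tok">
    <label>Campus</label>
    <fieldForLabel>campus</fieldForLabel>
    <fieldForValue>campus</fieldForValue>
    <search>
      <!-- Re-runs whenever site_tok changes, so only matching campuses appear -->
      <query>| inputlookup sites_campuses.csv | search site IN ($site_tok$) | stats count by campus</query>
    </search>
  </input>
</fieldset>
```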
I have been facing this issue for a week; it came up after doing a rolling restart. I did the rolling restart only to fix data durability, but that step didn't fix anything, and a new issue appeared, shown in the photo below. Any solution for this matter, please? Thanks.
If an old admin account is deleted in a Splunk Enterprise distributed environment, will any actions or related tasks associated with that account be affected (such as use cases, lookups, etc.)?
Initially, the dashboard shows two panels showing the status of requests from the Buttercup Games (tutorial) dataset. The first panel shows the percentage of requests in each hour with various status values. A nominal threshold is set at an 85% success rate. The colour of the banner is chosen dynamically, based on the percentage of hours when the success rate is below the nominal threshold. When the user selects a time period (with pan and zoom), another panel opens up. This chart uses the time period selected above and breaks down the statuses into 5-minute slots. The success and threshold overlay lines are percentages; the other status values are counted in the 5-minute bins. When the user selects a status bar (from the chart or legend), a further panel opens up with the selected events tabled. Additional statuses can be added to the filter by clicking on them on the chart; clicking them again removes them from the filter. Clicking on "threshold" removes the status filter completely. Moving the time-select pan and zoom removes the drilldown table panel. Resetting the time-select pan and zoom removes the 5-minute status chart. The final panel shows the hourly success rate for the final day, with the upper and lower boundaries being the corresponding maximum and minimum hourly rates from the dataset. The left-hand y-axis is auto-scaled depending on the overall minimum success rate. This is a standard line chart using some hidden features normally used by the predict command.
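For reference, the hourly success-rate panel with its threshold overlay can be produced with a timechart along these lines (a sketch, not the author's actual source; the index and sourcetype follow the tutorial setup, and 85 is the nominal threshold):

```
index=main sourcetype=access_combined_wcookie
| timechart span=1h count(eval(status="200")) as success, count as total
| eval success_rate=round(100 * success / total, 1)
| eval threshold=85
| fields _time, success_rate, threshold
```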
I'm seeing errors such as: Corrupt csv header in CSV file , 2 columns with the same name '' (col #12 and #8, #12 will be ignored) but I can't find any reference to which CSV file is causing this error. Does anyone have any guidance on how to find the offending CSV file?
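The message comes from splunkd, so the surrounding internal log lines often carry more context about where it was triggered. A stock search to find those occurrences:

```
index=_internal sourcetype=splunkd "Corrupt csv header"
| table _time, host, component, _raw
```

If the message instead appears in a particular search's own search.log (visible via the Job Inspector), the offending file is one of the lookup CSVs referenced by that search.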
Hi everyone, I need your help. I have JSON data in a format like this: "alert_data": {"domain": "abc.com", "csv": {"id": 12345, "name": "credentials.csv", "mimetype": "text/csv", "is_safe": true, "content": [{"username": "test@abc.com", "password":"1qaz@WSX#EDC"} Because the password is sensitive information, I mask its first 6 characters before indexing. In addition, I need to check whether the password meets a complexity requirement; for example, the password should be at least 8 characters long and must include at least three of the following: numbers, uppercase letters, lowercase letters, and special characters. So the indexed data should be: "alert_data": {"domain": "abc.com", "csv": {"id": 12345, "name": "credentials.csv", "mimetype": "text/csv", "is_safe": true, "content": [{"username": "test@abc.com", "password":"******SX#EDC","is_password_meet_complexity":"Yes"} I already mask the password with a SEDCMD like this: [json_sourcetype] SEDCMD-password = s/\"password\"\:\s+\"\S{6}([^ ]*)/"password":"******\1/g But I have no idea how to derive the password-complexity metadata before indexing (i.e. add the "is_password_meet_complexity" field to the event). Should I use an ingest-time eval? Your support in this is highly appreciated.
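An INGEST_EVAL transform is one way to do this. A sketch of the idea, with caveats: the stanza name is illustrative, it assumes every event has exactly one password field, and it assumes the transform sees _raw before the SEDCMD mask runs (worth verifying in your environment, since the complexity check must read the original password):

```
# transforms.conf
[add_password_complexity]
# Extract the password, count character classes, append the verdict to _raw,
# then null the temporary fields so the cleartext password is never indexed
INGEST_EVAL = pw:=replace(_raw, "(?s).*\"password\":\s*\"([^\"]*)\".*", "\1"), classes:=if(match(pw,"[0-9]"),1,0)+if(match(pw,"[a-z]"),1,0)+if(match(pw,"[A-Z]"),1,0)+if(match(pw,"[^a-zA-Z0-9]"),1,0), _raw:=replace(_raw, "(\"password\":\s*\"[^\"]*\")", "\1,\"is_password_meet_complexity\":\"" . if(len(pw)>=8 AND classes>=3, "Yes", "No") . "\""), pw:=null(), classes:=null()

# props.conf
[json_sourcetype]
TRANSFORMS-complexity = add_password_complexity
SEDCMD-password = s/\"password\"\:\s+\"\S{6}([^ ]*)/"password":"******\1/g
```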
I have a table as below:

Date Out    Airline    Bag Type    Total Processed
01/05/2024  IX         Local       100
01/05/2024  IX         Transfer    120
02/05/2024  BA         Local       140
02/05/2024  BA         Transfer    160
03/05/2024  IX         Local       150

Whenever a Bag Type is missing for a certain Airline (in the above case, Transfer data is missing for 03/05/2024 IX), I need to create a manual row entry with Total Processed = 0:

Date Out    Airline    Bag Type    Total Processed
01/05/2024  IX         Local       100
01/05/2024  IX         Transfer    120
02/05/2024  BA         Local       140
02/05/2024  BA         Transfer    160
03/05/2024  IX         Local       150
03/05/2024  IX         Transfer    0
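One way to backfill the missing combinations is to pivot with chart (which materializes a cell for every Date/Airline row against every Bag Type), fill the empty cells with 0, then unpivot back to rows. A sketch assuming the field names shown above:

```
... your existing search ...
| rename "Date Out" as date_out, "Bag Type" as bag_type, "Total Processed" as processed
| eval key=date_out . "|" . Airline
| chart sum(processed) as processed over key by bag_type
| fillnull value=0
| untable key bag_type processed
| eval date_out=mvindex(split(key, "|"), 0), Airline=mvindex(split(key, "|"), 1)
| rename date_out as "Date Out", bag_type as "Bag Type", processed as "Total Processed"
| table "Date Out", Airline, "Bag Type", "Total Processed"
```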