All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Based on documentation and posts (Who do saved scheduled searches run as? and Question about "run as" (Owner or User) for saved searches), a saved search configured to "run as" owner should run with the permissions of the search's owner. However, I have two saved searches that do not work that way. Specifically, the searches use indexes that I (the owner) have access to but other user roles do not. The only difference I can think of is that my searches are in a Splunk Cloud instance, and my users authenticate using SAML against an on-premises IdP. Any insights would be much appreciated!
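One way to confirm which user the scheduler actually dispatched the search as is to check the scheduler logs in _internal (a diagnostic sketch; it assumes you can search _internal on your stack, and my_saved_search is a placeholder for one of the affected searches):

index=_internal sourcetype=scheduler savedsearch_name="my_saved_search"
| table _time user app savedsearch_name status run_time

If user there is the owner but results are still restricted, it may be worth comparing the owner's effective index access across all of their roles, since SAML group-to-role mapping can grant a different role set than expected.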
Hello everyone, I have a Sangfor firewall, and there is no add-on for it on Splunkbase. What is the method to get the firewall logs into Splunk? Thanks.
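Since there is no dedicated add-on, a common approach is to point the firewall's syslog output at Splunk, either through a network input or (preferably) a syslog server with a forwarder monitoring the files. A minimal network-input sketch, assuming the firewall sends syslog over UDP 514; sangfor:firewall is a sourcetype name you would define yourself:

# inputs.conf on a heavy forwarder or indexer
[udp://514]
sourcetype = sangfor:firewall
index = firewall
connection_host = ip

You would then build your own field extractions on top of that sourcetype, since there is no prebuilt TA to do it for you.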
Hi there, We are using the JIRA Service Desk add-on to open JSM tickets from Splunk ES correlation search alerts. I found the docs on how to set up the add-on via the REST API (https://ta-jira-service-desk-simple-addon.readthedocs.io/en/latest/configuration.html#configuring-via-rest-api). My question is: is it possible to use the REST API to configure the response action itself for every correlation search?
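In principle yes: alert-action settings live per saved search in savedsearches.conf, so they can be set through the saved/searches REST endpoint. A hedged sketch; the app name and the action.jira_service_desk parameter names below are assumptions, so first configure one search in the UI and copy the exact key names from its savedsearches.conf stanza:

curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/searches/My%20Correlation%20Search \
  -d action.jira_service_desk=1 \
  -d action.jira_service_desk.param.jira_project=OPS \
  -d action.jira_service_desk.param.jira_issue_type=Incident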
Hi all, I've been working on a dashboard in Splunk and I'm noticing that it takes a considerable amount of time to load. How can I optimize its performance?
1. I have built most of the queries on a base search.
2. How do I turn panels into reports, and if I do, will the dashboard be more efficient?
I am also using dynamic searches in the dashboard. Could you please provide some tips or examples to improve the speed and performance of my Splunk dashboard? Thanks, Karthi
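For the base-search part, the main win is running the expensive search once and having panels post-process it. A minimal Simple XML sketch (the index, fields, and panel content are placeholders):

<dashboard>
  <search id="base">
    <query>index=web sourcetype=access_combined | fields status, uri_path</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <chart>
        <search base="base">
          <query>| stats count by status</query>
        </search>
      </chart>
    </panel>
  </row>
</dashboard>

On the report question: converting a panel's search into a scheduled report helps because the dashboard can then load the cached report results instead of re-running the search on every view. Two caveats worth knowing: a base search should end in fields or a transforming command (not table), and post-process searches can only use fields the base search returns.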
So we have an internal load balancer that distributes HEC requests between 2 heavy forwarders. HEC is mostly working fine, but a small fraction of the requests are not making it to the heavy forwarders. The sender of the events gets the 503 error below:

upstream connect error or disconnect/reset before headers. reset reason: connection termination

while the internal load balancer gets this error:

backend_connection_closed_before_data_sent_to_client

What really baffles me is that I couldn't find any error logs in Splunk that might be connected to this issue. There's also no indication that our heavy forwarders are hitting their queue limits. I even tried increasing the max queue size of certain queues, including that of the HEC input in question, but even that didn't help at all. Is there anything else I can check to help me pinpoint the cause of this problem?
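A couple of diagnostic searches that may help narrow this down (host names are placeholders; assumes the heavy forwarders send their _internal logs to the indexers):

index=_internal (host=hf1 OR host=hf2) sourcetype=splunkd (component=HttpInputDataHandler OR component=HttpListener) (ERROR OR WARN)

index=_internal (host=hf1 OR host=hf2) source=*metrics.log* group=queue
| timechart max(current_size) by name

Separately, the backend_connection_closed_before_data_sent_to_client message suggests comparing the load balancer's idle/keep-alive timeout with splunkd's HTTP keep-alive behavior: if the LB reuses a pooled connection that splunkd has already closed, you get exactly this kind of sporadic 503/reset even though both ends are otherwise healthy.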
We noticed this morning that all the certificates for our Splunk servers expired a week ago (discovered while investigating why the KV store stopped this weekend). Following a recommendation from another community post, I renamed server.pem to server.pem.old and restarted the Splunk service to create a new one. It correctly creates a new server.pem with a valid expiration date; however, my browser still shows the old certificate. I already checked with btool, and it seems fine (pointing to server.pem). I also checked web.conf and tried to set the file path manually, but it's still not working. Am I missing something?
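One thing worth checking: by default Splunk Web does not serve server.pem at all. server.pem (etc/auth/server.pem) covers splunkd's management port (8089) and the KV store, while the browser-facing certificate comes from web.conf (serverCert/privKeyPath, defaulting to etc/auth/splunkweb/cert.pem and privkey.pem). A quick way to see which file Splunk Web uses and what it actually serves (host and port are placeholders):

$SPLUNK_HOME/bin/splunk btool web list settings --debug | grep -Ei 'serverCert|privKeyPath|enableSplunkWebSSL'
openssl s_client -connect yourhost:8000 </dev/null 2>/dev/null | openssl x509 -noout -subject -dates

If the dates shown there are the old ones, regenerate or replace the splunkweb certificate as well and restart Splunk.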
What transfer protocol does Splunk use (FTP, SFTP, ...)? That is, what transfer method is used to send data over TCP from a universal forwarder to a Splunk Enterprise indexer cluster?
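For reference, forwarders do not use FTP or SFTP at all: a universal forwarder sends data to indexers over Splunk's proprietary Splunk-to-Splunk (S2S) protocol on a plain TCP connection (port 9997 by convention), optionally wrapped in TLS. The forwarder side looks like this in outputs.conf (host names are placeholders):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true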
I need to create the dashboard below. This will be the main dashboard, and from here I can navigate to any of the other dashboards mentioned. AAA, BBB, and CCC are separate dashboards, and all of them should be accessible from this main dashboard.
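A simple way to do this in Simple XML is a panel of HTML links to the other dashboards (the app name and dashboard IDs below are placeholders; adjust them to your environment):

<dashboard>
  <label>Main Dashboard</label>
  <row>
    <panel>
      <html>
        <ul>
          <li><a href="/app/search/aaa_dashboard">AAA</a></li>
          <li><a href="/app/search/bbb_dashboard">BBB</a></li>
          <li><a href="/app/search/ccc_dashboard">CCC</a></li>
        </ul>
      </html>
    </panel>
  </row>
</dashboard>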
Hello, I'm a Splunk newbie. While looking into how to improve Splunk's search performance I found some posts, but I'm a little confused, so I'm asking here. I referred to the two posts below:
https://splunk.illinois.edu/splunk-at-illinois/using-splunk/searching-splunk/how-to-optimize-your-searches/
https://idelta.co.uk/3-easy-ways-to-speed-up-your-splunk-searches-and-why-they-help/

Question 1) If I search
index=firewall_data 127.0.0.1
or
index=firewall_data "127.0.0.1"
is it right that, because of the internal segmentation process, the value is broken into the segments 127, 0, and 1 and searched that way? And because of that, if I use index=firewall_data TERM(127.0.0.1), is it correct that the breakers are not applied and it shows better performance?

Question 2) If the assumption in question 1 is correct, index=firewall_data "127.0.0.1" should use more resources and index=firewall_data TERM(127.0.0.1) should perform better, but when I tested them, they actually performed the same. The data returned and the resources (time) used were identical. Why?
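One way to see what is really happening is to run both variants and compare the "base lispy" line in each job's search.log (Job Inspector):

index=firewall_data "127.0.0.1"
index=firewall_data TERM(127.0.0.1)

Periods are minor breakers, so the index stores both the individual segments (127, 0, 1) and the whole token 127.0.0.1 bounded by major breakers. The quoted-string version has classically produced a lispy like [ AND 127 0 1 ] plus a post-filter, while TERM() produces [ AND 127.0.0.1 ]. If both of your jobs report the same lispy and the same cost, your Splunk version's optimizer has effectively rewritten the quoted string into the single-term lookup for you, which would explain the identical performance; TERM() mainly wins where that rewrite does not happen.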
Hey all, I'm building a new dashboard that contains 2 multiselect inputs:
Site: USA, Romania, Turkey, ... (countries only)
Campus: USA1, USA2, Romania1, Romania2, ... (the country's name plus a number)
I want it so that when I select one or more countries in the Site multiselect, the Campus multiselect only offers the campuses relevant to those countries. How can I create an inheritance rule so that Campus inherits from the Site value? Thanks.
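There is no built-in inheritance rule, but the standard pattern is to drive the Campus input's populating search with the Site token. A Simple XML sketch (index and field names are placeholders; it assumes your data has site and campus fields):

<input type="multiselect" token="site" searchWhenChanged="true">
  <label>Site</label>
  <fieldForLabel>site</fieldForLabel>
  <fieldForValue>site</fieldForValue>
  <search>
    <query>index=mydata | stats count by site | fields site</query>
  </search>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>,</delimiter>
</input>
<input type="multiselect" token="campus">
  <label>Campus</label>
  <fieldForLabel>campus</fieldForLabel>
  <fieldForValue>campus</fieldForValue>
  <search>
    <query>index=mydata site IN ($site$) | stats count by campus | fields campus</query>
  </search>
</input>

The valuePrefix/valueSuffix/delimiter settings make $site$ expand to "USA","Romania", so the IN (...) clause in the Campus populating search stays valid SPL for any number of selections.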
I have been facing this issue for a week; it appeared after I did a rolling restart. I did the rolling restart just to fix data durability, but that step didn't fix anything, and a new issue came up, shown in the photo below. Any solution to this matter, please? Thanks.
If an old admin account is deleted in a Splunk Enterprise distributed environment, will any actions or related tasks associated with that account be affected (use cases, lookups, etc.)?
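Yes: knowledge objects owned by a deleted account become orphaned, and orphaned scheduled searches stop running (Splunk's health check also warns about them). One way to inventory what the old account still owns before removing it (a sketch using the saved/searches REST endpoint; run it as an admin and replace old_admin):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search eai:acl.owner="old_admin"
| table title eai:acl.app eai:acl.sharing cron_schedule

The cleaner path is to reassign the objects to a new owner first; recent versions have a Reassign Knowledge Objects page under Settings > All configurations for exactly this.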
I'm seeing errors such as:

Corrupt csv header in CSV file , 2 columns with the same name '' (col #12 and #8, #12 will be ignored)

but I can't find any reference to which CSV file is causing this error. Does anyone have any guidance on how to find the offending CSV file?
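The message itself unfortunately omits the file name, but the surrounding _internal events sometimes identify the component or search involved, which narrows it down. A starting point (assumes the error is logged by splunkd):

index=_internal sourcetype=splunkd "Corrupt csv header in CSV file"
| stats count by host, component, source

If it points at lookups, you can list the lookup files Splunk knows about and then inspect their header rows for duplicate or empty column names, which is exactly what the error describes:

| rest /servicesNS/-/-/data/lookup-table-files splunk_server=local
| table title eai:acl.app eai:data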
Hi everyone, I need your help. I have JSON data in this format:

"alert_data": {"domain": "abc.com", "csv": {"id": 12345, "name": "credentials.csv", "mimetype": "text/csv", "is_safe": true, "content": [{"username": "test@abc.com", "password":"1qaz@WSX#EDC"}

Because the password is sensitive information, I mask its first 6 characters before indexing. In addition, I need to check whether the password meets a complexity requirement; for example, the password should be at least 8 characters long and must include at least three of the following: numbers, uppercase letters, lowercase letters, and special characters. So the indexed data should be:

"alert_data": {"domain": "abc.com", "csv": {"id": 12345, "name": "credentials.csv", "mimetype": "text/csv", "is_safe": true, "content": [{"username": "test@abc.com", "password":"******SX#EDC","is_password_meet_complexity":"Yes"}

I already mask the password with SEDCMD like this:

[json_sourcetype]
SEDCMD-password = s/\"password\"\:\s+\"\S{6}([^ ]*)/"password":"******\1/g

But I have no idea how to derive the password-complexity metadata before indexing (i.e., add the "is_password_meet_complexity" field to the log). Should I use an ingest-time eval? Your support on this is highly appreciated.
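INGEST_EVAL is a reasonable fit, but note the ordering problem: as far as I know, SEDCMD runs before index-time transforms, so an INGEST_EVAL behind your SEDCMD would only ever see the already-masked value. One way around that is to do both the complexity check and the masking inside INGEST_EVAL and drop the SEDCMD. An untested sketch; the regexes assume the password always appears as "password":"..." in _raw, and the flag is written as an indexed field rather than into the JSON itself:

# transforms.conf
[pwd_complexity_then_mask]
INGEST_EVAL = pwd:=replace(_raw, "(?s).*\"password\":\s*\"([^\"]*)\".*", "\1"), is_password_meet_complexity:=if(len(pwd)>=8 AND (if(match(pwd,"\d"),1,0) + if(match(pwd,"[a-z]"),1,0) + if(match(pwd,"[A-Z]"),1,0) + if(match(pwd,"[^A-Za-z0-9]"),1,0)) >= 3, "Yes", "No"), _raw:=replace(_raw, "(\"password\":\s*\")\S{6}", "\1******"), pwd:=null()

# props.conf
[json_sourcetype]
TRANSFORMS-pwdmask = pwd_complexity_then_mask

If the flag must live inside the raw JSON instead of as an indexed field, _raw can be rewritten with one more replace() that injects the "is_password_meet_complexity" key after the password field; either way, test on a dev instance first, since ingest-time changes are irreversible once indexed.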
I have a table as below:

Date Out | Airline | Bag Type | Total Processed
01/05/2024 | IX | Local | 100
01/05/2024 | IX | Transfer | 120
02/05/2024 | BA | Local | 140
02/05/2024 | BA | Transfer | 160
03/05/2024 | IX | Local | 150

Whenever a Bag Type is missing for a certain Airline (in the case above, Transfer data is missing for 03/05/2024 IX), I need to create a manual row entry with the value 0 (Total Processed = 0):

Date Out | Airline | Bag Type | Total Processed
01/05/2024 | IX | Local | 100
01/05/2024 | IX | Transfer | 120
02/05/2024 | BA | Local | 140
02/05/2024 | BA | Transfer | 160
03/05/2024 | IX | Local | 150
03/05/2024 | IX | Transfer | 0
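One way to materialize the missing combinations is to pivot the Bag Type values into columns with chart (which creates an empty cell for every missing combination), fill the gaps with 0, and unpivot back with untable. A sketch that assumes your table comes from a search producing exactly those four field names:

... your base search ...
| eval key='Date Out' . "|" . Airline
| chart sum('Total Processed') as "Total Processed" over key by "Bag Type"
| fillnull value=0
| untable key "Bag Type" "Total Processed"
| eval "Date Out"=mvindex(split(key, "|"), 0), Airline=mvindex(split(key, "|"), 1)
| fields - key
| table "Date Out" Airline "Bag Type" "Total Processed"

The key field glues Date Out and Airline into a single row identifier so chart can pivot on it, and the final eval splits it back apart.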
Hi. QUESTION: is there a method/configuration to fully align a UF with the deployment server? Let me explain:
- The DS has ServerX configured with 3 add-ons: addon#1 + addon#2 + addon#3.
- The UF on ServerX receives addon#1 + addon#2 + addon#3 perfectly.
- Now a user logs in as root on ServerX and creates his own custom add-on inside the UF, addon#4. ServerX now has addon#1 + addon#2 + addon#3 (from the DS) + addon#4 (custom, created by the user).
Is there a way to tell the DS: maintain ONLY addon#1 + addon#2 + addon#3 and DELETE ALL OTHER CUSTOM ADD-ONS (addon#4 in this example)? Thanks.
Hello, I have created a new role, but I noticed that the users to whom I have assigned that role get an "error occurred while rendering the page template" error when they click the Fields option under Knowledge. I looked at the capabilities but can't seem to find the right one that provides access to Fields.
Hi team, good day! I need to build a query that returns only the successful payloads related to a particular service name, where that service is used by different applications (such as EDS and CDS). We need to pull the data from the request payload through to the successful response payload based on the correlation ID, which is present in the request payload; each event contains a unique correlation ID. We are using the query below to pull the request payload data:

index="os" host="abcd*" source="/opt/os/*/logs/*" "implementation:abc-field-flow" "TargetID":"abc" "Sender":"SenderID":"abc"

With the above query, we get raw data like this:

INFO 2024-05-23 06:05:30,275 [[OS].uber.11789: [services-workorders-procapi].implementation:abc-field-flow.CPU_LITE @7d275f1b] [event: 2-753d5970-18ca-11ef-8980-0672a96fbe16] com.wing.esb: PROCESS :: implementation:abc-field-flow :: STARTED :-: CORRELATION ID :: 2-753d5970-18ca-11ef-8980-0672a96fbe16 :-: REQUEST PAYLOAD :: {"Header":{"Target":{"TargetID":"abc"},"Sender":{"SenderID":"abc"}},"DataArea":{"workOrder":"42141","unitNumber":"145","timestamp":"05/23/2024 00:53:57","nbSearches":"0","modelSeries":"123","manufacturer":"FLY","id":"00903855","faultCode":"6766,1117,3497,3498,3867,6255,Blank","faliurePoint":"120074","faliureMeasure":"MI","eventType":"DBR","event":[{"verificationStatus":"Y","timestamp":"05/23/2024 01:32:30","solutionSeq":"1","solutionId":"S00000563","searchNumber":"0","searchCompleted":"True","repairStatus":"N","informationType":"","componentID":""},{"verificationStatus":"Y","timestamp":"05/23/2024 01:32:30","solutionSeq":"2","solutionId":"S00000443","searchNumber":"0","searchCompleted":"True","repairStatus":"N","informationType":"","componentID":""},{"verificationStatus":"Y","timestamp":"05/23/2024 02:03:25","solutionSeq":"3","solutionId":"S00000933","searchNumber":"0","searchCompleted":"True","repairStatus":"Y","informationType":"","componentID":""}],"esn":"12345678","dsStatus":"Open","dsID":"00903855","dsClosureType":null,"customerName":"Tar Wars","createDate":"05/23/2024 00:53:49","application":"130","accessSRTID":""}}

And we are using the query below for the response payload:

index="OS" host="abcd*" source="/opt/os/*/logs/*" "implementation:abc-field-flow" "status": "SUCCESS"

With that query, we get raw data like this:

5/23/24 11:35:33.618 AM INFO 2024-05-23 06:05:33,618 [[OS].uber.11800: [services-workorders-procapi].implementation:abc-field-flow.CPU_INTENSIVE @4366240b] [event: 2-753d5970-18ca-11ef-8980-0672a96fbe16] com.wing.esb: PROCESS :: implementation::mainFlow :: COMPLETED :-: CORRELATION ID :: 2-753d5970-18ca-11ef-8980-0672a96fbe16 :-: RESPONSE PAYLOAD :: { "MessageIdentifier": "2-753d5970-18ca-11ef-8980-0672a96fbe16", "ReturnCode": 0, "ReturnCodeDescription": "", "status": "SUCCESS", "Message": "Message Received" }

The correlation ID in the request payload raw data should match the correlation ID in the response payload. Based on that, I want a search that pulls the data from the request payload through to the response payload, matched on correlation ID. How do I combine the two queries above into one search that returns only the matched response payload data? Thanks in advance for your help! Regards, Vamshi Krishna M.
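Since both event types carry the correlation ID, this can usually be done in one search without join: extract the ID and payload type with rex, group by the ID, and keep only the IDs whose response reports SUCCESS. A sketch whose regexes assume the exact log layout shown above:

index=os host="abcd*" source="/opt/os/*/logs/*" "implementation:abc-field-flow" ("REQUEST PAYLOAD" OR "RESPONSE PAYLOAD")
| rex "CORRELATION ID :: (?<correlation_id>\S+)"
| rex "(?s)(?<payload_type>REQUEST|RESPONSE) PAYLOAD :: (?<payload>.+)"
| stats values(eval(if(payload_type=="REQUEST", payload, null()))) as request_payload, values(eval(if(payload_type=="RESPONSE", payload, null()))) as response_payload by correlation_id
| search response_payload="*\"status\": \"SUCCESS\"*"

The (?s) flag lets the payload capture span the multi-line JSON, and the final search keeps only correlation IDs whose response payload contains the SUCCESS status.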
I've trained a Density Function model on my data but ONLY want it to output outliers that exceed the upper bound, not those below the lower bound. How would I do this? My search:

index=my_index
| bin _time span=1d
| stats sum(numerical_feature) as daily_sum by department, _time
| apply my_model

Currently it is showing all outliers.
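The apply output doesn't say which side an outlier fell on, but you can reconstruct it: keep only flagged rows whose value sits above a per-group center such as the median. A version-agnostic sketch; IsOutlier(daily_sum) is the flag field DensityFunction emits for this input field:

index=my_index
| bin _time span=1d
| stats sum(numerical_feature) as daily_sum by department, _time
| apply my_model
| eventstats median(daily_sum) as dept_median by department
| where 'IsOutlier(daily_sum)'=1 AND daily_sum > dept_median

Recent MLTK releases also accept one-sided threshold parameters on DensityFunction, which would push the filtering into the model itself; check your MLTK version's DensityFunction documentation before relying on that.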