All Posts


@ptothehil This is the resolution for me too. I downloaded it on a personal device and hashed it, and the hash was correct. When attempting to bring it onto the corporate network, the file gets corrupted because it is flagged as containing a virus.
I get the list of all sources and the total events within each source.
What do you get if you do this? index="SC_POC1" | stats count by sourcetype source
The stats command is counting events, not occurrences of status values. You need to use mvexpand to separate out the test cases so you can count the individual status values.

| spath suite.case{} output=cases
| mvexpand cases
| spath input=cases status output=Status
| spath input=cases name output=case
| spath suite.name output=suite
| spath MetaData.jobname output=Job_Name
| spath MetaData.buildnumber output=Build_Variant
| spath MetaData.JENKINS_URL output=Jenkins_Server
| stats count(eval(Status="Execution Failed" OR Status="case_Failed")) AS Failed_cases,
        count(eval(Status="Passed")) AS Passed_cases,
        count(eval(Status="Failed" OR Status="case_Error")) AS Execution_Failed_cases,
        dc(case) as Total_cases dc(suite) as "Total suite"
  by Job_Name Build_Variant Jenkins_Server
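For anyone who wants to sanity-check the mvexpand/spath pattern in isolation, here is a minimal sketch that runs against a hand-built sample event (the field names mirror the JSON shown elsewhere in this thread; the sample values are made up purely for illustration):

| makeresults
| eval _raw="{\"suite\": {\"name\": \"ABC_test\", \"case\": [{\"name\": \"foo1\", \"status\": \"Passed\"}, {\"name\": \"foo2\", \"status\": \"Execution Failed\"}]}}"
| spath suite.case{} output=cases
| mvexpand cases
| spath input=cases status output=Status
| spath input=cases name output=case
| stats count by Status

If this returns one row per distinct status with the expected counts, the same pipeline should behave the same way on the real events.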
Dear team, Is there any recommended way to index .zip files from Azure Blob Storage via the Splunk Add-on for Microsoft Cloud Services? If it is not possible directly, is there any preferred workaround to unzip them somehow? Big thanks!!!
For the record, version 9.2.2 doesn't help either.
Same issue here, using splunk-sdk for Python, four years later... any updates about these parameters?
The data I have uploaded contains 5 CSV files: Apartments.csv, Buildings.csv, Maintenance.csv, Energy Consumption.csv, and Security Events.csv. I used the Splunk web interface and the Add Data feature to upload the data. The search query used to search data within the index is index="SC_POC1".

If I search the index, it shows data from the last uploaded table by default. As in the attached screenshot, the search shows only the Energy Consumption data, even though the index "SC_POC1" contains data from all 5 CSVs. I can search for the other data, such as Apartments or Buildings, by specifying the source in the query, for example index="SC_POC1" source="Apartments.csv", but then it shows only the Apartments data.

I want to show all the data (events) in the index. For this, I tried joins on the tables so that I could search the entire data of the index, but that did not work either. I want to know if there is a better way to do this. (I am using Splunk Enterprise)
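A minimal sketch that may help here, assuming the goal is first to confirm that all five CSVs actually landed in the index (the All Time range and the source values are assumptions based on the description above):

index="SC_POC1" earliest=0
| stats count by source sourcetype

If all five sources show up, a plain index="SC_POC1" search over a wide enough time range should already return every event; no join is needed, because each CSV row is simply a separate event distinguished by its source field.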
hi @MOR09  Did you fix it?
I tried editing it from the UI and increased maxresults to 1000000, but after that I am still able to see only 50k results, not all of them. What other configurations need to be changed in order to get all the results?
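One thing that may be worth checking, offered as a hedged sketch rather than a confirmed fix: the 50k ceiling matches the default maxresultrows setting in limits.conf, which caps search results independently of the UI maxresults value (assumption: your deployment still has the default).

# limits.conf -- illustrative sketch, adjust the value for your environment
[searchresults]
maxresultrows = 1000000

Raising this has memory and disk-usage implications, so narrowing the search or exporting the results (for example via CSV export or the REST API) is often preferable to lifting the limit.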
Hi @H2ck1ngPr13sT , sorry, I confused searchmatch with match; please use the match function. Ciao. Giuseppe
Hello, What are the best practices for configuring Splunk memory and swap partition space?

Current resources: there are three indexer nodes, each with 24 cores, 64 GB of RAM, 2 TB of SSD storage, and a 10-gigabit network link. Each indexer node has 64 GB of physical memory and 8 GB of swap. The swap policy only allows swap to be used once physical memory usage exceeds 70%. The current situation is that only 1.6 GB of physical memory is used, yet swap usage is 3.8 GB.

The following is the alarm information:
[Alarm Name] system.swap.used_pct
[Warning content] The usage rate of the swap partition has reached 39.76%, and the average has exceeded the threshold of 20.0% in the past minute.

I have some questions:
1. Why is swap usage so much higher than memory usage?
2. How should memory and swap partition space be configured, and what are the best practices?
In Dashboard Studio there seems to be no limit and no next button. Also, the PDF shows all the rows, so that's one way to solve it.
Hello @shub_loginsoft , This seems to be an issue with browser cookies. You can try clearing your browser data or, as a temporary measure, use Incognito mode. Please let me know if this works!
Hi, @ITWhisperer
The events are like below:

{
  "MetaData": {
    "JENKINS_URL": "https://abc.com",
    "stagename": "ABC_CT",
    "variantname": "NEW_ABC",
    "jobname": "abc",
    "buildnumber": 29,
    "filename": "1729005933566.json"
  },
  "suite": {
    "hostname": "localhost",
    "failures": 0,
    "package": "ABC",
    "tests": 0,
    "name": "ABC_test",
    "id": 0,
    "time": 0,
    "errors": 0,
    "case": [
      { "classname": "xyz", "name": "foo1", "time": 0, "status": "Passed" },
      { "classname": "pqr", "name": "foo2", "time": 0, "status": "Passed" },
      ........
    ]
  }
}

There will be many events like this for a single project, and values such as suite and case will be repeated across those events.

index=... sourcetype=...
| spath ...
| stats count(eval(Status="Execution Failed" OR Status="case_Failed")) AS Failed_cases,
        count(eval(Status="Passed")) AS Passed_cases,
        count(eval(Status="Failed" OR Status="case_Error")) AS Execution_Failed_cases,
        dc(case) as Total_cases dc(suite) as "Total suite"
  by Job_Name Build_Variant Jenkins_Server

I use spath to extract every parameter, then use those fields in the query.
Hi @inventsekar
I got this error when using the sendemail command; that's probably because I am not an admin:
error: command="sendemail", 'rootCAPath' while sending mail to:
Thanks
Hi @inventsekar , If I uncheck "Open in new tab", it will not open a new tab and will use the current tab instead. My goal is to open a new tab, but only one, not two. Thank you
Hi @ITWhisperer  Here's the code, thank you so much:

{
  "type": "splunk.singlevalueicon",
  "options": {
    "showValue": false,
    "icon": "splunk-enterprise-kvstore://12345abcdefg"
  },
  "eventHandlers": [
    {
      "type": "drilldown.customUrl",
      "options": {
        "url": "/splunk/app/test_app/second_dashboard?form.student_token=$student_token$",
        "newTab": true
      }
    }
  ],
  "context": {},
  "showProgressBar": false,
  "showLastUpdated": false
}
This is regarding the integration between Splunk and Google Workspace. I have followed the documentation below to configure the integration, but the log data is not being ingested into the specified index in Splunk, and I cannot view the Google Workspace logs in Splunk. Additionally, there are no apparent errors after the integration setup. I would appreciate any advice or precautions to take when installing the Add-on for Google Workspace.

# Additional info
Upon checking the log files, the following errors were found. However, no 40x errors were found.

Could not refresh service account credentials because of ('unauthorized_client: Client is unauthorized to retrieve access tokens using this method, or client not authorized for any of the scopes requested.', {'error': 'unauthorized_client', 'error_description': 'Client is unauthorized to retrieve access tokens using this method, or client not authorized for any of the scopes requested.'})

# Referenced Documentation
## Installation of the Add-on for Google Workspace
https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/Installation

## Issuing Authentication Keys for Accounts Created on the Add-on for Google Workspace
https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/Configureinputs1
-> Refer to the "Google Workspace activity report prerequisites" section in the above document.

## Add-on Configuration
https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/Configureinputs2
-> Refer to the "Add your Google Workspace account information" and "Configure activity report data collection using Splunk Web" sections in the above document.

## Troubleshooting
https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/Troubleshoot
-> Refer to the "No events appearing in the Splunk platform" section in the above document.

https://community.splunk.com/t5/Getting-Data-In/Why-is-Splunk-Add-on-for-Google-Workspace-inputs-getting-401/m-p/602874
Hello, we urgently need a Splunk local disaster recovery solution and hope to receive a best-practice explanation. The existing Splunk deployment consists of 3 search heads, 1 deployer, 1 master node (cluster manager), 1 DMC, 3 indexers, and 2 heavy forwarders. In this architecture the search factor and replication factor are both 2, and there is existing indexed data. The local disaster recovery requirements are:
1. If the server room hosting the existing data center's Xinchuang SIEM system is shut down, the data must still be queryable from the disaster recovery room.
2. Shutting down the newly built disaster recovery server room must not affect the use of the existing data center's SIEM system.
3. RPO of 0 (no data loss), and RTO of recovery within 6 hours.
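Not an authoritative design, but one hedged starting point: a requirement that either room can fail while the other still serves searches is usually met in Splunk with multisite indexer clustering, where each site keeps at least one searchable copy of every bucket. Below is a minimal server.conf sketch for the cluster manager; the site names and factors are assumptions chosen to match the stated replication factor of 2, and everything here is illustrative rather than a confirmed configuration for this environment.

# server.conf on the cluster manager -- illustrative sketch only
[general]
site = site1

[clustering]
mode = manager          # "master" on older Splunk versions
multisite = true
available_sites = site1,site2
# one searchable copy per site, two copies in total
site_replication_factor = origin:1,total:2
site_search_factor = origin:1,total:2

The search heads, indexers, and forwarders would also need site assignments, forwarders would need load-balanced outputs with indexer acknowledgement to approach RPO 0 for data in flight, and the deployer/DMC roles would need a documented rebuild or standby procedure, since they are not covered by index replication.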