All Posts

Maybe you can share sample search results (full text, anonymized as needed) and let us know which XML nodes represent "entities"? Is there anything wrong with the search you posted? If the search produces what you need, what kind of change do you want? Based on the posted search, I suspect there is no inherent structure in the search results that tells the user which key-value pairs make up an "entity". In other words, the so-called "entity" is a construct invented by whoever wrote this search. Without knowing the actual data structure, it is fruitless for volunteers to attempt simplification.
How can we check if there is any throttling in Splunk when ingesting events via the AWS Kinesis add-on? What metrics are available for this add-on?
Your search should have given you the results. Did anything unexpected happen when you ran it? The most I can think of is to restrict it to scheduled, enabled searches:

| rest /services/saved/searches
| search eai:acl.app=myapp eai:acl.sharing=app is_scheduled=1 disabled=0
| fields eai:acl.owner eai:acl.app eai:acl.sharing search title cron_schedule description
Splunk Dashboard Studio: show a message or icon for a pie chart that returns no data. I am looking to display an icon or message in place of the grey pie image on a dashboard when a pie chart search returns no results.

index=test (EventCode=4010 OR EventCode=4011)
| stats latest(EventCode) as latest_event_code by Site
| eval Site=upper(Site)
| where latest_event_code=4010

I have been trying appends like the following:

| stats count | eval NoResult="0" | where count=0 | appendpipe [stats count | eval NoResult="0" | eval test="test Message"]
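One possible sketch (an assumption, not a confirmed fix): since stats count on an empty result set still returns one row with count=0, appendpipe can append a single placeholder row when the base search is empty, so the pie renders one labeled slice instead of the grey placeholder image. The label text here is hypothetical:

```
index=test (EventCode=4010 OR EventCode=4011)
| stats latest(EventCode) as latest_event_code by Site
| eval Site=upper(Site)
| where latest_event_code=4010
| appendpipe
    [ stats count
      | where count=0
      | eval Site="No matching events", latest_event_code="-"
      | fields - count ]
```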
Any solution for this issue?
Hi Splunkers, I have a doubt about license consumption. I'm not asking how to calculate daily ingestion and/or license consumption in a Splunk environment; the community is full of topics about this, and I have a search I use when no Monitoring Console is configured. The point is the following: on one LM, I have 3 different environments, each with its own set of SHs, indexers, and so on. The only "point of contact" is the LM itself. Schematically:

Env A (SHs, IDX cluster, other hosts) ---> LM "X"
Env B (SHs, IDX cluster, other hosts) ---> LM "X"
Env C (SHs, IDX cluster, other hosts) ---> LM "X"

The question is: what if I have to search daily license consumption for only one of the above environments? For example, I want to calculate license consumption only for Env A. My first thought was that I have two options:

1. Use the MC
2. Use my search on _internal logs, based on license consumption data, and specify, as the index parameter, only the subset of indexes for the desired environment.

PROBLEM: the environments do not have totally distinct indexes. For example, the index "linux_audit" exists in all 3 environments, so I cannot differentiate the clusters based on their indexes alone.
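A minimal sketch of the second option that sidesteps the overlapping-index problem, assuming each environment has its own license pool (the pool name below is hypothetical). If pools are shared too, the `h` (host) field in license_usage.log can be matched against a per-environment host lookup instead:

```
index=_internal source=*license_usage.log* type="Usage" pool="pool_env_A"
| timechart span=1d sum(b) as bytes
| eval GB=round(bytes/1024/1024/1024, 2)
| fields _time GB
```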
The best way is to send your Microsoft Entra ID (formerly Azure AD) data to an event hub.  Then, use the Splunk Add-on for Microsoft Cloud Services to ingest the data (hint: use the azure:monitor:aad sourcetype).  Here's a Lantern article for setting up the add-on => https://lantern.splunk.com/Data_Descriptors/Microsoft/Getting_started_with_Microsoft_Azure_Event_Hub_data   Alternatively, you can use Splunk Add-on for Microsoft Azure.  Use the "Azure Active Directory Interactive Sign-ins" input to get the data.  Depending on your environment size, you may hit some throttling limitations with the REST API this add-on uses => https://github.com/splunk/splunk-add-on-microsoft-azure/wiki/Configure-Azure-Active-Directory-inputs-for-the-Splunk-Add-on-for-Microsoft-Azure#throttling-guidance
Install the Splunk Add-on for Microsoft Cloud Services and configure the Azure Resource input.  Choose "Snapshot Data" as the resource type (see screenshot).  
Install the Splunk Add-on for Microsoft Cloud Services and configure the Azure Resource input.  Choose "Disk Data" as the resource type (see screenshot). Then, you can use this search to find unattached (orphaned) disks: index=main sourcetype="mscs:resource:disk" properties.diskState="unattached"    
@snowee We recommend that you raise a support ticket, and refer to the resources below for more information:

Solved: splunkd using too much RAM - Splunk Community
Troubleshooting high resource usage in Splunk Enterprise - Splunk Lantern
Limit search process memory usage - Splunk Documentation
The first thing would be to change the simple

| stats count by country

to

| timechart span=1d count by country

This will give you a separate count for each day and each country. Now you can either use

| timewrap 1day

to get a not-very-pretty vector that isn't nice to work with, or (which is what I'd probably do) use

| transpose 0

to get a list of fields called "row 1", "row 2" (and possibly more if you had more days in your search window), from which you can calculate your delta.
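Putting the steps above together, a sketch of the transpose approach over a two-day window (the "row 1"/"row 2" column names are the transpose defaults and may differ depending on your time range):

```
index=data sourcetype=access
| iplocation allfields=true ip
| search country!="United States"
| timechart span=1d count by country
| transpose 0 column_name=country
| rename "row 1" as yesterday, "row 2" as today
| eval delta=today-yesterday
```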
Hi team, I am using AppDynamics SaaS version 23.11 and monitoring my on-premise servers and applications. Some volumetric data on the agents used:

Agent            Prod   DTA
Machine Agent    4612   3211
App Agent         884    414
DB Agent           13     10
Analytics Agent    12     14

I would like to know the amount of traffic sent from my on-premise AppD agents (machine, app, DB, and analytics) to the controller. If there is a way to get those numbers (not expecting exact figures; an approximation is fine), please let me know. Please note, we are not using any proxy between the agents and the controller.
Can I retrieve the list of alerts shared at the app level? Is it possible?

| rest /services/saved/searches
| search eai:acl.app=my_app eai:acl.sharing=app
| fields eai:acl.owner eai:acl.app eai:acl.sharing search title cron_schedule description
Sorry, I should have been a bit clearer. Here is the search that gives me the total number of hits to my website on any given day from a specific country. For example, this search might return:

Canada 10
Mexico 30

index=data sourcetype=access ip="*"
| iplocation allfields=true ip
| where country!="United States"
| stats count by country

I would like to set up a search to show me if traffic from any given country drops by 10% or more, and list the countries that have the drop in traffic. Thanks
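One way to sketch the drop detection: count per country per day, then compare each day's count to the previous day with streamstats. Field names follow the search above; thresholds and time range are assumptions to adjust:

```
index=data sourcetype=access ip="*"
| iplocation allfields=true ip
| search country!="United States"
| bin _time span=1d
| stats count by country _time
| sort 0 country _time
| streamstats current=f window=1 last(count) as prev_count by country
| eval pct_drop=round((prev_count-count)/prev_count*100, 1)
| where pct_drop>=10
| table country _time prev_count count pct_drop
```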
Hello! We actually noticed different results in two dashboard panels.
1. In the first, we used the fields command to specify the fields we needed to work with, then applied a count.
2. In the second, the same query was used with the table command instead of fields, and then a count was applied.
We noticed different counts; query number 2 gave a correct and complete result. Can someone please explain the difference between the table and fields commands, and why fields seems to give missing results? Thank you
You probably need to use external scripting, such as Python, to solve your algorithmic processing needs, as they fall outside simple text pattern matching. Splunk itself is designed for data retrieval, aggregation, and general text operations, which I would consider its typical use case.
I do have data in my _internal index for much longer than that, at least from sourcetype splunkd.
Hi @altink, the issue isn't in the dashboard; that part is correct. The issue is that the data retention on _internal is less than 60 days. If you want a 60-day report, you have to lengthen the _internal retention. Ciao. Giuseppe
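To verify the current retention setting on _internal, a quick sketch using the REST endpoint for index configuration (run on the instance in question; splunk_server=local restricts it to the local node):

```
| rest /services/data/indexes splunk_server=local
| search title="_internal"
| eval retention_days=round(frozenTimePeriodInSecs/86400)
| table title frozenTimePeriodInSecs retention_days
```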
Hi Folks,

I am trying to get a Splunk response from Java using the method below:

----------------
public String executeSearch(String searchQuery) throws IOException {
    //String apiUrl = hostName + "/__raw/services/search/jobs/export?search=" + URLEncoder.encode(searchQuery, "UTF-8").replace("+", "%20");
    String apiUrl = hostName + "/__raw/services/search/jobs/export?search="
            + URLEncoder.encode(searchQuery, "UTF-8")
                .replace("+", "%2B")
                .replace("%3D", "=")
                .replace("%20", "+")
                .replace("%2A", "*")
                .replace("%3F", "?")
                .replace("%40", "@")
                .replace("%2C", ",");
    URL url = new URL(apiUrl);
    System.out.println("Value of Splunk URL is " + url);
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setRequestMethod("GET");
    String credentials = userName + ":" + password;
    String encodedCredentials = Base64.getEncoder().encodeToString(credentials.getBytes());
    connection.setRequestProperty("Authorization", "Basic " + encodedCredentials);
    StringBuilder response = new StringBuilder();
    try (BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()))) {
        String inputLine;
        while ((inputLine = in.readLine()) != null) {
            System.out.println("Response Line: " + inputLine); // Print each line of the response
            response.append(inputLine);
        }
    }
    return response.toString();
}

public static void main(String[] args) {
    if (args.length < 10) {
        System.out.println("Insufficient arguments provided. Please provide all required arguments.");
        System.exit(1); // Exit with error code 1
    }
    String hostName = args[0];
    String userName = args[1];
    String password = args[2];
    String query = args[3];
    String logFileLocation = args[4];
    String fileName = args[5];
    String fileType = args[6];
    String startDate = args[7];
    String endDate = args[8];
    String time = args[9];
    try {
        SplunkRestClient client = new SplunkRestClient(hostName, userName, password);
        String searchResult = client.executeSearch(query);
        System.out.println(searchResult);
        // Write search result to file
        String filePath = logFileLocation + File.separator + fileName + "." + fileType;
        Files.write(Paths.get(filePath), searchResult.getBytes());
        // Check if file is empty
        File file = new File(filePath);
        if (file.length() == 0) {
            System.out.println("File is empty. Deleting...");
            if (file.delete()) {
                System.out.println("File deleted successfully.");
            } else {
                System.out.println("Failed to delete file.");
            }
        } else {
            // Validate file contents (assuming JSON data)
            try {
                new JSONObject(new String(Files.readAllBytes(Paths.get(filePath))));
                System.out.println("File contents are valid JSON.");
            } catch (Exception e) {
                System.out.println("File is corrupt. Deleting...");
                /*if (file.delete()) {
                    System.out.println("Corrupt file deleted successfully.");
                } else {
                    System.out.println("Failed to delete corrupt file.");
                }*/
            }
        }
    } catch (IOException e) {
        System.out.println("Error occurred while executing search: " + e.getMessage());
        System.exit(2); // Exit with error code 2
    }
}
-------------------------------

I am calling this Java class from a .bat file:

:: All Splunk host names
set host_nam=https://log01.oss.mykronos.com/en-US/app/search/search?earliest=@d&latest=now
set host_cfn=https://cfn-log01.oss.mykronos.com/en-US/app/search/search?earliest=@d&latest=now
set host_dcust=https://koss01-log01.oss.mykronos.com/en-US/app/search/search?earliest=@d&latest=now
:: Splunk user name
set username=********
:: Splunk user password
set password=********
:: Splunk search query for CAN, AUS, EUR
set query_kpi=index=*kpi* level=ERROR logger=KPI*
set query_wfm=index=*wfm* level=ERROR logger=KPI*
set file_type="JSON"
set start_date=""
set end_Date=""
set time="3600"
%JAVA_PATH% com.kronos.hca.daily.monitoring.processor.SplunkRestClient %host_nam% %username% %password% "%query_nam_kpi%" "%logFileLocation%" "%file_name_nam_kpi%" %file_type% %start_date% %end_Date% %time%
@Keerthi, you need the dedup only on the field you want to list; the other dedup isn't required. In a few words, you should run something like this:

<your_search>
| dedup Time
| sort Time
| table Time

Ciao. Giuseppe