All Posts



Hi @SN1  This is because the values from the endpoint are in MB but are being divided by 1024 twice in this search, hence they end up in TB. Try switching "1024/1024" for just "1024" in each occurrence and see if that resolves it for you. Will
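As a minimal sketch of that fix (assuming, as described above, the endpoint returns values in MB; field names are taken from the related partitions-space search in this thread), each conversion divides by 1024 only once:

```spl
| rest splunk_server=* /services/server/status/partitions-space
| eval free = if(isnotnull(available), available, free)
| eval capacity_gb = round(capacity / 1024, 2)   /* MB -> GB, one division */
| eval free_gb     = round(free / 1024, 2)
```

Dividing by 1024 a second time would convert GB to TB, which is why the numbers looked 1024x too small.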
Hey @Osama_Abbas1  Thanks for letting us know how you resolved it! Good luck with your future AppD work! Will
This variation of the partition-space search previously mentioned might work for you; this particular search is from within the Monitoring Console, so it should work. In fact, for me it does return the correct space for the "/" mount point.

| rest splunk_server=* /services/server/status/partitions-space
| eval free = if(isnotnull(available), available, free)
| eval usage = round((capacity - free) / 1024, 2)
| eval capacity = round(capacity / 1024, 2)
| eval compare_usage = usage." / ".capacity
| eval pct_usage = round(usage / capacity * 100, 2)
| stats first(fs_type) as fs_type first(compare_usage) AS compare_usage first(pct_usage) as pct_usage by mount_point
| rename mount_point as "Mount Point", fs_type as "File System Type", compare_usage as "Disk Usage (GB)", pct_usage as "Disk Usage (%)"

I think I've had issues in the past where people haven't implemented this endpoint correctly and haven't split the stats by mount_point, which can mean you get broken stats based on a number of different mount points! Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
If you do decide to go down the route of using the Linux TA, then just enable what you need (i.e. the disk monitoring), as it can become very busy very quickly since there are lots of different inputs! Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
When I run this search it gives 16 GB as total_GB, while our total size is 16 TB.
Hi @Praz_123  You could try running:

| rest /services/server/info
| table serverName, server_roles, status

This query uses the rest command to access the Splunk REST API endpoint that provides information about the servers in your Splunk deployment. The server_roles field will help you identify which servers are indexers, and the status field will show their current status. If you want to specifically check the health of your indexers, you can use the following query:

| rest /services/server/status/health/overview
| search title="Indexer"
| table title, health, message

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Hi @SN1  I think this might have been mentioned on the previous thread about disk space that you raised, but I think you should probably look to implement the Splunk Add-on for Unix and Linux which has Disk monitoring. https://splunkbase.splunk.com/app/833 Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will  
Hi @lar06  Please can you confirm that you are using 9.3.2 on both the source and destination Splunk instances, and that data is not being sent via any non-supported Splunk versions or external systems? Have you made any recent changes to the architecture? Is either the client/source or destination server running hot (i.e. high CPU)? Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Hello. Using Splunk 9.3.2. What does this error mean?

ERROR TcpOutputFd [ TcpOutEloop] - Expecting to be in eWaitCapabilityResponse, found itself in SendCapability

Thanks
We took a few steps back and looked at how the config files worked. It seemed as if the contents of the different config types were virtually merged (each type of config with its own kind). Therefore we reasoned that we could use the settings we were setting up via the GUI for the forwarder, and use our outputs.conf from the app to add/override the settings we needed, and it turned out that this approach works! So now we have the possibility to set up the forwarding via the Web UI, and also have those settings augmented with our own extra settings. This seems to solve our initial problem.
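As a rough sketch of what that layering can look like (the app name and settings here are hypothetical, not taken from the original post), an app-level outputs.conf only needs the extra settings; Splunk merges them with the stanzas the Web UI writes:

```spl
# etc/apps/our_app/local/outputs.conf  (hypothetical app name)
# The server list configured via the Web UI lives in another outputs.conf;
# these stanzas only add/override the settings we need.

[tcpout]
useACK = true

[tcpout:default-autolb-group]
# default-autolb-group is the group name the Web UI typically creates
maxQueueSize = 7MB
```

At runtime, `splunk btool outputs list --debug` shows the merged result and which file each setting came from, which is a handy way to verify the layering behaves as expected.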
Hello, I ran df -h on an indexer and got the partition sizes. Now I want the total, used and available space, but using SPL. In this case the total will be 16 TB, used will be 12 TB and available will be 5 TB if we sum all the spaces. How can I get this result using SPL?
Is there any query to check whether an indexer's status is down, up or unknown? I can check in the Monitoring Console, but I need a query to see it for all indexers.
I am looking for a document to integrate Cisco Cyber Vision with Splunk.
We are referring to the following site for the setup: Cisco Security Cloud App for Splunk User Guide - Configure Cisco Products in Cisco Security Cloud App [Support] - Cisco https://www.cisco.com/c/en/us/td/docs/security/cisco-secure-cloud-app/user-guide/cisco-security-cloud-user-guide/m_configure_cisco_products_in_cisco_security_cloud.html

In the "Configure an Application" chapter and the "Cisco Duo > Procedure" section, does the "Input Name" refer to the value set in the Duo Admin Console under "Admin API > Settings > Name"? Do these values need to be consistent between Splunk and Duo?

I understand that the configuration settings in Cisco Security Cloud can be reused from the Duo Splunk Connector. Is this correct? Or are there any configuration items that have changed from the Duo Splunk Connector?

Currently, we are setting permissions according to step 4 of the "First Steps" on the following site: https://duo.com/docs/splunkapp#first-steps . Should we apply the same settings when implementing Cisco Security Cloud?

Are the server system requirements for implementing Cisco Security Cloud the same as those for the Duo Splunk Connector? If there are differences, could you please provide detailed system requirements?
This document explains ssl_reload for all ports except 9998, the data receiving port on the indexer: https://docs.splunk.com/Documentation/Splunk/9.3.2/Security/RenewExistingCerts Is there a separate endpoint to reload the indexer's receiving thread so that it picks up the renewed cert? - Pradeep
I cannot seem to get the OTEL API key. Whenever I try to generate it in the OTEL section and press "Generate Access Key", the key never shows up and the button reappears shortly after. It looks like the server's response is "204 No Content". I confirmed this by using the API (GET /controller/restui/otel/getOtelApiKey) and seeing the same response (204 No Content). Is it not possible to generate an OTEL key on free trial accounts?
It was suggested that I replace searchmatch() with match().

| makeresults count=1
| eval _raw="ONE TWO THREE"
| eval result=if(searchmatch("ONE THREE"), 1, 0)
| eval garbage=if(searchmatch("ONE WHATEVER THREE"), 1, 0)
| eval matching=if(match(_raw, "ONE THREE"), 1, 0)

It seems more readable than escaped quotation marks.
The original problem was discovered with a Splunk alert that had all the proper fields. I created this small script just to demonstrate the issue without needing real data.
Hi, in the SPL query, the rename mem_used as "Memory Usage (KB)" should say MB instead.
Here are a couple of links to other posts here:
https://community.splunk.com/t5/Getting-Data-In/JSON-field-extraction-beyond-5000-chars/m-p/549963
https://community.splunk.com/t5/Getting-Data-In/Missing-events-JSON-payload-and-indexed-extractions/m-p/489113

If you start changing limits.conf - which is not simple with Cloud - it will affect general settings, so it is not always the best way to go. If you have a field that is not extracted and it's a simple field - i.e. a single value inside a JSON object with no multivalue component - then a simple calculated field can work, i.e.

| eval type=spath(_raw, "data.tree.fruit.type")

In the conf just use the spath... part for the eval definition.
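As a hedged sketch of that last step (the sourcetype name is a placeholder; the JSON path is the one from the example above), the calculated field definition would go in props.conf like this:

```spl
# props.conf  (sourcetype name is hypothetical)
[my_json_sourcetype]
# Calculated field: only the expression after "eval type=" goes here
EVAL-type = spath(_raw, "data.tree.fruit.type")
```

Calculated fields run at search time, so this avoids touching limits.conf entirely and only affects events of that sourcetype.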