All Posts



@SN1  The _introspection index collects information about the systems running Splunk and gives you more data to help diagnose Splunk performance issues. There are details about what data is collected in About Splunk Enterprise platform instrumentation - Splunk Documentation. For example, if you want to see CPU and memory utilization per search execution, with relevant information such as which user executed it:

index=_introspection host=* source=*/resource_usage.log* component=PerProcess data.process_type="search"
| stats latest(data.pct_cpu) AS resource_usage_cpu latest(data.mem_used) AS resource_usage_mem by data.pid, _time, data.search_props.type, data.search_props.mode, data.search_props.role, data.search_props.user, data.search_props.app, data.search_props.sid

You may also find useful information in What does platform instrumentation log? - Splunk Documentation or Introspection endpoint descriptions - Splunk Documentation.
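As a follow-on sketch (a suggestion, not part of the original answer — field names are the ones documented for resource_usage.log, so adjust for your version), the same per-process data can be aggregated per user and app to find the heaviest searchers:

```
index=_introspection source=*/resource_usage.log* component=PerProcess data.process_type="search"
| stats max(data.pct_cpu) AS peak_cpu avg(data.pct_cpu) AS avg_cpu max(data.mem_used) AS peak_mem_mb by data.search_props.user, data.search_props.app
| sort - peak_cpu
```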
Don't add to or touch the Python libraries that ship with Splunk as they will be replaced with each upgrade.  Put the required libraries (that Splunk doesn't provide) in your app.  This is the way.
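The usual pattern, sketched below under the assumption of an app named myapp (all names here are illustrative, not from the original post), is a small path shim at the top of the scripted input so the vendored copies in bin/lib are found before anything Splunk ships:

```python
import os
import sys

# Hypothetical layout: libraries vendored under <your app>/bin/lib,
# e.g. $SPLUNK_HOME/etc/apps/myapp/bin/lib/cryptography/...
# Prepend that folder so the bundled copies win over Splunk's own libraries.
lib_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "lib")
if lib_dir not in sys.path:
    sys.path.insert(0, lib_dir)

# import cryptography  # would now resolve from the bundled copy, if present
```

To populate bin/lib you can use something like `pip install --target=$SPLUNK_HOME/etc/apps/myapp/bin/lib cryptography`, built for the same platform and the same Python major version that Splunk's interpreter uses.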
@tolgaakkapulu  Please verify whether the `otx` index has been created on both the indexers and the heavy forwarder; if it hasn't, create it. In some cases data is fetched successfully, but if the index doesn't exist the events are discarded. If you're using a single standalone Splunk instance, create the index only on that instance. To verify that the OTX add-on is functioning correctly, check the internal logs by running the following search on the search head: index=_internal *otx*
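If the index turns out to be missing, a minimal indexes.conf stanza would look like the sketch below (the paths shown are the conventional defaults; adjust for your storage layout):

```
# indexes.conf — on the indexer(s), or on the standalone instance
[otx]
homePath   = $SPLUNK_DB/otx/db
coldPath   = $SPLUNK_DB/otx/colddb
thawedPath = $SPLUNK_DB/otx/thaweddb
```

Restart Splunk (or reload the index configuration) after adding it.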
Hello, after completing all the installation steps and the key integration on the AlienVault OTX side in the forwarder's Splunk interface, I see that the index=otx query result is empty. I could not find any errors. What could be the reasons for the otx index being empty? Can you help me with this?
This is the search:

| rest /services/server/status/partitions-space splunk_server=*
| eval free = if(isnotnull(available), available, free)
| eval usage_TB = round((capacity - free) / 1024 / 1024, 2)
| eval free = round(free / 1024 / 1024, 2)
| eval capacity_TB = round(capacity / 1024 / 1024, 2)
| eval pct_usage = round(usage_TB / capacity_TB * 100, 2)
| table splunk_server, usage_TB, capacity_TB, free

(Note the pct_usage eval must use usage_TB and capacity_TB; the original used an undefined field named usage.) It gives the disk usage of the Splunk servers. Can this be implemented using the _introspection index as well?
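It may well be possible with _introspection; here is a hedged sketch assuming the disk_objects.log Partitions data documented for platform instrumentation (sourcetype and field names can vary by version, so verify against your instance):

```
index=_introspection sourcetype=splunk_disk_objects component=Partitions
| stats latest(data.capacity) AS capacity_mb latest(data.available) AS available_mb by host, data.mount_point
| eval usage_TB = round((capacity_mb - available_mb) / 1024 / 1024, 2)
| eval capacity_TB = round(capacity_mb / 1024 / 1024, 2)
| eval pct_usage = round((capacity_mb - available_mb) / capacity_mb * 100, 2)
| table host, data.mount_point, usage_TB, capacity_TB, pct_usage
```

Unlike the `| rest` approach, this works historically (the data is indexed), so you can trend usage over time.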
Hi @whitefang1726  Your deployment server should be sized based on the number of clients. Anything over 50 clients requires a dedicated server, with recommended specs of 12 CPU cores and 12 GB RAM. It is worth checking the following docs, which cover the sizing requirements for your environment: https://docs.splunk.com/Documentation/Splunk/9.4.1/Updating/Calculatedeploymentserverperformance   Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing.
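To check where you sit against that 50-client threshold, a quick sketch run on the deployment server itself (using the deployment clients REST endpoint; treat the exact output fields as version-dependent):

```
| rest /services/deployment/server/clients splunk_server=local
| stats count AS deployment_clients
```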
Hi @secure , are you sure about the field names? You used two different names for each of them (ostype and OS_type), but maybe it's a mistyping. Anyway, check the field names. Then check the values of os_version: if you use the "<" operator, they must be numeric. Ciao. Giuseppe
Hi @whitefang1726 , The Deployment Server is a stand-alone Splunk server, so it requires at least 12 CPUs and 12 GB RAM. If it has few clients to manage you can use reduced requirements, and if it has to manage many clients, you may need more resources. Ciao. Giuseppe
Hello guys, I have an existing deployment server and I'm reviewing the average network bandwidth of the server. That would help me before migrating the server to a new box. Any thoughts? Thanks!
Hello guys, how to add cryptography or other python lib to Splunk python own environment for scripted input on HF? Preferred solution is to put beside my app in etc/apps/myapp/bin/ folder. Thanks ... See more...
Hello guys, how do I add cryptography or another Python library to Splunk's own Python environment for a scripted input on a HF? My preferred solution is to put it beside my app in the etc/apps/myapp/bin/ folder. Thanks for your help!
I have a query where I'm generating a table with columns ostype, osversion and status. I need to exclude anything below version 12 for Solaris and SUSE. I'm using the command below and it works, but it is not an efficient way:

| search state="Installed"
| search NOT (os_type="solaris" AND os_version < 12)
| search NOT (os_type="*suse*" AND os_version < 12)

I was trying to use the command below instead:

| search state="Installed" NOT ((os_type="solaris" AND os_version < 12) OR (os_type="*suse*" AND os_version < 12))

and it's not working. Any suggestions?
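One likely culprit is that os_version is being compared as a string, and the wildcard plus NOT combination behaves differently inside a single search clause. A hedged rewrite using where with an explicit numeric conversion (this assumes os_version values are plain numbers; if they contain suffixes like "12 SP1", tonumber returns null and those rows would be filtered out too):

```
| search state="Installed"
| where NOT ((os_type="solaris" OR like(os_type, "%suse%")) AND tonumber(os_version) < 12)
```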
Hi all, I'm hoping someone could help with refining an SPL query to extract escalation data from Mission Control. The query is largely functional (feel free to borrow it), but I am encountering a few issues:

- Status Name field: this field, intended to provide the status of the incident (with a default value if not specified), is currently returning blank results.
- Summary and Notes fields: these fields are returning incorrect data, displaying random strings instead of the expected information.
- Escalation priority: the inclusion of the "status" field was an attempt to retrieve the escalation priority, but it is populating with a random field that does not accurately reflect the case priority (1-5).

I also tried to use the mc_investigations_lookup table, but this too doesn't display the current case status or priority. Any guidance or support in resolving these issues would be greatly appreciated. SPL:

| mcincidents
| `get_realname(creator)`
| fieldformat create_time=strftime(create_time, "%c")
| eval _time=create_time, id=title
| `investigation_get_current_status`
| `investigation_get_collaborator_count`
| spath output=collaborators input=collaborators path={}.name
| sort -create_time
| eval age=tostring(now()-create_time, "duration")
| eval new_time=strftime(create_time,"%Y-%m-%d %H:%M:%S.%N")
| eval time=rtrim(new_time,"0")
| table time, age, status, status_name, display_id, name, description, assignee, summary
Hi, According to the docs, 8.1.x reached End of Life (EOL) on Apr 19 2023, and no official support (including P3 level) is provided, except for Universal Forwarders, which are supported until Oct 22 2025. Customers are expected to upgrade to a supported version before EOL to continue receiving any support. Splunk's support policy defines EOL as the date after which all technical support, security fixes, and maintenance cease. The Splunk Software Support Policy details this for customers: https://www.splunk.com/en_us/legal/splunk-software-support-policy.html If this is a deal breaker for the customer then I would suggest escalating internally.
Hi @livehybrid  The interval is set to every 60 minutes, and there are no errors logged in the internal index.
Thanks @ITWhisperer for your quick answer. addtotals will give the total of the 3 columns for each row, while in this case only the total of the last two columns is needed. Any workaround? Besides, transposing adds a new row at the top, while I want the second row to be the first one (the header) of the table. Any idea? Thanks
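For what it's worth, addtotals accepts an explicit field list, so it can be restricted to specific columns; and transpose has a header_field option that promotes a field's values to column headers. A sketch, where col2, col3 and name are placeholders for your real field names:

```
... | addtotals fieldname=Total col2 col3

... | transpose 0 header_field=name column_name=metric
```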
A customer would like to renew a perpetual contract from July 2025 to July 2026. But the version they are using now is 8.1.2, and P3 support will reach EOL after 12 May 2026. The customer is asking what the support level or details will be after this EOL date?
Hi @Sultan77 , if you have ES 7.x, you have to flag all the events and add them to the same investigation. I don't have an ES 8.x instance to guide you in this case. Ciao. Giuseppe
Have you tried using addtotals?
Hello guys, I'm trying to get the following table: I have the following fields in my index: ip, mac, lastdetect (a timestamp) and user_id. Below is what I have tried so far. When I transpose I get the following: I'm a bit stuck. Can anyone help me achieve my goal (a table similar to the first table above)? Thanks
Dear @gcusello  Can you explain how to group more than one finding in one investigation?