All Posts

This is the search: | rest /services/server/status/partitions-space splunk_server=* | eval free = if(isnotnull(available), available, free) | eval usage_TB = round((capacity - free) / 1024 / 1024, 2) | eval free = round(free / 1024 / 1024, 2) | eval capacity_TB = round(capacity / 1024 / 1024, 2) | eval pct_usage = round(usage_TB / capacity_TB * 100, 2) | table splunk_server, usage_TB, capacity_TB, free. It gives the disk usage of the Splunk servers. Can this be implemented using the _introspection index as well? (Note: the original pct_usage eval referenced a field named usage, which is never created; it is corrected above to use usage_TB and capacity_TB.)
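A possible sketch of the _introspection equivalent: disk metrics are recorded by platform instrumentation under sourcetype splunk_disk_objects with component=Partitions. The data.* field names below follow the introspection schema but may vary by Splunk version, so treat this as a starting point rather than a confirmed answer.

```spl
index=_introspection sourcetype=splunk_disk_objects component=Partitions
| stats latest(data.capacity) AS capacity latest(data.available) AS available BY host, data.mount_point
| eval usage_TB = round((capacity - available) / 1024 / 1024, 2)
| eval capacity_TB = round(capacity / 1024 / 1024, 2)
| eval free_TB = round(available / 1024 / 1024, 2)
| eval pct_usage = round((capacity - available) / capacity * 100, 2)
| table host, data.mount_point, usage_TB, capacity_TB, free_TB, pct_usage
```

Unlike | rest, this searches indexed data, so it works historically and does not require the search head to reach every instance at search time.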
Hi @whitefang1726 Your deployment server should be sized based on the number of clients. Anything over 50 clients requires a dedicated server, with recommended specs of 12 CPU cores and 12GB RAM. It would be worth checking the following doc, which covers the sizing requirements based on your environment: https://docs.splunk.com/Documentation/Splunk/9.4.1/Updating/Calculatedeploymentserverperformance Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @secure, are you sure about the field names? You used two different names for each of them (ostype and OS_type), but maybe it's a mistyping. Anyway, check the field names. Then check the value of os_version: if you use the "<" operator, the value must be numeric. Ciao. Giuseppe
Hi @whitefang1726, the Deployment Server is a stand-alone Splunk server, so it requires at least 12 CPUs and 12 GB RAM. If it has few clients to manage you can use reduced requirements, and if it has to manage many clients, you could use more resources. Ciao. Giuseppe
Hello Guys, I have an existing deployment server and I'm reviewing the average network bandwidth of the server. That could help me before migrating the server into a new box. Any thoughts? Thanks!
Hello guys, how do I add cryptography or another Python library to Splunk's own Python environment for a scripted input on a HF? My preferred solution would be to put it beside my app in the etc/apps/myapp/bin/ folder. Thanks for your help!
I have a query generating a table with columns ostype, osversion and status. I need to exclude anything below version 12 for Solaris and SUSE. I'm using the command below and it works, but it is not an efficient way: | search state="Installed" | search NOT (os_type="solaris" AND os_version < 12) | search NOT (os_type="*suse*" AND os_version < 12) I was trying to use the command below: | search state="Installed" NOT ((os_type="solaris" AND os_version < 12) OR ((os_type="*suse*" AND os_version < 12)) and it is not working. Any suggestions?
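One possible sketch for combining the filters, assuming the field names from the post (os_type, os_version): the where command evaluates boolean expressions directly, and tonumber() guards against os_version being indexed as a string. Note that where does not support search-style wildcards, so the *suse* match becomes like().

```spl
| search state="Installed"
| where NOT ((os_type="solaris" AND tonumber(os_version) < 12)
         OR (like(os_type, "%suse%") AND tonumber(os_version) < 12))
```

If os_version contains non-numeric strings (e.g. "11 SP4"), tonumber() returns null and the comparison is false, so those rows are kept; add an explicit isnull() branch if that is not the behaviour you want.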
Hi all, I'm hoping someone could help with refining an SPL query to extract escalation data from Mission Control. The query is largely functional (feel free to steal borrow it), but I am encountering a few issues: Status Name Field: this field, intended to provide the status of the incident (with a default value if not specified), is currently returning blank results. Summary and Notes Fields: these fields are returning incorrect data, displaying random strings instead of the expected information. Escalation Priority: the inclusion of the "status" field was an attempt to retrieve escalation priority, but it is populating with a random field that does not accurately reflect the case priority (1-5). I also tried to use the mc_investigations_lookup table, but this too doesn't display the current case status or priority. Any guidance or support in resolving these issues would be greatly appreciated. SPL: | mcincidents | `get_realname(creator)` | fieldformat create_time=strftime(create_time, "%c") | eval _time=create_time, id=title | `investigation_get_current_status` | `investigation_get_collaborator_count` | spath output=collaborators input=collaborators path={}.name | sort -create_time | eval age=toString(now()-create_time, "duration") | eval new_time=strftime(create_time,"%Y-%m-%d %H:%M:%S.%N") | eval time=rtrim(new_time,"0") | table time, age, status, status_name, display_id, name, description, assignee, summary
Hi According to the docs, 8.1.x reached End of Life (EOL) on Apr 19 2023; no official support (including P3 level) will be provided, except for Universal Forwarders, which are supported until Oct 22 2025. Customers are expected to upgrade to a supported version before EOL to continue receiving any support. Splunk's support policy defines EOL as the date after which all technical support, security fixes, and maintenance cease. The Splunk Software Support Policy details this for customers: https://www.splunk.com/en_us/legal/splunk-software-support-policy.html If this is a deal breaker for the customer, then I would suggest escalating internally.
Hi @livehybrid The interval is set to every 60 minutes and there are no errors logged in the _internal index.
Thanks @ITWhisperer for your quick answer. addtotals will give the total of the 3 columns for each row, while in this case only the total of the last two columns is needed. Any workaround? Besides, transposing adds a new row at the top, while I want the second row to be the first one (the header) of the table. Any idea? Thanks
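A possible sketch for both points, using placeholder field names (second_col, third_col, row_label) since the actual columns are only shown in the screenshots: addtotals accepts an explicit field list, so it can sum just the last two columns, and transpose can take its column headers from an existing field via header_field instead of adding a generated header row.

```spl
| addtotals fieldname="Total" second_col third_col
| transpose 0 header_field=row_label column_name=field
```

With a field list, addtotals only adds the named fields into Total and leaves the first column out of the sum; header_field makes the values of row_label become the column headers after the transpose.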
Customer would like to renew a perp contract from July 2025 to July 2026. But the version they are using now is 8.1.2, and P3 support will reach EOL after 12 May 2026. The customer is asking what the support level or details are after this EOL date.
Hi @Sultan77, if you have ES 7.x, you have to flag all the events and add them to the same investigation. I don't have an ES 8.x instance to guide you in this case. Ciao. Giuseppe
Have you tried using addtotals?
Hello Guys, I'm trying to get the following table: I have the following fields in my index: ip, mac, lastdetect (timestamp) and user_id. Below is what I have tried so far: When I transpose I get the following: I'm a bit stuck. Can anyone help me achieve my goal (getting a table similar to the first table just above) ? Thanks 
Dear @gcusello  Can you explain how to group more than one finding in one investigation? 
Hi @Sultan77, this means that many Correlation Searches (or Detections, from 8.x) triggered events. It isn't a good idea to group different Detections in one alert. Anyway, the only solution is to disable Notable (or Finding) creation and use only Risk Score, then use a Finding-Based Detection to have only one Finding containing all the others. In addition, you can group more Findings in one Investigation. Ciao. Giuseppe
Good day everyone, I've built multiple use cases through correlation searches. The concern here is that I am getting multiple alerts for the same case. How can I set it to give only one alert containing all the data? A screenshot can explain more:
Thanks @richgalloway & @isoutamo. I did expand those searches and used the same to clone the required panels under my app; the only lookup I needed to clone was reserved_cidrs.csv. Thank you.
Hello, I am facing an issue when a saved report is used in a Simple XML dashboard via | loadjob savedsearch="madhav.d@xyz.com:App_Support:All Calls" My time zone preference is (GMT+01:00) Greenwich Mean Time: London, and the report I am referring to (All Calls) is also created by me and runs every 15 mins. Now, when I use this report in a Simple XML dashboard, it only provides data as of an hour ago. Example: when the report runs at 08:00 AM and I check the dashboard at 08:05 AM, it shows report results for the 07:00 AM run and not the latest. I suspect this is due to the recent daylight saving time changes in the UK. Can someone please help with how I should handle this? Thank you. Regards, Madhav