All Posts


Giuseppe, thank you, the rebuild option is exactly what I was looking for. I was looking in the wrong area. This resolved my issue!
Hi @mcfly227 , it depends on how you implemented the missing-clients discovery alert: if you're using the Monitoring Console alert, you only have to rebuild the forwarder asset list in [Settings > Monitoring Console > Settings > Forwarder Monitoring Setup > Rebuild forwarder asset list]. If you're using a custom search, it depends on how you perform the check; if it relies on your own lookup, you have to remove the host from that lookup. Ciao. Giuseppe
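For the custom-search case, a minimal sketch of a lookup-free missing-forwarder check, assuming the forwarders send their internal logs to _internal (the 24-hour threshold is arbitrary and would be tuned to your environment):

    | metadata type=hosts index=_internal
    | eval hours_since_last = round((now() - recentTime) / 3600, 1)
    | where hours_since_last > 24
    | table host hours_since_last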
I recently had an AD machine with a UF on it decommissioned. I also have alerts set up for missing forwarders. I cannot seem to find how to remove the UF from the HF and the main Splunk instance. Is there documentation that I am missing?
Hi @richgalloway, yes, I already tried the filldown command, but I had no success with it; I probably used it wrong. I described my use case in my reply to @PickleRick in this thread.
Hi @PickleRick , the use case is as follows. My input is JSON like this:

    { "timestamp": "2025-06-01T09:26:00.000Z", "item": "I.1", "version": "1.1.0-1" }
    { "timestamp": "2025-06-01T09:26:00.000Z", "item": "I.2", "version": "1.1.0-1" }
    { "timestamp": "2025-06-01T09:26:00.000Z", "item": "I.3", "version": "1.1.0-1" }
    { "timestamp": "2025-06-01T09:26:00.000Z", "item": "I.4", "version": "1.1.0-1" }
    { "timestamp": "2025-08-01T09:26:00.000Z", "item": "I.1", "version": "1.1.0-2" }

There are 4 items on 06/01 and one item with a newer version on 08/01. The query just counts the current version per day:

    source="..." | eval day=strftime(_time, "%Y-%m-%d") | chart count by day, version

The actual result is:

| day        | 1.1.0-1 | 1.1.0-2 |
| ---------- | ------- | ------- |
| 2025-06-01 | 4       | 0       |
| 2025-08-01 | 0       | 1       |

but what I expect is:

| day        | 1.1.0-1 | 1.1.0-2 |
| ---------- | ------- | ------- |
| 2025-06-01 | 4       | 0       |
| 2025-07-01 | 4       | 0       |
| 2025-08-01 | 3       | 1       |

Another challenge: I want to spread the result over about 60 days, and there are over 100,000 items.
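One possible approach (a sketch, not a definitive answer): build a dense daily axis per item with timechart, carry each item's last-known version forward with filldown, then flip back to rows and count. With over 100,000 items the split-by needs limit=0 and gets expensive, so this may not scale as-is:

    source="..."
    | timechart span=1d latest(version) by item limit=0
    | filldown
    | untable _time item version
    | eval day=strftime(_time, "%Y-%m-%d")
    | chart count by day, version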
Hello, we are trying to implement it without authentication and we have the same error. Here is my stanza; please advise if there is some mistake:

    [elasticsearch_json://TEST_INPUT]
    cust_source_type = elastic-msgs-sms
    date_field_name = timestamp
    elasticsearch_indice = msgs-sms-v1.0-*
    elasticsearch_instance_url = vlelasp-fe-vip.at
    index = main
    interval = 60
    port = 9200
    secret =
    time_preset = 24h
    user =
    disabled = 0

Error from the logs:

    2025-06-10 12:21:15,012 ERROR pid=1333302 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
    Traceback (most recent call last):
      File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
        self.collect_events(ew)
      File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/elasticsearch_json.py", line 96, in collect_events
        input_module.collect_events(self, ew)
      File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/input_module_elasticsearch_json.py", line 147, in collect_events
        opt_ca_certs_path = opt_ca_certs_path.strip()
    AttributeError: 'NoneType' object has no attribute 'strip'
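The traceback shows the TA calling .strip() on opt_ca_certs_path, which is None when no CA certificate path is configured. A sketch of a defensive local workaround (patching a third-party TA is at your own risk; the proper fix belongs upstream):

    # input_module_elasticsearch_json.py, near line 147:
    # guard against the option being unset instead of assuming it is a string
    opt_ca_certs_path = (opt_ca_certs_path or "").strip()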
How do I configure the AppDynamics Java agent with CCM, Travic port, and push Application? Can someone help me onboard the above-mentioned application for monitoring? I have already onboarded the Apache web services by adding the argument in catalina.sh.
So, what is the solution you propose?
HEC sources writing to the /event endpoint can provide their own set of indexed fields besides the raw event. Also, with the /event endpoint no line breaking takes place.
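For illustration, a minimal /event payload carrying indexed fields via the "fields" key (host, port, and token are placeholders):

    curl -k https://splunk.example.com:8088/services/collector/event \
      -H "Authorization: Splunk <hec-token>" \
      -d '{"event": "raw event text", "sourcetype": "my:sourcetype", "fields": {"env": "prod", "dc": "eu-west"}}'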
Also, if some search works in one app/for one user and doesn't work in another app/for another user, it's often a permissions issue.
Oooof, that's a golden shovel for you, Sir. But to the point - no. It's how Splunk works. It will allocate a single CPU for each search on the SH it runs from, as well as on each indexer taking part in the search. So the way to "add cores to the search" is to grow your environment horizontally in the indexer layer _and_ write your searches so that they use that layer properly.
The first thing to do when debugging inputs, after verifying the config, is usually checking the output of splunk list monitor and splunk list inputstatus.
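For reference, both are run via the Splunk CLI on the host doing the monitoring:

    $SPLUNK_HOME/bin/splunk list monitor      # which files and directories are being monitored
    $SPLUNK_HOME/bin/splunk list inputstatus  # per-file reader status (position, open/closed, errors)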
"...Splunk will use one core for each search" yep by default splunk will use 1 core for each search but can we adjust this limitation, let say one search can use 2 or 3 core?
Thanks, everyone, for your responses; I highly appreciate your input. I was able to construct the query like this:

    index="my_index" uri="*/experience/*"
    | eval common_uri = replace(uri, "^(/[^/]+){1,2}(/experience/.*)", "\2")
    | stats count(common_uri) as hits by common_uri
    | sort -hits
    | head 20
@ND1 Agreed with @sainag_splunk. Also, most ES dashboards expect data in CIM fields or from a specific data model/summary index.

Check fields:
- Run your correlation search in Search & Reporting
- Use the field picker to see if the required CIM fields are present
- If not, review your field extractions or data model configurations

Check the data model:

    | datamodel <datamodel_name> search

If the data model is empty, review your data sources and field extractions.

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
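For example, assuming the content maps to the Authentication data model (swap in whichever model your correlation search uses), a quick population check might be:

    | datamodel Authentication search | head 10

An empty result here points at the field extractions or tags rather than the dashboard itself.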
@Lien Unfortunately, it's not supported for the Splunk Cloud trial version. https://docs.splunk.com/Documentation/SplunkCloud/latest/Admin/TypesofSplunkClouddeployment If this helps, please upvote!
Hi @livehybrid , thank you for your reply. I only created one group. I am using the Splunk Cloud trial version. Is there any limitation on setting up SSO? Another problem is that once it shows that error page, I can no longer log on with a local user. It redirects to Okta when I access the site, so I have lost the ability to log on to Splunk Cloud.
Thank you very much for the detailed comments. I edited my post with some details. I did not suspect anything with regard to the monitor stanza, because another host with essentially the same configuration works as expected. Where it doesn't work, I do find events from /var/log/secure (from the same monitor stanza). I will run btool debugging and report back. Thanks again!
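For reference, a typical btool check to see the effective monitor stanzas and which file each setting comes from:

    $SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -A5 "monitor://"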
Thank you for your reply. I edited my post with some more details. It's a custom TA with a simple file monitor stanza. I don't think the inputs configuration is an issue.
@ND1 It's not easy to troubleshoot without a screen share, but typically I recommend:

- Check the time filter on each dashboard panel
- Click the magnifying glass on the panel to view the search
- Expand the search to see what's actually running - you'll typically see macros there
- Expand those macros using Ctrl + Shift + E (Windows) or Cmd + Shift + E (Mac)
- Run the expanded search with a broader time range to see if data appears

Also check:

- Time range mismatch: the ES dashboard is looking for recent data while your correlation search finds older events
- Data model acceleration: your correlation search might need CIM-compliant field mappings
- Dashboard filters: check if the dashboard has hidden drilldown tokens or filters applied

Check out this user guide: https://help.splunk.com/en/splunk-enterprise-security-8/user-guide/8.0/analytics/available-dashboards-in-splunk-enterprise-security

Additional help: if you have Splunk OnDemand Services credits available, I'd recommend using them to walk through this issue with a Splunk expert who can troubleshoot in real time.

If this helps, please upvote.