All Posts

I need a query that will tell me the count of a substring within a string like this: "This is my [string]". I need to find the word and count of [string]. "This is my" is always the same, but [string] is dynamic and can be many things, such as apple, banana, etc. I need tabular data returned to look like:

Word    Count
apple   3

I tried this, but it doesn't seem to be working:

rex field=_raw ".*This is my (?<string>\d+).*" | stats count by string
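For reference, the \d+ in that rex only matches digits, which is why words like "apple" are never captured. A sketch of a version that captures a word instead (the index name is a placeholder; this assumes [string] is a single word):

index=your_index "This is my"
| rex field=_raw "This is my (?<Word>\w+)"
| stats count AS Count BY Word
| sort -Count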
Hello, after I installed Splunk 9.4.3 on Linux (Ubuntu), I am unable to run it. When I try to start Splunk, it says the directory does not exist. When I located the directory, I was prompted with a KVStore error message. Any help is greatly appreciated and needed.
Still an issue in 9.3.2. The concept of "ignore all INFO-level messages" doesn't sit well with me as a solution; there are useful messages at that level.
I’ve developed a custom Splunk app that fetches log data from external sources. Currently, I need to dynamically create dashboards whenever new data types/sources are ingested, without manual intervention.
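One possible building block (a sketch only; the app name, credentials, and dashboard XML below are invented) is Splunk's REST endpoint for views, which an automation script can call whenever a new source type appears:

curl -k -u admin:changeme "https://localhost:8089/servicesNS/nobody/my_custom_app/data/ui/views" \
  -d name=auto_dash_newsource \
  --data-urlencode 'eai:data=<dashboard><label>Auto: newsource</label><row><panel><chart><search><query>index=main sourcetype=newsource | timechart count</query></search></chart></panel></row></dashboard>'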
Hi @mcfly227 , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Giuseppe, thank you, this is exactly what I was looking for: the rebuild option. I was looking in the incorrect area. This resolved my issue!
Hi @mcfly227 , it depends on how you implemented the missing-clients discovery alert: if you're using the Monitoring Console alert, you only have to rebuild the forwarder asset list in [Settings > Monitoring Console > Settings > Forwarder Monitoring Setup > Rebuild forwarder asset list]. If you're using a custom search, it depends on how you are performing the check, probably using your own lookup; in that case, you have to remove the host from this lookup. Ciao. Giuseppe
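For the custom-search case, a minimal sketch of pruning the decommissioned host from such a lookup (the lookup file and host name here are invented; substitute your own):

| inputlookup expected_forwarders.csv
| search NOT host="decommissioned-host"
| outputlookup expected_forwarders.csv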
I recently had an AD machine that had a UF on it decommissioned. I have alerts set up for missing forwarders as well. I cannot seem to find how to remove the UF from the HF and the main Splunk instance. Is there documentation that I am missing?
Hi @richgalloway, yes, I already tried the filldown command, but I had no success with it. Probably I did it wrong. I replied with my use case to @PickleRick in the current thread.
Hi @PickleRick , the use case is like the following description. My input is JSON like this:

{ "timestamp": "2025-06-01T09:26:00.000Z", "item":"I.1","version":"1.1.0-1"}
{ "timestamp": "2025-06-01T09:26:00.000Z", "item":"I.2","version":"1.1.0-1"}
{ "timestamp": "2025-06-01T09:26:00.000Z", "item":"I.3","version":"1.1.0-1"}
{ "timestamp": "2025-06-01T09:26:00.000Z", "item":"I.4","version":"1.1.0-1"}
{ "timestamp": "2025-08-01T09:26:00.000Z", "item":"I.1","version":"1.1.0-2"}

There are 4 items on 06/01, and on 08/01 one item reports an upgraded version. The query just counts the current version per day:

source="..."
| eval day=strftime(_time, "%Y-%m-%d")
| chart count by day, version

The actual result is:

| day        | 1.1.0-1 | 1.1.0-2 |
| ---------- | ------- | ------- |
| 2025-06-01 | 4       | 0       |
| 2025-08-01 | 0       | 1       |

but what I expect is:

| day        | 1.1.0-1 | 1.1.0-2 |
| ---------- | ------- | ------- |
| 2025-06-01 | 4       | 0       |
| 2025-07-01 | 4       | 0       |
| 2025-08-01 | 3       | 1       |

Another challenge is that I want to spread the result over about 60 days, and there are over 100,000 items.
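One way to approximate the expected table (a sketch, assuming each item's last reported version should carry forward into later days; with 100,000+ items the split-by timechart below can get expensive, so treat it as a starting point):

source="..."
| timechart span=1d latest(version) BY item limit=0
| filldown
| untable _time item version
| eval day=strftime(_time, "%Y-%m-%d")
| chart count BY day version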
Hello, we are trying to implement this without authentication and we have the same error. Here is my stanza; please advise if there is a mistake:

[elasticsearch_json://TEST_INPUT]
cust_source_type = elastic-msgs-sms
date_field_name = timestamp
elasticsearch_indice = msgs-sms-v1.0-*
elasticsearch_instance_url = vlelasp-fe-vip.at
index = main
interval = 60
port = 9200
secret =
time_preset = 24h
user =
disabled = 0

Error from the logs:

2025-06-10 12:21:15,012 ERROR pid=1333302 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/elasticsearch_json.py", line 96, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/input_module_elasticsearch_json.py", line 147, in collect_events
    opt_ca_certs_path = opt_ca_certs_path.strip()
AttributeError: 'NoneType' object has no attribute 'strip'
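The traceback says the add-on calls .strip() on a CA-certs option that is None because the field was left empty. A minimal local guard (a sketch; verify that line 147 of your installed version matches the traceback before patching):

# in input_module_elasticsearch_json.py, near line 147 (per the traceback)
# original: opt_ca_certs_path = opt_ca_certs_path.strip()
# guarded version -- only strip when the option was actually provided:
opt_ca_certs_path = opt_ca_certs_path.strip() if opt_ca_certs_path else None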
How do I configure the AppDynamics Java agent with CCM, Travic port, and push Application? Can someone help me with how to onboard the application mentioned above for monitoring? I have already onboarded the Apache web services by adding the argument in catalina.sh.
So, what is the solution you propose?
HEC sources, if writing to the /event endpoint, can provide their own set of indexed fields besides the raw event. Also, with the /event endpoint, no line breaking takes place.
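A minimal sketch of what that looks like (host, token, sourcetype, and field names here are all invented):

curl -k "https://splunk.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"event": "user login ok", "sourcetype": "myapp:events", "fields": {"env": "prod", "dc": "eu1"}}'

The keys under "fields" arrive as indexed fields alongside the event.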
Also, if a search works in one app or for one user and doesn't work in another app or for another user, it's often a permissions issue.
Oooof, that's a golden shovel for you, Sir. But to the point - no. It's how Splunk works. It will allocate a single CPU for each search on the SH it's being run from, as well as on each indexer taking part in the search. So the way to "add cores to the search" is to grow your environment horizontally in the indexer layer _and_ write your searches so that they use that layer properly.
The first thing to do when debugging inputs, after verifying the config, is usually to check the output of splunk list monitor and splunk list inputstatus.
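Both can be run from the CLI on the instance doing the monitoring (a sketch; $SPLUNK_HOME is your install path, and you may be prompted to authenticate):

$SPLUNK_HOME/bin/splunk list monitor
$SPLUNK_HOME/bin/splunk list inputstatus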
"...Splunk will use one core for each search" yep by default splunk will use 1 core for each search but can we adjust this limitation, let say one search can use 2 or 3 core?
Thanks, everyone, for your responses. I highly appreciate your input. I was able to construct the query, something like this:

index="my_index" uri="*/experience/*"
| eval common_uri = replace(uri, "^(/[^/]+){1,2}(/experience/.*)", "\2")
| stats count(common_uri) as hits by common_uri
| sort -hits
| head 20
@ND1 Agreed with @sainag_splunk.

Also, most ES dashboards expect data in CIM fields or from a specific data model/summary index.

Check fields:
- Run your correlation search in Search & Reporting
- Use the field picker to see if the required CIM fields are present
- If not, review your field extractions or data model configurations

Check the data model:

| datamodel <datamodel_name> search

If the data model is empty, review your data sources and field extractions.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving kudos/Karma. Thanks!