All Posts

Hello Sainag, I've tried calling Splunk customer support and keep getting sent in circles by the automated phone system. I've watched multiple tutorials, including some from Splunk itself, and still no luck.
Hi @GeneralBlack Please could you share the full error you are getting?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi @caschmid Would something like this work for you? This assumes you know the string you want to count, is that right?

| rex max_match=100 field=_raw "(?<extract>\[string\])"
| stats count by extract

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Use https://regex101.com to verify your regexes. In this case it won't work because "string" is not a number: \d+ matches a sequence of digits. Depending on how precise you want the match to be, you might want \S+ or some other variation.
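For illustration, a minimal sketch of that correction, assuming the events literally read "This is my <word>" as in the sample (the capture name string follows the original attempt):

| rex field=_raw "This is my (?<string>\S+)"
| stats count by string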
I can't propose any solution because I have no idea where the problem is. I don't even know which endpoint you're using. The remark about line breaking is just something worth knowing.
Ok. So it seems you're not just filling down, because at the end you're subtracting from what's already been counted. There is much more logic here. Are there any limitations to the versions per day? What if there are more than two versions? It seems much more complicated.
@GeneralBlack Please work with Splunk support; maybe the mongod folder is missing and was not created after the upgrade?
I need a query that will tell me the count of a substring within a string like this ... "This is my [string]" and I need to find the word and count of [string]. "This is my" is always the same but [string] is dynamic and can be many things, such as apple, banana etc. I need tabular data returned to look like

Word    Count
apple   3

I tried this but it doesn't seem to be working:

rex field=_raw ".*This is my (?<string>\d+).*"
| stats count by string
Hello, after I installed Splunk 9.4.3 on Linux (Ubuntu) I am unable to run it. When I try to start Splunk, it says the directory does not exist. When I found it in the directory, I was prompted with a KVStore error message. Any help is greatly appreciated and needed.
Still an issue in 9.3.2. The concept of "ignore all INFO-level messages" doesn't sit well with me as a solution; there are useful messages at that level.
I’ve developed a custom Splunk app that fetches log data from external sources. Currently, I need to dynamically create dashboards whenever new data types/sources are ingested, without manual intervention.
Hi @mcfly227 , good for you, see you next time! Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated
Giuseppe, thank you, this is exactly what I was looking for: the rebuild option. I was looking in the incorrect area. This resolved my issue!
Hi @mcfly227 , it depends on how you implemented the missing clients discovery alert: if you're using the Monitoring Console alert, you only have to rebuild the forwarder asset list in [Settings > Monitoring Console > Settings > Forwarder Monitoring setup > Rebuild forwarder asset list]. If you're using a custom search, it depends on how you are performing the check, probably using your own lookup; in this case, you have to remove the host from this lookup. Ciao. Giuseppe
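For the custom-search case, a minimal sketch of pruning the decommissioned host from a tracking lookup (the lookup file and host names here are hypothetical placeholders):

| inputlookup forwarder_assets.csv
| search NOT host="decommissioned-host"
| outputlookup forwarder_assets.csv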
I recently had an AD machine which had a UF on it decommissioned. I have alerts set up for missing forwarders as well. I cannot seem to find how to remove the UF from the HF and the main Splunk instance. Is there documentation that I am missing?
Hi @richgalloway, yes, I already tried the filldown command, but I had no success with it. Probably I did it wrong. I described my use case in my reply to @PickleRick in this thread.
Hi @PickleRick , the use case is like the following description... My input is like the following JSON:

{ "timestamp": "2025-06-01T09:26:00.000Z", "item":"I.1","version":"1.1.0-1"}
{ "timestamp": "2025-06-01T09:26:00.000Z", "item":"I.2","version":"1.1.0-1"}
{ "timestamp": "2025-06-01T09:26:00.000Z", "item":"I.3","version":"1.1.0-1"}
{ "timestamp": "2025-06-01T09:26:00.000Z", "item":"I.4","version":"1.1.0-1"}
{ "timestamp": "2025-08-01T09:26:00.000Z", "item":"I.1","version":"1.1.0-2"}

There are 4 items at 06/01 and one item with an advanced version at 08/01. The query just counts the current version per day:

source="..."
| eval day=strftime(_time, "%Y-%m-%d")
| chart count by day, version

The actual result is:

| day        | 1.1.0-1 | 1.1.0-2 |
| ---------- | ------- | ------- |
| 2025-06-01 | 4       | 0       |
| 2025-08-01 | 0       | 1       |

but what I expect is:

| day        | 1.1.0-1 | 1.1.0-2 |
| ---------- | ------- | ------- |
| 2025-06-01 | 4       | 0       |
| 2025-07-01 | 4       | 0       |
| 2025-08-01 | 3       | 1       |

Another challenge is that I want to spread the result over about 60 days, and there are over 100,000 items.
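For what it's worth, a sketch of one possible carry-forward approach, assuming each item's latest known version should persist on days without a new event. Note that limit=0 keeps every item as its own series, which may not scale well to 100,000 items:

source="..."
| timechart span=1d limit=0 latest(version) by item
| filldown
| untable _time item version
| eval day=strftime(_time, "%Y-%m-%d")
| chart count by day, version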
Hello, we are trying to implement without authentication and we have the same error. Here is my stanza; please advise if there is some mistake:

[elasticsearch_json://TEST_INPUT]
cust_source_type = elastic-msgs-sms
date_field_name = timestamp
elasticsearch_indice = msgs-sms-v1.0-*
elasticsearch_instance_url = vlelasp-fe-vip.at
index = main
interval = 60
port = 9200
secret =
time_preset = 24h
user =
disabled = 0

ERROR FROM LOGS:

2025-06-10 12:21:15,012 ERROR pid=1333302 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/elasticsearch_json.py", line 96, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/input_module_elasticsearch_json.py", line 147, in collect_events
    opt_ca_certs_path = opt_ca_certs_path.strip()
AttributeError: 'NoneType' object has no attribute 'strip'
How do I configure the AppDynamics Java agent with CCM, Travic port, and push Application? Can someone help me with how to onboard this application for monitoring? I have already onboarded the Apache web services by adding the agent argument in catalina.sh.
So, what is the solution you propose?