All Posts

I have increased the Max Size of the "main" index on the indexer cluster master node. I pushed the change to the peer nodes and it showed as successful, and I have also restarted the peer node (Server Control --> Restart Splunk). The Max Size of the "main" index is still not updated.

Splunk Enterprise version: 8.2
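For reference, a minimal sketch of where a cluster-pushed index setting usually lives on an 8.2 cluster master; the size value here is just a placeholder:

# on the cluster master: $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf
[main]
maxTotalDataSizeMB = 512000

# then distribute the bundle to the peers from the master (prompts for confirmation):
splunk apply cluster-bundle

If a peer still has its own maxTotalDataSizeMB for [main] in a higher-precedence location such as etc/system/local, that local value can override what the bundle delivers, which is one common reason a peer keeps showing the old limit.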
@PickleRick @isoutamo @kiran_panchavat Thank you for the replies! I think I should provide more information about the log. It comes from SNMP traps, and I have a script that exports the traps line by line to a log file that Splunk monitors. The props.conf that @PickleRick helped to amend works well if I use 'Add Data' to upload a static log file, but if I use file monitoring (new SNMP trap lines are written roughly every 10 minutes), the line breaking goes wrong. So I was wondering whether the problem is due to the file being updated? The SNMP traps are written at almost the same time (as seen in the timestamps). If I want to fix this, what configurations can I change?
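A minimal props.conf sketch for one-event-per-line data on the parsing tier; the sourcetype name snmp_trap is a placeholder, and this assumes each trap really does occupy a single line in the monitored file:

# props.conf
[snmp_trap]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

With SHOULD_LINEMERGE disabled, Splunk does not try to merge consecutive lines into one event based on timestamp recognition, which is a common cause of broken line breaking when many lines in a monitored file carry the same timestamp.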
I can confirm that this is fixed in 9.4.0

| makeresults format=csv data="field
a
a:b"
| eval field = split(field, ":"), count = mvcount(field), map = mvmap(field, "1")

In 9.4.0, it returns:

count   field   map
1       a       1
2       a       1
        b       1

Before the fix, it would return the following, incorrect first row:

count   field   map
1       a       a
2       a       1
        b       1
I have to display a field called Info, which has value A, and color it based on range (low, severe, high) as in Splunk Classic, but in Splunk Dashboard Studio. How can I achieve that?
EVAL is a search-time configuration, so it will not work at index time (I'm not even sure the syntax in your example is correct).
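For contrast, a minimal sketch of the two placements (the sourcetype name is a placeholder): EVAL- in props.conf is evaluated only at search time, while the index-time equivalent is INGEST_EVAL in transforms.conf, referenced from props.conf with TRANSFORMS-:

# transforms.conf -- index time
[sanitize_metadata]
INGEST_EVAL = _meta=replace(_meta,"::","=")

# props.conf -- wire the index-time transform in
[your_sourcetype]
TRANSFORMS-sanitize = sanitize_metadata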
I can confirm, this type of setup does not work for the Windows logs:

[sanitize_metadata]
EVAL-EEEE = replace(_meta,"::","=")

[metadata_meta]
SOURCE_KEY = EEEE
REGEX = (?ims)(.*)
FORMAT = $1__-__$0
DEST_KEY = _raw

The problem is that with this, the Windows logs only contain the event log message part, as if they did not have any metadata attached.
I have played around a bit more... This is what seems to be working for me:

[sanitize_metadata]
EVAL-_meta = replace(_meta,"::","=")

[metadata_meta]
SOURCE_KEY = _meta
REGEX = (?ims)(.*)
FORMAT = $1__-__$0
DEST_KEY = _raw

Note: __-__ is just a placeholder for a separator. I found an article that aims at something marginally similar to what I am doing: https://zchandikaz.medium.com/alter-splunk-data-at-indexing-time-a10c09713f51 There, the author uses EVAL instead of INGEST_EVAL. Is there any significant difference? Also, I changed your example because it worked differently if I did not use _meta as the target variable in the INGEST_EVAL. I noticed that with your version, the logs that originated from the Windows machine with the UF on it were missing the metadata assigned there. When I use my version, all the metadata set on the UF (static key-value pairs) is present in the log. Any idea why that might be? Either way, thanks so much for your effort to help me! I really appreciate it!
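For reference, a sketch of how the full index-time chain is usually wired together, assuming the transforms run on the parsing tier (HF or indexer) and using a placeholder Windows sourcetype name; the order listed in TRANSFORMS- is the order the transforms are applied:

# props.conf
[WinEventLog:Security]
TRANSFORMS-sanitize = sanitize_metadata, metadata_meta

# transforms.conf
[sanitize_metadata]
INGEST_EVAL = _meta=replace(_meta,"::","=")

[metadata_meta]
SOURCE_KEY = _meta
REGEX = (?ims)(.*)
FORMAT = $1__-__$0
DEST_KEY = _raw

On the EVAL vs INGEST_EVAL question: EVAL- in props.conf defines a search-time calculated field, while INGEST_EVAL in transforms.conf is evaluated at index time, so only the latter can rewrite _meta or _raw before the event is written to the index.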
Hi Yash

Obs (observed value): Average of all data points seen for that interval. For a cluster or a time rollup, this represents the weighted average across nodes or over time.
Min: Minimum data point value seen for that interval.
Max: Maximum data point value seen for that interval.
Sum: Sum of all data point values seen for that interval. For the Percentile metric for the App Agent for Java, this is the result of the percentile value multiplied by the Count.
Count: Number of data points generated for the metric in that interval.

The collection interval for infrastructure metrics varies by environment. Remember, if you want an immediate response, it's better to file a ticket with the Splunk AppD Support team.
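As a quick illustration with made-up numbers: if an agent reports the data points 10, 20, and 30 during a single interval, then Obs = 20 (the average), Min = 10, Max = 30, Sum = 60, and Count = 3.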
@ariel_esh   
Hi Kiran, thanks for the info. I did post my solution earlier today, and I think it pretty much mirrors what you've got. So, at least I know I am on the right wavelength.
I have attached the raw data to the post. I am trying the following query to identify the resourceType values and their counts, but it is not giving me any results:

index=app_shared source=aws.config
| stats count by resourceType
| table resourceType

I think we can also narrow it down to only "detail-type": "Config Configuration Item Change".
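A sketch of one way to pull the field out explicitly, assuming the events are AWS Config change notifications in JSON and that resourceType sits under detail.configurationItem (an assumption about the event structure, so adjust the path to match the attached raw data):

index=app_shared source=aws.config "detail-type"="Config Configuration Item Change"
| spath output=resourceType path=detail.configurationItem.resourceType
| stats count by resourceType

If the stats command returns nothing, the field is usually not being extracted under the expected name, so checking the JSON path with spath is a reasonable first step.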
@nelaturivijay Please have a look. https://docs.splunk.com/Documentation/Splunk/9.4.0/Viz/tokens#Using_tokens_in_a_search 
@danielbb I hope this helps. If any reply helps you, you could add your upvote/karma points to that reply. Thanks.
| stats values(type) as types values(_time) as times by displayId

Note that this will give you the times in internal format (number of seconds since the beginning of 1970). If you want the times formatted, you should create a field with the formatted version and collect those values.
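For example, a minimal sketch of the formatted variant; the field name time_str is arbitrary:

| eval time_str = strftime(_time, "%Y-%m-%d %H:%M:%S")
| stats values(type) as types values(time_str) as times by displayId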
@danielbb Create `inputs.conf` and `outputs.conf` on the Heavy Forwarder (HF) if you want to forward data directly from the HF to the indexers. Alternatively, create `inputs.conf` and `outputs.conf` on the Universal Forwarder (UF) to send data to the HF, which will then forward it to the indexers. I hope this helps. If any reply helps you, you could add your upvote/karma points to that reply. Thanks.
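A minimal sketch of the two files, with placeholder hostnames:

# outputs.conf on the forwarder (UF or HF)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997

# inputs.conf on the receiving side (HF or indexer)
[splunktcp://9997]
disabled = 0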
@danielbb Hello Daniel, please follow the steps below.

1. Install Splunk on all the required instances.
2. Enable the receiving port `9997` on the indexer.
3. If you are forwarding data from a Universal Forwarder (UF) to a Heavy Forwarder (HF) and then to the indexer, ensure the receiving port is open on both the Heavy Forwarder and the indexer.
4. Ensure the following ports are open:
- 9997: UF to HF and HF to Indexer
- 8089: Management port between Indexers and Search Heads
- 8000: Web port for HF and Search Head (optional for indexers in production environments)
5. Add your indexer to the Search Head:
- Navigate to Settings > Distributed Search > Distributed Search Setup
- Enable distributed search, then go to Settings > Distributed Search > Search Peers
- Add the indexer details here and restart the Splunk instance.
6. If required, open port `8000` for the web interface on the Heavy Forwarder and Search Head. While optional for indexers, this port is typically not opened on production indexers.

Note: Before configuring Splunk, perform a telnet test to verify port connectivity:
- From UF to HF: `telnet <HF_IP_Address> 9997`
- From HF to Indexer: `telnet <Indexer_IP_Address> 9997`
- From Indexers to Search Heads: Ensure the management port `8089` is open.

I hope this helps. If any reply helps you, you could add your upvote/karma points to that reply. Thanks.
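As a rough CLI equivalent of steps 2 and 5 above; hostnames and credentials are placeholders:

# on the indexer: open the receiving port
splunk enable listen 9997 -auth admin:changeme

# on the search head: add the indexer as a search peer
splunk add search-server https://idx1.example.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme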
https://www.syslog-ng.com/community/b/blog/posts/installing-the-latest-syslog-ng-on-ubuntu-and-other-deb-distributions
I have an indexer, a search head, and a heavy forwarder for a small installation. How do I configure them to communicate correctly?
I'm in the process of creating a small Splunk installation, and I would like to know where I can download the syslog-ng package for Ubuntu Linux 20.x.