All Posts


Hello! Where did you find the old sendemail.py?
In the configuration of your HTTP Event Collector (HEC) token you can set how it handles the connection host. I don't think this is in the GUI, so you might have to edit the inputs.conf file containing your HEC-related stanzas and set the connection_host property to get your desired behavior:

connection_host = [ip|dns|proxied_ip|none]
* Specifies the host if an event doesn't have a host set.
* "ip" sets the host to the IP address of the system sending the data.
* "dns" sets the host to the reverse DNS entry for the IP address of the system that sends the data. For this to work correctly, set the forward DNS lookup to match the reverse DNS lookup in your DNS configuration.
* "proxied_ip" checks whether an X-Forwarded-For header was sent (presumably by a proxy server) and, if so, sets the host to that value. Otherwise, the IP address of the system sending the data is used.
* "none" leaves the host as specified in the HTTP header.
* No default.
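For example, a minimal HEC token stanza with this property set might look like the sketch below (the stanza name and token placeholder are hypothetical, not from your environment):

```ini
# inputs.conf (illustrative sketch only)
[http://my_hec_token]
token = <your-token-guid>
# Use the reverse DNS entry of the sender as the host value:
connection_host = dns
```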
There are several older posts that seem to be related to a default setting not being correct in an etc/system/local/inputs.conf. I would check through your Splunk configs to see if there's a server name not set correctly, or if an override is in place that is causing it to use a non-existent hardcoded value instead of relying on what the OS thinks the server name is (i.e. look through your .conf files):

- Solved: Inputs.conf $decideonstartup - Splunk Community
- Why is host=$decideOnStartup for Splunk Stream, bu... - Splunk Community
- How to Configure host = $decideOnStartup correctl... - Splunk Community
- Solved: $decideOnStartup Remote Perfmon - Splunk Community
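If an override is the culprit, it typically looks like a hard-coded host value in etc/system/local/inputs.conf. A sketch of what to look for (the hostname value below is a made-up placeholder):

```ini
# etc/system/local/inputs.conf (illustrative sketch only)
[default]
# A stale hard-coded value like this overrides what the OS reports:
host = old-decommissioned-server
# To have Splunk resolve the hostname at startup instead, use:
# host = $decideOnStartup
```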
Is it possible to get the source that is sending the data, or the IP of the source? Thanks.
same problem here
There is some overhead involved in replicating KVStore changes within a SHC, but it should not have a noticeable effect on use of the lookup.
I have a kvstore lookup in a single SH environment. If the environment is made into a cluster and kvstore replication is on, would that decrease the performance of updating or searching using the lookup?
Thank you very much. That did it! My approach was regex-based and I was not having any luck.
Thanks @yuanliu ! That definitely does it. I was sure I had tried this already, but somehow, I seem to have missed that particular format string and was skipping the percent before the Z. Thanks again for replying!
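For anyone hitting the same thing, the difference is between a bare Z in the format string and the %Z conversion. A quick sanity check along these lines (the timestamp value is made up for illustration):

```
| makeresults
| eval raw="2023-06-10T12:43:00Z"
| eval parsed=strptime(raw, "%Y-%m-%dT%H:%M:%S%Z")
| eval readable=strftime(parsed, "%F %T %Z")
```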
What causes the stream forwarder to become inactive?
Do you maybe have a lot of license messages? I've seen that triggering this issue.
Hello! As part of data separation activities I am migrating summary indexes between Splunk deployments. Some of these summary indexes have been collected with sourcetype=stash, while others have their sourcetype set to a specific one, let's say "specific_st". The data is very simple; here is one event:

2023-06-10-12:43:00;FRONT;GBX;OK

The sourcetype is set as follows:

[specific_st]
DATETIME_CONFIG =
NO_BINARY_CHECK = true
TIME_FORMAT = %Y-%m-%d-%H:%M:%S
TZ = UTC
category = Custom
pulldown_type = 1
disabled = false
SHOULD_LINEMERGE = false
FIELD_DELIMITER = ;
FIELD_NAMES = "TIMESTAMP_EVENT","SECTOR","CODE","STATUS"
TIMESTAMP_FIELDS = TIMESTAMP_EVENT

After collecting on the source Splunk instance, I run the search on the summary index and the fields are extracted correctly. After migrating the index to the new Splunk instance, the sourcetype does not seem to work and the fields are not extracted. The event correctly lists the sourcetype as "specific_st".

To migrate, I copied the db folder via SCP from the source indexer (single) to the target indexer, which is part of a cluster. I made sure to rename any buckets, and when I brought the indexer back up the index was correctly recognized and replicated. The sourcetype is located on all indexers as well as the search head.

Has anybody had this problem before? Do I maybe need to update the sourcetype in some way?

Thank you and best regards,

Andrew
I want to change the tooltip text when hovering over a Flow Map Viz node. I am using the event handler below but it does not seem to work.

var flowMapViz = mvc.Components.get('my_flow_map');
flowMapViz.$el.find('.node').on('mouseover', function(event) {
    console.log("on mouseover");
});
Just as I showed in my example. If you need it adapted for your events, and need help with that, you will need to provide anonymised examples of your events for us to work with.
Thank you very much @richgalloway !
Thank you @ITWhisperer and @gcusello for the prompt response and great inputs. I have these strings (ex: Satisfied Conditions: XYZ, ABC, 123, abc) across thousands of log files. How do I get them into a single bucket at runtime so I can apply the logic to split and group by?
The greater permission applies.  See https://docs.splunk.com/Documentation/Splunk/9.1.1/Security/Aboutusersandroles#Role_inheritance
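As a sketch of how that plays out (the role names and numbers below are hypothetical, for illustration only): when a custom role sets a lower quota than a role it imports, the higher inherited value wins.

```ini
# authorize.conf (illustrative values only)
[role_power]
srchJobsQuota = 10

[role_custom]
importRoles = power
# Effective quota for users with this role is 10, not 3,
# because the greater permission applies:
srchJobsQuota = 3
```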
When running

| inputlookup testlookup

(which is an external lookup) I get the error message:

The lookup table 'testlookup' requires a .csv or KV store lookup definition

... so I assume this isn't an intended use case. Quite a bummer because (as per some of my earlier posts) custom search commands kind of suck.
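For reference, external (scripted) lookup definitions are normally invoked with the lookup command against fields in your events, rather than read wholesale with inputlookup. A sketch of the intended usage (the field names here are hypothetical):

```
| makeresults
| eval input_field="some_value"
| lookup testlookup input_field OUTPUT output_field
```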
Hi all,

I've configured a new role to inherit settings from the user and power roles, and I left default values for srchJobsQuota and rtSrchJobsQuota. Basically:

[role_new]
importRoles = power; user
srchDiskQuota = 1000
srchIndexesAllowed = *
srchIndexesDefault = *
srchMaxTime = 8640000
srchJobsQuota = 3
rtSrchJobsQuota = 6

In this case, which values will I have for srchJobsQuota and rtSrchJobsQuota? The ones set in role_new or the ones set in the inherited roles?

Thank you very much
On the search head that our SOC uses, I get the following:

IOWait
Sum of 3 highest per-cpu iowaits reached red threshold of 15
Maximum per-cpu iowait reached yellow threshold of 5

Under unhealthy instances, it's listing our indexers. I ran top on one of them and I see the following:

top - 15:41:36 up 37 days, 11:50, 1 user, load average: 5.31, 6.58, 6.95
Tasks: 416 total, 1 running, 415 sleeping, 0 stopped, 0 zombie
%Cpu(s): 28.3 us, 2.5 sy, 0.0 ni, 66.2 id, 2.7 wa, 0.2 hi, 0.2 si, 0.0 st
MiB Mem : 31858.5 total, 311.6 free, 3699.4 used, 27847.5 buff/cache
MiB Swap: 4096.0 total, 769.0 free, 3327.0 used. 27771.1 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
984400 splunk 20 0 4475268 244140 36068 S 105.6 0.7 1:22.47 [splunkd pid=42128] search --id=remote_"Search Head FQDN"_scheduler__zzm+
796457 splunk 20 0 9232920 790724 36932 S 100.7 2.4 56:56.65 [splunkd pid=42128] search --id=remote_"Search Head FQDN"_scheduler__zzm+
895450 splunk 20 0 1281092 337308 32668 S 85.8 1.0 23:31.00 [splunkd pid=42128] search --id=remote_"Search Head FQDN"_1698412482.432+

Where it says "Search Head FQDN", that's just listing one of our Search Heads.

Of course, we started seeing this once we upgraded from 8.0.5 to 9.0.5. Seeking guidance on this matter.