Activity Feed
- Got Karma for Re: Windows DHCP log files "too small to match seekptr checksum"?. 02-29-2024 11:03 PM
- Got Karma for Re: Windows DHCP log files "too small to match seekptr checksum"?. 09-14-2023 06:32 AM
- Got Karma for How can I control the size and number of Splunk's internal logs?. 12-27-2022 07:23 AM
- Got Karma for Re: How can I control the size and number of Splunk's internal logs?. 12-27-2022 07:23 AM
- Got Karma for Re: Is it possible to set app permissions via the REST API?. 05-03-2022 09:06 PM
- Got Karma for Re: How do I add metadata to events coming from a Splunk forwarder?. 04-13-2022 02:33 AM
- Got Karma for Re: Your maximum disk usage quota has been reached - What does this mean?. 03-16-2022 07:26 AM
- Got Karma for Re: in splunkd.log a lot of warnings : DispatchCommand - could not read metadata file. 05-11-2021 06:03 AM
- Karma How can I get a complete list of processes used by Splunk for Linux? for cwl. 06-05-2020 12:47 AM
- Karma Re: Browser Unsupported on IE after upgrade to 6.2 for jdastmalchi_spl. 06-05-2020 12:47 AM
- Karma Browser Unsupported on IE after upgrade to 6.2 for jdastmalchi_spl. 06-05-2020 12:47 AM
- Karma Re: DateParserVerbose - what is splunkd.log telling me? for martin_mueller. 06-05-2020 12:47 AM
- Karma Re: Will setting the percent_peers_to_restart = 0 prevent a rolling restart? for RicoSuave. 06-05-2020 12:47 AM
- Karma Will setting the percent_peers_to_restart = 0 prevent a rolling restart? for jwoger_splunk. 06-05-2020 12:47 AM
- Karma Re: Can you control what gets replicated between search heads? for RicoSuave. 06-05-2020 12:47 AM
- Karma Can you control what gets replicated between search heads? for jwoger_splunk. 06-05-2020 12:47 AM
- Karma Re: [SHC] What config changes will cause a rolling restart of search heads? for RicoSuave. 06-05-2020 12:47 AM
- Karma [SHC] What config changes will cause a rolling restart of search heads? for jwoger_splunk. 06-05-2020 12:47 AM
- Karma Re: Splunk App for Windows Infrastructure: Why can't I click "Next" on the Prerequisites page, even if all my check marks are green? for mgaraventa_splu. 06-05-2020 12:47 AM
- Karma Re: How can I get a complete list of processes used by Splunk for Linux? for hexx. 06-05-2020 12:47 AM
Topics I've Started
07-20-2016
11:30 AM
My first guess is that this is likely down to excessive or inefficient REGEX operations during parsing. It could be index-time extractions or even field-replacement/masking operations. Another possibility is a lack of up-front sourcetyping at the input layer: if Splunk doesn't know what sourcetype to apply to data, it will try to figure it out itself on each event, which is obviously not a good idea.
I would review all inputs, props, and transforms entries to see if you can spot any outliers, or operations that are applied to most or all of the incoming data. Alternatively, there's an app on Splunkbase - Data Curator: https://splunkbase.splunk.com/app/1848/ - which provides efficiency scores for all sourcetypes based on *.conf file entries. It hasn't been updated in a while, so it still says it's for 6.2, but I expect it to work on 6.4.x as well.
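For illustration, explicit up-front sourcetyping at the input layer looks something like this (a sketch; the monitor path, sourcetype name, and index are placeholders for your own):
[monitor:///var/log/myapp/app.log]
sourcetype = myapp:applog
index = main
With the sourcetype set in inputs.conf, Splunk skips the per-event guessing entirely for that input.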
04-22-2015
02:53 PM
3 Karma
This comment deserves way more karma than I can currently allocate
04-22-2015
02:36 PM
5 Karma
What we have here is an internal identifier that we call the 'pipelinechannelset'; it is used to ensure that data from a particular input stream is not mingled with data from another stream. This is primarily used for network inputs, where we would have incoming streams from multiple sources via the same TCP port (9997 by default).
In the case of local file inputs, an identifier like this isn't necessary, as our default parsing machinery already has the ability to keep data from different files separate - which explains why you will sometimes see '/n' versus a number.
The more incoming data-streams you have (i.e. the more forwarders in your deployment), the higher this number will be.
Using the embedded report feature, how can I configure it so that the results are displayed in a table view instead of another visualization?
When we use the URL provided by the embed feature, it only shows whatever visualization is on the Visualization tab, and NOT a table (Statistics tab). I must be overlooking something; this is a key and needed "visualization" for how we want to leverage this capability.
03-07-2014
02:35 PM
1 Karma
By default, Splunk forwarders will always use auto load balancing (auto-LB) when forwarding to any destination, Splunk indexer or otherwise. The forwarder will only use round-robin if you configure it to, but this is unlikely to work well with a universal forwarder, as it does not parse the data before sending.
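For reference, auto-LB is just the default behaviour of a standard tcpout group in outputs.conf - a sketch, with placeholder indexer names (autoLB = true is the default, shown here only for clarity):
[tcpout:my_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
autoLB = true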
03-07-2014
02:32 PM
1 Karma
If a Splunk forwarder is sending information to a non-Splunk indexer, does the forwarder use the same load-balancing logic/capabilities, or is the information sent in a round-robin fashion?
08-13-2013
12:26 PM
7 Karma
This event happens in the context of distributed search. It is coming from bundle replication, which is attempting to tar up all of your app files to push the search bundle to your indexers. In order to make this manageable, Splunk has a default limit of 50MB, which can be tuned with the following setting in distsearch.conf, in the [replicationSettings] stanza -
concerningReplicatedFileSize =
* Any individual file within a bundle that is larger than this value (in MB) will trigger a splunkd.log message.
* Where possible, avoid replicating such files, e.g. by customizing your blacklists.
* Defaults to: 50
However, the better solution here would be to simply blacklist these, and any other large files that are not necessary for searching on the indexers. Read the information here about controlling the size of your replicated bundles: http://docs.splunk.com/Documentation/Splunk/5.0.4/Deploy/Configuredistributedsearch#Limit_the_knowledge_bundle_size
Then, for any changes you want to make to the whitelist & blacklist settings, you can edit the distsearch.conf file: http://docs.splunk.com/Documentation/Splunk/5.0.4/admin/Distsearchconf
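As a sketch, a blacklist entry looks something like this (the [replicationBlacklist] stanza is standard distsearch.conf; the entry name and path pattern are placeholders for whatever large files you need to exclude):
[replicationBlacklist]
big_lookups = apps[/\\]myapp[/\\]lookups[/\\]...
Each entry is a pattern matched against file paths in the bundle; anything that matches is left out of replication.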
05-07-2013
10:56 AM
1 Karma
ES is an app that sits on top of Splunk and provides insight into security data, which can come from anywhere depending on your use-case. The app includes various methods to collect and visualize data, but any 'monitoring' is dependent on the searches that you configure, and those will depend on what you are looking for.
The parsing of data is handled by splunkd, according to rules that can be put in place by any app. If you are looking at a 'standard' log, then it's likely we already have parsing rules included by default, but for custom logs you will likely have to define some if Splunk's initial attempts don't work for you.
To set up a conversation specific to security use-cases, I recommend engaging your Sales Rep to set up a call with someone who can assess the use-case(s) you have in mind and provide…
04-04-2013
07:19 AM
After enabling any input, it's important to verify that you actually have data coming into your Splunk instance from that source. In this instance, I suspect that your original input stanza is not working because you're missing a \ in your monitor spec, i.e.
[monitor://C:inetpub\logs\LogFiles]
Should be:
[monitor://C:\inetpub\logs\LogFiles]
Once this is corrected and you have restarted your instance, you can verify that you're getting data simply by running the search "source=C:\inetpub\logs\LogFiles*"
12-17-2012
11:15 AM
3 Karma
The instructions in the docs are specifically for resetting auto-sourcetyped data, but you have already set a manual sourcetype in inputs.conf, so it's never going to get overwritten again unless you specifically use a props/transforms entry to re-write it completely. An example is posted here: http://docs.splunk.com/Documentation/Splunk/5.0/Data/Advancedsourcetypeoverrides#Example:_Assign_a_source_type_to_events_from_a_single_input_but_different_hosts
Another alternative would be to remove ' sourcetype = syslog ' from inputs.conf and rely on a combination of auto-sourcetyping and other props.conf stanzas to set the sourcetypes on the non-syslog data.
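For reference, an index-time override of that sort looks roughly like this (the source pattern, matching regex, and sourcetype name below are placeholders - see the linked docs page for the authoritative example):
props.conf:
[source::udp:514]
TRANSFORMS-set_st = force_sourcetype_myapp
transforms.conf:
[force_sourcetype_myapp]
REGEX = myapp
FORMAT = sourcetype::myapp
DEST_KEY = MetaData:Sourcetype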
12-05-2012
09:49 AM
1 Karma
http://wiki.splunk.com/Community:Run_multiple_Splunks_on_one_machine
The link above will take you to instructions on how to successfully configure & run multiple instances on a single machine. The problem you are encountering is a known behaviour on Mac OS X: when you run the install package a second time, the OS inadvertently clobbers the original install, removing files and directories.
Usually it will leave the '$SPLUNK_HOME/etc' and '$SPLUNK_HOME/var' directories intact, so your configurations & data should still exist, but the instance will not be in a usable state. The best option at this point is to move what is left aside, download the .tgz package, extract it to the desired location, and move your configs and data back over.
Assuming your 2nd instance installed successfully when you ran the .dmg that caused this problem, you should now have 2 instances on one machine. Please read the steps in the article linked above to ensure that there are no conflicts between the instances.
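The main thing to check for conflicts is the ports. As a sketch, the second instance's web.conf might look like this (the port numbers are arbitrary examples - anything that doesn't collide with the first instance's 8000/8089 defaults):
[settings]
httpport = 8001
mgmtHostPort = 127.0.0.1:8090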
I am encountering the following error in the python.log file when Splunk tries to send an email alert.
2012-08-30 12:11:15,213 ERROR [Errno -3] Temporary failure in name resolution while sending mail to myEmail@email.me
Sending an email manually works without any problem from both the command line and the UI.
What is this error and how can it be resolved?
03-13-2012
05:11 PM
You need to ensure that the user running Splunk (by default, the 'Local System' user on a Windows instance) has full access permissions to the $SPLUNK_DB location. When Splunk starts up, it runs a validation check on existing index directories to verify that it has the correct permissions to create & modify files in those locations.
The user needs full permissions: read + write.
02-24-2012
01:45 PM
7 Karma
This is actually an issue with the license functionality and the automatic expiration function. When working correctly, the license lifetime should start counting down from the first time you start up a new installation.
We have refreshed most of the packages on the download page, but you can also access an updated enterprise trial license here.
To apply it, simply place the file in $SPLUNK_HOME/etc/licenses/download-trial/ and restart the Splunk instance. If your browser just displays the raw license .xml contents, copy that text to a file called enttrial.lic in the above location.
02-21-2012
10:13 AM
3 Karma
We have Splunk 4.2.3 installed on some Linux hardened servers. Our Security team recently ran some scans and expressed concern regarding SSL on port 8089. After researching we determined that this port is used for Splunk deployment communication.
It seems that their concern is that the SSL version is too low. They would like to see at least v3/TLSv1.
I'm not very familiar with SSL. Could you tell me what SSL version Splunk uses? Is it possible to upgrade? What version of SSL does 4.3 use?
Thanks,
- Tags:
- sslv3
12-22-2011
12:07 PM
3 Karma
Currently, no, but the new 4.3 release in early 2012 will support IPv6. We're in the final testing stages right now.
11-30-2011
09:57 PM
6 Karma
In order to address this issue and spread the data evenly, use a regular (heavy) forwarder to collect the data and parse it before sending it to the indexer.
With the Universal Forwarder, minimal parsing is performed on the forwarder side before the data is sent onwards. This means that the UF has no idea where line-breaks occur between events, so in order to use auto-LB, it has to wait until there's a break in the data-stream before switching the output connection to a new indexer. The same behaviour would be observed if it were monitoring a file and the logging application never stopped writing to that file. As long as data from a specific source keeps arriving fast enough, the UF will continue to send that data to a single indexer in order to avoid corrupting the index.
A regular forwarder fully parses the data before sending it to the indexers, making it easy to identify points where the connection can be switched. Note that using this kind of instance will increase resource usage on the host server, so if that box is running critical applications, we would advise using a separate, dedicated box for this purpose.
07-05-2011
02:41 PM
1 Karma
The quick answer is yes: you can override the settings in .../default/transforms.conf by creating a stanza of the same name in the .../local/transforms.conf file.
The default settings are based on what we expect syslog data to look like, but that's not going to match every possible format out there. Just remember that any changes you make to files in the default directories may get overwritten on an upgrade, so make sure you always make your changes in the local directory.
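As a minimal sketch, assuming it's the default syslog host extraction you want to change (the [syslog-host] stanza name matches the shipped default; the REGEX here is illustrative, not the shipped one):
In $SPLUNK_HOME/etc/system/local/transforms.conf:
[syslog-host]
DEST_KEY = MetaData:Host
REGEX = \s(host-[\w\-]+)\s
FORMAT = host::$1
Because the stanza name matches, the local version wins over the default.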
03-08-2011
10:39 AM
Did you actually read the answer above? It gives you a general idea of good indexing speed (4-7 MB/sec), a direct link to the recommended hardware config, AND lists out all of the main dependencies: hardware resources and data format. There is no straightforward answer/formula for this question; you need to test Splunk with your data.
03-07-2011
04:27 PM
2 Karma
We actually get this question quite a lot in the support team, and my usual response is:
What kind of performance stats are you looking for?
Splunk has 2 main operations, indexing and searching. Both of these operations are dependent upon the hardware resources available: the more resources, the faster Splunk will run. I'm not just referring to CPU and memory; Splunk is also very I/O intensive, so the speed of your storage volume is very important. Further, if you intend to use RAID, that will also affect the performance numbers. Splunk recommends RAID 0 for best performance, and the recommended hardware config is detailed here.
The performance of your server is also dependent on the data you are indexing and searching on. If you are just interested in standard single-line syslog containing key = value data, Splunk will handle that data like a champ and eat it up as fast as you can feed it in, provided that your disk is fast enough. If all of your events are multi-line, however, with varying lengths, data formats, etc., Splunk will be slower to index it, and searching will also be impacted.
The only way to know for sure how Splunk will perform with your data is to run some tests with real data samples. There is an app on Splunkbase here that will help you with this; it's mainly a sequence of CLI commands that runs a test with a dataset you specify.
As you can see, there's no easy answer to this question, as there are a lot of dependencies, but on a well-tuned, beefy server we would expect to see average indexing throughput of 4-7 MB/sec. Anything higher than that would likely impact search performance.
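If you do run such a test, a commonly used way to measure actual indexing throughput is to search Splunk's own metrics (adjust the span to taste):
index=_internal source=*metrics.log group=per_index_thruput | timechart span=1m sum(kb) AS kb_indexed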
03-03-2011
03:15 PM
1 Karma
Yes, it makes sense, and yes, what you have heard is pretty accurate. Moving data from one architecture/file-system to another is usually not possible.
We have seen some cases where moving data from Solaris to Linux has worked, if the Solaris box was x86 arch. If you have a SPARC box, then it's just not going to work. If you want to confirm, you can try moving just one bucket as a test.
The easiest option for you is the one you have described: setting up an instance on both architectures to act as search peers until such time as that data is no longer needed. Another, more time-consuming option would be to export the data from the original index and import it into the new ones, using a method similar to the one described here.
Migrating data from one instance to another of the same architecture isn't complicated; it is simply a case of copying the buckets from one location to another, as described here.
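For orientation, each bucket is just a directory under the index's db path, named by the time range it covers - an illustrative layout:
$SPLUNK_DB/<index_name>/db/db_<newest_time>_<oldest_time>_<bucket_id>/
Copying a bucket means copying one of those directories wholesale.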
01-25-2011
11:43 PM
3 Karma
Have you heard of the bump endpoint?
If you hit /info on any instance, there's a link called 'static resource cache control' near the bottom of the page. Click that, and it'll allow you to increment the "bump" number on the instance.
We cache static files very aggressively. Ctrl+refresh will refresh it for you, but not for your other users. Clicking the bump endpoint works for everyone because it changes the static URLs for everything on the entire server.
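For context, the pages involved look like this (host and port are placeholders, and the exact path of the bump endpoint can vary by version, so follow the link from /info if in doubt):
http://splunkweb.example.com:8000/info
http://splunkweb.example.com:8000/_bump
The first is the info page with the 'static resource cache control' link; the second is the bump endpoint itself.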
01-20-2011
09:05 PM
Yes, it is possible, but not straightforward.
If you were to reproduce this view in a custom app, you could simply define a new times.conf within that app containing only the timeranges you want to see. However, that would mean that the TimeRangePicker module in every view inside that app would also be limited.
Another method involves coding specific actions for that module on a per-view basis, which isn't easily done unless you are familiar with the Splunk UI architecture. Our Professional Services team could likely deliver this in about half a day, so if it's something that's critical to your deployment, you may want to consider involving them.
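For reference, a custom times.conf entry in that app would look roughly like this (the stanza name and values are illustrative):
[last_15_minutes]
label = Last 15 minutes
earliest_time = -15m
latest_time = now
order = 10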