Activity Feed
- Got Karma for Re: Universal Forwarder and props.conf and transforms.conf. 01-25-2024 05:21 PM
- Got Karma for Re: Universal Forwarder and props.conf and transforms.conf. 11-07-2022 08:40 PM
- Got Karma for Re: How to install the Splunk Add-on for Unix and Linux and get it to work with the Splunk App for Unix and Linux?. 04-27-2021 12:39 AM
- Got Karma for Re: Use WGET to download Splunk. 10-21-2020 09:10 AM
- Karma Is it possible to display the annotation by default on line chart without placing cursor on the annotation label? for shekharpogula. 06-05-2020 12:50 AM
- Karma Re: How to extract my event in index time using props.conf and transforms.conf? for DalJeanis. 06-05-2020 12:49 AM
- Karma Re: How to push *.conf to universal forwarders? for somesoni2. 06-05-2020 12:49 AM
- Karma Re: Excluding a field name from fields command exclusions for woodcock. 06-05-2020 12:49 AM
- Karma Re: Excluding a field name from fields command exclusions for somesoni2. 06-05-2020 12:49 AM
- Karma Re: Excluding a field name from fields command exclusions for damien_chillet. 06-05-2020 12:49 AM
- Karma Re: How to write an eval condition to replace a field ? for somesoni2. 06-05-2020 12:49 AM
- Karma Re: Common Information Model (CIM) Data Model Editor misbehaviour and broken error reporting for smoir_splunk. 06-05-2020 12:49 AM
- Karma Re: How to write a monitor stanza for configuring inputs.conf on forwarder for sub directories? for somesoni2. 06-05-2020 12:49 AM
- Karma Re: transforms.conf won't let me change the sourcetype for llacoste. 06-05-2020 12:49 AM
- Karma Re: Where is the logtype source type defined? for woodcock. 06-05-2020 12:49 AM
- Got Karma for Re: Excluding a field name from fields command exclusions. 06-05-2020 12:49 AM
- Got Karma for Re: How to EXTRACT regex expression in props.conf?. 06-05-2020 12:49 AM
- Got Karma for Re: pass4SymmKey error when setting Master License. 06-05-2020 12:49 AM
- Karma Re: Why is my regex for SEDCMD in props.conf not removing repeated dashes when parsing data? for s2_splunk. 06-05-2020 12:48 AM
- Karma Re: overriding default host field extraction won't work for rphillips_splk. 06-05-2020 12:48 AM
Topics I've Started
07-08-2019
06:22 AM
No, that's not my point. My point is that my experience of wading through Splunk's documentation has repeatedly been extremely frustrating because of the abundance of blind alleys. Deprecated stuff is not always clearly marked as such. The occasional disconnect between an older piece of documentation and changed features is understandable, and usually gets corrected pretty fast once it's been pointed out on the offending page.
Getting back to Splunkbase 1810, having the compatibility discreetly marked as "Splunk Versions: 6.1" does not qualify as "clearly". A warning, ideally boxed and highlighted, should appear in the Overview or Details stating that this app is pointless with later versions of Splunk.
And MongoDB? Why is there nothing about JDBC/ODBC databases in the doc? Searching docs.splunk.com for ODBC returns precisely one (!) Documentation hit, which doesn't mention JDBC at all and is concerned only with connecting Splunk to Microsoft Excel, MicroStrategy, and Tableau.
Don't get me wrong, the overall Splunk documentation quality is top-notch, but that only makes the missteps stand out more.
07-04-2019
12:10 PM
Wading through Splunk documents about features that have been deprecated over time is a pain, as there is no timeline that explains things clearly. Case in point: Hunk.
Hunk used to be a Splunk product. Legacy documentation exists. But a history of Hunk? Tough luck. Here's what I've pieced together so far:
In 2013 (June), Splunk introduced its virtual index technology that enabled the seamless use of the entire Splunk technology stack—including the Splunk Search Processing Language (SPL)—for interactive exploration, analysis, and visualization of data stored anywhere, as if it were stored in a normal Splunk index. Hunk was the first product to exploit this innovation, delivering Splunk’s interactive data exploration, analysis, and visualizations for Hadoop. Splunk quickly opened this API to NoSQL and other data stores (reference required). In 2014, MongoDB partnered with Splunk to offer a MongoDB results provider for Hunk. By late 2016, Hunk’s functionality was completely incorporated into the Splunk Analytics for Hadoop Add-On (Splunkbase 3311) and Splunk Enterprise itself (versions 6.5+).
The question, then: what is the status of the Hunk App for MongoDB (Splunkbase 1810), one of only three MongoDB-related entries on Splunkbase? Shouldn't this app be retired? And what of the eternal question of hooking up MongoDB databases as data sources for Splunk? Shouldn't that be handled by a simple add-on or app? The latest (June 2019) post on that question points to a Unity JDBC driver for MongoDB at a vaguely suspicious URL.
06-10-2019
02:21 PM
@ddrillic "You can change the 20% number if you want." Where? Which .conf file or Splunk Web page?
05-24-2019
11:28 AM
In 7.2.6, the cmds_black_list = lookup clause is absent from the [search_optimization::projection_elimination] stanza. Either it is now built-in, or projection_elimination now handles lookups correctly.
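For reference, re-adding the exclusion by hand would look like the fragment below in a local limits.conf. This is a sketch of the setting as it appeared in earlier versions; verify the stanza and setting names against your version's limits.conf.spec before using it.

```ini
[search_optimization::projection_elimination]
cmds_black_list = lookup
```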
05-24-2019
07:29 AM
1 Karma
Why is there no standard repository for Splunk and the Universal Forwarder? It would be so much simpler to add a repository to APT's or Synaptic's list of sources, so that Splunk could be upgraded like anything else that is installed.
05-15-2019
01:13 PM
Version 4.2.4 is long dead. This link still works as of version 7.2.6: https://docs.splunk.com/Documentation/Splunk/latest/Data/Configuretimestamprecognition
03-27-2019
07:01 AM
@smitra_splunk We faced a number of constraints that did not allow use of JSON as a transmission format; the older collectd we used also limited the plug-ins we could use, which meant a few data streams would be missing from those expected by the Splunk Add-on for Linux. This second constraint is of course not a problem if you're doing your own analysis of the data streams. We were also unable to use collectd's write_graphite plug-in. We ended up using collectd's write_csv to "log" the data locally, combined with a Universal Forwarder that processed the logs and sent the events with a simulated linux:collectd:graphite sourcetype.
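For context, the collectd side of that setup looked roughly like the fragment below. This is a sketch from memory; the DataDir path is an example, and your collectd version's csv plug-in options should be checked against its man page.

```
# collectd.conf excerpt: write metrics as local CSV files for the UF to pick up
LoadPlugin csv
<Plugin csv>
    DataDir "/var/log/collectd/csv"
    StoreRates true
</Plugin>
```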
The Universal Forwarder uses a network connection to send its data, very much like write_http does, but offers several advantages despite its light footprint: it can tag metadata; it buffers, compresses and secures the data transfers; it can consolidate data; it can handle index-time transformations; and it can even do load balancing (when its data are being consumed by several Splunk indexers).
Now, your problem seems to be that collectd is sending empty JSON fields, so my first thought would be to check the collectd configuration. The transmission mode (HEC vs. http vs. TCP vs. UDP) is extremely unlikely to be at fault here. Which collectd plug-ins are you using?
07-30-2018
12:21 PM
@ww9rivers Universal Forwarders do some processing: they can run add-ons to handle source and event typing as well as index-time transformations. The inputs/props/transforms triplet of conf files can be used to do so (and I have done it). This is why I'm surprised PREAMBLE_REGEX seems to be ignored by the UF.
05-03-2018
01:46 PM
Your problem may be the default logging level, which is ERROR. For your self.logger.info and self.logger.debug invocations to make it into the job's /opt/splunk/var/run/splunk/dispatch/.../search.log, you must either globally reduce the logging level, reduce it for the appropriate channel (Settings: Server Settings: Server Logging, or /opt/splunk/etc/log.cfg if you can identify the pertinent channel; I certainly can't), or issue self.logger.logging_level = 'DEBUG' in your command's generate method (or equivalent; generate is for GeneratingCommand subclasses).
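The underlying behavior is just standard Python logging: a logger set to ERROR silently drops INFO and DEBUG records. A minimal stand-alone illustration of that principle (this is not the Splunk SDK itself; the logger name and in-memory handler are made up for the demo):

```python
import io
import logging

# Wire a logger to an in-memory stream so we can inspect what it emits.
buf = io.StringIO()
logger = logging.getLogger("demo_search_command")  # hypothetical name
logger.addHandler(logging.StreamHandler(buf))

logger.setLevel(logging.ERROR)    # the default situation described above
logger.info("dropped message")    # discarded: INFO < ERROR
logger.debug("dropped message")   # discarded: DEBUG < ERROR

logger.setLevel(logging.DEBUG)    # equivalent of logging_level = 'DEBUG'
logger.info("now visible")        # emitted

print(buf.getvalue())
```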
04-30-2018
11:31 AM
There hasn't been a curl command in JKat's toolkit (jkats-toolkit_006) since December 2016. The only remaining commands are motd, decimaltoip, and randomint. The curl and urlencode commands have moved to TA-Webtools (https://splunkbase.splunk.com/app/3420/).
04-23-2018
08:04 AM
(Late comment) Depending on what you're trying to do, you could also use SEDCMD in props.conf to throw away the parts of the events that you don't want indexed.
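For example, a props.conf stanza along these lines would strip the unwanted part of each event before it is indexed. The sourcetype name and the sed pattern are hypothetical placeholders:

```ini
[my:sourcetype]
# Remove trailing debug chatter from every event before indexing
SEDCMD-strip_debug = s/DEBUG:.*$//g
```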
04-13-2018
12:00 PM
Can't we just add an <html> element to the <dashboard> instead of converting it to HTML? And, pardon my ignorance, how does one attach the JavaScript to the button? I tried adding a <javascript> element after or inside the <button>, without success.
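For the record, the structure I was attempting looks like this in Simple XML (a sketch; the row/panel layout and the button id are illustrative, and the open question above is how to bind a handler to it):

```xml
<dashboard>
  <row>
    <panel>
      <html>
        <button id="my_button">Click me</button>
      </html>
    </panel>
  </row>
</dashboard>
```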
03-09-2018
08:13 AM
You mean Support Portal, I presume? That is a valid approach for someone with an active Support Contract or entitlement, but will not work for a prospective customer merely browsing the Splunk Web pages. I was expecting a "Report a problem with this web page" button.
03-09-2018
08:04 AM
(I know this isn't a question, but since the contact page only leads to Sales or to phone numbers, I'm using this platform instead)
On the https://www.splunk.com/getsplunk/cloud_trial page, the "Splunk Cloud FAQ" link points to http://docs.splunk.com/Documentation/SplunkCloud/SplunkCloud/FAQs/FAQs but that page does not exist. You end up bounced back to http://docs.splunk.com/Documentation. A search of the Documentation pages yields no hits for "Splunk Cloud FAQ". The closest match I was able to find is https://www.splunk.com/en_us/legal/terms/splunk-cloud-terms-of-service.html ("Splunk Cloud Terms of Service FAQs"), which is a partial hit (it does not discuss the free trial FAQs).
- Tags:
- splunk-cloud
02-08-2018
06:53 AM
I don't see why you could not change the port settings. On the server, Settings: (System) Server settings: General settings lets you choose the management port (normally 8089) as well as the others. I'm not sure if changing this will automatically adjust the forwarder settings or if you'll have to edit the latter manually (/opt/splunkforwarder/etc/system/local/outputs.conf).
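For reference, the forwarder side of a data connection lives in outputs.conf, along these lines. The group name, hostname, and port are examples; note that this is the receiving port for event data (9997 by convention), which is distinct from the 8089 management port:

```ini
[tcpout:primary_indexers]
server = indexer.example.com:9997
```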
02-08-2018
06:42 AM
The shell command line find /opt/splunk -name indexes.conf will find all instances of indexes.conf in the Splunk directory tree.
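The same find pattern can be sanity-checked on any throwaway directory tree, for example (paths here are a disposable demo, not real Splunk locations):

```shell
# Build a toy tree containing two indexes.conf files, then count the hits.
tmp=$(mktemp -d)
mkdir -p "$tmp/etc/system/local" "$tmp/etc/apps/search/local"
touch "$tmp/etc/system/local/indexes.conf" "$tmp/etc/apps/search/local/indexes.conf"
n=$(find "$tmp" -name indexes.conf | wc -l)
echo "$n"   # prints the count (2 here)
rm -rf "$tmp"
```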
02-08-2018
06:33 AM
I'd first check that the forwarder is registered with the indexer: does it show up in Settings: (Distributed environment) Forwarder management? Next, check what the forwarder is watching. Finally, check whether the 'new' index exists (Settings: (Data) Indexes).
02-08-2018
06:27 AM
You could have a UF on COB working on behalf of MBS01; it would merely need to watch some file, which could be updated by a script on COB. That seems a roundabout way of doing things, however.
02-07-2018
02:02 PM
Settings: (Distributed environment) Forwarder management will give you all those that have registered with the Splunk instance.
In order to register, each forwarder must run this command line:
splunk set deploy-poll <hostname or ip_address>:<management port>
The <management port> defaults to 8089. The registration information ends up in /opt/splunkforwarder/etc/system/local/deploymentclient.conf, something like:
[target-broker:deploymentServer]
targetUri = <hostname or ip_address>:<management port>
[deployment-client]
clientName = <client_name>
02-07-2018
01:44 PM
Could you describe this in more detail? A sample set of events would do wonders. Also, do you want to do this at index time or at search time?
02-07-2018
11:15 AM
1 Karma
Maybe this will help: https://answers.splunk.com/answers/466921/
02-07-2018
10:02 AM
1 Karma
It looks okay, but just to be sure I'd write it as (?<Property>[^=]+)=(?<Value>.+)
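In Python's re dialect, which spells named groups (?P<name>...) rather than the (?<name>...) form Splunk's PCRE accepts, the suggested pattern behaves like this (the sample input string is made up):

```python
import re

# Same extraction as above, in Python's named-group syntax.
pattern = re.compile(r"(?P<Property>[^=]+)=(?P<Value>.+)")

m = pattern.match("max_users=250")
print(m.group("Property"), m.group("Value"))  # max_users 250
```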
02-07-2018
09:21 AM
I consider this solved now. I use collectd's csv plug-in to write the metrics to a local directory (cleaned by a weekly cron job that deletes older files), and I have a Splunk Universal Forwarder massage the events into the linux:collectd:graphite format before sending them to the indexer/search head as such. Contact me for the details.
02-07-2018
08:51 AM
You can write props.conf and transforms.conf in /opt/splunk/etc/deployment-apps/_server_app_<server_class>/local (alongside inputs.conf), making sure the props.conf [<sourcetype>] and [source::<source>] stanzas specify force_local_processing = true. When ready, issue the command line splunk reload deploy-server to deploy these to the forwarders; they will then do the parsing (and the accompanying SEDCMD and TRANSFORMS) instead of the indexer. See https://answers.splunk.com/answers/615924/ for a detailed example.
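Concretely, the props.conf deployed to the forwarders would contain something like the fragment below. The sourcetype name, sed pattern, and transform name are placeholders; only the force_local_processing setting is the essential part described above:

```ini
[my:sourcetype]
# Make the Universal Forwarder run the parsing pipeline locally
force_local_processing = true
SEDCMD-scrub = s/password=\S+/password=####/g
TRANSFORMS-route = my_transform
```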
02-07-2018
08:43 AM
As somesoni2 indicated, the solution is to issue the command line splunk reload deploy-server on the main instance (the deployment server). There is apparently no such facility in the Splunk Web pages.