Activity Feed
- Got Karma for Re: My Splunk userid doesn't work with Add-On Builder validation site "Login to the App Certification service failed". 04-19-2021 01:46 AM
- Karma Re: What is the basic difference between the lookup, inputlook and outputlookup commands for acharlieh. 06-05-2020 12:48 AM
- Karma Re: Where and how do I add the multisite attribute for a search head? for aholzer. 06-05-2020 12:48 AM
- Karma How can I remove the error "Unable to initialize modular input "jmx" defined inside the app "SPLUNK4JMX"" received on our search heads? for michael_sleep. 06-05-2020 12:48 AM
- Karma Re: How can I remove the error "Unable to initialize modular input "jmx" defined inside the app "SPLUNK4JMX"" received on our search heads? for lycollicott. 06-05-2020 12:48 AM
- Karma Re: Accessing apps in 6.4.x results in "Error connecting: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed" for jbarlow_splunk. 06-05-2020 12:48 AM
- Karma Re: Why am I getting "Invalid key in stanza [lookup:cam_category_lookup] in E:\Splunk\etc\apps\Splunk_SA_CIM\default\managed_configurations.conf, line 34: expose (value: 1)" for rpille_splunk. 06-05-2020 12:48 AM
- Karma What is the basic difference between the lookup, inputlook and outputlookup commands for janiceb. 06-05-2020 12:48 AM
- Got Karma for Why does the sendemail command fail and display "('system library', 'fopen', 'No such file or directory')" error?. 06-05-2020 12:48 AM
- Got Karma for Why am I unable to add hosts with more than one Splunk instance to the Distributed Management Console?. 06-05-2020 12:48 AM
- Got Karma for Struggling with inputs.conf and conflicting rules. 06-05-2020 12:48 AM
- Got Karma for Why does the sendemail command fail and display "('system library', 'fopen', 'No such file or directory')" error?. 06-05-2020 12:48 AM
- Got Karma for Re: Why does the sendemail command fail and display "('system library', 'fopen', 'No such file or directory')" error?. 06-05-2020 12:48 AM
- Got Karma for Re: Why does the sendemail command fail and display "('system library', 'fopen', 'No such file or directory')" error?. 06-05-2020 12:48 AM
- Got Karma for Re: Why does the sendemail command fail and display "('system library', 'fopen', 'No such file or directory')" error?. 06-05-2020 12:48 AM
- Got Karma for Re: Why does the sendemail command fail and display "('system library', 'fopen', 'No such file or directory')" error?. 06-05-2020 12:48 AM
- Got Karma for Re: Why does the sendemail command fail and display "('system library', 'fopen', 'No such file or directory')" error?. 06-05-2020 12:48 AM
- Got Karma for Re: Why does the sendemail command fail and display "('system library', 'fopen', 'No such file or directory')" error?. 06-05-2020 12:48 AM
- Got Karma for Re: Why does the sendemail command fail and display "('system library', 'fopen', 'No such file or directory')" error?. 06-05-2020 12:48 AM
- Got Karma for Re: Why does the sendemail command fail and display "('system library', 'fopen', 'No such file or directory')" error?. 06-05-2020 12:48 AM
02-15-2019 06:29 PM
Thanks for the info.
Great suggestion. I'll give that a shot.
Thanks
02-15-2019 11:19 AM
Hello.
I've been working on a case with Splunk support for a week or two that involves the receiving port on one or more indexers getting plugged up and, for a while, not accepting new events from transmitting universal forwarders.
I won't go into all the details of that case, but I need to collect additional netstat information for the very intermittent times this happens. I have some other non-Splunk-y ways I could do this, but processing the results would be easiest if they were in Splunk. Since this is intermittent, continuous collection would be far more data than I'd need, but whatever might be easiest...
If I were to use the Splunk App for *nix, and its netstat script, to gather this information on indexers, what happens when this receiver port issue occurs? Does the output from a generating script somehow depend on the receiver port (9997), or in the case of a local event source on an indexer, is this handled internally? If it depends on the receiver port somehow, then I definitely need to go with another approach.
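For reference, my understanding is that the TA gathers this via a scripted input that runs netstat locally on an interval, so a rough sketch of the kind of inputs.conf stanza involved would be (the interval and index values here are illustrative, not necessarily the TA's shipped defaults):
[script://./bin/netstat.sh]
# interval/index below are illustrative, not the TA's shipped defaults
interval = 60
sourcetype = netstat
source = netstat
index = os
disabled = 0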
Thanks
01-31-2019 07:08 AM
Got it. Sadly.
Thanks very much for the education there. I'll see if we can agree to change how this is written out.
Thanks
01-31-2019 06:49 AM
(Not disagreeing, just trying to understand better...)
I think I understand your point, but I wish I could at least figure out the path this takes to set that.
Since the hostname is getting set somehow, and is being picked out as the field right after the timestamp, it must be coming from somewhere. I just can't figure out where.
If I'm not parsing the hostname directly, wouldn't it then be setting the hostname to the local host these events came from, rather than taking it from the event itself?
Thanks
01-31-2019 06:29 AM
Hmm. No, not doing anything with transforms.
On the UF, the monitor stanza looks like:
[monitor:///var/something/something/]
sourcetype = syslog
whitelist = /syslog_\d{8}$
recursive = true
index = index_for_that_syslog_stuff
ignoreOlderThan = 90d
So the sourcetype is getting set there.
I started wondering how it actually figures out the hostname, rather than, say, using the local hostname where the source file lives.
I did not write this stuff, but the UF deployment also has a props.conf and a transforms.conf. The props.conf is for other sourcetypes, none of which would match syslog, and of course nothing in transforms would match either. And really, because the UF isn't the parsing layer, I think a lot of this is just ignored anyway.
This leads me to think that something is grabbing that syslog sourcetype and picking out the field after the date and taking it as the hostname.
FYI, the filename is .../syslog_YYYYMMDD
I looked on the indexer and the closest things that match either the sourcetype or the source are
[delayedrule::syslog] -- built-in but should only be called if the sourcetype isn't set
[source::....syslog] -- from Splunk_TA_nix's props.conf, doesn't match and doesn't set TZ
[source::.../syslog(.\d+)?] -- built-in, doesn't match filename
[syslog] -- from Splunk_TA_nix's props.conf, doesn't set TZ and host:: should take precedence, no?
[syslog] does have some host-related transforms, but they're all "REPORT"
I was primarily looking for a source:: stanza that might override any host:: TZ I'd set. It's my assumption that all 3 (source, host, sourcetype) are merged in that sequence such that any TZ settings I had for host are then merged "down" to sourcetype.
Still rather befuddled. The btool output definitely shows me the correct TZ for that host in props.conf on one of the indexers.
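For reference, the btool invocations I've been using to check this (the IP below is a placeholder, not one of our real hosts):
splunk btool props list syslog --debug
splunk btool props list host::1.2.3.4 --debug
The --debug flag prints the file each setting comes from, which is how I confirmed the host:: TZ lines really are landing on the indexer.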
Thanks
01-31-2019 06:04 AM
Data is coming from a universal forwarder to an index cluster member. Yes, I made one "app" under master-apps and made it specifically for timezone setting and nothing else.
No heavy forwarder involved.
As it wasn't working, I did the 'btool' noted above on an indexer to confirm that those settings were getting to the indexer(s).
01-31-2019 06:00 AM
Interesting thought, but unfortunately, no. There are a LOT of these IPs. I'm using MetaWoot and trying to clean up the glut of latency issues. We've never added this many hosts to any config file, let alone transforms. (Ooops).
01-30-2019 07:56 PM
They’re all just as simple as
[host::1.2.3.4]
TZ = America/New_York
and
[host::2.3.4.5]
TZ = Europe/London
And so on.
And yes, whoever set up the rsyslogd on the server where this file gets written does not look up the FQDN, and thus all the hostnames are the IPs 😕
Thanks
01-30-2019 02:30 PM
Hi,
I've got a problem that's driving me crazy. There is a source we're reading via a universal forwarder that is the aggregated syslog output from a whole bunch of servers. This means that some of the lines represent servers in different timezones, depending on the host. Yeah, I know, not so great, but it's not within our control or influence.
I have been creating [host::] stanzas in a props.conf on our indexer cluster master and setting the TZ per host, such as "TZ = America/New_York". If I go to one of the indexers and
splunk btool props list --debug
I can see the host entries I made.
However, the events are still being indexed as if they are in the local time of the indexer. The sourcetype here is 'syslog', but I know that "host::" should override the sourcetype stanza in props.conf. I hunted around for a "source::" stanza that I might not know about that matches, and I can't find one anywhere.
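For completeness, a sketch of what I'm pushing (the app name and IP here are placeholders, not my real ones):
$SPLUNK_HOME/etc/master-apps/tz_settings/local/props.conf:
[host::1.2.3.4]
TZ = America/New_York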
I'm not sure where to go from here, but any help would be appreciated. I hope I'm missing something obvious...
Thanks
12-13-2018 07:56 AM
I was thinking of this mostly in the abstract so I forgot that DATETIME_CONFIG is already set to CURRENT for this particular file.
We have no Splunk infrastructure between the UF and the indexer(s) so no intermediate forwarder stop before events hit the indexer.
12-13-2018 06:24 AM
Would setting it to "CURRENT" just end up hoping that the file isn't read/indexed too quickly? If there's, say, a 210K CSV file that Splunk sees and slurps up as a batch, it's going to go from UF to indexer pretty quickly. I can't imagine, say, 5 seconds of latency in there unless I had a few heavy forwarders between the UF and the indexer.
As it is, Splunk is wasting some time attempting to parse a timestamp out of each line. It fails, then uses the time it read the line. That time wasted would potentially increase the "spread" of time the events were seen even if it's very slight.
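For anyone following along, the setting being discussed is just a props.conf entry on the sourcetype, something like this sketch (the sourcetype name is made up):
[my_csv_data]
# CURRENT = stamp each event with the indexing time instead of parsing one out
DATETIME_CONFIG = CURRENT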
Unfortunately, they do a lot of delta calculations in their reports, so just looking at the current day's information isn't enough, and a lookup table wouldn't really work.
Thanks
12-10-2018 01:51 PM
Hi,
We're currently indexing a number of CSV files that are all generated output from someone else's script. These files have no timestamps for "events" and are truly CSV data. Several of them have more than 100K lines and since Splunk is creating the timestamp at the time it reads the file, I regularly get
WARN DateParserVerbose - The same timestamp has been used for 100K consecutive times. If more than 200K events have the same timestamp, not all events may be retrieveable. Context: source=/
Everything must be staying between 100K and 200K, because as far as I can tell no events are unsearchable. So far. I don't really expect these files to grow, but I think it would be good to plan as if they will.
This is Splunk 7.1, by the way.
I don't think I could track down the owner of the scripts generating these CSV-data files and then get them to modify their scripts to add a fake, but incrementing timestamp to each row. Even I wouldn't want to do that...
The only other solution I can think of would be to change to a scripted input that read the file and generated bogus timestamps as it read the rows into Splunk. I'm not crazy about that solution either.
Is there some other potential solution that members of the community might suggest?
Thank you
07-17-2018 11:43 AM
That did it. Thanks very much, somesoni2.
This one falls into another "why didn't I think of that?" category for me. 🙂
07-17-2018 08:43 AM
I'm trying to use a search that looks like
index=<index> sourcetype=<sourcetype>
| eval site=<site>
| lookup host_and_site_coords site OUTPUT host AS siteHost
| search host=siteHost
That first 'eval' for 'site' is there because it's passed in as a token, but in the normal search it's not necessary. Just needed for troubleshooting this.
My problem is that everything works as expected up to the final 'search' command. That is, the lookup works and creates siteHost as I'd expect. The search command doesn't seem to get the value, however, resulting in what seems like ' search host="" '.
I know that with subsearches you can't pass in an eval'd value, but I didn't think that applied here.
Or maybe I'm missing something really obvious...
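One thing I'm going to try in the meantime: my understanding is that where evaluates both sides as fields, while search treats the right-hand side as a literal string, so a sketch like this might behave differently (not sure it's the right fix):
index=<index> sourcetype=<sourcetype>
| eval site=<site>
| lookup host_and_site_coords site OUTPUT host AS siteHost
| where host=siteHost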
Thanks
07-09-2018 01:12 PM
1 Karma
After working through the Splunk support ticket, the issue seemed to be that the validation site doesn't like it if your password contains a colon. I was told that there's already an open issue about that.
I changed my password to not contain a colon and could run the validation.
FYI.
07-03-2018 10:35 AM
I opened support ticket 946120 just a few minutes ago.
Thanks
07-03-2018 07:47 AM
I seem to be unable to connect to the Splunk Add-On Builder site to validate my package. I can't make the Test button on the credentials page pass.
We do have a proxy, but I've tried the configuration both with and without. I am 100% certain that this is my correct splunk.com id and password. I've just logged in to Splunk Answers (obviously 🙂 ) and the Splunk download site without issue. I keep getting
Login to the App Certification service failed. Verify your username and password and try again.
As for the proxy, there is a hostname and a port number. There is no username/password for our proxy server so I leave those blank.
splunk_app_addon-builder_ta_builder_validation.log shows me
2018-07-03 09:36:53,178 INFO pid=17325 tid=CP WSGIServer Thread-19 file=app_cert.py:_get_app_cert_conf:264 | Get the app_cert settings.
2018-07-03 09:36:54,212 ERROR pid=17325 tid=CP WSGIServer Thread-19 file=app_cert.py:get_token:140 | Authentication failed for App Certification.
splunk_app_addon-builder_ta_builder.log shows
2018-07-03 09:36:52,901 INFO pid=17325 tid=CP WSGIServer Thread-19 file=builder_ta_extraction.py:__init__:61 | Init Splunk Regex Loader...
2018-07-03 09:36:54,212 DEBUG pid=17325 tid=CP WSGIServer Thread-19 file=rest_client.py:send_request:130 | Failed to send rest request=https://api.splunk.com/2.0/rest/login/splunk, errcode=401, reason=Authentication failure, invalid access credentials.
I also tried this on my home server which has no proxy and is behind a firewall. I got the same result when trying to connect with my id.
Splunk 7.1.1 on Linux (SLES 12.1). Add-On Builder 2.2.0
I looked at other answers that seemed to be somewhat related. All of those people seemed to have some situations where they could connect successfully.
I looked at the documentation again and it didn't seem to say that I needed any special privileges to get this access other than my regular splunk.com account.
Any help would be greatly appreciated.
Thanks
05-23-2018 12:06 PM
niketnilay,
Wow. That's pretty cool. Yet another command I've never heard of (gentimes).
What I was hoping for was to be able to do reporting per period. The partly hypothetical example rattling around in my head would be to average the total daily number of users per period. So like, maybe I want to show the first 2 periods of the year, or explicitly periods 3, 4 and 5. And ideally have them labelled with the period name.
While the application group hasn't explicitly asked for this, they've hinted at it. Mostly what these guys do is generate a daily PDF that they send off to their leadership showing their application usage. Currently they're doing it monthly and weekly.
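To make that concrete, the shape of the report I'm imagining is something like this sketch (the user field name is made up, and it assumes periods are 28-day blocks counted from Jan 1):
index=<index> sourcetype=<sourcetype>
| bin _time span=1d
| stats dc(user) AS daily_users BY _time
| eval period = "P" . tostring(floor((_time - relative_time(_time, "@y")) / (28*86400)) + 1)
| stats avg(daily_users) AS avg_daily_users BY period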
Thanks again, for all of your efforts on this. Greatly appreciated.
05-22-2018 02:35 PM
That makes a lot of sense.
I'm still sort of stuck with the same problem though, aren't I? With the exception of a few commands, I have to start with a search; in this case a pretty broad search, like maybe @y, and then run another search in the pipeline to limit it only to the calculated time range.
Thanks
05-21-2018 07:30 AM
I put a little more detail in my comment above to niketnilay.
Sometimes I wish SPL were a bit more like a scripting language in that I could do some eval calculations (i.e. like setting variables) before I ran the search.
While the dates do seem arbitrary, computing them from Jan 1 isn't really that hard, as they are definitely every 28 days. And based on that, p3_start would be 3*28. Not sure if I can say earliest=@y+28d and latest=@y+55d.
I have a rough idea of what you're suggesting, but I'm still not sure I can imagine the SPL for that. Could you provide something kind of pseudo-code-ish?
Thanks!
05-21-2018 07:23 AM
niketnilay,
I probably needed to be a bit more explicit in my question. I'd go back and modify it, but I can't seem to figure out how to do that.
Here's an example of some periods this year
P2 = 1/29 - 2/25
P3 = 2/26 - 3/25
P4 = 3/26 - 4/22
P5 = 4/23 - 5/20
and ultimately I'm interested in reporting on these periods. It would be awesome if I could just use time modifiers ("earliest=-2P@P", say), but ultimately it would be nice to be able to do a "BY" on a transforming command.
I don't think I follow you on the lookup method. Could you give me an example?
In thinking about this last week, it occurred to me that I could take an approach of counting forward from the start of the year (yeah, periods do at least start on 1/1), and then identify 4-week increments, but that would mean I'd have to pull back the entire year only to filter that down again to just what I wanted. Given that range, that could be expensive. Roughly, something like the sketch below.
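A sketch of the SPL I have in mind for that (expensive) counting-forward approach, picking out periods 3 through 5 (index and sourcetype are placeholders):
index=<index> sourcetype=<sourcetype> earliest=@y
| eval period = floor((_time - relative_time(_time, "@y")) / (28*86400)) + 1
| search period=3 OR period=4 OR period=5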
Thanks very much!
05-18-2018 11:25 AM
Wheels were made to be re-invented. I believe it was someone from marketing who once said that.
05-18-2018 11:24 AM
Thanks, niketnilay. That would change the pull-down search ranges, but it wouldn't help me report on them. That is, "show me the average X from P2".
On the one hand, I suppose one could construct something that starts on 1/1 and then rolls forward 4 weeks * the period #, where P2 = 4*2. In that case you'd have to pull back a lot of events and then filter them out.
What I think would be the nicest of the pain-in-the-butt mechanisms would be if I could create a lookup table that defined a manual time range for each period for that year. Yeah, I'd have to do it manually once per year. Unfortunately, that would mean a lookup table value that is actual SPL ("earliest=X latest=Y") that I'd need to have executed, and if Splunk allows that I haven't seen it.
I actually need that lookup as SPL capability for something else I'm working on.
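Actually, writing that out makes me wonder whether a subsearch covers the "executed as SPL" part: my understanding is that if a subsearch returns fields literally named earliest and latest (as epoch times), they get folded into the outer search as time bounds. A sketch, assuming a lookup file named period_dates with period, earliest, and latest columns:
index=<index> sourcetype=<sourcetype> [ | inputlookup period_dates where period="P2" | fields earliest latest ]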
05-17-2018 10:53 AM
I have this exact same problem. I logged out of a splunk.com site and logged back in and confirmed that my splunk.com credentials are good.
I'm using add-on builder 2.2.0.
We do have a proxy for web traffic, but that is already configured on the server. That is, if I were to use curl or wget, they would get the system proxy values and work fine. I also tried manually entering the proxy information but that didn't help either.
I have output similar to the output above, but also see
2018-05-17 12:45:49,776 DEBUG pid=18445 tid=CP WSGIServer Thread-6 file=meta_manager.py:set_app_meta_data:373 | Set state. app:TA-web-ssl-logger, key:validation, value:{}
I also tried removing all but the app pre-certification validation options and still get the same result.
I wouldn't think it would matter, but my password is a bit long and complex; that's the only thing I can possibly think of.
Thanks
05-10-2018 09:11 AM
I'm wondering if there isn't some way to use custom relative times in Splunk. I suspect not, but I thought I'd ask.
Here's the scenario. My company uses a time delineator called a 'period' that is 4 weeks long. This does not align on month boundaries, and even better, it doesn't start at the same times every year. People have been asking me if there's some way they can run a search that is effectively
<search stuff> earliest=-2P@P latest=@P
Where P is this custom 'period' thing. The pseudo search above would show events for the previous 2 periods.
I haven't seen anything in either Answers or the docs indicating that this is doable using the convention above. That is, defining something other than 'w', 'd', 'mon' etc for relative time.
Given that that's not possible, I'm assuming the next approach would be to set up a lookup table that I populate every year showing start and stop dates for each period, and come up with some fancy SPL to read that table and do some date calculations to determine the periods. I haven't looked into that last bit yet, but it's floating around in my mind now :-).
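If I go that route, the lookup itself would be something simple I regenerate once a year, along the lines of this sketch (a CSV I'm calling period_dates.csv, with epoch start/end values; the names and values are illustrative):
period,earliest,latest
P1,1514782800,1517202000
P2,1517202000,1519621200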
Thank you