Activity Feed
- Got Karma for Re: Cannot connect remotely to Splunk Web interface. Tuesday
- Got Karma for Re: How to make multivalue fields parse in props.conf and transforms.conf?. 08-25-2024 07:37 PM
- Got Karma for Re: How to make multivalue fields parse in props.conf and transforms.conf?. 12-08-2023 07:11 AM
- Got Karma for Interesting... passwd file over rules user-seed.conf. 06-05-2020 12:50 AM
- Karma Re: How do I find what is causing my typing queue blockage? for hrawat. 06-05-2020 12:49 AM
- Karma Re: Why does the regex work in search but not in props.conf? for 493669. 06-05-2020 12:49 AM
- Karma Re: Why does the regex work in search but not in props.conf? for cpetterborg. 06-05-2020 12:49 AM
- Karma Re: Why does the regex work in search but not in props.conf? for somesoni2. 06-05-2020 12:49 AM
- Got Karma for Re: Can you help me with the following KV Store error: "Detected unclean shutdown - /home/dbindex/kvstore/mongo/mongod.lock is not empty". 06-05-2020 12:49 AM
- Got Karma for Re: On a heavy forwarder, can I forward a subset of data to syslog and drop everything else?. 06-05-2020 12:49 AM
- Got Karma for Waiting for web server to be available for over 30 minutes. 06-05-2020 12:49 AM
- Got Karma for FireEye error message: "Could not load configuration" - why?. 06-05-2020 12:49 AM
- Got Karma for FireEye error message: "Could not load configuration" - why?. 06-05-2020 12:49 AM
- Got Karma for FireEye error message: "Could not load configuration" - why?. 06-05-2020 12:49 AM
- Karma Re: How to create multiple source types from a single log file? for somesoni2. 06-05-2020 12:48 AM
- Got Karma for How to output raw results from the Splunk Python SDK when using pagination?. 06-05-2020 12:48 AM
- Got Karma for Re: Splunk 6.6.1 stops monitoring files. 06-05-2020 12:48 AM
- Got Karma for Re: How to parse the full Windows DNS Trace logs?. 06-05-2020 12:48 AM
- Got Karma for How to use extract kvdelim and pairdelim to parse all key value pairs in my sample data?. 06-05-2020 12:48 AM
- Got Karma for Re: How to output raw results from the Splunk Python SDK when using pagination?. 06-05-2020 12:48 AM
Topics I've Started
02-04-2020
01:51 PM
Sorry, I should have been clearer. @nickhillscpl has better wording.
01-20-2020
06:58 AM
No, we did not.
And for the record, the DB connection has stopped working and I haven't had a chance to figure out if it is a problem on the Splunk side or the ePO side.
Not sure if others have seen this; it didn't turn up in my searches...
I have a 7.3.3 instance where I forgot the admin password. So I created $SPLUNK_HOME/etc/system/local/user-seed.conf and restarted, but I couldn't log in with the new password. The user-seed.conf file was also still present after the restart.
Turns out there was still a $SPLUNK_HOME/etc/passwd file (presumably left over from previous upgrades). I moved it to $SPLUNK_HOME/etc/passwd.bak, restarted, and then Splunk used the user-seed.conf file to reset the admin password.
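For anyone who needs the file format, the user-seed.conf was just the standard two settings (the values below are placeholders, not my real ones):
[user_info]
USERNAME = admin
PASSWORD = <your new admin password>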
Hope this helps someone else...
11-22-2019
11:41 AM
OK, this is how I got things to work. I used:
REGEX = \[.+?\]\s+\w+\s+.+?sub.+?domain.+?com
I think I got that syntax from somewhere, but I can't find the reference....
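For completeness, that line sits in the usual nullQueue pairing; mine ended up roughly like this (the sourcetype name is a placeholder for my DNS sourcetype):
props.conf (on the parsing tier, i.e. the HF/indexers):
[my:dns:sourcetype]
TRANSFORMS-dropdomain = dropdomain
transforms.conf:
[dropdomain]
REGEX = \[.+?\]\s+\w+\s+.+?sub.+?domain.+?com
DEST_KEY = queue
FORMAT = nullQueue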
11-21-2019
11:46 AM
First, thank you for the tips.
And this is where I should have reviewed my post. I actually had
sub\.domain\.com
and
sub\(6\)domain\(3\)com
but I missed that the forum reformatting stripped the backslashes after I posted.
Your answer helps. I will try both of those things.
11-21-2019
11:03 AM
I'm collecting DNS logs and I'm trying to drop all logs with sub.domain.com as the query. In my transforms.conf I have the following:
[dropdomain]
REGEX = sub.domain.com
DEST_KEY=queue
FORMAT=nullQueue
But those domains still show up in my index. I have this on both the HF and the Indexer for that sourcetype.
I am also collecting logs from the Windows DNS debug log. As you know, those come across in (#)string(#)string(#)string(#) format, so when the domain above comes through one of those logs I have (3)sub(6)domain(3)com(0) in my log. I'm trying to drop those as well, and here is my transforms.conf for that log:
[dropdomain]
REGEX = sub(6)domain(3)com
DEST_KEY=queue
FORMAT=nullQueue
But that isn't working either. Is my syntax correct? Do I need to escape the periods or not? Do I escape the parentheses or not?
Thanks.
(I'm sure this question has been asked before, but I have not found the right google-fu to get the answer)
11-04-2019
08:56 AM
The document on how indexing works was great. I put the props.conf on the indexers and that did the trick.
Thanks. And yes, I know @arjunpkishore5 also mentioned that link. Thanks to everyone.
Even after using Splunk for 7 years, I'm still learning...
10-28-2019
11:14 AM
There is a lot of information about the order of precedence of props.conf within a single Splunk server, but what about between servers? I assumed it would be the last server in the chain (i.e. the SH), but I am having a problem where the time extraction works great in one place but not on the SH, despite the props.conf being the same. Let me explain.
I have this sequence: UF -> IDX cluster -> SH cluster.
I used the input wizard on one of the IDX servers to help create the props.conf file. The main reason for doing it this way was to get the correct config for timestamp recognition (https://docs.splunk.com/Documentation/SplunkCloud/7.2.7/Data/Configuretimestamprecognition). That config works just fine for any data of that sourcetype uploaded directly to that IDX server.
I then copied that working props.conf to the UF AND pushed it out to the SH cluster.
But when I start indexing data via the UF, the timestamp field is ignored and Splunk uses the time of index instead.
So do I need to push the props.conf out to all the indexers also?
Is there any possibility that something else (an old props.conf) on the SH is overriding the one from the UF?
If so, how would I find out?
Can btool pull from multiple servers to determine configuration conflicts?
Additional troubleshooting tips appreciated. Thanks in advance.
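For what it's worth, the only per-server check I know of is running btool locally on each box, something like the following (the sourcetype name is a placeholder):
$SPLUNK_HOME/bin/splunk btool props list my:sourcetype --debug
The --debug flag shows which file each setting on that particular server is coming from.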
10-24-2019
05:36 AM
I have a couple of Python scripts that use the SDK to pull search results; they have been running for a couple of years. This week I'm upgrading those scripts from Python 2.7 to 3.7. I also upgraded the SDK from 1.6.0 to 1.6.6. I'm on Splunk 7.2.3.
But suddenly the special account created for the script gets logon denied errors.
I validated that the username/password works by logging in with a browser, first on port 8000 on the SH and then on port 8089. On port 8089 I click on services, type in the username and password, and I am granted access.
However, when I run the script, it raises the following:
AuthenticationError("login failed.",he)
splunklib.binding.AuthenticationError: Login failed
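For reference, the connection part of the script is just the stock SDK pattern, roughly like this (the host name and service account are placeholders):
import splunklib.client as client
# placeholder host and credentials for the service account that now fails to log in
service = client.connect(
    host="splunk-sh.example.com",
    port=8089,
    username="svc_search",
    password="xxxxxxxx")
print(service.info["version"])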
If I use my admin account, of course everything works.
The special account has user/power permissions. Is there another setting/permission I need to enable?
Thanks
08-08-2019
08:45 AM
Just a quick note that this config helped fix my issue.
08-07-2019
05:56 AM
Thanks. This solution worked for me as well....
07-16-2019
07:36 AM
Creating this config file worked.
04-08-2019
08:35 AM
It wasn't resolved. I upgraded the server, or rather I migrated to a new server running Windows 2016.
My issue was that it was running on Windows 2008. If you are running on that OS, you will need to upgrade. If not, then there is a different problem and you will need to either contact Support or open a new forum post.
sorry...
01-17-2019
05:46 AM
Thanks.
I'm going to mark this as answered, but if anyone else has suggestions, that would be much appreciated.
01-15-2019
12:25 PM
I have a search that starts off with
| metadata type=hosts ....
The problem is that the results include a host that is not sending logs and should not be sending logs. A search for the host in Splunk turns up nothing.
So my question is: why is it showing up in metadata if no logs are being collected (or have ever been collected) from that host?
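For anyone reproducing this, a quick way to see what metadata has recorded for a single host is something like the following (the index and host names are placeholders):
| metadata type=hosts index=* | search host=thehost | convert ctime(firstTime) ctime(lastTime) ctime(recentTime)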
- Tags: splunk-enterprise
01-09-2019
11:10 AM
My first thought was WTH?
But here are some of the things I tried:
https://answers.splunk.com/answers/369810/python-alert-script-fails-and-i-cant-see-errors-in.html
https://answers.splunk.com/answers/559456/script-for-lookup-table-whois-returned-error-code.html
But ultimately the information in the error clued me in. The Linux server at work where I was running the frequency analysis server had iptables enabled, and I had configured it to allow my SH to connect on the exact port the frequency analysis server was listening on. But what I didn't know, and isn't mentioned here (https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Configureexternallookups), is that the script is actually run from the search peers (in my case, the indexers). Since the search peers were not permitted by the iptables rules to connect to the frequency analysis server, the script was blocked from connecting and thus it failed.
Once I opened up iptables to allow the search peers access to the frequency analysis server, everything worked.
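The rules I added were roughly like these (the IPs stand in for my indexers and the port for the freq_server listener):
iptables -A INPUT -p tcp -s 192.0.2.11 --dport 10004 -j ACCEPT
iptables -A INPUT -p tcp -s 192.0.2.12 --dport 10004 -j ACCEPT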
Maybe someone from Splunk can better explain, but apparently the script is copied from the SH to one of the following locations (or both) on each search peer and run from there:
$SPLUNK_HOME/var/run/searchpeers/<servername>-#########/searchscripts/
or
$SPLUNK_HOME/var/run/searchpeers/<servername>-#########/system/bin/
Hopefully this helps someone else troubleshoot their scripted lookups....
01-09-2019
11:09 AM
So I created a way to use Mark Baggett's freq_server script as a lookup and blogged about it here (http://shadowtrackers.net/blog/get-your-freq-on-in-splunk).
But, that is not the point of this forum post.
I had based my blog post on my setup at home, where I was running a single Splunk server and making the URL connection to a separate Linux server running in a VM on the same box. So when I got to work and tried to implement the same lookup, I was surprised and frustrated when it didn't work right out of the box. On the small Splunk instance where I implemented the lookup, I have a single search head and two indexers. When I ran the lookup search:
index=dns | rename query as domain | lookup freqserver domain
I received the following error:
2 errors occurred while the search was executing. Therefore, search results might be incomplete. Hide errors.
[INDEX01] Script for lookup table 'freqserver' returned error code 1. Results may be incorrect.
[INDEX02] Script for lookup table 'freqserver' returned error code 1. Results may be incorrect.
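For context, the lookup itself is defined as an external (scripted) lookup in transforms.conf, roughly like this (the script name and the output field are from my setup and are placeholders here):
[freqserver]
external_cmd = freqserver.py domain frequency
fields_list = domain, frequency
The script sits in the app's bin directory on the SH and is distributed to the search peers with the knowledge bundle.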
12-07-2018
12:57 PM
I'm working with Splunk Support on a similar issue. One suggestion they made to help troubleshoot is to run the query from the Search window.
Here's a copy of the instructions they sent me:
| dbxquery query="LONG_QUERY" connection="YOUR_CONNECTION_NAME" timeout=6000
The easiest way to do this is to hit the “Open In Search” button on the SQL Explorer screen after you have written out the full query (the button is in the upper right corner). When the query opens on the next page, just add timeout=6000 to the search as shown above.
As you probably can guess, this will enable you to test different portions of your query quickly. I'm using it to try and narrow down which part of my query is giving me trouble.
You can adjust or remove the timeout part as needed.
11-30-2018
08:04 AM
I checked the security (right click > Properties > Security) on that exact file, and the only account that had access to it was the domain admin account I used to install and administer the server. I checked the security of the other files and found that SYSTEM and the local Administrators group were also listed, and both had Full Control. So I added those two to this file, restarted Splunk, and boom! Logs started flowing.
Not sure how the access permissions got messed up, but that seems to have fixed it.
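For anyone who wants to script it rather than click through the GUI, the equivalent should be something like this (I fixed mine through the GUI, so treat this as an untested sketch; the path is the one from the btool error):
icacls "C:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf" /grant "NT AUTHORITY\SYSTEM:(F)" "BUILTIN\Administrators:(F)"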
11-30-2018
08:03 AM
I was troubleshooting a weird issue with a Splunk Universal Forwarder installed on a Windows 2012R2 print server. I was getting the print logs but not the Application/System/Security logs. I looked at the inputs.conf for both and they were correct. I checked that the print server was sending data and that the indexers were receiving data, and they were. I didn't see any errors in splunkd.log on the print server nor in the _internal logs on my Splunk index cluster. Splunk is running as Local System and, as I said, seems to have no problem reading the print logs.
Then I started going through all the logs on the UF var/log directory and found btool.log had the following after the most recent restart:
ConfPathMapper - Failed to open: C:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf: Access is denied.
Well, I thought that should be simple, except I didn't know the simple answer. On *nix, the entire Splunk directory tree (directories, subdirectories, and files) needs to be owned by the splunk user. If there is an equivalent error there, I just run chown -R splunk:splunk /opt/splunk/ and everything is (usually) fixed.
But Windows doesn't have an equivalent of the splunk user. Usually installing as local admin or domain admin takes care of setting all the correct permissions. But now that I have to fix the permissions on a single file/folder, what are the permissions that should be set on the whole Splunk directory structure? Googling hasn't turned up the answer yet. The closest I've found is this:
https://answers.splunk.com/answering/8980/view.html
and this:
https://answers.splunk.com/answering/9545/view.html
11-28-2018
10:49 AM
Have you tried that query in the SQL Explorer tab on your DB Connect? I found that when I was having problems, running the query there helped me troubleshoot.
11-15-2018
09:10 AM
1 Karma
The answer by @claudio.manig in this thread https://answers.splunk.com/answers/236495/splunk-kv-store-does-not-start.html helped me.
After verifying that all the permissions discussed there were set correctly on my server (Linux), I stopped Splunk, deleted the mongod.lock file, and restarted Splunk, and the error went away. I didn't need to run the repair utility.
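In shell terms it was essentially this (the lock file path is whatever your unclean-shutdown error reports; mine was under my KV store path):
$SPLUNK_HOME/bin/splunk stop
rm /home/dbindex/kvstore/mongo/mongod.lock
$SPLUNK_HOME/bin/splunk start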
08-28-2018
12:40 PM
For those using the Cisco eStreamer eNcore app and the Cisco eStreamer eNcore add-on, could you verify which goes where? I think I missed those instructions in the documentation.
Add-on -> HF (linux), Indexers (linux)
App -> SH (linux)
The reason I'm asking is because I am not getting any data despite having a status of 'Running' in the dashboard on the Search Head.
On my HF, when I look at a tcpdump I see data (encrypted, so I don't know what data) moving between the FMC and the HF, but nothing is showing up in the cisco:estreamer:data sourcetype. All the boxes are checked under eStreamer Event Configuration on the FMC, and Log Extra Data, Log Packets, and Log Flows are checked in the Splunk app under eStreamer for Splunk: Settings.
There are no errors in /opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer.log or in splunkd.log.
Any other suggestions?
07-19-2018
06:15 AM
We are also on ePO 5.x, Splunk 7.x, Windows 2016, and McAfee add-on 2.21.