Activity Feed
- Got Karma for Re: Best practice for field extraction. 07-27-2021 01:36 PM
- Karma Re: JMS Messaging Modular Input on heavy forwarder: Why am I unable to create a connection using Client ID? for Damien_Dallimor. 06-05-2020 12:47 AM
- Karma Re: JMS Modular Input - Truncation of events greater than 10,000 chars for Damien_Dallimor. 06-05-2020 12:47 AM
- Karma Re: Disabled input methods/scripts throwing errors within Splunk ES for ekost. 06-05-2020 12:47 AM
- Karma Re: How to parse JSON with JSON array to identify fields? for somesoni2. 06-05-2020 12:47 AM
- Got Karma for JMS Modular Input - Truncation of events greater than 10,000 chars. 06-05-2020 12:47 AM
- Karma Re: How do I tell what enviroment is production? for lguinn2. 06-05-2020 12:46 AM
- Karma Re: How to set the default value for a dynamically populated Pulldown? for sideview. 06-05-2020 12:46 AM
- Karma Re: 'splunk status' return code for Takajian. 06-05-2020 12:46 AM
- Karma Re: Dedup on multiple fields but count the instance, and display as new field. for Ayn. 06-05-2020 12:46 AM
- Karma Re: variance betweeen _time and date_* fields for lguinn2. 06-05-2020 12:46 AM
- Karma Re: Sourcetytping and override source name on directory with multiple files for kristian_kolb. 06-05-2020 12:46 AM
- Karma Anyone interested in Splunk for Sampled NetFlow and sFlow? for NetFlow_Logic. 06-05-2020 12:46 AM
- Karma Does the Universal forwarder index events? for tim9gray. 06-05-2020 12:46 AM
- Karma Re: Does the Universal forwarder index events? for ChrisG. 06-05-2020 12:46 AM
- Karma Re: Workflow Action - Mailto for MuS. 06-05-2020 12:46 AM
- Karma Re: Sorting of Fields base on Timestamp for kristian_kolb. 06-05-2020 12:46 AM
- Karma Re: Syslog UDP data filtering to index for mookiie2005. 06-05-2020 12:46 AM
- Karma Re: WinEvents are sent to indexer, but forwarder is disabled for lukejadamec. 06-05-2020 12:46 AM
03-25-2015
12:35 AM
Hi Splunkers! (well, probably one particular Splunker... hi Damian!)
You might remember me from such questions as 'JMS Modular Input 1.3.7 - Client ID already in use?', which I believe is very related.
To give some background, we have been able to create a working JMS input which is happily connecting to our WebMethods environment.
When browsing the configuration of the working input via the GUI, clicking the "SAVE" button (even without making any changes to the configuration) seems to trigger another connection attempt to the already established JMS queue, instead of detecting the existing connection and ignoring the save. This results in Splunk throwing the following error:
03-23-2015 09:25:48.453 +1100 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\jms_ta\bin\jms.py"" Stanza jms://topic/host:xxxJson_Receive : Error connecting : javax.jms.InvalidClientIDException: [BRM.10.2002] JMS: Client ID "DEV-SPLUNK" is already in use.
This in turn not only fails to start the additional connection (which is expected/desired), but also tears down the original functioning JMS input (possibly because of the duplicated Client ID?). However, this tear-down does not appear to be "clean"... (read on)
From what I can tell, WebMethods thinks the connection is still established. This is configured as a “DURABLE” (i.e. reliable) connection, so a disconnection should result in WebMethods queuing up messages until such a time as the connection is restored.
However, what IS happening is:
- WebMethods still sees Splunk as connected, so messages are treated as "consumed" and are not queued up when the connection drops.
- Splunk does not accept any messages (I have not done a packet capture to confirm they are even getting there, so this is an assumption on my behalf).
- Messages/events are lost with no avenue for recovery (boo).
- It is also suspected (although yet to be confirmed) that this same action interrupts other existing inputs (the client has advised they are unable to have two simultaneous JMS inputs).
Is anyone else experiencing these types of issues? This may be a WebMethods thing, but I would be interested to know if anyone else has seen the same behaviour.
Cheers & Beers!
RT
10-15-2014
06:24 PM
Hi Splunkers & Splunkettes,
So when attempting to remove a configured user via a REST API call, I don't seem to be able to specify a unique user by realm. For example, if I configure two users:
username: svc_splunk
realm: blank
and
username: svc_splunk
realm: SA-ThreatIntelligence
Issuing the following command will remove the first one:
curl -k -u admin:pass --request DELETE \
https://localhost:8089/servicesNS/nobody/search/storage/passwords/:svc_splunk:
However, when running it again, I am advised that the user doesn't exist (despite the fact it's present on the Credential Manager page).
From the docs (LINK) there appears to be no way to specify the realm, hence no way to delete the user.
Is there something undocumented that I'm missing?
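Purely as a guess at what I might be missing: if the endpoint keys entries as <realm>:<username>: (my assumption, untested), then building the path with the realm included might target the second credential. A sketch:

```shell
# Hypothetical sketch: assumes the passwords endpoint names entries
# "<realm>:<username>:", so including the realm targets the realm-scoped entry.
REALM="SA-ThreatIntelligence"
USERNAME="svc_splunk"
ENTITY="${REALM}:${USERNAME}:"   # note the trailing colon, as in the blank-realm example
URL="https://localhost:8089/servicesNS/nobody/search/storage/passwords/${ENTITY}"
echo "$URL"
# curl -k -u admin:pass --request DELETE "$URL"
```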
PS. The current method of changing & deleting users once added is horrible
10-01-2014
03:01 PM
Hi ekost - thanks for that. I should have specified we're running the latest versions of both. I'll log a formal ticket with support today. Thanks again 🙂
10-01-2014
02:15 AM
Hi All,
I have a pretty generic Splunk for Enterprise Security implementation. Every hour I get prompted with a whole bunch of messages such as the ones below:
I believe I have disabled all inputs that call on these binaries and scripts (checked using ./splunk cmd btool inputs list) and yet I still get them... can anyone suggest a possible culprit?
Cheers 🙂 RT
09-30-2014
11:46 PM
Thanks Damien - I think I was having a derr moment as I put the TRUNCATE statement in the inputs.conf... Mea culpa. Thanks again for your help.
09-30-2014
10:16 PM
1 Karma
Greetings!
We have configured the JMS Modular Input to be a durable subscriber to a JMS queue and we're happily retrieving data :thumbs-up:
We are also doing this in a DEV/TEST environment. In this environment, (large) stack traces are sometimes in the JSON payloads which tips the event over 10,000 characters... at which point we see the event become truncated, which breaks the JSON structure and the use of spath :sad-face:
Is there any setting anywhere that would allow me to disable/negate this?
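For anyone landing here later, a sketch of the props.conf change I would guess controls this (the sourcetype name is a placeholder, and I haven't verified this yet):

```ini
# props.conf on the parsing tier (indexer or heavy forwarder).
# [my_jms_sourcetype] is hypothetical -- use the input's actual sourcetype.
[my_jms_sourcetype]
# TRUNCATE defaults to 10000 characters; raise it, or set 0 to disable truncation
TRUNCATE = 100000
```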
Regards,
RT.
09-23-2014
01:53 AM
Hi kamermans - Did you have any luck with this? I am having a similar issue.
09-16-2014
05:55 PM
Thanks Damien - I'll give it a try and update accordingly!
09-15-2014
11:34 PM
Hi Splunkers/Splunkettes,
I have recently installed and configured the JMS Messaging Modular Input add-on on a heavy forwarder, and I am unable to create a durable connection. In my configuration I use the Client ID "SPLUNK#Consumer", as well as a few other simpler variations (SPLUNK_PREPROD & SK), and am still having no joy as Splunk is reporting the Client ID isn't set.
(Sorry about the kruddy quality)
Sure enough, WebMethods reports that the connection is coming from an unknown account and creates an ephemeral user (vclient20-67494 or something like that).
We are able to get data, we are just not able to create a durable subscription of said data which is a problem.
We have tested from non-Splunk applications and it functions as expected. Is there something I'm missing here?
Any and all help greatly appreciated!
RT
04-08-2014
12:22 AM
Hi Splunkers!
Has anyone had any experience using Splunk to send a large number (1000+) of emails? The scenario I'm thinking of is sending data usage reports (generated by Splunk) to a large number of users, where each user receives a report showing their total usage for the period, plus a link to log into Splunk for the finer details.
Just putting this out there to see what anyone else might have done.
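One pattern I've been toying with (untested, and the index and field names are made up) is driving sendemail once per row via map:

```
index=usage earliest=-1mon@mon latest=@mon
| stats sum(bytes) AS total_bytes BY user, email
| map maxsearches=2000 search="| makeresults
    | sendemail to=\"$email$\"
      subject=\"Your data usage report\"
      message=\"Total usage this period: $total_bytes$ bytes. Log in to Splunk for the finer details.\""
```

Note that map spawns a subsearch per row, so the overhead at 1000+ recipients is exactly the thing I'm hoping someone has already measured.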
Cheers!
RT
03-27-2014
02:50 PM
Like I said, this is just to hopefully help people out - so I'll answer my own question 🙂
03-27-2014
02:49 PM
Hi Splunkers!
This is less of a question, and more of a (hopefully) handy tip that I hope will answer people's questions when they go looking for an answer to timestamp extraction issues, specifically when setting TIME_FORMAT in props.conf.
If you're looking to get a timestamp in strptime/strftime format, I've found this site really useful:
http://www.strftime.net/
It's not owned/run by me, and as far as I can tell has no ads.
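As a worked example, a strptime string built on that site drops straight into a props.conf stanza like this (the sourcetype and timestamp layout here are purely illustrative):

```ini
# For events starting with timestamps like "2014-03-27 14:49:05,123"
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
```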
Hope it helps!
RT 🙂
03-04-2014
10:17 PM
5 Karma
Oh hai Splunkers!
So I'm trying to extract a DB table for indexing into Splunk. I have successfully set up an ODBC connection to the external DB, and issuing SQL commands against it works without an issue.
Where it all appears to fall down is when I am setting up the DB Connect database input.
Looking at the dbx_debug log, I see the following errors:
2014-03-05 17:14:00.036 dbx6402:WARN:Database - Database type=com.splunk.dbx.sql.type.impl.ODBC@29306c does not support connection validation
Followed by:
2014-03-05 16:46:00.026 dbx1035:ERROR:TailDatabaseMonitor - Error while executing database monitor: java.sql.SQLException: Invalid Fetch Size
java.sql.SQLException: Invalid Fetch Size
at sun.jdbc.odbc.JdbcOdbcStatement.setFetchSize(Unknown Source)
at com.splunk.dbx.sql.type.impl.AbstractDatabaseType.setStreamingResults(AbstractDatabaseType.java:355)
at com.splunk.dbx.sql.Database.configureStatement(Database.java:222)
at com.splunk.dbx.sql.Database.query(Database.java:256)
at com.splunk.dbx.monitor.impl.TailDatabaseMonitor.performMonitoring(TailDatabaseMonitor.java:115)
at com.splunk.dbx.monitor.DatabaseMonitorExecutor.executeMonitor(DatabaseMonitorExecutor.java:126)
at com.splunk.dbx.monitor.DatabaseMonitorExecutor.call(DatabaseMonitorExecutor.java:102)
at com.splunk.dbx.monitor.DatabaseMonitorExecutor.call(DatabaseMonitorExecutor.java:37)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
The SQL command that was used for the input returned only 36 events, so excessive results wouldn't appear to be the issue.
Is connection validation a necessity for inputs? Is there something obvious I'm missing here?
Environment
Windows Server 2012
Splunk 5.0.5
JRE7 (1.7.0_51 32-bit)
11-25-2013
02:42 PM
Unfortunately this won't help, as timezone offsets only differ (at a minimum) in 30-minute increments, whereas the delta above is ~2 minutes.
11-21-2013
02:20 PM
Hi Mahamed - I understand that if both of my indexers are available it will work; my question concerns the platform's behaviour if one indexer has failed, i.e. whether there is full data availability in the event of a single indexer failure.
11-19-2013
04:50 PM
FYI I have logged a support case for this and will report back with any findings.
11-18-2013
08:45 PM
1 Karma
Hi All,
In reading a recently posted (16 Oct 2013) Splunk blog post "Clustering Optimizations in Splunk 6", the following was mentioned:
In the previous Splunk 5 version, users will not be able to search and
use the cluster until the cluster master ensures that all of the
replication policies are met. In some cases, this might take long time and
users are unnecessarily blocked until then.
Should I take this to mean that in v5, functional Index replication & searchability is only possible when you have n+1 indexers (where n is the index replication factor)? For example, if I have two indexers, and have set an index replication & searchability factor of two, this won't actually work as expected (i.e. full data availability in the event of a single indexer failure).
Any input is appreciated 🙂
11-12-2013
04:42 AM
Hi All,
I am collecting Perfmon data via the Splunk_TA_windows app and for some reason the timestamp is not being parsed correctly; specifically, there is a delta between the Splunk-assigned timestamp and the one in the event itself, e.g.:
Having looked through the internal logs I am not seeing anything that would indicate the queues are blocked, but I am still getting this discrepancy. No modifications have been made to the TA, and it has been installed on both the server that is sending the data and the indexer.
Any & all suggestions appreciated!
11-06-2013
09:30 PM
Hi Hexx - Sorry for the delay in getting back to you. Unfortunately the upgrade didn't help and we're still seeing a LOT of the above errors. Being that we know what the cause is and it's a known bug, I'll mark this as answered, but I look forward to the next release where (hopefully) it'll be fixed.
Thanks again 🙂
10-24-2013
03:51 PM
Found the cause and solution here: http://answers.splunk.com/answers/82275/why-is-my-windows-cluster-peer-node-continually-restarting
Essentially, directory permissions on /slave-apps/ on the search peer had been lost (why?) and the directory was set to read-only. As per the link above, resetting the permissions allowed the Cluster Master to once again populate the directory with the required apps.
10-23-2013
05:43 PM
1 Karma
Hi All,
After fresh installs of Splunk (Windows v5.0.4) I had (had) a fully functioning cluster that was happily replicating and life was good.
After updating an app on the cluster master (removing extraneous text files from a directory) I kicked off the bundle deployment:
.\splunk.exe apply cluster-bundle
I then checked the status with the following command:
.\splunk.exe show cluster-bundle-status
Output:
Guid: 71F63992-BD86-4935-932E-24258A6A3CDD
ServerName: IDX-A
Status: Up
Bundle Validation Status: Validation successful
Latest Bundle: 1d6134c6cab9fd5a720516d8881a01a8
Active Bundle: 37b2f885aeac2bbe59bfa95a7a4202fc
Guid: BC734690-BACE-41CC-812D-254085234EE5
ServerName: IDX-B
Status: Restarting
Bundle Validation Status: Validation successful
Latest Bundle: 1d6134c6cab9fd5a720516d8881a01a8
Active Bundle: 37b2f885aeac2bbe59bfa95a7a4202fc
All well and good, but when I checked again not long after:
Guid: 71F63992-BD86-4935-932E-24258A6A3CDD
ServerName: IDX-A
Status: Restarting
Bundle Validation Status: Validation successful
Latest Bundle: 1d6134c6cab9fd5a720516d8881a01a8
Active Bundle:
Guid: BC734690-BACE-41CC-812D-254085234EE5
ServerName: IDX-B
Status: Up
Latest Bundle: 1d6134c6cab9fd5a720516d8881a01a8
Active Bundle: 1d6134c6cab9fd5a720516d8881a01a8
The impact of this is:
- The Active Bundle for IDX-A is now blank
- The app directories in /slave-apps are now empty
- IDX-A is in a restart loop, and;
- The splunkd.log on IDX-A indicates that the process is being told (repeatedly) to gracefully shut down.
This is not the first time this has happened... as this fresh install is a result of this happening previously and me taking the default "Reinstall & hope for the best" path... dammit.
Any and all suggestions greatly appreciated!
RT
EDIT #1: 10 minutes later and it's still happening.
EDIT #2: splunkd.log on the cluster master has this over & over again:
...
CMMaster - event=handleInputsQuiesced guid=71F63992-BD86-4935-932E-24258A6A3CDD
ClusterMasterPeerHandler - Add peer info replication_address=IDX-A forwarder_address= search_address= mgmtPort=8089 rawPort=9887 useSSL=false forwarderPort=0 forwarderPortUseSSL=true serverName=IDX-A activeBundleId= status=Up type=Initial-Add baseGen=0
CMMaster - event=removeOldPeer guid=71F63992-BD86-4935-932E-24258A6A3CDD hostport=IDX-A:8089 status=success
CMMaster - event=addPeer guid=71F63992-BD86-4935-932E-24258A6A3CDD replication_address=IDX-A forwarder_address= search_address= mgmtPort=8089 rawPort=9887 useSSL=false forwarderPort=0 forwarderPortUseSSL=true serverName=SE02SPL01LP activeBundleId= status=Up type=Initial-Add baseGen=0 bucket_count=0
CMPeer - peer=71F63992-BD86-4935-932E-24258A6A3CDD transitioning from=Down to=Up reason="addPeer successful."
CMMaster - event=addPeer msg='Bundle mismatch; restarting peer. '
CMMaster - committing gen=121 numpeers=2
CMMaster - event=addPeer guid=71F63992-BD86-4935-932E-24258A6A3CDD status=success initialized=1 npeers=2 basegen=121
CMPeer - peer=71F63992-BD86-4935-932E-24258A6A3CDD transitioning from=Up to=Restarting reason="restart peer"
CMBundleServer - event=streamingbundle status=success file=C:\Program Files\Splunk\var\run\splunk\cluster\remote-bundle\4a483d66a10ab4976b2d984c9361d040-1382573311.bundle totalBytesWritten=3317760 checksum=1d6134c6cab9fd5a720516d8881a01a8 Content-Length=3317760
ClusterSlaveControlHandler - Bundle validation success reported by [71F63992-BD86-4935-932E-24258A6A3CDD] successful for bundleid=1d6134c6cab9fd5a720516d8881a01a8
CMMaster - event=handleShutdown guid=71F63992-BD86-4935-932E-24258A6A3CDD status=Restarting
CMPeer - peer=71F63992-BD86-4935-932E-24258A6A3CDD has started master-initiated restart
...
10-23-2013
05:22 PM
It's intermittent in nature. e.g.
index=_internal ERROR greater | rex "2013 (?<time_of_error>[^\s]+)\s" | stats values(time_of_error)
00:30:06.232
00:42:06.072
00:42:16.605
01:30:08.017
01:30:08.235
01:50:02.871
01:55:03.419
02:10:02.950
02:10:03.575
02:15:03.380
...
10-23-2013
05:02 PM
2 Karma
Hello Splunkers and/or Powershell Gurus!
I'm getting a bunch of errors when using the Splunk-on-Splunk TA for the collection of diagnostic data. I have enabled PowerShell script execution, and I'm using the app as packaged, deployed by a Cluster Master to an indexer.
Each of the lines below is prefixed with:
10-24-2013 08:30:06.594 +1100 ERROR ExecProcessor - message from ""C:\Program Files\Splunk\etc\slave-apps\TA-sos_win\bin\sospowershell.cmd" ps_sos.ps1
The reported error:
Error formatting a string: Index (zero based) must be greater than or equal to zero and less than the size of the argument list..
At C:\Program Files\Splunk\etc\slave-apps\TA-sos_win\bin\powershell\ps_sos.ps1:17 char:5
+ $CMDLINE = "{0}" -f ($CMDLINE -replace '"',"")
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: ({0}:String) [], RuntimeException
+ FullyQualifiedErrorId : FormatError
Any & all help greatly appreciated 🙂
10-22-2013
11:48 PM
Hello Splunkers!
I have two clustered indexers (v5.0.4) replicating buckets between them. I have been testing the failover mechanism to tick a box saying that the data is indeed searchable if an indexer fails. The testing methodology is as follows:
1. Run a search (e.g. index=main) for a window of time in the past.
2. Confirm the number of results returned from each splunk_server, and the total number of events returned.
3. Offline one of the indexers (IDX-A).
4. Re-run the search for the same period of time to confirm identical results are returned.
5. ???
6. PROFIT!!
All pretty simple right? (apart from 5 & 6 anyway).
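For reference, the per-indexer count check in step 2 is just a stats split on splunk_server (the search window here is illustrative):

```
index=main earliest=-24h@h latest=-23h@h
| stats count BY splunk_server
```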
The problem I have is with step 4 - I do not get the results that were previously returned from the now offline indexer IDX-A. I have waited for ~10 minutes with no joy. I have a rep factor & search factor of 2, and the cluster master reports that everything is hunky-dory. But as soon as I restart the splunkd process on IDX-A, I get the correct/expected number of results from IDX-B. So yes, replication works... but I'd have expected the resumption of service on IDX-A not to be a trigger/catalyst for IDX-B actually returning events previously held by IDX-A.
Is there a setting/timer I'm missing here? Happy to be pointed in the right direction!
Thanks in advance 🙂
RT