Activity Feed
- Got Karma for Parallel development in Splunk on the same app - use GIT for management and source control??. 06-05-2020 12:48 AM
- Got Karma for Parallel development in Splunk on the same app - use GIT for management and source control??. 06-05-2020 12:48 AM
- Karma Re: Merge two fields into one field for lguinn2. 06-05-2020 12:46 AM
- Karma Re: Finding the earliest event matching a query for lguinn2. 06-05-2020 12:46 AM
- Karma Re: Filtering/nullQueue Question for kristian_kolb. 06-05-2020 12:46 AM
- Karma Re: Pivot based on Search vs Event for mattness. 06-05-2020 12:46 AM
- Karma Re: Single search in multiple charts for somesoni2. 06-05-2020 12:46 AM
- Karma Re: Path Analysis in Splunk for ShaneNewman. 06-05-2020 12:46 AM
- Karma Re: Appended search not appearing in timeline results for martin_mueller. 06-05-2020 12:46 AM
- Karma Re: Color appended search results? for martin_mueller. 06-05-2020 12:46 AM
- Karma Re: inputlookup search timerange for lguinn2. 06-05-2020 12:46 AM
- Karma Re: How to extract a field and chart it for lguinn2. 06-05-2020 12:46 AM
- Karma Re: Temporary user access for martin_mueller. 06-05-2020 12:46 AM
- Karma Re: Field extraction from another field for martin_mueller. 06-05-2020 12:46 AM
- Karma Re: Calculating a total for use later to calculate a percentage for somesoni2. 06-05-2020 12:46 AM
- Got Karma for Re: Merge Values from Two Fields into a New Field. 06-05-2020 12:46 AM
- Got Karma for Re: Merge Values from Two Fields into a New Field. 06-05-2020 12:46 AM
- Got Karma for Re: Merge Values from Two Fields into a New Field. 06-05-2020 12:46 AM
- Got Karma for Re: Merge Values from Two Fields into a New Field. 06-05-2020 12:46 AM
- Got Karma for Re: Merge Values from Two Fields into a New Field. 06-05-2020 12:46 AM
Topics I've Started
03-17-2016
10:44 PM
2 Karma
Hi all,
Just wondering if anyone has had any experience using Git as a tool to manage Splunk development work across multiple branches?
e.g. if I have two DEV environments and one master environment:
Clone DEV1/DEV2 from master.
Develop searches/reports/eventtypes on DEV1
Develop searches/reports/eventtypes on DEV2
Merge DEV1 and DEV2 onto Master.
Has anyone done this successfully? Any issues encountered? I'm just wondering how Git would handle multiple changes to the same configuration file; it seems we would always get conflicts that have to be resolved manually. But I'm no Git expert, so I would appreciate any help.
Alternatively if someone else has experience with merging parallel streams of work in Splunk together that would be good too.
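As a rough illustration of the branch-per-DEV flow I'm describing, here is a runnable sketch in a scratch repo (branch names, file names, and stanzas are all invented stand-ins for a real app's conf files). Edits to different files merge cleanly; it's parallel edits to the same stanza that would need manual conflict resolution:

```shell
set -e
# Scratch repo standing in for the app directory under version control
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name demo && git config user.email demo@example.com
main=$(git symbolic-ref --short HEAD)      # default branch ("master" environment)
git commit -q --allow-empty -m "base app"

# One branch per DEV environment, cloned from master
git checkout -q -b dev1
echo "[report_a]" > savedsearches.conf     # DEV1 work
git add . && git commit -qm "DEV1 searches"

git checkout -q "$main" && git checkout -q -b dev2
echo "[type_b]" > eventtypes.conf          # DEV2 work
git add . && git commit -qm "DEV2 eventtypes"

# Merge both streams back onto master
git checkout -q "$main"
git merge -q dev1
git merge -q --no-edit dev2   # different files, so this merges cleanly;
                              # edits to the SAME stanza would conflict here
ls
```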
Cheers,
... View more
- Tags:
- git
- splunk-enterprise
07-21-2014
10:35 PM
An alternative is to give the user access to the index but restrict the search terms. Under the role settings you can specify a search filter that is automatically applied to every search that user runs, e.g.:
e.g. index=security host=myhostname
This means the user can only see events for that host in the security index.
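In authorize.conf this corresponds to the srchFilter setting on the role; a hedged example (the role name here is made up, the filter is the one from above):

```ini
[role_security_readonly]
srchIndexesAllowed = security
srchFilter = host=myhostname
```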
... View more
07-21-2014
10:13 PM
Hi, I know this is an old thread, but for anyone who might stumble upon this: you can inspect the query via the Job Inspector. There's a button on the search screen after the search completes which gives you details on the job. It's quite useful for determining how long a specific search takes and which parts took the longest.
... View more
07-08-2014
08:20 PM
What's the difference between 2, 5, and 7 here with regard to duration? They all show three different connections from the same src and host. If you want the minimum/maximum durations for each src/host combination, you should be able to use:
[query]
| eval duration=strptime(End_Time, "%Y-%m-%d %H:%M:%S")-strptime(Start_Time, "%Y-%m-%d %H:%M:%S")
| stats max(duration), min(duration) by src,spt,dst,dpt
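The strptime arithmetic in the eval above can be sanity-checked outside Splunk; here is a quick Python equivalent (the field values are invented examples):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"

def duration_seconds(start_time: str, end_time: str) -> float:
    """Same calculation as the eval above: end epoch minus start epoch."""
    return (datetime.strptime(end_time, FMT) - datetime.strptime(start_time, FMT)).total_seconds()

print(duration_seconds("2014-07-08 10:00:00", "2014-07-08 10:05:30"))  # prints 330.0
```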
... View more
05-08-2014
04:51 PM
Seeing the same issue on my free test instance. On our enterprise test instance, though, the issue seems to have been resolved by the patch listed in those links.
... View more
05-04-2014
10:51 PM
Maybe I'm not understanding this correctly..
Why do you need to have different passwords for all the forwarders? Generally the admin user is the only one defined on a universal forwarder, and you should change its password when you install it. You can also script the change using the CLI:
/opt/splunkforwarder/bin/splunk edit user admin -password YOUR_NEW_PASSWORD -auth admin:changeme
where admin:changeme is the default user:password combination.
The only users who should be able to run the splunk binary are the 'splunk' user and root.
To make it even easier, if you set up a deployment server, you can control all your forwarders from there and only have to remember the username/password for the deployment server.
... View more
05-04-2014
10:39 PM
Found the answer. I simply downloaded an older JDBC version and copied it into the lib folder and it just seems to have worked fine.
... View more
05-04-2014
10:06 PM
So I think it has something to do with the fact that these databases are on Oracle 8i, but the JDBC driver is ojdbc6.jar, which isn't compatible.
Now, how do I add a new .jar file for an older JDBC driver, one that's compatible with JDK 1.6?
... View more
05-04-2014
09:02 PM
I don't think it would be possible to do this at index time, so your events will stay indexed like that...
But it would be possible to do some extra post-processing after indexing to separate them out and change the _time field for each event, using a combination of rex, eval, and mvexpand.
The question, though, is what you want to achieve with this.
Generally, these metrics will be used for calculations anyway, in which case there's no reason to differentiate them by the second (e.g. stats avg(yourfields)), and you could simply use the multivalue (mv) commands for it.
for example:
[your search] | makemv delim=";" A_METRIC | makemv delim=";" B_METRIC | eval metrics=mvzip(A_METRIC,B_METRIC) | mvexpand metrics | rex field=metrics "(?<fieldA>\d+),(?<fieldB>\d+)" | timechart span=5m avg(fieldA), avg(fieldB) by host
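The makemv + mvzip + mvexpand steps are easier to see outside SPL; a rough Python equivalent of pairing the two semicolon-delimited metric strings (the sample values are invented):

```python
a_metric = "10;20;30"   # A_METRIC as indexed, ";"-delimited
b_metric = "1;2;3"      # B_METRIC

# makemv delim=";" splits each field; mvzip pairs the values positionally,
# and mvexpand turns each pair into its own row/event.
rows = [
    {"fieldA": int(a), "fieldB": int(b)}
    for a, b in zip(a_metric.split(";"), b_metric.split(";"))
]
print(rows)
```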
... View more
05-04-2014
07:58 PM
Trying to connect to an Oracle database via DB Connect, but it keeps giving me an unknown error when trying to validate the connection. I know that the port, username, and DB names are right, as I can connect using TOAD.
dbx.log entry:
2014-05-05 14:55:10.736 dbx6855:ERROR:BridgeSession - Exception occured while executing command: java.lang.ArrayIndexOutOfBoundsException: 7
java.lang.ArrayIndexOutOfBoundsException: 7
  at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:989)
  at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:292)
  at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:508)
  at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:203)
  at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:33)
  at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:510)
  at java.sql.DriverManager.getConnection(DriverManager.java:620)
  at java.sql.DriverManager.getConnection(DriverManager.java:200)
  at com.splunk.dbx.sql.type.impl.AbstractDatabaseType.connect(AbstractDatabaseType.java:138)
  at com.splunk.dbx.sql.type.impl.Oracle.connect(Oracle.java:48)
  at com.splunk.dbx.sql.Database.connect(Database.java:522)
  at com.splunk.dbx.sql.Database.validateConnectionParameters(Database.java:436)
  at com.splunk.dbx.sql.validate.DatabaseValidator.invoke(DatabaseValidator.java:29)
  at com.splunk.bridge.session.BridgeSession.call(BridgeSession.java:92)
  at com.splunk.bridge.session.BridgeSession.call(BridgeSession.java:30)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
  at java.util.concurrent.FutureTask.run(FutureTask.java:166)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
  at java.lang.Thread.run(Thread.java:636)
It's strange; I know the DB Connect app works fine, since I've got other Oracle connections set up and working properly.
... View more
- Tags:
- Splunk DB Connect 1
02-26-2014
02:25 PM
Here are three things you can check:
Are you able to see the splunkd.log events from your server on the indexer? e.g. search "index=_internal source=*splunkd.log host=<your_host>". These should be visible if the indexer and forwarder are correctly configured to send/receive data.
Check that the splunk user has access to those logs; if you can't read them as the splunk user, then Splunk can't forward them.
Do a search over "All time" for any logs from that host. I know it sounds stupid, but I once had this issue where I thought the forwarder wasn't sending when it was actually using the wrong timestamps, so events were being indexed on a date a few years earlier than my search range.
If all else fails, you could turn the logging on the forwarder up to DEBUG to get more information. Under $SPLUNK_HOME/etc/log.cfg, try setting the TailingProcessor and WatchedFile components to DEBUG and see what turns up.
... View more
02-26-2014
02:06 PM
Just in case anyone was wondering it was my lack of CSS knowledge that was the problem. Once I defined additional selector IDs to match the table IDs it worked fine.
... View more
02-10-2014
03:51 PM
What happens when you select a time window that's NOT real-time? Do the searches re-run as normal searches?
If you close the dashboard, the searches should stop running. If you don't want real-time searches running, you can disable them from the time range picker, or limit how many can run at the same time in the configuration files.
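For reference, the per-role caps live in authorize.conf; a hedged example (the role name and numbers are illustrative):

```ini
[role_user]
# maximum concurrent real-time searches for users with this role
rtSrchJobsQuota = 2
# maximum concurrent historical searches
srchJobsQuota = 6
```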
... View more
02-10-2014
03:46 PM
I'm having a bit of a problem with using JS scripts in my dashboard panels. I've been using the Simple XML examples and they work great. For example, the table_row_highlighting.js file is great, but it only works for one table, whereas I would like to have it for multiple tables.
I figured I could just copy it into a file nagios_highlight.js and change the table ID and it would work fine but that doesn't seem to be the case.
I've placed it in the $SPLUNK_HOME/etc/apps/myapp/appserver/static folder, added the script name to my dashboard XML, added the new id to a different table, and restarted the Splunk instance, but it's not working. Am I putting it in the wrong place, or do I need to register the .js file somewhere else?
<dashboard script="nagios_highlight.js" stylesheet="application.css">
...
<row>
<table id="nagios_highlight">
<title>Status Dashboard</title>
<searchString>sourcetype=nagios_status servicestatus | sort by _time DESC | dedup host_name, service_description | eval last_check=strftime(last_check,"%d/%m/%y %H:%M:%S") | eval event_duration=tostring(now()-last_state_change,"duration") | rename problem_has_been_acknowledged as acknowledged | table _time, host_name, service_description, plugin_output, current_state, last_check, event_duration, acknowledged</searchString>
<earliestTime>-10m</earliestTime>
<latestTime>now</latestTime>
<option name="wrap">true</option>
<option name="rowNumbers">false</option>
<option name="dataOverlayMode">none</option>
<option name="drilldown">row</option>
<option name="count">50</option>
</table>
</row>
...
I can't seem to figure this one out. From the page source it seems to load the .js file, but the rows don't get highlighted. Any ideas?
... View more
- Tags:
- js
- rowcolorizer
01-20-2014
07:45 PM
How have you set up your Livestatus xinetd settings? You have to link the Livestatus socket on your Nagios machine to an xinetd socket. Here's an example /etc/xinetd.d/livestatus file from http://mathias-kettner.com/checkmk_livestatus.html:
service livestatus
{
type = UNLISTED
port = 6557
socket_type = stream
protocol = tcp
wait = no
# limit to 100 connections per second. Disable 3 secs if above.
cps = 100 3
# set the number of maximum allowed parallel instances of unixcat.
# Please make sure that this values is at least as high as
# the number of threads defined with num_client_threads in
# etc/mk-livestatus/nagios.cfg
instances = 500
# limit the maximum number of simultaneous connections from
# one source IP address
per_source = 250
# Disable TCP delay, makes connection more responsive
flags = NODELAY
user = nagios
server = /usr/bin/unixcat
server_args = /var/lib/nagios/rw/live
# configure the IP address(es) of your Nagios server here:
# only_from = 127.0.0.1 10.0.20.1 10.0.20.2
disable = no
}
... View more
11-20-2013
08:26 PM
1 Karma
To further add onto this: you can use an eval with strptime to convert your timestamps into epoch time for comparisons, and then strftime to convert them back into readable strings.
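The same round trip looks like this in Python, if you want to sanity-check a format string outside Splunk (the timestamp and format here are examples; Splunk's eval strptime/strftime take the same %-style format strings):

```python
import time

FMT = "%Y-%m-%d %H:%M:%S"
ts = "2013-11-20 08:26:00"

epoch = time.mktime(time.strptime(ts, FMT))           # string -> epoch, for comparisons
readable = time.strftime(FMT, time.localtime(epoch))  # epoch -> readable string
print(readable)
```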
... View more
11-20-2013
08:21 PM
There seems to be some kind of issue with dynamic drilldowns with Splunk and Internet Explorer.
I've been building dashboards and UIs with drilldowns, and it all worked fine until I tried to get other people to use them. I noticed that while drilldowns worked in Firefox, they didn't work properly in Internet Explorer.
For example, when I try to pass a value from a drilldown to another form, such as:
<table>
<title>Table panel with dynamic drilldown that passes the clicked row's 'count' column value to populate a form</title>
<searchString>index=_internal | head 1000 | stats count by user</searchString>
<option name="count">10</option>
<drilldown>
<link>/app/simple_xml_examples/simple_form_text?form.limit=$row.count$</link>
</drilldown>
</table>
The $row.count$ variable doesn't get passed to the form properly when using IE. I think it has something to do with the way it handles links and addresses? Not sure.
I'm on Splunk 6, and the problem is with IE8. I haven't tested it on any other versions of IE, since I don't have admin rights to upgrade or install.
Has anyone else seen this problem or know how to address it? (other than switching browsers)
Cheers,
... View more
11-12-2013
02:32 PM
I did wonder if you could have the same element ID for multiple tables, or whether it would cause issues in Splunk. I was thinking more along the lines of having element_id = * for ALL tables.
... View more
11-11-2013
03:44 PM
Thanks for that, Luke. Would it be possible to have a generic .js script that applies to all tables with an 'oss_alert' column?
Just thinking that you could then use the one .js and .css for multiple tables.
... View more
11-10-2013
07:22 PM
I tried the Sideview Utils method, but that requires going to advanced XML, which seems to be discouraged on Splunk 6.
Apparently it's better to use a .js and .css combination, but it doesn't seem to work. I took the JS from this answer:
http://answers.splunk.com/answers/83206/color-in-a-table-based-on-values
and updated my dashboard.css/application.css:
/* classes for severities */
.SimpleResultsTable tr td.d[field="oss_alerts"][data-value="OK"] {
background-color:#0C0;
color:white;
}
I guess I'm missing something? I'm not too familiar with CSS and JS.
... View more
11-10-2013
05:39 PM
I have a search which generates a table from two different lookup files (one is updated by a scheduled search); this table basically lists a whole bunch of files and when each was last received.
What I want to do is highlight the rows/files that have not been received in the last 3 days. I can do the calculations and the formatting of the table, but I can't figure out how to get the colours working with CSS and .js.
Can anyone help me with this? Ideally, I'd want to create a column/field called oss_alert and have the colour of the row depend on that value.
... View more
11-10-2013
03:16 PM
Can you be more specific about your configuration and how you are trying to access it? Via domain name or IP address?
Try using your IP address instead,
e.g. http://<ip_address>:8000/
If that doesn't work, please give us some details about the configuration of your Splunk machine and network, e.g. is it Windows or Linux? Virtual or physical? How is it connected to the rest of the network?
... View more
11-07-2013
01:31 PM
I assume you've done a trace on both ends to make sure that the syslog data is being sent from the originating servers and received on the Splunk instance?
Is there another syslog daemon running on your Splunk instance, or another application using that port? If so, it's possible the syslogs coming into your machine are being aggregated into the local syslog.
I would suggest doing a netstat to make sure no other application is using that port, or changing to a different port above 1024.
... View more
11-07-2013
01:08 PM
It should work fine in Chrome. Have you checked that it works from other computers on the same network? If not, it could be a firewall or routing issue, e.g. your Splunk instance has not opened up port 8000 (or whatever port you're using) to other computers.
... View more
11-07-2013
01:06 PM
How have you configured your settings? If you are doing it via a data stream, there are three things that need to be done for it to work.
The syslog servers need to be configured to send data to a specific port on the Splunk machine e.g. TCP/5000
The splunk server needs to be configured to read that port and index the data. This can be found under inputs.
The firewalls between the machines must be configured to allow that data to flow through the TCP/UDP ports. This includes the local windows firewall too.
A quick google search for any of these things will give you the information you need to do that.
Similar principles apply if you're using a forwarder, except that in step 1 the forwarder reads the syslog file and forwards it, instead of the machine directly sending it out as a syslog stream.
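For step 2, the receiving side is an inputs.conf stanza on the Splunk server, roughly like this (the port matches the TCP/5000 example above; the sourcetype and index are illustrative):

```ini
# Listen on TCP/5000 for the syslog stream from step 1
[tcp://5000]
sourcetype = syslog
index = main
```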
... View more