Activity Feed
- Got Karma for Re: How Can I Verify that All Forwarders Received a Deployed Update?. 09-04-2024 07:36 AM
- Got Karma for Re: current user in search?. 04-05-2024 08:50 AM
- Got Karma for Re: Changing timestamp and sourcetype based on record type. 03-20-2024 02:07 PM
- Got Karma for Re: Can a subsearch return only the value (without the fieldname)?. 07-13-2023 06:02 AM
- Got Karma for Re: Difference between 'show default-hostname' and 'show servername'. 05-16-2023 04:18 AM
- Got Karma for Re: Difference between 'show default-hostname' and 'show servername'. 05-31-2022 08:05 PM
- Got Karma for Re: How to use a field in SingleValue label?. 09-23-2021 03:54 PM
- Got Karma for Re: Difference between 'show default-hostname' and 'show servername'. 05-17-2021 05:30 AM
- Got Karma for Re: How Can I Verify that All Forwarders Received a Deployed Update?. 05-11-2021 05:22 PM
- Got Karma for Re: How can you do OR statements in rex?. 01-27-2021 06:10 AM
- Got Karma for Re: get latest value and timestamp. 09-21-2020 08:19 AM
- Got Karma for Integration of Imperva Database Activity Monitor (DAM)?. 08-10-2020 09:28 AM
- Got Karma for Re: How to remove empty buckets in timechart. 07-16-2020 04:27 PM
- Got Karma for Re: Difference between 'show default-hostname' and 'show servername'. 06-17-2020 05:59 PM
- Gave Karma to jcrabb_splunk for Re: After upgrading to 6.5.0, KV Store will not start. 06-05-2020 12:48 AM
- Gave Karma to sni_splunk for Re: How to prevent Splunk DB Connect 2 from disabling a database connection if the database goes offline briefly?. 06-05-2020 12:48 AM
- Gave Karma to acharlieh for Re: Method to rename field to value of another field. 06-05-2020 12:47 AM
- Got Karma for Re: How to configure Splunk to access and parse AWS GovCloud Cloudtrail logs?. 06-05-2020 12:47 AM
- Got Karma for Re: How to configure Splunk to access and parse AWS GovCloud Cloudtrail logs?. 06-05-2020 12:47 AM
- Got Karma for Re: How to configure Splunk to access and parse AWS GovCloud Cloudtrail logs?. 06-05-2020 12:47 AM
07-19-2016
03:43 AM
INDEXED_EXTRACTIONS is the exception, in that the parsing/field extraction is performed on the UF instead of on the HWF/IDX.
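A minimal props.conf sketch of that setup (the sourcetype name and CSV layout are assumptions); it has to be deployed to the UF itself, since that is where the structured parsing happens:
# props.conf on the Universal Forwarder, not only on the indexer
[my_csv_data]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = timestamp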
07-12-2016
03:33 AM
3 Karma
With v6.4, renaming the field to "search" returns only the first result; renaming it to "query" works, though:
`[makeresults count=5 | eval v=1 | accum v | rename v as query]`
-->
*normalizedSearch* = `litsearch ( ( 1 ) OR ( 2 ) OR ( 3 ) OR ( 4 ) OR ( 5 ) ) | ....`
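For instance, used inside an outer search (index=foo is a placeholder), the values of the renamed field are spliced in as bare query terms:
`index=foo [makeresults count=5 | eval v=1 | accum v | rename v as query]`
which expands to `index=foo ( ( 1 ) OR ( 2 ) OR ( 3 ) OR ( 4 ) OR ( 5 ) )`.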
02-10-2016
10:06 AM
3 Karma
According to the release notes, v3.0 of the AWS Add-on now supports GovCloud:
2015-12-23 ADDON-6870 Support for GovCloud and China regions in the configuration UI.
12-03-2015
12:27 AM
1 Karma
In Splunk 6.3 you can do this using the "finalized" and "set" tags.
Note that the "search" element is new, and "searchstring" has been deprecated.
<panel>
<single>
<title>Spectrum</title>
<search>
<query>index=foo | reltime | rangemap field=CPU low=0-60 elevated=61-80 default=severe | fields + CPU reltime</query>
<earliest>-5m@m</earliest>
<latest>now</latest>
<finalized>
<set token="RELTIME">$result.reltime$</set>
</finalized>
</search>
<option name="classField">range</option>
<option name="field">CPU</option>
<option name="underLabel">$RELTIME$</option>
<option name="refresh.auto.interval">60</option>
</single>
</panel>
09-20-2013
01:20 AM
The Splunk on Splunk app's CPU consumption dashboards are misleading in that they assume "search duration = CPU consumption", which is simply wrong.
07-17-2013
09:34 AM
Hi Dave, I believe it is an issue of query length (in characters): when run from a regular SQL client, the query executes in less than 1 second. I thought of creating a view, but that approach is not possible because the query is actually a lookup (I pass in parameters using Splunk's $field$ syntax). Other, shorter queries run just fine on the same DB. Thanks
07-12-2013
01:19 AM
@Mark, my use cases would be:
1) run a custom, live query on N4J (like DBX's |dbquery or |inputlookup or |inputcsv) and process the results in the pipeline;
2) perform custom lookups;
3) populate N4J with data coming from a Splunk search.
The use cases are multiple. E.g.:
a) import the graph of a network and see all impacted ("downstream") devices in case of failure (a top-down approach);
b) trace all the connections of the servers for which I have logs and draw their connections (a bottom-up approach). This would give me, over time, a precise schema of the services a complex application is using.
07-12-2013
01:07 AM
Thanks Ziegfried. Since the N4J JDBC driver lists a bunch of SQL interfaces (SquirreL SQL and others...) that work with varying degrees of feature support, I was just hoping for an easy integration. But I understand it was totally out of scope for DBX.
07-11-2013
10:17 AM
Hi splunkers! I have a query, just under 10k characters long, that cannot be run through DB Connect's dbquery command. Has anybody had similar issues? Do you have any workarounds?
-More info-
The DB is Oracle. When run from SQLDeveloper, the query completes in less than 1s with one or two results at most.
The dbquery command resides in a SimpleXML form panel and takes some parameters from the form itself. The Splunk interface reports the error "PARSER: Applying intentions failed Splunkd daemon is not responding: ('The read operation timed out',)". The other panels (regular Splunk searches and charts) load just fine.
If I run the same query in the DB Connect's dbquery dashboard, it doesn't work either.
DB Connect logs show no information about that particular query.
DB Connect version 1.09
Thanks, Paolo
06-26-2013
06:54 AM
Nice idea, even though I would miss the lookup and "in-search" query functionality. Also, being able to populate Neo4j from Splunk searches would be very nice. Think about tracing all the net connections of the prod servers with "lsof" scripts and drawing the chart of the "live" infrastructure in external tools as well...
06-26-2013
01:19 AM
I am trying to connect to Neo4j using their JDBC driver with no luck.
Has anybody done better than this?
- Tags:
- Splunk DB Connect 1
05-06-2013
12:05 PM
Splunk can index SOAP envelopes: they are just plain text. For instance, you could set JBoss to trace the SOAP requests it is serving to a file, then collect that file with Splunk.
You could assign that data a sourcetype with the props.conf setting **KV_MODE=xml** to automatically extract the fields at search time. Also, you could use multiple SEDCMD configs to strip the SOAP tags right away (unless you like to report on the xmlns you use the most 🙂).
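A props.conf sketch of both ideas (the sourcetype name and the soapenv namespace prefix are assumptions):
[soap_trace]
# search-time XML field extraction
KV_MODE = xml
# index-time removal of the envelope/body/header tags
SEDCMD-strip_soap = s/<\/?soapenv:(Envelope|Body|Header)[^>]*>//g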
03-16-2013
12:41 PM
2 Karma
Have you tried assigning the timestamp in the [test] stanza?
TIME_PREFIX = tailer.pl:\s(START\|([^\|]*\|){2}|STOP\|([^\|]*\|){4})
TIME_FORMAT= %m/%d/%Y|%H:%M:%S.%1N
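For reference, two hypothetical events these settings would match (the field values are invented); the timestamp starts right where the TIME_PREFIX match ends:
tailer.pl: START|jobA|42|03/16/2013|12:41:05.5|...
tailer.pl: STOP|jobA|42|0|ok|03/16/2013|12:41:07.9|...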
02-07-2013
03:26 PM
It would be a real mess... Would it be possible for you to retrieve an "easier" recursive listing, such as the one produced by:
find folder_name -exec ls -l \{\} \;
This way, file and path info would be stored on the same row.
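For example (the paths are invented, and -type f is added so only files are listed), each row then carries the full path next to the file metadata:
find /var/log/app -type f -exec ls -l \{\} \;
-rw-r--r-- 1 root root 10240 Feb 7 2013 /var/log/app/server1/access.log
-rw-r--r-- 1 root root 2048 Feb 7 2013 /var/log/app/server2/error.log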
02-07-2013
03:23 PM
1 Karma
I think you are looking for:
| head 1
| addinfo
The time the search was executed will be in the info_search_time field
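A complete, runnable sketch (the index name is a placeholder):
index=_internal | head 1 | addinfo | eval search_time=strftime(info_search_time, "%F %T") | table search_time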
02-07-2013
02:31 AM
1 Karma
The easy solution is to disable drilldowns in the view altogether. You can achieve this by configuring the panels in the UI or by adding the proper parameters in the Advanced XML.
However, if you want to disable drilldowns for just a subset of users, you will probably have to write ad-hoc JavaScript in a custom application.js file. You can find customization examples and application.js reference code on dev.splunk.com.
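For reference, if the view were SimpleXML instead of Advanced XML, disabling drilldown on a table panel is a single option (the search is a placeholder):
<table>
<search>
<query>index=foo | stats count by host</query>
</search>
<option name="drilldown">none</option>
</table>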
02-07-2013
02:15 AM
You can transpose such results with "xyseries", but you will probably have to transform the _time column into something ad hoc first. E.g.:
index=...etc...
| bucket _time span=10m
| stats count by _time, LogSource
| table count, LogSource, _time
| convert timeformat="%H_%M" ctime(_time) as time
| xyseries LogSource time count
02-06-2013
01:05 PM
2 Karma
It would probably help only if you rolled the buckets with big *.data files to frozen (thus removing them from the searchable data).
02-05-2013
12:49 AM
replace "| stats list(user_command) by host" with "| stats list(user_command) count dc(user_command) as distinct_count by host"
02-04-2013
01:55 PM
1 Karma
| top showcount=false lengthofpayload
02-04-2013
01:47 PM
1 Karma
Was the admin's default password of "changeme" ever changed? If you cannot figure out the password, then:
- stop the UF
- rename the $SPLUNK_HOME/etc/passwd file
Now you have the default credentials (admin/changeme) back.
However, if you want to achieve results similar to a "clean all", then:
- stop the UF
- rename $SPLUNK_HOME/var to something else
- rename $SPLUNK_HOME/etc/users to something else
- start the UF
- if everything works fine, drop the renamed folders.
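A minimal shell sketch of the first procedure (the .bak name is arbitrary):
$SPLUNK_HOME/bin/splunk stop
mv $SPLUNK_HOME/etc/passwd $SPLUNK_HOME/etc/passwd.bak
$SPLUNK_HOME/bin/splunk start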
02-04-2013
01:34 PM
1 Karma
| stats list() will keep duplicate user-command tuples.
sourcetype=pu OR sourcetype=tik COMMAND
| multikv
| strcat "[" USER "] " COMMAND user_command
| stats list(user_command) by host
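If you instead want each distinct user-command pair only once, swap list() for values(), which deduplicates (and sorts) the values:
sourcetype=pu OR sourcetype=tik COMMAND
| multikv
| strcat "[" USER "] " COMMAND user_command
| stats values(user_command) by host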
02-04-2013
01:22 PM
2 Karma
You can use | top: it will give you the distribution # and % of results grouped by the value of a field.
sourcetype="dbmon:kv"
| search EVENTTYPE="ScreenSharingEvent"
| eval lengthofpayload=len(PAYLOAD)
| bucket lengthofpayload bins=10
| top lengthofpayload
01-27-2013
04:42 PM
1 Karma
You can play with the graphical chart settings and set "null values" to "connect".
But if the problem happens with many data points, you might want to change the timespan over which buckets are computed:
| timechart span=2h count by host
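If the dashboard is SimpleXML, I believe the equivalent of that chart setting is the nullValueMode charting option:
<option name="charting.chart.nullValueMode">connect</option>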
01-27-2013
04:24 AM
I suppose you are using Search Head Pooling as well. In that case, it is possible the user and role configuration files such as authorize.conf reside in etc/system/local instead of etc/apps and are therefore not shared between search heads. You might have to configure both search heads independently in that case.