Activity Feed
- Got Karma for How to filter on KV Store lookup time-based fields using a time picker?. 10-04-2024 07:05 AM
- Got Karma for Re: How to search a list of all enabled apps in Splunk and their versions on a search head?. 07-29-2020 12:41 PM
- Karma Re: How to check when the index is disabled/enabled for woodcock. 06-05-2020 12:48 AM
- Karma For Splunk Enterprise, Splunk Light, and Hunk pre 6.3, default root certificates expire on July 21, 2016 - Recommendations? for Ellen. 06-05-2020 12:48 AM
- Karma Re: Is there a shorthand way to round all values in a search without using "eval value=round()" for each individual value? for somesoni2. 06-05-2020 12:48 AM
- Got Karma for Splunk DB Connect 2: How to troubleshoot why data is not getting indexed from an Oracle database using dbinputs?. 06-05-2020 12:48 AM
- Karma Dashboard issues - search runs in search window, not in dashboard as an inline string. for bwakely. 06-05-2020 12:47 AM
- Karma Re: Dashboard issues - search runs in search window, not in dashboard as an inline string. for unixadmins. 06-05-2020 12:47 AM
- Karma Re: Dashboard issues - search runs in search window, not in dashboard as an inline string. for landen99. 06-05-2020 12:47 AM
- Karma Re: Stacked bar graph using data across multiple indexes for acharlieh. 06-05-2020 12:47 AM
- Karma Re: How to write the regex to extract semicolon delimited fields? for aljohnson_splun. 06-05-2020 12:47 AM
- Karma Splunk DB Connect 2: Oracle does not input data in index for italogf. 06-05-2020 12:47 AM
- Karma Re: How to join large tables with more than 50,000 rows in Splunk? for wpreston. 06-05-2020 12:47 AM
- Karma Re: How to tell Splunk which fields are numbers in an uploaded .txt file? for lguinn2. 06-05-2020 12:47 AM
- Karma Re: When creating a statistical table using the table command, is there a way to disable sorting of data on click of table column headers? for vganjare. 06-05-2020 12:47 AM
- Karma Re: How to search a list of all enabled apps in Splunk and their versions on a search head? for MuS. 06-05-2020 12:47 AM
- Karma Re: Dynamic value display in the Panel Title for russellliss. 06-05-2020 12:47 AM
- Karma Re: Just installed DBX v2, what is rsCache.data used for? for jwelch_splunk. 06-05-2020 12:47 AM
- Karma Re: How do I route events into different indexes based on event type? for javiergn. 06-05-2020 12:47 AM
- Karma For large lookups, should I use the kv store? for a212830. 06-05-2020 12:47 AM
Topics I've Started
02-12-2019
04:47 AM
index="x" sourcetype="x" ConfigManagerErrorCode=28
| stats dc(host) AS host
for getting the hosts, and, as suggested by woodcock:
(index="x" sourcetype="x" ConfigManagerErrorCode=28) OR (index="aix" sourcetype="x")
| rex "Model=(?<model>.*)"
| stats values(model) AS Model count(eval(index="x")) AS count BY host
| mvexpand Model
| stats sum(count) AS count BY Model
This should work.
06-11-2018
07:52 AM
Looks like it's a Splunk GUI bug. When I tried saving this token by editing the XML at the backend, it worked; it only had trouble through the Splunk GUI.
06-11-2018
04:04 AM
Hi All,
I would like to pass a dropdown filter's value into a multiselect's token value suffix.
The code for the dropdown is:
<input type="dropdown" token="entity">
<label>Entity</label>
<choice value="*">All</choice>
<choice value="_time">Time</choice>
<choice value="_status">Status</choice>
</input>
I would like to pass this to the "Token value Suffix" of the multiselect filter:
<input type="multiselect" token="servers" searchWhenChanged="true">
<label>Servers</label>
<prefix>(</prefix>
<valuePrefix>measurementIdentifier=ITSMServerLatency_</valuePrefix>
<valueSuffix>-ping$entity$</valueSuffix>
<delimiter> OR </delimiter>
<fieldForLabel>Server</fieldForLabel>
<fieldForValue>Server</fieldForValue>
<search>
<query>index="my_index" application=my_app Type=Network (hostname=*) |rex field=Identifier "ServerToITSMServerLatency_(?<Server>\w+)\-\w+\_(?<entity>\w+)"|dedup Identifier |stats count by Server</query>
<earliest>0</earliest>
<latest></latest>
</search>
<choice value="*">All</choice>
<default>*</default>
<suffix>)</suffix>
</input>
My concern is that it works fine the first time, but afterwards the token loses its value and is replaced by the first or default value of $entity$.
Please let me know how I can maintain the token value in the multiselect.
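One thing that may be worth trying (a sketch, assuming Splunk 6.3+ SimpleXML; `entity_sfx` is an illustrative token name, not from the dashboard above): copy the dropdown's value into a second token via a `<change>` handler, and reference that token in the multiselect's `<valueSuffix>`, so the suffix is refreshed every time the dropdown changes:

```xml
<input type="dropdown" token="entity">
  <label>Entity</label>
  <choice value="*">All</choice>
  <choice value="_time">Time</choice>
  <choice value="_status">Status</choice>
  <change>
    <!-- re-set the helper token on every selection change -->
    <set token="entity_sfx">$value$</set>
  </change>
</input>
<!-- then, in the multiselect: <valueSuffix>-ping$entity_sfx$</valueSuffix> -->
```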
11-27-2017
02:20 AM
Hi @DalJeanis,
Thanks for replying. Let me clear the air here.
I am using a for loop where the variable i decides the batch size.
Inside the loop there is an if condition; when one condition is satisfied it starts search4, and up to this point everything works great.
In the else condition I trigger search9, where the value of the variable i is used to set a token used in the search.
So, for example:
if i = 1, I set token1=400 and token2=200 and trigger the search using startSearch();
if i = 2, I set token1=600 and token2=400 and trigger the search using startSearch(); and so on while the loop is valid.
Ideally, if the else branch runs for 3 values of i, there should be 3 token instances and 3 search jobs; instead I always and only get the last instance of search9.
Please let me know if you need more explanation.
Thanks
11-24-2017
06:22 AM
Hi
I have a Splunk query (with JavaScript) which I would like to run multiple times using a JavaScript loop. Please find the code below:
dupcount1 = splunkjs.mvc.Components.get("search8");
dupcount1.data("results").on("data", function(results) {
    if (DupData == true) {
        var dupcount = results._data['rows'][0][0];
        var i;
        var batchsize = 10;
        for (i = 0; i <= dupcount / batchsize; i++) {
            if (i == 0) {
                defaultTokenModel.set("head1", batchsize);
                search4.startSearch();
            } else {
                defaultTokenModel.set("head2", batchsize * (i + 1));
                defaultTokenModel.set("tail", batchsize * i);
                search9.startSearch();
            }
        }
        return true;
    }
});
Based on the value of i, I set tokens and start a search. When i == 0, the search runs fine and everything completes gracefully. The issue I am facing is that when i != 0, a job is triggered only for the last loop iteration. What I would like is that if there are 9 iterations, 9 searches are triggered; I cannot understand why only the last search starts in the else branch.
Thanks in advance.
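The symptom described above (only the last iteration's search runs) is the classic shared-state loop problem: every defaultTokenModel.set() call fires synchronously before any of the dispatched searches reads the tokens, so they all see the final values, and repeatedly re-dispatching the same search9 manager leaves only one job. A framework-free sketch of the usual fix, computing each batch's token values up front and handing each set to its own dispatch (dispatch stands in for code that creates and starts a fresh SearchManager; all names here are illustrative, not the author's actual fix):

```javascript
// Compute per-iteration token values up front so no iteration can
// overwrite another's tokens.
function buildBatches(dupcount, batchsize) {
  var batches = [];
  for (var i = 0; i <= Math.floor(dupcount / batchsize); i++) {
    if (i === 0) {
      // first batch mirrors the "head1" token from the post
      batches.push({ head1: batchsize });
    } else {
      // later batches mirror the "head2"/"tail" tokens from the post
      batches.push({ head2: batchsize * (i + 1), tail: batchsize * i });
    }
  }
  return batches;
}

// Dispatch one search per batch; each call gets its own frozen token set.
function runBatches(dupcount, batchsize, dispatch) {
  buildBatches(dupcount, batchsize).forEach(function (tokens) {
    dispatch(tokens); // e.g. create a new SearchManager with these values
  });
}
```

The key point: one shared token model plus one shared search manager collapses N synchronous set()/startSearch() calls into a single job with the last values. Either give each batch its own SearchManager, or chain dispatches on the previous job's search:done event so tokens are not reused before they are read.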
11-10-2017
05:45 AM
@karthi2809 The data arrives at 0300 hrs and you want to schedule the alert from 0600, every 3 hours; is that understanding correct? If so, schedule your alerts from 0300 using cron and run each one over the last 3 hours.
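As a sketch of that scheduling suggestion (the stanza name is illustrative, not from the post): a savedsearches.conf alert that fires at 03:00, 06:00, ... 21:00 and covers the preceding 3 hours might look like:

```
# savedsearches.conf -- illustrative stanza name
[my_3h_alert]
cron_schedule = 0 3-23/3 * * *
dispatch.earliest_time = -3h@h
dispatch.latest_time = @h
```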
07-27-2017
01:36 AM
Hi @tareddy
With | inputcsv testCSV.csv you will get the date as a string (human-readable or epoch, however you wrote it when creating the CSV), so you need to convert the string to a time using strptime and strftime. Once you have it in time format, you can use filters to get your desired results.
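A minimal sketch of that conversion (the field name date and the format string are assumptions; match them to how the CSV was actually written):

```
| inputcsv testCSV.csv
| eval _time = strptime(date, "%Y-%m-%d %H:%M:%S")
| where _time >= relative_time(now(), "-7d@d")
| eval date_readable = strftime(_time, "%Y-%m-%d %H:%M:%S")
```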
07-27-2017
01:22 AM
1 Karma
Hi @sarnagar,
Can you try putting core=0 in the search provided by @MuS? Something like this:
| rest /services/apps/local | search disabled=0 core=0 | dedup label | table label version
Hope that helps.
Thanks
06-20-2017
01:51 AM
Hi,
I am also facing the same issue. Did anyone get any answers?
09-20-2016
03:28 AM
@dmaislin
Is this not the correct app for AS400 (https://splunkbase.splunk.com/app/633/#/overview)?
Thanks in advance
05-05-2016
10:45 PM
This works wonders. Thanks, @woodcock!
05-05-2016
02:41 AM
1 Karma
I have a large data set in my KV Store collections, which includes time-specific fields. I would like to filter on these time-based fields using a time picker. Any suggestions for implementation?
05-02-2016
03:08 AM
Hi all
I have a large data set (20 million records) since 2015 which keeps growing. In my case I am supposed to use a lookup, and I found the KV Store best, since records in the index get updated while _key (ORDER_KEY) remains constant, so my lookup will be updated in place as well. With this huge set of growing data, will I land in some sort of performance issue?
I thought of using multiple KV Store lookups broken down by month, so that events from Nov 2015 go to kvlookup_nov2015 and events from Dec 2015 go to kvlookup_dec2015, based on the ORDER_KEY creation time. All the collections and transforms.conf entries for the lookup definitions would be created up front, but I am not able to select the lookup at run time in the search with | outputlookup.
I tried the macro approach with an eval-based definition: | outputlookup `filename(ORDER_KEY)`
[filename(1)]
args = ORDER_KEY
definition = case(match($ORDER_KEY$, "^201511.*"), "csv_lookup_nov_2015", match($ORDER_KEY$, "^201512.*"), "csv_lookup_dec_2015")
It did not work for me. Please help me out.
Thanks in advance.
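One detail worth checking (an assumption, not a verified fix for this post): an eval-based macro needs iseval = 1 in macros.conf, otherwise Splunk treats the definition as literal text rather than evaluating the case() expression:

```
# macros.conf -- stanza and lookup names follow the post
[filename(1)]
args = ORDER_KEY
definition = case(match($ORDER_KEY$, "^201511.*"), "csv_lookup_nov_2015", match($ORDER_KEY$, "^201512.*"), "csv_lookup_dec_2015")
iseval = 1
```

Even with the macro expanding correctly, note that macros expand once at parse time, so a single search still writes all its results to one lookup; routing different months of events to different lookups in one search would need separate outputlookup pipelines (for example, one scheduled search per month).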
04-26-2016
11:52 PM
No @ktugwell, there are no errors in splunkd.log or dbx.log.
04-20-2016
05:49 AM
1 Karma
I am trying to use Splunk DB Connect 2's dbinput feature to index data. Everything such as the connection and table preview works fine, but the data does not get indexed. When I use DB Connect 1, I am able to index data, but since the data size is huge there is some inconsistency, which is the reason I want to use DB Connect 2.
The database is Oracle; the dump works fine, but there is an issue with the rising column.
Rising column sample data: 2016-04-16 18:30:23 (yyyy-mm-dd H:M:S)
inputs.conf
_rcvbuf = 1572864
allowSslCompression = true
allowSslRenegotiation = true
connection = yanrep
dedicatedIoThreads = 2
disabled = 1
enableSSL = 1
host = scompsprdrk1v
index = sterlingdb1
input_timestamp_column_name = MODIFYTS
interval = 05 * * * *
maxSockets = 0
maxThreads = 0
max_rows = 10000000
mode = tail
output_timestamp_format = yyyy-MM-dd HH:mm:ss
port = 8088
query = select
sourcetype = YFS_ORDER_LINE
sslVersions = *,-ssl2
tail_follow_only = 1
tail_rising_column_name = MODIFYTS
ui_query_catalog = NULL
ui_query_mode = advanced
useDeploymentServer = 0
source = dbmon1
Thanks in advance
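For what it's worth, a DB Connect 2 tail input in advanced mode expects the query to reference the rising-column checkpoint through a ? placeholder, roughly like this (the table name is guessed from the sourcetype above and may not match the real table):

```sql
SELECT * FROM YFS_ORDER_LINE
WHERE MODIFYTS > ?
ORDER BY MODIFYTS ASC
```

Since query = select in the stanza looks truncated, it is worth confirming the stored query has this shape. With a DATE/TIMESTAMP rising column, the string checkpoint comparison can behave unexpectedly; converting the column explicitly (e.g. with Oracle's TO_CHAR) is one workaround people use.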
04-11-2016
04:54 AM
@hkgserverteam Are you still facing the issue? Yes, I have run into the 50,000-record limit.
03-21-2016
12:08 AM
I also had this issue with |dbquery, and it was resolved as per @unixadmins's suggestions.
03-17-2016
11:19 PM
Can you please be more specific with the question? Is this your sample data set?
03-17-2016
02:42 AM
@martin_mueller Hey, just a clarification: if I have 1 saved search which I have incorporated into 5 dashboards, will this search run 5 times, or will a single run populate all the dashboards? Thanks in advance.
03-01-2016
04:25 AM
@hkgserverteam I am still facing this issue. Did you resolve the 50,000-record limit?
02-03-2016
10:48 PM
This is feasible, and I completely understand it.
02-03-2016
05:49 AM
1 Karma
I have an indexing scenario; below are the points to be considered. Imagine I have a log file with DEBUG, INFO, and ERROR events.
- INFO events need to be filtered out using nullQueue (feasible).
- DEBUG and ERROR events need to go to DEBUGINDEX and ERRORINDEX respectively (is this feasible?).
If the second scenario is feasible, how?
My data flows from a universal forwarder to an indexer via a heavy forwarder.
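Both parts are feasible with index-time props/transforms on the heavy forwarder (parsing must happen there, not on the universal forwarder). A sketch, assuming the literal strings INFO/DEBUG/ERROR appear in the raw events and the target indexes already exist; the sourcetype name is illustrative, and index names are lowercased as Splunk requires:

```
# props.conf
[my_sourcetype]
TRANSFORMS-routing = drop_info, route_debug, route_error

# transforms.conf
[drop_info]
REGEX = INFO
DEST_KEY = queue
FORMAT = nullQueue

[route_debug]
REGEX = DEBUG
DEST_KEY = _MetaData:Index
FORMAT = debugindex

[route_error]
REGEX = ERROR
DEST_KEY = _MetaData:Index
FORMAT = errorindex
```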
12-11-2015
04:37 AM
I am currently experiencing an issue in our production environment and wanted to check whether any of you have encountered something similar. I was getting data into an index using Splunk DB Connect 1 until 2 days ago, when it suddenly stopped indexing data. The following is our DB Connect configuration:
- We are dumping data from an Oracle database to a Splunk index.
- Splunk version is 6.2.2.
- DB Connect version is 1.1.7.
- DB input strategy is DB Dump.
- Output format is Multi-Line Key Format.
When I run a query in the DB Connect query browser, it returns results successfully. There are NO errors in the splunkd logs, and dbx_debug shows that this dbinput triggered and completed gracefully.
Splunk DB Connect gets data from the database and keeps it in the SPLUNK_HOME/var/spool/dbmon directory, from where it runs a batch job that writes the data to the index using the sinkhole policy; on successful indexing it deletes the temporary file from SPLUNK_HOME/var/spool/dbmon.
I have data in SPLUNK_HOME/var/spool/dbmon, but not in the index.
I also have other indexes getting data from the same Oracle DB using the same DB Connect 1.1.7; they continue to work fine, while only a few indexes have stopped working.
Kindly suggest appropriate actions.
10-15-2015
03:55 AM
Transpose is a custom Python command shipped with the Search app. It has global visibility, which means that if you have access to the Search app, you can use transpose from any app. Is there any way for a user to see charts that use the transpose command without having access to the Search app?
08-06-2015
05:06 AM
[default]
maxresults=100000
[email]
command = $action.email.preprocess_results{default=""}$ | sendemail "results_link=$results.url$" "ssname=$name$" "graceful=$graceful{default=True}$" "trigger_time=$trigger_time$" maxinputs="$action.email.maxresults{default=100000}$" maxtime="$action.email.maxtime{default=5m}$" results_file="$results.file$"
I have made the above changes; now I am facing a limit of 50,000 records in my CSV attachment.