Activity Feed
- Posted Re: FILTERING on Deployment Architecture. 07-17-2020 08:43 AM
- Karma Re: Can you store data to Splunk without indexing? for afurrowgtri. 06-05-2020 12:50 AM
- Got Karma for Splunk UF wineventlog monitoring is too slow. 06-05-2020 12:50 AM
- Got Karma for Splunk UF wineventlog monitoring is too slow. 06-05-2020 12:50 AM
- Got Karma for "The icon found in the package is not a valid png file" error on publish to splunkbase. 06-05-2020 12:50 AM
- Karma Re: What is the meaning of index? for niketn. 06-05-2020 12:49 AM
- Got Karma for Manipulating data before indexing. 06-05-2020 12:49 AM
- Got Karma for Re: Manipulating data before indexing. 06-05-2020 12:49 AM
- Got Karma for What is the meaning of index?. 06-05-2020 12:49 AM
- Got Karma for Re: What is the meaning of index?. 06-05-2020 12:49 AM
- Karma Re: How to embed a Splunk dashboard in an iframe? for halr9000. 06-05-2020 12:48 AM
- Karma Re: How to resolve issues with mongod startup such as "Failed to start KV Store process" error? for hunters_splunk. 06-05-2020 12:48 AM
- Posted Slow web UI in Search Head cluster on Deployment Architecture. 12-12-2019 10:46 PM
- Tagged Slow web UI in Search Head cluster on Deployment Architecture. 12-12-2019 10:46 PM
- Posted "The icon found in the package is not a valid png file" error on publish to splunkbase on All Apps and Add-ons. 10-05-2019 09:49 AM
- Tagged "The icon found in the package is not a valid png file" error on publish to splunkbase on All Apps and Add-ons. 10-05-2019 09:49 AM
- Posted Re: Executing search query on a remote Splunk Instance, may be using REST command or Command line on Getting Data In. 10-01-2019 10:31 AM
- Posted Re: Index Time Fields Extraction on Summary Index on Splunk Search. 08-13-2019 04:50 AM
- Posted Re: Index Time Fields Extraction on Summary Index on Splunk Search. 08-13-2019 01:27 AM
- Posted Index Time Fields Extraction on Summary Index on Splunk Search. 08-12-2019 07:15 AM
07-17-2020
08:43 AM
What exactly do you want to limit? If you want to limit the index size, you can use `frozenTimePeriodInSecs` to remove data automatically by age or `maxTotalDataSizeMB` to cap the index size; both are set in indexes.conf. Regarding filtering, you can define filtering rules depending on the input type you are using - see inputs.conf.
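For example, a minimal indexes.conf stanza might look like this (the index name and values are placeholders, not a recommendation):
[my_index]
# freeze (and by default remove) events older than ~90 days
frozenTimePeriodInSecs = 7776000
# cap the total index size at roughly 500 GB
maxTotalDataSizeMB = 500000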
12-12-2019
10:46 PM
Hey,
My SH cluster web UI is very slow when opening several management pages, such as Views, Lookup definitions & automatic lookups (these pages take around 30 seconds to load).
The issue does not occur on the apps or the dashboards themselves.
Any ideas?
Thanks
10-05-2019
09:49 AM
1 Karma
Hey
I'm trying to publish an app to splunkbase.
This is the error message I get: "The icon found in the package is not a valid png file"
The app contains 4 icon files in .png format in the static folder.
Does anyone know how to solve this issue? AFAIK, those are the only icon files needed (I have a published app from a year ago with a similar structure).
Thanks
10-01-2019
10:31 AM
I made a search command to do it easily.
Probably too late for this issue, but maybe someone will find it helpful and simpler than using the REST API directly:
https://github.com/omerl13/remote-splunk-search
Usage will be like:
| remote
host="mysplunk2.com"
query="index=main | head 50 | table _time host _raw"
username="user"
password="changeme"
(Tokens are also supported)
08-13-2019
04:50 AM
Yes, this might be a good solution. How should the additional fields be considered? Is '=' as a delimiter enough?
Thanks
08-13-2019
01:27 AM
No, I think it's more an issue with the 'collect' command: adding the data manually does extract the fields, but with collect the fields are not extracted at index time, even though I'm setting sourcetype=_json.
08-12-2019
07:15 AM
Hey
I'm trying to extract fields at index time on my summary index, in order to use the 'tstats' command.
I used 'collect' to index the data, setting sourcetype=_json, but I couldn't get the fields extracted at index time.
I tested the command by using 'makeresults' and manually building the _raw field, but the fields were only extracted at search time (with KV_MODE=auto). With KV_MODE=none and INDEXED_EXTRACTIONS=json, the fields were not indexed.
So I ran a different test: I copied the generated _raw to a local file and added it using the Upload File option. This time the fields were extracted at index time, as desired.
Is it possible to index fields using the collect command? Or am I doing something wrong?
Also, I've checked Accelerated Data Models, but they didn't fit my needs (due to non-streaming commands).
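For reference, the kind of search I'm using to populate the summary index looks roughly like this (index and field names are made up for the example):
index=web sourcetype=access_combined
| stats count by status host
| collect index=my_summary sourcetype=_json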
01-20-2019
01:09 PM
I tried updating the environment to version 7.2.3 and still no change. Trying to contact support. In the meantime - @lakshman239 you said this is a known issue; do you know anyone who had it / solved it / has a suggestion regarding it?
01-16-2019
02:21 PM
Thanks, but the suggestions in the link do not seem to help. Is there another suggestion, or should I maybe try a different tool for forwarding? I'm trying the StreamSets edge data collector, and I wonder whether you have a better tool for forwarding wineventlog. Thanks!
01-13-2019
11:45 AM
Well, I found out that the monitoring console settings page was the solution! The roles were correct; all I had to do was press Apply Settings. Weird, but it worked. Thanks!
01-13-2019
11:41 AM
2 Karma
Hey,
I have around 30 Splunk Universal Forwarders in my environment, monitoring the local Event Log (Windows Server 2016).
Lately I noticed that a few forwarders have a delay / are sending events too slowly.
I checked the traffic and noticed that once every 20-30 seconds the forwarder sends around 3K events to the indexers, which is a very small amount of data, while the event log is creating many more events, much faster.
So the slow forwarding has opened a gap of around 30 minutes in the data.
I tried increasing the queue sizes and setting thruput to unlimited. The performance of the server seems fine - no high CPU or memory usage.
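For reference, the relevant settings I changed on the forwarders look roughly like this (the queue size is just the value I tried, not a recommendation):
# limits.conf
[thruput]
maxKBps = 0
# server.conf
[queue=parsingQueue]
maxSize = 10MB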
I looked at another server, which currently seems to send its events on time (and has a lot more events in its event log, yet it is faster), and from sniffing the traffic it seems the forwarder is sending events almost every second - no ~20-second interval.
I tried forwarding to a different (test) environment, thinking the indexers were getting too many events from too many forwarders, but it did not seem to make any difference.
Also, the Splunk Universal Forwarder on all the servers is configured the same way, via a deployment server.
I wonder if any of you has had this issue, or can think of a possible cause of the problem.
Thanks!
01-09-2019
10:36 PM
Is there a pattern to the rest of the word? The 'rex' command should probably fit your need.
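For example, something along these lines (the field name and pattern are placeholders for whatever your data actually looks like):
... | rex field=_raw "(?<matched_word>prefix\w+)"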
01-09-2019
10:34 PM
I think this may help you:
https://answers.splunk.com/answers/92257/can-single-forwarder-forward-data-to-two-different-indexers.html
01-09-2019
12:10 AM
No, I have not noticed anything; it has been like this for weeks. Was it a good solution to add them manually? Shouldn't it update automatically?
01-08-2019
11:39 PM
Yes, and it shows up in the Search Heads section on the Cluster Master “indexer clustering” page
01-08-2019
11:28 PM
Well, this is a slightly different situation. I have a single search head, not a cluster, and it fails to add new peers (indexers) to its DMC group.
01-08-2019
10:58 PM
Hey,
I noticed a problem in my clustered environment: the SH could not search over 2 new peers I had added to the cluster earlier.
When searching the new peers' '_internal' logs, no logs were shown, but when running the same search on the cluster master, I found the events.
Note that the new peers were not marked as quarantined, and they did appear in the Distributed Search Peers list.
I noticed that the monitoring console did not show them in the Resource Usage section, which uses the dmc lookup, so I found a solution - I had to manually add the peers to 'distsearch.conf' on the SH (SPLUNK_HOME/etc/system/local/distsearch.conf).
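For reference, the kind of entry I added looks like this (hostnames and ports are placeholders):
[distributedSearch]
servers = https://peer03.example.com:8089,https://peer04.example.com:8089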
I wonder why the peers were not in the file already, as the others were, and I have never had to change it before.
Is it a bug? Will I have to do this every time I add a new peer, or is there a better way to handle it?
Thanks!
05-26-2018
11:05 PM
Hey,
I am thinking of having 2 indexer clusters in my environment:
1. A “raw data” cluster, which receives data from Windows event forwarders & other “external” connectors.
2. A summary cluster, which receives data from the search heads after they have summarized it and kept only part of the “raw data” from cluster 1.
I was wondering whether this is the best solution to my problem: I want to summarize the data to keep it searchable, which is not possible with the amount of raw data I have, but still let the users work with the “raw data” in real time, so both clusters need to be searchable.
Is separating the clusters a good idea? Or would it be better to use 1 cluster for both purposes, on the same hardware?
Thanks!
04-05-2018
02:46 PM
Hey!
I'm trying to run a search with the JS Splunk SDK and periodically check the job for the current results. I found out that there are multiple exec_mode values possible, and if I set it to normal, I can get the job ID before the search has ended.
I tried to run my job this way:
service.search(
searchQuery,
searchParams,
async function (err, job) {
console.log("...done!\n");
let count = 0;
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms))
await sleep(4000);
job.preview({}, function (err, results) {
if (err) {
console.error(err)
return;
}
if (results.rows) {
count++;
console.log("========== Iteration " + count + " ==========");
console.log(results.rows)
for (var i = 0; i < results.rows.length; i++) {
var row = results.rows[i];
console.log(stat + row[countIndex]);
}
console.log("=================================");
}
});
});
with these properties:
const searchQuery = "search index=*";
const searchParams = {
exec_mode: "normal"
};
The output of this job.preview is that there are no rows yet.
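What I'd ideally like is to keep polling the job until preview rows start showing up, roughly along these lines (just a sketch - the interval and the isDone/resultPreviewCount checks are my assumptions about how to drive the SDK here):
const poll = setInterval(() => {
    // refresh the job's state from the server
    job.fetch((err, job) => {
        if (err) { console.error(err); clearInterval(poll); return; }
        const props = job.properties();
        // once the job reports preview results (or is finished), read them
        if (props.isDone || props.resultPreviewCount > 0) {
            job.preview({}, (err, results) => {
                if (err) { console.error(err); return; }
                console.log(results.rows);
                if (props.isDone) clearInterval(poll);
            });
        }
    });
}, 2000);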
Is there anything wrong with the way I'm checking the job results? Is there a way to get the results before the job ends?
Thanks!
03-29-2018
12:06 PM
Yes, I thought that would be the only solution. As a developer, I don't really like the idea of doing a significant amount of coding in Splunk, as it is harder to maintain and less reusable. Do you think the Excel-like layout of the Lookup Editor could fit in a dashboard panel? I think that Excel-like layout merged with Splunk's table visualization would be the best.
Thank you!
03-28-2018
10:38 AM
Just a suggestion - use @y@w to start from the first day of the week.
03-28-2018
10:25 AM
Thank you for the suggestions!
I think that what I'm trying to achieve is not what splunk was built for.
What I need is more like Excel, where I can edit the data right in the table, without the need for an external input form and without re-running a search on each edit.
I am familiar with the Lookup Editor app, but I wanted to work on a regular dashboard, using Splunk's table (better UI and UX).
Thanks anyway!
03-27-2018
11:55 PM
I would recommend using the transaction command, as it seems to do exactly what you need.
So I would change this query:
index=milo sourcetype=rto FATAL earliest=-30m@d latest=now
| bucket _time span=1m
| stats count by failed_host _time
| eval occurred=if(count!=3,"FTP failed", null())
| where isnotnull(occurred)
| table occurred failed_host _time count
to something more like:
index=milo sourcetype=rto FATAL earliest=-30m@m
| transaction failed_host maxspan=1m
| search eventcount >= 3
| table failed_host _time eventcount
Now Splunk will look for transactions of the same failing host within 1 minute (=maxspan) and combine them into one event, which includes the eventcount field counting the number of events in the transaction. You may also find the duration field interesting (I left it out of the query), since it tells you exactly how long the transaction lasted.
I hope it helps you!
Omer
edit:
To organize the results as groups of time I would add this to the end of my query:
| bin _time span=1m | stats list(*) as * by _time
03-27-2018
11:24 PM
Hey,
As the example shows, I have 2 fields - IP and Version. Let's take IP as the key of the table, and say the Version column is partially filled - some of the rows are blank. The data I have comes from an index, but the source of this index does not have data for all the IPs, which means I need a way to let the user complete the missing data manually.
What is, in your opinion, the (hopefully best) way to achieve that?
Thanks