Activity Feed
- Posted Re: How to Chart data by month with sorting order on Splunk Search. 02-16-2022 08:26 AM
- Posted How to chart data by month with sorting order? on Splunk Search. 02-15-2022 11:31 AM
- Tagged How to chart data by month with sorting order? on Splunk Search. 02-15-2022 11:31 AM
- Posted Forwarder outputs.conf checking DSN updates on Splunk Enterprise. 10-06-2021 08:38 AM
- Tagged Forwarder outputs.conf checking DSN updates on Splunk Enterprise. 10-06-2021 08:38 AM
- Posted Restoring data to new indexer for DR on Splunk Enterprise. 08-24-2021 11:55 AM
- Tagged Restoring data to new indexer for DR on Splunk Enterprise. 08-24-2021 11:55 AM
- Posted Read only the JSON section of each line in a monitored file on Getting Data In. 06-14-2021 10:09 AM
- Posted Website monitor app credentials on All Apps and Add-ons. 04-12-2021 02:16 PM
- Posted Figure out what forwarder data came from on Getting Data In. 12-02-2020 08:38 AM
- Posted Get data from Azure Function into Splunk Enterprise on Getting Data In. 11-16-2020 09:05 AM
- Posted Search Head version to Index Cluster version on Deployment Architecture. 11-03-2020 12:25 PM
- Tagged Search Head version to Index Cluster version on Deployment Architecture. 11-03-2020 12:25 PM
- Got Karma for Re: Auditing who disabled/enabled alerts on a Search Head. 10-22-2020 09:25 PM
- Got Karma for Auditing who disabled/enabled alerts on a Search Head. 10-22-2020 09:22 PM
- Posted Re: Question about uninstall/reinstall of UF on Installation. 08-25-2020 09:10 AM
- Posted Question about uninstall/reinstall of UF on Installation. 08-24-2020 10:14 AM
- Posted Reading an Environment Regkey and using in Searches on Splunk Search. 07-29-2020 01:00 PM
- Tagged Reading an Environment Regkey and using in Searches on Splunk Search. 07-29-2020 01:00 PM
- Posted What permissions are required for a non-admin user to be able to save a Slideshow? on All Apps and Add-ons. 07-21-2020 09:03 AM
04-02-2019
10:38 AM
1 Karma
We just recently upgraded from Splunk 6.6.3 to 7.2.4.1 and noticed a change to one of our alerts based on its cron schedule.
The cron schedule for the alert is set to this:
3 21 1-7,15-24 * 0
Before the upgrade, this was working to send out the alert the 1st and 3rd Sundays of the month.
After the upgrade, it is now sending out on those Sundays AND every day between the 1st-7th, and we figure it will also send every day from the 15th-24th.
Did the cron scheduler get changed in the version upgrade?
Also, where can I find what cron version Splunk is utilizing?
For now we have changed the cron schedule to send out on the 1st and 15th, so it will only send twice a month, but we would like it to just be every other Sunday.
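For reference, the workaround we are considering (untested, and assuming standard cron semantics, where day-of-month and day-of-week are OR'd when both are restricted) is to schedule the alert for every Sunday and filter by day-of-month inside the search itself:
3 21 * * 0
with a guard appended to the alert's search so it only returns results in the same day-of-month windows as the original schedule:
... original alert search ...
| eval dom = tonumber(strftime(now(), "%d"))
| where (dom >= 1 AND dom <= 7) OR (dom >= 15 AND dom <= 24)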
Thanks.
... View more
03-08-2019
09:35 AM
It would be weeks, as our security team is moving away from Enterprise Security, and I just didn't want to have to upgrade ES on its search head and risk something not going cleanly, then have to spend so many cycles getting it back up and working before we remove ES.
We are discussing just going ahead and doing the upgrade of ES on that server so that the rest of the environment can be upgraded.
... View more
03-07-2019
02:25 PM
From what I read on this link
https://docs.splunk.com/Documentation/Splunk/7.2.4/Indexer/Upgradeacluster#Upgrade_each_tier_separately
It says that you can upgrade each tier of the clustered indexer/search head deployment separately by going master, then search heads, then indexers. I do see that it is not recommended to stay in a mixed-version configuration for any length of time, but it does seem to be allowed.
Am I just reading that incorrectly?
... View more
03-05-2019
01:31 PM
I am trying to figure out if I will run into any issues while upgrading our Splunk Enterprise environment from 6.6.3 to 7.2.3.
We have a distributed environment that has:
- License Master (covers all environments)
- Search Head (test)
- Indexer (test)
- 2 Heavy Forwarders (test)
- Search Head Cluster (4 nodes with an additional Deployer server, which is also the Cluster Master for the West Coast datacenter) (prod)
- Stand Alone Search Head (prod)
- 2 Heavy Forwarders (prod)
- Index Cluster (4 nodes with an additional Cluster Master server) (prod) (West Coast datacenter)
- Index Cluster (4 nodes with an additional Cluster Master server) (prod) (East Coast datacenter)
- Index Cluster (3 nodes with an additional Cluster Master server), where the Cluster Master is also the Deployment Server for both prod and test environments
- Search Head running Enterprise Security
We currently have a few caveats in the environment that will affect our upgrade. We cannot upgrade Enterprise Security for now, which means that we cannot upgrade the Search Head it runs on, since our ES version is 4.7.4, which cannot run with Splunk 7.2.3.
My plan is to upgrade in the following order:
- License Master
- Test Search Head
- Test Indexer
- Test Heavy Forwarders (both)
- Prod Stand Alone Search Head
- All 3 Cluster Masters (1 is also the Deployer for the Search Head Cluster, 1 is also the Deployment Server)
- Prod Search Head Cluster
This will leave the Prod Heavy Forwarders and all of the Prod Indexers on Splunk 6.6.3. We will also not upgrade any of our Universal Forwarders until we are able to move forward with updating the rest of the infrastructure servers.
Does this plan cover everything, or will we have problems with it?
Thanks.
... View more
Labels
- heavy forwarder
- indexer
- license
- search head
- upgrade
03-01-2019
02:47 PM
You need to add v_user_name to line 4 as well as to the table line in line 7.
In line 4 you are declaring which fields to keep going forward, and all you are bringing back from the subsearch is dest_ip.
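A minimal sketch of the shape I mean (everything other than v_user_name and dest_ip is a placeholder, since I can't see your full search):
line 4: ... | fields dest_ip v_user_name
line 7: ... | table dest_ip v_user_name
If v_user_name only exists inside the subsearch, you would also need to add it to the subsearch's fields list so it comes back to the outer search.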
... View more
02-28-2019
08:13 AM
Yes, we have changed the ulimit for the number of open files.
I will see if the team is willing to change the setting to roll hourly.
... View more
02-28-2019
07:12 AM
Are you saying to change the way the log files are created so that they contain the date/hour in the filename? They only get created daily, so that would not really change anything.
max_days would not do anything, as we only get the same day's files for devices in the directory (all files are moved out nightly, and new files are created when new data comes in for a device after midnight).
I will be trying maxKBps=0.
The multiple pipelines idea was a thought I had. I know each pipeline gets the same bandwidth allowance (so if maxKBps is set to 256, it would use 512 with 2 pipelines), but I wasn't sure: would each pipeline work on its own rotation of the indexers it sends to, or would both pipelines send to the same indexer? (i.e., if there are 4 indexers, would both send to indexer 1 until switching over to indexer 2, etc.)
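For anyone else reading, the pipelines setting I am referring to (as I understand it from the server.conf spec) goes in server.conf on the forwarder:
[general]
parallelIngestionPipelines = 2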
... View more
02-28-2019
07:05 AM
I will try changing maxKBps to 0 so it is unlimited. Not sure if the servers can handle it, though.
We only have a couple of hundred files in the directory that it is picking up, but some get very large.
There are no old files in the directory, as they are cycled off to a different directory nightly; new files for each device get created when new data comes in after midnight.
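For reference, the change I am testing is in limits.conf on the universal forwarder (0 meaning unthrottled, if I am reading the spec correctly):
[thruput]
maxKBps = 0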
... View more
02-27-2019
03:21 PM
Would making that change in limits.conf prevent the files from going into batch mode? Or would it just allow more data to be read from the files in batch mode at a time?
... View more
02-27-2019
02:36 PM
We are running Splunk 6.6.3 and have universal forwarders on our syslog servers. We are finding that some of the data gets behind for some of the hosts that the syslog server has files for.
Some of the files get very large throughout the day (the file for each host sending to the syslog server cycles into a new file daily). At least 3 of the files get to a point where Splunk is enqueuing them in batch mode. These files are mostly from our InfoBlox servers or our Panorama for our firewalls.
The syslog servers are not being overtaxed, so I should be able to adjust some numbers higher to allow for better throughput, but I'm not sure what the best setting changes would be.
Thanks.
... View more
02-27-2019
02:31 PM
If you can grab a copy of the file you are trying to read, then on a dev Splunk instance walk through the Add Data function in the web console.
Just import your file directly, and at the Set Source Type step choose Structured > _json.
You can then make sure it is parsing correctly and do a Save As to a new sourcetype name. When you finish getting it all read in, you can go to the filesystem and look for the inputs/props/transforms .conf files it creates. You can then use those on the forwarder you were originally trying to read the file from (or push them out through a deployment server in an app).
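For a simple JSON file, the generated props.conf usually ends up looking something like this (the sourcetype name here is just an example):
[my_json_sourcetype]
INDEXED_EXTRACTIONS = json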
... View more
09-20-2018
08:55 AM
With the upcoming changes to the cost of support for Java 8, is there any other way that people are connecting to databases (mainly Microsoft SQL Server, but other database types may be needed in the future) without using DB Connect and Java? According to the docs for DB Connect, JRE 8 is required, and that is the version whose support pricing is changing.
This would be for both executing ad hoc queries to show data and pulling data in for indexing/storing in Splunk.
Thanks.
... View more
09-10-2018
08:21 AM
I am trying to figure out how I can measure the latency that my search head cluster nodes are experiencing between each other.
The configuration of the search head cluster is Splunk 6.6.3, all servers are Windows Server 2012 R2, 2 of the members are in 1 data center (along with the deployer) and the other node is in another data center.
The search head cluster has been up for a while and was running without any real issues. But after this month's Windows security patching and reboots, the captain fails over to a different member pretty regularly. Before, it only failed over to another member when we were performing work on the cluster.
I am figuring that the issue has to do with latency between the cluster members and want to query that metric.
I would also welcome any other ideas on why it might suddenly have started having this issue (I have other standalone search heads which got the same security patches and are having no issues).
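What I have found to check so far (no guarantee this is the right approach): the cluster status command on any member,
splunk show shcluster-status
and a rough search over the internal logs for cluster-related warnings (exact component names may vary by version):
index=_internal sourcetype=splunkd component=SHC* log_level!=INFO
| stats count by host, component, log_level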
... View more
08-24-2018
07:25 AM
I was discussing this with our Splunk engineer, and the bigger caveat that came out of it was that since the data needs to be sent to the third party in syslog format, the only way to do that is to utilize a HF.
It is disappointing that I have to have another server out there just so that I can have the data sent out to the third party.
... View more
08-22-2018
12:15 PM
Pretty much all of our forwarders are the ones that have Windows logs (almost all) & IIS logs (many).
Can the filtering piece be done on the indexer cluster peer node level instead of having to hit a HF first?
... View more
08-22-2018
10:11 AM
I am working on a POC third-party system for some of our data and need to get data from Splunk forwarded over to it.
I was looking through this link http://docs.splunk.com/Documentation/Splunk/6.6.3/Forwarding/Forwarddatatothird-partysystemsd
And was hoping someone might have done what I am trying to do.
We want to send all of our Windows & IIS logs from our forwarders to the third-party system as a syslog feed.
All of our forwarders currently send directly to our backend indexers (which are a set of 3 different indexer clusters).
From looking at that link, it seems like if I want to separate data (only some sourcetypes/indexes/etc) that is getting sent from the forwarders to the other location, I have to pass the data through a heavy forwarder. I want to avoid doing this because that would mean repointing all of our forwarders to go through the heavy forwarder.
Can the division of the data be done from the forwarders themselves? Or even by making a change on the indexer side to get the raw data over to the third-party through a syslog feed?
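In case it helps frame the question, the heavy forwarder approach from that doc page would look roughly like this (target name, port, and sourcetype stanza are placeholders):
outputs.conf on the HF:
[syslog:thirdparty]
server = thirdparty.example.com:514
props.conf / transforms.conf on the HF to pick out just the Windows & IIS sourcetypes:
[WinEventLog]
TRANSFORMS-route_syslog = route_to_thirdparty
[route_to_thirdparty]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = thirdparty
It is exactly this extra HF hop that I am hoping to avoid.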
... View more
04-19-2018
01:11 PM
I am trying to read data from an Azure Storage Table and currently am using the Splunk Add-on for Microsoft Cloud Services.
I am able to get the data read into Splunk for the whole table, but I am having trouble getting the host field changed from the server where the data input runs to part of one of the fields in the data being read in. (I want this done at index time.)
The data in the Azure table is being written with NLog.
When the data is read in, Splunk recognizes multiple fields from the data in the columns. The field Message is json and inside there is a field of machine. That is what I am trying to get the host to be.
This is what I have in the .conf files:
inputs.conf
[mscs_storage_table://Test Table Read 10]
account = Testing POS Logs
collection_interval = 300
index = azure
sourcetype = mscs:storage:table:test10
start_time = 2018-04-17T16:00:09-07:00
table_list = POSNlog
props.conf
[mscs:storage:table:test10]
TRANSFORMS-host_rename=rename_host_by_field_host
transforms.conf
[rename_host_by_field_host]
SOURCE_KEY=field:Message
REGEX=Message="machine\":\"(?<host>.+?(?=\"))"
FORMAT = host::$1
DEST_KEY=MetaData:Host
One of the entries being read in as indexed right now looks like this:
{"odata.etag": "W/\"datetime'2018-04-18T18%3A04%3A37.9493312Z'\"", "PartitionKey": "20180418.NLogAzureTest.Test2", "Timestamp": "2018-04-18T18:04:37.9493312Z", "Message": "{\"time\":\"2018-04-18 11:04:33.8902\",\"utc-time\":\"2018-04-18 18:04:33.8902\",\"level\":\"Error\",\"message\":\"Oh noes!\",\"exception\":\"System.ArgumentException: Too much boom!\r\n at NLogAzureTest.Test2.Log() in C:\\Users\\fischja\\Documents\\Visual Studio 2017\\Projects\\NLogAzureTest\\Program.cs:line 78\",\"exceptionData\":\"boomPercent: 100.10\",\"logger\":\"NLogAzureTest.Test2\",\"machine\":\"LT-B02107\",\"processId\":\"7924\",\"processName\":\"NLogAzureTest\",\"identity\":\"notauth::\",\"windowsIdentity\":\"TBECU\\fischja\"}", "RowKey": "0636596714738902451.0c653fa7-c116-4ba5-a3f5-327f7aebeb6f"}
Any ideas why I am not getting the host converted correctly?
Also, a slightly different question about reading from Azure Storage Tables: for the table we are reading from, we actually only care about the data in the Message field. Is there a way, either with this app or something different, to pull in just that field and parse the data as straight JSON, since that field is formatted that way?
Thanks.
... View more
04-18-2018
12:35 PM
That was the issue. We had adjusted what goes in the replication bundle and it was not sending over the scripts.
Adjusted that part and sure enough it is working fine once again.
Thanks.
(I would mark this as the correct answer but for some reason it is not showing me that option)
... View more
04-18-2018
08:56 AM
I am trying to use the xmlkv command in a search on one of my search heads and it is returning errors. This had worked in the past so I am not sure what might have changed to start causing the issue.
The search being used is:
sourcetype=CUDL | xmlkv | top Status
The error that I am getting is:
[INDEX01] Streamed search execute failed because: Error in 'xmlkv' command: Cannot find program 'xmlkv' or script 'xmlkv'.
[INDEX02] Streamed search execute failed because: Error in 'xmlkv' command: Cannot find program 'xmlkv' or script 'xmlkv'.
[INDEX03] Streamed search execute failed because: Error in 'xmlkv' command: Cannot find program 'xmlkv' or script 'xmlkv'.
The setup we have is a non-clustered search head which is reading data from a 3 node set of clustered indexers.
All of our servers are running on Windows OS and Splunk Enterprise 6.6.3.
I have other search heads that connect to the same backend indexers for pulling data, and when executing the same command on them, the data returns as expected.
So I am thinking this is some issue on the search head triggering the errors, like a permission or something, but I cannot find anything that is different from the other search heads.
Any ideas on what might be set incorrectly?
... View more
03-29-2018
08:59 AM
I have a dashboard for our SOC and all of the panels on it work fine. But when we added an auto-refresh for the whole page, we found that the HTML panel in the dashboard is not refreshing (the other panels, which are just normal searches, refresh fine).
This is the top of the XML code for the dashboard down through the panel which is not refreshing.
<dashboard refresh="300" script="access_center.js" stylesheet="hide_export_pdf.css">
<label>SOC Home</label>
<row>
<panel>
<html id="element1">
<div class="key-indicators" data-group-name="soc_home"/>
</html>
</panel>
</row>
This is in our Enterprise Security app, and the top panel, which is the one that is not refreshing, shows the Access/Network/Potential Data Loss/Malware Notables indicators.
I believe these are getting populated by the script but am not sure how to modify it so that it will refresh on a timer.
Any help would be appreciated.
Thanks.
... View more
03-08-2018
08:02 AM
I am looking at possibly setting up multisite indexer clustering on some new indexers we are setting up. We have 2 indexers in each of 2 sites.
If I wanted 1 copy of the data to stay in the originating site and a replicated copy to be in the other site, would I just need to set the site replication values to this:
available_sites=site1,site2
site_replication_factor = origin:1,total:2
From reading the documentation on multisite indexer clustering, I see the passage below and assume that the above setting would force 1 copy onto the originating site, and since the other site would not yet have a copy, it would push the required replicated copy to that site.
Because the total value can be greater than the total set of explicit values, the cluster needs a strategy to handle any "remainder" bucket copies. Here is the strategy:
If copies remain to be assigned after all site and origin values have been satisfied, those remainder copies are distributed across all sites, with preference given to sites with less or no copies, so that the distribution is as even as possible. Assuming that there are enough remainder copies available, each site will have at least one copy of the bucket.
Is my assumption correct?
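For context, the full block I would put in server.conf on the master would be roughly this (site names and the search factor values are my own assumptions):
[general]
site = site1
[clustering]
mode = master
multisite = true
available_sites = site1,site2
site_replication_factor = origin:1,total:2
site_search_factor = origin:1,total:2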
... View more
12-27-2017
07:52 AM
Would this setting prevent large files from being enqueued? Or will it just change the amount of data the universal forwarder sends out at a time?
I had changed that setting to 5120, but then it seemed like lots of the other files stopped being picked up.
... View more
12-22-2017
08:11 AM
We have a Linux server which is receiving our syslog traffic and on that machine we have a universal forwarder running on it to read all of the syslog files to send them off to our Splunk indexers.
The syslog server has 300+ different devices which send to it and a few of them get to be very large files. There is a separate file for each device and it rolls over to a new file at midnight.
This is where the issue occurs. The universal forwarder is hitting this error on some of the files:
WARN TailReader - Enqueuing a very large file
It says that for each of the large files. Some of the files do seem to get read eventually, but the data is behind at that point, and others of the files are not read at all.
What can I do on the universal forwarder to keep these files from being read in batch mode (which is how the ones that do eventually get read are handled) and instead just tail the files as they go along? And how can I ensure that all of the files are getting picked up?
Thanks.
... View more