Activity Feed
- Got Karma for Re: File will not be read, seekptr checksum did not match for a file in splunk. 10-18-2024 01:31 AM
- Got Karma for Re: Indexer Discovery Error (IndexerDiscoveryHeartbeatThread). 07-11-2024 07:27 AM
- Got Karma for Re: Forget Password Keyfor splunk Indexer cluster. 07-09-2024 11:57 AM
- Got Karma for Re: systemd start restart for splunk not working as expected. 07-09-2024 10:36 AM
- Got Karma for Re: systemd start restart for splunk not working as expected. 07-09-2024 10:36 AM
- Got Karma for Re: systemd start restart for splunk not working as expected. 07-03-2024 09:10 PM
- Got Karma for Re: Where to create an index in a clustered environment?. 05-14-2024 10:03 PM
- Got Karma for Re: ERROR Configuration from app=<appname> does not support reload: server.conf/[clustering]/master_uri. 02-29-2024 11:35 PM
- Got Karma for Re: File will not be read, seekptr checksum did not match for a file in splunk. 02-08-2024 08:02 AM
- Got Karma for Re: Which works best in a SHC? Even or Odd number of search heads to avoid the SHC Service becoming not available?. 01-26-2024 10:13 AM
- Got Karma for Re: Run a Scheduled Report on Demand. 01-05-2024 09:28 AM
- Got Karma for Re: Run a Scheduled Report on Demand. 01-05-2024 09:24 AM
- Got Karma for Re: Run a Scheduled Report on Demand. 01-05-2024 09:24 AM
- Got Karma for Re: Scripting admin credentials in scripted install. 12-13-2023 07:23 AM
- Got Karma for Re: Scripting admin credentials in scripted install. 12-13-2023 07:22 AM
- Got Karma for Re: ERROR DeployedApplication - Failed to install app=/web/splunk/etc/master-apps/s; reason=Application does not exist. 12-12-2023 06:19 AM
- Got Karma for Re: Applying quarantine and removing quarantine. 11-22-2023 05:49 AM
- Got Karma for Re: Forced bundle replication failed. Reverting to old behavior - using most recent bundles on all. 11-20-2023 12:13 PM
- Got Karma for Re: Which works best in a SHC? Even or Odd number of search heads to avoid the SHC Service becoming not available?. 11-14-2023 02:12 AM
- Got Karma for Re: can we get the previous results of scheduled report?. 11-02-2023 05:10 PM
Topics I've Started
No posts to display.
09-23-2021
12:34 PM
You should be able to accomplish this in props.conf by defining your sourcetype with SHOULD_LINEMERGE=true plus the supporting parameters that tell Splunk where to stop merging lines into a multi-line event. You'll just need to experiment in a test environment (or dummy index) with the settings and your actual events. I would start off with log files that contain only a few entries; otherwise, you can potentially end up with a single event composed of the entire file. https://docs.splunk.com/Documentation/Splunk/8.2.2/Admin/Propsconf#Line_breaking
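As a minimal props.conf sketch of the above — the sourcetype name and the timestamp regex are hypothetical placeholders you would adapt to your actual events, not values from this post:

```
# props.conf -- illustrative sketch; the stanza name and regex are placeholders
[my_multiline_sourcetype]
SHOULD_LINEMERGE = true
# Keep merging lines into one event until the next line starting with a date
BREAK_ONLY_BEFORE = ^\d{4}-\d{2}-\d{2}
# Safety valve so a bad regex can't merge an entire file into one event
MAX_EVENTS = 500
```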
09-23-2021
09:23 AM
If your end goal is to gather data for powering a dashboard, then using a datamodel seems like a better solution. You can then accelerate it for a given time range if you're looking for a performance increase. In your existing search, you might want to look at your use of table toward the end: it doesn't transform results, and it looks like collect is going to pull in _raw. Also, summary indexing as you're doing with collect counts against your license, so be careful. Datamodel (or report) acceleration does not cause a license hit because you are not indexing new data. Just FYI...
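Once a datamodel is accelerated, the dashboard can query its summaries with tstats. A hedged sketch, assuming a hypothetical datamodel named Web with a status field (adapt both names to your own model):

```
| tstats summariesonly=true count from datamodel=Web by _time span=1h, Web.status
```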
09-23-2021
08:47 AM
This could be one of a couple of issues. The first thing to check is that all your $SPLUNK_HOME directories/configs are still owned by your Splunk user (e.g. splunk:splunk) and didn't change to root during your upgrade. Also, I think the password hashing algorithm was changed between those two versions, so you may need to reset the admin password via the command line and do the same for your bindDN user account if LDAP is not working. An admin password change would require cycling Splunk; a bindDN account password change would not, if you use the web UI. Don't forget to verify user:group ownership on both configs if you modify them as root.
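If the admin reset is needed, one known approach on recent Splunk versions is to seed a new admin password with user-seed.conf while Splunk is stopped. This is a sketch, not an official procedure for your exact versions, and the password is a placeholder:

```
# $SPLUNK_HOME/etc/system/local/user-seed.conf -- read once at startup
# to (re)create the admin account; "Chang3M3!" is a placeholder.
# The existing admin entry in $SPLUNK_HOME/etc/passwd must be removed
# (back it up first) or this file is ignored.
[user_info]
USERNAME = admin
PASSWORD = Chang3M3!
```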
09-23-2021
07:41 AM
Your best option is to enable Forwarder Monitoring on the Distributed Monitoring Console (DMC), or on the MC on the index master if you don't have a DMC. That feature provides all types of detailed information on which forwarders are connected, their status, data throughput, and a lot more. See the following documentation for more: https://docs.splunk.com/Documentation/Splunk/8.2.2/Updating/Forwardermanagementoverview The DMC (or MC on the master) also provides license utilization information at: Monitoring Console > Indexing > License Usage - Today or Historic License Usage (those are the two 'canned' options). Worth noting: if you have or try either option, you can hover over the graphs and click on "open in search" to see the search(es) that power the panels. Those can also give you a good base to build upon or modify to suit your specific needs.
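If you'd rather query license usage directly, the canned panels are driven by the license usage log in _internal. A rough sketch (the field b holds bytes of daily usage; adjust the time bucketing to taste):

```
index=_internal source=*license_usage.log* type=Usage
| timechart span=1d sum(b) AS bytes
| eval GB = round(bytes / 1024 / 1024 / 1024, 2)
| fields - bytes
```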
09-22-2021
01:40 PM
Are these OS logs, or application-specific? If they're specific to your application and you're unable to change how new logs are named (i.e. eliminate the timestamp), then you would have to find a workaround. One option would be to configure logrotate so that the OS rotates and compresses your logs daily; then you could blacklist .gz files.
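As a sketch of that workaround, with hypothetical paths: a logrotate policy that compresses daily, paired with a monitor stanza that blacklists the compressed copies.

```
# /etc/logrotate.d/myapp -- hypothetical app; rotate and compress daily
/var/log/myapp/app.log {
    daily
    rotate 14
    compress
    missingok
}
```

```
# inputs.conf -- watch the directory but ignore rotated .gz copies
[monitor:///var/log/myapp]
blacklist = \.gz$
```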
09-22-2021
01:36 PM
I'm not sure what you mean by "remove datasource". Do you mean sourcetype? If so then, again, you cannot change data once it has been indexed. You would have to delete it all and re-index it using a modified or different sourcetype.
09-22-2021
01:23 PM
It sounds like you may be misunderstanding field extraction. When you send data to Splunk via a forwarder, it is tagged with the sourcetype that you defined/created. That's used to identify the fields contained within your data (events) when Splunk indexes the data. Field extraction occurs when you search the data, not when it is indexed. It is possible to modify extraction for NEW events coming in, but you cannot go back and redefine that sourcetype for existing data. Once it has been indexed it cannot be changed.
09-22-2021
01:06 PM
There are many ways to improve dashboard performance and countless discussions on that topic. But generally the first steps would be to replace any real-time index queries with results from saved/scheduled searches, eliminate as many fields and/or regex parsing as possible, and restrict the date/time range of your searches. Accelerated datamodels are an excellent choice also, but a bit more advanced. It just depends on your use case and ability. In all cases though, just remember that the search heads do all the work when it comes to parsing and presenting search results to the user. So eliminating workload there translates into better performance and user experience.
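For the first step above, replacing an inline query with a scheduled search's cached results: a dashboard panel can pull the most recent run with loadjob. The owner, app, and search name here are made-up examples:

```
| loadjob savedsearch="admin:search:Daily Error Summary"
```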
09-22-2021
12:30 PM
Based on the info in your post, Linux is already rotating your logs as it should. So all you really need to do is remove the wildcard and instead specify the file name with extension (e.g. access_log.log). Splunk will then monitor that file in real time and pick back up on it after the OS rotates it, while ignoring any other files in that directory. access_log.log-20210922, access_log.log.1, or access_log.gz would all be ignored, for example.
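A minimal inputs.conf sketch of that change; the path, sourcetype, and index are placeholders:

```
# inputs.conf -- monitor only the live file; rotated copies such as
# access_log.log-20210922, access_log.log.1, or access_log.gz are ignored
[monitor:///var/log/httpd/access_log.log]
sourcetype = access_combined
index = web
```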
09-21-2021
08:30 AM
There are many ways to do this, but the quick and easy method is to simply run your search, then in the top right click on "Save As" and choose Alert. From there you can give the alert a name, set scheduling, and trigger actions such as email, etc.
09-20-2021
12:08 PM
https://www.splunk.com/en_us/support-and-services/support-programs.html
09-14-2021
03:42 PM
If you are using a load balancer you can only route 443 to a single destination. I'm guessing that you want to expose the Splunk UI to end users over https. The Splunk UI by default runs on port 8000 (which you can change), but unless you want the URL to your Splunk applications to include a specific port, you have to route https (443) to 8000 on the search heads. Otherwise, your URL would have to be something like https://www.mysplunkserver.com:8000. You want your API calls to be secure as well, but you can't route port 443 from the same IP to two different destinations; the second destination will never be reached because requests would be satisfied by the first rule. The solution, and best practice, is to have one IP that exposes your UI to end users over https (443 to 8000) and a second IP for API calls and behind-the-scenes Splunk traffic (443 to 8089). Though not strictly required, I generally assign a DNS name to the IP for the API. That way, if the IP needs to change you only have to update DNS, not your code.
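As one way to picture the two-IP layout — a hedged HAProxy sketch, not a config from the original post; the IPs, cert path, and server address are placeholders, and your load balancer's syntax may differ:

```
# haproxy.cfg -- illustrative two-frontend layout, one IP per role
frontend splunk_ui
    bind 203.0.113.10:443 ssl crt /etc/haproxy/certs/splunk.pem
    default_backend splunk_web

frontend splunk_api
    bind 203.0.113.11:443 ssl crt /etc/haproxy/certs/splunk.pem
    default_backend splunk_mgmt

backend splunk_web
    server sh1 10.0.0.11:8000 check

backend splunk_mgmt
    server sh1 10.0.0.11:8089 check
```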
09-14-2021
03:23 PM
The bars represent the number of events indexed by Splunk on a given day/time and is based on the range you selected with the date/time picker. The timestamp is shown below the bars. You can left click/hold on the graph and drag across it to drill down into smaller time ranges. To me, a data gap would mean no events received for some period of time. That would indicate a problem with the forwarders, hosts, the indexers, etc. Gaps can be normal, such as on weekends or holidays, for example. It just depends on your specific environment as @richgalloway mentioned.
09-14-2021
03:13 PM
Once your data has been indexed in Splunk it cannot be modified. Your only option would be to delete the existing data and re-ingest using your updated schema. When ingesting CSV files, the field names are assigned based on the first line of the file. Basically this should be the column header row, as if you were viewing it in Excel. If that line doesn't exist, or isn't consistent, you will get unpredictable results in your index. If you know ahead of time the header won't exist, you can assign the field names in your sourcetype in props.conf. But any changes to the sourcetype will only affect new data coming in; once the data has been indexed it's permanent.
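A props.conf sketch of assigning field names for a headerless CSV; the sourcetype and field names below are hypothetical:

```
# props.conf -- headerless CSV sketch; all names are placeholders
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
FIELD_NAMES = host,timestamp,status,bytes
```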
09-14-2021
02:53 PM
How are you testing the connection? And is your app server behind a load balancer? If so, you'll likely need a second IP / DNS name and route calls from that IP on port 443 to 8089 on your search heads, and have your application use that for API calls.
09-14-2021
02:39 PM
You would need to contact Splunk support directly for an answer to that question.
09-14-2021
02:33 PM
1 Karma
If you want to see gaps in data ingestion, such as days or hours where no data came in you can run this: | tstats count where index=your_index_name by _time Then just click on "visualization" and you'll get a nice graph of event count over a timeline (controlled by your date/time picker). You can drill down further on the search to visualize by day, hour, seconds, etc.
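To bucket the counts explicitly rather than rely on the default bucketing, a span can be added to the same search (the index name remains a placeholder):

```
| tstats count where index=your_index_name by _time span=1h
```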
09-14-2021
02:25 PM
1 Karma
Your configs look correct, assuming you have them in inputs.conf and not "Props config", which you mentioned. When using batch mode for files of the same name, something has to be different about the new file in order for Splunk to pick it up. Generally it uses the timestamp, though a different file size can trigger it as well. Unlike "monitor", batch does not consume files that are actively changing, such as system logs. If the forwarder is running when you copy the file over, there's a chance Splunk won't pick it up, from my experience anyway. A better method for testing the scenario you described would be to stop the forwarder, copy over your file(s), then start the forwarder back up. Once the forwarder is up and inspects the directory and file(s), it should ingest them. Batch mode is generally used for ingesting and deleting large numbers of files/logs with different names, timestamps, etc., such as rotated system logs where the rotation time was incorporated into the name. That said, it should still work for your use case, but try the testing method I suggested. You may also include a parameter in props.conf that helps Splunk recognize existing files with different content: CHECK_METHOD = modtime See the documentation for more details: https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Monitorfilesanddirectorieswithinputs.conf#Batch_syntax
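For reference, a minimal batch stanza sketch with a placeholder path and sourcetype; move_policy = sinkhole tells Splunk to delete files after ingesting them:

```
# inputs.conf -- batch input sketch; path and sourcetype are placeholders
[batch:///opt/data/drop/*.csv]
move_policy = sinkhole
sourcetype = my_csv_sourcetype
```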
09-14-2021
12:22 PM
2 Karma
Try cycling the index master, then a rolling restart of the indexer cluster. Once the cluster is back up, try to re-validate the new bundle via the master. If that doesn't work, make a small change somewhere in your bundle, such as adding or modifying a readme text file. That's enough to cause the master to see it as a new bundle and re-validate. Using the GUI on the master is actually the easiest/best way to do the restart, cycling, and bundle validation/push, in my opinion. Just fyi.
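If you prefer the command line, the equivalent steps on the master would look something like this (a sketch assuming a default install path):

```
# Run on the index master as the splunk user
/opt/splunk/bin/splunk validate cluster-bundle
/opt/splunk/bin/splunk apply cluster-bundle
/opt/splunk/bin/splunk show cluster-bundle-status
```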
09-14-2021
12:11 PM
6 Karma
Files can sometimes have the same few header lines, which will confuse Splunk and cause the issue you posted. Add the following line to your monitor stanza in inputs.conf and cycle the forwarder(s): crcSalt = <SOURCE> Also ensure that you are using "monitor" for files that update; "batch" is for historical data that won't change. More info here: https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Monitorfilesanddirectorieswithinputs.conf
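In context, the stanza would look something like this (the path is a placeholder):

```
# inputs.conf -- crcSalt = <SOURCE> makes the CRC unique per file path,
# so files with identical header lines are still treated as distinct
[monitor:///var/log/myapp/*.log]
crcSalt = <SOURCE>
```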
09-14-2021
08:28 AM
That's strange. Are the fields you removed from the search part of the datamodel? Also, there are some aggregation restrictions regarding tstats and datamodels. Without seeing your search it's difficult to tell, but it might be worth reviewing this documentation: https://docs.splunk.com/Documentation/Splunk/8.2.2/SearchReference/Tstats#Complex_aggregate_functions
09-13-2021
11:30 AM
Did you check the status of the acceleration before and after you removed the fields? And do those fields contain a lot of data?
09-01-2021
10:16 AM
1 Karma
Hey @somoarn I'm glad to hear we got this resolved for you. Even the slightest typo in a Splunk config can cause some unexpected behavior. Configuring data retention, archiving, bucket rotation, etc. can become very complex. There are multiple layers of parameter settings and precedence rules that come into play. One issue in your case was using "main", which is a pre-configured Splunk index. Because you were setting only a few of the index parameters, you inherited the others from the Splunk configuration. Those settings combined with yours were preventing the bucket rotation to frozen/deleted that you were intending. But it looks like you did a great job finding the right config combination that worked for you. A couple of related notes worth mentioning... From your original post it looked like the data you were creating for testing didn't include a timestamp. In that case you would need to have DATETIME_CONFIG = CURRENT defined in props.conf for your sourcetype. You may have it there already, but without it that can cause issues with aging out data as well. Also, be very careful when you create a [default] stanza in /opt/splunk/etc/system/local/indexes.conf. Any parameter changes added there will be applied globally and affect every index in your environment. I know you're just testing on a container but it's worth mentioning 😀
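To make the retention layers concrete, a hedged indexes.conf sketch for a custom test index; the index name and values are examples rather than recommendations, and it deliberately avoids a [default] stanza:

```
# indexes.conf -- illustrative retention settings for a test index
[my_test_index]
homePath   = $SPLUNK_DB/my_test_index/db
coldPath   = $SPLUNK_DB/my_test_index/colddb
thawedPath = $SPLUNK_DB/my_test_index/thaweddb
# Buckets roll to frozen (deleted, unless coldToFrozenDir is set) after 7 days
frozenTimePeriodInSecs = 604800
maxTotalDataSizeMB = 500
```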
08-31-2021
08:49 AM
Try this: | rest /servicesNS/-/-/saved/searches | search search!=*index* | table search eai:acl.owner is_scheduled
08-31-2021
08:37 AM
You can limit results by adding "maxrows" to your dbxquery. maxrows=1000 for example. You can test with different values until you find one that doesn't cause the issue.
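For example (the connection name and SQL are placeholders):

```
| dbxquery connection="my_db_connection" maxrows=1000 query="SELECT * FROM orders"
```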