Activity Feed
- Got Karma for Re: How can I disable password change request at first time login to Splunkweb?. 01-08-2024 08:50 PM
- Got Karma for Re: Explain Data Models (Like I'm Five). 09-29-2023 05:34 AM
- Got Karma for Re: Explain Data Models (Like I'm Five). 09-07-2022 10:55 AM
- Got Karma for Re: (Troubleshooting) Indexer became unresponsive today; rebooting server fixed it. A number of splunkd processes are dying and starting back up, is this normal behavior?. 06-08-2022 01:07 PM
- Got Karma for Re: Scheduled searches for summary index does not run. No skipped log in scheduler.log. 11-05-2021 05:21 AM
- Got Karma for Re: Scheduled searches for summary index does not run. No skipped log in scheduler.log. 09-13-2021 05:01 AM
- Got Karma for Re: Explain Data Models (Like I'm Five). 09-07-2021 03:35 AM
- Got Karma for Re: Explain Data Models (Like I'm Five). 08-05-2020 07:25 AM
- Got Karma for Re: Scheduled searches for summary index does not run. No skipped log in scheduler.log. 07-10-2020 10:32 AM
- Karma Re: why is the cluster master not able to fixup buckets (generation tab) "cannot fix up search factor as bucket is not serviceable" for rphillips_splk. 06-05-2020 12:50 AM
- Karma Re: Splunk Add-on for NetApp Data ONTAP: Why doesn't a search that only uses source & source types not work unless i add an index? for dauren_akilbeko. 06-05-2020 12:50 AM
- Karma Re: How do I test connectivity over a specific port in Windows? for jpondrom_splunk. 06-05-2020 12:50 AM
- Karma Re: [smartStore] Configuring remote storage on indexes results repeat many times? for rbal_splunk. 06-05-2020 12:50 AM
- Karma Splunk Email createSSLContextSettings error for harvisingh9. 06-05-2020 12:49 AM
- Karma Re: How to find if the Splunk events are in future? for rbal_splunk. 06-05-2020 12:49 AM
- Karma Re: Event breaking not working on Tomcat Catalina data for richgalloway. 06-05-2020 12:49 AM
- Karma Re: Unable to search & getting this error : Unable to evict enough data for tpeveler_splunk. 06-05-2020 12:49 AM
- Karma Re: How do I find what is causing my typing queue blockage? for hrawat. 06-05-2020 12:49 AM
- Karma Re: With no fixup tasks pending, why are the Clustering UI states replication and search factors are not met? for bpaul_splunk. 06-05-2020 12:49 AM
- Karma Re: Why is IT Service Intelligence (ITSI) kvstore backup timing out? for ssmoot_splunk. 06-05-2020 12:49 AM
Topics I've Started
09-12-2019
10:30 AM
Hi, @gordanristic,
Sounds like you are all set regarding the question above.
The URL suggests that a UF can be installed on the appliance. In that case, you may file a Splunk Support case if you would like to know more detail about support for installing a UF on the VCSA appliance. But they may just tell you about the limitations, as I answered here.
Hope this helps.
09-09-2019
10:19 AM
Hi, @gordanristic
Good to hear that it was enough to make it work.
Regarding the official doc you're referring to, can you post the URL in this thread?
Just FYI, the app download page explains that it comes with a 60-day trial license (ref: https://splunkbase.splunk.com/app/725/#/overview)
09-05-2019
11:46 AM
/opt/splunk/bin/splunk enable boot-start calls splunk, and splunk creates an init script.
That script does not include chkconfig lines. That's why you get the error.
The more important concern is that this Linux distribution is not a supported OS. So I think you should avoid installing a UF on the VCSA directly. Instead, can you send the logs to a syslog server, and install a Splunk UF on the syslog server to monitor the syslog log files?
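If you still need chkconfig to recognize the generated script on a distribution that uses it, a common workaround is to add a chkconfig header to /etc/init.d/splunk yourself. This is a sketch, not something enable boot-start writes for you; the run levels and priorities shown are typical values, so adjust them for your system:

```
# Added near the top of /etc/init.d/splunk (sketch)
# chkconfig: 2345 90 10
# description: Splunk Universal Forwarder
```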
08-22-2019
08:57 AM
Sorry, my first response was not accurate. The app does not need to access the filers; only the TA instances do.
Yes, I agree. You need to set a firewall rule from the TA to the filers.
08-21-2019
05:52 PM
You should avoid a proxy server with this app and TA, especially for a production deployment. Unfortunately, they are not designed to work through a proxy server, and I've seen users experience unexpected behavior.
01-31-2019
03:09 PM
Probably you're using a non-syslog sourcetype. In that case, try the syslogSourceType attribute in outputs.conf. This should prevent the originating hostname from being added.
syslogSourceType = <string>
* Specifies an additional rule for handling data, in addition to that
provided by the 'syslog' source type.
* This string is used as a substring match against the sourcetype key. For
example, if the string is set to "syslog", then all sourcetypes
containing the string 'syslog' receive this special treatment.
* To match a sourcetype explicitly, use the pattern
"sourcetype::sourcetype_name".
* Example: syslogSourceType = sourcetype::apache_common
* Data that is "syslog" or matches this setting is assumed to already be in
syslog format.
* Data that does not match the rules has a header, optionally a timestamp
(if defined in 'timestampformat'), and a hostname added to the front of
the event. This is how Splunk software causes arbitrary log data to match syslog expectations.
* No default.
For more detail:
Official Doc: https://docs.splunk.com/Documentation/Splunk/latest/Data/HowSplunkEnterprisehandlessyslogdata
Community Wiki (old): https://wiki.splunk.com/Community:Test:How_Splunk_behaves_when_receiving_or_forwarding_udp_data
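For example, here is a minimal outputs.conf sketch; the group name, server address, and sourcetype pattern are placeholders, not from your environment. It marks events of a given sourcetype as already syslog-formatted, so no header or hostname is prepended:

```
# outputs.conf (sketch)
[syslog:to_third_party]
server = syslog.example.com:514
# Events whose sourcetype matches this pattern are treated as
# already being in syslog format, so nothing is prepended.
syslogSourceType = sourcetype::apache_common
```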
01-31-2019
03:03 PM
How can I avoid adding the original hostname (or IP address) to a _SYSLOG_ROUTING event when forwarding to a third-party server?
I can see that Splunk adds host information to the original syslog event when using _SYSLOG_ROUTING to forward syslog events to a third-party server.
Below is an example where the server's IP address 192.168.10.111, which was already in the original event, was added again:
192.168.10.111 Mar 16 00:01:29 192.168.10.111 postfix/qmgr[1106]: EA11004022: from=, size=3514, nrcpt=1 (queue active)
How can I remove the hostname?
- Tags:
- syslog
01-17-2019
11:27 AM
you're awesome, @rphillips_splunk
01-11-2019
03:44 PM
Our doc explains that the management port (default 8089) is a required port opened between cluster peers. We have always needed this port open.
https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Systemrequirements#Ports_that_the_cluster_nodes_use
But who reads the docs all the time? I wish Splunk checked connectivity on the required ports and showed a warning message on the Indexer Clustering page.
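In the meantime, a quick connectivity check can be scripted outside Splunk. Here is a small Python sketch (not a Splunk feature; the loopback address stands in for a real cluster peer's hostname):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check the management port (8089 by default) on a cluster peer;
# "127.0.0.1" here is just a stand-in for a real peer hostname.
print(port_open("127.0.0.1", 8089))
```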
04-14-2017
11:13 PM
Very odd use case. I agree with @jonmargulies. I do not understand why the person gets full admin permissions without web access.
Well, give them one Splunk instance without running Splunkweb. But again, if the user has full admin access, he/she may be able to enable Splunkweb and restart Splunk.
04-14-2017
10:44 PM
@niketnilay, please use the Answer field instead of the comment field on the Question.
04-14-2017
10:40 PM
1 Karma
You are overriding the _TCP_ROUTING key. It is the same concept as a normal field: if you use the same field name twice for the same event in transforms.conf, the last field value wins. It does not append.
Maybe you are trying to create two tcpout groups.
If defaultGroup is indexersRouting and the other tcpout group is Routing-PA, can't you make use of the working example (FORMAT = Routing-PA,indexersRouting)?
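To illustrate, here is a transforms.conf sketch (the stanza name is a placeholder; the group names follow the example above) that routes an event to both groups in a single transform instead of writing the key twice:

```
# transforms.conf (sketch)
[route_to_both_groups]
REGEX = .
DEST_KEY = _TCP_ROUTING
# Both target groups are listed in one FORMAT; writing _TCP_ROUTING
# again in a later transform would overwrite this value, not append.
FORMAT = Routing-PA,indexersRouting
```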
04-14-2017
06:45 PM
7 Karma
Without a Data Model
Programmers: write their own programs for data analysis.
Splunkers: write Splunk searches to populate fancy reports/dashboards.
Managers: neglect to learn Splunk search, but would like to impress their boss with a fancy report chart. So they usually ask a Splunker for help.
Directors/CEO: just want to see the result (report). They do not care how it is generated.
With a Data Model
1. Managers ask a Splunker for an easy way to create fancy charts.
2. The Splunker creates a Splunk Data Model, which defines all the field names from the raw data.
=> Managers stop asking for reports or Splunk searches.
3. Managers use the Pivot feature (which becomes available with a Data Model): just drag and drop, and/or select functions/fields from drop-down boxes to create fancy charts. No need to learn Splunk search commands.
=> Managers can impress their boss.
4. Directors/CEOs feel that reports are coming much more quickly than before.
A bonus of a Data Model is up to 100-times-faster search speed by making use of the Data Model Acceleration feature.
Note: Programmers are still satisfied with their own ways to analyze data 🙂
04-14-2017
06:14 PM
1 Karma
It indicates that this Splunk instance could not determine the SHC node status. I suspect you cannot see the SHC status (splunk show shcluster-status) on this node when the message is generated. I also suspect your conf replication is failing.
In that case, you might need to run the destructive resync command to resolve the issue. Make a backup of the etc directory before running the command.
You said this is a new search head cluster. Are those search head instances new? If so, do you have three or more search heads? Please make sure you meet the requirements to deploy a new SHC.
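For reference, the destructive resync is run from the CLI on the affected member. This is a sketch: back up $SPLUNK_HOME/etc first, since the command discards the member's local replicated configuration and pulls a fresh copy from the captain:

```
$SPLUNK_HOME/bin/splunk resync shcluster-replicated-config
```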
04-06-2017
11:29 AM
In splunkd.log, you can find a log line like the one below. The host whose splunkd.log shows it is the one using up the thread limit.
Here is an example:
02-04-2016 12:11:08.983 -0800 WARN HttpListener - Can't handle request for $REST_request_here$, max thread limit for REST HTTP server is 1000, threads already in use is 1001
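To illustrate, here is a small Python sketch (not part of Splunk) that pulls the two thread counts out of such a warning line; the request path in the sample is a made-up placeholder:

```python
import re

# Matches the numbers in an HttpListener thread-limit warning.
WARN_RE = re.compile(
    r"max thread limit for REST HTTP server is (\d+), "
    r"threads already in use is (\d+)"
)

# Sample line; "/services/some/endpoint" stands in for the real request path.
line = ("02-04-2016 12:11:08.983 -0800 WARN  HttpListener - Can't handle "
        "request for /services/some/endpoint, max thread limit for REST "
        "HTTP server is 1000, threads already in use is 1001")

m = WARN_RE.search(line)
limit, in_use = int(m.group(1)), int(m.group(2))
print(f"limit={limit}, in use={in_use}, exhausted={in_use >= limit}")
```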
04-06-2017
10:22 AM
I see. What you wanted is not what 'selective indexing' is meant for, since selective indexing is about partially indexing data while forwarding selected events. Your way should work. Or, I believe transforms could do a similar job.
04-05-2017
04:20 PM
First, I would check the thread limit.
http://docs.splunk.com/Documentation/Splunk/6.5.3/Troubleshooting/HTTPthreadlimitissues
Then, if the number is high enough and the issue persists, I would recommend contacting Splunk Support for further investigation.
Splunk Support will probably ask you to collect a diag, ps or top output with the thread option, and a pstack, so that a Splunk engineer can look into the thread stacks.
04-05-2017
02:35 PM
Please file a Splunk Support case and submit diags from the scheduler and the DCN. I believe that is the faster route to identifying the cause of the issue.
04-03-2017
11:25 AM
Probably that's best:
1. Stop the Scheduler, and delete the DCN node from the config page
2. Stop Splunk on the DCN
3. Remove all the VMware app packages
4. Deploy a new package
5. Start Splunk on the DCN
6. Set up the DCNs
7. Start the Scheduler
04-03-2017
11:10 AM
Does this apply to Splunk's internal inputs/indexes?
=> Yes, but I'm not sure about the _audit index. I'm not sure whether _audit will be indexed, forwarded, or not indexed at all.
Does the forwarded information allow the receiving indexer to know what index events should be put in?
=> Yes, because processed event data contains metadata indicating which index should be used.
And my final question: if the first indexer has filters (transforms) to drop some logs and index others, does this behavior apply to forwarded logs?
=> No, because the old indexer has already "parsed" the events, and the new (second) indexer will not re-parse "parsed" data.
=> forwarder (probably a UF) -> old indexer (filter is applied) -> new indexer (this will not re-filter what was already processed on the old indexer)
Why not clone from the forwarder to both the old and new indexers?
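To illustrate the cloning idea, here is an outputs.conf sketch for the forwarder (group names and server addresses are placeholders). Listing several target groups in defaultGroup makes the forwarder clone the data to each group:

```
# outputs.conf on the forwarder (sketch)
[tcpout]
# Comma-separated groups here each receive a full copy of the data.
defaultGroup = old_indexers,new_indexers

[tcpout:old_indexers]
server = old-idx.example.com:9997

[tcpout:new_indexers]
server = new-idx.example.com:9997
```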
04-03-2017
10:52 AM
tstats groupby is similar to "stats split-by". So, if a by field is null, you cannot populate results for that field.
You need to find a field, or a combination of fields, for groupby.
I'm not sure if the following search works in your case, but here is a tstats search example:
| tstats values(MXTIMING.Context) as Context
FROM datamodel=MXTIMING
where source="*/mxtiming_small.log"
groupby MXTIMING.Date MXTIMING.Time MXTIMING.UserName
prestats=t
| fillnull value=NULL
| stats count by Context
03-29-2017
02:10 PM
Can you post several sample events, and the real tables you're looking for based on those sample events?
Otherwise, I cannot tell if that's possible in your case. I was proposing a possible data format.
03-29-2017
01:28 PM
1 Karma
I did a quick test:
inputs.conf
[http://test_json_batch]
disabled = 0
sourcetype = test_json_batch
token = __removed__
props.conf
[test_json_batch]
LINE_BREAKER = ([\[,\r\n]+)\{"(?:earlyAccess|batch_id)":
SHOULD_LINEMERGE = false
SEDCMD-remove_end = s/]}$//g
And a curl to ingest the event above:
curl -k https://10..1.100:8088/services/collector/raw -H ... -d '{"batch_id....,"eventProgramPoint":1490387400000}]}'
And I'm counting on the "AUTO_KV_JSON = true" default for search-time JSON field extraction.
In my test above, each event was broken into events starting with {"earlyAccess", just like @beatus's screenshot.
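As a sanity check outside Splunk, here is a Python sketch that approximates the LINE_BREAKER behavior. Splunk discards the text matched by the first capture group and starts a new event after it; the lookahead below mimics that, and the sample payload is made up:

```python
import re

# Lookahead approximation of the props.conf LINE_BREAKER above:
# split on runs of "[", ",", CR, or LF that sit right before
# {"earlyAccess": or {"batch_id":, discarding the delimiter.
LINE_BREAKER = r'[\[,\r\n]+(?=\{"(?:earlyAccess|batch_id)":)'

# Made-up sample batch payload.
raw = '{"batch_id":"b1","events":[{"earlyAccess":true,"n":1},{"earlyAccess":false,"n":2}]}'

# Mimic SEDCMD-remove_end = s/]}$//g
raw = re.sub(r"\]\}$", "", raw)

events = re.split(LINE_BREAKER, raw)
for e in events:
    print(e)
```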
03-29-2017
01:13 PM
Regarding the event separation itself,
basically, @beatus's solution should work for HEC "raw" inputs. It will not work for the JSON endpoint "collector/event".