Activity Feed
- Karma Re: Sending data from one UF to other UF for esix_splunk. 06-05-2020 12:50 AM
- Karma Re: How can we track configuration changes on a universal forwarder server? for dkeck. 06-05-2020 12:50 AM
- Karma Re: Splunk event sourcetype overide and send event back to parsing queue for tiagofbmm. 06-05-2020 12:49 AM
- Karma Re: Recommended maximum concurrent searches? for s2_splunk. 06-05-2020 12:49 AM
- Karma Re: Can these three searches be combined and ran sequentially? for DalJeanis. 06-05-2020 12:49 AM
- Karma Re: Raw data back to files for gcusello. 06-05-2020 12:49 AM
- Karma Re: Raw data back to files for hardikJsheth. 06-05-2020 12:49 AM
- Karma Re: Splunk ingest SNMP traps for Damien_Dallimor. 06-05-2020 12:49 AM
- Karma Re: Can I use CLONE_SOURCETYPE to send events to multiple indexes? for DalJeanis. 06-05-2020 12:49 AM
- Karma Re: What is the best way to number each event in descending time? for Vijeta. 06-05-2020 12:49 AM
- Karma Re: Why is my input not getting parsed if I use wildcards? for MuS. 06-05-2020 12:49 AM
- Karma Re: Splunk fetching results from database in realtime for richgalloway. 06-05-2020 12:49 AM
- Karma Re: Timestamp extraction for varying subseconds and time zones? for adonio. 06-05-2020 12:49 AM
- Karma Re: How to configure alert based on other timezones? for woodcock. 06-05-2020 12:49 AM
- Got Karma for Does Heavy forwarder forwarded events will undergo parsing on indexers. 06-05-2020 12:49 AM
- Got Karma for Does Splunk use .spec files?. 06-05-2020 12:49 AM
- Karma Re: What type of storage is needed for frozen data? for nickhills. 06-05-2020 12:48 AM
- Karma Re: Splunk CRC check for woodcock. 06-05-2020 12:48 AM
- Karma Re: whats happens if "maxVolumeDataSizeMB" limit is reached for cold path. for somesoni2. 06-05-2020 12:48 AM
- Karma Re: DR deployer storage setup for s2_splunk. 06-05-2020 12:48 AM
Topics I've Started
11-12-2021
12:02 AM
Hi @woodcock @somesoni2, could you please help me out here? I have a slightly different scenario but am facing a similar issue. We are integrating JSON logs via HEC into a Splunk heavy forwarder. I have tried the configurations below, applying the props to the source. In transforms, there are different regexes: I want to route events to different indexes based on the log file, and route all the other files that are not required to the null queue. I would not be able to use FORMAT=indexqueue in transforms.conf, as I cannot mention multiple indexes in inputs.conf. This is not working and no data is getting indexed. Kindly help. The configs are like below:

props.conf:

```
[source::*model-app*]
TRANSFORMS-segment = setnull,security_logs,application_logs,provisioning_logs
```

transforms.conf:

```
[setnull]
REGEX = class\"\:\"(.*?)\"
DEST_KEY = queue
FORMAT = nullQueue

[security_logs]
REGEX = (class\"\:\"(/var/log/cron|/var/log/audit/audit.log|/var/log/messages|/var/log/secure)\")
DEST_KEY = _MetaData:Index
FORMAT = model_sec
WRITE_META = true
LOOKAHEAD = 40000

[application_logs]
REGEX = (class\"\:\"(/var/log/application.log|/var/log/local*?.log)\")
DEST_KEY = _MetaData:Index
FORMAT = model_app
WRITE_META = true
LOOKAHEAD = 40000

[provisioning_logs]
REGEX = class\"\:\"(/opt/provgw-error_msg.log|/opt/provgw-bulkrequest.log|/opt/provgw/provgw-spml_command.log.*?)\"
DEST_KEY = _MetaData:Index
FORMAT = model_prov
WRITE_META = true
```
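One likely cause of "nothing gets indexed" with configs like these: the [setnull] transform matches every event and leaves it in the nullQueue, and the later transforms only set _MetaData:Index, which does not pull the event back out of the null queue. A sketch of the usual "discard everything, then rescue what you want" pattern from the Splunk route-and-filter docs, adapted to one of the regexes from the post (the stanza names here are hypothetical, and this is not a verified fix for this exact setup):

```
# transforms.conf -- send everything to the null queue first...
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# ...then rescue the wanted events back into the index queue
[keep_security]
REGEX = class\"\:\"(/var/log/cron|/var/log/audit/audit.log|/var/log/messages|/var/log/secure)\"
DEST_KEY = queue
FORMAT = indexQueue

# and, separately, route the rescued events to the target index
[route_security]
REGEX = class\"\:\"(/var/log/cron|/var/log/audit/audit.log|/var/log/messages|/var/log/secure)\"
DEST_KEY = _MetaData:Index
FORMAT = model_sec

# props.conf -- order matters: setnull first, rescue/route transforms after it
[source::*model-app*]
TRANSFORMS-segment = setnull, keep_security, route_security
```

One more thing to verify: events sent to the HEC /event endpoint largely bypass the parsing pipeline, so index-time props/transforms may never see them; sending through the HEC /raw endpoint is one way to ensure they do.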
11-03-2021
12:07 AM
Nice to know. Now that could be a nice trigger for a pull the config from the affected machine and push it into VCS...
11-23-2020
09:45 AM
Hi Splunkers,

I have logs like:

```
<Header>
  <Product>Microsoft SQL Server Reporting Services Version 2011.0110.6615.02 ((SQL11_SP3_QFE-CU).180109-2116 )</Product>
  <Locale>English ()</Locale>
  <TimeZone>Central Daylight Time</TimeZone>
  <Path>D:\Program Files\Microsoft SQL Server\MSRS11.CTSSRS2012\Reporting Services\Logfiles\ReportServerService__11_05_2020_14_52_11.log</Path>
  <SystemName>Avotrix69901</SystemName>
  <OSName>Microsoft Windows NT 6.2.9200</OSName>
  <OSVersion>6.2.9200</OSVersion>
  <ProcessID>3296</ProcessID>
  <Virtualization>Hypervisor</Virtualization>
</Header>
<ProcessorArchitecture>AMD64</ProcessorArchitecture>
<ApplicationArchitecture>AMD64</ApplicationArchitecture>
processing!ReportServer_0-51!1ed8!11/05/2020-14:52:11:: v VERBOSE: Mapping data reader successfully initialized.
library!ReportServer_0-51!2bc8!11/05/2020-14:52:11:: v VERBOSE: Transaction commit.
processing!ReportServer_0-51!1ed8!11/05/2020-14:52:11:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: , Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 3.;
runningjobs!ReportServer_0-51!2bc8!11/05/2020-14:52:11:: v VERBOSE: Thread pool settings: Available worker: 399, Max worker: 400, Available IO: 400, Max IO: 400
runningjobs!ReportServer_0-51!2bc8!11/05/2020-14:52:11:: v VERBOSE: Spawning new thread for a work item.
runningjobs!ReportServer_0-51!2bc8!11/05/2020-14:52:11:: v VERBOSE: ThreadJobContext.EndCancelableState
runningjobs!ReportServer_0-51!2bc8!11/05/2020-14:52:11:: v VERBOSE: ThreadJobContext.WaitForCancelException entered
runningjobs!ReportServer_0-51!2bc8!11/05/2020-14:52:11:: v
```

And after indexing I am getting events like:

```
\x00c\x00h\x00u\x00n\x00k\x00s\x00!\x00R\x00e\x00p\x00o\x00r\x00t\x00S\x00e\x00r\x00v\x00e\x005\x001\x00!\x002\x001\x00d\x000\x00!\x001\x001\x00/\x000\x005\x00/\x002\x000\x002\x000\x00-\x001\x004\x00:\x005\x002\x00:\x001\x002\x00:\x00:\x00 \x00v\x00 \x00V\x00E\x00R\x00B\x00O\x00S\x00E\x00:\x00 \x00R\x00e\x00t\x00r\x00i\x00e\x00v\x00e\x00d\x00 \x00s\x00e\x00g\x00m\x00e\x00n\x00t\x00 \x004\x003\x00f\x00b\x000\x009\x009\x00d\x00-\x00c\x006\x006\x004\x00-\x00e\x00a\x001\x001\x00-\x008\x001\x002\x00d\x00-\x000\x000\x002\x001\x005\x00a\x009\x00b\x000\x008\x00a\x00c\x00 \x00f\x00o\x00r\x00 \x00c\x00h\x00u\x00n\x00k\x00 \x004\x002\x00f\x00b\x000\x009\x009\x00d\x00-\x00c\x006\x006\x004\x00-\x00e\x00a\x001\x001\x00-\x008\x001\x002\x00d\x00-\x000\x000\x002\x001\x005\x00a\x009\x00b\x000\x008\x00a\x00c\x00 \x00f\x00r\x00o\x00m\x00 \x00t\x00h\x00e\x00 \x00s\x00e\x00g\x00m\x00e\x00n\x00t\x00
```

I solved this issue using the below settings in props.conf:

```
[MyOwnSourceType]
CHARSET = UTF16-LE
```
09-18-2020
02:01 PM
```
LINE_BREAKER_LOOKBEHIND = <integer>
* The number of bytes before the end of the raw data chunk
  to which Splunk software should apply the 'LINE_BREAKER' regex.
* When there is leftover data from a previous raw chunk,
  LINE_BREAKER_LOOKBEHIND indicates the number of bytes before the end of
  the raw chunk (with the next chunk concatenated) where Splunk software
  applies the LINE_BREAKER regex.
```

First of all, the config above kicks in only if you have a 'LINE_BREAKER' regex set. Assume your 'LINE_BREAKER' regex is '\n'.

First pass: chunk1 is processed, and since there is no leftover from a previous chunk:

```
chunk1 : [data1]\n 2017-01-03 12:00:00 [data2]\n 2017-01-03 12:0
```

This results in two events, data1 and data2. The rest is leftover for chunk2.

Second pass: chunk2 is processed. Since there is a leftover, LINE_BREAKER_LOOKBEHIND is applied only if the leftover size > LINE_BREAKER_LOOKBEHIND:

```
chunk2 : 0:01 [data3]\n ...
```

In this example LINE_BREAKER_LOOKBEHIND was not applicable, because the leftover bytes < LINE_BREAKER_LOOKBEHIND (default 100). In a scenario where it is applicable, all Splunk does is exclude the first LINE_BREAKER_LOOKBEHIND bytes of the new string (leftover + chunk2) from the regex search. Why apply the regex to the entire leftover when we already know, from the first pass, that there is no match in it?
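For context, a minimal props.conf sketch showing where these settings live (the sourcetype name is hypothetical; the values shown are the common single-line-event defaults):

```
[my_chunked_sourcetype]
LINE_BREAKER = ([\r\n]+)
LINE_BREAKER_LOOKBEHIND = 100
SHOULD_LINEMERGE = false
```

With SHOULD_LINEMERGE=false, each LINE_BREAKER match terminates an event, and the lookbehind only controls how far back into the leftover bytes the regex is re-applied on the next chunk.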
06-10-2020
01:16 AM
Note that I don't want to make any custom modification in the database due to the performance impact. I tried applying encryption to fields in the select query, and it caused very high CPU usage in the database. By moving data encryption to Splunk DBX, I can scale the workload out to a cluster of heavy forwarders.
11-17-2019
02:12 PM
This is best done using a jQuery tool before the data comes into Splunk. The king of jQuery and Splunk is @mmodestino_splunk, so maybe he will also comment.
07-31-2019
06:01 AM
Hi ankithreddy777,
if you're satisfied with this answer, please accept and/or upvote it.
Bye, see you next time.
Giuseppe
12-11-2018
10:14 AM
I have a PowerShell script on Windows UF servers. We created a PowerShell input pointed at the script, and the output is forwarded to the indexers.
I created props.conf on the indexers to merge the script's events into a single event based on a regex, but the properties are not being applied to the data. The parsing properties work when I manually ingest the same data from a file: line breaking takes place as expected.
I then tested this PowerShell input on a HF, placing props.conf on the HF. It is still not working. Is there any reason for this? Is the PowerShell script output formatted in a different way?
12-10-2018
05:45 PM
Yes, it can.
http://docs.splunk.com/Documentation/Splunk/7.2.1/Data/MonitorWindowsdatawithPowerShellscripts
PowerShell is not packaged with the UF; it must be installed on Windows.
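For illustration, a sketch of an inputs.conf PowerShell input stanza in the shape described on that docs page (the stanza name, script path, schedule, and sourcetype here are all hypothetical examples, not taken from the question):

```
[powershell://MyProcessCheck]
script = Get-Process | Select-Object Handles, PM, CPU, Id, ProcessName
schedule = 0 0/3 * ? * *
sourcetype = Windows:Process
index = main
```

The schedule field takes a cron-style expression; the script output is forwarded as events by the UF.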
12-10-2018
12:32 PM
Check this Splunk answer; it seems the same question has been asked here before:
https://answers.splunk.com/answers/708443/splunk-scripted-input-to-run-a-btool-command-witho.html#answer-705615
12-10-2018
12:55 PM
I think ingesting configuration files each day is a bad idea for this. It will also cost you money via the license usage. A better approach would be to use the deployment server exclusively to send configuration files to the forwarders, and to lock down the ownership of those config files on the host. You could then use BitBucket to version-control the deployment server files that are sent to the hosts.
02-08-2019
03:36 AM
You can still send raw data to third-party systems from the UF using a tcpout configuration.
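A sketch of what that tcpout configuration could look like in outputs.conf (the group name and destination are hypothetical):

```
# outputs.conf -- send uncooked (raw) data to a third-party receiver
[tcpout:third_party_raw]
server = receiver.example.com:5140
sendCookedData = false
```

Setting sendCookedData = false is what makes the forwarder send raw data instead of Splunk's cooked wire format.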
11-14-2018
01:23 PM
I send out an email report every day in CSV format. One column in the CSV is the _raw field. The newlines in the raw data are being replaced by a \n character in the report. Is there any way to overcome this? I am able to download the CSV file of results without any issues, and I can see the _raw field value consisting of multiple lines. Only when the CSV report is emailed are the newlines replaced by \n.
09-28-2018
09:19 AM
1 Karma
If you don't want to sort, you can use tail before streamstats.
By default tail returns 10 events; if you have an idea of the maximum number of events in that time frame, you can use that instead (e.g. tail 100000).
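A sketch of this in SPL (the index name, event count, and field name are hypothetical):

```
index=my_index earliest=-1h
| tail 100000
| streamstats count AS event_number
```

Because tail returns the last N results in reverse order, streamstats then numbers the events without needing an explicit sort.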
09-24-2018
06:23 PM
Without knowing anything about your messaging environment or setup, the first thing you should do about that message is refer to the documentation, specifically the "Java Heap Size" section: https://splunkbase.splunk.com/app/1317/#/details
Look for "-Xms64m","-Xmx64m".
To increase the heap, change it to something like "-Xms64m","-Xmx256m", or whatever you need the maximum to be for your use case.
09-20-2018
03:21 PM
When search affinity is disabled, the search head can search across all indexers in all sites.
But the information about which indexer holds the primary copies is provided to the search head by the indexers themselves. If that is the case, how does the search head search a primary copy only once even though it is present on two sites? How does the search head manage to search a specific primary copy only one time if it is present on another site as well?
Is there a mechanism where the search head first gets the primary-copy details from the indexers of both sites, then decides which primary to search, and initiates the search on those indexers accordingly?
09-19-2018
05:19 PM
2 Karma
The indexer automatically downloads the bundle.
Discussed in the docs here: http://docs.splunk.com/Documentation/Splunk/7.1.0/Indexer/Updatepeerconfigurations#Distribution_of_the_bundle_when_a_peer_starts_up
09-18-2018
11:39 AM
Hi,
I have a server.conf file under the system/local directory with the following stanza:

```
[general]
pass4SymmKey = $1$xxxxxxxxxx
```

I expect this password to be the encrypted form of the pass4SymmKey in server.conf under system/default.
But even though I changed the pass4SymmKey in server.conf under system/default, the pass4SymmKey in server.conf under system/local is not getting updated.
I then removed the pass4SymmKey from server.conf under system/local. After a restart, the same password is generated again.
Do you know from which source the pass4SymmKey in the system/local server.conf might be generated, other than the pass4SymmKey in the system/default server.conf?
09-07-2018
10:57 AM
1 Karma
Looks like Splunk can handle it; try the props.conf below:

```
[odd_timestamp]
SHOULD_LINEMERGE = true
NO_BINARY_CHECK = true
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9N, Timezone:%Z
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 48
```

Worked for me; see the screenshot below:
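For reference, a hypothetical event line matching that TIME_FORMAT (this sample is invented for illustration, not taken from the original question):

```
2018-09-07T10:57:00.123456789, Timezone:UTC some event text here
```

The %9N captures the nine-digit subsecond field, and %Z picks up the timezone abbreviation after the literal ", Timezone:" text.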
09-09-2018
04:56 AM
Generally speaking (with the exception of Singapore, last I checked, and probably others), the codes on this page work when used with the TZ setting in props.conf:
https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
Note: since _time extraction occurs at index time (or before sending to the indexers if you're using INDEXED_EXTRACTIONS and a TIMESTAMP_FIELDS setting), the data will have to be reloaded for the changes to be seen. Also, the props should be on the indexers or the first heavy forwarder the data flows through (again, unless using INDEXED_EXTRACTIONS).
So first I would try America/Chicago (case sensitivity unknown), then I would try Central, then the deprecated US/Central. If all else fails, read the excerpt from props.conf.spec here and see if that answers any questions. Also, let us know your configuration so we can be more specific, and if you don't mind sharing a sample timestamp from an event, we can help further.
```
TZ = <timezone identifier>
* The algorithm for determining the time zone for a particular event is as
  follows:
* If the event has a timezone in its raw text (for example, UTC, -08:00),
  use that.
* If TZ is set to a valid timezone string, use that.
* If the event was forwarded, and the forwarder-indexer connection is using
  the 6.0+ forwarding protocol, use the timezone provided by the forwarder.
* Otherwise, use the timezone of the system that is running splunkd.
* Defaults to empty.

TZ_ALIAS = <key=value>[,<key=value>]...
* Provides splunk admin-level control over how timezone strings extracted
  from events are interpreted.
* For example, EST can mean Eastern (US) Standard time, or Eastern
  (Australian) Standard time. There are many other three letter timezone
  acronyms with many expansions.
* There is no requirement to use TZ_ALIAS if the traditional Splunk default
  mappings for these values have been as expected. For example, EST maps to
  the Eastern US by default.
* Has no effect on TZ value; this only affects timezone strings from event
  text, either from any configured TIME_FORMAT, or from pattern-based guess
  fallback.
* The setting is a list of key=value pairs, separated by commas.
* The key is matched against the text of the timezone specifier of the
  event, and the value is the timezone specifier to use when mapping the
  timestamp to UTC/GMT.
* The value is another TZ specifier which expresses the desired offset.
* Example: TZ_ALIAS = EST=GMT+10:00 (See props.conf.example for more/full
  examples)
* Defaults to unset.
```
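Putting the advice above together, a minimal props.conf sketch (the sourcetype name is hypothetical):

```
[my_sourcetype]
TZ = America/Chicago
```

Remember that this only affects events indexed after the change; already-indexed events keep their original _time.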
08-28-2018
11:12 AM
The Splunk JMS app UI basically asks for connection details for connecting to Solace queues. If I have to connect to an MQ queue (where we have host, server channel, etc.), how do I enter the values in the UI? I tried entering key-value pairs in the JNDI properties input box, but it is not working.
Any help would be greatly appreciated.
08-22-2018
06:23 PM
As per somesoni2's comment, the process will fork a new process with the same user id to restart Splunk; it cannot change the username it was previously running as, unless it is the root user.
If it was running as root and you updated /opt/splunk/etc/splunk-launch.conf to have the line:
SPLUNK_OS_USER=splunk
or similar, then it will restart as the splunk user. However, this is an edge case, as you would have to have it running as root and then change the file while it was running, before the deployment server triggered a restart...
08-10-2018
11:15 AM
Hi @ddrillic - do wildcarded sourcetype stanzas like the above follow stanza precedence in ASCII order?
08-09-2018
03:57 PM
Hi adonio,
I am looking at different stanzas in different locations.
We know system/local has higher priority than app1/local, but source stanzas have higher priority than sourcetype stanzas.
Will the source settings be applied even though they are placed in a lower-precedence location?
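To make the question concrete, a hypothetical props.conf pair (paths, names, and the setting are invented for illustration): for an event from /var/log/app.log with sourcetype my_st, the [source::...] stanza's value would be expected to win for the overlapping setting, because source stanzas take precedence over sourcetype stanzas in props.conf:

```
# app1/local/props.conf
[source::/var/log/app.log]
TRUNCATE = 5000

# system/local/props.conf
[my_st]
TRUNCATE = 10000
```

The file-location precedence (system/local over app/local) governs conflicts between the same stanza defined in different files, which is a separate question from source-vs-sourcetype stanza precedence.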