Activity Feed
- Karma Re: Regex Character Limitation for richgalloway. 02-23-2022 06:52 AM
- Got Karma for Re: Regex Character Limitation. 02-23-2022 05:13 AM
- Got Karma for Re: Regex Character Limitation. 02-23-2022 03:23 AM
- Posted Re: Regex Character Limitation on Splunk Search. 02-22-2022 11:28 PM
- Posted Re: JRE/JDK shipped with Splunk Enterprise? on Installation. 01-25-2021 04:43 AM
- Got Karma for Re: Field regex extractions on non-raw fields from web UI. 07-12-2020 08:19 PM
- Got Karma for Re: Field regex extractions on non-raw fields from web UI. 07-12-2020 08:19 PM
- Got Karma for Re: Field regex extractions on non-raw fields from web UI. 07-12-2020 08:19 PM
- Got Karma for Re: Field regex extractions on non-raw fields from web UI. 07-12-2020 08:19 PM
- Got Karma for Re: Field regex extractions on non-raw fields from web UI. 07-12-2020 08:19 PM
- Karma Re: Writing reqular expressions for niketn. 06-05-2020 12:49 AM
- Karma Re: can we build app without "search" view? for niketn. 06-05-2020 12:49 AM
- Karma Re: Why does rex/regex return different results than field-extraction? for wenthold. 06-05-2020 12:49 AM
- Karma Re: Does Splunk enterprise 6.6.1 require Java? for hardikJsheth. 06-05-2020 12:49 AM
- Karma Re: Using where with a wildcard for elliotproebstel. 06-05-2020 12:49 AM
- Got Karma for Re: Why doesn't this eval statement work?. 06-05-2020 12:49 AM
- Got Karma for Re: Trouble getting the Windows universal forwarder to forward data. 06-05-2020 12:49 AM
- Got Karma for Re: Help in creating regex for encryption of data?. 06-05-2020 12:49 AM
- Got Karma for Re: Splunk refuses to ingest particular variable. 06-05-2020 12:49 AM
- Got Karma for Re: Splunk refuses to ingest particular variable. 06-05-2020 12:49 AM
02-22-2022
11:28 PM
2 Karma
In case this was never resolved, or for others who are interested, the solution here is to use multiple transforms stanzas to bring the total size under 8190, like so:
props.conf:
[my_sourcetype]
TRANSFORMS-index_routing = ruleset1, ruleset2
transforms.conf:
[ruleset1]
REGEX = (cat.*dog|rat.*count)
FORMAT = sample1
DEST_KEY = _MetaData:Index
[ruleset2]
REGEX = (computer.*calculator|computer.*device.*v2)
FORMAT = sample1
DEST_KEY = _MetaData:Index
I would also argue that in this specific case a different approach should be used, as a regex this sizable will cause high CPU overhead during ingestion, especially if the source is high-volume.
01-25-2021
04:43 AM
Hi, Splunk does not ship with a JRE or JDK. My understanding is that the jars have recently been included to support several features, Hadoop integration being one of the main ones; however, without a JRE they cannot run on an out-of-the-box installation. As a second reason why there isn't one included: Splunk can't bundle a JRE, as that would violate the license terms of the supported JRE/JDK providers.
10-09-2018
11:15 PM
Instead of avg() you can use sum(), like so:
| eval esize=len(_raw) | stats sum(esize) as bytes by sourcetype
01-16-2018
06:05 PM
1 Karma
For any visitors finding this in 2018: The solution from andrewcg's comment above is still working for me in 7.0.
11-29-2017
03:49 PM
Hot and warm are always in the same directory, however your cold buckets can be stored in a different location.
This can be configured per index in the indexes.conf file by changing the homePath and coldPath settings.
Take a look at the documentation here:
http://docs.splunk.com/Documentation/Splunk/7.0.0/Indexer/Configureindexstorage
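As an illustrative sketch, a per-index configuration might look like the following (the index name and cold storage path are placeholders, not from your environment):

```
[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = /mnt/cold_storage/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
```

Hot and warm buckets live under homePath, while cold buckets roll to coldPath on the slower/larger volume.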
As an alternative on Windows, you can mount a disk into a directory as well as assigning it a separate drive letter. This means you can keep Splunk's default directory structure without having to adjust your indexes.conf files, but still browse to the data via the drive letter.
Have a look at the following technet article for details on how to do this:
https://technet.microsoft.com/en-us/library/cc753321(v=ws.11).aspx
11-27-2017
02:56 PM
That's a very clean solution. I'm saving that in my book of tricks.
11-23-2017
02:21 PM
Great to hear!
11-22-2017
08:33 PM
I've updated my answer slightly.
11-22-2017
07:33 PM
Since your data is already in CSV format, you can enable CSV extraction using the header row as the field names. Your props.conf should look like this:
[csv_custom]
DATETIME_CONFIG=NONE
INDEXED_EXTRACTIONS=csv
FIELD_DELIMITER=,
Note: You will need to distribute this config to the universal forwarders as well, since they need the configuration to correctly read the CSV file. Are you doing this, or are you only placing the config on the indexer?
This config appears to work correctly for me (I added the missing make field into the data).
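For completeness, a minimal sketch of the forwarder-side input that would pick up such a file (the file path here is a placeholder):

```
# inputs.conf on the universal forwarder; path is a placeholder
[monitor:///var/data/inventory.csv]
sourcetype = csv_custom
```

With INDEXED_EXTRACTIONS, the props.conf stanza above must be present on the forwarder doing the reading, which is why both files need to be distributed there.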
11-22-2017
06:18 PM
I've updated my answer with a link to the installation guide for universal forwarders.
You can place the Add-on in deployment-apps, but you will need to configure the universal forwarder to poll the deployment server for configuration, as well as create a server class for the server (this can be achieved through conf files or the GUI).
I would suggest reading the 'About deployment server' documentation from the link in my answer if you are curious about this, as the topic is too large to properly cover in an answer here.
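As a rough sketch of what that setup could look like in conf files (all hostnames and class names here are placeholders, not taken from your environment):

```
# serverclass.conf on the deployment server
[serverClass:windows_uf]
whitelist.0 = WINHOST*

[serverClass:windows_uf:app:Splunk_TA_windows]
restartSplunkd = true

# deploymentclient.conf on the universal forwarder
[target-broker:deploymentServer]
targetUri = deploy.example.com:8089
```

The forwarder polls the deployment server at targetUri, and any app mapped to a matching server class is pushed down automatically.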
11-22-2017
06:15 PM
2 Karma
I don't really think there's an 'answer' here - it would seem to be something specific to your environment, but at least you've found a workaround. Maybe write up a quick summary of how you fixed it and post that, then accept it as your own answer?
11-22-2017
02:58 PM
1 Karma
Is your outputs.conf correctly set up to forward data to the indexer?
Install the Splunk Add-on for Windows on the universal forwarder: https://splunkbase.splunk.com/app/742/
The steps to install this on the universal forwarder are listed here: https://docs.splunk.com/Documentation/WindowsAddOn/4.8.4/User/InstalltheSplunkAdd-onforWindows#Install_the_add-on_on_a_universal_forwarder
The Add-on has all the right configuration to ingest Windows events. It needs to be installed on the universal forwarder so that the forwarder knows what information to push to the indexer.
Typically, a deployment server is used to push this configuration to the universal forwarders. You can read more about them here:
http://docs.splunk.com/Documentation/Splunk/7.0.0/Updating/Aboutdeploymentserver
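For reference, a minimal outputs.conf on the forwarder might look like this (the indexer hostname and group name are placeholders; 9997 is just the conventional receiving port):

```
# outputs.conf on the universal forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer.example.com:9997
```

Make sure the indexer is actually listening on that port (Settings --> Forwarding and receiving --> Configure receiving).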
11-22-2017
02:49 PM
Unfortunately no, as the authentication system is the same for both internally.
I would recommend creating a local admin user for each administrator, using something like DennisFFM_admin instead of your normal DennisFFM account. This way you can have local authentication on the cluster with auditing tied to each user, but still log into the web interface with SSO.
11-21-2017
10:23 PM
More than likely this won't work, but you might be able to try adding the following stanza to transforms.conf to extract them and turn them into fields automatically (don't forget to reference it from props.conf):
[get-params]
REGEX = Param=\"-(.+?)\s'(.+?)'\"
FORMAT = $1::$2
MV_ADD = true
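To tie that back in from props.conf, a sketch (the sourcetype name here is a placeholder):

```
# props.conf
[my_sourcetype]
REPORT-get_params = get-params
```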
11-21-2017
09:25 PM
@doweaver If this solved your problem it'd be great if you could mark this as Answered, just to help others in the future. Cheers!
11-21-2017
08:48 PM
1 Karma
If you're already naming your fields in the extraction, then you don't need to enter anything in the format box 🙂
The format box is an alternative, so using your example, you could have this instead:
(?:\d*\.){2}(\d+)\.(\d+)\.([^\.]*)\.([^\.]*)
In which case you'd have to put: BuildNumber::$1 BuildRevision::$2 BuildArch::$3 BuildBranch::$4 in the format box.
11-21-2017
07:45 PM
1 Karma
Ahh yes, if you're using the field extractor UI, then it will often create regexes that rely on fixed data positioning. As an alternative, you can rewrite the regex to look only at the column with that header, in which case it won't matter where the column sits in the CSV. I completely understand what you mean though, especially if you're relying on the field extractor UI.
I've updated my original answer and the one above with some information on field transformations, which I think will solve your problem.
11-21-2017
07:34 PM
1 Karma
You can use a field transform to achieve the result you want. Instead of doing a new field extraction, you can go to Settings --> Fields, then Field Transformations. This lets you choose a source key to extract from.
That said, I still don't quite follow regarding how pulling from _raw instead of a specific field limits you to a strict order for the future. Are you able to explain this in a bit more detail, or even better, give an example? If anything, pulling from _raw gives you more flexibility, hence why the GUI extraction is the way it is.
11-21-2017
07:23 PM
1 Karma
Yes, we would definitely recommend using automatic extractions if possible. Note that this will work on _raw, so your regex will be a little different.
You can access this from the GUI by going to Settings --> Fields, then Field Extractions. Click New, then fill in the fields and click Save. The extraction will work automatically for all new searches in the relevant app context.
11-21-2017
07:18 PM
2 Karma
Extract a field from an existing field automatically:
You can use a field transform to automatically extract a new field from an existing auto-extracted field (this lets you choose a source key to extract from).
In the GUI, go to Settings --> Fields, then Field Transformations. Click New, then fill in the fields and click Save. The transformation will work automatically for all new searches in the relevant app context.
Extract a field from the raw data automatically:
If you need to automatically extract a new field from _raw in the first place, we would recommend using automatic extractions. Note that this will work on _raw, so your regex will be a little different - it will have to match on data from the whole event, so you might need something closer to FieldToExtractFrom:(?<ExtractedField>\d+) as your regex (using your example).
You can access this from the GUI by going to Settings --> Fields, then Field Extractions. Click New, then fill in the fields and click Save. The extraction will work automatically for all new searches in the relevant app context.
Extract data inline from a field with rex:
For one-off extractions, you can use rex inline, like so:
| rex field=FieldToExtractFrom "(?<new_field_name>regexhere)"
As an example, say you wanted to extract the first name from a name field, where you have name="First Last", you could use:
| rex field=name "^(?<first_name>\w+?)\s"
This would extract the first name and put it in a new field called first_name.
If you have an example to post then I might be able to give a more specific answer for your use case, but I hope this helps.
11-21-2017
06:44 PM
Try using: rex field=_raw "pyActivity=(?<pyActivity>.+?)&"
This will match as few characters as possible up to the first ampersand. The question mark after the plus sign is what makes this happen. You also need to name the field in the extraction, which is the ?<pyActivity> portion of the regex.
11-21-2017
06:29 PM
As a test, can you go back to basics and ingest each metric separately, then do the addition of the stats inside Splunk with eval? This avoids all the calculation and data munging within the script.
I agree though, this is really bizarre.
Alternatively, can you run your script from a cron entry, write to an output file, and have Splunk monitor that file?
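A sketch of what that cron entry could look like (the script and output paths are placeholders):

```
# crontab entry: run the script every 5 minutes, appending stdout/stderr to a file
*/5 * * * * /opt/scripts/collect_metrics.sh >> /var/log/metrics_output.log 2>&1
```

Splunk can then pick up the output file with a standard [monitor://...] input, which takes the scripting and timing concerns out of the ingestion path.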
11-21-2017
04:33 PM
1 Karma
You can use HTML tags with <br/> to add whitespace between panels, like below. Use as few or as many <br/> tags as required to increase or decrease the whitespace.
<row>
<html>
<br/><br/><br/><br/>
</html>
</row>
11-21-2017
04:10 PM
1 Karma
If both fields are strings, you can use strcat instead, which is a bit simpler to work with. Replace your eval command with:
| strcat hc " " hm hardwaretype
This will create a new field "hardwaretype" with your two strings and a space in between.
Without knowing what your fields contain, I suspect eval is attempting to sum your fields rather than concatenate them. If you want to use eval specifically, try periods instead of plus signs; the period is eval's string concatenation operator:
| eval hardwaretype=hm." ".hc
Or even:
| eval hardwaretype=tostring(hm)." ".tostring(hc)
11-21-2017
01:28 PM
This will mask it at index time, and yes, the local folder of your TA would be the right place to modify props.conf.
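As a sketch of one common way to do index-time masking in props.conf (the sourcetype name and the SSN-style pattern below are placeholders; SEDCMD rewrites the raw event before it is indexed):

```
# props.conf in the TA's local directory
[my_sourcetype]
SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g
```

Because the substitution happens at parse time, the original values never reach the index, so restart the instance doing the parsing after adding it.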