Activity Feed
- Got Karma for Re: Why do variations in sourcetype appear?. 02-19-2024 10:27 PM
- Got Karma for How to manage deployment clients?. 02-08-2024 01:24 AM
- Got Karma for Re: How can I search for a missing field?. 06-10-2022 07:36 AM
- Got Karma for Re: How do I set a timerange to be the last full 7 days?. 12-16-2021 01:42 PM
- Got Karma for Re: Does Splunk index gzip files?. 03-24-2021 12:16 AM
- Got Karma for Does outputlookup append or overwrite?. 10-07-2020 08:30 AM
- Karma [splunkd.log error] DispatchSearch - Unable to saved search history for user=admin for bckq. 06-05-2020 12:46 AM
- Karma Re: Can I get a count of distinct values in multivalue field? for jonuwz. 06-05-2020 12:46 AM
- Karma Re: Is it possible to dynamically reload a new/updated tags.conf file? for Ayn. 06-05-2020 12:46 AM
- Karma Re: Show description in legend instead of numbers for MarioM. 06-05-2020 12:46 AM
- Karma How to install the Universal Forwarder on a Windows Cluster for jasonstone. 06-05-2020 12:46 AM
- Karma Re: How to install the Universal Forwarder on a Windows Cluster for bsherwoodofdapt. 06-05-2020 12:46 AM
- Karma Re: How to convert scientific notation to decimal? for tchen_splunk. 06-05-2020 12:46 AM
- Karma Splunk bootstrap themes for Lazarix. 06-05-2020 12:46 AM
- Got Karma for Can I get a count of distinct values in multivalue field?. 06-05-2020 12:46 AM
- Got Karma for Can I get a count of distinct values in multivalue field?. 06-05-2020 12:46 AM
- Got Karma for Can I get a count of distinct values in multivalue field?. 06-05-2020 12:46 AM
- Got Karma for How to tell the sort command to sort by numerical order instead of lexicographical?. 06-05-2020 12:46 AM
- Got Karma for Is it possible to dynamically reload a new/updated tags.conf file?. 06-05-2020 12:46 AM
- Got Karma for How to extract a variable number of fields?. 06-05-2020 12:46 AM
11-17-2016
08:33 AM
1 Karma
Hello, the config should be applied on the instance that is collecting the data, which is usually the forwarder.
Second, best practice is that no config should be updated or edited in the default folder. You can use Deployment Server to propagate the settings to the local folder or an app folder instead.
Does this answer your questions?
03-05-2014
12:18 AM
How can I get stats by author if I have multiline events like the below?
Project: /a/b/c
loc=100 author=aaa@foo.com
loc=100 author=bbb@foo.com
loc=100 author=ccc@foo.com
Project: /a/b/c
loc=200 author=aaa@foo.com
loc=200 author=ccc@foo.com
loc=200 author=ddd@foo.com
Given the two events above, I am looking for a results table like this:
Project   Author        Total Lines of Code (loc)
-------------------------------------------------
/a/b/c    aaa@foo.com   300
          bbb@foo.com   100
          ccc@foo.com   300
          ddd@foo.com   200
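For illustration, a sketch of one way to do this (not from the thread; field names are made up): capture each loc/author pair as multivalue fields, zip and expand them, then sum by project and author.
... | rex "Project: (?<project>\S+)"
| rex max_match=0 "loc=(?<loc>\d+)\s+author=(?<author>\S+)"
| eval pair=mvzip(loc, author)
| mvexpand pair
| eval loc=tonumber(mvindex(split(pair, ","), 0)), author=mvindex(split(pair, ","), 1)
| stats sum(loc) as total_loc by project, author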
10-08-2013
07:43 PM
2 Karma
To answer my own question. 😉
Here's the regex that fixed it all:
... | rex "^\"JOB_NEW\" \"(?<lsfcommand>(?:\"\"|[^\"])*+)\""
I needed to use a non-capturing group (thank you jonuwz) and a possessive or lazy quantifier. Not sure why it works, but happy that it does. (Presumably the plain greedy group backtracks heavily on these long fields and trips a PCRE backtracking limit, which possessive and lazy quantifiers avoid.)
All of these quantifier variants turned out to work:
... | rex "^\"JOB_NEW\" \"(?<lsfcommand>(?:\"\"|[^\"])*+)\""
... | rex "^\"JOB_NEW\" \"(?<lsfcommand>(?:\"\"|[^\"])*?)\""
... | rex "^\"JOB_NEW\" \"(?<lsfcommand>(?:\"\"|[^\"])+?)\""
10-08-2013
05:40 PM
I think I found something that works, though I'm not entirely sure why: if I change the quantifier * to the possessive *+, then I seem to be able to get past the character limitation. Aaah, I love regex, but regex does not love me. 🙂
10-08-2013
05:17 PM
Thank you, jonuwz. The second regex does work for both problems, but only when the field is less than 498 characters long; beyond that it exceeds some kind of PCRE limit and fails. I tried your non-capturing suggestion and it also fails on the event with 498+ characters. 😞
10-08-2013
03:00 PM
I am trying to extract a field and have 2 distinct problems:
1. The field length can often creep above 498 characters. This is where Splunk fails to complete the field extraction (maybe because of a PCRE recursion limit).
2. The field values are somewhat tricky: they are surrounded by double quotes, and embedded quotes are escaped by doubling them ("").
In pseudo speak, here is an example event:
"JOB_NEW" "pretend this is the ""very"" long field with more than 498 chars" "next field" "more fields"
The regex to solve #1: ... | rex "^\"JOB_NEW\" \"(?<lsfcommand>([^\"]*)\")"
The regex to solve #2: ... | rex "^\"JOB_NEW\" \"(?<lsfcommand>(\"\"|[^\"])*)\""
Can you help find a regex to solve both #1 and #2?
I've included a data sample below which I believe captures both problems. So if we can find a regex that works for all 3 events then we're golden.
=== props.conf ===
[regextest]
SHOULD_LINEMERGE = false
DATETIME_CONFIG = CURRENT
=== data sample (3 events) ===
"JOB_NEW" "siliconsmart -x ""set sis_stage libgen"" scripts/sis_runme.tcl" 0 "" "default" 32987 1 "LINUX64" "" "" "" "" 2104336 0 "" "" "/prj/abcdef/lsfgbcspool/x" -1 -1 -1 "default" 0 "" "" -1 "" 0 -1
"JOB_NEW" "/prj/abc/def-sys/bolt/users/fooo/KalmanRegressionTip/tip100173/wiltsim/tools/../../library/lsf_tools/lsf_jobname_wait.pl stressTest_iceqbe.pl.b68c83ae\* /prj/abc/lte-sys/bolt/users/fooo/KalmanRegressionTip/tip100173//wiltsim/tools/regress_finalize.pl -m xluo /prj/abc/lte-sys/bolt/users/fooo/KalmanRegressionTip/tip100173//regression/logs/log_stressTest_iceqbe.pl.20130829.013458.report.log fooo fooo" 0 "" "11644" 1 "LINUX64" "" "" "" "" 2098192 0 "" "" "/prj/abcdef/lsfgbcspool/x" -1 -1 -1 "default" 0 "" "" -1 "" 0 -1
"JOB_NEW" "/prj/abc/lte-sys/bolt/users/fooo/KalmanRegressionTip/tip100173/wiltsim/tools/../../library/lsf_tools/lsf_jobname_wait.pl stressTest_iceqbe.pl.b68c83ae.1.autosim_define\* /prj/abc/lte-sys/bolt/users/fooo/KalmanRegressionTip/tip100173//run/performance/ICEQBE/_SingleCellKalmanStressTests/_compare2reference.pl -cltv -metric PDSCH_INFO_UEID_1 ThroughputMbps 0.06 /prj/abc/lte-sys/bolt/users/fooo/KalmanRegressionTip/tip100173//regression/logs/log_stressTest_iceqbe.pl.20130829.013458.report.log fooo fooo" 0 "" "11644" 1 "LINUX64" "" "" "" "" 2098192 0 "" "" "/prj/abcdef/lsfgbcspool/x" -1 -1 -1 "default" 0 "" "" -1 "" 0 -1
10-07-2013
09:52 PM
2 Karma
Given the following data sample of 4 events where each event has a number immediately after the timestamp that indicates the number of hosts to be listed:
10/07/2013:09:00:00 3 "host1" "host2" "host3" foo
10/07/2013:09:01:00 2 "host1" "host2" bar
10/07/2013:09:02:00 4 "host1" "host2" "host3" "host4" baz
10/07/2013:09:03:00 1 "host1" foobarbaz
So in the first event, there are 3 hosts (host1, host2 and host3) and an unrelated field following the 3 host names (foo).
Is it possible to use rex or props/transforms to intelligently capture the number of hosts expected, use that number to capture the following host names into a multivalue field, and then pick up where we left off to extract the fields following the host parade? I suppose I could write a custom search command to return all the fields sensibly, but I'm thinking there might be a clever way to do this inline.
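For illustration, a sketch of one inline approach (not from the thread; field names are made up): grab the leading count, capture every quoted token into a multivalue field, trim that field to the expected count, and pick the trailing field off the end of the event.
... | rex "^\S+\s+(?<num_hosts>\d+)"
| rex max_match=0 "\"(?<hosts>[^\"]+)\""
| eval hosts=mvindex(hosts, 0, num_hosts - 1)
| rex "(?<trailing>\S+)$"
This assumes the host names are the only quoted tokens in the event, as in the sample above.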
03-13-2013
09:52 AM
You are brilliant! Thank you, this is exactly what we are looking for. 🙂
03-12-2013
05:10 PM
I was able to get the information desired, but not really in the clean format provided by the values() or list() functions, using this approach:
... | stats list(abc) as tokens by id | mvexpand tokens | stats count by id,tokens | mvcombine tokens
id    tokens   count
bar   123      1
      456
bar   789      2
foo   123      3
The output is a table of id, tokens and count, grouped by count. This technically answers the question, but not in a user-friendly format. 🙂
03-12-2013
04:49 PM
3 Karma
What I'm looking for is a hybrid of the stats list() and values() functions. First, I'd like the list of unique values for a multivalue field, then alongside each unique value, I'd like the count of occurrences of that value. Is this possible?
Maybe this is better illustrated through an example.
Given a set of events like this:
03/12/2013 15:55:00 id=foo abc=123,123,123
03/12/2013 15:56:00 id=bar abc=123,456,789,789
I can get tables like these with the list() and values() functions, respectively:
id   abc              id   abc
----------------      ----------------
foo  123              foo  123
     123              bar  123
     123                   456
bar  123                   789
     456
     789
     789
But what I really want is this:
id   abc
----------------
foo  123 (3)
bar  123 (1)
     456 (1)
     789 (2)
I believe this to be possible... for a search query superhero (which I am not). 🙂
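For what it's worth, a sketch of one way to get that output (not from the thread; field names taken from the example): split the multivalue field, count occurrences per value, glue the count onto each value, and list the results per id.
... | makemv delim="," abc
| mvexpand abc
| stats count by id, abc
| eval abc=abc . " (" . count . ")"
| stats list(abc) as abc by id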
- Tags:
- multivalue
01-30-2013
09:55 AM
Yann, is there a configuration file to set the default higher for all CLI searches?
12-22-2012
09:52 AM
Thank you, Ayn! This is most excellent. 🙂
12-21-2012
10:39 PM
1 Karma
We would love it if there were a REST endpoint or some other way to create and update tags, similar to the way we refreshed fields in the old days with "... | extract reload=t".
I understand you can simply restart Splunk to refresh new configuration including tags.conf, but we are trying to find ways to reload dynamically without restarting.
I also understand we have endpoints to support the addition/deletion of tags individually.
Knowledge: http://docs.splunk.com/Documentation/Splunk/latest/RESTAPI/RESTknowledge#search.2Ftags
Configurations: http://docs.splunk.com/Documentation/Splunk/latest/RESTAPI/RESTconfig
This is going to require some scripting or coding, which isn't terrible, but a more efficient way would be preferred. Since there's no 'update' endpoint, we also have to account for existing tags of the same name when adding/deleting. I'm hoping there's some undocumented endpoint or utility to perform a bulk dynamic refresh.
We like the way tags are displayed next to the field in Splunk Web, so using a lookup is not ideal, though it could logically accomplish the same thing.
06-01-2012
09:12 AM
Thank you, Gerald. I thought to try the same thing this morning and will test and post results here. We are using the streaming command precisely because Intersplunk limits the amount of data returned.
06-01-2012
12:42 AM
I have a custom search command which uses the streaming API to retrieve query results. Here's a snippet:
import csv
import sys

# resultsFile is opened earlier in the script
results = csv.DictReader(sys.stdin)
for r in results:
    resultsFile.write(r['_raw'] + '\n')
Pretty basic.
The problem is I want to operate on the full set of results when the streaming has completed (perform a POST on everything). But how can the script tell when Splunk is done streaming events?
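A sketch of one possibility, under a big assumption: if Splunk invokes the script once and streams the whole result set over stdin, then the DictReader loop ends at EOF, and anything after the loop runs exactly once the stream is complete. If Splunk instead invokes the script multiple times with chunks, this won't see the full set.
import csv
import sys

results = []
for r in csv.DictReader(sys.stdin):
    results.append(r['_raw'])

# EOF reached: under the single-invocation assumption, the stream is done.
# post_everything(results)  # hypothetical POST helper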
- Tags:
- custom-search-script
11-09-2011
11:40 AM
1 Karma
When a deployment client is unable to connect to a deployment server, how many times does it retry before it gives up completely? We observe clients retrying on the phoneHomeIntervalInSecs interval, but otherwise there is no indication in deploymentclient.conf.spec of how many times it will retry. Maybe it never gives up?
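For reference, a minimal deploymentclient.conf along these lines (values illustrative, not from the thread):
[deployment-client]
phoneHomeIntervalInSecs = 60

[target-broker:deploymentServer]
targetUri = deploymentserver.example.com:8089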
10-28-2011
09:43 AM
Thank you, ewoo! We are trying something out which might work, description is posted as an answer.
10-28-2011
09:38 AM
When configuring a deployment client from the CLI, the config (including targetUri) is written to etc/system/local/deploymentclient.conf. We are trying a solution whereby we delete deploymentclient.conf from etc/system/local and instead package it as part of a deployment server app. This app is then synced to the SHP target repository by the search head designated as the deployment client. The effect should be that the other search heads in the pool become implicit deployment clients. Will report back on whether this works, and whether there are any unintended problems.
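A sketch of the layout being tried (the app name is hypothetical): the conf file lives in an app on the deployment server rather than in etc/system/local.
$SPLUNK_HOME/etc/deployment-apps/shp_deploymentclient/default/deploymentclient.conf

[target-broker:deploymentServer]
targetUri = deploymentserver.example.com:8089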
10-25-2011
03:06 PM
1 Karma
I understand it is possible to use Deployment Server to propagate config changes to a Search Head Pool as documented here:
http://docs.splunk.com/Documentation/Splunk/latest/Deploy/Configuresearchheadpooling#Deployment_server_and_search_head_pooling
It is clear that a single search head can be designated as the deployment client. It will then sync config to the pool target repository and restart itself. What is not clear is how the other search heads in the pool detect the change and then restart. The restartSplunkd parameter in serverclass.conf does not appear to be honored by search heads that are part of the pool but are not designated as deployment clients.
Is this the expected behavior? It makes sense that it would be, since the other search heads are not managed by the DS. If so, what is the recommended way to automate a restart for members of the pool that are not deployment clients?
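For reference, the serverclass.conf setting in question (serverclass and app names hypothetical):
[serverClass:shp_config:app:shp_app]
restartSplunkd = true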
08-30-2011
03:09 PM
1 Karma
I want the series to sort as 1,2,3,10,11,12 not 1,10,11,12,2,3. The sort functions do not seem to have any effect when used in this context:
... | sort -num(myfield)
I don't see any examples of using the sort functions in the documentation or other questions. 😞
I have also tried:
... | sort by num(myfield)
... | sort num(myfield)
Halp!
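One workaround sketch (not from the thread): if this is a chart series, the labels likely sort as strings, which would explain why sort has no visible effect; zero-padding the field makes lexicographic and numeric order agree. The printf eval function only exists in reasonably recent Splunk versions, and the chart command here is illustrative.
... | eval myfield=printf("%03d", tonumber(myfield))
| chart count by myfield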
07-01-2011
01:09 PM
1 Karma
Emily, you're right. The count option has no effect. The showPager option does. See if this workaround works for you:
1. In the flash timeline view, set 'Results per page' to 10.
2. Save your search as a new search (this associates a new viewstate with the paginator count set to 10).
3. Use the new saved search in your dashboard.
Alternatively, you (or your Splunk Admin) can correct the viewstate:
1. Find the viewstate id associated with the saved search in savedsearches.conf (e.g. vsid=gp902hc2).
2. Find the viewstate using the vsid in viewstates.conf.
3. Edit the Count_#_#_#.default, MaxLines_#_#_#.default, and MaxLines_#_#_#.maxLines variables.
4. Restart Splunk.
This is a bug, which I will file on your behalf. Thank you for bringing this to our attention. 🙂
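To make the viewstate edit concrete, a heavily hedged sketch of what the stanza might look like; the stanza header and the module-number suffixes (the #_#_# parts) vary per view, so treat everything here as a placeholder:
[flashtimeline:gp902hc2]
Count_0_7_1.default = 10
MaxLines_0_7_1.default = 10
MaxLines_0_7_1.maxLines = 10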
03-28-2011
06:54 AM
Hi Paolo, did you ever solve this problem? If not, since you are collecting this data via a scripted input, why not add the lookup capability on the user_name field in the same script? The script could either augment _raw with the user_role, or only write events for the roles you are interested in. The second option saves Splunk the trouble of applying index-time filtering altogether, and maybe some CPU cycles.