Activity Feed
- Got Karma for Splunk Add-on for Amazon Web Services: How to get a CSV file stored in Amazon S3 to properly split at index-time?. 10-09-2024 07:59 AM
- Got Karma for ipmask (dys-)function: Why the SPL parser does not handle this fairly common case?. 08-03-2023 11:39 AM
- Posted ipmask (dys-)function: Why the SPL parser does not handle this fairly common case? on Splunk Search. 12-09-2022 05:40 AM
- Tagged ipmask (dys-)function: Why the SPL parser does not handle this fairly common case? on Splunk Search. 12-09-2022 05:40 AM
- Got Karma for Re: FileSize to human readable. 08-17-2020 12:16 PM
- Got Karma for Re: How to configure all forwarders from an old deployment server to a new deployment server?. 06-05-2020 12:48 AM
- Karma Re: REST endpoint for modifying $app/local/macros.conf for acharlieh. 06-05-2020 12:47 AM
- Got Karma for Re: how to change host value of the field in splunk web?. 06-05-2020 12:47 AM
- Got Karma for Splunk Add-on for Amazon Web Services: How to get a CSV file stored in Amazon S3 to properly split at index-time?. 06-05-2020 12:47 AM
- Got Karma for Re: Is it possible to add another secondary search option to the Event Details menu currently containing "Add to search", "Exclude from search", "New search"?. 06-05-2020 12:47 AM
- Got Karma for REST endpoint for modifying $app/local/macros.conf. 06-05-2020 12:47 AM
- Posted Re: FileSize to human readable on Splunk Search. 07-21-2017 07:48 AM
- Posted Re: what are your Deployment App naming conventions? on All Apps and Add-ons. 04-14-2016 01:51 PM
- Posted Re: How to configure all forwarders from an old deployment server to a new deployment server? on Getting Data In. 04-14-2016 01:42 PM
- Posted Re: Splunk Add-on for Check Point OPSEC LEA problem on All Apps and Add-ons. 04-14-2016 01:25 PM
- Posted Re: List of valid [perfmon://] stanzas for inputs.conf on Getting Data In. 02-03-2016 07:23 AM
12-09-2022
05:40 AM
1 Karma
The documentation (9.0.2 Search Reference) describes a function ipmask(<mask>,<ip>) that is supposed to apply the given netmask to the given IP. Seems pretty simple, and the examples are mostly straightforward... unless you consider what a netmask of 0.255.0.244 would actually mean on the network.
The more interesting problem is what you're allowed to pass to this function. From what I can tell, the first parameter MUST be a quoted string of digits, and particularly NOT the name of a field in your data:
|makeresults 1 | eval ip = "1.2.3.4", mask = "255.255.255.0"
With these values defined,
| eval k = ipmask("255.255.255.0", "5.6.7.8") works fine, k=5.6.7.0
| eval k = ipmask("255.255.255.0", ip) works fine, k=1.2.3.0
| eval k = ipmask("255.255.255.0", mask) works fine, k=255.255.255.0 (but isn't a meaningful calculation).
| eval k = ipmask(mask, "5.6.7.8") does not work: Error in 'EvalCommand': The arguments to the 'ipmask' function are invalid.
| eval k = ipmask(mask, ip) does not work: Error in 'EvalCommand': The arguments to the 'ipmask' function are invalid.
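Putting the experiment together as complete searches, in case anyone wants to reproduce it (same field names as above) -- the first runs cleanly, the second throws the EvalCommand error quoted above:
| makeresults | eval ip = "1.2.3.4", mask = "255.255.255.0" | eval k = ipmask("255.255.255.0", ip)
| makeresults | eval ip = "1.2.3.4", mask = "255.255.255.0" | eval k = ipmask(mask, ip)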
I'm sure there's some highly technical reason why the SPL parser does not handle this fairly common case, and if there's anyone who can share that reason, I'd love to hear it.
--Joe
Labels: eval
07-21-2017
07:48 AM
1 Karma
Ancient thread necropsy, but here's a better macro (IMO). It's ugly but it works just like the -h option on many GNU tools.
Usage:
| eval readable_size=`readable(size)`
Definition (as entered in Settings -> Advanced search -> Search macros -> New):
if( $num$ < 1024, tostring($num$), if ( (floor($num$/pow(1024,floor(log($num$,1024))))) < 10
, ( (tostring((floor($num$/pow(1024,floor(log($num$,1024)))))) + ".") + tostring(round((($num$/pow(1024,floor(log($num$,1024))))-(floor($num$/pow(1024,floor(log($num$,1024))))))*10))) + (substr("KMGTPEZY",floor(log($num$,1024)),1))
, ( tostring((floor($num$/pow(1024,floor(log($num$,1024)))))) + (substr("KMGTPEZY",floor(log($num$,1024)),1)) )
) )
Leave "Use eval-based definition?" unchecked
Arguments: num
Validation Expression: !isnum($num$)
Validation Error Message: Numeric value required
My key observation for the algorithm is that the log base 1024 will give you the "scale"-- KB or PB or whatever-- by dropping the fractional part (e.g. floor(log_1024(5.6 MB)) = 2 -> M).
In working on this, I used meaningful names and replace-all'd them to fundamental eval functions. Here's the pseudocode:
if $num$ < 1024:
printf("%4d", $num$)
else
if $num$ reduces to a single digit
# print in the form x.yS
printf( "%d.%d%c", whole_part(reduction), 1st digit of frac_part(reduction), KMGTPEZY suffix appropriate for this scale
else # This is actually the most common case. The result is just the whole part of the reduction and the suffix
printf("%3d%s", whole_part(reduction), suffix)
Hope this helps somebody
--Joe
04-14-2016
01:51 PM
sorry for the delay:
defaults/macros.conf
[int_webapp_idx1]
definition= index=the_real_index_name
defaults/savedsearches.conf
[Event count by host last 15 minutes]
search = `int_webapp_idx1`
dispatch.earliest_time = -7d
(The macro name gets enclosed in backquotes in the actual query; the forum treats backquoted text as code, which is why that part of the answer looks mangled.)
04-14-2016
01:42 PM
1 Karma
You can... but it's ugly and error-prone.
The problem with deploying a deploymentclient.conf in an application is that the settings there are overridden by etc/system/local/deploymentclient.conf. So if you can change that (system/local) file, you're in business.
Ansible, Chef, Salt, Puppet, etc. are tools for changing that file on the system, which is useful if one of them is already in place and you are allowed to make the change in the CM tool (or can find a sysadmin long enough to explain what you need).
But Splunk is already on the system, so we can do it from Splunk itself, as a Splunk admin.
1) Create a deploy-client-config app in Splunk. You need 3 things in it (in addition to what comes out of the Blank application template):
bin/remove_deploy_system_setting.[bat|py], a script that (re)moves $SPLUNK_HOME/etc/system/local/deploymentclient.conf and restarts splunk
default/inputs.conf that runs the above script every... say 5 minutes
default/deploymentclient.conf that points at the new DS (all three pieces are sketched after the steps below)
2) Use the old deployment server to push this out to everybody (restart splunk after)
3) Create a same-named app on the new deploy server that just has the default/deploymentclient.conf piece (not the script or inputs.conf)
4) Tell the new deploy server to install the new app
A future migration or DS change (such as new https keys) would only require deploying a new version of the "deploy-client-config" app.
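A minimal sketch of those three pieces (the DS hostname/port and the 300-second interval are placeholders, and the Python below is only illustrative -- the .bat equivalent applies on Windows):
default/deploymentclient.conf:
[target-broker:deploymentServer]
targetUri = new-deploy-server.example.com:8089
default/inputs.conf:
[script://./bin/remove_deploy_system_setting.py]
interval = 300
disabled = false
bin/remove_deploy_system_setting.py:
#!/usr/bin/env python
# Move aside the system/local copy that overrides the app's settings, then restart.
import os, subprocess
home = os.environ["SPLUNK_HOME"]
conf = os.path.join(home, "etc", "system", "local", "deploymentclient.conf")
if os.path.exists(conf):
    os.rename(conf, conf + ".bak")
    subprocess.call([os.path.join(home, "bin", "splunk"), "restart"])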
--Joe
04-14-2016
01:25 PM
Could iptables (or another host-based firewall), AppArmor, or SELinux policies be blocking outbound connections from the splunk service (and specifically from the lea_log_grabber.sh that runs under it)?
02-03-2016
07:23 AM
An old question, but there's no answer here that I like.
Per http://serverfault.com/questions/149816/easiest-way-to-get-perfmon-counter-names-into-a-text-file you can use the "typeperf.exe -q" (or -qx) command.
But as Ron said, the counters you get are dependent on what software is installed (and/or running) on the system. For example, when you install the .NET CLR, the counters under the ".NET CLR Data" object are added. If you specify one of those in inputs.conf on a server that doesn't have the .NET CLR, you (obviously) won't get any data from that counter.
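For reference, a [perfmon://] stanza built from one of the names typeperf reports might look like this (the stanza name, object, counters, and interval here are purely illustrative):
[perfmon://CPU_Load]
object = Processor
counters = % Processor Time
instances = *
interval = 10
disabled = 0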
--Joe
12-18-2015
05:45 AM
While similar to the accepted answer above, I find it easier to see what is/should be deployed where if there is a slightly different order to the parts of the app name.
We create apps based on functionality -- "Internal App for our main webapp cluster" will be the user-facing name of it (functionality to monitor the J2EE stacks that back our production website), but on the back end, I have 3 apps^W^W 6 apps defined:
int_webapp_prod_data
int_webapp_prod_ui
int_webapp_prod_agent
int_webapp_test_data
int_webapp_test_ui
int_webapp_test_agent
(See the pattern, and how things would line up in the Deployment Server interface?)
The _data apps contain the index definitions and any index-time extractions (indexes.conf, fields.conf, etc.). This is deployed (via DS) to all of my indexers.
The _ui app has search-time configurations-- saved searches, dashboards, search-time extractions, and other things that influence the user interface.
The _agent app gets sent via the DS to the relevant forwarders (in this example, it would be deployed to the internal app servers). It contains (basically) two pieces: inputs.conf and outputs.conf -- what to look for, and where to send it -- though sometimes the inputs also require a python script or something in .../bin.
And of course, you have a test environment, right? 😉
A couple of coding requirements I've found useful in the apps:
Make a macro that abstracts the index names in the UI app (sketched after this list). So instead of having "index=webapp_perf" as the search, put that in macros.conf and search `webapp_index` instead. Just in case your index needs to be renamed (or the app published).
Specify output locations for your data, even if the default would work. Much better for migration when you need a different indexer (like when you move your app from the test index to the production one)
Use version control (I use git) to move changes from "developer-sandbox" to "test" to "prod".
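The index-abstraction macro from the first item would look something like this (the sourcetype is just a placeholder; the app and index names follow the example above):
In int_webapp_prod_ui/default/macros.conf:
[webapp_index]
definition = index=webapp_perf
and in every saved search or dashboard:
`webapp_index` sourcetype=webapp_access ...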
07-13-2015
08:40 AM
Most likely, the search is taking longer than the browser's timeout for an unresponsive website. You could try a different browser (or find the hidden config setting for your browser that changes the timeout).
How long does it take the search to run interactively?
If the search for the 800k+ events takes too long, you can run the search manually, then when the results are in, you can hit "Share" and copy down the link it generates. This will save the current resultset (i.e. it will not change when new data arrives) and make it more or less instant to pull up. Open that link in another tab, and you can export the results instantly.
--Joe
In inputs.conf, set a host= value:
[monitor:///var/log/H3C/information]
disabled=false
sourcetype=syslog_wisdom
host=192.168.1.254
--Joe
03-26-2015
08:07 AM
In general, yes.
But it depends on what kind of second factor you want to use.
The easiest way is to have Splunk sit behind an authenticating reverse proxy that handles the authentication and just passes the username back to splunk via an HTTP header (X-Authenticated-User, for example):
http://docs.splunk.com/Documentation/Splunk/6.2.2/Security/HowSplunkSSOworks
I have my splunk set up to look at the "SSL_CLIENT_S_DN" header, which gets set when I use my x.509 browser certificate.
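A rough sketch of the Splunk side of that setup, in web.conf (the proxy IP and header name are placeholders; the doc linked above covers the details):
[settings]
SSOMode = strict
trustedIP = 10.0.0.5
remoteUser = X-Authenticated-User
tools.proxy.on = true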
--Joe
03-25-2015
12:26 PM
3 Karma
I'd like my app not to clobber other people's index names, or at least to be able to reference an existing index whose name I don't know ahead of time in my searches.
I thought that I could, in my app's setup.xml, prompt the user for the desired index name.
But then, how do I get my saved search or view or dashboard to reference the value the user entered?
I created a macro in $app/default/macros.conf, defining:
[appindex]
definition = index=foo
and in the app, I can define my searches referencing `appindex` therestofthequery and everything works fine.
I can get setup.xml to prompt for the desired index name, but I can't find the REST endpoint that will put the definition in $app/local/macros.conf
Is there another way to do this?
03-25-2015
12:16 PM
Write your app to work with a search macro, and have Puppet put the correct value into $app/local/macros.conf
In macros.conf:
[index_for_this_env]
definition = index=%%puppet_replace_this%%
In your search, instead of index=foo eventtype=bar ... , you would have `index_for_this_env` eventtype=bar ... (backquotes around the macro name)
--Joe
02-26-2015
12:04 PM
1 Karma
Not in a good way.
The mouseclick events are in $SPLUNK_HOME/share/splunk/search_mrsparkle/exposed/js/views/shared/eventsviewerdrilldown/SegmentationDrilldown.js
Your modifications to this would be overwritten by splunk upgrades.
Alternatively, you might (with a little jQuery DOM magic) locate the <ul> that contains an existing <li> and insert your own <li> into it at page-render (actually at item-click) time.
Something along the lines of:
$("a.curr_inc_val").parent().parent().append('<li>Your Item here</li>')
which you can test with a reasonable Javascript console like Firebug.
--Joe
02-09-2015
06:41 AM
The "new file" with the same content is being reported as too small to index... perhaps splunk is trying to read the file before it has finished copying into place?
Otherwise, I'm out of ideas 🙂 Sorry.
02-06-2015
07:07 AM
Splunk looks at the beginning of a file (a checksum over roughly the first 256 bytes) to see whether it has already been seen; look at the documentation for crcSalt in inputs.conf.
I had near-duplicate files that were not being indexed until I set crcSalt = <SOURCE> in the relevant inputs.conf monitor stanza. I found out about this by reviewing "index=_internal sourcetype=splunkd" for the filename that was missed.
With this setting, Splunk used the file name as part of the file identification process, and my nearly-identical files (with different paths) were happily imported.
As for monitoring, I would suggest searching splunkd.log for the filenames. I can't tell you the exact syntax, but you may be able to group the relevant lines into a transaction and count how many there are; if there are fewer than 5, splunk has missed a file. A rough sketch follows.
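Entirely a sketch (the search string is a placeholder, the count of 5 comes from your description, and I've used stats rather than transaction for simplicity):
index=_internal sourcetype=splunkd "your_file_name_pattern"
| bin _time span=1h
| stats count by _time
| where count < 5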
02-05-2015
01:19 PM
If I mirror the S3 bucket to a local directory and monitor it, it splits nicely:
[monitor:///data]
disabled = 0
crcSalt = <SOURCE>
index = jm
sourcetype = s3_autoruns
whitelist = .*/autoruns.txt$
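For the mirroring itself, something like the AWS CLI's sync command does the job (bucket name and local path below are just the ones from this thread):
aws s3 sync s3://mybucket /data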
--Joe
02-05-2015
12:52 PM
2 Karma
I'm having trouble getting a CSV file that I've stored in Amazon S3 to properly split at index-time.
I'm using the Splunk Add-on for AWS, which allows me to define an S3 bucket to monitor. It pulls the data down just fine when a new CSV is uploaded:
[aws_s3://s3_autoruns]
disabled = false
aws_account = Splunk Reader
bucket_name = mybucket
index = jm
initial_scan_datetime = default
interval = 30
max_items = 100000
max_retries = 10
recursion_depth = 3
sourcetype = s3_autoruns
whitelist = .*/autoruns.txt$
blacklist = .*
character_set = UTF-16LE
I have a working transform in my props.conf (it changes the host field to part of the S3 URL), so I know this stanza is matching this data.
[source::.../autoruns.txt]
TRANSFORMS-s3host = transform-s3-integhost
DATETIME_CONFIG=CURRENT
With this, I get an event per line of the file.
I think I should be able to add to my props.conf:
INDEXED_EXTRACTIONS=CSV
FIELD_NAMES=Time,EntryLocation,Entry,Enabled,Category,Description,Publisher,ImagePath,LaunchString,MD5,SHA-1,SHA-256
FIELD_DELIMITER=,
But when I do that, it does not change anything. I still get one event per line, and no EntryLocation field to search on.
Any thoughts?
Thanks,
--Joe