Activity Feed
- Got Karma for Re: What is the basic difference between the lookup, inputlookup and outputlookup commands. 12-05-2024 01:35 PM
- Karma Re: Why is frozenTimePeriodInSecs 188697600 ? for PickleRick. 12-05-2024 07:44 AM
- Got Karma for Re: How to get a list of logins without a count or multiple entries for users logging in?. 11-04-2024 10:05 PM
- Got Karma for Re: Can A Single Master Cluster Node support multiple clusters?. 08-05-2024 08:13 AM
- Posted Re: Priority precedence fields by sourcetype on Splunk Cloud Platform. 02-25-2024 11:26 PM
- Karma Re: Unable to delete indexes on splunk cloud for richgalloway. 02-25-2024 09:42 PM
- Got Karma for Re: Why do I get this error: Eventtype "does not exist or is disabled" when I open my dashboard?. 01-21-2024 07:51 PM
- Got Karma for Re: Is there a way to install Universal Forwarder back to Splunk Cloud, after installation?. 01-19-2024 01:35 PM
- Karma Splunkbase | Communicate With Your App's Users for thellmann. 01-18-2024 08:43 PM
- Posted Re: "Link to Search" interaction missing in Dashboard Studio on Dashboards & Visualizations. 01-18-2024 08:38 PM
- Posted Re: How do I create an index using REST API? on Splunk Cloud Platform. 01-18-2024 08:23 PM
- Posted Re: Is there a way to install Universal Forwarder back to Splunk Cloud, after installation? on Splunk Cloud Platform. 01-18-2024 08:00 PM
- Posted Admin's Little Helper v1.2.0 Released - Fixes issue with distributed btool on next version of Splunk Cloud on All Apps and Add-ons. 01-12-2024 01:44 PM
- Got Karma for Re: Splunk Enterprise 8.2.10. 11-01-2023 08:00 AM
- Got Karma for Re: Accurate License usage when data is SQUASHED per host. 07-24-2023 03:18 AM
- Got Karma for Re: How to do stats or top for each column in a table?. 07-21-2023 07:34 AM
- Got Karma for Re: How do you count the number of events in a transaction?. 05-16-2023 03:45 PM
- Got Karma for Re: splunk rex backreference not working as expected. 05-14-2023 05:47 PM
- Got Karma for Re: splunk rex backreference not working as expected. 05-14-2023 02:19 PM
- Got Karma for Re: Transpose 1 columns from table with 4 columns. 05-13-2023 01:59 PM
02-25-2024
11:26 PM
I just stumbled on this and thought I'd add a few other notes. Regarding "with props.conf in the stanza 'sourcetype' where this props.conf creates a field called 'action'": just to confirm, how is the field being created? I'm assuming you mean a search time field as opposed to an index time field. Skimming the Mimecast for Splunk app, it looks like there are field aliases and eval statements around the `action` field for different source types, both of which are search time... but I could be missing something. If you were instead referring to an index time transformation, not only is the precedence order different, but the data would also need to be reingested before the change takes effect.

Speaking of precedence order, it's probably time to mention that search time attributes use user / app precedence order, as documented: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Wheretofindtheconfigurationfiles#Precedence_within_app_or_user_context

Some effects of this are: 1) your app needs to be lexicographically after the app you're trying to override (precedence by app name is reverse lexicographic order, not forward lexicographic order); 2) your app needs to export the corresponding settings; 3) even with all that done, settings from other apps will still lose if searches are launched from within the Mimecast app, because the current app's settings get highest precedence for the same stanza. (That may or may not be an issue; I'm not familiar enough with the app to say, but it is a potential edge case.)

Here's where we need to address something else: Splunk Cloud doesn't let you upload an app with a local folder. However, calculated fields and field aliases are both editable via the UI, so you could actually create a local override within the context of the original TA itself via the UI, and those would win (since the merging between default and local within an app happens before user/app context resolution). No additional app is necessarily needed (that gets into a long term management discussion).

@richgalloway already provided an alternative solution, since within props.conf there is additional merging between stanzas for sourcetypes, hosts, and sources... But of course you need to be careful with those, since you could affect other sourcetypes too. From the props.conf.spec:

**[<spec>] stanza precedence:**
For settings that are specified in multiple categories of matching [<spec>]
stanzas, [host::<host>] settings override [<sourcetype>] settings.
Additionally, [source::<source>] settings override both [host::<host>]
and [<sourcetype>] settings.

In either case, once settings are in place on the search head, they need to be replicated to your indexers as part of the knowledge bundle before they can take effect during a search... so if you're already over the 3GB limit, you'll need to spend some time trimming the bundle size.

Viewing the resolved search time precedence can be done per stanza with the properties REST endpoint on the search head, and/or with the btool command (make sure to specify the appropriate `--app` and `--user` context for correct resolution order of search time values). And before you say "but btool is an Enterprise-only command"... I may have brought it to Cloud as an SPL command, along with a knowledge bundle utility, in Admin's Little Helper for Splunk, my officially unsupported, but I think useful, side project. Check it out on Splunkbase: https://splunkbase.splunk.com/app/6368 </shameless plug>

Hope these notes help you and others in the future.
01-18-2024
08:38 PM
That blog post is announcing a feature being released in Cloud version 9.0.2305. Cloud release numbers and Enterprise release numbers aren't really directly comparable (features tend to land in Cloud first and then get released in later Enterprise releases). That said, if you go to the What's New in Dashboard Studio doc for Enterprise 9.1.2: https://docs.splunk.com/Documentation/Splunk/9.1.2/DashStudio/WhatNew you'll notice that the table looks very similar to the table underneath the header "What's new in Splunk Cloud 9.0.2303" (the previous cloud release) in the link from the blog post: https://docs.splunk.com/Documentation/SplunkCloud/9.0.2305/DashStudio/WhatNewSC So from that I'd suspect the feature you're looking for, which came out in a later version of Cloud, would likely come out in a later version of Enterprise... hopefully 9.2.x, but of course it could always be later.
01-18-2024
08:23 PM
Are you certain you had the :8089 as part of your curl url? AND that you used the correct url? The redirection response you've provided is identical to the one that Splunk Web (i.e. port 443, or no port specified with HTTPS) would give in response to a request for /servicesNS/nobody/search/data/indexes (which would be the Enterprise API url instead of the cluster_blaster one you state in your post). Deliberately omitting the :8089 from the cluster_blaster_indexes request against my classic stack, I get the following:
$ curl https://redacted.splunkcloud.com/services/cluster_blaster_indexes/sh_indexes_manager?output_mode=json
<!doctype html><html><head><meta http-equiv="content-type" content="text/html; charset=UTF-8"><meta http-equiv="refresh" content="1;url=https://redacted.splunkcloud.com/en-US/services/cluster_blaster_indexes/sh_indexes_manager?output_mode=json"><title>303 See Other</title></head><body><h1>See Other</h1><p>The resource has moved temporarily <a href="https://redacted.splunkcloud.com/en-US/services/cluster_blaster_indexes/sh_indexes_manager?output_mode=json">here</a>.</p></body></html>
01-18-2024
08:00 PM
1 Karma
So the first screenshot you have is actually within the Universal Forwarder app... Assuming that the app wasn't recreated by Splunk's automation, that you or one of your fellow admins didn't set the app to invisible, and that someone didn't simply remove permissions from the app, I think logging a support case would be your best course of action.
01-12-2024
01:44 PM
Hi there! Your friendly neighborhood Splunk Teddy Bear, just stopping in to let you know that if you're using my Admin's Little Helper app, you may want to update to the v1.2.0 version that passed cloud vetting just last Friday.

What's going on is that in the next version of Splunk Cloud (tentatively Feb/March 2024 or so) there's a change happening around how distributed search works. Unfortunately that change, combined with how I'm checking capabilities in existing versions, means that if you're using any older version of sa-littlehelper on that new version of Splunk Cloud, the `| btool` command will work for your search head, but will not return any results from your indexers (search peers), and will instead give error messages about needing the correct capability. This v1.2.0 release fixes that issue on the upcoming Splunk Cloud version while still working on all currently supported versions of Splunk Enterprise & Splunk Cloud, so I wanted to get the word out that you should update and save your future self some headache (with what I view to be core functionality).

I've also posted variations of this notice on the #splunk_cloud channel on splunk-usergroups, the Splunk subreddit, and the GoSplunk discord. If there are other places you think I should post it, let me know 🙂

As mentioned on the contact page on Splunkbase: While this app is not formally supported, the developer (me) can be reached at teddybear@splunk.com OR in splunk-usergroups slack, @teddybfez. Responses are made on a best effort basis. Feedback is always welcome and appreciated!
Learn more about splunk-usergroups slack
Admin's Little Helper for Splunk
Labels: upgrade
05-13-2023
11:44 AM
1 Karma
I highly recommend @alacercogitatus' perennial .conf talk "Lesser Known Search Commands", because this is where I learned the {} trick with eval. It looks like you want to create columns based on the value of the Shift column... which eval can do with the trick above, and then you can combine rows based on the Order and Date parameters with stats: ...
| eval SH_{Shift}=Count
| stats values(SH_*) as * by Order Date
Alternatively, assuming that your given data is in fact the output of a stats command, another option would be, instead of doing that stats, to combine Order and Date into a single key with a delimiter, chart the count of your data by Order_Date and Shift, and then use eval / rex / others to split Order and Date back out into separate columns by the delimiter. I'll mostly leave that option as an exercise to the reader, but there's a rough starting point sketched below.
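For that starting point, a rough, untested sketch (assuming a "|" delimiter, the same Count field as above, and the field names from your sample) could look like:
...
| eval Order_Date=Order."|".Date
| chart values(Count) over Order_Date by Shift
| rex field=Order_Date "(?<Order>[^|]+)\|(?<Date>.+)"
| fields - Order_Date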
05-12-2023
08:33 AM
2 Karma
The general concept of a backreference is the same between the two. Splunk's rex/regex processing, both at ingestion and during a search, is powered by the Perl Compatible Regular Expressions (PCRE) library. There are syntactic and execution differences between PCRE and GNU sed's regular expressions, but other forums and sites would be more appropriate for detailing those exact differences. I think I have also heard that there may be some (recently released / soon coming) products/features that leverage the RE2 regex library instead (partly from being Golang based, and also for more control over predictability in the time/complexity bounds of execution). Of course RE2 has its own set of nuanced differences from PCRE and GNU sed, but the general concepts are similar, and it too can be tested on regex101.com (pick the Golang flavor instead of the default PCRE).
05-10-2023
05:19 PM
1 Karma
If you watch @alacercogitatus' perennial .conf talk "Lesser Known Search Commands", another way to achieve this is to use eval to create fields named for other field values. For example:
| rex ...
| eval JS_{job_status} = 1
| timechart count(JS_*) as * by job_name
Of course I'm assuming there aren't many potential values of job_status, or else, oof, that could be a bit brutal for the number of fields... and you can use this trick with any other statistical function here as well...
05-09-2023
08:25 PM
2 Karma
Capture groups are numbered (for backreferences) based on the order of their opening parentheses... and all capturing groups count, so in your example `(?P<twin>` starts the group that corresponds to \1.

To make things a bit clearer, let's name your other capturing groups in your example as well:
(?P<twin>(?P<teddy>\d)(?P<bear>\d)\2\2)
With this:
\1 would be the value captured by twin
\2 would be the value captured by teddy
\3 would be the value captured by bear
So your given rex would match 1211, 1311, 1411, 1511... and 1111. If you're wanting to match 1122, then you may want to start with something like:
(?P<twin>(\d)\2(\d)\3)
(which of course also matches 1111)... but I'd recommend spending some time with https://regex101.com/ and other sites to help with learning and experimenting with regular expressions.
03-24-2023
09:42 PM
1 Karma
With JSON, in the \u#### encoding the digits are the literal Unicode code point (or the UTF-16 representation of the character). See: https://datatracker.ietf.org/doc/html/rfc8259#section-7
So, for example, a string containing only a single reverse solidus character may be represented as "\u005C".
If it were UTF-8 bytes being escaped, that encoding wouldn't have the leading zeros; \uc3b2 is indeed Hangul Syllable Ssyeobs. The character you're looking for, LATIN SMALL LETTER O WITH GRAVE, is correctly encoded in JSON as \u00f2.
03-19-2023
02:43 PM
You probably want to reach out to the developer of the custom add-on to help with troubleshooting, if they still support it. A quick search for prtglivedata turned up this app: https://splunkbase.splunk.com/app/3282 which is only marked as supported on Splunk 7 (which is of course end of life, and predates the Python 3 transition), but I'm not sure if that's your app or not. These conf file configurations by themselves are slightly odd, since they have both chunked = true and arguments that only make sense for the original Intersplunk protocol. From commands.conf.spec:
chunked = <boolean>
* Whether or not the search command supports the new "chunked" custom search
command protocol.
* If set to "true", this command supports the new "chunked" custom
search command protocol, and only the following commands.conf settings are valid:
* 'is_risky'
* 'maxwait'
* 'maxchunksize'
* 'filename'
* 'command.arg.<N>'
* 'python.version', and
* 'run_in_preview'.
* If set to "false", this command uses the legacy custom search command
protocol supported by Intersplunk.py.
* Default: false
For reference... the "new" protocol came about a really long time ago (the Splunk SDK changes to support chunked were back in 2015, so something like Splunk 6?). You could try looking in the search log (see the job inspector), splunkd log, or python log (index=_internal) to see if any errors or stack traces related to your script are emitted.
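If you do go digging in _internal, a starting point might be something like this (rough sketch; adjust the keyword to match your command or script name, and it may be too narrow if stack traces don't mention it):
index=_internal (source=*splunkd.log* OR source=*python.log*) (ERROR OR Traceback) prtglivedata
| table _time source host _raw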
03-18-2023
12:38 PM
3 Karma
@sshubh People answering posts on here are doing so by donating their time. As such, you should not be preemptively tagging people to help with your question. You'll attract responses by showing what you've tried, what result you got, how you thought it should be different, and a willingness to learn. You posted the exact same question a few days ago and were given an answer. Instead of asking again with no additional information, you could follow up with what isn't working for you or what you're not understanding about that answer.

It may help to break the problem down into steps and check each one. Start with: are your fields extracted from your events properly? Do you get all values of the multi-valued fields as you expected on each event? Are your field names lined up between the bought and sold records so you can correlate them together? Do you have a field marking a result as a bought or a sold transaction already? ITWhisperer did some field manipulation and extraction with eval for this.

Assuming you have the above done correctly, did you know that using makeresults and eval we can actually simulate your example data set with a Splunk search, so anyone can build from it? (Names of fields might be slightly different, but this should be where you are at this stage.)
| makeresults count=8
| streamstats count
| eval AccountName=case(count in(1,2,6),"ABC", count=3,"DEF", true(),"EPF"), TransactionType=if(count<=5,"bought","sold"), BookId=case(count=1,split("book1,book2,book3",","),count in (2,5,7,8),"book1",count=3,split("book1,book2",","),count=4,split("book1,book3",","),count=6,"book2")
| fields - _time count
If you don't have the above done correctly, then anything afterwards isn't going to work, and you should be talking about that problem first. (I'll also note that field names are always case sensitive.)

From this point, ITWhisperer already showed you how stats can group by multiple fields, showed you the trick with eval and curly braces {} to create fields with names based on the values of other fields, and showed running stats multiple times to combine things down. You can use the same tricks in a slightly different order to avoid needing the fillnull command (but it's still useful to know).
| eval T_{TransactionType}=1
| stats count(T_*) as * by AccountName BookId
| stats list(BookId) list(bought) list(sold) by AccountName
I leave the total books calculation as an exercise for you, along with the hint that stats can perform multiple statistical functions in a single pass over multiple different fields of the input data set.
02-27-2023
08:48 AM
If there isn't a boot event (i.e. an event with the words "Linux version" in it) for a particular host in your time window, boot_time will come back as blank... This is the problem I was mentioning: "But the more practical problem you'll run into is the unbounded nature of how far in the past boot time can be... thus requiring this search to become almost an All Time search which doesn't scale well at all."
02-26-2023
07:35 PM
1 Karma
It's actually on the previous releases page, but the sorting seems to be alphabetical rather than by version number, so 8.2.10 appears between 8.2.1 and 8.2.2 instead of after 8.2.9.
02-24-2023
08:05 PM
The key words there are "for this historical scheduled search"... So you're likely looking at a search job that's taking longer than its scheduled period to execute. I'd start by looking at the runtimes of the skipping search you've already found (of course not ruling out something crazy like the job not actually running while the SHC captain thought it was...).
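Something like this against the scheduler logs should show how the runtimes compare to the schedule (adjust the saved search name; field names here assume the standard _internal scheduler events):
index=_internal sourcetype=scheduler savedsearch_name="<your saved search name>"
| stats count avg(run_time) as avg_runtime max(run_time) as max_runtime values(reason) as reasons by status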
02-24-2023
07:19 PM
So for the idea of correlating multiple events together, you can do this in a single pass without a join, e.g.
index=abc sourcetype=foo host=hostabc
| eval boot_time=case(searchmatch("Linux version"),_time)
| stats latest(_time) latest(boot_time) by host
| rename latest(*) -> *
| convert timeformat="%F %T" ctime(_time) as Latest_Event_Time ctime(boot_time) as Boot_Time
| eval delta=_time-boot_time, UP_Time = tostring(delta,"duration")
| fields host Boot_Time Latest_Event_Time UP_Time
But the more practical problem you'll run into is the unbounded nature of how far in the past boot time can be... thus requiring this search to become almost an All Time search, which doesn't scale well at all.

If you can add data sources... instead of relying just on this log, you could have a scripted input that captures the output of `uptime` on a regular basis. But if not, another option may be to maintain a lookup containing the last boot time of each host, and pull that data in at search time instead... that way your search for looking at the latest events can use a much smaller window. Doing this off the top of my head, assuming a KV Store lookup host_boots keyed by host, something like:
index=abc sourcetype=foo host=hostabc
| eval boot_time=case(searchmatch("Linux version"),_time)
| stats latest(_time) latest(boot_time) by host
| rename latest(*) -> *
| lookup host_boots host OUTPUT boot_time AS last_boot
| eval boot_time=coalesce(boot_time,last_boot)
| fields - last_boot
| outputlookup append=t key_field=host host_boots
| convert timeformat="%F %T" ctime(_time) as Latest_Event_Time ctime(boot_time) as Boot_Time
| eval delta=_time-boot_time, UP_Time = tostring(delta,"duration")
| fields host Boot_Time Latest_Event_Time UP_Time
The question then becomes whether you pull back this lookup for unseen hosts or not... and/or whether updating it in this way makes sense (since the _time would get updated as frequently as the boot_time field...) and some other nuances...
01-04-2023
08:55 PM
1 Karma
In your drilldown, in addition to setting the normal timePicker tokens, you also want to set the form.timePicker.earliest / form.timePicker.latest tokens. Here's a quick example that seems to work nicely on my Cloud 9.0.2209.2 stack. Obviously you'd use the earliest and latest drilldown values you were using instead of the row ones I am, but this should give you the idea. (I don't remember where I learned about this trick... I think it was Clara and Niket's .conf talk all about dashboards from a few years ago.)
<form version="1.1">
<label>timepicker_tokens</label>
<fieldset submitButton="false">
<input type="time" token="timePicker">
<default>
<earliest>-24h@h</earliest>
<latest>now</latest>
</default>
</input>
</fieldset>
<row>
<panel>
<table>
<title>Drilldown</title>
<search>
<query>| gentimes start=-7 | eval endtime=endtime+1 | addinfo</query>
<earliest>$timePicker.earliest$</earliest>
<latest>$timePicker.latest$</latest>
</search>
<drilldown>
<set token="timePicker.earliest">$row.starttime$</set>
<set token="timePicker.latest">$row.endtime$</set>
<set token="form.timePicker.earliest">$row.starttime$</set>
<set token="form.timePicker.latest">$row.endtime$</set>
</drilldown>
</table>
</panel>
</row>
</form>
01-03-2023
06:30 AM
1 Karma
So you might need to upgrade MLTK to 5.3.3, or follow the workaround to manually update conf files as listed in the Known issues under 5.3.1: https://docs.splunk.com/Documentation/MLApp/5.3.1/User/Knownissues
01-02-2023
01:01 PM
1 Karma
What version of MLTK are you on? There was a UI bug that's fixed in version 5.3.3: https://docs.splunk.com/Documentation/MLApp/5.3.3/User/Fixedissues Namely: MLA-4256 Save button under Settings is disabled
08-29-2022
08:26 PM
As it comes out of the box, the Splunk Add-on for Cisco ESA has no UI components... If it did, there'd be a default/data/ui folder, which is missing... and the app.conf even states this isn't a visible app:
$ tar tzvf splunk-add-on-for-cisco-esa_160.tgz
drwxr-xr-x 1001/121 0 2022-07-25 01:18 Splunk_TA_cisco-esa/
drwxr-xr-x 1001/121 0 2022-07-25 01:18 Splunk_TA_cisco-esa/LICENSES/
-rw-r--r-- 1001/121 85947 2022-07-25 01:18 Splunk_TA_cisco-esa/LICENSES/LicenseRef-Splunk-8-2021.txt
-rw-r--r-- 1001/121 165 2022-07-25 01:18 Splunk_TA_cisco-esa/README.txt
-rw-r--r-- 1001/121 1916 2022-07-25 01:18 Splunk_TA_cisco-esa/THIRDPARTY
-rw-r--r-- 1001/121 11 2022-07-25 01:18 Splunk_TA_cisco-esa/VERSION
-rw-r--r-- 1001/121 1551 2022-07-25 01:18 Splunk_TA_cisco-esa/app.manifest
drwxr-xr-x 1001/121 0 2022-07-25 01:18 Splunk_TA_cisco-esa/default/
-rw-r--r-- 1001/121 473 2022-07-25 01:18 Splunk_TA_cisco-esa/default/app.conf
-rw-r--r-- 1001/121 4770 2022-07-25 01:18 Splunk_TA_cisco-esa/default/eventtypes.conf
-rw-r--r-- 1001/121 24749 2022-07-25 01:18 Splunk_TA_cisco-esa/default/props.conf
-rw-r--r-- 1001/121 1208 2022-07-25 01:18 Splunk_TA_cisco-esa/default/tags.conf
-rw-r--r-- 1001/121 50633 2022-07-25 01:18 Splunk_TA_cisco-esa/default/transforms.conf
drwxr-xr-x 1001/121 0 2022-07-25 01:18 Splunk_TA_cisco-esa/lookups/
-rw-r--r-- 1001/121 85 2022-07-25 01:18 Splunk_TA_cisco-esa/lookups/cisco_esa_authentication_action_lookup.csv
-rw-r--r-- 1001/121 617 2022-07-25 01:18 Splunk_TA_cisco-esa/lookups/cisco_esa_email_action_lookup.csv
-rw-r--r-- 1001/121 920 2022-07-25 01:18 Splunk_TA_cisco-esa/lookups/cisco_esa_proxy_status_action_lookup.csv
-rw-r--r-- 1001/121 309 2022-07-25 01:18 Splunk_TA_cisco-esa/lookups/cisco_esa_vendor_info_lookup_160.csv
drwxr-xr-x 1001/121 0 2022-07-25 01:18 Splunk_TA_cisco-esa/metadata/
-rw-r--r-- 1001/121 105 2022-07-25 01:18 Splunk_TA_cisco-esa/metadata/default.meta
drwxr-xr-x 1001/121 0 2022-07-25 01:18 Splunk_TA_cisco-esa/static/
-rw-r--r-- 1001/121 3348 2022-07-25 01:18 Splunk_TA_cisco-esa/static/appIcon.png
-rw-r--r-- 1001/121 3348 2022-07-25 01:18 Splunk_TA_cisco-esa/static/appIconAlt.png
-rw-r--r-- 1001/121 6738 2022-07-25 01:18 Splunk_TA_cisco-esa/static/appIconAlt_2x.png
-rw-r--r-- 1001/121 6738 2022-07-25 01:18 Splunk_TA_cisco-esa/static/appIcon_2x.png
$ tar xOzvf splunk-add-on-for-cisco-esa_160.tgz Splunk_TA_cisco-esa/default/app.conf | grep visible
Splunk_TA_cisco-esa/default/app.conf
is_visible = false
(Lots of add-ons don't ship UI pieces, especially if they're only doing non-UI related things like setting up props and transforms, or lookup enrichments.)

Without knowing your stack or this app in depth, I suspect what is actually happening is that someone set the app to visible in your environment (which isn't needed), and more likely than not some other app is globally exporting its nav default.xml with a default view set that isn't shared globally, so when you open the app you try to get to a view that isn't available, hence the 404. (Using Settings > User Interface > Navigation Menus you can see if there's a nav bar visible in this app's context from a different app.) Or there could be other quirks... but this add-on has no UI components out of the box...
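If you'd rather check from a search, a rough equivalent (sketch only; assumes the data/ui/nav REST endpoint behaves on your stack the way it does on mine) would be:
| rest splunk_server=local /servicesNS/-/Splunk_TA_cisco-esa/data/ui/nav
| table title eai:acl.app eai:acl.sharing
which should list any nav menus visible in the Cisco ESA app context and which app each one actually comes from.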
01-21-2022
09:11 PM
No worries, but that distinction doesn't materially change the answer 🙂 | where ENT_CallType=if(* =="*","*",ltrim(*,"VQ_")) is just as syntactically wrong as what I thought your token value was ... Do the same exercise and you get ... " if (multiplication operator) is equal to ... " which doesn't make sense.
01-21-2022
08:49 PM
1 Karma
When you use tokens in dashboards they don't behave like variables, they behave more like #define macros in C... the literal value gets dropped into place where it's used, prior to evaluation. (OK, this is an oversimplification, but when they're in your search strings...)

In the case where your token has the value ALL/* , your where clause reads:
| where ENT_CallType=if(ALL/* =="*","*",ltrim(ALL/*,"VQ_"))
To put that statement into words: where the field ENT_CallType has a value equal to... if the value of the field ALL divided by, multiplied by... and we've stopped making sense, thus we call it a syntax error when we reach the * coming from your token value.

With the value VQ_abc_efg you don't wind up with the same syntax problem, but you do wind up with something that is obviously not what you were intending:
| where ENT_CallType=if(VQ_abc_efg =="*","*",ltrim(VQ_abc_efg ,"VQ_"))
Again turning this into words: I'm looking in my results for a field called ENT_CallType, and seeing if it's equal to * when the field named VQ_abc_efg has the value *, otherwise equal to the value of the field VQ_abc_efg with VQ_ removed from the front of it...

I suspect you want to read up on Syntax to consume tokens, in particular $tokenname|s$, where your token value gets wrapped in double quotes before it gets inserted (which also helps with escaping quotation marks within the token value). There are a few other hints around tokens there too.
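For illustration, if the token were consumed as $calltype_tok|s$ (the token name here is just a placeholder), the VQ_abc_efg case would expand to something like:
| where ENT_CallType=if("VQ_abc_efg"=="*","*",ltrim("VQ_abc_efg","VQ_"))
which at least parses cleanly no matter what the token value is; whether the comparison logic then does what you want is a separate question.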
12-01-2021
08:59 PM
I'd want to try it to be certain, but it sounds like it could be a job for a lookup... with WILDCARD match type defined for some columns: https://community.splunk.com/t5/Splunk-Search/Can-we-use-wildcard-characters-in-a-lookup-table/m-p/94513#
11-21-2021
10:54 PM
1 Karma
Even with useAck disabled, if one side is completely blocked, then Splunk cannot send, so the output queue can back up and eventually cause sending to all outputs to halt. (On a forwarder with useAck true, the forwarder removes the data from its buffer after it gets acknowledgement that the indexer finished processing the data. With useAck false, the forwarder removes the data from the buffer after the indexer successfully receives the data... if the forwarder cannot connect in either case, it fills the buffer.) I think you're looking for something like setting dropEventsOnQueueFull on the third-party output; others may have more experience with tuning this and similar settings. Ref: https://docs.splunk.com/Documentation/Splunk/8.2.3/Admin/Outputsconf
11-20-2021
06:13 AM
1 Karma
Typically, Palo Alto logs aren't ingested as pan:traffic directly, but rather as pan:log (or, older, as pan_log). This gets changed into pan:traffic (and the other pan:* log types) during the transforms step, assuming you have the PAN TA: https://github.com/PaloAltoNetworks/Splunk-Apps/blob/develop/Splunk_TA_paloalto/default/props.conf So you likely need [pan:log] or [pan_log] in your props instead of [pan:traffic], depending on what your inputs look like on your forwarders.

Secondly, you mention this is on your indexers. Are your PAN logs being ingested by Universal Forwarders or Heavy Forwarders? If they are Heavy Forwarders, or you are sending through intermediate Heavy Forwarders, then parsing is already complete by the time the data reaches your indexers, and your props and transforms need to be on a different system (the first HF in the path from your syslog servers to your indexers). Hope this helps