All Posts

It does seem like a bug. Splunk is supposed to calculate length based on the number of characters, not bytes (and the same goes for settings such as TRUNCATE or MAX_TIMESTAMP_LOOKAHEAD, so it might be interesting to see if those are also affected). EDIT: TRUNCATE is actually in bytes. It should get rounded down if the limit would fall in the middle of a multibyte character. MAX_TIMESTAMP_LOOKAHEAD, however, is in characters. Confusing.
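For reference, a minimal props.conf sketch showing where both settings live (the sourcetype name and values are just placeholders):

# props.conf -- hypothetical sourcetype for illustration
[my:multibyte:sourcetype]
# TRUNCATE is measured in bytes; per the note above, a multibyte
# character straddling the limit should be dropped, not split
TRUNCATE = 10000
# MAX_TIMESTAMP_LOOKAHEAD is measured in characters into the event
MAX_TIMESTAMP_LOOKAHEAD = 25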
OK. Processing syslog is not as easy as it seems.
1. There is hardly such a thing as "standard syslog". Yes, there are some RFCs describing the syslog protocol, but in practice many solutions send practically anything on port 514 and consider it "syslog".
2. If the data is properly formatted (RFC3164 or RFC5424), the hostname can (and will, if using properly configured sourcetypes) be parsed out from the event itself. Too bad many environments have - for example - a ton of routers in various locations, each called "gateway".
3. When you're receiving messages directly on a network port with Splunk (either a UF or "full" Splunk), you lose most of the metadata about the source (if properly configured, the input can set the host field to the source IP or hostname, but it can be subsequently overwritten by the value from the event - see the previous point).
So the recommended options for ingesting syslog data into Splunk are to set up an intermediate syslog daemon which either:
1) Forwards to a HEC input on Splunk, adding proper metadata information (this can be done with rsyslog, syslog-ng or SC4S), or
2) Writes to files from which the UF picks up the data and forwards it to Splunk (kinda similar to what you did).
But the 2nd option is best done when writing to separate files for each host (for example with dynamic filename generation based on source IP). Then you have the source file path in the source field and you can parse the original IP or hostname from that - see the sketch below.
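If you go with the 2nd option, a minimal sketch of the UF side, assuming the syslog daemon writes one file per source host (the paths, index and sourcetype are hypothetical):

# inputs.conf on the UF -- hypothetical layout /var/log/remote/<host>/messages
[monitor:///var/log/remote/*/messages]
sourcetype = syslog
index = network
# take the 4th path segment (the <host> directory) as the host field
host_segment = 4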
This has helped with my query, thank you!
Hello @Dietrich.Meier, sorry to bother you here - I'm curious to understand whether the Synthetic Monitoring capability available from AppDynamics can check the SSL certificate expiration date out of the box (I know ThousandEyes can do it). Are you aware of it? Or did you have to develop an entire extension just for this purpose? Regards
For my dashboard, I am using the following search. Although the current date is displayed at the end of the dashboard and the oldest date is displayed at the top, I require the date format to be mm-dd-yy only. My dashboard should show the most recent date at the top. Give me your finest recommendations, please. | eval date=strftime(_time, "%m-%d-%y") | stats count by date
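One possible sketch of a fix (field names taken from the question; sorting on a parsed epoch value avoids string-sorting the mm-dd-yy text, which would put 01-... before 12-... regardless of year):

| eval date=strftime(_time, "%m-%d-%y")
| stats count by date
| eval sortkey=strptime(date, "%m-%d-%y")
| sort 0 - sortkey
| fields - sortkey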
@ITWhisperer Double checked, and it was indeed an issue with the permissions of the lookup transform. A big thank you from me for helping!
One additional remark about your searches (both the outer one and the subsearch) - don't use wildcards at the beginning of your search term if you can avoid it. A leading wildcard can't be matched against the indexed terms, so Splunk has to scan far more events.
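For example (hypothetical index and term), the first form can use the indexed terms while the second cannot:

index=web error*     (can be matched against indexed terms)
index=web *error     (forces a much more expensive scan)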
"i understand that there is way to send particular field/value to a nullqueue" You understand wrong, I'm afraid. As @richgalloway pointed out - you can send _whole events_ to nullQueue if they matc... See more...
"i understand that there is way to send particular field/value to a nullqueue" You understand wrong, I'm afraid. As @richgalloway pointed out - you can send _whole events_ to nullQueue if they match certain regex (or other criteria in case you use INGEST_EVAL). You can use transforms to cut specific parts of the events with regexes. But in the ingest pipeline Splunk has no knowledge about the search-time fields (the ones created with REPORT or EXTRACT entries as well as calculated fields or field aliases). It only knows the index-time fields (the default metadata ones and custom index-time extractions if any are defined). So if you want to trim your events you'd have to manipulate them with regexes. But since your events are structured, it'd be probably better to process your events before ingesting them into Splunk with something that can interpret XML and can selectively filter it based on XML structure, not plain regexes.
There are no miracles. Something must have been changed by someone.
1. Use btool to see where the settings come from (see the example below)
2. Check the config tracker to see when changes were made to your environment. https://www.splunk.com/en_us/blog/platform/splunking-your-conf-files-how-to-audit-configuration-changes-like-a-boss.html?locale=en_us
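For the first step, a typical btool invocation (the conf file and stanza name are just examples):

# show the effective props.conf settings and the file each one comes from
splunk btool props list my:sourcetype --debug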
It seems to be no different from the other working lookups in the same application. What else can I check, other than the lookup's application permissions?
Is the lookup visible to the user you are authenticating with for the API call?
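One way to check (hypothetical host and credentials; this lists the lookup table files visible to the authenticated user across all apps):

curl -k -u apiuser:password \
  https://splunk.example.com:8089/servicesNS/-/-/data/lookup-table-files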
That is a different question, given that you don't appear to have any dates in your dataset.
That works, although it makes little difference to the pie chart.
Hello all, I use the Splunk API in order to export the results of an SPL search. All queries work well on my local dev environment and most work on the production server. All queries that include or read from a certain lookup (let's call it "SessionEntities") seem to return empty. For instance the query "| inputlookup SessionEntities" returns empty. The same query works locally and, even stranger, works on the Splunk search page on the same server, while the same query with a different lookup returns results. That lookup is no different from the others (no bigger content size), but still. Does anyone have an idea of why this could be happening?
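For context, the export call in question looks something like this (host and credentials are hypothetical):

curl -k -u apiuser:password \
  https://splunk.example.com:8089/services/search/jobs/export \
  --data-urlencode search="| inputlookup SessionEntities" \
  -d output_mode=json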
I'm trying to create a new app in Splunk Add-on Builder. This error is thrown whenever I load the app's inputs or configuration page: "Internal configuration file error. Something wrong within the package or installation step. Contact your administrator for support. Detail: Error: duplicate l keys is not allowed at appendError."
What I mean by "revert certain application objects back to old settings" is that we noticed that 4 applications were reverted to the old version, and the objects affected are dashboards, saved searches, search files, props.conf and transforms.conf. For example: we configured an alert to run every 2 minutes, then changed it to run every 5 minutes, and after the issue it was switched back to running every 2 minutes. The last bundle was pushed from the deployer server on November 7th and the issue occurred on November 10th.
Hi all, I am new to Splunk and would appreciate some community wisdom. We are trying to get data from an external AWS S3 bucket (hosted and managed by a 3rd party supplier) onto our internal enterprise Splunk instance. We do not have any AWS accounts. We have considered whitelisting but it is not secure enough. The supplier does not use AWS Firehose. Any ideas?
Hi, what exactly do you mean by "the objects of specific Apps reverted back to old settings"? Old lookups are in use, old KOs are in use after you apply shcluster-bundle on the deployer? If that's the case, then you should read and check this: Choose a deployer push mode [shclustering] - see the sketch below. r. Ismo
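The push mode is set per app, in that app's app.conf on the deployer; a minimal sketch (merge_to_default is just one of the possible values):

# app.conf of the app being pushed, on the deployer
[shclustering]
deployer_push_mode = merge_to_default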
Hi, a couple of additional comments: When you are indexing data, REPORT- isn't executed; it (like EXTRACT-) works only at search time. When you have several TRANSFORMS- entries on their own lines, those are applied in ASCII order of their names! If you want to apply them in a specific order, put them in one entry like TRANSFORMS-xyz = a, e, c, b, d, or ensure that the names sort in the correct order (use e.g. 000x, 001y, 002a, etc.) - see the sketch below. One good set of instructions for the index phase: https://www.aplura.com/assets/pdf/props_conf_order.pdf. Aplura has some other cheat sheets which help. r. Ismo
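A minimal sketch of the difference (the transform names are hypothetical):

# props.conf
[my:sourcetype]
# separate entries run in ASCII order of the entry names, so
# TRANSFORMS-a_second actually runs before TRANSFORMS-b_first!
TRANSFORMS-b_first = set_host
TRANSFORMS-a_second = mask_secrets
# a single entry runs its transforms left to right, exactly as listed
TRANSFORMS-ordered = set_host, mask_secrets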
You should look at the CSS and create your own to override those properties.