All Posts


The field (not "column") returned by the subsearch ("unique_id" in Version 1 and "ignore" in Version 2) must exist in the main search.
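Following that logic, a minimal hedged sketch of the fix for Version 2 (field and lookup names are taken from the question elsewhere on this page; untested): rename the subsearch field so it matches the field in the main search:

| search NOT [| inputlookup pm-azlm-aufschneidmelder-j | strcat ocp "_" fr "_" sec ignore | table ignore | rename ignore AS unique_id]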
How does Splunk call the coldToFrozen.py script automatically once the script is set up in /opt/splunk/bin and indexes.conf is configured with the needed arguments? Once cold_db is full, how does Splunk invoke this script?
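For reference, a minimal hedged indexes.conf sketch (the index name and retention value are assumptions): Splunk itself invokes the configured script whenever a cold bucket is frozen, either because the bucket is older than frozenTimePeriodInSecs or because the index has exceeded maxTotalDataSizeMB, and it passes the bucket path to the script as an argument.

# indexes.conf (illustrative)
[my_index]
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/coldToFrozen.py"
frozenTimePeriodInSecs = 7776000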
https://docs.splunk.com/Documentation/Splunk/Latest/Admin/Propsconf#Field_extraction_configuration

SEDCMD-<class> = <sed script>
* Only used at index time.
* Commonly used to anonymize incoming data at index time, such as credit card or social security numbers. For more information, search the online documentation for "anonymize data."
* Used to specify a sed script which Splunk software applies to the _raw field.
* A sed script is a space-separated list of sed commands. Currently the following subset of sed commands is supported: replace (s) and character substitution (y).
* Syntax:
  * replace - s/regex/replacement/flags
    * regex is a perl regular expression (optionally containing capturing groups).
    * replacement is a string to replace the regex match. Use \n for back references, where "n" is a single digit.
    * flags can be either: g to replace all matches, or a number to replace a specified match.
  * substitute - y/string1/string2/
    * substitutes the string1[i] with string2[i]
* No default.
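As a concrete illustration of the spec above, a hedged props.conf sketch (the sourcetype name and the ssn= field format are assumptions; the stanza must live on the indexing tier, i.e. an indexer or heavy forwarder) that masks the first five digits of a social security number at index time:

# props.conf (illustrative)
[my_sourcetype]
SEDCMD-mask_ssn = s/ssn=\d{5}(\d{4})/ssn=xxxxx\1/g

The capturing group keeps the last four digits and the g flag replaces every match in _raw.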
Dear experts,
Basic idea of what I am trying to do: the results of a search should be filtered so that only data points are displayed which are not part of a "Blacklist" maintained as a lookup table. The challenging thing is that there are 3 fields to be taken into account for filtering at the same time. After a lot of trials, I ended up creating a key from the 3 fields (which is unique) and then filtering on the key. It is working, I just don't understand why :-(.
Question: Does anybody have an idea why the Version 1 filter works, and why the Version 2 filter fails?
Question: What needs to be changed to get Version 2 to work as well?

index="pm-azlm_internal_prod_events" sourcetype="azlmj"
| strcat ocp "_" fr "_" el unique_id
| table _time ocp fr el unique_id d_1
| search d_1="DEF ges AZ*"
``` VERSION 1: the working one ```
``` As long as the subsearch returns a table with the field unique_id, ```
``` which is exactly the name of the field I want to filter on, all works great. ```
| search NOT [| inputlookup pm-azlm-aufschneidmelder-j | strcat ocp "_" fr "_" sec unique_id | table unique_id]
``` VERSION 2: NOT working ```
``` As soon as I change the name of the field in the subsearch, the filter won't work anymore. ```
| search NOT [| inputlookup pm-azlm-aufschneidmelder-j | strcat ocp "_" fr "_" sec ignore | table ignore]
| timechart span=1d limit=0 count by unique_id

And the final question: is there a way to do such filtering without going through the key creation? Thank you in advance.
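On the final question, a hedged sketch of filtering on several fields without the key (field and lookup names are taken from the post above; untested): a subsearch that returns multiple fields is expanded into a combined field1=... AND field2=... condition per lookup row, so renaming the lookup fields to match the event fields makes the NOT filter work directly:

| search NOT [| inputlookup pm-azlm-aufschneidmelder-j | rename sec AS el | fields ocp fr el]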
Hello, I would like to confirm if it is possible to upgrade Splunk directly from version 9.1.1 to 9.3 on Linux, without going through version 9.2. Could you please clarify if this is supported and if there are any specific considerations for this process? Best regards,
Start with the DMC (Distributed Monitoring Console) to review the license usage broken down by index. This will show you the daily ingest records for the last 30 days broken down by index. This is only a starting point: depending on how your environment was set up, you may have very specific indexes, or things may have been aggregated into only a few indexes. From there you can start deciding what questions come next.
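If you prefer SPL, a hedged sketch of the kind of search behind those panels (run it on the license manager; assumes default _internal retention covers the window you need; b is licensed bytes and idx is the index name in license_usage.log):

index=_internal source=*license_usage.log type=Usage
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) AS daily_GB by idx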
How to integrate Microsoft Intune with Splunk using the connector downloaded from Splunkbase?
Hi @gcusello,
In Splunk we monitor devices or hosts and we get logs from them. What I need to know is how much storage (in GB) has been utilised by those hosts or log sources. Where does Splunk store such data in the case of an on-premise instance?
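A hedged sketch of one way to answer this: the indexed volume per host is recorded in license_usage.log on the license manager (the field names b and h come from that log; note that hosts can be "squashed" into aggregates when there are very many distinct values):

index=_internal source=*license_usage.log type=Usage
| eval GB=b/1024/1024/1024
| stats sum(GB) AS GB by h
| sort - GB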
Hi @arjun,
what's your requirement: to know the volume for each customer, or something else?
Could you better describe your environment and your situation? E.g.: do you have a multi-tenant environment or not?
Ciao.
Giuseppe
How can we locate usage-related data in Splunk? I have an on-premise Splunk instance and am looking for usage and billing data grouped by day. I am not able to locate the data in any index.
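A hedged pointer, assuming a default on-premise setup: daily license usage is not kept in a user-visible billing index but in the _internal index on the license manager, so a sketch like this may be what you are after:

index=_internal source=*license_usage.log type=Usage
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) AS daily_GB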
How does Splunk calculate the health score of a service based on KPIs? Does it use an AI model or a weightage formula for the health score?
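For intuition, a hedged worked example of the weightage idea (an illustration of a weighted average over KPI severity scores, not the exact ITSI algorithm; the ITSI documentation describes the authoritative calculation, including special handling for the highest importance level): with three KPIs of importance 8, 5 and 2 and severity scores 100, 50 and 100,

health = (8*100 + 5*50 + 2*100) / (8 + 5 + 2) = 1250 / 15 ≈ 83.3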
Is this a props.conf/transforms.conf setting or an SPL command? The goal is to remove/alter the field before it enters Splunk.
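If the goal is to alter data before it is indexed, that is a props.conf/transforms.conf configuration applied on the indexing tier (indexer or heavy forwarder), not an SPL command. A minimal hedged sketch, assuming a sourcetype named my_sourcetype and a password=... value to mask in _raw:

# props.conf (illustrative)
[my_sourcetype]
TRANSFORMS-mask = mask_password

# transforms.conf (illustrative)
[mask_password]
REGEX = (.*)password=\S+(.*)
FORMAT = $1password=#####$2
DEST_KEY = _raw

SEDCMD (see the props.conf excerpt quoted elsewhere on this page) is a lighter-weight alternative for simple substitutions.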
Charts have numeric scales for the y-axis, except for things like bubble charts, but even then the values are numeric, so it is unlikely that you can get a chart like the one you proposed. What are you trying to show? (There may be alternative ways of representing the data.)
Hi all,
I have two fields, eventfield2 and eventfield3, with values of eventfield3 = LHCP, RHCP, LHCP and values of eventfield2 = RHCP, RHCP, LHCP. I would like a result like the one shown.
Thanks for your time in advance.
Up. A week ago I tried to enable DEBUG logging to find the root cause, but I found only similar events, without anything helpful for finding the root cause.
@doeh - You don't need to ingest the logs; just modify the lookup directly, but with the help of REST endpoints instead of modifying the file. The document below has the methods you can use.
https://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTknowledge#data.2Flookup-table-files.2F.7Bname.7D
I cannot tell what change happened after the upgrade, but what I can certainly tell you is that direct file modification is not a recommended practice, and it will not work in a Search Head Cluster for sure. So it's a good idea to switch to the better approach.
I hope this helps! Kindly upvote if it does!!!
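A hedged sketch of what a call to that endpoint can look like (host, credentials, app and lookup name are assumptions; per the REST reference, eai:data must point to a file already staged on the search head):

# illustrative only
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/search/data/lookup-table-files/blacklist.csv \
  -d eai:data=/opt/splunk/var/run/splunk/lookup_tmp/blacklist.csv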
I just found it in a few files:
./apps/splunk_monitoring_console/lookups/hwf-list.csv
./apps/splunk_monitoring_console/lookups/dmc_forwarder_assets.csv
./apps/splunk_monitoring_console/lookups/dmc_forwarder_assets.csv.c
I haven't removed them yet. The fact is, we are going to rebuild the DMC/LM in a matter of weeks and will see if these errors appear again, but I think they won't. Until now it doesn't seem to matter; it all works great.
grts jari
Hi @adoumbia,
as @ITWhisperer said, it's really difficult to help you without knowing the events to apply the search to.
Anyway, if you need a brute force attack sample search, you can look in the Splunk Security Essentials App ( https://splunkbase.splunk.com/app/3435 ), where you can find what you're searching for and many other security use cases.
Ciao.
Giuseppe
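For orientation, a hedged sketch of the usual shape of such a search (the index, field names and threshold are assumptions that depend entirely on your data):

index=auth_logs action=failure
| stats count AS failures values(user) AS users by src
| where failures > 10 ``` threshold is an assumption; tune it to your environment ```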
I tried the suggestions above. The SPL against the _internal index doesn't show modifications to dashboards. The SPL against the _audit index does, but it shows a numeric ID for the user which I believe to be unrelated to the actual user. I say this because this same ID is responsible for 99% of action=modify events across the platform, so I would presume it to be the Splunk system user.
It is the size of the evtx files on disk. I have confirmed I have not reached the limit. The size after indexing is much lower than the size on disk, as it is not loading all the files.