All Posts


Hi everyone, I'm working on a dashboard that compares two different applications. One of the tables shows their performance through different metrics side by side, as such: "Avg Time App1" | "Avg Time App2" | "Max Time App1" | "Max Time App2" | ... Additionally, each row of the table represents a different date, so my team and I can check their performance over an arbitrary time interval.

My idea was to color a certain cell based on its value compared to the equivalent value of the other app. So, for example, let's say "Avg Time App1" = 5.0 and "Avg Time App2" = 8.0 on day X (an arbitrary row). My idea is to highlight the cell for "Avg Time App2" on day X, as its value is bigger than for App1.

I'm aware I can color cells dynamically with the `<format>` block, by setting `type="color"` and the `field` to whatever my field is. But I wanted to know how I can do this per row (meaning that even if the cell on the first row of column X is highlighted, the next rows won't necessarily be) and based on a comparison with another cell, from another column, on the same row.

One other detail is that the names of my columns contain a token. So a somewhat related problem I've been having is accessing the value from the cells, because, to my understanding, it would turn out as something of the sort: $row."Avg Time $app1$"$. So if someone could help me implement this conditional coloring idea, I would be very grateful.

Thanks in advance,
Pedro
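P.S. For reference, the kind of `<format>` block I'm referring to looks roughly like this (a simplified sketch with a fixed threshold; the expression only sees the value of its own cell, which is exactly my limitation):

<table>
  <search>
    <query>... | table "Avg Time App1" "Avg Time App2"</query>
  </search>
  <!-- colors "Avg Time App2" cells against a fixed threshold, not against "Avg Time App1" on the same row -->
  <format type="color" field="Avg Time App2">
    <colorPalette type="expression">if(value > 7.0, "#DC4E41", "#53A051")</colorPalette>
  </format>
</table>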
Hi @tech_g706,

Sometimes the issue is with MongoDB as well. Please check the following; it will be helpful for further troubleshooting.

MongoDB status:
ps -ef | grep -i mongod

If we are not getting any output, it means the KV store is not running. Check the logs below and try to find any clue in them:
cat $SPLUNK_HOME/var/log/splunk/kvstore.log
cat $SPLUNK_HOME/var/log/splunk/mongod.log
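You can also ask the Splunk CLI for the KV store state directly (a quick sketch, assuming a default $SPLUNK_HOME and admin credentials):

# Reports the KV store / mongod status as splunkd sees it
$SPLUNK_HOME/bin/splunk show kvstore-status -auth admin:<yourpassword>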
Hi @mchoudhary

The reason for the warning you are seeing is that the search is scheduled to run more frequently than the time it takes to complete, e.g. 5 minutes into the 8-minute runtime it wants to be running again. There are two approaches to take here, which vary depending on your setup and stack load.

Basically you want that search to take less than 5 minutes to run. Given that it's an ES out-of-the-box search it shouldn't need modifying (and doing so could end up with incorrect results). One thing I would want you to check, though, is what the `notable` macro expands to. If you find the macro through the UI (Settings -> Advanced Search -> Search macros), can you see the definition for the "notable" macro? Has it been modified at all?

The other thing that could be impacting the time it takes for the search to run is the load on the service. What does your architecture look like? Is your deployment based on the recommended reference hardware spec? (Minimum 16 physical cores and 32GB RAM for the search heads.) Check out https://help.splunk.com/en/splunk-enterprise-security-8/install/8.1/planning/minimum-specifications-for-a-production-deployment for more information. Are there any other long-running searches which could be contributing to the load on the stack and thus slowing this search down?

In other words, it's going to take some digging. Sometimes the search that takes longer to run isn't the problematic search; it's just starved of resources and can't run faster!

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
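If you prefer the CLI over the UI, btool will show the effective definition of the macro and which file it comes from (run on the ES search head; a default $SPLUNK_HOME is assumed):

# Shows the [notable] stanza from macros.conf and the file each setting is read from
$SPLUNK_HOME/bin/splunk cmd btool macros list notable --debug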
Hi @dbloms, glad to hear it. Happy Splunking!
P.S.: Karma points are appreciated by me and the other contributors.
Hi Team,

I have been getting a skipped search notification in my CMC overview under Health for quite some time. It is a scheduled report.

Search name: ESS - Notable Events
Cron: every 5 mins (1-59/5 * * * *)
Time range: earliest - 48d@d ; latest - +0s (now)
Message: The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached

Search query:
`notable` | search NOT `suppression` | eval timeDiff_type=case(_time>=relative_time(now(), "-24h@h"),"current", 1=1, "historical") | expandtoken rule_title | table _time,event_id,security_domain,urgency,rule_name,rule_title,src,dest,src_user,user,dvc,status,status_group,owner,timeDiff_type,governance,control | outputlookup es_notable_events | stats count

It is writing the output to an output lookup and takes around 8 mins of runtime when checked under job management. Can someone help me understand where the issue lies and what's making this search in particular skip? The percentage skipped is around 50% and the status is critical.
We are building an iOS app that uses URLSession for making network requests. AppDynamics does not collect any of the traffic built with async/await; for the requests that use the traditional way (completion handlers), AppDynamics still collects them properly.
Yes, you are right. The admin password was changed; now it starts without problems. Many thanks to you, you saved my day!
In my local environment I am using the EST time zone only.
Hi @amanthri,

Disabling the KO is the safest option. From your savedsearches.conf:

[<name_of_your_saved_search>]
disabled = 1

You can place this in the local directory (local/savedsearches.conf); it will effectively override the default. This works for the other KOs too.
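To confirm the override is being picked up (a sketch, assuming a default $SPLUNK_HOME; substitute your actual saved search name):

# Shows the winning value of "disabled" and which savedsearches.conf it comes from
$SPLUNK_HOME/bin/splunk cmd btool savedsearches list "<name_of_your_saved_search>" --debug | grep -i disabled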
Hi @bigchungusfan55,

Please do the following checks to find and fix the issue:
1. Check and confirm that the KV store collection exists.
2. Check where the KV store lookup is defined.
3. Check the permissions for the lookup.
4. Check the lookup definition and collection configuration.
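A couple of quick searches that help with checks 1, 2 and 4 (a sketch; replace <your_lookup_definition> with your actual lookup name):

List the KV store collections you can see, plus the app and sharing level they live in:
| rest /servicesNS/-/-/storage/collections/config | table title eai:acl.app eai:acl.sharing

Confirm the lookup definition resolves and returns rows:
| inputlookup <your_lookup_definition>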
The error indicates the automation can't authenticate against 127.0.0.1:8089. Accept the license and try with the below:

docker run -d \
  --name splunk \
  -e SPLUNK_START_ARGS="--accept-license" \
  -e SPLUNK_PASSWORD="yourpassword" \
  splunk/splunk:latest
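Once the container is running you can sanity-check it like this (a sketch; paths are the ones used by the splunk/splunk image):

# Follow the container logs while Splunk initializes (Ctrl+C to stop following)
docker logs -f splunk
# Confirm splunkd is up inside the container
docker exec splunk /opt/splunk/bin/splunk status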
Hi @Sahansral

Have you set any local user-prefs.conf changes? Please could you run the following:

$SPLUNK_HOME/bin/splunk cmd btool user-prefs list --debug

In the [general] stanza do you have a "lang" setting? There is no default so it could be missing, but if it's there it should be a valid lang value (e.g. "de-DE" not "de"):

lang = <string>
* Specifies the per-user language preference for non-web ui operations, where multiple tags are separated by commas.
* If unset, English "en-US" is used when required.
* Only tags used in the "Accept-Language" HTTP header are allowed, such as "en-US" or "fr-FR".
* Fuzzy matching is supported, where "en" will match "en-US".
* Optional quality settings are supported, such as "en-US,en;q=0.8,fr;q=0.6"
* No default.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
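For reference, a corrected stanza would look something like this (a sketch; I am assuming the German locale and the per-user location $SPLUNK_HOME/etc/users/<username>/user-prefs/local/user-prefs.conf):

# user-prefs.conf
[general]
lang = de-DE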
Thanks livehybrid, the first option worked perfectly. I only wanted the field to be sanitized at search time and the first option does that. Cheers.
@Raja_Selvaraj

DATETIME_CONFIG = CURRENT should work normally, but I have observed a few times that with a monitor input Splunk still scans the event content or file metadata (modtime) to determine _time, even if DATETIME_CONFIG = CURRENT is set. Are you using a monitor input?

Alternatively you can apply the modification at search time, e.g.:
BASE_SEARCH | eval _time = now()

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
Hello Community,

When we try to open a link to a Splunk URL without a language setting, e.g. via the "Show results" link in an email alert like
https://our-splunk-address/app/some-app/alert?s=some-alert
the request gets redirected automatically to something like
https://our-splunk-address/de/app/some-app/alert?s=some-alert
which does not work. The URL should be
https://our-splunk-address/de-DE/app/some-app/alert?s=some-alert
(see Configure user language and locale | Splunk Docs).

This incorrect redirect only happens in our production environment, and only if the language setting of the browser is set to German. English works fine (the redirect is .../en-GB/...). We tested different browsers (Edge, Firefox) with the same results. Our test environment uses the same browsers, redirects correctly, and we can't fathom any configuration differences between test and production that could explain this behaviour.

Did you experience a similar phenomenon, or can you give me a hint where I can look for further clues?

Regards,
Jens
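For anyone trying to reproduce this, the redirect can also be checked outside the browser (a sketch using the same placeholder hostname as above):

# Sends a German Accept-Language header and prints the redirect target
curl -sI -H "Accept-Language: de" "https://our-splunk-address/app/some-app/alert?s=some-alert" | grep -i location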
Hi @Raja_Selvaraj

Can you confirm which server(s) you have put the DATETIME_CONFIG = CURRENT on, and what type of instance this is (Universal Forwarder / Heavy Forwarder / Indexer)? This needs to be on the first full Splunk instance (HF/Indexer) that the data hits, as this is where it is parsed.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
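For reference, the stanza itself is tiny; the key part is where it lives (a sketch, the sourcetype name is a placeholder; restart splunkd after deploying, and note that only newly indexed events are affected):

# props.conf on the first HF or on the indexers
[<your_sourcetype>]
DATETIME_CONFIG = CURRENT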
Hi @Splunkie

Do you want this to affect the raw data (e.g. when it is indexed), or do you want the original string to remain in the data but also have a field which has it without the suffix?

You could do the following at search time:
| rex field=Username_Field mode=sed "s/ sophos_event_input$//"
(See https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Rex)

Alternatively you could use a replace function:
| eval cleaned_Username=replace(Username_Field," sophos_event_input","")

You could also make this an automatic calculated field so that you don't need to include it in your SPL; see the props.conf sketch at the end of this post.

If you want this to be replaced in the _raw event at index time then you need to deploy a props.conf file within a custom app to your HF or Indexers (whichever the data lands on first) with something like this:
# props.conf
[yourSourcetype]
SEDCMD-removeSophosSuffix = s/ sophos_event_input//g

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
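For the calculated-field option, a minimal sketch (sourcetype and field names are the placeholders used in this thread; calculated fields are applied at search time, so this goes on the search head):

# props.conf on the search head
[yourSourcetype]
EVAL-cleaned_Username = replace(Username_Field, " sophos_event_input", "")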
Hi @Splunkie,

Do you want to do this at index time (recording the modified events) or at search time (only in visualization)?

If at search time, you can use a regex in your searches like the following:
| rex mode=sed "s/sophos_event_input/ /g"

If at index time, you should put this in props.conf (note that the SEDCMD key needs a class name suffix):
[<your_sourcetype>]
SEDCMD-remove_sophos_suffix = s/sophos_event_input/ /g

This conf file must be located on the first full Splunk instance the data passes through, in other words on the first Heavy Forwarder (if present) or otherwise on the Indexers.

Ciao.
Giuseppe
I am trying to clean up a field which has a suffix of sophos_event_input after the username.

Example:
Username_Field
Joe-Smith, Adams sophos_event_input
Jane-Doe, Smith sophos_event_input

I would like to change the Username field to only contain the user's name, for example:
Username_Field
Joe-Smith, Adams
Jane-Doe, Smith

Basically I want to get rid of the sophos_event_input suffix. How will I go about this?