All Topics


Logging a single line to Splunk takes about 30 ms with the HEC appender. For example, the snippet below reports roughly 30 ms:

Long start1 = System.currentTimeMillis();
log.info("Test logging");
Long start2 = System.currentTimeMillis();
log.info("logTime={}", start2 - start1);

This is our logback config -

30 ms is too long for a single log call. Are we missing anything in the config?
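If the appender is posting each event to HEC synchronously, the 30 ms is likely the HTTP round trip rather than anything in your config. A minimal sketch of one mitigation: wrap the HEC appender in logback's stock AsyncAppender so log.info() returns after an in-memory enqueue. The inner appender name "http" is an assumption; match it to the name in your own config and leave the HEC appender definition as-is.

<!-- ch.qos.logback.classic.AsyncAppender decouples the caller thread -->
<appender name="ASYNC_HEC" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="http"/>
    <!-- events buffered in memory before the HEC appender sees them -->
    <queueSize>512</queueSize>
    <!-- 0 = keep INFO/DEBUG even when the queue is nearly full -->
    <discardingThreshold>0</discardingThreshold>
    <!-- if the queue fills completely, drop instead of blocking log.info() -->
    <neverBlock>true</neverBlock>
</appender>
<root level="INFO">
    <appender-ref ref="ASYNC_HEC"/>
</root>
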
Event Actions > Show Source fails at 100/1000 events with the two errors below:

[e430ac81-66f7-40b8-8c76-baa24d2813c6_wh-1f2db913c0] Streamed search execute failed because: Error in 'surrounding': Too many events (> 10000) in a single second.
Failed to find target event in final sorted event list. Cannot properly prune results

The result sets are not huge - maybe 150 events. What do these errors mean, and how do we resolve them?
I have successfully completed several free courses and four paid courses in the last few weeks, but only received a few reward points, from one free course and one paid course. The majority of the Splunk learning rewards points promised have never shown up. https://www.splunk.com/en_us/training/splunk-learning-rewards.html What should I do? Any advice is much appreciated.
From the XML below we created a dropdown for site, and it works as expected, but we need a dropdown for country as well. Country data is not present in the logs. We have two countries, China and India. We need a country dropdown, and the site dropdown should be filtered based on the selected country. How can we do this?

<form version="1.1" theme="light">
  <label>Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timepicker">
      <label>TimeRange</label>
      <default>
        <earliest>-15m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="site">
      <label>SITE</label>
      <choice value="*">All</choice>
      <prefix>site="</prefix>
      <suffix>"</suffix>
      <default>*</default>
      <fieldForLabel>site</fieldForLabel>
      <fieldForValue>site</fieldForValue>
      <search>
        <query>| makeresults
| eval site="BDC"
| fields site
| append [ | makeresults | eval site="SOC" | fields site ]
| sort site
| table site</query>
      </search>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Total Count Of DataRequests</title>
        <search>
          <query>index=Datarequest-index $site$
| rex field=_raw "application :\s(?<Reqtotal>\d+)"
| stats sum(Reqtotal)</query>
          <earliest>$timepicker.earliest$</earliest>
          <latest>$timepicker.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentageRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
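Since the country value never appears in the logs, one sketch is to hardcode the country dropdown and derive the site list from the selected token. The China-to-BDC and India-to-SOC mapping below is an assumption; adjust the case() to your real site-per-country relationships.

<input type="dropdown" token="country" searchWhenChanged="true">
  <label>COUNTRY</label>
  <choice value="China">China</choice>
  <choice value="India">India</choice>
  <default>China</default>
</input>
<input type="dropdown" token="site">
  <label>SITE</label>
  <prefix>site="</prefix>
  <suffix>"</suffix>
  <fieldForLabel>site</fieldForLabel>
  <fieldForValue>site</fieldForValue>
  <search>
    <query>| makeresults
| eval country="$country$"
| eval site=case(country=="China", "BDC", country=="India", "SOC")
| table site</query>
  </search>
</input>

The site dropdown's populating search re-runs whenever $country$ changes, so the two inputs cascade without the country needing to exist in the indexed data.
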
We want to add a TA (app) to our indexers at the path /opt/splunk/etc/master-apps by running the command /opt/splunk/bin/splunk apply cluster-bundle

My question: can we deploy an indexer app without restarting the indexers? The TA we want to deploy is an extension to the nix TA, and all it does is run some simple bash scripted inputs.
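For what it's worth, a sketch of the pre-check I would run on the cluster manager before the push. Whether a restart actually happens is decided per peer from which files changed, and scripted-input additions to inputs.conf are often reload-safe, but the validate command tells you ahead of time (worth confirming the flags against your Splunk version):

# dry-run: reports whether this bundle push would force a peer restart
/opt/splunk/bin/splunk validate cluster-bundle --check-restart

# after the push, watch the rollout state
/opt/splunk/bin/splunk show cluster-bundle-status
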
Here's a part of my query, ignoring where the data is coming from:

| eval bucket=case(dur < 30, "Less than 30sec", dur <= 60, "30sec - 60sec", dur <= 120, "1min - 2min", dur <= 240, "2min - 4min", dur > 240, "More than 4min")
| eval sort_field=case(bucket="Less than 30sec", 1, bucket="30sec - 60sec", 2, bucket="1min - 2min", 3, bucket="2min - 4min", 4, bucket="More than 4min", 5)
| sort sort_field
| stats count as "Number of Queries" by bucket

The problem I have is that the results are ordered alphabetically by the name of each bucket. I'd prefer the order to always run from quickest to slowest: <30s, 30-60s, 1-2m, 2-4m, >4m

What I get:

1min - 2min | <value>
2min - 4min | <value>
30sec - 60sec | <value>
Less than 30sec | <value>
More than 4min | <value>

What I want:

Less than 30sec | <value>
30sec - 60sec | <value>
1min - 2min | <value>
2min - 4min | <value>
More than 4min | <value>

I've tried a number of different approaches, none of which seem to do anything. Is this possible?
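One approach that should work, sketched against your field names: stats discards any earlier sort order and emits rows ordered by the group-by field, so compute sort_field after the stats and sort there instead.

| eval bucket=case(dur < 30, "Less than 30sec", dur <= 60, "30sec - 60sec", dur <= 120, "1min - 2min", dur <= 240, "2min - 4min", dur > 240, "More than 4min")
| stats count as "Number of Queries" by bucket
| eval sort_field=case(bucket="Less than 30sec", 1, bucket="30sec - 60sec", 2, bucket="1min - 2min", 3, bucket="2min - 4min", 4, bucket="More than 4min", 5)
| sort sort_field
| fields - sort_field

The trailing fields command just hides the helper column from the final table.
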
Hi, I am quite new to Splunk, so sorry in advance if I ask silly questions. I have the below task to do:

"The logs show that Windows Defender has detected a Trojan on one of the machines on the ComTech network. Find the relevant alerts and investigate the logs."

I keep searching but don't get the right logs. I searched with the filters below:

source="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational"
source="XmlWinEventLog:Microsoft-Windows-Windows Defender/Operational"

I would really appreciate it if you could help. Thanks, Pere
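In case it helps, a hedged starting point: Windows Defender writes detections to its Operational channel as event ID 1116 (malware detected) and 1117 (action taken), so filtering on those narrows the channel to actual detections. No index is specified here since the lab's index name isn't known; add one if your environment needs it.

source="XmlWinEventLog:Microsoft-Windows-Windows Defender/Operational" (EventCode=1116 OR EventCode=1117)
| sort _time
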
I need my trial extended 14 more days. I have to do a demo for my bosses on Tuesday. User: https://app.us1.signalfx.com/#/userprofile/GM-tC55A4AA
5/17/24 12:45:46.313 PM persistuse Environment = LTQ3

In the above event, the character "r" is missing from the word "persistuse" (it should be "persistuser", and it does exist in the raw data on the host). As a result, events are being created without a timestamp and we are getting data quality issues. How can this be fixed?
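A dropped character points at the ingestion path (encoding, truncation, or an intermediary) rather than at Splunk's parser, so that part needs tracing from the host forward. But to stop the mistimed events while you investigate, a hedged props.conf sketch that pins timestamp recognition to the shape shown above (the sourcetype name is a placeholder):

[your_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %m/%d/%y %I:%M:%S.%3N %p
MAX_TIMESTAMP_LOOKAHEAD = 30

This goes on the first full Splunk instance that parses the data (indexer or heavy forwarder), not on a universal forwarder.
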
We are generating HEC tokens on a deployment server and pushing them out to the HECs. HEC tokens are disabled by default on the HECs and on the deployment server, and need to be enabled in global settings. What I've tried so far:

- authorize.conf - this is for user tokens and doesn't work for HEC tokens
- the CLI command for token enable doesn't work because HEC isn't enabled globally
- inputs.conf has [http] disabled=0

The only thing that has worked is enabling it via the UI. Is there a way to enable these over the CLI?
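One thing worth checking, sketched below: the UI writes the global HEC toggle into the splunk_httpinput app, so a [http] stanza deployed elsewhere can lose to a higher-precedence disabled=1 in that app. This assumes your deployed app is the layering problem; btool will confirm which file wins.

# etc/apps/<deployed_app>/local/inputs.conf
# (mirrors etc/apps/splunk_httpinput/local/inputs.conf, which is where the UI writes)
[http]
disabled = 0

# then confirm what splunkd resolves, and which file each setting comes from:
/opt/splunk/bin/splunk btool inputs list http --debug
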
Hi Team,

Is it possible to update/enrich a notable after executing a playbook in Splunk SOAR, with the execution output attached to the Splunk notable?

Example: assume I have a correlation search named "one" that triggers a notable and a "run a playbook" action. Once the search triggers and the notable is created, the playbook should execute in SOAR and attach its output to the notable. Think of this as attaching the IP reputation / geolocation of an IP to the notable, so that the SOC can work without logging into VirusTotal or other sites.

Thank you
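A rough sketch of the playbook side, under stated assumptions: the Splunk connector app is configured as an asset in SOAR, the notable's event_id reaches SOAR in the ingested artifact, and the "update event" action name and its parameter names match your installed connector version (worth double-checking; this is not a verified drop-in).

import phantom.rules as phantom

def enrich_notable(event_id, summary):
    # push enrichment text back onto the ES notable as a comment
    phantom.act(
        "update event",
        parameters=[{
            "event_ids": event_id,   # the notable's event_id, from the artifact
            "comment": summary,      # e.g. IP reputation / geolocation findings
        }],
        assets=["splunk"],           # asset name is an assumption
        name="enrich_notable",
    )
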
Hello, Can traffic on port 8089 be encrypted? What are the pros and cons?
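For context: the 8089 management port (splunkd) speaks TLS out of the box, using Splunk's self-signed certificates, and it is controlled by the sslConfig stanza in server.conf. A sketch for swapping in your own certificates; the paths are assumptions:

[sslConfig]
enableSplunkdSSL = true
serverCert = /opt/splunk/etc/auth/mycerts/myServerCert.pem
sslPassword = <certificate_key_password>
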
If I have 6 search peers configured in the distsearch.conf file but 3 of them go down, can Splunk recognize that a host is down and keep skipping down the list until it reaches a live host?
Hello, Is Splunk 9.0 compatible with Oracle Linux?
I recently installed the MISP add-on app from Splunk to integrate our MISP environment feed into Splunk using the URL and the Auth API key. I was able to configure it with the details required in the MISP add-on app. However, after the configuration, I'm getting the following error: (Restricting results of the "rest" operator to the local instance because you do not have the "dispatch_rest_to_indexers" capability). Furthermore, looking at the role capabilities under the Splunk UI settings, I don't see a "dispatch_rest_to_indexers" capability either. Could someone please assist?
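If it helps, a sketch of granting the capability directly through authorize.conf on the search head; the role name is an example, and capabilities can be assigned in the conf file even when the UI doesn't list them for editing. (Note the message itself is often benign: it is a warning that | rest ran only against the local instance, which is frequently all the add-on needs.)

# etc/system/local/authorize.conf (or an app's local/authorize.conf)
[role_misp_user]
importRoles = user
dispatch_rest_to_indexers = enabled
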
We have Splunk installed on a Linux machine under /opt/splunk. We have created an add-on with Python code, which is saved in modalert_test_webhook_helper.py under "/opt/splunk/etc/apps/splunk_app_addon-builder/local/validation/TA-splunk-webhook-final/bin/ta_splunk_webhook_final".

We want to define a parameter in a config file whose value is a list of REST API endpoints, and read it in the Python code. If the REST API endpoint entered by the user while adding the action to an alert is present in the list from the config file, only then should the process_data action proceed in Python; otherwise, display a message saying the REST API endpoint is not present.

So we want to know: in which .conf file should we define the parameter, what changes do we make in the Python code, and which Python file should be used, given there are many Python files under the /bin directory? Also, after making changes to any conf or Python files and restarting, the changes are not being saved. How do we get them to persist after restarting Splunk? Please find attached screenshots of the conf and Python files. Kindly help with any solution.
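Not an authoritative answer, but a minimal sketch of one way this could work. The conf file name, stanza, key, and the parameter name "rest_api_endpoint" are all made up for illustration; helper.get_param() and helper.log_info() are the standard Add-on Builder helper calls available inside modalert_*_helper.py. On persistence: the path under splunk_app_addon-builder/local/validation is the builder's working copy, which may be why edits there don't survive; editing the exported TA under etc/apps/TA-splunk-webhook-final is worth trying instead.

import os
import configparser

def get_allowed_endpoints():
    """Read the comma-separated endpoint whitelist from a custom conf file.

    Hypothetical file:
      $SPLUNK_HOME/etc/apps/TA-splunk-webhook-final/local/ta_webhook_endpoints.conf
      [endpoints]
      allowed_list = https://api.example.com/one,https://api.example.com/two
    """
    conf_path = os.path.join(os.environ["SPLUNK_HOME"], "etc", "apps",
                             "TA-splunk-webhook-final", "local",
                             "ta_webhook_endpoints.conf")
    cfg = configparser.ConfigParser()
    cfg.read(conf_path)
    raw = cfg.get("endpoints", "allowed_list", fallback="")
    return [e.strip() for e in raw.split(",") if e.strip()]

def process_event(helper, *args, **kwargs):
    # parameter name is an assumption; match the one defined in your alert action
    endpoint = helper.get_param("rest_api_endpoint")
    if endpoint not in get_allowed_endpoints():
        helper.log_info("rest api endpoint %s is not present in the allowed list"
                        % endpoint)
        return 0
    # ... proceed with the existing process_data logic here ...
    return 0
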
Hi, I appreciate that there are numerous questions on here for similar problems but, after reading quite a few of them, nothing seems to quite fit my scenario/issue.

I am trying to extract a field from an event and call it 'action'. The entry in props.conf looks like:

EXTRACT-pam_action = (Action\: (?P<action>\[[^:\]]+]) )

I know that the extraction is working, as there is a field alias later in props.conf:

FIELDALIAS-aob_gen_syslog_alias_32 = action AS change_type

When I run a basic generating search on the index & sourcetype, the field 'action' does not appear in the 'Interesting Fields' but the 'change_type' alias does appear! The regex is fine, as I can create the 'action' field OK if I add the rex to the search. I have also added the exact same regex to props.conf but called the field 'action1', and that field is displayed OK.

Another test I tried was to create a field alias for the action1 field called 'action':

FIELDALIAS-aob_gen_syslog_alias_30 = action1 AS action
FIELDALIAS-aob_gen_syslog_alias_32 = action1 AS change_type

'change_type' is visible but, again, 'action' is not.

Finally, my search "index=my_index action=*" produces 0 results, whereas "index=my_index change_type=*" produces accurate output.

I have looked through the props and transforms configs across my search head and can't see anything that might be 'removing' my field extraction. I guess my question is: how can I debug the creation (or not) of a field name? I have a deep suspicion that it is something to do with one of the Windows TA apps that we have installed, but I am struggling to locate the offending configuration.

Many thanks for any help. Mark
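A hedged debugging sketch: "action=*" returning 0 results while its alias works is a classic signature of a fields.conf entry (often shipped by a TA) declaring action as an indexed field, which makes Splunk look for it in the index instead of running the search-time extraction. btool shows every definition and which file wins; the sourcetype name below is a placeholder.

/opt/splunk/bin/splunk btool props list your_sourcetype --debug | grep -i action
/opt/splunk/bin/splunk btool fields list --debug | grep -iw action

If the second command turns up an [action] stanza with INDEXED set, that app is the likely culprit.
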
Hi,

Is there a way of bulk-enabling alerts in Splunk Enterprise?

Thanks,

Joe
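One approach, sketched with assumed names and credentials: the saved-searches REST endpoint accepts disabled=0, so a loop over your alert names (URL-encoded) can bulk-enable them from the shell.

# assumes admin credentials, and alerts owned by nobody in the search app;
# "My%20Alert" is a placeholder for each URL-encoded alert name
curl -k -u admin:changeme \
  "https://localhost:8089/servicesNS/nobody/search/saved/searches/My%20Alert" \
  -d disabled=0
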
Hi,

I would like to remove every occurrence of a specific pattern from my _raw events. Specifically, I am looking to delete these HTML tags: <b>, </b>, <br>

For example, I have this raw event:
<b>This</b> is an <b>example</b><br>of raw<br>event
And I would like to transform it into this:
This is an exampleof rawevent

I tried to create this transforms.conf:

[remove_html_tags]
REGEX = <\/?br?>
FORMAT =
DEST_KEY = _raw

And this props.conf:

[_sourcetype_]
TRANSFORMS-html_tags = remove_html_tags

But it doesn't work.

I also thought I could change the transforms.conf like this:

[remove_html_tags]
REGEX = (.*)<\/?br?>(.*)
FORMAT = $1$2
DEST_KEY = _raw

But it stops after just one substitution, and the REPEAT_MATCH property is not suitable because the doc says: "NOTE: This setting is only valid for index-time field extractions. This setting is ignored if DEST_KEY is _raw." And I must set DEST_KEY = _raw.

Can you help me? Thank you in advance.
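A sketch of the usual route for this: SEDCMD in props.conf does sed-style index-time replacement on _raw, and its /g flag applies the substitution to every match in the event, which sidesteps the single-pass transforms limitation. The stanza name is a placeholder for your sourcetype; this belongs on the indexer or heavy forwarder that parses the data.

[your_sourcetype]
SEDCMD-remove_html_tags = s/<\/?br?>//g
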
Hello splunkers! Has anyone had experience getting data into Splunk from PAM (Privileged Access Management) systems? I want to integrate Splunk with Fudo PAM. Getting logs from Fudo to Splunk is not a problem at all; it's easily done over syslog. However, I don't know how to parse these logs. The syslog sourcetype doesn't properly parse the events; it misses a lot of useful information such as users, processes, actions taken, and accounts - basically almost everything except for the IP of the node and the timestamp of the event. Does anyone know if there is a good add-on for parsing logs from Fudo PAM? Or any other good way to parse its logs? Thanks for taking the time to read and reply to my post.
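If no dedicated add-on turns up, a hedged starting point, assuming (unverified) that Fudo's syslog payload carries key=value pairs; if the format is positional or JSON instead, you would swap this for EXTRACT- regexes or KV_MODE = json respectively. The sourcetype name is a placeholder.

# props.conf on the search head
[fudo:pam]
KV_MODE = auto
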