All Topics


Hello, I am running the following query:

index=sys_tools_ecc-appd application_name=CAPRI-1130
| table *
| search source=business_transactions business_transactions.metricName="*Average Response Time (ms)*"
| timechart avg(business_transactions.metricValues{}.value) by business_transactions.metricPath

The business_transactions.metricPath names are all too long; examples below:
1. Business Transaction Performance|Business Transactions|APP|/dbq/ecrud|Average Response Time (ms)
2. Business Transaction Performance|Business Transactions|APP|/dbq/BTSXDRRequest_PortTypeWS|Average Response Time (ms)

I need to trim them from both sides: remove "Business Transaction Performance|Business Transactions" from the front and "|Average Response Time (ms)" from the back before displaying them as column names.
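One possible approach, sketched under the assumption that the trimming can happen before the timechart so the shortened path becomes the series name (shortPath is a name introduced here; field names containing dots are wrapped in single quotes for eval):

index=sys_tools_ecc-appd application_name=CAPRI-1130 source=business_transactions business_transactions.metricName="*Average Response Time (ms)*"
| eval shortPath='business_transactions.metricPath'
| eval shortPath=replace(shortPath, "^Business Transaction Performance\|Business Transactions\|", "")
| eval shortPath=replace(shortPath, "\|Average Response Time \(ms\)$", "")
| timechart avg(business_transactions.metricValues{}.value) by shortPath

The pipe characters and parentheses are escaped because replace() takes a regular expression; doing both trims with replace() avoids a separate rex step.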
Hi, I have Splunk UBA, and I want to set up an alert so that whenever a P6-level incident is triggered, I receive an email alert. Please share how to configure these settings.
If you access "All configurations" from other apps, results are pulled in fast and the page loads. The problem is when navigating through the "Search & Reporting" app: it times out with a 504 error. We tried a max_view_cache size of 2500 and a splunkd timeout setting of 60 sec, with no luck. Does anyone have an idea about this issue and how to adjust the performance settings?
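One hedged thing to try, assuming the 504 comes from the Splunk Web proxy giving up on splunkd before the page finishes enumerating configurations (splunkdConnectionTimeout is a real web.conf setting; the value below is only a test value, not a recommendation):

# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
# seconds Splunk Web waits on splunkd before timing out (shipped default is 30)
splunkdConnectionTimeout = 300

A restart of Splunk Web is needed for the change to take effect.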
Hi Team,

We recently got a requirement from our internal teams to ingest Active Directory logs into Splunk. Our Cluster Master, Search Heads & Indexers are hosted in the cloud and managed by Support. I downloaded the "Splunk Supporting Add-on for Active Directory" add-on, installed it on my Heavy Forwarder server, and performed the configuration as described in the add-on, i.e.:

Domain name: xyz
Alternate domain name: xyz
Base DN: xyz
LDAP Server Hostname: xyz
Port: 389
SSL: I didn't enable the checkbox.
Credentials Bind DN: provided my admin account information
Password: the related password
Connection Status: Test Succeeded

When I click Save, it does not show up as saved. We installed the add-on on our Search Heads as well, but didn't perform any configuration there since they are in the cloud. When I went to a search head and tried the queries provided in the add-on, I did not get the desired results; instead I got the errors below.

Search query: | ldaptestconnection domain="xyz"
Error: External search command 'ldaptestconnection' returned error code 1. Script output = "error_message=Cannot find the configuration stanza for domain=xyz in ldap.conf. ".

Search query: | ldapsearch search="(objectClass=group)" attrs=distinguishedName | ldapgroup
Error: External search command 'ldapsearch' returned error code 1. Script output = "error_message=Missing required value for alternatedomain in ldap/default. ".

Is anything missing in the configuration, and why am I getting these errors? Is there anything I need to change on the configuration page on the Heavy Forwarder? Kindly let me know.
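For reference, a minimal sketch of the ldap.conf stanza the add-on builds: both errors suggest this stanza must exist on the instance where the searches actually run (the search head), not only on the Heavy Forwarder where the UI configuration was done. Every value below is a placeholder:

# $SPLUNK_HOME/etc/apps/SA-ldapsearch/local/ldap.conf
# stanza name must match the domain passed to ldaptestconnection
[xyz]
alternatedomain = XYZ
basedn = DC=xyz,DC=example,DC=com
server = ldap.xyz.example.com
port = 389
ssl = false
binddn = CN=svc-splunk,OU=Service Accounts,DC=xyz,DC=example,DC=com

The bind password is stored encrypted when saved through the add-on's setup page, which is why configuring through the UI on the search-head side (rather than hand-editing) is the usual route.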
We have multiple devices forwarding logs to Splunk, some via a syslog mechanism and some via UF, and it's difficult to identify which forwarding mechanism each device uses. Is there any way to identify the syslog forwarding mechanism on port 514?
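One hedged way to separate the two populations, assuming forwarder traffic is visible in the indexers' internal metrics while raw syslog senders are not:

index=_internal sourcetype=splunkd group=tcpin_connections
| stats latest(fwdType) as fwdType latest(version) as version by hostname

Hosts returned here arrived over the Splunk-to-Splunk forwarding protocol (fwdType is typically "uf", "lwf", or "full"); hosts that appear in your data but not in this list are the candidates for syslog on port 514. Comparing against | metadata type=hosts would close the loop.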
We currently have an issue with our "nobody" user in Splunk, to whom we assign all our scheduled reports. We are reaching the disk quota limit daily, and a lot of searches are getting skipped with the message: "The maximum disk usage quota for this user has been reached." Now I want to increase srchDiskQuota in authorize.conf, but I have two questions:
1. Is it correct that if we want to assign anything to the "nobody" user we need to do this under [default], since the "nobody" user isn't assigned to any role? Or is the user actually part of the "splunk-system-role" role?
2. How can I find out what the maximum safe setting for srchDiskQuota would be so I don't break my system?
Thanks for a short feedback.
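A minimal sketch of the change being discussed; whether [default] actually reaches the "nobody" user is exactly question 1, so the stanza choice here is an assumption to verify, and the value is illustrative:

# $SPLUNK_HOME/etc/system/local/authorize.conf
# per-user disk quota for search artifacts, in MB (shipped default is 100)
[default]
srchDiskQuota = 500

The practical ceiling depends on free disk space under $SPLUNK_HOME/var/run/splunk/dispatch on the search heads, multiplied across concurrent users, so sizing it from df output there is a reasonable sanity check.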
Hi everyone, I come from a Java and Python background of the past 12 years and am new to Splunk. I am currently in doctoral studies at George Washington University in Cybersecurity Analytics, and in 2 months I plan to start work on my Praxis, which involves Splunk integration with the MITRE framework. Can someone point me to Splunk-MITRE framework architectures and project examples, any free training, and possibly any intern opportunities with Splunk in this direction? I have another 2 months to go. Can someone please guide me to some sweet spots of Splunk Analytics for the MITRE framework?
Consider I have 8 events:
1. txn started for fruit.mango
2. money paid for fruit.mango
3. received fruit.mango
4. txn completed for fruit.mango
5. cust wants to buy apple
6. getting money for apple
7. sending apple to cust
8. txn started for vegetable.carrot

I get a variable 'name' from my dashboard. This name is a string consisting of space-separated words, or just a single word; for example, it could be either 'apple' or 'txn started for fruit.mango'. What I want to do is first extract the article name, then search the whole event space with that article name, and then do further processing. For example, if I get $name$ = 'txn started for fruit.mango', I want to extract 'mango' and then run the query such that I get all the events which have the word 'mango' in them. After that I create a table of the steps the transaction took. I use

index=.. $name | if(match($name, "\s"), mvindex(split($name, "."), 1), $name)

to extract the name, but then I get only a single matching event, `txn started for fruit.mango`; other events which also have 'mango' in them, like 'received fruit.mango', don't match. How do I go about it? Any help will be much appreciated, thanks.
Hi all, we are trying to save the password of our service in a secure way as described here. We'd like to retrieve the Splunk instance's current credentials without asking for them in the add-on input configuration, to use them as parameters to create the service instance for storing the password:

service = client.connect(...)
storage_passwords = service.storage_passwords

Is there a way to do it? Thanks for the help!
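A minimal sketch under one common assumption: an add-on script never gets the user's password, but splunkd hands it a session key (for scripted inputs with passAuth it arrives on stdin; the exact channel depends on the input framework), and splunklib can authenticate with that token instead of credentials. "your_addon" and "your_addon_realm" are placeholders:

import sys
import splunklib.client as client

# session key provided by splunkd; how you receive it depends on your input type
session_key = sys.stdin.readline().strip()

# authenticate with the session key -- no username/password needed
service = client.connect(token=session_key, owner="nobody", app="your_addon")

# storage/passwords is now readable/writable, subject to capabilities
for sp in service.storage_passwords:
    print(sp.realm, sp.username)

service.storage_passwords.create(password="s3cret", username="svc_user", realm="your_addon_realm")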
Hi guys, is it even possible to schedule a report with cron to run at 14:35 and 23:55 only, each once per day? I tried something like 35,55 14,23 * * *, but I think it will also run at 14:55, am I right? I was playing with crontab.guru but was not able to figure it out, so I think it might not be possible.
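For what it's worth: a single standard cron expression indeed cannot express this, because the minute and hour fields combine as a cross product, so 35,55 14,23 * * * fires at 14:35, 14:55, 23:35, and 23:55. The usual workaround sketch is two schedules, e.g. the report plus a cloned copy:

35 14 * * *
55 23 * * *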
Hi There! Good day. I need to remove repeated entries of the same value in a single field; I'm unable to separate them into single values using the values() or mvsplit commands.

Actual result:

src_name        serial
item-s1028501   5cd022g2wn
                5cd022g2wn
                5cd022g2wn
                5cd022g2wn
                5cd022g2wn
                5cd022g2wn
                5cd022g2wn
                5cd022g2wn

Expected result:

src_name        serial
item-s1028501   5cd022g2wn

I mentioned only one value of src_name here; we have multiple src_name values to work with. Thanks in advance!
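If serial is a multivalue field on each row, the fix is usually a one-liner, since mvdedup() removes duplicate values within a multivalue field:

... | eval serial=mvdedup(serial)

If the table instead comes from something like stats list(serial) by src_name, switching to stats values(serial) by src_name also deduplicates, because values() returns only unique values; which variant applies depends on the search that produced the table, which isn't shown here.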
Hi, on Splunk I have a macro called `ABC`. I use this macro in the first search like this:

`ABC`
| stats values(src_ip) values(src_zone) values(dest_ip) values(dest_port) values(app) values(transport) values(session_end_reason) by host rule action
| rename values(*) as *
| rex field=host "(?<host_new>[^\.]+?)(?:\.[01]|\.02)?\."

I also have a second Splunk search as follows:

| rex field=Device_FQDN "(?<Device_FQDN>[^\.]+?)(?:\.[01]|\.02)?\."

I need to JOIN both searches, using the field "host_new" from the first search and the field "Device_FQDN" from the second search as the common fields to perform the JOIN on. What would the Splunk query be in this case, using both searches I have supplied, where the first search uses a macro? Many thanks, P
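A hedged sketch of one way to combine them (the second search's base query is not shown in the question, so <second base search> below is a placeholder; renaming Device_FQDN to host_new gives join a shared key):

`ABC`
| stats values(src_ip) values(src_zone) values(dest_ip) values(dest_port) values(app) values(transport) values(session_end_reason) by host rule action
| rename values(*) as *
| rex field=host "(?<host_new>[^\.]+?)(?:\.[01]|\.02)?\."
| join type=inner host_new
    [ search <second base search>
    | rex field=Device_FQDN "(?<Device_FQDN>[^\.]+?)(?:\.[01]|\.02)?\."
    | rename Device_FQDN as host_new ]

join is subject to subsearch result limits, so if the second search returns many rows, appending both searches and merging with stats by host_new is the usual more scalable alternative.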
Hello, does anyone here have an idea why the Cisco Cloud Security Umbrella add-on is interfering with authentication within the Splunk TA for Cloud Services? I'm trying to ingest NSG flow data via a storage blob. All the other inputs within the Cloud Services TA are working (Azure audit data via Event Hub). When I disable the Umbrella TA, NSG flow logs are received without a problem.

File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/solnlib/credentials.py", line 137, in get_password
    f"Failed to get password of realm={self._realm}, user={user}."
solnlib.credentials.CredentialNotExistException: Failed to get password of realm=__REST_CREDENTIAL__#Splunk_TA_microsoft-cloudservices#configs/conf-inputs, user=cisco_cloud_security_umbrella_addon://UmbrellaDNS.

Feels like some Python issue? Thankful for any hint I can get. David
I have created my dashboard. I need to create a PDF report of the dashboard and have it sent to my email daily at 2 PM IST.
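For reference, a sketch of the usual route, assuming a SimpleXML dashboard on Splunk Enterprise: Edit > Schedule PDF Delivery on the dashboard accepts a cron schedule and email recipients. If the Splunk server's timezone is IST, daily 2 PM is:

0 14 * * *

If the server runs on UTC, 2:00 PM IST is 08:30 UTC, so the expression becomes 30 8 * * *.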
Hello, I am attempting to add an external Splunk Enterprise instance to SOAR, and I receive the following error when I click "Test connection":

[screenshot of the "Test connection" error]

I am running Splunk Enterprise On-Prem v8.2.9, Splunk App for SOAR 1.0.41, and SOAR (Unprivileged, On-prem) v6.0.0.114895. What's interesting is that I can see the events being created in Splunk Enterprise in the phantom_action_run index:

[screenshot of events in the phantom_action_run index]

Does anyone happen to know what the (not very descriptive) error "status" means? What's also interesting is that none of my hosts are named "Splunk", so I'm wondering where the error is pulling that hostname from. Thanks ahead of time for your help! ~J
"timestamp": "2023-05-12T10:41:28.479211Z", "level": "INFO", "filename": "splunk_sample_csv.py", "funcName": "main", "lineno": 38, "message": "Dataframe row : {\"_c0\":{\"0\":\"Linux\",\"1\":\"00:00:... See more...
"timestamp": "2023-05-12T10:41:28.479211Z", "level": "INFO", "filename": "splunk_sample_csv.py", "funcName": "main", "lineno": 38, "message": "Dataframe row : {\"_c0\":{\"0\":\"Linux\",\"1\":\"00:00:01\",\"2\":\"00:10:01\",\"3\":\"00:20:01\",\"4\":\"00:30:01\",\"5\":\"00:40:01\",\"6\":\"00:50:01\",\"7\":\"01:00:01\",\"8\":\"01:10:01\",\"9\":\"01:20:01\",\"10\":\"01:30:02\",\"11\":\"01:40:01\",\"12\":\"01:50:01\",\"13\":\"02:00:01\",\"14\":\"02:10:01\",\"15\":\"02:20:02\",\"16\":\"02:30:01\",\"17\":\"02:40:01\",\"18\":\"02:50:01\",\"19\":\"03:00:01\",\"20\":\"03:10:01\",\"21\":\"03:20:01\",\"22\":\"03:30:01\",\"23\":\"03:40:01\",\"24\":\"03:50:01\",\"25\":\"04:00:01\",\"26\":\"04:10:01\",\"27\":\"04:20:02\",\"28\":\"04:30:01\",\"29\":\"04:40:01\",\"30\":\"04:50:01\",\"31\":\"05:00:01\",\"32\":\"05:10:01\",\"33\":\"05:20:02\",\"34\":\"05:30:01\",\"35\":\"05:40:01\",\"36\":\"05:50:01\",\"37\":\"06:00:01\",\"38\":\"06:10:01\",\"39\":\"06:20:01\",\"40\":\"06:30:01\",\"41\":\"06:40:01\",\"42\":\"06:50:01\",\"43\":\"07:00:01\",\"44\":\"07:10:01\",\"45\":\"07:20:01\",\"46\":\"07:30:01\",\"47\":\"07:40:01\",\"48\":\"07:50:02\",\"49\":\"08:00:01\",\"50\":\"08:10:01\",\"51\":\"08:20:01\",\"52\":\"08:30:01\",\"53\":\"08:40:01\",\"54\":\"08:50:01\",\"55\":\"09:00:01\",\"56\":\"09:10:01\",\"57\":\"09:20:01\",\"58\":\"09:30:01\",\"59\":\"09:40:01\",\"60\":\"09:50:01\",\"61\":\"10:00:01\",\"62\":\"10:10:01\",\"63\":\"10:20:01\"   Hi Team, We have a sample event like above we have to extract the time values which are in the format **:**:** in the above event and add them to a new field called TIME. Please help us on this issue.
Hi, I am trying to read the field msg.logMessage.error into a table. This field has a character length of up to 22,000. I need to read this field into a table, and when I try the below, it comes out blank:

basesearch | table msg.logMessage.error

Actually, I can show just the first 2,000 characters of this error as well, so I tried this too, but no luck:

basesearch | eval error = substr(msg.logMessage.error,1,2000) | table error

I tried putting single and double quotes around msg.logMessage.error, but no luck. I'm not sure if I need to modify some Splunk settings to read such fields into a table. Can anyone please help me with the solution? Thanks in advance!
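Two hedged things to check. First, inside eval a field name containing dots must be wrapped in single quotes (double quotes would make it a string literal):

basesearch | eval error = substr('msg.logMessage.error', 1, 2000) | table error

Second, if the field is blank because automatic key-value extraction truncates long events, the relevant knob is maxchars under the [kv] stanza of limits.conf (shipped default is 10240 characters); the value below is an assumption sized to fit a 22,000-character field:

# $SPLUNK_HOME/etc/system/local/limits.conf
[kv]
maxchars = 30720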
Hello, I am facing an issue with the SPL of a dashboard panel. If you look at the two figures, the SPL above the last 2-3 lines is the same. When doing 'fields -' and removing the unnecessary fields, the correct data is output. But when doing 'fields' to keep just the required fields and then removing '_raw' with 'fields -', the values are being overwritten. Note that the 'workflow_username' field has no issues; only the 'totalScore' and 'percentage' fields are affected. Also note that the 'totalScore' field is derived from other data using 'foreach' and 'eval' commands, but I don't think this issue is because of that. Any help is appreciated. Thanks.

Figure 1: panel SPL ending with 'fields -' (correct output)
Figure 2: panel SPL ending with 'fields' then 'fields - _raw' (overwritten values)
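One hedged workaround, under the assumption that keeping _raw in the pipeline lets search-time field extraction re-run downstream and clobber the eval-derived values: copy the derived fields to names no extraction will produce, drop everything else, then rename back (totalScore_out and percentage_out are hypothetical names introduced here):

... | eval totalScore_out=totalScore, percentage_out=percentage
| fields workflow_username totalScore_out percentage_out
| rename totalScore_out as totalScore, percentage_out as percentage

If the copies survive where the originals were overwritten, that confirms the re-extraction-from-_raw theory.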
I use a multiselect to build a list of search terms, then use them in different contexts: for building "normal" search strings, and also at the beginning of the search with TERM. These search strings have different formats:

index=info (TERM(something) OR TERM(something2)) ... | more processing | search field=something OR field=something2

My approach is to use one multiselect to build the list of search terms, then use it as input to the other multiselects. Because I know all possible values for the search terms, I use makeresults instead of search. How do I make the second and third multiselects use the input from the first multiselect?

<form version="1.1">
  <label>Test</label>
  <description>Test</description>
  <fieldset autoRun="true">
    <input type="multiselect" token="log_level_csv">
      <fieldForLabel>log_levels</fieldForLabel>
      <fieldForValue>log_levels</fieldForValue>
      <default>ERROR</default>
      <search>
        <query>| makeresults | eval log_levels="INFO WARN ERROR" | makemv delim=" " log_levels | mvexpand log_levels | stats count by log_levels</query>
        <earliest>-24h</earliest>
        <latest>now</latest>
      </search>
      <label>Log_Level from static makeresults</label>
    </input>
    <input type="multiselect" token="TERM" autoRun="false">
      <prefix>(</prefix>
      <suffix>)</suffix>
      <valuePrefix>TERM(</valuePrefix>
      <valueSuffix>)</valueSuffix>
      <delimiter> OR </delimiter>
      <label>TERM</label>
      <search>
        <query>| makeresults | eval log_levels="$log_level_csv$" | makemv delim=" " log_levels | mvexpand log_levels | stats count by log_levels</query>
        <earliest>-24h</earliest>
        <latest>now</latest>
      </search>
    </input>
    <input type="multiselect" token="log_level_search">
      <prefix>(</prefix>
      <suffix>)</suffix>
      <valuePrefix>log_level=</valuePrefix>
      <valueSuffix></valueSuffix>
      <delimiter> OR </delimiter>
      <label>Search</label>
      <search>
        <query>| makeresults | eval log_levels="$log_level_csv$" | makemv delim=" " log_levels | mvexpand log_levels | stats count by log_levels</query>
        <earliest>-24h</earliest>
        <latest>now</latest>
      </search>
    </input>
  </fieldset>
  <row>
    <table>
      <title>Table of events</title>
      <search>
        <query>
          index=_internal $TERM$
          | where $log_level_search$
        </query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
    </table>
  </row>
</form>
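One hedged observation on the chaining itself, assuming the downstream makemv delim=" " is kept: give the first multiselect an explicit space delimiter so $log_level_csv$ always renders as space-separated values, since without it the rendered token format isn't guaranteed to match what the downstream populating searches expect (the ... marks the elements already present above):

<input type="multiselect" token="log_level_csv">
  ...
  <delimiter> </delimiter>
  ...
</input>

Populating searches that reference $log_level_csv$, like the second and third inputs here, re-run automatically whenever that token changes, so no further wiring should be needed.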
Hello All,

I receive the following error after starting Splunk Enterprise for the first time after an upgrade from 8.2.9 to 9.0.4.1; otherwise, no other problem was seen prior:

Exception: <class 'UnicodeDecodeError'>, Value: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 1212, in main
    parseAndRun(argsList)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 1067, in parseAndRun
    retVal = cList.getCmd(command, subCmd).call(argList, fromCLI = True)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 293, in call
    return self.func(args, fromCLI)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/control_api.py", line 35, in wrapperFunc
    return func(dictCopy, fromCLI)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/_internal.py", line 189, in firstTimeRun
    migration.autoMigrate(args[ARG_LOGFILE], isDryRun)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/migration.py", line 3379, in autoMigrate
    migrate_add_default_dashboard_xml_version_1_1(dryRun)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/migration.py", line 3131, in migrate_add_default_dashboard_xml_version_1_1
    dashboard_contents = f.read()
  File "/opt/splunk/lib/python3.7/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte

Any help / pointers would be appreciated!

~J
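A hedged way to narrow this down, based on the traceback: the migration step (migrate_add_default_dashboard_xml_version_1_1) reads every dashboard XML file as UTF-8, and a 0xff first byte typically means one view file was saved as UTF-16 with a byte-order mark. Something like the following should list candidates (paths assume a default install; the file utility must be available):

# list dashboard XML files whose encoding is not plain ASCII/UTF-8
find /opt/splunk/etc -path "*/data/ui/views/*.xml" -exec file {} + | grep -viE "ascii|utf-8"

Re-saving the flagged file as UTF-8 (or moving it aside) and re-running the migration would be the next step to try.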