All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I have a site monitor running in AppDynamics Synthetics. I am confused as to how the visually complete time can be greater than the page fully loaded time. Is it indicative of a slow desktop in the monitoring infrastructure? Thanks for any comments. Sincerely, Jim
Hi. My request to join the Phantom Community was approved, but the link I was provided has since expired and I cannot complete my registration. How can I request a new link? The support address given in the email I received, support@phantom.us, appears to be discontinued.
I'd like to know when a series of hosts go offline.  What would be the best SPL to use with something like this?  Thanks for your help! 
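For what it's worth, one common sketch for this (assuming the hosts normally send data continuously; the 15-minute threshold is my own placeholder):

```
| metadata type=hosts index=*
| eval minutes_quiet = round((now() - recentTime) / 60)
| where minutes_quiet > 15
| convert ctime(recentTime) AS last_seen
| table host last_seen minutes_quiet
```

metadata is cheap because it never scans events. For alerting on a fixed list of hosts, the usual next step is to compare this result against a lookup of expected hosts.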
Hello, I am trying to integrate Salesforce with Splunk. I have installed the Salesforce add-on on our Splunk heavy forwarder (HWF). When I try to set up the configuration I get this message: "Request time out while getting accesstoken. Please try again."

Configuration setup page:
account name - sales
salesforce environment - other
endpoint url - xxxx.salesforce.com
api version - 4.8
auth type - Oauth 2.0 authencation
client id - XXX
secret ID - XxX
redirect url - <heavyforwarder name>/en-US/app/Splunk_TA_Saleforce/Saplunk_ta_salesforce_redirect

Splunkd logs:
ERROR ExecProcessor - message from "python /apps/splunk/etc/apps/Splunk_TA_salesforce/bin/sfdc_object.py" raise HTTPError(response)
0400 ERROR ExecProcessor - message from "python /apps/splunk/etc/apps/Splunk_TA_salesforce/bin/sfdc_object.py" HTTPError: HTTP 500 Internal Server Error -- {"messages":[{"type":"ERROR","text":"Unexpected error \"<type 'exceptions.KeyError'>\" from python handler: \"u'salesforce'\". See splunkd.log for more details."}]}
I am seeing this error message from the Mimecast TA:

ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-mimecast-for-splunk/bin/mimecast_audit.py" ERRORHTTPSConnectionPool(host='us-api.mimecast.com', port=443): Max retries exceeded with url: /api/audit/get-audit-events (Caused by ReadTimeoutError("HTTPSConnectionPool(host='us-api.mimecast.com', port=443): Read timed out. (read timeout=30.0)",))

Did the Mimecast API change, or is something else causing this issue? The Mimecast audit log is not being received because of it.
I have a clustered deployment with one search head cluster and one indexer cluster. Recently I upgraded the MS Exchange App on the search cluster:
- Upgraded windows_TA from 5.0.1 to 7.0.0
- Upgraded Exchange TAs from 3.5.1 to 4.0.1
- Upgraded Exchange App from 3.5.1 to 4.0.1
- Removed windows infrastructure app 1.5.1
The TAs are also pushed to the indexer cluster. I also removed the windows_apps.csv lookup under the Exchange app, since there is a newer copy under windows_TA; that suppressed the "Could not load lookup=LOOKUP-app4_for_windows_security" error. I did not change anything else in the app. However, every indexer now reports a "Could not load lookup=LOOKUP-user_account_control_property" error for every search. The user_account_control_property lookup comes with windows_TA and is readable by any user/app by default. Could somebody help? Thanks in advance!
Hi all, trying to set this up and I'm getting the following error:

urllib3/connectionpool.py:846: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
08-18-2020 16:28:38.533 +0100 ERROR PersistentScript - From {/opt/splunk/bin/python /opt/splunk/lib/python2.7/site-packages/splunk/persistconn/appserver.py}: InsecureRequestWarning)
08-18-2020 16:28:38.539 +0100 ERROR PersistentScript - From {/opt/splunk/bin/python /opt/splunk/lib/python2.7/site-packages/splunk/persistconn/appserver.py}: /opt/splunk/etc/apps/splunk_ta_o365/bin/3rdparty/urllib3/connectionpool.py:846: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings

(The same pair of lines then repeats at 16:28:38.539, 16:28:38.650, and 16:28:38.656.)

I realise the URL says:

import urllib3
urllib3.disable_warnings()

but where does that actually go in the script?
I have read through almost every Join-label topic on the Splunk Community page and I don't seem to see one that fits my problem. If there is one that works for this issue, please simply direct me to the correct discussion. The closest discussion that looks like what I am shooting for is: "How to join two searches on a common field where the value of the left search matches all values of the right search?" But this discussion doesn't have a solution. And I've been through the docs.splunk.com pages reviewing subsearch, append, appendcols, join, and selfjoin.

The two searches I would like to join are:

Search 1:
index="_internal" source="*metrics.log" per_index_thruput series=autoshell host=lelsplunkix* | eval GB=kb/(1024*1024) | timechart span=12h sum(GB) as GB by series

Results (example - 500k+ rows returned):
_time                    _raw  sourcetype   GB
2020-08-18 07:04:33.307  ABC   ship         0.0000264551490559
2020-08-18 07:04:31.168  LMN   rum          0.0000000828877091
2020-08-18 07:04:24.174  XYZ   jacksparrow  0.0000000940635800

IMPORTANT: The index of all of these is "_internal", not the actual index that the source data comes from.

Search 2:
| tstats count where (index=BlackPearl OR index=Tortuga OR index=Swashbuckler) by index, sourcetype | table sourcetype, index

Results (example - roughly 86 rows returned):
sourcetype   index
ship         BlackPearl
crew         BlackPearl
rum          Tortuga
wench        Tortuga
willturner   Swachbuckler
jacksparrow  Swashbuckler

I want to join these results to make a single table of:
_time                    _raw  sourcetype   index         GB
2020-08-18 07:04:33.307  ABC   ship         BlackPearl    0.0000264551490559
2020-08-18 07:04:31.168  LMN   rum          Tortuga       0.0000000828877091
2020-08-18 07:04:24.174  XYZ   jacksparrow  Swachbuckler  0.0000000940635800

I tried to use append, and it just adds the additional sourcetype/index rows below the actual results (not as a new column). I tried to use appendcols, but the number of rows between the first search and the second search don't match, so only the first handful of rows get an index, and the index doesn't match up with the sourcetype. I tried to use join with max=0 and type=inner, and it only returned a handful of rows (less than 1000), and only for a few of the index/sourcetype combinations. I even tried to use the second search as a subsearch of the first search to limit the sourcetypes to ONLY the ones returned in the tstats search... which I think worked, but still didn't tell me which index applied to each sourcetype. I can run the two separately, extract the data into Excel, and do a VLOOKUP to get the results I want, but I need this to be in the report/search. Help me! I'm drowning. Be gentle, this is my first discussion topic. Hope this is enough information to clearly understand the problem.
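Not from the docs, just a sketch of the shape that usually works for this (field names taken from the post, everything else an assumption): since the tstats output is tiny (~86 rows), rename series to sourcetype, join before the timechart, then chart by index:

```
index="_internal" source="*metrics.log" per_index_thruput series=autoshell host=lelsplunkix*
| eval GB=kb/(1024*1024)
| rename series AS sourcetype
| join type=left max=0 sourcetype
    [| tstats count where (index=BlackPearl OR index=Tortuga OR index=Swashbuckler) by index, sourcetype
     | fields sourcetype index]
| timechart span=12h sum(GB) AS GB by index
```

join still has subsearch limits, so an alternative with no row limit on the main search is to write the tstats result to a CSV with outputlookup and attach it with the lookup command instead.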
License Usage by Each Indexer : Need to find license usage by each indexer.
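A starting-point sketch, assuming the stock license_usage.log fields (b is bytes, i is the reporting indexer's GUID) and that the search runs on the license master:

```
index=_internal source=*license_usage.log type=Usage
| eval GB = b / 1024 / 1024 / 1024
| timechart span=1d sum(GB) AS GB by i
```

The i field is a GUID; mapping it to a hostname may need a separate lookup.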
Hello, are there any RHEL 7 End of Life and End of Support dates? For additional info, we are using software version 8.0.1. Thanks
Hi Team, there was a recent failure on one of the hosts where we have Splunk App for DB Connect installed. As a result we lost data for almost 5-6 hours. I want to create a secondary backup server so that, if something goes wrong with our present server, it would ideally start working on the secondary server. Also, I have realized that decrypting the passwords and entering them with a script is not working; last time I tried, I had to enter all the passwords manually. Could anyone please guide me or point me in the right direction on how I can achieve that? Regards,
Hi. In our application we generate logs based on "Audit Trail and Node Authentication" following the RFC 3881 and DICOM specifications. Our challenge now is to parse those into Splunk. Has anyone done work with RFC 3881 or DICOM before? Espen
Hi there, digging deeper into the REST API and XML parsing. When running an XML status command on our Ironport I get the following XML result displayed. "<status build="phoebe 13.5.1-277" hostname="mv15int.xxxx.com" timestamp="20200818100104"> <birth_time timestamp="20200706100631 (42d 23h 54m 33s)"/> <last_counter_reset timestamp=""/> <system status="online"/> <oldest_message secs="3" mid="42741174"/> <features> <feature name="External Threat Feeds" time_remaining="11008734"/> <feature name="Sophos" time_remaining="11008734"/> <feature name="File Analysis" time_remaining="11008734"/> <feature name="Bounce Verification" time_remaining="9712734"/> <feature name="IronPort Anti-Spam" time_remaining="11008734"/> <feature name="IronPort Email Encryption" time_remaining="11008734"/> <feature name="Data Loss Prevention" time_remaining="11008734"/> <feature name="File Reputation" time_remaining="11008734"/> <feature name="Incoming Mail Handling" time_remaining="11077663"/> <feature name="Outbreak Filters" time_remaining="11008734"/> </features> <counters> <counter name="inj_msgs" reset="3890935" uptime="2294122" lifetime="3890935"/> <counter name="inj_recips" reset="4939054" uptime="2698122" lifetime="4939054"/> <counter name="gen_bounce_recips" reset="87501" uptime="71162" lifetime="87501"/> <counter name="rejected_recips" reset="3418" uptime="1075" lifetime="3418"/> <counter name="dropped_msgs" reset="0" uptime="0" lifetime="0"/> <counter name="soft_bounced_evts" reset="1631" uptime="1451" lifetime="1631"/> <counter name="completed_recips" reset="9789961" uptime="5324578" lifetime="9789961"/> <counter name="hard_bounced_recips" reset="157862" uptime="134939" lifetime="157862"/> <counter name="dns_hard_bounced_recips" reset="10635" uptime="5733" lifetime="10635"/> <counter name="5xx_hard_bounced_recips" reset="147227" uptime="129206" lifetime="147227"/> <counter name="filter_hard_bounced_recips" reset="0" uptime="0" lifetime="0"/> <counter 
name="expired_hard_bounced_recips" reset="0" uptime="0" lifetime="0"/> <counter name="other_hard_bounced_recips" reset="0" uptime="0" lifetime="0"/> <counter name="delivered_recips" reset="9568380" uptime="5147976" lifetime="9568380"/> <counter name="deleted_recips" reset="63719" uptime="41663" lifetime="63719"/> <counter name="global_unsub_hits" reset="0" uptime="0" lifetime="0"/> </counters> <current_ids message_id="42741194" injection_conn_id="3948223" delivery_conn_id="1609988"/> <rates> <rate name="inj_msgs" last_1_min="3121" last_5_min="5078" last_15_min="6575"/> <rate name="inj_recips" last_1_min="4795" last_5_min="7475" last_15_min="9384"/> <rate name="soft_bounced_evts" last_1_min="0" last_5_min="12" last_15_min="4"/> <rate name="completed_recips" last_1_min="9487" last_5_min="14846" last_15_min="18795"/> <rate name="hard_bounced_recips" last_1_min="180" last_5_min="48" last_15_min="34"/> <rate name="delivered_recips" last_1_min="9007" last_5_min="12997" last_15_min="16389"/> </rates> <gauges> <gauge name="ram_utilization" current="12"/> <gauge name="total_utilization" current="7"/> <gauge name="cpu_utilization" current="3"/> <gauge name="av_utilization" current="0"/> <gauge name="case_utilization" current="0"/> <gauge name="bm_utilization" current="0"/> <gauge name="disk_utilization" current="0"/> <gauge name="resource_conservation" current="0"/> <gauge name="log_used" current="10"/> <gauge name="log_available" current="333G"/> <gauge name="conn_in" current="4"/> <gauge name="conn_out" current="6"/> <gauge name="active_recips" current="6"/> <gauge name="unattempted_recips" current="6"/> <gauge name="attempted_recips" current="0"/> <gauge name="msgs_in_work_queue" current="0"/> <gauge name="dests_in_memory" current="97"/> <gauge name="kbytes_used" current="94"/> <gauge name="kbytes_free" current="71303074"/> <gauge name="msgs_in_policy_virus_outbreak_quarantine" current="0"/> <gauge name="kbytes_in_policy_virus_outbreak_quarantine" current="0"/> <gauge 
name="reporting_utilization" current="1"/> <gauge name="quarantine_utilization" current="1"/> </gauges> </status>" I tried using some examples posted here, adding the XML data into an XMLData field and then using spath to extract the data in question, but it tells me the XMLData content is not properly formatted. What am I missing here? -marc
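One thing worth checking: the payload shown above is wrapped in literal double quotes, and spath complains about formatting on any input that is not well-formed XML. A quick way to sanity-check the payload outside Splunk (plain Python, with a trimmed-down sample of the Ironport document; variable names are mine):

```python
import xml.etree.ElementTree as ET

# As pasted, the status document arrives wrapped in literal quotes.
raw = "\"<status build='phoebe 13.5.1-277'><gauges><gauge name='ram_utilization' current='12'/></gauges></status>\""

# Strip the wrapping quotes before parsing; spath would need the same
# cleanup first, e.g. | eval XMLData=trim(XMLData, "\"")
xml_text = raw.strip().strip('"')

root = ET.fromstring(xml_text)
gauges = {g.get("name"): g.get("current") for g in root.iter("gauge")}
```

If `ET.fromstring` accepts the cleaned text, spath should too; if it still raises, the parse error it reports points at the actual formatting problem.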
Hi, I want to mask logs at index time, but the replacement characters ("X" here) should occupy the same number of characters as the original string. My masking requirement is that, for the value within [], only the last 4 characters should be visible. Example for the logs below:

2020-08-18T13:17:43,990 [Engine 1] TRACE log data V01 [1|12345678]
2020-08-18T13:17:44,979 [Engine 2] TRACE log data V02 [2|A35453DFDF65]

The indexed logs should be:

2020-08-18T13:17:43,990 [Engine 1] TRACE log data V01 [1|XXXX5678]
2020-08-18T13:17:44,979 [Engine 2] TRACE log data V02 [2|XXXXXXXXDF65]

Currently I am using a SEDCMD as below, but it substitutes a static number of X characters:

s/(TRACE\slog\s+data\s+V\d+\s+\[\d\|)(\w+)(\w{4})/\1XXXXXX\3/g

Is there a way to replace a string at index time with another string while maintaining the character count?
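A length-preserving variant of the SEDCMD approach: since these patterns are PCRE, a zero-width lookahead can mask one character at a time, so the number of replacements follows the input length. Sketched in Python first to show the regex; the corresponding props.conf line would be `SEDCMD-mask = s/\w(?=\w{4,}\])/X/g` (an assumption, untested against your exact sourcetype):

```python
import re

# Mask every word character that still has at least four word characters
# between it and the closing "]" -- this leaves exactly the last four
# characters visible and keeps the output the same length as the input.
MASK = re.compile(r"\w(?=\w{4,}\])")

def mask(line):
    return MASK.sub("X", line)
```

The `\]` anchor keeps the substitution from touching text outside the bracketed value, since nothing else in the sample lines has four word characters immediately before a `]`.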
Hi Team, I plan to develop a default home dashboard for all users in Splunk that shows information about their level of access (which indexes they have access to, plus information related to their searches and so on). My question: these metrics vary from user to user, so how do I parameterise this for each user who logs in to Splunk? Or, where can I get the user id of the logged-in user to use in the dashboard search query? Thanks for the help. @niketn @gaurav_maniar
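In case it helps as a starting point: Simple XML dashboards expose the logged-in user as the predefined token $env:user$, and a search can also fetch it at runtime (a sketch; assumes the user's role is allowed to call the REST API):

```
| rest /services/authentication/current-context splunk_server=local
| table username roles
```

The username (or the $env:user$ token) can then drive the per-user filters in the other dashboard searches.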
I got the above result from my Splunk query:

index="cx_aws" source="notifications-service" | stats count by tokenValidatorInfo, requestValidationRequired, requestPayloadValidationRequired, responsePayloadValidationRequired, aopUsed | rename tokenValidatorInfo as "TOKENVALIDATION" | rename requestValidationRequired as "REQUESTVALIDATION" | rename requestPayloadValidationRequired as "REQUESTPAYLOADVALIDATION" | rename responsePayloadValidationRequired as "RESPONSEPAYLOADVALIDATION" | rename aopUsed as AOP | fields TOKENVALIDATION, REQUESTVALIDATION, REQUESTPAYLOADVALIDATION, RESPONSEPAYLOADVALIDATION, AOP

But I want the result to look like this:

TOKENVALIDATION:true
REQUESTVALIDATION:false
REQUESTPAYLOADVALIDATION:false
RESPONSEPAYLOADVALIDATION:true
AOP:false

Could anyone help me?
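One sketch of the missing reshaping step (assuming the stats/rename pipeline returns a single row): transpose flips that row into name/value pairs, one field per row:

```
index="cx_aws" source="notifications-service"
| stats count by tokenValidatorInfo, requestValidationRequired, requestPayloadValidationRequired, responsePayloadValidationRequired, aopUsed
| rename tokenValidatorInfo AS TOKENVALIDATION, requestValidationRequired AS REQUESTVALIDATION, requestPayloadValidationRequired AS REQUESTPAYLOADVALIDATION, responsePayloadValidationRequired AS RESPONSEPAYLOADVALIDATION, aopUsed AS AOP
| fields TOKENVALIDATION REQUESTVALIDATION REQUESTPAYLOADVALIDATION RESPONSEPAYLOADVALIDATION AOP
| transpose
| rename column AS field, "row 1" AS value
```

If the stats output has more than one row, transpose emits additional "row 2", "row 3", ... columns instead, so the single-row assumption matters.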
Hello. Please help: where can I download Splunk DB Connect for Splunk Enterprise version 6.3.3.4?
Before a change was made, data was being sent to Splunk in the form { %a | %b | %c | %d }. After the change, more data is being sent, but the new fields were placed in the middle of the original order: { %a | %b | %e | %f | %c | %d }. This causes a conflict when mapping the fields from before and after the change, affecting dashboard graphs etc. Is there any way to synchronize the two without having to reformat the order of the data?
I have successfully used appendcols to gather the percentages of successful and failed scans. I would like to turn this data into a pie chart for my dashboard. When I click Visualization -> Pie Chart, it is all one color.

Here is my search:

index="secops" sourcetype="tenable:sc:vuln" plugin_id=19506 pluginText!="*Host_Scan*" | dedup ip | search pluginText="*Credentialed checks : no*" | stats count(plugin_id) as failed_scans | appendcols [search index="secops" sourcetype="tenable:sc:vuln" plugin_id=19506 pluginText!="*Host_Scan*" | dedup ip | search pluginText="*Credentialed checks : yes*" | stats count(plugin_id) as successful_scans] | eval total_scans=(failed_scans+successful_scans) | eval failed_percent=(failed_scans/total_scans*100) | eval failed_percent=round(failed_percent,2) | eval success_percent=(successful_scans/total_scans*100) | eval success_percent=round(success_percent,2) | table success_percent failed_percent

Thank you in advance.
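A pie chart wants one row per slice (a label column and a value column), while the search above ends with a single row of two columns, so everything lands in one slice. A sketch of a reshaping tail to append after the final table command (the renamed field names are mine):

```
| table success_percent failed_percent
| transpose
| rename column AS result, "row 1" AS percent
```

This yields two rows (success_percent and failed_percent) with their values, which the pie chart renders as two colored slices.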
I have the Tenable TA installed and the data is getting into Splunk correctly; however, when looking at the logs, the field pluginText is not parsed out correctly. I assume it is because of the additional markup in that section of the logs (<plugin_output>), but I do not know how to break out all the other sub-fields.

patchPubDate: -1
pluginID: 19506
pluginInfo: 19506 (0/6) Nessus Scan Information
pluginModDate: 1591977600
pluginName: Nessus Scan Information
pluginPubDate: 1125072000
pluginText: <plugin_output>Information about this scan : Nessus version : 8.9.0 Plugin feed version : 202008150609 Scanner edition used : Nessus Scan type : Normal Scan policy used : 95a08a01-72d2-5765-b9ac-e3abc775c2ad-7940724/Copy of Corp Advanced Scan PoC Scanner IP : 10.32.34.182 Port scanner(s) : nessus_syn_scanner Port range : sc-default Thorough tests : no Experimental tests : no Paranoia level : 1 Report verbosity : 1 Safe checks : yes Optimize the test : yes Credentialed checks : no Patch management checks : None CGI scanning : disabled Web application tests : disabled Max hosts : 30 Max checks : 5 Recv timeout : 5 Backports : None Allow post-scan editing: Yes Scan Start Date : 2020/8/17 6:26 EST Scan duration : 1533 sec </plugin_output>
plugin_id: 19506
port: 0
protocol: TCP
recastRisk: false

I would like Splunk to create fields for Scan Start Date, Scan duration, and so on.
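Since the plugin output is a series of "Key : value" lines, a generic first-colon split recovers the sub-fields. A sketch in plain Python to show the logic (it assumes the raw event keeps the line breaks of the original Nessus output, which the paste above flattens; in SPL the equivalent would be a rex with max_match=0 or an extract with a colon kvdelim against pluginText):

```python
def parse_plugin_text(text):
    """Split Nessus 'Key : value' plugin output lines into a dict."""
    pairs = {}
    for line in text.splitlines():
        # Split on the first colon only, so values such as
        # "2020/8/17 6:26 EST" keep their internal colons.
        key, sep, value = line.partition(":")
        if sep and value.strip():
            pairs[key.strip()] = value.strip()
    return pairs

# A trimmed sample of the plugin output shown above.
sample = (
    "Nessus version : 8.9.0\n"
    "Credentialed checks : no\n"
    "Scan Start Date : 2020/8/17 6:26 EST\n"
    "Scan duration : 1533 sec"
)
fields = parse_plugin_text(sample)
```

Header lines with an empty value (like "Information about this scan :") are skipped by the value check, so only real key/value pairs come back.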