Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have the below values in a field named "FileList":

/dir1/dir2/f1.txt
/dir1/f2.csv
/dir1/dir2/dir3/xyzdir/f3.bat
f4.txt
/xvz/dir/f5.txt

I want an output which shows only the last segment, which is the actual filename, from the above "FileList" field, like this:

f1.txt
f2.csv
f3.bat
f4.txt
f5.txt
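What the question asks for is a basename operation: keep everything after the last `/`. In SPL this is typically done with a regular expression extraction on the field; as a sketch of the underlying logic (paths taken from the question), the same thing in Python looks like:

```python
import re

def basename(path):
    """Return everything after the last '/', i.e. the actual filename."""
    m = re.search(r"[^/]+$", path)
    return m.group(0) if m else path

paths = ["/dir1/dir2/f1.txt", "/dir1/f2.csv",
         "/dir1/dir2/dir3/xyzdir/f3.bat", "f4.txt", "/xvz/dir/f5.txt"]
print([basename(p) for p in paths])
# → ['f1.txt', 'f2.csv', 'f3.bat', 'f4.txt', 'f5.txt']
```

The `[^/]+$` pattern anchors at the end of the string, so it also handles values like `f4.txt` that contain no slash at all.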
Hi, I have a lookup which contains one column (named vanity_url) and around 800 rows. Something like this:

vanity_url
/checkout
/your-details
/billing

My Splunk logs have the events related to these rows in a field called requested_content. Some of them are present in the logs and some are not. I want to print the matched and non-matched values from the lookup in a table. Something like this:

requested_content present
/checkout yes
/your-details yes
/billing yes
/direct-debit no

I have tried something like this, but it doesn't seem to be working:

index=myapp_pp sourcetype=access_combined GET host="my-server-*"
| eval type="MainIndex"
| fields requested_content type
| appendpipe [| inputlookup vanity.csv | eval type="lookup" | rename vanity_url as requested_content | fields type requested_content ]
| stats dc(type) as pot, values(*) AS * by requested_content
| where pot=1 and type="lookup"

@to4kawa
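Stripped of SPL, the desired table is just a set-membership check: for each lookup row, report whether it appeared among the observed requested_content values. A minimal sketch of that logic in Python (values taken from the question):

```python
def presence_table(lookup_values, observed):
    """For each lookup row, report whether it appeared in the logs."""
    seen = set(observed)
    return [(v, "yes" if v in seen else "no") for v in lookup_values]

lookup = ["/checkout", "/your-details", "/billing", "/direct-debit"]
logged = ["/checkout", "/your-details", "/billing", "/checkout"]
for row in presence_table(lookup, logged):
    print(row)
```

Note the lookup, not the logs, drives the output rows, which is why the example above keeps `/direct-debit` with "no" even though it never appears in the logs.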
Hi, I am referring to this add-on: https://splunkbase.splunk.com/app/3088/ (version 3.0.0). I am having issues configuring the Google Cloud connector: when I try to add inputs like Pub/Sub or Storage connections, I get the error below. Can anyone help me out? I generated Google credentials in valid JSON format and enabled the proxy.

############
Unexpected error "" from python handler: "(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:676)'),)". See splunkd.log for more details.
#############
06-16-2020 15:54:28.793 +0800 ERROR AdminManagerExternal - Stack trace from python handler:
Traceback (most recent call last):
  File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 130, in init
    hand.execute(info)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 594, in execute
    if self.requestedAction == ACTION_LIST: self.handleList(confInfo)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunktalib/common/pattern.py", line 44, in __call__
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunk_ta_gcp/resthandlers/projects.py", line 38, in handleList
    res_mgr = grm.GoogleResourceManager(logger, config)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunk_ta_gcp/legacy/resource_manager.py", line 51, in __init__
    self._client = gwc.create_google_client(self._config)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunk_ta_gcp/legacy/common.py", line 210, in create_google_client
    client = discovery.build(config["service_name"], config["version"], http=http, cache_discovery=False)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/3rdparty/oauth2client/util.py", line 137, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/3rdparty/googleapiclient/discovery.py", line 229, in build
    requested_url, discovery_http, cache_discovery, cache)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/3rdparty/googleapiclient/discovery.py", line 276, in _retrieve_discovery_doc
    resp, content = http.request(actual_url)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/httplib2shim/google_auth.py", line 190, in request
    self._request, method, uri, request_headers)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/3rdparty/google/auth/credentials.py", line 124, in before_request
    self.refresh(request)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/3rdparty/google/oauth2/service_account.py", line 334, in refresh
    access_token, expiry, _ = _client.jwt_grant(request, self._token_uri, assertion)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/3rdparty/google/oauth2/_client.py", line 153, in jwt_grant
    response_data = _token_endpoint_request(request, token_uri, body)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/3rdparty/google/oauth2/_client.py", line 105, in _token_endpoint_request
    response = request(method="POST", url=token_uri, headers=headers, body=body)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/httplib2shim/google_auth.py", line 117, in __call__
    url, method=method, body=body, headers=headers, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/httplib2_helper/httplib2_py2/httplib2/__init__.py", line 2135, in request
    cachekey,
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/httplib2_helper/httplib2_py2/httplib2/__init__.py", line 1796, in _request
    conn, request_uri, method, body, headers
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/httplib2shim/__init__.py", line 171, in _conn_request
    raise _map_exception(e)
SSLError: (SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:676)'),)
06-16-2020 15:54:28.793 +0800 ERROR AdminManagerExternal - Unexpected error "<class 'ssl.SSLError'>" from python handler: "(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:676)'),)". See splunkd.log for more details.
############################

I'd appreciate it if someone is able to help me. Regards, Karthik
Hey everyone. I am a newbie to Splunk and I am stuck on this problem. I have a column chart which shows data for a time range between two dates, e.g. 6/9/2020-6/23/2020, like in the pic below. The query I used for this is:

index=main sourcetype=timeline | eval dates=beginning_date."-".ending_date | stats count by dates, target | xyseries dates target count

Now, if I append another search to this which contains data on 6/15/2020, it treats 6/15/2020 as a separate date. In the above image, I want the line chart on date 2020-06-15 to go on top of the column chart, as that date lies between 2020-06-09 and 2020-06-23 (changed format of time); in other words, I want to overlay the line chart on top of the column chart. These are the fields I used:

beginning_date    ending_date
6/9/2020          6/23/2020

I want to create the query with these fields and not _time. Any help is appreciated, and if there is anything you didn't understand in my question, please let me know. Thanks a lot.
Hi all, in the Website Monitoring app we have added URLs from different regions, like EMEA, US, and APAC. We found that the response time of the US and APAC URLs is higher, as Splunk needs to ping those URLs and get the response from them. Because of the increased response time, their status is coming up as Failed. Also, when we ping a URL from within the same region, the response time is lower than the one shown in the app. Is there any way to sort out this issue?
Hi, I require a little help here, as I have spent a lot of time researching a solution without any luck. I have vehicle idling data like below:

MACHINE  STATUS
A        1
A        1
A        1
A        0
A        0
A        1
A        1

Can I get a consolidated view of the above data in the way below?

MACHINE  STATUS  COUNT
A        1       3
A        0       2
A        1       2

I want the data this way so that I know how long the machine was idling at various parts of the time range, rather than only how long the machine was idling in total during a filtered time. Any help will be much appreciated. Thank you.
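What the desired output describes is a run-length consolidation: collapse consecutive rows with the same (machine, status) into one row plus a count, without merging non-adjacent runs. In SPL this is usually approached with streamstats; as a sketch of the logic itself in Python (data from the question):

```python
from itertools import groupby

def consolidate(rows):
    """Collapse consecutive rows with identical (machine, status) into counts."""
    out = []
    for (machine, status), grp in groupby(rows, key=lambda r: (r[0], r[1])):
        out.append((machine, status, sum(1 for _ in grp)))
    return out

rows = [("A", 1), ("A", 1), ("A", 1), ("A", 0), ("A", 0), ("A", 1), ("A", 1)]
print(consolidate(rows))
# → [('A', 1, 3), ('A', 0, 2), ('A', 1, 2)]
```

The key point is that `groupby` only groups *adjacent* equal keys, which is exactly why the two separate STATUS=1 runs stay separate instead of being summed into one row of 5.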
Hi, I have just begun ingesting F5 logs. I am not using the modular inputs component at present and am only seeing ASM logs via syslog. Logs are being sent to a syslog server, and file monitoring is set to pull them into the Splunk indexer. But when searching, the logs don't seem to be separating as expected: "during index time, the add-on separates the data into more specific source types." I have an inputs.conf file on the rsyslog server, distributed by a universal forwarder:

[monitor:..........]
disabled = 0
host_segment = 5
index = f5
sourcetype = f5:bigip:syslog

I have removed the inputs from the indexer and have added the add-on to the search head as well. I am confused as to why the logs are not separating. Hoping someone can help. Cheers.
Hello, I have installed the add-on for McAfee: https://splunkbase.splunk.com/app/1819/ It says "Splunk Built", but since it hasn't been updated since 2018, some of the McAfee tables and products have changed their names, so the built-in query (for DB Connect) is not working. For example: VirusScan is now Endpoint Security. For myself, I have manually updated the query, but the app is still broken. How can we fix that and make it official? Where can I file a ticket? Thank you.
I want to forward all raw events from a client application to a specified HTTP Event Collector (HEC) endpoint/URL in an on-prem/self-hosted Splunk environment, but the client application only allows a URL to be specified; it does not allow setting the HEC token in an Authorization header for HTTP authentication, or including it in a basic authentication request. How can the raw events be ingested into on-prem/self-hosted Splunk using the HTTP Event Collector (HEC) input without an Authorization header? Is it possible to specify the HEC token as a query string parameter in the URL itself?
Good afternoon! After upgrading from version 8.0.1 to 8.0.4.1, the KV Store status is failed.

Failed to start KV Store process. See mongod.log and splunkd.log for details.
15.06.2020, 21:12:09 KV Store changed status to failed. KVStore process terminated.
15.06.2020, 21:12:08 KV Store process terminated abnormally (exit code 14, status exited with code 14). See mongod.log and splunkd.log for details.

./splunk show kvstore-status
This member:
  backupRestoreStatus : Ready
  disabled : 0
  guid : A5CBF13E-6100-40DA-8C51-541893E5A0A0
  port : 8191
  standalone : 1
  status : failed

/data/splunk/var/log/splunk/mongod.log
2020-06-16T05:25:17.315Z W CONTROL [main] net.ssl.sslCipherConfig is deprecated. It will be removed in a future release.
2020-06-16T05:25:17.323Z F NETWORK [main] The provided SSL certificate is expired or not yet valid.
2020-06-16T05:25:17.323Z F - [main] Fatal Assertion 28652 at src/mongo/util/net/ssl_manager.cpp 1214
2020-06-16T05:25:17.323Z F - [main] ***aborting after fassert() failure

/data/splunk/var/log/splunk/splunkd.log
ERROR KVStoreAdminHandler - An error occurred.
ERROR KVStorageProvider - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling ismaster on '127.0.0.1:8191']
ERROR KVStoreIntrospection - failed to get introspection data

Could you help me figure out how to fix this? Thanks in advance!
Hi, when I iterate through ResultsReaderXml, it takes 8-10 seconds to get through 500-700 events. This simple code takes 8-10 seconds:

ArrayList<Event> rows = new ArrayList<Event>();
for (Event eventObject : resultsReader) {
    rows.add(eventObject);
}

Please suggest what might cause this.
I have an alert created in Splunk. Can anyone please guide me on what settings have to be made in Edit Alert -> Trigger Alert -> Send Email so that, when the alert triggers: if the email has to be sent internally, it has the subject line [INTERNAL]{Subject line content} with the defined recipients; and if the email has to be sent outside the organisation, the subject line is {Subject line content} with its defined recipient list. Do I need to create two separate copies of the same alert, each with one of these two configurations defined in the Edit Alert settings, or can both conditions be saved in the configuration of a single alert?
I want to deploy a heavy forwarder on-prem and then set it up for SAML authentication with Okta. Okta has built-in integrations with Splunk: Splunk Cloud and Splunk Enterprise. Can I use either of these integrations to build the SAML configuration, or do I need to add this as a new app?
In a query that culminates in a curl command on a result, when the result set is empty, it's not possible to prevent the curl command from trying to execute. A workaround I am using is to set the urifield option rather than using uri; then, when there are no results and no urifield, the curl call just returns an error saying no URL was specified. However, it would be nice to be able to tell the curl command to do nothing when results.len = 0. Currently, when results is 0, it just goes ahead and tries to run the curl. My search runs as a saved search, so every iteration records an error when there are no results.
I think there may be a bug with the use of datafield in the POST operation with curl. Currently the curl.py Python code (line 259) does

data = json.loads(result[options['datafield']])

which creates a Python object called 'data' containing the parsed JSON fragment. In my example, my JSON payload is

{"devices": ["arn:aws:sns:ap-southeast-2:000000000000:endpoint/GCM/bcrm/cabf12dc-ec12-3af4-12e1-121a193ecc7b"], "message": {"title": "Reminder to complete your self checkup", "body": "Your last completed self checkup is to old. Please perform the self checkup before attending the workplace"}}

The curl_data_payload is then output with the debug=true setting as

{u'devices': [u'arn:aws:sns:ap-southeast-2:000000000000:endpoint/GCM/bcrm/cabf12dc-ec12-3af4-12e1-121a193ecc7b'], u'message': {u'title': u'Reminder to complete your self checkup', u'body': u'Your last completed self checkup is to old. Please perform the self checkup before attending the workplace'}}

When this gets passed into the requests.post call, it appears to send the string representation of the Python object rather than the JSON string itself. The receiving system fails to parse the received object, as it is not JSON any more. If I change the code to

data = str(result[options['datafield']])

then this sends OK, as the curl_data_payload is then

{"devices": ["arn:aws:sns:ap-southeast-2:000000000000:endpoint/GCM/bcrm/cabf12dc-ec12-3af4-12e1-121a193ecc7b"], "message": {"title": "Reminder to complete your self checkup", "body": "Your last completed self checkup is to old. Please perform the self checkup before attending the workplace"}}

I am not sure what the intention of the datafield option is if it always tries to JSON-parse the data with json.loads(); it seems to me that it will always fail this way when the data is valid JSON. Is this a bug, or just a problem with how I am using curl?
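The behaviour described can be reproduced without Splunk at all: the `str()` of a parsed dict is a Python repr (single quotes, `u''` prefixes on Python 2), not JSON, whereas `json.dumps()` round-trips cleanly. A minimal demonstration, using a stand-in payload rather than the real one from the post:

```python
import json

# Stand-in for the value of result[options['datafield']]
payload = '{"devices": ["arn:example"], "message": {"title": "hi"}}'
obj = json.loads(payload)

# repr of a dict uses single quotes, so a receiver cannot re-parse it as JSON:
try:
    json.loads(str(obj))
    reparsed = True
except ValueError:
    reparsed = False
print(reparsed)  # False: str(dict) is not valid JSON

# json.dumps() produces valid JSON that round-trips to the same object:
print(json.loads(json.dumps(obj)) == obj)  # True
```

This matches the symptom in the post: sending the parsed object's string form produces a payload the far end rejects, while sending the original JSON string (or `json.dumps()` of the parsed object) works.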
I'm configuring TA-Tomato to process logs from a DD-WRT router. I've successfully configured a data input to listen on UDP port 514 with source type "tomato", and it receives logs from my router just fine. However, this only works for basic logs; detailed logs showing both accepted and dropped packets overwhelm the router, and I want to capture detailed logs. My router has no problem logging detailed logs into a file, so I've configured the router to write its log to a directory I've made accessible on the network, and I've mounted this directory on the Splunk server. When I changed the data input to a file, the logs are not correctly parsed: I only see "tomato:dnsmasq-dhcp" sourcetypes, yet I've confirmed other logs are present in the file. Why are they not picked up?
Hi all, I installed Heat Map Viz and I use it: https://splunkbase.splunk.com/app/4460/ I want to change the x-axis label color because I want to use it in dark mode; with dark mode on, the x-axis (date) labels are difficult to see. But I have no idea how to change the label color. Does anyone know how to change the x-axis label color? (I know this is SVG.) Or does anyone know an option name for changing the x-axis label color (like heat-map-viz.heat-map-viz.yaxiswidth)? Thank you so much.
Background: I'm trying to create a monthly report that tracks how many terminals we add and how many terminals we remove at a property. We have two separate events that track these: RoomTerminalAdd and RoomTerminalRemove. Sometimes field reps need to troubleshoot these terminals and end up generating multiples of these events that don't actually indicate a full add or remove. I would like to "pair up" these events and evaluate the time differences based on the property, room number, and terminal address. If the time difference is below 15 minutes, I want to remove them from the final monthly count. Here's a visual to help explain what I'm trying to do. After removing the "pairs" that fit the <15-minute time difference, I want to get a total count of each event type separately: TerminalsAdded=## TerminalsRemoved=## I have used the transaction command in the past, but it won't work here as it removes any events that don't have a pair, and I need to keep those for my overall count. My next option would be streamstats, but I don't have any experience with that command and am not sure how to bring in the time difference for evaluating what to keep and what to remove from the search. Does anyone out there have any advice or tips on how to reach my end goal?
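The streamstats-style idea is: per (property, room, terminal) key, sort events by time, and whenever an Add and a Remove on the same key sit less than 15 minutes apart, drop both; everything left contributes to the final counts. A sketch of that pairing logic in Python, with a hypothetical key format and timestamps invented for illustration:

```python
from collections import defaultdict

PAIR_WINDOW = 15 * 60  # seconds

def filter_pairs(events):
    """events: (epoch_time, key, type) tuples, type in
    {'RoomTerminalAdd', 'RoomTerminalRemove'}.  Drop adjacent
    opposite-type events on the same key closer than 15 minutes;
    keep everything else (including unpaired events)."""
    by_key = defaultdict(list)
    for ev in sorted(events):
        by_key[ev[1]].append(ev)
    kept = []
    for key, evs in by_key.items():
        i = 0
        while i < len(evs):
            if (i + 1 < len(evs)
                    and evs[i + 1][2] != evs[i][2]
                    and evs[i + 1][0] - evs[i][0] < PAIR_WINDOW):
                i += 2          # troubleshooting pair: skip both events
            else:
                kept.append(evs[i])  # unpaired or far apart: keep for counts
                i += 1
    return kept

events = [
    (0,    "prop1/101/T1", "RoomTerminalAdd"),
    (300,  "prop1/101/T1", "RoomTerminalRemove"),  # 5 min after the Add: dropped
    (5000, "prop1/101/T1", "RoomTerminalAdd"),     # unpaired: kept
]
adds = sum(1 for e in filter_pairs(events) if e[2] == "RoomTerminalAdd")
print("TerminalsAdded=%d" % adds)  # → TerminalsAdded=1
```

Unlike transaction, the unpaired event survives the filter, which is the property the post says it needs for the overall count.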
Hi, I am pretty new to Splunk and have a question about showing, in one table, event counts from the last 7 days together with the first time each event was ever seen. I can use stats to get the 7-day or the all-time results, but cannot show both values together. I also want to display the first-ever occurrence of the event in the same table.

index="firewall" "confidence_level=high" action=Detect | stats count by protection_name

protection_name                                    count
Packet Sanity                                      632
Joomla Object Injection Remote Command Execution   47
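The table being asked for combines two aggregations over the same grouping key: a count restricted to a recent window, and a minimum timestamp over all time. As a sketch of that combining logic in Python (timestamps are made up for illustration):

```python
def summarize(events, now, window=7 * 86400):
    """events: (epoch_time, protection_name) pairs.  Return, per name,
    (count of events inside the window, first time ever seen)."""
    out = {}
    for ts, name in events:
        cnt, first = out.get(name, (0, ts))
        in_window = 1 if ts >= now - window else 0
        out[name] = (cnt + in_window, min(first, ts))
    return out

now = 1_000_000
events = [(100, "Packet Sanity"),          # long ago: first-seen, not counted
          (now - 3600, "Packet Sanity"),   # inside the 7-day window
          (now - 60, "Packet Sanity")]     # inside the 7-day window
print(summarize(events, now))
# → {'Packet Sanity': (2, 100)}
```

The point is that both values come out of a single pass grouped by name, rather than two separate searches whose results then have to be shown side by side.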
Good afternoon. I am fairly new to Splunk and am trying to figure out the best way to approach this. I am running the Windows TA add-on to monitor a small number of servers. I would like to identify servers with possible resource limitations, i.e. those that show "% _total Processor" times of greater than 80% within a 24-hour period, but sustained for 5 minutes or more. It is that 5-minute window that I am getting caught on. Most, if not all, servers will show more than 80% processor time in spurts during use, a minute or less, but it is the sustained processor times I want to capture, since those could indicate a resource issue. Any advice or guidance on a good approach to capture that information would be appreciated. Thank you.
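Detecting "above 80% for at least 5 consecutive minutes" is a run-detection problem over time-ordered samples: a spike that lasts one sample must be ignored, while a run of five or more over-threshold samples counts. A sketch of the detection logic in Python, assuming one sample per minute (the actual perfmon collection interval would depend on the inputs configuration):

```python
def sustained_high(samples, threshold=80.0, min_run=5):
    """samples: chronological %ProcessorTime readings, one per minute.
    Return (start_index, length) for each run of at least min_run
    consecutive samples strictly above threshold."""
    runs, start = [], None
    for i, v in enumerate(samples):
        if v > threshold:
            if start is None:
                start = i           # a new over-threshold run begins
        else:
            if start is not None and i - start >= min_run:
                runs.append((start, i - start))
            start = None            # short spike or normal load: reset
    if start is not None and len(samples) - start >= min_run:
        runs.append((start, len(samples) - start))  # run reaching the end
    return runs

cpu = [50, 85, 90, 40] + [95] * 6 + [30]
print(sustained_high(cpu))  # → [(4, 6)]
```

Note how the 85/90 spike at minutes 1-2 is discarded (only 2 samples), while the 6-minute run starting at minute 4 is reported.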