All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi all, I've been trying to get an EXTRACT to work in a TA that someone made for me, and after much searching I have narrowed the problem down to the fact that the source field is "model.name". According to the props.conf docs, the source field name must consist of alphanumeric characters or underscores, and it also cannot be derived from a field alias.

[mysourcetype]
...
EXTRACT-CIM_category = ^(?<category>.*):: in model.name

What is the best way, therefore, to perform the regex extraction at search time on that field if I can't alias it? The docs say the source can come from a preceding EXTRACT, but what would the syntax be to rename the field to something like "model_name"? And how would that work, given that my source field will always contain the period, which is incompatible with EXTRACT? Many thanks in advance.
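One workaround worth considering (a sketch, not tested against this TA): since EXTRACT can't target a field name containing a period, a search-time EVAL in props.conf can read the dotted field by wrapping it in single quotes and perform the extraction with replace():

```
[mysourcetype]
# Sketch only: single quotes let eval reference the dotted field name.
# The regex mirrors the EXTRACT above; adjust it to the actual data.
EVAL-category = replace('model.name', "^(.*?)::.*$", "\1")
```

EVAL- statements run late in the search-time pipeline (after extractions and aliases), so 'model.name' should already exist when this evaluates.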
Recently I have completed these certifications:
1. Splunk 7.x Fundamentals 1 (IOD)
2. Splunk 7.x Fundamentals 2 (IOD)
3. Splunk 7.x Fundamentals 3 (IOD)
4. Advanced Searching and Reporting with Splunk 7.x (IOD)
How can I get the certification ID/certification URL for these certifications, so I can verify them as external certifications?
Hi,

I was trying to run 'reload deploy-server':

/opt/splunk/bin/splunk reload deploy-server
Your session is invalid. Please login.
Splunk username: admin
Password:
An authentication error occurred: Client is not authenticated.

What went wrong?
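For what it's worth, that prompt usually just means the CLI's cached session has expired; re-authenticating first, or passing credentials with the command, is a common workaround (hypothetical credentials below):

```
# Authenticate once for the CLI session
/opt/splunk/bin/splunk login -auth admin:yourpassword

# Or pass credentials inline with the command itself
/opt/splunk/bin/splunk reload deploy-server -auth admin:yourpassword
```

If the password is correct and this still fails, it may be worth checking that the admin account isn't locked out or LDAP/SAML-only.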
Hi,

Trying to configure an authentication connection between Splunk and ServiceNow using the ServiceNow add-on. I can confirm the connection to the ServiceNow server is open and operational, but I am still seeing "Unable to reach server at https://dev.connectedservices.homeaffairs.gov.au. Check configurations and network settings". Looking into the splunkd logs and splunk_ta_snow_main.log I see the following error:

Stack trace from python handler:
Traceback (most recent call last):
  File "C:\Program Files\Splunk\Python-2.7\Lib\site-packages\splunk\admin.py", line 94, in init_persistent
    hand.execute(info)
  File "C:\Program Files\Splunk\Python-2.7\Lib\site-packages\splunk\admin.py", line 594, in execute
    if self.requestedAction == ACTION_CREATE: self.handleCreate(confInfo)
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_snow\lib\splunktaucclib\rest_handler\admin_external.py", line 51, in wrapper
    for entity in result:
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_snow\lib\splunktaucclib\rest_handler\handler.py", line 116, in wrapper
    for name, data, acl in meth(self, *args, **kwargs):
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_snow\lib\splunktaucclib\rest_handler\handler.py", line 85, in wrapper
    check_existing(self, name),
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_snow\lib\splunktaucclib\rest_handler\endpoint\__init__.py", line 79, in validate
    self._loop_fields('validate', name, data, existing=existing)
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_snow\lib\splunktaucclib\rest_handler\endpoint\__init__.py", line 76, in _loop_fields
    return [getattr(f, meth)(data, *args, **kwargs) for f in model.fields]
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_snow\lib\splunktaucclib\rest_handler\endpoint\field.py", line 52, in validate
    raise RestError(400, self.validator.msg)
RestError: REST Error [400]: Bad Request -- Unable to reach server at https://d. Check configurations and network settings.

08-28-2020 14:55:41.197 +1000 ERROR AdminManagerExternal - Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [400]: Bad Request -- Unable to reach server at https://. Check configurations and network settings.".

I am hoping someone may be able to provide some assistance with this issue?
Passing a token to the dashboard using the query below is not working; the dashboard is stuck on the "search is waiting for input" message. The field below is JSON, so I am using spath to parse its fields. I have used the token in the panel title and it showed the right value, and the query below works fine without the token.

index="indextest" source="my source" sourcetype="ab-xyz"
| spath input=message
| UserName = $UserName_Token$
| table JobServiceId CreateDateTime Status UserName
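One thing that stands out: `| UserName = $UserName_Token$` is not a valid SPL pipeline stage on its own, which may be why the search never runs. A sketch of the usual form (assuming the token holds a plain username value) would be:

```
index="indextest" source="my source" sourcetype="ab-xyz"
| spath input=message
| search UserName=$UserName_Token$
| table JobServiceId CreateDateTime Status UserName
```

Separately, "search is waiting for input" often means the input that sets $UserName_Token$ has no default, so giving that input a <default> value in the dashboard XML is worth checking too.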
How can I get a complete list, with descriptions, of the correlation searches in the Splunk Enterprise Security app, including their sourcetype and severity?
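One common approach (a sketch; exact field names vary by ES version) is to query the saved-searches REST endpoint and filter on the correlation-search flag:

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.correlationsearch.enabled=1
| table title description alert.severity search
```

Severity may instead live under action.notable.param.severity depending on the ES version, and sourcetype is not a stored property of a correlation search, so it would need to be parsed out of the search string itself.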
Hi There:

I'd like to set up some panels that show/hide based on the value of the date/time range picker. Two different panels:
1. Show only when the time range picker is set to <= 30 days
2. Show only when the time range picker is set to > 30 days

I've tested a few "depends" set-ups with no luck. Any help would be greatly appreciated.
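A pattern that may work (an untested sketch; $tp$ is an assumed time-picker token name): run a hidden search that uses addinfo to compute the selected range in days, set/unset tokens in its <done> handler, and hang each panel's depends on those tokens:

```xml
<search>
  <query>| makeresults | addinfo
| eval range_days = (info_max_time - info_min_time) / 86400</query>
  <earliest>$tp.earliest$</earliest>
  <latest>$tp.latest$</latest>
  <done>
    <condition match="$result.range_days$ &lt;= 30">
      <set token="show_short">true</set>
      <unset token="show_long"></unset>
    </condition>
    <condition>
      <set token="show_long">true</set>
      <unset token="show_short"></unset>
    </condition>
  </done>
</search>
<row>
  <panel depends="$show_short$">...</panel>
  <panel depends="$show_long$">...</panel>
</row>
```

The idea is that an unset token hides any panel depending on it, so exactly one of the two panels is visible after each picker change.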
I have a saved search that does:

| from datamodel:"Performance.Storage"

But I am trying to make this saved search parameterized using:

| from datamodel:"$model$"

When I try to edit the search in the GUI, it throws this error:

Error in 'SearchOperator:datamodel': Error in 'DataModelEvaluator': Data model '$model$' was not found.

If I edit savedsearches.conf directly and change the SPL to use $model$, it runs with no problem and parameterizes the search accordingly. Is this a bug in the UI? I'm using Splunk 8.0.1.
I prepared a CSV for inputlookup to compare against the Splunk logs.

adhoc.csv:
Account,
test01,etc....
test02,etc....

My query:
index=msad sourcetype=msad [| inputlookup adhoc.csv | fields Account]

Search period: Last 24 hours.

I want to cross-check adhoc.csv against the matching logs: for those accounts that did not perform any authentication (whose logs are stored in index=msad) during the search period, show a zero value.
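One way to sketch this (assuming the CSV column Account matches the field name in the events) is to count per account, then append every lookup account with a zero so accounts with no events still appear:

```
index=msad sourcetype=msad [| inputlookup adhoc.csv | fields Account]
| stats count by Account
| append [| inputlookup adhoc.csv | fields Account | eval count=0]
| stats max(count) as count by Account
```

The final max(count) keeps the real event count where one exists and falls back to the appended 0 for accounts with no authentication events in the period.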
Hi. Does anyone know how to arrange a link list fieldset and a status indicator viz side by side inside a panel? Each link list corresponds to one status located in the right panel. My goal is to place them side by side.
Hi

I have a use case where there are 6 links in the navigation bar menu. One of the dashboards has restricted data, and a message is displayed telling simple users to contact the administrator. The navigation menu should not show the link to that particular dashboard for simple users; only power users should be able to see all the links.

Thanks
Hello, I am using HEC to send data from AWS (DynamoDB) to Splunk. I am getting an error: "ECONNREFUSED","errno":"ECONNREFUSED at TCPConnectWrap.afterConnect [as oncomplete]". Can anyone tell me a better method to do the task, or some advice to solve this issue?
OK, so this search is reading an input file looking for rows where the field ErrorCode has data populated in it. I am trying to count the occurrences of those errors, and if there are 10 or more consecutive errors I will trigger an alert. Here is the search:

| inputlookup myfile.csv
| eval _time=strptime(RequestDatetime,"%F %T")
| search (RequestDatetime>="2020-08-19" AND RequestDatetime<"2020-08-20")
| search (InfoSourceID="3" OR InfoSourceID="4") AND ErrorCode=*
| streamstats reset_after=(isnull(errorCode)) count
| stats latest(eval(if(count>=10,_time,NULL))) as _time

The ErrorCode field may or may not have data in it. The requirement is to count 10 or more consecutive errors and trigger an alert. The issue is that when testing I added some blank fields to see if the reset_after line would reset the count, and it did not.

For example, the column on the left works fine and triggers an alert. The one on the right also triggers an alert, but I don't want it to, because the errors are not consecutive.

ErrorCode (left): data, data, data, data, data, data, data, data, data, data
ErrorCode (right): data, null, data, null, data, null, data, null, data, null

Am I using streamstats correctly here? Thanks.
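Two things stand out as likely causes (the rewrite below is a sketch, not tested against this data). First, the filter ErrorCode=* removes the null rows before streamstats ever sees them, so reset_after has nothing to reset on. Second, SPL field names are case-sensitive, and the search writes errorCode where the field is ErrorCode. A variant that keeps the null rows might look like:

```
| inputlookup myfile.csv
| eval _time=strptime(RequestDatetime,"%F %T")
| where RequestDatetime>="2020-08-19" AND RequestDatetime<"2020-08-20"
| search InfoSourceID="3" OR InfoSourceID="4"
| streamstats count(eval(isnotnull(ErrorCode))) as errcount reset_after="(isnull(ErrorCode))"
| stats latest(eval(if(errcount>=10,_time,null()))) as _time
```

Note that reset_after takes its eval expression as a quoted, parenthesized string, and counting with count(eval(isnotnull(ErrorCode))) means blank rows don't inflate the count while still being available to trigger the reset.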
Hi All,

I have a few existing inputs with EventGen (v6.5.2) and they work perfectly on Splunk 8.0.5. The use case I am having trouble with is where:
a) The timestamp is not contained in the _raw data (the actual production input uses the current index time).
b) There are other timestamps in _raw (they are not related to _time; they are separate fields).
c) All events in the CSV sample file must be ingested each interval.

I have tried both sample mode and replay mode. Sample mode ingests the data but identifies different date/time values in _raw and uses those as _time, which is unexpected. This happens even with autotimestamp = false.

In replay mode, nothing is ingested, due to errors where it can't find the timeField (there is a time column in the CSV that I referenced).

So how can I ingest all events in the sample CSV file while ensuring _time is the current time, and not dependent on any timestamps in _raw?

Here is my config for mode = sample:

[ops_724events_ggprocdetail_sample]
disabled = false
mode = sample
interval = 60
sampletype = csv
autotimestamp = false
earliest = -1m
latest = now

The CSV sample file has the following columns: time,index,host,source,sourcetype,"_raw"
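A hedged guess for the replay-mode error: EventGen's timeField setting must name the CSV column that holds the event time, so with a column literally called time the stanza might need (sketch only, based on the column list above):

```
[ops_724events_ggprocdetail_sample]
disabled = false
mode = replay
sampletype = csv
timeField = time
```

This only addresses the "can't find the timeField" error; whether replay mode then stamps events with the current time rather than the column's values would still need to be verified.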
Hi, I am facing an issue with my add-on in Splunk. When I enable the property requireClientCert = true in server.conf, the add-on does not launch successfully, and the internal log suggests that it is an SSLV3_ALERT_HANDSHAKE_FAILURE. Can someone please let me know if I am missing any configuration? Thanks!!
"https://api.internal.t-mobile.com/customer-credit/v3/pre-screen-credit-offer/personal": Read timed out; nested exception is java.net.SocketTimeoutException: Read timed out]

I want to extract the following:

customer-credit/v3/pre-screen-credit-offer/personal

It always starts with the pattern ".com/" and ends with ":
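A rex sketch for that pattern (assuming the text is in _raw and the path never contains a double quote):

```
... | rex field=_raw "\.com/(?<endpoint>[^\"]+)\":"
| table endpoint
```

The character class [^"]+ stops the capture at the closing quote, so endpoint should hold customer-credit/v3/pre-screen-credit-offer/personal for the sample event above.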
Need help with a situation. Example table below:

column1,column2,column3,_time
1,2,3,21st
1,2,3,22nd
1,2,3,23rd
3,2,1,23rd
4,5,6,23rd

If, across multiple days/times, column1, column2 and column3 are the same (the 4th column is _time), then add a new field (count) with numbers incrementing by _time, like below. How can this be done? I need help with the query.

column1,column2,column3,_time,count
1,2,3,21st,1
1,2,3,22nd,2
1,2,3,23rd,3
3,2,1,23rd,1
4,5,6,23rd,1
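streamstats may be a good fit here: sort by _time, then count within each combination of the three columns (a sketch, assuming _time sorts in the order shown):

```
... | sort 0 _time
| streamstats count by column1 column2 column3
```

Each distinct (column1, column2, column3) group gets its own running count in time order, which matches the desired output table: 1,2,3 for the repeated rows and 1 for each singleton.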
Hi all,

My team is embarking on the summary indexing journey as our environment gets larger. We have various tenants in our environment who wish for their daily summary data to be synced up from midnight to midnight in various time zones (GMT, Pacific, Central, etc.). I have my personal account set to Pacific time. We had been told the best way to ensure you have no data overlap/gaps with summary indexing is to use the snap-to feature (the @d syntax) with the earliest and latest time modifiers. Ex:

[base search here] earliest=-1d@d latest=@d ... | [rest of search here]

What I'm trying to figure out is: if we have one tenant that wants us to run their summary searches from midnight to midnight GMT, and another tenant that wants us to run theirs from midnight to midnight PST, what is the best way to approach that?
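One detail that may matter here: snap-to modifiers like @d are evaluated in the time zone of the user who owns the scheduled search. A common (if clunky) approach is therefore one service account per tenant time zone, each owning that tenant's summary search. A sketch with a hypothetical stanza name:

```
# savedsearches.conf -- this search is owned by a service account whose
# Splunk user time-zone preference is set to GMT (assumption for this sketch)
[summary_tenantA_midnight_gmt]
dispatch.earliest_time = -1d@d
dispatch.latest_time = @d
```

A parallel stanza owned by a PST-configured account would then snap to PST midnight, with no change to the SPL itself.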
Hi @jkat54,

I have a very simple health check that returns "ok". A simple curl works fine with:

curl https://myhost:8041/local/check

But through Splunk/App I get this error:

HTTPSConnectionPool(host='myhost', port=8041): Max retries exceeded with url: /local/check (Caused by <class 'socket.gaierror'>: [Errno -2] Name or service not known)

Doing:

| curl method=get uri=https://myhost:8041/local/check debug=t

Any ideas?
Thanks
Chris