All Topics



I want to create a Splunk alert for when there are no transactions for 30 minutes continuously. Kindly assist.
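A minimal sketch of one way to do this (the index, sourcetype, and trigger settings below are placeholder assumptions, not a confirmed answer): schedule a search over the last 30 minutes and have the alert fire when it returns a row, i.e. when the transaction count is zero:

index=your_index sourcetype=your_transactions earliest=-30m@m latest=@m
| stats count
| where count=0

Save it as an alert on a 30-minute (or tighter) cron schedule with the trigger condition "number of results > 0".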
Hi all, we have migrated the HF where the DB Connect app was installed, and events from the DB Connect app on the new HF now have a different timestamp: one hour is missing compared to the server time. Both HFs use the same indexers. There is no TZ configured in props.conf on the indexers. These configurations in the DB Connect app did not work:
1. Configuration -> Databases -> Connections -> "your connection" (Timezone dropdown)
2. Adding the following to the JVM options in the configuration tab of the DB Connect app: -Duser.timezone=GMT
New HF:
1. The time should be 3:30, as the server/HF has EDT time 3:30 (this works correctly on the old HF, where the time is t-6, not t-7). If this is not the time from the server/when the event was created, what is it then? I am confused here. 9:30 is OK, as we are CET.
2.-3. (screenshots)
Old HF:
1.-2. (screenshots)
3. timedatectl (I took the screenshot 9 minutes later)
Thank you for every idea.
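One thing worth checking, as a hedged sketch rather than a confirmed fix: timestamp parsing happens on the HF, so a TZ override in props.conf on the new HF itself (not on the indexers) for the DB Connect sourcetype may account for the missing hour. The sourcetype name and zone below are placeholders:

# props.conf on the new HF (sketch)
[your:dbconnect:sourcetype]
TZ = America/New_York

Restart the HF after the change and compare _time against the raw event time; whether this applies depends on how DB Connect assigns the event timestamp in your input.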
How can I bring the previous value of a field into the current event? Could someone help me, please?
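If the goal is to pull the prior event's value of a field into the current event (my assumption about the question), autoregress or streamstats are the usual tools; a minimal sketch, where myfield is a placeholder:

... | autoregress myfield AS previous_myfield

or equivalently:

... | streamstats current=f window=1 last(myfield) AS previous_myfield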
I have a need for approximate statistics/metrics and am currently using Event Sampling, which drastically speeds up the queries. For queries that calculate averages this works great, but I also need to do counts. If you set Event Sampling to, for example, 1:100, then Splunk seems to look at every 100th event, which is also reflected in how many events are matched when doing 1:100 vs 1:1. Example count with and without sampling:
1:100 = 26311
1:1 = 2623658
1:100 scaled up to 1:1 = 2631100
Diff = 7442, which is 0.3%
The time period was a previous hour (not the latest hour), so as not to have incoming events affect the count. A 0.3% difference is perfectly OK for my purpose. Am I thinking about this correctly, or is there any risk of much bigger differences in the count (after scaling it up)?
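For reference, scaling the sampled count back up can be done inline; a tiny sketch, where the multiplier must match the sampling ratio chosen in the UI and the index name is a placeholder:

index=your_index
| stats count
| eval estimated_count = count * 100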
We have Splunk MISP42 installed, which was working until a few days ago. However, after installing the Splunk Security Essentials app, not only did the SSE app not work from the get-go, but it also looks like it messed up something that causes the MISP42 app "config" page not to load, along with the connection to MISP. Since then, I have tried many things, like uninstalling the SSE app, updating the MISP app to the latest version from 4.0.1, and then reverting back to 4.0.1, which had been working fine. But despite all this, the issue is still there. I noticed a similar proxy-related message for the Palo Alto add-on in the splunkd logs, which suggests this is more of a Splunk internal issue than a MISP42 one.
Splunk Core - 8.1.5
Issue is on a single instance
Splunk ES version 7.0.1
Can someone please help fix this?
09-28-2022 00:05:06.486 +0000 ERROR AdminManagerExternal - Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [500]: Internal Server Error -- Traceback (most recent call last):\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/connectionpool.py", line 667, in urlopen\n self._prepare_proxy(conn)\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/connectionpool.py", line 930, in _prepare_proxy\n conn.connect()\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/connection.py", line 316, in connect\n self._tunnel()\n File "/opt/splunk/lib/python3.7/http/client.py", line 927, in _tunnel\n message.strip()))\nOSError: Tunnel connection failed: 403 Forbidden\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/requests/adapters.py", line 449, in send\n timeout=timeout\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/connectionpool.py", line 725, in urlopen\n method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/util/retry.py", line 439, in increment\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /servicesNS/nobody/misp42splunk/configs/conf-misp42splunk_instances/_reload (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Forbidden')))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/splunktaucclib/rest_handler/handler.py", line 117, in wrapper\n for name, data, acl in meth(self, *args, **kwargs):\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/splunktaucclib/rest_handler/handler.py", line 175, in all\n self.reload()\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/splunktaucclib/rest_handler/handler.py", line 259, in reload\n action='_reload',\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/splunklib/binding.py", line 289, in wrapper\n return request_fun(self, *args, **kwargs)\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/splunklib/binding.py", line 71, in new_f\n val = f(*args, **kwargs)\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/splunklib/binding.py", line 679, in get\n response = self.http.get(path, all_headers, **query)\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/splunklib/binding.py", line 1183, in get\n return self.request(url, { 'method': "GET", 'headers': headers })\n File 
"/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/splunklib/binding.py", line 1241, in request\n response = self.handler(url, message, **kwargs)\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/splunk_rest_client.py", line 145, in request\n verify=verify, proxies=proxies, cert=cert, **kwargs)\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/requests/api.py", line 60, in request\n return session.request(method=method, url=url, **kwargs)\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/requests/sessions.py", line 533, in request\n resp = self.send(prep, **send_kwargs)\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/requests/sessions.py", line 646, in send\n r = adapter.send(request, **kwargs)\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/requests/adapters.py", line 510, in send\n raise ProxyError(e, request=request)\nsolnlib.packages.requests.exceptions.ProxyError: HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /servicesNS/nobody/misp42splunk/configs/conf-misp42splunk_instances/_reload (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Forbidden')))\n". See splunkd.log for more details.          
I am trying to use a timestamp field from a lookup .csv as the earliest time marker, but it will not set the value of earliest when the report runs. Can you help, please? The earliest and latest always use the default preset value.
index=a4_designeng_generic_app_audit_prd sourcetype=cba:designeng:nonstop:syslog
    [| inputlookup Production_Health_Status.csv
    | tail 1
    | eval earliest=Status_Check_Timestamp    <--- this value is being set each time the report runs.
    | fields earliest ]
! Invalid value "09/28/2022 13:06:00" for time term 'earliest'
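The error happens because an earliest value returned from a subsearch must be epoch time (or a format the time parser accepts), and "09/28/2022 13:06:00" is neither. A sketch that converts the lookup value to epoch first, assuming the timestamp format shown in the error message:

index=a4_designeng_generic_app_audit_prd sourcetype=cba:designeng:nonstop:syslog
    [| inputlookup Production_Health_Status.csv
    | tail 1
    | eval earliest=strptime(Status_Check_Timestamp, "%m/%d/%Y %H:%M:%S")
    | fields earliest ]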
I get a rex error while extracting fields with comma delimiters. For a lot of the fields the value is NULL (field1=NULL, field2=NULL ... field4=value ...). Why is the rex error occurring? Error message: has exceeded the configured depth_limit, consider raising the value in limits.conf. What is the solution to resolve this?
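The quickest route, as the message itself suggests, is raising the limit (a sketch; 10000 is just an example value, and very deep backtracking usually means the regex itself is worth simplifying, for example with a delimiter-based extract or split instead of one long pattern):

# limits.conf
[rex]
depth_limit = 10000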
Hello, I am currently using the new Splunk JSON & React framework to create dashboards and use custom visualizations: https://splunkui.splunk.com/Packages/dashboard-docs/ However, one big issue I have is that I cannot create dashboards using Dashboard Studio and include custom visualizations. Only the default Splunk visualizations show up in the toolbar when adding a visualization. Is there a way to include custom visualizations in Dashboard Studio? This would be really useful and would make it easier to create & maintain dashboards without needing to go into a React app for changes. Also, what do these props mean?

/**
 * visualization can be added through the toolbar
 */
CustomViz.includeInToolbar = true;

/**
 * can switch to this visualization through the editor viz switcher
 */
CustomViz.includeInVizSwitcher = true;

I did not notice any change of behavior after setting them to true and was not able to do any editing. Thanks, Lohit
<form theme="dark"> <label>Test stats</label> <fieldset submitButton="false" autoRun="false"> <input type="time" token="globalTime" searchWhenChanged="true"> <label>Time Period</label> <default>... See more...
<form theme="dark"> <label>Test stats</label> <fieldset submitButton="false" autoRun="false"> <input type="time" token="globalTime" searchWhenChanged="true"> <label>Time Period</label> <default> <earliest>-5m@m</earliest> <latest>now</latest> </default> </input> </fieldset> <row> <panel> <title>By Service Stats</title> <input type="dropdown" token="serviceKey" searchWhenChanged="true"> <label>Service</label> <choice value="index=test Service.Key=test1servie cf_app_name=test-1*">test1servie</choice> <choice value="index=test Service.Key=test2servie cf_app_name=test-2*">test2servie</choice> <default>test1servie</default> <initialValue>test1servie</initialValue> </input> <table> <search> <query>$serviceKey$* AND my search query here</query> <earliest>$globalTime.earliest$</earliest> <latest>$globalTime.latest$</latest> <refresh>5m</refresh> <refreshType>delay</refreshType> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> <row> <panel> <title>By Service TimeChart</title> <table> <search> <query>$serviceKey$* Header.Type="Inbound" msg.test.Flow-Type="*" |timechart span=15m count by msg.test.Flow-Type</query> <earliest>$globalTime.earliest$</earliest> <latest>$globalTime.latest$</latest> <refresh>5m</refresh> <refreshType>delay</refreshType> </search> <option name="count">20</option> <option name="dataOverlayMode">none</option> <option name="drilldown">none</option> <option name="percentagesRow">false</option> <option name="rowNumbers">false</option> <option name="totalsRow">false</option> <option name="wrap">true</option> </table> </panel> </row> <row> <panel> </row> </form>
Hello All,
I am new to Splunk and trying to create an executive-level dashboard for a few of our enterprise applications. These applications produce very large volumes of data, around 100 GB/day, and my normal indexed searches are taking forever and sometimes timing out as well. I need some guidance on how I could achieve faster/optimized searches. I tried using tstats, but I am running into problems because the data is not fully structured: I am not able to run aggregate functions on the response times, since that data has the string "ms" at the end or is not in KV form. See the examples below.
Data 1:
2022-09-11 22:00:59,998 INFO -(Success:true)-(Validation:true)-(GUID:68D74EBE-CE3B-7508-6028-CBE1DFA90F8A)-(REQ_RCVD:2022-09-11T22:00:59.051)-(RES_SENT:2022-09-11T22:00:59.989)-(SIZE:2 KB)-(RespSent_TT:0ms)-(Actual_TT:938ms)-(DB_TT:9ms)-(Total_TT:947ms)-(AppServer_TT:937ms)
Data 2:
09/27/2022 16:34:57:998|101.123.456.789|106|1|C97EC2DA10C64F64A83C87AEEC1CDDBE703A546E1B554AD1|POST|/api/v1/resources/ods-passthrough|200|97
Data 2 fields: date_time|client_ip|appid|clientid|guid|http_method|uri_path|http_status_code|response_time
Need help/suggestions on how to achieve faster searches.
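A sketch of one way to get numeric response times out of Data 1 at search time (the rex patterns are assumptions based on the sample line, and the index/sourcetype names are placeholders), after which avg() and percentile functions work normally:

index=your_index sourcetype=your_sourcetype
| rex "\(Total_TT:(?<total_tt_ms>\d+)ms\)"
| rex "\(DB_TT:(?<db_tt_ms>\d+)ms\)"
| stats avg(total_tt_ms) AS avg_total_ms perc95(total_tt_ms) AS p95_total_ms avg(db_tt_ms) AS avg_db_ms

For an executive dashboard over roughly 100 GB/day, the usual pattern is to run an extraction like this in a scheduled summary search, or accelerate a data model on the extracted fields, so the dashboard reads a small summary instead of the raw index.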
In syslog-ng, I don't want to read the data and store it. How do you do that?
I would like to use props.conf and/or transforms.conf to parse data coming from a generic single-line log file, using regex to search for "Error" or "Notice". I tested my regex on regex101, and the regex seems OK.
regex = (?<=Error)(.*$)
I do have a sourcetype for the incoming data. What should I be looking for, and which files should I edit to allow this? Thanks, eholz1
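A minimal sketch of the search-time route (the sourcetype and field names below are placeholders; note that Splunk field extraction needs a named capture group, which also lets you drop the lookbehind):

# props.conf
[your:sourcetype]
EXTRACT-after_error = Error(?<error_message>.*)
EXTRACT-after_notice = Notice(?<notice_message>.*)

If the goal is instead to route or filter events at index time, that is where transforms.conf comes in: a TRANSFORMS- reference in props.conf pointing at a transforms.conf stanza with a REGEX and DEST_KEY.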
The company I work for paid for Splunk Observability Cloud. We got a SKU number and an entitlement document, but we still don't have access to https://app.us1.signalfx.com/#/home.
- We had a trial beforehand that our Splunk team recommended we get so that we could learn the Splunk Observability Cloud tool. We can still access the trial, but we can't access any new account that we paid for, and the trial hasn't converted into a full account. (I don't know what is supposed to happen; there is no billing page anywhere.)
- We reached out yesterday to our Splunk team after we got a SKU from sales but didn't know how to apply it. They said they would look into the issue.
- They looked into it and told me to set up a Splunk account. I had already set up a Splunk account with my own email (let's say bill@conglomo.org) and the IT email (it@conglomo.org), and I didn't see anything about payment, billing, etc.
- I gave them screenshots of what I saw, showing the Splunk dashboard for both my account and the IT account. They said they hadn't set up an entitlement document for us yet.
- So they set up and sent over an entitlement document, then told me to log in again and see if I noticed anything.
- However, NOTHING was different: both accounts still show trial in Splunk Observability, and there is nothing new on the normal Splunk dashboard webpage.
- I asked them where specifically I can look for billing, entitlement, or the SKU in the Splunk dashboard or on the Observability Cloud website; they said to call support.
- I tried calling support twice; the automated phone system hung up on me.
At this point, I just want to open a case somewhere online where someone can help me. Is that possible?
Hi Team, we are planning to upgrade Splunk from version 8.2 to 9.0. As part of this, we installed the upgrade readiness app and ran the scan. All the apps and other configs were found to be compatible with 9.0, but two of the system config checks failed:
1. MongoDB TLS and DNS validation check.
2. Search peer SSL config check.
Please suggest how we can modify/upgrade the system config so that our environment will be ready for Splunk version 9.0. Thanks in advance!
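For the search peer SSL check, a hedged sketch of the kind of hardening such checks tend to flag (an assumption; match it against the exact failure text in your scan results and test on a non-production instance first):

# server.conf (sketch)
[sslConfig]
sslVerifyServerCert = true
sslRootCAPath = /opt/splunk/etc/auth/cacert.pem

The MongoDB (KV store) TLS check is typically about the KV store using the server's TLS certificates; the scan output should name the specific settings it wants changed.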
Hi Team, I am running Splunk version 8.2.2 and want to upgrade to Splunk 9.0. Please address the queries below, as I am performing this activity for the first time.
1. How do I download the licensed installer from the Splunk website, as it only offers the product with a 60-day trial?
2. What pre- and post-upgrade steps are required?
3. Which directories should we back up in case of bad luck?
Thanks in advance!
Hi all, I have K8s version 1.23.5 / cluster-agent:21.2.0. I'm getting this error:
[INFO]: 2022-09-27 17:36:53 - podmonitoring.go:92 - No pods to register
[INFO]: 2022-09-27 17:36:53 - containermonitoringmodule.go:455 - Either there are no containers discovered or none of the containers are due for registration
[INFO]: 2022-09-27 17:36:53 - agentregistrationmodule.go:122 - Registering agent again
[INFO]: 2022-09-27 17:36:53 - agentregistrationmodule.go:145 - Successfully registered agent again lan-k8s-desa
[INFO]: 2022-09-27 17:37:33 - nodemetadata.go:47 - Metadata collected for 3 nodes
Any help appreciated. Thanks.
How can I delete my post?
Below is the search I am using. I am joining two indexes and then calculating the difference between two time fields, Last_Boot_Time and log_time, but I am unable to get the difference.

index=preos host=*
| stats values(Boot_Time) as Last_Boot_Time values(SN) as SN VALUES(PN) AS PN VALUES(VBIS) AS NV_VBIS VALUES(NV) AS NV values(PCI) as PCI BY id host
| fillnull value=clear
| search SN!=clear PN!=clear NV_VBIS!=clear NV!=clear
| fields host id Last_Boot_Time PCI NV_VBIS NV_DRIVER PN SN
| sort Last_Boot_Time
| join
    [search index=syslog
    | search Error_Code="***"
    | stats count by host _time PC Error_Code pid name Log_Message
    | eval log_time=strftime(_time,"%Y-%m-%d %H:%M:%S")
    | table log_time host PC Error_Code pid name Log_Message count
    | sort by -log_time
    | dedup pid]
| eval diff=log_time-Last_Boot_Time

Following are sample events. For index=preos (here, the boot time is considered the timestamp):
2022-09-22T13:20:38.713211-07:00 preo log-inventory.sh[24193]: Boot timestamp: 2022-09-22 13:09:59
For index=syslog:
2022-09-22T11:51:34.272862-07:00 preo kernel: [74400.062429] NVRM: Xid (PCI:xxx): 119, pid=17993, name=che_mr_ent, Timeout waiting for on 76 (P_RM_CONTROL) (0x20800a4c 0x4).
Thanks in advance
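The subtraction returns nothing because both operands are strings, not epoch numbers: log_time was built with strftime, and Last_Boot_Time is text taken from the event. A sketch of the fix, assuming the timestamp formats shown in the sample events, is to convert both to epoch seconds before subtracting:

| eval diff = strptime(log_time, "%Y-%m-%d %H:%M:%S") - strptime(Last_Boot_Time, "%Y-%m-%d %H:%M:%S")

(Or drop the strftime in the subsearch and carry _time through as epoch, so only Last_Boot_Time needs converting.)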
Has anyone done a playbook for CrowdStrike services stopped? Basically, querying Splunk for the host name, etc.? If so, can you please share how you have done this?