All Topics

Hey all, I'm trying to find a way to bulk delete containers via the API in SOAR Cloud. An issue during testing caused Splunk to create ~8,000 containers under one of my labels, and there's no way I'm going to sit here for an hour deleting them in the GUI. I've read this post: How-to-Delete-Multiple-container, but that really only points me to the single requests.delete option, which is very slow. I can bulk update containers to change their status using requests.post and a list of container IDs, but I don't see any way to bulk delete. For context, a for loop + requests.delete on each single container is actually slower than deleting them via the GUI. Am I missing it somewhere, or is this just not possible through the API?
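Since no bulk-delete endpoint is mentioned anywhere in the question, here is a hedged workaround sketch: parallelize the single-container requests.delete calls with a thread pool. The hostname, token, worker count, and ID range are placeholders; /rest/container/<id> is the same single-delete endpoint referenced above, and ph-auth-token is the SOAR API auth header.

# Sketch only: concurrent single deletes, not a true bulk endpoint.
import requests
from concurrent.futures import ThreadPoolExecutor

BASE = "https://example.soar.splunkcloud.com"  # placeholder hostname
HEADERS = {"ph-auth-token": "YOUR_API_TOKEN"}  # placeholder token

def delete_container(cid):
    # Same call as a lone requests.delete, just issued concurrently
    r = requests.delete(f"{BASE}/rest/container/{cid}", headers=HEADERS)
    return cid, r.status_code

container_ids = range(1, 8001)  # placeholder ID range
with ThreadPoolExecutor(max_workers=10) as pool:
    for cid, status in pool.map(delete_container, container_ids):
        if status != 200:
            print(f"container {cid}: HTTP {status}")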
Hi all, I need help extracting a new field for the user name that follows CORP\ in this message:

Message=Task Scheduler started "{B9F5A32A-A340-49C1-B620-8C7A439CA849}" instance of the "\Microsoft\Office\OfficeTelemetryAgentFallBack" task for user "CORP\s-ks4"

Thanks
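A minimal SPL sketch, assuming the raw field is Message and the target is everything after CORP\ inside the quoted user string (the extracted field name corp_user is illustrative):

... | rex field=Message "for user \"CORP\\\\(?<corp_user>[^\"]+)\""

For the sample event this yields corp_user=s-ks4.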
Hello, I'm currently ingesting both XML and non-XML Windows event logs. I want to know the impact of disabling XML rendering (renderXml) on my UF. I also want to know whether I'm getting duplicate raw events by rendering XML while non-XML and XML are ingested together. If so, and I then disable the XML format, will I lose data? (Some event IDs are being ingested only in XML format.)
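For reference, a minimal sketch of the UF setting in question (the stanza/channel name is just an example; renderXml is the standard Windows event log input option):

[WinEventLog://Security]
renderXml = false

Switching renderXml only changes how events are rendered from that point on; it does not remove or rewrite events already indexed in XML format.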
Hi Community, I need help extracting the values that appear in only one of the two fields produced by an appendcols command. Example of the Splunk output in table format:

1st_Field   2nd_Field
1111        2222
empty       3333
empty       1111

I am able to match 1111 using the lookup command, but I want to get only 2222 and 3333, since those are not present in 1st_Field.
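One hedged approach: restructure with append instead of appendcols so the two fields can be compared as sets (the base searches below are placeholders):

index=your_index your_first_search
| rename "1st_Field" as val
| eval src="first"
| fields val src
| append
    [ search index=your_index your_second_search
      | rename "2nd_Field" as val
      | eval src="second"
      | fields val src ]
| stats values(src) as sources by val
| where mvcount(sources)==1 AND sources=="second"

For the sample above this returns 2222 and 3333, the values present only in 2nd_Field.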
Hi Team, an important dashboard has been deleted by a user on our Search Head, which is hosted in Splunk Cloud and managed by Splunk Support. Is there any possibility of restoring the deleted dashboard on the Search Head?
Hello, when the cs_uri field is not present in the log, the url field is evaluated from cs_uri_scheme, cs_host, cs_uri_path, and cs_uri_query. But it does not take into account cs_uri_port when the URL uses a non-standard port. For instance, if the real URL is http://somesite:8080/foo/bar, the TA will compute the url field as http://somesite/foo/bar. To solve this for the most common protocols (http, https with and without interception, ftp, and rtsp), the line

EVAL-url = coalesce(cs_uri, if(isnull(cs_uri_scheme) OR (cs_uri_scheme=="-"), "", cs_uri_scheme+"://") + cs_host + cs_uri_path + if(isnull(cs_uri_query) OR (cs_uri_query == "-"), "", cs_uri_query))

should be replaced by

EVAL-url = coalesce(cs_uri, if(isnull(cs_uri_scheme) OR (cs_uri_scheme=="-"), "", cs_uri_scheme+"://") + cs_host + if((cs_uri_scheme=="http" AND cs_uri_port!=80) OR (cs_uri_scheme IN ("https","ssl") AND cs_uri_port!=443) OR (cs_uri_scheme=="tcp" AND cs_method=="CONNECT" AND cs_uri_port!="443") OR (cs_uri_scheme=="ftp" AND cs_uri_port!=21) OR (cs_uri_scheme=="rtsp" AND cs_uri_port!=554), ":".cs_uri_port, "") + cs_uri_path + if(isnull(cs_uri_query) OR (cs_uri_query == "-"), "", cs_uri_query))
Hi, I need to increase the size of the text box filters in my Dashboard Studio dashboard, either for all of them or for selected text boxes. I can't find articles about Dashboard Studio. Can someone assist, please? Thanks!
Hi, since the Splunk Cloud upgrade to 9.0.2208.1, we have noticed that our monitoring dashboards now have 90-second gaps on some of the search panels. Previously we would see a constant feed of data without gaps; now we see intermittent lines with 90-second time gaps. We have tried tweaking the data model summarization period from the original 2 minutes to a larger period, and also to a shorter 1-minute period, but neither prevents the 90-second gap. Any suggestions on how to rectify this would be really appreciated. Thanks
Hello, I have a REST query with a field that contains a date and time. Is it possible to limit the search by this field so it returns only the last 15 minutes? Thanks
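A minimal sketch, assuming the endpoint, field name, and timestamp format (all placeholders here): parse the field to epoch with strptime, then compare against relative_time:

| rest /services/your/endpoint
| eval event_epoch = strptime(your_datetime_field, "%Y-%m-%d %H:%M:%S")
| where event_epoch >= relative_time(now(), "-15m")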
I have an issue where the logs aren't ingested regularly. The log file updates every 5 minutes with the same line entries and rolls over to a new file at end of day.

-rw-r--r--+ 1 novlua novlua 160416 Sep 18 23:55 iga_check_2022-09-18.log
-rw-r--r--+ 1 novlua novlua 197664 Sep 19 23:55 iga_check_2022-09-19.log
-rw-r--r--+ 1 novlua novlua 241056 Sep 20 23:55 iga_check_2022-09-20.log
-rw-r--r--+ 1 novlua novlua 241056 Sep 21 23:55 iga_check_2022-09-21.log
-rw-r--r--+ 1 novlua novlua 241056 Sep 22 23:55 iga_check_2022-09-22.log
-rw-r--r--+ 1 novlua novlua 271783 Sep 23 23:55 iga_check_2022-09-23.log
-rw-r--r--+ 1 novlua novlua 326880 Sep 24 23:55 iga_check_2022-09-24.log
-rw-r--r--+ 1 novlua novlua 326880 Sep 25 23:55 iga_check_2022-09-25.log
-rw-r--r--+ 1 novlua novlua 124783 Sep 26 09:06 iga_check_2022-09-26a.log
-rw-r--r--+ 1 novlua novlua 271376 Sep 26 23:55 iga_check_2022-09-26.log
-rw-r--r--+ 1 novlua novlua 248613 Sep 27 23:55 iga_check_2022-09-27.log
-rw-r--r--+ 1 novlua novlua 97092 Sep 28 09:35 iga_check_2022-09-28.log

Log file entries example:

09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/mudad/changeset_*.*
09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/mudad/queue/*.csv
09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/mudad/work/*.csv
09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/mudad/completed/*.csv
09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/isimprod/changeset_*.*
09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/isimprod/queue/*.csv
09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/isimprod/completed/*.csv
09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/sanbussroles/changeset_*.*
09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/sanbussroles/queue/*.csv
09:35:02 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/sanbussroles/completed/*.csv
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/mudad/changeset_*.*
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/mudad/queue/*.csv
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/mudad/work/*.csv
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/mudad/completed/*.csv
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/isimprod/changeset_*.*
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/isimprod/queue/*.csv
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/isimprod/completed/*.csv
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/sanbussroles/changeset_*.*
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/sanbussroles/queue/*.csv
09:40:01 Processing: /opt/netiq/idm/apps/tomcat/fulfillment/sanbussroles/completed/*.csv

I noted that when I force a new entry into the file, it gets picked up.

inputs.conf:

[monitor:///opt/netiq/idm/apps/tomcat/fulfillment/logs/*.log]
# blacklist = (\.gz)
whitelist = \.log$|\.txt$
# crcSalt = <SOURCE>
# disabled = false
index = IG_RequestLog
sourcetype = IG:RequestLogCheck
time_before_close = 10
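Since each day's file begins with the same repeating lines and no date stamp, one thing worth checking is Splunk's initial-CRC file matching: the monitor fingerprints the first 256 bytes of a file, and a new file whose opening bytes match an already-seen file can be treated as a duplicate. A hedged sketch enabling the option already commented out above (crcSalt = <SOURCE> mixes the full path into the fingerprint so each day's new filename is treated as a new file; note it can cause re-indexing if files are renamed):

[monitor:///opt/netiq/idm/apps/tomcat/fulfillment/logs/*.log]
whitelist = \.log$|\.txt$
crcSalt = <SOURCE>
index = IG_RequestLog
sourcetype = IG:RequestLogCheck
time_before_close = 10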
I want to create a Splunk alert for when there are no transactions for 30 continuous minutes. Kindly assist.
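A common pattern, sketched with placeholder index/sourcetype names: schedule a search over the last 30 minutes and trigger the alert when the count is zero (for example, run every 5 minutes and alert when number of results > 0):

index=your_index sourcetype=your_transactions earliest=-30m@m latest=now
| stats count
| where count==0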
Hi all, we migrated the HF where the DB Connect app was installed, and now events from the DB app on the new HF have a different timestamp: one hour is missing compared to server time. Both HFs use the same indexers, and there is no TZ configured in props.conf on the indexers. These configurations in the DB app did not help:

1. Configuration -> Databases -> Connections -> "your connection" (Timezone dropdown)
2. Adding the following to the JVM options in the configuration tab of the DB Connect app: -Duser.timezone=GMT

On the new HF, the event time should be 3:30, since the server/HF has EDT time 3:30 (this works correctly on the old HF, where the offset is t-6, not t-7). If this is not the time the event was created on the server, what is it? I am confused here. 9:30 is OK, as we are CET. The server time of the new HF is correct, so why are events missing one hour?

[Screenshots of the event timestamps and timedatectl output from both HFs were attached to the original post; the old-HF timedatectl screenshot was taken 9 minutes later.]

Thank you for every idea.
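One avenue worth trying, sketched with a hypothetical sourcetype name: timestamps for DB Connect events are parsed on the HF where the input runs, so a TZ override has to live in props.conf on the new HF itself rather than on the indexers:

[your_dbx_sourcetype]
TZ = America/New_York

The value must match the timezone the database timestamps are actually written in; America/New_York here is only an example.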
How can I bring forward the previous value of a field in my search? Could someone help me, please?
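If the goal is to carry each event's field value forward onto the next event, a minimal SPL sketch (the field name value is illustrative):

... | streamstats current=f window=1 last(value) as previous_value

The autoregress command (for example, | autoregress value p=1) achieves the same thing.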
I need approximate statistics/metrics and am currently using Event Sampling, which drastically speeds up the queries. For queries that calculate averages this works great, but I also need to do counts. If you set Event Sampling to, for example, 1:100, Splunk seems to look at every 100th event, which is also reflected in how many events are matched at 1:100 vs 1:1. Example count with and without sampling:

1:100 = 26311
1:1 = 2623658
1:100 scaled up to 1:1 = 2631100
Diff = 7442, which is 0.3%

The time period was a previous hour (not the latest hour) so that incoming events would not affect the count. A 0.3% difference is perfectly OK for my purpose. Am I thinking about this correctly, or is there any risk of much bigger differences in the count after upscaling?
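For what it's worth, the statistics back this up: with 1-in-100 sampling the sampled count behaves roughly like a binomial/Poisson draw, so the relative error of the scaled-up estimate shrinks as 1/sqrt(sampled count) — here about 1/sqrt(26311) ≈ 0.6% at one standard deviation, consistent with the observed 0.3%. The error only grows when the sampled count is small, so sparse searches are the risk. A sketch of the scaling step (the index name is a placeholder; the 1:100 ratio itself is set in the UI's event sampling control):

index=your_index earliest=-2h@h latest=-1h@h
| stats count
| eval estimated_full_count = count * 100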
We have Splunk MISP42 installed, and it was working until a few days ago. After installing the Splunk Security Essentials app, not only did the SSE app not work from the get-go, it also seems to have broken something that causes the MISP42 app's "config" page not to load and the connection to MISP to fail. Since then I have tried many things, such as uninstalling the SSE app and updating the MISP app from 4.0.1 to the latest version, then reverting back to 4.0.1, which had been working fine. Despite all this, the issue is still there. I noticed a similar proxy-related message for the Palo Alto add-on in the splunkd logs, which suggests this is more of a Splunk internal issue than a MISP42 one.

Splunk Core: 8.1.5 (single instance)
Splunk ES: 7.0.1

Can someone please help fix this?

09-28-2022 00:05:06.486 +0000 ERROR AdminManagerExternal - Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [500]: Internal Server Error -- Traceback (most recent call last):\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/connectionpool.py", line 667, in urlopen\n self._prepare_proxy(conn)\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/connectionpool.py", line 930, in _prepare_proxy\n conn.connect()\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/connection.py", line 316, in connect\n self._tunnel()\n File "/opt/splunk/lib/python3.7/http/client.py", line 927, in _tunnel\n message.strip()))\nOSError: Tunnel connection failed: 403 Forbidden\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/requests/adapters.py", line 449, in send\n timeout=timeout\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/connectionpool.py", line 725, in urlopen\n method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]\n File "/opt/splunk/lib/python3.7/site-packages/urllib3/util/retry.py", line 439, in increment\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /servicesNS/nobody/misp42splunk/configs/conf-misp42splunk_instances/_reload (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Forbidden')))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/splunktaucclib/rest_handler/handler.py", line 117, in wrapper\n for name, data, acl in meth(self, *args, **kwargs):\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/splunktaucclib/rest_handler/handler.py", line 175, in all\n self.reload()\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/splunktaucclib/rest_handler/handler.py", line 259, in reload\n action='_reload',\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/splunklib/binding.py", line 289, in wrapper\n return request_fun(self, *args, **kwargs)\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/splunklib/binding.py", line 71, in new_f\n val = f(*args, **kwargs)\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/splunklib/binding.py", line 679, in get\n response = self.http.get(path, all_headers, **query)\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/splunklib/binding.py", line 1183, in get\n return self.request(url, { 'method': "GET", 'headers': headers })\n File 
"/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/splunklib/binding.py", line 1241, in request\n response = self.handler(url, message, **kwargs)\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/splunk_rest_client.py", line 145, in request\n verify=verify, proxies=proxies, cert=cert, **kwargs)\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/requests/api.py", line 60, in request\n return session.request(method=method, url=url, **kwargs)\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/requests/sessions.py", line 533, in request\n resp = self.send(prep, **send_kwargs)\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/requests/sessions.py", line 646, in send\n r = adapter.send(request, **kwargs)\n File "/opt/splunk/etc/apps/misp42splunk/lib/aob_py3/solnlib/packages/requests/adapters.py", line 510, in send\n raise ProxyError(e, request=request)\nsolnlib.packages.requests.exceptions.ProxyError: HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /servicesNS/nobody/misp42splunk/configs/conf-misp42splunk_instances/_reload (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Forbidden')))\n". See splunkd.log for more details.          
I am trying to use a timestamp field from a lookup CSV as the earliest time marker, but it will not set the value of earliest when the report runs; earliest and latest always fall back to the default preset value. Can you help, please?

index=a4_designeng_generic_app_audit_prd sourcetype=cba:designeng:nonstop:syslog
    [| inputlookup Production_Health_Status.csv
     | tail 1
     | eval earliest=Status_Check_Timestamp    <--- this value is being set each time the report runs
     | fields earliest ]

! Invalid value "09/28/2022 13:06:00" for time term 'earliest'
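The failure is the format: as a search-time term, earliest accepts epoch time or Splunk time syntax, not "09/28/2022 13:06:00". A hedged fix, assuming that timestamp format in the lookup: convert to epoch with strptime and emit the term with return:

index=a4_designeng_generic_app_audit_prd sourcetype=cba:designeng:nonstop:syslog
    [| inputlookup Production_Health_Status.csv
     | tail 1
     | eval earliest=strptime(Status_Check_Timestamp, "%m/%d/%Y %H:%M:%S")
     | return earliest ]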
I'm getting a rex error while extracting fields delimited by commas. Many of the fields are NULL (field1=NULL, field2=NULL ... field4=value ...). Why is the rex error occurring? The error message is: "has exceeded the configured depth_limit, consider raising the value in limits.conf." What is the solution?
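The depth_limit message means the regex engine hit its backtracking bound, which often happens when greedy subpatterns scan long runs of empty/NULL delimited fields. Raising the limit is the workaround the error itself suggests (a sketch below; 10000 is an illustrative value, and a tighter, less backtracking-prone regex is usually the better fix):

[rex]
depth_limit = 10000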
Hello, I am currently using the new Splunk JSON & React framework to create dashboards and use custom visualizations: https://splunkui.splunk.com/Packages/dashboard-docs/ However, one big issue I have is that I cannot create dashboards in Dashboard Studio that include custom visualizations; only the default Splunk visualizations show up in the toolbar when adding one. Is there a way to include custom visualizations in Dashboard Studio? This would be really useful and make it easier to create and maintain dashboards without needing to go into a React app for changes. Also, what do these props mean?

/**
 * visualization can be added through the toolbar
 */
CustomViz.includeInToolbar = true;

/**
 * can switch to this visualization through the editor viz switcher
 */
CustomViz.includeInVizSwitcher = true;

I did not notice any change of behavior by setting them to true and was not able to do any editing. Thanks, Lohit
<form theme="dark">
  <label>Test stats</label>
  <fieldset submitButton="false" autoRun="false">
    <input type="time" token="globalTime" searchWhenChanged="true">
      <label>Time Period</label>
      <default>
        <earliest>-5m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>By Service Stats</title>
      <input type="dropdown" token="serviceKey" searchWhenChanged="true">
        <label>Service</label>
        <choice value="index=test Service.Key=test1servie cf_app_name=test-1*">test1servie</choice>
        <choice value="index=test Service.Key=test2servie cf_app_name=test-2*">test2servie</choice>
        <default>test1servie</default>
        <initialValue>test1servie</initialValue>
      </input>
      <table>
        <search>
          <query>$serviceKey$* AND my search query here</query>
          <earliest>$globalTime.earliest$</earliest>
          <latest>$globalTime.latest$</latest>
          <refresh>5m</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <title>By Service TimeChart</title>
      <table>
        <search>
          <query>$serviceKey$* Header.Type="Inbound" msg.test.Flow-Type="*" | timechart span=15m count by msg.test.Flow-Type</query>
          <earliest>$globalTime.earliest$</earliest>
          <latest>$globalTime.latest$</latest>
          <refresh>5m</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
    </panel>
  </row>
</form>