All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


index="db_oracle-prod" source="AzureVOCprod" status=4 | eval MSGStatus=case(status=1,"CREATED", status=2,"RUNNING", status=3,"CANCELLED", status=4,"Failed", status=5,"PENDING", status=6,"ENDED UNEXP... See more...
index="db_oracle-prod" source="AzureVOCprod" status=4 | eval MSGStatus=case(status=1,"CREATED", status=2,"RUNNING", status=3,"CANCELLED", status=4,"Failed", status=5,"PENDING", status=6,"ENDED UNEXPECTEDLY", status=7,"SUCCEEDED",status=8,"STOPPING", status=9,"COMPLETED") | join package_name [inputlookup Azure_VOC.csv] | eval STARTTime=strptime((strftime(now(),"%Y-%m-%d")),"%Y-%m-%d") - strptime(start_time,"%Y-%m-%d") | where STARTTime=0 | stats count by Azure_Pipeline_name, start_time, end_time, MSGStatus   receiving record every 15mins instead we should have only failure ones based on timeframe
Hi everyone! My Splunk is working correctly, but when I go to System -> Licensing a 500 error appears. In web_service.log:

2023-04-26 19:10:48,078 ERROR [644922d7bf7f21240ca510] __init__:370 - Mako failed to render:
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/controllers/__init__.py", line 366, in render_template
    return templateInstance.render(**template_args)
  File "/opt/splunk/lib/python3.7/site-packages/mako/template.py", line 476, in render
    return runtime._render(self, self.callable_, args, data)
  File "/opt/splunk/lib/python3.7/site-packages/mako/runtime.py", line 883, in _render
    **_kwargs_for_callable(callable_, data)
  File "/opt/splunk/lib/python3.7/site-packages/mako/runtime.py", line 920, in _render_context
    _exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/mako/runtime.py", line 947, in _exec_template
    callable_(context, *args, **kwargs)
  File "/opt/splunk/share/splunk/search_mrsparkle/templates/layout/base.html", line 15, in render_body
    <%self:render/>
  File "/opt/splunk/share/splunk/search_mrsparkle/templates/layout/base.html", line 21, in render_render
    <%self:pagedoc/>
  File "/opt/splunk/share/splunk/search_mrsparkle/templates/layout/base.html", line 95, in render_pagedoc
    <%next:body/>
  File "/opt/splunk/share/splunk/search_mrsparkle/templates/layout/admin_lite.html", line 92, in render_body
    ${next.body()}
  File "/opt/splunk/share/splunk/search_mrsparkle/templates/licensing/overview.html", line 209, in render_body
    % if hard_messages['cle_pool_over_quota'] is not None and hard_messages['cle_pool_over_quota']['count'] is not None and hard_messages['cle_pool_over_quota']['count'] >= stack_table[0]['max_violations']:
KeyError: 'cle_pool_over_quota'

Do you have any ideas for troubleshooting this problem?
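The KeyError suggests the licensing page expects a 'cle_pool_over_quota' entry among the license messages that is not present on this instance. As an untested starting point for narrowing it down, the licenser state can be inspected directly over REST from the search bar (these endpoints exist on splunkd, though the exact fields returned vary by version):

| rest /services/licenser/messages splunk_server=local

| rest /services/licenser/pools splunk_server=local

Comparing the output with a working instance may show whether a pool or message entry is missing or malformed.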
Hello, I am looking for help investigating whether license usage drops for a particular index day by day. Any SPL searches or other approaches would be helpful.
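A minimal sketch (untested) against the license manager's usage log, charting daily volume per index:

index=_internal source=*license_usage.log* type=Usage
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) as GB by idx

Note that when there are many index/source/host combinations, license_usage.log squashes the idx field into nulls; in that case the type=RolloverSummary events or the Monitoring Console's license usage views are alternatives.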
Hi, I don't see any fields from the DLP data model apart from one event. How can we fix this so we get all the fields and events?
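Hard to say without more detail, but two untested checks, assuming this refers to the CIM Data Loss Prevention data model (model name DLP, root dataset DLP_Incidents, constrained by the dlp and incident tags in standard CIM):

| datamodel DLP DLP_Incidents search
| head 20

tag=dlp tag=incident
| head 20

If the second search returns raw events but the first returns little, the add-on's field extractions/aliases are the likely gap; if both are nearly empty, the events are not being tagged into the model at all.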
Hi, I have an app which lets users send custom alerts to an external provider. The app now uses secret storage to:
1) Store/save keys during the setup view
2) Read the key in the Python alert script to invoke the external provider

Because any "common" user is able to create an alert, during execution I'm seeing an error reading the key because the user lacks the "list_storage_passwords" capability.

Questions:
- Is this capability required for all users who set up this alert and need to read the app's secret key?
- Administrators are concerned that granting it would allow these users to read ALL secrets from the Splunk instance. Is this accurate? Or does list_storage_passwords only allow reading the specific app secrets that are shared for reading with all users?

Thank you.
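For reference, what a given user can actually see through secret storage can be checked from the search bar; an untested sketch, with <your_app> as a placeholder for the real app name:

| rest /servicesNS/nobody/<your_app>/storage/passwords splunk_server=local
| table title eai:acl.app eai:acl.sharing

Running the same search once as an admin and once as a normal user with list_storage_passwords makes the difference in visibility concrete.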
Hi, I have built a dashboard using the new Splunk Dashboard Studio. I am having difficulty adding a submit button to my page. Basically, I want to set all the inputs and then click a submit button for Splunk to populate my dashboard. Can someone help? Thanks.
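If I recall correctly, recent Dashboard Studio versions support a global submit button via the dashboard definition; treat the snippet below as an assumption to verify against the docs for your version rather than a confirmed setting. In the source editor it would be merged into the existing layout section, roughly:

"layout": {
    "type": "grid",
    "options": {
        "submitButton": true,
        "submitOnDashboardLoad": true
    }
}

With the submit button enabled, input token changes are held until the button is clicked instead of refreshing searches immediately.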
Hi Splunkers, we updated Splunk Cloud to version 9.0.2303.100. Since then, HTML and CSS code is not working: dashboards are not loading their functionality or CSS styling, and we are getting an "Awaiting user confirmation" error. Can anyone please help with this? Thanks.
Hi, is there a way to generate a graph/chart that shows the performance of the scheduler? We are using Splunk Enterprise Security, and there is an app called "Cloud Monitoring Console" which shows things like "Skipped Searches" and "Scheduler Activity", but not quite what we are looking for. We would like to see how many searches were kicked off (dispatched) every hour or every 30 minutes, and ideally plot the run time of searches. The end goal is to identify whether we have too many searches running in a particular time slot.
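A rough, untested sketch against the internal scheduler log:

index=_internal sourcetype=scheduler status=*
| timechart span=30m count by status

index=_internal sourcetype=scheduler status=success
| timechart span=30m sum(run_time) as total_run_time_sec avg(run_time) as avg_run_time_sec

The first shows how many scheduled searches ran, were skipped, or were deferred in each 30-minute slot; the second gives a feel for how much search run time lands in each slot, which helps spot overloaded windows.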
How can I view the currently running searches in Splunk and display the amount of memory each search consumes during execution? Based on this information, I would like to pause searches with high memory usage.
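One untested approach is the resource-usage introspection data, which ties per-process memory to a search ID (the field names below are the standard _introspection ones, but verify them on your version):

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*
| stats latest(data.mem_used) as mem_used_mb by data.search_props.sid data.search_props.user data.search_props.app
| sort - mem_used_mb

Currently running jobs, with the option to pause or finalize them, are also visible under Activity > Jobs, or via | rest /services/search/jobs.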
The chart below is a clustered column chart with 4 bars for a single component. I want Inflow1 and Inflow stacked together, and Outflow1 and Outflow stacked together, with the two stacks side by side as a clustered chart for a single component.
| tstats summariesonly=true max(_time) as lastTime, count FROM datamodel=Change BY "All_Changes.action", "All_Changes.result_id", "All_Changes.user", "All_Changes.dest"
| rename "All_Changes.*" as *
| search result_id = 4732
| convert ctime(lastTime) as lastTime

I am running this search and it produces output, but I want to see the underlying events to get more detail. The events are not showing, even though the job reports: total number of events — Complete, 590,046 events.
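tstats only returns aggregated rows from the data model's accelerated summaries, so the raw events never appear in its output. An untested way to pull the matching events themselves is to search the data model directly (slower, since it searches raw events, so constrain the time range):

| datamodel Change All_Changes search
| search All_Changes.result_id=4732
| table _time All_Changes.action All_Changes.user All_Changes.dest _raw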
When compared to the original query, the tstats query's success, failed and total counts do not match.

Original query:
index=app-cod-idx host_ip=11.123.345.23 sourcetype=code:logs
| rex field=_raw "\|presentdata\:(?<COD_data>.*)\|"
| where isnotnull(COD_data)
| eval Success=if(COD_data="0" OR COD_data="", "Success", null())
| eval Failed=if(COD_data!="0", "Failed", null())
| stats count(Success) as Successlogs count(Failed) as Failedlogs count(COD_data) as totalcount

Output:
Successlogs  Failedlogs  totalcount
14           10          24

tstats query:
| tstats count where index=app-cod-idx host_ip=11.123.345.23 sourcetype=code:logs by PREFIX(cod-data=)
| rename cod-data= as COD_data
| where isnotnull(COD_data)
| eval Success=if(COD_data="0" OR COD_data="", "Success", null())
| eval Failed=if(COD_data!="0", "Failed", null())
| stats count(Success) as Successlogs count(Failed) as Failedlogs count(COD_data) as totalcount

Output:
Successlogs  Failedlogs  totalcount
1            0           1
I am trying to calculate birth year and age based on birth date. The following works, but only for dates within the last 50 years; there is no output for birth dates more than 50 years in the past.

eval APPLICANT_BIRTH_YEAR = strftime(strptime(APPLICANT_DOB, "%Y-%m-%d %H:%M:%S"), "%y")

Is there a way to do this properly?
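One untested alternative is to avoid the epoch round-trip entirely (strptime/strftime can misbehave for pre-1970 dates on some platforms, and "%y" only yields a two-digit year) and work on the date string directly, since APPLICANT_DOB is already in "%Y-%m-%d %H:%M:%S" form. APPLICANT_AGE is a hypothetical field name here:

| eval APPLICANT_BIRTH_YEAR = tonumber(substr(APPLICANT_DOB, 1, 4))
| eval APPLICANT_AGE = tonumber(strftime(now(), "%Y")) - APPLICANT_BIRTH_YEAR
| eval APPLICANT_AGE = if(strftime(now(), "%m-%d") < substr(APPLICANT_DOB, 6, 5), APPLICANT_AGE - 1, APPLICANT_AGE)

The last line subtracts one year when the birthday has not yet occurred in the current year; the string comparison works because both sides are zero-padded "MM-DD" values.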
Hi folks, does anybody know how to change the hostname of the clients that appear in forwarder management on the deployment server? I tried:

server.conf
[general]
serverName = <ASCII string>
* The name that identifies this Splunk software instance for features such as distributed search.

inputs.conf
host = <string>
* Sets the host key/field to a static value for this stanza.
* Primarily used to control the host field, which will be used for events coming in via this input stanza.

But changing those settings on the universal forwarders does not change the client hostname on the deployment server (forwarder management). Any ideas? Thanks in advance!
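For what it's worth, the name a deployment client reports is usually driven by deploymentclient.conf on the forwarder rather than by serverName or the host field; a sketch, with the value as an example only:

deploymentclient.conf (on the universal forwarder)
[deployment-client]
clientName = myForwarder01

After editing, restart the forwarder; clientName takes precedence over the DNS/host name in what the client phones home with, so it should be reflected in forwarder management on the next check-in.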
Is there an easy way of capturing fields across different events?

Example:

event 1)
abc: {
    build: 123
    duration: 1.1
    sw: gen1
    hardware: h1
}

event 2)
def: {
    build: 124
    duration: 1.4
    sw: gen2
    hardware: h2
}

| rename abc.duration as a_duration, def.duration as d_duration
| stats avg(d_duration) as avg_d_duration, avg(a_duration) as avg_a_duration by def.build, def.hardware
| eval avg_d_duration = round(avg(avg_d_duration),3)
| eval avg_a_duration=abc.duration          <= this is a limit line I want to implement based on the next search
| search abc.build="123", abc.hardware="h1"

NOTE: There are multiple events similar to event 1 and event 2; the difference between them is the sw version.

Problem: I am not able to specify a search/where clause to get abc.duration.

Question: 1) How can I add a search at the end so that I can query data from a different event?
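One untested way to handle this is to normalise both event shapes into common fields first, so a single stats or where clause can operate across them. Field names follow the example above; "type" is a hypothetical label field:

| eval type=if(isnotnull('abc.build'), "abc", "def")
| eval build=coalesce('abc.build', 'def.build')
| eval duration=coalesce('abc.duration', 'def.duration')
| eval hardware=coalesce('abc.hardware', 'def.hardware')
| eval sw=coalesce('abc.sw', 'def.sw')
| stats avg(duration) as avg_duration by type, build, hardware, sw

After the coalesce step the abc/def distinction lives in the type field, so filtering for the abc baseline is just | where type="abc" AND build="123" AND hardware="h1", and those rows can be combined with the def rows (for example via eventstats) to draw a limit line.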
Hi Team, we have received a request to pull data from a REST API. Can you please point us to any documentation that can help with the configuration?
I cannot view my closed cases on the official support page. All I can see is shown in the attached image. Regards, Altin
I know how to get the ingest bytes for non-internal logs using this ...   index=_internal source="*license_usage.log" host="{license_manager}" type=Usage   Anybody know how I could do the same for the internal logs? I'm trying to figure out how many bytes per day we ingest total (internal and non-internal logs). Thanks.
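license_usage.log only covers data that counts against the license, so internal indexes never show up there. An untested alternative that covers everything indexed, internal included, is the indexing throughput metrics:

index=_internal source=*metrics.log* group=per_index_thruput
| eval GB = kb/1024/1024
| timechart span=1d sum(GB) as GB by series

Here series is the index name (including _internal, _audit, _introspection) and the events come from each indexer, so search across all indexers rather than just the license manager. By default metrics.log only reports the busiest series in each interval, so low-volume indexes may be under-counted; treat the daily totals as an approximation rather than an exact byte count.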
Hi, we have an integration with ServiceNow. An alert in a custom app creates a ServiceNow ticket, but the Problem_URL is wrong: it points to the Search app instead of the custom app. I have checked the saved search in the _internal logs and it is correct. Any help would be greatly appreciated. Thanks.
I've made a dashboard which uses tokens to determine whether all searches are ready (ready1, ready2, etc.). If all tokens are set, a message is displayed saying the dashboard is ready. When I open the dashboard and choose a value from the picklist, sometimes all the tokens are set and sometimes a few are not. For example, action 1: open the dashboard, choose a value from the picklist; ready1 is set but ready8 is not. Action 2: open the same dashboard and choose the same value from the picklist; ready1 is not set, ready8 is set and ready10 is not set. So exactly the same action gives different results. Does anyone know a solution for this? Maybe something to do with caching?