All Posts


@gcusello Thank you. I looked at your post and saw your_search | eval date=strftime(_time,"%Y-%m-%d") | search NOT [ inputlookup holidays.csv | fields date ] | ... which excludes all the events on the days contained in the lookup. So now the question is: I am using this lookup file to say "do not alert on these dates", but we need to shift each date by +1 day. For example, if the lookup table contains 2025-02-17, we need to add one day to it, so the alert is actually muted on 2025-02-18, if that makes sense. To simplify: for each date in the lookup we just need to add one day and make sure the alert is muted on that shifted date. Would it look like this? your_search | eval date=strftime(_time + 86400,"%Y-%m-%d") | search NOT [ inputlookup holidays.csv | fields date ] | ... Also, is there a difference between using inputlookup and lookup? All the best!
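A minimal sketch of the shifted exclusion, assuming the lookup's field is named date and holds %Y-%m-%d strings: to mute the day after each date in the lookup, subtract one day from the event time before comparing (adding 86400, as in the draft above, would instead mute the day before each listed date):

your_search
| eval date=strftime(relative_time(_time, "-1d"), "%Y-%m-%d")
| search NOT [ inputlookup holidays.csv | fields date ]
| ...

relative_time(_time, "-1d") is used instead of a hard-coded 86400 so the comparison stays correct across DST changes; _time - 86400 behaves the same on most days.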
Hi, any update on this from anyone? Thank you!
I believe this only applies to how Splunk Web (the UI) interacts with splunkd, not to how direct REST API calls are made to splunkd on port 8089. I am trying to determine whether I should just use a client-side timeout for the endpoint call.
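In case a client-side timeout is the way you go, a minimal sketch using Python's requests library; the host, token, and the 60-second read timeout are placeholder assumptions, not values from this thread:

import requests

SPLUNKD = "https://localhost:8089"                         # assumed splunkd management host:port
HEADERS = {"Authorization": "Bearer REPLACE_WITH_TOKEN"}   # placeholder credential

# timeout=(connect, read) in seconds; if splunkd has not finished responding
# within the read timeout, requests raises requests.exceptions.ReadTimeout
# on the client side regardless of any server-side settings.
resp = requests.get(
    SPLUNKD + "/services/saved/searches",
    headers=HEADERS,
    params={"output_mode": "json", "count": 0},
    timeout=(10, 60),
    verify=False,  # only acceptable for lab instances with self-signed certs
)
resp.raise_for_status()
print(len(resp.json()["entry"]), "saved searches returned")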
@prasanthkota Did you get this working? I am working on a custom function to convert the Splunk query result in the vault ID to CSV, and would like to know if one already exists for this. What was your final custom function?
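Not aware of a ready-made one; in case it helps, a minimal generic sketch in plain Python (not tied to SOAR specifics) that turns a list of result dictionaries, like rows from a Splunk search, into CSV text. How you read the results and where you store the CSV are assumptions you would adapt:

import csv
import io

def results_to_csv(results):
    # results: list of dicts, e.g. rows returned from a Splunk search
    if not results:
        return ""
    # collect every field name seen across all rows so no column is dropped
    fieldnames = []
    for row in results:
        for key in row:
            if key not in fieldnames:
                fieldnames.append(key)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(results)
    return buf.getvalue()

# example usage:
# rows = [{"host": "web01", "count": "42"}, {"host": "web02", "count": "7"}]
# print(results_to_csv(rows))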
That's right - users in the support portal are completely separate from your actual Splunk instance and are not automatically set up - there is no link between them. In fact, you can have users on your support portal who do not have a login to your Splunk instance, if appropriate.
Thanks for the help on this. The final solution for me is: [general] parallelIngestionPipelines = 200 I am not sure I see the benefit of taking the time to find the optimal size for the various queues as you suggest. I have the available CPU and memory to simply increase the pipelines. I will be adding several IFs and letting them load balance, so ultimately 200 will be way overkill and I may drop this back to something like 50 (or maybe I will not bother with that either, since everything is working).
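For anyone landing here later, the setting goes in server.conf under the [general] stanza on the instance doing the ingestion, and needs a restart to take effect (a sketch; 200 is simply the value chosen above, and each extra pipeline set consumes additional CPU and memory):

# $SPLUNK_HOME/etc/system/local/server.conf
[general]
parallelIngestionPipelines = 200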
Hi. We started using Splunk Observability Cloud for our Azure infrastructure. We have already set up the Azure integration with Splunk and are now in the process of creating dashboards and charts. I ran into an issue when creating any chart that contains several objects. If I want to see dtu_consumption_percent for the SQL databases on my SQL Server (tens of databases), I can easily create a time chart and it contains data for all databases, but I cannot tell which line represents which database, because the name of any database looks like /SUBSCRIPTIONS/FULL_ID_OF_SUBSCRIPTION/RESOURCEGROUPS/RESOURCE_GROUP_NAME/PROVIDERS/MICROSOFT.SQL/SERVERS/THE_NAME_OF_SQL_SERVER/DATABASES/THE_NAME_OF_DATABASE. I do not see this full name even in the legend, and hovering the mouse over a line shows only a small part, like "/SUBSCRIPTIONS/FULL_ID_OF_SU". I would like to have THE_NAME_OF_DATABASE (the resource name) instead of the full azure_resource_id. Is it possible? Thank you.
Hello @daniedoe

/services/saved/searches fetches saved searches globally, across all apps the authenticated user has access to, and the results depend on the permissions of the user making the API call. If you need a broader scope across multiple apps, or want results influenced by user permissions, use this namespace.

/servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/searches specifically queries saved searches within the SplunkEnterpriseSecuritySuite app and uses the nobody namespace, meaning searches owned by nobody (i.e., shared objects) in that app. If your application works specifically within Splunk Enterprise Security and only needs correlation searches from ES, prefer this namespace.

Hopefully this helps. Have a nice day,
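A small sketch of the difference in practice, assuming Python with the requests library, a placeholder host and token, and the standard JSON response format (each entry's acl block shows which app and owner the object is resolved from):

import requests

BASE = "https://localhost:8089"                    # assumed management host:port
HEADERS = {"Authorization": "Bearer REPLACE_ME"}   # placeholder token

for path in ("/services/saved/searches",
             "/servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/searches"):
    r = requests.get(BASE + path, headers=HEADERS,
                     params={"output_mode": "json", "count": 0}, verify=False)
    r.raise_for_status()
    entries = r.json()["entry"]
    print(path, "->", len(entries), "saved searches")
    for e in entries[:3]:
        # acl.app / acl.owner show the namespace each object actually lives in
        print("  ", e["name"], e["acl"]["app"], e["acl"]["owner"])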
Hello. Maybe you need to edit /opt/splunk/etc/system/local/web.conf, adding the setting: splunkdConnectionTimeout = 120 More HERE.
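For clarity, that setting lives under the [settings] stanza of web.conf, and Splunk Web needs a restart to pick it up (a sketch; 120 seconds is just the value suggested above, and this governs how long Splunk Web waits on splunkd, not direct REST calls to port 8089):

# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
splunkdConnectionTimeout = 120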
In a production application, what factors should I consider when deciding between using /services/saved/searches vs /servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/searches for a REST HTTP endpoint call to get correlation search information? Both return the same results for me.
Following on from 'by configuration by an existing portal admin in your organisation':

1. Now I understand that there is an Admin role for the Splunk Support Portal, and I believe it is different from the roles for the Splunk instance. Right?

2. A Splunk Enterprise instance admin need not inherently be a Splunk Support Portal admin too. Right? They should be given access to the Support Portal by the support portal admin, or by the Splunk team upon request of the Splunk portal admin. Right?
Hi @harryvdtol Have a look at https://community.splunk.com/t5/Splunk-Dev/Link-url-in-splunk-dashboard/m-p/688493 which I think should give you some insight into how to achieve this. Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
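One common approach (a sketch, not necessarily exactly what the linked thread describes): in a Simple XML table, add a drilldown with a <link> whose target is _blank, passing the clicked row's URL field through unescaped with the |n filter. The search and the field name url here are placeholders:

<table>
  <search>
    <query>index=docs | table title url</query>
  </search>
  <drilldown>
    <!-- open the clicked row's url field in a new browser tab/window -->
    <link target="_blank">$row.url|n$</link>
  </drilldown>
</table>

Note that the browser's pop-up blocker may still need to allow the new window, and Dashboard Studio uses a different drilldown mechanism.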
I want to know if there is any server-side timeout within which a response must complete for a call to the endpoint GET /services/saved/searches. Does Splunk have a default timeout for handling these API calls?
Hello. Can you please explain more about what you have and what you are trying to achieve? I really don't understand what your problem is. Thank you,
Hello, Please help me with the following request. I have a table output of several URLs to PDF files, for example this one: https://janenpiet.domain.nl/sites/conten_sites/BOT/Parts%20%20documents/ADFM/Phonesrv/Highg%20Volume%20Phiones/ABCD%20Capture/60.%20Hardware/01.%20Scanners/Amsterdam/30/2025-01-20.pdf My request is as follows: when a user clicks on a link, it should open a new window that opens the PDF file. I notice that a simple drilldown does not work. Does anybody know how to do this? Regards, Harry
Hi, We are working on a Cisco Sec + Splunk course, using the new Cisco Security Cloud App as well as coverage of the old apps like the Cisco ISE App/Add-on. The course has a troubleshooting section, so for ISE we are just checking whether there are any ISE logs in Splunk for troubleshooting the App/Add-on. Thanks!
Hi @plao The config you've shown uses a UDP input, therefore I would not expect to see any ISE-specific log sources in your logs. Is there an issue you are experiencing that you need additional debug logs for? Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
 
Thanks. I only see logs from the SNA app.
Have a look at this DOCUMENTATION PAGE. Debugging logs should be sent to the _internal index. Look at that index.
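For example, a couple of starting points (sketches; the source pattern is only a guess at the add-on's log file name, so adjust it to whatever you actually see in _internal):

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
| stats count BY component log_level
| sort - count

index=_internal source=*cisco*
| stats count BY source sourcetype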