All Posts



We use dynamic tags, like ticket numbers or alert IDs, on all of our containers. We have a retention policy that deletes containers after a year without updates. I would like something similar that removes all unused tags: if a tag with an event ID is no longer attached to any container, it should be deleted. We currently have thousands of tags and it is starting to bog down the UI.
I'm trying to resolve an issue where Splunk sends email reports, but the information exported as an attachment shows dates in a "chron number" format (apparently a raw epoch timestamp) instead of a more readable format like "September 30, 2024." Where can I implement a fix for this, and how can I do it?
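One common fix, assuming the attachment's dates really are epoch timestamps, is to format them with `strftime` in the report's search before the final table step (the field name `report_date` below is illustrative; substitute your own):

```spl
... your base search ...
| eval report_date = strftime(report_date, "%B %d, %Y")
| table host report_date status
```

This changes only what the report emits, so the emailed CSV/PDF attachment inherits the readable format without any change to the email settings themselves.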
The issue you're experiencing is related to event breaking, not the MAX_EVENTS setting. The warning suggests that Splunk is trying to merge multiple events into a single event, which exceeds the default limit of 256 lines. Your props should be on the indexers (your parsing instance or heavy forwarder), as only a very few settings work on the universal forwarder, such as EVENT_BREAKER_ENABLE, EVENT_BREAKER, and indexed extractions. The best way to address this is to set the following in props.conf on the indexer:

LINE_BREAKER = ([\r\n]+)(?:,)
SHOULD_LINEMERGE = false

These settings ensure that each JSON object is treated as a separate event, preventing the merging that's causing the warning. Additionally, for JSON data like this, you might want to consider setting the following on the search head only:

KV_MODE = json

This setting helps Splunk interpret the JSON structure at search time, making it easier to extract and query specific fields from your JSON data. Please upvote if this is helpful.
Looking at the data, you can use something like this. Note that expanding servers and statuses independently would cross-join them into 100 rows and skew the counts, so the two lists are zipped together first:

| makeresults
| eval servers="Server1,Server2,Server3,Server4,Server5,Server6,Server7,Server8,Server9,Server10"
| eval statuses="Completed,Completed,Completed,Completed,Completed,Completed,Pending,Pending,Pending,Pending"
| eval zipped=mvzip(split(servers, ","), split(statuses, ","))
| mvexpand zipped
| eval statuses=mvindex(split(zipped, ","), 1)
| stats count as total_servers, count(eval(statuses="Completed")) as completed_count, count(eval(statuses="Pending")) as pending_count
| eval completed_percentage = round(completed_count / total_servers * 100, 0)
| eval pending_percentage = round(pending_count / total_servers * 100, 0)
| eval "Completed Servers" = completed_count . " (" . completed_percentage . "%)"
| eval "Pending Servers" = pending_count . " (" . pending_percentage . "%)"
| fields "Completed Servers", "Pending Servers"

Please upvote if this is helpful.
Can you share your dashboard code so we can help you? Please use the code element (the </> button) when you post it.
Each scheduled report has a single set of attributes.  If multiple attributes (time range, cron schedule, etc) are needed then the report should be cloned and new attributes set on the copy.
Hi, Are you asking if you can filter spans within a trace by a time range? If so, I don't think that is possible.
Are you looking at the dispatch directory, or at how many search jobs are running? If the latter, you can use the _audit index to get the number of jobs.
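For the search-job case, a sketch of an _audit query (field names as in a default audittrail log; adjust the time bucket to taste):

```spl
index=_audit sourcetype=audittrail action=search info=completed
| timechart span=1h count AS completed_searches
```

Filtering on info=completed counts finished jobs; drop that clause (or use info=granted) if you want searches as they are dispatched instead.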
I have sample data like below. I need to display the single-value counts of Completed and Pending in two different single-value panels, with their percentage in brackets (screenshot is attached).

Total=10, Completed=6, Pending=4

So the first panel should show the Completed count as 6 (60%) and the second panel the Pending count as 4 (40%), with the percentage in brackets, in the two panels shown in the photo. Please provide me the query.

ServerName    UpgradeStatus
==========    =============
Server1       Completed
Server2       Completed
Server3       Completed
Server4       Completed
Server5       Completed
Server6       Completed
Server7       Pending
Server8       Pending
Server9       Pending
Server10      Pending
Hi, https://community.splunk.com/t5/Splunk-Search/Is-there-a-way-to-monitor-the-number-of-files-in-the-dispatch/m-p/48389 gives me the current dispatch count, but I am looking to make a timechart. Using | rest, _time does not come back, so I can't make a timechart. I am thinking that if I run the command each minute in a saved search and output to a .csv with a timestamp, that might work!
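That saved-search idea can be sketched like this (the lookup name is illustrative). Schedule the first search every minute to accumulate samples, then chart from the lookup:

```spl
| rest /services/search/jobs count=0
| stats count AS dispatch_jobs
| eval _time=now()
| outputlookup append=true dispatch_history.csv
```

and for the timechart:

```spl
| inputlookup dispatch_history.csv
| timechart span=5m max(dispatch_jobs)
```

Since | rest rows carry no _time, stamping each sample with now() at collection time is what makes the timechart possible.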
Hi! It looks like there's an authentication failure on the Azure side. You need to assign the correct permissions to the Azure app. Before proceeding with the configuration at https://docs.splunk.com/Documentation/AddOns/released/MSCloudServices/ConfigureappinAzureAD, ensure your storage account token (SAS) has the following privileges. Use either an Access key OR a Shared Access Signature with:

Allowed services: Blob, Table
Allowed resource types: Service, Container, Object
Allowed permissions: Read, List

Please upvote if this is helpful.
Hi all,

Since v9.3 there seems to be a different method for displaying nav menus. When you update the <label> tag of a view from an external editor, those changes are not reflected in the navigation until a local storage object is deleted. /debug/refresh or restarting Splunk doesn't refresh the navigation. I was able to update the navigation by deleting the following object: Chrome -> Developer Tools -> Application -> "Local Storage" -> splunk-appnav:MYAPP:admin:en-GB:UUID, containing the following data structure:

{
  "nav": [
    { "label": "Search", "uri": "/en-GB/app/testapp/search", "viewName": "search", "isDefault": true },
    { "label": "testview", "uri": "/en-GB/app/testapp/testview", "viewName": "testview" }
  ],
  "color": null,
  "searchView": "search",
  "lastModified": 1727698963355
}

I'm wondering why the content of the nav is now saved on the client side. This is different behaviour from v9.1 and v9.2. If I had to guess, they tried to improve the response time of the web UI. But how do I ensure that every user receives the latest version of the navigation menu in an app?

Best regards, Andreas
Yes - sorry, I thought I did. Cheers for the help.
If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
The Cybersecurity Paradox: How Cloud Migration Addresses Security Concerns Thursday, October 17, 2024 | 10AM PT / 1PM ET Are you ready to transform your security operations and unlock the full potential of the cloud? In this session, TekStream, an Elite Splunk Partner, will dive into how cloud migration can not only enhance your security but also deliver measurable ROI through the AI-powered Splunk Cloud Platform (SaaS). By the end of the session, you'll be equipped with tools and best practices to strategize, optimize, and build the internal team to manage your security programs with long-term efficiency and scale. Don't miss this opportunity to educate, empower, and set your organization up for lasting success in the cloud. To learn more, register today!
We have a report that generates data with the `outputlookup` command, and we need to schedule it multiple times with different time ranges. We want to run it each day, but with different time ranges in sequential order. Each run requires the previous run to finish so it can load the lookup results for the next run. We can't just schedule a single report that updates the lookup, because we need it to run over a different time range each time it triggers. Is there any way we can schedule a report to run in this particular way? We thought about cloning it multiple times and scheduling each clone differently, but that is not an ideal solution. Regards.
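For what it's worth, the cloning approach can be sketched in savedsearches.conf with staggered cron schedules, so each copy covers its own time range and runs after the previous one has typically finished (stanza names, lookup name, and time ranges below are all illustrative; the "..." stands for your actual search):

```ini
[my_report_range1]
search = ... | outputlookup my_results.csv
dispatch.earliest_time = -2d@d
dispatch.latest_time = -1d@d
cron_schedule = 0 1 * * *

[my_report_range2]
search = ... | outputlookup append=true my_results.csv
dispatch.earliest_time = -1d@d
dispatch.latest_time = @d
cron_schedule = 30 1 * * *
```

The staggered cron only approximates "run after the previous finishes"; there is no built-in dependency between scheduled reports, so the gap must be large enough for the earlier run to complete.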
Unfortunately, I can't recreate it outside the ITSI app because the problem is inside ITSI event management. The source code doesn't have anything about the table I mentioned, by the way. But thank you for trying to help. Maximiliano Lopes
Run the query over All Time to identify the oldest bucket in the specified index (just to get information). The fields start_days and end_days represent the time range of events contained within each bucket. Sort the buckets by end_days in descending order to find the oldest bucket in that index. For example, if the end_days value is 500 and you only want to retain 400 days of data, configure the following parameter in your index settings:

frozenTimePeriodInSecs = 34560000    # seconds equivalent to 400 days (400 * 86400)

If you find this solution helpful, please consider accepting it and awarding karma points!
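The bucket-inspection query referred to above is not shown in this excerpt; one way to compute bucket ages is with dbinspect (the index name is illustrative, and start_days/end_days are derived here from dbinspect's startEpoch/endEpoch fields):

```spl
| dbinspect index=my_index
| eval start_days = round((now() - startEpoch) / 86400, 0)
| eval end_days = round((now() - endEpoch) / 86400, 0)
| sort - end_days
| table bucketId startEpoch endEpoch start_days end_days
```

Run it over All Time so that even the oldest buckets are included in the results.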
Hello, I'm having trouble connecting a Splunk instance to an Azure Storage Account. After the account was set up, I configured my Splunk instance to connect to the Storage Account using the Splunk Add-on for Microsoft Cloud Services. When I enter the Account Name and the Account Secret, it gives this error. This was configured from "Configuration" > "Azure Storage Account" > "Add". I have checked the Account Name and the Access Key; they are correct. Looking at the logs, this was the only noticeable error that popped up:

log_level=ERROR pid=3270316 tid=MainThread file=storageaccount.py:validate:97 | Error <urllib3.connection.HTTPSConnection object at 0x7e14a4a8e940>: Failed to establish a new connection: [Errno -2] Name or service not known while verifying the credentials: Traceback (most recent call last):

Other than this, I saw some HTTP requests with a 502 error in splunkd.log, but I don't know if that is related. I have checked whether the Splunk machine can reach the Azure resource, and it can. It can also make API calls correctly. At this point I have no idea what could cause this problem. Do you have any idea what checks I could do to find where the problem is? Did I miss some configuration? Could it be a problem on the Azure side? If yes, what checks should I do? (I used the official guide https://splunk.github.io/splunk-add-on-for-microsoft-cloud-services/Configurestorageaccount/.) Thanks a lot in advance for your help.
This has been answered here: https://community.splunk.com/t5/Splunk-Search/Is-there-a-way-to-monitor-the-number-of-files-in-the-dispatch/m-p/48389

You can leverage this search and see if it helps with your monitoring:

index=_internal sourcetype=splunkd "The number of search artifacts in the dispatch directory is higher than recommended" TERM(count=*)
| timechart span=1h max(count)

Please upvote if this is helpful.