All Posts

We are using the Splunk forwarder to forward Jenkins data to Splunk. We noticed that Splunk does not display all the data. Here is an example: index=jenkins_statistics (host=abc.com/*) event_tag=job_event job_name="*abc/develop*" | stats count by job_name, type returns completed = 74 and started = 118. Ideally, whatever is started should also be completed, so can you help me figure out what the problem could be?
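A sketch of a search that might help narrow down which jobs started but never reported a completed event (it reuses the index, tag, and type field from the search above, so treat the exact field values as assumptions):

index=jenkins_statistics (host=abc.com/*) event_tag=job_event job_name="*abc/develop*"
| stats count(eval(type="started")) AS started, count(eval(type="completed")) AS completed BY job_name
| where started > completed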
Hi @ITWhisperer, the request is great! It is working fine in the search indeed. Unfortunately it doesn't work within a dashboard's source code: the first line is highlighted with the message "unencoded <".   | rex mode=sed field=origData "s/<(?<!)/ </g s/>(?=<)/> /g"   Is there a way to make the request understandable in the dashboard UI?
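For context, XML requires a literal < inside element content to be escaped, so a sketch of how the same rex might look inside a dashboard search element (assuming a Simple XML <query> element) is:

<query>
  | rex mode=sed field=origData "s/&lt;(?&lt;!)/ &lt;/g s/&gt;(?=&lt;)/&gt; /g"
</query>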
It is a Splunk-supported add-on, so if you have a support contract, you could ask them.
In web.conf, there are some cache-related settings that might work to disable either the caching of views, or the cache entirely. https://docs.splunk.com/Documentation/Splunk/latest/Admin/Webconf

max_view_cache_size = <integer>
* The maximum number of views to cache in the appserver.
* Default: 1000

cacheBytesLimit = <integer>
* Splunkd can keep a small cache of static web assets in memory. When the total size of the objects in cache grows larger than this setting, in bytes, splunkd begins ageing entries out of the cache.
* If set to zero, disables the cache.
* Default: 4194304
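As an illustration only, a minimal web.conf sketch applying those settings might look like this (the values are placeholder assumptions, not recommendations; check the spec above before using them):

[settings]
# Keep the view cache small (default is 1000)
max_view_cache_size = 100
# Setting this to zero disables the static-asset cache entirely
cacheBytesLimit = 0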
This might work: <yoursearch> | eval <yourdisplayedtimefield> = strftime(<youroriginaltimefield>, "%B %e, %Y") And here is a good reference website for picking the string format characters: https://strftime.net/
Please tell us more. Are the emailed reports built into Splunk or custom (created by your organization)? If the latter, please share the SPL used to generate the reports so we can suggest changes that will improve the readability. I take it that by "chron number" you're referring to dates in integer ("epoch") format - the number of seconds since 1/1/1970. If so, the report probably just needs to use the strftime function to change the format into something easier to read.
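For example, a minimal sketch assuming the report's date field is a hypothetical epoch-format field called event_epoch:

<your report search>
| eval report_date = strftime(event_epoch, "%B %e, %Y")
| table report_date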
Are the tags indexed in Splunk?  If so, they cannot be deleted.  The tags will go away based on the retention policy for the index in which they are stored.
It says web interface does not seem to be available.
I went through the process of stopping Splunk on all components and untarring the installation file to the /opt directory with the -C option. After completing the untar, I ran the command and accepted the upgrade and license. All went well until the end, when I got "WARNING: web interface does not seem to be available". Everything else says done; it is only at the end that I get the warning message.   I checked splunkd.log and I see this message:   ERROR ClusteringMgr [60815 MainThread] - pass4SymmKey setting in the clustering or general stanza of server.conf is set to empty or the default value. You must change it to a different value. I checked the server.conf file and compared it with the backup I made of the entire Splunk etc/system/local directory, and the config in the files is the same.
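For reference, the setting that error refers to lives in server.conf stanzas like these (the value shown is only a placeholder; it must be replaced with your own non-default secret):

[general]
pass4SymmKey = <non-default secret>

[clustering]
pass4SymmKey = <non-default secret>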
We use dynamic tags, like ticket numbers or alert IDs, on all of our containers. We have a retention policy that deletes containers after a year of them not being updated. I would like something that removes all the unused tags, similar to that retention policy. So, if a tag with an event ID is no longer being used, it will delete the tag. We currently have thousands of tags, and it is starting to cause problems in the UI.
I'm trying to resolve an issue where Splunk sends email reports, but the information exported as an attachment uses a "chron number" format for dates instead of a more readable format like "September 30, 2024." Where can I implement a fix for this, and how can I do it?
The issue you're experiencing is related to event breaking, not the MAX_EVENTS setting. The warning suggests that Splunk is trying to merge multiple events into a single event, which exceeds the default limit of 256 lines. Your props should be on the indexers (your parsing instance or HWF), as only a very few settings work on the universal forwarder, such as EVENT_BREAKER_ENABLE, EVENT_BREAKER, and indexed extractions. The best way to address this here is to use:

LINE_BREAKER = ([\r\n]+)(?:,)
SHOULD_LINEMERGE = false

These settings in your props.conf on the indexer will help ensure that each JSON object is treated as a separate event, preventing the merging that's causing the warning. Additionally, for JSON data like this, you might want to consider using this on the search head only:

KV_MODE = json

This setting helps Splunk interpret the JSON structure at search time, making it easier to extract and query specific fields from your JSON data. Please UpVote if this is helpful.
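Putting that together, a minimal props.conf sketch might look like this (the sourcetype name is a placeholder assumption):

# props.conf on the indexer / heavy forwarder (parsing tier)
[your_json_sourcetype]
LINE_BREAKER = ([\r\n]+)(?:,)
SHOULD_LINEMERGE = false

# props.conf on the search head
[your_json_sourcetype]
KV_MODE = json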
Looking at the data, you can use something like this:

| makeresults
| eval servers="Server1,Server2,Server3,Server4,Server5,Server6,Server7,Server8,Server9,Server10"
| eval statuses="Completed,Completed,Completed,Completed,Completed,Completed,Pending,Pending,Pending,Pending"
| eval pairs=mvzip(split(servers,","), split(statuses,","))
| mvexpand pairs
| eval ServerName=mvindex(split(pairs,","),0), UpgradeStatus=mvindex(split(pairs,","),1)
| stats count as total_servers, count(eval(UpgradeStatus="Completed")) as completed_count, count(eval(UpgradeStatus="Pending")) as pending_count
| eval completed_percentage = round(completed_count / total_servers * 100, 0)
| eval pending_percentage = round(pending_count / total_servers * 100, 0)
| eval "Completed Servers" = completed_count . " (" . completed_percentage . "%)"
| eval "Pending Servers" = pending_count . " (" . pending_percentage . "%)"
| fields "Completed Servers", "Pending Servers"

Please upvote if this is helpful.
Can you share your dashboard code so we can help you? Please use the code element (</>) when you paste it.
Each scheduled report has a single set of attributes.  If multiple attributes (time range, cron schedule, etc) are needed then the report should be cloned and new attributes set on the copy.
Hi, Are you asking if you can filter spans within a trace by a time range? If so, I don't think that is possible.
Are you looking at the dispatch directory, or at how many search jobs are running? If the latter, you can use the _audit index to get the number of jobs.
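A minimal sketch of that _audit approach (assuming the default audit fields, where info=granted marks searches as they are dispatched):

index=_audit action=search info=granted
| timechart span=1m count AS search_jobs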
I have sample data like below. Now I need to display the single value count of Completed and Pending in 2 different single value panels, with their percentage in brackets. (Screenshot is attached.) Total=10, Completed=6, Pending=4. Now I need to display a single value count of Completed 6 (60%) and a second single value count of Pending 4 (40%), with the percentage in brackets, in the 2 panels shown in the photo. Please provide me with the query.

ServerName             UpgradeStatus
==========             =============
Server1                Completed
Server2                Completed
Server3                Completed
Server4                Completed
Server5                Completed
Server6                Completed
Server7                Pending
Server8                Pending
Server9                Pending
Server10               Pending
Hi https://community.splunk.com/t5/Splunk-Search/Is-there-a-way-to-monitor-the-number-of-files-in-the-dispatch/m-p/48389 This gives me the current dispatch count, but I am looking to make a time chart. Using | rest, _time does not come back, so I can't make a time chart. I am thinking that if I run the command each minute in a saved search and output to a .csv with a timestamp, that might work!
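A minimal sketch of that saved-search idea (the lookup file name is a placeholder assumption):

| rest /services/search/jobs count=0
| stats count AS dispatched_jobs
| eval _time=now()
| outputlookup append=true dispatch_job_counts.csv

A follow-up search could then read the lookup back and chart it, for example: | inputlookup dispatch_job_counts.csv | timechart span=1m max(dispatched_jobs) AS dispatched_jobs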
Hi! It looks like there's an authentication failure on the Azure side. You need to assign the correct permissions to the Azure app. Before proceeding with configuring https://docs.splunk.com/Documentation/AddOns/released/MSCloudServices/ConfigureappinAzureAD, ensure your storage account token (SAS) has the following privileges.

Use either an Access key OR a Shared Access Signature with:
Allowed services: Blob, Table
Allowed resource types: Service, Container, Object
Allowed permissions: Read, List

Please UpVote if this is helpful.