All Posts


We have a report that generates data with the `outputlookup` command, and we need to schedule it multiple times with different time ranges. We want to run this report each day, but with different time ranges in sequential order. Each run requires the previous run to finish so it can load that run's lookup results. We can't just schedule a single report that updates the lookup, because it needs to run over a different time range each time it triggers. Is there any way we can schedule a report to run in this particular way? We thought about cloning it multiple times and scheduling each clone differently, but that is not an ideal solution. Regards.
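For illustration, a minimal SPL sketch of the pattern being described, with hypothetical index, field, and lookup names (each scheduled run loads the previous run's lookup, appends a new slice, and writes the lookup back; it assumes the lookup file already exists from an earlier run, and the earliest/latest values would still need to differ per run, which is exactly the scheduling problem being asked about):

| inputlookup daily_results.csv
| append
    [ search index=main earliest=-2d@d latest=-1d@d
      | stats count BY host ]
| outputlookup daily_results.csv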
Unfortunately, I can't recreate the problem outside the ITSI app, because it is inside the ITSI event management. The source code doesn't have anything about the table I mentioned, by the way. But thank you for trying to help, Maximiliano Lopes.
Run the query for All Time to identify the oldest bucket in the specified index (just to gather information). The fields start_days and end_days represent the time range of events contained within each bucket. Sort the buckets by end_days in descending order to find the oldest bucket in that index.

For example, if the end_days value is 500 and you only want to retain 400 days of data, configure the following parameter in your index settings:

frozenTimePeriodInSecs = 34560000    # seconds equivalent to 400 days

If you find this solution helpful, please consider accepting it and awarding karma points!
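As a minimal sketch of where that setting goes, assuming a hypothetical index named your_index:

# indexes.conf (hypothetical index name)
[your_index]
# Buckets whose newest event is older than 400 days (400 * 86400 = 34560000 s)
# become eligible for freezing (deletion, or archiving if a coldToFrozen
# location/script is configured).
frozenTimePeriodInSecs = 34560000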
Hello, I'm having trouble connecting a Splunk instance to an Azure Storage Account. After the account was set up, I configured my Splunk instance to connect to the Storage Account using the Splunk Add-on for Microsoft Cloud Services. When I enter the Account Name and the Account Secret, it gives an error. This was configured from "Configuration" > "Azure Storage Account" > "Add". I have checked the Account Name and the Access Key; they are correct. Looking at the logs, this was the only noticeable error that popped up:

log_level=ERROR pid=3270316 tid=MainThread file=storageaccount.py:validate:97 | Error <urllib3.connection.HTTPSConnection object at 0x7e14a4a8e940>: Failed to establish a new connection: [Errno -2] Name or service not known while verifying the credentials: Traceback (most recent call last):

Other than this, I saw some HTTP requests with 502 errors in splunkd.log, but I don't know if they are related. I have checked whether the Splunk machine can reach the Azure resource, and it can. It can also make API calls correctly. At this point I have no idea what could be causing this problem. Do you have any idea what checks I could do to find where the problem is? Did I miss some configuration? Could it be a problem on the Azure side? If so, what checks should I do? (I used the official guide: https://splunk.github.io/splunk-add-on-for-microsoft-cloud-services/Configurestorageaccount/) Thanks a lot in advance for your help.
This has been answered here: https://community.splunk.com/t5/Splunk-Search/Is-there-a-way-to-monitor-the-number-of-files-in-the-dispatch/m-p/48389

You can leverage this search and see if it helps with your monitoring:

index=_internal sourcetype=splunkd "The number of search artifacts in the dispatch directory is higher than recommended" TERM(count=*)
| timechart span=1h max(count)

Please upvote if this is helpful.
Thanks, this fixed the issue. I added the [indexAndForward] stanza and they all popped right back up after the restart. Beautiful!!!!
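For reference, a minimal sketch of what that kind of stanza looks like in outputs.conf on the deployment server (whether additional settings are needed depends on your environment; treat this as an illustration, not the exact fix applied here):

# outputs.conf on the deployment server (illustrative)
[indexAndForward]
# Keep a local copy of data on this instance in addition to forwarding
index = true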
@isoutamo Thanks for sharing all the valuable accepted-solution links. I will go through each of them.
@jawahir007 Thanks for providing this SPL. My retention is currently at 437 days and I have to reduce it by 30 days, which means I have to set the retention to 407 days. I also need to adjust this in the above query: earliest=-437d latest=-407d. Is that right?
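For what it's worth, the corresponding arithmetic for a 407-day retention, again with a hypothetical index name:

# indexes.conf (hypothetical index name)
[your_index]
# 407 days * 86400 seconds/day = 35164800 seconds
frozenTimePeriodInSecs = 35164800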
chart uses the values of the second field in the by clause (DeviceCompliance) to provide the field names / column headers, in this case "true" and "false". So if DeviceCompliance had three values, those would be the field names, each with its respective counts.
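A minimal sketch of that behavior, with hypothetical index and field names:

index=main
| chart count OVER user BY DeviceCompliance

With two values of DeviceCompliance this produces columns named true and false; if the field had three values (say full, partial, and none), you would get three columns instead, one per value.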
The following SPL will also provide the bucket age in days. Once you have this information, you can configure the frozenTimePeriodInSecs attribute to specify how long to retain data and when to move it out. Run the query again afterwards to verify that the data has been removed.

| dbinspect index=your_index
| eval end_days=round((now()-endEpoch)/86400,0)
| eval start_days=round((now()-startEpoch)/86400,0)
| table start_days, end_days, *
Hello! The key point is that Splunk stores all data in UTC, but displays it based on user preferences. This approach allows for flexibility without changing the actual data or dashboard configuration. Here's an example to illustrate. Let's say you have team members in New York (EST) and San Francisco (PST):

- An event occurs at 2:00 PM PST.
- The event is stored in Splunk as 10:00 PM UTC.
- The San Francisco team member (with a PST preference) sees it as 2:00 PM.
- The New York team member (with an EST preference) sees it as 5:00 PM.

Both are looking at the same data, but it's displayed in their local time. To achieve this:

- Ensure correct timestamp configuration on data sources.
- Have each user set their preferred timezone in Splunk settings.

This method maintains data consistency while accommodating different timezones without modifying dashboards. Please upvote if this is helpful.
This worked well, thanks.
Hi, I am looking to monitor the dispatch directory over time. I know I can get the current count by using this:

| rest /services/search/jobs
| stats count

But I am looking to run the check every minute and get a per-minute breakdown of how dispatch grows over time. Rob
Hello, I understand the situation of managing multiple apps. Personally, I'm not a fan of merging or combining all apps together, as you may end up with merge conflicts and broken apps if it's not done correctly. This process needs to be handled very carefully. It's not just the apps directory we need to consider; we should also look into the users directory for private knowledge objects. If your goal is simply to put these apps in a code repository, you can dump the entire apps directory (excluding default apps) and the users directory from the search head and save them. If you believe there is no conflicting content in these apps, try to validate and consolidate packages as needed. It's better to start with the UI.

Regarding your specific situation with Splunk Enterprise 9.2.1 on Windows and multiple apps: creating a new app that references dashboards from other apps is possible, but it has some limitations. You can use the <dashboard ref="..."> tag in your new app to reference dashboards from other apps. For example:

<dashboard ref="/servicesNS/nobody/myapp1/data/ui/views/dashboard1">
</dashboard>

This approach allows you to maintain your existing app structure and version control while creating a centralized menu system. However, be aware that this method:

- Requires careful management of permissions across apps
- May not work seamlessly with all types of dashboards (especially those with complex dependencies)
- Could potentially break if the original apps are uninstalled or significantly modified

An alternative approach could be to use a navigation app that doesn't actually contain the dashboards but provides links to them in their original locations; see the sketch below. This would allow you to maintain your current structure while providing a unified entry point. If you decide to copy dashboards into the new app, consider using symbolic links or a build process in your DevOps cycle to maintain a single source of truth. Remember to thoroughly test any changes in a non-production environment first. Each approach has its pros and cons, so choose the one that best fits your specific needs and infrastructure.

Please upvote if you find this answer useful. Thanks
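As a minimal sketch of that navigation-app alternative, the new app's data/ui/nav/default.xml could simply link to the dashboards where they already live (the app and dashboard names here are hypothetical):

<nav search_view="search">
  <collection label="My Dashboards">
    <!-- plain links into the original apps; nothing is copied -->
    <a href="/app/myapp1/dashboard1">Dashboard 1 (myapp1)</a>
    <a href="/app/myapp2/dashboard2">Dashboard 2 (myapp2)</a>
  </collection>
</nav>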
In my Splunk dashboard I have a table with timestamps in UTC, but my team works across multiple timezones, and they want me to make PST the default timezone in the dashboard. How can we achieve this in a Splunk dashboard?
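For context, a quick-and-dirty SPL-level sketch that is sometimes used for this, assuming the results are otherwise rendered in UTC and using a fixed UTC-8 offset (it ignores daylight saving, so the per-user timezone preference described in the reply above is usually the better route):

index=main
| eval pst_time = strftime(_time - 8*3600, "%Y-%m-%d %H:%M:%S")
| table pst_time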
Hi, there has been a major update to how the DS works in 9.2. There are several threads in the community where this is discussed, but basically these links describe the change and how to fix it:

https://docs.splunk.com/Documentation/Splunk/9.3.1/Updating/Upgradepre-9.2deploymentservers
https://community.splunk.com/t5/Splunk-Enterprise/After-upgrading-my-DS-to-Enterprise-9-2-2-clients-can-t-connect/m-p/694582 (explanation later in this article; note the link to splunk.my.site.com)

r. Ismo
I did a recent upgrade of Splunk, but now I notice my clients are not phoning in for some reason. This is my first upgrade in a production environment, so any help troubleshooting would be great. I still see my client configs on the backend, but I'm not sure why they are not reporting in the GUI.
I'm using Splunk Enterprise 9.2.1 on Windows. On my search head I have a bunch of apps (40+) laid out as follows:

/etc/apps/myapp1
/etc/apps/myapp2
/etc/apps/myapp3

etc. Each app of course has its own dashboards defined. Now I'd like to group all these dashboards under one app and create a menu system for them. I control each app under Git and can deploy them using a DevOps cycle. What I would like to do is create this new app but simply reference the dashboards that reside in the other apps, so that I keep my source/version control. Is this possible, or would I simply have to copy all the dashboards/views into this new app?
So I did some research into when the uptick happened. It started last Monday, before I started upgrading Splunk. I blacklisted the hosts that were producing the large amount of audit logs and reached out to the department for those hosts. It looks like it wasn't an app, but servers possibly added or ingesting more due to a change. I will find out more once the department responds. Until then, I will keep them blacklisted so that we stay under our license amount.
Thanks, this worked. Two additional questions: why the chart command specifically? And for this statement:

| eval total=true + false

Is the reason this line works that there are only two values available to the previous statement, namely true and false? It is not the case here, but if there were three values (full, partial, and none), would the same type of statement require defining these somewhere before this?
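For illustration, a minimal sketch of what the three-value case might look like, with hypothetical index, field, and value names; the eval just has to name every column that chart produced, and fillnull guards against columns that are null for some rows:

index=main
| chart count OVER user BY status
| fillnull value=0 full partial none
| eval total = full + partial + none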