All Topics

I am getting started with the deployment server (DS) to deploy new configurations to universal forwarders (UFs). I need to view the list of server classes, see what they contain and do, and learn how to add or edit server classes. Thanks in advance.
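One way to inspect server classes without opening serverclass.conf directly is the deployment server's REST endpoint. A minimal sketch, run on the deployment server itself (the whitelist/blacklist field names are an assumption about the entity attributes; Forwarder Management in the web UI shows the same information):

| rest /services/deployment/server/serverclasses splunk_server=local
| table title whitelist.0 blacklist.0

Adding or editing server classes is typically done in Forwarder Management, or by editing serverclass.conf and running splunk reload deploy-server.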
We are using the Splunk Add-on for AWS (Splunk Cloud), and we have successfully configured the Description input to fetch descriptions of the usual kinds of AWS resources: EC2 instances, S3 buckets, and so on. Yet we are missing some important resources that we need, such as FSx for Lustre file systems. The add-on seems to have a hard-coded list of resources. How can we add FSx? Is there a way to extend the list on our own? We can write the fetch calls if needed.
Hello, we have around 1,200 systems that have UFs on them, a mixture of Windows and Linux devices. I'm curious whether it's possible to use a platform like Tanium or SCCM to push the UF .MSI down to those systems and initiate the upgrade that way. Your input is greatly appreciated. Thanks.
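For reference, tools like SCCM generally just need the documented silent-install command. A minimal sketch of what the package would run on each Windows host (the .msi file name is a placeholder for whichever UF version is pushed; Linux hosts would use the .rpm/.deb or tarball instead):

msiexec.exe /i splunkforwarder-<version>-x64-release.msi AGREETOLICENSE=Yes /quiet

Running the newer MSI over an existing installation performs an in-place upgrade and preserves the existing configuration.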
Hello guys, does someone know whether it is possible to match search results against previous results of the same search? I have machines that can enter different modes; for the example, let's say a machine can enter mode A, B, or C. I receive a heartbeat every few seconds from hundreds of these machines, which leads to a very large dataset. But I am not interested in the heartbeats themselves; I am interested in the transitions between modes. Example:

Time      Machine_ID  Mode
10:00:00  1           A
10:00:01  2           C
10:00:02  2           C
10:00:03  1           B
10:00:04  2           B

So what I am basically interested in here is the transition of machine 1 from mode A to B and of machine 2 from C to B. In other words: I am searching for heartbeats where the mode is different from the mode of the previous heartbeat of the same Machine_ID. In the end, my result would look something like this:

_time     _time_old_Mode  machine_ID  new_mode  old_Mode
10:00:03  10:00:00        1           B         A
10:00:04  10:00:02        2           B         C

I have tried subsearches, but I was not successful. The simplified search for getting the heartbeats is currently:

index="heartbeat" | rex field=_raw "......(?P<MODE>.......)" | fields _time ID MODE

Performance is not crucial, as it is planned to run this at night for a summary index. Thanks in advance! Best regards
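In case it helps frame answers: this kind of previous-row comparison is usually done with streamstats rather than a subsearch. A minimal sketch building on the search above (output field names follow the desired table; untested against the actual data):

index="heartbeat"
| rex field=_raw "......(?P<MODE>.......)"
| sort 0 ID _time
| streamstats current=f window=1 last(MODE) as old_Mode last(_time) as _time_old_Mode by ID
| where isnotnull(old_Mode) AND MODE != old_Mode
| rename MODE as new_mode
| table _time _time_old_Mode ID new_mode old_Mode

With current=f and window=1, streamstats hands each event the values of the previous event for the same ID, so the where clause keeps only the transitions.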
I notice that some include .csv files. Do these .csv files need updating, or do they stay stale? How are the datasets updated? Please advise. Thank you very much.
Hey all, recently I migrated data for some indexes from a distributed Splunk instance to a clustered Splunk instance using the bucket-migration approach. After the migration, I can see the same data and event counts for each index in both the old and new instances for a specific time range set manually in the time range filter. But since the old instance is in the EDT time zone and the new instance is in UTC, when I compare the dashboards that use those indexes for validation, I see a count mismatch because of the time zone difference. I tried changing the preference to the same time zone on both instances, but it is not working. Can anyone please help and let me know how this issue can be resolved, so that the dashboards can be validated without setting the time range manually every time?
Hi! I have recently deleted a user. I should not have done that... Can I restore it? If anyone has any ideas, I'd appreciate it greatly. Thanks in advance.
Hi guys! I recently started using Splunk Cloud with the Jenkins Add-on, which works great. I have linked multiple Jenkins instances; a lot of them were test instances with dummy names. Now that the tests are done, I'd like to remove all the dummy names from the 'Jenkins Master' drop-down list in the Jenkins add-on section of Splunk. I tried deleting all Jenkins-related indexes and the HTTP Event Collector, and I uninstalled the Jenkins add-on itself. After reinstalling it, I can still see all of those dummy Jenkins master names. I'd be grateful if you could help me remove them. Thanks!
Hello, I'm very new to the SPL world, so forgive me if this is basic. I am looking to create a visualization that charts out:

1) Errors reported for each day of the week in the last 7 days
2) A baseline of average errors per day of the week over the last 60 days

So far I have, as an example:

index=main sourcetype="access_combined_wcookie" status>=500 | chart count as "Server Errors" by date_wday | eventstats avg("Server Errors")

This gives me the running average for errors, but not something like the following, which I could then chart:

Day of the Week  Number of Errors  60-Day Average Errors for That Day
Monday           14                12.38
Tuesday          10                13.69
etc.

Any help and an explanation of the how would be much appreciated. Thank you in advance.
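A minimal sketch of one way to get both columns at once: compute per-day counts per weekday over 60 days, derive the weekday average with eventstats, then keep only the last 7 days (index, sourcetype, and thresholds follow the example above; untested):

index=main sourcetype="access_combined_wcookie" status>=500 earliest=-60d@d
| bin _time span=1d
| eval weekday=strftime(_time, "%A")
| stats count as daily_errors by _time weekday
| eventstats avg(daily_errors) as avg_60d by weekday
| where _time >= relative_time(now(), "-7d@d")
| eval avg_60d=round(avg_60d, 2)
| stats sum(daily_errors) as "Number of Errors" first(avg_60d) as "60 Day Average" by weekday

The eventstats runs over the full 60 days before the where clause narrows the rows to the last week, so each weekday keeps its long-term baseline.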
Hello! Is there any information on whether virtual BOTS at .conf '21 will be organised for time zones other than AMER? I am mostly interested in an EMEA session. I could find only one date and time, and that is 1 AM here, which is not a convenient time to play CTFs. Regards, Damian
I'm very stuck: how can I have a streamstats function accumulate a total and reset at 9:00 AM every day? It's straightforward if I have an event at exactly 9:00 AM. But if the last event is at, say, 8:55 AM and the next event is at 9:15 AM, the reset occurs; however, it then keeps resetting for all events that occur between 9:00 AM and 9:59 AM, because the condition in my example below remains true throughout the hour.

index=main | eval Hour=strftime(_time,"%H") | streamstats reset_after="("Hour==09")" sum(Result) as Total

I tried specifying the minute as well, but the same situation exists if no event falls within the 9:00 minute:

index=main | eval Hour=strftime(_time,"%H%M") | streamstats reset_after="("Hour==0900")" sum(Result) as Total

I think I need to make a lookup that creates an event at 9 AM for each day, but I couldn't figure that out for time ranges greater than one day. I experimented with makeresults to create an event, but that needed an append, which messed up all the other parts of my query. I think the most elegant way would be to have an event created for every 9 AM before the query is run, but I can't figure it out. Any advice/ideas are welcome!

Dave
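One trick that avoids reset_after entirely: shift each timestamp back nine hours so the day boundary lands at 9:00 AM, then group the running total by that shifted day. A minimal sketch (assumes events are sorted ascending and ignores DST edge cases):

index=main
| sort 0 _time
| eval reset_key=strftime(_time - 9*3600, "%Y-%m-%d")
| streamstats sum(Result) as Total by reset_key

Every event between 9:00 AM one day and 8:59:59 AM the next shares the same reset_key, so the total restarts at the first event on or after 9:00 AM, with no need for an event to exist at exactly 9:00.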
Hello, I am trying to expose data in a lookup from a "logic" app to a "presentation" app for users that have the "user" role. To simplify the situation: I have a lookup file "lookup_file.csv" with a corresponding lookup definition "lookup_file" in the "logic" app. Both knowledge objects have "global" sharing and permissions set to "read" for the "user" role. Since I am an admin, I gave "read/write" permissions to the "admin" role. When I run the search "| inputlookup lookup_file" from the "presentation" app with my admin user, I have no issues reading the data. When I run the same command with a user that has the "user" role assigned, I get two errors:

1. The lookup table 'lookup_file' is invalid.
2. The lookup table 'lookup_file' requires a .csv or KV store lookup definition.

I have tried many configurations but cannot get the data to load in the "presentation" app with a user that has the "user" role. What am I missing? Any help would be greatly appreciated! Best regards, Andrew
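For reference, a minimal sketch of what the metadata in the "logic" app would need to look like for another app's users to read the lookup (the app directory name is a placeholder; both the .csv file object and the transforms definition need read access and global export):

# etc/apps/logic_app/metadata/local.meta
[lookups/lookup_file.csv]
access = read : [ admin, user ], write : [ admin ]
export = system

[transforms/lookup_file]
access = read : [ admin, user ], write : [ admin ]
export = system

A common gotcha is that the lookup definition is shared globally but the underlying .csv file object is not (or vice versa); both must be readable by the "user" role for inputlookup to work from another app.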
Dear all, we are trying to install an AppDynamics agent in a Kubernetes cluster. We have deployed appdynamics-operator successfully, but when we deploy the cluster agent (according to the manuals) we get an error:

error":"WATCH_NAMESPACE must be set","stacktrace":"operator-release/appdynamics-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\tappdynamics-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nmain.main

Please help me out. Best regards,
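The original YAML was not included in this excerpt, but this error from operator-sdk-based operators usually means the WATCH_NAMESPACE environment variable is missing from the container spec. A hedged sketch of the kind of env block the operator Deployment typically carries (placement is illustrative; check the AppDynamics manuals for the exact manifest it belongs in):

env:
  - name: WATCH_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace

This uses the Kubernetes downward API to set WATCH_NAMESPACE to the namespace the pod runs in.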
I get the result below when I use chart count over field-A by field-B. We can see there are cells with the value 0; is there any way to replace these 0s with a space? Thanks.

Over field value  by field value1  by field value2  by field value3  by field value4  by field value5  Total
Over value 1      0                0                1                0                0                1
Over value 2      0                0                0                603              0                603
Over value 3      0                0                12               0                0                12
Over value 4      0                0                0                600              0                600
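A minimal sketch of one way to blank out the zeros after the chart (note this turns the affected cells into strings, which can interfere with further numeric processing; field names are generic):

| chart count over field-A by field-B
| foreach * [ eval "<<FIELD>>" = if('<<FIELD>>' == 0, " ", '<<FIELD>>') ]

If the Total column should keep its values, list the by-field columns explicitly in foreach instead of using the * wildcard.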
Hi! I've set up the following app to be deployed to my universal forwarders for Windows:

[WinEventLog://Microsoft-Windows-Windows Defender/Operational]
index = windefender
disabled = false
evt_resolve_ad_obj = 1

This has worked flawlessly for years, until this week, when I stopped receiving any updates from that log until a restart of the universal forwarder. At first I thought it had something to do with our having updated all UFs to 8.2.2, but today when I did some investigating I noticed that one of the UFs wasn't updated and still runs version 7.2. So my guess is that it has something to do with the Splunk Enterprise installation/upgrade (we upgraded from 7.4 to 8.2.2 about one and a half weeks ago). It's not that the forwarder stops completely, because I still receive logging from the Security, System, and other event logs. It seems to be just the Defender log, and when I restart the Splunk service it starts sending again. Have I missed something, or should I open a ticket with Splunk?
Hi, I have a firewall log in which some of the destinations do not have an SNI, but I do have their IPs. I want to create/extract a new field from the destination IP to get destination details, for example the resolved host or the organization. Can someone please advise whether this is possible, and how? Thank you in advance.
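If it helps, Splunk ships a built-in external lookup for reverse DNS. A minimal sketch, assuming the destination IP sits in a field called dest_ip and the data lives in an index called firewall (both names are assumptions; organization/ASN data is not built in and would need a separate enrichment lookup, such as a GeoIP/ASN CSV):

index=firewall
| lookup dnslookup clientip as dest_ip OUTPUT clienthost as dest_host

The dnslookup definition with its clientip/clienthost fields is part of the default Splunk configuration.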
I have installed an app on local Splunk using the Splunk REST API endpoint https://<host>:<port>/services/apps/local, but when I try to install the app on a remote Splunk instance, I get errors like "Cannot perform action "POST" without a target name to act on" or "Unparsable URI-encoded request data". I included the name (the .tar or .spl file) and filename parameters as described in the Splunk docs: https://docs.splunk.com/Documentation/SplunkCloud/8.2.2106/RESTREF/RESTapps#apps.2Flocal

Using this endpoint I can create an app on the remote Splunk instance, but I want to upload the app package. I tried with the Python requests module and with Postman, and the same error came back. Is there any way to install an app on a remote Splunk instance using Postman or requests? Thank you in advance.
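For comparison, a hedged sketch of the kind of request the apps/local endpoint expects when installing from a package, per the REST docs linked above (host, credentials, and path are placeholders):

curl -k -u admin:changeme https://remote-host:8089/services/apps/local -d name=/tmp/myapp.spl -d filename=true -d update=true

One common catch is that the name path must exist on the remote Splunk server's own filesystem, not on the machine sending the request, so the .spl package usually has to be copied to the remote host first.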
I searched to see if someone had done this already but haven't found a good solution, so I wrote my own and thought I'd share it. Sometimes you get stats results that include columns with null values in all rows. It's a typical result of | rest calls when you're trying to list some Splunk objects. It's not that uncommon that, out of the several dozen or even hundreds of columns you get in your results, many of them are completely empty. So I thought I'd clean the results so they're easier to browse (and a bit lighter on the internet browser you're using).
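The post's own code is truncated in this excerpt. For illustration only, a sketch of one generic approach: transpose the results, drop the rows (former columns) that contain no values, and transpose back (the 0 arguments lift the default row limits; the rest endpoint is just an example source; untested on large result sets):

| rest /services/saved/searches
| transpose 0 column_name=fieldname
| eval has_value=0
| foreach "row *" [ eval has_value = has_value + if(isnotnull('<<FIELD>>') AND '<<FIELD>>' != "", 1, 0) ]
| where has_value > 0
| fields - has_value
| transpose 0 header_field=fieldname
| fields - column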
I have the following SPL, and I want to show the table below. The value of Total must equal the total count of events (1588). How can I put the total count of events into the Total variable?

index=abc
| stats count as Count by reason_code
| where reason_code != "false"
| addtotals col=t labelfield=reason_code label="Retrieval task cancelled" fieldname="Percentage"
| eval "Percentage" = round((Count/Total) * 100, 2)."%"
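A minimal sketch of the usual fix: addtotals col=t adds a summary row rather than a per-row field, so the grand total never lands in a Total field that eval can read. Computing it with eventstats instead (same field names as above):

index=abc
| stats count as Count by reason_code
| where reason_code != "false"
| eventstats sum(Count) as Total
| eval Percentage = round((Count/Total) * 100, 2)."%"

eventstats writes the grand total into a Total field on every row, so the percentage eval can reference it directly.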
Hello team, do Synthetic private agents support Windows 10 machines? In my organization, there are around 60+ private locations from which we perform Synthetic monitoring. Thanks, Kunal