All Topics

Hi all, I can't seem to generate a HEC token. Help is appreciated.
I'm trying to configure the Duo Splunk Connector on a Splunk heavy forwarder to leverage the web proxy configuration I have in Splunk's server.conf. This configuration works for all other Splunk web communication, but doesn't seem to apply to the Duo inputs.
Hi, I have built an AD inputlookup that includes lastLogon dates. When I attempt to find only those users with a last logon older than 90 days, I am unable to return any results.

| inputlookup AD.csv | search lastLogon=* accountStatus!="ACCOUNTDISABLE" | where lastLogon>=relative_time(now(),"-90d@d") | table employee lastLogon

I have parsed the dates with strftime and strptime within the lookup itself and can see the dates are displayed correctly, but no luck refining the results to just those of interest. I have tried defining the relative time and reparsing the dates within the search itself, and I have tried rearranging the date format, making sure to include the four-digit year. Still no luck. Not sure what I am missing. Any help would be appreciated.
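A sketch of one likely fix, assuming lastLogon in the lookup is stored as a formatted string rather than as epoch time (the strptime format string below is an assumption and must match your actual date format). relative_time() returns an epoch number, so the string field has to be converted before the comparison, and "older than 90 days" means the epoch value is *smaller* than the cutoff, so the comparison should be <=, not >=:

```
| inputlookup AD.csv
| search lastLogon=* accountStatus!="ACCOUNTDISABLE"
| eval lastLogon_epoch=strptime(lastLogon, "%Y-%m-%d %H:%M:%S")
| where lastLogon_epoch <= relative_time(now(), "-90d@d")
| table employee lastLogon
```

With >=, the search returns logons *newer* than the cutoff, which, combined with a string-vs-number comparison silently failing, can produce zero results.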
Hi, I'm working on displaying all map markers at once. Is there a way to disable clustering in maps?
This seems to be a common question and I've read several previous discussions. The issue seems to be that the default Linux UF config 'knows' the FQDN and returns that for log files which do not have a 'host' value, but then some of the most important files, e.g. /var/log/messages, do include a host, and so the UF 'defers' to that value, even if it's not the FQDN. The simplest solution has been to update your Linux servers' rsyslog config to record the FQDN in all logs. But I am trying to avoid walking my environment to make that change. Instead I am looking for a specific example of the required transforms.conf, which I could push to all UFs (via a deploy-app) so that they 're-substitute' the FQDN for the short 'host' value. Can someone please show me how? Thank you! P.S. I am also trying to avoid doing this at the indexer, both because it is unclear whether the indexer has access to the FQDN and also because this is a shared environment and I do not have permission to edit that system-wide; I am only trying to fix my dept's servers.
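A sketch of what such a host-rewrite transform could look like (stanza names and the FQDN are placeholders). One important caveat: transforms with DEST_KEY = MetaData:Host run in the parsing pipeline, which a universal forwarder normally skips, so a config like this typically takes effect on a heavy forwarder or indexer tier rather than on the UF itself; and since the FQDN differs per server, a static FORMAT value would have to be templated per deployment client, which is part of why the rsyslog-side fix is usually recommended:

```
# props.conf (in a deployed app)
[source::/var/log/messages]
TRANSFORMS-set_fqdn = set_host_fqdn

# transforms.conf
[set_host_fqdn]
REGEX  = .
DEST_KEY = MetaData:Host
FORMAT = host::myserver.example.com
```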
Looking to alert based on the following scenario:

Event 1: Device: XYZ, Status: Clear, SHA: 12345, Time: 12:30
Event 2: Device: XYZ, Status: Blocked, SHA: 12345, Time: 12:15
Event 3: Device: ZZZ, Status: Blocked, SHA: 34567, Time: 12:10
Event 4: Device: CCC, Status: Blocked, SHA: 45678, Time: 12:00

Alert for Events 3 and 4, but not for Events 1 or 2, since the status changed from Blocked to Clear within a certain timeframe, say 30 min, and the Device and SHA match. Any help appreciated!
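One way to sketch this in SPL (the index name and field names below are assumptions): group by Device and SHA over the 30-minute window and alert only when the *most recent* status for that pair is still Blocked, i.e. no later Clear arrived:

```
index=edr_events earliest=-30m
| stats latest(Status) as last_status latest(_time) as last_seen by Device, SHA
| where last_status="Blocked"
```

stats latest() keeps the newest event per Device/SHA pair, so a Blocked followed by a Clear (Events 1-2) drops out, while Events 3 and 4 remain. Note that if the alert runs more often than the window length, the same Blocked event can fire repeatedly; throttling on Device+SHA would suppress the duplicates.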
A couple of weeks ago, I migrated my Splunk server from one Windows Server to another. Same specs, just more capacity. However, when I look at the "License Usage - Previous 30 Days" prebuilt dashboard, the only option for "License Masters" that I have is the old server. The latest license usage data that I have is the day of the migration. I confirmed that the new server is acting as the master, with no errors. Any suggestions?
Hi there! I'm running this query:

index="staging" | eval raw_len=len(_raw) | eval raw_len_gb = raw_len/1024/1024/1024 | stats sum(raw_len_gb) as GB by kubernetes_namespace | where GB > 0.5

When I run this query in "Search", I choose "Last 24 hours". I want to save this query as an alert that runs, let's say, once an hour. The question is: will it run this query the way I run it in Search (over the last 24 hours), or do I need to specify the time range (last 24 hours) inside the query? Thanks, Aleksei
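For what it's worth: a saved alert stores the time range that was in the time picker when you saved it, so "Last 24 hours" is normally kept. Making the window explicit in the query itself avoids any surprises, e.g. (a sketch; -24h@h snaps to the hour boundary, use plain -24h for a strictly rolling window):

```
index="staging" earliest=-24h@h latest=now
| eval raw_len=len(_raw)
| eval raw_len_gb=raw_len/1024/1024/1024
| stats sum(raw_len_gb) as GB by kubernetes_namespace
| where GB > 0.5
```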
I want to monitor multiple clients' networks with my Splunk install, and I need to make sure that the communication between those networks and my Splunk instance is secured. Much of the information I am looking to collect is syslog and SNMP data. Would I need to install an indexer locally and then use that to forward the information to my main install? I would then be looking to set up a few dashboards to look at the different clients as well. At this point, I just want to make sure that the information is being transferred across the Internet in a secure manner.
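A minimal sketch of the forwarder-side TLS output, assuming a (heavy) forwarder at each client site collects the local syslog/SNMP data and sends it to a central indexer over SSL. The hostname, port, and certificate paths are placeholders, and exact attribute names vary somewhat by Splunk version:

```
# outputs.conf on the client-site forwarder
[tcpout:secure_to_central]
server = splunk-central.example.com:9997
useSSL = true
clientCert = $SPLUNK_HOME/etc/auth/client.pem
sslPassword = <certificate key password>
sslVerifyServerCert = true
```

Syslog and SNMP traps would typically land on a local syslog server first (writing files the forwarder monitors), so only the Splunk-to-Splunk TLS channel crosses the Internet.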
Hello All, I have an alert policy which triggers at 10%, evaluated every 15 minutes. The current cron expression for this is */15 * * * *. Because transactions are lower overnight and on the weekend, I want to use a different condition then, i.e. trigger at 50%. So my questions are:

1. For the existing 10% alert, I want to schedule it only on weekdays from 8 AM to 5 PM. Would the cron expression be */15 8-17 * * 1-4?
2. For the new 50% alert, I want to schedule it from 5 PM to 8 AM the next day, and all day over the weekend. Would the cron expression be */15 17-8,0-23 * * 1-4,5-0?
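For reference, a sketch under standard cron semantics (fields are minute, hour, day-of-month, month, day-of-week; day-of-week 0 or 7 = Sunday, so weekdays are 1-5, not 1-4). An hour range like 8-17 keeps firing through 17:45, so 8-16 is what covers "until 5 PM". Cron also cannot wrap an hour range across midnight (17-8 is invalid), so the overnight/weekend case typically needs more than one schedule, which in Splunk means cloning the alert, since each saved search takes one cron expression:

```
# 10% alert: every 15 min, weekdays 08:00-16:45
*/15 8-16 * * 1-5

# 50% alert: weekday evenings, weekday early mornings, and all weekend
*/15 17-23 * * 1-5
*/15 0-7  * * 1-5
*/15 *    * * 0,6
```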
I need to upgrade a standalone Splunk Enterprise 6.4.1 system to 8.0. This needs two steps: first, upgrade to 7.0. Where can I download version 7.0?
Hi Experts, I have pushed a props.conf to all the indexers from the CM using the "splunk apply cluster-bundle" command. Now I want to delete this app on all the peer nodes via the CM only, so that it gets deleted from all indexers in one shot. Can I achieve this? I do not think the rollback option will work, because I do not have any previous version of this app. If I push a blank props.conf for the same sourcetype, will that work? Not sure. Regards, Vg
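One common approach, sketched here under the assumption of a default *nix install ("my_props_app" is a placeholder name): the peers converge on whatever the bundle contains, so removing the app directory from master-apps on the CM and re-applying the bundle should remove it from every peer in one shot, without needing a rollback or a blank props.conf:

```
# on the cluster manager
rm -r $SPLUNK_HOME/etc/master-apps/my_props_app
$SPLUNK_HOME/bin/splunk apply cluster-bundle
```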
Hi, I have installed the Palo Alto firewall app in my Splunk Cloud. Can someone please help me with how to connect Splunk and the Palo Alto firewall to each other? What are the steps? I am confused. Which permissions do I need for this configuration? In the Splunk Cloud documentation, a lot of the material is for Splunk Enterprise, not Splunk Cloud. Thanks!
I have a data set similar to the following:

"_time",source,increment
"2020-02-26","third",
"2020-02-25","third","yes"
"2020-02-21","third",
"2020-02-20","third","yes"
"2020-02-29","second",
"2020-02-28","second","yes"
"2020-02-27","second","yes"
"2020-02-26","second","yes"
"2020-02-25","second","yes"
"2020-02-24","second","yes"
"2020-02-23","second","yes"
"2020-02-22","second","yes"
"2020-03-01","first",
"2020-02-29","first","yes"

I would like to make this chart with first=blue, second=red, and third=green. So for each "yes" in the increment column, add 1 to the current count for that source; otherwise reset the count back to 0. If a source does not have a reset row, it should continue at the current count to the end of the chart. Is this possible?
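A sketch in SPL, assuming the data is already indexed with fields named source and increment: the classic trick for a resetting streak count is one streamstats to number the "reset" events into groups, then a second streamstats to accumulate within each group, so the running count restarts at 0 after every non-yes row. The leading sort walks each source oldest-first so the accumulation runs forward in time:

```
| sort 0 source _time
| eval inc=if(increment="yes", 1, 0)
| streamstats count(eval(inc=0)) as grp by source
| streamstats sum(inc) as running by source, grp
| timechart span=1d max(running) by source
```

grp only increments when a reset row is seen, so every unbroken run of "yes" rows shares one (source, grp) pair and running climbs within it; the reset row itself starts a new group at 0.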
I have been asked to create an alert that looks at the index sizes (all indexes) for today, and compares them to the sizes as they were one week ago. I know I can get the index sizes for the last 7 days with:

index=_introspection component=Indexes | eval data.total_size = 'data.total_size' / 1024 | timechart span=1d max("data.total_size") by data.name

However, how can I compare the sizes of each index, one by one, between today and 7 days ago? Thanks for the help.
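A sketch of one way to do the comparison (field names follow the _introspection data already in your search; the "today"/"week_ago" labels are mine): bucket per day, keep only today's bucket and the one from 7 days ago, then pivot so each index gets both values side by side:

```
index=_introspection component=Indexes earliest=-7d@d
| eval size='data.total_size'/1024
| bin _time span=1d
| stats max(size) as size by _time, data.name
| eval day=case(_time>=relative_time(now(),"@d"), "today",
                _time<relative_time(now(),"-6d@d"), "week_ago")
| where isnotnull(day)
| chart max(size) over data.name by day
| eval growth=today-week_ago
```

From there, a where clause on growth (e.g. a percentage threshold) would make it alertable.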
Now that the data is populating properly and the Real-Time tab is doing what it should be, I've moved on to troubleshooting some of the other sections. Starting with Audience, I get no results in the main Sessions box, and the Pageviews, Pages/Session, and Bounce Rate panels all show errors like: Error in 'map': Did not find value for required attribute 'site'. It looks like the data model is perhaps not working, but I have no idea how to troubleshoot it. Could it be because my data isn't in the main index?
As the title already states, it is expected to list all indexes, not just the internal ones. I have read in other questions that a possible solution is to set replication_factor to auto, but I'm not quite sure how to do this; any advice? There is data in the custom indexes; the only issue is that I'm not able to see them in this list. Thanks in advance. UPDATE: Tried adding repFactor=auto to each index stanza; it didn't work. Do I have to move the indexes.conf file to the _cluster folder? Should I expect everything to work normally if I delete the custom app and migrate to _cluster?
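For a clustered environment, a sketch of where the setting would typically go (the index name and paths below are placeholders): custom index definitions generally need to live on the cluster manager, e.g. under master-apps/_cluster/local or a manager-distributed app, and be pushed to the peers with `splunk apply cluster-bundle`. Also note that repFactor=auto only applies to data indexed after the change, so pre-existing buckets are not replicated retroactively:

```
# indexes.conf under $SPLUNK_HOME/etc/master-apps/_cluster/local on the manager
[my_custom_index]
homePath   = $SPLUNK_DB/my_custom_index/db
coldPath   = $SPLUNK_DB/my_custom_index/colddb
thawedPath = $SPLUNK_DB/my_custom_index/thaweddb
repFactor  = auto
```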
Splunk is not restarting because we are getting the error "kvstore port [8191] - port is already bound". After checking, I observed that the process was started as root, and so while restarting, Splunk assumes the port is taken by another process. I killed the process and was able to start Splunk. But I want to know the reason, and the resolution to prevent this from happening in the future. I have checked and verified that /var/lib/splunk/kvstore/mongo is owned by splunk. But some of the files, such as "admin.0", "admin.ns", "config.0", and "config.ns", are owned by root, not splunk. I want to know what those files are and whether their ownership should also be changed to splunk. Also, splunk.key has the proper permissions.
Hi Expert, can we add or monitor all the templates mentioned in the link "https://docs.splunk.com/Documentation/AddOns/released/MSSQLServer/Datatypes" for the "Splunk Add-on for Microsoft SQL Server" for a selected database in one shot? We have a requirement to monitor all the templates, so I'm thinking we can monitor all template-related data in one shot, as we do in "Splunk_TA_nix".
I'm using Splunk Enterprise with a developer license. I have log files on my computer (access and error logs). I successfully indexed them and I can do searches. I have Splunk Security Essentials installed, and now I want to test the previously indexed data against the use cases provided by Security Essentials. I read the docs and everything else I found on Google, but I don't get it. When I try to use "Automated Introspection" in "Data Inventory", I get no results. When I try to use "Data Source Check", I get no results. I don't know what to do. My task is to apply the given use cases to the data from the access and error logs and to evaluate whether they are usable in our context. Further on, I have to create my own use cases to get broad coverage across many use cases. All of these must be based on Kill Chains and MITRE ATT&CK. I have no idea how to solve my problem with the data and how to go on with my task. Thanks in advance.