All Topics


Hey guys, I keep getting this privacy error every time I attempt to download Splunk Enterprise on Mac. I read somewhere that removing the s from https should resolve the issue, but I still keep getting an error. Thanks for any help.

https://download.splunk.com/products/splunk/releases/9.1.1/osx/splunk-9.1.1-64e843ea36b1-darwin-64.tgz

"download.splunk.com normally uses encryption to protect your information. When Chrome tried to connect to download.splunk.com this time, the website sent back unusual and incorrect credentials. This may happen when an attacker is trying to pretend to be download.splunk.com, or a Wi-Fi sign-in screen has interrupted the connection. Your information is still secure because Chrome stopped the connection before any data was exchanged."
Hi all, I deployed Splunk and enabled indexer clustering. Then I created an index in master-apps, and it has been replicated to the peer nodes. Now I want to export some events from one index and import them into the newly created index. I tested multiple methods. I exported events using the following command:

./splunk cmd exporttool /opt/splunk/var/lib/splunk/defaultdb/db/db_1305913172_1301920239_29/ /myexportpath/export1.csv -et 1302393600 -lt 1302480000 -csv

and imported the result using the following command:

./splunk cmd importtool /opt/splunk/var/lib/splunk/defaultdb/db /myexportpath/export1.csv

but the data was not replicated to the indexers. I tried another method using the UI on the cluster master: I imported my events into the newly created index. Searching from the cluster master everything is OK, but these events are not replicated to the indexers. Note that my new index does not show up in the Indexes tab under Indexer Clustering: Manager Node; there are just three indexes: _internal, _audit, _telemetry. I think I went about this the wrong way. Does anyone have an idea?
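One way to copy events into the new index without touching buckets on disk is to re-ingest them with the collect command; a minimal sketch, assuming the source events are searchable as index=main (defaultdb is main's directory name) and my_new_index is a placeholder for the target index, which must already exist:

index=main earliest=1302393600 latest=1302480000
| collect index=my_new_index

Because collect sends the copied events back through the normal ingestion path, they get indexed on the peers and replicated like any other data, provided the search head forwards its data to the indexer tier, which is the usual clustered setup.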
Hi All, I am from an application production support team and we use Splunk as our monitoring tool, along with other tools. We use Splunk primarily to understand user actions via logs. We have built some traditional dashboards and alerts to enhance our monitoring, and our application health checks include manually looking at Splunk dashboards for any spike in errors. I would like to automate this step, so that the dashboards are checked automatically and reported on whenever any query on a dashboard is trending red, preferably by posting a red/amber/green status to a Teams chat or email. Any leads on how to build this solution would be much appreciated.
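One approach is to schedule each dashboard panel's underlying search as an alert that computes its own status; a minimal sketch, where index=app_logs, the log_level field, and the thresholds are all hypothetical placeholders:

index=app_logs log_level=ERROR earliest=-15m
| stats count as error_count
| eval status=case(error_count>100, "red", error_count>20, "amber", true(), "green")
| where status!="green"

Scheduled as an alert, this fires only when the panel would be trending amber or red, and the alert's actions (email, or a webhook pointed at a Microsoft Teams incoming webhook) can carry the status text.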
Does anyone know how to integrate Glassbox session events with Splunk? By the way, there is no option to export Glassbox session events alone, but we can see all the events in the Expert View section inside the session. Alternatively, is there a way to export these events from a Glassbox session in JSON/text format?
Is there an application to send SOAR files to a server?
Is it possible to develop apps outside SOAR with an IDE like Visual Studio, test them from there, and then import the app into SOAR?
I am working on setting up a third-party evaluation of a new network management and security monitoring installation for an enterprise network that uses Splunk for various log aggregation purposes. The environment has 6 indexers with replication across 3 sites, and hundreds of indexes set up and configured by the installers. The question that I need to write a test for: "Is there sufficient storage available for compliance with data retention policies? (e.g. is there sufficient storage available to meet 5-year retention guidelines for audit logs?)"

I would like to run simple search strings to produce the necessary data tables. I am no wizard at writing the appropriate queries, and I don't have access to an environment complicated enough to try these things out before my limited time on the production environment to run my reports. After reading through the forums for hours, it seems that answering this storage question may be harder than originally anticipated, as Splunk does not seem to have any default awareness of how much on-disk space it is actually consuming.

1. Research has shown that I need to make sure the age-off and size cap for each index are appropriately set via the frozenTimePeriodInSecs and maxTotalDataSizeMB settings in each indexes.conf file. Is there a search I can run that will provide a simple table of these two settings for all indexes across the environment? e.g. index name, server, frozenTimePeriodInSecs, maxTotalDataSizeMB

2. Is there any other configuration that determines the space allocated to an index and that can be returned with a search?

3. Is there a search string I can run to show the current storage consumption (size on disk) for all indexes on all servers? I have seen some options here on the forums and I think the answer for this one might be the following (note that dbinspect reports rawSize in bytes, so converting it to GB needs three divisions):

| dbinspect index=*
| eval sizeOnDiskGB=sizeOnDiskMB/1024
| eval rawSizeGB=rawSize/1024/1024/1024
| stats sum(rawSizeGB) AS rawTotalGB, sum(sizeOnDiskGB) AS sizeOnDiskTotalGB BY index, splunk_server

4. What is the best search string to determine the average daily ingest "size on disk" by index and server/indexer, to calculate the storage required for retention policy purposes? So far, I have found something like this:

index="_internal" source="*metrics.log" group=per_index_thruput
| eval gb=kb/1024/1024
| timechart span=1d sum(gb) as "Total Per Day" by series useother=f
| fields - VALUE_*

I'm not quite sure what is happening above with useother=f or the last line of the search; the thread I found it on is dead enough that I don't expect a reply. I would need any/all results from these searches in table format, sorted by index and server, to match up with the other searches for simple compilation. Any help that can be provided is greatly appreciated.
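For question 1, the retention and size-cap settings are exposed over REST, so one search run from a search head that has all the indexers as search peers can build the table; a minimal sketch:

| rest /services/data/indexes
| table title, splunk_server, frozenTimePeriodInSecs, maxTotalDataSizeMB, currentDBSizeMB
| sort title, splunk_server

Here title is the index name, and currentDBSizeMB is a configuration-side size figure that can be cross-checked against the dbinspect output from question 3. As for question 4's search: useother=f simply stops timechart from collapsing less-common series into a single OTHER column; it does not change the totals.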
Hello All, I'm a relative newbie and hoping the community can help me out. I'm kind of stuck on a query and I can't figure out how to get the correct results.

I have an event that has a referer and a txn_id. Multiple events with the same referer field can have the same txn_id.

Referer  Txn_id    response_time
google   abcd1234  42
google   abcd1234  43
google   abcd1234  44
google   1234abcd  45
google   1234abcd  46
google   1234abcd  47
google   1234abcd  48
yahoo    xyz123    110
yahoo    123xyz    120
yahoo    123xyz    130

What I am trying to do is get the average number of events per txn_id for each referer, and the average of the response times for it. So something like this:

Referer  avg(count txn_id)  avg(response_time)
google   3.5                44.5
yahoo    1.5                120

Any help would be appreciated. Thanks!
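A two-level stats usually fits this shape: first collapse each transaction, then average per referer. A minimal sketch, with <your base search> standing in for the real event search:

<your base search>
| stats count as events_per_txn, avg(response_time) as txn_avg_rt by Referer, txn_id
| stats avg(events_per_txn) as "avg(count txn_id)", avg(txn_avg_rt) as "avg(response_time)" by Referer

Note the second stats averages the per-transaction averages; if you want the average over all events instead, compute avg(response_time) in a separate stats by Referer alone.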
Hi, I am working on a query where I need to display a table based on a multiselect input. The multiselect input options are: (nf, sf, etc.)

When I select "nf", only the columns starting with "nf" should be displayed, along with "user" and "role", and the columns should appear in the same order as specified; the same should apply when I select multiple options from the multiselect input. But I am facing an issue getting the table into that order. I have tried using

<search query> | stats list(*) as * by user, role

but this jumbles the column placement into alphabetical order, which I don't want. I also tried setting tokens, with the field names starting with "nf" in one token and those starting with "sf" in another:

<search query> | table user, role, $nf_fields$, $sf_fields$

With this method I also faced an issue: for example, if I select only sf from the multiselect input, the fields starting with nf are also displayed, with empty values.

--> Is it possible to fix the placement of the columns? or,
--> Can the empty columns be removed based on the multiselect input?

Either approach works for me. Please help me solve this. Thanks in advance.
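One way to get both behaviors at once is to let a single multiselect token emit a wildcard list that table can consume; a sketch, assuming the multiselect input defines a token named prefix_tok with valueSuffix set to _* and delimiter set to a space (so selecting nf and sf makes the token expand to nf_* sf_*):

<search query> | table user, role, $prefix_tok$

Since table accepts wildcards and lays columns out in argument order, only the selected prefixes produce columns, in the order they were chosen, and unselected prefixes never show up as empty columns.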
How can I assign a value to the earliest argument in my query that is rounded down to the last 10 minutes? When I try

index=aaa earliest=((floor(now()/600))*600)

I get an error that ((floor(now()/600))*600 is an invalid term.
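The earliest argument only accepts literal epoch values or relative-time modifiers, not eval expressions, but a subsearch can compute the epoch and hand it back as a search term; a minimal sketch:

index=aaa [ | makeresults | eval earliest=floor(now()/600)*600 | return earliest ]

The subsearch emits earliest=<epoch>, which the outer search treats exactly as if the literal value had been typed.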
I'm installing Splunk Universal Forwarder using the following command:

choco install splunk-universalforwarder --version=9.0.5 --install-arguments='DEPLOYMENT_SERVER=<server_address>:<server_port>'

This installs a SplunkForwarder service that runs as the user NT SERVICE\SplunkForwarder. Reading the documentation, this account is a virtual account, which is a managed local account. Despite being described as a managed local account, the documentation also states that "Services that run as virtual accounts access network resources by using the credentials of the computer account in the format <domain_name>\<computer_name>$." Currently, my Windows machines are joined to the AD domain, but I'm working to change that and not join them to AD in the future. I have a couple of questions here:

1. Can I use this default user (NT SERVICE\SplunkForwarder) even without joining the VM to the AD domain?
2. What limitations will I face changing from this NT SERVICE account to a local account?

Thanks.
I can't seem to be able to set a variable or a token as the window parameter of the streamstats command:

| streamstats avg(count) as avg_count window=$window_token$

| eval c = 2
| streamstats avg(count) as avg_count window=c

I get an error saying the option value is not an integer; it seems the value of the variable/token is not taken. Is there any way to change the parameter dynamically?

"Invalid option value. Expecting a 'non-negative integer' for option 'window'. Instead got 'c'."
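Options like window are fixed when the search string is parsed, before any events (and therefore any eval fields) exist, so window=c can never work. A dashboard token does work, because tokens are substituted into the search string before parsing; outside a dashboard, the map command can do the same substitution from a field value. A minimal sketch, where index=foo stands in for the real base search:

| makeresults | eval w=5
| map search="search index=foo | streamstats avg(count) as avg_count window=$w$"

map replaces $w$ with the field value from its input row before running the inner search, so streamstats sees a literal integer. (Inside a dashboard, escape it as $$w$$ so the dashboard doesn't consume the token first.)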
I am pretty new to ES correlation searches and I am trying to figure out how to add additional fields to notable events to make them easier to investigate. We have this correlation search enabled, "ESCU - Detect New Local Admin account - Rule":

`wineventlog_security` EventCode=4720 OR (EventCode=4732 Group_Name=Administrators)
| transaction member_id connected=false maxspan=180m
| rename member_id as user
| stats count min(_time) as firstTime max(_time) as lastTime by user dest
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `detect_new_local_admin_account_filter`

When I run the above search in the Search & Reporting app I get far more fields than I see under Additional Fields on the notable itself. For example, in the notable event the User field shows the SID and there are no other fields to identify the actual username. To fix this I could add the Account_Name field that shows up when I run the above search from the Search & Reporting app. I tried adding that field by going to Configure -> Incident Management -> Incident Review Settings -> Incident Review - Event Attributes, but it is still not showing. Am I missing something here?
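Incident Review can only display fields that actually exist in the notable event, and the stats command in this search discards everything except its aggregate and by fields, Account_Name included. A minimal sketch of carrying it through (assuming Account_Name is still present at that point in the pipeline):

| stats count min(_time) as firstTime max(_time) as lastTime values(Account_Name) as Account_Name by user dest

With the field surviving into the notable, the Incident Review - Event Attributes entry you added should then have something to display.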
I have a DB Connect input defined that produces the following output:

Date        Group_Name  Number_of_Submissions
2023-10-02  Apple       780
2023-10-03  Apple       1116
2023-10-04  Apple       1154
2023-10-05  Apple       786
2023-10-06  Apple       699
2023-10-02  Banana      358
2023-10-03  Banana      760
2023-10-04  Banana      254
2023-10-05  Banana      1009
2023-10-06  Banana      876
2023-10-02  Others      1265
2023-10-03  Others      1400
2023-10-04  Others      257
2023-10-05  Others      109
2023-10-06  Others      1709

I want this data displayed on a dashboard as a multi-line chart: the x-axis is the Date, the y-axis is the Number_of_Submissions, and different colored lines should represent the different groups. I am new to Splunk. Very new. I need succinct instructions please. Thanks!!!
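A chart over Date split by group produces exactly that picture; a minimal sketch, assuming the DB Connect events land in a hypothetical index named dbx_submissions:

index=dbx_submissions
| chart sum(Number_of_Submissions) over Date by Group_Name

Save the search as a dashboard panel and set the visualization type to Line Chart; each Group_Name becomes its own colored line automatically.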
Hi Community, I have created a dashboard with two panels. The query used in both panels is the same, except that each panel runs over a different timeframe. The timeframe comes from a Time input for each panel, and the resulting token is applied to its panel (current time, compared time). I want a third panel that shows the difference between the output of the first two panels. Can someone guide me?
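One way is to let the third panel run both searches itself, reusing the same time tokens the first two panels consume; a minimal sketch, assuming the two Time inputs set tokens named cur and cmp and the shared query boils down to a count (index=my_app is a placeholder for it):

index=my_app earliest=$cur.earliest$ latest=$cur.latest$
| stats count as current
| appendcols [ search index=my_app earliest=$cmp.earliest$ latest=$cmp.latest$ | stats count as compared ]
| eval difference = current - compared

appendcols glues the second result set onto the first row by row, so eval can subtract the two counts within one panel.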
Hello everyone. I'm currently working on a lab assignment and I'm having trouble understanding the meaning of two specific fields in PowerShell log hunting. Could someone please explain these two fields to me? I would greatly appreciate it. Thank you.  
I want to know if there is any provision for NON-PROFIT organizations in cybersecurity to use Splunk as part of real-world lab training, related educational training, and on-the-job training. Our program is an apprenticeship certified by the DOL and approved to train IT Specialist I and Cybersecurity Defense Analyst roles. https://each1teach1.us Our challenge is getting all the tools needed to make our apprentices' time worth it.
I have the following search:

index=cisco sourcetype=cisco:wlc snmpTrapOID_0="CISCO-LWAPP-AP-MIB::ciscoLwappApRogueDetected"
| rename cLApName_0 as "HQ AP"
| dedup "HQ AP"
| stats list(*) as * by "_time"
| table _time, "HQ AP", RogueApMacAddress

Example results:

_time                HQ AP                            RogueApMacAddress
2023-10-05 12:56:41  flr1-ap-5198-AP05                6e:e8:e9:cd:40:10
2023-10-06 04:09:29  flr1-ap-51c4                     da:55:b8:8:db:b8
2023-10-06 08:42:14  flr1-ap-514E_AP07                84:fd:d1:fa:a7:3f
2023-10-06 08:53:12  flr1-ap-518C-B92                 0:25:0:ff:94:73
2023-10-06 09:20:22  flr2-ap-51CA                     28:24:ff:fd:a6:c0
2023-10-06 09:30:58  flr1-ap-51C2 flr2-ap-463C-AP02   32:b:61:48:a3:c3
2023-10-07 04:09:29  flr1-ap-444x-B11                 da:55:b8:8:db:b8
2023-10-07 08:53:12  flr1-ap-69x4                     0:25:0:ff:94:73

The search shows access points in our office that have detected unauthorized access points, and I have it look at the last 24 hours. I only want to filter for RogueApMacAddresses that have been present/detected for over 24 hours. In this example, two rogue MACs (da:55:b8:8:db:b8 and 0:25:0:ff:94:73, each detected on both the 6th and the 7th) have been there for over the last 24 hours. How can I alert on just those events and disregard the rest? Thanks for any help.
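Widening the search window and keying the stats on the rogue MAC makes the age directly testable; a minimal sketch:

index=cisco sourcetype=cisco:wlc snmpTrapOID_0="CISCO-LWAPP-AP-MIB::ciscoLwappApRogueDetected" earliest=-7d
| stats min(_time) as first_seen, max(_time) as last_seen, values(cLApName_0) as "HQ AP" by RogueApMacAddress
| where last_seen - first_seen > 86400
| convert ctime(first_seen) ctime(last_seen)

The where clause keeps only MAC addresses whose first and last detections are more than 24 hours (86400 seconds) apart; adjust earliest to however far back "present" should reach, and use the result to drive the alert.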
I am taking the free GDI training on Splunk Cloud observability. I installed an Ubuntu VM on my Windows laptop and everything went OK after the initial configuration; I saw my hostname and metrics once. That happened yesterday (10/05/2023) around 22:00 EST. This morning I am not seeing any active communication. I rebooted my VM and can see the process running in the VM, but I am not seeing any active charts in https://app.us1.signalfx.com/#/infra?endTime=now&startTime=-3h. Am I missing anything? How do I troubleshoot this communication issue?
Hi Splunkers, I have a problem with a blacklist filter. On a customer's UF, we filtered out some events by changing the inputs.conf file. The filters based on comma-separated lists, like Windows EventIDs, are working fine, while the one based on a regex is not. Of course, the first thing I did was check the regex syntax, and I can confirm it works: testing it on regex101, it matches exactly what I want. Tests have been run with different source logs to be sure it works properly. This is how we placed the regex on the UF:

[<stanza name>]
...other parameters...
blacklist = \]\sA\s+(.*)(microsoft|office|azure|o365|onenote|outlook|windowsupdate)(\(\d+\))(com|net|us)(\(\d+\))\s

This filter must be applied to logs coming from Windows DNS; its purpose is to avoid ingestion of legitimate domains, in all their combinations, but only if they have a "normal" form. In the regex you can see a filter on (<number>), because in the raw log domains appear in the format main_domain(<number>)root_domain, like microsoft(3)net. For example, microsoft(2)com and microsoft(3)net match the regex and should be filtered out, while microsoft(9)123(5)com does not and should be sent to Splunk. My assumption is that I missed some delimiter after the equals sign; I mean, should I put the regex between some kind of symbols? Something like

blacklist = '<regex code>'

or

blacklist = "<regex code>"

etc.
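For what it's worth, no quoting is needed: inputs.conf takes everything after the equals sign as literal text, and quote characters would just become part of the pattern. A likelier catch is the stanza type; a hedged sketch of the two common shapes (stanza names and paths here are illustrative, not from the original post):

# Windows Event Log input: event-content blacklists must use the key="regex"
# syntax, with the regex in double quotes and tied to a field such as Message;
# a bare regex in this stanza type is treated as an event ID list and matches nothing
[WinEventLog://DNS Server]
blacklist = Message="\]\sA\s+(.*)(microsoft|office|azure|o365|onenote|outlook|windowsupdate)(\(\d+\))(com|net|us)(\(\d+\))\s"

# File monitor input (e.g. the DNS debug log): blacklist matches the file PATH,
# never the event text, so an event-content regex here silently filters nothing;
# event-level filtering then belongs in props/transforms (nullQueue routing)
# on a heavy forwarder or the indexers
[monitor://C:\Windows\System32\dns\dns.log]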