All Topics

I have two server groups in Splunk (dmc_group_1 & dmc_group_2), and "dmc_group_1" is the default search group. When inspecting the DM acceleration scheduled searches, I found that they only run on peers of the default search group, and .tsidx files were generated only on that group's indexers, even though the search query contained "splunk_server=*". I don't know why Splunk does not honor "splunk_server=*" in this case.

INFO DistributedSearchResultCollectionManager - Default search group:dmc_group_1
INFO SearchTargeter - Peer NEW-IDX-1 is not in default group dmc_group_1. Not connecting
INFO DistributedSearchResultCollectionManager - Not connecting to peer 'NEW-IDX-1' because it has been optimized out.

How can the acceleration scheduled searches run on all search peers without adding "dmc_group_2" to the default group? Thank you.
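One possible direction (a sketch only, using the group names from the post; whether the acceleration searches can be edited this way depends on how the data model is defined): `splunk_server_group` can be specified more than once in a search to union groups, which targets both groups without changing the default:

```spl
index=_internal splunk_server_group=dmc_group_1 splunk_server_group=dmc_group_2
| stats count BY splunk_server
```

When both groups are named explicitly, the search should be dispatched to the peers of both, rather than only to the default group's peers.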
Hello, how can I find the duration, to check the actual active hours of a user for a particular day, if the VPN session was not disconnected for a long time? For eg. for one 
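One common approach (a sketch only; `index=vpn` and the `user` field are assumptions, since the post does not show the data) is to measure the span between a user's first and last VPN events within the day of interest:

```spl
index=vpn earliest=-1d@d latest=@d
| stats min(_time) AS session_start max(_time) AS session_end BY user
| eval active_hours = round((session_end - session_start) / 3600, 2)
```

Bounding the search with `earliest`/`latest` keeps the calculation within a single day even when the underlying session stays connected across days.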
Hi everyone, I am trying to search for a file path (e.g. C:\Finance\Salary) but the search returns no results. It works if I type C:\\Finance\\Salary directly, but it does not work if I pass it in via a variable, even though the variable shows the correct value.

index=wineventlog EventCode=4660 OR EventCode=4663 Account_Name!="ANONYMOUS LOGON" host="PTL*" Account_Name!="*$"
| eval FilePath=urldecode("C%3A%5CFinance%5CSalary")
| eval FilePath=replace(ObjectName,"\\\\","\\\\\\")
| search Object_Name=FilePath
| dedup _time host Account_Name Account_Domain Object_Name Accesses EventCodeDescription
| table _time host Account_Name Account_Domain Object_Name FilePath Accesses EventCodeDescription
| sort _time desc

The search above does show results if I replace the eval with:

| eval Object_Name="C:\\Finance\\Salary"
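One way to sidestep the escaping problem (a sketch; field names are taken from the post) is to compare field to field with `where` instead of `search`, since `| search Object_Name=FilePath` treats the right-hand side as a literal string, not as a field reference. Inside an `eval` string literal, `\\` produces a single backslash:

```spl
index=wineventlog (EventCode=4660 OR EventCode=4663)
| eval FilePath="C:\\Finance\\Salary"
| where like(Object_Name, FilePath . "%")
| table _time host Object_Name FilePath
```

For an exact (non-prefix) match, `| where Object_Name == FilePath` avoids pattern semantics entirely.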
Hello everyone, does anyone know what the above error message means when using the sendemail function? I've googled it but there were no results. Alert email works fine, but not sendemail. FYI, this error occurred on Splunk Cloud. I've seen a similar error (below) in Splunk on-prem (solved by adjusting some roles on the account), but this is a new error that I've encountered.

[map]: command="sendemail", 'rootCAPath' while sending mail to: xxx@gmail.com

It appears that the sendemail function is not configured or installed in this Splunk Cloud instance. Are there any steps that need to be configured by the admin (sendemail.py, etc.)? I suppose the configuration should be the same as in Splunk on-prem? If so, I would appreciate your advice so that I can let my admin know. Thank you.
I cannot determine the cause of an error I am getting when attempting to search for logs in Splunk Cloud. An example of the error:

2020-07-26 16:11:46,518 log_level=ERROR pid=11016 tid=MainThread file="ga.py" function="run" line_number="306" version="GSuiteForSplunk.v1.4.2.b310" {'timestamp': 'Sun, 26 Jul 2020 16:11:46 +0000', 'log_level': 'ERROR', 'msg': "'NoneType' object has no attribute 'tb_frame'", 'exception_type': "<class 'AttributeError'>", 'exception_arguments': "'NoneType' object has no attribute 'tb_frame'", 'filename': 'ga.py', 'exception_line': 131}

I have read through two relevant posts in this forum:
- https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-App-for-GSuite-error/m-p/388181
- https://community.splunk.com/t5/All-Apps-and-Add-ons/NoneType-object-has-no-attribute-tb-frame/m-p/385582#M46904

Both point me to an issue with installing the apps on an HFW or SH or IA. I am using Splunk Cloud, with an IDM, and no HFW. I am not aware that I need a heavy forwarder; maybe that's my mistake. The G Suite apps/add-ons were installed on the Search Head and the IDM, and I configured the inputs on the IDM only. Still, the error persists. It may also be worth noting that the errors I am seeing are indexed in the "main" index, not in the gsuite index I created.
Environment description:
- Splunk Cloud (with SH and IDM), version 8.0.2001.1
- Installed (by the Splunk Support Team) two apps on the IDM: GSuiteForSplunk Input Add-On (cannot tell which version; it was installed by the Splunk Support Team yesterday) and G Suite For Splunk (version 1.4.2.b310)
- Installed (by the Splunk Support Team) one app on the SH: G Suite For Splunk

Configured the GSuiteForSplunk Input Add-On on the IDM:
- Created a SuperAdmin on G Suite
- Using the SuperAdmin account created above, created a Project on Google APIs & Services (https://console.developers.google.com/)
- On said project, created an OAuth2 Client ID and authorized the Desktop App type with the scope (allowing Admin SDK, G Suite Alert Center, and Google Drive API)
- On the Splunk Cloud SH, created a new index named gsuite
- On the GSuiteForSplunk Input Add-On (on the Splunk Cloud IDM), entered the domain, Client ID, and Client Secret; authenticated against Google; retrieved the token; entered the token into the add-on; and saved
- On the add-on > Create New G Suite Input, I set the interval to 60 (seconds), the Index to gsuite, and None for Proxy Name. For Extra Configuration, I entered {} (per the note on the page to leave this variable as default)
- On the add-on > G Suite, I selected the checkboxes for Activity - All, Activity - Admin, Activity - Drive, and Activity - Login, and confirmed that the Index was set to gsuite
- On the add-on > Credentials, I see an entry named IA-GSuiteForSplunk with the username of the domain I entered above and a redacted password
- On the G Suite App for Splunk (on the Splunk Cloud SH) > Application Configuration, I see a Save button with the correct variable value for the index. I do not see any entries under the Credentials tab, but I don't think this is an issue.
If I search (on the SH) for logs, I see many errors much like the ones I posted above. Under Application Health on the SH, under Internal Log Errors, I see the same error multiple times (sourcetype="ga_modularinput", message="'NoneType' object has no attribute 'tb_frame'"). What am I doing wrong here? Thank you for your time.
While installing Splunk forwarder 8.0.5 using Ansible, it throws an error saying the URL is not correct. I am using a basic task to install the rpm:

- name: install splunk
  yum:
    name: "{{ splunk_fwd_url }}"
    state: present

The URL I am using is:

https://www.splunk.com/bin/splunk/DownloadActivityServlet?architecture=x86_64&platform=linux&version=8.0.5&product=universalforwarder&filename=splunkforwarder-8.0.5-a1a6394cc5ae-linux-2.6-x86_64.rpm

I am not sure whether this is the correct URL, because this used to work fine with the old version (7.0.0) but not with the latest version. Below is the Ansible error for reference:

fatal: [default]: FAILED! => {"changed": false, "msg": "Failed to get nevra information from RPM package: https://www.splunk.com/bin/splunk/DownloadActivityServlet?architecture=x86_64&platform=linux&version=8.0.5&product=universalforwarder&filename=splunkforwarder-8.0.5-a1a6394cc5ae-linux-2.6-x86_64.rpm"}
How many CPEs, and of what type, are required to maintain Splunk certifications? I have a Splunk User cert and need to know how to maintain it.
Greetings, I'm a student at the Hochschule Darmstadt in Germany. I'm currently working on a project for my university, where we're trying to find a suitable log management tool for our big data cluster. It would be very helpful if you could provide me with some information.

Equipment: we have 48 nodes:

28 x Dell PowerEdge C6220
- 2 Intel Xeon E5-2609 (4 cores each)
- 64 GB RAM
- 16 x 1 TB SATA 7.2k

20 x Dell PowerEdge C6320
- 2 Intel Xeon E5-2620v2 (6 cores each)
- 128 GB RAM
- 16 x 1 TB SATA 7.2k

The nodes are connected with a high-bandwidth, low-latency network. Every node currently generates 500 MB of logs daily, for a total of 24 GB of logs daily.

The criteria we're considering are as follows:
1. The log management tool should be able to process the generated logs within 10 seconds, generation ⇒ arrival; that is, from the log source through the universal forwarder until the logs are ready for search in Splunk.
2. Splunk UI interactions perform within 1 second.

We can use 24 nodes to scale Splunk and thereby accelerate processing. Can Splunk meet these criteria? Are there any calculations we can do on speed and performance, so that if the quantity of logs changes we can maintain a response time (τ) of 10 seconds?

Your help is very much appreciated.
Hi everybody, I would like to know whether we can judge the order of events in an index using some internal field in Splunk. Is there any way to uniquely identify an event in an index? Does Splunk have any internal field we can use to judge the order of an index? For example, in Oracle we can use the rowid to determine the unique number of a row. If you have any ideas or advice, I would appreciate your help.
Splunk Enterprise is installed on Linux, but while logging in I get the error "License Expired".
Hi, while connecting to the Splunk REST API with output_mode=csv, I get the exact result, but when I use output_mode=json, I see duplicate results. How can I correct this?
Hello, I understand that you can have two evals in one line, but I keep getting several errors when I try to combine the two. This eval goes into a calculated field to pull info from another field as well.

Attempted eval:

lower(if(isnull(action),attempted_action,action)), case(action == "OVERWRITTEN", "modified", action == "OPENED", "read", action == "DOES_NOT_EXIST", "deleted", action == "SUPERSEDED", "acl_modified", action == "CREATED", "created")

What am I trying to achieve? We want action to display the attempted_action field when the action field is blank. With the attempted_action values, there are about 7 values that will appear in the action field. I would like them to be renamed to match the CIM data models in some way.
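A minimal sketch of one way to combine the two expressions (field names and mappings are taken from the post). `coalesce()` substitutes attempted_action when action is null, and nesting everything into a single expression keeps it valid as one calculated field, since calculated fields cannot reference other calculated fields:

```spl
| eval action = lower(case(
    coalesce(action, attempted_action) == "OVERWRITTEN",    "modified",
    coalesce(action, attempted_action) == "OPENED",         "read",
    coalesce(action, attempted_action) == "DOES_NOT_EXIST", "deleted",
    coalesce(action, attempted_action) == "SUPERSEDED",     "acl_modified",
    coalesce(action, attempted_action) == "CREATED",        "created",
    true(),                                                 coalesce(action, attempted_action)))
```

The `true()` branch acts as a default so any of the ~7 unmapped values pass through (lowercased) instead of becoming null.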
How to hide this message in a dashboard?
I have a syslog server receiving data from devices outside of my network, and these are transmitted to my Splunk indexer using a universal forwarder. All my configuration to get the data into the indexer is working perfectly fine. I have configuration in inputs.conf on the forwarder to assign the host field from host_segment, since the syslog server stores logs in directories named after the source IP address of the message. How can I then change the host field at index time using a lookup table, so the events are stored with the hostname and not the IP address? For example, I have a .csv file that looks like this:

host,ip
host1,10.10.1.1
host2,10.10.1.2

So I can then perform the following search:

index=idx host=host1

The key point is, I want to store the event with the host field set to the hostname; I don't want to do an automatic search-time lookup.
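To my knowledge Splunk has no true index-time lookup, so the CSV cannot be consulted at ingest directly. One workaround (a sketch with hypothetical stanza names; index-time transforms must run on the indexer or a heavy forwarder, not a universal forwarder) is to generate one transform per CSV row that rewrites the host metadata key:

```
# props.conf -- sourcetype name is an assumption
[syslog]
TRANSFORMS-resolve_host = set_host_10_10_1_1, set_host_10_10_1_2

# transforms.conf -- one stanza per row of the CSV
[set_host_10_10_1_1]
SOURCE_KEY = MetaData:Host
REGEX = ^host::10\.10\.1\.1$
DEST_KEY = MetaData:Host
FORMAT = host::host1

[set_host_10_10_1_2]
SOURCE_KEY = MetaData:Host
REGEX = ^host::10\.10\.1\.2$
DEST_KEY = MetaData:Host
FORMAT = host::host2
```

For a large CSV this is tedious but scriptable; the alternative is to rename the directories (or events) on the syslog server before Splunk reads them.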
Hello all, I am using the Splunk Cloud HTTP Event Collector to send my application data from the Azure West US region to the Splunk West US region. However, we plan to move our apps to the Azure East US region shortly, and we find that data sent via HEC still goes to the Splunk West US region. Question: what settings need to change in Splunk so it is region-aware, e.g. so data from East/West is routed to its nearest available region? Thank you.
Hi everyone, this is the first time I've used Splunk. I have data like this:

ORDER_ID  PRICE  GROUP
00001     10     A
00002     20     B
00003     20     A
00004     15     B
00005     23.3   C

And I want to calculate the average price for each group, which returns a result like this:

GROUP  AVERAGE_PRICE
A      15
B      17.5
C      23.3

To do this average calculation, I know that I have to calculate the total number of ORDER_IDs and the sum of PRICE for each GROUP, but I don't know how to perform this calculation in SPL. Please kindly guide me through this. Regards.
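The `stats` command does this in one step (PRICE and GROUP are the field names from the table above; the leading `...` stands for whatever base search returns the orders):

```spl
... | stats avg(PRICE) AS AVERAGE_PRICE BY GROUP
```

Equivalently, spelling out the sum-and-count approach the post describes: `... | stats sum(PRICE) AS total count AS n BY GROUP | eval AVERAGE_PRICE = round(total / n, 2)`.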
While enabling indexer discovery with SSL, I am getting "Error initializing SSL context - check splunkd.log regarding configuration error for server for indexers x.x.x.x:9997". Could anyone help me understand what the issue might be? I checked the configurations and did not find anything wrong.
I have created reports based on the errors in the OS.

Saved reports:
Report_Name  --  Description
Network  --  Report for Network Errors
Storage  --  Report for Storage Errors
OS  --  Report for OS Errors

I would like to list the Report_Names that apply to my host. For example, my hostA has only OS and Storage errors, so the dashboard dropdown should list only OS and Storage when I select hostA from the other dropdown, and then print the results of the saved report(s) selected in the multiselect. I am able to list all the saved reports with the rest command, but I am not able to list only the corresponding reports. Suggestions would be appreciated.
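For the first half of this, listing saved reports with their descriptions can be done against the REST endpoint (a sketch; the report titles are taken from the post, and restricting the list to the reports relevant to a selected host would still require a separate search over which error types that host actually logs):

```spl
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="Network" OR title="Storage" OR title="OS"
| table title description
```

This could populate the dropdown; a token from the host dropdown would then need to feed a base search that filters the titles down to the error types present for that host.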
I am running a search against my Windows event logs; let's call it sourcetypeA. I need to use the IP address obtained from sourcetypeA to look up the host information from sourcetypeB. The end result needs to display the timestamp and other information from sourcetypeA, together with the host information from sourcetypeB. Subsearching so far has not resolved the problem. I merely need to use sourcetypeB as a sort of lookup table to plug in the host information found.
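A subsearch-free pattern that often works for this (a sketch; `src_ip`, `dest_ip`, and `host_name` are hypothetical field names, since the post does not show the actual fields) is to search both sourcetypes at once and let `eventstats` copy the host information across all events sharing the same IP:

```spl
(sourcetype=sourcetypeA) OR (sourcetype=sourcetypeB)
| eval ip = coalesce(src_ip, dest_ip)
| eventstats values(host_name) AS resolved_host BY ip
| where sourcetype == "sourcetypeA"
| table _time ip resolved_host
```

The final `where` keeps only the sourcetypeA rows, now enriched with `resolved_host` from sourcetypeB, which avoids subsearch result limits entirely.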
Team, we can see that Splunk agents are running on one of my AIX servers, but I am not able to see that server in the Splunk Monitoring Console.

Thanks,
Sanjay Singh