All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have asset management data from which I need to create weekly reports. When I query the data like below:

```
index=a sourcetype=b | stats values(ip_addr) as ip by hostname
```

Result:

```
hostname    ip
Host A      1) 10.0.0.0
            2) 10.10.10.1
            3) 10.0.0.2
Host B      1) 192.1.1.1
            2) 172.1.1.1
```

I want the result without the numbering in front of the IP addresses. Please assist with this. Thank you.
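Assuming the numbering ("1) ", "2) ", ...) is actually stored inside the field values (rather than added by the UI), the fix amounts to stripping a leading "digits, close-paren, space" prefix, e.g. with something like `eval ip=replace(ip, "^\d+\)\s*", "")` in SPL (wrapped in `mvmap` for multivalue fields). A minimal Python sketch of that same regex, using hypothetical sample values from the result above:

```python
import re

# Hypothetical values as they appear in the result table, where the
# numbering is part of the stored value.
ips = ["1) 10.0.0.0", "2) 10.10.10.1", "3) 10.0.0.2"]

# Strip a leading "<digits>) " prefix, keeping only the IP address.
cleaned = [re.sub(r"^\d+\)\s*", "", ip) for ip in ips]
print(cleaned)  # ['10.0.0.0', '10.10.10.1', '10.0.0.2']
```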
If the search result shows "past days count=0 and today count>0", then trigger another search to show the count>0 logs as _time, field1, _raw.
Hi, I am kind of new to Splunk and I'm having this trouble:

`A script exited abnormally with exit status: 1" input=".$SPLUNK_HOME/etc/apps/splunk-dashboard-studio/bin/save_image_and_icon_on_install.py"`

My deployment:

- 1 master server (Cluster Master, SHC Deployer, License Master)
- 3 search heads (clustered)
- 3 indexers (clustered)
- 1 heavy forwarder

I've run the command below, which I found on the web:

```
| rest /services/admin/inputstatus/ModularInputs:modular%20input%20commands splunk_server=local count=0
| append [| rest /services/admin/inputstatus/ExecProcessor:exec%20commands splunk_server=local count=0]
| fields inputs*
| transpose
| rex field=column "inputs(?<script>\S+)(?:\s\((?<stanza>[^\(]+)\))?\.(?<key>(exit status description)|(time closed)|(time opened))"
| eval value=coalesce('row 1', 'row 2'), stanza=coalesce(stanza, "default"), started=if(key=="time opened", value, started), stopped=if(key=="time closed", value, stopped)
| rex field=value "exited\s+with\s+code\s+(?<exit_status>\d+)"
| stats first(started) as started, first(stopped) as stopped, first(exit_status) as exit_status by script, stanza
| eval errmsg=case(exit_status=="0", null(), isnotnull(exit_status), "A script exited abnormally with exit status: "+exit_status, isnull(started) or isnotnull(stopped), "A script is in an unknown state"), ignore=if(`script_error_msg_ignore`, 1, 0)
```

and I got exit_status values of 1 and 114. How do I get rid of these errors? Thank you in advance.
Not really, not on a cluster map after geostats, because you can't split by City and IP. However, you can use the Maps+ app, which has far more options for customisation: https://splunkbase.splunk.com/app/3124 You wouldn't then use geostats, but `stats count by City, ip`, and in the Maps+ app you can configure all sorts of things, such as map layers, HTML tooltips and so on. The app has a number of good examples of how to use it.
Hi @linaaabad

The Splunk App for Salesforce is a search head app containing views and dashboards shared by a Splunk community member as a starting point for other users, like yourself, to get a head start at looking at and understanding the SF event data. Splunk would have no interest in providing a search head app for Salesforce, as they are not experts in the Salesforce data. Being limited to only Splunk-produced apps will only slow down any development in understanding the SF data.

Having said that, an app is just an archive file containing configuration in flat text files: *.conf files, and *.xml files for dashboards/views. You do not have to install the app to view these files; simply download the app, open the archive using your favoured utility (e.g. zip on Windows or tar on *nix) and look at these types of files under the default folder. If you are not very experienced with Splunk, however, it will be a confusing place to start.

Alternatively, if there is a test system where you could install the app, you could look at the configuration via the Web UI and copy what you want to your other system.

Hope that helps a little bit.
Is it possible to get the search strings or source code from the Splunk App for Salesforce? Does anyone have the app and can provide the source code/searches? We installed the Splunk Add-on for Salesforce, which doesn't have any dashboards, and we cannot install the Splunk App for Salesforce because it's not supported by Splunk. Suggestions? Help, please!
@ankitarath2011,

Apologies for the delay, as I have been out of the office. The issue you are reporting is very different from what is discussed in the current post and needs a new thread. Could you rephrase your full question in a new post and tag me in it? I tried to start a new one for you that we could continue on, but I'm not certain of the full context of your issue and question.

Regarding increasing maxBundleSize: it is normally better practice to manage bundle sizes using the [replicationWhitelist] or [replicationBlacklist] stanzas in distsearch.conf. Raising bundle size limits or bundle replication timeouts can cause bundles to take longer to reach your indexers. By default, search heads use knowledge bundles to send nearly the entire contents of all of their apps to the indexers. If an app contains large binaries that do not need to be shared with the indexers, reduce the size of the bundle by whitelisting or blacklisting particular files or types of files. See: Splunk Documentation: Limit the knowledge bundle size

Also, as an aside, in case it is helpful: Admins Little Helper for Splunk can be used to view bundle contents (and computed/expected contents).

I will be out this coming week also, but will check periodically for your new post. I will not respond on this thread. Thank you,
I would write a script that converts the CSV into K=V format and run it as a scripted input.
Hi, thank you for your answer. I want to identify active use of the old SMBv1 protocol, because, as you may know, SMBv1 is not secure at all. So we want to scan all the servers in AD for event ID 3000 and sort them by the number of events matching event code 3000 that occurred on each of them.

Regards
Hi all, I have CSV files (exports from the Garmin R10 launch monitor session data via the Garmin Golf app) that contain two header lines: the first header line contains the field names and the second contains the unit of measurement (or blank if not applicable). For example:

```
Date,Player,Club Name,Club Type,Club Speed,Attack Angle,...
,,,,[mph],[deg],...
09/10/23 10:00:45 AM,Johan,7 Iron,7 Iron,70.30108663634557,-7.360383987426758,...
```

Now, I would like to index the data in one of two ways:

1. Append the unit of measurement to the value, so that Club Speed would become "70.30108663634557mph"
2. Add an additional column containing the unit of measurement, e.g. a column "Club Speed UOM" with the value mph for every line indexed from the CSV file, and do this for every column that has a valid unit of measurement

For me, option 2 would be the preferred one. A third option would be to skip the unit-of-measurement line altogether, but I would rather not use that.

I would appreciate any help that points me in the right direction to solve this challenge.

Thanks in advance.
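One way to get option 2 is to preprocess the file before Splunk sees it. A sketch assuming the two-header layout shown above, with units wrapped in square brackets (the "UOM" column-name suffix is my own choice):

```python
import csv
import io

def add_uom_columns(text):
    """Merge a two-header-line CSV (field names, then units) into a
    single-header CSV, appending a '<field> UOM' column for each field
    that has a unit of measurement."""
    rows = list(csv.reader(io.StringIO(text)))
    names, units = rows[0], rows[1]
    # Units appear as "[mph]"; strip the brackets, blank means no unit.
    units = [u.strip("[]") for u in units]
    with_units = [i for i, u in enumerate(units) if u]
    header = list(names) + ["%s UOM" % names[i] for i in with_units]
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(header)
    for row in rows[2:]:
        writer.writerow(row + [units[i] for i in with_units])
    return out.getvalue()
```

For example, `add_uom_columns("Date,Club Speed\n,[mph]\n09/10/23,70.3\n")` yields a single-header CSV whose rows end with a "Club Speed UOM" column containing `mph`.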
Hi @michael_vi

For monitoring forwarders, no configuration is required on the forwarder side. In the Monitoring Console: MC > Settings > Forwarder monitoring setup > Forwarder monitoring > Enable, and save. After some time, MC > Forwarders > Forwarders: Deployment shows the forwarders list and their health.

P.S. The deployment server cannot monitor forwarder health.
Hi, I didn't find detailed info on how to connect Universal Forwarders to the Monitoring Console. In our organization there is no deployment server, but we do want to monitor Splunk UF/HF with the Monitoring Console, so the info can be seen under MC > Forwarders > Forwarders: Deployment. What are the steps on the UF side to configure this? Thanks
Hi @darphboubou,

You have to do two things:

- exactly identify and list in a document what you need to display, e.g. stats for users, a table displaying a list of fields (e.g. timestamp, user, host, ip, etc.)
- create some searches to implement your requirements.

The most difficult part is the first (usually a job in Splunk requires 70% knowledge of the target technology and 30% Splunk knowledge). For the Splunk knowledge, I suggest following the Splunk Search Tutorial ( http://docs.splunk.com/Documentation/Splunk/latest/SearchTutorial/WelcometotheSearchTutorial ), which teaches you how to search in Splunk.

So, please, describe your use cases.

Ciao.
Giuseppe
Hi,

We are wondering how to monitor SMBv1 access in a domain. We have already enabled event code 3000 logging in the Windows log. Now we want to know who uses SMBv1 to access each host. To start, we use this search:

```
index=windows EventCode=3000 source="WinEventLog:Microsoft-Windows-SMBServer/Audit"
```

But now we want to display in a table/stats, for each host, the computers/users that access it.

Could you help us, please?
I'm having the same issue as the original poster. I tried your index=_internal host=___ search; I typed in my agent1, agent2 and agent3 along with the controller each time, and data popped up for all four of them. But when I run `index="main" host=* | table host | dedup host` it does not show anything at all. Can you help me troubleshoot this?
I am learning Splunk for the first time in my course. I had a task of setting up 4 VMs through VMware Workstation: 1 controller running CentOS with a GUI, and 3 agents running CentOS CLI. I went through the configuration of the VMs and they all ping each other fine. I installed Splunk on the 4 VMs over SSH using MobaXterm. After opening port 9997 on the controller and saving it, I configured each agent to forward to the controller's port. At the last part of my lab I had to run the search `index="main" host=* | table host | dedup host`, which returned no results. I was told that if nothing popped up I should troubleshoot by rebooting my VM and my host system, but that didn't fix it. I would love some insight.
Just because you installed two components doesn't mean they know how to talk to each other.

1. What version of Splunk did you install? (Splunk Free, or Splunk Enterprise with a proper commercial or trial license?)
2. Did you configure the UF on/after installation in any way?
I believe it's the https://docs.splunk.com/Documentation/DBX/3.14.1/DeployDBX/Prerequisites#Configure_Java_Runtime_Environment_.28JRE.29_for_Splunk_DB_Connect step. (True, the docs could say how to do it without the GUI; it might be worth posting a docs feedback - bottom of the webpage)
Hi @Cranie,

If your events contain one of the two fields RunID or ControllingRunID, you can use the solution from @yuanliu, although you could simplify your token search:

```
| inputlookup errorLogs WHERE (RunStartTimeStamp == "2023-01-26-15.47.24.000000" AND HostName == "myhost.com" AND JobName == "runJob1" AND InvocationId == "daily")
| eval RunID = coalesce(RunID, ControllingRunID)
| stats values(RunID) as RunID
```

If instead the same event could contain both fields, you should use a more structured search. In the token:

```
| inputlookup errorLogs WHERE (RunStartTimeStamp == "2023-01-26-15.47.24.000000" AND HostName == "myhost.com" AND JobName == "runJob1" AND InvocationId == "daily")
| rename RunID AS token
| fields token
| append [
  | inputlookup errorLogs WHERE (RunStartTimeStamp == "2023-01-26-15.47.24.000000" AND HostName == "myhost.com" AND JobName == "runJob1" AND InvocationId == "daily")
  | rename ControllingRunID AS token
  | fields token ]
| dedup token
| fields token
```

and in the search:

```
<your_search> (ControllingRunID="$token$" OR RunID="$token$")
```

Ciao.
Giuseppe
Hi @Aus01,

The usual issues in these situations are the following:

- Did you enable receiving on the Splunk Enterprise VM [ Settings > Forwarding and Receiving > Receiving ]?
- Did you configure your Universal Forwarder to send logs to the Splunk Enterprise VM?
- Did you disable the local firewall on both machines?

Ciao.
Giuseppe