All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I have the following table:

column1    column2
Andrew     Andrew
George     George
Paris      Berlin

I would like to get the following as output:

column1    column2
Paris      Berlin

The tables come from use of the | table command: | table column1, column2. Is there any way this can be done? I tried | table column1, column2 | where NOT match(column1, column2), but no results are found.
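A sketch of one possible approach, assuming the goal is simply to keep the rows where the two column values differ (note that match() expects a regular expression as its second argument, so passing a field of literal values can behave unexpectedly):

```
| table column1, column2
| where column1 != column2
```

A plain inequality comparison avoids regex interpretation of the second field's value entirely.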
Hello, I've checked many of the Answers pages, but to no avail. In my table, the value appears to be converted from a string to a number; however, in the Interesting Fields, it still appears as alphanumeric. Here is one line of the event containing the data I want to convert from string to number:

SAG.TXT | 100 B | 6.5 KB/s | ascii | 100%

I am not concerned with the 1st, 4th, or 5th value in this event, only "100 B" and "6.5 KB/s". These have been extracted with regexes in props.conf:

file_size=100 B
file_transfer_rate=6.5 KB/s

Here is my SPL:

host=host1
| rex field=file_size "(?<fileSize>\d*)\s"
| rex field=file_transfer_rate "(?<fileTransRate>\d*\.\d{1,6})"
| eval fs1=trim(fileSize)
| eval fs2=tonumber(trim(fileSize))
| convert rmunit(file_size) AS fs3
| table file_size, fileSize, fs1, fs2, fs3

Here are the results: file_size is a string, fileSize appears to be a number, fs1 appears to be a number, fs2 is blank, and fs3 appears to be a number. However, in the Interesting Fields, all of them are alphanumeric, and fs2 is not present. Thanks in advance for any guidance. God bless, Genesius

UPDATE: I have not worked with multivalue fields before, and I didn't know that certain commands will not work on multivalue fields. Apologies for not mentioning this in the original post. Any ideas given this new information?
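Given the update about multivalue fields, one sketch of a workaround is to take the first value explicitly before converting; this assumes file_size and file_transfer_rate may be multivalue and that the numeric part always precedes a space:

```
| eval fileSizeNum = tonumber(mvindex(split(mvindex(file_size, 0), " "), 0))
| eval fileTransRateNum = tonumber(mvindex(split(mvindex(file_transfer_rate, 0), " "), 0))
```

mvindex(field, 0) collapses a multivalue field to its first value, after which split() and tonumber() operate on a plain string.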
Hello, I configured an indexer to send data to a heavy forwarder. I am able to telnet from the indexer to the heavy forwarder on port XXXX without any issue. The following is my outputs.conf on the indexer:

[tcpout]
defaultGroup = dev_null
indexAndForward = true
forwardedindex.filter.disable = true

[tcpout:server_output]
dropEventsOnQueueFull = 10
maxQueueSize = 50MB
server = xyz.com:zzzz
sendCookedData = false
clientCert = XXXXX
useACK = false
sslPassword = YYYYY

As specified above, xyz.com is the heavy forwarder to which I am trying to forward data from the indexer. The following is my inputs.conf on the heavy forwarder (xyz.com):

[splunktcp]
route=has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:typingQueue;absent_key:_linebreaker:parsingQueue

[splunktcp-ssl:XXXX]
queueSize = 100MB

[SSL]
serverCert = XXXXXXX
requireClientCert = true
sslPassword = XXXXX

Now I am seeing the following ERROR in the heavy forwarder's splunkd.log:

ERROR TcpInputProc - Message rejected. Received unexpected message of size=842019376 bytes from src=X.X.X.X:YYYY in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.

Any idea how this can be resolved so that data transfers from the indexer to the heavy forwarder? Also, the data I'm trying to transfer from the indexer is not that large, so the high-volume error message doesn't make sense.
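An absurd "size" in that error is often a symptom of a protocol mismatch: the receiver is listening for one framing (here, SSL on [splunktcp-ssl]) while the sender speaks another, and the first bytes of the unexpected payload get misread as a length header. A sketch of matching stanzas, assuming both sides must agree on SSL and on the port (9997 is a placeholder standing in for the redacted port):

```
# outputs.conf on the indexer (sketch; 9997 and cert paths are placeholders)
[tcpout:server_output]
server = xyz.com:9997
clientCert = <path to client certificate>
sslPassword = <certificate password>
useACK = false

# inputs.conf on the heavy forwarder
[splunktcp-ssl:9997]

[SSL]
serverCert = <path to server certificate>
sslPassword = <certificate password>
requireClientCert = true
```

The key point of the sketch is symmetry: the port in outputs.conf `server` must be the same port named in the [splunktcp-ssl:<port>] stanza, and both sides must be configured for SSL.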
I created an alert that outputs multiple application names when the alert query conditions are met. I want to receive a separate alert for each application and throttle each one for an hour. I tried using $result.application$ as the "Suppress results containing field value" input, but that prevented any alerts from coming in after the first one fired. Is there any way to throttle alerts per specific value without having to manually type in each one, as there are hundreds? Thanks
We are seeing spikes in concurrent search jobs. How can I list all the scheduled searches running at a given moment?
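One possible starting point, assuming the internal scheduler logs are searchable, is to query the _internal index for scheduler activity over the window of interest (a sketch, not a definitive concurrency audit):

```
index=_internal sourcetype=scheduler status=*
| table _time, savedsearch_name, app, user, status, run_time
| sort - _time
```

Narrowing the time picker to the moment of the spike shows which scheduled searches ran (or were skipped or deferred) at that time.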
I am trying to pull specific fields out of a database table and list the output with a timestamp (LastContact), ManagerName (hostname), OSType, etc. This is the query I'm using:

SELECT [Hostname], [ManagerName], [OSType], [LastContact] FROM [SCSPDB].[dbo].[ASSET_VW] WHERE LastContact < dateadd(day,-1,getdate());

It works fine in SQL Explorer but not in DBX (it errors out with the "index 1 out of range" message). Do I need to change my query from using a rising column (LastContact) to something else?
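If the input is configured as a rising-column input, DB Connect substitutes the stored checkpoint value into a ? placeholder at run time; an "index 1 out of range" error typically means the query has no placeholder to bind the checkpoint into. A sketch of the rising-column form, assuming LastContact remains the rising column:

```sql
SELECT [Hostname], [ManagerName], [OSType], [LastContact]
FROM [SCSPDB].[dbo].[ASSET_VW]
WHERE [LastContact] > ?
ORDER BY [LastContact] ASC
```

The fixed dateadd() filter from the original query belongs in a batch-mode input instead; a rising-column input wants the ? comparison and an ascending ORDER BY on the checkpoint column.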
Good afternoon. I am trying to perform an audit of the lookups in our environment, and I need to know if there is any query that allows me to validate whether a given knowledge object is being used or accessed. Any information is appreciated. Best regards
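One way to approximate usage, assuming audit logging is enabled, is to search the _audit index for search strings that reference the lookup by name (a sketch; my_lookup is a placeholder for the real lookup name):

```
index=_audit action=search info=granted
| search search="*my_lookup*"
| stats count by user, search
```

This only catches explicit references in ad hoc and scheduled searches; automatic lookups applied via props.conf would not appear in the search string and need to be audited from the configuration side.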
Does anyone have some SPL they could share that identifies SSO users that use NTLM to authenticate? Any help with this is greatly appreciated.
Good afternoon, everyone! We seem to be encountering a discrepancy with our IPLocation database. We're running Splunk 7.3.3 and recently updated the GeoLite lookup in /opt/splunk/share. We noticed a discrepancy, however, between what MaxMind's mmdb file returns and what other online IP lookups show. The IP in question shows as a London/UK-based IP in online engines (including MaxMind's own lookup), but Splunk's iplocation function labels it as an Italy-based address with corresponding lat/lon values. I'm at a loss as to what could be causing this apparent mismatch. Any insight or experience with this problem is appreciated!
Can anyone explain, or point to documentation that explains, the ui_dispatch_app vs. ui_dispatch_view settings in savedsearches.conf? I have a search that ultimately ends with the line:

| outputlookup createinapp=true create_empty=false lookup_filename.csv

The saved search is contained in Application_A. The savedsearches.conf also adds the settings:

request.ui_dispatch_app = Application_A
request.ui_dispatch_view = search

I'm not sure why the developer chose 'ui_dispatch_view = search', but the lookup file (lookup_filename.csv) seems to randomly end up in either the Application_A/lookups or search/lookups directory. I thought at first that this was the difference between the job being scheduled and the job being run by hand, but maybe that's just a mirage. The .spec file says this:

request.ui_dispatch_app = <string>
* Specifies a field used by Splunk UI to denote the app that this search should be dispatched in.
* Default: empty string

request.ui_dispatch_view = <string>
* Specifies a field used by Splunk UI to denote the view this search should be displayed in.
* Default: empty string

I guess I don't understand the difference between an 'app' and a 'view', and neither of those would seem to relate to an outputlookup command. I'd appreciate the help.
How do I integrate AppDynamics with ServiceNow? I want to create a policy, or use an API, so that when there is an event in AppDynamics, an incident is automatically created in ServiceNow.
I want to replace a dynamic string in an event. Example:

error occurred from the server ABCXYZ12345ABCXYZ under lenderprice hop...

Here "ABCXYZ12345ABCXYZ" is a dynamic value, so I want to replace this string with XZXYYZZ.
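If the text surrounding the dynamic value is stable, one sketch is a sed-mode rex that matches on that context; this assumes the value is always a single non-space token between "the server" and "under":

```
| rex mode=sed field=_raw "s/the server \S+ under/the server XZXYYZZ under/"
```

Anchoring on the fixed words before and after the token is what lets the substitution work no matter what the dynamic value is.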
I am seeing multiple crashes on my Linux VMs (SUSE 12) that run the Splunk service in a cluster; mainly the indexers and search heads are crashing. The crash logs under Splunk show segmentation faults, and in /var/log/messages we also see crash entries with a segmentation fault. What can be monitored at the server OS level to identify the root cause, i.e., which resources should be monitored to triage these crashes?
I got an alert that Splunk is not running. I tried to restart it using systemctl restart SplunkForwarder:

● SplunkForwarder.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
   Loaded: loaded (/etc/systemd/system/SplunkForwarder.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Mon 2020-02-24 07:25:40 MST; 1 day 1h ago
  Process: 344227 ExecStartPost=/bin/bash -c chown -R 2080:2080 /sys/fs/cgroup/memory/system.slice/%n (code=exited, status=
  Process: 344225 ExecStartPost=/bin/bash -c chown -R 2080:2080 /sys/fs/cgroup/cpu/system.slice/%n (code=exited, status=0/S
  Process: 344224 ExecStart=/opt/splunkforwarder/bin/splunk _internal_launch_under_systemd (code=exited, status=203/EXEC)
 Main PID: 344224 (code=exited, status=203/EXEC)

Feb 24 07:25:40 pplx2dbadm05.adt.com systemd[1]: Failed to start Systemd service file for Splunk, generated by 'splunk enab
Feb 24 07:25:40 pplx2dbadm05.adt.com systemd[1]: Unit SplunkForwarder.service entered failed state.
Feb 24 07:25:40 pplx2dbadm05.adt.com systemd[1]: SplunkForwarder.service failed.
Feb 24 07:25:40 pplx2dbadm05.adt.com systemd[1]: SplunkForwarder.service holdoff time over, scheduling restart.
Feb 24 07:25:40 pplx2dbadm05.adt.com systemd[1]: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-
Feb 24 07:25:40 pplx2dbadm05.adt.com systemd[1]: start request repeated too quickly for SplunkForwarder.service
Feb 24 07:25:40 pplx2dbadm05.adt.com systemd[1]: Failed to start Systemd service file for Splunk, generated by 'splunk enab
Feb 24 07:25:40 pplx2dbadm05.adt.com systemd[1]: Unit SplunkForwarder.service entered failed state.
Feb 24 07:25:40 pplx2dbadm05.adt.com systemd[1]: SplunkForwarder.service failed.
Hello, I'm new to the Splunk admin role and have a distributed environment. I have a single search head in a cluster that keeps locking an account out. We use LDAP to authenticate into Splunk, and this search head keeps attempting to log a user in even when they are not attempting to log in. Any idea how to fix this?
Hi, I have an inputs.conf that seems to be ignoring the host entries that I've entered. Am I missing something?

[monitor:///data/syslog-ng/logs/unknown/.../*.log]
host_segment = 5
disabled = false
index = unknown
sourcetype = unknown_syslog
blacklist = (1.2.3.4|4.5.6.7|6.7.8.9|127.0.0.1)
Hi, I have an SPL search that produces counts for two values of my monitored application's transactions: Successful and Failure. I need a conditional alert that triggers every time the Failure count exceeds the Successful count by at least 10 within 5 minutes. My search runs every 5 minutes, looking back over the last 5 minutes of data. I need a custom trigger condition under the alert's Edit page, please.

Sample (normal):
result       count
Successful   100
Failure      10

Sample (alert):
result       count
Successful   100
Failure      111

Thanks in advance!
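One sketch, assuming the underlying events carry a result field with values Successful and Failure: pivot both counts onto a single row, then keep the row only when the difference is at least 10. The alert's trigger condition can then simply be "number of results is greater than 0".

```
... | stats count(eval(result="Successful")) AS successful,
          count(eval(result="Failure"))    AS failure
| where failure - successful >= 10
```

With this shape, no custom eval is needed in the trigger condition itself; the search returns a row only in the alerting case.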
I'm trying to use eval to calculate another field, using something simple:

newfield = eval if(like(http_request,"%"+site+"%"),1,0)

This works fine from the search command line, but it does not evaluate when I set it up as an automatic calculated field; it's as if the "site" field has not yet been defined, and I cannot find where that is configured. Any ideas?
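One thing worth checking: calculated fields defined in props.conf are all evaluated in parallel from the originally extracted fields, so one calculated field cannot reference another calculated field. If "site" is itself a calculated field (or comes from a lookup applied later in the sequence), that would explain the behavior. A sketch of the props.conf form, assuming site is a plain extracted field (the sourcetype name is a placeholder):

```
# props.conf (sketch)
[my_sourcetype]
EVAL-newfield = if(like(http_request, "%".site."%"), 1, 0)
```

If site is derived rather than extracted, the expression that produces it would need to be inlined into the EVAL- definition instead.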
Hello team, we have old Splunk servers on version 6.3.2, and our organization has built brand-new servers with Splunk 7.2.6 installed. I want to move all the contents (indexes, data models, LDAP settings, apps, and all the data) to the new servers. Could you please let me know the step-by-step process, and kindly provide the user guide for this requirement? Many thanks in advance.
I am pulling two fields from a CSV based off of a field in live logs, then combining them into one field with a constant string in between them. What I have tried thus far:

| eval field3=field1." - ".field2
| eval field3=field1 + " - " + field2
| eval field3=if(field1="", field1." - ".field2, "didnt work")
| eval field3=if(field1="", field1 + " - " + field2, "didnt work")
| eval field3=if(NOT (field1=""), field1." - ".field2, "didnt work")
| eval field3=if(NOT (field1=""), field1 + " - " + field2, "didnt work")

None of these work, even with a fillnull before them.
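A sketch of a more defensive version, assuming the problem is that one of the fields is null (in SPL, concatenating a null yields null, and fillnull with no field list only fills fields already present in the results):

```
| fillnull value="" field1 field2
| eval field3 = coalesce(field1, "") . " - " . coalesce(field2, "")
```

Naming the fields explicitly in fillnull, or wrapping each operand in coalesce(), guarantees both sides of the concatenation are non-null strings; "." (not "+") is the string concatenation operator in eval.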