All Topics


https://www.appdynamics.com/partners/technology-partners/google-cloud-platform Hi, from the link above it is not clear to me whether this is general information or whether I can really monitor cloud-native applications, especially with end-user monitoring. Can you please point me to how to configure this, as well as any related documentation? Good day.
Hi, I am trying to collect Windows logs from DCs and send them to both a Splunk indexer and a third-party system (Snare Central). I managed to send the logs using a syslog configuration, but somehow the logs are getting broken. I want my log format to be "snare over syslog". Please suggest. UF => HF => Snare Central
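A possible starting point, assuming a heavy forwarder in the path (syslog routing needs parsed data); the destination host/port, sourcetype, and output-group names below are placeholders, and note that Splunk does not itself rewrite events into the Snare format:

outputs.conf on the heavy forwarder:

[syslog:snare_central]
server = snare.example.com:514
type = udp

props.conf:

[WinEventLog:Security]
TRANSFORMS-routeToSnare = send_to_snare

transforms.conf:

[send_to_snare]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = snare_central

If the events must arrive in the exact "snare over syslog" layout, that reformatting typically has to happen before Splunk (e.g., by a Snare agent), since the syslog output only wraps the raw event in a syslog header.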
Generally, an indexer is used to store indexes, but in a standalone architecture how is the data stored?
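For what it's worth, a standalone instance stores data the same way a dedicated indexer does: events are written into hot/warm/cold buckets on local disk, per index. A minimal illustration using the shipped default paths for the main index (shown for illustration only):

[main]
homePath   = $SPLUNK_DB/defaultdb/db
coldPath   = $SPLUNK_DB/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb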
Is it currently possible to do multiclass classification with any of the algorithms in the MLTK? I have investigated the RandomForestClassifier algorithm, which has multiclass functionality, but looking at the parameters available in the Splunk MLTK documentation (RandomForestClassifier) I do not see any of the multiclass parameters available (specifically n_classes_) - see also sklearn - RandomForestClassifier.
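For reference, MLTK's fit command generally handles a multiclass target transparently when the target field has more than two distinct values; the sklearn attribute n_classes_ is simply not surfaced as a parameter. A minimal sketch, assuming the iris.csv sample lookup bundled with MLTK and its usual field names:

| inputlookup iris.csv
| fit RandomForestClassifier species from petal_length petal_width sepal_length sepal_width into iris_rf_model
| apply iris_rf_model

Here species has three classes, and the fitted model predicts all three.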
Hi Team, in the tiers and nodes view of AppDynamics I found that JVM heap, max heap, JVM CPU burnt, and GC time spent are all showing zero. When I checked, the app and machine agent statuses both show up and running. I found the errors below in the app agent logs. Can someone suggest how to resolve this issue?

[AD Thread-Metric Reporter1] 07 Jan 2022 03:12:27,211 ERROR AgentKernel - Error executing task -
[AD Thread Pool-Global0] 07 Jan 2022 03:12:52,221 ERROR AgentKernel - Error executing task -
[AD Thread Pool-Global1] 07 Jan 2022 03:12:52,221 ERROR AgentKernel - Error executing task -
[AD Thread Pool-Global0] 07 Jan 2022 03:13:22,227 ERROR AgentKernel - Error executing task -
[AD Thread Pool-Global0] 07 Jan 2022 03:13:22,227 ERROR AgentKernel - Error executing task -
[AD Thread-Metric Reporter0] 07 Jan 2022 03:13:27,212 ERROR JVMMetricReporter - Error getting thread count
[AD Thread-Metric Reporter0] 07 Jan 2022 03:13:27,212 WARN JVMMetricReporter - Error updating JVM JMX values
[AD Thread-Metric Reporter0] 07 Jan 2022 03:13:27,212 ERROR AgentKernel - Error executing task -
[AD Thread Pool-Global1] 07 Jan 2022 03:13:52,223 ERROR AgentKernel - Error executing task -
[AD Thread Pool-Global0] 07 Jan 2022 03:13:52,223 ERROR AgentKernel - Error executing task -

Regards, Charan
Dear Splunk Community, every 5 minutes one of the following events is generated:

2022-01-05 21:20:33 : Running
OR
2022-01-05 20:19:33 : Failed

I would like to display a timeline with two (2) lines showing when the system is running and when it fails. I have gotten this far:

running OR failed | eval status = if(like(_raw, "%Running%"), "Running", "Not running") | table status

I need some guidance in this matter. How do I change the above search so that I get a line chart visualization with the two lines in it? Thanks in advance.
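One way to get a line per status, keeping the existing eval, is to swap the table for a timechart and pick the Line Chart visualization; a minimal sketch:

running OR failed
| eval status = if(like(_raw, "%Running%"), "Running", "Not running")
| timechart span=5m count by status

With span=5m matching the event interval, each status value becomes its own series, and therefore its own line.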
Hello all, I am trying to extract a field from the event below using the extraction below; however, the extraction fails to capture the complete value and breaks in the middle. Can you please help resolve this?

212685,00004107,00000000,2404,"20220106111738","20220106111739",4,-1,-1,"SYSTEM","","psd240",327312673,"MS932","Server ジョブ(Server:/情報提供基盤/EXA-X6系/汎用集計・フロア取込系/ユニット別フロアマスタテンポラリネット/ユニット別フロアマスタテンポラリ作成:@52X6013)が異常終了しました(status: a, code: 100, host: PSC642, JOBID: 281767)","Error","jp1admin","/HITACHI/JP1/AJS2","JOB","AJSROOT1:/情報提供基盤/EXA-X6系/汎用集計・フロア取込系/ユニット別フロアマスタテンポラリネット/ユニット別フロアマスタテンポラリ作成","JOBNET","Server:/情報提供基盤/EXA-X6系/汎用集計・フロア取込系/ユニット別フロアマスタテンポラリネット","Server:/情報提供基盤/EXA-X6系/汎用集計・フロア取込系/ユニット別フロアマスタテンポラリネット/ユニット別フロアマスタテンポラリ作成","END","20220106111731","20220106111738","100",25,"A0","AJSROOT1:/情報提供基盤/EXA-X6系/汎用集計・フロア取込系","A1","ユニット別フロアマスタテンポラリネット","A2","ユニット別フロアマスタテンポラリ作成","A3","@52X6013","ACTION_VERSION","0600","B0","n","B1","1","B2","jp1admin","B3","psd240","B4","a","C0","PSC642","C1","","C2","281767","C3","PSC642","C4","0","C5","0","C6","r","E0","1641435451","E1","1641435458","E2","0","E3","0","H2","578828","H3","pj","H4","q","PLATFORM","NT",

Extraction used: (?:[^,]+,){14}(?<alert_description>[^,]+)

Please help extract the highlighted field. Thank you.
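The extraction likely breaks because the fifteenth field is quoted and contains commas, so [^,]+ stops at the first comma inside the message. A sketch that matches the quoted value instead (same skip-14 prefix; verify against your data):

(?:[^,]+,){14}"(?<alert_description>[^"]+)"

For a quick test in search:

... | rex "(?:[^,]+,){14}\"(?<alert_description>[^\"]+)\""

This captures everything up to the closing quote, commas included.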
Hi All, I have a query that returns a list of filesystems and their respective disk usage details, as below:

File_System   Total in GB   Used in GB   Available in GB   Disk_Usage in %
/var          10            9.2          0.8               92
/opt          10            8.1          1.9               81
/logs         10            8.7          1.3               87
/apps         10            8.4          1.6               84
/pcvs         10            9.4          0.6               94

I need to create a multiselect option over the disk usage values so the table can be filtered to a range of values. For example, if I select 80 in the multiselect, it should show rows with disk usage in the range 76-80; if I select 80 and 90, it should show rows in the ranges 76-80 and 86-90, and so on. I created the multiselect with the token "DU" and this search query for the table:

.... | where ((Disk_Usage<=$DU$ AND Disk_Usage>($DU$-5)) OR (Disk_Usage<=$DU$ AND Disk_Usage>($DU$-5))) | table File_System,Total,Used,Available,Disk_Usage | rename Total as "Total in GB" Used as "Used in GB" Available as "Available in GB" Disk_Usage as "Disk_Usage in %"

With the above query I get results when I run a search with two different values (e.g. 100 and 65) substituted for $DU$ in (Disk_Usage<=$DU$ AND Disk_Usage>($DU$-5)). But the dashboard table does not work when multiple values are selected in the multiselect. Please help me with the delimiter to be added, or help me build a query, so that selecting multiple options in the multiselect gives the table for the corresponding disk usage ranges.
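One workaround, sketched below, is to bucket Disk_Usage into 5-point bands with eval and let the multiselect match bands with IN, which sidesteps repeating the token inside arithmetic; it assumes the multiselect delimiter is set to "," and the choices are the band upper bounds (80, 85, 90, ...):

....
| eval DU_band = ceiling(Disk_Usage/5)*5
| search DU_band IN ($DU$)
| table File_System,Total,Used,Available,Disk_Usage
| rename Total as "Total in GB" Used as "Used in GB" Available as "Available in GB" Disk_Usage as "Disk_Usage in %"

For example, Disk_Usage values 76-80 all map to DU_band=80, so selecting 80 and 90 yields exactly the 76-80 and 86-90 rows.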
Hi, I am having difficulty understanding what exactly DEST_KEY and FORMAT do in stanza 1, and FORMAT in stanza 2, on my host. I have read the documentation but... Thanks in advance.

[rfc5424_host]
DEST_KEY = MetaData:Host
REGEX = <\d+>\d{1}\s{1}\S+\s{1}(\S+)
FORMAT = host::$1

[host_as_src]
SOURCE_KEY = host
REGEX = (.+)
FORMAT = src::"$1"
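As a rough annotated reading of those stanzas (the comments are my interpretation, not authoritative): SOURCE_KEY is where the REGEX reads from, DEST_KEY is the pipeline key the result is written to, and FORMAT is the value that gets written, with $1 referring to the capture group.

[rfc5424_host]
# reads _raw by default, capturing the RFC 5424 HOSTNAME token
REGEX = <\d+>\d{1}\s{1}\S+\s{1}(\S+)
# writes the result into the event's host metadata
DEST_KEY = MetaData:Host
# metadata values take the "host::<value>" form
FORMAT = host::$1

[host_as_src]
# reads the (already rewritten) host field instead of _raw
SOURCE_KEY = host
REGEX = (.+)
# name::"value" is the indexed-field form; to actually create the
# indexed field src, the stanza would normally also set WRITE_META = true
FORMAT = src::"$1"

So stanza 1 overwrites the event's host, and stanza 2 copies that host into an indexed field named src.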
I'm trying to use the Missile Map visualization; however, every time I use the Custom Cluster Map visualization in a dashboard together with the Missile Map visualization, the label shows an error. How can I solve this error? Thanks!
Splunk cannot load old data, only current data, though it still shows the event count. Before this, I had moved some Splunk cold db folders several times to free up space, and it worked fine. I don't understand what has happened now. Is there any way to recover the data without Splunk search? Installed on Windows.
I have an index=weblogs where I filter results and then use rex to extract an IP address into a new field called RemoteIP. I then want to search our firewall logs in index=firewall for that newly extracted field RemoteIP. I have been playing around with subsearches and joins but am not getting far.
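A subsearch that feeds the extracted IPs into the firewall search is usually the simplest route; a sketch, where the rex pattern, the filters, and the firewall field name src_ip are placeholders for your own:

index=firewall
    [ search index=weblogs <your filters>
      | rex "(?<RemoteIP>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
      | dedup RemoteIP
      | rename RemoteIP AS src_ip
      | fields src_ip ]

The subsearch expands to (src_ip="1.2.3.4" OR src_ip="5.6.7.8" ...), so the rename must target whatever field name the firewall sourcetype actually uses for the IP.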
Hi all, I want to know how Splunk extracts fields from Splunk_TA_windows inputs when mode=multikv. The _raw event does not seem to have any sort of field indicator (compared to events from TA_nix, which have headers). As an example, Splunk_TA_windows/local/inputs.conf:

[perfmon://Network-Bytes]
disabled = false
counters = Bytes Total/sec; Bytes Received/sec; Bytes Sent/sec;
interval = 60
mode = multikv
index = perfmon
useEnglishOnly = true
object = Network Interface
sourcetype = PerfmonMk:Network

gives _raw events as indexed in Splunk:

vmxnet3_Ethernet_Adapter 19069.926362422757 11044.290764991998 8025.635597430761
vmxnet3_Ethernet_Adapter 26173.569591676503 15701.614528029395 10471.95506364711
vmxnet3_Ethernet_Adapter 28654.246470518276 17482.977608482255 11171.268862036022

From this output, Splunk somehow extracts fields like:

Bytes_Received/sec
Bytes_Sent/sec
Bytes_Total/sec
instance
category
collection

I checked the Splunk_TA_windows configs and ran btool, but could not trace any configs other than some standard PerfmonMk:<object> stanzas in Splunk_TA_windows/default/props.conf, which contain only FIELDALIAS settings. What am I missing? How does Splunk know which field is which? How does it even get values for category and collection when those values are not present in _raw?

For comparison, the TA_nix add-on does this in a much more legible manner (which can easily be understood and experimented with), like:

Name rxPackets_PS txPackets_PS rxKB_PS txKB_PS
eth0 1024.00 1972.50 1415.04 674.94

Additionally: I want to convert the PerfmonMk events to metrics; has anyone attempted that?
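On the multikv question: the usual explanation is that the PerfmonMk modular input sends the field names and values out-of-band as index-time (indexed) fields, which is why nothing shows up in _raw or in search-time props; treat that as an educated guess rather than documented fact. On converting to metrics, one search-time option is mcollect; a sketch assuming a pre-created metrics index named perfmon_metrics and an illustrative metric name:

index=perfmon sourcetype=PerfmonMk:Network
| eval metric_name="network.bytes_received_ps", _value='Bytes_Received/sec'
| fields metric_name _value instance
| mcollect index=perfmon_metrics

mcollect needs metric_name and _value on each event; remaining fields (here instance) are carried along as dimensions.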
I have a Splunk dashboard. It has a text field named "Error msg" and a time picker (image: "Dashboard items"). If the text field "Error msg" is empty, I am able to display all the logs within the given time frame. Query:

index=AppIndex cf_app_name=AppName msg!="*Hikari*" taskExecutor- | fields _time msg | sort -_time | table _time msg

Now, if I enter a log message in the text field "Error msg", my goal is, for the given time frame, to:
1. Search all the occurrences of this log message.
2. Get the latest occurrence.
3. In the output table, print the logs right before the last occurrence of the msg.

This way, the user can trace the error msg and look at the logs (right before the error in the text field) to find what caused the error to happen. Any suggestions on how this can be done via a query?
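One way to pin the table to the last occurrence is to let a subsearch return the time of the latest match as an upper time bound; a sketch, assuming the text field sets a token named errmsg:

index=AppIndex cf_app_name=AppName msg!="*Hikari*" taskExecutor-
    [ search index=AppIndex cf_app_name=AppName msg="*$errmsg$*"
      | head 1
      | eval latest=_time
      | return latest ]
| fields _time msg
| sort -_time
| table _time msg

Because the subsearch returns latest=<epoch>, the outer search is bounded to events up to the last occurrence; add something like | head 50 to show only the logs immediately preceding it.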
When I try to create a Shared Services server (for a development environment), it prompts me for the password for the "user" account. I have tried a variety of things: using the default password that comes with SOAR, and adding a user called "user" and trying that password. None of it works, and after 5 attempts it ruins the installation and I have to scuttle the VM and start over. Has anyone run into this issue?
Hi, I cannot find the documentation that explains the various statuses in scheduler.log. For example, here are a few: continued, delegated_remote, delegated_remote_completion, delegated_remote_error, skipped, success. Does anyone have a reference? Thank you!
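No official reference comes to mind, but while you look you can at least enumerate the statuses that actually occur in your own environment; a minimal sketch:

index=_internal sourcetype=scheduler
| stats count by status
| sort -count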
I'm new and a novice to Splunk, although I have installed, set up, and played with searches in Splunk in a lab. My question: if I have servers sending logs from different "environments" (prod, test, dev), what is the best way to organize the incoming logs by environment? I see I can use tags and/or indexes, but which approach makes more sense?
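For what it's worth, separate indexes per environment are often preferred because they allow different retention policies and role-based access per environment, with tags layered on top for search convenience. A sketch of the forwarder side, with illustrative index names that would need to be created first:

# inputs.conf on a prod server
[monitor:///var/log/myapp]
index = app_prod
sourcetype = myapp

# inputs.conf on a dev server
[monitor:///var/log/myapp]
index = app_dev
sourcetype = myapp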
I want to search like: index=whatever "term_1" AND (at least one event in the source of the found record contains term_2).

Suppose source1 is /var/log/source1.log:

event 1
event 2 term_2
event 3
event 4 term_1

and source2 is /var/log/source2.log:

event 1
event 2
event 3 term_1

When searching for term_1, I want to see results only from source1, because source1 also has an event containing term_2.
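A subsearch that returns the sources containing term_2 can constrain the outer search; a minimal sketch:

index=whatever "term_1"
    [ search index=whatever "term_2"
      | stats count by source
      | fields source ]

The subsearch expands to (source="/var/log/source1.log" OR ...), so only term_1 hits from sources that also contain term_2 are returned.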
This is the basic case: I have an event

2021-12-28T06:24:17.567|SEARCHING|{"field1":"value1","field2":5,"field3":"la la la"}

My search:

index="redact" SEARCHING | spath path="field3"

Splunk is separating the values, but the field3 column is empty for all events. Can anyone please assist?
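spath reads _raw by default, and here _raw starts with a timestamp and |SEARCHING| before the JSON, so the event as a whole is not valid JSON. Extracting the JSON portion first and pointing spath at it should help; a sketch:

index="redact" SEARCHING
| rex "SEARCHING\|(?<json>\{.+\})$"
| spath input=json path=field3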
Hello, I've got a search query where I'm looking for unexpected ssh connections to my instances, but I've got one server whose IP address changes dynamically, and I want to exclude that host's IP address because I know there will be expected ssh connections from it. I'm running a subsearch that looks at AWS description logs, grabs the IP of the box based on its name, and returns the IP address, in the hope that I can use it in my main search. So far it's not working how I expect and I'm not sure why. I would expect not to see entries for hostnameA with usernameA coming from the source IP that my subsearch returns, but my results still include those entries. Here's my search so far:

index=X sourcetype=linux_secure eventtype=sshd_authentication action=success | eval exclude_host_ip=[ search index=X sourcetype=aws:description source=*:ec2_instances (tags.host=* OR tags.Name=*) earliest=-24h latest=now | eval hostName=coalesce('tags.host', 'tags.Name') | search hostName=dynamic_ip_hostname | sort - _time | dedup private_ip_address | eval ip="\"".private_ip_address."\"" | return $ip] | search NOT (host=hostnameA AND user=usernameA AND user_src_ip=exclude_host_ip) | table _time, user, host, user_src_ip | sort - _time | dedup _time user host user_src_ip | rename _time as Time, user as "Username", host as "Host", user_src_ip as "Source IP" | convert timeformat="%m-%d-%Y %H:%M:%S" ctime(Time)
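One likely culprit: in | search NOT (... user_src_ip=exclude_host_ip), the right-hand side is treated as a literal string, not as a field reference, so the exclusion never matches. A sketch that instead feeds the subsearch straight into the NOT clause (the subsearch expands to user_src_ip="x.x.x.x"):

index=X sourcetype=linux_secure eventtype=sshd_authentication action=success NOT (host=hostnameA AND user=usernameA AND
    [ search index=X sourcetype=aws:description source=*:ec2_instances (tags.host=* OR tags.Name=*) earliest=-24h latest=now
      | eval hostName=coalesce('tags.host', 'tags.Name')
      | search hostName=dynamic_ip_hostname
      | sort - _time
      | dedup private_ip_address
      | rename private_ip_address AS user_src_ip
      | fields user_src_ip ])
| table _time, user, host, user_src_ip
| sort - _time
| dedup _time user host user_src_ip
| rename _time as Time, user as "Username", host as "Host", user_src_ip as "Source IP"
| convert timeformat="%m-%d-%Y %H:%M:%S" ctime(Time)

Alternatively, keep the eval but compare fields with | where user_src_ip != exclude_host_ip, since where (unlike search) treats the right-hand side as a field.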