All Topics


I want to configure an alert that triggers if any unique value of field A occurs 5 or more times in the last 10 minutes. I was able to configure that part, but along with field A I also want the associated values of field B shown in an inline table in the alert. Can someone suggest how to do this?
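A sketch of one possible approach in SPL, assuming the fields are literally named A and B and the index name is a placeholder:

```
index=your_index earliest=-10m
| stats count as occurrences, values(B) as B_values by A
| where occurrences >= 5
```

values(B) collects the distinct B values associated with each A, so the alert's result table carries both fields together.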
I have a large query which works great to search CloudTrail logs for Security Group changes. Different events, however, place the notable fields in different paths, e.g.:

requestParameters.groupId
requestElements.groupId
requestParameters.UpdateSecurityGroupRuleDescriptionsIngressRequest.GroupId

In this instance, I do a series of cascading evals where I set my 'GroupID' to each path, then do another eval that starts with 'if(isnull(GroupID)' to determine whether the previous path was empty. It all works (worked) great. But now I've found another event type where the notable fields are stashed under a much longer path, e.g. requestParameters.UpdateSecurityGroupRuleDescriptionsIngressRequest.GroupId. There are actually 3 paths, for GroupId, CIDR and Description. The lookup for the GroupId works, but the lookups for CIDR and Description do not. I'm doing everything the same; it just doesn't work. Are the paths too long? Here's an example of the CIDR query:

| spath "requestParameters"
| eval CIDR = 'requestParameters.cidrIp'
| eval CIDR = if( isnull(CIDR), 'requestParameters.ipPermissions.items{}.cidrIp', CIDR )
| spath "requestParameters.UpdateSecurityGroupRuleDescriptionsIngressRequest.IpPermissions{}.IpRanges.CidrIp"
| eval CIDR = if( isnull(CIDR), 'requestParameters.UpdateSecurityGroupRuleDescriptionsIngressRequest.IpPermissions.items{}.IpRanges.CidrIp', CIDR )

P.S. I don't think that second 'spath' command is needed (is it?). I only threw it in when I started having trouble getting it to work.
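A possible simplification (a sketch, not tested against these events): coalesce() returns its first non-null argument, which collapses the cascading evals into one and can make it easier to spot which path is actually missing:

```
| spath "requestParameters"
| eval CIDR = coalesce(
    'requestParameters.cidrIp',
    'requestParameters.ipPermissions.items{}.cidrIp',
    'requestParameters.UpdateSecurityGroupRuleDescriptionsIngressRequest.IpPermissions.items{}.IpRanges.CidrIp')
```

The extra spath should not be needed as long as the field name quoted in the eval exactly matches the path Splunk auto-extracts.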
Basically, I am looking for a way to easily disable/enable a few selected alerts before any maintenance activity. Is this possible?
I am trying to set up a daily Splunk dashboard email in HTML format. The problem is that my dashboard has multiple tabs, so the email always contains only the table from the first tab. I am looking for a way to send an HTML email with all the tables the dashboard contains. @niketn @lguinn2
Hello, I have this query that needs to be dynamically adjusted for time duration. The results are written every 5 minutes, so in a 24-hour period there will be 288 results written. I am dividing by 288 in the query below to calculate the percentage rate:

earliest=-24h index=error_log
| eventstats count as fcount by "Properties.QueryName"
| eval percent = round((fcount/288)*100,2)
| stats values(percent) as Failure_Percentage by "Properties.QueryName"

If I change this query to pass a start time and an end time, it needs to calculate the duration, divide by 5 minutes to get the expected number of data points, and use that to calculate the percentage rate. How can the query be modified to do this, assuming data is expected every 5 minutes?
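One way to make the divisor follow the time range (a sketch, assuming data really arrives every 300 seconds): addinfo attaches the search boundaries info_min_time and info_max_time to every result, so the expected count can be computed instead of hard-coding 288:

```
index=error_log
| eventstats count as fcount by "Properties.QueryName"
| addinfo
| eval expected = round((info_max_time - info_min_time) / 300)
| eval percent = round((fcount / expected) * 100, 2)
| stats values(percent) as Failure_Percentage by "Properties.QueryName"
```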
Tried a couple of functions, nothing easy. Example:

(index=XXX) AND event="XXXXXX"
| eval tim = strftime(_time,"%m/%d/%Y")
| eventstats max(tim) as maxDate
| stats count by dvchost, maxDate

I need to figure out how to find the most recent records. The code above does not work, and I've looked at other ways to do it with no luck. Help!
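A sketch of one common pattern for "most recent record per host", reusing the field names from the post: let stats find the latest timestamp per dvchost, then format it for display:

```
(index=XXX) AND event="XXXXXX"
| stats count latest(_time) as maxTime by dvchost
| eval maxDate = strftime(maxTime, "%m/%d/%Y")
| fields dvchost maxDate count
```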
Receiving the errors below when starting Splunk. Windows Server 2012R2, Splunk 7.3.3.

Checking configuration...
Error while parsing 'C:\Program Files\Splunk\etc\datetime.xml': 'module' object has no attribute 'parse'
Error while parsing 'C:\Program Files\Splunk\etc\searchLanguage.xml': 'module' object has no attribute 'parse'
Error while parsing 'C:\Program Files\Splunk\etc\modules\input\exec\config.xml': 'module' object has no attribute 'parse'
Error while parsing 'C:\Program Files\Splunk\etc\modules\input\fschangemanager\config.xml': 'module' object has no attribute 'parse'
Error while parsing 'C:\Program Files\Splunk\etc\modules\input\RemoteQueue\config.xml': 'module' object has no attribute 'parse'
Error while parsing 'C:\Program Files\Splunk\etc\modules\input\stashparsing\config.xml': 'module' object has no attribute 'parse'
Error while parsing 'C:\Program Files\Splunk\etc\modules\input\structuredparsing\config.xml': 'module' object has no attribute 'parse'
Error while parsing 'C:\Program Files\Splunk\etc\modules\input\tailfile\config.xml': 'module' object has no attribute 'parse'
Error while parsing 'C:\Program Files\Splunk\etc\modules\input\TCP\config.xml': 'module' object has no attribute 'parse'
Error while parsing 'C:\Program Files\Splunk\etc\modules\input\UDP\config.xml': 'module' object has no attribute 'parse'
Error while parsing 'C:\Program Files\Splunk\etc\modules\input\wineventlog\config.xml': 'module' object has no attribute 'parse'
Error while parsing 'C:\Program Files\Splunk\etc\modules\input\winparsing\config.xml': 'module' object has no attribute 'parse'
Error while parsing 'C:\Program Files\Splunk\etc\modules\internal\scheduler\config.xml': 'module' object has no attribute 'parse'
Error while parsing 'C:\Program Files\Splunk\etc\modules\parsing\config.xml': 'module' object has no attribute 'parse'
Error while parsing 'C:\Program Files\Splunk\etc\myinstall\splunkd.xml': 'module' object has no attribute 'parse'
Error while parsing 'C:\Program Files\Splunk\etc\system\default\data\ui\views\job_management.xml': 'module' object has no attribute 'parse'
Error while parsing 'C:\Program Files\Splunk\etc\system\default\data\ui\views\_admin.xml': 'module' object has no attribute 'parse'
Error while parsing 'C:\Program Files\Splunk\etc\system\static\cliDirectory.xml': 'module' object has no attribute 'parse'
Error while parsing 'C:\Program Files\Splunk\etc\system\static\cliMaster.xml': 'module' object has no attribute 'parse'
Error while parsing 'C:\Program Files\Splunk\etc\system\static\splunkrc_cmds.xml': 'module' object has no attribute 'parse'
There were problems with the configuration files. Would you like to ignore these errors? [y/n]:y
I have a search using stats count, but it is not showing a result for an index that has 0 events. There are two columns, one for Log Source and one for the count. I'd like to show the count for EACH index, even if there are 0 results. Example:

Log Source   Count
A            20
B            10
C            0

index=A OR index=B OR index=C
| eval "Log Source"=case(index == "A", "indexA", index == "B", "indexB", index == "C", "indexC")
| stats count by "Log Source"
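A sketch of one way to force a zero row for every expected index (label names taken from the post): append placeholder rows with count=0, then take the max count per label so real counts win over the placeholders:

```
index=A OR index=B OR index=C
| eval "Log Source"=case(index=="A","indexA", index=="B","indexB", index=="C","indexC")
| stats count by "Log Source"
| append [| makeresults count=3
    | streamstats count as n
    | eval "Log Source"=case(n==1,"indexA", n==2,"indexB", n==3,"indexC"), count=0
    | fields "Log Source" count]
| stats max(count) as count by "Log Source"
```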
Hi fellow Splunkers! I'm trying to figure out how to customize the subtitle of a dashboard (bold the font or change the font size, for example). I'm currently using a hidden HTML style panel within my XML, so if possible I'd like to continue with that method rather than converting the whole dashboard to HTML.

<row>
  <panel>
    <title>Quick Stats</title>
    <single>
      <title>User Counts</title>
      <search>
        ......
      </search>
    </single>
  </panel>
</row>

The "User Counts" title is the one I am interested in adding specific styles to. Here is an example of how I am currently customizing some of the elements of my dashboard:

...
<row>
  <panel>
    <html depends="$hiddenForCSS$">
      <style>
        .dashboard-row {
          padding-bottom: 5px !important;
          padding-top: 5px !important;
        }
        .dashboard-panel h2 {
          background: #65A637 !important;
          color: white !important;
          text-align: center !important;
          font-weight: bold !important;
          border-top-right-radius: 15px !important;
          border-top-left-radius: 15px !important;
        }
      </style>
    </html>
  </panel>
</row>
...

I'm mainly trying to identify the correct way to reference the subtitle, but if anyone has general tips on the best way to work out how to reference the various dashboard elements, that would be super helpful as well.
In my lookup table, I have the days of the week as columns with "Y" or "N" values (I'm not able to change this, as this is how the data is provided). I would like to return only the rows from the lookup table where the column matching the current day of the week contains "Y". I have been looking at using IF statements and WHERE clauses, but I'm not really getting it. The data columns look like this:

Mon Tue Wed Thu Fri Sat Sun
Y   N   Y   N   N   N   N
N   Y   N   Y   N   Y   N

I have been extracting the day using dayOfWeek=strftime(_time, "%a") from the embedded search query:

| inputlookup somecsv.csv
| join email [search index=someindex | eval dayOfWeek=strftime(_time, "%a")]
| table or stats (data from inputlookup that matches the days of the week from the search)

This is where I get stuck. Any help will be greatly appreciated.
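A sketch of one approach that avoids the join, assuming the lookup is keyed on email: look up the day columns onto each event, pick the flag for the event's own weekday with case(), and filter on it:

```
index=someindex
| eval dayOfWeek = strftime(_time, "%a")
| lookup somecsv.csv email OUTPUT Mon Tue Wed Thu Fri Sat Sun
| eval todayFlag = case(dayOfWeek=="Mon", Mon, dayOfWeek=="Tue", Tue,
    dayOfWeek=="Wed", Wed, dayOfWeek=="Thu", Thu, dayOfWeek=="Fri", Fri,
    dayOfWeek=="Sat", Sat, dayOfWeek=="Sun", Sun)
| where todayFlag == "Y"
```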
Hi Team, I have one requirement: I have multiple URLs; some contain IDs and some don't. Examples:

https://opu/api/processs/3fe13d52-d326-15a1-acef-ed3395edd973/registry (with ID)
https://POI/api/processors/022adcc6-8001-3d7a-b291-3d0831458357 (with ID)
https://uyt/api/flow/config (without ID)

There are many more URLs like this. The ID pattern in the URLs looks like: 05ee3b30-d5e1-1977-9aa9-61c458568edb

So I have made a regex like this:

^.*([A_Za-z0-9]{8}[\-][A_Za-z0-9]{4}[\-][A_Za-z0-9]{4}[\-][A_Za-z0-9]{4}[\-][A_Za-z0-9]{12})$

Can someone help me complete the regex? How can I extract just the IDs into one column using it?
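A sketch, assuming the URL sits in a field named url (a placeholder): the sample IDs are hex, so [0-9a-fA-F] is enough (note that [A_Za-z0-9] in the post also matches underscores), and dropping the ^...$ anchors lets rex pull the ID out of the middle of a URL. Rows without an ID simply leave the new column empty:

```
| rex field=url "(?<id>[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12})"
| table url id
```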
Hey Guys, I have a best-practice question. I am currently in the process of making 20 or so dashboards. Eventually, I will be putting them inside a Splunk app. Is that a foreseeable issue, or can I continue making the individual dashboards and, once I'm done, turn them all into one application? Thank you, Marco
Hi Guys, I was hoping you could help me. I am using Splunk to analyze some logs that I got from a company, but I don't know how to interpret them. The files I am trying to analyze are in XML, JMX, and .log formats. The logs contain real-time information about the company's servers. For example, how can I find errors in these logs? Another thing I can't explain is why some logs have one event while others have more. Thank you in advance!
Hello Splunkers, I need help with the scenario below. I need to form a query from an XML log producing output in this format:

TransactionID   LineNumber   Fulfiller
123             1            abc
124             1            xyz
125             1            def
                2            xyz
126             1            abc
                2            def
                3            xyz

In my XML logs, sometimes only one LineNumber is mentioned with its corresponding Fulfiller. However, some log events have multiple LineNumbers with corresponding Fulfillers for the same TransactionID. I have used regex to extract TransactionID, LineNumber, and Fulfiller. I want the result in the above format. Hope I have explained my scenario clearly.
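A sketch, assuming TransactionID, LineNumber, and Fulfiller are already extracted (the index and sourcetype names are placeholders): stats list() keeps duplicates and ordering, producing one row per transaction with multivalue line/fulfiller columns like the table above:

```
index=your_index sourcetype=your_xml_sourcetype
| stats list(LineNumber) as LineNumber, list(Fulfiller) as Fulfiller by TransactionID
```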
Hi All, I'm wondering if anybody has come across the situation below. We have a process that the AppDynamics machine agent (and the process monitor extension) cannot report on: the process does not show in the Processes tab of the machine agent, and it does not appear in the top-10 list of processes consuming CPU/memory. This causes issues when we see spikes in CPU/memory at particular times, because with the process not reporting it is hard to identify what the cause was. Has anybody had a situation like this with their systems? If so, did you have to get help from the software provider to identify the issue? How did the issue get resolved for you?

Background / Investigation:
1. The process does not appear in the Processes tab.
2. The process runs elevated, and the account on Windows 10 has permissions to the folder where the process is located.
3. Running the machine agent under an administrator profile does not identify it either.

Machine agent log file examples:
1) The "org.hyperic.sigar.SigarException: Incorrect function" exception is reported when the native sigar library is not able to inspect the process.
2) The command line isn't identifying where the process exists.

Example of a process identified with a command-line entry:
"pid=1064, ppid=740, commandLine=C:\appdynamics\machineagent\bin\MachineAgentService.exe, name=MachineAgentService, owner=Administrator"

Example of a process not being identified, with no command-line entry:
"pid=3324, ppid=740, commandLine=, name="

PPID 740 for both instances is services.msc.
We have only one matching log event in Splunk, but the user is receiving two alerts at a time with the same search ID.
I've a situation where I need to install two Splunk Universal Forwarders on one server. (It's not possible to reuse the existing UF, since it is owned by our vendor and we have no control over it via their deployment server.) On Linux I'm following the steps below for the 2nd UF:

1. Unzip the tar package to a different location.
2. Change the management port using web.conf.
3. Change the server name in splunk-launch.conf.

Am I missing any other steps? Do I need to perform the third step to change the service name, or is it unnecessary? If I'm installing two UFs on Windows, would I need to perform this third step there as well? Is installing two Splunk Universal Forwarders on a Linux host officially supported by Splunk, and if yes, is there any reference? I know it isn't supported on Windows. If I use any alternatives to the UF (e.g. rsyslog or WMI), I lose reliability, hence the above approach.
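For the port and name changes in steps 2 and 3, the relevant settings look roughly like this (a sketch; the port number, path, and server name are placeholders, and the service-name setting matters mainly on Windows):

```
# <second_UF_home>/etc/system/local/web.conf -- move the management port off the default 8089
[settings]
mgmtHostPort = 127.0.0.1:8090

# <second_UF_home>/etc/splunk-launch.conf -- give the second instance a distinct service name (Windows)
SPLUNK_SERVER_NAME = SplunkForwarder2
```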
Hi, I have the following JSON which I send in through HEC:

{
  "message": {
    "metadata": {
      "id": "https://...",
      "uri": "https://...",
      "type": "com...."
    },
    "messageGuid": "AF8aCGJx-9ZI-JGyvFTGoSufbXlA",
    "correlationId": "AF8aCGI8ISFZGiG8eh9NAegmK2q5",
    "logStart": "2020-07-23T22:00:02.4",
    "logEnd": "2020-07-23T22:00:10.866",
    "integrationFlowName": "Sample_Flow",
    "status": "DONE",
    "alternateWebLink": "https://...",
    "logLevel": "INFO",
    "customStatus": "DONE",
    "transactionId": "afdfb636cbce4dd0b537b6623954a490"
  }
}

I log it with the Splunk logging library (appender com.splunk.logging.HttpEventCollectorLogbackAppender) with a defined sourcetype. I need the _time attribute of the event in Splunk to be set from the value of the JSON field "logStart". For this purpose I configured TIMESTAMP_FIELDS and TIME_FORMAT in the sourcetype, hoping that Splunk would set the _time value based on those settings. As a result I get the following JSON in Splunk:

{
  "severity": "INFO",
  "logger": "SplunkLogger",
  "time": "1595593644.384",
  "thread": "http-nio-8080-exec-1",
  "message": {
    ... same payload as above ...
  }
}

And the _time value was set from the epoch time generated by the Splunk appender (the current log time). I didn't find any way to influence the generation of the "time" field in the Splunk logging library: https://github.com/splunk/splunk-library-javalogging

How can I make Splunk set the _time value from the specific JSON field "logStart"? Thanks a lot. Best regards
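For reference, props-based timestamp extraction on a sourcetype looks roughly like this (a sketch; the sourcetype name is a placeholder):

```
# props.conf on the indexer / HEC-receiving instance
[my_hec_sourcetype]
TIME_PREFIX = "logStart"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%Q
MAX_TIMESTAMP_LOOKAHEAD = 40
```

Note, however, that events sent to HEC's /event endpoint can carry an explicit time field in the envelope (which this logging appender populates), and that value takes precedence over props-based extraction; timestamp extraction from the event body generally applies to data that arrives without it, e.g. via the /raw endpoint. So the likely options are to send via /raw, or to set the envelope time on the client side from logStart.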
Hello, I'm looking into refreshing the hardware for my Splunk server and was wondering:

1) Does Splunk support AMD EPYC CPUs?
2) Is there any performance difference between Intel and AMD chips (i.e. is Intel preferred or better optimized for)?
Quite a few "SAI not showing entities" articles exist, but none seem to fix my problem. Extreme Splunk newbie here.

We have one Splunk Enterprise server called cotsplunk with the Splunk App for Infrastructure and the Splunk Add-on for Infrastructure installed. We've got logs from various forwarders pushing to 9997 with no issues. These are all in-house Windows machines. I really want to see what SAI is all about, so I did Add Data, followed the Windows tab, added an entry to monitor E:\Bitbucket\log\*, took the generated script, and ran it on the server I'm gathering metrics from (cotuuwork). Still no entities showing.

Someone mentioned running:

| mcatalog values("entitytype") as "entitytype" values("os") as "os" WHERE metricname=processor.* AND index=em_metrics BY "host"

but that shows 0 results over All Time.

On the server I'm trying to collect metrics on (cotuuwork), some pertinent file info.

D:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf:

# *** Configure Metrics & Logs collected ***
[perfmon://CPU]
counters = % C1 Time;% C2 Time;% Idle Time;% Processor Time;% User Time;% Privileged Time;% Reserved Time;% Interrupt Time
instances = *
interval = 60
object = Processor
mode = single
useEnglishOnly = true
sourcetype = PerfmonMetrics:CPU
index = em_metrics
_meta = os::"Microsoft Windows Server 2012 R2 Standard" os_version::6.3.9600 ip::"<redacted>" entity_type::Windows_Host

[monitor://$SPLUNK_HOME\var\log\splunk\*.log*]
sourcetype = uf
disabled = false
index = _internal

[WinEventLog://Application]
checkpointInterval = 10
current_only = 0
disabled = 0
start_from = oldest

[WinEventLog://Security]
checkpointInterval = 10
current_only = 0
disabled = 0
start_from = oldest

[WinEventLog://System]
checkpointInterval = 10
current_only = 0
disabled = 0
start_from = oldest

[WinEventLog://Forwarded Events]
checkpointInterval = 10
current_only = 0
disabled = 0
start_from = oldest

[WinEventLog://Setup]
checkpointInterval = 10
current_only = 0
disabled = 0
start_from = oldest

[monitor://E:\Bitbucket\log]
sourcetype =
disabled = false
index = bitbucket

D:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\outputs.conf:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = cotsplunk:9997

But then I also see a D:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf. And another article mentioned I need to have a D:\Program Files\SplunkUniversalForwarder\etc\apps\splunk_app_infra_uf_config\inputs.conf.

I understand that local\inputs.conf holds customizations on top of default\inputs.conf, but could someone please advise on etc\apps\SplunkUniversalForwarder vs etc\system vs etc\apps\splunk_app_infra_uf_config, or what I can do to troubleshoot further? Thanks